Abstract:
Three refractory coarse-grained CAIs from the Efremovka CV3 chondrite, one of which (E65) was previously shown to have formed with live Ca-41, were studied by ion microprobe for their Al-26-Mg-26 and Be-10-B-10 systematics in order to better understand the origin of Be-10. The high-precision Al-Mg data and the inferred Al-26/Al-27 values attest that the precursors of the three CAIs evolved in the solar nebula over a period of a few hundred thousand years before the last melting-crystallization events. The initial Be-10/Be-9 ratios and delta B-10 values defined by the Be-10 isochrons for the three Efremovka CAIs are similar within errors. Taken together with published data, the CAI Be-10 abundances underscore the large range of initial Be-10/Be-9 ratios, in contrast to the relatively small range of Al-26/Al-27 variations in CAIs around the canonical ratio. Two models that could explain the origin of this large Be-10/Be-9 range are assessed from the collateral variations they predict for the initial delta B-10 values: (i) closed-system decay of Be-10 from a ``canonical'' Be-10/Be-9 ratio and (ii) formation of CAIs from a mixture of solid precursors and nebula gas irradiated for up to a few hundred thousand years. The second scenario is shown to be the most consistent with the data, indicating that the major fraction of Be-10 in CAIs was produced by irradiation of refractory grains, while contributions from trapping of galactic cosmic rays and from early solar wind irradiation are minor. Be-10 production by solar-energetic-particle irradiation of solid refractory precursors, however, poses a conundrum for Ca-41, which is readily produced by irradiation and should therefore be more abundant than is observed in CAIs. (C) 2013 Elsevier B.V. All rights reserved.
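For reference, the Be-10 isochrons mentioned above follow the standard extinct-radionuclide form; the relation below is a generic statement of that form (not a reproduction of the paper's specific fits), in which the slope gives the initial Be-10/Be-9 ratio and the intercept the initial boron isotopic composition.

```latex
% Generic Be-10/B isochron: the measured boron isotopic ratio varies linearly with
% the Be/B elemental ratio; slope = initial 10Be/9Be, intercept = initial 10B/11B.
\[
\left(\frac{^{10}\mathrm{B}}{^{11}\mathrm{B}}\right)_{\mathrm{measured}}
  = \left(\frac{^{10}\mathrm{B}}{^{11}\mathrm{B}}\right)_{0}
  + \left(\frac{^{10}\mathrm{Be}}{^{9}\mathrm{Be}}\right)_{0}
    \times \frac{^{9}\mathrm{Be}}{^{11}\mathrm{B}}
\]
```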
Abstract:
In this paper, sliding-mode-control-based guidance laws to intercept stationary targets at a desired impact time are proposed. The approach is then extended to constant-velocity targets using the notion of predicted interception. The desired impact time is achieved by selecting the interceptor's lateral acceleration to enforce a sliding mode on a switching surface designed using non-linear engagement dynamics. Numerical simulation results are presented to validate the proposed guidance laws for different initial engagement geometries, impact times and salvo attack scenarios.
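As background, a minimal planar engagement model against a stationary target, together with one illustrative impact-time switching surface, is sketched below. The kinematics are standard; the surface shown (built from a desired impact time t_d and the coarse time-to-go estimate R/V_M) is only an assumed example and not necessarily the exact surface used in the paper.

```latex
% Planar engagement kinematics for an interceptor of constant speed V_M against a
% stationary target: R is the interceptor-target range, lambda the line-of-sight (LOS)
% angle, gamma the interceptor heading, and a_M the lateral acceleration (latax).
\[
\dot{R} = -V_M\cos(\gamma-\lambda), \qquad
R\,\dot{\lambda} = -V_M\sin(\gamma-\lambda), \qquad
\dot{\gamma} = \frac{a_M}{V_M}
\]
% Illustrative impact-time switching surface (an assumption, not the paper's exact form),
% with a_M chosen so that s is driven to zero in the sliding-mode sense:
\[
s = t_d - t - \hat{t}_{go}, \qquad \hat{t}_{go} \approx \frac{R}{V_M}.
\]
```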
Abstract:
Light neutralino dark matter can be achieved in the Minimal Supersymmetric Standard Model if staus are rather light, with mass around 100 GeV. We perform a detailed analysis of the relevant supersymmetric parameter space, including also the possibility of light selectrons and smuons, and of light higgsino- or wino-like charginos. In addition to the latest limits from direct and indirect detection of dark matter, ATLAS and CMS constraints on electroweak-inos and on sleptons are taken into account using a ``simplified models'' framework. Measurements of the properties of the Higgs boson at 125 GeV, which constrain, amongst others, the invisible decay of the Higgs boson into a pair of neutralinos, are also implemented in the analysis. We show that viable neutralino dark matter can be achieved for masses as low as 15 GeV. In this case, light charginos close to the LEP bound are required in addition to light right-chiral staus. Significant deviations are observed in the couplings of the 125 GeV Higgs boson, which constitute a promising way to probe the light neutralino dark matter scenario in the next run of the LHC. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
In this paper, guidance laws to intercept stationary and constant-velocity targets at a desired impact angle, based on sliding mode control theory, are proposed. The desired impact angle, which is defined in terms of a desired line-of-sight (LOS) angle, is achieved in finite time by selecting the missile's lateral acceleration (latax) to enforce a non-singular terminal sliding mode on a switching surface designed using this desired LOS angle and based on non-linear engagement dynamics. Numerical simulation results are presented to validate the proposed guidance laws for different initial engagement geometries and impact angles.
Abstract:
We revisit the constraints on the parameter space of the Minimal Supersymmetric Standard Model (MSSM) from charge- and color-breaking minima in the light of the information on the Higgs boson from the LHC so far. We study the behavior of the scalar potential, keeping two light sfermion fields along with the Higgs in the pMSSM framework, and analyze the stability of the vacuum. We find that for lightest stop masses below about 1 TeV and small mu (below about 500 GeV), the absolute stability of the potential can be attained only when the stop mixing parameter |X_t| lies below a certain bound; notably, this bound is approximately the value of X_t which maximizes the Higgs mass. The bounds become stronger for larger values of the mu parameter. Our bounds on the low-scale MSSM parameters are more stringent than those reported earlier in the literature. We reanalyze the stau sector as well, keeping both staus, and study the connections between the observed Higgs rates and vacuum (meta)stability. We show how a precision study of the ratio of signal strengths mu(gamma gamma)/mu(ZZ) can shed further light on this scenario.
Abstract:
Guidance laws based on a conventional sliding mode ensure only asymptotic convergence. However, convergence to the desired impact angle within a finite time is important in most practical guidance applications, and existing finite-time-convergent guidance laws suffer from a singularity that leads to control saturation. In this paper, guidance laws to intercept targets at a desired impact angle, from any initial heading angle and without exhibiting any singularity, are presented. The desired impact angle, which is defined in terms of a desired line-of-sight angle, is achieved in finite time by selecting the interceptor's lateral acceleration to enforce a nonsingular terminal sliding mode on a switching surface designed using nonlinear engagement dynamics. Numerical simulation results are presented to validate the proposed guidance laws for different initial engagement geometries and impact angles. Although the guidance laws are designed for constant-speed interceptors, their robustness against time-varying interceptor speed is also evaluated through extensive simulation results.
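A standard nonsingular terminal sliding surface of the type referenced here, written in terms of the line-of-sight-angle error, is shown below. This follows the general form used in the terminal-sliding-mode literature and is not necessarily the exact surface proposed in the paper.

```latex
% e = lambda - lambda_d is the error between the actual and desired LOS angles;
% beta > 0, and p, q are positive odd integers with 1 < p/q < 2, which avoids the
% negative-exponent term (and hence the singularity) that conventional terminal
% sliding mode exhibits when the error crosses zero.
\[
s = e + \frac{1}{\beta}\,\dot{e}^{\,p/q}, \qquad 1 < \frac{p}{q} < 2 .
\]
```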
Abstract:
We address the problem of reconstructing a sparse signal from its DFT magnitude. We refer to this problem as the sparse phase retrieval (SPR) problem, which finds applications in tomography, digital holography, electron microscopy, etc. We develop a Fienup-type iterative algorithm, referred to as the Max-K algorithm, to enforce sparsity and successively refine the estimate of the phase. We show that the Max-K algorithm possesses Cauchy convergence properties under certain conditions, that is, the MSE of reconstruction does not increase with iterations. We also formulate the SPR problem as a feasibility problem, where the goal is to find a signal that is sparse in a known basis and whose Fourier transform magnitude is consistent with the measurement. Subsequently, we interpret the Max-K algorithm as alternating projections onto the object-domain and measurement-domain constraint sets and generalize it to a parameterized relaxation, known as the relaxed averaged alternating reflections (RAAR) algorithm. On the application front, we work with measurements acquired using a frequency-domain optical-coherence tomography (FDOCT) experimental setup. Experimental results on measured data show that the proposed algorithms exhibit good reconstruction performance compared with the direct inversion technique, the homomorphic technique, and the classical Fienup algorithm without a sparsity constraint; specifically, the autocorrelation artifacts and background noise are suppressed to a significant extent. We also demonstrate that the RAAR algorithm offers a broader framework for FDOCT reconstruction, of which the direct inversion technique and the proposed Max-K algorithm become special instances corresponding to specific values of the relaxation parameter.
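A minimal sketch of a Fienup-type loop with a top-K (hard-thresholding) object-domain step is given below to illustrate the kind of alternation described. The function names and the exact projection and relaxation details of the paper's Max-K and RAAR algorithms are assumptions here; this is a generic alternating-projections sketch, not the authors' implementation.

```python
import numpy as np

def keep_k_largest(x, K):
    """Object-domain step: keep the K largest-magnitude samples, zero the rest."""
    y = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-K:]
    y[idx] = x[idx]
    return y

def fienup_sparse(mag, K, n_iter=200, seed=0):
    """Alternate between the measured Fourier-magnitude constraint and K-sparsity.

    mag : measured DFT magnitudes (1-D array); K : assumed sparsity level.
    Generic sketch only, not the exact Max-K or RAAR update of the paper.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(len(mag))               # random real-valued initialization
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X = mag * np.exp(1j * np.angle(X))          # measurement domain: replace magnitude, keep phase
        x = keep_k_largest(np.fft.ifft(X).real, K)  # object domain: enforce sparsity
    return x

# Toy usage: recover a sparse signal from its DFT magnitude (up to the trivial
# shift/flip ambiguities inherent to phase retrieval).
if __name__ == "__main__":
    true = np.zeros(128)
    true[[5, 40, 90]] = [1.0, -2.0, 1.5]
    est = fienup_sparse(np.abs(np.fft.fft(true)), K=3)
```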
Abstract:
Adapting the power of secondary users (SUs) while adhering to constraints on the interference caused to primary receivers (PRxs) is a critical issue in underlay cognitive radio (CR). This adaptation is driven by the interference and transmit power constraints imposed on the secondary transmitter (STx). Its performance also depends on the quality of the channel state information (CSI) available at the STx about the links from the STx to the secondary receiver and to the PRxs. For a system in which an STx is subject to an average interference constraint or an interference outage probability constraint at each of the PRxs, we derive novel, practically motivated, symbol error probability (SEP)-optimal binary transmit power control policies. As a reference, we also present the corresponding SEP-optimal continuous transmit power control policies for one PRx. We then analyze the robustness of the optimal policies when the STx only knows noisy channel estimates of the links between the SU and the PRxs. Altogether, our work develops a holistic understanding of the critical role played by different transmit and interference constraints in driving power control in underlay CR and of the impact of CSI on its performance.
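For concreteness, the two interference constraints discussed above typically take the following generic forms; the symbols are illustrative (P is the STx transmit power, g the STx-to-PRx channel power gain, and the thresholds are regulatory parameters) and are not taken from the paper.

```latex
% Average interference constraint at a primary receiver:
\[
\mathbb{E}\!\left[ P\, g \right] \le I_{\mathrm{avg}},
\]
% Interference outage probability constraint at a primary receiver:
\[
\Pr\!\left\{ P\, g > I_{\mathrm{peak}} \right\} \le O_{\max}.
\]
```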
Abstract:
Time-varying linear prediction has been studied in the context of speech signals, in which the auto-regressive (AR) coefficients of the system function are modeled as a linear combination of a set of known bases. Traditionally, least-squares minimization is used for the estimation of the model parameters of the system. Motivated by the sparse nature of the excitation signal for voiced sounds, we explore time-varying linear prediction modeling of speech signals using sparsity constraints. Parameter estimation is posed as an l0-norm minimization problem, and the re-weighted l1-norm minimization technique is used to estimate the model parameters. We show that, for sparsely excited time-varying systems, this formulation models the underlying system function better than the least-squares error minimization approach. Evaluation with synthetic and real speech examples shows that the estimated model parameters track the formant trajectories more closely than the least-squares approach.
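The time-varying linear prediction model referred to above has the standard form sketched below; the choice of basis functions and the weight-update rule in the re-weighted l1 step are as specified in the paper, so the notation here is only illustrative.

```latex
% Time-varying AR coefficients expanded over known bases f_i(n):
\[
a_k(n) = \sum_{i=0}^{q} c_{k i}\, f_i(n), \qquad k = 1,\dots,p,
\]
% prediction residual (excitation estimate) for the speech signal x(n):
\[
e(n) = x(n) + \sum_{k=1}^{p} a_k(n)\, x(n-k),
\]
% sparsity-promoting estimate of the expansion coefficients via re-weighted l1
% minimization (a surrogate for the l0 objective), with weights w(n) updated from
% the residual of the previous iteration:
\[
\hat{c} = \arg\min_{c} \sum_{n} w(n)\, \lvert e(n) \rvert .
\]
```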
Abstract:
Motivated by the discrepancies noted recently between the theoretical calculations of the electromagnetic omega pi form factor and certain experimental data, we investigate this form factor using analyticity and unitarity in a framework known as the method of unitarity bounds. We use a QCD correlator computed on the spacelike axis by operator product expansion and perturbative QCD as input, and exploit unitarity and the positivity of its spectral function, including the two-pion contribution that can be reliably calculated using high-precision data on the pion form factor. From this information, we derive upper and lower bounds on the modulus of the omega pi form factor in the elastic region. The results provide a significant check on those obtained with standard dispersion relations, confirming the existence of a disagreement with experimental data in the region around 0.6 GeV.
Abstract:
Using the positivity of relative entropy arising from the Ryu-Takayanagi formula for spherical entangling surfaces, we obtain constraints at the nonlinear level on the gravitational dual. We calculate the Green's function necessary to compute the first-order correction to the entangling surface and use this to find the relative entropy for non-constant stress tensors in a derivative expansion. We show that the Einstein value satisfies the positivity condition, while the multidimensional parameter space away from the Einstein value is constrained.
Abstract:
This article considers a semi-infinite mathematical programming problem with equilibrium constraints (SIMPEC), defined as a semi-infinite mathematical programming problem with complementarity constraints. We establish necessary and sufficient optimality conditions for (SIMPEC). We also formulate Wolfe- and Mond-Weir-type dual models for (SIMPEC) and establish weak, strong and strict converse duality theorems for (SIMPEC) and the corresponding dual problems under invexity assumptions.
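Schematically, a problem of the (SIMPEC) type combines infinitely many inequality constraints with complementarity conditions. The generic statement below uses illustrative symbols and is not necessarily the paper's exact formulation.

```latex
\[
\begin{aligned}
\min_{x \in \mathbb{R}^n} \quad & f(x) \\
\text{s.t.} \quad & g(x,t) \le 0 \quad \text{for all } t \in T
      \ \ (T \text{ an infinite index set}),\\
& G_i(x) \ge 0,\ \ H_i(x) \ge 0,\ \ G_i(x)\, H_i(x) = 0, \qquad i = 1,\dots,m .
\end{aligned}
\]
```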
Abstract:
Tradeoffs between mitigating black carbon (BC) and carbon dioxide (CO2) for limiting peak global mean warming are examined using the following set of methods. A two-box climate model is used to simulate the temperatures of the atmosphere and ocean for different rates of mitigation. Mitigation rates for BC and CO2 are characterized by respective timescales for the e-folding reduction in emissions intensity of gross global product, and respective emissions models force the box model. Finally, a simple economics model is used, with the cost of mitigation varying inversely with emission intensity. A constant mitigation timescale corresponds to mitigation at a constant annual rate; for example, an e-folding timescale of 40 years corresponds to a 2.5% reduction each year. The discounted present cost depends only on the respective mitigation timescale and the respective mitigation cost at present levels of emission intensity. Least-cost mitigation is posed as choosing the respective e-folding timescales to minimize total mitigation cost under a temperature constraint (e.g. peak warming within 2 degrees C above preindustrial). Peak warming is more sensitive to the mitigation timescale for CO2 than for BC. Therefore, rapid mitigation of CO2 emission intensity is essential to limiting peak warming, but simultaneous mitigation of BC can reduce the total mitigation expenditure. (c) 2015 Elsevier B.V. All rights reserved.
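A small numerical sketch of the e-folding arithmetic and of the discounted-cost statement follows. The discount rate, the assumed inverse dependence of cost on emission intensity, and the closed-form comparison are illustrative assumptions, not values or a model taken from the paper.

```python
import numpy as np

# e-folding arithmetic: a timescale tau means emission intensity ~ exp(-t/tau),
# i.e. a constant fractional reduction of roughly 1/tau per year.
tau = 40.0                                   # years
annual_cut = 1.0 - np.exp(-1.0 / tau)
print(f"tau = {tau:.0f} yr -> ~{100 * annual_cut:.1f}% reduction per year")   # ~2.5%

# Toy discounted present cost, assuming the mitigation cost rate varies inversely
# with emission intensity (cost rate ~ c0 * exp(t/tau)) and a discount rate r > 1/tau
# so that the integral converges.
r, c0 = 0.05, 1.0                            # assumed discount rate and present cost
t = np.linspace(0.0, 400.0, 40001)
present_cost = np.trapz(c0 * np.exp(t / tau) * np.exp(-r * t), t)
closed_form = c0 / (r - 1.0 / tau)           # analytic value of the same integral on [0, inf)
print(f"discounted cost: numerical {present_cost:.1f}, closed form {closed_form:.1f}")
```

The closed form depends only on the mitigation timescale tau and the present cost c0 (for a fixed discount rate), which is the dependence stated in the abstract.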
Abstract:
We revisit a problem studied by Padakandla and Sundaresan [SIAM J. Optim., August 2009] on the minimization of a separable convex function subject to linear ascending constraints. The problem arises as the core optimization in several resource allocation problems in wireless communication settings. It is also a special case of an optimization of a separable convex function over the bases of a specially structured polymatroid. We give an alternative proof of the correctness of the algorithm of Padakandla and Sundaresan. In the process, we relax some of the restrictions they placed on the objective function.
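The core problem has roughly the following shape, shown schematically; the exact inequality directions and feasibility conditions are as specified by Padakandla and Sundaresan, so this is only an illustrative statement.

```latex
% Separable convex objective with linear ascending constraints: partial sums of the
% variables are constrained by an ascending sequence alpha_1 <= ... <= alpha_n,
% with the full sum fixed.
\[
\begin{aligned}
\min_{x \ge 0} \quad & \sum_{i=1}^{n} f_i(x_i) \qquad (f_i \text{ convex}) \\
\text{s.t.} \quad & \sum_{i=1}^{l} x_i \ \ge\ \alpha_l, \qquad l = 1,\dots,n-1, \\
& \sum_{i=1}^{n} x_i \ =\ \alpha_n .
\end{aligned}
\]
```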
Abstract:
We address the problem of separating a speech signal into its excitation and vocal-tract filter components, which falls within the framework of blind deconvolution. Typically, the excitation in the case of voiced speech is assumed to be sparse and the vocal-tract filter stable. We develop an alternating lp-l2 projections algorithm (ALPA) to perform deconvolution taking these constraints into account. The algorithm is iterative and alternates between two solution spaces. The initialization is based on the standard linear prediction decomposition of a speech signal into an autoregressive filter and a prediction residue. In every iteration, a sparse excitation is estimated by optimizing an lp-norm-based cost and the vocal-tract filter is derived as the solution to a standard least-squares minimization problem. We validate the algorithm on voiced segments of natural speech signals and show applications to epoch estimation. We also present comparisons with state-of-the-art techniques and show that ALPA gives a sparser, impulse-like excitation, where the impulses directly denote the epochs or instants of significant excitation.
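The alternation described above can be summarized as two subproblems per iteration, sketched below with illustrative notation (h denotes the vocal-tract filter impulse response, e the excitation, x the observed speech, and * convolution); the precise data-fit constraint and lp optimizer are as defined in the paper.

```latex
% Excitation update: sparse estimate under an lp cost, with the current filter fixed:
\[
e^{(k+1)} \in \arg\min_{e}\ \lVert e \rVert_p^p
\quad \text{subject to a fit of } h^{(k)} * e \text{ to } x,
\]
% Filter update: standard least-squares fit with the excitation held fixed:
\[
h^{(k+1)} = \arg\min_{h}\ \lVert x - h * e^{(k+1)} \rVert_2^2 .
\]
```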