958 results for Constraint qualifications
Abstract:
A new technique called model predictive spread acceleration guidance (MPSAG) is proposed in this paper. It combines nonlinear model predictive control and spread acceleration guidance philosophies. The technique is then used to design a nonlinear suboptimal guidance law for a constant-speed missile against a stationary target with an impact angle constraint. MPSAG can be applied to a class of nonlinear problems and leads to a closed-form solution of the lateral acceleration (latax) history update. The guidance command is the latax, applied normal to the velocity vector. The new guidance law is validated on the nonlinear kinematics with both a lag-free and a first-order autopilot delay. The simulation results show that the proposed technique is quite promising, yielding a nonlinear guidance law that achieves both a very small miss distance and the desired impact angle.
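For orientation, the planar constant-speed kinematics on which such a latax-commanded law is typically simulated can be sketched as follows; the function and variable names are illustrative, and the latax command is left as a user-supplied callback rather than the MPSAG update itself.

```python
import numpy as np

def simulate_engagement(x0, y0, gamma0, V, latax_fn, dt=0.01, t_max=60.0):
    """Integrate planar constant-speed missile kinematics.

    State: position (x, y) and flight-path angle gamma; the commanded
    lateral acceleration latax_fn(t, x, y, gamma) acts normal to the
    velocity vector and only rotates it (the speed V stays constant).
    """
    x, y, gamma = x0, y0, gamma0
    traj = [(0.0, x, y, gamma)]
    t = 0.0
    while t < t_max:
        a = latax_fn(t, x, y, gamma)
        x += V * np.cos(gamma) * dt
        y += V * np.sin(gamma) * dt
        gamma += (a / V) * dt          # latax turns the velocity vector
        t += dt
        traj.append((t, x, y, gamma))
    return np.array(traj)
```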
Nonlinear Suboptimal Guidance with Impact Angle Constraint for Slow Moving Targets in 1-D Using MPSP
Abstract:
Using a recently developed method called model predictive static programming (MPSP), a nonlinear suboptimal guidance law for a constant-speed missile against a slow-moving target with an impact angle constraint is proposed. In this paper, the MPSP technique leads to a closed-form solution of the latax history update for the given problem. The guidance command is the latax, which is applied normal to the missile velocity, and the terminal constraints are the miss distance and the impact angle. The new guidance law is validated on the nonlinear kinematics with both a lag-free and a first-order autopilot delay.
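Whatever form the latax history update takes, the two terminal constraints named here can be evaluated on a simulated trajectory with a short helper such as the one below (illustrative names; this is only the constraint check, not the MPSP update).

```python
import numpy as np

def terminal_errors(traj, target_xy, desired_impact_angle):
    """Return (miss distance, impact-angle error) at the end of a trajectory.

    traj is an array of rows (t, x, y, gamma), e.g. produced by the
    kinematics sketch above; desired_impact_angle is in radians.
    """
    _, x, y, gamma = traj[-1]
    miss = np.hypot(target_xy[0] - x, target_xy[1] - y)
    return miss, desired_impact_angle - gamma
```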
Abstract:
Fuzzy multiobjective programming for a deterministic case involves maximizing the minimum goal satisfaction level among the conflicting goals of different stakeholders using the max-min approach. Uncertainty due to randomness in a fuzzy multiobjective program may be addressed by modifying the constraints using a probabilistic inequality (e.g., Chebyshev's inequality) or by adding new constraints using statistical moments (e.g., skewness). Such modifications may reduce the optimal value of the system performance. In the present study, a methodology is developed that allows some violation of the newly added and modified constraints and then minimizes that violation while maximizing the minimum goal satisfaction level. Fuzzy goal programming is used to solve the multiobjective model. The proposed methodology is demonstrated with an application to Waste Load Allocation (WLA) in a river system.
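For reference, the deterministic max-min formulation mentioned in the first sentence is commonly written as the following program (generic symbols, not the paper's notation):

\[
\max_{x,\,\lambda}\ \lambda \quad \text{subject to} \quad \mu_i(x) \ge \lambda,\ i=1,\dots,m, \qquad g_j(x) \le b_j,\ j=1,\dots,n, \qquad 0 \le \lambda \le 1,
\]

where $\mu_i(x)$ is the satisfaction (membership) level of the $i$-th stakeholder goal and $\lambda$ is the minimum satisfaction level being maximized; the methodology described above augments such a model with violation terms for the probabilistic and moment-based constraints.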
Abstract:
This article compares the land use of solar energy technologies with that of conventional energy sources. This is done by introducing two parameters, land transformation and land occupation. It is shown that the land area transformed by solar energy power generation is small compared to that of hydroelectric power generation, and is comparable to that of coal and nuclear power generation when life-cycle transformations are considered. We estimate that 0.97% of the total land area, or 3.1% of the total uncultivable land area, of India would be required to generate 3400 TWh/yr from solar energy power systems in conjunction with other renewable energy sources.
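A rough consistency check of the quoted figures, assuming India's total land area of about $3.29\times10^{6}\ \text{km}^2$ (a number not given in the abstract):

\[
0.0097 \times 3.29\times10^{6}\ \text{km}^2 \approx 3.2\times10^{4}\ \text{km}^2, \qquad
\frac{3400\times10^{12}\ \text{Wh/yr}}{8760\ \text{h/yr}\times 3.2\times10^{10}\ \text{m}^2} \approx 12\ \text{W/m}^2,
\]

an average areal power density consistent with typical utility-scale solar installations.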
Abstract:
Points-to analysis is a key compiler analysis. Several memory-related optimizations use points-to information to improve their effectiveness. Points-to analysis is performed by building a constraint graph of pointer variables and dynamically updating it to propagate more and more points-to information across its subset edges. So far, the structure of the constraint graph has been only trivially exploited for efficient propagation of information, e.g., in identifying cyclic components or to propagate information in topological order. We perform a careful study of its structure and propose a new inclusion-based flow-insensitive context-sensitive points-to analysis algorithm based on the notion of dominant pointers. We also propose a new kind of pointer equivalence based on dominant pointers which provides significantly more opportunities for reducing the number of pointers tracked during the analysis. Based on this hitherto unexplored form of pointer equivalence, we develop a new context-sensitive flow-insensitive points-to analysis algorithm which uses an incremental dominator update to efficiently compute points-to information. Using a large suite of programs consisting of SPEC 2000 benchmarks and five large open source programs, we show that our points-to analysis is 88% faster than BDD-based Lazy Cycle Detection and 2x faster than Deep Propagation. We argue that our approach of detecting dominator-based pointer equivalence is a key to improving points-to analysis efficiency.
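For orientation, the baseline inclusion-based propagation step that this work builds on can be sketched as a plain worklist over the subset edges of the constraint graph (the dominator-based pointer equivalence and incremental dominator update of the paper are not shown):

```python
from collections import defaultdict, deque

def propagate(points_to, subset_edges):
    """Propagate points-to sets across subset (copy) edges to a fixpoint.

    points_to:    dict var -> set of abstract objects (from p = &o constraints)
    subset_edges: dict var -> set of vars q such that pts(var) must be a subset of pts(q)
    In the full analysis, load/store constraints keep adding edges to this
    graph as points-to sets grow; only the propagation step is shown here.
    """
    points_to = defaultdict(set, {v: set(s) for v, s in points_to.items()})
    worklist = deque(points_to.keys())
    while worklist:
        p = worklist.popleft()
        for q in subset_edges.get(p, ()):
            if not points_to[p] <= points_to[q]:
                points_to[q] |= points_to[p]
                worklist.append(q)      # q changed, so revisit its successors
    return dict(points_to)

# Example: p = &a; q = p; r = q  gives edges p -> q -> r, so 'a' reaches r.
print(propagate({'p': {'a'}}, {'p': {'q'}, 'q': {'r'}}))
```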
Abstract:
Pervasive use of pointers in large-scale real-world applications continues to make points-to analysis an important optimization-enabler. Rapid growth of software systems demands a scalable pointer analysis algorithm. A typical inclusion-based points-to analysis iteratively evaluates constraints and computes a points-to solution until a fixpoint. In each iteration, (i) points-to information is propagated across directed edges in a constraint graph G and (ii) more edges are added by processing the points-to constraints. We observe that prioritizing the order in which the information is processed within each of the above two steps can lead to efficient execution of the points-to analysis. While earlier work in the literature focuses only on the propagation order, we argue that the other dimension, that is, prioritizing the constraint processing, can lead to even higher improvements on how fast the fixpoint of the points-to algorithm is reached. This becomes especially important as we prove that finding an optimal sequence for processing the points-to constraints is NP-Complete. The prioritization scheme proposed in this paper is general enough to be applied to any of the existing points-to analyses. Using the prioritization framework developed in this paper, we implement prioritized versions of Andersen's analysis, Deep Propagation, Hardekopf and Lin's Lazy Cycle Detection and Bloom Filter based points-to analysis. In each case, we report significant improvements in the analysis times (33%, 47%, 44%, 20% respectively) as well as the memory requirements for a large suite of programs, including SPEC 2000 benchmarks and five large open source programs.
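The idea of prioritizing constraint processing, as opposed to propagation order alone, can be sketched with a heap-ordered worklist; the priority function below is a placeholder, since the abstract does not spell out the actual prioritization scheme.

```python
import heapq

def process_constraints_prioritized(constraints, priority, handle):
    """Process points-to constraints in priority order instead of FIFO.

    constraints: iterable of constraint objects
    priority:    callable giving a smaller value to constraints that should
                 be evaluated earlier (a stand-in for the paper's scheme)
    handle:      callable that evaluates one constraint and returns an
                 iterable of newly generated constraints (possibly empty)
    """
    heap = [(priority(c), i, c) for i, c in enumerate(constraints)]
    heapq.heapify(heap)
    counter = len(heap)                 # tie-breaker so constraints never compare
    while heap:
        _, _, c = heapq.heappop(heap)
        for new_c in handle(c):
            heapq.heappush(heap, (priority(new_c), counter, new_c))
            counter += 1
```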
Abstract:
In this paper, we study the diversity-multiplexing-gain tradeoff (DMT) of wireless relay networks under the half-duplex constraint. It is often unclear what penalty, if any, is imposed by the half-duplex constraint on the DMT of such networks. We study two classes of networks. The first class, called KPP(I) networks, is the class of networks with the relays organized in K parallel paths between the source and the destination. While we assume that there is no direct source-destination path, the K relaying paths can interfere with each other. The second class, termed layered networks, comprises relays organized in layers, where links exist only between adjacent layers. We present a communication scheme based on static schedules and amplify-and-forward relaying for these networks. We show that for KPP(I) networks with $K \geq 3$, the proposed schemes can achieve the full-duplex DMT performance, thus demonstrating that the half-duplex constraint imposes no penalty on the DMT. We also show that, for layered networks, a linear DMT of $d_{\max}(1-r)^{+}$ between the maximum diversity $d_{\max}$ and the maximum multiplexing gain $r_{\max} = 1$ is achievable. We adapt existing DMT-optimal coding schemes to these networks, thus specifying the end-to-end communication strategy explicitly.
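For context, the DMT referred to throughout is the standard high-SNR tradeoff (background definition, not a result of the paper):

\[
r = \lim_{\mathrm{SNR}\to\infty}\frac{R(\mathrm{SNR})}{\log \mathrm{SNR}}, \qquad
d(r) = -\lim_{\mathrm{SNR}\to\infty}\frac{\log P_e(\mathrm{SNR})}{\log \mathrm{SNR}},
\]

so the linear tradeoff cited above reads $d(r) = d_{\max}(1-r)^{+}$ for $0 \le r \le r_{\max} = 1$.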
Abstract:
In this paper, a cubic spline guidance law is presented for intercepting a stationary target at a desired impact angle. The guidance law is obtained from a cubic-spline-based trajectory curve using an inverse method. The cubic spline trajectory curve expresses the altitude as a cubic polynomial of the downrange. The guidance law is modified to achieve interception in cases where the impact angle is greater than or equal to 90°. The guidance law is implemented in a feedback mode to maintain the desired impact angle and to reduce the miss distance in the presence of lateral acceleration saturation and atmospheric disturbances. The simulation results show that the guidance law fulfills all the requirements.
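The geometric step behind such a trajectory, altitude expressed as a cubic in downrange and pinned down by the launch and impact conditions, can be reconstructed with a small boundary-value fit, sketched below with illustrative names; note that the slope is the tangent of the flight-path angle, which is presumably why the law needs modification for impact angles of 90° or more.

```python
import numpy as np

def cubic_trajectory_coeffs(x0, y0, gamma0, xf, yf, gamma_f):
    """Fit y(x) = c0 + c1*x + c2*x**2 + c3*x**3 to boundary conditions.

    Altitude y is a cubic polynomial of downrange x, with slope dy/dx
    matching tan(flight-path angle) at launch and tan(impact angle) at
    the target.
    """
    A = np.array([
        [1.0,  x0, x0**2,    x0**3],   # y(x0)  = y0
        [0.0, 1.0,  2*x0,  3*x0**2],   # y'(x0) = tan(gamma0)
        [1.0,  xf, xf**2,    xf**3],   # y(xf)  = yf
        [0.0, 1.0,  2*xf,  3*xf**2],   # y'(xf) = tan(gamma_f)
    ])
    b = np.array([y0, np.tan(gamma0), yf, np.tan(gamma_f)])
    return np.linalg.solve(A, b)       # coefficients c0..c3
```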
Abstract:
The electromagnetic articulography (EMA) technique is used to record the kinematics of different articulators while a person speaks. EMA data often contain missing segments due to sensor failure. In this work, we propose a maximum a posteriori (MAP) estimation with a continuity constraint to recover the missing samples in articulatory trajectories recorded using EMA. In this approach, we combine the benefits of statistical MAP estimation with the temporal continuity of the articulatory trajectories. Experiments on an articulatory corpus with different missing-segment durations show that the proposed continuity constraint yields a 30% reduction in average root mean squared estimation error over statistical estimation of missing segments without any continuity constraint.
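A simplified stand-in for the combination described here, a Gaussian (MAP) data term on the missing samples plus a temporal-smoothness penalty, is sketched below as a penalized least-squares fill-in; it is not the paper's exact estimator, and all names and weights are illustrative.

```python
import numpy as np

def fill_missing_smooth(y, observed_mask, prior_mean, prior_var, smooth_weight=10.0):
    """Toy MAP-with-continuity fill-in for a 1-D articulatory trajectory.

    Minimizes a Gaussian prior term on the missing samples plus a
    second-difference smoothness penalty, with observed samples pinned
    to their measured values.
    """
    y = np.asarray(y, float)
    observed_mask = np.asarray(observed_mask, bool)
    n = y.size
    # Second-difference operator encodes the temporal-continuity prior.
    D = np.zeros((n - 2, n))
    for t in range(n - 2):
        D[t, t:t + 3] = [1.0, -2.0, 1.0]
    # Data term: observed samples are pinned hard, missing ones pulled to the prior.
    w = np.where(observed_mask, 1e6, 1.0 / prior_var)
    A = np.diag(w) + smooth_weight * D.T @ D
    b = w * np.where(observed_mask, y, prior_mean)
    return np.linalg.solve(A, b)
```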
Abstract:
We present in this paper a new algorithm based on Particle Swarm Optimization (PSO) for solving Dynamic Single Objective Constrained Optimization (DCOP) problems. We modify several parameters of the original particle swarm optimization algorithm and introduce new types of particles for local search and for detecting changes in the search space. The algorithm is tested on a known benchmark set and the results are compared with other contemporary works. We demonstrate the convergence properties using convergence graphs and also illustrate changes to the current benchmark problems that give a more realistic correspondence to practical real-world problems.
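A minimal (static, unconstrained) PSO update loop is sketched below for orientation; the paper's additions, namely constraint handling, change-detection particles, and local-search particles, are not shown.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Plain particle swarm optimization on a box-constrained problem."""
    rng = np.random.default_rng(0)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, (n_particles, lo.size))    # positions
    v = np.zeros_like(x)                               # velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()                 # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Example: minimize the 5-dimensional sphere function.
best_x, best_f = pso(lambda z: float(np.sum(z**2)),
                     (np.full(5, -5.0), np.full(5, 5.0)))
```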
Abstract:
Folivory, being a dietary constraint, can affect the social time of colobines. In the present study, we compared food items and activity budgets of two closely related species of colobines inhabiting South India, i.e. the Hanuman langur (Semnopithecus hypoleucos) and Nilgiri langur (Semnopithecus johnii), to determine whether folivory had an impact on social time in these species. Our study established that Nilgiri langurs were more folivorous than Hanuman langurs. Nilgiri langurs spent much less time on social activities, but more time on resting, although the social organization of S. hypoleucos was similar to that of the Nilgiri langur. The enforced resting time for fermentation of leafy food items may have reduced the time available for social interactions, which in turn affected the social time in Nilgiri langurs. By comparing the data from previous studies on other Hanuman langur species, we found that S. hypoleucos spent a similar amount of time on social activities as Semnopithecus entellus. Hence, the social behaviour of S. entellus and S. hypoleucos is phylogenetically highly conservative.
Abstract:
The input-constrained erasure channel with feedback is considered, where the binary input sequence contains no consecutive ones, i.e., it satisfies the $(1,\infty)$-RLL constraint. We derive the capacity for this setting, which can be expressed as $C_\epsilon = \max_{0 \le p \le 1/2} \frac{(1-\epsilon)H_b(p)}{1+(1-\epsilon)p}$, where $\epsilon$ is the erasure probability and $H_b(\cdot)$ is the binary entropy function. Moreover, we prove that a priori knowledge of the erasure at the encoder does not increase the feedback capacity. The feedback capacity is calculated using an equivalent dynamic programming (DP) formulation whose optimal average reward equals the capacity. Furthermore, we obtain an optimal encoding procedure from the solution of the DP, leading to a capacity-achieving, zero-error coding scheme for our setting. DP is thus shown to be a tool not only for solving optimization problems, such as capacity calculation, but also for constructing optimal coding schemes. The derived capacity expression also serves as the only non-trivial upper bound known on the capacity of the input-constrained erasure channel without feedback, a problem that is still open.
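A quick numerical check of the capacity expression above (not part of the paper): at $\epsilon = 0$ it should recover the noiseless $(1,\infty)$-RLL capacity, $\log_2$ of the golden ratio $\approx 0.6942$.

```python
import numpy as np

def hb(p):
    """Binary entropy function in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def capacity(eps, grid=20001):
    """Evaluate C_eps = max over 0 <= p <= 1/2 of (1-eps)*hb(p) / (1 + (1-eps)*p)."""
    return max((1 - eps) * hb(p) / (1 + (1 - eps) * p)
               for p in np.linspace(0.0, 0.5, grid))

print(capacity(0.0))   # ~0.6942 bits per channel use
print(capacity(0.25))  # feedback capacity with erasure probability 0.25
```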
Abstract:
This paper proposes to use an extended Gaussian Scale Mixtures (GSM) model instead of the conventional ℓ1 norm to approximate the sparseness constraint in the wavelet domain. We combine this new constraint with subband-dependent minimization to formulate an iterative algorithm on two shift-invariant wavelet transforms, the Shannon wavelet transform and the dual-tree complex wavelet transform (DTCWT). This extended GSM model introduces spatially varying information into the deconvolution process and thus enables the algorithm to achieve better results with fewer iterations in our experiments.
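For contrast, the conventional ℓ1 sparseness constraint that the GSM model replaces corresponds, in iterative wavelet-domain deconvolution, to a soft-thresholding (shrinkage) step on the coefficients; a minimal version of that baseline step is sketched below (the GSM-weighted, spatially varying alternative is not shown).

```python
import numpy as np

def soft_threshold(coeffs, thresh):
    """Proximal operator of the l1 norm: shrink wavelet coefficients toward zero."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thresh, 0.0)
```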