895 results for Optimal Control
Abstract:
This paper considers an aircraft collision avoidance design problem that also incorporates the design of the aircraft's return-to-course flight. This control design problem is formulated as a non-linear optimal-stopping control problem, a formulation that does not require prior knowledge of the time taken to perform the avoidance and return-to-course manoeuvre. A dynamic programming solution to the avoidance and return-to-course problem is presented, after which a Markov chain numerical approximation technique is described. Simulation results illustrate the proposed collision avoidance and return-to-course flight approach.
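The optimal-stopping formulation can be illustrated with a minimal value-iteration sketch on a Markov chain approximation. The one-dimensional grid, costs, and random-walk kernel below are hypothetical stand-ins for illustration only, not the paper's aircraft model:

```python
import numpy as np

# Hypothetical 1-D grid (e.g. a separation-distance coordinate)
n = 50
states = np.linspace(0.0, 10.0, n)

stop_cost = np.maximum(0.0, 5.0 - states) ** 2   # cost of stopping (returning to course) here
run_cost = 0.1                                   # per-step cost of continuing the manoeuvre

# Reflected random-walk transition kernel as the Markov chain approximation
P = np.zeros((n, n))
for i in range(n):
    P[i, max(i - 1, 0)] += 0.5
    P[i, min(i + 1, n - 1)] += 0.5

# Bellman iteration for optimal stopping: V = min(stop now, pay run cost and continue)
V = stop_cost.copy()
for _ in range(1000):
    V_new = np.minimum(stop_cost, run_cost + P @ V)
    if np.max(np.abs(V_new - V)) < 1e-12:
        V = V_new
        break
    V = V_new
```

Because stopping is always available, the value function sits below the stopping cost everywhere, and no horizon length has to be fixed in advance, which is the appeal of the optimal-stopping formulation.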
Abstract:
The optimal bounded control of quasi-integrable Hamiltonian systems with wide-band random excitation for minimizing their first-passage failure is investigated. First, a stochastic averaging method for multi-degree-of-freedom (MDOF) strongly nonlinear quasi-integrable Hamiltonian systems with wide-band stationary random excitations, using generalized harmonic functions, is proposed. Then, the dynamical programming equations and their associated boundary and final-time conditions for the control problems of maximizing reliability and maximizing mean first-passage time are formulated from the averaged Itô equations by applying the dynamical programming principle. The optimal control law is derived from the dynamical programming equations and the control constraints. The relationship between the dynamical programming equations and the backward Kolmogorov equation for the conditional reliability function, and the Pontryagin equation for the conditional mean first-passage time, of the optimally controlled system is discussed. Finally, the conditional reliability function and the conditional probability density and mean of the first-passage time of an optimally controlled system are obtained by solving the backward Kolmogorov and Pontryagin equations. The application of the proposed procedure and the effectiveness of the control strategy are illustrated with an example.
Abstract:
A procedure for designing the optimal bounded control of strongly non-linear oscillators under combined harmonic and white-noise excitations for minimizing their first-passage failure is proposed. First, a stochastic averaging method for strongly non-linear oscillators under combined harmonic and white-noise excitations using generalized harmonic functions is introduced. Then, the dynamical programming equations and their boundary and final-time conditions for the control problems of maximizing reliability and of maximizing mean first-passage time are formulated from the averaged Itô equations by using the dynamical programming principle. The optimal control law is derived from the dynamical programming equations and the control constraint. Finally, the conditional reliability function and the conditional probability density and mean of the first-passage time of the optimally controlled system are obtained by solving the backward Kolmogorov equation and the Pontryagin equation. An example is given to illustrate the proposed procedure, and the results are verified by digital simulation. © 2003 Elsevier Ltd. All rights reserved.
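The effect of a bounded control on first-passage time can be illustrated by crude Monte Carlo. The bang-bang damping law, oscillator coefficients, noise level, and barrier below are illustrative assumptions for the sketch, not the paper's derived optimal law:

```python
import numpy as np

rng = np.random.default_rng(0)

def first_passage_time(b, barrier=2.0, dt=2e-3, t_max=20.0):
    """Euler-Maruyama simulation of a Duffing oscillator under combined
    harmonic and white-noise excitation,
        x'' + 0.1 x' + x + x^3 = 0.2 cos(1.2 t) + noise + u,
    with an illustrative bounded bang-bang control u = -b*sign(x').
    Returns the first time |x| exceeds the barrier (capped at t_max).
    All coefficients here are hypothetical, chosen only for the demo."""
    x, v, t, sigma = 1.0, 0.0, 0.0, 1.0
    while t < t_max:
        u = -b * np.sign(v)                       # bounded bang-bang control
        drift = -0.1 * v - x - x**3 + 0.2 * np.cos(1.2 * t) + u
        v += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        x += v * dt
        t += dt
        if abs(x) >= barrier:
            break
    return t

# Mean first-passage time without and with the bounded control
T_uncontrolled = np.mean([first_passage_time(0.0) for _ in range(20)])
T_controlled = np.mean([first_passage_time(1.0) for _ in range(20)])
```

The controlled oscillator dissipates energy faster than the excitation supplies it, so its sample paths survive longer before crossing the barrier, which is the quantity the dynamical programming formulation maximizes.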
Abstract:
Many aspects of human motor behavior can be understood using optimality principles such as optimal feedback control. However, these proposed optimal control models are risk-neutral; that is, they are indifferent to the variability of the movement cost. Here, we propose the use of a risk-sensitive optimal controller that incorporates movement cost variance either as an added cost (risk-averse controller) or as an added value (risk-seeking controller) to model human motor behavior in the face of uncertainty. We use a sensorimotor task to test the hypothesis that subjects are risk-sensitive. Subjects controlled a virtual ball undergoing Brownian motion towards a target. Subjects were required to minimize an explicit cost, in points, that was a combination of the final positional error of the ball and the integrated control cost. By testing subjects on different levels of Brownian motion noise and relative weighting of the position and control cost, we could distinguish between risk-sensitive and risk-neutral control. We show that subjects change their movement strategy pessimistically in the face of increased uncertainty in accord with the predictions of a risk-averse optimal controller. Our results suggest that risk-sensitivity is a fundamental attribute that needs to be incorporated into optimal feedback control models. © 2010 Nagengast et al.
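The distinction between risk-neutral and risk-sensitive control can be sketched with the standard exponential (risk-sensitive) objective, which for small risk parameter behaves like mean cost plus a variance penalty. The strategies and numbers below are invented for illustration, not the paper's task parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def risk_sensitive_cost(costs, theta):
    """Risk-sensitive objective (1/theta) * log E[exp(theta * cost)].
    theta > 0 is risk-averse (cost variance is penalised), theta < 0 is
    risk-seeking, and theta -> 0 recovers the risk-neutral expected cost."""
    if theta == 0.0:
        return float(np.mean(costs))
    return float(np.log(np.mean(np.exp(theta * np.asarray(costs)))) / theta)

# Two hypothetical movement strategies with equal mean cost but
# different variability (e.g. low-noise vs high-noise conditions)
safe = rng.normal(10.0, 0.5, 100_000)
risky = rng.normal(10.0, 3.0, 100_000)

# A risk-neutral controller cannot tell them apart; a risk-averse one can
neutral_gap = risk_sensitive_cost(risky, 0.0) - risk_sensitive_cost(safe, 0.0)
averse_gap = risk_sensitive_cost(risky, 0.2) - risk_sensitive_cost(safe, 0.2)
```

For Gaussian costs the risk-sensitive value is exactly mean + (theta/2) * variance, so the risk-averse gap here is driven entirely by the variance difference that a risk-neutral model ignores.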
Abstract:
This paper is concerned with time-domain optimal control of active suspensions. The optimal control problem formulation has been generalised by incorporating both road disturbances (ride quality) and a representation of driver inputs (handling quality). Both a regular optimal control performance index and a risk-sensitive exponential performance index are considered. Emphasis is given to practical considerations, including state estimation in the presence of load disturbances (driver inputs). © 2012 IEEE.
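The regular (quadratic) case of such a design reduces to an LQR problem. A minimal sketch on a deliberately simplified two-state quarter-car model with hypothetical coefficients, not the paper's full ride/handling formulation:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# State: [suspension deflection, sprung-mass velocity]; all numbers illustrative
m = 300.0    # sprung mass (kg)
k = 16000.0  # suspension stiffness (N/m)
c = 1000.0   # passive damping (N s/m)

A = np.array([[0.0, 1.0],
              [-k / m, -c / m]])
B = np.array([[0.0], [1.0 / m]])      # actuator force input
Q = np.diag([1e4, 1e2])               # penalise deflection and velocity (ride quality)
R = np.array([[1e-4]])                # control-effort weight

# Solve the continuous algebraic Riccati equation and form u = -K x
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

closed_loop_eigs = np.linalg.eigvals(A - B @ K)
```

The resulting gain trades suspension deflection against actuator effort through Q and R; the risk-sensitive exponential index mentioned in the abstract would additionally weight the variance of this quadratic cost.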
Abstract:
Gough, John; Belavkin, V.P.; Smolianov, O.G. (2005) 'Hamilton-Jacobi-Bellman equations for quantum optimal feedback control', Journal of Optics B: Quantum and Semiclassical Optics 7, pp. S237-S244.
Abstract:
OBJECTIVE
To assess the relationship between glycemic control, pre-eclampsia, and gestational hypertension in women with type 1 diabetes.
RESEARCH DESIGN AND METHODS
Pregnancy outcome (pre-eclampsia or gestational hypertension) was assessed prospectively in 749 women from the randomized controlled Diabetes and Pre-eclampsia Intervention Trial (DAPIT). HbA1c (A1C) values were available up to 6 months before pregnancy (n = 542), at the first antenatal visit (median 9 weeks) (n = 721), at 26 weeks' gestation (n = 592), and at 34 weeks' gestation (n = 519) and were categorized as optimal (<6.1%: referent), good (6.1–6.9%), moderate (7.0–7.9%), and poor (≥8.0%) glycemic control.
RESULTS
Pre-eclampsia and gestational hypertension developed in 17% and 11% of pregnancies, respectively. Women who developed pre-eclampsia had significantly higher A1C values before and during pregnancy compared with women who did not develop pre-eclampsia (P < 0.05). In early pregnancy, A1C ≥8.0% was associated with a significantly increased risk of pre-eclampsia (odds ratio 3.68 [95% CI 1.17–11.6]) compared with optimal control. At 26 weeks' gestation, A1C values ≥6.1% (good: 2.09 [1.03–4.21]; moderate: 3.20 [1.47–7.00]; and poor: 3.81 [1.30–11.1]) and at 34 weeks' gestation A1C values ≥7.0% (moderate: 3.27 [1.31–8.20] and poor: 8.01 [2.04–31.5]) significantly increased the risk of pre-eclampsia compared with optimal control. The adjusted odds ratios for pre-eclampsia for each 1% decrement in A1C before pregnancy, at the first antenatal visit, at 26 weeks' gestation, and at 34 weeks' gestation were 0.88 (0.75–1.03), 0.75 (0.64–0.88), 0.57 (0.42–0.78), and 0.47 (0.31–0.70), respectively. Glycemic control was not significantly associated with gestational hypertension.
CONCLUSIONS
Women who developed pre-eclampsia had significantly higher A1C values before and during pregnancy. These data suggest that optimal glycemic control both early and throughout pregnancy may reduce the risk of pre-eclampsia in women with type 1 diabetes.
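The odds-ratio statistic reported above is simple to compute from a 2x2 exposure-outcome table. The counts below are hypothetical, invented purely to show the arithmetic, and are not the DAPIT data:

```python
import math

# Hypothetical 2x2 table (NOT the trial's counts):
# exposure = poor glycemic control (A1C >= 8.0%), outcome = pre-eclampsia
a, b = 12, 30    # exposed:   cases, non-cases
c, d = 50, 460   # unexposed: cases, non-cases

odds_ratio = (a * d) / (b * c)   # (odds of outcome if exposed) / (odds if unexposed)

# Woolf 95% confidence interval on the log-odds scale
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
```

The wide intervals in the abstract (e.g. 1.17–11.6) reflect the same mechanism: small cell counts inflate the standard error of the log odds.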
Abstract:
This note investigates the motion control of an autonomous underwater vehicle (AUV). The AUV is modeled as a nonholonomic system as any lateral motion of a conventional, slender AUV is quickly damped out. The problem is formulated as an optimal kinematic control problem on the Euclidean Group of Motions SE(3), where the cost function to be minimized is equal to the integral of a quadratic function of the velocity components. An application of the Maximum Principle to this optimal control problem yields the appropriate Hamiltonian and the corresponding vector fields give the necessary conditions for optimality. For a special case of the cost function, the necessary conditions for optimality can be characterized more easily and we proceed to investigate its solutions. Finally, it is shown that a particular set of optimal motions trace helical paths. Throughout this note we highlight a particular case where the quadratic cost function is weighted in such a way that it equates to the Lagrangian (kinetic energy) of the AUV. For this case, the regular extremal curves are constrained to equate to the AUV's components of momentum and the resulting vector fields are the d'Alembert-Lagrange equations in Hamiltonian form.
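The helical-path claim can be checked numerically: a constant twist in se(3) integrates to a screw motion, i.e. rotation about a fixed axis plus translation along it. The twist components below are illustrative values, not taken from the note:

```python
import numpy as np
from scipy.linalg import expm

# Constant body-frame twist: yaw rate about body z plus surge and heave
omega = np.array([0.0, 0.0, 0.5])   # rad/s
vel = np.array([1.0, 0.0, 0.2])     # m/s

# Assemble the twist matrix xi_hat in se(3)
xi_hat = np.zeros((4, 4))
xi_hat[:3, :3] = np.array([[0.0, -omega[2], omega[1]],
                           [omega[2], 0.0, -omega[0]],
                           [-omega[1], omega[0], 0.0]])
xi_hat[:3, 3] = vel

# Flow of g' = g * xi_hat from the identity: g(t) = expm(t * xi_hat)
ts = np.linspace(0.0, 20.0, 200)
path = np.array([expm(t * xi_hat)[:3, 3] for t in ts])

# Screw axis passes through q = (omega x v) / |omega|^2, parallel to omega
q_axis = np.cross(omega, vel) / (omega @ omega)
radius = np.linalg.norm(path[:, :2] - q_axis[:2], axis=1)   # distance from the axis
climb = vel @ omega / np.linalg.norm(omega)                 # pitch rate along the axis
```

Constant radius about the axis plus constant climb rate along it is exactly a helix, matching the family of optimal motions identified in the note.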
Abstract:
One of the main goals of pest control is to maintain the density of the pest population at an equilibrium level below that of economic damage. To reach this goal, the optimal pest control problem was divided into two parts. In the first part, two optimal control functions were considered; these functions move the pest-natural enemy ecosystem to an equilibrium state below the economic injury level. In the second part, a single optimal control function stabilizes the ecosystem at this level, minimizing a functional that characterizes quadratic deviations from it. The first problem was solved by applying the Maximum Principle of Pontryagin. Dynamic programming was used to solve the second optimal pest control problem.
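The Pontryagin machinery used in the first stage can be sketched with a forward-backward sweep on a toy scalar linear-quadratic problem. The real pest-predator dynamics are nonlinear; the system, weights, and horizon below are illustrative stand-ins only:

```python
import numpy as np

# Toy problem: x' = a*x + b*u,  J = 0.5 * integral(q*x^2 + r*u^2) dt,
# with x the pest density measured from the desired level (target 0)
a, b, q, r = -0.2, 1.0, 1.0, 1.0
T, n = 2.0, 401
t = np.linspace(0.0, T, n)
dt = t[1] - t[0]
x0 = 1.5

def forward(u):
    """Explicit Euler integration of the state equation under control u."""
    x = np.empty(n)
    x[0] = x0
    for i in range(n - 1):
        x[i + 1] = x[i] + dt * (a * x[i] + b * u[i])
    return x

u = np.zeros(n)
for _ in range(300):
    x = forward(u)
    lam = np.empty(n)
    lam[-1] = 0.0                       # transversality condition lam(T) = 0
    for i in range(n - 1, 0, -1):       # costate equation: lam' = -q*x - a*lam
        lam[i - 1] = lam[i] + dt * (q * x[i] + a * lam[i])
    # Minimising the Hamiltonian gives u* = -b*lam/r; relax the update for stability
    u = 0.9 * u + 0.1 * (-b * lam / r)

x = forward(u)
cost = 0.5 * dt * np.sum(q * x**2 + r * u**2)
cost_uncontrolled = 0.5 * dt * np.sum(q * forward(np.zeros(n)) ** 2)
```

Each sweep integrates the state forward, the costate backward, and updates the control from the Hamiltonian's minimiser, which is the computational core of Maximum Principle solutions.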
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
This paper deals with energy pumping in a MEMS gyroscope nonlinear dynamical system, modeled as a proof mass constrained to move in a plane with two resonant modes that are nominally orthogonal. The two modes are ideally coupled only by the rotation of the gyro about the plane's normal vector. We also develop a linear optimal control design for reducing the oscillatory movement of the nonlinear system to a stable point.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
A model for optimal chemical control of leaf area damaged by fungi population - Parameter dependence
Abstract:
We present a model of a fungi population subjected to chemical control, incorporating the fungicide application directly into the model. From it, we obtain an optimal control strategy that minimizes both the fungicide application (cost) and the leaf area damaged by the fungi population during the interval between the moment the disease is detected (t = 0) and the time of harvest (t = t_f). Initially, the parameters of the model are considered constant. Later, we consider the apparent infection rate to depend on time (and temperature) and perform simulations to illustrate this case and compare it with the constant case.
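The cost trade-off the strategy balances can be sketched with a minimal stand-in model: logistic growth of the damaged leaf fraction, reduced by a constant fungicide level, with a running cost combining damage and spray. All coefficients and the candidate spray levels are hypothetical; the paper's model and weights differ:

```python
import numpy as np

def total_cost(u_level, r=0.8, k=1.5, w=0.3, tf=10.0, dt=0.01):
    """Euler simulation of dx/dt = r*x*(1-x) - k*u*x from 1% initial
    damage at detection (t = 0) to harvest (t = tf), accumulating the
    running cost integral of (damage + w * spray). Illustrative only."""
    steps = int(tf / dt)
    x, cost = 0.01, 0.0
    for _ in range(steps):
        cost += dt * (x + w * u_level)   # damaged area plus fungicide cost
        x += dt * (r * x * (1.0 - x) - k * u_level * x)
        x = max(x, 0.0)
    return cost

# Compare a few constant spray levels; too little lets damage grow,
# too much makes the fungicide itself the dominant cost
costs = {u: total_cost(u) for u in (0.0, 0.2, 0.4, 0.8)}
best_u = min(costs, key=costs.get)
```

Even in this crude constant-control version, the cost is minimised at an intermediate spray level, which is the qualitative shape of the trade-off the optimal control strategy resolves in continuous time.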
Abstract:
A Maximum Principle is derived for a class of optimal control problems arising in midcourse guidance, in which certain controls are represented by measures and the state trajectories are functions of bounded variation. The optimality conditions improve on previous ones by allowing nonsmooth data, measurable time dependence, and a possibly time-varying constraint set for the conventional controls.