135 results for Constrained Optimal Control
Abstract:
In this paper we study the behavior of a semi-active suspension subject to external vibrations. The mathematical model is proposed coupled to a magnetorheological (MR) damper. The goal of this work is to stabilize the external vibrations that affect the comfort and durability of a vehicle. To control these vibrations we propose the combination of two control strategies: optimal linear control and the magnetorheological (MR) damper. Optimal linear control is a linear feedback control approach for nonlinear systems, formulated from the optimal control theory viewpoint. We also develop the optimal linear control design with the aim of reducing the external vibrations of the nonlinear system to a stable point. Here, we discuss the conditions that allow us to apply linear optimal control to this kind of nonlinear system.
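As a concrete illustration of the Riccati-based feedback design this abstract describes, here is a minimal sketch; the quarter-car state matrices, weights, and parameter values are illustrative assumptions, not the paper's data, and the MR damper dynamics are not modeled.

```python
# Minimal sketch of an optimal linear feedback (LQR-style) design for a
# hypothetical 2-state quarter-car model; all numbers are assumed.
import numpy as np
from scipy.linalg import solve_continuous_are

# State x = [suspension deflection, sprung-mass velocity] (toy model)
A = np.array([[0.0, 1.0],
              [-100.0, -2.0]])   # assumed stiffness/damping terms
B = np.array([[0.0],
              [1.0]])            # control force enters the velocity equation
Q = np.diag([10.0, 1.0])         # weight on vibration amplitude
R = np.array([[0.1]])            # weight on control effort

# Solve the algebraic Riccati equation and form the feedback gain
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P   # feedback law u = -K x

print("feedback gain K =", K)
```

In the semi-active setting, a gain of this kind would supply the desired control force that the MR damper is then commanded to track.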
Abstract:
A first-order analytical model for optimal small-amplitude attitude maneuvers of a spacecraft with cylindrical symmetry in an elliptical orbit is presented. The optimization problem is formulated as a Mayer problem with the control torques provided by a power-limited propulsion system. The state is defined by Serret-Andoyer variables and the control by the components of the propulsive torques. The Pontryagin Maximum Principle is applied to the problem, and the optimal torques are given explicitly in terms of the Serret-Andoyer variables and their adjoints. For small-amplitude attitude maneuvers, the optimal Hamiltonian function is linearized around a reference attitude. A complete first-order analytical solution is obtained by simple quadrature and is expressed through a linear algebraic system involving the initial values of the adjoint variables. A numerical solution is obtained by taking the Euler angles formulation of the problem, solving the two-point boundary value problem through the shooting method, and then determining the Serret-Andoyer variables through the Serret-Andoyer transformation. Numerical results show that the first-order solution provides a good approximation to the optimal control law and also that it is possible to establish an optimal control law for the artificial satellite's attitude. (C) 2003 COSPAR. Published by Elsevier B.V. Ltd. All rights reserved.
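The necessary conditions invoked above follow the standard Pontryagin Maximum Principle pattern for a Mayer problem; a generic sketch is given below (the paper's specific Serret-Andoyer Hamiltonian is not reproduced here).

```latex
\begin{align*}
  &\text{minimize } J = \phi\bigl(x(t_f)\bigr)
   \quad \text{subject to } \dot{x} = f(x, u, t),\\
  &H(x, \lambda, u, t) = \lambda^{\top} f(x, u, t),\\
  &\dot{\lambda} = -\frac{\partial H}{\partial x}, \qquad
   \lambda(t_f) = \frac{\partial \phi}{\partial x}\bigg|_{t = t_f},\\
  &u^{*}(t) = \arg\max_{u} H\bigl(x^{*}(t), \lambda(t), u, t\bigr).
\end{align*}
```

The shooting method then searches for initial adjoint values lambda(0) such that integrating the state-adjoint system meets the prescribed boundary conditions.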
Abstract:
This paper presents a linear optimal control technique for reducing the chaotic motion of a micro-electro-mechanical Comb Drive system to a small periodic orbit. We analyze the nonlinear dynamics of a micro-electro-mechanical Comb Drive and demonstrate that this model exhibits chaotic behavior. Chaos control problems consist of attempts to stabilize a chaotic system to an equilibrium point, a periodic orbit, or, more generally, to a given reference trajectory. The technique is applied to the analysis of the nonlinear dynamics of the MEMS Comb Drive, and the simulation results show that the linear optimal control is very effective.
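Stabilization onto a periodic orbit is usually set up in deviation variables; the sketch below shows the generic form of that design (a standard formulation, not quoted from the paper), where the tilde quantities denote the desired orbit and the feedforward control that maintains it.

```latex
\begin{align*}
  &\dot{x} = A x + g(x) + B u, \qquad y = x - \tilde{x},\\
  &u = \tilde{u} + \Delta u, \qquad \Delta u = -R^{-1} B^{\top} P\, y,\\
  &P A + A^{\top} P - P B R^{-1} B^{\top} P + Q = 0.
\end{align*}
```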
Abstract:
In this work, we deal with a micro-electromechanical system (MEMS), represented by a micro-accelerometer. Through numerical simulations, it was found that, for certain parameters, the system has chaotic behavior. The chaotic behavior of the fractional-order version is also studied numerically, through time histories and phase portraits, and the results are validated by the existence of a positive maximal Lyapunov exponent. Three control strategies are used for controlling the trajectory of the system: State-Dependent Riccati Equation (SDRE) Control, Optimal Linear Feedback Control, and Fuzzy Sliding Mode Control. The controllers proved effective in controlling the trajectory of the system studied and robust in the presence of parametric errors.
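Of the three strategies listed, the SDRE control admits a particularly compact sketch: factor the dynamics as x' = A(x)x + Bu and solve a Riccati equation at every state. The Duffing-type factorization below is an illustrative assumption, not the paper's micro-accelerometer model.

```python
# SDRE sketch: state-dependent coefficient (SDC) factorization plus a
# pointwise Riccati solve; the oscillator and its parameters are assumed.
import numpy as np
from scipy.linalg import solve_continuous_are

def A_of_x(x, alpha=1.0, beta=1.0, delta=0.1):
    # SDC factorization of a Duffing-like oscillator:
    # x1' = x2, x2' = -alpha*x1 - beta*x1^3 - delta*x2 + u
    return np.array([[0.0, 1.0],
                     [-alpha - beta * x[0]**2, -delta]])

B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

def sdre_control(x):
    # Solve the Riccati equation at the current state and feed back
    P = solve_continuous_are(A_of_x(x), B, Q, R)
    return -(np.linalg.inv(R) @ B.T @ P @ x)   # u(x) = -R^-1 B^T P(x) x

x = np.array([0.5, -0.2])
print("control at x:", sdre_control(x))
```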
Abstract:
The aim of this paper is to apply methods from optimal control theory and from the theory of dynamical systems to the mathematical modeling of biological pest control. The linear feedback control problem for nonlinear systems is formulated in order to obtain the optimal pest control strategy using only the introduction of natural enemies. Asymptotic stability of the closed-loop nonlinear Kolmogorov system is guaranteed by means of a Lyapunov function, which can clearly be seen to be the solution of the Hamilton-Jacobi-Bellman equation, thus guaranteeing both stability and optimality. Numerical simulations for three possible scenarios of biological pest control based on the Lotka-Volterra models are provided to show the effectiveness of this method. (c) 2007 Elsevier B.V. All rights reserved.
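A toy version of the closed-loop idea can be simulated directly on a Lotka-Volterra model; the rates and the simple proportional release law below are assumptions made for illustration, not the paper's optimal feedback gains.

```python
# Toy feedback pest control on a Lotka-Volterra model: the control
# releases natural enemies when the pest exceeds its target level.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d = 1.0, 0.5, 0.5, 1.0   # assumed Lotka-Volterra rates
x1_star = d / c                    # pest level at the coexistence point

def rhs(t, x):
    x1, x2 = x                     # x1: pest, x2: natural enemy
    u = max(0.0, 5.0 * (x1 - x1_star))   # release enemies above target
    return [a*x1 - b*x1*x2,
            -d*x2 + c*x1*x2 + u]

sol = solve_ivp(rhs, (0.0, 40.0), [2.0, 0.5], max_step=0.01)
print("final state:", sol.y[:, -1])
```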
Abstract:
This paper presents the control and synchronization of chaos by designing linear feedback controllers. The linear feedback control problem for nonlinear systems is formulated from the optimal control theory viewpoint. Asymptotic stability of the closed-loop nonlinear system is guaranteed by means of a Lyapunov function, which can clearly be seen to be the solution of the Hamilton-Jacobi-Bellman equation, thus guaranteeing both stability and optimality. The formulated theorem expresses explicitly the form of the minimized functional and gives the sufficient conditions that allow using linear feedback control for a nonlinear system. Numerical simulations are provided in order to show the effectiveness of this method for the control of the chaotic Rössler system and the synchronization of the hyperchaotic Rössler system. (C) 2007 Elsevier B.V. All rights reserved.
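The theorem referenced here is usually stated along the following lines; this is a sketch of the standard formulation of this linear feedback design, and the notation may differ from the paper's.

```latex
\begin{align*}
  &\dot{y} = A\, y + G(x, \tilde{x})\, y + B u, \qquad
   u = -R^{-1} B^{\top} P\, y,\\
  &P A + A^{\top} P - P B R^{-1} B^{\top} P + Q = 0,\\
  &\tilde{Q}(x, \tilde{x}) = Q - G^{\top} P - P G \succ 0
   \;\Longrightarrow\; V(y) = y^{\top} P y \text{ is a Lyapunov function},\\
  &J = \int_{0}^{\infty}
   \bigl( y^{\top} \tilde{Q}\, y + u^{\top} R\, u \bigr)\, dt
   \text{ is minimized}.
\end{align*}
```

Checking positive definiteness of the state-dependent matrix Q-tilde along trajectories is what turns the linear gain into a guarantee for the full nonlinear system.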
Abstract:
A vector-valued impulsive control problem is considered whose dynamics, defined by a differential inclusion, are such that the vector fields associated with the singular term do not satisfy the so-called Frobenius condition. A concept of robust solution based on a new reparametrization procedure is adopted in order to derive necessary conditions of optimality. These conditions are obtained by taking a limit of those for an appropriate sequence of auxiliary standard optimal control problems approximating the original one. An example to illustrate the nature of the new optimality conditions is provided. © 2000 Elsevier Science B.V. All rights reserved.
Abstract:
Mathematical programming problems with equilibrium constraints (MPEC) are nonlinear programming problems where the constraints have a form that is analogous to first-order optimality conditions of constrained optimization. We prove that, under reasonable sufficient conditions, stationary points of the sum of squares of the constraints are feasible points of the MPEC. In usual formulations of MPEC all the feasible points are nonregular in the sense that they do not satisfy the Mangasarian-Fromovitz constraint qualification of nonlinear programming. Therefore, all the feasible points satisfy the classical Fritz-John necessary optimality conditions. In principle, this can cause serious difficulties for nonlinear programming algorithms applied to MPEC. However, we show that most feasible points do not satisfy a recently introduced stronger optimality condition for nonlinear programming. This is the reason why, in general, nonlinear programming algorithms are successful when applied to MPEC.
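For reference, a generic MPEC with complementarity constraints is shown below, together with one common choice of squared-residual function; the precise residual analyzed in the paper may differ.

```latex
\begin{align*}
  &\min_{x}\; f(x) \quad \text{s.t.}\quad h(x) = 0,\;
   G(x) \ge 0,\; H(x) \ge 0,\; G(x)^{\top} H(x) = 0,\\
  &\Phi(x) = \|h(x)\|^{2} + \bigl\|\min\{G(x), H(x)\}\bigr\|^{2}.
\end{align*}
```

The complementarity constraint is what makes every feasible point violate the Mangasarian-Fromovitz constraint qualification: the gradients of the active constraints can never be positively linearly independent there.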
Abstract:
Here, the CD4+ T-cell counts and viral loads obtained from HIV sero-positive patients are compared with results from numerical simulations by computer. Also, the standard scheme of administration of anti-HIV drugs (HAART schemes), which uses constant doses, is compared with an alternative sub-optimal treatment scheme which uses a variable drug dosage according to the evolution of a quantitative measure of the side effects. The quantitative analysis done here shows that it is possible to obtain, using the alternative scheme, the same performance as in the actual data, but with variable dosage and fewer side effects. Optimal control theory is used to solve the problem and also to provide a prognosis related to the strategies for control of viraemia.
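The constant-dose versus adaptive-dose comparison can be illustrated on a basic target-cell/infected-cell/virus model; every rate and the dose-adaptation rule below are assumptions made for this sketch and do not reproduce the paper's HAART data or its optimal control solution.

```python
# Toy comparison of constant vs. side-effect-adaptive dosing on a basic
# HIV model (target cells T, infected cells I, virus V); all assumed.
import numpy as np
from scipy.integrate import solve_ivp

s, dT, beta, dI, p, cV = 10.0, 0.01, 2e-4, 0.5, 100.0, 5.0

def model(t, y, dose):
    T, I, V, S = y                     # S accumulates a side-effect burden
    eff = min(0.9, dose(t, S))         # drug efficacy reduces infection
    dTdt = s - dT*T - (1 - eff)*beta*T*V
    dIdt = (1 - eff)*beta*T*V - dI*I
    dVdt = p*I - cV*V
    dSdt = eff - 0.05*S                # side effects grow with dosage
    return [dTdt, dIdt, dVdt, dSdt]

constant = lambda t, S: 0.7                       # fixed HAART-like dose
adaptive = lambda t, S: 0.7 if S < 5.0 else 0.3   # back off at high burden

y0 = [1000.0, 10.0, 100.0, 0.0]
for name, dose in [("constant", constant), ("adaptive", adaptive)]:
    sol = solve_ivp(model, (0, 200), y0, args=(dose,), max_step=0.1)
    print(name, "final viral load:", sol.y[2, -1])
```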
Abstract:
This paper deals with a stochastic optimal control problem involving discrete-time jump Markov linear systems. The jumps or changes between the system operation modes evolve according to an underlying Markov chain. In the model studied, the problem horizon is defined by a stopping time τ which represents either the occurrence of a fixed number N of failures or repairs (T_N), or the occurrence of a crucial failure event (τ_Δ), after which the system is brought to a halt for maintenance. In addition, an intermediary mixed case, for which τ represents the minimum between T_N and τ_Δ, is also considered. These stopping times coincide with some of the jump times of the Markov state, and the information available allows the reconfiguration of the control action at each jump time, in the form of a linear feedback gain. The solution for the linear quadratic problem with complete Markov state observation is presented. The solution is given in terms of recursions of a set of algebraic Riccati equations (ARE) or a coupled set of algebraic Riccati equations (CARE).
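The CARE mentioned at the end admit a compact fixed-point sketch for a discrete-time Markov jump linear system; the two-mode data and transition matrix below are illustrative assumptions, not the paper's recursion over the stopping-time horizon.

```python
# Fixed-point iteration on the coupled algebraic Riccati equations of a
# two-mode discrete-time Markov jump linear system (illustrative data).
import numpy as np

A = [np.array([[1.1]]), np.array([[0.8]])]    # mode dynamics
B = [np.array([[1.0]]), np.array([[1.0]])]
Q = [np.eye(1), np.eye(1)]
R = [np.eye(1), np.eye(1)]
Pr = np.array([[0.9, 0.1],                     # Markov transition matrix
               [0.3, 0.7]])

X = [np.eye(1), np.eye(1)]                     # initial guess
for _ in range(200):                           # iterate until convergence
    E = [sum(Pr[i, j] * X[j] for j in range(2)) for i in range(2)]
    X = [A[i].T @ E[i] @ A[i] + Q[i]
         - A[i].T @ E[i] @ B[i]
           @ np.linalg.inv(R[i] + B[i].T @ E[i] @ B[i])
           @ B[i].T @ E[i] @ A[i]
         for i in range(2)]

E = [sum(Pr[i, j] * X[j] for j in range(2)) for i in range(2)]
K = [np.linalg.inv(R[i] + B[i].T @ E[i] @ B[i]) @ B[i].T @ E[i] @ A[i]
     for i in range(2)]                        # mode-dependent gains u = -K_i x
print("solutions X:", X)
print("gains K:", K)
```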
Abstract:
We consider infinite horizon optimal impulsive control problems for which a given cost function is minimized by choosing control strategies driving the state to a point in a given closed set C_∞. We present necessary conditions of optimality in the form of a maximum principle whose boundary condition on the adjoint variable ensures non-degeneracy despite the infinite time horizon. These conditions are given for conventional systems in a first instance and then for impulsive control problems. They are proved by considering a family of approximating auxiliary conventional (impulse-free) optimal control problems defined on an increasing sequence of finite time intervals. As far as we know, results of this kind have not been derived previously. © 2010 IFAC.
Abstract:
This work considers nonsmooth optimal control problems and provides two new sufficient conditions of optimality. The first condition involves the Lagrange multipliers while the second does not. We show that under the first new condition all processes satisfying the Pontryagin Maximum Principle (called MP-processes) are optimal. Conversely, we prove that optimal control problems in which every MP-process is optimal necessarily obey our first optimality condition. The second condition is more natural, but it is only applicable to normal problems, and the converse holds just for smooth problems. Nevertheless, it is proved that for the class of normal smooth optimal control problems the two conditions are equivalent. Some examples illustrating the features of these sufficiency conditions are presented. © 2012 Springer Science+Business Media New York.