944 results for Constrained Optimal Control
Abstract:
In recent years, autonomous aerial vehicles have gained wide popularity in a variety of automation applications. To accomplish varied and challenging tasks, the capability of generating trajectories has assumed a key role. As higher performance is sought, traditional flatness-based trajectory generation schemes show their limitations, since they neglect the highly nonlinear dynamics of the quadrotor. Strategies based on optimal control principles therefore become beneficial: in the trajectory generation process they allow the control unit to fully exploit the actual dynamics and enable the drone to perform quite aggressive maneuvers. This dissertation is concerned with the development of an optimal control technique to generate trajectories for autonomous drones. The algorithm adopted to this end is a second-order iterative method working directly in continuous time which, under proper initialization, guarantees quadratic convergence to a locally optimal trajectory. At each iteration a quadratic approximation of the cost functional is minimized, and a descent direction is obtained as a linear-affine control law after solving a differential Riccati equation. The algorithm has been implemented and its effectiveness tested on the vectored-thrust dynamical model of a quadrotor in a realistic simulation setup.
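A minimal sketch of the per-iteration step described above, in notation assumed here rather than taken from the dissertation: the dynamics are linearized and the cost expanded to second order about the current trajectory, and the descent direction follows from a differential Riccati equation,

\[
\delta\dot{x} = A(t)\,\delta x + B(t)\,\delta u, \qquad
\delta J \approx \int_0^T \Big( q^\top\delta x + r^\top\delta u + \tfrac12\,\delta x^\top Q\,\delta x + \tfrac12\,\delta u^\top R\,\delta u \Big)\,dt,
\]
\[
-\dot{P} = Q + A^\top P + P A - P B R^{-1} B^\top P, \qquad
-\dot{p} = q + \big(A - B R^{-1} B^\top P\big)^\top p - P B R^{-1} r,
\]
\[
\delta u^*(t) = -R^{-1}\big( B^\top P\,\delta x + B^\top p + r \big),
\]

a linear-affine control law; the control is then updated along this descent direction with a suitable step size.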
Abstract:
In this paper, we discuss the problem of globally computing sub-Riemannian curves on the Euclidean group of motions SE(3). In particular, we derive a global result for special sub-Riemannian curves whose Hamiltonian satisfies a particular condition. In this paper, sub-Riemannian curves are defined in the context of a constrained optimal control problem. The maximum principle is then applied to this problem to yield an appropriate left-invariant quadratic Hamiltonian. A number of integrable quadratic Hamiltonians are identified. We then proceed to derive convenient expressions for sub-Riemannian curves in SE(3) that correspond to particular extremal curves. These equations are then used to compute sub-Riemannian curves that could potentially be used for motion planning of underwater vehicles.
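In the constrained-optimal-control formulation referred to above, a sub-Riemannian curve on SE(3) can be written (in an assumed standard notation, not necessarily the paper's) as a curve g(t) steered through a restricted set of left-invariant directions at minimal quadratic cost,

\[
\dot{g}(t) = g(t)\sum_{i=1}^{m} u_i(t)\,X_i, \qquad m < 6, \qquad
\min_{u}\ \tfrac12\int_0^T \sum_{i=1}^{m} c_i\,u_i(t)^2\,dt,
\]

where the X_i are left-invariant vector fields spanning the constraint distribution. The Maximum Principle then yields a left-invariant quadratic Hamiltonian of the form

\[
H(p) = \tfrac12\sum_{i=1}^{m} \frac{p_i^{2}}{c_i}, \qquad p_i = \langle p, X_i \rangle .
\]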
Abstract:
This note investigates the motion control of an autonomous underwater vehicle (AUV). The AUV is modeled as a nonholonomic system, since any lateral motion of a conventional, slender AUV is quickly damped out. The problem is formulated as an optimal kinematic control problem on the Euclidean group of motions SE(3), where the cost function to be minimized is the integral of a quadratic function of the velocity components. An application of the Maximum Principle to this optimal control problem yields the appropriate Hamiltonian, and the corresponding vector fields give the necessary conditions for optimality. For a special case of the cost function, the necessary conditions for optimality can be characterized more easily, and we proceed to investigate their solutions. Finally, it is shown that a particular set of optimal motions traces helical paths. Throughout this note we highlight a particular case in which the quadratic cost function is weighted so that it equates to the Lagrangian (kinetic energy) of the AUV. For this case, the regular extremal curves are constrained to equate to the AUV's components of momentum, and the resulting vector fields are the d'Alembert-Lagrange equations in Hamiltonian form.
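To make the special case concrete (symbols assumed here, not taken from the note): when the weights of the quadratic velocity cost are chosen to match the vehicle's inertia parameters, the integrand becomes the AUV's kinetic energy,

\[
\min\ \tfrac12\int_0^T \big( v(t)^\top M\,v(t) + \omega(t)^\top J\,\omega(t) \big)\,dt,
\]

subject to the kinematic equations on SE(3), where v and ω are the translational and angular velocity components and M and J are the mass and inertia matrices; the regular extremals then carry the AUV's momentum components, giving the d'Alembert-Lagrange equations in Hamiltonian form.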
Abstract:
This paper deals with energy pumping that occurs in a nonlinear MEMS gyroscope dynamical system, modeled as a proof mass constrained to move in a plane with two resonant modes that are nominally orthogonal. The two modes are ideally coupled only by the rotation of the gyroscope about the plane's normal vector. We also develop a linear optimal control design for reducing the oscillatory motion of the nonlinear system to a stable point.
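A minimal form of the two-mode proof-mass model implied above (the standard Coriolis-coupled form; symbols are assumptions, not taken from the paper) is

\[
m\ddot{x} + c_x\dot{x} + k_x x - 2 m \Omega\,\dot{y} = F_x, \qquad
m\ddot{y} + c_y\dot{y} + k_y y + 2 m \Omega\,\dot{x} = F_y,
\]

where x and y are the two nominally orthogonal modes, Ω is the rotation rate about the plane's normal vector, F_x and F_y are the drive/control forces, and the Coriolis terms provide the only ideal coupling between the modes.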
Abstract:
The objective of this paper is to correct and improve the results obtained by Van der Ploeg (1984a, 1984b) and utilized in the theoretical literature on feedback stochastic optimal control sensitive to constant exogenous risk aversion (see Jacobson, 1973; Karp, 1987; and Whittle, 1981, 1989, 1990, among others) or in the classic context of risk-neutral decision-makers (see Chow, 1973, 1976a, 1976b, 1977, 1978, 1981, 1993). More realistic and attractive, the new approach is set in the context of a time-varying endogenous risk aversion that is under the control of the decision-maker. It has strong qualitative implications for the agent's optimal policy over the entire planning horizon.
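For context, the constant-risk-aversion criterion in the Jacobson/Whittle line of work cited above has the familiar exponential-of-quadratic form (θ is the risk-sensitivity parameter; notation assumed here),

\[
J_\theta(u) = -\frac{2}{\theta}\,\log \mathbb{E}\!\left[ \exp\!\left( -\frac{\theta}{2}\int_0^T \big( x^\top Q\,x + u^\top R\,u \big)\,dt \right) \right],
\]

which recovers the risk-neutral quadratic criterion as θ → 0; the contribution discussed above is to let the risk-aversion parameter vary over time and be chosen endogenously by the decision-maker.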
Abstract:
In industrial practice, constrained steady-state optimisation and predictive control are separate, albeit closely related, functions within the control hierarchy. This paper presents a method that integrates predictive control with on-line optimisation of economic objectives. A receding-horizon optimal control problem is formulated using linear state-space models. This optimal control problem is very similar to the one presented in many predictive control formulations, but the main difference is that its formulation includes a general steady-state objective that depends on the magnitudes of the manipulated and measured output variables. This steady-state objective may include the standard quadratic regulatory objective together with economic objectives, which are often linear. Assuming that the system settles to a steady-state operating point under receding-horizon control, conditions are given for the satisfaction of the necessary optimality conditions of the steady-state optimisation problem. The method is based on adaptive linear state-space models, which are obtained using on-line identification techniques. The use of model adaptation is justified from a theoretical standpoint and its beneficial effects are shown in simulations. The method is tested in simulations of an industrial distillation column and a system of chemical reactors.
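A sketch of the kind of formulation described above (illustrative model and prices, not the paper's code or data): a receding-horizon problem on a linear state-space model whose cost combines the standard quadratic regulatory term with a linear economic term evaluated at the end-of-horizon operating point.

```python
import numpy as np
import cvxpy as cp

# Assumed toy model x+ = A x + B u, y = C x, with horizon N.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.5]])
C = np.array([[1.0, 0.0]])
N, nx, nu = 20, 2, 1

x0 = np.array([1.0, 0.0])
x = cp.Variable((nx, N + 1))
u = cp.Variable((nu, N))

Q, R = np.eye(nx), 0.1 * np.eye(nu)
price_u, price_y = 2.0, -1.0   # illustrative economic prices on input use and product output

cost, constraints = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)      # regulatory objective
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],        # model constraint
                    u[:, k] >= -1.0, u[:, k] <= 1.0]                 # input limits
# Linear economic objective on the (quasi) steady state reached at the end of the horizon.
cost += price_u * cp.sum(u[:, N - 1]) + price_y * cp.sum(C @ x[:, N])

cp.Problem(cp.Minimize(cost), constraints).solve()
print("first move to apply:", u[:, 0].value)   # only the first move is applied before re-solving
```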
Abstract:
One of the main goals of pest control is to maintain the density of the pest population at an equilibrium level below that which causes economic damage. To reach this goal, the optimal pest control problem was divided into two parts. In the first part, two optimal control functions were considered; these functions drive the pest-natural enemy ecosystem to an equilibrium state below the economic injury level. In the second part, a single optimal control function stabilizes the ecosystem at this level, minimizing a functional that penalizes quadratic deviations from it. The first problem was solved by applying Pontryagin's Maximum Principle; dynamic programming was used to solve the second optimal pest control problem.
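For the second part described above, one standard way to realize the dynamic-programming solution of the quadratic-deviation problem is a linear-quadratic regulator designed on the linearization about the target equilibrium. The Jacobian and weights below are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Linearized pest / natural-enemy dynamics about the equilibrium below the injury level.
A = np.array([[0.0, -0.6],
              [0.4,  0.0]])
B = np.array([[0.0],
              [1.0]])           # control acts on the natural-enemy equation (e.g. releases)
Q = np.diag([1.0, 1.0])         # quadratic penalty on deviations from the equilibrium
R = np.array([[0.5]])           # penalty on control effort

P = solve_continuous_are(A, B, Q, R)    # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)         # feedback gain, u = -K (x - x_eq)
print("feedback gain K =", K)
```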
A model for optimal chemical control of leaf area damaged by fungi population - Parameter dependence
Abstract:
We present a model to study a fungus population subjected to chemical control, incorporating the fungicide application directly into the model. From this, we obtain an optimal control strategy that minimizes both the fungicide application (cost) and the leaf area damaged by the fungus population over the interval between the moment when the disease is detected (t = 0) and the time of harvest (t = t_f). Initially, the parameters of the model are considered constant. Later, we let the apparent infection rate depend on time (and on temperature) and run simulations to illustrate the behavior and to compare it with the constant-parameter case.
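An objective of the general shape described above (functional form assumed for illustration, not taken from the paper) balances damaged leaf area against fungicide use over the detection-to-harvest window:

\[
\min_{u}\ J(u) = \int_{0}^{t_f} \big( a\,D(t) + b\,u(t)^2 \big)\,dt,
\]

where D(t) is the leaf area damaged by the fungus population, u(t) is the fungicide application rate, and a, b > 0 weight damage against application cost.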
Abstract:
A Maximum Principle is derived for a class of optimal control problems arising in midcourse guidance, in which certain controls are represented by measures and the state trajectories are functions of bounded variation. The optimality conditions improve on previous ones by allowing nonsmooth data, measurable time dependence, and a possibly time-varying constraint set for the conventional controls.
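The impulsive setting referred to above can be sketched (notation assumed) as a measure-driven control system whose trajectories are of bounded variation,

\[
dx(t) = f\big(t, x(t), u(t)\big)\,dt + g\big(t, x(t)\big)\,d\mu(t), \qquad u(t) \in U(t),
\]

where u is the conventional (measurable) control constrained to the possibly time-varying set U(t), and μ is a Borel measure representing the impulsive control; atoms of μ produce jumps in the bounded-variation trajectory x.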
Abstract:
In this work, linear and nonlinear feedback control techniques for chaotic systems are considered. The optimal nonlinear control design problem is solved by dynamic programming, which reduces it to the solution of the Hamilton-Jacobi-Bellman equation. The linear feedback control problem is then reformulated from the viewpoint of optimal control theory. The formulated theorem expresses explicitly the form of the minimized functional and gives sufficient conditions under which linear feedback control can be applied to the nonlinear system. Numerical simulations of the Rössler system and the Duffing oscillator are provided to show the effectiveness of this method. Copyright © 2005 by ASME.
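For reference, the dynamic-programming step mentioned above reduces the optimal nonlinear design to the Hamilton-Jacobi-Bellman equation (standard form, notation assumed),

\[
-\frac{\partial V}{\partial t}(t, x) = \min_{u}\Big\{ \ell(x, u) + \nabla_x V(t, x)^\top f(x, u) \Big\},
\]

and in the linear-feedback reformulation the minimized functional is quadratic, so the value function is sought in the form V = x^\top P x with P obtained from a Riccati-type equation.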
Abstract:
In this article we introduce the concept of MP-pseudoinvexity for general nonlinear impulsive optimal control problems whose dynamics are specified by measure-driven control equations. This is a general paradigm in that both the absolutely continuous and singular components of the dynamics depend on both the state and the control variables. The key result consists in showing that MP-pseudoinvexity is sufficient for optimality: it is proved that, if this property holds, then every process satisfying the maximum principle is optimal. This result is obtained in the context of a proper solution concept, which is presented and discussed. © 2012 IEEE.
Abstract:
In this paper, a micro-electro-mechanical system (MEMS) with parametric uncertainties is considered. The nonlinear dynamics of the MEMS system exhibits chaotic behavior. We present a linear optimal control technique for reducing the chaotic motion of the micro-electro-mechanical system with parametric uncertainties to a small periodic orbit. The simulation results show that control by linear optimal feedback is very effective. © 2013 Academic Publications, Ltd.
Abstract:
In this paper we study the behavior of a semi-active suspension subject to external vibrations. A mathematical model coupled to a magnetorheological (MR) damper is proposed. The goal of this work is to attenuate the external vibrations that affect the comfort and durability of the vehicle; to control these vibrations we propose the combination of two control strategies: optimal linear control and the MR damper. The optimal linear control is a linear feedback control for nonlinear systems, formulated from the viewpoint of optimal control theory. We also develop the optimal linear control design with the aim of reducing the external vibration of the nonlinear system to a stable point. Here we discuss the conditions that allow us to apply linear optimal control to this kind of nonlinear system.