884 results for Optimal control problem
Abstract:
This paper presents the mathematical development of a body-centric nonlinear dynamic model of a quadrotor UAV that is suitable for the development of biologically inspired navigation strategies. Analytical approximations are used to find an initial guess of the parameters of the nonlinear model, then parameter estimation methods are used to refine the model parameters using the data obtained from onboard sensors during flight. Due to the unstable nature of the quadrotor model, the identification process is performed with the system in closed-loop control of attitude angles. The obtained model parameters are validated using real unseen experimental data. Based on the identified model, a Linear-Quadratic (LQ) optimal tracker is designed to stabilize the quadrotor and facilitate its translational control by tracking body accelerations. The LQ tracker is tested on an experimental quadrotor UAV and the obtained results are a further means to validate the quality of the estimated model. The unique formulation of the control problem in the body frame makes the controller better suited for bio-inspired navigation and guidance strategies than conventional attitude- or position-based control systems that can be found in the existing literature.
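The LQ tracker design reduces, at its core, to solving an algebraic Riccati equation for a linearized model of the identified dynamics. A minimal sketch of that step, assuming a generic state-space pair (A, B) and illustrative weights Q, R rather than the paper's identified quadrotor model:

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Hypothetical linearized dynamics x_dot = A x + B u
    # (placeholder matrices, not the identified model from the paper).
    A = np.array([[0.0, 1.0],
                  [0.0, -0.5]])
    B = np.array([[0.0],
                  [1.0]])

    # Weights of the quadratic cost integral of (x' Q x + u' R u) dt.
    Q = np.diag([10.0, 1.0])
    R = np.array([[0.1]])

    # Solve the continuous-time algebraic Riccati equation and form the gain.
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)  # state feedback u = -K x

    print("LQ gain K =", K)

Tracking body accelerations, as the paper does, would add a reference feedforward term on top of this regulator structure.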
Abstract:
Several works in the shopping-time and in the human-capital literature, due to the nonconcavity of the underlying Hamiltonian, use first-order conditions in dynamic optimization to characterize necessity, but not sufficiency, in intertemporal problems. In this work I choose one paper in each of these two areas and show that optimality can be characterized by means of a simple application of Arrow's (1968) sufficiency theorem.
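For reference, Arrow's theorem replaces concavity of the Hamiltonian itself with concavity of the maximized Hamiltonian; a schematic statement in generic notation (not tied to either of the two papers discussed):

    % Arrow-type sufficiency (schematic): if the maximized Hamiltonian
    %   H^0(x, \lambda, t) = \max_{u \in U} H(x, u, \lambda, t)
    % is concave in x for every t along the candidate adjoint path \lambda(t),
    % then the first-order (Pontryagin) conditions are also sufficient.
    \[
      H^{0}(x,\lambda,t) = \max_{u \in U} H(x,u,\lambda,t), \qquad
      H^{0}(\cdot,\lambda(t),t) \ \text{concave in } x
      \ \Longrightarrow\ \text{the necessary conditions are also sufficient.}
    \]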
Abstract:
A first order analytical model for optimal small amplitude attitude maneuvers of a spacecraft with cylindrical symmetry in an elliptical orbit is presented. The optimization problem is formulated as a Mayer problem with the control torques provided by a power limited propulsion system. The state is defined by the Serret-Andoyer variables and the control by the components of the propulsive torques. The Pontryagin Maximum Principle is applied to the problem and the optimal torques are given explicitly in terms of the Serret-Andoyer variables and their adjoints. For small amplitude attitude maneuvers, the optimal Hamiltonian function is linearized around a reference attitude. A complete first order analytical solution is obtained by simple quadrature and is expressed through a linear algebraic system involving the initial values of the adjoint variables. A numerical solution is obtained by taking the Euler angles formulation of the problem, solving the two-point boundary value problem through the shooting method, and then determining the Serret-Andoyer variables through the Serret-Andoyer transformation. Numerical results show that the first order solution provides a good approximation to the optimal control law and also that it is possible to establish an optimal control law for the artificial satellite's attitude. (C) 2003 COSPAR. Published by Elsevier B.V. Ltd. All rights reserved.
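Schematically, the Mayer formulation and the Pontryagin conditions used here take the following form (generic notation; the paper works in the Serret-Andoyer variables and their adjoints):

    % Mayer problem: minimize a terminal cost subject to controlled dynamics.
    \[
      \min_{u}\ \phi\bigl(x(t_f)\bigr)
      \quad \text{s.t.} \quad \dot{x} = f(x,u,t), \qquad x(t_0) = x_0 .
    \]
    % Pontryagin Maximum Principle: with H = \lambda^{\mathsf T} f(x,u,t),
    % the optimal control maximizes H along the adjoint dynamics.
    \[
      \dot{\lambda} = -\frac{\partial H}{\partial x}, \qquad
      u^{*}(t) = \arg\max_{u}\ H\bigl(x^{*}(t),u,\lambda(t),t\bigr).
    \]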
Abstract:
The aim of this paper is to apply methods from optimal control theory, and from the theory of dynamic systems to the mathematical modeling of biological pest control. The linear feedback control problem for nonlinear systems has been formulated in order to obtain the optimal pest control strategy only through the introduction of natural enemies. Asymptotic stability of the closed-loop nonlinear Kolmogorov system is guaranteed by means of a Lyapunov function which can clearly be seen to be the solution of the Hamilton-Jacobi-Bellman equation, thus guaranteeing both stability and optimality. Numerical simulations for three possible scenarios of biological pest control based on the Lotka-Volterra models are provided to show the effectiveness of this method. (c) 2007 Elsevier B.V. All rights reserved.
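A minimal sketch of the idea behind such simulations, using a generic predator-prey Lotka-Volterra model in which the control acts only as an introduction rate of natural enemies (parameter values and the feedback gain are illustrative, not those of the paper):

    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative Lotka-Volterra parameters (not the paper's values).
    a, b, c, d = 1.0, 0.5, 0.2, 0.6
    x_eq, y_eq = d / c, a / b        # coexistence equilibrium to stabilize around
    k = 2.0                          # illustrative feedback gain

    def controlled_lv(t, z):
        x, y = z                     # x: pest population, y: natural enemies
        u = max(k * (x - x_eq), 0.0) # release enemies only when pests exceed x_eq
        dx = a * x - b * x * y
        dy = -d * y + c * x * y + u  # control enters as introduction of natural enemies
        return [dx, dy]

    sol = solve_ivp(controlled_lv, (0.0, 50.0), [1.5, 1.0], max_step=0.01)
    print("final state:", sol.y[:, -1])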
Abstract:
This paper presents the control and synchronization of chaos by designing linear feedback controllers. The linear feedback control problem for nonlinear systems is formulated from the viewpoint of optimal control theory. Asymptotic stability of the closed-loop nonlinear system is guaranteed by means of a Lyapunov function which can clearly be seen to be the solution of the Hamilton-Jacobi-Bellman equation, thus guaranteeing both stability and optimality. The formulated theorem expresses explicitly the form of the minimized functional and gives the sufficient conditions that allow the use of linear feedback control for a nonlinear system. Numerical simulations are provided to show the effectiveness of this method for the control of the chaotic Rossler system and the synchronization of the hyperchaotic Rossler system. (C) 2007 Elsevier B.V. All rights reserved.
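The stability-plus-optimality argument common to this and the preceding paper rests on the Hamilton-Jacobi-Bellman equation; schematically, in generic notation (not the paper's exact theorem):

    % If a positive definite V(x) satisfies the HJB equation
    % (with running cost \ell \ge 0) for the closed-loop system, then
    % V is simultaneously the optimal value function and a Lyapunov function.
    \[
      \min_{u}\Bigl[\, \ell(x,u) + \nabla V(x)^{\mathsf T} f(x,u) \,\Bigr] = 0 ,
      \qquad
      \dot V\bigl(x(t)\bigr) = \nabla V^{\mathsf T} f\bigl(x,u^{*}\bigr) = -\ell\bigl(x,u^{*}\bigr) \le 0 ,
    \]
    % so the optimal feedback u^{*} both minimizes the cost and makes V decrease.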
A model for optimal chemical control of leaf area damaged by fungi population - Parameter dependence
Abstract:
We present a model to study a fungi population subjected to chemical control, incorporating the fungicide application directly into the model. From this, we obtain an optimal control strategy that minimizes both the fungicide application (cost) and the leaf area damaged by the fungi population during the interval between the moment when the disease is detected (t = 0) and the time of harvest (t = t_f). Initially, the parameters of the model are considered constant. Later, we allow the apparent infection rate to depend on time (and temperature) and carry out simulations to illustrate the results and to compare them with the constant case.
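In outline, the control problem trades fungicide use against damaged leaf area over the season; a schematic cost functional and disease dynamics in generic notation (illustrative weights and a logistic-type progress model, not the paper's exact formulation):

    % Penalize cumulative fungicide application u(t) and damaged leaf area D(t)
    % from detection (t = 0) to harvest (t = t_f).
    \[
      \min_{u(\cdot)}\ J(u) = \int_{0}^{t_f} \Bigl[ c_1\, u(t)^{2} + c_2\, D(t) \Bigr] dt ,
      \qquad 0 \le u(t) \le u_{\max},
    \]
    % subject, e.g., to \dot{D} = r(t)\,(1 - D)\,D - \kappa\, u\, D,
    % where r(t) is the (possibly time- and temperature-dependent) apparent infection rate.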
Abstract:
A branch and bound algorithm is proposed to solve the H2-norm model reduction problem for continuous-time linear systems, with conditions ensuring convergence to the global optimum in finite time. The lower and upper bounds used in the optimization procedure are obtained through Linear Matrix Inequality (LMI) formulations. Examples illustrate the results.
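The bounds in such a procedure hinge on evaluating H2 norms of stable linear systems; a minimal sketch of that computation via the controllability Gramian (illustrative matrices, not taken from the paper's examples):

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    # Illustrative stable system (A, B, C); not from the paper.
    A = np.array([[-1.0, 0.5],
                  [0.0, -2.0]])
    B = np.array([[1.0],
                  [1.0]])
    C = np.array([[1.0, 0.0]])

    # Controllability Gramian: A Wc + Wc A' + B B' = 0.
    Wc = solve_continuous_lyapunov(A, -B @ B.T)

    # H2 norm: ||G||_2 = sqrt(trace(C Wc C')).
    h2_norm = float(np.sqrt(np.trace(C @ Wc @ C.T)))
    print("H2 norm =", h2_norm)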
Abstract:
A vector-valued impulsive control problem is considered whose dynamics, defined by a differential inclusion, are such that the vector fields associated with the singular term do not satisfy the so-called Frobenius condition. A concept of robust solution based on a new reparametrization procedure is adopted in order to derive necessary conditions of optimality. These conditions are obtained by taking a limit of those for an appropriate sequence of auxiliary standard optimal control problems approximating the original one. An example to illustrate the nature of the new optimality conditions is provided. © 2000 Elsevier Science B.V. All rights reserved.
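Schematically, the systems in question have dynamics driven by a vector-valued control measure (generic notation, not the paper's precise setting):

    % Impulsive dynamics driven by a vector-valued measure \nu:
    \[
      dx(t) = f\bigl(x(t),t\bigr)\,dt + G\bigl(x(t)\bigr)\,d\nu(t).
    \]
    % When the columns of G do not commute (the Frobenius condition fails),
    % the state jump produced by an impulse depends on the path along which
    % the measure is realized, which is what motivates a reparametrized,
    % robust notion of solution.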
Abstract:
Here the CD4+ T-cell counts and viral loads obtained from HIV sero-positive patients are compared with results from numerical simulations by computer. In addition, the standard scheme of administration of anti-HIV drugs (HAART schemes), which uses constant doses, is compared with an alternative sub-optimal treatment scheme which uses variable drug dosages according to the evolution of a quantitative measure of the side effects. The quantitative analysis done here shows that it is possible to obtain, using the alternative scheme, the same performance observed in the actual data while using variable dosages and causing fewer side effects. Optimal control theory is used to solve the problem and also to provide a prognosis for strategies to control viraemia.
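A schematic version of the kind of functional optimized in such treatment-scheduling studies, written over a standard three-compartment HIV model (generic weights and dynamics, not the paper's exact formulation):

    % Minimize viral load V and drug burden (a proxy for side effects)
    % while rewarding a high CD4+ T-cell count T, with drug efficacy u(t).
    \[
      \min_{0 \le u(t) \le 1}\ \int_{0}^{t_f} \Bigl[ a\,V(t) + b\,u(t)^{2} - c\,T(t) \Bigr]\,dt ,
    \]
    % subject, e.g., to
    %   \dot{T} = s - d\,T - (1-u)\,k\,T\,V, \quad
    %   \dot{I} = (1-u)\,k\,T\,V - \delta\,I, \quad
    %   \dot{V} = N\,\delta\,I - \mu\,V .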
Abstract:
In this paper, a micro-electro-mechanical system (MEMS) with parametric uncertainties is considered. The nonlinear dynamics of the MEMS system exhibit chaotic behavior. We present a linear optimal control technique for reducing the chaotic motion of the micro-electro-mechanical system with parametric uncertainties to a small periodic orbit. The simulation results show that the linear optimal control is very effective. © 2013 Academic Publications, Ltd.
Abstract:
In this work a nonzero-sum Nash game related to the H2 and H∞ control problems is formulated in the context of convex optimization theory. The variables of the game are limiting bounds for the H2 and H∞ norms, and the final controller is obtained as an equilibrium solution, which minimizes the "sensitivity of each norm" with respect to the other. The state feedback problem is considered and illustrated by numerical examples.
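Schematically, the equilibrium sought is a pair of norm bounds at which neither bound can be improved without degrading the other (a generic Nash-equilibrium statement, not the paper's precise formulation):

    % Nash equilibrium in the pair of bounds (\gamma_2, \gamma_\infty):
    \[
      \gamma_2^{*} = \min_{K}\ \|T_{2}(K)\|_{2}\ \ \text{s.t.}\ \ \|T_{\infty}(K)\|_{\infty} \le \gamma_\infty^{*},
      \qquad
      \gamma_\infty^{*} = \min_{K}\ \|T_{\infty}(K)\|_{\infty}\ \ \text{s.t.}\ \ \|T_{2}(K)\|_{2} \le \gamma_2^{*},
    \]
    % i.e. each bound is optimal given the other, so neither norm can be
    % improved unilaterally at the equilibrium controller K.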
Abstract:
This paper studies the asymptotic optimality of discrete-time Markov decision processes (MDPs) with general state and action spaces and having weak and strong interactions. Using an approach similar to that developed by Liu, Zhang, and Yin [Appl. Math. Optim., 44 (2001), pp. 105-129], the idea in this paper is to consider an MDP with general state and action spaces and to reduce the dimension of the state space by considering an averaged model. This formulation is often described by introducing a small parameter epsilon > 0 in the definition of the transition kernel, leading to a singularly perturbed Markov model with two time scales. Our objective is twofold. First, it is shown that the value function of the control problem for the perturbed system converges to the value function of a limit averaged control problem as epsilon goes to zero. In the second part of the paper, it is proved that a feedback control policy for the original control problem defined by using an optimal feedback policy for the limit problem is asymptotically optimal. Our work extends existing results in the literature in two directions: the underlying MDP is defined on general state and action spaces, and we do not impose strong conditions on the recurrence structure of the MDP, such as Doeblin's condition.
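The two results can be stated schematically as follows (informal notation, not the paper's precise hypotheses):

    % (i) Value convergence: the value of the perturbed problem tends to the
    %     value of the averaged (limit) problem as the small parameter vanishes.
    % (ii) Asymptotic optimality: a feedback built from an optimal policy of the
    %      limit problem is nearly optimal for the original problem.
    \[
      \lim_{\varepsilon \to 0} V^{\varepsilon}(x) = \bar V(x),
      \qquad
      \lim_{\varepsilon \to 0} \bigl|\, J^{\varepsilon}\bigl(x, \pi^{\varepsilon}\bigr) - V^{\varepsilon}(x) \,\bigr| = 0 ,
    \]
    % where \pi^{\varepsilon} denotes the policy lifted from the limit problem.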