246 results for Optimal Linear Control
Abstract:
One of the main goals of pest control is to maintain the density of the pest population at an equilibrium level below that of economic damage. To reach this goal, the optimal pest control problem was divided into two parts. In the first part, two optimal control functions were considered; these functions move the pest-natural enemy ecosystem to an equilibrium state below the economic injury level. In the second part, one optimal control function stabilizes the ecosystem at this level, minimizing a functional that characterizes quadratic deviations from this level. The first problem was solved by applying Pontryagin's Maximum Principle; dynamic programming was used to solve the second optimal pest control problem.
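The abstract does not give the model equations; as a hypothetical sketch of the second part (dynamic-programming stabilization of quadratic deviations about the equilibrium), one can linearize a classical Lotka-Volterra pest-predator model at its coexistence equilibrium and iterate the backward Riccati recursion of dynamic programming. All model parameters below are illustrative assumptions, not the paper's.

```python
import numpy as np

# Hypothetical pest-predator model x' = x(1 - y), y' = y(x - 1) + u,
# with u the introduction/removal of natural enemies, linearized at
# the coexistence equilibrium (1, 1):
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])   # Jacobian of the drift at (1, 1)
B = np.array([[0.0],
              [1.0]])         # control acts on the predator equation

dt = 0.05                     # Euler discretization step (assumption)
Ad = np.eye(2) + dt * A
Bd = dt * B
Q = dt * np.eye(2)            # penalty on quadratic deviations
R = dt * np.eye(1)            # penalty on control effort

# Dynamic-programming (backward Riccati) recursion until it settles:
P = Q.copy()
for _ in range(5000):
    S = R + Bd.T @ P @ Bd
    K = np.linalg.solve(S, Bd.T @ P @ Ad)    # optimal feedback gain
    P = Q + Ad.T @ P @ (Ad - Bd @ K)

# Spectral radius of the closed loop; < 1 means the deviations decay.
rho = max(abs(np.linalg.eigvals(Ad - Bd @ K)))
```

The open-loop discretized system is marginally unstable (the Lotka-Volterra linearization is a center), so a closed-loop spectral radius below one is entirely due to the feedback gain produced by the recursion.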
A model for optimal chemical control of leaf area damaged by fungi population - Parameter dependence
Abstract:
We present a model to study a fungi population subjected to chemical control, incorporating the fungicide application directly into the model. From it, we obtain an optimal control strategy that minimizes both the fungicide application (cost) and the leaf area damaged by the fungi population during the interval between the moment the disease is detected (t = 0) and the time of harvest (t = t_f). Initially, the parameters of the model are considered constant. Later, we let the apparent infection rate depend on time (and temperature) and perform simulations to illustrate this case and compare it with the constant case.
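The paper's model equations are not reproduced in the abstract; as a purely hypothetical illustration of the trade-off it describes, one can simulate a logistic disease-growth model with a fungicide mortality term and compare the damaged leaf fraction and a quadratic cost with and without spraying. Every parameter and the model form itself are assumptions.

```python
def damaged_area(u, r=0.3, mu=0.5, x0=0.01, tf=30.0, dt=0.01):
    """Euler simulation of a hypothetical logistic disease model
    x' = r*x*(1 - x) - mu*u*x, where x is the damaged leaf fraction
    and u a constant fungicide application rate. Returns the final
    damaged fraction and the cost J = integral of (x + u^2) dt."""
    x, J = x0, 0.0
    for _ in range(int(tf / dt)):
        J += (x + u * u) * dt
        x += (r * x * (1.0 - x) - mu * u * x) * dt
    return x, J

x_free, J_free = damaged_area(0.0)   # no spraying: disease saturates
x_ctrl, J_ctrl = damaged_area(0.5)   # constant spraying: damage stays low
```

With these illustrative numbers the untreated crop ends up almost fully damaged, while a constant spraying rate keeps the damage small at a lower total cost; an optimal control strategy would tune u(t) over [0, t_f] rather than holding it constant.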
Abstract:
This note deals with the problem of extrema that may occur in the step response of a stable linear system with real zeros and poles. Some simple sufficient conditions and necessary conditions are presented for analyzing when zeros located between the dominant and the fastest pole do not cause extrema in the step response. These conditions require knowledge of the pole-zero configuration of the corresponding transfer function.
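The note's conditions themselves are not given in the abstract, but the phenomenon is easy to probe numerically: for a strictly proper system, extrema of the step response correspond to sign changes of the impulse response, which for real, distinct poles is a sum of exponentials weighted by residues. The pole and zero locations below are illustrative.

```python
import numpy as np

def impulse_response(poles, zero, t):
    """h(t) = sum_i r_i * exp(p_i * t), with r_i the residues of
    (s - zero) / prod_i (s - p_i) at each real, distinct pole p_i."""
    h = np.zeros_like(t)
    for p in poles:
        others = [q for q in poles if q != p]
        r = (p - zero) / np.prod([p - q for q in others])
        h += r * np.exp(p * t)
    return h

def has_step_extremum(poles, zero):
    """An extremum of the step response is a sign change of h(t)."""
    t = np.linspace(0.01, 20.0, 5000)
    h = impulse_response(poles, zero, t)
    return bool(np.any(np.sign(h[:-1]) * np.sign(h[1:]) < 0))
```

For poles at -1 and -10, a zero at -3 (between the dominant and the fastest pole) gives residues of equal sign, so the step response is monotone; moving the zero to -0.5, slower than the dominant pole, flips one residue's sign and an extremum appears.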
Abstract:
A Maximum Principle is derived for a class of optimal control problems arising in midcourse guidance, in which certain controls are represented by measures and the state trajectories are functions of bounded variation. The optimality conditions improve on previous ones by allowing nonsmooth data, measurable time dependence, and a possibly time-varying constraint set for the conventional controls.
Abstract:
In this article we introduce the concept of MP-pseudoinvexity for general nonlinear impulsive optimal control problems whose dynamics are specified by measure-driven control equations. This is a general paradigm in that both the absolutely continuous and the singular components of the dynamics depend on both the state and the control variables. The key result consists in showing that MP-pseudoinvexity is sufficient for optimality: if this property holds, then every process satisfying the maximum principle is an optimal one. This result is obtained in the context of a proper solution concept that is presented and discussed. © 2012 IEEE.
Abstract:
This paper studies the problem of applying an impulsive control to a spacecraft performing a Swing-By maneuver. The objective is to study the changes in velocity, energy, and angular momentum of this maneuver as a function of the three usual parameters of the standard Swing-By plus the three parameters that specify the applied impulse: its magnitude, the point of its application, and the angle between the impulse and the velocity of the spacecraft. The dynamics used is the restricted three-body problem under the Lemaitre regularization, adopted to increase the accuracy of the numerical integration near the primaries. The present research develops an algorithm to calculate the variation of energy and angular momentum in a maneuver where the impulsive control is applied before or after the passage of the spacecraft through periapsis, but within the sphere of influence of the secondary body and in a non-tangential direction. Using this approach, it is possible to find the best position and direction at which to apply the impulse so as to maximize the energy change of the total maneuver. The results showed that applying the impulse at periapsis and in the direction of motion of the spacecraft is usually not the optimal solution.
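The intuition the paper argues against is easiest to state in the two-body approximation: a tangential impulse dv applied at speed v changes the orbital energy per unit mass by dE = v*dv + dv^2/2, so the gain grows with the local speed and therefore favors periapsis (the Oberth effect). The numbers below (mu, semi-major axis, eccentricity) are illustrative; the paper's point is that in the restricted three-body problem this periapsis/tangential rule is usually not optimal.

```python
import math

def visviva_speed(mu, r, a):
    # Orbital speed from the vis-viva equation: v^2 = mu * (2/r - 1/a)
    return math.sqrt(mu * (2.0 / r - 1.0 / a))

def energy_gain_tangential(v, dv):
    # Energy change of a tangential impulse: (v + dv)^2/2 - v^2/2
    return v * dv + 0.5 * dv * dv

mu, a, e, dv = 1.0, 1.0, 0.5, 0.1            # illustrative two-body setup
v_peri = visviva_speed(mu, a * (1 - e), a)   # speed at periapsis r = a(1-e)
v_else = visviva_speed(mu, a, a)             # speed farther out, at r = a
gain_peri = energy_gain_tangential(v_peri, dv)
gain_else = energy_gain_tangential(v_else, dv)
```

In the two-body setting gain_peri always exceeds gain_else; quantifying when the third body's influence overturns this ranking requires the regularized three-body integration developed in the paper.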
Abstract:
This paper deals with a stochastic optimal control problem involving discrete-time jump Markov linear systems. The jumps, or changes between the system operation modes, evolve according to an underlying Markov chain. In the model studied, the problem horizon is defined by a stopping time τ which represents either the occurrence of a fixed number N of failures or repairs (τ_N), or the occurrence of a crucial failure event (τ_Δ), after which the system is brought to a halt for maintenance. In addition, an intermediary mixed case, in which τ represents the minimum of τ_N and τ_Δ, is also considered. These stopping times coincide with some of the jump times of the Markov state, and the information available allows the reconfiguration of the control action at each jump time, in the form of a linear feedback gain. The solution of the linear quadratic problem with complete Markov state observation is presented, given in terms of recursions of a set of algebraic Riccati equations (ARE) or of a coupled set of algebraic Riccati equations (CARE).
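The paper's system data are not given in the abstract; as a minimal sketch of a CARE-type computation for a two-mode jump Markov linear system (all matrices and transition probabilities below are illustrative), one can iterate the coupled Riccati map, where the modes are linked through the conditional expectation of the cost-to-go, and read off one feedback gain per mode.

```python
import numpy as np

# Two operation modes (illustrative) and the Markov transition matrix:
A = [np.array([[1.0, 0.2], [0.0, 0.9]]),
     np.array([[0.8, 0.0], [0.1, 1.05]])]
B = [np.array([[0.0], [1.0]]),
     np.array([[0.0], [1.0]])]
Pr = np.array([[0.9, 0.1],
               [0.3, 0.7]])        # Pr[i, j] = P(next mode is j | mode i)
Q = [np.eye(2), np.eye(2)]         # state weight in each mode
R = [np.eye(1), np.eye(1)]         # control weight in each mode

# Coupled Riccati recursion: mode i sees E_i(P) = sum_j Pr[i, j] * P[j].
P = [np.eye(2), np.eye(2)]
delta = np.inf
for _ in range(3000):
    E = [Pr[i, 0] * P[0] + Pr[i, 1] * P[1] for i in range(2)]
    P_new = []
    for i in range(2):
        S = R[i] + B[i].T @ E[i] @ B[i]
        K = np.linalg.solve(S, B[i].T @ E[i] @ A[i])
        P_new.append(Q[i] + A[i].T @ E[i] @ (A[i] - B[i] @ K))
    delta = max(np.max(np.abs(P_new[i] - P[i])) for i in range(2))
    P = P_new

# One linear feedback gain per mode, applied at each jump time:
E = [Pr[i, 0] * P[0] + Pr[i, 1] * P[1] for i in range(2)]
K = [np.linalg.solve(R[i] + B[i].T @ E[i] @ B[i],
                     B[i].T @ E[i] @ A[i]) for i in range(2)]
```

Under mean-square stabilizability the iteration converges to the unique positive definite solution set of the coupled equations; the paper's recursions additionally account for the stopping-time horizon, which this infinite-horizon sketch omits.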
Abstract:
In this work, we deal with a micro-electromechanical system (MEMS), represented by a micro-accelerometer. Through numerical simulations, it was found that for certain parameters the system exhibits chaotic behavior. The chaotic behavior of a fractional-order version is also studied numerically, through time histories and phase portraits, and the results are validated by the existence of a positive maximal Lyapunov exponent. Three control strategies are used for controlling the trajectory of the system: State-Dependent Riccati Equation (SDRE) control, optimal linear feedback control, and fuzzy sliding mode control. The controls proved effective in controlling the trajectory of the system studied, and robust in the presence of parametric errors.
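The MEMS model itself is not reproduced in the abstract; as a minimal illustration of the validation criterion (a positive maximal Lyapunov exponent indicates chaos), the exponent of the logistic map x -> 4x(1-x), whose exact value is ln 2, can be estimated by averaging ln|f'(x)| along an orbit, a one-dimensional analogue of the Benettin-type estimates used for flows.

```python
import math

def logistic_lyapunov(x0=0.2, n=100_000, r=4.0):
    """Estimate the maximal Lyapunov exponent of the logistic map
    x -> r*x*(1-x) by averaging log|f'(x)| = log|r*(1 - 2x)| along
    an orbit; for r = 4 the exact value is ln 2 > 0 (chaos)."""
    x, acc = x0, 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n

lam = logistic_lyapunov()
```

For the MEMS system the same idea is applied to the flow by tracking the growth rate of a small perturbation along a numerically integrated trajectory; a value of lam bounded away from zero over long horizons is what validates the phase-portrait evidence of chaos.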
Abstract:
The aim of this paper is to apply methods from optimal control theory and from the theory of dynamical systems to the mathematical modeling of biological pest control. The linear feedback control problem for nonlinear systems is formulated in order to obtain the optimal pest control strategy through the introduction of natural enemies alone. Asymptotic stability of the closed-loop nonlinear Kolmogorov system is guaranteed by means of a Lyapunov function, which can be seen to be the solution of the Hamilton-Jacobi-Bellman equation, thus guaranteeing both stability and optimality. Numerical simulations of three possible scenarios of biological pest control based on Lotka-Volterra models are provided to show the effectiveness of this method. (c) 2007 Elsevier B.V. All rights reserved.
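A minimal sketch of the idea, assuming the classical Lotka-Volterra predator-prey form (the paper's Kolmogorov systems are more general and its gains come from the HJB equation, not hand-tuning): the introduction of natural enemies enters as a control u on the predator equation, a linear feedback u = -k(y - y*) is applied, and V(x, y) = (x - ln x - 1) + (y - ln y - 1) decreases along trajectories, so the closed-loop orbit converges to the coexistence equilibrium (1, 1) instead of cycling around it.

```python
def simulate(k=1.0, x=1.3, y=0.8, dt=0.01, T=50.0):
    """Euler simulation of the illustrative closed loop
    x' = x*(1 - y),  y' = y*(x - 1) + u,  u = -k*(y - 1).
    The open loop is a center (periodic orbits); the linear
    feedback makes the equilibrium (1, 1) attracting."""
    for _ in range(int(T / dt)):
        u = -k * (y - 1.0)
        x += dt * x * (1.0 - y)
        y += dt * (y * (x - 1.0) + u)
    return x, y

xf, yf = simulate()
```

Differentiating V along the closed-loop flow gives dV/dt = -k(y-1)^2/y <= 0, so stability follows from LaSalle's invariance principle; the paper's contribution is choosing the gain so that this V also solves the HJB equation, making the feedback optimal as well as stabilizing.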