884 results for Optimal control problem
Abstract:
The problem of signal tracking, in the presence of a disturbance signal in the plant, is solved using a zero-variation methodology. A state feedback controller is designed in order to minimise the H-2-norm of the closed-loop system, such that the effect of the disturbance is attenuated. Then, a state estimator is designed and the modification of the zeros is used to minimise the H-infinity-norm from the reference input signal to the error signal. The error is taken to be the difference between the reference and the output signals, thereby making it a tracking problem. The design is formulated in a linear matrix inequality framework, such that the optimal solution of the stated control problem is obtained. Practical examples illustrate the effectiveness of the proposed method.
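As a rough illustration of the LMI framework the abstract refers to, the following Python sketch synthesizes an H-2-optimal state-feedback gain from the standard H-2 synthesis LMIs using cvxpy. The plant matrices are made up for the example, and this shows only the generic H-2 state-feedback step, not the zero-variation tracking design of the paper.

    # Generic H-2 state-feedback synthesis via LMIs (illustrative only; the
    # plant data are hypothetical and this is not the zero-variation design).
    # Requires an SDP-capable solver such as SCS, which ships with cvxpy.
    import numpy as np
    import cvxpy as cp

    # Hypothetical plant: dx/dt = A x + Bw w + Bu u,   z = Cz x + Dz u
    A  = np.array([[0.0, 1.0], [-2.0, -0.5]])
    Bw = np.array([[0.0], [1.0]])
    Bu = np.array([[0.0], [1.0]])
    Cz = np.array([[1.0, 0.0], [0.0, 0.0]])
    Dz = np.array([[0.0], [1.0]])

    n, m = Bu.shape
    p = Cz.shape[0]
    X = cp.Variable((n, n), symmetric=True)   # Lyapunov-type variable
    Z = cp.Variable((m, n))                   # Z = K X
    W = cp.Variable((p, p), symmetric=True)   # bounds the output covariance
    eps = 1e-6

    AXB = A @ X + Bu @ Z
    CXD = Cz @ X + Dz @ Z
    constraints = [
        X >> eps * np.eye(n),
        AXB + AXB.T + Bw @ Bw.T << -eps * np.eye(n),   # controllability-Gramian LMI
        cp.bmat([[W, CXD], [CXD.T, X]]) >> 0,          # Schur complement of W > (Cz+Dz K) X (.)^T
    ]
    prob = cp.Problem(cp.Minimize(cp.trace(W)), constraints)
    prob.solve()

    K = Z.value @ np.linalg.inv(X.value)      # state feedback u = K x
    print("H2-norm upper bound:", np.sqrt(prob.value), "\nK =", K)

Minimizing trace(W) gives an upper bound on the squared closed-loop H-2 norm, and the gain is recovered as K = Z X^-1.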
Abstract:
One of the main goals of pest control is to maintain the density of the pest population at an equilibrium level below that at which economic damage occurs. To reach this goal, the optimal pest control problem was divided into two parts. In the first part, two optimal control functions were considered; these functions drive the pest-natural enemy ecosystem to an equilibrium state below the economic injury level. In the second part, a single optimal control function stabilizes the ecosystem at this level, minimizing a functional that characterizes quadratic deviations from this level. The first problem was solved by applying Pontryagin's Maximum Principle; dynamic programming was used to solve the second optimal pest control problem.
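For the second part described above (stabilization about the target equilibrium while penalizing quadratic deviations), dynamic programming for a linear-quadratic approximation leads to an algebraic Riccati equation. The sketch below uses hypothetical linearized pest/natural-enemy dynamics, not the model of the paper, and only illustrates that step.

    # Illustrative sketch: linear-quadratic stabilization about an equilibrium,
    # i.e. the dynamic-programming step of the second sub-problem. The matrices
    # below are assumed values, not the pest/natural-enemy model of the paper.
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Linearized deviation dynamics d(delta_x)/dt = A delta_x + B u
    A = np.array([[0.1, -0.4],    # pest growth vs. predation (assumed)
                  [0.3, -0.2]])   # natural-enemy response (assumed)
    B = np.array([[-1.0],         # control acts on the pest equation
                  [0.0]])
    Q = np.diag([1.0, 1.0])       # penalize quadratic deviations from equilibrium
    R = np.array([[0.5]])         # penalize control effort

    P = solve_continuous_are(A, B, Q, R)   # stationary HJB / Riccati solution
    K = np.linalg.solve(R, B.T @ P)        # optimal feedback u = -K delta_x
    print("feedback gain K =", K)
    print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))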
Abstract:
The main goal of this paper is to extend the generalized variational problem of Herglotz type to the more general context of the Euclidean sphere S^n. Motivated by classical results on Euclidean spaces, we derive the generalized Euler-Lagrange equation for the corresponding variational problem defined on the Riemannian manifold S^n. Moreover, the problem is formulated from an optimal control point of view, and it is proved that the Euler-Lagrange equation can be obtained from the Hamiltonian equations. The geodesic problem on spheres is also highlighted as a particular case of the generalized Herglotz problem.
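For orientation, the classical (Euclidean) Herglotz problem that the paper generalizes to S^n can be stated as follows; the generalized Euler-Lagrange equation quoted here is the standard Euclidean one, not the sphere version derived in the paper.

    % Classical Herglotz variational problem (Euclidean case)
    \dot{z}(t) = L\bigl(t, x(t), \dot{x}(t), z(t)\bigr), \qquad z(a) = z_a,
    \qquad \text{extremize } z(b),
    % Generalized Euler-Lagrange equation of Herglotz type
    \frac{\partial L}{\partial x}
      - \frac{d}{dt}\frac{\partial L}{\partial \dot{x}}
      + \frac{\partial L}{\partial z}\,\frac{\partial L}{\partial \dot{x}} = 0 .

When L does not depend on z, this reduces to the classical Euler-Lagrange equation; the geodesic problem mentioned in the abstract arises as such a particular case.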
Abstract:
In recent years the analysis and synthesis of (mechanical) control systems in descriptor form has been established. This general description of dynamical systems is important for many applications in mechanics and mechatronics, in electrical and electronic engineering, and in chemical engineering as well. This contribution deals with linear mechanical descriptor systems and their control design with respect to a quadratic performance criterion. Here, the notion of properness plays an important role in determining whether the standard Riccati approach can be applied as usual. Properness and non-properness distinguish between the cases in which the descriptor system is governed exclusively by the control input and those in which it is additionally governed by higher-order time derivatives of the input. In the unusual case of non-proper systems, a quite different optimal control design problem has to be considered. Both cases are solved completely.
Abstract:
This note investigates the motion control of an autonomous underwater vehicle (AUV). The AUV is modeled as a nonholonomic system as any lateral motion of a conventional, slender AUV is quickly damped out. The problem is formulated as an optimal kinematic control problem on the Euclidean Group of Motions SE(3), where the cost function to be minimized is equal to the integral of a quadratic function of the velocity components. An application of the Maximum Principle to this optimal control problem yields the appropriate Hamiltonian and the corresponding vector fields give the necessary conditions for optimality. For a special case of the cost function, the necessary conditions for optimality can be characterized more easily and we proceed to investigate its solutions. Finally, it is shown that a particular set of optimal motions trace helical paths. Throughout this note we highlight a particular case where the quadratic cost function is weighted in such a way that it equates to the Lagrangian (kinetic energy) of the AUV. For this case, the regular extremal curves are constrained to equate to the AUV's components of momentum and the resulting vector fields are the d'Alembert-Lagrange equations in Hamiltonian form.
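To see the helical-path claim concretely, the short sketch below integrates the left-invariant kinematics on SE(3), g_{k+1} = g_k exp(h xi_hat), for a constant twist with assumed velocity components; constant angular and linear velocities produce a screw motion, i.e. a helix. This only illustrates the kinematic setting, not the optimal control synthesis of the note.

    # Sketch: integrate constant-twist kinematics on SE(3) and observe a helix.
    # The twist values are assumed for illustration only.
    import numpy as np
    from scipy.linalg import expm

    def hat(v):
        """3x3 skew-symmetric matrix of a 3-vector."""
        return np.array([[0.0, -v[2], v[1]],
                         [v[2], 0.0, -v[0]],
                         [-v[1], v[0], 0.0]])

    def twist(omega, vel):
        """4x4 se(3) element from angular velocity omega and linear velocity vel."""
        xi = np.zeros((4, 4))
        xi[:3, :3] = hat(omega)
        xi[:3, 3] = vel
        return xi

    omega = np.array([0.0, 0.0, 1.0])   # constant yaw rate (assumed)
    vel   = np.array([1.0, 0.0, 0.2])   # constant surge and heave (assumed)

    g = np.eye(4)                        # initial pose
    h = 0.05                             # integration step
    path = []
    for _ in range(400):
        g = g @ expm(h * twist(omega, vel))
        path.append(g[:3, 3].copy())

    # The x-y components trace a circle while z grows linearly: a helix.
    path = np.array(path)
    print(path[-1])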
Abstract:
This paper considers the motion planning problem for oriented vehicles travelling at unit speed in a 3-D space. A Lie group formulation arises naturally, and the vehicles are modeled as kinematic control systems with drift defined on the orthonormal frame bundles of particular Riemannian manifolds, specifically the 3-D space forms: Euclidean space E-3, the sphere S-3, and the hyperboloid H-3. The corresponding frame bundles are equal to the Euclidean group of motions SE(3), the rotation group SO(4), and the Lorentz group SO(1, 3). The maximum principle of optimal control shifts the emphasis for these systems to the associated Hamiltonian formalism. For an integrable case, the extremal curves are explicitly expressed in terms of elliptic functions. In this paper, a study of the singularities of the extremal curves is given; these singularities correspond to critical points of the elliptic functions. The extremal curves are characterized as the intersections of invariant surfaces and are illustrated graphically at the singular points. It is then shown that the projections of the extremals onto the base space, called elastica, are, at these singular points, curves of constant curvature and torsion, which in turn implies that the oriented vehicles trace helices.
Abstract:
A novel algorithm for solving nonlinear discrete time optimal control problems with model-reality differences is presented. The technique uses Dynamic Integrated System Optimisation and Parameter Estimation (DISOPE), which has been designed to achieve the correct optimal solution in spite of deficiencies in the mathematical model employed in the optimisation procedure. A method based on Broyden's ideas is used to approximate some of the required derivative trajectories. Ways of handling constraints on both manipulated and state variables are described. Further, a method for coping with batch-to-batch dynamic variations in the process, which are common in practice, is introduced. It is shown that the iterative procedure associated with the algorithm naturally suits applications to batch processes. The algorithm is successfully applied to a benchmark problem consisting of the input profile optimisation of a fed-batch fermentation process.
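The Broyden-type derivative approximation mentioned above can be illustrated by the standard rank-one secant update; the sketch below shows the generic update on a hypothetical map, not the trajectory-wise implementation used inside the algorithm.

    # Generic Broyden rank-one update, the kind of secant approximation used in
    # place of exact derivative trajectories (illustrative only).
    import numpy as np

    def broyden_update(B, dx, dy):
        """Rank-one secant update: the returned matrix maps dx to dy exactly."""
        return B + np.outer(dy - B @ dx, dx) / np.dot(dx, dx)

    # Example: track the Jacobian of a hypothetical map f along a sequence of points.
    f = lambda x: np.array([x[0] ** 2 + x[1], np.sin(x[0]) - x[1] ** 2])
    x_prev = np.array([1.0, 0.5])
    B = np.eye(2)                       # initial Jacobian estimate
    for _ in range(10):
        x_next = x_prev + np.array([0.1, -0.05])
        B = broyden_update(B, x_next - x_prev, f(x_next) - f(x_prev))
        x_prev = x_next
    print(B)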
Abstract:
An algorithm for solving nonlinear discrete time optimal control problems with model-reality differences is presented. The technique uses Dynamic Integrated System Optimization and Parameter Estimation (DISOPE), which achieves the correct optimal solution in spite of deficiencies in the mathematical model employed in the optimization procedure. A version of the algorithm with a linear-quadratic model-based problem, implemented in the C++ programming language, is developed and applied to illustrative simulation examples. An analysis of the optimality and convergence properties of the algorithm is also presented.
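At a high level, the iteration described in these abstracts alternates a model-based optimal control solve with a parameter-estimation step that absorbs the model-reality mismatch. The sketch below is a deliberately simplified static analogue of that loop (a hypothetical quadratic model with one correction parameter and a scalar "reality"), meant only to show the structure of iterating optimization and parameter updating until the model-based optimum agrees with a stationary point of the real problem; it is not the DISOPE algorithm itself.

    # Simplified static analogue of the model/reality iteration (not DISOPE):
    # optimize a quadratic model, then update a correction parameter so the
    # model gradient matches the "real" gradient at the current point.
    import numpy as np
    from scipy.optimize import minimize_scalar

    real_cost  = lambda u: (u - 2.0) ** 2 + 0.1 * np.sin(2.0 * u)   # "reality" (assumed)
    model_cost = lambda u, lam: (u - 2.0) ** 2 + lam * u            # model + correction

    u, lam = 0.0, 0.0
    for k in range(30):
        # 1) model-based optimization with the current correction parameter
        u_new = minimize_scalar(lambda v: model_cost(v, lam)).x
        # 2) parameter update: match model and reality gradients at u_new
        h = 1e-4
        real_grad  = (real_cost(u_new + h) - real_cost(u_new - h)) / (2 * h)
        model_grad = (model_cost(u_new + h, lam) - model_cost(u_new - h, lam)) / (2 * h)
        lam += real_grad - model_grad       # absorb the model-reality mismatch
        converged = abs(u_new - u) < 1e-8
        u = u_new
        if converged:
            break
    print("input at convergence:", u)

At convergence the model-based minimizer is also a stationary point of the real cost, despite the deliberate model deficiency, which is the property the DISOPE abstracts emphasize.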
Abstract:
An iterative procedure is described for solving nonlinear optimal control problems subject to differential algebraic equations. The procedure iterates on an integrated, modified, simplified model-based problem with parameter updating in such a manner that the correct solution of the original nonlinear problem is achieved.
Abstract:
In this paper, a discrete time dynamic integrated system optimisation and parameter estimation algorithm is applied to the solution of the nonlinear tracking optimal control problem. A version of the algorithm with a linear-quadratic model-based problem is developed and implemented in software. The algorithm implemented is tested with simulation examples.
Abstract:
A novel iterative procedure is described for solving nonlinear optimal control problems subject to differential algebraic equations. The procedure iterates on an integrated, modified linear-quadratic model-based problem with parameter updating in such a manner that the correct solution of the original nonlinear problem is achieved. The resulting algorithm has a particular advantage in that the solution is achieved without the need to solve the differential algebraic equations. Convergence aspects are discussed, and a simulation example is described which illustrates the performance of the technique.
Abstract:
In this Letter, an optimal control strategy that directs the chaotic motion of the Rossler system to any desired fixed point is proposed. The chaos control problem is formulated as an infinite-horizon nonlinear optimal control problem, which is reduced to the solution of the associated Hamilton-Jacobi-Bellman equation. Its solution is obtained among the corresponding Lyapunov functions of the dynamical system considered.
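The Rossler system referred to above is dx/dt = -y - z, dy/dt = x + a y, dz/dt = b + z(x - c). The Letter's construction goes through the Hamilton-Jacobi-Bellman equation and Lyapunov functions; the sketch below instead shows a simpler, standard alternative for orientation only: locate a fixed point numerically and stabilize it with a linearization-based LQR feedback, under the assumption (made here, not in the paper) that the control enters additively in all three state equations.

    # Orientation sketch only: steer the Rossler system to a fixed point with a
    # linearization-based LQR feedback. This is a standard textbook alternative,
    # NOT the HJB/Lyapunov-function construction of the Letter; full additive
    # actuation (B = I) is an assumption made here for simplicity.
    import numpy as np
    from scipy.optimize import fsolve
    from scipy.linalg import solve_continuous_are
    from scipy.integrate import solve_ivp

    a, b, c = 0.2, 0.2, 5.7                      # classical chaotic parameters

    def rossler(x):
        return np.array([-x[1] - x[2],
                         x[0] + a * x[1],
                         b + x[2] * (x[0] - c)])

    x_eq = fsolve(rossler, np.zeros(3))          # fixed point near the origin

    # Jacobian of the vector field at the fixed point
    J = np.array([[0.0, -1.0, -1.0],
                  [1.0, a, 0.0],
                  [x_eq[2], 0.0, x_eq[0] - c]])
    B = np.eye(3)                                # assumed: control on every state
    Q, R = np.eye(3), np.eye(3)
    P = solve_continuous_are(J, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)              # u = -K (x - x_eq)

    def closed_loop(t, x):
        return rossler(x) - B @ (K @ (x - x_eq))

    sol = solve_ivp(closed_loop, (0.0, 40.0), x_eq + 0.3, rtol=1e-8)
    print("final state:", sol.y[:, -1], "\ntarget:   ", x_eq)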
Abstract:
We consider free time optimal control problems with pointwise set control constraints u(t) ∈ U(t). Here we derive necessary conditions of optimality for those problems where the set U(t) is defined by equality and inequality control constraints. The main ingredients of our analysis are a well known time transformation and recent results on necessary conditions for mixed state-control constraints.
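The "well known time transformation" mentioned above can be sketched briefly: a free final time is removed by rescaling time to a fixed interval and treating the speed of the rescaling as an extra control. The notation below is a standard textbook version, not necessarily the exact formulation used in the paper.

    % Fixed-horizon reformulation of a free final-time problem
    s \in [0,1], \qquad \frac{dt}{ds} = v(s) \ge 0, \qquad
    \frac{dx}{ds} = v(s)\, f\bigl(t(s), x(s), u(s)\bigr).

The final time T = t(1) is then determined by the extra control v, so the free-time problem becomes a fixed-time problem on [0,1], to which necessary conditions for problems with mixed state-control constraints can be applied.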
Abstract:
In this article we introduce the concept of MP-pseudoinvexity for general nonlinear impulsive optimal control problems whose dynamics are specified by measure-driven control equations. This is a general paradigm in that both the absolutely continuous and the singular components of the dynamics depend on the state and the control variables. The key result consists in showing that MP-pseudoinvexity is sufficient for optimality: if this property holds, then every process satisfying the maximum principle is optimal. This result is obtained in the context of a proper solution concept that is presented and discussed.