953 results for Discrete-continuous optimal control problems
Abstract:
This note investigates the motion control of an autonomous underwater vehicle (AUV). The AUV is modeled as a nonholonomic system, since any lateral motion of a conventional, slender AUV is quickly damped out. The problem is formulated as an optimal kinematic control problem on the Euclidean group of motions SE(3), where the cost function to be minimized is the integral of a quadratic function of the velocity components. Applying the Maximum Principle to this optimal control problem yields the appropriate Hamiltonian, and the corresponding vector fields give the necessary conditions for optimality. For a special case of the cost function, the necessary conditions for optimality can be characterized more easily, and we proceed to investigate their solutions. Finally, it is shown that a particular set of optimal motions trace helical paths. Throughout this note we highlight the particular case where the quadratic cost function is weighted in such a way that it equals the Lagrangian (kinetic energy) of the AUV. For this case, the regular extremal curves are constrained to equal the AUV's components of momentum, and the resulting vector fields are the d'Alembert-Lagrange equations in Hamiltonian form.
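To fix ideas, the following schematic restates the kinematic problem and the regular extremals obtained from the Maximum Principle; the diagonal weights c_i, d_i and the symbols Omega, v, p, m are our notation for illustration, not taken from the note.

% kinematic model on SE(3): g(t) is the AUV pose, (Omega, v) the body-frame angular
% and translational velocities; quadratic velocity cost with diagonal weights
\dot g = g \begin{pmatrix} \hat{\Omega} & v \\ 0 & 0 \end{pmatrix}, \qquad
\min_{\Omega,\,v}\ \frac12 \int_0^T \Big( \sum_{i=1}^{3} c_i \Omega_i^2 + \sum_{i=1}^{3} d_i v_i^2 \Big)\,dt .

% Maximum Principle: with momenta (p, m) dual to (Omega, v), the regular extremals satisfy
p_i = c_i \Omega_i, \qquad m_i = d_i v_i, \qquad
H = \frac12 \Big( \sum_{i=1}^{3} \frac{p_i^2}{c_i} + \sum_{i=1}^{3} \frac{m_i^2}{d_i} \Big).

% When c_i, d_i are taken as the AUV's inertia and mass parameters, the cost is the kinetic
% energy and (p, m) are the components of momentum, which is the special case highlighted above.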
Abstract:
The relationship between minimum variance and minimum expected quadratic loss feedback controllers for linear univariate discrete-time stochastic systems is reviewed by taking the approach used by Caines. It is shown how the two methods can be regarded as providing identical control actions as long as a noise-free measurement state-space model is employed.
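A minimal first-order example (our own, not taken from the review) illustrates the claimed equivalence when the state is measured without noise:

y_{t+1} = a\,y_t + b\,u_t + e_{t+1}, \qquad e_{t+1}\ \text{white noise with variance } \sigma^2,

E\big[y_{t+1}^2 \mid y_t\big] = (a\,y_t + b\,u_t)^2 + \sigma^2
\;\;\Longrightarrow\;\; u_t = -\frac{a}{b}\,y_t, \qquad y_{t+1} = e_{t+1}.

The control minimizing the one-step expected quadratic loss is exactly the minimum variance law, so both criteria produce the same control action in this noise-free-measurement setting.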
Abstract:
In this paper, a discrete-time dynamic integrated system optimisation and parameter estimation algorithm is applied to the solution of the nonlinear tracking optimal control problem. A version of the algorithm with a linear-quadratic model-based problem is developed and implemented in software. The implemented algorithm is tested with simulation examples.
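The abstract does not give the algorithm's internals; the sketch below only illustrates the kind of linear-quadratic tracking subproblem such a model-based iteration solves at each pass, via a standard backward Riccati and feedforward recursion. The system matrices, weights, and reference trajectory are placeholders.

import numpy as np

def lq_tracking(A, B, Q, R, x0, x_ref, N):
    """Finite-horizon LQ tracking: minimize sum_k (x_k - x_ref[k])' Q (x_k - x_ref[k]) + u_k' R u_k
    subject to x_{k+1} = A x_k + B u_k, with x_ref holding N+1 reference states."""
    n = A.shape[0]
    P = Q.copy()                          # terminal cost weight (taken equal to Q here)
    s = -Q @ x_ref[N]                     # terminal linear term of the value function
    K, kff = [None] * N, [None] * N
    for k in range(N - 1, -1, -1):
        G = R + B.T @ P @ B
        K[k] = np.linalg.solve(G, B.T @ P @ A)      # feedback gain
        kff[k] = np.linalg.solve(G, B.T @ s)        # feedforward term
        Acl = A - B @ K[k]
        s = Acl.T @ s - Q @ x_ref[k]                # linear-term recursion
        P = Q + A.T @ P @ Acl                       # Riccati recursion
    # forward simulation with the computed policy u_k = -K_k x_k - kff_k
    x = np.zeros((N + 1, n)); x[0] = x0
    for k in range(N):
        u = -K[k] @ x[k] - kff[k]
        x[k + 1] = A @ x[k] + B @ u
    return x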
Abstract:
We derive energy-norm a posteriori error bounds, using gradient recovery (ZZ) estimators to control the spatial error, for fully discrete schemes for the linear heat equation. This appears to be the first completely rigorous derivation of ZZ estimators for fully discrete schemes for evolution problems, without any restrictive assumption on the timestep size. An essential tool for the analysis is the elliptic reconstruction technique. Our theoretical results are backed with extensive numerical experimentation aimed at (a) testing the practical sharpness and asymptotic behaviour of the error estimator against the error, and (b) deriving an adaptive method based on our estimators. An extra novelty provided is an implementation of a coarsening error "preindicator", with a complete implementation guide in ALBERTA in the appendix.
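For readers unfamiliar with gradient-recovery (ZZ) estimators, the sketch below computes the element-wise ZZ indicator for a piecewise-linear function on a 1D mesh by averaging element gradients at the nodes. It is a generic illustration of the estimator type only, not the fully discrete parabolic estimator or the ALBERTA implementation of the paper.

import numpy as np

def zz_indicator_1d(nodes, u):
    """Zienkiewicz-Zhu indicator for a P1 function u on a 1D mesh:
    eta_K = || G(u_h) - u_h' ||_{L2(K)}, where G(u_h) is the nodally averaged
    (recovered) gradient, interpolated linearly over each element K."""
    h = np.diff(nodes)                      # element lengths
    grad = np.diff(u) / h                   # constant gradient on each element
    # recovered gradient at the nodes: average of the adjacent element gradients
    g = np.empty(len(nodes))
    g[0], g[-1] = grad[0], grad[-1]
    g[1:-1] = 0.5 * (grad[:-1] + grad[1:])
    # on K = [x_i, x_{i+1}], e(x) = G(u_h)(x) - grad_K is linear with nodal values
    # a = g_i - grad_K and b = g_{i+1} - grad_K, so ||e||_{L2(K)}^2 = h/3 (a^2 + a b + b^2)
    a, b = g[:-1] - grad, g[1:] - grad
    return np.sqrt(h / 3.0 * (a**2 + a * b + b**2))

# example: indicator for u(x) = x^2 interpolated on a graded mesh
x = np.linspace(0.0, 1.0, 11) ** 1.5
print(zz_indicator_1d(x, x**2))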
Abstract:
Global optimization seeks a minimum or maximum of a multimodal function over a discrete or continuous domain. In this paper, we propose a hybrid heuristic, based on the CGRASP and GENCAN methods, for finding approximate solutions for continuous global optimization problems subject to box constraints. Experimental results illustrate the relative effectiveness of CGRASP-GENCAN on a set of benchmark multimodal test functions.
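The sketch below conveys the flavor of such a hybrid: a randomized multistart construction combined with a box-constrained local solver. GENCAN itself is not called here; scipy's L-BFGS-B is used as a stand-in local minimizer, and the Rastrigin test function is merely illustrative.

import numpy as np
from scipy.optimize import minimize

def hybrid_multistart(f, lower, upper, n_starts=50, seed=0):
    """Toy hybrid global search on a box: random construction of starting points
    followed by a box-constrained local minimization (L-BFGS-B standing in for GENCAN)."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    best_x, best_f = None, np.inf
    for _ in range(n_starts):
        x0 = lower + rng.random(lower.size) * (upper - lower)   # randomized construction phase
        res = minimize(f, x0, method="L-BFGS-B", bounds=list(zip(lower, upper)))
        if res.fun < best_f:                                    # keep the best local minimum found
            best_x, best_f = res.x, res.fun
    return best_x, best_f

# multimodal test function (Rastrigin) on the box [-5.12, 5.12]^2
rastrigin = lambda x: 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
print(hybrid_multistart(rastrigin, [-5.12, -5.12], [5.12, 5.12]))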
Abstract:
This paper deals with the use of the linear matrix inequality (LMI) approach in active vibration control problems in smart structures. A robust controller for active damping in a panel was designed, with piezoelectric actuators in optimal locations, to illustrate the main proposal. The closed-loop simulations used a model identified by the eigensystem realization algorithm (ERA) and reduced by modal decomposition. We tested two different techniques to solve the problem. The first one uses an LMI approach with state feedback based on an observer design, considering several simultaneous constraints: a decay rate, limited input on the actuators, a bounded output peak (output energy), and robustness to parametric uncertainties. The results demonstrated vibration attenuation in the structure by controlling only the first modes, and increased damping in the bandwidth of interest. However, spillover effects may occur, because the design did not consider the dynamic uncertainties related to the high-frequency modes. In this sense, the second technique uses classical H∞ output feedback control, also solved by the LMI approach, considering robustness to the residual dynamics in order to overcome the problem found in the first test. The results are compared and discussed. The responses show the robust performance of the system and a good reduction of the vibration level, without added mass.
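As a pointer to how one of the listed constraints translates into an LMI, the sketch below synthesizes a state-feedback gain with a prescribed decay rate alpha through the standard change of variables K = Y P^{-1}. It covers only that single constraint, not the input/output bounds, the uncertainty description, or the H∞ design of the paper, and the plant matrices are placeholders.

import numpy as np
import cvxpy as cp

# placeholder second-order plant: x' = A x + B u
A = np.array([[0.0, 1.0], [-4.0, -0.1]])
B = np.array([[0.0], [1.0]])
alpha = 0.5                      # prescribed decay rate
n, m = A.shape[0], B.shape[1]

# LMI in (P, Y): A P + P A' + B Y + Y' B' + 2*alpha*P < 0, P > 0, with K = Y P^{-1}
P = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))
eps = 1e-6
constraints = [
    P >> eps * np.eye(n),
    A @ P + P @ A.T + B @ Y + Y.T @ B.T + 2 * alpha * P << -eps * np.eye(n),
]
cp.Problem(cp.Minimize(0), constraints).solve(solver=cp.SCS)
K = Y.value @ np.linalg.inv(P.value)
print("state-feedback gain K =", K)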
Abstract:
This paper presents a linear optimal control technique for reducing the chaotic motion of a micro-electro-mechanical (MEMS) Comb Drive system to a small periodic orbit. We analyze the nonlinear dynamics of the MEMS Comb Drive and demonstrate that the model exhibits chaotic behavior. Chaos control problems consist of attempts to stabilize a chaotic system to an equilibrium point, a periodic orbit, or, more generally, about a given reference trajectory. The technique is applied to the nonlinear dynamics of the MEMS Comb Drive, and the simulation results show that the linear optimal control is very effective.
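In the spirit of the technique described (a Riccati-based linear feedback on the deviation from the desired orbit, plus a feedforward term that makes the orbit a solution of the controlled system), the sketch below steers a generic Duffing-type oscillator to a small periodic orbit. The model, parameters, and reference orbit are illustrative stand-ins, not the Comb Drive model of the paper.

import numpy as np
from scipy.linalg import solve_continuous_are
from scipy.integrate import solve_ivp

# illustrative Duffing-type oscillator: x'' + c x' + k1 x + k3 x^3 = F cos(w t) + u
c, k1, k3, F, w = 0.1, -1.0, 1.0, 0.3, 1.0

# gain from the algebraic Riccati equation, designed on the linearization at the origin
A = np.array([[0.0, 1.0], [-k1, -c]])
B = np.array([[0.0], [1.0]])
Q, R = np.diag([10.0, 1.0]), np.array([[1.0]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # feedback on the deviation from the orbit

def ref(t):
    # desired small periodic orbit (position, velocity) and its acceleration (illustrative)
    x_r = 0.1 * np.cos(w * t)
    v_r = -0.1 * w * np.sin(w * t)
    a_r = -0.1 * w**2 * np.cos(w * t)
    return np.array([x_r, v_r]), a_r

def rhs(t, x):
    r, a_r = ref(t)
    # feedforward makes the orbit a solution; feedback damps deviations from it
    u_ff = a_r + c * r[1] + k1 * r[0] + k3 * r[0] ** 3 - F * np.cos(w * t)
    u = u_ff - float(K @ (x - r))
    return [x[1], -c * x[1] - k1 * x[0] - k3 * x[0] ** 3 + F * np.cos(w * t) + u]

sol = solve_ivp(rhs, (0.0, 100.0), [1.0, 0.0], max_step=0.05)
print("final deviation from the orbit:", sol.y[:, -1] - ref(sol.t[-1])[0])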
Abstract:
One of the main goals of pest control is to maintain the pest population density at an equilibrium level below that of economic damage. To reach this goal, the optimal pest control problem was divided into two parts. In the first part, two optimal control functions were considered; these functions drive the pest-natural enemy ecosystem to an equilibrium state below the economic injury level. In the second part, a single optimal control function stabilizes the ecosystem at this level, minimizing a functional that penalizes quadratic deviations from it. The first problem was solved by applying Pontryagin's Maximum Principle; dynamic programming was used to solve the second optimal pest control problem.
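Schematically, and in our own notation rather than the paper's, the two subproblems can be written for a generic prey-predator pest/natural-enemy pair (x, y) as follows.

% Part 1 (Pontryagin's Maximum Principle): drive the pair to a target equilibrium (x*, y*)
% below the economic injury level using two controls (u_1, u_2); illustrative dynamics.
\dot x = x\,(a - b\,y) - u_1\,x, \qquad \dot y = y\,(-c + d\,x) + u_2 .

% Part 2 (dynamic programming): keep the system at (x*, y*) with a single control u,
% minimizing quadratic deviations from that level.
\min_u \int_0^{\infty} \Big( q_1 (x - x^*)^2 + q_2 (y - y^*)^2 + r\,u^2 \Big)\, dt .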
Abstract:
This paper deals with the energy pumping that occurs in a nonlinear MEMS gyroscope dynamical system, modeled as a proof mass constrained to move in a plane with two resonant modes that are nominally orthogonal. The two modes are ideally coupled only by the rotation of the gyro about the plane's normal vector. We also develop a linear optimal control design for reducing the oscillatory motion of the nonlinear system to a stable point.
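A schematic two-mode gyroscope model (our own illustration, not the paper's equations) makes the rotation-only coupling explicit: the drive mode x and sense mode y interact solely through the Coriolis terms proportional to the angular rate Omega, and decouple when Omega = 0.

\ddot x + 2\zeta_x \omega_x \dot x + \omega_x^2\, x - 2\Omega\,\dot y = f(t), \qquad
\ddot y + 2\zeta_y \omega_y \dot y + \omega_y^2\, y + 2\Omega\,\dot x = 0 .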
A model for optimal chemical control of leaf area damaged by fungi population - Parameter dependence
Abstract:
We present a model to study a fungi population subjected to chemical control, incorporating the fungicide application directly into the model. From it, we obtain an optimal control strategy that minimizes both the fungicide application (cost) and the leaf area damaged by the fungi population during the interval between the moment when the disease is detected (t = 0) and the time of harvest (t = t_f). Initially, the parameters of the model are considered constant. Later, we let the apparent infection rate depend on time (and temperature) and run simulations to illustrate this case and compare it with the constant case.
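One plausible schematic form of such a problem (our notation and choice of logistic-type progression, not necessarily the paper's model) couples the damaged leaf-area fraction x(t) to the fungicide application rate u(t) through the apparent infection rate r(t):

\dot x = r(t)\,(1 - x)\,x - \mu\,u\,x, \qquad
\min_u \int_0^{t_f} \big( x(t) + \kappa\,u(t)^2 \big)\, dt ,

where the functional balances accumulated damage against the cost of application over the detection-to-harvest interval [0, t_f].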
Abstract:
Mathematical programming problems with equilibrium constraints (MPEC) are nonlinear programming problems where the constraints have a form that is analogous to first-order optimality conditions of constrained optimization. We prove that, under reasonable sufficient conditions, stationary points of the sum of squares of the constraints are feasible points of the MPEC. In usual formulations of MPEC all the feasible points are nonregular in the sense that they do not satisfy the Mangasarian-Fromovitz constraint qualification of nonlinear programming. Therefore, all the feasible points satisfy the classical Fritz-John necessary optimality conditions. In principle, this can cause serious difficulties for nonlinear programming algorithms applied to MPEC. However, we show that most feasible points do not satisfy a recently introduced stronger optimality condition for nonlinear programming. This is the reason why, in general, nonlinear programming algorithms are successful when applied to MPEC.
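The core observation can be stated compactly. Writing the MPEC constraints as C(x) = 0 (with the complementarity part reformulated as equations) and using our notation for the squared infeasibility,

\Phi(x) = \tfrac12\,\|C(x)\|^2, \qquad \nabla\Phi(x) = J_C(x)^{\mathsf T} C(x),

a stationary point of \Phi satisfies J_C(x)^{\mathsf T} C(x) = 0, and the paper's sufficient conditions rule out the case C(x) \neq 0, so the stationary point is feasible for the MPEC.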
Abstract:
This paper deals with a stochastic optimal control problem involving discrete-time jump Markov linear systems. The jumps, or changes between the system operation modes, evolve according to an underlying Markov chain. In the model studied, the problem horizon is defined by a stopping time τ which represents either the occurrence of a fixed number N of failures or repairs (τ_N), or the occurrence of a crucial failure event (τ_Δ), after which the system is brought to a halt for maintenance. In addition, an intermediate mixed case, for which τ represents the minimum between τ_N and τ_Δ, is also considered. These stopping times coincide with some of the jump times of the Markov state, and the information available allows the reconfiguration of the control action at each jump time in the form of a linear feedback gain. The solution of the linear quadratic problem with complete Markov state observation is presented. The solution is given in terms of recursions of a set of algebraic Riccati equations (AREs) or a coupled set of algebraic Riccati equations (CAREs).
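For reference, the sketch below iterates the standard coupled algebraic Riccati equations for a discrete-time Markov jump linear system with complete state observation and one linear feedback gain per mode. It illustrates the CARE structure only, not the stopping-time horizon (τ_N, τ_Δ) treated in the paper, and all data are placeholders.

import numpy as np

def coupled_care(A, B, Q, R, Pi, iters=500):
    """Fixed-point iteration for the coupled AREs of a discrete-time Markov jump linear
    system x_{k+1} = A_i x_k + B_i u_k, with mode i switching by transition matrix Pi.
    Returns the per-mode cost matrices P_i and feedback gains K_i (u_k = -K_i x_k)."""
    N = len(A)                                   # number of operation modes
    n = A[0].shape[0]
    P = [np.eye(n) for _ in range(N)]
    for _ in range(iters):
        # coupling operator: E_i(P) = sum_j Pi[i, j] * P_j
        E = [sum(Pi[i, j] * P[j] for j in range(N)) for i in range(N)]
        P = [
            Q[i] + A[i].T @ E[i] @ A[i]
            - A[i].T @ E[i] @ B[i]
              @ np.linalg.solve(R[i] + B[i].T @ E[i] @ B[i], B[i].T @ E[i] @ A[i])
            for i in range(N)
        ]
    E = [sum(Pi[i, j] * P[j] for j in range(N)) for i in range(N)]
    K = [np.linalg.solve(R[i] + B[i].T @ E[i] @ B[i], B[i].T @ E[i] @ A[i]) for i in range(N)]
    return P, K

# two-mode placeholder example
A = [np.array([[1.0, 0.1], [0.0, 1.0]]), np.array([[1.0, 0.1], [0.0, 0.5]])]
B = [np.array([[0.0], [0.1]])] * 2
Q = [np.eye(2)] * 2
R = [np.array([[1.0]])] * 2
Pi = np.array([[0.9, 0.1], [0.3, 0.7]])
P, K = coupled_care(A, B, Q, R, Pi)
print(K[0], K[1])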