67 results for "Reverse self-control problem"

in the Repositório Institucional UNESP - Universidade Estadual Paulista "Julio de Mesquita Filho"


Relevance: 100.00%

Abstract:

One of the main goals of pest control is to maintain the pest population density at an equilibrium level below that which causes economic damage. To reach this goal, the optimal pest control problem was divided into two parts. In the first part, two optimal control functions were considered; these functions move the pest-natural enemy ecosystem to an equilibrium state below the economic injury level. In the second part, a single optimal control function stabilizes the ecosystem at this level, minimizing a functional that characterizes quadratic deviations from this level. The first problem was solved by applying Pontryagin's Maximum Principle; dynamic programming was used to solve the second optimal pest control problem.
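The first part of the construction can be illustrated with a small sketch: under an assumed Lotka-Volterra pest/natural-enemy model (hypothetical equations and parameter values, not taken from the paper), constant controls are computed algebraically so that the equilibrium pest density sits at a prescribed level below the economic injury level.

```python
# Assumed model (an illustration, not the paper's exact equations):
#   x' = x*(a - b*y) - u1*x      (pest density x, removal effort u1)
#   y' = y*(-c + d*x) + u2       (natural enemy y, release rate u2)

def equilibrium_controls(a, b, c, d, x_target, u1=0.0):
    """Return (y_star, u2) so that (x_target, y_star) is an equilibrium."""
    y_star = (a - u1) / b             # from x' = 0 with x > 0
    u2 = y_star * (c - d * x_target)  # from y' = 0
    return y_star, u2

a, b, c, d = 1.0, 0.5, 0.8, 0.1       # made-up parameters
x_injury = 10.0                       # hypothetical economic injury level
x_target = 0.5 * x_injury             # hold the pests well below it
u1 = 0.2
y_star, u2 = equilibrium_controls(a, b, c, d, x_target, u1)

# Check: both derivatives vanish at the designed equilibrium.
dx = x_target * (a - b * y_star) - u1 * x_target
dy = y_star * (-c + d * x_target) + u2
```

Stabilizing the system at this equilibrium (the second part of the problem) would then be handled separately, as the abstract describes.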

Relevance: 100.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 100.00%

Abstract:

This paper deals with a stochastic optimal control problem involving discrete-time Markov jump linear systems. The jumps, or changes between the system's operation modes, evolve according to an underlying Markov chain. In the model studied, the problem horizon is defined by a stopping time τ which represents either the occurrence of a fixed number N of failures or repairs (τN), or the occurrence of a crucial failure event (τΔ), after which the system is brought to a halt for maintenance. In addition, an intermediary mixed case, in which τ represents the minimum of τN and τΔ, is also considered. These stopping times coincide with some of the jump times of the Markov state, and the available information allows reconfiguration of the control action at each jump time in the form of a linear feedback gain. The solution of the linear quadratic problem with complete Markov state observation is presented, in terms of recursions of a set of algebraic Riccati equations (ARE) or of coupled algebraic Riccati equations (CARE).
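The coupled recursions can be sketched as a fixed-point iteration on the CARE; the two-mode scalar system below is invented for illustration and is not the paper's model.

```python
import numpy as np

# Made-up two-mode Markov jump linear system x_{k+1} = A_i x_k + B_i u_k,
# with mode i switching according to transition matrix P.
A = [np.array([[0.5]]), np.array([[0.6]])]
B = [np.array([[1.0]]), np.array([[1.0]])]
Q = [np.eye(1), np.eye(1)]
R = [np.eye(1), np.eye(1)]
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

def care_step(X):
    """One sweep of the coupled Riccati recursion over both modes."""
    # E_i = sum_j P[i, j] X_j: expected next-step cost matrix given mode i.
    E = [sum(P[i, j] * X[j] for j in range(2)) for i in range(2)]
    return [
        Q[i] + A[i].T @ E[i] @ A[i]
        - A[i].T @ E[i] @ B[i]
          @ np.linalg.inv(R[i] + B[i].T @ E[i] @ B[i])
          @ B[i].T @ E[i] @ A[i]
        for i in range(2)
    ]

X = [np.zeros((1, 1)), np.zeros((1, 1))]
for _ in range(500):
    X = care_step(X)
# Residual of the coupled fixed-point equations at the computed solution.
residual = max(float(np.abs(care_step(X)[i] - X[i]).max()) for i in range(2))
```

For this stable pair of modes the iteration converges geometrically from zero; the converged X_i give the mode-dependent quadratic value functions.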

Relevance: 100.00%

Abstract:

The problem of signal tracking in the presence of a disturbance signal in the plant is solved using a zero-variation methodology. A state feedback controller is designed to minimise the H2-norm of the closed-loop system, so that the effect of the disturbance is attenuated. Then a state estimator is designed, and the modification of the zeros is used to minimise the H∞-norm from the reference input signal to the error signal. The error is taken to be the difference between the reference and output signals, making this a tracking problem. The design is formulated in a linear matrix inequality framework, so that the optimal solution of the stated control problem is obtained. Practical examples illustrate the effectiveness of the proposed method.
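The paper works in an LMI framework; as a simpler related sketch (not the paper's method), in the nominal state-feedback case the H2-optimal gain coincides with the LQR gain obtained from an algebraic Riccati equation. The plant matrices below are made up for illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical plant x' = A x + B u; Q and R weight the regulated output
# (assumed z = x here) and the control effort.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.eye(1)

Xare = solve_continuous_are(A, B, Q, R)   # stabilizing ARE solution
K = np.linalg.inv(R) @ B.T @ Xare         # state feedback u = -K x
eigs = np.linalg.eigvals(A - B @ K)       # closed-loop poles
```

The closed-loop poles all lie in the open left half-plane, as LQR theory guarantees; the H∞ zero-modification step of the paper has no counterpart in this sketch.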

Relevance: 100.00%

Abstract:

Hazard Analysis and Critical Control Point (HACCP) is a preventive system intended to guarantee the safety and harmlessness of food. It improves product quality by eliminating possible defects during processing, and saves costs by practically eliminating final-product inspection. This work describes the typical hazards encountered on the processing line for mushrooms destined for fresh consumption. Throughout the process, only the mushroom reception stage was considered a critical control point (CCP). The main hazards at this stage were: the presence of unauthorised phytosanitary products; doses of such products larger than those permitted; and the presence of pathogenic bacteria or thermostable enterotoxins. Putting this knowledge into practice would provide any industry that processes mushrooms for fresh consumption with a self-control HACCP-based system for its own production.

Relevance: 100.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 100.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 100.00%

Abstract:

This work concerns the application of optimal control theory to Dengue epidemics. The dynamics of this insect-borne disease are modelled as a set of non-linear ordinary differential equations, including the effect of educational campaigns organized to motivate the population to break the reproduction cycle of the mosquitoes by avoiding the accumulation of still water in open-air containers. The cost functional reflects a compromise between actual financial spending (on insecticides and educational campaigns) and population health (which can be measured objectively in terms of, for instance, treatment costs and loss of productivity). The optimal control problem is solved numerically using a multiple shooting method. However, the optimal control policy is difficult for the health authorities to implement, because it is not practical to adjust the investment rate continuously in time. Therefore, a suboptimal control policy is computed taking as the admissible set only those controls which are piecewise constant. The performances achieved by the optimal and suboptimal control policies are compared with the case of control using insecticides only when the Breteau Index is greater than or equal to 5, and with the no-control case. The results show that the suboptimal policy yields a substantial reduction in cost, in terms of the proposed functional, and is only slightly inferior to the optimal control policy. Copyright (C) 2001 John Wiley & Sons, Ltd.
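A minimal sketch of the piecewise-constant admissible class can be given with a hypothetical single-state mosquito model and made-up weights (this is not the paper's dengue model or its cost functional): the control is held constant on each of a few equal subintervals, and the resulting cost is compared with the no-control case.

```python
def simulate_cost(u_levels, r=0.1, m0=1.0, T=10.0, dt=0.01, w_m=1.0, w_u=1.0):
    """Euler-simulate m' = r*m - u*m and accumulate the running cost
    w_m*m + w_u*u, with u held constant on len(u_levels) equal subintervals
    of [0, T] -- the 'piecewise constant' admissible class."""
    n = int(T / dt)
    m, cost = m0, 0.0
    for k in range(n):
        t = k * dt
        u = u_levels[min(int(t / T * len(u_levels)), len(u_levels) - 1)]
        cost += (w_m * m + w_u * u) * dt
        m += (r * m - u * m) * dt      # natural growth minus control kill rate
    return cost

cost_none = simulate_cost([0.0])                       # no control
cost_pw = simulate_cost([0.3, 0.3, 0.2, 0.1, 0.1])     # hypothetical schedule
```

With these made-up parameters the piecewise-constant schedule cuts the cost substantially relative to doing nothing, mirroring the qualitative conclusion of the paper.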

Relevance: 100.00%

Abstract:

The aim of this paper is to apply methods from optimal control theory and from the theory of dynamical systems to the mathematical modeling of biological pest control. The linear feedback control problem for nonlinear systems is formulated so as to obtain the optimal pest control strategy through the introduction of natural enemies alone. Asymptotic stability of the closed-loop nonlinear Kolmogorov system is guaranteed by means of a Lyapunov function, which can be seen to be the solution of the Hamilton-Jacobi-Bellman equation, thus guaranteeing both stability and optimality. Numerical simulations for three possible scenarios of biological pest control based on Lotka-Volterra models are provided to show the effectiveness of this method. (c) 2007 Elsevier B.V. All rights reserved.
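The paper derives its optimal linear feedback from an HJB/Lyapunov theorem for the full nonlinear system; as a simpler related sketch (linearization-based LQR with made-up parameters, an approximation rather than the paper's construction), a stabilizing enemy-introduction gain for a Lotka-Volterra pest/enemy model can be computed at the coexistence equilibrium.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

a, b, c, d = 1.0, 0.5, 0.8, 0.1    # hypothetical Lotka-Volterra parameters
x_eq, y_eq = c / d, a / b           # coexistence equilibrium

# Jacobian of (x', y') = (x*(a - b*y), y*(-c + d*x) + u) at the equilibrium;
# the control u is the natural-enemy introduction rate.
J = np.array([[0.0, -b * x_eq],
              [d * y_eq, 0.0]])
Bmat = np.array([[0.0],
                 [1.0]])
P = solve_continuous_are(J, Bmat, np.eye(2), np.eye(1))
K = Bmat.T @ P                      # feedback u = -K (z - z_eq)
cl_eigs = np.linalg.eigvals(J - Bmat @ K)
```

The uncontrolled linearization is a neutrally stable center; the LQR gain moves both closed-loop eigenvalues into the open left half-plane, so small deviations from the equilibrium decay.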

Relevance: 100.00%

Abstract:

This paper presents the control and synchronization of chaos by designing linear feedback controllers. The linear feedback control problem for nonlinear systems is formulated from the viewpoint of optimal control theory. Asymptotic stability of the closed-loop nonlinear system is guaranteed by means of a Lyapunov function, which can be seen to be the solution of the Hamilton-Jacobi-Bellman equation, thus guaranteeing both stability and optimality. The formulated theorem expresses explicitly the form of the minimized functional and gives sufficient conditions that allow the use of linear feedback control for nonlinear systems. Numerical simulations are provided to show the effectiveness of this method for control of the chaotic Rössler system and synchronization of the hyperchaotic Rössler system. (C) 2007 Elsevier B.V. All rights reserved.
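As a minimal related sketch (the classic 3-D Rössler system and a made-up coupling gain, rather than the paper's hyperchaotic 4-D system and its optimal gain), master-slave synchronization under full-state linear feedback can be demonstrated numerically.

```python
import numpy as np

def rossler(s, a=0.2, b=0.2, c=5.7):
    """Classic 3-D Rossler vector field."""
    x, y, z = s
    return np.array([-y - z, x + a * y, b + z * (x - c)])

# Master-slave coupling u = k*(master - slave) on all three states; the gain
# k = 5.0 is an assumed value chosen well above the system's largest Lyapunov
# exponent, so the synchronization error contracts on average.
k, dt, steps = 5.0, 0.01, 4000
m = np.array([1.0, 1.0, 1.0])      # master state
s = m + 0.5                        # slave starts offset from the master
err0 = float(np.linalg.norm(m - s))
for _ in range(steps):
    dm = rossler(m)
    ds = rossler(s) + k * (m - s)  # slave with linear feedback coupling
    m, s = m + dt * dm, s + dt * ds
err_final = float(np.linalg.norm(m - s))
```

After 40 time units the slave has locked onto the master's chaotic trajectory; the error shrinks by many orders of magnitude.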

Relevance: 100.00%

Abstract:

In this Letter, an optimal control strategy that directs the chaotic motion of the Rössler system to any desired fixed point is proposed. The chaos control problem is formulated as an infinite-horizon nonlinear optimal control problem, which is reduced to the solution of the associated Hamilton-Jacobi-Bellman equation. Its solution is obtained among the corresponding Lyapunov functions of the dynamical system considered. (C) 2004 Elsevier B.V. All rights reserved.
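As a preliminary sketch of the target set, the fixed points of the Rössler system can be computed in closed form (classic parameter values are assumed here, not necessarily those of the Letter).

```python
import math

a, b, c = 0.2, 0.2, 5.7            # classic Rossler parameters (assumed)

def f(p):
    """Rossler vector field x' = -y - z, y' = x + a*y, z' = b + z*(x - c)."""
    x, y, z = p
    return (-y - z, x + a * y, b + z * (x - c))

# Setting the field to zero gives y = -z, x = a*z, and a*z**2 - c*z + b = 0,
# so there are two fixed points, one for each root of the quadratic.
disc = math.sqrt(c * c - 4 * a * b)
fixed_points = [(a * z, -z, z) for z in ((c - disc) / (2 * a),
                                         (c + disc) / (2 * a))]
```

Either of these points could serve as the desired target of the control strategy; the vector field vanishes at both to floating-point precision.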

Relevance: 100.00%

Abstract:

A Maximum Principle is derived for a class of optimal control problems arising in midcourse guidance, in which certain controls are represented by measures and the state trajectories are functions of bounded variation. The optimality conditions improve on previous ones by allowing nonsmooth data, measurable time dependence, and a possibly time-varying constraint set for the conventional controls.

Relevance: 100.00%

Abstract:

A vector-valued impulsive control problem is considered whose dynamics, defined by a differential inclusion, are such that the vector fields associated with the singular term do not satisfy the so-called Frobenius condition. A concept of robust solution based on a new reparametrization procedure is adopted in order to derive necessary conditions of optimality. These conditions are obtained by taking a limit of those for an appropriate sequence of auxiliary standard optimal control problems approximating the original one. An example to illustrate the nature of the new optimality conditions is provided. © 2000 Elsevier Science B.V. All rights reserved.

Relevance: 100.00%

Abstract:

In this work, linear and nonlinear feedback control techniques for chaotic systems are considered. The optimal nonlinear control design problem is solved using dynamic programming, which reduces it to the solution of the Hamilton-Jacobi-Bellman equation. The linear feedback control problem is then reformulated from the viewpoint of optimal control theory. The formulated theorem expresses explicitly the form of the minimized functional and gives sufficient conditions that allow the use of linear feedback control for nonlinear systems. Numerical simulations for the Rössler system and the Duffing oscillator are provided to show the effectiveness of this method. Copyright © 2005 by ASME.
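A minimal numerical sketch in the spirit of the Duffing simulation (with made-up feedback gains, not those produced by the paper's sufficient conditions): linear state feedback u = -k1*x - k2*x' applied to an unforced double-well Duffing oscillator steers it to the origin.

```python
# Assumed plant: x'' = x - x**3 - delta*x' + u (double-well Duffing).
# With u = -k1*x - k2*v, the closed loop is x'' = (1-k1)*x - x**3 - (delta+k2)*v,
# which for k1 > 1 has a single globally attracting equilibrium at the origin.
delta, k1, k2 = 0.3, 2.0, 1.0      # hypothetical damping and gains
dt, steps = 0.001, 30000
x, v = 2.0, 0.0                    # start in one of the wells, far from 0
for _ in range(steps):
    u = -k1 * x - k2 * v
    acc = x - x**3 - delta * v + u # controlled acceleration
    x, v = x + dt * v, v + dt * acc
```

After 30 time units the trajectory has collapsed onto the origin; without the feedback it would instead settle into one of the two wells at x = ±1.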

Relevance: 100.00%

Abstract:

We consider an infinite horizon optimal impulsive control problem in which a given cost function is minimized by choosing control strategies driving the state to a point in a given closed set C∞. We present necessary conditions of optimality in the form of a maximum principle in which the boundary condition on the adjoint variable ensures non-degeneracy despite the infinite time horizon. These conditions are given first for conventional systems and then for impulsive control problems. They are proved by considering a family of approximating auxiliary conventional (impulse-free) optimal control problems defined on an increasing sequence of finite time intervals. As far as we know, results of this kind have not been derived previously. © 2010 IFAC.