900 results for Optimal Feedback Control


Relevance:

100.00%

Publisher:

Abstract:

We show numerically that direct delayed optoelectronic feedback can suppress hysteresis and bistability in a directly modulated semiconductor laser. The laser with feedback is simulated over a considerable range of feedback strengths and delays, and the corresponding areas of the hysteresis loops are calculated. The hysteresis loop is shown to vanish completely for certain combinations of these parameters, and the regimes in which bistability disappears are classified globally. The different dynamical states of the laser are characterized using bifurcation diagrams and time-series plots.
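To make the workflow concrete, the sketch below integrates a generic single-mode rate-equation model with a delayed feedback term added to the injection current, ramps the drive up and down, and measures the area enclosed by the output-versus-current loop. The rate equations, device parameters, coupling constant `k_fb`, and the `hysteresis_area` helper are all illustrative assumptions, not the model or parameter values used in the paper.

```python
import numpy as np

def hysteresis_area(k_fb=0.0, tau_d=1e-9, dt=1e-12, ramp_time=200e-9):
    # Illustrative single-mode rate-equation parameters (not the paper's)
    tau_n, tau_p = 2e-9, 2e-12            # carrier / photon lifetimes (s)
    g0, N0, beta, Gamma = 1.5e-12, 1e24, 1e-4, 0.3
    q, V = 1.6e-19, 1e-16                 # electron charge, active volume

    n_steps = int(2 * ramp_time / dt)
    n_delay = max(1, int(tau_d / dt))
    S_hist = np.zeros(n_delay)            # ring buffer holding S(t - tau_d)

    N, S = 1e24, 1e18                     # initial carrier / photon densities
    I_lo, I_hi = 20e-3, 60e-3             # current ramp limits (A)
    I_drive = np.concatenate([np.linspace(I_lo, I_hi, n_steps // 2),
                              np.linspace(I_hi, I_lo, n_steps - n_steps // 2)])
    S_out = np.empty(n_steps)

    for i in range(n_steps):
        # Delayed optoelectronic feedback added to the injection current
        I = I_drive[i] + k_fb * S_hist[i % n_delay]
        dN = I / (q * V) - N / tau_n - g0 * (N - N0) * S
        dS = Gamma * g0 * (N - N0) * S - S / tau_p + beta * N / tau_n
        N, S = N + dt * dN, max(S + dt * dS, 0.0)
        S_hist[i % n_delay] = S           # overwrite the slot just consumed
        S_out[i] = S

    # Area enclosed between the up- and down-sweep branches (shoelace formula)
    x, y = I_drive, S_out
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# Scan the feedback strength (and, similarly, the delay) and watch the loop area
for k in (0.0, 5e-24, 2e-23):
    print(f"k_fb = {k:g}  loop area = {hysteresis_area(k_fb=k):.3e}")
```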

Relevance:

100.00%

Publisher:

Abstract:

Controlling the spread of dengue fever by introducing the intracellular parasitic bacterium Wolbachia into populations of the vector Aedes aegypti is presently one of the most promising tools for eliminating dengue in the absence of an efficient vaccine. The success of this operation requires careful local planning to determine the number of Wolbachia-carrying mosquitoes that need to be introduced into the natural population. The introduced mosquitoes are expected to eventually replace the Wolbachia-free population and guarantee permanent protection against the transmission of dengue to humans. In this paper, we propose and analyze a model describing the fundamental aspects of the competition between mosquitoes carrying Wolbachia and mosquitoes free of the parasite. We then introduce a simple feedback control law to synthesize an introduction protocol, and prove that the population is guaranteed to converge to a stable equilibrium where all mosquitoes carry Wolbachia. The techniques are based on the theory of monotone control systems, as developed by Angeli and Sontag. Due to bistability, the considered input-output system has multivalued static characteristics; the existing results cannot establish almost-global stabilization, so an ad hoc analysis is conducted.
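The role of a feedback release law in defeating bistability can be illustrated with a deliberately simplified scalar model: the Wolbachia infection frequency obeys a bistable cubic with an unstable invasion threshold, and releases enter as a bounded, non-negative input. This is not the compartmental model analyzed in the paper; `theta`, `k_gain`, `u_max`, and the proportional law `feedback_release` are hypothetical choices for illustration only.

```python
import numpy as np
from scipy.integrate import solve_ivp

theta = 0.35      # illustrative invasion threshold (unstable equilibrium)
k_gain = 0.5      # illustrative feedback gain
u_max = 0.05      # cap on the release rate

def feedback_release(p):
    # Hypothetical proportional law: keep releasing while p is below 1
    return min(u_max, k_gain * (1.0 - p))

def dynamics(t, y):
    p = y[0]                              # Wolbachia infection frequency in [0, 1]
    u = feedback_release(p)
    return [p * (1.0 - p) * (p - theta) + u * (1.0 - p)]

sol = solve_ivp(dynamics, (0.0, 200.0), [0.01], max_step=0.1)
print("final infection frequency:", sol.y[0, -1])   # expected to approach 1
```

Without the release term, any initial frequency below `theta` decays back to zero; the feedback pushes the state past the threshold and then shuts itself off as the infected population takes over.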

Relevance:

100.00%

Publisher:

Abstract:

In this work, we deal with a micro-electromechanical system (MEMS), represented by a micro-accelerometer. Numerical simulations show that, for certain parameters, the system exhibits chaotic behavior. The chaotic dynamics of the fractional-order version of the system are also studied numerically, through time histories and phase portraits, and the results are validated by the existence of a positive maximal Lyapunov exponent. Three strategies are used for controlling the trajectory of the system: State-Dependent Riccati Equation (SDRE) control, Optimal Linear Feedback Control, and Fuzzy Sliding Mode Control. All three controllers proved effective in controlling the trajectory of the system and robust in the presence of parametric errors.
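As a minimal illustration of the SDRE idea mentioned above, the sketch below regulates a Duffing-type oscillator, a common reduced model for MEMS devices, by re-solving a pointwise algebraic Riccati equation along the trajectory. The coefficients, the weights, and the state-dependent factorization `A_of_x` are assumptions for illustration, not the paper's micro-accelerometer model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative Duffing-type model:  x1' = x2,  x2' = -a*x1 - b*x1^3 - c*x2 + u
a, b, c = 1.0, 0.8, 0.05
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])
R = np.array([[1.0]])

def A_of_x(x):
    # State-dependent coefficient factorization: the cubic term is absorbed
    # into the (1,0) entry, so f(x) = A(x) @ x exactly.
    return np.array([[0.0, 1.0],
                     [-a - b * x[0] ** 2, -c]])

def sdre_gain(x):
    P = solve_continuous_are(A_of_x(x), B, Q, R)   # pointwise Riccati solve
    return np.linalg.solve(R, B.T @ P)             # K(x) = R^{-1} B^T P(x)

# Closed-loop simulation with forward Euler
dt, x = 1e-3, np.array([1.0, 0.0])
for _ in range(20000):
    u = -sdre_gain(x) @ x
    dx = np.array([x[1], -a * x[0] - b * x[0] ** 3 - c * x[1] + u[0]])
    x = x + dt * dx
print("final state:", x)   # should be driven towards the origin
```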

Relevance:

100.00%

Publisher:

Abstract:

In this work, we analyze the bifurcational behavior of the nonlinear longitudinal flight dynamics, taking the F-8 Crusader aircraft as an example. We deal with high angles of attack in order to stabilize oscillations close to the aircraft's critical angle under the established flight conditions. We propose a linear optimal control design, applied to the considered nonlinear aircraft model below the stall angle, that takes into account regions of Hopf and saddle-node bifurcations.
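The linear optimal (LQR) design step can be sketched for a generic linearized longitudinal model. The matrices `A` and `B` below are placeholders, not the F-8 Crusader data used in the paper; they merely show how the gain and the closed-loop poles are obtained.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder linearized longitudinal model x' = A x + B u
# (state could be read as [angle of attack, pitch rate], input as elevator)
A = np.array([[-0.8,  1.0],
              [-4.0, -1.2]])
B = np.array([[0.0],
              [2.5]])
Q = np.diag([20.0, 1.0])     # penalize angle-of-attack excursions most
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # optimal state feedback u = -K x
print("LQR gain:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```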

Relevance:

100.00%

Publisher:

Abstract:

The aim of this paper is to apply methods from optimal control theory and from the theory of dynamical systems to the mathematical modeling of biological pest control. The linear feedback control problem for nonlinear systems is formulated so as to obtain the optimal pest control strategy solely through the introduction of natural enemies. Asymptotic stability of the closed-loop nonlinear Kolmogorov system is guaranteed by means of a Lyapunov function, which is shown to be the solution of the Hamilton-Jacobi-Bellman equation, thus guaranteeing both stability and optimality. Numerical simulations for three possible scenarios of biological pest control based on Lotka-Volterra models are provided to show the effectiveness of this method.
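A minimal sketch of this strategy on a classical Lotka-Volterra pest/natural-enemy model is given below: a feedforward release holds a chosen pest level at equilibrium, and a linear feedback gain from a Riccati equation stabilizes deviations, with releases constrained to be non-negative. All parameters and the chosen target level are illustrative, not the three scenarios simulated in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import solve_continuous_are

# Pest x, natural enemy y; control u is the release rate of natural enemies
a, b, c, d = 1.0, 0.5, 0.8, 0.2
x_d = 2.0                      # desired pest level, below the natural c/d = 4
y_d = a / b                    # the only enemy level giving x' = 0
u_bar = y_d * (c - d * x_d)    # feedforward release keeping (x_d, y_d) fixed

# Linearize about the target equilibrium and compute a Riccati-based gain
A = np.array([[0.0, -b * x_d],
              [d * y_d, d * x_d - c]])
B = np.array([[0.0], [1.0]])
P = solve_continuous_are(A, B, np.eye(2), np.array([[1.0]]))
K = (B.T @ P).ravel()

def closed_loop(t, z):
    x, y = z
    e = np.array([x - x_d, y - y_d])
    u = max(u_bar - K @ e, 0.0)          # releases cannot be negative
    return [x * (a - b * y), y * (-c + d * x) + u]

sol = solve_ivp(closed_loop, (0.0, 60.0), [4.0, 1.0], max_step=0.05)
print("final pest / enemy levels:", sol.y[:, -1])   # should approach (2.0, 2.0)
```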

Relevance:

100.00%

Publisher:

Abstract:

In this work, the chaotic behavior of a micro-mechanical resonator with electrostatic forces on both sides is suppressed. The aim is to keep the system on an orbit given by the analytical solution obtained with the Method of Multiple Scales. Two strategies are used for controlling the trajectory of the system: State-Dependent Riccati Equation (SDRE) control and Optimal Linear Feedback Control (OLFC). Both controls proved effective in controlling the trajectory of the system. Additionally, the robustness of each strategy is tested in the presence of parametric errors and measurement noise in the control signal.
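The tracking idea behind OLFC can be sketched on a reduced Duffing-type resonator model: a feedforward term makes a chosen reference orbit an exact solution, and a Riccati-based linear gain acts on the tracking error. The model coefficients and the harmonic reference below are assumptions; the paper tracks the orbit obtained with the Method of Multiple Scales instead.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import solve_continuous_are

# Illustrative reduced resonator model: x1' = x2, x2' = -alpha*x1 - beta*x1^3 - delta*x2 + u
alpha, beta, delta = 1.0, 0.5, 0.1
A_ref, w_ref = 0.8, 1.2                     # illustrative harmonic reference orbit

A = np.array([[0.0, 1.0], [-alpha, -delta]])   # error dynamics, cubic term neglected
B = np.array([[0.0], [1.0]])
P = solve_continuous_are(A, B, np.diag([50.0, 5.0]), np.array([[1.0]]))
K = (B.T @ P).ravel()

def reference(t):
    r = A_ref * np.cos(w_ref * t)
    dr = -A_ref * w_ref * np.sin(w_ref * t)
    ddr = -A_ref * w_ref ** 2 * np.cos(w_ref * t)
    return r, dr, ddr

def closed_loop(t, x):
    r, dr, ddr = reference(t)
    # Feedforward making the reference an exact solution, plus error feedback
    u_ff = ddr + alpha * r + beta * r ** 3 + delta * dr
    u = u_ff - K @ np.array([x[0] - r, x[1] - dr])
    return [x[1], -alpha * x[0] - beta * x[0] ** 3 - delta * x[1] + u]

sol = solve_ivp(closed_loop, (0.0, 40.0), [2.0, 0.0], max_step=0.01)
print("tracking error at final time:", sol.y[0, -1] - reference(sol.t[-1])[0])
```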

Relevance:

100.00%

Publisher:

Abstract:

The performance of the optimal linear feedback control and of the state-dependent Riccati equation control techniques, applied to control and suppress the chaotic motion of an atomic force microscope, is analyzed. In addition, the sensitivity of each control technique with respect to parametric uncertainties is considered. Simulation results show the advantages and disadvantages of each technique.
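A simple way to probe this kind of parametric sensitivity is to freeze the gain designed for the nominal model and replay the closed loop under randomly perturbed parameters, recording a cost for each draw. The second-order oscillator below merely stands in for the AFM model, and all numbers, including the 20% perturbation level, are illustrative.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

rng = np.random.default_rng(0)
a_nom, c_nom = 1.0, 0.05
B = np.array([[0.0], [1.0]])
P = solve_continuous_are(np.array([[0.0, 1.0], [-a_nom, -c_nom]]),
                         B, np.eye(2), np.array([[1.0]]))
K = (B.T @ P).ravel()                       # gain frozen at nominal values

def closed_loop_cost(a, c, dt=1e-3, steps=20000):
    # Forward-Euler replay of the controlled oscillator with perturbed (a, c)
    x, cost = np.array([1.0, 0.0]), 0.0
    for _ in range(steps):
        u = -K @ x
        dx = np.array([x[1], -a * x[0] - 0.8 * x[0] ** 3 - c * x[1] + u])
        x = x + dt * dx
        cost += dt * (x @ x + u * u)
    return cost

for _ in range(5):
    a = a_nom * (1 + 0.2 * rng.standard_normal())   # ~20% parameter error
    c = c_nom * (1 + 0.2 * rng.standard_normal())
    print(f"a={a:.3f}  c={c:.3f}  cost={closed_loop_cost(a, c):.3f}")
```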

Relevance:

100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

100.00%

Publisher:

Abstract:

This paper studies the asymptotic optimality of discrete-time Markov decision processes (MDPs) with general state and action spaces and having weak and strong interactions. Using an approach similar to that developed by Liu, Zhang, and Yin [Appl. Math. Optim., 44 (2001), pp. 105-129], the idea is to consider an MDP with general state and action spaces and to reduce the dimension of the state space by considering an averaged model. This formulation is often described by introducing a small parameter epsilon > 0 in the definition of the transition kernel, leading to a singularly perturbed Markov model with two time scales. Our objective is twofold. First, it is shown that the value function of the control problem for the perturbed system converges to the value function of a limit averaged control problem as epsilon goes to zero. Second, it is proved that a feedback control policy for the original control problem, defined by using an optimal feedback policy for the limit problem, is asymptotically optimal. Our work extends existing results in the literature in two directions: the underlying MDP is defined on general state and action spaces, and we do not impose strong conditions on the recurrence structure of the MDP, such as Doeblin's condition.
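The averaging construction can be sketched on a tiny finite MDP: integrate the fast component out against its stationary law, solve the averaged problem by value iteration, and lift the resulting policy, which depends on the slow component only, back to the original chain. The sizes, kernels, rewards, and discount factor below are arbitrary illustrative choices; the paper works with general state and action spaces.

```python
import numpy as np

n_slow, n_fast, n_act = 3, 2, 2
rng = np.random.default_rng(1)
gamma = 0.9

# Fast chain (action-independent here, for simplicity) and its stationary law mu
P_fast = np.array([[0.1, 0.9],
                   [0.8, 0.2]])
evals, evecs = np.linalg.eig(P_fast.T)
mu = np.real(evecs[:, np.argmax(np.real(evals))])
mu /= mu.sum()

# Slow transition kernel and reward depend on (slow state, fast state, action)
P_slow = rng.dirichlet(np.ones(n_slow), size=(n_slow, n_fast, n_act))
r = rng.random((n_slow, n_fast, n_act))

# Averaged model: integrate the fast component out against mu
P_bar = np.einsum('f,sfat->sat', mu, P_slow)
r_bar = np.einsum('f,sfa->sa', mu, r)

# Value iteration on the averaged MDP
V = np.zeros(n_slow)
for _ in range(500):
    Q = r_bar + gamma * np.einsum('sat,t->sa', P_bar, V)
    V = Q.max(axis=1)
policy_slow = Q.argmax(axis=1)
print("policy from the averaged model:", policy_slow)
# Lifted to the original problem as u(slow, fast) = policy_slow[slow]; this is
# the kind of policy whose asymptotic optimality the paper establishes, in a
# far more general setting.
```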

Relevance:

100.00%

Publisher:

Abstract:

For a microgrid with a high penetration level of renewable energy, energy storage use becomes more integral to the system performance due to the stochastic nature of most renewable energy sources. This thesis examines the use of droop control of an energy storage source in dc microgrids in order to optimize a global cost function. The approach involves using a multidimensional surface to determine the optimal droop parameters based on load and state of charge. The optimal surface is determined using knowledge of the system architecture and can be implemented with fully decentralized source controllers. The optimal surface control of the system is presented. Derivations of a cost function along with the implementation of the optimal control are included. Results were verified using a hardware-in-the-loop system.
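The decentralized use of such a surface can be sketched as a lookup: each storage controller interpolates its droop resistance from a table indexed by local load and state of charge and then applies a standard V-I droop law. The surface values, grid ranges, and bus voltage below are placeholders; in the thesis the surface comes from offline optimization of the global cost function.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder optimal-droop surface indexed by (load, state of charge)
load_grid = np.linspace(0.0, 10.0, 11)        # local load estimate (kW)
soc_grid = np.linspace(0.1, 0.9, 9)           # state of charge
# Made-up shape: droop stiffens at low SoC and high load
R_surface = 0.2 + 0.5 * (1 - soc_grid)[None, :] + 0.02 * load_grid[:, None]
optimal_droop = RegularGridInterpolator((load_grid, soc_grid), R_surface)

V_NOM = 380.0                                  # illustrative dc bus setpoint (V)

def droop_voltage_ref(load_kw, soc, i_out):
    """Local droop law: V_ref = V_nom - R_droop(load, soc) * i_out."""
    r_droop = optimal_droop([[load_kw, soc]]).item()
    return V_NOM - r_droop * i_out

print(droop_voltage_ref(load_kw=4.0, soc=0.6, i_out=12.0))
```

Because each controller only needs its own load estimate, state of charge, and the precomputed table, the scheme stays fully decentralized at run time.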

Relevance:

90.00%

Publisher:

Abstract:

In this paper, we devise a separation principle for the finite-horizon quadratic optimal control problem of continuous-time Markovian jump linear systems driven by a Wiener process and with partial observations. We assume that the output variable and the jump parameters are available to the controller. It is desired to design a dynamic Markovian jump controller such that the closed-loop system minimizes the quadratic functional cost of the system over a finite horizon. As in the case with no jumps, we show that an optimal controller can be obtained from two coupled Riccati differential equations, one associated with the optimal control problem when the state variable is available, and the other associated with the optimal filtering problem. This is a separation principle for the finite-horizon quadratic optimal control problem for continuous-time Markovian jump linear systems. For the case in which the matrices are all time-invariant, we analyze the convergence of the solution of the derived interconnected Riccati differential equations to the solution of the associated set of coupled algebraic Riccati equations, as well as the mean-square stabilizing property of this limiting solution. When there is only one mode of operation, our results coincide with the traditional ones for the LQG control of continuous-time linear systems.
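The control half of this separation principle amounts to integrating mode-coupled Riccati differential equations backwards from the terminal time. The sketch below does this for a hypothetical two-mode system with backward Euler steps; all matrices and the transition-rate matrix are illustrative, and the filtering Riccati equation would be propagated forward in an analogous way.

```python
import numpy as np

# Coupled control Riccati ODEs for a two-mode Markovian jump linear system:
#   -dP_i/dt = A_i' P_i + P_i A_i - P_i B_i R^{-1} B_i' P_i + Q + sum_j lam_ij P_j
A = [np.array([[0.0, 1.0], [-1.0, -0.3]]),
     np.array([[0.0, 1.0], [-2.0, -0.1]])]
B = [np.array([[0.0], [1.0]])] * 2
Q = np.eye(2)
R = np.array([[1.0]])
Lam = np.array([[-0.5, 0.5],              # transition-rate matrix of the chain
                [0.8, -0.8]])

T, dt = 5.0, 1e-3
P = [np.zeros((2, 2)), np.zeros((2, 2))]  # terminal condition P_i(T) = 0
for _ in range(int(T / dt)):              # march backwards in time
    P_new = []
    for i in range(2):
        coupling = sum(Lam[i, j] * P[j] for j in range(2))
        dP = (A[i].T @ P[i] + P[i] @ A[i]
              - P[i] @ B[i] @ np.linalg.solve(R, B[i].T @ P[i])
              + Q + coupling)
        P_new.append(P[i] + dt * dP)      # backward step: P(t - dt) = P(t) + dt * RHS
    P = P_new

K = [np.linalg.solve(R, B[i].T @ P[i]) for i in range(2)]
print("mode-dependent gains at t = 0:", K)
```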

Relevance:

90.00%

Publisher:

Abstract:

We compare two different approaches to the control of the dynamics of a continuously monitored open quantum system. The first is Markovian feedback, as introduced in quantum optics by Wiseman and Milburn [Phys. Rev. Lett. 70, 548 (1993)]. The second is feedback based on an estimate of the system state, developed recently by Doherty and Jacobs [Phys. Rev. A 60, 2700 (1999)]. Here we choose to call it, for brevity, Bayesian feedback. For systems with nonlinear dynamics, we expect these two methods of feedback control to give markedly different results. The simplest possible nonlinear system is a driven and damped two-level atom, so we choose this as our model system. The monitoring is taken to be homodyne detection of the atomic fluorescence, and the control is by modulating the driving. The aim of the feedback in both cases is to stabilize the internal state of the atom as close as possible to an arbitrarily chosen pure state, in the presence of inefficient detection and other forms of decoherence. Our results (obtained without recourse to stochastic simulations) prove that Bayesian feedback is never inferior, and is usually superior, to Markovian feedback. However, it would be far more difficult to implement than Markovian feedback and it loses its superiority when obvious simplifying approximations are made. It is thus not clear which form of feedback would be better in the face of inevitable experimental imperfections.
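For orientation only, the sketch below reproduces the uncontrolled starting point of this comparison: the optical Bloch equations for a resonantly driven, damped two-level atom, with the drive amplitude scanned to see how close the steady state can get to a chosen target on the Bloch sphere. The target state and parameter values are illustrative, and neither of the paper's feedback schemes is implemented here.

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma = 1.0                                   # spontaneous decay rate
target = np.array([0.0, 1.0, 0.0])            # hypothetical target Bloch vector

def bloch(t, v, Omega):
    # Resonant optical Bloch equations, Bloch vector v = (x, y, z)
    x, y, z = v
    return [-0.5 * gamma * x,
            -0.5 * gamma * y - Omega * z,
            Omega * y - gamma * (1.0 + z)]

best = None
for Omega in np.linspace(0.1, 10.0, 50):      # scan the (constant) drive amplitude
    sol = solve_ivp(bloch, (0.0, 50.0 / gamma), [0.0, 0.0, -1.0], args=(Omega,))
    v_ss = sol.y[:, -1]                       # approximate steady state
    dist = np.linalg.norm(v_ss - target)
    if best is None or dist < best[1]:
        best = (Omega, dist, v_ss)
print("best drive, distance to target, steady state:", best)
```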