4 results for Event control

in Repositório Institucional UNESP - Universidade Estadual Paulista "Julio de Mesquita Filho"


Relevance:

60.00%

Publisher:

Abstract:

Graduate Program in Agronomy (Energy in Agriculture) - FCA

Relevance:

30.00%

Publisher:

Abstract:

This paper is concerned with the stability of discrete-time linear systems subject to random jumps in the parameters, described by an underlying finite-state Markov chain. In the model studied, a stopping time τ_Δ is associated with the occurrence of a crucial failure after which the system is brought to a halt for maintenance. The usual stochastic stability concepts and associated results are not suitable, since they are tailored to purely infinite-horizon problems. Using the concept of stochastic τ-stability, equivalent conditions that ensure the stochastic stability of the system until the occurrence of τ_Δ are obtained. In addition, an intermediary, mixed case in which τ represents the minimum between the occurrence of a fixed number N of failures and the occurrence of the crucial failure τ_Δ is also considered. Necessary and sufficient conditions ensuring stochastic τ-stability in this setting, which are auxiliary to the main result, are also provided.
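For context, a minimal sketch of the discrete-time Markov jump linear system model and the usual infinite-horizon (mean-square) stability notion that the τ-stability concept refines; the symbols A_i, θ(k), and p_{ij} are generic placeholders and not necessarily the paper's notation:

x_{k+1} = A_{\theta(k)} x_k, \qquad \theta(k) \in \{1,\dots,\sigma\}, \qquad \Pr\!\left[\theta(k+1)=j \mid \theta(k)=i\right] = p_{ij},

\lim_{k \to \infty} \mathbb{E}\!\left[\|x_k\|^2\right] = 0 \quad \text{(mean-square stability).}

The paper replaces the limit over an infinite horizon with expectations taken only up to the stopping time τ_Δ (or up to the mixed stopping time), which is why the standard stability results do not apply directly.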

Relevance:

30.00%

Publisher:

Abstract:

This paper deals with a stochastic optimal control problem involving discrete-time Markov jump linear systems. The jumps, or changes between the system operation modes, evolve according to an underlying Markov chain. In the model studied, the problem horizon is defined by a stopping time τ which represents either the occurrence of a fixed number N of failures or repairs (τ_N), or the occurrence of a crucial failure event (τ_Δ), after which the system is brought to a halt for maintenance. In addition, an intermediary, mixed case in which τ represents the minimum between τ_N and τ_Δ is also considered. These stopping times coincide with some of the jump times of the Markov state, and the information available allows the reconfiguration of the control action at each jump time, in the form of a linear feedback gain. The solution for the linear quadratic problem with complete Markov state observation is presented. The solution is given in terms of recursions of a set of algebraic Riccati equations (ARE) or of a coupled set of algebraic Riccati equations (CARE).
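As an illustration of the coupled Riccati structure mentioned above, the standard CARE for mode-dependent linear-quadratic state feedback in Markov jump linear systems reads as follows; this is a generic textbook form with placeholder weights Q_i, R_i, and does not reproduce the paper's exact stopping-time recursion:

\mathcal{E}_i(X) = \sum_{j=1}^{\sigma} p_{ij} X_j,

X_i = Q_i + A_i^\top \mathcal{E}_i(X) A_i - A_i^\top \mathcal{E}_i(X) B_i \left(R_i + B_i^\top \mathcal{E}_i(X) B_i\right)^{-1} B_i^\top \mathcal{E}_i(X) A_i,

K_i = -\left(R_i + B_i^\top \mathcal{E}_i(X) B_i\right)^{-1} B_i^\top \mathcal{E}_i(X) A_i, \qquad u_k = K_{\theta(k)} x_k.

The coupling enters through the operator \mathcal{E}_i, which averages the solutions X_j over the modes reachable from mode i according to the transition probabilities.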

Relevance:

30.00%

Publisher:

Abstract:

The linear quadratic Gaussian control of discrete-time Markov jump linear systems is addressed in this paper, first for state feedback, and also for dynamic output feedback using state estimation. In the model studied, the problem horizon is defined by a stopping time τ which represents either the occurrence of a fixed number N of failures or repairs (τ_N), or the occurrence of a crucial failure event (τ_Δ), after which the system is paralyzed. From the constructive method used here a separation principle holds, and the solutions are given in terms of a Kalman filter and a state feedback sequence of controls. The control gains are obtained by recursions of a set of algebraic Riccati equations in the former case, or of a coupled set of algebraic Riccati equations in the latter case. Copyright © 2005 IFAC.
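A minimal sketch of the separation structure referred to above, in generic notation (the paper's specific gain recursions account for the stopping-time horizon and are not reproduced here): the feedback acts on the Kalman estimate of the state,

\hat{x}_{k+1} = A_{\theta(k)} \hat{x}_k + B_{\theta(k)} u_k + L_k \left(y_k - C_{\theta(k)} \hat{x}_k\right), \qquad u_k = K_{\theta(k)} \hat{x}_k,

where L_k is the Kalman filter gain and K_i is the state-feedback gain obtained from the Riccati recursions. The separation principle means the estimator gain and the controller gain can be computed independently of one another.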