The LQ control problem for Markovian jumps linear systems with horizon defined by stopping times


Author(s): Nespoli, Cristiane; Do Val, João B. R.; Cáceres, Yusef
Contributor(s)

Universidade Estadual Paulista (UNESP)

Date(s)

27/05/2014

27/05/2014

29/11/2004

Abstract

This paper deals with a stochastic optimal control problem involving discrete-time jump Markov linear systems. The jumps or changes between the system operation modes evolve according to an underlying Markov chain. In the model studied, the problem horizon is defined by a stopping time τ which represents either the occurrence of a fixed number N of failures or repairs (τN), or the occurrence of a crucial failure event (τΔ), after which the system is brought to a halt for maintenance. In addition, an intermediary mixed case, in which τ represents the minimum between τN and τΔ, is also considered. These stopping times coincide with some of the jump times of the Markov state, and the information available allows the reconfiguration of the control action at each jump time in the form of a linear feedback gain. The solution of the linear quadratic problem with complete Markov state observation is presented. The solution is given in terms of recursions on a set of algebraic Riccati equations (ARE) or on a coupled set of algebraic Riccati equations (CARE).
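To illustrate the kind of coupled Riccati recursion the abstract refers to, the sketch below iterates the standard CARE fixed-point equations for a generic discrete-time Markov jump linear system with per-mode matrices (A_i, B_i, Q_i, R_i) and transition matrix P. This is a minimal illustrative sketch of the classical infinite-horizon JLQ recursion, not the stopping-time formulation developed in the paper; all function and variable names are hypothetical.

```python
import numpy as np

def care_recursion(A, B, Q, R, P, num_iters=500, tol=1e-10):
    """Fixed-point iteration on the coupled algebraic Riccati equations
    of a discrete-time Markov jump linear system (MJLS).

    A, B, Q, R are lists of per-mode matrices; P is the mode transition
    matrix. Returns per-mode cost matrices X_i and feedback gains K_i,
    so that the mode-dependent control is u_k = -K_{theta_k} x_k.
    """
    n_modes = len(A)
    n = A[0].shape[0]
    X = [np.zeros((n, n)) for _ in range(n_modes)]
    for _ in range(num_iters):
        # Conditional expectation over the next mode: E_i(X) = sum_j p_ij X_j
        EX = [sum(P[i, j] * X[j] for j in range(n_modes))
              for i in range(n_modes)]
        X_new = []
        for i in range(n_modes):
            S = R[i] + B[i].T @ EX[i] @ B[i]
            M = B[i].T @ EX[i] @ A[i]
            # Riccati update: X_i = Q_i + A_i' E_i(X) A_i - M' S^{-1} M
            X_new.append(Q[i] + A[i].T @ EX[i] @ A[i]
                         - M.T @ np.linalg.solve(S, M))
        diff = max(np.max(np.abs(Xn - Xo)) for Xn, Xo in zip(X_new, X))
        X = X_new
        if diff < tol:
            break
    # Feedback gains from the converged solution
    EX = [sum(P[i, j] * X[j] for j in range(n_modes)) for i in range(n_modes)]
    K = [np.linalg.solve(R[i] + B[i].T @ EX[i] @ B[i],
                         B[i].T @ EX[i] @ A[i]) for i in range(n_modes)]
    return X, K
```

Under mean-square stabilizability assumptions this iteration converges to the stabilizing CARE solution; the paper's stopping-time horizon instead yields a finite recursion over the jump times of the Markov chain.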

Format

703-707

Identifier

http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1383686

Proceedings of the American Control Conference, v. 1, p. 703-707.

0743-1619

http://hdl.handle.net/11449/67955

WOS:000224688300116

2-s2.0-8744270440

Language(s)

eng

Relation

Proceedings of the American Control Conference

Rights

closedAccess

Keywords #Discrete time control systems #Feedback #Markov processes #Matrix algebra #Optimal control systems #Probability #Riccati equations #Set theory #Jump linear quadratic (JLQ) control #Markov states #Markovian jump linear systems (MJLS) #Linear control systems
Type

info:eu-repo/semantics/conferencePaper