3 results for MARKOV JUMP SYSTEMS

in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo


Relevance: 100.00%


Abstract:

In this paper, we consider the stochastic optimal control problem of discrete-time linear systems subject to Markov jumps and multiplicative noises under two criteria. The first is an unconstrained mean-variance trade-off performance criterion over time, and the second is a minimum-variance criterion over time with constraints on the expected output. We present explicit conditions for the existence of an optimal control strategy for these problems, generalizing previous results in the literature. We conclude the paper with a numerical example of a multi-period portfolio selection problem with regime switching, in which the goal is to minimize the sum of the variances of the portfolio over time while keeping the expected value of the portfolio above minimum values specified by the investor.
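
As a concrete illustration of the regime-switching portfolio setting described above, the following Python sketch simulates a two-regime Markov chain with regime-dependent return statistics and estimates the mean and variance of the portfolio value over time by Monte Carlo. All parameters are hypothetical placeholders, not taken from the paper, and the fixed mode-dependent investment fractions merely stand in for the optimal strategy derived there.

import numpy as np

# Minimal sketch of a regime-switching portfolio simulation (hypothetical
# parameters). theta_k is a two-state Markov chain ("bull"/"bear"); the
# risky return has regime-dependent mean and volatility, playing the role
# of the multiplicative noise in the abstract's model.

rng = np.random.default_rng(0)

P = np.array([[0.9, 0.1],          # regime transition matrix (assumed)
              [0.2, 0.8]])
mu = np.array([0.08, -0.02])       # mean risky return per regime (assumed)
sigma = np.array([0.10, 0.25])     # return volatility per regime (assumed)
rf = 0.01                          # risk-free rate (assumed)
u = np.array([0.6, 0.2])           # fixed risky fraction per regime, a
                                   # stand-in for the optimal policy

def simulate(T=12, n_paths=20000, v0=1.0):
    """Monte Carlo estimate of the portfolio mean and variance over time."""
    theta = np.zeros(n_paths, dtype=int)
    v = np.full(n_paths, v0)
    means, variances = [], []
    for _ in range(T):
        r = mu[theta] + sigma[theta] * rng.standard_normal(n_paths)
        v = v * (1.0 + rf + u[theta] * (r - rf))                 # wealth recursion
        theta = (rng.random(n_paths) < P[theta, 1]).astype(int)  # next regime
        means.append(v.mean())
        variances.append(v.var())
    return np.array(means), np.array(variances)

m, var = simulate()
print("E[V_k]:  ", np.round(m, 3))
print("Var[V_k]:", np.round(var, 4))

The two printed sequences are exactly the quantities the paper's criteria trade off: the expected portfolio value, which the constraint keeps above the investor's minimum, and the per-period variances, whose sum the controller minimizes.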

Relevance: 100.00%


Abstract:

In this article we propose an adaptation of an algorithm based on biological evolution to obtain the optimal control for the long-run average cost problem for Markov jump linear systems. The literature offers no method that provably yields the optimal control for this problem, nor comparative studies of different methods. The algorithm employed differs from basic genetic algorithms in that it replaces the evolutionary operators with sampling according to a probability distribution. We compare the proposed algorithm with a method widely used for this class of problem, taking into account the relation between the costs obtained, the CPU time, and the number of problems in which the established stopping criterion was reached.
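
To make the distribution-based search concrete, the sketch below runs a cross-entropy-style estimation-of-distribution loop, in the spirit of the algorithm described (candidates are sampled from a probability distribution rather than produced by crossover and mutation), on a scalar Markov jump linear system with two modes. All system data are illustrative assumptions, not the paper's benchmark; the fitness is a simulated long-run average quadratic cost.

import numpy as np

# Estimation-of-distribution search over mode-dependent feedback gains
# g = (g0, g1) for a scalar two-mode Markov jump linear system.
# All numbers below are illustrative assumptions.

rng = np.random.default_rng(1)

a = np.array([1.1, 0.7])           # mode-dependent dynamics (assumed)
b = np.array([1.0, 0.5])           # mode-dependent input gains (assumed)
q, r = 1.0, 0.1                    # stage-cost weights (assumed)
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])         # mode transition matrix (assumed)

def avg_cost(g, T=2000):
    """Simulated long-run average cost under u_k = -g[theta_k] * x_k."""
    x, theta, total = 1.0, 0, 0.0
    for _ in range(T):
        u = -g[theta] * x
        total += q * x**2 + r * u**2
        x = a[theta] * x + b[theta] * u + 0.1 * rng.standard_normal()
        theta = 0 if rng.random() < P[theta, 0] else 1
        if abs(x) > 1e6:           # penalize destabilizing gains
            return np.inf
    return total / T

# Instead of crossover/mutation, sample candidates from a Gaussian and
# refit it to the elite fraction of the population each generation.
mean, std = np.zeros(2), np.ones(2)
for it in range(30):
    pop = mean + std * rng.standard_normal((50, 2))
    costs = np.array([avg_cost(g) for g in pop])
    elite = pop[np.argsort(costs)[:10]]
    mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3
print("best gains:", mean, "avg cost:", avg_cost(mean))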

Relevance: 30.00%


Abstract:

This paper studies the asymptotic optimality of discrete-time Markov decision processes (MDPs) with general state and action spaces and with weak and strong interactions. Using an approach similar to that developed by Liu, Zhang, and Yin [Appl. Math. Optim., 44 (2001), pp. 105-129], the idea is to reduce the dimension of the state space by considering an averaged model. This formulation is often described by introducing a small parameter epsilon > 0 in the definition of the transition kernel, leading to a singularly perturbed Markov model with two time scales. Our objective is twofold. First, it is shown that the value function of the control problem for the perturbed system converges to the value function of a limit averaged control problem as epsilon goes to zero. Second, it is proved that a feedback control policy for the original control problem defined through an optimal feedback policy for the limit problem is asymptotically optimal. Our work extends existing results in the literature in two directions: the underlying MDP is defined on general state and action spaces, and we do not impose strong conditions on the recurrence structure of the MDP, such as Doeblin's condition.
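
The averaging step can be illustrated numerically. In the sketch below, a four-state chain has a fast block-ergodic kernel A and a slow coupling eps*B between two blocks; for small eps, the stationary distribution of A + eps*B is well approximated by the product of the averaged two-state slow model's stationary law and the within-block fast stationary laws. The numbers are illustrative and there are no actions, so this shows the averaging of the underlying chain rather than the full controlled MDP.

import numpy as np

# Two-time-scale averaging sketch (illustrative numbers, not from the
# paper). P_eps = A + eps*B: A is fast and block-ergodic on the blocks
# {0,1} and {2,3}; eps*B is the slow coupling between blocks.

A = np.array([[0.3, 0.7, 0.0, 0.0],    # fast dynamics inside block {0,1}
              [0.6, 0.4, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5],    # fast dynamics inside block {2,3}
              [0.0, 0.0, 0.2, 0.8]])
B = np.array([[-1.0, 0.0, 1.0, 0.0],   # slow coupling; rows sum to zero
              [0.0, -1.0, 0.5, 0.5],
              [1.0, 0.0, -1.0, 0.0],
              [0.0, 1.0, 0.0, -1.0]])

def stationary(P):
    """Left eigenvector of P for eigenvalue 1, normalized to a distribution."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

nu1 = stationary(A[:2, :2])            # fast stationary law within block 1
nu2 = stationary(A[2:, 2:])            # fast stationary law within block 2

# Averaged slow model on the two blocks: coupling rates averaged by nu.
g12 = nu1 @ B[:2, 2:].sum(axis=1)
g21 = nu2 @ B[2:, :2].sum(axis=1)
pi_bar = np.array([g21, g12]) / (g12 + g21)   # stationary law of slow model

eps = 1e-3
pi_eps = stationary(A + eps * B)
approx = np.concatenate([pi_bar[0] * nu1, pi_bar[1] * nu2])
print("pi_eps :", np.round(pi_eps, 4))
print("averaged:", np.round(approx, 4))

The two printed distributions nearly coincide, which is the structure the paper exploits: the slow block process times the fast within-block law determines the limit behavior, so value functions and policies can be computed on the much smaller averaged model.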