Linear programming for large-scale Markov decision problems


Author(s): Abbasi-Yadkori, Yasin; Bartlett, Peter L.; Malek, Alan
Contributor(s)

Xing, E.

Jebara, T.

Date(s)

2014

Abstract

We consider the problem of controlling a Markov decision process (MDP) with a large state space, so as to minimize average cost. Since it is intractable to compete with the optimal policy for large-scale problems, we pursue the more modest goal of competing with a low-dimensional family of policies. We use the dual linear programming formulation of the MDP average cost problem, in which the variable is a stationary distribution over state-action pairs, and we consider a neighborhood of a low-dimensional subset of the set of stationary distributions (defined in terms of state-action features) as the comparison class. We propose a technique based on stochastic convex optimization and give bounds showing that the performance of our algorithm approaches the best achievable by any policy in the comparison class. Most importantly, this result depends on the size of the comparison class, but not on the size of the state space. Preliminary experiments show the effectiveness of the proposed algorithm in a queuing application.
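For orientation, the dual linear program referred to in the abstract can be sketched in its standard form; the symbols below ($\mu$ for the stationary state-action distribution, $\ell$ for the per-step cost, $P$ for the transition kernel, $\Phi$ for the feature matrix) are generic notation and not necessarily that of the paper:

\[
\min_{\mu \ge 0} \; \sum_{s,a} \mu(s,a)\,\ell(s,a)
\quad \text{subject to} \quad
\sum_{a'} \mu(s',a') = \sum_{s,a} P(s' \mid s,a)\,\mu(s,a) \;\; \forall s',
\qquad \sum_{s,a} \mu(s,a) = 1.
\]

The low-dimensional comparison class described in the abstract corresponds to restricting $\mu$ to a neighborhood of a feature-defined set such as $\{\Phi\theta : \theta \in \Theta\}$, where each row of $\Phi$ collects the features of one state-action pair, so the optimization is carried out over the low-dimensional parameter $\theta$ rather than over the full distribution $\mu$.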

Format

application/pdf

Identifier

http://eprints.qut.edu.au/88857/

Publisher

MIT Press

Relation

http://eprints.qut.edu.au/88857/1/88857.pdf

http://jmlr.org/proceedings/papers/v32/malek14.pdf

Abbasi-Yadkori, Yasin, Bartlett, Peter L., & Malek, Alan (2014) Linear programming for large-scale Markov decision problems. In Xing, E. & Jebara, T. (Eds.) Proceedings of the 31st International Conference on Machine Learning, JMLR Workshop and Conference Proceedings, vol. 32, MIT Press, Beijing, China, pp. 496-504.

Rights

Copyright 2014 [Please consult the author]

Source

School of Mathematical Sciences; Science & Engineering Faculty

Type

Conference Paper