Optimistic linear programming gives logarithmic regret for irreducible MDPs


Author(s): Tewari, Ambuj; Bartlett, Peter L.
Contributor(s)

Platt, John

Koller, Daphne

Singer, Yoram

Roweis, Sam

Date(s)

2008

Abstract

We present an algorithm called Optimistic Linear Programming (OLP) for learning to optimize average reward in an irreducible but otherwise unknown Markov decision process (MDP). OLP uses its experience so far to estimate the MDP. It chooses actions by optimistically maximizing estimated future rewards over a set of next-state transition probabilities that are close to the estimates, a computation that corresponds to solving linear programs. We show that the total expected reward obtained by OLP up to time T is within C(P) log T of the reward obtained by the optimal policy, where C(P) is an explicit, MDP-dependent constant. OLP is closely related to an algorithm proposed by Burnetas and Katehakis with four key differences: OLP is simpler, it does not require knowledge of the supports of transition probabilities, the proof of the regret bound is simpler, but our regret bound is a constant factor larger than the regret of their algorithm. OLP is also similar in flavor to an algorithm recently proposed by Auer and Ortner. But OLP is simpler and its regret bound has a better dependence on the size of the MDP.
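The optimistic maximization described above amounts to solving one small linear program per state-action pair: maximize the estimated future reward over transition vectors close to the empirical estimate. The sketch below (Python with NumPy/SciPy) illustrates how such an optimistic index could be computed with an off-the-shelf LP solver. The names optimistic_index, p_hat, h, r_sa and eps are illustrative, and the L1 confidence set is an assumption made for the example; the paper's exact construction of the set of plausible transition probabilities may differ.

    import numpy as np
    from scipy.optimize import linprog

    def optimistic_index(p_hat, h, r_sa, eps):
        """Sketch of an OLP-style inner LP (illustrative, not the paper's
        exact formulation): maximize r(s,a) + q.h over probability vectors
        q with ||q - p_hat||_1 <= eps.  Decision variables are [q, u],
        where u bounds the componentwise deviation |q - p_hat|."""
        n = len(p_hat)
        # Objective: minimize -h.q (the auxiliary u has zero cost).
        c = np.concatenate([-h, np.zeros(n)])
        # Inequalities: q - u <= p_hat,  -q - u <= -p_hat,  sum(u) <= eps.
        A_ub = np.block([
            [np.eye(n), -np.eye(n)],
            [-np.eye(n), -np.eye(n)],
            [np.zeros((1, n)), np.ones((1, n))],
        ])
        b_ub = np.concatenate([p_hat, -p_hat, [eps]])
        # Equality: q must sum to one.
        A_eq = np.concatenate([np.ones((1, n)), np.zeros((1, n))], axis=1)
        b_eq = np.array([1.0])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * (2 * n))
        return r_sa + h @ res.x[:n]

    # Example: with an L1 budget of 0.2, mass shifts toward the state with
    # the largest estimated value h, yielding an optimistic index of 1.2.
    p_hat = np.array([0.6, 0.3, 0.1])
    h = np.array([0.0, 1.0, 2.0])
    print(optimistic_index(p_hat, h, r_sa=0.5, eps=0.2))

The design point the abstract emphasizes is that optimism is achieved purely through linear programming over transition vectors near the estimates, rather than requiring knowledge of the supports of the true transition probabilities.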

Identifier

http://eprints.qut.edu.au/45645/

Relation

http://books.nips.cc/papers/files/nips20/NIPS2007_0673.pdf

Tewari, Ambuj & Bartlett, Peter L. (2008) Optimistic linear programming gives logarithmic regret for irreducible MDPs. In Platt, John, Koller, Daphne, Singer, Yoram, & Roweis, Sam (Eds.), Advances in Neural Information Processing Systems 20 (NIPS), 2008, Cambridge, MA.

Source

Faculty of Science and Technology; Mathematical Sciences

Keywords #080600 INFORMATION SYSTEMS #MDPs #Optimistic Linear Programming
Type

Conference Paper