Closing the gap between bandit and full-information online optimization: high-probability regret bound


Author(s): Rakhlin, Alexander; Tewari, Ambuj; Bartlett, Peter L.
Date(s)

26/08/2007

Abstract

We demonstrate a modification of the algorithm of Dani et al. for the online linear optimization problem in the bandit setting, which allows us to achieve an $O(\sqrt{T \ln T})$ regret bound that holds with high probability against an adaptive adversary, as opposed to the in-expectation result against an oblivious adversary of Dani et al. We obtain the same dependence on the dimension as that exhibited by Dani et al. The results of this paper rest firmly on those of Dani et al. and on the remarkable technique of Auer et al. for obtaining high-probability bounds via optimistic estimates. This paper answers an open question: it eliminates the gap between the high-probability bounds obtained in the full-information and bandit settings.
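As a sketch of the quantity being bounded, using standard notation for bandit online linear optimization (the decision set $K$, loss vectors $f_t$, and confidence parameter $\delta$ are our labels, not taken from the report), the regret against the best fixed point is

\[
R_T \;=\; \sum_{t=1}^{T} f_t^{\top} x_t \;-\; \min_{x \in K} \sum_{t=1}^{T} f_t^{\top} x,
\]

where the player observes only the scalar loss $f_t^{\top} x_t$ each round (bandit feedback) rather than the full vector $f_t$. The claimed result is that, for any $\delta > 0$, with probability at least $1 - \delta$, $R_T = O(\sqrt{T \ln T})$, up to factors depending on $\delta$ and with the same polynomial dependence on the dimension as in Dani et al.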

Identifier

http://eprints.qut.edu.au/44021/

Publisher

University of California

Relation

http://www.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-109.pdf

Rakhlin, Alexander, Tewari, Ambuj, & Bartlett, Peter L. (2007) Closing the gap between bandit and full-information online optimization: high-probability regret bound. Technical Report UCB/EECS-2007-109, University of California, Berkeley, California (USA).

Rights

Copyright 2007. Please consult the authors.

Source

Faculty of Science and Technology; Mathematical Sciences

Keywords #OAVJ
Type

Report