Adaptive online gradient descent


Author(s): Bartlett, Peter L.; Hazan, Elad; Rakhlin, Alexander
Date(s)

14/06/2007

Abstract

We study the rates of growth of the regret in online convex optimization. First, we show that a simple extension of the algorithm of Hazan et al. eliminates the need for a priori knowledge of the lower bound on the second derivatives of the observed functions. We then provide an algorithm, Adaptive Online Gradient Descent, which interpolates between the results of Zinkevich for linear functions and of Hazan et al. for strongly convex functions, achieving intermediate regret rates between √T and log T. Furthermore, we show strong optimality of the algorithm. Finally, we provide an extension of our results to general norms.
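
For intuition, the following is a minimal illustrative sketch (in Python, not taken from the report) of the adaptive step-size idea the abstract describes: the learning rate shrinks with the curvature accumulated so far, so the update recovers a 1/√t schedule (Zinkevich's √T regret regime) when every loss is linear and a 1/t schedule (the log T regime of Hazan et al.) when every loss is strongly convex. The function names, the Euclidean-ball domain, and the fallback step-size schedule are assumptions for illustration, not the report's exact pseudocode.

import numpy as np

def adaptive_ogd(grad_fns, curvatures, dim, radius=1.0):
    """Illustrative adaptive online gradient descent (sketch, not the paper's exact algorithm).

    grad_fns[t](x)  -- gradient of the t-th loss at x (revealed after playing x)
    curvatures[t]   -- strong-convexity parameter H_t of the t-th loss (>= 0)
    radius          -- radius of the Euclidean ball used as the feasible set
    """
    x = np.zeros(dim)
    cum_curvature = 0.0
    plays = []
    for t, (grad, H_t) in enumerate(zip(grad_fns, curvatures), start=1):
        plays.append(x.copy())
        g = grad(x)
        cum_curvature += H_t
        # Adaptive step size: use the accumulated curvature when it is
        # positive; otherwise fall back to the 1/sqrt(t) schedule that
        # handles the purely linear (zero-curvature) case.
        eta = 1.0 / cum_curvature if cum_curvature > 0 else radius / np.sqrt(t)
        x = x - eta * g
        # Project back onto the feasible ball to keep iterates bounded.
        norm = np.linalg.norm(x)
        if norm > radius:
            x = x * (radius / norm)
    return plays

# Hypothetical usage: strongly convex quadratics f_t(x) = ||x - z_t||^2,
# whose Hessian is 2I, so H_t = 2 and the step size decays like 1/(2t).
rng = np.random.default_rng(0)
targets = rng.uniform(-0.5, 0.5, size=(100, 3))
grads = [lambda x, z=z: 2.0 * (x - z) for z in targets]
plays = adaptive_ogd(grads, [2.0] * len(targets), dim=3)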

Identifier

http://eprints.qut.edu.au/44023/

Publisher

University of California

Relation

http://www.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-82.pdf

Bartlett, Peter L., Hazan, Elad, & Rakhlin, Alexander (2007) Adaptive online gradient descent. Technical Report UCB/EECS-2007-82, University of California, Berkeley, California.

Rights

Copyright © 2007, by the author(s). All rights reserved.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission.

Source

Faculty of Science and Technology; Mathematical Sciences

Keywords #080100 ARTIFICIAL INTELLIGENCE AND IMAGE PROCESSING #OAVJ
Type

Report