Worst-case bounds for the logarithmic loss of predictors


Author(s): Cesa-Bianchi, Nicolò; Lugosi, Gábor
Contributor(s)

Universitat Pompeu Fabra. Departament d'Economia i Empresa

Date(s)

15/09/2005

Abstract

We investigate on-line prediction of individual sequences. Given a class of predictors, the goal is to predict as well as the best predictor in the class, where the loss is measured by the self-information (logarithmic) loss function. The excess loss (regret) is closely related to the redundancy of the associated lossless universal code. Using Shtarkov's theorem and tools from empirical process theory, we prove a general upper bound on the best possible (minimax) regret. The bound depends on certain metric properties of the class of predictors. We apply the bound to both parametric and nonparametric classes of predictors. Finally, we point out the suboptimal behavior of the popular Bayesian weighted-average algorithm.
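For reference, the central quantities the abstract mentions can be written out explicitly; the notation below is standard in universal prediction and is our own sketch, not taken from the record. The cumulative logarithmic (self-information) loss of a predictor $q$ on a sequence $x^n = (x_1, \dots, x_n)$ is $-\log q(x^n)$, and its regret against a class $\mathcal{F}$ is the excess over the best predictor in the class:

$$ R_n(q, x^n) \;=\; \sup_{p \in \mathcal{F}} \log \frac{p(x^n)}{q(x^n)}. $$

Shtarkov's theorem identifies the worst-case minimax regret exactly:

$$ \min_{q} \max_{x^n} \, \sup_{p \in \mathcal{F}} \log \frac{p(x^n)}{q(x^n)} \;=\; \log \sum_{x^n} \sup_{p \in \mathcal{F}} p(x^n), $$

attained by the normalized maximum-likelihood (Shtarkov) distribution $q^*(x^n) = \sup_{p \in \mathcal{F}} p(x^n) \big/ \sum_{y^n} \sup_{p \in \mathcal{F}} p(y^n)$.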

Identifier

http://hdl.handle.net/10230/934

Language(s)

eng

Rights

Access to the contents of this document is subject to acceptance of the terms of use established by the following Creative Commons license

info:eu-repo/semantics/openAccess

http://creativecommons.org/licenses/by-nc-nd/3.0/es/

Keywords: #Statistics, Econometrics and Quantitative Methods #universal prediction #universal coding #empirical processes #on-line learning #metric entropy
Type

info:eu-repo/semantics/workingPaper