AdaBoost is consistent


Author(s): Bartlett, Peter L.; Traskin, Mikhail
Date(s)

01/10/2007

Abstract

The risk, or probability of error, of the classifier produced by the AdaBoost algorithm is investigated. In particular, we consider the stopping strategy to be used in AdaBoost to achieve universal consistency. We show that provided AdaBoost is stopped after n^(1−ε) iterations---for sample size n and ε ∈ (0,1)---the sequence of risks of the classifiers it produces approaches the Bayes risk.
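The stopping strategy in the abstract can be illustrated with a minimal sketch of AdaBoost over decision stumps, halted after ⌈n^(1−ε)⌉ rounds. This is an illustrative toy implementation, not the authors' code; the function names (`adaboost`, `fit_stump`, `predict`) and the choice of stumps as base learners are assumptions for the example.

```python
import numpy as np

def stump_predict(X, feature, threshold, polarity):
    # A decision stump: predict +1 or -1 by thresholding one feature.
    return polarity * np.where(X[:, feature] <= threshold, 1.0, -1.0)

def fit_stump(X, y, w):
    # Exhaustive search for the stump minimizing weighted training error.
    best, best_err = None, np.inf
    for feature in range(X.shape[1]):
        for threshold in np.unique(X[:, feature]):
            for polarity in (1.0, -1.0):
                pred = stump_predict(X, feature, threshold, polarity)
                err = np.sum(w[pred != y])
                if err < best_err:
                    best_err, best = err, (feature, threshold, polarity)
    return best, best_err

def adaboost(X, y, epsilon=0.5):
    # Stop after ceil(n^(1 - epsilon)) rounds, the strategy the paper
    # shows yields universal consistency (epsilon in (0, 1)).
    n = len(y)
    rounds = int(np.ceil(n ** (1.0 - epsilon)))
    w = np.full(n, 1.0 / n)          # uniform initial weights
    ensemble = []
    for _ in range(rounds):
        stump, err = fit_stump(X, y, w)
        err = min(max(err, 1e-12), 1.0 - 1e-12)   # avoid log(0)
        alpha = 0.5 * np.log((1.0 - err) / err)   # base-learner weight
        pred = stump_predict(X, *stump)
        w *= np.exp(-alpha * y * pred)            # reweight examples
        w /= w.sum()
        ensemble.append((alpha, stump))
    return ensemble

def predict(ensemble, X):
    # Final classifier: sign of the weighted vote of all stumps.
    score = sum(a * stump_predict(X, *s) for a, s in ensemble)
    return np.sign(score)
```

For instance, with n = 4 training points and ε = 0.5, the sketch runs only ⌈4^0.5⌉ = 2 boosting rounds; the point of the early stop is that the iteration count grows strictly slower than the sample size.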

Format

application/pdf

Identifier

http://eprints.qut.edu.au/44014/

Publisher

Massachusetts Institute of Technology Press (MIT Press)

Relation

http://eprints.qut.edu.au/44014/1/44014P.pdf

http://jmlr.csail.mit.edu/papers/volume8/bartlett07b/bartlett07b.pdf

Bartlett, Peter L. & Traskin, Mikhail (2007) AdaBoost is consistent. Journal of Machine Learning Research, 8, pp. 2347-2368.

Rights

Copyright 2007 Peter L. Bartlett and Mikhail Traskin.

Source

Faculty of Science and Technology; Mathematical Sciences

Keywords #boosting #adaboost #consistency #OAVJ
Type

Journal Article