Making risk minimization tolerant to label noise


Author(s): Ghosh, Aritra; Manwani, Naresh; Sastry, PS
Date(s)

2015

Abstract

In many applications, the training data from which one needs to learn a classifier is corrupted with label noise. Many standard algorithms, such as SVM, perform poorly in the presence of label noise. In this paper we investigate the robustness of risk minimization to label noise. We prove a sufficient condition on a loss function for risk minimization under that loss to be tolerant to uniform label noise. We show that the 0-1 loss, sigmoid loss, ramp loss and probit loss satisfy this condition, whereas none of the standard convex loss functions do. We also prove that, by choosing a sufficiently large value of a parameter in the loss function, the sigmoid loss, ramp loss and probit loss can be made tolerant to non-uniform label noise as well, provided the classes are separable under the noise-free data distribution. Through extensive empirical studies, we show that risk minimization under the 0-1 loss, the sigmoid loss and the ramp loss is much more robust to label noise than the SVM algorithm. (C) 2015 Elsevier B.V. All rights reserved.
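The sufficient condition discussed in the abstract can be understood as a symmetry property of the loss viewed as a function of the margin z = y*f(x): the sum loss(z) + loss(-z) should be a constant. The sketch below is a hypothetical illustration under that assumption, not code from the paper; the function names and the beta parameter are our own. It checks the property numerically for a sigmoid loss and a ramp loss (which satisfy it) and for the convex hinge loss used by SVM (which does not).

```python
import math

def sigmoid_loss(z, beta=1.0):
    # Sigmoid loss on the margin z = y * f(x); beta is an assumed steepness parameter.
    return 1.0 / (1.0 + math.exp(beta * z))

def ramp_loss(z):
    # Ramp loss: a hinge loss clipped at 2 and rescaled to lie in [0, 1].
    return 0.5 * max(0.0, min(2.0, 1.0 - z))

def hinge_loss(z):
    # Standard (convex) hinge loss used by SVM.
    return max(0.0, 1.0 - z)

def symmetry_gap(loss, margins):
    # For a noise-tolerant loss, loss(z) + loss(-z) is the same constant for every z,
    # so the spread of these sums over a grid of margins should be (numerically) zero.
    sums = [loss(z) + loss(-z) for z in margins]
    return max(sums) - min(sums)

margins = [-2.0, -0.5, 0.1, 0.7, 1.5, 3.0]
print(symmetry_gap(sigmoid_loss, margins))  # ~0: satisfies the symmetry condition
print(symmetry_gap(ramp_loss, margins))     # ~0: satisfies the symmetry condition
print(symmetry_gap(hinge_loss, margins))    # clearly > 0: hinge loss does not
```

For the sigmoid loss the sums equal 1 exactly, since 1/(1+e^z) + 1/(1+e^(-z)) = 1; for the ramp loss as defined here they also equal 1, while for the hinge loss the sum grows with |z|, which is one way to see why standard convex losses fail the condition.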

Format

application/pdf

Identifier

http://eprints.iisc.ernet.in/51625/1/Neuro_compu_160%282015%2993-107.pdf

Ghosh, Aritra and Manwani, Naresh and Sastry, PS (2015) Making risk minimization tolerant to label noise. In: NEUROCOMPUTING, 160 . pp. 93-107.

Publisher

ELSEVIER SCIENCE BV

Relation

http://dx.doi.org/10.1016/j.neucom.2014.09.081

http://eprints.iisc.ernet.in/51625/

Keywords #Electrical Engineering
Type

Journal Article

PeerReviewed