Noise tolerance under risk minimization


Author(s): Manwani, Naresh; Sastry, PS
Date(s)

01/06/2013

Abstract

In this paper, we explore noise-tolerant learning of classifiers. We formulate the problem as follows. We assume that there is an unobservable training set that is noise free. The actual training set given to the learning algorithm is obtained from this ideal data set by corrupting the class label of each example. The probability that the class label of an example is corrupted is a function of the feature vector of the example. This would account for most kinds of noisy data one encounters in practice. We say that a learning method is noise tolerant if the classifiers learnt with noise-free data and with noisy data both have the same classification accuracy on the noise-free data. In this paper, we analyze the noise-tolerance properties of risk minimization under different loss functions. We show that risk minimization under the 0-1 loss function has impressive noise-tolerance properties and that risk minimization under the squared error loss is tolerant only to uniform noise; risk minimization under other loss functions is not noise tolerant. We conclude this paper with a discussion of the implications of these theoretical results.
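
The definition of noise tolerance used above can be stated compactly as follows. This is an illustrative formalization only; the symbols D, D_eta, eta(.), f*_L and the tilde-marked noisy-data minimizer are our notation, not taken from the paper.

    % Noise model: the observed label \tilde{y} is a corrupted copy of the clean
    % label y, flipped with a probability that may depend on the feature vector x.
    \[
      P\bigl(\tilde{y} \neq y \mid x\bigr) = \eta(x)
    \]
    % Risk minimizers under a loss function L, on the clean distribution D and on
    % the noisy distribution D_\eta induced by the corruption above:
    \[
      f_L^{*} = \arg\min_{f} \, \mathbb{E}_{(x,y) \sim D}\bigl[L(f(x), y)\bigr],
      \qquad
      \tilde{f}_L^{*} = \arg\min_{f} \, \mathbb{E}_{(x,\tilde{y}) \sim D_\eta}\bigl[L(f(x), \tilde{y})\bigr]
    \]
    % Risk minimization under L is noise tolerant if both minimizers have the same
    % probability of error on the noise-free distribution:
    \[
      P_{(x,y) \sim D}\bigl(f_L^{*}(x) \neq y\bigr) = P_{(x,y) \sim D}\bigl(\tilde{f}_L^{*}(x) \neq y\bigr)
    \]

Uniform noise corresponds to the special case \eta(x) = \eta for all x; the non-uniform case, where \eta(x) varies with x, covers the general noise model described in the abstract.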

Format

application/pdf

Identifier

http://eprints.iisc.ernet.in/46746/1/IEEE_Tran_Cyb_43-3_1146_2013.pdf

Manwani, Naresh and Sastry, PS (2013) Noise tolerance under risk minimization. IEEE Transactions on Cybernetics, 43 (3), pp. 1146-1151.

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Relation

http://dx.doi.org/10.1109/TSMCB.2012.2223460

http://eprints.iisc.ernet.in/46746/

Keywords #Electrical Engineering

Type

Journal Article

Peer Reviewed