A Note on Support Vector Machine Degeneracy


Author(s): Rifkin, Ryan; Pontil, Massimiliano; Verri, Alessandro
Date(s)

22/10/2004

11/08/1999

Abstract

When training Support Vector Machines (SVMs) over non-separable data sets, one sets the threshold $b$ using any dual cost coefficient that is strictly between the bounds of $0$ and $C$. We show that there exist SVM training problems whose dual optimal solutions have all coefficients at bounds, but that all such problems are degenerate in the sense that the "optimal separating hyperplane" is given by $\mathbf{w} = \mathbf{0}$, and the resulting (degenerate) SVM will classify all future points identically (to the class that supplies more training data). We also derive necessary and sufficient conditions on the input data for this to occur. Finally, we show that an SVM training problem can always be made degenerate by the addition of a single data point belonging to a certain unbounded polyhedron, which we characterize in terms of its extreme points and rays.
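As an illustration of the degeneracy the abstract describes, the following minimal sketch (not from the report) trains a linear SVM with scikit-learn's SVC and checks the two symptoms at once: every dual coefficient sitting at a bound ($0$ or $C$) and $\mathbf{w} = \mathbf{0}$. The mirrored toy dataset, where each input point appears with both labels, is an illustrative construction of ours, not an example from the paper.

```python
# Minimal sketch (assumptions: scikit-learn's SVC; toy data of our own
# construction): probe for the degeneracy described in the abstract.
# Every point below carries both labels, so no hyperplane beats constant
# prediction and the optimal w collapses to 0.
import numpy as np
from sklearn.svm import SVC

C = 1.0
X = np.array([[-1.0], [1.0], [-1.0], [1.0]])
y = np.array([1, 1, -1, -1])  # each location appears with both labels

clf = SVC(C=C, kernel="linear")
clf.fit(X, y)

# clf.dual_coef_ holds y_i * alpha_i for the support vectors only;
# non-support vectors have alpha_i = 0 (the lower bound) by definition,
# so it suffices to check the support-vector alphas against 0 and C.
alphas = np.abs(clf.dual_coef_).ravel()
all_at_bounds = np.all(np.isclose(alphas, 0.0) | np.isclose(alphas, C))

# With all alphas at a bound, the usual KKT recipe for the threshold b
# (read it off any coefficient strictly between 0 and C) has no
# coefficient to use -- the degenerate case the abstract identifies.
w = clf.coef_.ravel()  # available because the kernel is linear
print("support-vector alphas:", alphas)
print("all dual coefficients at a bound (0 or C):", all_at_bounds)
print("||w|| ~ 0 (degenerate separating hyperplane):",
      bool(np.isclose(np.linalg.norm(w), 0.0, atol=1e-8)))
```

On this symmetric input, the dual optimum puts every alpha at the upper bound $C$, so both checks print True; per the abstract, such a degenerate SVM then assigns every future point to whichever class supplies more training data.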

Format

10 p.

1117769 bytes

262084 bytes

application/postscript

application/pdf

Identifier

AIM-1661

CBCL-177

http://hdl.handle.net/1721.1/7291

Language(s)

en_US

Relation

AIM-1661

CBCL-177

Keywords #AI #MIT #Artificial Intelligence #Support Vector Machines #Scale Sensitive Loss Function #Statistical Learning Theory