Supervised subspace learning with multi-class Lagrangian SVM on the Grassmann manifold


Author(s): Pham, Duc-Son; Venkatesh, Svetha
Contributor(s)

Wang, Dianhui

Reynolds, Mark

Date(s)

01/01/2011

Abstract

Learning robust subspaces that maximize class discrimination is challenging, and most current methods assume only a weak connection between dimensionality reduction and classifier design. We propose an alternative framework in which these two steps are combined in a joint formulation that exploits their direct connection. Specifically, we learn an optimal subspace on the Grassmann manifold that jointly minimizes the classification error of an SVM classifier. We minimize the regularized empirical risk over both the hypothesis space of functions underlying this new generalized multi-class Lagrangian SVM and the Grassmann manifold on which the linear projection is sought. We propose an iterative algorithm to meet the dual goal of optimizing both the classifier and the projection. Extensive numerical studies on challenging datasets show robust performance of the proposed scheme over alternatives when limited training data is available, verifying the advantage of the joint formulation.
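
As a rough illustration of the alternating scheme the abstract describes, the Python sketch below alternates between fitting a linear SVM in the current subspace and taking a Riemannian gradient step on the Grassmann manifold. It assumes a Crammer-Singer style multi-class hinge surrogate for the projection gradient, a QR retraction, integer class labels 0..C-1 with at least three classes, and scikit-learn's LinearSVC; all function names and hyperparameters here are illustrative, not the paper's formulation.

import numpy as np
from sklearn.svm import LinearSVC

def grassmann_step(W, G, lr):
    # Project the Euclidean gradient onto the tangent space at W,
    # then re-orthonormalise the columns with a QR retraction so the
    # iterate stays a valid point on the Grassmann manifold.
    G_tan = G - W @ (W.T @ G)
    Q, _ = np.linalg.qr(W - lr * G_tan)
    return Q

def hinge_grad_wrt_W(X, y, W, V, b):
    # Gradient of a Crammer-Singer style multi-class hinge loss
    # w.r.t. the projection W, via the chain rule through z = W^T x.
    # V (C x d) and b (C,) are the SVM weights learned in the subspace;
    # assumes >= 3 classes so V has one row per class.
    Z = X @ W                                   # data in the subspace
    S = Z @ V.T + b                             # class scores
    n = len(y)
    margins = 1.0 + S - S[np.arange(n), y][:, None]
    margins[np.arange(n), y] = 0.0
    j = margins.argmax(axis=1)                  # most violating class
    active = np.where(margins[np.arange(n), j] > 0)[0]
    G = np.zeros_like(W)
    for i in active:
        G += np.outer(X[i], V[j[i]] - V[y[i]])
    return G / n

def joint_subspace_svm(X, y, d, iters=30, lr=0.1, seed=0):
    # Alternate between (1) fitting a linear SVM in the current
    # d-dimensional subspace and (2) a gradient step on the projection.
    rng = np.random.default_rng(seed)
    W, _ = np.linalg.qr(rng.standard_normal((X.shape[1], d)))
    for _ in range(iters):
        svm = LinearSVC(C=1.0).fit(X @ W, y)    # classifier step
        G = hinge_grad_wrt_W(X, y, W, svm.coef_, svm.intercept_)
        W = grassmann_step(W, G, lr)            # projection step
    return W, LinearSVC(C=1.0).fit(X @ W, y)

The QR retraction after each gradient step keeps the columns of W orthonormal, so every iterate remains on the manifold; the paper's actual optimization over its Lagrangian SVM objective is more involved than this surrogate.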

Identifier

http://hdl.handle.net/10536/DRO/DU:30044657

Language(s)

eng

Publisher

Springer-Verlag

Relation

http://dro.deakin.edu.au/eserv/DU:30044657/venkatesh-supervised-evidence-2011.pdf

http://dro.deakin.edu.au/eserv/DU:30044657/venkatesh-supervisedsubspace-2011.pdf

http://dx.doi.org/10.1007/978-3-642-25832-9_25

Rights

2011, Springer-Verlag Berlin Heidelberg

Keywords #classification errors #classifier design #data sets #dimensionality reduction #empirical risks #Grassmann manifold #hypothesis space #iterative algorithm #Lagrangian #limited training data #linear projections #multi-class #numerical studies #optimal subspace #robust performance #subspace learning #SVM classifiers #weak connection
Type

Conference Paper