Efficient Algorithms for Linear Summed Error Structural SVMs


Author(s): Balamurugan, P; Shevade, Shirish; Babu, Ravindra T
Date(s)

2012

Abstract

Structural Support Vector Machines (SSVMs) have become a popular tool in machine learning for predicting structured objects such as parse trees, Part-of-Speech (POS) label sequences and image segments. Various efficient algorithmic techniques have been proposed for training SSVMs on large datasets. The typical SSVM formulation contains a regularizer term and a composite loss term, and the loss term is usually composed of the Linear Maximum Error (LME) associated with the training examples. Other alternatives for the loss term are yet to be explored for SSVMs. We formulate a new SSVM with a Linear Summed Error (LSE) loss term and propose efficient algorithms to train this formulation using a primal cutting-plane method and a sequential dual coordinate descent method. Numerical experiments on benchmark datasets demonstrate that the sequential dual coordinate descent method is faster than the cutting-plane method and reaches steady-state generalization performance sooner. It is thus a useful alternative for training SSVMs when the linear summed error is used.
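For reference, a minimal sketch of the standard margin-rescaled SSVM objective with the LME loss, written in common notation (the symbols \lambda, \phi, \Delta and \mathcal{Y} are conventional choices and are not taken verbatim from the paper):

\[
\min_{w}\ \frac{\lambda}{2}\,\lVert w\rVert^{2} \;+\; \sum_{i=1}^{n} L_i(w),
\qquad
L_i^{\mathrm{LME}}(w) \;=\; \max_{y \in \mathcal{Y}}\Big[\Delta(y_i, y) \;-\; w^{\top}\big(\phi(x_i, y_i) - \phi(x_i, y)\big)\Big],
\]

where each training example consists of an input x_i with structured label y_i, \phi is a joint feature map and \Delta is a structured (e.g., Hamming) loss. The LSE formulation studied in the paper is understood to replace this per-example maximum with a summed error; its precise definition, and the corresponding cutting-plane and sequential dual coordinate descent updates, are given in the paper itself.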

Format

application/pdf

Identifier

http://eprints.iisc.ernet.in/45528/1/2012IJCNN_IEEE_2012.pdf

Balamurugan, P and Shevade, Shirish and Babu, Ravindra T (2012) Efficient Algorithms for Linear Summed Error Structural SVMs. In: IEEE International Conference on Fuzzy Systems (FUZZ-IEEE) / International Joint Conference on Neural Networks (IJCNN) / IEEE Congress on Evolutionary Computation (IEEE-CEC) / IEEE World Congress on Computational Intelligence (IEEE-WCCI), 10-15 June 2012, Brisbane, Australia.

Publisher

IEEE, New York, USA

Relation

http://dx.doi.org/10.1109/IJCNN.2012.6252830

http://eprints.iisc.ernet.in/45528/

Keywords

#Computer Science & Automation (Formerly, School of Automation)

Type

Conference Proceedings

NonPeerReviewed