Class-specific sparse codes for representing activities


Author(s): Umakanthan, Sabanadesan; Denman, Simon; Fookes, Clinton; Sridharan, Sridha
Date(s)

2015

Abstract

In this paper we investigate the effectiveness of class-specific sparse codes in the context of discriminative action classification. The bag-of-words representation is widely used in activity recognition to encode features, and although it yields state-of-the-art performance with several feature descriptors, it still suffers from large quantization errors that reduce overall performance. Recently proposed sparse representation methods have been shown to effectively represent features as a linear combination of an overcomplete dictionary by minimizing the reconstruction error. In contrast to most sparse representation methods, which focus on Sparse-Reconstruction based Classification (SRC), this paper focuses on discriminative classification using an SVM, constructing class-specific sparse codes for motion and appearance separately. Experimental results demonstrate that separate motion- and appearance-specific sparse coefficients provide a more effective and discriminative representation for each class than a single set of class-specific sparse coefficients.
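As an illustrative sketch only (not the authors' ICIP 2015 implementation), the pipeline described in the abstract could be approximated with scikit-learn: learn one overcomplete dictionary per action class, sparse-code motion and appearance descriptors separately against each class dictionary, and train a linear SVM on the concatenated codes. The function names, parameter values (atom count, OMP sparsity level, the SVM's C) and the mention of HOG/HOF descriptors are assumptions, not details taken from the paper.

```python
# Illustrative sketch of class-specific sparse coding + SVM classification.
# Assumed API choices; not the authors' implementation.
import numpy as np
from sklearn.decomposition import DictionaryLearning, SparseCoder
from sklearn.svm import LinearSVC

def learn_class_dictionaries(features, labels, n_atoms=64):
    """Learn one (assumed-size) overcomplete dictionary per class.

    features: (n_samples, n_dims) array; labels: (n_samples,) array.
    """
    dictionaries = {}
    for c in np.unique(labels):
        dl = DictionaryLearning(n_components=n_atoms,
                                transform_algorithm='lasso_lars',
                                random_state=0)
        dl.fit(features[labels == c])
        dictionaries[c] = dl.components_
    return dictionaries

def encode(features, dictionaries):
    """Concatenate each sample's sparse codes w.r.t. every class dictionary."""
    codes = []
    for c in sorted(dictionaries):
        coder = SparseCoder(dictionary=dictionaries[c],
                            transform_algorithm='omp',
                            transform_n_nonzero_coefs=5)
        codes.append(coder.transform(features))
    return np.hstack(codes)

def fit_classifier(motion, appearance, labels):
    """Encode motion (e.g. HOF) and appearance (e.g. HOG) features
    separately, then train a discriminative linear SVM on the
    concatenated class-specific codes."""
    d_motion = learn_class_dictionaries(motion, labels)
    d_app = learn_class_dictionaries(appearance, labels)
    X = np.hstack([encode(motion, d_motion), encode(appearance, d_app)])
    clf = LinearSVC(C=1.0).fit(X, labels)
    return clf, d_motion, d_app
```

Keeping the motion and appearance codes separate until the final concatenation mirrors the abstract's claim that channel-specific, class-specific coefficients are more discriminative than a single set of class-specific coefficients.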

Format

application/pdf

Identifier

http://eprints.qut.edu.au/92902/

Relation

http://eprints.qut.edu.au/92902/1/388-Z3f7-161.pdf

DOI:10.1109/ICIP.2015.7351739

Umakanthan, Sabanadesan, Denman, Simon, Fookes, Clinton, & Sridharan, Sridha (2015) Class-specific sparse codes for representing activities. In IEEE International Conference on Image Processing (ICIP 2015), Quebec City, QC, pp. 4902-4906.

Rights

Copyright 2015 IEEE

Source

School of Electrical Engineering & Computer Science; Science & Engineering Faculty

Keywords: #080104 Computer Vision #080109 Pattern Recognition and Data Mining #080199 Artificial Intelligence and Image Processing not elsewhere classified
Type

Conference Paper