In the pursuit of effective affective computing: the relationship between features and registration


Author(s): Chew, Sien Wei; Lucey, Patrick J.; Lucey, Simon; Saragih, Jason M.; Cohn, Jeffrey F.; Matthews, Iain A.; Sridharan, Sridha
Date(s)

2012

Abstract

For facial expression recognition systems to be applicable in the real world, they need to be able to detect and track a previously unseen person's face and its facial movements accurately in realistic environments. A highly plausible solution involves performing a "dense" form of alignment, where 60-70 fiducial facial points are tracked with high accuracy. The problem is that, in practice, this type of dense alignment has until recently been difficult to achieve in a generic sense, mainly due to poor reliability and robustness. Instead, many expression detection methods have opted for a "coarse" form of face alignment, followed by the application of a biologically inspired appearance descriptor such as the histogram of oriented gradients (HOG) or Gabor magnitudes. Encouragingly, recent advances in a number of dense alignment algorithms, for example constrained local models (CLMs), have demonstrated both high reliability and accuracy for unseen subjects. This raises the question: aside from countering illumination variation, what do these appearance descriptors do that standard pixel representations do not? In this paper, we show that, when close to perfect alignment is obtained, there is no real benefit in employing these appearance-based representations (under consistent illumination conditions). In fact, when misalignment does occur, we show that these appearance descriptors do provide a benefit, by encoding robustness to alignment error. For this work, we compared two popular methods for dense alignment, subject-dependent active appearance models (AAMs) and subject-independent CLMs, on the task of action-unit detection. These comparisons were conducted through a battery of experiments across various publicly available data sets (i.e., CK+, Pain, M3, and GEMEP-FERA). We also report our performance in the recent 2011 Facial Expression Recognition and Analysis (FERA) Challenge for the subject-independent task.
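The abstract's central claim is that appearance descriptors such as HOG mainly buy robustness to small registration errors rather than extra discriminative power. The following minimal sketch (not the authors' code) illustrates that idea under simplifying assumptions: a synthetic random patch stands in for a coarsely aligned face crop, a small pixel shift simulates an alignment error, and the change in the raw-pixel representation is compared with the change in a HOG descriptor computed with scikit-image. The patch, shift sizes, and comparison metric are all illustrative choices, not details from the paper.

```python
# Illustrative sketch, assuming numpy and scikit-image are installed.
# It compares how much a raw-pixel representation and a HOG descriptor change
# when a patch is translated by a few pixels, mimicking a small registration error.
import numpy as np
from skimage.feature import hog

rng = np.random.default_rng(0)
patch = rng.random((96, 96))           # stand-in for a 96x96 grayscale face crop

def shift(img, dx, dy):
    """Translate the image by (dx, dy) pixels (circularly, for simplicity)."""
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def rel_change(a, b):
    """Relative L2 distance between two feature vectors."""
    a, b = a.ravel(), b.ravel()
    return np.linalg.norm(a - b) / (np.linalg.norm(a) + 1e-8)

misaligned = shift(patch, dx=3, dy=2)  # roughly a 3-pixel alignment error

hog_params = dict(orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
pixel_change = rel_change(patch, misaligned)
hog_change = rel_change(hog(patch, **hog_params), hog(misaligned, **hog_params))

print(f"raw-pixel change under misalignment: {pixel_change:.3f}")
print(f"HOG-descriptor change under misalignment: {hog_change:.3f}")
# Because HOG pools gradients over cells and blocks, its descriptor typically
# changes less than the raw pixels under such small shifts, which is the kind of
# robustness to alignment error the abstract attributes to appearance descriptors.
```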

Format

application/pdf

Identifier

http://eprints.qut.edu.au/51518/

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Relation

http://eprints.qut.edu.au/51518/1/features.pdf

http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6196236

Chew, Sien Wei, Lucey, Patrick J., Lucey, Simon, Saragih, Jason M., Cohn, Jeffrey F., Matthews, Iain A., & Sridharan, Sridha (2012) In the pursuit of effective affective computing: the relationship between features and registration. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), pp. 1-12.

Rights

Copyright 2012 IEEE

Source

School of Electrical Engineering & Computer Science; Science & Engineering Faculty

Keywords #080100 ARTIFICIAL INTELLIGENCE AND IMAGE PROCESSING #080106 Image Processing #090609 Signal Processing #Active appearance models #Automatic facial expression recognition #Biologically-inspired appearance descriptors #Constrained local models
Type

Journal Article