Cascading appearance-based features for visual voice activity detection


Author(s): Navarathna, Rajitha; Dean, David B.; Lucey, Patrick J.; Sridharan, Sridha; Fookes, Clinton B.
Date(s)

21/07/2010

Abstract

The detection of voice activity is a challenging problem, especially when the level of acoustic noise is high. Most current approaches only utilise the audio signal, making them susceptible to acoustic noise. An obvious approach to overcome this is to use the visual modality. The current state-of-the-art visual feature extraction technique is one that uses a cascade of visual features (i.e. 2D-DCT, feature mean normalisation, interstep LDA). In this paper, we investigate the effectiveness of this technique for the task of visual voice activity detection (VAD), and analyse each stage of the cascade and quantify the relative improvement in performance gained by each successive stage. The experiments were conducted on the CUAVE database and our results highlight that the dynamics of the visual modality can be used to good effect to improve visual voice activity detection performance.
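To make the cascade concrete, the sketch below is a minimal, illustrative Python rendering of the three stages named in the abstract: a 2D-DCT of each mouth-region image, feature mean normalisation over the utterance, and an LDA projection over stacked consecutive frames to capture dynamics. The coefficient count, context window, and the simple row-major coefficient selection (rather than a zig-zag scan) are assumptions for illustration, not the authors' implementation.

    import numpy as np
    from scipy.fftpack import dct

    def cascade_features(roi_frames, n_dct=100, context=2, lda=None):
        """Illustrative cascade: 2D-DCT -> feature mean normalisation -> interstep LDA.

        roi_frames : (T, H, W) array of grayscale mouth-region images.
        lda        : a projector already fitted on labelled speech / non-speech
                     data (e.g. sklearn's LinearDiscriminantAnalysis); if None,
                     the stacked features are returned unprojected.
        """
        # Stage 1: 2D-DCT of each frame, keeping a fixed number of low-order
        # coefficients (simple selection; a zig-zag scan is more common).
        feats = []
        for frame in roi_frames:
            coeffs = dct(dct(frame, axis=0, norm='ortho'), axis=1, norm='ortho')
            feats.append(coeffs.flatten()[:n_dct])
        feats = np.asarray(feats)                      # shape (T, n_dct)

        # Stage 2: feature mean normalisation over the utterance,
        # removing static per-speaker and lighting bias.
        feats -= feats.mean(axis=0, keepdims=True)

        # Stage 3: stack a window of consecutive frames so the LDA projection
        # can exploit visual dynamics as well as static appearance.
        T = len(feats)
        stacked = np.stack(
            [feats[np.clip(np.arange(t - context, t + context + 1), 0, T - 1)].ravel()
             for t in range(T)])
        return lda.transform(stacked) if lda is not None else stacked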

Format

application/pdf

Identifier

http://eprints.qut.edu.au/33214/

Publisher

Auditory-visual Speech Processing (AVSP)

Relation

http://eprints.qut.edu.au/33214/1/c33214.pdf

http://www.avsp2010.org/

Navarathna, Rajitha, Dean, David B., Lucey, Patrick J., Sridharan, Sridha, & Fookes, Clinton B. (2010) Cascading appearance-based features for visual voice activity detection. In Proceedings of the International Conference on Auditory-Visual Speech Processing (AVSP 2010), The Prince Hakone, Hakone, Kanagawa, Japan, pp. 3-7.

Rights

Copyright 2010 [please consult the authors]

Source

Faculty of Built Environment and Engineering; Information Security Institute; School of Engineering Systems

Keywords #080100 ARTIFICIAL INTELLIGENCE AND IMAGE PROCESSING #Visual Speech #Voice Activity Detection #CUAVE Database #Static Features #Dynamic Features
Type

Conference Paper