Visual Speech Segmentation: Using Facial Cues to Locate Word Boundaries in Continuous Speech


Author(s): Mitchel, Aaron; Weiss, Daniel J.
Date(s)

01/01/2014

Abstract

Speech is typically a multimodal phenomenon, yet few studies have focused on the exclusive contributions of visual cues to language acquisition. To address this gap, we investigated whether visual prosodic information can facilitate speech segmentation. Previous research has demonstrated that language learners can use lexical stress and pitch cues to segment speech and that learners can extract this information from talking faces. Thus, we created an artificial speech stream that contained minimal segmentation cues and paired it with two synchronous facial displays in which visual prosody was either informative or uninformative for identifying word boundaries. Across three familiarisation conditions (audio stream alone, facial streams alone, and paired audiovisual), learning occurred only when the facial displays were informative about word boundaries, suggesting that facial cues can help learners solve the early challenges of language acquisition.

Identifier

https://digitalcommons.bucknell.edu/fac_journ/782

Publisher

Bucknell Digital Commons

Source

Faculty Journal Articles

Keywords #speech segmentation #visual prosody #audiovisual speech #language acquisition #multisensory integration #Cognitive Psychology
Type

text