Multimodal Integration in Statistical Learning: Evidence from the McGurk Illusion
Author(s): Mitchel, Aaron D.; Christiansen, Morten H.; Weiss, Daniel J.
Date(s)

01/01/2014

Abstract

Recent advances in the field of statistical learning have established that learners are able to track regularities of multimodal stimuli, yet it is unknown whether the statistical computations are performed on integrated representations or on separate, unimodal representations. In the present study, we investigated the ability of adults to integrate audio and visual input during statistical learning. We presented learners with a speech stream synchronized with a video of a speaker's face. In the critical condition, the visual (e.g., /gi/) and auditory (e.g., /mi/) signals were occasionally incongruent, which we predicted would produce the McGurk illusion, resulting in the perception of an audiovisual syllable (e.g., /ni/). In this way, we used the McGurk illusion to manipulate the underlying statistical structure of the speech streams, such that perception of these illusory syllables facilitated participants' ability to segment the speech stream. Our results therefore demonstrate that participants can integrate audio and visual input to perceive the McGurk illusion during statistical learning. We interpret our findings as support for modality-interactive accounts of statistical learning.

Identifier

https://digitalcommons.bucknell.edu/fac_journ/890

Publisher

Bucknell Digital Commons

Source

Faculty Journal Articles

Keywords: multisensory statistical learning; statistical learning mechanisms; multisensory perception; language acquisition; McGurk illusion; multisensory integration; audiovisual speech perception; Cognition and Perception; Psychology
Type

text