2 results for FACIAL ASYMMETRY
at Bucknell University Digital Commons - Pennsylvania - USA
Abstract:
This study examines the links between human perceptions, cognitive biases, and neural processing of symmetrical stimuli. While preferences for symmetry have largely been examined in the context of disorders such as obsessive-compulsive disorder and autism spectrum disorders, we examine these phenomena in non-clinical subjects and suggest that such preferences are distributed throughout the typical population as part of our cognitive and neural architecture. In Experiment 1, 82 young adults reported on the frequency of their obsessive-compulsive spectrum behaviors. Subjects also performed an emotional Stroop variant of an Implicit Association Task (the OC-CIT) developed to assess cognitive biases for symmetry. The data reveal not only that subjects evidence a cognitive conflict when asked to match images of positive affect with asymmetrical stimuli, and disgust with symmetry, but also that their slowed reaction times when asked to do so were predicted by reports of OC behavior, particularly checking behavior. In Experiment 2, 26 participants were administered an oddball Event-Related Potential task specifically designed to assess sensitivity to symmetry, as well as the OC-CIT. These data revealed that reaction times on the OC-CIT were strongly predicted by frontal electrode sites indicating faster processing of an asymmetrical stimulus (unparallel lines) relative to a symmetrical stimulus (parallel lines). The results point to an overall cognitive bias linking disgust with asymmetry and suggest that such cognitive biases are reflected in neural responses to symmetrical/asymmetrical stimuli.
Abstract:
Speech is typically a multimodal phenomenon, yet few studies have focused on the exclusive contributions of visual cues to language acquisition. To address this gap, we investigated whether visual prosodic information can facilitate speech segmentation. Previous research has demonstrated that language learners can use lexical stress and pitch cues to segment speech and that learners can extract this information from talking faces. Thus, we created an artificial speech stream that contained minimal segmentation cues and paired it with two synchronous facial displays in which visual prosody was either informative or uninformative for identifying word boundaries. Across three familiarisation conditions (audio stream alone, facial streams alone, and paired audiovisual), learning occurred only when the facial displays were informative about word boundaries, suggesting that facial cues can help learners solve the early challenges of language acquisition.