2 results for Autistic-like effects

in Bucknell University Digital Commons - Pennsylvania - USA


Relevance:

80.00%

Publisher:

Abstract:

Neurodevelopmental disorders can be caused by many different genetic abnormalities that are individually rare but collectively common. Specific genetic causes, including certain copy number variants and single-gene mutations, are shared among disorders that are thought to be clinically distinct. This evidence of variability in the clinical manifestations of individual genetic variants and sharing of genetic causes among clinically distinct brain disorders is consistent with the concept of developmental brain dysfunction, a term we use to describe the abnormal brain function underlying a group of neurodevelopmental and neuropsychiatric disorders and to encompass a subset of various clinical diagnoses. Although many pathogenic genetic variants are currently thought to be variably penetrant, we hypothesise that when disorders encompassed by developmental brain dysfunction are considered as a group, the penetrance will approach 100%. The penetrance is also predicted to approach 100% when the phenotype being considered is a specific trait, such as intelligence or autistic-like social impairment, and the trait is assessed using a continuous, quantitative measure that compares probands with non-carrier family members, rather than as a qualitative, dichotomous trait that compares probands with the healthy population. Copyright 2013 Elsevier Ltd. All rights reserved.
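
To make the abstract's penetrance argument concrete, here is a minimal sketch with invented numbers (nothing below is taken from the paper): the same set of variant carriers can look incompletely penetrant when judged against a diagnostic cutoff in the general population, yet approach 100% penetrance when a continuous trait is compared within families against non-carrier relatives. The cutoff, the within-family shift, and all scores are hypothetical.

# All data are invented for illustration; the thresholds are hypothetical.

# (carrier proband score, non-carrier family member score) on an IQ-like measure
families = [(72, 105), (88, 110), (95, 122), (68, 100)]

DIAGNOSTIC_CUTOFF = 70   # hypothetical categorical threshold ("affected" if below)
FAMILY_SHIFT = 10        # hypothetical within-family deficit counted as penetrant

# Dichotomous penetrance: fraction of carriers meeting the clinical cutoff.
dichotomous = sum(proband < DIAGNOSTIC_CUTOFF for proband, _ in families) / len(families)

# Trait-based penetrance: fraction of carriers scoring well below their own
# non-carrier family member, regardless of where they fall in the population.
trait_based = sum(relative - proband >= FAMILY_SHIFT
                  for proband, relative in families) / len(families)

print(f"dichotomous penetrance: {dichotomous:.0%}")   # 25% with these toy numbers
print(f"trait-based penetrance: {trait_based:.0%}")   # 100% with these toy numbers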

Relevance:

80.00%

Publisher:

Abstract:

Speech is often a multimodal process, presented audiovisually through a talking face. One area of speech perception influenced by visual speech is speech segmentation, or the process of breaking a stream of speech into individual words. Mitchel and Weiss (2013) demonstrated that a talking face contains specific cues to word boundaries and that subjects can correctly segment a speech stream when given a silent video of a speaker. The current study expanded upon these results, using an eye tracker to identify highly attended facial features of the audiovisual display used in Mitchel and Weiss (2013). In Experiment 1, subjects were found to spend the most time watching the eyes and mouth, with a trend suggesting that the mouth was viewed more than the eyes. Although subjects displayed significant learning of word boundaries, performance was not correlated with gaze duration on any individual feature, nor was performance correlated with a behavioral measure of autistic-like traits (the Social Responsiveness Scale; SRS). However, trends suggested that as autistic-like traits increased, gaze duration on the mouth increased and gaze duration on the eyes decreased, similar to significant trends seen in autistic populations (Boraston & Blakemore, 2007). In Experiment 2, the same video was modified so that a black bar covered the eyes or mouth. Both videos elicited learning of word boundaries that was equivalent to that seen in the first experiment. Again, no correlations were found between segmentation performance and SRS scores in either condition. These results, taken with those in Experiment 1, suggest that neither the eyes nor the mouth is critical to speech segmentation and that perhaps more global head movements indicate word boundaries (see Graf, Cosatto, Strom, & Huang, 2002). Future work will elucidate the contribution of individual features relative to global head movements, as well as extend these results to additional types of speech tasks.
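
As an illustration of the kind of correlational analysis described above, relating segmentation performance to gaze duration on individual facial features and to SRS scores, here is a minimal sketch with randomly generated data. The sample size, variable names, and values are hypothetical and this is not the study's analysis code.

# Hypothetical data standing in for the measures described in the abstract.
import numpy as np

rng = np.random.default_rng(0)
n_subjects = 20

segmentation_accuracy = rng.uniform(0.5, 0.9, n_subjects)   # proportion correct at test
gaze_mouth_s = rng.uniform(5.0, 40.0, n_subjects)           # seconds fixating the mouth
gaze_eyes_s = rng.uniform(5.0, 40.0, n_subjects)            # seconds fixating the eyes
srs_scores = rng.integers(20, 90, n_subjects)                # autistic-like trait measure (SRS)

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D arrays."""
    return float(np.corrcoef(x, y)[0, 1])

print("accuracy ~ mouth gaze:", pearson_r(segmentation_accuracy, gaze_mouth_s))
print("accuracy ~ eye gaze:  ", pearson_r(segmentation_accuracy, gaze_eyes_s))
print("accuracy ~ SRS:       ", pearson_r(segmentation_accuracy, srs_scores))
print("SRS ~ mouth gaze:     ", pearson_r(srs_scores, gaze_mouth_s))   # trend reported in Experiment 1
print("SRS ~ eye gaze:       ", pearson_r(srs_scores, gaze_eyes_s))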