2 results for Autistic personality traits
in Bucknell University Digital Commons - Pennsylvania - USA
Abstract:
Aims: To determine whether a learning disability (LD) label leads to stigmatization. Study Design: This research used a 2 (sex of participant) x 2 (LD label) x 2 (sex of stimulus person) factorial design. Place and Duration of Study: Bucknell University, between October 2010 and April 2011. Methodology: The sample included 200 participants (137 women and 63 men, ranging in age from 18 to 75 years, M = 26.41). Participants rated the stimulus individual on 27 personality traits, 8 life-success measures, and the Big Five personality dimensions; they also completed a social desirability measure. Results: A MANOVA revealed a main effect of the learning disability description, F(6, 185) = 6.41, p < .0001, eta² = .17, on the Big Five personality dimensions Emotional Stability, F(1, 185) = 13.39, p < .001, eta² = .066, and Openness to Experience, F(1, 185) = 7.12, p < .008, eta² = .036. Stimulus individuals described as having a learning disability were perceived as less emotionally stable and more open to experience than those described as not having one. A second MANOVA revealed a main effect of disability status, F(8, 183) = 4.29, p < .0001, eta² = .158, on the life-success items Attractiveness, F(1, 198) = 16.63, p < .0001, eta² = .080, and Future Success, F(1, 198) = 4.57, p < .034, eta² = .023. Stimulus individuals described as having a learning disability were perceived as less attractive and as having less potential for success than those described as not having one. Conclusion: These results provide evidence that a bias exists toward those who have learning disabilities: the mere presence of an LD label was enough to produce differential perception of people with and without LDs.
Abstract:
Speech is often a multimodal process, presented audiovisually through a talking face. One aspect of speech perception influenced by visual speech is speech segmentation, the process of breaking a continuous stream of speech into individual words. Mitchel and Weiss (2013) demonstrated that a talking face contains specific cues to word boundaries and that subjects can correctly segment a speech stream when given only a silent video of a speaker. The current study expanded on these results, using an eye tracker to identify the most-attended facial features of the audiovisual display used in Mitchel and Weiss (2013). In Experiment 1, subjects spent the most time watching the eyes and mouth, with a trend suggesting that the mouth was viewed more than the eyes. Although subjects displayed significant learning of word boundaries, performance was not correlated with gaze duration on any individual feature, nor with a behavioral measure of autistic-like traits, the Social Responsiveness Scale (SRS). However, trends suggested that as autistic-like traits increased, gaze duration on the mouth increased and gaze duration on the eyes decreased, similar to significant trends seen in autistic populations (Boraston & Blakemore, 2007). In Experiment 2, the same video was modified so that a black bar covered either the eyes or the mouth. Both videos elicited learning of word boundaries equivalent to that seen in the first experiment, and again no correlations were found between segmentation performance and SRS scores in either condition. These results, taken together with those of Experiment 1, suggest that neither the eyes nor the mouth are critical to speech segmentation and that more global head movements may indicate word boundaries (see Graf, Cosatto, Strom, & Huang, 2002). Future work will elucidate the contribution of individual features relative to global head movements, as well as extend these results to additional types of speech tasks.