118 results for emotion recognition
in CentAUR: Central Archive University of Reading - UK
Abstract:
Empathy is the lens through which we view others' emotion expressions, and respond to them. In this study, empathy and facial emotion recognition were investigated in adults with autism spectrum conditions (ASC; N=314), parents of a child with ASC (N=297) and IQ-matched controls (N=184). Participants completed a self-report measure of empathy (the Empathy Quotient [EQ]) and a modified version of the Karolinska Directed Emotional Faces Task (KDEF) using an online test interface. Results showed that mean scores on the EQ were significantly lower in fathers (p<0.05) but not mothers (p>0.05) of children with ASC compared to controls, whilst both males and females with ASC obtained significantly lower EQ scores (p<0.001) than controls. On the KDEF, statistical analyses revealed poorer overall performance by adults with ASC (p<0.001) compared to the control group. When the 6 distinct basic emotions were analysed separately, the ASC group showed impaired performance across five out of six expressions (happy, sad, angry, afraid and disgusted). Parents of a child with ASC were not significantly worse than controls at recognising any of the basic emotions, after controlling for age and non-verbal IQ (all p>0.05). Finally, results indicated significant differences between males and females with ASC for emotion recognition performance (p<0.05) but not for self-reported empathy (p>0.05). These findings suggest that self-reported empathy deficits in fathers of autistic probands are part of the 'broader autism phenotype'. This study also reports new findings of sex differences amongst people with ASC in emotion recognition, as well as replicating previous work demonstrating empathy difficulties in adults with ASC. The use of empathy measures as quantitative endophenotypes for ASC is discussed.
Abstract:
Objective: This study was designed to examine the existence of deficits in mentalizing or theory of mind (ToM) in children with traumatic brain injury (TBI). Research design: ToM functioning was assessed in 12 children aged 6-12 years with TBI and documented frontal lobe damage and compared to 12 controls matched for age, sex and verbal ability. Brief measures of attention and memory were also included. Main outcome and results: The TBI group was significantly impaired relative to controls on the advanced ToM measure and a measure of basic emotion recognition. No difference was found in a basic measure of ToM. Conclusion: Traumatic brain damage in childhood may disrupt the developmental acquisition of emotion recognition and advanced ToM skills. The clinical and theoretical importance of these findings is discussed and the implications for the assessment and treatment of children who have experienced TBI are outlined.
Abstract:
Theory of mind ability has been associated with performance in interpersonal interactions and has been found to influence aspects such as emotion recognition, social competence, and social anxiety. Being able to attribute mental states to others requires attention to subtle communication cues such as facial emotional expressions. Decoding and interpreting emotions expressed by the face, especially those with negative valence, are essential skills to successful social interaction. The current study explored the association between theory of mind skills and attentional bias to facial emotional expressions. According to the study hypothesis, individuals with poor theory of mind skills showed preferential attention to negative faces over both non-negative faces and neutral objects. Tentative explanations for the findings are offered emphasizing the potential adaptive role of vigilance for threat as a way of allocating a limited capacity to interpret others’ mental states to obtain as much information as possible about potential danger in the social environment.
Abstract:
Three experiments examined the cultural relativity of emotion recognition using the visual search task. Caucasian-English and Japanese participants were required to search for an angry or happy discrepant face target against an array of competing distractor faces. Both cultural groups performed the task with displays that consisted of Caucasian and Japanese faces in order to investigate the effects of racial congruence on emotion detection performance. Under high perceptual load conditions, both cultural groups detected the happy face more efficiently than the angry face. When perceptual load was reduced such that target detection could be achieved by feature-matching, the English group continued to show a happiness advantage in search performance that was more strongly pronounced for other race faces. Japanese participants showed search time equivalence for happy and angry targets. Experiment 3 encouraged participants to adopt a perceptual based strategy for target detection by removing the term 'emotion' from the instructions. Whilst this manipulation did not alter the happiness advantage displayed by our English group, it reinstated it for our Japanese group, who showed a detection advantage for happiness only for other race faces. The results demonstrate cultural and linguistic modifiers on the perceptual saliency of the emotional signal and provide new converging evidence from cognitive psychology for the interactionist perspective on emotional expression recognition.
Abstract:
Background Children with callous-unemotional (CU) traits, a proposed precursor to adult psychopathy, are characterized by impaired emotion recognition, reduced responsiveness to others’ distress, and a lack of guilt or empathy. Reduced attention to faces, and more specifically to the eye region, has been proposed to underlie these difficulties, although this has never been tested longitudinally from infancy. Attention to faces occurs within the context of dyadic caregiver interactions, and early environment including parenting characteristics has been associated with CU traits. The present study tested whether infants’ preferential tracking of a face with direct gaze and levels of maternal sensitivity predict later CU traits. Methods Data were analyzed from a stratified random sample of 213 participants drawn from a population-based sample of 1233 first-time mothers. Infants’ preferential face tracking at 5 weeks and maternal sensitivity at 29 weeks were entered into a weighted linear regression as predictors of CU traits at 2.5 years. Results Controlling for a range of confounders (e.g., deprivation), lower preferential face tracking predicted higher CU traits (p = .001). Higher maternal sensitivity predicted lower CU traits in girls (p = .009), but not boys. No significant interaction between face tracking and maternal sensitivity was found. Conclusions This is the first study to show that attention to social features during infancy as well as early sensitive parenting predict the subsequent development of CU traits. Identifying such early atypicalities offers the potential for developing parent-mediated interventions in children at risk for developing CU traits.
Abstract:
Background There is a need to develop and adapt therapies for use with people with learning disabilities who have mental health problems. Aims To examine the performance of people with learning disabilities on two cognitive therapy tasks (emotion recognition and discrimination among thoughts, feelings and behaviours). We hypothesized that cognitive therapy task performance would be significantly correlated with IQ and receptive vocabulary, and that providing a visual cue would improve performance. Method Fifty-nine people with learning disabilities were assessed on the Wechsler Abbreviated Scale of Intelligence (WASI), the British Picture Vocabulary Scale-II (BPVS-II), a test of emotion recognition and a task requiring participants to discriminate among thoughts, feelings and behaviours. In the discrimination task, participants were randomly assigned to a visual cue condition or a no-cue condition. Results There was considerable variability in performance. Emotion recognition was significantly associated with receptive vocabulary, and discriminating among thoughts, feelings and behaviours was significantly associated with vocabulary and IQ. There was no effect of the cue on the discrimination task. Conclusion People with learning disabilities with higher IQs and good receptive vocabulary were more likely to be able to identify different emotions and to discriminate among thoughts, feelings and behaviours. This implies that they may more easily understand the cognitive model. Structured ways of simplifying the concepts used in cognitive therapy and methods of socialization and education in the cognitive model are required to aid participation of people with learning disabilities.
Abstract:
We examined whether it is possible to identify the emotional content of behaviour from point-light displays where pairs of actors are engaged in interpersonal communication. These actors displayed a series of emotions, which included sadness, anger, joy, disgust, fear, and romantic love. In experiment 1, subjects viewed brief clips of these point-light displays presented the right way up and upside down. In experiment 2, the importance of the interaction between the two figures in the recognition of emotion was examined. Subjects were shown upright versions of (i) the original pairs (dyads), (ii) a single actor (monad), and (iii) a dyad comprising a single actor and his/her mirror image (reflected dyad). In each experiment, the subjects rated the emotional content of the displays by moving a slider along a horizontal scale. All of the emotions received a rating for every clip. In experiment 1, when the displays were upright, the correct emotions were identified in each case except disgust; but, when the displays were inverted, performance was significantly diminished for some emotions. In experiment 2, the recognition of love and joy was impaired by the absence of the acting partner, and the recognition of sadness, joy, and fear was impaired in the non-veridical (mirror image) displays. These findings both support and extend previous research by showing that biological motion is sufficient for the perception of emotion, although inversion affects performance. Moreover, emotion perception from biological motion can be affected by the veridical or non-veridical social context within the displays.
Abstract:
In this article, we present FACSGen 2.0, new animation software for creating static and dynamic three-dimensional facial expressions on the basis of the Facial Action Coding System (FACS). FACSGen permits total control over the action units (AUs), which can be animated at all levels of intensity and applied alone or in combination to an infinite number of faces. In two studies, we tested the validity of the software for the AU appearance defined in the FACS manual and the conveyed emotionality of FACSGen expressions. In Experiment 1, four FACS-certified coders evaluated the complete set of 35 single AUs and 54 AU combinations for AU presence or absence, appearance quality, intensity, and asymmetry. In Experiment 2, lay participants performed a recognition task on emotional expressions created with FACSGen software and rated the similarity of expressions displayed by human and FACSGen faces. Results showed good to excellent classification levels for all AUs by the four FACS coders, suggesting that the AUs are valid exemplars of FACS specifications. Lay participants' recognition rates for nine emotions were high, and human and FACSGen expressions were rated as highly similar. The findings demonstrate the effectiveness of the software in producing reliable and emotionally valid expressions, and suggest its application in numerous scientific areas, including perception, emotion, and clinical and neuroscience research.
Abstract:
This workshop paper reports recent developments to a vision system for traffic interpretation which relies extensively on the use of geometrical and scene context. Firstly, a new approach to pose refinement is reported, based on forces derived from prominent image derivatives found close to an initial hypothesis. Secondly, a parameterised vehicle model is reported, able to represent different vehicle classes. This general vehicle model has been fitted to sample data, and subjected to a Principal Component Analysis to create a deformable model of common car types having 6 parameters. We show that the new pose recovery technique is also able to operate on the PCA model, to allow the structure of an initial vehicle hypothesis to be adapted to fit the prevailing context. We report initial experiments with the model, which demonstrate significant improvements to pose recovery.
Abstract:
This paper reports the development of a highly parameterised 3-D model able to adopt the shapes of a wide variety of different classes of vehicles (cars, vans, buses, etc.), and its subsequent specialisation to a generic car class which accounts for most commonly encountered types of car (including saloon, hatchback and estate cars). An interactive tool has been developed to obtain sample data for vehicles from video images. A PCA description of the manually sampled data provides a deformable model in which a single instance is described as a 6 parameter vector. Both the pose and the structure of a car can be recovered by fitting the PCA model to an image. The recovered description is sufficiently accurate to discriminate between vehicle sub-classes.
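The core of the deformable model described above, a PCA basis over manually sampled shape vectors, with each car instance encoded as a 6-parameter vector, can be sketched as follows. This is a minimal illustration under stated assumptions: the data here is random stand-in data, and the function names are hypothetical, not from the paper's implementation.

```python
import numpy as np

def build_deformable_model(samples, n_params=6):
    """Build a PCA deformable shape model from sampled shape vectors
    (one row per example vehicle). Returns the mean shape and the top
    n_params principal components, so any instance is described by a
    short parameter vector (6 parameters in the paper)."""
    mean_shape = samples.mean(axis=0)
    centred = samples - mean_shape
    # SVD of the centred data: rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    components = vt[:n_params]              # shape (n_params, n_dims)
    return mean_shape, components

def shape_from_params(mean_shape, components, params):
    """Instantiate a shape: mean plus a linear combination of modes."""
    return mean_shape + params @ components

# Toy example: 20 random "vehicle" shape vectors of dimension 30.
rng = np.random.default_rng(0)
samples = rng.normal(size=(20, 30))
mean_shape, components = build_deformable_model(samples, n_params=6)

# Encode a training sample as a 6-D parameter vector and reconstruct it.
params = (samples[0] - mean_shape) @ components.T
approx = shape_from_params(mean_shape, components, params)
```

Fitting the model to an image, as the paper does for pose and structure recovery, would then amount to searching over this 6-dimensional parameter space (plus pose) for the instance that best matches the image evidence.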
Abstract:
This study investigated the ability of neonatal larvae of the root-feeding weevil, Sitona lepidus Gyllenhal, to locate white clover Trifolium repens L. (Fabaceae) roots growing in soil and to distinguish them from the roots of other species of clover and a co-occurring grass species. Choice experiments used a combination of invasive techniques and the novel technique of high resolution X-ray microtomography to non-invasively track larval movement in the soil towards plant roots. Burrowing distances towards roots of different plant species were also examined. Newly hatched S. lepidus recognized T. repens roots and moved preferentially towards them when given a choice of roots of subterranean clover, Trifolium subterraneum L. (Fabaceae), strawberry clover Trifolium fragiferum L. (Fabaceae), or perennial ryegrass Lolium perenne L. (Poaceae). Larvae recognized T. repens roots, whether released in groups of five or singly, when released 25 mm (meso-scale recognition) or 60 mm (macro-scale recognition) away from plant roots. There was no statistically significant difference in movement rates of larvae.