872 results for Facial Expressions
Abstract:
Theory of mind ability has been associated with performance in interpersonal interactions and has been found to influence aspects such as emotion recognition, social competence, and social anxiety. Attributing mental states to others requires attention to subtle communication cues such as facial emotional expressions. Decoding and interpreting emotions expressed by the face, especially those with negative valence, are essential skills for successful social interaction. The current study explored the association between theory of mind skills and attentional bias to facial emotional expressions. Consistent with the study hypothesis, individuals with poor theory of mind skills showed preferential attention to negative faces over both non-negative faces and neutral objects. Tentative explanations for the findings are offered, emphasizing the potential adaptive role of vigilance for threat: allocating a limited capacity for interpreting others’ mental states so as to obtain as much information as possible about potential danger in the social environment.
Abstract:
Motivated by conflicting evidence in the literature, we re-assessed the role of facial feedback in detecting quantitative or qualitative changes in others’ emotional expressions. Fifty-three healthy adults observed self-paced morph sequences in which the emotional facial expression changed either quantitatively (i.e., sad-to-neutral, neutral-to-sad, happy-to-neutral, neutral-to-happy) or qualitatively (i.e., from sad to happy, or from happy to sad). Observers held a pen in their mouth to induce smiling or frowning during the detection task. When morph sequences started or ended with neutral expressions, we replicated a congruency effect: happiness was perceived longer and sooner while smiling; sadness was perceived longer and sooner while frowning. Interestingly, no such congruency effects occurred for transitions between emotional expressions. These results suggest that facial feedback is especially useful when evaluating the intensity of a facial expression, but less so when we have to recognize which emotion our counterpart is expressing.
Abstract:
Schizophrenia patients have been shown to be compromised in their ability to recognize facial emotion, a deficit related to negative symptom severity. However, to date, most studies have used static rather than dynamic depictions of faces. Nineteen patients with schizophrenia were compared with seventeen controls on two tasks: the first involved discrimination of facial identity, emotion, and butterfly wings; the second tested emotion recognition using both static and dynamic stimuli. In the first task, the patients performed more poorly than controls for emotion discrimination only, confirming a specific deficit in facial emotion recognition. In the second task, patients performed more poorly in both static and dynamic facial emotion processing. An interesting pattern of associations suggestive of a possible double dissociation emerged in relation to correlations with symptom ratings: high negative symptom ratings were associated with poorer recognition of static displays of emotion, whereas high positive symptom ratings were associated with poorer recognition of dynamic displays of emotion. However, while the strength of associations between negative symptom ratings and accuracy during static and dynamic facial emotion processing was significantly different, those between positive symptom ratings and task performance were not. The results confirm a facial emotion-processing deficit in schizophrenia using more ecologically valid dynamic expressions of emotion. The pattern of findings may reflect differential patterns of cortical dysfunction associated with negative and positive symptoms of schizophrenia, in the context of differential neural mechanisms for the processing of static and dynamic displays of facial emotion.
Abstract:
Although photographs of emotional facial expressions are commonly used to study the processing of affective information, current stimulus sets have various methodological limitations. To improve the ecological validity of work on facial emotion recognition (FER), this study proposes a new set of dynamic stimuli consisting of virtual characters expressing the six basic emotions at different intensities. Preliminary validation of the stimuli was carried out by comparing them with the POFA stimuli. In Study 1, the emotional content of 84 static avatars and 48 POFA photographs was rated by 150 students. In Study 2, FER was assessed in 134 students using dynamic avatars. Emotion recognition ability was similar to that reported in the scientific literature and to performance with the POFA. The results support the conclusion that the avatars constitute a valid stimulus set for studying FER, offering a significant contribution to emotion research. As avatars are increasingly used for intervention and research purposes, these stimuli also open the way to various applications, including the study of violent behavior. Further work will be needed to continue their validation.
Abstract:
Spontaneous mimicry is a marker of empathy. Conditions characterized by reduced spontaneous mimicry (e.g., autism) also display deficits in sensitivity to social rewards. We tested if spontaneous mimicry of socially rewarding stimuli (happy faces) depends on the reward value of stimuli in 32 typical participants. An evaluative conditioning paradigm was used to associate different reward values with neutral target faces. Subsequently, electromyographic activity over the Zygomaticus Major was measured whilst participants watched video clips of the faces making happy expressions. Higher Zygomaticus Major activity was found in response to happy faces conditioned with high reward versus low reward. Moreover, autistic traits in the general population modulated the extent of spontaneous mimicry of happy faces. This suggests a link between reward and spontaneous mimicry and provides a possible underlying mechanism for the reduced response to social rewards seen in autism.
Abstract:
Joint attention (JA) and spontaneous facial mimicry (SFM) are fundamental processes in social interactions, and they are closely related to empathic abilities. When tested independently, both of these processes have usually been observed to be atypical in individuals with autism spectrum conditions (ASC). However, it is not known how these processes interact with each other in relation to autistic traits. This study addresses this question by testing the impact of JA on SFM of happy faces using a truly interactive paradigm. Sixty-two neurotypical participants engaged in gaze-based social interaction with an anthropomorphic, gaze-contingent virtual agent. The agent either established JA by initiating eye contact or looked away, before looking at an object and expressing happiness or disgust. Eye tracking was used to make the agent's gaze behavior and facial actions contingent on the participants' gaze. SFM of happy expressions was measured by electromyography (EMG) recording over the Zygomaticus Major muscle. Results showed that JA augments SFM in individuals with low, compared with high, autistic traits. These findings are in line with reports of reduced impact of JA on action imitation in individuals with ASC. Moreover, they suggest that investigating atypical interactions between empathic processes, instead of testing these processes individually, might be crucial to understanding the nature of social deficits in autism.
Abstract:
The primary aim of this study was to investigate facial emotion recognition (FER) in patients with somatoform disorders (SFD). Also of interest was the extent to which concurrent alexithymia contributed to any changes in emotion recognition accuracy. Twenty patients with SFD and 20 healthy controls, matched for age, sex, and education, were assessed with the Facially Expressed Emotion Labelling Test of FER and the 26-item Toronto Alexithymia Scale. Patients with SFD exhibited elevated alexithymia symptoms relative to healthy controls. Patients with SFD also recognized significantly fewer emotional expressions than did the healthy controls. However, the group difference in emotion recognition accuracy became nonsignificant once the influence of alexithymia was controlled for statistically. This suggests that the deficit in FER observed in the patients with SFD was most likely a consequence of concurrent alexithymia. It should be noted that neither depression nor anxiety was significantly related to emotion recognition accuracy, suggesting that these variables did not contribute to the emotion recognition deficit. Impaired FER observed in the patients with SFD could plausibly have a negative influence on these individuals’ social functioning.
Abstract:
Significant facial emotion recognition (FER) deficits have been observed in participants exhibiting high levels of eating psychopathology. The current study aimed to determine whether the pattern of FER deficits is influenced by the intensity of facial emotion, and to establish whether eating psychopathology is associated with a specific pattern of emotion recognition errors that is independent of other psychopathological or personality factors. Eighty females, 40 high and 40 low scorers on the Eating Disorders Inventory (EDI), were presented with a series of faces, each featuring one of five emotional expressions at one of four intensities, and were asked to identify the emotion portrayed. Results revealed that, in comparison to low EDI scorers, high scorers correctly recognised significantly fewer expressions, particularly of fear and anger. There was also a trend for this deficit to be more evident for subtle displays of emotion (50% intensity). Deficits in anger recognition were related specifically to scores on the body dissatisfaction subscale of the EDI. Error analyses revealed that, in comparison to low EDI scorers, high scorers made significantly more fear-as-anger errors. Also, a tendency to label anger expressions as sadness was related to body dissatisfaction. Current findings confirm FER deficits in subclinical eating psychopathology and extend these findings to subtle expressions of emotion. Furthermore, this is the first study to establish that these deficits are related to a specific pattern of recognition errors. Impaired FER could disrupt normal social functioning and might represent a risk factor for the development of more severe psychopathology.
Abstract:
Background: Investigating genetic modulation of emotion processing may contribute to the understanding of heritable mechanisms of emotional disorders. The aim of the present study was to test the effects of catechol-O-methyltransferase (COMT) val158met and serotonin-transporter-linked promoter region (5-HTTLPR) polymorphisms on facial emotion processing in healthy individuals. Methods: Two hundred and seventy-five (167 female) participants were asked to complete a computerized facial affect recognition task, which involved four experimental conditions, each containing one type of emotional face (fearful, angry, sad, or happy) intermixed with neutral faces. Participants were asked to indicate whether the face displayed an emotion or was neutral. The COMT-val158met and 5-HTTLPR polymorphisms were genotyped. Results: Met homozygotes (COMT) showed a stronger bias to perceive neutral faces as expressions of anger, compared with val homozygotes. In contrast, S-homozygotes (5-HTTLPR) showed a reduced bias to perceive neutral faces as expressions of happiness, compared to L-homozygotes. No interaction between 5-HTTLPR and COMT was found. Conclusions: These results add to the knowledge of individual differences in social cognition that are modulated via serotonergic and dopaminergic systems. This could potentially contribute to the understanding of the mechanisms of susceptibility to emotional disorders.
Abstract:
Faces are complex patterns that often differ in only subtle ways. Face recognition algorithms have difficulty coping with differences in lighting, cameras, pose, expression, etc. We propose a novel approach to facial recognition based on a new feature extraction method called fractal image-set encoding. This feature extraction method is a specialized fractal image coding technique that makes fractal codes more suitable for object and face recognition. A fractal code of a gray-scale image can be divided into two parts: geometrical parameters and luminance parameters. We show that fractal codes for an image are not unique and that we can change the set of fractal parameters without significant change in the quality of the reconstructed image. Fractal image-set coding keeps the geometrical parameters the same for all images in the database. Differences between images are captured in the non-geometrical, or luminance, parameters, which are faster to compute. Results on a subset of the XM2VTS database are presented.
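As a rough illustration of the luminance half of a fractal code, the sketch below fits the contrast and brightness parameters that map a domain block onto a range block by least squares. The function name `luminance_params` and the plain least-squares formulation are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def luminance_params(domain_block, range_block):
    """Least-squares contrast s and brightness o so that
    s * domain + o approximates range (the luminance part of a
    fractal code); geometry is assumed already fixed."""
    d = domain_block.ravel()
    r = range_block.ravel()
    # Solve [d, 1] @ [s, o] ~= r in the least-squares sense.
    A = np.vstack([d, np.ones_like(d)]).T
    (s, o), *_ = np.linalg.lstsq(A, r, rcond=None)
    return s, o
```

Holding the geometrical parameters fixed across the database, as the abstract describes, reduces per-image encoding to fits of exactly this kind.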
Abstract:
Acoustically, vehicles are extremely noisy environments, and as a consequence audio-only in-car voice recognition systems perform very poorly. Because the visual modality is immune to acoustic noise, using the visual lip information from the driver is a viable strategy for circumventing this problem. However, implementing such an approach requires a system able to accurately locate and track the driver’s face and facial features in real time. In this paper we present such an approach using the Viola-Jones algorithm. Our results show that the Viola-Jones approach is a suitable method for locating and tracking the driver’s lips despite visual variability in illumination and head pose.
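The Viola-Jones detector owes much of its real-time speed to the integral image, which turns any rectangular feature sum into a constant-time lookup. A minimal sketch (function names and the padded layout are our own, not from the paper):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row and left column, so
    rectangle sums need no boundary checks."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] using only four table lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```

Haar-like features, the building blocks of the Viola-Jones cascade, are differences of such rectangle sums, which is why the whole detector scales to video frame rates.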
Abstract:
Gabor representations have been widely used in facial analysis (face recognition, face detection, and facial expression detection) due to their biological relevance and computational properties. Two popular Gabor representations in the literature are: 1) Log-Gabor filters and 2) Gabor energy filters. Even though these representations are somewhat similar, they also have distinct differences: the Log-Gabor filters mimic the simple cells in the visual cortex, while the Gabor energy filters emulate the complex cells, which causes subtle differences in the responses. In this paper, we analyze the difference between these two Gabor representations and quantify these differences on the task of facial action unit (AU) detection. In our experiments conducted on the Cohn-Kanade dataset, we report an average area under the ROC curve (A′) of 92.60% across 17 AUs for the Gabor energy filters, while the Log-Gabor representation achieved an average A′ of 96.11%. This result suggests that the small spatial differences that the Log-Gabor filters pick up on are more useful for AU detection than the differences in contours and edges that the Gabor energy filters extract.
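A Gabor energy response can be sketched by combining a quadrature pair of filters (even and odd phase), which is how it emulates complex cells as described above. The helper names and parameter defaults below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(size, wavelength, theta, sigma, phase):
    """Real-valued Gabor kernel: Gaussian envelope times a cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)  # rotate to orientation theta
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength + phase)
    return envelope * carrier

def gabor_energy(image, size=15, wavelength=8.0, theta=0.0, sigma=4.0):
    """Gabor energy: root of summed squared even/odd quadrature responses."""
    even = gabor_kernel(size, wavelength, theta, sigma, phase=0.0)
    odd = gabor_kernel(size, wavelength, theta, sigma, phase=np.pi / 2)
    windows = sliding_window_view(image, even.shape)  # 'valid' correlation
    r_even = np.einsum('ijkl,kl->ij', windows, even)
    r_odd = np.einsum('ijkl,kl->ij', windows, odd)
    return np.sqrt(r_even**2 + r_odd**2)
```

The energy output is phase-invariant, whereas a single (Log-)Gabor response preserves phase, which is the root of the response differences the paper quantifies.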
Abstract:
In automatic facial expression detection, highly accurate registration is desired, which can be achieved via a deformable model approach in which a dense mesh of 60-70 points on the face is used, such as an active appearance model (AAM). However, for applications where manually labeling frames is prohibitive, AAMs do not work well, as they do not generalize well to unseen subjects. As such, a coarser approach is taken for person-independent facial expression detection, where just a few key features (such as the face and eyes) are tracked using a Viola-Jones type approach. The tracked image is normally post-processed to encode shift and illumination invariance using a linear bank of filters. Recently, it was shown that this preprocessing step is of no benefit once close-to-ideal registration has been obtained. In this paper, we present a system based on the Constrained Local Model (CLM), a generic, person-independent face alignment algorithm that achieves high accuracy. We compare these results against LBP feature extraction on the CK+ and GEMEP datasets.
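As a rough sketch of the LBP feature extraction used as the comparison baseline above, the code below computes a basic 8-neighbour local binary pattern code per pixel. The function name and the fixed 3x3 neighbourhood are simplifying assumptions (practical systems typically histogram these codes over image cells):

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour LBP: threshold each neighbour against the centre
    pixel and pack the eight resulting bits into a code in [0, 255]."""
    rows, cols = gray.shape
    centre = gray[1:-1, 1:-1]
    # Neighbour offsets, one per bit position.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(centre, dtype=np.int64)
    for bit, (dr, dc) in enumerate(shifts):
        neigh = gray[1 + dr:rows - 1 + dr, 1 + dc:cols - 1 + dc]
        code += (neigh >= centre).astype(np.int64) << bit
    return code
```

Because the code depends only on sign comparisons against the centre pixel, it is invariant to monotonic illumination changes, which is why LBP is a common post-registration encoding.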
Abstract:
Occlusion is a major challenge for facial expression recognition (FER) in real-world situations. Previous FER efforts to address occlusion suffer from loss of appearance features and are largely limited to a few occlusion types and a single testing strategy. This paper presents a robust approach for FER in occluded images that addresses these issues. A set of Gabor-based templates is extracted from images in the gallery using a Monte Carlo algorithm. These templates are converted into distance features using template matching, and the resulting feature vectors are robust to occlusion. Occluded eye and mouth regions and randomly placed occlusion patches are used for testing. Two testing strategies analyze the effects of these occlusions on overall recognition performance as well as on each facial expression. Experimental results on the Cohn-Kanade database confirm the high robustness of our approach and provide useful insights about the effects of occlusion on FER. Performance is also compared with previous approaches.
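The template-matching step can be sketched as follows: each template is slid over the image and reduced to a single minimum-distance feature, so an occluding patch degrades only the templates that land on it while the rest of the feature vector survives intact. The helper below uses plain squared distance as an illustrative stand-in for the paper's exact matching measure:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def template_distance_features(image, templates):
    """For each template, slide it over the image and record the minimum
    squared distance: one occlusion-robust scalar feature per template."""
    feats = []
    for t in templates:
        windows = sliding_window_view(image, t.shape)
        d = ((windows - t) ** 2).sum(axis=(-2, -1))
        feats.append(d.min())
    return np.array(feats)
```

A classifier trained on such feature vectors sees a localized occlusion as noise in a few coordinates rather than a wholesale change of the appearance representation.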