951 results for facial recognition
Abstract:
Background This study aims to examine the relationship between the levels of anger that individuals with intellectual disabilities report in themselves and those individuals' ability to recognize emotions. It was hypothesized that higher self-reported anger would be linked to a lower ability to recognize facial emotional expressions and to an increased tendency to interpret facial expressions in a hostile or negative manner. It was also hypothesized that increased levels of anger might lead to altered perception of particular emotions.
Method A cross-sectional survey design was used. Thirty participants completed a test of facial emotion recognition (FER) and a self-report anger inventory (Benson & Ivins 1992) as part of a structured interview.
Results Individuals with higher self-reported anger did not show significantly reduced FER performance, nor did they interpret facial expressions in a more hostile manner, compared with individuals with lower self-reported anger. However, they were less accurate in recognizing neutral facial expressions.
Conclusions It is tentatively suggested that individuals with high levels of anger may be more likely to perceive emotional content in a neutral facial expression because of their heightened emotional arousal.
Abstract:
Empirical studies concerning face recognition suggest that faces may be stored in memory by a few canonical representations. Models of visual perception are based on image representations in cortical area V1 and beyond, which contain many cell layers for feature extraction. Simple, complex and end-stopped cells provide input for line, edge and keypoint detection. Detected events provide a rich, multi-scale object representation, and this representation can be stored in memory in order to identify objects. In this paper, the above context is applied to face recognition. The multi-scale line/edge representation is explored in conjunction with keypoint-based saliency maps for Focus-of-Attention. Recognition rates of up to 96% were achieved by combining frontal and 3/4 views, and recognition was quite robust against partial occlusions.
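The line/edge and keypoint machinery described above can be illustrated, very loosely, with off-the-shelf operators. The following Python/OpenCV sketch is a hypothetical stand-in, not the authors' V1-based model: Canny edges computed at several Gaussian blur scales play the role of the multi-scale line/edge representation, and ORB keypoints blurred into a smooth map play the role of the keypoint-based saliency map used for Focus-of-Attention.

```python
# Hypothetical sketch of "multi-scale edges weighted by a keypoint saliency map".
# All operator choices (Canny, ORB, Gaussian blur widths) are assumptions for illustration.
import cv2
import numpy as np

def multiscale_edges(gray, scales=(1.0, 2.0, 4.0)):
    """Edge maps computed after Gaussian blurring at several scales."""
    maps = []
    for sigma in scales:
        blurred = cv2.GaussianBlur(gray, (0, 0), sigma)
        maps.append(cv2.Canny(blurred, 50, 150))
    return maps

def keypoint_saliency(gray, sigma=15.0):
    """Saliency map: detected keypoints spread into a smooth attention map."""
    h, w = gray.shape
    orb = cv2.ORB_create(nfeatures=200)
    keypoints = orb.detect(gray, None)
    saliency = np.zeros((h, w), dtype=np.float32)
    for kp in keypoints:
        x = min(int(round(kp.pt[0])), w - 1)
        y = min(int(round(kp.pt[1])), h - 1)
        saliency[y, x] += 1.0
    saliency = cv2.GaussianBlur(saliency, (0, 0), sigma)
    if saliency.max() > 0:
        saliency /= saliency.max()
    return saliency

def attended_edge_representation(gray):
    """Weight each multi-scale edge map by the saliency map (Focus-of-Attention)."""
    saliency = keypoint_saliency(gray)
    return [edges.astype(np.float32) * saliency for edges in multiscale_edges(gray)]

if __name__ == "__main__":
    # Synthetic test image: a bright rectangle on a dark background.
    img = np.zeros((128, 128), dtype=np.uint8)
    cv2.rectangle(img, (32, 32), (96, 96), 255, -1)
    rep = attended_edge_representation(img)
    print([m.shape for m in rep])
```

In a recognition pipeline along the lines sketched in the abstract, representations such as these would be stored per identity and view (e.g., frontal and 3/4) and matched against the representation of a probe face.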
Abstract:
Background: This thesis comprises two articles on the study of emotional facial expressions. The first article describes the process of developing a new set of emotional stimuli, while the second uses this set to study the effect of trait anxiety on the recognition of static expressions. Methods: A total of 1088 emotional clips (34 actors x 8 emotions x 4 exemplars) were aligned spatially and temporally so that the eyes and nose of each actor occupy the same location in every video. All videos last 500 ms and contain the apex of the expression. The set of static expressions was created from the last frame of the clips. The stimuli underwent a rigorous validation process. In the second study, the static expressions were used in conjunction with the Bubbles method to study emotion recognition in anxious participants. Results: In the first study, the best stimuli were selected [2 (static & dynamic) x 8 (expressions) x 10 (actors)] and constitute the STOIC expression set. In the second study, it is shown that individuals with trait anxiety preferentially use the low spatial frequencies of the mouth region of the face and recognize fearful expressions better. Discussion: The STOIC facial expression set has unique characteristics that set it apart from other sets. It can be downloaded free of charge, it contains natural videos, and all stimuli have been aligned, making it a tool of choice for the scientific community and for clinicians. The static STOIC stimuli were used to take a first step in research on emotion perception in individuals with trait anxiety. We believe that the use of low spatial frequencies underlies the better performance of these individuals, and that this type of visual information disambiguates expressions of fear and surprise. We also believe that it is neuroticism (the overlap between anxiety and depression), and not anxiety itself, that is associated with better recognition of fearful facial expressions. The use of instruments measuring this construct should be considered in future studies.
Abstract:
Humans communicate through different kinds of channels: words, voice, body gestures, emotions, etc. For this reason, a computer must perceive these various communication channels in order to interact intelligently with humans, for example by using microphones and webcams. In this thesis, we are interested in determining human emotions from images or video of faces, so that this information can then be used in various application domains. The thesis begins with a brief introduction to machine learning, focusing on the models and algorithms we used, such as multilayer perceptrons, convolutional neural networks and autoencoders. It then presents the results of applying these models to several datasets of facial expressions and emotions. We concentrate on the study of different kinds of autoencoders (denoising autoencoders, contractive autoencoders, etc.) in order to reveal some of their limitations, such as the possibility of co-adaptation between filters or of obtaining an overly smooth spectral curve, and we explore new ideas to address these problems. We also propose a new approach to overcome a limitation of autoencoders traditionally trained in a purely unsupervised way, that is, without using any knowledge of the task we ultimately want to solve (such as predicting class labels): we develop a new semi-supervised training criterion that exploits a small number of labelled examples in combination with a large quantity of unlabelled data, in order to learn a representation suited to the classification task and to obtain better classification performance. Finally, we describe the general operation of our emotion detection system and propose new ideas that could lead to future work.
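The semi-supervised idea described above, a reconstruction objective on plentiful unlabelled data combined with a classification objective on a small labelled subset, can be illustrated with a minimal PyTorch sketch. The choices below (flattened 48x48 face images, 7 emotion classes, an MSE reconstruction term, a weighting factor alpha) are assumptions for illustration, not the thesis' actual architecture or criterion.

```python
# Minimal sketch of a semi-supervised autoencoder criterion:
# reconstruction loss on all examples plus cross-entropy on the small labelled subset.
import torch
import torch.nn as nn

class SemiSupervisedAE(nn.Module):
    def __init__(self, n_in=2304, n_hidden=256, n_classes=7):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_in)          # reconstruction head
        self.classifier = nn.Linear(n_hidden, n_classes)  # emotion-label head

    def forward(self, x):
        h = self.encoder(x)
        return self.decoder(h), self.classifier(h)

def semi_supervised_loss(model, x_unlabelled, x_labelled, y_labelled, alpha=0.1):
    """Combine unsupervised reconstruction with supervised classification."""
    recon_u, _ = model(x_unlabelled)
    recon_l, logits_l = model(x_labelled)
    recon_loss = (nn.functional.mse_loss(recon_u, x_unlabelled)
                  + nn.functional.mse_loss(recon_l, x_labelled))
    class_loss = nn.functional.cross_entropy(logits_l, y_labelled)
    return recon_loss + alpha * class_loss

if __name__ == "__main__":
    model = SemiSupervisedAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Toy batch: many unlabelled faces (flattened 48x48 vectors), few labelled ones.
    x_u = torch.rand(64, 2304)
    x_l, y_l = torch.rand(8, 2304), torch.randint(0, 7, (8,))
    loss = semi_supervised_loss(model, x_u, x_l, y_l)
    loss.backward()
    opt.step()
    print(float(loss))
```

The weighting factor alpha controls how strongly the few labels steer the learned representation toward the classification task; with alpha = 0 the criterion reduces to a purely unsupervised autoencoder.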
Abstract:
The ecological validity of static and intense facial expressions in emotion recognition has been questioned. Recent studies have recommended the use of facial stimuli more compatible with the natural conditions of social interaction, which involve motion and variations in emotional intensity. In this study, we compared the recognition of static and dynamic facial expressions of happiness, fear, anger and sadness, presented at four emotional intensities (25%, 50%, 75% and 100%). Twenty volunteers (9 women and 11 men), aged between 19 and 31 years, took part in the study. The experiment consisted of two sessions in which participants had to identify the emotion of static (photographs) and dynamic (videos) displays of facial expressions on a computer screen. Mean accuracy was submitted to a repeated-measures ANOVA with the model 2 sexes x [2 conditions x 4 expressions x 4 intensities]. We observed an advantage for the recognition of dynamic expressions of happiness and fear compared with the static stimuli (p < .05). Analysis of interactions showed that expressions at 25% intensity were better recognized in the dynamic condition (p < .05). The addition of motion improved recognition especially in male participants (p < .05). We concluded that the effect of motion varies as a function of the type of emotion, the intensity of the expression and the sex of the participant. These results support the hypothesis that dynamic stimuli have greater ecological validity and are more appropriate for research on emotions.
Abstract:
The perceptive accuracy of university students in recognizing emotional facial expressions was compared between men and women from science and humanities courses. Emotional expressions have attracted increasing interest in several areas concerned with human interaction, reflecting the importance of perceptive skills in the human expression of emotions for effective communication. Two tests were administered: the first was a brief exposure (0.5 s) of 12 faces with an emotional expression, each followed by a neutral face. Subjects had to say whether happiness, sadness, anger, fear, disgust or surprise had been flashed, and each emotion was shown twice, at random. In the second test, 15 faces combining two emotional expressions were shown without a time limit, and the subject had to name one of the emotions from the previous list. In this study, women perceived sad expressions better, while men recognized happy faces better. There was no significant difference in the detection of the other emotions (anger, fear, surprise, disgust). Students from the humanities and sciences of both sexes, when compared, had similar capacities to perceive emotional expressions.
Abstract:
Background: One of the many cognitive deficits reported in bipolar disorder (BD) patients is impaired facial emotion recognition (FER), which has recently been associated with dopaminergic catabolism. Catechol-O-methyltransferase (COMT) is one of the main enzymes involved in the metabolic degradation of dopamine (DA) in the prefrontal cortex (PFC). The Met allele of the COMT gene polymorphism rs4680 (Val158Met) is associated with decreased activity of this enzyme in healthy controls. The objective of this study was to evaluate the influence of Val158Met on FER during manic and depressive episodes in BD patients and in healthy controls. Materials and methods: 64 BD type I patients (39 in manic and 25 in depressive episodes) and 75 healthy controls were genotyped for COMT rs4680 and assessed for FER using the Ekman 60 Faces (EK60) and Emotion Hexagon (Hx) tests. Results: Bipolar manic patients carrying the Met allele recognized fewer surprised faces, while depressed patients with the Met allele recognized fewer "angry" and "happy" faces. Healthy subjects homozygous for the Met allele had higher FER scores on the Hx total score, as well as on "disgust" and "angry" faces, than other genotypes. Conclusion: This is the first study suggesting that COMT rs4680 modulates FER differently during BD episodes and in healthy controls. This provides evidence that PFC DA is part of the neurobiological mechanisms of social cognition. Further studies on other COMT polymorphisms that include euthymic BD patients are warranted. ClinicalTrials.gov Identifier: NCT00969. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
Introduction: Impairments in facial emotion recognition (FER) have been reported in bipolar disorder (BD) during all mood states. FER has been the focus of functional magnetic resonance imaging studies evaluating differential activation of limbic regions. Recently, the alpha 1-C subunit of the L-type voltage-gated calcium channel (CACNA1C) gene has been described as a risk gene for BD, and its Met allele has been found to increase CACNA1C mRNA expression. In healthy controls, the CACNA1C risk (Met) allele has been reported to increase limbic system activation during emotional stimuli and also to impact cognitive function. The aim of this study was to investigate the impact of CACNA1C genotype on FER scores and limbic system morphology in subjects with BD and healthy controls. Material and methods: Thirty-nine euthymic BD I subjects and 40 healthy controls were administered an FER test battery and genotyped for CACNA1C. Subjects were also examined with a 3D 3-Tesla structural imaging protocol. Results: The CACNA1C risk allele for BD was associated with FER impairment in BD, while no association was observed in controls. The CACNA1C genotype did not affect amygdala or hippocampus volume in either BD subjects or controls. Limitations: Sample size. Conclusion: The present findings suggest that a polymorphism in calcium channels affects the FER phenotype exclusively in BD and does not affect limbic structure morphology. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
One of the most consistent findings in the neuroscience of autism is hypoactivation of the fusiform gyrus (FG) during face processing. In this study the authors examined whether successful facial affect recognition training is associated with an increased activation of the FG in autism. The effect of a computer-based program to teach facial affect identification was examined in 10 individuals with high-functioning autism. Blood oxygenation level-dependent (BOLD) functional magnetic resonance imaging (fMRI) changes in the FG and other regions of interest, as well as behavioral facial affect recognition measures, were assessed pre- and posttraining. No significant activation changes in the FG were observed. Trained participants showed behavioral improvements, which were accompanied by higher BOLD fMRI signals in the superior parietal lobule and maintained activation in the right medial occipital gyrus.
Abstract:
Autism is a chronic pervasive neurodevelopmental disorder characterized by the early onset of social and communicative impairments as well as restricted, ritualized, stereotypic behavior. The endophenotype of autism includes neuropsychological deficits, for instance a lack of "Theory of Mind" and problems recognizing facial affect. In this study, we report the development and evaluation of a computer-based program to teach and test the ability to identify basic facially expressed emotions. Ten adolescent or adult subjects with high-functioning autism or Asperger syndrome were included in the investigation. A priori, the facial affect recognition test had shown good psychometric properties in a normative sample (internal consistency: rtt = .91-.95; retest reliability: rtt = .89-.92). In a pre-post design, one half of the sample was randomly assigned to receive the computer treatment while the other half served as a control group. The training was conducted for five weeks, with two hours of training per week. The trained individuals improved significantly on the affect recognition task, but not on any other measure. The results support the usefulness of the program for teaching the detection of facial affect. However, the improvement found is limited to a circumscribed area of social-communicative function, and generalization is not ensured.
Abstract:
In the past, the accuracy of facial approximations has been assessed by resemblance ratings (i.e., the comparison of a facial approximation directly to a target individual) and recognition tests (e.g., the comparison of a facial approximation to a photo array of faces including foils and a target individual). Recently, several research studies have indicated that recognition tests hold major strengths in contrast to resemblance ratings. However, resemblance ratings remain popularly employed and/or are given weighting when judging facial approximations, thus indicating that no consensus has been reached. This study aims to further investigate the matter by comparing the results of resemblance ratings and recognition tests for two facial approximations which clearly differed in their morphological appearance. One facial approximation was constructed by an experienced practitioner privy to the appearance of the target individual (the practitioner had direct access to an antemortem frontal photograph during face construction), while the other facial approximation was constructed by a novice under blind conditions. Both facial approximations, whilst clearly morphologically different, were given similar resemblance scores even though the recognition tests produced vastly different results. One facial approximation was correctly recognized almost without exception while the other was not correctly recognized above chance rates. These results suggest that resemblance ratings are insensitive measures of the accuracy of facial approximations and lend further weight to the use of recognition tests in facial approximation assessment. (c) 2006 Elsevier Ireland Ltd. All rights reserved.
Abstract:
The primary aim of this study was to investigate facial emotion recognition (FER) in patients with somatoform disorders (SFD). Also of interest was the extent to which concurrent alexithymia contributed to any changes in emotion recognition accuracy. Twenty patients with SFD and 20 healthy controls, matched for age, sex and education, were assessed with the Facially Expressed Emotion Labelling Test of FER and the 26-item Toronto Alexithymia Scale. Patients with SFD exhibited elevated alexithymia symptoms relative to healthy controls. Patients with SFD also recognized significantly fewer emotional expressions than did the healthy controls. However, the group difference in emotion recognition accuracy became nonsignificant once the influence of alexithymia was controlled for statistically. This suggests that the deficit in FER observed in the patients with SFD was most likely a consequence of concurrent alexithymia. It should be noted that neither depression nor anxiety was significantly related to emotion recognition accuracy, suggesting that these variables did not contribute to the emotion recognition deficit. Impaired FER observed in the patients with SFD could plausibly have a negative influence on these individuals’ social functioning.
Abstract:
Significant facial emotion recognition (FER) deficits have been observed in participants exhibiting high levels of eating psychopathology. The current study aimed to determine whether the pattern of FER deficits is influenced by the intensity of facial emotion and to establish whether eating psychopathology is associated with a specific pattern of emotion recognition errors that is independent of other psychopathological or personality factors. Eighty females, 40 high and 40 low scorers on the Eating Disorders Inventory (EDI), were presented with a series of faces, each featuring one of five emotional expressions at one of four intensities, and were asked to identify the emotion portrayed. Results revealed that, in comparison with low EDI scorers, high scorers correctly recognised significantly fewer expressions, particularly of fear and anger. There was also a trend for this deficit to be more evident for subtle displays of emotion (50% intensity). Deficits in anger recognition were related specifically to scores on the body dissatisfaction subscale of the EDI. Error analyses revealed that, in comparison with low EDI scorers, high scorers made significantly more fear-as-anger errors. Also, a tendency to label anger expressions as sadness was related to body dissatisfaction. The current findings confirm FER deficits in subclinical eating psychopathology and extend them to subtle expressions of emotion. Furthermore, this is the first study to establish that these deficits are related to a specific pattern of recognition errors. Impaired FER could disrupt normal social functioning and might represent a risk factor for the development of more severe psychopathology.
Abstract:
The functional catechol-O-methyltransferase (COMT Val108/158Met) polymorphism has been shown to have an impact on tasks of executive function, memory and attention and, more recently, on tasks with an affective component. As oestrogen reduces COMT activity, we focused on the interaction between gender and COMT genotype on brain activations during an affective processing task. We used functional MRI (fMRI) to record brain activations from 74 healthy subjects who engaged in a facial affect recognition task; subjects viewed and identified fearful compared with neutral faces. There was no main effect of the COMT polymorphism, gender or genotype-by-gender interaction on task performance. We found a significant effect of gender on brain activations in the left amygdala and right temporal pole, where females demonstrated increased activations relative to males. Within these regions, Val/Val carriers showed greater signal magnitude than Met/Met carriers, particularly in females. The COMT Val108/158Met polymorphism impacts on gender-related patterns of activation in limbic and paralimbic regions, but the functional significance of any oestrogen-related COMT inhibition appears modest. Copyright © 2008 CINP.