966 results for Facial expression recognition
Abstract:
Facial expression recognition is one of the most challenging research areas in the image recognition field and has been actively studied since the 1970s. Smile recognition, for instance, has been studied because the smile is considered an important facial expression in human communication and is therefore likely useful for human–machine interaction. Moreover, if a smile can be detected and its intensity estimated, this will open the possibility of new applications in the future.
Abstract:
One of the most consistent findings in the neuroscience of autism is hypoactivation of the fusiform gyrus (FG) during face processing. In this study the authors examined whether successful facial affect recognition training is associated with an increased activation of the FG in autism. The effect of a computer-based program to teach facial affect identification was examined in 10 individuals with high-functioning autism. Blood oxygenation level-dependent (BOLD) functional magnetic resonance imaging (fMRI) changes in the FG and other regions of interest, as well as behavioral facial affect recognition measures, were assessed pre- and posttraining. No significant activation changes in the FG were observed. Trained participants showed behavioral improvements, which were accompanied by higher BOLD fMRI signals in the superior parietal lobule and maintained activation in the right medial occipital gyrus.
Abstract:
The present topical review deals with the motor control of facial expressions in humans. Facial expressions are a central part of human communication. Emotional facial expressions have a crucial role in human non-verbal behavior, allowing a rapid transfer of information between individuals. Facial expressions can be either voluntarily or emotionally controlled. Recent studies in non-human primates and humans have revealed that the motor control of facial expressions has a distributed neural representation. At least five cortical regions on the medial and lateral aspects of each hemisphere are involved: the primary motor cortex, the ventral lateral premotor cortex, the supplementary motor area on the medial wall, and the rostral and caudal cingulate cortex. The results of studies in humans and non-human primates suggest that the innervation of the face is bilaterally controlled for the upper part and mainly contralaterally controlled for the lower part. Furthermore, the primary motor cortex, the ventral lateral premotor cortex, and the supplementary motor area are essential for the voluntary control of facial expressions. In contrast, the cingulate cortical areas are important for emotional expression, since they receive input from different structures of the limbic system.
Abstract:
Anxiety and fear are often confounded in discussions of human emotions. However, studies of rodent defensive reactions under naturalistic conditions suggest anxiety is functionally distinct from fear. Unambiguous threats, such as predators, elicit flight from rodents (if an escape-route is available), whereas ambiguous threats (e.g., the odor of a predator) elicit risk assessment behavior, which is associated with anxiety as it is preferentially modulated by anti-anxiety drugs. However, without human evidence, it would be premature to assume that rodent-based psychological models are valid for humans. We tested the human validity of the risk assessment explanation for anxiety by presenting 8 volunteers with emotive scenarios and asking them to pose facial expressions. Photographs and videos of these expressions were shown to 40 participants who matched them to the scenarios and labeled each expression. Scenarios describing ambiguous threats were preferentially matched to the facial expression posed in response to the same scenario type. This expression consisted of two plausible environmental-scanning behaviors (eye darts and head swivels) and was labeled as anxiety, not fear. The facial expression elicited by unambiguous threat scenarios was labeled as fear. The emotion labels generated were then presented to another 18 participants who matched them back to photographs of the facial expressions. This back-matching of labels to faces also linked anxiety to the environmental-scanning face rather than fear face. Results therefore suggest that anxiety produces a distinct facial expression and that it has adaptive value in situations that are ambiguously threatening, supporting a functional, risk-assessing explanation for human anxiety.
Abstract:
The primary aim of this study was to investigate facial emotion recognition (FER) in patients with somatoform disorders (SFD). Also of interest was the extent to which concurrent alexithymia contributed to any changes in emotion recognition accuracy. Twenty patients with SFD and 20 healthy controls, matched for age, sex and education, were assessed with the Facially Expressed Emotion Labelling Test of FER and the 26-item Toronto Alexithymia Scale. Patients with SFD exhibited elevated alexithymia symptoms relative to healthy controls. Patients with SFD also recognized significantly fewer emotional expressions than did the healthy controls. However, the group difference in emotion recognition accuracy became nonsignificant once the influence of alexithymia was controlled for statistically. This suggests that the deficit in FER observed in the patients with SFD was most likely a consequence of concurrent alexithymia. Notably, neither depression nor anxiety was significantly related to emotion recognition accuracy, suggesting that these variables did not contribute to the emotion recognition deficit. Impaired FER observed in the patients with SFD could plausibly have a negative influence on these individuals' social functioning.
Abstract:
Significant facial emotion recognition (FER) deficits have been observed in participants exhibiting high levels of eating psychopathology. The current study aimed to determine whether the pattern of FER deficits is influenced by the intensity of facial emotion and to establish whether eating psychopathology is associated with a specific pattern of emotion recognition errors that is independent of other psychopathological or personality factors. Eighty females, 40 high and 40 low scorers on the Eating Disorders Inventory (EDI), were presented with a series of faces, each featuring one of five emotional expressions at one of four intensities, and were asked to identify the emotion portrayed. Results revealed that, in comparison to low EDI scorers, high scorers correctly recognised significantly fewer expressions, particularly of fear and anger. There was also a trend for this deficit to be more evident for subtle displays of emotion (50% intensity). Deficits in anger recognition were related specifically to scores on the body dissatisfaction subscale of the EDI. Error analyses revealed that, in comparison to low EDI scorers, high scorers made significantly more fear-as-anger errors. Also, a tendency to label anger expressions as sadness was related to body dissatisfaction. Current findings confirm FER deficits in subclinical eating psychopathology and extend these findings to subtle expressions of emotion. Furthermore, this is the first study to establish that these deficits are related to a specific pattern of recognition errors. Impaired FER could disrupt normal social functioning and might represent a risk factor for the development of more severe psychopathology.
Abstract:
The functional catechol-O-methyltransferase (COMT Val108/158Met) polymorphism has been shown to have an impact on tasks of executive function, memory and attention and, recently, tasks with an affective component. As oestrogen reduces COMT activity, we focused on the interaction between gender and COMT genotype on brain activations during an affective processing task. We used functional MRI (fMRI) to record brain activations from 74 healthy subjects who engaged in a facial affect recognition task; subjects viewed and identified fearful compared to neutral faces. There was no main effect of the COMT polymorphism or gender, and no genotype × gender interaction, on task performance. We found a significant effect of gender on brain activations in the left amygdala and right temporal pole, where females demonstrated increased activations over males. Within these regions, Val/Val carriers showed greater signal magnitude compared to Met/Met carriers, particularly in females. The COMT Val108/158Met polymorphism impacts on gender-related patterns of activation in limbic and paralimbic regions, but the functional significance of any oestrogen-related COMT inhibition appears modest.
Abstract:
Background: Bipolar disorder is associated with dysfunction in prefrontal and limbic areas implicated in emotional processing. Aims: To explore whether lamotrigine monotherapy may exert its action by improving the function of the neural network involved in emotional processing. Method: We used functional magnetic resonance imaging to examine changes in brain activation during a sad facial affect recognition task in 12 stable patients with bipolar disorder when medication-free compared with healthy controls and after 12 weeks of lamotrigine monotherapy. Results: At baseline, compared with controls, patients with bipolar disorder showed overactivity in temporal regions and underactivity in the dorsal medial and right ventrolateral prefrontal cortex, and the dorsal cingulate gyrus. Following lamotrigine monotherapy, patients demonstrated reduced temporal and increased prefrontal activation. Conclusions: This preliminary evidence suggests that lamotrigine may enhance the function of the neural circuitry involved in affect recognition.
Abstract:
Biometrics is a field of study which pursues the association of a person's identity with his or her physiological or behavioral characteristics. As one aspect of biometrics, face recognition has attracted special attention because it is a natural and noninvasive means of identifying individuals. Most previous studies in face recognition are based on two-dimensional (2D) intensity images. Face recognition based on 2D intensity images, however, is sensitive to changes in illumination and subject orientation, which affect recognition results. With the development of three-dimensional (3D) scanners, 3D face recognition is being explored as an alternative to the traditional 2D methods. This dissertation proposes a method in which the expression and the identity of a face are determined in an integrated fashion from 3D scans. In this framework, a front-end expression recognition module sorts each incoming 3D face according to the expression detected in the scan. Scans with neutral expressions are then processed by a corresponding 3D neutral face recognition module; alternatively, if a scan displays a non-neutral expression, e.g., a smiling expression, it is routed to an appropriate specialized recognition module for smiling face recognition. The expression recognition method proposed in this dissertation is innovative in that it uses information from 3D scans to perform the classification task. A smiling face recognition module was developed, based on statistical modeling of the variance between faces with a neutral expression and faces with a smiling expression. The proposed expression and face recognition framework was tested with a database containing 120 3D scans from 30 subjects (half neutral faces and half smiling faces). The proposed framework achieves a recognition rate 10% higher than attempting identification with only the neutral face recognition module.
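The two-stage framework described above — a front-end expression classifier that routes each scan to an expression-specific identity module — can be sketched as follows. All names, features, and the threshold classifier here are hypothetical stand-ins; the dissertation's actual modules are statistical models trained on real 3D scan data.

```python
# Sketch of the two-stage framework: an expression classifier routes each
# 3D scan to an expression-specific identity-recognition module.
# Scans are represented as plain dicts of illustrative features.

def classify_expression(scan):
    """Front-end module: label the scan 'neutral' or 'smiling'.
    A single hypothetical feature threshold stands in for the
    variance-based statistical model described in the dissertation."""
    return "smiling" if scan["mouth_corner_lift"] > 0.5 else "neutral"

def recognize_neutral(scan, gallery):
    """Placeholder identity matcher for neutral scans: nearest signature."""
    return min(gallery, key=lambda g: abs(g["signature"] - scan["signature"]))["id"]

def recognize_smiling(scan, gallery):
    """Placeholder identity matcher specialized for smiling scans."""
    return min(gallery, key=lambda g: abs(g["signature"] - scan["signature"]))["id"]

def identify(scan, gallery):
    """Route the scan to the module matching its detected expression."""
    if classify_expression(scan) == "neutral":
        return recognize_neutral(scan, gallery)
    return recognize_smiling(scan, gallery)

gallery = [{"id": "s01", "signature": 0.2}, {"id": "s02", "signature": 0.9}]
probe = {"mouth_corner_lift": 0.8, "signature": 0.85}
print(identify(probe, gallery))  # prints s02
```

The point of the routing design is that each identity matcher only ever sees scans of one expression class, so it need not model expression-induced shape variance itself.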
Abstract:
In the conceptual framework of affective neuroscience, this thesis aims to advance understanding of the plasticity mechanisms underlying representations of others' emotional facial expressions. Chapter 1 outlines the neurophysiological bases of Hebbian plasticity, reviews influential studies that adopted paired associative stimulation procedures, and introduces new lines of research investigating the impact of cortico-cortical paired associative stimulation (ccPAS) protocols on higher-order cognitive functions. The experiments in Chapter 2 tested the modulatory influence of a perceptual-motor training, based on the execution of emotional expressions, on subsequent intensity judgements of others' high-intensity (i.e., fully visible) and low-intensity (i.e., masked) emotional expressions. As a result of training-induced learning, participants showed a significant congruence effect, indicated by relatively higher expression intensity ratings for the same emotion as the one previously trained. Interestingly, although masked faces were judged as overall less emotionally intense, surgical facemasks did not prevent the emotion-specific effects of the training, suggesting that covering the lower part of another's face does not interact with the training-induced congruence effect. Chapter 3 implemented a transcranial magnetic stimulation study targeting neural pathways involving re-entrant input from higher-order brain regions into lower levels of the visual processing hierarchy. We focused on cortical visual networks within the temporo-occipital stream that underpin the processing of emotional faces and are susceptible to plastic adaptations. Importantly, we tested the plasticity-induced effects in a state-dependent manner, administering ccPAS while presenting different facial expressions afferent to a specific emotion.
Results indicated that discrimination accuracy for emotion-specific expressions was enhanced following ccPAS, suggesting that a multi-coil TMS intervention may be a suitable tool for driving brain remodeling at the neural network level and, consequently, for influencing a specific behavior.
Abstract:
In a conversational exchange, language is supported by non-verbal communication, which plays a central role in human social behavior by providing feedback and managing synchronization, thereby reinforcing the content and meaning of the speech. Indeed, 55% of a message is conveyed by facial expressions, while only 7% is due to the linguistic message and 38% to paralanguage. Information about a person's emotional state is generally inferred from facial attributes. However, few measurement instruments are specifically dedicated to this type of behavior. In computer vision, attention is focused on developing systems for the automatic analysis of prototypical facial expressions, for applications in human–machine interaction, meeting-video analysis, security, and even clinical settings. In the present research, to capture such observable indicators, we attempt to implement a system able to build a consistent and relatively exhaustive source of visual information, capable of distinguishing facial features and their deformations, and thereby recognizing the presence or absence of a particular facial action. A review of existing techniques led us to explore two different approaches. The first is appearance-based: it uses gradient orientations to derive a dense representation of facial attributes. Beyond the facial representation itself, the main difficulty for a system intended to be general is building a generic model that is independent of the person's identity and of face geometry and size.
The approach we propose relies on constructing a prototypical reference frame from SIFT-flow registration, which this thesis shows to be superior to conventional alignment based on eye positions. In a second approach, we use a geometric model in which facial primitives are represented by Gabor filtering. Motivated by the fact that facial expressions are not only ambiguous and inconsistent from one person to another but also context-dependent, this approach presents a personalized facial expression recognition system whose overall performance depends directly on the performance of tracking a set of facial landmark points. This tracking is performed with a modified disparity-estimation technique involving the Gabor phase. In this thesis, we propose a redefinition of the confidence measure and introduce an iterative, conditional displacement-estimation procedure, which yield more robust tracking than the original methods.
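The appearance-based representation of the first approach — a dense description of facial attributes from gradient orientations — can be sketched in NumPy as a simplified HOG-style descriptor. The cell size and bin count below are assumed parameters, not the thesis's actual settings, and block normalization is omitted.

```python
import numpy as np

def orientation_histograms(image, cell=8, bins=9):
    """Per-cell histograms of gradient orientations, weighted by gradient
    magnitude: a simplified sketch of a dense gradient-orientation descriptor."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation in [0, pi)
    h, w = image.shape
    ch, cw = h // cell, w // cell
    hist = np.zeros((ch, cw, bins))
    for i in range(ch):
        for j in range(cw):
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            idx = np.minimum((a / np.pi * bins).astype(int), bins - 1)
            np.add.at(hist[i, j], idx, m)  # magnitude-weighted vote per bin
    return hist  # shape: (cells_y, cells_x, bins)

face = np.random.default_rng(0).random((64, 64))  # stand-in for a face crop
desc = orientation_histograms(face)
print(desc.shape)  # (8, 8, 9)
```

Concatenating the per-cell histograms yields the dense feature vector; in the thesis this representation is computed after SIFT-flow registration so that corresponding cells cover corresponding facial regions across subjects.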
Abstract:
The present research consists of two studies. Study 1 aims to improve the ecological validity of work on facial emotion recognition (FER) by validating stimuli that will allow this question to be studied in virtual reality. Study 2 aims to document the relationship between level of psychopathy and performance on an FER task in a sample from the general population. To this end, we created animated virtual characters of various ethnic origins expressing the six basic emotions at different intensity levels. The stimuli, in both static and dynamic form, were evaluated by university students. The results of Study 1 indicate that the virtual stimuli, in addition to having several distinctive features, constitute a valid set for studying FER. Study 2 found that a higher score on the psychopathy scale, specifically on the flat-affect facet, is associated with greater sensitivity to emotional expressions, particularly sadness. Conversely, a high level of criminal tendencies is associated with a certain general insensitivity and a specific deficit for disgust. These results are specific to male participants. The data fit an evolutionary perspective on psychopathy. The study highlights the importance of examining the respective influence of the facets of the psychopathic personality, even in non-clinical populations. It also underscores the differential manifestation of psychopathic tendencies in men and women.
Abstract:
This master's thesis comprises three articles and presents the results of research aimed at improving current techniques for using data associated with certain tasks to help train neural networks on a different task. The first two articles present new datasets created to allow better evaluation of this type of machine learning technique. The first article introduces a suite of datasets for the task of automatic handwritten digit recognition. These datasets were generated from an existing dataset, MNIST, to which new factors of variation were added. The second article introduces a dataset for the task of automatic facial expression recognition. This dataset consists of face images that were collected automatically from the Web and then labeled. The third and final article presents two new approaches, in the multi-task learning setting, for leveraging data from one task to improve a model's performance on a different task. The first approach is a generalization of the recently proposed Maxout units, while the second applies, in a supervised setting, a technique for encouraging neurons to learn orthogonal functions, originally proposed for use in a semi-supervised setting.
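For context on the first of these two approaches: a standard Maxout unit (the formulation the article generalizes) computes, for each output unit, the maximum over k affine transformations of the input. A minimal NumPy sketch with arbitrary dimensions, not the article's actual model:

```python
import numpy as np

def maxout(x, W, b):
    """Maxout layer: h_i = max_k (x @ W[:, i, k] + b[i, k]).
    W has shape (d_in, d_out, k); b has shape (d_out, k)."""
    z = np.einsum("d,dok->ok", x, W) + b  # (d_out, k) pre-activations
    return z.max(axis=1)                  # elementwise max over the k pieces

rng = np.random.default_rng(0)
d_in, d_out, k = 4, 3, 2
W = rng.normal(size=(d_in, d_out, k))
b = rng.normal(size=(d_out, k))
x = rng.normal(size=d_in)
h = maxout(x, W, b)
print(h.shape)  # (3,)
```

Because the max is taken over learned affine pieces, a Maxout unit can approximate an arbitrary convex activation function, which is what makes it a natural starting point for the generalization the article proposes.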