860 results for Emotional expressions
Abstract:
Question: This thesis comprises two articles on the study of emotional facial expressions. The first article covers the development of a new bank of emotional stimuli, while the second uses this bank to study the effect of trait anxiety on the recognition of static expressions. Methods: A total of 1088 emotional clips (34 actors × 8 emotions × 4 exemplars) were spatially and temporally aligned so that each actor's eyes and nose occupy the same location in every video. All videos last 500 ms and contain the apex of the expression. The bank of static expressions was created from the last frame of the clips. The stimuli underwent a rigorous validation process. In the second study, the static expressions were used together with the Bubbles method to study emotion recognition in anxious participants. Results: In the first study, the best stimuli were selected [2 (static & dynamic) × 8 (expressions) × 10 (actors)] and form the STOIC expression bank. The second study shows that individuals with trait anxiety preferentially use the low spatial frequencies of the mouth region of the face and are better at recognizing fear expressions. Discussion: The STOIC facial expression bank has unique characteristics that set it apart from others: it can be downloaded free of charge, it contains natural videos, and all stimuli are aligned, making it a tool of choice for the scientific community and for clinicians. The static STOIC stimuli were used to take a first step in research on emotion perception in individuals with trait anxiety. We believe that the use of low spatial frequencies underlies these individuals' better performance, and that this type of visual information disambiguates expressions of fear and surprise. We also think that it is neuroticism (the overlap between anxiety and depression), rather than anxiety itself, that is associated with better recognition of fearful facial expressions. The use of instruments measuring this construct should be considered in future studies.
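The Bubbles method referenced above reveals face information only through randomly placed Gaussian apertures; in the full technique the image is first split into several spatial-frequency bands, with bubble size scaled to each band. A rough single-band sketch in Python follows; all function names and parameter values are illustrative, not taken from the thesis:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bubbles_mask(shape, n_bubbles, sigma, rng):
    """Random Gaussian-aperture mask in the spirit of the Bubbles technique."""
    mask = np.zeros(shape)
    ys = rng.integers(0, shape[0], n_bubbles)  # random aperture centres
    xs = rng.integers(0, shape[1], n_bubbles)
    mask[ys, xs] = 1.0
    mask = gaussian_filter(mask, sigma)        # turn spikes into Gaussian bubbles
    return np.clip(mask / mask.max(), 0.0, 1.0)

def apply_bubbles(face, n_bubbles=20, sigma=10.0, background=0.5, rng=None):
    """Reveal a grayscale float face image only through the sampled apertures."""
    rng = rng or np.random.default_rng()
    mask = bubbles_mask(face.shape, n_bubbles, sigma, rng)
    return mask * face + (1.0 - mask) * background
```

Across many trials, regressing recognition accuracy onto the aperture locations identifies which facial regions (here, the low-frequency mouth region) drive performance.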
Abstract:
Individuals with social phobia display social information processing biases, yet their aetiological significance is unclear. Responses to faces versus non-faces, to variations in the intensity of emotional expressions, and to gaze direction were assessed at 10 days, 10 and 16 weeks, and 10 months in infants of mothers with social phobia (index infants) and in control infants. Infant temperament and maternal behaviours were also assessed. Both groups showed a preference for faces over non-faces at 10 days and 10 weeks, and for full faces over profiles at 16 weeks; they also looked more at high- versus low-intensity angry faces at 10 weeks, and at fearful faces at 10 months. However, index infants' initial orientation and overall looking to high-intensity fear faces were reduced relative to controls at 10 weeks. This was not explained by infant temperament or maternal behaviours. The findings suggest that offspring of mothers with social phobia show processing biases to emotional expressions in infancy.
Abstract:
Theory of mind ability has been associated with performance in interpersonal interactions and has been found to influence aspects such as emotion recognition, social competence, and social anxiety. Being able to attribute mental states to others requires attention to subtle communication cues such as facial emotional expressions. Decoding and interpreting emotions expressed by the face, especially those with negative valence, are essential skills for successful social interaction. The current study explored the association between theory of mind skills and attentional bias to facial emotional expressions. Consistent with the study hypothesis, individuals with poor theory of mind skills showed preferential attention to negative faces over both non-negative faces and neutral objects. Tentative explanations for the findings are offered, emphasizing the potential adaptive role of vigilance for threat as a way of allocating a limited mental-state interpretation capacity so as to obtain as much information as possible about potential danger in the social environment.
Abstract:
Facial expression recognition was investigated in 20 males with high-functioning autism (HFA) or Asperger syndrome (AS), compared to typically developing individuals matched for chronological age (TD CA group) and for verbal and non-verbal ability (TD V/NV group). This was the first study to employ a visual search, “face in the crowd” paradigm with a HFA/AS group, which explored responses to numerous facial expressions using real-face stimuli. Results showed slower response times for processing fear, anger, and sad expressions in the HFA/AS group relative to the TD CA group, but not the TD V/NV group. Responses to happy, disgust, and surprise expressions showed no group differences. Results are discussed with reference to the amygdala theory of autism.
Abstract:
Motivated by conflicting evidence in the literature, we re-assessed the role of facial feedback when detecting quantitative or qualitative changes in others’ emotional expressions. Fifty-three healthy adults observed self-paced morph sequences where the emotional facial expression either changed quantitatively (i.e., sad-to-neutral, neutral-to-sad, happy-to-neutral, neutral-to-happy) or qualitatively (i.e., from sad to happy, or from happy to sad). Observers held a pen in their mouths to induce smiling or frowning during the detection task. When morph sequences started or ended with neutral expressions we replicated a congruency effect: happiness was perceived longer and sooner while smiling; sadness was perceived longer and sooner while frowning. Interestingly, no such congruency effects occurred for transitions between emotional expressions. These results suggest that facial feedback is especially useful when evaluating the intensity of a facial expression, but less so when we have to recognize which emotion our counterpart is expressing.
Abstract:
The recognition of faces and of facial expressions is an important evolutionary skill, and an integral part of social communication. It has been argued that the processing of faces is distinct from the processing of non-face stimuli, and functional neuroimaging investigations have even found evidence of a distinction between the perception of faces and that of emotional expressions. Structural and temporal correlates of face perception and facial affect have so far been identified only separately. Investigating the neural dynamics of face perception per se, as well as of facial affect, allows these processes to be mapped in specific spatial, temporal, and frequency domains. Participants were asked to perform face categorisation and emotional discrimination tasks, and magnetoencephalography (MEG) was used to measure the neurophysiology of face and facial emotion processing. SAM analysis techniques enable the investigation of spectral changes within specific time-windows and frequency bands, thus allowing the identification of stimulus-specific regions of cortical power changes. Furthermore, MEG’s excellent temporal resolution allows for the detection of subtle changes associated with the processing of face and non-face stimuli and of different emotional expressions. The data presented reveal that face perception is associated with spectral power changes within a distributed cortical network comprising occipito-temporal as well as parietal and frontal areas. For the perception of facial affect, spectral power changes were also observed within frontal and limbic areas, including the parahippocampal gyrus and the amygdala. Analyses of temporal correlates also reveal a distinction between the processing of faces and of facial affect: face perception per se occurred at earlier latencies, whereas the discrimination of facial expression occurred within a longer time-window. In addition, the processing of faces and facial affect was differentially associated with changes in cortical oscillatory power in the alpha, beta, and gamma frequency bands. The perception of faces and facial affect is thus associated with distinct changes in cortical oscillatory activity that can be mapped to specific neural structures, time-windows and latencies, and frequency bands. The work presented in this thesis therefore provides further insight into the sequential processing of faces and facial affect.
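SAM (synthetic aperture magnetometry) is a beamformer that localizes source power; the abstract's key quantity, though, is a change in oscillatory power within a chosen frequency band and time window. A minimal sketch of that band/time-window power computation for a single sensor or source time course, with all parameter values assumed for illustration:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power(signal, fs, band, window):
    """Mean oscillatory power in a frequency band within a time window.

    signal : 1-D array, one MEG sensor/source time course
    fs     : sampling rate (Hz)
    band   : (low, high) in Hz, e.g. (8, 13) for alpha
    window : (start, end) in seconds relative to recording onset
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signal)            # zero-phase band-pass
    power = np.abs(hilbert(filtered)) ** 2       # instantaneous power envelope
    i0, i1 = int(window[0] * fs), int(window[1] * fs)
    return power[i0:i1].mean()

# e.g. relative gamma-band (30-80 Hz) power change in a post-stimulus window
# versus a pre-stimulus baseline, at an assumed 600 Hz sampling rate:
# change = band_power(x, 600, (30, 80), (1.15, 1.25)) / \
#          band_power(x, 600, (30, 80), (0.70, 0.80)) - 1
```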
Abstract:
The perceptual accuracy of male and female university students from science and humanities courses in recognizing emotional facial expressions was compared. Emotional expressions have attracted increasing interest in several areas concerned with human interaction, reflecting the importance of perceiving the human expression of emotions for effective communication. Two tests were administered. In the first, 12 faces bearing an emotional expression were each shown briefly (0.5 s), followed by a neutral face; subjects had to say whether happiness, sadness, anger, fear, disgust, or surprise had been flashed, with each emotion shown twice in random order. In the second test, 15 faces combining two emotional expressions were shown without a time limit, and subjects had to name one of the emotions from the same list. In this study, women perceived sad expressions better, while men recognized more happy faces; there was no significant difference in the detection of the other emotions (anger, fear, surprise, disgust). Humanities and science students of both sexes showed similar capacities to perceive emotional expressions.
Abstract:
Sixteen clinically depressed patients and sixteen healthy controls were presented with a set of emotional facial expressions and were asked to identify the emotion portrayed by each face. They were subsequently given a recognition memory test for these faces. There was no difference between the groups in their ability to identify emotions from faces. All participants identified emotional expressions more accurately than neutral expressions, with happy expressions being identified most accurately. During the recognition memory phase the depressed patients demonstrated superior memory for sad expressions, and inferior memory for happy expressions, relative to neutral expressions. Conversely, the controls demonstrated superior memory for happy expressions, and inferior memory for sad expressions, relative to neutral expressions. These results are discussed in terms of the cognitive model of depression proposed by Williams, Watts, MacLeod, and Mathews (1997).
Abstract:
Serotonin has been implicated in the neurobiology of depressive and anxiety disorders, but little is known about its role in the modulation of basic emotional processing. The aim of this study was to determine the effect of the selective serotonin reuptake inhibitor escitalopram on the perception of facial emotional expressions. Twelve healthy male volunteers completed two experimental sessions each, in a randomized, balanced-order, double-blind design. A single oral dose of escitalopram (10 mg) or placebo was administered 3 h before the task. Participants were presented with a task composed of six basic emotions (anger, disgust, fear, happiness, sadness, and surprise), each morphed between neutral and the standard emotion in 10% steps. Escitalopram facilitated the recognition of sadness and inhibited the recognition of happiness in male, but not female, faces. No drug effect on subjective measures was detected. These results confirm that serotonin modulates the recognition of emotional faces, and suggest that the gender of the face can play a role in this modulation. Further studies including female volunteers are needed.
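Morphed continua like these are typically built by warping facial landmarks while cross-fading pixel values between the aligned endpoint images; the sketch below shows only the cross-fade component, which already yields the 0-100% intensity steps described. The 10% step comes from the abstract; everything else is illustrative:

```python
import numpy as np

def morph_continuum(neutral, emotional, step=0.1):
    """Linear pixel-wise morphs from a neutral to an emotional face.

    neutral, emotional : pre-aligned float images of identical shape
    Returns a list of images at 0%, 10%, ..., 100% emotional intensity.
    """
    assert neutral.shape == emotional.shape
    weights = np.linspace(0.0, 1.0, int(round(1.0 / step)) + 1)
    return [(1 - w) * neutral + w * emotional for w in weights]
```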
Abstract:
Traumatic brain injury (TBI) often affects social adaptive functioning, and these changes in social adaptability are usually associated with general damage to the frontal cortex. Recent evidence suggests that certain neurons within the orbitofrontal cortex appear to be specialized for the processing of faces and facial expressions. The orbitofrontal cortex also appears to be involved in self-initiated somatic activation to emotionally-charged stimuli. According to Somatic Marker Theory (Damasio, 1994), reduced physiological activation fails to provide an individual with appropriate somatic cues to personally-relevant stimuli, and this, in turn, may result in maladaptive behaviour. Given the susceptibility of the orbitofrontal cortex in TBI, it was hypothesized that impaired perception of, and reactivity to, socially-relevant information might be responsible for some of the social difficulties encountered after TBI. Fifteen persons who had sustained a moderate to severe brain injury were compared to age- and education-matched control participants. In the first study, both groups were presented with photographs of models displaying the major emotions and either asked to identify the emotions or simply to view the faces passively. In a second study, participants were asked to select cards from decks that varied in terms of how much money could be won or lost; decks with higher losses were considered high-risk decks. Electrodermal activity was measured concurrently in both situations. Relative to controls, TBI participants were found to have difficulty identifying expressions of surprise, sadness, anger, and fear. TBI persons were also found to be under-reactive, as measured by electrodermal activity, while passively viewing slides of negative expressions. No group difference in reactivity to high-risk card decks was observed. The ability to identify emotions in the face, and electrodermal reactivity to faces and to high-risk decks in the card game, were examined in relation to social monitoring and empathy as described by family members or friends on the Brock Adaptive Functioning Questionnaire (BAFQ). Difficulties identifying negative expressions (i.e., sadness, anger, fear, and disgust) predicted problems in monitoring social situations. As well, a modest relationship was observed between hypo-arousal to negative faces and problems with social monitoring. Finally, hypo-arousal in anticipation of risk during the card game was related to problems in empathy. In summary, these data are consistent with the view that alterations in the ability to perceive emotional expressions in the face, and disrupted arousal to personally-relevant information, may account for some of the difficulties in social adaptation often observed in persons who have sustained a TBI. Furthermore, these data provide modest support for Damasio's Somatic Marker Theory, in that physiological reactivity to socially-relevant information has some value in predicting social function. Therefore, the assessment of TBI persons, particularly those with adaptive behavioural problems, should be expanded to determine whether alterations in perception of and reactivity to socially-relevant stimuli have occurred. When this is the case, rehabilitative strategies aimed more specifically at these difficulties should be considered.
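Electrodermal reactivity in studies like this is usually quantified as the amplitude of the phasic skin-conductance response (SCR) in a window a few seconds after stimulus onset. A simplified sketch of that quantification; the window bounds and threshold are conventional assumptions, not the settings reported in the study:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def scr_amplitude(sc, fs, onset, min_amp=0.01):
    """Largest phasic skin-conductance response 1-5 s after stimulus onset.

    sc      : skin conductance signal in microsiemens
    fs      : sampling rate (Hz)
    onset   : stimulus onset time (s)
    min_amp : conventional minimum SCR amplitude (~0.01-0.05 uS)
    """
    b, a = butter(2, 0.05 / (fs / 2), btype="high")   # remove slow tonic drift
    phasic = filtfilt(b, a, sc)
    i0, i1 = int((onset + 1) * fs), int((onset + 5) * fs)
    peaks, props = find_peaks(phasic[i0:i1], prominence=min_amp)
    return props["prominences"].max() if len(peaks) else 0.0
```

Hypo-arousal as described in the abstract would then appear as systematically smaller SCR amplitudes in the TBI group for negative faces and for anticipated risky draws.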
Abstract:
Adults and children can discriminate various emotional expressions, although there is limited research on sensitivity to the differences between posed and genuine expressions. Adults have shown implicit sensitivity to the difference between posed and genuine happy smiles, in that they evaluate T-shirts paired with genuine smiles more favorably than T-shirts paired with posed smiles or neutral expressions (Peace, Miles, & Johnston, 2006). Adults also have shown some explicit sensitivity to posed versus genuine expressions; they are more likely to say that a model is feeling happy if the expression is genuine than if it is posed. Nonetheless, they are duped by posed expressions about 50% of the time (Miles & Johnston, in press). There has been no published study to date in which researchers report whether children's evaluation of items varies with expression, and there is little research investigating children's sensitivity to the veracity of facial expressions. In the present study the same face stimuli were used as in two previous studies (Miles & Johnston, in press; Peace et al., 2006). The first question to be addressed was whether adults and 7-year-olds have a cognitive understanding of the differences between posed and genuine happiness (scenario task). They evaluated the feelings of children who expressed gratitude for a present that they did or did not want. Results indicated that all participants had a fundamental understanding of the difference between real and posed happiness. The second question involved adults' and children's implicit sensitivity to the veracity of posed and genuine smiles. Participants rated and ranked beach balls paired with faces showing posed smiles, genuine smiles, and neutral expressions. Adults ranked, but did not rate, beach balls paired with genuine smiles more favorably than beach balls paired with posed smiles. Children did not demonstrate implicit sensitivity, as their ratings and rankings of beach balls did not vary with expression; they did not even rank beach balls paired with genuine expressions higher than beach balls paired with neutral expressions. In the explicit (show/feel) task, faces were presented without the beach balls, and participants were first asked whether each face was showing happy and then whether each face was feeling happy. There were also two matching trials that presented two faces at once; participants had to indicate which person was actually feeling happy. In the show condition both adults and 7-year-olds were very accurate on genuine and neutral expressions but made some errors on posed smiles. Adults were fooled about 50% of the time by posed smiles in the feel condition (i.e., they were likely to say that a model posing happy was really feeling happy), and children were even less accurate, although they showed weak sensitivity to posed versus genuine expressions. Future research should test an older age group of children to determine when explicit sensitivity to posed versus genuine facial expressions becomes adult-like, and should modify the ranking task to explore the influence of facial expressions on object evaluations.
Abstract:
This thesis aims to compare the emotional expressions evoked by music, the voice (non-linguistic expressions), and the face at the behavioural and neural levels. More specifically, the goal is to leverage the undeniable emotional power of music to refine our understanding of current theories and models of emotional processing. Moreover, music's surprising capacity to evoke emotions may stem from its ability to tap into the neural circuits dedicated to the voice, although evidence to that effect remains sparse for now. Such a comparison can potentially help elucidate, in part, the nature of musical emotions. To this end, several studies were conducted and are presented here in two separate articles. The studies in the first article compared, at the behavioural level, the effects of emotional expressions on memory between the musical and vocal (non-linguistic) domains. The results revealed a systematic memory advantage for fear in both domains. A correlation in individual memory performance was also found between musical and vocal expressions of fear. These results are thus consistent with the hypothesis of similar perceptual processing for music and the voice. In the second article, the neural correlates associated with the perception of emotional expressions evoked by music, the voice, and the face were directly compared using functional magnetic resonance imaging (fMRI). A significant increase in the Blood Oxygen Level Dependent (BOLD) signal was found in the amygdala (and in the posterior insula) in response to fear, across all domains and modalities under study. A correlation in individual amygdala BOLD responses between musical and vocal processing was also found, again suggesting similarities between the two domains. In addition, domain-specific regions were identified: notably, the fusiform gyrus (FG/FFA) for facial expressions, the superior temporal sulcus (STS) for vocal expressions, and an anterior portion of the superior temporal gyrus (STG) particularly sensitive to musical expressions (fear and happiness), whose response was modulated by stimulus intensity. Taken together, these results reveal similarities but also differences in the processing of emotional expressions conveyed by the visual and auditory modalities, as well as by different domains within the auditory modality (music and voice). In particular, it appears that musical and vocal expressions share close similarities, especially with respect to the processing of fear. These data add to current knowledge about the emotional power of music and help elucidate the perceptual mechanisms underlying the processing of musical emotions. Consequently, these results also lend important support to the use of music in the study of emotions, which may eventually contribute to the development of potential interventions for psychiatric populations.
Abstract:
In this article, we present FACSGen 2.0, new animation software for creating static and dynamic three-dimensional facial expressions on the basis of the Facial Action Coding System (FACS). FACSGen permits total control over the action units (AUs), which can be animated at all levels of intensity and applied alone or in combination to an infinite number of faces. In two studies, we tested the validity of the software for the AU appearance defined in the FACS manual and the conveyed emotionality of FACSGen expressions. In Experiment 1, four FACS-certified coders evaluated the complete set of 35 single AUs and 54 AU combinations for AU presence or absence, appearance quality, intensity, and asymmetry. In Experiment 2, lay participants performed a recognition task on emotional expressions created with FACSGen software and rated the similarity of expressions displayed by human and FACSGen faces. Results showed good to excellent classification levels for all AUs by the four FACS coders, suggesting that the AUs are valid exemplars of FACS specifications. Lay participants’ recognition rates for nine emotions were high, and human and FACSGen expressions were rated as highly similar. The findings demonstrate the effectiveness of the software in producing reliable and emotionally valid expressions, and suggest its application in numerous scientific areas, including perception, emotion, and clinical and neuroscience research.
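FACS parameterizes an expression as a set of action units, each at some intensity, and FACSGen exposes exactly that kind of control. A hypothetical sketch of such a parameterization; the class and method names are illustrative, not FACSGen's actual API, while AU 6 (cheek raiser) and AU 12 (lip corner puller) are the standard FACS units for a Duchenne smile:

```python
from dataclasses import dataclass, field

@dataclass
class Expression:
    # Maps FACS action unit number -> intensity in [0, 1]
    aus: dict[int, float] = field(default_factory=dict)

    def combine(self, other: "Expression") -> "Expression":
        """Blend two AU sets, capping each intensity at 1.0."""
        merged = dict(self.aus)
        for au, level in other.aus.items():
            merged[au] = min(1.0, merged.get(au, 0.0) + level)
        return Expression(merged)

# Prototypical happiness: AU 6 (cheek raiser) + AU 12 (lip corner puller)
happiness = Expression({6: 0.8, 12: 1.0})
```

Because any AU set can be applied to any face model at any intensity, this kind of parameterization is what allows the reported animation of 35 single AUs and 54 combinations across an unlimited number of faces.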
Abstract:
Very little is known about the neural structures involved in the perception of realistic dynamic facial expressions. In the present study, a unique set of naturalistic dynamic facial emotional expressions was created. Through fMRI and connectivity analysis, a dynamic face perception network was identified, which is demonstrated to extend Haxby et al.'s [Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. The distributed human neural system for face perception. Trends in Cognitive Sciences, 4, 223–233, 2000] distributed neural system for face perception. This network includes early visual regions, such as the inferior occipital gyrus, which is identified as insensitive to motion or affect but sensitive to the visual stimulus, the STS, identified as specifically sensitive to motion, and the amygdala, recruited to process affect. Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as the inferior occipital gyrus and the STS, along with coupling between the STS and the amygdala, as well as the inferior frontal gyrus. These findings support the presence of a distributed network of cortical regions that mediate the perception of different dynamic facial expressions.
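The abstract reports effective-connectivity increases (e.g., STS-amygdala coupling) for dynamic faces without naming the method. One standard fMRI approach to task-dependent coupling is a psychophysiological interaction (PPI) regression, sketched here as a plausible illustration rather than the authors' actual pipeline:

```python
import numpy as np

def ppi_coupling(seed, target, task):
    """PPI-style test of task-dependent coupling between two ROIs.

    seed, target : ROI time courses (e.g., STS and amygdala)
    task         : condition regressor (1 = dynamic faces, 0 = baseline)
    Returns the interaction weight: how much seed-target coupling
    increases under the task.
    """
    seed = seed - seed.mean()          # centre the physiological regressor
    task = task - task.mean()          # centre the psychological regressor
    X = np.column_stack([np.ones_like(seed), seed, task, seed * task])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return beta[3]                     # weight on the seed-by-task interaction
```

A positive interaction weight would indicate that the target region tracks the seed more strongly during dynamic-face blocks, the pattern the study describes for STS-amygdala and STS-inferior frontal coupling.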
Abstract:
Significant facial emotion recognition (FER) deficits have been observed in participants exhibiting high levels of eating psychopathology. The current study aimed to determine whether the pattern of FER deficits is influenced by the intensity of facial emotion, and to establish whether eating psychopathology is associated with a specific pattern of emotion recognition errors that is independent of other psychopathological or personality factors. Eighty females, 40 high and 40 low scorers on the Eating Disorders Inventory (EDI), were presented with a series of faces, each featuring one of five emotional expressions at one of four intensities, and were asked to identify the emotion portrayed. Results revealed that, in comparison to low EDI scorers, high scorers correctly recognised significantly fewer expressions, particularly of fear and anger. There was also a trend for this deficit to be more evident for subtle displays of emotion (50% intensity). Deficits in anger recognition were related specifically to scores on the body dissatisfaction subscale of the EDI. Error analyses revealed that, in comparison to low EDI scorers, high scorers made significantly more fear-as-anger errors. Also, a tendency to label anger expressions as sadness was related to body dissatisfaction. The current findings confirm FER deficits in subclinical eating psychopathology and extend them to subtle expressions of emotion. Furthermore, this is the first study to establish that these deficits are related to a specific pattern of recognition errors. Impaired FER could disrupt normal social functioning and might represent a risk factor for the development of more severe psychopathology.
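The error analysis described, counting misattributions such as fear-as-anger, amounts to reading off-diagonal cells of an emotion confusion matrix. A minimal sketch; the five-emotion set is assumed for illustration, since the abstract names only fear, anger, and sadness:

```python
import numpy as np

# Assumed emotion set; the abstract specifies five expressions but names three.
EMOTIONS = ["anger", "fear", "sadness", "happiness", "disgust"]

def confusion_matrix(true_labels, responses):
    """Rows = displayed emotion, columns = participant's response."""
    idx = {e: i for i, e in enumerate(EMOTIONS)}
    m = np.zeros((len(EMOTIONS), len(EMOTIONS)), dtype=int)
    for shown, answered in zip(true_labels, responses):
        m[idx[shown], idx[answered]] += 1
    return m

# e.g. m[idx["fear"], idx["anger"]] counts fear-as-anger errors,
# and m[idx["anger"], idx["sadness"]] counts anger-as-sadness errors.
```

Comparing these off-diagonal counts between high and low EDI scorers is what lets the study claim a specific error pattern rather than a uniform recognition deficit.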