124 results for Recognizing emotional facial expressions
in CentAUR: Central Archive University of Reading - UK
Abstract:
To investigate the perception of emotional facial expressions, researchers rely on shared sets of photos or videos, most often generated by actor portrayals. The drawback of such standardized material is a lack of flexibility and controllability, as it does not allow the systematic parametric manipulation of specific features of facial expressions on the one hand, and of more general properties of the facial identity (age, ethnicity, gender) on the other. To remedy this problem, we developed FACSGen: a novel tool that allows the creation of realistic synthetic 3D facial stimuli, both static and dynamic, based on the Facial Action Coding System. FACSGen provides researchers with total control over facial action units, and corresponding informational cues in 3D synthetic faces. We present four studies validating both the software and the general methodology of systematically generating controlled facial expression patterns for stimulus presentation.
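The abstract does not document FACSGen's actual programming interface, so the following Python sketch is purely hypothetical: every name in it (FaceStimulus, set_action_unit, the identity fields) is invented for illustration, to show what parametric control over action units and general identity properties, as described above, might look like in code.

```python
# Hypothetical sketch only: FACSGen's real interface is not described in the
# abstract. All names here (FaceStimulus, set_action_unit, the identity
# fields) are invented to illustrate parametric control over FACS action
# units (AUs) and identity attributes such as age, ethnicity, and gender.
from dataclasses import dataclass, field

@dataclass
class FaceStimulus:
    age: int = 30
    gender: str = "female"
    ethnicity: str = "caucasian"
    action_units: dict = field(default_factory=dict)  # AU number -> intensity in [0, 1]

    def set_action_unit(self, au: int, intensity: float) -> None:
        """Set a FACS action unit to a given intensity."""
        if not 0.0 <= intensity <= 1.0:
            raise ValueError("intensity must lie in [0, 1]")
        self.action_units[au] = intensity

# A prototypical fear configuration (AUs 1 + 2 + 4 + 5 + 20 + 26),
# applied to a systematically varied identity.
stim = FaceStimulus(age=25, gender="male")
for au, intensity in {1: 0.8, 2: 0.8, 4: 0.6, 5: 0.9, 20: 0.5, 26: 0.4}.items():
    stim.set_action_unit(au, intensity)
```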
Abstract:
In this article, we present FACSGen 2.0, new animation software for creating static and dynamic three-dimensional facial expressions on the basis of the Facial Action Coding System (FACS). FACSGen permits total control over the action units (AUs), which can be animated at all levels of intensity and applied alone or in combination to an infinite number of faces. In two studies, we tested the validity of the software for the AU appearance defined in the FACS manual and the conveyed emotionality of FACSGen expressions. In Experiment 1, four FACS-certified coders evaluated the complete set of 35 single AUs and 54 AU combinations for AU presence or absence, appearance quality, intensity, and asymmetry. In Experiment 2, lay participants performed a recognition task on emotional expressions created with FACSGen software and rated the similarity of expressions displayed by human and FACSGen faces. Results showed good to excellent classification levels for all AUs by the four FACS coders, suggesting that the AUs are valid exemplars of FACS specifications. Lay participants’ recognition rates for nine emotions were high, and human and FACSGen expressions were rated as very similar. The findings demonstrate the effectiveness of the software in producing reliable and emotionally valid expressions, and suggest its application in numerous scientific areas, including perception, emotion, and clinical and neuroscience research.
Abstract:
Introduction: Observations of behaviour and research using eye-tracking technology have shown that individuals with Williams syndrome (WS) pay an unusual amount of attention to other people’s faces. The present research examines whether this attention to faces is moderated by the valence of emotional expression. Method: Sixteen participants with WS aged between 13 and 29 years (Mean=19 years 9 months) completed a dot-probe task in which pairs of faces displaying happy, angry and neutral expressions were presented. The performance of the WS group was compared to two groups of typically developing control participants, individually matched to the participants in the WS group on either chronological age or mental age. General mental age was assessed in the WS group using the Woodcock Johnson Test of Cognitive Ability Revised (WJ-COG-R; Woodcock & Johnson, 1989; 1990). Results: Compared to both control groups, the WS group exhibited a greater attention bias for happy faces. In contrast, no between-group differences in bias for angry faces were obtained. Conclusions: The results are discussed in relation to recent neuroimaging findings and the hypersocial behaviour that is characteristic of the WS population.
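For readers unfamiliar with the dot-probe paradigm, the bias score is conventionally computed as the mean response time on incongruent trials (probe replacing the neutral face) minus that on congruent trials (probe replacing the emotional face), with positive values indicating attention drawn toward the emotional expression. A minimal Python sketch, using illustrative numbers rather than data from the study:

```python
# Minimal sketch of the standard dot-probe attention bias index (not code
# from the study): bias = mean RT on incongruent trials (probe at the
# neutral-face location) - mean RT on congruent trials (probe at the
# emotional-face location). Positive = vigilance toward the emotional face.
from statistics import mean

def attention_bias(congruent_rts_ms, incongruent_rts_ms):
    """Return the attention bias score in milliseconds."""
    return mean(incongruent_rts_ms) - mean(congruent_rts_ms)

# Illustrative values only: faster responses when the probe replaces a
# happy face yield a positive bias toward happy expressions.
happy_bias = attention_bias(
    congruent_rts_ms=[412, 398, 405, 420],    # probe at happy-face location
    incongruent_rts_ms=[455, 440, 462, 448],  # probe at neutral-face location
)
print(f"bias toward happy faces: {happy_bias:.1f} ms")
```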
Abstract:
It has been proposed that there is a core impairment in autism spectrum conditions (ASC) to the mirror neuron system (MNS): If observed actions cannot be mapped onto the motor commands required for performance, higher order sociocognitive functions that involve understanding another person's perspective, such as theory of mind, may be impaired. However, evidence of MNS impairment in ASC is mixed. The present study used an 'automatic imitation' paradigm to assess MNS functioning in adults with ASC and matched controls, when observing emotional facial actions. Participants performed a pre-specified angry or surprised facial action in response to observed angry or surprised facial actions, and the speed of their action was measured with motion tracking equipment. Both the ASC and control groups demonstrated automatic imitation of the facial actions, such that responding was faster when they acted with the same emotional expression that they had observed. There was no difference between the two groups in the magnitude of the effect. These findings suggest that previous apparent demonstrations of impairments to the MNS in ASC may be driven by a lack of visual attention to the stimuli or motor sequencing impairments, and therefore that there is, in fact, no MNS impairment in ASC. We discuss these findings with reference to the literature on MNS functioning and imitation in ASC, as well as theories of the role of the MNS in sociocognitive functioning in typical development.
Abstract:
The human mirror neuron system (hMNS) has been associated with various forms of social cognition and affective processing including vicarious experience. It has also been proposed that a faulty hMNS may underlie some of the deficits seen in the autism spectrum disorders (ASDs). In the present study we set out to investigate whether emotional facial expressions could modulate a putative EEG index of hMNS activation (mu suppression) and if so, would this differ according to the individual level of autistic traits [high versus low Autism Spectrum Quotient (AQ) score]. Participants were presented with 3 s films of actors opening and closing their hands (classic hMNS mu-suppression protocol) while simultaneously wearing happy, angry, or neutral expressions. Mu-suppression was measured in the alpha and low beta bands. The low AQ group displayed greater low beta event-related desynchronization (ERD) to both angry and neutral expressions. The high AQ group displayed greater low beta ERD to angry than to happy expressions. There was also significantly more low beta ERD to happy faces for the low than for the high AQ group. In conclusion, an interesting interaction between AQ group and emotional expression revealed that hMNS activation can be modulated by emotional facial expressions and that this is differentiated according to individual differences in the level of autistic traits. The EEG index of hMNS activation (mu suppression) seems to be a sensitive measure of the variability in facial processing in typically developing individuals with high and low self-reported traits of autism.
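Mu suppression is typically quantified as a drop in alpha or low-beta band power over sensorimotor sites during action observation, relative to a baseline period. A minimal sketch of that computation, assuming a simple FFT periodogram and a log-ratio convention; the study's exact band limits and processing pipeline are not specified in the abstract:

```python
# Sketch of a common mu-suppression (ERD) computation; the periodogram and
# log-ratio convention are typical choices, not necessarily this study's.
import numpy as np

def band_power(signal, fs, low, high):
    """Mean power of `signal` within [low, high] Hz via an FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

def mu_suppression(event_epoch, baseline_epoch, fs, band=(8.0, 13.0)):
    """Log ratio of event to baseline band power; negative values indicate ERD."""
    return np.log(band_power(event_epoch, fs, *band) /
                  band_power(baseline_epoch, fs, *band))

# Usage with the abstract's two bands (alpha 8-13 Hz; low beta 13-20 Hz is
# an assumed range), on synthetic 3 s epochs at an assumed 256 Hz rate.
fs = 256
rng = np.random.default_rng(0)
baseline, event = rng.standard_normal(3 * fs), rng.standard_normal(3 * fs)
alpha_erd = mu_suppression(event, baseline, fs, band=(8, 13))
low_beta_erd = mu_suppression(event, baseline, fs, band=(13, 20))
```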
Abstract:
The aim of this study was to empirically evaluate an embodied conversational agent called GRETA in an effort to answer two main questions: (1) What are the benefits (and costs) of presenting information via an animated agent, with certain characteristics, in a 'persuasion' task, compared to other forms of display? (2) How important is it that emotional expressions are added in a way that is consistent with the content of the message, in animated agents? To address these questions, a positively framed healthy eating message was created which was variously presented via GRETA, a matched human actor, GRETA's voice only (no face) or as text only. Furthermore, versions of GRETA were created which displayed additional emotional facial expressions in a way that was either consistent or inconsistent with the content of the message. Overall, it was found that although GRETA received significantly higher ratings for helpfulness and likability, presenting the message via GRETA led to the poorest memory performance among users. Importantly, however, when GRETA's additional emotional expressions were consistent with the content of the verbal message, the negative effect on memory performance disappeared. Overall, the findings point to the importance of achieving consistency in animated agents.
Abstract:
Facial expression recognition was investigated in 20 males with high functioning autism (HFA) or Asperger syndrome (AS), compared to typically developing individuals matched for chronological age (TD CA group) and verbal and non-verbal ability (TD V/NV group). This was the first study to employ a visual search, “face in the crowd” paradigm with a HFA/AS group, which explored responses to numerous facial expressions using real-face stimuli. Results showed slower response times for processing fearful, angry and sad expressions in the HFA/AS group, relative to the TD CA group, but not the TD V/NV group. Responses to happy, disgust and surprise expressions showed no group differences. Results are discussed with reference to the amygdala theory of autism.
Abstract:
To investigate the mechanisms involved in automatic processing of facial expressions, we used the QUEST procedure to measure the display durations needed to make a gender decision on emotional faces portraying fearful, happy, or neutral facial expressions. In line with predictions of appraisal theories of emotion, our results showed greater processing priority of emotional stimuli regardless of their valence. Whereas all experimental conditions led to an average threshold of about 50 ms, fearful and happy facial expressions led to significantly less variability in the responses than neutral faces. Results suggest that attention may have been automatically drawn by the emotion portrayed by face targets, yielding more informative perceptions and less variable responses. The temporal resolution of the perceptual system (expressed by the thresholds) and the processing priority of the stimuli (expressed by the variability in the responses) may influence subjective and objective measures of awareness, respectively.
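QUEST (Watson & Pelli, 1983) is a Bayesian adaptive procedure: it maintains a posterior distribution over the observer's threshold, tests each trial near the current best estimate, and updates the posterior after every response. The simplified Python sketch below conveys the idea; the Weibull slope, guess and lapse rates, and the Gaussian prior are illustrative choices, not the study's parameters:

```python
# Simplified QUEST-style Bayesian adaptive procedure, here estimating the
# display duration (ms) needed for a 2AFC gender decision. All parameter
# values are illustrative assumptions, not taken from the study.
import numpy as np

durations = np.linspace(5, 200, 400)             # candidate thresholds (ms)
log_prior = -0.5 * ((durations - 60) / 40) ** 2  # Gaussian prior on threshold
log_post = log_prior.copy()

def p_correct(duration, threshold, beta=3.5, guess=0.5, lapse=0.02):
    """Weibull psychometric function for a two-alternative gender decision."""
    p = 1 - np.exp(-(duration / threshold) ** beta)
    return guess + (1 - guess - lapse) * p

def next_duration():
    """Place the next trial at the current posterior mean of the threshold."""
    post = np.exp(log_post - log_post.max())
    return float((durations * post).sum() / post.sum())

def update(tested_duration, correct):
    """Bayesian update of the threshold posterior after one trial."""
    global log_post
    p = p_correct(tested_duration, durations)
    log_post += np.log(p if correct else 1 - p)

# One simulated trial: present at the recommended duration, then update.
d = next_duration()
update(d, correct=True)
```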
Abstract:
The neuropeptide substance P and its receptor NK1 have been implicated in emotion, anxiety and stress in preclinical studies. However, the role of NK1 receptors in human brain function is less clear and there have been inconsistent reports of the value of NK1 receptor antagonists in the treatment of clinical depression. The present study therefore aimed to investigate effects of NK1 antagonism on the neural processing of emotional information in healthy volunteers. Twenty-four participants were randomized to receive a single dose of aprepitant (125 mg) or placebo. Approximately 4 h later, neural responses during facial expression processing and an emotional counting Stroop word task were assessed using fMRI. Mood and subjective experience were also measured using self-report scales. As expected a single dose of aprepitant did not affect mood and subjective state in the healthy volunteers. However, NK1 antagonism increased responses specifically during the presentation of happy facial expressions in both the rostral anterior cingulate and the right amygdala. In the emotional counting Stroop task the aprepitant group had increased activation in both the medial orbitofrontal cortex and the precuneus cortex to positive vs. neutral words. These results suggest consistent effects of NK1 antagonism on neural responses to positive affective information in two different paradigms. Such findings confirm animal studies which support a role for NK1 receptors in emotion. Such an approach may be useful in understanding the effects of novel drug treatments prior to full-scale clinical trials.
Abstract:
A wealth of literature suggests that emotional faces are given special status as visual objects: Cognitive models suggest that emotional stimuli, particularly threat-relevant facial expressions such as fear and anger, are prioritized in visual processing and may be identified by a subcortical “quick and dirty” pathway in the absence of awareness (Tamietto & de Gelder, 2010). Both neuroimaging studies (Williams, Morris, McGlone, Abbott, & Mattingley, 2004) and backward masking studies (Whalen, Rauch, Etcoff, McInerney, & Lee, 1998) have supported the notion of emotion processing without awareness. Recently, our own group (Adams, Gray, Garner, & Graf, 2010) showed adaptation to emotional faces that were rendered invisible using a variant of binocular rivalry: continuous flash suppression (CFS, Tsuchiya & Koch, 2005). Here we (i) respond to Yang, Hong, and Blake's (2010) criticisms of our adaptation paper and (ii) provide a unified account of adaptation to facial expression, identity, and gender, under conditions of unawareness.
Variations in the human cannabinoid receptor (CNR1) gene modulate striatal responses to happy faces.
Abstract:
Happy facial expressions are innate social rewards and evoke a response in the striatum, a region known for its role in reward processing in rats, primates and humans. The cannabinoid receptor 1 (CNR1) is the best-characterized molecule of the endocannabinoid system, involved in processing rewards. We hypothesized that genetic variation in the human CNR1 gene would predict differences in the striatal response to happy faces. In a 3T functional magnetic resonance imaging (fMRI) scanning study on 19 Caucasian volunteers, we report that four single nucleotide polymorphisms (SNPs) in the CNR1 locus modulate the differential striatal response to happy but not to disgust faces. This suggests a role for variations of the CNR1 gene in underlying social reward responsivity. Future studies should aim to replicate this finding with a balanced design in a larger sample, but these preliminary results suggest that neural responsivity to emotional and socially rewarding stimuli varies as a function of CNR1 genotype. This has implications for medical conditions involving hypo-responsivity to emotional and social stimuli, such as autism.
Abstract:
Postnatal maternal depression is associated with difficulties in maternal responsiveness. As most signals arising from the infant come from facial expressions, one possible explanation for these difficulties is that mothers with postnatal depression are differentially affected by particular infant facial expressions. Thus, this study investigates the effects of postnatal depression on mothers’ perceptions of infant facial expressions. Participants (15 controls, 15 depressed and 15 anxious mothers) were asked to rate a number of infant facial expressions, ranging from very positive to very negative. Each face was shown twice, for a short and for a longer period of time, in random order. Results revealed that mothers used more extreme ratings (i.e. more negative or more positive) when shown the infant faces for a longer period of time. Mothers suffering from postnatal depression were more likely than controls to rate negative infant faces shown for a longer period more negatively. The differences were specific to depression rather than an effect of general postnatal psychopathology, as no differences were observed between anxious mothers and controls. There were no other significant differences in maternal ratings of infant faces shown for short periods, or of positive or neutral faces of either duration. The finding that mothers with postnatal depression rate negative infant faces more negatively indicates that an appraisal bias might underlie some of the difficulties these mothers have in responding to their own infants’ signals.
Abstract:
Objective. Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon is commonly seen in visual attention-based brain–computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or presented images) adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design a new pattern which could reduce adjacent interference, annoyance and fatigue, and evoke ERPs as good as those observed during the face pattern. Approach. Positive facial expressions could be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, a facial expression change pattern between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions: a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no-face pattern. Comparisons were made in terms of classification accuracy and information transfer rate, as well as user-supplied subjective measures. Main results. The results showed that interference from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be reduced significantly (p < 0.05) by using the facial expression change patterns in comparison with the face pattern. The offline results show that the classification accuracy of the facial expression change pattern was significantly better than that of the shuffled pattern (p < 0.05) and the face pattern (p < 0.05). Significance. The facial expression change pattern presented in this paper reduced interference from adjacent stimuli and decreased the fatigue and annoyance experienced by BCI users significantly (p < 0.05) compared to the face pattern.
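Information transfer rate in such BCI studies is standardly computed with Wolpaw's formula, B = log2 N + P log2 P + (1 - P) log2[(1 - P)/(N - 1)] bits per selection, scaled by the selection rate. A minimal Python sketch; the class count and trial duration below are illustrative placeholders, not values from the paper:

```python
# Wolpaw information transfer rate (ITR), the standard metric for comparing
# BCI stimulus presentation patterns. Example inputs are assumptions.
import math

def itr_bits_per_trial(n_classes: int, accuracy: float) -> float:
    """Bits per selection for an N-class BCI at a given classification accuracy."""
    if accuracy <= 1.0 / n_classes:
        return 0.0              # at or below chance, no information transferred
    if accuracy == 1.0:
        return math.log2(n_classes)
    return (math.log2(n_classes)
            + accuracy * math.log2(accuracy)
            + (1 - accuracy) * math.log2((1 - accuracy) / (n_classes - 1)))

def itr_bits_per_minute(n_classes: int, accuracy: float, trial_seconds: float) -> float:
    """Scale bits per selection by the number of selections per minute."""
    return itr_bits_per_trial(n_classes, accuracy) * 60.0 / trial_seconds

# Example: a hypothetical 12-class speller at 90% accuracy, one selection
# every 10 s, gives roughly 16.6 bits/min.
print(itr_bits_per_minute(12, 0.90, trial_seconds=10.0))
```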
Abstract:
Interferences from spatially adjacent non-target stimuli evoke ERPs during non-target sub-trials and lead to false positives. This phenomenon is commonly seen in visual attention-based BCIs and affects the performance of the BCI system. Although users tried to focus on the target stimulus, they still could not help being affected by conspicuous changes of the stimuli (flashes or presented images) adjacent to the target stimulus. In view of this, the aim of this study was to reduce adjacent interference using a new stimulus presentation pattern based on facial expression changes. Positive facial expressions can be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, two different conditions (Pattern_1, Pattern_2) were compared on objective measures such as classification accuracy and information transfer rate, as well as subjective measures. Pattern_1 was a “flash-only” pattern and Pattern_2 was a facial expression change of a dummy face. In the facial expression change patterns, the background is a positive facial expression and the stimulus is a negative facial expression. The results showed that interference from adjacent stimuli could be reduced significantly (p < 0.05) by using the facial expression change patterns. The online performance of the BCI system using the facial expression change patterns was significantly better than that using the “flash-only” patterns in terms of classification accuracy (p < 0.01), bit rate (p < 0.01), and practical bit rate (p < 0.01). Subjects reported that annoyance and fatigue were significantly decreased (p < 0.05) using the new stimulus presentation pattern presented in this paper.