834 results for Recognizing emotional facial expressions
Abstract:
The accuracy and speed with which emotional facial expressions are identified are influenced by body postures. Two influential models predict that these congruency effects will be largest when the emotion displayed in the face is similar to that displayed in the body: the emotional seed model and the dimensional model. These models differ in whether similarity is based on physical characteristics or on underlying dimensions of valence and arousal. Using a 3-alternative forced-choice task in which stimuli were presented briefly (Exp 1a) or for an unlimited time (Exp 1b), we provide evidence that congruency effects are more complex than either model predicts; the effects are asymmetrical and cannot be accounted for by similarity alone. Fearful postures are especially influential when paired with facial expressions, but not when presented in a flanker task (Exp 2). We suggest refinements to each model that may account for our results and suggest that additional studies be conducted prior to drawing strong theoretical conclusions.
Abstract:
The perceptive accuracy of university students in recognizing emotional facial expressions was compared between men and women from science and humanities courses. Emotional expressions have attracted increasing interest in several areas involved with human interaction, reflecting the importance of perceptive skills in the human expression of emotions for effective communication. Two tests were administered: the first was a brief exposure (0.5 s) of 12 faces with an emotional expression, each followed by a neutral face; subjects had to say whether happiness, sadness, anger, fear, disgust or surprise had been flashed, and each emotion was shown twice, at random. In the second test, 15 faces combining two emotional expressions were shown without a time limit, and the subject had to name one of the emotions from the previous list. In this study, women perceived sad expressions better, while men perceived happy faces better. There was no significant difference in the detection of the other emotions (anger, fear, surprise, disgust). Students from the humanities and science areas of both sexes, when compared, had similar capacities to perceive emotional expressions.
Abstract:
To investigate the perception of emotional facial expressions, researchers rely on shared sets of photos or videos, most often generated by actor portrayals. The drawback of such standardized material is a lack of flexibility and controllability, as it does not allow the systematic parametric manipulation of specific features of facial expressions on the one hand, and of more general properties of the facial identity (age, ethnicity, gender) on the other. To remedy this problem, we developed FACSGen: a novel tool that allows the creation of realistic synthetic 3D facial stimuli, both static and dynamic, based on the Facial Action Coding System. FACSGen provides researchers with total control over facial action units, and corresponding informational cues in 3D synthetic faces. We present four studies validating both the software and the general methodology of systematically generating controlled facial expression patterns for stimulus presentation.
Abstract:
Sixteen clinically depressed patients and sixteen healthy controls were presented with a set of emotional facial expressions and were asked to identify the emotion portrayed by each face. They were subsequently given a recognition memory test for these faces. There was no difference between the groups in their ability to identify emotion from faces. All participants identified emotional expressions more accurately than neutral expressions, with happy expressions being identified most accurately. During the recognition memory phase the depressed patients demonstrated superior memory for sad expressions, and inferior memory for happy expressions, relative to neutral expressions. Conversely, the controls demonstrated superior memory for happy expressions, and inferior memory for sad expressions, relative to neutral expressions. These results are discussed in terms of the cognitive model of depression proposed by Williams, Watts, MacLeod, and Mathews (1997).
Abstract:
Holistic face perception, i.e. the mandatory integration of featural information across the face, has been considered to play a key role when recognizing emotional face expressions (e.g., Tanaka et al., 2002). However, despite their early onset, holistic processing skills continue to improve throughout adolescence (e.g., Schwarzer et al., 2010) and therefore might modulate the evaluation of facial expressions. We tested this hypothesis using an attentional blink (AB) paradigm to compare the impact of happy, fearful and neutral faces in adolescents (10–13 years) and adults on subsequently presented neutral target stimuli (animals, plants and objects) in a rapid serial visual presentation stream. Adolescents and adults were found to be equally reliable when reporting the emotional expression of the face stimuli. However, the detection of emotional but not neutral faces imposed a significantly stronger AB effect on the detection of the neutral targets in adults compared to adolescents. In a control experiment we confirmed that adolescents rated emotional faces lower in terms of valence and arousal than adults. The results suggest a protracted development of the ability to evaluate facial expressions that might be attributed to the late maturation of holistic processing skills.
Abstract:
Background - Difficulties in emotion processing and poor social function are common to bipolar disorder (BD) and major depressive disorder (MDD) depression, resulting in many BD depressed individuals being misdiagnosed with MDD. The amygdala is a key region implicated in processing emotionally salient stimuli, including emotional facial expressions. It is unclear, however, whether abnormal amygdala activity during positive and negative emotion processing represents a persistent marker of BD regardless of illness phase or a state marker of depression common or specific to BD and MDD depression. Methods - Sixty adults were recruited: 15 depressed with BD type 1 (BDd), 15 depressed with recurrent MDD, 15 with BD in remission (BDr), diagnosed with DSM-IV and Structured Clinical Interview for DSM-IV Research Version criteria; and 15 healthy control subjects (HC). Groups were age- and gender ratio-matched; patient groups were matched for age of illness onset and illness duration; depressed groups were matched for depression severity. The BDd were taking more psychotropic medication than other patient groups. All individuals participated in three separate 3T neuroimaging event-related experiments, where they viewed mild and intense emotional and neutral faces of fear, happiness, or sadness from a standardized series. Results - The BDd—relative to HC, BDr, and MDD—showed elevated left amygdala activity to mild and neutral facial expressions in the sad (p < .009) but not other emotion experiments that was not associated with medication. There were no other significant between-group differences in amygdala activity. Conclusions - Abnormally elevated left amygdala activity to mild sad and neutral faces might be a depression-specific marker in BD but not MDD, suggesting different pathophysiologic processes for BD versus MDD depression.
Abstract:
Previously, studies investigating emotional face perception - regardless of whether they involved adults or children - presented participants with static photos of faces in isolation. In the natural world, faces are rarely encountered in isolation. In the few studies that have presented faces in context, the perception of emotional facial expressions is altered when paired with an incongruent context. For both adults and 8-year-old children, reaction times increase and accuracy decreases when facial expressions are presented in an incongruent context depicting a similar emotion (e.g., a sad face on a fear body) compared to when presented in a congruent context (e.g., a sad face on a sad body; Meeren, van Heijnsbergen, & de Gelder, 2005; Mondloch, 2012). This effect is called a congruency effect and does not exist for dissimilar emotions (e.g., happy and sad; Mondloch, 2012). Two models characterize similarity between emotional expressions differently; the emotional seed model bases similarity on physical features, whereas the dimensional model bases similarity on underlying dimensions of valence and arousal. Study 1 investigated the emergence of an adult-like pattern of congruency effects in preschool-aged children. Using a child-friendly sorting task, we identified the youngest age at which children could accurately sort isolated facial expressions and body postures and then measured whether an incongruent context disrupted the perception of emotional facial expressions. Six-year-old children showed congruency effects for sad/fear, but 4-year-old children did not for sad/happy. This pattern of congruency effects is consistent with both models and indicates that an adult-like pattern exists at the youngest age at which children can reliably sort emotional expressions in isolation. In Study 2, we compared the two models to determine their predictive abilities. The two models make different predictions about the size of congruency effects for three emotions: sadness, anger, and fear. The emotional seed model predicts larger congruency effects when sadness is paired with either anger or fear compared to when anger and fear are paired with each other. The dimensional model predicts larger congruency effects when anger and fear are paired together compared to when either is paired with sadness. In both a speeded and an unspeeded task the results failed to support either model, but the pattern of results indicated that fearful bodies have a special effect. Fearful bodies reduced accuracy, increased reaction times more than any other posture, and shifted the pattern of errors. To determine whether the results were specific to bodies, we ran the reverse task to determine whether faces could disrupt the perception of body postures. This experiment did not produce congruency effects, meaning faces do not influence the perception of body postures. In the final experiment, participants performed a flanker task to determine whether the effect of fearful bodies was specific to faces or whether fearful bodies would also produce a larger effect in an unrelated task in which faces were absent. Reaction times did not differ across trials, meaning that the large effect of fearful bodies is specific to situations involving faces. Collectively, these studies provide novel insights, both developmentally and theoretically, into how emotional faces are perceived in context.
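[Editor's illustration] The congruency effect described in this and related abstracts is, at its core, a difference score computed from trial-level accuracy and reaction-time data on congruent versus incongruent face-body pairings. The sketch below shows one way such an effect could be quantified; the file path and column names are illustrative assumptions, not the authors' actual data format.

```python
# Minimal sketch: per-subject congruency effects from hypothetical trial-level data.
import pandas as pd

trials = pd.read_csv("face_body_trials.csv")
# assumed columns: subject, face_emotion, body_emotion, correct (0/1), rt_ms

# A trial is congruent when face and body display the same emotion.
trials["congruent"] = trials["face_emotion"] == trials["body_emotion"]

# Mean accuracy and reaction time per subject and congruency condition.
per_subject = trials.groupby(["subject", "congruent"]).agg(
    accuracy=("correct", "mean"),
    mean_rt=("rt_ms", "mean"),
).unstack("congruent")

# Congruency effects: accuracy advantage on congruent trials (positive = context helps)
# and reaction-time cost on incongruent trials (positive = incongruent context slows responses).
acc_effect = per_subject[("accuracy", True)] - per_subject[("accuracy", False)]
rt_effect = per_subject[("mean_rt", False)] - per_subject[("mean_rt", True)]
print(acc_effect.describe())
print(rt_effect.describe())
```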
Abstract:
The present set of experiments was designed to investigate the development of children's sensitivity to facial expressions observed within emotional contexts. Past research investigating both adults' and children's perception of facial expressions has been limited primarily to the presentation of isolated faces. During daily social interactions, however, facial expressions are encountered within contexts conveying emotions (e.g., background scenes, body postures, gestures). Recently, research has shown that adults' perception of facial expressions is influenced by these contexts. When emotional faces are shown in incongruent contexts (e.g., when an angry face is presented in a context depicting fear), adults' accuracy decreases and their reaction times increase (e.g., Meeren et al., 2005). To examine the influence of emotional body postures on children's perception of facial expressions, in each of the experiments in the current study adults and 8-year-old children made two-alternative forced-choice decisions about facial expressions presented in congruent (e.g., a face displaying sadness on a body displaying sadness) and incongruent (e.g., a face displaying fear on a body displaying sadness) contexts. Consistent with previous studies, a congruency effect (better performance on congruent than incongruent trials) was found for both adults and 8-year-olds when the emotions displayed by the face and body were similar to each other (e.g., fear and sadness; Experiment 1a); the influence of context was greater for 8-year-olds than for adults for these similar expressions. To further investigate why the congruency effect was larger for children than adults in Experiment 1a, Experiment 1b examined whether increased task difficulty would increase the magnitude of adults' congruency effects. Adults were presented with subtle facial expressions, and despite successfully increasing task difficulty, the magnitude of the congruency effect did not increase, suggesting that the difference between children's and adults' congruency effects in Experiment 1a cannot be explained by 8-year-olds finding the task more difficult. In contrast, congruency effects were not found when the expressions displayed by the face and body were dissimilar (e.g., sad and happy; see Experiment 2). The results of the current set of studies are examined with respect to the dimensional theory and the emotional seed model and the developmental timeline of children's sensitivity to facial expressions. A secondary aim of the series of studies was to examine one possible mechanism underlying congruency effects: holistic processing. To examine the influence of holistic processing, participants completed both aligned trials and misaligned trials in which the faces were detached from the body (designed to disrupt holistic processing). Based on the principles of holistic face processing, we predicted that participants would benefit from misalignment of the face and body stimuli on incongruent trials but not on congruent trials. Collectively, our results provide some evidence that both adults and children may process emotional faces and bodies holistically. Consistent with the pattern of results for congruency effects, the magnitude of the effect of misalignment varied with the similarity between emotions. Future research is required to further investigate whether or not facial expressions and emotions conveyed by the body are perceived holistically.
Abstract:
The divided visual field technique was used to investigate the pattern of brain asymmetry in the perception of positive/approach and negative/withdrawal facial expressions. A total of 80 undergraduate students (65 female, 15 male) were distributed into five experimental groups in order to investigate separately the perception of expressions of happiness, surprise, fear, sadness, and the neutral face. In each trial a target and a distractor expression were presented simultaneously on a computer screen for 150 ms and participants had to determine the side (left or right) on which the target expression was presented. Results indicated that expressions of happiness and fear were identified faster when presented in the left visual field, suggesting an advantage of the right hemisphere in the perception of these expressions. Fewer judgement errors and faster reaction times were also observed for the matching condition in which emotional faces were presented in the left visual field and neutral faces in the right visual field. Other results indicated that positive expressions (happiness and surprise) were perceived faster and more accurately than negative ones (sadness and fear). The main results tend to support the right hemisphere hypothesis, which predicts better performance of the right hemisphere in perceiving emotions, as opposed to the approach-withdrawal hypothesis.
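[Editor's illustration] The visual-field comparison reported above reduces to mean reaction times per visual field and their difference. The sketch below, using made-up values, shows that calculation; a positive right-minus-left difference means left-visual-field targets were identified faster, which is conventionally read as a right-hemisphere advantage.

```python
# Minimal sketch of a divided-visual-field reaction-time comparison (illustrative data).
import statistics

# Per-trial reaction times (ms), keyed by the visual field in which the target appeared.
rts = {
    "left_vf": [612, 598, 630, 605],
    "right_vf": [655, 640, 661, 648],
}

mean_left = statistics.mean(rts["left_vf"])
mean_right = statistics.mean(rts["right_vf"])

# Positive value -> faster identification in the left visual field (right-hemisphere advantage).
print(f"LVF advantage: {mean_right - mean_left:.1f} ms")
```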
Abstract:
The amygdala has a key role in the automatic, non-conscious processing of emotions. Highly salient emotional stimuli elicit amygdala activity, and happy faces are among the most rapidly perceived facial expressions. In backward masking paradigms, an image is presented briefly and then masked by another stimulus. However, reports of amygdala responses to masked happy faces have been mixed. In the present study, we used functional magnetic resonance imaging (fMRI) to examine amygdala activation to masked happy, sad, and neutral facial expressions. Masked happy faces elicited greater amygdala activation bilaterally compared to masked sad faces. Our findings indicate that the amygdala is highly responsive to non-consciously perceived happy facial expressions. (JINS, 2010, 16, 383-387.)
Abstract:
Adults and children can discriminate various emotional expressions, although there is limited research on sensitivity to the differences between posed and genuine expressions. Adults have shown implicit sensitivity to the difference between posed and genuine happy smiles in that they evaluate T-shirts paired with genuine smiles more favorably than T-shirts paired with posed smiles or neutral expressions (Peace, Miles, & Johnston, 2006). Adults also have shown some explicit sensitivity to posed versus genuine expressions; they are more likely to say that a model is feeling happy if the expression is genuine rather than posed. Nonetheless, they are duped by posed expressions about 50% of the time (Miles & Johnston, in press). There has been no published study to date in which researchers report whether children's evaluation of items varies with expression, and there is little research investigating children's sensitivity to the veracity of facial expressions. In the present study the same face stimuli were used as in two previous studies (Miles & Johnston, in press; Peace et al., 2006). The first question to be addressed was whether adults and 7-year-olds have a cognitive understanding of the differences between posed and genuine happiness (scenario task). They evaluated the feelings of children who expressed gratitude for a present that they did or did not want. Results indicated that all participants had a fundamental understanding of the difference between real and posed happiness. The second question involved adults' and children's implicit sensitivity to the veracity of posed and genuine smiles. Participants rated and ranked beach balls paired with faces showing posed smiles, genuine smiles, and neutral expressions. Adults ranked, but did not rate, beach balls paired with genuine smiles more favorably than beach balls paired with posed smiles. Children did not demonstrate implicit sensitivity, as their ratings and rankings of beach balls did not vary with expressions; they did not even rank beach balls paired with genuine expressions higher than beach balls paired with neutral expressions. In the explicit (show/feel) task, faces were presented without the beach balls and participants were first asked whether each face was showing happy and then whether each face was feeling happy. There were also two matching trials that presented two faces at once; participants had to indicate which person was actually feeling happy. In the show condition both adults and 7-year-olds were very accurate on genuine and neutral expressions but made some errors on posed smiles. Adults were fooled about 50% of the time by posed smiles in the feel condition (i.e., they were likely to say that a model posing happy was really feeling happy) and children were even less accurate, although they showed weak sensitivity to posed versus genuine expressions. Future research should test an older age group of children to determine when explicit sensitivity to posed versus genuine facial expressions becomes adult-like and modify the ranking task to explore the influence of facial expressions on object evaluations.
Abstract:
The ecological validity of static and intense facial expressions in emotion recognition has been questioned. Recent studies have recommended the use of facial stimuli more compatible with the natural conditions of social interaction, which involve motion and variations in emotional intensity. In this study, we compared the recognition of static and dynamic facial expressions of happiness, fear, anger and sadness, presented at four emotional intensities (25%, 50%, 75% and 100%). Twenty volunteers (9 women and 11 men), aged between 19 and 31 years, took part in the study. The experiment consisted of two sessions in which participants had to identify the emotion of static (photographs) and dynamic (videos) displays of facial expressions on the computer screen. Mean accuracy was submitted to a repeated-measures ANOVA with the model: 2 sexes x [2 conditions x 4 expressions x 4 intensities]. We observed an advantage for the recognition of dynamic expressions of happiness and fear compared to the static stimuli (p < .05). Analysis of the interactions showed that expressions at 25% intensity were better recognized in the dynamic condition (p < .05). The addition of motion improved recognition especially in male participants (p < .05). We conclude that the effect of motion varies as a function of the type of emotion, the intensity of the expression and the sex of the participant. These results support the hypothesis that dynamic stimuli have more ecological validity and are more appropriate for research on emotions.
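[Editor's illustration] One common way to analyze a mixed design like the one reported above (sex between subjects; condition, expression and intensity within subjects) is a linear mixed model with participant as a random effect. The sketch below illustrates that approach only; it is not the authors' analysis script, and the file and column names are assumptions.

```python
# Minimal sketch: approximate the reported 2 x [2 x 4 x 4] mixed design with a
# linear mixed model (participant as random effect) via statsmodels.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per participant x design cell with mean accuracy.
# Columns: subject, sex, condition, expression, intensity, accuracy
df = pd.read_csv("recognition_accuracy.csv")

model = smf.mixedlm(
    "accuracy ~ sex * condition * expression * C(intensity)",  # fixed effects and interactions
    data=df,
    groups=df["subject"],  # random intercept per participant
)
result = model.fit()
print(result.summary())
```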
Abstract:
In this article, we present FACSGen 2.0, new animation software for creating static and dynamic three-dimensional facial expressions on the basis of the Facial Action Coding System (FACS). FACSGen permits total control over the action units (AUs), which can be animated at all levels of intensity and applied alone or in combination to an infinite number of faces. In two studies, we tested the validity of the software for the AU appearance defined in the FACS manual and the conveyed emotionality of FACSGen expressions. In Experiment 1, four FACS-certified coders evaluated the complete set of 35 single AUs and 54 AU combinations for AU presence or absence, appearance quality, intensity, and asymmetry. In Experiment 2, lay participants performed a recognition task on emotional expressions created with the FACSGen software and rated the similarity of expressions displayed by human and FACSGen faces. Results showed good to excellent classification levels for all AUs by the four FACS coders, suggesting that the AUs are valid exemplars of FACS specifications. Lay participants' recognition rates for nine emotions were high, and human and FACSGen expressions were rated as very similar. The findings demonstrate the effectiveness of the software in producing reliable and emotionally valid expressions, and suggest its application in numerous scientific areas, including perception, emotion, and clinical and neuroscience research.
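[Editor's illustration] FACSGen is standalone animation software rather than a programming library, so the snippet below is a purely hypothetical illustration of the kind of parametric specification the abstract describes: action units applied alone or in combination at chosen intensities. The structure and field names are assumptions and do not reflect FACSGen's actual interface.

```python
# Hypothetical data structure for specifying AU-intensity combinations.
from dataclasses import dataclass

@dataclass
class AUSpec:
    au: int           # FACS action unit number, e.g. 12 = lip corner puller
    intensity: float  # 0.0 (absent) to 1.0 (maximum), applied across the animation

# A prototypical "happiness" pattern: AU 6 (cheek raiser) + AU 12 (lip corner puller).
happiness = [AUSpec(au=6, intensity=0.7), AUSpec(au=12, intensity=1.0)]

for spec in happiness:
    print(f"AU{spec.au}: intensity {spec.intensity:.0%}")
```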
Abstract:
Introduction: Observations of behaviour and research using eye-tracking technology have shown that individuals with Williams syndrome (WS) pay an unusual amount of attention to other people’s faces. The present research examines whether this attention to faces is moderated by the valence of emotional expression. Method: Sixteen participants with WS aged between 13 and 29 years (Mean=19 years 9 months) completed a dot-probe task in which pairs of faces displaying happy, angry and neutral expressions were presented. The performance of the WS group was compared to two groups of typically developing control participants, individually matched to the participants in the WS group on either chronological age or mental age. General mental age was assessed in the WS group using the Woodcock Johnson Test of Cognitive Ability Revised (WJ-COG-R; Woodcock & Johnson, 1989; 1990). Results: Compared to both control groups, the WS group exhibited a greater attention bias for happy faces. In contrast, no between-group differences in bias for angry faces were obtained. Conclusions: The results are discussed in relation to recent neuroimaging findings and the hypersocial behaviour that is characteristic of the WS population.
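[Editor's illustration] The attention bias reported from the dot-probe task is conventionally computed as the mean reaction time when the probe replaces the neutral face minus the mean reaction time when it replaces the emotional face; positive values indicate attention drawn toward the emotional face. A minimal sketch of that calculation, with illustrative values, follows.

```python
# Minimal sketch of a dot-probe attention bias score (values are illustrative).
def attention_bias(rt_probe_at_neutral, rt_probe_at_emotional):
    """Return the bias score in ms from per-trial reaction times.

    Positive = faster responses when the probe replaces the emotional face,
    i.e. attention was drawn toward that face.
    """
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rt_probe_at_neutral) - mean(rt_probe_at_emotional)

# Example: probe detected faster at the location of the happy face -> positive bias.
print(attention_bias([520, 540, 515], [480, 470, 495]))
```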