912 results for Facial emotion recognition
Abstract:
The amygdala has a key role in automatic non-conscious processing of emotions. Highly salient emotional stimuli elicit amygdala activity, and happy faces are among the most rapidly perceived facial expressions. In backward masking paradigms, an image is presented briefly and then masked by another stimulus. However, reports of amygdala responses to masked happy faces have been mixed. In the present study, we used functional magnetic resonance imaging (fMRI) to examine amygdala activation to masked happy, sad, and neutral facial expressions. Masked happy faces elicited greater amygdala activation bilaterally as compared to masked sad faces. Our findings indicate that the amygdala is highly responsive to non-consciously perceived happy facial expressions. (JINS, 2010, 16, 383-387.)
Abstract:
Integrated master's dissertation in Information Systems Engineering and Management
Abstract:
Emotion, audition, event-related potentials, MMN, multidimensional scaling, timbre, perception
Abstract:
Magdeburg, University, Faculty of Computer Science, dissertation, 2014
Abstract:
A right-handed man developed a sudden transient amnestic syndrome associated with bilateral hemorrhage of the hippocampi, probably due to Urbach-Wiethe disease. In the 3rd month, despite significant hippocampal structural damage on imaging, only a milder degree of retrograde and anterograde amnesia persisted on detailed neuropsychological examination. On systematic testing of recognition of facial and vocal expression of emotion, we found an impairment of the vocal perception of fear, but not of other emotions, such as joy, sadness and anger. This selective impairment of fear perception was not present in the recognition of facial expression of emotion. Thus emotional perception varies according to the different aspects of emotions and the different modality of presentation (faces versus voices). This is consistent with the idea that there may be multiple emotion systems. The study of emotional perception in this unique case of bilateral involvement of the hippocampus suggests that this structure may play a critical role in the recognition of fear in vocal expression, possibly dissociated from that of other emotions and from that of fear in facial expression. In light of recent data suggesting that the amygdala plays a role in the recognition of fear in the auditory as well as the visual modality, this could indicate that the hippocampus is part of the auditory pathway of fear recognition.
Abstract:
Emotion regulation is crucial for successfully engaging in social interactions. Yet, little is known about the neural mechanisms controlling behavioral responses to emotional expressions perceived in the face of other people, which constitute a key element of interpersonal communication. Here, we investigated brain systems involved in social emotion perception and regulation, using functional magnetic resonance imaging (fMRI) in 20 healthy participants. Participants viewed dynamic facial expressions of either happiness or sadness and were asked to either imitate the expression or to suppress any expression on their own face (in addition to a gender judgment control task). fMRI results revealed higher activity in regions associated with emotion (e.g., the insula), motor function (e.g., motor cortex), and theory of mind (e.g., [pre]cuneus) during imitation. Activity in dorsal cingulate cortex was also increased during imitation, possibly reflecting greater action monitoring or conflict with one's own feeling states. In addition, premotor regions were more strongly activated during both imitation and suppression, suggesting a recruitment of motor control for both the production and inhibition of emotion expressions. Expressive suppression (eSUP) produced increases in dorsolateral and lateral prefrontal cortex typically related to cognitive control. These results suggest that voluntary imitation and eSUP modulate brain responses to emotional signals perceived from faces, by up- and down-regulating activity in distributed subcortical and cortical networks that are particularly involved in emotion, action monitoring, and cognitive control.
Abstract:
Recent theoretical writings suggest that the ineffective regulation of negative emotional states may reduce the ability of women to detect and respond effectively to situational and interpersonal factors that increase risk for sexual assault. However, little empirical research has explored this hypothesis. In the present study, it was hypothesized that prior sexual victimization and negative mood state would each independently predict poor risk recognition and less effective defensive actions in response to an analogue sexual assault vignette. Further, these variables were expected to interact to produce particularly impaired risk responses. Finally, the in vivo emotion regulation strategy of suppression and the corresponding cognitive resource usage (operationalized as memory impairment for the vignette) were hypothesized to mediate these associations. Participants were 668 female undergraduate students who were randomly assigned to receive a negative or neutral film mood induction followed by an audiotaped dating interaction during which they were instructed to indicate when the man had “gone too far” and describe an adaptive response to the situation. Approximately 33.5% of the sample reported a single victimization and 10% reported revictimization. Hypotheses were largely unsupported, as sexual victimization history, mood condition, and their interaction did not impact risk recognition or adaptive responding. However, in vivo emotional suppression and cognitive resource usage were shown to predict delayed risk recognition only. Findings suggest that, contrary to hypotheses, negative mood (as induced here) may not relate to risk recognition and response impairments. However, it may be important for victimization prevention programs that focus on risk perception to address possible underlying issues with emotional suppression and limited cognitive resources to improve risk perception abilities. Limitations and future directions are discussed.
Abstract:
In the past, the accuracy of facial approximations has been assessed by resemblance ratings (i.e., the comparison of a facial approximation directly to a target individual) and recognition tests (e.g., the comparison of a facial approximation to a photo array of faces including foils and a target individual). Recently, several research studies have indicated that recognition tests hold major strengths in contrast to resemblance ratings. However, resemblance ratings remain popularly employed and/or are given weighting when judging facial approximations, thus indicating that no consensus has been reached. This study aims to further investigate the matter by comparing the results of resemblance ratings and recognition tests for two facial approximations which clearly differed in their morphological appearance. One facial approximation was constructed by an experienced practitioner privy to the appearance of the target individual (practitioner had direct access to an antemortem frontal photograph during face construction), while the other facial approximation was constructed by a novice under blind conditions. Both facial approximations, whilst clearly morphologically different, were given similar resemblance scores even though recognition test results produced vastly different results. One facial approximation was correctly recognized almost without exception while the other was not correctly recognized above chance rates. These results suggest that resemblance ratings are insensitive measures of the accuracy of facial approximations and lend further weight to the use of recognition tests in facial approximation assessment. (c) 2006 Elsevier Ireland Ltd. All rights reserved.
Abstract:
The aim of the present study was to establish if patients with major depression (MD) exhibit a memory bias for sad faces, relative to happy and neutral, when the affective element of the faces is not explicitly processed at encoding. To this end, 16 psychiatric out-patients with MD and 18 healthy, never-depressed controls (HC) were presented with a series of emotional faces and were required to identify the gender of the individuals featured in the photographs. Participants were subsequently given a recognition memory test for these faces. At encoding, patients with MD exhibited a non-significant tendency towards slower gender identification (GI) times, relative to HC, for happy faces. However, the GI times of the two groups did not differ for sad or neutral faces. At memory testing, patients with MD did not exhibit the expected memory bias for sad faces. Similarly, HC did not demonstrate enhanced memory for happy faces. Overall, patients with MD were impaired in their memory for the faces relative to the HC. The current findings are consistent with the proposal that mood-congruent memory biases are contingent upon explicit processing of the emotional element of the to-be-remembered material at encoding.
Abstract:
Sixteen clinically depressed patients and sixteen healthy controls were presented with a set of emotional facial expressions and were asked to identify the emotion portrayed by each face. They were subsequently given a recognition memory test for these faces. There was no difference between the groups in their ability to identify emotions from faces. All participants identified emotional expressions more accurately than neutral expressions, with happy expressions being identified most accurately. During the recognition memory phase the depressed patients demonstrated superior memory for sad expressions, and inferior memory for happy expressions, relative to neutral expressions. Conversely, the controls demonstrated superior memory for happy expressions, and inferior memory for sad expressions, relative to neutral expressions. These results are discussed in terms of the cognitive model of depression proposed by Williams, Watts, MacLeod, and Mathews (1997).
Abstract:
Several decades of research on emotional development have underlined the contribution of several domains to emotional understanding in childhood. Based on this research, Pons and colleagues (Pons & Harris, 2002; Pons, Harris & Rosnay, 2004) proposed the Test of Emotion Comprehension (TEC), which assesses nine domains of emotional understanding: the recognition of emotions based on facial expressions; the comprehension of external emotional causes; the impact of desire on emotions; emotions based on beliefs; the influence of memory on emotions; the possibility of emotional regulation; the possibility of hiding an emotional state; having mixed emotions; and the contribution of morality to emotional experiences. This instrument was administered individually to 182 Portuguese children aged between 8 and 11 years, attending the 3rd and 4th grades in public schools. Additionally, we used the Socially in Action-Peers (SAp) (Rocha, Candeias & Lopes da Silva, 2012) to assess the TEC's criterion-related validity. Mean differences in TEC scores by gender and by socio-economic status (SES) were analyzed. The TEC's psychometric properties were examined in terms of item sensitivity and reliability (test-retest stability). Finally, in order to explore the theoretical structure underlying the TEC, a Confirmatory Factor Analysis and a Similarity Structure Analysis were computed. Implications of these findings for emotional understanding assessment and intervention in childhood are discussed.
Abstract:
This article details the author's attempts to improve understanding of organisational behaviour through investigation of the cognitive and affective processes that underlie attitudes and behaviour. To this end, the paper describes the author's earlier work on the attribution theory of leadership and, more recently, in three areas of emotion research: affective events theory, emotional intelligence, and the effect of supervisors' facial expressions on employees' perceptions of leader-member exchange quality. The paper summarises the author's research on these topics, shows how they have contributed to furthering our understanding of organisational behaviour, suggests where research in these areas is heading, and draws some conclusions for management practice.
Abstract:
In studies of mirror self-recognition, subjects are usually surreptitiously marked on their head and then presented with a mirror. Scores of studies have established that by 18 to 24 months, children investigate their own head upon seeing the mark in the mirror. Scores of papers have debated what this means. Suggestions range from rich interpretations (e.g., the development of self-awareness) to lean accounts (e.g., the development of proprioceptive-visual matching), and include numerous more moderate proposals (e.g., the development of a concept of one's face). In Study 1, 18- to 24-month-old toddlers were given the standard test and a novel task in which they were marked on their legs rather than on their face. Toddlers performed equivalently on both tasks, suggesting that passing the test does not rely on information specific to facial features. In Study 2, toddlers were surreptitiously slipped into trouser legs that were prefixed to a highchair. Toddlers failed to retrieve the sticker now that their legs looked different from expectations. This finding, together with the findings from a third study which showed that self-recognition in live video feedback develops later than mirror self-recognition, suggests that performance is not solely the result of proprioceptive-visual matching.
Abstract:
The divided visual field technique was used to investigate the pattern of brain asymmetry in the perception of positive/approach and negative/withdrawal facial expressions. A total of 80 undergraduate students (65 female, 15 male) were distributed in five experimental groups in order to investigate separately the perception of expressions of happiness, surprise, fear, sadness, and the neutral face. In each trial a target and a distractor expression were presented simultaneously on a computer screen for 150 ms and participants had to determine the side (left or right) on which the target expression was presented. Results indicated that expressions of happiness and fear were identified faster when presented in the left visual field, suggesting an advantage of the right hemisphere in the perception of these expressions. Fewer judgement errors and faster reaction times were also observed for the matching condition in which emotional faces were presented in the left visual field and neutral faces in the right visual field. Other results indicated that positive expressions (happiness and surprise) were perceived faster and more accurately than negative ones (sadness and fear). The main results tend to support the right-hemisphere hypothesis, which predicts superior right-hemisphere performance in perceiving emotions, as opposed to the approach-withdrawal hypothesis.
Abstract:
A 14-year-old patient sustained low-energy blunt facial trauma that evolved to right facial paralysis caused by a parotid hematoma with a parotid salivary gland lesion. Computed tomography and angiography demonstrated an intraparotid collection without pseudoaneurysm and without radiologic signs of facial fracture. The patient was treated with serial punctures to drain the hematoma, and the facial paralysis regressed and remitted completely, with no late sequelae. The authors discuss the relationship between traumatic facial nerve injuries and the presence or absence of facial fractures, emphasizing the importance of early recognition and appropriate treatment of such cases.