989 results for facial emotion processing
Abstract:
Background: The spectrum approach was used to examine contributions of comorbid symptom dimensions of substance abuse and eating disorder to the abnormal prefrontal-cortical and subcortical-striatal activity to happy and fear faces previously demonstrated in bipolar disorder (BD). Method: Fourteen remitted BD-type I and sixteen healthy individuals viewed neutral, mild and intense happy and fear faces in two event-related fMRI experiments. All individuals completed Substance-Use and Eating-Disorder Spectrum measures. Region-of-interest analyses for bilateral prefrontal and subcortical-striatal regions were performed. Results: BD individuals scored significantly higher on these spectrum measures than healthy individuals (p<0.05), and were distinguished by activity in prefrontal and subcortical-striatal regions. BD relative to healthy individuals showed reduced dorsal prefrontal-cortical activity to all faces. Only BD individuals showed greater subcortical-striatal activity to happy and neutral faces. In BD individuals, negative correlations were shown between substance use severity and right PFC activity to intense happy faces (p<0.04), and between substance use severity and right caudate nucleus activity to neutral faces (p<0.03). Positive correlations were shown between eating disorder severity and right ventral putamen activity to intense happy (p<0.02) and neutral faces (p<0.03). Exploratory analyses revealed few significant effects of illness variables and medication upon neural activity in BD individuals. Limitations: Small sample size of predominantly medicated BD individuals. Conclusion: This study is the first to report relationships between comorbid symptom dimensions of substance abuse and eating disorder and prefrontal-cortical and subcortical-striatal activity to facial expressions in BD. Our findings suggest that these comorbid features may contribute to observed patterns of functional abnormalities in neural systems underlying mood regulation in BD. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
To examine abnormal patterns of frontal cortical-subcortical activity in response to emotional stimuli in euthymic individuals with bipolar disorder type I in order to identify trait-like, pathophysiologic mechanisms of the disorder. We examined potential confounding effects of total psychotropic medication load and illness variables upon neural abnormalities. We analyzed neural activity in 19 euthymic bipolar and 24 healthy individuals in response to mild and intense happy, fearful and neutral faces. Relative to healthy individuals, bipolar subjects had significantly increased left striatal activity in response to mild happy faces (p < 0.05, corrected), decreased right dorsolateral prefrontal cortical (DLPFC) activity in response to neutral, mild and intense happy faces, and decreased left DLPFC activity in response to neutral, mild and intense fearful faces (p < 0.05, corrected). Bipolar and healthy individuals did not differ in amygdala activity in response to either emotion. In bipolar individuals, there was no significant association between medication load and abnormal activity in these regions, but there was a negative relationship between age of illness onset and amygdala activity in response to mild fearful faces (p = 0.007). Relative to those without comorbidities, bipolar individuals with comorbidities showed a trend toward increased left striatal activity in response to mild happy faces. Abnormally increased striatal activity in response to potentially rewarding stimuli and decreased DLPFC activity in response to other emotionally salient stimuli may underlie mood instability in euthymic bipolar individuals, and are more apparent in those with comorbid diagnoses. The absence of a relationship between medication load and abnormal neural activity in bipolar individuals suggests that our findings may reflect pathophysiologic mechanisms of the illness rather than medication confounds. Future studies should examine whether this pattern of abnormal neural activity could distinguish bipolar from unipolar depression.
Abstract:
Background: It has been suggested that individuals with social anxiety disorder (SAD) are exaggeratedly concerned about approval and disapproval by others. Therefore, we assessed the recognition of facial expressions by individuals with SAD, in an attempt to overcome the limitations of previous studies. Methods: The sample comprised 231 individuals (78 SAD patients and 153 healthy controls). All individuals were treatment-naive, aged 18-30 years and of similar socioeconomic level. Participants judged which emotion (happiness, sadness, disgust, anger, fear, or surprise) was presented in the facial expression of stimuli displayed on a computer screen. The stimuli were manipulated to depict different emotional intensities, with the initial image being a neutral face (0%); as the participant moved across the images, the expressions increased in emotional intensity until reaching the full emotion (100%). The time, accuracy, and intensity necessary to perform the judgments were evaluated. Results: The groups did not show statistically significant differences with respect to the number of correct judgments or to the time necessary to respond. However, women with SAD required less emotional intensity to recognize faces displaying fear (p = 0.002), sadness (p = 0.033) and happiness (p = 0.002), with no significant differences for the other emotions or for men with SAD. Conclusions: The findings suggest that women with SAD are hypersensitive to threat-related and approval-related social cues. Future studies investigating the neural basis of impaired facial emotion processing in SAD using functional neuroimaging would be desirable and opportune. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
The reliance in experimental psychology on testing undergraduate populations with relatively little life experience, and/or ambiguously valenced stimuli with varying degrees of self-relevance, may have contributed to inconsistent findings in the literature on the valence hypothesis. To control for these potential limitations, the current study assessed lateralised lexical decisions for positive and negative attachment words in 40 middle-aged male and female participants. Self-relevance was manipulated in two ways: by testing currently married compared with previously married individuals and by assessing self-relevance ratings individually for each word. Results replicated a left hemisphere advantage for lexical decisions and a processing advantage of emotional over neutral words but did not support the valence hypothesis. Positive attachment words yielded a processing advantage over neutral words in the right hemisphere, while emotional words (irrespective of valence) yielded a processing advantage over neutral words in the left hemisphere. Both self-relevance manipulations were unrelated to lateralised performance. The roles of participant sex and age in emotion processing are discussed as potential modulators of the present findings.
Abstract:
Difficulties in social interaction are often reported in children with developmental language disorder (dysphasia). These difficulties are generally attributed to the language impairment itself, but they could also stem from a problem decoding other people's emotions. The aim of the present study was to explore this possibility in children with dysphasia aged 9 to 12 years. Various emotional stimuli were presented in video form to these children and to a control group under five conditions: unfiltered speech, filtered speech, dynamic face, dynamic face with unfiltered speech, and dynamic face with filtered speech. Children with dysphasia and control children did not differ significantly as a function of the emotions presented or the conditions tested. However, a subgroup of children with mixed dysphasia made significantly more errors across the task as a whole than the subgroup of children without dysphasia of the same chronological age. In fact, only some of the children with mixed dysphasia obtained lower scores. These same children had a low non-verbal IQ, whereas their language comprehension was equivalent to that of their subgroup (children with mixed dysphasia). Despite these significant differences, the scores of children with mixed dysphasia remained relatively high and the difficulties observed were subtle. Clinically, practitioners (speech-language pathologists, psychologists, educators) should systematically assess emotion-decoding skills in children with dysphasia, whose difficulties are not necessarily obvious in everyday life. Future research should develop a screening tool sensitive to emotion-decoding disorders, as well as appropriate therapeutic strategies.
Abstract:
Humans communicate through several types of channels: words, voice, body gestures, emotions, and so on. For this reason, a computer must perceive these various communication channels in order to interact intelligently with humans, for example by using microphones and webcams. In this thesis, we are interested in determining human emotions from images or videos of faces, so that this information can then be used in various application domains. The thesis begins with a brief introduction to machine learning, focusing on the models and algorithms we used, such as multilayer perceptrons, convolutional neural networks and autoencoders. It then presents the results of applying these models to several facial expression and emotion datasets. We concentrate on the study of different types of autoencoders (denoising autoencoders, contractive autoencoders, etc.) in order to expose some of their limitations, such as the possibility of co-adaptation between filters or of obtaining an overly smooth spectral curve, and we study new ideas to address these problems. We also propose a new approach to overcome a limitation of autoencoders traditionally trained in a purely unsupervised fashion, that is, without using any knowledge of the task we ultimately want to solve (such as predicting class labels), by developing a new semi-supervised training criterion that exploits a small number of labeled examples combined with a large quantity of unlabeled data, in order to learn a representation suited to the classification task and to obtain better classification performance. Finally, we describe the overall operation of our emotion detection system and propose new ideas that could lead to future work.
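Because the abstract above turns on denoising autoencoders and a semi-supervised training criterion, a minimal sketch may help make the recipe concrete. This is not the thesis' code: the PyTorch framework, layer sizes, Gaussian corruption, and the weighting factor `alpha` are illustrative assumptions; only the general idea (a reconstruction loss over all data combined with a classification loss over a small labeled subset, both shaping a shared representation) follows the description above.

```python
# Illustrative sketch (not the thesis code): a denoising autoencoder whose
# encoder is additionally trained on a small labeled subset, combining an
# unsupervised reconstruction loss with a supervised classification loss.
# Layer sizes, the Gaussian noise level, and the weight `alpha` are assumptions.
import torch
import torch.nn as nn

class SemiSupervisedDAE(nn.Module):
    def __init__(self, n_in=48 * 48, n_hidden=256, n_classes=7):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())
        self.classifier = nn.Linear(n_hidden, n_classes)

    def forward(self, x, noise_std=0.3):
        # Corrupt the input, encode it, then reconstruct and classify.
        corrupted = x + noise_std * torch.randn_like(x)
        h = self.encoder(corrupted)
        return self.decoder(h), self.classifier(h)

def semi_supervised_loss(model, x_unlabeled, x_labeled, y_labeled, alpha=0.1):
    recon_u, _ = model(x_unlabeled)
    recon_l, logits = model(x_labeled)
    # Unsupervised reconstruction term over all data plus a weighted
    # supervised term computed only on the small labeled subset.
    reconstruction = nn.functional.mse_loss(recon_u, x_unlabeled) + \
                     nn.functional.mse_loss(recon_l, x_labeled)
    classification = nn.functional.cross_entropy(logits, y_labeled)
    return reconstruction + alpha * classification
```

With this kind of combined objective, the learned representation is pulled toward features that also help the classifier, which is the mechanism the abstract credits for the improved classification performance.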
Abstract:
Emotion processing deficits can cause catastrophic damage to a person's ability to interact socially. While it is known that older adults have difficulty identifying facial emotions, it is still not clear whether this difficulty extends to identification of the emotion conveyed by prosody. This study investigated whether the ability of older adults to decode emotional prosody falls below that of young adults after controlling for loss of hearing sensitivity and key features of cognitive ageing. Apart from frontal lobe load, only verbal IQ was associated with the age-related reduction in performance displayed by older participants, but a notable deficit existed after controlling for its effects. It is concluded that older adults may indeed have difficulty deducing the emotion conveyed by prosody, and that while this difficulty can be exaggerated by some aspects of cognitive ageing, it is primary in origin.
Abstract:
BACKGROUND: Sex differences are present in many neuropsychiatric conditions that affect emotion and approach-avoidance behavior. One potential mechanism underlying such observations is testosterone in early development. Although much is known about the effects of testosterone in adolescence and adulthood, little is known in humans about how testosterone in fetal development influences later neural sensitivity to valenced facial cues and approach-avoidance behavioral tendencies. METHODS: With functional magnetic resonance imaging, we scanned 25 children aged 8-11 years while they viewed happy, fear, neutral, or scrambled faces. Fetal testosterone (FT) was measured via amniotic fluid sampled between 13 and 20 weeks gestation. Behavioral approach-avoidance tendencies were measured via parental report on the Sensitivity to Punishment and Sensitivity to Rewards questionnaire. RESULTS: Increasing FT predicted enhanced selectivity for positively compared with negatively valenced facial cues in reward-related regions such as the caudate, putamen, and nucleus accumbens, but not the amygdala. Statistical mediation analyses showed that increasing FT predicts increased behavioral approach tendencies by biasing the caudate, putamen, and nucleus accumbens, but not the amygdala, to be more responsive to positively compared with negatively valenced cues. In contrast, FT was not predictive of behavioral avoidance tendencies, either through direct or neurally mediated paths. CONCLUSIONS: This work suggests that testosterone in humans acts as a fetal programming mechanism on the reward system and influences behavioral approach tendencies later in life. As a mechanism influencing atypical development, FT might be important across a range of neuropsychiatric conditions that asymmetrically affect the sexes, the reward system, emotion processing, and approach behavior.
Abstract:
Empathy is the lens through which we view others' emotion expressions, and respond to them. In this study, empathy and facial emotion recognition were investigated in adults with autism spectrum conditions (ASC; N=314), parents of a child with ASC (N=297) and IQ-matched controls (N=184). Participants completed a self-report measure of empathy (the Empathy Quotient [EQ]) and a modified version of the Karolinska Directed Emotional Faces Task (KDEF) using an online test interface. Results showed that mean scores on the EQ were significantly lower in fathers (p<0.05) but not mothers (p>0.05) of children with ASC compared to controls, whilst both males and females with ASC obtained significantly lower EQ scores (p<0.001) than controls. On the KDEF, statistical analyses revealed poorer overall performance by adults with ASC (p<0.001) compared to the control group. When the 6 distinct basic emotions were analysed separately, the ASC group showed impaired performance across five out of six expressions (happy, sad, angry, afraid and disgusted). Parents of a child with ASC were not significantly worse than controls at recognising any of the basic emotions, after controlling for age and non-verbal IQ (all p>0.05). Finally, results indicated significant differences between males and females with ASC for emotion recognition performance (p<0.05) but not for self-reported empathy (p>0.05). These findings suggest that self-reported empathy deficits in fathers of autistic probands are part of the 'broader autism phenotype'. This study also reports new findings of sex differences amongst people with ASC in emotion recognition, as well as replicating previous work demonstrating empathy difficulties in adults with ASC. The use of empathy measures as quantitative endophenotypes for ASC is discussed.
Abstract:
A wealth of literature suggests that emotional faces are given special status as visual objects: Cognitive models suggest that emotional stimuli, particularly threat-relevant facial expressions such as fear and anger, are prioritized in visual processing and may be identified by a subcortical “quick and dirty” pathway in the absence of awareness (Tamietto & de Gelder, 2010). Both neuroimaging studies (Williams, Morris, McGlone, Abbott, & Mattingley, 2004) and backward masking studies (Whalen, Rauch, Etcoff, McInerney, & Lee, 1998) have supported the notion of emotion processing without awareness. Recently, our own group (Adams, Gray, Garner, & Graf, 2010) showed adaptation to emotional faces that were rendered invisible using a variant of binocular rivalry: continuous flash suppression (CFS; Tsuchiya & Koch, 2005). Here we (i) respond to Yang, Hong, and Blake's (2010) criticisms of our adaptation paper and (ii) provide a unified account of adaptation to facial expression, identity, and gender under conditions of unawareness.
Abstract:
Introduction: Impairments in facial emotion recognition (FER) have been reported in bipolar disorder (BD) during all mood states. FER has been the focus of functional magnetic resonance imaging studies evaluating differential activation of limbic regions. Recently, the alpha 1-C subunit of the L-type voltage-gated calcium channel (CACNA1C) gene has been described as a risk gene for BD, and its Met allele has been found to increase CACNA1C mRNA expression. In healthy controls, the CACNA1C risk (Met) allele has been reported to increase limbic system activation during emotional stimuli and also to impact cognitive function. The aim of this study was to investigate the impact of CACNA1C genotype on FER scores and limbic system morphology in subjects with BD and healthy controls. Material and methods: Thirty-nine euthymic BD I subjects and 40 healthy controls were administered an FER test battery and genotyped for CACNA1C. Subjects were also examined with a 3D 3-Tesla structural imaging protocol. Results: The CACNA1C risk allele for BD was associated with FER impairment in BD, whereas no such association was observed in controls. CACNA1C genotype did not affect amygdala or hippocampus volume in either BD subjects or controls. Limitations: Sample size. Conclusion: The present findings suggest that a polymorphism in calcium channels interferes with the FER phenotype exclusively in BD and does not affect limbic structure morphology. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
Attention deficit/hyperactivity disorder (ADHD) shows an increased prevalence among arrested offenders compared to the general population. The aim of the present study was to investigate whether ADHD symptoms are a major risk factor for criminal behaviour, or whether further deficits, mainly abnormalities in emotion processing, have to be considered as important additional factors that promote delinquency in the presence of ADHD symptomatology. Event-related potentials (ERPs) of 13 non-delinquent and 13 delinquent subjects with ADHD and 13 controls were compared using a modified visual Go/Nogo continuous performance task (VCPT) and a newly developed version of the visual CPT that additionally requires emotional evaluation (ECPT). ERPs were analyzed with regard to their topographies and Global Field Power (GFP). Offenders with ADHD differed from non-delinquent subjects with ADHD in the ERPs representing higher-order visual processing of objects and faces (N170) and facial affect (P200), and in late monitoring and evaluative functions (LPC) of behavioural response inhibition. Concerning neural activity thought to reflect the allocation of neural resources and cognitive processing capability (P300 Go), response inhibition (P300 Nogo), and attention/expectancy (CNV), deviances were observable in both ADHD groups and may thus be attributed to ADHD rather than to delinquency. In conclusion, ADHD symptomatology may be a risk factor for delinquency, since some neural information processing deficits found in ADHD seemed to be even more pronounced in offenders with ADHD. However, our results suggest additional risk factors consisting of deviant higher-order visual processing, especially of facial affect, as well as abnormalities in monitoring and evaluative functions of response inhibition.
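For readers unfamiliar with the Global Field Power (GFP) measure used in the ERP analysis above, it is conventionally defined as the spatial standard deviation of the average-referenced potential across all electrodes at each time point. The numpy sketch below illustrates that conventional definition only; it is not the authors' analysis pipeline, and the array layout is an assumption.

```python
# Minimal sketch of Global Field Power (GFP): the spatial standard deviation
# of the scalp potential across electrodes at each time point. This follows
# the standard definition, not the authors' specific pipeline.
import numpy as np

def global_field_power(erp):
    """erp: array of shape (n_electrodes, n_timepoints), in microvolts."""
    avg_ref = erp - erp.mean(axis=0, keepdims=True)   # re-reference to average
    return np.sqrt((avg_ref ** 2).mean(axis=0))       # one GFP value per time point
```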
Abstract:
The quick identification of potentially threatening events is a crucial cognitive capacity to survive in a changing environment. Previous functional MRI data revealed the right dorsolateral prefrontal cortex and the region of the left intraparietal sulcus (IPS) to be involved in the perception of emotionally negative stimuli. For assessing chronometric aspects of emotion processing, we applied transcranial magnetic stimulation above these areas at different times after negative and neutral picture presentation. An interference with emotion processing was found with transcranial magnetic stimulation above the dorsolateral prefrontal cortex 200-300 ms and above the left intraparietal sulcus 240/260 ms after negative stimuli. The data suggest a parallel and conjoint involvement of prefrontal and parietal areas for the identification of emotionally negative stimuli.
Abstract:
Daily we cope with upcoming potentially disadvantageous events. Therefore, it makes sense to be prepared for the worst case. Such a 'pessimistic' bias is reflected in brain activation during emotion processing. Healthy individuals underwent functional neuroimaging while viewing emotional stimuli that were earlier cued ambiguously or unambiguously concerning their emotional valence. Presentation of ambiguously announced pleasant pictures compared with unambiguously announced pleasant pictures resulted in increased activity in the ventrolateral prefrontal, premotor and temporal cortex, and in the caudate nucleus. This was not the case for the respective negative conditions. This indicates that pleasant stimuli after ambiguous cueing provided 'unexpected' emotional input, resulting in the adaptation of brain activity. It strengthens the hypothesis of a 'pessimistic' bias of brain activation toward ambiguous emotional events.
Abstract:
This thesis investigates whether the emotional states of users interacting with a virtual robot can be recognized reliably, and whether a specific interaction strategy can change the users' emotional state and affect their risk decisions. For this investigation, the OpenFace [1] emotion recognition model was originally intended to be integrated into the Flobi [2] system, to allow the agent to be aware of the current emotional state of the user and to react appropriately. An open-source ROS [3] bridge was available online to integrate OpenFace into the Flobi simulation, but it was not compatible with some other projects in the Flobi distribution; for these technical reasons, DeepFace was selected instead. In a human-agent interaction, the system is compared to a system without emotion recognition. Evaluation can take place at different levels: evaluation of the emotion recognition model, evaluation of the interaction strategy, and evaluation of the effect of the interaction on user decisions. The results showed that happy emotion induction was successful in 58% of cases and fear emotion induction in 77%. Risk decision results show that, after interaction in the happy induction condition, 16.6% of participants switched to a lower-risk decision, 75% did not change their decision, and the remainder switched to a higher-risk decision. Among fear-induced participants, 33.3% decreased their risk and 66.6% did not change their decision. The emotion recognition accuracy and its bias were also evaluated, and the sensitivity and specificity were calculated for each emotion class. The emotion recognition model classified happy emotions as neutral most of the time.
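The per-class sensitivity and specificity mentioned above can be derived one-vs-rest from a confusion matrix. The sketch below is illustrative only; the three-class example and its counts are assumptions for demonstration, not data from the thesis.

```python
# Illustrative sketch of per-class sensitivity and specificity computed
# one-vs-rest from a confusion matrix; the example classes and counts are
# hypothetical and do not reproduce the thesis' evaluation.
import numpy as np

def per_class_sensitivity_specificity(conf):
    """conf[i, j] = number of samples with true class i predicted as class j."""
    conf = np.asarray(conf, dtype=float)
    total = conf.sum()
    tp = np.diag(conf)                   # correctly recognized instances
    fn = conf.sum(axis=1) - tp           # missed instances of each class
    fp = conf.sum(axis=0) - tp           # other classes predicted as this one
    tn = total - tp - fn - fp
    sensitivity = tp / (tp + fn)         # recall per class
    specificity = tn / (tn + fp)         # true-negative rate per class
    return sensitivity, specificity

# Hypothetical three-class example (happy, fearful, neutral): a model that
# often labels happy faces as neutral shows low sensitivity for "happy".
conf = [[10, 1, 9],
        [ 2, 15, 3],
        [ 1,  0, 19]]
sens, spec = per_class_sensitivity_specificity(conf)
```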