24 results for facial expression recognition


Relevance:

100.00%

Publisher:

Abstract:

Postnatal maternal depression is associated with difficulties in maternal responsiveness. As most signals arising from the infant come from facial expressions, one possible explanation for these difficulties is that mothers with postnatal depression are differentially affected by particular infant facial expressions. This study therefore investigated the effects of postnatal depression on mothers' perceptions of infant facial expressions. Participants (15 control, 15 depressed and 15 anxious mothers) were asked to rate a number of infant facial expressions ranging from very positive to very negative. Each face was shown twice, for a short and for a longer period of time, in random order. Results revealed that mothers used more extreme ratings (i.e. more negative or more positive) when infant faces were shown for the longer period. Mothers suffering from postnatal depression were more likely than controls to rate negative infant faces shown for the longer period more negatively. The differences were specific to depression rather than an effect of general postnatal psychopathology, as no differences were observed between anxious mothers and controls. There were no other significant differences in maternal ratings of infant faces shown for short periods, or of positive or neutral valence faces at either presentation length. The finding that mothers with postnatal depression rate negative infant faces more negatively indicates that an appraisal bias might underlie some of the difficulties these mothers have in responding to their own infants' signals.

Relevance:

100.00%

Publisher:

Abstract:

The human mirror neuron system (hMNS) has been associated with various forms of social cognition and affective processing, including vicarious experience. It has also been proposed that a faulty hMNS may underlie some of the deficits seen in the autism spectrum disorders (ASDs). In the present study we set out to investigate whether emotional facial expressions could modulate a putative EEG index of hMNS activation (mu suppression) and, if so, whether this differed according to the individual level of autistic traits [high versus low Autism Spectrum Quotient (AQ) score]. Participants were presented with 3 s films of actors opening and closing their hands (the classic hMNS mu-suppression protocol) while simultaneously wearing happy, angry, or neutral expressions. Mu suppression was measured in the alpha and low beta bands. The low AQ group displayed greater low beta event-related desynchronization (ERD) to both angry and neutral expressions. The high AQ group displayed greater low beta ERD to angry than to happy expressions. There was also significantly more low beta ERD to happy faces for the low than for the high AQ group. In conclusion, an interaction between AQ group and emotional expression revealed that hMNS activation can be modulated by emotional facial expressions and that this modulation is differentiated according to individual differences in the level of autistic traits. The EEG index of hMNS activation (mu suppression) appears to be a sensitive measure of the variability in facial processing in typically developing individuals with high and low self-reported traits of autism.
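Mu suppression of the kind reported here is usually quantified as event-related desynchronization (ERD): the proportional drop in alpha or low-beta band power during action observation relative to a baseline interval. A minimal sketch of that computation (the band edges, sampling rate, and plain-FFT power estimate are illustrative assumptions, not this study's exact pipeline):

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean FFT power of `signal` within the [f_lo, f_hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()

def erd_percent(baseline, event, fs, f_lo=8.0, f_hi=13.0):
    """ERD as a percentage: positive values mean a power decrease (suppression)."""
    p_base = band_power(baseline, fs, f_lo, f_hi)
    p_event = band_power(event, fs, f_lo, f_hi)
    return 100.0 * (p_base - p_event) / p_base

# Synthetic demo: a 10 Hz mu rhythm attenuated during "observation".
fs = 256
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
baseline = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
event = 0.4 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
print(f"ERD: {erd_percent(baseline, event, fs):.1f}%")  # strong suppression
```

The same index computed over a low beta band (e.g. 13–20 Hz, again an assumption) would correspond to the low beta ERD contrasts reported above.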

Relevance:

100.00%

Publisher:

Abstract:

Objective. Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and therefore lead to false positives. This phenomenon is commonly seen in visual attention-based brain–computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of stimuli (such as flashes or presented images) adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern that could evoke larger ERPs than the face pattern, but one that could reduce adjacent interference, annoyance and fatigue while evoking ERPs as good as those observed with the face pattern. Approach. Positive facial expressions can be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, a facial expression change pattern alternating between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions: a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no face pattern. Comparisons were made in terms of classification accuracy and information transfer rate, as well as user-supplied subjective measures. Main results. The results showed that interference from adjacent stimuli, annoyance and the fatigue experienced by subjects could be reduced significantly (p < 0.05) by using the facial expression change pattern in comparison with the face pattern. The offline results show that the classification accuracy of the facial expression change pattern was significantly better than that of the shuffled pattern (p < 0.05) and the face pattern (p < 0.05). Significance. The facial expression change pattern presented in this paper reduced interference from adjacent stimuli and significantly decreased the fatigue and annoyance experienced by BCI users (p < 0.05) compared to the face pattern.
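The information transfer rate used in comparisons like this is most often the Wolpaw estimate, which converts the number of selectable targets N and the classification accuracy P into bits per selection. A hedged sketch of that standard formula (not necessarily the exact variant this study used):

```python
import math

def wolpaw_itr_bits(n_targets, accuracy):
    """Wolpaw bits-per-selection for an N-target BCI at accuracy P."""
    n, p = n_targets, accuracy
    if p <= 0 or p >= 1:
        # Degenerate cases: perfect accuracy carries log2(N) bits,
        # zero accuracy carries none of the intended information.
        return math.log2(n) if p >= 1 else 0.0
    return (math.log2(n)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

def itr_bits_per_min(n_targets, accuracy, trial_seconds):
    """Scale bits per selection by the selection rate."""
    return wolpaw_itr_bits(n_targets, accuracy) * 60.0 / trial_seconds

# Example: a hypothetical 36-class speller at 90% accuracy.
print(f"{wolpaw_itr_bits(36, 0.9):.2f} bits/selection")
print(f"{itr_bits_per_min(36, 0.9, 10.0):.1f} bits/min at 10 s per selection")
```

Note that the Wolpaw estimate assumes equiprobable targets and uniformly distributed errors, which is why papers sometimes also report a "practical" bit rate that accounts for error correction time.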

Relevance:

100.00%

Publisher:

Abstract:

Interference from spatially adjacent non-target stimuli evokes ERPs during non-target sub-trials and leads to false positives. This phenomenon is commonly seen in visual attention-based BCIs and affects the performance of the BCI system. Although users tried to focus on the target stimulus, they could not help being affected by conspicuous changes of the stimuli (flashes or presented images) adjacent to the target stimulus. In view of this, the aim of this study was to reduce adjacent interference using a new stimulus presentation pattern based on facial expression changes. Positive facial expressions can be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, two different conditions (Pattern_1, Pattern_2) were compared on objective measures such as classification accuracy and information transfer rate, as well as subjective measures. Pattern_1 was a "flash-only" pattern and Pattern_2 was a facial expression change of a dummy face. In the facial expression change patterns, the background is a positive facial expression and the stimulus is a negative facial expression. The results showed that interference from adjacent stimuli could be reduced significantly (p < 0.05) by using the facial expression change patterns. The online performance of the BCI system using the facial expression change patterns was significantly better than that using the "flash-only" patterns in terms of classification accuracy (p < 0.01), bit rate (p < 0.01), and practical bit rate (p < 0.01). Subjects reported that annoyance and fatigue were significantly decreased (p < 0.05) using the new stimulus presentation pattern presented in this paper.

Relevance:

100.00%

Publisher:

Abstract:

Background: Some studies have shown that a conventional visual brain–computer interface (BCI) based on overt attention cannot be used effectively when eye movement control is not possible. To solve this problem, a novel visual BCI system based on covert attention and feature attention, called the gaze-independent BCI, has been proposed. Color and shape differences between stimuli and backgrounds have generally been used in examples of gaze-independent BCIs. Recently, a new paradigm based on facial expression changes was presented and obtained high performance. However, some facial expressions were so similar that users could not tell them apart, especially when they were presented at the same position in a rapid serial visual presentation (RSVP) paradigm; consequently, the performance of the BCI is reduced. New Method: In this paper, we combined facial expressions and colors to optimize stimulus presentation in the gaze-independent BCI. This optimized paradigm was called the colored dummy face pattern. It is suggested that different colors and facial expressions could help users to locate the target and evoke larger event-related potentials (ERPs). In order to evaluate the performance of this new paradigm, two other paradigms were presented, called the gray dummy face pattern and the colored ball pattern. Comparison with Existing Method(s): The key question determining the value of the colored dummy face stimuli in BCI systems was whether they could obtain higher performance than gray face or colored ball stimuli. Ten healthy participants (seven male, aged 21–26 years, mean 24.5 ± 1.25) took part in our experiment. Online and offline results of the different paradigms were obtained and comparatively analyzed. Results: The results showed that the colored dummy face pattern evoked higher P300 and N400 ERP amplitudes than the gray dummy face pattern and the colored ball pattern. Online results showed that the colored dummy face pattern had a significant advantage in terms of classification accuracy (p < 0.05) and information transfer rate (p < 0.05) compared to the other two patterns. Conclusions: The stimuli used in the colored dummy face paradigm combined color and facial expressions, which gave a significant advantage in the evoked P300 and N400 amplitudes and resulted in high classification accuracies and information transfer rates compared with the colored ball and gray dummy face stimuli.
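P300/N400 amplitude contrasts like those above typically reduce to averaging epochs per condition and taking the mean amplitude of the grand-average ERP in a latency window. A minimal single-channel sketch (the window bounds and the synthetic data are illustrative assumptions, not this study's pipeline):

```python
import numpy as np

def mean_window_amplitude(epochs, fs, t_start, t_end, epoch_onset=0.0):
    """Average epochs (trials x samples), then take the mean amplitude
    in the [t_start, t_end] window (seconds relative to epoch start)."""
    erp = epochs.mean(axis=0)                 # grand-average ERP
    i0 = int((t_start - epoch_onset) * fs)
    i1 = int((t_end - epoch_onset) * fs)
    return erp[i0:i1].mean()

# Synthetic demo: 30 trials with a positive deflection around 300 ms.
fs = 250
t = np.arange(0, 0.8, 1 / fs)                # 0.8 s epochs
rng = np.random.default_rng(1)
p300 = 5.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))  # Gaussian bump
epochs = p300 + rng.standard_normal((30, t.size))
amp = mean_window_amplitude(epochs, fs, 0.25, 0.45)
print(f"mean P300-window amplitude: {float(amp):.2f}")
```

Window means computed this way per condition (e.g. colored dummy face vs. colored ball) are what feed the amplitude comparisons reported in such studies.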

Relevance:

100.00%

Publisher:

Abstract:

Facial expression recognition was investigated in 20 males with high functioning autism (HFA) or Asperger syndrome (AS), compared to typically developing individuals matched for chronological age (TD CA group) and verbal and non-verbal ability (TD V/NV group). This was the first study to employ a visual search, “face in the crowd” paradigm with a HFA/AS group, which explored responses to numerous facial expressions using real-face stimuli. Results showed slower response times for processing fear, anger and sad expressions in the HFA/AS group, relative to the TD CA group, but not the TD V/NV group. Responses to happy, disgust and surprise expressions showed no group differences. Results are discussed with reference to the amygdala theory of autism.

Relevance:

90.00%

Publisher:

Abstract:

To investigate the perception of emotional facial expressions, researchers rely on shared sets of photos or videos, most often generated by actor portrayals. The drawback of such standardized material is a lack of flexibility and controllability, as it does not allow the systematic parametric manipulation of specific features of facial expressions on the one hand, and of more general properties of the facial identity (age, ethnicity, gender) on the other. To remedy this problem, we developed FACSGen: a novel tool that allows the creation of realistic synthetic 3D facial stimuli, both static and dynamic, based on the Facial Action Coding System. FACSGen provides researchers with total control over facial action units, and corresponding informational cues in 3D synthetic faces. We present four studies validating both the software and the general methodology of systematically generating controlled facial expression patterns for stimulus presentation.

Relevance:

90.00%

Publisher:

Abstract:

Empathy is the lens through which we view others' emotion expressions, and respond to them. In this study, empathy and facial emotion recognition were investigated in adults with autism spectrum conditions (ASC; N=314), parents of a child with ASC (N=297) and IQ-matched controls (N=184). Participants completed a self-report measure of empathy (the Empathy Quotient [EQ]) and a modified version of the Karolinska Directed Emotional Faces Task (KDEF) using an online test interface. Results showed that mean scores on the EQ were significantly lower in fathers (p<0.05) but not mothers (p>0.05) of children with ASC compared to controls, whilst both males and females with ASC obtained significantly lower EQ scores (p<0.001) than controls. On the KDEF, statistical analyses revealed poorer overall performance by adults with ASC (p<0.001) compared to the control group. When the 6 distinct basic emotions were analysed separately, the ASC group showed impaired performance across five out of six expressions (happy, sad, angry, afraid and disgusted). Parents of a child with ASC were not significantly worse than controls at recognising any of the basic emotions, after controlling for age and non-verbal IQ (all p>0.05). Finally, results indicated significant differences between males and females with ASC for emotion recognition performance (p<0.05) but not for self-reported empathy (p>0.05). These findings suggest that self-reported empathy deficits in fathers of autistic probands are part of the 'broader autism phenotype'. This study also reports new findings of sex differences amongst people with ASC in emotion recognition, as well as replicating previous work demonstrating empathy difficulties in adults with ASC. The use of empathy measures as quantitative endophenotypes for ASC is discussed.

Relevance:

80.00%

Publisher:

Abstract:

This commentary raises general questions about the parsimony and generalizability of the SIMS model before interrogating the specific roles that the amygdala and eye contact play in it. Additionally, it situates the SIMS model alongside another model of facial expression processing, with a view to incorporating individual differences in emotion perception.

Relevance:

80.00%

Publisher:

Abstract:

Three experiments examined the cultural relativity of emotion recognition using the visual search task. Caucasian-English and Japanese participants were required to search for an angry or happy discrepant face target against an array of competing distractor faces. Both cultural groups performed the task with displays that consisted of Caucasian and Japanese faces in order to investigate the effects of racial congruence on emotion detection performance. Under high perceptual load conditions, both cultural groups detected the happy face more efficiently than the angry face. When perceptual load was reduced such that target detection could be achieved by feature-matching, the English group continued to show a happiness advantage in search performance that was more strongly pronounced for other race faces. Japanese participants showed search time equivalence for happy and angry targets. Experiment 3 encouraged participants to adopt a perceptual based strategy for target detection by removing the term 'emotion' from the instructions. Whilst this manipulation did not alter the happiness advantage displayed by our English group, it reinstated it for our Japanese group, who showed a detection advantage for happiness only for other race faces. The results demonstrate cultural and linguistic modifiers on the perceptual saliency of the emotional signal and provide new converging evidence from cognitive psychology for the interactionist perspective on emotional expression recognition.
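Detection efficiency in visual search tasks like these is conventionally summarized by the search slope: the least-squares slope of response time against display set size, in ms per item, where flatter slopes indicate more efficient search. A hedged sketch of that summary statistic (the set sizes and condition-mean RTs below are invented purely for illustration):

```python
import numpy as np

def search_slope_ms_per_item(set_sizes, rts_ms):
    """Least-squares slope (ms/item) and intercept (ms) of RT vs. set size."""
    slope, intercept = np.polyfit(set_sizes, rts_ms, deg=1)
    return slope, intercept

# Hypothetical condition means: a happy-face target found efficiently,
# an angry-face target with a steeper (less efficient) search function.
sizes = np.array([4, 8, 12, 16])
happy_rts = np.array([520.0, 546.0, 570.0, 596.0])   # shallow slope
angry_rts = np.array([540.0, 660.0, 780.0, 900.0])   # steep slope
print(f"happy: {search_slope_ms_per_item(sizes, happy_rts)[0]:.1f} ms/item")
print(f"angry: {search_slope_ms_per_item(sizes, angry_rts)[0]:.1f} ms/item")
```

Comparing such slopes across target emotion, race congruence, and perceptual load conditions is one common way the detection advantages described above are quantified.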

Relevance:

80.00%

Publisher:

Abstract:

The neuropeptide substance P and its receptor NK1 have been implicated in emotion, anxiety and stress in preclinical studies. However, the role of NK1 receptors in human brain function is less clear and there have been inconsistent reports of the value of NK1 receptor antagonists in the treatment of clinical depression. The present study therefore aimed to investigate effects of NK1 antagonism on the neural processing of emotional information in healthy volunteers. Twenty-four participants were randomized to receive a single dose of aprepitant (125 mg) or placebo. Approximately 4 h later, neural responses during facial expression processing and an emotional counting Stroop word task were assessed using fMRI. Mood and subjective experience were also measured using self-report scales. As expected a single dose of aprepitant did not affect mood and subjective state in the healthy volunteers. However, NK1 antagonism increased responses specifically during the presentation of happy facial expressions in both the rostral anterior cingulate and the right amygdala. In the emotional counting Stroop task the aprepitant group had increased activation in both the medial orbitofrontal cortex and the precuneus cortex to positive vs. neutral words. These results suggest consistent effects of NK1 antagonism on neural responses to positive affective information in two different paradigms. Such findings confirm animal studies which support a role for NK1 receptors in emotion. Such an approach may be useful in understanding the effects of novel drug treatments prior to full-scale clinical trials.

Relevance:

80.00%

Publisher:

Abstract:

When a visual stimulus is suppressed from awareness, processing of the suppressed image is necessarily reduced. Although adaptation to simple image properties such as orientation still occurs, adaptation to high-level properties such as face identity is eliminated. Here we show that emotional facial expression continues to be processed even under complete suppression, as indexed by substantial facial expression aftereffects.

Relevance:

80.00%

Publisher:

Abstract:

A wealth of literature suggests that emotional faces are given special status as visual objects: Cognitive models suggest that emotional stimuli, particularly threat-relevant facial expressions such as fear and anger, are prioritized in visual processing and may be identified by a subcortical “quick and dirty” pathway in the absence of awareness (Tamietto & de Gelder, 2010). Both neuroimaging studies (Williams, Morris, McGlone, Abbott, & Mattingley, 2004) and backward masking studies (Whalen, Rauch, Etcoff, McInerney, & Lee, 1998) have supported the notion of emotion processing without awareness. Recently, our own group (Adams, Gray, Garner, & Graf, 2010) showed adaptation to emotional faces that were rendered invisible using a variant of binocular rivalry: continuous flash suppression (CFS, Tsuchiya & Koch, 2005). Here we (i) respond to Yang, Hong, and Blake's (2010) criticisms of our adaptation paper and (ii) provide a unified account of adaptation to facial expression, identity, and gender under conditions of unawareness.