927 results for Facial Muscles


Relevance:

20.00%

Publisher:

Abstract:

Theory of mind ability has been associated with performance in interpersonal interactions and has been found to influence aspects such as emotion recognition, social competence, and social anxiety. Being able to attribute mental states to others requires attention to subtle communication cues such as facial emotional expressions. Decoding and interpreting emotions expressed by the face, especially those with negative valence, are essential skills for successful social interaction. The current study explored the association between theory of mind skills and attentional bias to facial emotional expressions. Consistent with the study hypothesis, individuals with poor theory of mind skills showed preferential attention to negative faces over both non-negative faces and neutral objects. Tentative explanations for the findings are offered, emphasizing the potential adaptive role of vigilance for threat: a limited capacity for interpreting others' mental states may be allocated so as to obtain as much information as possible about potential danger in the social environment.

Relevance:

20.00%

Publisher:

Abstract:

Postnatal maternal depression is associated with difficulties in maternal responsiveness. As most signals arising from the infant come from facial expressions, one possible explanation for these difficulties is that mothers with postnatal depression are differentially affected by particular infant facial expressions. This study therefore investigates the effects of postnatal depression on mothers' perceptions of infant facial expressions. Participants (15 controls, 15 depressed and 15 anxious mothers) were asked to rate a number of infant facial expressions, ranging from very positive to very negative. Each face was shown twice, for a short and for a longer period of time, in random order. Results revealed that mothers gave more extreme ratings (i.e. more negative or more positive) when the infant faces were shown for the longer period of time. Mothers suffering from postnatal depression were more likely than controls to rate negative infant faces shown for the longer period more negatively. The differences were specific to depression rather than an effect of general postnatal psychopathology, as no differences were observed between anxious mothers and controls. There were no other significant differences in maternal ratings of infant faces shown for short periods, or of positive or neutral faces at either presentation length. The finding that mothers with postnatal depression rate negative infant faces more negatively indicates that an appraisal bias might underlie some of the difficulties these mothers have in responding to their own infants' signals.

Relevance:

20.00%

Publisher:

Abstract:

To investigate the perception of emotional facial expressions, researchers rely on shared sets of photos or videos, most often generated by actor portrayals. The drawback of such standardized material is a lack of flexibility and controllability, as it does not allow the systematic parametric manipulation of specific features of facial expressions on the one hand, and of more general properties of the facial identity (age, ethnicity, gender) on the other. To remedy this problem, we developed FACSGen: a novel tool that allows the creation of realistic synthetic 3D facial stimuli, both static and dynamic, based on the Facial Action Coding System. FACSGen provides researchers with total control over facial action units, and corresponding informational cues in 3D synthetic faces. We present four studies validating both the software and the general methodology of systematically generating controlled facial expression patterns for stimulus presentation.
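
Since FACSGen's actual interface is not described in the abstract, the following is only a generic sketch of what parametric, AU-level control of an expression can look like: an expression is treated as a set of FACS action units with intensities, and a dynamic stimulus is produced by interpolating between two such sets. The `Expression` class and the AU-to-emotion mapping are illustrative assumptions, not FACSGen code.

```python
# Generic illustration of AU-based parametric control (not the FACSGen API).
from dataclasses import dataclass, field


@dataclass
class Expression:
    """A facial expression as FACS action units mapped to intensities in [0, 1]."""
    aus: dict[int, float] = field(default_factory=dict)

    def blend(self, other: "Expression", weight: float) -> "Expression":
        """Linearly interpolate toward another expression (for dynamic stimuli)."""
        keys = set(self.aus) | set(other.aus)
        return Expression({
            au: (1 - weight) * self.aus.get(au, 0.0) + weight * other.aus.get(au, 0.0)
            for au in keys
        })


# A textbook happiness configuration: AU6 (cheek raiser) + AU12 (lip corner puller).
neutral = Expression()
happy = Expression({6: 0.8, 12: 1.0})

# Onset ramp for a dynamic stimulus: neutral -> happy over 10 frames.
frames = [neutral.blend(happy, t / 9) for t in range(10)]
```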

Relevance:

20.00%

Publisher:

Abstract:

The recent celebrations of the centenary of the publication of the Futurist manifesto led to a renewed discussion of the ideas and artworks of the Italian artists' group. Jacques Rancière related the Futurist ethos to the modernist project of liberating art from representation. Franco ‘Bifo’ Berardi, in his post-Futurist manifesto, also identified a historical irony at play in the emptying out of Futurism's promise: a liberated mechanical humanity did indeed materialize, in a global economic system premised on financial servitude to the future via debt. However, these models continue to assess Futurism against an unchallenged humanism, finding it either supporting ideals of freedom and human rights despite itself, or else lacking in these areas. But Futurism is potentially more relevant than ever not in spite of its anti-humanist agenda but precisely because of it. Tom McCarthy annexes not Futurist art but Futurist writing to an emerging object-oriented ontology that seeks to challenge the primacy of the human. If Futurism is to be repurposed as a critical concept, it can do so only by countering the humanist myth of the liberal subject that underlies the current cultural and political hegemony of neo-liberalism.

Relevance:

20.00%

Publisher:

Abstract:

Spontaneous mimicry is a marker of empathy. Conditions characterized by reduced spontaneous mimicry (e.g., autism) also display deficits in sensitivity to social rewards. We tested if spontaneous mimicry of socially rewarding stimuli (happy faces) depends on the reward value of stimuli in 32 typical participants. An evaluative conditioning paradigm was used to associate different reward values with neutral target faces. Subsequently, electromyographic activity over the Zygomaticus Major was measured whilst participants watched video clips of the faces making happy expressions. Higher Zygomaticus Major activity was found in response to happy faces conditioned with high reward versus low reward. Moreover, autistic traits in the general population modulated the extent of spontaneous mimicry of happy faces. This suggests a link between reward and spontaneous mimicry and provides a possible underlying mechanism for the reduced response to social rewards seen in autism.
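
The abstract does not give the EMG preprocessing details, but a minimal sketch of the kind of baseline-corrected Zygomaticus Major response score such designs typically use is shown below; the window lengths, the rectification step, and the variable names are assumptions made for illustration.

```python
import numpy as np


def mimicry_score(emg: np.ndarray, fs: float,
                  baseline_s: float = 1.0, response_s: float = 3.0) -> float:
    """Baseline-corrected Zygomaticus Major response for one trial.

    emg: rectified (absolute-value) EMG trace, with the baseline period first.
    fs:  sampling rate in Hz.
    Returns mean amplitude during the response window minus mean baseline amplitude.
    """
    n_base = int(baseline_s * fs)
    n_resp = int(response_s * fs)
    baseline = emg[:n_base].mean()
    response = emg[n_base:n_base + n_resp].mean()
    return float(response - baseline)


# Per-condition scores would then be compared across trials, e.g.:
# scores_high = [mimicry_score(trial, fs=1000.0) for trial in high_reward_trials]
# scores_low  = [mimicry_score(trial, fs=1000.0) for trial in low_reward_trials]
```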

Relevance:

20.00%

Publisher:

Abstract:

In this article, we present FACSGen 2.0, new animation software for creating static and dynamic three-dimensional facial expressions on the basis of the Facial Action Coding System (FACS). FACSGen permits total control over the action units (AUs), which can be animated at all levels of intensity and applied alone or in combination to an infinite number of faces. In two studies, we tested the validity of the software for the AU appearance defined in the FACS manual and the conveyed emotionality of FACSGen expressions. In Experiment 1, four FACS-certified coders evaluated the complete set of 35 single AUs and 54 AU combinations for AU presence or absence, appearance quality, intensity, and asymmetry. In Experiment 2, lay participants performed a recognition task on emotional expressions created with FACSGen software and rated the similarity of expressions displayed by human and FACSGen faces. Results showed good to excellent classification levels for all AUs by the four FACS coders, suggesting that the AUs are valid exemplars of FACS specifications. Lay participants' recognition rates for nine emotions were high, and human and FACSGen expressions were rated as very similar. The findings demonstrate the effectiveness of the software in producing reliable and emotionally valid expressions, and suggest its application in numerous scientific areas, including perception, emotion, and clinical and neuroscience research.

Relevance:

20.00%

Publisher:

Abstract:

The human mirror neuron system (hMNS) has been associated with various forms of social cognition and affective processing, including vicarious experience. It has also been proposed that a faulty hMNS may underlie some of the deficits seen in the autism spectrum disorders (ASDs). In the present study we set out to investigate whether emotional facial expressions could modulate a putative EEG index of hMNS activation (mu suppression) and, if so, whether this would differ according to the individual level of autistic traits [high versus low Autism Spectrum Quotient (AQ) score]. Participants were presented with 3 s films of actors who opened and closed their hands (the classic hMNS mu-suppression protocol) while wearing happy, angry, or neutral expressions. Mu suppression was measured in the alpha and low beta bands. The low AQ group displayed greater low beta event-related desynchronization (ERD) to both angry and neutral expressions. The high AQ group displayed greater low beta ERD to angry than to happy expressions. There was also significantly more low beta ERD to happy faces for the low than for the high AQ group. In conclusion, an interesting interaction between AQ group and emotional expression revealed that hMNS activation can be modulated by emotional facial expressions and that this modulation is differentiated according to individual differences in the level of autistic traits. The EEG index of hMNS activation (mu suppression) seems to be a sensitive measure of the variability in facial processing in typically developing individuals with high and low self-reported traits of autism.
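
Mu suppression is commonly quantified as a drop in band power during the observed action relative to a baseline period. The sketch below uses the log power ratio often reported in the mu-suppression literature (negative values indicate suppression); the exact bands, baseline, electrodes, and Welch parameters are illustrative assumptions, not the study's settings.

```python
import numpy as np
from scipy.signal import welch


def band_power(x: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Mean power spectral density of x within [lo, hi] Hz (Welch estimate)."""
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), int(2 * fs)))
    mask = (freqs >= lo) & (freqs <= hi)
    return float(psd[mask].mean())


def suppression_index(event: np.ndarray, baseline: np.ndarray, fs: float,
                      lo: float = 8.0, hi: float = 13.0) -> float:
    """Log ratio of band power during the event epoch vs. a baseline epoch.

    Negative values indicate suppression (event-related desynchronization).
    The 8-13 Hz default covers the alpha/mu range; a low beta band
    (e.g. 13-20 Hz) could be passed instead.
    """
    return float(np.log(band_power(event, fs, lo, hi) /
                        band_power(baseline, fs, lo, hi)))
```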

Relevance:

20.00%

Publisher:

Abstract:

Duchenne muscular dystrophy is a severe X-linked inherited muscle wasting disorder caused by mutations in the dystrophin gene. Adeno-associated virus (AAV) vectors have been used extensively to deliver genes efficiently for dystrophin expression in skeletal muscles. To overcome the limited packaging capacity of AAV vectors (<5 kb), truncated recombinant microdystrophin genes with deletions of most of the rod and carboxyl-terminal (CT) domains of dystrophin have been developed. We have previously shown the efficiency of an mRNA sequence-optimized microdystrophin (ΔR4-23/ΔCT, called MD1), with deletion of spectrin-like repeat domains 4 to 23 and the CT domain, in ameliorating the pathology of dystrophic mdx mice. However, the CT domain of dystrophin is thought to recruit part of the dystrophin-associated protein complex, which acts as a mediator of signalling between the extracellular matrix and the cytoskeleton in muscle fibers. In this study, we extended the ΔR4-23/ΔCT microdystrophin by incorporating helix 1 of the coiled-coil motif in the CT domain of dystrophin (MD2), which contains the α1-syntrophin and α-dystrobrevin binding sites. Intramuscular injection of AAV2/9 expressing the CT domain-extended microdystrophin showed efficient dystrophin expression in tibialis anterior muscles of mdx mice. The presence of the CT domain in MD2 increased the recruitment of α1-syntrophin and α-dystrobrevin at the sarcolemma and significantly improved muscle resistance to lengthening contraction-induced damage in mdx mice compared with MD1. These results suggest that incorporating helix 1 of the coiled-coil motif of the CT domain of dystrophin into microdystrophins will substantially improve their efficiency in restoring muscle function in patients with Duchenne muscular dystrophy.

Relevance:

20.00%

Publisher:

Abstract:

This study demonstrates that making a standardized pain face increases negative affect in response to nociceptive stimulation, even in the absence of social feedback. This suggests that exaggerated facial displays of pain, although often socially reinforced, may also have unintended aversive consequences.

Relevance:

20.00%

Publisher:

Abstract:

Introduction: Observations of behaviour and research using eye-tracking technology have shown that individuals with Williams syndrome (WS) pay an unusual amount of attention to other people’s faces. The present research examines whether this attention to faces is moderated by the valence of emotional expression. Method: Sixteen participants with WS aged between 13 and 29 years (Mean=19 years 9 months) completed a dot-probe task in which pairs of faces displaying happy, angry and neutral expressions were presented. The performance of the WS group was compared to two groups of typically developing control participants, individually matched to the participants in the WS group on either chronological age or mental age. General mental age was assessed in the WS group using the Woodcock Johnson Test of Cognitive Ability Revised (WJ-COG-R; Woodcock & Johnson, 1989; 1990). Results: Compared to both control groups, the WS group exhibited a greater attention bias for happy faces. In contrast, no between-group differences in bias for angry faces were obtained. Conclusions: The results are discussed in relation to recent neuroimaging findings and the hypersocial behaviour that is characteristic of the WS population.
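
In a dot-probe task, an attentional bias score is typically computed by comparing reaction times when the probe replaces the emotional face with reaction times when it replaces the neutral face; a positive score indicates attention toward the emotional face. The sketch below shows this standard index with an illustrative trial structure (the field names are assumptions, not the study's data format).

```python
from statistics import mean


def attention_bias(trials: list[dict]) -> float:
    """Dot-probe bias score in milliseconds.

    Mean RT when the probe appears at the neutral face's location minus
    mean RT when it appears at the emotional face's location; positive
    values indicate attention toward the emotional (e.g. happy or angry) face.
    """
    rt_neutral_loc = [t["rt_ms"] for t in trials
                      if t["correct"] and t["probe_at"] == "neutral_face"]
    rt_emotion_loc = [t["rt_ms"] for t in trials
                      if t["correct"] and t["probe_at"] == "emotional_face"]
    return mean(rt_neutral_loc) - mean(rt_emotion_loc)
```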

Relevance:

20.00%

Publisher:

Abstract:

Objective. Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon is commonly seen in visual attention-based brain–computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or presented images) adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design a new pattern which could reduce adjacent interference, annoyance and fatigue, and evoke ERPs as good as those observed during the face pattern. Approach. Positive facial expressions could be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, a facial expression change pattern between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions: a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user-supplied subjective measures. Main results. The results showed that interferences from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be reduced significantly (p < 0.05) by using the facial expression change patterns in comparison with the face pattern. The offline results show that the classification accuracy of the facial expression change pattern was significantly better than that of the shuffled pattern (p < 0.05) and the face pattern (p < 0.05). Significance. The facial expression change pattern presented in this paper reduced interference from adjacent stimuli and decreased the fatigue and annoyance experienced by BCI users significantly (p < 0.05) compared to the face pattern.
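
The abstract does not state how information transfer rate was computed; a common choice in the BCI literature is Wolpaw's formula, sketched below, where N is the number of selectable targets and P the classification accuracy. The 12-target, 90%-accuracy example is purely illustrative.

```python
import math


def wolpaw_itr(accuracy: float, n_targets: int, selections_per_min: float) -> float:
    """Information transfer rate in bits/min using Wolpaw's formula.

    Bits per selection: B = log2(N) + P*log2(P) + (1 - P)*log2((1 - P) / (N - 1)).
    """
    p, n = accuracy, n_targets
    if p <= 1.0 / n:
        return 0.0  # at or below chance level, report zero rather than negative bits
    bits = math.log2(n) + p * math.log2(p)
    if p < 1.0:
        bits += (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * selections_per_min


# Example: a 12-target speller at 90% accuracy and 4 selections per minute
# yields roughly 11.1 bits/min.
print(wolpaw_itr(0.90, 12, 4.0))
```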

Relevance:

20.00%

Publisher:

Abstract:

Interferences from spatially adjacent non-target stimuli evoke ERPs during non-target sub-trials and lead to false positives. This phenomenon is commonly seen in visual attention-based BCIs and affects the performance of the BCI system. Although users tried to focus on the target stimulus, they still could not help being affected by conspicuous changes of the stimuli (flashes or presented images) adjacent to the target stimulus. In view of this, the aim of this study was to reduce the adjacent interference using a new stimulus presentation pattern based on facial expression changes. Positive facial expressions can be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, two different conditions (Pattern_1, Pattern_2) were compared on objective measures, such as classification accuracy and information transfer rate, as well as subjective measures. Pattern_1 was a “flash-only” pattern and Pattern_2 was a facial expression change of a dummy face. In the facial expression change pattern, the background is a positive facial expression and the stimulus is a negative facial expression. The results showed that the interferences from adjacent stimuli could be reduced significantly (p < 0.05) by using the facial expression change pattern. The online performance of the BCI system using the facial expression change pattern was significantly better than that using the “flash-only” pattern in terms of classification accuracy (p < 0.01), bit rate (p < 0.01), and practical bit rate (p < 0.01). Subjects reported that annoyance and fatigue were significantly decreased (p < 0.05) using the new stimulus presentation pattern presented in this paper.

Relevance:

20.00%

Publisher:

Abstract:

Joint attention (JA) and spontaneous facial mimicry (SFM) are fundamental processes in social interactions, and they are closely related to empathic abilities. When tested independently, both of these processes have usually been observed to be atypical in individuals with autism spectrum conditions (ASC). However, it is not known how these processes interact with each other in relation to autistic traits. This study addresses this question by testing the impact of JA on SFM of happy faces using a truly interactive paradigm. Sixty-two neurotypical participants engaged in gaze-based social interaction with an anthropomorphic, gaze-contingent virtual agent. The agent either established JA by initiating eye contact or looked away, before looking at an object and expressing happiness or disgust. Eye tracking was used to make the agent's gaze behavior and facial actions contingent on the participants' gaze. SFM of happy expressions was measured by electromyography (EMG) recorded over the Zygomaticus Major muscle. Results showed that JA augments SFM in individuals with low compared with high autistic traits. These findings are in line with reports of a reduced impact of JA on action imitation in individuals with ASC. Moreover, they suggest that investigating atypical interactions between empathic processes, instead of testing these processes individually, might be crucial to understanding the nature of social deficits in autism.
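
The study's gaze-contingent control software is not described beyond the abstract, so the following is only a highly simplified sketch of the trial logic as stated there: wait for the participant to look at the agent, have the agent establish or withhold joint attention, then shift gaze to the object and express an emotion. The `agent` and `eyetracker` objects, their methods, and the timing constants are hypothetical placeholders, not the study's actual software.

```python
import time


def run_trial(agent, eyetracker, joint_attention: bool, emotion: str) -> None:
    """One gaze-contingent trial (hypothetical interfaces, for illustration only)."""
    # Wait until the participant fixates the agent's face.
    while not eyetracker.gaze_on("agent_face"):
        time.sleep(0.01)

    if joint_attention:
        agent.look_at("participant")   # initiate eye contact (establish JA)
    else:
        agent.look_at("away")          # no joint attention
    time.sleep(1.0)

    agent.look_at("object")            # shift gaze to the object
    agent.express(emotion)             # "happy" or "disgust"
    time.sleep(3.0)                    # window in which facial EMG would be recorded


# Conditions (JA vs. no JA, happy vs. disgust) would be crossed and randomized, e.g.:
# run_trial(agent, eyetracker, joint_attention=True, emotion="happy")
```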

Relevance:

20.00%

Publisher:

Abstract:

Background: Some studies have shown that a conventional visual brain-computer interface (BCI) based on overt attention cannot be used effectively when eye movement control is not possible. To solve this problem, a novel visual-based BCI system based on covert attention and feature attention has been proposed, called the gaze-independent BCI. Color and shape differences between stimuli and backgrounds have generally been used in examples of gaze-independent BCIs. Recently, a new paradigm based on facial expression changes was presented and obtained high performance. However, some facial expressions were so similar that users could not tell them apart, especially when they were presented at the same position in a rapid serial visual presentation (RSVP) paradigm. Consequently, the performance of the BCI was reduced. New Method: In this paper, we combined facial expressions and colors to optimize the stimulus presentation in the gaze-independent BCI. This optimized paradigm was called the colored dummy face pattern. It is suggested that different colors and facial expressions could help users to locate the target and evoke larger event-related potentials (ERPs). In order to evaluate the performance of this new paradigm, two other paradigms were presented, called the gray dummy face pattern and the colored ball pattern. Comparison with Existing Method(s): The key question in determining the value of the colored dummy face stimuli in BCI systems was whether they could obtain higher performance than the gray dummy face or colored ball stimuli. Ten healthy participants (seven male, aged 21–26 years, mean 24.5 ± 1.25) participated in our experiment. Online and offline results of the different paradigms were obtained and comparatively analyzed. Results: The results showed that the colored dummy face pattern evoked higher P300 and N400 ERP amplitudes compared with the gray dummy face pattern and the colored ball pattern. Online results showed that the colored dummy face pattern had a significant advantage in terms of classification accuracy (p < 0.05) and information transfer rate (p < 0.05) compared to the other two patterns. Conclusions: The stimuli used in the colored dummy face paradigm combined color and facial expressions. Compared with the colored ball and gray dummy face stimuli, this combination had a significant advantage in terms of the evoked P300 and N400 amplitudes and resulted in high classification accuracies and information transfer rates.
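
P300 and N400 amplitudes are typically quantified as the mean baseline-corrected ERP voltage within a fixed post-stimulus window. The sketch below illustrates this general approach; the window boundaries mentioned in the docstring are typical textbook values chosen for illustration, not the ones reported in the study.

```python
import numpy as np


def mean_amplitude(epoch: np.ndarray, fs: float, tmin: float,
                   window: tuple[float, float],
                   baseline: tuple[float, float] = (-0.2, 0.0)) -> float:
    """Baseline-corrected mean ERP amplitude in a time window.

    epoch:  single-channel voltage trace for one (averaged) epoch.
    fs:     sampling rate in Hz.
    tmin:   time of the first sample relative to stimulus onset in seconds (e.g. -0.2).
    window: measurement window in seconds, e.g. (0.25, 0.45) for a P300-like
            component or (0.35, 0.55) for an N400-like component (illustrative values).
    """
    def to_idx(t: float) -> int:
        return int(round((t - tmin) * fs))

    base = epoch[to_idx(baseline[0]):to_idx(baseline[1])].mean()
    return float(epoch[to_idx(window[0]):to_idx(window[1])].mean() - base)
```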