Abstract:
Spontaneous mimicry is a marker of empathy. Conditions characterized by reduced spontaneous mimicry (e.g., autism) also display deficits in sensitivity to social rewards. We tested if spontaneous mimicry of socially rewarding stimuli (happy faces) depends on the reward value of stimuli in 32 typical participants. An evaluative conditioning paradigm was used to associate different reward values with neutral target faces. Subsequently, electromyographic activity over the Zygomaticus Major was measured whilst participants watched video clips of the faces making happy expressions. Higher Zygomaticus Major activity was found in response to happy faces conditioned with high reward versus low reward. Moreover, autistic traits in the general population modulated the extent of spontaneous mimicry of happy faces. This suggests a link between reward and spontaneous mimicry and provides a possible underlying mechanism for the reduced response to social rewards seen in autism.
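As a rough illustration of how such zygomaticus activity is typically summarized (a sketch, not the authors' actual pipeline; function and variable names are hypothetical), per-trial EMG can be expressed as percent change of mean rectified amplitude relative to a pre-stimulus baseline:

```python
import numpy as np

def emg_response_percent(trial, baseline):
    """Mean rectified EMG amplitude during the stimulus window,
    expressed as percent change from the pre-stimulus baseline."""
    trial_amp = np.mean(np.abs(trial))
    base_amp = np.mean(np.abs(baseline))
    return (trial_amp - base_amp) / base_amp * 100.0

# Example: rectified amplitude rising from 2.0 to 3.0 (arbitrary units) -> +50%
print(emg_response_percent(np.array([3.0, -3.0]), np.array([2.0, -2.0])))
```

Higher values for high-reward faces than for low-reward faces would correspond to the mimicry effect the abstract reports.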
Abstract:
In this article, we present FACSGen 2.0, new animation software for creating static and dynamic three-dimensional facial expressions on the basis of the Facial Action Coding System (FACS). FACSGen permits total control over the action units (AUs), which can be animated at all levels of intensity and applied alone or in combination to an infinite number of faces. In two studies, we tested the validity of the software for the AU appearance defined in the FACS manual and the conveyed emotionality of FACSGen expressions. In Experiment 1, four FACS-certified coders evaluated the complete set of 35 single AUs and 54 AU combinations for AU presence or absence, appearance quality, intensity, and asymmetry. In Experiment 2, lay participants performed a recognition task on emotional expressions created with FACSGen software and rated the similarity of expressions displayed by human and FACSGen faces. Results showed good to excellent classification levels for all AUs by the four FACS coders, suggesting that the AUs are valid exemplars of FACS specifications. Lay participants' recognition rates for nine emotions were high, and human and FACSGen expressions were rated as very similar. The findings demonstrate the effectiveness of the software in producing reliable and emotionally valid expressions, and suggest its application in numerous scientific areas, including perception, emotion, and clinical and neuroscience research.
Abstract:
The human mirror neuron system (hMNS) has been associated with various forms of social cognition and affective processing including vicarious experience. It has also been proposed that a faulty hMNS may underlie some of the deficits seen in the autism spectrum disorders (ASDs). In the present study we set out to investigate whether emotional facial expressions could modulate a putative EEG index of hMNS activation (mu suppression) and if so, would this differ according to the individual level of autistic traits [high versus low Autism Spectrum Quotient (AQ) score]. Participants were presented with 3 s films of actors opening and closing their hands (classic hMNS mu-suppression protocol) while simultaneously wearing happy, angry, or neutral expressions. Mu-suppression was measured in the alpha and low beta bands. The low AQ group displayed greater low beta event-related desynchronization (ERD) to both angry and neutral expressions. The high AQ group displayed greater low beta ERD to angry than to happy expressions. There was also significantly more low beta ERD to happy faces for the low than for the high AQ group. In conclusion, an interesting interaction between AQ group and emotional expression revealed that hMNS activation can be modulated by emotional facial expressions and that this is differentiated according to individual differences in the level of autistic traits. The EEG index of hMNS activation (mu suppression) seems to be a sensitive measure of the variability in facial processing in typically developing individuals with high and low self-reported traits of autism.
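Mu suppression is conventionally quantified as event-related desynchronization (ERD): the percent change in band power during the stimulus relative to a baseline interval, with negative values indicating suppression. A minimal sketch of that convention (the exact baseline windows and frequency bands are study-specific assumptions):

```python
def erd_percent(baseline_power, task_power):
    """Percent band-power change relative to baseline; negative values
    indicate desynchronization (i.e., mu suppression)."""
    return (task_power - baseline_power) / baseline_power * 100.0

# Example: alpha-band power falling from 4.0 to 3.0 uV^2 gives -25% (ERD)
print(erd_percent(4.0, 3.0))
```

Larger (more negative) ERD in the low beta band is what the abstract describes for the low-AQ group.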
Abstract:
This study demonstrates that making a standardized pain face increases negative affect in response to nociceptive stimulation, even in the absence of social feedback. This suggests that exaggerated facial displays of pain, although often socially reinforced, may also have unintended aversive consequences.
Abstract:
Introduction: Observations of behaviour and research using eye-tracking technology have shown that individuals with Williams syndrome (WS) pay an unusual amount of attention to other people’s faces. The present research examines whether this attention to faces is moderated by the valence of emotional expression. Method: Sixteen participants with WS aged between 13 and 29 years (Mean=19 years 9 months) completed a dot-probe task in which pairs of faces displaying happy, angry and neutral expressions were presented. The performance of the WS group was compared to two groups of typically developing control participants, individually matched to the participants in the WS group on either chronological age or mental age. General mental age was assessed in the WS group using the Woodcock Johnson Test of Cognitive Ability Revised (WJ-COG-R; Woodcock & Johnson, 1989; 1990). Results: Compared to both control groups, the WS group exhibited a greater attention bias for happy faces. In contrast, no between-group differences in bias for angry faces were obtained. Conclusions: The results are discussed in relation to recent neuroimaging findings and the hypersocial behaviour that is characteristic of the WS population.
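Dot-probe attention bias is typically scored as the mean reaction time when the probe replaces the neutral face minus the mean reaction time when it replaces the emotional face; positive scores indicate a bias toward the emotion. A sketch under that standard scoring convention (names are hypothetical, not the authors' code):

```python
from statistics import mean

def bias_score(rt_probe_at_neutral_ms, rt_probe_at_emotional_ms):
    """Positive score = faster responses when the probe appears at the
    emotional face's location, i.e., an attention bias toward that emotion."""
    return mean(rt_probe_at_neutral_ms) - mean(rt_probe_at_emotional_ms)

# Example: probes at happy-face locations answered 20 ms faster on average
print(bias_score([520, 540, 530], [510, 515, 505]))
```

A greater happy-face bias in the WS group than in either control group is the pattern the abstract reports.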
Abstract:
Objective. Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon is commonly seen in visual attention-based brain–computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or presented images) adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design a new pattern which could reduce adjacent interference, annoyance and fatigue, and evoke ERPs as good as those observed during the face pattern. Approach. Positive facial expressions can be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, a facial expression change pattern between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions: a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user-supplied subjective measures. Main results. The results showed that interferences from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be reduced significantly (p < 0.05) by using the facial expression change patterns in comparison with the face pattern. The offline results show that the classification accuracy of the facial expression change pattern was significantly better than that of the shuffled pattern (p < 0.05) and the face pattern (p < 0.05). Significance. The facial expression change pattern presented in this paper reduced interference from adjacent stimuli and decreased the fatigue and annoyance experienced by BCI users significantly (p < 0.05) compared to the face pattern.
Abstract:
Interferences from spatially adjacent non-target stimuli evoke ERPs during non-target sub-trials and lead to false positives. This phenomenon is commonly seen in visual attention-based BCIs and degrades the performance of the BCI system. Although users tried to focus on the target stimulus, they still could not help being affected by conspicuous changes of the stimuli (flashes or presented images) adjacent to the target stimulus. In view of this, the aim of this study was to reduce the adjacent interference using a new stimulus presentation pattern based on facial expression changes. Positive facial expressions can be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, two different conditions (Pattern_1, Pattern_2) were compared across objective measures such as classification accuracy and information transfer rate, as well as subjective measures. Pattern_1 was a "flash-only" pattern and Pattern_2 was a facial expression change of a dummy face. In the facial expression change patterns, the background is a positive facial expression and the stimulus is a negative facial expression. The results showed that the interference from adjacent stimuli could be reduced significantly (p < 0.05) by using the facial expression change patterns. The online performance of the BCI system using the facial expression change patterns was significantly better than that using the "flash-only" patterns in terms of classification accuracy (p < 0.01), bit rate (p < 0.01), and practical bit rate (p < 0.01). Subjects reported that annoyance and fatigue were significantly decreased (p < 0.05) using the new stimulus presentation pattern presented in this paper.
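The information transfer rate used in such BCI comparisons is conventionally the Wolpaw bitrate: bits per selection computed from the number of classes and the classification accuracy, scaled by selection speed. A self-contained sketch of that standard formula (the class count and timing below are illustrative, not values from the paper):

```python
import math

def bits_per_selection(n_classes, accuracy):
    """Wolpaw formula: B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))."""
    if accuracy <= 1.0 / n_classes:
        return 0.0  # at or below chance carries no information
    if accuracy >= 1.0:
        return math.log2(n_classes)
    p = accuracy
    return (math.log2(n_classes) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n_classes - 1)))

def itr_bits_per_minute(n_classes, accuracy, seconds_per_selection):
    return bits_per_selection(n_classes, accuracy) * 60.0 / seconds_per_selection

# Perfect accuracy on a 2-class selection, one selection every 4 s -> 15 bits/min
print(itr_bits_per_minute(2, 1.0, 4.0))
```

"Practical bit rate" variants additionally penalize the time cost of correcting errors, which is why the abstract reports it separately.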
Abstract:
Joint attention (JA) and spontaneous facial mimicry (SFM) are fundamental processes in social interactions, and they are closely related to empathic abilities. When tested independently, both of these processes have usually been observed to be atypical in individuals with autism spectrum conditions (ASC). However, it is not known how these processes interact with each other in relation to autistic traits. This study addresses this question by testing the impact of JA on SFM of happy faces using a truly interactive paradigm. Sixty-two neurotypical participants engaged in gaze-based social interaction with an anthropomorphic, gaze-contingent virtual agent. The agent either established JA by initiating eye contact or looked away, before looking at an object and expressing happiness or disgust. Eye tracking was used to make the agent's gaze behavior and facial actions contingent on the participants' gaze. SFM of happy expressions was measured by electromyography (EMG) recording over the Zygomaticus Major muscle. Results showed that JA augments SFM in individuals with low compared with high autistic traits. These findings are in line with reports of reduced impact of JA on action imitation in individuals with ASC. Moreover, they suggest that investigating atypical interactions between empathic processes, instead of testing these processes individually, might be crucial to understanding the nature of social deficits in autism.
Abstract:
Background: Some studies have proven that a conventional visual brain computer interface (BCI) based on overt attention cannot be used effectively when eye movement control is not possible. To solve this problem, a novel visual-based BCI system based on covert attention and feature attention has been proposed and was called the gaze-independent BCI. Color and shape difference between stimuli and backgrounds have generally been used in examples of gaze-independent BCIs. Recently, a new paradigm based on facial expression changes has been presented, and obtained high performance. However, some facial expressions were so similar that users couldn't tell them apart, especially when they were presented at the same position in a rapid serial visual presentation (RSVP) paradigm. Consequently, the performance of the BCI is reduced. New Method: In this paper, we combined facial expressions and colors to optimize the stimuli presentation in the gaze-independent BCI. This optimized paradigm was called the colored dummy face pattern. It is suggested that different colors and facial expressions could help users to locate the target and evoke larger event-related potentials (ERPs). In order to evaluate the performance of this new paradigm, two other paradigms were presented, called the gray dummy face pattern and the colored ball pattern. Comparison with Existing Method(s): The key point that determined the value of the colored dummy faces stimuli in BCI systems was whether the dummy face stimuli could obtain higher performance than gray faces or colored balls stimuli. Ten healthy participants (seven male, aged 21–26 years, mean 24.5 ± 1.25) participated in our experiment. Online and offline results of four different paradigms were obtained and comparatively analyzed. Results: The results showed that the colored dummy face pattern could evoke higher P300 and N400 ERP amplitudes, compared with the gray dummy face pattern and the colored ball pattern. 
Online results showed that the colored dummy face pattern had a significant advantage in terms of classification accuracy (p < 0.05) and information transfer rate (p < 0.05) compared to the other two patterns. Conclusions: The stimuli used in the colored dummy face paradigm combined color and facial expressions, which yielded a significant advantage in evoked P300 and N400 amplitudes and resulted in higher classification accuracies and information transfer rates than the colored ball and gray dummy face stimuli.
Abstract:
The etiology of idiopathic peripheral facial palsy (IPFP) is still uncertain; however, some authors suggest the possibility of a viral infection. Aim: to analyze the ultrastructure of the facial nerve, seeking viral evidence that might provide etiological data. Material and Methods: We studied 20 patients with peripheral facial palsy (PFP), with moderate to severe FP, of both genders, between 18 and 60 years of age, from the Clinic of Facial Nerve Disorders. The patients were divided into two groups - Study: eleven patients with IPFP; and Control: nine patients with trauma- or tumor-related PFP. The fragments were obtained from the facial nerve sheath or from fragments of its stumps - material which would otherwise be discarded or sent for pathology exam during facial nerve repair surgery. The removed tissue was fixed in 2% glutaraldehyde and studied under transmission electron microscopy. Results: In the study group we observed intense cellular repair activity, with increased collagen fibers and fibroblasts containing developed organelles, free of viral particles. In the control group this repair activity was not evident, and no viral particles were observed. Conclusion: No viral particles were found, and there was evidence of intense repair activity but not of viral infection.
Abstract:
Cavernous sinus thrombosis is a severe encephalic complication of cervicofacial infections that can lead to death if not treated promptly. Among the several etiologies related to the development of this infection, myiasis has not been reported, reinforcing the importance of this report of a case of cavernous sinus thrombosis that developed from a facial myiasis. (Quintessence Int 2010;41:e72-e74)
Abstract:
Here we report on the clinical and genetic data for a large sample of Brazilian patients studied at the Hospital de Reabilitação de Anomalias Craniofaciais-Universidade de São Paulo (HRAC-USP) who presented with either the classic holoprosencephaly or the holoprosencephaly-like (HPE-L) phenotype. The sample included patients without detected mutations in some HPE determinant genes such as SHH, GLI2, SIX3, TGIF, and PTCH, as well as the photographic documentation of the previously reported patients in our Center. The HPE-L phenotype has also been called HPE "minor forms" or "microforms." The variable phenotype, the challenge of genetic counseling, and the similarities to patients with isolated cleft lip/palate are discussed. (c) 2010 Wiley-Liss, Inc.
Abstract:
Genetic mutations responsible for oblique facial clefts (ObFC), a unique class of facial malformations, are largely unknown. We show that loss-of-function mutations in SPECC1L are pathogenic for this human developmental disorder and that SPECC1L is a critical organizer of vertebrate facial morphogenesis. During murine embryogenesis, Specc1l is expressed in cell populations of the developing facial primordia, which proliferate and fuse to form the face. In zebrafish, knockdown of a SPECC1L homolog produces a faceless phenotype with loss of jaw and facial structures, and knockdown in Drosophila phenocopies mutants in the integrin signaling pathway that exhibit cell-migration and -adhesion defects. Furthermore, in mammalian cells, SPECC1L colocalizes with both tubulin and actin, and its deficiency results in defective actin-cytoskeleton reorganization, as well as abnormal cell adhesion and migration. Collectively, these data demonstrate that SPECC1L functions in actin-cytoskeleton reorganization and is required for proper facial morphogenesis.
Abstract:
Conclusion. Hyperbaric oxygen treatment (HBOT) promoted an increase of the mean axonal diameter in the group evaluated 2 weeks after lesion induction, which suggests a more advanced regeneration process. However, the number of myelinated nerve fibers of the facial nerve of the rabbits was similar when comparing the control and treatment groups, in both evaluation periods. Objective. To evaluate the effect of HBOT on the histological pattern of the facial nerve in rabbits exposed to a nerve crush injury. Materials and methods. Twenty rabbits were exposed to facial nerve crush injury. Ten rabbits received HBOT; 10 rabbits comprised the control group. The rabbits were sacrificed 2 and 4 weeks after the trauma. Qualitative morphological analysis, measurement of the external axonal diameters and myelinated fiber counts were carried out in an area of 185,000 μm². Results. There was an increase in the area of the axons and thicker myelin in the 2-week treatment group in comparison with the control group. The mean diameter of the axons was 2.34 μm in the control group and 2.81 μm in the HBOT group, a statistically significant difference. The 2-week control group had a mean number of myelinated fibers of 186 ± 5.2664, and the HBOT group had a mean number of 2026.3 ± 302; this was not statistically significant. The 4-week control group presented a mean of 2495.1 ± 479 fibers and the HBOT group presented a mean of 2359.9 ± 473; this was not statistically significant.