966 results for Facial expression recognition


Relevance:

90.00%

Publisher:

Abstract:

Serotonin has been implicated in the neurobiology of depressive and anxiety disorders, but little is known about its role in the modulation of basic emotional processing. The aim of this study was to determine the effect of the selective serotonin reuptake inhibitor escitalopram on the perception of facial emotional expressions. Twelve healthy male volunteers completed two experimental sessions each, in a randomized, balanced-order, double-blind design. A single oral dose of escitalopram (10 mg) or placebo was administered 3 h before the task. Participants were presented with a task composed of six basic emotions (anger, disgust, fear, happiness, sadness, and surprise) that were morphed between neutral and each standard emotion in 10% steps. Escitalopram facilitated the recognition of sadness and inhibited the recognition of happiness in male, but not female, faces. No drug effect on subjective measures was detected. These results confirm that serotonin modulates the recognition of emotional faces and suggest that the gender of the face can play a role in this modulation. Further studies including female volunteers are needed.
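The morph continua used here interpolate between a neutral face and the full-intensity expression in 10% increments. The sketch below is only a loose illustration of how such a continuum could be generated: true morphing software also warps facial geometry, whereas this hypothetical snippet simply cross-fades pixel intensities between two aligned grey-level images.

```python
import numpy as np

def morph_continuum(neutral, emotion, steps=10):
    """Cross-fade from a neutral face image to a full emotional
    expression in equal steps (10% increments by default).
    Note: published morphs also warp facial geometry; this linear
    blend is a simplification for illustration only."""
    levels = np.linspace(0.0, 1.0, steps + 1)
    return [(1.0 - a) * neutral + a * emotion for a in levels]

# Stand-in 128x128 grey-level images.
neutral = np.random.rand(128, 128)
happy = np.random.rand(128, 128)
continuum = morph_continuum(neutral, happy)  # 11 images: 0%, 10%, ..., 100%
```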

Relevance:

90.00%

Publisher:

Abstract:

Adults and children can discriminate various emotional expressions, although there is limited research on sensitivity to the differences between posed and genuine expressions. Adults have shown implicit sensitivity to the difference between posed and genuine happy smiles in that they evaluate T-shirts paired with genuine smiles more favorably than T-shirts paired with posed smiles or neutral expressions (Peace, Miles, & Johnston, 2006). Adults also have shown some explicit sensitivity to posed versus genuine expressions; they are more likely to say that a model is feeling happy if the expression is genuine than posed. Nonetheless, they are duped by posed expressions about 50% of the time (Miles & Johnston, in press). There has been no published study to date in which researchers report whether children's evaluation of items varies with expression, and there is little research investigating children's sensitivity to the veracity of facial expressions. In the present study the same face stimuli were used as in two previous studies (Miles & Johnston, in press; Peace et al., 2006). The first question to be addressed was whether adults and 7-year-olds have a cognitive understanding of the differences between posed and genuine happiness (scenario task). They evaluated the feelings of children who expressed gratitude for a present that they did or did not want. Results indicated that all participants had a fundamental understanding of the difference between real and posed happiness. The second question involved adults' and children's implicit sensitivity to the veracity of posed and genuine smiles. Participants rated and ranked beach balls paired with faces showing posed smiles, genuine smiles, and neutral expressions. Adults ranked, but did not rate, beach balls paired with genuine smiles more favorably than beach balls paired with posed smiles. Children did not demonstrate implicit sensitivity, as their ratings and rankings of beach balls did not vary with expression; they did not even rank beach balls paired with genuine expressions higher than beach balls paired with neutral expressions. In the explicit (show/feel) task, faces were presented without the beach balls and participants were first asked whether each face was showing happy and then whether each face was feeling happy. There were also two matching trials that presented two faces at once; participants had to indicate which person was actually feeling happy. In the show condition both adults and 7-year-olds were very accurate on genuine and neutral expressions but made some errors on posed smiles. Adults were fooled about 50% of the time by posed smiles in the feel condition (i.e., they were likely to say that a model posing happy was really feeling happy) and children were even less accurate, although they showed weak sensitivity to posed versus genuine expressions. Future research should test an older age group of children to determine when explicit sensitivity to posed versus genuine facial expressions becomes adult-like, and should modify the ranking task to explore the influence of facial expressions on object evaluations.

Relevance:

90.00%

Publisher:

Abstract:

The present set of experiments was designed to investigate the development of children's sensitivity to facial expressions observed within emotional contexts. Past research investigating both adults' and children's perception of facial expressions has been limited primarily to the presentation of isolated faces. During daily social interactions, however, facial expressions are encountered within contexts conveying emotions (e.g., background scenes, body postures, gestures). Recently, research has shown that adults' perception of facial expressions is influenced by these contexts. When emotional faces are shown in incongruent contexts (e.g., when an angry face is presented in a context depicting fear) adults' accuracy decreases and their reaction times increase (e.g., Meeren et al., 2005). To examine the influence of emotional body postures on children's perception of facial expressions, in each of the experiments in the current study adults and 8-year-old children made two-alternative forced-choice decisions about facial expressions presented in congruent (e.g., a face displaying sadness on a body displaying sadness) and incongruent (e.g., a face displaying fear on a body displaying sadness) contexts. Consistent with previous studies, a congruency effect (better performance on congruent than incongruent trials) was found for both adults and 8-year-olds when the emotions displayed by the face and body were similar to each other (e.g., fear and sadness, Experiment 1a); the influence of context was greater for 8-year-olds than adults for these similar expressions. To further investigate why the congruency effect was larger for children than adults in Experiment 1a, Experiment 1b was conducted to examine whether increased task difficulty would increase the magnitude of adults' congruency effects. Adults were presented with subtle facial expressions, and despite successfully increasing task difficulty, the magnitude of the congruency effect did not increase, suggesting that the difference between children's and adults' congruency effects in Experiment 1a cannot be explained by 8-year-olds finding the task difficult. In contrast, congruency effects were not found when the expressions displayed by the face and body were dissimilar (e.g., sad and happy, see Experiment 2). The results of the current set of studies are examined with respect to the Dimensional theory and the Emotional Seed model and the developmental timeline of children's sensitivity to facial expressions. A secondary aim of the series of studies was to examine one possible mechanism underlying congruency effects: holistic processing. To examine the influence of holistic processing, participants completed both aligned trials and misaligned trials in which the faces were detached from the body (designed to disrupt holistic processing). Based on the principles of holistic face processing we predicted that participants would benefit from misalignment of the face and body stimuli on incongruent trials but not on congruent trials. Collectively, our results provide some evidence that both adults and children may process emotional faces and bodies holistically. Consistent with the pattern of results for congruency effects, the magnitude of the misalignment effect varied with the similarity between emotions. Future research is required to further investigate whether or not facial expressions and emotions conveyed by the body are perceived holistically.

Relevance:

90.00%

Publisher:

Abstract:

The facial expression of pain plays a central role in the communication of pain and in the estimation of the intensity of pain experienced by others. The properties of the face of a person in pain have been investigated mainly with descriptive methods (e.g. FACS). The introduction reviews current knowledge of the facial expression of pain and of the communication of this experience at the behavioural and cerebral levels, and stresses that the visual mechanisms and strategies used by an observer to detect pain in another person's face remain largely unknown. Studying the processes involved in recognising the expression of pain is essential for understanding pain communication and, eventually, for explaining phenomena with considerable clinical impact, such as the classic underestimation of other people's pain. Article 1 aims to establish, using a direct method (Bubbles), the visual information used efficiently by observers when they must categorise pain among the basic emotions. The results show that, of all the features of the typical pain face, little information is truly efficient for this discrimination, and that the information that is efficient encodes the affective-motivational component of the other person's experience. Article 2 investigates the power of these privileged regions of the pain face to modulate a nociceptive experience in the observer, in order to better understand the mechanisms involved in such modulation. Although it is known that stimuli with negative emotional valence, including facial expressions of pain, can increase spinal (reflex) and supraspinal (e.g. perceptual) pain responses, the visual information sufficient to activate these modulatory pathways remains unknown. The results show that when observers see the regions that are diagnostic for recognising the facial expression of pain, the pain they perceive following a nociceptive stimulation is greater than when they see the regions least correlated with good recognition of pain. Post-experimental exploration of the characteristics of our stimuli suggests that this modulation cannot be explained by the induction of a negative emotional state, supporting a predominant role of pain communication in the vicarious modulation of the observer's pain experience. Spinal measures, however, were not modulated by these manipulations, suggesting that cerebrospinal pathways are not involved in this phenomenon.
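The Bubbles method mentioned above probes which facial regions drive correct categorization by revealing the face only through randomly placed Gaussian apertures and then relating trial-by-trial accuracy to the sampled locations. The snippet below is a minimal, hypothetical sketch of the stimulus-generation step only; function names and parameter values are illustrative, not the authors' code.

```python
import numpy as np

def bubbles_mask(height, width, n_bubbles=10, sigma=12.0, rng=None):
    """Build a trial mask as a sum of Gaussian apertures ("bubbles")
    centred at random locations; values are clipped to [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    ys = rng.uniform(0, height, n_bubbles)
    xs = rng.uniform(0, width, n_bubbles)
    yy, xx = np.mgrid[0:height, 0:width]
    mask = np.zeros((height, width))
    for cy, cx in zip(ys, xs):
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

def apply_bubbles(face, background=0.5, **kwargs):
    """Reveal the face only through the bubbles; elsewhere show a
    uniform mid-grey background."""
    mask = bubbles_mask(*face.shape, **kwargs)
    return mask * face + (1.0 - mask) * background

# Example: one trial on a random stand-in for a 256x256 face image.
face = np.random.rand(256, 256)
stimulus = apply_bubbles(face, n_bubbles=15)
```

Across many such trials, the diagnostic regions are estimated by correlating the sampled locations with the observer's accuracy.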

Relevance:

90.00%

Publisher:

Abstract:

Thesis completed under joint supervision (cotutelle) with the Université de Franche-Comté, doctoral school Langage, espace, temps et société.

Relevance:

90.00%

Publisher:

Abstract:

To investigate the perception of emotional facial expressions, researchers rely on shared sets of photos or videos, most often generated by actor portrayals. The drawback of such standardized material is a lack of flexibility and controllability, as it does not allow the systematic parametric manipulation of specific features of facial expressions on the one hand, and of more general properties of the facial identity (age, ethnicity, gender) on the other. To remedy this problem, we developed FACSGen: a novel tool that allows the creation of realistic synthetic 3D facial stimuli, both static and dynamic, based on the Facial Action Coding System. FACSGen provides researchers with total control over facial action units, and corresponding informational cues in 3D synthetic faces. We present four studies validating both the software and the general methodology of systematically generating controlled facial expression patterns for stimulus presentation.
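FACSGen exposes the intensity of individual action units (AUs) as the control parameters of a synthetic face. As a rough, hypothetical illustration of that idea (not the FACSGen API), the sketch below represents an expression as a dictionary of AU intensities and derives a graded sequence by scaling them.

```python
from dataclasses import dataclass, field

@dataclass
class ExpressionSpec:
    """Hypothetical description of a facial expression as a set of
    FACS action units (AU number -> intensity in [0, 1])."""
    action_units: dict = field(default_factory=dict)

    def scale(self, level):
        """Scale all AU intensities by `level` (0 = neutral, 1 = full),
        mimicking graded or morphed expression stimuli."""
        return ExpressionSpec({au: i * level for au, i in self.action_units.items()})

# Prototypical "happiness": AU6 (cheek raiser) + AU12 (lip corner puller).
happiness = ExpressionSpec({6: 1.0, 12: 1.0})
intensity_sequence = [happiness.scale(step / 10) for step in range(11)]
```

Such a specification could then be handed to whatever renderer produces the static or dynamic 3D stimuli.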

Relevance:

90.00%

Publisher:

Abstract:

Empathy is the lens through which we view others' emotion expressions, and respond to them. In this study, empathy and facial emotion recognition were investigated in adults with autism spectrum conditions (ASC; N=314), parents of a child with ASC (N=297) and IQ-matched controls (N=184). Participants completed a self-report measure of empathy (the Empathy Quotient [EQ]) and a modified version of the Karolinska Directed Emotional Faces Task (KDEF) using an online test interface. Results showed that mean scores on the EQ were significantly lower in fathers (p<0.05) but not mothers (p>0.05) of children with ASC compared to controls, whilst both males and females with ASC obtained significantly lower EQ scores (p<0.001) than controls. On the KDEF, statistical analyses revealed poorer overall performance by adults with ASC (p<0.001) compared to the control group. When the 6 distinct basic emotions were analysed separately, the ASC group showed impaired performance across five out of six expressions (happy, sad, angry, afraid and disgusted). Parents of a child with ASC were not significantly worse than controls at recognising any of the basic emotions, after controlling for age and non-verbal IQ (all p>0.05). Finally, results indicated significant differences between males and females with ASC for emotion recognition performance (p<0.05) but not for self-reported empathy (p>0.05). These findings suggest that self-reported empathy deficits in fathers of autistic probands are part of the 'broader autism phenotype'. This study also reports new findings of sex differences amongst people with ASC in emotion recognition, as well as replicating previous work demonstrating empathy difficulties in adults with ASC. The use of empathy measures as quantitative endophenotypes for ASC is discussed.

Relevance:

90.00%

Publisher:

Abstract:

In this paper, we present a 3D face photography system based on a facial expression training dataset composed of both facial range images (3D geometry) and facial texture (2D photographs). The proposed system allows one to obtain a 3D geometry representation of a given face provided as a 2D photograph, which undergoes a series of transformations through the estimated texture and geometry spaces. In the training phase of the system, the facial landmarks are obtained by an active shape model (ASM) extracted from the 2D gray-level photograph. Principal component analysis (PCA) is then used to represent the face dataset, thus defining an orthonormal basis of texture and another of geometry. In the reconstruction phase, the input is a face image to which the ASM is matched. The extracted facial landmarks and the face image are fed to the PCA basis transform, and a 3D version of the 2D input image is built. Experimental tests using a new dataset of 70 facial expressions from ten subjects as the training set show that 3D faces are reconstructed rapidly and maintain spatial coherence consistent with human perception, corroborating the efficiency and applicability of the proposed system.
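The abstract describes two PCA bases (texture and geometry) linked at reconstruction time. The sketch below is one plausible reading of that pipeline under stated assumptions: the ASM alignment step is taken as given, the data are random stand-ins, and the texture-to-geometry mapping is assumed to be a linear coupling estimated by least squares, which the abstract does not specify.

```python
import numpy as np

# Hypothetical training data: each row is one flattened sample.
# textures: N x Dt (2D grey-level appearance), geometries: N x Dg (range data).
textures = np.random.rand(70, 4096)
geometries = np.random.rand(70, 4096 * 3)

def pca_basis(X, n_components):
    """Return (mean, orthonormal basis) of the top principal components."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

t_mean, t_basis = pca_basis(textures, 20)    # texture space
g_mean, g_basis = pca_basis(geometries, 20)  # geometry space

# Assumed linear coupling between texture and geometry coefficients,
# estimated by least squares on the training set.
T = (textures - t_mean) @ t_basis.T
G = (geometries - g_mean) @ g_basis.T
coupling, *_ = np.linalg.lstsq(T, G, rcond=None)

def reconstruct_geometry(photo_vec):
    """Project a new 2D face (already ASM-aligned) into texture space,
    map the coefficients to geometry space, and rebuild the 3D shape."""
    t_coeff = (photo_vec - t_mean) @ t_basis.T
    g_coeff = t_coeff @ coupling
    return g_mean + g_coeff @ g_basis

reconstructed = reconstruct_geometry(np.random.rand(4096))
```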

Relevance:

90.00%

Publisher:

Abstract:

Individuals with facial paralysis of 6 months or more without evidence of clinical or electromyographic improvement have been successfully reanimated utilizing an orthodromic temporalis transfer in conjunction with end-to-side cross-face nerve grafts. The temporalis muscle insertion is released from the coronoid process of the mandible and sutured to a fascia lata graft that is secured distally to the commissure and paralyzed hemilip. The orthodromic transfer of the temporalis muscle overcomes the concave temporal deformity and zygomatic fullness produced by the turning down of the central third of the muscle (Gillies procedure) while yielding stronger muscle contraction and a more symmetric smile. The muscle flap is combined with cross-face sural nerve grafts utilizing end-to-side neurorrhaphies to import myelinated motor fibers to the paralyzed muscles of facial expression in the midface and perioral region. Cross-face nerve grafting provides the potential for true spontaneous facial motion. We feel that the synergy created by the combination of techniques can perhaps produce a more symmetrical and synchronized smile than either procedure in isolation. Nineteen patients underwent an orthodromic temporalis muscle flap in conjunction with cross-face (buccal-buccal with end-to-side neurorrhaphy) nerve grafts. To evaluate the symmetry of the smile, we measured the length of the two hemilips (normal and affected) using the CorelDRAW X3 software. Measurements were obtained in the pre- and postoperative periods and compared for symmetry. There was significant improvement in smile symmetry in 89.5% of patients. Orthodromic temporalis muscle transfer in conjunction with cross-face nerve grafts creates a synergistic effect, frequently producing an aesthetic, symmetric smile. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.spinger.com/00266.

Relevance:

90.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

90.00%

Publisher:

Abstract:

This study aimed to measure, using fMRI, the effect of diazepam on the haemodynamic response to emotional faces. Twelve healthy male volunteers (mean age = 24.83 ± 3.16 years) were evaluated in a randomized, balanced-order, double-blind, placebo-controlled crossover design. Diazepam (10 mg) or placebo was given 1 h before the neuroimaging acquisition. In a blocked-design covert emotional face task, subjects were presented with neutral (A) and aversive (B) (angry or fearful) faces. Participants also completed an explicit emotional face recognition task, and subjective anxiety was evaluated throughout the procedures. Diazepam attenuated the activation of the right amygdala and right orbitofrontal cortex and enhanced the activation of the right anterior cingulate cortex (ACC) to fearful faces. In contrast, diazepam enhanced the activation of the posterior left insula and attenuated the activation of the bilateral ACC to angry faces. In the behavioural task, diazepam impaired the recognition of fear in female faces. Under the action of diazepam, volunteers were less anxious at the end of the experimental session. These results suggest that benzodiazepines can differentially modulate brain activation to aversive stimuli, depending on the stimulus features, and indicate a role of the amygdala and insula in the anxiolytic action of benzodiazepines.

Relevance:

90.00%

Publisher:

Abstract:

This paper focuses on four different initialization methods for determining the initial shape for the AAM algorithm, and on their performance in two different classification tasks: one on the facial expression DaFEx database and one on real-world data obtained from a robot's point of view.
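The paper concerns how the initial shape is placed before AAM fitting; the four specific methods are not listed in this abstract. As a generic, hypothetical example of one common strategy only (not necessarily one of the four evaluated), the sketch below scales and translates a mean shape into a face-detection bounding box.

```python
import numpy as np

def init_from_detection(mean_shape, box):
    """Place an AAM mean shape inside a face-detection bounding box,
    one common way to obtain the initial shape. `mean_shape` is an
    (n_points, 2) array in a normalised frame; `box` is (x, y, w, h)."""
    x, y, w, h = box
    # Normalise the mean shape to the unit square, then scale and translate.
    mins, maxs = mean_shape.min(axis=0), mean_shape.max(axis=0)
    unit = (mean_shape - mins) / (maxs - mins)
    return unit * np.array([w, h]) + np.array([x, y])

# Example: a hypothetical 68-point mean shape and a detector box.
mean_shape = np.random.rand(68, 2)
initial_shape = init_from_detection(mean_shape, box=(120, 80, 160, 160))
```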

Relevance:

90.00%

Publisher:

Abstract:

Stress in the post-acquisition phase promotes the memory consolidation of emotionally arousing information. The interplay of noradrenergic activation and cortisol at the level of the amygdala is of crucial importance here. This study examines whether this effect is influenced by the magnitude of the cardiovascular or the subjectively experienced stress reactivity. 49 participants (age: 23.8 years; 32 women) were each presented with 52 faces, 50% with an angry and 50% with a happy expression. Immediately after acquisition, acute stress was induced in 30 participants by means of the socially evaluated cold pressor test (SECPT; immersion of the dominant hand in ice-cold water for 3 minutes under observation); in 19 participants a control procedure without stress was carried out. The 30 participants of the SECPT group were divided post hoc by median split into two subgroups (high responders, low responders), first on the basis of individual blood pressure reactivity and second on the basis of the strength of the subjectively rated stress reactivity. The first recognition test took place 30 minutes after the acquisition phase, a further one 20 hours later. At each test, 26 of the initially presented faces were shown with a neutral expression, together with 26 new neutral faces. The control group and the group of high responders (based on cardiovascular reactivity) showed better memory for the faces initially presented with a positive expression, whereas the group of low responders showed better memory for the faces initially presented with a negative expression. Stress thus appears to lead to valence-specific consolidation effects depending on the strength of the cardiovascular response. Visceral afferents, e.g. from the arterial baroreflexes, could play a role here.

Relevance:

90.00%

Publisher:

Abstract:

Identification of emotional facial expression and emotional prosody (i.e. speech melody) is often impaired in schizophrenia. For facial emotion identification, a recent study suggested that the relative deficit in schizophrenia is enhanced when the presented emotion is easier to recognize. It is unclear whether this effect is specific to face processing or part of a more general emotion recognition deficit.

Relevance:

90.00%

Publisher:

Abstract:

Non-verbal communication (NVC) is considered to represent more than 90 percent of everyday communication. In virtual worlds, this important aspect of interaction between virtual humans (VH) is largely neglected. This paper presents a user-test study demonstrating the impact of automatically generated, graphics-based NVC expression on dialog quality: first, we wanted to compare impassive and emotional facial expression simulation in terms of their impact on chatting; second, we wanted to see whether people like chatting within a 3D graphical environment. Our model proposes only facial expressions and head movements induced from spontaneous chatting between VHs. Only subtle facial expressions, i.e. those related to the emotional model, are used as nonverbal cues. Motion-capture animations of hand gestures, such as cleaning glasses, were used at random to make the virtual human lively. After briefly introducing the technical architecture of the 3D-chatting system, we focus on two aspects of chatting through VHs. First, what is the influence of facial expressions that are induced from the text dialog? For this purpose, we exploited a previously developed emotion engine that extracts emotional content from a text and depicts it on a virtual character [GAS11]. Second, as our goal was not to address automatic generation of text, we compared the impact of nonverbal cues in conversation with a chatbot and with a human operator using a Wizard of Oz approach. Among the main results, the within-group study, involving 40 subjects, suggests that subtle facial expressions have a significant impact not only on the quality of experience but also on dialog understanding.
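The emotion engine referenced above ([GAS11]) extracts emotional content from chat text and maps it onto subtle facial expressions of the virtual human. The snippet below is a deliberately minimal, lexicon-based stand-in for that idea; the lexicon, function names, and scoring rule are all hypothetical and far simpler than the published engine.

```python
# Minimal, purely illustrative lexicon-based emotion tagger for chat
# lines; the published system [GAS11] is far more elaborate.
EMOTION_LEXICON = {
    "happy": {"great", "thanks", "love", ":)", "awesome"},
    "sad": {"sorry", "miss", "unfortunately", ":("},
    "angry": {"hate", "annoying", "stop"},
}

def tag_emotion(utterance):
    """Return (emotion, intensity) for a chat line, where intensity is
    the fraction of tokens that matched the winning emotion's lexicon."""
    tokens = utterance.lower().split()
    scores = {emo: sum(t in words for t in tokens)
              for emo, words in EMOTION_LEXICON.items()}
    emotion, hits = max(scores.items(), key=lambda kv: kv[1])
    if hits == 0:
        return "neutral", 0.0
    return emotion, hits / len(tokens)

print(tag_emotion("thanks that was awesome :)"))  # ('happy', 0.6)
```

The resulting label and intensity could then drive a subtle facial expression and head movement on the avatar, in the spirit of the system described above.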