965 results for facial expression reconstruction
Abstract:
Background This study aims to examine the relationship between how individuals with intellectual disabilities report their own levels of anger and their ability to recognize emotions. It was hypothesized that increased expression of anger would be linked to a lower ability to recognize facial emotional expressions and an increased tendency to interpret facial expressions in a hostile or negative manner. It was also hypothesized that increased levels of anger may lead to altered perception of particular emotions.
Method A cross-sectional survey design was used. Thirty participants completed a test of facial emotion recognition (FER) and a self-report anger inventory (Benson & Ivins 1992) as part of a structured interview.
Results Individuals with higher self-reported anger did not show significantly reduced performance in FER, nor did they interpret facial expressions in a more hostile manner, compared with individuals with less self-reported anger. However, they were less accurate in recognizing neutral facial expressions.
Conclusions It is tentatively suggested that individuals with high levels of anger may be likely to perceive emotional content in a neutral facial expression because of their high levels of emotional arousal.
Abstract:
Face detection and recognition should be complemented by recognition of facial expression, for example in social robots, which must react to human emotions. Our framework is based on two multi-scale representations in cortical area V1: keypoints at the eyes, nose and mouth are grouped for face detection [1]; lines and edges provide information for face recognition [2].
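As a rough illustration of the multi-scale V1 front end described above, the sketch below uses a Gabor filter bank (a standard computational stand-in for cortical simple cells) and keeps local maxima of the pooled energy as candidate keypoints. The filter parameters and the peak-picking heuristic are illustrative assumptions, not the published method of [1, 2].

```python
# Sketch of a multi-scale, V1-inspired front end: a Gabor filter bank
# approximates cortical simple cells, and local maxima of the pooled
# energy serve as candidate keypoints (eyes, nose and mouth tend to
# stand out). Parameters are illustrative, not the authors' values.
import numpy as np
from scipy.ndimage import maximum_filter
from scipy.signal import fftconvolve

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a Gabor filter at one scale and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def keypoint_map(image, wavelengths=(4, 8, 16), n_orient=8):
    """Pool rectified Gabor energy over scales and orientations,
    then keep local maxima as candidate facial keypoints."""
    energy = np.zeros(image.shape, dtype=float)
    for wl in wavelengths:
        for k in range(n_orient):
            kern = gabor_kernel(4 * wl + 1, wl, k * np.pi / n_orient, wl / 2)
            energy += np.abs(fftconvolve(image, kern, mode="same"))
    peaks = energy == maximum_filter(energy, size=9)
    return energy, peaks
```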
Abstract:
Adults and children can discriminate various emotional expressions, although there is limited research on sensitivity to the differences between posed and genuine expressions. Adults have shown implicit sensitivity to the difference between posed and genuine happy smiles in that they evaluate T-shirts paired with genuine smiles more favorably than T-shirts paired with posed smiles or neutral expressions (Peace, Miles, & Johnston, 2006). Adults also have shown some explicit sensitivity to posed versus genuine expressions; they are more likely to say that a model is feeling happy if the expression is genuine than posed. Nonetheless, they are duped by posed expressions about 50% of the time (Miles & Johnston, in press). There has been no published study to date in which researchers report whether children's evaluation of items varies with expression, and there is little research investigating children's sensitivity to the veracity of facial expressions. In the present study the same face stimuli were used as in two previous studies (Miles & Johnston, in press; Peace et al., 2006). The first question to be addressed was whether adults and 7-year-olds have a cognitive understanding of the differences between posed and genuine happiness (scenario task). They evaluated the feelings of children who expressed gratitude for a present that they did or did not want. Results indicated that all participants had a fundamental understanding of the difference between real and posed happiness. The second question involved adults' and children's implicit sensitivity to the veracity of posed and genuine smiles. Participants rated and ranked beach balls paired with faces showing posed smiles, genuine smiles, and neutral expressions. Adults ranked, but did not rate, beach balls paired with genuine smiles more favorably than beach balls paired with posed smiles. Children did not demonstrate implicit sensitivity, as their ratings and rankings of beach balls did not vary with expression; they did not even rank beach balls paired with genuine expressions higher than beach balls paired with neutral expressions. In the explicit (show/feel) task, faces were presented without the beach balls and participants were first asked whether each face was showing happy and then whether each face was feeling happy. There were also two matching trials that presented two faces at once; participants had to indicate which person was actually feeling happy. In the show condition both adults and 7-year-olds were very accurate on genuine and neutral expressions but made some errors on posed smiles. Adults were fooled about 50% of the time by posed smiles in the feel condition (i.e., they were likely to say that a model posing happy was really feeling happy) and children were even less accurate, although they showed weak sensitivity to posed versus genuine expressions. Future research should test an older age group of children to determine when explicit sensitivity to posed versus genuine facial expressions becomes adult-like, and should modify the ranking task to explore the influence of facial expressions on object evaluations.
Abstract:
The present set of experiments was designed to investigate the development of children's sensitivity to facial expressions observed within emotional contexts. Past research investigating both adults' and children's perception of facial expressions has been limited primarily to the presentation of isolated faces. During daily social interactions, however, facial expressions are encountered within contexts conveying emotions (e.g., background scenes, body postures, gestures). Recently, research has shown that adults' perception of facial expressions is influenced by these contexts. When emotional faces are shown in incongruent contexts (e.g., when an angry face is presented in a context depicting fear), adults' accuracy decreases and their reaction times increase (e.g., Meeren et al., 2005). To examine the influence of emotional body postures on children's perception of facial expressions, in each of the experiments in the current study adults and 8-year-old children made two-alternative forced-choice decisions about facial expressions presented in congruent (e.g., a face displaying sadness on a body displaying sadness) and incongruent (e.g., a face displaying fear on a body displaying sadness) contexts. Consistent with previous studies, a congruency effect (better performance on congruent than incongruent trials) was found for both adults and 8-year-olds when the emotions displayed by the face and body were similar to each other (e.g., fear and sad; Experiment 1a); the influence of context was greater for 8-year-olds than adults for these similar expressions. To further investigate why the congruency effect was larger for children than adults in Experiment 1a, Experiment 1b was conducted to examine whether increased task difficulty would increase the magnitude of adults' congruency effects. Adults were presented with subtle facial expressions and, despite successfully increasing task difficulty, the magnitude of the congruency effect did not increase, suggesting that the difference between children's and adults' congruency effects in Experiment 1a cannot be explained by 8-year-olds finding the task difficult. In contrast, congruency effects were not found when the expressions displayed by the face and body were dissimilar (e.g., sad and happy; see Experiment 2). The results of the current set of studies are examined with respect to the Dimensional theory and the Emotional Seed model and the developmental timeline of children's sensitivity to facial expressions. A secondary aim of the series of studies was to examine one possible mechanism underlying congruency effects: holistic processing. To examine the influence of holistic processing, participants completed both aligned trials and misaligned trials in which the faces were detached from the body (designed to disrupt holistic processing). Based on the principles of holistic face processing, we predicted that participants would benefit from misalignment of the face and body stimuli on incongruent trials but not on congruent trials. Collectively, our results provide some evidence that both adults and children may process emotional faces and bodies holistically. Consistent with the pattern of results for congruency effects, the magnitude of the effect of misalignment varied with the similarity between emotions. Future research is required to further investigate whether or not facial expressions and emotions conveyed by the body are perceived holistically.
Abstract:
Thesis carried out under joint supervision (cotutelle) with the Université de Franche-Comté, doctoral school Langage, espace, temps et société.
Abstract:
To investigate the perception of emotional facial expressions, researchers rely on shared sets of photos or videos, most often generated by actor portrayals. The drawback of such standardized material is a lack of flexibility and controllability, as it does not allow the systematic parametric manipulation of specific features of facial expressions on the one hand, or of more general properties of facial identity (age, ethnicity, gender) on the other. To remedy this problem, we developed FACSGen: a novel tool that allows the creation of realistic synthetic 3D facial stimuli, both static and dynamic, based on the Facial Action Coding System. FACSGen provides researchers with total control over facial action units and the corresponding informational cues in 3D synthetic faces. We present four studies validating both the software and the general methodology of systematically generating controlled facial expression patterns for stimulus presentation.
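FACSGen itself is proprietary, so as a concrete illustration of action-unit-parameterized stimulus control, the hypothetical sketch below shows how a stimulus might be specified as a set of AU intensities and morphed over time; the class, its fields and the interpolate method are assumptions for illustration, not FACSGen's actual API.

```python
# Hypothetical illustration of FACS-based stimulus specification: a
# stimulus is a set of action-unit (AU) intensities that can be morphed
# for dynamic displays. This is NOT FACSGen's actual API.
from dataclasses import dataclass, field

@dataclass
class FacialStimulus:
    # Identity properties (age, gender, ethnicity) and AU intensities
    # in [0, 1]; e.g., AU12 is the lip-corner puller.
    identity: dict = field(default_factory=dict)
    action_units: dict = field(default_factory=dict)

    def interpolate(self, other, t):
        """Linearly blend two AU configurations; sampling t over time
        yields a dynamic stimulus."""
        aus = set(self.action_units) | set(other.action_units)
        blended = {au: (1 - t) * self.action_units.get(au, 0.0)
                       + t * other.action_units.get(au, 0.0)
                   for au in aus}
        return FacialStimulus(self.identity, blended)

# Example: ramp a Duchenne smile (AU6 cheek raiser + AU12) from neutral.
neutral = FacialStimulus(identity={"age": 30, "gender": "female"})
smile = FacialStimulus(neutral.identity, {"AU6": 0.8, "AU12": 1.0})
frames = [neutral.interpolate(smile, t / 10) for t in range(11)]
```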
Abstract:
In this paper, we present a 3D face photography system based on a facial expression training dataset composed of both facial range images (3D geometry) and facial texture (2D photographs). The proposed system obtains a 3D geometric representation of a face provided as a 2D photograph, which undergoes a series of transformations through the estimated texture and geometry spaces. In the training phase, facial landmarks are obtained by an active shape model (ASM) extracted from the 2D gray-level photograph. Principal component analysis (PCA) is then used to represent the face dataset, thus defining an orthonormal basis of texture and another of geometry. In the reconstruction phase, the input is a face image to which the ASM is matched. The extracted facial landmarks and the face image are fed to the PCA basis transform, and a 3D version of the 2D input image is built. Experimental tests using a new dataset of 70 facial expressions from ten subjects as the training set show rapidly reconstructed 3D faces that maintain spatial coherence consistent with human perception, corroborating the efficiency and applicability of the proposed system.
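A minimal sketch of the PCA reconstruction step is given below, assuming a least-squares linear coupling between the texture and geometry coefficient spaces; the paper's exact transform through the two spaces may differ, and ASM landmark extraction is assumed to happen elsewhere.

```python
# Minimal sketch of the PCA texture-to-geometry step: fit orthonormal
# bases for texture and geometry, couple the coefficient spaces with a
# least-squares linear map, then reconstruct 3D geometry for a new 2D
# face. The coupling is an assumption; the paper's transform may differ.
import numpy as np

def fit_bases(textures, geometries, k=20):
    """textures: (n, dt) flattened gray images; geometries: (n, dg)
    flattened range images. Returns means, bases, and the coupling."""
    mu_t, mu_g = textures.mean(axis=0), geometries.mean(axis=0)
    Ut = np.linalg.svd(textures - mu_t, full_matrices=False)[2][:k]
    Ug = np.linalg.svd(geometries - mu_g, full_matrices=False)[2][:k]
    ct = (textures - mu_t) @ Ut.T        # texture coefficients
    cg = (geometries - mu_g) @ Ug.T      # geometry coefficients
    W = np.linalg.lstsq(ct, cg, rcond=None)[0]
    return mu_t, Ut, mu_g, Ug, W

def reconstruct(face2d, mu_t, Ut, mu_g, Ug, W):
    """Project a new 2D face into texture space, map its coefficients
    to geometry space, and synthesize the 3D range image."""
    ct = (face2d - mu_t) @ Ut.T
    return mu_g + (ct @ W) @ Ug
```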
Abstract:
Individuals with facial paralysis of 6 months or more without evidence of clinical or electromyographic improvement have been successfully reanimated using an orthodromic temporalis transfer in conjunction with end-to-side cross-face nerve grafts. The temporalis muscle insertion is released from the coronoid process of the mandible and sutured to a fascia lata graft that is secured distally to the commissure and paralyzed hemilip. The orthodromic transfer of the temporalis muscle overcomes the concave temporal deformity and zygomatic fullness produced by turning down the central third of the muscle (Gillies procedure), while yielding stronger muscle contraction and a more symmetric smile. The muscle flap is combined with cross-face sural nerve grafts utilizing end-to-side neurorrhaphies to import myelinated motor fibers to the paralyzed muscles of facial expression in the midface and perioral region. Cross-face nerve grafting provides the potential for true spontaneous facial motion. We feel that the synergy created by the combination of techniques can perhaps produce a more symmetrical and synchronized smile than either procedure in isolation. Nineteen patients underwent an orthodromic temporalis muscle flap in conjunction with cross-face (buccal-buccal with end-to-side neurorrhaphy) nerve grafts. To evaluate the symmetry of the smile, we measured the length of the two hemilips (normal and affected) using the CorelDRAW X3 software. Measurements were obtained in the pre- and postoperative period and compared for symmetry. There was significant improvement in smile symmetry in 89.5% of patients. Orthodromic temporalis muscle transfer in conjunction with cross-face nerve grafts creates a synergistic effect, frequently producing an aesthetic, symmetric smile. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.
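The symmetry comparison reduces to a simple ratio; the snippet below is a hypothetical formalization of that measurement (the authors measured hemilip lengths in CorelDRAW X3), and the numbers shown are illustrative, not reported values.

```python
# Hypothetical formalization of the smile-symmetry measurement: score
# symmetry as the affected/normal hemilip length ratio (1.0 = perfect
# symmetry). The input values below are illustrative, not study data.
def symmetry_index(normal_mm: float, affected_mm: float) -> float:
    return affected_mm / normal_mm

pre = symmetry_index(normal_mm=32.0, affected_mm=21.0)
post = symmetry_index(normal_mm=32.0, affected_mm=29.5)
print(f"pre-op {pre:.2f} -> post-op {post:.2f}")  # closer to 1.0 is better
```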
Abstract:
Graduate Program in Electrical Engineering - FEIS
Abstract:
The use of new technologies to step up the interaction between humans and machines is the main evidence that faces are important in videos. We therefore propose a novel face video database for the development, testing and verification of algorithms for face-based applications and facial recognition applications. In addition to facial expression videos, the database includes body videos. The videos are taken by three different cameras, working in real time, without varying illumination conditions.
Abstract:
This paper focuses on four different initialization methods for determining the initial shape for the AAM algorithm, and on their performance in two different classification tasks: one on the facial expression DaFEx database and one on real-world data obtained from a robot's point of view.
Abstract:
We propose a computationally efficient and biomechanically relevant soft-tissue simulation method for cranio-maxillofacial (CMF) surgery. A template-based facial muscle reconstruction was introduced to minimize the effort of preparing a patient-specific model. A transversely isotropic mass-tensor model (MTM) was adopted to capture the directional properties of facial muscles within reasonable computation time. Additionally, sliding contact around the teeth and mucosa was considered for a more realistic simulation. A retrospective validation study with the postoperative scan of a real patient showed considerable improvements in simulation accuracy from incorporating template-based facial muscle anatomy and sliding contact.
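To give intuition for the transversely isotropic material model, the sketch below augments an isotropic small-strain energy with a penalty on strain along a muscle-fiber direction; it is a simplified stand-in under assumed parameter values, not the paper's mass-tensor formulation.

```python
# Simplified intuition for transverse isotropy: add a stiffness term
# along a fiber direction a to an isotropic small-strain energy. This
# is a stand-in, not the paper's mass-tensor model (MTM); parameter
# values are assumptions.
import numpy as np

def strain_energy_density(eps, a, lam=1e4, mu=5e3, eta=2e4):
    """eps: 3x3 small-strain tensor; a: unit fiber direction.
    lam, mu: isotropic Lame parameters; eta: extra fiber stiffness."""
    iso = 0.5 * lam * np.trace(eps) ** 2 + mu * np.sum(eps * eps)
    fiber_strain = a @ eps @ a            # elongation along the fiber
    return iso + 0.5 * eta * fiber_strain ** 2

# Stretching along the fiber costs more than stretching across it.
eps = np.diag([0.01, 0.0, 0.0])
along = strain_energy_density(eps, np.array([1.0, 0.0, 0.0]))
across = strain_energy_density(eps, np.array([0.0, 1.0, 0.0]))
assert along > across
```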
Abstract:
Non-verbal communication (NVC) is considered to represent more than 90 percent of everyday communication. In virtual worlds, this important aspect of interaction between virtual humans (VHs) is strongly neglected. This paper presents a user-test study to demonstrate the impact of automatically generated graphics-based NVC expression on dialog quality: first, we compare impassive and emotional facial expression simulation for their impact on chatting; second, we examine whether people like chatting within a 3D graphical environment. Our model proposes only facial expressions and head movements induced from spontaneous chatting between VHs. Only subtle facial expressions are used as nonverbal cues, i.e., related to the emotional model. Motion-capture animations related to hand gestures, such as cleaning glasses, were used randomly to make the virtual human lively. After briefly introducing the technical architecture of the 3D chatting system, we focus on two aspects of chatting through VHs. First, what is the influence of facial expressions that are induced from text dialog? For this purpose, we exploited a previously developed emotion engine that extracts emotional content from text and depicts it on a virtual character [GAS11]. Second, as our goal was not the automatic generation of text, we compared the impact of nonverbal cues in conversation with a chatbot or with a human operator using a Wizard of Oz approach. Among the main results, the within-group study, involving 40 subjects, suggests that subtle facial expressions have a significant impact not only on the quality of experience but also on dialog understanding.
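The emotion engine of [GAS11] is not reproduced here; as a toy stand-in, the sketch below maps chat text to a subtle expression cue and a head movement via a small lexicon. The lexicon entries, intensities and motion names are all illustrative assumptions.

```python
# Toy stand-in for a text-to-nonverbal-cue engine (the actual engine
# [GAS11] is not reproduced here): a tiny lexicon maps words in a chat
# line to a subtle facial expression and a head movement. All entries
# and intensities are illustrative.
EMOTION_LEXICON = {
    "thanks": ("joy", 0.3), "great": ("joy", 0.4),
    "sorry": ("sadness", 0.3), "hate": ("anger", 0.5),
}

def nonverbal_cues(utterance: str):
    """Return (expression, intensity, head_motion) for one chat line.
    Intensities are capped so only subtle expressions are produced."""
    for token in utterance.lower().split():
        word = token.strip(".,!?")
        if word in EMOTION_LEXICON:
            emotion, intensity = EMOTION_LEXICON[word]
            head = "nod" if emotion == "joy" else "tilt"
            return emotion, min(intensity, 0.5), head
    return "neutral", 0.0, "idle"

print(nonverbal_cues("Thanks for showing me around!"))  # ('joy', 0.3, 'nod')
```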
Abstract:
Previous studies have suggested a link between the processing of the emotional expression of a face and how attractive it appears. In two experiments we investigated the interrelationship between attractiveness and happiness. In Experiment 1 we presented morphed faces varying in attractiveness and happiness and asked participants to choose the more attractive of two simultaneously presented faces. In the second experiment we used the same stimuli as in Experiment 1 and asked participants to choose the happier face. The results of Experiment 1 revealed that the evaluation of attractiveness is strongly influenced by the intensity of a smile expressed on a face: A happy facial expression could even compensate for relative unattractiveness. Conversely, the findings of Experiment 2 showed that facial attractiveness also influences the evaluation of happiness: It was easier to choose the happier of two faces if the happier face was also more attractive. We discuss the interrelationship of happiness and attractiveness with regard to the evolutionary relevance of positive affective status and its rewarding effects.
Abstract:
Motivated by conflicting evidence in the literature, we re-assessed the role of facial feedback in detecting quantitative or qualitative changes in others’ emotional expressions. Fifty-three healthy adults observed self-paced morph sequences in which the emotional facial expression either changed quantitatively (i.e., sad-to-neutral, neutral-to-sad, happy-to-neutral, neutral-to-happy) or qualitatively (i.e., from sad to happy, or from happy to sad). Observers held a pen in their mouth to induce smiling or frowning during the detection task. When morph sequences started or ended with neutral expressions, we replicated a congruency effect: Happiness was perceived longer and sooner while smiling; sadness was perceived longer and sooner while frowning. Interestingly, no such congruency effects occurred for transitions between emotional expressions. These results suggest that facial feedback is especially useful when evaluating the intensity of a facial expression, but less so when we have to recognize which emotion our counterpart is expressing.