872 results for facial expressions


Relevance: 70.00%

Abstract:

Statistical approaches for building non-rigid deformable models, such as the Active Appearance Model (AAM), have enjoyed great popularity in recent years, but typically require tedious manual annotation of training images. In this paper, a learning based approach for the automatic annotation of visually deformable objects from a single annotated frontal image is presented and demonstrated on the example of automatically annotating face images that can be used for building AAMs for fitting and tracking. This approach employs the idea of initially learning the correspondences between landmarks in a frontal image and a set of training images with a face in arbitrary poses. Using this learner, virtual images of unseen faces at any arbitrary pose for which the learner was trained can be reconstructed by predicting the new landmark locations and warping the texture from the frontal image. View-based AAMs are then built from the virtual images and used for automatically annotating unseen images, including images of different facial expressions, at any random pose within the maximum range spanned by the virtually reconstructed images. The approach is experimentally validated by automatically annotating face images from three different databases. © 2009 IEEE.
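
The texture-warping step this pipeline depends on is easy to illustrate. Below is a minimal sketch, assuming (n, 2) landmark arrays in (x, y) order and scikit-image's piecewise-affine transform; the landmark-prediction learner itself is left abstract, and this illustrates the general technique rather than the authors' code.

    # Synthesize one "virtual" view: warp the annotated frontal texture
    # onto landmark positions predicted for a new pose.
    import numpy as np
    from skimage.transform import PiecewiseAffineTransform, warp

    def synthesize_virtual_view(frontal_img, frontal_pts, predicted_pts):
        """frontal_pts, predicted_pts: (n, 2) arrays of (x, y) landmarks."""
        tform = PiecewiseAffineTransform()
        # warp() expects the inverse map (output coords -> input coords),
        # so estimate the transform from the predicted layout back to frontal.
        tform.estimate(predicted_pts, frontal_pts)
        return warp(frontal_img, tform, output_shape=frontal_img.shape)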

Relevance: 70.00%

Abstract:

Video-based facial expression recognition is a challenging problem in computer vision and human-computer interaction. To target this problem, texture features have been extracted and widely used, because they can capture image intensity changes caused by skin deformation. However, existing texture features encounter problems with albedo and lighting variations. To solve both problems, we propose a new texture feature called image ratio features. Compared with previously proposed texture features, e.g., high gradient component features, image ratio features are more robust to albedo and lighting variations. In addition, to further improve facial expression recognition accuracy based on image ratio features, we combine image ratio features with facial animation parameters (FAPs), which describe the geometric motions of facial feature points. The performance evaluation is based on the Carnegie Mellon University Cohn-Kanade database, our own database, and the Japanese Female Facial Expression database. Experimental results show that the proposed image ratio feature is more robust to albedo and lighting variations, and the combination of image ratio features and FAPs outperforms each feature alone. In addition, we study asymmetric facial expressions based on our own facial expression database and demonstrate the superior performance of our combined expression recognition system.
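
The robustness claim rests on a simple cancellation: under a Lambertian model I(x) = albedo(x) · shading(x), the albedo divides out of a pixel-wise ratio between frames. A minimal sketch of that idea, assuming a neutral reference frame is available (the paper's exact feature definition may differ):

    import numpy as np

    def ratio_image(expressive, neutral, eps=1e-6):
        """Pixel-wise ratio of an expressive frame to a neutral reference.
        Under a Lambertian model the albedo cancels, so the ratio mainly
        reflects deformation-induced shading changes."""
        expressive = expressive.astype(np.float64)
        neutral = neutral.astype(np.float64)
        return expressive / (neutral + eps)  # eps avoids division by zero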

Relevance: 70.00%

Abstract:

Previous research has highlighted theoretical and empirical links between measures of both personality and trait emotional intelligence (EI) and the ability to decode facial expressions of emotion. Research has also found that the posed, static characteristics of the photographic stimuli used to explore these links affect the decoding process and differentiate them from the natural expressions they represent. This undermines the ecological validity of established trait-emotion decoding relationships. This study addresses these methodological shortcomings by testing relationships between the reliability of participant ratings of dynamic, spontaneously elicited expressions of emotion and personality and trait EI. Fifty participants completed personality and self-report EI questionnaires and used a computer-logging program to continuously rate the change in emotional intensity expressed in video clips. Each clip was rated twice to obtain an intra-rater reliability score. The results provide limited support for links between both trait EI and personality variables and how reliably we decode natural expressions of emotion. Limitations and future directions are discussed.
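
The intra-rater reliability score can be made concrete: one simple index is the Pearson correlation between the two time-aligned rating passes for a clip. The abstract does not specify the study's exact index, so the sketch below is only an assumption about how such a score might be computed.

    import numpy as np

    def intra_rater_reliability(pass1, pass2):
        """Pearson correlation between two equal-length, time-aligned
        passes of continuous intensity ratings for the same clip."""
        return float(np.corrcoef(pass1, pass2)[0, 1])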

Relevance: 70.00%

Abstract:

Facial activity is strikingly visible in infants reacting to noxious events. Two measures that reduce this activity to composite events, the Neonatal Facial Coding System (NFCS) and the Facial Action Coding System (FACS), were used to examine facial expressions of 56 neonates responding to routine heel lancing for blood sampling purposes. The NFCS focuses upon a limited subset of all possible facial actions that had been identified previously as responsive to painful events, whereas the FACS is a comprehensive system that is inclusive of all facial actions. Descriptions of the facial expressions obtained from the two measurement systems were very similar, supporting the convergent validity of the shorter, more readily applied system. As well, the cluster of facial activity associated with pain in this sample, using either measure, was similar to the cluster of facial activity associated with pain in adults and other newborns, both full-term and preterm, providing construct validity for the position that the face encodes painful distress in infants and adults.

Relevance: 70.00%

Abstract:

Deciding whether someone is trustworthy is a judgment that underlies our everyday social interactions throughout life. Recent studies in young adults have proposed that face-based trustworthiness judgments are an extension of the processes for recognizing facial expressions, particularly anger and happiness (Todorov, 2008). Although trustworthiness judgments matter throughout our lives, to our knowledge no study has explored how this process evolves with aging. Yet, given that healthy older adults are less accurate than young adults at recognizing emotional facial expressions (Ruffman et al., 2008; Calder et al., 2003), the two age groups may differ in their ability to judge trustworthiness. The present work explored, for the first time, the perceptual processes underlying trustworthiness judgments in a healthy older population and in a population with frontotemporal dementia. The results show that the representations of anger, happiness and trustworthiness are similar in healthy young and older adults, and suggest that a link does indeed exist between trustworthiness judgments and judgments of happiness and anger. They further reveal that this link persists with aging, but that healthy older adults rely more than young adults on their representation of anger when deciding whether a face is trustworthy. Finally, patients with frontotemporal dementia hold representations of anger, happiness and trustworthiness that differ from those of healthy older adults, and they appear to rely more than healthy older adults on their representation of happiness when judging the trustworthiness of a face.

Relevance: 70.00%

Abstract:

Postnatal maternal depression is associated with difficulties in maternal responsiveness. As most signals arising from the infant come from facial expressions, one possible explanation for these difficulties is that mothers with postnatal depression are differentially affected by particular infant facial expressions. Thus, this study investigates the effects of postnatal depression on mothers' perceptions of infant facial expressions. Participants (15 controls, 15 depressed and 15 anxious mothers) were asked to rate a number of infant facial expressions, ranging from very positive to very negative. Each face was shown twice, for a short and for a longer period of time, in random order. Results revealed that mothers used more extreme ratings (i.e. more negative or more positive) when the infant faces were shown for a longer period of time. Mothers suffering from postnatal depression were more likely than controls to rate negative infant faces shown for a longer period more negatively. The differences were specific to depression rather than an effect of general postnatal psychopathology, as no differences were observed between anxious mothers and controls. There were no other significant differences in maternal ratings of infant faces shown for short periods, or of positive or neutral valence faces of either length. The finding that mothers with postnatal depression rate negative infant faces more negatively indicates that an appraisal bias might underlie some of the difficulties these mothers have in responding to their own infants' signals.

Relevance: 70.00%

Abstract:

The human mirror neuron system (hMNS) has been associated with various forms of social cognition and affective processing including vicarious experience. It has also been proposed that a faulty hMNS may underlie some of the deficits seen in the autism spectrum disorders (ASDs). In the present study we set out to investigate whether emotional facial expressions could modulate a putative EEG index of hMNS activation (mu suppression) and if so, would this differ according to the individual level of autistic traits [high versus low Autism Spectrum Quotient (AQ) score]. Participants were presented with 3 s films of actors opening and closing their hands (classic hMNS mu-suppression protocol) while simultaneously wearing happy, angry, or neutral expressions. Mu-suppression was measured in the alpha and low beta bands. The low AQ group displayed greater low beta event-related desynchronization (ERD) to both angry and neutral expressions. The high AQ group displayed greater low beta ERD to angry than to happy expressions. There was also significantly more low beta ERD to happy faces for the low than for the high AQ group. In conclusion, an interesting interaction between AQ group and emotional expression revealed that hMNS activation can be modulated by emotional facial expressions and that this is differentiated according to individual differences in the level of autistic traits. The EEG index of hMNS activation (mu suppression) seems to be a sensitive measure of the variability in facial processing in typically developing individuals with high and low self-reported traits of autism.
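
Mu suppression is conventionally quantified as event-related desynchronization: the log ratio of band power during the stimulus to band power at baseline, with negative values indicating suppression. A sketch under that convention follows; the 8-13 Hz alpha/mu band shown is a standard choice, not necessarily the study's exact bands or pipeline.

    import numpy as np
    from scipy.signal import welch

    def band_power(segment, fs, lo, hi):
        """Integrated Welch PSD over [lo, hi] Hz."""
        freqs, psd = welch(segment, fs=fs, nperseg=int(fs))  # ~1 s windows
        mask = (freqs >= lo) & (freqs <= hi)
        return np.trapz(psd[mask], freqs[mask])

    def mu_suppression(stim_seg, base_seg, fs, band=(8.0, 13.0)):
        """Log power ratio; values < 0 indicate desynchronization (ERD).
        Pass band=(13.0, 20.0) or similar for a low-beta index."""
        return np.log(band_power(stim_seg, fs, *band) /
                      band_power(base_seg, fs, *band))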

Relevance: 70.00%

Abstract:

Facial expression recognition was investigated in 20 males with high-functioning autism (HFA) or Asperger syndrome (AS), compared to typically developing individuals matched for chronological age (TD CA group) and verbal and non-verbal ability (TD V/NV group). This was the first study to employ a visual search, “face in the crowd” paradigm with a HFA/AS group, which explored responses to numerous facial expressions using real-face stimuli. Results showed slower response times for processing fear, anger and sad expressions in the HFA/AS group, relative to the TD CA group, but not the TD V/NV group. Responses to happy, disgust and surprise expressions showed no group differences. Results are discussed with reference to the amygdala theory of autism.

Relevance: 70.00%

Abstract:

Objective. Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon was commonly seen in visual attention-based brain–computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or presenting images) which were adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design a new pattern which could reduce adjacent interference, annoyance and fatigue, and evoke ERPs as good as those observed during the face pattern. Approach. Positive facial expressions could be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, a facial expression change pattern between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions, a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user-supplied subjective measures. Main results. The results showed that interferences from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be reduced significantly (p < 0.05) by using the facial expression change patterns in comparison with the face pattern. The offline results show that the classification accuracy of the facial expression change pattern was significantly better than that of the shuffled pattern (p < 0.05) and the face pattern (p < 0.05). Significance. The facial expression change pattern presented in this paper reduced interference from adjacent stimuli and decreased the fatigue and annoyance experienced by BCI users significantly (p < 0.05) compared to the face pattern.
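
The information transfer rate used to compare stimulus patterns is, in most P300 BCI studies, the Wolpaw formula: bits per selection derived from the number of classes and the classification accuracy, scaled by the selection rate. The sketch below assumes that standard definition; the paper may use a variant.

    import math

    def wolpaw_itr(n_classes, accuracy, selections_per_min):
        """Wolpaw bits-per-selection, scaled to bits per minute."""
        n, p = n_classes, accuracy
        if p <= 1.0 / n:
            return 0.0           # at or below chance: clamp to zero
        if p >= 1.0:
            bits = math.log2(n)  # perfect accuracy
        else:
            bits = (math.log2(n) + p * math.log2(p)
                    + (1 - p) * math.log2((1 - p) / (n - 1)))
        return bits * selections_per_min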

Relevance: 70.00%

Abstract:

Interferences from spatially adjacent non-target stimuli evoke ERPs during non-target sub-trials and lead to false positives. This phenomenon is commonly seen in visual attention-based BCIs and affects the performance of the BCI system. Although users tried to focus on the target stimulus, they still could not help being affected by conspicuous changes of the stimuli (flashes or presented images) adjacent to the target stimulus. In view of this, the aim of this study was to reduce the adjacent interference using a new stimulus presentation pattern based on facial expression changes. Positive facial expressions can be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, two conditions (Pattern_1, Pattern_2) were compared in terms of objective measures such as classification accuracy and information transfer rate, as well as subjective measures. Pattern_1 was a “flash-only” pattern and Pattern_2 was a facial expression change of a dummy face. In the facial expression change pattern, the background is a positive facial expression and the stimulus is a negative facial expression. The results showed that interference from adjacent stimuli could be reduced significantly (p < 0.05) by using the facial expression change pattern. The online performance of the BCI system using the facial expression change pattern was significantly better than that using the “flash-only” pattern in terms of classification accuracy (p < 0.01), bit rate (p < 0.01), and practical bit rate (p < 0.01). Subjects reported that annoyance and fatigue were significantly decreased (p < 0.05) using the new stimulus presentation pattern presented in this paper.
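
The per-subject paired comparisons reported above (accuracy, bit rate, annoyance) call for a paired test; with small BCI samples a Wilcoxon signed-rank test is a common choice. The abstract does not name the test used, so this is only an illustration of how such a comparison is run.

    from scipy.stats import wilcoxon

    def compare_patterns(metric_pattern1, metric_pattern2):
        """Paired comparison of one value per subject under each pattern
        (same subject order in both arrays). Returns the p-value."""
        stat, p = wilcoxon(metric_pattern1, metric_pattern2)
        return p  # e.g. significant at p < 0.05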

Relevance: 70.00%

Abstract:

Background: Some studies have shown that a conventional visual brain-computer interface (BCI) based on overt attention cannot be used effectively when eye movement control is not possible. To solve this problem, a novel visual BCI based on covert attention and feature attention has been proposed, called the gaze-independent BCI. Color and shape differences between stimuli and backgrounds have generally been used in gaze-independent BCIs. Recently, a new paradigm based on facial expression changes was presented and obtained high performance. However, some facial expressions were so similar that users could not tell them apart, especially when they were presented at the same position in a rapid serial visual presentation (RSVP) paradigm. Consequently, the performance of the BCI is reduced. New Method: In this paper, we combined facial expressions and colors to optimize the stimulus presentation in the gaze-independent BCI. This optimized paradigm was called the colored dummy face pattern. It is suggested that different colors and facial expressions could help users to locate the target and evoke larger event-related potentials (ERPs). In order to evaluate the performance of this new paradigm, two other paradigms were presented, called the gray dummy face pattern and the colored ball pattern. Comparison with Existing Method(s): The key question determining the value of the colored dummy face stimuli in BCI systems was whether they could obtain higher performance than gray face or colored ball stimuli. Ten healthy participants (seven male, aged 21–26 years, mean 24.5 ± 1.25) participated in our experiment. Online and offline results of four different paradigms were obtained and comparatively analyzed. Results: The results showed that the colored dummy face pattern evoked higher P300 and N400 ERP amplitudes compared with the gray dummy face pattern and the colored ball pattern. Online results showed that the colored dummy face pattern had a significant advantage in terms of classification accuracy (p < 0.05) and information transfer rate (p < 0.05) compared to the other two patterns. Conclusions: The stimuli used in the colored dummy face paradigm combined color and facial expressions. This gave a significant advantage in terms of the evoked P300 and N400 amplitudes and resulted in high classification accuracies and information transfer rates compared with the colored ball and gray dummy face stimuli.
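
The P300/N400 amplitude comparison can be sketched as standard epoch averaging: baseline-correct stimulus-locked epochs, average them, and read out the mean voltage in a component window. The baseline and windows below are conventional choices, not the study's stated parameters.

    import numpy as np

    def erp_amplitude(eeg, onsets, fs, window, baseline=(-0.2, 0.0)):
        """Mean amplitude (in `window`, seconds post-stimulus) of the
        average evoked response for one channel. `onsets` are sample
        indices far enough from the recording edges."""
        pre = int(-baseline[0] * fs)            # samples before onset
        post = int(window[1] * fs) + 1          # samples after onset
        epochs = np.stack([eeg[o - pre:o + post] for o in onsets])
        epochs = epochs - epochs[:, :pre].mean(axis=1, keepdims=True)
        evoked = epochs.mean(axis=0)
        i0 = int((window[0] - baseline[0]) * fs)
        i1 = int((window[1] - baseline[0]) * fs)
        return evoked[i0:i1 + 1].mean()

    # e.g. p300 = erp_amplitude(pz, onsets, fs=250, window=(0.25, 0.45))
    #      n400 = erp_amplitude(pz, onsets, fs=250, window=(0.35, 0.60))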

Relevance: 70.00%

Abstract:

Sign language animations can improve the accessibility of information and services for people who are deaf and have low literacy skills in spoken/written languages. Because a sign language differs from the surrounding spoken/written language in word order, syntax, and lexicon, many deaf people find it difficult to comprehend text on a computer screen or captions on a television. Animated characters performing sign language in a comprehensible way could make this information accessible. Facial expressions and other non-manual components play an important role in the naturalness and understandability of these animations, and their coordination with the manual signs is crucial for the interpretation of the signed message. Software that advances the support of facial expressions in the generation of sign language animation could make this technology more acceptable to deaf people. In this survey, we discuss the challenges in facial expression synthesis and we compare and critique the state-of-the-art projects on generating facial expressions in sign language animations. Beginning with an overview of facial expression linguistics, sign language animation technologies, and background on animating facial expressions, we then discuss the search strategy and criteria used to select the five projects that are the primary focus of this survey. The survey then introduces the work of the five projects under consideration and compares their contributions in terms of the sign language supported, the categories of facial expressions investigated, the focus range in animation generation, the use of annotated corpora, the input data or hypotheses behind each approach, and other factors. Strengths and drawbacks of the individual projects are identified along these dimensions. The survey concludes with our current research focus in this area and future prospects.

Relevance: 70.00%

Abstract:

People tend to automatically mimic the facial expressions of others. While clear evidence exists for the effect of non-verbal behavior (emotional faces) on automatic facial mimicry, little is known about the role of verbal behavior (emotion language) in triggering such effects. And whereas it is well established that political affiliation modulates facial mimicry, no evidence exists on whether this modulation also operates through verbal means. This research addressed the role of verbal behavior in triggering automatic facial effects depending on whether verbal stimuli are attributed to leaders of different political parties. Study 1 investigated the role of interpersonal verbs, referring to positive and negative emotion expressions and encoding them at different levels of abstraction, in triggering corresponding facial muscle activation in a reader. Study 2 examined the role of verbs expressing positive and negative emotional behaviors of political leaders in modulating automatic facial effects depending on the matched or mismatched political affiliation of participants and left- and right-wing politicians. Study 3 examined whether verbs expressing happiness displays of ingroup politicians induce a more sincere (Duchenne) smile pattern among readers of the same political affiliation relative to happiness expressions of outgroup politicians. Results showed that verbs encoding facial actions at different levels of abstraction elicited differential facial muscle activity (Study 1). Furthermore, political affiliation significantly modulated the facial activation triggered by emotion verbs, as participants showed more congruent and enhanced facial activity towards ingroup politicians' smiles and frowns compared to those of outgroup politicians (Study 2). Participants responded facially with a more sincere smile pattern to verbs expressing smiles of ingroup compared to outgroup politicians (Study 3). Altogether, the results show that the role of political affiliation in modulating automatic facial effects also operates through verbal channels and is revealed at a fine-grained level by quantitative and qualitative differences in readers' automatic facial reactions.