968 results for Facial expression.


Relevance: 40.00%
Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 30.00%
Abstract:

Faces are complex patterns that often differ in only subtle ways. Face recognition algorithms have difficulty coping with differences in lighting, cameras, pose, expression, etc. We propose a novel approach to facial recognition based on a new feature extraction method called fractal image-set encoding. This feature extraction method is a specialized fractal image coding technique that makes fractal codes more suitable for object and face recognition. A fractal code of a gray-scale image can be divided into two parts: geometrical parameters and luminance parameters. We show that fractal codes for an image are not unique and that the set of fractal parameters can be changed without significant change in the quality of the reconstructed image. Fractal image-set coding keeps the geometrical parameters the same for all images in the database. Differences between images are captured in the non-geometrical, or luminance, parameters, which are faster to compute. Results on a subset of the XM2VTS database are presented.
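The luminance-parameter idea can be made concrete with a small sketch (plain Python with hypothetical helper names; the actual coder described above works on 2-D gray-scale blocks with contractive domain-to-range mappings). With the geometry held fixed (the same domain block assigned to each range block for every image), the contrast `s` and brightness `o` of each block pair follow from a least-squares fit, and the concatenated `(s, o)` values form the per-image feature vector:

```python
def fit_luminance(domain, range_block):
    """Least-squares fit: range_block ~ s * domain + o over corresponding pixels."""
    n = len(domain)
    md = sum(domain) / n
    mr = sum(range_block) / n
    var = sum((d - md) ** 2 for d in domain)
    cov = sum((d - md) * (r - mr) for d, r in zip(domain, range_block))
    s = cov / var if var else 0.0
    return s, mr - s * md

def fractal_features(blocks, geometry):
    """geometry: shared (range_index, domain_index) pairs, identical for every
    image in the database; only the luminance pair (s, o) varies per image."""
    feats = []
    for ri, di in geometry:
        s, o = fit_luminance(blocks[di], blocks[ri])
        feats.extend([s, o])
    return feats
```

Two images that share the geometry can then be compared by a simple distance between their luminance feature vectors, which is what makes the representation cheap to compute per image.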

Relevance: 30.00%
Abstract:

In a clinical setting, pain is reported either through patient self-report or via an observer. Such measures are problematic as they are (1) subjective and (2) provide no specific timing information. Coding pain as a series of facial action units (AUs) can avoid these issues, as it yields an objective measure of pain on a frame-by-frame basis. Using video data from patients with shoulder injuries, we describe an active appearance model (AAM)-based system that can automatically detect the frames of video in which a patient is in pain. This pain data set highlights the many challenges associated with spontaneous emotion detection, particularly expression and head movement due to the patient's reaction to pain. We show that the AAM can deal with these movements and achieves significant improvements in both AU and pain detection performance compared to current state-of-the-art approaches, which utilize similarity-normalized appearance features only.
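To illustrate the frame-by-frame idea (this is not the AAM pipeline itself), a per-frame pain score can be derived from AU intensities in the spirit of the Prkachin and Solomon pain intensity metric commonly used with shoulder-pain video; the detection threshold below is an assumption for the sketch:

```python
def pain_score(au):
    """PSPI-style score from a dict of AU intensities (0-5; AU43, eye closure, is 0/1):
    pain = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43."""
    return (au.get(4, 0)
            + max(au.get(6, 0), au.get(7, 0))
            + max(au.get(9, 0), au.get(10, 0))
            + au.get(43, 0))

def pain_frames(frames, threshold=1):
    """Indices of video frames whose score reaches the (assumed) threshold,
    giving the objective, timed pain signal the abstract describes."""
    return [i for i, au in enumerate(frames) if pain_score(au) >= threshold]
```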

Relevance: 30.00%
Abstract:

Automated feature extraction and correspondence determination is an extremely important problem in the face recognition community, as it often forms the foundation of the normalisation and database construction phases of many recognition and verification systems. This paper presents a completely automatic feature extraction system based upon a modified volume descriptor. These features form a stable descriptor for faces and are utilised in a reversible-jump Markov chain Monte Carlo correspondence algorithm to automatically determine the correspondences that exist between faces. The developed system is invariant to changes in pose and occlusion, and results indicate that it is also robust to the minor face deformations that may accompany variations in expression.
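A much-simplified sketch of the stochastic correspondence search: plain Metropolis-Hastings over permutations rather than the reversible-jump sampler above (which can also change the number of matched features). An assignment is scored by summed descriptor distances, and swap moves are accepted with the Metropolis rule:

```python
import math
import random

def cost(assign, dist):
    """Total descriptor distance of matching feature i to assign[i]."""
    return sum(dist[i][j] for i, j in enumerate(assign))

def mh_correspondence(dist, iters=2000, temp=0.1, seed=0):
    """Metropolis-Hastings over feature-to-feature assignments (toy version)."""
    rng = random.Random(seed)
    n = len(dist)
    assign = list(range(n))
    c = best_c = cost(assign, dist)
    best = assign[:]
    for _ in range(iters):
        i, j = rng.randrange(n), rng.randrange(n)        # propose a swap
        assign[i], assign[j] = assign[j], assign[i]
        c2 = cost(assign, dist)
        if c2 <= c or rng.random() < math.exp((c - c2) / temp):
            c = c2                                        # accept the move
            if c < best_c:
                best_c, best = c, assign[:]
        else:
            assign[i], assign[j] = assign[j], assign[i]   # reject: undo swap
    return best, best_c
```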

Relevance: 30.00%
Abstract:

Facial cues of racial outgroup or anger mediate fear learning that is resistant to extinction. Whether this resistance is potentiated if fear is conditioned to angry, other-race faces has not been established. Two groups of Caucasian participants were conditioned with two happy and two angry face conditional stimuli (CSs). During acquisition, one happy and one angry face were paired with an aversive unconditional stimulus, whereas the second happy and angry faces were presented alone. CS face race (Caucasian, African American) was varied between groups. During habituation, electrodermal responses were larger to angry faces regardless of race and declined less to other-race faces. Extinction was immediate for Caucasian happy faces, delayed for angry faces regardless of race, and slowest for happy racial outgroup faces. Combining the facial cues of other race and anger does not enhance resistance to extinction of fear.

Relevance: 30.00%
Abstract:

We employed a novel cueing paradigm to assess whether dynamically versus statically presented facial expressions differentially engage predictive visual mechanisms. Participants were presented with a cueing stimulus that was either a static depiction of a low-intensity expressed emotion or a dynamic sequence evolving from a neutral expression to the low-intensity expressed emotion. Following this cue and a backwards mask, participants were presented with a probe face that displayed either the same emotion (congruent) or a different emotion (incongruent) with respect to that displayed by the cue, although expressed at a high intensity. The probe face had either the same or a different identity from the cued face. The participants' task was to indicate whether or not the probe face showed the same emotion as the cue. Dynamic cues and same-identity cues both led to a greater tendency towards congruent responding, although these factors did not interact. Facial motion also led to faster responding when the probe face was emotionally congruent with the cue. We interpret these results as indicating that dynamic facial displays preferentially invoke predictive visual mechanisms, and suggest that motoric simulation may provide an important basis for the generation of predictions in the visual system.

Relevance: 30.00%
Abstract:

Because moving depictions of facial emotion have greater ecological validity than their static counterparts, it has been suggested that still photographs may not engage 'authentic' mechanisms used to recognize facial expressions in everyday life. To date, however, no neuroimaging studies have adequately addressed the question of whether the processing of static and dynamic expressions relies upon different brain substrates. To address this, we performed a functional magnetic resonance imaging (fMRI) experiment in which participants made emotional expression discrimination and sex discrimination judgements on static and moving face images. Compared to sex discrimination, emotion discrimination was associated with widespread increased activation in regions of occipito-temporal, parietal and frontal cortex. These regions were activated both by moving and by static emotional stimuli, indicating a general role in the interpretation of emotion. However, portions of the inferior frontal gyri and the supplementary/pre-supplementary motor area showed a task-by-motion interaction; these regions were most active during emotion judgements on static faces. Our results demonstrate a common neural substrate for recognizing static and moving facial expressions, but suggest a role for the inferior frontal gyrus in supporting simulation processes that are invoked more strongly to disambiguate static emotional cues.

Relevance: 30.00%
Abstract:

Age estimation from facial images is receiving increasing attention for applications such as age-based access control and age-adaptive targeted marketing. Since even humans can be misled by the complex biological processes involved, finding a robust method remains a research challenge today. In this paper, we propose a new framework for the integration of Active Appearance Models (AAM), Local Binary Patterns (LBP), Gabor wavelets (GW) and Local Phase Quantization (LPQ) in order to obtain a highly discriminative feature representation able to model shape, appearance, wrinkles and skin spots. In addition, this paper proposes a novel flexible hierarchical age estimation approach consisting of a multi-class Support Vector Machine (SVM) to classify a subject into an age group, followed by Support Vector Regression (SVR) to estimate a specific age. Errors that may occur in the classification step, caused by the hard boundaries between age classes, are compensated for in the specific age estimation by a flexible overlapping of the age ranges. The performance of the proposed approach was evaluated on the FG-NET Aging and MORPH Album 2 datasets, where mean absolute errors (MAE) of 4.50 and 5.86 years were achieved, respectively. The robustness of the proposed approach was also evaluated on a merged set of both datasets, with a MAE of 5.20 years. Furthermore, we compared human age estimates with those of the proposed approach, showing that the machine outperforms humans. The proposed approach is competitive with the current state of the art and gains additional robustness to blur, lighting and expression variance from the local phase features.
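The classify-then-regress structure with overlapping ranges can be sketched with toy stand-ins: a nearest-centroid classifier and a one-dimensional least-squares line instead of the multi-class SVM and SVR described above, with a single scalar feature purely for illustration. The key point is that each group's regressor is trained on an age range widened by an overlap margin, so a subject misclassified near a group boundary still receives a sensible estimate:

```python
def linfit(xs, ys):
    """Ordinary least-squares line y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = cov / var if var else 0.0
    return a, my - a * mx

class HierarchicalAgeEstimator:
    def __init__(self, groups, overlap=5.0):
        self.groups = groups        # [(lo, hi), ...] age ranges per group
        self.overlap = overlap      # widening that absorbs boundary errors

    def fit(self, feats, ages):
        self.centroids, self.models = [], []
        for lo, hi in self.groups:
            members = [f for f, a in zip(feats, ages) if lo <= a <= hi]
            self.centroids.append(sum(members) / len(members))
            # the regressor sees samples from the *widened* age range
            ext = [(f, a) for f, a in zip(feats, ages)
                   if lo - self.overlap <= a <= hi + self.overlap]
            self.models.append(linfit([f for f, _ in ext], [a for _, a in ext]))
        return self

    def predict(self, f):
        # step 1: pick an age group; step 2: regress a specific age within it
        g = min(range(len(self.groups)), key=lambda i: abs(f - self.centroids[i]))
        a, b = self.models[g]
        return a * f + b
```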

Relevance: 30.00%
Abstract:

Microneurovascular free muscle transfer with cross-over nerve grafts in facial reanimation.
Loss of facial symmetry and mimetic function, as seen in facial paralysis, has an enormous impact on the psychosocial condition of patients. Patients with severe long-term facial paralysis are often reanimated with a two-stage procedure combining cross-facial nerve grafting and, 6 to 8 months later, microneurovascular (MNV) muscle transfer. In this thesis, we recorded the long-term results of MNV surgery in facial paralysis and examined possible contributing factors to the final functional and aesthetic outcome after this procedure. Twenty-seven of the forty patients operated on were interviewed, and the functional outcome was graded. Magnetic resonance imaging (MRI) of the MNV muscle flaps was performed; nerve graft samples (n=37) were obtained in the second stage of the operation, and muscle biopsies (n=18) were taken during secondary operations. The structure of the MNV muscles and nerve grafts was evaluated using histological and immunohistochemical methods (Ki-67, anti-myosin fast, S-100, NF-200, CD-31, p75NGFR, VEGF, Flt-1, Flk-1). Statistical analysis was performed. We found that almost two-thirds of the patients achieved a good result in facial reanimation. The longer the follow-up time after muscle transfer, the weaker the muscle function. A majority of the patients (78%) reported improved quality of life after surgery. In the MRI study, the free MNV flaps were significantly smaller than their original size. A correlation was found between good functional outcome and normal muscle structure on MRI. In the muscle biopsies, the mean muscle fiber diameter was diminished to 40% of control values. Proliferative activity of satellite cells was seen in 60% of the samples and tended to decline with increasing follow-up time. All samples showed intramuscular innervation. Severe muscle atrophy correlated with prolonged intraoperative ischaemia. Good long-term functional outcome correlated with a dominance of fast fibers in the muscle grafts. In the nerve grafts, the mean number of viable axons amounted to 38% of that in control samples. The grafted nerves were characterized by fibrosis, and the regenerated axons were thinner than in control samples, although the nerves were well vascularized. A longer time between cross-facial nerve grafting and biopsy sampling correlated with a higher number of viable axons. p75 nerve growth factor receptor (p75NGFR) was expressed in every nerve graft sample; its expression was lower in older than in younger patients, and high expression was often seen with better function of the transplanted muscle. In the grafted nerves, vascular endothelial growth factor (VEGF) and its receptors were expressed in the nervous tissue. In conclusion, most of the patients achieved a good result in facial reanimation and were satisfied with the functional outcome. Mimetic function was poorer in patients with longer follow-up times. MRI can be used to evaluate the structure of microneurovascular muscle flaps. Regeneration of the muscle flaps was still ongoing many years after transplantation, and reinnervation was seen in all muscle samples. Grafted nerves were characterized by fibrosis and fewer, thinner axons compared to control nerves, although they were well vascularized. p75NGFR and VEGF were expressed in human nerve grafts with higher intensity than in control nerves, which is described here for the first time.

Relevance: 30.00%
Abstract:

Facial features play an important role in expressing grammatical information in signed languages, including American Sign Language (ASL). Gestures such as raising or furrowing the eyebrows are key indicators of constructions such as yes-no questions. Periodic head movements (nods and shakes) are also an essential part of the expression of syntactic information, such as negation (associated with a side-to-side headshake). Identification of these facial gestures is therefore essential to sign language recognition. One problem with the detection of such grammatical indicators is occlusion recovery: if the signer's hand blocks his or her eyebrows during production of a sign, it becomes difficult to track the eyebrows. We have developed a system to detect such grammatical markers in ASL that recovers promptly from occlusion. Our system detects and tracks evolving templates of facial features, which are based on an anthropometric face model, and interprets the geometric relationships of these templates to identify grammatical markers. It was tested on a variety of ASL sentences signed by several Deaf native signers, and it detected facial gestures used to express grammatical information, such as raised and furrowed eyebrows, as well as headshakes.
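The geometric-relationship step can be illustrated with a small sketch (the landmark names, the ratio, and the tolerance are assumptions for illustration, not the anthropometric model used above): eyebrow height is normalized by inter-ocular distance, so the test is invariant to face scale and distance from the camera:

```python
import math

def _dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def brow_ratio(lm):
    """lm: dict of (x, y) template centers; image y grows downward."""
    iod = _dist(lm['l_eye'], lm['r_eye'])           # inter-ocular distance
    lift = ((lm['l_eye'][1] - lm['l_brow'][1])
            + (lm['r_eye'][1] - lm['r_brow'][1])) / 2.0
    return lift / iod                                # scale-free eyebrow height

def classify_brows(lm, neutral_ratio, tol=0.1):
    """Compare against a per-signer neutral baseline with a tolerance band."""
    r = brow_ratio(lm)
    if r > neutral_ratio * (1 + tol):
        return 'raised'
    if r < neutral_ratio * (1 - tol):
        return 'furrowed'
    return 'neutral'
```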

Relevance: 30.00%
Abstract:

The purpose of this study was to examine the behavioural responses of infants to painful stimuli across different developmental ages. Eighty infants were included in this cross-sectional design, in four subsamples of 20 infants each: (1) premature infants between 32 and 34 weeks gestational age undergoing a heel-stick procedure; (2) full-term infants receiving an intramuscular vitamin K injection; (3) 2-month-old infants receiving a subcutaneous DPT immunisation injection; and (4) 4-month-old infants receiving a subcutaneous DPT immunisation injection. Audio and video recordings were made for 15 seconds from stimulus onset. Cry analysis was conducted on the first full expiratory cry by FFT, with time and frequency measures. Facial action was coded using the Neonatal Facial Coding System (NFCS). Results from multivariate analysis showed that premature infants differed from older infants, and full-term newborns differed from the others, but 2- and 4-month-olds were similar. The specific variables contributing to the significance were higher-pitched cries and more horizontal mouth stretch in the premature group, and more taut tongue in the full-term newborns. The results imply that the premature infant has the basis for communicating pain via facial actions, but that these are not well developed. The full-term newborn is better equipped to interact with caretakers and express distress through specific facial actions. The cries of the premature infant, however, have more of the characteristics that are arousing to the listener, which serve to alert the caregiver to the state of distress from pain.
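The pitch measure can be sketched as a dominant-frequency estimate over a short cry segment. This is a naive discrete Fourier transform for self-containment; the actual analysis above uses an FFT and reports additional time and frequency measures:

```python
import math

def dominant_freq(signal, fs):
    """Frequency (Hz) of the strongest DFT bin below the Nyquist rate."""
    n = len(signal)
    best_k, best_p = 1, -1.0
    for k in range(1, n // 2):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        p = re * re + im * im          # spectral power at bin k
        if p > best_p:
            best_k, best_p = k, p
    return best_k * fs / n             # bin index -> frequency in Hz
```

A higher-pitched cry, such as those reported here for the premature group, would simply show up as a larger returned value.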

Relevance: 30.00%
Abstract:

Question: This thesis comprises two articles on the study of emotional facial expressions. The first article covers the development of a new bank of emotional stimuli, while the second uses this bank to study the effect of trait anxiety on the recognition of static expressions. Methods: A total of 1088 emotional clips (34 actors x 8 emotions x 4 exemplars) were spatially and temporally aligned so that each actor's eyes and nose occupy the same location in every video. All videos last 500 ms and contain the apex of the expression. The bank of static expressions was created from the last frame of the clips. The stimuli underwent a rigorous validation process. In the second study, the static expressions were used together with the Bubbles method to study emotion recognition in anxious participants. Results: In the first study, the best stimuli were selected [2 (static & dynamic) x 8 (expressions) x 10 (actors)], forming the STOIC expression bank. The second study shows that individuals with trait anxiety preferentially use the low spatial frequencies of the mouth region of the face and recognize fear expressions better. Discussion: The STOIC facial expression bank has unique characteristics that set it apart from others: it can be downloaded free of charge, it contains natural videos, and all stimuli have been aligned, making it a tool of choice for the scientific community and for clinicians. The static STOIC stimuli were used to take a first step in research on emotion perception in individuals with trait anxiety.
We believe that the use of low spatial frequencies underlies the better performance of these individuals, and that this type of visual information disambiguates fear and surprise expressions. We also think that it is neuroticism (the overlap between anxiety and depression), rather than anxiety itself, that is associated with better recognition of fearful facial expressions. The use of instruments measuring this construct should be considered in future studies.

Relevance: 30.00%
Abstract:

Humans communicate through various channels: words, voice, body gestures, emotions, etc. A computer must therefore perceive these various communication channels to interact intelligently with humans, for example by using microphones and webcams. In this thesis, we are interested in determining human emotions from images or videos of faces, in order to then use this information in various application domains. The thesis begins with a brief introduction to machine learning, focusing on the models and algorithms we used, such as multilayer perceptrons, convolutional neural networks and autoencoders. It then presents the results of applying these models to several facial expression and emotion datasets. We focus on the study of different types of autoencoders (denoising autoencoders, contractive autoencoders, etc.) in order to reveal some of their limitations, such as possible co-adaptation between filters or an overly smooth spectral curve, and we study new ideas to address these problems. We also propose a new approach to overcome a limitation of autoencoders traditionally trained in a purely unsupervised fashion, that is, without using any knowledge of the task we ultimately want to solve (such as predicting class labels), by developing a new semi-supervised training criterion that exploits a small number of labelled examples in combination with a large quantity of unlabelled data, so as to learn a representation suited to the classification task and obtain better classification performance.
Finally, we describe the general operation of our emotion detection system and propose new ideas that could lead to future work.
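The denoising-autoencoder idea discussed above can be reduced to a minimal sketch: a tied-weight linear autoencoder with a single hidden unit, trained by stochastic gradient descent to reconstruct the clean input from a noise-corrupted copy (plain Python on a 2-D toy problem; the thesis works with much larger networks and image data):

```python
import random

def train_dae(data, lr=0.01, epochs=200, noise=0.1, seed=0):
    """Tied-weight linear autoencoder, one hidden unit: x -> h = w.x -> w*h."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.5, 0.5) for _ in range(2)]
    for _ in range(epochs):
        for x in data:
            xt = [v + rng.gauss(0, noise) for v in x]   # corrupt the input
            h = w[0] * xt[0] + w[1] * xt[1]             # encode corrupted input
            e = [w[0] * h - x[0], w[1] * h - x[1]]      # error vs. the CLEAN input
            ew = e[0] * w[0] + e[1] * w[1]
            g = [2 * (e[0] * h + xt[0] * ew),           # gradient of squared error
                 2 * (e[1] * h + xt[1] * ew)]
            w = [w[0] - lr * g[0], w[1] - lr * g[1]]
    return w

def recon_error(w, data):
    """Mean squared reconstruction error on clean data."""
    tot = 0.0
    for x in data:
        h = w[0] * x[0] + w[1] * x[1]
        tot += (w[0] * h - x[0]) ** 2 + (w[1] * h - x[1]) ** 2
    return tot / len(data)
```

The semi-supervised criterion described above would, for the few labelled examples, add a classification loss term on the hidden representation `h` to this reconstruction objective.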

Relevance: 30.00%
Abstract:

The ecological validity of static and intense facial expressions in emotion recognition has been questioned. Recent studies have recommended the use of facial stimuli more compatible with the natural conditions of social interaction, which involve motion and variations in emotional intensity. In this study, we compared the recognition of static and dynamic facial expressions of happiness, fear, anger and sadness, presented at four emotional intensities (25%, 50%, 75% and 100%). Twenty volunteers (9 women and 11 men), aged between 19 and 31 years, took part in the study. The experiment consisted of two sessions in which participants had to identify the emotion of static (photographs) and dynamic (videos) displays of facial expressions on a computer screen. Mean accuracy was subjected to a repeated-measures ANOVA with the model 2 (sex) x [2 (condition) x 4 (expression) x 4 (intensity)]. We observed an advantage for the recognition of dynamic expressions of happiness and fear compared to the static stimuli (p < .05). Analysis of the interactions showed that expressions at 25% intensity were better recognized in the dynamic condition (p < .05). The addition of motion improved recognition especially in male participants (p < .05). We conclude that the effect of motion varies as a function of the type of emotion, the intensity of the expression and the sex of the participant. These results support the hypothesis that dynamic stimuli have greater ecological validity and are more appropriate for research on emotions.
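The within-subject comparisons behind the reported dynamic-over-static advantage can be sketched with a paired t statistic (illustrative accuracies, not the study's data; the full analysis is the repeated-measures ANOVA described above):

```python
import math

def paired_t(a, b):
    """Paired t statistic for within-subject accuracies a vs. b."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    m = sum(d) / n
    var = sum((x - m) ** 2 for x in d) / (n - 1)   # sample variance of differences
    return m / math.sqrt(var / n)

# hypothetical per-participant accuracies for one cell (e.g. 25 % intensity)
dynamic = [0.80, 0.70, 0.90, 0.75]
static = [0.60, 0.65, 0.70, 0.60]
t = paired_t(dynamic, static)
```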