940 results for "Padrão facial hiperdivergente" (hyperdivergent facial pattern)


Abstract:

Robust facial expression recognition (FER) under occluded face conditions is challenging. It requires robust feature-extraction algorithms and investigation of how different types of occlusion affect recognition performance. Previous FER studies in this area have been limited: they have focused on recovery strategies for lost local texture information, tested only a few types of occlusion, and predominantly used a matched train-test strategy. This paper proposes a robust approach that employs a Monte Carlo algorithm to extract a set of Gabor-based part-face templates from gallery images and converts these templates into template match distance features. The resulting feature vectors are robust to occlusion because occluded parts are covered by some, but not all, of the random templates. The method is evaluated using facial images with occluded regions around the eyes and the mouth, randomly placed occlusion patches of different sizes, and near-realistic occlusion of the eyes by clear and solid glasses. Both matched and mismatched train-test strategies are adopted to analyze the effects of such occlusion. Overall recognition performance and the performance for each facial expression are investigated. Experimental results on the Cohn-Kanade and JAFFE databases demonstrate the high robustness and fast processing speed of our approach, and provide useful insight into the effects of occlusion on FER. Results on parameter sensitivity demonstrate a degree of robustness to changes in the orientation and scale of the Gabor filters, the size of the templates, and the occlusion ratio. Performance comparisons with previous approaches show that the proposed method is more robust to occlusion, with smaller reductions in accuracy under occlusion of the eyes or mouth.
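The core idea, random part-face templates turned into match-distance features, can be sketched in a few lines. The following Python is a minimal illustration assuming OpenCV and NumPy; the filter parameters, template sizes, and the use of a single Gabor orientation are illustrative choices, not the paper's actual configuration.

```python
import numpy as np
import cv2

rng = np.random.default_rng(0)

def gabor_response(img, ksize=21, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5):
    """Absolute response of one real Gabor kernel (illustrative parameters)."""
    kern = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma)
    return np.abs(cv2.filter2D(img.astype(np.float32), cv2.CV_32F, kern))

def sample_templates(gallery_img, n_templates=100, tmin=8, tmax=24):
    """Monte Carlo sampling of part-face templates from a gallery Gabor map."""
    g = gabor_response(gallery_img)
    h, w = g.shape
    templates = []
    for _ in range(n_templates):
        th, tw = rng.integers(tmin, tmax, size=2)
        y, x = rng.integers(0, h - th), rng.integers(0, w - tw)
        templates.append(g[y:y + th, x:x + tw].copy())
    return templates

def match_distance_features(probe_img, templates):
    """One feature per template: the best (minimum) match distance over the
    probe. An occluded region corrupts only the templates that land on it,
    which is what makes the feature vector robust to partial occlusion."""
    p = gabor_response(probe_img)
    return np.array([cv2.matchTemplate(p, t, cv2.TM_SQDIFF_NORMED).min()
                     for t in templates])
```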

Abstract:

Facial expression recognition (FER) has developed dramatically in recent years, thanks to advancements in related fields, especially machine learning, image processing and human recognition. Accordingly, the impact and potential usage of automatic FER have been growing in a wide range of applications, including human-computer interaction, robot control and driver state surveillance. However, to date, robust recognition of facial expressions from images and videos remains a challenging task because the useful emotional features are difficult to extract accurately. These features are often represented in different forms, such as static, dynamic, point-based geometric or region-based appearance. Facial movement features, which include feature position and shape changes, are generally caused by the movements of facial elements and muscles during the course of emotional expression. The facial elements, especially key elements, constantly change their positions when subjects are expressing emotions. As a consequence, the same feature usually occupies different positions in different images. In some cases, the shape of the feature may also be distorted by subtle facial muscle movements. Therefore, for any feature representing a certain emotion, the geometric position and appearance-based shape normally change from one image to another in image databases, as well as in videos. Such movement features represent a rich pool of both static and dynamic characteristics of expressions, which play a critical role in FER. The vast majority of past work on FER does not take the dynamics of facial expressions into account. Some efforts have been made to capture and utilize facial movement features, almost all of them video-based. These efforts adopt either geometric features of tracked facial points, appearance differences between holistic facial regions in consecutive frames, or texture and motion changes in local facial regions. Although these approaches have achieved promising results, they often require accurate location and tracking of facial points, which remains problematic.
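As a concrete illustration of the geometric side of such movement features, the short Python sketch below converts tracked landmark positions in consecutive frames into a displacement-based feature vector. The landmark array layout and the eye-corner indices are assumptions for the example, not tied to any particular tracker.

```python
import numpy as np

def movement_features(landmarks):
    """Geometric movement features from tracked facial points.

    landmarks: array of shape (n_frames, n_points, 2) holding (x, y)
    positions of facial key points over a clip (assumed layout).
    Returns frame-to-frame displacements of every point, scale-normalized
    and flattened into one dynamic feature vector.
    """
    deltas = np.diff(landmarks, axis=0)  # (n_frames - 1, n_points, 2)
    # Normalize by the inter-ocular distance in the first frame so the
    # features are less sensitive to face size (indices 36 and 45 are
    # assumed outer eye corners, as in common 68-point annotation schemes)
    iod = np.linalg.norm(landmarks[0, 36] - landmarks[0, 45])
    return (deltas / iod).reshape(-1)
```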

Abstract:

The proliferation of news reports published on online websites, and the sharing of news among social media users, necessitate effective techniques for analysing the image, text and video data related to news topics. This paper presents the first study to classify affective facial images on emerging news topics. The proposed system dynamically monitors and selects the current hot (of great interest) news topics with strong affective interestingness, using textual keywords in news articles and social media discussions. Images from the selected hot topics are extracted and classified into three emotion categories, positive, neutral and negative, based on the facial expressions of subjects in the images. Performance evaluations on two facial image datasets collected from real-world sources demonstrate the applicability and effectiveness of the proposed system for affective classification of facial images in news reports. Facial expression shows high consistency with the affective textual content of news reports for positive emotion, while only low correlation is observed for neutral and negative emotions. The system can be used directly in applications such as assisting editors in choosing photos with the proper affective semantics for a given topic during news report preparation.
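The abstract outlines a two-stage pipeline: select affectively "hot" topics from text, then classify faces in the topics' images. The sketch below is a loose, hypothetical rendering of that flow; the scoring formula, data layout, and classifier interface are all placeholders, since the paper does not specify them.

```python
from collections import Counter

def hot_topics(articles, affect_lexicon, top_k=5):
    """Rank topics by how often they appear, weighted by the share of
    affective keywords in their text (hypothetical scoring).

    articles: iterable of (topic, tokenized_text) pairs (assumed format).
    """
    freq, affect = Counter(), Counter()
    for topic, tokens in articles:
        freq[topic] += 1
        affect[topic] += sum(tok in affect_lexicon for tok in tokens)
    score = {t: freq[t] * (1 + affect[t] / freq[t]) for t in freq}
    return sorted(score, key=score.get, reverse=True)[:top_k]

def classify_face(face_img, model):
    """Map a detected face crop to one of the three affect classes.
    `model` stands in for any trained expression classifier."""
    label = model.predict(face_img)  # hypothetical classifier API
    return {0: "negative", 1: "neutral", 2: "positive"}[label]
```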

Abstract:

We employed a novel cueing paradigm to assess whether dynamically versus statically presented facial expressions differentially engage predictive visual mechanisms. Participants were presented with a cueing stimulus that was either a static depiction of a low-intensity expressed emotion, or a dynamic sequence evolving from a neutral expression to the same low-intensity emotion. Following this cue and a backward mask, participants were presented with a probe face displaying, at high intensity, either the same emotion as the cue (congruent) or a different emotion (incongruent). The probe face had either the same or a different identity from the cued face. The participants' task was to indicate whether or not the probe face showed the same emotion as the cue. Dynamic cues and same-identity cues both led to a greater tendency towards congruent responding, although these factors did not interact. Facial motion also led to faster responding when the probe face was emotionally congruent with the cue. We interpret these results as indicating that dynamic facial displays preferentially invoke predictive visual mechanisms, and suggest that motoric simulation may provide an important basis for the generation of predictions in the visual system.

Abstract:

Emotionally arousing events can distort our sense of time. We used a mixed block/event-related fMRI design to establish the neural basis for this effect. Nineteen participants judged whether angry, happy and neutral facial expressions that varied in duration (from 400 to 1,600 ms) were closer in duration to a short or a long anchor duration they had learnt previously. Time was overestimated for both angry and happy expressions compared with neutral expressions. For faces presented for 700 ms, facial emotion modulated activity in regions of the timing network (Wiener et al., NeuroImage 49(2):1728–1740, 2010), namely the right supplementary motor area (SMA) and the junction of the right inferior frontal gyrus and anterior insula (IFG/AI). Reaction times were slowest when faces were displayed for 700 ms, indicating increased decision-making difficulty. Taken together with existing electrophysiological evidence (Ng et al., Neuroscience, doi: 10.3389/fnint.2011.00077, 2011), the effects are consistent with the idea that facial emotion moderates temporal decision making and that the right SMA and right IFG/AI are key neural structures responsible for this effect.

Abstract:

Because moving depictions of facial emotion have greater ecological validity than their static counterparts, it has been suggested that still photographs may not engage the 'authentic' mechanisms used to recognize facial expressions in everyday life. To date, however, no neuroimaging study has adequately addressed the question of whether the processing of static and dynamic expressions relies upon different brain substrates. To address this, we performed a functional magnetic resonance imaging (fMRI) experiment in which participants made emotional expression discrimination and sex discrimination judgements on static and moving face images. Compared with sex discrimination, emotion discrimination was associated with widespread increased activation in regions of occipito-temporal, parietal and frontal cortex. These regions were activated by both moving and static emotional stimuli, indicating a general role in the interpretation of emotion. However, portions of the inferior frontal gyri and supplementary/pre-supplementary motor area showed a task-by-motion interaction, being most active during emotion judgements on static faces. Our results demonstrate a common neural substrate for recognizing static and moving facial expressions, but suggest a role for the inferior frontal gyrus in supporting simulation processes that are invoked more strongly to disambiguate static emotional cues.

Abstract:

Representation of facial expressions using continuous dimensions has been shown to be inherently more expressive and psychologically meaningful than categorized emotions, and has thus gained increasing attention in recent years. Many sub-problems have arisen in this new field that remain only partially understood; two of these are the comparison of the regression performance of different texture and geometric features, and the investigation of the correlations between the continuous dimensional axes and the basic categorized emotions. This paper presents empirical studies addressing these problems, and reports results from an evaluation of different methods for detecting spontaneous facial expressions within the arousal-valence (AV) dimensional space. The evaluation compares the performance of texture features (SIFT, Gabor, LBP) against geometric features (FAP-based distances), and the fusion of the two. It also compares the predictions of arousal and valence, obtained using the best fusion method, to the corresponding ground truths. Spatial distribution, shift, similarity, and correlation are considered for the six basic categorized emotions (i.e. anger, disgust, fear, happiness, sadness, surprise). Using the NVIE database, results show that the fusion of LBP and FAP features performs best. The results from the NVIE and FEEDTUM databases reveal novel findings about the correlations of the arousal and valence dimensions with each of the six basic emotion categories.
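A minimal sketch of the texture/geometric fusion being evaluated, assuming scikit-image for LBP and scikit-learn for the regressor. The landmark-pair distances merely stand in for FAP-based features, and none of the parameter choices come from the paper.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVR

def lbp_histogram(face_img, P=8, R=1):
    """Uniform LBP histogram as a texture descriptor for one face image."""
    codes = local_binary_pattern(face_img, P, R, method="uniform")
    n_bins = P + 2  # uniform LBP yields P + 2 distinct codes
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def geometric_distances(landmarks, pairs):
    """FAP-style geometry: distances between selected landmark pairs
    (the pair list is an assumption, not the actual FAP definitions)."""
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                     for i, j in pairs])

def fused_features(face_img, landmarks, pairs):
    """Feature-level fusion: concatenate texture and geometric vectors."""
    return np.concatenate([lbp_histogram(face_img),
                           geometric_distances(landmarks, pairs)])

# Dimensional prediction trains one regressor per axis, e.g.:
# arousal_model, valence_model = SVR(), SVR()
# arousal_model.fit(X_train, y_arousal)
# valence_model.fit(X_train, y_valence)
```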

Abstract:

Most developmental studies of emotional face processing to date have focused on infants and very young children. Additionally, studies that examine emotional face processing in older children do not distinguish development of emotion and identity face processing from more generic age-related cognitive improvement. In this study, we developed a paradigm that measures processing of facial expression in comparison to facial identity and complex visual stimuli. Three matching tasks (facial emotion matching, facial identity matching, and butterfly wing matching) were developed to include stimuli of a similar level of discriminability, and were equated for task difficulty in earlier samples of young adults. Ninety-two children aged 5–15 years and a new group of 24 young adults completed the three matching tasks. Young children were highly adept at the butterfly wing task relative to their performance on both face-related tasks. More importantly, in older children, the development of facial emotion discrimination ability lagged behind that of facial identity discrimination.

Abstract:

Schizophrenia patients have been shown to be compromised in their ability to recognize facial emotion, and this deficit has been shown to be related to negative symptom severity. However, to date, most studies have used static rather than dynamic depictions of faces. Nineteen patients with schizophrenia were compared with seventeen controls on two tasks: the first involved the discrimination of facial identity, emotion, and butterfly wings; the second tested emotion recognition using both static and dynamic stimuli. In the first task, the patients performed more poorly than controls for emotion discrimination only, confirming a specific deficit in facial emotion recognition. In the second task, patients performed more poorly in both static and dynamic facial emotion processing. An interesting pattern of associations, suggestive of a possible double dissociation, emerged from correlations with symptom ratings: high negative symptom ratings were associated with poorer recognition of static displays of emotion, whereas high positive symptom ratings were associated with poorer recognition of dynamic displays. However, while the strength of the associations between negative symptom ratings and accuracy during static versus dynamic facial emotion processing differed significantly, the associations between positive symptom ratings and task performance did not. The results confirm a facial emotion-processing deficit in schizophrenia using more ecologically valid dynamic expressions of emotion. The pattern of findings may reflect differential patterns of cortical dysfunction associated with negative and positive symptoms of schizophrenia, in the context of differential neural mechanisms for the processing of static and dynamic displays of facial emotion.

Abstract:

Facial identity and facial expression matching tasks were completed by 5–12-year-old children and adults using stimuli extracted from the same set of normalized faces. Configural and feature processing were examined using speed and accuracy of responding and facial feature selection, respectively. Facial identity matching was slower than facial expression matching for all age groups. Large age effects were found on speed and accuracy of responding and on feature use in both the identity and expression matching tasks. An eye-region preference was found on the facial identity task and a mouth-region preference on the facial expression task. Use of mouth-region information for facial expression matching increased with age, whereas use of eye-region information for facial identity matching peaked early. The feature use information suggests that the specific use of primary facial features to arrive at identity and emotion matching judgments matures across middle childhood.

Abstract:

Theoretical accounts suggest that mirror neurons play a crucial role in social cognition. The current study used transcranial magnetic stimulation (TMS) to investigate the association between mirror neuron activation and facial emotion processing, a fundamental aspect of social cognition, among healthy adults (n = 20). Facial emotion processing of static (but not dynamic) images correlated significantly with an enhanced motor response, proposed to reflect mirror neuron activation. These correlations did not appear to reflect general facial processing or pattern recognition, and provide support to current theoretical accounts linking the mirror neuron system to aspects of social cognition. We discuss the mechanism by which mirror neurons might facilitate facial emotion recognition.

Abstract:

People with schizophrenia perform poorly when recognising facial expressions of emotion, particularly negative emotions such as fear. This finding has been taken as evidence of a "negative emotion-specific deficit", putatively associated with dysfunction in the limbic system, particularly the amygdala. An alternative explanation is that the greater difficulty in recognising negative emotions may reflect a priori differences in task difficulty. The present study uses a differential deficit design to test this argument. Facial emotion recognition accuracy for seven emotion categories was compared across three groups: eighteen schizophrenia patients and a group of healthy age- and gender-matched controls viewed identical sets of stimuli, while a second group of 18 age- and gender-matched controls viewed a degraded version of the same stimuli. The level of stimulus degradation was chosen so as to equate overall accuracy with that of the schizophrenia patients. Both the schizophrenia group and the degraded-image control group showed reduced overall recognition accuracy, and reduced recognition accuracy for fearful and sad facial stimuli, compared with the intact-image control group. There were no differences in recognition accuracy for any emotion category between the schizophrenia group and the degraded-image control group. These findings argue against a negative emotion-specific deficit in schizophrenia.
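The design hinges on being able to degrade stimuli until control accuracy matches patient accuracy. A simple way to realise such degradation, assuming Pillow, is down- then up-sampling; the abstract does not state the actual degradation method, so this is only one plausible implementation.

```python
from PIL import Image

def degrade(img: Image.Image, factor: int) -> Image.Image:
    """Reduce effective image resolution by down- then up-sampling.

    A larger `factor` means stronger degradation; in a differential
    deficit design, `factor` would be calibrated until control-group
    accuracy on degraded images matches patient accuracy on intact ones.
    """
    w, h = img.size
    small = img.resize((max(1, w // factor), max(1, h // factor)),
                       Image.BILINEAR)
    return small.resize((w, h), Image.NEAREST)
```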

Abstract:

Empirical evidence suggests impaired facial emotion recognition in schizophrenia, but the nature of this deficit is the subject of ongoing research. The current study tested the hypothesis that a generalized deficit at an early stage of face-specific processing (putatively subserved by the fusiform gyrus) accounts for impaired facial emotion recognition in schizophrenia, as opposed to the negative emotion-specific deficit model, which posits impaired facial information processing at subsequent stages. Event-related potentials (ERPs) were recorded from 11 schizophrenia patients and 15 matched controls while they performed a gender discrimination task and a facial emotion recognition task. Significant reduction of the face-specific vertex positive potential (VPP) at a peak latency of 165 ms was confirmed in the schizophrenia subjects, whereas their early visual processing, as indexed by P1, was intact. The attenuated VPP correlated with a subsequent reduction in P3 amplitude and predicted accuracy on a facial emotion discrimination task. A subset of ten schizophrenia patients and ten matched healthy control subjects also performed similar tasks in a magnetic resonance imaging scanner. Patients showed reduced blood oxygenation level-dependent (BOLD) activation in the fusiform, inferior frontal, middle temporal and middle occipital gyri, as well as in the amygdala. Correlation analyses revealed that the VPP and the subsequent P3a ERP component predict fusiform gyrus BOLD activation. These results suggest that problems in facial affect recognition in schizophrenia may represent flow-on effects of a generalized deficit in early visual processing.

Abstract:

Patients with a number of psychiatric and neuropathological conditions have problems recognising facial expressions of emotion. Research indicating that patients with schizophrenia perform more poorly in the recognition of negative-valence facial stimuli than positive-valence stimuli has been interpreted as evidence of a negative emotion-specific deficit. An alternative explanation rests in the psychometric properties of the stimulus materials: the pattern of impairment observed in schizophrenia may reflect initial discrepancies in task difficulty between stimulus categories, which are not apparent in healthy subjects because of ceiling effects. This hypothesis is tested by examining the performance of healthy subjects in a facial emotion categorisation task with three levels of stimulus resolution. The results confirm the model's predictions, showing that performance degrades differentially across emotion categories, with the greatest deterioration for negative-valence stimuli. In the light of these results, a possible methodology for detecting emotion-specific deficits in clinical samples is discussed.

Abstract:

The characterisation of facial expression through landmark-based analysis methods such as FACEM (Pilowsky & Katsikitis, 1994) has a variety of uses in psychiatric and psychological research. In these systems, important structural relationships are extracted from images of facial expressions by analysing a pre-defined set of feature points. The resulting relationship measures may then be used, for instance, to assess the degree of variability and similarity between different facial expressions of emotion. FaceXpress is a multimedia software suite that provides a generalised workbench for landmark-based facial emotion analysis and stimulus manipulation. It is a flexible tool designed to be specialised at runtime by the user. While FaceXpress has been used to implement the FACEM process, it can also be configured to support any other similar, arbitrary system for quantifying human facial emotion. FaceXpress also implements an integrated set of image processing tools, as well as specialised tools for producing facial expression stimuli, including facial morphing routines and the generation of expression-representative line drawings from photographs.
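As a rough illustration of the kind of landmark-based measures such a workbench computes, the sketch below derives size-normalised distances from a user-defined point set. The point names and measure definitions are hypothetical; they do not reproduce the FACEM scheme itself.

```python
import numpy as np

def landmark_measures(points, measure_defs, norm_pair):
    """Compute normalised inter-landmark distances.

    points: dict mapping feature-point names to (x, y) coordinates
            (assumed input format).
    measure_defs: list of (name, point_a, point_b) distance definitions.
    norm_pair: two point names whose distance normalises for face size.
    """
    unit = np.linalg.norm(np.subtract(points[norm_pair[0]],
                                      points[norm_pair[1]]))
    return {name: np.linalg.norm(np.subtract(points[a], points[b])) / unit
            for name, a, b in measure_defs}

# Example with hypothetical point names:
# landmark_measures(pts,
#                   [("mouth_width", "lip_corner_left", "lip_corner_right")],
#                   ("eye_outer_left", "eye_outer_right"))
```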