57 results for Facial reproduction
Abstract:
Facial identity and facial expression matching tasks were completed by 5–12-year-old children and adults using stimuli extracted from the same set of normalized faces. Configural and feature processing were examined using speed and accuracy of responding and facial feature selection, respectively. Facial identity matching was slower than facial expression matching for all age groups. Large age effects were found on both speed and accuracy of responding and feature use in both identity and expression matching tasks. Eye region preference was found on the facial identity task and mouth region preference on the facial expression task. Use of mouth region information for facial expression matching increased with age, whereas use of eye region information for facial identity matching peaked early. The feature use information suggests that the specific use of primary facial features to arrive at identity and emotion matching judgments matures across middle childhood.
Abstract:
Theoretical accounts suggest that mirror neurons play a crucial role in social cognition. The current study used transcranial magnetic stimulation (TMS) to investigate the association between mirror neuron activation and facial emotion processing, a fundamental aspect of social cognition, among healthy adults (n = 20). Facial emotion processing of static (but not dynamic) images correlated significantly with an enhanced motor response, proposed to reflect mirror neuron activation. These correlations did not appear to reflect general facial processing or pattern recognition, and provide support to current theoretical accounts linking the mirror neuron system to aspects of social cognition. We discuss the mechanism by which mirror neurons might facilitate facial emotion recognition.
Abstract:
People with schizophrenia perform poorly when recognising facial expressions of emotion, particularly negative emotions such as fear. This finding has been taken as evidence of a “negative emotion specific deficit”, putatively associated with a dysfunction in the limbic system, particularly the amygdala. An alternative explanation is that greater difficulty in recognising negative emotions may reflect a priori differences in task difficulty. The present study uses a differential deficit design to test the above argument. Facial emotion recognition accuracy for seven emotion categories was compared across three groups. Eighteen schizophrenia patients and one group of healthy age- and gender-matched controls viewed identical sets of stimuli. A second group of 18 age- and gender-matched controls viewed a degraded version of the same stimuli. The level of stimulus degradation was chosen so as to equate the overall level of accuracy with that of the schizophrenia patients. Both the schizophrenia group and the degraded image control group showed reduced overall recognition accuracy and reduced recognition accuracy for fearful and sad facial stimuli compared with the intact-image control group. There were no differences in recognition accuracy for any emotion category between the schizophrenia group and the degraded image control group. These findings argue against a negative emotion specific deficit in schizophrenia.
Abstract:
Empirical evidence suggests impaired facial emotion recognition in schizophrenia. However, the nature of this deficit is the subject of ongoing research. The current study tested the hypothesis that a generalized deficit at an early stage of face-specific processing (i.e. putatively subserved by the fusiform gyrus) accounts for impaired facial emotion recognition in schizophrenia as opposed to the Negative Emotion-specific Deficit Model, which suggests impaired facial information processing at subsequent stages. Event-related potentials (ERPs) were recorded from 11 schizophrenia patients and 15 matched controls while performing a gender discrimination and a facial emotion recognition task. Significant reduction of the face-specific vertex positive potential (VPP) at a peak latency of 165 ms was confirmed in schizophrenia subjects whereas their early visual processing, as indexed by P1, was found to be intact. Attenuated VPP was found to correlate with subsequent P3 amplitude reduction and to predict accuracy when performing a facial emotion discrimination task. A subset of ten schizophrenia patients and ten matched healthy control subjects also performed similar tasks in the magnetic resonance imaging scanner. Patients showed reduced blood oxygenation level-dependent (BOLD) activation in the fusiform, inferior frontal, middle temporal and middle occipital gyrus as well as in the amygdala. Correlation analyses revealed that VPP and the subsequent P3a ERP components predict fusiform gyrus BOLD activation. These results suggest that problems in facial affect recognition in schizophrenia may represent flow-on effects of a generalized deficit in early visual processing.
Abstract:
Patients with a number of psychiatric and neuropathological conditions demonstrate problems in recognising facial expressions of emotion. Research indicating that patients with schizophrenia perform more poorly in the recognition of negative valence facial stimuli than positive valence stimuli has been interpreted as evidence of a negative emotion specific deficit. An alternative explanation lies in the psychometric properties of the stimulus materials. This model suggests that the pattern of impairment observed in schizophrenia may reflect initial discrepancies in task difficulty between stimulus categories, which are not apparent in healthy subjects because of ceiling effects. This hypothesis is tested by examining the performance of healthy subjects in a facial emotion categorisation task with three levels of stimulus resolution. Results confirm the predictions of the model, showing that performance degrades differentially across emotion categories, with the greatest deterioration to negative valence stimuli. In the light of these results, a possible methodology for detecting emotion specific deficits in clinical samples is discussed.
Abstract:
The characterisation of facial expression through landmark-based analysis methods such as FACEM (Pilowsky & Katsikitis, 1994) has a variety of uses in psychiatric and psychological research. In these systems, important structural relationships are extracted from images of facial expressions by the analysis of a pre-defined set of feature points. These relationship measures may then be used, for instance, to assess the degree of variability and similarity between different facial expressions of emotion. FaceXpress is a multimedia software suite that provides a generalised workbench for landmark-based facial emotion analysis and stimulus manipulation. It is a flexible tool that is designed to be specialised at runtime by the user. While FaceXpress has been used to implement the FACEM process, it can also be configured to support any other similar, arbitrary system for quantifying human facial emotion. FaceXpress also implements an integrated set of image processing tools and specialised tools for facial expression stimulus production including facial morphing routines and the generation of expression-representative line drawings from photographs.
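To illustrate the kind of measurement a landmark-based system of this sort computes, the sketch below derives a few inter-landmark distances from a set of (x, y) feature points and normalises them by inter-pupil distance. The landmark names and the specific measures are illustrative assumptions only; they are not the actual FACEM point set or parameter definitions.

```python
import numpy as np

# Hypothetical landmark coordinates for one face image (x, y in pixels).
# FACEM defines its own feature points and measures, not reproduced here.
landmarks = {
    "left_pupil":  (120.0, 140.0),
    "right_pupil": (200.0, 140.0),
    "mouth_left":  (135.0, 230.0),
    "mouth_right": (185.0, 230.0),
    "upper_lip":   (160.0, 222.0),
    "lower_lip":   (160.0, 245.0),
    "left_brow":   (118.0, 118.0),
    "right_brow":  (202.0, 118.0),
}

def dist(a, b):
    """Euclidean distance between two named landmarks."""
    return float(np.linalg.norm(np.subtract(landmarks[a], landmarks[b])))

# Normalise every measure by inter-pupil distance so that values are
# comparable across images of different scale.
scale = dist("left_pupil", "right_pupil")

measures = {
    "mouth_width":    dist("mouth_left", "mouth_right") / scale,
    "mouth_opening":  dist("upper_lip", "lower_lip") / scale,
    "brow_eye_left":  dist("left_brow", "left_pupil") / scale,
    "brow_eye_right": dist("right_brow", "right_pupil") / scale,
}

for name, value in measures.items():
    print(f"{name}: {value:.3f}")
```

Measures of this form, computed for many expression images, can then be compared to assess variability and similarity between expressions, which is the role such relationship measures play in the workflow described above.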
Abstract:
Neuroimaging research has shown localised brain activation to different facial expressions. This, along with the finding that schizophrenia patients perform poorly in their recognition of negative emotions, has raised the suggestion that patients display an emotion specific impairment. We propose that this asymmetry in performance reflects task difficulty gradations, rather than aberrant processing in neural pathways subserving recognition of specific emotions. A neural network model is presented, which classifies facial expressions on the basis of measurements derived from human faces. After training, the network showed an accuracy pattern closely resembling that of healthy subjects. Lesioning of the network led to an overall decrease in the network’s discriminant capacity, with the greatest accuracy decrease to fear, disgust and anger stimuli. This implies that the differential pattern of impairment in schizophrenia patients can be explained without having to postulate impairment of specific processing modules for negative emotion recognition.
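A minimal sketch of the general approach follows: a small feed-forward network is trained on vectors of facial measurements and then "lesioned" by zeroing a random fraction of its weights before accuracy is re-evaluated. The network size, the lesioning scheme, and the synthetic data are assumptions for illustration, not the published model.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for landmark-derived facial measurements:
# 200 samples, 12 measurements, 7 emotion categories (labels 0..6).
X = rng.normal(size=(200, 12))
y = rng.integers(0, 7, size=200)

# Small feed-forward classifier over the measurement vectors.
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, y)
print("intact accuracy:", net.score(X, y))

# "Lesion" the network by zeroing a random 30% of its weights,
# then re-evaluate to see how discriminant capacity degrades.
for layer in net.coefs_:
    mask = rng.random(layer.shape) < 0.30
    layer[mask] = 0.0
print("lesioned accuracy:", net.score(X, y))
```

In a study of the kind described above, per-emotion accuracy before and after lesioning would be compared rather than a single overall score.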
Abstract:
Viewer interests, evoked by video content, can potentially identify the highlights of the video. This paper explores the use of facial expressions (FE) and heart rate (HR) of viewers, captured using a camera and a non-strapped sensor, for identifying interesting video segments. The data from ten subjects with three videos showed that these signals are viewer dependent and not synchronized with the video contents. To address this issue, new algorithms are proposed to effectively combine FE and HR signals for identifying the time when viewer interest is potentially high. The results show that, compared with subjective annotation and match report highlights, ‘non-neutral’ FE and ‘relatively higher and faster’ HR are able to capture 60%-80% of goal, foul, and shot-on-goal soccer video events. FE is found to be more indicative than HR of viewers’ interests, but the fusion of these two modalities outperforms each of them.
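One way to realise the "non-neutral FE and relatively higher and faster HR" rule is sketched below: a time sample is flagged when the expression label is non-neutral, heart rate exceeds its running baseline, and heart rate is rising faster than a threshold. The window length, threshold, and label conventions are assumptions, not the paper's algorithm.

```python
import numpy as np

def highlight_mask(fe_labels, hr, hr_fs=1.0, baseline_s=30, rise_thresh=1.5):
    """
    Flag samples where viewer interest is plausibly high: the facial
    expression is non-neutral AND heart rate is above its running baseline
    AND rising faster than rise_thresh (bpm per second).

    fe_labels : per-sample expression labels ("neutral", "surprise", ...)
    hr        : per-sample heart rate in bpm, same length as fe_labels
    hr_fs     : sampling rate of both signals in Hz
    """
    hr = np.asarray(hr, dtype=float)
    win = max(1, int(baseline_s * hr_fs))

    # Running baseline: mean HR over the preceding window.
    baseline = np.array([hr[max(0, i - win):i + 1].mean() for i in range(len(hr))])
    relatively_high = hr > baseline

    # Rate of change of HR in bpm/s.
    rising_fast = np.gradient(hr, 1.0 / hr_fs) > rise_thresh

    non_neutral = np.array([lbl != "neutral" for lbl in fe_labels])
    return non_neutral & relatively_high & rising_fast

# Toy example: 10 samples at 1 Hz; True marks candidate highlight moments.
fe = ["neutral"] * 4 + ["surprise"] * 3 + ["neutral"] * 3
hr = [70, 71, 70, 72, 78, 84, 86, 80, 76, 74]
print(highlight_mask(fe, hr))
```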
Abstract:
Affect is an important feature of multimedia content and conveys valuable information for multimedia indexing and retrieval. Most existing studies for affective content analysis are limited to low-level features or mid-level representations, and are generally criticized for their incapacity to address the gap between low-level features and high-level human affective perception. The facial expressions of subjects in images carry important semantic information that can substantially influence human affective perception, but have been seldom investigated for affective classification of facial images towards practical applications. This paper presents an automatic image emotion detector (IED) for affective classification of practical (or non-laboratory) data using facial expressions, where a lot of “real-world” challenges are present, including pose, illumination, and size variations, etc. The proposed method is novel, with its framework designed specifically to overcome these challenges using multi-view versions of face and fiducial point detectors, and a combination of point-based texture and geometry. Performance comparisons of several key parameters of relevant algorithms are conducted to explore the optimum parameters for high accuracy and fast computation speed. A comprehensive set of experiments with existing and new datasets shows that the method is effective despite pose variations, fast, appropriate for large-scale data, and as accurate as the method with state-of-the-art performance on laboratory-based data. The proposed method was also applied to affective classification of images from the British Broadcasting Corporation (BBC) in a task typical of a practical application, providing some valuable insights.
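The pipeline shape described above (locate fiducial points, extract texture around each point, combine with geometric descriptors, classify) can be sketched as below. The fixed fiducial coordinates, the LBP settings, and the synthetic images stand in for the paper's multi-view detectors and feature definitions, which are not reproduced here.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

# Placeholder fiducial-point locations (row, col); a real multi-view
# face/landmark detector would supply these per image.
FIDUCIALS = [(40, 30), (40, 70), (70, 50), (90, 35), (90, 65)]

def point_features(image, points, patch=15, lbp_p=8, lbp_r=1):
    """Concatenate LBP histograms of patches around each fiducial point
    (texture) with pairwise inter-point distances (geometry)."""
    texture, half = [], patch // 2
    for r, c in points:
        win = image[r - half:r + half + 1, c - half:c + half + 1]
        lbp = local_binary_pattern(win, lbp_p, lbp_r, method="uniform")
        hist, _ = np.histogram(lbp, bins=lbp_p + 2, range=(0, lbp_p + 2), density=True)
        texture.extend(hist)
    pts = np.array(points, dtype=float)
    geometry = [np.linalg.norm(pts[i] - pts[j])
                for i in range(len(pts)) for j in range(i + 1, len(pts))]
    return np.array(texture + geometry)

# Synthetic stand-in data: 60 grey-level "face" images with binary affect labels.
rng = np.random.default_rng(1)
images = rng.integers(0, 256, size=(60, 128, 100)).astype(np.uint8)
labels = rng.integers(0, 2, size=60)

X = np.array([point_features(img, FIDUCIALS) for img in images])
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```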
Abstract:
Genetic factors contribute to risk of many common diseases affecting reproduction and fertility. In recent years, methods for genome-wide association studies (GWAS) have revolutionized gene discovery for common traits and diseases. Results of GWAS are documented in the Catalog of Published Genome-Wide Association Studies at the National Human Genome Research Institute and report over 70 publications for 32 traits and diseases associated with reproduction. These include endometriosis, uterine fibroids, age at menarche and age at menopause. Results that pass appropriate stringent levels of significance are generally well replicated in independent studies. Examples of genetic variation affecting twinning rate, infertility, endometriosis and age at menarche demonstrate that the spectrum of disease-related variants for reproductive traits is similar to most other common diseases. GWAS 'hits' provide novel insights into biological pathways and the translational value of these studies lies in discovery of novel gene targets for biomarkers, drug development and greater understanding of environmental factors contributing to disease risk. Results also show that genetic data can help define sub-types of disease and co-morbidity with other traits and diseases. To date, many studies on reproductive traits have used relatively small samples. Future genetic marker studies in large samples with detailed phenotypic and clinical information will yield new insights into disease risk, disease classification and co-morbidity for many diseases associated with reproduction and infertility.
Abstract:
Age estimation from facial images is increasingly receiving attention to solve age-based access control and age-adaptive targeted marketing, amongst other applications. Since even humans can be led into error by the complex biological processes involved, finding a robust method remains a research challenge today. In this paper, we propose a new framework for the integration of Active Appearance Models (AAM), Local Binary Patterns (LBP), Gabor wavelets (GW) and Local Phase Quantization (LPQ) in order to obtain a highly discriminative feature representation which is able to model shape, appearance, wrinkles and skin spots. In addition, this paper proposes a novel flexible hierarchical age estimation approach consisting of a multi-class Support Vector Machine (SVM) to classify a subject into an age group followed by a Support Vector Regression (SVR) to estimate a specific age. The errors that may happen in the classification step, caused by the hard boundaries between age classes, are compensated for in the specific age estimation by a flexible overlapping of the age ranges. The performance of the proposed approach was evaluated on the FG-NET Aging and MORPH Album 2 datasets, and mean absolute errors (MAE) of 4.50 and 5.86 years were achieved, respectively. The robustness of the proposed approach was also evaluated on a merged version of both datasets, and an MAE of 5.20 years was achieved. Furthermore, we also compared age estimates made by humans with those of the proposed approach, and the results show that the machine outperforms humans. The proposed approach is competitive with the current state-of-the-art and provides additional robustness to blur, lighting and expression variance brought about by the local phase features.
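The coarse-to-fine idea can be sketched as follows: a multi-class SVM assigns an age group, and a per-group SVR, trained on a slightly widened (overlapping) age range, refines the estimate so that misclassifications near group boundaries can still be compensated for. The feature vectors, group boundaries, and overlap width below are assumptions for illustration; the paper's features are the fused AAM/LBP/Gabor/LPQ representations.

```python
import numpy as np
from sklearn.svm import SVC, SVR

# Synthetic stand-in for the fused facial feature vectors, with a weak age signal.
rng = np.random.default_rng(0)
n, d = 400, 50
ages = rng.uniform(1, 70, size=n)
X = rng.normal(size=(n, d)) + ages[:, None] * 0.05

# Illustrative age groups and overlap margin (years) for the per-group regressors.
GROUPS = [(0, 20), (20, 40), (40, 70)]
OVERLAP = 5

def group_of(age):
    for i, (lo, hi) in enumerate(GROUPS):
        if lo <= age < hi:
            return i
    return len(GROUPS) - 1

group_id = np.array([group_of(a) for a in ages])

# Stage 1: multi-class SVM assigns a coarse age group.
group_clf = SVC(kernel="rbf").fit(X, group_id)

# Stage 2: one SVR per group, trained on a widened age range so that
# boundary misclassifications in stage 1 can still be compensated for.
regressors = []
for lo, hi in GROUPS:
    mask = (ages >= lo - OVERLAP) & (ages < hi + OVERLAP)
    regressors.append(SVR(kernel="rbf").fit(X[mask], ages[mask]))

def estimate_age(x):
    g = int(group_clf.predict(x.reshape(1, -1))[0])
    return float(regressors[g].predict(x.reshape(1, -1))[0])

pred = np.array([estimate_age(x) for x in X])
print("MAE on training data: %.2f years" % np.mean(np.abs(pred - ages)))
```

On real data the MAE would of course be reported on a held-out test set rather than on the training samples used here.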