912 results for Facial emotion recognition
Abstract:
Because faces and bodies share some abstract perceptual features, we hypothesised that similar recognition processes might be used for both. We investigated whether similar caricature effects to those found in facial identity and expression recognition could be found in the recognition of individual bodies and socially meaningful body positions. Participants were trained to name four body positions (anger, fear, disgust, sadness) and four individuals (in a neutral position). We then tested their recognition of extremely caricatured, moderately caricatured, anti-caricatured, and undistorted images of each stimulus. Consistent with caricature effects found in face recognition, moderately caricatured representations of individuals' bodies were recognised more accurately than undistorted and extremely caricatured representations. No significant difference was found between participants' recognition of extremely caricatured, moderately caricatured, or undistorted body position line-drawings. All anti-caricatured representations were named significantly less accurately than the veridical stimuli. Similar mental representations may be used for both bodies and faces.
Abstract:
Most face recognition systems only work well under quite constrained environments. In particular, the illumination conditions, facial expressions and head pose must be tightly controlled for good recognition performance. In 2004, we proposed a new face recognition algorithm, Adaptive Principal Component Analysis (APCA) [4], which performs well against both lighting variation and expression change. But like other eigenface-derived face recognition algorithms, APCA only performs well with frontal face images. The work presented in this paper extends our previous work to also accommodate variations in head pose. Following the approach of Cootes et al., we develop a face model and a rotation model which can be used to interpret facial features and synthesize realistic frontal face images when given a single novel face image. We use a Viola-Jones based face detector to detect the face in real time and thus solve the initialization problem for our Active Appearance Model search. Experiments show that our approach can achieve good recognition rates on face images across a wide range of head poses. Indeed, recognition rates are improved by up to a factor of 5 compared to standard PCA.
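For readers unfamiliar with eigenface-derived methods such as APCA, the standard-PCA baseline the abstract compares against can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' APCA: the image dimensions, number of identities, and number of retained eigenfaces are all hypothetical.

```python
# Minimal eigenface-style PCA baseline: project face vectors onto the
# principal components of the training set, classify by nearest neighbour.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: 20 "face images" of 64 pixels, 4 identities x 5 images.
X = rng.normal(size=(20, 64))
labels = np.repeat(np.arange(4), 5)

mean_face = X.mean(axis=0)
Xc = X - mean_face
# The right singular vectors of the centred data are the eigenfaces.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 8                   # number of eigenfaces kept (hypothetical)
eigenfaces = Vt[:k]     # shape (k, 64)

def project(img):
    return eigenfaces @ (img - mean_face)

train_proj = np.array([project(x) for x in X])

def recognise(img):
    # Nearest neighbour in eigenface space.
    d = np.linalg.norm(train_proj - project(img), axis=1)
    return labels[np.argmin(d)]

# A training image projects onto itself, so it is recognised exactly.
assert recognise(X[7]) == labels[7]
```

APCA's contribution, per the abstract, is to adapt this projection to lighting and expression variation; the sketch above shows only the shared eigenface machinery.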
Abstract:
Anxiety and fear are often confounded in discussions of human emotions. However, studies of rodent defensive reactions under naturalistic conditions suggest anxiety is functionally distinct from fear. Unambiguous threats, such as predators, elicit flight from rodents (if an escape route is available), whereas ambiguous threats (e.g., the odor of a predator) elicit risk assessment behavior, which is associated with anxiety as it is preferentially modulated by anti-anxiety drugs. However, without human evidence, it would be premature to assume that rodent-based psychological models are valid for humans. We tested the human validity of the risk assessment explanation for anxiety by presenting 8 volunteers with emotive scenarios and asking them to pose facial expressions. Photographs and videos of these expressions were shown to 40 participants who matched them to the scenarios and labeled each expression. Scenarios describing ambiguous threats were preferentially matched to the facial expression posed in response to the same scenario type. This expression consisted of two plausible environmental-scanning behaviors (eye darts and head swivels) and was labeled as anxiety, not fear. The facial expression elicited by unambiguous threat scenarios was labeled as fear. The emotion labels generated were then presented to another 18 participants who matched them back to photographs of the facial expressions. This back-matching of labels to faces also linked anxiety to the environmental-scanning face rather than the fear face. Results therefore suggest that anxiety produces a distinct facial expression and that it has adaptive value in situations that are ambiguously threatening, supporting a functional, risk-assessing explanation for human anxiety.
Abstract:
The aim was to establish if the memory bias for sad faces, reported in clinically depressed patients (Gilboa-Schechtman, Erhard Weiss, & Jeczemien, 2002; Ridout, Astell, Reid, Glen, & O'Carroll, 2003), generalises to sub-clinical depression (dysphoria) and experimentally induced sadness. Study 1: dysphoric (n = 24) and non-dysphoric (n = 20) participants were presented with facial stimuli, asked to identify the emotion portrayed and then given a recognition memory test for these faces. At encoding, dysphoric participants (DP) exhibited impaired identification of sadness and neutral affect relative to the non-dysphoric group (ND). At memory testing, DP exhibited superior memory for sad faces relative to happy and neutral. They also exhibited enhanced memory for sad faces and impaired memory for happy faces relative to the ND. Study 2: non-depressed participants underwent a positive (n = 24) or negative (n = 24) mood induction (MI) and were assessed on the same tests as Study 1. At encoding, negative MI participants showed superior identification of sadness, relative to neutral affect and compared to the positive MI group. At memory testing, the negative MI group exhibited enhanced memory for the sad faces relative to happy or neutral and compared to the positive MI group. Conclusion: the mood-congruent memory (MCM) bias for sad faces generalises from clinical depression to these sub-clinical affective states.
Abstract:
Background - Difficulties in emotion processing and poor social function are common to bipolar disorder (BD) and major depressive disorder (MDD) depression, resulting in many BD depressed individuals being misdiagnosed with MDD. The amygdala is a key region implicated in processing emotionally salient stimuli, including emotional facial expressions. It is unclear, however, whether abnormal amygdala activity during positive and negative emotion processing represents a persistent marker of BD regardless of illness phase or a state marker of depression common or specific to BD and MDD depression. Methods - Sixty adults were recruited: 15 depressed with BD type 1 (BDd), 15 depressed with recurrent MDD, 15 with BD in remission (BDr), diagnosed with DSM-IV and Structured Clinical Interview for DSM-IV Research Version criteria; and 15 healthy control subjects (HC). Groups were age- and gender ratio-matched; patient groups were matched for age of illness onset and illness duration; depressed groups were matched for depression severity. The BDd were taking more psychotropic medication than other patient groups. All individuals participated in three separate 3T neuroimaging event-related experiments, where they viewed mild and intense emotional and neutral faces of fear, happiness, or sadness from a standardized series. Results - The BDd—relative to HC, BDr, and MDD—showed elevated left amygdala activity to mild and neutral facial expressions in the sad (p < .009) but not other emotion experiments that was not associated with medication. There were no other significant between-group differences in amygdala activity. Conclusions - Abnormally elevated left amygdala activity to mild sad and neutral faces might be a depression-specific marker in BD but not MDD, suggesting different pathophysiologic processes for BD versus MDD depression.
Abstract:
Background: Identifying biological markers to aid diagnosis of bipolar disorder (BD) is critically important. To be considered a possible biological marker, neural patterns in BD should be discriminant from those in healthy individuals (HI). We examined patterns of neuromagnetic responses revealed by magnetoencephalography (MEG) during implicit emotion-processing using emotional (happy, fearful, sad) and neutral facial expressions, in sixteen BD and sixteen age- and gender-matched healthy individuals. Methods: Neuromagnetic data were recorded using a 306-channel whole-head MEG ELEKTA Neuromag System, and preprocessed using Signal Space Separation as implemented in MaxFilter (ELEKTA). Custom Matlab programs removed EOG and ECG signals from filtered MEG data, and computed means of epoched data (0-250ms, 250-500ms, 500-750ms). A generalized linear model with three factors (individual, emotion intensity and time) compared BD and HI. A principal component analysis of normalized mean channel data in selected brain regions identified principal components that explained 95% of data variation. These components were used in a quadratic support vector machine (SVM) pattern classifier. SVM classifier performance was assessed using the leave-one-out approach. Results: BD and HI showed significantly different patterns of activation for 0-250ms within both left occipital and temporal regions, specifically for neutral facial expressions. PCA analysis revealed significant differences between BD and HI for mild fearful, happy, and sad facial expressions within 250-500ms. SVM quadratic classifier showed greatest accuracy (84%) and sensitivity (92%) for neutral faces, in left occipital regions within 500-750ms. Conclusions: MEG responses may be used in the search for disease specific neural markers.
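The classification scheme this abstract describes (principal components retaining 95% of the variance, fed into a quadratic SVM whose accuracy is assessed by leave-one-out cross-validation) can be sketched on synthetic stand-in data. This is an illustrative reconstruction assuming scikit-learn, not the authors' custom Matlab pipeline; the feature dimension and group separation are invented.

```python
# PCA (95% explained variance) + quadratic (degree-2 polynomial) SVM,
# evaluated with leave-one-out cross-validation, mirroring the abstract.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical stand-in for mean MEG channel data: 16 BD + 16 HI subjects,
# 30 features each (e.g., epoch means over selected channels).
X = np.vstack([rng.normal(0.0, 1.0, size=(16, 30)),
               rng.normal(0.8, 1.0, size=(16, 30))])
y = np.array([0] * 16 + [1] * 16)   # 0 = BD, 1 = HI

clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),         # keep components explaining 95% of variance
    SVC(kernel="poly", degree=2),   # quadratic SVM classifier
)
# Leave-one-out: each of the 32 subjects is the test set exactly once.
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.2f}")
```

With only 32 subjects, leave-one-out is the natural choice here: it maximises training data per fold at the cost of 32 separate model fits.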
Abstract:
[ES]This paper describes an analysis performed for facial description in static images and video streams. The still image context is first analyzed in order to decide the optimal classifier configuration for each problem: gender recognition, race classification, and glasses and moustache presence. These results are later applied to significant samples which are automatically extracted in real-time from video streams achieving promising results in the facial description of 70 individuals by means of gender, race and the presence of glasses and moustache.
Abstract:
[EN]In face recognition, where high-dimensional representation spaces are generally used, it is very important to take advantage of all the available information. In particular, many labelled facial images accumulate while the recognition system is operating, and for practical reasons some of them are often discarded. In this paper, we propose an algorithm for using this information. The algorithm has the fundamental characteristic of being incremental. In addition, the algorithm combines the classification results for the images in the input sequence. Experiments with sequences obtained from a real person detection and tracking system allow us to analyze the performance of the algorithm, as well as its potential improvements.
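The abstract does not give the algorithm's details, but the two properties it names (incremental updating from newly labelled images, and combining per-frame classification results over an input sequence) can be illustrated with a deliberately simple sketch. The nearest-class-mean model and majority vote below are hypothetical stand-ins chosen for clarity, not the paper's method.

```python
# Incremental learning sketch: a nearest-class-mean classifier updated
# online as labelled face vectors arrive, with sequence-level decisions
# made by majority-voting the per-frame classifications.
import numpy as np
from collections import Counter

class IncrementalNCM:
    def __init__(self):
        self.means = {}    # class label -> running mean vector
        self.counts = {}   # class label -> number of samples seen

    def update(self, x, label):
        # Running-mean update: incremental, so no past images are stored.
        n = self.counts.get(label, 0)
        mu = self.means.get(label, np.zeros_like(x, dtype=float))
        self.means[label] = mu + (x - mu) / (n + 1)
        self.counts[label] = n + 1

    def classify(self, x):
        # Assign to the class whose mean is nearest.
        return min(self.means, key=lambda c: np.linalg.norm(x - self.means[c]))

    def classify_sequence(self, frames):
        # Combine per-frame decisions over the tracked sequence.
        votes = Counter(self.classify(f) for f in frames)
        return votes.most_common(1)[0][0]

rng = np.random.default_rng(1)
model = IncrementalNCM()
# Hypothetical 5-dimensional feature vectors for two identities.
for label, centre in [("alice", 0.0), ("bob", 3.0)]:
    for _ in range(10):
        model.update(rng.normal(centre, 1.0, size=5), label)

seq = [rng.normal(3.0, 1.0, size=5) for _ in range(7)]
assert model.classify_sequence(seq) == "bob"
```

The update step runs in constant memory per class, which is the point of an incremental design: labelled images from the running system improve the model without ever being retained.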
Abstract:
The facial feedback hypothesis, proposed by Tomkins in 1962, holds that the activation of certain facial muscles sends sensory information to the brain, thereby inducing an emotional experience in the subject. Building on this theory and on research supporting it, the present study set out to confirm the effect of emotion induced through facial feedback on the evaluation of five types of humour in advertising. To this end, an experiment was conducted with 60 men and 60 women, who were randomly assigned to one of two conditions: smile stimulation (muscles pulled upward) or smile inhibition (muscles pulled downward), while they evaluated 16 humorous advertising images. Analysis of the results revealed significant differences between the conditions; in line with the stated hypothesis, participants in the smile-stimulation (muscles upward) condition evaluated the commercials more positively. Significant differences were also found as a function of sex and of the types of humour evaluated. The study offers empirical evidence for a theory proposed more than half a century ago and for its effect in present-day advertising.
Abstract:
The developmental progression of emotional competence in childhood provides robust evidence for its relation to social competence and important adjustment outcomes. This study aimed to analyze how this association is established in middle childhood. For this purpose, we tested 182 Portuguese children aged between 8 and 11 years, attending the 3rd and 4th grades of public schools. First, to assess social competence we used an instrument directed at children that draws on critical social situations within peer relationships in the school context, Socially in Action-Peers (SAp) (Rocha, Candeias & Lopes da Silva, 2012); children were assessed by three sources: themselves, their peers and their teacher. Second, we assessed children's emotional understanding individually with the Test of Emotion Comprehension (TEC; Pons & Harris, 2002; Pons, Harris & Rosnay, 2004). Relations between social competence levels (as a composite score and using self, peer and teacher scores) and emotional comprehension components (recognition of emotions based on facial expressions; external emotional causes; the contribution of desire to emotion; emotions based on belief; the influence of memory on the evaluation of emotional states; the possibility of emotional regulation; the possibility of hiding an emotional state; having mixed emotions; the contribution of morality to emotion experience) were investigated by means of two SSAs (Similarity Structure Analysis), a multidimensional scaling procedure, together with the external-variables-as-points technique. In the first structural analysis (SSA) we consider self, peer and teacher scores on social competence as content variables and the TEC as the external variable; in the second SSA we consider the TEC components as content variables and social competence at its different levels as the external variable.
The implications of these MDS procedures for a better understanding of how social competence and emotional comprehension are related in children are discussed, and the repercussions of these findings for the assessment of, and intervention in, social competence and emotional understanding in childhood are examined.
Abstract:
In the conceptual framework of affective neuroscience, this thesis aims to advance the understanding of the plasticity mechanisms underlying representations of others' emotional facial expressions. Chapter 1 outlines the neurophysiological bases of Hebbian plasticity, reviews influential studies that adopted paired associative stimulation procedures, and introduces new lines of research investigating the impact of cortico-cortical paired associative stimulation (ccPAS) protocols on higher-order cognitive functions. The experiments in Chapter 2 aimed to test the modulatory influence of a perceptual-motor training, based on the execution of emotional expressions, on subsequent judgements of the intensity of others' high-intensity (i.e., fully visible) and low-intensity (i.e., masked) emotional expressions. As a result of the training-induced learning, participants showed a significant congruence effect, indicated by relatively higher expression intensity ratings for the same emotion as the one previously trained. Interestingly, although masked expressions were judged as overall less emotionally intense, surgical facemasks did not prevent the emotion-specific effects of the training from occurring, suggesting that covering the lower part of another's face does not interact with the training-induced congruence effect. Chapter 3 presents a transcranial magnetic stimulation study targeting neural pathways that carry re-entrant input from higher-order brain regions into lower levels of the visual processing hierarchy. We focused on cortical visual networks within the temporo-occipital stream that underpin the processing of emotional faces and are susceptible to plastic adaptations. Importantly, we tested the plasticity-induced effects in a state-dependent manner, by administering ccPAS while presenting different facial expressions all relating to a specific emotion.
Results indicated that discrimination accuracy for emotion-specific expressions is enhanced following the ccPAS treatment, suggesting that a multi-coil TMS intervention might represent a suitable tool to drive brain remodeling at the neural network level and consequently influence a specific behavior.
Abstract:
In the Amazon Region, there is a virtual absence of severe malaria and few fatal cases of naturally occurring Plasmodium falciparum infections; this presents an intriguing and underexplored area of research. In addition to the rapid access of infected persons to effective treatment, one cause of this phenomenon might be the recognition of cytoadherent variant proteins on the infected red blood cell (IRBC) surface, including the var gene encoded P. falciparum erythrocyte membrane protein 1. In order to establish a link between cytoadherence, IRBC surface antibody recognition and the presence or absence of malaria symptoms, we phenotype-selected four Amazonian P. falciparum isolates and the laboratory strain 3D7 for their cytoadherence to CD36 and ICAM1 expressed on CHO cells. We then mapped the dominantly expressed var transcripts and tested whether antibodies from symptomatic or asymptomatic infections showed a differential recognition of the IRBC surface. As controls, the 3D7 lineages expressing severe disease-associated phenotypes were used. We showed that there was no profound difference between the frequency and intensity of antibody recognition of the IRBC-exposed P. falciparum proteins in symptomatic vs. asymptomatic infections. The 3D7 lineages, which expressed severe malaria-associated phenotypes, were strongly recognised by most, but not all, plasmas, meaning that the recognition of these phenotypes is frequent in asymptomatic carriers, but is not necessarily a prerequisite to staying free of symptoms.
Abstract:
The aim of this retrospective study was to compare the peculiarities of maxillofacial injuries caused by interpersonal violence with those of other etiologic factors. Medical records of 3,724 patients with maxillofacial injuries in São Paulo state (Brazil) were retrospectively analyzed. The data were submitted to statistical analysis (simple descriptive statistics and Chi-squared test) using SPSS 18.0 software. Data of 612 patients with facial injuries caused by violence were analyzed. The majority of the patients were male (81%; n = 496), with a mean age of 31.28 years (standard deviation of 13.33 years). These patients were more affected by mandibular and nose fractures, when compared with all other patients (P < 0.01), although fewer injuries were recorded in other body parts (χ(2) = 17.54; P < 0.01). Victims of interpersonal violence exhibited more injuries when the neurocranium was analyzed in isolation (χ(2) = 6.85; P < 0.01). Facial trauma due to interpersonal violence seems to be related to a higher rate of facial fractures and lacerations when compared to all patients with facial injuries. Prominent areas of the face and neurocranium were more affected by injuries.
Abstract:
Although Bell's palsy (BP) is the most common cause of peripheral facial palsy (PFP), other etiologies merit investigation. A 60-year-old female patient presented with recurrent bilateral PFP. Although the patient had a history of acute myeloid leukemia (AML), she had initially been diagnosed with BP-related PFP and had been treated accordingly. When the PFP recurred, additional diagnostic tests were performed. The resulting immunohistochemical profile included CD3 positivity in a few reactive T lymphocytes; positivity for myeloperoxidase in atypical cells; and focal positivity for CD34 and proto-oncogene c-kit proteins in neoplastic cells, thus confirming the suspicion of mastoid infiltration caused by relapsed AML. In patients with neoplastic disease, a finding of PFP calls for extensive investigation in order to rule out the involvement of the temporal bone.
Abstract:
The premature fusion of unilateral coronal suture can cause a significant asymmetry of the craniofacial skeleton, with an oblique deviation of the cranial base that negatively impacts soft tissue facial symmetry. The purpose of this study was to assess facial symmetry obtained in patients with unilateral coronal synostosis (UCS) surgically treated by 2 different techniques. We hypothesized that nasal deviation should not be addressed in a primary surgical correction of UCS. Consecutive UCS patients were enrolled in a prospective study and randomly divided into 2 groups. In group 1, the patients underwent total frontal reconstruction and transferring of onlay bone grafts to the recessive superior orbital rim (n = 7), and in group 2, the patients underwent total frontal reconstruction and unilateral fronto-orbital advancement (n = 5). Computerized photogrammetric analysis measured vertical and horizontal axis of the nose and the orbital globe in the preoperative and postoperative periods. Intragroup and intergroup comparisons were performed. Intragroup preoperative and postoperative comparisons showed a significant (all P < 0.05) reduction of the nasal axis and the orbital-globe axis in the postoperative period in the 2 groups. Intergroup comparisons showed no significant difference (all P > 0.05). Facial symmetry was achieved in the patients with UCS who underwent surgery regardless of surgical approach evaluated here. Our data showed a significant improvement in nasal and orbital-globe deviation, leading us to question the necessity of primary nasal correction in these patients.