982 results for face matching
Abstract:
The facial width-to-height ratio (face ratio) is a sexually dimorphic metric associated with actual aggression in men and with observers' judgements of aggression in male faces. Here, we sought to determine whether observers' judgements of aggression were associated with the face ratio in female faces. In three studies, participants rated photographs of female and male faces on aggression, femininity, masculinity, attractiveness, and nurturing. In Studies 1 and 2, for female and male faces, judgements of aggression were associated with the face ratio even when other masculinity-related cues in the face were controlled statistically. Nevertheless, correlations between the face ratio and judgements of aggression were smaller for female than for male faces (F(1, 36) = 7.43, p = 0.01). In Study 1, there was no significant relationship between judgements of femininity and of aggression in female faces. In Study 2, the association between judgements of masculinity and aggression was weaker for female faces than it was for male faces in Study 1. The weaker association in female faces may be because aggression and masculinity are stereotypically male traits. Thus, in Study 3, observers rated faces on nurturing (a stereotypically female trait) and on femininity. Judgements of nurturing were associated with femininity ratings (positively) and masculinity ratings (negatively) in both female and male faces. In summary, the perception of aggression differs in female versus male faces. The sex difference was not simply because aggression is a gendered construct; the relationships between masculinity/femininity and nurturing were similar for male and female faces even though nurturing is also a gendered construct. Masculinity and femininity ratings are associated neither with aggression ratings nor with the face ratio for female faces. In contrast, all four variables are highly inter-correlated in male faces, likely because these cues in male faces serve as "honest signals".
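Since the face ratio is a simple geometric measurement, a minimal sketch may help make the metric concrete: below, the width-to-height ratio is computed from four hypothetical landmark coordinates (bizygomatic width divided by brow-to-upper-lip height, a common operationalization). The landmark choices and coordinates are illustrative assumptions, not the authors' measurement protocol.

```python
# Minimal sketch: facial width-to-height ratio (fWHR) from landmark points.
# The landmark names and coordinates below are hypothetical examples.
import numpy as np

def face_ratio(left_zygion, right_zygion, brow_midpoint, upper_lip):
    """Return bizygomatic width divided by upper-face height."""
    width = np.linalg.norm(np.asarray(right_zygion) - np.asarray(left_zygion))
    height = np.linalg.norm(np.asarray(upper_lip) - np.asarray(brow_midpoint))
    return width / height

# Example with made-up pixel coordinates (x, y):
print(face_ratio((40, 120), (160, 120), (100, 80), (100, 145)))  # ~1.85
```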
Abstract:
This paper describes a trainable system capable of tracking faces and facial features such as eyes and nostrils, and of estimating basic mouth features such as degree of openness and smile, in real time. In developing this system, we have addressed the twin issues of image representation and algorithms for learning. We have used the invariance properties of image representations based on Haar wavelets to robustly capture various facial features. Moreover, unlike previous approaches, this system is entirely trained from examples and does not rely on a priori (hand-crafted) models of facial features based on optical flow or facial musculature. The system works in several stages that begin with face detection, followed by localization of facial features and estimation of mouth parameters. Each of these stages is formulated as a problem in supervised learning from examples. We apply the new and robust technique of support vector machines (SVMs) for classification in the stages of skin segmentation, face detection, and eye detection. Estimation of mouth parameters is modeled as a regression from a sparse subset of coefficients (basis functions) of an overcomplete dictionary of Haar wavelets.
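As a rough illustration of the classification stage described above (not a reproduction of the paper's trained system), the following sketch fits an SVM on Haar-like feature responses to separate face from non-face patches using scikit-image and scikit-learn; the patch size, feature type, and the randomly generated placeholder patches are assumptions for illustration.

```python
import numpy as np
from skimage.transform import integral_image
from skimage.feature import haar_like_feature
from sklearn.svm import SVC

def haar_features(patch):
    """Two-rectangle ('type-2-x') Haar-like responses over one grayscale patch."""
    ii = integral_image(patch)
    return haar_like_feature(ii, 0, 0, patch.shape[1], patch.shape[0],
                             feature_type='type-2-x')

# Placeholder data: replace with real grayscale face / non-face patches (here 12x12).
rng = np.random.default_rng(0)
face_patches = rng.integers(0, 255, (20, 12, 12)).astype(float)
nonface_patches = rng.integers(0, 255, (20, 12, 12)).astype(float)

X = np.array([haar_features(p)
              for p in np.concatenate([face_patches, nonface_patches])])
y = np.array([1] * len(face_patches) + [0] * len(nonface_patches))

clf = SVC(kernel='linear').fit(X, y)     # face vs. non-face classifier
print(clf.predict(X[:3]))                # classify the first few training patches
```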
Abstract:
The ability to detect faces in images is of critical ecological significance. It is a pre-requisite for other important face perception tasks such as person identification, gender classification and affect analysis. Here we address the question of how the visual system classifies images into face and non-face patterns. We focus on face detection in impoverished images, which allow us to explore information thresholds required for different levels of performance. Our experimental results provide lower bounds on image resolution needed for reliable discrimination between face and non-face patterns and help characterize the nature of facial representations used by the visual system under degraded viewing conditions. Specifically, they enable an evaluation of the contribution of luminance contrast, image orientation and local context on face-detection performance.
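As an illustration of how impoverished stimuli of this kind can be generated, the sketch below degrades a face image by blocky downsampling, luminance-contrast reduction, and optional inversion using Pillow; the block size and contrast factor are arbitrary assumptions, not the parameters used in the study.

```python
from PIL import Image

def impoverish(img, block_size=8, contrast_scale=0.5, invert=False):
    """Return a degraded grayscale copy: coarse blocks, reduced luminance
    contrast around mid-grey, and (optionally) upside-down orientation."""
    gray = img.convert('L')
    w, h = gray.size
    small = gray.resize((max(1, w // block_size), max(1, h // block_size)),
                        Image.BILINEAR)
    coarse = small.resize((w, h), Image.NEAREST)   # blocky low-resolution version
    coarse = coarse.point(lambda p: int(128 + (p - 128) * contrast_scale))
    if invert:
        coarse = coarse.rotate(180)                # test orientation effects
    return coarse

# Example usage (file name is a placeholder):
# impoverish(Image.open('face.jpg'), block_size=12, invert=True).save('degraded.png')
```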
Abstract:
One of the key challenges for universities is to link research experience with teaching and to promote the internationalization of studies, especially at the European level, bearing in mind that both can act as catalysts for improving teaching quality. One formula for internationalization is to run courses shared between universities in different countries, which provides an opportunity to implement new teaching methodologies. This paper presents an experience along these lines, developed between the Universitat de Girona and the University of Joensuu (Finland) within the framework of Geography studies through the course 'The faces of landscape: Catalonia and North Karelia'. The course runs over two intensive weeks, one at each university. The objective is to present and analyse different meanings of the concept of landscape, also introducing methodologies for studying the physical, ecological, and cultural aspects that can be linked to it, which are those used by the research groups of the professors responsible for the course. This theoretical part is complemented by a presentation of the characteristics and dynamics of Finnish and Catalan landscapes and by a field trip. For the practical part, multinational study groups are formed; each group works at the local scale on one of these aspects in the two countries, compares them, and gives a presentation and defence before the whole group of students and teaching staff. The working language of the course is English.
Abstract:
This project aims to use well-known methods such as Viola-Jones (detection) and EigenFaces (recognition) to detect and recognize faces in video images. To accomplish this task, a training data set is required for each method (a database consisting of images and manual annotations). From there, the application must be able to detect faces in new images and recognize them (identify whose face it is).
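A minimal sketch of this detection-plus-recognition pipeline with OpenCV is given below, assuming the contrib 'face' module is available; the training crops, label ids, crop size, and video file name are placeholders standing in for the project's annotated database.

```python
import numpy as np
import cv2

# Viola-Jones face detector shipped with OpenCV
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

# EigenFaces recognizer (requires opencv-contrib-python)
recognizer = cv2.face.EigenFaceRecognizer_create()

# Placeholder training set: replace with the annotated database of face crops.
# All crops must be grayscale and share the same size (here 100x100).
train_images = [np.random.randint(0, 255, (100, 100), dtype=np.uint8) for _ in range(4)]
train_labels = np.array([0, 0, 1, 1])
recognizer.train(train_images, train_labels)

cap = cv2.VideoCapture('video.avi')   # placeholder video file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        crop = cv2.resize(gray[y:y + h, x:x + w], (100, 100))
        label, distance = recognizer.predict(crop)
        print('detected face -> id', label, 'distance', distance)
cap.release()
```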
Abstract:
Here we show inverse fMRI activation patterns in amygdala and medial prefrontal cortex (mPFC) depending upon whether subjects interpreted surprised facial expressions positively or negatively. More negative interpretations of surprised faces were associated with greater signal changes in the right ventral amygdala, while more positive interpretations were associated with greater signal changes in the ventral mPFC. Accordingly, signal change within these two areas was inversely correlated. Thus, individual differences in the judgment of surprised faces are related to a systematic inverse relationship between amygdala and mPFC activity, a circuitry that the animal literature suggests is critical to the assessment of stimuli that predict potential positive vs negative outcomes.
Abstract:
We recently demonstrated a functional relationship between fMRI responses within the amygdala and the medial prefrontal cortex based upon whether subjects interpreted surprised facial expressions positively or negatively. In the present fMRI study, we sought to assess amygdala-medial prefrontal cortex responsivity when the interpretations of surprised faces were determined by contextual experimental stimuli, rather than subjective judgment. Subjects passively viewed individual presentations of surprised faces preceded by either a negatively or positively valenced contextual sentence (e.g., "She just found $500" vs. "She just lost $500"). Negative and positive sentences were carefully matched in terms of length, situations described, and arousal level. Negatively cued surprised faces produced greater ventral amygdala activation compared to positively cued surprised faces. Responses to negative versus positive sentences were greater within the ventrolateral prefrontal cortex, whereas responses to positive versus negative sentences were greater within the ventromedial prefrontal cortex. The present study demonstrates that amygdala response to surprised facial expressions can be modulated by negatively versus positively valenced verbal contextual information. Connectivity analyses identified candidate cortical-subcortical systems subserving this modulation.
Abstract:
BACKGROUND: Previous functional imaging studies demonstrating amygdala response to happy facial expressions have all included the presentation of negatively valenced primary comparison expressions within the experimental context. This study assessed amygdala response to happy and neutral facial expressions in an experimental paradigm devoid of primary negatively valenced comparison expressions. METHODS: Sixteen human subjects (eight female) viewed 16-sec blocks of alternating happy and neutral faces interleaved with a baseline fixation condition during two functional magnetic resonance imaging scans. RESULTS: Within the ventral amygdala, a negative correlation between happy versus neutral signal changes and state anxiety was observed. The majority of the variability associated with this effect was explained by a positive relationship between state anxiety and signal change to neutral faces. CONCLUSIONS: Interpretation of amygdala responses to facial expressions of emotion will be influenced by considering the contribution of each constituent condition within a greater subtractive finding, as well as 1) their spatial location within the amygdaloid complex; and 2) the experimental context in which they were observed. Here, an observed relationship between state anxiety and ventral amygdala response to happy versus neutral faces was explained by response to neutral faces.
Variations in the human cannabinoid receptor (CNR1) gene modulate striatal responses to happy faces.
Abstract:
Happy facial expressions are innate social rewards and evoke a response in the striatum, a region known for its role in reward processing in rats, primates, and humans. The cannabinoid receptor 1 (CNR1) is the best-characterized molecule of the endocannabinoid system, which is involved in processing rewards. We hypothesized that genetic variation in the human CNR1 gene would predict differences in the striatal response to happy faces. In a 3T functional magnetic resonance imaging (fMRI) study of 19 Caucasian volunteers, we report that four single nucleotide polymorphisms (SNPs) in the CNR1 locus modulate differential striatal response to happy but not to disgust faces. This suggests a role for variation in the CNR1 gene in social reward responsivity. Future studies should aim to replicate this finding with a balanced design in a larger sample, but these preliminary results suggest that neural responsivity to emotional and socially rewarding stimuli varies as a function of CNR1 genotype. This has implications for medical conditions involving hypo-responsivity to emotional and social stimuli, such as autism.
Abstract:
Individuals with social phobia display social information processing biases, yet their aetiological significance is unclear. Responses of infants of mothers with social phobia and of control infants were assessed at 10 days, 10 and 16 weeks, and 10 months to faces versus non-faces, to variations in the intensity of emotional expressions, and to gaze direction. Infant temperament and maternal behaviours were also assessed. Both groups showed a preference for faces over non-faces at 10 days and 10 weeks, and for full faces over profiles at 16 weeks; they also looked more at high- versus low-intensity angry faces at 10 weeks and at fearful faces at 10 months. However, index infants' initial orienting and overall looking to high-intensity fear faces were lower than controls' at 10 weeks. This was not explained by infant temperament or maternal behaviours. The findings suggest that offspring of mothers with social phobia show processing biases to emotional expressions in infancy.
Abstract:
The goal of this study was to examine behavioral and electrophysiological correlates of involuntary orienting toward rapidly presented angry faces in non-anxious, healthy adults using a dot-probe task in conjunction with high-density event-related potentials and a distributed source localization technique. Consistent with previous studies, participants showed hypervigilance toward angry faces, as indexed by facilitated response time for validly cued probes following angry faces and an enhanced P1 component. An opposite pattern was found for happy faces suggesting that attention was directed toward the relatively more threatening stimuli within the visual field (neutral faces). Source localization of the P1 effect for angry faces indicated increased activity within the anterior cingulate cortex, possibly reflecting conflict experienced during invalidly cued trials. No modulation of the early C1 component was found for affect or spatial attention. Furthermore, the face-sensitive N170 was not modulated by emotional expression. Results suggest that the earliest modulation of spatial attention by face stimuli is manifested in the P1 component, and provide insights about mechanisms underlying attentional orienting toward cues of threat and social disapproval.
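To make the dot-probe logic concrete, a small sketch of the behavioural validity-effect computation is shown below; the trial-level data are made-up numbers for illustration, not the study's results.

```python
import pandas as pd

# Hypothetical trial-level data: cue emotion, probe validity, and response time (ms).
trials = pd.DataFrame({
    'emotion':  ['angry', 'angry', 'happy', 'happy'] * 2,
    'validity': ['valid', 'invalid'] * 4,
    'rt':       [480, 512, 505, 498, 476, 520, 510, 495],
})

mean_rt = trials.groupby(['emotion', 'validity'])['rt'].mean().unstack()
# Positive bias = faster responses to probes appearing where the face was (vigilance).
bias = mean_rt['invalid'] - mean_rt['valid']
print(bias)
```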