999 results for Face perception
Abstract:
Human faces and bodies are both complex and interesting perceptual objects, and both convey important social information. Given these similarities between faces and bodies, we can ask how similar the visual processing mechanisms used to recognize them are. It has long been argued that faces are subject to dedicated and unique perceptual processes, but until recently, relatively little research has focused on how we perceive the human body. Some recent paradigms indicate that faces and bodies are processed differently; others show similarities in face and body perception. These similarities and differences depend on the type of perceptual task and the level of processing involved. Future research should take these issues into account.
Abstract:
Perception and recognition of faces are fundamental cognitive abilities that form a basis for our social interactions. Research has investigated face perception using a variety of methodologies across the lifespan. Habituation, novelty preference, and visual paired comparison paradigms are typically used to investigate face perception in young infants. Storybook recognition tasks and eyewitness lineup paradigms are generally used to investigate face perception in young children. These methodologies have introduced systematic differences, including the use of linguistic information for children but not infants, greater memory load for children than infants, and longer exposure times to faces for infants than for older children, making comparisons across age difficult. Thus, research investigating infant and child perception of faces using common methods, measures, and stimuli is needed to better understand how face perception develops. According to predictions of the Intersensory Redundancy Hypothesis (IRH; Bahrick & Lickliter, 2000, 2002), in early development, perception of faces is enhanced in unimodal visual (i.e., silent dynamic face) rather than bimodal audiovisual (i.e., dynamic face with synchronous speech) stimulation. The current study investigated the development of face recognition across children of three ages (5–6 months, 18–24 months, and 3.5–4 years), using the novelty preference paradigm and the same stimuli for all age groups. It also assessed the role of modality (unimodal visual versus bimodal audiovisual) and memory load (low versus high) in face recognition. It was hypothesized that face recognition would improve with age and would be enhanced in unimodal visual stimulation with a low memory load. Results demonstrated a developmental trend (F(2, 90) = 5.00, p = 0.009), with older children showing significantly better recognition of faces than younger children. In contrast to predictions, no differences were found as a function of modality of presentation (bimodal audiovisual versus unimodal visual) or memory load (low versus high). This study was the first to demonstrate a developmental improvement in face recognition from infancy through childhood using common methods, measures, and stimuli across age.
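As a concrete illustration of the group comparison reported above, the sketch below runs a one-way ANOVA on simulated novelty-preference scores (proportion of looking to the novel face) across three age groups. The group sizes, means, and variances are invented; only the form of the analysis mirrors the abstract's F(2, 90) test.

```python
# Minimal sketch of a one-way ANOVA on novelty-preference scores across three
# age groups. All data here are simulated for illustration; only the analysis
# structure mirrors the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Novelty preference = looking time to novel face / total looking time.
# Chance is 0.5; hypothetical group means increase with age.
infants_5_6mo = rng.normal(0.52, 0.08, 31)
toddlers_18_24mo = rng.normal(0.55, 0.08, 31)
preschoolers_3_4yr = rng.normal(0.60, 0.08, 31)

f_stat, p_value = stats.f_oneway(infants_5_6mo, toddlers_18_24mo, preschoolers_3_4yr)
print(f"F(2, {3 * 31 - 3}) = {f_stat:.2f}, p = {p_value:.3f}")
```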
Abstract:
Whereas previous research has demonstrated that trait ratings of faces at encoding lead to enhanced recognition accuracy compared with feature ratings, this set of experiments examines whether ratings given after encoding, just prior to recognition, influence face recognition accuracy. In Experiment 1, subjects who made feature ratings just prior to recognition were significantly less accurate than subjects who made no ratings or trait ratings. In Experiment 2, ratings were manipulated at both encoding and retrieval. The retrieval effect was smaller and nonsignificant, but a combined probability analysis showed that it was significant when the results from both experiments were considered jointly. In a third experiment, exposure duration at retrieval, a potentially confounding factor in Experiments 1 and 2, had a nonsignificant effect on recognition accuracy, suggesting that it probably does not explain the results of Experiments 1 and 2. These experiments demonstrate that face recognition accuracy can be influenced by processing instructions at retrieval.
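The "combined probability analysis" mentioned above is often done with Fisher's method, which pools p-values from independent experiments. Whether these authors used Fisher's method specifically is an assumption here, and the p-values below are placeholders, not the study's actual results.

```python
# A minimal sketch of a combined probability analysis across two experiments,
# using Fisher's method (scipy.stats.combine_pvalues). That the authors used
# Fisher's method specifically is an assumption; the p-values are placeholders.
from scipy.stats import combine_pvalues

p_experiment_1 = 0.03   # hypothetical retrieval effect, Experiment 1
p_experiment_2 = 0.11   # hypothetical (nonsignificant) effect, Experiment 2

stat, p_combined = combine_pvalues([p_experiment_1, p_experiment_2], method="fisher")
print(f"chi2 = {stat:.2f}, combined p = {p_combined:.3f}")
```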
Abstract:
Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations, and it is a focus of research in fields as diverse as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system, and multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings to make interaction more natural and to improve system performance. Here, we experimentally test whether automatic prediction of facial trait judgments (e.g., dominance) can be made using the full appearance information of the face, or whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information, and a structural model constructed from the relations among facial salient points. State-of-the-art machine learning methods are applied to (a) derive a facial trait judgment model from training data and (b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that (a) prediction of perception of facial traits is learnable by both holistic and structural approaches; (b) the most reliable prediction of facial trait judgments is obtained by certain types of holistic descriptions of the face appearance; and (c) for some traits, such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.
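A minimal sketch of the structural approach described above: predict a perceived trait from the pairwise distances between facial landmarks with a regularized regression. The landmarks, ratings, and model choice (ridge regression) are illustrative assumptions, not the paper's actual pipeline.

```python
# Sketch of a "structural" trait-prediction model: perceived trait ratings
# regressed on relations (pairwise distances) among facial salient points.
# Landmarks and ratings are simulated; the paper's features and learners differ.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_faces, n_landmarks = 200, 20

# Hypothetical (x, y) landmark positions for each face.
landmarks = rng.normal(0.0, 1.0, (n_faces, n_landmarks, 2))

# Structural representation: all pairwise distances between landmarks.
i, j = np.triu_indices(n_landmarks, k=1)
X = np.linalg.norm(landmarks[:, i, :] - landmarks[:, j, :], axis=2)

# Hypothetical trait ratings, weakly related to a few distances plus noise.
y = 0.8 * X[:, 0] - 0.5 * X[:, 3] + rng.normal(0.0, 0.5, n_faces)

model = Ridge(alpha=1.0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"mean cross-validated R^2: {scores.mean():.2f}")
```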
Abstract:
We measured human frequency response functions for seven angular frequency filters whose test frequencies were centered at 1, 2, 3, 4, 8, 16, or 24 cycles/360°, using a supra-threshold summation method. The seven functions, each comprising 17 experimental conditions, were measured nine times for five observers. For the arbitrarily selected filter phases, the maximum summation effect occurred at the test frequency for filters at 1, 2, 3, 4, and 8 cycles/360°. For both the 16 and 24 cycles/360° test frequencies, maximum summation occurred at the lower harmonics. These results allow us to conclude that there are narrow-band angular frequency filters operating in the human visual system, through either summation or inhibition of specific frequency ranges. Furthermore, as a general result, it appears that adding higher angular frequencies to lower ones disturbs low angular frequency perception (i.e., 1, 2, 3, and 4 cycles/360°), whereas adding lower harmonics to higher ones seems to improve detection of high angular frequency harmonics (i.e., 8, 16, and 24 cycles/360°). Finally, we discuss the possible involvement of coupled radial and angular frequency filters in face perception, using an example in which narrow-band low angular frequency filters could play a major role.
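For readers unfamiliar with this class of stimuli, the sketch below generates a basic angular frequency pattern: luminance modulated sinusoidally with polar angle (here 4 cycles/360°). The image size, contrast, and circular windowing are arbitrary assumptions, and the study's specific filter phases are not reproduced.

```python
# Sketch of an angular frequency stimulus: luminance modulated sinusoidally
# with polar angle, e.g. 4 cycles/360 deg. Display parameters are arbitrary.
import numpy as np

size = 256                      # image width/height in pixels
freq = 4                        # angular frequency in cycles/360 deg
contrast = 0.5

y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
theta = np.arctan2(y, x)        # polar angle at each pixel
radius = np.hypot(x, y)

stimulus = 0.5 * (1 + contrast * np.cos(freq * theta))
stimulus[radius > 1] = 0.5      # uniform gray outside a circular window
```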
Abstract:
We measured human contrast sensitivity to radial frequencies modulated by cylindrical (J0) and spherical (j0) Bessel profiles. We also measured responses to profiles of j0, j1, j2, j4, j8, and j16. Functions were measured three times by at least three of eight observers using a forced-choice method. The results conform to our expectation that sensitivity would be higher for cylindrical profiles. We also observed that contrast sensitivity increased with the order n of jn for n greater than zero, with distinct orderly effects at the low- and high-frequency ends. For n = 0, 1, 2, and 4, sensitivity tended to peak around 0.8-1.0 cpd, while for n = 8 and 16 it seemed to shift gradually toward 0.8-3.0 cpd. We interpret these results as consistent with the possibility that spatial frequency processing by the human visual system can be defined a priori in terms of polar coordinates, and we discuss its application to the study of face perception.
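The two profile families named above are standard special functions, so their radial luminance profiles can be sketched directly with scipy; the spatial scaling below is an arbitrary assumption rather than the study's actual stimulus parameters.

```python
# Sketch of the two radial profile classes above: a cylindrical Bessel profile
# J0 and spherical Bessel profiles jn. Spatial scaling is arbitrary.
import numpy as np
from scipy.special import j0, spherical_jn

r = np.linspace(0.0, 10.0, 512)       # radial distance in degrees (arbitrary)

cylindrical_profile = j0(2 * np.pi * r)              # J0: cylindrical Bessel
spherical_profiles = {
    n: spherical_jn(n, 2 * np.pi * r)                # jn: spherical Bessel
    for n in (0, 1, 2, 4, 8, 16)
}
```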
Abstract:
Previous studies have shown that adults and 8-year-olds process faces using norm-based coding and that prolonged exposure to one kind of facial distortion (e.g., compressed features) temporarily shifts the prototype, a process called adaptation, making similarly distorted faces appear more attractive (Anzures et al., 2009; Valentine, 1999; Webster & MacLin, 1999). Aftereffects provide evidence that our prototype is continually updated by experience. When adults are adapted to two face categories (e.g., Caucasian and Chinese; male and female) distorted in opposing directions (e.g., expanded vs. compressed), their attractiveness ratings shift in opposite directions (Bestelmeyer et al., 2008; Jaquet et al., 2007), indicating that adults have dissociable prototypes for some face categories. I created a novel method to investigate whether children show opposing aftereffects. Children and adults were adapted to Caucasian and Chinese faces distorted in opposite directions in the context of a computerized storybook. When testing adults to validate my method, I discovered that opposing aftereffects are contingent on how participants categorize faces and that this categorization is dependent on the context in which adapting stimuli are presented. Opposing aftereffects for Caucasian and Chinese faces were evident when the salience of race was exaggerated by presenting faces in the context of racially segregated birthday parties; expanded faces were selected as most normal more often for the race of face that was expanded during adaptation than for the race of face that was compressed. However, opposing aftereffects were not evident when members of the two groups were presented engaging in cooperative social interactions at a racially integrated birthday party. Using the storybook that emphasized face race, I provide the first evidence that 8-year-olds demonstrate opposing aftereffects for two face categories defined by race, both when judging face normality and when rating attractiveness.
Abstract:
The current set of studies was conducted to examine the cross-race effect (CRE), a phenomenon commonly found in the face perception literature. The CRE is evident when participants display better own-race face recognition accuracy than other-race recognition accuracy (e.g., Ackerman et al., 2006). Typically, the cross-race effect is attributed to perceptual expertise (i.e., other-race faces are processed less holistically; Michel, Rossion, Han, Chung & Caldara, 2006) or to the social cognitive model (i.e., other-race faces are processed at the categorical level by virtue of belonging to an out-group; Hugenberg, Young, Bernstein, & Sacco, 2010). These effects may be mediated by differential attention. I investigated whether other-race faces are disregarded and, consequently, not remembered as accurately as own-race (in-group) faces. In Experiment 1, I examined how the magnitude of the CRE differed when participants learned individual faces sequentially versus when they learned multiple faces simultaneously in arrays comprising faces and objects. I also examined how the CRE differed when participants recognized individual faces presented sequentially versus in arrays of eight faces. Participants’ recognition accuracy was better for own-race faces than other-race faces regardless of familiarization method. However, the difference between own- and other-race accuracy was larger when faces were familiarized sequentially than when they were familiarized in arrays. Participants’ response patterns during testing differed depending on the combination of familiarization and testing method. Participants had more false alarms for other-race faces than own-race faces if they learned faces sequentially (regardless of testing strategy); if participants learned faces in arrays, they had more false alarms for other-race faces than own-race faces if they were tested with sequentially presented faces. These results are consistent with the perceptual expertise model in that participants were better able to use the full two seconds in the sequential task for own-race faces, but not for other-race faces. The purpose of Experiment 2 was to examine participants’ attentional allocation in complex scenes. Participants were shown scenes comprising people in real places, but the head stimuli used in Experiment 1 were superimposed onto the bodies in each scene. Using a Tobii eyetracker, participants’ looking times for own- and other-race faces were evaluated to determine whether participants looked longer at own-race faces and whether individual differences in looking time correlated with individual differences in recognition accuracy. The results demonstrated that although own-race faces were preferentially attended to in comparison to other-race faces, individual differences in looking-time biases towards own-race faces did not correlate with individual differences in own-race recognition advantages. These results are also consistent with perceptual expertise, as it seems that the role of attentional biases towards own-race faces is independent of the cognitive processing that occurs for own-race faces. Altogether, these results have implications for face perception tasks that are performed in the lab, for how accurate people may be when remembering faces in the real world, and for the accuracy and patterns of errors in eyewitness testimony.
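The individual-differences question above (does a looking-time bias track the own-race recognition advantage?) can be sketched as follows, using d-prime (d′) as one common recognition-accuracy measure and a Pearson correlation. All data are simulated, and the use of d′ is an assumption rather than the thesis's confirmed measure.

```python
# Sketch of the individual-differences analysis: a per-participant recognition
# measure (d') for own- and other-race faces, correlated with a looking-time
# bias. All data are simulated; d' is one common choice, not necessarily the
# study's actual measure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_participants = 40

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity from hit and false-alarm rates."""
    return stats.norm.ppf(hit_rate) - stats.norm.ppf(fa_rate)

# Hypothetical hit/false-alarm rates per participant.
own_d = d_prime(rng.uniform(0.7, 0.9, n_participants),
                rng.uniform(0.1, 0.3, n_participants))
other_d = d_prime(rng.uniform(0.6, 0.8, n_participants),
                  rng.uniform(0.2, 0.4, n_participants))

recognition_advantage = own_d - other_d
looking_bias = rng.normal(0.1, 0.05, n_participants)  # own-race looking bias

r, p = stats.pearsonr(looking_bias, recognition_advantage)
print(f"r = {r:.2f}, p = {p:.3f}")
```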
Abstract:
The human voice is the dominant part of our auditory environment. Not only do humans use the voice for speech, they are equally skilled at extracting from it a wealth of relevant information about the speaker. This universal expertise for the human voice is reflected in the presence of voice-preferential areas along the superior temporal sulci. To date, few data inform us about the nature and development of this voice-selective response. In the visual domain, a vast literature addresses a similar problem with respect to face perception. The study of visual experts has identified the processes and regions involved in their expertise and has demonstrated a strong resemblance to those used for faces. In the auditory domain, very few studies have compared expertise for the voice with expertise for other auditory categories, even though such comparisons could contribute to a better understanding of vocal and auditory perception. The aim of this thesis is to clarify the specificity of the processes and regions involved in voice processing. To this end, different types of experts were recruited and different experimental methods were used. The first study assessed the influence of musical expertise on the processing of the human voice, using behavioral voice- and musical-instrument-discrimination tasks. The results showed that amateur musicians were better than non-musicians at discriminating not only the timbres of musical instruments but also human voices, suggesting a generalization of the perceptual learning induced by musical practice. The second study compared auditory evoked potentials to birdsong between amateur birdwatchers and novice participants. The observation of a different topographic distribution in the birdwatchers for all three sound categories (voices, birdsong, environmental sounds) made the results difficult to interpret. The third study sought to clarify the role of the temporal voice areas in the processing of categories of expertise in two groups of auditory experts: amateur birdwatchers and luthiers. Behavioral data demonstrated an interaction between the two expert groups and their respective categories of expertise in discrimination and memorization tasks. Functional magnetic resonance imaging results demonstrated an interaction of the same type in the left superior temporal sulcus and the left posterior cingulate gyrus. Thus, the voice areas are involved in processing stimuli of expertise in two different groups of auditory experts. This result suggests that the selectivity for the human voice found in the superior temporal sulci could be explained by prolonged exposure to these stimuli. The data presented demonstrate several behavioral and anatomo-functional similarities between the processing of the voice and that of other categories of expertise. These commonalities can be explained by an organization of the brain that is both functional and economical. Consequently, the processing of the voice and of other sound categories would rely on the same neural networks, except when deeper processing is required. This interpretation is particularly important for proposing an integrative approach to the specificity of voice processing.
Abstract:
Social deficits, including disrupted processing of gaze and of emotions, are at the core of autism. Studies have shown that, in typically developing individuals, fearful faces trigger a rapid and involuntary orienting of spatial attention toward their location. Moreover, these individuals detect faces with a direct gaze (vs. an averted gaze) faster and more efficiently. The present study explores the effect of the emotion of fear and of gaze direction (direct vs. averted) on spatial attention in autistic children, using an implicit spatial attention task. Six children with autistic disorder (AD) participated in this study. Participants had to detect the appearance of a target on the left or right side of a screen. The appearance of the target was preceded by a prime (a pair of fearful/neutral faces with direct/averted gaze). The target could be presented either in the same visual field as the emotionally charged prime (valid condition) or in the opposite visual field (invalid condition). Our results show that primes containing a fearful face (vs. primes with a neutral face) produce an interference effect at the behavioral level and divert attention away from their location in children with AD.
Abstract:
The ability to detect faces in images is of critical ecological significance. It is a prerequisite for other important face perception tasks such as person identification, gender classification, and affect analysis. Here we address the question of how the visual system classifies images into face and non-face patterns. We focus on face detection in impoverished images, which allows us to explore the information thresholds required for different levels of performance. Our experimental results provide lower bounds on the image resolution needed for reliable discrimination between face and non-face patterns and help characterize the nature of the facial representations used by the visual system under degraded viewing conditions. Specifically, they enable an evaluation of the contributions of luminance contrast, image orientation, and local context to face-detection performance.
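One simple way to create impoverished stimuli of the kind described above is to reduce an image's effective resolution by block-averaging and re-expanding it. The sketch below does exactly that; the block sizes and the stand-in image are placeholders, not the study's actual degradation procedure.

```python
# Sketch of one way to create impoverished face stimuli: downsample an image
# to a small block grid and re-expand it, limiting available spatial detail.
# Target resolutions are placeholders, not the study's actual parameters.
import numpy as np

def impoverish(image, blocks):
    """Reduce an image to `blocks` x `blocks` effective resolution."""
    h, w = image.shape
    small = image[: h - h % blocks, : w - w % blocks]
    small = small.reshape(blocks, h // blocks, blocks, w // blocks).mean(axis=(1, 3))
    return np.kron(small, np.ones((h // blocks, w // blocks)))  # re-expand

face = np.random.default_rng(3).uniform(0.0, 1.0, (128, 128))  # stand-in image
degraded = {res: impoverish(face, res) for res in (8, 16, 32)}
```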
Abstract:
Evidence suggests that the social cognition deficits prevalent in autism spectrum disorders (ASDs) are widely distributed in first-degree and extended relatives. This “broader autism phenotype” (BAP) can be extended into non-clinical populations, which show wide distributions of social behaviors such as empathy and social responsiveness, with ASDs exhibiting these behaviors at the lower ends of the distributions. Little evidence has previously shown relationships between self-report measures of social cognition and more objective tasks such as face perception in functional magnetic resonance imaging (fMRI) and event-related potentials (ERPs). In this study, three specific hypotheses were addressed: (a) increased social ability, as measured by an increased Empathy Quotient, decreased Social Responsiveness Scale (SRS-A) score, and increased Social Attribution Task score, will predict increased activation of the fusiform gyrus in response to faces as compared to houses; (b) these same measures will predict N170 amplitude and latency, showing decreased latency and increased amplitude for faces as compared to houses with increased social ability; (c) increased amygdala volume will predict increased fusiform gyrus activation when viewing faces as compared to houses. Findings supported all of the hypotheses. Empathy scores significantly predicted both right FFG activation [F(1,20) = 4.811, p = .041, β = .450, R² = 0.20] and left FFG activation [F(1,20) = 7.70, p = .012, β = .537, R² = 0.29]. Based on ERP results, increased right lateralization of the face-related N170 was significantly predicted by the EQ [F(1,54) = 6.94, p = .011, β = .338, R² = 0.11]. Finally, total amygdala volume significantly predicted right [F(1,20) = 7.217, p = .014, β = .515, R² = 0.27] and left [F(1,20) = 36.77, p < .001, β = .805, R² = 0.65] FFG activation. Consistent with the a priori hypotheses, traits attributed to the BAP can significantly predict neural responses to faces in a non-clinical population. This is consistent with the face-processing deficits seen in ASDs. The findings presented here contribute to the extension of the BAP from unaffected relatives of individuals with ASDs to the general population. They also give continued evidence in support of a continuous distribution of the traits found in psychiatric illnesses, in place of a traditional, dichotomous “all-or-nothing” diagnostic framework for neurodevelopmental and neuropsychiatric disorders.
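The bracketed statistics above come from simple linear regressions. The sketch below reproduces the form of such an analysis (F, p, standardized β, R²) on simulated data with statsmodels; the sample size is chosen only to match the reported degrees of freedom of F(1, 20).

```python
# Sketch of a simple linear regression predicting fusiform gyrus (FFG)
# activation from empathy scores, yielding F, p, standardized beta, and R^2.
# Data are simulated; only the form of the analysis is illustrated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 22                                   # matches df of F(1, 20)

empathy = rng.normal(40.0, 10.0, n)      # hypothetical Empathy Quotient scores
ffg_activation = 0.02 * empathy + rng.normal(0.0, 0.3, n)

# Standardize both variables so the slope is a standardized beta.
def z(v):
    return (v - v.mean()) / v.std(ddof=1)

model = sm.OLS(z(ffg_activation), sm.add_constant(z(empathy))).fit()
print(f"F(1, {int(model.df_resid)}) = {model.fvalue:.2f}, "
      f"p = {model.f_pvalue:.3f}, beta = {model.params[1]:.3f}, "
      f"R^2 = {model.rsquared:.2f}")
```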
Abstract:
Autism spectrum disorders (ASD) are pervasive developmental disorders that affect approximately 1 in 50 children (Blumberg et al., 2013). Because of the social nature of the deficits that characterize the disorders, many have classified them as disorders of social cognition, the process that individuals use to interact successfully with members of their own species (Frith & Frith, 2007). Previous research has typically neglected the spectrum nature of ASD in favor of a more categorical approach of “autistic” versus “non-autistic,” but the spectrum requires a more continuous approach. Thus, the present study sought to examine the genetic, social-cognitive, and neural correlates of ASD-like traits, as well as the relationships between these dimensions, in typically developing children. Parents and children completed quantitative measures examining several areas of social-cognitive functioning, including theory of mind and social functioning, restricted/repetitive behaviors and interests, and adaptive/maladaptive functioning. Children were also asked to undergo an EEG, and both parents and children contributed a saliva sample that was used to sequence four single nucleotide polymorphisms (SNPs) of the OXTR gene: rs1042778, rs53576, rs2254298, and rs237897. We successfully demonstrated a significant relationship between behavioral measures of social cognition and differences in face perception via the N170. However, the directionality of these relationships varied with the behavioral measure and the particular N170 difference scores. We also found support for associations between the G_G allelic combination of rs1042778, and the A_A and A_G allelic combinations of rs2254298, and increased ASD-like behavior with decreased social-cognitive functioning. In contrast, our results contradict previous findings with rs237897 and imply that individuals with the A_A and A_G genotypes are less similar to those with ASD and have higher social-cognitive functioning than those with the G_G genotype. In conclusion, we have demonstrated the existence of ASD-like traits in typically developing children and have shown a link between behavioral, genetic, and neural correlates of social cognition. These findings demonstrate the importance of considering autism as a spectrum disorder and provide support for the move to a more continuous approach to neurodevelopmental disorders.
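A genotype-phenotype association of the kind reported above is often tested by comparing a behavioral score across genotype groups. The sketch below does this for a single SNP with a one-way ANOVA; the genotype labels, group sizes, and scores are all invented for illustration and do not reflect the study's actual data or statistics.

```python
# Sketch of a genotype-phenotype association test: compare a social-cognition
# score across genotype groups for one SNP. Genotype labels, group sizes, and
# scores are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical social-cognition scores grouped by genotype.
scores_by_genotype = {
    "G_G": rng.normal(48.0, 10.0, 30),
    "G_T": rng.normal(52.0, 10.0, 30),
    "T_T": rng.normal(53.0, 10.0, 30),
}

n_total = sum(len(v) for v in scores_by_genotype.values())
f_stat, p = stats.f_oneway(*scores_by_genotype.values())
print(f"F(2, {n_total - 3}) = {f_stat:.2f}, p = {p:.3f}")
```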
Abstract:
To perceive a coherent environment, incomplete or overlapping visual forms must be integrated into meaningful coherent percepts, a process referred to as “Gestalt” formation or perceptual completion. Increasing evidence suggests that this process engages oscillatory neuronal activity in a distributed neuronal assembly. A separate line of evidence suggests that Gestalt formation requires top-down feedback from higher-order brain regions to early visual cortex. Here we combine magnetoencephalography (MEG) and effective connectivity analysis in the frequency domain to specifically address the effective coupling between sources of oscillatory brain activity during Gestalt formation. We demonstrate that perceptual completion of two-tone “Mooney” faces induces increased gamma frequency band power (55-71 Hz) in human early visual, fusiform and parietal cortices. Within this distributed neuronal assembly, fusiform and parietal gamma oscillators are coupled by forward and backward connectivity during Mooney face perception, indicating reciprocal influences of gamma activity between these higher-order visual brain regions. Critically, gamma band oscillations in early visual cortex are modulated by top-down feedback connectivity from both fusiform and parietal cortices. Thus, we provide a mechanistic account of Gestalt perception in which gamma oscillations in feature-sensitive and spatial attention-relevant brain regions reciprocally drive one another and convey global stimulus aspects to local processing units at low levels of the sensory hierarchy by top-down feedback. Our data therefore support the notion of inverse hierarchical processing within the visual system underlying awareness of coherent percepts.
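As a minimal illustration of the band-power measure at the center of these results, the sketch below band-pass filters a synthetic signal at 55-71 Hz and takes its squared Hilbert envelope. The sampling rate, burst frequency, and filter order are assumptions; the actual MEG source modelling and connectivity pipeline is far more involved.

```python
# Sketch of gamma-band (55-71 Hz) power extraction from a time series:
# band-pass filter, then take the squared Hilbert envelope. The signal is
# synthetic; the study's MEG/connectivity pipeline is far more involved.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 600.0                                   # sampling rate in Hz (assumed)
t = np.arange(0.0, 2.0, 1.0 / fs)

# Synthetic signal: a 63 Hz gamma burst in the second half, plus noise.
signal = np.random.default_rng(5).normal(0.0, 1.0, t.size)
signal[t > 1.0] += 0.8 * np.sin(2 * np.pi * 63.0 * t[t > 1.0])

b, a = butter(4, [55.0, 71.0], btype="bandpass", fs=fs)
gamma = filtfilt(b, a, signal)
gamma_power = np.abs(hilbert(gamma)) ** 2    # instantaneous band power

print(f"mean gamma power, baseline: {gamma_power[t <= 1.0].mean():.3f}, "
      f"stimulus: {gamma_power[t > 1.0].mean():.3f}")
```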
Abstract:
The recognition of faces and of facial expressions is an important evolutionary skill and an integral part of social communication. It has been argued that the processing of faces is distinct from the processing of non-face stimuli, and functional neuroimaging investigations have even found evidence of a distinction between the perception of faces and that of emotional expressions. Structural and temporal correlates of face perception and facial affect have so far only been identified separately. Investigating the neural dynamics of face perception per se, as well as of facial affect, would allow these to be mapped in space-, time-, and frequency-specific domains. Participants were asked to perform face categorisation and emotional discrimination tasks, and magnetoencephalography (MEG) was used to measure the neurophysiology of face and facial emotion processing. SAM analysis techniques enable the investigation of spectral changes within specific time-windows and frequency bands, thus allowing the identification of stimulus-specific regions of cortical power changes. Furthermore, MEG’s excellent temporal resolution allows for the detection of subtle changes associated with the processing of face and non-face stimuli and of different emotional expressions. The data presented reveal that face perception is associated with spectral power changes within a distributed cortical network comprising occipito-temporal as well as parietal and frontal areas. For the perception of facial affect, spectral power changes were also observed within frontal and limbic areas, including the parahippocampal gyrus and the amygdala. Analyses of temporal correlates also reveal a distinction between the processing of faces and of facial affect: face perception per se occurred at earlier latencies, whereas the discrimination of facial expression occurred within a longer time-window. In addition, the processing of faces and facial affect was differentially associated with changes in cortical oscillatory power in the alpha, beta, and gamma frequencies. The perception of faces and facial affect is thus associated with distinct changes in cortical oscillatory activity that can be mapped to specific neural structures, time-windows, latencies, and frequency bands. The work presented in this thesis therefore provides further insight into the sequential processing of faces and facial affect.
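The time-window and frequency-band logic described above can be sketched simply: estimate spectral power per band within separate post-stimulus windows. The sketch below uses Welch's method on synthetic data; the sampling rate, window boundaries, and band edges are illustrative assumptions, not the thesis's actual SAM pipeline.

```python
# Sketch of a time-window / frequency-band power analysis: average spectral
# power in alpha, beta, and gamma bands within an early and a late window,
# via Welch's method. Data, windows, and bands are illustrative only.
import numpy as np
from scipy.signal import welch

fs = 600.0
rng = np.random.default_rng(6)
trial = rng.normal(0.0, 1.0, int(fs * 1.0))          # 1 s of synthetic data

bands = {"alpha": (8, 13), "beta": (13, 30), "gamma": (30, 80)}
windows = {"early": (0.0, 0.3), "late": (0.3, 1.0)}  # seconds post-stimulus

for wname, (t0, t1) in windows.items():
    segment = trial[int(t0 * fs): int(t1 * fs)]
    freqs, psd = welch(segment, fs=fs, nperseg=min(128, segment.size))
    for bname, (lo, hi) in bands.items():
        power = psd[(freqs >= lo) & (freqs < hi)].mean()
        print(f"{wname} window, {bname} band: {power:.4f}")
```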