989 results for face processing
Abstract:
A large variety of social signals, such as facial expression and body language, are conveyed in everyday interactions, and accurate perception and interpretation of these social cues is necessary for reciprocal social interactions to take place successfully and efficiently. The present study was conducted to determine whether impairments in social functioning that are commonly observed following a closed head injury could be at least partially attributable to a disruption in the ability to appreciate social cues. More specifically, an attempt was made to determine whether face processing deficits following a closed head injury (CHI) coincide with changes in electrophysiological responsivity to the presentation of facial stimuli. A number of event-related potentials (ERPs) that have been linked specifically to various aspects of visual processing were examined. These included the N170, an index of structural encoding ability; the N400, an index of the ability to detect differences in serially presented stimuli; and the Late Positivity (LP), an index of sensitivity to affective content in visually presented stimuli. Electrophysiological responses were recorded while participants with and without a closed head injury were presented with pairs of faces delivered in rapid sequence and asked to compare them on the basis of whether they matched with respect to identity or emotion. Other behavioural measures of identity and emotion recognition were also employed, along with a small battery of standard neuropsychological tests used to determine general levels of cognitive impairment. Participants in the CHI group were impaired in a number of cognitive domains that are commonly affected following a brain injury. These impairments included reduced efficiency in various aspects of encoding verbal information into memory, a generally slower rate of information processing, decreased sensitivity to smell, greater difficulty in the regulation of emotion, and limited awareness of this impairment. Impairments in face and emotion processing were clearly evident in the CHI group. However, despite these impairments in face processing, there were no significant differences between groups in the electrophysiological components examined. The only exception was a trend indicating delayed N170 peak latencies in the CHI group (p = .09), which may reflect inefficient structural encoding processes. In addition, group differences were noted in the region of the N100, thought to reflect very early selective attention. It is possible, then, that facial expression and identity processing deficits following CHI are secondary to (or exacerbated by) an underlying disruption of very early attentional processes. Alternatively, the difficulty may arise in the later cognitive stages involved in the interpretation of the relevant visual information. However, the present data do not allow these alternatives to be distinguished. Nonetheless, it was clearly evident that individuals with CHI are more likely than controls to make face processing errors, particularly for the more difficult-to-discriminate negative emotions. Those working with individuals who have sustained a head injury should be alerted to this potential source of the social monitoring difficulties often observed as part of the sequelae following a CHI.
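To make the ERP measures concrete, the sketch below shows one common way to quantify an N170 peak (amplitude and latency) from an averaged occipito-temporal waveform. This is an illustrative Python example with synthetic data only; the sampling rate, search window, and array names are assumptions, not the study's actual analysis pipeline.

```python
import numpy as np

def n170_peak(erp, times, window=(0.13, 0.20)):
    """Return peak amplitude and latency of the N170 negativity.

    erp    : 1-D array, averaged voltage at an occipito-temporal electrode (µV)
    times  : 1-D array of time points (s) aligned with `erp`
    window : assumed N170 search window (130-200 ms post-stimulus)
    """
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.argmin(erp[mask])               # N170 is a negative deflection
    return erp[mask][idx], times[mask][idx]

# Example with synthetic data: 500 Hz sampling, -100 to 500 ms epoch
times = np.arange(-0.1, 0.5, 1 / 500)
erp = np.random.randn(times.size) * 0.5
erp[(times > 0.14) & (times < 0.18)] -= 4.0  # inject an artificial N170
amp, lat = n170_peak(erp, times)
print(f"N170 amplitude {amp:.1f} µV at {lat * 1000:.0f} ms")
```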
Abstract:
The present set of experiments was designed to investigate the development of children's sensitivity to facial expressions observed within emotional contexts. Past research investigating both adults' and children's perception of facial expressions has been limited primarily to the presentation of isolated faces. During daily social interactions, however, facial expressions are encountered within contexts conveying emotions (e.g., background scenes, body postures, gestures). Recently, research has shown that adults' perception of facial expressions is influenced by these contexts. When emotional faces are shown in incongruent contexts (e.g., when an angry face is presented in a context depicting fear), adults' accuracy decreases and their reaction times increase (e.g., Meeren et al., 2005). To examine the influence of emotional body postures on children's perception of facial expressions, in each of the experiments in the current study adults and 8-year-old children made two-alternative forced-choice decisions about facial expressions presented in congruent (e.g., a face displaying sadness on a body displaying sadness) and incongruent (e.g., a face displaying fear on a body displaying sadness) contexts. Consistent with previous studies, a congruency effect (better performance on congruent than incongruent trials) was found for both adults and 8-year-olds when the emotions displayed by the face and body were similar to each other (e.g., fear and sadness; Experiment 1a); the influence of context was greater for 8-year-olds than adults for these similar expressions. To further investigate why the congruency effect was larger for children than adults in Experiment 1a, Experiment 1b examined whether increased task difficulty would increase the magnitude of adults' congruency effects. Adults were presented with subtle facial expressions, and despite successfully increasing task difficulty, the magnitude of the congruency effect did not increase, suggesting that the difference between children's and adults' congruency effects in Experiment 1a cannot be explained by 8-year-olds finding the task difficult. In contrast, congruency effects were not found when the expressions displayed by the face and body were dissimilar (e.g., sad and happy; see Experiment 2). The results of the current set of studies are examined with respect to the Dimensional theory, the Emotional Seed model, and the developmental timeline of children's sensitivity to facial expressions. A secondary aim of the series of studies was to examine one possible mechanism underlying congruency effects: holistic processing. To examine the influence of holistic processing, participants completed both aligned trials and misaligned trials in which the faces were detached from the body (designed to disrupt holistic processing). Based on the principles of holistic face processing, we predicted that participants would benefit from misalignment of the face and body stimuli on incongruent trials but not on congruent trials. Collectively, our results provide some evidence that both adults and children may process emotional faces and bodies holistically. Consistent with the pattern of results for congruency effects, the magnitude of the misalignment effect varied with the similarity between emotions. Future research is required to further investigate whether facial expressions and emotions conveyed by the body are perceived holistically.
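As an illustration of how a congruency effect of the kind reported here can be quantified, the hedged Python sketch below computes accuracy separately for congruent and incongruent trials and takes the per-participant difference. The trial table and column names are hypothetical placeholders, not the study's data.

```python
import pandas as pd

# Hypothetical trial-level data: one row per two-alternative forced-choice trial.
trials = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "condition":   ["congruent", "congruent", "incongruent", "incongruent"] * 2,
    "correct":     [1, 1, 1, 0, 1, 0, 0, 0],
})

# Accuracy per participant and condition, then the congruency effect
acc = trials.groupby(["participant", "condition"])["correct"].mean().unstack()
acc["congruency_effect"] = acc["congruent"] - acc["incongruent"]
print(acc)
```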
Abstract:
Repeated visual processing of an unfamiliar face leads to a suppression of neural activity in face-preferential regions of the occipito-temporal cortex. This "neural suppression" (NS) is a primitive mechanism heavily involved in face learning and can be detected as a reduction in the amplitude of the N170 component, an event-related potential (ERP), over the occipito-temporal cortex. The dorsolateral prefrontal cortex (DLPFC) influences visual processing and encoding, but its contribution to NS of the N170 remains unknown. We used transcranial direct current stimulation (tDCS) to modulate the cortical excitability of the DLPFC in 14 healthy adults during the learning of unfamiliar faces. Three stimulation conditions were used: right inhibition, right excitation, and sham. During learning, EEG was recorded to assess NS of the P100, N170, and P300. Three days after learning, a recognition task was administered, in which accuracy (percentage of correct responses) and reaction times (RTs) were recorded. The results indicate that the right-excitation condition facilitated NS of the N170 and increased P300 amplitude, leading to faster long-term face recognition. Conversely, the right-inhibition condition caused an increase in N170 amplitude and slower RTs, without affecting the P300. These results are the first to demonstrate that modulating DLPFC excitability can influence the visual encoding of unfamiliar faces, underscoring the importance of the DLPFC in basic learning mechanisms.
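For illustration only, the sketch below computes a simple neural suppression index (the change in N170 amplitude from first to repeated presentations) for each stimulation condition. The amplitude window, condition labels, and data structures are assumptions and do not reproduce the study's EEG pipeline.

```python
import numpy as np

def n170_amplitude(erp, times, window=(0.13, 0.20)):
    """Most negative voltage within an assumed 130-200 ms N170 window."""
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].min()

def suppression_index(erp_first, erp_repeat, times):
    """Positive values indicate a smaller (less negative) N170 on repetition."""
    return n170_amplitude(erp_repeat, times) - n170_amplitude(erp_first, times)

# Hypothetical averaged waveforms per tDCS condition, each a pair of
# first-presentation / repeated-presentation ERPs (synthetic data here).
times = np.arange(-0.1, 0.5, 0.002)
conditions = {name: (np.random.randn(times.size), np.random.randn(times.size))
              for name in ("sham", "right_excitation", "right_inhibition")}
for name, (first, repeat) in conditions.items():
    print(name, suppression_index(first, repeat, times))
```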
Abstract:
The knowledge we hold about familiar and famous people constitutes one of the major domains of semantic memory. It has important social value, since it allows us to recognize and identify the people we know and to distinguish them from people we do not know. This thesis comprises two parts: the first examines the cerebral substrates of the semantic processing of famous people in young adults, while the second examines semantic knowledge about famous people in older adults without cognitive impairment, with amnestic Mild Cognitive Impairment (aMCI), with aMCI accompanied by depressive symptoms (aMCI-D), or with late-life depression. More specifically, this second part investigates the relationship between semantic deficits and the presence of depressive symptoms. The objective of the first part is therefore to explore, using functional magnetic resonance imaging (fMRI), the cerebral substrates underlying the semantic processing of famous faces compared with perceptual processing (Article 1). The role of posterior temporal (occipito-temporal) regions in the perceptual processing of faces is now well established. The anterior temporal lobes (ATL) appear to play a particularly important role in the identification of familiar and known faces, but the precise role of this region in the semantic processing of known faces remains poorly understood. The first article thus highlights the cortical regions involved in the face recognition process, from perceptual processing to the semantic processing that allows us to identify a presented face and retrieve biographical information about it. The present results support the model proposed by Haxby and colleagues (2000), according to which the anterior temporal lobe (ATL) region is associated with the semantic processing of famous people's faces. The objective of the second part is to study, at the behavioral level, the integrity of specific and general biographical semantic knowledge in older adults without cognitive impairment, with aMCI, with aMCI and depressive symptoms (aMCI-D), or with late-life depression (Article 2). Depression has been considered an interrelated factor that may contribute to the variability of the clinical presentation of individuals with aMCI. Indeed, the presence of depressive symptoms appears to influence the cognitive profile of individuals with aMCI, particularly with respect to executive functions and episodic memory. However, no study to date has examined the impact of depressive symptoms on semantic memory for famous people in individuals with aMCI. The present results indicate that individuals with aMCI show deficits in the semantic processing of famous people, and that these deficits are modulated by the presence of depressive symptomatology. Depression alone, however, cannot produce semantic deficits, since the late-life depression group showed no impairment of semantic memory. The theoretical and clinical implications of these results will be discussed, as well as limitations and future directions.
Abstract:
The objective of this research is the creation of an online platform to examine individual differences in visual-information processing strategies across different face categorization tasks. The purpose of such a platform is to collect data from geographically dispersed participants whose face recognition abilities vary. Indeed, numerous studies have shown considerable variability across the spectrum of face recognition abilities, ranging from developmental prosopagnosia (Susilo & Duchaine, 2013), a face recognition deficit in the absence of brain lesion, to super-recognizers, individuals whose face recognition abilities are above average (Russell, Duchaine & Nakayama, 2009). Between these two extremes, face recognition abilities in the normal population vary. To demonstrate the feasibility of creating such a platform for individuals of highly variable ability, we adapted a celebrity face identification task using the Bubbles method (Gosselin & Schyns, 2001) and recruited 14 control subjects and one subject with developmental prosopagnosia. We were able to highlight the importance of the eyes and the mouth in face identification in the "normal" subjects. The best participants, by contrast, appeared to rely mostly on the left side of the face (the left eye and the left side of the mouth).
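To illustrate the Bubbles technique referenced here (Gosselin & Schyns, 2001), the sketch below generates a mask of randomly placed Gaussian apertures that reveals only parts of a stimulus. The image size, number of bubbles, and aperture width are arbitrary assumptions for demonstration, not the parameters used in this study.

```python
import numpy as np

def bubbles_mask(height, width, n_bubbles=10, sigma=15.0, rng=None):
    """Sum of randomly placed Gaussian apertures, clipped to [0, 1].

    Multiplying a face image by this mask reveals only the regions under the
    "bubbles"; relating revealed regions to accuracy across many trials
    identifies the information participants rely on.
    """
    rng = rng or np.random.default_rng()
    yy, xx = np.mgrid[0:height, 0:width]
    mask = np.zeros((height, width))
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, height), rng.integers(0, width)
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

# Usage: reveal a random subset of a (grayscale) face image
face = np.random.rand(256, 256)          # stand-in for a real face image
revealed = face * bubbles_mask(256, 256)
```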
Abstract:
Despite increasing empirical data to the contrary, it continues to be claimed that the morphosyntax and face processing skills of people with Williams syndrome are intact. This purported intactness, which coexists with mental retardation, is used to bolster claims about innately specified, independently functioning modules, as if the atypically developing brain were simply a normal brain with parts intact and parts impaired. Yet this is highly unlikely, given the dynamics of brain development and the fact that in a genetic microdeletion syndrome the brain is developing differently from the moment of conception, throughout embryogenesis, and during postnatal brain growth. In this article, we challenge the intactness assumptions, using evidence from a wide variety of studies of toddlers, children, and adults with Williams syndrome.
Abstract:
Perceptual closure refers to the coherent perception of an object under circumstances when the visual information is incomplete. Although the perceptual closure index observed in electroencephalography reflects that an object has been recognized, the full spatiotemporal dynamics of cortical source activity underlying perceptual closure processing remain unknown so far. To address this question, we recorded magnetoencephalographic activity in 15 subjects (11 females) during a visual closure task and performed beamforming over a sequence of successive short time windows to localize high-frequency gamma-band activity (60–100 Hz). Two-tone images of human faces (Mooney faces) were used to examine perceptual closure. Event-related fields exhibited a magnetic closure index between 250 and 325 ms. Time-frequency analyses revealed sustained high-frequency gamma-band activity associated with the processing of Mooney stimuli; closure-related gamma-band activity was observed between 200 and 300 ms over occipitotemporal channels. Time-resolved source reconstruction revealed an early (0–200 ms) coactivation of caudal inferior temporal gyrus (cITG) and regions in posterior parietal cortex (PPC). At the time of perceptual closure (200–400 ms), the activation in cITG extended to the fusiform gyrus, if a face was perceived. Our data provide the first electrophysiological evidence that perceptual closure for Mooney faces starts with an interaction between areas related to processing of three-dimensional structure from shading cues (cITG) and areas associated with the activation of long-term memory templates (PPC). Later, at the moment of perceptual closure, inferior temporal cortex areas specialized for the perceived object are activated, i.e., the fusiform gyrus related to face processing for Mooney stimuli.
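As a simplified illustration of extracting high-frequency gamma-band (60–100 Hz) activity from a single sensor, the Python sketch below band-pass filters a time series and takes the squared Hilbert envelope. It is not the beamforming source analysis described above; the sampling rate and synthetic signal are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def gamma_band_power(signal, fs, band=(60.0, 100.0), order=4):
    """Instantaneous gamma-band power of one sensor's time series.

    signal : 1-D array of MEG/EEG samples
    fs     : sampling frequency in Hz
    band   : frequency band of interest (the study analyzed 60-100 Hz)
    """
    nyq = fs / 2.0
    b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, signal)        # zero-phase band-pass filter
    envelope = np.abs(hilbert(filtered))     # analytic amplitude
    return envelope ** 2

# Example: 1 s of synthetic data sampled at 1000 Hz with an 80 Hz component
fs = 1000
t = np.arange(0, 1, 1 / fs)
sig = np.sin(2 * np.pi * 80 * t) + 0.5 * np.random.randn(t.size)
power = gamma_band_power(sig, fs)
```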
Abstract:
Evidence suggests that the social cognition deficits prevalent in autism spectrum disorders (ASDs) are widely distributed in first-degree and extended relatives. This "broader autism phenotype" (BAP) can be extended into non-clinical populations and shows wide distributions of social behaviors such as empathy and social responsiveness, with ASDs exhibiting these behaviors on the lower ends of the distributions. Little evidence has previously shown relationships between self-report measures of social cognition and more objective tasks such as face perception in functional magnetic resonance imaging (fMRI) and event-related potentials (ERPs). In this study, three specific hypotheses were addressed: a) increased social ability, as measured by an increased Empathy Quotient, decreased Social Responsiveness Scale (SRS-A) score, and increased Social Attribution Task score, will predict increased activation of the fusiform gyrus in response to faces as compared to houses; b) these same measures will predict N170 amplitude and latency, showing decreased latency and increased amplitude for faces as compared to houses with increased social ability; c) increased amygdala volume will predict increased fusiform gyrus activation when viewing faces as compared to houses. Findings supported all of the hypotheses. Empathy scores significantly predicted both right FFG activation [F(1,20) = 4.811, p = .041, β = .450, R2 = 0.20] and left FFG activation [F(1,20) = 7.70, p = .012, β = .537, R2 = 0.29]. Based on ERP results, an increased right-lateralized face-related N170 was significantly predicted by the EQ [F(1,54) = 6.94, p = .011, β = .338, R2 = 0.11]. Finally, total amygdala volume significantly predicted right [F(1,20) = 7.217, p = .014, β = .515, R2 = 0.27] and left [F(1,20) = 36.77, p < .001, β = .805, R2 = 0.65] FFG activation. Consistent with the a priori hypotheses, traits attributed to the BAP can significantly predict neural responses to faces in a non-clinical population. This is consistent with the face processing deficits seen in ASDs. The findings presented here contribute to the extension of the BAP from unaffected relatives of individuals with ASDs to the general population. These findings also give continued evidence in support of a continuous distribution of traits found in psychiatric illnesses, in place of a traditional, dichotomous "all-or-nothing" diagnostic framework of neurodevelopmental and neuropsychiatric disorders.
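The predictions reported above are simple linear regressions; the hedged sketch below shows how such a fit (slope, p-value, and R2) could be computed for hypothetical Empathy Quotient and fusiform activation values. The data are synthetic placeholders, not the study's measurements.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical data: Empathy Quotient scores and right fusiform gyrus (FFG)
# activation (e.g., mean percent signal change) for 22 participants.
rng = np.random.default_rng(0)
eq = rng.uniform(20, 70, size=22)
ffg = 0.02 * eq + rng.normal(0, 0.3, size=22)

result = linregress(eq, ffg)
print(f"slope = {result.slope:.3f}, p = {result.pvalue:.3f}, "
      f"R2 = {result.rvalue ** 2:.2f}")
```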
Abstract:
One of the most consistent findings in the neuroscience of autism is hypoactivation of the fusiform gyrus (FG) during face processing. In this study the authors examined whether successful facial affect recognition training is associated with an increased activation of the FG in autism. The effect of a computer-based program to teach facial affect identification was examined in 10 individuals with high-functioning autism. Blood oxygenation level-dependent (BOLD) functional magnetic resonance imaging (fMRI) changes in the FG and other regions of interest, as well as behavioral facial affect recognition measures, were assessed pre- and posttraining. No significant activation changes in the FG were observed. Trained participants showed behavioral improvements, which were accompanied by higher BOLD fMRI signals in the superior parietal lobule and maintained activation in the right medial occipital gyrus.
Abstract:
Identification of emotional facial expression and emotional prosody (i.e. speech melody) is often impaired in schizophrenia. For facial emotion identification, a recent study suggested that the relative deficit in schizophrenia is enhanced when the presented emotion is easier to recognize. It is unclear whether this effect is specific to face processing or part of a more general emotion recognition deficit.
Abstract:
Two methods for registering laser-scans of human heads and transforming them to a new, semantically consistent topology defined by a user-provided template mesh are described. Both algorithms are stated within the Iterative Closest Point framework. The first method is based on finding landmark correspondences by iteratively registering the vicinity of a landmark with a re-weighted error function. Thin-plate spline interpolation is then used to deform the template mesh, and finally the scan is resampled in the topology of the deformed template. The second algorithm employs a morphable shape model, which can be computed from a database of laser-scans using the first algorithm. It directly optimizes pose and shape of the morphable model. The use of the algorithm with PCA mixture models, where the shape is split up into regions each described by an individual subspace, is addressed. Mixture models require either blending or regularization strategies, both of which are described in detail. For both algorithms, strategies for filling in missing geometry for incomplete laser-scans are described. While an interpolation-based approach can be used to fill in small or smooth regions, the model-driven algorithm is capable of fitting a plausible complete head mesh to arbitrarily small geometry, which is known as "shape completion". The importance of regularization in the case of extreme shape completion is shown.
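For context, the sketch below implements the core Iterative Closest Point loop (closest-point correspondences plus an SVD-based rigid transform) that both registration methods build on. It is a minimal rigid-ICP illustration only; the landmark re-weighting, thin-plate spline deformation, and morphable-model fitting described above are not included.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping points src onto dst."""
    c_src, c_dst = src.mean(0), dst.mean(0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp(src, dst, iterations=30):
    """Basic rigid ICP: align point cloud `src` (N x 3) to `dst` (M x 3)."""
    tree = cKDTree(dst)
    current = src.copy()
    for _ in range(iterations):
        _, idx = tree.query(current)          # closest-point correspondences
        R, t = best_fit_transform(current, dst[idx])
        current = current @ R.T + t
    return current
```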
Abstract:
Recent evidence suggests that increased psychophysiological response to negatively valenced emotional stimuli found in major depressive disorder (MDD) may be associated with reduced catecholaminergic neurotransmission. Fourteen unmedicated, remitted subjects with MDD (RMDD) and 13 healthy control subjects underwent catecholamine depletion with oral α-methyl-para-tyrosine (AMPT) in a randomized, placebo-controlled, double-blind crossover trial. Subjects were exposed to fearful (FF) and neutral faces (NF) during a scan with [15O]H2O positron emission tomography to assess the brain-catecholamine interaction in brain regions previously associated with emotional face processing. Treatment with AMPT resulted in significantly increased, normalized cerebral blood flow (CBF) in the left inferior temporal gyrus (ITG) and significantly decreased CBF in the right cerebellum across conditions and groups. In RMDD, flow in the left posterior cingulate cortex (PCC) increased significantly in the FF compared to the NF condition after AMPT, but remained unchanged after placebo, whereas healthy controls showed a significant increase under placebo and a significant decrease under AMPT in this brain region. In the left dorsolateral prefrontal cortex (DLPFC), flow decreased significantly in the FF compared to the NF condition under AMPT, and increased significantly under placebo in RMDD, whereas healthy controls showed no significant differences. Differences between AMPT and placebo of within-session changes in worry-symptoms were positively correlated with the corresponding changes in CBF in the right subgenual prefrontal cortex in RMDD. In conclusion, this study provided evidence for a catecholamine-related modulation of the neural responses to FF expressions in the left PCC and the left DLPFC in subjects with RMDD that might constitute a persistent, trait-like abnormality in MDD.
Abstract:
Perceptual accuracy is known to be influenced by stimuli location within the visual field. In particular, it seems to be enhanced in the lower visual hemifield (VH) for motion and space processing, and in the upper VH for object and face processing. The origins of such asymmetries are attributed to attentional biases across the visual field, and in the functional organization of the visual system. In this article, we tested content-dependent perceptual asymmetries in different regions of the visual field. Twenty-five healthy volunteers participated in this study. They performed three visual tests involving perception of shapes, orientation and motion, in the four quadrants of the visual field. The results of the visual tests showed that perceptual accuracy was better in the lower than in the upper visual field for motion perception, and better in the upper than in the lower visual field for shape perception. Orientation perception did not show any vertical bias. No difference was found when comparing right and left VHs. The functional organization of the visual system seems to indicate that the dorsal and the ventral visual streams, responsible for motion and shape perception, respectively, show a bias for the lower and upper VHs, respectively. Such a bias depends on the content of the visual information.
Abstract:
The ability to recognize individual faces is of crucial social importance for humans and evolutionarily necessary for survival. Consequently, faces may be “special” stimuli, for which we have developed unique modular perceptual and recognition processes. Some of the strongest evidence for face processing being modular comes from cases of prosopagnosia, where patients are unable to recognize faces whilst retaining the ability to recognize other objects. Here we present the case of an acquired prosopagnosic whose poor recognition was linked to a perceptual impairment in face processing. Despite this, she had intact object recognition, even at a subordinate level. She also showed a normal ability to learn and to generalize learning of nonfacial exemplars differing in the nature and arrangement of their parts, along with impaired learning and generalization of facial exemplars. The case provides evidence for modular perceptual processes for faces.
Abstract:
News & Comment. Many influential models of prefrontal cortex function suggest that activity within this area is often associated with additional activity in posterior regions of the cortex that support perception. The purpose of this cortical ‘coupling’ is to ensure that a perceptual representation is generated and then maintained within the working memory system. Areas in the right ventrolateral prefrontal cortex (vlPFC) and the fusiform gyrus have been implicated as associated areas involved in face processing. In an interesting case study by Vignal, Chauvel and Halgren, the functional relationship between these two areas was tested [1]. In order to confirm the epileptogenic foci prior to resective surgery in a 30-year-old male patient, depth electrodes were implanted into sites around prefrontal, anterior temporal and premotor cortices. While the patient was looking at a blank screen, 50-Hz electrical stimulation of two probes implanted into the right anterior frontal gyrus resulted in the patient’s reporting the perception of a series of colourful faces. These facial hallucinations were described as being ‘…like passing slides, one after the other, linked together’. When asked to look at an actual face during stimulation at the same sites, the patient reported transformation of that face (such as appearing without spectacles or with a hat). These findings were related to activity of a cortical network involving the vlPFC and the fusiform gyrus. This paper thus suggests a role in face processing for the vlPFC, evoking working memory processes to maintain facial representations.