989 results for facial emotion processing
Abstract:
Two experiments were conducted to assess simultaneously the effects of attentional and emotional processes on startle eyeblink modulation. In each experiment, participants were presented with a pleasant and an unpleasant picture. Half the participants were asked to attend to the pleasant picture and to ignore the unpleasant picture, whereas the reverse was the case for the other participants. Startle probes were presented at 3500 and 4500 ms after stimulus onset in Experiment 1, and at 250, 750, and 4450 ms after stimulus onset and 950 ms after stimulus offset in Experiment 2. Attentional processing affected startle eyeblink modulation and electrodermal responses in both experiments. However, effects of picture valence on startle eyeblink modulation were found only in Experiment 2. The results confirm the utility of startle eyeblink modulation as an index of attentional and emotional processing. They also illustrate that procedural characteristics, such as the nature of the lead intervals and how attention and emotion are operationalized, can determine whether emotional or attentional processes will be reflected in startle eyeblink.
Abstract:
Dissertation submitted for the degree of Doctor in Environment at the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.
Abstract:
Recent studies have demonstrated the positive effects of musical training on the perception of vocally expressed emotion. This study investigated the effects of musical training on event-related potential (ERP) correlates of emotional prosody processing. Fourteen musicians and fourteen control subjects listened to 228 sentences with neutral semantic content, differing in prosody (one third with neutral, one third with happy and one third with angry intonation), with intelligible semantic content (semantic content condition--SCC) and unintelligible semantic content (pure prosody condition--PPC). Reduced P50 amplitude was found in musicians. A difference between SCC and PPC conditions was found in P50 and N100 amplitude in non-musicians only, and in P200 amplitude in musicians only. Furthermore, musicians were more accurate in recognizing angry prosody in PPC sentences. These findings suggest that auditory expertise characterizing extensive musical training may impact different stages of vocal emotional processing.
Abstract:
The jointly voluntary and involuntary control of respiration, unique among essential physiological processes; the interconnection of breathing with, and its influence on, the autonomic nervous system; and disease states at the interface between psychology and respiration (e.g., anxiety disorders, hyperventilation syndrome, asthma) make the study of the relationship between respiration and emotion of great theoretical and clinical relevance. However, respiratory behavior during affective states is not yet completely understood. We studied breathing pattern responses to 13 picture series varying widely in their affective tone in 37 adults (18 men, 19 women, mean age 26). Time and volume parameters were recorded with the LifeShirt system (VivoMetrics Inc., Ventura, California, USA). We also measured end-tidal pCO2 (EtCO2) with a Microcap Handheld Capnograph (Oridion Medical 1987 Ltd., Jerusalem, Israel) to determine whether ventilation is in balance with metabolic demands, and spontaneous eye-blinking to investigate the link between respiration and attention. At the end of each picture series, the participants reported their subjective feeling on the affective dimensions of pleasantness and arousal. Increasing self-rated arousal was associated with increasing minute ventilation (MV) but not with decreases in EtCO2, suggesting that ventilatory changes during picture viewing paralleled variations in metabolic activity. EtCO2 correlated with pleasantness, and eye-blink rate decreased with increasing unpleasantness, in line with a negativity bias in attention. Like MV, inspiratory drive (i.e., mean inspiratory flow) increased with arousal. This relationship reflected increases in inspiratory volume rather than shortening of the time parameters. This study confirms that respiratory responses to affective stimuli are organized to a certain degree along the dimensions of pleasantness and arousal. It shows, for the first time, that during picture viewing, ventilatory increases with increasing arousal are in balance with metabolic activity and that inspiratory volume is modulated by arousal. MV emerges as the most reliable respiratory index of self-perceived arousal. Finally, end-tidal pCO2 is slightly lower during processing of negative as compared to positive picture contents, which is proposed to enhance sensory perception and reflect a negativity bias in attention.
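The quantitative indices named in this abstract follow directly from per-breath volume and timing. As a hedged illustration (the toy values and variable names below are our own assumptions, not the LifeShirt's output format), they can be computed as follows:

```python
# Illustrative only: toy per-breath values, not actual study data.
import numpy as np

vt = np.array([0.48, 0.55, 0.60])  # tidal (inspiratory) volume, L
ti = np.array([1.6, 1.5, 1.4])     # inspiratory time, s
tt = np.array([4.0, 3.8, 3.6])     # total breath duration, s

rr = 60.0 / tt                     # respiratory rate, breaths/min
mv = vt * rr                       # minute ventilation: MV = Vt * RR, L/min
drive = vt / ti                    # mean inspiratory flow (inspiratory drive): Vt/Ti, L/s

print(f"MV = {mv.mean():.2f} L/min, inspiratory drive = {drive.mean():.2f} L/s")
```

The abstract's claim that inspiratory drive rose through volume rather than timing corresponds, in these terms, to increases in Vt with Ti roughly constant.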
Abstract:
This study investigated the neural regions involved in blood pressure reactions to negative stimuli and their possible modulation by attention. Twenty-four healthy human subjects (11 females; age = 24.75 ± 2.49 years) participated in an affective perceptual load task that manipulated attention to negative/neutral distractor pictures. fMRI data were collected simultaneously with continuous recording of peripheral arterial blood pressure. A parametric modulation analysis examined the impact of attention and emotion on the relation between neural activation and blood pressure reactivity during the task. When attention was available for processing the distractor pictures, negative pictures resulted in behavioral interference, neural activation in brain regions previously related to emotion, a transient decrease of blood pressure, and a positive correlation between blood pressure response and activation in a network including prefrontal and parietal regions, the amygdala, caudate, and midbrain. These effects were modulated by attention; behavioral and neural responses to highly negative distractor pictures (compared with neutral pictures) were smaller or diminished, as was the negative blood pressure response, when the central task involved high perceptual load. Furthermore, comparing high and low load revealed enhanced activation in frontoparietal regions implicated in attention control. Our results fit theories emphasizing the role of attention in the control of behavioral and neural reactions to irrelevant emotional distracting information. Our findings furthermore extend the function of attention to the control of autonomic reactions associated with negative emotions by showing altered blood pressure reactions to emotional stimuli, the latter being of potential clinical relevance.
Abstract:
Adults and children can discriminate various emotional expressions, although there is limited research on sensitivity to the differences between posed and genuine expressions. Adults have shown implicit sensitivity to the difference between posed and genuine happy smiles in that they evaluate T-shirts paired with genuine smiles more favorably than T-shirts paired with posed smiles or neutral expressions (Peace, Miles, & Johnston, 2006). Adults also have shown some explicit sensitivity to posed versus genuine expressions; they are more likely to say that a model is feeling happy if the expression is genuine than posed. Nonetheless they are duped by posed expressions about 50% of the time (Miles & Johnston, in press). There has been no published study to date in which researchers report whether children's evaluation of items varies with expression, and there is little research investigating children's sensitivity to the veracity of facial expressions. In the present study the same face stimuli were used as in two previous studies (Miles & Johnston, in press; Peace et al., 2006). The first question to be addressed was whether adults and 7-year-olds have a cognitive understanding of the differences between posed and genuine happiness (scenario task). They evaluated the feelings of children who expressed gratitude for a present that they did or did not want. Results indicated that all participants had a fundamental understanding of the difference between real and posed happiness. The second question involved adults' and children's implicit sensitivity to the veracity of posed and genuine smiles. Participants rated and ranked beach balls paired with faces showing posed smiles, genuine smiles, and neutral expressions. Adults ranked, but did not rate, beach balls paired with genuine smiles more favorably than beach balls paired with posed smiles. Children did not demonstrate implicit sensitivity, as their ratings and rankings of beach balls did not vary with expressions; they did not even rank beach balls paired with genuine expressions higher than beach balls paired with neutral expressions. In the explicit (show/feel) task, faces were presented without the beach balls and participants were first asked whether each face was showing happy and then whether each face was feeling happy. There were also two matching trials that presented two faces at once; participants had to indicate which person was actually feeling happy. In the show condition both adults and 7-year-olds were very accurate on genuine and neutral expressions but made some errors on posed smiles. Adults were fooled about 50% of the time by posed smiles in the feel condition (i.e., they were likely to say that a model posing happy was really feeling happy) and children were even less accurate, although they showed weak sensitivity to posed versus genuine expressions. Future research should test an older age group of children to determine when explicit sensitivity to posed versus genuine facial expressions becomes adult-like, and modify the ranking task to explore the influence of facial expressions on object evaluations.
Abstract:
The present set of experiments was designed to investigate the development of children's sensitivity to facial expressions observed within emotional contexts. Past research investigating both adults' and children's perception of facial expressions has been limited primarily to the presentation of isolated faces. During daily social interactions, however, facial expressions are encountered within contexts conveying emotions (e.g., background scenes, body postures, gestures). Recently, research has shown that adults' perception of facial expressions is influenced by these contexts. When emotional faces are shown in incongruent contexts (e.g., when an angry face is presented in a context depicting fear), adults' accuracy decreases and their reaction times increase (e.g., Meeren et al., 2005). To examine the influence of emotional body postures on children's perception of facial expressions, in each of the experiments in the current study adults and 8-year-old children made two-alternative forced-choice decisions about facial expressions presented in congruent (e.g., a face displaying sadness on a body displaying sadness) and incongruent (e.g., a face displaying fear on a body displaying sadness) contexts. Consistent with previous studies, a congruency effect (better performance on congruent than incongruent trials) was found for both adults and 8-year-olds when the emotions displayed by the face and body were similar to each other (e.g., fear and sad; Experiment 1a); the influence of context was greater for 8-year-olds than adults for these similar expressions. To further investigate why the congruency effect was larger for children than adults in Experiment 1a, Experiment 1b was conducted to examine whether increased task difficulty would increase the magnitude of adults' congruency effects. Adults were presented with subtle facial expressions and, despite successfully increasing task difficulty, the magnitude of the congruency effect did not increase, suggesting that the difference between children's and adults' congruency effects in Experiment 1a cannot be explained by 8-year-olds finding the task difficult. In contrast, congruency effects were not found when the expressions displayed by the face and body were dissimilar (e.g., sad and happy; see Experiment 2). The results of the current set of studies are examined with respect to the dimensional theory and the emotional seed model and the developmental timeline of children's sensitivity to facial expressions. A secondary aim of the series of studies was to examine one possible mechanism underlying congruency effects: holistic processing. To examine the influence of holistic processing, participants completed both aligned trials and misaligned trials in which the faces were detached from the body (designed to disrupt holistic processing). Based on the principles of holistic face processing, we predicted that participants would benefit from misalignment of the face and body stimuli on incongruent trials but not on congruent trials. Collectively, our results provide some evidence that both adults and children may process emotional faces and bodies holistically. Consistent with the pattern of results for congruency effects, the magnitude of the effect of misalignment varied with the similarity between emotions. Future research is required to further investigate whether or not facial expressions and emotions conveyed by the body are perceived holistically.
Abstract:
Previously, studies investigating emotional face perception - regardless of whether they involved adults or children - presented participants with static photos of faces in isolation. In the natural world, faces are rarely encountered in isolation. In the few studies that have presented faces in context, the perception of emotional facial expressions is altered when paired with an incongruent context. For both adults and 8-year-old children, reaction times increase and accuracy decreases when facial expressions are presented in an incongruent context depicting a similar emotion (e.g., a sad face on a fear body) compared to when presented in a congruent context (e.g., a sad face on a sad body; Meeren, van Heijnsbergen, & de Gelder, 2005; Mondloch, 2012). This effect is called a congruency effect and does not exist for dissimilar emotions (e.g., happy and sad; Mondloch, 2012). Two models characterize similarity between emotional expressions differently: the emotional seed model bases similarity on physical features, whereas the dimensional model bases similarity on the underlying dimensions of valence and arousal. Study 1 investigated the emergence of an adult-like pattern of congruency effects in preschool-aged children. Using a child-friendly sorting task, we identified the youngest age at which children could accurately sort isolated facial expressions and body postures and then measured whether an incongruent context disrupted the perception of emotional facial expressions. Six-year-old children showed congruency effects for sad/fear, but 4-year-old children did not for sad/happy. This pattern of congruency effects is consistent with both models and indicates that an adult-like pattern exists at the youngest age at which children can reliably sort emotional expressions in isolation. In Study 2, we compared the two models to determine their predictive abilities. The two models make different predictions about the size of congruency effects for three emotions: sad, anger, and fear. The emotional seed model predicts larger congruency effects when sad is paired with either anger or fear compared to when anger and fear are paired with each other. The dimensional model predicts larger congruency effects when anger and fear are paired together compared to when either is paired with sad. In both a speeded and an unspeeded task the results failed to support either model, but the pattern of results indicated that fearful bodies have a special effect. Fearful bodies reduced accuracy, increased reaction times more than any other posture, and shifted the pattern of errors. To determine whether the results were specific to bodies, we ran the reverse task to determine whether faces could disrupt the perception of body postures. This experiment did not produce congruency effects, meaning faces do not influence the perception of body postures. In the final experiment, participants performed a flanker task to determine whether the effect of fearful bodies was specific to faces or whether fearful bodies would also produce a larger effect in an unrelated task in which faces were absent. Reaction times did not differ across trials, meaning fearful bodies' large effect is specific to situations with faces. Collectively, these studies provide novel insights, both developmentally and theoretically, into how emotional faces are perceived in context.
Abstract:
As important social stimuli, faces play a critical role in our lives. Much of our interaction with other people depends on our ability to recognize faces accurately. It has been proposed that face processing consists of different stages and interacts with other systems (Bruce & Young, 1986). At a perceptual level, the initial two stages, namely structural encoding and face recognition, are particularly relevant and are the focus of this dissertation. Event-related potentials (ERPs) are averaged EEG signals time-locked to a particular event (such as the presentation of a face). With their excellent temporal resolution, ERPs can provide important timing information about neural processes. Previous research has identified several ERP components that are especially related to face processing, including the N170, the P2, and the N250. Their nature with respect to the stages of face processing is still unclear, and is examined in Studies 1 and 2. In Study 1, participants made gender decisions on a large set of female faces interspersed with a few male faces. The ERP responses to facial characteristics of the female faces indicated that the N170 amplitude from each side of the head was affected by information from the eye region and by facial layout: the right N170 was affected by eye color and by face width, while the left N170 was affected by eye size and by the relation between the sizes of the top and bottom parts of a face. In contrast, the P100 and the N250 components were largely unaffected by facial characteristics. These results thus provided direct evidence for the link between the N170 and structural encoding of faces. In Study 2, focusing on the face recognition stage, we manipulated face identity strength by morphing individual faces to an "average" face. Participants performed a face identification task. The effect of face identity strength was found on the late P2 and the N250 components: as identity strength decreased from an individual face to the "average" face, the late P2 increased and the N250 decreased. In contrast, the P100, the N170, and the early P2 components were not affected by face identity strength. These results suggest that face recognition occurs after 200 ms, but not earlier. Finally, because faces are often associated with social information, we investigated in Study 3 how group membership might affect ERP responses to faces. After participants learned in- and out-group memberships of the face stimuli based on arbitrarily assigned nationality and university affiliation, we found that the N170 latency differentiated in-group and out-group faces, taking longer to process the latter. In comparison, without group memberships, there was no difference in N170 latency among the faces. This dissertation provides evidence that at a neural level, structural encoding of faces, indexed by the N170, occurs within 200 ms. Face recognition, indexed by the late P2 and the N250, occurs shortly afterwards, between 200 and 300 ms. Social cognitive factors can also influence face processing. The effect is already evident as early as 130-200 ms, at the structural encoding stage.
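Since this abstract turns on the logic of ERP computation (averaging EEG epochs time-locked to an event), a minimal sketch may help readers unfamiliar with the method. All numbers below (sampling rate, window, event times, data) are invented for illustration and are not taken from the dissertation:

```python
import numpy as np

fs = 500                                   # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)
eeg = rng.standard_normal(60 * fs)         # one channel of continuous EEG (toy data)
events = np.array([1000, 6000, 11000])     # stimulus onsets, in samples (toy)

pre, post = int(0.2 * fs), int(0.6 * fs)   # epoch window: -200 ms to +600 ms
epochs = np.stack([eeg[e - pre : e + post] for e in events])
epochs -= epochs[:, :pre].mean(axis=1, keepdims=True)   # baseline correction
erp = epochs.mean(axis=0)                  # the ERP: average across trials

# A component such as the N170 would appear as a negative deflection
# around sample index pre + int(0.170 * fs) of `erp`.
```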
Abstract:
The accuracy and speed with which emotional facial expressions are identified is influenced by body postures. Two influential models predict that these congruency effects will be largest when the emotion displayed in the face is similar to that displayed in the body: the emotional seed model and the dimensional model. These models differ in whether similarity is based on physical characteristics or on the underlying dimensions of valence and arousal. Using a 3-alternative forced-choice task in which stimuli were presented briefly (Exp 1a) or for an unlimited time (Exp 1b), we provide evidence that congruency effects are more complex than either model predicts; the effects are asymmetrical and cannot be accounted for by similarity alone. Fearful postures are especially influential when paired with facial expressions, but not when presented in a flanker task (Exp 2). We suggest refinements to each model that may account for our results and suggest that additional studies be conducted prior to drawing strong theoretical conclusions.
Abstract:
Lexical processing among bilinguals is often affected by complex patterns of individual experience. In this paper we discuss the psychocentric perspective on language representation and processing, which highlights the centrality of individual experience in psycholinguistic experimentation. We discuss applications to the investigation of lexical processing among multilinguals and explore the advantages of using high-density experiments with multilinguals. High-density experiments are designed to co-index measures of lexical perception and production, as well as participant profiles. We discuss the challenges associated with the characterization of participant profiles and present a new data visualization technique that we term Facial Profiles. This technique is based on Chernoff faces, developed over 40 years ago. The Facial Profile technique seeks to overcome some of the challenges associated with the use of Chernoff faces, while maintaining the core insight that recoding multivariate data as facial features can engage the human face recognition system and thus enhance our ability to detect and interpret patterns within multivariate datasets. We demonstrate that Facial Profiles can code participant characteristics in lexical processing studies by recoding variables such as reading ability, speaking ability, and listening ability into the iconically related relative sizes of the eye, mouth, and ear, respectively. The balance of ability in bilinguals can be captured by creating composite facial profiles, or Janus Facial Profiles. We demonstrate the use of Facial Profiles and Janus Facial Profiles in the characterization of participant effects in the study of lexical perception and production.
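The mapping the authors describe (reading, speaking, and listening ability recoded as eye, mouth, and ear size) lends itself to a compact sketch. The Python/matplotlib code below is an illustrative assumption about layout and scaling, not the authors' implementation:

```python
# Sketch of a "Facial Profile": ability scores in [0, 1] drive feature sizes.
# Layout, scaling constants, and the function name are illustrative assumptions.
import matplotlib.pyplot as plt
from matplotlib.patches import Circle, Ellipse

def facial_profile(ax, reading, speaking, listening):
    """One profile: eye size ~ reading, mouth width ~ speaking, ear size ~ listening."""
    ax.add_patch(Circle((0.5, 0.5), 0.35, fill=False))               # head outline
    for x in (0.38, 0.62):                                           # eyes <- reading
        ax.add_patch(Circle((x, 0.60), 0.02 + 0.06 * reading))
    ax.add_patch(Ellipse((0.5, 0.35), 0.05 + 0.25 * speaking, 0.05,
                         fill=False))                                # mouth <- speaking
    for x in (0.15, 0.85):                                           # ears <- listening
        ax.add_patch(Ellipse((x, 0.5), 0.04, 0.06 + 0.15 * listening,
                             fill=False))
    ax.set_xlim(0, 1); ax.set_ylim(0, 1)
    ax.set_aspect("equal"); ax.axis("off")

fig, axes = plt.subplots(1, 2, figsize=(6, 3))
facial_profile(axes[0], reading=0.9, speaking=0.3, listening=0.5)  # language 1
facial_profile(axes[1], reading=0.4, speaking=0.8, listening=0.7)  # language 2
plt.show()
```

A composite Janus Facial Profile could then be approximated by juxtaposing two such faces, one per language, to display the balance of abilities.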
Abstract:
This thesis examines the links between sleep, episodic memory, and dreams. In a first study, we used virtual reality (VR) technology together with a REM-sleep deprivation and dream-collection paradigm to test the hypothesis that REM sleep and dreaming are involved in the consolidation of episodic memory. REM sleep was associated with recall of the spatial aspects of the emotional elements of the VR task. Likewise, incorporation of the VR task into dreams was associated with recall of the spatial aspects of the task. Moreover, recall of the factual and perceptual aspects of the episodic memory formed during the VR task was associated with slow-wave sleep. A second study tested the hypothesis that one possible function of dreaming is to create new associations among elements of various episodic memories. One participant was awakened 43 times at sleep onset to provide detailed dream reports. The results suggest that a single dream can bring together, within one spatiotemporal context, elements belonging to multiple episodic memories. A third study addresses cognition during REM sleep, specifically how the bizarre aspects of dreams, formed through novel combinations of episodic memory elements, are perceived by the dreamer. The results demonstrate a dissociation in cognitive abilities during REM sleep, characterized by a selective deficit in appreciating the bizarre elements of dreams. The results of the four studies suggest that slow-wave sleep and REM sleep are differentially involved in episodic memory processing: slow-wave sleep may be involved in consolidating episodic memory, while REM sleep, through dreaming, may serve to introduce flexibility into this memory system.
Abstract:
Question: This thesis comprises two articles on the study of emotional facial expressions. The first article covers the development of a new set of emotional stimuli, while the second uses this set to study the effect of trait anxiety on the recognition of static expressions. Methods: A total of 1088 emotional clips (34 actors × 8 emotions × 4 exemplars) were spatially and temporally aligned so that each actor's eyes and nose occupy the same location in all videos. All videos last 500 ms and contain the apex of the expression. The set of static expressions was created from the last frame of each clip. The stimuli underwent a rigorous validation process. In the second study, the static expressions were used together with the Bubbles method to study emotion recognition in anxious participants. Results: In the first study, the best stimuli were selected [2 (static & dynamic) × 8 (expressions) × 10 (actors)], forming the STOIC expression set. The second study shows that individuals with trait anxiety preferentially use the low spatial frequencies of the mouth region of the face and recognize fear expressions better. Discussion: The STOIC facial expression set has unique characteristics that set it apart from others: it can be downloaded free of charge, it contains natural videos, and all stimuli are aligned, making it a tool of choice for the scientific community and clinicians. The static STOIC stimuli were used to take a first step in research on emotion perception in individuals with trait anxiety. We believe that the use of low spatial frequencies underlies these individuals' better performance, and that this type of visual information disambiguates expressions of fear and surprise. We also think that it is neuroticism (the overlap between anxiety and depression), and not anxiety itself, that is associated with better recognition of fearful facial expressions. Instruments measuring this construct should be considered in future studies.
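For readers unfamiliar with the Bubbles method used in the second study, here is a minimal location-only numpy sketch (the study additionally sampled spatial frequencies, and all parameters here are placeholders, not the study's settings): the face is revealed only through randomly placed Gaussian apertures.

```python
import numpy as np

def bubbles_mask(shape, n_bubbles=10, sigma=12.0, rng=None):
    """Sum of Gaussian apertures at random positions, clipped to [0, 1]."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

face = np.random.rand(256, 256)                    # stand-in for a face image
mask = bubbles_mask(face.shape)
stimulus = mask * face + (1 - mask) * face.mean()  # hidden regions -> mean gray
```

Correlating observers' responses with the aperture locations across many trials then reveals which regions (and, in the frequency variant, which spatial frequencies) drive recognition.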
Abstract:
In conversational interaction, language is supported by non-verbal communication that plays a central role in human social behavior, providing feedback and managing synchronization, thereby reinforcing the content and meaning of the discourse. Indeed, 55% of the message is conveyed by facial expressions, whereas only 7% is carried by the linguistic message and 38% by paralanguage. Information about a person's emotional state is generally inferred from facial attributes. However, few measurement instruments are specifically dedicated to this type of behavior. In computer vision, there is growing interest in developing systems for the automatic analysis of prototypical facial expressions, for applications in human-computer interaction, meeting-video analysis, security, and even clinical settings. In the present research, to capture such observable indicators, we set out to implement a system able to build a consistent and relatively exhaustive source of visual information, capable of distinguishing facial features and their deformations and thereby recognizing the presence or absence of a particular facial action. A review of existing techniques led us to explore two different approaches. The first is appearance-based: gradient orientations are used to derive a dense representation of facial attributes. Beyond the facial representation itself, the main difficulty for a system intended to be general is building a generic model that is independent of the person's identity and of face geometry and size. Our approach rests on constructing a prototypical reference frame via SIFT-flow registration, which this thesis shows to be superior to conventional alignment based on eye positions. The second approach relies on a geometric model in which facial primitives are represented by Gabor filtering. Motivated by the fact that facial expressions are not only ambiguous and inconsistent from one person to another but also dependent on the context itself, this approach presents a personalized facial expression recognition system whose overall performance depends directly on the performance of tracking a set of facial landmark points. This tracking is performed by a modified form of a disparity-estimation technique involving Gabor phase. In this thesis, we propose a redefinition of the confidence measure and introduce an iterative, conditional displacement-estimation procedure that yields more robust tracking than the original methods.
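As a rough illustration of the Gabor representation underlying the second approach (a sketch under assumed filter-bank parameters; the thesis's phase-based disparity estimation and redefined confidence measure are not reproduced here), a small Gabor response stack can be computed with OpenCV:

```python
import cv2
import numpy as np

def gabor_jet(patch, n_orient=4, wavelengths=(4.0, 8.0)):
    """Stack of Gabor magnitude responses for one image patch."""
    responses = []
    for lam in wavelengths:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            # ksize, sigma, theta, wavelength, aspect ratio (assumed values)
            kern = cv2.getGaborKernel((21, 21), 4.0, theta, lam, 0.5)
            responses.append(np.abs(cv2.filter2D(patch, cv2.CV_32F, kern)))
    return np.stack(responses)   # shape: (len(wavelengths) * n_orient, H, W)

patch = np.random.rand(32, 32).astype(np.float32)   # stand-in for a face region
jet = gabor_jet(patch)
print(jet.shape)   # (8, 32, 32)
```

Landmark tracking then amounts to matching such local responses between frames; the thesis does this via the filters' phase rather than the magnitudes shown here.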
Abstract:
The natural stimuli projected onto our retinas provide rich visual information. This information varies along "low-level" properties such as luminance, contrast, and spatial frequencies. While part of this information reaches our awareness, another part is processed in the brain without our being aware of it. Which properties of this information influence brain activity and behavior consciously versus non-consciously remains, however, poorly understood. This question was examined in the last two articles of this thesis, using the psychophysical techniques developed in the first two articles. The first article presents the SHINE (spectrum, histogram, and intensity normalization and equalization) toolbox, developed to allow control over low-level image properties in MATLAB. The second article describes and validates the spatial frequency Bubbles technique, which was used throughout the studies in this thesis to reveal the spatial frequencies used in various face-perception tasks. This technique offers the advantages of high spatial-frequency resolution and low experimental bias. The third and fourth articles examine the processing of spatial frequencies as a function of awareness. In the third article, the spatial frequency Bubbles method was used with masked repetition priming to identify the spatial frequencies correlated with observers' behavioral responses when perceiving the gender of faces presented consciously versus non-consciously. The results show that the same spatial frequencies significantly influence response times in both awareness conditions, but in opposite directions. In the last article, the spatial frequency Bubbles method was combined with intracranial recordings and Continuous Flash Suppression (Tsuchiya & Koch, 2005) to map the spatial frequencies that modulate the activation of specific brain structures (the insula and the amygdala) during conscious versus non-conscious perception of emotional facial expressions. In both regions, the results show that non-conscious perception occurs faster and relies more on low spatial frequencies than conscious perception. The contribution of this thesis is thus twofold: methodological contributions to visual perception research, through the introduction of the SHINE toolbox and the spatial frequency Bubbles technique, and insights into the "correlates of consciousness" obtained with two different approaches.
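To give a flavor of the low-level control SHINE provides: the real toolbox is MATLAB and also matches spectra and histograms across images, so the Python analogue below, which equates only mean luminance and RMS contrast, is an assumption-laden illustration rather than a port.

```python
import numpy as np

def match_luminance(images, target_mean=0.5, target_std=0.15):
    """Rescale each image to a common mean luminance and RMS contrast."""
    out = []
    for im in images:
        z = (im - im.mean()) / im.std()   # zero mean, unit contrast
        out.append(np.clip(z * target_std + target_mean, 0.0, 1.0))
    return out

faces = [np.random.rand(128, 128) for _ in range(3)]  # stand-in face images
equated = match_luminance(faces)
```

Equating such properties across conditions ensures that any conscious/non-conscious differences observed cannot be attributed to trivial luminance or contrast confounds.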