988 results for Auditory temporal processing
Abstract:
BACKGROUND: Analyses of brain responses to external stimuli are typically based on means computed across conditions. However, in many cognitive and clinical applications, taking the variability across trials into account has proved statistically more sensitive than comparing means. NEW METHOD: In this study we present a novel implementation of single-trial topographic analysis (STTA) for discriminating auditory evoked potentials within predefined time windows. This analysis had previously been introduced for extracting spatio-temporal features at the level of the whole neural response; adapting the STTA to specific time windows is an essential step for comparing its performance to other time-window-based algorithms. RESULTS: We analyzed responses to standard vs. deviant sounds and showed that the new implementation of the STTA gives above-chance decoding results in all subjects (compared to 7 out of 11 with the original method). In comatose patients, the improvement in decoding performance was even more pronounced than in healthy controls and doubled the number of significant results. COMPARISON WITH EXISTING METHOD(S): We compared the results obtained with the new STTA to those based on a logistic regression in healthy controls and patients. In healthy controls the logistic regression performed better; however, only the new STTA yielded significant results in comatose patients at the group level. CONCLUSIONS: Our results provide quantitative evidence that systematically investigating the accuracy of established methods in normal and clinical populations is an essential step for optimizing decoding performance.
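The abstract contrasts a topographic decoder with time-window logistic regression. As a purely illustrative sketch (not the authors' pipeline), single-trial decoding from a mean-amplitude feature in one time window might look like the following; the synthetic data, the amplitude shift, and all parameter values are assumptions:

```python
import math
import random

random.seed(0)

def make_trial(is_deviant, n_samples=20):
    # Synthetic single-trial "ERP" in one time window: deviants get a
    # small amplitude shift plus noise (illustrative values only).
    shift = 0.8 if is_deviant else 0.0
    return [shift + random.gauss(0.0, 1.0) for _ in range(n_samples)]

def features(trial):
    # Collapse the time window to one mean-amplitude feature.
    return [sum(trial) / len(trial)]

def train_logistic(X, y, lr=0.5, epochs=200):
    # Plain gradient-descent logistic regression (one feature + bias).
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-(w * xi[0] + b)))
            err = p - yi
            w -= lr * err * xi[0]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    return 1 if 1.0 / (1.0 + math.exp(-(w * xi[0] + b))) > 0.5 else 0

# 200 alternating standard/deviant trials; train on 120, test on 80.
trials = [(make_trial(lbl), lbl) for lbl in [0, 1] * 100]
X = [features(t) for t, _ in trials]
y = [lbl for _, lbl in trials]
w, b = train_logistic(X[:120], y[:120])
accuracy = sum(predict(w, b, xi) == yi
               for xi, yi in zip(X[120:], y[120:])) / len(y[120:])
```

With a clearly separable synthetic shift, held-out accuracy lands well above the 50% chance level, which is the kind of above-chance criterion the abstract refers to.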
Abstract:
The term "sound object" describes an auditory experience associated with an acoustic event produced by a sound source. In natural settings, a sound produced by a living being or an object provides information about the identity and the location of its source. A sound's identity is processed along the ventral "What" pathway, which consists of regions within the superior and middle temporal cortices as well as the inferior frontal gyrus. This work concerns the creation of individual auditory object representations within narrow semantic categories, and their plasticity, studied with electrical imaging. Discrimination of sounds from broad categories has been shown to occur along a temporal hierarchy and in different brain regions along the ventral "What" pathway. However, stimuli belonging to the same semantic category, such as faces or voices, were shown to be discriminated in specific brain areas and are thought to represent a special class of stimuli. I investigated how the cortical representations of a narrow category, here birdsongs, are modulated by training novices to recognize the songs of individual bird species. Dynamic analysis of distributed source estimations revealed differential sound object representations within the auditory ventral "What" pathway as a function of the newly acquired level of expertise. Correct recognition of trained items induces a sharpening within a left-lateralized semantic network starting around 200 ms, whereas the processing of untrained items occurs later, in lower-level and memory-related regions. With another narrow category of sounds, here heartbeats, I investigated the cortical representations of correct and incorrect recognition. Source estimations revealed differential representations partially overlapping with regions of the semantic network that is activated when participants become experts in the task.
Incorrect recognition also induces higher activation than correct recognition in regions processing lower-level features. Discriminating heartbeat sounds is a difficult task that requires continuous listening. I investigated whether repetition effects are modulated by participants' behavioral performance. Dynamic source estimations revealed repetition suppression in areas located outside the semantic network. Therefore, individual environmental sounds become meaningful with training. Their representations mainly involve a left-lateralized network of brain regions that are tuned with expertise, as well as other brain areas not related to semantic processing, with activity occurring at early stages of semantic processing. -- The term "sound object" describes an auditory experience associated with an acoustic event produced by a sound source. In the environment, a sound produced by a living being or an object provides information about the identity and the location of the sound source. Information about a sound's identity is processed along the ventral "What" pathway, composed of regions in the temporal and frontal cortices. The aim of this work is to study the neural mechanisms involved in the representation of new sound objects belonging to the same semantic category, as well as the associated plasticity phenomena, using electrical imaging. Discrimination of sounds belonging to different semantic categories has been shown to occur in different areas along the "What" pathway and to follow a temporal hierarchy. It has also been shown that the discrimination of stimuli belonging to the same semantic category, such as faces or voices, occurs in specific areas, and that such stimuli may form a special class. I studied how the cortical representations of sounds belonging to the same semantic category, in this case birdsongs, are modified by training: novice participants were trained to recognize the songs of specific bird species. Analysis of neural source estimations over time showed that sound object representations activate regions along the ventral pathway differently depending on the level of expertise acquired through training. Recognition of the songs on which participants were trained involves a semantic network, mainly located in the left hemisphere, activated around 200 ms. In contrast, recognition of songs on which participants were not trained occurs later, in lower-level regions. I then studied the mechanisms involved in the recognition and non-recognition of sounds belonging to another category, heartbeats. Source analysis showed that some regions of the expertise-related semantic network are recruited differently depending on whether or not the sound is recognized; non-recognition recruits lower-level regions. Discriminating heartbeat sounds is a difficult task that requires continuous listening. I studied the influence of behavioral responses on repetition effects. Source analysis showed that recognition and non-recognition of sounds induce different repetition effects in regions located outside the semantic network. Thus, sounds acquire meaning through training. Their cortical representation mainly involves a network of brain areas in the left hemisphere whose activity is optimized as a certain level of expertise is acquired, as well as other regions not related to semantic processing. The activity of this semantic network arises earlier than predicted by the temporal-hierarchy model.
Abstract:
Interactions between stimuli's acoustic features and experience-based internal models of the environment enable listeners to compensate for the disruptions of auditory streams that are regularly encountered in noisy environments. However, whether auditory gaps are filled in predictively or restored a posteriori remains unclear. The current lack of positive statistical evidence that internal models can actually shape brain activity as real sounds do precludes accepting predictive accounts of the filling-in phenomenon. We investigated the neurophysiological effects of internal models by testing whether single-trial electrophysiological responses to omitted sounds in a rule-based sequence of tones with varying pitch could be decoded from the responses to real sounds, and by analyzing the ERPs to the omissions with data-driven electrical neuroimaging methods. The decoding of brain responses to different expected, but omitted, tones in both passive and active listening conditions was above chance when based on the responses to the real sounds in active listening conditions. Topographic ERP analyses and electrical source estimations revealed that, in the absence of any stimulation, experience-based internal models elicit electrophysiological activity different from noise and that the temporal dynamics of this activity depend on attention. We further found that the expected change in pitch direction of omitted tones modulated the activity of left posterior temporal areas 140-200 msec after the onset of the omissions. Collectively, our results indicate that, even in the absence of any stimulation, internal models modulate brain activity as real sounds do, indicating that auditory filling-in can be accounted for by predictive activity.
Abstract:
Recognition of environmental sounds is believed to proceed through discrimination steps from broad to narrower categories. Very little is known about the neural processes that underlie fine-grained discrimination within narrow categories, or about their plasticity in relation to newly acquired expertise. We investigated how the cortical representation of birdsongs is modulated by brief training to recognize individual species. During a 60-minute session, participants learned to recognize a set of birdsongs; they significantly improved their performance for trained (T) but not control (C) species, which were counterbalanced across participants. Auditory evoked potentials (AEPs) were recorded during pre- and post-training sessions. Pre- vs. post-training changes in AEPs differed significantly between T and C species i) at 206-232 ms post stimulus onset within a cluster on the anterior part of the left superior temporal gyrus; ii) at 246-291 ms in the left middle frontal gyrus; and iii) at 512-545 ms in the left middle temporal gyrus as well as bilaterally in the cingulate cortex. All effects were driven by weaker activity for T than C species. Thus, expertise in discriminating T species modulated early stages of semantic processing, during and immediately after the time window that sustains the discrimination between human and animal vocalizations. Moreover, the training-induced plasticity is reflected in the sharpening of a left-lateralized semantic network, including the anterior part of the temporal convexity and the frontal cortex. Training to identify birdsongs, however, also influenced the processing of C species, but at a much later stage. Correct discrimination of untrained sounds seems to require an additional step resulting from lower-level feature analysis, such as apperception. We therefore suggest that access to objects within an auditory semantic category differs according to the subject's level of expertise.
More specifically, correct intra-categorical auditory discrimination of untrained items follows the temporal hierarchy and occurs at a late stage of semantic processing. Correct categorization of individually trained stimuli, on the other hand, occurs earlier, during a period contemporaneous with the discrimination of human vs. animal vocalizations, and involves a parallel semantic pathway requiring expertise.
Abstract:
Forensic laboratories mainly focus on the qualification and quantitation of the illicit drug under analysis, as both aspects are used for judicial purposes. Information on the cutting agents (adulterants and diluents) detected in illicit drugs is therefore limited in the forensic literature. This article discusses the type and frequency of adulterants and diluents detected in more than 6000 cocaine specimens and 3000 heroin specimens confiscated in western Switzerland from 2006 to 2014. The results show homogeneous and fairly stable adulteration for heroin, whereas adulteration of cocaine can be characterised as heterogeneous and relatively dynamic. Furthermore, the results indicate that dilution affects cocaine more than heroin. The results of this study thus tend to reveal differences between the respective production or distribution structures of cocaine and heroin. This research seeks to promote the systematic analysis of cutting agents by forensic laboratories: collecting and processing data on the presence of cutting agents in illicit drug specimens produces relevant information for understanding and comparing the structure of illicit drug markets.
Abstract:
We describe the case of a patient with pure verbal palinacousis and perseveration of inner speech after a right inferior temporal lesion. The superior temporal lobe, including the superior temporal sulcus, and the interhemispheric connection between the two superior temporal lobes, explored by tractography, were preserved. These regions are involved in voice processing, verbal short-term memory and inner speech. It can therefore be hypothesised that abnormal activity occurred in this network. Palinacousis and 'palinendophonia', a term proposed for this previously unreported symptom, may be due to a disorder of cognitive processes common to both voice hearing and inner speech.
Abstract:
Many aspects of human behavior are driven by rewards, yet different people are differentially sensitive to rewards and punishment. In this study, we show that white matter microstructure in the uncinate/inferior fronto-occipital fasciculus, defined by fractional anisotropy values derived from diffusion tensor magnetic resonance images, correlates with both short-term (indexed by the fMRI blood oxygenation level-dependent response to reward in the nucleus accumbens) and long-term (indexed by the trait measure sensitivity to punishment) reactivity to rewards. Moreover, trait measures of reward processing were also correlated with reward-related functional activation in the nucleus accumbens. The white matter tract revealed by the correlational analysis connects the anterior temporal lobe with the medial and lateral orbitofrontal cortex and also supplies the ventral striatum. The pattern of strong correlations suggests an intimate relationship between white matter structure and reward-related behavior that may also play a role in a number of pathological conditions, such as addiction and pathological gambling.
Abstract:
The mismatch negativity (MMN) is an electrophysiological marker of auditory change detection in the event-related brain potential and has been proposed to reflect an automatic comparison process between an incoming stimulus and the representation of prior items in a sequence. There is evidence for two main functional subcomponents comprising the MMN, generated by temporal and frontal brain areas, respectively. Using data obtained in an MMN paradigm, we performed time-frequency analysis to reveal the changes in oscillatory neural activity in the theta band. The results suggest that the frontal component of the MMN is brought about by an increase in theta power on deviant trials and, possibly, by an additional contribution of theta phase alignment. By contrast, the temporal component of the MMN, best seen in recordings from mastoid electrodes, is generated by phase resetting of the theta rhythm with no concomitant power modulation. Thus, the frontal and temporal MMN components not only differ in their functional significance but also appear to be generated by distinct neurophysiological mechanisms.
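The power-versus-phase-resetting distinction drawn in this abstract can be illustrated numerically: inter-trial phase coherence (ITC) separates a generator that adds theta power with random phase from one that resets theta phase without changing power. The following is a hedged toy simulation, not the authors' analysis; the sampling rate, frequency and noise level are arbitrary choices:

```python
import cmath
import math
import random

random.seed(1)
FS = 250           # sampling rate in Hz (illustrative)
FREQ = 6.0         # theta-band frequency of interest (illustrative)
N_TRIALS = 100

def trial(phase_locked, amplitude):
    # One synthetic 1-s trial: a theta oscillation whose phase is either
    # stimulus-locked or random, plus additive Gaussian noise.
    phase = 0.0 if phase_locked else random.uniform(0, 2 * math.pi)
    return [amplitude * math.sin(2 * math.pi * FREQ * t / FS + phase)
            + random.gauss(0, 0.5) for t in range(FS)]

def fourier_coeff(x):
    # Single-frequency DFT coefficient at FREQ (a crude stand-in for a
    # wavelet transform at one time-frequency point).
    return sum(v * cmath.exp(-2j * math.pi * FREQ * t / FS)
               for t, v in enumerate(x)) / len(x)

def power_and_itc(trials):
    coeffs = [fourier_coeff(x) for x in trials]
    power = sum(abs(c) ** 2 for c in coeffs) / len(coeffs)
    # ITC: length of the mean unit phase vector, between 0 and 1.
    itc = abs(sum(c / abs(c) for c in coeffs) / len(coeffs))
    return power, itc

# "Frontal-like" generator: power increase with random phase.
p_pow, itc_pow = power_and_itc([trial(False, 2.0) for _ in range(N_TRIALS)])
# "Temporal-like" generator: phase resetting with no power change.
p_reset, itc_reset = power_and_itc([trial(True, 1.0) for _ in range(N_TRIALS)])
```

In this simulation the phase-resetting condition yields high ITC with lower power, while the power-increase condition yields high power with low ITC, mirroring the dissociation the abstract describes.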
Abstract:
This dissertation examined skill development in music reading by focusing on the visual processing of music notation in different music-reading tasks. Each of the three experiments addressed one of three types of music reading: (i) sight-reading, i.e. reading and performing completely unknown music; (ii) rehearsed reading, during which the performer is already familiar with the music being played; and (iii) silent reading with no performance requirements. Eye-tracking methodology allowed the readers' eye movements during music reading to be recorded with great precision. Because the small body of prior research on eye movements in music reading lacks coherence, the dissertation also had a strong methodological emphasis. It thus pursued two major aims: (1) to investigate the eye-movement indicators of skill and skill development in sight-reading, rehearsed reading and silent reading, and (2) to develop and test suitable methods for future studies on the topic. Experiment I focused on the eye-movement behaviour of adults during their first steps in learning to read music notation. The longitudinal experiment spanned a nine-month music-training period, during which 49 participants (university students taking a compulsory music course) sight-read and performed a series of simple melodies in three measurement sessions. Participants with no musical background were classified as "novices", whereas "amateurs" had had musical training prior to the experiment. The main interest was in the changes in the novices' eye movements and performances across the measurements, while the amateurs offered a point of reference for assessing the novices' development. The experiment showed that the novices tended to sight-read in a more stepwise fashion than the amateurs, the latter group making more back-and-forth eye movements.
The novices' skill development was reflected in the faster identification of note symbols involved in larger melodic intervals. Across the measurements, the novices also began to show sensitivity to the melodies' metrical structure, which the amateurs demonstrated from the very beginning. The stimulus melodies consisted of quarter notes, making the effects of meter and larger melodic intervals distinguishable from effects caused by, say, different rhythmic patterns. Experiment II explored the eye movements of 40 experienced musicians (music education students and music performance students) during temporally controlled rehearsed reading. This cross-sectional experiment focused on the eye-movement effects of one-bar melodic alterations placed within a familiar melody. Synchronizing the performance and eye-movement recordings enabled investigation of the eye-hand span, i.e., the temporal gap between a performed note and the point of gaze. The eye-hand span was typically found to remain around one second. Music performance students demonstrated increased processing efficiency through their shorter average fixation durations as well as in the two examined eye-hand span measures: they used larger eye-hand spans more frequently and inspected more of the musical score during the performance of one metrical beat than the music education students. Although all participants produced performances almost indistinguishable in their auditory characteristics, the altered bars indeed affected the reading of the score: the general effects of expertise on the two eye-hand span measures, demonstrated by the music performance students, disappeared in the face of the melodic alterations. Experiment III was a longitudinal experiment designed to examine the differences between adult novice and amateur musicians' silent reading of music notation, as well as the changes the 49 participants manifested during a nine-month music course.
From a methodological perspective, a novel contribution to research on eye movements in music reading was the inclusion of a verbal protocol in the research design: after viewing the musical image, the readers were asked to describe what they had seen. A two-way categorization of the verbal descriptions was developed in order to assess the quality of the extracted musical information. More extensive musical background was related to shorter average fixation duration, more linear scanning of the musical image, and more sophisticated verbal descriptions of the music in question. No clear effects of skill development were observed for the novice music readers alone, but all participants improved their verbal descriptions towards the last measurement. Apart from the background-related differences between participant groups, combining verbal and eye-movement data in a cluster analysis identified three styles of silent reading, demonstrating individual differences in how the freely defined silent-reading task was approached. This dissertation is among the first systematic series of experiments addressing the visual processing of music notation in various types of music-reading tasks and focusing especially on the eye-movement indicators of developing music-reading skill. Overall, the experiments demonstrate that music-reading processes are affected not only by "top-down" factors, such as musical background, but also by the "bottom-up" effects of specific features of music notation, such as pitch heights, metrical division, rhythmic patterns and unexpected melodic events. From a methodological perspective, the experiments emphasize the importance of systematic stimulus design, temporal control during performance tasks, and the development of complementary methods to ease the interpretation of eye-movement data.
To conclude, this dissertation suggests that the systematic application of eye-tracking methodology in this domain can advance our understanding of the cognitive aspects of music reading and of the nature of expertise in this musical task, and can support the development of educational tools.
Abstract:
The inferior colliculus is a primary relay for the processing of auditory information in the brainstem. It is also part of the so-called brain aversion system, as animals learn to switch off electrical stimulation of this structure. The purpose of the present study was to determine whether associative learning occurs between the aversion induced by electrical stimulation of the inferior colliculus and visual or auditory warning stimuli. Rats implanted with electrodes in the central nucleus of the inferior colliculus were placed inside an open field and thresholds for the escape response to electrical stimulation of the inferior colliculus were determined. The rats were then placed inside a shuttle-box and submitted to a two-way avoidance paradigm. Electrical stimulation of the inferior colliculus at the escape threshold (98.12 ± 6.15 µA, peak-to-peak) was used as negative reinforcement, with light or tone as the warning stimulus. Each session consisted of 50 trials and was divided into two segments of 25 trials in order to determine the learning rate of the animals within the sessions. The rats learned to avoid the inferior colliculus stimulation when light was used as the warning stimulus (latencies of 13.25 ± 0.60 s and 8.63 ± 0.93 s, and frequencies of 12.5 ± 2.04 and 19.62 ± 1.65, in the first and second halves of the sessions, respectively; P<0.01 in both cases). No significant changes in latencies (14.75 ± 1.63 and 12.75 ± 1.44 s) or response frequencies (8.75 ± 1.20 and 11.25 ± 1.13) were seen when tone was used as the warning stimulus (P>0.05 in both cases). Taken together, the present results suggest that rats learn to avoid inferior colliculus stimulation when light is used as the warning stimulus, but that this learning does not occur when the neutral stimulus is an acoustic one. Electrical stimulation of the inferior colliculus may disturb the transmission of the to-be-conditioned stimulus from the inferior colliculus to higher brain structures such as the amygdala.
Abstract:
In this thesis, the suitability of different trackers for finger tracking in high-speed videos was studied. Tracked finger trajectories from the videos were post-processed and analysed using various filtering and smoothing methods. Position derivatives of the trajectories, speed and acceleration were extracted for the purposes of hand motion analysis. Overall, two methods, Kernelized Correlation Filters and Spatio-Temporal Context Learning tracking, performed better than the others in the tests. Both achieved high accuracy for the selected high-speed videos and also allowed real-time processing, being able to process over 500 frames per second. In addition, the results showed that different filtering methods can be applied to produce more appropriate velocity and acceleration curves calculated from the tracking data. Local Regression filtering and Unscented Kalman Smoother gave the best results in the tests. Furthermore, the results show that tracking and filtering methods are suitable for high-speed hand-tracking and trajectory-data post-processing.
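The derivative-and-smoothing step described above can be sketched in a few lines. The following is a generic illustration, not the thesis code: the moving average merely stands in for the Local Regression and Unscented Kalman smoothing mentioned, and the frame rate and noise level are assumed values:

```python
import random

random.seed(2)

def finite_diff(values, dt):
    # Central differences in the interior, one-sided at the ends.
    n = len(values)
    out = [0.0] * n
    for i in range(n):
        if i == 0:
            out[i] = (values[1] - values[0]) / dt
        elif i == n - 1:
            out[i] = (values[-1] - values[-2]) / dt
        else:
            out[i] = (values[i + 1] - values[i - 1]) / (2 * dt)
    return out

def moving_average(values, window=5):
    # Simple smoother standing in for the LOESS / Unscented Kalman
    # smoothing used in the thesis.
    half = window // 2
    return [sum(values[max(0, i - half):i + half + 1]) /
            len(values[max(0, i - half):i + half + 1])
            for i in range(len(values))]

FS = 500           # frames per second of the high-speed video (assumed)
dt = 1.0 / FS
# Noisy synthetic fingertip x-positions moving at a constant 0.2 m/s.
xs = [0.2 * i * dt + random.gauss(0, 0.0005) for i in range(200)]
speed = finite_diff(moving_average(xs), dt)   # first derivative
accel = finite_diff(speed, dt)                # second derivative
```

Smoothing the positions before differentiating matters because differentiation amplifies high-frequency noise; differentiating the raw trajectory at 500 fps would produce velocity estimates dominated by jitter.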
Abstract:
Happy emotional states have not been extensively explored in functional magnetic resonance imaging studies using autobiographic recall paradigms. We investigated the brain circuitry engaged during induction of happiness by standardized script-driven autobiographical recall in 11 healthy subjects (6 males), aged 32.4 ± 7.2 years, without physical or psychiatric disorders, selected according to their ability to vividly recall personal experiences. Blood oxygen level-dependent (BOLD) changes were recorded during auditory presentation of personal scripts of happiness, neutral content and negative emotional content (irritability). The same uniform structure was used for the cueing narratives of both emotionally salient and neutral conditions, in order to decrease the variability of findings. In the happiness relative to the neutral condition, there was an increased BOLD signal in the left dorsal prefrontal cortex and anterior insula, thalamus bilaterally, left hypothalamus, left anterior cingulate gyrus, and midportions of the left middle temporal gyrus (P < 0.05, corrected for multiple comparisons). Relative to the irritability condition, the happiness condition showed increased activity in the left insula, thalamus and hypothalamus, and in anterior and midportions of the inferior and middle temporal gyri bilaterally (P < 0.05, corrected), varying in size between 13 and 64 voxels. Findings of happiness-related increased activity in prefrontal and subcortical regions extend the results of previous functional imaging studies of autobiographical recall. The BOLD signal changes identified reflect general aspects of emotional processing, emotional control, and the processing of sensory and bodily signals associated with internally generated feelings of happiness. These results reinforce the notion that happiness induction engages a wide network of brain regions.
Abstract:
The present thesis study is a systematic investigation of information processing at sleep onset, using auditory event-related potentials (ERPs) as a test of the neurocognitive model of insomnia. Insomnia is an extremely prevalent disorder in society resulting in problems with daytime functioning (e.g., memory, concentration, job performance, mood, and job and driving safety). Various models have been put forth in an effort to better understand the etiology and pathophysiology of this disorder. One of the newer models, the neurocognitive model of insomnia, suggests that chronic insomnia occurs through conditioned central nervous system arousal. This arousal is reflected in increased information processing, which may interfere with sleep initiation or maintenance. The present thesis employed event-related potentials as a direct method to test information processing during the sleep-onset period. Thirteen poor sleepers with sleep-onset insomnia and 12 good sleepers participated in the present study. All poor sleepers met the diagnostic criteria for psychophysiological insomnia and complained of problems with sleep initiation. All good sleepers reported no trouble sleeping and no excessive daytime sleepiness. Good and poor sleepers spent two nights at the Brock University Sleep Research Laboratory. The first night was used to screen for sleep disorders; the second night was used to investigate information processing during the sleep-onset period. Both groups underwent a repeated sleep-onsets task during which an auditory oddball paradigm was delivered. Participants signalled detection of a higher-pitch target tone with a button press as they fell asleep. In addition, waking alert ERPs were recorded 1 hour before and after sleep on both Nights 1 and 2. As predicted by the neurocognitive model of insomnia, increased CNS activity was found in the poor sleepers; this was reflected in their smaller-amplitude P2 component seen during wakefulness in the sleep-onset period.
Unlike the P2 component, the N1, N350, and P300 did not vary between the groups. The smaller P2 seen in our poor sleepers indicates that they have a deficit in sleep initiation processes. Specifically, poor sleepers do not disengage their attention from the outside environment to the same extent as good sleepers during the sleep-onset period. The lack of findings for the N350 suggests that this sleep component may be intact in those with insomnia and that it is the waking components (i.e., N1, P2) that may be leading to the deficit in sleep initiation. Further, the mechanism responsible for the disruption of sleep initiation in the poor sleepers may be best reflected by the P2 component. Future research investigating ERPs in insomnia should focus on identifying the components most sensitive to sleep disruption. As well, methods should be developed to more clearly identify the various types of insomnia populations in research contexts (e.g., psychophysiological vs. sleep-state misperception) and the various individual (personality characteristics, motivation) and environmental (arousal-related) factors that influence particular ERP components. Insomnia has serious consequences for health, safety, and daytime functioning; research efforts should therefore continue in order to help alleviate this highly prevalent condition.
Abstract:
As important social stimuli, faces play a critical role in our lives. Much of our interaction with other people depends on our ability to recognize faces accurately. It has been proposed that face processing consists of different stages and interacts with other systems (Bruce & Young, 1986). At a perceptual level, the initial two stages, namely structural encoding and face recognition, are particularly relevant and are the focus of this dissertation. Event-related potentials (ERPs) are averaged EEG signals time-locked to a particular event (such as the presentation of a face). With their excellent temporal resolution, ERPs can provide important timing information about neural processes. Previous research has identified several ERP components that are especially related to face processing, including the N170, the P2 and the N250. Their nature with respect to the stages of face processing is still unclear, and is examined in Studies 1 and 2. In Study 1, participants made gender decisions on a large set of female faces interspersed with a few male faces. The ERP responses to facial characteristics of the female faces indicated that the N170 amplitude from each side of the head was affected by information from the eye region and by facial layout: the right N170 was affected by eye color and by face width, while the left N170 was affected by eye size and by the relation between the sizes of the top and bottom parts of a face. In contrast, the P100 and the N250 components were largely unaffected by facial characteristics. These results thus provided direct evidence for the link between the N170 and structural encoding of faces. In Study 2, focusing on the face recognition stage, we manipulated face identity strength by morphing individual faces toward an "average" face. Participants performed a face identification task.
The effect of face identity strength was found on the late P2 and the N250 components: as identity strength decreased from an individual face to the "average" face, the late P2 increased and the N250 decreased. In contrast, the P100, the N170 and the early P2 components were not affected by face identity strength. These results suggest that face recognition occurs after 200 ms, but not earlier. Finally, because faces are often associated with social information, we investigated in Study 3 how group membership might affect ERP responses to faces. After participants learned in- and out-group memberships of the face stimuli based on arbitrarily assigned nationality and university affiliation, we found that the N170 latency differentiated in-group and out-group faces, taking longer to process the latter. In comparison, without group memberships, there was no difference in N170 latency among the faces. This dissertation provides evidence that at a neural level, structural encoding of faces, indexed by the N170, occurs within 200 ms. Face recognition, indexed by the late P2 and the N250, occurs shortly afterwards between 200 and 300 ms. Social cognitive factors can also influence face processing. The effect is already evident as early as 130-200 ms at the structural encoding stage.