191 results for Auditory-evoked-potentials
at Université de Lausanne, Switzerland
Abstract:
Introduction: Accurate registration of the relative timing between the occurrence of sensory events on a sub-second time scale is crucial for both sensory-motor and cognitive functions (Mauk and Buonomano, 2004; Habib, 2000). Support for this assumption comes notably from evidence that temporal processing impairments are implicated in a range of neurological and psychiatric conditions (e.g. Buhusi & Meck, 2005). For instance, deficits in fast auditory temporal integration have regularly been put forward as resulting in phonologic discrimination impairments at the basis of the speech comprehension deficits characterizing, e.g., dyslexia (Habib, 2000). At least two aspects of the brain mechanisms of temporal order judgment (TOJ) remain unknown. First, it is unknown when during the course of stimulus processing a temporal 'stamp' is established to guide TOJ perception. Second, the extent of interplay between the cerebral hemispheres in engendering accurate TOJ performance is unresolved. Methods: We investigated the spatiotemporal brain dynamics of auditory temporal order judgment (aTOJ) using electrical neuroimaging analyses of auditory evoked potentials (AEPs) recorded while participants completed a near-threshold task requiring spatial discrimination of left-right and right-left sound sequences. Results: AEPs to sound pairs modulated topographically as a function of aTOJ accuracy over the 39-77 ms post-stimulus period, indicating the engagement of distinct configurations of brain networks during early auditory processing stages. Source estimations revealed that accurate and inaccurate performance were linked to bilateral posterior sylvian region (PSR) activity. However, activity within left, but not right, PSR predicted behavioral performance, suggesting that left PSR activity during early encoding phases of pairs of auditory spatial stimuli is critical for the perception of their order of occurrence.
Correlation analyses of source estimations further revealed that activity between left and right PSR was significantly correlated in the inaccurate but not the accurate condition, indicating that aTOJ accuracy depends on the functional decoupling between homotopic PSR areas. Conclusions: These results support a model of temporal order processing wherein behaviorally relevant temporal information - i.e. a temporal 'stamp' - is extracted within the early stages of cortical processing within left PSR but is critically modulated by inputs from right PSR. We discuss our results with regard to current models of temporal order processing, namely gating and latency mechanisms.
Abstract:
Auditory evoked potentials are informative of intact cortical functions of comatose patients. The integrity of auditory functions evaluated using mismatch negativity paradigms has been associated with their chances of survival. However, because auditory discrimination is assessed at various delays after coma onset, it is still unclear whether this impairment depends on the time of the recording. We hypothesized that impairment in auditory discrimination capabilities is indicative of coma progression, rather than of the comatose state itself, and that rudimentary auditory discrimination remains intact during acute stages of coma. We studied 30 post-anoxic comatose patients resuscitated from cardiac arrest and five healthy, age-matched controls. Using a mismatch negativity paradigm, we performed two electroencephalography recordings with a standard 19-channel clinical montage: the first within 24 h after coma onset and under mild therapeutic hypothermia, and the second after 1 day and under normothermic conditions. We analysed electroencephalography responses based on a multivariate decoding algorithm that automatically quantifies neural discrimination at the single-patient level. Results showed high average decoding accuracy in discriminating sounds both for control subjects and comatose patients. Importantly, accurate decoding was largely independent of patients' chance of survival. However, the progression of auditory discrimination between the first and second recordings was informative of a patient's chance of survival. A deterioration of auditory discrimination was observed in all non-survivors (equivalent to 100% positive predictive value for survivors). We show, for the first time, evidence of intact auditory processing even in comatose patients who do not survive, and that the progression of sound discrimination over time is informative of a patient's chance of survival.
Tracking auditory discrimination in comatose patients could provide new insight into the chances of awakening, in a quantitative and automatic fashion, during the early stages of coma.
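The single-patient quantification described above relies on multivariate decoding of single-trial EEG responses. The following is a minimal sketch of the general idea, not the authors' exact algorithm: a cross-validated nearest-class-mean classifier applied to single-trial feature vectors. All names and parameters are illustrative assumptions.

```python
import numpy as np

def decode_accuracy(X, y, n_folds=5, seed=0):
    """Cross-validated nearest-class-mean decoding of single-trial responses.

    X: (n_trials, n_features) single-trial EEG features (e.g. flattened
       channel x time voltages); y: (n_trials,) binary labels
       (0 = standard, 1 = deviant sound). Returns the fraction of
       held-out trials assigned to the correct class.
    """
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), n_folds)
    correct = 0
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        m0 = X[train][y[train] == 0].mean(axis=0)   # class templates computed
        m1 = X[train][y[train] == 1].mean(axis=0)   # from training folds only
        pred = (np.linalg.norm(X[test] - m1, axis=1)
                < np.linalg.norm(X[test] - m0, axis=1)).astype(int)
        correct += int((pred == y[test]).sum())
    return correct / len(y)
```

Decoding accuracy significantly above the 0.5 chance level (e.g. assessed against a permutation distribution) would then count as evidence of preserved neural discrimination in a given patient.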
Abstract:
Accurate perception of the order of occurrence of sensory information is critical for building up coherent representations of the external world from ongoing flows of sensory inputs. While some psychophysical evidence reports that performance on temporal perception can improve, the underlying neural mechanisms remain unresolved. Using electrical neuroimaging analyses of auditory evoked potentials (AEPs), we identified the brain dynamics and mechanism supporting improvements in auditory temporal order judgment (TOJ) during the first vs. latter half of the experiment. Training-induced changes in brain activity were first evident 43-76 ms post stimulus onset and followed from topographic, rather than pure strength, AEP modulations. Improvements in auditory TOJ accuracy thus followed from changes in the configuration of the underlying brain networks during the initial stages of sensory processing. Source estimations revealed that initially bilateral posterior sylvian region (PSR) responses at the beginning of the experiment became increasingly lateralized, reaching left-hemisphere dominance by its end. Further supporting the critical role of left and right PSR in auditory TOJ proficiency, responses in the left and right PSR went from being correlated to uncorrelated as the experiment progressed. These collective findings provide insights into the neurophysiologic mechanism and plasticity of temporal processing of sounds and are consistent with models based on spike-timing-dependent plasticity.
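The coupling analysis underlying the correlated-to-uncorrelated finding can be sketched as a trial-wise Pearson correlation between left and right PSR source activity. This is a hypothetical helper; the study's actual source-estimation pipeline is more involved.

```python
import numpy as np

def psr_coupling(left_activity, right_activity):
    """Pearson correlation between trial-wise source activity estimates in
    left and right posterior sylvian regions. Values near zero indicate
    functional decoupling of the two homotopic areas."""
    return float(np.corrcoef(left_activity, right_activity)[0, 1])
```

Comparing this coefficient between the first and latter halves of an experiment (with an appropriate statistical test on the correlation difference) would quantify the shift from coupled to decoupled responses.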
Learning-induced plasticity in auditory spatial representations revealed by electrical neuroimaging.
Abstract:
Auditory spatial representations are likely encoded at a population level within human auditory cortices. We investigated learning-induced plasticity of spatial discrimination in healthy subjects using auditory-evoked potentials (AEPs) and electrical neuroimaging analyses. Stimuli were 100 ms white-noise bursts lateralized with varying interaural time differences. In three experiments, plasticity was induced with 40 min of discrimination training. During training, accuracy significantly improved from near-chance levels to approximately 75%. Before and after training, AEPs were recorded to stimuli presented passively with a more medial sound lateralization outnumbering a more lateral one (7:1). In experiment 1, the same lateralizations were used for training and AEP sessions. Significant AEP modulations to the different lateralizations were evident only after training, indicative of a learning-induced mismatch negativity (MMN). More precisely, this MMN at 195-250 ms after stimulus onset followed from differences in the AEP topography to each stimulus position, indicative of changes in the underlying brain network. In experiment 2, mirror-symmetric locations were used for training and AEP sessions; no training-related AEP modulations or MMN were observed. In experiment 3, the discrimination of trained plus equidistant untrained separations was tested psychophysically before and 0, 6, 24, and 48 h after training. Learning-induced plasticity lasted <6 h, did not generalize to untrained lateralizations, and was not the simple result of strengthening the representation of the trained lateralizations. Thus, learning-induced plasticity of auditory spatial discrimination relies on spatial comparisons, rather than a spatial anchor or a general comparator. Furthermore, cortical auditory representations of space are dynamic and subject to rapid reorganization.
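Stimuli of the kind used here are straightforward to synthesize: a white-noise burst lateralized by delaying one ear's signal relative to the other. A minimal sketch follows; the function name, default 500 µs ITD, and 44.1 kHz sampling rate are illustrative assumptions, not the study's exact parameters.

```python
import numpy as np

def itd_noise_burst(duration_ms=100.0, itd_us=500.0, fs=44100, seed=0):
    """White-noise burst lateralized with an interaural time difference.

    A positive itd_us makes the right ear lead, so the burst is perceived
    toward the right; negative values lateralize it to the left.
    Returns an (n_samples, 2) stereo array (left, right), peak-normalized.
    """
    rng = np.random.default_rng(seed)
    n = int(round(fs * duration_ms / 1000.0))
    shift = int(round(fs * abs(itd_us) * 1e-6))    # ITD in whole samples
    noise = rng.standard_normal(n + shift)
    lead, lag = noise[shift:shift + n], noise[:n]  # lead precedes lag by `shift`
    left, right = (lag, lead) if itd_us >= 0 else (lead, lag)
    stereo = np.stack([left, right], axis=1)
    return stereo / np.max(np.abs(stereo))
```

Varying `itd_us` trial by trial produces the graded lateralizations needed for a discrimination-training or oddball (7:1) design.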
Abstract:
Multisensory interactions are observed in species from single-cell organisms to humans. Important early work was primarily carried out in the cat superior colliculus and a set of critical parameters for their occurrence were defined. Primary among these were temporal synchrony and spatial alignment of bisensory inputs. Here, we assessed whether spatial alignment was also a critical parameter for the temporally earliest multisensory interactions that are observed in lower-level sensory cortices of the human. While multisensory interactions in humans have been shown behaviorally for spatially disparate stimuli (e.g. the ventriloquist effect), it is not clear if such effects are due to early sensory level integration or later perceptual level processing. In the present study, we used psychophysical and electrophysiological indices to show that auditory-somatosensory interactions in humans occur via the same early sensory mechanism both when stimuli are in and out of spatial register. Subjects more rapidly detected multisensory than unisensory events. At just 50 ms post-stimulus, neural responses to the multisensory 'whole' were greater than the summed responses from the constituent unisensory 'parts'. For all spatial configurations, this effect followed from a modulation of the strength of brain responses, rather than the activation of regions specifically responsive to multisensory pairs. Using the local auto-regressive average source estimation, we localized the initial auditory-somatosensory interactions to auditory association areas contralateral to the side of somatosensory stimulation. Thus, multisensory interactions can occur across wide peripersonal spatial separations remarkably early in sensory processing and in cortical regions traditionally considered unisensory.
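The additivity test at the heart of this design ('whole' vs. summed 'parts') reduces to comparing the multisensory response against the sum of the unisensory responses within a latency window. A minimal sketch with hypothetical names; the 40-60 ms default window is illustrative, chosen to bracket the ~50 ms effect described above.

```python
import numpy as np

def interaction_index(multi, aud, som, times, window=(0.04, 0.06)):
    """Mean (multisensory - [auditory + somatosensory]) amplitude within a
    latency window (seconds). Inputs are evoked responses sampled at
    `times`; positive values indicate supra-additive response interactions.
    """
    mask = (times >= window[0]) & (times <= window[1])
    return float((multi - (aud + som))[mask].mean())
```

In practice such an index would be computed per subject and electrode (or per source) and tested against zero across the group.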
Abstract:
Action representations can interact with object recognition processes. For example, so-called mirror neurons respond both when performing an action and when seeing or hearing such actions. Investigations of auditory object processing have largely focused on categorical discrimination, which begins within the initial 100 ms post-stimulus onset and subsequently engages distinct cortical networks. Whether action representations themselves contribute to auditory object recognition, and the precise kinds of actions recruiting the auditory-visual mirror neuron system, remain poorly understood. We applied electrical neuroimaging analyses to auditory evoked potentials (AEPs) in response to sounds of man-made objects that were further subdivided between sounds conveying a socio-functional context and typically cuing a responsive action by the listener (e.g. a ringing telephone) and those that are not linked to such a context and do not typically elicit responsive actions (e.g. notes on a piano). This distinction was validated psychophysically by a separate cohort of listeners. Beginning approximately 300 ms post-stimulus onset, responses to such context-related sounds significantly differed from context-free sounds both in the strength and topography of the electric field. This latency is >200 ms subsequent to general categorical discrimination. Additionally, such topographic differences indicate that sounds of different action sub-types engage distinct configurations of intracranial generators. Statistical analysis of source estimations identified differential activity within premotor and inferior (pre)frontal regions (Brodmann's areas (BA) 6, BA8, and BA45/46/47) in response to sounds of actions typically cuing a responsive action. We discuss our results in terms of a spatio-temporal model of auditory object processing and the interplay between semantic and action representations.
Abstract:
Background and aim of the study: The formation of implicit memory during general anaesthesia is still debated. Perceptual learning is the ability to learn to perceive. In this study, an auditory perceptual learning paradigm using frequency discrimination was performed to investigate implicit memory. It was hypothesized that auditory stimulation would successfully induce perceptual learning; thus, initial thresholds on the postoperative frequency discrimination task should be lower for the stimulated group (group S) than for the control group (group C). Material and method: Eighty-seven ASA I-III patients undergoing visceral and orthopaedic surgery during general anaesthesia lasting more than 60 minutes were recruited. The anaesthesia procedure was standardized (BIS monitoring included). Group S received auditory stimulation (2000 pure tones applied for 45 minutes) during surgery. Twenty-four hours after the operation, both groups performed ten blocks of the frequency discrimination task. The mean threshold over the first three blocks (T1) was compared between groups. Results: Mean age and BIS values of group S and group C were, respectively, 40 ± 11 vs 42 ± 11 years (p = 0.49) and 42 ± 6 vs 41 ± 8 (p = 0.87). T1 was 31 ± 33 vs 28 ± 34 (p = 0.72) in groups S and C, respectively. Conclusion: In our study, no implicit memory during general anaesthesia was demonstrated. This may be explained by a modulation of the auditory evoked potentials caused by the anaesthesia, or by an insufficient duration of repetitive stimulation to induce perceptual learning.
Abstract:
Recent evidence suggests the human auditory system is organized, like the visual system, into a ventral 'what' pathway, devoted to identifying objects, and a dorsal 'where' pathway, devoted to the localization of objects in space [1]. Several brain regions have been identified in these two different pathways, but until now little is known about the temporal dynamics of these regions. We investigated this issue using 128-channel auditory evoked potentials (AEPs). Stimuli were stationary sounds created by varying interaural time differences and real recorded environmental sounds. Stimuli of each condition (localization, recognition) were presented through earphones in a blocked design, while subjects determined their position or meaning, respectively. AEPs were analyzed in terms of their topographical scalp potential distributions (segmentation maps) and underlying neuronal generators (source estimation) [2]. Fourteen scalp potential distributions (maps) best explained the entire data set. Ten maps were nonspecific (associated with auditory stimulation in general), two were specific for sound localization and two were specific for sound recognition (P-values ranging from 0.02 to 0.045). Condition-specific maps appeared at two distinct time periods: ~200 ms and ~375-550 ms post-stimulus. The brain sources associated with the maps specific for sound localization were mainly situated in the inferior frontal cortices, confirming previous findings [3]. The sources associated with sound recognition were predominantly located in the temporal cortices, with a weaker activation in the frontal cortex. The data show that sound localization and sound recognition engage different brain networks that are apparent at two distinct time periods.
References:
1. Maeder et al., Neuroimage, 2001.
2. Michel et al., Brain Research Reviews, 2001.
3. Ducommun et al., Neuroimage, 2002.
Abstract:
Accurate perception of the temporal order of sensory events is a prerequisite in numerous functions ranging from language comprehension to motor coordination. We investigated the spatio-temporal brain dynamics of auditory temporal order judgment (aTOJ) using electrical neuroimaging analyses of auditory evoked potentials (AEPs) recorded while participants completed a near-threshold task requiring spatial discrimination of left-right and right-left sound sequences. AEPs to sound pairs modulated topographically as a function of aTOJ accuracy over the 39-77 ms post-stimulus period, indicating the engagement of distinct configurations of brain networks during early auditory processing stages. Source estimations revealed that accurate and inaccurate performance were linked to bilateral posterior sylvian region (PSR) activity. However, activity within left, but not right, PSR predicted behavioral performance, suggesting that left PSR activity during early encoding phases of pairs of auditory spatial stimuli is critical for the perception of their order of occurrence. Correlation analyses of source estimations further revealed that activity between left and right PSR was significantly correlated in the inaccurate but not the accurate condition, indicating that aTOJ accuracy depends on the functional decoupling between homotopic PSR areas. These results support a model of temporal order processing wherein behaviorally relevant temporal information--i.e. a temporal 'stamp'--is extracted within the early stages of cortical processing within left PSR but is critically modulated by inputs from right PSR. We discuss our results with regard to current models of temporal order processing, namely gating and latency mechanisms.
Abstract:
BACKGROUND: Analyses of brain responses to external stimuli are typically based on the means computed across conditions. However, in many cognitive and clinical applications, taking into account their variability across trials has turned out to be statistically more sensitive than comparing their means. NEW METHOD: In this study we present a novel implementation of a single-trial topographic analysis (STTA) for discriminating auditory evoked potentials at predefined time windows. This analysis had previously been introduced for extracting spatio-temporal features at the level of the whole neural response; adapting the STTA to specific time windows is an essential step for comparing its performance to other time-window-based algorithms. RESULTS: We analyzed responses to standard vs. deviant sounds and showed that the new implementation of the STTA gives above-chance decoding results in all subjects (in comparison to 7 out of 11 with the original method). In comatose patients, the improvement in decoding performance was even more pronounced than in healthy controls and doubled the number of significant results. COMPARISON WITH EXISTING METHOD(S): We compared the results obtained with the new STTA to those based on a logistic regression in healthy controls and patients. In healthy controls the logistic regression performed better; however, only the new STTA provided significant results in comatose patients at the group level. CONCLUSIONS: Our results provide quantitative evidence that systematically investigating the accuracy of established methods in normal and clinical populations is an essential step for optimizing decoding performance.
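The window-based adaptation can be sketched as follows: each trial is reduced to its mean topography within a predefined window, average-referenced and scaled to unit global field power, so that any subsequent classification is driven by map shape (topography) rather than response strength. Names and normalization details below are illustrative assumptions, not the exact STTA implementation.

```python
import numpy as np

def window_topography(epochs, times, window):
    """Per-trial normalized topography in a predefined time window.

    epochs: (n_trials, n_channels, n_times) single-trial EEG;
    times:  (n_times,) latencies in seconds; window: (start, end).
    Returns (n_trials, n_channels) maps that are average-referenced and
    divided by their global field power, leaving only topographic shape.
    """
    mask = (times >= window[0]) & (times <= window[1])
    maps = epochs[:, :, mask].mean(axis=2)     # average within the window
    maps = maps - maps.mean(axis=1, keepdims=True)   # average reference
    gfp = maps.std(axis=1, keepdims=True)            # global field power
    return maps / np.where(gfp == 0.0, 1.0, gfp)
```

These per-trial maps can then be fed to any classifier (e.g. the topographic clustering of the STTA or a logistic regression) for window-by-window decoding.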
Abstract:
Interaural intensity and time differences (IID and ITD) are two binaural auditory cues for localizing sounds in space. This study investigated the spatio-temporal brain mechanisms for processing and integrating IID and ITD cues in humans. Auditory-evoked potentials were recorded, while subjects passively listened to noise bursts lateralized with IID, ITD or both cues simultaneously, as well as a more frequent centrally presented noise. In a separate psychophysical experiment, subjects actively discriminated lateralized from centrally presented stimuli. IID and ITD cues elicited different electric field topographies starting at approximately 75 ms post-stimulus onset, indicative of the engagement of distinct cortical networks. By contrast, no performance differences were observed between IID and ITD cues during the psychophysical experiment. Subjects did, however, respond significantly faster and more accurately when both cues were presented simultaneously. This performance facilitation exceeded predictions from probability summation, suggestive of interactions in neural processing of IID and ITD cues. Supra-additive neural response interactions as well as topographic modulations were indeed observed approximately 200 ms post-stimulus for the comparison of responses to the simultaneous presentation of both cues with the mean of those to separate IID and ITD cues. Source estimations revealed differential processing of IID and ITD cues initially within superior temporal cortices and also at later stages within temporo-parietal and inferior frontal cortices. Differences were principally in terms of hemispheric lateralization. The collective psychophysical and electrophysiological results support the hypothesis that IID and ITD cues are processed by distinct, but interacting, cortical networks that can in turn facilitate auditory localization.
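The probability-summation benchmark referred to above is commonly formalized as Miller's race-model inequality: if the two cues are processed independently, the redundant-cue reaction-time CDF should not exceed the sum of the two single-cue CDFs. A minimal empirical sketch (function names are hypothetical):

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical CDF of reaction times evaluated on a grid of latencies."""
    rts = np.asarray(rts, dtype=float)
    return (rts[:, None] <= np.asarray(t_grid, dtype=float)).mean(axis=0)

def race_violation(rt_both, rt_iid, rt_itd, t_grid):
    """Redundant-cue CDF minus the race-model bound min(P_IID + P_ITD, 1).

    Positive values indicate facilitation beyond probability summation,
    suggestive of neural interaction between the two binaural cues.
    """
    bound = np.minimum(ecdf(rt_iid, t_grid) + ecdf(rt_itd, t_grid), 1.0)
    return ecdf(rt_both, t_grid) - bound
```

Testing for positive values over the fast portion of the RT distribution (across subjects) is the standard way to conclude that the redundant-cue benefit exceeds what independent processing of IID and ITD could produce.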