1000 results for Auditory Brainstem Response
Abstract:
This study used magnetoencephalography (MEG) to examine the dynamic patterns of neural activity underlying the auditory steady-state response. We examined the continuous time-series of responses to a 32-Hz amplitude modulation. Fluctuations in the amplitude of the evoked response were found to be mediated by non-linear interactions with oscillatory processes both at the same source, in the alpha and beta frequency bands, and in the opposite hemisphere. © 2005 Elsevier Ireland Ltd. All rights reserved.
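The steady-state amplitude at the modulation frequency can be read off the Fourier spectrum of the recorded time series. A minimal sketch in Python, using a simulated trace rather than the study's MEG pipeline (the sampling rate, duration, and amplitudes are illustrative assumptions):

```python
import numpy as np

fs = 1000.0          # sampling rate in Hz (illustrative, not from the study)
f_mod = 32.0         # 32-Hz amplitude-modulation (tagging) frequency
t = np.arange(0, 2.0, 1 / fs)

# Simulated "MEG" trace: a 32-Hz steady-state component buried in noise
rng = np.random.default_rng(0)
signal = 0.5 * np.sin(2 * np.pi * f_mod * t) + rng.normal(0, 1.0, t.size)

# Amplitude of the evoked response at the modulation frequency via the FFT
spectrum = np.fft.rfft(signal) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
bin_32 = np.argmin(np.abs(freqs - f_mod))
assr_amp = 2 * np.abs(spectrum[bin_32])   # recovers ~0.5 despite the noise
```

Tracking this amplitude across successive analysis windows gives the continuous fluctuation time series whose co-variation with alpha- and beta-band power the study examines.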
Abstract:
The two electrophysiological tests currently favoured in the clinical measurement of hearing threshold are the brainstem auditory evoked potential (BAEP) and the slow vertex response (SVR). However, both tests possess disadvantages. The BAEP is the test of choice in younger patients as it is stable at all levels of arousal, but little information has been obtained to date at a range of frequencies. The SVR is frequency specific but is unreliable in certain adult subjects and is unstable during sleep or in young children. These deficiencies have prompted research into a third group of potentials, the middle latency response (MLR) and the 40 Hz response. This research compared the SVR and 40 Hz response in waking adults and reports that the 40 Hz test can provide a viable alternative to the SVR provided that a high degree of subject relaxation is ensured. A second study examined the morphology of the MLR and 40 Hz response during sleep. This work suggested that these potentials are markedly different during sleep and that methodological factors have been responsible for masking these changes in previous studies. The clinical possibilities of tone-pip BAEPs were then examined, as these components proved to be the only stable responses present in sleep. It was found that threshold estimates to 500 Hz, 1000 Hz and 4000 Hz stimuli could be made to within 15 dB SL in most cases. A final study looked more closely at methods of obtaining frequency-specific information in sleeping subjects. Threshold estimates were made using established BAEP parameters and compared to a 40 Hz procedure which recorded a series of BAEPs over a 100 ms time sweep. Results indicated that the 40 Hz procedure was superior to existing techniques in estimating thresholds to low-frequency stimuli.
This research has confirmed a role for the MLR and 40 Hz response as alternative measures of hearing capability in waking subjects and proposes that the 40 Hz technique is useful in measuring frequency-specific thresholds, although the responses recorded derive primarily from the brainstem.
Abstract:
Integrating information from multiple sources is a crucial function of the brain. Examples of such integration include multiple stimuli of different modalities, such as visual and auditory; multiple stimuli of the same modality, such as two auditory stimuli; and the integration of stimuli from the sensory organs (i.e., the ears) with stimuli delivered from brain-machine interfaces.
The overall aim of this body of work is to empirically examine stimulus integration in these three domains to inform our broader understanding of how and when the brain combines information from multiple sources.
First, I examine visually guided auditory learning, a problem with implications for the general question of how the brain determines which lessons to learn (and which not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound-location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a ‘guess and check’ heuristic in which visual feedback obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain’s reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound, but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6-degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.
My next line of research examines how electrical stimulation of the inferior colliculus influences the perception of sounds in a nonhuman primate. The central nucleus of the inferior colliculus is the major ascending relay of auditory information: almost all auditory signals pass through it before reaching the forebrain. This makes the inferior colliculus an ideal structure for examining the format of the inputs to the forebrain and, by extension, the processing of auditory scenes that occurs in the brainstem, and therefore an attractive target for understanding stimulus integration in the ascending auditory pathway.
Moreover, understanding the relationship between the auditory selectivity of neurons and their contribution to perception is critical to the design of effective auditory brain prosthetics. These prosthetics seek to mimic natural activity patterns to achieve desired perceptual outcomes. We measured the contribution of inferior colliculus (IC) sites to perception using combined recording and electrical stimulation. Monkeys performed a frequency-based discrimination task, reporting whether a probe sound was higher or lower in frequency than a reference sound. Stimulation pulses were paired with the probe sound on 50% of trials (0.5-80 µA, 100-300 Hz, n=172 IC locations in 3 rhesus monkeys). Electrical stimulation tended to bias the animals’ judgments in a fashion that was coarsely but significantly correlated with the best frequency of the stimulation site in comparison to the reference frequency employed in the task. Although there was considerable variability in the effects of stimulation (including impairments in performance and shifts in performance away from the direction predicted based on the site’s response properties), the results indicate that stimulation of the IC can evoke percepts correlated with the frequency tuning properties of the IC. Consistent with the implications of recent human studies, the main avenue for improvement for the auditory midbrain implant suggested by our findings is to increase the number and spatial extent of electrodes, to increase the size of the region that can be electrically activated and provide a greater range of evoked percepts.
My next line of research employs a frequency-tagging approach to examine the extent to which multiple sound sources are combined (or segregated) in the nonhuman primate inferior colliculus. In the single-sound case, most inferior colliculus neurons respond and entrain to sounds in a very broad region of space, and many are entirely spatially insensitive, so it is unknown how the neurons will respond to a situation with more than one sound. I use multiple AM stimuli of different frequencies, which the inferior colliculus represents using a spike timing code. This allows me to measure spike timing in the inferior colliculus to determine which sound source is responsible for neural activity in an auditory scene containing multiple sounds. Using this approach, I find that the same neurons that are tuned to broad regions of space in the single sound condition become dramatically more selective in the dual sound condition, preferentially entraining spikes to stimuli from a smaller region of space. I will examine the possibility that there may be a conceptual linkage between this finding and the finding of receptive field shifts in the visual system.
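The frequency-tagging logic above, attributing spikes to whichever AM rate they phase-lock to, can be illustrated with a standard vector-strength computation. A hedged sketch (the tagging frequencies, spike counts, and jitter are invented for illustration and are not the values used in the experiments):

```python
import numpy as np

def vector_strength(spike_times, freq):
    """Phase locking of spikes to one modulation frequency (0 = none, 1 = perfect)."""
    phases = 2 * np.pi * freq * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

# Hypothetical dual-sound trial: two sources tagged with different AM rates
f_a, f_b = 20.0, 28.0    # illustrative tagging frequencies (Hz)
rng = np.random.default_rng(1)

# Spikes entrained to sound A: one spike per 20-Hz cycle, with small jitter
base_times = np.arange(100) / f_a
spikes = base_times + rng.normal(0, 0.004, base_times.size)

vs_a = vector_strength(spikes, f_a)   # high: the neuron entrains to sound A
vs_b = vector_strength(spikes, f_b)   # low: little locking to sound B
```

Comparing the vector strength at each tag frequency thus reveals which sound source drives a given neuron's spiking within a multi-sound scene.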
In chapter 5, I will comment on these findings more generally, compare them to existing theoretical models, and discuss what these results tell us about processing in the central nervous system in a multi-stimulus situation. My results suggest that the brain is flexible in its processing and can adapt its integration schema to fit the available cues and the demands of the task.
Abstract:
This study examined spoken-word recognition in children with specific language impairment (SLI) and normally developing children matched separately for age and receptive language ability. Accuracy and reaction times on an auditory lexical decision task were compared. Children with SLI were less accurate than both control groups. Two subgroups of children with SLI, distinguished by performance accuracy only, were identified. One group performed within normal limits, while a second group was significantly less accurate. Children with SLI were not slower than the age-matched controls or language-matched controls. Further, the time taken to detect an auditory signal, make a decision, or initiate a verbal response did not account for the differences between the groups. The findings are interpreted as evidence for language-appropriate processing skills acting upon imprecise or underspecified stored representations.
Abstract:
This study aimed to evaluate the neural response in double-array cochlear implant users and to describe the refractory recovery and spread-of-excitation functions. In a prospective study, 11 patients were implanted with the double-array cochlear implant. Neural response telemetry (NRT) was performed intra-operatively. An NRT threshold could be registered in 6 of the 11 patients in at least one electrode; the remaining five patients showed no measurable neural response intra-operatively. Although recovery and spread-of-excitation functions could be recorded in all tested electrodes with measurable neural responses, the responses differed from those usually recorded in patients with other etiologies.
Abstract:
Acute acoustic trauma (AAT) is a sudden sensorineural hearing loss caused by exposure of the hearing organ to acoustic overstimulation, typically an intense sound impulse. Hyperbaric oxygen therapy (HOT), which favors repair of the microcirculation, can potentially be used to treat it. Hence, this study aimed to assess the effects of HOT on guinea pigs exposed to acoustic trauma. Fifteen guinea pigs were exposed to noise in the 4-kHz range at an intensity of 110 dB sound pressure level for 72 h. They were assessed by brainstem auditory evoked potential (BAEP) and by distortion product otoacoustic emission (DPOAE) before and after exposure and after HOT at 2.0 absolute atmospheres for 1 h. The cochleae were then analyzed using scanning electron microscopy (SEM). There was a statistically significant difference in the signal-to-noise ratio of the DPOAE amplitudes for the 1- to 4-kHz frequencies, and the SEM findings revealed damaged outer hair cells (OHC) after exposure to noise, with recovery after HOT (p = 0.0159); no such recovery occurred for BAEP thresholds and amplitudes (p = 0.1593). The electrophysiological BAEP data did not demonstrate effectiveness of HOT against AAT damage. However, there was improvement in the anatomical pattern of damage detected by SEM, with a significant reduction in the number of injured cochlear OHCs, and in their functionality as detected by DPOAE.
Abstract:
Dopamine (DA) is a neuromodulator in the brainstem involved in the generation and modulation of autonomic and respiratory activities. Here we evaluated the effect of microinjection of DA into the cisterna magna (icm) or into the caudal nucleus tractus solitarii (cNTS) on baseline cardiovascular and respiratory parameters and on the cardiovascular and respiratory responses to chemoreflex activation in awake rats. Guide cannulas were implanted in the cisterna magna or cNTS, and the femoral artery and vein were catheterized. Respiratory frequency (f(R)) was measured by whole-body plethysmography. The chemoreflex was activated with KCN (iv) before and after microinjection of DA icm or into the cNTS bilaterally, while mean arterial pressure (MAP), heart rate (HR) and f(R) were recorded. Microinjection of DA icm (n = 13), but not into the cNTS (n = 8), produced a significant decrease in baseline MAP (-15 +/- 1 vs 1 +/- 1 mm Hg) and HR (-55 +/- 11 vs -11 +/- 17 bpm) in relation to control (saline with ascorbic acid, n = 9), but no significant changes in baseline f(R). Microinjection of DA icm or into the cNTS produced no significant changes in the pressor, bradycardic and tachypneic responses to chemoreflex activation. These data show that a) DA icm affects baseline cardiovascular regulation, but not baseline f(R) or the autonomic and respiratory components of the chemoreflex, and b) DA into the cNTS affects neither the autonomic activity to the cardiovascular system nor the autonomic and respiratory responses to chemoreflex activation. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
The role of GABA in the central processing of complex auditory signals is not fully understood. We have studied the involvement of GABA(A)-mediated inhibition in the processing of birdsong, a learned vocal communication signal requiring intact hearing for its development and maintenance. We focused on the caudomedial nidopallium (NCM), an area analogous to parts of the mammalian auditory cortex with selective responses to birdsong. We present evidence that GABA(A)-mediated inhibition plays a pronounced role in NCM's auditory processing of birdsong. Using immunocytochemistry, we show that approximately half of NCM's neurons are GABAergic. Whole-cell patch-clamp recordings in a slice preparation demonstrate that, at rest, spontaneously active GABAergic synapses inhibit excitatory inputs onto NCM neurons via GABA(A) receptors. Multi-electrode electrophysiological recordings in awake birds show that local blockade of GABA(A)-mediated inhibition in NCM markedly affects the temporal pattern of song-evoked responses in NCM without modifications in frequency tuning. Surprisingly, this blockade increases the phasic component and largely suppresses the tonic response component, reflecting dynamic relationships of inhibitory networks that could include disinhibition. Thus processing of learned natural communication sounds in songbirds, and possibly other vocal learners, may depend on complex interactions of inhibitory networks.
Abstract:
Functional MRI (fMRI) data often have a low signal-to-noise ratio (SNR) and are contaminated by strong interference from other physiological sources. A promising tool for extracting signals, even under low-SNR conditions, is blind source separation (BSS), or independent component analysis (ICA). BSS is based on the assumption that the detected signals are a mixture of a number of independent source signals that are linearly combined via an unknown mixing matrix. BSS seeks to determine the mixing matrix to recover the source signals based on principles of statistical independence. In most cases, extraction of all sources is unnecessary; instead, a priori information can be applied to extract only the signal of interest. Herein we propose an algorithm based on a variation of ICA, called Dependent Component Analysis (DCA), in which the signal of interest is extracted using a time delay obtained from an autocorrelation analysis. We applied this method to fMRI data, aiming to find the hemodynamic response that follows neuronal activation from auditory stimulation in human subjects. The method localized a significant signal modulation in cortical regions corresponding to the primary auditory cortex. The results obtained by DCA were also compared to those of the General Linear Model (GLM), which is the most widely used method for analyzing fMRI datasets.
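The linear mixing model, and the idea of separating sources with a time lag rather than full statistical independence, can be sketched as follows. This is not the authors' DCA implementation; it is an AMUSE-style second-order method on a synthetic two-source mixture, where the chosen lag stands in for the delay that DCA derives from the autocorrelation analysis:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
t = np.arange(n)

# Two hypothetical sources: a periodic "response-like" signal and interference
s1 = np.sin(2 * np.pi * t / 40)          # signal of interest, period 40 samples
s2 = rng.standard_normal(n)              # physiological-noise stand-in
S = np.vstack([s1, s2])

A = np.array([[0.8, 0.6], [0.4, 0.9]])   # mixing matrix (unknown in practice)
X = A @ S                                # observed mixtures

# Whiten the mixtures, then diagonalize a time-lagged covariance matrix
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
W_white = E @ np.diag(1 / np.sqrt(d)) @ E.T
Z = W_white @ X

tau = 40                                  # lag at the source's autocorrelation peak
C_tau = Z[:, :-tau] @ Z[:, tau:].T / (n - tau)
C_tau = (C_tau + C_tau.T) / 2             # symmetrize before eigendecomposition
_, V = np.linalg.eigh(C_tau)
Y = V.T @ Z                               # recovered sources (up to order/sign/scale)
```

Because the periodic source has a strong autocorrelation at the chosen lag while the noise does not, the lagged covariance has distinct eigenvalues and the rotation `V` separates the two components.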
Abstract:
The electrical stimulation generated by the Cochlear Implant (CI) may improve neural synchrony and hence contribute to the development of auditory skills in patients with Auditory Neuropathy/Auditory Dyssynchrony (AN/AD). Aim: Prospective cohort cross-sectional study to evaluate the auditory performance and the characteristics of the electrically evoked compound action potential (ECAP) in 18 children with AN/AD and cochlear implants. Material and methods: Auditory perception was evaluated with sound-field thresholds and speech perception tests. To evaluate the ECAP's characteristics, the threshold and amplitude of the neural response were measured at stimulation rates of 80 Hz and 35 Hz. Results: No statistically significant difference was found concerning the development of auditory skills. Differences in the ECAP's characteristics at the 80 and 35 Hz stimulation rates were also not statistically significant. Conclusion: The CI was an efficient resource for developing auditory skills in 94% of the AN/AD patients studied. The benefits to auditory perception and the possibility of measuring the ECAP showed that electrical stimulation could compensate for the neural dyssynchrony caused by AN/AD. However, a unique clinical procedure cannot be proposed at this point; therefore, a careful and complete evaluation of each AN/AD patient before recommending a cochlear implant is advised. Clinical Trials: NCT01023932
Abstract:
Spontaneous and tone-evoked changes in light reflectance were recorded from primary auditory cortex (A1) of anesthetized cats (barbiturate induction, ketamine maintenance). Spontaneous 0.1-Hz oscillations of reflectance of 540- and 690-nm light were recorded in quiet. Stimulation with tone pips evoked localized reflectance decreases at 540 nm in 3/10 cats. The distribution of patches activated by tones of different frequencies reflected the known tonotopic organization of auditory cortex. Stimulus-evoked reflectance changes at 690 nm were observed in 9/10 cats but lacked stimulus-dependent topography. In two experiments, stimulus-evoked optical signals at 540 nm were compared with multiunit responses to the same stimuli recorded at multiple sites. A significant correlation (P < 0.05) between magnitude of reflectance decrease and multiunit response strength was evident in only one of five stimulus conditions in each experiment. There was no significant correlation when data were pooled across all stimulus conditions in either experiment. In one experiment, the spatial distribution of activated patches, evident in records of spontaneous activity at 540 nm, was similar to that of patches activated by tonal stimuli. These results suggest that local cerebral blood volume changes reflect the gross tonotopic organization of A1 but are not restricted to the sites of spiking neurons.
Abstract:
The relation of automatic auditory discrimination, measured with MMN, to the type of stimulus has not been well established in the literature, despite its importance as an electrophysiological measure of central sound representation. In this study, MMN responses were elicited by pure-tone and speech stimuli in a binaural passive auditory oddball paradigm in a group of 8 normal young adult subjects at the same intensity level (75 dB SPL). The frequency difference in the pure-tone oddball was 100 Hz (standard = 1000 Hz; deviant = 1100 Hz; equal duration = 100 ms); in the speech oddball (standard /ba/; deviant /pa/; equal duration = 175 ms) the Portuguese phonemes were both plosive bilabials, in order to maintain a narrow frequency band. Differences between speech and pure-tone stimuli were found across electrode locations. Larger MMN amplitude, longer duration and longer latency to speech than to pure tones were verified at Cz and Fz, as well as significant differences in latency and amplitude between mastoids. The results suggest that speech may be processed differently from non-speech, and that this processing may occur at a later stage due to overlapping processes, since more neural resources are required for speech processing.
Abstract:
Auditory spatial functions are of crucial importance in everyday life. Determining the origin of sound sources in space plays a key role in a variety of tasks, including orienting attention and disentangling the complex acoustic patterns that reach our ears in noisy environments. Following brain damage, auditory spatial processing can be disrupted, resulting in severe handicaps. Complaints of patients with sound localization deficits include the inability to locate their crying child and being overloaded by sounds in crowded public places. Yet the brain bears a large capacity for reorganization following damage and/or learning. This phenomenon is referred to as plasticity and is believed to underlie post-lesional functional recovery as well as learning-induced improvement. The aim of this thesis was to investigate the organization and plasticity of different aspects of auditory spatial functions. Overall, we report the outcomes of three studies. In the study entitled "Learning-induced plasticity in auditory spatial representations" (Spierer et al., 2007b), we focused on the neurophysiological and behavioral changes induced by auditory spatial training in healthy subjects. We found that relatively brief auditory spatial discrimination training improves performance and modifies the cortical representation of the trained sound locations, suggesting that cortical auditory representations of space are dynamic and subject to rapid reorganization. In the same study, we tested the generalization and persistence of training effects over time, as these are two determining factors in the development of neurorehabilitative interventions. In "The path to success in auditory spatial discrimination" (Spierer et al., 2007c), we investigated the neurophysiological correlates of successful spatial discrimination and contributed to the modeling of the anatomo-functional organization of auditory spatial processing in healthy subjects.
We showed that discrimination accuracy depends on superior temporal plane (STP) activity in response to the first sound of a pair of stimuli. Our data support a model wherein refinement of spatial representations occurs within the STP, and interactions with parietal structures allow for transformations into the coordinate frames required for higher-order computations, including absolute localization of sound sources. In "Extinction of auditory stimuli in hemineglect: space versus ear" (Spierer et al., 2007a), we investigated auditory attentional deficits in brain-damaged patients. This work provides insight into the auditory neglect syndrome and its relation to neglect symptoms within the visual modality. Apart from contributing to a basic understanding of the cortical mechanisms underlying auditory spatial functions, the outcomes of these studies also contribute to the development of neurorehabilitation strategies, which are currently being tested in clinical populations.
Abstract:
Aim We report three cases of Landau-Kleffner syndrome (LKS) in children (two females, one male) in whom diagnosis was delayed because the sleep electroencephalography (EEG) was initially normal. Method Case histories, including EEG, positron emission tomography findings, and long-term outcome, were reviewed. Results Auditory agnosia occurred between the ages of 2 years and 3 years 6 months, after a period of normal language development. Initial awake and sleep EEG, recorded weeks to months after the onset of language regression, during a nap period in two cases and during a full night of sleep in the third case, was normal. Repeat EEG between 2 months and 2 years later showed epileptiform discharges during wakefulness that were strongly activated by sleep, with a pattern of continuous spike-waves during slow-wave sleep in two patients. Patients were diagnosed with LKS and treated with various antiepileptic regimens, including corticosteroids. One patient, in whom the EEG became normal on hydrocortisone, is making a significant recovery. The other two patients did not exhibit a sustained response to treatment and remained severely impaired. Interpretation Sleep EEG may be normal in the early phase of acquired auditory agnosia. EEG should be repeated frequently in individuals in whom a firm clinical diagnosis is made, to facilitate early treatment.
Abstract:
Neuroimaging studies analyzing neurophysiological signals are typically based on comparing averages of peri-stimulus epochs across experimental conditions. This approach can, however, be problematic in the case of high-level cognitive tasks, where response variability across trials is expected to be high, and in cases where subjects cannot be considered part of a group. The main goal of this thesis has been to address this issue by developing a novel approach for analyzing electroencephalography (EEG) responses at the single-trial level. This approach takes advantage of the spatial distribution of the electric field on the scalp (topography) and exploits repetitions across trials to quantify the degree of discrimination between experimental conditions through a classification scheme. In the first part of this thesis, I developed and validated this new method (Tzovara et al., 2012a,b). Its general applicability was demonstrated with three separate datasets, two in the visual modality and one in the auditory modality. This development then made it possible to target two new lines of research, one in basic and one in clinical neuroscience, which represent the second and third parts of this thesis respectively. In the second part of this thesis (Tzovara et al., 2012c), I employed the developed method to assess the timing of exploratory decision-making. Using single-trial topographic EEG activity during presentation of a choice's payoff, I could predict the subjects' subsequent decisions. This prediction was due to a topographic difference which appeared on average at ~516 ms after the presentation of the payoff and was subject-specific. These results exploit for the first time the temporal correlates of individual subjects' decisions and additionally show that the underlying neural generators start differentiating their responses already ~880 ms before the button press.
Finally, in the third part of this project, I focused on a clinical study with the goal of assessing the degree of intact neural function in comatose patients. Auditory EEG responses were assessed through a classical mismatch negativity paradigm during the very early phase of coma, which is currently under-investigated. By taking advantage of the decoding method developed in the first part of the thesis, I could quantify the degree of auditory discrimination at the single-patient level (Tzovara et al., in press). Our results showed for the first time that even patients who do not survive the coma can discriminate sounds at the neural level during the first hours after coma onset. Importantly, an improvement in auditory discrimination during the first 48 hours of coma was predictive of awakening and survival, with 100% positive predictive value.
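The single-trial topographic classification idea described in this thesis abstract — learning condition-specific scalp templates from repeated trials and decoding held-out trials from their topography alone — can be sketched as follows. This is a minimal template-correlation classifier on simulated data, not the thesis's actual decoder; the electrode count, trial count, and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n_elec, n_trials = 32, 100

# Hypothetical single-trial topographies (electrodes x trials) for two
# conditions: a fixed scalp map per condition plus trial-to-trial noise
map_a = rng.standard_normal(n_elec)
map_b = rng.standard_normal(n_elec)
trials_a = map_a[:, None] + 2.0 * rng.standard_normal((n_elec, n_trials))
trials_b = map_b[:, None] + 2.0 * rng.standard_normal((n_elec, n_trials))

# Learn mean topographic templates on the first half of the trials
train = slice(0, 50)
t_a = trials_a[:, train].mean(axis=1)
t_b = trials_b[:, train].mean(axis=1)

def decode(trial):
    """Assign a single trial to the condition whose template correlates best."""
    ca = np.corrcoef(trial, t_a)[0, 1]
    cb = np.corrcoef(trial, t_b)[0, 1]
    return 'A' if ca > cb else 'B'

# Decode the held-out half; accuracy above chance quantifies discrimination
preds_a = [decode(trials_a[:, i]) for i in range(50, n_trials)]
preds_b = [decode(trials_b[:, i]) for i in range(50, n_trials)]
accuracy = (preds_a.count('A') + preds_b.count('B')) / n_trials
```

Held-out decoding accuracy of this kind provides a per-subject (or per-patient) index of how well the two experimental conditions are discriminated at the neural level, which is the quantity tracked in the coma study.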