980 results for Auditory Brainstem Response
Abstract:
Hearing is one of the last sensory modalities to be subjected to genetic analysis in Drosophila melanogaster. We describe a behavioral assay for auditory function involving courtship among groups of males triggered by the pulse component of the courtship song. In a mutagenesis screen for mutations that disrupt the auditory response, we have recovered 15 mutations that either reduce or abolish this response. Mutant audiograms indicate that seven mutants reduced the amplitude of the response at all intensities. Another seven abolished the response altogether. The other mutant, 5L3, responded only at high sound intensities, indicating that the threshold was shifted in this mutant. Six mutants were characterized in greater detail. 5L3 had a general courtship defect; courtship of females by 5L3 males also was affected strongly. 5P1 males courted females normally but had reduced success at copulation. 5P1 and 5N18 showed a significant decrement in olfactory response, indicating that the defects in these mutations are not specific to the auditory pathway. Two other mutants, 5M8 and 5N30, produced amotile sperm although in 5N30 this phenotype was genetically separable from the auditory phenotype. Finally, a new adult circling behavior phenotype, the pirouette phenotype, associated with massive neurodegeneration in the brain, was discovered in two mutants, 5G10 and 5N18. This study provides the basis for a genetic and molecular dissection of auditory mechanosensation and auditory behavior.
Abstract:
Objective: To examine the relationship between the auditory brain-stem response (ABR) and its reconstructed waveforms following discrete wavelet transformation (DWT), and to comment on the resulting implications for ABR DWT time-frequency analysis. Methods: ABR waveforms were recorded from 120 normal hearing subjects at 90, 70, 50, 30, 10 and 0 dB nHL, decomposed using a 6 level discrete wavelet transformation (DWT), and reconstructed at individual wavelet scales (frequency ranges) A6, D6, D5 and D4. These waveforms were then compared for general correlations, and for patterns of change due to stimulus level, and subject age, gender and test ear. Results: The reconstructed ABR DWT waveforms showed 3 primary components: a large-amplitude waveform in the low-frequency A6 scale (0-266.6 Hz) with its single peak corresponding in latency with ABR waves III and V; a mid-amplitude waveform in the mid-frequency D6 scale (266.6-533.3 Hz) with its first 5 waves corresponding in latency to ABR waves I, III, V, VI and VII; and a small-amplitude, multiple-peaked waveform in the high-frequency D5 scale (533.3-1066.6 Hz) with its first 7 waves corresponding in latency to ABR waves I, II, III, IV, V, VI and VII. Comparisons between ABR waves I, III and V and their corresponding reconstructed ABR DWT waves showed strong correlations and similar, reliable, and statistically robust changes due to stimulus level and subject age, gender and test ear groupings. Limiting these findings, however, was the unexplained absence of a small number (2%, or 117/6720) of reconstructed ABR DWT waves, despite their corresponding ABR waves being present. Conclusions: Reconstructed ABR DWT waveforms can be used as valid time-frequency representations of the normal ABR, but with some limitations. In particular, the unexplained absence of a small number of reconstructed ABR DWT waves in some subjects, probably resulting from the shift variance inherent to the DWT process, needs to be addressed.
Significance: This is the first report of the relationship between the ABR and its reconstructed ABR DWT waveforms in a large normative sample. (C) 2004 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
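The per-scale reconstruction procedure this abstract describes (decompose the waveform, keep one wavelet band, zero the rest, invert) can be sketched in code. The example below is a minimal, self-contained illustration using a Haar wavelet in pure NumPy; the study's actual wavelet family, software, and sampling parameters are not stated here, so every choice in this sketch is an assumption made for simplicity.

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def haar_dwt(x, levels):
    """Haar DWT of x (length must be divisible by 2**levels).
    Returns coefficients ordered [A_L, D_L, D_{L-1}, ..., D_1]."""
    details = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / SQRT2  # detail (high-pass) band
        a = (a[0::2] + a[1::2]) / SQRT2  # approximation (low-pass) band
        details.append(d)
    return [a] + details[::-1]

def haar_idwt(coeffs):
    """Inverse Haar DWT: interleave approximation and detail bands
    back up to full resolution."""
    a = coeffs[0]
    for d in coeffs[1:]:
        out = np.empty(2 * a.size)
        out[0::2] = (a + d) / SQRT2
        out[1::2] = (a - d) / SQRT2
        a = out
    return a

def reconstruct_scale(coeffs, keep):
    """Reconstruct the waveform carried by a single wavelet scale:
    index 0 is the coarse approximation (A6-like band), indices
    1..levels are the detail bands (D6-like down to D1-like)."""
    zeroed = [c if i == keep else np.zeros_like(c)
              for i, c in enumerate(coeffs)]
    return haar_idwt(zeroed)

# Toy 64-sample "waveform" decomposed over 6 levels, as in the study.
waveform = np.sin(np.linspace(0, 6 * np.pi, 64))
coeffs = haar_dwt(waveform, 6)
coarse_band = reconstruct_scale(coeffs, 0)   # A6-like component
detail_band = reconstruct_scale(coeffs, 1)   # D6-like component
```

Because the inverse transform is linear, the per-scale reconstructions sum back exactly to the original waveform, which is what justifies reading them as a time-frequency decomposition of the ABR.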
Abstract:
This study used magnetoencephalography (MEG) to examine the dynamic patterns of neural activity underlying the auditory steady-state response. We examined the continuous time-series of responses to a 32-Hz amplitude modulation. Fluctuations in the amplitude of the evoked response were found to be mediated by non-linear interactions with oscillatory processes both at the same source, in the alpha and beta frequency bands, and in the opposite hemisphere. © 2005 Elsevier Ireland Ltd. All rights reserved.
Abstract:
The two electrophysiological tests currently favoured in the clinical measurement of hearing threshold are the brainstem auditory evoked potential (BAEP) and the slow vertex response (SVR). However, both tests possess disadvantages. The BAEP is the test of choice in younger patients as it is stable at all levels of arousal, but little information has been obtained to date at a range of frequencies. The SVR is frequency specific but is unreliable in certain adult subjects and is unstable during sleep or in young children. These deficiencies have prompted research into a third group of potentials, the middle latency response (MLR) and the 40 Hz response. This research has compared the SVR and 40 Hz response in waking adults and reports that the 40 Hz test can provide a viable alternative to the SVR provided that a high degree of subject relaxation is ensured. A second study examined the morphology of the MLR and 40 Hz response during sleep. This work suggested that these potentials are markedly different during sleep and that methodological factors have been responsible for masking these changes in previous studies. The clinical possibilities of tone-pip BAEPs were then examined, as these components proved to be the only stable responses present in sleep. It was found that threshold estimates to 500 Hz, 1000 Hz and 4000 Hz stimuli could be made to within 15 dB SL in most cases. A final study looked more closely at methods of obtaining frequency-specific information in sleeping subjects. Threshold estimates were made using established BAEP parameters, and this was compared to a 40 Hz procedure which recorded a series of BAEPs over a 100 ms time sweep. Results indicated that the 40 Hz procedure was superior to existing techniques in estimating threshold to low-frequency stimuli.
This research has confirmed a role for the MLR and 40 Hz response as alternative measures of hearing capability in waking subjects and proposes that the 40 Hz technique is useful in measuring frequency-specific thresholds, although the responses recorded derive primarily from the brainstem.
Abstract:
Integrating information from multiple sources is a crucial function of the brain. Examples of such integration include combining stimuli of different modalities, such as visual and auditory; combining multiple stimuli of the same modality, such as two concurrent sounds; and integrating stimuli from the sensory organs (i.e. the ears) with stimuli delivered by brain-machine interfaces.
The overall aim of this body of work is to empirically examine stimulus integration in these three domains to inform our broader understanding of how and when the brain combines information from multiple sources.
First, I examine visually guided auditory learning, a problem with implications for the general question of how, in learning, the brain determines what lesson to learn (and what lessons not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a ‘guess and check’ heuristic in which visual feedback obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain’s reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6 degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.
My next line of research examines how electrical stimulation of the inferior colliculus influences the perception of sounds in a nonhuman primate. The central nucleus of the inferior colliculus is the major ascending relay of auditory information: nearly all auditory signals pass through it before reaching the forebrain. This makes the inferior colliculus the ideal structure for examining the format of the inputs to the forebrain and, by extension, the processing of auditory scenes that occurs in the brainstem, and therefore an attractive target for understanding stimulus integration in the ascending auditory pathway.
Moreover, understanding the relationship between the auditory selectivity of neurons and their contribution to perception is critical to the design of effective auditory brain prosthetics. These prosthetics seek to mimic natural activity patterns to achieve desired perceptual outcomes. We measured the contribution of inferior colliculus (IC) sites to perception using combined recording and electrical stimulation. Monkeys performed a frequency-based discrimination task, reporting whether a probe sound was higher or lower in frequency than a reference sound. Stimulation pulses were paired with the probe sound on 50% of trials (0.5-80 µA, 100-300 Hz, n=172 IC locations in 3 rhesus monkeys). Electrical stimulation tended to bias the animals’ judgments in a fashion that was coarsely but significantly correlated with the best frequency of the stimulation site in comparison to the reference frequency employed in the task. Although there was considerable variability in the effects of stimulation (including impairments in performance and shifts in performance away from the direction predicted based on the site’s response properties), the results indicate that stimulation of the IC can evoke percepts correlated with the frequency tuning properties of the IC. Consistent with the implications of recent human studies, the main avenue for improvement for the auditory midbrain implant suggested by our findings is to increase the number and spatial extent of electrodes, to increase the size of the region that can be electrically activated and provide a greater range of evoked percepts.
My next line of research employs a frequency-tagging approach to examine the extent to which multiple sound sources are combined (or segregated) in the nonhuman primate inferior colliculus. In the single-sound case, most inferior colliculus neurons respond and entrain to sounds in a very broad region of space, and many are entirely spatially insensitive, so it is unknown how the neurons will respond to a situation with more than one sound. I use multiple AM stimuli of different frequencies, which the inferior colliculus represents using a spike timing code. This allows me to measure spike timing in the inferior colliculus to determine which sound source is responsible for neural activity in an auditory scene containing multiple sounds. Using this approach, I find that the same neurons that are tuned to broad regions of space in the single sound condition become dramatically more selective in the dual sound condition, preferentially entraining spikes to stimuli from a smaller region of space. I will examine the possibility that there may be a conceptual linkage between this finding and the finding of receptive field shifts in the visual system.
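The frequency-tagging logic described above, attributing spikes to whichever AM stimulus they entrain to, can be sketched with the standard Goldberg-Brown vector-strength metric for phase locking. The code below is an illustrative sketch only, not the dissertation's analysis pipeline; the modulation frequencies, jitter, and spike trains are invented for the example.

```python
import numpy as np

def vector_strength(spike_times, mod_freq):
    """Goldberg-Brown vector strength: 1.0 means every spike falls at
    the same phase of the modulation cycle; values near 0 mean no
    entrainment to that modulation frequency."""
    phases = 2.0 * np.pi * mod_freq * np.asarray(spike_times, dtype=float)
    return float(np.abs(np.mean(np.exp(1j * phases))))

# Hypothetical dual-sound trial: a neuron fires spikes locked to a
# 40 Hz AM tag (with small timing jitter) while a competing source
# carries a 33 Hz tag.
rng = np.random.default_rng(1)
locked = np.arange(0, 2.0, 1 / 40.0) + rng.normal(0.0, 0.001, 80)

vs_40 = vector_strength(locked, 40.0)  # high: spikes entrain here
vs_33 = vector_strength(locked, 33.0)  # low: no locking to this tag
# The larger vector strength identifies which tagged source is
# driving the neuron's activity in the multi-sound scene.
```

Comparing vector strength across the tag frequencies of the competing sounds is one simple way to quantify which source a neuron's spike timing follows, which is the essence of the frequency-tagging approach.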
In chapter 5, I will comment on these findings more generally, compare them to existing theoretical models, and discuss what these results tell us about processing in the central nervous system in a multi-stimulus situation. My results suggest that the brain is flexible in its processing and can adapt its integration schema to fit the available cues and the demands of the task.
Abstract:
The relationship between neuronal acuity and behavioral performance was assessed in the barn owl (Tyto alba), a nocturnal raptor renowned for its ability to localize sounds and for the topographic representation of auditory space found in the midbrain. We measured discrimination of sound-source separation using a newly developed procedure involving the habituation and recovery of the pupillary dilation response (PDR). The smallest discriminable change of source location was found to be about two times finer in azimuth than in elevation. Recordings from neurons in its midbrain space map revealed that their spatial tuning, like the spatial discrimination behavior, was also better in azimuth than in elevation by a factor of about two. Because the PDR behavioral assay is mediated by the same circuitry whether discrimination is assessed in azimuth or in elevation, this difference in vertical and horizontal acuity is likely to reflect a true difference in sensory resolution, without additional confounding effects of differences in motor performance in the two dimensions. Our results, therefore, are consistent with the hypothesis that the acuity of the midbrain space map determines auditory spatial discrimination.
Abstract:
Kiwi are rare and strictly protected birds of iconic status in New Zealand. Yet, perhaps due to their unusual, nocturnal lifestyle, surprisingly little is known about their behaviour or physiology. In the present study, we exploited known correlations between morphology and physiology in the avian inner ear and brainstem to predict the frequency range of best hearing in the North Island brown kiwi. The mechanosensitive hair bundles of the sensory hair cells in the basilar papilla showed the typical change from tall bundles with few stereovilli to short bundles with many stereovilli along the apical-to-basal tonotopic axis. In contrast to most birds, however, the change was considerably less in the basal half of the epithelium. Dendritic lengths in the brainstem nucleus laminaris also showed the typical change along the tonotopic axis. However, as in the basilar papilla, the change was much less pronounced in the presumed high-frequency regions. Together, these morphological data suggest a fovea-like overrepresentation of a narrow high-frequency band in kiwi. Based on known correlations of hair-cell microanatomy and physiological responses in other birds, a specific prediction for the frequency representation along the basilar papilla of the kiwi was derived. The predicted overrepresentation of approximately 4-6 kHz matches potentially salient frequency bands of kiwi vocalisations and may thus be an adaptation to a nocturnal lifestyle in which auditory communication plays a dominant role.
Abstract:
Due to its three-dimensional folding pattern, the human neocortex poses a challenge for accurate co-registration of grouped functional brain imaging data. The present study addressed this problem by employing three-dimensional continuum-mechanical image-warping techniques to derive average anatomical representations for co-registration of functional magnetic resonance brain imaging data obtained from 10 male first-episode schizophrenia patients and 10 age-matched male healthy volunteers while they performed a version of the Tower of London task. This novel technique produced an equivalent representation of blood oxygenation level dependent (BOLD) response across hemispheres, cortical regions, and groups, respectively, when compared to intensity average co-registration, using a deformable Brodmann area atlas as anatomical reference. Somewhat closer association of Brodmann area boundaries with primary visual and auditory areas was evident using the gyral pattern average model. Statistically-thresholded BOLD cluster data confirmed predominantly bilateral prefrontal and parietal, right frontal and dorsolateral prefrontal, and left occipital activation in healthy subjects, while patients’ hemispheric dominance pattern was diminished or reversed, particularly decreasing cortical BOLD response with increasing task difficulty in the right superior temporal gyrus. Reduced regional gray matter thickness correlated with reduced left-hemispheric prefrontal/frontal and bilateral parietal BOLD activation in patients. This is the first study demonstrating that reduction of regional gray matter in first-episode schizophrenia patients is associated with impaired brain function when performing the Tower of London task, and supports previous findings of impaired executive attention and working memory in schizophrenia.
Abstract:
This study was designed to identify the neural networks underlying automatic auditory deviance detection in 10 healthy subjects using functional magnetic resonance imaging. We measured blood oxygenation level-dependent contrasts derived from the comparison of blocks of stimuli presented as a series of standard tones (50 ms duration) alone versus blocks that contained rare duration-deviant tones (100 ms) that were interspersed among a series of frequent standard tones while subjects were watching a silent movie. Possible effects of scanner noise were assessed by a “no tone” condition. In line with previous positron emission tomography and EEG source modeling studies, we found temporal lobe and prefrontal cortical activation that was associated with auditory duration mismatch processing. Data were also analyzed employing an event-related hemodynamic response model, which confirmed activation in response to duration-deviant tones bilaterally in the superior temporal gyrus and prefrontally in the right inferior and middle frontal gyri. In line with previous electrophysiological reports, mismatch activation of these brain regions was significantly correlated with age. These findings suggest a close relationship of the event-related hemodynamic response pattern with the corresponding electrophysiological activity underlying the event-related “mismatch negativity” potential, a putative measure of auditory sensory memory.
Abstract:
Naming an object entails a number of processing stages, including retrieval of a target lexical concept and encoding of its phonological word form. We investigated these stages using the picture-word interference task in an fMRI experiment. Participants named target pictures in the presence of auditorily presented semantically related, phonologically related, or unrelated distractor words or in isolation. We observed BOLD signal changes in left-hemisphere regions associated with lexical-conceptual and phonological processing, including the mid-to-posterior lateral temporal cortex. However, these BOLD responses manifested as signal reductions for all distractor conditions relative to naming alone. Compared with unrelated words, phonologically related distractors showed further signal reductions, whereas only the pars orbitalis of the left inferior frontal cortex showed a selective reduction in response in the semantic condition. We interpret these findings as indicating that the word forms of lexical competitors are phonologically encoded and that competition during lexical selection is reduced by phonologically related distractors. Since the extended nature of auditory presentation requires a large portion of a word to be presented before its meaning is accessed, we attribute the BOLD signal reductions observed for semantically related and unrelated words to lateral inhibition mechanisms engaged after target name selection has occurred, as has been proposed in some production models.
Abstract:
This thesis examines brain networks involved in auditory attention and auditory working memory using measures of task performance, brain activity, and neuroanatomical connectivity. Auditory orienting and maintenance of attention were compared with visual orienting and maintenance of attention, and top-down controlled attention was compared to bottom-up triggered attention in audition. Moreover, the effects of cognitive load on performance and brain activity were studied using an auditory working memory task. Corbetta and Shulman's (2002) model of visual attention suggests that what is known as the dorsal attention system (intraparietal sulcus/superior parietal lobule, IPS/SPL and frontal eye field, FEF) is involved in the control of top-down controlled attention, whereas what is known as the ventral attention system (temporo-parietal junction, TPJ and areas of the inferior/middle frontal gyrus, IFG/MFG) is involved in bottom-up triggered attention. The present results show that top-down controlled auditory attention also activates IPS/SPL and FEF. Furthermore, in audition, TPJ and IFG/MFG were activated not only by bottom-up triggered attention, but also by top-down controlled attention. In addition, the posterior cerebellum and thalamus were activated by top-down controlled attention shifts and the ventromedial prefrontal cortex (VMPFC) was activated by to-be-ignored, but attention-catching salient changes in auditory input streams. VMPFC may be involved in the evaluation of environmental events causing the bottom-up triggered engagement of attention. Auditory working memory activated a brain network that largely overlapped with the one activated by top-down controlled attention. The present results also provide further evidence of the role of the cerebellum in cognitive processing: During auditory working memory tasks, both activity in the posterior cerebellum (crus I/II) and reaction speed increased when the cognitive load increased.
Based on the present results and earlier theories on the role of the cerebellum in cognitive processing, the function of the posterior cerebellum in cognitive tasks may be related to the optimization of response speed.
Abstract:
Cognitive impairments of attention, memory and executive functions are a fundamental feature of the pathophysiology of schizophrenia. The neurophysiological and neurochemical changes in the auditory cortex are shown to underlie cognitive impairments in schizophrenia patients. The functional state of the neural substrate of auditory information processing can be objectively and non-invasively probed with auditory event-related potentials (ERPs) and event-related fields (ERFs). In the current work, we explored the neurochemical effect on the neural origins of auditory information processing in relation to schizophrenia. By means of ERPs/ERFs we aimed to determine how neural substrates of auditory information processing are modulated by antipsychotic medication in schizophrenia spectrum patients (Studies I, II) and by neuropharmacological challenges in healthy human subjects (Studies III, IV). First, with auditory ERPs we investigated the effects of olanzapine (Study I) and risperidone (Study II) in a group of patients with schizophrenia spectrum disorders. After 2 and 4 weeks of treatment, olanzapine had no significant effects on mismatch negativity (MMN) and P300, which have been suggested to reflect preattentive and attention-dependent information processing, respectively. After 2 weeks of treatment, risperidone had no significant effect on P300; however, risperidone reduced P200 amplitude. This latter effect of risperidone on neural resources responsible for P200 generation could be partly explained through the action of dopamine. Subsequently, we used simultaneous EEG/MEG to investigate the effects of memantine (Study III) and methylphenidate (Study IV) in healthy subjects. We found that memantine modulates the MMN response without changing other ERP components.
This could be interpreted as being due to the possible influence of memantine, through NMDA receptors, on the auditory change-detection mechanism, with processing of auditory stimuli remaining otherwise unchanged. Further, we found that methylphenidate does not modulate the MMN response. This finding could indicate no association between catecholaminergic activities and electrophysiological measures of preattentive auditory discrimination processes reflected in the MMN. However, methylphenidate decreases the P200 amplitudes. This could be interpreted as a modulation of auditory information processing reflected in P200 by dopaminergic and noradrenergic systems. Taken together, our set of studies indicates a complex pattern of neurochemical influences produced by the antipsychotic drugs in the neural substrate of auditory information processing in patients with schizophrenia spectrum disorders and by the pharmacological challenges in healthy subjects studied with ERPs and ERFs.
Abstract:
Crickets have two tympanal membranes on the tibiae of each foreleg. Among several field cricket species of the genus Gryllus (Gryllinae), the posterior tympanal membrane (PTM) is significantly larger than the anterior membrane (ATM). Laser Doppler vibrometric measurements have shown that the smaller ATM does not respond as much as the PTM to sound. Hence the PTM has been suggested to be the principal tympanal acoustic input to the auditory organ. In tree crickets (Oecanthinae), the ATM is slightly larger than the PTM. Both membranes are structurally complex, presenting a series of transverse folds on their surface, which are more pronounced on the ATM than on the PTM. The mechanical response of both membranes to acoustic stimulation was investigated using microscanning laser Doppler vibrometry. Only a small portion of the membrane surface deflects in response to sound. Both membranes exhibit similar frequency responses, and move out of phase with each other, producing compressions and rarefactions of the tracheal volume backing the tympanum. Therefore, unlike field crickets, tree crickets may have four instead of two functional tympanal membranes. This is interesting in the context of the outstanding question of the role of spiracular inputs in the auditory system of tree crickets.
Abstract:
Tactile sensation plays an important role in everyday life. While the somatosensory system has been studied extensively, the majority of information has come from studies using animal models. Recent development of high-resolution anatomical and functional imaging techniques has enabled the non-invasive study of human somatosensory cortex and thalamus. This thesis provides new insights into the functional organization of the human brain areas involved in tactile processing using magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI). The thesis also demonstrates certain optimizations of MEG and fMRI methods. Tactile digit stimulation elicited stimulus-specific responses in a number of brain areas. Contralateral activation was observed in somatosensory thalamus (Study II), primary somatosensory cortex (SI; I, III, IV), and the posterior auditory belt area (III). Bilateral activation was observed in secondary somatosensory cortex (SII; II, III, IV). Ipsilateral activation was found in the post-central gyrus (area 2 of SI cortex; IV). In addition, phasic deactivation was observed within ipsilateral SI cortex and bilateral primary motor cortex (IV). Detailed investigation of the tactile responses demonstrated that the arrangement of distal-proximal finger representations in area 3b of SI in humans is similar to that found in monkeys (I). An optimized MEG approach was sufficient to resolve such fine detail in functional organization. The SII region appeared to contain double representations for fingers and toes (II). The detection of activations in the SII region and thalamus improved at the individual and group levels when cardiac-gated fMRI was used (II). Better detection of body part representations at the individual level is an important improvement, because identification of individual representations is crucial for studying brain plasticity in somatosensory areas.
The posterior auditory belt area demonstrated responses to both auditory and tactile stimuli (III), implicating this area as a physiological substrate for the auditory-tactile interaction observed in earlier psychophysical studies. Comparison of different smoothing parameters (III) demonstrated that proper evaluation of co-activation should be based on individual subject analysis with minimal or no smoothing. Tactile input consistently influenced area 3b of the human ipsilateral SI cortex (IV). The observed phasic negative fMRI response is proposed to result from interhemispheric inhibition via trans-callosal connections. This thesis contributes to a growing body of human data suggesting that processing of tactile stimuli involves multiple brain areas, with different spatial patterns of cortical activation for different stimuli.