Abstract:
We review the mechanical origin of auditory-nerve excitation, focusing on comparisons of the magnitudes and phases of basilar-membrane (BM) vibrations and auditory-nerve fiber responses to tones at a basal site of the chinchilla cochlea with characteristic frequency ≈ 9 kHz, located 3.5 mm from the oval window. At this location, characteristic-frequency thresholds of fibers with high spontaneous activity correspond to BM displacements or velocities on the order of 1 nm or 50 μm/s. Over a wide range of stimulus frequencies, neural thresholds are determined not solely by BM displacement but by a function of both displacement and velocity. Near threshold, auditory-nerve responses to low-frequency tones are synchronous with peak BM velocity toward scala tympani, but at 80–90 and at 100–110 dB sound pressure level (re 20 μPa) the responses undergo two large phase shifts, each approaching 180°. These drastic phase changes have no counterparts in BM vibrations. Thus, although at threshold levels the encoding of BM vibrations into spike trains appears to involve only relatively minor signal transformations, the polarity of auditory-nerve responses does not conform to traditional views of how BM vibrations are transmitted to the inner hair cells. The response polarity at threshold levels, as well as the intensity-dependent phase changes, apparently reflect micromechanical interactions between the organ of Corti, the tectorial membrane, and the subtectorial fluid, and/or electrical and synaptic processes at the inner hair cells.
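As a quick consistency check on those two threshold figures (our own arithmetic, not part of the study): for sinusoidal motion at the ≈9-kHz characteristic frequency, a displacement amplitude of 1 nm implies a velocity amplitude of 2πf·d ≈ 57 μm/s, which agrees with the quoted ~50 μm/s. A minimal Python sketch:

import math

f = 9e3                    # characteristic frequency, Hz (from the abstract)
d = 1e-9                   # BM displacement amplitude at threshold, m (~1 nm)
v = 2 * math.pi * f * d    # velocity amplitude of a sinusoid, m/s
print(f"velocity amplitude: {v * 1e6:.1f} um/s")  # ~56.5 um/s, close to the quoted ~50 um/s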
Abstract:
In the mammalian cochlea, the basilar membrane's (BM) mechanical responses are amplified, and frequency tuning is sharpened, through active feedback from the electromotile outer hair cells (OHCs). To be effective, OHC feedback must be delivered to the correct region of the BM and introduced at the appropriate time in each cycle of BM displacement. To investigate when OHCs contribute to cochlear amplification, a laser-diode interferometer was used to measure tone-evoked BM displacements in the basal turn of the guinea pig cochlea. Measurements were made at multiple sites across the width of the BM, all of which are tuned to the same characteristic frequency (CF). In response to CF tones, the largest displacements occur in the OHC region and lead those measured beneath the outer pillar cells and adjacent to the spiral ligament by about 90° in phase. Postmortem, responses beneath the OHCs are reduced by up to 65 dB, and all regions across the width of the BM move in unison. We suggest that OHCs amplify BM responses to CF tones when the BM is moving at maximum velocity. In regions of the BM where OHCs contribute to its motion, the responses are compressive and nonlinear. We measured the distribution of nonlinear, compressive vibrations along the length of the BM in response to a single-frequency tone and estimated that OHC amplification is restricted to a 1.25- to 1.40-mm length of BM centered on the CF place.
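The proposal that OHC force is injected when the BM is at maximum velocity, i.e. 90° ahead of displacement, is often illustrated with a negative-damping oscillator model. The sketch below is a generic textbook caricature of ours, not the paper's model; the resonator frequency, quality factor, and feedback gain are arbitrary choices:

import numpy as np

m, k = 1.0, (2 * np.pi * 1000.0) ** 2     # arbitrary 1-kHz resonator
c = 2 * np.pi * 1000.0 * m / 5.0           # passive damping (Q = 5)

def amplitude(w, ohc_gain=0.0):
    """|x| of m x'' + (c - g) x' + k x = cos(w t); the OHC-like term g x'
    pushes in phase with velocity, i.e. 90 deg ahead of displacement."""
    c_eff = c - ohc_gain
    return 1.0 / np.sqrt((k - m * w**2) ** 2 + (c_eff * w) ** 2)

w0 = np.sqrt(k / m)
passive = amplitude(w0)
active = amplitude(w0, ohc_gain=0.99 * c)  # near-complete damping cancellation
print(f"amplification at resonance: {20 * np.log10(active / passive):.0f} dB")  # ~40 dB

Because a force in phase with velocity subtracts directly from the damping term, its timing within the cycle is what makes it amplifying; the same force applied in phase with displacement would only detune the resonator.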
Abstract:
Mammalian hearing depends on the enhanced mechanical properties of the basilar membrane within the cochlear duct. The enhancement arises through the action of outer hair cells, which act as force generators within the organ of Corti. Simple considerations show that the underlying mechanism of somatic motility depends on local area changes within the lateral membrane of the cell. The molecular basis for this phenomenon is a dense array of particles that are inserted into the basolateral membrane and that are capable of sensing the membrane potential field. We show here that outer hair cells selectively take up fructose, at rates high enough to suggest that a sugar transporter may be part of the motor complex. The relation of these findings to a recent candidate for the molecular motor is also discussed.
Abstract:
As in other excitable cells, the ion channels of sensory receptors produce electrical signals that constitute the cellular response to stimulation. In photoreceptors, olfactory neurons, and some gustatory receptors, these channels essentially report the results of antecedent events in a cascade of chemical reactions. The mechanoelectrical transduction channels of hair cells, by contrast, are coupled directly to the stimulus. As a consequence, the mechanical properties of these channels shape our hearing process from the outset of transduction. Channel gating introduces nonlinearities prominent enough to be measured and even heard. Channels provide a feedback signal that controls the transducer's adaptation to large stimuli. Finally, transduction channels participate in an amplificatory process that sensitizes and sharpens hearing.
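The gating nonlinearity mentioned here is commonly described by a two-state gating-spring model in which the channel's open probability is a Boltzmann function of hair-bundle displacement. The sketch below uses that standard form; the gating force and the displacement range are illustrative values of our choosing, not measurements from the article:

import numpy as np

kT = 4.1e-21   # thermal energy at ~300 K, J
z = 0.7e-12    # gating force per channel, N (illustrative value)
x0 = 0.0       # displacement at half-activation, m

def p_open(x):
    """Two-state Boltzmann: open probability vs hair-bundle displacement."""
    return 1.0 / (1.0 + np.exp(-z * (x - x0) / kT))

x = np.linspace(-50e-9, 50e-9, 5)  # bundle displacements, m
for xi, p in zip(x, p_open(x)):
    print(f"x = {xi * 1e9:6.1f} nm -> p_open = {p:.3f}")

The sigmoid's steep central region is where gating contributes a nonlinear "gating compliance" to bundle mechanics, which is one way such channel nonlinearities become measurable.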
Abstract:
The anatomical and biophysical specializations of octopus cells allow them to detect the coincident firing of groups of auditory nerve fibers and to convey the precise timing of that coincidence to their targets. Octopus cells occupy a sharply defined region of the most caudal and dorsal part of the mammalian ventral cochlear nucleus. The dendrites of octopus cells cross the bundle of auditory nerve fibers just proximal to where the fibers leave the ventral and enter the dorsal cochlear nucleus, each octopus cell spanning about one-third of the tonotopic array. Octopus cells are excited by auditory nerve fibers through the activation of rapid, calcium-permeable, α-amino-3-hydroxy-5-methyl-4-isoxazole-propionate receptors. Synaptic responses are shaped by the unusual biophysical characteristics of octopus cells. Octopus cells have very low input resistances (about 7 MΩ) and short time constants (about 200 μsec) as a consequence of the activation at rest of a hyperpolarization-activated mixed-cation conductance and a low-threshold, depolarization-activated potassium conductance. The low input resistance causes rapid synaptic currents to generate rapid and small synaptic potentials. Summation of small synaptic potentials from many fibers is required to bring an octopus cell to threshold. Not only does the low input resistance make individual excitatory postsynaptic potentials so brief that they must be generated within 1 msec to sum, but the voltage-sensitive conductances of octopus cells also prevent firing if the activation of auditory nerve inputs is not sufficiently synchronous and depolarization is not sufficiently rapid. In vivo in cats, octopus cells can fire rapidly and respond with exceptionally well-timed action potentials to periodic, broadband sounds such as clicks. Thus, both the anatomical and the biophysical specializations of octopus cells make them detectors of the coincident firing of their auditory nerve fiber inputs.
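To see why a ~200-μsec membrane time constant enforces the ~1-msec coincidence window described above, consider a toy leaky integrator (our illustration, not a biophysical octopus-cell model; the EPSP size, input count, and spike times are arbitrary):

import numpy as np

tau = 200e-6                 # membrane time constant, s (from the abstract)
dt = 5e-6
t = np.arange(0.0, 3e-3, dt)

def peak_v(spike_times, epsp=1.0):
    """Leaky integration of brief synaptic inputs; returns the peak potential."""
    v = np.zeros_like(t)
    inputs = np.zeros_like(t)
    for ts in spike_times:
        inputs[int(ts / dt)] += epsp
    for i in range(1, len(t)):
        v[i] = v[i - 1] + dt * (-v[i - 1] / tau) + inputs[i]
    return v.max()

synchronous = [1.0e-3] * 10                          # 10 inputs in the same 5-us bin
dispersed = list(np.linspace(0.5e-3, 2.5e-3, 10))    # same 10 inputs spread over 2 ms
print(f"peak (synchronous): {peak_v(synchronous):.2f}")
print(f"peak (dispersed):   {peak_v(dispersed):.2f}")  # far smaller: EPSPs decay before summing

The same ten inputs reach a several-fold higher peak when synchronous, because dispersed EPSPs decay within a few hundred microseconds, before the next one arrives.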
Abstract:
At the level of the cochlear nucleus (CN), the auditory pathway divides into several parallel circuits, each of which provides a different representation of the acoustic signal. Here, the representation of the power spectrum of an acoustic signal is analyzed for two types of CN principal cells—chopper neurons of the ventral CN and type IV neurons of the dorsal CN. The analysis is based on a weighting-function model that relates the discharge rate of a neuron to first- and second-order transformations of the power spectrum. In chopper neurons, the transformation of spectral level into rate is a linear (i.e., first-order) or nearly linear function. This transformation is a predominantly excitatory process involving multiple frequency components, centered in a narrow frequency range about best frequency, that usually are processed independently of each other. In contrast, type IV neurons encode spectral information linearly only near threshold. At higher stimulus levels, these neurons are strongly inhibited by spectral notches, a behavior that cannot be explained by first- or second-order transformations of level. Type IV weighting functions reveal complex excitatory and inhibitory interactions that involve frequency components spanning a wider range than that seen in choppers. These findings suggest that chopper and type IV neurons form parallel pathways of spectral information transmission that are governed by two different mechanisms. Whereas choppers use a predominantly linear mechanism to transmit tonotopic representations of spectra, type IV neurons use highly nonlinear processes to signal the presence of wide-band spectral features.
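The weighting-function model referred to above amounts to a truncated power-series expansion of discharge rate in the spectral levels. The sketch below is our generic rendering of such a first-plus-second-order model; the band count and all weights are made up for illustration, not fitted values from the study:

import numpy as np

rng = np.random.default_rng(0)
n = 8                                 # number of frequency bands
S = rng.normal(size=n)                # spectral levels (dB re reference), one per band

r0 = 50.0                             # baseline rate, spikes/s
w1 = rng.normal(scale=5.0, size=n)          # first-order (linear) weights
w2 = rng.normal(scale=0.5, size=(n, n))     # second-order interaction weights

# rate = r0 + sum_i w1[i] S[i] + sum_ij w2[i,j] S[i] S[j]
rate = r0 + w1 @ S + S @ w2 @ S
print(f"predicted discharge rate: {rate:.1f} spikes/s")

In these terms, a chopper-like cell is one whose w2 is negligible, while a type IV cell is one whose notch responses cannot be captured even with the full w2 term.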
Abstract:
Both mammals and birds use the interaural time difference (ITD) to localize sound in the horizontal plane. When the signal consists of a narrow frequency band, they may localize either real or phantom sound sources; this ambiguity does not occur with broadband signals. A plot of impulse rates or of the amplitude of excitatory postsynaptic potentials against ITD (an ITD curve) consists of peaks and troughs. In the external nucleus (ICX) of the owl's inferior colliculus, ITD curves show multiple peaks when the signal is narrow-band, such as a tone. Of these peaks, one occurs at ITDi, which is independent of frequency, and the others at ITDi ± T, where T is the tonal period. When the signal is broadband, the ITD curve of the same neuron shows a large peak (the main peak) at ITDi and small or no peaks (side peaks) at ITDi ± T. ITD curves for postsynaptic potentials indicate that ICX neurons integrate the results of binaural cross-correlation in different frequency bands. However, the difference between the main and side peaks is small. ICX neurons further enhance this difference in the process of converting membrane potentials to impulse rates. Inhibition also appears to augment the difference between the main and side peaks.
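The main peak at ITDi and the side peaks at ITDi ± T fall out of binaural cross-correlation itself: a tone's correlation function is periodic in the tonal period, whereas a broadband signal yields a single dominant peak. A minimal sketch of ours (the sample rate, tone frequency, and imposed delay are arbitrary):

import numpy as np

fs = 100_000                          # sample rate, Hz
t = np.arange(0, 0.05, 1 / fs)
itd_samp = 20                         # imposed interaural delay: 20 samples = 200 us
lags = np.arange(-150, 151)           # tested delays, +-1.5 ms

def itd_curve(sig):
    """Correlation between the two ears' signals as a function of tested delay."""
    right = np.roll(sig, itd_samp)    # right ear receives the signal 200 us later
    c = np.array([np.dot(sig, np.roll(right, -lag)) for lag in lags])
    return c / c.max()

rng = np.random.default_rng(1)
tone = np.sin(2 * np.pi * 1000 * t)   # narrow-band signal: 1-kHz tone, period T = 1 ms
noise = rng.standard_normal(t.size)   # broadband signal

true = np.flatnonzero(lags == itd_samp)[0]
side = np.flatnonzero(lags == itd_samp + 100)[0]  # one tonal period (T) away
for name, sig in (("tone ", tone), ("noise", noise)):
    c = itd_curve(sig)
    print(f"{name}: corr at true ITD {c[true]:.2f}, at true ITD + T {c[side]:.2f}")

For the tone, the correlation one period away ties the true peak (a phantom source); for the noise, it collapses toward zero, which is the across-frequency disambiguation the abstract attributes to ICX integration.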
Abstract:
The auditory system of monkeys includes a large number of interconnected subcortical nuclei and cortical areas. At subcortical levels, the structural components of the auditory system of monkeys resemble those of nonprimates, but the organization at cortical levels is different. In monkeys, the ventral nucleus of the medial geniculate complex projects in parallel to a core of three primary-like auditory areas, AI, R, and RT, constituting the first stage of cortical processing. These areas interconnect and project to the homotopic and other locations in the opposite cerebral hemisphere and to a surrounding array of eight proposed belt areas as a second stage of cortical processing. The belt areas in turn project in overlapping patterns to a lateral parabelt region with at least rostral and caudal subdivisions as a third stage of cortical processing. The divisions of the parabelt distribute to adjoining auditory and multimodal regions of the temporal lobe and to four functionally distinct regions of the frontal lobe. Histochemically, chimpanzees and humans have an auditory core that closely resembles that of monkeys. The challenge for future researchers is to understand how this complex system in monkeys analyzes and utilizes auditory information.
Abstract:
The functional specialization and hierarchical organization of multiple areas in rhesus monkey auditory cortex were examined with various types of complex sounds. Neurons in the lateral belt areas of the superior temporal gyrus were tuned to the best center frequency and bandwidth of band-passed noise bursts. They were also selective for the rate and direction of linear frequency-modulated sweeps. Many neurons showed a preference for a limited number of species-specific vocalizations (“monkey calls”). These response selectivities can be explained by nonlinear spectral and temporal integration mechanisms. In a separate series of experiments, monkey calls were presented at different spatial locations, and the tuning of lateral belt neurons to monkey calls and spatial location was determined. Of the three belt areas, the anterolateral area shows the highest degree of specificity for monkey calls, whereas neurons in the caudolateral area display the greatest spatial selectivity. We conclude that the cortical auditory system of primates is divided into at least two processing streams: a spatial stream that originates in the caudal part of the superior temporal gyrus and projects to the parietal cortex, and a pattern or object stream originating in the more anterior portions of the lateral belt. A similar division of labor can be seen in human auditory cortex using functional neuroimaging.
Abstract:
Peripheral auditory neurons are tuned to single frequencies of sound. In the central auditory system, excitatory (or facilitatory) and inhibitory neural interactions take place at multiple levels and produce neurons with sharp, level-tolerant frequency-tuning curves; neurons tuned to parameters other than frequency; cochleotopic (frequency) maps that differ from the peripheral cochleotopic map; and computational maps. The mechanisms that create the response properties of these neurons have been attributed solely to the divergent and convergent projections of neurons in the ascending auditory system. Recent research on the corticofugal (descending) auditory system, however, indicates that the corticofugal system adjusts and improves auditory signal processing by modulating neural responses and maps. Corticofugal function comprises at least the following subfunctions. (i) Egocentric selection for short-term modulation of auditory signal processing according to auditory experience. Egocentric selection, based on focused positive feedback associated with widespread lateral inhibition, is mediated by the cortical neural network working together with the corticofugal system. (ii) Reorganization for long-term modulation of the processing of behaviorally relevant auditory signals. Reorganization is based on egocentric selection working together with nonauditory systems. (iii) Gain control based on overall excitatory, facilitatory, or inhibitory corticofugal modulation. Egocentric selection can be viewed as selective gain control. (iv) Shaping (or even creation) of the response properties of neurons. The filter properties of neurons in the frequency, amplitude, time, and spatial domains can be sharpened by the corticofugal system; sharpening of tuning is one function of egocentric selection.
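The "focused positive feedback associated with widespread lateral inhibition" underlying egocentric selection can be caricatured as a difference-of-Gaussians gain profile along the tonotopic axis. Everything in the sketch below (the profile shapes, widths, and amplitudes) is an illustrative assumption of ours, not a fitted description of corticofugal modulation:

import numpy as np

bf = np.linspace(0, 10, 201)   # tonotopic axis, arbitrary frequency units
matched = 5.0                  # best frequency of the activated cortical neuron

def corticofugal_gain(bf, matched, amp_e=0.5, sig_e=0.3, amp_i=0.2, sig_i=2.0):
    """Mexican-hat modulation: focused facilitation at the matched BF plus
    widespread lateral inhibition around it (difference of Gaussians)."""
    e = amp_e * np.exp(-((bf - matched) / sig_e) ** 2 / 2)
    i = amp_i * np.exp(-((bf - matched) / sig_i) ** 2 / 2)
    return 1.0 + e - i

g = corticofugal_gain(bf, matched)
print(f"gain at matched BF:   {g[bf.searchsorted(matched)]:.2f}")  # > 1: facilitation
print(f"gain at distant BF:   {g[bf.searchsorted(7.0)]:.2f}")      # < 1: lateral inhibition

Applying such a profile multiplicatively to a subcortical frequency map sharpens tuning at the matched frequency while suppressing neighbors, which is the sense in which egocentric selection acts as selective gain control.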
Abstract:
One of the fascinating properties of the central nervous system is its ability to learn: the ability to alter its functional properties adaptively as a consequence of the interactions of an animal with the environment. The auditory localization pathway provides an opportunity to observe such adaptive changes and to study the cellular mechanisms that underlie them. The midbrain localization pathway creates a multimodal map of space that represents the nervous system's associations of auditory cues with locations in visual space. Various manipulations of auditory or visual experience, especially during early life, that change the relationship between auditory cues and locations in space lead to adaptive changes in auditory localization behavior and to corresponding changes in the functional and anatomical properties of this pathway. Traces of this early learning persist into adulthood, enabling adults to reacquire patterns of connectivity that were learned initially during the juvenile period.
Abstract:
Sound localization relies on the neural processing of monaural and binaural spatial cues that arise from the way sounds interact with the head and external ears. Neurophysiological studies of animals raised with abnormal sensory inputs show that the map of auditory space in the superior colliculus is shaped during development by both auditory and visual experience. An example of this plasticity is provided by monaural occlusion during infancy, which leads to compensatory changes in auditory spatial tuning that tend to preserve the alignment between the neural representations of visual and auditory space. Adaptive changes also take place in sound localization behavior, as demonstrated by the fact that ferrets raised and tested with one ear plugged learn to localize as accurately as control animals. In both cases, these adjustments may involve greater use of monaural spectral cues provided by the other ear. Although plasticity in the auditory space map seems to be restricted to development, adult ferrets show some recovery of sound localization behavior after long-term monaural occlusion. The capacity for behavioral adaptation is, however, task dependent, because auditory spatial acuity and binaural unmasking (a measure of the spatial contribution to the “cocktail party effect”) are permanently impaired by chronically plugging one ear, whether in infancy or, especially, in adulthood. Experience-induced plasticity allows the neural circuitry underlying sound localization to be customized to individual characteristics, such as the size and shape of the head and ears, and to compensate for natural conductive hearing losses, including those associated with middle ear disease in infancy.
Abstract:
The patterns of cortico-cortical and cortico-thalamic connections of auditory cortical areas in the rhesus monkey have led to the hypothesis that acoustic information is processed in series and in parallel in the primate auditory cortex. Recent physiological experiments in the behaving monkey indicate that the response properties of neurons in different cortical areas are both functionally distinct from each other, which is indicative of parallel processing, and functionally similar to each other, which is indicative of serial processing. Thus, auditory cortical processing may be similar to the serial and parallel “what” and “where” processing by the primate visual cortex. If “where” information is serially processed in the primate auditory cortex, neurons in cortical areas along this pathway should have progressively better spatial tuning properties. This prediction is supported by recent experiments showing that neurons in the caudomedial field have better spatial tuning properties than neurons in the primary auditory cortex. Neurons in the caudomedial field are also better than primary auditory cortex neurons at predicting sound localization ability across different stimulus frequencies and bandwidths, in both azimuth and elevation. These data support the hypothesis that the primate auditory cortex processes acoustic information in a serial and parallel manner and suggest that this may be a general cortical mechanism for sensory perception.
Abstract:
Bird song, like human speech, is a learned vocal behavior that requires auditory feedback. Both as juveniles, while they learn to sing, and as adults, songbirds use auditory feedback to compare their own vocalizations with an internal model of a target song. Here we describe experiments that explore a role for the songbird anterior forebrain pathway (AFP), a basal ganglia-forebrain circuit, in evaluating song feedback and modifying vocal output. First, neural recordings in anesthetized, juvenile birds show that single AFP neurons are specialized to process the song stimuli that are compared during sensorimotor learning. AFP neurons are tuned to both the bird's own song and the tutor song, even when these stimuli are manipulated to be very different from each other. Second, behavioral experiments in adult birds demonstrate that lesions to the AFP block the deterioration of song that normally follows deafening. This observation suggests that deafening results in an instructive signal, indicating a mismatch between feedback and the internal song model, and that the AFP is involved in generating or transmitting this instructive signal. Finally, neural recordings from behaving birds reveal robust singing-related activity in the AFP. This activity is likely to originate from premotor areas and could be modulated by auditory feedback of the bird's own voice. One possibility is that this activity represents an efference copy, predicting the sensory consequences of motor commands. Overall, these studies illustrate that sensory and motor processes are highly interrelated in this circuit devoted to vocal learning, as is true for brain areas involved in speech.
Abstract:
Understanding how the brain processes vocal communication sounds is one of the most challenging problems in neuroscience. Our understanding of how the cortex accomplishes this unique task should greatly facilitate our understanding of cortical mechanisms in general. Perception of species-specific communication sounds is an important aspect of the auditory behavior of many animal species and is crucial for their social interactions, reproductive success, and survival. The principles of neural representations of these behaviorally important sounds in the cerebral cortex have direct implications for the neural mechanisms underlying human speech perception. Our progress in this area has been relatively slow, compared with our understanding of other auditory functions such as echolocation and sound localization. This article discusses previous and current studies in this field, with emphasis on nonhuman primates, and proposes a conceptual platform to further our exploration of this frontier. It is argued that the prerequisite condition for understanding cortical mechanisms underlying communication sound perception and production is an appropriate animal model. Three issues are central to this work: (i) neural encoding of statistical structure of communication sounds, (ii) the role of behavioral relevance in shaping cortical representations, and (iii) sensory–motor interactions between vocal production and perception systems.