5 results for Sound Localization
at the National Center for Biotechnology Information - NCBI
Abstract:
Barn owls can localize a sound source using either the map of auditory space contained in the optic tectum or the auditory forebrain. The auditory thalamus, nucleus ovoidalis (N.Ov), is situated between these two auditory areas, and its inactivation precludes the use of the auditory forebrain for sound localization. We examined the sources of inputs to the N.Ov as well as their patterns of termination within the nucleus. We also examined the response of single neurons within the N.Ov to tonal stimuli and sound localization cues. Afferents to the N.Ov originated from a diffuse population of neurons located bilaterally within the lateral shell, core, and medial shell subdivisions of the central nucleus of the inferior colliculus. Additional afferent input originated from the ipsilateral ventral nucleus of the lateral lemniscus. No afferent input was provided to the N.Ov from the external nucleus of the inferior colliculus or the optic tectum. The N.Ov was tonotopically organized with high frequencies represented dorsally and low frequencies ventrally. Although neurons in the N.Ov responded to localization cues, there was no apparent topographic mapping of these cues within the nucleus, in contrast to the tectal pathway. However, nearly all possible types of binaural response to sound localization cues were represented. These findings suggest that in the thalamo-telencephalic auditory pathway, sound localization is subserved by a nontopographic representation of auditory space.
Abstract:
Computational maps are of central importance to a neuronal representation of the outside world. In a map, neighboring neurons respond to similar sensory features. A well-studied example is the computational map of interaural time differences (ITDs), which is essential to sound localization in a variety of species and allows resolution of ITDs of the order of 10 μs. Nevertheless, it is unclear how such an orderly representation of temporal features arises. We address this problem by modeling the ontogenetic development of an ITD map in the laminar nucleus of the barn owl. We show how the owl's ITD map can emerge from a combined action of homosynaptic spike-based Hebbian learning and its propagation along the presynaptic axon. In spike-based Hebbian learning, synaptic strengths are modified according to the timing of pre- and postsynaptic action potentials. In unspecific axonal learning, a synapse's modification gives rise to a factor that propagates along the presynaptic axon and affects the properties of synapses at neighboring neurons. Our results indicate that both Hebbian learning and its presynaptic propagation are necessary for map formation in the laminar nucleus, but the latter can be orders of magnitude weaker than the former. We argue that this algorithm is important for the formation of computational maps, in particular when time plays a key role.
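The two learning ingredients named in this abstract can be sketched in a few lines. The sketch below is purely illustrative: the parameter values (`A_PLUS`, `A_MINUS`, `TAU`, `PROP`) and the clipping of weights to [0, 1] are assumptions, not values from the modeling study. It captures only the qualitative mechanism, a spike-timing-dependent weight change at the active synapse plus a much weaker copy of that change propagated to the other synapses made by the same presynaptic axon.

```python
import math

# Illustrative constants -- not taken from the study.
A_PLUS, A_MINUS = 0.05, 0.055   # potentiation / depression amplitudes
TAU = 0.5e-3                    # STDP time constant in seconds
PROP = 1e-3                     # propagated fraction: orders of magnitude weaker

def stdp_delta(dt):
    """Weight change for a spike-time difference dt = t_post - t_pre (seconds)."""
    if dt >= 0:
        return A_PLUS * math.exp(-dt / TAU)    # pre before post: potentiate
    return -A_MINUS * math.exp(dt / TAU)       # post before pre: depress

def update_axon(weights, active, dt):
    """Apply STDP at the active synapse of one presynaptic axon and
    propagate a weak copy of the change to its other synapses."""
    dw = stdp_delta(dt)
    out = []
    for i, w in enumerate(weights):
        w += dw if i == active else PROP * dw
        out.append(min(max(w, 0.0), 1.0))      # keep weights bounded in [0, 1]
    return out

# Postsynaptic spike 0.2 ms after the presynaptic one, at synapse 1:
w = update_axon([0.5, 0.5, 0.5], 1, 0.2e-3)
```

Both terms push in the same direction, but the propagated term is scaled down by `PROP`, consistent with the abstract's point that presynaptic propagation can be orders of magnitude weaker than the local Hebbian change and still suffice for map formation.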
Abstract:
Sound localization relies on the neural processing of monaural and binaural spatial cues that arise from the way sounds interact with the head and external ears. Neurophysiological studies of animals raised with abnormal sensory inputs show that the map of auditory space in the superior colliculus is shaped during development by both auditory and visual experience. An example of this plasticity is provided by monaural occlusion during infancy, which leads to compensatory changes in auditory spatial tuning that tend to preserve the alignment between the neural representations of visual and auditory space. Adaptive changes also take place in sound localization behavior, as demonstrated by the fact that ferrets raised and tested with one ear plugged learn to localize as accurately as control animals. In both cases, these adjustments may involve greater use of monaural spectral cues provided by the other ear. Although plasticity in the auditory space map seems to be restricted to development, adult ferrets show some recovery of sound localization behavior after long-term monaural occlusion. The capacity for behavioral adaptation is, however, task dependent, because auditory spatial acuity and binaural unmasking (a measure of the spatial contribution to the “cocktail party effect”) are permanently impaired by chronically plugging one ear, in infancy and especially in adulthood. Experience-induced plasticity allows the neural circuitry underlying sound localization to be customized to individual characteristics, such as the size and shape of the head and ears, and to compensate for natural conductive hearing losses, including those associated with middle ear disease in infancy.
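As a concrete illustration of the binaural timing cue these abstracts refer to, an interaural time difference can be estimated from two ear signals by finding the lag that maximizes their cross-correlation. This is a minimal plain-Python sketch under stated assumptions (a 48 kHz sample rate and toy click signals), not a model of the neural circuitry discussed here.

```python
def estimate_itd(left, right, fs=48_000):
    """Estimate the interaural time difference (seconds) as the lag that
    maximizes the cross-correlation of the two ear signals. A positive
    value means the sound reached the left ear first."""
    n = len(left)
    best_lag, best = 0, float("-inf")
    for lag in range(-n + 1, n):
        # Correlation of left against right shifted by `lag` samples.
        s = sum(left[i] * right[i + lag] for i in range(n) if 0 <= i + lag < n)
        if s > best:
            best, best_lag = s, lag
    return best_lag / fs

# Toy check: a click that reaches the right ear 5 samples later.
left = [0.0] * 256
right = [0.0] * 256
left[100] = 1.0
right[105] = 1.0
itd = estimate_itd(left, right)   # about +104 microseconds
```

Note that at 48 kHz one sample spans about 21 μs, so resolving ITDs on the order of 10 μs (as the barn owl's laminar nucleus does, per the second abstract) would require a higher sampling rate or sub-sample interpolation of the correlation peak.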
Abstract:
The patterns of cortico-cortical and cortico-thalamic connections of auditory cortical areas in the rhesus monkey have led to the hypothesis that acoustic information is processed in series and in parallel in the primate auditory cortex. Recent physiological experiments in the behaving monkey indicate that the response properties of neurons in different cortical areas are both functionally distinct from each other, which is indicative of parallel processing, and functionally similar to each other, which is indicative of serial processing. Thus, auditory cortical processing may be similar to the serial and parallel “what” and “where” processing by the primate visual cortex. If “where” information is serially processed in the primate auditory cortex, neurons in cortical areas along this pathway should have progressively better spatial tuning properties. This prediction is supported by recent experiments that have shown that neurons in the caudomedial field have better spatial tuning properties than neurons in the primary auditory cortex. Neurons in the caudomedial field are also better than primary auditory cortex neurons at predicting the sound localization ability across different stimulus frequencies and bandwidths in both azimuth and elevation. These data support the hypothesis that the primate auditory cortex processes acoustic information in a serial and parallel manner and suggest that this may be a general cortical mechanism for sensory perception.
Abstract:
Understanding how the brain processes vocal communication sounds is one of the most challenging problems in neuroscience. Our understanding of how the cortex accomplishes this unique task should greatly facilitate our understanding of cortical mechanisms in general. Perception of species-specific communication sounds is an important aspect of the auditory behavior of many animal species and is crucial for their social interactions, reproductive success, and survival. The principles of neural representations of these behaviorally important sounds in the cerebral cortex have direct implications for the neural mechanisms underlying human speech perception. Our progress in this area has been relatively slow, compared with our understanding of other auditory functions such as echolocation and sound localization. This article discusses previous and current studies in this field, with emphasis on nonhuman primates, and proposes a conceptual platform to further our exploration of this frontier. It is argued that the prerequisite condition for understanding cortical mechanisms underlying communication sound perception and production is an appropriate animal model. Three issues are central to this work: (i) neural encoding of statistical structure of communication sounds, (ii) the role of behavioral relevance in shaping cortical representations, and (iii) sensory–motor interactions between vocal production and perception systems.