14 results for Auditory sentence processing
in the National Center for Biotechnology Information - NCBI
Abstract:
Magnetoencephalographic responses recorded from auditory cortex evoked by brief and rapidly successive stimuli differed between adults with poor vs. good reading abilities in four important ways. First, the response amplitude evoked by short-duration acoustic stimuli was stronger in the post-stimulus time range of 150–200 ms in poor readers than in normal readers. Second, response amplitudes to rapidly successive and brief stimuli that were identical or that differed significantly in frequency were substantially weaker in poor readers compared with controls, for interstimulus intervals of 100 or 200 ms, but not for an interstimulus interval of 500 ms. Third, this neurological deficit closely paralleled subjects’ ability to distinguish between and to reconstruct the order of presentation of those stimulus sequences. Fourth, the average distributed response coherence evoked by rapidly successive stimuli was significantly weaker in the β- and γ-band frequency ranges (20–60 Hz) in poor readers, compared with controls. These results provide direct electrophysiological evidence supporting the hypothesis that reading disabilities are correlated with the abnormal neural representation of brief and rapidly successive sensory inputs, manifested in this study at the entry level of the cortical auditory/aural speech representational system(s).
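The β/γ-band coherence measure in this abstract lends itself to a worked illustration. Below is a minimal sketch of computing spectral coherence between two sensor signals and averaging it over the 20–60 Hz band; the sampling rate, the synthetic signals, and all parameter values are assumptions for illustration, not the study's methods.

```python
# Sketch: band-limited coherence between two simulated MEG sensor
# signals, analogous to the 20-60 Hz response coherence contrasted
# between reader groups. All values are synthetic placeholders.
import numpy as np
from scipy.signal import coherence

fs = 600.0                        # sampling rate (Hz), assumed
t = np.arange(0, 2.0, 1.0 / fs)   # one 2 s epoch

rng = np.random.default_rng(0)
shared = np.sin(2 * np.pi * 40 * t)             # shared 40 Hz (gamma) component
x = shared + 0.5 * rng.standard_normal(t.size)  # sensor 1: signal + noise
y = shared + 0.5 * rng.standard_normal(t.size)  # sensor 2: signal + noise

f, Cxy = coherence(x, y, fs=fs, nperseg=256)

# Average coherence over the 20-60 Hz band reported in the abstract.
band = (f >= 20) & (f <= 60)
print(f"mean 20-60 Hz coherence: {Cxy[band].mean():.2f}")
```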
Abstract:
The patterns of cortico-cortical and cortico-thalamic connections of auditory cortical areas in the rhesus monkey have led to the hypothesis that acoustic information is processed in series and in parallel in the primate auditory cortex. Recent physiological experiments in the behaving monkey indicate that the response properties of neurons in different cortical areas are both functionally distinct from each other, which is indicative of parallel processing, and functionally similar to each other, which is indicative of serial processing. Thus, auditory cortical processing may be similar to the serial and parallel “what” and “where” processing by the primate visual cortex. If “where” information is serially processed in the primate auditory cortex, neurons in cortical areas along this pathway should have progressively better spatial tuning properties. This prediction is supported by recent experiments that have shown that neurons in the caudomedial field have better spatial tuning properties than neurons in the primary auditory cortex. Neurons in the caudomedial field are also better than primary auditory cortex neurons at predicting the sound localization ability across different stimulus frequencies and bandwidths in both azimuth and elevation. These data support the hypothesis that the primate auditory cortex processes acoustic information in a serial and parallel manner and suggest that this may be a general cortical mechanism for sensory perception.
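The claim that caudomedial neurons show "better spatial tuning" than primary auditory cortex neurons is typically operationalized as a narrower spatial tuning curve. The sketch below illustrates one such metric, the width of an azimuth tuning curve at half its maximum response; the Gaussian curves and their widths are illustrative assumptions, not recorded data.

```python
# Sketch: quantifying spatial tuning sharpness as tuning-curve width
# at half-maximum. The two Gaussians stand in for a broadly tuned
# primary (AI) neuron and a sharper caudomedial (CM) neuron.
import numpy as np

azimuth = np.linspace(-90, 90, 181)   # sound-source azimuth (degrees)

def tuning_curve(best_az, width):
    """Normalized Gaussian response profile over azimuth."""
    return np.exp(-((azimuth - best_az) / width) ** 2)

def half_max_width(curve):
    """Range of azimuths where the response exceeds half its peak."""
    above = azimuth[curve >= 0.5 * curve.max()]
    return above[-1] - above[0]

ai_curve = tuning_curve(best_az=20, width=40)   # broadly tuned AI neuron
cm_curve = tuning_curve(best_az=20, width=15)   # sharper CM neuron

print(f"AI half-max width: {half_max_width(ai_curve):.0f} deg")
print(f"CM half-max width: {half_max_width(cm_curve):.0f} deg")
```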
Abstract:
Recent studies of corticofugal modulation of auditory information processing indicate that cortical neurons mediate both a highly focused positive feedback to subcortical neurons “matched” in tuning to a particular acoustic parameter and a widespread lateral inhibition to “unmatched” subcortical neurons. This cortical function for the adjustment and improvement of subcortical information processing is called egocentric selection. Egocentric selection enhances the neural representation of frequently occurring signals in the central auditory system. For our present studies performed with the big brown bat (Eptesicus fuscus), we hypothesized that egocentric selection adjusts the frequency map of the inferior colliculus (IC) according to auditory experience based on associative learning. To test this hypothesis, we delivered acoustic stimuli paired with electric leg stimulation to the bat, because such paired stimuli allowed the animal to learn that the acoustic stimulus was behaviorally important and to make behavioral and neural adjustments based on the acquired importance of the acoustic stimulus. We found that acoustic stimulation alone evokes a change in the frequency map of the IC; that this change in the IC becomes greater when the acoustic stimulation is made behaviorally relevant by pairing it with electrical stimulation; that the collicular change is mediated by the corticofugal system; and that the IC itself can sustain the change evoked by the corticofugal system for some time. Our data thus support this hypothesis.
Abstract:
Cerebral organization during sentence processing in English and in American Sign Language (ASL) was characterized by employing functional magnetic resonance imaging (fMRI) at 4 T. Effects of deafness, age of language acquisition, and bilingualism were assessed by comparing results from (i) normally hearing, monolingual, native speakers of English, (ii) congenitally, genetically deaf, native signers of ASL who learned English late and through the visual modality, and (iii) normally hearing bilinguals who were native signers of ASL and speakers of English. All groups, hearing and deaf, processing their native language, English or ASL, displayed strong and repeated activation within classical language areas of the left hemisphere. Deaf subjects reading English did not display activation in these regions. These results suggest that the early acquisition of a natural language is important in the expression of the strong bias for these areas to mediate language, independently of the form of the language. In addition, native signers, hearing and deaf, displayed extensive activation of homologous areas within the right hemisphere, indicating that the specific processing requirements of the language also in part determine the organization of the language systems of the brain.
Abstract:
Peripheral auditory neurons are tuned to single frequencies of sound. In the central auditory system, excitatory (or facilitatory) and inhibitory neural interactions take place at multiple levels and produce neurons with sharp level-tolerant frequency-tuning curves, neurons tuned to parameters other than frequency, cochleotopic (frequency) maps, which are different from the peripheral cochleotopic map, and computational maps. These response properties have been considered to arise solely from the divergent and convergent projections of neurons in the ascending auditory system. The recent research on the corticofugal (descending) auditory system, however, indicates that the corticofugal system adjusts and improves auditory signal processing by modulating neural responses and maps. The corticofugal function consists of at least the following subfunctions. (i) Egocentric selection for short-term modulation of auditory signal processing according to auditory experience. Egocentric selection, based on focused positive feedback associated with widespread lateral inhibition, is mediated by the cortical neural network working together with the corticofugal system. (ii) Reorganization for long-term modulation of the processing of behaviorally relevant auditory signals. Reorganization is based on egocentric selection working together with nonauditory systems. (iii) Gain control based on overall excitatory, facilitatory, or inhibitory corticofugal modulation. Egocentric selection can be viewed as selective gain control. (iv) Shaping (or even creation) of response properties of neurons. Filter properties of neurons in the frequency, amplitude, time, and spatial domains can be sharpened by the corticofugal system. Sharpening of tuning is one of the functions of egocentric selection.
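The core computation in egocentric selection, focused facilitation of matched subcortical neurons with widespread lateral inhibition of unmatched ones, can be sketched as a difference-of-Gaussians gain profile over best frequency. The functional form and all parameter values below are illustrative assumptions, not a model from the paper.

```python
# Sketch: egocentric selection as a difference-of-Gaussians gain
# profile. Subcortical neurons "matched" in best frequency to the
# active cortical neurons are facilitated; "unmatched" neighbors
# receive widespread lateral inhibition. Parameters are illustrative.
import numpy as np

def corticofugal_gain(best_freqs_khz, matched_khz,
                      facil_width=1.0, inhib_width=5.0,
                      facil_amp=0.5, inhib_amp=0.2):
    """Multiplicative gain applied to subcortical responses."""
    d = best_freqs_khz - matched_khz
    facilitation = facil_amp * np.exp(-(d / facil_width) ** 2)  # focused
    inhibition = inhib_amp * np.exp(-(d / inhib_width) ** 2)    # widespread
    return 1.0 + facilitation - inhibition

bf = np.linspace(20, 80, 13)            # subcortical best frequencies (kHz)
gain = corticofugal_gain(bf, matched_khz=50.0)
for f, g in zip(bf, gain):
    tag = "matched" if f == 50.0 else ""
    print(f"{f:5.1f} kHz  gain {g:.2f}  {tag}")
```

The matched neuron ends up with gain above 1, nearby unmatched neurons fall below 1, and distant neurons are left essentially unchanged, which is the center-surround adjustment the abstract describes.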
Abstract:
Syntax denotes a rule system that allows one to predict the sequencing of communication signals. Despite its significance for both human speech processing and animal acoustic communication, the representation of syntactic structure in the mammalian brain has not been studied electrophysiologically at the single-unit level. In the search for a neuronal correlate for syntax, we used playback of natural and temporally destructured complex species-specific communication calls—so-called composites—while recording extracellularly from neurons in a physiologically well defined area (the FM–FM area) of the mustached bat’s auditory cortex. Even though this area is known to be involved in the processing of target distance information for echolocation, we found that units in the FM–FM area were highly responsive to composites. The finding that neuronal responses were strongly affected by manipulation in the time domain of the natural composite structure lends support to the hypothesis that syntax processing in mammals occurs at least at the level of the nonprimary auditory cortex.
Abstract:
The auditory system of monkeys includes a large number of interconnected subcortical nuclei and cortical areas. At subcortical levels, the structural components of the auditory system of monkeys resemble those of nonprimates, but the organization at cortical levels is different. In monkeys, the ventral nucleus of the medial geniculate complex projects in parallel to a core of three primary-like auditory areas, AI, R, and RT, constituting the first stage of cortical processing. These areas interconnect and project to the homotopic and other locations in the opposite cerebral hemisphere and to a surrounding array of eight proposed belt areas as a second stage of cortical processing. The belt areas in turn project in overlapping patterns to a lateral parabelt region with at least rostral and caudal subdivisions as a third stage of cortical processing. The divisions of the parabelt distribute to adjoining auditory and multimodal regions of the temporal lobe and to four functionally distinct regions of the frontal lobe. Histochemically, chimpanzees and humans have an auditory core that closely resembles that of monkeys. The challenge for future researchers is to understand how this complex system in monkeys analyzes and utilizes auditory information.
Abstract:
The functional specialization and hierarchical organization of multiple areas in rhesus monkey auditory cortex were examined with various types of complex sounds. Neurons in the lateral belt areas of the superior temporal gyrus were tuned to the best center frequency and bandwidth of band-passed noise bursts. They were also selective for the rate and direction of linear frequency modulated sweeps. Many neurons showed a preference for a limited number of species-specific vocalizations (“monkey calls”). These response selectivities can be explained by nonlinear spectral and temporal integration mechanisms. In a separate series of experiments, monkey calls were presented at different spatial locations, and the tuning of lateral belt neurons to monkey calls and spatial location was determined. Of the three belt areas the anterolateral area shows the highest degree of specificity for monkey calls, whereas neurons in the caudolateral area display the greatest spatial selectivity. We conclude that the cortical auditory system of primates is divided into at least two processing streams, a spatial stream that originates in the caudal part of the superior temporal gyrus and projects to the parietal cortex, and a pattern or object stream originating in the more anterior portions of the lateral belt. A similar division of labor can be seen in human auditory cortex by using functional neuroimaging.
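The two stimulus classes used in these experiments, band-passed noise bursts and linear FM sweeps, are straightforward to parameterize. The sketch below generates an example of each; the sampling rate, durations, and frequency values are illustrative assumptions rather than the published stimulus parameters.

```python
# Sketch: generating the two stimulus classes described above --
# band-passed noise bursts (varying center frequency and bandwidth)
# and linear FM sweeps (varying rate and direction).
import numpy as np
from scipy.signal import butter, sosfiltfilt, chirp

fs = 44100          # sampling rate (Hz), assumed
dur = 0.1           # 100 ms burst, assumed
t = np.arange(0, dur, 1.0 / fs)

def bandpassed_noise(center_hz, bandwidth_hz):
    """White noise band-pass filtered around a center frequency."""
    lo, hi = center_hz - bandwidth_hz / 2, center_hz + bandwidth_hz / 2
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    noise = np.random.default_rng(0).standard_normal(t.size)
    return sosfiltfilt(sos, noise)

def fm_sweep(f_start_hz, f_end_hz):
    """Linear FM sweep; direction is set by start vs. end frequency."""
    return chirp(t, f0=f_start_hz, f1=f_end_hz, t1=dur, method="linear")

noise_burst = bandpassed_noise(center_hz=4000, bandwidth_hz=1000)
up_sweep = fm_sweep(2000, 8000)     # upward sweep at 60 kHz/s
down_sweep = fm_sweep(8000, 2000)   # downward sweep at the same rate
```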
Abstract:
The barn owl (Tyto alba) uses interaural time difference (ITD) cues to localize sounds in the horizontal plane. Low-order binaural auditory neurons with sharp frequency tuning act as narrow-band coincidence detectors; such neurons respond equally well to sounds with a particular ITD and its phase equivalents and are said to be phase ambiguous. Higher-order neurons with broad frequency tuning are unambiguously selective for single ITDs in response to broad-band sounds and show little or no response to phase equivalents. Selectivity for single ITDs is thought to arise from the convergence of parallel, narrow-band frequency channels that originate in the cochlea. ITD tuning to variable bandwidth stimuli was measured in higher-order neurons of the owl’s inferior colliculus to examine the rules that govern the relationship between frequency channel convergence and the resolution of phase ambiguity. Ambiguity decreased as stimulus bandwidth increased, reaching a minimum at 2–3 kHz. Two independent mechanisms appear to contribute to the elimination of ambiguity: one suppressive and one facilitative. The integration of information carried by parallel, distributed processing channels is a common theme of sensory processing that spans both modality and species boundaries. The principles underlying the resolution of phase ambiguity and frequency channel convergence in the owl may have implications for other sensory systems, such as electrolocation in electric fish and the computation of binocular disparity in the avian and mammalian visual systems.
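The phase ambiguity described here can be demonstrated with a simple cross-correlation model of coincidence detection: for a narrow-band signal, lags that differ from the true ITD by a whole period correlate almost as strongly as the true ITD itself, whereas a broadband signal singles out the true ITD. The signals, sampling rate, and ITD below are illustrative assumptions.

```python
# Sketch: phase ambiguity in ITD coding. A pure tone delayed between
# the two ears correlates equally well at the true ITD and at lags one
# period away; broadband noise does not, mirroring how frequency-channel
# convergence over bandwidth resolves the ambiguity.
import numpy as np

fs = 100_000                          # sampling rate (Hz), assumed
t = np.arange(0, 0.05, 1.0 / fs)
itd = int(100e-6 * fs)                # true ITD of 100 microseconds, in samples

def xcorr(signal):
    """Cross-correlate the two simulated ear signals over all lags."""
    left = signal[itd:]               # leading ear
    right = signal[:-itd]             # lagging ear: right[n] = left[n - itd]
    xc = np.correlate(right, left, mode="full")
    lags = np.arange(-len(left) + 1, len(right))   # lags in samples
    return lags, xc / xc.max()

def value_at(lags, xc, lag_us):
    """Normalized correlation at a given lag in microseconds."""
    return xc[np.argmin(np.abs(lags - lag_us * 1e-6 * fs))]

tone = np.sin(2 * np.pi * 4000 * t)                       # narrow band
noise = np.random.default_rng(1).standard_normal(t.size)  # broad band

for name, sig in [("4 kHz tone", tone), ("broadband noise", noise)]:
    lags, xc = xcorr(sig)
    # A 4 kHz tone has a 250 us period, so a lag of 100 - 250 = -150 us
    # is a phase equivalent of the true 100 us ITD.
    print(f"{name}: r(100 us) = {value_at(lags, xc, 100):.2f}, "
          f"r(-150 us) = {value_at(lags, xc, -150):.2f}")
```

The tone yields near-identical correlation at both lags (the ambiguity), while the noise correlates only at the true ITD, consistent with ambiguity falling off as stimulus bandwidth grows.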
Abstract:
Reading and listening involve complex psychological processes that recruit many brain areas. The anatomy of processing English words has been studied by a variety of imaging methods. Although there is widespread agreement on the general anatomical areas involved in comprehending words, there are still disputes about the computations that go on in these areas. Examination of the time relations (circuitry) among these anatomical areas can aid in understanding their computations. In this paper, we concentrate on tasks that involve obtaining the meaning of a word in isolation or in relation to a sentence. Our current data support a finding in the literature that frontal semantic areas are active well before posterior areas. We use the subject’s attention to amplify the brain areas involved either in semantic classification or in judging a word’s relation to a sentence, testing the hypothesis that frontal areas are concerned with lexical semantics whereas posterior areas are more involved in comprehending propositions that span several words.
Abstract:
Sound localization relies on the neural processing of monaural and binaural spatial cues that arise from the way sounds interact with the head and external ears. Neurophysiological studies of animals raised with abnormal sensory inputs show that the map of auditory space in the superior colliculus is shaped during development by both auditory and visual experience. An example of this plasticity is provided by monaural occlusion during infancy, which leads to compensatory changes in auditory spatial tuning that tend to preserve the alignment between the neural representations of visual and auditory space. Adaptive changes also take place in sound localization behavior, as demonstrated by the fact that ferrets raised and tested with one ear plugged learn to localize as accurately as control animals. In both cases, these adjustments may involve greater use of monaural spectral cues provided by the other ear. Although plasticity in the auditory space map seems to be restricted to development, adult ferrets show some recovery of sound localization behavior after long-term monaural occlusion. The capacity for behavioral adaptation is, however, task dependent, because auditory spatial acuity and binaural unmasking (a measure of the spatial contribution to the “cocktail party effect”) are permanently impaired by chronically plugging one ear, whether in infancy or, especially, in adulthood. Experience-induced plasticity allows the neural circuitry underlying sound localization to be customized to individual characteristics, such as the size and shape of the head and ears, and to compensate for natural conductive hearing losses, including those associated with middle ear disease in infancy.
Modular organization of intrinsic connections associated with spectral tuning in cat auditory cortex
Abstract:
Many response properties in primary auditory cortex (AI) are segregated spatially and organized topographically, as are those in primary visual cortex. Intensive study has not revealed an intrinsic anatomical organizing principle related to AI functional topography. We used retrograde anatomic tracing and topographic physiologic mapping of acoustic response properties to reveal long-range (≥1.5 mm) convergent intrinsic horizontal connections between AI subregions with similar bandwidth and characteristic frequency selectivity. This suggests a modular organization for processing spectral bandwidth in AI.
Abstract:
Working memory refers to the ability of the brain to store and manipulate information over brief time periods, ranging from seconds to minutes. As opposed to long-term memory, which is critically dependent upon hippocampal processing, critical substrates for working memory are distributed in a modality-specific fashion throughout cortex. N-methyl-D-aspartate (NMDA) receptors play a crucial role in the initiation of long-term memory. Neurochemical mechanisms underlying the transient memory storage required for working memory, however, remain obscure. Auditory sensory memory, which refers to the ability of the brain to retain transient representations of the physical features (e.g., pitch) of simple auditory stimuli for periods of up to approximately 30 sec, represents one of the simplest components of the brain working memory system. Functioning of the auditory sensory memory system is indexed by the generation of a well-defined event-related potential, termed mismatch negativity (MMN). MMN can thus be used as an objective index of auditory sensory memory functioning and a probe for investigating underlying neurochemical mechanisms. Monkeys generate cortical activity in response to deviant stimuli that closely resembles human MMN. This study uses a combination of intracortical recording and pharmacological micromanipulations in awake monkeys to demonstrate that both competitive and noncompetitive NMDA antagonists block the generation of MMN without affecting prior obligatory activity in primary auditory cortex. These findings suggest that, on a neurophysiological level, MMN represents selective current flow through open, unblocked NMDA channels. Furthermore, they suggest a crucial role of cortical NMDA receptors in the assessment of stimulus familiarity/unfamiliarity, which is a key process underlying working memory performance.
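MMN is conventionally quantified as the deviant-minus-standard difference wave from an oddball sequence. The sketch below computes such a difference wave from synthetic epochs; the epoch structure, component latencies, and amplitudes are illustrative assumptions, not the recorded monkey data.

```python
# Sketch: mismatch negativity as the deviant-minus-standard difference
# wave from an auditory oddball paradigm. Epochs are synthetic stand-ins
# for recorded ERPs: an obligatory N1 in all trials, plus an extra
# negativity around 170 ms in deviant trials only.
import numpy as np

fs = 1000                             # sampling rate (Hz), assumed
t = np.arange(-0.1, 0.4, 1.0 / fs)    # epoch from -100 to +400 ms
rng = np.random.default_rng(0)

def make_epochs(n_trials, mmn_amp):
    """Synthetic single-trial epochs with an N1 and an optional MMN."""
    n1 = -2.0 * np.exp(-((t - 0.10) / 0.02) ** 2)        # N1 at 100 ms
    mmn = -mmn_amp * np.exp(-((t - 0.17) / 0.03) ** 2)   # MMN near 170 ms
    return n1 + mmn + rng.standard_normal((n_trials, t.size))

standards = make_epochs(800, mmn_amp=0.0)   # frequent stimulus
deviants = make_epochs(160, mmn_amp=1.5)    # rare (oddball) stimulus

# Averaging cancels trial noise; the subtraction removes the obligatory
# N1 common to both conditions, isolating the deviance-related MMN.
difference_wave = deviants.mean(axis=0) - standards.mean(axis=0)
peak_ms = t[np.argmin(difference_wave)] * 1000
print(f"MMN peaks at {peak_ms:.0f} ms (amplitude {difference_wave.min():.2f})")
```

Because the obligatory N1 subtracts out, this construction also mirrors the paper's dissociation: an NMDA antagonist that abolished the deviance-related component would flatten the difference wave while leaving the obligatory response in each condition intact.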
Abstract:
Assistive technology involving voice communication is used primarily by people who are deaf, hard of hearing, or who have speech and/or language disabilities. It is also used to a lesser extent by people with visual or motor disabilities. A very wide range of devices has been developed for people with hearing loss. These devices can be categorized not only by the modality of stimulation [i.e., auditory, visual, tactile, or direct electrical stimulation of the auditory nerve (auditory-neural)] but also in terms of the degree of speech processing that is used. At least four such categories can be distinguished: assistive devices (a) that are not designed specifically for speech, (b) that take the average characteristics of speech into account, (c) that process articulatory or phonetic characteristics of speech, and (d) that embody some degree of automatic speech recognition. Assistive devices for people with speech and/or language disabilities typically involve some form of speech synthesis or, for severe forms of language disability, symbol generation. Speech synthesis is also used in text-to-speech systems for sightless persons. Other applications of assistive technology involving voice communication include voice control of wheelchairs and other devices for people with mobility disabilities.