980 results for Auditory Brainstem Response


Relevance: 30.00%

Abstract:

The overlapping sound pressure waves that enter our brain via the ears and auditory nerves must be organized into a coherent percept. Modelling the regularities of the auditory environment and detecting unexpected changes in these regularities, even in the absence of attention, is a necessary prerequisite for orientating towards significant information, as well as for speech perception and communication. The processing of auditory information, in particular the detection of changes in the regularities of the auditory input, gives rise to neural activity in the brain that is seen as a mismatch negativity (MMN) response of the event-related potential (ERP) recorded by electroencephalography (EEG).

As the recording of MMN requires neither a subject's behavioural response nor attention towards the sounds, it can be done even with subjects who have problems communicating or difficulties performing a discrimination task, for example aphasic and comatose patients, newborns, and even fetuses. Thus with MMN one can follow the evolution of central auditory processing from the very early, often critical stages of development, and also in subjects who cannot be examined with the more traditional behavioural measures of auditory discrimination. Indeed, recent studies show that central auditory processing, as indicated by MMN, is affected in different clinical populations, such as schizophrenics, as well as during normal aging and abnormal childhood development. Moreover, the processing of auditory information can be selectively impaired for certain auditory attributes (e.g., sound duration, frequency) and can also depend on the context of the sound changes (e.g., speech or non-speech). Although its advantages over behavioural measures are undeniable, a major obstacle to the larger-scale routine use of the MMN method, especially in clinical settings, is the relatively long duration of its measurement.
Typically, approximately 15 minutes of recording time is needed to measure the MMN for a single auditory attribute. Recording a complete central auditory processing profile covering several auditory attributes would thus require from one hour to several hours. In this research, I have contributed to the development of new, fast multi-attribute MMN recording paradigms in which several types and magnitudes of sound changes are presented in both speech and non-speech contexts in order to obtain a comprehensive profile of auditory sensory memory and discrimination accuracy in a short measurement time (altogether approximately 15 min for 5 auditory attributes). The speed of the paradigms makes them highly attractive for clinical research, their reliability brings fidelity to longitudinal studies, and the language context is especially suitable for studies of language impairments such as dyslexia and aphasia. In addition, I have presented an even more ecological paradigm and, importantly for the theory of MMN, a result in which the MMN responses are recorded entirely without a repetitive standard tone. All in all, these paradigms contribute to the development of the theory of auditory perception and increase the feasibility of MMN recordings in both basic and clinical research. Moreover, they have already proven useful in studying, for instance, dyslexia, Asperger syndrome and schizophrenia.

Relevance: 30.00%

Abstract:

The ability of the continuous wavelet transform (CWT) to provide good time and frequency localization has made it a popular tool in time-frequency analysis of signals. Wavelets exhibit the constant-Q property, which is also possessed by the basilar membrane filters in the peripheral auditory system. The basilar membrane filters, or auditory filters, are often modeled by a Gammatone function, which provides a good approximation to experimentally determined responses. The filterbank derived from these filters is referred to as a Gammatone filterbank. In general, wavelet analysis can be likened to a filterbank analysis, hence the interesting link between standard wavelet analysis and the Gammatone filterbank. However, the Gammatone function does not exactly qualify as a wavelet because its time average is not zero. We show how bona fide wavelets can be constructed out of Gammatone functions. We analyze properties such as admissibility, time-bandwidth product, and vanishing moments, which are particularly relevant in the context of wavelets. We also show how the proposed auditory wavelets are produced as the impulse response of a linear, shift-invariant system governed by a linear differential equation with constant coefficients. We propose analog circuit implementations of the proposed CWT. We also show how the Gammatone-derived wavelets can be used for singularity detection and time-frequency analysis of transient signals. (C) 2013 Elsevier B.V. All rights reserved.
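The constant-Q property mentioned above can be illustrated numerically. The sketch below uses the standard Glasberg and Moore approximation ERB(f) = 24.7(4.37 f/1000 + 1) for the auditory-filter bandwidth (an assumption of this illustration, not a formula from the abstract) and shows that the quality factor f/ERB(f) approaches a constant at high centre frequencies, which is the regime in which the filterbank behaves like a wavelet analysis.

```python
# Quality factor of Glasberg-Moore auditory filters: approaches a constant
# at high centre frequencies, i.e. approximately constant-Q (wavelet-like).

def erb(f_hz):
    """Equivalent rectangular bandwidth (Hz) of the auditory filter at f_hz
    (Glasberg & Moore approximation; an assumption of this illustration)."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def q_factor(f_hz):
    """Centre frequency divided by bandwidth."""
    return f_hz / erb(f_hz)

for f in (250.0, 1000.0, 4000.0, 16000.0):
    print(f, round(q_factor(f), 2))
# Q rises toward the asymptote 1000 / (24.7 * 4.37), about 9.26.
```

At low frequencies the additive constant in the ERB formula dominates and Q falls, which is one way to see why the Gammatone filterbank is only approximately a wavelet analysis.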

Relevance: 30.00%

Abstract:

Neurons in the songbird forebrain nucleus HVc are highly sensitive to auditory temporal context and have some of the most complex auditory tuning properties yet discovered. HVc is crucial for learning, perceiving, and producing song; it is therefore important to understand the neural circuitry and mechanisms that give rise to these remarkable auditory response properties. This thesis investigates these issues experimentally and computationally.

Extracellular studies reported here compare the auditory context sensitivity of neurons in HVc with neurons in the afferent areas of field L. These demonstrate that there is a substantial increase in auditory temporal context sensitivity from the areas of field L to HVc. Whole-cell recordings of HVc neurons from acute brain slices are described which show that excitatory synaptic transmission between HVc neurons involves the release of glutamate and the activation of both AMPA/kainate and NMDA-type glutamate receptors. Additionally, widespread inhibitory interactions exist between HVc neurons that are mediated by postsynaptic GABA_A receptors. Intracellular recordings of HVc auditory neurons in vivo provide evidence that HVc neurons encode information about temporal structure using a variety of cellular and synaptic mechanisms, including syllable-specific inhibition, excitatory post-synaptic potentials with a range of different time courses, burst-firing, and song-specific hyperpolarization.

The final part of this thesis presents two computational approaches for representing and learning temporal structure. The first method utilizes computational elements that are analogous to temporal combination-sensitive neurons in HVc. A network of these elements can learn using local information and lateral inhibition. The second method presents a more general framework that allows a network to discover mixtures of temporal features in a continuous stream of input.

Relevance: 30.00%

Abstract:

The lateral intraparietal area (LIP) of macaque posterior parietal cortex participates in the sensorimotor transformations underlying visually guided eye movements. Area LIP has long been considered unresponsive to auditory stimulation. However, recent studies have shown that neurons in LIP respond to auditory stimuli during an auditory-saccade task, suggesting possible involvement of this area in auditory-to-oculomotor as well as visual-to-oculomotor processing. This dissertation describes investigations which clarify the role of area LIP in auditory-to-oculomotor processing.

Extracellular recordings were obtained from a total of 332 LIP neurons in two macaque monkeys, while the animals performed fixation and saccade tasks involving auditory and visual stimuli. No auditory activity was observed in area LIP before animals were trained to make saccades to auditory stimuli, but responses to auditory stimuli did emerge after auditory-saccade training. Auditory responses in area LIP after auditory-saccade training were significantly stronger in the context of an auditory-saccade task than in the context of a fixation task. Compared to visual responses, auditory responses were also significantly more predictive of movement-related activity in the saccade task. Moreover, while visual responses often had a fast transient component, responses to auditory stimuli in area LIP tended to be gradual in onset and relatively prolonged in duration.

Overall, the analyses demonstrate that responses to auditory stimuli in area LIP are dependent on auditory-saccade training, modulated by behavioral context, and characterized by slow-onset, sustained response profiles. These findings suggest that responses to auditory stimuli are best interpreted as supramodal (cognitive or motor) responses, rather than as modality-specific sensory responses. Auditory responses in area LIP seem to reflect the significance of auditory stimuli as potential targets for eye movements, and may differ from most visual responses in the extent to which they are abstracted from the sensory parameters of the stimulus.

Relevance: 30.00%

Abstract:

Simultaneous tone-tone masking in conjunction with envelope-following response (EFR) recording was used to obtain tuning curves in the porpoises Phocoena phocoena and Neophocaena phocaenoides asiaeorientalis. The EFR was evoked by amplitude-modulated probes with a modulation rate of 1000 Hz and carrier frequencies from 22.5 to 140 kHz. The equivalent rectangular quality Q(ERB) of the obtained tuning curves varied from 8.3-8.6 at lower (22.5-32 kHz) probe frequencies to 44.8-47.4 at high (128-140 kHz) frequencies. The Q(ERB) dependence on probe frequency could be approximated by regression lines with a slope of 0.83 to 0.86 in log-log scale, which corresponded to almost frequency-proportional quality and an almost constant bandwidth of 3.4 kHz. Thus, the frequency representation in the porpoise auditory system is much closer to a constant-bandwidth than to a constant-quality manner. (c) 2006 Acoustical Society of America.
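The log-log regression described above can be reproduced in outline. The Q values below are synthetic, generated from an assumed power law Q = 0.6 f^0.85 merely to mimic the reported trend; the point is that a fitted slope s implies bandwidth f/Q scaling as f^(1-s), so a slope near 1 corresponds to nearly constant bandwidth.

```python
import numpy as np

# Synthetic tuning-curve quality factors following an assumed power law
# Q = 0.6 * f^0.85 (illustrative values only, not the measured data).
f_khz = np.array([22.5, 32.0, 64.0, 128.0, 140.0])
q = 0.6 * f_khz ** 0.85

# Slope of log Q versus log f, as in the regression described above.
slope, intercept = np.polyfit(np.log10(f_khz), np.log10(q), 1)

# Bandwidth f / Q then scales as f^(1 - slope): near-constant when slope -> 1.
bw_khz = f_khz / q
print(round(slope, 2), np.round(bw_khz, 2))
```

With the synthetic constants above, bandwidth changes by well under a factor of 1.5 while the carrier frequency changes more than six-fold, which is the sense in which a slope of ~0.85 implies an approximately constant-bandwidth representation.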

Relevance: 30.00%

Abstract:

Maps are a mainstay of visual, somatosensory, and motor coding in many species. However, auditory maps of space have not been reported in the primate brain. Instead, recent studies have suggested that sound location may be encoded via broadly responsive neurons whose firing rates vary roughly proportionately with sound azimuth. Within frontal space, maps and such rate codes involve different response patterns at the level of individual neurons. Maps consist of neurons exhibiting circumscribed receptive fields, whereas rate codes involve open-ended response patterns that peak in the periphery. This coding format discrepancy therefore poses a potential problem for brain regions responsible for representing both visual and auditory information. Here, we investigated the coding of auditory space in the primate superior colliculus (SC), a structure known to contain visual and oculomotor maps for guiding saccades. We report that, for visual stimuli, neurons showed circumscribed receptive fields consistent with a map, but for auditory stimuli, they had open-ended response patterns consistent with a rate or level-of-activity code for location. The discrepant response patterns were not segregated into different neural populations but occurred in the same neurons. We show that a read-out algorithm in which the site and level of SC activity both contribute to the computation of stimulus location is successful at evaluating the discrepant visual and auditory codes, and can account for subtle but systematic differences in the accuracy of auditory compared to visual saccades. This suggests that a given population of neurons can use different codes to support appropriate multimodal behavior.
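A read-out in which both the site and the level of population activity carry location information can be sketched as follows. The tuning curves and decoders here are illustrative assumptions (Gaussian receptive fields for the map code, sigmoidal tuning for the rate code), not the study's fitted model.

```python
import numpy as np

prefs = np.linspace(-90.0, 90.0, 37)   # preferred azimuths (deg); illustrative

def map_response(az):
    """Circumscribed (Gaussian) receptive fields: a place/map code."""
    return np.exp(-0.5 * ((prefs - az) / 20.0) ** 2)

def rate_response(az):
    """Open-ended monotonic tuning: a level/rate code peaking peripherally."""
    return 1.0 / (1.0 + np.exp(-(az - prefs) / 30.0))

def decode_site(r):
    """Read out WHERE activity sits: centroid of activity over the map."""
    return np.sum(r * prefs) / np.sum(r)

def decode_level(r, az_grid, sum_grid):
    """Read out HOW MUCH total activity there is, via a monotonic lookup."""
    return np.interp(np.sum(r), sum_grid, az_grid)

az_grid = np.linspace(-90.0, 90.0, 181)
sum_grid = np.array([rate_response(a).sum() for a in az_grid])  # increasing in az

true_az = 30.0
print(decode_site(map_response(true_az)))                        # close to 30
print(decode_level(rate_response(true_az), az_grid, sum_grid))   # close to 30
```

Both decoders recover the stimulus azimuth from their respective codes; a combined read-out weighting site and level is one way a single structure could serve both response formats.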

Relevance: 30.00%

Abstract:

Humans and song-learning birds communicate acoustically using learned vocalizations. The characteristic features of this social communication behavior include vocal control by forebrain motor areas, a direct cortical projection to brainstem vocal motor neurons, and dependence on auditory feedback to develop and maintain learned vocalizations. These features have so far not been found in closely related primate and avian species that do not learn vocalizations. Male mice produce courtship ultrasonic vocalizations with acoustic features similar to the songs of song-learning birds. However, it is assumed that mice lack a forebrain system for vocal modification and that their ultrasonic vocalizations are innate. Here we investigated the mouse song system and discovered that it includes a motor cortex region that is active during singing, projects directly to brainstem vocal motor neurons, and is necessary for keeping song more stereotyped and on pitch. We also discovered that male mice depend on auditory feedback to maintain some ultrasonic song features, and that sub-strains with differences in their songs can match each other's pitch when cross-housed under competitive social conditions. We conclude that male mice have some limited vocal modification abilities, with at least some neuroanatomical features thought to be unique to humans and song-learning birds. To explain our findings, we propose a continuum hypothesis of vocal learning.

Relevance: 30.00%

Abstract:

In view of the evidence that cognitive deficits in schizophrenia are critically important for long-term outcome, it is essential to establish the effects that the various antipsychotic compounds have on cognition, particularly second-generation drugs. This parallel group, placebo-controlled study aimed to compare the effects in healthy volunteers (n = 128) of acute doses of the atypical antipsychotics amisulpride (300 mg) and risperidone (3 mg) to those of chlorpromazine (100 mg) on tests thought relevant to the schizophrenic process: auditory and visual latent inhibition, prepulse inhibition of the acoustic startle response, executive function and eye movements. The drugs tested were not found to affect auditory latent inhibition, prepulse inhibition or executive functioning as measured by the Cambridge Neuropsychological Test Battery and the FAS test of verbal fluency. However, risperidone disrupted and amisulpride showed a trend to disrupt visual latent inhibition. Although amisulpride did not affect eye movements, both risperidone and chlorpromazine decreased peak saccadic velocity and increased antisaccade error rates, which, in the risperidone group, correlated with drug-induced akathisia. It was concluded that single doses of these drugs appear to have little effect on cognition, but may affect eye movement parameters in accordance with the amount of sedation and akathisia they produce. The effect risperidone had on latent inhibition is likely to relate to its serotonergic properties. Furthermore, as the trend for disrupted visual latent inhibition following amisulpride was similar in nature to that which would be expected with amphetamine, it was concluded that its behaviour in this model is consistent with its preferential presynaptic dopamine antagonistic activity in low dose and its efficacy in the negative symptoms of schizophrenia.

Relevance: 30.00%

Abstract:

The core difficulty in developmental dyslexia across languages is a "phonological deficit", a specific difficulty with the neural representation of the sound structure of words. Recent data across languages suggest that this phonological deficit arises in part from inefficient auditory processing of the rate of change of the amplitude envelope at syllable onset (inefficient sensory processing of rise time). Rise time is a complex percept that also involves changes in duration and perceived intensity. Understanding the neural mechanisms that give rise to the phonological deficit in dyslexia is important for optimising educational interventions. In a three-deviant passive 'oddball' paradigm and a corresponding blocked 'deviant-alone' control condition, we recorded ERPs to tones varying in rise time, duration and intensity in children with dyslexia and typically developing children longitudinally. We report here results from test Phases 1 and 2, when participants were aged 8-10 years. We found an MMN to duration, but not to rise time or intensity deviants, at both time points for both groups. For rise time, duration and intensity we found group effects in both the Oddball and Blocked conditions. There was a slower fronto-central P1 response in the dyslexic group compared to controls. The fronto-central P1 amplitude to tones with slower rise times and lower intensity was smaller than to tones with sharper rise times and higher intensity in the Oddball condition, for children with dyslexia only. The latency of this ERP component for all three stimuli was shorter in the right compared to the left hemisphere, only for the dyslexic group in the Blocked condition. Furthermore, we found decreased N1c amplitude to tones with slower rise times compared to tones with sharper rise times for children with dyslexia, only in the Oddball condition. Several other effects of stimulus type, age and laterality were also observed.
Our data suggest that neuronal responses underlying some aspects of auditory sensory processing may be impaired in dyslexia. © 2011 Elsevier Inc.

Relevance: 30.00%

Abstract:

Individuals with autism spectrum disorders (ASD) are reported to allocate less spontaneous attention to voices. Here, we investigated how vocal sounds are processed in ASD adults, when those sounds are attended. Participants were asked to react as fast as possible to target stimuli (either voices or strings) while ignoring distracting stimuli. Response times (RTs) were measured. Results showed that, similar to neurotypical (NT) adults, ASD adults were faster to recognize voices compared to strings. Surprisingly, ASD adults had even shorter RTs for voices than the NT adults, suggesting a faster voice recognition process. To investigate the acoustic underpinnings of this effect, we created auditory chimeras that retained only the temporal or the spectral features of voices. For the NT group, no RT advantage was found for the chimeras compared to strings: both sets of features had to be present to observe an RT advantage. However, for the ASD group, shorter RTs were observed for both chimeras. These observations indicate that the previously observed attentional deficit to voices in ASD individuals could be due to a failure to combine acoustic features, even though such features may be well represented at a sensory level.
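Chimeras that separate one acoustic dimension from another are commonly built from the analytic signal. The single-band sketch below shows one standard construction of this general kind (in the style of envelope/fine-structure exchange): it imposes one sound's temporal envelope on another's temporal fine structure. It is only a minimal illustration, not the stimulus-generation procedure used in the study.

```python
import numpy as np

def analytic(x):
    """Analytic signal via the FFT (zero out the negative frequencies)."""
    n = len(x)
    spec = np.fft.fft(x)
    gain = np.zeros(n)
    gain[0] = 1.0
    gain[1:n // 2] = 2.0
    if n % 2 == 0:
        gain[n // 2] = 1.0
    return np.fft.ifft(spec * gain)

def chimera(env_src, tfs_src):
    """Impose env_src's temporal envelope on tfs_src's temporal fine structure."""
    env = np.abs(analytic(env_src))            # Hilbert envelope
    tfs = np.cos(np.angle(analytic(tfs_src)))  # unit-amplitude fine structure
    return env * tfs

t = np.arange(1024)
carrier = np.sin(2 * np.pi * 50 * t / 1024)          # fine-structure source
slow = 1.5 + 0.5 * np.sin(2 * np.pi * 3 * t / 1024)  # envelope source
y = chimera(slow, carrier)   # carries carrier's fine structure, slow's envelope
```

Multi-band versions apply the same exchange within each channel of a filterbank; listening experiments with such stimuli are what allow temporal and spectral contributions to be teased apart.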

Relevance: 30.00%

Abstract:

Abstract: Auditory spatial functions are of crucial importance in everyday life. Determining the origin of sound sources in space plays a key role in a variety of tasks, including orienting attention and disentangling the complex acoustic patterns reaching our ears in noisy environments. Following brain damage, auditory spatial processing can be disrupted, resulting in severe handicaps. Complaints of patients with sound localization deficits include the inability to locate their crying child and being overloaded by sounds in crowded public places. Yet the brain bears a large capacity for reorganization following damage and/or learning. This phenomenon is referred to as plasticity and is believed to underlie post-lesional functional recovery as well as learning-induced improvement. The aim of this thesis was to investigate the organization and plasticity of different aspects of auditory spatial functions. Overall, we report the outcomes of three studies. In the study entitled "Learning-induced plasticity in auditory spatial representations" (Spierer et al., 2007b), we focused on the neurophysiological and behavioral changes induced by auditory spatial training in healthy subjects. We found that relatively brief auditory spatial discrimination training improves performance and modifies the cortical representation of the trained sound locations, suggesting that cortical auditory representations of space are dynamic and subject to rapid reorganization. In the same study, we tested the generalization and persistence of training effects over time, as these are two determining factors in the development of neurorehabilitative interventions. In "The path to success in auditory spatial discrimination" (Spierer et al., 2007c), we investigated the neurophysiological correlates of successful spatial discrimination and contributed to the modeling of the anatomo-functional organization of auditory spatial processing in healthy subjects.
We showed that discrimination accuracy depends on superior temporal plane (STP) activity in response to the first sound of a pair of stimuli. Our data support a model wherein refinement of spatial representations occurs within the STP, and interactions with parietal structures allow for transformations into the coordinate frames that are required for higher-order computations, including absolute localization of sound sources. In "Extinction of auditory stimuli in hemineglect: space versus ear" (Spierer et al., 2007a), we investigated auditory attentional deficits in brain-damaged patients. This work provides insight into the auditory neglect syndrome and its relation with neglect symptoms in the visual modality. Apart from contributing to a basic understanding of the cortical mechanisms underlying auditory spatial functions, the outcomes of the studies also contribute to the development of neurorehabilitation strategies, which are currently being tested in clinical populations.

Relevance: 30.00%

Abstract:

The oscillation of neuronal circuits reflected in the EEG gamma frequency may be fundamental to the perceptual process referred to as binding (the integration of various thoughts and perceptions into a coherent picture). The aim of our study was to expand our knowledge of the developmental course of EEG gamma in the auditory modality. We investigated EEG 40 Hz gamma band responses (35.2 to 43.0 Hz) using an auditory novelty oddball paradigm, alone and with a visual-number-series distracter task, in 208 participants as a function of age (7 years to adult) at 9 sites across the sagittal and lateral axes (F3, Fz, F4, C3, Cz, C4, P3, Pz, P4). Gamma responses were operationally defined as a change in power or in phase synchrony level from baseline within two time windows. The evoked gamma response was defined as a significant change from baseline occurring between 0 and 150 ms after stimulus onset; the induced gamma response was measured from 250 to 750 ms after stimulus onset. A significant evoked gamma band response was found when measuring changes in both power and phase synchrony. The increase in both measures was maximal at frontal regions. Decreases in both measures were found when participants were distracted by a secondary task. For neither measure were developmental effects noted. However, evoked gamma power was significantly enhanced with the presentation of a novel stimulus, especially at the right frontal site (F4); frontal evoked gamma phase synchrony also showed enhancement for novel stimuli, but only for our two oldest age groups (16-18 year olds and adults). Induced gamma band responses also varied with task-dependent cognitive stimulus properties. In the induced gamma power response in all age groups, target stimuli generated the highest power values at the parietal region, while the novel stimuli were always below baseline.
Target stimuli increased induced synchrony in all regions for all participants, but the novel stimulus selectively affected participants depending on their age and gender. Adult participants, for example, exhibited a reduction in gamma power but an increase in synchrony to the novel stimulus within the same region. Induced gamma synchrony was more sensitive to the gender of the participant than was induced gamma power. While induced gamma power showed little effect of age, gamma synchrony did show age effects. These results confirm that the perceptual process which regulates gamma power is distinct from that which governs the synchronization of neuronal firing, and both gamma power and synchrony are important factors to be considered for the "binding" hypothesis. However, there is surprisingly little effect of age on the absolute levels or distribution of EEG gamma in the age range investigated.
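The two measures used above, change in power and inter-trial phase synchrony, can both be computed from single-trial spectra. The sketch below uses simulated trials (a phase-locked 40 Hz component plus noise; all parameters are illustrative) and a single FFT bin rather than the time-frequency decomposition a real analysis would use.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n_samp, n_trials = 500, 500, 60      # 1 s epochs at 500 Hz; illustrative
t = np.arange(n_samp) / fs
freq = 40.0
k = int(round(freq * n_samp / fs))       # FFT bin corresponding to 40 Hz

# Simulated "evoked" data: a 40 Hz component phase-locked across trials + noise.
trials = np.stack([np.sin(2 * np.pi * freq * t) + rng.normal(0.0, 1.0, n_samp)
                   for _ in range(n_trials)])

spectra = np.fft.rfft(trials, axis=1)[:, k]
power = np.mean(np.abs(spectra) ** 2)               # trial-averaged 40 Hz power
plv = np.abs(np.mean(spectra / np.abs(spectra)))    # phase synchrony, 0..1
print(round(plv, 2))   # near 1: phases line up across trials
```

Because power averages magnitudes while the phase-locking value averages unit phasors, the two measures can dissociate, which is the pattern reported above for adults responding to novel stimuli.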

Relevance: 30.00%

Abstract:

Auditory neuropathy is a disorder characterized by sensorineural hearing loss and absent auditory brainstem evoked potentials with present otoacoustic emissions, that is, hearing loss in the presence of cochlear function, an anomaly suggestive of impaired neural synchrony. Auditory neuropathy has a low incidence in children with normal auditory function and a variable incidence in children with severe hearing loss; its current management is directed at auditory rehabilitation using amplification systems (hearing aids or cochlear implants). A cross-sectional study was carried out with the aim of comparing functional gain with amplification systems between children with auditory neuropathy and children with sensorineural hearing loss. Four children with a confirmed diagnosis of the condition were compared with a control group of 16 children with sensorineural hearing loss of other etiologies; the functional gain with hearing aid and with cochlear implant, obtained from audiometry, was compared. Overall functional gain with the two amplification systems showed no significant differences between the two groups; within the group of patients with auditory neuropathy, significant differences between hearing aid and implant were found for the middle and high frequencies. It can be concluded that, in patients with auditory neuropathy, the hearing aid is the amplification system that offers the better functional gain values, even better than the cochlear implant.

Relevance: 30.00%

Abstract:

Listeners can attend to one of several simultaneous messages by tracking one speaker's voice characteristics. Using differences in the location of sounds in a room, we ask how well cues arising from spatial position compete with these characteristics. Listeners decided which of two simultaneous target words belonged in an attended "context" phrase when it was played simultaneously with a different "distracter" context. Talker difference was in competition with position difference, so the response indicates which cue type the listener was tracking. Spatial position was found to override talker difference in dichotic conditions when the talkers were similar (both male). The salience of cues associated with differences in the sounds' bearings decreased with the distance between listener and sources. These cues are more effective binaurally. However, there appear to be other cues that increase in salience with the distance between sounds. This increase is more prominent in diotic conditions, indicating that these cues are largely monaural. Distances between spectra of the room's impulse responses at different locations, calculated using a gammatone filterbank (with ERB-spaced CFs), were computed; comparison with listeners' responses suggested some slight monaural loudness cues, but also monaural "timbre" cues arising from the temporal- and spectral-envelope differences in the speech from different locations.
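ERB-spaced centre frequencies of the kind mentioned above are conventionally generated by spacing filters uniformly on the ERB-rate (Cam) scale. The sketch below uses the common Glasberg and Moore formulas as an assumption; it illustrates the spacing only, not the authors' exact filterbank.

```python
import numpy as np

def hz_to_cam(f_hz):
    """ERB-rate (Cam) scale, Glasberg & Moore form (assumed here)."""
    return 21.4 * np.log10(4.37 * f_hz / 1000.0 + 1.0)

def cam_to_hz(cam):
    """Inverse of hz_to_cam."""
    return (10.0 ** (cam / 21.4) - 1.0) * 1000.0 / 4.37

def erb_space(low_hz, high_hz, n):
    """n centre frequencies uniformly spaced on the ERB-rate scale."""
    return cam_to_hz(np.linspace(hz_to_cam(low_hz), hz_to_cam(high_hz), n))

cfs = erb_space(100.0, 8000.0, 32)
print(round(cfs[0], 1), round(cfs[-1], 1))   # endpoints: 100.0 and 8000.0 Hz
```

Equal spacing on the Cam scale packs CFs densely at low frequencies and sparsely at high frequencies, mirroring the frequency resolution of the peripheral auditory system that the gammatone filterbank is meant to model.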

Relevance: 30.00%

Abstract:

Background: Auditory discrimination is significantly impaired in Wernicke's aphasia (WA) and thought to be causatively related to the language comprehension impairment which characterises the condition. This study used mismatch negativity (MMN) to investigate the neural responses corresponding to successful and impaired auditory discrimination in WA. Methods: Behavioural auditory discrimination thresholds for CVC syllables and pure tones (PT) were measured in WA (n=7) and control (n=7) participants. Threshold results were used to develop multiple-deviant MMN oddball paradigms containing deviants which were either perceptibly or non-perceptibly different from the standard stimuli. MMN analysis investigated differences associated with group, condition and perceptibility, as well as the relationship between MMN responses and comprehension (within which behavioural auditory discrimination profiles were examined). Results: MMN waveforms were observable for both perceptible and non-perceptible auditory changes. Perceptibility was distinguished by MMN amplitude only in the PT condition. The WA group could be distinguished from controls by an increase in MMN response latency to CVC stimulus change. Correlation analyses revealed a relationship between behavioural CVC discrimination and MMN amplitude in the control group, where greater amplitude corresponded to better discrimination. The WA group displayed the inverse effect: both discrimination accuracy and auditory comprehension scores were reduced with increased MMN amplitude. In the WA group, a further correlation was observed between the lateralisation of the MMN response and CVC discrimination accuracy: the greater the bilateral involvement, the better the discrimination accuracy.
Conclusions: The results from this study provide further evidence for the nature of auditory comprehension impairment in WA and indicate that the auditory discrimination deficit is grounded in a reduced ability to engage in efficient hierarchical processing and the construction of invariant auditory objects. Correlation results suggest that people with chronic WA may rely on an inefficient, noisy right hemisphere auditory stream when attempting to process speech stimuli.