3 results for auditory processing
in DRUM (Digital Repository at the University of Maryland)
Abstract:
Older adults frequently report that they can hear what they have been told but cannot understand its meaning. This is particularly true in noisy conditions, where the additional challenge of suppressing irrelevant sound (e.g., a competing talker) adds another layer of difficulty to speech understanding. Hearing aids improve speech perception in quiet, but their success in noisy environments has been modest, suggesting that peripheral hearing loss may not be the only factor in older adults' perceptual difficulties. Recent animal studies have shown that auditory synapses and cells undergo significant age-related changes that could compromise the integrity of temporal processing in the central auditory system. Psychoacoustic studies in humans have also shown that hearing loss can explain the decline in older adults' performance in quiet relative to younger adults, but these psychoacoustic measurements do not adequately capture auditory deficits in noisy conditions. These results suggest that temporal auditory processing deficits could play an important role in explaining older adults' reduced ability to process speech in noisy environments. The goals of this dissertation were to understand how age affects neural auditory mechanisms and at which level in the auditory system these changes are most relevant for explaining speech-in-noise problems. Specifically, we used non-invasive neuroimaging techniques to tap into the midbrain and the cortex in order to analyze how auditory stimuli are processed in younger (our reference group) and older adults. We also investigated a possible interaction between the processing carried out in the midbrain and that carried out in the cortex.
Abstract:
While humans can easily segregate and track a speaker's voice in a loud, noisy environment, most modern speech recognition systems still perform poorly in loud background noise. The computational principles behind auditory source segregation in humans are not yet fully understood. In this dissertation, we develop a computational model for source segregation inspired by auditory processing in the brain. To support the key principles behind the computational model, we conduct a series of electroencephalography (EEG) experiments using both simple tone-based stimuli and more natural speech stimuli. Most source segregation algorithms rely on some form of prior information about the target speaker or on more than one simultaneous recording of the noisy speech mixture. Other methods build models of the noise characteristics. Segregating simultaneous speech mixtures from a single microphone recording, with no knowledge of the target speaker, remains a challenge. Using the principle of temporal coherence, we develop a novel computational model that exploits differences in the temporal evolution of features belonging to different sources to perform unsupervised monaural source segregation. While requiring no prior information about the target speaker, the method can gracefully incorporate knowledge about the target speaker to further enhance the segregation. Through a series of EEG experiments we collect neurological evidence supporting the principle behind the model. Aside from its unusual structure and computational innovations, the proposed model provides testable hypotheses about the physiological mechanisms underlying the remarkable perceptual ability of humans to segregate acoustic sources, and about its psychophysical manifestations in navigating complex sensory environments. Results from the EEG experiments provide further insight into the assumptions behind the model and motivate future single-unit studies that could provide more direct evidence for the principle of temporal coherence.
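The core idea of this abstract, grouping spectral channels whose temporal envelopes evolve coherently and treating the coherent group as one source, can be illustrated with a minimal sketch. This is not the dissertation's actual model: the STFT front end, the envelope-smoothing window, the anchor-channel choice, and the correlation threshold below are all illustrative assumptions standing in for the richer auditory feature analysis the work describes.

```python
# Minimal, hypothetical sketch of temporal-coherence grouping for monaural
# segregation (illustrative only; not the dissertation's model).
import numpy as np
from scipy.signal import stft, istft

def segregate_by_temporal_coherence(mixture, fs, anchor_hz=200.0,
                                    smooth_s=0.25, threshold=0.5):
    """Keep the spectral channels whose slow envelopes co-vary with an anchor channel."""
    nperseg, noverlap = 512, 384
    f, t, Z = stft(mixture, fs=fs, nperseg=nperseg, noverlap=noverlap)
    env = np.abs(Z)                                    # per-channel temporal envelopes

    # Smooth envelopes on a slow ("cortical") time scale of a few hundred ms.
    hop_s = (nperseg - noverlap) / fs
    k = max(1, int(round(smooth_s / hop_s)))
    kernel = np.ones(k) / k
    env_s = np.apply_along_axis(np.convolve, 1, env, kernel, 'same')

    # Temporal coherence = correlation of each channel's slow envelope with
    # the envelope of an anchor channel assumed to track the target source.
    a_idx = int(np.argmin(np.abs(f - anchor_hz)))
    E = env_s - env_s.mean(axis=1, keepdims=True)
    E /= (E.std(axis=1, keepdims=True) + 1e-12)
    rho = (E * E[a_idx]).mean(axis=1)                  # Pearson correlation per channel

    # Binary frequency mask: coherent channels pass, incoherent ones are suppressed.
    mask = (rho > threshold).astype(float)[:, None]
    _, target = istft(Z * mask, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return target, rho

if __name__ == "__main__":
    # Toy mixture: two tones with different amplitude-modulation rates, so their
    # envelopes evolve incoherently and can be separated by temporal coherence.
    fs, dur = 16000, 2.0
    n = np.arange(int(fs * dur)) / fs
    target = (1 + np.sin(2 * np.pi * 4 * n)) * np.sin(2 * np.pi * 200 * n)
    masker = (1 + np.sin(2 * np.pi * 7 * n)) * np.sin(2 * np.pi * 1100 * n)
    recovered, rho = segregate_by_temporal_coherence(target + masker, fs)
    print("channels kept:", int((rho > 0.5).sum()))
```

In this sketch the anchor channel plays the role of the prior knowledge the abstract says the model can "gracefully incorporate"; a fully unsupervised variant would instead cluster channels by their mutual envelope correlations rather than correlate them against a single chosen channel.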
Abstract:
Every day, humans and animals navigate complex acoustic environments in which multiple sound sources overlap. Somehow, they effortlessly perform an acoustic scene analysis and extract relevant signals from background noise. Constantly updating the behavioral relevance of ambient sounds requires representing and integrating incoming acoustical information with internal representations such as behavioral goals, expectations, and memories of previous sound-meaning associations. Rapid plasticity of auditory representations may contribute to our ability to attend and focus on relevant sounds. In order to better understand how auditory representations are transformed in the brain to incorporate behavioral contextual information, we explored task-dependent plasticity in neural responses recorded at four levels of the auditory cortical processing hierarchy of ferrets: the primary auditory cortex (A1), two higher-order auditory areas (dorsal PEG and ventral-anterior PEG), and dorso-lateral frontal cortex. In one study we examined the laminar profile of rapid task-related plasticity in A1 and found that plasticity occurred at all depths but was greatest in supragranular layers. This result suggests that rapid task-related plasticity in A1 derives primarily from intracortical modulation of neural selectivity. In two other studies we explored task-dependent plasticity in two higher-order areas of the ferret auditory cortex that may correspond to belt (secondary) and parabelt (tertiary) auditory areas. We found that representations of behaviorally relevant sounds are progressively enhanced during performance of auditory tasks, and that these selective enhancement effects become larger as one ascends the auditory cortical hierarchy. We also observed neuronal responses to non-auditory, task-related information (reward timing, expectations) in the parabelt area that were very similar to responses previously described in frontal cortex. These results suggest that auditory representations in the brain are transformed from the more veridical spectrotemporal information encoded in earlier auditory stages to a more abstract representation encoding the behavioral meaning of sounds in higher-order auditory areas and dorso-lateral frontal cortex.