5 results for auditory EEG

in DRUM (Digital Repository at the University of Maryland)


Relevance:

60.00%

Publisher:

Abstract:

While humans can easily segregate and track a speaker's voice in a loud, noisy environment, most modern speech recognition systems still perform poorly in loud background noise. The computational principles behind auditory source segregation in humans are not yet fully understood. In this dissertation, we develop a computational model for source segregation inspired by auditory processing in the brain. To support the key principles behind the computational model, we conduct a series of electroencephalography (EEG) experiments using both simple tone-based stimuli and more natural speech stimuli. Most source segregation algorithms use some form of prior information about the target speaker, or rely on more than one simultaneous recording of the noisy speech mixture. Other methods model the characteristics of the noise. Source segregation of simultaneous speech mixtures from a single microphone recording, with no knowledge of the target speaker, remains a challenge. Using the principle of temporal coherence, we develop a novel computational model that exploits the difference in the temporal evolution of features belonging to different sources to perform unsupervised monaural source segregation. While it uses no prior information about the target speaker, the method can gracefully incorporate knowledge about the target speaker to further enhance the segregation. Through a series of EEG experiments we collect neurological evidence to support the principle behind the model. Aside from its unusual structure and computational innovations, the proposed model provides testable hypotheses about the physiological mechanisms behind the remarkable human ability to segregate acoustic sources, and about its psychophysical manifestations in navigating complex sensory environments. Results from the EEG experiments provide further insight into the assumptions behind the model and motivate future single-unit studies that could provide more direct evidence for the principle of temporal coherence.
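The core intuition of temporal coherence, that feature channels whose envelopes rise and fall together likely belong to the same source, can be sketched in a few lines. This is a minimal illustration, not the dissertation's actual model; the channel count, correlation threshold, and synthetic envelopes are invented for demonstration:

```python
import numpy as np

def coherence_grouping(features, anchor=0, threshold=0.5):
    """Assign each feature channel to the same stream as the anchor
    channel when its temporal envelope correlates strongly with it.
    `features` has shape (n_channels, n_frames)."""
    corr = np.corrcoef(features)                  # pairwise envelope correlations
    return (corr[anchor] > threshold).astype(int) # 1 = anchor's stream

# Toy demo: channels 0-3 share one modulation pattern, channels 4-7 another.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 400)
env_a = np.abs(np.sin(4 * np.pi * t))   # "source A" envelope
env_b = np.abs(np.cos(7 * np.pi * t))   # "source B" envelope
feats = np.vstack(
    [env_a + 0.05 * rng.standard_normal(400) for _ in range(4)]
    + [env_b + 0.05 * rng.standard_normal(400) for _ in range(4)])
labels = coherence_grouping(feats)      # channels 0-3 grouped with the anchor
```

A real system would apply such a grouping to time-frequency masks rather than hard channel labels, but the coherence computation itself is the same kind of envelope correlation.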

Relevance:

20.00%

Publisher:

Abstract:

Drowsy driving impairs motorists' ability to operate vehicles safely, endangering both the drivers and other people on the road. The purpose of this project is to find the most effective wearable device for detecting drowsiness. Existing research has demonstrated several options for drowsiness detection, such as electroencephalogram (EEG) brain-wave measurement, eye tracking, head motions, and lane deviations. However, there are no detailed trade-off analyses of the cost, accuracy, detection time, and ergonomics of these methods. We chose two different EEG headsets: the NeuroSky Mindwave Mobile (single-electrode) and the Emotiv EPOC (14-electrode). We also tested a camera and a gyroscope-accelerometer device. We can successfully determine drowsiness after five minutes of training using both single- and multi-electrode EEGs. Devices were evaluated using the following criteria: time needed to achieve an accurate reading, accuracy of prediction, rate of false positives versus false negatives, and ergonomics and portability. This research will help improve detection devices and reduce the number of future accidents due to drowsy driving.
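A common single-channel analysis that a headset like the Mindwave could feed is the ratio of slow-wave (theta) to fast-wave (beta) power, which rises as alertness drops. The sketch below is illustrative only; the band edges, sampling rate, and synthetic signals are conventional choices, not the project's calibrated pipeline:

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean power of signal `x` in the [lo, hi] Hz band, via a plain FFT."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

def drowsiness_index(eeg, fs=256.0):
    """Theta/beta power ratio: a widely used single-electrode drowsiness
    indicator (higher = drowsier).  Band edges are conventional EEG
    bands, not tuned to any particular headset."""
    return band_power(eeg, fs, 4.0, 8.0) / band_power(eeg, fs, 13.0, 30.0)

# Synthetic check: an "alert" trace dominated by 20 Hz beta activity
# versus a "drowsy" trace dominated by 6 Hz theta activity.
fs = 256.0
t = np.arange(1024) / fs
alert = 0.2 * np.sin(2 * np.pi * 6 * t) + 1.0 * np.sin(2 * np.pi * 20 * t)
drowsy = 1.0 * np.sin(2 * np.pi * 6 * t) + 0.2 * np.sin(2 * np.pi * 20 * t)
```

In practice a windowed estimate (e.g. Welch's method) and a per-user calibration period would replace the single raw FFT and fixed bands.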

Relevance:

20.00%

Publisher:

Abstract:

Current trends in speech-language pathology focus on early intervention as the preferred tool for promoting the best possible outcomes in children with language disorders. Neuroimaging techniques are being studied as promising tools for flagging at-risk infants. In this study, the auditory brainstem response (ABR) to the syllables /ba/ and /ga/ was examined in 41 infants between 3 and 12 months of age as a possible tool to predict language development in toddlerhood. The MacArthur-Bates Communicative Development Inventory (MCDI) was used to assess language development at 18 months of age. The current study compared the periodicity of the responses to the stop consonants and the phase differences between /ba/ and /ga/ in both at-risk and low-risk groups. The study also examined whether there are correlations between the ABR measures (periodicity and phase differentiation) and language development. The study found that these measures predict language development at 18 months.
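The phase comparison underlying such measures can be illustrated with a few lines of spectral analysis. This is a toy sketch: the sampling rate, analysis band, and waveforms are invented, and the study's actual cross-phase analysis of /ba/ and /ga/ responses is considerably more involved:

```python
import numpy as np

def band_phase_difference(resp_ba, resp_ga, fs, lo=400.0, hi=700.0):
    """Phase lead (radians) of the /ga/ response over the /ba/ response
    at the dominant spectral bin inside [lo, hi] Hz.  Band edges here
    are illustrative, not the study's actual analysis band."""
    freqs = np.fft.rfftfreq(len(resp_ba), d=1.0 / fs)
    B, G = np.fft.rfft(resp_ba), np.fft.rfft(resp_ga)
    band = np.where((freqs >= lo) & (freqs <= hi))[0]
    k = band[np.argmax(np.abs(B[band]))]    # strongest bin in the band
    return np.angle(G[k] * np.conj(B[k]))

# Toy responses: identical 500 Hz components offset by 0.6 rad.
fs = 8000.0
t = np.arange(800) / fs
ba = np.sin(2 * np.pi * 500 * t)
ga = np.sin(2 * np.pi * 500 * t + 0.6)
```

Averaging such per-bin phase differences across frequency and time windows yields the kind of phase-differentiation measure the abstract describes.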

Relevance:

20.00%

Publisher:

Abstract:

Older adults frequently report that they can hear what they have been told but cannot understand the meaning. This is particularly true in noisy conditions, where the additional challenge of suppressing irrelevant sounds (e.g., a competing talker) adds another layer of difficulty to speech understanding. Hearing aids improve speech perception in quiet, but their success in noisy environments has been modest, suggesting that peripheral hearing loss may not be the only factor in older adults' perceptual difficulties. Recent animal studies have shown that auditory synapses and cells undergo significant age-related changes that could impact the integrity of temporal processing in the central auditory system. Psychoacoustic studies in humans have also shown that hearing loss can explain older adults' reduced performance in quiet relative to younger adults, but these psychoacoustic measurements do not accurately describe auditory deficits in noisy conditions. These results suggest that temporal auditory processing deficits could play an important role in explaining the reduced ability of older adults to process speech in noisy environments. The goals of this dissertation were to understand how age affects neural auditory mechanisms and at which level in the auditory system these changes are particularly relevant for explaining speech-in-noise problems. Specifically, we used non-invasive neuroimaging techniques to tap into the midbrain and the cortex in order to analyze how auditory stimuli are processed in younger (our baseline) and older adults. We also investigated a possible interaction between processing carried out in the midbrain and in the cortex.

Relevance:

20.00%

Publisher:

Abstract:

Every day, humans and animals navigate complex acoustic environments in which multiple sound sources overlap. Somehow, they effortlessly perform acoustic scene analysis and extract relevant signals from background noise. Continuously updating the behavioral relevance of ambient sounds requires representing incoming acoustic information and integrating it with internal representations such as behavioral goals, expectations, and memories of previous sound-meaning associations. Rapid plasticity of auditory representations may contribute to our ability to attend to and focus on relevant sounds. To better understand how auditory representations are transformed in the brain to incorporate behavioral context, we explored task-dependent plasticity in neural responses recorded at four levels of the auditory cortical processing hierarchy of ferrets: the primary auditory cortex (A1), two higher-order auditory areas (dorsal PEG and ventral-anterior PEG), and dorsolateral frontal cortex. In one study we explored the laminar profile of rapid task-related plasticity in A1 and found that plasticity occurred at all depths but was greatest in supragranular layers. This result suggests that rapid task-related plasticity in A1 derives primarily from intracortical modulation of neural selectivity. In two other studies we explored task-dependent plasticity in two higher-order areas of the ferret auditory cortex that may correspond to belt (secondary) and parabelt (tertiary) auditory areas. We found that representations of behaviorally relevant sounds are progressively enhanced during performance of auditory tasks, and that these selective enhancement effects grow larger as one ascends the auditory cortical hierarchy. We also observed neuronal responses to non-auditory, task-related information (reward timing, expectations) in the parabelt area that were very similar to responses previously described in frontal cortex. These results suggest that auditory representations are transformed from the more veridical spectrotemporal encoding of earlier auditory stages to a more abstract representation of a sound's behavioral meaning in higher-order auditory areas and dorsolateral frontal cortex.