997 results for auditory processing


Relevance: 60.00%

Abstract:

While humans can easily segregate and track a speaker's voice in a loud, noisy environment, most modern speech recognition systems still perform poorly in loud background noise. The computational principles behind auditory source segregation in humans are not yet fully understood. In this dissertation, we develop a computational model for source segregation inspired by auditory processing in the brain. To support the key principles behind the computational model, we conduct a series of electroencephalography (EEG) experiments using both simple tone-based stimuli and more natural speech stimuli. Most source segregation algorithms utilize some form of prior information about the target speaker or use more than one simultaneous recording of the noisy speech mixture. Other methods build models of the noise characteristics. Source segregation of simultaneous speech mixtures from a single microphone recording, with no knowledge of the target speaker, remains a challenge. Using the principle of temporal coherence, we develop a novel computational model that exploits the difference in the temporal evolution of features belonging to different sources to perform unsupervised monaural source segregation. While using no prior information about the target speaker, the method can gracefully incorporate knowledge about the target speaker to further enhance the segregation. Through a series of EEG experiments, we collect neurological evidence to support the principle behind the model. Aside from its unusual structure and computational innovations, the proposed model provides testable hypotheses about the physiological mechanisms underlying the remarkable perceptual ability of humans to segregate acoustic sources, and about its psychophysical manifestations in navigating complex sensory environments. Results from the EEG experiments provide further insight into the assumptions behind the model and motivate future single-unit studies that could provide more direct evidence for the principle of temporal coherence.
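
As a rough illustration of the temporal-coherence principle described above (and emphatically not the dissertation's actual model), the following Python sketch groups time-frequency channels whose envelopes co-vary: channels that rise and fall together with an assumed target "anchor" channel are kept, and incoherent channels are masked out. The STFT settings, anchor channel, and 0.5 correlation threshold are all illustrative assumptions; the real model computes coherence over sliding windows at cortical time scales and requires no anchor.

```python
import numpy as np
from scipy.signal import stft, istft

def temporal_coherence_mask(x, fs, anchor_bin, thresh=0.5):
    """Toy temporal-coherence segregation: keep time-frequency channels
    whose magnitude envelopes correlate with an 'anchor' channel assumed
    to belong to the target source; attenuate the rest. All parameters
    are illustrative, not those of the dissertation's model."""
    f, t, Z = stft(x, fs=fs, nperseg=1024)
    env = np.abs(Z)                          # envelope of each frequency channel
    anchor = env[anchor_bin]
    # Correlate every channel's envelope with the anchor's over the clip
    a = (anchor - anchor.mean()) / (anchor.std() + 1e-12)
    e = (env - env.mean(axis=1, keepdims=True)) / (env.std(axis=1, keepdims=True) + 1e-12)
    coherence = (e * a).mean(axis=1)         # one correlation value per channel
    mask = (coherence > thresh)[:, None]     # channels coherent with the anchor
    _, y = istft(Z * mask, fs=fs, nperseg=1024)
    return y
```

A global correlation like this only illustrates the grouping rule; computing it in short sliding windows is what lets coherence track sources whose feature sets change over time.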

Relevance: 40.00%

Abstract:

University of Magdeburg, Faculty of Natural Sciences, doctoral dissertation, 2009

Relevance: 40.00%

Abstract:

Spatial auditory deficits frequently occur after a hemispheric lesion; a previous case report suggested that the explicit ability to recognize sound positions, as in sound localization, can be impaired while the implicit use of auditory cues for the recognition of sound objects in a noisy environment remains preserved. By systematically testing patients with a first-ever hemispheric lesion, we showed that (1) the explicit and/or implicit use of auditory cues can be disturbed; (2) the dissociation between impaired explicit use of auditory cues and preserved implicit use of those cues is fairly frequent; and (3) different types of sound localization deficits can be associated with preserved implicit use of those auditory cues. Conceptually, the dissociation between the explicit and implicit use of these auditory cues may reflect the dual-pathway dichotomy of the auditory system. Our results argue for a systematic assessment of spatial auditory functions in the clinical setting, especially when adaptation to a sound environment is at stake. Moreover, systematic studies are needed to relate impairments in the explicit versus implicit use of these auditory cues to difficulties in carrying out activities of daily living, in order to develop appropriate rehabilitation strategies and to determine to what extent the explicit and implicit use of spatial cues can be retrained after brain damage.

Relevance: 40.00%

Abstract:

The processing of biological motion is a critical, everyday task performed with remarkable efficiency by human sensory systems. Interest in this ability has focused to a large extent on biological motion processing in the visual modality (see, for example, Cutting, J. E., Moore, C., & Morrison, R. (1988). Masking the motions of human gait. Perception and Psychophysics, 44(4), 339-347). In naturalistic settings, however, biological motion is often defined by input to more than one sensory modality. For this reason, in a series of experiments we investigate behavioural correlates of multisensory, in particular audiovisual, integration in the processing of biological motion cues. More specifically, using a new psychophysical paradigm we investigate the effect of suprathreshold auditory motion on the perception of visually defined biological motion. Unlike data from previous studies of audiovisual integration in linear motion processing [Meyer, G. F., & Wuerger, S. M. (2001). Cross-modal integration of auditory and visual motion signals. Neuroreport, 12(11), 2557-2560; Wuerger, S. M., Hofbauer, M., & Meyer, G. F. (2003). The integration of auditory and motion signals at threshold. Perception and Psychophysics, 65(8), 1188-1196; Alais, D., & Burr, D. (2004). No direction-specific bimodal facilitation for audiovisual motion detection. Cognitive Brain Research, 19, 185-194], we report the existence of direction-selective effects: relative to control (stationary) auditory conditions, auditory motion in the same direction as the visually defined biological motion target increased its detectability, whereas auditory motion in the opposite direction had the inverse effect. Our data suggest these effects do not arise through general shifts in visuo-spatial attention, but are instead a consequence of motion-sensitive, direction-tuned integration mechanisms that are, if not unique to biological visual motion, at least not common to all types of visual motion. Based on these data and on evidence from neurophysiological and neuroimaging studies, we discuss the neural mechanisms likely to underlie this effect.
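
For readers unfamiliar with auditory motion stimuli of the kind used in such paradigms, here is a minimal, hypothetical sketch of one standard way to synthesize suprathreshold auditory motion: broadband noise whose interaural level difference sweeps over time. The duration, sample rate, and panning law are assumptions for illustration, not the study's parameters.

```python
import numpy as np

def moving_noise(duration=1.0, fs=44100, direction=+1):
    """Toy stereo stimulus: broadband noise whose interaural level
    difference sweeps over time, heard as left-to-right motion
    (direction=+1) or right-to-left (direction=-1)."""
    n = int(duration * fs)
    noise = np.random.randn(n)
    pan = np.linspace(0.0, 1.0, n) if direction > 0 else np.linspace(1.0, 0.0, n)
    left = noise * np.cos(pan * np.pi / 2)   # equal-power panning law
    right = noise * np.sin(pan * np.pi / 2)
    return np.stack([left, right], axis=1)

# Congruent trial: auditory motion in the same direction as the
# visually defined biological motion target (e.g., a point-light walker)
stim = moving_noise(direction=+1)
```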

Relevance: 40.00%

Abstract:

Recent evidence suggests that the human auditory system is organized, like the visual system, into a ventral 'what' pathway, devoted to identifying objects, and a dorsal 'where' pathway, devoted to the localization of objects in space [1]. Several brain regions have been identified in these two pathways, but until now little is known about the temporal dynamics of these regions. We investigated this issue using 128-channel auditory evoked potentials (AEPs). Stimuli were stationary sounds created by varying interaural time differences and real recorded environmental sounds. Stimuli of each condition (localization, recognition) were presented through earphones in a blocked design, while subjects determined their position or meaning, respectively. AEPs were analyzed in terms of their topographical scalp potential distributions (segmentation maps) and underlying neuronal generators (source estimation) [2]. Fourteen scalp potential distributions (maps) best explained the entire data set. Ten maps were nonspecific (associated with auditory stimulation in general), two were specific for sound localization, and two were specific for sound recognition (P-values ranging from 0.02 to 0.045). Condition-specific maps appeared at two distinct time periods: ~200 ms and ~375–550 ms post-stimulus. The brain sources associated with the maps specific for sound localization were mainly situated in the inferior frontal cortices, confirming previous findings [3]. The sources associated with sound recognition were predominantly located in the temporal cortices, with weaker activation in the frontal cortex. The data show that sound localization and sound recognition engage different brain networks that are apparent at two distinct time periods.

References
1. Maeder et al., NeuroImage, 2001.
2. Michel et al., Brain Research Reviews, 2001.
3. Ducommun et al., NeuroImage, 2002.
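
As an aside for readers unfamiliar with ITD-based stimuli, the sketch below shows one conventional way to lateralize a sound by varying the interaural time difference, as in the localization condition above. The 500-µs ITD, noise carrier, and sample rate are illustrative assumptions, not the study's stimulus parameters.

```python
import numpy as np

def itd_lateralized_noise(itd_us=500.0, duration=0.5, fs=48000):
    """Toy lateralized stimulus: the same noise delivered to both ears,
    with the left channel lagging by an interaural time difference.
    The ear that receives the sound earlier (here the right) is the
    side toward which the stationary sound is perceived."""
    n = int(duration * fs)
    delay = int(round(itd_us * 1e-6 * fs))   # ITD converted to samples
    noise = np.random.randn(n + delay)
    right = noise[delay:delay + n]           # leading ear
    left = noise[:n]                         # lagging ear
    return np.stack([left, right], axis=1)
```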

Relevance: 40.00%

Abstract:

The Retrieval-Induced Forgetting (RIF) paradigm includes three phases: (a) study/encoding of category exemplars, (b) practicing retrieval of a subset of those exemplars, and (c) recall of all exemplars. At the final recall phase, recall of items that belong to the same categories as the practiced items, but that did not themselves undergo retrieval practice, is impaired. The received view is that this is because retrieval of target category exemplars (e.g., ‘Tiger’ in the category Four-legged animal) requires inhibition of non-target exemplars (e.g., ‘Dog’ and ‘Lion’) that compete for retrieval. Here, we used the RIF paradigm to investigate whether ignoring auditory items during the retrieval-practice phase modulates this inhibitory process. In two experiments, RIF was present when retrieval practice was conducted in quiet and when it was conducted in the presence of spoken words belonging to a category other than that of the retrieval-practice targets. In contrast, RIF was abolished when words that were either identical to the retrieval-practice words or only semantically related to them were presented as background speech. The results suggest that the act of ignoring speech can reduce inhibition of the non-practiced category exemplars, thereby eliminating RIF, but only when the spoken words are competitors for retrieval (i.e., belong to the same semantic category as the to-be-retrieved items).
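
The logic of the paradigm can be summarized in a few lines of code. The sketch below scores a RIF experiment using the standard cell labels from this literature (Rp+, Rp-, Nrp); the recall proportions in the example are hypothetical, not data from these experiments.

```python
def rif_effect(recall):
    """Score a retrieval-induced forgetting experiment from final-recall
    proportions. recall is a dict with the standard cell labels:
      'Rp+' : practiced items from practiced categories
      'Rp-' : unpracticed items from practiced categories
      'Nrp' : items from baseline (unpracticed) categories
    RIF is present when Rp- recall falls below the Nrp baseline."""
    rif = recall['Nrp'] - recall['Rp-']        # the forgetting effect
    benefit = recall['Rp+'] - recall['Nrp']    # retrieval-practice benefit
    return {'rif': rif, 'practice_benefit': benefit}

# Hypothetical cell means showing RIF (Rp- below the Nrp baseline)
print(rif_effect({'Rp+': 0.74, 'Rp-': 0.38, 'Nrp': 0.49}))
```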

Relevance: 40.00%

Abstract:

ERPs were elicited to (1) words, (2) pseudowords derived from these words, and (3) nonwords with no lexical neighbors, in a task involving listening to immediately repeated auditory stimuli. There was a significant early (P200) effect of phonotactic probability in the first auditory presentation, which discriminated words and pseudowords from nonwords, and a significant, somewhat later (N400) effect of lexicality, which discriminated words from pseudowords and nonwords. There was no reliable effect of lexicality in the ERPs to the second auditory presentation. We conclude that early sublexical phonological processing differed according to the phonotactic probability of the stimuli, and that lexically based redintegration occurred for words but not for pseudowords or nonwords. Thus, in online word recognition and immediate retrieval, phonological and/or sublexical processing plays a more important role than lexical-level redintegration.
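
Phonotactic probability, the variable behind the early P200 effect, is conventionally computed from positional segment frequencies. The sketch below illustrates one common version of that computation; the phoneme codes and frequency table are made-up stand-ins for real corpus counts, and the abstract does not specify which measure the authors used.

```python
def phonotactic_probability(phonemes, positional_freq):
    """Toy positional phonotactic probability: average, over positions,
    of how often each phoneme occurs in that position in the lexicon.
    positional_freq maps (phoneme, position) -> relative frequency."""
    probs = [positional_freq.get((p, i), 0.0) for i, p in enumerate(phonemes)]
    return sum(probs) / len(probs)

# Hypothetical frequencies: a high-probability vs. a low-probability string
freq = {('k', 0): 0.05, ('ae', 1): 0.08, ('t', 2): 0.06, ('f', 2): 0.01}
print(phonotactic_probability(['k', 'ae', 't'], freq))   # higher probability
print(phonotactic_probability(['k', 'ae', 'f'], freq))   # lower probability
```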

Relevance: 40.00%

Abstract:

Magnetoencephalographic responses recorded from auditory cortex, evoked by brief and rapidly successive stimuli, differed between adults with poor versus good reading abilities in four important ways. First, the response amplitude evoked by short-duration acoustic stimuli was stronger in the post-stimulus time range of 150–200 ms in poor readers than in normal readers. Second, response amplitudes to rapidly successive, brief stimuli that were identical or that differed significantly in frequency were substantially weaker in poor readers than in controls for interstimulus intervals of 100 or 200 ms, but not for an interstimulus interval of 500 ms. Third, this neurological deficit closely paralleled subjects’ ability to distinguish between, and to reconstruct the order of presentation of, those stimulus sequences. Fourth, the average distributed response coherence evoked by rapidly successive stimuli was significantly weaker in the β- and γ-band frequency ranges (20–60 Hz) in poor readers compared with controls. These results provide direct electrophysiological evidence supporting the hypothesis that reading disabilities are correlated with abnormal neural representation of brief and rapidly successive sensory inputs, manifested in this study at the entry level of the cortical auditory/aural speech representational system(s).
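
Band-limited response coherence can be quantified in several ways; the sketch below shows one common measure, inter-trial phase coherence in the 20–60 Hz range, purely as an illustration. It is an assumption that this resembles the study's "distributed response coherence" measure, which may have been computed differently.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_itpc(trials, fs, band=(20.0, 60.0)):
    """Toy inter-trial phase coherence in the beta/gamma range: band-pass
    each trial, extract instantaneous phase via the Hilbert transform,
    and measure how tightly phases align across trials (1 = perfectly
    reproducible response, 0 = random phases)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype='band')
    phases = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))
    itpc = np.abs(np.exp(1j * phases).mean(axis=0))   # across trials, per sample
    return itpc.mean()                                # average over time

# trials: array of shape (n_trials, n_samples), e.g. epochs around stimulus onset
```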