995 results for Auditory Frequency Discrimination


Relevance: 100.00%

Abstract:

This paper reviews a study that examined whether loss of speech discrimination is related to age in patients whose audiograms show steep high-frequency losses.

Relevance: 100.00%

Abstract:

This paper reviews a study that evaluated the usefulness of a laboratory approach to auditory training with hearing-impaired children.

Relevance: 100.00%

Abstract:

Action representations can interact with object recognition processes. For example, so-called mirror neurons respond both when performing an action and when seeing or hearing such actions. Investigations of auditory object processing have largely focused on categorical discrimination, which begins within the initial 100 ms post-stimulus onset and subsequently engages distinct cortical networks. Whether action representations themselves contribute to auditory object recognition, and the precise kinds of actions recruiting the auditory-visual mirror neuron system, remain poorly understood. We applied electrical neuroimaging analyses to auditory evoked potentials (AEPs) in response to sounds of man-made objects that were further subdivided between sounds conveying a socio-functional context and typically cuing a responsive action by the listener (e.g. a ringing telephone) and those that are not linked to such a context and do not typically elicit responsive actions (e.g. notes on a piano). This distinction was validated psychophysically by a separate cohort of listeners. Beginning at approximately 300 ms post-stimulus onset, responses to such context-related sounds differed significantly from responses to context-free sounds, both in the strength and in the topography of the electric field. This latency is >200 ms subsequent to general categorical discrimination. Additionally, such topographic differences indicate that sounds of different action sub-types engage distinct configurations of intracranial generators. Statistical analysis of source estimations identified differential activity within premotor and inferior (pre)frontal regions (Brodmann's areas (BA) 6, BA8, and BA45/46/47) in response to sounds of actions typically cuing a responsive action. We discuss our results in terms of a spatio-temporal model of auditory object processing and the interplay between semantic and action representations.
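
The "strength" vs. "topography" comparison of the electric field described above is standard in electrical neuroimaging: field strength is conventionally quantified as Global Field Power (GFP), the spatial standard deviation of the scalp potential at each time point. The following is only a minimal sketch of that computation (the array shapes in the comments are illustrative assumptions, not values from the study):

```python
import numpy as np

def global_field_power(eeg):
    """Global Field Power: the spatial standard deviation of the
    average-referenced scalp potential at each time point.
    eeg: (n_channels, n_samples) array of voltages."""
    referenced = eeg - eeg.mean(axis=0, keepdims=True)  # average reference
    return np.sqrt((referenced ** 2).mean(axis=0))      # one value per sample
```

Topographic differences, in contrast, are typically assessed on GFP-normalized maps, so that differences in map shape can be tested independently of differences in response strength.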

Relevance: 100.00%

Abstract:

BACKGROUND: An auditory perceptual learning paradigm was used to investigate whether implicit memories are formed during general anesthesia. METHODS: Eighty-seven patients who had an American Society of Anesthesiologists physical status of I-III and were scheduled to undergo elective surgery with general anesthesia were randomly assigned to one of two groups. One group received auditory stimulation during surgery, whereas the other did not. The auditory stimulation consisted of pure tones presented via headphones. General anesthesia was induced with thiopental and maintained with a mixture of fentanyl and sevoflurane; the Bispectral Index was maintained between 40 and 50 during surgery. To assess learning, patients performed an auditory frequency discrimination task after surgery, and comparisons were made between the groups. RESULTS: There was no difference in the amount of learning between the two groups (mean ± SD improvement: stimulated patients 9.2 ± 11.3 Hz, controls 9.4 ± 14.1 Hz). There was also no difference in initial thresholds (mean ± SD: stimulated patients 31.1 ± 33.4 Hz, controls 28.4 ± 34.2 Hz). These results suggest that perceptual learning was not induced during anesthesia. No correlation was found between the Bispectral Index and the initial level of performance (Pearson r = -0.09, P = 0.59). CONCLUSION: Perceptual learning was not induced by repetitive auditory stimulation during anesthesia. This result may indicate that perceptual learning requires top-down processing, which is suppressed by the anesthetic.
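
Frequency discrimination thresholds such as those reported above are commonly measured with an adaptive staircase. The abstract does not specify the exact procedure, so the following is only a sketch of a generic 2-down/1-up staircase (which converges near 70.7% correct), with a crude simulated listener standing in for a real participant:

```python
import random

def two_down_one_up(start_delta=50.0, step=0.8, n_trials=60, true_threshold=10.0):
    """Generic 2-down/1-up adaptive staircase for frequency
    discrimination (delta = Hz above a fixed standard tone).
    `true_threshold` parameterizes a crude simulated listener."""
    delta = start_delta
    streak = 0             # consecutive correct responses
    going_down = True      # current direction of the track
    reversals = []
    for _ in range(n_trials):
        # Simulated listener: certain when delta is far above threshold,
        # approaching chance (50%) as delta shrinks.
        p_correct = 0.5 + 0.5 * min(1.0, delta / (2 * true_threshold))
        if random.random() < p_correct:
            streak += 1
            if streak == 2:               # two correct in a row -> harder
                streak = 0
                if not going_down:
                    reversals.append(delta)
                    going_down = True
                delta *= step
        else:
            streak = 0                    # any error -> easier
            if going_down:
                reversals.append(delta)
                going_down = False
            delta /= step
    tail = reversals[-6:] or [delta]      # guard against no reversals
    return sum(tail) / len(tail)          # threshold estimate in Hz
```

The threshold estimate is taken as the mean of the last few reversal points, a common convention for adaptive tracks.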

Relevance: 100.00%

Abstract:

The cochlear implant has become an important resource for countering deafness, and it has been shown that early or late auditory deprivation affects the development of the auditory and visual systems. The goal of the studies presented in this thesis is to evaluate the developmental impact of auditory deprivation on the auditory and visual systems. First, a study of development in a normal-hearing population showed that the auditory and visual systems develop at distinct rates and reach their respective maturity at different ages. These findings suggest that the mechanisms underlying the two systems are different and that their development is independent. Also, as observed with behavioral and electrophysiological measures, auditory frequency discrimination in cochlear implant users is impaired and correlates with speech perception performance. These two studies suggest that, following auditory deprivation, auditory processing differs from one hearing-impaired person to another, and that these differences affect low-level processes, as suggested by the disparity in frequency discrimination performance. The last study shows that auditory deprivation also affects the development of the visual modality, as indicated by reduced visual discrimination abilities observed in hearing-impaired individuals. This finding supports the hypothesis that normal development of each sense is required for optimal development of the other senses. Overall, the results presented in this thesis suggest that the auditory and visual systems develop in distinct ways yet remain interrelated: auditory deprivation affects not only the development of auditory abilities but also that of visual abilities, suggesting an interdependence between the two systems.

Relevance: 100.00%

Abstract:

Background: The results from previous studies have indicated that a pre-attentive component of the event-related potential (ERP), the mismatch negativity (MMN), may be an objective measure of the automatic auditory processing of phonemes and words. Aims: This article reviews the relationship between the MMN data and psycholinguistic models of spoken word processing, in order to determine whether the MMN may be used to objectively pinpoint spoken word processing deficits in individuals with aphasia. Main Contribution: This article outlines the ways in which the MMN data support psycholinguistic models currently used in the clinical management of aphasic individuals. Furthermore, the cell assembly model of the neurophysiological mechanisms underlying spoken word processing is discussed in relation to the MMN and psycholinguistic models. Conclusions: The MMN data support current theoretical psycholinguistic and neurophysiological models of spoken word processing. Future MMN studies that include normal and aphasic populations will further elucidate the role that the MMN may play in the clinical management of aphasic individuals.
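
Concretely, the MMN is a difference wave: the average ERP to frequent "standard" stimuli subtracted from the average ERP to rare "deviants", typically peaking around 100-250 ms after stimulus onset. Below is a minimal sketch with synthetic single-channel epochs; the sampling rate, epoch length, and effect latency are illustrative assumptions, not values from the studies reviewed:

```python
import numpy as np

def mismatch_negativity(standard_epochs, deviant_epochs, sfreq=1000.0):
    """Compute the MMN difference wave (deviant minus standard) and
    locate its most negative peak in the 100-250 ms window.
    Epochs: (n_trials, n_samples) arrays, time zero at stimulus onset."""
    diff = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
    lo, hi = int(0.100 * sfreq), int(0.250 * sfreq)
    peak_idx = lo + np.argmin(diff[lo:hi])
    return diff, peak_idx / sfreq       # difference wave, peak latency (s)

# Synthetic data: deviants carry an extra negativity around 160 ms.
rng = np.random.default_rng(0)
t = np.arange(500) / 1000.0             # 0-500 ms at 1 kHz
std = rng.normal(0, 0.5, (200, 500))
dev = rng.normal(0, 0.5, (100, 500)) - 3.0 * np.exp(-((t - 0.16) / 0.03) ** 2)
diff, latency = mismatch_negativity(std, dev)
```

Because the subtraction cancels activity common to both stimulus types, a reliable negativity in the difference wave can be read as an automatic, pre-attentive change-detection response.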

Relevance: 100.00%

Abstract:

Background and aim of the study: Whether implicit memories are formed during general anaesthesia is still debated. Perceptual learning is the ability to learn to perceive. In this study, an auditory perceptual learning paradigm using frequency discrimination was employed to investigate implicit memory. It was hypothesized that auditory stimulation would successfully induce perceptual learning, so that initial thresholds on the postoperative frequency discrimination task would be lower in the stimulated group (group S) than in the control group (group C). Material and method: Eighty-seven ASA I-III patients undergoing visceral or orthopaedic surgery under general anaesthesia lasting more than 60 minutes were recruited. The anaesthesia procedure was standardized (including BIS monitoring). Group S received auditory stimulation (2000 pure tones applied over 45 minutes) during surgery. Twenty-four hours after the operation, both groups performed ten blocks of the frequency discrimination task. The mean threshold over the first three blocks (T1) was compared between groups. Results: Mean age and BIS values in groups S and C were, respectively, 40 ± 11 vs 42 ± 11 years (p = 0.49) and 42 ± 6 vs 41 ± 8 (p = 0.87). T1 was 31 ± 33 Hz in group S vs 28 ± 34 Hz in group C (p = 0.72). Conclusion: In our study, no implicit memory during general anaesthesia was demonstrated. This may be explained by anaesthetic modulation of the auditory evoked potentials, or by an insufficient duration of repetitive stimulation to induce perceptual learning.

Relevance: 100.00%

Abstract:

Multisensory memory traces established via single-trial exposures can impact subsequent visual object recognition. This impact appears to depend on the meaningfulness of the initial multisensory pairing, implying that multisensory exposures establish distinct object representations that are accessible during later unisensory processing. Multisensory contexts may be particularly effective in influencing auditory discrimination, given the purportedly inferior recognition memory in this sensory modality. Whether this effect generalizes, and whether it is equivalent when memory discrimination is performed in the visual vs. auditory modality, were the focus of this study. First, we demonstrate that visual object discrimination is affected by the context of prior multisensory encounters, replicating and extending previous findings by controlling for the probability of multisensory contexts during initial as well as repeated object presentations. Second, we provide the first evidence that single-trial multisensory memories impact subsequent auditory object discrimination. Auditory object discrimination was enhanced when initial presentations entailed semantically congruent multisensory pairs and was impaired after semantically incongruent multisensory encounters, compared to sounds that had been encountered only in a unisensory manner. Third, the impact of single-trial multisensory memories upon unisensory object discrimination was greater when the task was performed in the auditory vs. visual modality. Fourth, there was no evidence for correlation between effects of past multisensory experiences on visual and auditory processing, suggestive of largely independent object processing mechanisms between modalities. We discuss these findings in terms of the conceptual short term memory (CSTM) model and predictive coding. Our results suggest differential recruitment and modulation of conceptual memory networks according to the sensory task at hand.

Relevance: 100.00%

Abstract:

Objective: This work investigates the nature of the comprehension impairment in Wernicke’s aphasia, by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. Wernicke’s aphasia, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. Methods: We examined analysis of basic acoustic stimuli in Wernicke’s aphasia participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in “moving ripple” stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Results: Participants with Wernicke’s aphasia showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both frequency and dynamic modulation detection correlated significantly with auditory comprehension abilities in the Wernicke’s aphasia participants. 
Conclusion: These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in Wernicke's aphasia, which may contribute causally to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing.
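
The FM detection task described above contrasts a steady pure tone with a tone whose frequency is sinusoidally modulated. A sketch of how such stimuli can be synthesized follows; the carrier frequency, modulation rate, and depth are illustrative values, not the study's parameters:

```python
import numpy as np

def fm_tone(duration=0.5, sfreq=44100, carrier=1000.0,
            mod_rate=4.0, mod_depth_hz=0.0):
    """Sinusoidally frequency-modulated tone. With mod_depth_hz=0
    this reduces to the unmodulated pure-tone standard."""
    t = np.arange(int(duration * sfreq)) / sfreq
    # Instantaneous phase = 2*pi * integral of instantaneous frequency,
    # so the cosine modulator integrates to a sine term in the phase.
    phase = 2 * np.pi * carrier * t + (mod_depth_hz / mod_rate) * np.sin(
        2 * np.pi * mod_rate * t)
    return np.sin(phase)

standard = fm_tone(mod_depth_hz=0.0)    # steady 1 kHz tone
target = fm_tone(mod_depth_hz=20.0)     # 1 kHz tone, 4 Hz FM, ±20 Hz depth
```

Because instantaneous frequency is the derivative of phase, the modulator is folded into the phase term analytically rather than multiplied onto the carrier; an adaptive procedure would then track the smallest `mod_depth_hz` the listener can detect.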

Relevance: 100.00%

Abstract:

This study investigated whether there are differences in the speech-evoked Auditory Brainstem Response among children with Typical Development (TD), (Central) Auditory Processing Disorder ((C)APD), and Language Impairment (LI). The speech-evoked Auditory Brainstem Response was tested in 57 children (ages 6-12). The children were placed into three groups: TD (n = 18), (C)APD (n = 18) and LI (n = 21). Speech-evoked ABRs were elicited using the five-formant syllable /da/. Three dimensions were defined for analysis: timing, harmonics, and pitch. A comparative analysis of the responses between the typically developing children and the children with (C)APD and LI revealed abnormal encoding of the speech acoustic features that are characteristic of speech perception in children with (C)APD and LI, although the two groups differed in their abnormalities. While the children with (C)APD might have had greater difficulty distinguishing stimuli based on timing cues, the children with LI had the additional difficulty of distinguishing speech harmonics, which are important to the identification of speech sounds. These data suggest that an inefficient representation of crucial components of speech sounds may contribute to the difficulties with language processing found in children with LI. Furthermore, these findings may indicate that the neural processes mediated by the auditory brainstem differ among children with auditory processing and speech-language disorders. © 2012 Elsevier B.V. All rights reserved.
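
The "harmonics" and "pitch" dimensions of the speech-evoked response are typically quantified from the spectrum of the sustained portion of the brainstem response: spectral amplitude at the fundamental frequency (F0) indexes pitch encoding, and amplitudes at integer multiples of F0 index harmonic encoding. The following is only a sketch of that spectral measurement; the 100 Hz F0 and analysis parameters are illustrative assumptions, not the study's values:

```python
import numpy as np

def harmonic_amplitudes(response, sfreq, f0=100.0, n_harmonics=5):
    """Spectral amplitude at f0 and its first harmonics, taken from
    the FFT of a sustained-response window (1-D array)."""
    spectrum = np.abs(np.fft.rfft(response)) / len(response)
    freqs = np.fft.rfftfreq(len(response), d=1.0 / sfreq)
    amps = []
    for k in range(1, n_harmonics + 1):
        idx = np.argmin(np.abs(freqs - k * f0))  # nearest FFT bin
        amps.append(spectrum[idx])
    return amps
```

Reduced amplitude at the harmonic bins, relative to typically developing peers, is the kind of measure that would reflect the weaker harmonic encoding reported for the LI group.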

Relevance: 90.00%

Abstract:

Abstract: Auditory spatial functions are of crucial importance in everyday life. Determining the origin of sound sources in space plays a key role in a variety of tasks, including orienting attention and disentangling the complex acoustic patterns that reach our ears in noisy environments. Following brain damage, auditory spatial processing can be disrupted, resulting in severe handicaps. Complaints of patients with sound localization deficits include the inability to locate their crying child or being overloaded by sounds in crowded public places. Yet the brain has a large capacity for reorganization following damage and/or learning. This phenomenon, referred to as plasticity, is believed to underlie post-lesional functional recovery as well as learning-induced improvement. The aim of this thesis was to investigate the organization and plasticity of different aspects of auditory spatial functions. Overall, we report the outcomes of three studies. In the study entitled "Learning-induced plasticity in auditory spatial representations" (Spierer et al., 2007b), we focused on the neurophysiological and behavioral changes induced by auditory spatial training in healthy subjects. We found that relatively brief auditory spatial discrimination training improves performance and modifies the cortical representation of the trained sound locations, suggesting that cortical auditory representations of space are dynamic and subject to rapid reorganization. In the same study, we tested the generalization and persistence of training effects over time, as these are two determining factors in the development of neurorehabilitative interventions. In "The path to success in auditory spatial discrimination" (Spierer et al., 2007c), we investigated the neurophysiological correlates of successful spatial discrimination and contributed to the modeling of the anatomo-functional organization of auditory spatial processing in healthy subjects.
We showed that discrimination accuracy depends on superior temporal plane (STP) activity in response to the first sound of a pair of stimuli. Our data support a model wherein refinement of spatial representations occurs within the STP, and interactions with parietal structures allow for transformations into the coordinate frames required for higher-order computations, including absolute localization of sound sources. In "Extinction of auditory stimuli in hemineglect: space versus ear" (Spierer et al., 2007a), we investigated auditory attentional deficits in brain-damaged patients. This work provides insight into the auditory neglect syndrome and its relation to neglect symptoms within the visual modality. Apart from contributing to a basic understanding of the cortical mechanisms underlying auditory spatial functions, the outcomes of these studies also contribute to the development of neurorehabilitation strategies, which are currently being tested in clinical populations.

Relevance: 90.00%

Abstract:

Humans can recognize categories of environmental sounds, including vocalizations produced by humans and animals and the sounds of man-made objects. Most neuroimaging investigations of environmental sound discrimination have studied subjects while consciously perceiving and often explicitly recognizing the stimuli. Consequently, it remains unclear to what extent auditory object processing occurs independently of task demands and consciousness. Studies in animal models have shown that environmental sound discrimination at a neural level persists even in anesthetized preparations, whereas data from anesthetized humans have thus far provided null results. Here, we studied comatose patients as a model of environmental sound discrimination capacities during unconsciousness. We included 19 comatose patients treated with therapeutic hypothermia (TH) during the first 2 days of coma, while recording nineteen-channel electroencephalography (EEG). At the level of each individual patient, we applied a decoding algorithm to quantify the differential EEG responses to human vs. animal vocalizations, as well as to sounds from living sources vs. man-made objects. Discrimination between vocalization types was accurate in 11 patients, and discrimination between sounds from living and man-made sources in 10 patients. At the group level, the results were significant only for the comparison between vocalization types. These results lay the groundwork for disentangling truly preferential activations in response to auditory categories and the contribution of awareness to auditory category discrimination.
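
At its core, the per-patient decoding described above asks whether a classifier can tell the two sound categories apart from single-trial EEG better than chance. The study's actual algorithm is not specified here, so the following is only a sketch using a nearest-class-mean classifier with leave-one-out cross-validation on generic per-trial feature vectors:

```python
import numpy as np

def decode_accuracy(trials_a, trials_b):
    """Leave-one-out single-trial decoding with a nearest-class-mean
    classifier. trials_*: (n_trials, n_features) arrays of EEG features
    (e.g. concatenated channel voltages for one trial)."""
    X = np.vstack([trials_a, trials_b])
    y = np.array([0] * len(trials_a) + [1] * len(trials_b))
    correct = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i            # hold out trial i
        mean0 = X[mask & (y == 0)].mean(axis=0)  # class means without it
        mean1 = X[mask & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - mean1) < np.linalg.norm(X[i] - mean0))
        correct += int(pred == y[i])
    return correct / len(X)
```

Accuracy above an appropriate chance baseline (e.g. a permutation distribution) would then count as evidence of differential responses to the two categories in that patient.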