963 results for Auditory hallucinations.
Abstract:
Modern cochlear implantation technologies allow deaf patients to understand auditory speech; however, the implants deliver only a coarse auditory input, and patients must rely on long-term adaptive processes to achieve coherent percepts. In adults with post-lingual deafness, the greatest progress in speech recovery is observed during the first year after cochlear implantation, but there is large variability in both the level of cochlear implant outcomes and the temporal evolution of recovery. It has been proposed that when profoundly deaf subjects receive a cochlear implant, visual cross-modal reorganization of the brain is deleterious for auditory speech recovery. We tested this hypothesis in post-lingually deaf adults by analysing whether brain activity shortly after implantation correlated with the level of auditory recovery 6 months later. Based on brain activity induced by a speech-processing task, we found strong positive correlations in areas outside the auditory cortex. The highest positive correlations were found in the occipital cortex, involved in visual processing, and in the posterior-temporal cortex, known for audio-visual integration. A further area that correlated positively with auditory speech recovery was localized in the left inferior frontal area, known for speech processing. Our results demonstrate that the functional level of the visual modality is related to the proficiency of auditory recovery. Based on the positive correlation of visual activity with auditory speech recovery, we suggest that the visual modality may facilitate perception of a word's auditory counterpart in communicative situations. The link demonstrated between visual activity and auditory speech perception indicates that visuoauditory synergy is crucial for cross-modal plasticity and for fostering speech-comprehension recovery in adult cochlear-implanted deaf patients.
Learning-induced plasticity in auditory spatial representations revealed by electrical neuroimaging.
Abstract:
Auditory spatial representations are likely encoded at a population level within human auditory cortices. We investigated learning-induced plasticity of spatial discrimination in healthy subjects using auditory-evoked potentials (AEPs) and electrical neuroimaging analyses. Stimuli were 100 ms white-noise bursts lateralized with varying interaural time differences. In three experiments, plasticity was induced with 40 min of discrimination training. During training, accuracy significantly improved from near-chance levels to approximately 75%. Before and after training, AEPs were recorded to stimuli presented passively with a more medial sound lateralization outnumbering a more lateral one (7:1). In experiment 1, the same lateralizations were used for training and AEP sessions. Significant AEP modulations to the different lateralizations were evident only after training, indicative of a learning-induced mismatch negativity (MMN). More precisely, this MMN at 195-250 ms after stimulus onset followed from differences in the AEP topography to each stimulus position, indicative of changes in the underlying brain network. In experiment 2, mirror-symmetric locations were used for training and AEP sessions; no training-related AEP modulations or MMN were observed. In experiment 3, the discrimination of trained plus equidistant untrained separations was tested psychophysically before and 0, 6, 24, and 48 h after training. Learning-induced plasticity lasted <6 h, did not generalize to untrained lateralizations, and was not the simple result of strengthening the representation of the trained lateralizations. Thus, learning-induced plasticity of auditory spatial discrimination relies on spatial comparisons, rather than a spatial anchor or a general comparator. Furthermore, cortical auditory representations of space are dynamic and subject to rapid reorganization.
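As a concrete illustration of the stimuli described above, interaural-time-difference (ITD) lateralization can be sketched as a delay of one channel of a white-noise burst relative to the other. This is a hypothetical reconstruction, not the authors' stimulus code; the sampling rate, random seed, and the 300 µs ITD used in the demo are assumptions.

```python
import numpy as np

def itd_noise_burst(itd_us, fs=44100, dur_ms=100, seed=0):
    """Generate a stereo white-noise burst lateralized by an
    interaural time difference (ITD).

    itd_us > 0 delays the left channel (right-lateralized percept);
    itd_us < 0 does the opposite.
    """
    rng = np.random.default_rng(seed)
    n = int(fs * dur_ms / 1000)
    shift = int(round(abs(itd_us) * 1e-6 * fs))  # ITD in samples
    noise = rng.standard_normal(n + shift)
    lead = noise[shift:]   # channel that leads in time
    lag = noise[:n]        # identical noise, delayed by the ITD
    if itd_us >= 0:
        left, right = lag, lead
    else:
        left, right = lead, lag
    return np.column_stack([left, right])

burst = itd_noise_burst(300)   # 300 µs ITD, lateralized to the right
print(burst.shape)             # (4410, 2): 100 ms of stereo noise at 44.1 kHz
```

Because both channels carry the same noise token, only the interaural delay (not a level difference) cues the lateral position, matching the ITD-only manipulation described above.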
Abstract:
The kitten's auditory cortex (including the first and second auditory fields, AI and AII) is known to send transient axons to either ipsi- or contralateral visual areas 17 and 18. By the end of the first postnatal month the transitory axons, but not their neurons of origin, are eliminated. Here we investigated where these neurons project after the elimination of the transitory axon. Eighteen kittens received early (postnatal day (pd) 2-5) injections of long-lasting retrograde fluorescent tracers in visual areas 17 and 18, and late (pd 35-64) injections of other retrograde fluorescent tracers in either hemisphere, mostly in areas known to receive projections from AI and AII in the adult cat. The middle ectosylvian gyrus was analysed for double-labelled neurons in the region corresponding approximately to AI and AII. Late injections in the hemisphere contralateral to the analysed AI and AII, covering all of the known auditory areas as well as some visual and 'association' areas, did not relabel neurons that had had transient projections to either ipsi- or contralateral visual areas 17-18. Thus, after eliminating their transient juvenile projections to visual areas 17 and 18, AI and AII neurons do not project to the other hemisphere. In contrast, relabelling was obtained with late injections at several locations in the ipsilateral hemisphere; it is expressed below as the percentage of the population labelled by the early injections. Few neurons (0-2.5%) were relabelled by large injections in the caudal part of the posterior ectosylvian gyrus and the adjacent posterior suprasylvian sulcus (areas DP, P, VP). Multiple injections in the middle ectosylvian gyrus relabelled a considerably larger percentage of neurons (13%). Single small injections in the middle ectosylvian gyrus (areas AI, AII), the caudal part of the anterior ectosylvian gyrus and the rostral part of the posterior ectosylvian gyrus relabelled 3.1-7.0% of neurons.
These neurons were generally near (<2.0 mm) the outer border of the late injection sites. Neurons with transient projections to ipsi- or contralateral visual areas 17 and 18 were relabelled in similar proportions by late injections at any given location. Thus, AI or AII neurons which send a transitory axon to ipsi- or contralateral visual areas 17 and 18 are most likely to form short permanent cortical connections. In that respect, they are similar to medial area 17 neurons that form transitory callosal axons and short permanent axons to ipsilateral visual areas 17 and 18.
Abstract:
Safety-seeking behaviours are frequent among people with a diagnosis of schizophrenia who experience auditory verbal hallucinations. These behaviours need to be studied in order to develop interventions that relieve patients.
Abstract:
Multisensory interactions are observed in species from single-cell organisms to humans. Important early work was primarily carried out in the cat superior colliculus, where a set of critical parameters for their occurrence was defined. Primary among these were temporal synchrony and spatial alignment of bisensory inputs. Here, we assessed whether spatial alignment is also a critical parameter for the temporally earliest multisensory interactions observed in lower-level sensory cortices of the human. While multisensory interactions in humans have been shown behaviorally for spatially disparate stimuli (e.g. the ventriloquist effect), it is not clear whether such effects are due to early sensory-level integration or later perceptual-level processing. In the present study, we used psychophysical and electrophysiological indices to show that auditory-somatosensory interactions in humans occur via the same early sensory mechanism both when stimuli are in and out of spatial register. Subjects detected multisensory events more rapidly than unisensory events. At just 50 ms post-stimulus, neural responses to the multisensory 'whole' were greater than the summed responses from the constituent unisensory 'parts'. For all spatial configurations, this effect followed from a modulation of the strength of brain responses, rather than the activation of regions specifically responsive to multisensory pairs. Using the local auto-regressive average source estimation, we localized the initial auditory-somatosensory interactions to auditory association areas contralateral to the side of somatosensory stimulation. Thus, multisensory interactions can occur across wide peripersonal spatial separations remarkably early in sensory processing and in cortical regions traditionally considered unisensory.
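The "whole versus summed parts" comparison used above can be sketched on synthetic data. This is a schematic illustration only: the trial counts, amplitudes, and noise level are invented, and real analyses operate on multichannel evoked potentials rather than a single synthetic trace.

```python
import numpy as np

def superadditivity(ms_resp, aud_resp, som_resp):
    """Difference between the mean multisensory 'whole' and the sum of
    the mean unisensory 'parts' at each time point; positive values
    indicate superadditive interactions.

    Each argument is an array of shape (trials, samples).
    """
    return ms_resp.mean(axis=0) - (aud_resp.mean(axis=0) + som_resp.mean(axis=0))

rng = np.random.default_rng(1)
t = 200  # number of time samples (hypothetical)
aud = rng.normal(0.0, 0.1, (50, t)) + 1.0   # synthetic auditory response
som = rng.normal(0.0, 0.1, (50, t)) + 0.5   # synthetic somatosensory response
ms = rng.normal(0.0, 0.1, (50, t)) + 1.8    # multisensory response > 1.0 + 0.5

diff = superadditivity(ms, aud, som)
print(diff.mean())  # ≈ 0.3 on average for these synthetic data
```

In practice such a difference wave would be evaluated statistically sample by sample, which is how the 50 ms interaction effect above would be identified.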
Abstract:
Discriminating complex sounds relies on multiple stages of differential brain activity. The specific roles of these stages and their links to perception were the focus of the present study. We presented 250 ms duration sounds of living and man-made objects while recording 160-channel electroencephalography (EEG). Subjects categorized each sound as that of a living, man-made or unknown item. We tested whether and when the brain discriminates between sound categories even when this discrimination is not evident behaviorally. We applied a single-trial classifier that identified the voltage topographies and latencies at which brain responses are most discriminative. For sounds that the subjects could not categorize, we could successfully decode the semantic category based on differences in voltage topographies during the 116-174 ms post-stimulus period. Sounds that were correctly categorized as that of a living or man-made item by the same subjects exhibited two periods of differences in voltage topographies at the single-trial level. Subjects exhibited differential activity before the sound ended (starting at 112 ms) and during a separate period at ~270 ms post-stimulus onset. Because each of these periods could be used to reliably decode semantic categories, we interpreted the first as being related to an implicit tuning for sound representations and the second as being linked to perceptual decision-making processes. Collectively, our results show that the brain discriminates environmental sounds at early stages and independently of behavioral proficiency, and that explicit sound categorization requires a subsequent processing stage.
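A single-trial classifier of voltage topographies, of the general kind described above, can be sketched as a nearest-template decoder on amplitude-normalized topographies. This is a simplified stand-in for the authors' method; the channel count, class structure, and noise level below are invented for the demo.

```python
import numpy as np

def fit_templates(X, y):
    """Average voltage topographies per class to form templates.
    X: (trials, channels); y: integer class labels."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def decode(X, classes, templates):
    """Assign each single-trial topography to the class whose template
    it correlates with most strongly; spatial correlation discards
    overall amplitude, as topographic analyses typically do."""
    Xn = (X - X.mean(1, keepdims=True)) / X.std(1, keepdims=True)
    Tn = (templates - templates.mean(1, keepdims=True)) / templates.std(1, keepdims=True)
    corr = Xn @ Tn.T / X.shape[1]
    return classes[corr.argmax(axis=1)]

# Synthetic demo: two classes with distinct 160-channel topographies
rng = np.random.default_rng(2)
topo_a, topo_b = rng.standard_normal(160), rng.standard_normal(160)
X = np.concatenate([topo_a + 0.5 * rng.standard_normal((40, 160)),
                    topo_b + 0.5 * rng.standard_normal((40, 160))])
y = np.repeat([0, 1], 40)
classes, templates = fit_templates(X, y)
acc = (decode(X, classes, templates) == y).mean()
print(acc)  # high accuracy on this easy synthetic example
```

Applying such a decoder sample by sample along the epoch is one way to identify the discriminative latency windows (e.g. 116-174 ms) reported above; a real analysis would also cross-validate across trials.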
Abstract:
Since the early days of functional magnetic resonance imaging (fMRI), retinotopic mapping emerged as a powerful and widely accepted tool, allowing the identification of individual visual cortical fields and furthering the study of visual processing. In contrast, tonotopic mapping in auditory cortex proved more challenging, primarily because of the smaller size of auditory cortical fields. The spatial resolution capabilities of fMRI have since advanced, and recent reports from our labs and several others demonstrate the reliability of tonotopic mapping in human auditory cortex. Here we review the wide range of stimulus procedures and analysis methods that have been used to successfully map tonotopy in human auditory cortex. We point out that recent studies provide a remarkably consistent view of human tonotopic organisation, although the interpretation of the maps continues to vary. In particular, there remains controversy over the exact orientation of the primary gradients with respect to Heschl's gyrus, which leads to different predictions about the location of human A1, R, and surrounding fields. We discuss the development of this debate and argue that the literature is converging towards an interpretation in which the core fields A1 and R fold across the rostral and caudal banks of Heschl's gyrus, with tonotopic gradients laid out in a distinctive V-shaped manner. This suggests an organisation that is largely homologous with that of non-human primates. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
Abstract:
The cortical auditory fields of the two hemispheres are interconnected via the corpus callosum. We have investigated the topographical arrangement of auditory callosal axons in the cat. Following circumscribed biocytin injections in the primary (AI), secondary (AII), anterior (AAF) and posterior (PAF) auditory fields, labelled axons have been found in the posterior two-thirds of the corpus callosum. Callosal axons labelled by small individual cortical injections did not form a tight bundle at the callosal midsagittal plane but spread over as much as one-third of the corpus callosum. Axons originating from different auditory fields were roughly topographically ordered, reflecting to some extent the rostro-caudal position of the field of origin. Axons from AAF crossed on average more rostrally than axons from AI; the latter crossed more rostrally than axons from PAF and AII. Callosal axons originating in a discrete part of the cortex travelled first in a relatively tight bundle to the telo-diencephalic junction and then dispersed progressively. In conclusion, the cat corpus callosum does not contain a sector reserved for auditory axons, nor a strictly topographically ordered auditory pathway. This observation is of relevance to neuropsychological and neuropathological observations in man.
Abstract:
Action representations can interact with object recognition processes. For example, so-called mirror neurons respond both when performing an action and when seeing or hearing such actions. Investigations of auditory object processing have largely focused on categorical discrimination, which begins within the initial 100 ms post-stimulus onset and subsequently engages distinct cortical networks. Whether action representations themselves contribute to auditory object recognition, and precisely which kinds of actions recruit the auditory-visual mirror neuron system, remain poorly understood. We applied electrical neuroimaging analyses to auditory evoked potentials (AEPs) in response to sounds of man-made objects, subdivided between sounds conveying a socio-functional context and typically cuing a responsive action by the listener (e.g. a ringing telephone) and sounds that are not linked to such a context and do not typically elicit responsive actions (e.g. notes on a piano). This distinction was validated psychophysically by a separate cohort of listeners. Beginning at approximately 300 ms, responses to such context-related sounds significantly differed from those to context-free sounds in both the strength and the topography of the electric field. This latency is >200 ms subsequent to general categorical discrimination. Additionally, the topographic differences indicate that sounds of different action sub-types engage distinct configurations of intracranial generators. Statistical analysis of source estimations identified differential activity within premotor and inferior (pre)frontal regions (Brodmann's areas (BA) 6, BA8, and BA45/46/47) in response to sounds of actions typically cuing a responsive action. We discuss our results in terms of a spatio-temporal model of auditory object processing and the interplay between semantic and action representations.
Abstract:
Mismatch negativity (MMN) overlaps with other auditory event-related potential (ERP) components. We examined the ERPs of 50 9- to 11-year-old children for the vowels /i/ and /y/ and equivalent complex tones. The goal was to separate MMN from obligatory ERP components using principal component analysis and an equal-probability control condition. In addition to the deviant-minus-standard contrast, we employed the deviant-minus-control contrast to see whether obligatory processing contributes to MMN in children. When looking for differences in the speech deviant-minus-standard contrast, MMN starts around 112 ms. However, when both contrasts are examined, MMN emerges for speech at 160 ms, whereas for nonspeech MMN is observed at 112 ms regardless of contrast. We argue that this discriminative response to speech stimuli at 112 ms is obligatory in nature rather than reflecting change-detection processing.
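The two contrasts described above (deviant minus standard, and deviant minus control) can be sketched on synthetic difference waves. Everything below is invented for illustration: the amplitudes, latencies, and the assumption that the control response shares an obligatory component with the deviant.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_samples = 60, 300   # hypothetical epoch: 1 sample = 1 ms

def erp(amplitude, onset):
    """Synthetic ERP: a negative deflection of the given amplitude
    beginning at `onset` ms, plus trial-by-trial noise."""
    wave = np.zeros(n_samples)
    wave[onset:onset + 80] = -amplitude
    return wave + rng.normal(0, 0.5, (n_trials, n_samples))

standard = erp(0.0, 0)
control = erp(1.0, 112)   # obligatory response, present in the control too
deviant = erp(2.0, 112)   # obligatory response plus change detection

dev = deviant.mean(0)
mmn_vs_standard = dev - standard.mean(0)
mmn_vs_control = dev - control.mean(0)
# The deviant-minus-standard contrast includes the obligatory response;
# the deviant-minus-control contrast isolates the change-detection part.
print(mmn_vs_standard[130], mmn_vs_control[130])
```

On this toy model, the deviant-minus-standard wave is roughly twice as negative as the deviant-minus-control wave in the response window, mirroring the logic by which the equal-probability control separates obligatory activity from MMN.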
Abstract:
BACKGROUND: An auditory perceptual learning paradigm was used to investigate whether implicit memories are formed during general anesthesia. METHODS: Eighty-seven patients who had an American Society of Anesthesiologists physical status of I-III and were scheduled to undergo elective surgery with general anesthesia were randomly assigned to one of two groups. One group received auditory stimulation during surgery, whereas the other did not. The auditory stimulation consisted of pure tones presented via headphones. The Bispectral Index was maintained between 40 and 50 during surgery. To assess learning, patients performed an auditory frequency discrimination task after surgery, and comparisons were made between the groups. General anesthesia was induced with thiopental and maintained with a mixture of fentanyl and sevoflurane. RESULTS: There was no difference in the amount of learning between the two groups (mean ± SD improvement: stimulated patients, 9.2 ± 11.3 Hz; controls, 9.4 ± 14.1 Hz). There was also no difference in initial thresholds (mean ± SD: stimulated patients, 31.1 ± 33.4 Hz; controls, 28.4 ± 34.2 Hz). These results suggest that perceptual learning was not induced during anesthesia. No correlation between the Bispectral Index and the initial level of performance was found (Pearson r = -0.09, P = 0.59). CONCLUSION: Perceptual learning was not induced by repetitive auditory stimulation during anesthesia. This result may indicate that perceptual learning requires top-down processing, which is suppressed by the anesthetic.
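The Pearson correlation reported above (between Bispectral Index and initial performance) can be computed directly; a generic implementation follows. The example values are made up for the toy check and are not the study's data.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation between two paired samples,
    as used above to relate Bispectral Index values to initial
    discrimination thresholds."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

# Toy check on made-up values (not the study's data):
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0 (perfect correlation)
```

A value near zero, as reported (r = -0.09), indicates no linear relationship between anesthetic depth and baseline task performance.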