996 results for Auditory Perceptual Disorders
Abstract:
Thirty-two pouch-young tammar wallabies were used to identify the generators of the auditory brainstem response (ABR) during development, using simultaneous ABR and focal brainstem recordings. A click response from the auditory nerve root (ANR) in the wallaby was recorded from postnatal day (PND) 101, when no central auditory station was yet functional, and coincided with the ABR, at that stage a simple positive wave. The response of the cochlear nucleus (CN) was detected from PND 110, when the ABR had developed one positive and one negative peak. The dominant component of the focal ANR response, an N1 wave, coincided with the first half of the ABR P wave, and that of the focal CN response, also an N1 wave, coincided with its later two-thirds. In older animals, the ANR response coincided with the ABR's N1 wave, while the CN response coincided with the ABR's P2, N2 and P3 waves, with its contribution to P2 dominant. The protracted development of the marsupial auditory system, which made these correlations possible, makes the tammar wallaby a particularly suitable model. Copyright (C) 2001 S. Karger AG, Basel.
Abstract:
To establish the developmental relationship between the auditory brainstem response (ABR) and the focal inferior colliculus (IC) response, simultaneous ABR and focal brainstem recordings were made in 32 young tammar wallabies in response to acoustic clicks and tone bursts of seven frequencies. The IC of the tammar wallaby undergoes rapid functional development from postnatal day (PND) 114 to 160. The earliest (PND 114) auditory evoked response was recorded from the rostral IC. With development, progressively more caudal parts of the IC became functional until about PND 127, when all parts of the IC were responsive to sound. Along the dorsoventral axis, the duration of the IC response decreased and the peak latency shortened, while the amplitude increased to a maximum at the central IC and then decreased. After PND 160, the best frequency (BF) of the ventral IC was the highest, between 12.5 and 16 kHz; the BF of the dorsal IC was the lowest, between 3.2 and 6.4 kHz; and the BF of the central IC lay between 6.4 and 12.5 kHz. Between PND 114 and 125, the IC response showed no temporal correlation with the ABR. Between PND 140 and 160, only the early components of the responses from the ventral and central IC correlated with the P4 wave of the ABR. After PND 160, responses recorded from different depths of the IC showed a temporal correlation with the ABR. (C) 2001 Published by Elsevier Science B.V.
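The latency and amplitude measures reported in these two developmental studies are read off evoked waveforms by peak picking. Below is a minimal illustrative sketch of that step in Python, using a fabricated click-evoked waveform and arbitrary detection parameters; nothing here reproduces the studies' recordings or criteria.

```python
# Illustrative peak picking on a fabricated evoked waveform (not study data).
import numpy as np
from scipy.signal import find_peaks

FS = 20000                      # assumed sampling rate (Hz)
t = np.arange(0, 0.012, 1 / FS) # 12 ms post-click analysis window

# Toy waveform: two positive peaks (e.g. P1, P2) riding on low-level noise.
wave = (np.exp(-((t - 0.003) / 0.0005) ** 2)
        + 0.6 * np.exp(-((t - 0.006) / 0.0008) ** 2)
        + 0.05 * np.random.default_rng(2).normal(size=t.size))

# Find positive peaks at least 1 ms apart and above an arbitrary threshold.
peaks, props = find_peaks(wave, height=0.3, distance=int(0.001 * FS))
for i, p in enumerate(peaks, start=1):
    print(f"P{i}: latency {t[p] * 1000:.2f} ms, "
          f"amplitude {props['peak_heights'][i - 1]:.2f}")
```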
Abstract:
Compression amplification significantly alters the acoustic speech signal in comparison with linear amplification. The central hypothesis of the present study was that the compression settings of a two-channel aid that best preserved the acoustic properties of speech relative to linear amplification would yield the best perceptual results, and that the settings that most altered those acoustic properties would yield significantly poorer speech perception. On the basis of an initial acoustic analysis of the test stimuli recorded through a hearing aid, two different compression amplification settings were chosen for the perceptual study. Participants were 74 adults with mild to moderate sensorineural hearing impairment. Overall, the speech perception results supported the hypothesis. A further aim of the study was to determine whether variation in participants' speech perception with compression amplification (compared with linear amplification) could be explained by the individual characteristics of age, degree of loss, dynamic range, temporal resolution, and frequency selectivity; however, no significant relationships were found.
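For readers unfamiliar with the signal processing involved, the sketch below contrasts a two-channel (two-band) compressor with plain linear gain. It is a minimal sketch only: the crossover frequency, thresholds, ratios, and makeup gain are assumed values, not the settings of the hearing aid used in the study.

```python
# Minimal two-band compression vs. linear amplification (illustrative values).
import numpy as np
from scipy.signal import butter, sosfilt

FS = 16000  # assumed sample rate (Hz)

def band_split(x, crossover=1500.0):
    """Split the signal into low and high bands at the crossover frequency."""
    sos_lo = butter(4, crossover, btype="low", fs=FS, output="sos")
    sos_hi = butter(4, crossover, btype="high", fs=FS, output="sos")
    return sosfilt(sos_lo, x), sosfilt(sos_hi, x)

def compress(x, threshold_db=-40.0, ratio=3.0, makeup_db=20.0, tau=0.010):
    """Static compression above threshold, driven by a smoothed envelope."""
    alpha = np.exp(-1.0 / (tau * FS))
    env = np.abs(x).astype(float)
    for i in range(1, env.size):            # one-pole envelope follower
        env[i] = max(env[i], alpha * env[i - 1])
    level_db = 20 * np.log10(np.maximum(env, 1e-8))
    over = np.maximum(level_db - threshold_db, 0.0)
    gain_db = makeup_db - over * (1.0 - 1.0 / ratio)  # less gain above threshold
    return x * 10 ** (gain_db / 20.0)

def two_channel_aid(x):
    """Compress each band independently, then recombine."""
    lo, hi = band_split(x)
    return compress(lo) + compress(hi, threshold_db=-45.0, ratio=2.0)

def linear_aid(x, gain_db=20.0):
    """Frequency- and level-independent gain, for comparison."""
    return x * 10 ** (gain_db / 20.0)
```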
Abstract:
Auditory event-related potentials (AERPs) are widely used across today's neuroscience, concerning auditory processing, speech perception, language acquisition, neurodevelopment, attention and cognition in normal aging, gender differences, and developmental, neurologic and psychiatric disorders. However, their translation to clinical practice has remained minimal, mainly owing to the scarce literature on normative data across age, the wide spectrum of results, the variety of auditory stimuli used, and the differing neuropsychological interpretations of AERP components between authors. One of the most prominent AERP components studied in recent decades is N1, which reflects auditory detection and discrimination. N2, in turn, indexes attention allocation and phonological analysis. The simultaneous analysis of N1 and N2 elicited by feasible novelty paradigms, such as the auditory oddball, appears to offer an objective method for assessing central auditory processing. The aim of this systematic review was to bring forward normative values for auditory oddball N1 and N2 components across age. EBSCO, PubMed, Web of Knowledge and Google Scholar were systematically searched for studies that elicited N1 and/or N2 with an auditory oddball paradigm. A total of 2,764 papers published between 1988 and 2013 (the last 25 years) were initially identified, of which 19 came from hand searches and additional references. A final total of 68 studies met the eligibility criteria, with 2,406 participants from control groups for N1 (age range 6.6–85 years; mean 34.42) and 1,507 for N2 (age range 9–85 years; mean 36.13). Polynomial regression analysis revealed that N1 latency decreases with aging at Fz and Cz; N1 amplitude at Cz decreases from childhood to adolescence and stabilizes after 30–40 years, while at Fz the decrement ends by age 60 and amplitude increases markedly thereafter. Regarding N2, latency did not covary with age, but amplitude showed a significant decrement at both Cz and Fz. The results suggest reliable normative values for the Cz and Fz electrode locations; however, changes in brain development and component topography across age should be considered in clinical practice.
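The normative curves in this review come from polynomial regression of component measures on age. A minimal sketch of that approach is shown below, with fabricated data points and an assumed second-order polynomial; it does not reproduce the review's actual values or model orders.

```python
# Polynomial regression of N1 latency on age (fabricated example data).
import numpy as np

age = np.array([7, 12, 18, 25, 35, 45, 55, 65, 75, 85], dtype=float)
n1_latency_ms = np.array([118, 112, 105, 100, 98, 97, 96, 96, 97, 98],
                         dtype=float)

# Fit latency ~ b0 + b1*age + b2*age^2 and wrap it as a callable model.
coeffs = np.polyfit(age, n1_latency_ms, deg=2)
model = np.poly1d(coeffs)

# Normative estimate with a rough 95% band from the residual spread.
query_age = 30.0
resid_sd = np.std(n1_latency_ms - model(age), ddof=coeffs.size)
print(f"Predicted N1 latency at {query_age:.0f} y: "
      f"{model(query_age):.1f} ms (+/- {1.96 * resid_sd:.1f} ms)")
```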
Abstract:
Sound localization relies on the analysis of interaural time and intensity differences, as well as on the attenuation patterns imposed by the outer ear. We investigated the relative contributions of interaural time and intensity difference cues to sound localization by testing 60 subjects: healthy controls as well as 25 with focal left and 25 with focal right hemispheric brain damage. Group and single-case behavioural analyses, as well as anatomo-clinical correlations, confirmed that deficits were more frequent and much more severe after right than left hemispheric lesions, and for the processing of interaural time rather than intensity difference cues. For spatial processing based on interaural time difference cues, different error types were evident in the individual data. Deficits in discriminating between neighbouring positions occurred in both hemispaces after focal right hemispheric brain damage, but were restricted to the contralesional hemispace after focal left hemispheric brain damage. Alloacusis (perceptual shifts across the midline) occurred only after focal right hemispheric brain damage and was associated with minor or severe deficits in position discrimination. For spatial processing based on interaural intensity cues, deficits were less severe in the right than in the left hemispheric brain damage group, and no alloacusis occurred. These results, matched to anatomical data, suggest the existence of a binaural sound localization system predominantly based on interaural time difference cues and primarily supported by the right hemisphere. More generally, our data suggest that two distinct mechanisms contribute to: (i) the precise computation of spatial coordinates, allowing spatial comparison within the contralateral hemispace for the left hemisphere and the whole space for the right hemisphere; and (ii) the building up of global auditory spatial representations in right temporo-parietal cortices.
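The two binaural cues at issue can be extracted from a stereo signal roughly as follows. This is an illustrative sketch, assuming a cross-correlation estimate of the interaural time difference (bounded by the ~0.8 ms physiological range) and an RMS level ratio for the intensity difference; it is not the stimulus-generation or testing procedure used with the patients.

```python
# Extracting ITD (cross-correlation lag) and ILD (RMS ratio) from stereo audio.
import numpy as np

FS = 44100  # assumed sample rate (Hz)

def itd_seconds(left, right, max_lag_s=0.0008):
    """ITD as the lag maximizing the interaural cross-correlation."""
    max_lag = int(max_lag_s * FS)
    lags = range(-max_lag, max_lag + 1)
    xcorr = [np.dot(left[max(0, -l):len(left) - max(0, l)],
                    right[max(0, l):len(right) - max(0, -l)]) for l in lags]
    return (int(np.argmax(xcorr)) - max_lag) / FS

def ild_db(left, right):
    """ILD as the level ratio between the two ears, in dB."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)) + 1e-12)
    return 20 * np.log10(rms(left) / rms(right))

# Demo: a source on the left arrives earlier and louder at the left ear.
rng = np.random.default_rng(3)
sig = rng.normal(size=FS // 10)
delay = int(0.0003 * FS)  # 0.3 ms interaural delay
left = np.concatenate([sig, np.zeros(delay)])
right = 0.7 * np.concatenate([np.zeros(delay), sig])
print(f"ITD = {itd_seconds(left, right) * 1e6:.0f} us, "
      f"ILD = {ild_db(left, right):.1f} dB")
```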
Abstract:
GABA, the primary inhibitory neurotransmitter, and its receptors play an important role in modulating neuronal activity in the central nervous system and are implicated in many neurological disorders. In this study, GABAA and GABAB receptor subunit expression was visualized by immunohistochemistry in human auditory areas TC (the primary auditory area), TB, and TA. Both hemispheres from nine neurologically normal subjects and from four patients with subacute or chronic stroke were included. In normal brains, GABAA receptor subunit (α1, α2, and β2/3) labeling produced neuropil staining throughout all cortical layers and labeled fibers and neurons in layer VI in all auditory areas. Densitometry profiles revealed differences in GABAA subunit expression between primary and non-primary areas. In contrast to the neuropil labeling of the GABAA subunits, GABAB1 and GABAB2 subunit immunoreactivity was found on neuronal somata and proximal dendritic shafts of pyramidal and non-pyramidal neurons in layers II-III, more strongly in supra- than in infragranular layers; no differences were observed between auditory areas. In stroke cases, we observed a downregulation of the GABAA receptor α2 subunit in granular and infragranular layers, while the other GABAA subunits and the two GABAB receptor subunits remained unchanged. Our results demonstrate a strong presence of GABAA and GABAB receptors in the human auditory cortex, suggesting a crucial role of GABA in shaping auditory responses in primary and non-primary auditory areas. The differential laminar and areal expression of GABAA subunits found in the auditory areas, which is partially different from that in other cortical areas, speaks in favor of a fine tuning of GABAergic transmission in these different compartments. In contrast, GABAB expression displayed laminar, but not areal, differences; its basic pattern was also very similar to that of other cortical areas, suggesting a more uniform role within the cerebral cortex. In subacute and chronic stroke, the selective downregulation of the GABAA α2 subunit is likely to influence postlesional plasticity and susceptibility to medication. The absence of changes in the GABAB receptors suggests a different regulation than in other pathological conditions, such as epilepsy, schizophrenia or bipolar disorder, in which a downregulation has been reported.
Abstract:
An object's motion relative to an observer can confer ethologically meaningful information. Approaching or looming stimuli can signal threats/collisions to be avoided or prey to be confronted, whereas receding stimuli can signal successful escape or failed pursuit. Using movement detection and subjective ratings, we investigated the multisensory integration of looming and receding auditory and visual information by humans. While prior research has demonstrated a perceptual bias for unisensory and more recently multisensory looming stimuli, none has investigated whether there is integration of looming signals between modalities. Our findings reveal selective integration of multisensory looming stimuli. Performance was significantly enhanced for looming stimuli over all other multisensory conditions. Contrasts with static multisensory conditions indicate that only multisensory looming stimuli resulted in facilitation beyond that induced by the sheer presence of auditory-visual stimuli. Controlling for variation in physical energy replicated the advantage for multisensory looming stimuli. Finally, only looming stimuli exhibited a negative linear relationship between enhancement indices for detection speed and for subjective ratings. Maximal detection speed was attained when motion perception was already robust under unisensory conditions. The preferential integration of multisensory looming stimuli highlights that complex ethologically salient stimuli likely require synergistic cooperation between existing principles of multisensory integration. A new conceptualization of the neurophysiologic mechanisms mediating real-world multisensory perceptions and action is therefore supported.
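A common way to quantify the facilitation reported above is a redundancy-gain or enhancement index that compares multisensory detection speed against the faster of the two unisensory conditions. The sketch below uses fabricated reaction times and a simple median-based index; the study's exact index and its race-model contrasts may differ.

```python
# Multisensory enhancement index on fabricated reaction-time data.
import numpy as np

def enhancement_index(rt_auditory, rt_visual, rt_multisensory):
    """Percent speeding of multisensory detection relative to the faster
    unisensory condition (positive values indicate facilitation)."""
    best_uni = min(np.median(rt_auditory), np.median(rt_visual))
    return 100.0 * (best_uni - np.median(rt_multisensory)) / best_uni

rng = np.random.default_rng(0)
rt_a = rng.normal(420, 40, 200)   # auditory-only trials (ms)
rt_v = rng.normal(400, 40, 200)   # visual-only trials (ms)
rt_av = rng.normal(360, 35, 200)  # auditory-visual looming trials (ms)
print(f"Multisensory enhancement: {enhancement_index(rt_a, rt_v, rt_av):.1f}%")
```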
Abstract:
Repetition of environmental sounds, like that of their visual counterparts, can facilitate behavior and modulate neural responses, exemplifying plasticity in how auditory objects are represented or accessed. It remains controversial whether such repetition priming/suppression involves solely plasticity based on acoustic features or also access to semantic features. To evaluate the contributions of physical and semantic features in eliciting repetition-induced plasticity, the present functional magnetic resonance imaging (fMRI) study repeated either identical or different exemplars of the initially presented object, reasoning that identical exemplars share both physical and semantic features, whereas different exemplars share only semantic features. Participants performed a living/man-made categorization task while being scanned at 3 T. Repeated stimuli of both types significantly facilitated reaction times relative to initial presentations, demonstrating perceptual and semantic repetition priming. There was also repetition suppression of fMRI activity within overlapping temporal, premotor, and prefrontal regions of the auditory "what" pathway. Importantly, the magnitude of the suppression effects was equivalent for physically identical and semantically related exemplars. That the degree of repetition suppression did not depend on whether perceptual as well as semantic information was repeated suggests a degree of acoustically independent semantic analysis in how object representations are maintained and retrieved.
Abstract:
We perceive our environment through multiple sensory channels. Nonetheless, research has traditionally focused on sensory processing within single modalities. Investigating how our brain integrates multisensory information is therefore of crucial importance for understanding how organisms cope with a constantly changing, dynamic environment. During my thesis I investigated how multisensory events impact our perception and brain responses, both when auditory-visual stimuli are presented simultaneously and when multisensory events at one point in time impact later unisensory processing. In "Looming signals reveal synergistic principles of multisensory integration" (Cappe, Thelen et al., 2012), we investigated the neuronal substrates involved in the detection of motion in depth under multisensory versus unisensory conditions. We showed that congruent auditory-visual looming (i.e. approaching) signals are preferentially integrated by the brain, and that early effects under these conditions are relevant for behavior, effectively speeding up responses to these combined stimulus presentations. In "Electrical neuroimaging of memory discrimination based on single-trial multisensory learning" (Thelen et al., 2012), we investigated the behavioral impact of single encounters with meaningless auditory-visual object pairings on subsequent visual object recognition. In addition to showing that these encounters lead to impaired recognition accuracy upon repeated visual presentation, we showed that the brain discriminates images as early as ~100 ms post-stimulus onset according to the initial encounter context. In "Single-trial multisensory memories affect later visual and auditory object recognition" (Thelen et al., in review), we addressed whether auditory object recognition is affected by single-trial multisensory memories, and whether recognition accuracy for sounds is affected by the initial encounter context in the same way as for visual objects. We found that this is indeed the case. Based on our behavioral findings, we propose that a common underlying brain network is differentially involved during the encoding and retrieval of images and sounds.
Abstract:
Multisensory interactions are observed in species from single-cell organisms to humans. Important early work was primarily carried out in the cat superior colliculus and a set of critical parameters for their occurrence were defined. Primary among these were temporal synchrony and spatial alignment of bisensory inputs. Here, we assessed whether spatial alignment was also a critical parameter for the temporally earliest multisensory interactions that are observed in lower-level sensory cortices of the human. While multisensory interactions in humans have been shown behaviorally for spatially disparate stimuli (e.g. the ventriloquist effect), it is not clear if such effects are due to early sensory level integration or later perceptual level processing. In the present study, we used psychophysical and electrophysiological indices to show that auditory-somatosensory interactions in humans occur via the same early sensory mechanism both when stimuli are in and out of spatial register. Subjects more rapidly detected multisensory than unisensory events. At just 50 ms post-stimulus, neural responses to the multisensory 'whole' were greater than the summed responses from the constituent unisensory 'parts'. For all spatial configurations, this effect followed from a modulation of the strength of brain responses, rather than the activation of regions specifically responsive to multisensory pairs. Using the local auto-regressive average source estimation, we localized the initial auditory-somatosensory interactions to auditory association areas contralateral to the side of somatosensory stimulation. Thus, multisensory interactions can occur across wide peripersonal spatial separations remarkably early in sensory processing and in cortical regions traditionally considered unisensory.
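The "whole" versus "summed parts" comparison implies an additive-model test: the response to the multisensory pair is contrasted, time point by time point, against the algebraic sum of the two unisensory responses. Below is a minimal sketch of that logic on fabricated single-trial matrices, with uncorrected t-tests; the study's actual statistics and source estimation (local auto-regressive average) are not reproduced.

```python
# Additive-model ('pair' vs. 'sum of parts') test on fabricated epochs.
import numpy as np
from scipy import stats

FS = 1000                          # assumed sampling rate (Hz)
t = np.arange(-0.1, 0.4, 1 / FS)   # epoch from -100 to 400 ms

rng = np.random.default_rng(1)     # trials x time, random placeholders
resp_pair = rng.normal(size=(100, t.size))  # auditory-somatosensory pairs
resp_aud = rng.normal(size=(100, t.size))   # auditory alone
resp_som = rng.normal(size=(100, t.size))   # somatosensory alone

summed = resp_aud + resp_som                # modelled sum of the 'parts'
tvals, pvals = stats.ttest_ind(resp_pair, summed, axis=0)

# Earliest post-stimulus time point where the pair exceeds the summed parts.
sig = np.where((pvals < 0.05) & (tvals > 0) & (t > 0))[0]
if sig.size:
    print(f"Earliest superadditive effect: {t[sig[0]] * 1000:.0f} ms post-stimulus")
```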
Abstract:
In the field of perception, learning is constrained by a functional architecture of distributed, highly specialized cortical areas. In the domain of visual disorders of cerebral origin, for example, the learning of a hemianopic or agnosic patient will be limited by his or her residual perceptual capacities, but a visual recognition deficit of apparently perceptual nature may also be associated with an alteration of representations in long-term (semantic) memory. Furthermore, perception and memory traces rely on parallel processing, as has recently been demonstrated for human audition. Activation studies in normal subjects and psychophysical investigations in patients with focal hemispheric lesions have shown that auditory information relevant to sound recognition and information relevant to sound localization are processed in parallel, anatomically distinct cortical networks, often referred to as the "What" and "Where" processing streams; studies of brain-damaged patients confirm the role of spatial cues in explicit auditory processing of the "where" and in implicit discrimination of the "what". Parallel processing may appear counterintuitive from the point of view of a unified perception of the auditory world, but it offers advantages, such as rapidity of processing within a single stream, adaptability in perceptual learning, and ease of multisensory interactions. More generally, implicit learning mechanisms are responsible for the non-conscious acquisition of a great part of our knowledge about the world, exploiting our sensitivity to the rules and regularities that structure our environment. Implicit learning is involved in cognitive development, in the formation of emotional reactions, and in a young child's acquisition of the native language. Preserved implicit learning abilities have been shown in amnesic patients with paradigms such as serial reaction time and artificial grammar learning tasks, confirming that implicit learning mechanisms are not sustained by the cognitive processes and brain structures that are damaged in amnesia.
From a clinical perspective, the assessment of implicit learning abilities in amnesic patients could be critical for building adapted neuropsychological rehabilitation programs.
Abstract:
In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2,754 cortical sites tested). Language functions were studied with a special focus on the comprehension of auditory and visual words and on the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of four different images controlled for familiarity), as well as auditory object (sound recognition) and Token Test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved two pathways: along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated the perceptual consciousness attached to speech comprehension: the initial word discrimination process can be considered an "automatic" stage, attentional feedback not being impaired by stimulation as it was at the lexical-semantic stage. A multimodal organization of the superior temporal gyrus was also detected, since some neurones could be involved in the comprehension of visual material and in naming. These findings demonstrate a finely graded, sub-centimetre cortical representation of speech comprehension processing, mainly in the left superior temporal gyrus, and are in line with dual-stream models of language comprehension.