987 results for auditory masking
Abstract:
Recent multisensory research has emphasized the occurrence of early, low-level interactions in humans. As such, it is proving increasingly necessary to also consider the kinds of information likely extracted from the unisensory signals that are available at the time and location of these interaction effects. This review addresses current evidence regarding how the spatio-temporal brain dynamics of auditory information processing likely curtail the information content of multisensory interactions observable in humans at a given latency and within a given brain region. First, we consider the time course of signal propagation as a limitation on when auditory information (of any kind) can impact the responsiveness of a given brain region. Next, we overview the dual-pathway model for the treatment of auditory spatial and object information, ranging from rudimentary to complex environmental stimuli. These dual pathways are considered an intrinsic feature of auditory information processing; they are not only partially distinct in their associated brain networks, but also (and perhaps more importantly) manifest only after several tens of milliseconds of cortical signal processing. This architecture of auditory functioning would thus pose a constraint on when and in which brain regions specific spatial and object information is available for multisensory interactions. We then separately consider evidence regarding mechanisms and dynamics of spatial and object processing, with a particular emphasis on when discriminations along either dimension are likely performed by specific brain regions. We conclude by discussing open issues and directions for future research.
Abstract:
Evidence from neuropsychological and activation studies (Clarke et al., 2000; Maeder et al., 2000) suggests that sound recognition and localisation are processed by two anatomically and functionally distinct cortical networks. We report here on a patient with an interruption of auditory information and show: i) the effects of this interruption on cortical auditory processing; ii) the effect of workload on the activation pattern. A 36-year-old man suffered a small left mesencephalic haemorrhage due to a cavernous angioma; the left inferior colliculus was resected in the surgical approach to the vascular malformation. In the acute stage, the patient complained of auditory hallucinations and of hearing loss in the right ear, while tonal audiometry was normal. At 12 months, auditory recognition, auditory localisation (assessed by ITD and IID cues) and auditory motion perception were normal (Clarke et al., 2000), while verbal dichotic listening was deficient on the right side. Sound recognition and sound localisation activation patterns were investigated with fMRI, using a passive and an active paradigm. In normal subjects, distinct cortical networks were involved in sound recognition and localisation, in both the passive and the active paradigm (Maeder et al., 2000a, 2000b). Passive listening to environmental and spatial stimuli, as compared to rest, strongly activated the right auditory cortex, but failed to activate the left primary auditory cortex. The specialised networks for sound recognition and localisation could not be visualised on the right and only minimally on the left convexity. A very different activation pattern was obtained in the active condition, where a motor response was required. Workload not only increased the activation of the right auditory cortex, but also allowed the activation of the left primary auditory cortex. The specialised networks for sound recognition and localisation were almost completely present in both hemispheres. These results show that increasing the workload can i) help to recruit cortical regions in the auditory deafferented hemisphere; and ii) lead to processing of auditory information within specific cortical networks. References: Clarke et al. (2000), Neuropsychologia 38: 797-807. Maeder et al. (2000a), Neuroimage 11: S52. Maeder et al. (2000b), Neuroimage 11: S33.
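The lateralization cues mentioned above (ITD and IID) can be made concrete with a small illustration. The Python sketch below is not taken from the study; it simply generates a hypothetical dichotic noise burst in which the interaural time difference and interaural intensity difference are set explicitly, the kind of headphone stimulus typically used to assess lateralization. All parameter values and the function name are arbitrary assumptions.

```python
# Minimal sketch (assumed parameters): a stereo noise burst with an explicit
# interaural time difference (ITD) and interaural intensity difference (IID).
import numpy as np

def binaural_burst(fs=44100, dur=0.2, itd_us=300.0, iid_db=6.0, seed=0):
    """Return an (n_samples, 2) stereo array: the right channel is delayed by
    `itd_us` microseconds and attenuated by `iid_db` dB, favouring the left ear."""
    rng = np.random.default_rng(seed)
    n = int(fs * dur)
    burst = rng.standard_normal(n)
    delay = int(round(itd_us * 1e-6 * fs))        # ITD expressed in samples
    right = np.concatenate([np.zeros(delay), burst])[:n]
    right *= 10 ** (-iid_db / 20)                 # IID as a linear gain factor
    left = burst
    return np.stack([left, right], axis=1)

stim = binaural_burst(itd_us=300, iid_db=6)   # typically lateralized toward the left
```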
Abstract:
Spatial hearing refers to a set of abilities enabling us to determine the location of sound sources, redirect our attention toward relevant acoustic events, and recognize separate sound sources in noisy environments. Determining the location of sound sources plays a key role in the way in which humans perceive and interact with their environment. Deficits in sound localization abilities are observed after lesions to the neural tissues supporting these functions and can result in serious handicaps in everyday life. These deficits can, however, be remediated (at least to a certain degree) thanks to the surprising capacity for reorganization that the human brain possesses following damage and/or learning, namely brain plasticity. In this thesis, our aim was to investigate the functional organization of auditory spatial functions and the learning-induced plasticity of these functions. Overall, we describe the results of three studies. The first study, entitled "The role of the right parietal cortex in sound localization: A chronometric single pulse transcranial magnetic stimulation study" (At et al., 2011), study A, investigated the role of the right parietal cortex in spatial functions and its chronometry (i.e. the critical time window of its contribution to sound localization). We concentrated on the behavioral changes produced by the temporary inactivation of the parietal cortex with transcranial magnetic stimulation (TMS). We found that the integrity of the right parietal cortex is crucial for localizing sounds in space and determined a critical time window of its involvement, suggesting a right parietal dominance for auditory spatial discrimination in both hemispaces. In "Distributed coding of the auditory space in man: evidence from training-induced plasticity" (At et al., 2013a), study B, we used electroencephalography (EEG) to investigate the neurophysiological correlates of the different sub-parts of the right auditory hemispace and the changes induced by a multi-day auditory spatial training in healthy subjects. We report a distributed coding of sound locations over numerous auditory regions: particular auditory areas code specifically for precise parts of the auditory space, and this specificity is enhanced with training. In the third study, "Training-induced changes in auditory spatial mismatch negativity" (At et al., 2013b), study C, we investigated the pre-attentive neurophysiological changes induced by a four-day training in healthy subjects with a passive mismatch negativity (MMN) paradigm. We showed that training changed the mechanisms for the relative representation of sound positions rather than the specific lateralizations themselves, and that it changed the coding in right parahippocampal regions. - Spatial hearing refers to our ability to localize sound sources in space, to direct our attention toward relevant acoustic events, and to recognize sound sources belonging to distinct objects in a noisy environment. The localization of sound sources plays an important role in the way humans perceive and interact with their environment. Deficits in sound localization are often observed when the neural networks involved in this function are damaged. These deficits can severely handicap patients in their everyday life.
These deficits can, however, be rehabilitated (at least to a certain degree) thanks to brain plasticity, the capacity of the human brain to reorganize itself after lesions or learning. The objective of this thesis was to study the functional organization of spatial hearing and the learning-induced plasticity of these functions. In the first study, entitled "The role of the right parietal cortex in sound localization: A chronometric single pulse study" (At et al., 2011), study A, we examined the role of the right parietal cortex in spatial hearing and its chronometry, that is, the critical time window of its involvement in sound localization. We focused on the behavioural changes induced by the temporary inactivation of the right parietal cortex using Transcranial Magnetic Stimulation (TMS). We demonstrated that the integrity of the right parietal cortex is crucial for localizing sounds in space. We also defined the critical time window of this structure's involvement. In "Distributed coding of the auditory space: evidence from training-induced plasticity" (At et al., 2013a), study B, we examined the brain plasticity induced by a multi-day training of auditory spatial discrimination abilities. We showed that the coding of spatial positions is distributed over numerous auditory regions, that specific auditory areas code for given parts of space, and that this specificity for distinct regions is increased by training. In "Training-induced changes in auditory spatial mismatch negativity" (At et al., 2013b), study C, we examined the pre-attentive neurophysiological changes induced by a four-day training. We showed that training modifies the representation of trained and untrained spatial positions, and that the coding of these positions is modified in parahippocampal regions.
Abstract:
Background and aim of the study: The formation of implicit memory during general anaesthesia is still debated. Perceptual learning is the ability to learn to perceive. In this study, an auditory perceptual learning paradigm using frequency discrimination was employed to investigate implicit memory. It was hypothesized that auditory stimulation would successfully induce perceptual learning; thus, initial thresholds in the postoperative frequency discrimination task should be lower for the stimulated group (group S) than for the control group (group C). Material and method: Eighty-seven ASA I-III patients undergoing visceral or orthopaedic surgery under general anaesthesia lasting more than 60 minutes were recruited. The anaesthesia procedure was standardized (BIS monitoring included). Group S received auditory stimulation (2000 pure tones applied for 45 minutes) during surgery. Twenty-four hours after the operation, both groups performed ten blocks of the frequency discrimination task. The mean threshold over the first three blocks (T1) was compared between groups. Results: Mean age and BIS value of group S and group C were 40 ± 11 vs 42 ± 11 years (p = 0.49) and 42 ± 6 vs 41 ± 8 (p = 0.87), respectively. T1 was 31 ± 33 vs 28 ± 34 (p = 0.72) in groups S and C, respectively. Conclusion: In our study, no implicit memory during general anaesthesia was demonstrated. This may be explained by a modulation of the auditory evoked potentials caused by the anaesthesia, or by an insufficient duration of repetitive stimulation to induce perceptual learning.
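As a purely illustrative aside (not taken from the study), the comparison described above, with T1 defined as the mean threshold over the first three blocks and contrasted between groups with a two-sample test, might look like the following sketch; the threshold matrices, group sizes, and value ranges are hypothetical placeholders.

```python
# Illustrative sketch (hypothetical data): compute T1 per subject and compare groups.
import numpy as np
from scipy import stats

def first_blocks_mean(thresholds, n_blocks=3):
    """thresholds: (n_subjects, n_blocks_total) array, one threshold per block."""
    return np.asarray(thresholds)[:, :n_blocks].mean(axis=1)

# placeholder threshold matrices, NOT the study's data
rng = np.random.default_rng(0)
thr_s = rng.uniform(5, 60, size=(44, 10))   # stimulated group, 10 blocks each
thr_c = rng.uniform(5, 60, size=(43, 10))   # control group
t1_s, t1_c = first_blocks_mean(thr_s), first_blocks_mean(thr_c)
t_stat, p_value = stats.ttest_ind(t1_s, t1_c)   # two-sample comparison of T1
```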
Abstract:
Ly49A is an inhibitory receptor, which counteracts natural killer (NK) cell activation upon engagement with H-2D(d) (D(d)) MHC class I molecules (MHC-I) on target cells. In addition to binding D(d) on apposed membranes, Ly49A interacts with D(d) ligand expressed in the plane of the NK cells' membrane. Indeed, multivalent, soluble MHC-I ligand binds inefficiently to Ly49A unless the NK cells' D(d) complexes are destroyed. However, it is not known whether masked Ly49A remains constitutively associated with cis D(d) also during target cell interaction. Alternatively, it is possible that Ly49A has to be unmasked to significantly interact with its ligand on target cells. These two scenarios suggest distinct roles of Ly49A/D(d) cis interaction for NK cell function. Here, we show that Ly49A contributes to target cell adhesion and efficiently accumulates at synapses with D(d)-expressing target cells when NK cells themselves lack D(d). When NK cells express D(d), Ly49A no longer contributes to adhesion, and ligand-driven recruitment to the cellular contact site is strongly reduced. The destruction of D(d) complexes on NK cells, which unmasks Ly49A, is necessary and sufficient to restore Ly49A adhesive function and recruitment to the synapse. Thus, cis D(d) continuously sequesters a considerable fraction of Ly49A receptors, preventing efficient Ly49A recruitment to the synapse with D(d)+ target cells. The reduced number of Ly49A receptors that can functionally interact with D(d) on target cells explains the modest inhibitory capacity of Ly49A in D(d) NK cells. This property renders Ly49A NK cells more responsive to diseased host cells.
Abstract:
A transitory projection from primary and secondary auditory areas to the contralateral and ipsilateral areas 17 and 18 exists in newborn kittens. Distinct neuronal populations project to ipsilateral areas 17-18, contralateral areas 17-18 and the contralateral auditory cortex; they lie at different depths in layers II, III, and IV. By postnatal day 38 the auditory-to-visual projections have been lost, apparently by elimination of axons rather than by neuronal death. While it was previously reported that the elimination of transitory axons is responsible for focusing the origin of callosal connections to restricted portions of sensory areas, it now appears that similar events play a more general role in the organization of cortico-cortical networks. Indeed, the elimination of juvenile projections is largely responsible for determining which areas will be connected in the adult.
Abstract:
Both neural and behavioral responses to stimuli are influenced by the state of the brain immediately preceding their presentation, notably by pre-stimulus oscillatory activity. Using frequency analysis of high-density electroencephalography coupled with source estimations, the present study investigated the role of pre-stimulus oscillatory activity in auditory spatial temporal order judgments (TOJ). Oscillations within the beta range (i.e. 18-23 Hz) were significantly stronger before accurate than before inaccurate TOJ trials. Distributed source estimations identified bilateral posterior sylvian regions as the principal contributors to pre-stimulus beta oscillations. Activity within the left posterior sylvian region was significantly stronger before accurate than before inaccurate TOJ trials. We discuss our results in terms of a modulation of sensory gating mechanisms mediated by beta activity.
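A rough sketch of the kind of pre-stimulus band-power contrast described above is given below. It is not the authors' pipeline: the array shapes, sampling rate, and the use of Welch's method are assumptions, and the accuracy labels are placeholders; it only illustrates extracting 18-23 Hz power per trial and contrasting accurate vs. inaccurate trials.

```python
# Rough sketch (assumed data shapes): per-trial pre-stimulus beta power,
# contrasted between accurate and inaccurate trials.
import numpy as np
from scipy.signal import welch
from scipy import stats

def band_power(epochs, fs, fmin=18.0, fmax=23.0):
    """epochs: (n_trials, n_channels, n_samples) pre-stimulus EEG.
    Returns mean power in [fmin, fmax] per trial, averaged over channels."""
    freqs, psd = welch(epochs, fs=fs, nperseg=epochs.shape[-1], axis=-1)
    band = (freqs >= fmin) & (freqs <= fmax)
    return psd[..., band].mean(axis=(-1, -2))     # average over band and channels

# hypothetical epochs: 100 trials, 128 channels, 500 ms at 512 Hz
rng = np.random.default_rng(0)
epochs = rng.standard_normal((100, 128, 256))
accurate = rng.integers(0, 2, 100).astype(bool)   # placeholder accuracy labels

beta = band_power(epochs, fs=512)
t, p = stats.ttest_ind(beta[accurate], beta[~accurate])
```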
Abstract:
ABSTRACT (English) Accurate processing of the order between sensory events at the millisecond time scale is crucial for both sensori-motor and cognitive functions. In temporal order judgment (TOJ) tasks, observers discriminate the order of presentation of several stimuli presented in rapid succession. The aim of the present thesis is to further investigate the spatio-temporal brain mechanisms supporting TOJ. In three studies we focus on the dependency of TOJ accuracy on the brain states preceding the presentation of TOJ stimuli, on the neural correlates of accurate vs. inaccurate TOJ, and on whether and how TOJ performance can be improved with training. In "Pre-stimulus beta oscillations within left posterior sylvian regions impact auditory temporal order judgment accuracy" (Bernasconi et al., 2011), we investigated whether the brain activity immediately preceding the presentation of the stimuli modulates TOJ performance. By contrasting the electrophysiological activity before stimulus presentation as a function of TOJ accuracy, we observed stronger pre-stimulus beta (20 Hz) oscillatory activity within the left posterior sylvian region (PSR) before accurate than before inaccurate TOJ trials. In "Interhemispheric coupling between the posterior sylvian regions impacts successful auditory temporal order judgment" (Bernasconi et al., 2010a) and "Plastic brain mechanisms for attaining auditory temporal order judgment proficiency" (Bernasconi et al., 2010b), we investigated the spatio-temporal brain dynamics underlying auditory TOJ. In both studies we observed a topographic modulation as a function of TOJ performance at ~40 ms after the onset of the first sound, indicating the engagement of distinct configurations of intracranial generators. Source estimations in the first study revealed bilateral PSR activity for both accurate and inaccurate TOJ trials. Moreover, activity within the left, but not the right, PSR correlated with TOJ performance. Source estimations in the second study revealed a training-induced left lateralization of the initially bilateral (i.e. PSR) brain response. Moreover, the activity within the left PSR correlated with TOJ performance. Based on these results, we suggest that a "temporal stamp" is established within the left PSR on the first sound of the pair at early stages (i.e. ~40 ms) of cortical processing, but is critically modulated by inputs from the right PSR (Bernasconi et al., 2010a; b). The "temporal stamp" on the first sound may be established via a sensory gating or prior-entry mechanism. Behavioral and brain responses to identical stimuli can vary due to attentional modulation, experimental and task parameters, or "internal noise". In a fourth experiment (Bernasconi et al., 2011b) we investigated where and when "neural noise" manifests during stimulus processing. Contrasting the AEPs to identical sounds perceived as high vs. low pitch, we observed a topographic modulation at ca. 100 ms after sound onset. Source estimation revealed activity within regions compatible with pitch discrimination. Thus, we provide neurophysiological evidence for the variation in perception induced by "neural noise". ABSTRACT (French) Accurate processing of the order of sensory events at the millisecond time scale is crucial for sensori-motor and cognitive functions.
Temporal order judgment (TOJ) tasks, which consist of presenting several stimuli in rapid succession, are traditionally used to study the neural mechanisms supporting the processing of rapidly varying sensory information. The aim of this thesis is to study the brain mechanisms supporting TOJ. In the three studies presented, we focused on the brain states preceding the presentation of the TOJ stimuli, on the neural bases of correct vs. incorrect TOJ, and on whether and how TOJ performance can be improved through training. In "Pre-stimulus beta oscillations within left posterior sylvian regions impact auditory temporal order judgment accuracy" (Bernasconi et al., 2011), we asked whether pre-stimulus oscillatory brain activity modulates TOJ performance. We contrasted the electrophysiological activity as a function of TOJ performance and measured stronger pre-stimulus beta oscillatory activity in the left posterior sylvian region (PSR) before correct TOJ trials. In "Interhemispheric coupling between the posterior sylvian regions impacts successful auditory temporal order judgment" (Bernasconi et al., 2010a) and "Plastic brain mechanisms for attaining auditory temporal order judgment proficiency" (Bernasconi et al., 2010b), we studied the spatio-temporal brain dynamics involved in auditory TOJ processing. In these two studies, we observed a topographic modulation at ~40 ms after the onset of the first sound as a function of TOJ performance, indicating the engagement of distinct configurations of intracranial generators. Source localization in the first study indicated bilateral PSR activity for correct vs. incorrect TOJ. Moreover, activity in the left, but not the right, PSR correlated with TOJ performance. Source localization in the second study indicated a training-induced left lateralization of an initially bilateral brain response. Furthermore, activity in the left PSR correlated with TOJ performance. Based on these results, we propose that a "temporal stamp" is established very early (i.e. at ~40 ms) on the first sound by the left PSR, but is modulated by activity of the right PSR (Bernasconi et al., 2010a; b). The "temporal stamp" on the first sound may be established by a neural mechanism of the "sensory gating" or "prior entry" type. Behavioral and brain responses to identical stimuli can vary due to attentional modulations, variations in task parameters, or internal brain noise. In a fourth experiment (Bernasconi et al., 2011b), we studied where and when "neural noise" manifests itself during stimulus processing. By contrasting the AEPs to identical sounds perceived as high vs. low pitched, we measured a topographic modulation at about 100 ms after sound onset. Source estimation revealed activity in regions compatible with frequency discrimination. Thus, we provided neurophysiological evidence of the variation in perception induced by "neural noise".
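For readers unfamiliar with how TOJ performance is quantified, the sketch below illustrates one common approach: fitting a cumulative-Gaussian psychometric function to the proportion of "left first" responses as a function of stimulus onset asynchrony (SOA). This is a generic illustration, not the analysis used in the cited studies; the SOA levels, response proportions, and function names are invented placeholders.

```python
# Minimal sketch (hypothetical data): cumulative-Gaussian psychometric fit for TOJ.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(soa, pse, jnd):
    """Probability of reporting 'left first' as a function of SOA (ms);
    pse = point of subjective simultaneity, jnd ~ slope parameter."""
    return norm.cdf(soa, loc=pse, scale=jnd)

# placeholder responses pooled per SOA level (left-leading positive)
soas = np.array([-90, -60, -30, -10, 10, 30, 60, 90], dtype=float)
p_left_first = np.array([0.05, 0.12, 0.30, 0.45, 0.58, 0.74, 0.90, 0.96])
(pse, jnd), _ = curve_fit(psychometric, soas, p_left_first, p0=(0.0, 30.0))
```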
Abstract:
Here we describe a method for measuring tonotopic maps and estimating bandwidth for voxels in human primary auditory cortex (PAC) using a modification of the population Receptive Field (pRF) model, developed for retinotopic mapping in visual cortex by Dumoulin and Wandell (2008). The pRF method reliably estimates tonotopic maps in the presence of acoustic scanner noise, and has two advantages over phase-encoding techniques. First, the stimulus design is flexible and need not be a frequency progression, thereby reducing biases due to habituation, expectation, and estimation artifacts, as well as reducing the effects of spatio-temporal BOLD nonlinearities. Second, the pRF method can provide estimates of bandwidth as a function of frequency. We find that bandwidth estimates are narrower for voxels within the PAC than in surrounding auditory responsive regions (non-PAC).
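To make the approach more concrete, here is a minimal sketch of a one-dimensional pRF model in log-frequency space, loosely following the framework of Dumoulin and Wandell as adapted to tonotopy: a Gaussian tuning curve (preferred frequency and bandwidth) whose overlap with the stimulus spectrum predicts the voxel time course, fitted by a simple grid search. It deliberately omits haemodynamic convolution and other details of the published method; function names, shapes, and the fitting strategy are assumptions for illustration only.

```python
# Illustrative sketch (simplified): Gaussian pRF in log-frequency space.
import numpy as np

def prf_prediction(stim, log_freqs, center, sigma):
    """stim: (n_timepoints, n_freq_bins) stimulus spectrogram;
    log_freqs: (n_freq_bins,) log2 frequency axis;
    center, sigma: preferred log-frequency and tuning width of the voxel."""
    tuning = np.exp(-0.5 * ((log_freqs - center) / sigma) ** 2)
    return stim @ tuning                     # stimulus/tuning overlap per time point

def fit_prf(stim, log_freqs, bold, centers, sigmas):
    """Grid search over candidate (center, sigma) pairs; returns the pair whose
    prediction correlates best with the measured voxel time course `bold`."""
    best, best_r = (None, None), -np.inf
    for c in centers:
        for s in sigmas:
            pred = prf_prediction(stim, log_freqs, c, s)
            r = np.corrcoef(pred, bold)[0, 1]
            if r > best_r:
                best, best_r = (c, s), r
    return best, best_r
```

The fitted sigma plays the role of a per-voxel bandwidth estimate, which is the quantity the abstract compares between PAC and surrounding auditory regions.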
Abstract:
Recent evidence suggests the human auditory system is organized, like the visual system, into a ventral 'what' pathway, devoted to identifying objects, and a dorsal 'where' pathway, devoted to the localization of objects in space [1]. Several brain regions have been identified in these two different pathways, but until now little is known about the temporal dynamics of these regions. We investigated this issue using 128-channel auditory evoked potentials (AEPs). Stimuli were stationary sounds created by varying interaural time differences and real recorded environmental sounds. Stimuli of each condition (localization, recognition) were presented through earphones in a blocked design, while subjects determined their position or meaning, respectively. AEPs were analyzed in terms of their topographical scalp potential distributions (segmentation maps) and underlying neuronal generators (source estimation) [2]. Fourteen scalp potential distributions (maps) best explained the entire data set. Ten maps were nonspecific (associated with auditory stimulation in general), two were specific for sound localization and two were specific for sound recognition (P-values ranging from 0.02 to 0.045). Condition-specific maps appeared at two distinct time periods: ~200 ms and ~375-550 ms post-stimulus. The brain sources associated with the maps specific for sound localization were mainly situated in the inferior frontal cortices, confirming previous findings [3]. The sources associated with sound recognition were predominantly located in the temporal cortices, with a weaker activation in the frontal cortex. The data show that sound localization and sound recognition engage different brain networks that are apparent at two distinct time periods. References: 1. Maeder et al., Neuroimage, 2001. 2. Michel et al., Brain Research Reviews, 2001. 3. Ducommun et al., Neuroimage, 2002.
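The topographic AEP analyses referred to above rest on a few standard quantities. The sketch below illustrates two of them, global field power and global map dissimilarity, under assumed array shapes; it is a generic illustration rather than the specific segmentation and source-estimation pipeline cited in references 2 and 3.

```python
# Rough sketch (assumed shapes): global field power (GFP) and global map
# dissimilarity (GMD), two basic quantities in topographic ERP analysis.
import numpy as np

def gfp(erp):
    """erp: (n_channels, n_timepoints) average-referenced AEP.
    GFP = spatial standard deviation across electrodes at each time point."""
    return erp.std(axis=0)

def gmd(map_a, map_b):
    """Global map dissimilarity between two scalp maps (n_channels,):
    root-mean-square difference after average-referencing and GFP-normalization."""
    a = map_a - map_a.mean()
    b = map_b - map_b.mean()
    a /= a.std()
    b /= b.std()
    return np.sqrt(np.mean((a - b) ** 2))
```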