159 results for auditory EEG
Abstract:
Evidence from neuropsychological and activation studies (Clarke et al., 2000; Maeder et al., 2000) suggests that sound recognition and localisation are processed by two anatomically and functionally distinct cortical networks. We report here on the case of a patient who had an interruption of auditory information, and we show: i) the effects of this interruption on cortical auditory processing; ii) the effect of workload on the activation pattern. A 36-year-old man suffered a small left mesencephalic haemorrhage due to a cavernous angioma; the left inferior colliculus was resected in the surgical approach to the vascular malformation. In the acute stage, the patient complained of auditory hallucinations and of hearing loss in the right ear, while tonal audiometry was normal. At 12 months, auditory recognition, auditory localisation (assessed by ITD and IID cues) and auditory motion perception were normal (Clarke et al., 2000), while verbal dichotic listening was deficient on the right side. Sound recognition and sound localisation activation patterns were investigated with fMRI, using a passive and an active paradigm. In normal subjects, distinct cortical networks were involved in sound recognition and localisation, in both the passive and the active paradigm (Maeder et al., 2000a, 2000b). Passive listening to environmental and spatial stimuli, as compared to rest, strongly activated the right auditory cortex, but failed to activate the left primary auditory cortex. The specialised networks for sound recognition and localisation could not be visualised on the right and only minimally on the left convexity. A very different activation pattern was obtained in the active condition, where a motor response was required. Workload not only increased the activation of the right auditory cortex, but also allowed the activation of the left primary auditory cortex.
The specialised networks for sound recognition and localisation were almost completely present in both hemispheres. These results show that increasing the workload can i) help to recruit cortical regions in the auditory deafferented hemisphere; and ii) lead to processing of auditory information within specific cortical networks.
References:
Clarke et al. (2000). Neuropsychologia 38: 797-807.
Maeder et al. (2000a). Neuroimage 11: S52.
Maeder et al. (2000b). Neuroimage 11: S33.
Abstract:
Introduction: Neuronal oscillations have been the focus of increasing interest in the neuroscientific community, in part because they have been considered as a possible integrating mechanism through which internal states can influence stimulus processing in a top-down way (Engel et al., 2001). Moreover, increasing evidence indicates that oscillations in different frequency bands interact with one another through coupling mechanisms (Jensen and Colgin, 2007). The existence and importance of these cross-frequency couplings during various tasks have been verified by recent studies (Canolty et al., 2006; Lakatos et al., 2007). In this study, we measure the strength and directionality of two types of couplings - phase-amplitude couplings and phase-phase couplings - between various frequency bands in EEG data recorded during an illusory contour experiment; the oscillatory components were identified using a recently proposed adaptive frequency tracking algorithm (Van Zaen et al., 2010). Methods: The data used in this study were taken from a previously published study examining the spatiotemporal mechanisms of illusory contour processing (Murray et al., 2002). The EEG data in the present study came from a subset of nine subjects. Each stimulus was composed of 'pac-man' inducers presented in two orientations: IC, when an illusory contour was present, and NC, when no contour could be detected. The signals recorded by the electrodes P2, P4, P6, PO4 and PO6 were averaged, and filtered into the following bands: 4-8Hz, 8-12Hz, 15-25Hz, 35-45Hz, 45-55Hz, 55-65Hz and 65-75Hz. An adaptive frequency tracking algorithm (Van Zaen et al., 2010) was then applied in each band in order to extract the main oscillation and estimate its frequency. This additional step ensures that clean phase information is obtained when taking the Hilbert transform. The frequency estimated by the tracker was averaged over sliding windows and then used to compare the two conditions.
Two types of cross-frequency couplings were considered: phase-amplitude couplings and phase-phase couplings. Both types were measured with the phase locking value (PLV; Lachaux et al., 1999) over sliding windows. The phase-amplitude couplings were computed between the phase of the low frequency oscillation and the phase of the amplitude of the high frequency one. Different coupling coefficients were used when measuring phase-phase couplings in order to estimate different m:n synchronizations (4:3, 3:2, 2:1, 3:1, 4:1, 5:1, 6:1, 7:1, 8:1 and 9:1) and to take into account the frequency differences across bands. Moreover, the direction of coupling was estimated with a directionality index (Bahraminasab et al., 2008). Finally, the two conditions IC and NC were compared with ANOVAs with 'subject' as a random effect and 'condition' as a fixed effect. Before computing the statistical tests, the PLV values were transformed into approximately normal variables (Penny et al., 2008). Results: When comparing the mean estimated frequency across conditions, a significant difference was found only in the 4-8Hz band, such that the frequency within this band was significantly higher for IC than NC stimuli starting at ~250ms post-stimulus onset (Fig. 1; solid line shows IC and dashed line NC). Significant differences in phase-amplitude couplings were obtained only when the 4-8Hz band was taken as the low frequency band. Moreover, in all significant cases, the coupling strength was higher for the NC than the IC condition. An example of a significant difference between conditions is shown in Fig. 2 for the phase-amplitude coupling between the 4-8Hz and 55-65Hz bands (p-value in top panel and mean PLV values in the bottom panel). A decrease in coupling strength was observed shortly after stimulus onset for both conditions and was greater for the IC condition. This phenomenon was observed with all other frequency bands. The results obtained for the phase-phase couplings were more complex.
As for the phase-amplitude couplings, all significant differences were obtained when the 4-8Hz band was considered as the low frequency band. The stimulus condition exhibiting the higher coupling strength depended on the ratio of the coupling coefficients. When this ratio was small, the IC condition exhibited the higher phase-phase coupling strength; when this ratio was large, the NC condition exhibited the higher coupling strength. Fig. 3 shows the phase-phase couplings between the 4-8Hz and 35-45Hz bands for the 6:1 coupling coefficient, where the coupling strength was significantly higher for the IC than the NC condition. By contrast, for the 9:1 coupling coefficient the NC condition gave the higher coupling strength (Fig. 4). Control analyses verified that this was not a consequence of the frequency difference between the two conditions in the 4-8Hz band. The directionality measures indicated a transfer of information from the low frequency components towards the high frequency ones. Conclusions: Adaptive tracking is a feasible method for EEG analyses, revealing information both about stimulus-related differences and coupling patterns across frequencies. Theta oscillations play a central role in illusory shape processing and more generally in visual processing. The presence vs. absence of illusory shapes was paralleled by faster theta oscillations. Phase-amplitude couplings decreased more for IC than NC, which might be due to a resetting mechanism. The complex patterns in phase-phase coupling between theta and beta/gamma suggest that the contribution of these oscillations to visual binding and stimulus processing is not as straightforward as conventionally held. Causality analyses further suggest that theta oscillations drive beta/gamma oscillations (see also Schroeder and Lakatos, 2009). The present findings highlight the need for applying more sophisticated signal analyses in order to establish a fuller understanding of the functional role of neural oscillations.
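The phase-amplitude coupling measure described above can be sketched in Python. This is a minimal illustration assuming numpy/scipy: a plain band-pass filter plus Hilbert transform stands in for the adaptive frequency tracker used in the study, and the function names and band edges are illustrative rather than the authors' code.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def phase_amplitude_plv(x, fs, low_band=(4, 8), high_band=(55, 65)):
    """PLV between the phase of a low-frequency oscillation and the phase
    of the amplitude envelope of a high-frequency oscillation, as in the
    phase-amplitude coupling measure described in the abstract."""
    low_phase = np.angle(hilbert(bandpass(x, *low_band, fs)))
    high_env = np.abs(hilbert(bandpass(x, *high_band, fs)))
    # The envelope itself oscillates; take its phase in the low band.
    env_phase = np.angle(hilbert(bandpass(high_env, *low_band, fs)))
    return np.abs(np.mean(np.exp(1j * (low_phase - env_phase))))
```

The same machinery extends to m:n phase-phase coupling by comparing m times the low-band phase against n times the high-band phase before averaging.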
Abstract:
Background and aim of the study: Formation of implicit memory during general anaesthesia is still debated. Perceptual learning is the ability to learn to perceive. In this study, an auditory perceptual learning paradigm using frequency discrimination was performed to investigate implicit memory. It was hypothesized that auditory stimulation would successfully induce perceptual learning; thus, initial thresholds on the postoperative frequency discrimination task should be lower for the stimulated group (group S) than for the control group (group C). Material and method: Eighty-seven ASA I-III patients undergoing visceral and orthopaedic surgery during general anaesthesia lasting more than 60 minutes were recruited. The anaesthesia procedure was standardized (BIS monitoring included). Group S received auditory stimulation (2000 pure tones applied for 45 minutes) during surgery. Twenty-four hours after the operation, both groups performed ten blocks of the frequency discrimination task. The mean threshold over the first three blocks (T1) was compared between groups. Results: Mean age and BIS value of groups S and C were respectively 40 ± 11 vs 42 ± 11 years (p = 0.49) and 42 ± 6 vs 41 ± 8 (p = 0.87). T1 was respectively 31 ± 33 vs 28 ± 34 (p = 0.72) in groups S and C. Conclusion: In our study, no implicit memory during general anaesthesia was demonstrated. This may be explained by a modulation of the auditory evoked potentials caused by the anaesthesia, or by an insufficient duration of repetitive stimulation to induce perceptual learning.
Abstract:
A transitory projection from primary and secondary auditory areas to the contralateral and ipsilateral areas 17 and 18 exists in newborn kittens. Distinct neuronal populations project to ipsilateral areas 17-18, contralateral areas 17-18 and the contralateral auditory cortex; they lie at different depths in layers II, III, and IV. By postnatal day 38 the auditory-to-visual projections have been lost, apparently by elimination of axons rather than by neuronal death. While it was previously reported that the elimination of transitory axons is responsible for focusing the origin of callosal connections to restricted portions of sensory areas, it now appears that similar events play a more general role in the organization of cortico-cortical networks. Indeed, the elimination of juvenile projections is largely responsible for determining which areas will be connected in the adult.
Abstract:
Both neural and behavioral responses to stimuli are influenced by the state of the brain immediately preceding their presentation, notably by pre-stimulus oscillatory activity. Using frequency analysis of high-density electroencephalogram coupled with source estimations, the present study investigated the role of pre-stimulus oscillatory activity in auditory spatial temporal order judgments (TOJ). Oscillations within the beta range (i.e. 18-23Hz) were significantly stronger before accurate than inaccurate TOJ trials. Distributed source estimations identified bilateral posterior sylvian regions as the principal contributors to pre-stimulus beta oscillations. Activity within the left posterior sylvian region was significantly stronger before accurate than inaccurate TOJ trials. We discuss our results in terms of a modulation of sensory gating mechanisms mediated by beta activity.
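The pre-stimulus analysis described above can be sketched in Python, assuming numpy/scipy. The 18-23 Hz beta band comes from the abstract; the epoch layout, function names, and filter order are illustrative assumptions, and the sketch works on single-channel epochs rather than the full high-density EEG and source estimations used in the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from scipy.stats import ttest_ind

def beta_power(epochs, fs, band=(18.0, 23.0)):
    """Mean beta-band power per trial.

    epochs : array (n_trials, n_samples) holding only the
             pre-stimulus window of each trial.
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=-1)
    envelope = np.abs(hilbert(filtered, axis=-1))  # instantaneous amplitude
    return (envelope ** 2).mean(axis=-1)           # one power value per trial

def compare_accuracy(epochs_accurate, epochs_inaccurate, fs):
    """t-test on pre-stimulus beta power: accurate vs. inaccurate trials."""
    return ttest_ind(beta_power(epochs_accurate, fs),
                     beta_power(epochs_inaccurate, fs))
```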
Abstract:
ABSTRACT (English)
An accurate processing of the order between sensory events at the millisecond time scale is crucial for both sensori-motor and cognitive functions. Temporal order judgment (TOJ) is the ability to discriminate the order of presentation of several stimuli presented in rapid succession. The aim of the present thesis is to further investigate the spatio-temporal brain mechanisms supporting TOJ. In three studies we focus on the dependency of TOJ accuracy on the brain states preceding the presentation of TOJ stimuli, the neural correlates of accurate vs. inaccurate TOJ, and whether and how TOJ performance can be improved with training. In "Pre-stimulus beta oscillations within left posterior sylvian regions impact auditory temporal order judgment accuracy" (Bernasconi et al., 2011), we investigated whether the brain activity immediately preceding the presentation of the stimuli modulates TOJ performance. By contrasting the electrophysiological activity before stimulus presentation as a function of TOJ accuracy, we observed stronger pre-stimulus beta (20Hz) oscillatory activity within the left posterior sylvian region (PSR) before accurate than inaccurate TOJ trials. In "Interhemispheric coupling between the posterior sylvian regions impacts successful auditory temporal order judgment" (Bernasconi et al., 2010a) and "Plastic brain mechanisms for attaining auditory temporal order judgment proficiency" (Bernasconi et al., 2010b), we investigated the spatio-temporal brain dynamics underlying auditory TOJ. In both studies we observed a topographic modulation as a function of TOJ performance at ~40ms after the onset of the first sound, indicating the engagement of distinct configurations of intracranial generators. Source estimations in the first study revealed bilateral PSR activity for both accurate and inaccurate TOJ trials. Moreover, activity within left, but not right, PSR correlated with TOJ performance.
Source estimations in the second study revealed a training-induced left lateralization of the initially bilateral (i.e. PSR) brain response. Moreover, activity within the left PSR correlated with TOJ performance. Based on these results, we suggest that a "temporal stamp" is established within left PSR for the first sound of the pair at early stages (i.e. ~40ms) of cortical processing, but is critically modulated by inputs from right PSR (Bernasconi et al., 2010a; b). The "temporal stamp" on the first sound may be established via a sensory gating or prior entry mechanism. Behavioral and brain responses to identical stimuli can vary due to attentional modulation, to variations in experimental and task parameters, or to "internal noise". In a fourth experiment (Bernasconi et al., 2011b) we investigated where and when "neural noise" manifests during stimulus processing. Contrasting the AEPs to identical sounds perceived as high vs. low pitch, we found a topographic modulation at ca. 100ms after sound onset. Source estimation revealed activity within regions compatible with pitch discrimination. Thus, we provide neurophysiological evidence for the variation in perception induced by "neural noise".
Abstract:
Here we describe a method for measuring tonotopic maps and estimating bandwidth for voxels in human primary auditory cortex (PAC) using a modification of the population Receptive Field (pRF) model, developed for retinotopic mapping in visual cortex by Dumoulin and Wandell (2008). The pRF method reliably estimates tonotopic maps in the presence of acoustic scanner noise, and has two advantages over phase-encoding techniques. First, the stimulus design is flexible and need not be a frequency progression, thereby reducing biases due to habituation, expectation, and estimation artifacts, as well as reducing the effects of spatio-temporal BOLD nonlinearities. Second, the pRF method can provide estimates of bandwidth as a function of frequency. We find that bandwidth estimates are narrower for voxels within the PAC than in surrounding auditory responsive regions (non-PAC).
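The pRF idea transfers to tonotopy roughly as follows: each voxel is modeled as a Gaussian tuning curve over log-frequency, whose center gives the voxel's preferred frequency and whose width gives its bandwidth. Below is a toy sketch assuming numpy; it deliberately omits the HRF convolution and the optimizer of a real pRF fit, and the function names and grids are illustrative assumptions, not the method of Dumoulin and Wandell (2008) or of this paper.

```python
import numpy as np

def prf_predict(stim_freqs, center, sigma):
    """Predicted response of a voxel with Gaussian tuning in log2-frequency
    to a sequence of pure-tone stimuli (HRF convolution omitted)."""
    d = np.log2(stim_freqs) - np.log2(center)
    return np.exp(-d ** 2 / (2 * sigma ** 2))

def fit_prf(stim_freqs, bold, centers, sigmas):
    """Grid search for the (center frequency, bandwidth) pair maximizing
    the correlation between predicted and measured responses."""
    best, best_r = (None, None), -np.inf
    for c in centers:
        for s in sigmas:
            r = np.corrcoef(prf_predict(stim_freqs, c, s), bold)[0, 1]
            if r > best_r:
                best, best_r = (c, s), r
    return best
```

Because the predictor is evaluated against arbitrary stimulus sequences, the tone order is free to be randomized rather than a frequency progression, which is the flexibility the abstract highlights.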
Abstract:
Deletions on the short arm of chromosome 4 cause Wolf-Hirschhorn syndrome (WHS) and Pitt-Rogers-Danks syndrome (PRDS). WHS is associated with severe growth and mental retardation, microcephaly, a characteristic facies and congenital malformations. The PRDS phenotype is similar to WHS but generally less severe. Seizures occur in the majority of WHS and PRDS patients. Sgrò et al. [17] described a stereotypic electroclinical pattern in four unrelated WHS patients, consisting of intermittent bursts of 2-3 Hz high voltage slow waves with spike wave activity in the parietal areas during drowsiness and sleep associated with myoclonic jerks. We report a patient with PRDS and the typical EEG pattern and review 14 WHS patients with similar EEG findings reported in the literature. CONCLUSION: Awareness and recognition of the characteristic electroclinical findings in Wolf-Hirschhorn syndrome and Pitt-Rogers-Danks syndrome might help in the early diagnosis of such patients.
Abstract:
Recent evidence suggests the human auditory system is organized, like the visual system, into a ventral 'what' pathway, devoted to identifying objects, and a dorsal 'where' pathway, devoted to the localization of objects in space [1]. Several brain regions have been identified in these two pathways, but until now little is known about their temporal dynamics. We investigated this issue using 128-channel auditory evoked potentials (AEPs). Stimuli were stationary sounds created by varying interaural time differences, and real recorded environmental sounds. Stimuli of each condition (localization, recognition) were presented through earphones in a blocked design, while subjects determined their position or meaning, respectively. AEPs were analyzed in terms of their topographical scalp potential distributions (segmentation maps) and underlying neuronal generators (source estimation) [2]. Fourteen scalp potential distributions (maps) best explained the entire data set. Ten maps were nonspecific (associated with auditory stimulation in general), two were specific for sound localization and two were specific for sound recognition (P-values ranging from 0.02 to 0.045). Condition-specific maps appeared at two distinct time periods: ~200 ms and ~375-550 ms post-stimulus. The brain sources associated with the maps specific for sound localization were mainly situated in the inferior frontal cortices, confirming previous findings [3]. The sources associated with sound recognition were predominantly located in the temporal cortices, with weaker activation in the frontal cortex. The data show that sound localization and sound recognition engage different brain networks that are apparent at two distinct time periods.
References:
1. Maeder et al. Neuroimage 2001.
2. Michel et al. Brain Research Reviews 2001.
3. Ducommun et al. Neuroimage 2002.
Abstract:
Hearing loss can be caused by a variety of insults, including acoustic trauma and exposure to ototoxins, that principally affect the viability of sensory hair cells via the MAP kinase (MAPK) cell death signaling pathway that incorporates c-Jun N-terminal kinase (JNK). We evaluated the otoprotective efficacy of D-JNKI-1, a cell permeable peptide that blocks the MAPK-JNK signal pathway. The experimental studies included organ cultures of neonatal mouse cochlea exposed to an ototoxic drug, and cochleae of adult guinea pigs that were exposed to either an ototoxic drug or acoustic trauma. Results obtained from the organ of Corti explants demonstrated that the MAPK-JNK signal pathway is associated with injury and that blocking this signal pathway prevented apoptosis in areas of aminoglycoside damage. Treatment of the neomycin-exposed organ of Corti explants with D-JNKI-1 completely prevented hair cell death initiated by this ototoxin. Results from in vivo studies showed that direct application of D-JNKI-1 into the scala tympani of the guinea pig cochlea prevented nearly all hair cell death and permanent hearing loss induced by neomycin ototoxicity. Local delivery of D-JNKI-1 also prevented acoustic trauma-induced permanent hearing loss in a dose-dependent manner. These results indicate that the MAPK-JNK signal pathway is involved in both ototoxicity- and acoustic trauma-induced hair cell loss and permanent hearing loss. Blocking this signal pathway with D-JNKI-1 is of potential therapeutic value for long-term protection of both the morphological integrity and physiological function of the organ of Corti during times of oxidative stress.
Abstract:
Simple reaction times (RTs) to auditory-somatosensory (AS) multisensory stimuli are facilitated over their unisensory counterparts both when stimuli are delivered to the same location and when separated. In two experiments we addressed the possibility that top-down and/or task-related influences can dynamically impact the spatial representations mediating these effects and the extent to which multisensory facilitation will be observed. Participants performed a simple detection task in response to auditory, somatosensory, or simultaneous AS stimuli that in turn were either spatially aligned or misaligned by lateralizing the stimuli. Additionally, we informed the participants that they would be retrogradely queried (on one-third of trials) regarding the side where a given stimulus in a given sensory modality was presented. In this way, we sought to have participants attend to all possible spatial locations and sensory modalities, while nonetheless having them perform a simple detection task. Experiment 1 provided no cues prior to stimulus delivery. Experiment 2 included spatially uninformative cues (50% of trials). In both experiments, multisensory conditions significantly facilitated detection RTs with no evidence for differences according to spatial alignment (though general benefits of cuing were observed in Experiment 2). Facilitated detection thus occurs even when attending to spatial information. Performance with probes, quantified using sensitivity (d'), was impaired following multisensory trials in general and significantly more so following misaligned multisensory trials. This indicates that spatial information is not available, despite being task-relevant. The collective results support a model wherein early AS interactions may result in a loss of spatial acuity for unisensory information.
Abstract:
Several lines of research have documented early-latency non-linear response interactions between audition and touch in humans and non-human primates. That these effects have been obtained under anesthesia, passive stimulation, as well as speeded reaction time tasks would suggest that some multisensory effects are not directly influencing behavioral outcome. We investigated whether the initial non-linear neural response interactions have a direct bearing on the speed of reaction times. Electrical neuroimaging analyses were applied to event-related potentials in response to auditory, somatosensory, or simultaneous auditory-somatosensory multisensory stimulation that were in turn averaged according to trials leading to fast and slow reaction times (using a median split of individual subject data for each experimental condition). Responses to multisensory stimulus pairs were contrasted with each unisensory response as well as summed responses from the constituent unisensory conditions. Behavioral analyses indicated that neural response interactions were only implicated in the case of trials producing fast reaction times, as evidenced by facilitation in excess of probability summation. In agreement, supra-additive non-linear neural response interactions between multisensory and the sum of the constituent unisensory stimuli were evident over the 40-84 ms post-stimulus period only when reaction times were fast, whereas subsequent effects (86-128 ms) were observed independently of reaction time speed. Distributed source estimations further revealed that these earlier effects followed from supra-additive modulation of activity within posterior superior temporal cortices. These results indicate the behavioral relevance of early multisensory phenomena.
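Facilitation "in excess of probability summation" is conventionally tested with the race-model (Miller) inequality, which bounds the multisensory RT distribution by the sum of the two unisensory distributions. The sketch below, assuming numpy, illustrates that test; the function names, quantile grid, and the use of empirical CDFs are illustrative assumptions and not necessarily the exact procedure used in the study.

```python
import numpy as np

def cdf_at(rts, t):
    """Empirical cumulative probability P(RT <= t)."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t, side="right") / len(rts)

def race_model_violation(rt_multi, rt_aud, rt_som,
                         quantiles=np.linspace(0.05, 0.95, 19)):
    """Positive values indicate that the multisensory CDF exceeds the
    Miller bound min(1, F_A(t) + F_S(t)), i.e. facilitation beyond
    probability summation by independent unisensory channels."""
    ts = np.quantile(np.concatenate([rt_multi, rt_aud, rt_som]), quantiles)
    bound = np.minimum(1.0, [cdf_at(rt_aud, t) + cdf_at(rt_som, t) for t in ts])
    multi = np.array([cdf_at(rt_multi, t) for t in ts])
    return multi - bound
```

Applied per subject and condition after the median split described above, positive violations at the fast end of the distribution are the behavioral signature that the early neural interactions are linked to.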
Abstract:
Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a continuous recognition task in the auditory modality, discriminating initial (new) from repeated (old) sounds of environmental objects. Initial presentations were either unisensory or multisensory; the latter entailed synchronous presentation of a semantically congruent or a meaningless image. Repeated presentations were exclusively auditory, thus differing only according to the context in which the sound was initially encountered. Discrimination abilities (indexed by d') were increased for repeated sounds that were initially encountered with a semantically congruent image versus sounds initially encountered with either a meaningless or no image. Analyses of ERPs within an electrical neuroimaging framework revealed that early stages of auditory processing of repeated sounds were affected by prior single-trial multisensory contexts. These effects followed from significantly reduced activity within a distributed network, including the right superior temporal cortex, suggesting an inverse relationship between brain activity and behavioural outcome on this task. The present findings demonstrate how auditory cortices contribute to long-term effects of multisensory experiences on auditory object discrimination. We propose a new framework for the efficacy of multisensory processes to impact both current multisensory stimulus processing and unisensory discrimination abilities later in time.
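The discrimination index d' used above can be computed from old/new response counts. A common sketch assuming scipy is given below; the log-linear correction is an implementation choice (not stated in the abstract) that keeps the z-scores finite when hit or false-alarm rates reach 0 or 1.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index for old/new recognition: z(hit rate) - z(FA rate),
    with a log-linear correction (add 0.5 to each count) so that rates of
    exactly 0 or 1 do not produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)
```

For example, a subject with 90 hits, 10 misses, 10 false alarms and 90 correct rejections gets a d' of about 2.5, while equal hit and false-alarm rates give a d' of 0.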