273 results for evoked brain stem auditory response
Abstract:
Mismatch negativity (MMN) overlaps with other auditory event-related potential (ERP) components. We examined the ERPs of fifty 9- to 11-year-old children elicited by the vowels /i/ and /y/ and by equivalent complex tones. The goal was to separate MMN from obligatory ERP components using principal component analysis and an equal-probability control condition. In addition to the deviant-minus-standard contrast, we employed the deviant-minus-control contrast to see whether obligatory processing contributes to MMN in children. In the speech deviant-minus-standard contrast, MMN starts around 112 ms. However, when both contrasts are examined, MMN emerges for speech at 160 ms, whereas for nonspeech stimuli MMN is observed at 112 ms regardless of contrast. We argue that the discriminative response to speech stimuli at 112 ms is obligatory in nature rather than reflecting change-detection processing.
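A minimal sketch of the two contrasts and the component-separation step described above, assuming grand-average ERP arrays of shape (electrodes × time); the variable names, array shapes, and the use of scikit-learn's PCA are illustrative assumptions, not the authors' actual pipeline:

```python
# Hypothetical sketch: difference waves for the two contrasts and a PCA
# across electrodes to separate overlapping ERP components.
import numpy as np
from sklearn.decomposition import PCA

# Placeholder grand-average ERPs: (n_electrodes, n_timepoints)
erp_standard = np.random.randn(32, 300)
erp_deviant  = np.random.randn(32, 300)
erp_control  = np.random.randn(32, 300)   # equal-probability control condition

diff_dev_std = erp_deviant - erp_standard   # deviant minus standard
diff_dev_ctl = erp_deviant - erp_control    # deviant minus control

# PCA with timepoints as observations and electrodes as variables, so each
# component is a topography whose time course can then be inspected.
pca = PCA(n_components=5)
time_courses = pca.fit_transform(diff_dev_std.T)   # (n_timepoints, 5)
print(pca.explained_variance_ratio_)               # variance captured per component
```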
Abstract:
Background: We previously reported the results of a phase II study of patients with newly diagnosed primary central nervous system lymphoma treated with autologous peripheral blood stem-cell transplantation (aPBSCT) and response-adapted whole-brain radiotherapy (WBRT). Here we update the initial results. Patients and methods: From 1999 to 2004, 23 patients received high-dose methotrexate. In case of at least partial remission, high-dose busulfan/thiotepa (HD-BuTT) followed by aPBSCT was carried out. Patients refractory to induction or without complete remission after HD-BuTT received WBRT. The eight patients still alive in 2011 were contacted, and the Mini-Mental State Examination (MMSE) and the European Organisation for Research and Treatment of Cancer quality-of-life questionnaire (QLQ-C30) were administered. Results: For the eight patients still alive, median follow-up is 116.9 months. Only one of nine irradiated patients is still alive, with a severe neurologic deficit. In seven of eight patients treated with HD-BuTT, health condition and quality of life are excellent. The MMSE and QLQ-C30 showed remarkably good results in patients who did not receive WBRT; all of them have a Karnofsky score of 90%-100%. Conclusions: Follow-up shows an overall survival of 35%. In six of seven patients in whom WBRT could be avoided, no long-term neurotoxicity has been observed, and all patients have an excellent quality of life.
Abstract:
Several groups have demonstrated the existence of self-renewing stem cells in embryonic and adult mouse brain. In vitro, these cells proliferate in response to epidermal growth factor, forming clusters of nestin-positive cells that may be dissociated and subcultured repetitively. Here we show that, in stem cell clusters derived from rat embryonic striatum, cell proliferation decreased with increasing number of passages and in response to elevated concentrations of potassium (30 mM KCl). In monolayer culture, the appearance of microtubule-associated protein type-5-immunoreactive (MAP-5(+)) cells (presumptive neurons) in response to basic fibroblast growth factor (bFGF) was reduced at low cell density and with increasing number of passages. In the presence of bFGF, elevated potassium caused a more differentiated neuronal phenotype, characterized by an increased proportion of MAP-5(+) cells, extensive neuritic branching, and higher specific activity of glutamic acid decarboxylase. Dissociated stem cells were able to invade cultured brain cell aggregates containing different proportions of neurons and glial cells, whereas they required the presence of a considerable proportion of glial cells in the host cultures to become neurofilament H-positive. The latter observation supports the view that astrocyte-derived factors influence early differentiation of the neuronal cell lineage.
Abstract:
Interaural intensity and time differences (IID and ITD) are two binaural auditory cues for localizing sounds in space. This study investigated the spatio-temporal brain mechanisms for processing and integrating IID and ITD cues in humans. Auditory-evoked potentials were recorded, while subjects passively listened to noise bursts lateralized with IID, ITD or both cues simultaneously, as well as a more frequent centrally presented noise. In a separate psychophysical experiment, subjects actively discriminated lateralized from centrally presented stimuli. IID and ITD cues elicited different electric field topographies starting at approximately 75 ms post-stimulus onset, indicative of the engagement of distinct cortical networks. By contrast, no performance differences were observed between IID and ITD cues during the psychophysical experiment. Subjects did, however, respond significantly faster and more accurately when both cues were presented simultaneously. This performance facilitation exceeded predictions from probability summation, suggestive of interactions in neural processing of IID and ITD cues. Supra-additive neural response interactions as well as topographic modulations were indeed observed approximately 200 ms post-stimulus for the comparison of responses to the simultaneous presentation of both cues with the mean of those to separate IID and ITD cues. Source estimations revealed differential processing of IID and ITD cues initially within superior temporal cortices and also at later stages within temporo-parietal and inferior frontal cortices. Differences were principally in terms of hemispheric lateralization. The collective psychophysical and electrophysiological results support the hypothesis that IID and ITD cues are processed by distinct, but interacting, cortical networks that can in turn facilitate auditory localization.
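A minimal sketch of the probability-summation benchmark mentioned above, under the standard assumption that the two cues are processed independently and either alone can drive a correct response; the single-cue accuracies are hypothetical, and the authors' actual test (e.g., on reaction-time distributions) may differ:

```python
# Probability-summation (independent-cues) prediction for bimodal performance.
def probability_summation(p_iid: float, p_itd: float) -> float:
    return p_iid + p_itd - p_iid * p_itd

p_iid, p_itd = 0.80, 0.78                           # hypothetical single-cue accuracies
predicted = probability_summation(p_iid, p_itd)     # ~0.956
observed = 0.98                                     # hypothetical accuracy with both cues

# Observed facilitation exceeding the prediction is taken as evidence for
# neural interaction of IID and ITD cues rather than purely statistical gain.
print(f"predicted={predicted:.3f}, observed={observed:.3f}")
```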
Abstract:
The hypothalamic neuropeptide oxytocin (OT), which controls childbirth and lactation, is receiving increasing attention for its effects on social behaviors, but how it reaches central brain regions is still unclear. Here we used recombinant viruses to gain selective genetic access to hypothalamic OT neurons, allowing us to study their connectivity and to control their activity by optogenetic means. We found axons of hypothalamic OT neurons in the majority of forebrain regions, including the central amygdala (CeA), a structure critically involved in OT-mediated fear suppression. In vitro, exposure of channelrhodopsin-2-expressing OT axons to blue light activated a local GABAergic circuit that inhibited neurons in the output region of the CeA. Remarkably, in vivo, local blue-light-induced endogenous OT release robustly decreased freezing responses in fear-conditioned rats. Our results thus show widespread central projections of hypothalamic OT neurons and demonstrate that OT release from local axonal endings can specifically control region-associated behaviors.
The role of energetic value in dynamic brain response adaptation during repeated food image viewing.
Abstract:
The repeated presentation of simple objects, as well as of biologically salient objects, can cause adaptation of behavioral and neural responses during the visual categorization of these objects. Mechanisms of response adaptation during repeated food viewing are of particular interest for better understanding food intake beyond energetic needs. Here, we measured visual evoked potentials (VEPs) and conducted neural source estimations for initial and repeated presentations of high-energy and low-energy foods as well as non-food images. Our results show that behavioral and neural responses to food and food-related objects are not uniformly affected by repetition. While the repetition of images displaying low-energy foods and non-food items modulated VEPs as well as their underlying neural sources and increased behavioral categorization accuracy, the responses to high-energy images remained largely invariant between initial and repeated encounters. Brain responses to images of high-energy foods thus appear less susceptible to repetition effects than responses to low-energy and non-food images. This finding is likely related to the superior reward value of high-energy foods and might be one reason why high-energy foods in particular are indulged in despite their potentially detrimental health consequences.
Abstract:
Current models of brain organization include multisensory interactions at early processing stages and within low-level, including primary, cortices. Embracing this model with regard to auditory-visual (AV) interactions in humans remains problematic. Controversy surrounds the application of an additive model to the analysis of event-related potentials (ERPs), and conventional ERP analysis methods have yielded discordant latencies of effects and permitted limited neurophysiologic interpretability. While hemodynamic imaging and transcranial magnetic stimulation studies provide general support for the above model, the precise timing, superadditive/subadditive directionality, topographic stability, and sources remain unresolved. We recorded ERPs in humans to attended, but task-irrelevant stimuli that did not require an overt motor response, thereby circumventing paradigmatic caveats. We applied novel ERP signal analysis methods to provide details concerning the likely bases of AV interactions. First, nonlinear interactions occur at 60-95 ms after stimulus and are the consequence of topographic, rather than pure strength, modulations in the ERP. AV stimuli engage distinct configurations of intracranial generators, rather than simply modulating the amplitude of unisensory responses. Second, source estimations (and statistical analyses thereof) identified primary visual, primary auditory, and posterior superior temporal regions as mediating these effects. Finally, scalar values of current densities in all of these regions exhibited functionally coupled, subadditive nonlinear effects, a pattern increasingly consistent with the mounting evidence in nonhuman primates. In these ways, we demonstrate how neurophysiologic bases of multisensory interactions can be noninvasively identified in humans, allowing for a synthesis across imaging methods on the one hand and species on the other.
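A hedged sketch of the additive model at issue above: nonlinear auditory-visual interactions are indexed by comparing the response to the multisensory stimulus against the summed unisensory responses. Array names and shapes are illustrative assumptions, not the study's actual data or analysis code:

```python
# Additive-model contrast for multisensory ERPs: AV minus (A + V).
import numpy as np

n_elec, n_time = 64, 500
erp_a  = np.random.randn(n_elec, n_time)   # auditory-alone ERP (placeholder data)
erp_v  = np.random.randn(n_elec, n_time)   # visual-alone ERP (placeholder data)
erp_av = np.random.randn(n_elec, n_time)   # audio-visual ERP (placeholder data)

# Under pure additivity erp_av equals erp_a + erp_v; deviations index
# superadditive (> 0) or subadditive (< 0) interactions at each electrode
# and time point.
interaction = erp_av - (erp_a + erp_v)
```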
Abstract:
Accurate perception of the order of occurrence of sensory information is critical for building coherent representations of the external world from ongoing flows of sensory inputs. While psychophysical evidence indicates that temporal perception can improve with practice, the underlying neural mechanisms remain unresolved. Using electrical neuroimaging analyses of auditory evoked potentials (AEPs), we identified the brain dynamics and mechanism supporting improvements in auditory temporal order judgment (TOJ) during the first vs. latter half of the experiment. Training-induced changes in brain activity were first evident 43-76 ms post stimulus onset and followed from topographic, rather than pure strength, AEP modulations. Improvements in auditory TOJ accuracy thus followed from changes in the configuration of the underlying brain networks during the initial stages of sensory processing. Source estimations revealed that posterior sylvian region (PSR) responses, initially bilateral at the beginning of the experiment, became lateralized to left-hemisphere dominance by its end. Further supporting the critical role of the left and right PSR in auditory TOJ proficiency, responses in the two regions went from being correlated to uncorrelated as the experiment progressed. These collective findings provide insights on the neurophysiologic mechanism and plasticity of temporal processing of sounds and are consistent with models based on spike-timing-dependent plasticity.
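The distinction drawn above between "topographic" and "pure strength" AEP modulations is typically operationalized with global field power (GFP) and global map dissimilarity (DISS). A minimal sketch of these two standard measures; the function names and array shapes are illustrative, not taken from the study:

```python
# Global field power (strength) and global dissimilarity (topography) for
# ERP/AEP maps of shape (n_electrodes, n_timepoints). Illustrative sketch only.
import numpy as np

def gfp(v: np.ndarray) -> np.ndarray:
    """Global field power: spatial standard deviation across electrodes,
    computed separately at each time point."""
    return np.sqrt(np.mean((v - v.mean(axis=0, keepdims=True)) ** 2, axis=0))

def diss(v1: np.ndarray, v2: np.ndarray) -> np.ndarray:
    """Global map dissimilarity between GFP-normalized maps: 0 means identical
    topographies; larger values suggest different generator configurations."""
    u1 = (v1 - v1.mean(axis=0, keepdims=True)) / gfp(v1)
    u2 = (v2 - v2.mean(axis=0, keepdims=True)) / gfp(v2)
    return np.sqrt(np.mean((u1 - u2) ** 2, axis=0))
```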
Abstract:
Inhibitory control, a core component of executive functions, refers to our ability to suppress intended or ongoing cognitive or motor processes. Based mostly on Go/NoGo paradigms, a considerable literature reports that inhibitory control of responses to "NoGo" stimuli is mediated by top-down mechanisms manifesting ∼200 ms after stimulus onset within frontoparietal networks. However, whether inhibitory functions in humans can be trained, and which neurophysiological mechanisms support such training, remains unresolved. We addressed these issues by contrasting auditory evoked potentials (AEPs) to left-lateralized "Go" and right-lateralized "NoGo" stimuli recorded at the beginning versus the end of 30 min of active auditory spatial Go/NoGo training, as well as during passive listening to the same stimuli before versus after the training session, generating two separate 2 × 2 within-subject designs. Training improved Go/NoGo proficiency, and response times to Go stimuli decreased. During active training, AEPs to NoGo, but not Go, stimuli modulated topographically with training 61-104 ms after stimulus onset, indicative of changes in the underlying brain network. Source estimations revealed that this modulation followed from decreased activity within left parietal cortices, which in turn predicted the extent of behavioral improvement. During passive listening, in contrast, effects were limited to topographic modulations of AEPs in response to Go stimuli over the 31-81 ms interval, mediated by decreased right anterior temporoparietal activity. We discuss our results in terms of the development of an automatic, bottom-up form of inhibitory control with training and a differential effect of Go/NoGo training during active executive control versus passive listening conditions.
Abstract:
Optimal behavior relies on flexible adaptation to environmental requirements, notably based on the detection of errors. The impact of error detection on subsequent behavior typically manifests as a slowing of RTs following errors. Precisely how errors impact the processing of subsequent stimuli and in turn shape behavior remains unresolved. To address these questions, we used an auditory spatial go/no-go task in which continual feedback informed participants of whether they were too slow. We contrasted auditory-evoked potentials to left-lateralized go and right no-go stimuli as a function of performance on the preceding go stimuli, generating a 2 × 2 design with "preceding performance" (fast hit [FH], slow hit [SH]) and stimulus type (go, no-go) as within-subject factors. SH trials were more often followed by further SH trials than were FHs, supporting our assumption that SHs engaged effects similar to errors. Electrophysiologically, auditory-evoked potentials modulated topographically as a function of preceding performance 80-110 msec post-stimulus onset and then as a function of stimulus type at 110-140 msec, indicative of changes in the underlying brain networks. Source estimations revealed stronger activity of prefrontal regions in response to stimuli following successful than error trials, followed by a stronger response of parietal areas to no-go than go stimuli. We interpret these results in terms of a shift from a fast automatic to a slow controlled form of inhibitory control induced by the detection of errors, manifesting during low-level integration of task-relevant features of subsequent stimuli, which in turn influences response speed.
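A minimal sketch of the trial sorting implied by this 2 × 2 design, in which each trial is assigned to a cell according to performance on the preceding go trial and its own stimulus type. The trial sequence, RT values, and the use of a median split to stand in for the task's "too slow" criterion are assumptions for illustration, not the authors' procedure:

```python
# Illustrative trial sorting for a (preceding performance x stimulus type) design.
import numpy as np

# Hypothetical trial sequence: stimulus type and RT (NaN where no response occurred)
stim = ["go", "go", "nogo", "go", "nogo", "go"]
rt   = [0.42, 0.61, np.nan, 0.38, np.nan, 0.75]

# Median split over go-trial RTs stands in for the "too slow" criterion
median_rt = np.nanmedian([r for r, s in zip(rt, stim) if s == "go"])

prev, cells = None, []            # cells: (preceding performance, stimulus type)
for s, r in zip(stim, rt):
    if prev is not None:
        cells.append((prev, s))   # assign the current trial to a 2 x 2 cell
    if s == "go" and not np.isnan(r):
        prev = "FH" if r <= median_rt else "SH"   # label carried to the next trial

print(cells)   # e.g. [('FH', 'go'), ('SH', 'nogo'), ('SH', 'go'), ...]
```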
Abstract:
Ochratoxin A (OTA), a fungal contaminant of basic food commodities, is known to be highly cytotoxic, but the pathways underlying adverse effects at subcytotoxic concentrations remain to be elucidated. Recent reports indicate that OTA affects cell cycle regulation. Therefore, 3D brain cell cultures were used to study OTA effects on mitotically active neural stem/progenitor cells, comparing highly differentiated cultures with their immature counterparts. Changes in the rate of DNA synthesis were related to early changes in the mRNA expression of neural stem/progenitor cell markers. OTA at 10 nM, a concentration below the cytotoxic level, was ineffective in immature cultures, whereas in mature cultures it significantly decreased the rate of DNA synthesis together with the mRNA expression of key transcriptional regulators such as Sox2, Mash1, Hes5, and Gli1; the cell cycle activator cyclin D2; and the phenotypic markers nestin, doublecortin, and PDGFRα. These effects were largely prevented by administration of Sonic hedgehog (Shh) peptide (500 ng ml⁻¹), indicating that OTA impaired the Shh pathway and the Sox2 regulatory transcription factor critical for stem cell self-renewal. Similar adverse effects of OTA in vivo might perturb the regulation of stem cell proliferation in the adult brain and in other organs exhibiting homeostatic and/or regenerative cell proliferation.
Abstract:
BACKGROUND: Analyses of brain responses to external stimuli are typically based on means computed across conditions. However, in many cognitive and clinical applications, taking into account their variability across trials has turned out to be statistically more sensitive than comparing their means. NEW METHOD: In this study we present a novel implementation of a single-trial topographic analysis (STTA) for discriminating auditory evoked potentials at predefined time windows. This analysis was previously introduced for extracting spatio-temporal features at the level of the whole neural response; adapting the STTA to specific time windows is an essential step for comparing its performance to other time-window-based algorithms. RESULTS: We analyzed responses to standard vs. deviant sounds and showed that the new implementation of the STTA gives above-chance decoding results in all subjects (compared to 7 out of 11 with the original method). In comatose patients, the improvement in decoding performance was even more pronounced than in healthy controls and doubled the number of significant results. COMPARISON WITH EXISTING METHOD(S): We compared the results obtained with the new STTA to those based on a logistic regression in both healthy controls and patients. In healthy controls, the logistic regression performed better; however, only the new STTA provided significant results in comatose patients at the group level. CONCLUSIONS: Our results provide quantitative evidence that a systematic investigation of the accuracy of established methods in normal and clinical populations is an essential step for optimizing decoding performance.
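A hedged sketch of the kind of time-window, single-trial decoding baseline the new STTA is compared against here, using a logistic regression on flattened electrode × time features; the data dimensions, time window, and cross-validation scheme are assumptions for illustration, and the topographic clustering at the core of the STTA itself is not reproduced:

```python
# Time-window single-trial decoding of standard vs. deviant responses
# with a logistic regression (illustrative baseline, not the STTA).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

n_trials, n_elec, n_time = 200, 64, 50          # assumed dimensions
X = np.random.randn(n_trials, n_elec, n_time)   # placeholder single-trial EEG
y = np.random.randint(0, 2, n_trials)           # 0 = standard, 1 = deviant

# Restrict to a predefined time window and flatten electrodes x time into features
window = slice(20, 35)
X_win = X[:, :, window].reshape(n_trials, -1)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X_win, y, cv=10)  # chance is ~0.5 for two classes
print(scores.mean())
```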
Abstract:
Normal visual perception requires differentiating foreground from background objects. Differences in physical attributes sometimes determine this relationship; often such differences must instead be inferred, as when two objects or their parts have the same luminance. Modal completion refers to such perceptual "filling-in" of object borders that is accompanied by concurrent brightness enhancement, in turn termed illusory contours (ICs). Amodal completion is filling-in without concurrent brightness enhancement. Presently there are controversies regarding whether both completion processes use a common neural mechanism and whether perceptual filling-in is a bottom-up, feedforward process initiating at the lowest levels of the cortical visual pathway or commences in higher-tier regions. We previously examined modal completion (Murray et al., 2002) and provided evidence that the earliest modal IC sensitivity occurs within higher-tier object recognition areas of the lateral occipital complex (LOC), further proposing that previous observations of IC sensitivity in lower-tier regions likely reflect feedback modulation from the LOC. The present study tested these proposals, examining the commonality between modal and amodal completion mechanisms with high-density electrical mapping, spatiotemporal topographic analyses, and the local autoregressive average distributed linear inverse source estimation. We demonstrate a common initial mechanism for both types of completion (140 msec) that manifested as a modulation in response strength within higher-tier visual areas, including the LOC and parietal structures, whereas differential mechanisms were evident only at a subsequent time period (240 msec), with amodal completion relying on continued strong responses in these structures.
Abstract:
Neural stem cells have been proposed as a new and promising treatment modality in various pathologies of the central nervous system, including malignant brain tumors. However, the underlying mechanism by which neural stem cells target tumor areas remains elusive. Monitoring of these cells is currently done with various modes of molecular imaging, such as optical imaging, magnetic resonance imaging, and positron emission tomography, a technology for visualizing processes ranging from metabolism and signal transduction to gene expression. In this new context, the microenvironment of (malignant) brain tumors and the blood-brain barrier gain increased interest. The authors of this review give an overview of the current molecular-imaging techniques used in different therapeutic experimental brain tumor models in relation to neural stem cells. Such methods for molecular imaging of gene-engineered neural stem/progenitor cells are currently used to trace the location and temporal level of expression of therapeutic and endogenous genes in malignant brain tumors, closing the gap between in vitro and in vivo integrative biology of disease in neural stem cell transplantation.
Abstract:
Abstract (English)

General background: Multisensory stimuli are easier to recognize, can improve learning, and are processed faster compared to unisensory ones. As such, the ability an organism has to extract and synthesize relevant sensory inputs across multiple sensory modalities shapes its perception of and interaction with the environment. A major question in the field is how the brain extracts and fuses relevant information to create a unified perceptual representation (but also how it segregates unrelated information). This fusion between the senses has been termed "multisensory integration", a notion that derives from seminal animal single-cell studies performed in the superior colliculus, a subcortical structure shown to create a multisensory output differing from the sum of its unisensory inputs. At the cortical level, integration of multisensory information has traditionally been deferred to higher classical associative cortical regions within the frontal, temporal and parietal lobes, after extensive processing within the sensory-specific and segregated pathways. However, many anatomical, electrophysiological and neuroimaging findings now point to multisensory convergence and interactions as a distributed process beginning much earlier than previously appreciated and within the initial stages of sensory processing.

The work presented in this thesis is aimed at studying the neural basis and mechanisms of how the human brain combines sensory information between the senses of hearing and touch. Early-latency non-linear auditory-somatosensory neural response interactions have been repeatedly observed in humans and non-human primates. Whether these early, low-level interactions directly influence behavioral outcomes remains an open question, as they have been observed under diverse experimental circumstances such as anesthesia, passive stimulation, and speeded reaction time tasks. Under laboratory settings, it has been demonstrated that simple reaction times to auditory-somatosensory stimuli are facilitated over their unisensory counterparts whether or not the stimuli are delivered to the same spatial location, suggesting that auditory-somatosensory integration must occur in cerebral regions with large-scale spatial representations. However, experiments that required spatial processing of the stimuli have observed effects limited to spatially aligned conditions or varying depending on which body part was stimulated. Whether those divergences stem from task requirements and/or the need for spatial processing has not been firmly established.

Hypotheses and experimental results: In a first study, we hypothesized that early non-linear auditory-somatosensory neural response interactions are relevant to behavior. Performing a median split according to reaction time on a subset of behavioral and electroencephalographic data, we found that the earliest non-linear multisensory interactions measured within the EEG signal (i.e., between 40-83 ms post-stimulus onset) were specific to fast reaction times, indicating a direct link between early neural response interactions and behavior.

In a second study, we hypothesized that the relevance of spatial information for task performance has an impact on behavioral measures of auditory-somatosensory integration. Across two psychophysical experiments we show that facilitated detection occurs even when attending to spatial information, with no modulation according to the spatial alignment of the stimuli.
On the other hand, discrimination performance with probes, quantified using sensitivity (d'), is impaired following multisensory trials in general and significantly more so following misaligned multisensory trials.

In a third study, we hypothesized that behavioral improvements might vary depending on which body part is stimulated (the hand vs. the neck). Preliminary results suggest a possible dissociation between behavioral improvements and ERPs: RTs to multisensory stimuli were modulated by space only when somatosensory stimuli were delivered to the neck, whereas multisensory ERPs were modulated by spatial alignment for both types of somatosensory stimuli.

Conclusion: This thesis provides insight into the functional role played by early, low-level multisensory interactions. Combining psychophysics and electrical neuroimaging techniques, we demonstrate the behavioral relevance of early, low-level interactions in the normal human system. Moreover, we show that these early interactions are impervious to top-down influences on spatial processing, suggesting their occurrence within cerebral regions having access to large-scale spatial representations. We finally highlight specific interactions between auditory space and somatosensory stimulation on different body parts. Gaining an in-depth understanding of how multisensory integration normally operates is of central importance, as it will ultimately permit us to consider how the impaired brain could benefit from rehabilitation with multisensory stimulation.
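Since discrimination performance above is quantified with the sensitivity index d', here is a minimal sketch of its standard computation from hit and false-alarm rates; the numerical rates are hypothetical, not values from the thesis:

```python
# Sensitivity index d' = z(hit rate) - z(false-alarm rate); illustrative values only.
from scipy.stats import norm

def d_prime(hit_rate: float, fa_rate: float) -> float:
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(d_prime(0.85, 0.20))   # ~1.88
```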