Abstract:
Multisensory interactions are observed in species from single-cell organisms to humans. Important early work was primarily carried out in the cat superior colliculus, and a set of critical parameters for their occurrence was defined. Primary among these were temporal synchrony and spatial alignment of bisensory inputs. Here, we assessed whether spatial alignment was also a critical parameter for the temporally earliest multisensory interactions observed in lower-level sensory cortices of the human. While multisensory interactions in humans have been shown behaviorally for spatially disparate stimuli (e.g. the ventriloquist effect), it is not clear whether such effects are due to early sensory-level integration or later perceptual-level processing. In the present study, we used psychophysical and electrophysiological indices to show that auditory-somatosensory interactions in humans occur via the same early sensory mechanism both when stimuli are in and out of spatial register. Subjects more rapidly detected multisensory than unisensory events. At just 50 ms post-stimulus, neural responses to the multisensory 'whole' were greater than the summed responses from the constituent unisensory 'parts'. For all spatial configurations, this effect followed from a modulation of the strength of brain responses, rather than the activation of regions specifically responsive to multisensory pairs. Using local auto-regressive average source estimation, we localized the initial auditory-somatosensory interactions to auditory association areas contralateral to the side of somatosensory stimulation. Thus, multisensory interactions can occur across wide peripersonal spatial separations remarkably early in sensory processing and in cortical regions traditionally considered unisensory.
Abstract:
Early blindness results in occipital cortex neurons responding to a wide range of auditory and tactile stimuli. These changes in tuning properties are accompanied by an extensive reorganization of the occipital cortex that includes alterations in anatomical structure and in neurochemical and metabolic pathways. Although it has been established in animal models that neurochemical pathways are heavily affected by early visual deprivation, the effects of blindness on these pathways in humans are still not well characterized. Here, using ¹H magnetic resonance spectroscopy in nine early blind and normally sighted subjects, we find that early blindness is associated with higher levels of creatine, choline and myo-inositol, and indications of lower levels of GABA, within the occipital cortex. These results suggest that the cross-modal responses associated with early blindness may, at least in part, be driven by changes within occipital biochemical pathways.
Abstract:
Approaching or looming sounds (L-sounds) have been shown to selectively increase visual cortex excitability [Romei, V., Murray, M. M., Cappe, C., & Thut, G. Preperceptual and stimulus-selective enhancement of low-level human visual cortex excitability by sounds. Current Biology, 19, 1799-1805, 2009]. These cross-modal effects start at an early, preperceptual stage of sound processing and persist with increasing sound duration. Here, we identified individual factors contributing to cross-modal effects on visual cortex excitability and studied the persistence of effects after sound offset. To this end, we probed the impact of different L-sound velocities on phosphene perception after the sound as a function of individual auditory versus visual preference/dominance, using single-pulse TMS over the occipital pole. We found that the boosting of phosphene perception by L-sounds continued for several tens of milliseconds after the end of the L-sound and was temporally sensitive to different L-sound profiles (velocities). In addition, we found that this depended on an individual's preferred sensory modality (auditory vs. visual) as determined through a divided attention task (attentional preference), but not on their simple threshold detection level per sensory modality. Whereas individuals with "visual preference" showed enhanced phosphene perception irrespective of L-sound velocity, those with "auditory preference" showed differential peaks in phosphene perception whose delays after sound offset followed the different L-sound velocity profiles. These novel findings suggest that looming signals modulate visual cortex excitability beyond sound duration, possibly to support prompt identification of and reaction to potentially dangerous approaching objects. The observed interindividual differences favor the idea that, unlike early effects, this late L-sound impact on visual cortex excitability is influenced by cross-modal attentional mechanisms rather than low-level sensory processes.
Abstract:
Complex auditory hallucinations are often characterized by hearing voices and are then called auditory verbal hallucinations (AVHs). While AVHs have been extensively investigated in psychiatric patients suffering from schizophrenia, reports from neurological patients are rare and, in most cases, incomplete. Here, we characterize AVHs in 9 patients suffering from pharmacoresistant epilepsy by analyzing the phenomenology of AVHs and patients' neuropsychological and lesion profiles. From a cohort of 352 consecutively examined patients with epilepsy, 9 patients suffering AVHs were identified and studied by means of a semistructured interview, neuropsychological tests, and multimodal imaging, relying on a combination of functional and structural neuroimaging data and surface and intracranial EEG. We found that AVHs in patients with epilepsy were associated with prevalent language deficits and damage to posterior and basal language areas in the left temporal cortex. The hallucinations most often consisted of hearing a single voice of the same gender and language as the patient and had specific spatial features, most often being perceived in the external space contralateral to the lesion. We argue that the consistent location of AVHs in the contralesional external space, the prominence of associated language deficits, and the prevalence of lesions to the posterior temporal language areas characterize AVHs of neurological origin, distinguishing them from those of psychiatric origin.
Abstract:
BACKGROUND: An auditory perceptual learning paradigm was used to investigate whether implicit memories are formed during general anesthesia. METHODS: Eighty-seven patients who had an American Society of Anesthesiologists physical status of I-III and were scheduled to undergo elective surgery with general anesthesia were randomly assigned to one of two groups. One group received auditory stimulation during surgery, whereas the other did not. The auditory stimulation consisted of pure tones presented via headphones. General anesthesia was induced with thiopental and maintained with a mixture of fentanyl and sevoflurane; the Bispectral Index level was maintained between 40 and 50 during surgery. To assess learning, patients performed an auditory frequency discrimination task after surgery, and comparisons were made between the groups. RESULTS: There was no difference in the amount of learning between the two groups (mean ± SD improvement: stimulated patients 9.2 ± 11.3 Hz, controls 9.4 ± 14.1 Hz). There was also no difference in initial thresholds (mean ± SD: stimulated patients 31.1 ± 33.4 Hz, controls 28.4 ± 34.2 Hz). No correlation between the Bispectral Index and the initial level of performance was found (Pearson r = -0.09, P = 0.59). These results suggest that perceptual learning was not induced during anesthesia. CONCLUSION: Perceptual learning was not induced by repetitive auditory stimulation during anesthesia. This result may indicate that perceptual learning requires top-down processing, which is suppressed by the anesthetic.
Abstract:
Multisensory interactions have been documented within low-level, even primary, cortices and at early post-stimulus latencies. These effects are in turn linked to behavioral and perceptual modulations. In humans, visual cortex excitability, as measured by transcranial magnetic stimulation (TMS) induced phosphenes, can be reliably enhanced by the co-presentation of sounds. This enhancement occurs at pre-perceptual stages and is selective for different types of complex sounds. However, the source(s) of auditory inputs effectuating these excitability changes in primary visual cortex remain disputed. The present study sought to determine if direct connections between low-level auditory cortices and primary visual cortex are mediating these kinds of effects by varying the pitch and bandwidth of the sounds co-presented with single-pulse TMS over the occipital pole. Our results from 10 healthy young adults indicate that both the central frequency and bandwidth of a sound independently affect the excitability of visual cortex during processing stages as early as 30 msec post-sound onset. Such findings are consistent with direct connections mediating early-latency, low-level multisensory interactions within visual cortices.
Abstract:
Recent multisensory research has emphasized the occurrence of early, low-level interactions in humans. As such, it is proving increasingly necessary to also consider the kinds of information likely extracted from the unisensory signals that are available at the time and location of these interaction effects. This review addresses current evidence regarding how the spatio-temporal brain dynamics of auditory information processing likely curtails the information content of multisensory interactions observable in humans at a given latency and within a given brain region. First, we consider the time course of signal propagation as a limitation on when auditory information (of any kind) can impact the responsiveness of a given brain region. Next, we overview the dual pathway model for the treatment of auditory spatial and object information ranging from rudimentary to complex environmental stimuli. These dual pathways are considered an intrinsic feature of auditory information processing, which are not only partially distinct in their associated brain networks, but also (and perhaps more importantly) manifest only after several tens of milliseconds of cortical signal processing. This architecture of auditory functioning would thus pose a constraint on when and in which brain regions specific spatial and object information are available for multisensory interactions. We then separately consider evidence regarding mechanisms and dynamics of spatial and object processing with a particular emphasis on when discriminations along either dimension are likely performed by specific brain regions. We conclude by discussing open issues and directions for future research.
Abstract:
Spatial hearing refers to a set of abilities enabling us to determine the location of sound sources, redirect our attention toward relevant acoustic events, and recognize separate sound sources in noisy environments. Determining the location of sound sources plays a key role in the way in which humans perceive and interact with their environment. Deficits in sound localization abilities are observed after lesions to the neural tissues supporting these functions and can result in serious handicaps in everyday life. These deficits can, however, be remediated (at least to a certain degree) thanks to the surprising capacity for reorganization that the human brain possesses following damage and/or learning, namely, brain plasticity. In this thesis, our aim was to investigate the functional organization of auditory spatial functions and the learning-induced plasticity of these functions. Overall, we describe the results of three studies. The first study, entitled "The role of the right parietal cortex in sound localization: A chronometric single pulse transcranial magnetic stimulation study" (At et al., 2011), study A, investigated the role of the right parietal cortex in spatial functions and its chronometry (i.e. the critical time window of its contribution to sound localization). We concentrated on the behavioral changes produced by the temporary inactivation of the parietal cortex with transcranial magnetic stimulation (TMS). We found that the integrity of the right parietal cortex is crucial for localizing sounds in space, and we determined a critical time window of its involvement, suggesting a right parietal dominance for auditory spatial discrimination in both hemispaces.
In "Distributed coding of the auditory space in man: evidence from training-induced plasticity" (At et al., 2013a), study B, we used electroencephalography (EEG) to investigate the neurophysiological correlates of a multi-day auditory spatial training in healthy subjects, focusing on the different sub-regions of the right auditory hemispace. We report distributed coding of sound locations across numerous auditory regions: particular auditory areas code specifically for precise parts of auditory space, and this regional specificity is enhanced with training. In the third study, "Training-induced changes in auditory spatial mismatch negativity" (At et al., 2013b), study C, we investigated the pre-attentive neurophysiological changes induced by training over 4 days in healthy subjects with a passive mismatch negativity (MMN) paradigm. We showed that training changed the mechanisms for the relative representation of sound positions rather than the specific lateralizations themselves, and that it changed the coding in right parahippocampal regions.
Abstract:
Recent evidence suggests the human auditory system is organized, like the visual system, into a ventral 'what' pathway, devoted to identifying objects, and a dorsal 'where' pathway, devoted to the localization of objects in space [1]. Several brain regions have been identified in these two pathways, but until now little is known about the temporal dynamics of these regions. We investigated this issue using 128-channel auditory evoked potentials (AEPs). Stimuli were stationary sounds created by varying interaural time differences, and environmental real recorded sounds. Stimuli of each condition (localization, recognition) were presented through earphones in a blocked design, while subjects determined their position or meaning, respectively. AEPs were analyzed in terms of their topographical scalp potential distributions (segmentation maps) and underlying neuronal generators (source estimation) [2]. Fourteen scalp potential distributions (maps) best explained the entire data set. Ten maps were nonspecific (associated with auditory stimulation in general), two were specific for sound localization, and two were specific for sound recognition (P-values ranging from 0.02 to 0.045). Condition-specific maps appeared at two distinct time periods: ~200 ms and ~375-550 ms post-stimulus. The brain sources associated with the maps specific for sound localization were mainly situated in the inferior frontal cortices, confirming previous findings [3]. The sources associated with sound recognition were predominantly located in the temporal cortices, with a weaker activation in the frontal cortex. The data show that sound localization and sound recognition engage different brain networks that are apparent at two distinct time periods. References: 1. Maeder et al. Neuroimage 2001. 2. Michel et al. Brain Research Reviews 2001. 3. Ducommun et al. Neuroimage 2002.
Abstract:
Cortical electrical stimulation mapping was used to study the neural substrates of writing in the temporoparietal cortex. We identified the sites involved in oral language (sentence reading and naming) and in writing from dictation, in order to spare these areas during removal of brain tumours in 30 patients (23 in the left, and 7 in the right hemisphere). Electrostimulation of the cortex impaired writing ability in 62 restricted cortical areas (0.25 cm²). These were found in the left temporoparietal lobes and were mostly located along the superior temporal gyrus (Brodmann's areas 22 and 42). Stimulation of the right temporoparietal lobes in right-handed patients produced no writing impairments. However, there was high variability of location between individuals. Stimulation resulted in combined symptoms (affecting oral language and writing) in fourteen patients, whereas in eight other patients stimulation induced pure agraphia, with no oral language disturbance, in twelve of the identified areas. Each detected area affected writing in a different way. We detected the various stages of the auditory-to-motor pathway of writing from dictation: comprehension of the dictated sentences (word deafness areas), lexico-semantic retrieval, or phonologic processing. In group analysis, the barycentres of the different types of writing interference reveal a hierarchical functional organization along the superior temporal gyrus, from initial word recognition to lexico-semantic and phonologic processes along the ventral and dorsal comprehension pathways, supporting the previously described auditory-to-motor process. The left posterior Sylvian region supports different aspects of writing function that are extremely specialized and localized, sometimes segregated in a way that could account for the occurrence of pure agraphia that has long been described in cases of damage to this region.
Abstract:
Accurate perception of the temporal order of sensory events is a prerequisite for numerous functions ranging from language comprehension to motor coordination. We investigated the spatio-temporal brain dynamics of auditory temporal order judgment (aTOJ) using electrical neuroimaging analyses of auditory evoked potentials (AEPs) recorded while participants completed a near-threshold task requiring spatial discrimination of left-right and right-left sound sequences. AEPs to sound pairs modulated topographically as a function of aTOJ accuracy over the 39-77 ms post-stimulus period, indicating the engagement of distinct configurations of brain networks during early auditory processing stages. Source estimations revealed that accurate and inaccurate performance were linked to activity in bilateral posterior sylvian regions (PSR). However, activity within left, but not right, PSR predicted behavioral performance, suggesting that left PSR activity during early encoding of pairs of auditory spatial stimuli is critical for the perception of their order of occurrence. Correlation analyses of source estimations further revealed that activity between left and right PSR was significantly correlated in the inaccurate but not the accurate condition, indicating that aTOJ accuracy depends on the functional decoupling between homotopic PSR areas. These results support a model of temporal order processing wherein behaviorally relevant temporal information--i.e. a temporal 'stamp'--is extracted within the early stages of cortical processing within left PSR but is critically modulated by inputs from right PSR. We discuss our results with regard to current models of temporal order processing, namely gating and latency mechanisms.
Abstract:
Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a continuous recognition task in the auditory modality, discriminating initial (new) from repeated (old) sounds of environmental objects. Initial presentations were either unisensory or multisensory; the latter entailed synchronous presentation of a semantically congruent or a meaningless image. Repeated presentations were exclusively auditory, thus differing only according to the context in which the sound was initially encountered. Discrimination abilities (indexed by d') were increased for repeated sounds that were initially encountered with a semantically congruent image versus sounds initially encountered with either a meaningless or no image. Analyses of ERPs within an electrical neuroimaging framework revealed that early stages of auditory processing of repeated sounds were affected by prior single-trial multisensory contexts. These effects followed from significantly reduced activity within a distributed network, including the right superior temporal cortex, suggesting an inverse relationship between brain activity and behavioural outcome on this task. The present findings demonstrate how auditory cortices contribute to long-term effects of multisensory experiences on auditory object discrimination. We propose a new framework for the efficacy of multisensory processes to impact both current multisensory stimulus processing and unisensory discrimination abilities later in time.
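The d' index used above to quantify discrimination of old versus new sounds is the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch, assuming a standard log-linear correction for extreme rates (the function name and counts are illustrative, not taken from the study):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each count) keeps the rates
    away from 0 and 1, where the inverse normal CDF is undefined.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical continuous-recognition counts: responses to repeated ("old")
# sounds give hits/misses; responses to initial ("new") sounds give
# false alarms/correct rejections.
print(d_prime(45, 5, 10, 40))   # clearly positive: good discrimination
print(d_prime(25, 25, 25, 25))  # 0.0: performance at chance
```

Comparing d' for sounds first encountered with a congruent image versus the other contexts is then a straightforward within-subject contrast.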
Abstract:
Abstract: The fast development of new technologies such as digital medical imaging has brought about an expansion of brain functional studies. One key methodological issue in brain functional studies is comparing neuronal activation between individuals. In this context, the great variability of brain size and shape is a major problem. Current methods allow inter-individual comparisons by normalizing subjects' brains to a standard brain. The most widely used standard brains are the proportional grid of Talairach and Tournoux and the Montreal Neurological Institute (MNI) standard brain (SPM99). However, these methods are not precise enough to superpose the more variable portions of the cerebral cortex (e.g., the neocortex and the perisylvian zone) or brain regions that are highly asymmetric between the two cerebral hemispheres (e.g., the planum temporale). The aim of this thesis is to evaluate a new image processing technique based on non-rigid, model-based registration. Unlike intensity-based registration, model-based registration uses spatial rather than intensity information to fit one image to another. We extract identifiable anatomical features (point landmarks) in both the deforming and target images, and from their correspondence we determine the appropriate deformation in 3D. As landmarks, we use six control points, placed bilaterally: one on Heschl's gyrus, one on the motor hand area, and one on the sylvian fissure. The evaluation of this model-based approach was performed on the MRI and fMRI images of nine of the eighteen subjects who participated in the earlier study by Maeder et al. Results on anatomical (MRI) images show the movement of the deforming brain's control points to the locations of the reference brain's control points; the distance between the deforming brain and the reference brain is smaller after registration than before. Registration of the functional (fMRI) images does not show a significant variation: the small number of landmarks, six control points, is evidently not sufficient to produce significant changes in the fMRI statistical maps. This thesis opens the way to a new registration technique for the cerebral cortex, whose main direction will be improving the registration algorithm by using not a single point as a landmark but many points representing a particular sulcus.
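The core idea above, deriving a 3-D transform from pairs of corresponding control points, can be sketched in its simplest (affine) form as a least-squares fit. This is only an illustration of the correspondence principle; the thesis uses a non-rigid deformation, and the landmark coordinates below are invented, not the thesis's data:

```python
import numpy as np

def fit_affine_3d(src, dst):
    """Least-squares affine transform mapping src landmarks onto dst.

    src, dst: (N, 3) arrays of corresponding 3-D control points, N >= 4.
    Returns a (3, 4) matrix M such that dst_i ~= M @ [x_i, y_i, z_i, 1].
    """
    src_h = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coordinates
    M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)   # minimize ||src_h @ M - dst||
    return M.T

def apply_affine(M, pts):
    """Apply a (3, 4) affine matrix to an (N, 3) array of points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    return pts_h @ M.T

# Invented landmark positions (mm), loosely mimicking bilateral control points
# on Heschl's gyrus, the motor hand area, and the sylvian fissure.
src = np.array([[42., -20., 8.], [-40., -22., 9.],
                [38., -24., 55.], [-36., -26., 54.],
                [50., -8., 2.], [-48., -10., 3.]])
# Simulated target brain: globally scaled and shifted version of src.
dst = src * 1.05 + np.array([2.0, -1.0, 0.5])

M = fit_affine_3d(src, dst)
print(np.allclose(apply_affine(M, src), dst))  # True: the relation is exactly affine
```

A non-rigid method would replace the single global matrix with a spatially varying deformation (e.g. splines anchored at the landmarks), which is why a handful of points constrains fMRI statistical maps so weakly.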
Abstract:
In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and on the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), as well as auditory object (sound recognition) and Token test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved two pathways: along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated the perceptual consciousness attached to speech comprehension. The initial word discrimination process can be considered an "automatic" stage, attentional feedback not being impaired by stimulation as it would be at the lexical-semantic stage. A multimodal organization of the superior temporal gyrus was also detected, since some neurones could be involved in comprehension of visual material and in naming. These findings demonstrate a fine-grained, sub-centimetre cortical representation of speech comprehension processing, mainly in the left superior temporal gyrus, and are in line with dual-stream models of language comprehension.
Abstract:
Because profoundly deaf individuals use a mode of communication entirely different from that of hearing individuals, sign language, and because afferent input from the auditory system is almost completely absent, important functional and structural changes are likely to occur in their brains. Previous studies suggest that this reorganization affects cortical structures along the dorsal visual pathway more strongly than those within the ventral pathway. The hypothesis proposed by Ungerleider and Mishkin (1982) of two visual pathways in the occipital regions, although widely accepted in the scientific community, remains somewhat contested. One pathway, projecting from striate cortex to the posterior parietal regions, is involved in spatial vision; the other, projecting to the inferior temporal cortex, is responsible for form recognition. Goodale and Milner (1992) later proposed that the dorsal pathway, beyond its involvement in visuospatial processing, plays a role in the sensorimotor adjustments needed to guide actions. In this context, it is entirely plausible that a group of people using a sensorimotor language such as sign language in everyday life undergoes cerebral reorganization targeting the dorsal pathway. The aim of the first study was to explore these two visual pathways, and the dorsal pathway in particular, in hearing individuals, using two motion stimuli whose physical characteristics are very similar but which evoke relatively different processing in visual cortical regions.
To this end, a form-defined-by-motion stimulus and a global-motion stimulus were used. Our results indicate that both the dorsal and ventral pathways process a form defined by motion, whereas only the dorsal pathway is activated during a global-motion task with relatively similar psychophysical characteristics. We subsequently used these same stimuli, activating the dorsal and ventral pathways, to examine functional differences in the visual and auditory regions of profoundly deaf individuals. Several studies have described cortical reorganization in visual and auditory regions in response to the absence of a sensory modality. However, the specific involvement of the dorsal and ventral visual pathways remains little studied to date, despite several findings suggesting a greater involvement of the dorsal pathway in visual reorganization in deaf individuals. Using functional brain imaging to investigate these questions, our results ran counter to this hypothesis of reorganization specifically targeting the dorsal pathway. They instead indicate a reorganization that is not specific to the type of stimulation used: the superior temporal gyrus was activated in deaf participants by all of our visual stimuli, regardless of their degree of complexity. The deaf group also showed activation of posterior associative cortex, possibly recruited to process visual information in the absence of competition from the auditory temporal regions. These results add to existing data on the functional changes that can occur throughout the deaf brain; the anatomical correlates of deafness, however, remain poorly understood in this population.
A third study therefore examined the structural changes that can occur in the brains of congenitally or prelingually profoundly deaf individuals. Our results show that several brain regions appear to differ between the deaf and hearing groups. Our analyses revealed volume increases of up to 20% in the frontal lobes, including Broca's area and adjacent regions involved in motor control and language production. The temporal lobes also appear to show morphometric differences, although these did not reach significance. Finally, volume differences were also found in the parts of the corpus callosum containing the axons connecting the temporal and occipital regions of the two hemispheres.