998 results for Auditory sentence processing
Abstract:
The initial timing of face-specific effects in event-related potentials (ERPs) is a point of contention in face processing research. Although effects during the time of the N170 are robust in the literature, inconsistent effects during the time of the P100 challenge the interpretation of the N170 as the initial face-specific ERP effect. The early P100 effects are often attributed to low-level differences between face stimuli and a host of other image categories. Research using sophisticated controls for low-level stimulus characteristics (Rousselet, Husk, Bennett, & Sekuler, 2008) reports robust face effects starting at around 130 ms following stimulus onset. The present study examines the independent components (ICs) of the P100 and N170 complex in the context of a minimally controlled low-level stimulus set and a clear P100 effect for faces versus houses at the scalp. Results indicate that four ICs account for the ERPs to faces and houses in the first 200 ms following stimulus onset. The IC that accounts for the majority of the scalp N170 (icN1a) begins dissociating stimulus conditions at approximately 130 ms, closely replicating the scalp results of Rousselet et al. (2008). The scalp effects at the time of the P100 are accounted for by two constituent ICs (icP1a and icP1b). The IC that projects the greatest voltage at the scalp during the P100 (icP1a) shows a face-minus-house effect over the period of the P100 that is less robust than the N170 effect of icN1a when measured as the average of single-subject differential activation robustness. The second constituent process of the P100 (icP1b), although projecting a smaller voltage to the scalp than icP1a, shows a more robust effect for the face-minus-house contrast starting prior to 100 ms following stimulus onset. 
Further, the effect expressed by icP1b takes the form of a larger negative projection to medial occipital sites for houses than for faces, partially canceling the larger projection of icP1a and thereby enhancing the face positivity at this time. These findings have three main implications for ERP research on face processing. First, the ICs that constitute the face-minus-house P100 effect are independent of the ICs that constitute the N170 effect, suggesting that the P100 and N170 effects are anatomically independent. Second, the timing of the N170 effect can be recovered from scalp ERPs that contain spatio-temporally overlapping effects possibly associated with low-level stimulus characteristics. This unmixing of the EEG signals may reduce the need for highly constrained stimulus sets, a constraint that is not always desirable for a topic so closely tied to ecological validity. Third, unmixing the constituent processes of the EEG signals makes new analysis strategies available. In particular, exploring the relationship between cortical processes over the period of the P100 and N170 ERP complex (and beyond) may provide previously inaccessible answers to questions such as: Is the face effect a special relationship between low-level and high-level processes along the visual stream?
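The ICA-style unmixing described in this abstract can be sketched in a few lines. The sketch below is illustrative only, assuming synthetic data with made-up channel and source counts (the study's actual pipeline and dimensions are not given here); it uses scikit-learn's FastICA to recover component activations and their scalp projections from mixed channel data.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical sketch: unmix multichannel "EEG" into independent components
# (ICs), each with a time course and a scalp projection, as in the
# decomposition of the P100/N170 complex described above. Synthetic data.
rng = np.random.default_rng(0)
n_channels, n_times, n_sources = 32, 500, 4

# Simulate latent cortical sources and mix them into channel space
sources = rng.standard_normal((n_times, n_sources))
mixing = rng.standard_normal((n_sources, n_channels))
eeg = sources @ mixing                  # (n_times, n_channels) scalp data

ica = FastICA(n_components=n_sources, random_state=0)
ics = ica.fit_transform(eeg)            # recovered IC activations over time
scalp_maps = ica.mixing_                # each column: one IC's scalp projection

print(ics.shape, scalp_maps.shape)
```

Condition effects (e.g., face minus house) could then be measured on each IC's time course separately, rather than on the mixed scalp channels.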
Abstract:
Memory is a multi-component cognitive ability to retain and retrieve information presented in different modalities. Research on memory development has shown that memory capacity and processes improve gradually from early childhood to adolescence. Findings on sex differences in memory abilities in early childhood have been inconsistent. Although previous research has demonstrated effects of the modality of stimulus presentation (auditory versus visual) and the type of material to be remembered (visual/spatial versus auditory/verbal) on memory processes and memory organization, recent research with children is rather limited. The present study is a secondary analysis of data originally collected from 530 typically developing Turkish children and adolescents. Its purpose was to examine age-related developments and sex differences in auditory-verbal and visual-spatial short-term memory (STM) in 177 typically developing male and female children, 5 to 8 years of age. Dot-Locations and Word-Lists from the Children's Memory Scale were used to measure visual-spatial and auditory-verbal STM performance, respectively. The findings suggest age-related differences in both visual-spatial and auditory-verbal STM. Sex differences were observed in only one visual-spatial STM subtest. Modality comparisons revealed age- and task-related differences between auditory-verbal and visual-spatial STM performance. There were no sex-related effects in modality-specific performance. Overall, the results provide evidence of STM development in early childhood, and these effects were mostly independent of sex and the modality of the task.
Abstract:
As important social stimuli, faces play a critical role in our lives. Much of our interaction with other people depends on our ability to recognize faces accurately. It has been proposed that face processing consists of different stages and interacts with other systems (Bruce & Young, 1986). At a perceptual level, the initial two stages, namely structural encoding and face recognition, are particularly relevant and are the focus of this dissertation. Event-related potentials (ERPs) are averaged EEG signals time-locked to a particular event (such as the presentation of a face). With their excellent temporal resolution, ERPs can provide important timing information about neural processes. Previous research has identified several ERP components that are especially related to face processing, including the N170, the P2 and the N250. Their nature with respect to the stages of face processing is still unclear, and is examined in Studies 1 and 2. In Study 1, participants made gender decisions on a large set of female faces interspersed with a few male faces. The ERP responses to facial characteristics of the female faces indicated that the N170 amplitude from each side of the head was affected by information from the eye region and by facial layout: the right N170 was affected by eye color and by face width, while the left N170 was affected by eye size and by the relation between the sizes of the top and bottom parts of a face. In contrast, the P100 and the N250 components were largely unaffected by facial characteristics. These results thus provided direct evidence for the link between the N170 and structural encoding of faces. In Study 2, focusing on the face recognition stage, we manipulated face identity strength by morphing individual faces to an "average" face. Participants performed a face identification task. 
The effect of face identity strength was found on the late P2 and the N250 components: as identity strength decreased from an individual face to the "average" face, the late P2 increased and the N250 decreased. In contrast, the P100, the N170 and the early P2 components were not affected by face identity strength. These results suggest that face recognition occurs after 200 ms, but not earlier. Finally, because faces are often associated with social information, we investigated in Study 3 how group membership might affect ERP responses to faces. After participants learned in- and out-group memberships of the face stimuli based on arbitrarily assigned nationality and university affiliation, we found that the N170 latency differentiated in-group and out-group faces, taking longer to process the latter. In comparison, without group memberships, there was no difference in N170 latency among the faces. This dissertation provides evidence that at a neural level, structural encoding of faces, indexed by the N170, occurs within 200 ms. Face recognition, indexed by the late P2 and the N250, occurs shortly afterwards between 200 and 300 ms. Social cognitive factors can also influence face processing. The effect is already evident as early as 130-200 ms at the structural encoding stage.
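The definition of an ERP given above (averaged EEG time-locked to an event) can be made concrete with a small sketch. Everything below is synthetic and illustrative, assuming made-up trial counts and a toy evoked waveform; it simply shows how averaging across trials cancels trial-by-trial noise and leaves the time-locked response.

```python
import numpy as np

# Minimal sketch of ERP formation: average epochs time-locked to stimulus
# onset so random noise cancels and the evoked response remains.
rng = np.random.default_rng(1)
n_trials, n_times = 100, 300

evoked = np.sin(np.linspace(0, np.pi, n_times))            # shared response
epochs = evoked + rng.standard_normal((n_trials, n_times)) # + per-trial noise

erp = epochs.mean(axis=0)   # averaging across trials yields the ERP
print(erp.shape)
```

Components such as the N170 or N250 are then measured as amplitudes or latencies within windows of this averaged waveform.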
Abstract:
When the second of two targets (T2) is presented temporally close to the first target (T1) in rapid serial visual presentation, accuracy to detect/identify T2 is markedly reduced as compared to longer target separations. This is known as the attentional blink (AB), and is thought to reflect a limitation of selective attention. While most individuals show an AB, research has demonstrated that individuals vary in their susceptibility to this effect. To explain these differences, Dale and Arnell (2010) examined whether dispositional differences in attentional breadth, as measured by the Navon letter task, could predict individual AB magnitude. They found that individuals who showed a natural bias toward the broad, global level of Navon letter stimuli were less susceptible to the AB than individuals who showed a natural bias toward the detailed, local aspects of Navon letter stimuli. This suggests that individuals who naturally broaden their attention can overcome the AB. However, it was unclear how stable these individual differences were over time, and whether a variety of global/local tasks could predict AB performance. As such, the purpose of this dissertation was to investigate, through four empirical studies, the nature of individual differences in both global/local bias and the AB, and how these differences in attentional breadth can modulate AB performance. Study 1 was designed to examine the stability of dispositional global/local biases over time, as well as the relationships among three different global/local processing measures. Study 2 examined the stability of individual differences in the AB, as well as the relationship between two distinct AB tasks. Study 3 examined whether the three distinct global/local tasks used in Study 1 could predict performance on the two AB tasks from Study 2. 
Finally, Study 4 explored whether individual differences in global/local bias could be manipulated by exposing participants to high/low spatial frequencies and Navon stimuli. In Study 1, I showed that dispositional differences in global/local bias were reliable over a period of at least a week, demonstrating that these individual biases may be trait-like. However, the three tasks that purportedly measure global/local bias were unrelated to each other, suggesting that they measure unique aspects of global/local processing. In Study 2, I found that individual variation in AB performance was also reliable over a period of at least a week, and that the two AB task versions were correlated. Study 3 showed that dispositional global/local biases, as measured by the three tasks from Study 1, predicted AB magnitude, such that individuals who were naturally globally biased had smaller ABs. Finally, in Study 4 I demonstrated that these dispositional global/local biases are resistant to both spatial frequency and Navon letter manipulations, indicating that these differences are robust and intractable. Overall, the results of the four studies in this dissertation help clarify the role of individual differences in attentional breadth in selective attention.
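A common way to quantify AB magnitude, consistent with the description above, is the difference between T2|T1 accuracy at long and short lags. The sketch below is a hedged illustration: the specific lags and accuracy values are invented, not data from these studies.

```python
# Hypothetical sketch: AB magnitude as mean long-lag T2|T1 accuracy minus
# mean short-lag T2|T1 accuracy. Lags and accuracies below are made up.
def ab_magnitude(acc_by_lag, short_lags=(2, 3), long_lags=(7, 8)):
    """Larger values indicate a bigger attentional blink."""
    mean = lambda lags: sum(acc_by_lag[lag] for lag in lags) / len(lags)
    return mean(long_lags) - mean(short_lags)

acc = {2: 0.55, 3: 0.60, 7: 0.85, 8: 0.90}   # T2|T1 accuracy per lag
print(round(ab_magnitude(acc), 3))
```

Individual-difference analyses like those in Study 3 would then correlate this per-participant magnitude with global/local bias scores.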
Abstract:
Based on the theoretical framework of Dressler and Dziubalska-Kołaczyk (2006a,b), the Strong Morphonotactic Hypothesis will be tested. It assumes that phonotactics helps in the decomposition of words into morphemes: if a certain sequence occurs only, or by default, over a morpheme boundary, and is thus a prototypical morphonotactic sequence, it should be processed faster and more accurately than a purely phonotactic sequence. Studies on typical and atypical first language acquisition in English, Lithuanian and Polish have shown significant differences between the acquisition of morphonotactic and phonotactic consonant clusters: morphonotactic clusters are acquired earlier and faster by typically developing children, but are more problematic for children with Specific Language Impairment. However, results on acquisition are less clear for German. The focus of this contribution is whether and how German-speaking adults differentiate between morphonotactic and phonotactic consonant clusters and vowel-consonant sequences in visual word recognition. It investigates whether sub-lexical letter sequences are found faster when the target sequence is separated from the word stem by a morphological boundary than when it is part of a morphological root. An additional factor concerns the position of the target cluster in the word. Due to the bathtub effect, sequences in peripheral positions in a word are more salient and thus facilitate processing more than sequences in word-internal positions. Moreover, for adults the primacy effect most favors word-initial position (whereas for young children the recency effect most favors word-final position). Our study discusses effects of phonotactic vs. morphonotactic cluster status and of position within the word.
Abstract:
Lexical processing among bilinguals is often affected by complex patterns of individual experience. In this paper we discuss the psychocentric perspective on language representation and processing, which highlights the centrality of individual experience in psycholinguistic experimentation. We discuss applications to the investigation of lexical processing among multilinguals and explore the advantages of using high-density experiments with multilinguals. High-density experiments are designed to co-index measures of lexical perception and production, as well as participant profiles. We discuss the challenges associated with the characterization of participant profiles and present a new data visualization technique that we term Facial Profiles. This technique is based on Chernoff faces, developed over 40 years ago. The Facial Profile technique seeks to overcome some of the challenges associated with the use of Chernoff faces, while maintaining the core insight that recoding multivariate data as facial features can engage the human face recognition system and thus enhance our ability to detect and interpret patterns within multivariate datasets. We demonstrate that Facial Profiles can code participant characteristics in lexical processing studies by recoding variables such as reading ability, speaking ability, and listening ability into iconically related relative sizes of eye, mouth, and ear, respectively. The balance of ability in bilinguals can be captured by creating composite facial profiles or Janus Facial Profiles. We demonstrate the use of Facial Profiles and Janus Facial Profiles in the characterization of participant effects in the study of lexical perception and production.
Abstract:
This lexical decision study with eye tracking of Japanese two-kanji-character words investigated the order in which a whole two-character word and its morphographic constituents are activated in the course of lexical access, the relative contributions of the left and the right characters in lexical decision, the depth to which semantic radicals are processed, and how nonlinguistic factors affect lexical processes. Mixed-effects regression analyses of response times and subgaze durations (i.e., first-pass fixation time spent on each of the two characters) revealed joint contributions of morphographic units at all levels of the linguistic structure with the magnitude and the direction of the lexical effects modulated by readers’ locus of attention in a left-to-right preferred processing path. During the early time frame, character effects were larger in magnitude and more robust than radical and whole-word effects, regardless of the font size and the type of nonwords. Extending previous radical-based and character-based models, we propose a task/decision-sensitive character-driven processing model with a level-skipping assumption: Connections from the feature level bypass the lower radical level and link up directly to the higher character level.
Abstract:
Self-regulation is considered a powerful predictor of behavioral and mental health outcomes during adolescence and emerging adulthood. In this dissertation I address some electrophysiological and genetic correlates of this important skill set in a series of four studies. Across all studies, event-related potentials (ERPs) were recorded as participants responded to tones presented in attended and unattended channels in an auditory selective attention task. In Study 1, examining these ERPs in relation to parental reports on the Behavior Rating Inventory of Executive Function (BRIEF) revealed that an early frontal positivity (EFP) elicited by to-be-ignored/unattended tones was larger in those with poorer self-regulation. As is traditionally found, N1 amplitudes were more negative for the to-be-attended than the unattended tones. Additionally, N1 latencies to unattended tones correlated with parent ratings on the BRIEF, where shorter latencies predicted better self-regulation. In Study 2 I tested a model of the associations between self-regulation scores and allelic variations in monoamine neurotransmitter genes, and their concurrent links to ERP markers of attentional control. Allelic variations in dopamine-related genes predicted both my ERP markers and self-regulatory variables, and played a moderating role in the association between the two. In Study 3 I examined whether training in Integra Mindfulness Martial Arts, an intervention program which trains elements of self-regulation, would lead to improvement in ERP markers of attentional control and parent-report BRIEF scores in a group of adolescents with self-regulatory difficulties. I found that those in the treatment group amplified their processing of attended relative to unattended stimuli over time, and reduced their levels of problematic behaviour, whereas those in the waitlist control group showed little to no change on both of these metrics. 
In Study 4 I examined potential associations between self-regulation and attentional control in a group of emerging adults. Both event-related spectral perturbations (ERSPs) and intertrial coherence (ITC) in the alpha and theta range predicted individual differences in self-regulation. Across the four studies I was able to conclude that real-world self-regulation is indeed associated with the neural markers of attentional control. Targeted interventions focusing on attentional control may improve self-regulation in those experiencing difficulties in this regard.
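Intertrial coherence (ITC), one of the measures named in Study 4, has a compact standard definition: the magnitude of the mean unit-length phase vector across trials at a given time-frequency point. The sketch below illustrates that definition on synthetic phases; the trial counts and phase distributions are invented.

```python
import numpy as np

# Sketch of intertrial coherence (ITC): |mean of exp(i * phase)| across
# trials. Values near 1 mean phases align across trials; near 0, random.
rng = np.random.default_rng(3)

def itc(phases):
    """ITC at one time-frequency point from per-trial phase angles."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

aligned = rng.normal(loc=0.0, scale=0.2, size=200)   # tightly clustered
scattered = rng.uniform(0, 2 * np.pi, size=200)      # uniformly random

print(itc(aligned) > 0.9, itc(scattered) < 0.3)
```

In practice the per-trial phases would come from a time-frequency decomposition (e.g., wavelets) of the EEG rather than being simulated.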
Abstract:
Neural models of the processing of illusory contours (ICs) diverge from one another in terms of their emphasis on bottom-up versus top-down constituents. The current study uses a dichoptic fusion paradigm to block top-down awareness of ICs in order to examine possible bottom-up effects. Group results indicate that the N170 ERP component is particularly sensitive to ICs at central occipital sites when top-down awareness of the stimulus is permitted. Furthermore, single-subject statistics reveal that the IC N170 ERP effect is highly variable across individuals in terms of timing and topographical spread. The results suggest that the ubiquitous N170 effect to ICs found in the literature depends, at least in part, on participants' awareness of the stimulus. Therefore, a strong bottom-up model of IC processing at the time of the N170 is unlikely.
Abstract:
The current study sought to investigate the nature of empathic responding and emotion processing in persons who have experienced Mild Head Injury (MHI), and how the relationship between empathic responding and head injury status may differ in those with higher psychopathic characteristics (i.e., subclinical psychopathy). One hundred university students (36% reporting a previous MHI) completed an Emotional Processing Task (EPT) using images of neutral and negative valence (IAPS, 2008) designed to evoke empathy; physiological responses were recorded. Additionally, participants completed measures of cognitive competence, various individual-difference measures (empathy: QCAE, Reniers, 2011; psychopathy: SRP-III, Williams, Paulhus & Hare, 2007), and demographics questionnaires. MHI was found to be associated with lower affective empathy and physiological reactivity (pulse rate) while viewing images irrespective of valence, reflecting a pattern of generalized underarousal. The empathic deficits observed correlated with the individual's severity of injury, such that the greater the number of injury characteristics and symptoms endorsed by a subject, the more dampened the affective and cognitive empathy reactions to stimuli and the lower his/her physiological reactivity. Importantly, psychopathy interacted with head injury status such that the effects of psychopathy were significant only for individuals indicating a MHI. This group, i.e., MHI subjects who scored higher on psychopathy, displayed the greatest compromise in empathic responding. Interestingly, the Callous Affect component of psychopathy was found to account for the empathic and emotion processing deficits observed for individuals who report a MHI; in contrast, the Interpersonal Manipulation component emerged as a better predictor of the empathic and emotion deficits observed in the No MHI group. 
These different patterns may indicate the involvement of different underlying processes in the manifestation of empathic deficits associated with head injury or subclinical psychopathy. It also highlights the importance of assessing for prior head injury in populations with higher psychopathic characteristics due to the possible combined/enhanced influences. The results of this study have important social implications for persons who have experienced a concussion or limited neural trauma since even subtle injury to the head may be sufficient to produce dampened emotion processing, thereby impacting one’s social interactions and engagement (i.e., at risk for social isolation or altered interpersonal success). Individuals who experience MHI in conjunction with certain personality profiles (e.g., higher psychopathic characteristics) may be particularly at risk for being less capable of empathic compassion and socially-acceptable pragmatics and, as a result, may not be responsive to another person’s emotional well-being.
Abstract:
The use of information and communication technologies in the health and social service sectors, and the development of multi-centred and international research networks, present many benefits for society: for example, better follow-up of an individual's state of health, better quality of care, better control of expenses, and better communication between healthcare professionals. However, this approach raises issues relative to the protection of privacy: more specifically, to the processing of individual health information.
Abstract:
The role of the inferior colliculus in various auditory processes remains poorly understood in humans. Using behavioural and electrophysiological assessments, the aim of these studies was to examine the functional integrity of the auditory nervous system in a person with a unilateral lesion of the inferior colliculus. The results of these studies suggest that the inferior colliculus is not involved in the detection of pure tones, speech recognition in quiet, or binaural interaction. However, the data suggest that the inferior colliculus is involved in the recognition of monaurally presented words in noise, frequency discrimination, duration recognition, binaural separation, binaural integration, sound-source localization and, finally, multisensory integration of speech.
Abstract:
Pain is a perceptual experience with many dimensions. These dimensions are interrelated and recruit neural networks that process the corresponding information. Elucidating the functional architecture that supports the different perceptual aspects of the experience is therefore a fundamental step toward understanding the functional role of the different regions of the cerebral pain matrix within the cortical circuits underlying the subjective experience of pain. Among the various brain regions involved in processing nociceptive information, the primary and secondary somatosensory cortices (S1 and S2) are the main regions generally associated with processing the sensory-discriminative aspect of pain. However, the functional organization within these somatosensory regions is not entirely clear, and relatively few studies have directly examined the integration of information across somatosensory regions. Several questions therefore remain concerning the hierarchical relationship between S1 and S2, as well as the functional role of the interhemispheric connections between homologous somatosensory regions. Likewise, serial versus parallel processing within the somatosensory system is another open question requiring closer examination. The aim of the present study was to test a number of hypotheses about causality in the functional interactions between S1 and S2 while subjects received painful electric shocks. We implemented a connectivity-modelling method, which uses a causal description of the system's dynamics, to study the interactions between activation sites defined from a functional imaging dataset. 
Our paradigm consisted of three experimental sessions using electric shocks at three intensity levels: moderately painful (level 3), slightly painful (level 2), and completely non-painful (level 1). It therefore allowed us to study how stimulus intensity is encoded in our network of interest, and how the connectivity of the different regions is modulated across stimulation conditions. Our results favour a serial mode of processing of nociceptive somatosensory information, with a predominant thalamocortical input to S1 contralateral to the stimulation site. They imply that information propagates from contralateral S1 through our network of interest, composed of bilateral S1 and S2. Our analysis indicates that the S1→S2 connection is strengthened by pain, suggesting that S2 is higher in the pain-processing hierarchy than S1, in line with previous neurophysiological and magnetoencephalography findings. Finally, our analysis provides evidence that somatosensory information enters the hemisphere contralateral to the stimulated side, with interhemispheric connections responsible for transferring the information to the ipsilateral hemisphere.
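The abstract does not specify which causal connectivity model was used, so as a plain illustration of the general idea (does one region's past improve prediction of another's present?), here is a minimal lagged-regression, Granger-style sketch on synthetic time series. The data, lag order, and variable names are all invented.

```python
import numpy as np

# Illustrative Granger-style sketch of directed influence between two
# regions' time series (not the study's actual method). Synthetic data:
# "s2" is driven by the previous sample of "s1".
rng = np.random.default_rng(2)
n = 1000
s1 = rng.standard_normal(n)
s2 = np.zeros(n)
for t in range(1, n):
    s2[t] = 0.8 * s1[t - 1] + 0.1 * rng.standard_normal()

def resid_var(y, preds):
    """Residual variance of y regressed on predictor columns (+ intercept)."""
    X = np.column_stack([np.ones(len(y))] + preds)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ beta)

# Does past s1 improve prediction of s2 beyond s2's own past?
full = resid_var(s2[1:], [s2[:-1], s1[:-1]])
restricted = resid_var(s2[1:], [s2[:-1]])
print(full < restricted)   # smaller residual variance -> s1 helps predict s2
```

Real connectivity analyses of fMRI data would additionally model the hemodynamic response and compare alternative network structures rather than a single pair.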
Abstract:
The human voice is the dominant part of our auditory environment. Not only do humans use the voice for speech, they are equally adept at extracting from it a wealth of relevant information about the speaker. This universal expertise for the human voice is reflected in the presence of voice-preferential areas along the superior temporal sulci. To date, little is known about the nature and development of this voice-selective response. In the visual domain, a vast literature addresses a similar question regarding face perception. The study of visual experts has helped identify the processes and regions involved in their expertise and has shown a strong resemblance to those used for faces. In the auditory domain, very few studies have compared expertise for the voice with expertise for other auditory categories, even though such comparisons could contribute to a better understanding of vocal and auditory perception. The aim of this thesis is to specify the processes and regions involved in voice processing. To this end, different types of experts were recruited and different experimental methods were used. The first study assessed the influence of musical expertise on the processing of the human voice, using behavioural discrimination tasks with voices and musical instruments. The results showed that amateur musicians were better than non-musicians at discriminating the timbres of musical instruments, but also human voices, suggesting a generalization of the perceptual learning brought about by musical practice. The second study compared auditory evoked potentials to birdsong between amateur birdwatchers and novice participants. 
The observation of a different topographic distribution in the birdwatchers for the three sound categories presented (voices, birdsong, environmental sounds) made the results difficult to interpret. The third study sought to specify the role of the temporal voice areas in the processing of categories of expertise in two groups of auditory experts: amateur birdwatchers and luthiers. Behavioural data showed an interaction between the two expert groups and their respective categories of expertise on discrimination and memorization tasks. Functional magnetic resonance imaging results showed an interaction of the same type in the left superior temporal sulcus and the left posterior cingulate gyrus. Thus, the voice areas are involved in processing stimuli of expertise in two different groups of auditory experts. This result suggests that selectivity for the human voice, as found in the superior temporal sulci, could be explained by prolonged exposure to these stimuli. The data presented demonstrate several behavioural and anatomo-functional similarities between the processing of the voice and that of other categories of expertise. These commonalities can be explained by an organization of the brain that is at once functional and economical. Consequently, the processing of the voice and of other sound categories would rely on the same neural networks, except where deeper processing is required. This interpretation is particularly important for proposing an integrative approach to the specificity of voice processing.
Abstract:
Because they use a mode of communication completely different from that of hearing individuals, sign language, and receive almost no afferent input from the auditory system, profoundly deaf individuals are likely to undergo substantial functional and structural changes in the brain. Previous studies suggest that this reorganization is likely to have greater repercussions on the cortical structures along the dorsal visual pathway than on those within the ventral pathway. The hypothesis proposed by Ungerleider and Mishkin (1982) of two visual pathways in the occipital regions, although still widely accepted in the scientific community, is also somewhat contested. One pathway, projecting from the striate cortex to the posterior parietal regions, is involved in spatial vision; the other, projecting to the regions of the inferior temporal cortex, is responsible for form recognition. Goodale and Milner (1992) subsequently proposed that the dorsal pathway, in addition to its involvement in processing visuospatial information, plays a role in the sensorimotor adjustments needed to guide actions. In this context, it is entirely plausible that a group of people using a sensorimotor language such as sign language in everyday life would undergo cerebral reorganization specifically targeting the dorsal pathway. The objective of the first study was to explore these two visual pathways, and the dorsal pathway in particular, in hearing individuals, using two motion stimuli whose physical characteristics are very similar but which evoke relatively different processing in the visual cortical regions. 
To this end, a form-from-motion stimulus and a global-motion stimulus were used. Our results indicate that the dorsal and ventral pathways both process a form defined by motion, whereas only the dorsal pathway is activated during a global-motion task with relatively similar psychophysical characteristics. We subsequently used these same stimulations, which activate the dorsal and ventral pathways, to examine functional differences in the visual and auditory regions of profoundly deaf individuals. Several studies have documented cortical reorganization in visual and auditory regions in response to the absence of a sensory modality. However, the specific involvement of the dorsal and ventral visual pathways remains little studied to date, despite several findings suggesting greater involvement of the dorsal pathway in visual reorganization in deaf individuals. Using functional brain imaging to investigate these questions, our results ran counter to the hypothesis of a reorganization specifically targeting the dorsal pathway. Instead, they indicate a reorganization that is non-specific to the type of stimulation used. Indeed, the superior temporal gyrus was activated in deaf participants by all of our visual stimulations, regardless of their degree of complexity. The deaf group also showed activation of the posterior associative cortex, possibly recruited to process visual information owing to the absence of competition from the auditory temporal regions. These results add to the data already gathered on the functional changes that can occur throughout the brain of deaf individuals; the anatomical correlates of deafness, however, remain poorly understood in this population. 
A third study therefore examined the structural changes that can occur in the brains of congenitally or prelingually profoundly deaf individuals. Our results show that several brain regions appear to differ between the deaf and hearing groups. Our analyses revealed volume increases of up to 20% in the frontal lobes, including Broca's area and other adjacent regions involved in motor control and language production. The temporal lobes also appear to show morphometric differences, although these did not reach significance. Finally, volume differences were also found in the parts of the corpus callosum containing the axons that allow communication between the temporal and occipital regions of the two hemispheres.