993 results for Middle temporal gyrus


Relevance: 100.00%

Abstract:

INTRODUCTION: Recall of past pain is often inaccurate. This phenomenon, known as memory bias, may be linked to the development of certain chronic pain conditions. In a previous study, our laboratory showed with electroencephalography that activity in the superior temporal gyrus (STG) was positively correlated with the exaggeration of pain memories. The objective of this study was to confirm whether STG activity is causally involved in the memory-bias phenomenon. METHODS: In this randomized, double-blind study, transcranial magnetic stimulation (TMS) was used to transiently disrupt STG activity (virtual-lesion paradigm). Participants were randomly assigned to the control group (sham TMS, n = 21) or the experimental group (real TMS, n = 21). Pain intensity and unpleasantness were rated on visual analogue scales (VAS; 0 to 10) immediately after the painful event (electrical stimulation of the right sural nerve) and at recall, 2 months later. The accuracy of pain recall was calculated as the difference between the VAS rating at recall and the initial VAS rating. RESULTS: Memory bias for pain intensity was similar in the two groups (control = -0.3, experimental = 0.0; p = 0.83), whereas memory bias for pain unpleasantness was significantly lower in the experimental group (control = 1.0, experimental = -0.4; p < 0.05). CONCLUSION: Our results suggest that the STG specifically affects memories related to the motivational-affective dimension of pain. Given the link between exaggerated pain memories and pain persistence, inhibiting the STG could be a promising avenue for preventing the development of chronic pain.
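
As a rough illustration of the group comparison described above, the sketch below computes a memory-bias score per participant and contrasts the sham and real TMS groups. It is only a minimal sketch with hypothetical arrays (`vas_initial`, `vas_recall`), an assumed sign convention (recall minus initial), and a two-sample t-test chosen for illustration; the abstract does not specify which test was actually used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def memory_bias(vas_initial, vas_recall):
    # Assumed convention: positive bias = pain remembered as worse than experienced.
    return np.asarray(vas_recall) - np.asarray(vas_initial)

# Toy VAS ratings (0-10) standing in for the n = 21 + 21 participants of the study.
bias_sham = memory_bias(rng.uniform(2, 8, 21), rng.uniform(2, 9, 21))
bias_real = memory_bias(rng.uniform(2, 8, 21), rng.uniform(2, 8, 21))

# Between-group comparison of the bias scores (test choice is an assumption).
t, p = stats.ttest_ind(bias_sham, bias_real)
print(f"mean bias sham = {bias_sham.mean():.2f}, real = {bias_real.mean():.2f}, p = {p:.3f}")
```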

Relevance: 90.00%

Abstract:

Machado-Joseph disease (MJD/SCA3) is the most frequent spinocerebellar ataxia, characterized by brainstem, basal ganglia and cerebellar damage. Few magnetic resonance imaging based studies have investigated damage in the cerebral cortex. The objective was to determine whether patients with MJD/SCA3 have cerebral cortex atrophy, to identify the regions most susceptible to damage and to look for the clinical and neuropsychological correlates of such lesions. Forty-nine patients with MJD/SCA3 (mean age 47.7 ± 13.0 years, 27 men) and 49 matched healthy controls were enrolled. All subjects underwent magnetic resonance imaging in a 3 T scanner, and three-dimensional T1 images were used for volumetric analyses. Cortical thickness and volume were measured with the FreeSurfer software. Groups were compared using ANCOVA with age, gender and estimated intracranial volume as covariates, and a general linear model was used to assess correlations between atrophy and clinical variables. Mean CAG expansion, Scale for the Assessment and Rating of Ataxia (SARA) score and age at onset were 72.1 ± 4.2, 14.7 ± 7.3 and 37.5 ± 12.5 years, respectively. The main findings were (i) atrophy of the paracentral cortex bilaterally, as well as of the caudal middle frontal gyrus, superior and transverse temporal gyri and lateral occipital cortex in the left hemisphere and the supramarginal gyrus in the right hemisphere; (ii) volumetric reduction of the basal ganglia and hippocampi; (iii) a significant correlation between SARA scores and brainstem and precentral gyrus atrophy. Furthermore, some of the affected cortical regions showed significant correlations with neuropsychological data. Patients with MJD/SCA3 have widespread cortical and subcortical atrophy. These structural findings correlate with clinical manifestations of the disease, which supports the concept that cognitive/motor impairment and cerebral damage are related in this disease.
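
The group comparison described above (ANCOVA with age, gender and estimated intracranial volume as covariates) can be sketched as follows. This is not the authors' FreeSurfer pipeline; it is a minimal illustration on a hypothetical table with one cortical-thickness value per subject, and the column names (`thickness`, `group`, `sex`, `eTIV`) are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical per-subject data for a single cortical region.
df = pd.DataFrame({
    "thickness": [2.4, 2.5, 2.3, 2.6, 2.2, 2.7, 2.1, 2.5],
    "group":     ["MJD", "MJD", "MJD", "MJD", "CTRL", "CTRL", "CTRL", "CTRL"],
    "age":       [45, 50, 38, 61, 47, 52, 40, 59],
    "sex":       ["M", "F", "M", "F", "M", "F", "M", "F"],
    "eTIV":      [1450, 1380, 1500, 1420, 1460, 1390, 1510, 1430],
})

# ANCOVA: group effect on thickness, adjusting for age, sex and intracranial volume.
model = smf.ols("thickness ~ C(group) + age + C(sex) + eTIV", data=df).fit()
print(anova_lm(model, typ=2))
```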

Relevance: 90.00%

Abstract:

In social anxiety disorder (SAD), impairments in limbic/paralimbic structures are associated with emotional dysregulation and with inhibition of the medial prefrontal cortex (MPFC). Little is known, however, about alterations in limbic and frontal regions as reflected in the integrated morphometric, functional and structural architecture of SAD, or about whether altered gray matter volume is associated with altered functional and structural connectivity. Three techniques were used with 18 SAD patients and 18 healthy controls: voxel-based morphometry, resting-state functional connectivity analysis, and diffusion tensor imaging tractography. SAD patients exhibited significantly decreased gray matter volumes in the right posterior inferior temporal gyrus (ITG) and right parahippocampal/hippocampal gyrus (PHG/HIP). Gray matter volumes in these two regions correlated negatively with the fear factor of the Liebowitz Social Anxiety Scale. In addition, we found increased functional connectivity in SAD patients between the right posterior ITG and the left inferior occipital gyrus, and between the right PHG/HIP and the left middle temporal gyrus. SAD patients also had increased right MPFC volume, along with enhanced structural connectivity in the genu of the corpus callosum. Reduced limbic/paralimbic volume, together with increased resting-state functional connectivity, suggests the existence of a compensatory mechanism in SAD. Increased MPFC volume, consonant with enhanced structural connectivity, suggests a long-standing overgeneralization of structural connectivity and a role for this area in mediating clinical severity. Overall, our results may provide a valuable basis for future studies combining morphometric, functional and anatomical data in the search for a comprehensive understanding of the neural circuitry underlying SAD.
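
A simplified illustration of the resting-state functional connectivity analysis mentioned above is a seed-based correlation between regional time series, followed by a Fisher r-to-z transform before group statistics. The region names, array shapes and coupling strength below are hypothetical; the actual study used whole-brain analyses.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical mean BOLD time series (200 time points per region).
ts_itg = rng.standard_normal(200)                   # right posterior ITG seed
ts_iog = 0.4 * ts_itg + rng.standard_normal(200)    # left inferior occipital gyrus

# Seed-based functional connectivity: Pearson correlation between time series.
r = np.corrcoef(ts_itg, ts_iog)[0, 1]

# Fisher r-to-z transform, commonly applied before between-group comparisons.
z = np.arctanh(r)
print(f"r = {r:.2f}, z = {z:.2f}")
```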

Relevance: 90.00%

Abstract:

Introduction: Discrimination of species-specific vocalizations is fundamental for survival and social interactions. Its unique behavioral relevance has encouraged the identification of circumscribed brain regions exhibiting selective responses (Belin et al., 2004), while the role of network dynamics has received less attention. Those studies that have examined the brain dynamics of vocalization discrimination leave unresolved the timing and the inter-relationship between general categorization, attention, and speech-related processes (Levy et al., 2001, 2003; Charest et al., 2009). Given these discrepancies and the presence of several confounding factors, electrical neuroimaging analyses were applied to auditory evoked potentials (AEPs) in response to acoustically and psychophysically controlled non-verbal human and animal vocalizations. This revealed which region(s) exhibit voice-sensitive responses and in which sequence. Methods: Subjects (N=10) performed a living vs. man-made 'oddball' auditory discrimination task, such that on a given block of trials 'target' stimuli occurred 10% of the time. Stimuli were complex, meaningful sounds of 500 ms duration. There were 120 different sound files in total, 60 of which represented sounds of living objects and 60 sounds of man-made objects. The stimuli that were the focus of the present investigation were restricted to those of living objects within blocks where no response was required. These stimuli were further sorted into human non-verbal vocalizations and animal vocalizations. They were also controlled in terms of their spectrograms and formant distributions. Continuous 64-channel EEG was acquired through Neuroscan Synamps referenced to the nose, band-pass filtered at 0.05-200 Hz, and digitized at 1000 Hz. Peri-stimulus epochs of continuous EEG (-100 ms to 900 ms) were visually inspected for artifacts, low-pass filtered at 40 Hz and baseline corrected using the pre-stimulus period. Averages were computed for each subject separately. AEPs in response to animal and human vocalizations were analyzed with respect to differences in Global Field Power (GFP) and with respect to changes in the voltage configurations at the scalp (reviewed in Murray et al., 2008). The former provides a measure of the strength of the electric field irrespective of topographic differences; the latter identifies changes in the spatial configuration of the underlying sources independently of response strength. In addition, we utilized the local auto-regressive average distributed linear inverse solution (LAURA; Grave de Peralta Menendez et al., 2001) to visualize and statistically contrast the likely underlying sources of the effects identified in the preceding analysis steps. Results: We found differential activity in response to human vocalizations over three periods in the post-stimulus interval, and this response was always stronger than that to animal vocalizations. The first differential response (169-219 ms) was a consequence of a modulation in strength of a common brain network localized to the right superior temporal sulcus (STS; Brodmann's Area (BA) 22) and extending into the superior temporal gyrus (STG; BA 41). A second difference (291-357 ms) also followed from strength modulations of a common network, with statistical differences localized to the left inferior precentral and prefrontal gyri (BA 6/45). These first two strength modulations were correlated (Spearman's rho(8) = 0.770; p = 0.009), indicative of functional coupling between temporally segregated stages of vocalization discrimination.
A third difference (389-667 ms) followed from both strength and topographic modulations and was localized to the left superior frontal gyrus (BA 10), although this third difference did not reach our spatial criterion of 12 contiguous voxels. Conclusions: We show that voice discrimination unfolds over multiple temporal stages, involving a wide network of brain regions. The initial stages of vocalization discrimination are based on modulations in response strength within a common brain network, with no evidence for a voice-selective module. The latency of this effect parallels that of face discrimination (Bentin et al., 2007), supporting the possibility that voice and face processes can mutually inform one another. The putative underlying sources (localized in the right STS; BA 22) are consistent with prior hemodynamic imaging evidence in humans (Belin et al., 2004). Our effect over the 291-357 ms post-stimulus period overlaps the 'voice-specific response' reported by Levy et al. (2001), and the estimated underlying sources (left BA 6/45) agree with previous findings in humans (Fecteau et al., 2005). These results challenge the idea that circumscribed and selective areas subserve conspecific vocalization processing.
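
Two of the quantities used above can be made concrete with a short sketch: Global Field Power (GFP), i.e. the spatial standard deviation of the average-referenced scalp potentials at each time point, and the Spearman correlation between per-subject response-strength differences in two time windows. The array shapes, random data and window indices below are purely illustrative.

```python
import numpy as np
from scipy.stats import spearmanr

def gfp(eeg):
    """Global Field Power: spatial std of the average-referenced potentials.
    eeg: array of shape (n_channels, n_timepoints)."""
    avg_ref = eeg - eeg.mean(axis=0, keepdims=True)
    return np.sqrt((avg_ref ** 2).mean(axis=0))

rng = np.random.default_rng(2)
n_subjects = 10

# Hypothetical per-subject AEP difference (human minus animal), 64 channels x 1000 samples.
diff_gfp = np.array([gfp(rng.standard_normal((64, 1000))) for _ in range(n_subjects)])

# Mean GFP difference in the two periods reported above (indices treated as ms for simplicity).
win1 = diff_gfp[:, 169:220].mean(axis=1)   # 169-219 ms
win2 = diff_gfp[:, 291:358].mean(axis=1)   # 291-357 ms

rho, p = spearmanr(win1, win2)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```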

Relevance: 90.00%

Abstract:

Introduction: Neuroimaging of the self has focused on high-level mechanisms such as language, memory or imagery of the self. Recent evidence suggests that low-level mechanisms of multisensory and sensorimotor integration may play a fundamental role in encoding self-location and the first-person perspective (Blanke and Metzinger, 2009). Neurological patients with out-of-body experiences (OBE) suffer from abnormal self-location and first-person perspective due to damage to the temporo-parietal junction (Blanke et al., 2004). Although self-location and the first-person perspective can be studied experimentally (Lenggenhager et al., 2009), the neural underpinnings of self-location have yet to be investigated. To investigate the brain network involved in self-location and the first-person perspective, we used visuo-tactile multisensory conflict, magnetic resonance (MR)-compatible robotics and fMRI in study 1, and lesion analysis in a sample of 9 patients with OBE due to focal brain damage in study 2. Methods: Twenty-two participants saw a video showing either a person's back or an empty room being stroked (visual stimuli) while an MR-compatible robotic device stroked their back (tactile stimulation). The direction and speed of the seen stroking could either correspond (synchronous) or not (asynchronous) to those of the felt stroking. Each run comprised the four conditions of a 2x2 factorial design with Object (Body, No-Body) and Synchrony (Synchronous, Asynchronous) as main factors. Self-location was estimated using the mental ball dropping task (MBD; Lenggenhager et al., 2009). After the fMRI session participants completed a 6-item questionnaire adapted from the original questionnaire created by Botvinick and Cohen (1998) and based on questions and data obtained by Lenggenhager et al. (2007, 2009). They were also asked to complete a questionnaire to report the perspective they adopted during the illusion. Response times (RTs) for the MBD and the fMRI data were analyzed with a 3-way mixed-model ANOVA with the between-subjects factor Perspective (up, down) and the two within-subject factors Object (body, no-body) and Stroking (synchronous, asynchronous). Quantitative lesion analysis was performed using MRIcron (Rorden et al., 2007). We compared the distributions of brain lesions, confirmed by multimodality imaging (Knowlton, 2004), in patients with OBE with those of patients showing complex visual hallucinations involving people or faces but without any disturbance of self-location or first-person perspective. Nine patients with OBE were investigated; the control group comprised 8 patients. Structural imaging data were available for normalization and co-registration in all patients. Normalization of each patient's lesion into the common MNI (Montreal Neurological Institute) reference space permitted simple, voxel-wise, algebraic comparisons to be made. Results: Although in the scanner all participants were lying on their backs and facing upwards, the analysis of perspective showed that half of the participants had the impression of looking down at the virtual human body below them, despite veridical cues about their actual body position (Down-group). The other participants had the impression of looking up at the virtual body above them (Up-group). Analysis of Q3 ("How strong was the feeling that the body you saw was you?") indicated stronger self-identification with the virtual body during synchronous stroking. RTs in the MBD task confirmed these subjective data (significant 3-way interaction between Perspective, Object and Stroking).
fMRI results showed eight cortical regions where the BOLD signal was significantly different during at least one of the conditions resulting from the combination of Object and Stroking, relative to baseline: the right and left temporo-parietal junction, the right EBA, the left middle occipito-temporal gyrus, the left postcentral gyrus, the right medial parietal lobe and the bilateral medial occipital lobe (Fig 1). The activation patterns in the right and left temporo-parietal junction and the right EBA reflected changes in self-location and perspective, as revealed by statistical analysis performed on the percentage of BOLD change with respect to baseline. Statistical lesion overlap comparison (using nonparametric voxel-based lesion-symptom mapping) with respect to the control group revealed the right temporo-parietal junction, centered on the angular gyrus (Talairach coordinates x = 54, y = -52, z = 26; p < 0.05, FDR corrected). Conclusions: The present questionnaire and behavioural results show that - despite the noisy and constraining MR environment - our participants had predictable changes in self-location, self-identification and first-person perspective when the robotic tactile stroking was applied synchronously with the stroking seen in the video. fMRI data in healthy participants and lesion data in patients with abnormal self-location and first-person perspective jointly revealed that the temporo-parietal cortex, especially in the right hemisphere, encodes these conscious experiences. We argue that temporo-parietal activity reflects the experience of the conscious "I" as embodied and localized within bodily space.
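
The lesion analysis described above can be approximated, at the level of principle, by a voxelwise comparison of lesion presence between the OBE and control groups with a correction for multiple comparisons. The sketch below uses Fisher's exact test per voxel and FDR correction on flattened binary lesion masks; this is a simplification for illustration, not the MRIcron/nonparametric mapping procedure actually used, and the mask data are random placeholders.

```python
import numpy as np
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import fdrcorrection

rng = np.random.default_rng(3)

# Hypothetical binary lesion masks in a common (MNI-like) space, flattened to 1D.
n_vox = 500
obe = rng.integers(0, 2, size=(9, n_vox))     # 9 patients with OBE
ctrl = rng.integers(0, 2, size=(8, n_vox))    # 8 control patients

pvals = np.empty(n_vox)
for v in range(n_vox):
    # 2x2 table: lesioned / spared counts per group at this voxel.
    table = [[obe[:, v].sum(), 9 - obe[:, v].sum()],
             [ctrl[:, v].sum(), 8 - ctrl[:, v].sum()]]
    _, pvals[v] = fisher_exact(table)

# FDR correction across voxels, as reported for the lesion comparison above.
rejected, p_adj = fdrcorrection(pvals, alpha=0.05)
print(f"{rejected.sum()} voxels significant after FDR correction")
```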

Relevance: 90.00%

Abstract:

Children with congenital heart disease (CHD) who survive surgery often present impaired neurodevelopment and qualitative brain anomalies. However, the impact of CHD on total or regional brain volumes has received little attention. We address this question in a sample of patients with 22q11.2 deletion syndrome (22q11DS), a neurogenetic condition frequently associated with CHD. Sixty-one children, adolescents and young adults with a confirmed 22q11.2 deletion were included, as well as 80 healthy participants matched for age and gender. Subdivision of the patient group according to CHD yielded a subgroup of 27 patients with normal cardiac status and a subgroup of 26 patients who underwent cardiac surgery during their first years of life (eight patients with unclear status were excluded). Regional cortical volumes were extracted using an automated method, and the association between regional cortical volumes and CHD was examined within a three-condition fixed-factor design. Robust protection against type I error used Bonferroni correction. Smaller total cerebral volumes were observed in patients with CHD compared to both patients without CHD and controls. The pattern of bilateral regional reductions associated with CHD encompassed the superior parietal region, the precuneus, the fusiform gyrus and the anterior cingulate cortex. Within patients, a significant reduction in the left parahippocampal, right middle temporal and left superior frontal gyri was associated with CHD. The present results of global and regional volumetric reductions suggest a role for disturbed hemodynamics in the pathophysiology of brain alterations in patients with neurodevelopmental disease and cardiac malformations.
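
The regional comparison with Bonferroni protection described above can be illustrated schematically: one test per cortical region across the three groups (deletion with CHD, deletion without CHD, controls), with the p-values then Bonferroni-adjusted across regions. The region list, volumes and one-way ANOVA per region are placeholders chosen for illustration, not the study's actual model.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(4)
regions = ["superior_parietal", "precuneus", "fusiform", "anterior_cingulate"]

pvals = []
for region in regions:
    # Hypothetical regional volumes (cm^3) for the three groups of the study.
    chd      = rng.normal(10.0, 1.0, 26)   # 22q11DS with cardiac surgery
    no_chd   = rng.normal(10.8, 1.0, 27)   # 22q11DS, normal cardiac status
    controls = rng.normal(11.2, 1.0, 80)
    _, p = f_oneway(chd, no_chd, controls)  # three-condition comparison
    pvals.append(p)

# Bonferroni correction across the tested regions.
rejected, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
for region, p, sig in zip(regions, p_adj, rejected):
    print(f"{region}: adjusted p = {p:.4f}, significant = {sig}")
```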

Relevance: 90.00%

Abstract:

The ability to discriminate conspecific vocalizations is observed across species and early during development. However, its neurophysiologic mechanism remains controversial, particularly regarding whether it involves specialized processes with dedicated neural machinery. We identified spatiotemporal brain mechanisms for conspecific vocalization discrimination in humans by applying electrical neuroimaging analyses to auditory evoked potentials (AEPs) in response to acoustically and psychophysically controlled nonverbal human and animal vocalizations as well as sounds of man-made objects. AEP strength modulations in the absence of topographic modulations are suggestive of statistically indistinguishable brain networks. First, responses were significantly stronger, but topographically indistinguishable, to human versus animal vocalizations starting at 169-219 ms after stimulus onset and within regions of the right superior temporal sulcus and superior temporal gyrus. This effect correlated with another AEP strength modulation occurring at 291-357 ms that was localized within the left inferior prefrontal and precentral gyri. Temporally segregated and spatially distributed stages of vocalization discrimination are thus functionally coupled and demonstrate how conventional views of functional specialization must incorporate network dynamics. Second, vocalization discrimination is not subject to facilitated processing in time, but instead lags more general categorization by approximately 100 ms, indicative of hierarchical processing during object discrimination. Third, although differences between human and animal vocalizations persisted when analyses were performed at a single-object level or extended to include additional (man-made) sound categories, at no latency were responses to human vocalizations stronger than those to all other categories. Vocalization discrimination thus transpires at times synchronous with face discrimination but is not functionally specialized.
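
The claim of "strength modulations in the absence of topographic modulations" rests on separating two measures: Global Field Power (strength) and global map dissimilarity (topography), the latter computed between GFP-normalized scalp maps. A minimal sketch of the dissimilarity measure is given below; the 64-channel maps are illustrative, and the scaling factor merely simulates a stronger response with an unchanged topography.

```python
import numpy as np

def global_dissimilarity(map_a, map_b):
    """Global map dissimilarity between two scalp maps (1D arrays, one value per
    electrode): RMS difference of the average-referenced, GFP-normalized maps.
    0 = identical topographies, 2 = inverted topographies."""
    a = map_a - map_a.mean()
    b = map_b - map_b.mean()
    a /= np.sqrt((a ** 2).mean())   # divide by GFP
    b /= np.sqrt((b ** 2).mean())
    return np.sqrt(((a - b) ** 2).mean())

rng = np.random.default_rng(5)
map_animal = rng.standard_normal(64)                           # hypothetical AEP map, animal vocalizations
map_human = 1.8 * map_animal + 0.1 * rng.standard_normal(64)   # stronger response, same topography

print(f"dissimilarity = {global_dissimilarity(map_human, map_animal):.3f}")  # close to 0
```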

Relevance: 90.00%

Abstract:

Recent theories of the physiology of language suggest a dual-stream (dorsal/ventral) organization of speech perception. Using intracerebral event-related potentials (ERPs) recorded during the pre-surgical assessment of twelve drug-resistant epileptic patients, we aimed to single out electrophysiological patterns during lexical-semantic and phonological monitoring tasks involving the ventral and dorsal regions, respectively. Phonological information processing predominantly occurred in the left supramarginal gyrus (dorsal stream), and lexico-semantic processing in the anterior/middle temporal and fusiform gyri (ventral stream). Similar latencies were identified in response to the phonological and lexico-semantic tasks, suggesting parallel processing. Typical ERP components were strongly left-lateralized, since no evoked responses were recorded in the homologous right structures. Finally, the ERP patterns suggested the inferior frontal gyrus as the likely final common pathway of both the dorsal and ventral streams. These results provide detailed evidence of the spatio-temporal dynamics of information processing in the dual pathways involved in speech perception.

Relevance: 90.00%

Abstract:

Background: Several patterns of grey and white matter changes have been described separately in young adults with first-episode psychosis. A concomitant investigation of grey and white matter in patients with first-episode psychosis without other psychiatric comorbidities, including all relevant imaging markers, could provide clues to the neurodevelopmental hypothesis of schizophrenia. Methods: We recruited patients with first-episode psychosis diagnosed according to the DSM-IV-TR and matched controls. All participants underwent magnetic resonance imaging (MRI). Voxel-based morphometry (VBM) and voxel-based analysis (VBA) of mean diffusivity were used for the grey matter data. Fractional anisotropy and axial, radial and mean diffusivity were analyzed using tract-based spatial statistics (TBSS) for the white matter data. Results: We included 15 patients and 16 controls. The mean diffusivity VBA showed significantly greater mean diffusivity in the first-episode psychosis group than in the control group in the lingual gyrus bilaterally, the occipital fusiform gyrus bilaterally, the right lateral occipital gyrus and the right inferior temporal gyrus. Moreover, the TBSS analysis revealed lower fractional anisotropy in the first-episode psychosis group than in the control group in the genu of the corpus callosum, the forceps minor, the corticospinal tract, the right superior longitudinal fasciculus, the left middle cerebellar peduncle, the left inferior longitudinal fasciculus and the posterior part of the fronto-occipital fasciculus. This analysis also revealed greater radial diffusivity in the first-episode psychosis group than in the control group in the right corticospinal tract, the right superior longitudinal fasciculus and the left middle cerebellar peduncle. Limitations: The modest sample size and the absence of women in our series could limit the impact of our results. Conclusion: Our results highlight the structural vulnerability of grey matter in posterior areas of the brain among young adult male patients with first-episode psychosis. Moreover, the greater radial diffusivity within several of the regions already identified by the fractional anisotropy analysis supports the idea of delayed myelination in patients with first-episode psychosis.
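
The diffusion metrics compared above (fractional anisotropy and axial, radial and mean diffusivity) are all simple functions of the three eigenvalues of the voxelwise diffusion tensor. The formulas, sketched below for single hypothetical voxels, show in particular why greater radial diffusivity with reduced FA is often read as a myelin-related change.

```python
import numpy as np

def dti_metrics(l1, l2, l3):
    """Scalar DTI metrics from the diffusion-tensor eigenvalues (l1 >= l2 >= l3)."""
    md = (l1 + l2 + l3) / 3.0                  # mean diffusivity
    ad = l1                                    # axial diffusivity
    rd = (l2 + l3) / 2.0                       # radial diffusivity
    fa = np.sqrt(1.5 * ((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
                 / (l1 ** 2 + l2 ** 2 + l3 ** 2))
    return fa, md, ad, rd

# Hypothetical eigenvalues (units of 10^-3 mm^2/s) for a white-matter-like voxel.
print(dti_metrics(1.7, 0.4, 0.3))
# Same axial diffusivity but higher radial diffusivity -> lower FA, higher MD.
print(dti_metrics(1.7, 0.7, 0.6))
```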

Relevance: 90.00%

Abstract:

INTRODUCTION: Handwriting is a modality of language production whose cerebral substrates remain poorly known, although the existence of specific regions is postulated. Descriptions of brain-damaged patients with agraphia and, more recently, several neuroimaging studies suggest the involvement of different brain regions. However, results vary with the methodological choices made and may not always discriminate between "writing-specific" processes and motor or linguistic processes shared with other abilities. METHODS: We used the activation likelihood estimation (ALE) meta-analytical method to identify the network of areas commonly activated during handwriting across 18 neuroimaging studies published in the literature. The included contrasts were also classified according to the control tasks used, whether non-specific motor/output control or linguistic/input control. These data were included in two secondary meta-analyses in order to reveal the functional role of the different areas of this network. RESULTS: An extensive, mainly left-hemisphere network of 12 cortical and sub-cortical areas was obtained; three of these were considered primarily writing-specific (the left superior frontal sulcus/middle frontal gyrus area, the left intraparietal sulcus/superior parietal area and the right cerebellum), while the others related to non-specific motor processes (primary motor and sensorimotor cortex, supplementary motor area, thalamus and putamen) or to linguistic processes (ventral premotor cortex, posterior/inferior temporal cortex). CONCLUSIONS: This meta-analysis provides a description of the cerebral network of handwriting as revealed by various types of neuroimaging experiments and confirms the crucial involvement of the left frontal and superior parietal regions. These findings provide new insights into the cognitive processes involved in handwriting and their cerebral substrates.
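
The core of the ALE approach referenced above can be sketched as follows: model each reported activation focus as a 3D Gaussian probability blob, combine the foci of one experiment into a modelled activation (MA) map, and take the voxelwise union across experiments, ALE = 1 - prod(1 - MA_i). The toy grid, focus coordinates and smoothing width below are placeholders, not the parameters of the published meta-analysis.

```python
import numpy as np

GRID = (20, 20, 20)          # toy voxel grid standing in for standard space
SIGMA = 2.0                  # Gaussian width in voxels (placeholder)

def ma_map(foci):
    """Modelled activation map of one experiment: per voxel, the maximum of the
    Gaussian blobs centred on that experiment's reported foci."""
    zz, yy, xx = np.indices(GRID)
    ma = np.zeros(GRID)
    for fz, fy, fx in foci:
        d2 = (zz - fz) ** 2 + (yy - fy) ** 2 + (xx - fx) ** 2
        ma = np.maximum(ma, np.exp(-d2 / (2 * SIGMA ** 2)))
    return ma

# Hypothetical foci (voxel coordinates) for three experiments.
experiments = [[(5, 5, 5), (10, 10, 10)],
               [(6, 5, 5)],
               [(5, 6, 6), (15, 15, 15)]]

# ALE: voxelwise union of the modelled activation maps across experiments.
ale = 1.0 - np.prod([1.0 - ma_map(foci) for foci in experiments], axis=0)
print(f"peak ALE value: {ale.max():.3f} at voxel {np.unravel_index(ale.argmax(), GRID)}")
```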

Relevance: 90.00%

Abstract:

The term "sound object" describes an auditory experience that is associated with an acoustic event produced by a sound source. In natural settings, a sound produced by a living being or an object provides information about the identity and the location of the sound source. Sound's identity is orocessed alono the ventral "What" pathway which consists of regions within the superior and middle temporal cortices as well as the inferior frontal gyrus. This work concerns the creation of individual auditory object representations in narrow semantic categories and their plasticity using electrical imaging. Discrimination of sounds from broad category has been shown to occur along a temporal hierarchy and in different brain regions along the ventral "What" pathway. However, sounds belonging to the same semantic category, such as faces or voices, were shown to be discriminated in specific brain areas and are thought to represent a special class of stimuli. I have investigated how cortical representations of a narrow category, here birdsongs, is modulated by training novices to recognized songs of individual bird species. Dynamic analysis of distributed source estimations revealed differential sound object representations within the auditory ventral "What" pathway as a function of the level of expertise newly acquired. Correct recognition of trained items induces a sharpening within a left-lateralized semantic network starting around 200ms, whereas untrained items' processing occurs later in lower-level and memory-related regions. With another category of sounds belonging to the same category, here heartbeats, I investigated the cortical representations of correct and incorrect recognition of sounds. Source estimations revealed differential representations partially overlapping with regions involved in the semantic network that is activated when participants became experts in the task. Incorrect recognition also induces a higher activation when compared to correct recognition in regions processing lower-level features. The discrimination of heartbeat sounds is a difficult task and requires a continuous listening. I investigated whether the repetition effects are modulated by participants' behavioral performance. Dynamic source estimations revealed repetition suppression in areas located outside of the semantic network. Therefore, individual environmental sounds become meaningful with training. Their representations mainly involve a left-lateralized network of brain regions that are tuned with expertise, as well as other brain areas, not related to semantic processing, and occurring in early stages of semantic processing. -- Le terme objet sonore" décrit une expérience auditive associée à un événement acoustique produit par une source sonore. Dans l'environnement, un son produit par un être vivant ou un objet fournit des informations concernant l'identité et la localisation de la source sonore. Les informations concernant l'identité d'un son sont traitée le long de la voie ventrale di "Quoi". Cette voie est composée de regions situées dans le cortex temporal et frontal. L'objet de ce travail est d'étudier quels sont les neuro-mecanismes impliqués dans la représentation de nouveaux objets sonores appartenant à une meme catégorie sémantique ainsi que les phénomènes de plasticité à l'aide de l'imagerie électrique. 
Il a été montré que la discrimination de sons appartenant à différentes catégories sémantiques survient dans différentes aires situées le long la voie «Quoi» et suit une hiérarchie temporelle II a également été montré que la discrimination de sons appartenant à la même catégorie sémantique tels que les visages ou les voix, survient dans des aires spécifiques et représenteraient des stimuli particuliers. J'ai étudié comment les représentations corticales de sons appartenant à une même catégorie sémantique, dans ce cas des chants d'oiseaux, sont modifiées suite à un entraînement Pour ce faire, des sujets novices ont été entraînés à reconnaître des chants d'oiseaux spécifiques L'analyse des estimations des sources neuronales au cours du temps a montré que les representations des objets sonores activent de manière différente des régions situées le long de la vo,e ventrale en fonction du niveau d'expertise acquis grâce à l'entraînement. La reconnaissance des chants pour lesquels les sujets ont été entraînés implique un réseau sémantique principalement situé dans l'hémisphère gauche activé autour de 200ms. Au contraire, la reconnaissance des chants pour lesquels les sujets n'ont pas été entraînés survient plus tardivement dans des régions de plus bas niveau. J'ai ensuite étudié les mécanismes impliqués dans la reconnaissance et non reconnaissance de sons appartenant à une autre catégorie, .es battements de coeur. L'analyse des sources neuronales a montre que certaines régions du réseau sémantique lié à l'expertise acquise sont recrutées de maniere différente en fonction de la reconnaissance ou non reconnaissance du son La non reconnaissance des sons recrute des régions de plus bas niveau. La discrimination des bruits cardiaques est une tâche difficile et nécessite une écoute continue du son. J'ai étudié l'influence des réponses comportementales sur les effets de répétitions. L'analyse des sources neuronales a montré que la reconnaissance ou non reconnaissance des sons induisent des effets de repétition différents dans des régions situées en dehors des aires du réseau sémantique. Ainsi, les sons acquièrent un sens grâce à l'entraînement. Leur représentation corticale implique principalement un réseau d'aires cérébrales situé dans l'hémisphère gauche, dont l'activité est optimisée avec l'acquisition d'un certain niveau d'expertise, ainsi que d'autres régions qui ne sont pas liée au traitement de l'information sémantique. L'activité de ce réseau sémantique survient plus rapidemement que la prédiction par le modèle de la hiérarchie temporelle.

Relevance: 90.00%

Abstract:

In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on the comprehension of auditory and visual words and on the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), as well as auditory object (sound recognition) and Token test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved two pathways, along the anterior part of the left superior temporal gyrus and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated the perceptual consciousness attached to speech comprehension: the initial word discrimination process can be considered an "automatic" stage, attentional feedback not being impaired by stimulation as would be the case at the lexical-semantic stage. A multimodal organization of the superior temporal gyrus was also detected, since some neurones could be involved in the comprehension of visual material and in naming. These findings demonstrate a finely graded, sub-centimetre cortical representation of speech comprehension processing, mainly in the left superior temporal gyrus, and are in line with dual-stream models of language comprehension processing.

Relevance: 90.00%

Abstract:

According to the concepts of cognitive neuropsychology, there are two principal routes of reading processing: a lexical route, in which global reading of words occurs, and a phonological route, responsible for the conversion of graphemes into their respective phonemes. In the present study, functional magnetic resonance imaging (fMRI) was used to investigate the patterns of cerebral activation during lexical and phonological reading in 13 healthy women with a formal educational level greater than 11 years. Participants were submitted to a silent reading task containing three types of stimuli: real words (irregular and foreign words), nonwords and illegitimate graphic stimuli. A greater number of activated voxels was identified by fMRI in the word reading (lexical processing) task than in the nonword reading (phonological processing) task. In word reading, activation was greater than for nonwords in the superior, middle and inferior frontal gyri, the bilateral superior temporal gyrus, the right cerebellum and the left precentral gyrus. In nonword reading, activation was predominant in the right cerebellum and the left superior temporal gyrus. The results of the present study suggest the existence of differences in the patterns of cerebral activation during lexical and phonological reading, with greater involvement of the right hemisphere in reading words than nonwords.

Relevance: 90.00%

Abstract:

It has previously been demonstrated that extensive activation in the dorsolateral temporal lobes is associated with masking a speech target with a speech masker, consistent with the hypothesis that competition for central auditory processes is an important factor in informational masking. Here, masking from speech and from two additional maskers derived from the original speech was investigated. One of these is spectrally rotated speech, which is unintelligible but has a similar (inverted) spectrotemporal profile to speech. The authors also controlled for the possibility of "glimpsing" of the target signal during modulated masking sounds by using speech-modulated noise as a masker in a baseline condition. Functional imaging results reveal that masking speech with speech leads to bilateral superior temporal gyrus (STG) activation relative to a speech-in-noise baseline, whereas masking speech with spectrally rotated speech leads solely to right STG activation relative to the baseline. This result is discussed in terms of hemispheric asymmetries for speech perception and interpreted as showing that masking effects can arise through two parallel neural systems, in the left and right temporal lobes. This has implications for the competition for resources caused by speech and rotated-speech maskers, and may illuminate some of the mechanisms involved in informational masking.
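
Spectrally rotated speech, the key control condition above, can be produced by band-limiting the signal and multiplying it by a sinusoid at the band edge, which mirrors the spectrum about half that frequency while preserving the temporal envelope. A minimal sketch is given below; the 4 kHz band edge, filter type and filter order are assumptions, not the values used in the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def spectrally_rotate(signal, fs, f_edge=4000.0, order=6):
    """Rotate the spectrum of `signal` about f_edge / 2.
    Band-limit to f_edge, multiply by a cosine at f_edge (a component at f maps to
    f_edge - f and f_edge + f), then low-pass again to keep the mirrored band."""
    b, a = butter(order, f_edge / (fs / 2), btype="low")
    band_limited = filtfilt(b, a, signal)
    t = np.arange(len(signal)) / fs
    modulated = band_limited * np.cos(2 * np.pi * f_edge * t)
    return filtfilt(b, a, modulated)

# Usage with a placeholder waveform; a real speech recording would be loaded instead.
fs = 16000
t = np.arange(fs) / fs
speech_like = np.sin(2 * np.pi * 500 * t)        # a 500 Hz component...
rotated = spectrally_rotate(speech_like, fs)      # ...ends up near 3500 Hz
```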
