Abstract:
Pyramidal neurones were injected with Lucifer Yellow in slices cut tangential to the surface of area 7m and the superior temporal polysensory area (STP) of the macaque monkey. Comparison of the basal dendritic arbors of supra- and infragranular pyramidal neurones (n=139) that were injected in the same putative modules in the different cortical areas revealed variation in their structure. Moreover, there were relative differences in dendritic morphology of supra- and infragranular pyramidal neurones in the two cortical areas. Sholl analyses revealed that layer III pyramidal neurones in area STP had considerably higher peak complexity (maximum number of dendritic intersections per Sholl circle) than those in layer V, whereas peak complexities were similar for supra- and infragranular pyramidal neurones in area 7m. In both cortical areas, the basal dendritic trees of layer III pyramidal neurones were characterized by a higher spine density than those in layer V. Calculations of the total number of dendritic spines in the average basal dendritic arbor revealed that layer V pyramidal neurones in area 7m had twice as many spines as cells in layer III (4535 and 2294 spines, respectively). A similar calculation for neurones in area STP revealed that layer III pyramidal neurones had approximately the same number of spines as cells in layer V (3585 and 3850 spines, respectively). Relative differences in the branching patterns of, and the number of spines in, the basal dendritic arbors of supra- and infragranular pyramidal neurones in the different cortical areas may allow for integration of different numbers of inputs, and different degrees of dendritic processing. These results support the thesis that intra-areal circuitry differs in different cortical areas.
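The peak-complexity measure used above comes from Sholl analysis: dendritic intersections are counted on concentric circles of increasing radius centred on the soma. A minimal sketch of that counting step, assuming dendrites are supplied as 2-D line segments in the tangential plane (the function names and data layout are illustrative, not the authors' code):

```python
import math

def sholl_profile(segments, radii, soma=(0.0, 0.0)):
    """Count dendritic intersections with concentric circles (Sholl analysis).

    segments: list of ((x1, y1), (x2, y2)) dendrite pieces in the tangential plane.
    radii: circle radii (in the same units) centred on the soma.
    Returns the intersection count for each radius.
    """
    def dist(p):
        return math.hypot(p[0] - soma[0], p[1] - soma[1])

    counts = []
    for r in radii:
        n = 0
        for p1, p2 in segments:
            d1, d2 = dist(p1), dist(p2)
            # A segment crosses the circle when its endpoints lie on
            # opposite sides of radius r.
            if (d1 - r) * (d2 - r) < 0:
                n += 1
        counts.append(n)
    return counts

def peak_complexity(counts):
    """Maximum number of intersections across all Sholl circles."""
    return max(counts)
```

For example, one long dendrite plus one short branch yields two crossings on the inner circle but only one on the outer, giving a peak complexity of 2.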
Abstract:
Introduction: Discrimination of species-specific vocalizations is fundamental for survival and social interactions. Its unique behavioral relevance has encouraged the identification of circumscribed brain regions exhibiting selective responses (Belin et al., 2004), while the role of network dynamics has received less attention. Those studies that have examined the brain dynamics of vocalization discrimination leave unresolved the timing and the inter-relationship between general categorization, attention, and speech-related processes (Levy et al., 2001, 2003; Charest et al., 2009). Given these discrepancies and the presence of several confounding factors, electrical neuroimaging analyses were applied to auditory evoked potentials (AEPs) to acoustically and psychophysically controlled non-verbal human and animal vocalizations. This revealed which region(s) exhibit voice-sensitive responses and in which sequence. Methods: Subjects (N=10) performed a living vs. man-made 'oddball' auditory discrimination task, such that on a given block of trials 'target' stimuli occurred 10% of the time. Stimuli were complex, meaningful sounds of 500ms duration. There were 120 different sound files in total, 60 of which represented sounds of living objects and 60 man-made objects. The stimuli that were the focus of the present investigation were restricted to those of living objects within blocks where no response was required. These stimuli were further sorted between human non-verbal vocalizations and animal vocalizations. They were also controlled in terms of their spectrograms and formant distributions. Continuous 64-channel EEG was acquired through Neuroscan Synamps referenced to the nose, band-pass filtered 0.05-200Hz, and digitized at 1000Hz. Peri-stimulus epochs of continuous EEG (-100ms to 900ms) were visually inspected for artifacts, low-pass filtered at 40Hz, and baseline corrected using the pre-stimulus period. Averages were computed for each subject separately.
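The baseline-correction step in the Methods (subtracting each channel's mean over the 100 ms pre-stimulus window) can be sketched as follows; the function name, array layout, and window arguments are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

def baseline_correct(epoch, times, t0=-0.1, t1=0.0):
    """Subtract the mean pre-stimulus voltage from each channel.

    epoch: (n_channels, n_samples) array for one peri-stimulus epoch.
    times: (n_samples,) array of times in seconds relative to stimulus onset.
    The baseline window [t0, t1) defaults to the 100 ms pre-stimulus period.
    """
    mask = (times >= t0) & (times < t1)
    # Per-channel mean over the baseline window, kept 2-D for broadcasting.
    baseline = epoch[:, mask].mean(axis=1, keepdims=True)
    return epoch - baseline
```

After this step, the pre-stimulus portion of each channel averages to zero, so post-stimulus deflections are measured relative to the resting level.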
AEPs in response to animal and human vocalizations were analyzed with respect to differences of Global Field Power (GFP) and with respect to changes of the voltage configurations at the scalp (reviewed in Murray et al., 2008). The former provides a measure of the strength of the electric field irrespective of topographic differences; the latter identifies changes in spatial configurations of the underlying sources independently of the response strength. In addition, we utilized the local auto-regressive average distributed linear inverse solution (LAURA; Grave de Peralta Menendez et al., 2001) to visualize and statistically contrast the likely underlying sources of effects identified in the preceding analysis steps. Results: We found differential activity in response to human vocalizations over three periods in the post-stimulus interval, and this response was always stronger than that to animal vocalizations. The first differential response (169-219ms) was a consequence of a modulation in strength of a common brain network localized to the right superior temporal sulcus (STS; Brodmann's Area (BA) 22) and extending into the superior temporal gyrus (STG; BA 41). A second difference (291-357ms) also followed from strength modulations of a common network, with statistical differences localized to the left inferior precentral and prefrontal gyrus (BA 6/45). The first two strength modulations correlated (Spearman's rho(8)=0.770; p=0.009), indicative of functional coupling between temporally segregated stages of vocalization discrimination. A third difference (389-667ms) followed from strength and topographic modulations and was localized to the left superior frontal gyrus (BA10), although this third difference did not reach our spatial criterion of 12 contiguous voxels. Conclusions: We show that voice discrimination unfolds over multiple temporal stages, involving a wide network of brain regions.
The initial stages of vocalization discrimination are based on modulations in response strength within a common brain network with no evidence for a voice-selective module. The latency of this effect parallels that of face discrimination (Bentin et al., 2007), supporting the possibility that voice and face processes can mutually inform one another. Putative underlying sources (localized in the right STS; BA 22) are consistent with prior hemodynamic imaging evidence in humans (Belin et al., 2004). Our effect over the 291-357ms post-stimulus period overlaps the 'voice-specific-response' reported by Levy et al. (Levy et al., 2001), and the estimated underlying sources (left BA6/45) were in agreement with previous findings in humans (Fecteau et al., 2005). These results challenge the idea that circumscribed and selective areas subserve conspecific vocalization processing.
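The strength measure on which the first two effects rest, Global Field Power, has a standard definition: the spatial standard deviation of the average-referenced potential across all electrodes at each time point (Lehmann and Skrandies, 1980). A minimal sketch of that definition, not the authors' exact implementation:

```python
import numpy as np

def global_field_power(aep):
    """Global Field Power: spatial standard deviation across electrodes
    at each time point.

    aep: (n_channels, n_samples) evoked-potential array.
    Returns a (n_samples,) array of field strength, blind to topography.
    """
    # Re-reference to the instantaneous average across channels, so GFP
    # does not depend on the recording reference.
    avg_ref = aep - aep.mean(axis=0, keepdims=True)
    return np.sqrt((avg_ref ** 2).mean(axis=0))
```

Because GFP collapses the montage to a single strength value per sample, two conditions can differ in GFP while sharing one topography (a common network driven more or less strongly), which is exactly the pattern reported for the 169-219 ms and 291-357 ms periods.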
Abstract:
The ability to discriminate conspecific vocalizations is observed across species and early during development. However, its neurophysiologic mechanism remains controversial, particularly regarding whether it involves specialized processes with dedicated neural machinery. We identified spatiotemporal brain mechanisms for conspecific vocalization discrimination in humans by applying electrical neuroimaging analyses to auditory evoked potentials (AEPs) in response to acoustically and psychophysically controlled nonverbal human and animal vocalizations as well as sounds of man-made objects. AEP strength modulations in the absence of topographic modulations are suggestive of statistically indistinguishable brain networks. First, responses were significantly stronger, but topographically indistinguishable to human versus animal vocalizations starting at 169-219 ms after stimulus onset and within regions of the right superior temporal sulcus and superior temporal gyrus. This effect correlated with another AEP strength modulation occurring at 291-357 ms that was localized within the left inferior prefrontal and precentral gyri. Temporally segregated and spatially distributed stages of vocalization discrimination are thus functionally coupled and demonstrate how conventional views of functional specialization must incorporate network dynamics. Second, vocalization discrimination is not subject to facilitated processing in time, but instead lags more general categorization by approximately 100 ms, indicative of hierarchical processing during object discrimination. Third, although differences between human and animal vocalizations persisted when analyses were performed at a single-object level or extended to include additional (man-made) sound categories, at no latency were responses to human vocalizations stronger than those to all other categories. 
Vocalization discrimination transpires at times synchronous with that of face discrimination but is not functionally specialized.
Abstract:
Recent multisensory research has emphasized the occurrence of early, low-level interactions in humans. As such, it is proving increasingly necessary to also consider the kinds of information likely extracted from the unisensory signals that are available at the time and location of these interaction effects. This review addresses current evidence regarding how the spatio-temporal brain dynamics of auditory information processing likely curtails the information content of multisensory interactions observable in humans at a given latency and within a given brain region. First, we consider the time course of signal propagation as a limitation on when auditory information (of any kind) can impact the responsiveness of a given brain region. Next, we overview the dual pathway model for the treatment of auditory spatial and object information ranging from rudimentary to complex environmental stimuli. These dual pathways are considered an intrinsic feature of auditory information processing, which are not only partially distinct in their associated brain networks, but also (and perhaps more importantly) manifest only after several tens of milliseconds of cortical signal processing. This architecture of auditory functioning would thus pose a constraint on when and in which brain regions specific spatial and object information are available for multisensory interactions. We then separately consider evidence regarding mechanisms and dynamics of spatial and object processing with a particular emphasis on when discriminations along either dimension are likely performed by specific brain regions. We conclude by discussing open issues and directions for future research.
Abstract:
We describe the case of a patient with pure verbal palinacousis and perseveration of inner speech after a right inferior temporal lesion. The superior temporal lobe, including the superior temporal sulcus, and the interhemispheric connection between the two superior temporal lobes, explored by tractography, were preserved. These regions are involved in voice processing, verbal short-term memory and inner speech. It can thus be hypothesised that abnormal activity occurred in this network. Palinacousis and 'palinendophonia', a term proposed for this previously unreported symptom, may be due to a common disorder of the cognitive processes involved in both voice hearing and inner speech.
Abstract:
Inaccurate wiring and synaptic pathology appear to be major hallmarks of schizophrenia. A variety of gene products involved in synaptic neurotransmission and receptor signaling are differentially expressed in brains of schizophrenia patients. However, synaptic pathology may also develop by improper expression of intra- and extra-cellular structural elements weakening synaptic stability. Therefore, we have investigated transcription of these elements in the left superior temporal gyrus of 10 schizophrenia patients and 10 healthy controls by genome-wide microarrays (Illumina). Fourteen upregulated and 22 downregulated genes encoding structural elements were chosen from the lists of differentially regulated genes for further qRT-PCR analysis. Almost all genes confirmed by this method were downregulated. Their gene products belonged to vesicle-associated proteins, that is, synaptotagmin 6 and syntaxin 12, to cytoskeletal proteins, like myosin 6, pleckstrin, or to proteins of the extracellular matrix, such as collagens, or laminin C3. Our results underline the pivotal roles of structural genes that control formation and stabilization of pre- and post-synaptic elements or influence axon guidance in schizophrenia. The glial origin of collagen or laminin highlights the close interrelationship between neurons and glial cells in establishment and maintenance of synaptic strength and plasticity. It is hypothesized that abnormal expression of these and related genes has a major impact on the pathophysiology of schizophrenia.
Abstract:
Edges are crucial for the formation of coherent objects from sequential sensory inputs within a single modality. Moreover, temporally coincident boundaries of perceptual objects across different sensory modalities facilitate crossmodal integration. Here, we used functional magnetic resonance imaging in order to examine the neural basis of temporal edge detection across modalities. Onsets of sensory inputs are not only related to the detection of an edge but also to the processing of novel sensory inputs. Thus, we used transitions from input to rest (offsets) as convenient stimuli for studying the neural underpinnings of visual and acoustic edge detection per se. We found, besides modality-specific patterns, shared visual and auditory offset-related activity in the superior temporal sulcus and insula of the right hemisphere. Our data suggest that right hemispheric regions known to be involved in multisensory processing are crucial for detection of edges in the temporal domain across both visual and auditory modalities. This operation is likely to facilitate cross-modal object feature binding based on temporal coincidence. Hum Brain Mapp, 2008. (c) 2008 Wiley-Liss, Inc.
Abstract:
Background: The left superior temporal gyrus (STG) has been suggested to play a key role in auditory verbal hallucinations (AVH) in patients with schizophrenia. Methods: Eleven medicated subjects with schizophrenia and medication-resistant AVH and 19 healthy controls underwent perfusion magnetic resonance (MR) imaging with arterial spin labeling (ASL). Three additional repeated measurements were conducted in the patients. Patients underwent a treatment with transcranial magnetic stimulation (TMS) between the first 2 measurements. The main outcome measure was the pooled cerebral blood flow (CBF), which consisted of the regional CBF measurement in the left STG and the global CBF measurement in the whole brain. Results: Regional CBF in the left STG in patients was significantly higher compared to controls (p < 0.0001) and to the global CBF in patients (p < 0.004) at baseline. Regional CBF in the left STG remained significantly increased compared to the global CBF in patients across time (p < 0.0007), and it remained increased in patients after TMS compared to the baseline CBF in controls (p < 0.0001). After TMS, PANSS (p = 0.003) and PSYRATS (p = 0.01) scores decreased significantly in patients. Conclusions: This study demonstrated tonically increased regional CBF in the left STG in patients with schizophrenia and auditory hallucinations despite a decrease in symptoms after TMS. These findings were consistent with what has previously been termed a trait marker of AVH in schizophrenia.
Abstract:
Visual perception of body motion is vital for everyday activities such as social interaction, motor learning or car driving. Tumors to the left lateral cerebellum impair visual perception of body motion. However, compensatory potential after cerebellar damage and underlying neural mechanisms remain unknown. In the present study, visual sensitivity to point-light body motion was psychophysically assessed in patient SL with dysplastic gangliocytoma (Lhermitte-Duclos disease) to the left cerebellum before and after neurosurgery, and in a group of healthy matched controls. Brain activity during processing of body motion was assessed by functional magnetic resonance imaging (MRI). Alterations in underlying cerebro-cerebellar circuitry were studied by psychophysiological interaction (PPI) analysis. Visual sensitivity to body motion in patient SL before neurosurgery was substantially lower than in controls, with significant improvement after neurosurgery. Functional MRI in patient SL revealed a similar pattern of cerebellar activation during biological motion processing as in healthy participants, but located more medially, in the left cerebellar lobules III and IX. As in healthy participants, PPI analysis showed cerebellar communication with a region in the superior temporal sulcus, but located more anteriorly. The findings demonstrate a potential for recovery of visual body motion processing after cerebellar damage, likely mediated by topographic shifts within the corresponding cerebro-cerebellar circuitry induced by cerebellar reorganization. The outcome is of importance for further understanding of cerebellar plasticity and neural circuits underpinning visual social cognition.
Abstract:
In order to interact with the multisensory world that surrounds us, we must integrate various sources of sensory information (vision, hearing, touch...). A fundamental question is thus how the brain integrates the separate elements of an object defined by several sensory components to form a unified percept. The superior colliculus has long been the main model for studying multisensory integration. At the cortical level, until recently, multisensory integration appeared to be a characteristic attributed to high-level association regions. First, we describe recently observed direct cortico-cortical connections between different sensory cortical areas in the non-human primate and discuss the potential role of these connections. Then, we show that the projections between different sensory and motor cortical areas and the thalamus enabled us to highlight the existence of thalamic nuclei that, by their connections, may represent an alternative pathway for information transfer between different sensory and/or motor cortical areas. The thalamus is in a position to allow a faster transfer and even an integration of information across modalities. Finally, we discuss the role of these non-specific connections regarding behavioral evidence in the monkey and recent electrophysiological evidence in the primary cortical sensory areas.
Abstract:
The human voice is the dominant part of our auditory environment. Not only do humans use the voice for speech, they are equally adept at extracting from it a wealth of relevant information about the speaker. This universal expertise for the human voice is reflected in the presence of voice-preferential areas along the superior temporal sulci. To date, little data inform us about the nature and development of this voice-selective response. In the visual domain, a vast literature addresses a similar question with respect to face perception. The study of visual experts has identified the processes and regions involved in their expertise and demonstrated a strong resemblance to those used for faces. In the auditory domain, very few studies have compared expertise for the voice with expertise for other auditory categories, even though such comparisons could contribute to a better understanding of vocal and auditory perception. The present thesis aims to specify the specificity of the processes and regions involved in voice processing. To this end, different types of experts were recruited and different experimental methods were used. The first study assessed the influence of musical expertise on processing of the human voice, using behavioral discrimination tasks with voices and musical instruments. The results showed that amateur musicians were better than non-musicians at discriminating not only the timbres of musical instruments but also human voices, suggesting a generalization of the perceptual learning induced by musical practice. The second study aimed to compare auditory evoked potentials to birdsong between amateur birdwatchers and novice participants.
The observation of a different topographic distribution in birdwatchers for the three sound categories (voices, birdsong, environmental sounds) made the results difficult to interpret. The third study sought to specify the role of the temporal voice areas in the processing of categories of expertise in two groups of auditory experts: amateur birdwatchers and luthiers. Behavioral data showed an interaction between the two expert groups and their respective categories of expertise on discrimination and memory tasks. Functional magnetic resonance imaging results showed an interaction of the same type in the left superior temporal sulcus and the left posterior cingulate gyrus. Thus, the voice areas are involved in the processing of stimuli of expertise in two different groups of auditory experts. This result suggests that selectivity for the human voice, as found in the superior temporal sulci, could be explained by prolonged exposure to these stimuli. The data presented demonstrate several behavioral and anatomo-functional similarities between the processing of the voice and of other categories of expertise. These commonalities can be explained by an organization of the brain that is at once functional and economical. Consequently, the processing of the voice and of other sound categories would rely on the same neural networks, except in cases of more in-depth processing. This interpretation is particularly important for proposing an integrative approach to the specificity of voice processing.
Abstract:
Recent brain imaging studies using functional magnetic resonance imaging (fMRI) have implicated insula and anterior cingulate cortices in the empathic response to another's pain. However, virtually nothing is known about the impact of the voluntary generation of compassion on this network. To investigate these questions we assessed brain activity using fMRI while novice and expert meditation practitioners generated a loving-kindness-compassion meditation state. To probe affective reactivity, we presented emotional and neutral sounds during the meditation and comparison periods. Our main hypothesis was that the concern for others cultivated during this form of meditation enhances affective processing, in particular in response to sounds of distress, and that this response to emotional sounds is modulated by the degree of meditation training. The presentation of the emotional sounds was associated with increased pupil diameter and activation of limbic regions (insula and cingulate cortices) during meditation (versus rest). During meditation, the increase in insula activation for negative sounds relative to positive or neutral sounds was greater in expert than in novice meditators. The strength of activation in insula was also associated with self-reported intensity of the meditation for both groups. These results support the role of the limbic circuitry in emotion sharing. The comparison of meditation vs. rest states between experts and novices also showed increased activation in amygdala, right temporo-parietal junction (TPJ), and right posterior superior temporal sulcus (pSTS) in response to all sounds, suggesting greater detection of the emotional sounds and enhanced mentation in response to emotional human vocalizations in experts relative to novices during meditation.
Together these data indicate that the mental expertise to cultivate positive emotion alters the activation of circuitries previously linked to empathy and theory of mind in response to emotional stimuli.
Abstract:
Visual observation of human actions provokes more motor activation than observation of robotic actions. We investigated the extent to which this visuomotor priming effect is mediated by bottom-up or top-down processing. The bottom-up hypothesis suggests that robotic movements are less effective in activating the ‘mirror system’ via pathways from visual areas via the superior temporal sulcus to parietal and premotor cortices. The top-down hypothesis postulates that beliefs about the animacy of a movement stimulus modulate mirror system activity via descending pathways from areas such as the temporal pole and prefrontal cortex. In an automatic imitation task, subjects performed a prespecified movement (e.g. hand opening) on presentation of a human or robotic hand making a compatible (opening) or incompatible (closing) movement. The speed of responding on compatible trials, compared with incompatible trials, indexed visuomotor priming. In the first experiment, robotic stimuli were constructed by adding a metal and wire ‘wrist’ to a human hand. Questionnaire data indicated that subjects believed these movements to be less animate than those of the human stimuli but the visuomotor priming effects of the human and robotic stimuli did not differ. In the second experiment, when the robotic stimuli were more angular and symmetrical than the human stimuli, human movements elicited more visuomotor priming than the robotic movements. However, the subjects’ beliefs about the animacy of the stimuli did not affect their performance. These results suggest that bottom-up processing is primarily responsible for the visuomotor priming advantage of human stimuli.
Abstract:
In nonhuman species, testosterone is known to have permanent organizing effects early in life that predict later expression of sex differences in brain and behavior. However, in humans, it is still unknown whether such mechanisms have organizing effects on neural sexual dimorphism. In human males, we show that variation in fetal testosterone (FT) predicts later local gray matter volume of specific brain regions in a direction that is congruent with sexual dimorphism observed in a large independent sample of age-matched males and females from the NIH Pediatric MRI Data Repository. Right temporoparietal junction/posterior superior temporal sulcus (RTPJ/pSTS), planum temporale/parietal operculum (PT/PO), and posterior lateral orbitofrontal cortex (plOFC) had local gray matter volume that was both sexually dimorphic and predicted in a congruent direction by FT. That is, gray matter volume in RTPJ/pSTS was greater for males compared to females and was positively predicted by FT. Conversely, gray matter volume in PT/PO and plOFC was greater in females compared to males and was negatively predicted by FT. Subregions of both amygdala and hypothalamus were also sexually dimorphic in the direction of Male > Female, but were not predicted by FT. However, FT positively predicted gray matter volume of a non-sexually dimorphic subregion of the amygdala. These results bridge a long-standing gap between human and nonhuman species by showing that FT acts as an organizing mechanism for the development of regional sexual dimorphism in the human brain.
Abstract:
Formal thought disorder (FTD) is one of the main symptoms of schizophrenia. To date there are no whole-brain volumetric studies investigating gray matter (GM) differences specifically associated with FTD. Here, we studied 20 right-handed schizophrenia patients who differed in the severity of formal thought disorder and 20 matched healthy controls, using voxel-based morphometry (VBM). The severity of FTD was measured with the Scale for the Assessment of Thought, Language, and Communication. FTD severity was negatively correlated with the GM volume of the left superior temporal sulcus, the left temporal pole, the right middle orbital gyrus and the right cuneus/lingual gyrus. Structural abnormalities specific for FTD were found to be unrelated to GM differences associated with schizophrenia in general. The specific GM abnormalities within the left temporal lobe may help to explain the language disturbances included in FTD.