956 results for Superior temporal sulcus


Relevance: 100.00%

Abstract:

In this study we investigate previous claims that a region in the left posterior superior temporal sulcus (pSTS) is more activated by audiovisual than unimodal processing. First, we compare audiovisual to visual-visual and auditory-auditory conceptual matching using auditory or visual object names that are paired with pictures of objects or their environmental sounds. Second, we compare congruent and incongruent audiovisual trials when presentation is simultaneous or sequential. Third, we compare audiovisual stimuli that are either verbal (auditory and visual words) or nonverbal (pictures of objects and their associated sounds). The results demonstrate that, when task, attention, and stimuli are controlled, pSTS activation for audiovisual conceptual matching is 1) identical to that observed for intramodal conceptual matching, 2) greater for incongruent than congruent trials when auditory and visual stimuli are simultaneously presented, and 3) identical for verbal and nonverbal stimuli. These results are not consistent with previous claims that pSTS activation reflects the active formation of an integrated audiovisual representation. After a discussion of the stimulus and task factors that modulate activation, we conclude that, when stimulus input, task, and attention are controlled, pSTS is part of a distributed set of regions involved in conceptual matching, irrespective of whether the stimuli are audiovisual, auditory-auditory or visual-visual.

Relevance: 100.00%

Abstract:

Using functional magnetic resonance imaging (fMRI), we investigated brain activity evoked by mutual and averted gaze in a compelling and commonly experienced social encounter. Through virtual-reality goggles, subjects viewed a man who walked toward them and shifted his neutral gaze either toward them (mutual gaze) or away from them (averted gaze). Robust activity was evoked in the superior temporal sulcus (STS) and fusiform gyrus (FFG). For both conditions, STS activity was strongly right lateralized. Mutual gaze evoked greater activity in the STS than did averted gaze, whereas the FFG responded equivalently to mutual and averted gaze. Thus, we show that the STS is involved in processing social information conveyed by shifts in gaze within an overtly social context. This study extends understanding of the role of the STS in social cognition and social perception by demonstrating that it is highly sensitive to the context in which a human action occurs.

Relevance: 100.00%

Abstract:

The ability to discriminate conspecific vocalizations is observed across species and early during development. However, its neurophysiologic mechanism remains controversial, particularly regarding whether it involves specialized processes with dedicated neural machinery. We identified spatiotemporal brain mechanisms for conspecific vocalization discrimination in humans by applying electrical neuroimaging analyses to auditory evoked potentials (AEPs) in response to acoustically and psychophysically controlled nonverbal human and animal vocalizations as well as sounds of man-made objects. AEP strength modulations in the absence of topographic modulations indicate that statistically indistinguishable brain networks respond with different strength. First, responses to human vocalizations were significantly stronger than, but topographically indistinguishable from, responses to animal vocalizations starting at 169-219 ms after stimulus onset, within regions of the right superior temporal sulcus and superior temporal gyrus. This effect correlated with another AEP strength modulation occurring at 291-357 ms that was localized within the left inferior prefrontal and precentral gyri. Temporally segregated and spatially distributed stages of vocalization discrimination are thus functionally coupled and demonstrate how conventional views of functional specialization must incorporate network dynamics. Second, vocalization discrimination is not subject to facilitated processing in time, but instead lags more general categorization by approximately 100 ms, indicative of hierarchical processing during object discrimination. Third, although differences between human and animal vocalizations persisted when analyses were performed at a single-object level or extended to include additional (man-made) sound categories, at no latency were responses to human vocalizations stronger than those to all other categories. Vocalization discrimination thus unfolds over a time course synchronous with that of face discrimination but is not functionally specialized.
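The strength-versus-topography logic used above can be made concrete. The following is a minimal Python sketch, not taken from the study, of the two standard electrical-neuroimaging measures it relies on: global field power (GFP), which indexes response strength, and global map dissimilarity (GMD), which indexes topographic differences after strength normalization; all array names and shapes are illustrative assumptions.

import numpy as np

def global_field_power(erp):
    # GFP: spatial standard deviation across electrodes at each time point.
    # erp: array of shape (n_electrodes, n_times), assumed average-referenced.
    erp = erp - erp.mean(axis=0, keepdims=True)  # enforce average reference
    return np.sqrt((erp ** 2).mean(axis=0))

def global_map_dissimilarity(erp_a, erp_b):
    # GMD: distance between strength-normalized topographies. A GFP difference
    # with near-zero GMD suggests the same network responding with different
    # strength, which is the interpretation invoked in the abstract.
    def unit_maps(erp):
        erp = erp - erp.mean(axis=0, keepdims=True)
        gfp = np.sqrt((erp ** 2).mean(axis=0))
        return erp / gfp  # scale each time point's map to unit strength
    u, v = unit_maps(erp_a), unit_maps(erp_b)
    return np.sqrt(((u - v) ** 2).mean(axis=0))

# Toy usage with simulated data: 64 electrodes, 300 time points.
rng = np.random.default_rng(0)
human = rng.standard_normal((64, 300))
animal = rng.standard_normal((64, 300))
print(global_field_power(human).shape)                # (300,)
print(global_map_dissimilarity(human, animal).shape)  # (300,)

In practice, claims of "stronger but topographically indistinguishable" responses rest on time-point-wise randomization tests of GFP and GMD between conditions; the functions above compute only the underlying measures.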

Relevance: 100.00%

Abstract:

Inaccurate wiring and synaptic pathology appear to be major hallmarks of schizophrenia. A variety of gene products involved in synaptic neurotransmission and receptor signaling are differentially expressed in brains of schizophrenia patients. However, synaptic pathology may also develop from improper expression of intra- and extracellular structural elements that weaken synaptic stability. We therefore investigated transcription of these elements in the left superior temporal gyrus of 10 schizophrenia patients and 10 healthy controls using genome-wide microarrays (Illumina). Fourteen upregulated and 22 downregulated genes encoding structural elements were chosen from the lists of differentially regulated genes for further qRT-PCR analysis. Almost all genes confirmed by this method were downregulated. Their gene products included vesicle-associated proteins such as synaptotagmin 6 and syntaxin 12, cytoskeletal proteins such as myosin 6 and pleckstrin, and proteins of the extracellular matrix such as collagens and laminin C3. Our results underline the pivotal roles in schizophrenia of structural genes that control the formation and stabilization of pre- and postsynaptic elements or influence axon guidance. The glial origin of collagen and laminin highlights the close interrelationship between neurons and glial cells in the establishment and maintenance of synaptic strength and plasticity. We hypothesize that abnormal expression of these and related genes has a major impact on the pathophysiology of schizophrenia.

Relevance: 100.00%

Abstract:

Edges are crucial for the formation of coherent objects from sequential sensory inputs within a single modality. Moreover, temporally coincident boundaries of perceptual objects across different sensory modalities facilitate crossmodal integration. Here, we used functional magnetic resonance imaging in order to examine the neural basis of temporal edge detection across modalities. Onsets of sensory inputs are not only related to the detection of an edge but also to the processing of novel sensory inputs. Thus, we used transitions from input to rest (offsets) as convenient stimuli for studying the neural underpinnings of visual and acoustic edge detection per se. We found, besides modality-specific patterns, shared visual and auditory offset-related activity in the superior temporal sulcus and insula of the right hemisphere. Our data suggest that right hemispheric regions known to be involved in multisensory processing are crucial for detection of edges in the temporal domain across both visual and auditory modalities. This operation is likely to facilitate cross-modal object feature binding based on temporal coincidence.

Relevance: 100.00%

Abstract:

Background: The left superior temporal gyrus (STG) has been suggested to play a key role in auditory verbal hallucinations (AVH) in patients with schizophrenia. Methods: Eleven medicated subjects with schizophrenia and medication-resistant AVH and 19 healthy controls underwent perfusion magnetic resonance (MR) imaging with arterial spin labeling (ASL). Three additional repeated measurements were conducted in the patients, who underwent treatment with transcranial magnetic stimulation (TMS) between the first two measurements. The main outcome measure was the pooled cerebral blood flow (CBF), which consisted of the regional CBF measurement in the left STG and the global CBF measurement in the whole brain. Results: At baseline, regional CBF in the left STG in patients was significantly higher than in controls (p < 0.0001) and higher than the global CBF in patients (p < 0.004). Regional CBF in the left STG remained significantly increased relative to global CBF in patients across time (p < 0.0007), and it remained increased in patients after TMS relative to baseline CBF in controls (p < 0.0001). After TMS, PANSS (p = 0.003) and PSYRATS (p = 0.01) scores decreased significantly in patients. Conclusions: This study demonstrated tonically increased regional CBF in the left STG in patients with schizophrenia and auditory hallucinations despite a decrease in symptoms after TMS. These findings are consistent with what has previously been termed a trait marker of AVH in schizophrenia.

Relevance: 100.00%

Abstract:

This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled.

Relevance: 100.00%

Abstract:

To identify and categorize complex stimuli such as familiar objects or speech, the human brain integrates information that is abstracted at multiple levels from its sensory inputs. Using cross-modal priming for spoken words and sounds, this functional magnetic resonance imaging study identified 3 distinct classes of visuoauditory incongruency effects, selective for 1) spoken words in the left superior temporal sulcus (STS), 2) environmental sounds in the left angular gyrus (AG), and 3) both words and sounds in the lateral and medial prefrontal cortices (IFS/mPFC). From a cognitive perspective, these incongruency effects suggest that prior visual information influences the neural processes underlying speech and sound recognition at multiple levels, with the STS being involved in phonological, the AG in semantic, and the mPFC/IFS in higher conceptual processing. In terms of neural mechanisms, effective connectivity analyses (dynamic causal modeling) suggest that these incongruency effects may emerge via greater bottom-up effects from early auditory regions to intermediate multisensory integration areas (i.e., STS and AG). This is consistent with a predictive coding perspective on hierarchical Bayesian inference in the cortex, where the domain of the prediction error (phonological vs. semantic) determines its regional expression (middle temporal gyrus/STS vs. AG/intraparietal sulcus).

Relevance: 100.00%

Abstract:

The human voice is the dominant component of our auditory environment. Not only do humans use the voice for speech, they are equally skilled at extracting from it a wealth of relevant information about the speaker. This universal expertise for the human voice is reflected in the presence of voice-preferential areas along the superior temporal sulci. To date, little data exists on the nature and development of this voice-selective response. In the visual domain, a vast literature addresses a similar question with regard to face perception. Studies of visual experts have identified the processes and regions involved in their expertise and have demonstrated a strong resemblance to those used for faces. In the auditory domain, very few studies have compared expertise for the voice with expertise for other auditory categories, even though such comparisons could contribute to a better understanding of vocal and auditory perception. The aim of this thesis is to specify which processes and regions involved in voice processing are voice-specific. To this end, different types of experts were recruited and different experimental methods were used. The first study assessed the influence of musical expertise on the processing of the human voice, using behavioral voice- and musical-instrument-discrimination tasks. The results showed that amateur musicians were better than non-musicians at discriminating not only the timbres of musical instruments but also human voices, suggesting a generalization of the perceptual learning produced by musical practice. The second study compared auditory evoked potentials to birdsong between amateur birdwatchers and novice participants. A different topographic distribution in birdwatchers for all three sound categories (voices, birdsong, environmental sounds) made the results difficult to interpret. The third study sought to specify the role of the temporal voice areas in the processing of categories of expertise in two groups of auditory experts: amateur birdwatchers and luthiers. Behavioral data showed an interaction between the two expert groups and their respective categories of expertise in discrimination and memory tasks. Functional magnetic resonance imaging revealed an interaction of the same type in the left superior temporal sulcus and the left posterior cingulate gyrus. The voice areas are thus involved in the processing of stimuli of expertise in two different groups of auditory experts. This result suggests that the selectivity for the human voice found in the superior temporal sulci could be explained by prolonged exposure to these stimuli. The data presented demonstrate several behavioral and anatomo-functional similarities between the processing of the voice and that of other categories of expertise. These commonalities can be explained by a brain organization that is both functional and economical. Consequently, the voice and other sound categories would be processed by the same neural networks, except where deeper processing is required. This interpretation is particularly important for proposing an integrative approach to the specificity of voice processing.

Relevance: 100.00%

Abstract:

Recent brain imaging studies using functional magnetic resonance imaging (fMRI) have implicated the insula and anterior cingulate cortices in the empathic response to another's pain. However, virtually nothing is known about the impact of the voluntary generation of compassion on this network. To investigate this, we assessed brain activity using fMRI while novice and expert meditation practitioners generated a loving-kindness-compassion meditation state. To probe affective reactivity, we presented emotional and neutral sounds during the meditation and comparison periods. Our main hypothesis was that the concern for others cultivated during this form of meditation enhances affective processing, in particular in response to sounds of distress, and that this response to emotional sounds is modulated by the degree of meditation training. The presentation of the emotional sounds was associated with increased pupil diameter and activation of limbic regions (insula and cingulate cortices) during meditation (versus rest). During meditation, the increase in insula activation for negative sounds relative to positive or neutral sounds was greater in expert than in novice meditators. The strength of activation in the insula was also associated with the self-reported intensity of the meditation in both groups. These results support the role of the limbic circuitry in emotion sharing. The comparison of meditation versus rest states between experts and novices also showed increased activation in the amygdala, right temporo-parietal junction (TPJ), and right posterior superior temporal sulcus (pSTS) in response to all sounds, suggesting greater detection of the emotional sounds and enhanced mentation in response to emotional human vocalizations in experts relative to novices during meditation. Together these data indicate that the mental expertise to cultivate positive emotion alters the activation of circuitries previously linked to empathy and theory of mind in response to emotional stimuli.