992 results for Superior temporal sulcus
Abstract:
Human subjects overestimate the change of rising-intensity sounds compared with falling-intensity sounds. Rising sound intensity has therefore been proposed to be an intrinsic warning cue. To test this hypothesis, we presented rising-, falling-, and constant-intensity sounds to healthy humans and gathered psychophysiological and behavioral responses. Brain activity was measured using event-related functional magnetic resonance imaging. We found that rising compared with falling sound intensity facilitates the autonomic orienting reflex and phasic alertness to auditory targets. Rising-intensity sounds produced neural activity in the amygdala, which was accompanied by activity in the intraparietal sulcus, superior temporal sulcus, and temporal plane. Our results indicate that rising sound intensity is an elementary warning cue that elicits adaptive responses by recruiting attentional and physiological resources. Regions involved in cross-modal integration were activated by rising sound intensity, whereas involvement of a right-hemisphere phasic alertness network was not supported by this study.
Abstract:
Edges are important cues defining coherent auditory objects. As a model of auditory edges, sound onset and offset are particularly suitable for studying their neural underpinnings because they contrast a specific physical input against no physical input. The change from silence to sound, that is, onset, has been studied extensively and elicits transient neural responses bilaterally in auditory cortex. However, neural activity associated with sound onset reflects not only edge detection but also novel afferent input. Edges at the change from sound to silence, that is, offset, are not confounded by novel physical input and thus allow us to examine neural activity associated with sound edges per se. In the first experiment, we used silent-acquisition functional magnetic resonance imaging and found that the offset of pulsed sound activates the planum temporale, superior temporal sulcus, and planum polare of the right hemisphere. In the planum temporale and the superior temporal sulcus, offset response amplitudes were related to the pulse repetition rate of the preceding stimulation. In the second experiment, we found that these offset-responsive regions were also activated by single sound pulses, by the onset of sound pulse sequences, and by single sound pulse omissions within sound pulse sequences. However, they were not active during sustained sound presentation. Thus, our data show that circumscribed areas in right temporal cortex are specifically involved in identifying auditory edges. This operation is crucial for translating acoustic signal time series into coherent auditory objects.
Abstract:
Comprehending speech is one of the most important human behaviors, but we are only beginning to understand how the brain accomplishes this difficult task. One key to speech perception seems to be that the brain integrates the independent sources of information available in the auditory and visual modalities in a process known as multisensory integration. This allows speech perception to be accurate even in environments in which one modality or the other is ambiguous in the context of noise. Previous electrophysiological and functional magnetic resonance imaging (fMRI) experiments have implicated the posterior superior temporal sulcus (STS) in auditory-visual integration of both speech and non-speech stimuli. While prior imaging studies have found increases in STS activity for audiovisual speech compared with unisensory auditory or visual speech, they do not provide a clear mechanism for how the STS communicates with early sensory areas to integrate the two streams of information into a coherent audiovisual percept. Furthermore, it is currently unknown whether activity within the STS is directly correlated with the strength of audiovisual perception. To better understand the cortical mechanisms that underlie audiovisual speech perception, we first studied STS activity and connectivity during the perception of speech with auditory and visual components of varying intelligibility. By studying fMRI activity during these noisy audiovisual speech stimuli, we found that STS connectivity with auditory and visual cortical areas mirrored perception: when the information from one modality is unreliable and noisy, the STS interacts less with the cortex processing that modality and more with the cortex processing the reliable information. We next characterized the role of STS activity during a striking audiovisual speech illusion, the McGurk effect, to determine whether activity within the STS predicts how strongly a person integrates auditory and visual speech information. Subjects with greater susceptibility to the McGurk effect exhibited stronger fMRI activation of the STS during perception of McGurk syllables, implying a direct correlation between the strength of audiovisual integration of speech and activity within the multisensory STS.
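As an illustrative aside: the kind of condition-wise connectivity result described above is commonly quantified as the correlation between a seed region's time series and those of other regions, computed separately for each noise condition. The following is a minimal sketch of that idea using simulated data; the time series, the 0.6 coupling weight, and the volume count are all invented for demonstration and do not represent the study's actual pipeline.

```python
import numpy as np

def seed_connectivity(seed_ts, target_ts):
    """Pearson correlation between a seed ROI time series and a target
    ROI time series -- a simple proxy for functional connectivity."""
    return np.corrcoef(seed_ts, target_ts)[0, 1]

rng = np.random.default_rng(0)
n_vols = 200  # fMRI volumes in one condition (hypothetical)

# Simulated ROI time series for one condition in which the auditory
# signal is reliable and the visual signal is noise-dominated.
sts = rng.standard_normal(n_vols)                   # posterior STS seed
auditory = 0.6 * sts + rng.standard_normal(n_vols)  # tracks the seed
visual = rng.standard_normal(n_vols)                # independent noise

print(f"STS-auditory r = {seed_connectivity(sts, auditory):.2f}")  # high
print(f"STS-visual   r = {seed_connectivity(sts, visual):.2f}")    # near zero
```

Comparing such seed correlations across clear-auditory versus clear-visual conditions is one simple way to show connectivity "mirroring" the reliable modality.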
Abstract:
The macaque cortical visual system is hierarchically organized into two streams, the ventral stream for recognizing objects and the dorsal stream for analyzing spatial relationships. The ventral stream extends from striate cortex or area V1 to inferior temporal cortex (IT) through extra-striate areas V2 and V4. Between V1 and V2, the ventral stream consists of two roughly parallel sub-streams, one extending from the cytochrome oxidase (CO)-rich blobs in V1 to the CO-rich thin stripes in V2, the other extending from the interblobs in V1 to the interstripes in V2. The blob-dominated sub-stream is thought to analyze surface features such as color, whereas the interblob-dominated one is thought to analyze contour features such as shape. In the current study, the organization of cortical pathways linking V2 thin stripe and interstripe compartments with area V4 was investigated using a combination of physiological and anatomical techniques. Different compartments of V2 were first characterized, in vivo, using optical recording of intrinsic cortical signals. These functionally derived maps of V2 stripe compartments were then used to guide iontophoretic injections of multiple, distinguishable, anterograde tracers into specific V2 compartments. The distribution of labeled axons was analyzed either in horizontal sections through the prelunate gyrus, or in tangentially sectioned portions of physically unfolded cortex containing the lunate sulcus, prelunate gyrus, and superior temporal sulcus. When a V2 thin stripe and an adjacent interstripe were injected with distinguishable tracers, a large primary and several secondary foci were observed in V4. The primary focus from the thin stripe injection was spatially segregated from the primary focus from the V2 interstripe injection, suggesting a retention of the pattern of compartmentation. We examined the distribution of retrogradely labeled cells in V1 following injections of tracers into different V2 compartments, in order to quantify just how parallel the two sub-streams are from V1 to V2. Our results suggest that both blobs and interblobs project to thin stripes in V2, whereas only interblobs project to interstripes. This asymmetrical segregation argues against the original proposal of strict parallelism. (Abstract shortened by UMI.)
Organization of the inferotemporal cortex in the macaque monkey: Connections of areas PITv and CITvp
Abstract:
Visual cortex of macaque monkeys consists of a large number of cortical areas that span the occipital, parietal, temporal, and frontal lobes and occupy more than half of the cortical surface. Although considerable progress has been made in understanding the contributions of many occipital areas to visual perceptual processing, much less is known concerning the specific functional contributions of higher areas in the temporal and frontal lobes. Previous behavioral and electrophysiological investigations have demonstrated that the inferotemporal cortex (IT) is essential to the animal's ability to recognize and remember visual objects. While it is generally recognized that IT consists of a number of anatomically and functionally distinct visual-processing areas, there remains considerable controversy concerning the precise number, size, and location of these areas. Therefore, the precise delineation of the cortical subdivisions of inferotemporal cortex is critical for any significant progress in the understanding of the specific contributions of inferotemporal areas to visual processing. In this study, anterograde and/or retrograde neuroanatomical tracers were injected into two visual areas in the ventral posterior and central portions of IT (areas PITv and CITvp) to elucidate the corticocortical connections of these areas with well known areas of occipital cortex and with less well understood regions of inferotemporal cortex. The locations of injection sites and the delineation of the borders of many occipital areas were aided by the pattern of interhemispheric connections, revealed following callosal transection and subsequent labeling with HRP. The resultant patterns of connections were represented on two-dimensional computational (CARET) and manual cortical maps, and the laminar characteristics and density of the projection fields were quantified. The laminar and density features of these corticocortical connections demonstrate thirteen anatomically distinct subdivisions or areas distributed within the superior temporal sulcus and across the inferotemporal gyrus. These results serve to refine previous descriptions of inferotemporal areas, validate recently identified areas, and provide a new description of the hierarchical relationships among occipitotemporal cortical areas in macaques.
Abstract:
In this paper, we review evidence from comparative studies of primate cortical organization, highlighting recent findings and hypotheses that may help us to understand the rules governing evolutionary changes of the cortical map and the process of formation of areas during development. We argue that clear unequivocal views of cortical areas and their homologies are more likely to emerge for 'core' fields, including the primary sensory areas, which are specified early in development by precise molecular identification steps. In primates, the middle temporal area is probably one of these primordial cortical fields. Areas that form at progressively later stages of development correspond to progressively more recent evolutionary events, their development being less firmly anchored in molecular specification. The certainty with which areal boundaries can be delimited, and likely homologies can be assigned, becomes increasingly blurred in parallel with this evolutionary/developmental sequence. For example, while current concepts for the definition of cortical areas have been vindicated in allowing a clarification of the organization of the New World monkey 'third tier' visual cortex (the third and dorsomedial areas, V3 and DM), our analyses suggest that more flexible mapping criteria may be needed to unravel the organization of higher-order visual association and polysensory areas.
Abstract:
There have been many functional imaging studies of the brain basis of theory of mind (ToM) skills, but the findings are heterogeneous and implicate anatomical regions as far apart as orbitofrontal cortex and the inferior parietal lobe. The functional imaging studies are reviewed to determine whether the diverse findings are due to methodological factors. The studies are considered according to the paradigm employed (e.g., stories vs. cartoons and explicit vs. implicit ToM instructions), the mental state(s) investigated, and the language demands of the tasks. Methodological variability does not seem to account for the variation in findings, although this conclusion may partly reflect the relatively small number of studies. Alternatively, several distinct brain regions may be activated during ToM reasoning, forming an integrated functional "network." The imaging findings suggest that there are several "core" regions in the network, including parts of the prefrontal cortex and superior temporal sulcus, while several more "peripheral" regions may contribute to ToM reasoning in a manner contingent on relatively minor aspects of the ToM task.
Abstract:
From birth, infants preferentially attend to human motion, which allows them to learn to interpret other people's facial expressions and mental states. Evidence from adults shows that selectivity of the amygdala and the posterior superior temporal sulcus (pSTS) to biological motion correlates with social network size. Social motivation (one's desire to orient to the social world, to seek and find reward in social interaction, and to maintain social relationships) may also contribute to neural specialization for biological motion and to social network characteristics. The current study aimed to determine whether neural selectivity for biological motion relates to social network characteristics, and to gain preliminary evidence as to whether social motivation plays a role in this relation. Findings suggest that neural selectivity for biological motion in the pSTS is positively related to social network size in middle childhood and that this relation is moderated by social motivation.
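Statistically, the moderation claim above corresponds to an interaction term in a regression model. Below is a minimal sketch of such a moderated regression on simulated data; the sample size, variables, and coefficients are all hypothetical, and this is not the study's actual analysis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 60  # hypothetical sample of children

motivation = rng.standard_normal(n)   # social motivation (z-scored)
network = rng.standard_normal(n)      # social network size (z-scored)

# Simulated outcome: pSTS selectivity tracks network size more strongly
# when social motivation is high (the 0.4 interaction term below).
selectivity = (0.3 * network + 0.2 * motivation
               + 0.4 * motivation * network + rng.standard_normal(n))

X = sm.add_constant(np.column_stack([network, motivation,
                                     motivation * network]))
fit = sm.OLS(selectivity, X).fit()
print(fit.params)  # a reliable coefficient on the product term = moderation
```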
Abstract:
Speech perception routinely takes place in noisy or degraded listening environments, leading to ambiguity in the identity of the speech token. Here, I present one review paper and two experimental papers that highlight cognitive and visual speech contributions to the listening process, particularly in challenging listening environments. First, I survey the literature linking audiometric age-related hearing loss and cognitive decline and review the four proposed causal mechanisms underlying this link. I argue that future research in this area requires greater consideration of the functional overlap between hearing and cognition. I also present an alternative framework for understanding causal relationships between age-related declines in hearing and cognition, with emphasis on the interconnected nature of hearing and cognition and likely contributions from multiple causal mechanisms. I also provide a number of testable hypotheses to examine how impairments in one domain may affect the other. In my first experimental study, I examine the direct contribution of working memory (through a cognitive training manipulation) to speech-in-noise comprehension in older adults. My results challenge the efficacy of cognitive training more generally, and provide support for the contribution of sentence context in reducing working memory load. My findings also challenge the ubiquitous use of the Reading Span test as a pure test of working memory. In a second experimental (fMRI) study, I examine the role of attention in audiovisual speech integration, particularly when the acoustic signal is degraded. I demonstrate that attentional processes support audiovisual speech integration in the middle and superior temporal gyri, as well as in the fusiform gyrus. My results also suggest that the superior temporal sulcus is sensitive to intelligibility enhancement, regardless of how this benefit is obtained (i.e., whether through visual speech information or through speech clarity). I also demonstrate that both the cingulo-opercular network and motor speech areas are recruited in difficult listening conditions. Taken together, these findings augment our understanding of cognitive contributions to the listening process and demonstrate that memory, working memory, and executive control networks may flexibly be recruited in order to meet listening demands in challenging environments.
Abstract:
The estimation of others' pain can be influenced by various factors related to the person in pain, to the observer, or to the interaction between them. Among these factors, repeated exposure to others' pain, in healthcare settings or in a relationship in which one of the two partners suffers from chronic pain, has often been linked to an underestimation of others' pain. The objective of this thesis was to measure the impact of repeated exposure to others' pain on the subsequent estimation of others' pain, on brain activity during the observation of others' pain, and, finally, on pain estimation in the spouses of chronic pain patients. The first experimental study isolated the factor of repeated exposure to others' pain from other confounding factors that can modulate the estimation of others' pain. It demonstrated that repeated exposure to others' pain decreases the subsequent evaluation of others' pain. In the second study, using functional magnetic resonance imaging, it was shown that repeated exposure to others' pain leads to changes, during the observation of others' pain, in the activity of brain regions associated with the affective processing of pain (bilateral insula) but also with its cognitive processing (superior temporal sulcus; precuneus). Finally, the third experimental study, which had a more clinical aim, demonstrated that the spouses of chronic pain patients do not overestimate their partner's pain, but that they perceive pain even in neutral facial expressions. Taken together, these results suggest that, in healthy subjects, repeated exposure to others' pain leads to an underestimation of others' pain and to changes in the pain-matrix network during the observation of others' pain. In sum, these results demonstrate that repeated exposure to others' pain, in an experimental context, has major impacts on the observer and on their judgment of pain intensity.
Abstract:
Interaural intensity and time differences (IID and ITD) are two binaural auditory cues for localizing sounds in space. This study investigated the spatio-temporal brain mechanisms for processing and integrating IID and ITD cues in humans. Auditory-evoked potentials were recorded while subjects passively listened to noise bursts lateralized with IID, ITD, or both cues simultaneously, as well as to a more frequent centrally presented noise. In a separate psychophysical experiment, subjects actively discriminated lateralized from centrally presented stimuli. IID and ITD cues elicited different electric field topographies starting at approximately 75 ms post-stimulus onset, indicative of the engagement of distinct cortical networks. By contrast, no performance differences were observed between IID and ITD cues during the psychophysical experiment. Subjects did, however, respond significantly faster and more accurately when both cues were presented simultaneously. This performance facilitation exceeded predictions from probability summation, suggestive of interactions in the neural processing of IID and ITD cues. Supra-additive neural response interactions, as well as topographic modulations, were indeed observed approximately 200 ms post-stimulus when responses to the simultaneous presentation of both cues were compared with the mean of those to separate IID and ITD cues. Source estimations revealed differential processing of IID and ITD cues initially within superior temporal cortices and also at later stages within temporo-parietal and inferior frontal cortices. Differences were principally in terms of hemispheric lateralization. The collective psychophysical and electrophysiological results support the hypothesis that IID and ITD cues are processed by distinct, but interacting, cortical networks that can in turn facilitate auditory localization.
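The probability-summation benchmark mentioned above has a simple closed form under the assumption that the two cues are detected independently: a lateralized stimulus is detected if either cue alone is. A minimal sketch, with hypothetical accuracies:

```python
def probability_summation(p_iid, p_itd):
    """Predicted accuracy if the two cues are detected independently:
    the trial succeeds when either cue alone is detected."""
    return p_iid + p_itd - p_iid * p_itd

# Hypothetical single-cue discrimination accuracies
p_iid, p_itd = 0.80, 0.78
predicted = probability_summation(p_iid, p_itd)  # 0.956
observed = 0.99                                  # hypothetical dual-cue accuracy

print(f"predicted: {predicted:.3f}, observed: {observed:.3f}")
# observed > predicted is the signature of interacting, rather than
# merely independent, neural processing of the two cues
```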
Abstract:
INTRODUCTION: Handwriting is a modality of language production whose cerebral substrates remain poorly known, although the existence of writing-specific regions has been postulated. Descriptions of brain-damaged patients with agraphia and, more recently, several neuroimaging studies suggest the involvement of different brain regions. However, results vary with the methodological choices made and may not always discriminate between "writing-specific" processes and motor or linguistic processes shared with other abilities. METHODS: We used the "Activation Likelihood Estimate" (ALE) meta-analytical method to identify the cerebral network of areas commonly activated during handwriting in 18 neuroimaging studies published in the literature. Included contrasts were also classified according to the control tasks used, whether non-specific motor/output-control or linguistic/input-control. These data were included in two secondary meta-analyses in order to reveal the functional role of the different areas of this network. RESULTS: An extensive, mainly left-hemisphere network of 12 cortical and sub-cortical areas was obtained. Three of these were considered primarily writing-specific (the left superior frontal sulcus/middle frontal gyrus area, the left intraparietal sulcus/superior parietal area, and the right cerebellum), while the others related rather to non-specific motor processes (primary motor and sensorimotor cortex, supplementary motor area, thalamus, and putamen) or to linguistic processes (ventral premotor cortex, posterior/inferior temporal cortex). CONCLUSIONS: This meta-analysis provides a description of the cerebral network of handwriting as revealed by various types of neuroimaging experiments and confirms the crucial involvement of the left frontal and superior parietal regions. These findings provide new insights into the cognitive processes involved in handwriting and their cerebral substrates.
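For readers unfamiliar with the ALE method: each activation focus reported by a study is modeled as a 3-D Gaussian probability blob, and the per-study "modeled activation" maps are combined as a voxelwise union. The toy sketch below illustrates only that core computation; the coordinates, volume size, and Gaussian width are invented, and the null-distribution significance testing used in real ALE analyses is omitted.

```python
import numpy as np

def ma_map(foci, shape, sigma=2.0):
    """Modeled activation (MA) map for one study: each reported focus
    becomes a 3-D Gaussian blob; each voxel keeps the maximum
    probability over that study's foci."""
    zz, yy, xx = np.indices(shape)
    ma = np.zeros(shape)
    for fz, fy, fx in foci:
        d2 = (zz - fz) ** 2 + (yy - fy) ** 2 + (xx - fx) ** 2
        ma = np.maximum(ma, np.exp(-d2 / (2.0 * sigma ** 2)))
    return ma

shape = (20, 20, 20)  # toy brain volume
studies = [
    [(5, 5, 5), (10, 12, 8)],   # invented foci "reported" by study 1
    [(6, 5, 5)],                # study 2
    [(5, 6, 4), (15, 15, 15)],  # study 3
]

# ALE score: voxelwise union of the per-study activation probabilities
ale = 1.0 - np.prod([1.0 - ma_map(f, shape) for f in studies], axis=0)
peak = np.unravel_index(ale.argmax(), shape)
print("peak ALE voxel:", peak)  # lands where the studies converge, near (5, 5, 5)
```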
Abstract:
This thesis aims to compare the emotional expressions evoked by music, the voice (non-linguistic expressions), and the face at the behavioral and neural levels. More specifically, the goal is to take advantage of the undeniable emotional power of music to refine our understanding of current theories and models of emotional processing. Moreover, it is possible that this striking capacity of music to evoke emotions stems from its ability to tap into the neural circuits dedicated to the voice, although evidence to that effect remains sparse for now. Such a comparison can potentially help to elucidate, in part, the nature of musical emotions. To this end, several studies were carried out and are presented here in two separate articles. The studies presented in the first article compared, at the behavioral level, the effects of emotional expressions on memory across the musical and vocal (non-linguistic) domains. The results revealed a systematic memory advantage for fear in both domains. A correlation in individual memory performance was also found between musical and vocal expressions of fear. These results are therefore consistent with the hypothesis of similar perceptual processing for music and the voice. In the second article, the neural correlates associated with the perception of emotional expressions evoked by music, the voice, and the face were directly compared using functional magnetic resonance imaging (fMRI). A significant increase in the blood oxygen level dependent (BOLD) signal was found in the amygdala (and in the posterior insula) in response to fear, across all domains and modalities studied. A correlation in the individual BOLD response of the amygdala between musical and vocal processing was also found, again suggesting similarities between the two domains. In addition, domain-specific regions were identified: the fusiform gyrus (FG/FFA) for facial expressions, the superior temporal sulcus (STS) for vocal expressions, and an anterior portion of the superior temporal gyrus (STG) particularly sensitive to musical expressions (fear and happiness), whose response was modulated by stimulus intensity. Taken together, these results reveal similarities but also differences in the processing of emotional expressions conveyed by the visual and auditory modalities, as well as by different domains within the auditory modality (music and voice). In particular, it appears that musical and vocal expressions share close similarities, especially with respect to the processing of fear. These data add to current knowledge about the emotional power of music and help to elucidate the perceptual mechanisms underlying the processing of musical emotions. Consequently, these results also provide important support for the use of music in the study of emotions, which may eventually contribute to the development of potential interventions for psychiatric populations.
Abstract:
It has previously been demonstrated that extensive activation in the dorsolateral temporal lobes is associated with masking a speech target with a speech masker, consistent with the hypothesis that competition for central auditory processes is an important factor in informational masking. Here, masking from speech and from two additional maskers derived from the original speech was investigated. One of these is spectrally rotated speech, which is unintelligible and has a similar (inverted) spectrotemporal profile to speech. The authors also controlled for the possibility of "glimpsing" of the target signal during modulated masking sounds by using speech-modulated noise as a masker in a baseline condition. Functional imaging results reveal that masking speech with speech leads to bilateral superior temporal gyrus (STG) activation relative to a speech-in-noise baseline, while masking speech with spectrally rotated speech leads solely to right STG activation relative to the baseline. This result is discussed in terms of hemispheric asymmetries for speech perception, and is interpreted as showing that masking effects can arise through two parallel neural systems, in the left and right temporal lobes. This has implications for the competition for resources caused by speech and rotated speech maskers, and may illuminate some of the mechanisms involved in informational masking.
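Spectral rotation, as used here, reflects the spectrum about a pivot frequency so that the masker remains spectrotemporally matched to speech but unintelligible. One common implementation (in the spirit of Blesser's classic technique) band-limits the signal, modulates it with a carrier at twice the pivot frequency, and low-pass filters the result. The sketch below assumes a 2 kHz pivot and a 16 kHz sampling rate, both illustrative rather than necessarily the study's parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def spectrally_rotate(x, fs, pivot_hz=2000.0):
    """Reflect the spectrum of x about pivot_hz (f -> 2*pivot_hz - f):
    band-limit the input below 2*pivot_hz, modulate with a carrier at
    2*pivot_hz, then low-pass again to keep only the inverted sideband."""
    b, a = butter(6, (2 * pivot_hz * 0.95) / (fs / 2), btype="low")
    x_lp = filtfilt(b, a, x)
    t = np.arange(len(x)) / fs
    modulated = x_lp * np.cos(2 * np.pi * 2 * pivot_hz * t)
    return filtfilt(b, a, modulated)

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 500 * t)     # a 500 Hz tone...
rotated = spectrally_rotate(tone, fs)  # ...comes out near 3500 Hz
```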