9 results for Cross-modal
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
During stereotactic functional neurosurgery, the stimulation procedure used to verify proper target localization provides a unique opportunity to investigate pathophysiological phenomena that cannot be addressed in experimental setups. Here we report on the distribution of response modalities to 487 intraoperative thalamic stimulations performed in 24 neurogenic pain (NP), 17 parkinsonian (PD) and 10 neuropsychiatric (Npsy) patients. Threshold responses were subdivided into somatosensory, motor and affective, and compared between medial (central lateral nucleus) and lateral (ventral anterior, ventral lateral and ventral medial) thalamic nuclei and between patient groups. Major findings were as follows: in the medial thalamus, evoked responses were predominantly somatosensory (95%) in NP patients, motor (47%) in PD patients, and affective (54%) in Npsy patients. In the lateral thalamus, a much higher proportion of somatosensory (83%) than motor responses (5%) was evoked in NP patients, while the proportion was reversed in PD patients (69% motor vs. 21% somatosensory). These results provide the first evidence of functional cross-modal changes in lateral and medial thalamic nuclei in response to intraoperative stimulation in different functional disorders. This extensive functional reorganization sheds new light on wide-ranging plasticity in the adult human thalamocortical system.
Abstract:
The aim of this functional magnetic resonance imaging (fMRI) study was to identify human brain areas that are sensitive to the direction of auditory motion. Such directional sensitivity was assessed in a hypothesis-free manner by analyzing fMRI response patterns across the entire brain volume using a spherical-searchlight approach. In addition, we assessed directional sensitivity in three predefined brain areas that have been associated with auditory motion perception in previous neuroimaging studies: the primary auditory cortex, the planum temporale and the visual motion complex (hMT/V5+). Our whole-brain analysis revealed that the direction of sound-source movement could be decoded from fMRI response patterns in the right auditory cortex and in a high-level visual area located in the right lateral occipital cortex. Our region-of-interest analysis showed that the direction of auditory motion was decoded most reliably from activation patterns of the left and right planum temporale. Auditory motion direction could not be decoded from activation patterns in hMT/V5+. These findings provide further evidence that the planum temporale plays a central role in supporting auditory motion perception. In addition, our findings suggest a cross-modal transfer of directional information to high-level visual cortex in healthy humans.
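A minimal sketch of the spherical-searchlight idea described above, assuming single-trial activation estimates in a 4-D NumPy array and per-trial motion-direction labels; all names, shapes, and the choice of classifier are illustrative, not the study's actual pipeline:

```python
# Hypothetical sketch of spherical-searchlight decoding (not the study's code).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def searchlight_decode(betas, labels, mask, radius=2):
    """For each in-mask voxel, decode labels from the sphere around it.

    betas:  (x, y, z, n_trials) array of single-trial activation estimates
    labels: (n_trials,) motion-direction labels (e.g., 0 = left, 1 = right)
    mask:   (x, y, z) boolean brain mask
    """
    dims = mask.shape
    # Precompute voxel offsets that fall within the sphere radius.
    r = int(radius)
    grid = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1].reshape(3, -1).T
    sphere = grid[np.linalg.norm(grid, axis=1) <= radius]

    accuracy = np.zeros(dims)
    for center in np.argwhere(mask):
        vox = center + sphere
        # Keep offsets inside the volume and inside the brain mask.
        ok = np.all((vox >= 0) & (vox < dims), axis=1)
        vox = vox[ok]
        vox = vox[mask[vox[:, 0], vox[:, 1], vox[:, 2]]]
        X = betas[vox[:, 0], vox[:, 1], vox[:, 2], :].T  # trials x voxels
        # Cross-validated classification accuracy maps local pattern information.
        scores = cross_val_score(LinearSVC(), X, labels, cv=5)
        accuracy[tuple(center)] = scores.mean()
    return accuracy
```

The resulting map assigns each voxel the cross-validated decoding accuracy of its local neighborhood, which is how searchlight analyses localize pattern information without a prior anatomical hypothesis.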
Abstract:
Human subjects overestimate the change of rising-intensity sounds compared with falling-intensity sounds. Rising sound intensity has therefore been proposed to be an intrinsic warning cue. To test this hypothesis, we presented rising, falling, and constant intensity sounds to healthy humans and gathered psychophysiological and behavioral responses. Brain activity was measured using event-related functional magnetic resonance imaging. We found that rising compared with falling sound intensity facilitates the autonomic orienting reflex and phasic alertness to auditory targets. Rising-intensity sounds produced neural activity in the amygdala, accompanied by activity in the intraparietal sulcus, superior temporal sulcus, and temporal plane. Our results indicate that rising sound intensity is an elementary warning cue that elicits adaptive responses by recruiting attentional and physiological resources. Regions involved in cross-modal integration were activated by rising sound intensity, while involvement of the right-hemisphere phasic alertness network was not supported by this study.
Abstract:
Edges are crucial for the formation of coherent objects from sequential sensory inputs within a single modality. Moreover, temporally coincident boundaries of perceptual objects across different sensory modalities facilitate crossmodal integration. Here, we used functional magnetic resonance imaging to examine the neural basis of temporal edge detection across modalities. Onsets of sensory inputs are related not only to the detection of an edge but also to the processing of novel sensory inputs. Thus, we used transitions from input to rest (offsets) as convenient stimuli for studying the neural underpinnings of visual and acoustic edge detection per se. Besides modality-specific patterns, we found shared visual and auditory offset-related activity in the superior temporal sulcus and insula of the right hemisphere. Our data suggest that right-hemispheric regions known to be involved in multisensory processing are crucial for the detection of edges in the temporal domain across both visual and auditory modalities. This operation is likely to facilitate cross-modal object feature binding based on temporal coincidence.
Abstract:
A tacitly held assumption in synesthesia research is the unidirectionality of digit-color associations. This notion is based on synesthetes' report that digits evoke a color percept, but colors do not elicit any numerical impression. In a random color generation task, we found evidence for an implicit co-activation of digits by colors, a finding that constrains neurological theories concerning cross-modal associations in general and synesthesia in particular.
Abstract:
This paper presents a shallow dialogue analysis model aimed at human-human dialogues in the context of staff or business meetings. Four components of the model are defined, and several machine learning techniques are used to extract features from dialogue transcripts: maximum entropy classifiers for dialogue acts, latent semantic analysis for topic segmentation, and decision tree classifiers for discourse markers. A rule-based approach is proposed for resolving cross-modal references to meeting documents. The methods are trained and evaluated on a common data set and annotation format. The integration of the components into an automated shallow dialogue parser opens the way to multimodal meeting processing and retrieval applications.
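As a rough illustration of one component listed above: a maximum entropy classifier is equivalent to multinomial logistic regression, so a minimal dialogue act tagger can be sketched as follows. The utterances, label set, and bag-of-words features are invented for illustration and are not the paper's corpus or feature set:

```python
# Hypothetical sketch of a maximum entropy dialogue act tagger. Maximum
# entropy classification is equivalent to multinomial logistic regression,
# used here over bag-of-words features. Toy data, not the paper's corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "could you open the window",
    "yes sure",
    "what time is the meeting",
    "the meeting starts at ten",
]
dialogue_acts = ["request", "accept", "question", "statement"]

tagger = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
tagger.fit(utterances, dialogue_acts)

print(tagger.predict(["when does the meeting start"]))  # e.g., ['question']
```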
Abstract:
Research has mainly focused on the perceptual nature of synaesthesia. However, synaesthetic experiences are also semantically represented. Our aim was to develop a task to investigate the semantic representation of the concurrent and its relation to the inducer in grapheme-colour synaesthesia. Non-synaesthetes were tested with either a lexical-decision (i.e., word/non-word) or a semantic-classification (i.e., edibility decision) task. Targets consisted of words strongly associated with a specific colour (e.g., banana - yellow) and neutral words not associated with any specific colour (e.g., aunt). Target words were primed with colours: the prime-target relationship was either intramodal (i.e., word - word) or crossmodal (i.e., colour patch - word). Each of the four task versions consisted of three conditions: congruent (same colour for prime and target), incongruent (different colour), and unrelated (neutral target). For both tasks (i.e., lexical and semantic) and both versions of each task (i.e., intramodal and crossmodal), we expected faster reaction times (RTs) in the congruent condition than in the neutral condition and slower RTs in the incongruent condition than in the neutral condition. Stronger effects were expected in the intramodal condition due to the overlap between prime and target modality. The results partly confirmed these hypotheses. We conclude that the tasks and hypotheses can readily be adopted to investigate the nature of the representation of synaesthetic experiences.
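A minimal sketch of the predicted priming pattern described above, computing facilitation and interference effects against the neutral baseline; the reaction times are invented, not the study's data:

```python
# Hypothetical sketch of the predicted congruency pattern; the reaction
# times below are invented, not the study's data.
import numpy as np

rts_ms = {
    "congruent":   np.array([512.0, 498.0, 530.0, 505.0]),
    "neutral":     np.array([540.0, 525.0, 551.0, 533.0]),
    "incongruent": np.array([571.0, 560.0, 583.0, 566.0]),
}

means = {cond: rt.mean() for cond, rt in rts_ms.items()}
facilitation = means["neutral"] - means["congruent"]    # predicted to be > 0
interference = means["incongruent"] - means["neutral"]  # predicted to be > 0

print(f"facilitation: {facilitation:.1f} ms, interference: {interference:.1f} ms")
```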
Abstract:
BACKGROUND AND PURPOSE Reproducible segmentation of brain tumors on magnetic resonance images is an important clinical need. This study was designed to evaluate the reliability of a novel fully automated segmentation tool for brain tumor image analysis in comparison to manually defined tumor segmentations. METHODS We prospectively evaluated preoperative MR images from 25 glioblastoma patients. Two independent expert raters performed manual segmentations. Automatic segmentations were performed using the Brain Tumor Image Analysis software (BraTumIA). In order to study the different tumor compartments, we identified the complete tumor volume (TV; enhancing part plus non-enhancing part plus necrotic core of the tumor), TV+ (TV plus edema) and the contrast-enhancing tumor volume (CETV). We quantified the overlap between manual and automated segmentations by calculating diameter measurements as well as Dice coefficients, positive predictive values, sensitivity, relative volume error and absolute volume error. RESULTS Comparison of automated versus manual extraction of 2-dimensional diameter measurements showed no significant difference (p = 0.29). Comparison of automated versus manual volumetric segmentations showed significant differences for TV+ and TV (p < 0.05) but not for CETV (p > 0.05) with regard to the Dice overlap coefficients. Spearman's rank correlation coefficients (ρ) of TV+, TV and CETV showed highly significant correlations between automatic and manual segmentations. Tumor localization did not influence the accuracy of segmentation. CONCLUSIONS In summary, we demonstrated that BraTumIA supports radiologists and clinicians by providing accurate measures of cross-sectional diameter-based tumor extensions. The automated volume measurements were comparable to manual tumor delineations for CETV tumor volumes, and outperformed inter-rater variability for overlap and sensitivity.
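A minimal sketch of the agreement metrics named above (Dice coefficient, positive predictive value, sensitivity, and volume errors) for two binary segmentation masks; the function and variable names are illustrative and not part of BraTumIA:

```python
# Hypothetical sketch of the reported agreement metrics for two binary
# tumor masks; names are illustrative and not part of BraTumIA.
import numpy as np

def segmentation_agreement(auto_mask, manual_mask):
    """Compare an automatic and a manual binary segmentation (non-empty masks)."""
    a = np.asarray(auto_mask, dtype=bool)
    m = np.asarray(manual_mask, dtype=bool)
    tp = np.logical_and(a, m).sum()              # voxels labeled tumor by both
    dice = 2.0 * tp / (a.sum() + m.sum())        # overlap: 1.0 = perfect agreement
    ppv = tp / a.sum()                           # positive predictive value
    sensitivity = tp / m.sum()                   # recall against the manual mask
    rel_vol_err = (a.sum() - m.sum()) / m.sum()  # signed relative volume error
    abs_vol_err = abs(int(a.sum()) - int(m.sum()))  # absolute volume error (voxels)
    return {"dice": dice, "ppv": ppv, "sensitivity": sensitivity,
            "relative_volume_error": rel_vol_err,
            "absolute_volume_error": abs_vol_err}
```

Multiplying the voxel counts by the scan's voxel volume converts the two volume-error terms from voxels to cubic millimeters.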