37 results for MULTISENSORY
Abstract:
Multisensory and sensorimotor integration is usually considered to occur in the superior colliculus and cerebral cortex, but few studies have proposed that the thalamus is involved in these integrative processes. We investigated whether the organization of the thalamocortical (TC) systems for different modalities partly overlaps, providing anatomical support for multisensory and sensorimotor interplay in the thalamus. In 2 macaque monkeys, 6 neuroanatomical tracers were injected in the rostral and caudal auditory cortex, posterior parietal cortex (PE/PEa in area 5), and dorsal and ventral premotor cortical areas (PMd, PMv), demonstrating the existence of overlapping territories of thalamic projections to areas of different modalities (sensory and motor). TC projections, distinct from those arising from specific unimodal sensory nuclei, were observed from the motor thalamus to PE/PEa or auditory cortex and from the sensory thalamus to PMd/PMv. The central lateral nucleus and the mediodorsal nucleus project to all injected areas, but the most significant overlap across modalities was found in the medial pulvinar nucleus. The present results demonstrate the presence of thalamic territories integrating different sensory modalities with motor attributes. Based on the divergent/convergent pattern of TC and corticothalamic projections, 4 distinct mechanisms of multisensory and sensorimotor interplay are proposed.
Abstract:
Previous research has provided inconsistent results regarding the spatial modulation of auditory-somatosensory interactions. The present study reports three experiments designed to investigate the nature of these interactions in the space close to the head. Human participants made speeded detection responses to unimodal auditory, somatosensory, or simultaneous auditory-somatosensory stimuli. In Experiment 1, electrocutaneous stimuli were presented to either earlobe, while auditory stimuli were presented from the same versus opposite sides, and from one of two distances (20 vs. 70 cm) from the participant's head. The results demonstrated a spatial modulation of auditory-somatosensory interactions when auditory stimuli were presented close to the head. In Experiment 2, electrocutaneous stimuli were delivered to the hands, which were placed either close to or far from the head, while the auditory stimuli were again presented at one of two distances. The results revealed that the spatial modulation observed in Experiment 1 was specific to the particular body part stimulated (the head) rather than to the region of space (i.e., around the head) where the stimuli were presented. The results of Experiment 3 demonstrate that sounds containing high-frequency components are particularly effective in eliciting this auditory-somatosensory spatial effect. Taken together, these findings help to resolve inconsistencies in the previous literature and suggest that auditory-somatosensory multisensory integration is modulated by the stimulated body surface and the acoustic spectra of the stimuli presented.
Abstract:
Recent multisensory research has emphasized the occurrence of early, low-level interactions in humans. As such, it is proving increasingly necessary to also consider the kinds of information likely extracted from the unisensory signals that are available at the time and location of these interaction effects. This review addresses current evidence regarding how the spatio-temporal brain dynamics of auditory information processing likely curtails the information content of multisensory interactions observable in humans at a given latency and within a given brain region. First, we consider the time course of signal propagation as a limitation on when auditory information (of any kind) can impact the responsiveness of a given brain region. Next, we overview the dual pathway model for the treatment of auditory spatial and object information ranging from rudimentary to complex environmental stimuli. These dual pathways are considered an intrinsic feature of auditory information processing, which are not only partially distinct in their associated brain networks, but also (and perhaps more importantly) manifest only after several tens of milliseconds of cortical signal processing. This architecture of auditory functioning would thus pose a constraint on when and in which brain regions specific spatial and object information are available for multisensory interactions. We then separately consider evidence regarding mechanisms and dynamics of spatial and object processing with a particular emphasis on when discriminations along either dimension are likely performed by specific brain regions. We conclude by discussing open issues and directions for future research.
Abstract:
Sensory information can interact to impact perception and behavior. Foods are appreciated according to their appearance, smell, taste and texture. Athletes and dancers combine visual, auditory, and somatosensory information to coordinate their movements. Under laboratory settings, detection and discrimination are likewise facilitated by multisensory signals. Research over the past several decades has shown that the requisite anatomy exists to support interactions between sensory systems in regions canonically designated as exclusively unisensory in their function and, more recently, that neural response interactions occur within these same regions, including even primary cortices and thalamic nuclei, at early post-stimulus latencies. Here, we review evidence concerning direct links between early, low-level neural response interactions and behavioral measures of multisensory integration.
Abstract:
Research into the anatomical substrates and "principles" for integrating inputs from separate sensory surfaces has yielded divergent findings. This suggests that multisensory integration is flexible and context dependent and underlines the need for dynamically adaptive neuronal integration mechanisms. We propose that flexible multisensory integration can be explained by a combination of canonical, population-level integrative operations, such as oscillatory phase resetting and divisive normalization. These canonical operations subsume multisensory integration into a fundamental set of principles as to how the brain integrates all sorts of information, and they are being used proactively and adaptively. We illustrate this proposition by unifying recent findings from different research themes such as timing, behavioral goal, and experience-related differences in integration.
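The abstract above names divisive normalization as one of the canonical, population-level integrative operations. As a rough illustration of what such an operation computes, the following minimal Python sketch implements a simplified form of the standard divisive-normalization equation (response = gain x drive / (semi-saturation + pooled drive)); the function name, parameters, and values are illustrative assumptions, not taken from the cited work.

    import numpy as np

    def divisive_normalization(drives, sigma=1.0, gamma=1.0):
        """Simplified divisive normalization: each unit's drive is divided by
        the pooled drive of the population plus a semi-saturation constant."""
        drives = np.asarray(drives, dtype=float)
        return gamma * drives / (sigma + drives.sum())

    # Example: an auditory and a tactile drive normalized by the pooled activity.
    print(divisive_normalization([2.0, 6.0]))  # the stronger input dominates after pooling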
Abstract:
Simple reaction times (RTs) to auditory-somatosensory (AS) multisensory stimuli are facilitated over their unisensory counterparts both when stimuli are delivered to the same location and when separated. In two experiments we addressed the possibility that top-down and/or task-related influences can dynamically impact the spatial representations mediating these effects and the extent to which multisensory facilitation will be observed. Participants performed a simple detection task in response to auditory, somatosensory, or simultaneous AS stimuli that in turn were either spatially aligned or misaligned by lateralizing the stimuli. Additionally, we also informed the participants that they would be retrogradely queried (one-third of trials) regarding the side where a given stimulus in a given sensory modality was presented. In this way, we sought to have participants attending to all possible spatial locations and sensory modalities, while nonetheless having them perform a simple detection task. Experiment 1 provided no cues prior to stimulus delivery. Experiment 2 included spatially uninformative cues (50% of trials). In both experiments, multisensory conditions significantly facilitated detection RTs with no evidence for differences according to spatial alignment (though general benefits of cuing were observed in Experiment 2). Facilitated detection occurs even when attending to spatial information. Performance with probes, quantified using sensitivity (d'), was impaired following multisensory trials in general and significantly more so following misaligned multisensory trials. This indicates that spatial information is not available, despite being task-relevant. The collective results support a model wherein early AS interactions may result in a loss of spatial acuity for unisensory information.
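Probe performance in this study is quantified with the sensitivity index d'. As a hedged sketch of how d' is conventionally computed from hit and false-alarm rates (not necessarily the authors' exact procedure), the Python snippet below applies the standard z(hit rate) - z(false-alarm rate) formula with a common correction for extreme rates; all counts in the example are invented.

    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """Sensitivity index d' = z(hit rate) - z(false-alarm rate).
        The log-linear correction keeps rates away from 0 and 1."""
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # Hypothetical probe counts following aligned vs. misaligned multisensory trials.
    print(d_prime(hits=40, misses=10, false_alarms=12, correct_rejections=38))
    print(d_prime(hits=32, misses=18, false_alarms=20, correct_rejections=30))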
Abstract:
Several lines of research have documented early-latency non-linear response interactions between audition and touch in humans and non-human primates. That these effects have been obtained under anesthesia, during passive stimulation, and in speeded reaction-time tasks suggests that some multisensory effects do not directly influence behavioral outcome. We investigated whether the initial non-linear neural response interactions have a direct bearing on the speed of reaction times. Electrical neuroimaging analyses were applied to event-related potentials in response to auditory, somatosensory, or simultaneous auditory-somatosensory multisensory stimulation that were in turn averaged according to trials leading to fast and slow reaction times (using a median split of individual subject data for each experimental condition). Responses to multisensory stimulus pairs were contrasted with each unisensory response as well as with the summed responses from the constituent unisensory conditions. Behavioral analyses indicated that neural response interactions were only implicated in the case of trials producing fast reaction times, as evidenced by facilitation in excess of probability summation. In agreement, supra-additive non-linear neural response interactions between the multisensory response and the sum of the constituent unisensory responses were evident over the 40-84 ms post-stimulus period only when reaction times were fast, whereas subsequent effects (86-128 ms) were observed independently of reaction time speed. Distributed source estimations further revealed that these earlier effects followed from supra-additive modulation of activity within posterior superior temporal cortices. These results indicate the behavioral relevance of early multisensory phenomena.
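Two analysis steps described above lend themselves to a short illustration: splitting each condition's single-trial reaction times at the median, and testing for facilitation in excess of probability summation (Miller's race-model inequality). The Python sketch below shows generic versions of both steps on simulated data; it is an assumed, simplified rendering, not the authors' actual pipeline.

    import numpy as np

    def median_split(rts):
        """Split one condition's single-trial RTs into fast and slow halves."""
        rts = np.asarray(rts, dtype=float)
        med = np.median(rts)
        return rts[rts <= med], rts[rts > med]

    def race_model_violation(rt_a, rt_s, rt_as, probs=np.linspace(0.05, 0.95, 19)):
        """Miller's race-model inequality: without integration,
        P(RT_AS <= t) should not exceed P(RT_A <= t) + P(RT_S <= t).
        Positive return values indicate facilitation beyond probability summation."""
        def cdf(rts, t):
            return np.mean(np.asarray(rts) <= t)
        ts = np.quantile(np.asarray(rt_as, dtype=float), probs)
        return np.array([cdf(rt_as, t) - min(1.0, cdf(rt_a, t) + cdf(rt_s, t)) for t in ts])

    # Simulated RTs (ms): multisensory trials are faster on average.
    rng = np.random.default_rng(0)
    rt_a, rt_s = rng.normal(320, 40, 200), rng.normal(330, 40, 200)
    rt_as = rng.normal(285, 35, 200)
    fast, slow = median_split(rt_as)
    print(race_model_violation(rt_a, rt_s, rt_as).max())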
Abstract:
Multisensory processes facilitate perception of currently-presented stimuli and can likewise enhance later object recognition. Memories for objects originally encountered in a multisensory context can be more robust than those for objects encountered in an exclusively visual or auditory context [1], overturning the assumption that memory performance is best when encoding and recognition contexts remain constant [2]. Here, we used event-related potentials (ERPs) to provide the first evidence for direct links between multisensory brain activity at one point in time and subsequent object discrimination abilities. Across two experiments we found that individuals showing a benefit and those impaired during later object discrimination could be predicted by their brain responses to multisensory stimuli upon their initial encounter. These effects were observed despite the multisensory information being meaningless, task-irrelevant, and presented only once. We provide critical insights into the advantages associated with multisensory interactions; they are not limited to the processing of current stimuli, but likewise encompass the ability to determine the benefit of one's memories for object recognition in later, unisensory contexts.
Abstract:
Multisensory memory traces established via single-trial exposures can impact subsequent visual object recognition. This impact appears to depend on the meaningfulness of the initial multisensory pairing, implying that multisensory exposures establish distinct object representations that are accessible during later unisensory processing. Multisensory contexts may be particularly effective in influencing auditory discrimination, given the purportedly inferior recognition memory in this sensory modality. The possibility of such generalization, and the equivalence of effects when memory discrimination was performed in the visual vs. auditory modality, were the focus of this study. First, we demonstrate that visual object discrimination is affected by the context of prior multisensory encounters, replicating and extending previous findings by controlling for the probability of multisensory contexts during initial as well as repeated object presentations. Second, we provide the first evidence that single-trial multisensory memories impact subsequent auditory object discrimination. Auditory object discrimination was enhanced when initial presentations entailed semantically congruent multisensory pairs and was impaired after semantically incongruent multisensory encounters, compared to sounds that had been encountered only in a unisensory manner. Third, the impact of single-trial multisensory memories upon unisensory object discrimination was greater when the task was performed in the auditory vs. visual modality. Fourth, there was no evidence for a correlation between the effects of past multisensory experiences on visual and auditory processing, suggestive of largely independent object processing mechanisms between modalities. We discuss these findings in terms of the conceptual short-term memory (CSTM) model and predictive coding. Our results suggest differential recruitment and modulation of conceptual memory networks according to the sensory task at hand.
Abstract:
This study analyzed high-density event-related potentials (ERPs) within an electrical neuroimaging framework to provide insights regarding the interaction between multisensory processes and stimulus probabilities. Specifically, we identified the spatiotemporal brain mechanisms by which the proportion of temporally congruent and task-irrelevant auditory information influences stimulus processing during a visual duration discrimination task. The spatial position (top/bottom) of the visual stimulus was indicative of how frequently the visual and auditory stimuli would be congruent in their duration (i.e., context of congruence). Stronger influences of irrelevant sound were observed when contexts associated with a high proportion of auditory-visual congruence repeated and also when contexts associated with a low proportion of congruence switched. Context of congruence and context transition resulted in weaker brain responses at 228 to 257 ms poststimulus to conditions giving rise to larger behavioral cross-modal interactions. Importantly, a control oddball task revealed that both congruent and incongruent audiovisual stimuli triggered equivalent non-linear multisensory interactions when congruence was not a relevant dimension. Collectively, these results are well explained by statistical learning, which links a particular context (here: a spatial location) with a certain level of top-down attentional control that further modulates cross-modal interactions based on whether a particular context repeated or changed. The current findings shed new light on the importance of context-based control over multisensory processing, whose influences multiplex across finer and broader time scales.
Abstract:
In the Morris water maze (MWM) task, proprioceptive information is likely to have poor accuracy due to movement inertia. Hence, in this condition, dynamic visual information, which signals linear and angular acceleration, would play a critical role in spatial navigation. To investigate this assumption we compared rats' spatial performance in the MWM and in the homing hole board (HB) tasks using a 1.5 Hz stroboscopic illumination. In the MWM, rats trained in the stroboscopic condition needed more time than those trained in a continuous light condition to reach the hidden platform. They also showed little accuracy during the probe trial. In the HB task, in contrast, place learning remained unaffected by the stroboscopic light condition. The deficit in the MWM was thus complete, affecting both escape latency and discrimination of the reinforced area, and was task specific. This dissociation confirms that dynamic visual information is crucial to spatial navigation in the MWM, whereas spatial navigation on solid ground is mediated by multisensory integration and is thus less dependent on visual information.
Abstract:
An object's motion relative to an observer can confer ethologically meaningful information. Approaching or looming stimuli can signal threats/collisions to be avoided or prey to be confronted, whereas receding stimuli can signal successful escape or failed pursuit. Using movement detection and subjective ratings, we investigated the multisensory integration of looming and receding auditory and visual information by humans. While prior research has demonstrated a perceptual bias for unisensory and more recently multisensory looming stimuli, none has investigated whether there is integration of looming signals between modalities. Our findings reveal selective integration of multisensory looming stimuli. Performance was significantly enhanced for looming stimuli over all other multisensory conditions. Contrasts with static multisensory conditions indicate that only multisensory looming stimuli resulted in facilitation beyond that induced by the sheer presence of auditory-visual stimuli. Controlling for variation in physical energy replicated the advantage for multisensory looming stimuli. Finally, only looming stimuli exhibited a negative linear relationship between enhancement indices for detection speed and for subjective ratings. Maximal detection speed was attained when motion perception was already robust under unisensory conditions. The preferential integration of multisensory looming stimuli highlights that complex ethologically salient stimuli likely require synergistic cooperation between existing principles of multisensory integration. A new conceptualization of the neurophysiologic mechanisms mediating real-world multisensory perceptions and action is therefore supported.
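The enhancement indices relating multisensory performance to the best unisensory performance, and the negative linear relationship reported between the detection-speed and rating indices, can be illustrated with a small Python sketch. The percent-gain formula below is a common convention and the numbers are invented; the study's exact index definition may differ.

    import numpy as np

    def enhancement_index(multisensory, best_unisensory):
        """Percent multisensory enhancement relative to the best unisensory value.
        For reaction times (lower = better), pass detection speeds such as 1/RT."""
        return 100.0 * (multisensory - best_unisensory) / best_unisensory

    # Invented per-subject detection speeds (1/RT, in 1/s) and subjective motion ratings.
    speed_ms, speed_uni = np.array([3.6, 3.9, 3.3, 3.7]), np.array([3.2, 3.8, 2.9, 3.5])
    rating_ms, rating_uni = np.array([6.1, 5.0, 6.5, 5.8]), np.array([5.5, 4.9, 5.6, 5.4])
    ei_speed = enhancement_index(speed_ms, speed_uni)
    ei_rating = enhancement_index(rating_ms, rating_uni)
    # A negative correlation here would mirror the relationship reported in the abstract.
    print(np.corrcoef(ei_speed, ei_rating)[0, 1])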
Abstract:
BACKGROUND: The assessment of physical activity and energy expenditure is relevant to the care of maintenance haemodialysis (MHD) patients. In the current study, we aimed to evaluate measurements of physical activity and energy expenditure in MHD patients from different centres and countries and explored the predictors of physical activity in these patients. METHODS: In this cross-sectional multicentre study, 134 MHD patients from four countries (France, Switzerland, Sweden and Brazil) were included. Physical activity was evaluated for 5.0 ± 1.4 days (mean ± SD) by a multisensory device (SenseWear Armband) and comprised the assessment of the number of steps per day, activity-related energy expenditure (activity-related EE) and physical activity level (PAL). RESULTS: The number of steps per day, activity-related EE and PAL of the MHD patients were compatible with a sedentary lifestyle. In addition, all parameters were significantly lower on dialysis days than on non-dialysis days (P < 0.001). The multivariate regression analysis revealed that diabetes and higher body mass index (BMI) predicted a lower PAL, and that older age and diabetes predicted a reduced number of steps. CONCLUSIONS: The physical activity parameters of MHD patients were compatible with a sedentary lifestyle. This inactivity was worsened by older age, diabetes and higher BMI. Our results indicate that MHD patients should be encouraged by the health care team to increase their physical activity.
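For readers unfamiliar with the two energy-expenditure measures, the short Python sketch below shows how physical activity level (PAL) and activity-related EE are commonly derived from total and resting energy expenditure. These are the usual textbook definitions and the numbers are invented; the SenseWear device's internal algorithms are proprietary and may differ.

    def physical_activity_level(total_ee_kcal, resting_ee_kcal):
        """PAL as commonly defined: total energy expenditure divided by resting EE."""
        return total_ee_kcal / resting_ee_kcal

    def activity_related_ee(total_ee_kcal, resting_ee_kcal):
        """Activity-related EE approximated as total minus resting expenditure."""
        return total_ee_kcal - resting_ee_kcal

    # Invented example for one monitoring day in a sedentary patient:
    print(physical_activity_level(2100, 1500))  # 1.4, in the sedentary range
    print(activity_related_ee(2100, 1500))      # 600 kcal/day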
Abstract:
Introduction: Neuroimaging of the self has focused on high-level mechanisms such as language, memory or imagery of the self. Recent evidence suggests that low-level mechanisms of multisensory and sensorimotor integration may play a fundamental role in encoding self-location and the first-person perspective (Blanke and Metzinger, 2009). Neurological patients with out-of-body experiences (OBE) suffer from abnormal self-location and first-person perspective due to damage in the temporo-parietal junction (Blanke et al., 2004). Although self-location and the first-person perspective can be studied experimentally (Lenggenhager et al., 2009), the neural underpinnings of self-location have yet to be investigated. To investigate the brain network involved in self-location and first-person perspective we used visuo-tactile multisensory conflict, magnetic resonance (MR)-compatible robotics, and fMRI in study 1, and lesion analysis in a sample of 9 patients with OBE due to focal brain damage in study 2. Methods: Twenty-two participants saw a video showing either a person's back or an empty room being stroked (visual stimuli) while the MR-compatible robotic device stroked their back (tactile stimulation). Direction and speed of the seen stroking could either correspond (synchronous) or not (asynchronous) to those of the felt stroking. Each run comprised the four conditions according to a 2x2 factorial design with Object (Body, No-Body) and Synchrony (Synchronous, Asynchronous) as main factors. Self-location was estimated using the mental ball dropping task (MBD; Lenggenhager et al., 2009). After the fMRI session participants completed a 6-item questionnaire adapted from the original questionnaire created by Botvinick and Cohen (1998) and based on questions and data obtained by Lenggenhager et al. (2007, 2009). They were also asked to complete a questionnaire to disclose the perspective they adopted during the illusion. Response times (RTs) for the MBD and fMRI data were analyzed with a 3-way mixed-model ANOVA with the between-subjects factor Perspective (up, down) and the two within-subjects factors Object (body, no-body) and Stroking (synchronous, asynchronous). Quantitative lesion analysis was performed using MRIcron (Rorden et al., 2007). We compared the distributions of brain lesions confirmed by multimodality imaging (Knowlton, 2004) in patients with OBE with those of patients showing complex visual hallucinations involving people or faces, but without any disturbance of self-location and first-person perspective. Nine patients with OBE were investigated. The control group comprised 8 patients. Structural imaging data were available for normalization and co-registration in all the patients. Normalization of each patient's lesion into the common MNI (Montreal Neurological Institute) reference space permitted simple, voxel-wise, algebraic comparisons to be made. Results: Even though all participants were lying on their backs in the scanner and facing upwards, analysis of perspective showed that half of the participants had the impression of looking down at the virtual human body below them, regardless of any cues about their actual body position (Down-group). The other participants had the impression of looking up at the virtual body above them (Up-group). Analysis of Q3 ("How strong was the feeling that the body you saw was you?") indicated stronger self-identification with the virtual body during synchronous stroking. RTs in the MBD task confirmed these subjective data (significant 3-way interaction between perspective, object and stroking).
fMRI results showed eight cortical regions where the BOLD signal was significantly different during at least one of the conditions resulting from the combination of Object and Stroking, relative to baseline: right and left temporo-parietal junction, right EBA (extrastriate body area), left middle occipito-temporal gyrus, left postcentral gyrus, right medial parietal lobe, and bilateral medial occipital lobe (Fig 1). The activation patterns in the right and left temporo-parietal junction and right EBA reflected changes in self-location and perspective, as revealed by statistical analysis performed on the percentage of BOLD change with respect to the baseline. Statistical lesion overlap comparison (using nonparametric voxel-based lesion-symptom mapping) with respect to the control group revealed the right temporo-parietal junction, centered on the angular gyrus (Talairach coordinates x = 54, y = -52, z = 26; p < 0.05, FDR corrected). Conclusions: The present questionnaire and behavioural results show that, despite the noisy and constraining MR environment, our participants had predictable changes in self-location, self-identification, and first-person perspective when robotic tactile stroking was applied synchronously with the seen stroking. fMRI data in healthy participants and lesion data in patients with abnormal self-location and first-person perspective jointly revealed that the temporo-parietal cortex, especially in the right hemisphere, encodes these conscious experiences. We argue that temporo-parietal activity reflects the experience of the conscious "I" as embodied and localized within bodily space.
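The lesion comparison described above relied on nonparametric voxel-based lesion-symptom mapping in MRIcron. As a much-simplified, assumed illustration of the underlying voxel-wise algebraic comparison of lesion maps normalized to a common (MNI) space, the Python sketch below subtracts the per-voxel lesion frequency of the control group from that of the OBE group; it omits the statistical (nonparametric, FDR-corrected) machinery used in the study.

    import numpy as np

    def lesion_overlap_difference(obe_masks, control_masks):
        """Voxel-wise comparison of binary lesion masks already normalized to a
        common space: lesion frequency in the OBE group minus that in controls.
        Positive voxels are lesioned more often in patients with OBE."""
        obe_freq = np.mean(np.stack(obe_masks).astype(float), axis=0)
        ctl_freq = np.mean(np.stack(control_masks).astype(float), axis=0)
        return obe_freq - ctl_freq

    # Toy binary masks (real masks would live on an MNI grid, e.g. 91 x 109 x 91 voxels).
    rng = np.random.default_rng(1)
    obe_masks = [rng.integers(0, 2, (4, 4, 4)) for _ in range(9)]      # 9 OBE patients
    control_masks = [rng.integers(0, 2, (4, 4, 4)) for _ in range(8)]  # 8 controls
    print(lesion_overlap_difference(obe_masks, control_masks).max())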
Abstract:
Neuroimaging of the self has focused on high-level mechanisms such as language, memory or imagery of the self and has implicated widely distributed brain networks. Yet recent evidence suggests that low-level mechanisms such as multisensory and sensorimotor integration may play a fundamental role in self-related processing. In the present study we used visuotactile multisensory conflict, robotics, virtual reality, and fMRI to study such low-level mechanisms by experimentally inducing changes in self-location. Participants saw a video of a person's back (body) or an empty room (no-body) being stroked while an MR-compatible robotic device stroked their back. The latter tactile input was synchronous or asynchronous with respect to the seen stroking. Self-location was estimated behaviorally, confirming previous data that self-location differed only between the two body conditions. fMRI results showed a bilateral activation of the temporo-parietal cortex with a significantly higher BOLD signal increase in the synchronous/body condition with respect to the other conditions. Sensorimotor cortex and the extrastriate body area were also activated. We argue that temporo-parietal activity reflects the experience of the conscious 'I' as embodied and localized within bodily space, compatible with clinical data in neurological patients with out-of-body experiences.