992 results for Multisensory integration


Relevance: 100.00%

Abstract:

"This letter aims to highlight the multisensory integration weighting mechanisms that may account for the results in studies investigating haptic feedback in laparoscopic surgery. The current lack of multisensory theoretical knowledge in laparoscopy is evident, and “a much better understanding of how multimodal displays in virtual environments influence human performance is required” ...publisher website

Relevance: 100.00%

Abstract:

A recent report in Consciousness and Cognition provided evidence from a study of the rubber hand illusion (RHI) that supports the multisensory principle of inverse effectiveness (PoIE). I describe two methods of assessing the principle of inverse effectiveness ('a priori' and 'post-hoc'), and discuss how the post-hoc method is affected by the statistical artefact of 'regression towards the mean'. I identify several cases where this artefact may have affected particular conclusions about the PoIE, and relate these to the historical origins of 'regression towards the mean'. Although the conclusions of the recent report may not have been grossly affected, some of the inferential statistics were almost certainly biased by the methods used. I conclude that, unless such artefacts are fully dealt with in the future, and unless the statistical methods for assessing the PoIE evolve, strong evidence in support of the PoIE will remain lacking.
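The post-hoc artefact described above is easy to reproduce in simulation. The sketch below (a minimal illustration of the statistical point, not the report's analysis) bins trials as 'weak' or 'strong' by a noisy first measurement; on an independent remeasure the weak bin improves and the strong bin worsens, mimicking inverse effectiveness even though every trial has the same true strength.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Every simulated trial has the same true strength, so any
    # 'inverse effectiveness' found below is pure artefact.
    true_strength, noise = 0.5, 0.2
    measure1 = true_strength + rng.normal(0, noise, n)  # used to bin trials
    measure2 = true_strength + rng.normal(0, noise, n)  # independent remeasure

    weak = measure1 < np.median(measure1)

    # The 'weak' bin regresses upward and the 'strong' bin downward,
    # although nothing about the trials changed between measurements.
    print("weak bin:  ", measure1[weak].mean(), "->", measure2[weak].mean())
    print("strong bin:", measure1[~weak].mean(), "->", measure2[~weak].mean())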

Relevance: 100.00%

Abstract:

The ability to integrate sensory inputs deriving from different sensory modalities, but related to the same external event, into a unified percept is called multisensory integration, and it may represent an efficient mechanism of sensory compensation when a sensory modality is damaged by a cortical lesion. This hypothesis is discussed in the present dissertation.

Experiment 1 explored the role of the superior colliculus (SC) in multisensory integration by testing patients with collicular lesions, patients with subcortical lesions not involving the SC, and healthy control subjects on a multisensory task. Paralleling evidence from animal studies, patients with collicular lesions demonstrated a loss of multisensory enhancement, in contrast with control subjects, providing the first lesion evidence in humans of the essential role of the SC in mediating audio-visual integration.

Experiment 2 investigated the role of the cortex in mediating multisensory integrative effects by inducing virtual lesions with inhibitory theta-burst stimulation over temporo-parietal, occipital, and posterior parietal cortex, demonstrating that only temporo-parietal cortex was causally involved in modulating the integration of audio-visual stimuli at the same spatial location.

Given the involvement of the retino-colliculo-extrastriate pathway in mediating audio-visual integration, the functional sparing of this circuit in hemianopic patients is highly relevant for a multisensory-based approach to the recovery of unisensory deficits. Experiment 3 demonstrated the spared functional activity of this circuit in a group of hemianopic patients, revealing implicit recognition of the fearful content of unseen visual stimuli (i.e., affective blindsight), an ability mediated by the retino-colliculo-extrastriate pathway and its connections with the amygdala.

Finally, Experiment 4 provided evidence that systematic audio-visual stimulation induces long-lasting clinical improvements in patients with visual field defects, and that activity in the spared retino-colliculo-extrastriate pathway is responsible for the observed clinical amelioration, as suggested by the greater improvement, on tasks highly demanding in terms of spatial orienting, in patients with cortical lesions limited to the occipital cortex compared with patients with lesions extending to other cortical areas.

Overall, the present results indicate that multisensory integration is mediated by the retino-colliculo-extrastriate pathway and that systematic audio-visual stimulation, by activating this spared neural circuit, can affect orientation towards the blind field in hemianopic patients and might therefore constitute an effective and innovative approach to the rehabilitation of unisensory visual impairments.

Relevance: 100.00%

Abstract:

Out-of-body experiences (OBEs) are illusory perceptions of one's body from an elevated disembodied perspective. Recent theories postulate a double disintegration process in the personal (visual, proprioceptive and tactile disintegration) and extrapersonal (visual and vestibular disintegration) space as the basis of OBEs. Here we describe a case which corroborates and extends this hypothesis. The patient suffered from peripheral vestibular damage and presented with OBEs and lucid dreams. Analysis of the patient's behaviour revealed a failure of visuo-vestibular integration and abnormal sensitivity to visuo-tactile conflicts that have previously been shown to experimentally induce out-of-body illusions (in healthy subjects). In light of these experimental findings and the patient's symptomatology we extend an earlier model of the role of vestibular signals in OBEs. Our results advocate the involvement of subcortical bodily mechanisms in the occurrence of OBEs.

Relevance: 70.00%

Abstract:

Balance maintenance relies on a complex interplay between many different sensory modalities. Although optimal multisensory processing is thought to decline with ageing, inefficient integration is particularly associated with falls in older adults. We investigated whether improved balance control, following a novel balance training intervention, was associated with more efficient multisensory integration in older adults, particularly those who have fallen in the past. Specifically, 76 healthy and fall-prone older adults were allocated to either a balance training programme conducted over 5 weeks or to a passive control condition. Balance training involved a VR display in which the on-screen position of a target object was controlled by shifts in postural balance on a Wii balance board. Susceptibility to the sound-induced flash illusion, before and after the intervention (or control condition), was used as a measure of multisensory function. Whilst balance and postural control improved for all participants assigned to the Intervention group, improved functional balance was correlated with more efficient multisensory processing in the fall-prone older adults only. Our findings add to growing evidence suggesting important links between balance control and multisensory interactions in the ageing brain and have implications for the development of interventions designed to reduce the risk of falls.

Relevance: 70.00%

Abstract:

Dyslexic children, besides difficulties in mastering literacy, also show poor postural control, which might be related to how sensory cues coming from different sensory channels are integrated into proper motor activity. The aim of this study was therefore to examine the relationship between sensory information and body sway in dyslexic children, with visual and somatosensory information manipulated independently and concurrently. Thirty dyslexic and 30 non-dyslexic children were asked to stand as still as possible inside a moving room, with eyes either closed or open and either lightly touching a movable surface or not, for 60 seconds under five experimental conditions: (1) no vision and no touch; (2) moving room; (3) moving bar; (4) moving room and stationary touch; and (5) stationary room and moving bar. Body sway magnitude and the relationship between room/bar movement and body sway were examined. Results showed that dyslexic children swayed more than non-dyslexic children in all sensory conditions. Moreover, in trials with conflicting vision and touch manipulation, the sway of dyslexic children was less coherent with the stimulus manipulation than that of non-dyslexic children. Finally, dyslexic children showed higher body sway variability and applied greater force while touching the bar than non-dyslexic children. Based upon these results, we suggest that dyslexic children are able to use visual and somatosensory information to control their posture and use the same underlying neural control processes as non-dyslexic children. However, dyslexic children perform more poorly and more variably when relating visual and somatosensory information to motor action, even in a task that does not require active cognitive and motor involvement, and in sensory conflict conditions their body sway is less coherent and more variable. These results suggest that dyslexic children have difficulties in multisensory integration, possibly because they struggle to integrate sensory cues coming from multiple sources.
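The stimulus-sway relationship examined in such moving-room protocols is typically quantified with gain, phase, or spectral coherence between the room (or bar) displacement and the body sway signal. The sketch below illustrates a coherence computation on synthetic data; the 0.2 Hz driving frequency and all signal parameters are assumptions for illustration, not values from the study.

    import numpy as np
    from scipy.signal import coherence

    rng = np.random.default_rng(0)
    fs = 100.0                    # sampling rate (Hz), assumed
    t = np.arange(0, 60, 1 / fs)  # one 60 s trial, as in the protocol

    room = 0.5 * np.sin(2 * np.pi * 0.2 * t)        # 0.2 Hz room motion
    sway = 0.3 * np.sin(2 * np.pi * 0.2 * t + 0.6)  # sway follows with a lag
    sway += 0.2 * rng.standard_normal(t.size)       # idiosyncratic noise

    f, Cxy = coherence(room, sway, fs=fs, nperseg=1024)
    print(f"peak coherence {Cxy.max():.2f} at {f[np.argmax(Cxy)]:.2f} Hz")

Lower coherence at the driving frequency, as reported for the dyslexic group in conflict conditions, indicates sway that is less entrained to the stimulus.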

Relevance: 70.00%

Abstract:

Recent advances in the field of statistical learning have established that learners are able to track regularities of multimodal stimuli, yet it is unknown whether the statistical computations are performed on integrated representations or on separate, unimodal representations. In the present study, we investigated the ability of adults to integrate audio and visual input during statistical learning. We presented learners with a speech stream synchronized with a video of a speaker's face. In the critical condition, the visual (e.g., /gi/) and auditory (e.g., /mi/) signals were occasionally incongruent, which we predicted would produce the McGurk illusion, resulting in the perception of an audiovisual syllable (e.g., /ni/). In this way, we used the McGurk illusion to manipulate the underlying statistical structure of the speech streams, such that perception of these illusory syllables facilitated participants' ability to segment the speech stream. Our results therefore demonstrate that participants can integrate audio and visual input to perceive the McGurk illusion during statistical learning. We interpret our findings as support for modality-interactive accounts of statistical learning.
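Segmentation in this kind of statistical-learning paradigm is standardly modelled by tracking transitional probabilities between adjacent syllables, with word boundaries posited where the probability dips. The sketch below shows that computation on a toy stream; it is a generic illustration, not the study's materials or code.

    from collections import Counter

    def transitional_probabilities(syllables):
        """P(next | current) for each adjacent syllable pair; dips mark
        candidate word boundaries in statistical-learning models."""
        pair_counts = Counter(zip(syllables, syllables[1:]))
        first_counts = Counter(syllables[:-1])
        return {pair: c / first_counts[pair[0]]
                for pair, c in pair_counts.items()}

    # Toy stream built from two 'words' (bi-da-ku, pa-do-ti):
    stream = "bi da ku pa do ti pa do ti bi da ku bi da ku pa do ti".split()
    for pair, p in sorted(transitional_probabilities(stream).items()):
        print(pair, round(p, 2))  # within-word pairs 1.0, across-word lower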

Relevance: 70.00%

Abstract:

Comprehending speech is one of the most important human behaviors, but we are only beginning to understand how the brain accomplishes this difficult task. One key to speech perception seems to be that the brain integrates the independent sources of information available in the auditory and visual modalities in a process known as multisensory integration. This allows speech perception to be accurate even in environments in which one modality or the other is ambiguous in the context of noise. Previous electrophysiological and functional magnetic resonance imaging (fMRI) experiments have implicated the posterior superior temporal sulcus (STS) in auditory-visual integration of both speech and non-speech stimuli. While prior imaging studies have found increases in STS activity for audiovisual speech compared with unisensory auditory or visual speech, they do not provide a clear mechanism for how the STS communicates with early sensory areas to integrate the two streams of information into a coherent audiovisual percept. Furthermore, it is currently unknown whether activity within the STS is directly correlated with the strength of audiovisual perception. To better understand the cortical mechanisms that underlie audiovisual speech perception, we first studied STS activity and connectivity during the perception of speech with auditory and visual components of varying intelligibility. By studying fMRI activity during these noisy audiovisual speech stimuli, we found that STS connectivity with auditory and visual cortical areas mirrored perception: when the information from one modality is unreliable and noisy, the STS interacts less with the cortex processing that modality and more with the cortex processing the reliable information. We next characterized the role of STS activity during a striking audiovisual speech illusion, the McGurk effect, to determine whether activity within the STS predicts how strongly a person integrates auditory and visual speech information. Subjects with greater susceptibility to the McGurk effect exhibited stronger fMRI activation of the STS during perception of McGurk syllables, implying a direct correlation between the strength of audiovisual integration of speech and activity within the multisensory STS.

Relevance: 70.00%

Abstract:

Three studies investigated the relation between symbolic gestures and words, aiming to discover the neural basis and behavioural features of the lexical-semantic processing and integration of the two communicative signals.

The first study aimed to determine whether elaboration of communicative signals (symbolic gestures and words) is always accompanied by integration of the two and whether such integration, if present, supports the existence of a common control mechanism. Experiment 1 asked whether and how gesture is integrated with word. Participants performed a semantic priming paradigm with a lexical decision task, pronouncing a target word that was preceded by a meaningful or meaningless prime gesture. When meaningful, the gesture could be either congruent or incongruent with the word's meaning, and the duration of prime presentation (100, 250, or 400 ms) varied randomly. Voice spectra, lip kinematics, and response times were recorded and analysed. Formant 1 of the voice spectrum and mean lip velocity increased when the prime was meaningful and congruent with the word, as compared to a meaningless gesture; that is, parameters of voice and movement were magnified by congruence, but only when prime duration was 250 ms. Response times to meaningful gestures were shorter in the congruent than in the incongruent condition. Experiment 2 asked whether a prime word is integrated with a target word by a mechanism similar to that of a prime gesture. Formant 1 of the target word increased when the prime word was meaningful and congruent, as compared to a meaningless congruent prime; this increase, however, was present at all prime word durations.

In the second study, Experiment 3 asked whether comprehension of a symbolic prime gesture makes use of motor simulation. Transcranial magnetic stimulation was delivered to the left primary motor cortex 100, 250, or 500 ms after prime gesture presentation. The motor evoked potential of the first dorsal interosseus increased when stimulation occurred 100 ms post-stimulus; thus, the gesture was understood within 100 ms and integrated with the target word within 250 ms. Experiment 4 excluded any hand motor simulation in the comprehension of a prime word.

The third study investigated the effect of prior presentation of a symbolic gesture on the processing of a congruent target word. In Experiment 5, symbolic gestures were presented as primes, followed by semantically congruent target words or pseudowords. In this case, the lexical-semantic decision was accompanied by a motor simulation 100 ms after the onset of the verbal stimulus.

Summing up, the same type of integration with a word was present for both prime gestures and prime words. Integration was probably subsequent to comprehension of the signal, which relied on motor simulation for gestures and on direct access to semantics for words; however, words could be understood at the same motor level, through simulation, when preceded by an adequate gestural context. The results are discussed in the perspective of a continuum between transitive actions and emblems, in parallel with language; the grounded/symbolic content of the different signals evidences the relation between sensorimotor and linguistic systems, which may interact at different levels.

Relevance: 60.00%

Abstract:

To identify and categorize complex stimuli such as familiar objects or speech, the human brain integrates information that is abstracted at multiple levels from its sensory inputs. Using cross-modal priming for spoken words and sounds, this functional magnetic resonance imaging study identified three distinct classes of visuoauditory incongruency effects, selective for (1) spoken words in the left superior temporal sulcus (STS), (2) environmental sounds in the left angular gyrus (AG), and (3) both words and sounds in the lateral and medial prefrontal cortices (IFS/mPFC). From a cognitive perspective, these incongruency effects suggest that prior visual information influences the neural processes underlying speech and sound recognition at multiple levels, with the STS being involved in phonological, the AG in semantic, and the mPFC/IFS in higher conceptual processing. In terms of neural mechanisms, effective connectivity analyses (dynamic causal modeling) suggest that these incongruency effects may emerge via greater bottom-up effects from early auditory regions to intermediate multisensory integration areas (i.e., the STS and AG). This is consistent with a predictive coding perspective on hierarchical Bayesian inference in the cortex, where the domain of the prediction error (phonological vs. semantic) determines its regional expression (middle temporal gyrus/STS vs. AG/intraparietal sulcus).
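On the predictive-coding reading given above, a region's incongruency response is taken to reflect a precision-weighted prediction error between the top-down (here, visually primed) prediction and the bottom-up auditory input. The following is a deliberately minimal, hypothetical illustration of that quantity, not the study's model.

    def prediction_error_response(prediction, sensory_input, precision):
        """Precision-weighted prediction error: the quantity predictive-coding
        accounts take to drive a region's incongruency response."""
        return precision * (sensory_input - prediction)

    # Congruent prime: the visual prediction matches the sound -> small error.
    print(prediction_error_response(prediction=0.9, sensory_input=1.0, precision=2.0))
    # Incongruent prime: the prediction mismatches the sound -> large error.
    print(prediction_error_response(prediction=0.1, sensory_input=1.0, precision=2.0))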

Relevance: 60.00%

Abstract:

The role of the inferior colliculus in various auditory processes remains poorly understood in humans. Using behavioural and electrophysiological assessments, the aim of these studies was to examine the functional integrity of the auditory nervous system in a person with a unilateral lesion of the inferior colliculus. The results of these studies suggest that the inferior colliculus is not involved in the detection of pure tones, the recognition of speech in quiet, or binaural interaction. However, the data suggest that the inferior colliculus is involved in the recognition of monaurally presented words in noise, frequency discrimination, duration recognition, binaural separation, binaural integration, sound-source localization and, finally, multisensory integration of speech.

Relevance: 60.00%

Abstract:

Master's thesis digitized by the Division de la gestion de documents et des archives de l'Université de Montréal.

Relevance: 60.00%

Abstract:

Doctoral thesis completed under joint supervision (cotutelle) with the Université catholique de Louvain, Belgium (Faculté de médecine, Institut de Neuroscience).