989 results for visual integration
Abstract:
Synesthesia entails a special kind of sensory perception, in which stimulation of one sensory modality leads to an internally generated perceptual experience in another, non-stimulated sensory modality. The phenomenon can be viewed as an abnormal multisensory integration process, since the synesthetic percept is aberrantly fused with that of the stimulated modality. Indeed, recent synesthesia research has focused on multimodal processing even outside the specific synesthesia-inducing context and has revealed altered multimodal integration, suggesting perceptual alterations at a global level. Here, we focused on audio-visual processing in synesthesia using a semantic classification task with animate and inanimate objects presented either visually or audio-visually, in congruent and incongruent audio-visual combinations. Fourteen subjects with auditory-visual and/or grapheme-color synesthesia and 14 control subjects participated in the experiment. During stimulus presentation, event-related potentials were recorded from 32 electrodes. Analysis of reaction times and error rates revealed no group differences, with the best performance for audio-visually congruent stimulation, indicating the well-known multimodal facilitation effect. We found an enhanced amplitude of the N1 component over occipital electrode sites in synesthetes compared to controls. This difference occurred irrespective of the experimental condition and therefore suggests a global influence on early sensory processing in synesthetes.
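As a rough illustration of the occipital N1 measure described in this abstract: a component amplitude of this kind is typically quantified as the trial-averaged voltage in a fixed latency window over the electrodes of interest. The sketch below is a minimal example of that computation, not the authors' pipeline; the array layout, the 80-130 ms window, and the group comparison are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def n1_mean_amplitude(epochs, times, occ_idx, window=(0.08, 0.13)):
    """Mean N1 amplitude: trial-averaged voltage in a latency window,
    averaged over occipital channels.

    epochs  : array (n_trials, n_channels, n_samples), baseline-corrected EEG
    times   : array (n_samples,) of epoch times in seconds
    occ_idx : indices of occipital electrodes (assumed montage)
    window  : illustrative N1 latency window in seconds
    """
    erp = epochs.mean(axis=0)                         # average over trials
    mask = (times >= window[0]) & (times <= window[1])
    return erp[np.ix_(occ_idx, mask)].mean()          # mean over channels and window

# Hypothetical group comparison (one value per subject):
# amps_syn = [n1_mean_amplitude(e, times, occ_idx) for e in synesthete_epochs]
# amps_ctl = [n1_mean_amplitude(e, times, occ_idx) for e in control_epochs]
# t, p = stats.ttest_ind(amps_syn, amps_ctl)
```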
Abstract:
Previous studies have shown that the human posterior cingulate contains a visual processing area selective for optic flow (CSv). However, other studies in both humans and monkeys have identified a somatotopic motor region (CMA) at the same location. Taken together, these findings suggested that the posterior cingulate might contain a single visuomotor integration region. To test this idea, we used fMRI to identify both the visual and the motor areas of the posterior cingulate in the same brains and to test the activity of those regions during a visuomotor task. The results indicated that, rather than a single visuomotor region, the posterior cingulate contains adjacent but separate motor and visual regions. CSv lies in the fundus of the cingulate sulcus, while CMA lies in the dorsal bank of the sulcus, slightly superior in terms of stereotaxic coordinates. A surprising and novel finding was that activity in CSv was suppressed during the visuomotor task, despite the visual stimulus being identical to that used to localize the region. This may provide an important clue to the specific role this region plays in using optic flow to control self-motion.
Abstract:
The purpose of this study was to examine the effects of visual and somatosensory information on body sway in individuals with Down syndrome (DS). Nine adults with DS (19-29 years old) and nine control subjects (CS) (19-29 years old) stood upright under four experimental conditions: no vision and no touch; vision and no touch; no vision and touch; and vision and touch. In the vision conditions, participants looked at a target placed in front of them; in the no vision conditions, participants wore a black cotton mask. In the touch conditions, participants touched a stationary surface with their right index finger; in the no touch conditions, participants kept their arms hanging alongside their bodies. A force plate was used to estimate center of pressure excursion in both the anterior-posterior and medial-lateral directions. MANOVA revealed that both the individuals with DS and the control subjects used vision and touch to reduce overall body sway, although the individuals with DS still oscillated more than the CS did. These results indicate that adults with DS are able to use sensory information to reduce body sway, and they demonstrate that there is no difference in sensory integration between the individuals with DS and the CS.
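For readers unfamiliar with the measure: a center-of-pressure (CoP) excursion like the one above is conventionally derived from the force plate's vertical force and moment signals. A minimal sketch under the common flat-plate convention follows; the signal names and the RMS summary are illustrative choices, not necessarily those of the study.

```python
import numpy as np

def center_of_pressure(Fz, Mx, My):
    """CoP trajectory from force-plate signals (flat plate, origin at the
    plate surface), using the common convention CoPx = -My/Fz, CoPy = Mx/Fz.

    Fz     : vertical ground-reaction force over time (N)
    Mx, My : moments about the plate's x and y axes (N*m)
    """
    return -My / Fz, Mx / Fz          # CoP coordinates in metres

def sway_rms(cop):
    """RMS excursion about the mean position -- one simple sway summary
    per direction (anterior-posterior or medial-lateral)."""
    cop = np.asarray(cop)
    return np.sqrt(np.mean((cop - cop.mean()) ** 2))
```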
Abstract:
Background: Aging is characterized by a decline in postural control performance, which depends on a coherent and stable coupling between sensory information and motor action. Therefore, changes in postural control in older adults may be related to changes in this coupling. In addition, physical activity appears to improve postural control performance in older adults; these improvements may be due to changes in the coupling between sensory information and motor action underlying postural control. Objective: The purpose of this study was to examine the coupling between visual information and body sway in active and sedentary older adults. Methods: Sixteen sedentary older adults (SE), 16 active older adults (AE), and 16 young adults (YA) were asked to stand upright inside a moving room in two experimental conditions: (1) discrete movement and (2) continuous movement of the room. Results: In the continuous condition, the coupling between the movement of the room and body sway was stronger and more stable for SE and AE than for YA. In the discrete condition, SE showed larger body displacement than AE and YA. Conclusions: SE have more difficulty discriminating and integrating sensory information than AE and YA, indicating that physical activity may improve sensory integration. Copyright (C) 2005 S. Karger AG, Basel.
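The "coupling" between room motion and body sway in such moving-room paradigms is often quantified by coherence and gain at the driving frequency. The sketch below shows one plausible way to compute these with scipy; the sampling rate, segment length, and signal names are assumptions.

```python
import numpy as np
from scipy.signal import coherence, csd, welch

def room_sway_coupling(room, sway, drive_freq, fs=100.0, nperseg=1024):
    """Coherence and gain between room displacement and body sway at the
    frequency of the continuous room movement.

    room, sway : 1-D displacement signals sampled at fs (Hz)
    drive_freq : frequency of the room oscillation (Hz)
    """
    f, coh = coherence(room, sway, fs=fs, nperseg=nperseg)
    _, pxy = csd(room, sway, fs=fs, nperseg=nperseg)   # cross-spectrum
    _, pxx = welch(room, fs=fs, nperseg=nperseg)       # room auto-spectrum
    gain = np.abs(pxy) / pxx              # frequency-response magnitude
    k = np.argmin(np.abs(f - drive_freq)) # bin nearest the driving frequency
    return coh[k], gain[k]
```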
Abstract:
The maintenance of a given body orientation is achieved through a complex relation between sensory information and muscle activity. Therefore, the purpose of this study was to review the role of visual, somatosensory, vestibular, and auditory information in the maintenance and control of posture. Method: a search for papers from the last 24 years was performed in the PubMed and CAPES databases, using the following keywords: postural control, sensory information, vestibular system, visual system, somatosensory system, auditory system, and haptic system. Results: the influence of each sensory system, and of their integration, on the maintenance and control of posture was analyzed. Conclusion: the literature shows that there is redundancy in the information provided by the sensory channels; thus, the central nervous system selects the main source for postural control.
Abstract:
Dyslexic children, besides difficulties in mastering literacy, also show poor postural control, which might be related to how sensory cues coming from different sensory channels are integrated into proper motor activity. Therefore, the aim of this study was to examine the relationship between sensory information and body sway, with visual and somatosensory information manipulated independently and concurrently, in dyslexic children. Thirty dyslexic and 30 non-dyslexic children were asked to stand as still as possible inside a moving room, either with eyes closed or open and either lightly touching a movable surface or not, for 60 seconds under five experimental conditions: (1) no vision and no touch; (2) moving room; (3) moving bar; (4) moving room and stationary touch; and (5) stationary room and moving bar. Body sway magnitude and the relationship between room/bar movement and body sway were examined. Results showed that dyslexic children swayed more than non-dyslexic children in all sensory conditions. Moreover, in trials with conflicting vision and touch manipulation, dyslexic children's sway was less coherent with the stimulus manipulation than that of non-dyslexic children. Finally, dyslexic children showed higher body sway variability and applied greater force while touching the bar than non-dyslexic children did. Based upon these results, we suggest that dyslexic children are able to use visual and somatosensory information to control their posture and rely on the same underlying neural control processes as non-dyslexic children. However, dyslexic children perform more poorly and more variably when relating visual and somatosensory information to motor action, even in a task that does not require active cognitive and motor involvement. Further, in sensory conflict conditions, dyslexic children showed less coherent and more variable body sway. These results suggest that dyslexic children have difficulties in multisensory integration, struggling to combine sensory cues coming from multiple sources. © 2013 Viana et al.
Abstract:
For students with special educational needs to participate actively at school, effective and systematic investment involving the school community as a whole is required. The occupational therapist is one of the professionals who can facilitate this process of student inclusion. This study aimed to discuss the effects of an occupational therapy intervention with two children with disabilities, included in regular education, who presented deficits in visual perceptual skills, motor coordination, and visual-motor integration. The Beery-Buktenica Developmental Test of Visual-Motor Integration was used to evaluate visual perceptual skills, motor coordination, and visual-motor integration. Because deficits were identified in the functions investigated, an occupational therapy intervention program was designed to improve performance in these functions. After the program, the test was reapplied. The results pointed to an improvement in all functions considered deficient, highlighting the importance of training to improve performance in the abilities evaluated.
Abstract:
Postural sway variability was evaluated in Parkinson's disease (PD) patients at different stages of the disease. Twenty PD patients were divided into two groups (unilateral, 14; bilateral, 6) according to disease severity. The results showed no significant differences in postural sway variability between the groups (p ≥ 0.05). Postural sway variability was higher in the antero-posterior direction and with the eyes closed. Significant differences between the unilateral and bilateral groups were observed in clinical tests (UPDRS, Berg Balance Scale, and retropulsion test; p ≤ 0.05 for all). Postural sway variability was unaffected by disease severity, indicating that the neurological mechanisms for postural control still function at advanced stages of the disease. Postural sway instability appears to occur in the antero-posterior direction to compensate for the stooped posture. The eyes-closed condition during upright stance appears challenging for PD patients because of an associated sensory integration deficit. Finally, objective measures such as postural sway variability may be more reliable than clinical tests for evaluating changes in balance control in PD patients.
Abstract:
Cognitive functioning is based on binding processes, by which different features and elements of neurocognition are integrated and coordinated. Binding is an essential ingredient of, for instance, Gestalt perception. We implemented a paradigm of causality perception based on the work of Albert Michotte, in which two identical discs move from opposite sides of a monitor, steadily toward and then past one another. Their coincidence generates an ambiguous percept of either "streaming" or "bouncing," which the subjects (34 schizophrenia spectrum patients and 34 controls, mean age 27.9 years) were instructed to report. The latter percept is a marker of the binding processes underlying perceived causality (type I binding). In addition to this visual task, acoustic stimuli were presented at different times during the task (150 ms before and after visual coincidence), which can modulate perceived causality. This modulation by intersensory, temporally delayed stimuli is viewed as a different type of binding (type II). Using a mixed-effects hierarchical analysis, we show that type II binding distinguishes schizophrenia spectrum patients from healthy controls, whereas type I binding does not. Type I binding may even be excessive in some patients, especially those with positive symptoms; type II binding, however, was generally attenuated in patients. The present findings point to ways in which the disconnection (or Gestalt) hypothesis of schizophrenia can be refined, suggesting more specific markers of neurocognitive functioning and potential targets of treatment.
Abstract:
Primate multisensory object perception involves distributed brain regions. To investigate the network character of these regions in the human brain, we applied data-driven group spatial independent component analysis (ICA) to a functional magnetic resonance imaging (fMRI) data set acquired during a passive audio-visual (AV) experiment with common object stimuli. We labeled three group-level independent component (IC) maps as auditory (A), visual (V), and AV, based on their spatial layouts and activation time courses. The overlap between these IC maps served as the definition of a distributed network of multisensory candidate regions, including superior temporal, ventral occipito-temporal, posterior parietal, and prefrontal regions. In an independent second fMRI experiment, we explicitly tested their involvement in AV integration. Activations in nine of these twelve regions met the max-criterion (A < AV > V) for multisensory integration. Comparison of this approach with a general linear model-based region-of-interest definition revealed its complementary value for multisensory neuroimaging. In conclusion, we estimated functional networks of uni- and multisensory functional connectivity from one dataset and validated their functional roles in an independent dataset. These findings demonstrate the particular value of ICA for multisensory neuroimaging research, and of using independent datasets to test hypotheses generated from a data-driven analysis.
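The max-criterion named in this abstract requires the audio-visual response to exceed both unimodal responses (AV > A and AV > V). A schematic per-region check is sketched below, with hypothetical per-subject response estimates; the paired one-sided t-tests are an illustrative choice (the `alternative` argument requires scipy >= 1.6).

```python
from scipy import stats

def meets_max_criterion(beta_a, beta_v, beta_av, alpha=0.05):
    """Conjunction test that AV exceeds BOTH unimodal responses in a region.

    beta_a, beta_v, beta_av : arrays (n_subjects,) of response estimates
    for the auditory, visual, and audio-visual conditions in one region.
    """
    _, p_a = stats.ttest_rel(beta_av, beta_a, alternative='greater')
    _, p_v = stats.ttest_rel(beta_av, beta_v, alternative='greater')
    return (p_a < alpha) and (p_v < alpha)   # region passes the max-criterion
```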
Abstract:
Gamma-band zero-lag phase synchronization has been measured in the animal brain during visual binding. Human scalp EEG studies have used a phase-locking factor (trial-to-trial phase-shift consistency) or gamma amplitude to measure binding, but have so far not analyzed common-phase signals. This study introduces a method to identify networks oscillating with near zero-lag phase synchronization in human subjects.
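The phase-locking factor has a standard definition: the magnitude of the mean unit phase vector across trials, PLF(t) = |(1/N) Σ_n exp(iφ_n(t))|, which equals 1 when the phase is identical on every trial. A minimal sketch, assuming narrow-band (e.g. gamma-filtered) single-trial signals:

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_factor(trials):
    """Trial-to-trial phase consistency of a narrow-band signal.

    trials : array (n_trials, n_samples), e.g. gamma-band-filtered EEG
    Returns PLF(t) in [0, 1] for every sample.
    """
    phases = np.angle(hilbert(trials, axis=-1))       # instantaneous phase
    return np.abs(np.exp(1j * phases).mean(axis=0))   # |mean unit vector|
```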
Abstract:
Speech is typically a multimodal phenomenon, yet few studies have focused on the exclusive contribution of visual cues to language acquisition. To address this gap, we investigated whether visual prosodic information can facilitate speech segmentation. Previous research has demonstrated that language learners can use lexical stress and pitch cues to segment speech and that learners can extract this information from talking faces. Thus, we created an artificial speech stream that contained minimal segmentation cues and paired it with two synchronous facial displays in which visual prosody was either informative or uninformative for identifying word boundaries. Across three familiarisation conditions (audio stream alone, facial streams alone, and paired audiovisual), learning occurred only when the facial displays were informative about word boundaries, suggesting that facial cues can help learners solve the early challenges of language acquisition.
Abstract:
Recent advances in the field of statistical learning have established that learners are able to track regularities of multimodal stimuli, yet it is unknown whether the statistical computations are performed on integrated representations or on separate, unimodal representations. In the present study, we investigated the ability of adults to integrate audio and visual input during statistical learning. We presented learners with a speech stream synchronized with a video of a speaker's face. In the critical condition, the visual (e.g., /gi/) and auditory (e.g., /mi/) signals were occasionally incongruent, which we predicted would produce the McGurk illusion, resulting in the perception of an audiovisual syllable (e.g., /ni/). In this way, we used the McGurk illusion to manipulate the underlying statistical structure of the speech streams, such that perception of these illusory syllables facilitated participants' ability to segment the speech stream. Our results therefore demonstrate that participants can integrate audio and visual input to perceive the McGurk illusion during statistical learning. We interpret our findings as support for modality-interactive accounts of statistical learning.
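Segmentation in such artificial speech streams is standardly modeled through transitional probabilities between adjacent syllables, TP(y|x) = freq(xy)/freq(x): within-word transitions are high, across-boundary transitions low. A small illustrative sketch with a hypothetical stream:

```python
import random
from collections import Counter

def transitional_probabilities(syllables):
    """TP(y | x) = count(x followed by y) / count(x) for a syllable stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    base_counts = Counter(syllables[:-1])
    return {(x, y): n / base_counts[x] for (x, y), n in pair_counts.items()}

# Hypothetical stream built from two 'words' in random order: within-word
# transitions (e.g. 'gi'->'mi') always co-occur, so their TP is 1.0, while
# transitions across word boundaries hover around 0.5.
words = [['gi', 'mi', 'ku'], ['pa', 'do', 'ti']]
stream = [syl for w in random.choices(words, k=100) for syl in w]
tps = transitional_probabilities(stream)
```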
Abstract:
Edges are crucial for the formation of coherent objects from sequential sensory inputs within a single modality. Moreover, temporally coincident boundaries of perceptual objects across different sensory modalities facilitate crossmodal integration. Here, we used functional magnetic resonance imaging in order to examine the neural basis of temporal edge detection across modalities. Onsets of sensory inputs are not only related to the detection of an edge but also to the processing of novel sensory inputs. Thus, we used transitions from input to rest (offsets) as convenient stimuli for studying the neural underpinnings of visual and acoustic edge detection per se. We found, besides modality-specific patterns, shared visual and auditory offset-related activity in the superior temporal sulcus and insula of the right hemisphere. Our data suggest that right hemispheric regions known to be involved in multisensory processing are crucial for detection of edges in the temporal domain across both visual and auditory modalities. This operation is likely to facilitate cross-modal object feature binding based on temporal coincidence. Hum Brain Mapp, 2008. (c) 2008 Wiley-Liss, Inc.
Abstract:
The integration of the auditory modality in virtual reality environments is known to promote sensations of immersion and presence. However, psychophysics studies have also shown that auditory-visual interactions obey complex rules and that multisensory conflicts may disrupt the participant's engagement with the presented virtual scene. It is thus important to measure the accuracy of the auditory spatial cues reproduced by the auditory display and their consistency with the visual spatial cues. This study evaluates auditory localization performance under various unimodal and auditory-visual bimodal conditions in a virtual reality (VR) setup using a stereoscopic display and binaural reproduction over headphones, in static conditions. The auditory localization performance observed in the present study is in line with that reported in real conditions, suggesting that VR gives rise to consistent auditory and visual spatial cues. These results validate the use of VR for future psychophysics experiments with auditory and visual stimuli. They also emphasize the importance of spatially accurate auditory and visual rendering for VR setups.
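Localization performance in such experiments is commonly summarized by the angular error between target and response directions. As one example of such a summary (not necessarily the metric used in this study), the great-circle angle can be computed from azimuth/elevation pairs:

```python
import numpy as np

def angular_error_deg(az_t, el_t, az_r, el_r):
    """Great-circle angle (degrees) between a target direction (az_t, el_t)
    and a response direction (az_r, el_r), both given in degrees."""
    az_t, el_t, az_r, el_r = np.radians([az_t, el_t, az_r, el_r])
    cos_d = (np.sin(el_t) * np.sin(el_r)
             + np.cos(el_t) * np.cos(el_r) * np.cos(az_t - az_r))
    # clip guards against rounding just outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cos_d, -1.0, 1.0)))
```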