923 results for visual motor integration


Relevance: 40.00%

Abstract:

Integration of inputs by cortical neurons provides the basis for the complex information processing performed in the cerebral cortex. Here, we propose a new analytic framework for understanding integration within cortical neuronal receptive fields. Based on the synaptic organization of cortex, we argue that neuronal integration is a systems-level process better studied in terms of local cortical circuitry than at the level of single neurons, and we present a method for constructing self-contained modules which capture (nonlinear) local circuit interactions. In this framework, receptive field elements naturally have a dual (rather than the traditional unitary) influence, since they drive both excitatory and inhibitory cortical neurons. This vector-based analysis, in contrast to scalar approaches, greatly simplifies integration by permitting linear summation of inputs from both "classical" and "extraclassical" receptive field regions. We illustrate this by explaining two complex visual cortical phenomena which are incompatible with scalar notions of neuronal integration.
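
A minimal numerical sketch of the vector-style summation described above, assuming hypothetical two-component weights (excitatory drive, inhibitory drive) for each receptive-field element; it illustrates the general idea of linear vector summation across classical and extraclassical regions, not the authors' actual model.

```python
import numpy as np

# Each receptive-field element drives both excitatory and inhibitory cortical
# populations, so its influence is a 2-vector (E-drive, I-drive) rather than
# a single signed scalar.  All weights and stimulus values are hypothetical.
rf_weights = {
    "classical_center":   np.array([1.0, 0.3]),
    "classical_surround": np.array([0.4, 0.6]),
    "extraclassical":     np.array([0.2, 0.5]),
}
stimulus_contrast = {
    "classical_center": 0.8,
    "classical_surround": 0.5,
    "extraclassical": 0.9,
}

# Linear (vector) summation across classical and extraclassical regions.
total = sum(stimulus_contrast[name] * w for name, w in rf_weights.items())
e_drive, i_drive = total

# A local-circuit module would convert the summed drive into spiking output;
# a rectified E-I difference stands in for that nonlinear step here.
response = max(e_drive - i_drive, 0.0)
print(f"E-drive={e_drive:.2f}, I-drive={i_drive:.2f}, response={response:.2f}")
```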

Relevance: 40.00%

Abstract:

Defensive behaviors, such as withdrawing your hand to avoid potentially harmful approaching objects, rely on rapid sensorimotor transformations between visual and motor coordinates. We examined the reference frame for coding visual information about objects approaching the hand during motor preparation. Subjects performed a simple visuomanual task while a task-irrelevant distractor ball rapidly approached a location either near to or far from their hand. After the appearance of the distractor ball, single pulses of transcranial magnetic stimulation were delivered over the subject's primary motor cortex, eliciting motor evoked potentials (MEPs) in their responding hand. MEP amplitude was reduced when the ball approached a location near the responding hand, both when the hand was to the left and to the right of the midline. Strikingly, this suppression occurred very early, at 70-80 ms after ball appearance, and was not modified by visual fixation location. Furthermore, it was selective for approaching balls, since static visual distractors did not modulate MEP amplitude. Together with additional behavioral measurements, we provide converging evidence for automatic hand-centered coding of visual space in the human brain.
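
The hand-centered coding suggested above can be illustrated by re-expressing the approaching object's position relative to the hand rather than to the fixation point; the coordinates, the "near" radius and the helper function below are hypothetical, for illustration only.

```python
import numpy as np

def to_hand_centered(object_pos, hand_pos):
    """Re-express an object position (given in eye/screen coordinates) relative to the hand."""
    return np.asarray(object_pos) - np.asarray(hand_pos)

# Hypothetical tabletop positions in centimetres (x, y).
ball = np.array([12.0, 30.0])
hands = {"left hand": np.array([-20.0, 25.0]), "right hand": np.array([15.0, 25.0])}

for label, hand in hands.items():
    offset = to_hand_centered(ball, hand)
    near = np.linalg.norm(offset) < 10.0   # arbitrary "near the hand" radius
    print(f"{label}: hand-centered offset = {offset}, near = {near}")
```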

Relevance: 40.00%

Abstract:

Event-related desynchronization (ERD) of the electroencephalogram (EEG) recorded over the motor cortex is associated with the execution, observation, and mental imagery of motor tasks. Generation of ERD by motor imagery (MI) has been widely used for brain-computer interfaces (BCIs) linked to neuroprosthetics and other motor assistance devices. Control of MI-based BCIs can be acquired through neurofeedback training to reliably induce MI-associated ERD. To develop more effective training conditions, we investigated the effect of static and dynamic visual representations of target movements (a picture of forearms or a video clip of hand grasping movements) during BCI training. After four consecutive training days, the group that performed MI while viewing the video showed significant improvement in generating MI-associated ERD compared with the group that viewed the static image. This result suggests that passively observing the target movement during MI improves the associated mental imagery and enhances MI-based BCI skills.
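
ERD is commonly quantified as the change in band power during imagery relative to a resting baseline; the sketch below follows that convention, with an assumed sampling rate, mu-band limits and synthetic signals rather than details from the study.

```python
import numpy as np
from scipy.signal import welch

fs = 250.0  # sampling rate in Hz (assumed)

def mu_band_power(segment, fs, band=(8.0, 13.0)):
    """Mean mu-band power of an EEG segment, estimated with Welch's method."""
    freqs, psd = welch(segment, fs=fs, nperseg=int(fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

rng = np.random.default_rng(0)
rest_eeg = rng.standard_normal(int(5 * fs))            # baseline segment (synthetic)
imagery_eeg = 0.7 * rng.standard_normal(int(5 * fs))   # MI segment with reduced mu power (synthetic)

p_ref = mu_band_power(rest_eeg, fs)
p_mi = mu_band_power(imagery_eeg, fs)

# ERD in percent: negative values indicate desynchronization relative to rest.
erd = (p_mi - p_ref) / p_ref * 100.0
print(f"ERD = {erd:.1f}%")
```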

Relevance: 40.00%

Abstract:

Cognitive experiments involving motor execution (ME) and motor imagery (MI) have been intensively studied using functional magnetic resonance imaging (fMRI). However, the functional networks of a multitask paradigm that includes both ME and MI have not been widely explored. In this article, we aimed to investigate the functional networks involved in MI and ME using a method combining hierarchical clustering analysis (HCA) with independent component analysis (ICA). Ten right-handed subjects were recruited to participate in a multitask experiment with visual-cue, MI, ME and rest conditions. The results showed four activation clusters, comprising parts of the visual network, the ME network, the MI network and parts of the resting-state network. Furthermore, the integration among these functional networks was also revealed. The findings further demonstrate that the combined HCA-ICA approach is an effective method for analyzing fMRI data from multitask paradigms.
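
A schematic of the combined ICA-plus-hierarchical-clustering idea on synthetic data; the matrix dimensions, the FastICA implementation and the average-linkage choice are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
n_timepoints, n_voxels, n_components = 200, 500, 10

# Synthetic fMRI-like data matrix: time points x voxels (stand-in for real scans).
data = rng.standard_normal((n_timepoints, n_voxels))

# ICA step: decompose the data into independent components, each with a
# time course (the sources) and a spatial map (a column of the mixing matrix).
ica = FastICA(n_components=n_components, random_state=0)
time_courses = ica.fit_transform(data)   # shape (time, components)
spatial_maps = ica.mixing_.T             # shape (components, voxels)

# HCA step: group components whose time courses behave similarly, approximating
# functional networks (e.g. visual, ME, MI and resting-state clusters).
corr = np.corrcoef(time_courses.T)
condensed = (1.0 - corr)[np.triu_indices(n_components, k=1)]
tree = linkage(condensed, method="average")
labels = fcluster(tree, t=4, criterion="maxclust")
print("cluster label per component:", labels)
```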

Relevance: 40.00%

Abstract:

In visual tracking experiments, distributions of the relative phase between target and tracer showed a positive relative phase, indicating that the tracer precedes the target position. We found a mode transition from the reactive to the anticipatory mode. The proposed integrated model provides a framework for understanding this anticipatory behaviour in humans, focusing on the integration of visual and somatosensory information. The time delays in visual processing and somatosensory feedback are treated explicitly in the simultaneous differential equations. The anticipatory behaviour observed in the visual tracking experiments can be explained by the feedforward term of target velocity, the internal dynamics, and the time delay in somatosensory feedback.
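
A toy discrete-time simulation of the kind of model described above: the tracer is driven by a delayed visual error plus a feedforward term proportional to target velocity, with a separate delay on somatosensory (velocity) feedback; all gains and delay values are made up, not the fitted parameters of the proposed model.

```python
import numpy as np

dt = 0.01                                      # integration step (s)
t = np.arange(0.0, 10.0, dt)
target = np.sin(2 * np.pi * 0.5 * t)           # 0.5 Hz sinusoidal target
target_vel = np.gradient(target, dt)

d_vis = int(0.15 / dt)                         # visual processing delay (assumed 150 ms)
d_som = int(0.05 / dt)                         # somatosensory feedback delay (assumed 50 ms)
k_err, k_ff, k_damp = 10.0, 1.0, 6.0           # hypothetical gains

pos = np.zeros_like(t)
vel = np.zeros_like(t)
for i in range(1, len(t)):
    j_vis = max(i - d_vis, 0)                  # index of the delayed visual information
    j_som = max(i - d_som, 0)                  # index of the delayed somatosensory information
    acc = (k_err * (target[j_vis] - pos[j_vis])    # delayed visual position error
           + k_ff * target_vel[j_vis]              # feedforward target-velocity term
           - k_damp * vel[j_som])                  # delayed somatosensory (velocity) feedback
    vel[i] = vel[i - 1] + acc * dt
    pos[i] = pos[i - 1] + vel[i] * dt

# The sign of the tracer-target lag distinguishes reactive (lag) from anticipatory (lead) tracking.
xcorr = np.correlate(pos - pos.mean(), target - target.mean(), mode="full")
lag = np.argmax(xcorr) - (len(t) - 1)
state = "leads" if lag < 0 else "lags"
print(f"tracer {state} the target by {abs(lag) * dt * 1000:.0f} ms")
```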

Relevance: 40.00%

Abstract:

What is already known on the subject? Multi-sensory treatment approaches have been shown to positively impact outcome measures such as accuracy of speech movement patterns and speech intelligibility in adults with motor speech disorders, as well as in children with apraxia of speech, autism and cerebral palsy. However, there has been no empirical study using multi-sensory treatment for children with speech sound disorders (SSDs) who demonstrate motor control issues in the jaw and orofacial structures (e.g. jaw sliding, jaw overextension, inadequate lip rounding/retraction and decreased integration of speech movements). What this paper adds? Findings from this study indicate that, for speech production disorders where both the planning and production of spatiotemporal parameters of movement sequences for speech are disrupted, multi-sensory treatment programmes that integrate auditory, visual and tactile-kinesthetic information improve auditory and visual accuracy of speech production. Both the training words (practised in treatment) and the test words (not practised in treatment) demonstrated positive change in most participants, indicating generalization of target features to untrained words. It is inferred that treatment that focuses on integrating multi-sensory information and normalizing parameters of speech movements is an effective method for treating children with SSDs who demonstrate speech motor control issues.

Relevance: 40.00%

Abstract:

The purpose of this study was to examine the effects of visual and somatosensory information on body sway in individuals with Down syndrome (DS). Nine adults with DS (19-29 years old) and nine control subjects (CS) (19-29 years old) stood upright in four experimental conditions: no vision and no touch; vision and no touch; no vision and touch; and vision and touch. In the vision condition, participants looked at a target placed in front of them; in the no vision condition, participants wore a black cotton mask. In the touch condition, participants touched a stationary surface with their right index finger; in the no touch condition, participants kept their arms hanging alongside their bodies. A force plate was used to estimate center of pressure excursion in both the anterior-posterior and medial-lateral directions. MANOVA revealed that both the individuals with DS and the control subjects used vision and touch to reduce overall body sway, although individuals with DS still oscillated more than the CS. These results indicate that adults with DS are able to use sensory information to reduce body sway, and they demonstrate that there is no difference in sensory integration between the individuals with DS and the CS.
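
Center-of-pressure excursion is typically derived from the force plate's vertical force and moment channels; the sketch below uses the standard relations with synthetic signals, not the equipment or data from the study.

```python
import numpy as np

def cop_excursion(fz, mx, my):
    """Centre-of-pressure excursion from force-plate signals.

    Uses the standard relations COP_ml = -My / Fz and COP_ap = Mx / Fz
    (plate origin at the surface centre; shear terms ignored for simplicity).
    """
    cop_ml = -my / fz
    cop_ap = mx / fz
    return cop_ap.max() - cop_ap.min(), cop_ml.max() - cop_ml.min()

# Synthetic 30 s quiet-stance recording at 100 Hz (all values illustrative).
rng = np.random.default_rng(2)
n = 3000
fz = 700.0 + rng.normal(0.0, 5.0, n)   # vertical force (N), roughly a 70 kg person
mx = rng.normal(0.0, 3.0, n)           # frontal-plane moment (N*m)
my = rng.normal(0.0, 3.0, n)           # sagittal-plane moment (N*m)

ap, ml = cop_excursion(fz, mx, my)
print(f"AP excursion: {ap * 100:.1f} cm, ML excursion: {ml * 100:.1f} cm")
```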

Relevance: 40.00%

Abstract:

Background: Aging is characterized by a decline in postural control performance, which is based on a coherent and stable coupling between sensory information and motor action. Therefore, changes in postural control in elderly people can be related to changes in this coupling. In addition, physical activity seems to improve postural control performance in elderly people, and these improvements may be due to changes in the coupling between sensory information and motor action related to postural control. Objective: The purpose of this study was to verify the coupling between visual information and body sway in active and sedentary elderly people. Methods: Sixteen sedentary elderly (SE), 16 active elderly (AE) and 16 young adults (YA) were asked to stand upright inside a moving room in two experimental conditions: (1) discrete movement and (2) continuous movement of the room. Results: In the continuous condition, the coupling between the movement of the room and body sway was stronger and more stable for SE and AE compared to YA. In the discrete condition, SE showed larger body displacement compared to AE and YA. Conclusions: SE have more difficulty discriminating and integrating sensory information than AE and YA, indicating that physical activity may improve sensory integration. Copyright (C) 2005 S. Karger AG, Basel.
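
The coupling between room motion and body sway in the continuous condition is often summarized by the gain and phase of sway relative to the room displacement at the driving frequency; the sketch below assumes a sinusoidally moving room and synthetic sway signals.

```python
import numpy as np
from scipy.signal import csd, welch

fs = 100.0                    # sampling rate (Hz), assumed
f_drive = 0.2                 # continuous room-motion frequency (Hz), assumed
t = np.arange(0.0, 60.0, 1.0 / fs)

rng = np.random.default_rng(3)
room = 0.01 * np.sin(2 * np.pi * f_drive * t)          # room displacement (m)
sway = (0.008 * np.sin(2 * np.pi * f_drive * t - 0.4)
        + 0.002 * rng.standard_normal(t.size))         # synthetic body sway (m)

# Cross-spectrum between stimulus and response: gain and phase at the driving
# frequency summarize how strongly and how promptly sway follows the room.
freqs, pxy = csd(room, sway, fs=fs, nperseg=2048)
_, pxx = welch(room, fs=fs, nperseg=2048)
idx = np.argmin(np.abs(freqs - f_drive))

gain = np.abs(pxy[idx]) / pxx[idx]
phase = np.degrees(np.angle(pxy[idx]))
print(f"gain at {f_drive} Hz: {gain:.2f}, phase: {phase:.1f} deg")
```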

Relevance: 40.00%

Abstract:

Dyslexic children, besides difficulties in mastering literacy, also show poor postural control, which might be related to how sensory cues coming from different sensory channels are integrated into proper motor activity. Therefore, the aim of this study was to examine the relationship between sensory information and body sway, with visual and somatosensory information manipulated independently and concurrently, in dyslexic children. Thirty dyslexic and 30 non-dyslexic children were asked to stand as still as possible inside a moving room, either with eyes closed or open and either lightly touching a movable surface or not, for 60 seconds under five experimental conditions: (1) no vision and no touch; (2) moving room; (3) moving bar; (4) moving room and stationary touch; and (5) stationary room and moving bar. Body sway magnitude and the relationship between room/bar movement and body sway were examined. Results showed that dyslexic children swayed more than non-dyslexic children in all sensory conditions. Moreover, in trials with conflicting vision and touch manipulation, the sway of dyslexic children was less coherent with the stimulus manipulation compared to that of non-dyslexic children. Finally, dyslexic children showed higher body sway variability and applied higher force while touching the bar compared to non-dyslexic children. Based upon these results, we suggest that dyslexic children are able to use visual and somatosensory information to control their posture and use the same underlying neural control processes as non-dyslexic children. However, dyslexic children show poorer performance and more variability when relating visual and somatosensory information to motor action, even during a task that does not require active cognitive and motor involvement. Further, in sensory conflict conditions, dyslexic children showed less coherent and more variable body sway. These results suggest that dyslexic children have difficulties in multisensory integration, struggling to combine sensory cues coming from multiple sources. © 2013 Viana et al.

Relevance: 40.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 40.00%

Abstract:

The aim of the present study was to determine the effects of motor practice on visual judgments of apertures for wheelchair locomotion, and on the visual control of wheelchair locomotion, in individuals with no prior wheelchair experience. Sixteen young adults, divided into motor practice and control groups, visually judged varying apertures as passable or impassable under walking, pre-practice, and post-practice conditions. The motor practice group underwent additional motor practice in 10 blocks of five trials each, moving the wheelchair through different apertures. The relative perceptual boundary was determined from the judgment data, and kinematic variables were calculated from videos of the motor practice trials. The participants overestimated the space needed under the walking condition and underestimated it under the wheelchair conditions, independent of group. The accuracy of judgments improved from the pre-practice to the post-practice condition in both groups. During motor practice, the participants adaptively modulated wheelchair locomotion, adjusting it to the available apertures. The present findings, from both a priori visual judgments of space and the continuous judgments required to approach and pass through apertures in a wheelchair, appear to support the dissociation between processes of perception and action.
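
One common way to estimate a relative perceptual boundary from passable/impassable judgments is to fit a psychometric function to the proportion of "passable" responses across aperture-to-wheelchair-width ratios and take its 50% point; the logistic form and the data below are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x50, slope):
    """Probability of judging an aperture passable, as a function of the
    aperture-to-wheelchair-width ratio."""
    return 1.0 / (1.0 + np.exp(-slope * (x - x50)))

# Hypothetical judgment data: ratios presented and proportion judged passable.
ratios = np.array([0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5])
p_passable = np.array([0.0, 0.1, 0.3, 0.7, 0.9, 1.0, 1.0])

(x50, slope), _ = curve_fit(logistic, ratios, p_passable, p0=[1.2, 10.0])

# The 50% crossover is the relative perceptual boundary; comparing it across
# pre-practice and post-practice conditions shows whether judgments recalibrated.
print(f"perceptual boundary: aperture = {x50:.2f} x wheelchair width")
```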

Relevance: 40.00%

Abstract:

The ability to integrate into a unified percept sensory inputs that derive from different sensory modalities but relate to the same external event is called multisensory integration, and it might represent an efficient mechanism of sensory compensation when a sensory modality is damaged by a cortical lesion. This hypothesis is discussed in the present dissertation. Experiment 1 explored the role of the superior colliculus (SC) in multisensory integration, testing patients with collicular lesions, patients with subcortical lesions not involving the SC, and healthy control subjects in a multisensory task. The results revealed that patients with collicular lesions, paralleling the evidence from animal studies, demonstrated a loss of multisensory enhancement, in contrast with control subjects, providing the first lesional evidence in humans of the essential role of the SC in mediating audio-visual integration. Experiment 2 investigated the role of cortex in mediating multisensory integrative effects by inducing virtual lesions with inhibitory theta-burst stimulation over temporo-parietal cortex, occipital cortex and posterior parietal cortex, demonstrating that only temporo-parietal cortex was causally involved in modulating the integration of audio-visual stimuli at the same spatial location. Given the involvement of the retino-colliculo-extrastriate pathway in mediating audio-visual integration, the functional sparing of this circuit in hemianopic patients is extremely relevant from the perspective of a multisensory-based approach to the recovery of unisensory defects. Experiment 3 demonstrated the spared functional activity of this circuit in a group of hemianopic patients, revealing implicit recognition of the fearful content of unseen visual stimuli (i.e. affective blindsight), an ability mediated by the retino-colliculo-extrastriate pathway and its connections with the amygdala. Finally, Experiment 4 provided evidence that systematic audio-visual stimulation is effective in inducing long-lasting clinical improvements in patients with visual field defects, and that the activity of the spared retino-colliculo-extrastriate pathway is responsible for the observed clinical amelioration: in tasks that are highly demanding in terms of spatial orienting, patients with cortical lesions limited to the occipital cortex improved more than patients with lesions extending to other cortical areas. Overall, the present results indicate that multisensory integration is mediated by the retino-colliculo-extrastriate pathway and that systematic audio-visual stimulation, by activating this spared neural circuit, can affect orientation towards the blind field in hemianopic patients and might therefore constitute an effective and innovative approach for the rehabilitation of unisensory visual impairments.
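
Multisensory enhancement of the kind tested in Experiment 1 is conventionally expressed as the gain of the audio-visual response over the best unisensory response; a minimal sketch of that index with made-up response values follows.

```python
def multisensory_enhancement(av, auditory, visual):
    """Percent enhancement of the audio-visual response over the best unisensory
    response; positive values indicate multisensory gain."""
    best_unisensory = max(auditory, visual)
    return (av - best_unisensory) / best_unisensory * 100.0

# Hypothetical detection accuracies (proportion correct).
control = multisensory_enhancement(av=0.92, auditory=0.70, visual=0.75)
collicular = multisensory_enhancement(av=0.76, auditory=0.70, visual=0.75)

print(f"control subjects: {control:.0f}% enhancement")        # clear multisensory gain
print(f"collicular lesions: {collicular:.0f}% enhancement")   # gain largely abolished
```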

Relevance: 40.00%

Abstract:

Visual perception relies on a two-dimensional projection of the viewed scene onto the retinas of both eyes. Thus, visual depth has to be reconstructed from a number of different cues that are subsequently integrated to obtain robust depth percepts. Existing models of sensory integration are mainly based on the reliabilities of individual cues and disregard potential cue interactions. In the current study, an extended Bayesian model is proposed that takes into account both cue reliability and cue consistency. Four experiments were carried out to test this model's predictions. Observers had to judge visual displays of hemi-cylinders with an elliptical cross-section, constructed to allow for an orthogonal variation of several competing depth cues. In Experiments 1 and 2, observers estimated the cylinder's depth as defined by shading, texture, and motion gradients, and the degree of consistency among these cues was systematically varied. The extended Bayesian model provided a better fit to the empirical data than the traditional model, which disregards covariations among cues. To circumvent the potentially problematic assessment of single-cue reliabilities, Experiment 3 used a multiple-observation task, which allowed perceptual weights to be estimated from multiple-cue stimuli. Using the same multiple-observation task, the integration of stereoscopic disparity, shading, and texture gradients was examined in Experiment 4. Less reliable cues were downweighted in the combined percept. Moreover, a specific influence of cue consistency was revealed: shading and disparity seemed to be processed interactively, while other cue combinations could be well described by additive integration rules. These results suggest that cue combination in visual depth perception is highly flexible and depends on single-cue properties as well as on interrelations among cues. The extension of the traditional cue-combination model is defended in terms of the necessity for robust perception in ecologically valid environments, and the current findings are discussed in the light of emerging computational theories and neuroscientific approaches.
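
The traditional reliability-based model referred to above combines cues by weighting each single-cue estimate with its reliability (inverse variance); a minimal sketch of that baseline rule, with hypothetical depth estimates, is shown below for reference.

```python
import numpy as np

def reliability_weighted_combination(estimates, variances):
    """Maximum-likelihood (reliability-weighted) cue combination.

    Each cue's weight is its inverse variance normalized over all cues;
    the combined variance is the inverse of the summed reliabilities.
    """
    estimates = np.asarray(estimates, dtype=float)
    reliabilities = 1.0 / np.asarray(variances, dtype=float)
    weights = reliabilities / reliabilities.sum()
    return float(weights @ estimates), weights, 1.0 / reliabilities.sum()

# Hypothetical single-cue depth estimates (cm) and variances: shading, texture, motion.
depth, weights, combined_var = reliability_weighted_combination(
    estimates=[10.0, 12.0, 9.0],
    variances=[4.0, 1.0, 2.0],
)
print(f"combined depth: {depth:.2f} cm, weights: {np.round(weights, 2)}, variance: {combined_var:.2f}")
```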

Relevance: 40.00%

Abstract:

The superficial gray layer of the superior colliculus contains a map that represents the visual field, whereas the underlying intermediate gray layer contains a vector map of the saccades that shift the direction of gaze. These two maps are aligned so that a particular region of the visual field is represented directly above the neurons that orient the highest acuity area of the retina toward that region. Although it has been proposed that the transmission of information from the visuosensory to the motor map plays an important role in the generation of visually guided saccades, experiments have failed to demonstrate any functional linkage between the two layers. We examined synaptic transmission between these layers in vitro by stimulating the superficial layer while using whole-cell patch-clamp methods to measure the responses of intermediate layer neurons. Stimulation of superficial layer neurons evoked excitatory postsynaptic currents in premotor cells. This synaptic input was columnar in organization, indicating that the connections between the layers link corresponding regions of the visuosensory and motor maps. Excitatory postsynaptic currents were large enough to evoke action potentials and often occurred in clusters similar in duration to the bursts of action potentials that premotor cells use to command saccades. Our results indicate the presence of functional connections between the superficial and intermediate layers and show that such connections could play a significant role in the generation of visually guided saccades.