143 results for Proprioceptive
Abstract:
The application of different EMS current thresholds to muscle activates not only the muscle but also peripheral sensory axons that send proprioceptive and pain signals to the cerebral cortex. A 32-channel time-domain fNIRS instrument was employed to map regional cortical activity under varied EMS current intensities applied to the right wrist extensor muscle. Eight healthy volunteers underwent four EMS sessions at current thresholds scaled to their individual maximal tolerated intensity (MTI), i.e., 10% < 50% < 100% < over 100% MTI. Time courses of the absolute oxygenated and deoxygenated hemoglobin concentrations, primarily over the bilateral sensorimotor cortical (SMC) regions, were extracted, and cortical activation maps were determined by a general linear model using the NIRS-SPM software. The stimulation-induced wrist extension paradigm significantly increased activation of the contralateral SMC region in proportion to EMS intensity, while the ipsilateral SMC region showed no significant changes. This could be due in part to a nociceptive response to the higher EMS current intensities and may also result from increased sensorimotor integration in these cortical regions.
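The general-linear-model step described above can be sketched as an ordinary least-squares regression of a channel's hemoglobin time course on a stimulation regressor. This is a minimal illustration with synthetic data; NIRS-SPM additionally convolves the boxcar with a hemodynamic response function and corrects for serial correlation, which this sketch omits.

```python
# Minimal GLM "activation" sketch: regress one channel's HbO time course
# onto a stimulation boxcar and report the beta weight and t-statistic.
# All signal values below are synthetic, not data from the study.
import numpy as np

def glm_activation(signal, stimulus):
    """Beta weight and t-statistic of `stimulus` for one fNIRS channel."""
    X = np.column_stack([stimulus, np.ones_like(stimulus)])  # regressor + intercept
    beta, res, _, _ = np.linalg.lstsq(X, signal, rcond=None)
    dof = len(signal) - X.shape[1]
    sigma2 = res[0] / dof if res.size else 0.0          # residual variance
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])  # std. error of beta
    return beta[0], beta[0] / se

# Synthetic channel: HbO rises by ~0.5 uM during stimulation blocks.
rng = np.random.default_rng(0)
stim = np.tile(np.r_[np.zeros(20), np.ones(20)], 5).astype(float)
hbo = 0.5 * stim + rng.normal(0, 0.1, stim.size)
beta, t = glm_activation(hbo, stim)
```

A channel is then flagged as "activated" when its t-statistic survives the chosen significance threshold across the map.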
Abstract:
Since the pioneering work of Hough in 1902 (1), the term 'delayed onset muscle soreness' (DOMS) has dominated the field of athletic recovery. DOMS typically occurs after exercise-induced muscle damage (EIMD), particularly if the exercise is unaccustomed or involves a large amount of eccentric (muscle-lengthening) contractions. The symptoms of EIMD manifest as a temporary reduction in muscle force, disturbed proprioceptive acuity, and increases in inflammatory markers both within the injured muscle and in the blood, as well as increased muscle soreness, stiffness and swelling. The intensity of discomfort and soreness associated with DOMS increases within the first 24 hours, peaks between 24 and 72 hours, then subsides and eventually disappears 5-7 days after the exercise. Consequently, DOMS may interfere with athletic training or competition, and several recovery interventions have been utilised by athletes and coaches in an attempt to offset these negative effects...
Abstract:
It is well recognized that many scientifically interesting sites on Mars are located in rough terrain. Therefore, to enable safe autonomous operation of a planetary rover during exploration, the ability to accurately estimate terrain traversability is critical. In particular, this estimate needs to account for terrain deformation, which significantly affects the vehicle's attitude and configuration. This paper presents an approach to estimating vehicle configuration, as a measure of traversability, in deformable terrain by learning the correlation between exteroceptive and proprioceptive information in experiments. We first perform traversability estimation under a rigid-terrain assumption, then correlate the output with the experienced vehicle configuration and terrain deformation using a multi-task Gaussian Process (GP) framework. Experimental validation of the proposed approach was performed on a prototype planetary rover, and the vehicle attitude and configuration estimates were compared with state-of-the-art techniques. We demonstrate the ability of the approach to accurately estimate traversability, with uncertainty, in deformable terrain.
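The core idea, learning how a rigid-terrain attitude prediction (exteroceptive) maps to the attitude actually experienced on deformable terrain (proprioceptive), with predictive uncertainty, can be sketched with a plain single-output GP regression. The kernel, its hyperparameters, and the training data below are illustrative assumptions, not the paper's multi-task formulation.

```python
# Single-output GP regression sketch: map a rigid-terrain pitch estimate
# to the pitch experienced while driving, with posterior uncertainty.
import numpy as np

def rbf(a, b, length=0.5, var=1.0):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=0.05):
    K = rbf(x_train, x_train) + noise**2 * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    Kss = rbf(x_test, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))  # K^-1 y
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss - v.T @ v)                # posterior variance
    return mean, np.sqrt(np.maximum(var, 0.0))

# Synthetic pairs: on soft soil the platform sinks, so the experienced
# pitch exceeds the rigid-terrain prediction (slope > 1 is an assumption).
x = np.linspace(0.0, 0.4, 9)        # rigid-terrain pitch predictions (rad)
y = 1.3 * x + 0.02                  # "experienced" pitch (rad)
mean, std = gp_predict(x, y, np.array([0.2]))
```

The posterior standard deviation is what makes the estimate usable for risk-aware planning: queries far from the training terrain return wide, honest uncertainty.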
Abstract:
This work aims to promote reliability and integrity in autonomous perceptual systems, with a focus on outdoor unmanned ground vehicle (UGV) autonomy. For this purpose, a comprehensive UGV system, comprising many different exteroceptive and proprioceptive sensors, has been built. The first contribution of this work is a large, accurately calibrated and synchronised, multi-modal data-set, gathered in controlled environmental conditions, including the presence of dust, smoke and rain. The data have then been used to analyse the effects of such challenging conditions on perception and to identify common perceptual failures. The second contribution is a presentation of methods for mitigating these failures to promote perceptual integrity in adverse environmental conditions.
Abstract:
The present study investigated whether memory for a room-sized spatial layout learned through auditory localization of sounds exhibits orientation dependence similar to that observed for spatial memory acquired from stationary viewing of the environment. Participants learned spatial layouts by viewing objects or localizing sounds and then performed judgments of relative direction among remembered locations. The results showed that direction judgments following auditory learning were performed most accurately at a particular orientation in the same way as were those following visual learning, indicating that auditorily encoded spatial memory is orientation dependent. In combination with previous findings that spatial memories derived from haptic and proprioceptive experiences are also orientation dependent, the present finding suggests that orientation dependence is a general functional property of human spatial memory independent of learning modality.
Learned stochastic mobility prediction for planning with control uncertainty on unstructured terrain
Abstract:
Motion planning for planetary rovers must consider control uncertainty in order to maintain the safety of the platform during navigation. Modelling such control uncertainty is difficult due to the complex interaction between the platform and its environment. In this paper, we propose a motion planning approach whereby the outcome of control actions is learned from experience and represented statistically using a Gaussian process regression model. This mobility prediction model is trained using sample executions of motion primitives on representative terrain, and predicts the future outcome of control actions on similar terrain. Using Gaussian process regression allows us to exploit its inherent measure of prediction uncertainty in planning. We integrate mobility prediction into a Markov decision process framework and use dynamic programming to construct a control policy for navigation to a goal region in a terrain map built using an on-board depth sensor. We consider both rigid terrain, consisting of uneven ground, small rocks, and non-traversable rocks, and also deformable terrain. We introduce two methods for training the mobility prediction model from either proprioceptive or exteroceptive observations, and report results from nearly 300 experimental trials using a planetary rover platform in a Mars-analogue environment. Our results validate the approach and demonstrate the value of planning under uncertainty for safe and reliable navigation.
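The planning step described above can be illustrated with a toy Markov decision process solved by dynamic programming: per-cell success probabilities stand in for the learned GP mobility model, and value iteration yields the expected cost-to-goal. The strip of cells, the costs, and the probabilities are invented for illustration.

```python
# Toy MDP sketch of planning under control uncertainty: a 1-D strip of
# terrain cells, where each cell's probability of a successful "drive
# forward" action stands in for the learned mobility prediction model.
import numpy as np

def value_iteration(p_success, goal, step_cost=1.0, slip_cost=5.0, iters=200):
    """Expected cost-to-goal per cell. On failure the rover stays put
    and pays an extra recovery cost (slip_cost)."""
    n = len(p_success)
    V = np.zeros(n)
    for _ in range(iters):
        Vn = V.copy()
        for i in range(n):
            if i == goal:
                Vn[i] = 0.0
                continue
            j = min(i + 1, n - 1)   # successful move advances one cell
            Vn[i] = (step_cost
                     + p_success[i] * V[j]
                     + (1 - p_success[i]) * (slip_cost + V[i]))
        V = Vn
    return V

# Firm ground at the start, loose sand (low success prob.) near the goal.
p = np.array([0.95, 0.9, 0.6, 0.4])
V = value_iteration(p, goal=3)
```

Cells over loose sand accumulate a much higher expected cost, so a policy derived from `V` would detour around them when an alternative route exists, which is exactly the benefit of planning with the uncertainty rather than a deterministic mobility estimate.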
Abstract:
Our world is literally and figuratively turning to 'dust'. This work acknowledges decay and renewal and the transitional, cyclical natures of interrelated ecologies. It also suggests advanced levels of degradation potentially beyond reparation. Dust exists both on and beneath the border of our unaided vision. Dust particles are predominantly forms of disintegrating solids that often become the substance or catalyst of future forms. Like many tiny forms, dust is an often unnoticed residue with 'planet-size consequences' (Hanna Holmes 2001). The image depicts an ethereal, backlit body, continually circling and morphing, apparently floating, suggesting endless cycles of birth, life and death and inviting differing states of meditation, exploration, stillness and play. This never-ending video work is taken from a large-scale interactive/media artwork created during a six-month research residency in England at the Institute of Contemporary Art London and at Vincent Dance Theatre Sheffield in 2006. It was originally presented on a raised floor screen made of pure white sand at the ICA in London. The project involved developing new interaction, engagement and image-making strategies for media arts practice, drawing on the application of both kinetic and proprioceptive dance/performance knowledges. The work was further informed by ecological network theory, which assesses the systemic implications of private and public actions within bounded systems. The creative methodology was primarily practice-led, which fostered the particular qualities of imagery generated through cross-fertilising embodied knowledge of Dance and Media Arts. This was achieved through extensive workshopping undertaken in theatres, working 'on the floor' live, with dancers, props, sound and projection. And eventually of course, all this dust must settle. (Holmes 2001, from dust jacket)
Holmes, H. 2001, The Secret Life of Dust: From the Cosmos to the Kitchen Counter, the Big Consequences of Little Things, p. 3.
Abstract:
The aim of this work is to enable seamless transformation of product concepts to CAD models. This necessitates availability of 3D product sketches. The present work concerns intuitive generation of 3D strokes and intrinsic support for space sharing and articulation for the components of the product being sketched. Direct creation of 3D strokes in air lacks precision, stability and control. The inadequacy of proprioceptive feedback for the task is complemented in this work with stereo vision and haptics. Three novel methods based on a pencil-paper interaction analogy for haptic rendering of strokes have been investigated. The pen-tilt based rendering is simpler and found to be more effective. For spatial conformity, two modes of constraints for the stylus movements, corresponding to motions on a control surface and in a control volume, have been studied using novel reactive and field-based haptic rendering schemes. The field-based haptics, which in effect creates an attractive force field near a surface, though non-realistic, provided highly effective support for the control-surface constraints. The efficacy of the reactive haptic rendering scheme for constrained environments has been demonstrated using scribble strokes. This can enable distributed collaborative 3D concept development. The notion of motion constraints, defined through sketch strokes, enables intuitive generation of articulated 3D sketches and direct exploration of motion annotations found in most product concepts. The work thus establishes that modeling of constraints is a central issue in 3D sketching.
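The field-based scheme, an attractive force that pulls the stylus onto a control surface while it is within a capture band, can be sketched as a one-sided spring. The plane, stiffness, and band width below are illustrative assumptions, not values from the work.

```python
# Field-based haptic rendering sketch: inside a thin capture band around
# a control plane, the stylus feels a spring force pulling it onto the
# plane; outside the band the field vanishes. Parameters are illustrative.
import numpy as np

def attraction_force(p, plane_z=0.0, stiffness=200.0, band=0.01):
    """Force (N) on a stylus tip at point p near the plane z = plane_z."""
    d = p[2] - plane_z                # signed distance to the surface
    if abs(d) > band:                 # outside the capture band: no force
        return np.zeros(3)
    return np.array([0.0, 0.0, -stiffness * d])  # spring pull onto plane

f_inside = attraction_force(np.array([0.0, 0.0, 0.005]))   # 5 mm above
f_outside = attraction_force(np.array([0.0, 0.0, 0.05]))   # 50 mm above
```

A reactive scheme would instead push back only after penetration; the attractive field's advantage, as the abstract notes, is that it stabilises strokes onto the surface before contact.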
Abstract:
Several studies have shown that sensory contextual cues can reduce the interference observed during learning of opposing force fields. However, because each study examined a small set of cues, often in a unique paradigm, the relative efficacy of different sensory contextual cues is unclear. In the present study we quantify how seven contextual cues, some investigated previously and some novel, affect the formation and recall of motor memories. Subjects made movements in a velocity-dependent curl field, with direction varying randomly from trial to trial but always associated with a unique contextual cue. Linking field direction to the cursor or background color, or to peripheral visual motion cues, did not reduce interference. In contrast, the orientation of a visual object attached to the hand cursor significantly reduced interference, albeit by a small amount. When the fields were associated with movement in different locations in the workspace, a substantial reduction in interference was observed. We tested whether this reduction in interference was due to the different locations of the visual feedback (targets and cursor) or the movements (proprioceptive). When the fields were associated only with changes in visual display location (movements always made centrally) or only with changes in the movement location (visual feedback always displayed centrally), a substantial reduction in interference was observed. These results show that although some visual cues can lead to the formation and recall of distinct representations in motor memory, changes in spatial visual and proprioceptive states of the movement are far more effective than changes in simple visual contextual cues.
Abstract:
Currently, one of the important research areas in spatial updating is the role of external (for instance visual) and internal (for instance proprioceptive or vestibular) information in the spatial updating of scene recognition. Our study uses the paradigm of classic spatial updating research and the experimental design of Burgess (2004). First, we explore the concrete influence of locomotion on scene recognition in the real world; next, we use virtual reality technology, which can control many spatial learning parameters and exclude the influence of irrelevant variables, to explore the influence of pure locomotion without visual cues on scene recognition; furthermore, we explore whether the ability of spatial updating can be transferred to new situations in a short period of time, and compare the result pattern in the real world with that in virtual reality to test the validity of virtual reality technology for research on the spatial updating of scene recognition. The main results of this paper can be summarized as follows: 1. In the real world, we found two effects: a spatial updating effect and a viewpoint-dependent effect. This result indicates that the spatial updating effect based on locomotion does not eliminate the viewpoint-dependent effect during scene recognition in a physical environment. 2. In the virtual reality environment, we again found both effects, showing that the locomotion-based spatial updating effect does not eliminate the viewpoint-dependent effect in virtual reality either. 3. The spatial updating effect based on locomotion plays a double role in scene recognition: when subjects were tested at a different viewpoint, spatial updating based on locomotion promoted scene recognition, whereas when subjects were tested at the same viewpoint, it had a negative influence on scene recognition. These results show that spatial updating based on locomotion is automatic and cannot be ignored. 4. The ability of spatial updating can be transferred to new situations in a short period of time, and the experiment in the immersive virtual reality environment yielded the same result pattern as that in the physical environment, suggesting that VR technology is an effective method for research on the spatial updating of scene recognition. 5. This study of scene recognition provides evidence for a dual-system model of spatial updating in the immersive virtual reality environment.
Abstract:
Meng, Q., & Lee, M. (2005). Novelty and Habituation: the Driving Forces in Early Stage Learning for Developmental Robotics. In Wermter, S., Palm, G., & Elshaw, M. (Eds.), Biomimetic Neural Learning for Intelligent Robots: Intelligent Systems, Cognitive Robotics, and Neuroscience (pp. 315-332). (Lecture Notes in Computer Science). Springer Berlin Heidelberg.
Abstract:
Q. Meng and M. H. Lee, Novelty and Habituation: the Driving Forces in Early Stage Learning for Developmental Robotics, AI-Workshop on NeuroBotics, University of Ulm, Germany. September 2004.
Abstract:
Both animals and mobile robots, or animats, need adaptive control systems to guide their movements through a novel environment. Such control systems need reactive mechanisms for exploration, and learned plans to efficiently reach goal objects once the environment is familiar. How reactive and planned behaviors interact together in real time, and are released at the appropriate times, during autonomous navigation remains a major unsolved problem. This work presents an end-to-end model to address this problem, named SOVEREIGN: A Self-Organizing, Vision, Expectation, Recognition, Emotion, Intelligent, Goal-oriented Navigation system. The model comprises several interacting subsystems, governed by systems of nonlinear differential equations. As the animat explores the environment, a vision module processes visual inputs using networks that are sensitive to visual form and motion. Targets processed within the visual form system are categorized by real-time incremental learning. Simultaneously, visual target position is computed with respect to the animat's body. Estimates of target position activate a motor system to initiate approach movements toward the target. Motion cues from animat locomotion can elicit orienting head or camera movements to bring a new target into view. Approach and orienting movements are alternately performed during animat navigation. Cumulative estimates of each movement, based on both visual and proprioceptive cues, are stored within a motor working memory. Sensory cues are stored in a parallel sensory working memory. These working memories trigger learning of sensory and motor sequence chunks, which together control planned movements. Effective chunk combinations are selectively enhanced via reinforcement learning when the animat is rewarded. The planning chunks effect a gradual transition from reactive to planned behavior.
The model can read out different motor sequences under different motivational states and learns more efficient paths to rewarded goals as exploration proceeds. Several volitional signals automatically gate the interactions between model subsystems at appropriate times. A 3-D visual simulation environment reproduces the animat's sensory experiences as it moves through a simplified spatial environment. The SOVEREIGN model exhibits robust goal-oriented learning of sequential motor behaviors. Its biomimetic structure explicates a number of brain processes which are involved in spatial navigation.
Abstract:
How do reactive and planned behaviors interact in real time? How are sequences of such behaviors released at appropriate times during autonomous navigation to realize valued goals? Controllers for both animals and mobile robots, or animats, need reactive mechanisms for exploration, and learned plans to reach goal objects once an environment becomes familiar. The SOVEREIGN (Self-Organizing, Vision, Expectation, Recognition, Emotion, Intelligent, Goal-oriented Navigation) animat model embodies these capabilities, and is tested in a 3D virtual reality environment. SOVEREIGN includes several interacting subsystems which model complementary properties of cortical What and Where processing streams and which clarify similarities between mechanisms for navigation and arm movement control. As the animat explores an environment, visual inputs are processed by networks that are sensitive to visual form and motion in the What and Where streams, respectively. Position-invariant and size-invariant recognition categories are learned by real-time incremental learning in the What stream. Estimates of target position relative to the animat are computed in the Where stream, and can activate approach movements toward the target. Motion cues from animat locomotion can elicit head-orienting movements to bring a new target into view. Approach and orienting movements are alternately performed during animat navigation. Cumulative estimates of each movement are derived from interacting proprioceptive and visual cues. Movement sequences are stored within a motor working memory. Sequences of visual categories are stored in a sensory working memory. These working memories trigger learning of sensory and motor sequence categories, or plans, which together control planned movements. Predictively effective chunk combinations are selectively enhanced via reinforcement learning when the animat is rewarded.
Selected planning chunks effect a gradual transition from variable reactive exploratory movements to efficient goal-oriented planned movement sequences. Volitional signals gate interactions between model subsystems and the release of overt behaviors. The model can control different motor sequences under different motivational states and learns more efficient sequences to rewarded goals as exploration proceeds.
Abstract:
Aging is characterized by brain structural changes that may compromise motor functions. In the context of postural control, white matter integrity is crucial for the efficient transfer of visual, proprioceptive and vestibular feedback in the brain. To determine the role of age-related white matter decline as a function of the sensory feedback necessary to correct posture, we acquired diffusion-weighted images in young and old subjects. A force platform was used to measure changes in body posture under conditions of compromised proprioceptive and/or visual feedback. In the young group, no significant brain structure-balance relations were found. In the elderly, however, the integrity of a cluster in the frontal forceps explained 21% of the variance in postural control when proprioceptive information was compromised. Additionally, when only the vestibular system supplied reliable information, the occipital forceps was the best predictor of balance performance (42%). Age-related white matter decline may thus be predictive of balance performance in the elderly when sensory systems start to degrade.