13 results for feet sensory information
in the Cambridge University Engineering Department Publications Database
Abstract:
Modern theories of motor control incorporate forward models that combine sensory information and motor commands to predict future sensory states. Such models circumvent unavoidable neural delays associated with on-line feedback control. Here we show that signals in human muscle spindle afferents during unconstrained wrist and finger movements predict future kinematic states of their parent muscle. Specifically, we show that the discharges of type Ia afferents are best correlated with the velocity of length changes in their parent muscles approximately 100-160 ms in the future and that their discharges vary depending on motor sequences in a way that cannot be explained by the state of their parent muscle alone. We therefore conclude that muscle spindles can act as "forward sensory models": they are affected both by the current state of their parent muscle and by efferent (fusimotor) control, and their discharges represent future kinematic states. If this conjecture is correct, then sensorimotor learning implies learning how to control not only the skeletal muscles but also the fusimotor system.
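The delay-compensation idea above can be sketched in code. This is a minimal illustration assuming simple linear muscle dynamics and a known 100 ms feedback delay; all dynamics, parameter values, and function names are illustrative, not taken from the study:

```python
import numpy as np

# Minimal sketch of a forward model: predict the current state of a
# one-dimensional "muscle" (position, velocity) from a delayed sensory
# estimate plus the motor commands issued since that estimate.

DT = 0.01          # simulation step (s)
DELAY_STEPS = 10   # 100 ms sensory feedback delay

def step(state, command):
    """One step of assumed linear dynamics: command drives velocity."""
    pos, vel = state
    vel = vel + DT * (command - 0.5 * vel)   # simple viscous damping
    pos = pos + DT * vel
    return np.array([pos, vel])

def forward_model(state, commands):
    """Roll the internal model through the delay using stored commands."""
    predicted = state
    for u in commands:
        predicted = step(predicted, u)
    return predicted

# Simulate: the controller only sees a 100 ms old state, but it keeps
# an efference copy of recent commands, so it can predict the present.
true_state = np.array([0.0, 0.0])
history = []
for t in range(50):
    u = np.sin(0.1 * t)                      # arbitrary motor command
    history.append(u)
    true_state = step(true_state, u)

delayed_state = np.array([0.0, 0.0])
for u in history[:-DELAY_STEPS]:
    delayed_state = step(delayed_state, u)

estimate = forward_model(delayed_state, history[-DELAY_STEPS:])
# With a perfect internal model the prediction matches the true state.
```

With a perfect model the rolled-forward estimate equals the true current state, which is what makes delayed feedback usable for control.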
Abstract:
Legged locomotion of biological systems can be viewed as a self-organizing process of highly complex system-environment interactions. Walking behavior, for example, is generated from the interactions between many mechanical components (e.g., physical interactions between feet and ground, skeletons and muscle-tendon systems) and distributed informational processes (e.g., sensory information processing, sensory-motor control in the central nervous system, and reflexes) [21]. An interesting aspect of the study of legged locomotion lies in the fact that there are multiple levels of self-organization processes (at the levels of mechanical dynamics, sensory-motor control, and learning). Previously, the self-organization of mechanical dynamics was nicely demonstrated by the so-called Passive Dynamic Walkers (PDWs; [18]). The PDW is a purely mechanical structure consisting of body, thigh, and shank limbs connected by passive joints. When placed on a shallow slope, it exhibits natural bipedal walking dynamics by converting potential into kinetic energy without any actuation. An important contribution of these case studies is that properly designed mechanical dynamics can, on the one hand, generate relatively complex locomotion dynamics and, on the other, induce self-stability against small disturbances without any explicit control of motors. The basic principle of mechanical self-stability appears to be fairly general, in that several different physics models exhibit similar characteristics across different kinds of behaviors (e.g., hopping, running, and swimming; [2, 4, 9, 16, 19]), and a number of robotic platforms have been developed based on them [1, 8, 13, 22]. © 2009 Springer London.
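The self-stabilizing energy balance of passive walking can be illustrated with the classic rimless-wheel abstraction (a deliberate simplification, not the body/thigh/shank model described above); all parameter values are illustrative assumptions:

```python
import math

# Rimless-wheel sketch of passive dynamic walking: each step, gravity
# on the slope adds kinetic energy and the ground collision removes a
# fixed fraction. The step-to-step map contracts toward one gait speed,
# illustrating self-stability without any explicit motor control.

G = 9.81
LEG = 1.0                       # leg length (m)
ALPHA = math.radians(15)        # half inter-leg angle
SLOPE = math.radians(4)         # slope angle
LOSS = math.cos(2 * ALPHA) ** 2 # kinetic-energy fraction kept per impact
DROP = 2 * LEG * math.sin(ALPHA) * math.sin(SLOPE)  # height lost per step

def next_speed_sq(v_sq):
    """One walking step: gain energy on the slope, lose some at impact."""
    return LOSS * (v_sq + 2 * G * DROP)

# Iterate from an arbitrary initial speed; the map converges to its
# unique fixed point, the steady passive gait.
v_sq = 0.5
for _ in range(100):
    v_sq = next_speed_sq(v_sq)

fixed_point = LOSS * 2 * G * DROP / (1 - LOSS)
```

Because the step-to-step map is linear with contraction factor `LOSS` < 1, any small disturbance in speed decays geometrically, which is the passive self-stability discussed in the abstract.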
Abstract:
Human sensorimotor control has been predominantly studied using fixed tasks performed under laboratory conditions. This approach has greatly advanced our understanding of the mechanisms that integrate sensory information and generate motor commands during voluntary movement. However, experimental tasks necessarily restrict the range of behaviors that are studied. Moreover, the processes studied in the laboratory may not be the same processes that subjects call upon during their everyday lives. Naturalistic approaches thus provide an important adjunct to traditional laboratory-based studies. For example, wearable self-contained tracking systems can allow subjects to be monitored outside the laboratory, where they engage spontaneously in natural everyday behavior. Similarly, advances in virtual reality technology allow laboratory-based tasks to be made more naturalistic. Here, we review naturalistic approaches, including perspectives from psychology and visual neuroscience, as well as studies and technological advances in the field of sensorimotor control.
Abstract:
Although learning a motor skill, such as a tennis stroke, feels like a unitary experience, researchers who study motor control and learning break the processes involved into a number of interacting components. These components can be organized into four main groups. First, skilled performance requires the effective and efficient gathering of sensory information, such as deciding where and when to direct one's gaze around the court, and thus an important component of skill acquisition involves learning how best to extract task-relevant information. Second, the performer must learn key features of the task such as the geometry and mechanics of the tennis racket and ball, the properties of the court surface, and how the wind affects the ball's flight. Third, the player needs to set up different classes of control that include predictive and reactive control mechanisms that generate appropriate motor commands to achieve the task goals, as well as compliance control that specifies, for example, the stiffness with which the arm holds the racket. Finally, the successful performer can learn higher-level skills such as anticipating and countering the opponent's strategy and making effective decisions about shot selection. In this Primer we shall consider these components of motor learning using as an example how we learn to play tennis.
Abstract:
Sensorimotor learning has been shown to depend on both prior expectations and sensory evidence in a way that is consistent with Bayesian integration. Thus, prior beliefs play a key role during the learning process, especially when only ambiguous sensory information is available. Here we develop a novel technique to estimate the covariance structure of the prior over visuomotor transformations (the mapping between the actual and visual location of the hand) during a learning task. Subjects performed reaching movements under multiple visuomotor transformations in which they received visual feedback of their hand position only at the end of the movement. After experiencing a particular transformation for one reach, subjects have insufficient information to determine the exact transformation, and so their second reach reflects a combination of their prior over visuomotor transformations and the sensory evidence from the first reach. We developed a Bayesian observer model in order to infer the covariance structure of the subjects' prior, which was found to give high probability to parameter settings consistent with visuomotor rotations. Therefore, although the set of visuomotor transformations experienced had little structure, the subjects had a strong tendency to interpret ambiguous sensory evidence as arising from rotation-like transformations. We then exposed the same subjects to a highly structured set of visuomotor transformations, designed to be very different from the set of visuomotor rotations. During this exposure the prior was found to have changed significantly to have a covariance structure that no longer favored rotation-like transformations. In summary, we have developed a technique which can estimate the full covariance structure of a prior in a sensorimotor task and have shown that the prior over visuomotor transformations favors a rotation-like structure. Moreover, through experience of a novel task structure, participants can appropriately alter the covariance structure of their prior.
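The core computation, combining a structured prior with ambiguous evidence, can be sketched for the Gaussian-conjugate case; the 2-D shift parameterization and all covariance numbers below are illustrative assumptions, not the study's estimates:

```python
import numpy as np

# Minimal sketch of Bayesian integration of a prior over visuomotor
# transformations with one noisy observation (Gaussian-conjugate case).
# The parameters describe a hypothetical 2-D shift (dx, dy).

prior_mean = np.zeros(2)
prior_cov = np.array([[4.0, 3.5],   # strongly correlated components,
                      [3.5, 4.0]])  # i.e. a structured prior
obs = np.array([2.0, 0.0])          # ambiguous sensory evidence
obs_cov = np.eye(2)                 # sensory noise covariance

# Standard Gaussian posterior: precision-weighted combination.
prior_prec = np.linalg.inv(prior_cov)
obs_prec = np.linalg.inv(obs_cov)
post_cov = np.linalg.inv(prior_prec + obs_prec)
post_mean = post_cov @ (prior_prec @ prior_mean + obs_prec @ obs)

# Because the prior couples the two components, evidence about dx
# alone also shifts the belief about dy: the structured prior fills
# in the ambiguous dimension, as in the second-reach behavior above.
```

The off-diagonal structure of the prior covariance is what the study's technique estimates; here it is simply assumed to show how it shapes the posterior.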
Abstract:
Optimal feedback control postulates that feedback responses depend on the task relevance of any perturbations. We test this prediction in a bimanual task, conceptually similar to balancing a laden tray, in which each hand could be perturbed up or down. Single-limb mechanical perturbations produced long-latency reflex responses ("rapid motor responses") in the contralateral limb of appropriate direction and magnitude to maintain the tray horizontal. During bimanual perturbations, rapid motor responses modulated appropriately depending on the extent to which perturbations affected tray orientation. Specifically, despite receiving the same mechanical perturbation causing muscle stretch, the strongest responses were produced when the contralateral arm was perturbed in the opposite direction (large tray tilt) rather than in the same direction or not perturbed at all. Rapid responses from shortening extensors depended on a nonlinear summation of the sensory information from the arms, with the response to a bimanual same-direction perturbation (orientation maintained) being less than the sum of the component unimanual perturbations (task relevant). We conclude that task-dependent tuning of reflexes can be modulated online within a single trial based on a complex interaction across the arms.
Abstract:
A recent study demonstrates involvement of primary motor cortex in task-dependent modulation of rapid feedback responses; cortical neurons resolve locally ambiguous sensory information, producing sophisticated responses to disturbances.
Abstract:
Decisions about noisy stimuli require evidence integration over time. Traditionally, evidence integration and decision making are described as a one-stage process: a decision is made when evidence for the presence of a stimulus crosses a threshold. Here, we show that one-stage models cannot explain psychophysical experiments on feature fusion, where two visual stimuli are presented in rapid succession. Paradoxically, the second stimulus biases decisions more strongly than the first one, contrary to predictions of one-stage models and intuition. We present a two-stage model where sensory information is integrated and buffered before it is fed into a drift diffusion process. The model is tested in a series of psychophysical experiments and explains both accuracy and reaction time distributions. © 2012 Rüter et al.
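The two-stage architecture can be sketched as follows; the leaky-buffer time constant, noise level, and thresholds are illustrative assumptions, not fitted model parameters:

```python
import random

# Two-stage sketch: evidence from two briefly flashed stimuli is first
# fused in a leaky buffer, and the buffered value then serves as the
# drift of a diffusion process that makes the decision.

def buffer_stage(inputs, dt=0.01, tau=0.1):
    """Leaky integration of a sequence of inputs. Earlier inputs decay
    more by the end, so the second stimulus dominates the fused
    percept: the paradoxical recency bias in feature fusion."""
    x = 0.0
    for s in inputs:
        x += dt * (-x / tau + s)
    return x

def diffusion_stage(drift, threshold=1.0, dt=0.01, noise=0.3, rng=None):
    """Drift diffusion to a bound; returns (choice, number of steps)."""
    rng = rng or random.Random(0)
    evidence, steps = 0.0, 0
    while abs(evidence) < threshold:
        evidence += drift * dt + noise * dt ** 0.5 * rng.gauss(0, 1)
        steps += 1
    return (1 if evidence > 0 else -1), steps

# First stimulus pushes one way (-1), second the other way (+1), with
# equal strength and duration.
stimuli = [-1.0] * 10 + [+1.0] * 10
fused = buffer_stage(stimuli)
# fused > 0: the later stimulus wins, unlike in a one-stage model
# where equal and opposite evidence would cancel.
```

In a one-stage account the two equal, opposite stimuli would cancel; the buffer's leak is what produces the second-stimulus bias before the diffusion stage ever runs.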
Abstract:
A key function of the brain is to interpret noisy sensory information. To do so optimally, observers must, in many tasks, take into account knowledge of the precision with which stimuli are encoded. In an orientation change detection task, we find that encoding precision not only depends on an experimentally controlled reliability parameter (shape) but also exhibits additional variability. Despite this variability in precision, human subjects seem to take precision into account near-optimally on a trial-to-trial and item-to-item basis. Our results offer a new conceptualization of the encoding of sensory information and highlight the brain's remarkable ability to incorporate knowledge of uncertainty during complex perceptual decision-making.
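The precision-weighting idea can be illustrated with a toy likelihood-ratio computation; the Gaussian generative assumptions and all numbers are illustrative, not the task's actual observer model:

```python
import math

# Toy precision-weighted change detection: each item contributes a
# noisy measured orientation difference between two displays, and an
# ideal observer weights that difference by its encoding precision.

def change_loglik(diffs, precisions, change_var=1.0):
    """Log-likelihood ratio (change vs. no change), summed over items.
    Under 'no change', a measured difference is pure encoding noise
    from the two displays; under 'change', it additionally carries a
    Gaussian change of variance change_var."""
    llr = 0.0
    for d, prec in zip(diffs, precisions):
        var0 = 2.0 / prec            # noise from two noisy measurements
        var1 = var0 + change_var     # plus an actual change
        llr += (0.5 * math.log(var0 / var1)
                + 0.5 * d * d * (1.0 / var0 - 1.0 / var1))
    return llr

# The same measured difference is stronger evidence for a change when
# the item was encoded with high precision (low noise).
weak = change_loglik([1.0], [0.5])   # low-precision item
strong = change_loglik([1.0], [8.0]) # high-precision item
```

This is the sense in which knowledge of item-by-item precision matters: without it, the observer would weight all items equally and lose accuracy.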
Abstract:
The brain extracts useful features from a maelstrom of sensory information, and a fundamental goal of theoretical neuroscience is to work out how it does so. One proposed feature extraction strategy is motivated by the observation that the meaning of sensory data, such as the identity of a moving visual object, is often more persistent than the activation of any single sensory receptor. This notion is embodied in the slow feature analysis (SFA) algorithm, which uses “slowness” as a heuristic by which to extract semantic information from multi-dimensional time series. Here, we develop a probabilistic interpretation of this algorithm, showing that inference and learning in the limiting case of a suitable probabilistic model yield exactly the results of SFA. Similar equivalences have proved useful in interpreting and extending comparable algorithms such as independent component analysis. For SFA, we use the equivalent probabilistic model as a conceptual springboard from which to motivate several novel extensions to the algorithm.
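The SFA algorithm referred to above can be sketched in its standard linear form (this is the classical algorithm, not the probabilistic extensions the paper develops); the demo signal is an assumption for illustration:

```python
import numpy as np

# Linear slow feature analysis: find the unit-variance projection of a
# multivariate time series whose temporal derivative varies least.

def linear_sfa(x, n_features=1):
    """x: (T, D) time series. Returns (T, n_features) slow features."""
    # 1. Center and whiten so every projection has unit variance.
    x = x - x.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(x.T))
    white = evecs / np.sqrt(evals)          # whitening matrix (D, D)
    z = x @ white
    # 2. In whitened space, minimizing derivative variance is an
    #    eigenproblem on the covariance of the time differences.
    devals, devecs = np.linalg.eigh(np.cov(np.diff(z, axis=0).T))
    w = white @ devecs[:, :n_features]      # slowest direction(s)
    return x @ w

# Demo: a slow sine hidden in a mixture with fast noise is recovered.
t = np.linspace(0, 4 * np.pi, 2000)
slow = np.sin(t)
rng = np.random.default_rng(0)
fast = rng.standard_normal(t.size)
mixed = np.stack([slow + 0.5 * fast, 0.5 * slow - fast], axis=1)
recovered = linear_sfa(mixed)[:, 0]
# recovered correlates strongly (up to sign) with the slow source.
```

The recovered feature is defined only up to sign, which is why comparisons against the source use the absolute correlation.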
Abstract:
The human motor system is remarkably proficient in the online control of visually guided movements, adjusting to changes in the visual scene within 100 ms [1-3]. This is achieved through a set of highly automatic processes [4] translating visual information into representations suitable for motor control [5, 6]. For this to be accomplished, visual information pertaining to the target and hand needs to be identified and linked to the appropriate internal representations during the movement. Meanwhile, other visual information must be filtered out, which is especially demanding in visually cluttered natural environments. If selection of relevant sensory information for online control were achieved by visual attention, its limited capacity [7] would substantially constrain the efficiency of visuomotor feedback control. Here we demonstrate that both exogenously and endogenously cued attention facilitate the processing of visual target information [8], but not of visual hand information. Moreover, distracting visual information is more efficiently filtered out during the extraction of hand information than of target information. Our results therefore suggest the existence of a dedicated visuomotor binding mechanism that links the hand representation in visual and motor systems.
Abstract:
It has been shown that sensory morphology and sensory-motor coordination enhance the capabilities of sensing in robotic systems. The tasks of categorization and category learning, for example, can be significantly simplified by exploiting morphological constraints, sensory-motor couplings, and the interaction with the environment. This paper argues that, in the context of sensory-motor control, it is essential to consider body dynamics derived from morphological properties and the interaction with the environment in order to gain additional insight into the underlying mechanisms of sensory-motor coordination and, more generally, the nature of perception. A locomotion model of a four-legged robot is used for case studies in both simulation and the real world. The locomotion model demonstrates how attractor states derived from body dynamics influence the sensory information, which can then be used for the recognition of stable behavioral patterns and of physical properties in the environment. A comprehensive analysis of behavior and sensory information leads to a deeper understanding of the underlying mechanisms by which body dynamics can be exploited for category learning in autonomous robotic systems. © 2006 Elsevier Ltd. All rights reserved.
Abstract:
Understanding the guiding principles of sensory coding strategies is a main goal in computational neuroscience. Among others, the principles of predictive coding and slowness appear to capture aspects of sensory processing. Predictive coding postulates that sensory systems are adapted to the structure of their input signals such that information about future inputs is encoded. Slow feature analysis (SFA) is a method for extracting slowly varying components from quickly varying input signals, thereby learning temporally invariant features. Here, we use the information bottleneck method to state an information-theoretic objective function for temporally local predictive coding. We then show that the linear case of SFA can be interpreted as a variant of predictive coding that maximizes the mutual information between the current output of the system and the input signal in the next time step. This demonstrates that the slowness principle and predictive coding are intimately related.
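Under notation assumed here (not necessarily the paper's), the two objectives being related can be written as follows; this is a sketch of the stated result, not a derivation:

```latex
% Temporally local predictive coding as an information bottleneck:
% compress the current input x_t into an output y_t while retaining
% information about the next input x_{t+1}.
\min_{p(y_t \mid x_t)} \; I(x_t; y_t) - \beta \, I(y_t; x_{t+1})

% Linear SFA: a zero-mean, unit-variance output component whose
% temporal derivative has minimal variance.
\min_{\mathbf{w}} \; \langle \dot{y}^2 \rangle_t
\quad \text{s.t.} \quad
\langle y \rangle_t = 0, \;\;
\langle y^2 \rangle_t = 1,
\qquad y = \mathbf{w}^{\top} \mathbf{x}

% The abstract's claim: in the linear case, the SFA solution also
% maximizes the mutual information I(y_t; x_{t+1}) between the current
% output and the input at the next time step.
```

The bottleneck trade-off parameter β and the choice of x_{t+1} as the relevance variable are the notational assumptions here; the equivalence itself is the result stated in the abstract.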