5 results for intention to learn

in the Cambridge University Engineering Department Publications Database


Relevance:

90.00%

Publisher:

Abstract:

This study compared adaptation in novel force fields where trajectories were initially either stable or unstable to elucidate the processes of learning novel skills and adapting to new environments. Subjects learned to move in a null force field (NF), which was unexpectedly changed either to a velocity-dependent force field (VF), which resulted in perturbed but stable hand trajectories, or a position-dependent divergent force field (DF), which resulted in unstable trajectories. With practice, subjects learned to compensate for the perturbations produced by both force fields. Adaptation was characterized by an initial increase in the activation of all muscles followed by a gradual reduction. The time course of the increase in activation was correlated with a reduction in hand-path error for the DF but not for the VF. Adaptation to the VF could have been achieved solely by formation of an inverse dynamics model and adaptation to the DF solely by impedance control. However, indices of learning, such as hand-path error, joint torque, and electromyographic activation and deactivation, suggest that the CNS combined these processes during adaptation to both force fields. Our results suggest that during the early phase of learning there is an increase in endpoint stiffness that serves to reduce hand-path error and provides additional stability, regardless of whether the dynamics are stable or unstable. We suggest that the motor control system utilizes an inverse dynamics model to learn the mean dynamics and an impedance controller to assist in the formation of the inverse dynamics model and to generate needed stability.
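To make the two perturbation types concrete, here is a minimal sketch in Python; the gain matrix B, the divergent stiffness k_div, and the exact field forms are illustrative placeholders, not the parameters used in the study. The VF returns a force proportional to hand velocity (perturbing but stable), while the DF returns a force that grows with lateral deviation from the straight path, so small errors are amplified.

```python
import numpy as np

# Illustrative sketch of the two perturbation types described above.
# The gain values B and k_div are placeholders, not those used in the study.

B = np.array([[0.0, 13.0],
              [-13.0, 0.0]])   # velocity-dependent (VF) gain matrix, N*s/m
k_div = 200.0                  # divergent (DF) stiffness along x, N/m

def vf_force(vel):
    """Velocity-dependent force field: perturbs the hand but remains stable."""
    return B @ vel

def df_force(pos):
    """Position-dependent divergent field: pushes the hand away from the
    straight-line path (here the y-axis), so small lateral errors grow."""
    return np.array([k_div * pos[0], 0.0])

print(vf_force(np.array([0.1, 0.3])))    # force during a forward reach
print(df_force(np.array([0.01, 0.15])))  # force grows with lateral deviation
```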

Relevance:

90.00%

Publisher:

Abstract:

Humans are able to learn tool-handling tasks, such as carving, demonstrating their competency to make movements, and to vary their direction, in unstable environments. It has been shown that when a single reaching movement is repeated in unstable dynamics, the central nervous system (CNS) learns an impedance internal model to compensate for the environmental instability. However, there is still no explanation for how humans can learn to move in various directions in such environments. In this study, we investigated whether and how humans compensate for instability while learning two different reaching movements simultaneously. Results show that when performing movements in two different directions, separated by a 35° angle, the CNS was able to compensate for the unstable dynamics. After adaptation, the force was found to be similar to that in the free movement condition, but stiffness increased in the direction of instability, specifically for each direction of movement. Our findings suggest that the CNS either learned an internal model generalizing over different movements, or alternatively that it was able to switch between specific models acquired simultaneously. © 2008 IEEE.
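As a rough illustration of the reported stiffness geometry (not the paper's analysis), the sketch below adds stiffness along a single instability direction, assumed here to be perpendicular to each of two movement directions separated by 35°; the baseline stiffness K_free and the 400 N/m increment are placeholder values.

```python
import numpy as np

# Minimal sketch: selectively increase endpoint stiffness along one
# instability direction while leaving the orthogonal direction unchanged.

def add_directional_stiffness(K_base, instability_dir, delta_k):
    """Return K_base plus delta_k of stiffness along the unit vector instability_dir."""
    u = np.asarray(instability_dir, dtype=float)
    u = u / np.linalg.norm(u)
    return K_base + delta_k * np.outer(u, u)

K_free = np.array([[300.0, 0.0],
                   [0.0, 300.0]])   # placeholder baseline stiffness, N/m

# Two reaching directions separated by 35 deg; instability is assumed lateral
# (perpendicular) to each movement direction.
for angle in (0.0, np.deg2rad(35.0)):
    move_dir = np.array([np.sin(angle), np.cos(angle)])
    lateral = np.array([move_dir[1], -move_dir[0]])   # perpendicular to movement
    K_adapted = add_directional_stiffness(K_free, lateral, 400.0)
    print(np.round(K_adapted, 1))
```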

Relevance:

90.00%

Publisher:

Abstract:

Humans are able to learn tool-handling tasks, such as carving, demonstrating their competency to make movements in varied directions in unstable environments. When faced with a single direction of instability, humans learn to selectively co-contract their arm muscles, tuning the mechanical stiffness of the limb end point to stabilize movements. This study examines, for the first time, subjects simultaneously adapting to two distinct directions of instability, a situation that may typically occur when using tools. Subjects learned to perform reaching movements in two directions, each of which had lateral instability requiring control of impedance. The subjects were able to adapt to these unstable interactions and switch between movements in the two directions; they did so by learning to selectively control the end-point stiffness counteracting the environmental instability without superfluous stiffness in other directions. This finding demonstrates that the central nervous system can simultaneously tune the mechanical impedance of the limbs to multiple movements by learning movement-specific solutions. Furthermore, it suggests that the impedance controller learns as a function of the state of the arm rather than a general strategy. © 2011 the American Physiological Society.
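One way to picture "movement-specific solutions" is a controller that selects its stiffness from the current movement state rather than using one global setting. The sketch below is hypothetical: the direction labels, the stiffness matrices, and the restoring_force helper are invented for illustration and are not taken from the study.

```python
import numpy as np

# Hypothetical sketch of a movement-specific impedance controller: the
# stiffness used for feedback is looked up from the current movement "state"
# (here just a direction label) rather than from a single global setting.

learned_stiffness = {
    "dir_A": np.array([[700.0, 0.0], [0.0, 300.0]]),      # stiff laterally for A (placeholder)
    "dir_B": np.array([[350.0, 170.0], [170.0, 650.0]]),  # rotated ellipse for B (placeholder)
}

def restoring_force(movement, x_desired, x_actual):
    """Feedback force computed from the stiffness learned for this movement."""
    K = learned_stiffness[movement]
    return K @ (np.asarray(x_desired) - np.asarray(x_actual))

# Same 1 cm lateral error, different restoring forces for the two movements.
print(restoring_force("dir_A", [0.0, 0.1], [0.01, 0.1]))
print(restoring_force("dir_B", [0.0, 0.1], [0.01, 0.1]))
```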

Relevance:

90.00%

Publisher:

Abstract:

Humans skillfully manipulate objects and tools despite the inherent instability. In order to succeed at these tasks, the sensorimotor control system must build an internal representation of both the force and mechanical impedance. As it is not practical to either learn or store motor commands for every possible future action, the sensorimotor control system generalizes a control strategy for a range of movements based on learning performed over a set of movements. Here, we introduce a computational model for this learning and generalization, which specifies how to learn feedforward muscle activity as a function of the state space. Specifically, by incorporating co-activation as a function of error into the feedback command, we are able to derive an algorithm from a gradient descent minimization of motion error and effort, subject to maintaining a stability margin. This algorithm can be used to learn to coordinate any of a variety of motor primitives such as force fields, muscle synergies, physical models or artificial neural networks. This model for human learning and generalization is able to adapt to both stable and unstable dynamics, and provides a controller for generating efficient adaptive motor behavior in robots. Simulation results exhibit predictions consistent with all experiments on learning of novel dynamics requiring adaptation of force and impedance, and enable us to re-examine some of the previous interpretations of experiments on generalization. © 2012 Kadiallah et al.
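The sketch below is a rough, simplified reading of this kind of learning rule, not the model of Kadiallah et al.: the trial error drives both a feedforward command and a co-activation (stiffness) term, while a small effort-related decay pulls both back down, so stiffness settles just above what the instability requires. The point-mass plant, the gains alpha, beta, gamma, and the 3 cm abort boundary are all assumptions made for the demo.

```python
import numpy as np

# Toy trial-by-trial adaptation of a feedforward force u_ff and a co-activation
# stiffness k_co in a laterally divergent field. Error increases both terms; a
# small decay penalizes effort. All parameters are placeholder assumptions.

alpha, beta, gamma = 0.2, 500.0, 0.01   # feedforward gain, co-activation gain, effort decay

def lateral_error(u_ff, k_co, field_gain=200.0, x0=0.005):
    """Simulate one reach; return the lateral deviation (trial aborted at 3 cm)."""
    x, v, dt, mass, damping = x0, 0.0, 0.01, 1.0, 5.0
    for _ in range(60):                              # ~0.6 s movement
        force = field_gain * x + u_ff - k_co * x - damping * v
        v += (force / mass) * dt
        x += v * dt
        if abs(x) > 0.03:                            # safety boundary: stop the trial
            return np.sign(x) * 0.03
    return x

u_ff, k_co = 0.0, 50.0
for trial in range(40):
    err = lateral_error(u_ff, k_co)
    u_ff += -alpha * err - gamma * u_ff              # reduce error, penalize effort
    k_co += beta * abs(err) - gamma * k_co           # co-activate more when error is large
    if trial % 10 == 0:
        print(f"trial {trial:2d}: error = {err:+.4f} m, stiffness = {k_co:6.1f} N/m")
```

With these placeholder numbers the stiffness rises quickly while errors are large and then drifts back toward the minimum needed for stability, qualitatively echoing the rise-and-fall of muscle activation described in the first abstract above.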