Abstract:
As humanoid robots become more commonplace in our society, it is important to understand the relation between humans and humanoid robots. In human face-to-face interaction, observing another individual perform an action facilitates the execution of a similar action and interferes with the execution of a different action. This phenomenon has been explained by the existence of shared internal representations for the execution and perception of actions, which would be automatically activated by the perception of another individual's action. In one interference experiment, null interference was reported when subjects observed a robotic arm perform the incongruent task, suggesting that this effect may be specific to interacting with other humans. This experimental paradigm, designed to investigate motor interference in human interactions, was adapted to investigate how similar the implicit perception of a humanoid robot is to that of a human agent. Subjects performed rhythmic arm movements while observing either a human agent or a humanoid robot performing either congruent or incongruent movements. The variance of the executed movements was used as a measure of the amount of interference. Both the human and humanoid agents produced a significant interference effect. These results suggest that observing the actions of a humanoid robot and of a human agent may rely on similar perceptual processes. Furthermore, the ratio of the variance in incongruent to congruent conditions differed between the human agent and the humanoid robot. We speculate that this ratio describes how similar the implicit perception of a robot is to that of a human, so that this paradigm could provide an objective measure of the reaction to different types of robots and be used to guide the design of humanoid robots interacting with humans. © 2004 IEEE.
Abstract:
The results of recent studies suggest that humans form internal models that they use in a feedforward manner to compensate for both stable and unstable dynamics. To examine how internal models are formed, we performed adaptation experiments in novel dynamics and measured the endpoint force, trajectory, and EMG during learning. Analysis of reflex feedback and of the change in feedforward commands between consecutive trials suggested a unified model of motor learning that coherently accounts for the learning processes observed in stable and unstable dynamics and reproduces available data on motor learning. To our knowledge, this algorithm, based on the concurrent minimization of (reflex) feedback and muscle activation, is also the first nonlinear adaptive controller able to stabilize unstable dynamics.
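A minimal trial-by-trial sketch of such a concurrent-minimization rule, for an antagonist muscle pair compensating a constant disturbance. The gains and the pair setup are illustrative assumptions, not the fitted parameters or exact equations of the study:

```python
# Sketch of concurrent minimization of feedback error and muscle activation.
# ALPHA, BETA, GAMMA are assumed gains: error correction, co-contraction, effort decay.
ALPHA, BETA, GAMMA = 0.5, 0.1, 0.02

def trial_update(u_flex, u_ext, disturbance):
    e = disturbance - (u_flex - u_ext)  # residual error that feedback must correct
    # Each muscle increases activation strongly for error it can correct (ALPHA),
    # weakly for the opposite error (BETA, yielding co-contraction), while all
    # activation decays (GAMMA), trimming superfluous effort once error is small.
    u_flex = max(0.0, u_flex + ALPHA * max(e, 0.0) + BETA * max(-e, 0.0) - GAMMA * u_flex)
    u_ext = max(0.0, u_ext + ALPHA * max(-e, 0.0) + BETA * max(e, 0.0) - GAMMA * u_ext)
    return u_flex, u_ext, e

u_f = u_e = 0.0
errors = []
for _ in range(200):
    u_f, u_e, e = trial_update(u_f, u_e, disturbance=1.0)
    errors.append(abs(e))
```

Under this rule the error shrinks across trials while the antagonist retains some residual activation, i.e., the learned solution includes co-contraction rather than pure reciprocal activation.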
Abstract:
Optimal feedback control postulates that feedback responses depend on the task relevance of any perturbations. We test this prediction in a bimanual task, conceptually similar to balancing a laden tray, in which each hand could be perturbed up or down. Single-limb mechanical perturbations produced long-latency reflex responses ("rapid motor responses") in the contralateral limb of appropriate direction and magnitude to maintain the tray horizontal. During bimanual perturbations, rapid motor responses modulated appropriately depending on the extent to which perturbations affected tray orientation. Specifically, despite receiving the same mechanical perturbation causing muscle stretch, the strongest responses were produced when the contralateral arm was perturbed in the opposite direction (large tray tilt) rather than in the same direction or not perturbed at all. Rapid responses from shortening extensors depended on a nonlinear summation of the sensory information from the arms, with the response to a bimanual same-direction perturbation (orientation maintained) being less than the sum of the component unimanual perturbations (task relevant). We conclude that task-dependent tuning of reflexes can be modulated online within a single trial based on a complex interaction across the arms.
Abstract:
In order to generate skilled and efficient actions, the motor system must find solutions to several problems inherent in sensorimotor control, including nonlinearity, nonstationarity, delays, redundancy, uncertainty, and noise. We review these problems and five computational mechanisms that the brain may use to limit their deleterious effects: optimal feedback control, impedance control, predictive control, Bayesian decision theory, and sensorimotor learning. Together, these computational mechanisms allow skilled and fluent sensorimotor behavior.
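As one concrete instance of these mechanisms, Bayesian decision theory prescribes combining prior knowledge with noisy sensory evidence by precision weighting. A minimal sketch with assumed Gaussian values (not data from any study reviewed here):

```python
# Gaussian prior over a target position and a noisy sensory observation
# (all numbers are illustrative assumptions).
prior_mean, prior_var = 0.0, 4.0
obs, obs_var = 2.0, 1.0

# Posterior of two Gaussians: precisions (inverse variances) add, and the
# posterior mean is the precision-weighted average of prior and observation.
post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
```

The posterior variance is smaller than either source alone, which is why integrating prior and evidence reduces the deleterious effect of sensory noise.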
Abstract:
Humans are able to learn tool-handling tasks, such as carving, demonstrating their competency to make movements in unstable environments with varied directions. When faced with a single direction of instability, humans learn to selectively co-contract their arm muscles tuning the mechanical stiffness of the limb end point to stabilize movements. This study examines, for the first time, subjects simultaneously adapting to two distinct directions of instability, a situation that may typically occur when using tools. Subjects learned to perform reaching movements in two directions, each of which had lateral instability requiring control of impedance. The subjects were able to adapt to these unstable interactions and switch between movements in the two directions; they did so by learning to selectively control the end-point stiffness counteracting the environmental instability without superfluous stiffness in other directions. This finding demonstrates that the central nervous system can simultaneously tune the mechanical impedance of the limbs to multiple movements by learning movement-specific solutions. Furthermore, it suggests that the impedance controller learns as a function of the state of the arm rather than a general strategy. © 2011 the American Physiological Society.
Abstract:
The technique presented in this paper enables a simple, accurate and unbiased measurement of hand stiffness during human arm movements. Using a computer-controlled mechanical interface, the hand is shifted relative to a prediction of the undisturbed trajectory. Stiffness is then computed as the restoring force divided by the positional amplitude of the perturbation. A precise prediction algorithm ensures the measurement quality. We used this technique to measure stiffness in free movements and after adaptation to a linear, velocity-dependent force field. Subjects compensated for the external force by selectively co-contracting muscles. The stiffness geometry changed with learning, and stiffness tended to increase in the direction of the external force.
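The stiffness computation itself amounts to a least-squares fit of the restoring forces to the imposed displacements. A sketch with hypothetical displacement and force values (the 2x2 endpoint stiffness matrix K is defined by F = -K x):

```python
import numpy as np

# Hypothetical data: restoring forces (N) measured for small hand
# displacements (m) imposed in four directions.
displacements = np.array([[0.008, 0.0], [-0.008, 0.0], [0.0, 0.008], [0.0, -0.008]])
forces = np.array([[-2.4, -0.4], [2.4, 0.4], [-0.3, -1.6], [0.3, 1.6]])

# F = -K x  =>  solve the least-squares problem  displacements @ K.T = -forces
X, *_ = np.linalg.lstsq(displacements, -forces, rcond=None)
K = X.T  # endpoint stiffness matrix in N/m
```

With these made-up numbers the fit recovers a stiffness of 300 N/m along x and 200 N/m along y, with off-diagonal coupling terms; in practice more perturbation directions improve the conditioning of the fit.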
Abstract:
A recent study demonstrates involvement of primary motor cortex in task-dependent modulation of rapid feedback responses; cortical neurons resolve locally ambiguous sensory information, producing sophisticated responses to disturbances.
Abstract:
At an early stage of learning novel dynamics, changes in muscle activity are mainly due to corrective feedback responses. These feedback contributions to the overall motor command are gradually reduced as feedforward control is learned. The temporary increased use of feedback could arise simply from the large errors in early learning with either unaltered gains or even slightly downregulated gains, or from an upregulation of the feedback gains when feedforward prediction is insufficient. We therefore investigated whether the sensorimotor control system alters feedback gains during adaptation to a novel force field generated by a robotic manipulandum. To probe the feedback gains throughout learning, we measured the magnitude of involuntary rapid visuomotor responses to rapid shifts in the visual location of the hand during reaching movements. We found large increases in the magnitude of the rapid visuomotor response whenever the dynamics changed: both when the force field was first presented, and when it was removed. We confirmed that these changes in feedback gain are not simply a byproduct of the change in background load, by demonstrating that this rapid visuomotor response is not load sensitive. Our results suggest that when the sensorimotor control system experiences errors, it increases the gain of the visuomotor feedback pathways to deal with the unexpected disturbances until the feedforward controller learns the appropriate dynamics. We suggest that these feedback gains are upregulated with increased uncertainty in the knowledge of the dynamics to counteract any errors or disturbances and ensure accurate and skillful movements.
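The gain probe described above reduces to dividing the magnitude of the involuntary force response by the size of the visual shift of the hand. A sketch with synthetic response values, chosen only to illustrate the kind of upregulation described (not the measured data):

```python
import numpy as np

# Synthetic lateral force responses (N) to a 2 cm visual shift of the cursor,
# just after a dynamics change versus after learning (illustrative values only).
jump = 0.02  # cursor shift in metres
early_responses = np.array([0.52, 0.47, 0.55, 0.50])  # shortly after field onset
late_responses = np.array([0.31, 0.28, 0.33, 0.30])   # after adaptation

gain_early = early_responses.mean() / jump  # visuomotor gain in N per metre
gain_late = late_responses.mean() / jump
```

A higher gain early in learning, relaxing as the feedforward model improves, is the signature of feedback upregulation under dynamics uncertainty.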
Abstract:
Real-world tasks often require movements that depend on a previous action or on changes in the state of the world. Here we investigate whether motor memories encode the current action in a manner that depends on previous sensorimotor states. Human subjects performed trials in which they made movements in a randomly selected clockwise or counterclockwise velocity-dependent curl force field. Movements during this adaptation phase were preceded by a contextual phase that determined which of the two fields would be experienced on any given trial. As expected from previous research, when static visual cues were presented in the contextual phase, strong interference (resulting in an inability to learn either field) was observed. In contrast, when the contextual phase involved subjects making a movement that was continuous with the adaptation-phase movement, a substantial reduction in interference was seen. As the time between the contextual and adaptation movement increased, so did the interference, reaching a level similar to that seen for static visual cues for delays >600 ms. This contextual effect generalized to purely visual motion, active movement without vision, passive movement, and isometric force generation. Our results show that sensorimotor states that differ in their recent temporal history can engage distinct representations in motor memory, but this effect decays progressively over time and is abolished by ∼600 ms. This suggests that motor memories are encoded not simply as a mapping from current state to motor command but are encoded in terms of the recent history of sensorimotor states.
Abstract:
Humans skillfully manipulate objects and tools despite their inherent instability. In order to succeed at these tasks, the sensorimotor control system must build an internal representation of both the force and the mechanical impedance. As it is not practical to either learn or store motor commands for every possible future action, the sensorimotor control system generalizes a control strategy for a range of movements based on learning performed over a set of movements. Here, we introduce a computational model for this learning and generalization, which specifies how feedforward muscle activity is learned as a function of the state space. Specifically, by incorporating co-activation as a function of error into the feedback command, we are able to derive an algorithm from a gradient descent minimization of motion error and effort, subject to maintaining a stability margin. This algorithm can be used to learn to coordinate any of a variety of motor primitives such as force fields, muscle synergies, physical models or artificial neural networks. This model for human learning and generalization is able to adapt to both stable and unstable dynamics, and provides a controller for generating efficient adaptive motor behavior in robots. Simulation results exhibit predictions consistent with all experiments on learning of novel dynamics requiring adaptation of force and impedance, and enable us to re-examine some of the previous interpretations of experiments on generalization. © 2012 Kadiallah et al.
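The idea of learning feedforward activity as a function of the state space, via gradient descent on error plus effort, can be sketched with Gaussian basis functions standing in for the motor primitives. Everything below (basis widths, gains, the disturbance) is an illustrative assumption, not the published algorithm:

```python
import numpy as np

# Learn a feedforward command u(x) = w @ phi(x) over a 1-D state x by
# stochastic gradient descent on squared error plus an effort penalty.
rng = np.random.default_rng(0)
centers = np.linspace(0.0, 1.0, 5)  # Gaussian "motor primitive" centres (assumed)

def phi(x):
    return np.exp(-(x - centers) ** 2 / 0.05)

def disturbance(x):
    # Unknown state-dependent dynamics the controller must compensate (assumed).
    return np.sin(2 * np.pi * x)

w = np.zeros(5)
ETA, LAMBDA = 0.2, 1e-4  # learning rate and effort-penalty weight (assumed)
for _ in range(2000):
    x = rng.uniform(0.0, 1.0)
    e = disturbance(x) - w @ phi(x)   # residual error at the visited state
    w += ETA * e * phi(x) - LAMBDA * w  # error descent plus effort decay

# Generalization: evaluate the learned command across the whole state space.
test_x = np.linspace(0.0, 1.0, 50)
residual = float(np.mean([(disturbance(x) - w @ phi(x)) ** 2 for x in test_x]))
```

Because the weights multiply smooth basis functions of the state, learning at visited states generalizes locally and decays with distance, which is the graded-generalization behaviour such primitive-based models produce.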
Abstract:
Several studies have shown that sensory contextual cues can reduce the interference observed during learning of opposing force fields. However, because each study examined a small set of cues, often in a unique paradigm, the relative efficacy of different sensory contextual cues is unclear. In the present study we quantify how seven contextual cues, some investigated previously and some novel, affect the formation and recall of motor memories. Subjects made movements in a velocity-dependent curl field, with direction varying randomly from trial to trial but always associated with a unique contextual cue. Linking field direction to the cursor or background color, or to peripheral visual motion cues, did not reduce interference. In contrast, the orientation of a visual object attached to the hand cursor significantly reduced interference, albeit by a small amount. When the fields were associated with movement in different locations in the workspace, a substantial reduction in interference was observed. We tested whether this reduction in interference was due to the different locations of the visual feedback (targets and cursor) or the movements (proprioceptive). When the fields were associated only with changes in visual display location (movements always made centrally) or only with changes in the movement location (visual feedback always displayed centrally), a substantial reduction in interference was observed. These results show that although some visual cues can lead to the formation and recall of distinct representations in motor memory, changes in spatial visual and proprioceptive states of the movement are far more effective than changes in simple visual contextual cues.
Abstract:
Recent theoretical frameworks such as optimal feedback control suggest that feedback gains should modulate throughout a movement and be tuned to task demands. Here we measured the visuomotor feedback gain throughout the course of movements made to "near" or "far" targets in human subjects. The visuomotor gain showed a systematic modulation over the time course of the reach, with the gain peaking at the middle of the movement and dropping rapidly as the target is approached. This modulation depends primarily on the proportion of the movement remaining, rather than hand position, suggesting that the modulation is sensitive to task demands. Model-predictive control suggests that the gains should be continuously recomputed throughout a movement. To test this, we investigated whether feedback gains update when the task goal is altered during a movement, that is when the target of the reach jumped. We measured the visuomotor gain either simultaneously with the jump or 100 ms after the jump. The visuomotor gain nonspecifically reduced for all target jumps when measured synchronously with the jump. However, the visuomotor gain 100 ms later showed an appropriate modulation for the revised task goal by increasing for jumps that increased the distance to the target and reducing for jumps that decreased the distance. We conclude that visuomotor feedback gain shows a temporal evolution related to task demands and that this evolution can be flexibly recomputed within 100 ms to accommodate online modifications to task goals.
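That optimal feedback gains are time-varying rather than constant can be illustrated with a finite-horizon LQR for a point-mass hand. This toy model is not the study's model, its costs and dynamics are assumed, and the exact gain profile depends on the cost structure:

```python
import numpy as np

# Finite-horizon LQR for a discrete double integrator (point-mass hand).
# All costs and time scales are illustrative assumptions.
dt, N = 0.01, 60
A = np.array([[1.0, dt], [0.0, 1.0]])   # position/velocity dynamics
B = np.array([[0.0], [dt]])             # control drives acceleration
Q = np.zeros((2, 2))                    # no running state cost
Qf = np.diag([1e4, 1e2])                # terminal accuracy cost (pos, vel)
R = np.array([[1e-3]])                  # effort cost

S = Qf
gains = []
for _ in range(N):  # backward Riccati recursion from the end of the movement
    K = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
    S = Q + A.T @ (S - S @ B @ np.linalg.solve(R + B.T @ S @ B, B.T @ S)) @ A
    gains.append(K[0, 0])               # position-error gain at this time-to-go
gains = gains[::-1]                     # reorder from movement start to end
```

In this toy model the position gain varies strongly over the movement and vanishes at the final step (where control can no longer affect final position), making the general point that optimal gains must be scheduled over time and recomputed when the goal changes.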
Abstract:
Impedance control can be used to stabilize the limb against both instability and unpredictable perturbations. Limb posture influences motor noise, energy usage and limb impedance, as well as their interaction. Here we examine whether subjects use limb posture as part of a mechanism to regulate limb stability. Subjects performed stabilization tasks while attached to a two-dimensional robotic manipulandum that generated a virtual environment. Subjects were instructed that they could perform the stabilization task anywhere in the workspace, and the chosen postures were tracked as subjects repeated the task. In order to investigate the mechanisms behind the chosen limb postures, simulations of the neuromechanical system were performed. The results indicate that posture selection provides energy efficiency in the presence of force variability.
Abstract:
Successful motor performance requires the ability to adapt motor commands to task dynamics. A central question in movement neuroscience is how these dynamics are represented. Although it is widely assumed that dynamics (e.g., force fields) are represented in intrinsic, joint-based coordinates (Shadmehr R, Mussa-Ivaldi FA. J Neurosci 14: 3208-3224, 1994), recent evidence has questioned this proposal. Here we reexamine the representation of dynamics in two experiments. By testing generalization following changes in shoulder, elbow, or wrist configurations, the first experiment tested for extrinsic, intrinsic, or object-centered representations. No single coordinate frame accounted for the pattern of generalization. Rather, generalization patterns were better accounted for by a mixture of representations or by models that assumed local learning and graded, decaying generalization. A second experiment, in which we replicated the design of an influential study that had suggested encoding in intrinsic coordinates (Shadmehr and Mussa-Ivaldi 1994), yielded similar results. That is, we could not find evidence that dynamics are represented in a single coordinate system. Taken together, our experiments suggest that internal models do not employ a single coordinate system when generalizing and may well be represented as a mixture of coordinate systems, as a single system with local learning, or both.