30 results for Control framework
in Cambridge University Engineering Department Publications Database
Abstract:
The authors demonstrate that a widely proposed method of robot dynamic control can be inherently unstable, because an algebraic feedback loop condition renders the feedback system ill-posed. By focusing on the concept of ill-posedness, a necessary and sufficient condition for instability is derived for robot manipulator systems that incorporate online acceleration cross-coupling control. Also demonstrated is a quasilinear multivariable control framework useful for assessing the robustness of this type of control when the instability condition is not met.
Abstract:
We explore collective behavior in biological systems using a cooperative control framework. In particular, we study a hysteresis phenomenon in which a collective switches from circular to parallel motion under slow variation of the neighborhood size in which individuals tend to align with one another. In the case that the neighborhood radius is less than the circular motion radius, both circular and parallel motion can occur. We provide Lyapunov-based analysis of bistability of circular and parallel motion in a closed-loop system of self-propelled particles with coupled-oscillator dynamics. ©2007 IEEE.
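The alignment mechanism underlying the parallel-motion regime can be illustrated with a minimal sketch. This is not the paper's full self-propelled-particle model (it omits circular motion, spatial positions, and the neighborhood radius); it only shows the coupled-oscillator heading dynamics in the all-to-all limit, where the group settles into parallel motion. All names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Headings of N particles with sinusoidal (Kuramoto-style) alignment
# coupling. With a neighborhood large enough to contain everyone,
# each particle couples to all others and headings synchronize.
N, K, dt, steps = 20, 1.0, 0.05, 1000
theta = rng.uniform(-np.pi, np.pi, N)

for _ in range(steps):
    # Alignment torque on particle i: mean of sin(theta_j - theta_i).
    dtheta = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += dt * dtheta

# Polar order parameter: 1 means perfectly parallel headings.
order = abs(np.exp(1j * theta).mean())
```

Shrinking the coupling neighborhood, as the abstract describes, is what opens the door to the competing circular-motion regime.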
Abstract:
A novel framework is provided for very fast model-based reinforcement learning in continuous state and action spaces. It requires probabilistic models that explicitly characterize their levels of confidence. Within the framework, flexible, non-parametric models are used to describe the world based on previously collected experience. Learning is demonstrated on the cart-pole problem in a setting where very limited prior knowledge about the task is provided. Learning progressed rapidly, and a good policy was found after only a small number of iterations.
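The key ingredient above is a model that reports its own confidence. A minimal sketch of such a model is Gaussian-process regression, whose predictive variance grows away from collected experience; the toy data and kernel length-scale below are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def rbf(a, b, ls=0.5):
    """Squared-exponential kernel between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

# A few observed input -> outcome pairs standing in for experience.
x_train = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
y_train = np.sin(x_train)
noise = 1e-4

K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
K_inv = np.linalg.inv(K)

def gp_predict(x_star):
    """GP posterior mean and variance at test inputs."""
    k_s = rbf(np.atleast_1d(x_star), x_train)
    mean = k_s @ K_inv @ y_train
    var = 1.0 - np.einsum('ij,jk,ik->i', k_s, K_inv, k_s)
    return mean, var

m_near, v_near = gp_predict(0.1)  # close to training data: confident
m_far, v_far = gp_predict(3.0)    # far from any experience: uncertain
```

A planner using such a model can weight predictions by their confidence instead of trusting extrapolations far from the data.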
Abstract:
Uncertainty is ubiquitous in our sensorimotor interactions, arising from factors such as sensory and motor noise and ambiguity about the environment. A quintessential property of the Bayesian framework for inferring the state of the world so as to select actions, and one that sets it apart from previous theories, is the requirement to represent the uncertainty associated with inferences in the form of probability distributions. In the context of sensorimotor control and learning, the Bayesian framework suggests that to respond optimally to environmental stimuli the central nervous system needs to construct estimates of the sensorimotor transformations, in the form of internal models, as well as represent the structure of the uncertainty in the inputs, outputs and in the transformations themselves. Here we review Bayesian inference and learning models that have been successful in demonstrating the sensitivity of the sensorimotor system to different forms of uncertainty, as well as recent studies aimed at characterizing the representation of uncertainty at different computational levels.
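The core computation the abstract describes can be sketched in a few lines: combining a Gaussian prior with a Gaussian observation weights each source by its precision (inverse variance), so the more reliable cue dominates. The function name and values are illustrative only.

```python
def bayes_estimate(prior_mean, prior_var, obs, obs_var):
    """Posterior mean and variance when a Gaussian prior is combined
    with a Gaussian observation; each is weighted by its precision."""
    prior_prec = 1.0 / prior_var
    obs_prec = 1.0 / obs_var
    post_var = 1.0 / (prior_prec + obs_prec)
    post_mean = post_var * (prior_prec * prior_mean + obs_prec * obs)
    return post_mean, post_var

# Equal uncertainty: the estimate lands midway between prior and
# observation, and the posterior is more certain than either source.
mean, var = bayes_estimate(0.0, 1.0, 2.0, 1.0)
```

Making the observation noisier shifts the estimate toward the prior, the signature behavior that experiments reviewed here test for.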
Abstract:
On a daily basis, humans interact with a vast range of objects and tools. A class of tasks, which can pose a serious challenge to our motor skills, are those that involve manipulating objects with internal degrees of freedom, such as when folding laundry or using a lasso. Here, we use the framework of optimal feedback control to make predictions of how humans should interact with such objects. We confirm the predictions experimentally in a two-dimensional object manipulation task, in which subjects learned to control six different objects with complex dynamics. We show that the non-intuitive behavior observed when controlling objects with internal degrees of freedom can be accounted for by a simple cost function representing a trade-off between effort and accuracy. In addition to using a simple linear, point-mass optimal control model, we also used an optimal control model, which considers the non-linear dynamics of the human arm. We find that the more realistic optimal control model captures aspects of the data that cannot be accounted for by the linear model or other previous theories of motor control. The results suggest that our everyday interactions with objects can be understood by optimality principles and advocate the use of more realistic optimal control models for the study of human motor neuroscience.
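The effort-accuracy trade-off in the linear point-mass case can be sketched as a finite-horizon LQR problem: a quadratic state cost penalizes inaccuracy, a quadratic control cost penalizes effort, and the optimal feedback gains come from a backward Riccati recursion. The dynamics and cost weights below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

# Discrete-time point mass (position, velocity) driven by force u.
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([1.0, 0.1])   # accuracy cost on the state
R = np.array([[1e-4]])    # effort cost on the control
T = 500

# Backward Riccati recursion for the finite-horizon LQR gains.
S = Q.copy()
gains = []
for _ in range(T):
    K = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
    S = Q + A.T @ S @ (A - B @ K)
    gains.append(K)
gains.reverse()

# Simulate a reach toward the origin under the optimal feedback law.
x = np.array([[1.0], [0.0]])
for K in gains:
    u = -K @ x
    x = A @ x + B @ u

final_position = float(x[0, 0])
```

Attaching an object with internal degrees of freedom amounts to augmenting the state with the object's coordinates; the same machinery then yields the non-intuitive coordinated behavior the study reports.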
Abstract:
This review will focus on four areas of motor control which have recently been enriched both by neural network and control system models: motor planning, motor prediction, state estimation and motor learning. We will review the computational foundations of each of these concepts and present specific models which have been tested by psychophysical experiments. We will cover the topics of optimal control for motor planning, forward models for motor prediction, observer models of state estimation and modular decomposition in motor learning. The aim of this review is to demonstrate how computational approaches, as well as proposing specific models, provide a theoretical framework to formalize the issues in motor control.
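The observer model of state estimation mentioned above can be sketched as a scalar Kalman filter: a forward model predicts the next state, and noisy sensory feedback corrects the prediction, yielding estimates more reliable than either source alone. The noise levels and state dynamics are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# True state follows a slow random walk; sensory feedback is noisy.
q, r = 0.01, 1.0          # process and sensory noise variances
steps = 300
x = 0.0
x_hat, p = 0.0, 1.0       # observer's estimate and its uncertainty

est_err, obs_err = [], []
for _ in range(steps):
    x += rng.normal(0.0, np.sqrt(q))       # true state evolves
    y = x + rng.normal(0.0, np.sqrt(r))    # noisy sensory observation
    # Predict with the forward model, then correct with feedback.
    p += q
    k = p / (p + r)                        # Kalman gain
    x_hat += k * (y - x_hat)
    p *= (1.0 - k)
    est_err.append(abs(x_hat - x))
    obs_err.append(abs(y - x))

mean_est_err = float(np.mean(est_err))
mean_obs_err = float(np.mean(obs_err))
```

On average the observer's estimate tracks the true state more closely than raw sensation does, which is the computational argument for combining prediction with feedback.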