42 results for Agricultural Learning of Barbacena, MG


Relevance:

100.00%

Publisher:

Abstract:

Over the past decade, a variety of user models have been proposed for user-simulation-based reinforcement learning of dialogue strategies. However, the strategies learned with these models are rarely evaluated in actual user trials, and it remains unclear how the choice of user model affects the quality of the learned strategy. In particular, the degree to which strategies learned with a user model generalise to real user populations has not been investigated. This paper presents a series of experiments that qualitatively and quantitatively examine the effect of the user model on the learned strategy. Our results show that the performance and characteristics of the strategy are in fact highly dependent on the user model. Furthermore, a policy trained with a poor user model may appear to perform well when tested with the same model, but fail when tested with a more sophisticated user model. This raises significant doubts about the current practice of learning and evaluating strategies with the same user model. The paper further investigates a new technique for testing and comparing strategies directly on real human-machine dialogues, thereby avoiding any evaluation bias introduced by the user model. © 2005 IEEE.
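
To make the evaluation-bias problem concrete, here is a minimal sketch (not the paper's system) of training a slot-filling dialogue policy with tabular Q-learning against one hand-built user simulator, so it can then be tested against a different one. The task, user styles, states, and rewards are all illustrative assumptions.

```python
# Minimal sketch, not the paper's system: a slot-filling dialogue policy
# trained by Q-learning against one simulated user and testable against
# another. All states, actions, and rewards are illustrative assumptions.
import random
from collections import defaultdict

ACTIONS = ["ask", "confirm", "close"]

def simulated_user(style, state, action):
    """Toy user model: returns (next_state, reward, done)."""
    slots, confirmed = state
    if action == "ask" and slots < 3:
        # A 'cooperative' user always answers; a 'noisy' one often does not.
        if style == "cooperative" or random.random() < 0.5:
            slots += 1
        return (slots, confirmed), -1, False
    if action == "confirm" and slots == 3:
        return (slots, True), -1, False
    if action == "close":
        return state, (20 if confirmed else -10), True
    return state, -2, False

def run(style, Q=None, episodes=3000, eps=0.1, alpha=0.2, gamma=0.95):
    """Train (or, with eps=0 and alpha=0, evaluate) against a user model."""
    Q = defaultdict(float) if Q is None else Q
    for _ in range(episodes):
        state, done = (0, False), False
        for _ in range(30):                      # cap episode length
            a = (random.choice(ACTIONS) if random.random() < eps
                 else max(ACTIONS, key=lambda x: Q[state, x]))
            nxt, r, done = simulated_user(style, state, a)
            best = 0.0 if done else max(Q[nxt, b] for b in ACTIONS)
            Q[state, a] += alpha * (r + gamma * best - Q[state, a])
            state = nxt
            if done:
                break
    return Q

# A policy trained on the cooperative model can look strong when tested on
# that same model, yet degrade under the noisier one -- the bias at issue.
Q = run("cooperative")
```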

Relevance:

100.00%

Publisher:

Abstract:

The unscented Kalman filter (UKF) is a widely used method in control and time series applications. The UKF suffers from arbitrary parameters necessary for a step known as sigma point placement, potentially causing it to perform poorly in nonlinear problems. We show how to treat sigma point placement in a UKF as a learning problem in a model-based view. We demonstrate that learning to place the sigma points correctly from data can make sigma point collapse much less likely. Learning can result in a significant increase in predictive performance over default settings of the parameters in the UKF and over other filters designed to avoid the problems of the UKF, such as the GP-ADF. At the same time, we maintain a lower computational complexity than these other methods. We call our method UKF-L. ©2010 IEEE.
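
For context, the sketch below implements the standard unscented transform in NumPy and exposes the free parameters (alpha, beta, kappa) that the abstract calls arbitrary; UKF-L's proposal is to fit such parameters from data rather than fix them by convention. Treating one-step predictive likelihood as the learning objective is my assumption, not necessarily the paper's exact criterion.

```python
import numpy as np

def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Standard unscented-transform sigma points and weights.

    alpha, beta, kappa are the conventional free parameters; UKF-L's idea
    is to learn such parameters from data instead of fixing them by hand.
    """
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)          # matrix square root
    pts = np.vstack([mean, mean + S.T, mean - S.T])  # 2n+1 points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    return pts, wm, wc

def unscented_predict(f, mean, cov, **params):
    """Propagate a Gaussian through a nonlinearity f via the transform."""
    pts, wm, wc = sigma_points(mean, cov, **params)
    fx = np.array([f(p) for p in pts])
    m = wm @ fx
    c = (wc[:, None] * (fx - m)).T @ (fx - m)
    return m, c

# A learning step would then, e.g., maximise the one-step predictive
# log-likelihood of held-out observations with respect to the parameters
# (an assumed objective for illustration).
```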

Relevance:

100.00%

Publisher:

Abstract:

Humans have exceptional abilities to learn new skills, manipulate tools and objects, and interact with their environment. To succeed at these tasks, the brain has developed learning mechanisms that deal with and compensate for the constantly changing dynamics of the world. If these mechanisms can be understood from a computational point of view, they can also be used to drive the adaptability and learning of robots. In this paper, we present a new technique for examining changes in the feedforward motor command due to adaptation. This technique can then be used to examine motor adaptation in humans and to determine a computational algorithm that explains motor learning. © 2007.

Relevance:

100.00%

Publisher:

Abstract:

The unscented Kalman filter (UKF) is a widely used method in control and time series applications. The UKF suffers from arbitrary parameters necessary for sigma point placement, potentially causing it to perform poorly in nonlinear problems. We show how to treat sigma point placement in a UKF as a learning problem in a model-based view. We demonstrate that learning to place the sigma points correctly from data can make sigma point collapse much less likely. Learning can result in a significant increase in predictive performance over default settings of the parameters in the UKF and over other filters designed to avoid the problems of the UKF, such as the GP-ADF. At the same time, we maintain a lower computational complexity than these other methods. We call our method UKF-L. © 2011 Elsevier B.V.

Relevance:

100.00%

Publisher:

Abstract:

At an early stage of learning novel dynamics, changes in muscle activity are mainly due to corrective feedback responses. These feedback contributions to the overall motor command are gradually reduced as feedforward control is learned. This temporarily increased reliance on feedback could arise simply from the large errors of early learning, with unaltered or even slightly downregulated gains, or from an upregulation of the feedback gains when feedforward prediction is insufficient. We therefore investigated whether the sensorimotor control system alters feedback gains during adaptation to a novel force field generated by a robotic manipulandum. To probe the feedback gains throughout learning, we measured the magnitude of involuntary rapid visuomotor responses to sudden shifts in the visual location of the hand during reaching movements. We found large increases in the magnitude of the rapid visuomotor response whenever the dynamics changed: both when the force field was first presented and when it was removed. We confirmed that these changes in feedback gain are not simply a byproduct of the change in background load by demonstrating that this rapid visuomotor response is not load sensitive. Our results suggest that when the sensorimotor control system experiences errors, it increases the gain of the visuomotor feedback pathways to deal with the unexpected disturbances until the feedforward controller learns the appropriate dynamics. We suggest that these feedback gains are upregulated with increased uncertainty in the knowledge of the dynamics, to counteract errors or disturbances and ensure accurate and skillful movements.

Relevance:

100.00%

Publisher:

Abstract:

Termination of a painful or unpleasant event can be rewarding. However, whether the brain treats relief in a similar way to natural reward is unclear, and the neural processes that underlie its representation as a motivational goal remain poorly understood. We used functional magnetic resonance imaging (fMRI) to investigate how humans learn to generate expectations of pain relief. Using a Pavlovian conditioning procedure, we show that subjects experiencing prolonged experimentally induced pain can be conditioned to predict pain relief. This proceeds in a manner consistent with contemporary reward-learning theory (average reward/loss reinforcement learning), reflected by neural activity in the amygdala and midbrain. Furthermore, these reward-like learning signals are mirrored by opposite aversion-like signals in the lateral orbitofrontal cortex and anterior cingulate cortex. This dual coding has parallels to 'opponent process' theories in psychology and promotes a formal account of prediction and expectation during pain.
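
As a pointer to what "average reward/loss reinforcement learning" computes, here is a minimal average-reward temporal-difference sketch with a made-up trial structure (a cue followed by relief); the state coding and constants are illustrative, not taken from the study.

```python
import numpy as np

def td_relief(trials, n_states=5, alpha=0.1, eta=0.01):
    """Average-reward TD(0) over a chain of within-trial states.

    Each trial is a list of rewards, one per state visit (0 during pain,
    +1 at the moment of relief). delta is the prediction-error signal of
    the kind the abstract associates with amygdala/midbrain activity.
    """
    V = np.zeros(n_states)
    avg_r = 0.0
    for rewards in trials:
        for t in range(len(rewards) - 1):
            r = rewards[t + 1]                     # reward on entering t+1
            delta = r - avg_r + V[t + 1] - V[t]    # TD prediction error
            V[t] += alpha * delta
            avg_r += eta * delta                   # running average reward
    return V, avg_r

# A conditioned stimulus (state 0) reliably precedes relief (state 4):
V, avg_r = td_relief([[0, 0, 0, 0, 1]] * 200)
print(V)  # values ramp up toward the relief state
```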

Relevance:

100.00%

Publisher:

Abstract:

Numerical integration is a key component of many problems in scientific computing, statistical modelling, and machine learning. Bayesian Quadrature is a model-based method for numerical integration which, relative to standard Monte Carlo methods, offers increased sample efficiency and a more robust estimate of the uncertainty in the estimated integral. We propose a novel Bayesian Quadrature approach for numerical integration when the integrand is non-negative, as in the case of computing the marginal likelihood, predictive distribution, or normalising constant of a probabilistic model. Our approach approximately marginalises the quadrature model's hyperparameters in closed form and introduces an active learning scheme to optimally select function evaluations, as opposed to using Monte Carlo samples. We demonstrate our method on a number of synthetic benchmarks and on a real scientific problem from astronomy.
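
One well-known way to encode non-negativity in Bayesian quadrature is to model the square root of the integrand with a Gaussian process and moment-match its square; the toy 1-D sketch below uses that construction with fixed evaluation points and fixed hyperparameters, whereas the paper additionally marginalises hyperparameters and selects evaluations actively. Whether this square-root model matches the paper's exact formulation is an assumption.

```python
import numpy as np

def rbf(X1, X2, ls=0.5, var=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = X1[:, None] - X2[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

def bq_nonneg(f, xs, grid):
    """Toy 1-D Bayesian quadrature for a non-negative integrand f.

    Places a GP on g = sqrt(f) and integrates the moment-matched mean of
    g**2 on a dense grid. Hyperparameters are fixed here; the paper's
    method marginalises them and actively picks the evaluation points.
    """
    ys = np.sqrt(f(xs))
    K = rbf(xs, xs) + 1e-8 * np.eye(len(xs))
    Kinv = np.linalg.inv(K)
    ks = rbf(grid, xs)                            # cross-covariances
    mu_g = ks @ (Kinv @ ys)                       # posterior mean of sqrt(f)
    var_g = rbf(grid, grid).diagonal() - np.einsum("ij,jk,ik->i", ks, Kinv, ks)
    mu_f = mu_g**2 + np.maximum(var_g, 0.0)       # E[g^2] = mean^2 + variance
    dx = grid[1] - grid[0]
    return float(np.sum(mu_f) * dx)               # integral estimate

f = lambda x: np.exp(-x**2)                       # non-negative integrand
xs = np.linspace(-2.0, 2.0, 9)                    # fixed function evaluations
grid = np.linspace(-2.0, 2.0, 400)
print(bq_nonneg(f, xs, grid))                     # close to 1.764 on [-2, 2]
```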

Relevance:

100.00%

Publisher:

Abstract:

We consider the inverse reinforcement learning problem, that is, the problem of learning from, and then predicting or mimicking, a controller based on state/action data. We propose a statistical model for such data, derived from the structure of a Markov decision process. Adopting a Bayesian approach to inference, we show how latent variables of the model can be estimated and how predictions about actions can be made within a unified framework. A new Markov chain Monte Carlo (MCMC) sampler is devised for simulation from the posterior distribution. The sampler includes a parameter-expansion step, which is shown to be essential for good convergence properties of the MCMC sampler. As an illustration, the method is applied to learning a human controller.
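
To illustrate the shape of such a sampler, here is a minimal random-walk Metropolis-Hastings sketch over reward weights under a softmax action-choice model with linear Q-values. It skips the full MDP solution inside the likelihood and the parameter-expansion step the paper shows to be essential, so it is one reading of the setup rather than the paper's algorithm; all names are illustrative.

```python
import numpy as np

def log_lik(theta, data, feats, beta=5.0):
    """Log-likelihood of observed (state, action) pairs under a softmax policy.

    Simplifying assumption: Q(s, .) = feats[s] @ theta, i.e. Q is linear
    in the reward weights, skipping the value iteration a full MDP model
    would imply.
    """
    ll = 0.0
    for s, a in data:
        q = feats[s] @ theta                      # Q-values for all actions
        ll += beta * q[a] - np.logaddexp.reduce(beta * q)
    return ll

def mh_irl(data, feats, dim, n_iter=5000, step=0.1, seed=0):
    """Random-walk Metropolis-Hastings over reward weights (flat prior)."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(dim)
    cur = log_lik(theta, data, feats)
    samples = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(dim)
        new = log_lik(prop, data, feats)
        if np.log(rng.random()) < new - cur:      # accept/reject
            theta, cur = prop, new
        samples.append(theta.copy())
    return np.array(samples)

# Usage: feats maps each state to an (n_actions, dim) feature matrix, and
# data is a list of observed (state, action) pairs from the controller.
```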

Relevance:

100.00%

Publisher:

Abstract:

Successful motor performance requires the ability to adapt motor commands to task dynamics. A central question in movement neuroscience is how these dynamics are represented. Although it is widely assumed that dynamics (e.g., force fields) are represented in intrinsic, joint-based coordinates (Shadmehr R, Mussa-Ivaldi FA. J Neurosci 14: 3208-3224, 1994), recent evidence has questioned this proposal. Here we reexamine the representation of dynamics in two experiments. By testing generalization following changes in shoulder, elbow, or wrist configurations, the first experiment tested for extrinsic, intrinsic, or object-centered representations. No single coordinate frame accounted for the pattern of generalization. Rather, generalization patterns were better accounted for by a mixture of representations or by models that assumed local learning and graded, decaying generalization. A second experiment, in which we replicated the design of an influential study that had suggested encoding in intrinsic coordinates (Shadmehr and Mussa-Ivaldi 1994), yielded similar results. That is, we could not find evidence that dynamics are represented in a single coordinate system. Taken together, our experiments suggest that internal models do not employ a single coordinate system when generalizing and may well be represented as a mixture of coordinate systems, as a single system with local learning, or both.
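
A one-line model makes the "local learning with graded, decaying generalization" account concrete: the adaptation expressed at a test posture falls off smoothly with distance from the trained posture. The Gaussian form, joint-space metric, and width below are illustrative assumptions, not fitted values from the paper.

```python
import numpy as np

def generalization(test_cfgs, trained_cfg, width=0.5):
    """Fraction of learned compensation expressed at each test configuration.

    Assumes Gaussian decay with Euclidean distance in joint space
    (shoulder, elbow, wrist); the metric and width are illustrative.
    """
    d = np.linalg.norm(test_cfgs - trained_cfg, axis=1)
    return np.exp(-0.5 * (d / width) ** 2)

trained = np.array([0.8, 1.2, 0.0])              # joint angles (rad)
tests = np.array([[0.8, 1.2, 0.0],               # trained posture
                  [1.2, 1.2, 0.0],               # shoulder rotated
                  [0.8, 1.2, 0.7]])              # wrist rotated
print(generalization(tests, trained))            # decays with distance
```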

Relevance:

100.00%

Publisher:

Abstract:

Computer simulation experiments were performed to examine the effectiveness of OR- and comparative-reinforcement learning algorithms. In the simulations, human rewards were given as +1 and -1. Two models of human instruction, which determine which reward is given at every step, were used. The results show that human instruction may include both model-A and model-B characteristics, and that the comparative-reinforcement learning algorithm can be expected to be more effective for learning from human instruction.
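
The abstract does not define the two algorithms, so the sketch below shows one plausible reading of "comparative" reinforcement: action preferences are updated against a running reward baseline, so the learner responds to rewards being better or worse than usual rather than to the raw +1/-1 sign. The instructor model and constants are invented for illustration.

```python
import random

def comparative_learner(instruct, actions, trials=1000, alpha=0.1, eta=0.05):
    """One reading of comparative reinforcement from +1/-1 human rewards.

    Preferences are updated relative to a running baseline (an assumption;
    the abstract does not specify the algorithm), so 'better than usual'
    is reinforced even when the raw reward is still -1.
    """
    pref = {a: 0.0 for a in actions}
    baseline = 0.0
    for _ in range(trials):
        # Noisy-greedy action selection over current preferences.
        a = max(actions, key=lambda x: pref[x] + random.gauss(0.0, 0.1))
        r = instruct(a)                        # human reward: +1 or -1
        pref[a] += alpha * (r - baseline)      # comparative update
        baseline += eta * (r - baseline)       # track typical reward
    return pref

# A model-A-style instructor that rewards only the single best action:
prefs = comparative_learner(lambda a: 1 if a == "a2" else -1,
                            ["a0", "a1", "a2"])
print(max(prefs, key=prefs.get))               # expected: "a2"
```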