35 results for Reinforcement Learning


Relevance:

60.00%

Publisher:

Abstract:

Although it is widely believed that reinforcement learning is a suitable tool for describing behavioral learning, the mechanisms by which it can be implemented in networks of spiking neurons are not fully understood. Here, we show that different learning rules emerge from a policy gradient approach depending on which features of the spike trains are assumed to influence the reward signals, i.e., depending on which neural code is in effect. We use the framework of Williams (1992) to derive learning rules for arbitrary neural codes. For illustration, we present policy-gradient rules for three different example codes - a spike count code, a spike timing code and the most general "full spike train" code - and test them on simple model problems. In addition to classical synaptic learning, we derive learning rules for intrinsic parameters that control the excitability of the neuron. The spike count learning rule has structural similarities with established Bienenstock-Cooper-Munro rules. If the distribution of the relevant spike train features belongs to the natural exponential family, the learning rules have a characteristic shape that raises interesting prediction problems.
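The following is a minimal, illustrative sketch (not the authors' derivation) of the policy-gradient idea in the spike-count case: a Poisson spike count parameterised by synaptic weights is sampled, a scalar reward is observed, and the weights are moved along the Williams (1992) score-function estimate of the reward gradient. The toy task, the exponential rate model, and all parameter values below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy task: a neuron sees one of two binary input patterns and is
# rewarded for producing a spike count close to a pattern-specific target.
n_trials, lr = 5000, 0.01
patterns = np.array([[1., 0., 1., 0., 0.],
                     [0., 1., 0., 1., 1.]])        # fixed input patterns (assumed)
targets = np.array([1.0, 4.0])                     # desired spike counts per pattern
w = rng.normal(scale=0.1, size=patterns.shape[1])  # synaptic weights = policy parameters

for _ in range(n_trials):
    i = rng.integers(2)
    x = patterns[i]
    rate = np.exp(w @ x)                           # mean spike count, lambda = exp(w.x)
    k = rng.poisson(rate)                          # sampled spike count (the stochastic "action")
    reward = -abs(k - targets[i])                  # scalar reward signal
    # Williams (1992) score function for a Poisson count with lambda = exp(w.x):
    # grad_w log p(k | lambda) = (k - lambda) * x
    w += lr * reward * (k - rate) * x

print("learned weights:", w)
```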

Relevance:

60.00%

Publisher:

Abstract:

A partially observable Markov decision process has been proposed as a dialogue model that enables robustness to speech recognition errors and automatic policy optimisation using reinforcement learning (RL). However, conventional RL algorithms require a very large number of dialogues, necessitating a user simulator. Recently, Gaussian processes have been shown to substantially speed up the optimisation, making it possible to learn directly from interaction with human users. However, early studies have been limited to very low-dimensional spaces and the learning has exhibited convergence problems. Here we investigate learning from human interaction using the Bayesian Update of Dialogue State system. This dynamic Bayesian network-based system has an optimisation space covering more than one hundred features, allowing a wide range of behaviours to be learned. Using an improved policy model and a more robust reward function, we show that stable learning can be achieved that significantly outperforms a simulator-trained policy. © 2013 IEEE.
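As a rough illustration of why Gaussian processes speed up optimisation here: a GP lets the dialogue manager generalise return estimates across nearby belief states while tracking its uncertainty, so far fewer dialogues are needed. The snippet below is plain GP regression over made-up belief-state features, not the GP-based dialogue optimisation or the BUDS system itself; the kernel, data, and dimensions are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(A, B, length=0.5):
    """Squared-exponential kernel between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

# Hypothetical logged data: belief-state feature vectors paired with the observed
# return when a particular system action was taken (all values are synthetic).
X = rng.uniform(0, 1, size=(40, 3))                     # toy 3-dimensional belief features
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=40)     # noisy stand-in for dialogue reward

noise = 0.1
K = rbf(X, X) + noise * np.eye(len(X))
alpha = np.linalg.solve(K, y)

def predict_return(x_new):
    """GP posterior mean and variance of the return at an unseen belief point."""
    k_star = rbf(x_new[None, :], X)[0]
    mean = k_star @ alpha
    var = 1.0 - k_star @ np.linalg.solve(K, k_star)     # rbf(x, x) = 1 for this kernel
    return mean, var

m, v = predict_return(np.array([0.2, 0.5, 0.5]))
print(f"predicted return {m:.3f} +/- {np.sqrt(max(v, 0.0)):.3f}")
```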

Relevance:

60.00%

Publisher:

Abstract:

While underactuated robotic systems are capable of energy-efficient and rapid dynamic behavior, we still do not fully understand how body dynamics can be actively used for adaptive behavior in complex unstructured environments. In particular, we can expect that robotic systems could achieve high maneuverability by flexibly storing and releasing energy through motor control of the physical interaction between the body and the environment. This paper presents a minimalistic optimization strategy for the motor control policy of underactuated legged robotic systems. Based on a reinforcement learning algorithm, we propose an optimization scheme with which the robot can exploit passive elasticity for hopping forward while maintaining the stability of the locomotion process in an environment with a series of large changes in the ground surface. We show a case study of a simple one-legged robot consisting of a servomotor and a passive elastic joint. The dynamics and learning performance of the robot model are tested in simulation, and the results are then transferred to the real-world robot. ©2007 IEEE.
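A very small sketch of the kind of episodic, rollout-based policy search described above: a low-dimensional motor-command parameterisation (here an assumed oscillation amplitude and phase offset) is improved from noisy episode returns via random-perturbation gradient estimates. The rollout function below is a synthetic stand-in, not the hopping-robot dynamics, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def rollout(params):
    """Stand-in for one simulated hopping episode; returns a scalar return
    (e.g. forward distance minus a stability penalty). The quadratic form
    below is purely illustrative, not the robot dynamics."""
    optimum = np.array([0.6, 0.3])                 # hypothetical best (amplitude, phase offset)
    return -np.sum((params - optimum) ** 2) + 0.01 * rng.normal()

params = np.array([0.1, 0.1])                      # initial motor-command parameters
lr, sigma, n_iters, n_probes = 0.2, 0.05, 200, 10

for _ in range(n_iters):
    # Random-perturbation (two-point) gradient estimate from pairs of rollouts.
    grad = np.zeros_like(params)
    for _ in range(n_probes):
        eps = sigma * rng.normal(size=params.shape)
        grad += (rollout(params + eps) - rollout(params - eps)) / (2 * sigma**2) * eps
    params += lr * grad / n_probes

print("optimised motor parameters:", params)
```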

Relevance:

60.00%

Publisher:

Abstract:

As observed in nature, complex locomotion can be generated from an adequate combination of motor primitives. In this context, this paper focuses on experiments that result in the development of a quality criterion for the design and analysis of motor primitives. First, the impact of different vocabularies on behavioural diversity, the robustness of pre-learned behaviours, and the learning process is elaborated. The experiments are performed with the quadruped robot MiniDog6M, for which running and standing-up behaviours are implemented. Further, a reinforcement learning approach based on Q-learning is introduced, which is used to select an adequate sequence of motor primitives. © 2006 Springer-Verlag Berlin Heidelberg.
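To make the last sentence concrete, here is a minimal tabular Q-learning sketch in which discrete actions stand for motor primitives and the agent learns which primitive to trigger in each coarse robot state. The states, primitives, transition model, and rewards are invented for illustration and are not MiniDog6M's actual vocabulary.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical coarse states (0 = fallen, 1 = standing, 2 = running) and
# motor-primitive actions (0 = stand_up, 1 = push_off, 2 = swing_leg).
n_states, n_actions = 3, 3

def step(state, action):
    """Toy transition/reward model standing in for the real robot; illustrative only."""
    if state == 0:
        return (1, 1.0) if action == 0 else (0, -0.1)   # only stand_up gets the robot upright
    if state == 1:
        return (2, 2.0) if action == 1 else (1, -0.1)   # push_off starts running
    return (2, 1.0) if action == 2 else (0, -1.0)       # swing_leg keeps running, otherwise fall

Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

state = 0
for _ in range(5000):
    # epsilon-greedy selection of the next motor primitive
    action = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # standard Q-learning update
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print("greedy primitive per state:", np.argmax(Q, axis=1))   # expected: [0, 1, 2]
```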

Relevance:

40.00%

Publisher:

Abstract:

This article presents a novel algorithm for learning parameters in statistical dialogue systems which are modeled as Partially Observable Markov Decision Processes (POMDPs). The three main components of a POMDP dialogue manager are a dialogue model representing dialogue state information; a policy that selects the system's responses based on the inferred state; and a reward function that specifies the desired behavior of the system. Ideally both the model parameters and the policy would be designed to maximize the cumulative reward. However, while there are many techniques available for learning the optimal policy, no good ways of learning the optimal model parameters that scale to real-world dialogue systems have been found yet. The presented algorithm, called the Natural Actor and Belief Critic (NABC), is a policy gradient method that offers a solution to this problem. Based on observed rewards, the algorithm estimates the natural gradient of the expected cumulative reward. The resulting gradient is then used to adapt both the prior distribution of the dialogue model parameters and the policy parameters. In addition, the article presents a variant of the NABC algorithm, called the Natural Belief Critic (NBC), which assumes that the policy is fixed and only the model parameters need to be estimated. The algorithms are evaluated on a spoken dialogue system in the tourist information domain. The experiments show that model parameters estimated to maximize the expected cumulative reward result in significantly improved performance compared to the baseline hand-crafted model parameters. The algorithms are also compared to optimization techniques using plain gradients and state-of-the-art random search algorithms. In all cases, the algorithms based on the natural gradient work significantly better. © 2011 ACM.
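The core numerical idea behind natural-gradient methods such as NABC/NBC can be shown in a few lines: estimate the vanilla policy gradient and the Fisher information from sampled score functions, then precondition the gradient with the inverse Fisher matrix. The sketch below does this for a toy three-action softmax policy with made-up rewards; it is not the NABC algorithm itself, which additionally adapts the dialogue-model parameters.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in: choose among three "dialogue acts" with fixed expected rewards.
true_rewards = np.array([0.2, 0.8, 0.5])
theta = np.zeros(3)                               # policy parameters (softmax logits)
lr, batch, n_iters = 0.1, 200, 100

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for _ in range(n_iters):
    probs = softmax(theta)
    actions = rng.choice(3, size=batch, p=probs)
    rewards = true_rewards[actions] + 0.1 * rng.normal(size=batch)
    # Score function for a softmax policy: grad log pi(a) = onehot(a) - probs
    scores = np.eye(3)[actions] - probs
    grad = (rewards[:, None] * scores).mean(axis=0)              # vanilla policy gradient
    fisher = (scores[:, :, None] * scores[:, None, :]).mean(axis=0) + 1e-3 * np.eye(3)
    natural_grad = np.linalg.solve(fisher, grad)                 # F^{-1} g
    theta += lr * natural_grad

print("final action probabilities:", softmax(theta))             # should favour action 1
```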