4 results for Discrete Mathematics Learning

in the Cambridge University Engineering Department Publications Database


Relevance:

40.00%

Abstract:

Rhythmic and discrete arm movements occur ubiquitously in everyday life, and there is a debate as to whether these two classes of movements arise from the same or different underlying neural mechanisms. Here we examine interference in a motor-learning paradigm to test whether rhythmic and discrete movements employ at least partially separate neural representations. Subjects were required to make circular movements of their right hand while they were exposed to a velocity-dependent force field that perturbed the circularity of the movement path. The direction of the force-field perturbation reversed at the end of each block of 20 revolutions. When subjects made only rhythmic or only discrete circular movements, interference was observed when switching between the two opposing force fields. However, when subjects alternated between blocks of rhythmic and discrete movements, such that each was uniquely associated with one of the perturbation directions, interference was significantly reduced. Only in this case did subjects learn to corepresent the two opposing perturbations, suggesting that different neural resources were employed for the two movement types. Our results provide further evidence that rhythmic and discrete movements employ at least partially separate control mechanisms in the motor system.
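
For readers unfamiliar with this paradigm, a velocity-dependent (curl) force field applies a force that is a rotated, scaled copy of the hand velocity, so it bends the movement path without doing work along it. The minimal Python sketch below illustrates the idea; the gain value, the sign convention for the two perturbation directions, and the sampled circular trajectory are illustrative assumptions, not parameters from the study.

import numpy as np

def curl_force(velocity, b=15.0, direction=+1):
    # Velocity-dependent (curl) force field: F = B v, with
    # B = direction * b * [[0, 1], [-1, 0]].  Because B is antisymmetric,
    # the force is always perpendicular to the hand velocity, so it
    # perturbs the circularity of the path without doing net work.
    B = direction * b * np.array([[0.0, 1.0],
                                  [-1.0, 0.0]])
    return B @ velocity

# Hand moving counter-clockwise around a unit circle at 1 revolution/s:
t = np.linspace(0.0, 1.0, 200)
vel = 2 * np.pi * np.stack([-np.sin(2 * np.pi * t),
                            np.cos(2 * np.pi * t)], axis=1)
forces = np.array([curl_force(v, direction=+1) for v in vel])
print(forces[:3])  # perturbing forces early in the revolution

Flipping direction from +1 to -1 mirrors the field, analogous to the reversal applied at the end of each block of 20 revolutions.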

Relevance:

30.00%

Abstract:

Deep belief networks are a powerful way to model complex probability distributions. However, learning the structure of a belief network, particularly one with hidden units, is difficult. The Indian buffet process has been used as a nonparametric Bayesian prior on the directed structure of a belief network with a single infinitely wide hidden layer. In this paper, we introduce the cascading Indian buffet process (CIBP), which provides a nonparametric prior on the structure of a layered, directed belief network that is unbounded in both depth and width, yet allows tractable inference. We use the CIBP prior with the nonlinear Gaussian belief network so each unit can additionally vary its behavior between discrete and continuous representations. We provide Markov chain Monte Carlo algorithms for inference in these belief networks and explore the structures learned on several image data sets.
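
For intuition about the prior being cascaded, the standard single-layer Indian buffet process can be sampled with the usual sequential construction: each new "customer" (unit) takes an existing "dish" (parent connection) with probability proportional to its popularity, then tries a Poisson number of new dishes. The sketch below is a minimal Python illustration of that base process, with illustrative names and parameters; the CIBP itself applies this construction recursively, so that one layer's dishes become the customers of the layer above.

import numpy as np

def sample_ibp(num_rows, alpha, rng=None):
    # Sample a binary matrix Z from the Indian buffet process IBP(alpha).
    # Customer i (row i, 0-indexed) takes each existing dish (column) k
    # with probability m_k / (i + 1), where m_k counts the earlier rows
    # that chose it, then samples Poisson(alpha / (i + 1)) new dishes.
    rng = rng or np.random.default_rng()
    columns = []  # each entry: list of row indices that use this column
    for i in range(num_rows):
        for col in columns:
            if rng.random() < len(col) / (i + 1):
                col.append(i)
        for _ in range(rng.poisson(alpha / (i + 1))):
            columns.append([i])
    Z = np.zeros((num_rows, len(columns)), dtype=int)
    for k, col in enumerate(columns):
        Z[col, k] = 1
    return Z

print(sample_ibp(10, alpha=2.0, rng=np.random.default_rng(0)))

The number of columns (hidden units) is not fixed in advance; it grows with the data, which is what makes the prior nonparametric.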

Relevance:

30.00%

Abstract:

A fun and exciting textbook on the mathematics underpinning the most dynamic areas of modern science and engineering.

Relevance:

30.00%

Abstract:

Animals repeat rewarded behaviors, but the physiological basis of reward-based learning has only been partially elucidated. On one hand, experimental evidence shows that the neuromodulator dopamine carries information about rewards and affects synaptic plasticity. On the other hand, the theory of reinforcement learning provides a framework for reward-based learning. Recent models of reward-modulated spike-timing-dependent plasticity have made first steps towards bridging the gap between the two approaches, but faced two problems. First, reinforcement learning is typically formulated in a discrete framework, ill-adapted to the description of natural situations. Second, biologically plausible models of reward-modulated spike-timing-dependent plasticity require precise calculation of the reward prediction error, yet it remains to be shown how this can be computed by neurons. Here we propose a solution to these problems by extending the continuous temporal difference (TD) learning of Doya (2000) to the case of spiking neurons in an actor-critic network operating in continuous time, and with continuous state and action representations. In our model, the critic learns to predict expected future rewards in real time. Its activity, together with actual rewards, conditions the delivery of a neuromodulatory TD signal to itself and to the actor, which is responsible for action choice. In simulations, we show that such an architecture can solve a Morris water-maze-like navigation task, in a number of trials consistent with reported animal performance. We also use our model to solve the acrobot and the cartpole problems, two complex motor control tasks. Our model provides a plausible way of computing reward prediction error in the brain. Moreover, the analytically derived learning rule is consistent with experimental evidence for dopamine-modulated spike-timing-dependent plasticity.
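
For concreteness, the continuous-time TD error of Doya (2000) can be written as delta(t) = r(t) + dV/dt - V(t)/tau, where V is the predicted value and tau is the reward-discount time constant. The minimal, non-spiking Python sketch below evaluates this error on a discretized value trace; the forward-difference approximation and the constant-reward sanity check are illustrative assumptions, not the paper's spiking actor-critic implementation.

import numpy as np

def continuous_td_error(r, V, dt, tau=1.0):
    # Continuous-time TD error in the sense of Doya (2000):
    #     delta(t) = r(t) + dV/dt - V(t) / tau,
    # with dV/dt approximated by a forward difference on the sampled
    # value trace.  tau is the reward-discount time constant.
    dV = np.diff(V) / dt              # forward difference, length n - 1
    return r[:-1] + dV - V[:-1] / tau

# Sanity check: under a constant reward rate of 1, the correct value
# prediction is V = tau (the integral of exp(-s/tau)), and a correct
# prediction should produce zero TD error everywhere.
dt, tau = 0.01, 0.5
t = np.arange(0.0, 2.0, dt)
r = np.ones_like(t)
V = np.full_like(t, tau)
delta = continuous_td_error(r, V, dt, tau)
print(np.max(np.abs(delta)))  # ~0: correct predictions yield no TD signal

In the paper's architecture this error is not computed offline as above but broadcast as a neuromodulatory signal to both the critic and the actor, gating their synaptic plasticity in real time.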