6 results for IT intention to learn

in the Cambridge University Engineering Department Publications Database


Relevance:

100.00%

Publisher:

Abstract:

Older people often find it difficult to learn to use new technology. Although they may want to adopt it, they can find the learning process challenging and frustrating and subsequently lose motivation. This paper looks at how psychological theories of intrinsic motivation could be applied to make the ICT learning process more engaging for older users and describes an experiment set up to test the applicability of these theories to user interface (UI) design. The results of the experiment confirmed that intrinsic motivation theory is a valid lens through which to look at current ICT design and also uncovered significant gender differences in reaction to different kinds of learning tasks. © 2013 Springer-Verlag Berlin Heidelberg.

Relevance:

100.00%

Publisher:

Abstract:

A partially observable Markov decision process has been proposed as a dialogue model that enables robustness to speech recognition errors and automatic policy optimisation using reinforcement learning (RL). However, conventional RL algorithms require a very large number of dialogues, necessitating a user simulator. Recently, Gaussian processes have been shown to substantially speed up the optimisation, making it possible to learn directly from interaction with human users. However, early studies have been limited to very low dimensional spaces and the learning has exhibited convergence problems. Here we investigate learning from human interaction using the Bayesian Update of Dialogue State system. This dynamic Bayesian network based system has an optimisation space covering more than one hundred features, allowing a wide range of behaviours to be learned. Using an improved policy model and a more robust reward function, we show that stable learning can be achieved that significantly outperforms a simulator trained policy. © 2013 IEEE.
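The abstract above describes replacing conventional RL with Gaussian-process-based value estimation so that a dialogue policy can be learned from far fewer dialogues. As a rough illustration of the underlying idea (not the BUDS system or the paper's actual algorithm), the sketch below fits an exact GP posterior over hypothetical (belief-state, action) feature vectors and uses the posterior mean as a Q-value estimate; all feature vectors, kernel parameters, and returns here are invented for the example.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential kernel between rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

class GPQ:
    """GP regression over (belief-state, action) feature vectors,
    used as a smoothed Q-function estimate."""

    def __init__(self, noise=0.1):
        self.noise = noise
        self.X = None
        self.alpha = None

    def update(self, X, returns):
        """Fit the exact GP posterior to observed returns (one batch)."""
        self.X = np.asarray(X, float)
        y = np.asarray(returns, float)
        K = rbf_kernel(self.X, self.X)
        self.alpha = np.linalg.solve(K + self.noise * np.eye(len(K)), y)

    def q(self, x):
        """Posterior mean Q-value for a single feature vector x."""
        k = rbf_kernel(np.asarray([x], float), self.X)[0]
        return float(k @ self.alpha)

# Toy usage: two invented state-action features with returns 1.0 and 0.0;
# the GP generalises these returns smoothly over the feature space.
gp = GPQ()
gp.update([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.0])
best = max([[1.0, 0.0], [0.0, 1.0]], key=gp.q)
```

Because the GP shares information across nearby belief states through the kernel, far fewer sampled dialogues are needed than with a tabular or simulator-driven estimator, which is the speed-up the abstract refers to.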

Relevance:

100.00%

Publisher:

Abstract:

Humans are able to learn tool-handling tasks, such as carving, demonstrating their ability to make movements in varying directions in unstable environments. It has been shown that when a single reaching movement is repeated in unstable dynamics, the central nervous system (CNS) learns an impedance internal model to compensate for the environment instability. However, there is still no explanation for how humans can learn to move in various directions in such environments. In this study, we investigated whether and how humans compensate for instability while learning two different reaching movements simultaneously. Results show that when performing movements in two different directions, separated by a 35° angle, the CNS was able to compensate for the unstable dynamics. After adaptation, the force was found to be similar to the free movement condition, but stiffness increased in the direction of instability, specifically for each direction of movement. Our findings suggest that the CNS either learned an internal model generalizing over different movements, or alternatively that it was able to switch between specific models acquired simultaneously. © 2008 IEEE.

Relevance:

100.00%

Publisher:

Abstract:

Humans are able to learn tool-handling tasks, such as carving, demonstrating their ability to make movements in varying directions in unstable environments. When faced with a single direction of instability, humans learn to selectively co-contract their arm muscles, tuning the mechanical stiffness of the limb end point to stabilize movements. This study examines, for the first time, subjects simultaneously adapting to two distinct directions of instability, a situation that may typically occur when using tools. Subjects learned to perform reaching movements in two directions, each of which had lateral instability requiring control of impedance. The subjects were able to adapt to these unstable interactions and switch between movements in the two directions; they did so by learning to selectively control the end-point stiffness counteracting the environmental instability without superfluous stiffness in other directions. This finding demonstrates that the central nervous system can simultaneously tune the mechanical impedance of the limbs to multiple movements by learning movement-specific solutions. Furthermore, it suggests that the impedance controller learns as a function of the state of the arm rather than a general strategy. © 2011 the American Physiological Society.

Relevance:

100.00%

Publisher:

Abstract:

Humans skillfully manipulate objects and tools despite their inherent instability. In order to succeed at these tasks, the sensorimotor control system must build an internal representation of both the force and mechanical impedance. As it is not practical to either learn or store motor commands for every possible future action, the sensorimotor control system generalizes a control strategy for a range of movements based on learning performed over a set of movements. Here, we introduce a computational model for this learning and generalization, which specifies how to learn feedforward muscle activity as a function of the state space. Specifically, by incorporating co-activation as a function of error into the feedback command, we are able to derive an algorithm from a gradient descent minimization of motion error and effort, subject to maintaining a stability margin. This algorithm can be used to learn to coordinate any of a variety of motor primitives such as force fields, muscle synergies, physical models or artificial neural networks. This model for human learning and generalization is able to adapt to both stable and unstable dynamics, and provides a controller for generating efficient adaptive motor behavior in robots. Simulation results exhibit predictions consistent with all experiments on learning of novel dynamics requiring adaptation of force and impedance, and enable us to re-examine some of the previous interpretations of experiments on generalization. © 2012 Kadiallah et al.
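The abstract above describes a learning rule that jointly minimizes motion error and effort while using error-driven co-activation to preserve a stability margin. The sketch below is a generic toy illustration of that kind of rule, not the paper's actual algorithm: a reciprocal command reduces error, co-activation grows with error magnitude (raising stiffness), and an effort term decays superfluous activation. The gains alpha, beta, gamma and the scalar "plant" are invented for the example.

```python
# Toy sketch of a feedforward learning rule combining error reduction,
# error-driven co-activation, and effort minimisation (illustrative gains).

def update_feedforward(u, e, alpha=0.5, beta=0.3, gamma=0.05):
    """One update of (reciprocal command u_r, co-activation u_c)
    given a scalar movement error e."""
    u_r, u_c = u
    u_r += -alpha * e            # error term: reciprocal activation opposes error
    u_c += beta * abs(e)         # co-activation grows with |error| (stiffness up)
    u_r -= gamma * u_r           # effort terms: decay superfluous activation
    u_c -= gamma * u_c
    return (u_r, u_c)

# Toy trial loop: a constant disturbance d perturbs each movement; the
# hypothetical plant maps command to error as e = d + u_r, so the error
# shrinks trial by trial as the feedforward command adapts.
d = 1.0
u = (0.0, 0.0)
errors = []
for _ in range(50):
    e = d + u[0]
    errors.append(abs(e))
    u = update_feedforward(u, e)
```

Under these gains the error contracts geometrically toward a small residual rather than zero, because the effort term continually bleeds off activation; that trade-off between accuracy and effort is the minimization the abstract refers to.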