9 results for interaction, e-learning
in Cambridge University Engineering Department Publications Database
Abstract:
The purpose of this research was to investigate the extent to which prior technological experience of products is related to age, and whether this has implications for the success of subsequent product interaction. The contribution of this work is to provide the design community with new knowledge and a greater awareness of the diversity of user needs, particularly the needs and skills of older people. The focus of this paper is to present how individuals' mental models of products and interaction were developed through experiential learning: what new knowledge was acquired, and how this contributed to the development of mental models and product understanding. © 2013 Springer-Verlag Berlin Heidelberg.
Abstract:
In adapting to changing forces in the mechanical environment, humans change the force applied by the limb through reciprocal changes in the activation of antagonistic muscles. However, when interaction with the environment is mechanically unstable, they also cocontract these muscles to increase the mechanical impedance of the limb. We have postulated that appropriate patterns of muscle activation could be learned using a simple scheme in which the naturally occurring stretch reflex is used as a template to adjust feedforward commands to muscles. Feedforward commands are modified iteratively by shifting a scaled version of the reflex response forward in time and adding it to the previous feedforward command. We show that such an algorithm can account for the principal features of the changes in muscle activation observed when human subjects adapt to instabilities in the mechanical environment. © 2006.
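As a rough illustration of the update rule described in this abstract (not code from the paper), the following Python sketch shifts a scaled copy of the previous trial's reflex response earlier in time and adds it to the feedforward command; the gain, shift, and toy reflex model are all illustrative assumptions:

import numpy as np

def update_feedforward(u_ff, reflex, alpha=0.5, shift=10):
    # One iteration of the reflex-template rule: scale the reflex
    # response recorded on the previous movement, advance it `shift`
    # samples in time, and add it to the old feedforward command.
    shifted = np.zeros_like(reflex)
    shifted[:-shift] = reflex[shift:]   # move the response earlier in time
    return u_ff + alpha * shifted

# Toy simulation in which the reflex is assumed proportional to the
# activation error (a hypothetical stand-in for the stretch reflex).
n_samples, n_trials = 200, 30
target = np.sin(np.linspace(0.0, np.pi, n_samples))  # desired activation
u_ff = np.zeros(n_samples)
for trial in range(n_trials):
    reflex = target - u_ff
    u_ff = update_feedforward(u_ff, reflex)

Under these toy assumptions the feedforward command converges toward the activation pattern that cancels the reflex, mirroring the trial-by-trial adaptation the abstract describes.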
Abstract:
Statistical dialogue models have required a large number of dialogues to optimise the dialogue policy, relying on the use of a simulated user. This results in a mismatch between training and live conditions, and in significant development costs for the simulator, thereby undermining many of the claimed benefits of such models. Recent work on Gaussian process reinforcement learning has shown that learning can be substantially accelerated. This paper reports on an experiment to learn a policy for a real-world task directly from human interaction, using rewards provided by users. It shows that a usable policy can be learnt in just a few hundred dialogues without needing a user simulator, using a learning strategy that reduces the risk of taking bad actions. The paper also investigates adaptation behaviour when the system continues learning for several thousand dialogues and highlights the need for robustness to noisy rewards. © 2011 IEEE.
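To make the approach concrete, here is a minimal Python sketch (not the authors' implementation) of learning a dialogue policy from user-supplied rewards; scikit-learn's GaussianProcessRegressor stands in for the Gaussian process RL machinery, and the class, feature layout, and parameters are hypothetical:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

class GPDialoguePolicy:
    # A GP regressor maps (belief, action) features to expected reward,
    # so useful value estimates are available after few dialogues.
    def __init__(self, n_actions, epsilon=0.1):
        self.n_actions = n_actions
        self.epsilon = epsilon  # exploration rate
        self.gp = GaussianProcessRegressor(kernel=RBF(), alpha=1.0)
        self.X, self.y = [], []

    def _features(self, belief, action):
        # Concatenate the belief vector with a one-hot action encoding.
        return np.concatenate([belief, np.eye(self.n_actions)[action]])

    def select_action(self, belief):
        # Epsilon-greedy over the GP posterior mean of the value.
        if not self.X or np.random.rand() < self.epsilon:
            return np.random.randint(self.n_actions)
        q = self.gp.predict(np.array([self._features(belief, a)
                                      for a in range(self.n_actions)]))
        return int(np.argmax(q))

    def update(self, belief, action, user_reward):
        # After each dialogue, store the user's reward and refit.
        self.X.append(self._features(belief, action))
        self.y.append(user_reward)
        self.gp.fit(np.array(self.X), np.array(self.y))

Treating the end-of-dialogue reward as the value of each visited (belief, action) pair is a simplification; the point is only that the GP's data efficiency is what makes learning directly from a few hundred human dialogues feasible.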
Abstract:
The contribution described in this paper is an algorithm for learning nonlinear reference-tracking control policies given no prior knowledge of the dynamical system and only limited interaction with the system throughout the learning process. Concepts from the fields of reinforcement learning, Bayesian statistics, and classical control have been brought together in the formulation of this algorithm, which can be viewed as a form of indirect self-tuning regulator. On the task of reference tracking using a simulated inverted pendulum, it was shown to yield generally improved performance over the best controller derived from the standard linear quadratic method, using only 30 s of total interaction with the system. Finally, the algorithm was shown to work on a simulated double pendulum, proving its ability to solve nontrivial control tasks. © 2011 IEEE.
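The indirect self-tuning structure can be sketched in Python as follows (an illustrative example, not the paper's algorithm): identify a dynamics model from logged interaction data, then derive a controller from the identified model. Here a least-squares linear fit and an LQR design stand in for the paper's Bayesian nonlinear model and policy optimisation:

import numpy as np
from scipy.linalg import solve_discrete_are

def identify_linear_model(states, controls):
    # Least-squares fit of x[t+1] = A x[t] + B u[t] from logged data,
    # a linear stand-in for the paper's Bayesian dynamics model.
    X, U = np.asarray(states), np.asarray(controls)
    Z = np.hstack([X[:-1], U[:-1]])
    theta, *_ = np.linalg.lstsq(Z, X[1:], rcond=None)
    n = X.shape[1]
    return theta[:n].T, theta[n:].T  # A, B

def lqr_gain(A, B, Q, R):
    # Discrete-time LQR gain via the algebraic Riccati equation.
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Collect data from a toy double-integrator system with random inputs.
A_true = np.array([[1.0, 0.1], [0.0, 1.0]])
B_true = np.array([[0.0], [0.1]])
rng = np.random.default_rng(0)
x, states, controls = np.zeros(2), [], []
for _ in range(100):
    u = rng.normal(size=1)  # exploratory control input
    states.append(x.copy())
    controls.append(u)
    x = A_true @ x + B_true @ u

# Re-identify the model and retune the regulator as data accumulate.
A_hat, B_hat = identify_linear_model(states, controls)
K = lqr_gain(A_hat, B_hat, Q=np.eye(2), R=np.eye(1))
# Reference-tracking control law: u = -K @ (x - x_ref)

In the paper's setting the model is nonlinear and probabilistic, so the derived controller can account for model uncertainty; the loop structure, identify then retune, is the same.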
Abstract:
A partially observable Markov decision process has been proposed as a dialogue model that enables robustness to speech recognition errors and automatic policy optimisation using reinforcement learning (RL). However, conventional RL algorithms require a very large number of dialogues, necessitating a user simulator. Recently, Gaussian processes have been shown to substantially speed up the optimisation, making it possible to learn directly from interaction with human users. However, early studies have been limited to very low-dimensional spaces, and the learning has exhibited convergence problems. Here we investigate learning from human interaction using the Bayesian Update of Dialogue State system. This dynamic Bayesian network-based system has an optimisation space covering more than one hundred features, allowing a wide range of behaviours to be learned. Using an improved policy model and a more robust reward function, we show that stable learning can be achieved that significantly outperforms a simulator-trained policy. © 2013 IEEE.
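For intuition about the dynamic Bayesian network belief state, here is a minimal Python sketch (an assumption-laden illustration, not the system's implementation) of a factored per-slot belief update, where each slot's marginal is updated independently from noisy recognition scores:

import numpy as np

def update_slot_belief(prior, obs_probs, confusion=0.1):
    # Bayesian update of one slot's marginal: mix the recogniser's
    # scores with a uniform confusion term to model recognition
    # errors, then renormalise.  The confusion rate is illustrative.
    likelihood = (1.0 - confusion) * obs_probs + confusion / len(prior)
    posterior = likelihood * prior
    return posterior / posterior.sum()

# Hypothetical 'food' slot in a restaurant domain: three values with
# a uniform prior and noisy ASR confidence scores for the last turn.
prior = np.full(3, 1.0 / 3.0)
obs = np.array([0.7, 0.2, 0.1])
belief = update_slot_belief(prior, obs)  # -> approx. [0.66, 0.21, 0.12]

Factorising the belief per slot in this way is what keeps the state tractable even when the policy operates over a large feature space.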