2 results for online interaction learning model
in Repositório Institucional da Universidade de Aveiro - Portugal
Abstract:
This work presents a theoretical framework for the evaluation of e-teaching that aims to position the online activities designed and developed by the teacher with respect to the Learning, Interaction and Technology dimensions. The theoretical research underlying the study reflects current thinking on promoting teaching quality and on integrating information and communication tools into the Higher Education (HE) curriculum, bearing in mind European guidelines and policies on the subject. In this way, the study sought to address one of its stated aims, namely to contribute to the development of a conceptual framework to support research on the evaluation of e-teaching in HE. Based on the theoretical research carried out, an evaluation tool (SCAI) was designed, integrating two questionnaires developed to collect teachers' and students' perceptions of the development of e-activities. An empirical study was then structured and carried out, allowing the SCAI tool to be tested and validated in real cases. From the comparison of the established theoretical framework with the analysis of the data obtained, we found that differences in teaching should be valued and seen as assets by HE institutions, rather than erased in a globalizing perspective.
Abstract:
This thesis addresses Batch Reinforcement Learning methods in Robotics. This sub-class of Reinforcement Learning has shown promising results and has been the focus of recent research. Three contributions are proposed that aim to extend state-of-the-art methods, allowing for the faster and more stable learning process required for learning in Robotics. The Q-learning update rule is widely applied, since it allows learning without a model of the environment. However, this update rule is transition-based and does not take advantage of the underlying episodic structure of the collected batch of interactions. The Q-Batch update rule proposed in this thesis processes experiences along the trajectories collected in the interaction phase. This allows faster propagation of the obtained rewards and penalties, resulting in faster and more robust learning. Non-parametric function approximators, such as Gaussian Processes, are also explored. This type of approximator can encode prior knowledge about the latent function in the form of kernels, providing greater flexibility and accuracy. The application of Gaussian Processes in Batch Reinforcement Learning achieved higher performance in learning tasks than other function approximators used in the literature. Lastly, in order to extract more information from the experiences collected by the agent, model-learning techniques are incorporated to learn the system dynamics. In this way, the set of collected experiences can be augmented with experiences generated through planning with the learned models. Experiments were carried out mainly in simulation, with some tests on a physical robotic platform. The results show that the proposed approaches outperform classical Fitted Q Iteration.
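The contrast the abstract draws between transition-based updates and trajectory-based processing can be illustrated with a small sketch. The exact Q-Batch rule is defined in the thesis itself; the code below only shows the general idea of sweeping an episode backwards so that a terminal reward propagates along the whole trajectory in a single pass, with all names, values, and parameters being illustrative assumptions.

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Standard transition-based Q-learning update on a tabular Q."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])

def backward_episode_sweep(Q, episode, alpha=0.1, gamma=0.99):
    """Process one episode's transitions in reverse order, so rewards
    obtained late in the trajectory reach earlier states in one pass.
    (Illustrative stand-in for the trajectory-based idea, not the
    thesis's exact Q-Batch rule.)"""
    for s, a, r, s_next in reversed(episode):
        q_learning_update(Q, s, a, r, s_next, alpha, gamma)

# Toy 4-state chain with a single action and reward only at the end:
# a backward sweep already gives every state a non-zero value estimate,
# whereas a forward pass would leave the first states untouched.
Q = np.zeros((4, 1))
episode = [(0, 0, 0.0, 1), (1, 0, 0.0, 2), (2, 0, 1.0, 3)]
backward_episode_sweep(Q, episode)
```

Processing the same three transitions in forward order would update `Q[0, 0]` before `Q[1, 0]` holds any value, so the terminal reward would need several passes over the batch to reach the start of the chain.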
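The point about encoding prior knowledge through kernels can likewise be sketched with an off-the-shelf Gaussian Process regressor. Here scikit-learn's `GaussianProcessRegressor` stands in for the value-function approximator; the RBF kernel encodes the assumption that nearby inputs have similar values. The target function and kernel settings are invented for illustration and are not taken from the thesis.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Stand-in training data: sampled "states" and their target values.
X = np.linspace(0.0, 1.0, 8).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel()

# RBF kernel: prior belief that the latent function is smooth, with
# correlations decaying over a characteristic length scale.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2))
gp.fit(X, y)

# A GP returns both a mean prediction and an uncertainty estimate,
# which parametric approximators typically do not provide.
mean, std = gp.predict(np.array([[0.5]]), return_std=True)
```

The per-query uncertainty (`std`) is one practical advantage of non-parametric approximators in a batch setting: it indicates where the collected experiences cover the state space poorly.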