3 results for Active Learning

in Archivo Digital para la Docencia y la Investigación - Repositorio Institucional de la Universidad del País Vasco


Relevance:

60.00%

Publisher:

Abstract:

[EN] The higher education regulation process in Europe, known as the Bologna Process, has involved many changes, mainly in relation to methodology and assessment. This paper concerns the implementation of the new EU study plans at the Teacher Training College of Vitoria-Gasteiz; it is the first interdisciplinary paper written by the teaching staff involved in the Teaching Profession module, the first module in the structure of the new plans. Coordination of teaching staff is one of the main lines of work in the Bologna Process, and it is also essential for developing the right skills and maximising the students' role as active participants in their own learning. The use of active, interdisciplinary methodologies has opened up a new dimension in universities, requiring the elimination of the former compartmentalised, individual structure and prompting us to look for new areas of exchange that make it possible to develop students' training jointly.

Relevance:

30.00%

Publisher:

Abstract:

Multi-Agent Reinforcement Learning (MARL) algorithms face two main difficulties: the curse of dimensionality, and environment non-stationarity due to the independent learning processes carried out concurrently by the agents. In this paper we formalize and prove the convergence of a Distributed Round Robin Q-learning (D-RR-QL) algorithm for cooperative systems. The computational complexity of this algorithm increases linearly with the number of agents. Moreover, it eliminates environment non-stationarity by carrying out round-robin scheduling of action selection and execution. This learning scheme allows the implementation of Modular State-Action Vetoes (MSAV) in cooperative multi-agent systems, which speeds up learning convergence in over-constrained systems by vetoing state-action pairs that lead to undesired termination states (UTS) in the relevant state-action subspace. Each agent's local state-action value function learning is an independent process, including the MSAV policies. Coordination of locally optimal policies to obtain the globally optimal joint policy is achieved by a greedy selection procedure using message passing. We show that D-RR-QL improves over state-of-the-art approaches, such as Distributed Q-Learning, Team Q-Learning and Coordinated Reinforcement Learning, in a paradigmatic Linked Multi-Component Robotic System (L-MCRS) control problem: the hose transportation task. L-MCRS are over-constrained systems with many UTS induced by the interaction of the passive linking element and the active mobile robots.
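The core ideas in this abstract can be illustrated with a minimal, hypothetical sketch: agents take turns acting (round-robin), so each agent learns against a stationary environment during its turn, and state-action pairs that led to an undesired termination state are vetoed. The class name, the toy line-world task, and all parameters below are invented for illustration; they are not taken from the paper.

```python
import random
from collections import defaultdict

class RRQLearner:
    """One agent's local Q-learner in a round-robin scheme (illustrative sketch).

    The veto set is a simplified stand-in for the paper's Modular
    State-Action Vetoes (MSAV): pairs that led to an undesired
    termination state (UTS) are never selected again.
    """

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.actions = tuple(actions)
        self.q = defaultdict(float)       # (state, action) -> estimated value
        self.vetoed = set()               # vetoed (state, action) pairs
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def allowed(self, state):
        acts = [a for a in self.actions if (state, a) not in self.vetoed]
        return acts or list(self.actions)  # fall back if everything is vetoed

    def select(self, state):
        acts = self.allowed(state)
        if random.random() < self.epsilon:
            return random.choice(acts)     # epsilon-greedy exploration
        return max(acts, key=lambda a: self.q[(state, a)])

    def update(self, s, a, r, s2, undesired=False):
        if undesired:
            self.vetoed.add((s, a))        # veto pairs leading to a UTS
        best = max(self.q[(s2, b)] for b in self.actions)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best - self.q[(s, a)])

# Toy cooperative task (invented for this sketch): two agents on a line
# 0..3, each must reach cell 3; stepping below 0 is treated as a UTS.
random.seed(0)
agents = [RRQLearner(actions=(-1, +1)) for _ in range(2)]
for _ in range(300):
    pos, done = [0, 0], [False, False]
    for step in range(40):
        i = step % 2                       # round-robin turn taking
        if done[i]:
            continue
        ag, s = agents[i], pos[i]
        a = ag.select(s)
        s2 = s + a
        if s2 < 0:                         # UTS: veto the pair, reset agent
            ag.update(s, a, -1.0, 0, undesired=True)
            pos[i] = 0
        elif s2 == 3:                      # goal reached
            ag.update(s, a, 1.0, s2)
            done[i] = True
        else:
            ag.update(s, a, 0.0, s2)
            pos[i] = s2
```

Because only one agent acts per time slot, each local Q-update sees a fixed environment, which is the intuition behind the claimed elimination of non-stationarity; after training, the greedy local policies move both agents rightward, and the leftward move from cell 0 ends up vetoed.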