689 results for Game-based learning model
Abstract:
This paper provides a critical overview of a distinctive typology of learning and teaching research developed at a relatively small, research-led UK university. Based upon research into staff perceptions of the relationship between learning and teaching research and practice, the model represents a holistic approach to evidence-based learning and teaching practice in contemporary higher education.
Abstract:
The World Wide Web provides plentiful content for Web-based learning, but its hyperlink-based architecture connects Web resources for free browsing rather than for effective learning. To support effective learning, an e-learning system should be able to discover and make use of the semantic communities and the emerging semantic relations in a dynamic complex network of learning resources. Previous graph-based community discovery approaches are limited in their ability to discover semantic communities. This paper first proposes the Semantic Link Network (SLN), a loosely coupled semantic data model that can semantically link resources and derive implicit semantic links according to a set of relational reasoning rules. By studying the intrinsic relationship between semantic communities and the semantic space of the SLN, approaches to discovering reasoning-constrained, rule-constrained, and classification-constrained semantic communities are proposed. Further, the approaches, principles, and strategies for discovering emerging semantics in dynamic SLNs are studied. The basic laws of semantic link network motion are revealed for the first time. An e-learning environment incorporating the proposed approaches, principles, and strategies to support effective discovery and learning is suggested.
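As a concrete illustration of the idea of deriving implicit links from explicit ones, the following minimal Python sketch repeatedly applies relational composition rules to a set of semantic links; the relation names and the rule table are illustrative assumptions, not the rule set defined in the paper.

from collections import defaultdict

class SemanticLinkNetwork:
    def __init__(self, rules):
        # rules maps a pair of relation types to the relation they imply,
        # e.g. ("partOf", "partOf") -> "partOf"
        self.rules = rules
        self.links = set()           # explicit and derived links (src, rel, dst)
        self.out = defaultdict(set)  # src -> {(rel, dst)}

    def add_link(self, src, rel, dst):
        self.links.add((src, rel, dst))
        self.out[src].add((rel, dst))

    def derive_implicit_links(self):
        """Apply the composition rules until no new link appears."""
        changed = True
        while changed:
            changed = False
            for (src, r1, mid) in list(self.links):
                for (r2, dst) in list(self.out[mid]):
                    implied = self.rules.get((r1, r2))
                    if implied and (src, implied, dst) not in self.links:
                        self.add_link(src, implied, dst)
                        changed = True

rules = {("partOf", "partOf"): "partOf"}
sln = SemanticLinkNetwork(rules)
sln.add_link("Lesson1", "partOf", "ModuleA")
sln.add_link("ModuleA", "partOf", "CourseX")
sln.derive_implicit_links()
print(sln.links)  # now also contains ("Lesson1", "partOf", "CourseX")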
Abstract:
There is an increasing trend for publishers to provide supplementary learning materials with textbooks in order to improve the learning experience and thus ultimately improve textbook sales. This study aims to establish how these materials are used and their relevance to students in terms of supporting student learning. The materials include multiple-choice test banks, animated demonstrations, simulations, quizzes and electronic versions of the text. The study focuses on the extensive library of web-based learning materials available on the ‘WileyPlus’ web platform, which accompanies the textbook ‘Operations Management’, 2nd edition, authored by A. Greasley and published by John Wiley and Sons Ltd.
Abstract:
This article describes the approach adopted and the results obtained by the international team developing WBLST (Web Based Learning in Sciences and Technologies), a Web-based application for e-learning developed for the students of “UVPL: Université Virtuelle des Pays de la Loire”. The e-learning system covers three levels of learning activities: content, exercises, and laboratory. The delivery model is designed to operate with domain concepts as relevant providers of semantic links. The aim is to facilitate an overview of the learning material and to help students build a mental map of it. The implemented system is strongly based on organizing instruction in virtual classes. The quality of the system is evaluated on the basis of feedback from students and professors.
Abstract:
In recent years the Web has become a mainstream medium for communication and information dissemination. This paper presents approaches and methods for implementing adaptive learning that are used in some contemporary web-interfaced Learning Management Systems (LMSs). The problem is not how to create electronic learning materials, but how to locate and utilize the available information in a personalized way. Different approaches to personalization are briefly described in section 1. Real personalization requires a user profile containing information about preferences, aims, and educational history to be stored and used by the system. These issues are considered in section 2. A method for the development and design of adaptive learning content in terms of learning strategy system support is presented in section 3. Section 4 covers a set of innovative personalization services proposed by several important research projects from recent years (the SeLeNe project, the ELENA project, etc.). This section also describes a model for role- and competency-based learning customization that uses a Web Services approach. The last part presents how personalization techniques are implemented in Learning Grid-driven applications.
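To make the profile-driven personalization idea concrete, here is a minimal Python sketch of a user profile and a content-selection rule of the kind such a system might use; the field names and the matching rule are illustrative assumptions, not those of any specific system cited above.

from dataclasses import dataclass, field

@dataclass
class UserProfile:
    preferences: dict = field(default_factory=dict)  # e.g. {"media": "video"}
    goals: list = field(default_factory=list)        # e.g. ["pass the exam"]
    history: list = field(default_factory=list)      # ids of completed resources

@dataclass
class LearningResource:
    resource_id: str
    media: str            # "video", "text", ...
    prerequisites: list   # ids that must already be in the user's history

def select_next(profile, resources):
    """Return resources whose prerequisites are met, preferring the user's media type."""
    eligible = [r for r in resources
                if r.resource_id not in profile.history
                and all(p in profile.history for p in r.prerequisites)]
    preferred = profile.preferences.get("media")
    return sorted(eligible, key=lambda r: r.media != preferred)

profile = UserProfile(preferences={"media": "video"}, history=["intro"])
catalog = [LearningResource("loops-video", "video", ["intro"]),
           LearningResource("loops-text", "text", ["intro"]),
           LearningResource("recursion", "video", ["loops-video"])]
print([r.resource_id for r in select_next(profile, catalog)])
# -> ['loops-video', 'loops-text']  ("recursion" is filtered out; its prerequisite is unmet)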
Abstract:
There have been multifarious approaches to building expert knowledge in the medical and engineering fields through expert systems, case-based reasoning, model-based reasoning, and large-scale knowledge-based systems. The intriguing factors in these approaches are mainly the choices of reasoning mechanism, ontology, knowledge representation, elicitation and modeling. In our study, we argue that knowledge construction through a hypermedia-based community channel is an effective approach to constructing expert knowledge. We hold that this knowledge can range from the simplest forms, such as stories, to the most complex, such as on-the-job experiences. Current approaches to encoding experiences require expert knowledge to be acquired and represented as rules, cases or causal models. We differentiate two types of knowledge: content knowledge and socially derivable knowledge, the latter being knowledge gained through social interaction. The Intelligent Conversational Channel is the system that supports building and sharing this type of knowledge.
Abstract:
There has been an increasing interest in the use of agent-based simulation and some discussion of the relative merits of this approach as compared to discrete-event simulation. There are differing views on whether agent-based simulation offers capabilities that discrete-event simulation cannot provide, or whether all agent-based applications can, at least in theory, be undertaken using a discrete-event approach. This paper presents a simple agent-based NetLogo model and corresponding discrete-event versions implemented in the widely used ARENA software. The two discrete-event versions presented use, respectively, the traditional process-flow approach normally adopted in discrete-event simulation software and an agent-based approach to the model build. In addition, a real-time spatial visual display facility is provided using a spreadsheet platform controlled by VBA code embedded within the ARENA model. Initial findings from this investigation are that discrete-event simulation can indeed be used to implement agent-based models and, with suitable integration elements such as VBA, can provide the spatial displays associated with agent-based software.
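The contrast between the two modelling styles can be illustrated with a minimal Python sketch: the same random-walk behaviour expressed first as a synchronous agent time-step loop and then as a discrete-event list. The grid-free step logic and event granularity are illustrative assumptions and are unrelated to the NetLogo and ARENA models in the paper.

import heapq
import random

# (a) agent-based style: every agent updates on each tick
def agent_based(n_agents=3, ticks=5):
    positions = {a: 0 for a in range(n_agents)}
    for _ in range(ticks):
        for a in positions:
            positions[a] += random.choice([-1, 1])
    return positions

# (b) discrete-event style: each move is an individually scheduled event
def discrete_event(n_agents=3, end_time=5.0):
    positions = {a: 0 for a in range(n_agents)}
    events = [(random.random(), a) for a in range(n_agents)]  # (time, agent)
    heapq.heapify(events)
    while events and events[0][0] < end_time:
        t, a = heapq.heappop(events)
        positions[a] += random.choice([-1, 1])
        heapq.heappush(events, (t + random.random(), a))      # schedule the agent's next move
    return positions

print(agent_based(), discrete_event())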
Abstract:
This qualitative case study explored how employees learn from Team Primacy Concept (TPC)-based employee evaluation and how they apply that knowledge in their job performance. Kolb's experiential learning model (1974) served as the conceptual framework for revealing how employees learn from TPC evaluation, namely how they experience, reflect on, conceptualize and act on performance feedback. TPC-based evaluation is a form of multirater evaluation consisting of three components: self-feedback, supervisor feedback, and peer feedback. Its distinctive characteristic is the team evaluation component, during which an employee's professional performance is discussed by peers in a face-to-face team setting, whereas other forms of multirater evaluation are usually conducted in a confidential and anonymous manner. Case study formed the methodological framework. The case was the Southeastern Virginia (SEVA) region of the Institute for Family Centered Services (IFCS), and the participants were eight employees of the SEVA region. Findings showed that the evaluation process was anxiety-producing for employees, especially the peer evaluation in a team setting. Preparation was found to be an important phase of TPC evaluation. Overall, the positive feedback delivered in a team setting made team members feel acknowledged. The participants felt that honesty in providing feedback and openness to hearing challenges were significant prerequisites to the TPC evaluation process. Further, in the planning phase, employees strove to develop goals that were meaningful to them, and the catalyst for feedback implementation appeared to stem from accountability to oneself and to the client or community. Generally, the participants identified a number of performance improvement goals that they attained during their employment with IFCS, supported by their developmental plans. In conclusion, the study identified the process by which employees learned from TPC-based evaluation and the ways in which they used that knowledge to improve their job performance. Specifically, it examined how participants felt and what they thought about TPC-based feedback, in what ways they reflected on and made meaning of the feedback, and how they used the feedback to improve their job performance.
Abstract:
The objective of this research is to test the effectiveness of a game-based mathematics curriculum, Number-Way, in preschools serving children of low socioeconomic status (SES). The curriculum contains fifteen number games built around four main principles. The results indicated that the curriculum significantly promoted preschoolers' early mathematical competence.
Abstract:
Eschewing costly high-tech approaches, this paper looks at the experience of using low-tech approaches to game design assignments as a problem-based learning and assessment tool over a number of years of undergraduate teaching. General game design concepts are discussed, along with learning outcomes and assessment rubrics aligned with Bloom's Taxonomy, based on evidence from students who had no prior experience of serious game play or design. Approaches to creating game-design-based assessments are offered.
Abstract:
This study investigates the degree to which textual complexity indices applied to students' online contributions, corroborated with a longitudinal analysis of their weekly posts, predict academic performance. The source of student writing consists of blog and microblog posts created in the context of a project-based learning scenario run on our eMUSE platform. Data were collected from six student cohorts, from six consecutive installments of the Web Applications Design course, comprising 343 students. A significant model was obtained by relying on the textual complexity and longitudinal analysis indices, applied to the English-language contributions of the 148 students who were actively involved in the undertaken projects.
Abstract:
Social media tools are increasingly popular in Computer Supported Collaborative Learning, and the analysis of students' contributions on these tools is an emerging research direction. Previous studies have mainly focused on examining quantitative behavior indicators on social media tools. In contrast, the approach proposed in this paper relies on the actual content analysis of each student's contributions in a learning environment. More specifically, in this study, textual complexity analysis is applied to investigate how students' writing style on social media tools can be used to predict their academic performance and their learning style. Multiple textual complexity indices are used to analyze the blog and microblog posts of 27 students engaged in a project-based learning activity. The preliminary results of this pilot study are encouraging, with several indices predictive of student grades and/or learning styles.
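The general shape of such an analysis pipeline can be sketched in a few lines of Python: compute simple textual complexity indices for each student's posts and fit a linear model against grades. The three indices and the toy data below are illustrative assumptions; the studies above use much richer sets of complexity measures.

import numpy as np

def complexity_indices(text):
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    avg_word_len = sum(len(w) for w in words) / max(len(words), 1)
    avg_sent_len = len(words) / max(len(sentences), 1)
    lexical_diversity = len(set(w.lower() for w in words)) / max(len(words), 1)
    return [avg_word_len, avg_sent_len, lexical_diversity]

posts = ["Short post about the project.",
         "We refactored the persistence layer and documented every endpoint carefully.",
         "Added a minor fix.",
         "ok"]
grades = np.array([6.0, 9.0, 7.0, 4.0])   # toy grades for illustration only

X = np.array([complexity_indices(p) + [1.0] for p in posts])  # features plus intercept
coef, *_ = np.linalg.lstsq(X, grades, rcond=None)             # least-squares fit
print(coef)  # weights relating each index to the grade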
Abstract:
The large upfront investments required for game development pose a severe barrier to the wider uptake of serious games in education and training. In addition, there is a lack of well-established methods and tools that support game developers in preserving and enhancing games' pedagogical effectiveness. The RAGE project, a Horizon 2020-funded research project on serious games, addresses these issues by making available reusable software components designed to support the pedagogical qualities of serious games. In order to easily deploy and integrate these game components in a multitude of game engines, platforms and programming languages, RAGE has developed and validated a hybrid component-based software architecture that preserves component portability and interoperability. While a first set of software components is being developed, this paper presents selected examples to explain the overall system's concept and its practical benefits. First, the Emotion Detection component uses learners' webcams to capture their emotional states from facial expressions. Second, the Performance Statistics component is an add-on for learning analytics data processing, which allows instructors to track and inspect learners' progress without having to worry about the required statistical computations. Third, a set of language processing components accommodates the analysis of learners' textual inputs, facilitating comprehension assessment and prediction. Fourth, the Shared Data Storage component provides a technical solution for data storage - e.g. for player data or game world data - across multiple software components. The presented components are exemplary of the anticipated RAGE library, which will include up to forty reusable software components for serious gaming, addressing diverse pedagogical dimensions.
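The portability idea behind such components can be illustrated with a minimal Python sketch: a component exposes a plain, engine-agnostic interface and exchanges simple messages with whatever host engine embeds it. The interface and class names below are illustrative assumptions, not the actual RAGE component architecture.

from abc import ABC, abstractmethod

class GameComponent(ABC):
    @abstractmethod
    def process(self, message):
        """Handle a message from the host game engine and return a result dict."""

class PerformanceStatistics(GameComponent):
    def __init__(self):
        self.scores = []

    def process(self, message):
        if message.get("event") == "score":
            self.scores.append(message["value"])
        n = len(self.scores)
        return {"attempts": n, "mean_score": sum(self.scores) / n if n else None}

# The host engine only needs to forward plain dictionaries to the component,
# so the same component can sit behind any engine or platform binding.
stats = PerformanceStatistics()
stats.process({"event": "score", "value": 70})
print(stats.process({"event": "score", "value": 90}))  # {'attempts': 2, 'mean_score': 80.0}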
Abstract:
Adjoint methods have proven to be an efficient way of calculating the gradient of an objective function with respect to a shape parameter for optimisation, with a computational cost nearly independent of the number of design variables [1]. The approach in this paper links the adjoint surface sensitivities (the gradient of the objective function with respect to surface movement) with the parametric design velocities (the movement of the surface due to a CAD parameter perturbation) in order to compute the gradient of the objective function with respect to the CAD variables.
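In standard notation this link is simply the chain rule over the design surface; the symbols below are an illustrative sketch rather than the paper's own notation:

\frac{dJ}{d\alpha_i} \;=\; \int_{\Gamma} \phi \, V_{n,i} \, \mathrm{d}\Gamma,
\qquad
V_{n,i} \;=\; \frac{\partial \mathbf{x}_s}{\partial \alpha_i} \cdot \mathbf{n},

where J is the objective function, \alpha_i a CAD parameter, \Gamma the design surface with outward normal \mathbf{n}, \phi the adjoint surface sensitivity (change in J per unit normal displacement of the boundary), and V_{n,i} the parametric design velocity of parameter \alpha_i.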
For a successful implementation of shape optimisation strategies in practical industrial cases, the choice of design variables or parameterisation scheme used for the model to be optimised plays a vital role. Where the goal is to base the optimisation on a CAD model, the choices are to use a NURBS geometry generated from CAD modelling software, where the positions of the NURBS control points are the optimisation variables [2], or to use the feature-based CAD model with all of its construction history, preserving the design intent [3]. The main advantage of using the feature-based model is that the optimised model produced can be used directly for downstream applications, including manufacturing and process planning.
This paper presents an approach for optimisation based on the feature-based CAD model, which uses the CAD parameters defining the features in the model geometry as the design variables. In order to capture the CAD surface movement with respect to a change in a design variable, the “Parametric Design Velocity” is calculated, defined as the movement of the CAD model boundary in the normal direction due to a change in the parameter value.
The approach presented here for calculating the design velocities represents an advancement in capability and robustness over that described by Robinson et al. [3]. The process can be easily integrated into most industrial optimisation workflows and is immune to the topology and labelling issues highlighted by other CAD-based optimisation processes. It considers every continuous (“real-valued”) parameter type as an optimisation variable, and it can be adapted to work with any CAD modelling software, as long as the software has an API which provides access to the values of the parameters controlling the model shape and allows the model geometry to be exported. To calculate the movement of the boundary, the methodology applies finite differences to the shape of the 3D CAD model before and after the parameter perturbation. The implementation procedure includes calculating the geometric movement along the normal direction between the discrete representations of the original and perturbed geometries. The parametric design velocities can then be directly linked with the adjoint surface sensitivities to extract the gradients used in a gradient-based optimisation algorithm.
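A minimal Python sketch of this gradient assembly is given below, under strong simplifying assumptions: the original and perturbed surfaces are exported as matched point clouds with per-point normals and areas, and export_surface(params) is a hypothetical stand-in for the CAD API call that regenerates and exports the geometry; it is not part of any particular CAD package.

import numpy as np

def design_velocity(export_surface, params, name, delta=1e-4):
    """Normal boundary movement per unit change of one parameter (finite difference)."""
    base_pts, normals, areas = export_surface(params)
    perturbed = dict(params)
    perturbed[name] += delta
    pert_pts, _, _ = export_surface(perturbed)
    # project the displacement of each surface point onto its normal
    v_n = np.einsum("ij,ij->i", pert_pts - base_pts, normals) / delta
    return v_n, areas

def gradient(export_surface, params, adjoint_sensitivity):
    """dJ/dp for every CAD parameter p: surface integral of sensitivity * design velocity."""
    grads = {}
    for name in params:
        v_n, areas = design_velocity(export_surface, params, name)
        grads[name] = np.sum(adjoint_sensitivity * v_n * areas)
    return grads

# toy stand-in for the CAD export: a flat patch whose height is the only parameter
def export_surface(params):
    x = np.linspace(0.0, 1.0, 5)
    pts = np.stack([x, np.full_like(x, params["height"])], axis=1)
    normals = np.tile([0.0, 1.0], (len(x), 1))
    areas = np.full(len(x), 1.0 / len(x))
    return pts, normals, areas

sens = np.full(5, 2.0)  # pretend the adjoint says dJ per unit normal movement is 2 everywhere
print(gradient(export_surface, {"height": 0.3}, sens))  # ~{'height': 2.0}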
The application of the approach to a flow optimisation problem is presented, in which the power dissipation of the flow in an automotive air duct is reduced by changing the parameters of the CAD geometry created in CATIA V5. The flow sensitivities are computed with the continuous adjoint method for laminar and turbulent flow [4] and are combined with the parametric design velocities to compute the cost function gradients. A line-search algorithm is then used to update the design variables and proceed with the optimisation process.