703 results for dynamic learning environments
Abstract:
In recent years, learning analytics (LA) has attracted a great deal of attention in technology-enhanced learning (TEL) research, as practitioners, institutions, and researchers increasingly see the potential of LA to shape the future TEL landscape. Generally, LA deals with the development of methods that harness educational data sets to support the learning process. This paper provides a foundation for future research in LA. It gives a systematic overview of this emerging field and its key concepts through a reference model for LA based on four dimensions: data, environments, and context (what?), stakeholders (who?), objectives (why?), and methods (how?). It further identifies various challenges and research opportunities in the area of LA in relation to each dimension.
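As a rough illustration (not from the paper), the four-dimensional reference model can be encoded as a simple record whose fields mirror the what/who/why/how questions; all class, field, and example names below are assumptions introduced for this sketch.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LAReferenceModel:
    """Illustrative encoding of the four-dimensional LA reference model.

    Field names are assumptions for this sketch, not the paper's schema:
    what_ covers data, environments, and context; who covers stakeholders;
    why covers objectives; how covers methods.
    """
    what_: List[str] = field(default_factory=list)
    who: List[str] = field(default_factory=list)
    why: List[str] = field(default_factory=list)
    how: List[str] = field(default_factory=list)

# Hypothetical example: positioning a dashboard study along the four dimensions.
case = LAReferenceModel(
    what_=["LMS clickstream data", "online course environment"],
    who=["learners", "teachers"],
    why=["monitoring", "prediction of at-risk students"],
    how=["descriptive statistics", "classification"],
)
print(case)
```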
Abstract:
Person-to-stock order picking is highly flexible and requires minimal investment costs in comparison to automated picking solutions. For these reasons, traditional picking is widespread in distribution and production logistics. Due to its typically large proportion of manual activities, picking causes the highest operative personnel costs of all intralogistics processes. The required personnel capacity in picking varies in the short and medium term due to fluctuations in capacity requirements. These dynamics are often balanced by employing a minimal permanent staff and using seasonal help when needed. The resulting high personnel fluctuation necessitates the frequent training of new pickers, which, in combination with increasingly complex work contents, highlights the importance of learning processes in picking. In industrial settings, learning is often quantified based on diminishing processing time and cost requirements with increasing experience. The best-known industrial learning curve models include those of Wright, de Jong, Baloff, and Crossman, which are typically applied to the learning effects of an entire work crew rather than of individuals. These models have been validated in largely static work environments with homogeneous work contents. Little is known about learning effects in picking systems, where work contents are heterogeneous and individual work strategies vary among employees. A mix of temporary and permanent employees with varying degrees of experience necessitates the observation of individual learning curves. In this paper, the individual picking performance development of temporary employees is analyzed and compared to that of permanent employees in the same working environment.
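The classical learning curve models the abstract names have simple closed forms; the sketch below computes unit times under Wright's and de Jong's models, with parameter values (first-unit time, learning rate, incompressibility factor) invented purely for illustration.

```python
import math

def wright_time(t1: float, n: int, learning_rate: float) -> float:
    """Wright's model: time for the n-th repetition.

    t1 is the time for the first unit; a learning_rate of 0.85 means each
    doubling of cumulative output reduces the unit time to 85% of its value.
    """
    b = math.log(learning_rate) / math.log(2.0)
    return t1 * n ** b

def de_jong_time(t1: float, n: int, learning_rate: float, m: float) -> float:
    """De Jong's model: like Wright's, but a fraction m of the task is
    incompressible (e.g. machine-paced) and does not improve with experience."""
    b = math.log(learning_rate) / math.log(2.0)
    return t1 * (m + (1.0 - m) * n ** b)

# Illustrative values, not from the paper: 120 s first pick, 85% learning
# rate, 30% of the picking task assumed incompressible.
for n in (1, 2, 4, 8, 16):
    print(n, round(wright_time(120, n, 0.85), 1), round(de_jong_time(120, n, 0.85, 0.3), 1))
```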
Abstract:
Despite promising cost-saving potential, many offshore software projects fail to realize the expected benefits. A frequent source of failure lies in the insufficient transfer of knowledge during the transition phase. Prior literature has reported cases where some domains of knowledge were successfully transferred to vendor personnel whereas others were not. There is further evidence that the actual knowledge transfer processes often vary from case to case. This raises the question of whether there is a systematic relationship between the chosen knowledge transfer process and knowledge transfer success. This paper introduces a dynamic perspective that distinguishes different types of knowledge transfer processes and explains under which circumstances each type is most appropriate for successfully transferring knowledge. Our paper draws on the knowledge transfer literature, the Model of Work-Based Learning, and theories from cognitive psychology to show how characteristics of knowledge and the absorptive capacity of knowledge recipients fit particular knowledge transfer processes. The knowledge transfer processes are conceptualized as combinations of generic knowledge transfer activities. This results in six gestalts of knowledge transfer processes, each representing a fit between the knowledge transfer process, the characteristics of the knowledge to be transferred, and the absorptive capacity of the knowledge recipient.
Abstract:
In this paper, we present the Cellular Dynamic Simulator (CDS) for simulating diffusion and chemical reactions within crowded molecular environments. CDS is based on a novel event-driven algorithm specifically designed for precise calculation of the timing of collisions, reactions, and other events for each individual molecule in the environment. Generic mesh-based compartments allow the creation or importation of very simple or highly detailed cellular structures in a 3D environment. Multiple levels of compartments and static obstacles can be used to create a dense environment that mimics cellular boundaries and the intracellular space. The CDS algorithm takes into account volume exclusion and molecular crowding, which may impact signaling cascades in small sub-cellular compartments such as dendritic spines. With the CDS, we can simulate simple enzyme reactions, aggregation, and channel transport, as well as highly complicated chemical reaction networks of both freely diffusing and membrane-bound multi-protein complexes. Components of the CDS are defined generically so that the simulator can be applied to a wide range of environments in terms of scale and level of detail. Through an initialization GUI, a simple simulation environment can be created and populated within minutes, yet the tool is powerful enough to design complex 3D cellular architectures. The initialization tool allows visual confirmation of the environment construction prior to execution by the simulator. This paper describes the CDS algorithm and its design and implementation, provides an overview of the types of features available, and highlights the utility of those features in demonstrations.
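The core idea of an event-driven simulator of this kind is a time-ordered queue of pending events, processed strictly in order of their computed times. The sketch below shows such a loop in generic form; it is not the CDS implementation, and the toy event names are invented.

```python
import heapq

def run_event_loop(initial_events, t_end):
    """Process (time, action) events strictly in time order.

    Each action is a zero-argument callable that may return a list of new
    (time, action) pairs, e.g. reaction products whose next events must be
    scheduled. This is a generic sketch, not the CDS implementation.
    """
    queue = []
    counter = 0  # tie-breaker so heapq never has to compare callables
    for t, action in initial_events:
        heapq.heappush(queue, (t, counter, action))
        counter += 1
    while queue:
        t, _, action = heapq.heappop(queue)
        if t > t_end:
            break
        for t_new, follow_up in (action() or []):
            heapq.heappush(queue, (t_new, counter, follow_up))
            counter += 1

# Toy usage: a "collision" at t=1.0 schedules a "reaction" at t=2.5.
reaction = lambda: print("reaction fires at t=2.5") or []
collision = lambda: [(2.5, reaction)]
run_event_loop([(1.0, collision)], t_end=5.0)
```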
Abstract:
Perceptual learning is a training-induced improvement in performance. Mechanisms underlying the perceptual learning of depth discrimination in dynamic random-dot stereograms were examined by assessing stereothresholds as a function of decorrelation. The inflection point of the decorrelation function was defined as the level of decorrelation corresponding to 1.4 times the threshold when decorrelation is 0%. In general, stereothresholds increased with increasing decorrelation. Following training, stereothresholds and standard errors of measurement decreased systematically for all tested decorrelation values. Post-training decorrelation functions were reduced by a multiplicative constant (approximately 5), exhibiting changes in stereothresholds without changes in the inflection points. Disparity energy model simulations indicate that a post-training reduction in neuronal noise is sufficient to account for the perceptual learning effects. In two subjects, learning effects were retained over a period of six months, which may have application for training stereo-deficient subjects.
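The inflection-point definition in the abstract is directly computable from a measured decorrelation function; the sketch below locates it by linear interpolation, using invented threshold values rather than the study's data.

```python
import numpy as np

def inflection_point(decorrelation, thresholds):
    """Decorrelation level at which the stereothreshold first reaches
    1.4x the threshold measured at 0% decorrelation (linear interpolation).

    Assumes thresholds rise monotonically with decorrelation."""
    decorrelation = np.asarray(decorrelation, dtype=float)
    thresholds = np.asarray(thresholds, dtype=float)
    target = 1.4 * thresholds[0]
    # Interpolate decorrelation as a function of threshold.
    return float(np.interp(target, thresholds, decorrelation))

decorr = [0, 10, 20, 30, 40, 50]   # % decorrelation
thresh = [20, 22, 26, 33, 45, 70]  # stereothresholds in arcsec (invented values)
print(inflection_point(decorr, thresh))  # decorrelation where threshold = 28 arcsec
```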
Abstract:
Although a wealth of evidence links striatal dopamine to individuals' reward learning performance in non-social environments, the neurochemical underpinnings of such learning during social interaction are unknown. Here, we show that the administration of 300 mg of the dopamine precursor L-DOPA to 200 healthy male subjects influences learning about a partner's prosocial preferences in a novel social interaction task, which is akin to a repeated trust game. We found learning to be modulated by a well-established genetic marker of striatal dopamine levels, the 40-bp variable number tandem repeat polymorphism of the dopamine transporter (DAT1 polymorphism). In particular, we found that L-DOPA improves learning in 10/10R genotype subjects, who are assumed to have lower endogenous striatal dopamine levels, and impairs learning in 9/10R genotype subjects, who are assumed to have higher endogenous dopamine levels. These findings provide the first evidence for a critical role of dopamine in learning whether an interaction partner has a prosocial or a selfish personality. The applied pharmacogenetic approach may open doors to new ways of studying psychiatric disorders such as psychosis, which is characterized by distorted perceptions of others' prosocial attitudes.
Abstract:
Advancements in cloud computing have enabled the proliferation of distributed applications, which require the management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing workload conditions, such as the number of connected users, application performance might suffer, leading to violations of Service Level Agreements (SLAs) and possibly inefficient use of hardware resources. Combining dynamic application requirements with the increased use of virtualised computing resources creates a challenging resource management context for application and cloud-infrastructure owners. In such complex environments, business entities use SLAs as a means for specifying quantitative and qualitative requirements of services. There are several challenges in running distributed enterprise applications in cloud environments, ranging from the instantiation of service VMs in the correct order using an adequate quantity of computing resources, to adapting the number of running services in response to varying external loads, such as the number of users. The application owner is interested in finding the optimum amount of computing and network resources to use to ensure that the performance requirements of all their applications are met. They are also interested in appropriately scaling the distributed services so that application performance guarantees are maintained even under dynamic workload conditions. Similarly, the infrastructure provider is interested in optimally provisioning the virtual resources onto the available physical infrastructure so that operational costs are minimized while the performance of tenants' applications is maximized. Motivated by the complexities associated with the management and scaling of distributed applications, while satisfying multiple objectives (related to both consumers and providers of cloud resources), this thesis proposes a cloud resource management platform able to dynamically provision and coordinate the various lifecycle actions on both virtual and physical cloud resources using semantically enriched SLAs. The system focuses on dynamic sizing (scaling) of virtual infrastructures composed of virtual machine (VM)-bound application services. We describe several algorithms for adapting the number of VMs allocated to the distributed application in response to changing workload conditions, based on SLA-defined performance guarantees. We also present a framework for the dynamic composition of scaling rules for distributed services, which uses benchmark-generated application monitoring traces. We show how these scaling rules can be combined and included in semantic SLAs for controlling the allocation of services. We also provide a detailed description of the multi-objective infrastructure resource allocation problem and various approaches to solving it. We present a resource management system based on a genetic algorithm, which performs the allocation of virtual resources while considering the optimization of multiple criteria. We show that our approach significantly outperforms reactive VM-scaling algorithms as well as heuristic-based VM-allocation approaches.
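As one concrete (and hypothetical) illustration of an SLA-driven scaling rule of the kind described, the sketch below turns a single monitored metric and two thresholds into a VM-count decision; the metric name, threshold values, and VM bounds are assumptions, not the thesis's benchmark-derived rules.

```python
from dataclasses import dataclass

@dataclass
class ScalingRule:
    """A toy SLA-derived scaling rule: keep a metric within bounds by
    adding or removing service VMs. Values here are illustrative only."""
    metric: str              # e.g. "response_time_ms" (hypothetical)
    upper: float             # scale out above this value
    lower: float             # scale in below this value
    step: int = 1            # VMs to add or remove per decision

def scaling_decision(rule: ScalingRule, observed: float,
                     current_vms: int, min_vms: int = 1, max_vms: int = 20) -> int:
    """Return the new VM count for one service tier."""
    if observed > rule.upper:
        return min(current_vms + rule.step, max_vms)
    if observed < rule.lower and current_vms > min_vms:
        return max(current_vms - rule.step, min_vms)
    return current_vms

# Example: an SLA that guarantees sub-200 ms responses for a hypothetical web tier.
rule = ScalingRule(metric="response_time_ms", upper=200.0, lower=80.0)
print(scaling_decision(rule, observed=260.0, current_vms=3))  # -> 4, scale out
```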
Abstract:
This article demonstrates how to identify context-aware types of e-Learning objects (eLOs) derived from subject domains. The perspective is an engineering one, applied during requirements elicitation and analysis for ongoing work on constructing an object-oriented (OO), dynamic, and adaptive model to build and deliver packaged e-Learning courses. Three preliminary subject domains are presented and, as a result, three primitive types of eLOs are posited. These types, derived from the subject domains, are structural, conceptual, and granular in nature. Structural objects are responsible for the course itself, conceptual objects incorporate adaptive and logical interoperability, while granular objects congregate granular assets. Their differences, interrelationships, and responsibilities are discussed. A major design challenge relates to adaptive behaviour. Future research addresses refinement of the subject domains and adaptive hypermedia systems.
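One way to picture the three primitive eLO types and their responsibilities is as a small OO hierarchy; the class and attribute names below are invented for this sketch and are not the paper's model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class GranularObject:
    """Congregates granular assets (text fragments, images, quiz items, ...)."""
    asset_uris: List[str] = field(default_factory=list)

@dataclass
class ConceptualObject:
    """Carries adaptive logic and interoperability between content pieces."""
    concept: str = ""
    adaptation_rule: str = ""          # e.g. "show remediation if score < 0.5"
    granules: List[GranularObject] = field(default_factory=list)

@dataclass
class StructuralObject:
    """Responsible for the course itself: ordering and packaging of concepts."""
    title: str = ""
    sections: List[ConceptualObject] = field(default_factory=list)

# Hypothetical course assembled from the three object types.
course = StructuralObject(
    title="Intro module",
    sections=[ConceptualObject(concept="loops",
                               adaptation_rule="skip if pre-test passed",
                               granules=[GranularObject(["video_01.mp4"])])],
)
print(course.title, len(course.sections))
```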
Abstract:
Specification consortia and standardization bodies concentrate on e-Learning objects to ensure reusability of content. Learning objects may be collected in a library and used for deriving course offerings that are customized to the needs of different learning communities. However, customization of courses is possible only if the logical dependencies between the learning objects are known. Metadata for describing object relationships have been proposed in several e-Learning specifications. This paper discusses the customization potential of e-Learning objects but also the pitfalls that exist if content is customized inappropriately.
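To make the role of dependency metadata concrete, the sketch below resolves a customized course from hypothetical "requires" relationships between learning objects and orders the result with a topological sort; the object names and relationship type are invented, not drawn from any particular specification.

```python
from graphlib import TopologicalSorter

# Hypothetical prerequisite metadata: object -> set of required objects.
requires = {
    "advanced_sql": {"basic_sql"},
    "basic_sql": {"relational_model"},
    "relational_model": set(),
    "nosql_intro": {"relational_model"},
}

def customized_sequence(selected, requires):
    """Expand a selection with its prerequisites and return a valid order."""
    needed, stack = set(), list(selected)
    while stack:
        obj = stack.pop()
        if obj not in needed:
            needed.add(obj)
            stack.extend(requires.get(obj, ()))
    sub_graph = {o: requires.get(o, set()) & needed for o in needed}
    return list(TopologicalSorter(sub_graph).static_order())

# A community that only asked for "advanced_sql" still receives its
# prerequisites, in an order that respects the dependencies.
print(customized_sequence({"advanced_sql"}, requires))
```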
Abstract:
We introduce two probabilistic, data-driven models that predict a ship's speed and the situations in which a ship is likely to get stuck in ice, based on the joint effect of ice features such as the thickness and concentration of level ice, ice ridges, and rafted ice; ice compression is also considered. To develop the models, two datasets were utilized. First, data from the Automatic Identification System about the performance of a selected ship were used. Second, a numerical ice model, HELMI, developed at the Finnish Meteorological Institute, provided information about the ice field. The relations between the ice conditions and ship movements were established using Bayesian learning algorithms. The case study presented in this paper considers a single, unassisted trip of an ice-strengthened bulk carrier between two Finnish ports in the presence of challenging ice conditions, which varied in time and space. The obtained results show good predictive power of the models: on average 80% accuracy for predicting the ship's speed within specified bins, and above 90% for predicting cases where a ship may get stuck in ice. We expect this new approach to facilitate safe and effective route selection in ice-covered waters, where ship performance is reflected in the objective function.
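In the same spirit as the Bayesian models described (though far simpler than them), the sketch below trains a tiny discrete naive Bayes classifier to flag "stuck" versus "moving" situations from discretized ice features; the training rows and feature names are invented, not AIS or HELMI data.

```python
from collections import Counter, defaultdict

# Invented training rows: discretized ice features -> outcome label.
train = [
    ({"ice_thickness": "thick", "ridging": "high", "compression": "yes"}, "stuck"),
    ({"ice_thickness": "thick", "ridging": "low",  "compression": "no"},  "moving"),
    ({"ice_thickness": "thin",  "ridging": "low",  "compression": "no"},  "moving"),
    ({"ice_thickness": "thin",  "ridging": "high", "compression": "yes"}, "moving"),
    ({"ice_thickness": "thick", "ridging": "high", "compression": "no"},  "stuck"),
]

def fit(rows):
    class_counts = Counter(label for _, label in rows)
    feat_counts = defaultdict(Counter)  # (feature, label) -> Counter over values
    for feats, label in rows:
        for f, v in feats.items():
            feat_counts[(f, label)][v] += 1
    return class_counts, feat_counts

def predict(feats, class_counts, feat_counts, alpha=1.0):
    """Pick the label with the highest naive Bayes score.

    Laplace smoothing assumes two possible values per feature (toy data)."""
    total = sum(class_counts.values())
    best, best_p = None, 0.0
    for label, c in class_counts.items():
        p = c / total
        for f, v in feats.items():
            counts = feat_counts[(f, label)]
            p *= (counts[v] + alpha) / (sum(counts.values()) + alpha * 2)
        if p > best_p:
            best, best_p = label, p
    return best

model = fit(train)
print(predict({"ice_thickness": "thick", "ridging": "high", "compression": "yes"}, *model))
```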
Abstract:
The confluence of three-dimensional (3D) virtual worlds with social networks imposes on software agents, in addition to conversational functions, the same behaviours as those common to human-driven avatars. In this paper, we explore the possibilities of using metabots (metaverse robots) with motion capabilities in complex 3D virtual worlds, and we put forward a learning model based on evolutionary computation techniques for optimizing the fuzzy controllers that metabots will subsequently use to move around a virtual environment.
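A generic evolutionary loop of the kind alluded to, with truncation selection and Gaussian mutation over a flat parameter vector standing in for fuzzy membership-function parameters, is sketched below; the fitness function is a placeholder, since a real one would score a metabot's motion in the virtual world.

```python
import random

def fitness(params):
    # Placeholder objective: prefer parameters close to an arbitrary target.
    # A real fitness would evaluate the resulting fuzzy controller's motion.
    target = [0.2, 0.5, 0.8, 0.3]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def evolve(n_params=4, pop_size=30, generations=100, sigma=0.1, seed=0):
    """Truncation selection: keep the best half, refill with mutated copies."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = [[min(1.0, max(0.0, p + rng.gauss(0.0, sigma)))
                     for p in rng.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

print([round(p, 2) for p in evolve()])  # best parameter vector found
```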
Abstract:
Biology is a dynamic and fascinating science, and its study is an exciting journey for students encountering the subject for the first time. Here, we present the development of the study and learning experience of this subject, which belongs to an area of knowledge outside the training curriculum of students who studied Physics during their degree. We take a real example, the "Elements of Biology" subject, which has been taught as part of the Official Biomedical Physics Master's programme at the Physics Faculty of the Complutense University of Madrid since the 2006/07 academic year. Its main objective is to give students an understanding of how Physics can have numerous applications in the Biomedical Sciences, providing the basic training to develop a professional, academic, or research career. The results obtained when new virtual tools are combined with classical learning show a clear increase in the number of students who take and pass the final exam. Moreover, this new learning strategy is well received by the students, which translates into higher participation and a lower dropout rate for the subject.