42 results for Ubiquitous Learning Environments
Abstract:
The adaptive process in motor learning was examined in terms of the effects of varying amounts of constant practice performed before random practice. Participants pressed five response keys sequentially, the last one coincident with the lighting of a final visual stimulus provided by a complex coincident-timing apparatus. Different visual stimulus speeds were used during random practice. 33 children (M age = 11.6 yr.) were randomly assigned to one of three experimental groups: constant-random, constant-random 33%, and constant-random 66%. The constant-random group practiced constantly until reaching a performance stabilization criterion of three consecutive trials within 50 msec of error. The other two groups performed additional constant practice of 33% and 66%, respectively, of the number of trials needed to achieve the stabilization criterion. All three groups then performed 36 trials under random practice; in this adaptation phase, they practiced at visual stimulus speeds different from the one adopted in the stabilization phase. Global performance measures were absolute, constant, and variable errors, and movement pattern was analyzed by relative timing and overall movement time. There was no group difference in the global performance measures or in overall movement time. However, differences between the groups were observed in movement pattern, since the constant-random 66% group changed its relative timing performance in the adaptation phase.
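As an illustration of the global performance measures named above (the abstract does not define them), the following is a minimal sketch of how absolute, constant, and variable error are conventionally computed from signed timing errors; the trial values are hypothetical.

```python
import statistics

def timing_error_measures(errors_ms):
    """Compute the three conventional global performance measures from signed
    timing errors (response time minus target time, in msec)."""
    absolute_error = statistics.mean(abs(e) for e in errors_ms)  # mean error magnitude
    constant_error = statistics.mean(errors_ms)                  # directional bias
    variable_error = statistics.pstdev(errors_ms)                # response consistency
    return absolute_error, constant_error, variable_error

# Hypothetical block of coincident-timing trials (msec; negative = key pressed early)
errors = [-62, 35, -18, 44, -7, 29]
ae, ce, ve = timing_error_measures(errors)
print(f"AE = {ae:.1f} ms, CE = {ce:.1f} ms, VE = {ve:.1f} ms")
```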
Abstract:
An experiment was conducted to investigate the persistence of the effect of "bandwidth knowledge of results (KR)" manipulated during the learning phase of a manual force-control task. The experiment consisted of two phases: an acquisition phase with the goal of maintaining 60% of maximum force over 30 trials, and a second phase with the objective of maintaining 40% of maximum force over 20 further trials. Four KR bandwidths were used, with KR provided only when performance error exceeded 5, 10, or 15% of the target, plus a control group (0% bandwidth). Analysis showed that the 5, 10, and 15% bandwidths led to better performance than the 0% bandwidth at the beginning of the second phase, and that this advantage persisted during the extended trials.
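As a hedged illustration of the bandwidth-KR manipulation described above, the rule is simply that feedback is shown only when the error exceeds the group's bandwidth; the force values and variable names below are hypothetical, not taken from the study.

```python
def give_kr(produced_force, target_force, bandwidth_pct):
    """Return True if knowledge of results should be shown on this trial,
    i.e. when the error exceeds the bandwidth (0% bandwidth = KR on every trial)."""
    error_pct = abs(produced_force - target_force) / target_force * 100.0
    return error_pct > bandwidth_pct

# Hypothetical trial: target is 60% of a 500 N maximum force, produced force is 275 N
target = 0.60 * 500.0
for bw in (0, 5, 10, 15):
    print(bw, give_kr(produced_force=275.0, target_force=target, bandwidth_pct=bw))
```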
Abstract:
Background: Universities worldwide are seeking objective measures for assessing their faculties' research products, both to evaluate them and to attain prestige. Despite concerns, the impact factors (IF) of the journals in which faculty publish have been adopted. Research objective: The study aims to explore the conditions created within five countries as a result of policies requiring, or not requiring, faculty to publish in high-IF journals, and the extent to which these facilitated or hindered the development of nursing science. Design: The design was a multiple case study of Brazil, Taiwan, and Thailand (with IF policies, Group A) and the United Kingdom and the United States (no IF policies, Group B). Key informants from each country were identified to assist in subject recruitment. Methods: A questionnaire was developed for data collection. The study was approved by a human subject review committee. Five faculty members of senior rank from each country participated. All communication occurred electronically. Findings: Group A and Group B countries differed on who used the policy and the purposes for which it was used. There were both similarities and differences across the five countries with respect to hurdles, scholar behaviour, publishing locally vs. internationally, views of their science, and steps taken to internationalize their journals. Conclusions: Among Group A countries, Taiwan seemed most successful in developing its scholarship. Group B countries have continued their scientific progress without such policies. IF policies were not necessary motivators of scholarship; factors such as the availability of qualified nurse scientists and the resource base in the country may be critical in supporting science development.
Abstract:
There are several tools in the literature that support innovation in organizations. Among the most cited are the so-called technology roadmapping methods, also known as TRM. However, these methods are designed primarily for organizations that adopt the market-pull strategy of technology-product integration; organizations that adopt the technology-push integration strategy are neglected in the literature. Furthermore, with the advent of open innovation, there is a need to consider the adoption of partnerships in the innovation process. Thus, this study proposes a technology roadmapping method, identified as the method for technology push (MTP), applicable to organizations that adopt the technology-push integration strategy, such as SMEs and independent research centers in an open-innovation environment. The method was developed through action-research and was assessed from two analytical standpoints: externally, via a specific literature review of its theoretical contributions, and internally, through the analysis of potential users' perceptions of the feasibility of applying MTP. The results indicate both the unique character of the method and its perceived implementation feasibility. Future research is suggested in order to validate the method in different types of organizations. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
Studies on the avoidance behavior of aquatic organisms in response to contaminants have confirmed that such behavior can be relevant in field situations. However, almost all toxicity tests involve the forced exposure of organisms to toxicants. In particular, despite the importance of Chironomus riparius Meigen larvae in sediment toxicity testing, only a few studies on avoidance behavior have been performed. This study investigated the ability of different life stages of C. riparius, including ovipositing females and first-, second-, and fourth-instar larvae, to avoid copper-contaminated environments. Ovipositing females were given a choice between a control and a copper solution (1.3 mg Cu l⁻¹). First-instar larvae were given a choice between a control and a copper-spiked sediment (2.0 mg Cu l⁻¹). Both second- and fourth-instars were exposed to a copper gradient (0.38-3.4 mg Cu l⁻¹) in a flow-through system. None of the life stages avoided copper, even though the highest concentrations caused lethal effects on midges. Thus, the avoidance behavior of C. riparius is not a sensitive endpoint for assessing copper sublethal toxicity.
Abstract:
Oxy-coal combustion is a viable technology for new and existing coal-fired power plants, as it facilitates carbon capture and can thereby mitigate climate change. Pulverized coals of various ranks, biomass, and their blends were burned to assess the evolution of combustion effluent gases, such as NOₓ, SO₂, and CO, under a variety of background gas compositions. The fuels were burned in an electrically heated laboratory drop-tube furnace in O₂/N₂ and O₂/CO₂ environments with oxygen mole fractions of 20%, 40%, 60%, 80%, and 100%, at a furnace temperature of 1400 K. The fuel mass flow rate was kept constant in most cases, and combustion was fuel-lean. Results showed that, in the case of the four coals studied, NOₓ emissions in O₂/CO₂ environments were lower than those in O₂/N₂ environments by amounts that ranged from 19 to 43% at the same oxygen concentration. In the case of bagasse and coal/bagasse blends, the corresponding NOₓ reductions ranged from 22 to 39%. NOₓ emissions were found to increase with increasing oxygen mole fraction until ~50% O₂ was reached; thereafter, they monotonically decreased with increasing oxygen concentration. NOₓ emissions from the various fuels burned did not clearly reflect their nitrogen content (0.2-1.4%), except when large content differences were present. SO₂ emissions from all fuels remained largely unaffected by the replacement of the N₂ diluent gas with CO₂, whereas they typically increased with increasing sulfur content of the fuels (0.07-1.4%) and decreased with increasing calcium content of the fuels (0.28-2.7%). Under the conditions of this work, 20-50% of the fuel-nitrogen was converted to NOₓ. The amount of fuel-sulfur converted to SO₂ varied widely, depending on the fuel and, in the case of the bituminous coal, also on the O₂ mole fraction. Blending the sub-bituminous coal with bagasse reduced its SO₂ yields, whereas blending the bituminous coal with bagasse reduced both its SO₂ and NOₓ yields. CO emissions were generally very low in all cases. The emission trends were interpreted on the basis of separate combustion observations.
Abstract:
A Learning Object (LO) is any digital resource that can be reused to support learning, with specific functions and objectives. LO specifications are commonly offered in the SCORM model without considering group activities. This deficiency is overcome by the solution presented in this paper, which specifies LOs for e-learning group activities based on the SCORM model. The solution allows the creation of dynamic objects that include content and software resources for collaborative learning processes. This results in a generalization of the LO definition and a contribution to e-learning specifications.
Abstract:
One of the goals of an e-learning environment is to address the individual needs of students during the learning process. Adapting contents, activities, and tools into different visualizations or into a variety of content types is an important feature of such environments, giving users the sense that the same system provides workplaces suited to their profiles. Nevertheless, to achieve an efficient personalization process, it is important to investigate aspects of student behaviour while considering the context in which the interaction happens. The goal of this paper is to present an approach for identifying the student learning profile by analyzing the context of interaction. In addition, analyzing the learning profile along different dimensions allows the system to deal with different learning focuses.
Abstract:
In this paper, a framework for detecting human skin in digital images is proposed. The framework is composed of a training phase and a detection phase. A skin class model is learned during the training phase by processing several training images in a hybrid and incremental fuzzy learning scheme. This scheme combines unsupervised and supervised learning: unsupervised, by fuzzy clustering, to obtain clusters of color groups from the training images; and supervised, to select the groups that represent skin color. At the end of the training phase, aggregation operators are used to combine the selected groups into a skin model. In the detection phase, the learned skin model is used to detect human skin efficiently. Experimental results show robust and accurate human skin detection by the proposed framework.
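The following is a minimal, self-contained sketch (not the authors' implementation) of the hybrid scheme described: unsupervised fuzzy clustering of training pixel colors, a supervised step that selects the clusters representing skin, and detection by thresholding an aggregated membership. The color values, labels, thresholds, and number of clusters are all assumptions for illustration.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and the membership matrix U (n x c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]             # membership-weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-9
        U = 1.0 / d ** (2.0 / (m - 1.0))                          # inverse-distance memberships
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Hypothetical training pixels in a normalized color space, with manual skin labels
pixels = np.array([[0.55, 0.35], [0.57, 0.33], [0.30, 0.30], [0.28, 0.32],
                   [0.56, 0.34], [0.31, 0.29]])
is_skin = np.array([True, True, False, False, True, False])

centers, U = fuzzy_cmeans(pixels, c=2)
# Supervised step: keep the clusters whose total membership mass comes mostly from skin pixels
skin_frac = U[is_skin].sum(axis=0) / U.sum(axis=0)
skin_clusters = np.where(skin_frac > 0.5)[0]

def detect_skin(pixel, threshold=0.6):
    """Aggregate (max) membership to the selected skin clusters and threshold it."""
    d = np.linalg.norm(pixel - centers, axis=1) + 1e-9
    u = 1.0 / d ** 2.0          # memberships for m = 2
    u /= u.sum()
    return max(u[k] for k in skin_clusters) > threshold

print(detect_skin(np.array([0.56, 0.34])))   # a skin-like color
```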
Abstract:
This paper investigates how to make improved action selection for online policy learning in robotic scenarios using reinforcement learning (RL) algorithms. Since finding control policies using any RL algorithm can be very time consuming, we propose to combine RL algorithms with heuristic functions for selecting promising actions during the learning process. With this aim, we investigate the use of heuristics for increasing the rate of convergence of RL algorithms and contribute with a new learning algorithm, Heuristically Accelerated Q-learning (HAQL), which incorporates heuristics for action selection to the Q-Learning algorithm. Experimental results on robot navigation show that the use of even very simple heuristic functions results in significant performance enhancement of the learning rate.
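Below is a minimal sketch of the heuristic-biased action selection idea, not the authors' exact HAQL implementation: actions are chosen greedily over Q(s,a) + ξ·H(s,a), while the value update remains standard Q-learning. The corridor task, heuristic, and parameters are hypothetical.

```python
import random
from collections import defaultdict

def haql_action(Q, H, state, actions, xi=1.0, epsilon=0.1):
    """Heuristically accelerated action selection: epsilon-greedy over Q(s,a) + xi*H(s,a).
    The heuristic only biases action choice; it does not enter the value update."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)] + xi * H(state, a))

def q_update(Q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    """Standard Q-learning update."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# Hypothetical 1-D corridor: states 0..4, goal at state 4; the heuristic prefers moving right
ACTIONS = ["left", "right"]
heuristic = lambda s, a: 1.0 if a == "right" else 0.0
Q = defaultdict(float)
for _ in range(200):                       # episodes
    s = 0
    while s != 4:
        a = haql_action(Q, heuristic, s, ACTIONS)
        s_next = min(s + 1, 4) if a == "right" else max(s - 1, 0)
        q_update(Q, s, a, 1.0 if s_next == 4 else 0.0, s_next, ACTIONS)
        s = s_next
print(Q[(3, "right")], Q[(3, "left")])     # learned values near the goal
```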
Abstract:
How does knowledge management (KM) by a government agency responsible for environmental impact assessment (EIA) potentially contribute to better environmental assessment and management practice? Staff members at government agencies in charge of the EIA process are knowledge workers who perform judgement-oriented tasks highly reliant on individual expertise, but also grounded in the agency's knowledge accumulated over the years. Part of an agency's knowledge can be codified and stored in an organizational memory, but it is subject to decay or loss if not properly managed. The EIA agency operating in Western Australia was used as a case study. Its KM initiatives were reviewed, knowledge repositories were identified, and staff were surveyed to gauge the utilisation and effectiveness of such repositories in enabling them to perform EIA tasks. Key elements of KM are the preparation of substantive guidance and spatial information management. It was found that the treatment of cumulative impacts on the environment is very limited and that information derived from project follow-up is not properly captured and stored, and thus not used to create new knowledge or to improve practice and effectiveness. Other opportunities for improving organizational learning include the use of after-action reviews. The lessons about knowledge management in EIA practice gained from the Western Australian experience should be of value to agencies worldwide seeking to understand where best to direct their resources for their own knowledge repositories and environmental management practice. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
In this paper, we propose an approach to the transient and steady-state analysis of the affine combination of one fast and one slow adaptive filter. The theoretical models are based on expressions for the excess mean-square error (EMSE) and cross-EMSE of the component filters, which allow their application to different combinations of algorithms, such as least mean-squares (LMS), normalized LMS (NLMS), and the constant modulus algorithm (CMA), considering white or colored inputs and stationary or nonstationary environments. Since the desired universal behavior of the combination depends on the correct estimation of the mixing parameter at every instant, its adaptation is also taken into account in the transient analysis. Furthermore, we propose normalized algorithms for the adaptation of the mixing parameter that exhibit good performance. Good agreement between analysis and simulation results is always observed.
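As a hedged illustration of the scheme analyzed above (not the authors' derivation or code), the sketch below combines a fast and a slow LMS filter with an affine mixing parameter adapted by a normalized stochastic-gradient rule. The step sizes, the clipping range, and the toy identification scenario are assumptions made only for this example.

```python
import numpy as np

def affine_combination_lms(d, x, order=4, mu_fast=0.1, mu_slow=0.01, mu_lam=0.1, eps=1e-2):
    """Affine combination of a fast and a slow LMS filter for system identification.
    The mixing parameter lam is adapted with a normalized stochastic-gradient rule and,
    unlike in a convex combination, is not restricted to [0, 1]."""
    w_fast = np.zeros(order)
    w_slow = np.zeros(order)
    lam, p = 0.5, eps                      # mixing parameter and power estimate of (y1 - y2)
    y_out = np.zeros(len(d))
    for n in range(order - 1, len(d)):
        u = x[n - order + 1:n + 1][::-1]   # regressor, most recent sample first
        y1, y2 = w_fast @ u, w_slow @ u
        y = lam * y1 + (1.0 - lam) * y2
        e1, e2, e = d[n] - y1, d[n] - y2, d[n] - y
        w_fast += mu_fast * e1 * u         # each component filter adapts with its own error
        w_slow += mu_slow * e2 * u
        diff = y1 - y2
        p = 0.9 * p + 0.1 * diff ** 2      # running power of (y1 - y2) for normalization
        lam = float(np.clip(lam + mu_lam * e * diff / (p + eps), -1.0, 2.0))
        y_out[n] = y
    return y_out, lam

# Hypothetical identification of an unknown FIR channel observed in low noise
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
h = np.array([0.5, -0.3, 0.2, 0.1])
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
y, lam = affine_combination_lms(d, x)
print("final mixing parameter:", round(lam, 3))
```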
Abstract:
We address here aspects of the implementation of a memory evolutive system (MES), based on the model proposed by A. Ehresmann and J. Vanbremeersch (2007), by means of a simulated network of spiking neurons with time-dependent plasticity. We point out the advantages and challenges of applying category theory to the representation of cognition using the MES architecture. We then discuss the minimum requirements that an artificial neural network (ANN) should fulfill so that it is capable of expressing the categories, and the mappings between them, underlying the MES. We conclude that a pulsed ANN based on Izhikevich's formal neuron with STDP (spike-timing-dependent plasticity) has sufficient dynamical properties to achieve these requirements, provided it can cope with the topological requirements. Finally, we present some perspectives for future research concerning the proposed ANN topology.
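For readers unfamiliar with the formal neuron named above, the following is a minimal sketch of the Izhikevich dynamics together with a pair-based STDP weight-change rule; the parameters, input current, and time constants are illustrative defaults, not values from the paper.

```python
import numpy as np

def izhikevich_spikes(I=10.0, T=1000, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """Simulate a single Izhikevich neuron (regular-spiking parameters) for T ms
    with constant input current I; returns the spike times in ms."""
    v, u = -65.0, b * -65.0
    spikes = []
    for t in np.arange(0, T, dt):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:              # spike: reset membrane potential and recovery variable
            spikes.append(t)
            v, u = c, u + d
    return spikes

def stdp_dw(dt_ms, A_plus=0.01, A_minus=0.012, tau=20.0):
    """Pair-based STDP weight change: potentiate when the presynaptic spike precedes
    the postsynaptic one (dt = t_post - t_pre > 0), depress otherwise."""
    return A_plus * np.exp(-dt_ms / tau) if dt_ms > 0 else -A_minus * np.exp(dt_ms / tau)

print(len(izhikevich_spikes()), "spikes;", "dw for +5 ms pairing:", round(stdp_dw(5.0), 4))
```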
Abstract:
As is well known, Hessian-based adaptive filters (such as the recursive least-squares algorithm (RLS) for supervised adaptive filtering, or the Shalvi-Weinstein algorithm (SWA) for blind equalization) converge much faster than gradient-based algorithms [such as the least-mean-squares algorithm (LMS) or the constant-modulus algorithm (CMA)]. However, when the problem is tracking a time-variant filter, the issue is not so clear-cut: there are environments for which each family presents better performance. Given this, we propose the use of a convex combination of algorithms from different families to obtain an algorithm with superior tracking capability. We show the potential of this combination and provide a unified theoretical model for the steady-state excess mean-square error of convex combinations of gradient- and Hessian-based algorithms, assuming a random-walk model for the parameter variations. The proposed model is valid for algorithms of the same or different families, and for supervised (LMS and RLS) or blind (CMA and SWA) algorithms.
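The sketch below is a hedged illustration of one update step of such a convex combination, here of an LMS and an RLS filter, using the common sigmoid parameterization that keeps the mixing weight in (0, 1); it is not the authors' analysis or implementation, and the step sizes, the clamp on the auxiliary variable, and the toy data stream are assumptions.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def convex_mix_step(d_n, u, w_lms, w_rls, P, a, mu=0.01, lam_rls=0.995, mu_a=1.0):
    """One step of a convex combination of an LMS (gradient-based) and an RLS
    (Hessian-based) filter. The mixing weight lam = sigmoid(a) stays in (0, 1);
    a is adapted by a simple stochastic gradient on the combined error."""
    y1, y2 = w_lms @ u, w_rls @ u
    lam = sigmoid(a)
    y = lam * y1 + (1.0 - lam) * y2
    e, e1, e2 = d_n - y, d_n - y1, d_n - y2
    w_lms = w_lms + mu * e1 * u                        # LMS update
    k = P @ u / (lam_rls + u @ P @ u)                  # RLS gain
    w_rls = w_rls + k * e2
    P = (P - np.outer(k, u @ P)) / lam_rls
    a += mu_a * e * (y1 - y2) * lam * (1.0 - lam)      # adapt the mixing parameter
    a = float(np.clip(a, -4.0, 4.0))                   # keep the sigmoid out of full saturation
    return y, w_lms, w_rls, P, a

# Hypothetical system-identification stream (stationary unknown filter, low noise)
rng = np.random.default_rng(1)
order, h = 3, np.array([0.7, -0.2, 0.1])
w1, w2, P, a = np.zeros(order), np.zeros(order), 1e3 * np.eye(order), 0.0
x = rng.standard_normal(1500)
for n in range(order - 1, len(x)):
    u = x[n - order + 1:n + 1][::-1]
    d_n = h @ u + 0.01 * rng.standard_normal()
    _, w1, w2, P, a = convex_mix_step(d_n, u, w1, w2, P, a)
print("steady-state mixing weight:", round(float(sigmoid(a)), 3))
```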
Abstract:
This work presents a method for predicting resource availability in opportunistic grids by means of use pattern analysis (UPA), a technique based on unsupervised learning methods. The prediction method is based on the assumption that there exist several classes of computational resource use patterns, which can be used to predict resource availability. Trace-driven simulations validate this basic assumption and also provide the parameter settings for accurate learning of resource use patterns. Experiments with an implementation of the UPA method show the feasibility of its use in the scheduling of grid tasks with very little overhead. The experiments also demonstrate the method's superiority over other predictive and non-predictive methods. An adaptive prediction method is suggested to deal with the lack of training data at initialization. Further adaptive behaviour is motivated by experiments showing that, in some special environments, reliable resource use patterns may not always be detected. Copyright (C) 2009 John Wiley & Sons, Ltd.
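To make the use-pattern idea concrete, here is a minimal sketch (not the UPA implementation itself): historical usage traces are clustered without supervision, and a machine's availability for the rest of the day is predicted by matching its partial trace to the nearest pattern. The traces, the idle threshold, and the choice of plain k-means as the unsupervised learner are assumptions for illustration.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Tiny k-means over daily usage traces (rows = machines, columns = hourly busy fraction)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(axis=2), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return centers

def predict_availability(partial_trace, centers, threshold=0.5):
    """Match the hours observed so far to the closest use-pattern class and predict,
    for each remaining hour of the day, whether the resource will be idle."""
    h = len(partial_trace)
    best = centers[np.argmin(((centers[:, :h] - partial_trace) ** 2).sum(axis=1))]
    return best[h:] < threshold

# Hypothetical training traces: 24 hourly busy fractions per machine, two usage classes
rng = np.random.default_rng(1)
busy_machines = np.clip(rng.normal(0.7, 0.1, (20, 24)), 0, 1)   # heavily used all day
idle_machines = np.clip(rng.normal(0.1, 0.05, (20, 24)), 0, 1)  # mostly idle
centers = kmeans(np.vstack([busy_machines, idle_machines]), k=2)

# A machine that has been nearly idle for the first 9 hours of the current day
print(predict_availability(np.full(9, 0.05), centers))
```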