9 results for "Game-based learning model"
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
Susceptible-infective-removed (SIR) models are commonly used for representing the spread of contagious diseases. An SIR model can be described in terms of a probabilistic cellular automaton (PCA), where each individual (corresponding to a cell of the PCA lattice) is connected to others by a random network favoring local contacts. Here, this framework is employed for investigating the consequences of applying vaccine against the propagation of a contagious infection, by considering vaccination as a game, in the sense of game theory. In this game, the players are the government and the susceptible newborns. In order to maximize their own payoffs, the government attempts to reduce the costs of combating the epidemic, and the newborns may be vaccinated only when infective individuals are found in their neighborhoods and/or the government promotes an immunization program. As a consequence of these strategies, supported by cost-benefit analysis and perceived risk, numerical simulations show that the disease is not fully eliminated and the government implements quasi-periodic vaccination campaigns.
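The SIR-on-PCA dynamics described in this abstract can be sketched in a few lines. The following is a minimal generic illustration, not the authors' model: it uses a plain square lattice with nearest-neighbour (local) contacts only, synchronous updates, and hypothetical rates `beta` and `gamma`; the long-range random-network links and the vaccination game are omitted.

```python
import random

S, I, R = 0, 1, 2  # susceptible, infective, removed

def sir_pca_step(grid, beta, gamma, rng):
    """One synchronous update of an SIR probabilistic cellular automaton
    on an n x n lattice with nearest-neighbour contacts (periodic borders)."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j] == S:
                # each infective neighbour independently transmits with probability beta
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    if grid[(i + di) % n][(j + dj) % n] == I and rng.random() < beta:
                        new[i][j] = I
                        break
            elif grid[i][j] == I and rng.random() < gamma:
                new[i][j] = R  # removal (recovery or death)
    return new

rng = random.Random(42)
n = 20
grid = [[S] * n for _ in range(n)]
grid[n // 2][n // 2] = I  # seed a single infective at the centre

for _ in range(30):
    grid = sir_pca_step(grid, beta=0.4, gamma=0.2, rng=rng)

flat = [c for row in grid for c in row]
s, i, r = flat.count(S), flat.count(I), flat.count(R)
```

Because the old grid is read while the new one is written, all cells update simultaneously, which is the defining property of a cellular automaton step.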
Abstract:
An updated flow pattern map was developed for CO2 on the basis of the previous Cheng-Ribatski-Wojtan-Thome CO2 flow pattern map [1,2] to extend it to a wider range of conditions. A new annular flow to dryout transition (A-D) and a new dryout to mist flow transition (D-M) were proposed here. In addition, a bubbly flow region, which generally occurs at high mass velocities and low vapor qualities, was added to the updated flow pattern map. The updated flow pattern map is applicable to a much wider range of conditions: tube diameters from 0.6 to 10 mm, mass velocities from 50 to 1500 kg/m² s, heat fluxes from 1.8 to 46 kW/m², and saturation temperatures from -28 to +25 °C (reduced pressures from 0.21 to 0.87). The updated flow pattern map was compared to independent experimental data of flow patterns for CO2 in the literature, and it predicts the flow patterns well. Then, a database of CO2 two-phase flow pressure drop results from the literature was set up and compared to the leading empirical pressure drop models: the correlations by Chisholm [3], Friedel [4], Grönnerud [5], and Müller-Steinhagen and Heck [6], a modified Chisholm correlation by Yoon et al. [7], and the flow pattern based model of Moreno Quibén and Thome [8-10]. None of these models was able to predict the CO2 pressure drop data well. Therefore, a new flow pattern based phenomenological model of two-phase flow frictional pressure drop for CO2 was developed by modifying the model of Moreno Quibén and Thome using the updated flow pattern map, and it predicts the CO2 pressure drop database quite well overall.
Abstract:
A Learning Object (LO) is any digital resource that can be reused to support learning with specific functions and objectives. LO specifications are commonly offered in the SCORM model without considering group activities. This deficiency was overcome by the solution presented in this paper, which specifies LOs for e-learning group activities based on the SCORM model. The solution allows the creation of dynamic objects that include content and software resources for collaborative learning processes. This results in a generalization of the LO definition and a contribution to e-learning specifications.
Abstract:
The purpose of this investigation was to evaluate three learning methods for teaching basic oral surgical skills. Thirty predoctoral dental students without any surgical knowledge or previous surgical experience were divided into three groups (n=10 each) according to instructional strategy: Group 1, active learning; Group 2, text reading only; and Group 3, text reading and video demonstration. After instruction, the apprentices were allowed to practice incision, dissection, and suture maneuvers in a bench learning model. During the students' performance, a structured practice evaluation test to account for correct or incorrect maneuvers was applied by trained observers. Evaluation tests were repeated after thirty and sixty days. Data from the resulting scores between groups and periods were considered for statistical analysis (ANOVA and Tukey-Kramer) with a significance level of α=0.05. Results showed that the active learning group presented the significantly best learning outcomes related to immediate assimilation of surgical procedures compared to the other groups. All groups' results were similar sixty days after the first practice. Assessment tests were fundamental to evaluate teaching strategies and allowed theoretical and proficiency learning feedback. Repetition and interactive practice promoted retention of knowledge of basic oral surgical skills.
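The between-group comparison in this abstract rests on a one-way ANOVA. As a generic illustration of the F statistic it reports against (the scores below are made up, not the study's data):

```python
def one_way_anova_f(groups):
    """F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)                        # number of groups
    n_total = sum(len(g) for g in groups)  # total observations
    grand_mean = sum(sum(g) for g in groups) / n_total
    means = [sum(g) / len(g) for g in groups]
    # sum of squares between groups (weighted by group size) and within groups
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# hypothetical scores for three instructional groups
f = one_way_anova_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
```

The resulting F would then be compared against the critical value for (k-1, N-k) degrees of freedom at α=0.05, with a post hoc test such as Tukey-Kramer locating which pairs of groups differ.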
Abstract:
The purpose of this research was to evaluate educational strategies applied to a tele-education leprosy course. The curriculum was for members of the Brazilian Family Health Team and was made available through the Sao Paulo Telehealth Portal. The course educational strategy was based on a constructivist learning model where interactivity was emphasized. Authors assessed motivational aspects of the course using the WebMAC Professional tool. Forty-eight healthcare professionals answered the evaluation questionnaire. Adequate internal consistency was achieved (Cronbach's alpha = 0.79). More than 95% of queried items received good evaluations. Multidimensional analysis according to motivational groups of questions (STIMULATING, MEANINGFUL, ORGANIZED, EASY-TO-USE) showed high agreement. According to WebMAC's criteria, it was considered an "awesome course." The tele-educational strategies implemented for leprosy disclosed high motivational scores.
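Cronbach's alpha, used above to check internal consistency, is straightforward to compute from a respondents-by-items score table via alpha = k/(k-1) * (1 - sum of item variances / variance of totals). A minimal sketch with made-up ratings:

```python
def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(rows):
    """rows: one list of item scores per respondent."""
    k = len(rows[0])  # number of items
    item_vars = [variance([r[j] for r in rows]) for j in range(k)]
    total_var = variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# two perfectly consistent items give the maximum alpha of 1.0
alpha = cronbach_alpha([[1, 1], [2, 2], [3, 3]])
```

Values around 0.7 and above, like the 0.79 reported here, are conventionally read as adequate internal consistency.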
Abstract:
Case-Based Reasoning is a methodology for problem solving based on past experiences. It tries to solve a new problem by retrieving and adapting previously known solutions of similar problems. However, retrieved solutions generally require adaptation before they can be applied to new contexts. One of the major challenges in Case-Based Reasoning is the development of an efficient methodology for case adaptation. The most widely used form of adaptation employs hand-coded adaptation rules, which demand a significant knowledge acquisition and engineering effort. An alternative for overcoming the difficulties associated with acquiring adaptation knowledge has been the use of hybrid approaches and automatic learning algorithms. We investigate hybrid approaches for case adaptation that employ Machine Learning algorithms to automatically learn adaptation knowledge from a case base and apply it to adapt retrieved solutions. To verify the potential of the proposed approaches, they are experimentally compared with individual Machine Learning techniques. The results indicate that these approaches efficiently acquire case adaptation knowledge: the combination of the Instance-Based Learning and Inductive Learning paradigms, together with a data set of adaptation patterns, yields adaptations of the retrieved solutions with high predictive accuracy.
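The retrieve-then-adapt cycle described in this abstract can be sketched with a distance-weighted instance-based learner standing in for the learned adaptation knowledge. This is a generic illustration, not the paper's hybrid system; the helper names and the toy case base are invented for the example.

```python
import math

def retrieve(case_base, query, k=3):
    """Return the k cases whose problem descriptions are closest to the query."""
    return sorted(case_base, key=lambda case: math.dist(case[0], query))[:k]

def adapt(neighbours, query, eps=1e-9):
    """Instance-based adaptation: distance-weighted average of the
    retrieved solutions (a stand-in for learned adaptation rules)."""
    weights = [1.0 / (math.dist(feat, query) + eps) for feat, _ in neighbours]
    return sum(w * sol for w, (_, sol) in zip(weights, neighbours)) / sum(weights)

# toy case base: (problem features, numeric solution)
cases = [((0.0, 0.0), 0.0), ((1.0, 0.0), 1.0), ((0.0, 1.0), 1.0), ((1.0, 1.0), 2.0)]
solution = adapt(retrieve(cases, (1.0, 1.0)), (1.0, 1.0))
```

In a full CBR system the adaptation step would instead apply rules induced from a data set of adaptation patterns, as the abstract describes, rather than this simple weighted average.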
Abstract:
Lellis-Santos C, Giannocco G, Nunes MT. The case of thyroid hormones: how to learn physiology by solving a detective case. Adv Physiol Educ 35: 219-226, 2011; doi:10.1152/advan.00135.2010. Thyroid diseases are prevalent among endocrine disorders, and careful evaluation of patients' symptoms is a very important part in their diagnosis. Developing new pedagogical strategies, such as problem-based learning (PBL), is extremely important to stimulate and encourage medical and biomedical students to learn thyroid physiology and identify the signs and symptoms of thyroid dysfunction. The present study aimed to create a new pedagogical approach to build deep knowledge about hypo-/hyperthyroidism by proposing a hands-on activity based on a detective case, using alternative materials in place of laboratory animals. After receiving a description of a criminal story involving changes in thyroid hormone economy, students collected data from clues, such as body weight, mesenteric vascularization, visceral fat, heart and thyroid size, heart rate, and thyroid-stimulating hormone serum concentration to solve the case. Nevertheless, there was one missing clue for each panel of data. Four different materials were proposed to perform the same practical lesson. Animals, pictures, small stuffed toy rats, and illustrations were all effective to promote learning, and the detective case context was considered by students as inviting and stimulating. The activity can be easily performed independently of the institution's purchasing power. The practical lesson stimulated the scientific method of data collection and organization, discussion, and review of thyroid hormone actions to solve the case. Hence, this activity provides a new strategy and alternative materials to teach without animal euthanization.
Abstract:
The Brazilian Amazon is one of the most rapidly developing agricultural areas in the world and represents a potentially large future source of greenhouse gases from land clearing and subsequent agricultural management. In an integrated approach, we estimate the greenhouse gas dynamics of natural ecosystems and agricultural ecosystems after clearing in the context of a future climate. We examine scenarios of deforestation and postclearing land use to estimate the future (2006-2050) impacts on carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O) emissions from the agricultural frontier state of Mato Grosso, using a process-based biogeochemistry model, the Terrestrial Ecosystems Model (TEM). We estimate a net emission of greenhouse gases from Mato Grosso, ranging from 2.8 to 15.9 Pg CO2-equivalents (CO2-e) from 2006 to 2050. Deforestation is the largest source of greenhouse gas emissions over this period, but land uses following clearing account for a substantial portion (24-49%) of the net greenhouse gas budget. Due to land-cover and land-use change, there is a small foregone carbon sequestration of 0.2-0.4 Pg CO2-e by natural forests and cerrado between 2006 and 2050. Both deforestation and future land-use management play important roles in the net greenhouse gas emissions of this frontier, suggesting that both should be considered in emissions policies. We find that avoided deforestation remains the best strategy for minimizing future greenhouse gas emissions from Mato Grosso.
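The CO2-equivalent bookkeeping in this abstract combines the three gases through their global warming potentials (GWPs). As a generic illustration — the GWP values below are the commonly cited IPCC AR4 100-year figures, an assumption for this sketch rather than numbers taken from the study:

```python
# 100-year global warming potentials (IPCC AR4 values; an assumption for this sketch)
GWP_100 = {"CO2": 1.0, "CH4": 25.0, "N2O": 298.0}

def co2_equivalents(emissions):
    """emissions: gas name -> mass emitted; returns total mass in CO2-equivalents.
    All masses must share one unit (e.g. Pg)."""
    return sum(mass * GWP_100[gas] for gas, mass in emissions.items())

# hypothetical emissions: 100 units CO2, 2 units CH4, 0.5 units N2O
total = co2_equivalents({"CO2": 100.0, "CH4": 2.0, "N2O": 0.5})  # 100 + 50 + 149
```

The weighting explains why comparatively small CH4 and N2O fluxes from post-clearing agriculture can claim a sizable share of a CO2-dominated budget.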
Abstract:
Objective: We carry out a systematic assessment on a suite of kernel-based learning machines while coping with the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of the criteria of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted. Four wavelet basis functions were considered in this study. Then, we provide the average accuracy (i.e., cross-validation error) values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value.
Conclusions: Overall, the results evidence that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value as well as the choice of the feature extractor are critical decisions, although the choice of the wavelet family seems not to be so relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged across all types of machines, involving regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality).
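The two kernels whose radius parameter is swept in this study can be written down directly. A minimal sketch; note that parameterization conventions for the radius vary across implementations, so the exact scaling below is an assumption rather than the paper's definition:

```python
import math

def gaussian_rbf(x, y, radius):
    """k(x, y) = exp(-||x - y||^2 / (2 * radius^2))"""
    return math.exp(-math.dist(x, y) ** 2 / (2 * radius ** 2))

def exponential_rbf(x, y, radius):
    """k(x, y) = exp(-||x - y|| / (2 * radius^2)) — distance enters linearly."""
    return math.exp(-math.dist(x, y) / (2 * radius ** 2))

# identical inputs give kernel value 1; similarity decays with distance
same = gaussian_rbf((0.0, 0.0), (0.0, 0.0), radius=1.0)
far = gaussian_rbf((0.0, 0.0), (3.0, 4.0), radius=1.0)
```

The radius controls how quickly similarity decays, which is why sweeping it traces out the zones of stability and sharp variation that the sensitivity profiles describe.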