689 results for Game-based learning model
Abstract:
We describe a general likelihood-based 'mixture model' for inferring phylogenetic trees from gene-sequence or other character-state data. The model accommodates cases in which different sites in the alignment evolve in qualitatively distinct ways, but does not require prior knowledge of these patterns or partitioning of the data. We call this qualitative variability in the pattern of evolution across sites "pattern-heterogeneity" to distinguish it from both a homogeneous process of evolution and from one characterized principally by differences in rates of evolution. We present studies to show that the model correctly retrieves the signals of pattern-heterogeneity from simulated gene-sequence data, and we apply the method to protein-coding genes and to a ribosomal 12S data set. The mixture model outperforms conventional partitioning in both these data sets. We implement the mixture model such that it can simultaneously detect rate- and pattern-heterogeneity. The model simplifies to a homogeneous model or a rate-variability model as special cases, and therefore always performs at least as well as these two approaches, and often considerably improves upon them. We make the model available within a Bayesian Markov-chain Monte Carlo framework for phylogenetic inference, as an easy-to-use computer program.
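The core idea of such a mixture model can be sketched briefly: each site's likelihood is a weighted sum of its likelihoods under the component patterns, so no prior partitioning of sites is needed. A minimal illustration follows; in practice the per-component site likelihoods would come from Felsenstein's pruning algorithm (not shown), and all names and numbers here are illustrative assumptions, not the paper's implementation.

```python
import math

def mixture_log_likelihood(site_component_likelihoods, weights):
    """Log-likelihood of an alignment under a pattern-mixture model.

    site_component_likelihoods: one entry per site, each a list of
    per-component likelihoods L_k(site) computed under component k's
    substitution pattern.
    weights: mixture weights w_k, assumed to sum to 1.
    """
    total = 0.0
    for comps in site_component_likelihoods:
        # Each site's likelihood is the weighted sum over components,
        # so sites are never assigned to a single pattern a priori.
        site_lik = sum(w * lk for w, lk in zip(weights, comps))
        total += math.log(site_lik)
    return total

# Two sites, two hypothetical patterns:
ll = mixture_log_likelihood([[0.02, 0.08], [0.05, 0.01]], [0.6, 0.4])
```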
Abstract:
A self-study course for learning to program using the C programming language has been developed. A Learning Object approach was used in the design of the course. One of the benefits of the Learning Object approach is that the learning material can be reused for different purposes. The course developed is designed so that learners can choose the pedagogical approach most suited to their personal learning requirements. For all learning approaches a set of common Assessment Learning Objects (ALOs, or tests) has been created. The design of formative assessments with ALOs can be carried out by the Instructional Designer grouping ALOs to correspond to a specific assessment intention. The course is non-credit earning, so there is no summative assessment; all assessment is formative. In this paper, examples of ALOs are presented together with their uses as decided by the Instructional Designer and the learner. Personalisation of the formative assessment of skills can be decided by the Instructional Designer or the learner using a repository of pre-designed ALOs. The process of combining ALOs can be carried out manually or in a semi-automated way using metadata that describes the ALO and the skill it is designed to assess.
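The semi-automated combination step can be sketched as metadata filtering over a repository. This is a hedged illustration only: the dict-based records and field names below stand in for real Learning Object metadata (e.g. an IEEE LOM record) and are not taken from the paper.

```python
def select_alos(repository, skill, max_items=None):
    """Return ALOs whose metadata lists the target skill.

    repository: list of dicts, each with an "id" and a "skills" list
    (a hypothetical flat stand-in for full Learning Object metadata).
    """
    matches = [alo for alo in repository if skill in alo["skills"]]
    return matches[:max_items] if max_items is not None else matches

# Grouping ALOs into a formative assessment for one skill:
repo = [
    {"id": "alo-1", "skills": ["loops", "arrays"]},
    {"id": "alo-2", "skills": ["pointers"]},
    {"id": "alo-3", "skills": ["loops"]},
]
assessment = select_alos(repo, "loops")
```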
Abstract:
New construction algorithms for radial basis function (RBF) network modelling are introduced, based on the A-optimality and D-optimality experimental design criteria respectively. We utilize new cost functions, based on experimental design criteria, for model selection that simultaneously optimize model approximation and either parameter variance (A-optimality) or model robustness (D-optimality). The proposed approaches build on the forward orthogonal least-squares (OLS) algorithm: the new A-optimality- and D-optimality-based cost functions are constructed on the basis of an orthogonalization process that gains computational advantages and hence maintains the inherent efficiency of the conventional forward OLS approach. The proposed approach enhances the very popular forward-OLS-based RBF model construction method, since the resultant RBF models are constructed such that system dynamics approximation capability, model adequacy and robustness are optimized simultaneously. The numerical examples provided show significant improvement based on the D-optimality design criterion, demonstrating that there is significant room for improvement in modelling via the popular RBF neural network.
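The flavour of forward OLS selection with a D-optimality penalty can be sketched as follows. This is a simplified assumption-laden illustration, not the paper's algorithm: each candidate column is Gram-Schmidt-orthogonalized against the already-selected columns, and the score combines error reduction with a log term in the orthogonal energy w'w so that ill-conditioned (near-dependent) regressors are penalized; the exact combined cost, weighting `beta`, and all names are hypothetical.

```python
import numpy as np

def forward_ols_doptimality(Phi, y, n_terms, beta=1e-3):
    """Greedy forward selection of n_terms columns of Phi to model y.

    Score = SSE reduction of the orthogonalized candidate
            + beta * log(w'w)   (D-optimality-style penalty, assumed form).
    """
    n, m = Phi.shape
    selected, W = [], []          # chosen indices and their orthogonalized columns
    for _ in range(n_terms):
        best_j, best_w, best_score = None, None, -np.inf
        for j in range(m):
            if j in selected:
                continue
            w = Phi[:, j].astype(float).copy()
            for wk in W:          # orthogonalize against selected columns
                w = w - (wk @ Phi[:, j]) / (wk @ wk) * wk
            energy = w @ w
            if energy < 1e-12:    # numerically dependent: skip
                continue
            g = (w @ y) / energy
            score = g * g * energy + beta * np.log(energy)
            if score > best_score:
                best_j, best_w, best_score = j, w, score
        selected.append(best_j)
        W.append(best_w)
    return selected

# Column 0 explains y exactly; column 1 is nearly useless:
y = np.array([1.0, 2.0, 3.0, 4.0])
Phi = np.column_stack([y, np.array([1.0, 0.0, 0.0, 0.0])])
chosen = forward_ols_doptimality(Phi, y, n_terms=1)
```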
Abstract:
A fundamental principle in practical nonlinear data modeling is the parsimonious principle of constructing the minimal model that explains the training data well. Leave-one-out (LOO) cross validation is often used to estimate generalization errors when choosing amongst different network architectures (M. Stone, "Cross-validatory choice and assessment of statistical predictions", J. R. Statist. Soc., Ser. B, 36, pp. 111-147, 1974). Based upon the minimization of LOO criteria (the mean square of the LOO errors for regression and the LOO misclassification rate for classification), we present two backward elimination algorithms as model post-processing procedures for regression and classification problems. The proposed backward elimination procedures exploit an orthogonalization procedure to ensure orthogonality between the subspace spanned by the pruned model and the deleted regressor. Subsequently, it is shown that the LOO criteria used in both algorithms can be calculated via analytic recursive formulae, as derived in this contribution, without actually splitting the estimation data set, so as to reduce computational expense. Compared to most other model construction methods, the proposed algorithms are advantageous in several aspects: (i) there are no tuning parameters to be optimized through an extra validation data set; (ii) the procedure is fully automatic, without an additional stopping criterion; and (iii) the model structure selection is directly based on model generalization performance. Illustrative examples on regression and classification demonstrate that the proposed algorithms are viable post-processing methods to prune a model to gain extra sparsity and improved generalization.
Abstract:
We compared output from 3 dynamic process-based models (DMs: ECOSSE, MILLENNIA and the Durham Carbon Model) and 9 bioclimatic envelope models (BCEMs; including BBOG ensemble and PEATSTASH) ranging from simple threshold to semi-process-based models. Model simulations were run at 4 British peatland sites using historical climate data and climate projections under a medium (A1B) emissions scenario from the 11-RCM (regional climate model) ensemble underpinning UKCP09. The models showed that blanket peatlands are vulnerable to projected climate change; however, predictions varied between models as well as between sites. All BCEMs predicted a shift from presence to absence of a climate associated with blanket peat, where the sites with the lowest total annual precipitation were closest to the presence/absence threshold. DMs showed a more variable response. ECOSSE predicted a decline in net C sink and shift to net C source by the end of this century. The Durham Carbon Model predicted a smaller decline in the net C sink strength, but no shift to net C source. MILLENNIA predicted a slight overall increase in the net C sink. In contrast to the BCEM projections, the DMs predicted that the sites with coolest temperatures and greatest total annual precipitation showed the largest change in carbon sinks. In this model inter-comparison, the greatest variation in model output in response to climate change projections was not between the BCEMs and DMs but between the DMs themselves, because of different approaches to modelling soil organic matter pools and decomposition amongst other processes. The difference in the sign of the response has major implications for future climate feedbacks, climate policy and peatland management. Enhanced data collection, in particular monitoring peatland response to current change, would significantly improve model development and projections of future change.
Abstract:
Following a workshop exercise, two models, an individual-based landscape model (IBLM) and a non-spatial life-history model, were used to assess the impact of a fictitious insecticide on populations of skylarks in the UK. The chosen population endpoints were abundance, population growth rate, and the chances of population persistence. Both models used the same life-history descriptors and toxicity profiles as the basis for their parameter inputs. The models differed in that exposure was a pre-determined parameter in the life-history model but an emergent property of the IBLM, and the IBLM required a landscape structure as an input. The model outputs were qualitatively similar between the two models. Under conditions dominated by winter wheat, both models predicted a population decline that was worsened by the use of the insecticide. Under broader habitat conditions, population declines were only predicted for the scenarios where the insecticide was added. Inputs to the models are very different, with the IBLM requiring a large volume of data in order to achieve the flexibility of being able to integrate a range of environmental and behavioural factors. The life-history model has very few explicit data inputs, but some of these relied on extensive prior modelling needing additional data, as described in Roelofs et al. (2005, this volume). Both models have strengths and weaknesses; hence the ideal approach is to combine the use of both simple and comprehensive modelling tools.
Abstract:
The construction field is dynamic and dominated by complex, ill-defined problems for which myriad possible solutions exist. Teaching students to solve construction-related problems requires an understanding of the nature of these complex problems as well as the implementation of effective instructional strategies to address them. Traditional approaches to teaching construction planning and management have long been criticized for presenting students primarily with well-defined problems, an approach inconsistent with the challenges encountered in the industry. However, growing evidence suggests that employing innovative teaching approaches, such as interactive simulation games, offers more active, hands-on and problem-based learning opportunities for students to synthesize and test acquired knowledge in scenarios more closely aligned with real-life construction. Simulation games have demonstrated educational value in increasing student problem-solving skills and motivation through critical attributes such as interaction and feedback-supported active learning. Nevertheless, broad acceptance of simulation games in construction engineering education remains limited. While their benefits are recognized, research on the role of simulation games in educational settings lacks a unified approach to developing, implementing and evaluating these games. To address this gap, this paper provides an overview of the challenges associated with evaluating the effectiveness of simulation games in construction education that still impede their wide adoption. An overview of the current status, as well as the results from the recently implemented Virtual Construction Simulator (VCS) game at Penn State, provides lessons learned and is intended to guide future efforts in developing interactive simulation games so they reach their full potential.
Abstract:
House builders play a key role in controlling the quality of new homes in the UK. The UK house building sector is, however, currently facing pressures to expand supply as well as conform to tougher low carbon planning and Building Regulation requirements; primarily in the areas of sustainability. There is growing evidence that the pressure the UK house building industry is currently under may be eroding build quality and causing an increase in defects. It is found that the prevailing defect literature is limited to the causes, pathology and statistical analysis of defects (and failures). The literature does not extend to examine how house builders individually and collectively, in practice, collect and learn from defects experience in order to reduce the prevalence of defects in future homes. The theoretical lens for the research is organisational learning. This paper contributes to our understanding of organisational learning in construction through a synthesis of current literature. Further, a suitable organisational learning model is adopted. The paper concludes by reporting the research design of an ongoing collaborative action research project with the National House Building Council (NHBC), focused on developing a better understanding of house builders’ localised defects analysis procedures and learning processes.
Abstract:
Purpose – The purpose of this paper is to investigate to what extent one can apply experiential learning theory (ELT) to the public-private partnership (PPP) setting in Russia and to draw insights regarding the nature of the learning cycle. Additionally, the paper assesses whether the PPP case confirms Kolb's ELT. Design/methodology/approach – The case study draws upon primary data which the authors collected by interviewing informants, including a PPP operator's managers, lawyers from Russian law firms and an expert from the National PPP Centre. The authors accomplished data source triangulation in order to ensure a high degree of research validity. Findings – Experiential learning has resulted in a successful and relatively fast PPP project launch without the concessionary framework. The lessons learned include the need for effective stakeholder engagement; avoiding being stuck in bureaucracy such as collaboration with Federal Ministries and the anti-trust agency; avoiding application for government funding, as the approval process is tangled and lengthy; attracting strategic private investors; shaping positive public perception of a PPP project; and making continuous efforts to effectively mitigate the public acceptance risk. Originality/value – The paper contributes to ELT by incorporating the impact of the social environment in the learning model. Additionally, the paper tests the applicability of ELT to learning in a complex organisational setting, i.e. a PPP.
Abstract:
It has been suggested that few students graduate with the skills required for many ecological careers, as field-based learning is said to be in decline in academic institutions. Here, we asked if mobile technology could improve field-based learning, using ability to identify birds as the study metric. We divided a class of ninety-one undergraduate students into two groups for field-based sessions where they were taught bird identification skills. The first group had access to a traditional identification book and the second group were provided with an identification app. We found no difference between the groups in the ability of students to identify birds after three field sessions. Furthermore, we found that students using the traditional book were significantly more likely to identify novel species. Therefore, we find no evidence that mobile technology improved students’ ability to retain what they experienced in the field; indeed, there is evidence that traditional field guides were more useful to students as they attempted to identify new species. Nevertheless, students felt positively about using their own smartphone devices for learning, highlighting that while apps did not lead to an improvement in bird identification ability, they gave greater accessibility to relevant information outside allocated teaching times.
Abstract:
Case-Based Reasoning is a methodology for problem solving based on past experiences. This methodology tries to solve a new problem by retrieving and adapting previously known solutions of similar problems. However, retrieved solutions, in general, require adaptations in order to be applied to new contexts. One of the major challenges in Case-Based Reasoning is the development of an efficient methodology for case adaptation. The most widely used form of adaptation employs hand-coded adaptation rules, which demands a significant knowledge acquisition and engineering effort. An alternative to overcome the difficulties associated with the acquisition of knowledge for case adaptation has been the use of hybrid approaches and automatic learning algorithms for the acquisition of the knowledge used for the adaptation. We investigate the use of hybrid approaches for case adaptation employing Machine Learning algorithms. The investigated approaches automatically learn adaptation knowledge from a case base and apply it to adapt retrieved solutions. In order to verify the potential of the proposed approaches, they are experimentally compared with individual Machine Learning techniques. The results obtained indicate that these approaches are an efficient means of acquiring case adaptation knowledge. They show that the combination of Instance-Based Learning and Inductive Learning paradigms and the use of a data set of adaptation patterns yield adaptations of the retrieved solutions with high predictive accuracy.
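The idea of learning adaptation knowledge from the case base itself can be sketched in a few lines: adaptation "patterns" are (problem-difference, solution-difference) pairs mined from pairs of cases; a new problem is solved by retrieving the nearest case and applying the pattern whose problem-difference best matches the remaining gap. The representation (numeric tuples, scalar solutions) and all names are illustrative assumptions, not the paper's method.

```python
def mine_adaptation_patterns(cases):
    """Learn adaptation knowledge as (problem-difference, solution-difference)
    pairs mined from every ordered pair of distinct cases."""
    patterns = []
    for p1, s1 in cases:
        for p2, s2 in cases:
            if p1 != p2:
                dp = tuple(a - b for a, b in zip(p1, p2))
                patterns.append((dp, s1 - s2))
    return patterns

def adapt(query, cases, patterns):
    """Retrieve the nearest case, then adapt its solution using the
    pattern whose problem-difference best matches the remaining gap."""
    base_p, base_s = min(cases,
                         key=lambda c: sum((a - b) ** 2 for a, b in zip(query, c[0])))
    gap = tuple(a - b for a, b in zip(query, base_p))
    dp, ds = min(patterns,
                 key=lambda pat: sum((a - b) ** 2 for a, b in zip(gap, pat[0])))
    return base_s + ds

# Toy case base where solution = 2 * problem:
cases = [((0.0,), 0.0), ((1.0,), 2.0), ((2.0,), 4.0)]
patterns = mine_adaptation_patterns(cases)
solution = adapt((3.0,), cases, patterns)
```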
Abstract:
The goal of primary science education is to foster children’s interest, develop positive science attitudes and promote the development of science process skills. Learning by playing and discovering provides several opportunities for children to inquire into and understand science through first-hand experience. The current research was conducted in the children’s laboratory at Heureka, the Finnish science centre. Young children (aged 7 years) who came from 4 international schools did a set of chemistry experiments in the laboratory. From the results of the cognitive tests (pre-test and post-test), supported by observation and interviews, we conclude that children enjoyed studying in the laboratory. Chemistry was interesting and fascinating for young children; no major gender differences were found between boys and girls learning in the science laboratory. Lab work not only encouraged children to explore and investigate science, but also stimulated children’s cognitive development.
Abstract:
The assertion of identity and power via computer-mediated communication in the context of distance or web-based learning presents challenges to both teachers and students. When regular, face-to-face classroom interaction is replaced by online chat or group discussion forums, participants must avail themselves of new techniques and tactics for contributing to and furthering interaction, discussion, and learning. During student-only chat sessions, the absence of teacher-led, face-to-face classroom activities requires the students to assume leadership roles and responsibilities normally associated with the teacher. This situation raises the questions of who teaches and who learns; how students discursively negotiate power roles; and whether power emerges as a function of displayed expertise and knowledge or rather the use of authoritative language. This descriptive study represents an examination of a corpus of task-based discussion logs among Vietnamese students of distance learning courses in English linguistics. The data reveal recurring discourse strategies for 1) negotiating the progression of the discussion sessions, 2) asserting and questioning knowledge, and 3) assuming or delegating responsibility. Power is defined ad hoc as the ability to successfully perform these strategies. The data analysis contributes to a better understanding of how working methods and materials can be tailored to students in distance learning courses, and how such students can be empowered by being afforded opportunities and effectively encouraged to assert their knowledge and authority.
Abstract:
The open provenance architecture (OPA) approach to the challenge was distinct in several regards. In particular, it is based on an open, well-defined data model and architecture, allowing different components of the challenge workflow to independently record documentation, and allowing the workflow to be executed in any environment. Another notable feature is that we distinguish between the data recorded about what has occurred, "process documentation", and the "provenance" of a data item, which is all that caused the data item to be as it is and is obtained as the result of a query over process documentation. This distinction allows us to tailor the system to separately best address the requirements of recording and querying documentation. Other notable features include the explicit recording of causal relationships between both events and data items, an interaction-based world model, intensional definition of data items in queries rather than relying on explicit naming mechanisms, and "styling" of documentation to support non-functional application requirements such as reducing storage costs or ensuring privacy of data. In this paper we describe how each of these features aids us in answering the challenge provenance queries.
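The distinction drawn above can be sketched concretely: process documentation is a record of causal relationships (here, a simple adjacency map from each item or event to its direct causes), and the provenance of an item is what a transitive query over that record returns. The representation is an illustrative assumption, not OPA's actual data model.

```python
def provenance(item, caused_by):
    """Everything that transitively caused `item`, obtained by querying
    recorded process documentation (caused_by: item -> set of direct causes)."""
    seen, stack = set(), [item]
    while stack:
        node = stack.pop()
        for cause in caused_by.get(node, ()):
            if cause not in seen:
                seen.add(cause)
                stack.append(cause)
    return seen

# Process documentation: result <- merge; merge <- {raw_a, raw_b}
doc = {"result": {"merge"}, "merge": {"raw_a", "raw_b"}}
answer = provenance("result", doc)
```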
Abstract:
Due to the increase in water demand and hydropower energy, it is becoming more important to operate hydraulic structures efficiently while sustaining multiple demands. In particular, companies, governmental agencies and consultancies require effective, practical integrated tools and decision-support frameworks to operate reservoirs, cascades of run-of-river plants and related elements such as canals, by merging hydrological and reservoir simulation/optimization models with various numerical weather predictions, radar and satellite data. Model performance is closely related to the streamflow forecast, its associated uncertainty and how that uncertainty is considered in decision making. While deterministic weather predictions and their corresponding streamflow forecasts restrict the manager to single deterministic trajectories, probabilistic forecasts can be a key solution by including uncertainty in flow forecast scenarios for dam operation. The objective of this study is to compare deterministic and probabilistic streamflow forecasts on an earlier developed basin/reservoir model for short-term reservoir management. The study is applied to the Yuvacık Reservoir and its upstream basin, the main water supply of Kocaeli City in northwestern Turkey. The reservoir is a typical example with its limited capacity, downstream channel restrictions and high snowmelt potential. Mesoscale Model 5 and Ensemble Prediction System (EPS) data are used as the main inputs, and flow forecasts are produced for the year 2012 using HEC-HMS. A hydrometeorological rule-based reservoir simulation model is built with HEC-ResSim and integrated with the forecasts. Since the EPS-based hydrological model produces a large number of equally probable scenarios, it indicates how uncertainty spreads into the future. Thus, compared with the deterministic approach, it provides the operator with risk ranges for spillway discharges and reservoir level.
The framework is fully data driven, applicable, useful to the profession, and the knowledge can be transferred to other similar reservoir systems.
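The step from an ensemble of equally probable inflow scenarios to operator risk ranges can be sketched as follows. This is a toy mass-balance illustration under stated assumptions (constant release, simple spill rule, empirical quantiles), not the HEC-HMS/HEC-ResSim workflow used in the study; all names and numbers are hypothetical.

```python
def simulate_final_storage(inflows, storage, release, capacity):
    """Mass-balance reservoir simulation for one inflow scenario.
    Returns final storage and total spill over the horizon."""
    spill = 0.0
    for q in inflows:
        storage = max(storage + q - release, 0.0)
        if storage > capacity:           # spillway activates above capacity
            spill += storage - capacity
            storage = capacity
    return storage, spill

def risk_range(ensemble, storage, release, capacity, lo_q=0.1, hi_q=0.9):
    """Approximate lo/hi quantiles of final storage across the
    equally probable ensemble members (a simple risk band)."""
    finals = sorted(simulate_final_storage(m, storage, release, capacity)[0]
                    for m in ensemble)
    pick = lambda q: finals[int(q * (len(finals) - 1))]
    return pick(lo_q), pick(hi_q)

# Three equally probable inflow scenarios over a 3-step horizon:
ensemble = [[1.0] * 3, [2.0] * 3, [3.0] * 3]
band = risk_range(ensemble, storage=5.0, release=1.0, capacity=10.0)
```

A deterministic forecast would collapse this band to a single trajectory; the ensemble keeps the spread visible to the operator.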