264 results for problem difficulty
at Queensland University of Technology - ePrints Archive
Abstract:
Cognitive load theory was used to generate a series of three experiments investigating the effects of various worked example formats on learning orthographic projection. Experiments 1 and 2 investigated the benefits of presenting problems, conventional worked examples incorporating only the final 2-D and 3-D representations, and modified worked examples with several intermediate stages of rotation between the 2-D and 3-D representations. Modified worked examples proved superior to conventional worked examples without intermediate stages, while conventional worked examples were, in turn, superior to problems. Experiment 3 investigated the consequences of varying the number and location of intermediate stages in the rotation trajectory and found three stages to be superior to one; a single intermediate stage was superior when placed nearer the 2-D end of the trajectory than the 3-D end. It was concluded that (a) orthographic projection is learned best using worked examples with several intermediate stages and (b) a linear relation between angle of rotation and problem difficulty did not hold for orthographic projection material. Cognitive load theory could be used to suggest the ideal location of the intermediate stages.
Abstract:
This dissertation is primarily an applied statistical modelling investigation, motivated by a case study comprising real data and real questions. Theoretical questions on modelling and computation of normalization constants arose from the pursuit of these data analytic questions. The essence of the thesis can be described as follows. Consider binary data observed on a two-dimensional lattice. A common problem with such data is the ambiguity of recorded zeroes: these may represent a zero response given some threshold (presence), or indicate that the threshold has not been triggered (absence). Suppose that the researcher wishes to estimate the effects of covariates on the binary responses while taking into account underlying spatial variation, which is itself of some interest. This situation arises in many contexts, and the dingo, cypress and toad case studies described in the motivation chapter are examples of it.

Two main approaches to modelling and inference are investigated in this thesis. The first is frequentist and based on generalized linear models, with spatial variation modelled by using a block structure or by smoothing the residuals spatially. The EM algorithm can be used to obtain point estimates, coupled with bootstrapping or asymptotic MLE estimates for standard errors. The second approach is Bayesian and based on a three- or four-tier hierarchical model, comprising a logistic regression with covariates for the data layer, a binary Markov random field (MRF) for the underlying spatial process, and suitable priors for the parameters of these main models. The three-parameter autologistic model is a particular MRF of interest. Markov chain Monte Carlo (MCMC) methods comprising hybrid Metropolis/Gibbs samplers are suitable for computation in this situation. Model performance can be gauged by MCMC diagnostics. Model choice can be assessed by incorporating another tier in the modelling hierarchy; this requires evaluation of a normalization constant, a notoriously difficult problem.

The difficulty of estimating the normalization constant for the MRF can be overcome by using a path integral approach, although this is a highly computationally intensive method. Different methods of estimating ratios of normalization constants (NCs) are investigated, including importance sampling Monte Carlo (ISMC), dependent Monte Carlo based on MCMC simulations (MCMC), and reverse logistic regression (RLR). I develop an idea that is present though not fully developed in the literature, and propose the integrated mean canonical statistic (IMCS) method for estimating log NC ratios for binary MRFs. The IMCS method falls within the framework of the newly identified path sampling methods of Gelman & Meng (1998) and outperforms ISMC, MCMC and RLR. It also does not rely on simplifying assumptions, such as ignoring spatio-temporal dependence in the process. A thorough investigation is made of the application of IMCS to the three-parameter autologistic model. This work introduces the background computations required for the full implementation of the four-tier model in Chapter 7.

Two different extensions of the three-tier model to a four-tier version are investigated. The first extension incorporates temporal dependence in the underlying spatio-temporal process. The second allows the successes and failures in the data layer to depend on time. The MCMC computational method is extended to incorporate the extra layer.
A major contribution of the thesis is the development, for the first time, of a fully Bayesian approach to inference for these hierarchical models.
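To make the path sampling machinery concrete, the following is a minimal sketch (not the thesis code) of estimating a log normalization constant ratio for a toy one-parameter autologistic (Ising-type) model p(x | beta) ∝ exp(beta · S(x)), using the identity d/d(beta) log Z(beta) = E_beta[S(x)]: the mean canonical statistic is estimated by Gibbs sampling at each point of a grid of beta values and then integrated numerically. The single-parameter model, lattice size and sampler settings are all illustrative assumptions; the thesis's IMCS method targets the three-parameter autologistic model.

```python
# Minimal path sampling sketch for a binary MRF (illustrative assumptions:
# one-parameter autologistic p(x | beta) ∝ exp(beta * S(x)), free boundaries).
import numpy as np

rng = np.random.default_rng(0)

def neighbour_sum(x, i, j):
    """Sum of the 4-neighbourhood of site (i, j)."""
    n, m = x.shape
    s = 0
    if i > 0:     s += x[i - 1, j]
    if i < n - 1: s += x[i + 1, j]
    if j > 0:     s += x[i, j - 1]
    if j < m - 1: s += x[i, j + 1]
    return s

def gibbs_sweep(x, beta):
    """One systematic-scan Gibbs sweep; the full conditionals are logistic."""
    n, m = x.shape
    for i in range(n):
        for j in range(m):
            p1 = 1.0 / (1.0 + np.exp(-beta * neighbour_sum(x, i, j)))
            x[i, j] = rng.random() < p1

def suff_stat(x):
    """Canonical statistic S(x): sum of products over neighbouring pairs."""
    return (x[:-1, :] * x[1:, :]).sum() + (x[:, :-1] * x[:, 1:]).sum()

def log_nc_ratio(beta0, beta1, shape=(20, 20), grid=11, burn=200, draws=200):
    """Estimate log Z(beta1) - log Z(beta0) = integral of E_beta[S] d(beta)."""
    betas = np.linspace(beta0, beta1, grid)
    means = []
    for beta in betas:
        x = (rng.random(shape) < 0.5).astype(int)
        for _ in range(burn):                 # burn-in at this grid point
            gibbs_sweep(x, beta)
        stats = []
        for _ in range(draws):                # retained MCMC draws
            gibbs_sweep(x, beta)
            stats.append(suff_stat(x))
        means.append(np.mean(stats))          # Monte Carlo estimate of E_beta[S]
    means = np.array(means)
    # Trapezoidal rule along the path from beta0 to beta1.
    return float(np.dot(np.diff(betas), (means[:-1] + means[1:]) / 2.0))

print(log_nc_ratio(0.0, 0.4))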
Abstract:
The aim of this study was to identify and describe the types of errors in clinical reasoning that contribute to poor diagnostic performance at different levels of medical training and experience. Three cohorts of subjects, second- and fourth- (final) year medical students and a group of general practitioners, completed a set of clinical reasoning problems. The responses of those whose scores fell below the 25th centile were analysed to establish the stage of the clinical reasoning process (identification of relevant information, interpretation, or hypothesis generation) at which most errors occurred, and whether this was dependent on problem difficulty and level of medical experience. Results indicate that hypothesis errors decrease as expertise increases but that identification and interpretation errors increase. This may be due to inappropriate use of pattern recognition or to failure of the knowledge base. Furthermore, although hypothesis errors increased in line with problem difficulty, identification and interpretation errors decreased. A possible explanation is that as problem difficulty increases, subjects at all levels of expertise are less able to differentiate between relevant and irrelevant clinical features and so give equal consideration to all information contained within a case. It is concluded that the development of clinical reasoning in medical students throughout the course of their pre-clinical and clinical education may be enhanced both by an analysis of the clinical reasoning process and by a specific focus on each of the stages at which errors commonly occur.
Abstract:
Novice programmers have difficulty developing an algorithmic solution while simultaneously obeying the syntactic constraints of the target programming language. To see how students fare in algorithmic problem solving when not burdened by syntax, we conducted an experiment in which a large class of beginning programmers were required to write a solution to a computational problem in structured English, as if instructing a child, without reference to program code at all. The students produced an unexpectedly wide range of solutions, both correct and merely attempted, some of which had not occurred to their teachers. We also found that many common programming errors were evident in the natural language algorithms, including failure to ensure loop termination, hardwiring of solutions, failure to properly initialise the computation, and use of unnecessary temporary variables, suggesting that these mistakes are caused by inexperience at thinking algorithmically rather than by difficulty in expressing solutions as program code.
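The study elicited natural-language algorithms rather than code, so the following hypothetical Python fragments are only illustrations of the four error patterns named above, contrasted with a correct version of the same computation:

```python
nums = [3, 1, 4, 1, 5]

# Failure to initialise the computation: 'total' is used before being set.
#     total = total + n        # NameError: total was never set to 0

# Failure to ensure loop termination: the loop index is never advanced.
#     i = 0
#     while i < len(nums):
#         print(nums[i])       # loops forever; 'i += 1' is missing

# Hardwiring the solution: works only for this exact five-element input.
#     total = nums[0] + nums[1] + nums[2] + nums[3] + nums[4]

# Unnecessary temporary variable: 'copy' contributes nothing.
#     copy = nums
#     total = sum(copy)

# A correct, general version of the same computation:
total = 0                      # initialised before the loop
for n in nums:                 # terminates: one iteration per element
    total += n
print(total)
```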
Abstract:
Since the 1960s, numerous studies on problem solving have revealed the complexity of the domain and the difficulty of translating research findings into practice. The literature suggests that the impact of problem solving research on the mathematics curriculum has been limited. Furthermore, our accumulation of knowledge on the teaching of problem solving is lagging. In this first discussion paper we initially present a sketch of 50 years of research on mathematical problem solving. We then consider some factors that have held back problem solving research over the past decades and offer some directions for how we might advance the field. We stress the urgent need to take into account the nature of problem solving in various arenas of today’s world and to accordingly modernize our perspectives on the teaching and learning of problem solving and of mathematical content through problem solving. Substantive theory development is also long overdue; we show how new perspectives on the development of problem solving expertise can contribute to theory development in guiding the design of worthwhile learning activities. In particular, we explore a models and modeling perspective as an alternative to existing views on problem solving.
Abstract:
Purpose of review: To critique the recent literature on telephone, correspondence-based, and computerized interventions for alcohol problems, which enhance or substitute for practitioner-delivered treatments. Recent findings: There is an unmet need for screening, assessment and intervention for alcohol problems, in part because of the difficulty in accessing such treatment within the current health care system. Research on the efficacy of correspondence or electronic (for example Internet-based) interventions is beginning to emerge. In the period 2003–2004 we identified nine acceptability or feasibility studies of these approaches and seven efficacy trials covering a wide range of settings. These modes of intervention are acceptable to patients and the public, and with careful planning, can be implemented in a variety of settings. Treatment trials demonstrate the efficacy of these interventions in reducing hazardous drinking by university students, in delaying initiation of heavy drinking in children and adolescents, and, intriguingly, in addressing insomnia among recovering alcoholics. Summary: There is strong support among potential users for alcohol interventions that employ telephone assistance, written correspondence, and the Internet. These new technologies offer the prospect of increasing the reach of interventions for problem drinking and being cost-effective alternatives or supplements to face-to-face health service delivery.
Abstract:
Client puzzles are meant to act as a defense against denial of service (DoS) attacks by requiring a client to solve some moderately hard problem before being granted access to a resource. However, recent client puzzle difficulty definitions (Stebila and Ustaoglu, 2009; Chen et al., 2009) do not ensure that solving n puzzles is n times harder than solving one puzzle. Motivated by examples of puzzles where such amortization is possible, we present stronger definitions of difficulty for client puzzles that are meaningful in the context of adversaries with more computational power than is required to solve a single puzzle. A protocol using strong client puzzles may still not be secure against DoS attacks if the puzzles are not used in a secure manner. We describe a security model for analyzing the DoS resistance of any protocol in the context of client puzzles and give a generic technique for combining any protocol with a strong client puzzle to obtain a DoS-resistant protocol.
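As a purely illustrative example of the genre (not a construction from the paper; the function names and parameters are assumptions), a hash-based client puzzle can be sketched as follows: the server issues a random challenge, the client searches for a nonce whose hash meets a difficulty target, and verification costs the server a single hash.

```python
import hashlib
import os

def leading_zero_bits(digest: bytes) -> int:
    """Count the leading zero bits of a byte string."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def make_challenge() -> bytes:
    return os.urandom(16)                     # fresh random challenge per request

def solve(challenge: bytes, difficulty: int) -> int:
    """Brute-force search; expected work is about 2**difficulty hashes."""
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int, difficulty: int) -> bool:
    """Server-side check: a single hash computation."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty

challenge = make_challenge()
nonce = solve(challenge, difficulty=16)       # low difficulty, demo only
assert verify(challenge, nonce, 16)
```

Because each challenge is an independent random string, the expected work for n such puzzles is roughly n times the work for one; it is precisely this kind of non-amortizable behaviour that the stronger difficulty definitions aim to formalize.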
Abstract:
The sum of k mins protocol was proposed by Hopper and Blum as a protocol for secure human identification. The goal of the protocol is to let an unaided human securely authenticate to a remote server. The main ingredient of the protocol is the sum of k mins problem, and the difficulty of solving this problem determines the security of the protocol. In this paper, we show that the sum of k mins problem is NP-complete and W[1]-hard; the latter notion relates to fixed parameter intractability. We also discuss the use of the sum of k mins protocol in resource-constrained devices.
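A minimal sketch of the response computation, following the usual description of the Hopper-Blum protocol (the parameter sizes and helper names here are illustrative assumptions): the shared secret is k pairs of positions in a digit vector, and the human's response to a challenge is the sum of the pairwise minima modulo 10.

```python
import random

def keygen(n: int, k: int, rng: random.Random):
    """Secret: k disjoint (i, j) position pairs drawn from n positions."""
    positions = rng.sample(range(n), 2 * k)
    return [(positions[2 * t], positions[2 * t + 1]) for t in range(k)]

def make_challenge(n: int, rng: random.Random):
    """Challenge: a vector of n random decimal digits."""
    return [rng.randrange(10) for _ in range(n)]

def respond(secret, challenge):
    """Human-computable response: sum of min over each secret pair, mod 10."""
    return sum(min(challenge[i], challenge[j]) for i, j in secret) % 10

rng = random.Random(0)
secret = keygen(n=20, k=5, rng=rng)
challenge = make_challenge(20, rng)
response = respond(secret, challenge)
# The server, which knows the secret, recomputes and compares.
assert response == respond(secret, challenge)
```

An adversary, by contrast, must recover a secret consistent with many observed challenge-response pairs, and it is this computational task whose NP-completeness and W[1]-hardness the paper establishes.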
Abstract:
A new solution to the millionaire problem is designed on the basis of two new techniques: zero test and batch equation. The zero test is a technique used to test whether one or more ciphertexts contain a zero without revealing any other information. Batch equation is a technique used to test the equality of multiple integers. The combination of these two techniques produces the only known solution to the millionaire problem that is simultaneously correct, private, publicly verifiable and efficient.
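The paper's zero test is built on its own primitives, but the underlying idea of testing for zero without revealing anything else can be sketched with a textbook additively homomorphic scheme (Paillier, here with toy parameters; the names and sizes below are illustrative assumptions): blinding Enc(m) into Enc(r·m) for a random nonzero r makes decryption return 0 exactly when m = 0, and an unrelated random value otherwise.

```python
import math
import secrets

# Toy Paillier key; real deployments need large random primes.
p, q = 104729, 1299709          # small known primes, illustration only
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
# Precomputed decryption helper mu = (L(g^lam mod n^2))^-1 mod n, with g = n + 1
# and L(u) = (u - 1) // n.
mu = pow((pow(n + 1, lam, n2) - 1) // n, -1, n)

def encrypt(m: int) -> int:
    r = secrets.randbelow(n - 1) + 1
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def scalar_mul(c: int, k: int) -> int:
    """Additive homomorphism: Enc(m)^k mod n^2 encrypts k*m."""
    return pow(c, k, n2)

def is_zero(c: int) -> bool:
    """Blind the plaintext by a random scalar, then decrypt.
    The result is 0 iff the plaintext was 0; otherwise it is a random value
    unrelated to m (for m coprime to n), so the decryptor learns only the
    zero/nonzero bit."""
    r = secrets.randbelow(n - 1) + 1
    return decrypt(scalar_mul(c, r)) == 0

assert is_zero(encrypt(0))
assert not is_zero(encrypt(42))
```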