878 results for design-based inference
Abstract:
This research comprises a survey on values related to entrepreneurship education and a participatory action research study on entrepreneurship education curricula in teacher education. Research problems, arising from the practical development work, were addressed by several methods, following the principles of design-based research. Values related to entrepreneurship education were studied among teachers, headmasters, teacher educators, researchers and officers in the field of entrepreneurship education in 16 European Union countries. The fifteen most important values related to entrepreneurship education were listed on the basis of two qualitative surveys (N = 124 and N = 66). Values were also surveyed among Finnish teacher trainees (N = 71). The results of the surveys show that the values given by the teacher trainees did not differ much from those given by professionals already working in the field. Subsequently, the emergence of these values was studied in the documents that steer education. The values gathered in the surveys did not occur in the documents to any substantial degree. Development of entrepreneurship education curricula in teacher education was conducted by means of participatory action research. The development project gathered 55 teacher trainers from 15 teacher education organisations in Finland. The starting point of the phenomenon-based project (see Annala and Mäkinen 2011) was the activity plan created for developing entrepreneurship education curricula. During the project, the learning of the teacher educators proceeded in a balanced way as clearer visions, stronger motivation, increasing understanding and new practices, following Shulman and Shulman's (2004) model. The goals of the development project were that each teacher educator acquire basic knowledge of entrepreneurship education, that obligatory courses on entrepreneurship education be organised, and that entrepreneurship education become a cross-curricular theme in teacher education. The process increased the understanding and motivation of teacher educators to develop and teach entrepreneurship education. It also facilitated collaboration and the creation of visions for entrepreneurship education. Based on the results, the concept of enterprisingness was defined, and recommendations were given for developing curricula in entrepreneurship education.
Abstract:
Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis. These challenges include survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias they cause in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of the data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility of using combined longitudinal survey and register data. The Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at the person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted in order to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey and register data can be used to analyse and compare the non-response and attrition processes, test the type of missingness mechanism and estimate the size of the bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, overreporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data. Neither the Missing At Random (MAR) assumption about the non-response and attrition mechanisms, nor the classical assumptions about measurement errors, turned out to be valid. Measurement errors in both spell durations and spell outcomes were found to cause bias in estimates from event history models. Low measurement accuracy affected the estimates of the baseline hazard most. The design-based estimates based on data from respondents to all waves of interest and weighted by the last-wave weights displayed the largest bias. Using all the available data, including the spells of attriters until the time of attrition, helped to reduce attrition bias. Finally, the simulation study showed that the IPCW correction to design weights reduces bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazards model estimators. The study discusses the implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers developing methods to correct for non-sampling biases in event history data.
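To make the IPCW idea above concrete, here is a minimal sketch that estimates each respondent's probability of staying in the panel with a logistic model and plugs the inverse of that probability into a hand-rolled weighted Kaplan-Meier estimator. The data frame, the `stayed_in_panel` indicator, the covariate list and the weight truncation are illustrative assumptions, not the thesis's actual variables or estimator.

```python
# Minimal IPCW sketch: weight each respondent by the inverse of the estimated
# probability of remaining under observation, then feed those weights into a
# weighted Kaplan-Meier estimator. All variable names are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipcw_weights(df, covariates):
    """Estimate P(stays in panel | covariates) and return 1/p as the IPC weight."""
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df["stayed_in_panel"])   # 1 = observed to the end, 0 = attrited
    p_stay = model.predict_proba(df[covariates])[:, 1]
    return 1.0 / np.clip(p_stay, 0.05, 1.0)            # truncate to avoid extreme weights

def weighted_km(durations, events, weights):
    """Weighted Kaplan-Meier survival estimates at each distinct event time."""
    durations, events, weights = map(np.asarray, (durations, events, weights))
    surv, times, estimates = 1.0, [], []
    for t in np.unique(durations[events == 1]):
        at_risk = weights[durations >= t].sum()
        failed = weights[(durations == t) & (events == 1)].sum()
        surv *= 1.0 - failed / at_risk
        times.append(t)
        estimates.append(surv)
    return np.array(times), np.array(estimates)
```

In a design-based analysis of the kind discussed above, the IPCW factor would multiply the survey design weight of each respondent before being passed to the weighted estimator.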
Abstract:
This study is about the expectations and aspirations of secondary school teachers. It is an investigation of why some teachers aspire to become administrators and why some teachers do not. My research compares expectations and existing attitudes regarding aspirations toward administration held by three distinct groups within the secondary school system: 1) principals/vice-principals, 2) aspiring teachers, and 3) non-aspiring teachers. This study questions why, in the late 60's, secondary school administration is still dominated by men. The conclusions and recommendations were based on interviews with thirty men and women in the Hamilton Secondary School System. In addition, Mr. Keith Rielly, Superintendent of Operations, made valuable contributions to my work. The interviews revealed the experiences and perceptions of men and women in discourse about family relationships, educational choices and the perceived internal and external barriers which inhibited or enhanced their decision to aspire to secondary school administration. Candidates spoke about their personal and professional lives with respect to encouragement, perceived images of an administrator, networking and the effect of marriage and children on their careers. Historically, women have not accepted the challenge of administration, and it would appear as if this is still the case today. My research suggests that women are under-represented in secondary school administration because of internal and external barriers which discourage many women from aspiring. I conclude that many of women's internal barriers are reinforced by external roadblocks which prevent women from aspiring to secondary school administration. Thus, many women who do not envision a future in educational administration establish priorities outside the general realm of education. I recommend that males and females recognize that women make valuable contributions to educational theory and design based on their experiences, which may be "different" from male experiences but just as significant. Male and female representation in secondary school administration represents a balance between attitudes and behaviours which cannot be accomplished when an administrative office is dominated by an all-male or all-female staff.
Abstract:
Mobile augmented reality applications are increasingly utilized as a medium for enhancing learning and engagement in history education. Although these digital devices facilitate learning through immersive and appealing experiences, their design should be driven by theories of learning and instruction. We provide an overview of an evidence-based approach to optimizing the development of mobile augmented reality applications that teach students about history. Our research aims to evaluate and model the impacts of design parameters on learning and engagement. The research program is interdisciplinary in that we apply techniques derived from design-based experiments and educational data mining. We outline the methodological and analytical techniques and discuss the implications of the anticipated findings.
Abstract:
In participatory design situations, the competence of the facilitator influences the opportunities for a user group to become engaged in the process of design. Based on observation of the conversations from a series of design workshops, the performance of design facilitation expertise by an expert architect is compared with that of a less experienced architectural graduate. The skills that are the focus of this research are the conversational competences deployed by architects to engage users in the design of an architectural project. The difference between the conversational behaviour of the project architect and the less experienced graduate was observed in order to illustrate, with examples, the effect that the performance of facilitation had on the opportunity for user engagement in design, and the learning of facilitation skills that occurred in these situations.
Abstract:
Studies of ignorance-driven decision making have either analysed, on theoretical grounds, when ignorance should prove advantageous, or examined whether human behaviour is consistent with an ignorance-driven inference strategy (e.g., the recognition heuristic). In the current study we examine whether, under conditions where such inferences might be expected, the advantages that theoretical analyses predict are evident in human performance data. A single experiment shows that, when asked to make relative wealth judgements, participants reliably use recognition as a basis for their judgements. Their wealth judgements under these conditions are reliably more accurate when some of the target names are unknown than when participants recognize all of the names (a "less-is-more effect"). These results are consistent across a number of variations: the number of options given to participants and the nature of the wealth judgement. A basic model of recognition-based inference predicts these effects.
Abstract:
“Fast & frugal” heuristics represent an appealing way of implementing bounded rationality and decision-making under pressure. The recognition heuristic is the simplest and most fundamental of these heuristics. Simulation and experimental studies have shown that this ignorance-driven heuristic inference can prove superior to knowledge-based inference (Borges, Goldstein, Ortmann & Gigerenzer, 1999; Goldstein & Gigerenzer, 2002) and have shown how the heuristic could develop from ACT-R’s forgetting function (Schooler & Hertwig, 2005). Mathematical analyses also demonstrate that, under certain conditions, a “less-is-more effect” will always occur (Goldstein & Gigerenzer, 2002). The further analyses presented in this paper show, however, that these conditions may constitute a special case and that the less-is-more effect in decision-making is subject to the moderating influence of the number of options to be considered and the framing of the question.
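For context, the expected-accuracy formula underlying the less-is-more analyses in Goldstein and Gigerenzer (2002) can be transcribed directly; the short sketch below does so in plain Python with illustrative recognition and knowledge validities, and shows accuracy peaking before all objects are recognised.

```python
# Expected proportion correct in a paired-comparison task when n of N objects
# are recognised (after Goldstein & Gigerenzer, 2002): alpha is the recognition
# validity, beta the knowledge validity; pairs where neither object is
# recognised are answered at chance.
def expected_accuracy(n, N, alpha, beta):
    neither = (N - n) * (N - n - 1) / (N * (N - 1)) * 0.5
    one = 2 * n * (N - n) / (N * (N - 1)) * alpha
    both = n * (n - 1) / (N * (N - 1)) * beta
    return neither + one + both

# Illustrative validities: when alpha exceeds beta, accuracy is maximised
# before every object is recognised (a less-is-more effect).
N, alpha, beta = 100, 0.8, 0.6
best_n = max(range(N + 1), key=lambda n: expected_accuracy(n, N, alpha, beta))
print(f"accuracy peaks at n = {best_n} recognised objects, not at n = {N}")
```

The paper's point is that this prediction holds only under particular conditions, so the parameter values here should be read as one special case rather than a general claim.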
Abstract:
A common problem in many data-based modelling algorithms, such as associative memory networks, is the curse of dimensionality. In this paper, a new two-stage neurofuzzy system design and construction algorithm (NeuDeC) for nonlinear dynamical processes is introduced to tackle this problem effectively. A new, simple preprocessing method is initially derived and applied to reduce the rule base, followed by a fine model detection process on the reduced rule set using forward orthogonal least squares model structure detection. In both stages, new A-optimality experimental design-based criteria were used. In the preprocessing stage, a lower bound of the A-optimality design criterion is derived and applied as a subset selection metric, while in the later stage the A-optimality design criterion is incorporated into a new composite cost function that minimises model prediction error while penalising model parameter variance. The use of NeuDeC leads to unbiased model parameters with low parameter variance and the additional benefit of a parsimonious model structure. Numerical examples are included to demonstrate the effectiveness of this new modelling approach for high-dimensional inputs.
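To make the A-optimality criterion mentioned above concrete, the sketch below scores a candidate regressor subset by trace((X'X)^-1), the sum of parameter variances up to the noise variance; smaller is better. This is a generic illustration of the criterion only, not NeuDeC's lower-bound metric or its composite cost function.

```python
# A-optimality scores a candidate design/regressor matrix X by trace((X'X)^-1):
# subsets with a smaller score give lower average parameter variance.
import numpy as np

def a_optimality_score(X, ridge=1e-8):
    """trace((X'X + ridge*I)^-1); the small ridge guards against singularity."""
    p = X.shape[1]
    info = X.T @ X + ridge * np.eye(p)
    return np.trace(np.linalg.inv(info))

# Illustrative comparison of two candidate column subsets of a data matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
subset_a, subset_b = [0, 1, 2], [0, 4, 5]
print(a_optimality_score(X[:, subset_a]), a_optimality_score(X[:, subset_b]))
```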
Abstract:
Sampling strategies for monitoring the status and trends of wildlife populations are often determined before the first survey is undertaken. However, there may be little information about the distribution of the population, and so the sample design may be inefficient. Through time, as data are collected, more information about the distribution of animals in the survey region is obtained, but it can be difficult to incorporate this information into the survey design. This paper introduces a framework for monitoring motile wildlife populations within which the design of future surveys can be adapted using data from past surveys, whilst ensuring consistency in design-based estimates of status and trends through time. In each survey, part of the sample is selected from the previous survey sample using simple random sampling. The rest is selected with inclusion probability proportional to predicted abundance. Abundance is predicted using a model constructed from previous survey data and covariates for the whole survey region. Unbiased design-based estimators of status and trends and their variances are derived from two-phase sampling theory. Simulations over the short and long term indicate that, in general, more precise estimates of status and trends are obtained using this mixed strategy than with a strategy in which all of the sample is retained or all of it is selected with probability proportional to predicted abundance. Furthermore, the mixed strategy is robust to poor predictions of abundance. Estimates of status are more precise than those obtained from a rotating panel design.
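The mixed selection strategy described above can be sketched in a few lines: retain part of the previous sample by simple random sampling and draw the remainder with inclusion probability proportional to model-predicted abundance. The unit counts, the gamma-distributed abundance predictions and the use of systematic PPS selection are illustrative assumptions; the design-based (Horvitz-Thompson) estimation step is only indicated in a comment because the retained units' overall inclusion probabilities come from the paper's two-phase derivation.

```python
# Sketch of the mixed resampling strategy: part simple random retention,
# part probability-proportional-to-predicted-abundance selection.
import numpy as np

rng = np.random.default_rng(42)

def systematic_pps(pi):
    """Systematic PPS selection: unit i is included with probability pi[i] (pi[i] <= 1)."""
    cum = np.cumsum(pi)
    hits = np.arange(rng.uniform(0, 1), cum[-1], 1.0)
    return np.searchsorted(cum, hits)

# Illustrative survey region of N units with model-predicted abundance.
N, n_keep, n_new = 400, 20, 20
predicted = rng.gamma(2.0, 5.0, size=N)
prev_sample = rng.choice(N, size=n_keep + n_new, replace=False)

# Phase 1: retain part of the previous sample by simple random sampling.
kept = rng.choice(prev_sample, size=n_keep, replace=False)
# Phase 2: select the rest with inclusion probability proportional to predicted abundance.
pool = np.setdiff1d(np.arange(N), kept)
pi_new = np.minimum(n_new * predicted[pool] / predicted[pool].sum(), 1.0)
new_units = pool[systematic_pps(pi_new)]
sample = np.concatenate([kept, new_units])

# A Horvitz-Thompson estimate of the population total would divide each sampled
# count by its overall inclusion probability, which for the retained units is
# exactly what the two-phase theory in the paper provides.
```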
Abstract:
Cellular neural networks (CNNs) have locally connected neurons. This characteristic makes CNNs well suited to hardware implementation and, consequently, to employment in a variety of applications such as real-time image processing and the construction of efficient associative memories. The adjustment of CNN parameters is a complex problem involved in configuring CNNs as associative memories. This paper reviews methods of associative memory design based on CNNs and provides a comparative performance analysis of these approaches.
Abstract:
We consider the issue of performing accurate small-sample likelihood-based inference in beta regression models, which are useful for modelling continuous proportions that are affected by independent variables. We derive small-sample adjustments to the likelihood ratio statistic in this class of models. The adjusted statistics can be easily implemented using standard statistical software. We present Monte Carlo simulations showing that inference based on the adjusted statistics we propose is much more reliable than that based on the usual likelihood ratio statistic. A real data example is presented.
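As a point of reference for the adjustments discussed above, the sketch below fits a beta regression by direct maximum likelihood and computes the usual (unadjusted) likelihood ratio statistic that the proposed small-sample corrections are designed to improve. The corrections themselves are not reproduced here, and the simulated data and covariates are purely illustrative.

```python
# Beta regression with a logit mean link fitted by maximum likelihood, plus the
# ordinary likelihood ratio test for dropping one covariate. The small-sample
# adjusted statistics from the paper would rescale `lr`; only the unadjusted
# version is computed here.
import numpy as np
from scipy import optimize, stats

def negloglik(params, X, y):
    beta, log_phi = params[:-1], params[-1]
    mu = 1.0 / (1.0 + np.exp(-X @ beta))            # mean in (0, 1) via logit link
    phi = np.exp(log_phi)                            # precision kept positive
    return -np.sum(stats.beta.logpdf(y, mu * phi, (1.0 - mu) * phi))

def fit(X, y):
    res = optimize.minimize(negloglik, np.zeros(X.shape[1] + 1), args=(X, y), method="BFGS")
    return res.x, -res.fun                           # estimates, maximised log-likelihood

# Illustrative small sample: one informative and one irrelevant covariate.
rng = np.random.default_rng(1)
n = 50
X_full = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
mu_true = 1.0 / (1.0 + np.exp(-(-0.5 + 0.8 * X_full[:, 1])))
y = rng.beta(mu_true * 30, (1.0 - mu_true) * 30)

_, ll_full = fit(X_full, y)
_, ll_reduced = fit(X_full[:, :2], y)                # null hypothesis: last coefficient is zero
lr = 2.0 * (ll_full - ll_reduced)
print(f"LR = {lr:.3f}, p = {stats.chi2.sf(lr, df=1):.3f}")
```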
Abstract:
Background: Evidence-based practice (EBP) is emphasized to increase the quality of care and patient safety. EBP is often described as a process consisting of distinct activities, including formulating questions, searching for information, compiling the appraised information, implementing evidence, and evaluating the resulting practice. To increase registered nurses' (RNs') practice of EBP, variables associated with such activities need to be explored. The aim of the study was to examine individual and organizational factors associated with EBP activities among RNs 2 years post graduation. Methods: A cross-sectional design based on a national sample of RNs was used. Data were collected in 2007 from a cohort of RNs included in the Swedish Longitudinal Analyses of Nursing Education/Employment study. The sample consisted of 1256 RNs (response rate 76%); of these, 987 RNs worked in healthcare at the time of data collection. Data were self-reported and collected through annual postal surveys. EBP activities were measured using six single items along with instruments measuring individual and work-related variables. Data were analyzed using logistic regression models. Results: Associated factors were identified for all six EBP activities. Capability beliefs regarding EBP were a significant factor for all six activities (OR = 2.6-7.3). Working in the care of older people was associated with a high extent of practicing four activities (OR = 1.7-2.2). Supportive leadership and high collective efficacy were associated with practicing three activities (OR = 1.4-2.0). Conclusions: To be successful in enhancing EBP among newly graduated RNs, strategies need to incorporate both individually and organizationally directed factors.
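As a generic illustration of how odds ratios of this kind are obtained, the sketch below fits a logistic regression for one binary EBP activity and exponentiates the coefficients and their confidence limits. The predictor names, effect sizes and data are invented stand-ins, not the study's instruments or results.

```python
# Logistic regression of a binary "practices this EBP activity to a high extent"
# outcome on individual and organizational predictors; exponentiated
# coefficients are odds ratios.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 1000
df = pd.DataFrame({
    "capability_beliefs": rng.normal(size=n),          # invented predictors
    "supportive_leadership": rng.integers(0, 2, n),
    "collective_efficacy": rng.normal(size=n),
})
logit_p = -1.0 + 0.9 * df["capability_beliefs"] + 0.4 * df["supportive_leadership"]
p = 1.0 / (1.0 + np.exp(-logit_p))
df["high_ebp_activity"] = rng.binomial(1, p.to_numpy())

X = sm.add_constant(df[["capability_beliefs", "supportive_leadership", "collective_efficacy"]])
result = sm.Logit(df["high_ebp_activity"], X).fit(disp=False)
odds_ratios = np.exp(result.params).rename("OR")        # OR per one-unit increase
ci = np.exp(result.conf_int())                           # 95% CI on the OR scale
print(pd.concat([odds_ratios, ci], axis=1))
```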
Abstract:
This thesis presents the study and development of fault-tolerant techniques for programmable architectures, the well-known Field Programmable Gate Arrays (FPGAs), customizable by SRAM. FPGAs are becoming more valuable for space applications because of their high density, high performance, reduced development cost and re-programmability. In particular, SRAM-based FPGAs are very valuable for remote missions because they can be reprogrammed by the user as many times as necessary in a very short period. SRAM-based FPGAs and micro-controllers represent a wide range of components in space applications, and as a result are the focus of this work, more specifically the Virtex® family from Xilinx and the architecture of the 8051 micro-controller from Intel. Triple Modular Redundancy (TMR) with voters is a common high-level technique to protect ASICs against single event upsets (SEUs), and it can also be applied to FPGAs. The TMR technique was first tested in the Virtex® FPGA architecture using a small design based on counters. Faults were injected in all sensitive parts of the FPGA and a detailed analysis of the effect of a fault in a TMR design synthesized on the Virtex® platform was performed. Results from fault injection and from a radiation ground test facility showed the efficiency of TMR for the case-study circuit. Although TMR has shown high reliability, the technique presents some limitations, such as area overhead, three times more input and output pins and, consequently, a significant increase in power dissipation. Aiming to reduce TMR costs and improve reliability, an innovative high-level technique for designing fault-tolerant systems in SRAM-based FPGAs was developed, without modification of the FPGA architecture. This technique combines time and hardware redundancy to reduce overhead and to ensure reliability. It is based on duplication with comparison and concurrent error detection. The new technique proposed in this work was specifically developed for FPGAs to cope with transient faults in the user combinational and sequential logic, while also reducing pin count, area and power dissipation. The methodology was validated by fault injection experiments on an emulation board. The thesis presents comparative results on fault coverage, area and performance for the discussed techniques.
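To make the distinction between masking and detecting faults concrete, here is a toy software model of a TMR majority voter and of duplication with comparison (DWC). It only illustrates the logic at the level of output words; it is not related to the Virtex® netlists or the fault injection setup evaluated in the thesis.

```python
# Toy behavioural model: TMR masks a single upset through majority voting,
# while duplication with comparison can only detect the mismatch and flag it.
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority of three redundant module outputs."""
    return (a & b) | (a & c) | (b & c)

def dwc_check(a: int, b: int) -> tuple[int, bool]:
    """Duplication with comparison: return one copy plus an error flag."""
    return a, a != b

golden = 0b1011                      # fault-free module output
upset = golden ^ 0b0100              # the same output with one bit flipped by an SEU

assert tmr_vote(golden, upset, golden) == golden   # the upset is masked
value, error = dwc_check(golden, upset)
assert error                                       # the upset is detected but not corrected
```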
Abstract:
Professor Sir David R. Cox (DRC) is widely acknowledged as among the most important scientists of the second half of the twentieth century. He inherited the mantle of statistical science from Pearson and Fisher, advanced their ideas, and translated statistical theory into practice so as to forever change the application of statistics in many fields, but especially biology and medicine. The logistic and proportional hazards models he substantially developed are arguably among the most influential biostatistical methods in current practice. This paper looks forward over the period from DRC's 80th to 90th birthdays to speculate about the future of biostatistics, drawing lessons from DRC's contributions along the way. We consider "Cox's model" (CM) of biostatistics, an approach to statistical science that: formulates scientific questions or quantities in terms of parameters gamma in probability models f(y; gamma) that represent, in a parsimonious fashion, the underlying scientific mechanisms (Cox, 1997); partitions the parameters gamma = (theta, eta) into a subset of interest theta and other "nuisance parameters" eta necessary to complete the probability distribution (Cox and Hinkley, 1974); develops methods of inference about the scientific quantities that depend as little as possible upon the nuisance parameters (Barndorff-Nielsen and Cox, 1989); and thinks critically about the appropriate conditional distribution on which to base inferences. We briefly review exciting biomedical and public health challenges that are capable of driving statistical developments in the next decade. We discuss the statistical models and model-based inferences central to the CM approach, contrasting them with the computationally intensive strategies for prediction and inference advocated by Breiman and others (e.g. Breiman, 2001) and with more traditional design-based methods of inference (Fisher, 1935). We discuss the hierarchical (multi-level) model as an example of the future challenges and opportunities for model-based inference. We then consider the role of conditional inference, a second key element of the CM. Recent examples from genetics are used to illustrate these ideas. Finally, the paper examines causal inference and statistical computing, two other topics we believe will be central to biostatistics research and practice in the coming decade. Throughout the paper, we attempt to indicate how DRC's work and the "Cox Model" have set a standard of excellence to which all can aspire in the future.