3 results for Program evaluation

in QSpace: Queen's University - Canada


Relevance: 70.00%

Abstract:

Developmental evaluation (DE) is an evaluation approach that aims to support the development of an innovation (Patton, 1994, 2011). This aim is achieved by supporting clients' information needs through evaluative inquiry as they work to develop and refine the innovation. While core concepts and principles are beginning to be articulated and refined, challenges remain as to how to focus a developmental evaluation beyond the knowledge frameworks most immediate to clients in order to support innovation development. Anchoring a DE in knowledge frameworks other than those of the clients might direct attention to issues not yet obvious to clients but which, if attended to, might further the goal of supporting innovation development. Drawing concepts and practices from the field of design may be one avenue for informing developmental evaluation in achieving its aim. Through a case study methodology, this research seeks to understand the nuances of operationalizing the guiding principles of DE and to investigate the utility, feasibility, and consequences of integrating design concepts and practices into developmental evaluation (design-informed developmental evaluation, "DI-DE"). It does so by documenting the efforts of a design-informed developmental evaluator and a task force of educators and researchers in a Faculty of Education as they work to develop a graduate-level education program. A systematic review of the purposeful efforts made to introduce DI-DE thinking into task force deliberations, together with an analysis of the responses to and consequences of those efforts, sheds light on what it means to practice DI-DE. As a whole, this research on evaluation is intended to further contemporary thinking about the closely coupled relationship between program development and evaluation in complex and dynamic environments.

Relevance: 60.00%

Abstract:

The presentation made at the conference addressed the issue of linkages between performance information and innovation within the Canadian federal government. This three-part paper was prepared as background to that presentation.

• Part I provides an overview of three main sources of performance information (results-based systems, program evaluation, and centrally driven review exercises) and reviews the Canadian experience with them.
• Part II identifies and discusses a number of innovation issues that are common to the literature reviewed for this paper.
• Part III examines actual and potential linkages between innovation and performance information. This section suggests that innovation in the Canadian federal government tends to cluster into two groups: smaller initiatives driven by staff or middle management, and much larger projects involving major programs, whole departments, or whole-of-government.

Readily available data on smaller innovation projects are skimpy but suggest that performance information does not play a major role in stimulating these initiatives. In contrast, two of the examples of large-scale innovation show that performance information plays a critical role at all stages. The paper concludes by supporting the contention of others writing on this topic: that more research is needed on innovation, particularly on its link to performance information. In that context, the other conclusions drawn in this paper are tentative, but they suggest that the quality of performance information is as important for innovation as it is for performance management. However, innovation is likely to require its own particular performance information, which may not be generated on a routine basis for performance-management purposes, particularly in the early stages of innovation. And while the availability of performance information can be an important success factor in innovation, it does not stand alone. The commonality of a number of other factors identified in the literature surveyed for this paper strongly suggests that equal if not greater priority needs to be given to attenuating factors that inhibit innovation and to nurturing incentives.

Relevance: 60.00%

Abstract:

This paper considers the analysis of data from randomized trials that offer a sequence of interventions and suffer from a variety of problems in implementation. In experiments that provide treatment in multiple periods (T > 1), subjects have up to 2^T - 1 counterfactual outcomes that must be estimated to determine the full sequence of causal effects from the study. Traditional program evaluation and non-experimental estimators are unable to recover the parameters of interest to policy makers in this setting, particularly if there is non-ignorable attrition. We examine these issues in the context of Tennessee's highly influential randomized class size study, Project STAR. We demonstrate how a researcher can estimate the full sequence of dynamic treatment effects using a sequential difference-in-differences strategy that accounts for attrition due to observables using inverse probability weighting M-estimators. These estimates allow us to recover the structural parameters of the small-class effects in the underlying education production function and to construct dynamic average treatment effects. We present a more complete and different picture of the effectiveness of reduced class size and find that accounting for both attrition on observables and selection on unobservables is crucial and necessary when working with data from Project STAR.
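To make the attrition correction concrete, the sketch below is a minimal, generic illustration of the technique the abstract builds on: a two-period difference-in-differences in which observed units are reweighted by the inverse of their estimated probability of remaining in the sample, with that probability modeled from observables. It is not the authors' sequential estimator, and all column names (treated, stayed, x1, x2, y_pre, y_post) are hypothetical rather than taken from Project STAR.

```python
# Minimal sketch: IPW-adjusted difference-in-differences under attrition on observables.
# Hypothetical columns: treated (0/1), stayed (0/1 = observed at follow-up),
# x1, x2 (baseline covariates), y_pre, y_post (outcomes).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def ipw_did(df: pd.DataFrame) -> float:
    """Two-period treatment effect, reweighting non-attriters by 1 / P(stay | X)."""
    # 1. Model the probability of remaining in the sample as a function of observables.
    X = df[["x1", "x2", "y_pre"]].to_numpy()
    stay = df["stayed"].to_numpy()
    p_stay = LogisticRegression(max_iter=1000).fit(X, stay).predict_proba(X)[:, 1]

    # 2. Keep observed units and weight them by the inverse retention probability.
    kept = df[df["stayed"] == 1].copy()
    kept["w"] = 1.0 / p_stay[stay == 1]

    # 3. Weighted difference-in-differences of pre/post means across treatment arms.
    def wmean(g: pd.DataFrame, col: str) -> float:
        return float(np.average(g[col], weights=g["w"]))

    treat = kept[kept["treated"] == 1]
    ctrl = kept[kept["treated"] == 0]
    return (wmean(treat, "y_post") - wmean(treat, "y_pre")) - (
        wmean(ctrl, "y_post") - wmean(ctrl, "y_pre")
    )
```

The paper's setting adds a sequence of treatment periods, so the weighting and differencing are applied sequentially; this sketch only shows the single-transition building block under the assumption that attrition depends on the listed observables.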