796 results for formative institutional evaluation
Abstract:
Although Design Science Research (DSR) is now an accepted approach to research in the Information Systems (IS) discipline, consensus on the methodology of DSR has yet to be achieved. The lack of a comprehensive and detailed DSR methodology for IS remains a central problem. Prior research (the parent-study) aimed to remedy this situation and resulted in the DSR-Roadmap (Alturki et al., 2011a). Continuing empirical validation and revision of the DSR-Roadmap strives towards a methodology with appropriate levels of detail, integration, and completeness for novice researchers to efficiently and effectively conduct and report DSR in IS. The sub-study reported herein contributes to this larger, ongoing effort. This paper reports results from a formative evaluation of the DSR-Roadmap conducted using focus group analysis. Participants generally endorsed the utility and intuitiveness of the DSR-Roadmap, while also suggesting valuable refinements. Both the parent-study and the sub-study make methodological contributions. The parent-study is the first attempt to use DSR to develop a research methodology, providing an example of how DSR can be applied to research methodology construction. The sub-study demonstrates the value of the focus group method in DSR for formative product evaluation.
Abstract:
This report is a formative evaluation of the operations of the DEEWR-funded Stronger Smarter Learning Community (SSLC) project from September 2009 to July 2011. It was undertaken by an independent team of researchers from Queensland University of Technology, the University of Newcastle and Harvard University. It reports on findings from: documentary analysis; qualitative case studies of SSLC Hub schools; descriptive, multivariate and multilevel analysis of survey data from school leaders and teachers from SSLC Hub and Affiliate schools and from a control group of non-SSLC schools; and multilevel analysis of school-level data on SSLC Hubs, Affiliates and ACARA like-schools. Key findings from this work are that:
• SSLC school leaders and teachers are reporting progress in changing school ethos around issues of: recognition of Indigenous identity, Indigenous leadership, innovative approaches to staffing and school models, Indigenous community engagement and high expectations leadership;
• Many Stronger Smarter messages are reportedly having better uptake in schools with high percentages of Indigenous students;
• There are no major or consistent patterns of differences between SSLC and non-SSLC schools in teacher and school leader self-reports of curriculum and pedagogy practices; and
• There is no evidence to date that SSLC Hubs and Affiliates have increased attendance or achievement gains compared to ACARA like-schools.
Twenty-one months is relatively early in this school reform project. Hence the major focus of subsequent reports will be on the documentation of comparative longitudinal gains in achievement tests and improved attendance. The 2011 and 2012 research will also model the relationships between change in school ethos/climate, changed Indigenous community relations, improved curriculum/pedagogy, and gains in Indigenous student achievement, attendance and outcomes. The key challenge for SSLC and the Stronger Smarter approach will be whether it can systematically generate change and reform in curriculum and pedagogy practices that can be empirically linked to improved student outcomes.
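As a rough sketch of the multilevel analysis described above, the following fits a two-level random-intercept model with teachers nested in schools; the data file and variable names (outcome, school_type, school_id) are illustrative assumptions, not taken from the report.

```python
# A minimal sketch of a two-level (random-intercept) model of the kind
# described above: teacher survey outcomes nested within schools, with a
# school-type predictor. Variable names and data are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data: one row per teacher respondent.
# Columns: outcome (e.g., a school-ethos scale score),
# school_type ("Hub", "Affiliate", "non-SSLC"), school_id.
df = pd.read_csv("teacher_survey.csv")

# The random intercept for school_id captures the clustering of
# teachers within schools.
model = smf.mixedlm("outcome ~ C(school_type)", data=df, groups=df["school_id"])
result = model.fit()
print(result.summary())
```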
Abstract:
Evaluating the validity of formative variables has presented ongoing challenges for researchers. In this paper we use global criterion measures to compare and critically evaluate two alternative formative measures of System Quality. One model is based on the ISO-9126 software quality standard, and the other is based on a leading information systems research model. We find that despite both models having a strong provenance, many of the items appear to be non-significant in our study. We examine the implications of this by evaluating the quality of the criterion variables we used, and the performance of PLS when evaluating formative models with a large number of items. We find that our respondents had difficulty distinguishing between global criterion variables measuring different aspects of overall System Quality. Also, because formative indicators “compete with one another” in PLS, it may be difficult to develop a set of measures which are all significant for a complex formative construct with a broad scope and a large number of items. Overall, we suggest that there is cautious evidence that both sets of measures are valid and largely equivalent, although questions still remain about the measures, the use of criterion variables, and the use of PLS for this type of model evaluation.
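To make the criterion-based evaluation concrete, here is a minimal sketch of a redundancy-style check, regressing a single global criterion item on a set of formative System Quality indicators; the indicator names, data file, and the use of OLS rather than PLS are assumptions for illustration only.

```python
# A minimal, hypothetical sketch of redundancy analysis for a formative
# construct: regress a single global criterion item ("overall, this system
# is of high quality") on the formative indicators. This approximates,
# rather than reproduces, a PLS estimation.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("system_quality_survey.csv")  # hypothetical survey data

indicators = ["reliability", "usability", "efficiency", "maintainability"]
X = sm.add_constant(df[indicators])
y = df["global_quality"]  # single-item global criterion measure

fit = sm.OLS(y, X).fit()
print(fit.summary())
# Non-significant coefficients mirror the "competing indicators" problem
# noted above: with many correlated formative items, individual weights
# can be unstable even when the construct as a whole predicts the criterion.
print("R-squared vs. criterion:", round(fit.rsquared, 3))
```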
Abstract:
In an earlier paper (Cameron & Johnson 2004) we introduced the idea of formative evaluation (or evaluation for development), the purpose of which is to provide information for improving planning programs and activities. This type of evaluation differs from the two other types: outcome evaluation, which aims to judge the success or otherwise of a program; and evaluation for knowledge, which seeks to contribute to theoretical work on planning processes and activities. In the earlier paper we also outlined the first stage of formative evaluation in the SEQ 2021 regional planning exercise, showing how the process of planning for community engagement was modified in light of the evaluation findings. This current paper details the second stage of formative evaluation, in which the collaborative planning component of SEQ 2021 was evaluated; as such, it further demonstrates how formative evaluation can be used to improve planning programs. The evaluation findings also provide insights into strategies for more effective collaborative planning. We begin with an overview of collaborative approaches to regional planning, including the SEQ 2021 regional planning program. We then outline formal and informal evaluations of various collaborative regional planning exercises, including the predecessor of SEQ 2021, SEQ 2001. This sets the scene for discussion of the approach used to evaluate the collaborative component of SEQ 2021. After outlining the main findings from the evaluation and the ways these findings were used to refine the collaborative planning process, we conclude with a series of recommendations relevant not only to SEQ 2021 but to other collaborative planning exercises.
Abstract:
The need to address on-road motorcycle safety in Australia is important due to the disproportionately high percentage of riders and pillions killed and injured each year. One approach to preventing motorcycle-related injury is through training and education. However, motorcycle rider training lacks empirical support as an effective road safety countermeasure to reduce crash involvement. Previous reviews have highlighted that risk-taking is a contributing factor in many motorcycle crashes, rather than merely a lack of vehicle-control skills (Haworth & Mulvihill, 2005; Jonah, Dawson & Bragg, 1982; Watson et al., 1996). Hence, though the basic vehicle-handling skills and knowledge of road rules that are taught in most traditional motorcycle licence training programs may be seen as an essential condition of safe riding, they do not appear to be sufficient in terms of crash reduction. With this in mind, there is considerable scope for improving the focus and content of rider training and education programs. This program of research examined an existing traditional pre-licence motorcycle rider training program and formatively evaluated the addition of a new classroom-based module to address risky riding: the Three Steps to Safer Riding program. The pilot program was delivered in the real-world context of the Q-Ride motorcycle licensing system in the state of Queensland, Australia. Three studies were conducted as part of the program of research: Study 1, a qualitative investigation of delivery practices and student learning needs in an existing rider training course; Study 2, an investigation of the extent to which an existing motorcycle rider training course addressed risky riding attitudes and motives; and Study 3, a formative evaluation of the new program. A literature review, as well as the investigation of learning needs for motorcyclists in Study 1, informed the initial planning and development of the Three Steps to Safer Riding program. Findings from Study 1 suggested that the training delivery protocols used by the industry partner training organisation were consistent with a learner-centred approach and largely met the learning needs of trainee riders. However, it also found that information from the course needs to be reinforced by on-road experiences for some riders once licensed, and that personal meaning for training information was not fully gained until some riding experience had been obtained. While this research informed the planning and development of the new program, a project team of academics and industry experts was responsible for the formulation of the final program. Study 2 and Study 3 were conducted for the purpose of formative evaluation and program refinement. Study 2 served primarily as a trial to test research protocols and data collection methods with the industry partner organisation and, importantly, also served to gather comparison data for the pilot program, which was implemented with the same rider training organisation. Findings from Study 2 suggested that the existing training program of the partner organisation generally had a positive (albeit small) effect on safety in terms of influencing attitudes to risk taking, the propensity for thrill seeking, and intentions to engage in future risky riding. However, maintenance of these effects over time, and the effects on riding behaviour, remain unclear due to a low response rate at follow-up 24 months after licensing.
Study 3 was a formative evaluation of the new pilot program to establish program effects and possible areas for improvement. Study 3a examined the short-term effects of the intervention pilot on psychosocial factors underpinning risky riding, compared to the effects of the standard traditional training program (examined in Study 2). It showed that the course which included the Three Steps to Safer Riding program elicited significantly greater positive attitude change towards road safety than the existing standard licensing course. This effect was found immediately following training, and mean scores for attitudes towards safety were also maintained at the 12-month follow-up. The pilot program also had an immediate effect on other key variables, such as risky riding intentions and the propensity for thrill seeking, although not significantly greater than that of the standard training. A low response rate at the 12-month follow-up unfortunately prevented any firm conclusions being drawn regarding the impact of the pilot program on self-reported risky riding once licensed. Study 3a further showed that the use of intermediate outcomes such as self-reported attitudes and intentions for evaluation purposes provides insights into the mechanisms underpinning risky riding that can be changed by education and training. A multifaceted process evaluation conducted in Study 3b confirmed that the intervention pilot was largely delivered as designed, with course participants also rating most aspects of training delivery highly. The complete program of research contributed to the overall body of knowledge relating to motorcycle rider training, with some potential implications for policy in the area of motorcycle rider licensing. A key finding of the research was that psychosocial influences on risky riding can be shaped by structured education that focuses on awareness raising at a personal level and provides strategies to manage future riding situations. However, the formative evaluation was mainly designed to identify areas of improvement for the Three Steps to Safer Riding program, and it found several areas of potential refinement to improve the future efficacy of the program, including aspects of program content, program delivery, resource development, and measurement tools. The planned future follow-up of program participants' official crash and traffic offence records may lend further support for the application of the program within licensing systems. The findings reported in this thesis offer an initial indication that the Three Steps to Safer Riding is a useful resource to accompany skills-based training programs.
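As an illustration of how intermediate outcomes such as attitude scores can be compared between training groups (as in Study 3a), here is a minimal ANCOVA-style sketch adjusting post-training scores for baseline; the group labels, variable names and data are hypothetical, not those used in the research.

```python
# Hypothetical sketch of a pre/post comparison of safety-attitude scores
# between a standard training group and an intervention group, adjusting
# for baseline scores (ANCOVA). Names and data are illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("rider_attitudes.csv")
# Columns: attitude_pre, attitude_post, group ("standard" or "intervention")

model = smf.ols("attitude_post ~ attitude_pre + C(group)", data=df).fit()
print(model.summary())
# The coefficient on the group term estimates the adjusted between-group
# difference in attitudes immediately after training.
```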
Abstract:
Introduction: Electronic medication administration record (eMAR) systems are promoted as a potential intervention to enhance medication safety in residential aged care facilities (RACFs). The purpose of this study was to conduct an in-practice evaluation of an eMAR system being piloted in one Australian RACF before its roll-out, and to provide recommendations for system improvements. Methods: A multidisciplinary team conducted direct observations of workflow (n=34 hours) in the RACF site and the community pharmacy. Semi-structured interviews (n=5) with RACF staff and the community pharmacist were conducted to investigate their views of the eMAR system. Data were analysed using a grounded theory approach to identify challenges associated with the design of the eMAR system. Results: The current eMAR system does not offer an end-to-end solution for medication management. Many steps, including prescribing by doctors and communication with the community pharmacist, are still performed manually using paper charts and fax machines. Five major challenges associated with the design of the eMAR system were identified: limited interactivity; inadequate flexibility; problems related to information layout and semantics; the lack of relevant decision support; and system maintenance issues. We suggest recommendations to improve the design of the eMAR system and to optimise existing workflows. Discussion: Immediate value can be achieved by improving system interactivity, reducing inconsistencies in data-entry design and offering dedicated organisational support to minimise connectivity issues. Longer-term benefits can be achieved by adding decision-support features and establishing system interoperability requirements with stakeholder groups (e.g. community pharmacies) prior to system roll-out. In-practice evaluations of technologies like the eMAR system have great value in identifying design weaknesses that inhibit optimal system use.
Abstract:
This paper discusses the different perceptions of first-year accounting students about their tutorial activities and their engagement in assessment. As the literature suggests, unless participation in learning activities forms part of graded assessment, it is often difficult to engage students in these activities. Using an action research model, this paper reports the study of first-year accounting students' responses to action-oriented learning tasks in tutorials. The paper focuses on the importance of aligning curriculum objectives, learning and teaching activities, and assessment, i.e. the notion of constructive alignment. However, as the research findings indicate, without support at the institutional level, applying constructive alignment to facilitate quality student learning outcomes is a difficult task. Thus, the impacts of policy constraints on curriculum issues are also discussed, focusing on the limitations faced by tutors and their lack of involvement in curriculum development.
Abstract:
Measuring the social and environmental performance of property is necessary for meaningful triple bottom line (TBL) assessments. This paper demonstrates how relevant indicators derived from environmental rating systems provide for reasonably straightforward collations of performance scores that support adjustments based on a sliding scale. It also highlights the absence of a corresponding consensus on the social metrics representing the third leg of the TBL tripod. Assessing TBL may be unavoidably imprecise, but if valuers and managers continue to ignore TBL concerns, their assessments may soon be less relevant given the emerging institutional milieu informing and reflecting business practices and societal expectations.
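To make the idea of a sliding-scale adjustment concrete, the following is a small hypothetical sketch that collates rating-system performance scores into a weighted index and applies a bounded adjustment to an assessed value; the weights, scale and adjustment range are illustrative assumptions, not a method proposed in the paper.

```python
# Hypothetical sketch: collate environmental rating scores into a weighted
# index and adjust an assessed property value on a sliding scale.
# Weights, scale bounds and the adjustment range are illustrative only.

def tbl_adjusted_value(base_value: float, scores: dict, weights: dict,
                       max_adjustment: float = 0.10) -> float:
    """Adjust base_value by up to +/- max_adjustment (e.g. 10%)
    according to a weighted average of 0-1 performance scores."""
    index = sum(weights[k] * scores[k] for k in weights) / sum(weights.values())
    # Map an index in [0, 1] onto an adjustment in [-max_adjustment, +max_adjustment].
    adjustment = (2 * index - 1) * max_adjustment
    return base_value * (1 + adjustment)

# Example: a property scoring well on energy but poorly on water.
scores = {"energy": 0.8, "water": 0.3, "indoor_environment": 0.6}
weights = {"energy": 0.5, "water": 0.2, "indoor_environment": 0.3}
print(round(tbl_adjusted_value(1_000_000, scores, weights), 2))  # 1028000.0
```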
Abstract:
This paper discusses perceptions of first-year accounting students about their tutorial activities and their engagement in assessment. As the literature suggests, unless participation in learning activities forms part of graded assessment, it is often difficult to engage students in these activities. Using an action research model, this paper reports the study of first-year accounting students' responses to action-oriented learning tasks in tutorials. The paper focuses on the importance of aligning curriculum objectives, learning and teaching activities, and assessment, i.e. the notion of constructive alignment. However, as the research findings indicate, without support at the institutional level, applying constructive alignment to facilitate quality student learning outcomes is a difficult task. Thus, the impacts of policy constraints on curriculum issues are also discussed, focusing on the limitations faced by tutors and their lack of involvement in curriculum development.
Abstract:
The draft Year 1 Literacy and Numeracy Checkpoints Assessments were in open and supported trial during Semester 2, 2010. The purpose of these trials was to evaluate the Year 1 Literacy and Numeracy Checkpoints Assessments (hereafter the Year 1 Checkpoints), designed in 2009 as a way to incorporate the use of the Year 1 Literacy and Numeracy Indicators as formative assessment in Year 1 in Queensland schools. In these trials there were no mandated reporting requirements; the processes of assessment were related to future teaching decisions. As such, the trials were trials of materials and of the processes of using those materials to assess students, plan and teach in Year 1 classrooms. In their current form the Year 1 Checkpoints provide assessment resources for teachers to use in February, June and October. They aim to support teachers in monitoring children's progress and making judgments about their achievement of the targeted P-3 Literacy and Numeracy Indicators by the end of Year 1 (Queensland Studies Authority, 2010, p. 1). The Year 1 Checkpoints include support materials for teachers and administrators, an introductory statement on assessment, work samples, and a Data Analysis Assessment Record (DAAR) to record student performance. Supported Trial participants were also supported with face-to-face and online training sessions, involvement in a moderation process after the October assessments, and opportunities to participate in discussion forums, as well as additional readings and materials. The assessment resources aim to use effective early years assessment practices, in that the evidence is gathered from hands-on teaching and learning experiences rather than more formal assessment methods. They are based in a model of assessment for learning, and aim to support teachers in the “on-going process of determining future learning directions” (Queensland Studies Authority, 2010, p. 1) for all students. Their aim is to focus teachers on interpreting and analysing evidence to make informed judgments about the achievement of all students, as a way to support subsequent planning for learning and teaching. The Evaluation of the Year 1 Literacy and Numeracy Checkpoints Assessments Supported Trial (hereafter the Evaluation) aimed to gather information about the appropriateness, effectiveness and utility of the Year 1 Checkpoints from early years teachers and leaders in up to one hundred Education Queensland schools that had volunteered to be part of the Supported Trial. These sample schools represent a variety of Education Queensland regions and include schools with:
- a high Indigenous student population;
- urban, rural and remote locations;
- single and multi-age early phase classes;
- a high proportion of students from low SES backgrounds.
The purpose of the Evaluation was to evaluate the materials and report on the views of school-based staff involved in the trial on the process, materials, and assessment practices utilised. The Evaluation reviewed the materials, and used surveys, interviews, and observations of processes and procedures to collect relevant data to help present an informed opinion on the Year 1 Checkpoints as assessment for the early years of schooling. Student work samples and teacher planning and assessment documents were also collected. The Evaluation did not assess the Year 1 Checkpoints in any capacity other than as a resource for Year 1 teachers and relevant support staff.
Abstract:
In 2008, a three-year pilot ‘pay for performance’ (P4P) program, known as the ‘Clinical Practice Improvement Payment’ (CPIP), was introduced into Queensland Health (QHealth). QHealth is a large public health sector provider of acute, community, and public health services in Queensland, Australia. The organisation has recently embarked on a significant reform agenda, including a review of existing funding arrangements (Duckett et al., 2008). Partly in response to this reform agenda, a casemix funding model has been implemented to reconnect health care funding with outcomes. CPIP was conceptualised as a performance-based scheme that rewarded quality with financial incentives. This is the first time such a scheme has been implemented in the public health sector in Australia with a focus on rewarding quality, and it is unique in its large state-wide focus across 15 Districts. CPIP initially targeted five acute and community clinical areas: Mental Health, Discharge Medication, Emergency Department, Chronic Obstructive Pulmonary Disease, and Stroke. The CPIP scheme was designed around key concepts including the identification of clinical indicators that met the set criteria of: high disease burden; a well-defined single diagnostic group or intervention; significant variations in clinical outcomes and/or practices; a good evidence base; and clinician control and support (Ward, Daniels, Walker & Duckett, 2007). This evaluative research targeted Phase One of implementation of the CPIP scheme, from January 2008 to March 2009. A formative evaluation utilising a mixed methodology and complementarity analysis was undertaken. The research involved three research questions and aimed to determine the knowledge, understanding, and attitudes of clinicians; identify improvements to the design, administration, and monitoring of CPIP; and determine the financial and economic costs of the scheme. Three key studies were undertaken to address these questions. Firstly, a survey of clinicians examined their levels of knowledge and understanding and their attitudes to the scheme. Secondly, the study sought to apply Statistical Process Control (SPC) to the process indicators to assess whether this enhanced the scheme. Thirdly, a simple economic cost analysis was undertaken. The CPIP survey elicited responses from 192 clinicians. Over 70% of these respondents were supportive of the continuation of the CPIP scheme. This finding was also supported by the results of a quantitative attitude survey, which identified positive attitudes in six of the seven domains, including impact, awareness and understanding, and clinical relevance, all scored positively across the combined respondent group. SPC as a trending tool may play an important role in the early identification of indicator weakness for the CPIP scheme. This evaluative research supports a previously identified need in the literature for a phased introduction of P4P-type programs. It further highlights the value of undertaking a formal risk assessment of clinician, management, and systemic levels of literacy and competency with the measurement and monitoring of quality prior to a phased implementation. This phasing can then be guided by a P4P Design Variable Matrix, which provides a selection of program design options such as indicator targets and payment mechanisms.
It became evident that a clear process is required to standardise how clinical indicators evolve over time and direct movement towards more rigorous ‘pay for performance’ targets and the development of an optimal funding model. Use of this matrix will enable the scheme to mature and build the literacy and competency of clinicians and the organisation as implementation progresses. Furthermore, the research identified that CPIP created a spotlight on clinical indicators, and incentive payments of over $5 million (from a potential $10 million) were secured across the five clinical areas in the first 15 months of the scheme. This indicates that quality was rewarded in the new QHealth funding model and that, despite issues identified with the payment mechanism, funding was distributed. The economic model used identified a relatively low cost of reporting (under $8,000), as opposed to funds secured of over $300,000 for mental health, as an example. Movement to a full cost-effectiveness study of CPIP is supported. Overall, the introduction of the CPIP scheme into QHealth has been a positive and effective strategy for engaging clinicians in quality, and has been the catalyst for the identification and monitoring of valuable clinical process indicators. This research has highlighted that clinicians are supportive of the scheme in general; however, there are some significant risks, including the functioning of the CPIP payment mechanism. Given clinician support for the use of a pay-for-performance methodology in QHealth, the CPIP scheme has the potential to be a powerful addition to a multi-faceted suite of quality improvement initiatives within QHealth.
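As an illustration of SPC as a trending tool for clinical process indicators, here is a minimal p-chart sketch; the monthly counts and the indicator itself are hypothetical, not CPIP data.

```python
# Hypothetical sketch of a p-chart for a clinical process indicator
# (e.g., proportion of eligible patients receiving a discharge medication
# review each month). Data are illustrative only.
import numpy as np

compliant = np.array([42, 45, 40, 38, 47, 30, 44, 46])  # numerators per month
eligible = np.array([50, 52, 49, 50, 53, 51, 50, 52])   # denominators per month

p = compliant / eligible
p_bar = compliant.sum() / eligible.sum()  # centre line

# Three-sigma control limits vary with each month's subgroup size.
sigma = np.sqrt(p_bar * (1 - p_bar) / eligible)
ucl = np.clip(p_bar + 3 * sigma, 0, 1)
lcl = np.clip(p_bar - 3 * sigma, 0, 1)

for month, (prop, lo, hi) in enumerate(zip(p, lcl, ucl), start=1):
    flag = "OUT OF CONTROL" if not lo <= prop <= hi else ""
    print(f"month {month}: p={prop:.2f} limits=({lo:.2f}, {hi:.2f}) {flag}")
```

With these illustrative numbers, month 6 falls below the lower control limit, showing how a p-chart can surface early indicator weakness of the kind the evaluation discusses.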
Abstract:
Formative assessment is increasingly being implemented through policy initiatives in Chinese educational contexts. As an approach to assessment, formative assessment derives many of its key principles from Western contexts, notably through the work of scholars in the UK, the USA and Australia. The question addressed in this paper is how formative assessment has been interpreted in the teaching of College English in Chinese higher education. The paper reports on a research study that utilised a sociocultural perspective on learning and assessment to analyse how two Chinese universities – an urban-based Key University and a regional-based Non-Key University – interpreted and enacted a China Ministry of Education policy on formative assessment in College English teaching. Of particular interest for the research were the ways in which the sociocultural conditions of the Chinese context mediated understanding of Western principles and led to their adaptation. The findings from the two universities identified some consistency in localised interpretations of formative assessment, which included emphases on process and student participation. The differences related to the specific sociocultural conditions contextualising each university, including geographical location, socioeconomic status, and teacher and student roles, expectations and beliefs about English. The findings illustrate the sociocultural tensions in interpreting, adapting and enacting formative assessment in Chinese College English classes, and the consequent challenges to, and questions about, retaining the spirit of formative assessment as it was originally conceptualised.
Abstract:
The purpose of this paper is to report on a methods research project investigating the evaluation of diverse teaching practice in higher education. The research method is a single-site case study of an Australian university, with data collected through published documents, surveys, interviews and focus groups. The project provides evidence of the wide variety of evaluation practice and diverse teaching practice across the university. This breadth identifies the need for greater flexibility in evaluation processes, tools and support to assist teaching staff to evaluate their diverse teaching practice. Employment opportunities for academics benchmark the university nationally and position the case study in the field. Finally, the paper reaffirms the institutional responsibility to provide ongoing services that support teaching staff.
Abstract:
The College English Curriculum Requirements (CECR), announced by the Chinese Ministry of Education in 2007, recommended the inclusion of formative assessment in the existing summative assessment framework of College English. This policy had the potential to fundamentally change the nature of assessment and its role in the teaching and learning of English in Chinese universities. In order to document and analyse these changes, case studies involving English language teachers and learners were undertaken in two Chinese universities: one a Key university in the national capital; the other a non-Key university in a western province. The case study design incorporated classroom observations and interviews with English language teachers and their students. The type and focus of feedback and the engagement of students in assessment were analysed in the two contexts. Fundamental to the analysis was the concept of enactment, with the focus of this study on the ways that policy ideas and principles were enacted in the practices of the Chinese university classroom. The study offers understandings of formative assessment as applied in contexts other than the predominantly Western, Anglophone contexts from which many of its principles derive.