909 results for assessment feedback
Abstract:
Introduction: Since the quality of patient portrayal by standardized patients (SPs) during an Objective Structured Clinical Exam (OSCE) has a major impact on the reliability and validity of the exam, quality control should be carried out. The literature on quality control of SPs' performance focuses on feedback [1, 2] or completion of checklists [3, 4]. Since we found no published instrument meeting our needs for the assessment of patient portrayal, we developed one, drawing on existing instruments [5], and used it in our high-stakes exam.

Project description: SP trainers from five medical faculties collected and prioritized quality criteria for patient portrayal. Items were revised twice, based on experiences during OSCEs. The final instrument contains 14 criteria covering acting (e.g. adequate verbal and non-verbal expression) and standardization (e.g. verbatim delivery of the first sentence). All partners used the instrument during a high-stakes OSCE. SPs and trainers were introduced to the instrument, which was used in training (more than 100 observations) and during the exam (more than 250 observations).

Outcome: High quality of SPs' patient portrayal during the exam was documented: more than 90% of SP performances were rated completely correct or sufficient. Quality of performance increased between training and exam. For example, the rate of completely correct reactions in medical tests rose from 88% to 95%; together with the 4% of performances rated sufficient, 99% of the reactions in medical tests met the standards of the exam. SP educators using the instrument reported an improvement in SPs' performance induced by its use. Disadvantages mentioned were the high concentration needed to observe all criteria and the cumbersome handling of the paper-based forms.

Discussion: We were able to document a very high quality of SP performance in our exam. The data also indicate that our training is effective. We believe the high concentration the instrument requires is well invested, considering the observed enhancement of performance. An iPad-based application for the form is planned to address the cumbersome handling of the paper.
Abstract:
Recent developments in federal policy have prompted the creation of state evaluation frameworks for principals and teachers that hold educators accountable for effective practices and student outcomes. These changes have created a demand for formative evaluation instruments that reflect current accountability pressures and can be used by schools to focus school improvement and leadership development efforts. The Comprehensive Assessment of Leadership for Learning (CALL) is a next-generation, 360-degree online assessment and feedback system that reflects best practices in feedback design. Unique characteristics of CALL include a focus on leadership distributed throughout the school rather than carried out by an individual leader; assessment of leadership tasks rather than perceptions of leadership practice; attention to the larger, complex systems of middle and high schools; and transparency of assessment design. This paper describes research contributing to the design and validation of the CALL survey instrument.
Abstract:
Research supported by U.S. Dept. of Housing and Urban Development, Office of Policy Development and Research.
Abstract:
Constructing quality assessment rubrics can be challenging, especially when they are used for integrated, group-centered, applied learning. We describe a collaborative assessment task in which groups of second-year dentistry students developed a complex concept map. In groups of four, the students were given a written, simulated medical history of a patient and required to construct a concept map illustrating relevant pathophysiological concepts and pharmacological interventions. This report describes a research project aimed at making the educational goals of the task more explicit by investigating student and faculty member understandings of the criteria that might be used to assess the concept map. Information was gathered about students' perceptions of the learning goals associated with the task, and these were compared with faculty member perceptions. The findings were used to develop an assessment rubric intended to be more accessible to learners. The new rubric uses the language of both faculty members and students to represent more clearly the expectations of each criterion and standard. This assessment rubric will be used in 2005 for the next phase of the project.
Abstract:
Recent surveys reveal that many university students in the U.K. are not satisfied with the timeliness and usefulness of the feedback given by their tutors. Ensuring timeliness in marking can result in a reduction in the quality of feedback. Though suitable use of Information and Communication Technology should alleviate this problem, existing Virtual Learning Environments are inadequate for detailed marking scheme creation and provide little support for giving detailed feedback. This paper describes a new web-based tool called e-CAF for facilitating coursework assessment and feedback management directed by marking schemes. Using e-CAF, tutors can create or reuse detailed marking schemes efficiently without sacrificing accuracy or thoroughness in marking. The flexibility in marking scheme design also makes it possible for tutors to modify a marking scheme during the marking process without having to reassess the students' submissions, and the resulting marking process becomes more transparent to students.
Abstract:
Recent National Student Surveys revealed that many U.K. university students are dissatisfied with the timeliness and usefulness of the feedback received from their tutors. Ensuring timeliness in marking often results in a reduction in the quality of feedback. In Computer Science, where learning relies on practising and learning from mistakes, feedback that pinpoints errors and explains how to improve is important for a good student learning experience. Though suitable use of Information and Communication Technology should alleviate this problem, existing Virtual Learning Environments and e-Assessment applications such as Blackboard/WebCT, BOSS, MarkTool and GradeMark are inadequate to support a coursework assessment process that promotes timely and useful feedback while maintaining consistency in marking involving multiple tutors. We have developed a novel Internet application, called eCAF, for facilitating an efficient and transparent coursework assessment and feedback process. The eCAF system supports detailed marking scheme editing and enables tutors to use such schemes to pinpoint errors in students' work and so provide helpful feedback efficiently. Tutors can also highlight areas in a submitted work and attach feedback that clearly links to the identified mistakes and the respective marking criteria. In light of the results obtained from a recent trial of eCAF, we discuss how its key features may facilitate an effective and efficient coursework assessment and feedback process.
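The linkage the abstract describes, between a highlighted region of a submission, a marking criterion, and a piece of feedback, could be modelled roughly as follows. This is a hypothetical sketch: names such as `Criterion`, `Annotation` and `MarkedSubmission` are illustrative and are not eCAF's published data model.

```python
from dataclasses import dataclass, field

@dataclass
class Criterion:
    """One item of a marking scheme."""
    name: str
    max_marks: float

@dataclass
class Annotation:
    """A tutor's highlight in a submission, tied to a criterion
    and to feedback that explains the identified mistake."""
    region: str           # e.g. a highlighted text span or line range
    criterion: Criterion
    deduction: float
    feedback: str

@dataclass
class MarkedSubmission:
    scheme: list                       # the Criterion objects in force
    annotations: list = field(default_factory=list)

    def total(self) -> float:
        """Full marks minus all annotated deductions."""
        full = sum(c.max_marks for c in self.scheme)
        return full - sum(a.deduction for a in self.annotations)

# Hypothetical usage: two criteria, one pin-pointed error.
correctness = Criterion("Correctness", 10)
style = Criterion("Style", 5)
sub = MarkedSubmission(scheme=[correctness, style])
sub.annotations.append(Annotation("lines 12-14", correctness, 2,
                                  "Off-by-one in the loop bound."))
```

Because each deduction carries both its feedback text and the criterion it belongs to, the mark and its justification stay linked, which is the transparency property the abstract emphasises.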
Abstract:
Our key contribution is a flexible, automated marking system that adds desirable functionality to existing E-Assessment systems. In our approach, any given E-Assessment system is relegated to a data-collection mechanism, whereas marking and the generation and distribution of personalised per-student feedback is handled separately by our own system. This allows content-rich Microsoft Word feedback documents to be generated and distributed to every student simultaneously according to a per-assessment schedule.
The feedback is adaptive in that it corresponds to the answers given by the student and provides guidance on where they may have gone wrong. It is not limited to simple multiple-choice questions, which are the most prescriptive question type offered by most E-Assessment systems and therefore the most straightforward to mark consistently and to supply with individual per-alternative feedback strings. The system is also better equipped to handle mathematical symbols and images within the feedback documents, unlike existing E-Assessment systems, which can only handle simple text strings.
As well as MCQs, the system reliably and robustly handles Multiple Response, Text Matching and Numeric style questions in a more flexible manner than Questionmark: Perception and other E-Assessment systems. It can also reliably handle multi-part questions, where the response to an earlier question influences the answer to a later one, and can adjust both scoring and feedback appropriately.
New question formats can be added at any time, provided a corresponding marking method conforming to certain templates can also be programmed. Indeed, any question type for which a programmatic method of marking can be devised may be supported by our system. Furthermore, since the student's response to each question is marked programmatically, our system can be set to allow for minor deviations from the correct answer and, if appropriate, award partial marks.
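The pluggable, template-conforming marking approach described above can be sketched as follows. This is a minimal illustration under assumed names (`Marker`, `NumericMarker`, the tolerance parameters); the abstract does not publish the system's actual interfaces.

```python
from abc import ABC, abstractmethod

class Marker(ABC):
    """Template every question-type marker must conform to."""
    @abstractmethod
    def mark(self, response, answer):
        """Return (score, feedback) for one student response."""

class NumericMarker(Marker):
    """Marks numeric answers, tolerating minor deviations and
    awarding partial credit for near-misses (assumed policy)."""
    def __init__(self, tolerance=0.01, partial_band=0.05, max_score=1.0):
        self.tolerance = tolerance        # full marks within this relative error
        self.partial_band = partial_band  # half marks within this wider band
        self.max_score = max_score

    def mark(self, response, answer):
        error = abs(response - answer) / abs(answer)
        if error <= self.tolerance:
            return self.max_score, "Correct."
        if error <= self.partial_band:
            return self.max_score / 2, "Close: check your rounding."
        return 0.0, "Incorrect; the expected value was %g." % answer

# New question formats register a conforming marker at any time.
MARKERS = {"numeric": NumericMarker()}

score, feedback = MARKERS["numeric"].mark(3.1416, 3.14159)
```

Because marking is a plain function of the response, the same registry pattern extends to multi-part questions: a later part's marker can simply take the earlier response as an extra argument and adjust score and feedback accordingly.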
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Thesis (Licentiate in Spanish, English and French).--Universidad de La Salle. Facultad de Ciencias de La Educación. Licenciatura en Lengua Castellana, Inglés y Francés, 2014
Abstract:
A set of slides used for the RAP SIG event on 19 Jan 2017
Abstract:
The Queensland University of Technology (QUT) University Academic Board approved a new QUT Assessment Policy in September 2003, which requires a criterion-referenced approach, as opposed to a norm-referenced approach, to assessment across the university (QUT, MOPP, 2003). In 2004, the QUT Law School embarked upon a process of awareness raising about criterion-referenced assessment amongst staff, and from 2004 to 2005 staggered the implementation of criterion-referenced assessment in all first-year core undergraduate law units. This paper briefly discusses the benefits and potential pitfalls of criterion-referenced assessment and the context for implementing it in the first-year law program, reports on students' feedback on the introduction of criterion-referenced assessment, and describes the strategies adopted in 2005 to engage students more fully in criterion-referenced assessment processes to enhance their learning outcomes.
Abstract:
Developing an effective impact evaluation framework, managing and conducting rigorous impact evaluations, and developing a strong research and evaluation culture within development communication organisations present many challenges. This is especially so when both the community and the organisational context are continually changing and the outcomes of programs are complex and difficult to identify clearly.

This paper presents a case study from a research project conducted from 2007-2010 that aims to address these challenges, entitled Assessing Communication for Social Change: A New Agenda in Impact Assessment. Building on previous development communication projects that used ethnographic action research, this project is developing, trialling and rigorously evaluating a participatory impact assessment methodology for assessing the social change impacts of community radio programs in Nepal. The project is a collaboration between Equal Access – Nepal (EAN), Equal Access – International, local stakeholders and listeners, a network of trained community researchers, and a research team from two Australian universities. A key element of the project is the establishment of an organisational culture within EAN that values and supports the impact assessment process being developed, which is based on continuous action learning and improvement. The paper describes the situation related to monitoring and evaluation (M&E) and impact assessment before the project began, in which EAN was often reliant on time-bound studies and 'success stories' derived from listener letters and feedback. We then outline the various strategies used to develop stronger and more effective impact assessment and M&E systems, and the gradual changes that have occurred to date. These changes include a greater understanding of the value of adopting a participatory, holistic, evidence-based approach to impact assessment.

We also critically review the many challenges experienced in this process, including:
• Tension between the pressure from donors to 'prove' impacts and the adoption of a bottom-up, participatory approach based on 'improving' programs in ways that meet community needs and aspirations.
• Resistance from the content teams to changing their existing M&E practices and to the perceived complexity of the approach.
• Lack of meaningful connection between the M&E and content teams.
• Human resource problems and a lack of capacity in analysing qualitative data and reporting results.
• Contextual challenges, including extreme poverty, wide cultural and linguistic diversity, poor transport and communications infrastructure, and political instability.
• A general lack of acceptance of the importance of evaluation within Nepal, due to a tendency to accept problems as fate or 'natural' rather than as requiring investigation.