18 results for QUALITY-CONTROL GUIDELINES
Abstract:
Introduction
To meet the quality standards for high-stakes OSCEs, it is necessary to ensure consistently high-quality, standardized performance by the SPs involved.[1] One way this can be assured is through assessment of the quality of SPs' performance during training and during the examination itself. There is some literature on validated instruments used to assess SP performance in formative contexts, but very little relating to high-stakes contexts.[2], [3], [4]

Content and structure
During this workshop, different approaches to quality control of SPs' performance, developed in medicine, pharmacy and nursing OSCEs, will be introduced. Participants will have the opportunity to use these approaches in simulated interactions. Advantages and disadvantages of the approaches will be discussed.

Anticipated outcomes
By the end of this session, participants will be able to discuss the rationale for quality control of SPs' performance in high-stakes OSCEs, outline key factors in creating strategies for quality control, identify various strategies for assuring quality control, and reflect on applications to their own practice.

Who should attend
The workshop is designed for those interested in quality assurance of SP performance in high-stakes OSCEs.

Level
All levels are welcome.

References
1. Adamo G. Simulated and standardized patients in OSCEs: achievements and challenges 1992-2003. Med Teach 2003, 25(3):262-270.
2. Wind LA, Van Dalen J, Muijtjens AM, Rethans JJ. Assessing simulated patients in an educational setting: the MaSP (Maastricht Assessment of Simulated Patients). Med Educ 2004, 38(1):39-44.
3. Bouter S, van Weel-Baumgarten E, Bolhuis S. Construction and validation of the Nijmegen Evaluation of the Simulated Patient (NESP): assessing simulated patients' ability to role-play and provide feedback to students. Acad Med 2012.
4. May W, Fisher D, Souder D. Development of an instrument to measure the quality of standardized/simulated patient verbal feedback. Med Educ 2012, 2(1).
Abstract:
Introduction
In our program, simulated patients (SPs) give feedback to medical students in the course of communication skills training. To ensure effective training, quality control of the SPs' feedback should be implemented. At other institutions, medical students evaluate the SPs' feedback for quality control (Bouter et al., 2012). In considering the implementation of quality control for SPs' feedback in our program, we wondered whether evaluation by students would yield the same scores as evaluation by experts.

Methods
Consultations simulated by 4th-year medical students with SPs were videotaped, including the SP's feedback to the students (n=85). At the end of the training sessions, students rated the SPs' performance using a rating instrument, the Bernese Assessment for Role-play and Feedback (BARF), containing 11 items on feedback quality. Additionally, the videos were evaluated by 3 trained experts using the BARF.

Results
The experts showed high inter-rater agreement when rating identical feedback (ICC_unjust = 0.953). Comparing the ratings of students and experts, high agreement was found for the following items:
1. The SP invited the student to reflect on the consultation first, Amin (= minimal agreement) = 97%.
2. The SP asked the student what he/she liked about the consultation, Amin = 88%.
3. The SP started with positive feedback, Amin = 91%.
4. The SP compared the student with other students, Amin = 92%.
In contrast, the following items showed differences between the ratings of experts and students:
1. The SP used precise situations for feedback, Amax (= maximal agreement) = 55%. Students rated 67 of the SPs' feedbacks as perfect on this item (the highest rating on a 5-point Likert scale), while only 29 were rated this way by the experts.
2. The SP gave precise suggestions for improvement, Amax = 75%. 62 of the SPs' feedbacks obtained the highest rating from students, while only 44 did in the view of the experts.
3. The SP spoke about his/her role in the third person, Amax = 60%. Students rated 77 feedbacks with the highest score, while experts judged only 43 this way.

Conclusion
Although the students' evaluations agreed with those of the experts on some items, students awarded the optimal score to the SPs' feedback more often than the experts did. Moreover, it seems difficult for students to notice when SPs talk about their role in the first rather than the third person. Since precision and talking about the role in the third person are important quality criteria for feedback, this result should be taken into account when considering students' evaluation of SPs' feedback for quality control.

Reference: Bouter S, van Weel-Baumgarten E, Bolhuis S. Construction and validation of the Nijmegen Evaluation of the Simulated Patient (NESP): assessing simulated patients' ability to role-play and provide feedback to students. Acad Med 2012.
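The agreement figures reported above can be illustrated in outline. A minimal sketch of how percent agreement between two raters on a 5-point Likert item might be computed; the scores below are hypothetical examples, not the study's data:

```python
# Percent agreement between two raters on a 5-point Likert item.
# The rating lists are hypothetical illustration, not the study's data.

def percent_agreement(rater_a, rater_b, tolerance=0):
    """Share of paired ratings that differ by at most `tolerance` points."""
    assert len(rater_a) == len(rater_b)
    matches = sum(1 for a, b in zip(rater_a, rater_b) if abs(a - b) <= tolerance)
    return matches / len(rater_a)

students = [5, 5, 4, 5, 3, 5, 4, 5]   # hypothetical student ratings of one item
experts  = [4, 5, 3, 4, 3, 5, 4, 3]   # hypothetical expert ratings of the same item

exact = percent_agreement(students, experts)          # identical scores only
within_one = percent_agreement(students, experts, 1)  # scores within 1 point

print(f"exact: {exact:.0%}, within one point: {within_one:.0%}")
```

Exact agreement corresponds to the strict reading of the Amin/Amax figures; a tolerance band is a common, more lenient variant.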
Abstract:
The long-lived radionuclide 129I (T1/2 = 15.7 My) occurs in nature in very low concentrations. Since the middle of the twentieth century, environmental levels of 129I have changed dramatically as a consequence of the civil and military use of nuclear fission. Its investigation in environmental materials is of interest for environmental surveillance, retrospective dosimetry, and its use as a natural and man-made tracer of environmental processes. We compare the two analytical methods presently capable of determining 129I in environmental materials, namely radiochemical neutron activation analysis (RNAA) and accelerator mass spectrometry (AMS). Emphasis is placed on quality control and on the detection capabilities for the analysis of 129I in environmental materials. Some applications are discussed.
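The quoted half-life fixes the decay constant and specific activity of 129I via the standard decay relations lambda = ln(2)/T1/2 and A = lambda N. A short sketch (the numbers are computed here from those relations, not taken from the abstract):

```python
import math

# Decay constant and specific activity of 129I from its half-life,
# using the standard relations lambda = ln(2)/T_half and A = lambda * N.

N_A = 6.02214076e23          # Avogadro's number, 1/mol
T_HALF_YEARS = 15.7e6        # half-life of 129I in years (from the abstract)
SECONDS_PER_YEAR = 3.156e7   # approximate

lam = math.log(2) / (T_HALF_YEARS * SECONDS_PER_YEAR)  # decay constant, 1/s

# Specific activity of 1 gram of pure 129I (molar mass ~129 g/mol)
atoms_per_gram = N_A / 129.0
activity_bq_per_gram = lam * atoms_per_gram

print(f"decay constant: {lam:.3e} 1/s")
print(f"specific activity: {activity_bq_per_gram:.3e} Bq/g")
```

The result, on the order of 10^6 Bq per gram of pure 129I (and vastly lower at environmental concentrations), illustrates why atom-counting methods such as AMS, rather than decay counting, are attractive for this nuclide.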