4 results for student feedback

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance:

30.00%

Publisher:

Abstract:

In the training of healthcare professionals, one of the advantages of communication training with simulated patients (SPs) is the SP's ability to provide direct feedback to students after a simulated clinical encounter. The quality of SP feedback must be monitored, especially because it is well known that feedback can have a profound effect on student performance. Given the current lack of valid and reliable instruments to assess the quality of SP feedback, our study examined the validity and reliability of one potential instrument, the 'modified Quality of Simulated Patient Feedback Form' (mQSF).

Methods: Content validity of the mQSF was assessed by inviting experts in the area of simulated clinical encounters to rate the importance of the mQSF items. Moreover, generalizability theory was used to examine the reliability of the mQSF. Our data came from videotapes of clinical encounters between six simulated patients and six students and the ensuing feedback from the SPs to the students. Ten faculty members judged the SP feedback according to the items on the mQSF. Three weeks later, this procedure was repeated with the same faculty members and recordings.

Results: All but two items of the mQSF received importance ratings of > 2.5 on a four-point rating scale. A generalizability coefficient of 0.77 was established with two judges observing one encounter.

Conclusions: The findings for content validity and for reliability with two judges suggest that the mQSF is a valid and reliable instrument to assess the quality of feedback provided by simulated patients.
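The reported generalizability coefficient depends on how many judges observe an encounter. A minimal sketch of that relationship, using hypothetical variance components (the abstract does not report them; the values below are illustrative and chosen so that two judges give roughly the reported 0.77):

```python
# Illustrative sketch, not the study's actual analysis: a relative
# generalizability coefficient as a function of the number of judges.

def g_coefficient(var_persons: float, var_error: float, n_judges: int) -> float:
    """Person (true-score) variance over person variance plus error
    variance averaged across n_judges raters."""
    return var_persons / (var_persons + var_error / n_judges)

# Hypothetical variance components (assumed, for illustration only)
var_p, var_e = 0.40, 0.24

for k in (1, 2, 4):
    print(k, "judges:", round(g_coefficient(var_p, var_e, k), 2))
```

With these assumed components, a single judge falls below the conventional 0.7-0.8 range, while averaging over two judges reaches about 0.77, mirroring the study's design choice.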

Relevance:

30.00%

Publisher:

Abstract:

Introduction: In our program, simulated patients (SPs) give feedback to medical students in the course of communication skills training. To ensure effective training, quality control of the SPs' feedback should be implemented. At other institutions, medical students evaluate the SPs' feedback for quality control (Bouter et al., 2012). When considering quality control of SPs' feedback in our program, we wondered whether evaluation by students would result in the same scores as evaluation by experts.

Methods: Consultations simulated by 4th-year medical students with SPs were videotaped, including the SPs' feedback to the students (n = 85). At the end of the training sessions, students rated the SPs' performance using a rating instrument called the Bernese Assessment for Role-play and Feedback (BARF), containing 11 items concerning feedback quality. Additionally, the videos were evaluated by 3 trained experts using the BARF.

Results: The experts showed high interrater agreement when rating identical feedback (ICC(unjust) = 0.953). Comparing the ratings of students and experts, high agreement was found with regard to the following items:

1. The SP invited the student to reflect on the consultation first, Amin (= minimal agreement) = 97%.
2. The SP asked the student what he/she liked about the consultation, Amin = 88%.
3. The SP started with positive feedback, Amin = 91%.
4. The SP compared the student with other students, Amin = 92%.

In contrast, the following items showed differences between the ratings of experts and students:

1. The SP used precise situations for feedback, Amax (= maximal agreement) = 55%. Students rated 67 of the SPs' feedbacks as perfect with regard to this item (the highest rating on a 5-point Likert scale), while only 29 feedbacks were rated this way by the experts.
2. The SP gave precise suggestions for improvement, Amax = 75%. 62 of the SPs' feedbacks obtained the highest rating from students, while only 44 achieved the highest rating in the view of the experts.
3. The SP speaks about his/her role in the third person, Amax = 60%. Students rated 77 feedbacks with the highest score, while experts judged only 43 feedbacks this way.

Conclusion: Although the students' evaluation agreed with that of the experts concerning some items, students awarded the SPs' feedback the optimal score more often than the experts did. Moreover, it seems difficult for students to notice when SPs talk about the role in the first instead of the third person. Since precision and talking about the role in the third person are important quality criteria of feedback, this result should be taken into account when considering students' evaluation of SPs' feedback for quality control.

Bouter, S., E. van Weel-Baumgarten, and S. Bolhuis. 2012. Construction and Validation of the Nijmegen Evaluation of the Simulated Patient (NESP): Assessing Simulated Patients' Ability to Role-Play and Provide Feedback to Students. Academic Medicine: Journal of the Association of American Medical Colleges.
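The Amin/Amax figures above are percentage-agreement statistics between two sets of raters. A minimal sketch of how such an agreement percentage can be computed, with hypothetical ratings (the study's raw data are not given in the abstract):

```python
# Illustrative sketch: percentage agreement between two raters' scores
# on a 5-point Likert scale, as in the student-vs-expert comparison.

def percent_agreement(ratings_a, ratings_b, tolerance=0):
    """Share (in %) of items on which the two raters agree within
    `tolerance` scale points; tolerance=0 means exact agreement."""
    assert len(ratings_a) == len(ratings_b), "rating lists must align"
    hits = sum(abs(a - b) <= tolerance for a, b in zip(ratings_a, ratings_b))
    return 100.0 * hits / len(ratings_a)

students = [5, 5, 4, 5, 3, 5, 4, 5]   # hypothetical student ratings
experts  = [5, 4, 4, 3, 3, 5, 4, 4]   # hypothetical expert ratings

print(percent_agreement(students, experts))      # exact agreement
print(percent_agreement(students, experts, 1))   # agreement within one point
```

Loosening the tolerance raises the agreement figure, which is one way minimal vs. maximal agreement bounds like Amin and Amax can differ for the same item.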

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: The discrepancy between the extensive impact of musculoskeletal complaints and the common deficiencies in musculoskeletal examination skills has led to increased emphasis on structured teaching and assessment. However, studies of single interventions are scarce, and little is known about the time-dependent effect of assisted learning in addition to a standard curriculum. We therefore evaluated the immediate and long-term impact of a small-group course on musculoskeletal examination skills.

METHODS: All 48 Year 4 medical students of a 6-year curriculum, attending their 8-week clerkship in internal medicine at one university department in Berne, participated in this controlled study. Twenty-seven students were assigned to the intervention, a 6 × 1 h practical course (4-7 students; interactive hands-on examination of real patients; systematic, detailed feedback to each student from teacher, peers and patients). Twenty-one students took part in the regular clerkship activities only and served as controls. In all students, clinical skills (CS, 9 items) were assessed at an Objective Structured Clinical Examination (OSCE) station, including specific musculoskeletal examination skills (MSES, 7 items) and interpersonal skills (IPS, 2 items). Two raters assessed the skills on a 4-point Likert scale at the beginning (T0), at the end (T1), and 4-12 months after the clerkship (T2). Statistical analyses included the Friedman test, the Wilcoxon rank sum test and the Mann-Whitney U test.

RESULTS: At T0 there were no significant differences between the intervention and control groups. At T1 and T2 the control group showed no significant changes in CS, MSES or IPS compared with T0. In contrast, the intervention group significantly improved CS, MSES and IPS at T1 (p < 0.001). This enhancement was sustained at T2 for CS and MSES (p < 0.05), but not for IPS.

CONCLUSIONS: Year 4 medical students were unable to improve their musculoskeletal examination skills during regular clinical clerkship activities. However, an additional small-group, interactive clinical skills course with feedback from various sources improved these essential examination skills immediately after the teaching and several months later. We conclude that supplementary specific teaching activities are needed. Even a single, short targeted module can have a long-lasting effect and is worth the additional effort.
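The between-group comparisons above rely on the Mann-Whitney U test, which is appropriate for ordinal Likert-scale scores. A minimal sketch of the U statistic itself, on hypothetical OSCE scores (the study's data are not reported here, and the authors presumably used standard statistical software):

```python
# Illustrative sketch: the Mann-Whitney U statistic for two independent
# samples, e.g. intervention vs. control OSCE scores on a 4-point scale.

def mann_whitney_u(x, y):
    """U for sample x versus y: the number of (xi, yj) pairs with
    xi > yj, counting ties as one half."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

intervention = [3.5, 3.8, 3.9, 3.6]   # hypothetical mean item scores
control = [3.0, 3.2, 3.4, 3.1]        # hypothetical mean item scores

print(mann_whitney_u(intervention, control))
```

When every intervention score exceeds every control score, U equals the maximum n1 x n2, which is what a significance test would then evaluate against the null distribution.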