915 results for "medical education pipeline"
Abstract:
In the training of healthcare professionals, one of the advantages of communication training with simulated patients (SPs) is the SP's ability to provide direct feedback to students after a simulated clinical encounter. The quality of SP feedback must be monitored, especially because it is well known that feedback can have a profound effect on student performance. Due to the current lack of valid and reliable instruments to assess the quality of SP feedback, our study examined the validity and reliability of one potential instrument, the 'modified Quality of Simulated Patient Feedback Form' (mQSF). Methods Content validity of the mQSF was assessed by inviting experts in the area of simulated clinical encounters to rate the importance of the mQSF items. Moreover, generalizability theory was used to examine the reliability of the mQSF. Our data came from videotapes of clinical encounters between six simulated patients and six students and the ensuing feedback from the SPs to the students. Ten faculty members judged the SP feedback according to the items on the mQSF. Three weeks later, this procedure was repeated with the same faculty members and recordings. Results All but two items of the mQSF received importance ratings of > 2.5 on a four-point rating scale. A generalizability coefficient of 0.77 was established with two judges observing one encounter. Conclusions The findings for content validity and reliability with two judges suggest that the mQSF is a valid and reliable instrument to assess the quality of feedback provided by simulated patients.
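The abstract above reports a generalizability coefficient of 0.77 with two judges observing one encounter. As an illustration only (not part of the study), a Spearman-Brown style decision-study projection shows how such a coefficient scales with the number of judges; this is a minimal sketch, assuming the judge facet behaves as a random facet so the Spearman-Brown relation applies:

```python
def projected_reliability(single_rater_g, n_raters):
    """Spearman-Brown style projection: reliability of the mean of
    n_raters ratings, given the single-rater coefficient."""
    return n_raters * single_rater_g / (1 + (n_raters - 1) * single_rater_g)

def single_rater_from(observed_g, n_raters):
    """Invert the projection to recover the implied single-rater coefficient."""
    return observed_g / (n_raters - (n_raters - 1) * observed_g)

# If two judges yield G = 0.77, the implied single-judge coefficient
# is about 0.63, and averaging over four judges would project to about 0.87.
g1 = single_rater_from(0.77, 2)
g4 = projected_reliability(g1, 4)
```

Such projections are how one decides whether the reported two-judge design is sufficient, or whether additional judges would be needed for a higher-stakes use of the instrument.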
Abstract:
The architecture of European Plastic Surgery was described in 1996 [Nicolai JPA, Scuderi N. Plastic surgical Europe in an organogram. Eur J Plast Surg 1996;19:253-256]. The objective of this paper is to update the information in that article. Continuing medical education (CME), science, training, examination, quality assurance, and relations with the European Commission and Parliament are all aspects covered by the organisations discussed.
Abstract:
Residents of the European College of Veterinary Public Health (ECVPH) carried out a survey to explore the expectations and needs of potential employers of ECVPH diplomates and to assess the extent to which the ECVPH post-graduate training program meets those requirements. An online questionnaire was sent to 707 individuals working for universities, government organizations, and private companies active in the field of public health in 16 countries. Details on the structure and activities of the participants' organizations, their current knowledge of the ECVPH, and potential interest in employing veterinary public health (VPH) experts or hosting internships were collected. Participants were requested to rate 22 relevant competencies according to their importance for VPH professionals exiting the ECVPH training. A total of 138 completed questionnaires were included in the analysis. While generic skills such as "problem solving" and "broad horizon and inter-/multidisciplinary thinking" were consistently given high grades by all participants, the importance ascribed to more specialized skills was less homogeneous. The current ECVPH training more closely complies with the profile sought in academia, which may partly explain the lower employment rate of residents and diplomates within government and industry sectors. The study revealed a lack of awareness of the ECVPH among public health institutions and demonstrated the need for greater promotion of this veterinary specialization within Europe, both in terms of its training capacity and the professional skill-set of its diplomates. This study provides input for a critical revision of the ECVPH curriculum and the design of post-graduate training programs in VPH.
Abstract:
BACKGROUND Currently only a few reports exist on how to prepare medical students for skills laboratory training. We investigated how students and tutors perceive a blended learning approach using virtual patients (VPs) as preparation for skills training. METHODS Fifth-year medical students (N=617) were invited to voluntarily participate in a paediatric skills laboratory with four specially designed VPs as preparation. The cases focused on procedures in the laboratory using interactive questions, static and interactive images, and video clips. All students were asked to assess the VP design. After participating in the skills laboratory, 310 of the 617 students were additionally asked to assess the blended learning approach through established questionnaires. Tutors' perceptions (N=9) were assessed by semi-structured interviews. RESULTS From the 617 students, 1,459 VP design questionnaires were returned (59.1%). Of the 310 students, 213 chose to participate in the skills laboratory; 179 blended learning questionnaires were returned (84.0%). Students provided high overall acceptance ratings of the VP design and blended learning approach. By using VPs as preparation, skills laboratory time was felt to be used more effectively. Tutors perceived students as being well prepared for the skills laboratory, with efficient use of time. CONCLUSION The overall acceptance of the blended learning approach was high among students and tutors. VPs proved to be a convenient cognitive preparation tool for skills training.
Abstract:
Background: The design of Virtual Patients (VPs) is essential. So far, no validated evaluation instruments for VP design have been published. Summary of work: We examined three sources of validity evidence for an instrument, to be filled out by students, aimed at measuring the quality of VPs with a special emphasis on fostering clinical reasoning: (1) Content was examined on the basis of clinical reasoning theory and by an international VP expert team. (2) Response process was explored in think-aloud pilot studies with students and by content analysis of the free-text questions accompanying each item of the instrument. (3) Internal structure was assessed by confirmatory factor analysis (CFA) using 2,547 student evaluations, and reliability was examined using generalizability analysis. Summary of results: Content validity was supported by the theory underlying Gruppen and Frohna's clinical reasoning model, on which the instrument is based, and by an international VP expert team. The pilot study and the analysis of free-text comments supported the validity of the instrument. The CFA indicated that a three-factor model comprising six items showed a good fit with the data. Alpha coefficients per factor were 0.74 to 0.82. The findings of the generalizability studies indicated that 40-200 student responses are needed to obtain reliable data on one VP. Conclusions: The described instrument has the potential to provide faculty with reliable and valid information about VP design. Take-home messages: We present a short instrument which can be of help in evaluating the design of VPs.
Abstract:
Background: For summative examinations, a minimum reliability of 0.8 is usually required; for practical examinations such as OSCEs, 0.7 is sometimes accepted (Downing 2004). But what does the precision of a measurement with a reliability of 0.7 or 0.8 actually mean? Methods: Using statistical methods such as the standard error of measurement or generalizability theory, reliability can be translated into a confidence interval around an observed candidate score (Brennan 2003, Harvill 1991, McManus 2012). If, for example, a candidate scores 57 points on an examination, his or her true score fluctuates around this value because of the measurement imprecision of the examination (e.g. between 50 and 64 points). Near the pass mark, measurement precision is particularly important: if the pass mark in our example were 60 points, the candidate with 57 points would formally have failed, but given the uncertainty around the observed score, he or she may in truth have narrowly passed. Applying this reasoning to all candidates of an examination, one can determine the number of borderline candidates, i.e. all those candidates whose results lie so close to the pass mark that their individual pass/fail result may be a false positive or a false negative. Results: The number of borderline candidates in an examination depends not only on the reliability but also on the candidates' performance, the variance, the distance of the pass mark from the mean, and the skewness of the distribution. Using model data and real examination data, the relationship between reliability and the number of borderline candidates is presented in a way that is accessible to non-statisticians.
It is shown why even a reliability of 0.8 can fail to provide satisfactory measurement precision in particular situations, while in some OSCEs the reliability can almost be ignored. Conclusions: Calculating or estimating the number of borderline candidates, rather than the reliability, provides an intuitive improvement in understanding the precision of an examination. When deciding how many stations a summative OSCE needs or how long a multiple-choice examination should last, borderline candidates are a more valid decision criterion than reliability. References: Brennan RL (2003) Generalizability Theory. New York: Springer. Downing SM (2004) 'Reliability: on the reproducibility of assessment data', Medical Education, 38, 1006-12. Harvill LM (1991) 'Standard Error of Measurement', Educational Measurement: Issues and Practice, 33-41. McManus IC (2012) 'The misinterpretation of the standard error of measurement in medical education: A primer on the problems, pitfalls and peculiarities of the three different standard errors of measurement', Medical Teacher, 34, 569-76.
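The reasoning in the abstract above, translating reliability into a confidence interval around an observed score and flagging candidates whose pass/fail decision could be wrong, can be sketched in a few lines. This is a minimal illustration using the classical standard error of measurement (SEM = SD * sqrt(1 - reliability)); the numerical inputs (SD = 8, reliability = 0.8, pass mark = 60) are assumptions for illustration, not values from the study:

```python
import math

def sem(sd, reliability):
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1.0 - reliability)

def is_borderline(score, cutoff, sd, reliability, z=1.96):
    """A candidate is 'borderline' when the pass mark falls inside the
    z-level confidence interval around the observed score, i.e. the
    pass/fail decision could be a false positive or false negative."""
    half_width = z * sem(sd, reliability)
    return abs(score - cutoff) < half_width

# Hypothetical exam: pass mark 60, SD 8, reliability 0.8.
# The 95% interval half-width is about 7 points, so 57 is borderline
# (formally failed, may truly have passed) while 45 is a clear fail.
scores = [45, 57, 60, 66, 75]
flags = [is_borderline(s, 60, 8.0, 0.8) for s in scores]
```

Counting the `True` flags over a whole cohort gives the number of borderline candidates, which is the quantity the abstract proposes as a more interpretable precision criterion than the reliability coefficient itself.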
Abstract:
Introduction: As the population in the United States continues to age, more attention in primary practice settings is devoted to managing the care of the elderly. Elder abuse is a growing problem, and many primary care professionals may lack the knowledge or resources to identify and manage it. [See PDF for complete abstract]
Abstract:
Statement of Problem: The second background paper for the Medical School Objectives Project (MSOP) defined Educational Technology (ET) as the use of information technology to facilitate students' learning.1 Medical schools as a group have made limited progress in accomplishing the recommended educational technology goals, and there has been much greater use of such technology in basic science courses than in clinical clerkships. We will explore the positive and negative implications of incorporating ET into the educational experience of TMC schools. [See PDF for complete abstract]
Abstract:
Introduction: Dehiscence of the suture line of an anastomosis can lead to reoperation, temporary or permanent stoma, and even sepsis or death. Few techniques for the laboratory training of tubular anastomosis use ex-vivo animal tissues. We describe a novel model that can be used in the laboratory for the training of anastomosis in tubular tissues and objectively assess any anastomotic leak. [See PDF for complete abstract]
Abstract:
Purpose: To assess the relationship between student utilization of learning resources, including streaming video (SV), and their performance in the pre-clinical curriculum. [See PDF for complete abstract]