14 results for Clinical Reasoning
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Background: The design of Virtual Patients (VPs) is essential. So far, no validated evaluation instruments for VP design have been published. Summary of work: We examined three sources of validity evidence for an instrument, to be filled out by students, aimed at measuring the quality of VPs with a special emphasis on fostering clinical reasoning: (1) Content was examined based on the theory of clinical reasoning and an international VP expert team. (2) Response process was explored in think-aloud pilot studies with students and in content analysis of the free-text questions accompanying each item of the instrument. (3) Internal structure was assessed by confirmatory factor analysis (CFA) using 2547 student evaluations, and reliability was examined using generalizability analysis. Summary of results: Content validity was supported by the theory underlying Gruppen and Frohna's clinical reasoning model, on which the instrument is based, and by an international VP expert team. The pilot study and the analysis of free-text comments supported the validity of the instrument. The CFA indicated that a three-factor model comprising six items showed a good fit with the data. Alpha coefficients per factor ranged from 0.74 to 0.82. The findings of the generalizability studies indicated that 40-200 student responses are needed to obtain reliable data on one VP. Conclusions: The described instrument has the potential to provide faculty with reliable and valid information about VP design. Take-home messages: We present a short instrument which can be of help in evaluating the design of VPs.
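For readers unfamiliar with the alpha coefficients quoted above, the following is a minimal sketch of how Cronbach's alpha is computed from item-level ratings. The data and item counts below are invented for illustration; the study's own alphas of 0.74-0.82 come from its real evaluations and a different number of items per factor.

```python
# Minimal illustration of Cronbach's alpha (internal-consistency reliability).
# All ratings below are invented; this is not the study's data.
from statistics import pvariance

def cronbach_alpha(items):
    """items: list of per-item score lists, all of equal length (one entry per rater)."""
    k = len(items)                                        # number of items
    item_vars = sum(pvariance(scores) for scores in items)  # sum of item variances
    totals = [sum(scores) for scores in zip(*items)]        # total score per rater
    total_var = pvariance(totals)                           # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Two items rated by five students on a 5-point scale (invented data)
item_scores = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
]
alpha = cronbach_alpha(item_scores)
```

Values near 0.7 or above are conventionally read as acceptable internal consistency, which is the benchmark the reported factor alphas clear.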
Abstract:
Background: Virtual patients (VPs) are increasingly used to train clinical reasoning. So far, no validated evaluation instruments for VP design are available. Aims: We examined the validity of an instrument for assessing the perception of VP design by learners. Methods: Three sources of validity evidence were examined: (i) Content was examined based on theory of clinical reasoning and an international VP expert team. (ii) The response process was explored in think-aloud pilot studies with medical students and in content analyses of free text questions accompanying each item of the instrument. (iii) Internal structure was assessed by exploratory factor analysis (EFA) and inter-rater reliability by generalizability analysis. Results: Content analysis was reasonably supported by the theoretical foundation and the VP expert team. The think-aloud studies and analysis of free text comments supported the validity of the instrument. In the EFA, using 2547 student evaluations of a total of 78 VPs, a three-factor model showed a reasonable fit with the data. At least 200 student responses are needed to obtain a reliable evaluation of a VP on all three factors. Conclusion: The instrument has the potential to provide valid information about VP design, provided that many responses per VP are available.
Abstract:
Background: It is as yet unclear whether there are differences between using electronic key feature problems (KFPs) and electronic case-based multiple-choice questions (cbMCQs) for the assessment of clinical decision making. Summary of Work: Fifth-year medical students attended clerkships, each of which ended with a summative exam. Knowledge was assessed per exam with 6-9 KFPs, 9-20 cbMCQs and 9-28 MC questions. Each KFP consisted of a case vignette and three key features (KF) using a "long menu" question format. We sought students' perceptions of the KFPs and cbMCQs in focus groups (n of students = 39). Furthermore, statistical data from 11 exams (n of students = 377) concerning the KFPs and (cb)MCQs were compared. Summary of Results: The analysis of the focus groups resulted in four themes reflecting students' perceptions of KFPs and their comparison with (cb)MCQs: KFPs were perceived as (i) more realistic, (ii) more difficult and (iii) more motivating for the intense study of clinical reasoning than (cb)MCQs, and (iv) showed an overall good acceptance when certain preconditions are taken into account. The statistical analysis revealed no difference in difficulty; however, KFPs showed higher discrimination and reliability (G-coefficient), even when corrected for testing times. The correlation of the different exam parts was intermediate. Conclusions: Students perceived the KFPs as more motivating for the study of clinical reasoning. Statistically, KFPs showed higher discrimination and higher reliability than cbMCQs. Take-home messages: Including KFPs with long-menu questions in summative clerkship exams seems to offer positive educational effects.
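As background to the discrimination comparison above, here is a minimal sketch, on invented data, of a corrected item-total correlation, a common item discrimination statistic in exam analysis. This illustrates the general idea only; the study's exact computation and software are not specified in the abstract.

```python
# Illustration of item discrimination as the correlation between scores on one
# item and scores on the rest of the test. All numbers below are invented.
from statistics import mean, pstdev

def item_discrimination(item, rest_totals):
    """Pearson correlation between item scores and rest-of-test totals."""
    mi, mt = mean(item), mean(rest_totals)
    cov = mean((i - mi) * (t - mt) for i, t in zip(item, rest_totals))
    return cov / (pstdev(item) * pstdev(rest_totals))

# Five examinees: dichotomous item score and total score on the remaining items
item = [1, 0, 1, 1, 0]
rest = [18, 9, 15, 20, 7]
r = item_discrimination(item, rest)
```

Higher values mean the item separates strong from weak examinees more sharply, which is the sense in which the KFPs outperformed the cbMCQs.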
Abstract:
OBJECTIVES The generation of learning goals (LGs) that are aligned with learning needs (LNs) is one of the main purposes of formative workplace-based assessment. In this study, we aimed to analyse how often trainer–student pairs identified corresponding LNs in mini-clinical evaluation exercise (mini-CEX) encounters and to what degree these LNs aligned with recorded LGs, taking into account the social environment (e.g. clinic size) in which the mini-CEX was conducted. METHODS Retrospective analyses of adapted mini-CEX forms (trainers’ and students’ assessments) completed by all Year 4 medical students during clerkships were performed. Learning needs were defined by the lowest score(s) assigned to one or more of the mini-CEX domains. Learning goals were categorised qualitatively according to their correspondence with the six mini-CEX domains (e.g. history taking, professionalism). Following descriptive analyses of LNs and LGs, multi-level logistic regression models were used to predict LGs by identified LNs and social context variables. RESULTS A total of 512 trainers and 165 students conducted 1783 mini-CEXs (98% completion rate). Concordantly, trainer–student pairs most often identified LNs in the domains of ‘clinical reasoning’ (23% of 1167 complete forms), ‘organisation/efficiency’ (20%) and ‘physical examination’ (20%). At least one ‘defined’ LG was noted on 313 student forms (18% of 1710). Of the 446 LGs noted in total, the most frequently noted were ‘physical examination’ (49%) and ‘history taking’ (21%). Corresponding LNs as well as social context factors (e.g. clinic size) were found to be predictors of these LGs. CONCLUSIONS Although trainer–student pairs often agreed in the LNs they identified, many assessments did not result in aligned LGs. 
The sparseness of LGs, their dependency on social context and their partial non-alignment with students’ LNs raise questions about how the full potential of the mini-CEX as not only a ‘diagnostic’ but also an ‘educational’ tool can be exploited.
Abstract:
CONTEXT: E-learning resources, such as virtual patients (VPs), can be more effective when they are integrated in the curriculum. To gain insights that can inform guidelines for the curricular integration of VPs, we explored students' perceptions of scenarios with integrated and non-integrated VPs aimed at promoting clinical reasoning skills. METHODS: During their paediatric clerkship, 116 fifth-year medical students were given at least ten VPs embedded in eight integrated scenarios and as non-integrated add-ons. The scenarios differed in the sequencing and alignment of VPs and related educational activities, tutor involvement, number of VPs, relevance to assessment and involvement of real patients. We sought students' perceptions of the VP scenarios in focus group interviews with eight groups of 4-7 randomly selected students (n = 39). The interviews were recorded, transcribed and analysed qualitatively. RESULTS: The analysis resulted in six themes reflecting students' perceptions of important features for effective curricular integration of VPs: (i) continuous and stable online access, (ii) increasing complexity, adapted to students' knowledge, (iii) VP-related workload offset by the elimination of other activities, (iv) optimal sequencing (e.g. a lecture, then one to two VPs, then a tutor-led small-group discussion, then a real patient), (v) optimal alignment of VPs and educational activities, and (vi) inclusion of VP topics in assessment. CONCLUSIONS: The themes appear to offer starting points for the development of a framework to guide the curricular integration of VPs. Their impact needs to be confirmed by studies using quantitative controlled designs.
Abstract:
OBJECTIVES: The aim of the study was to assess whether prospective follow-up data within the Swiss HIV Cohort Study can be used to predict patients who stop smoking, or, among smokers who stop, those who start smoking again. METHODS: We built prediction models first using clinical reasoning ('clinical models') and then by selecting from numerous candidate predictors using advanced statistical methods ('statistical models'). Our clinical models were based on literature suggesting that motivation drives smoking cessation, while dependence drives relapse in those attempting to stop. Our statistical models were based on automatic variable selection using additive logistic regression with component-wise gradient boosting. RESULTS: Of 4833 smokers, 26% stopped smoking, at least temporarily; among those who stopped, 48% started smoking again. The predictive performance of our clinical and statistical models was modest. A basic clinical model for cessation, with patients classified into three motivational groups, was nearly as discriminatory as a constrained statistical model with just the most important predictors (the ratio of nonsmoking visits to total visits, alcohol or drug dependence, psychiatric comorbidities, recent hospitalization and age). A basic clinical model for relapse, based on the maximum number of cigarettes per day prior to stopping, was not as discriminatory as a constrained statistical model with just the ratio of nonsmoking visits to total visits. CONCLUSIONS: Predicting smoking cessation and relapse is difficult, so simple models are nearly as discriminatory as complex ones. Patients with a history of attempting to stop and those known to have stopped recently are the best candidates for an intervention.
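The "discriminatory" performance compared above is typically quantified by the c-statistic (area under the ROC curve): the probability that a randomly chosen patient who stopped smoking received a higher predicted probability than one who did not. A minimal pure-Python sketch on invented predictions (not the study's models or data):

```python
# Illustration of the c-statistic (AUC) used to compare model discrimination.
# Predicted probabilities and outcome labels below are invented.

def c_statistic(probs, labels):
    """Probability that a random positive case scores higher than a random negative."""
    pos = [p for p, y in zip(probs, labels) if y == 1]
    neg = [p for p, y in zip(probs, labels) if y == 0]
    # Count wins (ties count half) over all positive/negative pairs
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0 for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Six patients: predicted probability of cessation and observed outcome (1 = stopped)
probs = [0.9, 0.7, 0.6, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0]
c = c_statistic(probs, labels)
```

A value of 0.5 is chance-level discrimination and 1.0 is perfect; "modest" performance, as reported, sits between.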
Abstract:
Background: Defining learning goals (LGs) in alignment with learning needs (LNs) is one of the key purposes of formative workplace-based assessment, but studies on this topic are scarce. Summary of Work: We analysed quantitatively and qualitatively how often trainer-student pairs identified the same LNs during Mini Clinical Evaluation Exercises (Mini-CEX) in clerkships and to what degree those LNs were in line with the recorded LGs. Multilevel logistic regression models were used to predict LGs by identified LNs, controlling for context variables. Summary of Results: 512 trainers and 165 students conducted 1783 Mini-CEX (98% completion rate). Concordantly, trainer-student pairs most often identified LNs in the domains 'clinical reasoning' (23% of 1167 complete forms), 'organisation/efficiency' (20%) and 'physical examination' (20%). At least one 'defined' LG was noted on 313 student forms (18% of 1710), with a total of 446 LGs. Of these, the most frequent LGs were 'physical examination' (49% of 446 LGs) and 'history taking' (21%); corresponding LNs as well as context variables (e.g. clinic size) were found to be predictors of these LGs. Discussion and Conclusions: Although trainer-student pairs often agreed in their identified LNs, many assessments did not result in an aligned LG, or in any LG at all. Interventions are needed to increase the proportion of (aligned) LGs in Mini-CEX in order to tap its full potential not only as a 'diagnostic' but also as an 'educational' tool. Take-home messages: The sparseness of LGs, their dependency on context variables and their partial non-alignment with students' LNs raise the question of how the effectiveness of the Mini-CEX can be further enhanced.
Abstract:
AIM Virtual patients (VPs) are a one-of-a-kind e-learning resource, fostering clinical reasoning skills through clinical case examples. The combination with face-to-face teaching is important for their successful integration, which is referred to as "blended learning". So far, little is known about the use of VPs in continuing medical education and residency training. The pilot study presented here explored the application of VPs in the framework of a pediatric residency revision course. METHODS Around 200 participants of a pediatric nephrology lecture ('nephrotic and nephritic syndrome in children') were offered two VPs as a wrap-up session at the revision course of the German Society for Pediatrics and Adolescent Medicine (DGKJ) 2009 in Heidelberg, Germany. A web-based survey form was used to evaluate different aspects of the learning experience with VPs, their combination with the lecture, and the use of VPs for residency training in general. RESULTS N=40 evaluable survey forms were returned (approximately 21%). The return rate was impaired by a technical problem with the local Wi-Fi firewall. The participants perceived working through the VPs as a worthwhile learning experience that prepared them well for diagnosing and treating real patients with similar complaints. The case presentations, the interactivity, and the possibility of repeated practice independent of place and time were singled out in particular. When asked about the use of VPs for residency training in general, participants expressed a clear demand for more such offerings. CONCLUSION VPs may reasonably complement existing learning activities in residency training.
Abstract:
The medical education community is working, across disciplines and across the continuum, to address the current challenges facing the medical education system and to implement strategies to improve educational outcomes. Educational technology offers the promise of addressing these important challenges in ways not previously possible. The authors propose a role for virtual patients (VPs), which they define as multimedia, screen-based interactive patient scenarios. They believe VPs offer capabilities and benefits particularly well suited to addressing the challenges facing medical education. Well-designed, interactive VP-based learning activities can promote the deep learning that is needed to handle the rapid growth in medical knowledge. Clinically oriented learning from VPs can capture intrinsic motivation and promote mastery learning. VPs can also enhance trainees' application of foundational knowledge to promote the development of clinical reasoning, the foundation of medical practice. Although not the entire solution, VPs can support competency-based education. The data created by the use of VPs can serve as the basis for multi-institutional research that will enable the medical education community both to better understand the effectiveness of educational interventions and to measure progress toward an improved system of medical education.
Abstract:
Introduction: Clinical reasoning is essential for the practice of medicine. Theories of the development of medical expertise state that clinical reasoning starts from analytical processes, namely the storage of isolated facts and the logical application of the 'rules' of diagnosis. Learners then successively develop so-called semantic networks and illness scripts, which are finally used in an intuitive, non-analytic fashion [1], [2]. The script concordance test (SCT) is one example of a format for assessing clinical reasoning [3]. However, the aggregate scoring [3] of the SCT is recognised as problematic [4]: it leads to logical inconsistencies and is likely to reflect construct-irrelevant differences in examinees' response styles [4], and the expert panel judgements may introduce an unintended error of measurement [4]. In this PhD project the following research questions will be addressed: 1. What might a format look like that assesses clinical reasoning, similar to the SCT but with multiple true-false questions or other formats with unambiguously correct answers, thereby addressing the above-mentioned pitfalls of traditional SCT scoring? 2. How well does this format fulfil the Ottawa criteria for good assessment, with special regard to educational and catalytic effects [5]? Methods: 1. A first study will assess whether designing a new format that uses multiple true-false items to assess clinical reasoning, similar to the SCT format, is theoretically and practically sound. For this study, focus groups or interviews with assessment experts and students will be undertaken. 2. In a study using focus groups and psychometric data, Norcini and colleagues' Criteria for Good Assessment [5] will be evaluated for the new format in a real assessment. Furthermore, the scoring method for this new format will be optimised using real and simulated data.
Abstract:
Question/Introduction: It is unclear to what extent there are differences between using key feature problems (KFPs) with long-menu questions and case-based type A questions (FTA) for assessing clinical reasoning in the clinical training of medical students. Methods: Fifth-year medical students took part in their clinical paediatrics rotation, which ended with a summative exam. Knowledge was assessed electronically per exam with 6-9 KFPs [1], [3], 9-20 FTA and 9-28 non-case-based multiple choice questions (NFTA). Each KFP consisted of a case vignette and three key features and used a so-called long menu [4] as the answer format. We examined students' perceptions of the KFPs and FTA in focus groups [2] (n of students = 39). Furthermore, the statistical characteristics of the KFPs and FTA from 11 exams (n of students = 377) were compared. Results: The analysis of the focus groups yielded four themes reflecting the perception of the KFPs and their comparison with FTA: KFPs were perceived as (1) more realistic, (2) more difficult, and (3) more motivating for intensive self-study of clinical reasoning than FTA, and (4) showed good overall acceptance provided certain preconditions are taken into account. The statistical analysis showed no difference in difficulty; however, the KFPs showed higher discrimination and reliability (G-coefficient), even when corrected for testing time. The correlation of the different exam parts was intermediate. Discussion/Conclusion: Students experienced the KFPs as more motivating for self-study of clinical reasoning. Statistically, the KFPs showed greater discrimination and higher reliability than the FTA. Including KFPs with long menus in exams of the clinical phase of the curriculum appears promising and seems to have an "educational effect".
Abstract:
Prediction of psychosis in patients at clinical high risk (CHR) has become a mainstream focus of clinical and research interest worldwide. When using CHR instruments for clinical purposes, the predicted outcome is only a probability; consequently, any therapeutic action following the assessment is based on probabilistic prognostic reasoning. Yet probabilistic reasoning makes considerable demands on clinicians. We provide here a scholarly practical guide summarising the key concepts to support clinicians with probabilistic prognostic reasoning in the CHR state. We review risk or cumulative incidence of psychosis, person-time rate of psychosis, Kaplan-Meier estimates of psychosis risk, measures of prognostic accuracy, sensitivity and specificity in receiver operating characteristic curves, positive and negative predictive values, Bayes' theorem, likelihood ratios, and the potentials and limits of real-life applications of prognostic probabilistic reasoning in the CHR state. Understanding the basic measures used for prognostic probabilistic reasoning is a prerequisite for successfully implementing the early detection and prevention of psychosis in clinical practice. Future refinement of these measures for CHR patients may actually influence risk management, especially as regards initiating or withholding treatment.
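The Bayesian updating that the guide describes can be made concrete with a small numeric sketch: converting the sensitivity and specificity of a test into a post-test probability via likelihood ratios. All numbers below are invented for illustration and do not come from the CHR literature summarised above.

```python
# Illustration of Bayes' theorem via likelihood ratios, as used in
# prognostic reasoning. Sensitivity, specificity and pre-test risk are invented.

def post_test_probability(pretest, sensitivity, specificity, positive=True):
    """Update a pre-test probability with a test result using likelihood ratios."""
    # LR+ = sens / (1 - spec); LR- = (1 - sens) / spec
    lr = sensitivity / (1 - specificity) if positive else (1 - sensitivity) / specificity
    pre_odds = pretest / (1 - pretest)   # probability -> odds
    post_odds = pre_odds * lr            # Bayes' theorem in odds form
    return post_odds / (1 + post_odds)   # odds -> probability

# Invented example: 20% pre-test risk of psychosis, sensitivity 0.80, specificity 0.85
p_pos = post_test_probability(0.20, 0.80, 0.85, positive=True)
p_neg = post_test_probability(0.20, 0.80, 0.85, positive=False)
```

The asymmetry between the two updates (a positive result raises the probability far more than a negative result lowers it, or vice versa, depending on the likelihood ratios) is exactly the kind of behaviour the guide asks clinicians to internalise.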