14 results for Content validity

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance:

70.00%

Publisher:

Abstract:

In the training of healthcare professionals, one of the advantages of communication training with simulated patients (SPs) is the SP's ability to provide direct feedback to students after a simulated clinical encounter. The quality of SP feedback must be monitored, especially because it is well known that feedback can have a profound effect on student performance. Due to the current lack of valid and reliable instruments to assess the quality of SP feedback, our study examined the validity and reliability of one potential instrument, the 'modified Quality of Simulated Patient Feedback Form' (mQSF). Methods: Content validity of the mQSF was assessed by inviting experts in the area of simulated clinical encounters to rate the importance of the mQSF items. Moreover, generalizability theory was used to examine the reliability of the mQSF. Our data came from videotapes of clinical encounters between six simulated patients and six students and the ensuing feedback from the SPs to the students. Ten faculty members judged the SP feedback according to the items on the mQSF. Three weeks later, this procedure was repeated with the same faculty members and recordings. Results: All but two items of the mQSF received importance ratings of > 2.5 on a four-point rating scale. A generalizability coefficient of 0.77 was established with two judges observing one encounter. Conclusions: The findings for content validity and reliability with two judges suggest that the mQSF is a valid and reliable instrument to assess the quality of feedback provided by simulated patients.
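As a hedged illustration of the generalizability analysis described above, the sketch below estimates variance components for a fully crossed encounters-by-judges design from ANOVA mean squares and projects the relative G coefficient for a chosen number of judges. The long data format and the column names are assumptions for illustration, not the authors' data or analysis code.

```python
# Illustrative G-study / D-study for a fully crossed encounters x judges design.
# Assumes a long-format DataFrame with hypothetical columns
# 'encounter', 'judge', 'score'; not the authors' data or code.
import pandas as pd

def relative_g_coefficient(df: pd.DataFrame, n_judges_dstudy: int = 2) -> float:
    n_p = df["encounter"].nunique()   # objects of measurement (encounters)
    n_j = df["judge"].nunique()       # rater facet

    grand = df["score"].mean()
    p_means = df.groupby("encounter")["score"].mean()
    j_means = df.groupby("judge")["score"].mean()

    ss_p = n_j * ((p_means - grand) ** 2).sum()
    ss_j = n_p * ((j_means - grand) ** 2).sum()
    ss_res = ((df["score"] - grand) ** 2).sum() - ss_p - ss_j

    ms_p = ss_p / (n_p - 1)
    ms_res = ss_res / ((n_p - 1) * (n_j - 1))   # interaction confounded with error

    var_res = ms_res
    var_p = max((ms_p - ms_res) / n_j, 0.0)     # encounter (true-score) variance

    # Relative G coefficient projected to n' judges per encounter (D-study)
    return var_p / (var_p + var_res / n_judges_dstudy)
```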

Relevance:

60.00%

Publisher:

Abstract:

The original 'Örebro Musculoskeletal Pain Questionnaire' (original-ÖMPQ) has been shown to have limitations in practicality, factor structure, face and content validity. This study addressed these concerns by modifying its content to produce the 'Örebro Musculoskeletal Screening Questionnaire' (ÖMSQ). The ÖMSQ and original-ÖMPQ were tested concurrently in acute/subacute low back pain working populations (pilot n = 44, main n = 106). The ÖMSQ showed improved face and content validity, which broadened potential application, and improved practicality with two-thirds fewer missing responses. High reliability (ICC[2,1] = 0.975, p < 0.05), criterion validity (Spearman's r = 0.97) and internal consistency (α = 0.84) were achieved, as were predictive ability cut-off scores from ROC curves (112-120 ÖMSQ points), statistically different ÖMSQ scores (p < 0.001) for each outcome trait, and a strong correlation with recovery time (Spearman's r = 0.71). The six-component factor structure reflected the constructs originally proposed. The ÖMSQ can be substituted for the original-ÖMPQ in this population. Further research will assess its applicability in broader populations.
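As a hedged sketch of two of the reported analyses, the snippet below computes Cronbach's alpha for internal consistency and a Spearman correlation for criterion validity against the original questionnaire; the DataFrame layout and variable names are assumptions for illustration, not the study's code.

```python
# Illustrative internal-consistency and criterion-validity checks.
# `items`: respondents x items DataFrame for the questionnaire (hypothetical layout).
import pandas as pd
from scipy.stats import spearmanr

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def criterion_validity(new_total: pd.Series, original_total: pd.Series):
    """Spearman correlation between new and original questionnaire totals."""
    rho, p = spearmanr(new_total, original_total)
    return rho, p
```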

Relevance:

60.00%

Publisher:

Abstract:

To (a) develop the Women With Vulvar Neoplasia-Patient-Reported Outcome (WOMAN-PRO) instrument as a measure of women's post-vulvar surgery symptom experience and informational needs, (b) examine its content validity, (c) describe modifications based on pilot testing, and (d) examine the content validity of the revised instrument.

Relevance:

60.00%

Publisher:

Abstract:

Context: Daytime sleepiness in kidney transplant recipients has emerged as a potential predictor of impaired adherence to the immunosuppressive medication regimen. Thus, there is a need to assess daytime sleepiness in clinical practice and transplant registries. Objective: To evaluate the validity of a single-item measure of daytime sleepiness integrated in the Swiss Transplant Cohort Study (STCS), using the American Educational Research Association framework. Methods: Using a cross-sectional design, we enrolled a convenience sample of 926 home-dwelling kidney transplant recipients (median age, 59.69 years; 25%-75% quartile [Q25-Q75], 50.27-59.69; 63% men; median time since transplant, 9.42 years [Q25-Q75, 4.93-15.85]). Daytime sleepiness was assessed using a single item from the STCS and the 8 items of the validated Epworth Sleepiness Scale. Receiver operating characteristic (ROC) curve analysis was used to determine the cutoff for the STCS daytime sleepiness item against the Epworth Sleepiness Scale score. Results: Based on the ROC curve analysis, a score greater than 4 on the STCS daytime sleepiness item is recommended to detect daytime sleepiness. Content validity was high, as all expert reviews were unanimous. Concurrent validity was moderate (Spearman ρ = 0.531; P < .001), and convergent validity with depression and poor sleep quality, although low, was significant (ρ = 0.235, P < .001 and ρ = 0.318, P = .002, respectively). Regarding group-difference validity, kidney transplant recipients with moderate, severe, and extremely severe depressive symptom scores had 3.4, 4.3, and 5.9 times higher odds of having daytime sleepiness, respectively, compared with recipients without depressive symptoms. Conclusion: The accumulated evidence supports the validity of the STCS daytime sleepiness item as a simple screening measure for daytime sleepiness.
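A minimal, hedged sketch of how such a screening cut-off can be read off a ROC analysis (here via Youden's J) follows; the dichotomisation of the reference standard and the variable names are assumptions for illustration, not the authors' code.

```python
# Illustrative ROC-based cut-off selection for a screening item.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def screening_cutoff(reference_positive, item_scores):
    """Return the threshold maximising Youden's J (sensitivity + specificity - 1)
    and the area under the ROC curve."""
    fpr, tpr, thresholds = roc_curve(reference_positive, item_scores)
    best = np.argmax(tpr - fpr)
    return thresholds[best], roc_auc_score(reference_positive, item_scores)
```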

Relevance:

60.00%

Publisher:

Abstract:

PURPOSE: Family needs and expectations are often unmet in the intensive care unit (ICU), leading to dissatisfaction. This study assesses cross-cultural adaptability of an instrument evaluating family satisfaction in the ICU. MATERIALS AND METHODS: A Canadian instrument on family satisfaction was adapted for German language and central European culture and then validated for feasibility, validity, internal consistency, reliability, and sensitivity. RESULTS: Content validity of a preliminary translated version was assessed by staff, patients, and next of kin. After adaptation, content and comprehensibility were considered good. The adapted translation was then distributed to 160 family members. The return rate was 71.8%, and 94.4% of questions in returned forms were clearly answered. In comparison with a Visual Analogue Scale, construct validity was good for overall satisfaction with care (Spearman rho = 0.60) and overall satisfaction with decision making (rho = 0.65). Cronbach alpha was .95 for satisfaction with care and .87 for decision-making. Only minor differences on repeated measurements were found for interrater and intrarater reliability. There was no floor or ceiling effect. CONCLUSIONS: A cross-cultural adaptation of a questionnaire on family satisfaction in the ICU can be feasible, valid, internally consistent, reliable, and sensitive.
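As a hedged illustration of the floor/ceiling check mentioned in the results, the sketch below flags a floor or ceiling effect when more than roughly 15% of respondents score at the scale minimum or maximum; the 15% rule of thumb and the function interface are assumptions, not the study's procedure.

```python
# Illustrative floor/ceiling-effect check on questionnaire total scores.
import pandas as pd

def floor_ceiling(scores: pd.Series, scale_min: float, scale_max: float,
                  threshold: float = 0.15) -> dict:
    floor = (scores == scale_min).mean()     # share of respondents at the minimum
    ceiling = (scores == scale_max).mean()   # share of respondents at the maximum
    return {
        "floor_share": floor,
        "ceiling_share": ceiling,
        "floor_effect": floor > threshold,
        "ceiling_effect": ceiling > threshold,
    }
```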

Relevance:

60.00%

Publisher:

Abstract:

OBJECTIVE: Visual hallucinations are under-reported by patients and are often undiscovered by health professionals. There is no gold standard available to assess hallucinations. Our objective was to develop a reliable, valid, semi-structured interview for identifying and assessing visual hallucinations in older people with eye disease and cognitive impairment. METHODS: We piloted the North-East Visual Hallucinations Interview (NEVHI) in 80 older people with visual and/or cognitive impairment (patient group) and 34 older people without known risks of hallucinations (control group). The informants of 11 patients were interviewed separately. We established face validity, content validity, criterion validity, inter-rater agreement and the internal consistency of the NEVHI, and assessed the factor structure for questions evaluating emotions, cognitions, and behaviours associated with hallucinations. RESULTS: Recurrent visual hallucinations were common in the patient group (68.8%) and absent in controls (0%). The criterion, face and content validities were good, and the internal consistency of the screening questions for hallucinations was high (Cronbach alpha: 0.71). Inter-rater agreement for simple and complex hallucinations was good (kappa = 0.72 and 0.83, respectively). Four factors associated with experiencing hallucinations (perceived control, pleasantness, distress and awareness) were identified and explained 73% of the total variance. Informants gave more 'don't know' answers than patients throughout the interview (p = 0.008), especially to questions evaluating cognitions and emotions associated with hallucinations (p = 0.02). CONCLUSIONS: The NEVHI is a comprehensive assessment tool, helpful for identifying the presence of visual hallucinations and for quantifying cognitions, emotions and behaviours associated with hallucinations.
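As a hedged illustration of the inter-rater agreement analysis, Cohen's kappa can be computed on two raters' categorical codings of the same encounters; the example labels below are invented for illustration only.

```python
# Illustrative inter-rater agreement via Cohen's kappa (invented codings).
from sklearn.metrics import cohen_kappa_score

rater_a = ["simple", "none", "complex", "complex", "none", "simple"]
rater_b = ["simple", "none", "complex", "simple", "none", "simple"]

print(f"Cohen's kappa: {cohen_kappa_score(rater_a, rater_b):.2f}")
```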

Relevance:

60.00%

Publisher:

Abstract:

The use of information technology (IT) in dentistry is far-ranging. In order to produce a working document for the dental educator, this paper focuses on those methods where IT can assist in the education and competence development of dental students and dentists (e.g. e-learning, distance learning, simulations and computer-based assessment). Web pages and other information-gathering devices have become an essential part of our daily life, as they provide extensive information on all aspects of our society. This is mirrored in dental education, where there are many different tools available, as listed in this report. IT offers added value to traditional teaching methods, and examples are provided. In spite of the continuing debate on the learning effectiveness of e-learning applications, students request such approaches as an adjunct to the traditional delivery of learning materials. Faculty require support to enable them to use the technology effectively to the benefit of their students. This support should be provided by the institution, and it is suggested that, where possible, institutions should appoint an e-learning champion with good interpersonal skills to support and encourage faculty change. From a global perspective, all students and faculty should have access to e-learning tools. This report encourages open access to e-learning material, platforms and programs. Such learning materials must have well-defined learning objectives and undergo peer review to ensure content validity, accuracy, currency, the use of evidence-based data and the use of best practices. To ensure that the developers' intellectual rights are protected, the original content needs to be secure from unauthorized changes. Strategies and recommendations on how to improve the quality of e-learning are outlined. In the area of assessment, traditional examination schemes can be enriched by IT, whilst the Internet can provide many innovative approaches. Future trends in IT will revolve around improved uptake and access facilitated by the technology (hardware and software). The use of Web 2.0 shows considerable promise, and this may have implications on a global level. For example, the one-laptop-per-child project is the best example of what Web 2.0 can do: minimal use of hardware to maximize use of the Internet infrastructure. In essence, simple technology can overcome many of the barriers to learning. IT will always remain exciting, as it is always changing, and the users, whether dental students, educators or patients, are like chameleons adapting to the ever-changing landscape.

Relevance:

60.00%

Publisher:

Abstract:

The Nursing Home Survey on Patient Safety Culture (NHSPSC) was specifically developed for nursing homes to assess a facility's safety climate; it consists of 12 dimensions. After its pilot testing, however, no further psychometric analyses were performed on the instrument. For this study of safety climate in Swiss nursing home units, the NHSPSC was linguistically adapted to the Swiss context and to address the unit as well as the facility level, with the aim of testing aspects of the validity and reliability of the Swiss version before its use in Swiss nursing home units. Psychometric analyses were performed on data from 367 nursing personnel from nine nursing homes in the German-speaking part of Switzerland (response rate = 66%), and content validity was examined using the content validity index (CVI). The statistical influence of unit membership on respondents' answers, and on their agreement concerning their units' safety climate, was tested using intraclass correlation coefficients (ICCs) and the rWG(J) interrater agreement index. A multilevel exploratory factor analysis (MEFA) with oblimin rotation was applied to examine the questionnaire's dimensionality. Cronbach's alpha and Raykov's rho were calculated to assess factor reliability. The relationship of safety climate dimensions with clinical outcomes was explored. Expert feedback confirmed the relevance of the instrument's items (CVI = 0.93). Personnel showed strong agreement in their perceptions in three dimensions of the questionnaire. ICCs supported a multilevel analysis. MEFA produced nine factors at the within-level (in comparison to 12 in the original version) and two factors at the between-level with satisfactory fit statistics. Raykov's rho for the single-level factors ranged between 0.67 and 0.86. Some safety climate dimensions showed moderate but non-significant correlations with the use of bedrails, physical restraint use, and fall-related injuries. The Swiss version of the NHSPSC needs further refinement and testing before its use can be recommended in Swiss nursing homes: its dimensionality needs further clarification, particularly to distinguish items addressing the unit-level safety climate from those at the facility level.
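As a hedged sketch of the rWG(J) interrater agreement index mentioned above (the multi-item form with a uniform null distribution), the function below takes a respondents-by-items DataFrame for a single unit; the data layout and the number of response options are assumptions, not the study's code.

```python
# Illustrative rWG(J) within-unit agreement index for a multi-item scale.
import pandas as pd

def rwg_j(unit_items: pd.DataFrame, n_options: int) -> float:
    j = unit_items.shape[1]                          # number of items
    s2_obs = unit_items.var(axis=0, ddof=1).mean()   # mean observed item variance
    s2_null = (n_options ** 2 - 1) / 12.0            # variance of a uniform null
    ratio = 1.0 - s2_obs / s2_null
    return (j * ratio) / (j * ratio + s2_obs / s2_null)
```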

Relevance:

60.00%

Publisher:

Abstract:

Recent research emphasizes the various facets of narcissism. As a consequence, newly developed questionnaires for narcissism have a large number of subscales and items. However, for daily use in research and practice, short measures are crucial. In this study we compare different short forms of the Pathological Narcissism Questionnaire, a 54-item measure with seven subscales. In different samples (total N > 2000) we applied different theoretical models to construct short forms of approximately 20 items. In particular, we compared short forms based on IRT, item-total correlations, and factor loadings with versions based on content validity and random selection. In all versions the original subscale structure was preserved. Results show that the short forms all have high correlations with the original version. Furthermore, correlations with criterion validation measures were comparable. We conclude that the item number can be reduced substantially without losing information. Pros and cons of the different reduction methods are discussed.
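As a hedged sketch of the item-total-correlation approach to short-form construction, the function below keeps, within each subscale, the items with the highest corrected item-total correlations; the column names and the number of items retained per subscale are illustrative assumptions, not the authors' specification.

```python
# Illustrative item-total-correlation short-form selection.
import pandas as pd

def select_short_form(items: pd.DataFrame, subscales: dict,
                      keep_per_subscale: int = 3) -> list:
    """subscales maps subscale name -> list of item column names."""
    selected = []
    for name, cols in subscales.items():
        sub = items[cols]
        total = sub.sum(axis=1)
        # corrected item-total correlation: item vs. total of the remaining items
        citc = {c: sub[c].corr(total - sub[c]) for c in cols}
        top = sorted(citc, key=citc.get, reverse=True)[:keep_per_subscale]
        selected.extend(top)
    return selected
```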

Relevance:

60.00%

Publisher:

Abstract:

Background/Introduction: Multisource feedback (MSF) is a recognised instrument for reviewing and improving physicians' professional practice [1]. It comprises feedback given by co-workers from different areas of work and different levels of the hierarchy. The feedback is given anonymously using a questionnaire that describes various criteria of medical competence, and is then summarised for the physician being assessed in a meeting with a supervisor. To date, no German-language multisource feedback questionnaire for medical practice exists. Our aim was therefore to create a German-language questionnaire and to examine it with respect to relevant validity criteria. Methods: To construct the questionnaire, we gathered the best available evidence from the relevant literature. We chose a validated English questionnaire that is already used in postgraduate training in the United Kingdom [2] and meets the most important criteria. It was translated and extended in some areas to adapt it to linguistic conventions and local needs. Two validity criteria were examined: content validity evidence and response process validity evidence. To examine content validity, an expert panel discussed whether the translated questionnaire reflects the expected competencies. Subsequently, the response processes were examined using think-alouds with physicians in postgraduate training and their trainers. Results: The resulting questionnaire comprises 20 questions. Of these, 15 items belong to the domains 'clinical skills', 'dealing with patients', 'dealing with colleagues' and 'working style'; these questions are answered on a five-point Likert scale. In addition, each question offers the opportunity to add free text on particular strengths and weaknesses of the candidate. There are also five global questions on strengths and areas for improvement, external influences, working conditions, and any doubts about the physician's health or integrity. The expert panel judged the questionnaire to be applicable in the German-speaking area without restrictions. The analysis of the response processes led to minor linguistic adjustments and confirmed that the questionnaire is comprehensible, can be answered unambiguously, and fully describes the chosen construct of medical practice. Discussion/Conclusion: We developed a German-language questionnaire for conducting multisource feedback in postgraduate medical training. We found evidence for the validity of this questionnaire regarding content and response processes. Additional validity investigations, for example of the consequences arising from the questionnaire, are planned. This questionnaire could contribute to broader use of MSF in postgraduate medical training in German-speaking countries as well.

Relevance:

40.00%

Publisher:

Abstract:

We evaluated the face, content and construct validity of the novel da Vinci® Skills Simulator™ using the da Vinci Si™ Surgeon Console as the surgeon interface.

Relevance:

30.00%

Publisher:

Abstract:

Background: The design of Virtual Patients (VPs) is essential. So far, no validated evaluation instruments for VP design have been published. Summary of work: We examined three sources of validity evidence for an instrument, to be completed by students, that is aimed at measuring the quality of VPs with a special emphasis on fostering clinical reasoning: (1) Content was examined based on the theory of clinical reasoning and an international VP expert team. (2) Response process was explored in think-aloud pilot studies with students and in content analysis of free-text questions accompanying each item of the instrument. (3) Internal structure was assessed by confirmatory factor analysis (CFA) using 2547 student evaluations, and reliability was examined using generalizability analysis. Summary of results: Content validity was supported by the theory underlying Gruppen and Frohna's clinical reasoning model, on which the instrument is based, and by an international VP expert team. The pilot study and the analysis of free-text comments supported the validity of the instrument. The CFA indicated that a three-factor model comprising six items showed a good fit with the data. Alpha coefficients per factor ranged from 0.74 to 0.82. The findings of the generalizability studies indicated that 40-200 student responses are needed to obtain reliable data on one VP. Conclusions: The described instrument has the potential to provide faculty with reliable and valid information about VP design. Take-home messages: We present a short instrument which can help in evaluating the design of VPs.
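As a hedged sketch of the D-study logic behind the "40-200 responses per VP" finding, the function below projects the G coefficient for increasing numbers of student responses from assumed variance components and returns the smallest number reaching a chosen reliability target; the variance components and the 0.80 target are illustrative assumptions, not the authors' values.

```python
# Illustrative D-study: how many student responses per VP reach a reliability target?
def responses_needed(var_vp: float, var_residual: float,
                     target: float = 0.80, max_n: int = 500):
    for n in range(1, max_n + 1):
        g = var_vp / (var_vp + var_residual / n)   # projected G coefficient
        if g >= target:
            return n
    return None

# Example with made-up variance components:
# responses_needed(var_vp=0.05, var_residual=1.25)
```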

Relevance:

30.00%

Publisher:

Abstract:

Background: Virtual patients (VPs) are increasingly used to train clinical reasoning. So far, no validated evaluation instruments for VP design are available. Aims: We examined the validity of an instrument for assessing the perception of VP design by learners. Methods: Three sources of validity evidence were examined: (i) Content was examined based on theory of clinical reasoning and an international VP expert team. (ii) The response process was explored in think-aloud pilot studies with medical students and in content analyses of free text questions accompanying each item of the instrument. (iii) Internal structure was assessed by exploratory factor analysis (EFA) and inter-rater reliability by generalizability analysis. Results: Content analysis was reasonably supported by the theoretical foundation and the VP expert team. The think-aloud studies and analysis of free text comments supported the validity of the instrument. In the EFA, using 2547 student evaluations of a total of 78 VPs, a three-factor model showed a reasonable fit with the data. At least 200 student responses are needed to obtain a reliable evaluation of a VP on all three factors. Conclusion: The instrument has the potential to provide valid information about VP design, provided that many responses per VP are available.
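As a hedged illustration of the exploratory factor analysis step, the sketch below fits a three-factor EFA with an oblique rotation using the factor_analyzer package and returns the loading matrix; the package choice, the rotation, and the DataFrame layout are assumptions for illustration, not the authors' analysis.

```python
# Illustrative three-factor EFA with an oblique (oblimin) rotation.
import pandas as pd
from factor_analyzer import FactorAnalyzer

def run_efa(responses: pd.DataFrame, n_factors: int = 3) -> pd.DataFrame:
    """responses: one row per student evaluation, one column per instrument item."""
    fa = FactorAnalyzer(n_factors=n_factors, rotation="oblimin")
    fa.fit(responses)
    return pd.DataFrame(fa.loadings_,
                        index=responses.columns,
                        columns=[f"factor_{i + 1}" for i in range(n_factors)])
```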