105 results for Clinical performance
Abstract:
BACKGROUND/AIMS Clinical differentiation between organic hypersomnia and non-organic hypersomnia (NOH) is challenging. We aimed to determine the diagnostic value of sleepiness and performance tests in patients with excessive daytime sleepiness (EDS) of organic and non-organic origin. METHODS We conducted a retrospective comparison of the multiple sleep latency test (MSLT), pupillography, and the Steer Clear performance test in three patient groups complaining of EDS: 19 patients with NOH, 23 patients with narcolepsy (NAR), and 46 patients with mild to moderate obstructive sleep apnoea syndrome (OSAS). RESULTS As required by the inclusion criteria, all patients had Epworth Sleepiness Scale (ESS) scores >10. The mean sleep latency in the MSLT indicated mild objective sleepiness in NOH (8.1 ± 4.0 min) and OSAS (7.2 ± 4.1 min), but more severe sleepiness in NAR (2.5 ± 2.0 min). The difference between NAR and the other two groups was significant; the difference between NOH and OSAS was not. In the Steer Clear performance test, NOH patients performed worst (error rate = 10.4%), followed by NAR (8.0%) and OSAS patients (5.9%; p = 0.008). The difference between OSAS and the other two groups was significant, but not that between NOH and NAR. The pupillary unrest index was highest in NAR (11.5), followed by NOH (9.2) and OSAS (7.4; n.s.). CONCLUSION A high error rate in the Steer Clear performance test, together with only mild sleepiness in an objective sleepiness test (MSLT), in a patient with subjective sleepiness (ESS) is suggestive of NOH. This disproportionately high error rate in NOH may be caused by factors unrelated to sleep pressure, such as anergia and reduced attention and motivation, which affect performance but not conventional sleepiness measurements.
Abstract:
Introduction Since the quality of patient portrayal by standardized patients (SPs) during an Objective Structured Clinical Exam (OSCE) has a major impact on the reliability and validity of the exam, quality control should be initiated. Literature about quality control of SPs' performance focuses on feedback [1, 2] or completion of checklists [3, 4]. Since we did not find a published instrument meeting our needs for the assessment of patient portrayal, we developed such an instrument, inspired by others [5], and used it in our high-stakes exam. Methods SP trainers from all five Swiss medical faculties collected and prioritized quality criteria for patient portrayal. Items were revised with the partners twice, based on experiences during OSCEs. The final instrument contains 14 criteria for acting (i.e. adequate verbal and non-verbal expression) and standardization (i.e. verbatim delivery of the first sentence). All partners used the instrument during a high-stakes OSCE. Both SPs and trainers were introduced to the instrument. The tool was used in training (more than 100 observations) and during the exam (more than 250 observations). FAIR_OSCE The list of items to assess the quality of the simulation by SPs was primarily developed and used to provide formative feedback to the SPs in order to help them improve their performance. It was therefore named "Feedback Structure for the Assessment of Interactive Role play in Objective Structured Clinical Exams" (FAIR_OSCE). It was also used to assess the quality of patient portrayal during the exam. The results were calculated for each of the five faculties individually. Formative feedback was given to each of the five faculties individually, without revealing the results of other faculties beyond the overall results. Results High quality of patient portrayal during the exam was documented. More than 90% of SP performances were rated as completely correct or sufficient.
An increase in quality of performance between training and exam was noted. For example, the rate of completely correct reactions in medical tests increased from 88% to 95%. These 95% completely correct reactions, together with 4% sufficient reactions, add up to 99% of the reactions meeting the requirements of the exam. SP educators using the instrument reported an improvement in SPs' performance induced by the use of the instrument. Disadvantages mentioned were the high concentration needed to explicitly observe all criteria and the cumbersome handling of the paper-based forms. Conclusion We were able to document a very high quality of SP performance in our exam. The data also indicate that our training is effective. We believe that the high concentration needed when using the instrument is well invested, considering the observed improvement in performance. The development of an iPad-based application for the form is planned to address the cumbersome handling of the paper forms.
Abstract:
Introduction Since the quality of patient portrayal by standardized patients (SPs) during an Objective Structured Clinical Exam (OSCE) has a major impact on the reliability and validity of the exam, quality control should be initiated. Literature about quality control of SPs' performance focuses on feedback [1, 2] or completion of checklists [3, 4]. Since we did not find a published instrument meeting our needs for the assessment of patient portrayal, we developed such an instrument, inspired by others [5], and used it in our high-stakes exam. Project description SP trainers from five medical faculties collected and prioritized quality criteria for patient portrayal. Items were revised twice, based on experiences during OSCEs. The final instrument contains 14 criteria for acting (i.e. adequate verbal and non-verbal expression) and standardization (i.e. verbatim delivery of the first sentence). All partners used the instrument during a high-stakes OSCE. SPs and trainers were introduced to the instrument. The tool was used in training (more than 100 observations) and during the exam (more than 250 observations). Outcome High quality of SPs' patient portrayal during the exam was documented. More than 90% of SP performances were rated as completely correct or sufficient. An increase in quality of performance between training and exam was noted. For example, the rate of completely correct reactions in medical tests increased from 88% to 95%. Together with the 4% of sufficient performances, these add up to 99% of the reactions in medical tests meeting the standards of the exam. SP educators using the instrument reported an improvement in SPs' performance induced by the use of the instrument. Disadvantages mentioned were the high concentration needed to observe all criteria and the cumbersome handling of the paper-based forms. Discussion We were able to document a very high quality of SP performance in our exam.
The data also indicate that our training is effective. We believe that the high concentration needed when using the instrument is well invested, considering the observed enhancement of performance. The development of an iPad-based application for the form is planned to address the cumbersome handling of the paper forms.
Abstract:
PURPOSE Rapid assessment and intervention is important for the prognosis of acutely ill patients admitted to the emergency department (ED). The aim of this study was to prospectively develop and validate a model predicting the risk of in-hospital death based on all information available at the time of ED admission, and to compare its discriminative performance with a non-systematic risk estimate by the triaging first health-care provider. METHODS Prospective cohort analysis based on a multivariable logistic regression for the probability of death. RESULTS A total of 8,607 consecutive admissions of 7,680 patients admitted to the ED of a tertiary care hospital were analysed. The most frequent APACHE II diagnostic categories at the time of admission were neurological (2,052, 24 %), trauma (1,522, 18 %), infection categories [1,328, 15 %; including sepsis (357, 4.1 %), severe sepsis (249, 2.9 %), septic shock (27, 0.3 %)], cardiovascular (1,022, 12 %), gastrointestinal (848, 10 %) and respiratory (449, 5 %). The predictors of the final model were age, prolonged capillary refill time, blood pressure, mechanical ventilation, oxygen saturation index, Glasgow coma score and APACHE II diagnostic category. The model showed good discriminative ability, with an area under the receiver operating characteristic curve of 0.92, and good internal validity. The model performed significantly better than non-systematic triaging of the patient. CONCLUSIONS The use of the prediction model can facilitate the identification of ED patients at higher mortality risk. The model performs better than a non-systematic assessment and may facilitate more rapid identification and commencement of treatment of patients at risk of an unfavourable outcome.
Abstract:
BACKGROUND Ultrathin strut biodegradable polymer sirolimus-eluting stents (BP-SES) proved noninferior to durable polymer everolimus-eluting stents (DP-EES) for a composite clinical end point in a population with minimal exclusion criteria. We performed a prespecified subgroup analysis of the Ultrathin Strut Biodegradable Polymer Sirolimus-Eluting Stent Versus Durable Polymer Everolimus-Eluting Stent for Percutaneous Coronary Revascularisation (BIOSCIENCE) trial to compare the performance of BP-SES and DP-EES in patients with diabetes mellitus. METHODS AND RESULTS The BIOSCIENCE trial was an investigator-initiated, single-blind, multicentre, randomized, noninferiority trial comparing BP-SES versus DP-EES. The primary end point, target lesion failure, was a composite of cardiac death, target-vessel myocardial infarction, and clinically indicated target lesion revascularization within 12 months. Among a total of 2119 patients enrolled between February 2012 and May 2013, 486 (22.9%) had diabetes mellitus. Overall, diabetic patients experienced a significantly higher risk of target lesion failure compared with patients without diabetes mellitus (10.1% versus 5.7%; hazard ratio [HR], 1.80; 95% confidence interval [CI], 1.27-2.56; P=0.001). At 1 year, there were no differences between BP-SES and DP-EES in terms of the primary end point in either diabetic (10.9% versus 9.3%; HR, 1.19; 95% CI, 0.67-2.10; P=0.56) or nondiabetic patients (5.3% versus 6.0%; HR, 0.88; 95% CI, 0.58-1.33; P=0.55). Similarly, no significant differences in the risk of definite or probable stent thrombosis were recorded according to treatment arm in either study group (4.0% versus 3.1%; HR, 1.30; 95% CI, 0.49-3.41; P=0.60 for diabetic patients and 2.4% versus 3.4%; HR, 0.70; 95% CI, 0.39-1.25; P=0.23 in nondiabetic patients). CONCLUSIONS In this prespecified subgroup analysis of the BIOSCIENCE trial, clinical outcomes among diabetic patients treated with BP-SES or DP-EES were comparable at 1 year.
CLINICAL TRIAL REGISTRATION URL: http://www.clinicaltrials.gov. Unique identifier: NCT01443104.
Abstract:
OBJECTIVES Clinical benefit response (CBR), based on changes in pain, Karnofsky performance status, and weight, is an established palliative endpoint in trials for advanced gastrointestinal cancer. We investigated whether CBR is associated with survival, and whether CBR reflects a wide-enough range of domains to adequately capture patients' perception. METHODS CBR was prospectively evaluated in an international phase III chemotherapy trial in patients with advanced pancreatic cancer (n = 311) in parallel with patient-reported outcomes (PROs). RESULTS The median time to treatment failure was 3.4 months (range: 0-6). The majority of the CBRs (n = 39) were noted in patients who received chemotherapy for at least 5 months. Patients with CBR (n = 62) had longer survival than non-responders (n = 182) (hazard ratio = 0.69; 95% confidence interval: 0.51-0.94; p = 0.013). CBR was predicted with a sensitivity and specificity of 77-80% by various combinations of 3 mainly physical PROs. A comparison between the duration of CBR (n = 62, median = 8 months, range = 4-31) and clinically meaningful improvements in the PROs (n = 100-116; medians = 9-11 months, range = 4-24) showed similar intervals. CONCLUSION CBR is associated with survival and mainly reflects physical domains. Within phase III chemotherapy trials for advanced gastrointestinal cancer, CBR can be replaced by a PRO evaluation, without losing substantial information but gaining complementary information.
Abstract:
Neuropsychologists often face interpretational difficulties when assessing cognitive deficits, particularly in cases of unclear cerebral etiology. How can we be sure whether a single test score below the population average is indicative of a pathological brain condition or of normal variation? In the past few years, the topic of intra-individual performance variability has gained great interest. On the basis of a large normative sample, two measures of performance variability and their importance for neuropsychological interpretation are presented in this paper: the number of low scores and the level of dispersion. We conclude that low scores are common in healthy individuals. On the other hand, the level of dispersion is relatively small. Here, base rate information about abnormally low scores and abnormally high dispersion across cognitive abilities is provided to improve awareness of normal variability and to serve clinicians as an additional interpretive measure in the diagnostic process.
Abstract:
OBJECTIVE Short implants are increasingly used, but there is doubt about whether their performance is similar to that of regular implants. The aim of this study was to compare the mechanical stability of short implants vs. regular implants placed in the edentulous posterior mandible. MATERIAL AND METHODS Twenty-three patients received a total of 48 short implants (5 × 5.5 mm and 5 × 7 mm) and 42 regular implants (4 × 10 mm and 4 × 11.5 mm) in the posterior mandible. Patients who received short implants had <10 mm of bone height measured from the bone crest to the outer wall of the mandibular canal. Resonance frequency analysis (RFA) was performed at time intervals T0 (immediately after implant placement), T1 (after 15 days), T2 (after 30 days), T3 (after 60 days), and T4 (after 90 days). RESULTS The survival rate after 90 days was 87.5% for short implants and 100% for regular implants (P < 0.05). There was no significant difference between the implants at time intervals T1, T2, T3, and T4. At T0, the RFA values of 5 × 5.5 implants were higher than those of 5 × 7 and 4 × 11.5 implants (P < 0.05). A total of six short implants placed in four patients were lost (three of 5 × 5.5 mm and three of 5 × 7 mm). Three of the lost implants started with high ISQ values, which progressively decreased. The other three started with slightly lower ISQ values, which rose and then began to fall. CONCLUSIONS The survival rate of short implants after 90 days was lower than that of regular implants. However, short implants may be considered a reasonable alternative for rehabilitation of severely resorbed mandibles with reduced height, to avoid performing bone reconstruction before implant placement. Patients need to be made aware of the reduced survival rate compared with regular implants before placement to avoid disappointment.
Abstract:
OBJECTIVES The aim of the Cavalier trial was to evaluate the safety and performance of the Perceval sutureless aortic valve in patients undergoing aortic valve replacement (AVR). We report the 30-day clinical and haemodynamic outcomes from the largest study cohort with a sutureless valve. METHODS From February 2010 to September 2013, 658 consecutive patients (mean age 77.8 years; 64.4% female; mean logistic EuroSCORE 10.2%) underwent AVR in 25 European centres. Isolated AVR was performed in 451 (68.5%) patients, with a less invasive approach in 219 (33.3%) cases. Of the total, 40.0% were octogenarians. Congenital bicuspid aortic valve was considered an exclusion criterion. RESULTS Implantation was successful in 628 patients (95.4%). In isolated AVR through sternotomy, the mean cross-clamp time and cardiopulmonary bypass (CPB) time were 32.6 and 53.7 min, and with the less invasive approach 38.8 and 64.5 min, respectively. The 30-day overall and valve-related mortality rates were 3.7 and 0.5%, respectively. Valve explants, stroke and endocarditis occurred in 0.6, 2.1 and 0.1% of cases, respectively. Preoperative mean and peak pressure gradients decreased from 44.8 and 73.24 mmHg to 10.24 and 19.27 mmHg at discharge, respectively. The mean effective orifice area improved from 0.72 to 1.46 cm². CONCLUSIONS The current 30-day results show that the Perceval valve is safe (favourable haemodynamic effect and low complication rate) and can be implanted with a fast and reproducible technique after a short learning period. Short cross-clamp and CPB times were achieved in both isolated and combined procedures. The Perceval valve represents a promising alternative to biological AVR, especially with a less invasive approach and in older patients.
Abstract:
BACKGROUND In contrast to objective structured clinical examinations (OSCEs), mini-clinical evaluation exercises (mini-CEXs) take place at the clinical workplace. As both mini-CEXs and OSCEs assess clinical skills, but within different contexts, this study aims at analyzing to what degree students' mini-CEX scores can be predicted by their recent OSCE scores and/or context characteristics. METHODS Medical students participated in an end-of-Year-3 OSCE and in 11 mini-CEXs during 5 different clerkships of Year 4. The students' mean scores across 9 clinical skills OSCE stations and their mean 'overall' and 'domain' mini-CEX scores, averaged over all mini-CEXs of each student, were computed. Linear regression analyses including random effects were used to predict mini-CEX scores from OSCE performance and characteristics of clinics, trainers, students and assessments. RESULTS A total of 512 trainers in 45 clinics provided 1783 mini-CEX ratings for 165 students; OSCE results were available for 144 students (87 %). Most influential for the prediction of 'overall' mini-CEX scores was the trainers' clinical position, with a regression coefficient of 0.55 (95 %-CI: 0.26-0.84; p < .001) for residents compared to heads of department. Highly complex tasks and assessments taking place in large clinics also significantly enhanced 'overall' mini-CEX scores. In contrast, high OSCE performance did not significantly increase 'overall' mini-CEX scores. CONCLUSION In our study, mini-CEX scores depended more on context characteristics than on students' clinical skills as demonstrated in an OSCE. Ways are discussed that focus either on enhancing the scores' validity or on using narrative comments only.
Abstract:
BACKGROUND E-learning and blended learning approaches gain more and more popularity in emergency medicine curricula. So far, little data is available on the impact of such approaches on procedural learning and skill acquisition and their comparison with traditional approaches. OBJECTIVE This study investigated the impact of a blended learning approach, including Web-based virtual patients (VPs) and standard pediatric basic life support (PBLS) training, on procedural knowledge, objective performance, and self-assessment. METHODS A total of 57 medical students were randomly assigned to an intervention group (n=30) and a control group (n=27). Both groups received paper handouts in preparation of simulation-based PBLS training. The intervention group additionally completed two Web-based VPs with embedded video clips. Measurements were taken at randomization (t0), after the preparation period (t1), and after hands-on training (t2). Clinical decision-making skills and procedural knowledge were assessed at t0 and t1. PBLS performance was scored regarding adherence to the correct algorithm, conformance to temporal demands, and the quality of procedural steps at t1 and t2. Participants' self-assessments were recorded in all three measurements. RESULTS Procedural knowledge of the intervention group was significantly superior to that of the control group at t1. At t2, the intervention group showed significantly better adherence to the algorithm and temporal demands, and better procedural quality of PBLS in objective measures than did the control group. These aspects differed between the groups even at t1 (after VPs, prior to practical training). Self-assessments differed significantly only at t1 in favor of the intervention group. CONCLUSIONS Training with VPs combined with hands-on training improves PBLS performance as judged by objective measures.
Abstract:
AIMS A non-invasive gene-expression profiling (GEP) test for rejection surveillance of heart transplant recipients originated in the USA. A European-based study, Cardiac Allograft Rejection Gene Expression Observational II Study (CARGO II), was conducted to further clinically validate the GEP test performance. METHODS AND RESULTS Blood samples for GEP testing (AlloMap(®), CareDx, Brisbane, CA, USA) were collected during post-transplant surveillance. The reference standard for rejection status was based on histopathology grading of tissue from endomyocardial biopsy. The area under the receiver operating characteristic curve (AUC-ROC), negative (NPVs), and positive predictive values (PPVs) for the GEP scores (range 0-39) were computed. Considering the GEP score of 34 as a cut-off (>6 months post-transplantation), 95.5% (381/399) of GEP tests were true negatives, 4.5% (18/399) were false negatives, 10.2% (6/59) were true positives, and 89.8% (53/59) were false positives. Based on 938 paired biopsies, the GEP test score AUC-ROC for distinguishing ≥3A rejection was 0.70 and 0.69 for ≥2-6 and >6 months post-transplantation, respectively. Depending on the chosen threshold score, the NPV and PPV range from 98.1 to 100% and 2.0 to 4.7%, respectively. CONCLUSION For ≥2-6 and >6 months post-transplantation, CARGO II GEP score performance (AUC-ROC = 0.70 and 0.69) is similar to the CARGO study results (AUC-ROC = 0.71 and 0.67). The low prevalence of ACR contributes to the high NPV and limited PPV of GEP testing. The choice of threshold score for practical use of GEP testing should consider overall clinical assessment of the patient's baseline risk for rejection.
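The predictive values above follow from the standard confusion-matrix definitions. As a minimal illustrative sketch (not the paper's own computation; the reported 98.1-100% NPV and 2.0-4.7% PPV ranges come from other choices of threshold score), applying those definitions to the counts quoted for the cut-off of 34 gives:

```python
# Confusion-matrix counts quoted in the abstract for the GEP cut-off of 34
# (>6 months post-transplantation).
tn, fn = 381, 18   # negative GEP tests: true negatives, false negatives
tp, fp = 6, 53     # positive GEP tests: true positives, false positives

npv = tn / (tn + fn)   # negative predictive value
ppv = tp / (tp + fp)   # positive predictive value

print(f"NPV = {npv:.1%}, PPV = {ppv:.1%}")  # NPV = 95.5%, PPV = 10.2%
```

The low PPV despite a reasonable AUC-ROC is exactly the low-prevalence effect the conclusion describes: with few true rejections, even a modest false-positive rate swamps the true positives.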
Abstract:
Proton therapy is a high-precision technique in cancer radiation therapy which allows irradiating the tumor with minimal damage to the surrounding healthy tissues. Pencil beam scanning is the most advanced dose distribution technique; it is based on a variable-energy beam of a few millimeters FWHM which is moved to cover the target volume. Due to spurious effects of the accelerator and the dose distribution system, and to the unavoidable scattering inside the patient's body, the pencil beam is surrounded by a halo that produces a peripheral dose. To assess this issue, nuclear emulsion films interleaved with tissue-equivalent material were used for the first time to characterize the beam in the halo region and to experimentally evaluate the corresponding dose. The high-precision tracking performance of the emulsion films allowed studying the angular distribution of the protons in the halo. Measurements with this technique were performed on the clinical beam of Gantry 1 at the Paul Scherrer Institute. Proton tracks were identified in the emulsion films and the track density was studied at several depths. The corresponding dose was assessed by Monte Carlo simulations and the dose profile was obtained as a function of the distance from the center of the beam spot.
Abstract:
BACKGROUND The application of therapeutic hypothermia (TH) for 12 to 24 hours following out-of-hospital cardiac arrest (OHCA) has been associated with decreased mortality and improved neurological function. However, the optimal duration of cooling is not known. We aimed to investigate whether targeted temperature management (TTM) at 33 ± 1 °C for 48 hours, compared to 24 hours, results in a better long-term neurological outcome. METHODS The TTH48 trial is an investigator-initiated pragmatic international trial in which patients resuscitated from OHCA are randomised to TTM at 33 ± 1 °C for either 24 or 48 hours. Inclusion criteria are: age over 17 and under 80 years; presumed cardiac origin of arrest; and Glasgow Coma Score (GCS) <8 on admission. The primary outcome is neurological outcome at 6 months using the Cerebral Performance Category score (CPC), assessed by an assessor blinded to treatment allocation and dichotomised into good (CPC 1-2) or poor (CPC 3-5) outcome. Secondary outcomes are: 6-month mortality; incidence of infection, bleeding and organ failure; and CPC at hospital discharge, at day 28 and at day 90 following OHCA. Assuming that 50 % of the patients treated for 24 hours will have a poor outcome at 6 months, a study including 350 patients (175/arm) will have 80 % power (with a significance level of 5 %) to detect an absolute 15 % difference in the primary outcome between treatment groups. A safety interim analysis was performed after the inclusion of 175 patients. DISCUSSION This is the first randomised trial to investigate the effect of the duration of TTM at 33 ± 1 °C in adult OHCA patients. We anticipate that the results of this trial will add significant knowledge regarding the management of cooling procedures in OHCA patients. TRIAL REGISTRATION NCT01689077.
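The stated sample size can be roughly reproduced with the standard normal-approximation formula for comparing two independent proportions. The sketch below uses only the assumptions quoted in the abstract (50% vs 35% poor outcome, two-sided α = 0.05, 80% power); the trial's exact calculation is not given, and the small gap to its quoted 175 per arm may reflect a different formula, rounding, or an attrition allowance:

```python
import math
from statistics import NormalDist

# Design assumptions quoted in the abstract
p1, p2 = 0.50, 0.35          # poor-outcome proportion: 24 h arm vs 48 h arm
alpha, power = 0.05, 0.80    # two-sided significance level and power

z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96, two-sided alpha
z_b = NormalDist().inv_cdf(power)          # ~0.84, for 80% power

# Normal-approximation sample size per arm (pooled variance under H0,
# unpooled under H1)
p_bar = (p1 + p2) / 2
n_per_arm = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
             + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p1 - p2) ** 2

print(math.ceil(n_per_arm))  # 170 per arm, close to the trial's 175/arm
```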