855 results for Clinical performance
Abstract:
PURPOSE Rapid assessment and intervention are important for the prognosis of acutely ill patients admitted to the emergency department (ED). The aim of this study was to prospectively develop and validate a model predicting the risk of in-hospital death based on all information available at the time of ED admission, and to compare its discriminative performance with a non-systematic risk estimate by the first triaging health-care provider. METHODS Prospective cohort analysis based on a multivariable logistic regression for the probability of death. RESULTS A total of 8,607 consecutive admissions of 7,680 patients admitted to the ED of a tertiary care hospital were analysed. The most frequent APACHE II diagnostic categories at the time of admission were neurological (2,052, 24 %), trauma (1,522, 18 %), infection categories [1,328, 15 %; including sepsis (357, 4.1 %), severe sepsis (249, 2.9 %), septic shock (27, 0.3 %)], cardiovascular (1,022, 12 %), gastrointestinal (848, 10 %) and respiratory (449, 5 %). The predictors in the final model were age, prolonged capillary refill time, blood pressure, mechanical ventilation, oxygen saturation index, Glasgow coma score and APACHE II diagnostic category. The model showed good discriminative ability, with an area under the receiver operating characteristic curve of 0.92, and good internal validity. The model performed significantly better than non-systematic triaging of the patient. CONCLUSIONS The use of the prediction model can facilitate the identification of ED patients at higher mortality risk. The model performs better than a non-systematic assessment and may enable more rapid identification and commencement of treatment of patients at risk of an unfavourable outcome.
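A minimal sketch of the modelling approach this abstract describes: a multivariable logistic regression for in-hospital death whose discrimination is summarized by the area under the ROC curve. The predictor names follow the abstract, but the synthetic data, coefficients, and train/test split are illustrative assumptions, not the study's dataset (the categorical APACHE II diagnostic group is omitted for brevity):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "age": rng.normal(65, 15, n),
    "prolonged_cap_refill": rng.integers(0, 2, n),
    "systolic_bp": rng.normal(130, 25, n),
    "mech_ventilation": rng.integers(0, 2, n),
    "oxygen_sat_index": rng.normal(95, 5, n),
    "gcs": rng.integers(3, 16, n),
})
# Synthetic outcome loosely tied to the predictors, for demonstration only.
logit = (0.03 * (df["age"] - 65) + 1.2 * df["prolonged_cap_refill"]
         - 0.02 * (df["systolic_bp"] - 130) + 1.5 * df["mech_ventilation"]
         - 0.2 * (df["gcs"] - 15) - 3.0)
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(df, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# Discrimination on held-out data, analogous to the reported AUC of 0.92.
print("ROC AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```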
Abstract:
BACKGROUND Ultrathin-strut biodegradable polymer sirolimus-eluting stents (BP-SES) proved noninferior to durable polymer everolimus-eluting stents (DP-EES) for a composite clinical end point in a population with minimal exclusion criteria. We performed a prespecified subgroup analysis of the Ultrathin Strut Biodegradable Polymer Sirolimus-Eluting Stent Versus Durable Polymer Everolimus-Eluting Stent for Percutaneous Coronary Revascularisation (BIOSCIENCE) trial to compare the performance of BP-SES and DP-EES in patients with diabetes mellitus. METHODS AND RESULTS The BIOSCIENCE trial was an investigator-initiated, single-blind, multicentre, randomized, noninferiority trial comparing BP-SES with DP-EES. The primary end point, target lesion failure, was a composite of cardiac death, target-vessel myocardial infarction, and clinically indicated target lesion revascularization within 12 months. Among a total of 2119 patients enrolled between February 2012 and May 2013, 486 (22.9%) had diabetes mellitus. Overall, diabetic patients experienced a significantly higher risk of target lesion failure than patients without diabetes mellitus (10.1% versus 5.7%; hazard ratio [HR], 1.80; 95% confidence interval [CI], 1.27-2.56; P=0.001). At 1 year, there were no differences between BP-SES and DP-EES in terms of the primary end point in either diabetic (10.9% versus 9.3%; HR, 1.19; 95% CI, 0.67-2.10; P=0.56) or nondiabetic patients (5.3% versus 6.0%; HR, 0.88; 95% CI, 0.58-1.33; P=0.55). Similarly, no significant differences in the risk of definite or probable stent thrombosis were recorded according to treatment arm in either study group (4.0% versus 3.1%; HR, 1.30; 95% CI, 0.49-3.41; P=0.60 for diabetic patients and 2.4% versus 3.4%; HR, 0.70; 95% CI, 0.39-1.25; P=0.23, in nondiabetics). CONCLUSIONS In this prespecified subgroup analysis of the BIOSCIENCE trial, clinical outcomes among diabetic patients treated with BP-SES or DP-EES were comparable at 1 year. CLINICAL TRIAL REGISTRATION URL: http://www.clinicaltrials.gov. Unique identifier: NCT01443104.
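For context on how hazard ratios like those above are typically estimated, here is a hedged sketch of a Cox proportional hazards fit on time-to-target-lesion-failure data using the lifelines library; the DataFrame, column names, and synthetic values are illustrative assumptions, not trial data:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    # Follow-up time in days (censored at 1 year) and an event indicator.
    "time_to_tlf_days": rng.exponential(300, n).clip(1, 365),
    "tlf_event": rng.integers(0, 2, n),   # 1 = target lesion failure observed
    "bp_ses": rng.integers(0, 2, n),      # 1 = BP-SES arm, 0 = DP-EES arm
    "diabetes": rng.integers(0, 2, n),    # diabetic subgroup indicator
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_tlf_days", event_col="tlf_event")
cph.print_summary()  # the exp(coef) column gives hazard ratios with 95% CIs
```

A subgroup analysis like the one reported would fit such a model separately within the diabetic and nondiabetic strata.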
Abstract:
OBJECTIVES Clinical benefit response (CBR), based on changes in pain, Karnofsky performance status, and weight, is an established palliative endpoint in trials for advanced gastrointestinal cancer. We investigated whether CBR is associated with survival, and whether CBR reflects a wide enough range of domains to adequately capture patients' perceptions. METHODS CBR was prospectively evaluated in an international phase III chemotherapy trial in patients with advanced pancreatic cancer (n = 311) in parallel with patient-reported outcomes (PROs). RESULTS The median time to treatment failure was 3.4 months (range: 0-6). The majority of the CBRs (n = 39) were noted in patients who received chemotherapy for at least 5 months. Patients with CBR (n = 62) had longer survival than non-responders (n = 182) (hazard ratio = 0.69; 95% confidence interval: 0.51-0.94; p = 0.013). CBR was predicted with a sensitivity and specificity of 77-80% by various combinations of 3 mainly physical PROs. A comparison between the duration of CBR (n = 62, median = 8 months, range = 4-31) and clinically meaningful improvements in the PROs (n = 100-116; medians = 9-11 months, range = 4-24) showed similar intervals. CONCLUSION CBR is associated with survival and mainly reflects physical domains. Within phase III chemotherapy trials for advanced gastrointestinal cancer, CBR can be replaced by a PRO evaluation, without losing substantial information but gaining complementary information.
Abstract:
Neuropsychologists often face interpretational difficulties when assessing cognitive deficits, particularly in cases of unclear cerebral etiology. How can we be sure whether a single test score below the population average indicates a pathological brain condition or merely normal variability? In the past few years, the topic of intra-individual performance variability has gained considerable interest. On the basis of a large normative sample, this paper presents two measures of performance variability and their importance for neuropsychological interpretation: the number of low scores and the level of dispersion. We conclude that low scores are common in healthy individuals; the level of dispersion, on the other hand, is relatively small. Base rate information about abnormally low scores and abnormally high dispersion across cognitive abilities is provided to improve awareness of normal variability and to serve clinicians as an additional interpretive measure in the diagnostic process.
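To make the two measures concrete, the following sketch estimates base rates of low scores and of intra-individual dispersion in a simulated healthy sample. The battery size, low-score cutoff, and intercorrelation are assumptions chosen for illustration, not the paper's normative values:

```python
import numpy as np

rng = np.random.default_rng(42)
n_people, n_tests = 10_000, 10
# Moderately intercorrelated z-scores for a battery taken by healthy people.
corr = np.full((n_tests, n_tests), 0.4)
np.fill_diagonal(corr, 1.0)
scores = rng.multivariate_normal(np.zeros(n_tests), corr, size=n_people)

low_cutoff = -1.0                          # e.g., 1 SD below the mean
n_low = (scores < low_cutoff).sum(axis=1)  # number of low scores per person
dispersion = scores.max(axis=1) - scores.min(axis=1)  # intra-individual range

# Base rates: how common is "at least k low scores" among healthy people?
for k in range(1, 5):
    print(f"P(>= {k} low scores) = {(n_low >= k).mean():.2f}")
print(f"Mean dispersion: {dispersion.mean():.2f} SD units")
```

The point the abstract makes falls out directly: a single low score is frequent in healthy individuals, while unusually large dispersion is comparatively rare.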
Abstract:
OBJECTIVE Short implants are increasingly used, but it remains unclear whether their performance is similar to that of regular implants. The aim of this study was to compare the mechanical stability of short implants vs. regular implants placed in the edentulous posterior mandible. MATERIAL AND METHODS Twenty-three patients received a total of 48 short implants (5 × 5.5 mm and 5 × 7 mm) and 42 regular implants (4 × 10 mm and 4 × 11.5 mm) in the posterior mandible. Patients who received short implants had <10 mm of bone height measured from the bone crest to the outer wall of the mandibular canal. Resonance frequency analysis (RFA) was performed at time intervals T0 (immediately after implant placement), T1 (after 15 days), T2 (after 30 days), T3 (after 60 days), and T4 (after 90 days). RESULTS The survival rate after 90 days was 87.5% for short implants and 100% for regular implants (P < 0.05). There was no significant difference between the implants at time intervals T1, T2, T3, and T4. At T0, the RFA values of the 5 × 5.5 implants were higher than those of the 5 × 7 and 4 × 11.5 implants (P < 0.05). A total of six short implants placed in four patients were lost (three of 5 × 5.5 mm and three of 5 × 7 mm). Three of the lost implants started with high implant stability quotient (ISQ) values, which progressively decreased; the other three started with slightly lower ISQ values, which rose and then began to fall. CONCLUSIONS The survival rate of short implants after 90 days was lower than that of regular implants. However, short implants may be considered a reasonable alternative for the rehabilitation of severely resorbed mandibles with reduced height, avoiding bone reconstruction before implant placement. Patients need to be made aware of the reduced survival rate compared with regular implants before placement, to avoid disappointment.
Abstract:
OBJECTIVES The aim of the Cavalier trial was to evaluate the safety and performance of the Perceval sutureless aortic valve in patients undergoing aortic valve replacement (AVR). We report the 30-day clinical and haemodynamic outcomes from the largest study cohort with a sutureless valve. METHODS From February 2010 to September 2013, 658 consecutive patients (mean age 77.8 years; 64.4% female; mean logistic EuroSCORE 10.2%) underwent AVR in 25 European centres. Isolated AVR was performed in 451 (68.5%) patients, with a less invasive approach in 219 (33.3%) cases. Of the total, 40.0% were octogenarians. Congenital bicuspid aortic valve was an exclusion criterion. RESULTS Implantation was successful in 628 patients (95.4%). In isolated AVR through sternotomy, the mean cross-clamp and cardiopulmonary bypass (CPB) times were 32.6 and 53.7 min, respectively, and with the less invasive approach, 38.8 and 64.5 min. The 30-day overall and valve-related mortality rates were 3.7 and 0.5%, respectively. Valve explants, stroke and endocarditis occurred in 0.6, 2.1 and 0.1% of cases, respectively. Preoperative mean and peak pressure gradients decreased from 44.8 and 73.24 mmHg to 10.24 and 19.27 mmHg at discharge, respectively. The mean effective orifice area improved from 0.72 to 1.46 cm². CONCLUSIONS The current 30-day results show that the Perceval valve is safe (favourable haemodynamic effect and low complication rate) and can be implanted with a fast and reproducible technique after a short learning period. Short cross-clamp and CPB times were achieved in both isolated and combined procedures. The Perceval valve represents a promising alternative to biological AVR, especially with a less invasive approach and in older patients.
Abstract:
BACKGROUND In contrast to objective structured clinical examinations (OSCEs), mini-clinical evaluation exercises (mini-CEXs) take place at the clinical workplace. As both mini-CEXs and OSCEs assess clinical skills, but within different contexts, this study analyzes the degree to which students' mini-CEX scores can be predicted by their recent OSCE scores and/or context characteristics. METHODS Medical students participated in an end-of-Year-3 OSCE and in 11 mini-CEXs during 5 different clerkships of Year 4. Each student's mean score across 9 clinical skills OSCE stations and mean 'overall' and 'domain' mini-CEX scores, averaged over all of that student's mini-CEXs, were computed. Linear regression analyses including random effects were used to predict mini-CEX scores from OSCE performance and characteristics of clinics, trainers, students and assessments. RESULTS A total of 512 trainers in 45 clinics provided 1783 mini-CEX ratings for 165 students; OSCE results were available for 144 students (87 %). Most influential for the prediction of 'overall' mini-CEX scores was the trainer's clinical position, with a regression coefficient of 0.55 (95 %-CI: 0.26-0.84; p < .001) for residents compared to heads of department. Highly complex tasks and assessments taking place in large clinics also significantly enhanced 'overall' mini-CEX scores. In contrast, high OSCE performance did not significantly increase 'overall' mini-CEX scores. CONCLUSION In our study, mini-CEX scores depended more on context characteristics than on students' clinical skills as demonstrated in an OSCE. We discuss approaches that focus either on enhancing the scores' validity or on using narrative comments only.
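A hedged sketch of the kind of linear regression with random effects the methods describe: mini-CEX scores regressed on OSCE performance and context variables, with a random intercept per clinic. All column names and the synthetic data are assumptions for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 1783  # number of mini-CEX ratings, as in the abstract
df = pd.DataFrame({
    "mini_cex": rng.normal(8, 1, n),
    "osce_score": rng.normal(70, 8, n),
    "trainer_is_resident": rng.integers(0, 2, n),  # vs. head of department
    "task_complexity": rng.integers(1, 4, n),
    "clinic": rng.integers(0, 45, n),              # 45 clinics as grouping factor
})

# Random-intercept model: ratings are clustered within clinics.
model = smf.mixedlm(
    "mini_cex ~ osce_score + trainer_is_resident + task_complexity",
    data=df, groups=df["clinic"],
).fit()
print(model.summary())  # fixed-effect coefficients with 95% CIs
```

The reported coefficient of 0.55 for residents versus heads of department would correspond to a fixed-effect estimate in a model of this form.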
Abstract:
BACKGROUND E-learning and blended learning approaches are gaining popularity in emergency medicine curricula. So far, little data are available on the impact of such approaches on procedural learning and skill acquisition, or on how they compare with traditional approaches. OBJECTIVE This study investigated the impact of a blended learning approach, including Web-based virtual patients (VPs) and standard pediatric basic life support (PBLS) training, on procedural knowledge, objective performance, and self-assessment. METHODS A total of 57 medical students were randomly assigned to an intervention group (n=30) and a control group (n=27). Both groups received paper handouts in preparation for simulation-based PBLS training. The intervention group additionally completed two Web-based VPs with embedded video clips. Measurements were taken at randomization (t0), after the preparation period (t1), and after hands-on training (t2). Clinical decision-making skills and procedural knowledge were assessed at t0 and t1. PBLS performance was scored for adherence to the correct algorithm, conformance to temporal demands, and the quality of procedural steps at t1 and t2. Participants' self-assessments were recorded at all three measurements. RESULTS Procedural knowledge of the intervention group was significantly superior to that of the control group at t1. At t2, the intervention group showed significantly better adherence to the algorithm and temporal demands, and better procedural quality of PBLS in objective measures, than did the control group. These aspects differed between the groups even at t1 (after VPs, prior to practical training). Self-assessments differed significantly only at t1, in favor of the intervention group. CONCLUSIONS Training with VPs combined with hands-on training improves PBLS performance as judged by objective measures.
Abstract:
AIMS A non-invasive gene-expression profiling (GEP) test for rejection surveillance of heart transplant recipients originated in the USA. A European-based study, the Cardiac Allograft Rejection Gene Expression Observational II Study (CARGO II), was conducted to further clinically validate the GEP test's performance. METHODS AND RESULTS Blood samples for GEP testing (AlloMap®, CareDx, Brisbane, CA, USA) were collected during post-transplant surveillance. The reference standard for rejection status was based on histopathology grading of tissue from endomyocardial biopsy. The area under the receiver operating characteristic curve (AUC-ROC) and the negative (NPV) and positive predictive values (PPV) for the GEP scores (range 0-39) were computed. Considering the GEP score of 34 as a cut-off (>6 months post-transplantation), 95.5% (381/399) of GEP tests were true negatives, 4.5% (18/399) were false negatives, 10.2% (6/59) were true positives, and 89.8% (53/59) were false positives. Based on 938 paired biopsies, the GEP test score AUC-ROC for distinguishing ≥3A rejection was 0.70 and 0.69 for ≥2-6 and >6 months post-transplantation, respectively. Depending on the chosen threshold score, the NPV and PPV range from 98.1 to 100% and 2.0 to 4.7%, respectively. CONCLUSION For ≥2-6 and >6 months post-transplantation, CARGO II GEP score performance (AUC-ROC = 0.70 and 0.69) is similar to the CARGO study results (AUC-ROC = 0.71 and 0.67). The low prevalence of acute cellular rejection (ACR) contributes to the high NPV and limited PPV of GEP testing. The choice of threshold score for practical use of GEP testing should consider the overall clinical assessment of the patient's baseline risk for rejection.
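As a quick arithmetic check, the standard definitions NPV = TN/(TN+FN) and PPV = TP/(TP+FP) reproduce the per-test percentages quoted above for the score-34 cut-off; the 98.1-100% and 2.0-4.7% ranges reported later correspond to other threshold choices:

```python
# Counts reported above for the GEP cut-off of 34 (>6 months post-transplant).
tn, fn = 381, 18   # negative GEP tests: true vs. false negatives
tp, fp = 6, 53     # positive GEP tests: true vs. false positives

npv = tn / (tn + fn)   # fraction of negative tests that are truly rejection-free
ppv = tp / (tp + fp)   # fraction of positive tests reflecting true rejection
print(f"NPV = {npv:.3f}, PPV = {ppv:.3f}")  # ~0.955 and ~0.102
```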
Abstract:
In a large health care system, accurate feedback about performance is necessary at many levels, from senior management to service-level managers, for valid decision making. The implementation of dashboards is one way to remedy the problem of data overload by providing up-to-date, accurate, and concise information. As this health care system seeks to put an organized, systematic review mechanism in place, dashboards are being created in a variety of the hospital service departments to monitor performance indicators. The Infection Control Administration of this health care system does not currently utilize a dashboard but seeks to implement one. The purpose of this project is to research and design a clinical dashboard for the Infection Control Administration. The intent is that the implementation and use of the clinical dashboard will translate into improved measurement of health care quality.
Abstract:
Objective. The study reviewed one year of Texas hospital discharge data and Trauma Registry data for the 22 trauma service regions in Texas to identify regional variations in capacity, process of care, and clinical outcomes for trauma patients, and to analyze the statistical associations among capacity, process of care, and outcomes. Methods. Cross-sectional study design covering one year of state-wide Texas data. Indicators of trauma capacity, trauma care processes, and clinical outcomes were defined, and data were collected on each indicator. Descriptive analyses were conducted of regional variations in trauma capacity, process of care, and clinical outcomes at all trauma centers, at Level I and II trauma centers, and at Level III and IV trauma centers. Multilevel regression models were used to test the relations among trauma capacity, process of care, and outcome measures at all trauma centers, at Level I and II trauma centers, and at Level III and IV trauma centers, while controlling for confounders such as age, gender, race/ethnicity, injury severity, level of trauma center, and urbanization. Results. Significant regional variation was found among the 22 trauma service regions across Texas in trauma capacity, process of care, and clinical outcomes. The regional trauma bed rate, the average number of staffed beds per 100,000 population, varied significantly by trauma service region. Pre-hospital trauma care processes (EMS time, transfer time, and triage) also varied significantly by region. Clinical outcomes, including mortality, hospital and intensive care unit length of stay, and hospital charges, varied significantly by region as well. In multilevel regression analysis, the average trauma bed rate was significantly related to trauma care processes, including ambulance delivery time, transfer time, and triage, after controlling for age, gender, race/ethnicity, injury severity, level of trauma center, and urbanization at all trauma centers. Among processes of care, only transfer time was significantly associated with the regional average trauma bed rate at Level III and IV centers, and among outcome measures, only trauma mortality was significantly associated with the regional average trauma bed rate at all trauma centers. Hospital charges were the only outcome measure statistically related to the trauma bed rate at Level I and II trauma centers. The effects of confounders such as age, gender, race/ethnicity, injury severity, and urbanization on processes and outcomes varied significantly by level of trauma center. Conclusions. Regional variation in trauma capacity, process, and outcomes in Texas was extensive. Trauma capacity, age, gender, race/ethnicity, injury severity, level of trauma center, and urbanization were significantly associated with trauma process and clinical outcomes, depending on the level of trauma center. Key words: regionalized trauma systems, trauma capacity, pre-hospital trauma care, process, trauma outcomes, trauma performance, evaluation measures, regional variations
Abstract:
The number of people with end-stage renal disease (ESRD) living with dialysis is a growing public health concern. Most studies about the impact of ESRD on people's lives have focused on the medical and clinical dimensions of ESRD. Very few have given attention to the environmental and cultural context in which people with ESRD live, the adaptations these individuals must make to adjust to living with ESRD and dialysis, or the occupations in which they engage. Additionally, these studies have not focused on Mexican Americans, who are disproportionately affected by this illness and condition. This qualitative study explores the needs, perceptions, and issues facing Mexican Americans with ESRD living with dialysis, as well as their families. Participants were residents of the Lower Rio Grande Valley and included individuals with ESRD, family members, and the healthcare providers who care for them. The Health Belief Model and the Lifestyle Performance Model served as the theoretical frameworks. The study also explored the daily occupations of this population. In-depth interviews were conducted with 15 Mexican Americans with ESRD living with dialysis, 15 family members, and six dialysis healthcare providers. A video documentary of the day-to-day life of three individuals with ESRD and their families was produced. Such data do not currently exist and will greatly enhance the understanding of the human experience of living with ESRD. The results suggest that a collective effort of the family unit is at work to deal with the demands of dialysis. An imbalance and disharmony exist among occupational activities, creating occupational deprivation and disruption for both the individuals and their family members. Implications for practice and recommendations for further research are described.
Abstract:
Interim clinical trial monitoring procedures were motivated by ethical and economic considerations. Classical Brownian motion (Bm) techniques for the statistical monitoring of clinical trials have been widely used. The conditional power argument and boundary-crossing probabilities based on α-spending functions are popular statistical hypothesis testing procedures under the assumption of Brownian motion. However, it is not rare for the assumptions of Brownian motion to be only partially met by trial data. Therefore, I used a more general form of stochastic process, called fractional Brownian motion (fBm), to model the test statistics. Fractional Brownian motion does not have the Markov property: future observations depend not only on the present observations but also on past ones. In this dissertation, we simulated a wide range of fBm data, e.g., H = 0.5 (that is, classical Bm) versus 0.5 < H < 1, both with and without treatment effects. The performance of conditional power and boundary-crossing based interim analyses was then compared under the assumption that the data follow Bm or fBm. Our simulation study suggested that the conditional power and boundaries under fBm assumptions are generally higher than those under Bm assumptions when H > 0.5, and also match the empirical results better.
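A minimal sketch of simulating fBm paths of the kind used in such a simulation study, via the exact Cholesky method on the fBm covariance function. H = 0.5 recovers classical Bm, while 0.5 < H < 1 gives the long-range-dependent case; the grid size and Hurst values here are illustrative assumptions:

```python
import numpy as np

def simulate_fbm(n_steps: int, hurst: float, t_max: float = 1.0,
                 rng=None) -> np.ndarray:
    """Return one fBm path B_H(t) on an evenly spaced grid of n_steps points."""
    if rng is None:
        rng = np.random.default_rng()
    t = np.linspace(t_max / n_steps, t_max, n_steps)
    # Exact fBm covariance: Cov(B_H(s), B_H(t)) = 0.5*(s^2H + t^2H - |t - s|^2H)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * hurst) + u**(2 * hurst) - np.abs(s - u)**(2 * hurst))
    # Correlated Gaussian draw via the Cholesky factor of the covariance.
    return np.linalg.cholesky(cov) @ rng.standard_normal(n_steps)

rng = np.random.default_rng(0)
bm_path = simulate_fbm(200, hurst=0.5, rng=rng)   # classical Brownian motion
fbm_path = simulate_fbm(200, hurst=0.8, rng=rng)  # persistent fBm, H > 0.5
print(bm_path[-1], fbm_path[-1])
```

Interim test statistics modelled on such paths could then be fed into conditional power or boundary-crossing calculations under either assumption.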
Abstract:
This study developed proxy measures to test the independent effects of medical specialty, institutional ethics committee (IEC), and the interaction between the two upon a proxy for the dependent variable of the medical decision to withhold/withdraw care for the dying: the resuscitation index (R-index). Five clinical vignettes were constructed and validated to convey the realism and contextual factors implicit in the decision to withhold/withdraw care. A scale was developed to determine the range of contact with an IEC in terms of physician knowledge and use of IEC policy. The study sample comprised 215 physicians in a teaching hospital in the Southwest, in which proxy measures were tested for two competing influences, medical specialty and IEC, which alternately oppose and support the decision to withhold/withdraw care for the dying. A sub-sample of surgeons supported the hypothesis that an IEC is influential in opposing the medical training imperative to prolong life. Surgeons with a low IEC score were 326 percent more likely to continue care than surgeons with a high IEC score when compared with all other specialties. IEC alone was also found to significantly predict the decision to withhold/withdraw care. The interaction of IEC with the specialty of surgery was the best predictor of a decision to withhold/withdraw care for the dying.
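A hedged sketch of how an interaction effect like the one described could be tested: regressing an R-index proxy on specialty, IEC score, and their product. The variable names and synthetic data are illustrative assumptions, not the study's measures:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 215  # sample size matching the abstract
df = pd.DataFrame({
    "surgeon": rng.integers(0, 2, n),     # 1 = surgical specialty
    "iec_score": rng.integers(0, 11, n),  # knowledge/use of IEC policy
})
# Synthetic outcome in which the IEC effect operates mainly among surgeons,
# for demonstration only.
df["r_index"] = (5 - 0.2 * df["iec_score"] * df["surgeon"]
                 + rng.normal(0, 1, n))

# The surgeon:iec_score term tests whether the IEC effect differs for surgeons.
model = smf.ols("r_index ~ surgeon * iec_score", data=df).fit()
print(model.summary())
```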