74 results for Clinical-prediction Rules
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Background The loose and stringent Asthma Predictive Indices (API), developed in Tucson, are popular rules to predict asthma in preschool children. To be clinically useful, they require validation in different settings. Objective To assess the predictive performance of the API in an independent population and compare it with simpler rules based only on preschool wheeze. Methods We studied 1954 children of the population-based Leicester Respiratory Cohort, followed up from age 1 to 10 years. The API and frequency of wheeze were assessed at age 3 years, and we determined their association with asthma at ages 7 and 10 years by using logistic regression. We computed test characteristics and measures of predictive performance to validate the API and compare it with simpler rules. Results The ability of the API to predict asthma in Leicester was comparable to that in Tucson: for the loose API, odds ratios for asthma at age 7 years were 5.2 in Leicester (5.5 in Tucson), and positive predictive values were 26% (26%). For the stringent API, these values were 8.2 (9.8) and 40% (48%). For the simpler rule "early wheeze", corresponding values were 5.4 and 21%; for "early frequent wheeze", 6.7 and 36%. The discriminative ability of all prediction rules was moderate (c statistic ≤ 0.7) and overall predictive performance low (scaled Brier score < 20%). Conclusion Predictive performance of the API in Leicester, although comparable to that in the original study, was modest and similar to prediction based only on preschool wheeze. This highlights the need for better prediction rules.
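The abstract summarises overall predictive performance with a scaled Brier score. A minimal sketch of that measure (the data below are hypothetical; the scaled Brier score compares a model's Brier score against a "null" model that predicts the outcome prevalence for everyone):

```python
def brier(probs, outcomes):
    # Mean squared difference between predicted probability and outcome (0/1).
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

def scaled_brier(probs, outcomes):
    # 1 = perfect prediction, 0 = no better than predicting the prevalence.
    prevalence = sum(outcomes) / len(outcomes)
    b_model = brier(probs, outcomes)
    b_null = brier([prevalence] * len(outcomes), outcomes)
    return 1 - b_model / b_null

# Hypothetical predictions for five children, three of whom developed asthma.
probs = [0.8, 0.2, 0.6, 0.1, 0.7]
outcomes = [1, 0, 1, 0, 1]
print(round(scaled_brier(probs, outcomes), 3))  # → 0.717
```

A scaled Brier score below 20%, as reported for the API, means the model's Brier score is less than 20% closer to perfect than simply predicting the asthma prevalence for every child.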
Abstract:
Prognostic assessment is important for the management of patients with a pulmonary embolism (PE). A number of clinical prediction rules (CPRs) have been proposed for stratifying PE mortality risk. The aim of this systematic review was to assess the performance of prognostic CPRs in identifying a low-risk PE.
Abstract:
Venous thromboembolism (VTE) is a potentially lethal clinical condition that is suspected in patients with common clinical complaints, in many and varied clinical care settings. Once VTE is diagnosed, optimal therapeutic management (thrombolysis, IVC filters, type and duration of anticoagulants) and ideal therapeutic management settings (outpatient, critical care) are also controversial. Clinical prediction tools, including clinical decision rules and D-Dimer, have been developed, and some validated, to assist clinical decision making along the diagnostic and therapeutic management paths for VTE. Despite these developments, practice variation is high and there remain many controversies in the use of the clinical prediction tools. In this narrative review, we highlight challenges and controversies in VTE diagnostic and therapeutic management with a focus on clinical decision rules and D-Dimer.
Abstract:
OBJECTIVE Cognitive impairments are regarded as a core component of schizophrenia. However, the cognitive dimension of psychosis is hardly considered by ultra-high risk (UHR) criteria. Therefore, we studied whether the combination of symptomatic UHR criteria and the basic symptom criterion "cognitive disturbances" (COGDIS) is superior in predicting first-episode psychosis. METHOD In a naturalistic 48-month follow-up study, the conversion rate to first-episode psychosis was studied in 246 outpatients of an early detection of psychosis service (FETZ); thereby, the association between conversion and the combined and singular use of UHR criteria and COGDIS was compared. RESULTS Patients who met UHR criteria and COGDIS (n=127) at baseline had a significantly higher risk of conversion (hr=0.66 at month 48) and a shorter time to conversion than patients who met only UHR criteria (n=37; hr=0.28) or only COGDIS (n=30; hr=0.23). Furthermore, the risk of conversion was higher for the combined criteria than for UHR criteria (n=164; hr=0.56 at month 48) and COGDIS (n=158; hr=0.56 at month 48) when considered irrespective of each other. CONCLUSIONS Our findings support the merits of considering both COGDIS and UHR criteria in the early detection of persons who are at high risk of developing a first psychotic episode within 48 months. Applying both sets of criteria improves sensitivity and individual risk estimation, and may thereby support the development of stage-targeted interventions. Moreover, since the combined approach enables the identification of considerably more homogeneous at-risk samples, it should support both preventive and basic research.
Abstract:
PURPOSE Rapid assessment and intervention is important for the prognosis of acutely ill patients admitted to the emergency department (ED). The aim of this study was to prospectively develop and validate a model predicting the risk of in-hospital death based on all information available at the time of ED admission and to compare its discriminative performance with a non-systematic risk estimate by the first health-care provider performing triage. METHODS Prospective cohort analysis based on a multivariable logistic regression for the probability of death. RESULTS A total of 8,607 consecutive admissions of 7,680 patients admitted to the ED of a tertiary care hospital were analysed. Most frequent APACHE II diagnostic categories at the time of admission were neurological (2,052, 24 %), trauma (1,522, 18 %), infection categories [1,328, 15 %; including sepsis (357, 4.1 %), severe sepsis (249, 2.9 %), septic shock (27, 0.3 %)], cardiovascular (1,022, 12 %), gastrointestinal (848, 10 %) and respiratory (449, 5 %). The predictors of the final model were age, prolonged capillary refill time, blood pressure, mechanical ventilation, oxygen saturation index, Glasgow coma score and APACHE II diagnostic category. The model showed good discriminative ability, with an area under the receiver operating characteristic curve of 0.92 and good internal validity. The model performed significantly better than non-systematic triaging of the patient. CONCLUSIONS The use of the prediction model can facilitate the identification of ED patients with higher mortality risk. The model performs better than a non-systematic assessment and may facilitate more rapid identification and commencement of treatment of patients at risk of an unfavourable outcome.
Abstract:
Simple clinical scores to predict large vessel occlusion (LVO) in acute ischemic stroke would be helpful to triage patients in the prehospital phase. We assessed the ability of various combinations of National Institutes of Health Stroke Scale (NIHSS) subitems and published stroke scales (i.e., RACE scale, 3I-SS, sNIHSS-8, sNIHSS-5, sNIHSS-1, mNIHSS, a-NIHSS items profiles A-E, CPSS1, CPSS2, and CPSSS) to predict LVO on CT or MR arteriography in 1085 consecutive patients (39.4 % women, mean age 67.7 years) with anterior circulation strokes within 6 h of symptom onset. 657 patients (61 %) had an occlusion of the internal carotid artery or the M1/M2 segment of the middle cerebral artery. Best cut-off value of the total NIHSS score to predict LVO was 7 (PPV 84.2 %, sensitivity 81.0 %, specificity 76.6 %, NPV 72.4 %, ACC 79.3 %). Receiver operating characteristic curves of various combinations of NIHSS subitems and published scores were equally or less predictive of LVO than the total NIHSS score. At the intersection of the sensitivity and specificity curves, all scores missed at least one-fifth of patients with LVO. Best odds ratios for LVO among NIHSS subitems were best gaze (9.6, 95 %-CI 6.765-13.632), visual fields (7.0, 95 %-CI 3.981-12.370), motor arms (7.6, 95 %-CI 5.589-10.204), and aphasia/neglect (7.1, 95 %-CI 5.352-9.492). There is a significant correlation between clinical scores based on the NIHSS score and LVO on arteriography. However, if clinically relevant thresholds are applied to the scores, a sizable number of LVOs are missed. Therefore, clinical scores cannot replace vessel imaging.
Abstract:
BACKGROUND: In clinical practice a diagnosis is based on a combination of clinical history, physical examination and additional diagnostic tests. At present, studies on diagnostic research often report the accuracy of tests without taking into account the information already known from history and examination. Due to this lack of information, together with variations in design and quality of studies, conventional meta-analyses based on these studies will not show the accuracy of the tests in real practice. By using individual patient data (IPD) to perform meta-analyses, the accuracy of tests can be assessed in relation to other patient characteristics, and diagnostic algorithms can be developed or evaluated for individual patients. In this study we will examine these potential benefits in four clinical diagnostic problems in the field of gynaecology, obstetrics and reproductive medicine. METHODS/DESIGN: Based on earlier systematic reviews for each of the four clinical problems, studies are considered for inclusion. The first authors of the included studies will be invited to participate and share their original data. After assessment of validity and completeness the acquired datasets are merged. Based on these data, a series of analyses will be performed, including a systematic comparison of the results of the IPD meta-analysis with those of a conventional meta-analysis, development of multivariable models for clinical history alone and for the combination of history, physical examination and relevant diagnostic tests and development of clinical prediction rules for the individual patients. These will be made accessible for clinicians. DISCUSSION: The use of IPD meta-analysis will allow evaluation of the accuracy of diagnostic tests in relation to other relevant information. Ultimately, this could increase the efficiency of the diagnostic work-up, e.g. by reducing the need for invasive tests and/or improving the accuracy of the diagnostic work-up.
This study will assess whether these benefits of IPD meta-analysis over conventional meta-analysis can be exploited and will provide a framework for future IPD meta-analyses in diagnostic and prognostic research.
Abstract:
PURPOSE To develop a score predicting the risk of adverse events (AEs) in pediatric patients with cancer who experience fever and neutropenia (FN) and to evaluate its performance. PATIENTS AND METHODS Pediatric patients with cancer presenting with FN induced by nonmyeloablative chemotherapy were observed in a prospective multicenter study. A score predicting the risk of future AEs (ie, serious medical complication, microbiologically defined infection, radiologically confirmed pneumonia) was developed from a multivariate mixed logistic regression model. Its cross-validated predictive performance was compared with that of published risk prediction rules. RESULTS An AE was reported in 122 (29%) of 423 FN episodes. In 57 episodes (13%), the first AE was known only after reassessment after 8 to 24 hours of inpatient management. Predicting AE at reassessment was better than prediction at presentation with FN. A differential leukocyte count did not increase the predictive performance. The score predicting future AE in 358 episodes without known AE at reassessment used the following four variables: preceding chemotherapy more intensive than acute lymphoblastic leukemia maintenance (weight = 4), hemoglobin ≥ 90 g/L (weight = 5), leukocyte count less than 0.3 G/L (weight = 3), and platelet count less than 50 G/L (weight = 3). A score (sum of weights) ≥ 9 predicted future AEs. The cross-validated performance of this score exceeded the performance of published risk prediction rules. At an overall sensitivity of 92%, 35% of the episodes were classified as low risk, with a specificity of 45% and a negative predictive value of 93%. CONCLUSION This score, based on four routinely accessible characteristics, accurately identifies pediatric patients with cancer with FN at risk for AEs after reassessment.
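The four-variable score above is simple enough to sketch directly. The weights and the ≥ 9 threshold are taken from the abstract; the function names and input format are illustrative, not the authors' implementation:

```python
def fn_ae_score(intensive_chemo, hemoglobin_g_l, leukocytes_g_l, platelets_g_l):
    # Sum of weights for the four variables reported in the abstract.
    score = 0
    if intensive_chemo:        # chemo more intensive than ALL maintenance (weight 4)
        score += 4
    if hemoglobin_g_l >= 90:   # hemoglobin >= 90 g/L (weight 5)
        score += 5
    if leukocytes_g_l < 0.3:   # leukocyte count < 0.3 G/L (weight 3)
        score += 3
    if platelets_g_l < 50:     # platelet count < 50 G/L (weight 3)
        score += 3
    return score

def predicts_future_ae(score):
    # Threshold reported in the abstract.
    return score >= 9

s = fn_ae_score(True, 95, 0.2, 40)  # all four criteria met
print(s, predicts_future_ae(s))     # → 15 True
```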
Abstract:
The original and modified Wells score are widely used prediction rules for pre-test probability assessment of deep vein thrombosis (DVT). The objective of this study was to compare the predictive performance of both Wells scores in unselected patients with clinical suspicion of DVT.
Abstract:
Background Although CD4 cell count monitoring is used to decide when to start antiretroviral therapy in patients with HIV-1 infection, there are no evidence-based recommendations regarding its optimal frequency. It is common practice to monitor every 3 to 6 months, often coupled with viral load monitoring. We developed rules to guide frequency of CD4 cell count monitoring in HIV infection before starting antiretroviral therapy, which we validated retrospectively in patients from the Swiss HIV Cohort Study. Methodology/Principal Findings We developed two prediction rules ("Snap-shot rule" for a single sample and "Track-shot rule" for multiple determinations) based on a systematic review of published longitudinal analyses of CD4 cell count trajectories. We applied the rules in 2608 untreated patients to classify their 18 061 CD4 counts as either justifiable or superfluous, according to their prior ≥5% or <5% chance of meeting predetermined thresholds for starting treatment. The percentage of measurements that both rules falsely deemed superfluous never exceeded 5%. Superfluous CD4 determinations represented 4%, 11%, and 39% of all actual determinations for treatment thresholds of 500, 350, and 200×10⁶/L, respectively. The Track-shot rule was only marginally superior to the Snap-shot rule. Both rules lose usefulness as CD4 counts approach the treatment threshold. Conclusions/Significance Frequent CD4 count monitoring of patients with CD4 counts well above the threshold for initiating therapy is unlikely to identify patients who require therapy. It appears sufficient to measure CD4 cell count 1 year after a count >650 for a threshold of 200, >900 for 350, or >1150 for 500×10⁶/L, respectively. When CD4 counts fall below these limits, increased monitoring frequency becomes advisable. These rules offer guidance for efficient CD4 monitoring, particularly in resource-limited settings.
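The single-sample logic in the conclusions can be sketched as a lookup. The limits are the ones quoted in the abstract (>650 for a treatment threshold of 200, >900 for 350, >1150 for 500×10⁶/L); the function name and return strings are illustrative:

```python
# Safe CD4 count (cells x 10^6/L) above which a 1-year re-test interval
# is supported, keyed by the treatment-initiation threshold.
SNAPSHOT_LIMITS = {200: 650, 350: 900, 500: 1150}

def cd4_retest_interval(cd4_count, treatment_threshold):
    # Single-sample ("Snap-shot") decision as summarised in the abstract.
    limit = SNAPSHOT_LIMITS[treatment_threshold]
    if cd4_count > limit:
        return "measure again in 1 year"
    return "increase monitoring frequency"

print(cd4_retest_interval(700, 200))  # → measure again in 1 year
print(cd4_retest_interval(400, 350))  # → increase monitoring frequency
```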
Abstract:
PURPOSE OF REVIEW Fever and neutropenia is the most common complication in the treatment of childhood cancer. This review will summarize recent publications that focus on improving the management of this condition as well as those that seek to optimize translational research efforts. RECENT FINDINGS A number of clinical decision rules are available to assist in the identification of low-risk fever and neutropenia however few have undergone external validation and formal impact analysis. Emerging evidence suggests acute fever and neutropenia management strategies should include time to antibiotic recommendations, and quality improvement initiatives have focused on eliminating barriers to early antibiotic administration. Despite reported increases in antimicrobial resistance, few studies have focused on the prediction, prevention, and optimal treatment of these infections and the effect on risk stratification remains unknown. A consensus guideline for paediatric fever and neutropenia research is now available and may help reduce some of the heterogeneity between studies that have previously limited the translation of evidence into clinical practice. SUMMARY Risk stratification is recommended for children with cancer and fever and neutropenia. Further research is required to quantify the overall impact of this approach and to refine exactly which children will benefit from early antibiotic administration as well as modifications to empiric regimens to cover antibiotic-resistant organisms.
Abstract:
Minor brain injury is a frequent condition. Validated clinical decision rules can help in deciding whether a computed tomogram (CT) of the head is required. We hypothesized that institutional guidelines are not frequently used, and that psychological factors are a common reason for ordering an unnecessary CT.
Abstract:
Clinical scores may help physicians to better assess the individual risk/benefit of oral anticoagulant therapy. We aimed to externally validate and compare the prognostic performance of 7 clinical prediction scores for major bleeding events during oral anticoagulation therapy.
Abstract:
OBJECTIVE To assess recommended and actual use of statins in primary prevention of cardiovascular disease (CVD) based on clinical prediction scores in adults who develop their first acute coronary syndrome (ACS). METHOD Cross-sectional study of 3172 adults without previous CVD hospitalized with ACS at 4 university centers in Switzerland. The number of participants eligible for statins before hospitalization was estimated based on the European Society of Cardiology (ESC) guidelines and compared to the observed number of participants on statins at hospital entry. RESULTS Overall, 1171 (37%) participants were classified as high-risk (10-year risk of cardiovascular mortality ≥5% or diabetes); 1025 (32%) as intermediate risk (10-year risk <5% but ≥1%); and 976 (31%) as low risk (10-year risk <1%). Before hospitalization, 516 (16%) were on statins; among high-risk participants, only 236 of 1171 (20%) were on statins. If ESC primary prevention guidelines had been fully implemented, an additional 845 high-risk adults (27% of the whole sample) would have been eligible for statins before hospitalization. CONCLUSION Although statins are recommended for primary prevention in high-risk adults, only one-fifth of them are on statins when hospitalized for a first ACS.
Abstract:
Symptoms of primary ciliary dyskinesia (PCD) are nonspecific and guidance on whom to refer for testing is limited. Diagnostic tests for PCD are highly specialised, requiring expensive equipment and experienced PCD scientists. This study aims to develop a practical clinical diagnostic tool to identify patients requiring testing. Patients consecutively referred for testing were studied. Information readily obtained from patient history was correlated with diagnostic outcome. Using logistic regression, the predictive performance of the best model was tested by receiver operating characteristic curve analyses. The model was simplified into a practical tool (PICADAR) and externally validated in a second diagnostic centre. Of 641 referrals with a definitive diagnostic outcome, 75 (12%) were positive. PICADAR applies to patients with persistent wet cough and has seven predictive parameters: full-term gestation, neonatal chest symptoms, neonatal intensive care admittance, chronic rhinitis, ear symptoms, situs inversus and congenital cardiac defect. Sensitivity and specificity of the tool were 0.90 and 0.75 for a cut-off score of 5 points. Area under the curve for the internally and externally validated tool was 0.91 and 0.87, respectively. PICADAR represents a simple diagnostic clinical prediction rule with good accuracy and validity, ready for testing in respiratory centres referring to PCD centres.