961 results for Clinical-prediction Rules
Abstract:
The main objective of this study was to determine the external validity of a clinical prediction rule developed by the European Multicenter Study on Human Spinal Cord Injury (EM-SCI) to predict ambulation outcomes 12 months after traumatic spinal cord injury. Data from the North American Clinical Trials Network (NACTN) registry, which contains approximately 500 SCI cases, were used for this validation study. The predictive accuracy of the EM-SCI prognostic model was evaluated in terms of calibration and discrimination based on 231 NACTN cases. The area under the receiver-operating-characteristic (ROC) curve was 0.927 (95% CI 0.894–0.959) for the EM-SCI model when applied to the NACTN population. This is lower than the AUC of 0.956 (95% CI 0.936–0.976) reported for the EM-SCI population, but it indicates that the EM-SCI clinical prediction rule discriminated well between patients in the NACTN population who achieved independent ambulation and those who did not. The calibration curve suggests that the higher the prediction score, the higher the probability of walking, with the best predictions for AIS D patients. In conclusion, the EM-SCI clinical prediction rule was found to be generalizable to the adult NACTN SCI population.
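For readers unfamiliar with how such an external validation is quantified, the sketch below is illustrative only: it uses randomly generated placeholder data rather than NACTN records, and simply shows the usual way discrimination is summarized, namely the AUC of the rule's predicted probabilities against the observed ambulation outcome, with a bootstrap 95% confidence interval.

```python
# Illustrative sketch (not the EM-SCI/NACTN code): how discrimination of an
# existing prediction rule is typically checked on an external cohort.
# `predicted_prob` and `walked` are hypothetical placeholders for the rule's
# predicted probability of independent ambulation and the observed outcome.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
predicted_prob = rng.uniform(0, 1, size=231)                          # placeholder predictions
walked = (rng.uniform(0, 1, size=231) < predicted_prob).astype(int)   # placeholder outcomes

auc = roc_auc_score(walked, predicted_prob)

# Simple bootstrap for a 95% confidence interval around the AUC.
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(walked), len(walked))
    if walked[idx].min() == walked[idx].max():        # skip resamples with only one class
        continue
    boot.append(roc_auc_score(walked[idx], predicted_prob[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC {auc:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```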
Abstract:
Simple clinical scores to predict large vessel occlusion (LVO) in acute ischemic stroke would be helpful for triaging patients in the prehospital phase. We assessed the ability of various combinations of National Institutes of Health Stroke Scale (NIHSS) subitems and published stroke scales (i.e., RACE scale, 3I-SS, sNIHSS-8, sNIHSS-5, sNIHSS-1, mNIHSS, a-NIHSS item profiles A-E, CPSS1, CPSS2, and CPSSS) to predict LVO on CT or MR arteriography in 1085 consecutive patients (39.4% women, mean age 67.7 years) with anterior circulation strokes within 6 h of symptom onset. 657 patients (61%) had an occlusion of the internal carotid artery or the M1/M2 segment of the middle cerebral artery. The best cut-off value of the total NIHSS score for predicting LVO was 7 (PPV 84.2%, sensitivity 81.0%, specificity 76.6%, NPV 72.4%, ACC 79.3%). Receiver operating characteristic curves of the various combinations of NIHSS subitems and published scores were equally or less predictive of LVO than the total NIHSS score. At the intersection of the sensitivity and specificity curves, all scores missed at least one fifth of patients with LVO. The highest odds ratios for LVO among NIHSS subitems were for best gaze (9.6, 95% CI 6.765-13.632), visual fields (7.0, 95% CI 3.981-12.370), motor arms (7.6, 95% CI 5.589-10.204), and aphasia/neglect (7.1, 95% CI 5.352-9.492). There is a significant correlation between clinical scores based on the NIHSS score and LVO on arteriography. However, if clinically relevant thresholds are applied to the scores, a sizable number of LVOs are missed. Therefore, clinical scores cannot replace vessel imaging.
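The predictive values quoted for the NIHSS cut-off of 7 can be checked directly from the prevalence, sensitivity, and specificity reported in the abstract via Bayes' rule; a minimal sketch:

```python
# Check of the reported predictive values at the NIHSS cut-off of 7, using only
# the prevalence and the sensitivity/specificity quoted in the abstract
# (Bayes' rule; no patient-level data required).
prevalence = 657 / 1085          # proportion with large vessel occlusion
sens, spec = 0.810, 0.766

ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
print(f"PPV {ppv:.1%}, NPV {npv:.1%}")   # ~84.2% and ~72.4%, matching the abstract
```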
Abstract:
BACKGROUND: Chest pain is a common complaint in primary care, with coronary heart disease (CHD) being the most concerning of many potential causes. Systematic reviews of the sensitivity and specificity of symptoms and signs summarize the evidence about which of them are most useful in making a diagnosis. Previous meta-analyses are dominated by studies of patients referred to specialists. Moreover, because the analysis is typically based on study-level data, the statistical analyses in these reviews are limited, whereas meta-analyses based on individual patient data (IPD) can provide additional information. Our patient-level meta-analysis has three aims. First, we strive to determine the diagnostic accuracy of symptoms and signs for myocardial ischemia in primary care. Second, we investigate associations between study- or patient-level characteristics and measures of diagnostic accuracy. Third, we aim to validate existing clinical prediction rules for diagnosing myocardial ischemia in primary care. This article describes the methods of our study and the six prospective studies of primary care patients with chest pain. Later articles will describe the main results. METHODS/DESIGN: We will conduct a systematic review and IPD meta-analysis of studies evaluating the diagnostic accuracy of symptoms and signs for diagnosing coronary heart disease in primary care. We will perform bivariate analyses to determine the sensitivity, specificity and likelihood ratios of individual symptoms and signs, and multivariate analyses to explore the diagnostic value of an optimal combination of all symptoms and signs based on the data from all studies. We will validate existing clinical prediction rules from each of the included studies by calculating measures of diagnostic accuracy separately by study. DISCUSSION: Our study will face several methodological challenges. First, the number of studies will be limited. Second, the investigators of the original studies defined some outcomes and predictors differently. Third, the studies did not collect the same standard clinical data set. Fourth, missing data, varying from partly missing to fully missing, will have to be dealt with. Despite these limitations, we aim to summarize the available evidence regarding the diagnostic accuracy of symptoms and signs for diagnosing CHD in patients presenting with chest pain in primary care. REVIEW REGISTRATION: Centre for Reviews and Dissemination (University of York): CRD42011001170.
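As a point of reference for the bivariate analyses named in the protocol, the sketch below computes sensitivity, specificity, and likelihood ratios for a single symptom from a 2x2 table; the counts are invented placeholders, not data from any of the included studies.

```python
# Minimal sketch of the per-study accuracy measures the protocol describes
# (sensitivity, specificity, likelihood ratios) computed from a 2x2 table.
# The counts below are made-up placeholders, not data from the included studies.
def diagnostic_accuracy(tp: int, fp: int, fn: int, tn: int) -> dict:
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "LR+": sens / (1 - spec),
        "LR-": (1 - sens) / spec,
    }

print(diagnostic_accuracy(tp=40, fp=60, fn=10, tn=290))
```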
Abstract:
Oral anticoagulants are frequently used in clinical practice. The most important complication of oral anticoagulation is major bleeding. The incidence of major bleeding is about 2-3%/year in randomized controlled trials but may be considerably higher under real life conditions. Major bleeding risk in patients receiving oral anticoagulants depends on factors related to anticoagulation itself (intensity and quality), patient-related factors (demographic characteristics and comorbid diseases), and concomitant treatments with antiplatelet or non-steroidal anti-inflammatory drugs. The role of clinical prediction rules for major bleeding is discussed.
Abstract:
In recent years many clinical prediction rules (CPRs) have been developed. Before a CPR can be used in clinical practice, several methodological steps are necessary, from development of the score, through internal and external validation, to an impact study. Before using a CPR in daily practice, family doctors have to verify how the rule was developed and whether this was done in a population similar to the one in which they intend to apply it. The aim of this paper is to describe the development of a CPR and to discuss the advantages and risks related to the use of CPRs, in order to help family doctors choose scores for their daily practice.
Abstract:
Venous thromboembolism, such as deep vein thrombosis (DVT) or thrombophlebitis of the lower limbs, is a vascular disease characterized by the formation of a blood clot causing partial or total obstruction of the vessel lumen. Pulmonary embolism is a potentially fatal complication of DVT that occurs when the clot detaches, travels through the bloodstream, and obstructs the arterial branches supplying the lungs. The combination of clinical tools and imaging techniques such as clinical prediction rules (signs and symptoms) and blood tests (D-dimer), complemented by venous ultrasound examination (compression testing, Doppler ultrasound), allows a first episode of DVT to be diagnosed. However, the performance of these diagnostic tools remains very poor for the detection of recurrent DVT. To direct the patient toward optimal therapy, the problem is no longer detecting the thrombosis but rather assessing the maturity and age of the thrombus, parameters that are directly correlated with its mechanical properties (e.g. elasticity, viscosity). Dynamic elastography (DE) has recently been proposed as a new non-invasive imaging modality capable of quantitatively characterizing the mechanical properties of tissues. DE is based on the analysis of the acoustic parameters (i.e. speed, attenuation, distribution pattern) of low-frequency (10-7000 Hz) shear waves propagating in the probed medium. These shear waves, generated by external vibration or by an internal source using focused ultrasound beams (radiation force), are measured by ultrafast ultrasound imaging or by magnetic resonance. A DE-based method adapted to the mechanical characterization of venous thrombi would make it possible to quantify the severity of this disease and thereby improve diagnosis. This thesis presents a body of work related to the development and complete, rigorous validation of a new non-invasive elastographic imaging technique for the quantitative measurement of the mechanical properties of venous thrombi. Reaching this main objective first required improving our understanding of the mechanical behavior of the blood clot (coagulated blood) under dynamic loading such as that used in DE. The shear storage modulus (elastic behavior, G') and loss modulus (viscous behavior, G'') of porcine blood clots were measured by DE during the coagulation cascade (at 70 Hz) and after complete coagulation (between 50 Hz and 160 Hz). These results constitute the very first measurements of the dynamic behavior of blood clots over such a wide frequency range. The next step was to build an innovative reference ("gold standard") instrument, called RheoSpectris, dedicated to measuring the hyper-frequency viscoelasticity (between 10 Hz and 1000 Hz) of materials and biomaterials. Such a tool is essential for validating and calibrating any new dynamic elastography technique. A comparative study between RheoSpectris and classical rheometry was carried out to validate measurements made on different materials (silicone, thermoplastic, biomaterials, gel). The excellent agreement between the two technologies leads to the conclusion that RheoSpectris is a reliable instrument for mechanical measurements at frequencies that are difficult to reach with current tools. The theoretical foundations of a new elastographic imaging modality, named SWIRE ("shear wave induced resonance dynamic elastography"), are presented and validated on vascular phantoms. This approach characterizes the mechanical properties of a confined inclusion (e.g. a blood clot) from its resonance (displacement amplification) produced by the propagation of judiciously oriented shear waves. SWIRE also has the advantage of amplifying the vibration amplitude inside the heterogeneity, making it easier to detect and segment. Finally, the DVT-SWIRE ("deep venous thrombosis – SWIRE") method is adapted to the quantitative characterization of the elasticity of venous thrombi for clinical use. This method exploits the first resonance frequency measured in the thrombus during the propagation of plane shear waves (vibration of an external plate) or cylindrical shear waves (radiation force simulated by supersonic generation). DVT-SWIRE was applied to phantoms mimicking DVT, and the results were compared with those given by the RheoSpectris reference instrument. The method was also successfully used in an ex vivo study to assess the elasticity of porcine thrombi explanted after having been induced in vivo by surgery.
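For orientation, dynamic elastography converts a measured shear-wave speed into a shear modulus; the relation below is the standard small-amplitude result for a locally homogeneous, purely elastic medium and is added here for context as an assumption, not a formula quoted from the abstract.

```latex
% Standard elastic relation assumed for context (not stated in the abstract):
% shear modulus G from tissue density \rho and measured shear-wave speed c_s.
\[
  G = \rho\, c_s^{2},
  \qquad
  \rho \approx 1000~\mathrm{kg/m^3},\;
  c_s \approx 1\text{--}10~\mathrm{m/s}
  \;\Rightarrow\;
  G \approx 1\text{--}100~\mathrm{kPa}.
\]
```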
Abstract:
BACKGROUND: In clinical practice a diagnosis is based on a combination of clinical history, physical examination and additional diagnostic tests. At present, diagnostic research studies often report the accuracy of tests without taking into account the information already known from history and examination. Because of this missing information, together with variations in the design and quality of studies, conventional meta-analyses based on these studies will not reflect the accuracy of the tests in real practice. By using individual patient data (IPD) in meta-analyses, the accuracy of tests can be assessed in relation to other patient characteristics, and diagnostic algorithms can be developed or evaluated for individual patients. In this study we will examine these potential benefits in four clinical diagnostic problems in the field of gynaecology, obstetrics and reproductive medicine. METHODS/DESIGN: Based on earlier systematic reviews for each of the four clinical problems, studies are considered for inclusion. The first authors of the included studies will be invited to participate and share their original data. After assessment of validity and completeness, the acquired datasets will be merged. Based on these data, a series of analyses will be performed, including a systematic comparison of the results of the IPD meta-analysis with those of a conventional meta-analysis, development of multivariable models for clinical history alone and for the combination of history, physical examination and relevant diagnostic tests, and development of clinical prediction rules for individual patients. These will be made accessible to clinicians. DISCUSSION: The use of IPD meta-analysis will allow the accuracy of diagnostic tests to be evaluated in relation to other relevant information. Ultimately, this could increase the efficiency of the diagnostic work-up, e.g. by reducing the need for invasive tests and/or improving its accuracy. This study will assess whether these benefits of IPD meta-analysis over conventional meta-analysis can be realized and will provide a framework for future IPD meta-analyses in diagnostic and prognostic research.
Abstract:
In diagnosis and prognosis, we should avoid intuitive “guesstimates” and seek a validated numerical aid
Abstract:
The impact that trauma has had in Colombia throughout its history has forced us to improve and adapt different types of trauma care systems, based on international guidelines, which seek to prevent the significant increases in mortality and disability that trauma produces, especially in Emergency departments, which receive 100% of these patients with multiple trauma or polytrauma. Within this group of patients there is a subgroup with abdominal trauma who are hemodynamically stable and are also classified as low risk, whether by trauma indices or by other methods such as serum lactate measurement, which play a far from negligible role when assessing mortality and disability from trauma, whether penetrating or blunt. In this work we focus specifically on people who present to the Emergency department with blunt abdominal trauma and are considered low risk. This subgroup of patients is one of the most difficult to approach and manage during the initial assessment, since it must be established with certainty that there are no life-threatening injuries before these patients can be discharged.
Abstract:
BACKGROUND: Physicians need a specific risk-stratification tool to facilitate safe and cost-effective approaches to the management of patients with cancer and acute pulmonary embolism (PE). The objective of this study was to develop a simple risk score for predicting 30-day mortality in patients with PE and cancer by using measures readily obtained at the time of PE diagnosis. METHODS: Investigators randomly allocated 1,556 consecutive patients with cancer and acute PE from the international multicenter Registro Informatizado de la Enfermedad TromboEmbólica to derivation (67%) and internal validation (33%) samples. The external validation cohort for this study consisted of 261 patients with cancer and acute PE. Investigators compared 30-day all-cause mortality and nonfatal adverse medical outcomes across the derivation and two validation samples. RESULTS: In the derivation sample, multivariable analyses produced the risk score, which contained six variables: age > 80 years, heart rate ≥ 110/min, systolic BP < 100 mm Hg, body weight < 60 kg, recent immobility, and presence of metastases. In the internal validation cohort (n = 508), the 22.2% of patients (113 of 508) classified as low risk by the prognostic model had a 30-day mortality of 4.4% (95% CI, 0.6%-8.2%) compared with 29.9% (95% CI, 25.4%-34.4%) in the high-risk group. In the external validation cohort, the 18% of patients (47 of 261) classified as low risk by the prognostic model had a 30-day mortality of 0%, compared with 19.6% (95% CI, 14.3%-25.0%) in the high-risk group. CONCLUSIONS: The developed clinical prediction rule accurately identifies low-risk patients with cancer and acute PE.
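The abstract lists the six predictors but not their published weights or the low-risk cut-off; the sketch below therefore only counts how many criteria a hypothetical patient meets, and is illustrative of the rule's inputs rather than a reimplementation of the published score.

```python
# Hedged sketch of the kind of bedside rule the abstract describes. The six
# predictors are those listed in the abstract, but the published weights and
# the low-risk cut-off are NOT given there, so this simply counts criteria met
# and is illustrative only.
def pe_cancer_risk_criteria(age: float, heart_rate: float, systolic_bp: float,
                            weight_kg: float, recent_immobility: bool,
                            metastases: bool) -> int:
    criteria = [
        age > 80,
        heart_rate >= 110,
        systolic_bp < 100,
        weight_kg < 60,
        recent_immobility,
        metastases,
    ]
    return sum(criteria)

# Example: a patient meeting none of the criteria would presumably fall in the
# group the abstract labels low risk (30-day mortality 0% in the external cohort).
print(pe_cancer_risk_criteria(72, 95, 120, 70, False, False))  # -> 0
```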
Abstract:
Background: Although CD4 cell count monitoring is used to decide when to start antiretroviral therapy in patients with HIV-1 infection, there are no evidence-based recommendations regarding its optimal frequency. It is common practice to monitor every 3 to 6 months, often coupled with viral load monitoring. We developed rules to guide the frequency of CD4 cell count monitoring in HIV infection before starting antiretroviral therapy, which we validated retrospectively in patients from the Swiss HIV Cohort Study. Methodology/Principal Findings: We built two prediction rules ("Snap-shot rule" for a single sample and "Track-shot rule" for multiple determinations) based on a systematic review of published longitudinal analyses of CD4 cell count trajectories. We applied the rules in 2608 untreated patients to classify their 18,061 CD4 counts as either justifiable or superfluous, according to their prior ≥ 5% or < 5% chance of meeting predetermined thresholds for starting treatment. The percentage of measurements that both rules falsely deemed superfluous never exceeded 5%. Superfluous CD4 determinations represented 4%, 11%, and 39% of all actual determinations for treatment thresholds of 500, 350, and 200 × 10⁶/L, respectively. The Track-shot rule was only marginally superior to the Snap-shot rule. Both rules lose usefulness as CD4 counts approach the treatment threshold. Conclusions/Significance: Frequent CD4 count monitoring of patients with CD4 counts well above the threshold for initiating therapy is unlikely to identify patients who require therapy. It appears sufficient to measure the CD4 cell count 1 year after a count > 650 for a threshold of 200, > 900 for 350, or > 1150 for 500 (all × 10⁶/L), respectively. When CD4 counts fall below these limits, increased monitoring frequency becomes advisable. These rules offer guidance for efficient CD4 monitoring, particularly in resource-limited settings.
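The single-sample ("Snap-shot") thresholds quoted in the conclusions translate directly into a simple lookup; a minimal sketch, using only the limits stated in the abstract:

```python
# Sketch of the "Snap-shot rule" thresholds quoted in the conclusions: after a
# single CD4 count above the stated limit, the next measurement can wait about
# one year; below it, more frequent monitoring is advisable. Units: x10^6 cells/L.
SNAP_SHOT_LIMITS = {200: 650, 350: 900, 500: 1150}  # treatment threshold -> CD4 limit

def next_cd4_in_one_year(current_cd4: int, treatment_threshold: int) -> bool:
    """True if a 1-year monitoring interval is supported for this single count."""
    return current_cd4 > SNAP_SHOT_LIMITS[treatment_threshold]

print(next_cd4_in_one_year(980, 350))   # True: well above the 900 limit
print(next_cd4_in_one_year(700, 350))   # False: closer to threshold, monitor more often
```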
Abstract:
PURPOSE: To develop a score predicting the risk of adverse events (AEs) in pediatric patients with cancer who experience fever and neutropenia (FN) and to evaluate its performance. PATIENTS AND METHODS: Pediatric patients with cancer presenting with FN induced by nonmyeloablative chemotherapy were observed in a prospective multicenter study. A score predicting the risk of future AEs (ie, serious medical complication, microbiologically defined infection, radiologically confirmed pneumonia) was developed from a multivariate mixed logistic regression model. Its cross-validated predictive performance was compared with that of published risk prediction rules. RESULTS: An AE was reported in 122 (29%) of 423 FN episodes. In 57 episodes (13%), the first AE was known only after reassessment after 8 to 24 hours of inpatient management. Predicting AEs at reassessment was better than prediction at presentation with FN. A differential leukocyte count did not increase the predictive performance. The score predicting future AEs in 358 episodes without a known AE at reassessment used the following four variables: preceding chemotherapy more intensive than acute lymphoblastic leukemia maintenance (weight = 4), hemoglobin ≥ 90 g/L (weight = 5), leukocyte count less than 0.3 G/L (weight = 3), and platelet count less than 50 G/L (weight = 3). A score (sum of weights) ≥ 9 predicted future AEs. The cross-validated performance of this score exceeded the performance of published risk prediction rules. At an overall sensitivity of 92%, 35% of the episodes were classified as low risk, with a specificity of 45% and a negative predictive value of 93%. CONCLUSION: This score, based on four routinely accessible characteristics, accurately identifies pediatric patients with cancer and FN who are at risk for AEs after reassessment.
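Because the abstract spells out the four variables, their weights, and the cut-off, the score can be written down directly; a small sketch (the variable names are my own):

```python
# The score described in the abstract, written out for clarity: four routinely
# available variables, each contributing its stated weight; a total >= 9
# predicts a future adverse event at reassessment after 8-24 hours.
def fn_reassessment_score(intensive_chemo: bool, hemoglobin_g_per_l: float,
                          leukocytes_g_per_l: float, platelets_g_per_l: float) -> int:
    score = 0
    if intensive_chemo:                 # chemo more intensive than ALL maintenance
        score += 4
    if hemoglobin_g_per_l >= 90:
        score += 5
    if leukocytes_g_per_l < 0.3:
        score += 3
    if platelets_g_per_l < 50:
        score += 3
    return score

score = fn_reassessment_score(True, 95, 0.2, 40)
print(score, "-> high risk" if score >= 9 else "-> low risk")   # 15 -> high risk
```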
Abstract:
Introduction: The original and modified Wells scores are widely used prediction rules for pre-test probability assessment of deep vein thrombosis (DVT). The objective of this study was to compare the predictive performance of both Wells scores in unselected patients with clinical suspicion of DVT. Methods: Consecutive inpatients and outpatients with a clinical suspicion of DVT were prospectively enrolled. Pre-test DVT probability (low/intermediate/high) was determined using both scores. Patients with a non-high probability based on the original Wells score underwent D-dimer measurement. Patients with D-dimer < 500 µg/L did not undergo further testing, and treatment was withheld. All others underwent complete lower limb compression ultrasound, and those diagnosed with DVT were anticoagulated. The primary study outcome was objectively confirmed symptomatic venous thromboembolism within 3 months of enrollment. Results: 298 patients with suspected DVT were included. Of these, 82 (27.5%) had DVT, and 46 of these were proximal. Compared with the modified score, the original Wells score classified a higher proportion of patients as low risk (53 vs 48%; p<0.01) and a lower proportion as high risk (17 vs 15%; p=0.02); the prevalence of proximal DVT in each category was similar with both scores (7-8% low, 16-19% intermediate, 36-37% high). The area under the receiver operating characteristic curve for detecting proximal DVT was similar for both scores, but both performed poorly in predicting isolated distal DVT and DVT in inpatients. Conclusion: The study demonstrates that both Wells scores perform equally well in predicting the pre-test probability of proximal DVT. Neither score appears to be particularly useful in hospitalized patients or in those with isolated distal DVT.
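The diagnostic pathway applied in the study reduces to a short decision rule; a minimal sketch of that triage logic (function and argument names are my own):

```python
# Sketch of the diagnostic pathway described in the abstract: patients with a
# non-high pre-test probability (original Wells score) and D-dimer < 500 ug/L
# had testing stopped and anticoagulation withheld; all others went on to
# whole-leg compression ultrasound.
def dvt_workup(wells_category: str, d_dimer_ug_per_l: float | None) -> str:
    non_high = wells_category != "high"
    if non_high and d_dimer_ug_per_l is not None and d_dimer_ug_per_l < 500:
        return "no further testing, withhold anticoagulation"
    return "compression ultrasound of the lower limbs"

print(dvt_workup("low", 320))            # ruled out without imaging
print(dvt_workup("intermediate", 800))   # positive D-dimer: proceed to ultrasound
print(dvt_workup("high", None))          # high probability: straight to ultrasound
```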
Abstract:
Invasive candidiasis, including candidemia and deep-seated Candida infections, is a severe opportunistic infection with an overall mortality in ICU patients comparable to that of severe sepsis/septic shock. With an incidence ranging from 5 to 10 cases per 1000 ICU admissions, invasive candidiasis represents 5-10% of all ICU-acquired infections. Although a high proportion of critically ill patients is colonised with Candida spp., only 5-40% develop an invasive infection. The occurrence of this complication is difficult to predict, and early diagnosis remains a major challenge. Indeed, blood cultures are positive in a minority of cases and often late in the course of infection. New non-culture based laboratory techniques may contribute to early diagnosis and management of invasive candidiasis. Recent data suggest that prediction rules based on risk factors, clinical and microbiological parameters, or monitoring of Candida colonisation may efficiently identify critically ill patients at high risk of invasive candidiasis who may benefit from preventive or pre-emptive antifungal therapy. In many cancer centres, exposure to azole antifungals has been associated with an epidemiological shift from Candida albicans to non-albicans Candida species with reduced antifungal susceptibility or intrinsic resistance. This trend has not been observed in recent surveys of candidemia in non-immunocompromised ICU patients. Prophylaxis and pre-emptive or empirical antifungal treatment are possible approaches for the prevention or early management of invasive candidiasis. However, the selection of high-risk patients remains critical for efficient management aimed at reducing the number needed to treat, thereby avoiding unnecessary treatments associated with the emergence of resistance, drug toxicity and costs.
Abstract:
Invasive candidiasis ranges from 5 to 10 cases per 1,000 ICU admissions and represents 5% to 10% of all ICU-acquired infections, with an overall mortality comparable to that of severe sepsis/septic shock. The large majority of cases are due to Candida albicans, but the proportion of strains with decreased sensitivity or resistance to fluconazole is increasingly reported. A high proportion of ICU patients become colonized, but only 5% to 30% of them develop an invasive infection. Progressive colonization and major abdominal surgery are common risk factors, but invasive candidiasis is difficult to predict and early diagnosis remains a major challenge. Indeed, blood cultures are positive in a minority of cases and often late in the course of infection. New nonculture-based laboratory techniques may contribute to early diagnosis and management of invasive candidiasis. Both serologic markers (mannan, antimannan, and betaglucan) and molecular assays (Candida-specific PCR in blood and serum) have been applied as serial screening procedures in high-risk patients. However, although reasonably sensitive and specific, these techniques are largely investigational and their clinical usefulness remains to be established. Identification of patients likely to benefit from empirical antifungal treatment remains challenging, but it is mandatory in order to avoid antifungal overuse in critically ill patients. Growing evidence suggests that monitoring the dynamics of Candida colonization in surgical patients, together with prediction rules based on combined risk factors, may be used to identify ICU patients at high risk of invasive candidiasis who are likely to benefit from prophylaxis or preemptive antifungal treatment.