870 results for positive predictive values


Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND Arthroscopy is considered the gold standard for the diagnosis of traumatic intra-articular knee lesions. However, recent developments in magnetic resonance imaging (MRI) offer good opportunities for the indirect assessment of the integrity of, and structural changes in, the knee articular cartilage. The aim of this study was to investigate whether cartilage-specific sequences on a 3-Tesla MRI provide an accurate assessment for the detection of cartilage defects. METHODS A 3-Tesla (3-T) MRI combined with three-dimensional double-echo steady-state (3D-DESS) cartilage-specific sequences was performed on 210 patients with knee pain prior to knee arthroscopy. Sensitivity, specificity, and positive and negative predictive values of MRI were calculated and correlated with the arthroscopic findings of cartilaginous lesions. Lesions were classified using the modified Outerbridge classification. RESULTS For the 210 patients (1260 cartilage surfaces: patella, trochlea, medial femoral condyle, medial tibia, lateral femoral condyle, lateral tibia) evaluated, the sensitivities, specificities, positive predictive values, and negative predictive values of 3-T MRI were 83.3, 99.8, 84.4, and 99.8%, respectively, for the detection of grade IV lesions; 74.1, 99.6, 85.2, and 99.3% for grade III lesions; 67.9, 99.2, 76.6, and 98.2% for grade II lesions; and 8.8, 99.5, 80, and 92% for grade I lesions. CONCLUSIONS For grade III and IV lesions, 3-T MRI combined with 3D-DESS cartilage-specific sequences is an accurate diagnostic tool. For grade II lesions, the technique demonstrates moderate sensitivity, while for grade I lesions, the sensitivity is too limited to provide a reliable diagnosis compared with knee arthroscopy.
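The four metrics quoted in this abstract all derive from a 2×2 table of MRI reads against the arthroscopic reference standard. A minimal sketch; the counts below are invented for illustration and are not the study's data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Return sensitivity, specificity, PPV, NPV as percentages."""
    sens = 100 * tp / (tp + fn)   # detected lesions / all true lesions
    spec = 100 * tn / (tn + fp)   # correct negatives / all lesion-free surfaces
    ppv  = 100 * tp / (tp + fp)   # true lesions among positive MRI reads
    npv  = 100 * tn / (tn + fn)   # truly lesion-free among negative MRI reads
    return sens, spec, ppv, npv

# Hypothetical counts over 1260 surfaces (not the published figures)
sens, spec, ppv, npv = diagnostic_metrics(tp=25, fp=5, fn=5, tn=1225)
print(f"{sens:.1f} {spec:.1f} {ppv:.1f} {npv:.1f}")
```

Note the asymmetry visible in the study's grade I results: with very few true lesions, specificity and NPV stay high while sensitivity collapses.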

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND A single non-invasive gene expression profiling (GEP) test (AlloMap®) is often used to determine whether a heart transplant recipient is at low risk of acute cellular rejection at the time of testing. In a randomized trial, use of the test (a GEP score from 0-40) was shown to be non-inferior to routine endomyocardial biopsy for surveillance after heart transplantation in selected low-risk patients with respect to clinical outcomes. Recently, it was suggested that the within-patient variability of consecutive GEP scores may independently predict future clinical events; however, further studies were recommended. Here we analyzed an independent patient population to determine the prognostic utility of within-patient variability of GEP scores in predicting future clinical events. METHODS We defined GEP score variability as the standard deviation of four GEP scores collected ≥315 days post-transplantation. Of the 737 patients from the Cardiac Allograft Rejection Gene Expression Observational (CARGO) II trial, 36 were assigned to the composite event group (death, re-transplantation or graft failure ≥315 days post-transplantation and within 3 years of the final GEP test) and 55 were assigned to the control group (non-event patients). In this case-control study, the performance of GEP score variability in predicting future events was evaluated by the area under the receiver operating characteristic curve (AUC ROC). The negative predictive values (NPV) and positive predictive values (PPV), including 95% confidence intervals (CI), of GEP score variability were calculated. RESULTS The estimated prevalence of events was 17%. Events occurred at a median of 391 (inter-quartile range 376) days after the final GEP test. The GEP variability AUC ROC for the prediction of a composite event was 0.72 (95% CI 0.6-0.8).
The NPV for a GEP score variability of 0.6 was 97% (95% CI 91.4-100.0); the PPV for a GEP score variability of 1.5 was 35.4% (95% CI 13.5-75.8). CONCLUSION In heart transplant recipients, GEP score variability may be used to predict the probability that a composite event will occur within 3 years after the last GEP score. TRIAL REGISTRATION Clinicaltrials.gov identifier NCT00761787.
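The variability measure described above can be sketched as the standard deviation of a patient's four GEP scores. The abstract does not state whether the sample or population form was used, so the sample form below is an assumption, and the scores are invented:

```python
import statistics

def gep_variability(scores):
    """Within-patient variability: standard deviation of four GEP scores
    collected >= 315 days post-transplantation (sample SD assumed)."""
    assert len(scores) == 4, "variability is defined over four GEP scores"
    return statistics.stdev(scores)

stable   = gep_variability([30, 30, 31, 30])   # hypothetical stable patient
variable = gep_variability([25, 32, 28, 35])   # hypothetical variable patient
print(round(stable, 2), round(variable, 2))
```

Against the thresholds quoted in the abstract, the first patient would fall near the 0.6 (high-NPV) level and the second well above the 1.5 (PPV) level.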

Relevance:

100.00%

Publisher:

Abstract:

AIMS A non-invasive gene-expression profiling (GEP) test for rejection surveillance of heart transplant recipients originated in the USA. A European-based study, the Cardiac Allograft Rejection Gene Expression Observational II Study (CARGO II), was conducted to further validate the clinical performance of the GEP test. METHODS AND RESULTS Blood samples for GEP testing (AlloMap®, CareDx, Brisbane, CA, USA) were collected during post-transplant surveillance. The reference standard for rejection status was based on histopathology grading of tissue from endomyocardial biopsy. The area under the receiver operating characteristic curve (AUC-ROC) and the negative (NPVs) and positive predictive values (PPVs) for the GEP scores (range 0-39) were computed. Considering a GEP score of 34 as the cut-off (>6 months post-transplantation), 95.5% (381/399) of GEP tests were true negatives, 4.5% (18/399) were false negatives, 10.2% (6/59) were true positives, and 89.8% (53/59) were false positives. Based on 938 paired biopsies, the GEP test score AUC-ROC for distinguishing ≥3A rejection was 0.70 and 0.69 for ≥2-6 and >6 months post-transplantation, respectively. Depending on the chosen threshold score, the NPV and PPV ranged from 98.1 to 100% and from 2.0 to 4.7%, respectively. CONCLUSION For ≥2-6 and >6 months post-transplantation, CARGO II GEP score performance (AUC-ROC = 0.70 and 0.69) is similar to the CARGO study results (AUC-ROC = 0.71 and 0.67). The low prevalence of ACR contributes to the high NPV and limited PPV of GEP testing. The choice of threshold score for practical use of GEP testing should consider the overall clinical assessment of the patient's baseline risk for rejection.
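The conclusion's point that low rejection prevalence drives the high NPV and limited PPV follows directly from Bayes' rule. A sketch with illustrative test characteristics (not the CARGO II estimates):

```python
def ppv_npv(sensitivity, specificity, prevalence):
    """Predictive values of a test as a function of disease prevalence."""
    tp = sensitivity * prevalence              # true-positive fraction
    fp = (1 - specificity) * (1 - prevalence)  # false-positive fraction
    fn = (1 - sensitivity) * prevalence        # false-negative fraction
    tn = specificity * (1 - prevalence)        # true-negative fraction
    return tp / (tp + fp), tn / (tn + fn)

# Same hypothetical test, two prevalence settings
ppv_rare,   npv_rare   = ppv_npv(0.70, 0.70, 0.02)  # rejection rare
ppv_common, npv_common = ppv_npv(0.70, 0.70, 0.30)  # rejection common
print(f"{ppv_rare:.3f} {npv_rare:.3f} {ppv_common:.3f} {npv_common:.3f}")
```

With identical sensitivity and specificity, the rare-event setting yields a PPV under 5% but an NPV above 99%, mirroring the pattern reported for GEP testing.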

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND HIV-1 RNA viral load (VL) testing is recommended to monitor antiretroviral therapy (ART) but is not available in many resource-limited settings. We developed and validated CD4-based risk charts to guide targeted VL testing. METHODS We modeled the probability of virologic failure up to 5 years of ART based on current and baseline CD4 counts, developed decision rules for targeted VL testing of 10%, 20%, or 40% of patients in 7 cohorts of patients starting ART in South Africa, and plotted cutoffs for VL testing on colour-coded risk charts. We assessed the accuracy of risk chart-guided VL testing for detecting virologic failure in validation cohorts from South Africa, Zambia, and the Asia-Pacific. RESULTS In total, 31,450 adult patients were included in the derivation cohorts and 25,294 patients in the validation cohorts. Positive predictive values increased with the percentage of patients tested: from 79% (10% tested) to 98% (40% tested) in the South African cohort, from 64% to 93% in the Zambian cohort, and from 73% to 96% in the Asia-Pacific cohort. Corresponding increases in sensitivity were from 35% to 68% in South Africa, from 55% to 82% in Zambia, and from 37% to 71% in the Asia-Pacific. The area under the receiver operating characteristic curve increased from 0.75 to 0.91 in South Africa, from 0.76 to 0.91 in Zambia, and from 0.77 to 0.92 in the Asia-Pacific. CONCLUSIONS CD4-based risk charts with optimal cutoffs for targeted VL testing may be useful for monitoring ART in settings where VL capacity is limited.
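The targeted-testing idea can be sketched as: rank patients by a modeled probability of virologic failure and send only the top fraction for VL testing. The risk function below is a made-up monotone score in current and baseline CD4, not the published model; patient data are invented:

```python
def failure_risk(current_cd4, baseline_cd4):
    """Hypothetical risk score: lower current CD4, and little gain over
    baseline, mean higher assumed risk of virologic failure."""
    cd4_gain = max(current_cd4 - baseline_cd4, 0)
    return 1.0 / (1 + current_cd4 / 100.0) + 0.5 / (1 + cd4_gain / 100.0)

def select_for_vl_testing(patients, fraction):
    """patients: list of (patient_id, current_cd4, baseline_cd4).
    Returns ids of the top `fraction` of patients by modeled risk."""
    ranked = sorted(patients, key=lambda p: failure_risk(p[1], p[2]), reverse=True)
    n_test = max(1, round(len(ranked) * fraction))
    return [p[0] for p in ranked[:n_test]]

cohort = [("A", 650, 200), ("B", 180, 150), ("C", 90, 80),
          ("D", 400, 100), ("E", 220, 210)]
print(select_for_vl_testing(cohort, 0.40))  # test 40% of patients
```

As in the study, raising the tested fraction can only add lower-ranked (lower-risk) patients, which is why sensitivity rises with the percentage tested.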

Relevance:

100.00%

Publisher:

Abstract:

The use of exercise electrocardiography (ECG) to detect latent coronary heart disease (CHD) is discouraged in apparently healthy populations because of low sensitivity. These recommendations, however, are based on the efficacy of evaluating ischemia (ST segment changes), with little regard for other measures of cardiac function that are available during exertion. The purpose of this investigation was to determine the association of maximal exercise hemodynamic responses with risk of mortality due to all causes, cardiovascular disease (CVD), and coronary heart disease (CHD) in apparently healthy individuals. Study participants were 20,387 men (mean age = 42.2 years) and 6,234 women (mean age = 41.9 years), all patients of a preventive medicine center in Dallas, TX, examined between 1971 and 1989. During an average of 8.1 years of follow-up, there were 348 deaths in men and 66 deaths in women. In men, age-adjusted all-cause death rates (per 10,000 person-years) across quartiles of maximal systolic blood pressure (SBP) (low to high) were: 18.2, 16.2, 23.8, and 24.6 (p for trend <0.001). Corresponding rates for maximal heart rate were: 28.9, 15.9, 18.4, and 15.1 (p for trend <0.001). After adjustment for confounding variables including age, resting systolic pressure, serum cholesterol and glucose, body mass index, smoking status, physical fitness, and family history of CVD, risks (with 95% confidence intervals (CI)) of all-cause mortality for quartiles of maximal SBP, relative to the lowest quartile, were: 0.96 (0.70-1.33), 1.36 (1.01-1.85), and 1.37 (0.98-1.92) for quartiles 2-4, respectively. Corresponding risks for maximal heart rate were: 0.61 (0.44-0.85), 0.69 (0.51-0.93), and 0.60 (0.41-0.87). No associations were noted between the maximal exercise rate-pressure product and mortality. Similar results were seen for risk of CVD and CHD death. In women, similar trends in age-adjusted all-cause and CVD death rates across maximal SBP and heart rate categories were observed.
The sensitivity of the exercise test in predicting mortality was enhanced when ECG results were evaluated together with maximal exercise SBP or heart rate, with a concomitant decrease in specificity. Positive predictive values were not improved. The efficacy of the exercise test in predicting mortality in apparently healthy men and women was not enhanced by using maximal exercise hemodynamic responses. These results suggest that an exaggerated systolic blood pressure response or an attenuated heart rate response to maximal exercise is a risk factor for mortality in apparently healthy individuals.

Relevance:

100.00%

Publisher:

Abstract:

Nutrient intake and specific food item data from 24-hour dietary recalls were utilized to study the relationship between measures of diet diversity and dietary adequacy in a population of white females of child-bearing age and in socioeconomic subgroups of that population. As the basis of the diet diversity measures, twelve food groups were constructed from the 24-hour recall data, and the number of unique foods per food group was counted and weighted according to specified weighting schemes. Utilizing these food groups, nine diet diversity indices were developed. Sensitivity/specificity analysis was used to determine the ability of varying levels of selected diet diversity indices to identify individuals above and below preselected intakes of different nutrients. The true prevalence proportions, sensitivity and specificity, false positive and false negative rates, and positive predictive values observed at the selected levels of diet diversity indices were investigated in relation to the objectives and resources of a variety of nutrition improvement programs. Diet diversity indices constructed from the total population data were also evaluated as screening tools for respondent nutrient intakes in each of the socioeconomic subgroups. The results of the sensitivity/specificity analysis demonstrated that the false positive rate, the false negative rate, or both were too high at each diversity cut-off level to validate the widespread use of any of the diversity indices in the dietary assessment of the study population. Although diet diversity has been shown to be highly correlated with the intakes of a number of nutrients, the diet diversity indices constructed in this study did not adequately represent nutrient intakes as reported in the 24-hour dietary recall. Specific cut-off levels of selected diversity indices might have limited application in some nutrition programs.
These results applied to the sensitivity/specificity analyses in the socioeconomic subgroups as well as in the total population.
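A diversity index of the kind described — unique foods counted within food groups and weighted — can be sketched as follows. The groups and weights here are invented placeholders, not the study's twelve groups or its weighting schemes:

```python
def diversity_index(recall_foods, food_groups, weights):
    """recall_foods: foods named in a 24-hour recall (may repeat);
    food_groups: food -> group; weights: group -> weight.
    Returns the weighted count of unique foods per group."""
    unique_by_group = {}
    for food in set(recall_foods):          # repeats of a food count once
        group = food_groups.get(food)
        if group is not None:               # foods outside the scheme ignored
            unique_by_group[group] = unique_by_group.get(group, 0) + 1
    return sum(weights.get(g, 1.0) * n for g, n in unique_by_group.items())

groups  = {"milk": "dairy", "cheese": "dairy", "apple": "fruit", "bread": "grain"}
weights = {"dairy": 1.0, "fruit": 1.5, "grain": 0.5}
print(diversity_index(["milk", "milk", "cheese", "apple", "bread"], groups, weights))
```

Screening then reduces to comparing this score against a cut-off, which is what the sensitivity/specificity analysis in the abstract evaluates.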

Relevance:

100.00%

Publisher:

Abstract:

Brachial plexus injury is considered the most severe nerve injury of the extremities. The main cause is high-energy trauma, especially accidents involving motor vehicles, which is why traumatic brachial plexus injuries are increasingly frequent. The present study evaluated the accuracy of magnetic resonance imaging (MRI) in the diagnosis of traumatic brachial plexus injuries in adults, using the intraoperative findings as the gold standard. The accuracy of diffusion-weighted neurography (DW neurography) relative to conventional MRI was also evaluated, as was the ability to differentiate the three types of injury: avulsion, rupture, and lesion in continuity. Thirty-three patients with a history and clinical diagnosis of traumatic brachial plexus injury were prospectively studied by MRI. The findings obtained by MRI with and without DW neurography, and the clinical examination findings, were compared with the intraoperative findings. Statistical analysis used a 5% significance level. A high correlation was observed between MRI with DW neurography and surgery (rs = 0.79), and a low correlation between conventional MRI and surgery (rs = 0.41). Interobserver correlation was higher for MRI with DW neurography (rs = 0.94) than for MRI without DW neurography (rs = 0.75). Sensitivity, accuracy, and positive predictive value were above 95% for MRI both with and without DW neurography in the assessment of the whole plexus. Specificities were, in general, higher for DW neurography (p < 0.05). Regarding differentiation of the injury types, MRI with DW neurography showed high accuracy and sensitivity in the diagnosis of avulsion/rupture, and high specificity in the diagnosis of lesion in continuity. The accuracy of MRI (93.9%) was significantly higher than that of clinical examination (76.5%) in the diagnosis of injuries of the whole brachial plexus (p < 0.05).

Relevance:

100.00%

Publisher:

Abstract:

Objective: The description and evaluation of the performance of a new real-time seizure detection algorithm in the newborn infant. Methods: The algorithm includes parallel fragmentation of the EEG signal into waves; wave-feature extraction and averaging; and elementary, preliminary, and final detection. The algorithm detects EEG waves with heightened regularity, using wave intervals, amplitudes, and shapes. The performance of the algorithm was assessed using event-based and liberal and conservative time-based approaches, and compared with the performance of Gotman's and Liu's algorithms. Results: The algorithm was assessed on multi-channel EEG records of 55 neonates, including 17 with seizures. The algorithm showed sensitivities ranging from 83% to 95% with positive predictive values (PPV) of 48-77%, and 2.0 false positive detections per hour. In comparison, Gotman's algorithm (with a 30 s gap-closing procedure) displayed sensitivities of 45-88% and PPV of 29-56%, with 7.4 false positives per hour; Liu's algorithm displayed sensitivities of 96-99% and PPV of 10-25%, with 15.7 false positives per hour. Conclusions: The wave-sequence-analysis-based algorithm displayed higher sensitivity, higher PPV, and a substantially lower level of false positives than the two previously published algorithms. Significance: The proposed algorithm provides a basis for major improvements in neonatal seizure detection and monitoring. Published by Elsevier Ireland Ltd. on behalf of the International Federation of Clinical Neurophysiology.
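The three performance figures quoted for each algorithm (sensitivity, PPV, false positives per hour) can be sketched from event counts. The counts below are hypothetical, chosen only to illustrate the arithmetic:

```python
def detection_performance(true_events, detected_true, false_detections, record_hours):
    """Event-based detector performance.
    true_events: seizures present; detected_true: seizures the detector found;
    false_detections: alarms with no seizure; record_hours: total EEG duration."""
    sensitivity = 100 * detected_true / true_events
    ppv = 100 * detected_true / (detected_true + false_detections)
    fp_per_hour = false_detections / record_hours
    return sensitivity, ppv, fp_per_hour

# Hypothetical record: 50 seizures, 44 detected, 24 false alarms over 12 h
sens, ppv, fph = detection_performance(50, 44, 24, 12)
print(f"sensitivity {sens:.0f}%, PPV {ppv:.1f}%, {fph:.1f} FP/h")
```

The trade-off in the abstract is visible in this form: Liu's algorithm buys near-perfect sensitivity by tolerating many more false detections, which drives its PPV down.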

Relevance:

100.00%

Publisher:

Abstract:

Background. Vulvovaginal candidiasis is characterized by curd-like vaginal discharge and itching, and is associated with considerable health and economic costs. Materials and Methods. We examined the incidence, prevalence, and risk factors for vulvovaginal candidiasis among a cohort of 898 women in south India. Participants completed three study visits over six months, each comprising a structured interview and a pelvic examination. Results. The positive predictive values for diagnosis of vulvovaginal candidiasis using individual signs or symptoms were low (<19%). We did not find strong evidence for associations between sociodemographic characteristics and the prevalence of vulvovaginal candidiasis. Women clinically diagnosed with bacterial vaginosis had a higher prevalence of vulvovaginal candidiasis (prevalence 12%, 95% CI 8.2, 15.8) compared to women assessed to be negative for bacterial vaginosis (prevalence 6.5%, 95% CI 5.3, 7.6); however, differences in the prevalence of vulvovaginal candidiasis were not observed by the presence or absence of laboratory-confirmed bacterial vaginosis. Conclusions. For correct diagnosis of vulvovaginal candidiasis, laboratory confirmation of infection with Candida is necessary, as well as assessment of whether the discharge has been caused by bacterial vaginosis. Studies of women infected with Candida yeast species are needed to determine the risk factors for yeast overgrowth.

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVE: Intravoxel incoherent motion (IVIM) is an MRI technique with potential applications in measuring brain tumor perfusion, but its clinical impact remains to be determined. We assessed the usefulness of IVIM metrics in predicting survival in newly diagnosed glioblastoma. METHODS: Fifteen patients with glioblastoma underwent MRI, including spin-echo echo-planar DWI using 13 b-values ranging from 0 to 1000 s/mm². Parametric maps for the diffusion coefficient (D), pseudodiffusion coefficient (D*), and perfusion fraction (f) were generated for contrast-enhancing regions (CER) and non-enhancing regions (NCER). Regions of interest were manually drawn in regions of maximum f and on the corresponding dynamic susceptibility contrast images. Prognostic factors were evaluated by Kaplan-Meier survival and Cox proportional hazards analyses. RESULTS: We found that fCER and D*CER correlated with rCBFCER. The best cutoffs for 6-month survival were fCER > 9.86% and D*CER > 21.712 ×10⁻³ mm²/s (100% sensitivity, 71.4% specificity, 100% and 80% positive predictive values, and 80% and 100% negative predictive values; AUC: 0.893 and 0.857, respectively). Treatment yielded the highest hazard ratio (5.484; 95% CI: 1.162-25.88; AUC: 0.723; P = 0.031); fCER combined with treatment predicted survival with 100% accuracy. CONCLUSIONS: The IVIM metrics fCER and D*CER are promising biomarkers of 6-month survival in newly diagnosed glioblastoma.
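The three IVIM metrics (D, D*, f) parameterize the standard biexponential signal model, S(b) = S0·(f·exp(−b·D*) + (1−f)·exp(−b·D)), which is what fitting across multiple b-values estimates. A sketch of the forward model with parameter values in a plausible tissue range (illustrative, not the study's estimates):

```python
import math

def ivim_signal(b, s0, f, d, d_star):
    """Biexponential IVIM model: perfusion (fast, D*) plus diffusion (slow, D).
    b in s/mm^2; d and d_star in mm^2/s."""
    return s0 * (f * math.exp(-b * d_star) + (1 - f) * math.exp(-b * d))

b_values = [0, 50, 200, 1000]  # s/mm^2, a subset of the 13 used in the study
curve = [ivim_signal(b, s0=1.0, f=0.10, d=0.8e-3, d_star=22e-3) for b in b_values]
print([round(s, 3) for s in curve])
```

The pseudodiffusion term decays quickly at low b-values, which is why a multi-b acquisition down to b = 0 is needed to separate f and D* from D.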

Relevance:

100.00%

Publisher:

Abstract:

The value of cerebrospinal fluid (CSF) lactate level and CSF/blood glucose ratio for the identification of bacterial meningitis following neurosurgery was assessed in a retrospective study. During a 3-year period, 73 patients fulfilled the inclusion criteria and could be grouped by preset criteria in one of three categories: proven bacterial meningitis (n = 12), presumed bacterial meningitis (n = 14), and nonbacterial meningeal syndrome (n = 47). Of 73 patients analyzed, 45% were treated with antibiotics and 33% with steroids at the time of first lumbar puncture. CSF lactate values (cutoff, 4 mmol/L), in comparison with CSF/blood glucose ratios (cutoff, 0.4), were associated with higher sensitivity (0.88 vs. 0.77), specificity (0.98 vs. 0.87), and positive (0.96 vs. 0.77) and negative (0.94 vs. 0.87) predictive values. In conclusion, determination of the CSF lactate value is a quick, sensitive, and specific test to identify patients with bacterial meningitis after neurosurgery.
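The cutoff-based evaluation described above reduces to dichotomizing each CSF lactate value at 4 mmol/L and tallying the result against the adjudicated diagnosis. A sketch with invented measurements (not the study's 73 patients):

```python
def classify_by_cutoff(values, labels, cutoff):
    """Dichotomize a continuous marker at `cutoff` and tally the 2x2 table
    against boolean reference labels. Returns (tp, fp, fn, tn)."""
    tp = sum(v >= cutoff and y for v, y in zip(values, labels))
    fp = sum(v >= cutoff and not y for v, y in zip(values, labels))
    fn = sum(v < cutoff and y for v, y in zip(values, labels))
    tn = sum(v < cutoff and not y for v, y in zip(values, labels))
    return tp, fp, fn, tn

lactate   = [6.2, 3.1, 8.0, 2.2, 4.5, 1.9, 5.0, 2.8]  # mmol/L, hypothetical
bacterial = [True, True, True, False, False, False, True, False]
print(classify_by_cutoff(lactate, bacterial, cutoff=4.0))
```

From the resulting table, sensitivity, specificity, and the predictive values follow in the usual way; repeating this over a grid of cutoffs is how 4 mmol/L would be compared against alternatives.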

Relevance:

90.00%

Publisher:

Abstract:

Objective: To compare the effectiveness of the STRATIFY falls tool with nurses' clinical judgments in predicting patient falls. Study Design and Setting: A prospective cohort study was conducted among the inpatients of an acute tertiary hospital. Participants were patients over 65 years of age admitted to any hospital unit. Sensitivity, specificity, and positive predictive values (PPV) and negative predictive values (NPV) of the instrument and of nurses' clinical judgments in predicting falls were calculated. Results: Seven hundred and eighty-eight patients were screened and followed up during the study period. The fall prevalence was 9.2%. Of the 335 patients classified as being "at risk" of falling using the STRATIFY tool, 59 (17.6%) did sustain a fall (sensitivity = 0.82, specificity = 0.61, PPV = 0.18, NPV = 0.97). Nurses judged that 501 patients were at risk of falling and, of these, 60 (12.0%) fell (sensitivity = 0.84, specificity = 0.38, PPV = 0.12, NPV = 0.96). The STRATIFY tool correctly identified significantly more patients as either fallers or nonfallers than the nurses (P = 0.027). Conclusion: Considering the poor specificity and high rates of false-positive results for both the STRATIFY tool and nurses' clinical judgments, we conclude that neither of these approaches is useful for falls screening in acute hospital settings.
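The STRATIFY metrics can be cross-checked from the counts in the abstract. One assumption is needed: a total of 72 fallers, which is consistent with the reported sensitivity (59/72 ≈ 0.82) and the 9.2% prevalence among 788 patients:

```python
def screening_metrics(flagged, fell_flagged, total, total_fallers):
    """Derive sensitivity, specificity, PPV, NPV from screening counts."""
    not_flagged = total - flagged
    missed_fallers = total_fallers - fell_flagged      # fallers not flagged
    true_negatives = not_flagged - missed_fallers      # non-fallers not flagged
    sensitivity = fell_flagged / total_fallers
    specificity = true_negatives / (total - total_fallers)
    ppv = fell_flagged / flagged
    npv = true_negatives / not_flagged
    return sensitivity, specificity, ppv, npv

# STRATIFY arm: 335 flagged "at risk", 59 of whom fell; 72 fallers assumed
metrics = screening_metrics(flagged=335, fell_flagged=59, total=788, total_fallers=72)
print([round(m, 2) for m in metrics])
```

Under that assumption, the derived values round to the published 0.82 / 0.61 / 0.18 / 0.97, so the four figures are mutually consistent.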

Relevance:

90.00%

Publisher:

Abstract:

Background The Achenbach child behaviour checklist (CBCL/YSR) is a widely used screening tool for affective problems. Several studies report good association between the checklists and psychiatric diagnoses, although with varying degrees of agreement. Most are cross-sectional studies involving adolescents referred to mental health services. This paper aims to evaluate the performance of the youth self report (YSR) empirical and DSM-oriented internalising scales in predicting later depressive disorders in young adults. Methods The sample was 2,431 young adults from an Australian birth cohort study. The strength of association between the empirical and DSM-oriented scales assessed at 14 and 21 years and structured-interview-derived depression in young adulthood (18 to 22 years) was tested using odds ratios, ROC analyses, and related diagnostic efficiency tests (sensitivity, specificity, positive and negative predictive values). Results Adolescents with internalising symptoms were twice (OR 2.3, 95% CI 1.7 to 3.1) as likely to be diagnosed with DSM-IV depression by age 21. Use of the DSM-oriented depressive scales did not improve the concordance between internalising behaviour and DSM-IV diagnosed depression at age 14 (ORs ranged from 1.9 to 2.5). Limitations There was some loss to follow-up over the 7-year gap between the two assessment waves. Conclusion DSM-oriented scales perform no better than the standard internalising or anxious/depressed scales in identifying young adults with later DSM-IV depressive disorder.

Relevance:

90.00%

Publisher:

Abstract:

Background The Achenbach problem behaviour scales (CBCL/YSR) are widely used. The DSM-oriented anxiety and depression scales were created to improve concordance between Achenbach's internalising scales and DSM-IV depression and anxiety. To date, no study has examined the concurrent utility of the young adult (YASR) internalising scales, either the empirical or the newly developed DSM-oriented depressive and anxiety scales. Methods A sample of 2,551 young adults, aged 18-23 years, from an Australian cohort study. The associations of the empirical and DSM-oriented anxiety and depression scales were individually assessed against DSM-IV depression and anxiety diagnoses derived from structured interview. Odds ratios, ROC analyses, and diagnostic efficiency tests (sensitivity, specificity, positive and negative predictive values) were used to report findings. Results The YASR empirical internalising scale predicted DSM-IV mood disorders (depression OR = 6.9, 95% CI 5.0-9.5; anxiety OR = 5.1, 95% CI 3.8-6.7) in the previous 12 months. The DSM-oriented depressive and anxiety scales did not appear to improve the concordance with DSM-IV diagnosed depression or anxiety. The internalising scales were much more effective at identifying those with comorbid depression and anxiety, with ORs between 10.1 and 21.7 depending on the internalising scale used. Conclusion DSM-oriented scales perform no better than the standard internalising scales in identifying young adults with DSM-IV mood or anxiety disorder.
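The odds ratios with 95% confidence intervals reported in these two abstracts are conventionally computed from a 2×2 table with a Wald interval on the log scale. A sketch with hypothetical counts (not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.
    Returns (OR, lower 95% limit, upper 95% limit) via the Wald method."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

# Hypothetical: 60/200 screen-positives vs 90/2250 screen-negatives diagnosed
orr, lo, hi = odds_ratio_ci(a=60, b=140, c=90, d=2160)
print(f"OR = {orr:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```

The interval is symmetric on the log scale, which is why published CIs such as 5.0-9.5 around OR = 6.9 look asymmetric on the raw scale.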

Relevance:

90.00%

Publisher:

Abstract:

Background A reliable standardized diagnosis of pneumonia in children has long been difficult to achieve. Clinical and radiological criteria have been developed by the World Health Organization (WHO); however, their generalizability to different populations is uncertain. We evaluated WHO-defined chest radiograph (CXR) confirmed alveolar pneumonia in the clinical context in Central Australian Aboriginal children, a high-risk population, hospitalized with acute lower respiratory illness (ALRI). Methods CXRs in children (aged 1-60 months) hospitalized and treated with intravenous antibiotics for ALRI and enrolled in a randomized controlled trial (RCT) of vitamin A/zinc supplementation were matched with data collected during a population-based study of WHO-defined primary endpoint pneumonia (WHO-EPC). These CXRs were reread by a pediatric pulmonologist (PP) and classified as pneumonia-PP when alveolar changes were present. Sensitivities, specificities, and positive and negative predictive values (PPV, NPV) for clinical presentations were compared between WHO-EPC and pneumonia-PP. Results Of the 147 episodes of hospitalized ALRI, WHO-EPC was diagnosed significantly less often, in 40 episodes (27.2%), than pneumonia-PP (difference 20.4%, 95% CI 9.6-31.2, P < 0.001). Clinical signs on admission were poor predictors of both pneumonia-PP and WHO-EPC; the sensitivities of clinical signs ranged from a high of 45% for tachypnea to 5% for fever + tachypnea + chest-indrawing, with corresponding PPVs of 40% and 20%, respectively. Higher PPVs were observed against the pediatric pulmonologist's diagnosis than against WHO-EPC. Conclusions WHO-EPC underestimates alveolar consolidation in a clinical context. Its use in clinical practice, or in research designed to inform clinical management in this population, should be avoided. Pediatr Pulmonol. 2012; 47:386-392. (C) 2011 Wiley Periodicals, Inc.