40 results for Predictive values
Abstract:
BACKGROUND Cam-type femoroacetabular impingement (FAI) resulting from an abnormal nonspherical femoral head shape leads to chondrolabral damage and is considered a cause of early osteoarthritis. A previously developed experimental ovine FAI model induces a cam-type impingement that results in localized chondrolabral damage, replicating the patterns found in the human hip. Biochemical MRI modalities such as T2 and T2* may allow for evaluation of the cartilage biochemistry long before cartilage loss occurs and, for that reason, may be a worthwhile avenue of inquiry. QUESTIONS/PURPOSES We asked: (1) Does the histological grading of degenerated cartilage correlate with T2 or T2* values in this ovine FAI model? (2) How accurately can zones of degenerated cartilage be predicted with T2 or T2* MRI in this model? METHODS A cam-type FAI was induced in eight Swiss alpine sheep by performing a closing wedge intertrochanteric varus osteotomy. After 10 to 14 weeks of ambulation, the sheep were euthanized and a 3-T MRI of the hip was performed. T2 and T2* values were measured at six locations on the acetabulum and compared with the histological damage pattern using the Mankin score, an established histological scoring system for quantifying cartilage degeneration. Both T2 and T2* values are determined by cartilage water content and its collagen fiber network. Of the two, T2* mapping is a more modern sequence with technical advantages (eg, shorter acquisition time). Correlation of the Mankin score with the T2 and T2* values, respectively, was evaluated using Spearman's rank correlation coefficient. We used a hierarchical cluster analysis to calculate the positive and negative predictive values of T2 and T2* for predicting advanced cartilage degeneration (Mankin ≥ 3). RESULTS We found a negative correlation between the Mankin score and both the T2 (p < 0.001, r = -0.79) and T2* values (p < 0.001, r = -0.90).
For the T2 MRI technique, we found a positive predictive value of 100% (95% confidence interval [CI], 79%-100%) and a negative predictive value of 84% (95% CI, 67%-95%). For the T2* technique, we found a positive predictive value of 100% (95% CI, 79%-100%) and a negative predictive value of 94% (95% CI, 79%-99%). CONCLUSIONS T2 and T2* MRI modalities can reliably detect early cartilage degeneration in the experimental ovine FAI model. CLINICAL RELEVANCE T2 and T2* MRI modalities have the potential to allow for monitoring the natural course of osteoarthritis noninvasively and to evaluate the results of surgical treatments targeted to joint preservation.
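The statistics reported in this abstract reduce to two elementary computations: a rank correlation between histological grade and mapping values, and predictive values from a 2x2 classification. The sketch below uses only the Python standard library and invented illustrative data, not the study's measurements.

```python
# Sketch (not the study's code): Spearman rank correlation and
# positive/negative predictive values, standard library only.

def rank(values):
    """Average ranks (1-based); tied values share the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's r = Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def ppv_npv(predicted_positive, truly_positive):
    """Predictive values from paired boolean predictions and ground truth."""
    tp = sum(p and t for p, t in zip(predicted_positive, truly_positive))
    fp = sum(p and not t for p, t in zip(predicted_positive, truly_positive))
    tn = sum(not p and not t for p, t in zip(predicted_positive, truly_positive))
    fn = sum(not p and t for p, t in zip(predicted_positive, truly_positive))
    return tp / (tp + fp), tn / (tn + fn)

# Illustrative only: a higher Mankin score pairing with lower T2* values
# yields a strongly negative r, as the study reports.
mankin = [0, 1, 2, 3, 4, 5, 6]
t2star = [30, 28, 27, 22, 20, 18, 15]
r = spearman(mankin, t2star)
```

Because the toy data are perfectly monotone decreasing, `r` comes out at -1; real measurements with noise give intermediate values such as the reported -0.90.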
Abstract:
BACKGROUND A single non-invasive gene expression profiling (GEP) test (AlloMap®) is often used to discriminate if a heart transplant recipient is at a low risk of acute cellular rejection at time of testing. In a randomized trial, use of the test (a GEP score from 0-40) has been shown to be non-inferior to a routine endomyocardial biopsy for surveillance after heart transplantation in selected low-risk patients with respect to clinical outcomes. Recently, it was suggested that the within-patient variability of consecutive GEP scores may be used to independently predict future clinical events; however, future studies were recommended. Here we performed an analysis of an independent patient population to determine the prognostic utility of within-patient variability of GEP scores in predicting future clinical events. METHODS We defined the GEP score variability as the standard deviation of four GEP scores collected ≥315 days post-transplantation. Of the 737 patients from the Cardiac Allograft Rejection Gene Expression Observational (CARGO) II trial, 36 were assigned to the composite event group (death, re-transplantation or graft failure ≥315 days post-transplantation and within 3 years of the final GEP test) and 55 were assigned to the control group (non-event patients). In this case-control study, the performance of GEP score variability to predict future events was evaluated by the area under the receiver operating characteristic curve (AUC ROC). The negative predictive values (NPV) and positive predictive values (PPV) including 95 % confidence intervals (CI) of GEP score variability were calculated. RESULTS The estimated prevalence of events was 17 %. Events occurred at a median of 391 (inter-quartile range 376) days after the final GEP test. The GEP variability AUC ROC for the prediction of a composite event was 0.72 (95 % CI 0.6-0.8).
The NPV for GEP score variability of 0.6 was 97 % (95 % CI 91.4-100.0); the PPV for GEP score variability of 1.5 was 35.4 % (95 % CI 13.5-75.8). CONCLUSION In heart transplant recipients, GEP score variability may be used to predict the probability that a composite event will occur within 3 years after the last GEP score. TRIAL REGISTRATION Clinicaltrials.gov identifier NCT00761787.
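The variability metric described above is simply the standard deviation of a patient's four qualifying GEP scores, compared against a cutoff. A minimal sketch follows; whether a sample (n-1) or population (n) denominator was used is an assumption here (the sample SD is shown), and the scores are invented for illustration.

```python
# Sketch (assumed details, not the study's code): GEP score variability as
# the sample standard deviation of four scores, flagged against a cutoff.
import math

def gep_variability(scores):
    if len(scores) != 4:
        raise ValueError("variability is defined over four GEP scores")
    mean = sum(scores) / len(scores)
    # Sample SD (n - 1 denominator) is an assumption of this sketch.
    return math.sqrt(sum((s - mean) ** 2 for s in scores) / (len(scores) - 1))

def flag_high_risk(scores, cutoff=1.5):
    # 1.5 is the variability at which the abstract reports the PPV.
    return gep_variability(scores) >= cutoff

stable = [30, 30, 31, 30]    # low within-patient variability
variable = [25, 31, 28, 34]  # high within-patient variability
```

The design point is that the level of the scores is ignored; only their spread over time carries the prognostic signal.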
Abstract:
AIMS A non-invasive gene-expression profiling (GEP) test for rejection surveillance of heart transplant recipients originated in the USA. A European-based study, the Cardiac Allograft Rejection Gene Expression Observational II Study (CARGO II), was conducted to further clinically validate the GEP test performance. METHODS AND RESULTS Blood samples for GEP testing (AlloMap®, CareDx, Brisbane, CA, USA) were collected during post-transplant surveillance. The reference standard for rejection status was based on histopathology grading of tissue from endomyocardial biopsy. The area under the receiver operating characteristic curve (AUC-ROC), negative predictive values (NPVs), and positive predictive values (PPVs) for the GEP scores (range 0-39) were computed. Considering a GEP score of 34 as the cut-off (>6 months post-transplantation), 95.5% (381/399) of GEP tests were true negatives, 4.5% (18/399) were false negatives, 10.2% (6/59) were true positives, and 89.8% (53/59) were false positives. Based on 938 paired biopsies, the GEP test score AUC-ROC for distinguishing ≥3A rejection was 0.70 and 0.69 for ≥2-6 and >6 months post-transplantation, respectively. Depending on the chosen threshold score, the NPV and PPV range from 98.1 to 100% and 2.0 to 4.7%, respectively. CONCLUSION For ≥2-6 and >6 months post-transplantation, CARGO II GEP score performance (AUC-ROC = 0.70 and 0.69) is similar to the CARGO study results (AUC-ROC = 0.71 and 0.67). The low prevalence of acute cellular rejection (ACR) contributes to the high NPV and limited PPV of GEP testing. The choice of threshold score for practical use of GEP testing should consider the overall clinical assessment of the patient's baseline risk for rejection.
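The fractions quoted above for the score-34 cut-off are exactly the predictive values at that threshold: NPV = 381/(381+18) and PPV = 6/(6+53). A short arithmetic sketch:

```python
# Predictive-value arithmetic behind the figures quoted for the score-34
# cut-off (>6 months post-transplantation).
tn, fn = 381, 18   # tests below the cut-off: no rejection / rejection on biopsy
tp, fp = 6, 53     # tests at/above the cut-off: rejection / no rejection

npv = tn / (tn + fn)   # fraction of negative tests that are truly negative
ppv = tp / (tp + fp)   # fraction of positive tests that are truly positive
```

This recovers the 95.5% and 10.2% in the text, and illustrates the conclusion: with rejection this rare, almost all positive tests are false positives regardless of how good the score is.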
Abstract:
BACKGROUND HIV-1 RNA viral load (VL) testing is recommended to monitor antiretroviral therapy (ART) but is not available in many resource-limited settings. We developed and validated CD4-based risk charts to guide targeted VL testing. METHODS We modeled the probability of virologic failure up to 5 years of ART based on current and baseline CD4 counts, developed decision rules for targeted VL testing of 10%, 20%, or 40% of patients in 7 cohorts of patients starting ART in South Africa, and plotted cutoffs for VL testing on colour-coded risk charts. We assessed the accuracy of risk chart-guided VL testing to detect virologic failure in validation cohorts from South Africa, Zambia, and the Asia-Pacific. RESULTS In total, 31,450 adult patients were included in the derivation cohorts and 25,294 patients in the validation cohorts. Positive predictive values increased with the percentage of patients tested: from 79% (10% tested) to 98% (40% tested) in the South African cohort, from 64% to 93% in the Zambian cohort, and from 73% to 96% in the Asia-Pacific cohort. Corresponding increases in sensitivity were from 35% to 68% in South Africa, from 55% to 82% in Zambia, and from 37% to 71% in Asia-Pacific. The area under the receiver operating characteristic curve increased from 0.75 to 0.91 in South Africa, from 0.76 to 0.91 in Zambia, and from 0.77 to 0.92 in Asia-Pacific. CONCLUSIONS CD4-based risk charts with optimal cutoffs for targeted VL testing may be useful to monitor ART in settings where VL capacity is limited.
Abstract:
Primary ciliary dyskinesia (PCD) is a rare heterogeneous recessive genetic disorder of motile cilia, leading to chronic upper and lower respiratory symptoms. Prevalence is estimated at around 1:10,000, but many patients remain undiagnosed, while others receive the label incorrectly. Proper diagnosis is complicated by the fact that the key symptoms, such as wet cough, chronic rhinitis and recurrent upper and lower respiratory infection, are common and nonspecific. There is no single gold standard test to diagnose PCD. Presently, the diagnosis is made in patients with a compatible medical history and physical examination by a demanding combination of tests including nasal nitric oxide, high-speed video microscopy, transmission electron microscopy, genetics, and ciliary culture. These tests are costly and need sophisticated equipment and experienced staff, restricting use to highly specialised centers. Therefore, it would be desirable to have a screening test for identifying those patients who should undergo detailed diagnostic testing. Three recent studies focused on potential screening tools: one paper assessed the validity of nasal nitric oxide for screening, and two studies developed new symptom-based screening tools. These simple tools are welcome, and will hopefully remind physicians whom to refer for definitive testing. However, they have been developed in tertiary care settings, where 10 to 50% of tested patients have PCD. Sensitivity and specificity of the tools are reasonable, but positive and negative predictive values may be poor in primary or secondary care settings. While these studies take an important step forward towards an earlier diagnosis of PCD, more remains to be done before we have tools tailored to different health care settings.
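The caveat about predictive values in lower-prevalence settings is Bayes' rule at work: with sensitivity and specificity held fixed, PPV collapses as prevalence drops from a tertiary referral clinic to primary care. A sketch with assumed numbers (illustrative, not taken from the cited studies):

```python
# Bayes' rule for predictive values. All numbers below are assumptions
# chosen for illustration, not published test characteristics.

def ppv(sens, spec, prev):
    """P(disease | positive test)."""
    return (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))

def npv(sens, spec, prev):
    """P(no disease | negative test)."""
    return (spec * (1 - prev)) / (spec * (1 - prev) + (1 - sens) * prev)

sens, spec = 0.90, 0.90            # assumed screening-tool performance
tertiary, primary = 0.30, 0.001    # assumed PCD prevalence per setting

ppv_tertiary = ppv(sens, spec, tertiary)  # high: most positives are true
ppv_primary = ppv(sens, spec, primary)    # low: most positives are false
```

With these assumptions the PPV falls from roughly 79% in the tertiary clinic to under 1% in primary care, while the NPV stays near 100% in both, which is precisely the abstract's point about transporting screening tools between settings.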
Abstract:
A small proportion of individuals with non-specific low back pain (NSLBP) develop persistent problems. Up to 80% of the total costs for NSLBP are owing to chronic NSLBP. Psychosocial factors have been described as important in the transition from acute to chronic NSLBP. Guidelines recommend the use of the Acute Low Back Pain Screening Questionnaire (ALBPSQ) and the Örebro Musculoskeletal Pain Screening Questionnaire (ÖMPSQ) to identify individuals at risk of developing persistent problems, such as long-term absence from work, persistent restriction in function or persistent pain. These instruments can be used with a cutoff value, where patients with values above the threshold are further assessed with a more comprehensive examination.
Abstract:
BACKGROUND: In contrast to hypnosis, there is no surrogate parameter for analgesia in anesthetized patients. Opioids are titrated to suppress blood pressure response to noxious stimulation. The authors evaluated a novel model predictive controller for closed-loop administration of alfentanil using mean arterial blood pressure and predicted plasma alfentanil concentration (Cp Alf) as input parameters. METHODS: The authors studied 13 healthy patients scheduled to undergo minor lumbar and cervical spine surgery. After induction with propofol, alfentanil, and mivacurium and tracheal intubation, isoflurane was titrated to maintain the Bispectral Index at 55 (+/- 5), and the alfentanil administration was switched from manual to closed-loop control. The controller adjusted the alfentanil infusion rate to maintain the mean arterial blood pressure near the set-point (70 mmHg) while minimizing the Cp Alf toward the set-point plasma alfentanil concentration (Cp Alfref) (100 ng/ml). RESULTS: Two patients were excluded because of loss of arterial pressure signal and protocol violation. The alfentanil infusion was closed-loop controlled for a mean (SD) of 98.9 (1.5)% of presurgery time and 95.5 (4.3)% of surgery time. The mean (SD) end-tidal isoflurane concentrations were 0.78 (0.1) and 0.86 (0.1) vol%, the Cp Alf values were 122 (35) and 181 (58) ng/ml, and the Bispectral Index values were 51 (9) and 52 (4) before surgery and during surgery, respectively. The mean (SD) absolute deviations of mean arterial blood pressure were 7.6 (2.6) and 10.0 (4.2) mmHg (P = 0.262), and the median performance error, median absolute performance error, and wobble were 4.2 (6.2) and 8.8 (9.4)% (P = 0.002), 7.9 (3.8) and 11.8 (6.3)% (P = 0.129), and 14.5 (8.4) and 5.7 (1.2)% (P = 0.002) before surgery and during surgery, respectively. A post hoc simulation showed that including the Cp Alfref term decreased the predicted Cp Alf compared with control based on mean arterial blood pressure alone.
CONCLUSION: The authors' controller has a similar set-point precision as previous hypnotic controllers and provides adequate alfentanil dosing during surgery. It may help to standardize opioid dosing in research and may be a further step toward a multiple input-multiple output controller.
Abstract:
Objective Arterial lactate, base excess (BE), lactate clearance, and Sequential Organ Failure Assessment (SOFA) score have been shown to correlate with outcome in severely injured patients. The goal of the present study was to separately assess their predictive value in patients suffering from traumatic brain injury (TBI) as opposed to patients suffering from injuries not related to the brain. Materials and methods A total of 724 adult trauma patients with an Injury Severity Score (ISS) ≥ 16 were grouped into patients without TBI (non-TBI), patients with isolated TBI (isolated TBI), and patients with a combination of TBI and non-TBI injuries (combined injuries). The predictive value of the above parameters was then analyzed using both uni- and multivariate analyses. Results The mean age of the patients was 39 years (77 % males), with a mean ISS of 32 (range 16–75). Mortality ranged from 14 % (non-TBI) to 24 % (combined injuries). Admission and serial lactate/BE values were higher in non-survivors of all groups (all p < 0.01), but not in patients with isolated TBI. Admission SOFA scores were highest in non-survivors of all groups (p = 0.023); patients who subsequently developed sepsis also showed elevated SOFA scores (p < 0.01), except those with isolated TBI. In this group, the SOFA score was the only parameter that showed significant differences between survivors and non-survivors. Receiver operating characteristic (ROC) analysis revealed lactate to be the best overall predictor of increased mortality and further septic complications, irrespective of the leading injury. Conclusion Lactate showed the best performance in predicting sepsis or death in all trauma patients except those with isolated TBI, and the differences were greatest in patients with substantial bleeding.
Following isolated TBI, the SOFA score was the only parameter that could differentiate survivors from non-survivors on admission, although, like the other parameters, it was not an independent predictor of death in multivariate analysis.
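The ROC analysis used here has a convenient interpretation: the AUC of a marker equals the probability that a randomly chosen case (eg, a non-survivor) has a higher marker value than a randomly chosen control, with ties counting one half. A standard-library sketch with invented lactate values, not the study's data:

```python
# Sketch (not the study's code): ROC AUC via the Mann-Whitney statistic.

def auc(case_values, control_values):
    """P(random case value > random control value), ties counting 1/2."""
    wins = 0.0
    for c in case_values:
        for k in control_values:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(case_values) * len(control_values))

# Illustrative admission-lactate values (mmol/l), invented for this sketch:
non_survivors = [6.1, 4.8, 7.3, 3.9]
survivors = [1.8, 2.4, 3.1, 2.0, 4.0]
marker_auc = auc(non_survivors, survivors)
```

An AUC of 0.5 means the marker is uninformative (cases and controls are indistinguishable); values approaching 1.0 indicate the clean separation the study found for lactate outside the isolated-TBI group.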
Abstract:
Tables of estimated regression coefficients, usually accompanied by additional information such as standard errors, t-statistics, p-values, confidence intervals or significance stars, have long been the preferred way of communicating results from statistical models. In recent years, however, the limits of this form of exposition have been increasingly recognized. For example, interpretation of regression tables can be very challenging in the presence of complications such as interaction effects, categorical variables, or nonlinear functional forms. Furthermore, while these issues might still be manageable in the case of linear regression, interpretational difficulties can be overwhelming in nonlinear models such as logistic regression. To facilitate sensible interpretation of such models it is often necessary to compute additional results such as marginal effects, predictive margins, or contrasts. Moreover, smart graphical displays of results can be very valuable in making complex relations accessible. A number of helpful commands geared toward supporting these tasks have recently been introduced in Stata, making elaborate interpretation and communication of regression results possible without much extra effort. Examples of such commands are -margins-, -contrasts-, and -marginsplot-. In my talk, I will discuss the capabilities of these commands and present a range of examples illustrating their use.
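For readers outside Stata, the quantity that -margins, dydx()- style output reports for a logistic model can be sketched in a few lines of Python: for p = logistic(b0 + b1*x), the marginal effect of a continuous covariate x at each observation is b1*p*(1-p), averaged over the sample. Coefficients and data below are made up for illustration; this is not Stata's implementation.

```python
# Sketch of the average marginal effect (AME) of a continuous covariate in a
# logistic regression. Coefficients and data are invented for illustration.
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def average_marginal_effect(b0, b1, xs):
    effects = []
    for x in xs:
        p = logistic(b0 + b1 * x)
        effects.append(b1 * p * (1.0 - p))  # derivative dp/dx at this x
    return sum(effects) / len(effects)

b0, b1 = -1.0, 0.8                  # assumed fitted coefficients
xs = [0.0, 0.5, 1.0, 1.5, 2.0]      # assumed covariate values
ame = average_marginal_effect(b0, b1, xs)
```

This makes the abstract's point concrete: unlike a linear model, the effect of x on the predicted probability varies across observations, so a single coefficient in a table is hard to interpret and an averaged quantity is reported instead.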
Abstract:
Seizure freedom in patients suffering from pharmacoresistant epilepsies is still not achieved in 20–30% of all cases. Hence, current therapies need to be improved, based on a more complete understanding of ictogenesis. In this respect, the analysis of functional networks derived from intracranial electroencephalographic (iEEG) data has recently become a standard tool. Functional networks, however, are purely descriptive models and thus are conceptually unable to predict fundamental features of iEEG time series, e.g., in the context of therapeutic brain stimulation. In this paper we present some first steps towards overcoming the limitations of functional network analysis, by showing that its results are implied by a simple predictive model of time-sliced iEEG time series. More specifically, we learn distinct graphical models (so-called Chow–Liu (CL) trees) as models for the spatial dependencies between iEEG signals. Bayesian inference is then applied to the CL trees, allowing for an analytic derivation/prediction of functional networks, based on thresholding of the absolute-value Pearson correlation coefficient (CC) matrix. Using various measures, the networks thus obtained are then compared to those derived in the classical way from the empirical CC matrix. In the high-threshold limit we find (a) an excellent agreement between the two networks and (b) key features of periictal networks as they have previously been reported in the literature. Apart from functional networks, both matrices are also compared element-wise, showing that the CL approach leads to a sparse representation, by setting small correlations to values close to zero while preserving the larger ones. Overall, this paper shows the validity of CL trees as simple, spatially predictive models for periictal iEEG data. Moreover, we suggest straightforward generalizations of the CL approach for also modeling the temporal features of iEEG signals.
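The classical pipeline the paper benchmarks against, thresholding the absolute Pearson correlation matrix to obtain a functional network, can be sketched compactly. The toy signals below stand in for iEEG channels and are not real recordings:

```python
# Sketch (not the paper's code): functional network from thresholding the
# absolute Pearson correlation matrix of multichannel signals.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def functional_network(signals, threshold):
    """Adjacency matrix: edge i-j iff |CC(i, j)| >= threshold (i != j)."""
    n = len(signals)
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if abs(pearson(signals[i], signals[j])) >= threshold:
                adj[i][j] = adj[j][i] = 1
    return adj

channels = [
    [0.0, 1.0, 0.0, -1.0, 0.0, 1.0],    # toy reference oscillation
    [0.1, 1.1, -0.1, -0.9, 0.0, 1.0],   # strongly correlated with channel 0
    [1.0, -1.0, 1.0, -1.0, 1.0, -1.0],  # weakly related to the others
]
net = functional_network(channels, threshold=0.8)
```

The paper's contribution is to show that, in the high-threshold limit, the same adjacency structure can be derived analytically from a fitted Chow–Liu tree instead of from the empirical CC matrix computed here.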