114 results for Receiver-operating Characteristics
Abstract:
PURPOSE Rapid assessment and intervention are important for the prognosis of acutely ill patients admitted to the emergency department (ED). The aim of this study was to prospectively develop and validate a model predicting the risk of in-hospital death based on all information available at the time of ED admission and to compare its discriminative performance with a non-systematic risk estimate by the triaging first health-care provider. METHODS Prospective cohort analysis based on a multivariable logistic regression for the probability of death. RESULTS A total of 8,607 consecutive admissions of 7,680 patients admitted to the ED of a tertiary care hospital were analysed. The most frequent APACHE II diagnostic categories at the time of admission were neurological (2,052, 24 %), trauma (1,522, 18 %), infection categories [1,328, 15 %; including sepsis (357, 4.1 %), severe sepsis (249, 2.9 %), septic shock (27, 0.3 %)], cardiovascular (1,022, 12 %), gastrointestinal (848, 10 %) and respiratory (449, 5 %). The predictors of the final model were age, prolonged capillary refill time, blood pressure, mechanical ventilation, oxygen saturation index, Glasgow coma score and APACHE II diagnostic category. The model showed good discriminative ability, with an area under the receiver operating characteristic curve of 0.92, and good internal validity. The model performed significantly better than non-systematic triaging of the patient. CONCLUSIONS The use of the prediction model can facilitate the identification of ED patients with higher mortality risk. The model performs better than a non-systematic assessment and may facilitate more rapid identification and commencement of treatment of patients at risk of an unfavourable outcome.
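The reported AUC of 0.92 has a direct probabilistic reading: it is the chance that a randomly chosen patient who died was assigned a higher model risk than a randomly chosen survivor. A minimal sketch of that equivalence (the Mann-Whitney form of the AUC; all scores below are hypothetical, not data from the study):

```python
from itertools import product

def auc_mann_whitney(scores_pos, scores_neg):
    """AUC as the probability that a randomly chosen positive case
    (here: in-hospital death) receives a higher predicted risk than a
    randomly chosen negative case; ties count as 0.5."""
    wins = 0.0
    for p, n in product(scores_pos, scores_neg):
        if p > n:
            wins += 1.0
        elif p == n:
            wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical predicted death probabilities, for illustration only:
died     = [0.90, 0.80, 0.70]
survived = [0.60, 0.40, 0.75]
print(auc_mann_whitney(died, survived))  # 8 of 9 pairs ranked correctly -> 8/9
```

An AUC of 0.5 would mean the ranking is no better than chance; 1.0 means every non-survivor outranks every survivor.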
Abstract:
BACKGROUND HIV-1 RNA viral load (VL) testing is recommended to monitor antiretroviral therapy (ART) but not available in many resource-limited settings. We developed and validated CD4-based risk charts to guide targeted VL testing. METHODS We modeled the probability of virologic failure up to 5 years of ART based on current and baseline CD4 counts, developed decision rules for targeted VL testing of 10%, 20% or 40% of patients in seven cohorts of patients starting ART in South Africa, and plotted cut-offs for VL testing on colour-coded risk charts. We assessed the accuracy of risk chart-guided VL testing to detect virologic failure in validation cohorts from South Africa, Zambia and the Asia-Pacific. FINDINGS 31,450 adult patients were included in the derivation and 25,294 patients in the validation cohorts. Positive predictive values increased with the percentage of patients tested: from 79% (10% tested) to 98% (40% tested) in the South African, from 64% to 93% in the Zambian and from 73% to 96% in the Asia-Pacific cohorts. Corresponding increases in sensitivity were from 35% to 68% in South Africa, from 55% to 82% in Zambia and from 37% to 71% in Asia-Pacific. The area under the receiver operating characteristic curve increased from 0.75 to 0.91 in South Africa, from 0.76 to 0.91 in Zambia and from 0.77 to 0.92 in Asia-Pacific. INTERPRETATION CD4-based risk charts with optimal cut-offs for targeted VL testing may be useful to monitor ART in settings where VL capacity is limited.
Abstract:
OBJECTIVES To evaluate the diagnostic performance of seven non-invasive tests (NITs) of liver fibrosis and to assess fibrosis progression over time in HIV/HCV co-infected patients. METHODS Transient elastography (TE) and six blood tests were compared to histopathological fibrosis stage (METAVIR). Participants were followed over three years with NITs at yearly intervals. RESULTS The area under the receiver operating characteristic curve (AUROC) for significant fibrosis (≥F2) in 105 participants was highest for TE (0.85), followed by FIB-4 (0.77), ELF-Test (0.77), APRI (0.76), Fibrotest (0.75), hyaluronic acid (0.70), and Hepascore (0.68). The AUROC for cirrhosis (F4) was 0.97 for TE, followed by FIB-4 (0.91), APRI (0.89), Fibrotest (0.84), Hepascore (0.82), ELF-Test (0.82), and hyaluronic acid (0.79). A three-year follow-up was completed by 87 participants, all on antiretroviral therapy, including 20 patients who completed HCV treatment (9 with sustained virologic response). TE, APRI and Fibrotest did not significantly change during follow-up. There was weak evidence for an increase of FIB-4 (mean increase: 0.22, p = 0.07). Forty-two participants had a second liver biopsy: among 38 participants with F0-F3 at baseline, 10 were progressors (1-stage increase in fibrosis, 8 participants; 2-stage, 1; 3-stage, 1). Among progressors, the mean increase in TE was 3.35 kPa, in APRI 0.36, and in FIB-4 0.75. Fibrotest results did not change over 3 years. CONCLUSION TE was the best NIT for liver fibrosis staging in HIV/HCV co-infected patients. APRI-Score, FIB-4 Index, Fibrotest, and ELF-Test were less reliable. The routinely available APRI and FIB-4 performed as well as more expensive tests. NITs did not change significantly during a follow-up of three years, suggesting slow liver disease progression in a majority of HIV/HCV co-infected persons on antiretroviral therapy.
Abstract:
Phosphatidylethanol (PEth) is considered a specific biomarker of alcohol consumption. Because it accumulates after repeated drinking, PEth is suitable for monitoring long-term drinking behavior. To examine the applicability of PEth in "driving under the influence of alcohol" cases, 142 blood samples with blood alcohol concentrations (BAC) ranging from 0.0-3.12 ‰ were analyzed for the presence of PEth homologues 16:0/18:1 (889 ± 878 ng/mL; range
Abstract:
An accurate detection of individuals at clinical high risk (CHR) for psychosis is a prerequisite for effective preventive interventions. Several psychometric interviews are available, but their prognostic accuracy is unknown. We conducted a prognostic accuracy meta-analysis of psychometric interviews used to examine referrals to high risk services. The index test was an established CHR psychometric instrument used to identify subjects with and without CHR (CHR+ and CHR-). The reference index was psychosis onset over time in both CHR+ and CHR- subjects. Data were analyzed with MIDAS (STATA13). Area under the curve (AUC), summary receiver operating characteristic curves, quality assessment, likelihood ratios, Fagan's nomogram and probability modified plots were computed. Eleven independent studies were included, with a total of 2,519 help-seeking, predominantly adult subjects (CHR+: N=1,359; CHR-: N=1,160) referred to high risk services. The mean follow-up duration was 38 months. The AUC was excellent (0.90; 95% CI: 0.87-0.93) and comparable to other tests in preventive medicine, suggesting clinical utility in subjects referred to high risk services. Meta-regression analyses revealed an effect for exposure to antipsychotics and no effects for type of instrument, age, gender, follow-up time, sample size, quality assessment, or proportion of CHR+ subjects in the total sample. Fagan's nomogram indicated a low positive predictive value (5.74%) in the general non-help-seeking population. Despite the clear need to further improve prediction of psychosis, these findings support the use of psychometric prognostic interviews for CHR as clinical tools for indicated prevention in subjects seeking help at high risk services worldwide.
Abstract:
BACKGROUND & AIMS It is not clear whether symptoms alone can be used to estimate the biologic activity of eosinophilic esophagitis (EoE). We aimed to evaluate whether symptoms can be used to identify patients with endoscopic and histologic features of remission. METHODS Between April 2011 and June 2014, we performed a prospective, observational study and recruited 269 consecutive adults with EoE (67% male; median age, 39 years) in Switzerland and the United States. Patients first completed the validated symptom-based EoE activity index patient-reported outcome instrument and then underwent esophagogastroduodenoscopy with esophageal biopsy collection. Endoscopic and histologic findings were evaluated with a validated grading system and standardized instrument, respectively. Clinical remission was defined as symptom score <20 (range, 0-100); histologic remission was defined as a peak count of <20 eosinophils/mm² in a high-power field (corresponds to approximately <5 eosinophils/median high-power field); and endoscopic remission as absence of white exudates, moderate or severe rings, strictures, or combination of furrows and edema. We used receiver operating characteristic analysis to determine the best symptom score cutoff values for detection of remission. RESULTS Of the study subjects, 111 were in clinical remission (41.3%), 79 were in endoscopic remission (29.7%), and 75 were in histologic remission (27.9%). When the symptom score was used as a continuous variable, patients in endoscopic, histologic, and combined (endoscopic and histologic) remission were detected with area under the curve values of 0.67, 0.60, and 0.67, respectively. A symptom score of 20 identified patients in endoscopic remission with 65.1% accuracy and histologic remission with 62.1% accuracy; a symptom score of 15 identified patients with both types of remission with 67.7% accuracy.
CONCLUSIONS In patients with EoE, endoscopic or histologic remission can be identified with only modest accuracy based on symptoms alone. At any given time, physicians cannot rely on lack of symptoms to make assumptions about lack of biologic disease activity in adults with EoE. ClinicalTrials.gov, Number: NCT00939263.
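The cutoff search described in this abstract can be sketched as a small exhaustive scan: for each candidate symptom-score cutoff, classify scores below it as remission and keep the cutoff with the best overall accuracy. The scores and candidate cutoffs below are hypothetical, and this simplification ignores the separate sensitivity/specificity trade-off a full ROC analysis exposes:

```python
def best_cutoff_by_accuracy(remission_scores, active_scores, candidates):
    """Return the candidate cutoff that maximizes overall accuracy when
    scores below the cutoff are classified as remission."""
    def accuracy(cutoff):
        correct_remission = sum(s < cutoff for s in remission_scores)
        correct_active = sum(s >= cutoff for s in active_scores)
        return (correct_remission + correct_active) / (
            len(remission_scores) + len(active_scores))
    return max(candidates, key=accuracy)

# Hypothetical symptom scores on the 0-100 instrument:
remission = [5, 12, 18, 8]
active = [22, 35, 40, 21]
print(best_cutoff_by_accuracy(remission, active, candidates=[15, 20, 25]))  # 20
```

With well-separated groups like these, the cutoff between the two clusters wins; the modest AUC values in the study reflect far more overlap between symptomatic and remitted patients.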
An Increased Iliocapsularis-to-rectus-femoris Ratio Is Suggestive for Instability in Borderline Hips
Abstract:
BACKGROUND The iliocapsularis muscle is an anterior hip structure that appears to function as a stabilizer in normal hips. Previous studies have shown that the iliocapsularis is hypertrophied in developmental dysplasia of the hip (DDH). An easy MR-based measurement of the ratio of the size of the iliocapsularis to that of adjacent anatomical structures such as the rectus femoris muscle might be helpful in everyday clinical use. QUESTIONS/PURPOSES We asked (1) whether the iliocapsularis-to-rectus-femoris ratio for cross-sectional area, thickness, width, and circumference is increased in DDH when compared with hips with acetabular overcoverage or normal hips; and (2) what is the diagnostic performance of these ratios in distinguishing dysplastic from pincer hips? METHODS We retrospectively compared the anatomy of the iliocapsularis muscle between two study groups with symptomatic hips with different acetabular coverage and a control group with asymptomatic hips. The study groups were selected from a series of patients seen at the outpatient clinic for DDH or femoroacetabular impingement. The allocation to a study group was based on conventional radiographs: the dysplasia group was defined by a lateral center-edge (LCE) angle of < 25° with a minimal acetabular index of 14° and consisted of 45 patients (45 hips); the pincer group was defined by an LCE angle exceeding 39° and consisted of 37 patients (40 hips). The control group consisted of 30 asymptomatic hips (26 patients) with MRIs performed for nonorthopaedic reasons. The anatomy of the iliocapsularis and rectus femoris muscle was evaluated using MR arthrography of the hip and the following parameters: cross-sectional area, thickness, width, and circumference. The iliocapsularis-to-rectus-femoris ratio of these four anatomical parameters was then compared between the two study groups and the control group.
The diagnostic performance of these ratios to distinguish dysplasia from protrusio was evaluated by calculating receiver operating characteristic (ROC) curves and the positive predictive value (PPV) for a ratio > 1. Presence and absence of DDH (ground truth) were determined on plain radiographs using the previously mentioned radiographic parameters. Evaluation of radiographs and MRIs was performed in a blinded fashion. The PPV was chosen because it indicates how likely a hip is to be dysplastic if the iliocapsularis-to-rectus-femoris ratio is > 1. RESULTS The iliocapsularis-to-rectus-femoris ratio for cross-sectional area, thickness, width, and circumference was increased in hips with radiographic evidence of DDH (ratios ranging from 1.31 to 1.35) compared with pincer hips (ratios ranging from 0.71 to 0.90; p < 0.001) and compared with the control group (ratios ranging from 1.10 to 1.15; p ranging from 0.002 to 0.039). The area under the ROC curve ranged from 0.781 to 0.852. For a one-to-one iliocapsularis-to-rectus-femoris ratio, the PPV was 89% (95% confidence interval [CI], 73%-96%) for cross-sectional area, 77% (95% CI, 61%-88%) for thickness, 83% (95% CI, 67%-92%) for width, and 82% (95% CI, 67%-91%) for circumference. CONCLUSIONS The iliocapsularis-to-rectus-femoris ratio seems to be a valuable secondary sign of DDH. This parameter can be used as an adjunct for clinical decision-making in hips with borderline hip dysplasia and a concomitant cam-type deformity to identify the predominant pathology. Future studies will need to prove whether this finding can help clinicians determine whether the borderline dysplasia accounts for the hip symptoms with which the patient presents. LEVEL OF EVIDENCE Level III, prognostic study.
Abstract:
BACKGROUND A single non-invasive gene expression profiling (GEP) test (AlloMap®) is often used to discriminate whether a heart transplant recipient is at a low risk of acute cellular rejection at the time of testing. In a randomized trial, use of the test (a GEP score from 0-40) has been shown to be non-inferior to a routine endomyocardial biopsy for surveillance after heart transplantation in selected low-risk patients with respect to clinical outcomes. Recently, it was suggested that the within-patient variability of consecutive GEP scores may be used to independently predict future clinical events; however, further studies were recommended. Here we performed an analysis of an independent patient population to determine the prognostic utility of within-patient variability of GEP scores in predicting future clinical events. METHODS We defined the GEP score variability as the standard deviation of four GEP scores collected ≥315 days post-transplantation. Of the 737 patients from the Cardiac Allograft Rejection Gene Expression Observational (CARGO) II trial, 36 were assigned to the composite event group (death, re-transplantation or graft failure ≥315 days post-transplantation and within 3 years of the final GEP test) and 55 were assigned to the control group (non-event patients). In this case-control study, the performance of GEP score variability to predict future events was evaluated by the area under the receiver operating characteristic curve (AUC ROC). The negative predictive values (NPV) and positive predictive values (PPV), including 95 % confidence intervals (CI), of GEP score variability were calculated. RESULTS The estimated prevalence of events was 17 %. Events occurred at a median of 391 (inter-quartile range 376) days after the final GEP test. The GEP variability AUC ROC for the prediction of a composite event was 0.72 (95 % CI 0.6-0.8).
The NPV for GEP score variability of 0.6 was 97 % (95 % CI 91.4-100.0); the PPV for GEP score variability of 1.5 was 35.4 % (95 % CI 13.5-75.8). CONCLUSION In heart transplant recipients, a GEP score variability may be used to predict the probability that a composite event will occur within 3 years after the last GEP score. TRIAL REGISTRATION Clinicaltrials.gov identifier NCT00761787.
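The variability measure defined above (standard deviation of the four most recent GEP scores) is straightforward to compute. A minimal sketch follows; the score histories are hypothetical, and the use of the sample-SD estimator is an assumption, since the abstract does not state which estimator was used:

```python
import statistics

def gep_score_variability(scores):
    """Within-patient variability: standard deviation of the four most
    recent GEP scores (sample SD assumed)."""
    if len(scores) != 4:
        raise ValueError("variability is defined over exactly four scores")
    return statistics.stdev(scores)

# Hypothetical score histories:
stable = [30, 30, 31, 30]    # SD 0.5: below the 0.6 threshold quoted above
erratic = [25, 32, 28, 35]   # SD ~4.4: well above the 1.5 PPV threshold
print(gep_score_variability(stable), gep_score_variability(erratic) > 1.5)
```

Two patients with identical mean scores can thus carry very different predicted risk: it is the scatter of the trajectory, not its level, that this marker captures.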
Abstract:
AIMS A non-invasive gene-expression profiling (GEP) test for rejection surveillance of heart transplant recipients originated in the USA. A European-based study, the Cardiac Allograft Rejection Gene Expression Observational II Study (CARGO II), was conducted to further clinically validate the GEP test performance. METHODS AND RESULTS Blood samples for GEP testing (AlloMap(®), CareDx, Brisbane, CA, USA) were collected during post-transplant surveillance. The reference standard for rejection status was based on histopathology grading of tissue from endomyocardial biopsy. The area under the receiver operating characteristic curve (AUC-ROC), negative predictive values (NPVs), and positive predictive values (PPVs) for the GEP scores (range 0-39) were computed. Considering the GEP score of 34 as a cut-off (>6 months post-transplantation), 95.5% (381/399) of GEP tests were true negatives, 4.5% (18/399) were false negatives, 10.2% (6/59) were true positives, and 89.8% (53/59) were false positives. Based on 938 paired biopsies, the GEP test score AUC-ROC for distinguishing ≥3A rejection was 0.70 and 0.69 for ≥2-6 and >6 months post-transplantation, respectively. Depending on the chosen threshold score, the NPV and PPV range from 98.1 to 100% and 2.0 to 4.7%, respectively. CONCLUSION For ≥2-6 and >6 months post-transplantation, CARGO II GEP score performance (AUC-ROC = 0.70 and 0.69) is similar to the CARGO study results (AUC-ROC = 0.71 and 0.67). The low prevalence of ACR contributes to the high NPV and limited PPV of GEP testing. The choice of threshold score for practical use of GEP testing should consider overall clinical assessment of the patient's baseline risk for rejection.
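Predictive values at any given cutoff follow mechanically from the 2x2 counts; a minimal sketch using the score-34 counts reported in this abstract (381 true negatives, 18 false negatives, 6 true positives, 53 false positives; the NPV/PPV ranges quoted there correspond to other threshold choices):

```python
def predictive_values(tp, fp, tn, fn):
    """PPV = TP/(TP+FP): probability of rejection given a positive GEP test.
    NPV = TN/(TN+FN): probability of no rejection given a negative test."""
    return tp / (tp + fp), tn / (tn + fn)

# Counts at the score-34 cutoff (>6 months post-transplantation):
ppv, npv = predictive_values(tp=6, fp=53, tn=381, fn=18)
print(round(ppv, 3), round(npv, 3))  # 0.102 0.955
```

Unlike sensitivity and specificity, both values shift with disease prevalence, which is why the abstract attributes the high NPV and limited PPV to the low prevalence of acute cellular rejection.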
Abstract:
Although there has been a significant decrease in caries prevalence in developed countries, the slower progression of dental caries requires methods capable of detecting and quantifying lesions at an early stage. The aim of this study was to evaluate the effectiveness of fluorescence-based methods (DIAGNOdent 2095 laser fluorescence device [LF], DIAGNOdent 2190 pen [LFpen], and VistaProof fluorescence camera [FC]) in monitoring the progression of noncavitated caries-like lesions on smooth surfaces. Caries-like lesions were developed in 60 blocks of bovine enamel using a bacterial model of Streptococcus mutans and Lactobacillus acidophilus. Enamel blocks were evaluated at baseline (phase I), after the first cariogenic challenge (eight days) (phase II), and after the second cariogenic challenge (a further eight days) (phase III) by two independent examiners using the LF, LFpen, and FC. Blocks were submitted to surface microhardness (SMH) and cross-sectional microhardness analyses. The intraclass correlation coefficient for intra- and interexaminer reproducibility ranged from 0.49 (FC) to 0.94 (LF/LFpen). SMH values decreased and fluorescence values increased significantly among the three phases. Higher values for sensitivity, specificity, and area under the receiver operating characteristic curve were observed for FC (phase II) and LFpen (phase III). A significant correlation was found between fluorescence values and SMH in all phases, and with integrated loss of surface hardness (ΔKHN) in phase III. In conclusion, fluorescence-based methods were effective in monitoring noncavitated caries-like lesions on smooth surfaces, with moderate correlation with SMH, allowing differentiation between sound and demineralized enamel.
Abstract:
OBJECTIVE The aim of this study was to investigate the performance of the arterial enhancement fraction (AEF) in multiphasic computed tomography (CT) acquisitions to detect hepatocellular carcinoma (HCC) in liver transplant recipients in correlation with the pathologic analysis of the corresponding liver explants. MATERIALS AND METHODS Fifty-five transplant recipients were analyzed: 35 patients with 108 histologically proven HCC lesions and 20 patients with end-stage liver disease without HCC. Six radiologists looked at the triphasic CT acquisitions with the AEF maps in a first readout. For the second readout without the AEF maps, 3 radiologists analyzed triphasic CT acquisitions (group 1), whereas the other 3 readers had 4 contrast acquisitions available (group 2). A jackknife free-response reader receiver operating characteristic analysis was used to compare the readout performance of the readers. Receiver operating characteristic analysis was used to determine the optimal cutoff value of the AEF. RESULTS The figure of merit (θ = 0.6935) for the conventional triphasic readout was significantly inferior compared with the triphasic readout with additional use of the AEF (θ = 0.7478, P < 0.0001) in group 1. There was no significant difference between the four-phasic conventional readout (θ = 0.7569) and the triphasic readout with the AEF (θ = 0.7615, P = 0.7541) in group 2. Without the AEF, HCC lesions were detected with a sensitivity of 30.7% (95% confidence interval [CI], 25.5%-36.4%) and a specificity of 97.1% (96.0%-98.0%) by group 1 looking at 3 CT acquisition phases and with a sensitivity of 42.1% (36.2%-48.1%) and a specificity of 97.5% (96.4%-98.3%) in group 2 looking at 4 CT acquisition phases. Using the AEF maps, with both groups looking at the same 3 acquisition phases, the sensitivity was 47.7% (95% CI, 41.9%-53.5%) with a specificity of 97.4% (96.4%-98.3%) in group 1 and 49.8% (95% CI, 43.9%-55.8%)/97.6% (96.6%-98.4%) in group 2.
The optimal cutoff for the AEF was 50%. CONCLUSION The AEF is a helpful tool to screen for HCC with CT. The use of the AEF maps may significantly improve HCC detection, allowing the fourth CT acquisition phase to be omitted and thus making a 25% reduction in radiation dose possible.
Abstract:
AIM To evaluate the prognostic value of electrophysiological stimulation (EPS) in the risk stratification for tachyarrhythmic events and sudden cardiac death (SCD). METHODS We conducted a prospective cohort study and analyzed the long-term follow-up of 265 consecutive patients who underwent programmed ventricular stimulation at the Luzerner Kantonsspital (Lucerne, Switzerland) between October 2003 and April 2012. Patients underwent EPS for SCD risk evaluation because of structural or functional heart disease and/or electrical conduction abnormality and/or after syncope/cardiac arrest. EPS was considered abnormal if sustained ventricular tachycardia (VT) was inducible. The primary endpoint of the study was SCD or, in implanted patients, adequate ICD activation. RESULTS During EPS, sustained VT was induced in 125 patients (47.2%) and non-sustained VT in 60 patients (22.6%); in 80 patients (30.2%) no arrhythmia could be induced. In our cohort, 153 patients (57.7%) underwent ICD implantation after the EPS. During follow-up (mean duration 4.8 ± 2.3 years), a primary endpoint event occurred in 49 patients (18.5%). The area under the receiver operating characteristic curve (AUROC) was 0.593 (95%CI: 0.515-0.670) for a left ventricular ejection fraction (LVEF) < 35% and 0.636 (95%CI: 0.563-0.709) for inducible sustained VT during EPS. The AUROC of EPS was higher in the subgroup of patients with LVEF ≥ 35% (0.681, 95%CI: 0.578-0.785). Cox regression analysis showed that both sustained VT during EPS (HR: 2.26, 95%CI: 1.22-4.19, P = 0.009) and LVEF < 35% (HR: 2.00, 95%CI: 1.13-3.54, P = 0.018) were independent predictors of primary endpoint events. CONCLUSION EPS provides a benefit in risk stratification for future tachyarrhythmic events and SCD and should especially be considered in patients with LVEF ≥ 35%.
Abstract:
BACKGROUND HIV-1 RNA viral load (VL) testing is recommended to monitor antiretroviral therapy (ART) but not available in many resource-limited settings. We developed and validated CD4-based risk charts to guide targeted VL testing. METHODS We modeled the probability of virologic failure up to 5 years of ART based on current and baseline CD4 counts, developed decision rules for targeted VL testing of 10%, 20%, or 40% of patients in 7 cohorts of patients starting ART in South Africa, and plotted cutoffs for VL testing on colour-coded risk charts. We assessed the accuracy of risk chart-guided VL testing to detect virologic failure in validation cohorts from South Africa, Zambia, and the Asia-Pacific. RESULTS In total, 31,450 adult patients were included in the derivation and 25,294 patients in the validation cohorts. Positive predictive values increased with the percentage of patients tested: from 79% (10% tested) to 98% (40% tested) in the South African cohort, from 64% to 93% in the Zambian cohort, and from 73% to 96% in the Asia-Pacific cohort. Corresponding increases in sensitivity were from 35% to 68% in South Africa, from 55% to 82% in Zambia, and from 37% to 71% in Asia-Pacific. The area under the receiver operating curve increased from 0.75 to 0.91 in South Africa, from 0.76 to 0.91 in Zambia, and from 0.77 to 0.92 in Asia-Pacific. CONCLUSIONS CD4-based risk charts with optimal cutoffs for targeted VL testing may be useful to monitor ART in settings where VL capacity is limited.
Abstract:
The updated Vienna Prediction Model for estimating recurrence risk after an unprovoked venous thromboembolism (VTE) has been developed to identify individuals at low risk for VTE recurrence in whom anticoagulation (AC) therapy may be stopped after 3 months. We externally validated the accuracy of the model to predict recurrent VTE in a prospective multicenter cohort of 156 patients aged ≥65 years with acute symptomatic unprovoked VTE who had received 3 to 12 months of AC. Patients with a predicted 12-month risk within the lowest quartile based on the updated Vienna Prediction Model were classified as low risk. The risk of recurrent VTE did not differ between low- and higher-risk patients at 12 months (13% vs 10%; P = .77) or at 24 months (15% vs 17%; P = 1.0). The area under the receiver operating characteristic curve for predicting VTE recurrence was 0.39 (95% confidence interval [CI], 0.25-0.52) at 12 months and 0.43 (95% CI, 0.31-0.54) at 24 months. In conclusion, in elderly patients with unprovoked VTE who have stopped AC, the updated Vienna Prediction Model does not discriminate between patients who develop recurrent VTE and those who do not. This study was registered at www.clinicaltrials.gov as #NCT00973596.
Abstract:
RATIONALE The use of 6-minute-walk distance (6MWD) as an indicator of exercise capacity to predict postoperative survival in lung transplantation has not previously been well studied. OBJECTIVES To evaluate the association between 6MWD and postoperative survival following lung transplantation. METHODS Adult, first-time, lung-only transplantations per the United Network for Organ Sharing database from May 2005 to December 2011 were analyzed. Kaplan-Meier methods and Cox proportional hazards modeling were used to determine the association between preoperative 6MWD and post-transplant survival after adjusting for potential confounders. A receiver operating characteristic curve was used to determine the 6MWD value that provided maximal separation in 1-year mortality. A subanalysis was performed to assess the association between 6MWD and post-transplant survival by disease category. MEASUREMENTS AND MAIN RESULTS A total of 9,526 patients were included for analysis. The median 6MWD was 787 ft (25th-75th percentiles = 450-1,082 ft). Increasing 6MWD was associated with a significantly lower overall hazard of death (P < 0.001). A continuous increase in walk distance through 1,200-1,400 ft conferred an incremental survival advantage. Although 6MWD strongly correlated with survival, the ability of a single dichotomous value to predict outcomes was limited. All disease categories demonstrated significantly longer survival with increasing 6MWD (P ≤ 0.009) except pulmonary vascular disease (P = 0.74); however, the low volume in this category (n = 312; 3.3%) may limit the ability to detect an association. CONCLUSIONS 6MWD is significantly associated with post-transplant survival and is best incorporated into transplant evaluations on a continuous basis, given the limited ability of a single, dichotomous value to predict outcomes.