105 results for FECAL SCORE
Abstract:
BACKGROUND AND PURPOSE Previous studies have suggested that advanced age predicts worse outcome following mechanical thrombectomy. We assessed outcomes from 2 recent large prospective studies to determine the association among TICI, age, and outcome. MATERIALS AND METHODS Data from the Solitaire FR Thrombectomy for Acute Revascularization (STAR) trial, an international multicenter prospective single-arm thrombectomy study, and the Solitaire arm of the Solitaire FR With the Intention For Thrombectomy (SWIFT) trial were pooled. TICI was determined by core laboratory review. Good outcome was defined as an mRS score of 0-2 at 90 days. We analyzed the association among clinical outcome, successful-versus-unsuccessful reperfusion (TICI 2b-3 versus TICI 0-2a), and age (dichotomized across the median). RESULTS Two hundred sixty-nine of 291 patients treated with Solitaire in the STAR and SWIFT databases for whom TICI and 90-day outcome data were available were included. The median age was 70 years (interquartile range, 60-76 years), with an age range of 25-88 years. The mean age of patients 70 years of age or younger was 59 years, and it was 77 years for patients older than 70 years. There was no significant difference between the groups in baseline NIHSS scores or procedure time metrics. Hemorrhage and device-related complications were more common in the younger age group, but the differences did not reach statistical significance. In absolute terms, the rate of good outcome was higher in the younger population (64% versus 44%, P < .001). However, the magnitude of benefit from successful reperfusion was higher in the group older than 70 years (OR 4.82; 95% CI, 1.32-17.63 versus OR 7.32; 95% CI, 1.73-30.99). CONCLUSIONS Successful reperfusion is the strongest predictor of good outcome following mechanical thrombectomy, and the magnitude of benefit is highest in patients older than 70 years of age.
Abstract:
OBJECTIVES To assess the clinical profile and long-term mortality in SYNTAX score II-based strata of patients who received percutaneous coronary intervention (PCI) in contemporary randomized trials. BACKGROUND The SYNTAX score II was developed in the randomized, all-comers SYNTAX trial population and comprises 2 anatomical and 6 clinical variables. The interaction of these variables with treatment provides individual long-term mortality predictions if a patient undergoes coronary artery bypass grafting (CABG) or PCI. METHODS Patient-level data (n=5433) from 7 contemporary coronary drug-eluting stent (DES) trials were pooled. The mortality for CABG or PCI was estimated for every patient. The difference in mortality estimates between these two revascularization strategies was used to divide the patients into three groups of theoretical treatment recommendations: PCI, CABG, or PCI/CABG (the latter denoting equipoise between CABG and PCI for long-term mortality). RESULTS The three groups had marked differences in their baseline characteristics. According to the predicted risk differences, 5115 patients could be treated by either PCI or CABG, 271 should be treated only by PCI, and CABG alone was rarely recommended (n=47). At 3-year follow-up, according to the SYNTAX score II recommendations, patients recommended for CABG had higher mortality compared with the PCI and PCI/CABG groups (17.4% vs 6.1% and 5.3%, respectively; P<0.01). CONCLUSIONS The SYNTAX score II demonstrated the capability to help stratify PCI procedures.
Abstract:
OBJECTIVE The purpose of this study was to investigate outcomes of patients treated with prasugrel or clopidogrel after percutaneous coronary intervention (PCI) in a nationwide acute coronary syndrome (ACS) registry. BACKGROUND Prasugrel was found to be superior to clopidogrel in a randomized trial of ACS patients undergoing PCI. However, little is known about its efficacy in everyday practice. METHODS All ACS patients enrolled in the Acute Myocardial Infarction in Switzerland (AMIS)-Plus registry undergoing PCI and being treated with a thienopyridine P2Y12 inhibitor between January 2010 and December 2013 were included in this analysis. Patients were stratified according to treatment with prasugrel or clopidogrel, and outcomes were compared using propensity score matching. The primary endpoint was a composite of death, recurrent infarction and stroke at hospital discharge. RESULTS Out of 7621 patients, 2891 received prasugrel (38%) and 4730 received clopidogrel (62%). Independent predictors of in-hospital mortality were age, Killip class >2, STEMI, Charlson comorbidity index >1, and resuscitation prior to admission. After propensity score matching (2301 patients per group), the primary endpoint was significantly lower in prasugrel-treated patients (3.0% vs 4.3%; p=0.022), while bleeding events were more frequent (4.1% vs 3.0%; p=0.048). In-hospital mortality was significantly reduced (1.8% vs 3.1%; p=0.004), but no significant differences were observed in rates of recurrent infarction (0.8% vs 0.7%; p=1.00) or stroke (0.5% vs 0.6%; p=0.85). In a predefined subset of matched patients with one-year follow-up (n=1226), mortality between discharge and one year was not significantly reduced in prasugrel-treated patients (1.3% vs 1.9%, p=0.38). CONCLUSIONS In everyday practice in Switzerland, prasugrel is predominantly used in younger patients with STEMI undergoing primary PCI.
A propensity score-matched analysis suggests a mortality benefit from prasugrel compared with clopidogrel in these patients.
Abstract:
PURPOSE To determine the predictive value of the vertebral trabecular bone score (TBS) alone or in addition to bone mineral density (BMD) with regard to fracture risk. METHODS Retrospective analysis of the relative contribution of BMD [measured at the femoral neck (FN), total hip (TH), and lumbar spine (LS)] and TBS with regard to the risk of incident clinical fractures in a representative cohort of elderly post-menopausal women previously participating in the Swiss Evaluation of the Methods of Measurement of Osteoporotic Fracture Risk study. RESULTS Complete datasets were available for 556 of 701 women (79%). Mean age was 76.1 years, LS BMD 0.863 g/cm², and TBS 1.195. LS BMD and LS TBS were moderately correlated (r² = 0.25). After a mean of 2.7 ± 0.8 years of follow-up, the incidence of fragility fractures was 9.4%. Age- and BMI-adjusted hazard ratios per standard deviation decrease (95% confidence intervals) were 1.58 (1.16-2.16), 1.77 (1.31-2.39), and 1.59 (1.21-2.09) for LS, FN, and TH BMD, respectively, and 2.01 (1.54-2.63) for TBS. Whereas 58% and 60% of fragility fractures occurred in women with a BMD T score ≤-2.5 and a TBS <1.150, respectively, combining these two thresholds identified 77% of all women with an osteoporotic fracture. CONCLUSIONS Lumbar spine TBS alone or in combination with BMD predicted incident clinical fracture risk in a representative population-based sample of elderly post-menopausal women.
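The combined screening rule from the conclusions above (flag a woman when either the BMD T-score or the TBS cut-off is met, which captured 77% of fractures versus 58-60% for each threshold alone) can be sketched as a simple predicate; the function name is illustrative, not from the study:

```python
def high_fracture_risk(bmd_t_score, tbs):
    """Flag a patient as higher risk when either threshold is met.

    Thresholds from the abstract: lumbar spine BMD T-score <= -2.5
    or TBS < 1.150. The study reports that combining both cut-offs
    identified 77% of women with an osteoporotic fracture, versus
    58% and 60% for each threshold on its own.
    """
    return bmd_t_score <= -2.5 or tbs < 1.150
```

Either condition suffices, so the rule is deliberately more sensitive (and less specific) than either threshold alone.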
Abstract:
Trabecular bone score (TBS) is a grey-level textural index of bone microarchitecture derived from lumbar spine dual-energy X-ray absorptiometry (DXA) images. TBS is a BMD-independent predictor of fracture risk. The objective of this meta-analysis was to determine whether TBS predicted fracture risk independently of FRAX probability and to examine their combined performance by adjusting the FRAX probability for TBS. We utilized individual-level data from 17,809 men and women in 14 prospective population-based cohorts. Baseline evaluation included TBS and the FRAX risk variables, and outcomes during follow-up (mean 6.7 years) comprised major osteoporotic fractures. The association between TBS, FRAX probabilities and the risk of fracture was examined using an extension of the Poisson regression model in each cohort and for each sex and expressed as the gradient of risk (GR; hazard ratio per 1 SD change in risk variable in the direction of increased risk). FRAX probabilities were adjusted for TBS using an adjustment factor derived from an independent cohort (the Manitoba Bone Density Cohort). Overall, the GR of TBS for major osteoporotic fracture was 1.44 (95% CI: 1.35-1.53) when adjusted for age and time since baseline and was similar in men and women (p > 0.10). When additionally adjusted for the FRAX 10-year probability of major osteoporotic fracture, TBS remained a significant, independent predictor of fracture (GR 1.32, 95% CI: 1.24-1.41). The adjustment of FRAX probability for TBS resulted in a small increase in the GR (1.76, 95% CI: 1.65-1.87 vs. 1.70, 95% CI: 1.60-1.81). A smaller change in GR for hip fracture was observed (FRAX hip fracture probability GR 2.25 vs. 2.22). TBS is a significant predictor of fracture risk independently of FRAX. The findings support the use of TBS as a potential adjustment for FRAX probability, though the impact of the adjustment remains to be determined in the context of clinical assessment guidelines.
Abstract:
BACKGROUND A single non-invasive gene expression profiling (GEP) test (AlloMap®) is often used to discriminate whether a heart transplant recipient is at low risk of acute cellular rejection at the time of testing. In a randomized trial, use of the test (a GEP score from 0-40) was shown to be non-inferior to routine endomyocardial biopsy for surveillance after heart transplantation in selected low-risk patients with respect to clinical outcomes. Recently, it was suggested that the within-patient variability of consecutive GEP scores may be used to independently predict future clinical events; however, further studies were recommended. Here we performed an analysis of an independent patient population to determine the prognostic utility of within-patient variability of GEP scores in predicting future clinical events. METHODS We defined the GEP score variability as the standard deviation of four GEP scores collected ≥315 days post-transplantation. Of the 737 patients from the Cardiac Allograft Rejection Gene Expression Observational (CARGO) II trial, 36 were assigned to the composite event group (death, re-transplantation or graft failure ≥315 days post-transplantation and within 3 years of the final GEP test) and 55 were assigned to the control group (non-event patients). In this case-control study, the performance of GEP score variability in predicting future events was evaluated by the area under the receiver operating characteristic curve (AUC ROC). The negative predictive values (NPV) and positive predictive values (PPV), including 95% confidence intervals (CI), of GEP score variability were calculated. RESULTS The estimated prevalence of events was 17%. Events occurred at a median of 391 (interquartile range 376) days after the final GEP test. The GEP score variability AUC ROC for the prediction of a composite event was 0.72 (95% CI 0.6-0.8).
The NPV for a GEP score variability of 0.6 was 97% (95% CI 91.4-100.0); the PPV for a GEP score variability of 1.5 was 35.4% (95% CI 13.5-75.8). CONCLUSION In heart transplant recipients, GEP score variability may be used to predict the probability that a composite event will occur within 3 years after the last GEP score. TRIAL REGISTRATION Clinicaltrials.gov identifier NCT00761787.
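As a minimal sketch, the variability metric defined in METHODS (the standard deviation of four GEP scores collected ≥315 days post-transplantation) could be computed as below; the abstract does not state whether the sample or population standard deviation was used, so the sample form is assumed:

```python
from statistics import stdev

def gep_score_variability(scores):
    """Within-patient GEP score variability, per the abstract's
    definition: the standard deviation of exactly four GEP scores
    taken >= 315 days post-transplantation. The sample standard
    deviation is an assumption; the abstract does not specify."""
    if len(scores) != 4:
        raise ValueError("variability is defined over four GEP scores")
    return stdev(scores)
```

A value near the 0.6 threshold would then map to the reported high NPV, and a value at or above 1.5 to the reported PPV.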
Abstract:
OBJECTIVE To assess whether palliative primary tumor resection in colorectal cancer patients with incurable stage IV disease is associated with improved survival. BACKGROUND There is a heated debate regarding whether or not an asymptomatic primary tumor should be removed in patients with incurable stage IV colorectal disease. METHODS Stage IV colorectal cancer patients were identified in the Surveillance, Epidemiology, and End Results database between 1998 and 2009. Patients undergoing surgery to metastatic sites were excluded. Overall survival and cancer-specific survival were compared between patients with and without palliative primary tumor resection using risk-adjusted Cox proportional hazard regression models and stratified propensity score methods. RESULTS Overall, 37,793 stage IV colorectal cancer patients were identified. Of those, 23,004 (60.9%) underwent palliative primary tumor resection. The rate of patients undergoing palliative primary cancer resection decreased from 68.4% in 1998 to 50.7% in 2009 (P < 0.001). In Cox regression analysis after propensity score matching, primary cancer resection was associated with significantly improved overall survival [hazard ratio (HR) of death = 0.40, 95% confidence interval (CI) = 0.39-0.42, P < 0.001] and cancer-specific survival (HR of death = 0.39, 95% CI = 0.38-0.40, P < 0.001). The benefit of palliative primary cancer resection persisted throughout the period 1998 to 2009, with HRs equal to or less than 0.47 for both overall and cancer-specific survival. CONCLUSIONS On the basis of this population-based cohort of stage IV colorectal cancer patients, palliative primary tumor resection was associated with improved overall and cancer-specific survival. Therefore, the dogma that an asymptomatic primary tumor should never be resected in patients with unresectable colorectal cancer metastases must be questioned.
Abstract:
BACKGROUND & AIMS Cirrhotic patients with acute decompensation frequently develop acute-on-chronic liver failure (ACLF), which is associated with high mortality rates. Recently, a specific score for these patients was developed using the CANONIC study database. The aims of this study were to develop and validate the CLIF-C AD score, a specific prognostic score for hospitalised cirrhotic patients with acute decompensation (AD) but without ACLF, and to compare it with the Child-Pugh, MELD, and MELD-Na scores. METHODS The derivation set included 1016 CANONIC study patients without ACLF. Proportional hazards models considering liver transplantation as a competing risk were used to identify score parameters. Estimated coefficients were used as relative weights to compute the CLIF-C ADs. External validation was performed in 225 cirrhotic AD patients. The CLIF-C ADs was also tested for sequential use. RESULTS Age, serum sodium, white-cell count, creatinine and INR were selected as the best predictors of mortality. The C-index was better for the CLIF-C ADs than for the Child-Pugh, MELD, and MELD-Na scores at predicting 3- and 12-month mortality in the derivation, internal validation and external datasets. The CLIF-C ADs improved in its ability to predict 3-month mortality using data from days 2, 3-7, and 8-15 (C-index: 0.72, 0.75, and 0.77, respectively). CONCLUSIONS The new CLIF-C ADs is more accurate than other liver scores in predicting prognosis in hospitalised cirrhotic patients without ACLF. The CLIF-C ADs may therefore be used to identify a high-risk cohort for intensive management and a low-risk group that may be discharged early.
Abstract:
BACKGROUND The aim of newborn screening (NBS) for CF is to detect children with 'classic' CF, where early treatment is possible and improves prognosis. Children with an inconclusive CF diagnosis (CFSPID) should not be detected, as there is no evidence for improvement through early treatment. No algorithm in current NBS guidelines explains what to do when the sweat test (ST) fails. This study compares the performance of three different algorithms for further diagnostic evaluation when the first ST is unsuccessful, regarding the number of children detected with CF and CFSPID and the time until a definite diagnosis. METHODS In Switzerland, CF-NBS was introduced in January 2011 using an IRT-DNA-IRT algorithm followed by a ST. In children in whom ST was not possible (no or insufficient sweat), 3 different protocols were applied between 2011 and 2014: in 2011, ST was repeated until it was successful (protocol A); in 2012, we proceeded directly to diagnostic DNA testing (protocol B); and in 2013-2014, fecal elastase (FE) was measured in stool to detect pancreatic insufficiency needing immediate treatment (protocol C). RESULTS The ratio CF:CFSPID was 7:1 (27/4) with protocol A, 2:1 (22/10) with protocol B, and 14:1 (54/4) with protocol C. The mean time to definite diagnosis was significantly shorter with protocol C (33 days) than with protocol A or B (42 and 40 days; p=0.014 compared with A, and p=0.036 compared with B). CONCLUSIONS The algorithm used for the diagnostic part of newborn screening in CF centers affects the performance of a CF-NBS program with regard to the ratio CF:CFSPID and the time until definite diagnosis. Our results suggest including FE after initial sweat test failure in the CF-NBS guidelines to keep the proportion of CFSPID low and the time until definite diagnosis short.
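The decision flow of protocol C described above can be sketched as follows; the fecal elastase cut-off used here (200, assumed μg/g stool) is an illustrative value only, since the abstract does not state the threshold:

```python
def protocol_c_next_step(sweat_test_ok, fecal_elastase=None):
    """Next diagnostic step under protocol C after a positive CF-NBS.

    If the first sweat test succeeds, it is interpreted as usual.
    If it fails (no or insufficient sweat), fecal elastase (FE) is
    measured in stool to identify pancreatic insufficiency needing
    immediate treatment. The FE cut-off of 200 (assumed ug/g stool)
    is illustrative only; the abstract gives no threshold.
    """
    if sweat_test_ok:
        return "interpret sweat chloride result"
    if fecal_elastase is not None and fecal_elastase < 200:
        return "treat as pancreatic insufficient"
    return "continue diagnostic follow-up"
```

Protocols A (repeat the ST) and B (go straight to DNA testing) would replace the elastase branch with their respective fallbacks.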
Abstract:
OBJECTIVE We endeavored to develop an unruptured intracranial aneurysm (UIA) treatment score (UIATS) model that includes and quantifies key factors involved in clinical decision-making in the management of UIAs and to assess agreement for this model among specialists in UIA management and research. METHODS An international multidisciplinary (neurosurgery, neuroradiology, neurology, clinical epidemiology) group of 69 specialists was convened to develop and validate the UIATS model using a Delphi consensus. For internal (39 panel members involved in identification of relevant features) and external validation (30 independent external reviewers), 30 selected UIA cases were used to analyze agreement with UIATS management recommendations based on a 5-point Likert scale (5 indicating strong agreement). Interrater agreement (IRA) was assessed with standardized coefficients of dispersion (vr*) (vr* = 0 indicating excellent agreement and vr* = 1 indicating poor agreement). RESULTS The UIATS accounts for 29 key factors in UIA management. Agreement with UIATS (mean Likert scores) was 4.2 (95% confidence interval [CI] 4.1-4.3) per reviewer for both reviewer cohorts; agreement per case was 4.3 (95% CI 4.1-4.4) for panel members and 4.5 (95% CI 4.3-4.6) for external reviewers (p = 0.017). Mean Likert scores were 4.2 (95% CI 4.1-4.3) for interventional reviewers (n = 56) and 4.1 (95% CI 3.9-4.4) for noninterventional reviewers (n = 12) (p = 0.290). Overall IRA (vr*) for both cohorts was 0.026 (95% CI 0.019-0.033). CONCLUSIONS This novel UIA decision guidance study captures an excellent consensus among highly informed individuals on UIA management, irrespective of their underlying specialty. Clinicians can use the UIATS as a comprehensive mechanism for indicating how a large group of specialists might manage an individual patient with a UIA.
Abstract:
Background: The efficacy of cognitive behavioral therapy (CBT) for the treatment of depressive disorders has been demonstrated in many randomized controlled trials (RCTs). This study investigated whether similar effects can be expected from CBT under routine care conditions when the patients are comparable to those examined in RCTs. Method: N=574 CBT patients from an outpatient clinic were stepwise matched to the patients undergoing CBT in the National Institute of Mental Health Treatment of Depression Collaborative Research Program (TDCRP). First, the exclusion criteria of the RCT were applied to the naturalistic sample of the outpatient clinic. Second, propensity score matching (PSM) was used to adjust the remaining naturalistic sample on the basis of baseline covariate distributions. Matched samples were then compared regarding treatment effects using effect sizes, the average treatment effect on the treated (ATT), and recovery rates. Results: CBT in the adjusted naturalistic subsample was as effective as in the RCT. However, treatments lasted significantly longer under routine care conditions. Limitations: The samples included only a limited number of common predictor variables and stemmed from different countries. There might be additional covariates that could further improve the matching between the samples. Conclusions: CBT for depression in clinical practice may be as effective as manual-based treatments in RCTs when applied to comparable patients. The fact that similar effects were reached under routine conditions with more sessions, however, points to the potential to optimize treatments in clinical practice with respect to their efficiency.
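The second matching step described above pairs each routine-care patient with a comparable RCT patient by propensity score. A minimal greedy 1:1 nearest-neighbour matcher on precomputed scores might look like this; the abstract does not describe the exact matching algorithm or caliper, so both are assumptions:

```python
def greedy_match(treated, control, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching on propensity scores.

    `treated` and `control` are lists of (unit_id, propensity) pairs.
    Returns matched (treated_id, control_id) pairs; each control is
    used at most once, and pairs whose scores differ by more than
    `caliper` are skipped. The caliper value is illustrative, not
    from the study.
    """
    available = dict(control)  # unmatched controls: id -> propensity
    pairs = []
    # match treated units with the most extreme scores first
    for tid, ps in sorted(treated, key=lambda t: -t[1]):
        if not available:
            break
        cid = min(available, key=lambda c: abs(available[c] - ps))
        if abs(available[cid] - ps) <= caliper:
            pairs.append((tid, cid))
            del available[cid]
    return pairs
```

For example, with treated scores {0.80, 0.30} and control scores {0.82, 0.31, 0.50}, the matcher pairs 0.80 with 0.82 and 0.30 with 0.31, leaving the 0.50 control unused.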
Abstract:
INTRODUCTION The aim of the study was to identify the level of the Charlson comorbidity index (CCI) at which older patients (>70 years) with high-risk prostate cancer (PCa) achieve a survival benefit following radical prostatectomy (RP). METHODS We retrospectively analyzed 1008 older patients (>70 years) who underwent RP with pelvic lymph node dissection for high-risk prostate cancer (preoperative prostate-specific antigen >20 ng/mL or clinical stage ≥T2c or Gleason ≥8) at 14 tertiary institutions between 1988 and 2014. The study population was grouped into CCI <2 and ≥2 for analysis. Survival in each group was estimated with the Kaplan-Meier method and competing-risk Fine-Gray regression to identify the best explanatory multivariable model. The area under the curve (AUC) and the Akaike information criterion were used to identify the ideal cut-off for CCI. RESULTS The clinical and cancer characteristics were similar between the two groups. Kaplan-Meier analysis of non-cancer death, with survival estimated at 5 and 10 years, showed significantly worse outcomes for patients with CCI ≥2. In the multivariable model to determine the appropriate CCI cut-off point, a CCI of 2 had the better AUC and p value in the log-rank test. CONCLUSION Older patients with fewer comorbidities harboring high-risk PCa appear to benefit from RP. Sicker patients are more likely to die of non-prostate-cancer-related causes and are less likely to benefit from RP.
Abstract:
Gastrointestinal (GI) protein loss, due to lymphangiectasia or chronic inflammation, can be challenging to diagnose. This study evaluated the diagnostic accuracy of serum and fecal canine α1-proteinase inhibitor (cα1PI) concentrations for detecting crypt abscesses and/or lacteal dilation in dogs. Serum and fecal cα1PI concentrations were measured in 120 dogs undergoing GI tissue biopsies and were compared between dogs with and without crypt abscesses/lacteal dilation. Sensitivity and specificity were calculated for dichotomous outcomes. Serial serum cα1PI concentrations were also evaluated in 12 healthy corticosteroid-treated dogs. Serum cα1PI and albumin concentrations were significantly lower in dogs with crypt abscesses and/or lacteal dilation than in those without (both P <0.001), and more severe lesions were associated with lower serum cα1PI concentrations, higher 3-day mean fecal cα1PI concentrations, and lower serum/fecal cα1PI ratios. Serum and fecal cα1PI concentrations, and their ratios, distinguished dogs with moderate or severe GI crypt abscesses/lacteal dilation from dogs with only mild or no such lesions with moderate sensitivity (56-92%) and specificity (67-81%). Serum cα1PI concentrations increased during corticosteroid administration. We conclude that serum and fecal cα1PI concentrations reflect the severity of intestinal crypt abscesses/lacteal dilation in dogs. Due to its specificity for the GI tract, measurement of fecal cα1PI appears to be superior to serum cα1PI for diagnosing GI protein loss in dogs. In addition, the serum/fecal cα1PI ratio has improved accuracy in hypoalbuminemic dogs, but serum cα1PI concentrations should be interpreted with caution in corticosteroid-treated dogs.
Abstract:
PURPOSE To compare patient outcomes and complication rates after different decompression techniques or instrumented fusion (IF) in lumbar spinal stenosis (LSS). METHODS The multicentre study was based on Spine Tango data. Inclusion criteria were LSS with a posterior decompression and pre- and postoperative COMI assessment between 3 and 24 months. 1,176 cases were assigned to four groups: (1) laminotomy (n = 642), (2) hemilaminectomy (n = 196), (3) laminectomy (n = 230) and (4) laminectomy combined with IF (n = 108). Clinical outcomes were achievement of the minimum relevant change in COMI back and leg pain and COMI score (2.2 points), surgical and general complications, measures taken due to complications, and reintervention at the index level based on patient information. The inverse propensity score weighting method was used for adjustment. RESULTS Laminotomy, hemilaminectomy and laminectomy were significantly less beneficial than laminectomy in combination with IF regarding leg pain (ORs with 95% CI: 0.52, 0.34-0.81; 0.25, 0.15-0.41; 0.44, 0.27-0.72, respectively) and COMI score improvement (ORs with 95% CI: 0.51, 0.33-0.81; 0.30, 0.18-0.51; 0.48, 0.29-0.79, respectively). However, decompression alone caused significantly fewer surgical (ORs with 95% CI: 0.42, 0.26-0.69; 0.33, 0.17-0.63; 0.39, 0.21-0.71, respectively) and general complications (ORs with 95% CI: 0.11, 0.04-0.29; 0.03, 0.003-0.41; 0.25, 0.09-0.71, respectively) than laminectomy in combination with IF. Accordingly, the likelihood of measures being required was also significantly lower after laminotomy (OR 0.28, 95% CI 0.17-0.46), hemilaminectomy (OR 0.28, 95% CI 0.15-0.53) and laminectomy (OR 0.39, 95% CI 0.22-0.68) in comparison with laminectomy with IF. The likelihood of a reintervention was not significantly different between the treatment groups. DISCUSSION As already demonstrated in the literature, decompression in patients with LSS is a very effective treatment.
Despite better patient outcomes after laminectomy in combination with IF, caution is advised due to the higher rates of surgical and general complications and the measures they require. Based on the current study, laminotomy or laminectomy, rather than hemilaminectomy, is recommended for achieving minimum relevant pain relief.