140 results for "Complexity score"
Abstract:
BACKGROUND AND PURPOSE Previous studies have suggested that advanced age predicts worse outcome following mechanical thrombectomy. We assessed outcomes from 2 recent large prospective studies to determine the association among TICI, age, and outcome. MATERIALS AND METHODS Data from the Solitaire FR Thrombectomy for Acute Revascularization (STAR) trial, an international multicenter prospective single-arm thrombectomy study, and the Solitaire arm of the Solitaire FR With the Intention For Thrombectomy (SWIFT) trial were pooled. TICI was determined by core laboratory review. Good outcome was defined as an mRS score of 0-2 at 90 days. We analyzed the association among clinical outcome, successful-versus-unsuccessful reperfusion (TICI 2b-3 versus TICI 0-2a), and age (dichotomized across the median). RESULTS Two hundred sixty-nine of 291 patients treated with Solitaire in the STAR and SWIFT databases for whom TICI and 90-day outcome data were available were included. The median age was 70 years (interquartile range, 60-76 years), with an age range of 25-88 years. The mean age of patients 70 years of age or younger was 59 years, and it was 77 years for patients older than 70 years. There was no significant difference in baseline NIHSS scores or procedure time metrics between the age groups. Hemorrhage and device-related complications were more common in the younger age group, but the differences did not reach statistical significance. In absolute terms, the rate of good outcome was higher in the younger population (64% versus 44%, P < .001). However, the magnitude of benefit from successful reperfusion was higher in the group older than 70 years (OR, 4.82; 95% CI, 1.32-17.63 in the younger group versus OR, 7.32; 95% CI, 1.73-30.99 in the older group). CONCLUSIONS Successful reperfusion is the strongest predictor of good outcome following mechanical thrombectomy, and the magnitude of benefit is highest in the patient population older than 70 years of age.
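The age-stratified odds ratios reported above can be reproduced from 2x2 reperfusion-by-outcome tables. The sketch below is a minimal illustration with hypothetical counts (not the STAR/SWIFT data) using the standard log-OR confidence interval; the function name and counts are ours.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI from a 2x2 table:
    a = good outcome with TICI 2b-3, b = poor outcome with TICI 2b-3,
    c = good outcome with TICI 0-2a, d = poor outcome with TICI 0-2a."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only, one 2x2 table per age stratum.
for label, table in {"age <= 70": (80, 40, 10, 24), "age > 70": (45, 50, 5, 41)}.items():
    or_, lo, hi = odds_ratio_ci(*table)
    print(f"{label}: OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```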
Abstract:
OBJECTIVES To assess the clinical profile and long-term mortality in SYNTAX score II-based strata of patients who received percutaneous coronary interventions (PCI) in contemporary randomized trials. BACKGROUND The SYNTAX score II was developed in the randomized, all-comers SYNTAX trial population and is composed of 2 anatomical and 6 clinical variables. The interaction of these variables with the treatment provides individual long-term mortality predictions if a patient undergoes coronary artery bypass grafting (CABG) or PCI. METHODS Patient-level data (n=5433) from 7 contemporary coronary drug-eluting stent (DES) trials were pooled. The mortality for CABG or PCI was estimated for every patient. The difference in mortality estimates for these two revascularization strategies was used to divide the patients into three groups of theoretical treatment recommendation: PCI, CABG, or PCI/CABG (the latter indicating equipoise between CABG and PCI for long-term mortality). RESULTS The three groups had marked differences in their baseline characteristics. According to the predicted risk differences, 5115 patients could be treated either by PCI or CABG, 271 should be treated only by PCI, and CABG alone was recommended only rarely (n=47). At 3-year follow-up, according to the SYNTAX score II recommendations, patients recommended for CABG had higher mortality than the PCI and PCI/CABG groups (17.4%, 6.1%, and 5.3%, respectively; P<0.01). CONCLUSIONS The SYNTAX score II demonstrated the capability to help stratify PCI procedures.
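The grouping logic described in the methods, where the difference between a patient's predicted mortality under PCI and under CABG drives a theoretical treatment recommendation, can be sketched as below. The `Patient` class, the predicted-mortality fields, and the `margin` equipoise band are placeholders for illustration, not the published SYNTAX score II model or decision rule.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    pred_mortality_pci: float   # model-predicted long-term mortality if treated by PCI
    pred_mortality_cabg: float  # model-predicted long-term mortality if treated by CABG

def recommend(p: Patient, margin: float = 0.05) -> str:
    """Assign a theoretical recommendation from the predicted-mortality difference.
    `margin` is a hypothetical equipoise band, not the trial's actual cutoff."""
    diff = p.pred_mortality_pci - p.pred_mortality_cabg
    if diff > margin:
        return "CABG"
    if diff < -margin:
        return "PCI"
    return "PCI/CABG"  # equipoise between the two strategies

print(recommend(Patient(0.06, 0.05)))  # -> PCI/CABG
print(recommend(Patient(0.20, 0.08)))  # -> CABG
```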
Abstract:
OBJECTIVE The purpose of this study was to investigate outcomes of patients treated with prasugrel or clopidogrel after percutaneous coronary intervention (PCI) in a nationwide acute coronary syndrome (ACS) registry. BACKGROUND Prasugrel was found to be superior to clopidogrel in a randomized trial of ACS patients undergoing PCI. However, little is known about its efficacy in everyday practice. METHODS All ACS patients enrolled in the Acute Myocardial Infarction in Switzerland (AMIS)-Plus registry who underwent PCI and were treated with a thienopyridine P2Y12 inhibitor between January 2010 and December 2013 were included in this analysis. Patients were stratified according to treatment with prasugrel or clopidogrel, and outcomes were compared using propensity score matching. The primary endpoint was a composite of death, recurrent infarction, and stroke at hospital discharge. RESULTS Of 7621 patients, 2891 received prasugrel (38%) and 4730 received clopidogrel (62%). Independent predictors of in-hospital mortality were age, Killip class >2, STEMI, Charlson comorbidity index >1, and resuscitation prior to admission. After propensity score matching (2301 patients per group), the primary endpoint was significantly lower in prasugrel-treated patients (3.0% vs 4.3%; p=0.022), while bleeding events were more frequent (4.1% vs 3.0%; p=0.048). In-hospital mortality was significantly reduced (1.8% vs 3.1%; p=0.004), but no significant differences were observed in rates of recurrent infarction (0.8% vs 0.7%; p=1.00) or stroke (0.5% vs 0.6%; p=0.85). In a predefined subset of matched patients with one-year follow-up (n=1226), mortality between discharge and one year was not significantly reduced in prasugrel-treated patients (1.3% vs 1.9%, p=0.38). CONCLUSIONS In everyday practice in Switzerland, prasugrel is predominantly used in younger patients with STEMI undergoing primary PCI. A propensity score-matched analysis suggests a mortality benefit from prasugrel compared with clopidogrel in these patients.
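Propensity score matching, as used in this registry analysis, pairs treated and control patients with similar estimated probabilities of receiving treatment. A minimal sketch under our own assumptions (toy covariates, 1:1 nearest-neighbour matching with replacement and no caliper, which is a simplified variant of what such studies typically do):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_1to1(df: pd.DataFrame, treatment: str, covariates: list[str]) -> pd.DataFrame:
    """1:1 nearest-neighbour matching on the estimated propensity score."""
    ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df[treatment])
    df = df.assign(ps=ps_model.predict_proba(df[covariates])[:, 1])
    treated, control = df[df[treatment] == 1], df[df[treatment] == 0]
    nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
    _, idx = nn.kneighbors(treated[["ps"]])       # closest control for each treated patient
    matched_controls = control.iloc[idx.ravel()]  # matching with replacement
    return pd.concat([treated, matched_controls])

# Hypothetical toy data; the covariate names only echo predictors mentioned in the abstract.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "prasugrel": rng.integers(0, 2, 500),
    "age": rng.normal(65, 10, 500),
    "killip_gt2": rng.integers(0, 2, 500),
    "stemi": rng.integers(0, 2, 500),
})
matched = match_1to1(df, "prasugrel", ["age", "killip_gt2", "stemi"])
print(len(matched))
```

Outcome rates (e.g., the composite endpoint) would then be compared within the matched sample.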
Abstract:
PURPOSE To determine the predictive value of the vertebral trabecular bone score (TBS) alone or in addition to bone mineral density (BMD) with regard to fracture risk. METHODS Retrospective analysis of the relative contribution of BMD [measured at the femoral neck (FN), total hip (TH), and lumbar spine (LS)] and TBS with regard to the risk of incident clinical fractures in a representative cohort of elderly post-menopausal women previously participating in the Swiss Evaluation of the Methods of Measurement of Osteoporotic Fracture Risk study. RESULTS Complete datasets were available for 556 of 701 women (79%). Mean age was 76.1 years, mean LS BMD 0.863 g/cm², and mean TBS 1.195. LS BMD and LS TBS were moderately correlated (r² = 0.25). After a mean of 2.7 ± 0.8 years of follow-up, the incidence of fragility fractures was 9.4%. Age- and BMI-adjusted hazard ratios per standard deviation decrease (95% confidence intervals) were 1.58 (1.16-2.16), 1.77 (1.31-2.39), and 1.59 (1.21-2.09) for LS, FN, and TH BMD, respectively, and 2.01 (1.54-2.63) for TBS. Whereas 58% and 60% of fragility fractures occurred in women with a BMD T-score ≤ -2.5 and a TBS <1.150, respectively, combining these two thresholds identified 77% of all women with an osteoporotic fracture. CONCLUSIONS Lumbar spine TBS, alone or in combination with BMD, predicted incident clinical fracture risk in a representative population-based sample of elderly post-menopausal women.
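A "hazard ratio per standard deviation decrease" corresponds to a Cox model fitted on the standardized, sign-flipped predictor. A minimal sketch with the lifelines package on simulated data (not the study cohort); adding age and BMI as further columns would give the adjusted version reported above.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 556
tbs = rng.normal(1.195, 0.1, n)
# Simulated follow-up times and fracture indicators, for illustration only.
time = rng.exponential(2.7, n)
event = rng.random(n) < 0.094 * np.exp(-(tbs - tbs.mean()) / tbs.std())

df = pd.DataFrame({
    # Standardize and flip the sign so exp(coef) reads as "HR per 1 SD decrease".
    "tbs_per_sd_decrease": -(tbs - tbs.mean()) / tbs.std(),
    "time": time,
    "event": event.astype(int),
})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```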
Abstract:
Trabecular bone score (TBS) is a grey-level textural index of bone microarchitecture derived from lumbar spine dual-energy X-ray absorptiometry (DXA) images. TBS is a BMD-independent predictor of fracture risk. The objective of this meta-analysis was to determine whether TBS predicted fracture risk independently of FRAX probability and to examine their combined performance by adjusting the FRAX probability for TBS. We utilized individual-level data from 17,809 men and women in 14 prospective population-based cohorts. Baseline evaluation included TBS and the FRAX risk variables, and outcomes during follow-up (mean 6.7 years) comprised major osteoporotic fractures. The association between TBS, FRAX probabilities, and the risk of fracture was examined using an extension of the Poisson regression model in each cohort and for each sex and expressed as the gradient of risk (GR; hazard ratio per 1 SD change in the risk variable in the direction of increased risk). FRAX probabilities were adjusted for TBS using an adjustment factor derived from an independent cohort (the Manitoba Bone Density Cohort). Overall, the GR of TBS for major osteoporotic fracture was 1.44 (95% CI: 1.35-1.53) when adjusted for age and time since baseline and was similar in men and women (p > 0.10). When additionally adjusted for the FRAX 10-year probability of major osteoporotic fracture, TBS remained a significant, independent predictor of fracture (GR 1.32, 95% CI: 1.24-1.41). The adjustment of FRAX probability for TBS resulted in a small increase in the GR (1.76, 95% CI: 1.65-1.87 vs. 1.70, 95% CI: 1.60-1.81). A smaller change in GR was observed for hip fracture (FRAX hip fracture probability GR 2.25 vs. 2.22). TBS is a significant predictor of fracture risk independently of FRAX. The findings support the use of TBS as a potential adjustment for FRAX probability, though the impact of the adjustment remains to be determined in the context of clinical assessment guidelines.
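The gradient of risk can be illustrated with a Poisson regression on person-time data: the coefficient on the standardized predictor, exponentiated, is the GR per 1 SD change. This is a simplified sketch on simulated data, not the cohorts' actual model (which also handled age, time since baseline, and sex); the variable names and rate parameters are ours.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
tbs_z = rng.normal(0, 1, n)              # TBS expressed in SD units
followup = rng.uniform(3, 10, n)         # person-years of follow-up
rate = 0.01 * np.exp(-0.36 * tbs_z)      # lower TBS -> higher simulated fracture rate
fractures = rng.poisson(rate * followup)

# Poisson GLM with log person-time as offset; sign flipped so GR is per SD decrease.
X = sm.add_constant(pd.DataFrame({"tbs_per_sd_decrease": -tbs_z}))
model = sm.GLM(fractures, X, family=sm.families.Poisson(),
               offset=np.log(followup)).fit()
gr = np.exp(model.params["tbs_per_sd_decrease"])
print(f"Gradient of risk per SD decrease in TBS: {gr:.2f}")
```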
Abstract:
Since the immunochemical identification of the bullous pemphigoid antigen 230 (BP230) as one of the major target autoantigens of bullous pemphigoid (BP) in 1981, our understanding of this protein has increased significantly. Cloning of its gene and the development and characterization of animal models carrying engineered gene mutations or spontaneous mouse mutations have revealed an unexpected complexity of the gene encoding BP230. The latter, now called dystonin (DST), is composed of at least 100 exons and gives rise to three major isoforms: an epithelial, a neuronal, and a muscular isoform, named BPAG1e (corresponding to the original BP230), BPAG1a, and BPAG1b, respectively. The various BPAG1 isoforms play a key role in fundamental processes such as cell adhesion, cytoskeleton organization, and cell migration. Genetic defects of the BPAG1 isoforms underlie epidermolysis bullosa as well as complex, devastating neurological diseases. In this review, we summarize recent advances in our knowledge of the BPAG1 isoforms, their role in various biological processes, and their involvement in human diseases.
Abstract:
BACKGROUND A single non-invasive gene expression profiling (GEP) test (AlloMap®) is often used to discriminate whether a heart transplant recipient is at low risk of acute cellular rejection at the time of testing. In a randomized trial, use of the test (a GEP score from 0-40) was shown to be non-inferior to routine endomyocardial biopsy for surveillance after heart transplantation in selected low-risk patients with respect to clinical outcomes. Recently, it was suggested that the within-patient variability of consecutive GEP scores may be used to independently predict future clinical events; however, further studies were recommended. Here we performed an analysis of an independent patient population to determine the prognostic utility of within-patient variability of GEP scores in predicting future clinical events. METHODS We defined the GEP score variability as the standard deviation of four GEP scores collected ≥315 days post-transplantation. Of the 737 patients from the Cardiac Allograft Rejection Gene Expression Observational (CARGO) II trial, 36 were assigned to the composite event group (death, re-transplantation or graft failure ≥315 days post-transplantation and within 3 years of the final GEP test) and 55 were assigned to the control group (non-event patients). In this case-control study, the performance of GEP score variability in predicting future events was evaluated by the area under the receiver operating characteristic curve (AUC ROC). The negative predictive values (NPV) and positive predictive values (PPV), including 95% confidence intervals (CI), of GEP score variability were calculated. RESULTS The estimated prevalence of events was 17%. Events occurred at a median of 391 (interquartile range 376) days after the final GEP test. The AUC ROC of GEP score variability for the prediction of a composite event was 0.72 (95% CI 0.6-0.8). The NPV for a GEP score variability of 0.6 was 97% (95% CI 91.4-100.0); the PPV for a GEP score variability of 1.5 was 35.4% (95% CI 13.5-75.8). CONCLUSION In heart transplant recipients, GEP score variability may be used to predict the probability that a composite event will occur within 3 years after the last GEP score. TRIAL REGISTRATION Clinicaltrials.gov identifier NCT00761787.
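The variability metric defined in the methods (the standard deviation of four GEP scores) and threshold-based NPV/PPV estimates reduce to a few lines of code. The sketch below uses made-up patient data, treats variability at or above a chosen cutoff as "test positive", and uses the sample standard deviation; the abstract does not state whether sample or population SD was used, so that choice is an assumption.

```python
import numpy as np

def gep_variability(scores):
    """Standard deviation of the four GEP scores collected >= 315 days post-transplant."""
    return float(np.std(scores, ddof=1))  # sample SD (assumption)

def npv_ppv(variabilities, events, threshold):
    """NPV and PPV when variability >= threshold is read as predicting a composite event."""
    pred = np.asarray(variabilities) >= threshold
    events = np.asarray(events, dtype=bool)
    tp, fp = np.sum(pred & events), np.sum(pred & ~events)
    tn, fn = np.sum(~pred & ~events), np.sum(~pred & events)
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    npv = tn / (tn + fn) if (tn + fn) else float("nan")
    return npv, ppv

# Hypothetical cohort: each row holds one patient's four GEP scores plus an event flag.
scores = [[30, 31, 30, 32], [20, 34, 25, 37], [28, 28, 29, 28], [22, 35, 30, 24]]
events = [0, 1, 0, 1]
var = [gep_variability(s) for s in scores]
print(npv_ppv(var, events, threshold=1.5))
```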
Abstract:
OBJECTIVE To assess whether palliative primary tumor resection in colorectal cancer patients with incurable stage IV disease is associated with improved survival. BACKGROUND There is a heated debate regarding whether or not an asymptomatic primary tumor should be removed in patients with incurable stage IV colorectal disease. METHODS Stage IV colorectal cancer patients were identified in the Surveillance, Epidemiology, and End Results database between 1998 and 2009. Patients undergoing surgery to metastatic sites were excluded. Overall survival and cancer-specific survival were compared between patients with and without palliative primary tumor resection using risk-adjusted Cox proportional hazard regression models and stratified propensity score methods. RESULTS Overall, 37,793 stage IV colorectal cancer patients were identified. Of those, 23,004 (60.9%) underwent palliative primary tumor resection. The rate of patients undergoing palliative primary cancer resection decreased from 68.4% in 1998 to 50.7% in 2009 (P < 0.001). In Cox regression analysis after propensity score matching, primary cancer resection was associated with significantly improved overall survival [hazard ratio (HR) of death = 0.40, 95% confidence interval (CI) = 0.39-0.42, P < 0.001] and cancer-specific survival (HR of death = 0.39, 95% CI = 0.38-0.40, P < 0.001). The benefit of palliative primary cancer resection persisted during the period from 1998 to 2009, with HRs equal to or less than 0.47 for both overall and cancer-specific survival. CONCLUSIONS On the basis of this population-based cohort of stage IV colorectal cancer patients, palliative primary tumor resection was associated with improved overall and cancer-specific survival. Therefore, the dogma that an asymptomatic primary tumor should never be resected in patients with unresectable colorectal cancer metastases must be questioned.
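A stratified propensity score analysis like the one described here can be sketched as a Cox model stratified on propensity score quantiles. The data, the single covariate, and the quintile choice below are our own simplifications for illustration, not the study's specification.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 3000
df = pd.DataFrame({
    "resection": rng.integers(0, 2, n),   # palliative primary tumor resection (yes/no)
    "age": rng.normal(68, 11, n),
    "time": rng.exponential(18, n),       # simulated months of follow-up
    "death": rng.integers(0, 2, n),
})
# Propensity score for undergoing resection, then quintile strata of that score.
ps = (LogisticRegression(max_iter=1000)
      .fit(df[["age"]], df["resection"])
      .predict_proba(df[["age"]])[:, 1])
df["ps_stratum"] = pd.qcut(ps, 5, labels=False)

# Cox model stratified on the propensity quintiles; remaining columns act as covariates.
cph = CoxPHFitter().fit(df, duration_col="time", event_col="death", strata=["ps_stratum"])
print(cph.summary.loc["resection", "exp(coef)"])  # HR of death for resection vs. none
```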
Abstract:
BACKGROUND & AIMS Cirrhotic patients with acute decompensation frequently develop acute-on-chronic liver failure (ACLF), which is associated with high mortality rates. Recently, a specific score for patients with ACLF has been developed using the CANONIC study database. The aims of this study were to develop and validate the CLIF-C AD score, a specific prognostic score for hospitalised cirrhotic patients with acute decompensation (AD) but without ACLF, and to compare it with the Child-Pugh, MELD, and MELD-Na scores. METHODS The derivation set included 1016 CANONIC study patients without ACLF. Proportional hazards models considering liver transplantation as a competing risk were used to identify score parameters. Estimated coefficients were used as relative weights to compute the CLIF-C AD score (CLIF-C ADs). External validation was performed in 225 cirrhotic AD patients. The CLIF-C ADs was also tested for sequential use. RESULTS Age, serum sodium, white-cell count, creatinine and INR were selected as the best predictors of mortality. The C-index was better for the CLIF-C ADs than for the Child-Pugh, MELD, and MELD-Na scores at predicting 3- and 12-month mortality in the derivation, internal validation and external datasets. The CLIF-C ADs improved in its ability to predict 3-month mortality using data from days 2, 3-7, and 8-15 (C-index: 0.72, 0.75, and 0.77, respectively). CONCLUSIONS The new CLIF-C ADs is more accurate than the other liver scores in predicting prognosis in hospitalised cirrhotic patients without ACLF. The CLIF-C ADs may therefore be used to identify a high-risk cohort for intensive management and a low-risk group that may be discharged early.
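The score construction described in the methods (estimated model coefficients used as relative weights over age, sodium, white-cell count, creatinine and INR) amounts to a weighted linear combination of the predictors, typically with log-transformed laboratory values. The weights, log transforms, and rescaling below are placeholders chosen for illustration, not the published CLIF-C ADs formula.

```python
import math

# Placeholder weights for illustration; the published CLIF-C ADs coefficients differ.
WEIGHTS = {"age": 0.03, "sodium": -0.05, "ln_wbc": 0.9, "ln_creatinine": 0.7, "ln_inr": 1.7}

def clif_c_ad_like_score(age, sodium, wbc, creatinine, inr):
    """Weighted sum of the five predictors selected in the derivation set,
    shifted and rescaled to a convenient range (illustrative scaling only)."""
    x = {"age": age, "sodium": sodium,
         "ln_wbc": math.log(wbc), "ln_creatinine": math.log(creatinine), "ln_inr": math.log(inr)}
    linear = sum(WEIGHTS[k] * v for k, v in x.items())
    return 10 * (linear + 8)  # hypothetical shift/scale

print(clif_c_ad_like_score(age=58, sodium=133, wbc=9.2, creatinine=1.1, inr=1.4))
```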
Abstract:
BACKGROUND Biomarkers of myocardial injury increase frequently during transcatheter aortic valve implantation (TAVI). The impact of postprocedural cardiac troponin (cTn) elevation on short-term outcomes remains controversial, and the association with long-term prognosis is unknown. METHODS AND RESULTS We evaluated 577 consecutive patients with severe aortic stenosis treated with TAVI between 2007 and 2012. Myocardial injury, defined according to the Valve Academic Research Consortium (VARC)-2 as post-TAVI cardiac troponin T (cTnT) >15× the upper limit of normal, occurred in 338 patients (58.1%). In multivariate analyses, myocardial injury was associated with higher risk of all-cause mortality at 30 days (adjusted hazard ratio [HR], 8.77; 95% CI, 2.07-37.12; P=0.003) and remained a significant predictor at 2 years (adjusted HR, 1.98; 95% CI, 1.36-2.88; P<0.001). Higher cTnT cutoffs did not add incremental predictive value compared with the VARC-2-defined cutoff. Whereas myocardial injury occurred more frequently in patients with versus without coronary artery disease (CAD), the relative impact of cTnT elevation on 2-year mortality did not differ between patients without CAD (adjusted HR, 2.59; 95% CI, 1.27-5.26; P=0.009) and those with CAD (adjusted HR, 1.71; 95% CI, 1.10-2.65; P=0.018; P for interaction=0.24). Mortality rates at 2 years were lowest in patients without CAD and no myocardial injury (11.6%) and highest in patients with complex CAD (SYNTAX score >22) and myocardial injury (41.1%). CONCLUSIONS VARC-2-defined cTnT elevation emerged as a strong, independent predictor of 30-day mortality and remained a modest, but significant, predictor throughout 2 years post-TAVI. The prognostic value of cTnT elevation was modified by the presence and complexity of underlying CAD with highest mortality risk observed in patients combining SYNTAX score >22 and evidence of myocardial injury.
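The VARC-2 definition used above (post-TAVI cTnT greater than 15 times the upper limit of normal) and the CAD-complexity grouping reduce to a simple classification, sketched below. The ULN value is a hypothetical, assay-dependent placeholder, and the grouping function merely mirrors the lowest- and highest-mortality strata described in the abstract.

```python
ULN_CTNT = 0.014  # hypothetical upper limit of normal for cTnT (assay dependent), ng/mL

def varc2_myocardial_injury(post_tavi_ctnt: float, uln: float = ULN_CTNT) -> bool:
    """VARC-2 myocardial injury: post-procedural cTnT > 15x the upper limit of normal."""
    return post_tavi_ctnt > 15 * uln

def risk_group(has_cad: bool, syntax_score: float, injury: bool) -> str:
    """Descriptive grouping mirroring the abstract's lowest/highest 2-year mortality strata."""
    if not has_cad and not injury:
        return "no CAD, no myocardial injury (lowest observed mortality)"
    if has_cad and syntax_score > 22 and injury:
        return "complex CAD (SYNTAX > 22) with myocardial injury (highest observed mortality)"
    return "intermediate"

print(varc2_myocardial_injury(0.35))  # True
print(risk_group(True, 30, True))
```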
Abstract:
The logic PJ is a probabilistic logic defined by adding (non-iterated) probability operators to the basic justification logic J. In this paper we establish upper and lower bounds for the complexity of the derivability problem in the logic PJ. The main result of the paper is that the complexity of the derivability problem in PJ remains the same as the complexity of the derivability problem in the underlying logic J, namely Π^p_2-complete. This implies that the probability operators do not increase the complexity of the logic, although they arguably enrich the expressiveness of the language.
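For readers less familiar with the notation, the complexity class in question is the second level of the polynomial hierarchy; the characterization below is the standard textbook one and is not taken from the abstract.

```latex
% \Pi_2^p: the co-class of \Sigma_2^p at the second level of the polynomial hierarchy.
\[
  \Pi_2^p \;=\; \mathrm{co}\Sigma_2^p \;=\; \mathrm{coNP}^{\mathrm{NP}}
\]
% A \Pi_2^p-complete problem is thus interreducible with deciding the validity of
% \forall\exists-quantified Boolean formulas, the canonical \Pi_2^p-complete problem.
```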
Abstract:
OBJECTIVE We endeavored to develop an unruptured intracranial aneurysm (UIA) treatment score (UIATS) model that includes and quantifies key factors involved in clinical decision-making in the management of UIAs and to assess agreement for this model among specialists in UIA management and research. METHODS An international multidisciplinary (neurosurgery, neuroradiology, neurology, clinical epidemiology) group of 69 specialists was convened to develop and validate the UIATS model using a Delphi consensus. For internal (39 panel members involved in identification of relevant features) and external validation (30 independent external reviewers), 30 selected UIA cases were used to analyze agreement with UIATS management recommendations based on a 5-point Likert scale (5 indicating strong agreement). Interrater agreement (IRA) was assessed with standardized coefficients of dispersion (vr*) (vr* = 0 indicating excellent agreement and vr* = 1 indicating poor agreement). RESULTS The UIATS accounts for 29 key factors in UIA management. Agreement with UIATS (mean Likert scores) was 4.2 (95% confidence interval [CI] 4.1-4.3) per reviewer for both reviewer cohorts; agreement per case was 4.3 (95% CI 4.1-4.4) for panel members and 4.5 (95% CI 4.3-4.6) for external reviewers (p = 0.017). Mean Likert scores were 4.2 (95% CI 4.1-4.3) for interventional reviewers (n = 56) and 4.1 (95% CI 3.9-4.4) for noninterventional reviewers (n = 12) (p = 0.290). Overall IRA (vr*) for both cohorts was 0.026 (95% CI 0.019-0.033). CONCLUSIONS This novel UIA decision guidance study captures an excellent consensus among highly informed individuals on UIA management, irrespective of their underlying specialty. Clinicians can use the UIATS as a comprehensive mechanism for indicating how a large group of specialists might manage an individual patient with a UIA.