792 results for Propensity score
Abstract:
Trabecular bone score (TBS) is based on textural analysis of DXA images and is intended to reflect the degradation of trabecular structure that characterises osteoporosis. Yet its discriminative power in fracture studies remains unexplained, as prior biomechanical tests found no correlation with vertebral strength. To check whether that result was due to an unrealistic set-up, and to cover a wide range of loading scenarios, we used data from three previous biomechanical studies with different experimental settings. They involved the compressive failure of 62 human lumbar vertebrae loaded 1) via intervertebral discs to mimic the in vivo situation (“full vertebra”), 2) via the classical endplate embedding (“vertebral body”), or 3) via a ball joint to induce anterior wedge failure (“vertebral section”). HR-pQCT scans acquired prior to testing were used to simulate anterior-posterior DXA, from which areal bone mineral density (aBMD) and the initial slope of the variogram (ISV), the early definition of TBS, were evaluated. Finally, the relation of aBMD and ISV to failure load (Fexp) and apparent failure stress (σexp) was assessed, and their relative contributions to a multi-linear model were quantified via ANOVA. We found that, unlike aBMD, ISV did not correlate significantly with Fexp and σexp, except in the “vertebral body” case (r² = 0.396, p = 0.028). Aside from the “vertebral section” set-up, where it explained only 6.4% of σexp (p = 0.037), it brought no significant improvement over aBMD alone. These results indicate that ISV, a replica of TBS, is a poor surrogate for vertebral strength regardless of the testing set-up, which supports the prior observations and raises all the more the question of the deterministic factors underlying the statistical relationship between TBS and vertebral fracture risk.
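The ISV named above is, by its early definition, the initial slope of the grey-level variogram of the simulated DXA image. A minimal NumPy sketch follows; the restriction to horizontal/vertical lags and the linear fit over the first few lags are assumptions for illustration, not the study's exact recipe:

```python
import numpy as np

def initial_slope_of_variogram(image, max_lag=4):
    """Estimate the initial slope of the variogram (ISV) of a 2D grey-level image.

    The experimental variogram V(h) is the mean squared grey-level difference
    between pixels separated by lag h (horizontal and vertical lags only here);
    the ISV is taken as the slope of a least-squares line through (h, V(h))
    for the first few lags. Lag range and direction set are assumptions.
    """
    img = np.asarray(image, dtype=float)
    lags, values = [], []
    for h in range(1, max_lag + 1):
        dx = (img[:, h:] - img[:, :-h]) ** 2   # horizontal pixel pairs
        dy = (img[h:, :] - img[:-h, :]) ** 2   # vertical pixel pairs
        lags.append(h)
        values.append((dx.mean() + dy.mean()) / 2.0)
    slope, _intercept = np.polyfit(lags, values, 1)  # degree-1 fit: V(h) vs h
    return slope

# A texture whose grey levels vary smoothly has a small but growing variogram,
# so its ISV is positive; a perfectly flat image has an ISV of zero.
smooth = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
assert initial_slope_of_variogram(smooth) > 0
```

In practice TBS/ISV implementations operate on the projected DXA grey levels after calibration; this sketch only shows the variogram-slope idea itself.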
Abstract:
One key hypothesis in the study of brain size evolution is the expensive tissue hypothesis: the idea that increased investment in the brain should be compensated by decreased investment in other costly organs, for instance the gut. Although the hypothesis is supported by both comparative and experimental evidence, little is known about the potential changes in energetic requirements or digestive traits that follow such evolutionary shifts in brain and gut size. Organisms may meet the greater metabolic requirements of larger brains despite smaller guts via increased food intake or better digestion. But increased investment in the brain may also hamper somatic growth. To test these hypotheses, we used guppy (Poecilia reticulata) brain size selection lines with a pronounced negative association between brain and gut size and investigated feeding propensity, digestive efficiency (DE), and juvenile growth rate. We did not find any difference in feeding propensity or DE between large- and small-brained individuals. Instead, we found that large-brained females grew more slowly during the first 10 weeks after birth. Our study provides experimental support that investment in larger brains at the expense of gut tissue carries costs that are not necessarily compensated by a more efficient digestive system.
Abstract:
BACKGROUND AND PURPOSE Previous studies have suggested that advanced age predicts worse outcome following mechanical thrombectomy. We assessed outcomes from 2 recent large prospective studies to determine the association among TICI, age, and outcome. MATERIALS AND METHODS Data from the Solitaire FR Thrombectomy for Acute Revascularization (STAR) trial, an international multicenter prospective single-arm thrombectomy study, and the Solitaire arm of the Solitaire FR With the Intention For Thrombectomy (SWIFT) trial were pooled. TICI was determined by core laboratory review. Good outcome was defined as an mRS score of 0-2 at 90 days. We analyzed the association among clinical outcome, successful-versus-unsuccessful reperfusion (TICI 2b-3 versus TICI 0-2a), and age (dichotomized across the median). RESULTS Two hundred sixty-nine of 291 patients treated with Solitaire in the STAR and SWIFT databases for whom TICI and 90-day outcome data were available were included. The median age was 70 years (interquartile range, 60-76 years), with an age range of 25-88 years. The mean age of patients 70 years of age or younger was 59 years, and it was 77 years for patients older than 70 years. There was no significant difference between the age groups in baseline NIHSS scores or procedure time metrics. Hemorrhage and device-related complications were more common in the younger age group, but the differences did not reach statistical significance. In absolute terms, the rate of good outcome was higher in the younger population (64% versus 44%, P < .001). However, the magnitude of benefit from successful reperfusion was higher in the patients 70 years of age and older (OR, 4.82; 95% CI, 1.32-17.63 versus OR, 7.32; 95% CI, 1.73-30.99). CONCLUSIONS Successful reperfusion is the strongest predictor of good outcome following mechanical thrombectomy, and the magnitude of benefit is highest in the patient population older than 70 years of age.
Abstract:
OBJECTIVES To assess the clinical profile and long-term mortality of SYNTAX score II based strata of patients who received percutaneous coronary intervention (PCI) in contemporary randomized trials. BACKGROUND The SYNTAX score II was developed in the randomized, all-comers SYNTAX trial population and is composed of 2 anatomical and 6 clinical variables. The interaction of these variables with treatment provides individual long-term mortality predictions for a patient undergoing coronary artery bypass grafting (CABG) or PCI. METHODS Patient-level data (n = 5433) from 7 contemporary coronary drug-eluting stent (DES) trials were pooled. Mortality under CABG and under PCI was estimated for every patient. The difference between the mortality estimates for the two revascularization strategies was used to divide the patients into three groups of theoretical treatment recommendation: PCI, CABG, or PCI/CABG (the latter denoting equipoise between CABG and PCI for long-term mortality). RESULTS The three groups had marked differences in their baseline characteristics. According to the predicted risk differences, 5115 patients could be treated by either PCI or CABG, 271 should be treated only by PCI, and CABG was recommended rarely (n = 47). At 3-year follow-up, according to the SYNTAX score II recommendations, patients recommended for CABG had higher mortality than the PCI and PCI/CABG groups (17.4%, 6.1%, and 5.3%, respectively; P < 0.01). CONCLUSIONS The SYNTAX score II demonstrated the capability to help stratify PCI procedures.
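The grouping step described above turns two per-patient mortality predictions into a treatment recommendation. A hedged sketch of that logic; the 5-percentage-point decision margin is our illustrative assumption, since the trial derived its threshold from the prediction intervals rather than a fixed margin:

```python
def treatment_recommendation(pred_pci, pred_cabg, margin=0.05):
    """Assign a theoretical treatment recommendation from a patient's
    predicted long-term mortality under PCI and under CABG.

    The absolute-difference margin is an illustrative assumption, not the
    SYNTAX score II's actual rule.
    """
    if pred_pci - pred_cabg > margin:
        return "CABG"       # PCI predicted clearly worse -> recommend CABG
    if pred_cabg - pred_pci > margin:
        return "PCI"        # CABG predicted clearly worse -> recommend PCI
    return "PCI/CABG"       # equipoise: either strategy acceptable

# Made-up predicted mortalities for three hypothetical patients:
assert treatment_recommendation(0.04, 0.05) == "PCI/CABG"
assert treatment_recommendation(0.20, 0.08) == "CABG"
assert treatment_recommendation(0.06, 0.15) == "PCI"
```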
Abstract:
PURPOSE To determine the predictive value of the vertebral trabecular bone score (TBS), alone or in addition to bone mineral density (BMD), with regard to fracture risk. METHODS Retrospective analysis of the relative contributions of BMD [measured at the femoral neck (FN), total hip (TH), and lumbar spine (LS)] and TBS to the risk of incident clinical fractures in a representative cohort of elderly post-menopausal women previously participating in the Swiss Evaluation of the Methods of Measurement of Osteoporotic Fracture Risk study. RESULTS Complete datasets were available for 556 of 701 women (79%). Mean age was 76.1 years, LS BMD 0.863 g/cm², and TBS 1.195. LS BMD and LS TBS were moderately correlated (r² = 0.25). After a mean of 2.7 ± 0.8 years of follow-up, the incidence of fragility fractures was 9.4%. Age- and BMI-adjusted hazard ratios per standard deviation decrease (95% confidence intervals) were 1.58 (1.16-2.16), 1.77 (1.31-2.39), and 1.59 (1.21-2.09) for LS, FN, and TH BMD, respectively, and 2.01 (1.54-2.63) for TBS. Whereas 58% and 60% of fragility fractures occurred in women with a BMD T-score ≤ -2.5 and a TBS < 1.150, respectively, combining these two thresholds identified 77% of all women with an osteoporotic fracture. CONCLUSIONS Lumbar spine TBS, alone or in combination with BMD, predicted incident clinical fracture risk in a representative population-based sample of elderly post-menopausal women.
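The combined case-finding rule reported above (flag on BMD T-score ≤ -2.5 or TBS < 1.150) can be sketched in a few lines; the function name and the patient values are hypothetical, and the abstract reports the thresholds only as study observations, not as a validated clinical rule:

```python
def flag_high_risk(bmd_t_score, tbs, t_threshold=-2.5, tbs_threshold=1.150):
    """Flag a woman if either criterion from the abstract holds: a BMD
    T-score of -2.5 or lower OR a TBS below 1.150. Combining both thresholds
    identified 77% of women with an osteoporotic fracture in the study;
    treating this as a decision rule is an illustrative assumption.
    """
    return bmd_t_score <= t_threshold or tbs < tbs_threshold

# Illustrative (made-up) patient values:
assert flag_high_risk(-2.7, 1.30)       # osteoporotic BMD alone triggers
assert flag_high_risk(-1.8, 1.10)       # degraded TBS alone triggers
assert not flag_high_risk(-1.0, 1.30)   # neither criterion met
```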
Abstract:
Trabecular bone score (TBS) is a grey-level textural index of bone microarchitecture derived from lumbar spine dual-energy X-ray absorptiometry (DXA) images. TBS is a BMD-independent predictor of fracture risk. The objective of this meta-analysis was to determine whether TBS predicted fracture risk independently of FRAX probability and to examine their combined performance by adjusting the FRAX probability for TBS. We utilized individual-level data from 17,809 men and women in 14 prospective population-based cohorts. Baseline evaluation included TBS and the FRAX risk variables, and outcomes during follow-up (mean 6.7 years) comprised major osteoporotic fractures. The association between TBS, FRAX probabilities, and the risk of fracture was examined using an extension of the Poisson regression model in each cohort and for each sex and expressed as the gradient of risk (GR; hazard ratio per 1 SD change in the risk variable in the direction of increased risk). FRAX probabilities were adjusted for TBS using an adjustment factor derived from an independent cohort (the Manitoba Bone Density Cohort). Overall, the GR of TBS for major osteoporotic fracture was 1.44 (95% CI: 1.35-1.53) when adjusted for age and time since baseline and was similar in men and women (p > 0.10). When additionally adjusted for the FRAX 10-year probability of major osteoporotic fracture, TBS remained a significant, independent predictor of fracture (GR 1.32, 95% CI: 1.24-1.41). The adjustment of FRAX probability for TBS resulted in a small increase in the GR (1.76, 95% CI: 1.65-1.87 vs. 1.70, 95% CI: 1.60-1.81). A smaller change in GR for hip fracture was observed (FRAX hip fracture probability GR 2.25 vs. 2.22). TBS is a significant predictor of fracture risk independently of FRAX. The findings support the use of TBS as a potential adjustment for FRAX probability, though the impact of the adjustment remains to be determined in the context of clinical assessment guidelines.
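The gradient of risk (GR) used in this meta-analysis is a hazard ratio per 1 SD change. Under the log-linear model typical of such Poisson-regression analyses, the hazard for an individual z SDs from the cohort mean scales as GR raised to the power z; the function below is our illustrative sketch of that interpretation:

```python
def relative_hazard(gr_per_sd, z):
    """Hazard relative to an individual at the cohort mean, given a gradient
    of risk (GR, hazard ratio per 1 SD) and a z-score measured in the
    direction of increased risk, assuming the usual log-linear risk model.
    """
    return gr_per_sd ** z

# With the reported GR of 1.44 for TBS, a woman whose TBS is 2 SD below the
# mean has roughly double the major osteoporotic fracture hazard (1.44² ≈ 2.07):
assert abs(relative_hazard(1.44, 2) - 2.0736) < 1e-9
assert relative_hazard(1.44, 0) == 1.0   # at the mean, no excess hazard
```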
Abstract:
BACKGROUND A single non-invasive gene expression profiling (GEP) test (AlloMap®) is often used to discriminate whether a heart transplant recipient is at low risk of acute cellular rejection at the time of testing. In a randomized trial, use of the test (a GEP score from 0-40) was shown to be non-inferior to routine endomyocardial biopsy for surveillance after heart transplantation in selected low-risk patients with respect to clinical outcomes. Recently, it was suggested that the within-patient variability of consecutive GEP scores may be used to independently predict future clinical events; however, further studies were recommended. Here we analyzed an independent patient population to determine the prognostic utility of within-patient variability of GEP scores in predicting future clinical events. METHODS We defined GEP score variability as the standard deviation of four GEP scores collected ≥315 days post-transplantation. Of the 737 patients from the Cardiac Allograft Rejection Gene Expression Observational (CARGO) II trial, 36 were assigned to the composite event group (death, re-transplantation, or graft failure ≥315 days post-transplantation and within 3 years of the final GEP test) and 55 were assigned to the control group (non-event patients). In this case-control study, the performance of GEP score variability in predicting future events was evaluated by the area under the receiver operating characteristic curve (AUC ROC). The negative predictive values (NPV) and positive predictive values (PPV), including 95% confidence intervals (CI), of GEP score variability were calculated. RESULTS The estimated prevalence of events was 17%. Events occurred at a median of 391 (interquartile range 376) days after the final GEP test. The GEP variability AUC ROC for the prediction of a composite event was 0.72 (95% CI 0.6-0.8). The NPV for a GEP score variability of 0.6 was 97% (95% CI 91.4-100.0); the PPV for a GEP score variability of 1.5 was 35.4% (95% CI 13.5-75.8). CONCLUSION In heart transplant recipients, GEP score variability may be used to predict the probability that a composite event will occur within 3 years after the last GEP score. TRIAL REGISTRATION Clinicaltrials.gov identifier NCT00761787.
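The abstract defines the predictor concretely: the standard deviation of four GEP scores. A minimal sketch using the standard library; the abstract does not say whether the sample or population SD was used, so the choice of `statistics.stdev` (sample SD) here is an assumption, and the score values are made up:

```python
import statistics

def gep_score_variability(scores):
    """Within-patient GEP score variability: the standard deviation of four
    GEP scores collected >=315 days post-transplantation, per the abstract.
    Sample SD is used; the study may have used the population SD instead.
    """
    if len(scores) != 4:
        raise ValueError("variability is defined over exactly four GEP scores")
    return statistics.stdev(scores)

# Thresholds reported in the abstract: a variability of 0.6 had an NPV of 97%,
# and a variability of 1.5 had a PPV of 35.4%. Illustrative score series:
stable = gep_score_variability([30, 30, 31, 31])    # ≈ 0.58, below 0.6
volatile = gep_score_variability([25, 33, 28, 36])  # ≈ 4.93, above 1.5
assert stable < 0.6 and volatile > 1.5
```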
Abstract:
BACKGROUND & AIMS Cirrhotic patients with acute decompensation frequently develop acute-on-chronic liver failure (ACLF), which is associated with high mortality rates. Recently, a specific score for these patients was developed using the CANONIC study database. The aims of this study were to develop and validate the CLIF-C AD score (CLIF-C ADs), a specific prognostic score for hospitalised cirrhotic patients with acute decompensation (AD) but without ACLF, and to compare it with the Child-Pugh, MELD, and MELD-Na scores. METHODS The derivation set included 1016 CANONIC study patients without ACLF. Proportional hazards models treating liver transplantation as a competing risk were used to identify score parameters. Estimated coefficients were used as relative weights to compute the CLIF-C ADs. External validation was performed in 225 cirrhotic AD patients. The CLIF-C ADs was also tested for sequential use. RESULTS Age, serum sodium, white-cell count, creatinine, and INR were selected as the best predictors of mortality. The C-index was better for the CLIF-C ADs than for the Child-Pugh, MELD, and MELD-Na scores at predicting 3- and 12-month mortality in the derivation, internal validation, and external datasets. The ability of the CLIF-C ADs to predict 3-month mortality improved when using data from days 2, 3-7, and 8-15 (C-index: 0.72, 0.75, and 0.77, respectively). CONCLUSIONS The new CLIF-C ADs is more accurate than other liver scores in predicting prognosis in hospitalised cirrhotic patients without ACLF. The CLIF-C ADs may therefore be used to identify a high-risk cohort for intensive management and a low-risk group that may be discharged early.
Abstract:
OBJECTIVE We endeavored to develop an unruptured intracranial aneurysm (UIA) treatment score (UIATS) model that includes and quantifies key factors involved in clinical decision-making in the management of UIAs and to assess agreement for this model among specialists in UIA management and research. METHODS An international multidisciplinary (neurosurgery, neuroradiology, neurology, clinical epidemiology) group of 69 specialists was convened to develop and validate the UIATS model using a Delphi consensus. For internal (39 panel members involved in identification of relevant features) and external validation (30 independent external reviewers), 30 selected UIA cases were used to analyze agreement with UIATS management recommendations based on a 5-point Likert scale (5 indicating strong agreement). Interrater agreement (IRA) was assessed with standardized coefficients of dispersion (vr*) (vr* = 0 indicating excellent agreement and vr* = 1 indicating poor agreement). RESULTS The UIATS accounts for 29 key factors in UIA management. Agreement with UIATS (mean Likert scores) was 4.2 (95% confidence interval [CI] 4.1-4.3) per reviewer for both reviewer cohorts; agreement per case was 4.3 (95% CI 4.1-4.4) for panel members and 4.5 (95% CI 4.3-4.6) for external reviewers (p = 0.017). Mean Likert scores were 4.2 (95% CI 4.1-4.3) for interventional reviewers (n = 56) and 4.1 (95% CI 3.9-4.4) for noninterventional reviewers (n = 12) (p = 0.290). Overall IRA (vr*) for both cohorts was 0.026 (95% CI 0.019-0.033). CONCLUSIONS This novel UIA decision guidance study captures an excellent consensus among highly informed individuals on UIA management, irrespective of their underlying specialty. Clinicians can use the UIATS as a comprehensive mechanism for indicating how a large group of specialists might manage an individual patient with a UIA.
Abstract:
INTRODUCTION The aim of the study was to identify the level of the Charlson comorbidity index (CCI) at which older patients (>70 years) with high-risk prostate cancer (PCa) still achieve a survival benefit from radical prostatectomy (RP). METHODS We retrospectively analyzed 1008 older patients (>70 years) who underwent RP with pelvic lymph node dissection for high-risk prostate cancer (preoperative prostate-specific antigen >20 ng/mL, clinical stage ≥T2c, or Gleason score ≥8) at 14 tertiary institutions between 1988 and 2014. The study population was grouped into CCI < 2 and CCI ≥ 2 for analysis. Survival for each group was estimated with the Kaplan-Meier method, and competing-risk Fine-Gray regression was used to estimate the best explanatory multivariable model. The area under the curve (AUC) and the Akaike information criterion were used to identify the ideal CCI cut-off. RESULTS Clinical and cancer characteristics were similar between the two groups. Kaplan-Meier comparison of the two groups for non-cancer death, with survival estimates at 5 and 10 years, showed significantly worse outcomes for patients with CCI ≥ 2. In the multivariable model used to choose the appropriate CCI cut-off point, a CCI of 2 gave the best AUC and log-rank p value. CONCLUSION Older patients with fewer comorbidities harboring high-risk PCa appear to benefit from RP. Sicker patients are more likely to die of non-prostate-cancer-related causes and are less likely to benefit from RP.
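The cut-off search described above can be sketched with a simplified binary outcome: for a dichotomized predictor, the AUC reduces to (sensitivity + specificity) / 2, and one picks the cut-off that maximizes it. The data below are made up, and the study itself used time-to-event methods (Kaplan-Meier, Fine-Gray, AIC) rather than this shortcut:

```python
def auc_of_cutoff(cci_values, died_non_cancer, cutoff):
    """AUC of the binary predictor (CCI >= cutoff) for a binary outcome.
    For a dichotomized predictor, AUC = (sensitivity + specificity) / 2.
    """
    pairs = list(zip(cci_values, died_non_cancer))
    tp = sum(1 for c, d in pairs if c >= cutoff and d)       # flagged, died
    fn = sum(1 for c, d in pairs if c < cutoff and d)        # missed, died
    tn = sum(1 for c, d in pairs if c < cutoff and not d)    # cleared, alive
    fp = sum(1 for c, d in pairs if c >= cutoff and not d)   # flagged, alive
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return (sensitivity + specificity) / 2

# Hypothetical CCI values and non-cancer-death indicators for ten patients:
cci  = [0, 1, 1, 2, 3, 4, 0, 2, 3, 1]
died = [0, 0, 0, 1, 1, 1, 0, 0, 1, 0]
best = max(range(1, 5), key=lambda k: auc_of_cutoff(cci, died, k))
assert best == 2   # in this toy data, CCI >= 2 gives the highest AUC
```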
Abstract:
The main objective of this study was to develop and validate a computer-based statistical algorithm, based on a multivariable logistic model, that could be translated into a simple scoring system to ascertain stroke cases from hospital admission medical records. This algorithm, the Risk Index Score (RISc), was developed using data collected prospectively by the Brain Attack Surveillance in Corpus Christi (BASIC) project. The validity of the RISc was evaluated by estimating the concordance of the scoring system's stroke ascertainment with ascertainment by physician review of hospital admission records. The goal was a rapid, simple, efficient, and accurate method to ascertain the incidence of stroke from routine hospital admission records for epidemiologic investigations.
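Translating a multivariable logistic model into a simple scoring system is commonly done by scaling the coefficients and rounding to integer points. The abstract does not give RISc's actual weighting, so the recipe, predictor names, and coefficients below are illustrative only:

```python
def points_from_coefficients(coefs, base=None):
    """Turn logistic-regression coefficients into integer points, one common
    way to convert a multivariable logistic model into a bedside score.

    Each coefficient is divided by a base value (by default the smallest
    coefficient in absolute value) and rounded to the nearest integer.
    """
    if base is None:
        base = min(abs(c) for c in coefs.values() if c != 0)
    return {name: round(c / base) for name, c in coefs.items()}

# Hypothetical log-odds-ratio coefficients for stroke predictors:
coefs = {"hemiparesis": 1.6, "speech_deficit": 1.1, "age_over_75": 0.55}
points = points_from_coefficients(coefs)
assert points == {"hemiparesis": 3, "speech_deficit": 2, "age_over_75": 1}

# A patient's score is the sum of points for the predictors present:
score = points["hemiparesis"] + points["age_over_75"]
assert score == 4
```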
Abstract:
To better take advantage of the abundant results from large-scale genomic association studies, investigators are turning to the genetic risk score (GRS) method to combine the information from common, modest-effect risk alleles into an efficient risk assessment statistic. The statistical properties of these GRSs are poorly understood. As a first step toward a better understanding of GRSs, a systematic analysis of recent investigations using a GRS was undertaken. GRS studies were searched in the areas of coronary heart disease (CHD), cancer, and other common diseases using bibliographic databases and by hand-searching reference lists and journals. Twenty-one independent case-control studies, cohort studies, and simulation studies (12 in CHD, 9 in other diseases) were identified. The underlying statistical assumptions of the GRS were investigated using the experience of the Framingham risk score. Improvements in the construction of a GRS, guided by the concept of composite indicators, are discussed. The GRS is a promising risk assessment tool for improving the prediction and diagnosis of common diseases.
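The GRS idea itself is simple to state in code: count an individual's risk alleles per locus (0, 1, or 2) and either sum the counts (unweighted GRS) or weight each count by the allele's effect size, e.g. its log odds ratio (weighted GRS). The SNP labels and weights below are invented for illustration, not taken from the reviewed studies:

```python
def genetic_risk_score(allele_counts, weights=None):
    """Combine per-locus risk-allele counts (0, 1, or 2) into a GRS.

    Unweighted: the plain sum of risk-allele counts.
    Weighted: each count multiplied by that allele's effect size.
    """
    if weights is None:
        return sum(allele_counts.values())
    return sum(weights[snp] * n for snp, n in allele_counts.items())

# Hypothetical individual: counts per SNP and illustrative log odds ratios.
counts = {"snp1": 2, "snp2": 1, "snp3": 0}
log_or = {"snp1": 0.10, "snp2": 0.25, "snp3": 0.40}
assert genetic_risk_score(counts) == 3                        # unweighted
assert abs(genetic_risk_score(counts, log_or) - 0.45) < 1e-12  # weighted
```

The unweighted form treats all alleles as equally harmful; the weighted form is closer to the composite-indicator construction the review discusses.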
Abstract:
In developing countries such as Argentina, the survival of preterm infants weighing less than 1000 grams falls far short of the outcomes reported by developed countries. Deficient prenatal care, limited technical resources, and the saturation of neonatal units are partly responsible for these differences. One of the situations most frequently associated with ethical decisions in neonatology arises around the extremely preterm infant. The hardest questions to answer are whether there is a weight or gestational-age limit below which life-saving therapies should not be initiated or added, on the grounds that they are futile for the child, prolong life without hope, cause suffering to the patient and family, and occupy a unit that deprives another child with better survival prospects of care. In the present study, a neonatal risk score was developed from variables that characterize many populations of our Latin American countries, and it was statistically validated. The score is quick and easy to calculate. It makes it possible to predict whether a critically ill preterm infant is salvageable, enabling ethical decisions based on a validated technique that acts in the best interests of the child and family while making more equitable use of resources.