104 results for Framingham risk score


Relevance: 30.00%

Abstract:

Trabecular bone score (TBS) rests on the textural analysis of DXA to reflect the decay in trabecular structure characterising osteoporosis. Yet its discriminative power in fracture studies remains poorly understood, as prior biomechanical tests found no correlation with vertebral strength. To verify this result, which may have been due to an unrealistic set-up, and to cover a wide range of loading scenarios, data from three previous biomechanical studies using different experimental settings were used. They involved the compressive failure of 62 human lumbar vertebrae loaded 1) via intervertebral discs to mimic the in vivo situation (“full vertebra”), 2) via the classical endplate embedding (“vertebral body”) or 3) via a ball joint to induce anterior wedge failure (“vertebral section”). HR-pQCT scans acquired prior to testing were used to simulate anterior-posterior DXA, from which areal bone mineral density (aBMD) and the initial slope of the variogram (ISV), the early definition of TBS, were evaluated. Finally, the relation of aBMD and ISV with failure load (Fexp) and apparent failure stress (σexp) was assessed, and their relative contribution to a multi-linear model was quantified via ANOVA. We found that, unlike aBMD, ISV did not significantly correlate with Fexp and σexp, except in the “vertebral body” case (r2 = 0.396, p = 0.028). Aside from the “vertebral section” set-up, where it explained only 6.4% of σexp (p = 0.037), it brought no significant improvement over aBMD. These results indicate that ISV, a replica of TBS, is a poor surrogate for vertebral strength no matter the testing set-up, which supports the prior observations and raises, a fortiori, the question of the deterministic factors underlying the statistical relationship between TBS and vertebral fracture risk.
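The "initial slope of the variogram" named above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the experimental variogram V(h) is taken as the mean squared grey-level difference between pixels separated by offset h, and ISV as the slope of a least-squares line through the first few (h, V(h)) points; the offset range `max_offset=3` is an assumption.

```python
import numpy as np

def variogram_initial_slope(img, max_offset=3):
    """Estimate the initial slope of the variogram (ISV) of a 2-D grey-level image.

    V(h) is averaged over horizontal and vertical pixel offsets 1..max_offset;
    the initial slope is the least-squares slope of V(h) against h.
    """
    img = np.asarray(img, dtype=float)
    offsets = np.arange(1, max_offset + 1)
    v = []
    for h in offsets:
        dx = img[:, h:] - img[:, :-h]   # horizontal pixel pairs at lag h
        dy = img[h:, :] - img[:-h, :]   # vertical pixel pairs at lag h
        v.append((np.mean(dx ** 2) + np.mean(dy ** 2)) / 2.0)
    slope, _ = np.polyfit(offsets, v, 1)  # degree-1 least-squares fit
    return slope

# A linear grey-level ramp img[i, j] = j gives V(h) = h**2 / 2,
# whose least-squares slope over h = 1..3 is exactly 2.0.
ramp_isv = variogram_initial_slope(np.tile(np.arange(16.0), (16, 1)))
flat_isv = variogram_initial_slope(np.ones((16, 16)))  # flat image: 0.0
```

A rougher texture (larger grey-level change over short distances) yields a larger initial slope, which is the intuition behind using it as a trabecular texture index.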

Relevance: 30.00%

Abstract:

Objective The validity of current ultra-high risk (UHR) criteria is under-examined in help-seeking minors, particularly in children below the age of 12 years. Thus, the present study investigated predictors of one-year outcome in children and adolescents (CAD) with UHR status. Method Thirty-five children and adolescents (age 9–17 years) meeting UHR criteria according to the Structured Interview for Psychosis-Risk Syndromes were followed up for 12 months. Regression analyses were employed to detect baseline predictors of conversion to psychosis and of outcome of non-converters (remission and persistence of UHR versus conversion). Results At one-year follow-up, 20% of patients had developed schizophrenia and 25.7% had remitted from their UHR status, which consequently had persisted in 54.3%. No patient had fully remitted from mental disorders, even if UHR status was not maintained. Conversion was best predicted by any transient psychotic symptom and a disorganized communication score. No prediction model for outcome beyond conversion was identified. Conclusions Our findings provide the first evidence for the predictive utility of UHR criteria in CAD in terms of brief intermittent psychotic symptoms (BIPS) when accompanied by signs of cognitive impairment, i.e. disorganized communication. However, because attenuated psychotic symptoms (APS) related to thought content and perception were indicative of non-conversion at 1-year follow-up, their use in early detection of psychosis in CAD needs further study. Overall, the findings further highlight the need for more in-depth studies into developmental peculiarities in the early detection and treatment of psychoses with onset in childhood and early adolescence.

Relevance: 30.00%

Abstract:

OBJECTIVE: To assess whether the location of impact causing different facial fracture patterns was associated with diffuse axonal injury in patients with severe closed head injury. METHODS: All patients referred to the Trauma Unit of the University Hospital of Zurich, Switzerland between 1996 and 2002 presenting with severe closed head injuries (Abbreviated Injury Scale (AIS) (face) of 2-4 and an AIS (head and neck) of 3-5) were retrospectively assessed according to the Glasgow Coma Scale (GCS) and the Injury Severity Score (ISS). Facial fracture patterns were classified as resulting from frontal, oblique or lateral impact. All patients had undergone computed tomography. The association between impact location and diffuse axonal injury, correcting for the level of consciousness (using the Glasgow scale) and severity of injury (using the ISS), was calculated with a multivariate regression analysis. RESULTS: Of 200 screened patients, 61 fulfilled the inclusion criteria for severe closed head injury. The medians (interquartile ranges 25;75) for GCS, AIS(face), AIS(head and neck) and ISS were 3 (3;13), 2 (2;4), 4 (4;5) and 30 (24;41), respectively. A total of 51% of patients had frontal, 26% oblique and 23% lateral trauma. A total of 21% of patients developed diffuse axonal injury (DAI). When compared with frontal impact, the likelihood of diffuse axonal injury increased 11.0-fold (1.7-73.0) in patients with a lateral impact. CONCLUSIONS: Clinicians should be aware of the substantially increased risk of diffuse axonal injury related to lateral impact in patients with severe closed head injuries.

Relevance: 30.00%

Abstract:

BACKGROUND This study evaluated whether risk factors for sternal wound infections vary with the type of surgical procedure in cardiac operations. METHODS This was a university hospital surveillance study of 3,249 consecutive patients (28% women) from 2006 to 2010 (median age, 69 years [interquartile range, 60 to 76]; median additive European System for Cardiac Operative Risk Evaluation score, 5 [interquartile range, 3 to 8]) after (1) isolated coronary artery bypass grafting (CABG), (2) isolated valve repair or replacement, or (3) combined valve procedures and CABG. All other operations were excluded. Univariate and multivariate binary logistic regression analyses were conducted to identify independent predictors for development of sternal wound infections. RESULTS We detected 122 sternal wound infections (3.8%) in 3,249 patients: 74 of 1,857 patients (4.0%) after CABG, 19 of 799 (2.4%) after valve operations, and 29 of 593 (4.9%) after combined procedures. In CABG patients, bilateral internal thoracic artery harvest, procedural duration exceeding 300 minutes, diabetes, obesity, chronic obstructive pulmonary disease, and female sex (model 1) were independent predictors for sternal wound infection. A second model (model 2), using the European System for Cardiac Operative Risk Evaluation, revealed that bilateral internal thoracic artery harvest, diabetes, obesity, and the second and third quartiles of the European System for Cardiac Operative Risk Evaluation were independent predictors. In valve patients, model 1 showed only revision for bleeding as an independent predictor for sternal infection, and model 2 yielded both revision for bleeding and diabetes. For combined valve and CABG operations, both regression models demonstrated that revision for bleeding and duration of operation exceeding 300 minutes were independent predictors for sternal infection. CONCLUSIONS Risk factors for sternal wound infections after cardiac operations vary with the type of surgical procedure.
In patients undergoing valve operations or combined operations, procedure-related risk factors (revision for bleeding, duration of operation) independently predict infection. In patients undergoing CABG, not only procedure-related risk factors but also bilateral internal thoracic artery harvest and patient characteristics (diabetes, chronic obstructive pulmonary disease, obesity, female sex) are predictive of sternal wound infection. Preventive interventions may be justified according to the type of operation.

Relevance: 30.00%

Abstract:

OBJECTIVE In patients with a long life expectancy with high-risk (HR) prostate cancer (PCa), the chance to die from PCa is not negligible and may change significantly according to the time elapsed from surgery. The aim of this study was to evaluate long-term survival patterns in young patients treated with radical prostatectomy (RP) for HRPCa. MATERIALS AND METHODS Within a multiinstitutional cohort, 600 young patients (≤59 years) treated with RP between 1987 and 2012 for HRPCa (defined as at least one of the following adverse characteristics: prostate specific antigen >20, cT3 or higher, biopsy Gleason sum 8-10) were identified. Smoothed cumulative incidence plots were generated to assess cancer-specific mortality (CSM) and other-cause mortality (OCM) rates at 10, 15, and 20 years after RP. The same analyses were performed to assess the 5-year probability of CSM and OCM in patients who survived 5, 10, and 15 years after RP. A multivariable competing risk regression model was fitted to identify predictors of CSM and OCM. RESULTS The 10-, 15- and 20-year CSM and OCM rates were 11.6% and 5.5% vs. 15.5% and 13.5% vs. 18.4% and 19.3%, respectively. The 5-year probabilities of CSM and OCM among patients who survived 5, 10, and 15 years after RP were 6.4% and 2.7% vs. 4.6% and 9.6% vs. 4.2% and 8.2%, respectively. Year of surgery, pathological stage and Gleason score, surgical margin status and lymph node invasion were the major determinants of CSM (all P≤0.03). Conversely, none of the covariates was significantly associated with OCM (all P≥0.09). CONCLUSIONS Very long-term cancer control in young high-risk patients after RP is highly satisfactory. PCa is the leading cause of death in young patients during the first 10 years of survivorship after RP. Thereafter, mortality not related to PCa becomes the main cause of death. Consequently, surgery should be considered in young patients with high-risk disease, and strict PCa follow-up should be enforced during the first 10 years of survivorship after RP.

Relevance: 30.00%

Abstract:

BACKGROUND Magnetic resonance imaging (MRI) of the prostate is considered to be the most precise noninvasive staging modality for localized prostate cancer. Multiparametric MRI (mpMRI) dynamic sequences have recently been shown to further increase the accuracy of staging relative to morphological imaging alone. Correct radiological staging, particularly the detection of extraprostatic disease extension, is of paramount importance for target volume definition and dose prescription in highly conformal curative radiotherapy (RT); in addition, it may affect the risk-adapted duration of additional antihormonal therapy. The purpose of our study was to analyze the impact of mpMRI-based tumor staging in patients undergoing primary RT for prostate cancer. METHODS A total of 122 patients admitted for primary RT for prostate cancer were retrospectively analyzed regarding initial clinical and computed tomography-based staging in comparison with mpMRI staging. Both tumor stage shifts and overall risk group shifts, including prostate-specific antigen (PSA) level and the Gleason score, were assessed. Potential risk factors for upstaging were tested in a multivariate analysis. Finally, the impact of mpMRI-based staging shift on prostate RT and antihormonal therapy was evaluated. RESULTS Overall, tumor stage shift occurred in 55.7% of patients after mpMRI. Upstaging was most prominent in patients showing high-risk serum PSA levels (73%), but was also substantial in patients presenting with low-risk PSA levels (50%) and low-risk Gleason scores (45.2%). Risk group changes occurred in 28.7% of the patients, with consequent treatment adaptations regarding target volume delineation and duration of androgen deprivation therapy. High PSA levels were found to be a significant risk factor for tumor upstaging and newly diagnosed seminal vesicle infiltration assessed using mpMRI.
CONCLUSIONS Our findings suggest that mpMRI of the prostate leads to substantial tumor upstaging, and can considerably affect treatment decisions in all patient groups undergoing risk-adapted curative RT for prostate cancer.

Relevance: 30.00%

Abstract:

BACKGROUND A single non-invasive gene expression profiling (GEP) test (AlloMap®) is often used to discriminate if a heart transplant recipient is at a low risk of acute cellular rejection at time of testing. In a randomized trial, use of the test (a GEP score from 0-40) has been shown to be non-inferior to a routine endomyocardial biopsy for surveillance after heart transplantation in selected low-risk patients with respect to clinical outcomes. Recently, it was suggested that the within-patient variability of consecutive GEP scores may be used to independently predict future clinical events; however, future studies were recommended. Here we performed an analysis of an independent patient population to determine the prognostic utility of within-patient variability of GEP scores in predicting future clinical events. METHODS We defined the GEP score variability as the standard deviation of four GEP scores collected ≥315 days post-transplantation. Of the 737 patients from the Cardiac Allograft Rejection Gene Expression Observational (CARGO) II trial, 36 were assigned to the composite event group (death, re-transplantation or graft failure ≥315 days post-transplantation and within 3 years of the final GEP test) and 55 were assigned to the control group (non-event patients). In this case-control study, the performance of GEP score variability to predict future events was evaluated by the area under the receiver operator characteristics curve (AUC ROC). The negative predictive values (NPV) and positive predictive values (PPV) including 95% confidence intervals (CI) of GEP score variability were calculated. RESULTS The estimated prevalence of events was 17%. Events occurred at a median of 391 (inter-quartile range 376) days after the final GEP test. The GEP variability AUC ROC for the prediction of a composite event was 0.72 (95% CI 0.6-0.8). The NPV for GEP score variability of 0.6 was 97% (95% CI 91.4-100.0); the PPV for GEP score variability of 1.5 was 35.4% (95% CI 13.5-75.8). CONCLUSION In heart transplant recipients, GEP score variability may be used to predict the probability that a composite event will occur within 3 years after the last GEP score. TRIAL REGISTRATION Clinicaltrials.gov identifier NCT00761787.

Relevance: 30.00%

Abstract:

BACKGROUND Increasing evidence suggests that psychosocial factors, including depression, predict incident venous thromboembolism (VTE) against a background of genetic and acquired risk factors. The role of psychosocial factors in the risk of recurrent VTE has not previously been examined. We hypothesized that depressive symptoms in patients with prior VTE are associated with an increased risk of recurrent VTE. METHODS In this longitudinal observational study, we investigated 271 consecutive patients, aged 18 years or older, referred for thrombophilia investigation with an objectively diagnosed episode of VTE. Patients completed the depression subscale of the Hospital Anxiety and Depression Scale (HADS-D). During the observation period, they were contacted by phone and information on recurrent VTE, anticoagulation therapy, and thromboprophylaxis in risk situations was collected. RESULTS Clinically relevant depressive symptoms (HADS-D score ≥ 8) were present in 10% of patients. During a median observation period of 13 months (range 5-48), 27 (10%) patients experienced recurrent VTE. After controlling for sociodemographic and clinical factors, a 3-point increase on the HADS-D score was associated with a 44% greater risk of recurrent VTE (OR 1.44, 95% CI 1.02, 2.06). Compared to patients with lower levels of depressive symptoms (HADS-D score: range 0-2), those with higher levels (HADS-D score: range 3-16) had a 4.1 times greater risk of recurrent VTE (OR 4.07, 95% CI 1.55, 10.66). CONCLUSIONS The findings suggest that depressive symptoms might contribute to an increased risk of recurrent VTE independent of other prognostic factors. An increased risk might already be present at subclinical levels of depressive symptoms.
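The reported "OR 1.44 per 3-point increase" can be rescaled to other increments under the usual log-linear assumption of logistic regression (the odds ratio for a k-point increase is exp(k·β)). A small sketch of that arithmetic, assuming log-linearity holds across the score range:

```python
import math

# OR for a 3-point HADS-D increase, as reported in the abstract
or_per_3_points = 1.44

# Per-point log-odds coefficient implied by the log-linear model
beta = math.log(or_per_3_points) / 3

or_per_point = math.exp(beta)         # ~1.13 per single HADS-D point
or_per_6_points = math.exp(6 * beta)  # equals 1.44 squared, ~2.07
```

This rescaling is exact only if the log-odds really are linear in the score, which the single reported OR cannot confirm.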

Relevance: 30.00%

Abstract:

Data concerning the link between severity of abdominal aortic calcification (AAC) and fracture risk in postmenopausal women are discordant. This association may vary by skeletal site and duration of follow-up. Our aim was to assess the association between AAC severity and fracture risk in older women over the short and long term. This is a case-cohort study nested in a large multicenter prospective cohort study. The association between AAC and fracture was assessed using Odds Ratios (OR) and 95% confidence intervals (95%CI) for vertebral fractures and using Hazard Ratios (HR) and 95%CI for non-vertebral and hip fractures. AAC severity was evaluated from lateral spine radiographs using Kauppila's semiquantitative score. Severe AAC (AAC score 5+) was associated with a higher risk of vertebral fracture during 4 years of follow-up, after adjustment for confounders (age, BMI, walking, smoking, hip bone mineral density, prevalent vertebral fracture, systolic blood pressure, hormone replacement therapy) (OR=2.31, 95%CI: 1.24-4.30, p<0.01). In a similar model, severe AAC was associated with an increase in hip fracture risk (HR=2.88, 95%CI: 1.00-8.36, p=0.05). AAC was not associated with the risk of any non-vertebral fracture. AAC was not associated with fracture risk after 15 years of follow-up. In elderly women, severe AAC is associated with a higher short-term risk of vertebral and hip fractures, but not with the long-term risk of these fractures. There is no association between AAC and risk of non-vertebral, non-hip fracture in older women. Our findings lend further support to the hypothesis that AAC and skeletal fragility are related.

Relevance: 30.00%

Abstract:

BACKGROUND Predicting long-term survival after admission to hospital is helpful for clinical, administrative and research purposes. The Hospital-patient One-year Mortality Risk (HOMR) model was derived and internally validated to predict the risk of death within 1 year after admission. We conducted an external validation of the model in a large multicentre study. METHODS We used administrative data for all nonpsychiatric admissions of adult patients to hospitals in the provinces of Ontario (2003-2010) and Alberta (2011-2012), and to the Brigham and Women's Hospital in Boston (2010-2012) to calculate each patient's HOMR score at admission. The HOMR score is based on a set of parameters that captures patient demographics, health burden and severity of acute illness. We determined patient status (alive or dead) 1 year after admission using population-based registries. RESULTS The 3 validation cohorts (n = 2,862,996 in Ontario, 210,595 in Alberta and 66,683 in Boston) were distinct from each other and from the derivation cohort. The overall risk of death within 1 year after admission was 8.7% (95% confidence interval [CI] 8.7% to 8.8%). The HOMR score was strongly and significantly associated with risk of death in all populations and was highly discriminative, with a C statistic ranging from 0.89 (95% CI 0.87 to 0.91) to 0.92 (95% CI 0.91 to 0.92). Observed and expected outcome risks were similar (median absolute difference in percent dying in 1 yr 0.3%, interquartile range 0.05%-2.5%). INTERPRETATION The HOMR score, calculated using routinely collected administrative data, accurately predicted the risk of death among adult patients within 1 year after admission to hospital for nonpsychiatric indications. Similar performance was seen when the score was used in geographically and temporally diverse populations. The HOMR model can be used for risk adjustment in analyses of health administrative data to predict long-term survival among hospital patients.

Relevance: 30.00%

Abstract:

OBJECTIVE To assess whether palliative primary tumor resection in colorectal cancer patients with incurable stage IV disease is associated with improved survival. BACKGROUND There is a heated debate regarding whether or not an asymptomatic primary tumor should be removed in patients with incurable stage IV colorectal disease. METHODS Stage IV colorectal cancer patients were identified in the Surveillance, Epidemiology, and End Results database between 1998 and 2009. Patients undergoing surgery to metastatic sites were excluded. Overall survival and cancer-specific survival were compared between patients with and without palliative primary tumor resection using risk-adjusted Cox proportional hazard regression models and stratified propensity score methods. RESULTS Overall, 37,793 stage IV colorectal cancer patients were identified. Of those, 23,004 (60.9%) underwent palliative primary tumor resection. The rate of patients undergoing palliative primary cancer resection decreased from 68.4% in 1998 to 50.7% in 2009 (P < 0.001). In Cox regression analysis after propensity score matching, primary cancer resection was associated with significantly improved overall survival [hazard ratio (HR) of death = 0.40, 95% confidence interval (CI) = 0.39-0.42, P < 0.001] and cancer-specific survival (HR of death = 0.39, 95% CI = 0.38-0.40, P < 0.001). The benefit of palliative primary cancer resection persisted during the time period 1998 to 2009, with HRs equal to or less than 0.47 for both overall and cancer-specific survival. CONCLUSIONS On the basis of this population-based cohort of stage IV colorectal cancer patients, palliative primary tumor resection was associated with improved overall and cancer-specific survival. Therefore, the dogma that an asymptomatic primary tumor should never be resected in patients with unresectable colorectal cancer metastases must be questioned.

Relevance: 30.00%

Abstract:

BACKGROUND & AIMS Cirrhotic patients with acute decompensation frequently develop acute-on-chronic liver failure (ACLF), which is associated with high mortality rates. Recently, a specific score for these patients has been developed using the CANONIC study database. The aims of this study were to develop and validate the CLIF-C AD score, a specific prognostic score for hospitalised cirrhotic patients with acute decompensation (AD) but without ACLF, and to compare it with the Child-Pugh, MELD, and MELD-Na scores. METHODS The derivation set included 1016 CANONIC study patients without ACLF. Proportional hazards models considering liver transplantation as a competing risk were used to identify score parameters. Estimated coefficients were used as relative weights to compute the CLIF-C ADs. External validation was performed in 225 cirrhotic AD patients. CLIF-C ADs was also tested for sequential use. RESULTS Age, serum sodium, white-cell count, creatinine and INR were selected as the best predictors of mortality. The C-index for prediction of mortality was better for CLIF-C ADs than for the Child-Pugh, MELD, and MELD-Na scores at predicting 3- and 12-month mortality in the derivation, internal validation and external datasets. The ability of CLIF-C ADs to predict 3-month mortality improved when using data from days 2, 3-7, and 8-15 (C-index: 0.72, 0.75, and 0.77 respectively). CONCLUSIONS The new CLIF-C ADs is more accurate than other liver scores in predicting prognosis in hospitalised cirrhotic patients without ACLF. CLIF-C ADs therefore may be used to identify a high-risk cohort for intensive management and a low-risk group that may be discharged early.
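The C-index used to compare the scores above can be illustrated with a naive pairwise implementation (a hedged sketch of Harrell's concordance index, not the CANONIC analysis code): a pair of patients is comparable when the earlier time is an observed death, and concordant when the patient who died earlier carried the higher risk score.

```python
def c_index(times, events, scores):
    """Naive O(n^2) Harrell's C-index.

    times  : follow-up times
    events : 1 if death observed, 0 if censored
    scores : predicted risk (higher = worse prognosis)
    """
    concordant, ties, comparable = 0, 0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # i must have an observed event strictly before j's time
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if scores[i] > scores[j]:
                    concordant += 1
                elif scores[i] == scores[j]:
                    ties += 1  # tied scores count half
    return (concordant + 0.5 * ties) / comparable

# Toy data, perfectly ranked: shorter survival <-> higher score -> C = 1.0
perfect = c_index([1, 2, 3, 4], [1, 1, 1, 0], [9, 7, 5, 1])
```

A C-index of 0.5 means the score ranks comparable pairs no better than chance; the 0.72-0.77 range quoted above indicates useful but imperfect discrimination.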

Relevance: 30.00%

Abstract:

BACKGROUND Multiple scores have been proposed to stratify bleeding risk, but their value to guide dual antiplatelet therapy duration has never been appraised. We compared the performance of the CRUSADE (Can Rapid Risk Stratification of Unstable Angina Patients Suppress Adverse Outcomes With Early Implementation of the ACC/AHA Guidelines), ACUITY (Acute Catheterization and Urgent Intervention Triage Strategy), and HAS-BLED (Hypertension, Abnormal Renal/Liver Function, Stroke, Bleeding History or Predisposition, Labile INR, Elderly, Drugs/Alcohol Concomitantly) scores in 1946 patients recruited in the Prolonging Dual Antiplatelet Treatment After Grading Stent-Induced Intimal Hyperplasia Study (PRODIGY) and assessed hemorrhagic and ischemic events in the 24- and 6-month dual antiplatelet therapy groups. METHODS AND RESULTS Bleeding score performance was assessed with a Cox regression model and C statistics. Discriminative and reclassification power was assessed with net reclassification improvement and integrated discrimination improvement. The C statistic was similar between the CRUSADE score (area under the curve 0.71) and ACUITY (area under the curve 0.68), and higher than HAS-BLED (area under the curve 0.63). CRUSADE, but not ACUITY, improved reclassification (net reclassification index 0.39, P=0.005) and discrimination (integrated discrimination improvement index 0.0083, P=0.021) of major bleeding compared with HAS-BLED. Major bleeding and transfusions were higher in the 24- versus 6-month dual antiplatelet therapy groups in patients with a CRUSADE score >40 (hazard ratio for bleeding 2.69, P=0.035; hazard ratio for transfusions 4.65, P=0.009) but not in those with CRUSADE score ≤40 (hazard ratio for bleeding 1.50, P=0.25; hazard ratio for transfusions 1.37, P=0.44), with positive interaction (Pint=0.05 and Pint=0.01, respectively). 
The numbers of patients with high CRUSADE scores needed to treat for harm for major bleeding and transfusion were 17 and 15, respectively, with 24-month rather than 6-month dual antiplatelet therapy; the corresponding figures in the overall population were 67 and 71, respectively. CONCLUSIONS Our analysis suggests that the CRUSADE score predicts major bleeding similarly to ACUITY and better than HAS-BLED in an all-comer population undergoing percutaneous coronary intervention, and potentially identifies patients at higher risk of hemorrhagic complications when treated with a long-term dual antiplatelet therapy regimen. CLINICAL TRIAL REGISTRATION URL: http://clinicaltrials.gov. Unique identifier: NCT00611286.
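The "needed to treat for harm" figures above follow standard arithmetic: the number needed to harm (NNH) is the reciprocal of the absolute risk difference between arms, conventionally rounded up. A sketch with invented event rates (these are not PRODIGY's rates, which the abstract does not report):

```python
import math

def number_needed_to_harm(risk_treated, risk_control):
    """NNH = ceil(1 / absolute risk difference) when the treated arm has more events."""
    ard = risk_treated - risk_control  # absolute risk difference
    if ard <= 0:
        raise ValueError("no excess harm in the treated group")
    return math.ceil(1.0 / ard)

# Hypothetical: 10% bleeding on 24-month DAPT vs 4% on 6-month DAPT
nnh = number_needed_to_harm(0.10, 0.04)  # 1 / 0.06 -> 17
```

A smaller NNH in the high-CRUSADE subgroup (17 vs 67 overall) is exactly what a larger absolute risk difference produces.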

Relevance: 30.00%

Abstract:

BACKGROUND We aimed to identify a group of patients with a low risk of seizure after surgery for unruptured intracranial aneurysms (UIA). OBJECTIVE To determine the risk of seizure after discharge from surgery for UIA. METHODS A consecutive prospectively collected cohort database was interrogated for all surgical UIA cases. There were 726 cases of UIA (excluding cases proximal to the superior cerebellar artery on the vertebrobasilar system) identified and analyzed. Cox proportional hazards regression models and Kaplan-Meier life table analyses were generated to assess risk factors. RESULTS Preoperative seizure history and complication of aneurysm repair were the only risk factors found to be significant. The risk of first seizure after discharge from hospital following surgery for patients with neither preoperative seizure, treated middle cerebral artery aneurysm, nor postoperative complications (leading to a modified Rankin Scale score >1) was <0.1% and 1.1% at 12 months and 7 years, respectively. The risk for those with preoperative seizures was 17.3% and 66% at 12 months and 7 years, respectively. The risk of seizures with either complications (leading to a modified Rankin Scale score >1) from surgery or a treated middle cerebral artery aneurysm was 1.4% and 6.8% at 12 months and 7 years, respectively. These differences in the 3 Kaplan-Meier curves were significant (log-rank P < .001). CONCLUSION The risk of seizures after discharge from hospital following surgery for UIA is very low when there is no preexisting history of seizures. If this result can be supported by other series, guidelines that restrict returning to driving because of the risk of postoperative seizures should be reconsidered. ABBREVIATIONS: MCA, middle cerebral artery; mRS, modified Rankin Scale; UIA, unruptured intracranial aneurysm.