808 results for Mortality Risk
Abstract:
Blood cholesterol and blood pressure development in childhood and adolescence has an important impact on adult cholesterol and blood pressure levels and on the risk of cardiovascular disease. The U.S. has higher mortality rates from coronary heart disease than Japan. A longitudinal comparison of risk factor development in children in the two countries provides further insight into the causes of cardiovascular disease and its prevention; such comparisons have not been reported previously. In Project HeartBeat!, 506 non-Hispanic white, 136 black, and 369 Japanese children participated in the study in the U.S. and Japan from 1991 to 1995. A synthetic cohort spanning ages 8 to 18 years was composed of three cohorts with starting ages of 8, 11, and 14 years. A multilevel regression model was used for data analysis. The study revealed that the Japanese children had significantly steeper slopes of mean total cholesterol (TC) and high-density lipoprotein (HDL) cholesterol levels than the U.S. children after adjusting for age and sex. The mean TC level of Japanese children was not significantly different from that of white and black children. The mean HDL level of Japanese children was significantly higher than that of white and black children after adjusting for age and sex. The HDL/TC ratio in Japanese children was significantly higher than in U.S. whites but not significantly different from that in black children. The Japanese group had significantly lower mean diastolic blood pressure phase IV (DBP4) and phase V (DBP5) than the two U.S. groups. The Japanese group also showed significantly steeper slopes in systolic blood pressure, DBP5, and DBP4 during the study period than both U.S. groups. These differences were independent of height and body mass index. The study provided the first longitudinal comparison of blood cholesterol and blood pressure between U.S. and Japanese children and adolescents and revealed the dynamic development of these factors in the three ethnic groups.
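The multilevel (mixed-effects) growth-curve analysis described above can be sketched in a few lines. The example below is illustrative only, not the Project HeartBeat! analysis code: repeated cholesterol measurements are nested within children, with a random intercept and age slope per child; the input file and all column names (tc, age, sex, group, child_id) are hypothetical.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: one row per child per examination.
    df = pd.read_csv("lipids_long.csv")

    model = smf.mixedlm(
        "tc ~ age + sex + group + age:group",  # fixed effects; age:group lets slopes differ by group
        data=df,
        groups=df["child_id"],                 # level-2 units: individual children
        re_formula="~age",                     # random intercept and random age slope per child
    )
    result = model.fit()
    print(result.summary())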
Abstract:
BACKGROUND Conventional risk factors do not fully explain the distribution of cardiovascular outcomes. Biomarkers are known to participate in well-established pathways associated with cardiovascular disease and may therefore provide further information over and above conventional risk factors. This study sought to determine whether individual and/or combined assessment of 9 biomarkers improved discrimination, calibration, and reclassification of cardiovascular mortality. METHODS 3267 patients (2283 men), aged 18-95 years, at intermediate to high risk of cardiovascular disease were followed in this prospective cohort study. Conventional risk factors and biomarkers were included based on forward and backward stepwise Cox proportional hazards selection models. RESULTS During 10 years of follow-up, 546 fatal cardiovascular events occurred. Four biomarkers (interleukin-6, neutrophils, von Willebrand factor, and 25-hydroxyvitamin D) were retained during stepwise selection procedures for subsequent analyses. Simultaneous inclusion of these biomarkers significantly improved discrimination as measured by the C-index (0.78, P = 0.0001) and the integrated discrimination improvement (0.0219, P<0.0001). Collectively, these biomarkers improved net reclassification for cardiovascular death by 10.6% (P<0.0001) when added to the conventional risk model. CONCLUSIONS In terms of adverse cardiovascular prognosis, a biomarker panel consisting of interleukin-6, neutrophils, von Willebrand factor, and 25-hydroxyvitamin D offered significant incremental value beyond that conveyed by conventional risk factors alone.
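Harrell's C-index, the discrimination measure quoted above, can be illustrated with a short sketch. This is not the study's code; the data are toy values, and the O(n²) loop is written for clarity rather than speed.

    import numpy as np

    def c_index(time, event, risk):
        """Fraction of usable pairs in which the subject with the earlier observed
        event has the higher predicted risk (ties in risk count as 0.5)."""
        concordant, usable = 0.0, 0
        n = len(time)
        for i in range(n):
            for j in range(n):
                # A pair is usable if subject i has the shorter follow-up and an observed event.
                if event[i] == 1 and time[i] < time[j]:
                    usable += 1
                    if risk[i] > risk[j]:
                        concordant += 1.0
                    elif risk[i] == risk[j]:
                        concordant += 0.5
        return concordant / usable

    time  = np.array([2.0, 5.0, 3.0, 8.0, 1.0])   # years of follow-up (toy values)
    event = np.array([1, 0, 1, 1, 0])             # 1 = cardiovascular death, 0 = censored
    risk  = np.array([0.9, 0.2, 0.6, 0.1, 0.4])   # model-predicted risk scores
    print(c_index(time, event, risk))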
Abstract:
AIMS High-density lipoprotein (HDL) cholesterol is a strong predictor of cardiovascular mortality. This work aimed to investigate whether the presence of coronary artery disease (CAD) affects its predictive value. METHODS AND RESULTS We studied 3141 participants (2191 males, 950 females) of the LUdwigshafen RIsk and Cardiovascular health (LURIC) study. They had a mean ± standard deviation age of 62.6 ± 10.6 years, body mass index of 27.5 ± 4.1 kg/m², and HDL cholesterol of 38.9 ± 10.8 mg/dL. The cohort consisted of 699 people without CAD, 1515 patients with stable CAD, and 927 patients with unstable CAD. The participants were prospectively followed for cardiovascular mortality over a median (inter-quartile range) period of 9.9 (8.7-10.7) years. A total of 590 participants died from cardiovascular diseases. HDL cholesterol by tertiles was inversely related to cardiovascular mortality in the entire cohort (P = 0.009). There was a significant interaction between HDL cholesterol and CAD in predicting the outcome (P = 0.007). In stratified analyses, HDL cholesterol was strongly associated with cardiovascular mortality in people without CAD [3rd vs. 1st tertile: HR (95% CI) = 0.37 (0.18-0.74), P = 0.005], but not in patients with stable [3rd vs. 1st tertile: HR (95% CI) = 0.81 (0.61-1.09), P = 0.159] or unstable [3rd vs. 1st tertile: HR (95% CI) = 0.91 (0.59-1.41), P = 0.675] CAD. These results were replicated by analyses in 3413 participants of the AtheroGene cohort and 5738 participants of the ESTHER cohort, and by a meta-analysis comprising all three cohorts. CONCLUSION The inverse relationship of HDL cholesterol with cardiovascular mortality is weakened in patients with CAD. The usefulness of HDL cholesterol for cardiovascular risk stratification therefore seems limited in such patients.
Abstract:
There is a need to validate risk assessment tools for hospitalised medical patients at risk of venous thromboembolism (VTE). We investigated whether a predefined cut-off of the Geneva Risk Score, as compared with the Padua Prediction Score, accurately distinguishes low-risk from high-risk patients regardless of the use of thromboprophylaxis. In the multicentre, prospective Explicit ASsessment of Thromboembolic RIsk and Prophylaxis for Medical PATients in SwitzErland (ESTIMATE) cohort study, 1,478 hospitalised medical patients were enrolled, of whom 637 (43%) did not receive thromboprophylaxis. The primary endpoint was symptomatic VTE or VTE-related death at 90 days. The study is registered at ClinicalTrials.gov, number NCT01277536. According to the Geneva Risk Score, the cumulative rate of the primary endpoint was 3.2% (95% confidence interval [CI] 2.2-4.6%) in 962 high-risk vs 0.6% (95% CI 0.2-1.9%) in 516 low-risk patients (p=0.002); among patients without prophylaxis, this rate was 3.5% vs 0.8% (p=0.029), respectively. In comparison, the Padua Prediction Score yielded a cumulative rate of the primary endpoint of 3.5% (95% CI 2.3-5.3%) in 714 high-risk vs 1.1% (95% CI 0.6-2.3%) in 764 low-risk patients (p=0.002); among patients without prophylaxis, this rate was 3.2% vs 1.5% (p=0.130), respectively. The negative likelihood ratio was 0.28 (95% CI 0.10-0.83) for the Geneva Risk Score and 0.51 (95% CI 0.28-0.93) for the Padua Prediction Score. In conclusion, among hospitalised medical patients, the Geneva Risk Score predicted VTE and VTE-related mortality and compared favourably with the Padua Prediction Score, particularly in its accuracy in identifying low-risk patients who do not require thromboprophylaxis.
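The negative likelihood ratio reported above is (1 − sensitivity) / specificity computed from the 2 × 2 cross-classification of risk category against 90-day outcome. A small sketch with illustrative counts (not a reconstruction of the ESTIMATE data):

    def negative_lr(events_high, n_high, events_low, n_low):
        # Sensitivity: proportion of all events that occurred in patients classified high-risk.
        sensitivity = events_high / (events_high + events_low)
        # Specificity: proportion of all event-free patients classified low-risk.
        specificity = (n_low - events_low) / ((n_high - events_high) + (n_low - events_low))
        return (1 - sensitivity) / specificity

    # Illustrative counts only: ~3% events among 960 high-risk, ~0.6% among 520 low-risk patients.
    print(round(negative_lr(events_high=30, n_high=960, events_low=3, n_low=520), 2))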
Abstract:
Medial arterial calcification is accelerated in patients with CKD and strongly associated with increased arterial rigidity and cardiovascular mortality. Recently, a novel in vitro blood test was described that provides an overall measure of calcification propensity by monitoring the maturation time (T50) of calciprotein particles in serum. We used this test to measure serum T50 in a prospective cohort of 184 patients with stage 3 or 4 CKD, with a median of 5.3 years of follow-up. At baseline, the major determinants of serum calcification propensity included higher serum phosphate, ionized calcium, and bone osteoclastic activity and lower free fetuin-A, plasma pyrophosphate, and albumin concentrations, which together accounted for 49% of the variation in this parameter. Increased serum calcification propensity at baseline was independently associated with aortic pulse wave velocity in the complete cohort and with progressive aortic stiffening over 30 months in a subgroup of 93 patients. After adjustment for demographic, renal, cardiovascular, and biochemical covariates, including serum phosphate, the risk of death among patients in the lowest T50 tertile was more than twice that among patients in the highest T50 tertile (adjusted hazard ratio, 2.2; 95% confidence interval, 1.1 to 5.4; P=0.04). This effect was lost, however, after additional adjustment for aortic stiffness, suggesting a shared causal pathway. Longitudinally, serum calcification propensity measurements remained temporally stable (intraclass correlation=0.81). These results suggest that serum T50 may be helpful as a biomarker in designing strategies to improve defenses against vascular calcification.
Abstract:
BACKGROUND: There are differences in the literature regarding outcomes of premature small-for-gestational-age (SGA) and appropriate-for-gestational-age (AGA) infants, possibly due to failure to take into account gestational age (GA) at birth. OBJECTIVE: To compare mortality and respiratory morbidity of SGA and AGA premature newborn infants. DESIGN/METHODS: A retrospective study was done of the 2,487 infants born without congenital anomalies at […]. RESULTS: Controlling for GA, premature SGA infants were at higher risk of mortality (odds ratio 3.1, P = 0.001) and at lower risk of respiratory distress syndrome (OR = 0.71, P = 0.02) than AGA infants. However, multivariate logistic regression modeling found that the odds of respiratory distress syndrome (RDS) in SGA versus AGA infants varied by GA; the reduced RDS risk was confined to SGA infants born at ≥32 weeks GA (OR = 0.41, 95% CI 0.27-0.63; P < 0.01). After controlling for GA, SGA infants were at a significantly higher risk of developing chronic lung disease than AGA infants (OR = 2.2, 95% CI 1.2-3.9, P = 0.01). There was no significant difference between SGA and AGA infants in total days on ventilator. Among infants who survived, mean length of hospital stay was significantly longer in SGA infants born between 26 and 36 weeks GA than in AGA infants. CONCLUSIONS: Premature SGA infants have significantly higher mortality, a significantly higher risk of developing chronic lung disease, and a longer hospital stay than premature AGA infants. Even the reduced risk of RDS in infants born at ≥32 weeks GA (conferred possibly by intrauterine stress leading to accelerated lung maturation) appears to be transient and is counterbalanced by the adverse effects of poor intrauterine growth on long-term pulmonary outcomes such as chronic lung disease.
Abstract:
BACKGROUND Elevated resting heart rate is known to be detrimental to morbidity and mortality in cardiovascular disease, though its effect in patients with ischemic stroke is unclear. We analyzed the effect of baseline resting heart rate on myocardial infarction (MI) in patients with a recent noncardioembolic cerebral ischemic event participating in PERFORM. METHODS We compared fatal or nonfatal MI using adjusted Cox proportional hazards models for PERFORM patients with baseline heart rate <70 bpm (n=8178) or ≥70 bpm (n=10,802). In addition, heart rate was analyzed as a continuous variable. Other cerebrovascular and cardiovascular outcomes were also explored. RESULTS Heart rate ≥70 bpm was associated with increased relative risk for fatal or nonfatal MI (HR 1.32, 95% CI 1.03-1.69, P=0.029). For every 5-bpm increase in heart rate, there was an increase in relative risk for fatal or nonfatal MI (11.3%, P=0.0002). Heart rate ≥70 bpm was also associated with increased relative risk for a composite of fatal or nonfatal ischemic stroke, fatal or nonfatal MI, or other vascular death (excluding hemorrhagic death) (P<0.0001); vascular death (P<0.0001); all-cause mortality (P<0.0001); and fatal or nonfatal stroke (P=0.04). For every 5-bpm increase in heart rate, there were increases in relative risk for fatal or nonfatal ischemic stroke, fatal or nonfatal MI, or other vascular death (4.7%, P<0.0001), vascular death (11.0%, P<0.0001), all-cause mortality (8.0%, P<0.0001), and fatal or nonfatal stroke (2.4%, P=0.057). CONCLUSION An elevated heart rate ≥70 bpm places patients with a noncardioembolic cerebral ischemic event at increased risk for MI.
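A quoted "per 5-bpm" increase in relative risk maps directly onto the Cox regression coefficient for heart rate as a continuous variable: if β is the log-hazard coefficient per 1 bpm, the hazard ratio for a 5-bpm increment is exp(5β). A minimal sketch using the 11.3% figure reported for MI; the 10-bpm value below is an illustrative extrapolation, not a result of the study.

    import math

    hr_per_5bpm = 1.113                       # 11.3% higher risk per 5 bpm, as reported for MI
    beta_per_bpm = math.log(hr_per_5bpm) / 5  # implied log-hazard coefficient per 1 bpm (~0.021)
    print(beta_per_bpm)
    print(math.exp(10 * beta_per_bpm))        # implied hazard ratio for a 10-bpm increase (~1.24)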
Abstract:
Cirrhotic patients with chronic hepatitis C virus (HCV) infection remain at risk of complications following sustained virological response (SVR). We therefore aimed to evaluate treatment efficacy using the number needed to treat (NNT) to prevent clinical endpoints. Mortality and cirrhosis-related morbidity were assessed in an international multicentre cohort of consecutively treated patients with HCV genotype 1 infection and cirrhosis. The NNT to prevent death or clinical disease progression (any cirrhosis-related event or death) in one patient was determined from the adjusted (event-free) survival among patients without SVR and the adjusted hazard ratio of SVR. Overall, 248 patients were followed for a median of 8.3 (IQR 6.2-11.1) years. Fifty-nine (24%) patients attained SVR. Among patients without SVR, the adjusted 5-year survival and event-free survival were 94.4% and 80.0%, respectively. SVR was associated with reduced all-cause mortality (HR 0.15, 95% CI 0.05-0.48, P = 0.002) and clinical disease progression (HR 0.16, 95% CI 0.07-0.36, P < 0.001). The NNT to prevent one death in 5 years declined from 1052 (95% CI 937-1755) at 2% SVR (interferon monotherapy) to 61 (95% CI 54-101) at 35% SVR (peginterferon and ribavirin). At 50% SVR, which might be expected with triple therapy, the estimated NNT was 43 (95% CI 38-71). The NNT to prevent clinical disease progression in one patient in 5 years was 302 (95% CI 271-407), 18 (95% CI 16-24), and 13 (95% CI 11-17) at 2%, 35%, and 50% SVR, respectively. In conclusion, the NNT to prevent clinical endpoints among cirrhotic patients with HCV genotype 1 infection has declined substantially with the improvement of antiviral therapy.
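One plausible reading of the NNT calculation described above: with adjusted 5-year (event-free) survival S0 among patients without SVR and an adjusted hazard ratio HR for SVR, the absolute risk reduction per treated patient at an SVR probability p is p × (S0^HR − S0), so NNT = 1 / (p × (S0^HR − S0)). The sketch below is not the authors' code, but it approximately reproduces the quoted point estimates; small differences reflect rounding of the published survival figures.

    def nnt(s0_no_svr, hazard_ratio, svr_rate):
        surv_with_svr = s0_no_svr ** hazard_ratio            # proportional-hazards transformation
        risk_reduction = svr_rate * (surv_with_svr - s0_no_svr)
        return 1 / risk_reduction

    for p in (0.02, 0.35, 0.50):
        print(f"SVR {p:.0%}:",
              f"death NNT ~{nnt(0.944, 0.15, p):.0f},",       # 5-year survival 94.4%, HR 0.15
              f"progression NNT ~{nnt(0.800, 0.16, p):.0f}")  # event-free survival 80.0%, HR 0.16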
Abstract:
BACKGROUND The objective of the present investigation was to assess the baseline mortality-adjusted 10-year survival of rectal cancer patients. METHODS Ten-year survival was analyzed in 771 consecutive American Joint Committee on Cancer (AJCC) stage I-IV rectal cancer patients undergoing open resection between 1991 and 2008, using risk-adjusted Cox proportional hazards regression models adjusting for population-based baseline mortality. RESULTS The median follow-up of patients alive was 8.8 years. The 10-year relative, overall, and cancer-specific survival were 66.5% [95% confidence interval (CI) 61.3-72.1], 48.7% (95% CI 44.9-52.8), and 66.4% (95% CI 62.5-70.5), respectively. During the 10-year period, 47.3% of all deaths in the entire patient sample (stage I-IV) and 33.6% of deaths in patients with stage I-III disease were related to rectal cancer. For patients with AJCC stage I rectal cancer, the 10-year overall survival was 96% and did not differ significantly from that of an average population matched for gender, age, and calendar year (p = 0.151). For the more advanced tumor stages, however, survival was significantly impaired (p < 0.001). CONCLUSIONS Retrospective investigations of survival after rectal cancer resection should adjust for baseline mortality because a large fraction of deaths is not cancer related. Stage I rectal cancer patients, compared with patients with more advanced disease stages, have a relative survival close to 100% and can thus be considered cured. Using this relative-survival approach, the real public health burden caused by rectal cancer can be reliably analyzed and reported.
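Relative survival, the measure used above, is observed survival divided by the survival expected in a demographically matched general population. A minimal arithmetic sketch with the quoted 10-year figures; the expected-survival value is implied, not reported in the abstract.

    observed_10y = 0.487                      # 10-year overall survival, stages I-IV
    relative_10y = 0.665                      # 10-year relative survival, stages I-IV
    expected_10y = observed_10y / relative_10y
    print(round(expected_10y, 3))             # ~0.732: implied survival of the matched general population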
Abstract:
BACKGROUND Treatment of patients with paediatric acute lymphoblastic leukaemia has evolved such that the risk of late effects in survivors treated in accordance with contemporary protocols could be different from that noted in those treated decades ago. We aimed to estimate the risk of late effects in children with standard-risk acute lymphoblastic leukaemia treated with contemporary protocols. METHODS We used data from similarly treated members of the Childhood Cancer Survivor Study cohort. The Childhood Cancer Survivor Study is a multicentre, North American study of 5-year survivors of childhood cancer diagnosed between 1970 and 1986. We included cohort members if they were aged 1·0-9·9 years at the time of diagnosis of acute lymphoblastic leukaemia and had received treatment consistent with contemporary standard-risk protocols for acute lymphoblastic leukaemia. We calculated mortality rates and standardised mortality ratios, stratified by sex and survival time, after diagnosis of acute lymphoblastic leukaemia. We calculated standardised incidence ratios and absolute excess risk for subsequent neoplasms with age-specific, sex-specific, and calendar-year-specific rates from the Surveillance, Epidemiology and End Results Program. Outcomes were compared with a sibling cohort and the general US population. FINDINGS We included 556 (13%) of 4329 cohort members treated for acute lymphoblastic leukaemia. Median follow-up of the survivors from 5 years after diagnosis was 18·4 years (range 0·0-33·0). 28 (5%) of 556 participants had died (standardised mortality ratio 3·5, 95% CI 2·3-5·0). 16 (57%) deaths were due to causes other than recurrence of acute lymphoblastic leukaemia. Six (1%) survivors developed a subsequent malignant neoplasm (standardised incidence ratio 2·6, 95% CI 1·0-5·7). 107 participants (95% CI 81-193) in each group would need to be followed up for 1 year to observe one extra chronic health disorder in the survivor group compared with the sibling group. 415 participants (376-939) in each group would need to be followed up for 1 year to observe one extra severe, life-threatening, or fatal disorder in the group of survivors. Survivors did not differ from siblings in their educational attainment, rate of marriage, or independent living. INTERPRETATION The prevalence of adverse long-term outcomes in children treated for standard-risk acute lymphoblastic leukaemia according to contemporary protocols is low, but regular care from a knowledgeable primary-care practitioner is warranted. FUNDING National Cancer Institute, American Lebanese-Syrian Associated Charities, Swiss Cancer Research.
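Two of the quantities above reduce to simple ratios: the standardised mortality ratio is observed deaths divided by the deaths expected from population rates, and the "number needed to follow for 1 year" is the reciprocal of the absolute excess event rate in survivors versus siblings. The sketch below uses the quoted SMR to back out the implied expected deaths and uses hypothetical event rates for the second calculation; it is illustrative only, not the study's code.

    observed_deaths = 28
    smr = 3.5
    print(observed_deaths / smr)                         # implied expected deaths: 8.0

    # Hypothetical annual chronic-health-disorder rates (per person-year), for illustration only.
    rate_survivors, rate_siblings = 0.049, 0.040
    print(round(1 / (rate_survivors - rate_siblings)))   # ~111 person-years per one extra disorder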
Abstract:
Amyotrophic lateral sclerosis (ALS) has been associated with exposures in so-called 'electrical occupations'. It is unclear whether this possible link is explained by exposure to extremely low-frequency magnetic fields (ELF-MF) or by electrical shocks. We evaluated ALS mortality in 2000-2008 and exposure to ELF-MF and electrical shocks in the Swiss National Cohort, using job exposure matrices for occupations recorded at the 1990 and 2000 censuses. We compared 2.2 million workers with high or medium vs. low exposure to ELF-MF and electrical shocks using Cox proportional hazards models. Mortality from ALS was higher in people who had medium or high ELF-MF exposure in both censuses (HR 1.55, 95% CI 1.11-2.15), but closer to unity for electrical shocks (HR 1.17, 95% CI 0.83-1.65). When both exposures were included in the same model, the HR for ELF-MF changed little (HR 1.56), whereas the HR for electrical shocks was attenuated to 0.97. In conclusion, there was an association between exposure to ELF-MF and mortality from ALS among workers with a higher likelihood of long-term exposure.
Abstract:
BACKGROUND Little is known about whether negative emotions adversely affect the prognosis of patients who undergo cardiac rehabilitation. We prospectively investigated the predictive value of state negative affect (NA) assessed at discharge from cardiac rehabilitation and the moderating role of positive affect (PA) on the effect of NA on outcomes. METHODS A total of 564 cardiac patients (mean age 62.49 ± 11.51 years) completed a comprehensive three-month outpatient cardiac rehabilitation program and filled in the Global Mood Scale (GMS) at discharge. The combined endpoint was cardiovascular disease (CVD)-related hospitalizations plus all-cause mortality at follow-up. Cox regression models estimated the predictive value of NA, as well as the moderating influence of PA, on outcomes. Survival models were adjusted for sociodemographic factors, traditional cardiovascular risk factors, and severity of disease. RESULTS During a mean follow-up period of 3.4 years, 71 patients were hospitalized for a CVD-related event and 15 patients died. NA score (range 0-20) was a significant and independent predictor (hazard ratio (HR) 1.091, 95% confidence interval (CI) 1.012-1.175; p = 0.023), with a three-point higher level of NA increasing the relative risk by 9.1%. Furthermore, PA interacted significantly with NA (p < 0.001). The relative risk of poor prognosis with NA was increased in patients with low PA (p = 0.012) but remained unchanged in combination with high PA (p = 0.12). CONCLUSION The combination of NA with low PA was particularly predictive of poor prognosis. Whether reduction of NA and an increase of PA, particularly in those with high NA, improves outcomes needs to be tested.
Abstract:
BACKGROUND AND AIMS We investigated the association between significant liver fibrosis, determined by the AST-to-platelet ratio index (APRI), and all-cause mortality among HIV-infected patients prescribed antiretroviral therapy (ART) in Zambia. METHODS Among HIV-infected adults who initiated ART, we categorized baseline APRI scores according to established thresholds for significant hepatic fibrosis (APRI ≥1.5) and cirrhosis (APRI ≥2.0). Using multivariable logistic regression, we identified risk factors for elevated APRI, including demographic characteristics, body mass index (BMI), HIV clinical and immunologic status, and tuberculosis. In the subset tested for hepatitis B surface antigen (HBsAg), we investigated the association of hepatitis B virus co-infection with APRI score. Using Kaplan-Meier analysis and Cox proportional hazards regression, we determined the association of elevated APRI with death during ART. RESULTS Among 20,308 adults in the analysis cohort, 1,027 (5.1%) had significant liver fibrosis at ART initiation, including 616 (3.0%) with cirrhosis. Risk factors for significant fibrosis or cirrhosis included male sex, BMI <18, WHO clinical stage 3 or 4, CD4+ count <200 cells/mm³, and tuberculosis. Among the 237 (1.2%) who were tested, HBsAg-positive patients had four times the odds of significant fibrosis (adjusted odds ratio, 4.15; 95% CI, 1.71-10.04) compared with HBsAg-negative patients. Both significant fibrosis (adjusted hazard ratio 1.41, 95% CI 1.21-1.64) and cirrhosis (adjusted hazard ratio 1.57, 95% CI 1.31-1.89) were associated with increased all-cause mortality. CONCLUSION Liver fibrosis may be a risk factor for mortality during ART among HIV-infected individuals in Africa. APRI is an inexpensive and potentially useful test for liver fibrosis in resource-constrained settings.
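The APRI score used above has a simple closed form: APRI = (AST / upper limit of normal AST) × 100 / platelet count (10⁹/L). The sketch below applies the ≥1.5 and ≥2.0 thresholds from the abstract; the laboratory values and the assumed upper limit of normal (40 IU/L) are illustrative, not taken from the study.

    def apri(ast_iu_l, ast_uln_iu_l, platelets_10e9_l):
        """AST-to-platelet ratio index."""
        return (ast_iu_l / ast_uln_iu_l) * 100 / platelets_10e9_l

    score = apri(ast_iu_l=88, ast_uln_iu_l=40, platelets_10e9_l=110)   # illustrative values
    print(round(score, 2))                                             # 2.0
    print("significant fibrosis (>=1.5)" if score >= 1.5 else "below fibrosis threshold")
    print("cirrhosis range (>=2.0)" if score >= 2.0 else "below cirrhosis threshold")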
Abstract:
OBJECTIVES This study aimed to update the Logistic Clinical SYNTAX score to predict 3-year survival after percutaneous coronary intervention (PCI) and to compare its performance with that of the SYNTAX score alone. BACKGROUND The SYNTAX score is a well-established angiographic tool for predicting long-term outcomes after PCI. The Logistic Clinical SYNTAX score, developed by combining clinical variables with the anatomic SYNTAX score, has been shown to perform better than the SYNTAX score alone in predicting 1-year outcomes after PCI. However, the ability of this score to predict long-term survival is unknown. METHODS Patient-level data (N = 6,304, 399 deaths within 3 years) from 7 contemporary PCI trials were analyzed. We revised the overall risk and the predictor effects in the core model (SYNTAX score, age, creatinine clearance, and left ventricular ejection fraction) using Cox regression analysis to predict mortality at 3 years. We also updated the extended model by combining the core model with additional independent predictors of 3-year mortality (i.e., diabetes mellitus, peripheral vascular disease, and body mass index). RESULTS The revised Logistic Clinical SYNTAX models showed better discriminative ability than the anatomic SYNTAX score for the prediction of 3-year mortality after PCI (c-index: SYNTAX score, 0.61; core model, 0.71; and extended model, 0.73 in a cross-validation procedure). The extended model in particular performed better in differentiating low- and intermediate-risk groups. CONCLUSIONS Risk scores combining clinical characteristics with the anatomic SYNTAX score predict 3-year mortality substantially better than the SYNTAX score alone and should be used for long-term risk stratification of patients undergoing PCI.
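A core model of the kind described (Cox regression of 3-year mortality on the SYNTAX score plus clinical covariates) can be sketched with the lifelines library. This is not the study's code; the file and column names are hypothetical, and in practice the c-index would be estimated by cross-validation, as the authors did.

    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.read_csv("pci_trials_pooled.csv")    # hypothetical pooled patient-level data
    cols = ["days_to_death_or_censor", "died_within_3y",
            "syntax_score", "age", "creatinine_clearance", "lvef"]

    cph = CoxPHFitter()
    cph.fit(df[cols], duration_col="days_to_death_or_censor", event_col="died_within_3y")
    cph.print_summary()                          # hazard ratios for each predictor
    print(cph.concordance_index_)                # in-sample c-index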
Abstract:
OBJECTIVES The purpose of this study was to investigate the survival effects of inferior vena cava filters in patients with venous thromboembolism (VTE) who had a significant bleeding risk. BACKGROUND The effectiveness of inferior vena cava filter use among patients with acute symptomatic VTE and a known significant bleeding risk remains unclear. METHODS In this prospective cohort study of patients with acute VTE identified from the RIETE (Computerized Registry of Patients With Venous Thromboembolism), we assessed the association between inferior vena cava filter insertion for known significant bleeding risk and the outcomes of all-cause mortality, pulmonary embolism (PE)-related mortality, and recurrent VTE through 30 days after the initiation of VTE treatment. Propensity score matching was used to adjust for the likelihood of receiving a filter. RESULTS Of the 40,142 eligible patients who had acute symptomatic VTE, 371 underwent filter placement because of a known significant bleeding risk. A total of 344 patients treated with a filter were matched with 344 patients treated without a filter. Propensity score-matched pairs showed a nonsignificant trend toward a lower risk of all-cause death with filter insertion compared with no insertion (6.6% vs. 10.2%; p = 0.12). The risk-adjusted PE-related mortality rate was lower for filter insertion than for no insertion (1.7% vs. 4.9%; p = 0.03). Risk-adjusted recurrent VTE rates were higher for filter insertion than for no insertion (6.1% vs. 0.6%; p < 0.001). CONCLUSIONS In patients presenting with VTE and a significant bleeding risk, inferior vena cava filter insertion, compared with anticoagulant therapy, was associated with a lower risk of PE-related death and a higher risk of recurrent VTE. However, given the limitations of the study design, these findings do not establish a causal relationship between filter insertion and outcomes.
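The propensity-score matching step can be sketched as follows: a logistic model estimates each patient's probability of filter insertion from baseline covariates, and each filter patient is then matched to the nearest non-filter patient on that probability. This is an illustrative sketch (1:1 nearest-neighbour matching with replacement), not the RIETE analysis; the file and covariate names are hypothetical.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    df = pd.read_csv("riete_vte.csv")                        # hypothetical extract
    covariates = ["age", "cancer", "recent_bleeding", "creatinine_clearance", "immobility"]

    # Propensity model: probability of receiving a filter given baseline covariates.
    ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["filter"])
    df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

    treated = df[df["filter"] == 1]
    control = df[df["filter"] == 0]
    nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
    _, idx = nn.kneighbors(treated[["ps"]])
    matched = pd.concat([treated, control.iloc[idx.ravel()]])  # compare 30-day outcomes in this sample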