945 results for All-cause
Abstract:
Rotavirus is an important cause of childhood diarrhoea. A monovalent rotavirus vaccine (Rotarix®) was introduced into the Immunization Program of Brazil in 2006. In this study, we describe the incidence and burden of disease of rotavirus diarrhoea in two cohorts of children (vaccinated and unvaccinated). We followed two groups of 250 children under one year old, who were enrolled in December 2006 from a low-income residential area in Northeast Brazil. The children were monitored every two weeks for two years. Stool samples from children with diarrhoea were examined for the presence of rotavirus. Rotaviruses were genotyped using real-time polymerase chain reaction. The mean numbers of all-cause diarrhoea episodes/child (adjusted for age) in the first year were 0.87 and 0.84 in vaccinated and unvaccinated children, respectively. During the second year, the number of episodes/child decreased to 0.52 and 0.42. Only 16 (4.9%) of 330 stool samples were rotavirus-positive (10 from vaccinated and 6 from unvaccinated children), and only P[4]G2 rotaviruses were identified. All-cause diarrhoea episodes were more severe in unvaccinated children during the first year of life (p < 0.05), while vaccinated children had more severe episodes 18 months after vaccination. Rotavirus diarrhoea incidence was very low in both groups.
Abstract:
BACKGROUND Transcatheter aortic valve-in-valve implantation is an emerging therapeutic alternative for patients with a failed surgical bioprosthesis and may obviate the need for reoperation. We evaluated the clinical results of this technique using a large, worldwide registry. METHODS AND RESULTS The Global Valve-in-Valve Registry included 202 patients with degenerated bioprosthetic valves (aged 77.7±10.4 years; 52.5% men) from 38 cardiac centers. Bioprosthesis mode of failure was stenosis (n=85; 42%), regurgitation (n=68; 34%), or combined stenosis and regurgitation (n=49; 24%). Implanted devices included CoreValve (n=124) and Edwards SAPIEN (n=78). Procedural success was achieved in 93.1% of cases. Adverse procedural outcomes included initial device malposition in 15.3% of cases and ostial coronary obstruction in 3.5%. After the procedure, valve maximum/mean gradients were 28.4±14.1/15.9±8.6 mm Hg, and 95% of patients had ≤+1 degree of aortic regurgitation. At 30-day follow-up, all-cause mortality was 8.4%, and 84.1% of patients were in New York Heart Association functional class I/II. One-year follow-up was obtained in 87 patients, with 85.8% survival. CONCLUSIONS The valve-in-valve procedure is clinically effective in the vast majority of patients with degenerated bioprosthetic valves. Safety and efficacy concerns include device malposition, ostial coronary obstruction, and high gradients after the procedure.
Abstract:
OBJECTIVE To assess the association between consumption of fried foods and risk of coronary heart disease. DESIGN Prospective cohort study. SETTING Spanish cohort of the European Prospective Investigation into Cancer and Nutrition. PARTICIPANTS 40 757 adults aged 29-69 and free of coronary heart disease at baseline (1992-6), followed up until 2004. MAIN OUTCOME MEASURES Coronary heart disease events and vital status identified by record linkage with hospital discharge registers, population based registers of myocardial infarction, and mortality registers. RESULTS During a median follow-up of 11 years, 606 coronary heart disease events and 1135 deaths from all causes occurred. Compared with being in the first (lowest) quarter of fried food consumption, the multivariate hazard ratio of coronary heart disease in the second quarter was 1.15 (95% confidence interval 0.91 to 1.45), in the third quarter was 1.07 (0.83 to 1.38), and in the fourth quarter was 1.08 (0.82 to 1.43; P for trend 0.74). The results did not vary between those who used olive oil for frying and those who used sunflower oil. Likewise, no association was observed between fried food consumption and all cause mortality: multivariate hazard ratio for the highest versus the lowest quarter of fried food consumption was 0.93 (95% confidence interval 0.77 to 1.14; P for trend 0.98). CONCLUSION In Spain, a Mediterranean country where olive or sunflower oil is used for frying, the consumption of fried foods was not associated with coronary heart disease or with all cause mortality.
Abstract:
BACKGROUND Recently, some US cohorts have shown a moderate association between red and processed meat consumption and mortality supporting the results of previous studies among vegetarians. The aim of this study was to examine the association of red meat, processed meat, and poultry consumption with the risk of early death in the European Prospective Investigation into Cancer and Nutrition (EPIC). METHODS Included in the analysis were 448,568 men and women without prevalent cancer, stroke, or myocardial infarction, and with complete information on diet, smoking, physical activity and body mass index, who were between 35 and 69 years old at baseline. Cox proportional hazards regression was used to examine the association of meat consumption with all-cause and cause-specific mortality. RESULTS As of June 2009, 26,344 deaths were observed. After multivariate adjustment, a high consumption of red meat was related to higher all-cause mortality (hazard ratio (HR) = 1.14, 95% confidence interval (CI) 1.01 to 1.28, 160+ versus 10 to 19.9 g/day), and the association was stronger for processed meat (HR = 1.44, 95% CI 1.24 to 1.66, 160+ versus 10 to 19.9 g/day). After correction for measurement error, higher all-cause mortality remained significant only for processed meat (HR = 1.18, 95% CI 1.11 to 1.25, per 50 g/d). We estimated that 3.3% (95% CI 1.5% to 5.0%) of deaths could be prevented if all participants had a processed meat consumption of less than 20 g/day. Significant associations with processed meat intake were observed for cardiovascular diseases, cancer, and 'other causes of death'. The consumption of poultry was not related to all-cause mortality. CONCLUSIONS The results of our analysis support a moderate positive association between processed meat consumption and mortality, in particular due to cardiovascular diseases, but also to cancer.
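The hazard ratios and confidence intervals reported throughout these abstracts follow directly from Cox regression coefficients. As a minimal illustration (the back-computed standard error below is an assumption derived from the reported interval, not a value published by the EPIC authors), a coefficient β and its standard error convert to HR = exp(β) with CI = exp(β ± 1.96·SE):

```python
import math

def hazard_ratio_ci(beta, se, z=1.96):
    """Convert a Cox regression coefficient (log hazard ratio) and its
    standard error into a hazard ratio with a 95% confidence interval."""
    hr = math.exp(beta)
    lower = math.exp(beta - z * se)
    upper = math.exp(beta + z * se)
    return hr, lower, upper

# Worked example using the processed-meat result above: HR 1.44 (1.24-1.66)
# implies beta = ln(1.44) with SE roughly (ln 1.66 - ln 1.24) / (2 * 1.96).
beta = math.log(1.44)
se = (math.log(1.66) - math.log(1.24)) / (2 * 1.96)
print(hazard_ratio_ci(beta, se))
```

Running the worked example recovers approximately the interval reported in the abstract, which is a useful sanity check when reading such results.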
Abstract:
BACKGROUND Very few data exist on the clinical impact of permanent pacemaker implantation (PPI) after transcatheter aortic valve implantation. The objective of this study was to assess the impact of PPI after transcatheter aortic valve implantation on late outcomes in a large cohort of patients. METHODS AND RESULTS A total of 1556 consecutive patients without prior PPI undergoing transcatheter aortic valve implantation were included. Of them, 239 patients (15.4%) required a PPI within the first 30 days after transcatheter aortic valve implantation. At a mean follow-up of 22±17 months, no association was observed between the need for 30-day PPI and all-cause mortality (hazard ratio, 0.98; 95% confidence interval, 0.74-1.30; P=0.871), cardiovascular mortality (hazard ratio, 0.81; 95% confidence interval, 0.56-1.17; P=0.270), and all-cause mortality or rehospitalization for heart failure (hazard ratio, 1.00; 95% confidence interval, 0.77-1.30; P=0.980). A lower rate of unexpected (sudden or unknown) death was observed in patients with PPI (hazard ratio, 0.31; 95% confidence interval, 0.11-0.85; P=0.023). Patients with new PPI showed a poorer evolution of left ventricular ejection fraction over time (P=0.017), and new PPI was an independent predictor of left ventricular ejection fraction decrease at the 6- to 12-month follow-up (estimated coefficient, -2.26; 95% confidence interval, -4.07 to -0.44; P=0.013; R²=0.121). CONCLUSIONS The need for PPI was a frequent complication of transcatheter aortic valve implantation, but it was not associated with any increase in overall or cardiovascular death or rehospitalization for heart failure after a mean follow-up of ≈2 years. Indeed, 30-day PPI was a protective factor for the occurrence of unexpected (sudden or unknown) death. However, new PPI did have a negative effect on left ventricular function over time.
Abstract:
OBJECTIVE To study the factors associated with choice of therapy and prognosis in octogenarians with severe symptomatic aortic stenosis (AS). STUDY DESIGN Prospective, observational, multicenter registry. Centralized follow-up included survival status and, if possible, mode of death and Katz index. SETTING Transnational registry in Spain. SUBJECTS We included 928 patients aged ≥80 years with severe symptomatic AS. INTERVENTIONS Aortic-valve replacement (AVR), transcatheter aortic-valve implantation (TAVI) or conservative therapy. MAIN OUTCOME MEASURES All-cause death. RESULTS Mean age was 84.2 ± 3.5 years, and only 49.0% were independent (Katz index A). The most frequent planned management was conservative therapy in 423 (46%) patients, followed by TAVI in 261 (28%) and AVR in 244 (26%). The main reasons against recommending AVR in 684 patients were high surgical risk [322 (47.1%)], other medical motives [193 (28.2%)], patient refusal [134 (19.6%)] and family refusal in the case of incompetent patients [35 (5.1%)]. The mean time from treatment decision to AVR was 4.8 ± 4.6 months and to TAVI 2.1 ± 3.2 months, P < 0.001. During follow-up (11.2-38.9 months), 357 patients (38.5%) died. Survival rates at 6, 12, 18 and 24 months were 81.8%, 72.6%, 64.1% and 57.3%, respectively. Planned intervention, adjusted for multiple propensity score, was associated with lower mortality when compared with planned conservative treatment: TAVI, hazard ratio (HR) 0.68 (95% confidence interval [CI] 0.49-0.93; P = 0.016) and AVR, HR 0.56 (95% CI 0.39-0.80; P = 0.002). CONCLUSION Octogenarians with symptomatic severe AS are frequently managed conservatively. Planned conservative management is associated with a poor prognosis.
Abstract:
In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference measurements are required. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges in the calibration model. We adapted the two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We showed how to handle excess zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with the generalized additive modeling (GAM) and the empirical logit approaches, and how to select covariates in the calibration model. The performance of the two-part calibration model was compared with its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study, in which reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in about a threefold increase in the strength of association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model. Moreover, the extent of adjusting for error is influenced by the number and forms of covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to response distribution, nonlinearity, and covariate inclusion in specifying the calibration model.
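The two-part structure described above can be sketched in a few lines: part one models the probability that the food was consumed at all, part two models the amount given consumption, and the calibrated intake is their product. The coefficients below are purely illustrative placeholders (the real values come from fitting both parts to the 24-hour-recall reference data):

```python
import math

# Hypothetical fitted coefficients, for illustration only.
PROB_COEF = {"intercept": -0.4, "ffq": 0.015}    # part 1: logistic model
AMOUNT_COEF = {"intercept": 35.0, "ffq": 0.55}   # part 2: linear model

def calibrated_intake(ffq_grams):
    """Two-part regression calibration: predicted true intake equals
    P(consumed) * E[amount | consumed], both predicted from the
    questionnaire (FFQ) measurement in grams/day."""
    logit = PROB_COEF["intercept"] + PROB_COEF["ffq"] * ffq_grams
    p_consumed = 1.0 / (1.0 + math.exp(-logit))                     # part 1
    amount = AMOUNT_COEF["intercept"] + AMOUNT_COEF["ffq"] * ffq_grams  # part 2
    return p_consumed * amount
```

The multiplication is what handles the excess zeroes: respondents with a low probability of consumption are shrunk toward zero rather than forced through a single linear model.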
Abstract:
BACKGROUND AND PURPOSE: In low- and middle-income countries, the total burden of cardiovascular diseases is expected to increase due to demographic and epidemiological transitions. However, data on cause-specific mortality are lacking in sub-Saharan Africa. Seychelles is one of the few countries in the region where all deaths are registered and medically certified. In this study, we examine trends in mortality for stroke and myocardial infarction (MI) between 1989 and 2010. METHODS: Based on vital statistics, we ascertained stroke and MI as the cause of death if appearing in any of the 4 fields for immediate, intermediate, underlying, and contributory causes in death certificates. RESULTS: Mortality rates (per 100 000, age-standardized to the World Health Organization standard population) decreased from 1669/710 (men/women) in 1989-91 to 1113/535 in 2008-10 for all causes, from 250/140 to 141/86 for stroke, and from 117/51 to 59/24 for MI, corresponding to proportionate decreases of 33%/25% for all-cause mortality, 44%/39% for stroke, and 50%/53% for MI over 22 years. The absolute number of stroke and MI deaths did not increase over time. In 2008-10, the median age of death was 65/78 years (men/women) for all causes, 68/78 for stroke, and 66/73 for MI. CONCLUSIONS: Between 1989 and 2010, age-standardized stroke and MI mortality decreased markedly and more rapidly than all-cause mortality. The absolute number of cardiovascular disease deaths did not increase over time because the impact of population aging was fully compensated by the decline in cardiovascular disease mortality. Stroke mortality remained high, emphasizing the need to strengthen cardiovascular disease prevention and control.
Abstract:
PRINCIPLES: International guidelines for heart failure (HF) care recommend the implementation of inter-professional disease management programmes. To date, no such programme has been tested in Switzerland. The aim of this randomised controlled trial (RCT) was to test the effect on hospitalisation, mortality and quality of life of an adult ambulatory disease management programme for patients with HF in Switzerland. METHODS: Consecutive patients admitted to internal medicine in a Swiss university hospital were screened for decompensated HF. A total of 42 eligible patients were randomised to an intervention (n = 22) or usual care group (n = 20). Medical treatment was optimised and lifestyle recommendations were given to all patients. Intervention patients additionally received a home visit by an HF nurse, followed by 17 telephone calls of decreasing frequency over 12 months, focusing on self-care. Calls from the HF nurse to primary care physicians communicated health concerns and identified goals of care. Data were collected at baseline, 3, 6, 9 and 12 months. Mixed regression analysis was used for quality of life. Outcome assessment was conducted by researchers blinded to group assignment. RESULTS: After 12 months, 22 (52%) patients had an all-cause re-admission or died. Only 3 patients were hospitalised with HF decompensation. No significant effect of the intervention was found on HF-related quality of life. CONCLUSIONS: An inter-professional disease management programme is possible in the Swiss healthcare setting, but effects on outcomes need to be confirmed in larger studies.
Abstract:
PURPOSE: The recent increase in drug-resistant micro-organisms complicates the management of hospital-acquired bloodstream infections (HA-BSIs). We investigated the epidemiology of HA-BSI and evaluated the impact of drug resistance on outcomes of critically ill patients, controlling for patient characteristics and infection management. METHODS: A prospective, multicentre non-representative cohort study was conducted in 162 intensive care units (ICUs) in 24 countries. RESULTS: We included 1,156 patients [mean ± standard deviation (SD) age, 59.5 ± 17.7 years; 65 % males; mean ± SD Simplified Acute Physiology Score (SAPS) II score, 50 ± 17] with HA-BSIs, of which 76 % were ICU-acquired. Median time to diagnosis was 14 [interquartile range (IQR), 7-26] days after hospital admission. Polymicrobial infections accounted for 12 % of cases. Among monomicrobial infections, 58.3 % were gram-negative, 32.8 % gram-positive, 7.8 % fungal and 1.2 % due to strict anaerobes. Overall, 629 (47.8 %) isolates were multidrug-resistant (MDR), including 270 (20.5 %) extensively resistant (XDR), and 5 (0.4 %) pan-drug-resistant (PDR). Micro-organism distribution and MDR occurrence varied significantly (p < 0.001) by country. The 28-day all-cause fatality rate was 36 %. In the multivariable model including micro-organism, patient and centre variables, independent predictors of 28-day mortality included MDR isolate [odds ratio (OR), 1.49; 95 % confidence interval (95 %CI), 1.07-2.06], uncontrolled infection source (OR, 5.86; 95 %CI, 2.5-13.9) and timing to adequate treatment (before day 6 since blood culture collection versus never, OR, 0.38; 95 %CI, 0.23-0.63; since day 6 versus never, OR, 0.20; 95 %CI, 0.08-0.47). CONCLUSIONS: MDR and XDR bacteria (especially gram-negative) are common in HA-BSIs in critically ill patients and are associated with increased 28-day mortality. Intensified efforts to prevent HA-BSIs and to optimize their management through adequate source control and antibiotic therapy are needed to improve outcomes.
Abstract:
BACKGROUND: The SYNTAX score (SXscore), an angiographic score reflecting coronary lesion complexity, predicts clinical outcomes in patients with left main or multivessel disease, and in patients with ST-segment elevation myocardial infarction undergoing primary PCI. The clinical SXscore (CSS) integrates the SXscore and clinical variables (age, ejection fraction, serum creatinine) into a single score. We analyzed these scores in elderly patients with acute coronary syndrome (ACS) undergoing primary PCI. The purpose of this analysis was not to decide which patients should undergo PCI, but to predict clinical outcomes in this population. METHODS: The SXscore was determined in a consecutive series of 114 elderly patients (mean age, 79.6 ± 4.1 years) undergoing primary PCI for ACS. Outcomes were stratified according to SXscore tertiles: SXLOW ≤ 15 (n = 39), 15 < SXMID < 23 (n = 40), and SXHIGH ≥ 23 (n = 35). The primary endpoint was all-cause mortality at 30 days. Secondary endpoints were nonfatal major adverse cardiac and cerebrovascular events (MACCE) at 30 days, and 1-year outcomes in patients discharged alive. RESULTS: Mortality at 30 days was higher in the SXHIGH group compared with the aggregate SXLOW+MID group (37.1% vs 5.1%; P<.0001), and in the CSSHIGH group compared with the aggregate CSSLOW+MID group (25.5% vs 1.4%; P=.0001). MACCE rates at 30 days were similar among SXscore tertiles. The CSS predicted 1-year MACCE rates (12.1% for CSSHIGH vs 3.1% for CSSLOW+MID; P=.03). CONCLUSIONS: The SXscore predicts 30-day mortality in elderly patients with ACS undergoing primary PCI. In patients discharged alive, the CSS predicts risk of MACCE at 1 year.
Abstract:
BACKGROUND: Physicians need a specific risk-stratification tool to facilitate safe and cost-effective approaches to the management of patients with cancer and acute pulmonary embolism (PE). The objective of this study was to develop a simple risk score for predicting 30-day mortality in patients with PE and cancer by using measures readily obtained at the time of PE diagnosis. METHODS: Investigators randomly allocated 1,556 consecutive patients with cancer and acute PE from the international multicenter Registro Informatizado de la Enfermedad TromboEmbólica to derivation (67%) and internal validation (33%) samples. The external validation cohort for this study consisted of 261 patients with cancer and acute PE. Investigators compared 30-day all-cause mortality and nonfatal adverse medical outcomes across the derivation and two validation samples. RESULTS: In the derivation sample, multivariable analyses produced the risk score, which contained six variables: age > 80 years, heart rate ≥ 110/min, systolic BP < 100 mm Hg, body weight < 60 kg, recent immobility, and presence of metastases. In the internal validation cohort (n = 508), the 22.2% of patients (113 of 508) classified as low risk by the prognostic model had a 30-day mortality of 4.4% (95% CI, 0.6%-8.2%) compared with 29.9% (95% CI, 25.4%-34.4%) in the high-risk group. In the external validation cohort, the 18% of patients (47 of 261) classified as low risk by the prognostic model had a 30-day mortality of 0%, compared with 19.6% (95% CI, 14.3%-25.0%) in the high-risk group. CONCLUSIONS: The developed clinical prediction rule accurately identifies low-risk patients with cancer and acute PE.
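The six predictors named in the abstract lend themselves to a simple bedside-style calculator. The sketch below counts how many criteria are present; the equal weighting and the "no predictor present" low-risk cut-off are illustrative assumptions, since the published rule assigns its own point values:

```python
def pe_cancer_risk_score(age, heart_rate, systolic_bp, weight_kg,
                         recent_immobility, metastases):
    """Count how many of the six predictors from the derivation model are
    present: age > 80 years, heart rate >= 110/min, systolic BP < 100 mm Hg,
    body weight < 60 kg, recent immobility, and metastases.
    Equal weights are an assumption for illustration."""
    criteria = [
        age > 80,
        heart_rate >= 110,
        systolic_bp < 100,
        weight_kg < 60,
        bool(recent_immobility),
        bool(metastases),
    ]
    return sum(criteria)

def is_low_risk(score):
    # Hypothetical threshold: low risk when no adverse predictor is present.
    return score == 0
```

A patient meeting none of the criteria would be flagged low risk, mirroring the low-risk group that showed 0% 30-day mortality in the external validation cohort.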
Abstract:
BACKGROUND: Most clinical guidelines recommend that AIDS-free, HIV-infected persons with CD4 cell counts below 0.350 × 10⁹ cells/L initiate combined antiretroviral therapy (cART), but the optimal CD4 cell count at which cART should be initiated remains a matter of debate. OBJECTIVE: To identify the optimal CD4 cell count at which cART should be initiated. DESIGN: Prospective observational data from the HIV-CAUSAL Collaboration and dynamic marginal structural models were used to compare cART initiation strategies for CD4 thresholds between 0.200 and 0.500 × 10⁹ cells/L. SETTING: HIV clinics in Europe and the Veterans Health Administration system in the United States. PATIENTS: 20,971 HIV-infected, therapy-naive persons with baseline CD4 cell counts at or above 0.500 × 10⁹ cells/L and no previous AIDS-defining illnesses, of whom 8392 had a CD4 cell count that decreased into the range of 0.200 to 0.499 × 10⁹ cells/L and were included in the analysis. MEASUREMENTS: Hazard ratios and survival proportions for all-cause mortality and a combined end point of AIDS-defining illness or death. RESULTS: Compared with initiating cART at the CD4 cell count threshold of 0.500 × 10⁹ cells/L, the mortality hazard ratio was 1.01 (95% CI, 0.84 to 1.22) for the 0.350 threshold and 1.20 (CI, 0.97 to 1.48) for the 0.200 threshold. The corresponding hazard ratios were 1.38 (CI, 1.23 to 1.56) and 1.90 (CI, 1.67 to 2.15), respectively, for the combined end point of AIDS-defining illness or death. LIMITATIONS: CD4 cell count at cART initiation was not randomized. Residual confounding may exist. CONCLUSION: Initiation of cART at a threshold CD4 count of 0.500 × 10⁹ cells/L increases AIDS-free survival. However, mortality did not vary substantially with the use of CD4 thresholds between 0.300 and 0.500 × 10⁹ cells/L.
Abstract:
BACKGROUND: Disease-management programs may enhance the quality of care provided to patients with chronic diseases, such as chronic obstructive pulmonary disease (COPD). The aim of this systematic review was to assess the effectiveness of COPD disease-management programs. METHODS: We conducted a computerized search of MEDLINE, EMBASE, CINAHL, PsycINFO, and the Cochrane Library (CENTRAL) for studies evaluating interventions meeting our operational definition of disease management: patient education, 2 or more different intervention components, 2 or more health care professionals actively involved in patients' care, and intervention lasting 12 months or more. Programs conducted in hospital only and those targeting patients receiving palliative care were excluded. Two reviewers evaluated 12,749 titles and fully reviewed 139 articles; among these, data from 13 studies were included and extracted. Clinical outcomes considered were all-cause mortality, lung function, exercise capacity (walking distance), health-related quality of life, symptoms, COPD exacerbations, and health care use. A meta-analysis of exercise capacity and all-cause mortality was performed using random-effects models. RESULTS: The studies included were 9 randomized controlled trials, 1 controlled trial, and 3 uncontrolled before-after trials. Results indicate that the disease-management programs studied significantly improved exercise capacity (32.2 m, 95% confidence interval [CI], 4.1-60.3), decreased risk of hospitalization, and moderately improved health-related quality of life. All-cause mortality did not differ between groups (pooled odds ratio 0.84, 95% CI, 0.54-1.40). CONCLUSION: COPD disease-management programs modestly improved exercise capacity and health-related quality of life and reduced hospital admissions, but did not affect all-cause mortality. Future studies should explore the specific elements or characteristics of these programs that bring the greatest benefit.
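The random-effects pooling used in this meta-analysis can be sketched with the standard DerSimonian-Laird estimator (an assumption about the specific estimator, since the abstract only says "random-effects models"): study effects are weighted by the inverse of their within-study variance plus an estimated between-study variance τ².

```python
import math

def dersimonian_laird(effects, variances):
    """Pool study-level effects (e.g. log odds ratios or mean differences)
    with a DerSimonian-Laird random-effects model.
    Returns the pooled effect and its standard error."""
    k = len(effects)
    w = [1.0 / v for v in variances]                          # fixed-effect weights
    y_fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q measures between-study heterogeneity.
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                        # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]              # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se
```

When the studies agree, τ² collapses to zero and the estimate reduces to the fixed-effect inverse-variance average; heterogeneous studies get down-weighted toward equal weights instead.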