867 results for Attributable Mortality
Abstract:
BACKGROUND: Exposure to intermittent magnetic fields of 16 Hz has been shown to reduce heart rate variability, and decreased heart rate variability predicts cardiovascular mortality. We examined mortality from cardiovascular causes in railway workers with varying degrees of exposure to intermittent 16.7 Hz magnetic fields. METHODS: We studied a cohort of 20,141 Swiss railway employees between 1972 and 2002, including highly exposed train drivers (median lifetime exposure 120.5 μT-years) and less exposed shunting yard engineers (42.1 μT-years), train attendants (13.3 μT-years) and station masters (5.7 μT-years). During 464,129 person-years of follow-up, 5,413 deaths were recorded, of which 3,594 were attributed to cardiovascular diseases. We analyzed the data using Cox proportional hazards models. RESULTS: For all cardiovascular mortality, the hazard ratio compared to station masters was 0.99 (95%CI: 0.91, 1.08) in train drivers, 1.13 (95%CI: 0.98, 1.30) in shunting yard engineers, and 1.09 (95%CI: 1.00, 1.19) in train attendants. Corresponding hazard ratios for arrhythmia-related deaths were 1.04 (95%CI: 0.68, 1.59), 0.58 (95%CI: 0.24, 1.37) and 1.30 (95%CI: 0.87, 1.93), and for acute myocardial infarction 1.00 (95%CI: 0.73, 1.36), 1.56 (95%CI: 1.04, 2.32), and 1.14 (95%CI: 0.85, 1.53). The hazard ratio per 100 μT-years of cumulative exposure was 0.94 (95%CI: 0.71, 1.24) for arrhythmia-related deaths and 0.91 (95%CI: 0.75, 1.11) for acute myocardial infarction. CONCLUSION: This study provides evidence against an association between long-term occupational exposure to intermittent 16.7 Hz magnetic fields and cardiovascular mortality.
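The Cox proportional hazards analysis described above can be illustrated with a minimal sketch. The following pure-Python, single-covariate Newton-Raphson fit of the Breslow partial log-likelihood is not the study's code; the data and function names are hypothetical, and a real analysis would use a statistical package.

```python
import math

def cox_fit(times, events, x, iters=25):
    """Fit a one-covariate Cox model by Newton-Raphson on the Breslow
    partial log-likelihood. Assumes no tied event times; events are
    1 for death, 0 for censoring."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    t = [times[i] for i in order]
    d = [events[i] for i in order]
    z = [x[i] for i in order]
    n = len(t)
    beta = 0.0
    for _ in range(iters):
        grad, hess = 0.0, 0.0
        for i in range(n):
            if not d[i]:
                continue
            risk = range(i, n)  # subjects still at risk at time t[i]
            w = [math.exp(beta * z[j]) for j in risk]
            s0 = sum(w)
            s1 = sum(wj * z[j] for wj, j in zip(w, risk))
            s2 = sum(wj * z[j] ** 2 for wj, j in zip(w, risk))
            grad += z[i] - s1 / s0            # score contribution
            hess += s2 / s0 - (s1 / s0) ** 2  # information contribution
        if hess <= 0:
            break  # flat likelihood (e.g. constant covariate)
        beta += grad / hess
    return beta

# Hypothetical data: exposed subjects (x=1) tend to die earlier.
beta = cox_fit([1, 2, 3, 4, 5, 6, 7, 8], [1] * 8, [1, 0, 1, 1, 0, 1, 0, 0])
hazard_ratio = math.exp(beta)
```

With an estimated coefficient β per μT-year of exposure, a hazard ratio per 100 μT-years, as reported above, would correspond to exp(100·β).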
Abstract:
Landscape structure and heterogeneity play a potentially important, but little understood, role in predator-prey interactions and behaviourally mediated habitat selection. For example, habitat complexity may either reduce or enhance the efficiency of a predator's efforts to search, track, capture, kill and consume prey. For prey, structural heterogeneity may affect predator detection, avoidance and defense, escape tactics, and the ability to exploit refuges. This study investigates whether and how vegetation and topographic structure influence the spatial patterns and distribution of moose (Alces alces) mortality due to predation and malnutrition at the local and landscape levels in Isle Royale National Park. A total of 230 locations where wolves (Canis lupus) killed moose during the winters of 2002-2010, and 182 moose starvation death sites from 1996-2010, were selected from the extensive Isle Royale Wolf-Moose Project carcass database. A variety of LiDAR-derived metrics were generated and used in a machine-learning algorithm (Random Forest) to identify, characterize, and classify three-dimensional variables significant to each of the mortality classes. Furthermore, spatial models were developed to predict and assess the likelihood of moose mortality at the landscape scale. This research found that the patterns of moose mortality by predation and malnutrition across the landscape are non-random, have a high degree of spatial variability, and that both mechanisms operate in contexts of comparable physiographic and vegetation structure. Wolf winter hunting locations on Isle Royale are more likely a result of their prey's habitat selection, although wolves seem to prioritize areas with higher moose density in the winter. Furthermore, the findings suggest that the distribution of moose mortality by predation is habitat-specific to moose, not to wolves.
In addition, moose sex, age, and health condition also affect mortality site selection, as revealed by subtle differences between sites in vegetation height, vegetation density, and topography. Vegetation density in particular appears to differentiate mortality locations for distinct classes of moose. The results also emphasize the significance of fine-scale landscape and habitat features when addressing predator-prey interactions. These finer-scale findings would be easily missed if analyses were limited to the broader landscape scale alone.
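Random Forest, the algorithm used above to classify mortality sites from LiDAR metrics, is in essence bagging of decision trees with majority voting. A toy sketch using depth-one trees (stumps) conveys the idea; the feature, labels, and function names here are purely illustrative, and the study itself would have used a full implementation with many LiDAR-derived predictors.

```python
import random
from collections import Counter

def majority(labels):
    """Most common label in a list (None if empty)."""
    return Counter(labels).most_common(1)[0][0] if labels else None

def train_stump(X, y):
    """Best single-feature threshold split by misclassification count."""
    best = (len(y) + 1, None)
    for f in range(len(X[0])):
        for t in {row[f] for row in X}:
            left = majority([y[i] for i in range(len(X)) if X[i][f] <= t])
            right = majority([y[i] for i in range(len(X)) if X[i][f] > t])
            if right is None:       # empty right side: predict left label
                right = left
            errs = sum(1 for i in range(len(X))
                       if (left if X[i][f] <= t else right) != y[i])
            if errs < best[0]:
                best = (errs, (f, t, left, right))
    return best[1]

def predict_stump(stump, row):
    f, t, left, right = stump
    return left if row[f] <= t else right

def random_forest(X, y, n_trees=25, seed=0):
    """Train stumps on bootstrap resamples; predict by majority vote."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in X]  # bootstrap sample
        stumps.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))
    def predict(row):
        return majority([predict_stump(s, row) for s in stumps])
    return predict

# Hypothetical one-feature example: vegetation density distinguishing sites.
X = [[1.0], [2.0], [3.0], [8.0], [9.0], [10.0]]
y = ["predation", "predation", "predation",
     "starvation", "starvation", "starvation"]
predict = random_forest(X, y)
```

A production Random Forest additionally samples a random subset of features at each split and grows deep trees, which is what makes it effective with dozens of correlated LiDAR metrics.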
Abstract:
To demonstrate not only prevention of vision loss but also improvement in best-corrected visual acuity (BCVA) after treatment with ranibizumab on a variable-dosing regimen over 24 months in patients with age-related macular degeneration (AMD).
Abstract:
In the present report, the prevalence, severity, and risk factors of tricuspid valve regurgitation (TR) in 251 heart transplant recipients were analyzed retrospectively. Tricuspid valve function was studied by color-flow Doppler echocardiography and annual heart catheterization. The severity of TR was graded on a scale from 0 (no TR) to 4 (severe). Additional postoperative data included rate of rejection, number of endomyocardial biopsies, incidence of transplant vasculopathy, and preoperative and postoperative hemodynamics. The incidence of grade 3 TR increased from 5% at 1 year to 50% at 4 years after transplantation. Multivariate analysis showed rate of rejection and donor heart weight to be significant risk factors. The ischemic intervals as well as the preoperative and postoperative pulmonary hemodynamics did not affect the severity or prevalence of TR. These results indicate that various factors appear to have an impact on the development of TR and that its prevalence might be lowered by reducing the number of biopsies performed and, when possible, by oversizing of donor hearts.
Abstract:
While studies from other countries have shown excess mortality in diabetic individuals compared with the general population, comparable long-term data are not available for Switzerland.
Abstract:
BACKGROUND: Erythropoiesis-stimulating agents reduce anaemia in patients with cancer and could improve their quality of life, but these drugs might increase mortality. We therefore did a meta-analysis of randomised controlled trials in which these drugs plus red blood cell transfusions were compared with transfusion alone for prophylaxis or treatment of anaemia in patients with cancer. METHODS: Data for patients treated with epoetin alfa, epoetin beta, or darbepoetin alfa were obtained and analysed by independent statisticians using fixed-effects and random-effects meta-analysis. Analyses were by intention to treat. Primary endpoints were mortality during the active study period and overall survival during the longest available follow-up, irrespective of anticancer treatment, and in patients given chemotherapy. Tests for interactions were used to identify differences in effects of erythropoiesis-stimulating agents on mortality across prespecified subgroups. FINDINGS: Data from a total of 13 933 patients with cancer in 53 trials were analysed. 1530 patients died during the active study period and 4993 overall. Erythropoiesis-stimulating agents increased mortality during the active study period (combined hazard ratio [cHR] 1.17, 95% CI 1.06-1.30) and worsened overall survival (1.06, 1.00-1.12), with little heterogeneity between trials (I² 0%, p=0.87 for mortality during the active study period, and I² 7.1%, p=0.33 for overall survival). 10 441 patients on chemotherapy were enrolled in 38 trials. The cHR for mortality during the active study period was 1.10 (0.98-1.24), and 1.04 (0.97-1.11) for overall survival. There was little evidence for a difference between trials of patients given different anticancer treatments (p for interaction=0.42). INTERPRETATION: Treatment with erythropoiesis-stimulating agents in patients with cancer increased mortality during active study periods and worsened overall survival.
The increased risk of death associated with treatment with these drugs should be balanced against their benefits. FUNDING: German Federal Ministry of Education and Research, Medical Faculty of University of Cologne, and Oncosuisse (Switzerland).
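The fixed-effects and random-effects pooling and the I² heterogeneity statistic reported in this meta-analysis can be sketched in a few lines. This is a generic inverse-variance/DerSimonian-Laird illustration with hypothetical inputs, not the authors' analysis code.

```python
import math

def meta_analyse(hrs, ses):
    """Inverse-variance meta-analysis of hazard ratios (two or more
    studies). hrs: per-study hazard ratios; ses: standard errors of the
    log hazard ratios. Returns (fixed-effect pooled HR,
    random-effects pooled HR, I^2 in percent)."""
    y = [math.log(h) for h in hrs]          # work on the log scale
    w = [1.0 / s ** 2 for s in ses]         # fixed-effect weights
    sw = sum(w)
    pooled = sum(wi * yi for wi, yi in zip(w, y)) / sw
    # Cochran's Q and DerSimonian-Laird between-study variance tau^2
    q = sum(wi * (yi - pooled) ** 2 for wi, yi in zip(w, y))
    df = len(y) - 1
    tau2 = max(0.0, (q - df) / (sw - sum(wi ** 2 for wi in w) / sw))
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    # random-effects weights include tau^2
    wr = [1.0 / (s ** 2 + tau2) for s in ses]
    pooled_re = sum(wi * yi for wi, yi in zip(wr, y)) / sum(wr)
    return math.exp(pooled), math.exp(pooled_re), i2

# Hypothetical trial results (HR, SE of log HR):
fixed, rand, i2 = meta_analyse([1.17, 1.05, 1.25], [0.10, 0.15, 0.20])
```

When between-trial heterogeneity is negligible (I² near 0%, as reported above), the fixed-effect and random-effects estimates coincide.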
Abstract:
BACKGROUND: The retention of patients in antiretroviral therapy (ART) programmes is an important issue in resource-limited settings. Loss to follow-up can be substantial, but it is unclear what the outcomes are in patients who are lost to programmes. METHODS AND FINDINGS: We searched the PubMed, EMBASE, Latin American and Caribbean Health Sciences Literature (LILACS), Indian Medlars Centre (IndMed) and African Index Medicus (AIM) databases and the abstracts of three conferences for studies that traced patients lost to follow-up to ascertain their vital status. Main outcomes were the proportion of patients traced, the proportion found to be alive and the proportion that had died. Where available, we also examined the reasons why some patients could not be traced, why patients found to be alive did not return to the clinic, and the causes of death. We combined mortality data from several studies using random-effects meta-analysis. Seventeen studies were eligible. All were from sub-Saharan Africa, except one study from India, and none were conducted in children. A total of 6420 patients (range 44 to 1343 patients) were included. Patients were traced using telephone calls, home visits and through social networks. Overall the vital status of 4021 patients could be ascertained (63%, range across studies: 45% to 86%); 1602 patients had died. The combined mortality was 40% (95% confidence interval 33%-48%), with substantial heterogeneity between studies (P<0.0001). Mortality in African programmes ranged from 12% to 87% of patients lost to follow-up. Mortality was inversely associated with the rate of loss to follow-up in the programme: it declined from around 60% to 20% as the percentage of patients lost to the programme increased from 5% to 50%. Among patients not found, telephone numbers and addresses were frequently incorrect or missing.
Common reasons for not returning to the clinic were transfer to another programme, financial problems and improving or deteriorating health. Causes of death were available for 47 deaths: 29 (62%) died of an AIDS-defining illness. CONCLUSIONS: In ART programmes in resource-limited settings a substantial minority of adults lost to follow-up cannot be traced, and among those traced 20% to 60% had died. Our findings have implications both for patient care and the monitoring and evaluation of programmes.
Abstract:
BACKGROUND: Mortality in HIV-infected patients who have access to highly active antiretroviral therapy (ART) has declined in sub-Saharan Africa, but it is unclear how mortality compares to the non-HIV-infected population. We compared mortality rates observed in HIV-1-infected patients starting ART with non-HIV-related background mortality in four countries in sub-Saharan Africa. METHODS AND FINDINGS: Patients enrolled in antiretroviral treatment programmes in Côte d'Ivoire, Malawi, South Africa, and Zimbabwe were included. We calculated excess mortality rates and standardised mortality ratios (SMRs) with 95% confidence intervals (CIs). Expected numbers of deaths were obtained using estimates of age-, sex-, and country-specific, HIV-unrelated, mortality rates from the Global Burden of Disease project. Among 13,249 eligible patients 1,177 deaths were recorded during 14,695 person-years of follow-up. The median age was 34 y, 8,831 (67%) patients were female, and 10,811 of 12,720 patients (85%) with information on clinical stage had advanced disease when starting ART. The excess mortality rate was 17.5 (95% CI 14.5-21.1) per 100 person-years in patients who started ART with a CD4 cell count of less than 25 cells/μl and World Health Organization (WHO) stage III/IV, compared to 1.00 (0.55-1.81) per 100 person-years in patients who started with 200 cells/μl or above with WHO stage I/II. The corresponding SMRs were 47.1 (39.1-56.6) and 3.44 (1.91-6.17). Among patients who started ART with 200 cells/μl or above in WHO stage I/II and survived the first year of ART, the excess mortality rate was 0.27 (0.08-0.94) per 100 person-years and the SMR was 1.14 (0.47-2.77). CONCLUSIONS: Mortality of HIV-infected patients treated with combination ART in sub-Saharan Africa continues to be higher than in the general population, but for some patients excess mortality is moderate and reaches that of the general population in the second year of ART.
Much of the excess mortality might be prevented by timely initiation of ART.
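Standardised mortality ratios and excess mortality rates of the kind reported above are straightforward to compute once stratum-specific reference rates are available: the SMR is observed deaths divided by the deaths expected under the reference rates, and the excess rate is the observed-minus-expected difference per unit of person-time. A minimal illustration with hypothetical numbers (not the study's data):

```python
def smr_and_excess(observed_deaths, person_years, strata):
    """SMR and excess mortality rate per 100 person-years.
    strata: list of (person_years_in_stratum, reference_deaths_per_py),
    e.g. age/sex/country strata with HIV-unrelated background rates."""
    expected = sum(py * rate for py, rate in strata)
    smr = observed_deaths / expected
    excess_per_100py = 100.0 * (observed_deaths - expected) / person_years
    return smr, excess_per_100py

# Hypothetical cohort: 20 deaths over 1,000 person-years, in two strata
# whose background rates predict 10 deaths in total.
smr, excess = smr_and_excess(20, 1000.0, [(400.0, 0.01), (600.0, 0.01)])
# smr -> 2.0, excess -> 1.0 per 100 person-years
```

An SMR near 1, as seen above in early-stage patients surviving the first year of ART, means mortality is close to the background rate.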
Abstract:
BACKGROUND: The extent to which mortality differs following individual acquired immunodeficiency syndrome (AIDS)-defining events (ADEs) has not been assessed among patients initiating combination antiretroviral therapy. METHODS: We analyzed data from 31,620 patients with no prior ADEs who started combination antiretroviral therapy. Cox proportional hazards models were used to estimate mortality hazard ratios for each ADE that occurred in >50 patients, after stratification by cohort and adjustment for sex, HIV transmission group, number of antiretroviral drugs initiated, regimen, age, date of starting combination antiretroviral therapy, and CD4+ cell count and HIV RNA load at initiation of combination antiretroviral therapy. ADEs that occurred in <50 patients were grouped together to form a "rare ADEs" category. RESULTS: During a median follow-up period of 43 months (interquartile range, 19-70 months), 2880 ADEs were diagnosed in 2262 patients; 1146 patients died. The most common ADEs were esophageal candidiasis (in 360 patients), Pneumocystis jiroveci pneumonia (320 patients), and Kaposi sarcoma (308 patients). The greatest mortality hazard ratio was associated with non-Hodgkin's lymphoma (hazard ratio, 17.59; 95% confidence interval, 13.84-22.35) and progressive multifocal leukoencephalopathy (hazard ratio, 10.0; 95% confidence interval, 6.70-14.92). Three groups of ADEs were identified on the basis of the ranked hazard ratios with bootstrapped confidence intervals: severe (non-Hodgkin's lymphoma and progressive multifocal leukoencephalopathy [hazard ratio, 7.26; 95% confidence interval, 5.55-9.48]), moderate (cryptococcosis, cerebral toxoplasmosis, AIDS dementia complex, disseminated Mycobacterium avium complex, and rare ADEs [hazard ratio, 2.35; 95% confidence interval, 1.76-3.13]), and mild (all other ADEs [hazard ratio, 1.47; 95% confidence interval, 1.08-2.00]). 
CONCLUSIONS: In the combination antiretroviral therapy era, mortality rates subsequent to an ADE depend on the specific diagnosis. The proposed classification of ADEs may be useful in clinical end point trials, prognostic studies, and patient management.
Abstract:
BACKGROUND: Studies continue to identify percutaneous coronary intervention procedural volume, both at the institutional level and at the operator level, as being strongly correlated with outcome. High-volume centers have been defined as those that perform >400 percutaneous coronary intervention procedures per year. The relationship between drug-eluting stent procedural volume and outcome is unknown. We investigated this relationship in the German Cypher Registry. METHODS AND RESULTS: The present analysis included 8201 patients treated with sirolimus-eluting stents between April 2002 and September 2005 in 51 centers. Centers that recruited >400 sirolimus-eluting stent patients in this time period were considered high-volume centers; those with 150 to 400 patients were considered intermediate-volume centers; and those with <150 patients were designated as low-volume centers. The primary end point was the composite of death, myocardial infarction, and target-vessel revascularization at 6 months. This end point occurred in 11.3%, 12.1%, and 9.0% of patients in the low-, intermediate-, and high-volume center groups, respectively (P=0.0001). There was no difference between groups in the rate of target-vessel revascularization (P=0.2) or cerebrovascular accidents (P=0.5). The difference in death/myocardial infarction remained significant after adjustment for baseline factors (odds ratio 1.85, 95% confidence interval 1.31 to 2.59, P<0.001 for low-volume centers; odds ratio 1.69, 95% confidence interval 1.29 to 2.21, P<0.001 for intermediate-volume centers). Patient and lesion selection, procedural features, and postprocedural medications differed significantly between groups. CONCLUSIONS: The volume of sirolimus-eluting stent procedures performed at an institutional level was inversely related to death and myocardial infarction, but not to target-vessel revascularization, at 6-month follow-up. Safety outcomes were better in high-volume centers.
These findings have important public health policy implications.
Abstract:
AIMS: It is unclear whether transcatheter aortic valve implantation (TAVI) addresses an unmet clinical need for those currently rejected for surgical aortic valve replacement (SAVR) and whether there is a subgroup of high-risk patients benefiting more from TAVI compared to SAVR. In this two-centre, prospective cohort study, we compared baseline characteristics and 30-day mortality between TAVI and SAVR in consecutive patients undergoing invasive treatment for aortic stenosis. METHODS AND RESULTS: We pre-specified different adjustment methods to examine the effect of TAVI as compared with SAVR on overall 30-day mortality: crude univariable logistic regression analysis, multivariable analysis adjusted for baseline characteristics, analysis adjusted for propensity scores, propensity score matched analysis, and weighted analysis using the inverse probability of treatment (IPT) as weights. A total of 1,122 patients were included in the study: 114 undergoing TAVI and 1,008 patients undergoing SAVR. The crude mortality rate was greater in the TAVI group (9.6% vs. 2.3%), yielding an odds ratio [OR] of 4.57 (95% CI 2.17-9.65). Compared to patients undergoing SAVR, patients with TAVI were older, more likely to be in NYHA class III and IV, and had a considerably higher logistic EuroSCORE and more comorbid conditions. Adjusted OR depended on the method used to control for confounding and ranged from 0.60 (0.11-3.36) to 7.57 (0.91-63.0). We examined the distribution of propensity scores and found scores to overlap sufficiently only in a narrow range. In patients with sufficient overlap of propensity scores, adjusted OR ranged from 0.35 (0.04-2.72) to 3.17 (0.31-31.9). In patients with insufficient overlap, we consistently found increased odds of death associated with TAVI compared with SAVR irrespective of the method used to control confounding, with adjusted OR ranging from 5.88 (0.67-51.8) to 25.7 (0.88-750).
Approximately one third of patients undergoing TAVI were found to be potentially eligible for a randomised comparison of TAVI versus SAVR. CONCLUSIONS: Both measured and unmeasured confounding limit the conclusions that can be drawn from observational comparisons of TAVI versus SAVR. Our study indicates that TAVI could be associated with either substantial benefits or harms. Randomised comparisons of TAVI versus SAVR are warranted.
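Inverse-probability-of-treatment (IPT) weighting, one of the adjustment methods pre-specified above, reweights each patient by the inverse of the probability of the treatment actually received, so that the weighted treatment groups resemble each other on measured covariates. A minimal sketch with hypothetical propensity scores (in the study these would come from a fitted propensity model):

```python
def ipt_weights(treated, propensity):
    """IPT weights: 1/p for treated patients, 1/(1-p) for controls,
    where p is the propensity score (probability of treatment)."""
    return [1.0 / p if t else 1.0 / (1.0 - p)
            for t, p in zip(treated, propensity)]

def weighted_risk(outcomes, weights):
    """Weighted proportion of events (e.g. 30-day mortality)."""
    return sum(o * w for o, w in zip(outcomes, weights)) / sum(weights)

# Hypothetical data: treatment flags and propensity scores.
treated = [1, 1, 0, 0]
propensity = [0.8, 0.8, 0.2, 0.2]
weights = ipt_weights(treated, propensity)
# Compare weighted mortality between the two arms:
risk_tavi = weighted_risk([1, 0], weights[:2])  # outcomes of treated
risk_savr = weighted_risk([0, 0], weights[2:])  # outcomes of controls
```

As the abstract notes, such weights are only trustworthy where propensity scores overlap; extreme scores produce huge weights and unstable estimates, which is why the authors restricted part of their analysis to the region of sufficient overlap.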