82 results for toxicological mortality data


Relevance:

30.00%

Publisher:

Abstract:

INTRODUCTION: It is unclear to what level mean arterial blood pressure (MAP) should be increased during septic shock in order to improve outcome. In this study we investigated the association between MAP values of 70 mmHg or higher, vasopressor load, 28-day mortality and disease-related events in septic shock. METHODS: This is a post hoc analysis of data from the control group of a multicenter trial and includes 290 septic shock patients in whom a MAP ≥70 mmHg could be maintained during shock. Demographic and clinical data, MAP, vasopressor requirements during the shock period, disease-related events and 28-day mortality were documented. Logistic regression models, adjusted for the geographic region of the study center, age, presence of chronic arterial hypertension, Simplified Acute Physiology Score (SAPS) II and the mean vasopressor load during the shock period, were calculated to investigate the association between MAP or MAP quartiles ≥70 mmHg and mortality or the frequency and occurrence of disease-related events. RESULTS: There was no association between MAP or MAP quartiles and mortality or the occurrence of disease-related events. These associations were not influenced by age or pre-existing arterial hypertension (all P > 0.05). The mean vasopressor load was associated with mortality (relative risk (RR) 1.83; 95% confidence interval (CI) 1.4-2.38; P < 0.001), the number of disease-related events (P < 0.001) and the occurrence of acute circulatory failure (RR 1.64; 95% CI 1.28-2.11; P < 0.001), metabolic acidosis (RR 1.79; 95% CI 1.38-2.32; P < 0.001), renal failure (RR 1.49; 95% CI 1.17-1.89; P = 0.001) and thrombocytopenia (RR 1.33; 95% CI 1.06-1.68; P = 0.01). CONCLUSIONS: MAP levels of 70 mmHg or higher do not appear to be associated with improved survival in septic shock. Elevating MAP above 70 mmHg by augmenting vasopressor dosages may increase mortality. Future trials are needed to identify the lowest MAP level that ensures tissue perfusion while avoiding unnecessarily high catecholamine infusions.

Relevance:

30.00%

Publisher:

Abstract:

INTRODUCTION: Despite the key role of hemodynamic goals, there are few data addressing the question of which hemodynamic variables are associated with outcome or should be targeted in cardiogenic shock patients. The aim of this study was to investigate the association between hemodynamic variables and cardiogenic shock mortality. METHODS: Medical records and the patient data management system of a multidisciplinary intensive care unit (ICU) were reviewed for patients admitted because of cardiogenic shock. In all patients, the hourly variable time integral of hemodynamic variables during the first 24 hours after ICU admission was calculated. If hemodynamic variables were associated with 28-day mortality, the hourly variable time integral of drops below clinically relevant threshold levels was computed. Regression models and receiver operating characteristic analyses were calculated. All statistical models were adjusted for age, admission year, mean catecholamine doses and the Simplified Acute Physiology Score II (excluding hemodynamic counts) in order to account for the influence of age, changes in therapies during the observation period, the severity of cardiovascular failure and the severity of the underlying disease on 28-day mortality. RESULTS: One hundred nineteen patients were included. Cardiac index (CI) (P = 0.01) and cardiac power index (CPI) (P = 0.03) were the only hemodynamic variables separately associated with mortality. The hourly time integral of CI drops <3, 2.75 (both P = 0.02) and 2.5 (P = 0.03) L/min/m2 was associated with death, but not that of CI drops <2 L/min/m2 or lower thresholds (all P > 0.05). The hourly time integral of CPI drops <0.5-0.8 W/m2 (all P = 0.04) was associated with 28-day mortality, but not that of CPI drops <0.4 W/m2 or lower thresholds (all P > 0.05). CONCLUSIONS: During the first 24 hours after ICU admission, CI and CPI were the most important hemodynamic variables separately associated with 28-day mortality in patients with cardiogenic shock. A CI of 3 L/min/m2 and a CPI of 0.8 W/m2 were most predictive of 28-day mortality. Since our results must be considered hypothesis-generating, randomized controlled trials are required to evaluate whether targeting these levels as early resuscitation endpoints can reduce mortality in cardiogenic shock.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND Improved survival among HIV-infected individuals on antiretroviral therapy (ART) has focused attention on AIDS-related cancers, including Kaposi sarcoma (KS). However, the effect of KS on response to ART is not well described in Southern Africa. We assessed the effect of KS on survival and on immunologic and virologic treatment responses at 6 and 12 months after initiation of ART. METHODS We analyzed prospectively collected data from a cohort of HIV-infected adults initiating ART in South Africa. Differences in mortality between those with and without KS at ART initiation were estimated with Cox proportional hazards models. Log-binomial models were used to assess differences in CD4 count response and HIV virologic suppression within a year of initiating treatment. RESULTS Between January 2001 and January 2008, 13,847 HIV-infected adults initiated ART at the study clinics. Those with KS at ART initiation (n = 247, 2%) were similar to those without KS (n = 13,600, 98%) with respect to age (35 vs. 35 years), presenting CD4 count (74 vs. 85 cells/mm³) and proportion on TB treatment (37% vs. 30%). In models adjusted for sex, baseline CD4 count, age, treatment site, tuberculosis and year of ART initiation, KS patients were over three times more likely to have died at any time after ART initiation (hazard ratio [HR]: 3.62; 95% CI: 2.71-4.84) than those without KS. The increased risk was highest within the first year on ART (HR: 4.05; 95% CI: 2.95-5.55) and attenuated thereafter (HR: 2.30; 95% CI: 1.08-4.89). Those with KS also gained, on average, 29 fewer CD4 cells (95% CI: 7-52 cells/mm³) and were less likely to increase their CD4 count by 50 cells from baseline (RR: 1.43; 95% CI: 0.99-2.06) within the first 6 months of treatment. CONCLUSIONS HIV-infected adults presenting with KS have an increased risk of mortality even after initiation of ART, with the greatest risk in the first year. Among those who survive the first year on therapy, those with KS demonstrated a poorer immunologic response to ART than those without KS.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND In many resource-limited settings, monitoring of combination antiretroviral therapy (cART) is based on the current CD4 count, with limited access to HIV RNA tests or laboratory diagnostics. We examined whether the CD4 count slope over 6 months could provide additional prognostic information. METHODS We analyzed data from a large multicohort study in South Africa, where HIV RNA is routinely monitored. Adult HIV-positive patients initiating cART between 2003 and 2010 were included. Mortality was analyzed in Cox models; the CD4 count slope by HIV RNA level was assessed using linear mixed models. RESULTS A total of 44,829 patients (median age: 35 years, 58% female, median CD4 count at cART initiation: 116 cells/mm³) were followed up for a median of 1.9 years, with 3706 deaths. Mean CD4 count slopes per week ranged from 1.4 [95% confidence interval (CI): 1.2 to 1.6] cells per cubic millimeter when HIV RNA was <400 copies per milliliter to -0.32 (95% CI: -0.47 to -0.18) cells per cubic millimeter with >100,000 copies per milliliter. The association of CD4 slope with mortality depended on the current CD4 count: the adjusted hazard ratio (aHR) comparing a >25% increase over 6 months with a >25% decrease was 0.68 (95% CI: 0.58 to 0.79) at <100 cells per cubic millimeter but 1.11 (95% CI: 0.78 to 1.58) at 201-350 cells per cubic millimeter. In contrast, the aHR for the current CD4 count, comparing >350 with <100 cells per cubic millimeter, was 0.10 (95% CI: 0.05 to 0.20). CONCLUSIONS The absolute CD4 count remains a strong risk factor for mortality, with a stable effect size over the first 4 years of cART. However, the CD4 count slope and HIV RNA provide additional prognostic information when added to the model.
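As a minimal illustration of the slope variable, a per-patient CD4 slope over the 6-month window can be estimated with an ordinary least-squares fit. The visit schedule, noise level and patient values below are invented for illustration; only the two mean slopes (1.4 and -0.32 cells/mm³ per week) are reused from the abstract as simulation inputs:

```python
import numpy as np

rng = np.random.default_rng(2)

def cd4_slope(weeks, cd4):
    """Per-patient CD4 slope (cells/mm3 per week) from an OLS straight-line fit."""
    return np.polyfit(weeks, cd4, 1)[0]

weeks = np.arange(0, 27, 4)  # hypothetical 4-weekly visits over ~6 months

# Two hypothetical patients, built from the reported mean slopes plus noise.
suppressed = 116 + 1.40 * weeks + rng.normal(0, 5, weeks.size)  # HIV RNA <400 copies/ml
viraemic = 116 - 0.32 * weeks + rng.normal(0, 5, weeks.size)    # HIV RNA >100,000 copies/ml

print(cd4_slope(weeks, suppressed), cd4_slope(weeks, viraemic))
```

A mixed model, as used in the study, would fit such slopes across all patients jointly rather than one patient at a time; the per-patient fit is just the simplest way to see what the slope variable measures.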

Relevance:

30.00%

Publisher:

Abstract:

Loss to follow-up (LTFU) is a common problem in many epidemiological studies. In antiretroviral treatment (ART) programs for patients with human immunodeficiency virus (HIV), mortality estimates can be biased if the LTFU mechanism is non-ignorable, that is, mortality differs between lost and retained patients. In this setting, routine procedures for handling missing data may lead to biased estimates. To appropriately deal with non-ignorable LTFU, explicit modeling of the missing data mechanism is needed. This can be based on additional outcome ascertainment for a sample of patients LTFU, for example, through linkage to national registries or through survey-based methods. In this paper, we demonstrate how this additional information can be used to construct estimators based on inverse probability weights (IPW) or multiple imputation. We use simulations to contrast the performance of the proposed estimators with methods widely used in HIV cohort research for dealing with missing data. The practical implications of our approach are illustrated using South African ART data, which are partially linkable to South African national vital registration data. Our results demonstrate that while IPWs and proper imputation procedures can be easily constructed from additional outcome ascertainment to obtain valid overall estimates, neglecting non-ignorable LTFU can result in substantial bias. We believe the proposed estimators are readily applicable to a growing number of studies where LTFU is appreciable, but additional outcome data are available through linkage or surveys of patients LTFU.
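The weighting idea can be sketched with simulated data. All numbers below (cohort size, mortality rates, tracing fraction) are invented for illustration and are not the South African cohort's values; a traced subsample of LTFU patients stands in for registry linkage:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated cohort: LTFU patients have higher mortality,
# i.e. the LTFU mechanism is non-ignorable.
n = 10_000
ltfu = rng.random(n) < 0.2
died = np.where(ltfu, rng.random(n) < 0.30, rng.random(n) < 0.10)

# Outcome ascertainment: all retained patients, plus a traced 25% sample
# of patients LTFU (standing in for registry linkage or a tracing survey).
traced = ltfu & (rng.random(n) < 0.25)
observed = ~ltfu | traced

# Naive estimate from retained patients only: biased downward, because
# the lost (sicker) patients are missing entirely.
naive = died[~ltfu].mean()

# IPW estimate: weight each observed patient by 1 / P(outcome observed),
# so each traced LTFU patient also represents the untraced ones.
p_obs = np.where(ltfu, 0.25, 1.0)
weights = 1.0 / p_obs[observed]
ipw = np.average(died[observed], weights=weights)

print(f"naive {naive:.3f}  IPW {ipw:.3f}  full cohort {died.mean():.3f}")
```

In practice the observation probabilities are not known constants as here; they would themselves be estimated, for example from a logistic model of tracing success, as the paper's estimators do.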

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVES Mortality in patients starting antiretroviral therapy (ART) is higher in Malawi and Zambia than in South Africa. We examined whether different monitoring of ART (viral load [VL] in South Africa and CD4 count in Malawi and Zambia) could explain this mortality difference. DESIGN Mathematical modelling study based on data from ART programmes. METHODS We used a stochastic simulation model to study the effect of VL monitoring on mortality over 5 years. In the baseline scenario (A), all parameters were identical between strategies except for more timely and complete detection of treatment failure with VL monitoring. Additional scenarios introduced delays in switching to second-line ART (scenario B) or higher virologic failure rates, due to worse adherence, when monitoring was based on CD4 counts only (scenario C). Results are presented as relative risks (RR) with 95% prediction intervals and as the percentage of the observed mortality difference explained. RESULTS RRs comparing VL with CD4 cell count monitoring were 0.94 (0.74-1.03) in scenario A, 0.94 (0.77-1.02) with delayed switching (scenario B) and 0.80 (0.44-1.07) when assuming a three times higher rate of failure (scenario C). The observed mortality at 3 years was 10.9% in Malawi and Zambia and 8.6% in South Africa (absolute difference 2.3%). The percentage of the mortality difference explained by VL monitoring ranged from 4% (scenario A) to 32% (scenarios B and C combined, assuming a three times higher failure rate). Eleven percent was explained by non-HIV-related mortality. CONCLUSIONS VL monitoring moderately reduces mortality when improved adherence and decreased failure rates are assumed.

Relevance:

30.00%

Publisher:

Abstract:

INTRODUCTION Low systolic blood pressure (SBP) is an important secondary insult following traumatic brain injury (TBI), but its exact relationship with outcome is not well characterised. Although an SBP of <90 mmHg represents the threshold for hypotension in consensus TBI treatment guidelines, recent studies suggest redefining hypotension at higher levels. This study therefore aimed to fully characterise the association between admission SBP and mortality to further inform resuscitation endpoints. METHODS We conducted a multicentre cohort study using data from the largest European trauma registry. Consecutive adult patients with AIS head scores >2 admitted directly to specialist neuroscience centres between 2005 and July 2012 were studied. Multilevel logistic regression models were developed to examine the association between admission SBP and 30-day inpatient mortality. Models were adjusted for confounders, including age and severity of injury, and to account for differential quality of hospital care. RESULTS 5057 patients were included in complete case analyses. Admission SBP demonstrated a smooth U-shaped association with outcome in a bivariate analysis, with increasing mortality at both lower and higher values and no evidence of any threshold effect. Adjusting for confounding slightly attenuated the association between mortality and SBP at levels <120 mmHg, and abolished the relationship for higher SBP values. Case-mix-adjusted odds of death were 1.5 times greater at <120 mmHg, doubled at <100 mmHg, tripled at <90 mmHg, and six times greater at SBP <70 mmHg (P < 0.01). CONCLUSIONS These findings indicate that TBI studies should model SBP as a continuous variable, and may suggest that current TBI treatment guidelines, which use a cut-off for hypotension at SBP <90 mmHg, should be reconsidered.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND Data on temporal trends in outcomes, gender differences, and adherence to evidence-based therapy (EBT) among diabetic patients with ST-segment elevation myocardial infarction (STEMI) are sparse. METHODS We performed a retrospective analysis of prospectively acquired data on 3565 diabetic (2412 male and 1153 female) STEMI patients enrolled in the Swiss AMIS Plus registry between 1997 and 2010 and compared in-hospital outcomes and adherence to EBT with the nondiabetic population (n = 15,531). RESULTS In-hospital mortality decreased dramatically in diabetic patients, from 19.9% in 1997 to 9.0% in 2010 (p trend < 0.001), with an age-adjusted decrease of 6% per year of admission. Similar trends were observed for age-adjusted reinfarction (OR 0.86, p < 0.001), cardiogenic shock (OR 0.88, p < 0.001), and death, reinfarction, or stroke (OR 0.92, p < 0.001). However, the mortality benefit over time was observed in diabetic males (p trend = 0.006) but not in females (p trend = 0.082). In addition, mortality remained twice as high in diabetic patients as in nondiabetic ones (12.1% vs. 6.1%, p < 0.001), and diabetes was identified as an independent predictor of mortality (OR 1.23, p = 0.022). Within the diabetic cohort, females had higher mortality than males (16.1% vs. 10.2%, p < 0.001), and female gender independently predicted in-hospital mortality (OR 1.45, p = 0.015). Adherence to EBT improved significantly over time in diabetic patients (p trend < 0.001) but remained inferior, especially in women, to that of nondiabetic individuals. CONCLUSIONS In-hospital mortality and morbidity of diabetic STEMI patients in Switzerland improved dramatically over time but, compared with nondiabetic counterparts, gaps in outcomes as well as in EBT use persisted, especially in women.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND Outcome data are limited in patients with ST-segment elevation acute myocardial infarction (STEMI) or other acute coronary syndromes (ACSs) who receive a drug-eluting stent (DES). Data suggest that first-generation DES are associated with an increased risk of stent thrombosis when used in STEMI. Whether this observation persists with newer-generation DES is unknown. The study objective was to analyze the two-year safety and effectiveness of Resolute™ zotarolimus-eluting stents (R-ZESs) implanted for STEMI, ACS without ST-segment elevation (non-STEACS), and stable angina (SA). METHODS Data from the Resolute program (Resolute All Comers and Resolute International) were pooled, and patients with R-ZES implantation were categorized by indication: STEMI (n = 335), non-STEACS (n = 1416), and SA (n = 1260). RESULTS Mean age was 59.8±11.3 years (STEMI), 63.8±11.6 years (non-STEACS), and 64.9±10.1 years (SA). Fewer STEMI patients had diabetes (19.1% vs. 28.5% vs. 29.2%; P < 0.001), prior MI (11.3% vs. 27.2% vs. 29.4%; P < 0.001), or previous revascularization (11.3% vs. 27.9% vs. 37.6%; P < 0.001). Two-year definite/probable stent thrombosis occurred in 2.4% (STEMI), 1.2% (non-STEACS) and 1.1% (SA) of patients, with late/very late stent thrombosis (days 31-720) rates of 0.6% (STEMI and non-STEACS) and 0.4% (SA) (P = NS). The two-year mortality rate was 2.1% (STEMI), 4.8% (non-STEACS) and 3.7% (SA) (P = NS). Death or target vessel re-infarction occurred in 3.9% (STEMI), 8.7% (non-STEACS) and 7.3% (SA) (P = 0.012). CONCLUSION R-ZES is effective and safe in STEMI as well as in other clinical presentations. Long-term outcomes are favorable, with a very low incidence of late and very late stent thrombosis following R-ZES implantation across indications.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: The early hemodynamic normalization of polytrauma patients may lead to better survival outcomes. The aim of this study was to assess the diagnostic quality of trauma and physiological scores from widely used scoring systems in polytrauma patients. METHODS: In total, 770 patients with an ISS > 16 who were admitted to a trauma center within the first 24 hours after injury were included in this retrospective study. The patients were subdivided into three groups: those who died on the day of admission, those who died within the first three days, and those who survived for longer than three days. ISS, NISS, APACHE II score, and prothrombin time were recorded at admission. RESULTS: The descriptive statistics for patients who died on the day of admission, died 1-3 days after admission, or survived > 3 days after admission were: ISS of 41.0, 34.0, and 29.0, respectively; NISS of 50.0, 50.0, and 41.0, respectively; APACHE II score of 30.0, 25.0, and 15.0, respectively; and prothrombin time of 37.0%, 56.0%, and 84.0%, respectively. These data indicate that prothrombin time (AUC: 0.89) and APACHE II score (AUC: 0.88) have the greatest prognostic utility for early death. CONCLUSION: The estimated densities of the scores may suggest a direction for resuscitative procedures in polytrauma patients. Trial registration: "Retrospektive Analysen in der Chirurgischen Intensivmedizin", StV01-2008, http://www.kek.zh.ch/internet/gesundheitsdirektion/kek/de/home.html.
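The reported AUCs can be illustrated with the rank-statistic form of the ROC area: the probability that a randomly chosen death has a higher score than a randomly chosen survivor. The admission scores below are simulated for illustration and are not the registry's data:

```python
import numpy as np

rng = np.random.default_rng(1)

def auc(pos, neg):
    """ROC area as the Mann-Whitney statistic P(score_pos > score_neg),
    counting ties as one half."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Simulated "APACHE II-like" admission scores, higher on average in early deaths.
score_deaths = rng.normal(27, 6, 80)
score_survivors = rng.normal(15, 6, 500)

print(round(auc(score_deaths, score_survivors), 2))
```

An AUC near 0.5 would mean the score does not separate early deaths from survivors at all; values approaching 1.0, as in the study's prothrombin time and APACHE II results, indicate strong discrimination.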

Relevance:

30.00%

Publisher:

Abstract:

PRINCIPLES The liver plays an important role in glucose metabolism, in terms of glycolysis and gluconeogenesis. Several studies have shown that hyperglycemia in patients with liver cirrhosis is associated with progression of the liver disease and increased mortality. However, no study has yet specifically examined the influence of hypoglycemia. The aim of this study was to assess the association of glucose disturbances with outcome in patients presenting to the emergency department with acute decompensated liver cirrhosis. METHODS Our retrospective data analysis comprised adult (≥16 years) patients admitted to our emergency department between January 1, 2002, and December 31, 2012, with a primary diagnosis of decompensated liver cirrhosis. RESULTS A total of 312 patients were eligible for study inclusion: 231 (74.0%) were male and 81 (26.0%) were female. The median age was 57 years (range, 51-65 years). Overall, 89 (28.5%) of our patients had acute glucose disturbances: 49 (15.7%) were hypoglycemic and 40 (12.8%) were hyperglycemic. Patients with hypoglycemia were admitted to the intensive care unit significantly more often than hyperglycemic patients (20.4% vs 10.8%, P < .015) or normoglycemic patients (20.4% vs 10.3%, P < .011), and they died in the hospital significantly more often (28.6% hypoglycemic vs 7.5% hyperglycemic, P < .024; 28.6% hypoglycemic vs 10.3% normoglycemic, P < .049). Survival analysis showed a significantly lower estimated survival for hypoglycemic patients (36 days) than for normoglycemic patients (54 days) or hyperglycemic patients (45 days; hypoglycemic vs hyperglycemic, P < .019; hypoglycemic vs normoglycemic, P < .007; hyperglycemic vs normoglycemic, P < .477). CONCLUSION Hypoglycemia is associated with increased mortality in patients with acute decompensated liver cirrhosis. It is not yet clear whether hypoglycemia contributes to the increased short-term mortality of these patients or is only a consequence of the severity of the disease and its complications.

Relevance:

30.00%

Publisher:

Abstract:

Screening tests for drugs of abuse are regularly used in clinical routine. Tests from different manufacturers identify the targeted substances very differently, and they sometimes react positively after the intake of drugs which they are not intended to detect. Implausible results therefore have to be questioned. A test result can be falsely negative if a patient has taken a compound that is not detected by the antibody used in the test system. Chromatographic confirmation and screening assays are more laborious to perform and more demanding to interpret, and are therefore offered only by a few specialized clinical laboratories. However, their specificity is excellent, and many different compounds can be detected, depending on the number of compounds included in the mass spectra library used. If the clinical evaluation results in the differential diagnosis of an acute intoxication, screening tests for drugs of abuse can help to identify a single compound or a group of substances. The clinical picture, however, can usually not be explained by a qualitative test result. In addition, there are no published data demonstrating that these tests meaningfully influence triage, treatment, diagnosis or further therapy of a poisoned patient. The quantitative determination of specific compounds in the blood allows, for example, an appraisal of the prognosis and helps to indicate a specific therapy after intake of acetaminophen or methanol. New designer drugs cannot be detected at all by the classic screening tests for drugs of abuse; they have to be identified by chromatographic methods.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND This review is an update of the first Cochrane publication on selenium for preventing cancer (Dennert 2011). Selenium is a metalloid with both nutritional and toxicological properties. Higher selenium exposure and selenium supplements have been suggested to protect against several types of cancer. OBJECTIVES Two research questions were addressed in this review. What is the evidence for: (1) an aetiological relation between selenium exposure and cancer risk in humans? and (2) the efficacy of selenium supplementation for cancer prevention in humans? SEARCH METHODS We conducted electronic searches of the Cochrane Central Register of Controlled Trials (CENTRAL, 2013, Issue 1), MEDLINE (Ovid, 1966 to February 2013 week 1), EMBASE (1980 to 2013 week 6), CancerLit (February 2004) and CCMed (February 2011). As MEDLINE now includes the journals indexed in CancerLit, no further searches were conducted in this database after 2004. SELECTION CRITERIA We included prospective observational studies (cohort studies, including sub-cohort controlled studies and nested case-control studies) and randomised controlled trials (RCTs) with healthy adult participants (18 years of age and older). DATA COLLECTION AND ANALYSIS For observational studies, we conducted random-effects meta-analyses when five or more studies were retrieved for a specific outcome. For RCTs, we performed random-effects meta-analyses when two or more studies were available. The risk of bias in observational studies was assessed using forms adapted from the Newcastle-Ottawa Quality Assessment Scale for cohort and case-control studies; the criteria specified in the Cochrane Handbook for Systematic Reviews of Interventions were used to evaluate the risk of bias in RCTs. MAIN RESULTS We included 55 prospective observational studies (with more than 1,100,000 participants) and eight RCTs (with a total of 44,743 participants).
For the observational studies, we found lower cancer incidence (summary odds ratio (OR) 0.69, 95% confidence interval (CI) 0.53 to 0.91, N = 8) and lower cancer mortality (OR 0.60, 95% CI 0.39 to 0.93, N = 6) associated with higher selenium exposure. Gender-specific subgroup analysis provided no clear evidence of different effects in men and women (P value 0.47), although cancer incidence was lower in men (OR 0.66, 95% CI 0.42 to 1.05, N = 6) than in women (OR 0.90, 95% CI 0.45 to 1.77, N = 2). The most pronounced decreases in risk of site-specific cancers were seen for stomach, bladder and prostate cancers. However, these findings have limitations due to study design, quality and heterogeneity that complicate interpretation of the summary statistics. Some studies suggested that genetic factors may modify the relation between selenium and cancer risk, a hypothesis that deserves further investigation. In RCTs, we found no clear evidence that selenium supplementation reduced the risk of any cancer (risk ratio (RR) 0.90, 95% CI 0.70 to 1.17, two studies, N = 4765) or cancer-related mortality (RR 0.81, 95% CI 0.49 to 1.32, two studies, N = 18,698), and this finding was confirmed when the analysis was restricted to studies with low risk of bias. The effect on prostate cancer was imprecise (RR 0.90, 95% CI 0.71 to 1.14, four studies, N = 19,110), and when the analysis was limited to trials with low risk of bias, the interventions showed no effect (RR 1.02, 95% CI 0.90 to 1.14, three studies, N = 18,183). The risk of non-melanoma skin cancer was increased (RR 1.44, 95% CI 0.95 to 2.17, three studies, N = 1900). Results of two trials, the Nutritional Prevention of Cancer Trial (NPCT) and the Selenium and Vitamin E Cancer Trial (SELECT), also raised concerns about a possible increased risk of type 2 diabetes, alopecia and dermatitis due to selenium supplements.
An early hypothesis generated by the NPCT, that individuals with the lowest blood selenium levels at baseline could reduce their risk of cancer, particularly of prostate cancer, by increasing selenium intake, has not been confirmed by subsequent trials. As the RCT participants were overwhelmingly male (94%), gender differences could not be systematically assessed. AUTHORS' CONCLUSIONS Although an inverse association between selenium exposure and the risk of some types of cancer was found in some observational studies, this cannot be taken as evidence of a causal relation, and these results should be interpreted with caution. These studies have many limitations, including issues with assessment of exposure to selenium and to its various chemical forms, heterogeneity, confounding and other biases. Conflicting results, including inverse, null and direct associations, have been reported for some cancer types. RCTs assessing the effects of selenium supplementation on cancer risk have yielded inconsistent results, although the most recent studies, characterised by a low risk of bias, found no beneficial effect on cancer risk, more specifically on risk of prostate cancer, as well as little evidence of any influence of baseline selenium status. Rather, some trials suggest harmful effects of selenium exposure. To date, no convincing evidence suggests that selenium supplements can prevent cancer in humans.
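The random-effects pooling used for the observational outcomes can be sketched with the standard DerSimonian-Laird estimator. The per-study log odds ratios and variances below are made up for illustration; they are not the review's study data:

```python
import numpy as np

def dersimonian_laird(yi, vi):
    """Random-effects pooled estimate (DerSimonian-Laird) from per-study
    effects yi (e.g. log odds ratios) and within-study variances vi."""
    yi, vi = np.asarray(yi, float), np.asarray(vi, float)
    w = 1.0 / vi                                  # fixed-effect weights
    y_fixed = (w * yi).sum() / w.sum()
    q = (w * (yi - y_fixed) ** 2).sum()           # Cochran's Q heterogeneity statistic
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - (len(yi) - 1)) / c)      # between-study variance, truncated at 0
    w_re = 1.0 / (vi + tau2)                      # random-effects weights
    pooled = (w_re * yi).sum() / w_re.sum()
    se = (1.0 / w_re.sum()) ** 0.5
    return pooled, se

# Hypothetical log odds ratios and variances for five studies.
pooled, se = dersimonian_laird([-0.4, -0.2, -0.5, -0.1, 0.0],
                               [0.04, 0.06, 0.05, 0.03, 0.08])
print(np.exp(pooled), se)  # pooled OR on the original scale, and its SE
```

When the between-study heterogeneity estimate is zero, as with these inputs, the random-effects result coincides with the fixed-effect inverse-variance average; larger heterogeneity widens the pooled interval.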

Relevance:

30.00%

Publisher:

Abstract:

Background: Erythropoiesis-stimulating agents (ESAs) reduce the need for red blood cell transfusions; however, they increase the risk of thromboembolic events and mortality. The impact of ESAs on quality of life (QoL) is controversial and has led to different recommendations by medical societies and authorities in the USA and Europe. We aimed to critically evaluate and quantify the effects of ESAs on QoL in cancer patients. Methods: We included data from randomised controlled trials (RCTs) on the effects of ESAs on QoL in cancer patients. RCTs were identified by searching electronic databases and other sources up to January 2011. To reduce publication and outcome-reporting biases, we included unreported results from clinical study reports. We conducted meta-analyses on fatigue- and anaemia-related symptoms measured with the Functional Assessment of Cancer Therapy-Fatigue (FACT-F) and FACT-Anaemia (FACT-An) subscales (primary outcomes) or other validated instruments. Results: We identified 58 eligible RCTs. Clinical study reports were available for 27% (4 out of 15) of the investigator-initiated trials and 95% (41 out of 43) of the industry-initiated trials. We excluded 21 RCTs because we could not use their QoL data for meta-analyses, either because of incomplete reporting (17 RCTs) or because of premature closure of the trial (4 RCTs). We included 37 RCTs with 10,581 patients; 21 RCTs were placebo controlled. Chemotherapy was given in 27 of the 37 RCTs. The median baseline haemoglobin (Hb) level was 10.1 g/dl; in 8 studies ESAs were stopped at Hb levels below 13 g/dl and in 27 above 13 g/dl. For FACT-F, the mean difference (MD) was 2.41 (95% confidence interval (CI) 1.39-3.43; P < 0.0001; 23 studies, n = 6108) in all cancer patients and 2.81 (95% CI 1.73-3.90; P < 0.0001; 19 RCTs, n = 4697) in patients receiving chemotherapy, which was below the threshold (≥3) for a clinically important difference (CID). Erythropoiesis-stimulating agents had a positive effect on anaemia-related symptoms (MD 4.09; 95% CI 2.37-5.80; P = 0.001; 14 studies, n = 2765) in all cancer patients and 4.50 (95% CI 2.55-6.45; P < 0.0001; 11 RCTs, n = 2436) in patients receiving chemotherapy, which was above the threshold (≥4) for a CID. Of note, this effect persisted when we restricted the analysis to placebo-controlled RCTs in patients receiving chemotherapy. There was some evidence that the MDs for FACT-F were above the threshold for a CID in RCTs including cancer patients receiving chemotherapy with Hb levels below 12 g/dl at baseline and in RCTs stopping ESAs at Hb levels above 13 g/dl. However, these findings for FACT-F were not confirmed when we restricted the analysis to placebo-controlled RCTs in patients receiving chemotherapy. Conclusions: In cancer patients, particularly those receiving chemotherapy, we found that ESAs provide a small but clinically important improvement in anaemia-related symptoms (FACT-An). For fatigue-related symptoms (FACT-F), the overall effect did not reach the threshold for a CID. British Journal of Cancer advance online publication, 17 April 2014; doi:10.1038/bjc.2014.171 www.bjcancer.com.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND Copper and its main transport protein, ceruloplasmin, have been suggested to promote the development of atherosclerosis. Most of the data come from experimental and animal-model studies. Copper and mortality have not been simultaneously evaluated in patients undergoing coronary angiography. METHODS AND RESULTS We examined whether serum copper and ceruloplasmin concentrations are associated with angiographic coronary artery disease (CAD) and with mortality from all causes and from cardiovascular causes in 3253 participants of the Ludwigshafen Risk and Cardiovascular Health Study. Age- and sex-adjusted hazard ratios (HR) for death from any cause were 2.23 (95% CI, 1.85-2.68) for copper and 2.63 (95% CI, 2.17-3.20) for ceruloplasmin when we compared the highest with the lowest quartiles. The corresponding HRs for death from cardiovascular causes were 2.58 (95% CI, 2.05-3.25) and 3.02 (95% CI, 2.36-3.86), respectively. Further adjustment for various risk factors and clinical variables considerably attenuated these associations, which, however, remained statistically significant, and the results were consistent across subgroups. CONCLUSIONS Elevated concentrations of both copper and ceruloplasmin are independently associated with an increased risk of mortality from all causes and from cardiovascular causes.