936 results for toxicological mortality data


Relevance:

30.00%

Publisher:

Abstract:

INTRODUCTION Low systolic blood pressure (SBP) is an important secondary insult following traumatic brain injury (TBI), but its exact relationship with outcome is not well characterised. Although an SBP of <90mmHg represents the threshold for hypotension in consensus TBI treatment guidelines, recent studies suggest redefining hypotension at higher levels. This study therefore aimed to fully characterise the association between admission SBP and mortality to further inform resuscitation endpoints. METHODS We conducted a multicentre cohort study using data from the largest European trauma registry. Consecutive adult patients with AIS head scores >2 admitted directly to specialist neuroscience centres between 2005 and July 2012 were studied. Multilevel logistic regression models were developed to examine the association between admission SBP and 30-day inpatient mortality. Models were adjusted for confounders, including age and severity of injury, and accounted for differential quality of hospital care. RESULTS 5057 patients were included in complete case analyses. Admission SBP demonstrated a smooth U-shaped association with outcome in a bivariate analysis, with increasing mortality at both lower and higher values and no evidence of any threshold effect. Adjusting for confounding slightly attenuated the association between mortality and SBP at levels <120mmHg, and abolished the relationship for higher SBP values. Case-mix-adjusted odds of death were 1.5 times greater at <120mmHg, doubled at <100mmHg, tripled at <90mmHg, and six times greater at SBP<70mmHg (p<0.01). CONCLUSIONS These findings indicate that TBI studies should model SBP as a continuous variable and may suggest that current TBI treatment guidelines, which use a cut-off for hypotension at SBP<90mmHg, should be reconsidered.
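The conclusion that SBP should be modelled continuously can be illustrated with a toy logistic model. The sketch below is a minimal illustration with invented coefficients chosen only to reproduce the U-shaped risk curve described above; it is not the study's fitted model.

```python
import math

def mortality_probability(sbp, b0=-3.0, b1=-0.01, b2=0.0003, center=130.0):
    """Logistic model with a continuous (here quadratic) SBP term.

    The coefficients are hypothetical, chosen only to mimic the
    U-shaped association reported in the study: risk rises at both
    low and high admission SBP, with no threshold effect.
    """
    x = sbp - center
    logit = b0 + b1 * x + b2 * x * x
    return 1.0 / (1.0 + math.exp(-logit))

def guideline_probability(sbp, p_low=0.15, p_normal=0.05):
    """Contrast: dichotomising at the guideline cutoff of SBP < 90 mmHg
    assigns the same risk to every patient on each side of the cutoff."""
    return p_low if sbp < 90 else p_normal
```

Under these assumed coefficients the continuous model yields higher risk at both 70 and 200 mmHg than at 140 mmHg, while the dichotomised version cannot distinguish an SBP of 89 from one of 70.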

Relevância:

30.00% 30.00%

Publicador:

Resumo:

BACKGROUND Data on temporal trends in outcomes, gender differences, and adherence to evidence-based therapy (EBT) of diabetic patients with ST-segment elevation myocardial infarction (STEMI) are sparse. METHODS We performed a retrospective analysis of prospectively acquired data on 3565 diabetic (2412 male and 1153 female) STEMI patients enrolled in the Swiss AMIS Plus registry between 1997 and 2010 and compared in-hospital outcomes and adherence to EBT with the nondiabetic population (n=15,531). RESULTS In-hospital mortality decreased dramatically in diabetic patients, from 19.9% in 1997 to 9.0% in 2010 (p trend<0.001), with an age-adjusted decrease of 6% per year of admission. Similar trends were observed for age-adjusted reinfarction (OR 0.86, p<0.001), cardiogenic shock (OR 0.88, p<0.001), and the composite of death, reinfarction, or stroke (OR 0.92, p<0.001). However, the mortality benefit over time was observed in diabetic males (p trend=0.006) but not in females (p trend=0.082). In addition, mortality remained twice as high in diabetic patients as in nondiabetic ones (12.1% vs. 6.1%, p<0.001), and diabetes was identified as an independent predictor of mortality (OR 1.23, p=0.022). Within the diabetic cohort, females had higher mortality than males (16.1% vs. 10.2%, p<0.001), and female gender independently predicted in-hospital mortality (OR 1.45, p=0.015). Adherence to EBT improved significantly over time in diabetic patients (p trend<0.001) but remained inferior, especially in women, to that of nondiabetic individuals. CONCLUSIONS In-hospital mortality and morbidity of diabetic STEMI patients in Switzerland improved dramatically over time, but gaps in outcomes and in EBT use compared with nondiabetic counterparts persisted, especially in women.
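The reported 6% age-adjusted decrease per year compounds multiplicatively across the 1997-2010 window. A rough back-of-the-envelope sketch, assuming a constant annual odds ratio of 0.94 (an assumption, not the registry's model), shows the cumulative odds ratio over 13 years is about 0.45, broadly consistent with mortality falling from 19.9% to 9.0%:

```python
def compound_or(annual_or, years):
    """Cumulative odds ratio implied by a constant per-year odds ratio."""
    return annual_or ** years

def odds_to_prob(odds):
    """Convert odds back to a probability."""
    return odds / (1.0 + odds)

# Illustrative check: apply the cumulative OR to the 1997 baseline odds.
baseline_odds = 0.199 / (1.0 - 0.199)          # 19.9% mortality in 1997
implied_2010 = odds_to_prob(baseline_odds * compound_or(0.94, 13))
```

Here `implied_2010` comes out near 0.10, close to the observed 9.0%; a small gap is expected since the registry's estimate is age-adjusted while this sketch is crude.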


BACKGROUND Outcome data are limited in patients with ST-segment elevation acute myocardial infarction (STEMI) or other acute coronary syndromes (ACSs) who receive a drug-eluting stent (DES). Data suggest that first generation DES is associated with an increased risk of stent thrombosis when used in STEMI. Whether this observation persists with newer generation DES is unknown. The study objective was to analyze the two-year safety and effectiveness of Resolute™ zotarolimus-eluting stents (R-ZESs) implanted for STEMI, ACS without ST segment elevation (non-STEACS), and stable angina (SA). METHODS Data from the Resolute program (Resolute All Comers and Resolute International) were pooled and patients with R-ZES implantation were categorized by indication: STEMI (n=335), non-STEACS (n=1416), and SA (n=1260). RESULTS Mean age was 59.8±11.3 years (STEMI), 63.8±11.6 (non-STEACS), and 64.9±10.1 (SA). Fewer STEMI patients had diabetes (19.1% vs. 28.5% vs. 29.2%; P<0.001), prior MI (11.3% vs. 27.2% vs. 29.4%; P<0.001), or previous revascularization (11.3% vs. 27.9% vs. 37.6%; P<0.001). Two-year definite/probable stent thrombosis occurred in 2.4% (STEMI), 1.2% (non-STEACS) and 1.1% (SA) of patients with late/very late stent thrombosis (days 31-720) rates of 0.6% (STEMI and non-STEACS) and 0.4% (SA) (P=NS). The two-year mortality rate was 2.1% (STEMI), 4.8% (non-STEACS) and 3.7% (SA) (P=NS). Death or target vessel re-infarction occurred in 3.9% (STEMI), 8.7% (non-STEACS) and 7.3% (SA) (P=0.012). CONCLUSION R-ZES in STEMI and in other clinical presentations is effective and safe. Long term outcomes are favorable with an extremely rare incidence of late and very late stent thrombosis following R-ZES implantation across indications.


BACKGROUND: The early hemodynamic normalization of polytrauma patients may lead to better survival outcomes. The aim of this study was to assess the diagnostic quality of trauma and physiological scores from widely used scoring systems in polytrauma patients. METHODS: In total, 770 patients with ISS > 16 who were admitted to a trauma center within the first 24 hours after injury were included in this retrospective study. The patients were subdivided into three groups: those who died on the day of admission, those who died within the first three days, and those who survived for longer than three days. ISS, NISS, APACHE II score, and prothrombin time were recorded at admission. RESULTS: The descriptive statistics for the three groups (death on the day of admission, death 1-3 days after admission, and survival > 3 days) were: ISS of 41.0, 34.0, and 29.0, respectively; NISS of 50.0, 50.0, and 41.0, respectively; APACHE II score of 30.0, 25.0, and 15.0, respectively; and prothrombin time of 37.0%, 56.0%, and 84.0%, respectively. These data indicate that prothrombin time (AUC: 0.89) and APACHE II score (AUC: 0.88) have the greatest prognostic utility for early death. CONCLUSION: The estimated densities of the scores may suggest a direction for resuscitative procedures in polytrauma patients. Trial registration: "Retrospektive Analysen in der Chirurgischen Intensivmedizin", StV01-2008. http://www.kek.zh.ch/internet/gesundheitsdirektion/kek/de/home.html.
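The AUC values reported for prothrombin time and the APACHE II score have a simple rank-based interpretation: the AUC is the probability that a randomly chosen non-survivor has a more abnormal score than a randomly chosen survivor. A minimal sketch of that Mann-Whitney estimate, using made-up scores rather than the study's data:

```python
def auc(scores_pos, scores_neg):
    """Rank-based (Mann-Whitney) estimate of the area under the ROC
    curve: the fraction of (positive, negative) pairs in which the
    positive case scores higher, counting ties as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

For example, with hypothetical APACHE II scores (higher = worse, positives = early deaths), `auc([30, 25, 28, 17], [15, 12, 20, 18, 10])` gives 0.9; an AUC of 0.5 would mean the score carries no discriminating information.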


PRINCIPLES The liver plays an important role in glucose metabolism, in terms of glycolysis and gluconeogenesis. Several studies have shown that hyperglycemia in patients with liver cirrhosis is associated with progression of the liver disease and increased mortality. However, no study has ever targeted the influence of hypoglycemia. The aim of this study was to assess the association of glucose disturbances with outcome in patients presenting to the emergency department with acute decompensated liver cirrhosis. METHODS Our retrospective data analysis comprised adult (≥16 years) patients admitted to our emergency department between January 1, 2002, and December 31, 2012, with the primary diagnosis of decompensated liver cirrhosis. RESULTS A total of 312 patients were eligible for study inclusion. Two hundred thirty-one (74.0%) patients were male; 81 (26.0%) were female. The median age was 57 years (range, 51-65 years). Overall, 89 (28.5%) of our patients had acute glucose disturbances; 49 (15.7%) were hypoglycemic and 40 (12.8%) were hyperglycemic. Patients with hypoglycemia were admitted to the intensive care unit significantly more often than hyperglycemic patients (20.4% vs 10.8%, P < .015) or normoglycemic patients (20.4% vs 10.3%, P < .011), and they died in the hospital significantly more often (28.6% hypoglycemic vs 7.5% hyperglycemic, P < .024; 28.6% hypoglycemic vs 10.3% normoglycemic, P < .049). Survival analysis showed a significantly lower estimated survival for hypoglycemic patients (36 days) than for normoglycemic patients (54 days) or hyperglycemic patients (45 days; hypoglycemic vs hyperglycemic, P < .019; hypoglycemic vs normoglycemic, P < .007; hyperglycemic vs normoglycemic, P < .477). CONCLUSION Hypoglycemia is associated with increased mortality in patients with acute decompensated liver cirrhosis.
It is not yet clear whether hypoglycemia is jointly responsible for the increased short-term mortality of patients with acute decompensated liver cirrhosis or is only a consequence of the severity of the disease or the complications.
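The estimated-survival comparison above rests on Kaplan-Meier curves, which handle censored follow-up (patients still alive at last contact). A self-contained sketch of the estimator, using invented times rather than the study's data:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.

    times  : follow-up time for each patient
    events : 1 if the patient died at that time, 0 if censored
    Returns the list of (time, survival probability) steps at event times.
    """
    n = len(times)
    data = sorted(zip(times, events))
    at_risk = n
    surv = 1.0
    curve = []
    i = 0
    while i < n:
        t = data[i][0]
        deaths = 0
        removed = 0
        # Group all patients sharing this time point.
        while i < n and data[i][0] == t:
            deaths += data[i][1]
            removed += 1
            i += 1
        if deaths:
            surv *= (at_risk - deaths) / at_risk
            curve.append((t, surv))
        at_risk -= removed
    return curve
```

For instance, `kaplan_meier([5, 10, 10, 20], [1, 0, 1, 1])` steps down to 0.75, 0.5, and 0.0; the censored patient at day 10 leaves the risk set without forcing a drop of its own.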


Screening tests for drugs of abuse are regularly used in the clinical routine. Tests from different manufacturers differ considerably in how reliably they identify the targeted substances, and they sometimes also react positive after the intake of drugs that they are not intended to detect. Implausible results therefore have to be questioned. A test result can be falsely negative if a patient has taken a compound that is not detected by the antibody used in the test system. Chromatographic confirmation and screening assays are more laborious to perform and more demanding to interpret, and are therefore only offered by a few specialized clinical laboratories. However, their specificity is excellent, and many different compounds can be detected, depending on the number of compounds in the mass spectra library used. If the clinical evaluation yields a differential diagnosis of acute intoxication, screening tests for drugs of abuse can help to identify a single compound or a group of substances. The clinical picture, however, can usually not be explained by a qualitative test result. In addition, there are no published data demonstrating that these tests meaningfully influence the triage, treatment, diagnosis, or further therapy of a poisoned patient. The quantitative determination of specific compounds in the blood allows, for example, an appraisal of the prognosis and helps to establish the indication for a specific therapy after intake of acetaminophen or methanol. New designer drugs cannot be detected at all by the classic screening tests for drugs of abuse; they have to be identified by chromatographic methods.


This paper reports a comparison of three modeling strategies for the analysis of hospital mortality in a sample of general medicine inpatients in a Department of Veterans Affairs medical center. Logistic regression, a Markov chain model, and longitudinal logistic regression were evaluated on predictive performance, as measured by the c-index, and on accuracy of expected numbers of deaths compared to observed. The logistic regression used patient information collected at admission; the Markov model comprised two absorbing states for discharge and death and three transient states reflecting increasing severity of illness as measured by laboratory data collected during the hospital stay; the longitudinal regression employed Generalized Estimating Equations (GEE) to model the covariance structure for the repeated binary outcome. Results showed that the logistic regression predicted hospital mortality as well as the alternative methods but was limited in scope of application. The Markov chain provides insights into how day-to-day changes of illness severity lead to discharge or death. The longitudinal logistic regression showed that an increasing illness trajectory is associated with hospital mortality. The conclusion is reached that for standard applications in modeling hospital mortality, logistic regression is adequate, but for the new challenges facing health services research today, alternative methods are equally predictive, practical, and can provide new insights.
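The Markov design described here, two absorbing states (discharge, death) plus three transient severity states, can be sketched with a small transition matrix. The probabilities below are invented for illustration; only the structure (transient severity states feeding two absorbing rows) mirrors the paper's design.

```python
def absorption_probability(P, start, target, horizon=1000):
    """Propagate the state distribution of a discrete-time Markov chain
    for `horizon` steps and return the probability mass in `target`.

    P is a row-stochastic transition matrix given as a list of rows;
    by the horizon, essentially all mass sits in the absorbing states.
    """
    dist = [0.0] * len(P)
    dist[start] = 1.0
    for _ in range(horizon):
        nxt = [0.0] * len(P)
        for i, row in enumerate(P):
            for j, p in enumerate(row):
                nxt[j] += dist[i] * p
        dist = nxt
    return dist[target]

# States: 0-2 = increasing illness severity (transient),
#         3 = discharge (absorbing), 4 = death (absorbing).
# Hypothetical daily transition probabilities:
P = [
    [0.60, 0.20, 0.00, 0.20, 0.00],
    [0.20, 0.50, 0.20, 0.05, 0.05],
    [0.00, 0.30, 0.50, 0.00, 0.20],
    [0.00, 0.00, 0.00, 1.00, 0.00],
    [0.00, 0.00, 0.00, 0.00, 1.00],
]
```

Day-to-day severity changes accumulate into absorption probabilities; under this matrix, starting in the sickest state yields a higher eventual death probability than starting in the mildest, which is the kind of insight the Markov formulation offers over admission-only logistic regression.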


BACKGROUND This review is an update of the first Cochrane publication on selenium for preventing cancer (Dennert 2011). Selenium is a metalloid with both nutritional and toxicological properties. Higher selenium exposure and selenium supplements have been suggested to protect against several types of cancer. OBJECTIVES Two research questions were addressed in this review: What is the evidence for (1) an aetiological relation between selenium exposure and cancer risk in humans, and (2) the efficacy of selenium supplementation for cancer prevention in humans? SEARCH METHODS We conducted electronic searches of the Cochrane Central Register of Controlled Trials (CENTRAL, 2013, Issue 1), MEDLINE (Ovid, 1966 to February 2013 week 1), EMBASE (1980 to 2013 week 6), CancerLit (February 2004) and CCMed (February 2011). As MEDLINE now includes the journals indexed in CancerLit, no further searches were conducted in this database after 2004. SELECTION CRITERIA We included prospective observational studies (cohort studies, including sub-cohort controlled studies and nested case-control studies) and randomised controlled trials (RCTs) with healthy adult participants (18 years of age and older). DATA COLLECTION AND ANALYSIS For observational studies, we conducted random-effects meta-analyses when five or more studies were retrieved for a specific outcome. For RCTs, we performed random-effects meta-analyses when two or more studies were available. The risk of bias in observational studies was assessed using forms adapted from the Newcastle-Ottawa Quality Assessment Scale for cohort and case-control studies; the criteria specified in the Cochrane Handbook for Systematic Reviews of Interventions were used to evaluate the risk of bias in RCTs. MAIN RESULTS We included 55 prospective observational studies (including more than 1,100,000 participants) and eight RCTs (with a total of 44,743 participants). 
For the observational studies, we found lower cancer incidence (summary odds ratio (OR) 0.69, 95% confidence interval (CI) 0.53 to 0.91, N = 8) and cancer mortality (OR 0.60, 95% CI 0.39 to 0.93, N = 6) associated with higher selenium exposure. Gender-specific subgroup analysis provided no clear evidence of different effects in men and women (P value 0.47), although cancer incidence was lower in men (OR 0.66, 95% CI 0.42 to 1.05, N = 6) than in women (OR 0.90, 95% CI 0.45 to 1.77, N = 2). The most pronounced decreases in risk of site-specific cancers were seen for stomach, bladder and prostate cancers. However, these findings have limitations due to study design, quality and heterogeneity that complicate interpretation of the summary statistics. Some studies suggested that genetic factors may modify the relation between selenium and cancer risk, a hypothesis that deserves further investigation. In RCTs, we found no clear evidence that selenium supplementation reduced the risk of any cancer (risk ratio (RR) 0.90, 95% CI 0.70 to 1.17, two studies, N = 4765) or cancer-related mortality (RR 0.81, 95% CI 0.49 to 1.32, two studies, N = 18,698), and this finding was confirmed when the analysis was restricted to studies with low risk of bias. The effect on prostate cancer was imprecise (RR 0.90, 95% CI 0.71 to 1.14, four studies, N = 19,110), and when the analysis was limited to trials with low risk of bias, the interventions showed no effect (RR 1.02, 95% CI 0.90 to 1.14, three studies, N = 18,183). The risk of non-melanoma skin cancer was increased (RR 1.44, 95% CI 0.95 to 1.17, three studies, N = 1900). Results of two trials, the Nutritional Prevention of Cancer Trial (NPCT) and the Selenium and Vitamin E Cancer Trial (SELECT), also raised concerns about a possible increased risk of type 2 diabetes, alopecia and dermatitis due to selenium supplements. 
An early hypothesis generated by NPCT, that individuals with the lowest blood selenium levels at baseline could reduce their risk of cancer, particularly of prostate cancer, by increasing selenium intake, has not been confirmed by subsequent trials. As the RCT participants were overwhelmingly male (94%), gender differences could not be systematically assessed. AUTHORS' CONCLUSIONS Although an inverse association between selenium exposure and the risk of some types of cancer was found in some observational studies, this cannot be taken as evidence of a causal relation, and these results should be interpreted with caution. These studies have many limitations, including issues with assessment of exposure to selenium and to its various chemical forms, heterogeneity, confounding and other biases. Conflicting results, including inverse, null and direct associations, have been reported for some cancer types. RCTs assessing the effects of selenium supplementation on cancer risk have yielded inconsistent results, although the most recent studies, characterised by a low risk of bias, found no beneficial effect on cancer risk, more specifically on risk of prostate cancer, as well as little evidence of any influence of baseline selenium status. Rather, some trials suggest harmful effects of selenium exposure. To date, no convincing evidence suggests that selenium supplements can prevent cancer in humans.
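The summary odds ratios for the observational studies come from random-effects meta-analysis. A compact sketch of the standard DerSimonian-Laird procedure, operating on log odds ratios and their variances (the inputs in the test are placeholders, not the review's data):

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling of study effects (e.g. log odds ratios)
    with the DerSimonian-Laird tau^2 estimator.

    Returns (pooled effect, standard error, tau^2)."""
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0        # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2
```

Exponentiating the pooled log-OR and its 95% CI bounds (pooled ± 1.96·se) yields a summary OR with CI of the kind reported above; with identical study effects tau² collapses to zero and the result equals the fixed-effect estimate.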


Background: Erythropoiesis-stimulating agents (ESAs) reduce the need for red blood cell transfusions; however, they increase the risk of thromboembolic events and mortality. The impact of ESAs on quality of life (QoL) is controversial and has led to different recommendations by medical societies and authorities in the USA and Europe. We aimed to critically evaluate and quantify the effects of ESAs on QoL in cancer patients. Methods: We included data from randomised controlled trials (RCTs) on the effects of ESAs on QoL in cancer patients. Randomised controlled trials were identified by searching electronic databases and other sources up to January 2011. To reduce publication and outcome reporting biases, we included unreported results from clinical study reports. We conducted meta-analyses on fatigue- and anaemia-related symptoms measured with the Functional Assessment of Cancer Therapy-Fatigue (FACT-F) and FACT-Anaemia (FACT-An) subscales (primary outcomes) or other validated instruments. Results: We identified 58 eligible RCTs. Clinical study reports were available for 27% (4 out of 15) of the investigator-initiated trials and 95% (41 out of 43) of the industry-initiated trials. We excluded 21 RCTs as we could not use their QoL data for meta-analyses, either because of incomplete reporting (17 RCTs) or because of premature closure of the trial (4 RCTs). We included 37 RCTs with 10 581 patients; 21 RCTs were placebo controlled. Chemotherapy was given in 27 of the 37 RCTs. The median baseline haemoglobin (Hb) level was 10.1 g dl(-1); in 8 studies ESAs were stopped at Hb levels below 13 g dl(-1) and in 27 above 13 g dl(-1). For FACT-F, the mean difference (MD) was 2.41 (95% confidence interval (95% CI) 1.39-3.43; P<0.0001; 23 studies, n=6108) in all cancer patients and 2.81 (95% CI 1.73-3.90; P<0.0001; 19 RCTs, n=4697) in patients receiving chemotherapy, which was below the threshold (⩾3) for a clinically important difference (CID). 
Erythropoiesis-stimulating agents had a positive effect on anaemia-related symptoms (MD 4.09; 95% CI 2.37-5.80; P=0.001; 14 studies, n=2765) in all cancer patients and 4.50 (95% CI 2.55-6.45; P<0.0001; 11 RCTs, n=2436) in patients receiving chemotherapy, which was above the threshold (⩾4) for a CID. Of note, this effect persisted when we restricted the analysis to placebo-controlled RCTs in patients receiving chemotherapy. There was some evidence that the MDs for FACT-F were above the threshold for a CID in RCTs including cancer patients receiving chemotherapy with Hb levels below 12 g dl(-1) at baseline and in RCTs stopping ESAs at Hb levels above 13 g dl(-1). However, these findings for FACT-F were not confirmed when we restricted the analysis to placebo-controlled RCTs in patients receiving chemotherapy. Conclusions: In cancer patients, particularly those receiving chemotherapy, we found that ESAs provide a small but clinically important improvement in anaemia-related symptoms (FACT-An). For fatigue-related symptoms (FACT-F), the overall effect did not reach the threshold for a CID. British Journal of Cancer advance online publication, 17 April 2014; doi:10.1038/bjc.2014.171 www.bjcancer.com.


BACKGROUND Copper and its main transport protein, ceruloplasmin, have been suggested to promote the development of atherosclerosis. Most of the data come from experimental and animal model studies. Copper and mortality have not been simultaneously evaluated in patients undergoing coronary angiography. METHODS AND RESULTS We examined whether serum copper and ceruloplasmin concentrations are associated with angiographic coronary artery disease (CAD) and mortality from all causes and cardiovascular causes in 3253 participants of the Ludwigshafen Risk and Cardiovascular Health Study. Age- and sex-adjusted hazard ratios (HRs) for death from any cause were 2.23 (95% CI, 1.85-2.68) for copper and 2.63 (95% CI, 2.17-3.20) for ceruloplasmin when we compared the highest with the lowest quartiles. Corresponding hazard ratios for death from cardiovascular causes were 2.58 (95% CI, 2.05-3.25) and 3.02 (95% CI, 2.36-3.86), respectively. Further adjustment for various risk factors and clinical variables considerably attenuated these associations, which nevertheless remained statistically significant, and the results were consistent across subgroups. CONCLUSIONS Elevated concentrations of both copper and ceruloplasmin are independently associated with increased risk of mortality from all causes and from cardiovascular causes.


In the general population, HDL cholesterol (HDL-C) is associated with reduced cardiovascular events. However, recent experimental data suggest that the vascular effects of HDL can be heterogeneous. We examined the association of HDL-C with all-cause and cardiovascular mortality in the Ludwigshafen Risk and Cardiovascular Health study comprising 3307 patients undergoing coronary angiography. Patients were followed for a median of 9.9 years. Estimated GFR (eGFR) was calculated using the Chronic Kidney Disease Epidemiology Collaboration eGFR creatinine-cystatin C (eGFRcreat-cys) equation. The effect of increasing HDL-C serum levels was assessed using Cox proportional hazard models. In participants with normal kidney function (eGFR>90 ml/min per 1.73 m(2)), higher HDL-C was associated with reduced risk of all-cause and cardiovascular mortality and coronary artery disease severity (hazard ratio [HR], 0.51, 95% confidence interval [95% CI], 0.26-0.92 [P=0.03]; HR, 0.30, 95% CI, 0.13-0.73 [P=0.01]). Conversely, in patients with mild (eGFR=60-89 ml/min per 1.73 m(2)) and more advanced reduced kidney function (eGFR<60 ml/min per 1.73 m(2)), higher HDL-C did not associate with lower risk for mortality (eGFR=60-89 ml/min per 1.73 m(2): HR, 0.68, 95% CI, 0.45-1.04 [P=0.07]; HR, 0.84, 95% CI, 0.50-1.40 [P=0.50]; eGFR<60 ml/min per 1.73 m(2): HR, 1.18, 95% CI, 0.60-1.81 [P=0.88]; HR, 0.82, 95% CI, 0.40-1.69 [P=0.60]). Moreover, Cox regression analyses revealed interaction between HDL-C and eGFR in predicting all-cause and cardiovascular mortality (P=0.04 and P=0.02, respectively). We confirmed a lack of association between higher HDL-C and lower mortality in an independent cohort of patients with definite CKD (P=0.63). In summary, higher HDL-C levels did not associate with reduced mortality risk and coronary artery disease severity in patients with reduced kidney function. Indeed, abnormal HDL function might confound the outcome of HDL-targeted therapies in these patients.


BACKGROUND Recently, two simple clinical scores were published to predict survival in trauma patients. Both scores may successfully guide major trauma triage, but neither has been independently validated in a hospital setting. METHODS This is a cohort study with 30-day mortality as the primary outcome to validate two new trauma scores, the Mechanism, Glasgow Coma Scale (GCS), Age, and Pressure (MGAP) score and the GCS, Age, and Pressure (GAP) score, using data from the UK Trauma Audit and Research Network. First, an assessment of discrimination, using the area under the receiver operating characteristic (ROC) curve, and of calibration, comparing mortality rates with those originally published, was performed. Second, we calculated sensitivity, specificity, predictive values, and likelihood ratios for prognostic score performance. Third, we propose new cutoffs for the risk categories. RESULTS A total of 79,807 adult (≥16 years) major trauma patients (2000-2010) were included; 5,474 (6.9%) died. Mean (SD) age was 51.5 (22.4) years, median GCS score was 15 (interquartile range, 15-15), and median Injury Severity Score (ISS) was 9 (interquartile range, 9-16). More than 50% of the patients had a low-risk GAP or MGAP score (1% mortality). With regard to discrimination, areas under the ROC curve were 87.2% for the GAP score (95% confidence interval, 86.7-87.7) and 86.8% for the MGAP score (95% confidence interval, 86.2-87.3). With regard to calibration, 2,390 (3.3%), 1,900 (28.5%), and 1,184 (72.2%) patients died in the low-, medium-, and high-risk GAP categories, respectively. In the low- and medium-risk groups, these rates were almost double those previously published. For MGAP, 1,861 (2.8%), 1,455 (15.2%), and 2,158 (58.6%) patients died in the low-, medium-, and high-risk categories, consonant with the results originally published. Reclassifying score point cutoffs improved likelihood ratios, sensitivity, and specificity, as well as areas under the ROC curve. 
CONCLUSION We found both scores to be valid triage tools for stratifying emergency department patients according to their risk of death. MGAP calibrated better, but GAP slightly improved discrimination. The newly proposed cutoffs better differentiate risk classification and may therefore facilitate hospital resource allocation. LEVEL OF EVIDENCE Prognostic study, level II.
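The sensitivity, specificity, predictive values, and likelihood ratios used to evaluate a GAP/MGAP cutoff all derive from a single 2x2 table of score classification against 30-day mortality. A minimal sketch (the counts in the example are invented, not Trauma Audit and Research Network data):

```python
def triage_performance(tp, fp, fn, tn):
    """Diagnostic accuracy of a score cutoff from a 2x2 table.

    tp: classified high-risk and died, fp: high-risk and survived,
    fn: classified low-risk and died,  tn: low-risk and survived.
    """
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "lr_pos": sens / (1.0 - spec),  # positive likelihood ratio
        "lr_neg": (1.0 - sens) / spec,  # negative likelihood ratio
    }
```

Reclassifying a cutoff trades sensitivity against specificity; the likelihood ratios summarize how strongly a high- or low-risk classification shifts the pre-test odds of death, which is why they are reported alongside the ROC areas above.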


CONTEXT Subclinical hypothyroidism has been associated with increased risk of coronary heart disease (CHD), particularly with thyrotropin levels of 10.0 mIU/L or greater. The measurement of thyroid antibodies helps predict the progression to overt hypothyroidism, but it is unclear whether thyroid autoimmunity independently affects CHD risk. OBJECTIVE The objective of the study was to compare the CHD risk of subclinical hypothyroidism with and without thyroid peroxidase antibodies (TPOAbs). DATA SOURCES AND STUDY SELECTION A MEDLINE and EMBASE search from 1950 to 2011 was conducted for prospective cohorts, reporting baseline thyroid function, antibodies, and CHD outcomes. DATA EXTRACTION Individual data of 38 274 participants from six cohorts for CHD mortality followed up for 460 333 person-years and 33 394 participants from four cohorts for CHD events. DATA SYNTHESIS Among 38 274 adults (median age 55 y, 63% women), 1691 (4.4%) had subclinical hypothyroidism, of whom 775 (45.8%) had positive TPOAbs. During follow-up, 1436 participants died of CHD and 3285 had CHD events. Compared with euthyroid individuals, age- and gender-adjusted risks of CHD mortality in subclinical hypothyroidism were similar among individuals with and without TPOAbs [hazard ratio (HR) 1.15, 95% confidence interval (CI) 0.87-1.53 vs HR 1.26, CI 1.01-1.58, P for interaction = .62], as were risks of CHD events (HR 1.16, CI 0.87-1.56 vs HR 1.26, CI 1.02-1.56, P for interaction = .65). Risks of CHD mortality and events increased with higher thyrotropin, but within each stratum, risks did not differ by TPOAb status. CONCLUSIONS CHD risk associated with subclinical hypothyroidism did not differ by TPOAb status, suggesting that biomarkers of thyroid autoimmunity do not add independent prognostic information for CHD outcomes.


Background: The aim of the present study was to evaluate the feasibility of using a telephone survey to gain an understanding of the possible herd and management factors influencing the performance (i.e., safety and efficacy) of a vaccine against porcine circovirus type 2 (PCV2) in a large number of herds, and to estimate customers' satisfaction. Results: Datasets from 227 pig herds that currently applied or had applied a PCV2 vaccine were analysed. Since 1-, 2- and 3-site production systems were surveyed, the herds were allocated to one of two subsets, in which only the applicable variables out of 180 were analysed. Group 1 comprised herds with sows, suckling pigs and nursery pigs, whereas herds in Group 2 in all cases kept fattening pigs. Overall, 14 variables evaluating the subjective satisfaction with one particular PCV2 vaccine were combined into an abstract dependent variable for further models, which was characterized by a binary outcome from a cluster analysis: good/excellent satisfaction (green cluster) and moderate satisfaction (red cluster). The other 166 variables comprised information about diagnostics, vaccination, housing and management, and were considered as independent variables. In Group 1, herds using the vaccine due to recognised PCV2-related health problems (wasting, mortality or porcine dermatitis and nephropathy syndrome) had a 2.4-fold increased chance (1/OR) of belonging to the green cluster. In the final model for Group 1, the diagnosis of diseases other than PCV2, a reason for vaccine administration other than PCV2-associated diseases, and the use of a single injection of iron had a significant influence on allocation to the green cluster (P < 0.05). 
In Group 2, only an unchanged or delayed time of vaccination influenced satisfaction (P < 0.05). Conclusion: The methodology and statistical approach used in this study were feasible to scientifically assess 'satisfaction' and to determine factors influencing farmers' and vets' opinions about the safety and efficacy of a new vaccine.


BACKGROUND Peripheral artery disease (PAD) is a major cause of cardiovascular ischemic events and amputation. Knowledge gaps exist in defining and measuring key factors that predict these events. The objective of this study was to assess whether duration of limb ischemia would serve as a major predictor of limb and patient survival. METHODS The FReedom from Ischemic Events: New Dimensions for Survival (FRIENDS) registry enrolled consecutive patients with limb-threatening peripheral artery disease at a single tertiary care hospital. Demographic information, key clinical care time segments, functional status and use of revascularization, and pharmacotherapy data were collected at baseline, and vascular ischemic events, cardiovascular mortality, and all-cause mortality were recorded at 30 days and 1 year. RESULTS A total of 200 patients with median (interquartile range) age of 76 years (65-84 years) were enrolled in the registry. Median duration of limb ischemia was 0.75 days for acute limb ischemia (ALI) and 61 days for chronic critical limb ischemia (CLI). Duration of limb ischemia of <12, 12 to 24, and >24 hours in patients with ALI was associated with much higher rates of first amputation (P = .0002) and worse amputation-free survival (P = .037). No such associations were observed in patients with CLI. CONCLUSIONS For individuals with ischemic symptoms <14 days, prolonged limb ischemia is associated with higher 30-day and 1-year amputation, systemic ischemic event rates, and worse amputation-free survival. No such associations are evident for individuals with chronic CLI. These data imply that prompt diagnosis and revascularization might improve outcomes for patients with ALI.