35 results for 860[729.1].07[Sarduy]


Relevance: 100.00%

Abstract:

Objective: To examine the association between preoperative quality of life (QoL) and postoperative adverse events in women treated for endometrial cancer. Methods: 760 women with apparent Stage I endometrial cancer were randomised into a clinical trial evaluating laparoscopic versus open surgery. This analysis includes women who had preoperative QoL measurements from the Functional Assessment of Cancer Therapy-General (FACT-G) questionnaire and who were followed up for at least 6 weeks after surgery (n=684). The outcomes for this study were defined as (1) the occurrence of moderate to severe adverse events (AEs) within 6 months (Common Toxicity Criteria (CTC) grade ≥3); and (2) any serious adverse event (SAE). The association between preoperative QoL and the occurrence of AEs was examined after controlling for baseline comorbidity and other factors. Results: After adjusting for other factors, the odds of an AE of CTC grade ≥3 increased significantly with each unit decrease in baseline FACT-G score (OR=1.02, 95% CI 1.00-1.03, p=0.030), driven by the physical well-being (PWB) (OR=1.09, 95% CI 1.04-1.13, p=0.0002) and functional well-being (FWB) (OR=1.04, 95% CI 1.00-1.07, p=0.035) subscales. Similarly, the odds of an SAE increased significantly with each unit decrease in baseline FACT-G score (OR=1.02, 95% CI 1.01-1.04, p=0.011), baseline PWB (OR=1.11, 95% CI 1.06-1.16, p<0.0001) or baseline FWB (OR=1.05, 95% CI 1.01-1.10, p=0.0077). Conclusion: Women with early endometrial cancer presenting with lower QoL prior to surgery are at higher risk of developing a serious adverse event following surgery. Funding: Cancer Council Queensland, Cancer Council New South Wales, Cancer Council Victoria, Cancer Council Western Australia; NHMRC project grant 456110; Cancer Australia project grant 631523; The Women and Infants Research Foundation, Western Australia; Royal Brisbane and Women’s Hospital Foundation; Wesley Research Institute; Gallipoli Research Foundation; Gynetech; TYCO Healthcare, Australia; Johnson and Johnson Medical, Australia; Hunter New England Centre for Gynaecological Cancer; Genesis Oncology Trust; and Smart Health Research Grant QLD Health.
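As an illustration of the type of analysis described above, the sketch below fits a logistic regression to simulated data and converts the coefficient on a baseline QoL score into an odds ratio per one-unit decrease. It is not the trial's actual analysis: the column names (fact_g, comorbidity, ae_grade3plus), the simulated data and the single adjustment covariate are assumptions, and statsmodels is assumed to be available.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 684
df = pd.DataFrame({
    "fact_g": rng.normal(80, 12, n),       # baseline FACT-G score (higher = better QoL)
    "comorbidity": rng.integers(0, 4, n),  # illustrative comorbidity count
})
# Simulate an outcome in which lower QoL raises AE risk (illustration only)
logit_p = -2.0 - 0.02 * (df["fact_g"] - 80) + 0.3 * df["comorbidity"]
df["ae_grade3plus"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("ae_grade3plus ~ fact_g + comorbidity", data=df).fit(disp=False)
# The abstract reports the OR per unit *decrease* in FACT-G, i.e. exp(-beta)
or_per_unit_decrease = np.exp(-model.params["fact_g"])
ci_low, ci_high = np.exp(-model.conf_int().loc["fact_g"])[::-1]
print(f"OR per 1-point decrease in FACT-G: {or_per_unit_decrease:.3f} "
      f"(95% CI {ci_low:.3f} to {ci_high:.3f})")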

Relevance: 100.00%

Abstract:

Objective To quantify and compare the treatment effect and risk of bias of trials reporting biomarkers or intermediate outcomes (surrogate outcomes) versus trials using final patient relevant primary outcomes. Design Meta-epidemiological study. Data sources All randomised clinical trials published in 2005 and 2006 in six high impact medical journals: Annals of Internal Medicine, BMJ, Journal of the American Medical Association, Lancet, New England Journal of Medicine, and PLoS Medicine. Study selection Two independent reviewers selected trials. Data extraction Trial characteristics, risk of bias, and outcomes were recorded according to a predefined form. Two reviewers independently checked data extraction. The ratio of odds ratios was used to quantify the degree of difference in treatment effects between the trials using surrogate outcomes and those using patient relevant outcomes, also adjusted for trial characteristics. A ratio of odds ratios >1.0 implies that trials with surrogate outcomes report larger intervention effects than trials with patient relevant outcomes. Results 84 trials using surrogate outcomes and 101 using patient relevant outcomes were considered for analyses. Study characteristics of trials using surrogate outcomes and those using patient relevant outcomes were well balanced, except for median sample size (371 v 741) and single centre status (23% v 9%). Their risk of bias did not differ. Primary analysis showed trials reporting surrogate endpoints to have larger treatment effects (odds ratio 0.51, 95% confidence interval 0.42 to 0.60) than trials reporting patient relevant outcomes (0.76, 0.70 to 0.82), with an unadjusted ratio of odds ratios of 1.47 (1.07 to 2.01) and adjusted ratio of odds ratios of 1.46 (1.05 to 2.04). This result was consistent across sensitivity and secondary analyses. Conclusions Trials reporting surrogate primary outcomes are more likely to report larger treatment effects than trials reporting final patient relevant primary outcomes. This finding was not explained by differences in the risk of bias or characteristics of the two groups of trials.
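For readers unfamiliar with the ratio of odds ratios (ROR), the short sketch below reproduces the headline comparison from the abstract on the log-odds scale: it recovers an ROR of roughly 1.5 from the two pooled odds ratios and derives an approximate confidence interval from their reported CIs. This is a back-of-envelope illustration, not the study's trial-level meta-regression, so the interval will not match the published 1.07 to 2.01 exactly.

import math

def log_se_from_ci(lo, hi, z=1.96):
    """Approximate the standard error of a log odds ratio from its 95% CI."""
    return (math.log(hi) - math.log(lo)) / (2 * z)

or_surrogate, se_s = 0.51, log_se_from_ci(0.42, 0.60)
or_patient, se_p = 0.76, log_se_from_ci(0.70, 0.82)

# ROR > 1 means surrogate trials report larger (further-from-1) intervention effects
log_ror = math.log(or_patient) - math.log(or_surrogate)
se_ror = math.sqrt(se_s**2 + se_p**2)
ror = math.exp(log_ror)
ci = (math.exp(log_ror - 1.96 * se_ror), math.exp(log_ror + 1.96 * se_ror))
print(f"ROR = {ror:.2f}, approx 95% CI {ci[0]:.2f} to {ci[1]:.2f}")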

Relevance: 100.00%

Abstract:

Background Studies of mid-aged adults provide evidence of a relationship between sitting-time and all-cause mortality, but evidence in older adults is limited. The aim of this study was to examine the relationship between total sitting-time and all-cause mortality in older women. Methods The prospective cohort design involved 6656 participants in the Australian Longitudinal Study on Women's Health who were followed for up to 9 years (2002, age 76–81, to 2011, age 85–90). Self-reported total sitting-time was linked to all-cause mortality data from the National Death Index from 2002 to 2011. Cox proportional hazards models were used to examine the relationship between sitting-time and all-cause mortality, with adjustment for potential sociodemographic, behavioural and health confounders. Results There were 2003 (30.1%) deaths during a median follow-up of 6 years. Compared with participants who sat <4 h/day, those who sat 8–11 h/day had a 1.45 times higher risk of death and those who sat ≥11 h/day had a 1.65 times higher risk of death. These risks remained after adding sociodemographic and behavioural covariates, but were attenuated after adjustment for health covariates. A significant interaction (p=0.02) was found between sitting-time and physical activity (PA), with increased mortality risk for prolonged sitting only among participants not meeting PA guidelines (HR for sitting ≥8 h/day: 1.31, 95% CI 1.07 to 1.61; HR for sitting ≥11 h/day: 1.47, 95% CI 1.15 to 1.93). Conclusions Prolonged sitting-time was positively associated with all-cause mortality. Women who reported sitting for more than 8 h/day and did not meet PA guidelines had an increased risk of dying within the next 9 years.
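A minimal sketch of the kind of Cox proportional hazards model with a sitting-time by physical-activity interaction described above, run on simulated data. It is not the study's code: the lifelines package is assumed to be available, and all column names (sit_8_11h, sit_ge11h, meets_pa, followup_years, died, age) and the generated data are hypothetical.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 6656
sit_cat = rng.choice(["lt4", "4to8", "8to11", "ge11"], size=n)
df = pd.DataFrame({
    "sit_8_11h": (sit_cat == "8to11").astype(int),  # 1 if sitting 8-11 h/day (reference: <4 h/day)
    "sit_ge11h": (sit_cat == "ge11").astype(int),   # 1 if sitting >=11 h/day
    "meets_pa": rng.integers(0, 2, n),              # 1 if meeting physical-activity guidelines
    "age": rng.uniform(76, 81, n),
    "followup_years": rng.uniform(0.5, 9.0, n),     # time to death or censoring
    "died": rng.integers(0, 2, n),                  # 1 = died during follow-up
})
# Interaction terms: prolonged sitting among women NOT meeting PA guidelines
df["sit_8_11h_lowpa"] = df["sit_8_11h"] * (1 - df["meets_pa"])
df["sit_ge11h_lowpa"] = df["sit_ge11h"] * (1 - df["meets_pa"])

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="died")
cph.print_summary()  # hazard ratios are reported as exp(coef)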

Relevance: 100.00%

Abstract:

Background People admitted to intensive care units and those with chronic health care problems often require long-term vascular access. Central venous access devices (CVADs) are used for administering intravenous medications and for blood sampling. CVADs are covered with a dressing and secured with an adhesive or adhesive tape to protect them from infection and reduce movement. Dressings are changed when they become soiled with blood or start to come away from the skin. Repeated removal and application of dressings can damage the skin. The skin is an important barrier that protects the body against infection. Less frequent dressing changes may reduce skin damage, but it is unclear whether this practice affects the frequency of catheter-related infections. Objectives To assess the effect of the frequency of CVAD dressing changes on the incidence of catheter-related infections and other outcomes including pain and skin damage. Search methods In June 2015 we searched: The Cochrane Wounds Specialised Register; The Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library); Ovid MEDLINE; Ovid MEDLINE (In-Process & Other Non-Indexed Citations); Ovid EMBASE and EBSCO CINAHL. We also searched clinical trials registries for registered trials. There were no restrictions with respect to language, date of publication or study setting. Selection criteria All randomised controlled trials (RCTs) evaluating the effect of the frequency of CVAD dressing changes on the incidence of catheter-related infections in patients in any healthcare setting. Data collection and analysis We used standard Cochrane review methodology. Two review authors independently assessed studies for inclusion and performed risk of bias assessment and data extraction. We undertook meta-analysis where appropriate; otherwise we synthesised the data descriptively where they were too heterogeneous to pool. Main results We included five RCTs (2277 participants) that compared different frequencies of CVAD dressing changes. The studies were all conducted in Europe and published between 1995 and 2009. Participants were recruited from the intensive care and cancer care departments of one children's and four adult hospitals. The studies used a variety of transparent dressings and compared a longer interval between dressing changes (5 to 15 days; intervention) with a shorter interval between changes (2 to 5 days; control). In each study participants were followed up until the CVAD was removed or until discharge from ICU or hospital.
- Confirmed catheter-related bloodstream infection (CRBSI): One trial randomised 995 people receiving central venous catheters to a longer or shorter interval between dressing changes and measured CRBSI. It is unclear whether there is a difference in the risk of CRBSI between people having long or short intervals between dressing changes (RR 1.42, 95% confidence interval (CI) 0.40 to 4.98) (low quality evidence).
- Suspected catheter-related bloodstream infection: Two trials randomised a total of 151 participants to longer or shorter dressing intervals and measured suspected CRBSI. It is unclear whether there is a difference in the risk of suspected CRBSI between people having long or short intervals between dressing changes (RR 0.70, 95% CI 0.23 to 2.10) (low quality evidence).
- All-cause mortality: Three trials randomised a total of 896 participants to longer or shorter dressing intervals and measured all-cause mortality. It is unclear whether there is a difference in the risk of death from any cause between people having long or short intervals between dressing changes (RR 1.06, 95% CI 0.90 to 1.25) (low quality evidence).
- Catheter-site infection: Two trials randomised a total of 371 participants to longer or shorter dressing intervals and measured catheter-site infection. It is unclear whether there is a difference in risk of catheter-site infection between people having long or short intervals between dressing changes (RR 1.07, 95% CI 0.71 to 1.63) (low quality evidence).
- Skin damage: One small trial (112 children) and three trials (1475 adults) measured skin damage. There was very low quality evidence for the effect of long intervals between dressing changes on skin damage compared with short intervals (children: RR of scoring ≥2 on the skin damage scale 0.33, 95% CI 0.16 to 0.68; data for adults not pooled).
- Pain: Two studies involving 193 participants measured pain. It is unclear if there is a difference between long and short interval dressing changes in pain during dressing removal (RR 0.80, 95% CI 0.46 to 1.38) (low quality evidence).
Authors' conclusions The best available evidence is currently inconclusive regarding whether longer intervals between CVAD dressing changes are associated with more or less catheter-related infection, mortality or pain than shorter intervals.
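The pooled risk ratios quoted above come from meta-analysis; the sketch below shows a generic fixed-effect inverse-variance pooling of log risk ratios across a few trials. The event counts are made up for illustration, and this is not the review's data or software.

import numpy as np

# (events_long, n_long, events_short, n_short) per trial -- hypothetical counts
trials = [(30, 250, 28, 248), (45, 200, 42, 198), (12, 100, 11, 100)]

log_rr, weights = [], []
for e1, n1, e0, n0 in trials:
    rr = (e1 / n1) / (e0 / n0)
    var = 1 / e1 - 1 / n1 + 1 / e0 - 1 / n0   # variance of log risk ratio
    log_rr.append(np.log(rr))
    weights.append(1 / var)

log_rr, weights = np.array(log_rr), np.array(weights)
pooled = np.sum(weights * log_rr) / np.sum(weights)
se = np.sqrt(1 / np.sum(weights))
print(f"Pooled RR {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * se):.2f} to {np.exp(pooled + 1.96 * se):.2f})")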

Relevance: 100.00%

Abstract:

Nitrogen fertiliser is a major source of atmospheric N2O, and in recent years there has been growing evidence for a non-linear, exponential relationship between N fertiliser application rate and N2O emissions. However, there is still high uncertainty around the relationship between N fertiliser rate and N2O emissions for many cropping systems. We conducted year-round measurements of N2O emissions and lint yield in four N rate treatments (0, 90, 180 and 270 kg N ha-1) in a cotton-fallow rotation on a black Vertosol in Australia. We observed a non-linear, exponential response of N2O emissions to increasing N fertiliser rates, with cumulative annual N2O emissions of 0.55 kg N ha-1, 0.67 kg N ha-1, 1.07 kg N ha-1 and 1.89 kg N ha-1 for the four respective N fertiliser rates, while no yield response to N occurred above 180 kg N ha-1. The fertiliser-induced annual N2O emission factors (EFs) increased from 0.13% to 0.29% and 0.50% for the 90N, 180N and 270N treatments respectively, significantly lower than the IPCC Tier 1 default value (1.0%). This non-linear response suggests that an exponential N2O emissions model may be more appropriate for estimating N2O emissions from soils cultivated with cotton in Australia. It also demonstrates that improved agricultural N management practices can be adopted in cotton to substantially reduce N2O emissions without affecting yield potential.
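The emission factors quoted above follow directly from the cumulative emissions, and the sketch below reproduces that arithmetic and fits a simple exponential curve to the four data points. The exponential form y = a * exp(b * N) is an assumption for illustration; the paper's actual emission model is not specified in the abstract.

import numpy as np
from scipy.optimize import curve_fit

n_rate = np.array([0.0, 90.0, 180.0, 270.0])   # N applied, kg N/ha
n2o = np.array([0.55, 0.67, 1.07, 1.89])       # cumulative annual N2O emissions, kg N/ha

# Fertiliser-induced emission factor relative to the unfertilised control, in %
ef = (n2o[1:] - n2o[0]) / n_rate[1:] * 100
print("EF (%):", np.round(ef, 2))              # approx [0.13, 0.29, 0.50]

# Exponential response of emissions to N rate (assumed functional form)
def exp_model(n, a, b):
    return a * np.exp(b * n)

(a, b), _ = curve_fit(exp_model, n_rate, n2o, p0=(0.5, 0.005))
print(f"Fitted model: N2O = {a:.2f} * exp({b:.4f} * N)")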