747 results for LIFE-STYLE FACTORS
Abstract:
Introduction The benefits of physical activity are established and numerous, not the least of which is a reduced risk of negative cardiovascular events. While sedentary lifestyles are having negative impacts across populations, people with musculoskeletal disorders may face additional challenges to becoming physically active. Unfortunately, interventions in ambulatory hospital clinics for people with musculoskeletal disorders primarily focus on their presenting musculoskeletal complaint, with cursory attention given to lifestyle risk factors, including physical inactivity. This missed opportunity is likely to have both personal costs for patients and economic costs for downstream healthcare funders. Objectives The objective of this study was to investigate the presence of obesity, diabetes, diagnosed cardiac conditions, and previous stroke (CVA) among insufficiently physically active patients accessing (non-surgical) ambulatory hospital clinics for musculoskeletal disorders, to indicate whether a targeted risk-reducing intervention is warranted. Methods A sub-group analysis was conducted of patients (n=110) who self-reported undertaking insufficient physical activity to meet national (Australian) minimum recommended guidelines. Responses to the Active Australia Survey were used to identify insufficiently active patients from a larger cohort study being undertaken across three (non-surgical) ambulatory hospital clinics for musculoskeletal disorders. Outcomes of interest included body mass index, Type-II diabetes, diagnosed cardiac conditions, previous CVA and patients’ current health-related quality of life (EuroQol-5D). Results The mean (standard deviation) age of inactive patients was 56 (14) years. Body mass index values indicated that n=80 (73%) were overweight (n=26, 24%) or obese (n=45, 49%). In addition to their presenting condition, a substantial number of patients reported comorbid diabetes (n=23, 21%), hypertension (n=25, 23%) or an existing heart condition (n=14, 13%); n=4 (3%) had previously experienced a CVA, as well as other comorbid conditions. Health-related quality of life was also substantially impacted, with a mean (standard deviation) multi-attribute utility score of 0.51 (0.32). Conclusion A range of health conditions and risk factors for further negative health events, including cardiovascular complications, consistent with physically inactive lifestyles was evident. A targeted risk-reducing intervention is warranted for this high-risk clinical group.
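For reference, the overweight and obese categories reported above rest on the standard body mass index calculation; the abstract does not state the cut-points applied, so the conventional WHO thresholds are assumed here:

BMI = \frac{\text{weight (kg)}}{\text{height (m)}^2}, \qquad 25 \le BMI < 30 \ \text{(overweight)}, \qquad BMI \ge 30 \ \text{(obese)}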
Abstract:
Australia is undergoing a critical demographic transition: the population is ageing. By 2050, one in four Australians will be older than 65 years and by 2031, the number of older Australians requiring residential aged care will increase 63%, to 1.4 million (ABS, 2005). In anticipation of this global demographic transition, the World Health Organisation has advocated ‘active ageing’, identifying health, participation and security as the three key factors that enhance quality of life for people as they age (WHO, 2002). While there is considerable discussion and acceptance of active ageing principles, little is known about the experience of ‘active ageing’ for older Australians who live in Residential Aged Care Facilities (RACF). This research addresses this knowledge gap by exploring the key facilitators and barriers to quality of life and active ageing in aged care from the perspective of aged care residents (n=12). To do this, the project documented the initial expectations and daily life experience of new residents living in a RACF over a one-year period. Alongside in-depth interviews and surveys, the project utilised Photovoice methodology, in which participants used photography to record their lived experiences. The initial findings suggest that satisfaction with living in aged care centres on five key themes: residents’ mental attitude to living in aged care, forming positive peer and staff relationships, self-determination and maintaining independence, opportunities to participate in interesting activities, and living in a safe and comfortable physical environment. This paper reports on the last of these five key themes, focusing on the role of design in facilitating quality of life, specifically: “living within these walls” – safety, comfort and the physical environment.
Abstract:
The US National Institute of Standards and Technology (NIST) showed that, in 2004, owners and operations managers bore two-thirds of the total industry cost burden from inadequate interoperability in construction projects from inception to operation, amounting to USD10.6 billion. Building Information Modelling (BIM) and similar tools were identified by Engineers Australia in 2005 as potential instruments to significantly reduce this sum, which in Australia could amount to a total industry-wide cost burden of AUD12 billion. Public sector road authorities in Australia have a key responsibility in driving initiatives to reduce greenhouse gas emissions from the construction and operation of transport infrastructure. However, as previous research has shown, the Environmental Impact Assessment process, typically used for project approvals and permitting based on project designs available at the consent stage, lacks Key Performance Indicators (KPIs) that include long-term impact factors and transfer of information throughout the project life cycle. In the building construction industry, BIM is widely used to model sustainability KPIs such as energy consumption, and is integrated with facility management systems. This paper proposes that a similar use of BIM in early design phases of transport infrastructure could provide: (i) productivity gains through improved interoperability and documentation; (ii) the opportunity to carry out detailed cost-benefit analyses leading to significant operational cost savings; (iii) coordinated planning of street and highway lighting with other energy and environmental considerations; (iv) measurable KPIs that include long-term impact factors which are transferable throughout the project life cycle; and (v) the opportunity for integrating design documentation with sustainability whole-of-life targets.
Abstract:
Objectives To examine the level of knowledge of doctors about the law on withholding and withdrawing life-sustaining treatment from adults who lack decision-making capacity, and factors associated with a higher level of knowledge. Design, setting and participants Postal survey of all specialists in emergency medicine, geriatric medicine, intensive care, medical oncology, palliative medicine, renal medicine and respiratory medicine on the AMPCo Direct database in New South Wales, Victoria and Queensland. Survey initially posted to participants on 18 July 2012 and closed on 31 January 2013. Main outcome measures Medical specialists’ levels of knowledge about the law, based on their responses to two survey questions. Results Overall response rate was 32%. For the seven statements contained in the two questions about the law, the mean knowledge score was 3.26 out of 7. State and specialty were the strongest predictors of legal knowledge. Conclusions Among doctors who practise in the end-of-life field, there are some significant knowledge gaps about the law on withholding and withdrawing life-sustaining treatment from adults who lack decision-making capacity. Significant consequences for both patients and doctors can flow from a failure to comply with the law. Steps should be taken to improve doctors’ legal knowledge in this area and to harmonise the law across Australia.
Abstract:
Background: Advance Care Planning is an iterative process of discussion, decision-making and documentation about end-of-life care. Advance Care Planning is highly relevant in palliative care due to intersecting clinical needs. To enhance the implementation of Advance Care Planning, the contextual factors influencing its uptake need to be better understood. Aim: To identify the contextual factors influencing the uptake of Advance Care Planning in palliative care as published between January 2008 and December 2012. Methods: Databases were systematically searched for studies about Advance Care Planning in palliative care published between January 2008 and December 2012. This yielded 27 eligible studies, which were appraised using National Institute for Health and Care Excellence Quality Appraisal Checklists. Iterative thematic synthesis was used to group results. Results: Factors associated with greater uptake included older age, a college degree, a diagnosis of cancer, greater functional impairment, being white, greater understanding of poor prognosis and receiving or working in specialist palliative care. Barriers included having non-malignant diagnoses, having dependent children, being African American, and uncertainty about Advance Care Planning and its legal status. Individuals’ previous illness experiences, preferences and attitudes also influenced their participation. Conclusion: Factors influencing the uptake of Advance Care Planning in palliative care are complex and multifaceted, reflecting the diverse and often competing needs of patients, health professionals, legislature and health systems. Large population-based studies of palliative care patients are required to develop the sound theoretical and empirical foundation needed to improve uptake of Advance Care Planning in this setting.
Abstract:
Oncogene-induced senescence (OIS) is a potent tumor-suppressive mechanism that is thought to come at the cost of aging. The Forkhead box O (FOXO) transcription factors are regulators of life span and tumor suppression. However, whether and how FOXOs function in OIS have been unclear. Here, we show a role for FOXO4 in mediating senescence by the human BRAFV600E oncogene, which arises commonly in melanoma. BRAFV600E signaling through mitogen-activated protein kinase/extracellular signal-regulated kinase kinase resulted in increased reactive oxygen species levels and c-Jun NH2-terminal kinase-mediated activation of FOXO4 via its phosphorylation on Thr223, Ser226, Thr447, and Thr451. BRAFV600E-induced FOXO4 phosphorylation resulted in p21Cip1-mediated cell senescence independent of p16Ink4a or p27Kip1. Importantly, melanocyte-specific activation of BRAFV600E in vivo resulted in the formation of skin nevi expressing Thr223/Ser226-phosphorylated FOXO4 and elevated p21Cip1. Together, these findings support a model in which FOXOs mediate a trade-off between cancer and aging.
Abstract:
PURPOSE/OBJECTIVES: To identify latent classes of individuals with distinct quality-of-life (QOL) trajectories, to evaluate for differences in demographic characteristics between the latent classes, and to evaluate for variations in pro- and anti-inflammatory cytokine genes between the latent classes. DESIGN: Descriptive, longitudinal study. SETTING: Two radiation therapy departments located in a comprehensive cancer center and a community-based oncology program in northern California. SAMPLE: 168 outpatients with prostate, breast, brain, or lung cancer and 85 of their family caregivers (FCs). METHODS: Growth mixture modeling (GMM) was employed to identify latent classes of individuals based on QOL scores measured prior to, during, and for four months following completion of radiation therapy. Single nucleotide polymorphisms (SNPs) and haplotypes in 16 candidate cytokine genes were tested between the latent classes. Logistic regression was used to evaluate the relationships among genotypic and phenotypic characteristics and QOL GMM group membership. MAIN RESEARCH VARIABLES: QOL latent class membership and variations in cytokine genes. FINDINGS: Two latent QOL classes were found: higher and lower. Patients and FCs who were younger, identified with an ethnic minority group, had poorer functional status, or had children living at home were more likely to belong to the lower QOL class. After controlling for significant covariates, between-group differences were found in SNPs in interleukin 1 receptor 2 (IL1R2) and nuclear factor kappa beta 2 (NFKB2). For IL1R2, carrying one or two doses of the rare C allele was associated with decreased odds of belonging to the lower QOL class. For NFKB2, carriers with two doses of the rare G allele were more likely to belong to the lower QOL class. CONCLUSIONS: Unique genetic markers in cytokine genes may partially explain interindividual variability in QOL. IMPLICATIONS FOR NURSING: Determination of high-risk characteristics and unique genetic markers would allow for earlier identification of patients with cancer and FCs at higher risk for poorer QOL. Knowledge of these risk factors could assist in the development of more targeted clinical or supportive care interventions for those identified.
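The final step described here (logistic regression relating genotypic and phenotypic characteristics to GMM-derived QOL class membership) can be sketched briefly in Python. The package choice (statsmodels), the data file and every column name below are illustrative assumptions, not the study's actual variables or code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis dataset: one row per participant, with GMM-derived
# QOL class membership (1 = lower QOL class, 0 = higher) and allele dose
# (0, 1 or 2 copies of the rare allele) for a candidate cytokine-gene SNP.
df = pd.read_csv("qol_genotypes.csv")  # hypothetical file

# Logistic regression of class membership on genotype, controlling for covariates
result = smf.logit(
    "lower_qol_class ~ allele_dose + age + functional_status", data=df
).fit()

print(np.exp(result.params))      # odds ratios
print(np.exp(result.conf_int()))  # 95% confidence intervals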
Abstract:
Social marketers and governments have often targeted hard-to-reach or vulnerable groups (Gordon et al., 2006) such as young adults and low-income earners. Past research has shown that low-income earners are often at risk of poor health outcomes and a diminished lifestyle (Hampson et al., 2009; Scott et al., 2012). Young adults (aged 18 to 35) are in a transition phase of their life where lifestyle preferences are still being formed and are thus a useful target for long-term sustainable change. An area of focus for all levels of government is the use of energy, with an aim to reduce consumption. There is little research to date that combines both of these groups, in particular in the context of household energy usage. Research into financially disadvantaged consumers is challenging the notion that low-income consumers’ purchasing and usage of products and services is based upon economic status (Sharma et al., 2012). Prior research shows higher income earners view items such as televisions and computers as necessities rather than non-essential items (Karlsson et al., 2004). Consistent with this is growing evidence that low-income earners purchase non-essential, energy intensive electronic appliances such as multiple big screen TV sets and additional refrigerators. With this in mind, there is a need for knowledge about how psychological and economic factors influence the energy consumption habits (e.g. appliances on standby power, leaving appliances turned on, running multiple devices at one time) of low-income earners. Thus, our study sought to address the research question: what are the factors that influence young adult low-income earners’ energy habits?
Abstract:
Background Multi-attribute utility instruments (MAUIs) are preference-based measures that comprise a health state classification system (HSCS) and a scoring algorithm that assigns a utility value to each health state in the HSCS. When developing a MAUI from a health-related quality of life (HRQOL) questionnaire, a HSCS must first be derived. This typically involves selecting a subset of domains and items because HRQOL questionnaires typically have too many items to be amenable to the valuation task required to develop the scoring algorithm for a MAUI. Currently, exploratory factor analysis (EFA) followed by Rasch analysis is recommended for deriving a MAUI from a HRQOL measure. Aim To determine whether confirmatory factor analysis (CFA) is more appropriate and efficient than EFA to derive a HSCS from the European Organisation for Research and Treatment of Cancer’s core HRQOL questionnaire, the Quality of Life Questionnaire (QLQ-C30), given its well-established domain structure. Methods QLQ-C30 (Version 3) data were collected from 356 patients receiving palliative radiotherapy for recurrent/metastatic cancer (various primary sites). The dimensional structure of the QLQ-C30 was tested with EFA and CFA, the latter informed by the established QLQ-C30 structure and the views of both patients and clinicians on which items are most relevant. Dimensions determined by EFA or CFA were then subjected to Rasch analysis. Results CFA results generally supported the proposed QLQ-C30 structure (comparative fit index = 0.99, Tucker–Lewis index = 0.99, root mean square error of approximation = 0.04). EFA revealed fewer factors, and some items cross-loaded on multiple factors. Further assessment of dimensionality with Rasch analysis allowed better alignment of the EFA dimensions with those detected by CFA. Conclusion CFA was more appropriate and efficient than EFA in producing clinically interpretable results for the HSCS for a proposed new cancer-specific MAUI. Our findings suggest that CFA should generally be recommended when deriving a preference-based measure from a HRQOL measure that has an established domain structure.
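A minimal sketch of the kind of CFA fit check reported above (comparative fit index, Tucker-Lewis index, RMSEA) is shown below, assuming Python with the semopy package and lavaan-style model syntax; the factor and item names are placeholders rather than the actual QLQ-C30 domain structure.

import pandas as pd
import semopy

# Hypothetical two-factor measurement model in lavaan-style syntax;
# the QLQ-C30 has more domains, and these item labels are placeholders.
model_desc = """
physical  =~ q1 + q2 + q3 + q4 + q5
emotional =~ q21 + q22 + q23 + q24
"""

data = pd.read_csv("qlq_c30_items.csv")  # hypothetical item-level responses

model = semopy.Model(model_desc)
model.fit(data)

# Fit indices comparable to those reported in the abstract
stats = semopy.calc_stats(model)
print(stats[["CFI", "TLI", "RMSEA"]])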
Abstract:
Background The Global Burden of Disease Study 2010 (GBD 2010) identified mental and substance use disorders as the 5th leading contributor to burden in 2010, measured by disability adjusted life years (DALYs). This estimate was incomplete as it excluded the burden resulting from the increased risk of suicide, which was captured elsewhere in GBD 2010's mutually exclusive list of diseases and injuries. Here, we estimate suicide DALYs attributable to mental and substance use disorders. Methods Relative-risk estimates of suicide due to mental and substance use disorders and the global prevalence of each disorder were used to estimate population attributable fractions. These were adjusted for global differences in the proportion of suicide due to mental and substance use disorders compared to other causes, then multiplied by suicide DALYs reported in GBD 2010 to estimate attributable DALYs (with 95% uncertainty intervals). Results Mental and substance use disorders were responsible for 22.5 million (14.8-29.8 million) of the 36.2 million (26.5-44.3 million) DALYs allocated to suicide in 2010. Depression was responsible for the largest proportion of suicide DALYs (46.1% (28.0%-60.8%)) and anorexia nervosa the lowest (0.2% (0.02%-0.5%)). DALYs occurred throughout the lifespan, with the largest proportion found in Eastern Europe and Asia, and in males aged 20-30 years. The inclusion of attributable suicide DALYs would have increased the overall burden of mental and substance use disorders (assigned to them in GBD 2010 as a direct cause) from 7.4% (6.2%-8.6%) to 8.3% (7.1%-9.6%) of global DALYs, and would have changed the global ranking from 5th to 3rd leading cause of burden. Conclusions Capturing the suicide burden attributable to mental and substance use disorders allows for more accurate estimates of burden. More consideration needs to be given to interventions targeted at populations with, or at risk for, mental and substance use disorders as an effective strategy for suicide prevention.
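For context, the population attributable fraction used in this kind of calculation is conventionally derived from disorder prevalence P and the relative risk RR of suicide; the study's further adjustment for the proportion of suicides due to mental and substance use disorders is not specified in the abstract, so only the generic form is shown:

PAF = \frac{P\,(RR - 1)}{P\,(RR - 1) + 1}, \qquad \text{attributable suicide DALYs} = PAF_{\text{adjusted}} \times \text{suicide DALYs}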
Abstract:
Head and neck cancers are some of the leading cancers in the coloured and black South African male population and the perception exists that the incidence rates are rising. Aims: To determine the standardised morbidity rates and some of the risk factors for oral cancer in South Africa. Methods: Using histologically verified data from the National Cancer Registry, the age standardised incidence rates (ASIR) and life-time risks (LR) of oral cancer in South Africa were calculated for 1988-1991. In an ongoing case control study (1995 onwards) among black patients in Johannesburg/Soweto, adjusted odds ratios for developing oral cancers in relation to tobacco and alcohol consumption were calculated. Results: Coloured males vs. females: ASIR 13.13 vs. 3.5 per 100,000 per year, LR 1:65 vs. 1:244. Black males vs. females: ASIR 9.06 vs. 1.75, LR 1:86 vs. 1:455. White males vs. females: ASIR 8.06 vs. 3.18, LR 1:104 vs. 1:278. Asian males vs. females: ASIR 5.24 vs. 6.66, LR 1:161 vs. 1:125. In black males, the odds ratio for oral cancer in relation to smoking was 7.0 (95% CI 3.0-14.6) and in relation to daily alcohol consumption 1.3 (95% CI 0.6-2.8). In black females, the odds ratios were 3.9 (95% CI 1.7-8.9) for smoking and 1.7 (95% CI 0.7-4.1) for daily alcohol consumption. Conclusions: The risk factors for oral cancer in South Africa are multiple, and gender discrepancies in ASIR and LR signal differences in exposure to carcinogens. It is unclear whether the incidence of oral cancers will rise in the future.
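As background to these figures, a directly age-standardised incidence rate weights the observed age-specific rates by a standard population, and a lifetime risk of "1 in N" is the reciprocal of the cumulative risk; the standard population and risk period used in the study are not stated in the abstract, so this is the generic form:

ASIR = \sum_i w_i\, r_i, \qquad N \approx \frac{1}{\text{cumulative risk}}

where r_i is the incidence rate in age group i and w_i is that age group's share of the standard population.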
Abstract:
INTRODUCTION: The first South African National Burden of Disease study quantified the underlying causes of premature mortality and morbidity experienced in South Africa in the year 2000. This was followed by a Comparative Risk Assessment to estimate the contributions of 17 selected risk factors to the burden of disease in South Africa. This paper describes the health impact of exposure to four selected environmental risk factors: unsafe water, sanitation and hygiene; indoor air pollution from household use of solid fuels; urban outdoor air pollution; and lead exposure. METHODS: The study followed World Health Organization comparative risk assessment methodology. Population-attributable fractions were calculated and applied to revised burden of disease estimates (deaths and disability adjusted life years [DALYs]) from the South African Burden of Disease study to obtain the attributable burden for each selected risk factor. The burden attributable to the joint effect of the four environmental risk factors was also estimated, taking into account competing risks and common pathways. Monte Carlo simulation-modeling techniques were used to quantify sampling uncertainty. RESULTS: Almost 24,000 deaths were attributable to the joint effect of these four environmental risk factors, accounting for 4.6% (95% uncertainty interval 3.8-5.3%) of all deaths in South Africa in 2000. Overall, the burden due to these environmental risks was equivalent to 3.7% (95% uncertainty interval 3.4-4.0%) of the total disease burden for South Africa, with unsafe water, sanitation and hygiene the main contributor to the joint burden. The joint attributable burden was especially high in children under 5 years of age, accounting for 10.8% of total deaths in this age group and 9.7% of the burden of disease. CONCLUSION: This study highlights the public health impact of exposure to environmental risks and the significant burden of preventable disease attributable to exposure to these four major environmental risk factors in South Africa. Evidence-based policies and programs must be developed and implemented to address these risk factors at individual, household, and community levels.
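The joint attributable burden described above is typically obtained by combining factor-specific population attributable fractions multiplicatively; the simple independence form is shown below, noting that the study additionally adjusted for competing risks and common pathways, which this generic expression does not capture:

PAF_{\text{joint}} = 1 - \prod_{i=1}^{4} \left(1 - PAF_i\right)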
Abstract:
Background Chronic leg ulcers, which remain unhealed after 4–6 weeks, affect 1-3% of the population, and their treatment is costly and health service resource intensive. Venous disease contributes to approximately 70% of all chronic leg ulcers, and these ulcers are often associated with pain, reduced mobility and a decreased quality of life. Despite evidence-based care, 30% of these ulcers are unlikely to heal within a 24-week period; recognition and identification of risk factors for delayed healing of venous leg ulcers would therefore be beneficial. Aim To review the available evidence on risk factors for delayed healing of venous leg ulcers. Methods A review of the literature on risk factors for delayed healing in venous leg ulcers, covering January 2000 to December 2013, was conducted. Evidence was sourced through searches of relevant databases and websites for resources addressing risk factors for delayed healing in venous leg ulcers specifically. Results Twenty-seven studies, of mostly low-level evidence (Level III and IV), identified risk factors associated with delayed healing. Risk factors that were consistently identified included: larger ulcer area, longer ulcer duration, a previous history of ulceration, venous abnormalities and lack of high compression. Additional potential predictors with inconsistent or varying evidence to support their influence on delayed healing of venous leg ulcers included decreased mobility and/or ankle range of movement, poor nutrition and increased age. Discussion Findings from this review indicate that a number of physiological risk factors are associated with delayed healing in venous leg ulcers and that social and/or psychological risk factors should also be considered and examined further. Conclusion The findings from this review can assist health professionals to identify prognostic indicators or risk factors significantly associated with delayed healing in venous leg ulcers. This will facilitate realistic outcome planning and inform implementation of appropriate early strategies to promote healing.
Abstract:
Background The high recurrence rate of chronic venous leg ulcers has a significant impact on an individual’s quality of life and healthcare costs. Objectives This study aimed to identify risk and protective factors for recurrence of venous leg ulcers using a theoretical approach, applying a framework of self and family management of chronic conditions to underpin the study. Design Secondary analysis of combined data collected from three previous prospective longitudinal studies. Setting The contributing studies’ participants were recruited from two metropolitan hospital outpatient wound clinics and three community-based wound clinics. Participants Data were available on a sample of 250 adults, with a leg ulcer of primarily venous aetiology, who were followed after ulcer healing for a median follow-up time of 17 months (range: 3 to 36 months). Methods Data from the three studies were combined. The original participant data were collected through medical records and self-reported questionnaires upon healing and every 3 months thereafter. A Cox proportional-hazards regression analysis was undertaken to determine the influential factors on leg ulcer recurrence based on the proposed conceptual framework. Results The median time to recurrence was 42 weeks (95% CI 31.9–52.0), with a recurrence incidence of 22% (54 of 250 participants) within three months of healing, 39% (91 of 235 participants) for those who were followed for six months, 57% (111 of 193) by 12 months, 73% (53 of 72) by two years and 78% (41 of 52) of those who were followed up for three years. A Cox proportional-hazards regression model revealed that the risk factors for recurrence included a history of deep vein thrombosis (HR 1.7, 95% CI 1.07–2.67, p=0.024), a history of multiple previous leg ulcers (HR 4.4, 95% CI 1.84–10.5, p=0.001), and longer duration (in weeks) of the previous ulcer (HR 1.01, 95% CI 1.003–1.01, p<0.001); the protective factors were elevating the legs for at least 30 minutes per day (HR 0.33, 95% CI 0.19–0.56, p<0.001), higher levels of self-efficacy (HR 0.95, 95% CI 0.92–0.99, p=0.016), and walking around for at least three hours per day (HR 0.66, 95% CI 0.44–0.98, p=0.040). Conclusions Results from this study provide a comprehensive examination of risk and protective factors associated with leg ulcer recurrence based on the chronic disease self and family management framework. These results in turn provide essential steps towards developing and testing interventions to promote optimal prevention strategies for venous leg ulcer recurrence.
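For readers wanting to reproduce this style of analysis, a minimal Cox proportional-hazards sketch in Python using the lifelines package is given below; the data file and every column name are illustrative assumptions, not the variables used in the study.

import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical dataset: one row per healed participant, followed from healing
# until recurrence or censoring. Column names are illustrative only.
df = pd.read_csv("ulcer_followup.csv")  # hypothetical file

cph = CoxPHFitter()
cph.fit(
    df[["weeks_to_recurrence", "recurred", "dvt_history", "previous_ulcers",
        "ulcer_duration_weeks", "leg_elevation", "self_efficacy", "walking_hours"]],
    duration_col="weeks_to_recurrence",  # time from healing to recurrence or censoring
    event_col="recurred",                # 1 = recurrence observed, 0 = censored
)
cph.print_summary()  # hazard ratios with 95% CIs, as reported in the abstract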
Abstract:
Driver cognitions about the aggressive driving of others are potentially important to the development of evidence-based interventions. Previous research has suggested that perceptions that other drivers are intentionally aggressive may influence recipient driver anger and subsequent aggressive responses. Accordingly, recent research on aggressive driving has attempted to distinguish between intentional and unintentional motives in relation to problem driving behaviours. This study assessed driver cognitive responses to common, potentially provocative hypothetical driving scenarios to explore the role of attributions in driver aggression. A convenience sample of 315 general drivers aged 16–64 years (M = 34) completed a survey measuring trait aggression (Aggression Questionnaire, AQ), driving anger (Driving Anger Scale, DAS), and a proxy measure of aggressive driving behaviour (Australian Propensity for Angry Driving, AusPADS). Purpose-designed items asked for drivers’ ‘most likely’ thought in response to AusPADS scenarios. Response options were equivalent to causal attributions about the other driver. Patterns in endorsements of attribution responses to the scenarios suggested that drivers tended to adopt a particular perception of the driving of others regardless of the depicted circumstances: a driving attributional style. No gender or age differences were found for attributional style. Significant differences were detected between attributional styles for driving anger and endorsement of aggressive responses to driving situations. Drivers who attributed the on-road event to the other being an incompetent or dangerous driver had significantly higher driving anger scores and endorsed significantly more aggressive driving responses than those drivers who attributed other drivers’ behaviour to mistakes. In contrast, drivers who gave others the ‘benefit of the doubt’ endorsed significantly less aggressive driving responses than either of these other two groups, suggesting that this style is protective.