719 results for Health Outcomes
Abstract:
Objective - this study examined the clinical utility and precision of routine screening for alcohol and other drug use among women attending a public antenatal service. Study design - a survey of clients and an audit of clinical charts. Participants and setting - clients attending an antenatal clinic of a large tertiary hospital in Queensland, Australia, from October to December 2009. Measurements and findings - data were collected from two sources. First, 32 women who reported use of alcohol or other drugs during pregnancy at initial screening were asked to complete a full substance use survey. Second, data were collected from the charts of 349 new clients who attended the antenatal clinic during the study period. Both sensitivity (86%, 67%) and positive predictive value (100%, 92%) for alcohol and other drug use, respectively, were high. Only 15% of surveyed women were uncomfortable about being screened for substance use in pregnancy, yet the chart audit revealed poor staff compliance: during the study period, 25% of clients were either inadequately screened or not screened at all. Key conclusions and implications for practice - despite recommended universal screening in pregnancy and the apparent acceptance by our participants, alcohol and other drug (A&OD) screening in the antenatal setting remains problematic. Investigation into the reasons behind, and ways to overcome, the low screening rate could improve health outcomes for mothers and children in this at-risk group. Targeted education and training for midwives may form part of the solution, as these clinicians have a key role in implementing prevention and early intervention strategies.
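The sensitivity and positive predictive value figures above are standard quantities from a 2×2 screening table. A minimal sketch of how they are computed (the counts below are hypothetical illustrations, not the study's data):

```python
def screening_metrics(tp, fp, fn):
    """Sensitivity and positive predictive value from 2x2 screening counts."""
    sensitivity = tp / (tp + fn)  # share of true users the screen detects
    ppv = tp / (tp + fp)          # share of positive screens that are true users
    return sensitivity, ppv

# Hypothetical counts: 6 true positives, 0 false positives, 1 false negative
sens, ppv = screening_metrics(6, 0, 1)  # sensitivity ~0.86, PPV = 1.0
```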
Abstract:
Conventional training methods for nurses involve many physical factors that limit potential class sizes. Alternative training methods with lower physical requirements may support larger class sizes but, given the tactile nature of nurse training, are most appropriately applied to supplement conventional methods. However, where physical factors are peripheral, such alternative training methods can provide an important way to raise upper class-size limits and therefore the rate at which trained nurses enter the important role of critical care. A major issue in ICU training is that trainees can be released into real-life intensive care scenarios with suboptimal preparation, creating anxiety for the students concerned and some risk for management-level nurses, as patient safety is paramount. This lack of preparation places a strain on the allocation of human and non-human resources to teaching, as students require greater levels of supervision. Such issues concern ICU management, as they relate to nursing skill development and patient health outcomes: nursing training is potentially dangerous for patients who are placed in the care of inexperienced staff. As a solution to this problem, we present a prototype ICU handover training environment developed in a socially interactive virtual world. Nurses in training can connect remotely via the Internet to this environment and engage in collaborative ICU handover training classes.
Abstract:
Background and significance: Older adults with chronic diseases are at increasing risk of hospital admission and readmission. Approximately 75% of adults have at least one chronic condition, and the odds of developing a chronic condition increase with age. Chronic diseases consume about 70% of total Australian health expenditure, and about 59% of hospital events for chronic conditions are potentially preventable. These figures have brought to light the importance of managing chronic disease among the growing older population. Many studies have endeavoured to develop effective chronic disease management programs by applying social cognitive theory. However, few studies have focused on chronic disease self-management in older adults at high risk of hospital readmission. Moreover, although the majority of studies have covered wide and valuable outcome measures, there is scant evidence examining fundamental health outcomes such as nutritional status, functional status and health-related quality of life. Aim: The aim of this research was to test social cognitive theory in relation to self-efficacy in managing chronic disease and three health outcomes, namely nutritional status, functional status, and health-related quality of life, in older adults at high risk of hospital readmission. Methods: A cross-sectional study design was employed for this research. Three studies were undertaken. Study One examined nutritional status and the validation of a nutritional screening tool; Study Two explored the relationships between participants' characteristics, self-efficacy beliefs, and health outcomes based on the study's hypothesized model; Study Three tested a theoretical model based on social cognitive theory, examining potential mechanisms of the mediation effects of social support and self-efficacy beliefs.
One hundred and fifty-seven patients aged 65 years and older with a medical admission and at least one risk factor for readmission were recruited. Data were collected from medical records on demographics and medical history, and from self-report questionnaires. The nutrition data were collected by two registered nurses. For Study One, a contingency table and the kappa statistic were used to determine the validity of the Malnutrition Screening Tool. In Study Two, standard multiple regression, hierarchical multiple regression and logistic regression were undertaken to determine the significant predictors for the three health outcome measures. For Study Three, a structural equation modelling approach was taken to test the hypothesized self-efficacy model. Results: The findings of Study One suggested that a high prevalence of malnutrition continues to be a concern in older adults, as the prevalence of malnutrition was 20.6% according to the Subjective Global Assessment. Additionally, the findings confirmed that the Malnutrition Screening Tool is a valid nutritional screening tool for hospitalized older adults at risk of readmission when compared to the Subjective Global Assessment, with high sensitivity (94%) and specificity (89%) and substantial agreement between the two methods (k = .74, p < .001; 95% CI .62-.86). Analysis of the Study Two data found that depressive symptoms and perceived social support were the two strongest predictors of self-efficacy in managing chronic disease in a hierarchical multiple regression. Results of multivariable regression models suggested that advancing age, depressive symptoms and less tangible support were three important predictors of malnutrition. In terms of functional status, a standard regression model found that social support was the strongest predictor of the Instrumental Activities of Daily Living, followed by self-efficacy in managing chronic disease.
The results of standard multiple regression revealed that the number of hospital readmission risk factors adversely affected the physical component score, while depressive symptoms and self-efficacy beliefs were two significant predictors of the mental component score. In Study Three, the results of the structural equation modelling showed that self-efficacy partially mediated the effect of health characteristics and depression on health-related quality of life. Health characteristics had strong direct effects on functional status and body mass index. The results also indicated that social support partially mediated the relationship between health characteristics and functional status. With regard to the joint effects of social support and self-efficacy, social support fully mediated the effect of health characteristics on self-efficacy, and self-efficacy partially mediated the effect of social support on functional status and health-related quality of life. The results also demonstrated that the models fitted the data well, with relatively high variance explained, implying that the hypothesized constructs were highly relevant; hence the application of social cognitive theory in this context was supported. Conclusion: This thesis highlights the applicability of social cognitive theory to chronic disease self-management in older adults at risk of hospital readmission. Further studies are recommended to validate and extend the development of social cognitive theory on chronic disease self-management in older adults to improve their nutritional status, functional status, and health-related quality of life.
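The agreement statistic reported in this thesis's methods (k = .74) is Cohen's kappa, which corrects observed agreement between two raters or methods for agreement expected by chance. A minimal sketch for two binary ratings (the example data are illustrative, not the study's):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary raters (equal-length lists of 0/1)."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n      # observed agreement
    pe = (sum(a) / n) * (sum(b) / n) \
         + (1 - sum(a) / n) * (1 - sum(b) / n)      # chance agreement
    return (po - pe) / (1 - pe)

# Illustrative ratings: 3 of 4 agreements, chance agreement 0.5 -> kappa 0.5
kappa = cohens_kappa([1, 1, 0, 0], [1, 0, 0, 0])
```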
Abstract:
Background: Physical activity is a key modifiable behavior affecting a number of important health outcomes. The path to developing chronic diseases commonly commences with lifestyle patterns developed during childhood and adolescence. This study examined whether parent physical activity and other factors correlated with physical activity amongst children are associated with self-reported physical activity in adolescents. Methods: A total of 115 adolescents (aged 12-14) and their parents completed questionnaire assessments. Self-reported physical activity was measured amongst adolescents and their parents using the International Physical Activity Questionnaire for Adolescents (IPAQ-A) and the International Physical Activity Questionnaire (IPAQ) respectively. Adolescents also completed the Children's Physical Activity Correlates (CPAC), which measured factors that have previously demonstrated an association with physical activity amongst children. To examine whether parent physical activity or items from the CPAC were associated with self-reported adolescent physical activity, backward step-wise regression was undertaken. One item was removed at each step in descending order of significance (until a two-tailed item alpha of 0.05 was achieved). Results: A total of 93 (80.9%) adolescents and their parents had complete data sets and were included in the analysis. Independent variables were removed in the order: perceptions of parental role modeling; importance of exercise; perceptions of parental encouragement; peer acceptance; fun of physical exertion; perceived competence; parent physical activity; self-esteem; liking of exercise; and parental influence. The only variable remaining in the model was 'liking of games and sport' (p=0.003, adjusted r-squared=0.085). Discussion: These findings indicate that the factors associated with self-reported physical activity in adolescents are not necessarily the same as those for younger children (aged 8-11).
While 'liking of games and sport' was included in the final model, the r-squared value did not indicate a strong association. Interestingly, parent self-reported physical activity was not included in the final model. Adolescent physical activity is likely influenced by a variety of direct and indirect forms of socialization. These findings support the view that intrinsically motivated themes, such as the liking of games and sport, take precedence over outside influences, like those presented by parents, in determining youth physical activity behaviors. These findings do not suggest that parents have no influence on adolescent physical activity patterns, but rather that the influence is likely to be more complex than the physical activity behavior modeling perceived by the adolescent. Further research in this field is warranted in order to better understand potential contributors to successful physical activity promotion interventions amongst young adolescents.
Abstract:
Background Cohort studies can provide valuable evidence of cause-and-effect relationships but are subject to loss of participants over time, limiting the validity of findings. Computerised record linkage offers a passive and ongoing method of obtaining health outcomes from existing, routinely collected data sources. However, the quality of record linkage relies on the availability and accuracy of common identifying variables. We sought to develop and validate a method for linking a cohort study to a state-wide hospital admissions dataset with limited availability of unique identifying variables. Methods A sample of 2000 participants from a cohort study (n = 41 514) was linked to a state-wide hospitalisations dataset in Victoria, Australia using the national health insurance (Medicare) number and demographic data as identifying variables. Availability of the health insurance number was limited in both datasets; therefore, linkage was undertaken both with and without this number, and agreement was tested between the two algorithms. Sensitivity was calculated for a sub-sample of 101 participants with a hospital admission confirmed by medical record review. Results Of the 2000 study participants, 85% were found to have a record in the hospitalisations dataset when the national health insurance number and sex were used as linkage variables, and 92% when demographic details only were used. When agreement between the two methods was tested, the disagreement fraction was 9%, mainly due to "false positive" links when demographic details only were used. A final algorithm that used multiple combinations of identifying variables resulted in a match proportion of 87%. Sensitivity of this final linkage was 95%. Conclusions High-quality record linkage of cohort data with a hospitalisations dataset that has limited identifiers can be achieved using combinations of a national health insurance number and demographic data as identifying variables.
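The linkage described above matches first on the Medicare number plus sex and falls back to demographic details when the number is unavailable. A minimal sketch of that fallback logic (the field names and index structures below are illustrative assumptions, not the study's actual algorithm, which used multiple identifier combinations):

```python
def link_record(person, by_medicare, by_demographics):
    """Attempt to link a cohort record to a hospital record.

    by_medicare maps (medicare_no, sex) -> hospital record id;
    by_demographics maps (surname, dob, sex) -> hospital record id.
    Returns None when no link is found.
    """
    medicare_key = (person.get("medicare"), person.get("sex"))
    if medicare_key[0] is not None and medicare_key in by_medicare:
        return by_medicare[medicare_key]
    # Fallback: demographic details only (risks "false positive" links)
    demo_key = (person.get("surname"), person.get("dob"), person.get("sex"))
    return by_demographics.get(demo_key)

# Fallback example: no Medicare number recorded, so demographics are used
hit = link_record(
    {"medicare": None, "surname": "Smith", "dob": "1950-01-01", "sex": "F"},
    by_medicare={},
    by_demographics={("Smith", "1950-01-01", "F"): "H123"},
)
```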
Abstract:
Background In Australia and other developed countries, there are consistent and marked socioeconomic inequalities in health. Diet is a major contributing factor to the poorer health of lower socioeconomic groups: the dietary patterns of disadvantaged groups are less consistent with dietary recommendations for the prevention of diet-related chronic diseases than those of their more advantaged counterparts. Part of the reason that lower socioeconomic groups have poorer diets may be their consumption of takeaway foods. These foods typically have nutrient contents that fail to comply with the dietary recommendations for the prevention of chronic disease and associated risk factors. A high level of takeaway food consumption, therefore, may negatively influence overall dietary intake and, consequently, lead to adverse health outcomes. Despite this, little attention has focused on the association between socioeconomic position (SEP) and takeaway food consumption, with the limited number of studies showing mixed results. Additionally, studies have been limited by considering only a narrow range of takeaway foods and by not examining how different socioeconomic groups make choices that are more (or less) consistent with dietary recommendations. While a large number of earlier studies have consistently reported that socioeconomically disadvantaged groups consume less fruit and vegetables, there is limited knowledge about the role of takeaway food in socioeconomic variations in fruit and vegetable intake. Furthermore, no known studies have investigated why there are socioeconomic differences in takeaway food consumption.
The aims of this study were to: examine takeaway food consumption and the types of takeaway food consumed (healthy and less healthy) by different socioeconomic groups; determine whether takeaway food consumption patterns explain socioeconomic variations in fruit and vegetable intake; and investigate the role of a range of psychosocial factors in explaining the association between SEP and takeaway food consumption and the choice of takeaway food. Methods This study used two cross-sectional population-based datasets: 1) the 1995 Australian National Nutrition Survey (NNS), which was conducted among a nationally representative sample of adults aged between 25 and 64 years (N = 7319, 61% response rate); and 2) the Food and Lifestyle Survey (FLS), which was conducted by the candidate among randomly selected adults aged between 25 and 64 years residing in Brisbane, Australia in 2009 (N = 903, 64% response rate). The FLS extended the NNS in several ways: by describing current socioeconomic differences in takeaway food consumption patterns, formally assessing the mediating effect of takeaway food consumption on socioeconomic inequalities in fruit and vegetable intake, and investigating whether (and which) psychosocial factors contributed to the observed socioeconomic variations in takeaway food consumption patterns. Results Approximately 32% of the NNS participants consumed takeaway food in the previous 24 hours, and 38% of the FLS participants reported consuming takeaway food once a week or more. The results from analyses of the NNS and the FLS were somewhat mixed; however, disadvantaged groups were likely to consume a high level of 'less healthy' takeaway food compared with their more advantaged counterparts. The lower fruit and vegetable intake among lower socioeconomic groups was partly mediated by their high consumption of 'less healthy' takeaway food.
Lower socioeconomic groups were more likely to have negative meal preparation behaviours and attitudes, and weaker health and nutrition-related beliefs and knowledge. Socioeconomic differences in takeaway food consumption were partly explained by meal preparation behaviours and attitudes, and these factors along with health and nutrition-related beliefs and knowledge appeared to contribute to the socioeconomic variations in choice of takeaway foods. Conclusion This thesis enhances our understanding of socioeconomic differences in dietary behaviours and the potential pathways by describing takeaway food consumption patterns by SEP, explaining the role of takeaway food consumption in socioeconomic inequalities in fruit and vegetable intake, and identifying the potential impact of psychosocial factors on socioeconomic differences in takeaway food consumption and the choice of takeaway food. Some important evidence is also provided for developing policies and effective intervention programs to improve the diet quality of the population, especially among lower socioeconomic groups. This thesis concludes with a discussion of a number of recommendations about future research and strategies to improve the dietary intake of the whole population, and especially among disadvantaged groups.
Abstract:
The health effects of environmental hazards are often examined using time series of the association between a daily response variable (e.g., death) and a daily level of exposure (e.g., temperature). Exposures are usually averaged across a network of stations. This gives each station equal importance and ignores the possibility that some stations are better measures of exposure than others. We used a Bayesian hierarchical model that weighted stations using random variables between zero and one. We compared the weighted estimates to the standard model using data on health outcomes (deaths and hospital admissions) and exposures (air pollution and temperature) in Brisbane, Australia. The improvements in model fit were relatively small, and the estimated health effects of pollution were similar using either the standard or weighted estimates. Spatially weighted exposures would probably be more worthwhile when there is either greater spatial detail in the health outcome or greater spatial variation in exposure.
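The station weighting described above can be illustrated by the weighted-average step alone. In the full model the weights are random variables estimated within a Bayesian hierarchy; here they are fixed inputs, purely for illustration:

```python
def weighted_exposure(readings, weights):
    """Daily exposure as a weighted average over monitoring stations.

    Weights lie in [0, 1] and are normalised to sum to one;
    equal weights recover the standard network average.
    """
    total = sum(weights)
    return sum(r * w for r, w in zip(readings, weights)) / total

daily = [10.0, 12.0, 14.0]                          # one day's readings, three stations
standard = weighted_exposure(daily, [1, 1, 1])      # the plain network mean
skewed = weighted_exposure(daily, [0.8, 0.1, 0.1])  # leans on the first station
```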
Abstract:
Exposures to traffic-related air pollution (TRAP) can be particularly high in transport microenvironments (i.e. in and around vehicles) despite the short durations typically spent there. There is a mounting body of evidence suggesting that this is especially true for fine (<2.5 μm) and ultrafine (<100 nm, UF) particles. Professional drivers, who spend extended periods of time in transport microenvironments due to their job, may incur exposures markedly higher than already elevated non-occupational exposures. Numerous epidemiological studies have shown a raised incidence of adverse health outcomes among professional drivers, and exposure to TRAP has been suggested as one of the possible causal factors. Despite this, data describing the range and determinants of occupational exposures to fine and UF particles are largely conspicuous in their absence. Such information could strengthen attempts to define the aetiology of professional drivers' illnesses as it relates to traffic combustion-derived particles. In this article, we suggest that drivers' occupational fine and UF particle exposures are an exemplar case where opportunities exist to better link exposure science and epidemiology in addressing questions of causality. The nature of the hazard is first introduced, followed by an overview of the health effects attributable to exposures typical of transport microenvironments. Basic determinants of exposure and reduction strategies are also described, and finally the state of knowledge is briefly summarised along with an outline of the main unanswered questions in the topic area.
Abstract:
Quantifying spatial and/or temporal trends in environmental modelling data requires that measurements be taken at multiple sites. The number of sites and the duration of measurement at each site must be balanced against the cost of equipment and the availability of trained staff. The split panel design comprises short measurement campaigns at multiple locations and continuous monitoring at reference sites [2]. Here we present a modelling approach for a spatio-temporal model of ultrafine particle number concentration (PNC) recorded according to a split panel design. The model describes the temporal trends and background levels at each site. The data were measured as part of the "Ultrafine Particles from Transport Emissions and Child Health" (UPTECH) project, which aims to link air quality measurements, child health outcomes and a questionnaire on each child's history and demographics. The UPTECH project involves measuring aerosols, particle counts and local meteorology at each of 25 primary schools for two weeks and at three long-term monitoring stations, as well as health outcomes for a cohort of students at each school [3].
Abstract:
Airborne particulate matter pollution is of concern for a number of reasons and has been widely recognised as an important risk factor for human health. A number of toxicological and epidemiological studies have reported negative health effects on both the respiratory and cardiovascular systems. Despite the availability of a large body of research, the underlying toxicological mechanisms by which particles induce adverse health effects are not yet entirely understood. The production of reactive oxygen species (ROS) has been shown to induce oxidative stress, which has been proposed as a mechanism for many of the adverse health outcomes associated with exposure to particulate matter (PM). It is therefore crucial to introduce a technique that allows rapid and routine screening of the oxidative potential of PM.
Abstract:
Recommendations to improve national diabetes-related foot disease (DRFD) care:
• National data collection on the incidence and outcomes of DRFD.
• Improved access to care, through the Medicare Benefits Schedule, for people with diabetes who have a current or past foot complication.
• A standardised national model for interdisciplinary DRFD care.
• National accreditation of interdisciplinary foot clinics and staff.
• Subsidies for evidence-based treatments for DRFD, including medical-grade footwear and pressure off-loading devices.
• Holistic diabetes care initiatives to "close the gap" on inequities in health outcomes for Aboriginal and Torres Strait Islander peoples.
Abstract:
Background Total hip arthroplasty (THA) is a commonly performed procedure, and numbers are increasing with ageing populations. One of the most serious complications of THA is surgical site infection (SSI), caused by pathogens entering the wound during the procedure. SSIs are associated with a substantial burden for health services, increased mortality and reduced functional outcomes in patients. Numerous approaches to preventing these infections exist, but there is no gold standard in practice and the cost-effectiveness of alternative strategies is largely unknown. Objectives The aim of this project was to evaluate the cost-effectiveness of strategies claiming to reduce deep surgical site infections following total hip arthroplasty in Australia. The objectives were:
1. Identification of competing strategies, or combinations of strategies, that are clinically relevant to the control of SSI related to hip arthroplasty
2. Evidence synthesis and pooling of results to assess the volume and quality of evidence claiming to reduce the risk of SSI following total hip arthroplasty
3. Construction of an economic decision model incorporating cost and health outcomes for each of the identified strategies
4. Quantification of the effect of uncertainty in the model
5. Assessment of the value of perfect information among model parameters to inform future data collection
Methods The literature relating to SSI in THA was reviewed, in particular to establish definitions of these concepts; to understand mechanisms of aetiology and microbiology, risk factors, diagnosis and consequences; and to give an overview of existing infection prevention measures. Published economic evaluations on this topic were also reviewed and their limitations for Australian decision-makers identified. A Markov state-transition model was developed for the Australian context and subsequently validated by clinicians.
The model was designed to capture key events related to deep SSI occurring within the first 12 months following primary THA. Relevant infection prevention measures were selected by reviewing clinical guideline recommendations combined with expert elicitation. The strategies selected for evaluation were the routine use of pre-operative antibiotic prophylaxis (AP) versus no antibiotic prophylaxis (No AP), or AP in combination with antibiotic-impregnated cement (AP & ABC) or laminar air operating rooms (AP & LOR). The best available evidence for clinical effect size and utility parameters was harvested from the medical literature using reproducible methods. Queensland hospital data were extracted to inform patients' transitions between model health states and the related costs captured in assigned treatment codes. Costs related to infection prevention were derived from reliable hospital records and expert opinion. Uncertainty of model input parameters was explored in probabilistic sensitivity analyses and scenario analyses, and the value of perfect information was estimated. Results The cost-effectiveness analysis was performed from a health services perspective using a hypothetical cohort of 30,000 THA patients aged 65 years. The baseline rate of deep SSI was 0.96% within one year of a primary THA. The routine use of antibiotic prophylaxis (AP) was highly cost-effective and resulted in cost savings of over $1.6m whilst generating an extra 163 QALYs (without consideration of uncertainty). Deterministic and probabilistic analyses (considering uncertainty) identified antibiotic prophylaxis combined with antibiotic-impregnated cement (AP & ABC) as the most cost-effective strategy. Using AP & ABC generated the highest net monetary benefit (NMB), an incremental $3.1m NMB compared to using antibiotic prophylaxis alone. The probability that this strategy did not have the largest NMB was very low (<5%).
Not using antibiotic prophylaxis (No AP) or using antibiotic prophylaxis combined with laminar air operating rooms (AP & LOR) resulted in worse health outcomes and higher costs. Sensitivity analyses showed that the model was sensitive to the initial cohort starting age and the additional costs of ABC, but the best strategy did not change, even for extreme values. The cost-effectiveness improved for a higher proportion of cemented primary THAs and higher baseline rates of deep SSI. The value of perfect information indicated that no additional research is required to support the model conclusions. Conclusions Preventing deep SSI with antibiotic prophylaxis and antibiotic-impregnated cement has been shown to improve health outcomes among hospitalised patients, save lives and enhance resource allocation. By implementing a more beneficial infection control strategy, scarce health care resources can be used more efficiently to the benefit of all members of society. The results of this project provide Australian policy makers with key information about how to efficiently manage risks of infection in THA.
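The strategy ranking above rests on net monetary benefit, which values QALYs at a willingness-to-pay threshold and subtracts cost. A minimal sketch (the $50,000/QALY threshold and the strategy numbers below are illustrative assumptions, not the study's figures):

```python
def net_monetary_benefit(qalys, cost, wtp=50_000):
    """NMB = QALYs valued at willingness-to-pay, minus cost."""
    return qalys * wtp - cost

# Two hypothetical strategies, measured against a common baseline:
nmb_a = net_monetary_benefit(qalys=2.0, cost=30_000)  # 70,000
nmb_b = net_monetary_benefit(qalys=1.5, cost=10_000)  # 65,000
# Strategy A is preferred: it yields the higher NMB at this threshold.
```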
Abstract:
Background: Greater research utilisation in cancer nursing practice is needed in order to provide well-informed and effective nursing care to people affected by cancer. This paper reports on the implementation of evidence-based practice in a tertiary cancer centre. Methods: Using a case report design, this paper describes the use of the Collaborative Model for Evidence Based Practice (CMEBP) in an Australian tertiary cancer centre. The clinical case is the uptake of routine application of chlorhexidine-impregnated sponge dressings for preventing centrally inserted catheter-related bloodstream infections. In this case report, a number of processes that resulted in a service-wide practice change are described. Results: The model was considered a feasible method for successful research utilisation. Chlorhexidine-impregnated sponge dressings were proposed and implemented in the tertiary cancer centre with the aim of reducing the incidence of centrally inserted catheter-related bloodstream infections and potentially improving patient health outcomes. Conclusion: The CMEBP is feasible and effective for implementing clinical evidence into cancer nursing practice. Cancer nurses and health administrators need to ensure that a supportive infrastructure and environment for clinical inquiry and research utilisation exist, in order to enable successful implementation of evidence-based practice in their cancer centres.