923 results for Evaluate Risk
Abstract:
Australian governments face the twin challenges of dealing with extreme weather-related disasters (such as floods and bushfires) and adapting to the impacts of climate change. These challenges are connected, so any response would benefit from a more integrated approach across and between the different levels of government. This report summarises the findings of an NCCARF-funded project that addresses this problem. The project undertook a three-way comparative case study of the 2009 Victorian bushfires, the 2011 Perth Hills bushfires, and the 2011 Brisbane floods. It collected data from the official inquiry reports into each of these events, and conducted new interviews and workshops with key stakeholders. The findings of this project included recommendations that range from the conceptual to the practical. First, it was argued that a reconceptualisation of terms such as ‘community’ and ‘resilience’ was necessary to allow for more tailored responses to varying circumstances. Second, it was suggested that the high level of uncertainty inherent in disaster risk management and climate change adaptation requires a more iterative approach to policymaking and planning. Third, some specific institutional reforms were proposed, including: 1) a new funding mechanism that would encourage collaboration between and across different levels of government, as well as promoting partnerships with business and the community; 2) improving community engagement through new resilience grants run by local councils; 3) embedding climate change researchers within disaster risk management agencies to promote institutional learning; and 4) creating an inter-agency network that encourages collaboration between organisations.
Abstract:
Emergency management and climate change adaptation will increasingly challenge all levels of government because of three main factors. First, Australia is extremely vulnerable to the impacts of climate change, particularly through the increasing frequency, duration and/or intensity of disasters such as floods and bushfires. Second, the system of government that divides powers by function and level can often act as a barrier to a well-integrated response. Third, policymaking processes struggle to cope with such complex inter-jurisdictional issues. This paper discusses these factors and explores the nature of the challenge for Australian governments. Investigations into the 2009 Victorian bushfires, the 2011 Perth Hills bushfires, and the 2011 Brisbane floods offer an indication of the challenges ahead, and it is argued that there is a need to: improve community engagement and communication; refocus attention on resilience; improve interagency communication and collaboration; and develop institutional arrangements that support continual improvement and policy learning. These findings offer an opportunity for improving responses as well as a starting point for integrating disaster risk management and climate change adaptation policies. The paper is based on the preliminary findings of an NCCARF-funded research project, The Right Tool for the Job: Achieving climate change adaptation outcomes through improved disaster management policies, planning and risk management strategies, involving Griffith University and RMIT. It should be noted from the outset that the purpose of this research project is not to criticise the actions of emergency service workers and volunteers, who do an incredible job under extreme circumstances, often risking their own lives in the process. The aim is simply to offer emergency management agencies the opportunity to step back and rethink their overall approach to the challenge they face in the light of the impacts of climate change.
Abstract:
Physical activity (PA) has many beneficial physical and mental health effects. Physical inactivity is considered the fourth leading risk factor for global mortality. At present there are no systematic reviews on PA patterns among South Asian adults residing in the region. The present study aims to systematically evaluate studies on PA patterns in South Asian countries. A comprehensive five-stage search of the literature was conducted in Medline, Web of Science and SciVerse Scopus using the keywords ‘Exercise’, ‘Walking’, ‘Physical activity’, ‘Inactivity’, ‘Physical Activity Questionnaire’, ‘International Physical Activity Questionnaire’, ‘IPAQ’, ‘Global Physical Activity Questionnaire’ and ‘GPAQ’, combined with individual country names. The search was restricted to English-language articles conducted in humans and published before 31st December 2012. To obtain additional data, a manual search of the reference lists of articles was performed. Data were also retrieved from searches of relevant websites and online resources. The initial search returned 1,771 hits. Eleven research articles are included in the present review (India: 8, Sri Lanka: 2, Pakistan: 1). In addition, eleven country reports (Nepal: 3, Bangladesh: 2, India: 2, Sri Lanka: 2, Bhutan: 1, Maldives: 1) of the World Health Organization STEPS survey from the South Asian countries were retrieved online. In the research articles the overall prevalence of inactivity was as follows: India (18.5%-88.4%), Pakistan (60.1%) and Sri Lanka (11.0%-31.8%). STEPS survey reports were available from all countries except Pakistan. Overall, in the majority of STEPS surveys, females were more inactive than males. Furthermore, leisure-related inactivity was >75% in studies reporting inactivity in this domain, and people were more active in the transport domain than in the other domains. In conclusion, our results show that there is wide variation in the prevalence of physical inactivity among South Asian adults, both within and between countries. Furthermore, physical inactivity in South Asian adults was associated with several socio-demographic characteristics. The majority of South Asian adults were inactive during their leisure time. These factors need to be considered when planning future interventions and research aimed at improving PA in the region.
Abstract:
The need to address on-road motorcycle safety in Australia is important due to the disproportionately high percentage of riders and pillions killed and injured each year. One approach to preventing motorcycle-related injury is through training and education. However, motorcycle rider training lacks empirical support as an effective road safety countermeasure to reduce crash involvement. Previous reviews have highlighted that risk-taking is a contributing factor in many motorcycle crashes, rather than merely a lack of vehicle-control skills (Haworth & Mulvihill, 2005; Jonah, Dawson & Bragg, 1982; Watson et al., 1996). Hence, though the basic vehicle-handling skills and knowledge of road rules taught in most traditional motorcycle licence training programs may be seen as an essential condition of safe riding, they do not appear to be sufficient in terms of crash reduction. With this in mind, there is considerable scope for improving the focus and content of rider training and education programs. This program of research examined an existing traditional pre-licence motorcycle rider training program and formatively evaluated the addition of a new classroom-based module to address risky riding: the Three Steps to Safer Riding program. The pilot program was delivered in the real-world context of the Q-Ride motorcycle licensing system in the state of Queensland, Australia. Three studies were conducted as part of the program of research: Study 1, a qualitative investigation of delivery practices and student learning needs in an existing rider training course; Study 2, an investigation of the extent to which an existing motorcycle rider training course addressed risky riding attitudes and motives; and Study 3, a formative evaluation of the new program. A literature review, together with the investigation of motorcyclists' learning needs in Study 1, informed the initial planning and development of the Three Steps to Safer Riding program. Findings from Study 1 suggested that the training delivery protocols used by the industry partner training organisation were consistent with a learner-centred approach and largely met the learning needs of trainee riders. However, it also found that, for some riders, course information needs to be reinforced by on-road experience once licensed, and that training information did not acquire full personal meaning until some riding experience had been obtained. While this research informed the planning and development of the new program, a project team of academics and industry experts was responsible for formulating the final program. Study 2 and Study 3 were conducted for the purposes of formative evaluation and program refinement. Study 2 served primarily as a trial to test research protocols and data collection methods with the industry partner organisation and, importantly, also served to gather comparison data for the pilot program, which was implemented with the same rider training organisation. Findings from Study 2 suggested that the existing training program of the partner organisation generally had a positive (albeit small) effect on safety in terms of influencing attitudes to risk taking, the propensity for thrill seeking, and intentions to engage in future risky riding. However, the maintenance of these effects over time and the effects on riding behaviour remain unclear due to a low response rate at follow-up 24 months after licensing.
Study 3 was a formative evaluation of the new pilot program to establish program effects and possible areas for improvement. Study 3a examined the short-term effects of the intervention pilot on psychosocial factors underpinning risky riding, compared to the effects of the standard traditional training program (examined in Study 2). It showed that the course that included the Three Steps to Safer Riding program elicited significantly greater positive attitude change towards road safety than the existing standard licensing course. This effect was found immediately following training, and mean scores for attitudes towards safety were also maintained at the 12-month follow-up. The pilot program also had an immediate effect on other key variables, such as risky riding intentions and the propensity for thrill seeking, although not significantly greater than that of the traditional standard training. A low response rate at the 12-month follow-up unfortunately prevented any firm conclusions from being drawn regarding the impact of the pilot program on self-reported risky riding once licensed. Study 3a further showed that the use of intermediate outcomes such as self-reported attitudes and intentions for evaluation purposes provides insights into the mechanisms underpinning risky riding that can be changed by education and training. A multifaceted process evaluation conducted in Study 3b confirmed that the intervention pilot was largely delivered as designed, with course participants also rating most aspects of training delivery highly. The complete program of research contributed to the overall body of knowledge relating to motorcycle rider training, with some potential implications for policy in the area of motorcycle rider licensing. A key finding was that psychosocial influences on risky riding can be shaped by structured education that focuses on awareness raising at a personal level and provides strategies to manage future riding situations. However, the formative evaluation was mainly designed to identify areas of improvement for the Three Steps to Safer Riding program, and it found several areas of potential refinement to improve the future efficacy of the program, including aspects of program content, program delivery, resource development, and measurement tools. The planned future follow-up of program participants' official crash and traffic offence records over time may lend further support to the application of the program within licensing systems. The findings reported in this thesis offer an initial indication that the Three Steps to Safer Riding is a useful resource to accompany skills-based training programs.
Abstract:
BACKGROUND: The prevalence of protein-energy malnutrition in older adults is reported to be as high as 60% and is associated with poor health outcomes. Inadequate feeding assistance and mealtime interruptions may contribute to malnutrition and poor nutritional intake during hospitalisation. Although such strategies are widely implemented in practice in the United Kingdom and increasingly in Australia, few studies have examined the impact of strategies such as Protected Mealtimes and dedicated feeding-assistant roles on the nutritional outcomes of elderly inpatients. AIMS: The aim of this research was to implement and compare three system-level interventions designed specifically to address mealtime barriers and improve the energy intakes of medical inpatients aged ≥65 years. The research also aimed to evaluate the sustainability of any changes to mealtime routines six months post-intervention and to gain an understanding of staff perceptions of the post-intervention mealtime experience. METHODS: Three mealtime assistance interventions were implemented in three medical wards at the Royal Brisbane and Women's Hospital: (1) AIN-only: an additional assistant-in-nursing (AIN) with a dedicated nutrition role; (2) PM-only: a multidisciplinary approach to meals, including Protected Mealtimes; and (3) PM+AIN: a combined intervention (AIN plus the multidisciplinary approach to meals). An action research approach was used to carefully design and implement the three interventions in partnership with ward staff and managers. Significant time was spent in consultation with staff throughout the implementation period to facilitate ownership of the interventions and increase the likelihood of successful implementation. A pre-post design was used to compare the implementation and nutritional outcomes of each intervention against a pre-intervention group. Using the same wards, eligible participants (medical inpatients aged ≥65 years) were recruited to the pre-intervention group between November 2007 and March 2008 and to the intervention groups between January and June 2009. The primary nutritional outcome was daily energy and protein intake, determined by visually estimating plate waste at each meal and mid-meal on Day 4 of admission. Energy and protein intakes were compared between the pre- and post-intervention groups. Data were collected on a range of covariates (demographics, nutritional status and known risk factors for poor food intake), which allowed for multivariate analysis of the impact of the interventions on nutritional intake. The provision of mealtime assistance to participants and the activities of ward staff (including mealtime interruptions) were observed in the pre-intervention and intervention groups, with staff observations repeated six months post-intervention. Focus groups were conducted with nursing and allied health staff in June 2009 to explore their attitudes and behaviours in response to the three mealtime interventions; the discussions were analysed using thematic analysis. RESULTS: A total of 254 participants were recruited to the study (pre-intervention: n=115, AIN-only: n=58, PM-only: n=39, PM+AIN: n=42). Participants had a mean age of 80 years (SD 8); 40% (n=101) were malnourished on hospital admission, 50% (n=108) had anorexia and 38% (n=97) required some assistance at mealtimes. Occasions of mealtime assistance significantly increased in all interventions (p<0.01). However, no change was seen in mealtime interruptions.
No significant difference was seen in mean total energy and protein intake between the pre-intervention and intervention groups. However, when total kilojoule intake was compared with estimated requirements at the individual level, participants in the intervention groups were more likely to achieve adequate energy intake (OR=3.4, p=0.01), with no difference noted between interventions (p=0.29). Despite small improvements in nutritional adequacy, the majority of participants in the intervention groups (76%, n=103) had energy intakes inadequate to meet their estimated energy requirements. Patients with cognitive impairment or feeding dependency appeared to gain substantial benefit from the mealtime assistance interventions. The increase in occasions of mealtime assistance by nursing staff during the intervention period was maintained six months post-intervention. Staff focus groups highlighted the importance of clearly designating and defining mealtime responsibilities in order to provide adequate mealtime care. While the purpose of the dedicated feeding assistant was to increase levels of mealtime assistance, staff indicated that responsibility for mealtime duties may have merely shifted from nursing staff to the assistant. Implementing the multidisciplinary interventions empowered nursing staff to "protect" the mealtime from external interruptions, but further work is required to empower nurses to prioritise mealtime activities within their own work schedules. Staff reported an increase in the profile of nutritional care on all wards, with additional non-nutritional benefits noted, including improved mobility and functional independence and better identification of swallowing difficulties. IMPLICATIONS: This PhD research provides clinicians with practical strategies to immediately introduce change to deliver better mealtime care in the hospital setting and, as such, has initiated local and state-wide roll-out of mealtime assistance programs. Improved nutritional intake of elderly inpatients was observed; however, given the modest effect size and decreasing lengths of hospital stay, better nutritional outcomes may be achieved by targeting the hospital-to-home transition period. Findings from this study suggest that mealtime assistance interventions for elderly inpatients with cognitive impairment and/or functional dependency show promise.
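To make the plate-waste method described above concrete, the sketch below converts visually estimated proportions eaten into daily energy and protein totals and checks adequacy against an estimated requirement. It is an illustration only: the meals, nutrient values and requirement figure are hypothetical, not taken from the study.

```python
# Illustrative sketch only: converts visual plate-waste estimates into daily
# energy/protein intake, as in the study's methods. All menu items, nutrient
# values and the requirement figure below are hypothetical.

# Each entry: (energy kJ served, protein g served, proportion actually eaten)
meals = {
    "breakfast": (1800, 15, 0.75),
    "morning tea": (600, 4, 1.00),
    "lunch": (2400, 25, 0.50),
    "afternoon tea": (600, 4, 0.25),
    "dinner": (2600, 28, 0.50),
}

energy_intake = sum(kj * eaten for kj, _, eaten in meals.values())
protein_intake = sum(prot * eaten for _, prot, eaten in meals.values())

estimated_requirement_kj = 7500  # hypothetical individual estimate

print(f"Energy intake: {energy_intake:.0f} kJ, protein: {protein_intake:.0f} g")
print("Adequate energy intake:", energy_intake >= estimated_requirement_kj)
```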
Abstract:
Background & aims The confounding effect of disease on the outcomes of malnutrition, using diagnosis-related groups (DRGs), has never been studied in a multidisciplinary setting. This study aims to determine the impact of malnutrition on hospitalisation outcomes, controlling for DRG. Methods The Subjective Global Assessment was used to assess the nutritional status of 818 patients within 48 hours of admission. Prospective data were collected on cost of hospitalisation, length of stay (LOS), readmission and mortality up to 3 years post-discharge using National Death Register data. Mixed-model analysis and conditional logistic regression matched by DRG were carried out to evaluate the association between nutritional status and outcomes, with the results adjusted for gender, age and race. Results Malnourished patients (29%) had longer hospital stays (6.9±7.3 days vs. 4.6±5.6 days, p<0.001) and were more likely to be readmitted within 15 days (adjusted relative risk = 1.9, 95%CI 1.1-3.2, p=0.025). Within a DRG, the mean difference between the actual cost of hospitalisation and the average cost was greater for malnourished than for well-nourished patients (p=0.014). Mortality was higher in malnourished patients at 1 year (34% vs. 4.1%), 2 years (42.6% vs. 6.7%) and 3 years (48.5% vs. 9.9%); p<0.001 for all. Overall, malnutrition was a significant predictor of mortality (adjusted hazard ratio = 4.4, 95%CI 3.3-6.0, p<0.001). Conclusions Malnutrition was evident in up to one third of inpatients and led to poor hospitalisation outcomes, even after matching for DRG. Strategies to prevent and treat malnutrition in hospital and post-discharge are needed.
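The "controlling for DRG" step above can be illustrated with a small sketch: express each patient's cost as a deviation from the mean cost of their own DRG, then compare deviations between nutritional groups. The patient records below are hypothetical, and the study's actual analysis (mixed models and conditional logistic regression, adjusted for gender, age and race) is more sophisticated than this toy version.

```python
# Sketch of a DRG-controlled cost comparison: each patient's cost is expressed
# as a deviation from the mean cost of their own DRG, removing case-mix
# differences before comparing nutritional groups. Data are hypothetical.
from collections import defaultdict

# (DRG code, malnourished?, cost of stay in $)
patients = [
    ("DRG-172", True, 9800), ("DRG-172", False, 7200), ("DRG-172", False, 6900),
    ("DRG-089", True, 15400), ("DRG-089", True, 13100), ("DRG-089", False, 9800),
]

drg_costs = defaultdict(list)
for drg, _, cost in patients:
    drg_costs[drg].append(cost)
drg_mean = {drg: sum(costs) / len(costs) for drg, costs in drg_costs.items()}

deviations = {True: [], False: []}
for drg, malnourished, cost in patients:
    deviations[malnourished].append(cost - drg_mean[drg])

for group, label in [(True, "malnourished"), (False, "well-nourished")]:
    mean_dev = sum(deviations[group]) / len(deviations[group])
    print(f"{label}: mean cost deviation within DRG = ${mean_dev:+.0f}")
```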
Abstract:
Maternally inherited diabetes and deafness (MIDD) is an autosomal dominant inherited syndrome caused by the mitochondrial DNA (mtDNA) nucleotide mutation A3243G. It affects various organs, including the eye, with external ophthalmoparesis, ptosis, and bilateral macular pattern dystrophy [1,2]. The prevalence of retinal involvement in MIDD is high, with 50% to 85% of patients exhibiting some macular changes [1]. Those changes, however, can vary dramatically between patients and within families, depending on the percentage of retinal mtDNA mutations, making it difficult to predict an individual's visual prognosis...
Abstract:
Public policymakers are caught in a dilemma: there is a growing list of urgent issues to address at the same time that public expenditure is being cut. Adding to this dilemma is a system of government designed in the 19th century and competing theories of policymaking dating back to the 1950s. The interlinked problems of disaster risk management and climate change adaptation are cases in point. As the climate changes, there will be more frequent, intense and/or prolonged disasters such as floods and bushfires. Clearly, a well-integrated whole-of-government response is needed, but how might this be achieved? Further, how could academic research contribute to resolving this dilemma in a way that would produce something of theoretical interest as well as practical outcomes for policymakers? These are the questions addressed by our research via a comparative analysis of the 2009 Victorian bushfires, the 2011 Perth Hills bushfires, and the 2011 Brisbane floods. Our findings suggest that there is a need to: improve community engagement and communication; refocus attention on resilience; improve interagency communication and collaboration; and develop institutional arrangements that support continual improvement and policy learning. These findings have implications for all areas of public policy theory and practice.
Abstract:
Background Cancer-related malnutrition is associated with increased morbidity, poorer tolerance of treatment, decreased quality of life, increased hospital admissions, and increased health care costs (Isenring et al., 2013). This study's aim was to determine whether a novel automated screening system was a useful tool for nutrition screening when compared against a full nutrition assessment using the Patient-Generated Subjective Global Assessment (PG-SGA) tool. Methods A single-site, observational, cross-sectional study was conducted in an outpatient oncology day care unit within a Queensland tertiary facility, with three hundred outpatients (51.7% male, mean age 58.6 ± 13.3 years). Eligibility criteria: ≥18 years, receiving anticancer treatment, able to provide written consent. Patients completed the Malnutrition Screening Tool (MST). Nutritional status was assessed using the PG-SGA. Data for the automated screening system were extracted from the pharmacy software program Charm, including body mass index (BMI) and weight records dating back up to six months. Results The prevalence of malnutrition was 17%. Using the automated screening system, any weight loss over the three to six weeks prior to the most recent weight record, assessed against PG-SGA-classified malnutrition, yielded 56.52% sensitivity, 35.43% specificity, a 13.68% positive predictive value and an 81.82% negative predictive value. An MST score of 2 or greater was a stronger predictor of PG-SGA-classified malnutrition (70.59% sensitivity, 69.48% specificity, 32.14% positive predictive value, 92.02% negative predictive value). Conclusions Both the automated screening system and the MST fell short of the accepted professional standards for sensitivity (80%) and specificity (60%) when compared to the PG-SGA. The MST remains the better predictor of malnutrition in this setting, although uptake of the tool in the Oncology Day Care Unit remains challenging.
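The four screening metrics quoted above follow from a standard 2x2 table. The cell counts below are back-calculated from the abstract's own figures (17% prevalence of 300 patients gives 51 malnourished; 70.59% sensitivity then gives 36 true positives, and so on), so they reproduce the published percentages, but they are a reconstruction rather than reported values.

```python
# Screening-test metrics from a 2x2 table. Counts are back-calculated from the
# reported prevalence (17% of 300 = 51 malnourished) and the MST's published
# sensitivity/specificity, so they reproduce the abstract's percentages.
tp, fn = 36, 15   # malnourished (PG-SGA): MST-positive / MST-negative
fp, tn = 76, 173  # well-nourished: MST-positive / MST-negative

sensitivity = tp / (tp + fn)  # 70.59%: malnourished patients flagged by MST
specificity = tn / (tn + fp)  # 69.48%: well-nourished patients cleared by MST
ppv = tp / (tp + fp)          # 32.14%: MST-positives truly malnourished
npv = tn / (tn + fn)          # 92.02%: MST-negatives truly well nourished

for name, value in [("Sensitivity", sensitivity), ("Specificity", specificity),
                    ("PPV", ppv), ("NPV", npv)]:
    print(f"{name}: {value:.2%}")
```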
Abstract:
Purpose To determine the rate of recurrence, and associated risk factors, following the use of mitomycin C (MMC) and/or interferon alpha-2b (IFN) for the management of non-invasive ocular surface squamous neoplasia (OSSN). Design Retrospective non-comparative interventional case series. Methods In a clinical practice setting, 135 patients treated consecutively with topical MMC (0.4 mg/mL) and/or IFN (1 million units/mL) for OSSN were observed for clinical recurrence. Results Clinical recurrences were diagnosed in 19 of 135 (14.1%) eyes following topical treatment. The mean time to recurrence was 17.2 months (range 4-61), with 14 (73.7%) recurring within a two-year period. No greater risk of recurrence was identified for variables including lesion size, lesion location, gender, age, treatment type or duration. Post-hoc log-rank pairwise comparisons revealed that lesions initially treated with surgery alone had a significantly reduced time to recurrence (21.1 ± 5.6 months) compared to those previously treated topically with MMC (with or without surgery) (29.6 ± 4.7 months) (p = 0.04) and primary OSSN (23.2 ± 1.8 months) (p = 0.09). Conclusions Topical MMC and IFN are effective treatment modalities for a wide range of non-invasive OSSN. Topical therapy avoids the morbidity of excisional surgery with equivalent or reduced recurrence rates and should be considered as primary therapy.
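Below is a sketch of the kind of pairwise time-to-recurrence comparison reported above, using the Python lifelines library (our choice for illustration; the study does not name its software). The follow-up durations and event flags are hypothetical.

```python
# Illustrative time-to-recurrence comparison with lifelines. Data are
# hypothetical: months of follow-up per eye, and whether a recurrence was
# observed (1) or the observation was censored (0).
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

surgery_only_t = [4, 9, 14, 20, 24, 36, 48]
surgery_only_e = [1, 1, 1, 0, 1, 0, 0]
mmc_t = [12, 18, 24, 30, 36, 48, 61]
mmc_e = [0, 1, 0, 0, 1, 0, 1]

kmf = KaplanMeierFitter()
kmf.fit(mmc_t, event_observed=mmc_e, label="topical MMC")
print("median time to recurrence (MMC):", kmf.median_survival_time_)

result = logrank_test(surgery_only_t, mmc_t,
                      event_observed_A=surgery_only_e,
                      event_observed_B=mmc_e)
print(f"log-rank p = {result.p_value:.3f}")
```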
Abstract:
The Beyond Compliance project, which began in July 2011 with two years of funding from the Standards and Trade Development Facility, aims to enhance competency and confidence in the South East Asian sub-region in applying a Systems Approach to pest risk management. The Systems Approach involves the use of integrated measures, at least two of which are independent, that cumulatively reduce the risk of introducing exotic pests through trade. Although useful in circumstances where single measures are inappropriate or unavailable, the Systems Approach is inherently more complicated than single-measure approaches, which may inhibit its uptake. The project methodology is to take prototype decision-support tools, such as Control Point-Bayesian Networks (CP-BNs), developed in recent plant health initiatives in other regions (including the European PRATIQUE project), and to refine them within this sub-regional context. Case studies of high-priority potential agricultural trade will be conducted by the National Plant Protection Organizations of participating South East Asian countries in trials of the tools, before further modifications. Longer-term outcomes may include: more robust pest risk management in the region (for exports and imports); greater inclusion of stakeholders in the development of pest risk management plans; increased confidence in trade negotiations; and new opportunities for trade.
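The core arithmetic behind the Systems Approach (independent measures whose risk reductions combine multiplicatively) can be sketched in a few lines. The control points and leakage rates below are invented for illustration; a Control Point-Bayesian Network generalises this by modelling dependence between measures rather than assuming strict independence.

```python
# Sketch of the Systems Approach arithmetic: independent control points each
# let through some fraction of infested consignments, and their combined
# effect is multiplicative. Measure names and rates are hypothetical.
control_points = {
    "pest-free production site": 0.10,  # fraction of risk remaining after measure
    "pre-export fumigation": 0.05,
    "inspection at border": 0.20,
}

residual_risk = 1.0
for measure, leakage in control_points.items():
    residual_risk *= leakage
    print(f"after {measure}: {residual_risk:.4%} of baseline risk remains")
```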
Abstract:
Objective: To explore the factors influencing esophageal cancer in the Dawen River basin, Shandong Province. Methods: A case-control study was carried out: 195 living patients with diagnosed esophageal cancer and 195 controls matched by age and sex were surveyed using a unified questionnaire. Results: The following factors raised the risk of esophageal cancer: a hard, dry diet; smoking homemade cigarettes; alcohol consumption >500 ml/day; and a family history of tumours (OR = 5.850, 16.158, 11.513 and 1.827, respectively), while drinking tea may have a protective effect against esophageal cancer (OR = 0.311). Conclusion: The high incidence of esophageal cancer in the area is related not only to environmental and dietary factors but also to family history of esophageal cancer.
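For readers unfamiliar with the odds ratios quoted above, the sketch below shows the generic 2x2 calculation with a Woolf 95% confidence interval. The exposure counts are hypothetical, and the study's matched design properly calls for a conditional (matched-pairs) analysis that this simple estimator does not reproduce.

```python
# Generic (unmatched) odds ratio with a Woolf 95% CI, shown only to unpack the
# OR figures quoted above. Counts are hypothetical; a matched case-control
# study would use a conditional analysis instead.
import math

a, b = 120, 75   # cases: exposed (e.g. hard dry diet) / unexposed
c, d = 60, 135   # controls: exposed / unexposed

odds_ratio = (a * d) / (b * c)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```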
Abstract:
This brief provides the conceptual background of current research aiming to improve understanding of the relationship between consumer religiosity and the social and psychological risks associated with adopting new products and technologies. The project includes two main studies framed by Hunt-Vitell's General Theory of Marketing Ethics and the Theory of Moral Potency. Using a scenario-based experimental 2x2 design, two research questions will be answered upon completion of the project: what is the nature of the relationship between consumer religiosity and perceptions of psychological and social risk? And what is the role of moral potency in the relationship between consumer religiosity and the perception of psychological and social risk?
Abstract:
Background Acute respiratory illness, a leading cause of cough in children, accounts for a substantial proportion of childhood morbidity and mortality worldwide. In some children acute cough progresses to chronic cough (≥4 weeks' duration), impacting on morbidity and decreasing quality of life. Despite the importance of chronic cough as a cause of substantial childhood morbidity and associated economic, family and social costs, data on the prevalence, predictors, aetiology and natural history of the symptom are scarce. This study aims to comprehensively describe the epidemiology, aetiology and outcomes of cough during and after acute respiratory illness in children presenting to a tertiary paediatric emergency department. Methods/design A prospective cohort study of children aged <15 years attending the Royal Children's Hospital Emergency Department, Brisbane, for a respiratory illness that includes parent-reported cough (wet or dry) as a symptom. The primary objective is to determine the prevalence and predictors of chronic cough (≥4 weeks' duration) after presentation with acute respiratory illness. Demographic, epidemiological, risk factor, microbiological and clinical data are collected at enrolment. Subjects complete daily cough diaries and weekly follow-up contacts for 28 (±3) days to ascertain cough persistence. Children who continue to cough for 28 days post-enrolment are referred to a paediatric respiratory physician for review. The primary analysis will be the proportion of children with persistent cough at day 28 (±3). Multivariate analyses will be performed to evaluate variables independently associated with chronic cough at day 28 (±3). Discussion Our protocol will be the first to comprehensively describe the natural history, epidemiology, aetiology and outcomes of cough during and after acute respiratory illness in children. The results will contribute to studies leading to the development of evidence-based clinical guidelines to improve the early detection and management of chronic cough in children during and after acute respiratory illness.
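A minimal sketch of the protocol's stated primary analysis, the proportion of children with persistent cough at day 28 (±3), with a 95% Wilson confidence interval via statsmodels (our choice of library). The counts are hypothetical placeholders, not study results.

```python
# Sketch of the primary analysis: proportion of children still coughing at
# day 28 (+/-3), with a 95% Wilson interval. Counts are hypothetical.
from statsmodels.stats.proportion import proportion_confint

enrolled = 250          # hypothetical cohort size
persistent_cough = 45   # hypothetical number still coughing at day 28

prevalence = persistent_cough / enrolled
lo, hi = proportion_confint(persistent_cough, enrolled,
                            alpha=0.05, method="wilson")
print(f"Chronic cough: {prevalence:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```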
Abstract:
Childhood immunisation coverage reported at 12 to <15 months and at 2 years of age may mask deficiencies in the timeliness of vaccines designed to protect against diseases in infancy. This study aimed to evaluate immunisation timeliness in Indigenous infants in the Northern Territory, Australia. Coverage was analysed at the dates children turned 7, 13 and 18 months of age. By 7 months of age, 45.2% of children had completed the recommended schedule, increasing to 49.5% and 81.2% at 13 and 18 months of age, respectively. Immunisation performance benchmarks must focus on improving timeliness for these children in the first year of life.
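The timeliness analysis described above reduces to a date comparison: a child counts as covered at a milestone only if the schedule was completed on or before the date they turned that age. The records and field names below are hypothetical, and python-dateutil is assumed for exact month arithmetic.

```python
# Sketch of the timeliness measure: schedule completion checked against the
# date each child turned 7, 13 and 18 months. Records are hypothetical.
from datetime import date
from dateutil.relativedelta import relativedelta

children = [
    {"dob": date(2004, 3, 10), "completed_schedule_on": date(2004, 11, 2)},
    {"dob": date(2004, 5, 21), "completed_schedule_on": date(2005, 9, 30)},
    {"dob": date(2004, 8, 2), "completed_schedule_on": None},  # never completed
]

for months in (7, 13, 18):
    n_timely = sum(
        1 for c in children
        if c["completed_schedule_on"] is not None
        and c["completed_schedule_on"] <= c["dob"] + relativedelta(months=months)
    )
    print(f"complete by {months} months: {n_timely / len(children):.1%}")
```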