868 results for "decomposition of a support"
Abstract:
In the last few decades, the focus on building healthy communities has grown significantly (Ashton, 2009). There is growing evidence that new approaches to planning are required to address the challenges faced by contemporary communities. These approaches need to be based on timely access to local information and collaborative planning processes (Murray, 2006; Scotch & Parmanto, 2006; Ashton, 2009; Kazda et al., 2009). However, there is little research to inform the methods that can support this type of responsive, local, collaborative and consultative health planning (Northridge et al., 2003). Some research justifies the use of decision support systems (DSS) as a tool to support planning for healthy communities. DSS have been found to increase collaboration between stakeholders and communities, improve the accuracy and quality of the decision-making process, and improve the availability of data and information for health decision-makers (Nobre et al., 1997; Cromley & McLafferty, 2002; Waring et al., 2005). Geographic information systems (GIS) have been suggested as an innovative method by which to implement DSS because they promote new ways of thinking about evidence and facilitate a broader understanding of communities. Furthermore, the literature indicates that online environments can have a positive impact on decision-making by enabling access to information by a broader audience (Kingston et al., 2001). However, only limited research has examined the implementation and impact of online DSS in the health planning field. Previous studies have emphasised the lack of effective information management systems and an absence of frameworks to guide the way in which information is used to promote informed decisions in health planning. It has become imperative to develop innovative approaches, frameworks and methods to support health planning. Thus, to address these identified gaps in knowledge, this study aims to develop a conceptual planning framework for creating healthy communities and examine the impact of DSS in the Logan Beaudesert area. Specifically, the study aims to identify the key elements and domains of information that are needed to develop healthy communities, to develop a conceptual planning framework for creating healthy communities, to collaboratively develop and implement an online GIS-based Health DSS (i.e., HDSS), and to examine the impact of the HDSS on local decision-making processes. The study is based on a real-world case study of a community-based initiative that was established to improve public health outcomes and promote new ways of addressing chronic disease. The study involved the development of an online GIS-based health decision support system (HDSS), which was applied in the Logan Beaudesert region of Queensland, Australia. A planning framework was developed to account for the way in which information could be organised to contribute to a healthy community. The decision support system was developed within a unique settings-based initiative, the Logan Beaudesert Health Coalition (LBHC), designed to plan for and improve the health capacity of the Logan Beaudesert area in Queensland, Australia. This setting provided a suitable platform to apply a participatory research design to the development and implementation of the HDSS. The HDSS was therefore implemented as a pilot study, which examined the impact of this collaborative process, and of the subsequent implementation of the HDSS, on the way decision-making was perceived across the LBHC.
As for the method, a comprehensive planning framework for creating healthy communities was first developed based on a systematic literature review. This was followed by a mixed-method design in which data were collected through both qualitative and quantitative methods. Specifically, data were collected by adopting a participatory action research (PAR) approach (i.e., a PAR intervention) that informed the development and conceptualisation of the HDSS. A pre- and post-design was then used to determine the impact of the HDSS on decision-making. The findings of this study revealed a meaningful framework for organising information to guide planning for healthy communities. This conceptual framework provided a comprehensive system within which to organise existing data. The PAR process was useful in engaging stakeholders and decision-makers in the development and implementation of the online GIS-based DSS. Through three PAR cycles, this study resulted in heightened awareness of online GIS-based DSS and openness to their implementation. It resulted in the development of a tailored system (i.e., the HDSS) that addressed the local information and planning needs of the LBHC. In addition, the implementation of the DSS resulted in improved decision-making and greater satisfaction with decisions within the LBHC. For example, the study illustrated the culture in which decisions were made before and after the PAR intervention and what improvements were observed after the application of the HDSS. In general, the findings indicated that decision-making processes were not merely better informed (as a consequence of using the HDSS tool), but that the overall sense of 'collaboration' in health planning practice was also enhanced. For example, it was found that the PAR intervention had a positive impact on the way decisions were made. The study revealed important features of the HDSS development and implementation process that will contribute to future research. Thus, the overall findings suggest that the HDSS is an effective tool that could play an important role in significantly improving health planning practice in the future.
Abstract:
Background and significance: Older adults with chronic diseases are at increasing risk of hospital admission and readmission. Approximately 75% of adults have at least one chronic condition, and the odds of developing a chronic condition increase with age. Chronic diseases consume about 70% of the total Australian health expenditure, and about 59% of hospital events for chronic conditions are potentially preventable. These figures have brought to light the importance of the management of chronic disease among the growing older population. Many studies have endeavoured to develop effective chronic disease management programs by applying social cognitive theory. However, few studies have focused on chronic disease self-management in older adults at high risk of hospital readmission. Moreover, although the majority of studies have covered a wide range of valuable outcome measures, there is scant evidence examining fundamental health outcomes such as nutritional status, functional status and health-related quality of life. Aim: The aim of this research was to test social cognitive theory in relation to self-efficacy in managing chronic disease and three health outcomes, namely nutritional status, functional status, and health-related quality of life, in older adults at high risk of hospital readmission. Methods: A cross-sectional study design was employed for this research. Three studies were undertaken. Study One examined nutritional status and the validation of a nutritional screening tool; Study Two explored the relationships between participants' characteristics, self-efficacy beliefs, and health outcomes based on the study's hypothesized model; Study Three tested a theoretical model based on social cognitive theory, which examined potential mechanisms for the mediation effects of social support and self-efficacy beliefs. One hundred and fifty-seven patients aged 65 years and older with a medical admission and at least one risk factor for readmission were recruited. Data were collected from medical records (demographics and medical history) and from self-report questionnaires. The nutrition data were collected by two registered nurses. For Study One, a contingency table and the kappa statistic were used to determine the validity of the Malnutrition Screening Tool. In Study Two, standard multiple regression, hierarchical multiple regression and logistic regression were undertaken to determine the significant influential predictors for the three health outcome measures. For Study Three, a structural equation modelling approach was taken to test the hypothesized self-efficacy model. Results: The findings of Study One suggested that a high prevalence of malnutrition continues to be a concern in older adults, as the prevalence of malnutrition was 20.6% according to the Subjective Global Assessment. Additionally, the findings confirmed that the Malnutrition Screening Tool is a valid nutritional screening tool for hospitalized older adults at risk of readmission when compared to the Subjective Global Assessment, with high sensitivity (94%) and specificity (89%), and substantial agreement between these two methods (κ = .74, p < .001; 95% CI .62-.86). Analysis of the Study Two data found that depressive symptoms and perceived social support were the two strongest influential factors for self-efficacy in managing chronic disease in a hierarchical multiple regression.
Results of multivariable regression models suggested that advancing age, depressive symptoms and less tangible support were three important predictors of malnutrition. In terms of functional status, a standard regression model found that social support was the strongest predictor for the Instrumental Activities of Daily Living, followed by self-efficacy in managing chronic disease. The results of standard multiple regression revealed that the number of hospital readmission risk factors adversely affected the physical component score, while depressive symptoms and self-efficacy beliefs were two significant predictors for the mental component score. In Study Three, the structural equation modelling showed that self-efficacy partially mediated the effect of health characteristics and depression on health-related quality of life. The health characteristics had strong direct effects on functional status and body mass index. The results also indicated that social support partially mediated the relationship between health characteristics and functional status. With regard to the joint effects of social support and self-efficacy, social support fully mediated the effect of health characteristics on self-efficacy, and self-efficacy partially mediated the effect of social support on functional status and health-related quality of life. The results also demonstrated that the models fitted the data well, with relatively high variance explained by the models, implying that the hypothesized constructs under discussion were highly relevant; hence the application of social cognitive theory in this context was supported. Conclusion: This thesis highlights the applicability of social cognitive theory to chronic disease self-management in older adults at risk of hospital readmission. Further studies are recommended to validate and continue to extend the development of social cognitive theory in chronic disease self-management in older adults to improve their nutritional and functional status, and health-related quality of life.
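As an illustrative sketch of the screening-tool validation statistics described in Study One, a minimal Python example is given below; the 2x2 counts are hypothetical (chosen only to sit near the reported prevalence, sensitivity and specificity) and are not the thesis data.

# Hypothetical 2x2 contingency table comparing the Malnutrition Screening Tool (MST)
# against the Subjective Global Assessment (SGA); counts are illustrative only.
tp, fp = 30, 14    # MST positive: SGA malnourished / SGA well nourished
fn, tn = 2, 111    # MST negative: SGA malnourished / SGA well nourished

sensitivity = tp / (tp + fn)   # proportion of SGA-malnourished patients flagged by the MST
specificity = tn / (tn + fp)   # proportion of well-nourished patients correctly screened out

# Cohen's kappa: agreement between the two methods beyond chance
n = tp + fp + fn + tn
p_observed = (tp + tn) / n
p_expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
kappa = (p_observed - p_expected) / (1 - p_expected)

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, kappa={kappa:.2f}")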
Abstract:
Background: This study explored the experiences of university employees who participated in a walking intervention that encouraged individuals to walk more throughout their workday. The 10-week program comprised five phases (i.e. baseline, anticipating barriers, short planned walks, longer planned walks and maintenance) and utilized a pedometer diary and an online website for logging steps. The pedometer diary included “action plans” for addressing barriers and planning walking, and the online dashboard provided graphical outputs that allowed participants to visualize whether they were reaching or exceeding their step targets. Methods: A subsample of 12 academic and administrative employees from the study completed open-ended questionnaires at the end of the study. The questions focused on capturing the major themes of benefits/mediators and problems/moderators of the program, and responses were analysed using phenomenological approaches. Results: Participants reported a raised consciousness of physical inactivity throughout the workday. They also found it useful to have a graphical display of physical activity patterns, but found time constraints and lack of managerial support to be the primary barriers/moderators of the program. Those most likely to withdraw from the program experienced technical difficulties with objective monitors and the online website. Conclusions: Findings highlight the value in being involved in a group forum and provide insights into the challenges of supporting such programs within the workplace.
Abstract:
BACKGROUND: The efficacy of nutritional support in the management of malnutrition in chronic obstructive pulmonary disease (COPD) is controversial. Previous meta-analyses, based on only cross-sectional analysis at the end of intervention trials, found no evidence of improved outcomes. OBJECTIVE: The objective was to conduct a meta-analysis of randomized controlled trials (RCTs) to clarify the efficacy of nutritional support in improving intake, anthropometric measures, and grip strength in stable COPD. DESIGN: Literature databases were searched to identify RCTs comparing nutritional support with controls in stable COPD. RESULTS: Thirteen RCTs (n = 439) of nutritional support [dietary advice (1 RCT), oral nutritional supplements (ONS; 11 RCTs), and enteral tube feeding (1 RCT)] with a control comparison were identified. An analysis of the changes induced by nutritional support, and of those obtained only at the end of the intervention, showed significantly greater increases in mean total protein and energy intakes with nutritional support (14.8 g and 236 kcal daily, respectively). Meta-analyses also showed greater mean (±SE) improvements in favor of nutritional support for body weight (1.94 ± 0.26 kg, P < 0.001; 11 studies, n = 308) and grip strength (5.3%, P < 0.050; 4 studies, n = 156), which was not shown by ANOVA at the end of the intervention, largely because of bias associated with baseline imbalance between groups. CONCLUSION: This systematic review and meta-analysis showed that nutritional support, mainly in the form of ONS, improves total intake, anthropometric measures, and grip strength in COPD. These results contrast with the results of previous analyses that were based on only cross-sectional measures at the end of intervention trials.
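As a hedged illustration of how pooled mean differences of this kind are commonly obtained, the sketch below applies fixed-effect, inverse-variance pooling in Python; the per-study mean differences and standard errors are invented for the example and are not the values from the thirteen RCTs.

import math

# Hypothetical per-study mean differences in weight change (kg, intervention minus control)
# and their standard errors; values are illustrative only.
studies = [(1.7, 0.40), (2.3, 0.55), (1.5, 0.35), (2.1, 0.60)]

# Fixed-effect, inverse-variance weights and pooled estimate
weights = [1.0 / se ** 2 for _, se in studies]
pooled_md = sum(w * md for (md, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval for the pooled mean difference
lo, hi = pooled_md - 1.96 * pooled_se, pooled_md + 1.96 * pooled_se
print(f"pooled MD = {pooled_md:.2f} kg (95% CI {lo:.2f} to {hi:.2f})")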
Abstract:
Background: Most skin cancers are preventable by encouraging consistent use of sun protective behaviour. In Australia, adolescents have high levels of knowledge and awareness of the risks of skin cancer but exhibit significantly lower sun protection behaviours than adults. There is limited research aimed at understanding why people do or do not engage in sun protective behaviour, and an associated absence of theory-based interventions to improve sun safe behaviour. This paper presents the study protocol for a school-based intervention which aims to improve the sun safe behaviour of adolescents. Methods/design: Approximately 400 adolescents (aged 12-17 years) will be recruited through public and private schools in Queensland, Australia, and randomized to the intervention (n = 200) or 'wait-list' control group (n = 200). The intervention focuses on encouraging supportive sun protective attitudes and beliefs, fostering perceptions of normative support for sun protection behaviour, and increasing perceptions of control/self-efficacy over using sun protection. It will be delivered by a trained facilitator during class time, in three one-hour sessions over a three-week period. Data will be collected one week pre-intervention (Time 1), and at one week (Time 2) and four weeks (Time 3) post-intervention. Primary outcomes are intentions to sun protect and sun protection behaviour. Secondary outcomes include attitudes toward performing sun protective behaviours (i.e., attitudes), perceptions of normative support to sun protect (i.e., subjective norms, group norms, and image norms), and perceived control over performing sun protective behaviours (i.e., perceived behavioural control). Discussion: The study will provide valuable information about the effectiveness of the intervention in improving the sun protective behaviour of adolescents.
Abstract:
Encouraging quality teaching staff to apply for and accept teaching placements in rural and remote locations is an ongoing concern internationally. The value of different support mechanisms provided for pre-service teachers attending a rural and remote practicum is investigated through theories of place and the school-community nexus. Qualitative data regarding the experiences of the pre-service teachers were collected through interviews and case study notes. This project adds to our understanding of practicum in rural areas by employing a conceptual understanding of place to propose how the experiences of a four-week practicum may contribute to urban pre-service teachers’ conceptions of work and life in a rural community.
Abstract:
A substantial body of literature exists identifying factors contributing to under-performing Enterprise Resource Planning systems (ERPs), including poor communication, lack of executive support and user dissatisfaction (Calisir et al., 2009). Of particular interest is Momoh et al.’s (2010) recent review identifying poor data quality (DQ) as one of nine critical factors associated with ERP failure. DQ is central to ERP operating processes, ERP-facilitated decision-making and inter-organizational cooperation (Batini et al., 2009). Crucial in ERP contexts is that the integrated, automated, process-driven nature of ERP data flows can amplify DQ issues, compounding minor errors as they flow through the system (Haug et al., 2009; Xu et al., 2002). However, despite the growing appreciation of the importance of DQ in determining ERP success, there is a lack of research addressing the relationship between stakeholders’ requirements and perceptions of ERP DQ, perceived data utility, and the impact of users’ treatment of data on ERP outcomes.
Abstract:
Fire safety of light gauge cold-formed steel frame (LSF) wall systems is significant to building design. Gypsum plasterboard is widely used as a fire safety material in the building industry. It contains gypsum (CaSO4·2H2O), calcium carbonate (CaCO3) and, most importantly, free and chemically bound water in its crystal structure. The dehydration of the gypsum and the decomposition of calcium carbonate absorb heat, which gives gypsum plasterboard its fire-resistant qualities. Recently, a new composite panel system was developed in which a thin insulation layer is used externally between two plasterboards to improve the fire performance of LSF walls. In this research, finite element thermal models of both the traditional LSF wall panels with cavity insulation and the new LSF composite wall panels were developed to simulate their thermal behaviour under standard and realistic design fire conditions. Suitable thermal properties of gypsum plasterboard, insulation materials and steel were used. The developed models were then validated by comparing their results with fire test results. This paper presents the details of the developed finite element models of non-load bearing LSF wall panels and the thermal analysis results. It was shown that finite element models can be used to simulate the thermal behaviour of LSF walls with varying configurations of insulation and plasterboard. The results show that the use of cavity insulation was detrimental to the fire rating of LSF walls, while the use of external insulation offered superior thermal protection. Effects of real fire conditions are also presented.
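For reference, the endothermic reactions behind this behaviour are standard chemistry rather than equations taken from the paper: the two-stage dehydration of gypsum at roughly 100-200 °C, followed by the decomposition of calcium carbonate above about 600 °C. In LaTeX form:

\mathrm{CaSO_4\cdot 2H_2O \;\longrightarrow\; CaSO_4\cdot \tfrac{1}{2}H_2O + \tfrac{3}{2}H_2O}
\mathrm{CaSO_4\cdot \tfrac{1}{2}H_2O \;\longrightarrow\; CaSO_4 + \tfrac{1}{2}H_2O}
\mathrm{CaCO_3 \;\longrightarrow\; CaO + CO_2}

Both the release of bound water and the carbonate decomposition absorb heat, delaying the temperature rise across the plasterboard.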
Abstract:
The traditional decomposition of the gender wage gap distinguishes between a component attributable to gender differences in productivity-related characteristics and a residual component that is often taken as a measure of discrimination. This study of data from the 1989 Canadian Labour Market Activity Survey shows that when occupation is treated as a productivity-related characteristic, the proportion of the gender wage gap labeled "explained" increases with the number of occupational classifications distinguished. However, on the basis of evidence that occupational differences reflect the presence of barriers faced by women attempting to enter male-dominated occupations, the authors conclude that occupation should not be treated as a productivity-related characteristic; and in a decomposition of the gender wage gap that treats occupation as endogenously determined, they find that the level of occupational aggregation has little effect on the size of the "explained" component of the gap.
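For context, the "traditional decomposition" referred to here is the Oaxaca-Blinder decomposition of the mean wage gap; in its standard textbook form (not reproduced from the article) it can be written as

\bar{W}_m - \bar{W}_f = \underbrace{(\bar{X}_m - \bar{X}_f)'\hat{\beta}_m}_{\text{explained: differences in characteristics}} + \underbrace{\bar{X}_f'(\hat{\beta}_m - \hat{\beta}_f)}_{\text{residual: often read as discrimination}}

where \bar{X}_m and \bar{X}_f are mean characteristics for men and women (occupation enters here when it is treated as productivity-related) and \hat{\beta}_m, \hat{\beta}_f are coefficients from wage equations estimated separately for each group. Treating occupation as endogenous keeps it out of the explained term, which is consistent with the authors' finding that the level of occupational aggregation then matters little.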
Abstract:
We investigated the effect of hydrotherapy on time-trial performance and cardiac parasympathetic reactivation during recovery from intense training. On three occasions, 18 well-trained cyclists completed 60 min of high-intensity cycling, followed 20 min later by one of three 10-min recovery interventions: passive rest (PAS), cold water immersion (CWI), or contrast water therapy (CWT). The cyclists then rested quietly for 160 min, with R-R intervals and perceptions of recovery recorded every 30 min. Cardiac parasympathetic activity was evaluated using the natural logarithm of the square root of the mean squared differences of successive R-R intervals (ln rMSSD). Finally, the cyclists completed a work-based cycling time trial. Effects were examined using magnitude-based inferences. Differences in time-trial performance between the three trials were trivial. Compared with PAS, general fatigue was very likely lower for CWI [difference (90% confidence limits): -12% (-18; -5)] and CWT [-11% (-19; -2)]. Leg soreness was almost certainly lower following CWI [-22% (-30; -14)] and CWT [-27% (-37; -15)]. The change in mean ln rMSSD following the recovery interventions (ln rMSSD(Post-interv)) was almost certainly higher following CWI [16.0% (10.4; 23.2)] and very likely higher following CWT [12.5% (5.5; 20.0)] compared with PAS, and possibly higher following CWI [3.7% (-0.9; 8.4)] compared with CWT. The correlations between performance, ln rMSSD(Post-interv) and perceptions of recovery were unclear. A moderate correlation was observed between ln rMSSD(Post-interv) and leg soreness [r = -0.50 (-0.66; -0.29)]. Although the effects of CWI and CWT on performance were trivial, the beneficial effects on perceptions of recovery support the use of these recovery strategies.
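As a minimal sketch of the heart rate variability index defined above (illustrative Python only; the R-R values are invented, not the study's recordings), ln rMSSD can be computed from a series of R-R intervals as follows:

import numpy as np

def ln_rmssd(rr_ms):
    # Natural log of the root mean square of successive R-R interval differences
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)                    # successive R-R differences (ms)
    rmssd = np.sqrt(np.mean(diffs ** 2))   # rMSSD
    return np.log(rmssd)

# Illustrative R-R intervals in milliseconds
print(ln_rmssd([850, 880, 870, 900, 860, 890]))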
Abstract:
Efficient management of domestic wastewater is a primary requirement for human well-being. Failure to adequately address issues of wastewater collection, treatment and disposal can lead to adverse public health and environmental impacts. The increasing spread of urbanisation has led to the conversion of previously rural land into urban developments and the more intensive development of semi-urban areas. However, the provision of reticulated sewerage facilities has not kept pace with this expansion in urbanisation. This has resulted in a growing dependency on onsite sewage treatment. Though considered only a temporary measure in the past, these systems are now regarded as the most cost-effective option and have become a permanent feature in some urban areas. This report is the first of a series of reports to be produced and is the outcome of a research project initiated by the Brisbane City Council. The primary objective of the research undertaken was to relate the treatment performance of onsite sewage treatment systems to soil conditions at the site, with the emphasis being on septic tanks. This report consists of a ‘state of the art’ review of research undertaken in the arena of onsite sewage treatment. The evaluation of research brings together significant work undertaken locally and overseas. It focuses mainly on septic tanks in keeping with the primary objectives of the project. This report has acted as the springboard for the later field investigations and analysis undertaken as part of the project. Septic tanks continue to be used widely due to their simplicity and low cost. Generally, the treatment performance of septic tanks can be highly variable due to numerous factors, but a properly designed, operated and maintained septic tank can produce effluent of satisfactory quality. The reduction of hydraulic surges from washing machines and dishwashers, regular removal of accumulated septage and the elimination of harmful chemicals are some of the practices that can improve system performance considerably. The relative advantage of multi-chamber over single-chamber septic tanks is an issue that needs to be resolved in view of conflicting research outcomes. In recent years, aerobic wastewater treatment systems (AWTS) have been gaining in popularity. This can be mainly attributed to the desire to avoid subsurface effluent disposal, which is the main cause of septic tank failure. The use of aerobic processes for treatment of wastewater and the disinfection of effluent prior to disposal is capable of producing effluent of a quality suitable for surface disposal. However, the field performance of these systems has been disappointing. A significant number of these systems do not perform to stipulated standards and quality can be highly variable. This is primarily due to householder neglect or ignorance of correct operational and maintenance procedures. The other problems include greater susceptibility to shock loadings and sludge bulking. As identified in the literature, a number of design features can also contribute to this wide variation in quality. The other treatment processes in common use are the various types of filter systems. These include intermittent and recirculating sand filters. These systems too have their inherent advantages and disadvantages. Furthermore, as in the case of aerobic systems, their performance is very much dependent on individual householder operation and maintenance practices.
In recent years, the use of biofilters, and particularly peat, has attracted research interest. High removal rates of various wastewater pollutants have been reported in the research literature. Despite these satisfactory results, leachate from peat has been reported in various studies. This is an issue that needs further investigation and, as such, biofilters can still be considered to be at an experimental stage. The use of other filter media such as absorbent plastic and bark has also been reported in the literature. The safe and hygienic disposal of treated effluent is a matter of concern in the case of onsite sewage treatment. Subsurface disposal is the most common option, and the only option in the case of septic tank treatment. Soil is an excellent treatment medium if suitable conditions are present. The processes of sorption, filtration and oxidation can remove the various wastewater pollutants. The subsurface characteristics of the disposal area are among the most important parameters governing process performance. Therefore it is important that the soil and topographic conditions are taken into consideration in the design of the soil absorption system. Seepage trenches and beds are the common systems in use. Seepage pits or chambers can be used where subsurface conditions warrant, whilst above-grade mounds have been recommended for a variety of difficult site conditions. All these systems have their inherent advantages and disadvantages, and the preferable soil absorption system should be selected based on site characteristics. The use of gravel as in-fill for beds and trenches is open to question. It does not contribute to effluent treatment and has been shown to reduce the effective infiltrative surface area. This is due to physical obstruction and the migration of fines entrained in the gravel into the soil matrix. The surface application of effluent is coming into increasing use with the advent of aerobic treatment systems. This has the advantage that treatment is undertaken in the upper soil horizons, which are chemically and biologically the most effective in effluent renovation. Numerous research studies have demonstrated the feasibility of this practice. However, the overriding criterion is the quality of the effluent. It has to be of exceptionally good quality in order to ensure that there are no resulting public health impacts due to aerosol drift. This essentially is the main issue of concern, due to the unreliability of the effluent quality from aerobic systems. Secondly, it has also been found that most householders do not take adequate care in the operation of spray irrigation systems or in the maintenance of the irrigation area. Under these circumstances, surface disposal of effluent should be approached with caution and would require appropriate householder education and stringent compliance requirements. However, despite all this, the efficiency with which the process is undertaken will ultimately rest with the individual householder, and this is where most concern lies. Greywater, too, requires similar consideration. Surface irrigation of greywater is currently being permitted in a number of local authority jurisdictions in Queensland. Considering the fact that greywater constitutes the largest fraction of the total wastewater generated in a household, it could be considered a potential resource. Unfortunately, in most circumstances the only pretreatment required prior to reuse is the removal of oil and grease.
This is an issue of concern, as greywater can be considered a weak to medium-strength sewage: it contains primary pollutants such as BOD material and nutrients and may also include microbial contamination. Therefore its use for surface irrigation can pose a potential health risk. This is further compounded by the fact that most householders are unaware of the potential adverse impacts of indiscriminate greywater reuse. As in the case of blackwater effluent reuse, there have been suggestions that greywater should also be subjected to stringent guidelines. Under these circumstances, the surface application of any wastewater requires careful consideration. The other option available for the disposal of effluent is the use of evaporation systems. The use of evapotranspiration systems has been covered in this report. Research has shown that these systems are susceptible to a number of factors, in particular climatic conditions. As such, their applicability is location-specific. Also, the design of systems based solely on evapotranspiration is questionable. In order to ensure greater reliability, the systems should be designed to include soil absorption. The successful use of these systems for intermittent usage has been noted in the literature. Taking into consideration the issues discussed above, subsurface disposal of effluent is the safest under most conditions, provided the facility has been designed to accommodate site conditions. The main problem associated with subsurface disposal is the formation of a clogging mat on the infiltrative surfaces. Due to the formation of the clogging mat, the capacity of the soil to handle effluent is no longer governed by the soil’s hydraulic conductivity as measured by the percolation test, but rather by the infiltration rate through the clogged zone. The characteristics of the clogging mat have been shown to be influenced by various soil and effluent characteristics. Secondly, the mechanisms of clogging mat formation have been found to be influenced by various physical, chemical and biological processes. Biological clogging is the most common process taking place and occurs due to bacterial growth or its by-products reducing the soil pore diameters. Biological clogging is generally associated with anaerobic conditions. The formation of the clogging mat provides significant benefits. It acts as an efficient filter for the removal of microorganisms. Also, as the clogging mat increases the hydraulic impedance to flow, unsaturated flow conditions will occur below the mat. This permits greater contact between effluent and soil particles, thereby enhancing the purification process. This is particularly important in the case of highly permeable soils. However, the adverse impacts of clogging mat formation cannot be ignored, as they can lead to a significant reduction in the infiltration rate. This, in fact, is the most common cause of soil absorption system failure. As the formation of the clogging mat is inevitable, it is important to ensure that it does not impede effluent infiltration beyond tolerable limits. Various strategies have been investigated to either control clogging mat formation or remediate its severity. Intermittent dosing of effluent is one such strategy that has attracted considerable attention. Research conclusions with regard to short-duration time intervals are contradictory.
It has been claimed that the intermittent rest periods would result in the aerobic decomposition of the clogging mat, leading to a subsequent increase in the infiltration rate. Contrary to this, it has also been claimed that short-duration rest periods are insufficient to completely decompose the clogging mat, and that the intermediate by-products that form as a result of aerobic processes would in fact lead to even more severe clogging. It has been further recommended that the rest periods should be much longer, in the range of about six months. This entails the provision of a second and alternating seepage bed. The other concepts that have been investigated are the design of the bed to meet the equilibrium infiltration rate that would eventuate after clogging mat formation; improved geometry, such as the use of seepage trenches instead of beds; serial instead of parallel effluent distribution; and low-pressure dosing of effluent. The use of physical measures such as oxidation with hydrogen peroxide and replacement of the infiltration surface has been shown to be only of short-term benefit. Another issue of importance is the degree of pretreatment that should be provided to the effluent prior to subsurface application and the influence exerted by pollutant loadings on clogging mat formation. Laboratory studies have shown that the total mass loadings of BOD and suspended solids are important factors in the formation of the clogging mat. It has also been found that the nature of the suspended solids is an important factor. The finer particles from extended aeration systems, when compared to those from septic tanks, will penetrate deeper into the soil and hence will ultimately cause a denser clogging mat. However, the importance of improved pretreatment in clogging mat formation may need to be qualified in view of other research studies, which have shown that effluent quality may be a factor in the case of highly permeable soils but that this may not be the case with fine-structured soils. The ultimate test of onsite sewage treatment system efficiency rests with the final disposal of effluent. The implications of system failure, as evidenced by the surface ponding of effluent or the seepage of contaminants into the groundwater, can be very serious, as they can lead to environmental and public health impacts. Significant microbial contamination of surface and groundwater has been attributed to septic tank effluent. There are a number of documented instances of septic tank-related waterborne disease outbreaks affecting large numbers of people. In a recent incident, the local authority, and not the individual septic tank owners, was found liable for an outbreak of viral hepatitis A, as no action had been taken to remedy septic tank failure. This illustrates the responsibility placed on local authorities in terms of ensuring the proper operation of onsite sewage treatment systems. Even a properly functioning soil absorption system is only capable of removing phosphorus and microorganisms. The nitrogen remaining after plant uptake will not be retained in the soil column, but will instead gradually seep into the groundwater as nitrate. Conditions for nitrogen removal by denitrification are not generally present in a soil absorption bed. Dilution by groundwater is the only treatment available for reducing the nitrogen concentration to specified levels. Therefore, based on subsurface conditions, this essentially entails a maximum allowable concentration of septic tanks in a given area.
Unfortunately, nitrogen is not the only wastewater pollutant of concern. Relatively long survival times and travel distances have been noted for microorganisms originating from soil absorption systems. This is likely to happen if saturated conditions persist under the soil absorption bed or due to surface runoff of effluent as a result of system failure. Soils have a finite capacity for the removal of phosphorus. Once this capacity is exceeded, phosphorus too will seep into the groundwater. The relatively high mobility of phosphorus in sandy soils has been noted in the literature. These issues have serious implications for the design and siting of soil absorption systems. It is important not only to ensure that the system design is based on subsurface conditions, but also to recognise that the density of these systems in a given area is a critical issue. This essentially involves the adoption of a land capability approach to determine the limitations of an individual site for onsite sewage disposal. The most limiting factor at a particular site would determine the overall capability classification for that site, which would also dictate the type of effluent disposal method to be adopted.
Abstract:
Every day, inboxes are flooded with invitations to invest money in overseas schemes, notifications of overseas lottery wins and inheritances, and emails from banks and other institutions asking customers to confirm information about their identity and account details. While these requests may seem outrageous, many people believe them to be genuine and respond by sending money or personal details. This can have devastating consequences: financially, emotionally and physically. While enforcement action is important, greater success is likely to come in the area of prevention, which avoids victim losses in the first place. Considerable support is also required by victims who have suffered significant losses, to help them get their lives back on track. This project examined fraud prevention strategies and support services for victims of online fraud across the United Kingdom, United States of America and Canada. While much work has already been undertaken in Queensland, there is considerable room for improvement and a great deal can be learnt from these overseas jurisdictions. Several examples of innovative and effective responses, particularly in the area of victim support, are highlighted throughout this report. It is advocated that Australia can continue to improve its prevention of online fraud and support of its victims by applying the knowledge and expertise learnt overseas to the local context.
Abstract:
Although there is a paucity of scientific support for the benefits of warm-up, athletes commonly warm up prior to activity with the intention of improving performance and reducing the incidence of injuries. The purpose of this study was to examine the role of warm-up intensity on both range of motion (ROM) and anaerobic performance. Nine males (age = 21.7 +/- 1.6 years, height = 1.77 +/- 0.04 m, weight = 80.2 +/- 6.8 kg, and VO2max = 60.4 +/- 5.4 ml/kg/min) completed four trials. Each trial consisted of hip, knee, and ankle ROM evaluation using an electronic inclinometer and an anaerobic capacity test on the treadmill (time to fatigue at 13 km/hr and 20% grade). Subjects underwent no warm-up or a warm-up of 15 minutes running at 60, 70 or 80% VO2max followed by a series of lower limb stretches. Intensity of warm-up had little effect on ROM, since ankle dorsiflexion and hip extension significantly increased in all warm-up conditions, hip flexion significantly increased only after the 80% VO2max warm-up, and knee flexion did not change after any warm-up. Heart rate and body temperature were significantly increased (p < 0.05) prior to anaerobic performance for each of the warm-up conditions, but anaerobic performance improved significantly only after warm-up at 60% VO2max (10%) and 70% VO2max (13%). A 15-minute warm-up at an intensity of 60-70% VO2max is therefore recommended to improve ROM and enhance subsequent anaerobic performance.