988 results for "Recommended guidelines"


Relevance: 20.00%

Abstract:

NICE guidelines state that patients undergoing elective hip surgery are at increased risk of venous thromboembolic events (VTE) following surgery and recommend thromboprophylaxis for 28-35 days [1, 2]. However, the studies of the new direct thrombin inhibitors have examined only major bleeding. We prospectively recorded wound discharge in patients who underwent hip arthroplasty and were given dabigatran postoperatively between March 2010 and April 2010 (n=56). We retrospectively compared these results with a matched group of patients who underwent similar operations six months earlier, when all patients were routinely given dalteparin postoperatively until discharge and then sent home on 150 mg aspirin daily for 6 weeks (n=67). Wound discharge after 5 days was significantly higher in the patients taking dabigatran (32% dabigatran, n=18, vs. 10% dalteparin, n=17; p=0.003), and our rate of delayed discharges due to wound discharge increased significantly, from 7% in the dalteparin group (n=5) to 27% for dabigatran (n=15; p=0.004). Patients who received dabigatran were more than five times as likely to return to theatre with a wound complication as those who received dalteparin (7% dabigatran, n=4, vs. 1% dalteparin, n=1); however, this difference was not statistically significant (p=0.18). The significantly higher rates of wound discharge and return to theatre demonstrated in this study have led us to change our practice: we now administer dalteparin until the wound is dry and then start dabigatran. Our study demonstrates the need for further clinical studies of wound discharge with dabigatran.
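The proportion comparisons above can be reproduced with a standard test of independence. The sketch below is illustrative only: the abstract does not state which test the authors used, and the counts are simply those quoted for the return-to-theatre comparison (4/56 dabigatran vs. 1/67 dalteparin).

```python
# Illustrative only: the abstract does not state which test was used.
# Fisher's exact test is a common choice for small counts such as the
# return-to-theatre comparison (4/56 dabigatran vs. 1/67 dalteparin).
from scipy.stats import fisher_exact

events_dabigatran, n_dabigatran = 4, 56
events_dalteparin, n_dalteparin = 1, 67

table = [
    [events_dabigatran, n_dabigatran - events_dabigatran],
    [events_dalteparin, n_dalteparin - events_dalteparin],
]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
# The odds ratio is roughly 5, and the p-value is non-significant,
# consistent with the direction of the result reported in the abstract.
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2f}")
```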

Relevance: 20.00%

Abstract:

The compressed gas industry and government agencies worldwide use "adiabatic compression" testing to qualify high-pressure valves, regulators, and other related flow control equipment for gaseous oxygen service. The methodology is known by various terms, the most common being adiabatic compression testing, gaseous fluid impact testing, pneumatic impact testing, and BAM testing. It is described in greater detail throughout this document, but in summary it consists of pressurizing a test article (valve, regulator, etc.) with gaseous oxygen within 15 to 20 milliseconds (ms). Because the driven gas and the driving gas are rapidly compressed to the final test pressure at the inlet of the test article, the sudden increase in pressure rapidly heats them to temperatures (thermal energies) sufficient to sometimes ignite the nonmetallic materials (seals and seats) used within the test article. In general, the more rapid the compression process, the more "adiabatic" the pressure surge is presumed to be and the more closely it has been argued to approximate an isentropic process. Adiabatic compression is widely considered the most efficient ignition mechanism for directly kindling a nonmetallic material in gaseous oxygen and has been implicated in many fire investigations. Because many nonmetallic materials ignite readily under this heating mechanism, many industry standards prescribe this testing. However, the results from the various laboratories conducting the testing have not always been consistent. Research into the test method indicated that the thermal profile (i.e., the temperature/time history of the gas) achieved during adiabatic compression testing as required by the prevailing industry standards has not been fully modeled or empirically verified, although attempts have been made. This research evaluated the following questions: (1) Can the rapid compression process required by the industry standards be modeled thermodynamically and fluid dynamically so that predictions of the thermal profiles can be made? (2) Can the thermal profiles produced by the rapid compression process be measured, in order to validate the thermodynamic and fluid dynamic models and estimate the severity of the test? (3) Can controlling parameters be recommended so that new guidelines may be established for the industry standards, resolving inconsistencies between the test laboratories conducting tests according to the present standards?
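As a point of reference for the heating mechanism described above, the ideal (isentropic) compression temperature can be estimated from the pressure ratio alone. The sketch below uses the textbook ideal-gas relation with illustrative numbers; it is not the thermodynamic and fluid-dynamic model developed in this research, which must also account for heat transfer and the finite 15-20 ms pressurization time.

```python
# Ideal (isentropic) compression temperature estimate for oxygen.
# Illustrative numbers only; the real test involves heat losses and a
# finite pressurization time, so actual gas temperatures are lower.
GAMMA_O2 = 1.4  # approximate ratio of specific heats for diatomic oxygen

def isentropic_temperature(t_initial_k: float, p_initial_pa: float,
                           p_final_pa: float, gamma: float = GAMMA_O2) -> float:
    """Final gas temperature (K) after ideal adiabatic compression."""
    return t_initial_k * (p_final_pa / p_initial_pa) ** ((gamma - 1.0) / gamma)

# Example: compressing from 0.1 MPa (ambient) to 25 MPa starting at 293 K.
t_final = isentropic_temperature(293.0, 0.1e6, 25e6)
print(f"Ideal final temperature = {t_final:.0f} K ({t_final - 273.15:.0f} C)")
```

Even this idealised estimate gives temperatures well above the autoignition temperatures of common seal and seat polymers, which is why the mechanism is treated as such an effective ignition source.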

Relevance: 20.00%

Abstract:

Digital human modeling (DHM), as a convenient and cost-effective tool, is increasingly incorporated into product and workplace design. In product design it is predominantly used for the development of driver-vehicle systems. Most digital human modeling software tools, such as JACK, RAMSIS and DELMIA HUMANBUILDER, provide functions to predict postures and positions for drivers with selected anthropometry according to SAE (Society of Automotive Engineers) Recommended Practices and other ergonomics guidelines. However, few studies have presented postural information for 2nd-row passengers, and digital human modeling of these passenger postures cannot be performed directly using the existing driver posture prediction functions. In this paper, the significant studies related to occupant posture and modeling were reviewed and a framework of the determinants of driver vs. 2nd-row occupant posture modeling was extracted. The determinants, which serve as input factors for posture modeling, include target population anthropometry, vehicle package geometry and seat design variables, as well as task definitions. The differences between the determinants of driver and 2nd-row occupant posture models are significant, as driver posture modeling is primarily based on the position of the foot on the accelerator pedal (accelerator actuation point AAP, accelerator heel point AHP) and the hands on the steering wheel (steering wheel centre point, A-Point). The objectives of this paper are to investigate those differences between driver and passenger posture and to supplement the existing parametric models for occupant posture prediction. Guided by the framework, the associated input parameters of occupant digital human models for both the driver and the second-row occupant will be identified. Building on the existing occupant posture models, a driver posture model could, for example, be modified to predict second-row occupant posture by adjusting the associated input parameters introduced in this paper. This study combines results from a literature review and the theoretical modeling stage of a second-row passenger posture prediction model project.
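To make the framework concrete, a parametric posture model of the kind discussed here typically expresses a posture variable (for example, fore-aft hip location relative to a seat reference point) as a regression on the input determinants. The sketch below is purely hypothetical: the variable names, linear form and coefficient values are placeholders and are not taken from this paper, the SAE Recommended Practices, or any DHM tool.

```python
# Hypothetical sketch of a parametric second-row occupant posture model.
# Coefficients and the linear form are placeholders for illustration only.
from dataclasses import dataclass

@dataclass
class SecondRowInputs:
    stature_mm: float         # target occupant anthropometry
    seat_height_mm: float     # package geometry: seat reference height
    cushion_angle_deg: float  # seat design variable
    legroom_mm: float         # 2nd-row specific: clearance to front seatback

def predict_hip_x(inputs: SecondRowInputs) -> float:
    """Fore-aft hip location (mm aft of a seat reference point), placeholder model."""
    return (50.0
            + 0.05 * inputs.stature_mm
            - 0.10 * inputs.seat_height_mm
            + 1.50 * inputs.cushion_angle_deg
            + 0.02 * inputs.legroom_mm)

print(predict_hip_x(SecondRowInputs(1750, 280, 14, 850)))
```

The point of the framework is which inputs appear in such a model: for the driver the task-related terms are pedal and steering wheel positions, whereas for the second-row occupant they are replaced by package terms such as legroom and seatback geometry.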

Relevance: 20.00%

Abstract:

With the goal of improving the academic performance of primary and secondary students in Malaysia by 2020, the Malaysian Ministry of Education has made a significant investment in developing the Smart School Project. The aim of this project is to introduce interactive courseware into primary and secondary schools across Malaysia. As has been the case around the world, interactive courseware is regarded as a tool to motivate students to learn meaningfully and to enhance learning experiences. Through an initial pilot phase, the Malaysian government commissioned the development of interactive courseware from a number of developers and has rolled this courseware out to selected schools over the past 12 years. However, Ministry reports and several independent researchers have concluded that its uptake has been limited and that much of the courseware has not been used effectively in schools. This has been attributed to weaknesses in the interface design of the courseware, which, it has been argued, fails to accommodate the needs of students and teachers. Taking the Smart School Project's science courseware as a sample, this research project investigated the extent, nature and reasons for the problems that have arisen. In particular, it focused on examining the quality and effectiveness of the interface design in facilitating interaction and supporting learning experiences. The analysis was conducted empirically, first by comparing the interface design principles, characteristics and components of the existing courseware against best practice as described in the international literature, as well as against the government guidelines provided to the developers. An ethnographic study was then undertaken to observe how the courseware is used and received in the classroom, and to investigate stakeholders' (school principal, teacher and student) perceptions of its usability and effectiveness. Finally, to understand how issues may have arisen, the development process was reviewed and compared with the development methods recommended in the literature, as well as with the guidelines provided to the developers. The outcomes of the project include an empirical evaluation of the quality of the interface design of the Smart School Project's science courseware; the identification of other issues that have affected its uptake; an evaluation of the development process; and, arising from this, an extended set of principles to guide the design and development of future Smart School Project courseware to ensure that it accommodates the various stakeholders' needs.

Relevance: 20.00%

Abstract:

Driving while using prescription medicines that have the potential to impair driving is an emerging research area. To date it is characterised by a limited (although growing) number of studies and by methodological complexities that make generalisations about impairment due to medications difficult. Consistent evidence has been found for the impairing effects of hypnotics, sedative antidepressants and antihistamines, and narcotic analgesics, although it has been estimated that as many as nine medication classes have the potential to impair driving (Alvarez & del Rio, 2000; Walsh, de Gier, Christopherson, & Verstraete, 2004). There is also evidence for increased negative effects from concomitant use of other medications and alcohol (Movig et al., 2004; Pringle, Ahern, Heller, Gold, & Brown, 2005). Statistics on the high levels of Australian prescription medication use suggest that consumer awareness of driving impairment due to medicines should be examined. One web-based study found low levels of awareness, knowledge and risk perception among Australian drivers about the impairing effects of various medications on driving (Mallick, Johnston, Goren, & Kennedy, 2007). This lack of awareness and knowledge brings into question the effectiveness of the existing countermeasures. In Australia, these consist of ancillary warning labels administered under mandatory regulation and professional guidelines, advice to patients, and the use of Consumer Medicines Information (CMI) with medications that are known to cause impairment. The responsibility for the use of the warnings and related counsel to patients lies primarily with the pharmacist when dispensing the relevant medication. A review by the Therapeutic Goods Administration (TGA) noted that, in practice, advice to patients may not occur and that CMI is not always available (TGA, 2002). Researchers have also found that patients' recall of verbal counsel is very low (Houts, Bachrach, Witmer, Tringali, Bucher, & Localio, 1998). With healthcare increasingly being provided in outpatient settings (Davis et al., 2006; Vingilis & MacDonald, 2000), establishing the effectiveness of the warning labels as a countermeasure is especially important. There have been recent international developments in medication categorisation systems and associated medication warning labels. In 2005, France implemented a four-tier medication categorisation and warning system to improve patients' and health professionals' awareness and knowledge of related road safety issues (AFSSAPS, 2005). This warning system uses a pictogram, indicates the level of potential driving impairment through the use of colour, and gives advice on the recommended behaviour to adopt towards driving. The comparable Australian system does not indicate the severity level of potential effects and does not provide specific guidance on the attitude or actions the individual should adopt towards driving. It relies on the patient to be vigilant in self-monitoring effects, to understand the ways in which they may be affected and how serious these effects may be, and to adopt appropriate protective actions. This thesis investigates the responses of a sample of Australian hospital outpatients who received appropriate labelling and counselling advice about potential driving impairment due to prescribed medicines.
It aims to provide baseline data on the understanding and use of relevant medications by a Queensland public hospital outpatient sample recruited through the hospital pharmacy. It compares the effect of the Australian and French medication warning systems on medication users' knowledge, attitudes, beliefs and behaviour, and explores whether the Australian system could be improved by including beneficial elements of the French system. A total of 358 outpatients were surveyed, and a follow-up telephone survey was conducted with a subgroup of consenting participants who were taking at least one medication that required an ancillary warning label about driving impairment. A complementary study of 75 French hospital outpatients was also conducted to further investigate the performance of the warnings. Not surprisingly, medication use among the Australian outpatient sample was high, and the ancillary warning labels required to appear on medications that can impair driving were prevalent. A subgroup of participants was identified as potentially at risk of driving while impaired, based on their reported recent use of medications requiring an ancillary warning label and their level of driving activity. The sample reported previous behaviour and held future intentions that were consistent with warning label advice and health-protective action. Participants did not express a particular need to be advised by a health professional regarding fitness to drive in relation to their medication. However, it was also apparent from the analysis that participants would be significantly more likely to follow advice from a doctor than from a pharmacist. Participants showed high levels of knowledge of general principles about the effects of alcohol, illicit drugs and combinations of substances, and the related health and crash risks. This may reflect a sample-specific effect: the professional guidelines for hospital pharmacists emphasise that advisory labels must be applied to medicines where applicable and that warning advice be given to all patients on medication which may affect driving (SHPA, 2006, p. 221). The research program applied selected theoretical constructs from Schwarzer's (1992) Health Action Process Approach, which has extended constructs from existing health theories such as the Theory of Planned Behavior (Ajzen, 1991) to better account for the intention-behaviour gap often observed when predicting behaviour. This was undertaken to explore the utility of the constructs in understanding and predicting intentions and behaviour regarding compliance with the mandatory medication warning about driving impairment. This investigation revealed that the theoretical constructs related to intention and planning to avoid driving, if an effect from the medication was noticed, were useful. Not all of the theoretical model constructs that had been demonstrated to be significant predictors in previous research on different health behaviours were significant in the present analyses. Positive outcome expectancies from avoiding driving were found to be important influences on forming the intention to avoid driving if an effect due to medication was noticed. In turn, intention was found to be a significant predictor of planning. Other selected theoretical constructs failed to predict compliance with the Australian warning label advice.
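The kind of analysis described in this paragraph is commonly run as a two-step regression, with intention regressed on the motivational constructs and planning regressed on intention. The sketch below is generic: the variable names and data are hypothetical and do not reproduce the measures or results of this thesis.

```python
# Generic sketch of a HAPA-style two-step regression; variable names and
# data are hypothetical and do not reproduce the thesis analyses.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "outcome_expectancy": [3.2, 4.1, 2.5, 4.8, 3.9, 2.8, 4.4, 3.1],
    "risk_perception":    [2.0, 3.5, 1.8, 4.2, 3.0, 2.2, 3.8, 2.6],
    "intention":          [3.0, 4.5, 2.2, 4.9, 4.0, 2.5, 4.6, 3.2],
    "planning":           [2.8, 4.2, 2.0, 4.7, 3.8, 2.4, 4.5, 3.0],
})

# Step 1: motivational constructs predicting intention to avoid driving if affected.
intention_model = smf.ols("intention ~ outcome_expectancy + risk_perception", data=df).fit()
# Step 2: intention predicting planning.
planning_model = smf.ols("planning ~ intention", data=df).fit()

print(intention_model.params)
print(planning_model.params)
```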
It is possible that the limited predictive power of a number of constructs, including risk perceptions, is due to the small sample size obtained at follow-up, on which the evaluation is based. Alternatively, it is possible that the theoretical constructs failed to sufficiently account for issues of particular relevance to the driving situation. The responses of the Australian hospital outpatient sample towards the Australian and French medication warning labels, which differ in visual characteristics and warning message, were examined. In addition, a complementary study with a sample of French hospital outpatients was undertaken in order to allow general comparisons of the performance of the warnings. While a large amount of research exists concerning warning effectiveness, little research has specifically investigated medication warnings relating to driving impairment. General established principles concerning factors that have been demonstrated to enhance warning noticeability and behavioural compliance were extrapolated and investigated in the present study. The extent to which there is a need for education and improved health messages on this issue was a core question investigated in this thesis. Among the Australian sample, the size of the warning label and text, and the red colour, were the most visually important characteristics. The pictogram used in the French labels was also rated highly and was salient for a large proportion of the sample. In the study of French hospital outpatients, the pictogram was perceived to be the most important visual characteristic. Overall, the findings suggest that the Australian approach of using a combination of visual characteristics was important for the majority of the sample, but that the use of a pictogram could enhance effects. A high rate of warning recall was found overall, and a further important finding was that higher warning label recall was associated with an increased number of medication classes taken. These results suggest that increased vigilance and care are associated with the number of medications taken and the associated repetition of the warning message. Significantly higher levels of risk perception were found for the French Level 3 (highest severity) label compared with the comparable mandatory Australian ancillary Label 1 warning. Participants' intentions related to the warning labels indicated that they would be more cautious while taking potentially impairing medication displaying the French Level 3 label than the Australian Label 1. These are potentially important findings for the Australian context regarding the driving impairment warnings currently displayed on medication, and they raise other important implications for Australian labelling. An underlying factor may be the differences in the wording of the warning messages that appear on the Australian and French labels. The French label explicitly states "do not drive" while the Australian label states "if affected, do not drive", and the difference in responses may reflect that less severity is perceived when the situation relies on the consumer's self-assessment of their impairment.
The differences in the assignment of responsibility for the decision to drive while taking medication between the Australian approach (the consumer assesses and decides) and the French approach (the doctor assesses and decides) raise the core question of who is best placed to assess driving impairment due to medication: the consumer, or the health professional? There are pros and cons related to knowledge, expertise and practicalities with either option. However, if the safety of the consumer is the primary aim, then the trend towards stronger risk perceptions and more consistent and cautious behavioural intentions in relation to the French label suggests that this approach may be more beneficial for consumer safety. The observations from the follow-up survey, although based on a small sample and descriptive in nature, revealed that just over half of the sample recalled seeing a warning label about driving impairment on at least one of their medications. The majority of these respondents reported compliance with the warning advice. However, the results indicated variation in responses concerning alcohol intake and modifying the dose of medication or driving habits so that they could continue to drive, which suggests that the warning advice may not be having the desired impact. The findings of this research have implications for current countermeasures in this area. These include enhancing the role that prescribing doctors play in providing warnings and advice to patients about the impact that their medication can have on driving, increasing consumer perceptions of the authority of pharmacists on this issue, and reinforcing the warning message. More broadly, it is suggested that there would be benefit in wider dissemination of research-based information on increased crash risk, and in systematic monitoring of, and publicity about, the representation of medications in crashes resulting in injuries and fatalities. Suggestions for future research concern the continued investigation of the effects on driving skills of medications and their interactions with existing medical conditions and other substances, the effects of variations in warning label design, individual behaviours and characteristics (particularly among groups dependent on prescription medication), and the validation of consumer self-assessment of impairment.

Relevance: 20.00%

Abstract:

Lower energy and protein intakes are well documented in patients on texture-modified diets. In acute hospital settings, the provision of appropriate texture-modified foods that meet industry standards is essential for patient safety and nutrition outcomes. The texture-modified menu at an acute private hospital was evaluated against the hospital's own nutritional standards (NS) and the Australian National Standards (Dietitians Association of Australia and Speech Pathology Australia, 2007). The NS specify portion sizes and nutritional requirements for each menu. Texture B and C menus were analysed qualitatively and quantitatively over 9 days of a 6-day cyclic menu for breakfast (n=4), lunch (n=34) and dinner (n=34). Results indicated a lack of portion control, as specified by the NS, across all meals, including breakfast (65–140% of the specified portion), soup (55–115%), meat (45–165%), vegetables (55–185%) and desserts (30–300%). Dilution factors and portion sizes influenced the protein and energy availability of the Texture B and C menus. While the Texture B menu provided more energy, neither menu met the NS. Limited dessert options on the Texture C menu restricted its ability to meet the protein NS. A lack of portion control and incorrectly modified menu items can compromise protein and energy intakes. Strategies to correct serving sizes and to provide alternative protein sources were recommended. Suggestions included cost-effectively increasing the variety of foods to support protein and energy intake, and procuring standardised equipment and visual aids to assist food preparation and presentation in accordance with texture modification guidelines and the NS.

Relevance: 20.00%

Abstract:

Objective: Although several validated nutritional screening tools have been developed to "triage" inpatients for malnutrition diagnosis and intervention, there continues to be debate in the literature as to which tool or tools clinicians should use in practice. This study compared the accuracy of seven validated screening tools in older medical inpatients against two validated nutritional assessment methods. Methods: This was a prospective cohort study of medical inpatients at least 65 y old. Malnutrition screening was conducted using seven tools recommended in evidence-based guidelines. Nutritional status was assessed by an accredited practicing dietitian using the Subjective Global Assessment (SGA) and the Mini Nutritional Assessment (MNA). Energy intake was observed on a single day during the first week of hospitalization. Results: In this sample of 134 participants (80 ± 8 y old, 50% women), there was fair agreement between the SGA and MNA (κ = 0.53), with the MNA identifying more "at-risk" patients and the SGA better identifying existing malnutrition. Most tools were accurate in identifying patients with malnutrition as determined by the SGA, in particular the Malnutrition Screening Tool and the Nutritional Risk Screening 2002. The MNA Short Form was most accurate at identifying nutritional risk according to the MNA. No tool accurately predicted patients with inadequate energy intake in hospital. Conclusion: Because all tools generally performed well, clinicians should consider choosing a screening tool that best aligns with their chosen nutritional assessment and is easiest to implement in practice. This study confirmed the importance of rescreening and of monitoring food intake to allow the early identification and prevention of nutritional decline in patients with poor intake during hospitalization.
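The agreement statistic quoted above (κ = 0.53 between the SGA and MNA) is Cohen's kappa, which corrects observed agreement for the agreement expected by chance. A minimal sketch with made-up classifications (not the study data) is shown below.

```python
# Cohen's kappa between two nutritional assessments.
# The labels below are made up for illustration; they are not the study data.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two equal-length label sequences."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(rater_a) | set(rater_b)) / (n * n)
    return (observed - expected) / (1 - expected)

sga = ["malnourished", "well", "well", "malnourished", "well", "malnourished", "well", "well"]
mna = ["malnourished", "malnourished", "well", "malnourished", "well", "well", "well", "malnourished"]
print(round(cohens_kappa(sga, mna), 2))  # 0.25 for these made-up labels
```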

Relevance: 20.00%

Abstract:

Most unsignalised intersection capacity calculation procedures are based on gap acceptance models, so the accuracy of critical gap estimation affects the accuracy of capacity and delay estimation. Several methods have been published for estimating drivers' sample mean critical gap, with the Maximum Likelihood Estimation (MLE) technique regarded as the most accurate. This study assesses three novel methods, the Average Central Gap (ACG) method, the Strength Weighted Central Gap (SWCG) method and the Mode Central Gap (MCG) method, against MLE for their fidelity in rendering true sample mean critical gaps. A Monte Carlo event-based simulation model was used to draw the maximum rejected gap and the accepted gap for each of a sample of 300 drivers across 32 simulation runs. The simulation mean critical gap was varied between 3 s and 8 s, while the offered gap rate was varied between 0.05 veh/s and 0.55 veh/s. This study affirms that MLE provides a close to perfect fit to the simulation mean critical gaps across a broad range of conditions. The MCG method also provides an almost perfect fit and has superior computational simplicity and efficiency to MLE. The SWCG method performs robustly under high flows but poorly under low to moderate flows. Further research using field traffic data is recommended, under a variety of minor-stream and major-stream flow conditions and for a variety of minor-stream movement types, to compare critical gap estimates from MLE against MCG. Should the MCG method prove as robust as MLE, serious consideration should be given to its adoption for estimating critical gap parameters in guidelines.
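For readers unfamiliar with the MLE referred to above, the standard formulation treats each driver's critical gap as lying between the largest gap they rejected and the gap they finally accepted, and maximises the resulting interval-censored likelihood under an assumed distribution. The sketch below is a minimal implementation of that idea assuming log-normally distributed critical gaps and synthetic data; it is not the simulation model or estimation code used in the study.

```python
# Minimal interval-censored MLE for the mean critical gap, assuming
# log-normal critical gaps. Data are synthetic, not the study's.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

rng = np.random.default_rng(0)
true_scale, true_sigma = 4.5, 0.25   # median (s) and log-sd of critical gaps
crit = lognorm.rvs(true_sigma, scale=true_scale, size=300, random_state=rng)
max_rejected = crit * rng.uniform(0.6, 1.0, size=300)  # largest rejected gap < critical gap
accepted = crit * rng.uniform(1.0, 1.5, size=300)      # accepted gap >= critical gap

def neg_log_likelihood(params):
    mu, sigma = params  # parameters of log(critical gap)
    cdf_hi = lognorm.cdf(accepted, sigma, scale=np.exp(mu))
    cdf_lo = lognorm.cdf(max_rejected, sigma, scale=np.exp(mu))
    return -np.sum(np.log(np.clip(cdf_hi - cdf_lo, 1e-12, None)))

result = minimize(neg_log_likelihood, x0=[np.log(4.0), 0.3], method="Nelder-Mead")
mu_hat, sigma_hat = result.x
# Mean of a log-normal is exp(mu + sigma^2/2); compare with the true value.
print("estimated mean critical gap (s):", np.exp(mu_hat + sigma_hat**2 / 2))
print("true mean critical gap (s):", true_scale * np.exp(true_sigma**2 / 2))
```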

Relevance: 20.00%

Abstract:

For decades the prevailing idea in B2B marketing has been that buyers are motivated by product/service specifications. Sellers are put on approved supplier lists, invited to respond to RFPs, and selected on the basis of superior products, at the right price, delivered on time. The history of B2B advertising is filled with the advice that if you "provide product specifications", your advertising will be noticed, lead to sales inquiries, and eventually result in higher sales. Advertising filled with abstractions might work in the B2C market, but the B2B marketplace is about being literal. What we know about advertising, and particularly about the message component of advertising, is based on a combination of experience, unproven ideas and a bit of social science. Over the years, advertising guidelines produced by the predecessors of the BMA (the National Industrial Advertising Association, the Association of Industrial Advertising, and the Business/Professional Advertising Association) stressed emphasizing product features and tangible benefits. The major publishers of B2B magazines, such as McGraw-Hill and Penton Publishing, among others, had similar recommendations, and B2B marketing books likewise recommend advertising that focuses on specific product features (Kotler and Pfoertsch, 2006; Lamons, 2005). In more recent times, abstraction in advertising messages has penetrated the B2B marketplace. Even though advertising legends such as David Ogilvy (1963, 1985) frequently recommended advertising based on hard-core information, we have seen the growing use of emotional appeals, including humor, fear and parental affection. Beyond the use of emotion, marketers attempt to build a stronger connection between their brands and buyers through the use of abstraction and symbolism. Below are two examples of B2B advertisements: Figure 1A is high in literalism and Figure 1B is high in symbolism. Which approach, the "left-brain" (literal) or the "right-brain" (symbolic), is more effective in B2B advertising? Are the advertising message creation guidelines from the history of B2B advertising accurate? Are the foundations of B2B message creation (experience and unproven ideas) sound?

Relevance: 20.00%

Abstract:

In Australia, research suggests that up to one quarter of child pedestrian hospitalisations result from driveway run-over incidents (Pinkney et al., 2006). In Queensland, these numbers equate to an average of four child fatalities and 81 children presenting at hospital emergency departments every year (The Commission for Children, Young People and Child Guardian). National comparison shows that these numbers represent a slightly higher per capita rate (23.5% of all deaths). To address this issue, the current research was undertaken with the aim of developing an educative intervention based on data collected from parents and caregivers of young children. Thus, the current project did not seek to use available intervention or educational material, but rather to develop a new evidence-based intervention specifically targeting driveway run-overs involving young children. To this end, the general behavioural and environmental changes that caregivers had undertaken in order to reduce the risk of injury to any child in their care were investigated. Broadly, the first part of this report sought to:
• develop a conceptual model of established domestic safety behaviours, and investigate whether this model could be successfully applied to the driveway setting;
• explore and compare sources of knowledge regarding domestic and driveway child safety; and
• examine the theoretical implications of current domestic and driveway-related behaviour and knowledge among caregivers.
The aim of the second part of this research was to develop and test the efficacy of an intervention based on the findings of the first part of the research project. Specifically, it sought to:
• develop an educational driveway intervention that is based on current safety behaviours in the domestic setting and informed by existing knowledge of driveway safety and behaviour change theory; and
• evaluate its efficacy in a sample of parents and caregivers.

Relevance: 20.00%

Abstract:

Efficient management of domestic wastewater is a primary requirement for human well-being. Failure to adequately address issues of wastewater collection, treatment and disposal can lead to adverse public health and environmental impacts. The increasing spread of urbanisation has led to the conversion of previously rural land into urban developments and the more intensive development of semi-urban areas. However, the provision of reticulated sewerage facilities has not kept pace with this expansion in urbanisation, resulting in a growing dependency on onsite sewage treatment. Though considered only a temporary measure in the past, these systems are now regarded as the most cost-effective option and have become a permanent feature in some urban areas. This report is the first of a series to be produced and is the outcome of a research project initiated by the Brisbane City Council. The primary objective of the research was to relate the treatment performance of onsite sewage treatment systems to soil conditions at the site, with the emphasis on septic tanks. The report consists of a 'state of the art' review of research undertaken in the arena of onsite sewage treatment, bringing together significant work undertaken locally and overseas. It focuses mainly on septic tanks, in keeping with the primary objectives of the project, and has acted as the springboard for the later field investigations and analysis undertaken as part of the project. Septic tanks continue to be used widely due to their simplicity and low cost. The treatment performance of septic tanks can be highly variable due to numerous factors, but a properly designed, operated and maintained septic tank can produce effluent of satisfactory quality. The reduction of hydraulic surges from washing machines and dishwashers, regular removal of accumulated septage and the elimination of harmful chemicals are some of the practices that can improve system performance considerably. The relative advantages of multi-chamber over single-chamber septic tanks are an issue that needs to be resolved in view of conflicting research outcomes. In recent years, aerobic wastewater treatment systems (AWTS) have been gaining in popularity. This can be attributed mainly to the desire to avoid subsurface effluent disposal, which is the main cause of septic tank system failure. The use of aerobic processes for the treatment of wastewater, and the disinfection of effluent prior to disposal, is capable of producing effluent of a quality suitable for surface disposal. However, the field performance of these systems has been disappointing: a significant number do not perform to stipulated standards, and effluent quality can be highly variable. This is primarily due to householder neglect or ignorance of correct operational and maintenance procedures. Other problems include greater susceptibility to shock loadings and sludge bulking. As identified in the literature, a number of design features can also contribute to this wide variation in quality. The other treatment processes in common use are the various types of filter systems, including intermittent and recirculating sand filters. These systems too have their inherent advantages and disadvantages and, as with aerobic systems, their performance depends heavily on individual householders' operation and maintenance practices.
In recent years the use of biofilters, and particularly peat filters, has attracted research interest. High removal rates of various wastewater pollutants have been reported in the research literature. Despite these satisfactory results, leachate from peat has been reported in various studies. This is an issue that needs further investigation, and as such biofilters can still be considered to be at the experimental stage. The use of other filter media, such as absorbent plastic and bark, has also been reported in the literature. The safe and hygienic disposal of treated effluent is a matter of concern in onsite sewage treatment. Subsurface disposal is the most common option, and the only option in the case of septic tank treatment. Soil is an excellent treatment medium if suitable conditions are present: the processes of sorption, filtration and oxidation can remove the various wastewater pollutants. The subsurface characteristics of the disposal area are among the most important parameters governing process performance, so it is important that soil and topographic conditions are taken into consideration in the design of the soil absorption system. Seepage trenches and beds are the systems in common use. Seepage pits or chambers can be used where subsurface conditions warrant, whilst above-grade mounds have been recommended for a variety of difficult site conditions. All these systems have their inherent advantages and disadvantages, and the preferred soil absorption system should be selected based on site characteristics. The use of gravel as in-fill for beds and trenches is open to question: it does not contribute to effluent treatment and has been shown to reduce the effective infiltrative surface area, due to physical obstruction and the migration of fines entrained in the gravel into the soil matrix. The surface application of effluent is coming into increasing use with the advent of aerobic treatment systems. This has the advantage that treatment occurs in the upper soil horizons, which are chemically and biologically the most effective in effluent renovation, and numerous research studies have demonstrated the feasibility of this practice. However, the overriding criterion is the quality of the effluent: it has to be of exceptionally good quality in order to ensure that there are no resulting public health impacts due to aerosol drift. This is the main issue of concern, given the unreliability of the effluent quality from aerobic systems. Secondly, it has also been found that most householders do not take adequate care in the operation of spray irrigation systems or in the maintenance of the irrigation area. Under these circumstances, surface disposal of effluent should be approached with caution and would require appropriate householder education and stringent compliance requirements. Despite all this, the efficiency with which the process is undertaken will ultimately rest with the individual householder, and this is where most concern lies. Greywater requires similar consideration. Surface irrigation of greywater is currently permitted in a number of local authority jurisdictions in Queensland. Considering that greywater constitutes the largest fraction of the total wastewater generated in a household, it could be considered a potential resource. Unfortunately, in most circumstances the only pretreatment required prior to reuse is the removal of oil and grease.
This is an issue of concern, as greywater can be considered a weak to medium-strength sewage: it contains primary pollutants such as BOD material and nutrients, and may also include microbial contamination. Its use for surface irrigation can therefore pose a potential health risk, further compounded by the fact that most householders are unaware of the potential adverse impacts of indiscriminate greywater reuse. As in the case of blackwater effluent reuse, there have been suggestions that greywater should also be subject to stringent guidelines. Under these circumstances, the surface application of any wastewater requires careful consideration. The other option available for the disposal of effluent is the use of evaporation systems; the use of evapotranspiration systems is covered in this report. Research has shown that these systems are susceptible to a number of factors, in particular climatic conditions, and as such their applicability is location-specific. The design of systems based solely on evapotranspiration is also questionable; to ensure greater reliability, such systems should be designed to include soil absorption. The successful use of these systems for intermittent usage has been noted in the literature. Taking into consideration the issues discussed above, subsurface disposal of effluent is the safest option under most conditions, provided the facility has been designed to accommodate site conditions. The main problem associated with subsurface disposal is the formation of a clogging mat on the infiltrative surfaces. Once the clogging mat forms, the capacity of the soil to handle effluent is no longer governed by the soil's hydraulic conductivity as measured by the percolation test, but rather by the infiltration rate through the clogged zone. The characteristics of the clogging mat have been shown to be influenced by various soil and effluent characteristics, and the mechanisms of its formation are influenced by various physical, chemical and biological processes. Biological clogging is the most common process and occurs when bacterial growth or its by-products reduce the soil pore diameters; it is generally associated with anaerobic conditions. The formation of the clogging mat provides significant benefits. It acts as an efficient filter for the removal of microorganisms, and because it increases the hydraulic impedance to flow, unsaturated flow conditions occur below the mat. This permits greater contact between effluent and soil particles, thereby enhancing the purification process, which is particularly important in the case of highly permeable soils. However, the adverse impacts of clogging mat formation cannot be ignored, as they can lead to a significant reduction in the infiltration rate; this is in fact the most common cause of soil absorption system failure. As the formation of the clogging mat is inevitable, it is important to ensure that it does not impede effluent infiltration beyond tolerable limits. Various strategies have been investigated either to control clogging mat formation or to remediate its severity. Intermittent dosing of effluent is one such strategy that has attracted considerable attention, although research conclusions with regard to short-duration rest intervals are contradictory.
It has been claimed that intermittent rest periods result in aerobic decomposition of the clogging mat, leading to a subsequent increase in the infiltration rate. Contrary to this, it has also been claimed that short rest periods are insufficient to completely decompose the clogging mat, and that the intermediate by-products formed by the aerobic processes in fact lead to even more severe clogging. It has been further recommended that the rest periods should be much longer, in the range of about six months, which entails the provision of a second, alternating seepage bed. The other concepts that have been investigated are the design of the bed to meet the equilibrium infiltration rate that would eventuate after clogging mat formation; improved geometry, such as the use of seepage trenches instead of beds; serial instead of parallel effluent distribution; and low-pressure dosing of effluent. The use of physical measures, such as oxidation with hydrogen peroxide and replacement of the infiltration surface, has been shown to be of only short-term benefit. Another important issue is the degree of pretreatment that should be provided to the effluent prior to subsurface application, and the influence exerted by pollutant loadings on clogging mat formation. Laboratory studies have shown that the total mass loadings of BOD and suspended solids are important factors in the formation of the clogging mat, and that the nature of the suspended solids is also important: the finer particles from extended aeration systems, compared with those from septic tanks, penetrate deeper into the soil and hence ultimately form a denser clogging mat. However, the importance of improved pretreatment in clogging mat formation may need to be qualified in view of other research studies, which have shown that effluent quality may be a factor in highly permeable soils but not in fine-structured soils. The ultimate test of onsite sewage treatment system efficiency rests with the final disposal of effluent. The implications of system failure, as evidenced by surface ponding of effluent or seepage of contaminants into the groundwater, can be very serious, leading to environmental and public health impacts. Significant microbial contamination of surface water and groundwater has been attributed to septic tank effluent, and there are a number of documented instances of septic-tank-related waterborne disease outbreaks affecting large numbers of people. In a recent incident, the local authority, and not the individual septic tank owners, was found liable for an outbreak of viral hepatitis A because no action had been taken to remedy septic tank failure. This illustrates the responsibility placed on local authorities in ensuring the proper operation of onsite sewage treatment systems. Even a properly functioning soil absorption system is only capable of removing phosphorus and microorganisms. The nitrogen remaining after plant uptake will not be retained in the soil column, but will instead gradually seep into the groundwater as nitrate, since conditions for nitrogen removal by denitrification are not generally present in a soil absorption bed. Dilution by groundwater is the only treatment available for reducing the nitrogen concentration to specified levels. Based on subsurface conditions, this essentially entails a maximum allowable concentration of septic tanks in a given area.
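The allowable-density argument in the preceding paragraph is usually made with a simple steady-state nitrate mass balance, mixing effluent recharge with rainfall recharge over the allotment. The sketch below follows that general form with purely illustrative numbers; it is not a calculation or figure from this report.

```python
# Illustrative steady-state nitrate mass balance for septic tank density.
# All parameter values are placeholders, not values from this report.
def diluted_nitrate_mg_per_l(lot_area_m2: float,
                             effluent_l_per_day: float = 600.0,   # household effluent flow
                             effluent_n_mg_per_l: float = 45.0,   # total N in effluent after uptake losses
                             recharge_mm_per_yr: float = 150.0,   # net rainfall recharge to groundwater
                             background_n_mg_per_l: float = 1.0) -> float:
    """Long-term average nitrate-N concentration in recharge beneath one allotment."""
    effluent_l_per_yr = effluent_l_per_day * 365.0
    recharge_l_per_yr = lot_area_m2 * recharge_mm_per_yr          # 1 mm over 1 m^2 = 1 L
    n_load_mg = (effluent_l_per_yr * effluent_n_mg_per_l
                 + recharge_l_per_yr * background_n_mg_per_l)
    return n_load_mg / (effluent_l_per_yr + recharge_l_per_yr)

# Find how large an allotment must be to keep recharge below a 10 mg/L target.
for area in (600, 1000, 2000, 4000, 8000):
    print(area, "m^2 ->", round(diluted_nitrate_mg_per_l(area), 1), "mg/L")
```

With these illustrative inputs, only the largest lots dilute the nitrate load below the target, which is the sense in which subsurface conditions cap the allowable density of septic tanks.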
Unfortunately, nitrogen is not the only wastewater pollutant of concern. Relatively long survival times and travel distances have been noted for microorganisms originating from soil absorption systems; this is likely to happen if saturated conditions persist under the soil absorption bed, or due to surface runoff of effluent as a result of system failure. Soils have a finite capacity for the removal of phosphorus; once this capacity is exceeded, phosphorus too will seep into the groundwater, and the relatively high mobility of phosphorus in sandy soils has been noted in the literature. These issues have serious implications for the design and siting of soil absorption systems. It is important not only to ensure that the system design is based on subsurface conditions, but also to recognise that the density of these systems in a given area is a critical issue. This essentially involves the adoption of a land capability approach to determine the limitations of an individual site for onsite sewage disposal. The most limiting factor at a particular site determines the overall capability classification for that site, which in turn dictates the type of effluent disposal method to be adopted.

Relevance: 20.00%

Abstract:

Objective: The main aim of the present study was to describe food consumption in Sri Lankan adults based on serving characteristics. Design: Cross-sectional study. Fruits, vegetables, starch, meat, pulses, dairy products and added sugars in the diet were assessed, with portion sizes estimated using standard methods. Setting: Twelve randomly selected clusters from the Sri Lanka Diabetes and Cardiovascular Study. Subjects: Six hundred non-institutionalized adults. Results: The daily intakes of fruit (0·43), vegetable (1·73) and dairy (0·39) portions were well below national recommendations. Only 3·5 % of adults consumed the recommended 5 portions of fruits and vegetables per day; over a third of the population consumed no dairy products, and fewer than 1 % of adults consumed 2 portions per day. In contrast, Sri Lankan adults consumed over 14 portions of starch and 3·5 portions of added sugars daily. Almost 70 % of those studied exceeded the upper limit of the recommendations for starch intake. The total daily number of meat and pulse portions was 2·78. Conclusions: Dietary guidelines emphasize the importance of a balanced and varied diet; however, a substantial proportion of the Sri Lankan population studied failed to meet these recommendations. Nutrition-related diseases in the country may be closely correlated with these unhealthy eating habits.

Relevance: 20.00%

Abstract:

In many countries, governments and health agencies strongly promote physical activity as a means of preventing the accumulation of fatness that leads to weight gain and obesity. However, there is often resistance to responding to health promotion initiatives. For example, in the UK the Chief Medical Officer has recently reported that 71% of women and 61% of men fail to carry out even the minimal amount of physical activity recommended in the government's guidelines. Similarly, the Food Standards Agency has promoted reductions in the intake of fat, sugar and salt, but with very little impact on patterns of consumption. Why are recommendations to improve health so difficult to implement, and why do they so rarely produce the desired outcome?

Relevance: 20.00%

Abstract:

Background: While weight gain during pregnancy is regarded as important, there has been no prospective study of measured weight gain in pregnancy in Australia. This study aimed to prospectively evaluate pregnancy-related weight gain against the Institute of Medicine (IOM) recommendations in women receiving antenatal care in a setting where ongoing weight monitoring is not part of routine clinical practice, to describe women's knowledge of weight gain recommendations, and to describe the health professional advice received relating to gestational weight gain (GWG). Methods: Pregnant women were recruited at ≤20 weeks of gestation (n = 664) from a tertiary obstetric hospital between August 2010 and July 2011 for this prospective observational study. Outcome measures were weight gain from pre-pregnancy to 36 weeks of gestation, knowledge of weight gain recommendations, and health professional advice received. Results: Thirty-six percent of women gained weight according to the guidelines, 26% gained inadequate weight, and 38% gained excess weight. Fifty-six percent of overweight women gained weight in excess of the IOM guidelines, compared with 30% of those who started with a healthy weight (P < 0.001). At 16 weeks, 47% of participants were unsure of the weight gain recommendations that applied to them. Sixty-two percent of women reported that the health professionals caring for them during this pregnancy 'never' or 'rarely' offered advice about how much weight to gain. Conclusions: The prevalence of inappropriate gestational weight gain in this study was high, the majority of women did not know their recommended weight gain, and the advice women received from health professionals relating to healthy weight gain in pregnancy could be improved.
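The classification into inadequate, adequate and excess gain reported above is made against the IOM ranges for total gestational weight gain by pre-pregnancy BMI category. A minimal sketch of such a classification is shown below; the ranges are the commonly cited IOM (2009) figures for total gain at term, whereas the study measured gain to 36 weeks, so treat this as illustrative rather than the study's exact protocol.

```python
# Classify gestational weight gain against the commonly cited IOM (2009)
# ranges for total gain by pre-pregnancy BMI category. Illustrative only;
# the study's exact classification rules (gain to 36 weeks) may differ.
IOM_RANGES_KG = {                      # (lower, upper) recommended total gain
    "underweight": (12.5, 18.0),       # BMI < 18.5
    "normal":      (11.5, 16.0),       # BMI 18.5-24.9
    "overweight":  (7.0, 11.5),        # BMI 25.0-29.9
    "obese":       (5.0, 9.0),         # BMI >= 30
}

def bmi_category(bmi: float) -> str:
    if bmi < 18.5:
        return "underweight"
    if bmi < 25.0:
        return "normal"
    if bmi < 30.0:
        return "overweight"
    return "obese"

def classify_gain(pre_pregnancy_bmi: float, total_gain_kg: float) -> str:
    lower, upper = IOM_RANGES_KG[bmi_category(pre_pregnancy_bmi)]
    if total_gain_kg < lower:
        return "inadequate"
    if total_gain_kg > upper:
        return "excess"
    return "within guidelines"

print(classify_gain(27.0, 14.0))  # overweight woman gaining 14 kg -> "excess"
```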