Abstract:
Efficient management of domestic wastewater is a primary requirement for human wellbeing. Failure to adequately address issues of wastewater collection, treatment and disposal can lead to adverse public health and environmental impacts. The increasing spread of urbanisation has led to the conversion of previously rural land into urban developments and the more intensive development of semi-urban areas. However, the provision of reticulated sewerage facilities has not kept pace with this expansion in urbanisation. This has resulted in a growing dependency on onsite sewage treatment. Though considered only a temporary measure in the past, these systems are now regarded as the most cost-effective option and have become a permanent feature in some urban areas. This report is the first of a series of reports to be produced and is the outcome of a research project initiated by the Brisbane City Council. The primary objective of the research undertaken was to relate the treatment performance of onsite sewage treatment systems to soil conditions at the site, with the emphasis being on septic tanks. This report consists of a ‘state of the art’ review of research undertaken in the arena of onsite sewage treatment. The evaluation of research brings together significant work undertaken locally and overseas. It focuses mainly on septic tanks, in keeping with the primary objectives of the project. This report has acted as the springboard for the later field investigations and analysis undertaken as part of the project. Septic tanks continue to be used widely due to their simplicity and low cost. Generally, the treatment performance of septic tanks can be highly variable due to numerous factors, but a properly designed, operated and maintained septic tank can produce effluent of satisfactory quality.
The reduction of hydraulic surges from washing machines and dishwashers, the regular removal of accumulated septage and the elimination of harmful chemicals are practices that can improve system performance considerably. The relative advantages of multi-chamber over single-chamber septic tanks are an issue that needs to be resolved in view of conflicting research outcomes. In recent years, aerobic wastewater treatment systems (AWTS) have been gaining in popularity. This can mainly be attributed to the desire to avoid subsurface effluent disposal, which is the main cause of septic tank failure. The use of aerobic processes for the treatment of wastewater, together with disinfection of effluent prior to disposal, is capable of producing effluent of a quality suitable for surface disposal. However, the field performance of these systems has been disappointing. A significant number do not perform to stipulated standards, and effluent quality can be highly variable. This is primarily due to householder neglect or ignorance of correct operational and maintenance procedures. Other problems include greater susceptibility to shock loadings and sludge bulking. As identified in the literature, a number of design features can also contribute to this wide variation in quality. The other treatment processes in common use are the various types of filter systems, including intermittent and recirculating sand filters. These systems too have their inherent advantages and disadvantages. Furthermore, as in the case of aerobic systems, their performance is very much dependent on individual householder operation and maintenance practices. In recent years the use of biofilters, particularly peat, has attracted research interest. High removal rates of various wastewater pollutants have been reported in the research literature. Despite these satisfactory results, leachate from peat has been reported in various studies.
This is an issue that needs further investigation, and as such biofilters can still be considered to be in the experimental stage. The use of other filter media such as absorbent plastic and bark has also been reported in the literature. The safe and hygienic disposal of treated effluent is a matter of concern in the case of onsite sewage treatment. Subsurface disposal is the most common option, and the only one in the case of septic tank treatment. Soil is an excellent treatment medium if suitable conditions are present. The processes of sorption, filtration and oxidation can remove the various wastewater pollutants. The subsurface characteristics of the disposal area are among the most important parameters governing process performance. Therefore it is important that soil and topographic conditions are taken into consideration in the design of the soil absorption system. Seepage trenches and beds are the common systems in use. Seepage pits or chambers can be used where subsurface conditions warrant, whilst above-grade mounds have been recommended for a variety of difficult site conditions. All these systems have their inherent advantages and disadvantages, and the preferred soil absorption system should be selected based on site characteristics. The use of gravel as in-fill for beds and trenches is open to question. It does not contribute to effluent treatment and has been shown to reduce the effective infiltrative surface area. This is due to physical obstruction and the migration of fines entrained in the gravel into the soil matrix. The surface application of effluent is coming into increasing use with the advent of aerobic treatment systems. This has the advantage that treatment is undertaken in the upper soil horizons, which are chemically and biologically the most effective in effluent renovation. Numerous research studies have demonstrated the feasibility of this practice. However, the overriding criterion is the quality of the effluent.
It has to be of exceptionally good quality in order to ensure that there are no resulting public health impacts due to aerosol drift. This is the main issue of concern, due to the unreliability of the effluent quality from aerobic systems. Secondly, it has also been found that most householders do not take adequate care in the operation of spray irrigation systems or in the maintenance of the irrigation area. Under these circumstances, surface disposal of effluent should be approached with caution and would require appropriate householder education and stringent compliance requirements. Despite all this, however, the efficiency with which the process is undertaken will ultimately rest with the individual householder, and this is where most concern rests. Greywater requires similar consideration. Surface irrigation of greywater is currently permitted in a number of local authority jurisdictions in Queensland. Considering that greywater constitutes the largest fraction of the total wastewater generated in a household, it could be considered a potential resource. Unfortunately, in most circumstances the only pretreatment required prior to reuse is the removal of oil and grease. This is an issue of concern, as greywater can be considered a weak to medium sewage: it contains primary pollutants such as BOD material and nutrients and may also include microbial contamination. Therefore its use for surface irrigation can pose a potential health risk. This is further compounded by the fact that most householders are unaware of the potential adverse impacts of indiscriminate greywater reuse. As in the case of blackwater effluent reuse, there have been suggestions that greywater should also be subjected to stringent guidelines. Under these circumstances, the surface application of any wastewater requires careful consideration.
The other option available for the disposal of effluent is the use of evaporation systems. The use of evapotranspiration systems has been covered in this report. Research has shown that these systems are susceptible to a number of factors, in particular climatic conditions. As such, their applicability is location specific. The design of systems based solely on evapotranspiration is also questionable. In order to ensure greater reliability, such systems should be designed to include soil absorption. The successful use of these systems for intermittent usage has been noted in the literature. Taking into consideration the issues discussed above, subsurface disposal of effluent is the safest under most conditions, provided the facility has been designed to accommodate site conditions. The main problem associated with subsurface disposal is the formation of a clogging mat on the infiltrative surfaces. Due to the formation of the clogging mat, the capacity of the soil to handle effluent is no longer governed by the soil’s hydraulic conductivity as measured by the percolation test, but rather by the infiltration rate through the clogged zone. The characteristics of the clogging mat have been shown to be influenced by various soil and effluent characteristics. Secondly, the mechanisms of clogging mat formation have been found to be influenced by various physical, chemical and biological processes. Biological clogging is the most common process taking place; it occurs when bacterial growth or its by-products reduce the soil pore diameters, and it is generally associated with anaerobic conditions. The formation of the clogging mat provides significant benefits. It acts as an efficient filter for the removal of microorganisms. Also, as the clogging mat increases the hydraulic impedance to flow, unsaturated flow conditions will occur below the mat. This permits greater contact between effluent and soil particles, thereby enhancing the purification process.
This is particularly important in the case of highly permeable soils. However, the adverse impacts of clogging mat formation cannot be ignored, as they can lead to a significant reduction in the infiltration rate. This in fact is the most common cause of soil absorption system failure. As the formation of the clogging mat is inevitable, it is important to ensure that it does not impede effluent infiltration beyond tolerable limits. Various strategies have been investigated to either control clogging mat formation or to remediate its severity. Intermittent dosing of effluent is one such strategy that has attracted considerable attention. Research conclusions with regard to short rest periods are contradictory. It has been claimed that intermittent rest periods result in the aerobic decomposition of the clogging mat, leading to a subsequent increase in the infiltration rate. Contrary to this, it has also been claimed that short rest periods are insufficient to completely decompose the clogging mat, and that the intermediate by-products formed by the aerobic processes in fact lead to even more severe clogging. It has been further recommended that rest periods should be much longer, in the range of about six months. This entails the provision of a second, alternating seepage bed. Other concepts that have been investigated are the design of the bed to meet the equilibrium infiltration rate that eventuates after clogging mat formation; improved geometry, such as the use of seepage trenches instead of beds; serial instead of parallel effluent distribution; and low-pressure dosing of effluent. The use of physical measures such as oxidation with hydrogen peroxide and replacement of the infiltration surface has been shown to be only of short-term benefit.
Another issue of importance is the degree of pretreatment that should be provided to the effluent prior to subsurface application, and the influence exerted by pollutant loadings on clogging mat formation. Laboratory studies have shown that the total mass loadings of BOD and suspended solids are important factors in the formation of the clogging mat. It has also been found that the nature of the suspended solids is an important factor. The finer particles from extended aeration systems, when compared to those from septic tanks, will penetrate deeper into the soil and hence ultimately cause a denser clogging mat. However, the importance of improved pretreatment in clogging mat formation may need to be qualified in view of other research studies. It has also been shown that effluent quality may be a factor in the case of highly permeable soils, but this may not be the case with fine-structured soils. The ultimate test of onsite sewage treatment system efficiency rests with the final disposal of effluent. The implications of system failure, as evidenced by the surface ponding of effluent or the seepage of contaminants into the groundwater, can be very serious, as failure can lead to environmental and public health impacts. Significant microbial contamination of surface and groundwater has been attributed to septic tank effluent. There are a number of documented instances of septic tank related waterborne disease outbreaks affecting large numbers of people. In a recent incident, the local authority, and not the individual septic tank owners, was found liable for an outbreak of viral hepatitis A, as no action had been taken to remedy septic tank failure. This illustrates the responsibility placed on local authorities in terms of ensuring the proper operation of onsite sewage treatment systems. Even a properly functioning soil absorption system is only capable of removing phosphorus and microorganisms.
The nitrogen remaining after plant uptake will not be retained in the soil column, but will instead gradually seep into the groundwater as nitrate. Conditions for nitrogen removal by denitrification are not generally present in a soil absorption bed. Dilution by groundwater is the only treatment available for reducing the nitrogen concentration to specified levels. Therefore, based on subsurface conditions, this essentially entails a maximum allowable concentration of septic tanks in a given area. Unfortunately, nitrogen is not the only wastewater pollutant of concern. Relatively long survival times and travel distances have been noted for microorganisms originating from soil absorption systems. This is likely to happen if saturated conditions persist under the soil absorption bed or due to surface runoff of effluent as a result of system failure. Soils have a finite capacity for the removal of phosphorus. Once this capacity is exceeded, phosphorus too will seep into the groundwater. The relatively high mobility of phosphorus in sandy soils has been noted in the literature. These issues have serious implications for the design and siting of soil absorption systems. It is important to ensure not only that the system design is based on subsurface conditions, but also that the density of these systems in a given area is treated as a critical issue. This essentially involves the adoption of a land capability approach to determine the limitations of an individual site for onsite sewage disposal. The most limiting factor at a particular site would determine the overall capability classification for that site, which would in turn dictate the type of effluent disposal method to be adopted.
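The dilution argument above can be sketched as a simple steady-state mass balance: the nitrate-N load added per unit area, divided by the groundwater recharge through that area, must not push the mixed concentration past the limit. The sketch below is purely illustrative; the per-household load, recharge rate, background concentration and limit are all assumed values, not figures from this report.

```python
# Illustrative mass balance for nitrate dilution by groundwater recharge.
# All parameter values are assumptions for demonstration only.

def max_system_density(n_load_g_per_day, recharge_mm_per_yr,
                       background_mg_per_l, limit_mg_per_l):
    """Maximum septic systems per hectare such that the mixed groundwater
    nitrate-N concentration stays below limit_mg_per_l.

    Mixing model: (added N mass) / (recharge volume) + background <= limit.
    """
    # Recharge volume per hectare per day, in litres:
    # mm/yr -> m/yr -> m3/(ha.yr) -> L/(ha.day)
    recharge_l_per_ha_day = recharge_mm_per_yr / 1000 * 10000 * 1000 / 365
    # Concentration headroom available for septic-derived nitrate (mg/L)
    headroom = limit_mg_per_l - background_mg_per_l
    # Allowable added N mass per hectare per day, converted from mg to g
    allowable_g_per_ha_day = headroom * recharge_l_per_ha_day / 1000
    return allowable_g_per_ha_day / n_load_g_per_day

# Assumed example: ~30 g N/day per household, 200 mm/yr recharge,
# 2 mg/L background nitrate-N, 10 mg/L drinking-water limit.
density = max_system_density(30.0, 200.0, 2.0, 10.0)  # systems per hectare
```

Under these assumed numbers the balance allows roughly 1.5 systems per hectare; the point of the sketch is only that the allowable density scales directly with recharge and headroom, and inversely with the per-system nitrogen load.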
Abstract:
Background The onsite treatment of sewage and effluent disposal within the premises is widely prevalent in rural and urban fringe areas due to the general unavailability of reticulated wastewater collection systems. Despite the seemingly low technology of these systems, failure is common, and in many cases leads to adverse public health and environmental consequences. Therefore it is important that careful consideration is given to the design and location of onsite sewage treatment systems. This requires an understanding of the factors that influence treatment performance. The use of subsurface effluent absorption systems is the most common form of effluent disposal for onsite sewage treatment, particularly for septic tanks. Additionally, in the case of septic tanks, a subsurface disposal system is generally an integral component of the sewage treatment process. Therefore location specific factors will play a key role in this context. The project The primary aims of the research project are: • to relate treatment performance of onsite sewage treatment systems to soil conditions at the site; • to identify important areas where there is currently a lack of relevant research knowledge and which are in need of further investigation. These tasks were undertaken with the objective of facilitating the development of performance based planning and management strategies for onsite sewage treatment. The primary focus of the research project has been on septic tanks. Therefore by implication the investigation has been confined to subsurface soil absorption systems. The design and treatment processes taking place within the septic tank chamber itself did not form a part of the investigation. In the evaluation undertaken, the treatment performance of soil absorption systems is related to the physico-chemical characteristics of the soil. Five broad categories of soil types were considered for this purpose.
The number of systems investigated was based on the proportionate area of urban development within the Brisbane region located on each soil type. In the initial phase of the investigation, though the majority of the systems evaluated were septic tanks, a small number of aerobic wastewater treatment systems (AWTS) were also included. This was primarily to compare the effluent quality of systems employing different generic treatment processes. It is important to note that the number of each type of system investigated was relatively small. As such, this does not permit a statistical analysis of the results obtained. This is an important issue considering the large number of parameters that can influence treatment performance and their wide variability. The report This report is the second in a series of three reports focussing on the performance evaluation of onsite treatment of sewage. The research project was initiated at the request of the Brisbane City Council. The work undertaken included site investigation and testing of sewage effluent and soil samples taken at distances of 1 and 3 m from the effluent disposal area. The project component discussed in the current report formed the basis for the more detailed investigation undertaken subsequently. The outcomes from the initial studies are discussed, which enabled the identification of factors to be investigated further. Primarily, this report contains the results of the field monitoring program, the initial analysis undertaken and preliminary conclusions. Field study and outcomes Commencing with an initial list of 252 locations in 17 different suburbs, a total of 22 sites in 21 different locations were monitored. These sites were selected based on predetermined criteria. Obtaining house owner agreement to participate in the monitoring study was not an easy task. Six of these sites subsequently had to be abandoned for various reasons.
The remaining sites included eight septic systems with subsurface effluent disposal treating blackwater or combined black and greywater, two sites treating greywater only and six sites with AWTS. In addition to collecting effluent and soil samples from each site, a detailed field investigation, including a series of house owner interviews, was also undertaken. Significant observations were made during the field investigations. In addition to site specific observations, the general observations include the following: • Most house owners are unaware of the need for regular maintenance. Sludge removal had not been undertaken in any of the septic tanks monitored. Even in the case of aerated wastewater treatment systems, the regular inspections by the supplier are confined to the treatment system and do not include the effluent disposal system. This is not a satisfactory situation, as the investigations revealed. • In the case of separate greywater systems, only one site had a suitably functioning disposal arrangement. The general practice is to employ a garden hose to siphon the greywater for use in surface irrigation of the garden. • In most sites, the soil profile showed significant lateral percolation of effluent. As such, the flow of effluent to surface water bodies is a distinct possibility. • The need to investigate the subsurface condition to a depth greater than that required for the standard percolation test was clearly evident. On occasion, seemingly permeable soil was found to have an underlying impermeable soil layer, or vice versa. The important outcomes from the testing program include the following: • Though effluent treatment is influenced by the physico-chemical characteristics of the soil, it was not possible to distinguish between the treatment performance of different soil types.
This leads to the hypothesis that effluent renovation is significantly influenced by the combination of various physico-chemical parameters rather than by single parameters. This would make the processes involved strongly site specific. • Generally, the improvement in effluent quality appears to take place only within the initial 1 m of travel, without any appreciable improvement thereafter. This relates only to the degree of improvement obtained and does not imply that this quality is satisfactory. This calls into question the value of adopting setback distances from sensitive water bodies. • Use of AWTS for sewage treatment may provide effluent of higher quality suitable for surface disposal. However, on the whole, after 1-3 m of travel through the subsurface it was not possible to distinguish any significant differences in quality between effluent originating from septic tanks and from AWTS. • In comparison with effluent quality from a conventional wastewater treatment plant, most systems were found to perform satisfactorily with regard to Total Nitrogen. The success rate was much lower in the case of faecal coliforms. However, it is important to note that five of the systems exhibited problems with effluent disposal, resulting in surface flow. This could lead to possible contamination of surface water courses. • The ratio of TDS to EC is about 0.42, whilst the optimum recommended value for the use of treated effluent in irrigation is about 0.64. This means a higher salt content in the effluent than is advisable for use in irrigation. A consequence of this would be the accumulation of salts to a concentration harmful to crops or the landscape unless adequate leaching is present. These relatively high EC values are present even in the case of AWTS where surface irrigation of effluent is being undertaken.
However, it is important to note that this is not an artefact of the treatment process but rather an indication of the quality of the wastewater generated in the household. This clearly indicates the need for further research to evaluate the suitability of various soil types for the surface irrigation of effluent where the TDS/EC ratio is less than 0.64. • Effluent percolating through the subsurface absorption field may travel in the form of dilute pulses. As such, the effluent will move through the soil profile forming fronts of elevated parameter levels. • The downward flow of effluent and leaching of the soil profile is evident in the case of podsolic, lithosol and krasnozem soils. Lateral flow of effluent is evident in the case of prairie soils. Gleyed podsolic soils indicate poor drainage and ponding of effluent. In the current phase of the research project, a number of chemical indicators such as EC, pH and chloride concentration were employed to investigate the extent of effluent flow and to understand how soil renovates effluent. The soil profile, especially its texture, structure and moisture regime, was examined more in an engineering sense to determine the movement of water into and through the soil. However, it is not only the physical characteristics that matter; the chemical characteristics of the soil also play a key role in the effluent renovation process. Therefore, in order to understand the complex processes taking place in a subsurface effluent disposal area, it is important that the identified influential parameters are evaluated using soil chemical concepts. Consequently, the primary focus of the next phase of the research project will be to identify linkages between the various important parameters. The research thus envisaged will help to develop robust criteria for evaluating the performance of subsurface disposal systems.
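The TDS/EC comparison above amounts to a simple screening check: measured TDS (mg/L) divided by measured EC (µS/cm) is compared against the report's recommended optimum of about 0.64. The sketch below illustrates that arithmetic only; the example TDS and EC readings are assumed values, not measurements from the study.

```python
# Screening check for irrigation suitability based on the TDS/EC ratio.
# The 0.64 optimum is the figure quoted in the report; the example
# readings below are assumed for illustration.

IRRIGATION_TDS_EC_OPTIMUM = 0.64  # recommended TDS (mg/L) per EC (uS/cm)

def tds_ec_ratio(tds_mg_per_l, ec_us_per_cm):
    """Ratio of total dissolved solids to electrical conductivity."""
    return tds_mg_per_l / ec_us_per_cm

def meets_irrigation_optimum(tds_mg_per_l, ec_us_per_cm):
    """True when the effluent's TDS/EC ratio reaches the recommended
    optimum; ratios well below it indicate a less favourable salt
    composition for irrigation reuse, per the report's reasoning."""
    return tds_ec_ratio(tds_mg_per_l, ec_us_per_cm) >= IRRIGATION_TDS_EC_OPTIMUM

# Assumed example reading: TDS = 420 mg/L at EC = 1000 uS/cm -> ratio 0.42,
# matching the average reported in the study.
ratio = tds_ec_ratio(420.0, 1000.0)
ok = meets_irrigation_optimum(420.0, 1000.0)
```

A reading with the study's average ratio of 0.42 fails the check, which is the condition the report flags for further soil-specific research.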
Abstract:
Exercise offers the potential to improve circulation, wound healing outcomes, and functional and emotional wellbeing for adults experiencing venous leg ulceration. Individuals with chronic leg ulcers typically have multiple comorbidities such as arthritis, asthma, chronic obstructive airways disease, cardiac disease or neuromuscular disorders, which would also benefit from regular exercise. The aim of this review is to highlight the relationships among the calf muscle pump, venous return and range of ankle motion in adults with venous leg ulcers. The effect of exercise is also considered in relation to healing rates for adults experiencing venous leg ulceration. The findings suggest there is evidence that exercises which engage the calf muscle pump improve venous return. Ankle range of motion, which is crucial for complete activation of the calf muscle pump, can also be improved with simple, home-based exercise programs. However, observational studies still report that venous leg ulcer patients are less physically active than age-matched controls. Therefore, the behavioural reasons for not exercising must be considered. Only two studies, both underpowered, have assessed the effect of exercise on the healing rates of venous leg ulcers. In conclusion, exercise is feasible with this patient population. However, future studies with larger sample sizes are needed to provide stronger evidence to support the therapeutic benefit of exercise as an adjunct therapy in wound care.
Abstract:
Ultraendurance exercise training places large energy demands on athletes and causes a high turnover of vitamins through sweat losses, metabolism, and the musculoskeletal repair process. Ultraendurance athletes may not consume sufficient quantities or quality of food in their diet to meet these needs. Consequently, they may use oral vitamin and mineral supplements to maintain their health and performance. We assessed the vitamin and mineral intake of ultraendurance athletes from their regular diet, in addition to oral vitamin and mineral supplements. Thirty-seven ultraendurance triathletes (24 men and 13 women) completed a 7-day nutrition diary, including a questionnaire to determine nutrition adequacy and supplement intake. Compared with dietary reference intakes for the general population, both male and female triathletes met or exceeded all recommendations except for vitamin D. In addition, female athletes consumed slightly less than the recommended daily intake of folate and potassium; however, the difference was trivial. Over 60% of the athletes reported using vitamin supplements, of which vitamin C (97.5%), vitamin E (78.3%), and multivitamins (52.2%) were the most commonly used. Almost half (47.8%) of the athletes who used supplements did so to prevent or reduce cold symptoms. Only 1 athlete used supplements on formal medical advice. Vitamin C and E supplementation was common in ultraendurance triathletes, despite no evidence of dietary deficiency in these 2 vitamins.
Abstract:
The 'open window' theory is characterised by short-term suppression of the immune system following an acute bout of endurance exercise. This window of opportunity may allow for an increase in susceptibility to upper respiratory illness (URI). Many studies have indicated a decrease in immune function in response to exercise. However, many studies do not track changes in immune function past 2 hours after the completion of exercise, consequently failing to determine whether these immune cell numbers, or more importantly their function, return to resting levels before the start of another bout of exercise. Ten male 'A' grade cyclists (age 24.2 +/- 5.3 years; body mass 73.8 +/- 6.5 kg; VO(2peak) 65.9 +/- 7.1 mL.kg(-1).min(-1)) exercised for two hours at 90% of their second ventilatory threshold. Blood samples were collected pre-, immediately post-, 2 hours, 4 hours, 6 hours, 8 hours, and 24 hours post-exercise. Immune variables examined included total leukocyte counts, neutrophil function (oxidative burst and phagocytic function), lymphocyte subset counts (CD4(+), CD8(+), and CD16(+)/56(+)), natural killer cell activity (NKCA), and NK phenotypes (CD56(dim)CD16(+) and CD56(bright)CD16(-)). There was a significant increase in total lymphocyte numbers from pre- to immediately post-exercise (p<0.01), followed by a significant decrease at 2 hours post-exercise (p<0.001). CD4(+) T-cell counts significantly increased from pre-exercise to 4 hours post- (p<0.05) and 6 hours post-exercise (p<0.01). However, NK (CD16(+)/56(+)) cell numbers decreased significantly from pre-exercise to 4 hours post-exercise (p<0.05), to 6 hours post-exercise (p<0.05), and to 8 hours post-exercise (p<0.01). In contrast, CD56(bright)CD16(-) NK cell counts significantly increased from pre-exercise to immediately post-exercise (p<0.01).
Neutrophil oxidative burst activity did not significantly change in response to exercise, while neutrophil cell counts significantly increased from pre-exercise to immediately post-exercise (p<0.05) and 2 hours post-exercise (p<0.01), and remained significantly above pre-exercise levels up to 8 hours post-exercise (p<0.01). Neutrophil phagocytic function significantly decreased from 2 hours post-exercise to 6 hours post- (p<0.05) and 24 hours post-exercise (p<0.05). Finally, eosinophil cell counts significantly increased from 2 hours post-exercise to 6 hours post- (p<0.05) and 8 hours post-exercise (p<0.05). This is the first study to show changes in immunological variables up to 8 hours post-exercise, including significant NK cell suppression, NK cell phenotype changes, a significant increase in total lymphocyte counts, and a significant increase in eosinophil cell counts, all at 8 hours post-exercise. Suppression of total lymphocyte counts, NK cell counts and neutrophil phagocytic function following exercise may be important in the increased rate of URI in response to regular intense endurance training.
Abstract:
Welcome to the first of what will be a regular review essay on films about journalism, covering recent releases as well as looking back at established classics and under-rated obscurities.
Abstract:
Introduction: Subjects with atrial fibrillation are at risk of thromboembolic events. The vitamin K antagonists (e.g., warfarin) are useful for preventing coagulation in atrial fibrillation, but are difficult to use. One of the FXa inhibitors, oral apixaban, has been tested as an anticoagulant in atrial fibrillation. Areas covered: In ARISTOTLE (Apixaban for reduction in stroke and other thromboembolic events in atrial fibrillation), apixaban was compared to warfarin in subjects with atrial fibrillation, and shown to result in lower rates of stroke or systemic embolism and of major bleeding than warfarin. In the AVERROES (Apixaban versus acetylsalicylic acid [ASA] to prevent stroke in atrial fibrillation patients who have failed or are unsuitable for vitamin K antagonist treatment) trial, stroke or systemic embolism occurred less often with apixaban than with aspirin, whereas the occurrence of major bleeding was similar in the two groups. Expert opinion: Apixaban is much easier for subjects with atrial fibrillation to use than warfarin, as it does not require regular monitoring by a health professional with dosage adjustment. In addition to replacing warfarin in subjects with atrial fibrillation who are unable or not prepared to use warfarin, apixaban has the potential to replace warfarin more widely in the prevention of thromboembolism in subjects with atrial fibrillation.
Abstract:
Abstract (provisional) Background Food Frequency Questionnaires (FFQs) are commonly used in epidemiologic studies to assess long-term nutritional exposure. Because of wide variations in dietary habits in different countries, an FFQ must be developed to suit the specific population. Sri Lanka is undergoing nutritional transition and diet-related chronic diseases are emerging as an important health problem. Currently, no FFQ has been developed for Sri Lankan adults. In this study, we developed an FFQ to assess the regular dietary intake of Sri Lankan adults. Methods A nationally representative sample of 600 adults was selected by a multi-stage random cluster sampling technique and dietary intake was assessed by random 24-h dietary recall. Nutrient analysis of the FFQ required the selection of foods, development of recipes and application of these to cooked foods to develop a nutrient database. We constructed a comprehensive food list with the units of measurement. A stepwise regression method was used to identify foods contributing a cumulative 90% of variance in total energy and macronutrients. In addition, a series of photographs was included. Results We obtained dietary data from 482 participants and 312 different food items were recorded. Nutritionists grouped similar food items, which resulted in a total of 178 items. After performing stepwise multiple regression, 93 foods explained 90% of the variance for total energy intake, carbohydrates, protein, total fat and dietary fibre. Finally, 90 food items and 12 photographs were selected. Conclusion We developed an FFQ and the related nutrient composition database for Sri Lankan adults. Culturally specific dietary tools are central to capturing the role of diet in risk for chronic disease in Sri Lanka. The next step will involve the verification of FFQ reproducibility and validity.
Abstract:
The Australian Securities Exchange (ASX) listing rule 3.1 requires listed companies to immediately disclose price sensitive information to the market via the ASX’s Company Announcements Platform (CAP) prior to release through other disclosure channels. Since 1999, to improve the communication process, the ASX has permitted third-party mediation in the disclosure process that leads to the release of an Open Briefing (OB) through CAP. An OB is an interview between senior executives of the firm and an Open Briefing analyst employed by Orient Capital Pty Ltd (broaching topics such as current profit and outlook). Motivated by an absence of research on factors that influence firms to use OBs as a discretionary disclosure channel, this study examines (1) Why do firms choose to release information to the market via OBs?, (2) What are the firm characteristics that explain the discretionary use of OBs as a disclosure channel?, and (3) What are the disclosure attributes that influence firms’ decisions to regularly use OBs as a disclosure channel? Based on agency and information economics theories, a theoretical framework is developed to address the research questions. This theoretical framework comprises disclosure environments such as firm characteristics and external factors, disclosure attributes and disclosure consequences. In order to address the first research question, the study investigates (i) the purpose of using OBs, (ii) whether firms use OBs to provide information relating to previous public announcements, and (iii) whether firms use OBs to provide routine or non-routine disclosures. In relation to the second and third research questions, hypotheses are developed to test factors expected to explain the discretionary use of OBs and firms’ decisions to regularly use OBs, and to explore the factors influencing the nature of OB disclosure. Content analysis and logistic regression models are used to investigate the research questions and test the hypotheses.
Data are drawn from a hand-collected population of 1863 OB announcements issued by 239 listed firms between 2000 and 2010. The results show that the types of information disclosed via an OB announcement are principally matters relating to corporate strategies and performance and outlook. Most OB announcements are linked with a previous related announcement, with the lag between announcements significantly longer for loss-making firms than for profit-making firms. The main results show that firms which tend to be larger, have an analyst following, and have higher growth opportunities are more likely to release OBs. Further, older firms and firms that release OB announcements containing good news, historical information and less complex information tend to be regular OB users. Lastly, firms more likely to disclose strategic information via OBs tend to operate in industries facing greater uncertainty, do not have an analyst following, and have higher growth opportunities; they are also less likely to disclose good news, historical information and complex information via OBs. This study is expected to contribute to the disclosure literature in terms of the disclosure attributes and firm characteristics that influence behaviour in this unique (OB) disclosure channel. With regard to practical significance, regulators can gain an understanding of how OBs are disclosed, which can assist them in monitoring the use of OBs and improving the effectiveness of communications with stakeholders. In addition, investors can have a better comprehension of the information contained in OB announcements, which may in turn better facilitate their investment decisions.
Abstract:
Touch keyboarding as a vocational skill is disappearing at a time when students and educators across all educational sectors are expected to use a computer keyboard on a regular basis. There is documentation surrounding the embedding of Information and Communication Technology (ICT) within the curricula, and yet within the National Training Packages touch keyboarding, previously considered a core component, is now an elective in the Business Services framework. This situation is at odds with current practice overseas, where touch keyboarding is a component of primary and secondary curricula. From Rhetoric to Practice explores the current issues and practice in teaching and learning touch keyboarding in primary, secondary and tertiary institutions. Through structured interviews, participants detailed the current practice of teachers and their students. Further, tertiary students participated in a training program aimed at acquiring touch keyboarding as a skill to enhance their studies. The researcher's background experience of fifteen years teaching touch keyboarding and computer literacy to adults and 30 years in the Business Services trade provides a strong basis for this project. The teaching experience is enhanced by industry experience in administration, course coordination in technical, community and tertiary institutions and a strong commitment to the efficient usage of a computer by all. The findings of this project identified coursework expectations requiring all students from kindergarten to tertiary level to use a computer keyboard on a weekly basis, and that neither the teaching nor the learning of touch keyboarding appears in the primary, secondary and tertiary curricula in New South Wales.
Further, teachers recognised touch keyboarding as the preferred style over 'hunt and peck' keyboarding, while acknowledging the teaching and learning difficulties of time constraints, the need for qualified touch keyboarding teachers and the issues arising when retraining students out of existing poor habits. In conclusion, this project recommends that computer keyboarding be defined as a writing tool for education, vocation and life, with early instruction in the primary schooling years and touch keyboarding embedded within the secondary, technical and tertiary areas, and finally that the attention of educational authorities be drawn to the Duty of Care aspects associated with computer keyboarding in the classroom.
Abstract:
The number of internet users in Australia has been steadily increasing, with over 10.9 million people currently subscribed to an internet provider (ABS, 2011). Over the past year, the most avid users of the internet were 15–24 year olds, with approximately 95% accessing the internet on a regular basis (ABS, Social Trends, 2011). While the internet, and in particular Web 2.0, has been described as fundamental to higher education students, social and leisure internet tools are also increasingly being used by these students to generate and maintain their social and professional networks and interactions (Duffy & Bruns, 2006). Rapid technological advancements have enabled greater and faster access to information for learning and education (Hemmi et al, 2009; Glassman & Kang, 2011). As such, we sought to integrate interactive, online social media into the assessment profile of a Public Health undergraduate cohort at the Queensland University of Technology (QUT). The aim of this exercise was to engage undergraduate students to both develop and showcase their research on a range of complex, contemporary health issues within the online forum of Wikispaces for review and critique by their peers. We applied Bandura’s Social Learning Theory (SLT) to analyse the interactive processes through which students developed deeper and more sustained learning, and via which their overall academic writing standards were enriched. This paper outlines the assessment task, and the students’ feedback on their learning outcomes in relation to the Attentional, Retentional, Motor Reproduction, and Motivational Processes outlined by Bandura in SLT. We conceptualise the findings in a theoretical model, and discuss the implications for this approach within the broader tertiary environment.
Abstract:
Background and aims: Lower-limb lymphoedema is a serious and feared sequela after treatment for gynaecological cancer. Given the limited prospective data on incidence of and risk factors for lymphoedema after treatment for gynaecological cancer we initiated a prospective cohort study in 2008. Methods: Data were available for 353 women with malignant disease. Participants were assessed before treatment and at regular intervals after treatment for two years. Follow-up visits were grouped into time-periods of six weeks to six months (time 1), nine months to 15 months (time 2), and 18 months to 24 months (time 3). Preliminary data analyses were undertaken up to time 2 using generalised estimating equations to model the repeated measures data of Functional Assessment of Cancer Therapy-General (FACT-G) quality of life (QoL) scores and self-reported swelling at each follow-up period (best-fitting covariance structure). Results: Depending on the time-period, between 30% and 40% of patients self-reported swelling of the lower limb. The QoL of those with self-reported swelling was lower at all time-periods compared with those who did not have swelling. Mean (95% CI) FACT-G scores at time 0, 1 and 2 were 80.7 (78.2, 83.2), 83.0 (81.0, 85.0) and 86.3 (84.2, 88.4), respectively for those with swelling and 85.0 (83.0, 86.9), 86.0 (84.1, 88.0) and 88.9 (87.0, 90.7), respectively for those without swelling. Conclusions: Lower-limb swelling adversely influences QoL and change in QoL over time in patients with gynaecological cancer.
Abstract:
Maize streak virus strain A (MSV-A), the causal agent of maize streak disease, is today one of the most serious biotic threats to African food security. Determining where MSV-A originated and how it spread transcontinentally could yield valuable insights into its historical emergence as a crop pathogen. Similarly, determining where the major extant MSV-A lineages arose could identify geographical hot spots of MSV evolution. Here, we use model-based phylogeographic analyses of 353 fully sequenced MSV-A isolates to reconstruct a plausible history of MSV-A movements over the past 150 years. We show that since the probable emergence of MSV-A in southern Africa around 1863, the virus spread transcontinentally at an average rate of 32.5 km/year (95% highest probability density interval, 15.6 to 51.6 km/year). Using distinctive patterns of nucleotide variation caused by 20 unique intra-MSV-A recombination events, we tentatively classified the MSV-A isolates into 24 easily discernible lineages. Despite many of these lineages displaying distinct geographical distributions, it is apparent that almost all have emerged within the past 4 decades from either southern or east-central Africa. Collectively, our results suggest that regular analysis of MSV-A genomes within these diversification hot spots could be used to monitor the emergence of future MSV-A lineages that could affect maize cultivation in Africa. © 2011, American Society for Microbiology.
Abstract:
Pesticide spraying by farmers has an adverse impact on their health. However, in studies to date examining farmers’ exposure to pesticides, the costs of ill health and their determinants have been based on information provided by farmers themselves. Some doubt has therefore been cast on the reliability of these estimates. In this study, we address this by conducting surveys among two groups of farmers who use pesticides on a regular basis. The first group is made up of farmers who perceive that their ill health is due to exposure to pesticides and have obtained at least some form of treatment (described in this article as the ‘general farmer group’). The second group is composed of farmers whose ill health has been diagnosed by doctors and who have been treated in hospital for exposure to pesticides (described here as the ‘hospitalised farmer group’). Cost comparisons are made between the two groups of farmers. Regression analysis of the determinants of health costs shows that the most important determinants of medical costs for both samples are defensive expenditure, the quantity of pesticides used per acre per month, the frequency of pesticide use and the number of pesticides used per hour per day. The results have important policy implications.
Abstract:
Objective: Malnutrition results in poor health outcomes, and people with Parkinson’s disease may be more at risk of malnutrition. However, the prevalence of malnutrition in Parkinson’s disease is not yet well defined. The aim of this study is to provide an estimate of the extent of malnutrition in community-dwelling people with Parkinson’s disease. Methods: This is a cross-sectional study of people with Parkinson’s disease residing within a 2 hour driving radius of Brisbane, Australia. The Subjective Global Assessment (SGA) and scored Patient Generated Subjective Global Assessment (PG-SGA) were used to assess nutritional status. Body weight, standing or knee height, mid-arm circumference and waist circumference were measured. Results: Nineteen (15%) of the participants were moderately malnourished (SGA-B). The median PG-SGA score of the SGA-B group was 8 (4–15), significantly higher than that of the SGA-A group (U = 1860.5, p < .05). The symptoms most influencing intake were loss of appetite, constipation, early satiety and problems swallowing. Conclusions: As with other populations, malnutrition remains under-recognised and undiagnosed in people with Parkinson’s disease. Regular screening of nutritional status in people with Parkinson’s disease by health professionals with whom they have regular contact should occur to identify those who may benefit from further nutrition assessment and intervention.