Abstract:
Children in food-insecure households may be at risk of poor health and developmental or behavioural problems. This study investigated the associations between food insecurity, its potential determinants, and health and developmental outcomes among children. Data on household food security, socio-demographic characteristics and children’s weight, health and behaviour were collected by mail survey, using parental proxy reports, from 185 households with children aged 3–17 years in socioeconomically disadvantaged suburbs. Data were analysed using logistic regression. Approximately one in three households (34%) were food insecure. Low household income was associated with an increased risk of food insecurity [odds ratio (OR), 16.20; 95% confidence interval (CI), 3.52–74.47]. Children with a parent born outside Australia were less likely to experience food insecurity (OR, 0.42; 95% CI, 0.19–0.93). Children in food-insecure households were more likely to miss days of school or activities (OR, 3.52; 95% CI, 1.46–8.54) and were more likely to have borderline or atypical emotional symptoms (OR, 2.44; 95% CI, 1.11–5.38) or behavioural difficulties (OR, 2.35; 95% CI, 1.04–5.33). Food insecurity may be prevalent among socioeconomically disadvantaged households with children, and its potential developmental consequences during childhood may have serious adverse health and social implications.
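The odds ratios reported above can be illustrated with a short sketch. The counts below are hypothetical (the abstract does not give raw cross-tabulations, and the study's adjusted regression would yield different confidence intervals); this only shows how an OR and its Wald 95% CI are computed from a 2×2 table.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts (not the study's data):
# 30 low-income food-insecure, 20 low-income food-secure,
# 5 higher-income food-insecure, 54 higher-income food-secure.
or_, lo, hi = odds_ratio_ci(30, 20, 5, 54)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An adjusted model (as used in the study) additionally conditions on covariates, which is why its interval is wider than a crude 2×2 calculation would suggest.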
Abstract:
Enterococci are versatile Gram-positive bacteria that can survive under extreme conditions. Most enterococci are non-virulent and are found in the gastrointestinal tract of humans and animals, but other strains are opportunistic pathogens that contribute to a large number of nosocomial infections globally. Epidemiological studies have demonstrated a direct relationship between the density of enterococci in surface waters and the risk of swimmer-associated gastroenteritis. The distribution of infectious enterococcal strains from the hospital environment or other sources into environmental water bodies, through sewage discharge or other means, could increase the prevalence of these strains in the human population. Environmental water quality studies may benefit from focusing on a subset of Enterococcus spp. that are consistently associated with sources of faecal pollution such as domestic sewage, rather than testing for the entire genus. E. faecalis and E. faecium are potentially good focal species for such studies, as they have been consistently identified as the dominant Enterococcus spp. in human faeces and sewage; they are also the species that cause the majority of enterococcal infections. The characterisation of E. faecalis and E. faecium is important for studying their population structures, particularly in environmental samples. By developing and implementing rapid, robust molecular genotyping techniques, it is possible to establish more accurately the relationship between human and environmental enterococci. Of particular importance is determining the distribution in recreational waters of high-risk enterococcal clonal complexes, such as E. faecium clonal complex 17 and E. faecalis clonal complexes 2 and 9. These clonal complexes are recognised as particularly pathogenic enterococcal genotypes that cause severe disease in humans globally.
The Pimpama-Coomera watershed is located in South East Queensland, Australia, and was investigated in this study mainly because it is used intensively for agriculture and recreation and is subject to strong anthropogenic impacts. The primary aim of this study was to develop novel, universally applicable, robust, rapid and cost-effective genotyping methods likely to yield more definitive results for the routine monitoring of E. faecalis and E. faecium, particularly in environmental water sources. To fulfil this aim, new genotyping methods were developed based on the interrogation of highly informative single nucleotide polymorphisms (SNPs) located in housekeeping genes of both E. faecalis and E. faecium. SNP genotyping was successfully applied in field investigations of the Coomera watershed. E. faecalis and E. faecium isolates were grouped into 29 and 23 SNP profiles respectively. This study showed the high longitudinal diversity of E. faecalis and E. faecium over a period of two years, and both human-related and human-specific SNP profiles were identified. Furthermore, 4.25% of E. faecium strains isolated from water were found to correspond to the important clonal complex 17 (CC17). Strains belonging to CC17 cause the majority of hospital outbreaks and clinical infections globally. Of the six sampling sites on the Coomera River, Paradise Point had the highest number of human-related and human-specific E. faecalis and E. faecium SNP profiles. The secondary aim of this study was to determine the antibiotic-resistance profiles and virulence traits of environmental E. faecalis and E. faecium isolates compared to human pathogenic isolates, in order to predict the potential health risks associated with coming into contact with these strains in the Coomera watershed.
In general, clinical isolates were more resistant to all the antibiotics tested than water isolates, and they harbored more virulence traits. Multi-drug resistance was more prevalent in clinical isolates (71.18% of E. faecalis and 70.3% of E. faecium) than in water isolates (only 5.66% of E. faecium). However, tetracycline, gentamicin, ciprofloxacin and ampicillin resistance was observed in water isolates. The virulence gene esp was the most prevalent virulence determinant in clinical isolates (67.79% of E. faecalis and 70.37% of E. faecium); this gene has been described as a human-specific marker for microbial source tracking (MST). The presence of esp in water isolates (16.36% of E. faecalis and 19.14% of E. faecium) could be indicative of human faecal contamination in these waterways. Finally, to compare overall gene expression between environmental and clinical strains of E. faecalis, a comparative gene hybridization study was performed at physiological temperature relative to ambient temperature. The results clearly demonstrated the up-regulation of genes associated with pathogenicity in E. faecalis isolated from water. This up-regulation of virulence genes demonstrates that environmental strains of E. faecalis can pose an increased health risk and lead to serious disease, particularly if these strains belong to the virulent CC17 group. The genotyping techniques developed in this study not only provide a rapid, robust and highly discriminatory tool to characterise E. faecalis and E. faecium, but also enable the efficient identification of virulent enterococci distributed in environmental water sources.
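The grouping of isolates into SNP profiles can be sketched in a few lines: each isolate's allele states at the interrogated loci form a profile, and isolates sharing a profile are grouped together. The isolate names and allele states below are invented for illustration; the study interrogated specific housekeeping-gene SNPs chosen for high discriminatory power.

```python
from collections import defaultdict

# Hypothetical isolate -> SNP allele states at four loci (illustrative only).
isolates = {
    "W01": ("A", "G", "T", "C"),   # water isolate
    "W02": ("A", "G", "T", "C"),   # same profile as W01
    "W03": ("A", "A", "T", "C"),   # differs at locus 2
    "H01": ("G", "G", "C", "C"),   # clinical isolate, distinct profile
}

# Group isolates by identical SNP profile.
profiles = defaultdict(list)
for name, snps in isolates.items():
    profiles[snps].append(name)

for profile, members in sorted(profiles.items()):
    print("".join(profile), members)
```

Profiles observed in both clinical and water isolates would then be candidates for "human-related" markers, in the spirit of the field study described above.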
Abstract:
The cultural and creative industries contribute to the knowledge economy through their role in reproducing cultural knowledge and through the provision of entertainment, experience and leisure goods with cultural content, for which they are widely acknowledged to suffer serious market-failure problems (Baumol and Bowen, 1966; Throsby and Withers, 1979). But they also contribute to the innovation process, an aspect that has only recently been appreciated. Specifically, the creative industries drive the knowledge economy through their contribution to the innovation process on the demand side, in consumer uptake of new ideas, and through their facilitation of consumer-producer interaction. The creative industries are, in this respect, a legitimate part of the innovation system of a knowledge economy.
Abstract:
Work in the Australian construction industry is fraught with risk and the potential for serious harm. The industry is consistently ranked among the three most hazardous industries to work in, alongside mining and transport (National Occupational Health and Safety Commission, 2003). In the 2001 to 2002 period, construction work killed 39 people and injured 13,250 more. Hence, more effort is required to reduce the injury rate and maximise the value of the rehabilitation and return-to-work process.
Abstract:
Most unsignalised intersection capacity calculation procedures are based on gap acceptance models, so the accuracy of critical gap estimation affects the accuracy of capacity and delay estimation. Several methods have been published to estimate drivers’ sample mean critical gap, with the Maximum Likelihood Estimation (MLE) technique regarded as the most accurate. This study assesses three novel methods, the Average Central Gap (ACG), Strength Weighted Central Gap (SWCG) and Mode Central Gap (MCG) methods, against MLE for their fidelity in rendering true sample mean critical gaps. A Monte Carlo event-based simulation model was used to draw the maximum rejected gap and the accepted gap for each of a sample of 300 drivers across 32 simulation runs. The simulated mean critical gap was varied between 3 s and 8 s, while the offered gap rate was varied between 0.05 veh/s and 0.55 veh/s. This study affirms that MLE provides a close-to-perfect fit to simulated mean critical gaps across a broad range of conditions. The MCG method also provides an almost perfect fit and has superior computational simplicity and efficiency to MLE. The SWCG method performs robustly under high flows but poorly under low to moderate flows. Further research is recommended using field traffic data, under a variety of minor-stream and major-stream flow conditions and for a variety of minor-stream movement types, to compare critical gap estimates from MLE and MCG. Should the MCG method prove as robust as MLE, serious consideration should be given to its adoption for estimating critical gap parameters in guidelines.
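The simulation set-up described above can be sketched roughly as follows. This is not the paper's model: the critical-gap distribution, its parameters and the naive midpoint estimator are all assumptions for illustration. The midpoint of the maximum rejected and accepted gaps is deliberately crude (an accepted exponential gap tends to overshoot the driver's critical gap, biasing it upward), which is precisely why refined estimators such as MLE are preferred.

```python
import random
import statistics

random.seed(42)

def simulate_driver(mean_cg=4.0, sd_cg=0.8, gap_rate=0.3):
    """One driver: draw a personal critical gap (s), then offer
    exponentially distributed gaps at `gap_rate` (veh/s) until one
    is accepted. Returns (max rejected gap, accepted gap)."""
    cg = random.gauss(mean_cg, sd_cg)      # assumed normal critical gap
    max_rejected = 0.0
    while True:
        gap = random.expovariate(gap_rate)
        if gap >= cg:
            return max_rejected, gap        # first gap >= critical gap
        max_rejected = max(max_rejected, gap)

pairs = [simulate_driver() for _ in range(300)]
# Naive midpoint estimator (illustrative only, biased upward):
est = statistics.mean((r + a) / 2 for r, a in pairs)
print(f"estimated mean critical gap = {est:.2f} s")
```

Running this shows the estimate drifting above the true 4.0 s mean, illustrating the kind of bias the compared estimators are designed to correct.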
Abstract:
Introduction and objectives: Early recognition of deteriorating patients results in better patient outcomes. Modified early warning scores (MEWS) attempt to identify deteriorating patients early so that timely interventions can occur, thus reducing serious adverse events. We compared frequencies of vital sign recording 24 h post-ICU discharge and 24 h preceding unplanned ICU admission, before and after a new observation chart using MEWS and an associated educational programme was implemented in an Australian tertiary referral hospital in Brisbane. Design: Prospective before-and-after intervention study, using a convenience sample of ICU patients discharged to the hospital wards, and patients with an unplanned ICU admission, during November 2009 (before implementation; n = 69) and February 2010 (after implementation; n = 70). Main outcome measures: Any change in the frequency of full vital sign sets or individual vital signs before and after the new MEWS observation chart and associated education programme was implemented. A full set of vital signs comprised blood pressure (BP), heart rate (HR), temperature (T°), oxygen saturation (SaO2), respiratory rate (RR) and urine output (UO). Results: After the MEWS observation chart implementation, we identified a statistically significant increase (210%) in the overall frequency of full vital sign set documentation during the first 24 h post-ICU discharge (95% CI 148–288%, p < 0.001). The frequency of all individual vital sign recordings increased after the MEWS observation chart was implemented; in particular, T° recordings increased by 26% (95% CI 8–46%, p = 0.003). An increased frequency of full vital sign set recordings for unplanned ICU admissions was also found (44%, 95% CI 2–102%, p = 0.035). The only statistically significant improvement in individual vital sign recordings was urine output, with a 27% increase (95% CI 3–57%, p = 0.029).
Conclusions: The implementation of a new MEWS observation chart plus a supporting educational programme was associated with statistically significant increases in the frequency of full and individual vital sign recordings during the first 24 h post-ICU discharge. For unplanned admissions to ICU, there were no significant changes in individual vital sign recording frequency after the MEWS observation chart was implemented, except for urine output, although overall increases in the frequency of full vital sign sets were seen.
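The MEWS aggregation itself is simple to illustrate. The cut-points below follow one commonly cited variant of the score; hospitals tune these thresholds, and the chart introduced in this study may well differ, so treat the values as illustrative only.

```python
def mews(rr, hr, sbp, temp, avpu):
    """Modified Early Warning Score, one commonly cited variant.
    Thresholds vary between hospitals; these are illustrative."""
    score = 0
    # Respiratory rate (breaths/min)
    if rr < 9: score += 2
    elif rr <= 14: score += 0
    elif rr <= 20: score += 1
    elif rr <= 29: score += 2
    else: score += 3
    # Heart rate (beats/min)
    if hr < 40: score += 2
    elif hr <= 50: score += 1
    elif hr <= 100: score += 0
    elif hr <= 110: score += 1
    elif hr <= 129: score += 2
    else: score += 3
    # Systolic blood pressure (mmHg)
    if sbp <= 70: score += 3
    elif sbp <= 80: score += 2
    elif sbp <= 100: score += 1
    elif sbp <= 199: score += 0
    else: score += 2
    # Temperature (degrees C)
    if temp < 35: score += 2
    elif temp < 38.5: score += 0
    else: score += 2
    # Level of consciousness (AVPU scale)
    score += {"alert": 0, "voice": 1, "pain": 2, "unresponsive": 3}[avpu]
    return score

print(mews(rr=18, hr=95, sbp=120, temp=37.0, avpu="alert"))  # → 1
```

A rising aggregate score triggers escalating responses (more frequent observations, medical review), which is why complete vital sign sets matter: a missing parameter leaves the score incomputable.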
Abstract:
It is well recognised that there are serious correlates of victimisation by traditional bullying. These have been shown to include increased levels of depression, anxiety and psychosomatic symptoms, in addition to often severe physical harm and even suicide. Bullied students also feel more socially ineffective and have greater interpersonal difficulties, higher absenteeism from school and lower academic competence. In the emerging field of cyberbullying, many researchers have hypothesised a greater impact and more severe consequences for victims because of the 24/7 nature of this form of bullying and the possibility of a wider audience. However, to date there is scarce empirical evidence to support this. This study sought to compare victims’ perceptions of the harshness and impact of bullying by traditional and cyber means. The major finding was that although students who had been victimised by traditional bullying reported that their bullying was harsher, crueller and had more impact on their lives than did students who had been cyberbullied, their mental health correlates revealed that cyber victims reported significantly more social difficulties and higher levels of anxiety and depression than traditional victims. The implications for school counsellors and mental health workers are discussed.
Abstract:
In Australia, the spread and dominance of non-native plant species has been identified as a serious threat to rangeland biodiversity and ecosystem functioning. Rangelands extend over 70% of Australia’s land mass, or more than 6 million km². These rangelands comprise a diverse set of ecosystems, including grasslands, shrublands and woodlands, spanning numerous climatic zones ranging from arid to mesic. Because of their high economic, social and environmental values, sustainable management of these vast landscapes is critical for Australia’s future. More than 2 million people live in these areas, and the major industries are ranching, mining and tourism. In terms of biodiversity values, 53 of Australia’s 85 biogeographical regions and 5 of its 15 identified biodiversity hotspots are found in rangelands.
Abstract:
African lovegrass (Eragrostis curvula) is a C4 perennial grass, native to southern Africa, that was accidentally introduced into Australia in the late 1900s as a contaminant of pasture seed. Its utility for pasture improvement and soil conservation was explored because of its recognised ability to grow in areas of low rainfall and on nutrient-poor sandy loams, and several different agronomic types have since been intentionally introduced across Australia. African lovegrass is now found in all Australian states and territories. It is a declared weed in 33 council areas of New South Wales, a declared pest plant in the ACT and Tasmania, and a Regionally Prohibited Weed in 5 of the 11 regions of Victoria. Victoria has also placed it in the very serious threat category (Carr et al. 1992). In Queensland it has yet to be declared, except under local law in the Eidsvold shire (Leigh and Walton, in press).
Abstract:
In this article, I present my experience of integrating an alternate reality gaming (ARG) framework into a pre-service science teacher education course. My goal is to provide an account of my experiences that can inform other science education practitioners at the tertiary and secondary levels who wish to adopt a similar approach in their classes. A game was designed to engage pre-service teachers with issues surrounding declining enrolments in science, technology, engineering and mathematics disciplines (i.e., the STEM crisis; Tytler, 2007) and ways of re-engaging learners with STEM subjects. The use of ARGs in science education is highly innovative. Because literature on the use of ARGs for educational purposes is scarce, I draw on a range of available literature on gaming and ARGs to define what an ARG is and to suggest how it can be included in school science classrooms.
Abstract:
This paper presents Secret SLQ, a pervasive mobile game that aims to encourage eight- to fourteen-year-olds to engage with the State Library of Queensland. The game sets out to encourage people to visit and explore the library, and to educate a generation of young people and parents who may visit the library but have no idea of the treasures it holds. The research explores how smartphone technology can be used to deliver an engaging and educational experience. The game provides a fun and interactive way to guide participants through a multi-level library building, searching for unique QR codes to unlock clues, answering quiz questions and progressing up a leaderboard. This paper outlines the design and initial deployment of the game, reporting results from a usability study and discussing initial observations made by librarians. Findings indicate that the mobile platform is suitable for delivering such experiences, but that care is needed when embedding games in such large environments so as not to confuse players.
Abstract:
The term gamification describes the addition of game elements to non-game contexts as a means to motivate and engage users. This study investigates the design, delivery and pilot evaluation of a gamified, smartphone application built to introduce new students to the campus, services and people at university during their first few weeks. This paper describes changes to the application made after an initial field study was undertaken and provides an evaluation of the impact of the redesign. Survey responses were collected from thirteen students and usage data was captured from 105 students. Results indicate three levels of user engagement and suggest that there is value in adding game elements to the experience in this way. A number of issues are identified and discussed based on game challenges, input, and facilitating game elements in an event setting such as university orientation.
Abstract:
Efficient management of domestic wastewater is a primary requirement for human well-being. Failure to adequately address issues of wastewater collection, treatment and disposal can lead to adverse public health and environmental impacts. The increasing spread of urbanisation has led to the conversion of previously rural land into urban developments and to more intensive development of semi-urban areas. However, the provision of reticulated sewerage facilities has not kept pace with this expansion in urbanisation, resulting in a growing dependency on onsite sewage treatment. Though considered only a temporary measure in the past, these systems are now regarded as the most cost-effective option and have become a permanent feature in some urban areas. This report is the first of a series to be produced and is the outcome of a research project initiated by the Brisbane City Council. The primary objective of the research was to relate the treatment performance of onsite sewage treatment systems to soil conditions at the site, with the emphasis on septic tanks. This report consists of a state-of-the-art review of research in the arena of onsite sewage treatment. The evaluation brings together significant work undertaken locally and overseas, focusing mainly on septic tanks in keeping with the primary objectives of the project, and has acted as the springboard for the later field investigations and analysis undertaken as part of the project. Septic tanks continue to be used widely owing to their simplicity and low cost. Their treatment performance can be highly variable due to numerous factors, but a properly designed, operated and maintained septic tank can produce effluent of satisfactory quality.
The reduction of hydraulic surges from washing machines and dishwashers, the regular removal of accumulated septage and the elimination of harmful chemicals are some of the practices that can improve system performance considerably. The relative advantages of multi-chamber over single-chamber septic tanks remain an unresolved issue in view of conflicting research outcomes. In recent years, aerobic wastewater treatment systems (AWTS) have been gaining in popularity. This can be mainly attributed to the desire to avoid subsurface effluent disposal, which is the main cause of septic tank failure. The use of aerobic processes for treatment of wastewater, with disinfection of effluent prior to disposal, is capable of producing effluent of a quality suitable for surface disposal. However, the field performance of these systems has been disappointing: a significant number do not perform to stipulated standards, and effluent quality can be highly variable. This is primarily due to householder neglect or ignorance of correct operational and maintenance procedures; other problems include greater susceptibility to shock loadings and sludge bulking. As identified in the literature, a number of design features can also contribute to this wide variation in quality. The other treatment processes in common use are the various types of filter systems, including intermittent and recirculating sand filters. These systems too have their inherent advantages and disadvantages, and, as with aerobic systems, their performance is very much dependent on individual householder operation and maintenance practices. In recent years the use of biofilters, particularly peat, has attracted research interest. High removal rates of various wastewater pollutants have been reported in the research literature; however, leachate from peat has also been reported in various studies.
This is an issue that needs further investigations and as such biofilters can still be considered to be in the experimental stage. The use of other filter media such as absorbent plastic and bark has also been reported in literature. The safe and hygienic disposal of treated effluent is a matter of concern in the case of onsite sewage treatment. Subsurface disposal is the most common and the only option in the case of septic tank treatment. Soil is an excellent treatment medium if suitable conditions are present. The processes of sorption, filtration and oxidation can remove the various wastewater pollutants. The subsurface characteristics of the disposal area are among the most important parameters governing process performance. Therefore it is important that the soil and topographic conditions are taken into consideration in the design of the soil absorption system. Seepage trenches and beds are the common systems in use. Seepage pits or chambers can be used where subsurface conditions warrant, whilst above grade mounds have been recommended for a variety of difficult site conditions. All these systems have their inherent advantages and disadvantages and the preferable soil absorption system should be selected based on site characteristics. The use of gravel as in-fill for beds and trenches is open to question. It does not contribute to effluent treatment and has been shown to reduce the effective infiltrative surface area. This is due to physical obstruction and the migration of fines entrained in the gravel, into the soil matrix. The surface application of effluent is coming into increasing use with the advent of aerobic treatment systems. This has the advantage that treatment is undertaken on the upper soil horizons, which is chemically and biologically the most effective in effluent renovation. Numerous research studies have demonstrated the feasibility of this practice. However the overriding criteria is the quality of the effluent. 
It has to be of exceptionally good quality in order to ensure that there are no resulting public health impacts due to aerosol drift. This is the main issue of concern, given the unreliability of the effluent quality from aerobic systems. Secondly, it has also been found that most householders do not take adequate care in the operation of spray irrigation systems or in the maintenance of the irrigation area. Under these circumstances, surface disposal of effluent should be approached with caution and would require appropriate householder education and stringent compliance requirements. Despite all this, the efficiency with which the process is undertaken will ultimately rest with the individual householder, and this is where most concern lies. Greywater requires similar consideration. Surface irrigation of greywater is currently permitted in a number of local authority jurisdictions in Queensland. Considering that greywater constitutes the largest fraction of the total wastewater generated in a household, it could be considered a potential resource. Unfortunately, in most circumstances the only pretreatment required prior to reuse is the removal of oil and grease. This is a concern, because greywater can be considered a weak to medium sewage: it contains primary pollutants such as BOD material and nutrients, and may also include microbial contamination. Its use for surface irrigation can therefore pose a potential health risk, further compounded by the fact that most householders are unaware of the potential adverse impacts of indiscriminate greywater reuse. As in the case of blackwater effluent reuse, there have been suggestions that greywater should also be subject to stringent guidelines. Under these circumstances, the surface application of any wastewater requires careful consideration.
The other option available for the disposal of effluent is the use of evaporation systems. The use of evapotranspiration systems has been covered in this report. Research has shown that these systems are susceptible to a number of factors, in particular climatic conditions, so their applicability is location specific. The design of systems based solely on evapotranspiration is also questionable; to ensure greater reliability, such systems should be designed to include soil absorption. The successful use of these systems for intermittent usage has been noted in the literature. Taking into consideration the issues discussed above, subsurface disposal of effluent is the safest option under most conditions, provided the facility has been designed to accommodate site conditions. The main problem associated with subsurface disposal is the formation of a clogging mat on the infiltrative surfaces. Once the clogging mat forms, the capacity of the soil to handle effluent is no longer governed by the soil’s hydraulic conductivity as measured by the percolation test, but rather by the infiltration rate through the clogged zone. The characteristics of the clogging mat have been shown to be influenced by various soil and effluent characteristics, and the mechanisms of its formation by various physical, chemical and biological processes. Biological clogging is the most common process and occurs when bacterial growth or its by-products reduce the soil pore diameters; it is generally associated with anaerobic conditions. The formation of the clogging mat provides significant benefits. It acts as an efficient filter for the removal of microorganisms, and because it increases the hydraulic impedance to flow, unsaturated flow conditions occur below the mat. This permits greater contact between effluent and soil particles, thereby enhancing the purification process.
This is particularly important in the case of highly permeable soils. However the adverse impacts of the clogging mat formation cannot be ignored as they can lead to significant reduction in the infiltration rate. This in fact is the most common cause of soil absorption systems failure. As the formation of the clogging mat is inevitable, it is important to ensure that it does not impede effluent infiltration beyond tolerable limits. Various strategies have been investigated to either control clogging mat formation or to remediate its severity. Intermittent dosing of effluent is one such strategy that has attracted considerable attention. Research conclusions with regard to short duration time intervals are contradictory. It has been claimed that the intermittent rest periods would result in the aerobic decomposition of the clogging mat leading to a subsequent increase in the infiltration rate. Contrary to this, it has also been claimed that short duration rest periods are insufficient to completely decompose the clogging mat, and the intermediate by-products that form as a result of aerobic processes would in fact lead to even more severe clogging. It has been further recommended that the rest periods should be much longer and should be in the range of about six months. This entails the provision of a second and alternating seepage bed. The other concepts that have been investigated are the design of the bed to meet the equilibrium infiltration rate that would eventuate after clogging mat formation; improved geometry such as the use of seepage trenches instead of beds; serial instead of parallel effluent distribution and low pressure dosing of effluent. The use of physical measures such as oxidation with hydrogen peroxide and replacement of the infiltration surface have been shown to be only of short-term benefit. 
Another important issue is the degree of pretreatment that should be provided to the effluent prior to subsurface application, and the influence exerted by pollutant loadings on clogging mat formation. Laboratory studies have shown that the total mass loadings of BOD and suspended solids are important factors in the formation of the clogging mat, and that the nature of the suspended solids also matters: the finer particles from extended aeration systems, compared with those from septic tanks, penetrate deeper into the soil and ultimately cause a denser clogging mat. However, the importance of improved pretreatment in clogging mat formation may need to be qualified in view of other research studies, which have shown that effluent quality may be a factor in highly permeable soils but not necessarily in fine-structured soils. The ultimate test of onsite sewage treatment system efficiency rests with the final disposal of effluent. The implications of system failure, as evidenced by surface ponding of effluent or seepage of contaminants into the groundwater, can be very serious, leading to environmental and public health impacts. Significant microbial contamination of surface and groundwater has been attributed to septic tank effluent, and there are a number of documented instances of septic tank related waterborne disease outbreaks affecting large numbers of people. In a recent incident, the local authority, and not the individual septic tank owners, was found liable for an outbreak of viral hepatitis A because no action had been taken to remedy septic tank failure. This illustrates the responsibility placed on local authorities to ensure the proper operation of onsite sewage treatment systems. Even a properly functioning soil absorption system is only capable of removing phosphorus and microorganisms.
The nitrogen remaining after plant uptake will not be retained in the soil column, but will instead gradually seep into the groundwater as nitrate, since conditions for nitrogen removal by denitrification are not generally present in a soil absorption bed. Dilution by groundwater is the only treatment available for reducing the nitrogen concentration to specified levels; based on subsurface conditions, this essentially entails a maximum allowable concentration of septic tanks in a given area. Unfortunately, nitrogen is not the only wastewater pollutant of concern. Relatively long survival times and travel distances have been noted for microorganisms originating from soil absorption systems, particularly if saturated conditions persist under the soil absorption bed or if effluent runs off the surface as a result of system failure. Soils also have a finite capacity for the removal of phosphorus; once this capacity is exceeded, phosphorus too will seep into the groundwater, and the relatively high mobility of phosphorus in sandy soils has been noted in the literature. These issues have serious implications for the design and siting of soil absorption systems. It is important to ensure not only that the system design is based on subsurface conditions, but also that the density of these systems in a given area is controlled. This essentially involves the adoption of a land capability approach to determine the limitations of an individual site for onsite sewage disposal. The most limiting factor at a particular site determines the overall capability classification for that site, which in turn dictates the type of effluent disposal method to be adopted.
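The most-limiting-factor rule described above can be sketched in a few lines. The attribute names and limitation levels below are invented for illustration; an actual land capability scheme defines its own factors and classes.

```python
# Severity order for limitation levels (hypothetical three-class scheme).
SEVERITY = {"low": 0, "moderate": 1, "high": 2}

def site_capability(factors):
    """factors: dict mapping a site attribute to its limitation level.
    The overall classification is governed by the most limiting factor."""
    worst_attr = max(factors, key=lambda k: SEVERITY[factors[k]])
    return worst_attr, factors[worst_attr]

# Example: permeability and slope are acceptable, but a shallow
# water table dominates and would dictate the disposal method.
attr, level = site_capability({
    "soil permeability": "low",
    "slope": "moderate",
    "depth to water table": "high",
})
print(attr, level)  # → depth to water table high
```

The point of the rule is conservatism: a site is only as suitable as its worst attribute, regardless of how favourable the others are.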
Abstract:
Adolescent risk-taking behavior has potentially serious injury consequences, and school-based behavior change programs offer a means of reducing such harm. A well-designed program is likely to be theory-based and ecologically valid; however, the process of operationalising theory is rarely described. The aim of this paper is to outline how the Theory of Planned Behavior and Cognitive Behavioral Therapy informed intervention design in a school setting. Teacher interviews provided insights into strategies that might be implemented within the curriculum and provided detail used to operationalise the theory constructs. Benefits and challenges in applying both theories are described, with examples from an injury prevention program, Skills for Preventing Injury in Youth.