858 results for Roadside rest areas
Abstract:
Atmospheric nanoparticles are among the pollutants currently unregulated by ambient air quality standards. The aim of this chapter is to assess the environmental and health impacts of atmospheric nanoparticles in European environments. The chapter begins with the conventional information on the origin of atmospheric nanoparticles, followed by their physical and chemical characteristics. A brief overview of recently published review articles on this topic is then presented to guide those readers interested in exploring any specific aspect of nanoparticles in greater detail. A further section reports a summary of recently published studies on atmospheric nanoparticles in European cities. This covers a total of about 45 sampling locations in 30 different cities within 15 European countries for quantifying levels of roadside and urban background particle number concentrations (PNCs). Average PNCs at roadside and urban background sites were found to be 3.82 ± 3.25 × 10⁴ cm⁻³ and 1.63 ± 0.82 × 10⁴ cm⁻³, respectively, giving a roadside-to-background PNC ratio of ~2.4. Engineered nanoparticles are one of the key emerging categories of airborne nanoparticles, especially in indoor environments. Their ambient concentrations may increase in future due to the widespread use of nanotechnology-integrated products. An evaluation of their sources and probable impacts on air quality and human health is briefly discussed in the following section. Respiratory deposition doses received by the public exposed to roadside PNCs in numerous European locations are then estimated. These were found to be in the 1.17–7.56 × 10¹⁰ h⁻¹ range over the studied roadside European locations. The following section discusses the potential framework for airborne nanoparticle regulations in Europe and, in addition, the existing control measures to limit nanoparticle emissions at source. The chapter finally concludes with a synthesis of the topic areas covered and highlights important areas for further work.
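As a rough check on the figures above, the short sketch below recomputes the roadside-to-background ratio from the two quoted mean PNCs and illustrates an order-of-magnitude respiratory deposition dose using the commonly used formulation RDD ≈ PNC × ventilation rate × deposition fraction. The ventilation rate and deposition fraction are assumed illustrative values, not parameters taken from the chapter.

```python
# Illustrative sketch only: ratio of the quoted mean PNCs and an order-of-magnitude
# respiratory deposition dose (RDD). The ventilation rate and deposition fraction
# are assumed values chosen for illustration, not figures from the chapter.

roadside_pnc = 3.82e4      # particles cm^-3 (mean roadside PNC quoted above)
background_pnc = 1.63e4    # particles cm^-3 (mean urban background PNC quoted above)

ratio = roadside_pnc / background_pnc
print(f"Roadside/background ratio of the means: {ratio:.2f}")  # ~2.3; the chapter reports ~2.4

ventilation_m3_per_h = 1.0   # assumed minute ventilation, m^3 h^-1 (light activity)
deposition_fraction = 0.65   # assumed total respiratory deposition fraction

# 1 m^3 = 1e6 cm^3, so the dose comes out in particles per hour
rdd_per_hour = roadside_pnc * ventilation_m3_per_h * 1e6 * deposition_fraction
print(f"Illustrative RDD at the mean roadside PNC: {rdd_per_hour:.2e} h^-1")
# ~2.5e10 h^-1, which falls within the 1.17-7.56 x 10^10 h^-1 range quoted above
```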
Abstract:
Young people seen as ‘at risk’ are a substantial focus across a wide range of policy and practice fields in national and international contexts. This article addresses two of those fields: youth homelessness and young people failing to obtain a basic education that will give them access to employment and full community participation as active citizens. By comparing solutions to the problems of youth homelessness and youth educationally at risk, the article distils key meta-characteristics useful for both social workers and educators in mutually supporting some of the most at-risk young people in our communities today. This is what the authors term ‘a joined-up practice’.
Abstract:
Walking as an out-of-home mobility activity is recognised for its contribution to healthy and active ageing. The environment can have a powerful effect on the amount of walking activity undertaken by older people, thereby influencing their capacity to maintain their wellbeing and independence. This paper reports the findings from research examining the experiences of neighbourhood walking for 12 older people from six different inner-city high-density suburbs, through analysis of data derived from travel diaries, individual time/space activity maps (created via GPS tracking over a seven-day period and GIS technology), and in-depth interviews. Reliance on motor vehicles, the competing interests of pedestrians and cyclists on shared pathways, and problems associated with transit systems, public transport and pedestrian infrastructure emerged as key barriers to older people venturing out of home on foot. GPS and GIS technology provide new opportunities for furthering understanding of the out-of-home mobility of older populations.
Abstract:
Efficient management of domestic wastewater is a primary requirement for human wellbeing. Failure to adequately address issues of wastewater collection, treatment and disposal can lead to adverse public health and environmental impacts. The increasing spread of urbanisation has led to the conversion of previously rural land into urban developments and the more intensive development of semi-urban areas. However, the provision of reticulated sewerage facilities has not kept pace with this expansion in urbanisation. This has resulted in a growing dependency on onsite sewage treatment. Though considered only a temporary measure in the past, these systems are now regarded as the most cost-effective option and have become a permanent feature in some urban areas. This report is the first of a series of reports to be produced and is the outcome of a research project initiated by the Brisbane City Council. The primary objective of the research undertaken was to relate the treatment performance of onsite sewage treatment systems to soil conditions at the site, with the emphasis being on septic tanks. This report consists of a ‘state-of-the-art’ review of research undertaken in the arena of onsite sewage treatment. The evaluation of research brings together significant work undertaken locally and overseas. It focuses mainly on septic tanks, in keeping with the primary objectives of the project. This report has acted as the springboard for the later field investigations and analysis undertaken as part of the project. Septic tanks continue to be used widely due to their simplicity and low cost. Generally, the treatment performance of septic tanks can be highly variable due to numerous factors, but a properly designed, operated and maintained septic tank can produce effluent of satisfactory quality. The reduction of hydraulic surges from washing machines and dishwashers, regular removal of accumulated septage and the elimination of harmful chemicals are some of the practices that can improve system performance considerably. The relative advantages of multi-chamber over single-chamber septic tanks are an issue that needs to be resolved in view of conflicting research outcomes. In recent years, aerobic wastewater treatment systems (AWTS) have been gaining in popularity. This can be mainly attributed to the desire to avoid subsurface effluent disposal, which is the main cause of septic tank failure. The use of aerobic processes for the treatment of wastewater and the disinfection of effluent prior to disposal is capable of producing effluent of a quality suitable for surface disposal. However, the field performance of these systems has been disappointing. A significant number of these systems do not perform to stipulated standards, and effluent quality can be highly variable. This is primarily due to householder neglect or ignorance of correct operational and maintenance procedures. Other problems include greater susceptibility to shock loadings and sludge bulking. As identified in the literature, a number of design features can also contribute to this wide variation in quality. The other treatment processes in common use are the various types of filter systems, including intermittent and recirculating sand filters. These systems too have their inherent advantages and disadvantages. Furthermore, as in the case of aerobic systems, their performance is very much dependent on individual householder operation and maintenance practices.
In recent years, the use of biofilters, and particularly of peat, has attracted research interest. High removal rates of various wastewater pollutants have been reported in the research literature. Despite these satisfactory results, leachate from peat has been reported in various studies. This is an issue that needs further investigation, and as such biofilters can still be considered to be in the experimental stage. The use of other filter media such as absorbent plastic and bark has also been reported in the literature. The safe and hygienic disposal of treated effluent is a matter of concern in the case of onsite sewage treatment. Subsurface disposal is the most common option, and the only option in the case of septic tank treatment. Soil is an excellent treatment medium if suitable conditions are present. The processes of sorption, filtration and oxidation can remove the various wastewater pollutants. The subsurface characteristics of the disposal area are among the most important parameters governing process performance. Therefore it is important that soil and topographic conditions are taken into consideration in the design of the soil absorption system. Seepage trenches and beds are the common systems in use. Seepage pits or chambers can be used where subsurface conditions warrant, whilst above-grade mounds have been recommended for a variety of difficult site conditions. All these systems have their inherent advantages and disadvantages, and the preferred soil absorption system should be selected based on site characteristics. The use of gravel as in-fill for beds and trenches is open to question. It does not contribute to effluent treatment and has been shown to reduce the effective infiltrative surface area, due to physical obstruction and the migration of fines entrained in the gravel into the soil matrix. The surface application of effluent is coming into increasing use with the advent of aerobic treatment systems. This has the advantage that treatment is undertaken in the upper soil horizons, which are chemically and biologically the most effective in effluent renovation. Numerous research studies have demonstrated the feasibility of this practice. However, the overriding criterion is the quality of the effluent. It has to be of exceptionally good quality in order to ensure that there are no resulting public health impacts due to aerosol drift. This is the main issue of concern, given the unreliability of effluent quality from aerobic systems. Secondly, it has also been found that most householders do not take adequate care in the operation of spray irrigation systems or in the maintenance of the irrigation area. Under these circumstances, surface disposal of effluent should be approached with caution and would require appropriate householder education and stringent compliance requirements. Despite all this, however, the efficiency with which the process is undertaken will ultimately rest with the individual householder, and this is where most concern lies. Greywater requires similar consideration. Surface irrigation of greywater is currently permitted in a number of local authority jurisdictions in Queensland. Considering that greywater constitutes the largest fraction of the total wastewater generated in a household, it could be considered a potential resource. Unfortunately, in most circumstances the only pretreatment required prior to reuse is the removal of oil and grease.
This is an issue of concern, as greywater can be considered a weak-to-medium-strength sewage: it contains primary pollutants such as BOD material and nutrients and may also include microbial contamination. Therefore its use for surface irrigation can pose a potential health risk. This is further compounded by the fact that most householders are unaware of the potential adverse impacts of indiscriminate greywater reuse. As in the case of blackwater effluent reuse, there have been suggestions that greywater should also be subjected to stringent guidelines. Under these circumstances, the surface application of any wastewater requires careful consideration. The other option available for the disposal of effluent is the use of evaporation systems. The use of evapotranspiration systems has been covered in this report. Research has shown that these systems are susceptible to a number of factors, in particular climatic conditions. As such, their applicability is location specific. Also, the design of systems based solely on evapotranspiration is questionable. In order to ensure greater reliability, such systems should be designed to include soil absorption. The successful use of these systems for intermittent usage has been noted in the literature. Taking into consideration the issues discussed above, subsurface disposal of effluent is the safest under most conditions, provided the facility has been designed to accommodate site conditions. The main problem associated with subsurface disposal is the formation of a clogging mat on the infiltrative surfaces. Due to the formation of the clogging mat, the capacity of the soil to handle effluent is no longer governed by the soil’s hydraulic conductivity as measured by the percolation test, but rather by the infiltration rate through the clogged zone. The characteristics of the clogging mat have been shown to be influenced by various soil and effluent characteristics. Secondly, the mechanisms of clogging mat formation have been found to be influenced by various physical, chemical and biological processes. Biological clogging is the most common process and occurs when bacterial growth or its by-products reduce the soil pore diameters. Biological clogging is generally associated with anaerobic conditions. The formation of the clogging mat provides significant benefits. It acts as an efficient filter for the removal of microorganisms. Also, as the clogging mat increases the hydraulic impedance to flow, unsaturated flow conditions will occur below the mat. This permits greater contact between effluent and soil particles, thereby enhancing the purification process. This is particularly important in the case of highly permeable soils. However, the adverse impacts of clogging mat formation cannot be ignored, as it can lead to a significant reduction in the infiltration rate. This is in fact the most common cause of soil absorption system failure. As the formation of the clogging mat is inevitable, it is important to ensure that it does not impede effluent infiltration beyond tolerable limits. Various strategies have been investigated to either control clogging mat formation or to remediate its severity. Intermittent dosing of effluent is one such strategy that has attracted considerable attention. Research conclusions with regard to short-duration rest intervals are contradictory.
It has been claimed that intermittent rest periods result in aerobic decomposition of the clogging mat, leading to a subsequent increase in the infiltration rate. Contrary to this, it has also been claimed that short rest periods are insufficient to completely decompose the clogging mat, and that the intermediate by-products formed by the aerobic processes in fact lead to even more severe clogging. It has been further recommended that rest periods should be much longer, in the order of about six months, which entails the provision of a second, alternating seepage bed. The other concepts that have been investigated are the design of the bed to meet the equilibrium infiltration rate that would eventuate after clogging mat formation; improved geometry, such as the use of seepage trenches instead of beds; serial rather than parallel effluent distribution; and low-pressure dosing of effluent. The use of physical measures such as oxidation with hydrogen peroxide and replacement of the infiltration surface has been shown to be of only short-term benefit. Another issue of importance is the degree of pretreatment that should be provided to the effluent prior to subsurface application and the influence exerted by pollutant loadings on clogging mat formation. Laboratory studies have shown that the total mass loadings of BOD and suspended solids are important factors in the formation of the clogging mat. The nature of the suspended solids has also been found to be an important factor: the finer particles from extended aeration systems, compared with those from septic tanks, penetrate deeper into the soil and hence ultimately cause a denser clogging mat. However, the importance of improved pretreatment in clogging mat formation may need to be qualified in view of other research studies, which have shown that effluent quality may be a factor in the case of highly permeable soils but not necessarily with fine-structured soils. The ultimate test of onsite sewage treatment system efficiency rests with the final disposal of effluent. The implications of system failure, as evidenced by surface ponding of effluent or the seepage of contaminants into the groundwater, can be very serious, as they can lead to environmental and public health impacts. Significant microbial contamination of surface and groundwater has been attributed to septic tank effluent. There are a number of documented instances of septic tank-related waterborne disease outbreaks affecting large numbers of people. In a recent incident, the local authority, and not the individual septic tank owners, was found liable for an outbreak of viral hepatitis A because no action had been taken to remedy septic tank failure. This illustrates the responsibility placed on local authorities in terms of ensuring the proper operation of onsite sewage treatment systems. Even a properly functioning soil absorption system is only capable of removing phosphorus and microorganisms. The nitrogen remaining after plant uptake will not be retained in the soil column, but will instead gradually seep into the groundwater as nitrate. Conditions for nitrogen removal by denitrification are not generally present in a soil absorption bed. Dilution by groundwater is the only treatment available for reducing the nitrogen concentration to specified levels. Therefore, based on subsurface conditions, this essentially entails a maximum allowable density of septic tanks in a given area.
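The dilution argument at the end of the paragraph above is essentially a steady-state mass balance. The sketch below illustrates that reasoning with a simple nitrate balance, solving for the lot area (and hence maximum septic tank density) needed to dilute effluent nitrogen to a target concentration. All input values are assumed illustrative figures, not values taken from the report.

```python
# Hedged sketch of the groundwater-dilution reasoning above: a steady-state
# nitrate mass balance solved for the lot area (and hence maximum septic tank
# density) needed to dilute effluent nitrogen to a target concentration.
# All input values are assumed illustrative figures, not values from the report.

effluent_flow_L_per_day = 700.0    # assumed household effluent flow
effluent_N_mg_per_L = 40.0         # assumed total N in effluent reaching groundwater
target_N_mg_per_L = 10.0           # e.g. a nitrate-N limit for groundwater
recharge_mm_per_year = 300.0       # assumed net rainfall recharge (1 mm over 1 m^2 = 1 L)

n_load_mg_per_year = effluent_flow_L_per_day * 365 * effluent_N_mg_per_L
effluent_volume_L_per_year = effluent_flow_L_per_day * 365

# Mixed concentration = N load / (effluent volume + recharge over lot area A).
# Setting this equal to the target and solving for A:
required_area_m2 = (n_load_mg_per_year / target_N_mg_per_L
                    - effluent_volume_L_per_year) / recharge_mm_per_year
print(f"Minimum lot area per system for dilution to {target_N_mg_per_L} mg/L: "
      f"{required_area_m2:.0f} m^2")   # ~2550 m^2 with these assumed inputs
```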
Unfortunately, nitrogen is not the only wastewater pollutant of concern. Relatively long survival times and travel distances have been noted for microorganisms originating from soil absorption systems. This is likely to happen if saturated conditions persist under the soil absorption bed or if effluent runs off the surface as a result of system failure. Soils have a finite capacity for the removal of phosphorus. Once this capacity is exceeded, phosphorus too will seep into the groundwater. The relatively high mobility of phosphorus in sandy soils has been noted in the literature. These issues have serious implications for the design and siting of soil absorption systems. It is important to ensure not only that the system design is based on subsurface conditions, but also that the density of these systems in a given area is treated as a critical issue. This essentially involves the adoption of a land capability approach to determine the limitations of an individual site for onsite sewage disposal. The most limiting factor at a particular site would determine the overall capability classification for that site, which would in turn dictate the type of effluent disposal method to be adopted.
Abstract:
Aging and its effects on inflammation in skeletal muscle at rest and following exercise-induced muscle injury. Am J Physiol Regul Integr Comp Physiol 298: R1485–R1495, 2010. First published April 14, 2010; doi:10.1152/ajpregu.00467.2009. The world's elderly population is expanding rapidly, and we are now faced with the significant challenge of maintaining or improving physical activity, independence, and quality of life in the elderly. Counteracting the progressive loss of muscle mass that occurs in the elderly, known as sarcopenia, represents a major hurdle in achieving these goals. Indirect evidence for a role of inflammation in sarcopenia is that markers of systemic inflammation correlate with the loss of muscle mass and strength in the elderly. More direct evidence is that, compared with skeletal muscle of young people, the number of macrophages is lower, the gene expression of several cytokines is higher, and stress signaling proteins are activated in skeletal muscle of elderly people at rest. Sarcopenia may also result from inadequate repair and chronic maladaptation following muscle injury in the elderly. Macrophage infiltration and the gene expression of certain cytokines are reduced in skeletal muscle of elderly people compared with young people following exercise-induced muscle injury. Further research is required to identify the cause(s) of inflammation in skeletal muscle of elderly people. Additional work is also needed to expand our understanding of the cells, proteins, and transcription factors that regulate inflammation in the skeletal muscle of elderly people at rest and after exercise. This knowledge is critical for devising strategies to restrict sarcopenia and improve the health of today's elderly population.
Abstract:
Objective: Food insecurity is the limited or uncertain availability of, or access to, nutritionally adequate, culturally appropriate and safe foods. Food insecurity may result in inadequate dietary intakes, overweight or obesity and the development of chronic disease. Internationally, few studies have focused on the range of potential health outcomes related to food insecurity among adults residing in disadvantaged locations, and no such Australian studies exist. The objective of this study was to investigate associations between food insecurity, socio-demographic and health factors and dietary intakes among adults residing in disadvantaged urban areas. Design: Data were collected by mail survey (n = 505, 53% response rate), which ascertained information about food security status, demographic characteristics (such as age, gender, household income and education), fruit and vegetable intakes, take-away and meat consumption, general health, depression and chronic disease. Setting: Disadvantaged suburbs of Brisbane city, Australia, 2009. Subjects: Individuals aged ≥ 20 years. Results: Approximately one in four households (25%) were food insecure. Food insecurity was associated with lower household income, poorer general health, increased healthcare utilisation and depression. These associations remained after adjustment for age, gender and household income. Conclusion: Food insecurity is prevalent in urbanised disadvantaged areas of developed countries such as Australia. Low-income households are at high risk of experiencing food insecurity. Food insecurity may result in significant health burdens among the population, and this may be concentrated in socioeconomically disadvantaged suburbs.
Abstract:
Introduction: Food insecurity is the limited or uncertain availability of, access to, or ability to acquire nutritionally adequate, culturally relevant and safe foods. Adults suffering from food insecurity are at risk of inadequate nutrient intakes or, paradoxically, overweight/obesity and the development of chronic disease. Despite the global financial crisis and rising costs of living, there are few studies investigating the potential dietary consequences of food insecurity among the Australian population. This study examined whether food insecurity was associated with weight status and poorer intakes of fruits, vegetables and takeaway foods among adults residing in socioeconomically disadvantaged urbanised areas. Methods: In this cross-sectional study, a random sample of residents (n = 1000) was selected from the most disadvantaged suburbs of Brisbane city (response rate 51%). Data were collected by postal questionnaire, which ascertained socio-demographic information, household food security status, height, weight, fruit and vegetable intakes and takeaway consumption. Data were analysed using chi-square and logistic regression. Results: The overall prevalence of food insecurity was 31%. Food insecurity was not associated with weight status among men or women. Associations between food security status and potential dietary consequences differed for men and women. Among women, food security was not associated with fruit or vegetable intakes or takeaway consumption. In contrast, among men food security was associated with vegetable intakes and consumption of takeaway food: men reporting food insecurity had lower intakes of vegetables and were more likely to consume takeaway foods compared with those who were food secure. Conclusion: Food security is an important public health issue in Australia and has potential dietary consequences that may adversely affect the health of food-insecure groups, most notably men residing in food-insecure households.
Abstract:
Purpose: Food insecurity is the limited or uncertain availability of, or ability to acquire, nutritionally adequate, culturally relevant and safe foods. Adults suffering from food insecurity are at risk of inadequate nutrient intakes or, paradoxically, overweight/obesity and the development of chronic disease. Despite the global financial crisis and rising costs of living, few studies have investigated the potential dietary and health consequences of food insecurity among the Australian population. This study examined whether food insecurity was associated with health behaviours and dietary intakes among adults residing in socioeconomically disadvantaged urbanised areas. Methods: In this cross-sectional study, a random sample of residents (n = 1000) was selected from the most disadvantaged suburbs of Brisbane city (response rate 51%). Data were collected by postal questionnaire, which ascertained socio-demographic information, household food security, height, weight, frequency of healthcare utilisation, presence of chronic disease and intakes of fruit, vegetables and take-away. Data were analysed using logistic regression. Results/Findings: The prevalence of food insecurity was 25%. Those reporting food insecurity were two to three times more likely to have seen a general practitioner or been hospitalised within the previous 6 months. Furthermore, food insecurity was associated with a three-to-six-fold increase in the likelihood of experiencing depression. Food insecurity was associated with higher intakes of some take-away foods; however, it was not significantly associated with weight status or with intakes of fruits or vegetables among this disadvantaged sample. Conclusion: Food insecurity has potential adverse health consequences that may result in significant health burdens among the population, and these may be concentrated in socioeconomically disadvantaged suburbs.
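The "two to three times more likely" figures above are adjusted odds ratios of the kind obtained by exponentiating logistic regression coefficients. The sketch below shows that step on synthetic data; the variable names, effect sizes and data are illustrative assumptions, not the study's dataset or exact model.

```python
# Synthetic illustration: an adjusted odds ratio from logistic regression.
# Variable names, effect sizes and data are assumptions for illustration only,
# not the study's dataset or exact model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "food_insecure": rng.binomial(1, 0.25, n),   # ~25% prevalence, matching the figure above
    "age": rng.integers(20, 80, n),
    "female": rng.binomial(1, 0.5, n),
})
# Simulate a GP-visit outcome whose odds roughly double with food insecurity
logit_p = -0.5 + 0.8 * df["food_insecure"] + 0.01 * (df["age"] - 50)
df["saw_gp"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("saw_gp ~ food_insecure + age + female", data=df).fit(disp=False)
adjusted_ors = np.exp(model.params)              # exponentiated coefficients = odds ratios
print(adjusted_ors["food_insecure"])             # roughly 2 by construction
```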
Abstract:
This study aimed to review and synthesise existing literature investigating the experience of overseas-trained health professionals (OTHPs) in rural and remote areas of destination countries. A systematic literature review was conducted using electronic databases and a manual search of studies published from January 2004 to February 2011. Data were analysed from the final 17 original articles that met the inclusion criteria. The reviewed research studies were conducted in Australia, Canada, New Zealand, the UK and the USA. Overseas-trained medical practitioners were the most frequently researched (n = 14); two studies involved nurses and one study included several health professionals. Three main themes emerged from the review: (i) expectations; (ii) cultural diversity; and (iii) orientation and integration into the rural and remote health work environment. The OTHPs were expected to possess the appropriate professional and cultural skills, while they themselves expected recognition of their previous experience and adequate organisational orientation and support. A welcoming and accepting community, coupled with a relaxed rural lifestyle and the joy of continued patient care, resulted in successful integration and contributed to increased staff retention rates. Recognition of expectations and cultural diversity by all parties, and comprehensive orientation with sufficient organisational support, are important elements in the integration of OTHPs and the subsequent delivery of quality health care to people living in rural and remote areas.
Abstract:
Background: Cardiovascular disease (CVD) is more prevalent in regional and remote Australia than in metropolitan areas. The aim of Healthy Hearts was to determine age- and sex-specific CVD risk factor levels and the potential value of national risk clinics. Methods: Healthy Hearts was an observational research study conducted in four purposefully selected higher-risk communities in regional Victoria, Australia. The main outcome measures were the proportion of participants with CVD risk factors, with group comparisons to determine the adjusted likelihood of elevated risk factor levels. Trained personnel used a standardized protocol over four weeks per community to measure CVD risk factor levels, estimate absolute CVD risk and provide feedback and advice. Results: A total of 2125 self-selected participants were assessed (mean age 58 ± 15 years, 57% women). Overall, CVD risk factors were highly prevalent. More men than women had ≥ 2 modifiable CVD risk factors (76% vs. 68%, p < .001), pre-existing CVD (20% vs. 15%, p < .01) and a major ECG abnormality requiring follow-up (15% vs. 7%, p < .001). Fewer men reported depressive symptoms compared to women (28% vs. 22%, p < .01). A higher proportion of women were obese (adjusted OR 1.36, 95% CI 1.13 to 1.63) and physically inactive (adjusted OR 1.32, 95% CI 1.07 to 1.63). Conclusions: High CVD risk factor levels were confirmed for regional Victoria. Close engagement with individuals and communities provides scope for the application of regional risk management clinics to reduce the burden of CVD risk in regional Australia.
Abstract:
The majority of cancer nurses have to manage intravascular devices (IVDs) on a daily basis, thus placing nurses in the strongest position to generate and use the best available evidence to inform this area of practice and to ensure that patients are receiving the best care available. Our literature clearly reflects that cancer nurses are concerned about complications associated with IVDs (e.g., extravasation,¹ IVD-related bloodstream infection [IVD-BSI],²,³ and thrombosis⁴). Although enormous attention is given to this area, a number of nursing practices are not sufficiently based on empirical evidence.⁵,⁶ Nurses need to set goals and priorities for future research and investments. Priority areas for future research are suggested here for your consideration.
Abstract:
The official need for content teachers to teach the language features of their fields has never been greater in Australia than now. In 2012, the recently formed national curriculum board announced that all teachers are responsible for the English language development of students whose first language or dialect is not Standard Australian English (SAE). This formal endorsement is an important juncture regarding the way expertise might be developed, perceived and exchanged between content and language teachers through collaboration, in order for the goals of English language learners in content areas to be realised. To that end, we conducted an action research project to explore and extend the reading strategies pedagogy of one English language teacher who teaches English language learners in a parallel junior high school Geography program. Such pedagogy will be valuable for all teachers as they seek to contribute to English language development goals as outlined in national curricula.
Abstract:
The 2011 floods illustrated once again Queensland’s vulnerability to flooding and similar disasters. Climate change will increase the frequency and magnitude of such events and will have a variety of other impacts. To deal with these impacts, governments at all levels need to be prepared and to work together. Like the rest of the nation, most of the state’s population is located in coastal areas, and these areas are more vulnerable to the impacts of climate change. This paper examines climate change adaptation efforts in coastal Queensland. The aim is to increase the local disaster resilience of people and property by fostering coordination between local and state government planning activities in coastal high-hazard areas. By increasing the ability of local governments and state agencies to coordinate planning activities, we can help communities adapt to the impacts of climate change. Towards that end, we look at the ways these groups currently interact, especially with regard to issues involving uncertainty related to climate change impacts. Through an examination of climate change-related activities by Queensland’s coastal local governments and state-level planning agencies, and of how they coordinate their planning activities at different levels, we aim to identify the weaknesses of the current planning system in responding to the challenges of climate change adaptation, as well as opportunities for improving the ways we plan and coordinate planning. We then make recommendations to improve resilience in advance of disasters, so as to help speed up recovery when they occur.
Abstract:
This report presents a snapshot from work funded by the Queensland Injury Prevention Council in 2010-11, titled “Feasibility of Using Health Data Sources to Inform Product Safety Surveillance in Queensland children”. The project provided an evaluation of the currently available evidence base for identification and surveillance of product-related injuries in children in Queensland and Australia. A comprehensive 300-page report was produced (available at: http://eprints.qut.edu.au/46518/) and a series of recommendations was made which proposed: improvements in the product safety data system, increased utilisation of health data for proactive and reactive surveillance, enhanced collaboration between the health sector and the product safety sector, and improved ability of health data to meet the needs of product safety surveillance. At the conclusion of the project, a Consumer Product Injury Research Advisory Group (CPIRAG) was established as a working party to the Queensland Injury Prevention Council (QIPC), to prioritise and advance these recommendations and to work collaboratively with key stakeholders to promote the role of injury data in supporting product safety policy decisions at the Queensland and national level. This group continues to meet monthly and comprises the organisations represented on the second page of this report. One of the key priorities of CPIRAG for 2012 was to produce a snapshot report highlighting problem areas for potential action arising out of the larger report. Subsequent funding to write this snapshot report was provided by the Institute for Health and Biomedical Innovation, Injury Prevention and Rehabilitation Domain at QUT in 2012. This work was undertaken by Dr Kirsten McKenzie and researchers from QUT's Centre for Accident Research and Road Safety - Queensland. This snapshot report provides an evidence base for potential further action on a range of children’s products that are significantly represented in injury data. Further information regarding injury hazards, safety advice and regulatory responses is available on the Office of Fair Trading (OFT) Queensland website and the Product Safety Australia website. Links to these resources are provided for each product reviewed.
Abstract:
Biological validation of new radiotherapy modalities is essential to understand their therapeutic potential. Antiprotons have been proposed for cancer therapy due to the enhanced dose deposition provided by antiproton-nucleon annihilation. We assessed cellular DNA damage and the relative biological effectiveness (RBE) of a clinically relevant antiproton beam. Despite a modest LET (~19 keV/μm), antiproton spread-out Bragg peak (SOBP) irradiation caused significant residual γ-H2AX foci compared with X-ray, proton and antiproton plateau irradiation. RBE values of ~1.48 in the SOBP and ~1 in the plateau were measured and used for a qualitative effective-dose-curve comparison with protons and carbon ions. Foci in the antiproton SOBP were larger and more structured compared with those from X-rays, protons and carbon ions. This is likely due to overlapping particle tracks near the annihilation vertex, creating spatially correlated DNA lesions. No biological effects were observed at 28–42 mm away from the primary beam, suggesting minimal risk from long-range secondary particles.
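For readers unfamiliar with the RBE values quoted above, the sketch below illustrates the standard definition: the ratio of the reference-radiation dose to the test-beam dose producing the same biological effect, here computed from a linear-quadratic survival model. The α and β parameters are hypothetical placeholders, not values from this study.

```python
# Hedged sketch of the standard RBE definition behind statements such as
# "RBE of ~1.48 in the SOBP": RBE = (reference dose) / (test-beam dose) that
# produces the same biological effect. The linear-quadratic parameters below
# are hypothetical placeholders, not values from this study.
import numpy as np

def lq_dose_for_survival(surv, alpha, beta):
    """Dose D solving alpha*D + beta*D**2 = -ln(surv) (linear-quadratic model)."""
    effect = -np.log(surv)
    # positive root of beta*D^2 + alpha*D - effect = 0
    return (-alpha + np.sqrt(alpha**2 + 4 * beta * effect)) / (2 * beta)

survival_level = 0.1                                            # e.g. 10% survival
d_reference = lq_dose_for_survival(survival_level, 0.20, 0.05)  # reference X-rays (assumed)
d_test = lq_dose_for_survival(survival_level, 0.35, 0.05)       # test beam (assumed)

print(f"RBE at {survival_level:.0%} survival: {d_reference / d_test:.2f}")
```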