220 results for degree of priority importance
Abstract:
Family grocery shopping has traditionally been accepted as the domain of women; however, modern social and demographic movements challenge traditional gender roles within the family structure. Men now engage in grocery shopping more freely and frequently, yet male shopping behaviour and beliefs remain comparatively unexamined and present an opportunity for research. This study identifies specific store characteristics, investigates the perceived importance of those characteristics and explores gender, age and income differences that may exist. Data were collected from a random sample of 280 male and female grocery shoppers. Results indicated statistically significant differences between genders in the perceived importance of most store characteristics. Overall, male grocery shoppers considered supermarket store characteristics less important than female shoppers did. Income did not affect the level of importance shoppers attached to these characteristics; however, respondents' age, education and occupation influenced perceptions of price, promotions and cleanliness.
Abstract:
Creativity plays an increasingly important role in our personal, social, educational, and community lives. For adolescents, creativity can enable self-expression, be a means of pushing boundaries, and assist learning, achievement, and completion of everyday tasks. Moreover, adolescents who demonstrate creativity can potentially enhance their capacity to face unknown future challenges, address mounting social and ecological issues in our global society, and improve their career opportunities and contribution to the economy. For these reasons, creativity is an essential capacity for young people in their present and future, and is highlighted as a priority in current educational policy nationally and internationally. Despite growing recognition of creativity's importance and attention to creativity in research, the creative experience from the perspectives of the creators themselves, and the creativity of adolescents, remain neglected fields of study. Hence, this research investigated adolescents' self-reported experiences of creativity to improve understanding of their creative processes and manifestations, and how these can be supported or inhibited. Although some aspects of creativity have been extensively researched, there were no comprehensive, multidisciplinary theoretical frameworks of adolescent creativity to provide a foundation for this study. Therefore, a grounded theory methodology was adopted for the purpose of constructing a new theory to describe and explain adolescents' creativity in a range of domains. The study's constructivist-interpretivist perspective viewed the data and findings as interpretations of adolescents' creative experiences, co-constructed by the participants and the researcher. The research was conducted in two academically selective high schools in Australia: one arts school, and one science, mathematics, and technology school. Twenty adolescent participants (10 from each school) were selected using theoretical sampling. Data were collected via focus groups, individual interviews, an online discussion forum, and email communications. Grounded theory methods informed a process of concurrent data collection and analysis; each iteration of analysis informed subsequent data collection. Findings portray creativity as it was perceived and experienced by participants, presented in a Grounded Theory of Adolescent Creativity. The Grounded Theory of Adolescent Creativity comprises a core category, Perceiving and Pursuing Novelty: Not the Norm, which linked all findings in the study. This core category explains how creativity involved adolescents perceiving stimuli and experiences differently, approaching tasks or life unconventionally, and pursuing novel ideas to create outcomes that are not the norm when compared with outcomes by peers. Elaboration of the core category is provided by the major categories of findings. That is, adolescent creativity entailed utilising a network of Sub-Processes of Creativity, using strategies for Managing Constraints and Challenges, and drawing on different Approaches to Creativity – adaptation, transfer, synthesis, and genesis – to apply the sub-processes and produce creative outcomes. Potentially, there were Effects of Creativity on Creators and Audiences, depending on the adolescent and the task. Three Types of Creativity were identified as the manifestations of the creative process: creative personal expression, creative boundary pushing, and creative task achievement.
Interactions among adolescents’ dispositions and environments were influential in their creativity. Patterns and variations of these interactions revealed a framework of four Contexts for Creativity that offered different levels of support for creativity: high creative disposition–supportive environment; high creative disposition–inhibiting environment; low creative disposition–supportive environment; and low creative disposition–inhibiting environment. These contexts represent dimensional ranges of how dispositions and environments supported or inhibited creativity, and reveal that the optimal context for creativity differed depending on the adolescent, task, domain, and environment. This study makes four main contributions, which have methodological and theoretical implications for researchers, as well as practical implications for adolescents, parents, teachers, policy and curriculum developers, and other interested stakeholders who aim to foster the creativity of adolescents. First, this study contributes methodologically through its constructivist-interpretivist grounded theory methodology combining the grounded theory approaches of Corbin and Strauss (2008) and Charmaz (2006). Innovative data collection was also demonstrated through integration of data from online and face-to-face interactions with adolescents, within the grounded theory design. These methodological contributions have broad applicability to researchers examining complex constructs and processes, and with populations who integrate multimedia as a natural form of communication. Second, applicable to creativity in diverse domains, the Grounded Theory of Adolescent Creativity supports a hybrid view of creativity as both domain-general and domain-specific. A third major contribution was identification of a new form of creativity, educational creativity (ed-c), which categorises creativity for learning or achievement within the constraints of formal educational contexts. These theoretical contributions inform further research about creativity in different domains or multidisciplinary areas, and with populations engaged in formal education. However, the key contribution of this research is that it presents an original Theory and Model of Adolescent Creativity to explain the complex, multifaceted phenomenon of adolescents’ creative experiences.
Abstract:
Background: The importance of quality-of-life (QoL) research has been recognised over the past two decades in patients with head and neck (H&N) cancer. The aims of this systematic review are to evaluate the QoL status of H&N cancer survivors one year after treatment and to identify the determinants affecting their QoL. Methods: PubMed, MEDLINE, Scopus, ScienceDirect and CINAHL (2000–2011) were searched for relevant studies, and two of the present authors assessed their methodological quality. The characteristics and main findings of the studies were extracted and reported. Results: Thirty-seven studies met the inclusion criteria, and the methodological quality of the majority was moderate to high. While patients in this group recover their global QoL by 12 months after treatment, a number of outstanding issues persist – deterioration in physical functioning, fatigue, xerostomia and sticky saliva. Age, cancer site, stage of disease, social support, smoking, feeding tube placement and alcohol consumption are significant determinants of QoL at 12 months, while gender has little or no influence. Conclusions: Regular assessments should be carried out to monitor physical functioning, degree of fatigue, xerostomia and sticky saliva. Further research is required to develop appropriate and effective interventions to deal with these issues, and thus to promote patients' QoL.
Abstract:
Efficient management of domestic wastewater is a primary requirement for human wellbeing. Failure to adequately address issues of wastewater collection, treatment and disposal can lead to adverse public health and environmental impacts. The increasing spread of urbanisation has led to the conversion of previously rural land into urban developments and the more intensive development of semi-urban areas. However, the provision of reticulated sewerage facilities has not kept pace with this expansion in urbanisation, resulting in a growing dependency on onsite sewage treatment. Though considered only a temporary measure in the past, these systems are now regarded as the most cost-effective option and have become a permanent feature in some urban areas. This report is the first of a series to be produced and is the outcome of a research project initiated by the Brisbane City Council. The primary objective of the research was to relate the treatment performance of onsite sewage treatment systems to soil conditions at the site, with the emphasis on septic tanks. The report consists of a 'state of the art' review of research undertaken in the arena of onsite sewage treatment, bringing together significant work undertaken locally and overseas. It focuses mainly on septic tanks, in keeping with the primary objectives of the project, and has acted as the springboard for the later field investigations and analysis undertaken as part of the project. Septic tanks continue to be used widely due to their simplicity and low cost. Generally, the treatment performance of septic tanks can be highly variable due to numerous factors, but a properly designed, operated and maintained septic tank can produce effluent of satisfactory quality. The reduction of hydraulic surges from washing machines and dishwashers, regular removal of accumulated septage and the elimination of harmful chemicals are some of the practices that can improve system performance considerably. The relative advantages of multi-chamber over single-chamber septic tanks are an issue that needs to be resolved in view of conflicting research outcomes. In recent years, aerobic wastewater treatment systems (AWTS) have been gaining in popularity. This can be mainly attributed to the desire to avoid subsurface effluent disposal, which is the main cause of septic tank failure. The use of aerobic processes for the treatment of wastewater, and the disinfection of effluent prior to disposal, is capable of producing effluent of a quality suitable for surface disposal. However, the field performance of these systems has been disappointing: a significant number do not perform to stipulated standards, and quality can be highly variable. This is primarily due to householder neglect or ignorance of correct operational and maintenance procedures. Other problems include greater susceptibility to shock loadings and sludge bulking. As identified in the literature, a number of design features can also contribute to this wide variation in quality. The other treatment processes in common use are the various types of filter systems, including intermittent and recirculating sand filters. These systems too have their inherent advantages and disadvantages. Furthermore, as in the case of aerobic systems, their performance is very much dependent on individual householder operation and maintenance practices.
In recent years, the use of biofilters, and particularly peat biofilters, has attracted research interest. High removal rates of various wastewater pollutants have been reported in the research literature. Despite these satisfactory results, leachate from peat has been reported in various studies. This is an issue that needs further investigation, and as such biofilters can still be considered to be in the experimental stage. The use of other filter media such as absorbent plastic and bark has also been reported in the literature. The safe and hygienic disposal of treated effluent is a matter of concern in the case of onsite sewage treatment. Subsurface disposal is the most common option, and the only option in the case of septic tank treatment. Soil is an excellent treatment medium if suitable conditions are present: the processes of sorption, filtration and oxidation can remove the various wastewater pollutants. The subsurface characteristics of the disposal area are among the most important parameters governing process performance. Therefore it is important that soil and topographic conditions are taken into consideration in the design of the soil absorption system. Seepage trenches and beds are the common systems in use. Seepage pits or chambers can be used where subsurface conditions warrant, whilst above-grade mounds have been recommended for a variety of difficult site conditions. All these systems have their inherent advantages and disadvantages, and the preferable soil absorption system should be selected based on site characteristics. The use of gravel as in-fill for beds and trenches is open to question. It does not contribute to effluent treatment and has been shown to reduce the effective infiltrative surface area, due to physical obstruction and the migration of fines entrained in the gravel into the soil matrix. The surface application of effluent is coming into increasing use with the advent of aerobic treatment systems. This has the advantage that treatment is undertaken in the upper soil horizons, which are chemically and biologically the most effective in effluent renovation. Numerous research studies have demonstrated the feasibility of this practice. However, the overriding criterion is the quality of the effluent. It has to be of exceptionally good quality in order to ensure that there are no resulting public health impacts due to aerosol drift. This is the main issue of concern, due to the unreliability of the effluent quality from aerobic systems. Secondly, it has also been found that most householders do not take adequate care in the operation of spray irrigation systems or in the maintenance of the irrigation area. Under these circumstances, surface disposal of effluent should be approached with caution and would require appropriate householder education and stringent compliance requirements. Despite all this, the efficiency with which the process is undertaken will ultimately rest with the individual householder, and this is where most concern lies. Greywater requires similar consideration. Surface irrigation of greywater is currently permitted in a number of local authority jurisdictions in Queensland. Considering that greywater constitutes the largest fraction of the total wastewater generated in a household, it could be considered a potential resource. Unfortunately, in most circumstances the only pretreatment required prior to reuse is the removal of oil and grease.
This is an issue of concern, as greywater can be considered a weak to medium-strength sewage: it contains primary pollutants such as BOD material and nutrients, and may also include microbial contamination. Therefore its use for surface irrigation can pose a potential health risk. This is further compounded by the fact that most householders are unaware of the potential adverse impacts of indiscriminate greywater reuse. As in the case of blackwater effluent reuse, there have been suggestions that greywater should also be subjected to stringent guidelines. Under these circumstances, the surface application of any wastewater requires careful consideration. The other option available for the disposal of effluent is the use of evaporation systems. The use of evapotranspiration systems has been covered in this report. Research has shown that these systems are susceptible to a number of factors, in particular climatic conditions, and as such their applicability is location specific. The design of systems based solely on evapotranspiration is also questionable; in order to ensure more reliability, the systems should be designed to include soil absorption. The successful use of these systems for intermittent usage has been noted in the literature. Taking into consideration the issues discussed above, subsurface disposal of effluent is the safest under most conditions, provided the facility has been designed to accommodate site conditions. The main problem associated with subsurface disposal is the formation of a clogging mat on the infiltrative surfaces. Due to the formation of the clogging mat, the capacity of the soil to handle effluent is no longer governed by the soil's hydraulic conductivity as measured by the percolation test, but rather by the infiltration rate through the clogged zone. The characteristics of the clogging mat have been shown to be influenced by various soil and effluent characteristics, and the mechanisms of clogging mat formation by various physical, chemical and biological processes. Biological clogging is the most common process and occurs when bacterial growth or its by-products reduce the soil pore diameters; it is generally associated with anaerobic conditions. The formation of the clogging mat provides significant benefits. It acts as an efficient filter for the removal of microorganisms. Also, as the clogging mat increases the hydraulic impedance to flow, unsaturated flow conditions will occur below the mat. This permits greater contact between effluent and soil particles, thereby enhancing the purification process, which is particularly important in the case of highly permeable soils. However, the adverse impacts of clogging mat formation cannot be ignored, as they can lead to a significant reduction in the infiltration rate. This is in fact the most common cause of soil absorption system failure. As the formation of the clogging mat is inevitable, it is important to ensure that it does not impede effluent infiltration beyond tolerable limits. Various strategies have been investigated to either control clogging mat formation or remediate its severity. Intermittent dosing of effluent is one such strategy that has attracted considerable attention. Research conclusions with regard to short-duration rest periods are contradictory.
It has been claimed that intermittent rest periods result in aerobic decomposition of the clogging mat, leading to a subsequent increase in the infiltration rate. Contrary to this, it has also been claimed that short-duration rest periods are insufficient to completely decompose the clogging mat, and that the intermediate by-products formed as a result of aerobic processes would in fact lead to even more severe clogging. It has been further recommended that the rest periods should be much longer, in the range of about six months, which entails the provision of a second, alternating seepage bed. The other concepts that have been investigated are the design of the bed to meet the equilibrium infiltration rate that would eventuate after clogging mat formation; improved geometry, such as the use of seepage trenches instead of beds; serial instead of parallel effluent distribution; and low-pressure dosing of effluent. The use of physical measures such as oxidation with hydrogen peroxide and replacement of the infiltration surface has been shown to be only of short-term benefit. Another issue of importance is the degree of pretreatment that should be provided to the effluent prior to subsurface application, and the influence exerted by pollutant loadings on clogging mat formation. Laboratory studies have shown that the total mass loadings of BOD and suspended solids are important factors in the formation of the clogging mat. It has also been found that the nature of the suspended solids is an important factor: the finer particles from extended aeration systems, when compared to those from septic tanks, will penetrate deeper into the soil and hence ultimately cause a denser clogging mat. However, the importance of improved pretreatment in clogging mat formation may need to be qualified in view of other research studies. It has also been shown that effluent quality may be a factor in the case of highly permeable soils, but this may not be the case with fine-structured soils. The ultimate test of onsite sewage treatment system efficiency rests with the final disposal of effluent. The implications of system failure, as evidenced by the surface ponding of effluent or the seepage of contaminants into the groundwater, can be very serious, as failure can lead to environmental and public health impacts. Significant microbial contamination of surface water and groundwater has been attributed to septic tank effluent, and there are a number of documented instances of septic-tank-related waterborne disease outbreaks affecting large numbers of people. In a recent incident, the local authority, and not the individual septic tank owners, was found liable for an outbreak of viral hepatitis A, as no action had been taken to remedy septic tank failure. This illustrates the responsibility placed on local authorities in terms of ensuring the proper operation of onsite sewage treatment systems. Even a properly functioning soil absorption system is only capable of removing phosphorus and microorganisms. The nitrogen remaining after plant uptake will not be retained in the soil column, but will instead gradually seep into the groundwater as nitrate; conditions for nitrogen removal by denitrification are not generally present in a soil absorption bed. Dilution by groundwater is the only treatment available for reducing the nitrogen concentration to specified levels. Therefore, based on subsurface conditions, this essentially entails a maximum allowable concentration of septic tanks in a given area.
Unfortunately, nitrogen is not the only wastewater pollutant of concern. Relatively long survival times and travel distances have been noted for microorganisms originating from soil absorption systems. This is likely to happen if saturated conditions persist under the soil absorption bed, or where there is surface runoff of effluent as a result of system failure. Soils have a finite capacity for the removal of phosphorus; once this capacity is exceeded, phosphorus too will seep into the groundwater. The relatively high mobility of phosphorus in sandy soils has been noted in the literature. These issues have serious implications for the design and siting of soil absorption systems. Not only must the system design be based on subsurface conditions, but the density of these systems in a given area is also a critical issue. This essentially involves the adoption of a land capability approach to determine the limitations of an individual site for onsite sewage disposal. The most limiting factor at a particular site would determine the overall capability classification for that site, which would also dictate the type of effluent disposal method to be adopted.
Abstract:
Molecular dynamics simulations were carried out on single-chain models of linear low-density polyethylene in vacuum to study the effects of branch length, branch content, and branch distribution on the polymer's crystalline structure at 300 K. The trans/gauche (t/g) ratios of the backbones of the modeled molecules were calculated and used to characterize their degree of crystallinity. The results show that the t/g ratio decreases with increasing branch content regardless of branch length and branch distribution, indicating that branch content is the key molecular parameter controlling the degree of crystallinity. Although the t/g ratios of models with the same branch content vary, these variations are of secondary importance. However, our data suggest that branch distribution (regular or random) has a significant effect on the degree of crystallinity for models containing 10 hexyl branches/1,000 backbone carbons. The fractions of branches residing in the equilibrium crystalline structures of the models were also calculated. On average, 9.8% and 2.5% of the branches were found in the crystallites of the molecules with ethyl and hexyl branches respectively, while 13C NMR experiments showed that the respective probabilities of branch inclusion for ethyl and hexyl branches are 10% and 6% [Hosoda et al., Polymer 1990, 31, 1999–2005]. However, the degree of branch inclusion seems to be insensitive to the branch content and branch distribution.
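As a minimal sketch of how a t/g ratio can be extracted from simulation output, the following Python fragment classifies backbone torsion angles as trans or gauche and forms their ratio. The angular windows used (trans within 60° of 180°, gauche near ±60°) are common conventions assumed here, not values taken from the paper.

```python
import numpy as np

def trans_gauche_ratio(dihedrals_deg):
    """Classify backbone C-C-C-C torsion angles as trans or gauche and
    return the t/g ratio used as a crystallinity indicator.

    dihedrals_deg: torsion angles in degrees, mapped to (-180, 180].
    Assumes at least one gauche state is present.
    """
    phi = np.abs(np.asarray(dihedrals_deg, dtype=float))
    trans = np.sum(phi > 120.0)                       # states near 180 deg
    gauche = np.sum((phi > 20.0) & (phi <= 120.0))    # states near +/-60 deg
    return trans / gauche

# Example: a mostly-trans (crystalline-like) torsion sequence
angles = [178.0, -179.5, 175.0, 62.0, 177.0, -58.0, 179.0, 176.5]
print(f"t/g ratio = {trans_gauche_ratio(angles):.2f}")  # 6 trans / 2 gauche = 3.00
```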
Abstract:
The project examined the responsiveness of the telenursing service provided by the Child Health Line (hereinafter referred to as CHL). It aimed to provide an account of population usage of the service, the call request types and the response of the service to the calls. In so doing, the project extends the current body of knowledge pertaining to the provision of parenting support through telenursing. Approximately 900 calls to the CHL were audio-recorded over the December 2005-2006 Christmas-New Year period. A protocol was developed to code characteristics of the call, the interactional features between the caller and nurse call-taker, and the extent to which there was (a) agreement on problem definition and the plan of action and (b) interactional alignment between nurse and caller. A quantitative analysis examined the frequencies of the main topics covered in calls to the CHL and any statistical associations between types of calls, length of calls and nurse-caller alignment. In addition, a detailed qualitative analysis was conducted on a subset of calls dealing with the nurse management of calls seeking medical advice and information. Key findings include:
• Overall, 74% of the calls discussed parenting and child development issues, 48% discussed health/medical issues, and 16% were information-seeking calls.
• More specifically:
  o 21% discussed health/medical and parenting and child development issues.
  o 3% discussed parenting and information-seeking issues.
  o 5% discussed health/medical, parenting/development and information issues.
  o 18% focussed exclusively on health and medical issues and were therefore outside the intended scope of the CHL. These calls caused interactional dilemmas for the nurse call-takers as they simultaneously dealt with parental expectations for help and the CHL guidelines indicating that offering medical advice was outside the remit of the service.
• The most frequent reasons for calling were to discuss sleep, feeding, normative infant physical functions and parenting advice.
• The average length of calls to the CHL was 7 minutes.
• Longer calls were more likely to involve nurse call-takers giving advice on more than one topic, the caller displaying strong emotions, the caller not specifically providing the reason for the call, and the caller discussing parenting and developmental issues.
• Shorter calls were characterised by the nurse suggesting that the child receive immediate medical attention, the nurse emphasising the importance or urgency of the plan of action, the caller referring to or requesting confirmation of a diagnosis, and caller and nurse call-taker discussion of health and medical issues.
• The majority of calls, 92%, achieved parent-nurse alignment by the conclusion of the call; 8% did not.
• The 8% of calls that were not aligned require further quantitative and qualitative investigation of their interactional features.
The findings are pertinent in the current context, where Child Health Line now resides within 13HEALTH. These findings indicate:
1. A high demand for parenting advice.
2. Nurse call-takers have a high level of competency in dealing with calls about parenting and normal child development, which is the remit of the CHL.
3. Nurse call-takers and callers achieve a high degree of alignment when both parties agree on a course of action.
4. There is scope for developing professional practice in calls that present difficulties in terms of call content, interactional behaviour and call closure.
Recommendations of the project:
1. There are numerous opportunities for further research on interactional aspects of calls to the CHL, such as further investigations of the interactional features and the association of the features to alignment and nonalignment. The rich and detailed insights into the patterns of nurse-parent interactions were afforded by the audio-recording and analysis of calls to the CHL.
2. The regular recording of calls would serve as a way of increasing understanding of the type and nature of calls received, and provide a valuable training resource. Recording and analysing calls to CHL provides insight into the operation of the service, including evidence about the effectiveness of triaging calls.
3. Training in both recognising and dealing with problem calls may be beneficial. For example, calls where the caller showed strong emotion, appeared stressed, frustrated or troubled were less likely to be rated as aligned calls. In calls where the callers described being 'at their wits end', or responded to each proposed suggestion with 'I've tried that', the callers were fairly resistant to advice-giving.
4. Training could focus on strategies for managing calls relating to parenting support and advice, and parental well-being. The project found that these calls were more likely to be rated as being nonaligned.
5. With the implementation of 13HEALTH, future research could compare nurse-parent interaction following the implementation of triaging.
Of the calls, 21% had both medical and parenting topics discussed and 5.3% discussed medical, parenting and information topics. Added to this, in 12% of calls, there was ambiguity between the caller and nurse call-taker as to whether the problem was medical or behavioural.
Abstract:
It has not yet been established whether the spatial variation of particle number concentration (PNC) within a microscale environment can affect exposure estimation results. In general, the degree of spatial variation within microscale environments remains unclear, since previous studies have only focused on spatial variation within macroscale environments. The aims of this study were to determine the spatial variation of PNC within microscale school environments, in order to assess the effect of the number of monitoring sites on exposure estimation. Furthermore, this paper aims to identify which parameters have the largest influence on spatial variation, as well as the relationship between those parameters and spatial variation. Air quality measurements were conducted for two consecutive weeks at each of 25 schools across Brisbane, Australia. PNC was measured at three sites within the grounds of each school, along with meteorological and several other air quality parameters. Traffic density was recorded for the busiest road adjacent to the school. Spatial variation at each school was quantified using the coefficient of variation (CV). The portion of the CV associated with instrument uncertainty was found to be 0.3; the CV was therefore corrected so that only non-instrument uncertainty was analysed. The median corrected CV (CVc) ranged from 0 to 0.35 across the schools, with 12 schools found to exhibit spatial variation. The study determined the number of monitoring sites required at schools with spatial variability and tested the deviation in exposure estimation arising from using only a single site. Nine schools required two measurement sites and three schools required three sites. Overall, the deviation in exposure estimation from using only one monitoring site was as much as one order of magnitude. The study also tested the association of spatial variation with wind speed/direction and traffic density, using partial correlation coefficients to identify sources of variation and non-parametric function estimation to quantify the level of variability. Traffic density and road-to-school wind direction were found to have a positive effect on CVc, and therefore also on spatial variation. Wind speed was found to reduce spatial variation once it exceeded a threshold of 1.5 m/s, while it had no effect below this threshold. The effect of traffic density on spatial variation increased until traffic density reached 70 vehicles per five minutes, at which point its effect plateaued.
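The abstract does not spell out how the instrument component was removed from the CV; a plausible reading, sketched below in Python, is subtraction in quadrature of the instrument-related CV (0.3) from the CV computed across the three within-school sites. The site values in the example are invented for illustration.

```python
import numpy as np

CV_INSTRUMENT = 0.3  # portion of the CV attributed to instrument uncertainty (from the study)

def corrected_cv(site_means):
    """Quantify spatial variation across monitoring sites as a coefficient
    of variation, then remove the instrument component.

    Quadrature subtraction is an assumption here; the abstract does not
    state the exact correction used.
    """
    x = np.asarray(site_means, dtype=float)
    cv = x.std(ddof=1) / x.mean()
    return float(np.sqrt(max(cv**2 - CV_INSTRUMENT**2, 0.0)))

# Example: median PNC (particles/cm^3) at three sites within one school
print(f"CVc = {corrected_cv([11500.0, 14800.0, 21000.0]):.2f}")
```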
Abstract:
Using Monte Carlo simulation for radiotherapy dose calculation can provide more accurate results than the analytical methods usually found in modern treatment planning systems, especially in regions with a high degree of inhomogeneity. These more accurate results, however, often require orders of magnitude more calculation time to attain high precision, reducing their utility within the clinical environment. This work aims to improve the utility of Monte Carlo simulation within the clinical environment by developing techniques which enable faster Monte Carlo simulation of radiotherapy geometries. This is achieved principally through the use of new high-performance computing environments and simpler alternative, yet equivalent, representations of complex geometries. Firstly, the use of cloud computing technology and its application to radiotherapy dose calculation is demonstrated. As with other supercomputer-like environments, the time to complete a simulation decreases as 1/n with n cloud-based computers performing the calculation in parallel. Unlike traditional supercomputer infrastructure, however, there is no initial outlay of cost, only modest ongoing usage fees; the simulations described in the following are performed using this cloud computing technology. The definition of geometry within the chosen Monte Carlo simulation environment - Geometry & Tracking 4 (GEANT4) in this case - is also addressed in this work. At the simulation implementation level, a new computer-aided design interface is presented for use with GEANT4, enabling direct coupling between manufactured parts and their equivalents in the simulation environment, which is of particular importance when defining linear accelerator treatment head geometry. Further, a new technique for navigating tessellated or meshed geometries is described, allowing up to 3 orders of magnitude performance improvement through the use of tetrahedral meshes in place of complex triangular surface meshes. The technique has application in the definition of both mechanical parts in a geometry and patient geometry. Static patient CT datasets like those found in typical radiotherapy treatment plans are often very large and impose a significant performance penalty on a Monte Carlo simulation. By extracting the regions of interest in a radiotherapy treatment plan and representing them in a mesh-based form similar to that used in computer-aided design, the above-mentioned optimisation techniques can be applied to reduce the time required to navigate the patient geometry in the simulation environment. Results presented in this work show that these equivalent yet much simplified patient geometry representations enable significant performance improvements over simulations that consider raw CT datasets alone. Furthermore, this mesh-based representation allows direct manipulation of the geometry, enabling motion augmentation for time-dependent dose calculation, for example. Finally, an experimental dosimetry technique is described which allows the validation of time-dependent Monte Carlo simulation, like that made possible by the aforementioned patient geometry definition. A bespoke organic plastic scintillator dose rate meter is embedded in a gel dosimeter, thereby enabling simultaneous 3D dose distribution and dose rate measurement.
This work demonstrates the effectiveness of applying alternative and equivalent geometry definitions to complex geometries for the purposes of Monte Carlo simulation performance improvement. Additionally, these alternative geometry definitions allow for manipulations to be performed on otherwise static and rigid geometry.
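The 1/n wall-time scaling claimed above holds because Monte Carlo histories are statistically independent and can be partitioned across workers. The Python sketch below illustrates that pattern under stated assumptions: simulate_histories is a hypothetical stand-in for a real GEANT4 batch run, and each worker receives its own seed so the partial tallies remain independent.

```python
import concurrent.futures
import random

def simulate_histories(n_histories, seed):
    """Toy stand-in for a GEANT4 batch: score a trivial per-history tally.
    A real worker would launch a full particle-transport simulation instead.
    """
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(n_histories))

def run_parallel(total_histories, n_workers):
    """Split a fixed history budget across n independent workers; because
    the batches are independent, wall time scales roughly as 1/n."""
    per_worker = total_histories // n_workers
    with concurrent.futures.ProcessPoolExecutor(max_workers=n_workers) as pool:
        tallies = pool.map(simulate_histories,
                           [per_worker] * n_workers,   # equal history budgets
                           range(n_workers))           # distinct seeds
    return sum(tallies)

if __name__ == "__main__":
    print(run_parallel(1_000_000, n_workers=4))
```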
Abstract:
Introduction: The accurate identification of tissue electron densities is of great importance for Monte Carlo (MC) dose calculations. When converting patient CT data into a voxelised format suitable for MC simulations, however, it is common to simplify the assignment of electron densities so that the complex tissues existing in the human body are categorised into a few basic types. This study examines the effects that the assignment of tissue types and the calculation of densities can have on the results of MC simulations, for the particular case of a Siemens Sensation 4 CT scanner located in a radiotherapy centre where QA measurements are routinely made using 11 tissue types (plus air). Methods: DOSXYZnrc phantoms are generated from CT data, using the CTCREATE user code, with the relationship between Hounsfield units (HU) and density determined via linear interpolation between a series of specified points on the 'CT-density ramp' (see Figure 1(a)). Tissue types are assigned according to HU ranges. Each voxel in the DOSXYZnrc phantom therefore has an electron density (electrons/cm3) defined by the product of the mass density (from the HU conversion) and the intrinsic electron density (electrons/gram) (from the material assignment) in that voxel. In this study, we consider the problems of density conversion and material identification separately: the CT-density ramp is simplified by decreasing the number of points which define it from 12 down to 8, 3 and 2; and the material-type assignment is varied by defining the materials which comprise our test phantom (a Supertech head) as two tissues and bone, two plastics and bone, water only and (as an extreme case) lead only. The effect of these parameters on radiological thickness maps derived from simulated portal images is investigated. Results & Discussion: Increasing the degree of simplification of the CT-density ramp has an increasing effect on the radiological thickness calculated for the Supertech head phantom. For instance, defining the CT-density ramp using 8 points, instead of 12, results in a maximum radiological thickness change of 0.2 cm, whereas defining the CT-density ramp using only 2 points results in a maximum radiological thickness change of 11.2 cm. Changing the definition of the materials comprising the phantom between water, plastic and tissue results in millimetre-scale changes to the resulting radiological thickness. When the entire phantom is defined as lead, this alteration changes the calculated radiological thickness by a maximum of 9.7 cm. Evidently, the simplification of the CT-density ramp has a greater effect on the resulting radiological thickness map than does the alteration of the assignment of tissue types. Conclusions: It is possible to alter the definitions of the tissue types comprising the phantom (or patient) without substantially altering the results of simulated portal images. However, these images are very sensitive to the accurate identification of the HU-density relationship. When converting data from a patient's CT into a MC simulation phantom, therefore, all possible care should be taken to accurately reproduce the conversion between HU and mass density for the specific CT scanner used. Acknowledgements: This work is funded by the NHMRC, through a project grant, and supported by the Queensland University of Technology (QUT) and the Royal Brisbane and Women's Hospital (RBWH), Brisbane, Australia.
The authors are grateful to the staff of the RBWH, especially Darren Cassidy, for assistance in obtaining the phantom CT data used in this study. The authors also wish to thank Cathy Hargrave, of QUT, for assistance in formatting the CT data, using the Pinnacle TPS. Computational resources and services used in this work were provided by the HPC and Research Support Group, QUT, Brisbane, Australia.
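To make the HU-to-density conversion step concrete, the Python sketch below mirrors the CTCREATE behaviour described above: mass density is linearly interpolated between points on a CT-density ramp, and a material is assigned from an HU range. Both the ramp points and the HU ranges here are illustrative placeholders, not the 12-point calibration or the 11-tissue scheme used in the study.

```python
import numpy as np

# Illustrative ramp points (HU, mass density in g/cm^3); NOT the study's
# actual calibration for the Siemens Sensation 4.
RAMP_HU = [-1000, -100, 0, 300, 1200, 3000]
RAMP_DENSITY = [0.001, 0.93, 1.00, 1.10, 1.80, 2.80]

# Hypothetical material assignment by HU range, standing in for the
# 11 tissue types (plus air) used in the study.
MATERIALS = [(-1000, -950, "AIR"), (-950, 100, "SOFTTISSUE"), (100, 3001, "BONE")]

def voxel_properties(hu):
    """Map one voxel's Hounsfield unit to (material, mass density) via
    linear interpolation along the CT-density ramp. The electron density
    would then be this mass density times the material's electrons/gram."""
    density = float(np.interp(hu, RAMP_HU, RAMP_DENSITY))
    material = next((name for lo, hi, name in MATERIALS if lo <= hu < hi), "UNKNOWN")
    return material, density

print(voxel_properties(40))    # soft-tissue voxel near water density
print(voxel_properties(700))   # bone-range voxel
```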
Abstract:
Monitoring and estimation of marine populations is of paramount importance for the conservation and management of sea species. Regular surveys are conducted for this purpose, often followed by a manual counting process. This paper proposes an algorithm for the automatic detection of dugongs in imagery taken during aerial surveys. Our algorithm exploits the fact that dugongs are rare in most images; we therefore determine regions of interest partially based on color rarity. This simple observation makes the system robust to changes in illumination. We also show that by applying the extended-maxima transform to red-ratio images, submerged dugongs with very fuzzy edges can be detected. Performance figures obtained here are promising in terms of the degree of confidence in the detection of marine species, but more importantly our approach represents a significant step in automating this type of survey.
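A minimal sketch of the detection idea, assuming scikit-image is available: compute a red-ratio image (relatively insensitive to illumination changes) and take its extended maxima, via the h-maxima transform, as candidate regions. The contrast parameter h and the minimum blob area are illustrative values, not those tuned in the paper.

```python
import numpy as np
from skimage.morphology import h_maxima
from skimage.measure import label, regionprops

def detect_candidates(rgb, h=0.05, min_area=20):
    """Flag candidate dugong regions in an aerial frame.

    The red-ratio normalisation damps global illumination changes; the
    extended-maxima (h-maxima) transform picks out locally bright blobs,
    so submerged animals with fuzzy edges can still be detected.
    """
    rgb = rgb.astype(float)
    red_ratio = rgb[..., 0] / (rgb.sum(axis=-1) + 1e-9)
    peaks = h_maxima(red_ratio, h)               # binary mask of extended maxima
    regions = regionprops(label(peaks))
    return [r.bbox for r in regions if r.area >= min_area]

# Usage: boxes = detect_candidates(frame)  # frame: HxWx3 uint8 aerial image
```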
Abstract:
Migraine is a common neurological disorder and is characterized by debilitating head pain and an assortment of additional symptoms which can include nausea, emesis, photophobia, phonophobia, and occasionally, visual sensory disturbances. A number of genes have been implicated in the pathogenesis of this disease, including genes involved in regulating the vascular system. Of particular importance are the methylenetetrahydrofolate reductase (MTHFR) gene and the role it plays in migraine with aura. Migraine with aura has previously been shown to have a significant comorbidity with stroke, making the vascular class of genes a priority for migraine studies. In this report, we outline the importance of the MTHFR gene in migraine and also discuss the use of a genetic isolate to investigate MTHFR genetic variants. From this study, 3 MTHFR single nucleotide polymorphisms showing association with migraine in the Norfolk Island population have been identified, thus reinforcing the potential role of MTHFR in migraine susceptibility. Further studies will continue to build a gene profile of variants involved in the complex disease migraine and improve understanding of the underlying genetic causes of this disorder.
Abstract:
1. There is evidence to suggest that essential hypertension is a polygenic disorder and that it arises from yet-to-be-identified predisposing variants of certain genes that influence blood pressure. The cloning of various hormone, enzyme, adrenoceptor and hormone receptor genes whose products are involved in blood pressure control, and the identification of polymorphisms of these, has permitted us to test their genetic association with hypertension. 2. Cross-sectional analyses of a number of candidate gene markers were performed in hypertensive and normotensive subjects who were selected on the basis of both parents being either hypertensive or normotensive, respectively, and the difference in total alleles on all chromosomes for each polymorphism between the hypertensive and normotensive groups was tested by χ² analysis with one degree of freedom. 3. A marked association was observed between hypertension and insertion alleles of polymorphisms of the insulin receptor gene (INSR) (P<0.0040) and the dipeptidyl carboxypeptidase-1 (angiotensin I-converting enzyme; kininase II) gene (DCP1) (P<0.0018). No association with hypertension was evident, however, for polymorphisms of the growth hormone, low-density lipoprotein receptor, renal kallikrein, α2- and β1-adrenoceptor, atrial natriuretic factor and insulin genes. 4. All but one of the hypertensive subjects had at least one of the hypertension-associated alleles, and although subjects homozygous for both were three times more frequent in the hypertensive group, examination of the nine possible genotypes suggested that the INSR and DCP1 alleles are independent markers for hypertension. 5. The present results suggest that genetic variant(s) in close linkage disequilibrium with polymorphisms at INSR and DCP1 may be involved in part in the aetiology of essential hypertension.
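The allele-count comparison described in point 2 amounts to a χ² test on a 2×2 table (group × allele), which has one degree of freedom. A sketch with invented counts, using scipy; the actual allele counts are not given in the abstract.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical total allele counts for one biallelic polymorphism,
# standing in for the study's (unreported) data.
#                    insertion  other
table = np.array([[130,  70],   # hypertensive group
                  [ 90, 110]])  # normotensive group

# correction=False gives the plain Pearson chi-squared statistic
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, df = {dof}, P = {p:.4f}")  # df = 1 for a 2x2 table
```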
Abstract:
The spontaneous reaction between microrods of an organic semiconductor molecule, copper 7,7,8,8-tetracyanoquinodimethane (CuTCNQ), and [AuBr4]− ions in an aqueous environment is reported. The reaction is found to be redox in nature and proceeds via a complex galvanic replacement mechanism, wherein the surface of the CuTCNQ microrods is replaced with metallic gold nanoparticles. Unlike previous reactions reported in acetonitrile, the galvanic replacement reaction in aqueous solution proceeds via an entirely different mechanism, in which a cyclical process involving continuous regeneration of the CuTCNQ consumed during galvanic replacement occurs in parallel with the replacement reaction. As a result, the driving force of the galvanic replacement reaction in aqueous medium is largely dependent on the availability of [AuBr4]− ions during the reaction. This study therefore highlights the importance of choosing an appropriate solvent for galvanic replacement reactions, which can significantly impact the reaction mechanism. The reaction progress with respect to different gold salt concentrations was monitored using Fourier transform infrared (FT-IR), Raman, and X-ray photoelectron spectroscopy (XPS), as well as XRD and EDX analysis, and SEM imaging. The CuTCNQ/Au nanocomposites were also investigated for their potential photocatalytic properties, wherein the destruction of the organic dye Congo red in a simulated solar light environment was found to be largely dependent on the degree of gold nanoparticle surface coverage. The approach reported here opens up new possibilities for decorating metal–organic charge transfer complexes with a host of metals, leading to potentially novel applications in catalysis and sensing.
Abstract:
Background The implementation of the Australian Consumer Law in 2011 highlighted the need for better use of injury data to improve the effectiveness and responsiveness of product safety (PS) initiatives. In the PS system, resources are allocated to different priority issues using risk assessment tools. The rapid exchange of information (RAPEX) tool to prioritise hazards, developed by the European Commission, is currently being adopted in Australia. Injury data is required as a basic input to the RAPEX tool in the risk assessment process. One of the challenges in utilising injury data in the PS system is the complexity of translating detailed clinical coded data into broad categories such as those used in the RAPEX tool. Aims This study aims to translate hospital burns data into a simplified format by mapping the International Statistical Classification of Disease and Related Health Problems (Tenth Revision) Australian Modification (ICD-10-AM) burn codes into RAPEX severity rankings, using these rankings to identify priority areas in childhood product-related burns data. Methods ICD-10-AM burn codes were mapped into four levels of severity using the RAPEX guide table by assigning rankings from 1-4, in order of increasing severity. RAPEX rankings were determined by the thickness and surface area of the burn (BSA) with information extracted from the fourth character of T20-T30 codes for burn thickness, and the fourth and fifth characters of T31 codes for the BSA. Following the mapping process, secondary data analysis of 2008-2010 Queensland Hospital Admitted Patient Data Collection (QHAPDC) paediatric data was conducted to identify priority areas in product-related burns. Results The application of RAPEX rankings in QHAPDC burn data showed approximately 70% of paediatric burns in Queensland hospitals were categorised under RAPEX levels 1 and 2, 25% under RAPEX 3 and 4, with the remaining 5% unclassifiable. In the PS system, prioritisations are made to issues categorised under RAPEX levels 3 and 4. Analysis of external cause codes within these levels showed that flammable materials (for children aged 10-15yo) and hot substances (for children aged <2yo) were the most frequently identified products. Discussion and conclusions The mapping of ICD-10-AM burn codes into RAPEX rankings showed a favourable degree of compatibility between both classification systems, suggesting that ICD-10-AM coded burn data can be simplified to more effectively support PS initiatives. Additionally, the secondary data analysis showed that only 25% of all admitted burn cases in Queensland were severe enough to trigger a PS response.
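As an illustration of the mapping step, the Python sketch below derives a RAPEX-style rank from the code positions named in the Methods (the fourth character of T20-T30 codes for burn thickness; the fourth and fifth characters of T31 codes for burn surface area). The rank thresholds themselves are placeholders, not the study's actual RAPEX guide table.

```python
def rapex_rank(code):
    """Return an illustrative RAPEX-style severity rank (1-4) for an
    ICD-10-AM burn code, or None if the code cannot be classified."""
    code = code.replace(".", "").upper()
    if code.startswith("T31") and len(code) >= 4:
        bsa_decile = int(code[3])                            # 4th char: total BSA decile
        full_decile = int(code[4]) if len(code) >= 5 else 0  # 5th char: full-thickness decile
        if full_decile >= 2 or bsa_decile >= 4:              # placeholder thresholds
            return 4
        return 3 if bsa_decile >= 2 else 2
    if code[:3] in {f"T{n}" for n in range(20, 31)} and len(code) >= 4:
        depth = int(code[3])  # 4th char: 1 erythema, 2 partial thickness, 3 full thickness
        return {1: 1, 2: 2, 3: 3}.get(depth)
    return None  # unclassifiable

print(rapex_rank("T31.42"))  # extensive burn with full-thickness involvement -> 4
print(rapex_rank("T24.1"))   # erythema of hip/lower limb -> 1
```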