388 results for Extended techniques


Relevance:

20.00%

Publisher:

Abstract:

Introduction: Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases, it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo (MC) methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the 'gold standard' for predicting dose deposition in the patient [1]. This project has three main aims: 1. To develop tools that enable the transfer of treatment plan information from the treatment planning system (TPS) to a MC dose calculation engine. 2. To develop tools for comparing the 3D dose distributions calculated by the TPS and the MC dose engine. 3. To investigate the radiobiological significance of any errors between the TPS patient dose distribution and the MC dose distribution in terms of Tumour Control Probability (TCP) and Normal Tissue Complication Probabilities (NTCP). The work presented here addresses the first two aims.

Methods: (1a) Plan Importing: A database of commissioned accelerator models (Elekta Precise and Varian 2100CD) has been developed for treatment simulations in the MC system (EGSnrc/BEAMnrc). Beam descriptions can be exported from the TPS using the widespread DICOM framework, and the resultant files are parsed with the assistance of a software library (PixelMed Java DICOM Toolkit). The information in these files (such as the monitor units, the jaw positions and the gantry orientation) is used to construct a plan-specific accelerator model which allows an accurate simulation of the patient treatment field. (1b) Dose Simulation: The calculation of a dose distribution requires patient CT images, which are prepared for the MC simulation using a tool (CTCREATE) packaged with the system. Beam simulation results are converted to absolute dose per MU using calibration factors recorded during the commissioning process and treatment simulation. These distributions are combined according to the MU meter settings stored in the exported plan to produce an accurate description of the prescribed dose to the patient. (2) Dose Comparison: TPS dose calculations can be obtained using either a DICOM export or direct retrieval of binary dose files from the file system. Dose difference, gamma evaluation and normalised dose difference algorithms [2] were employed for the comparison of the TPS dose distribution and the MC dose distribution. These implementations are independent of spatial resolution and able to interpolate for comparisons.

Results and Discussion: The tools successfully produced Monte Carlo input files for a variety of plans exported from the Eclipse (Varian Medical Systems) and Pinnacle (Philips Medical Systems) planning systems, ranging in complexity from a single uniform square field to a five-field step-and-shoot IMRT treatment. The simulation of collimated beams has been verified geometrically, and validation of dose distributions in a simple body phantom (QUASAR) will follow. The developed dose comparison algorithms have also been tested with controlled dose distribution changes.

Conclusion: The capability of the developed code to independently process treatment plans has been demonstrated. A number of limitations exist: only static fields are currently supported (dynamic wedges and dynamic IMRT will require further development), and the process has not been tested for planning systems other than Eclipse and Pinnacle. The tools will be used to independently assess the accuracy of the current treatment planning system dose calculation algorithms for complex treatment deliveries such as IMRT in treatment sites where patient inhomogeneities are expected to be significant.

Acknowledgements: Computational resources and services used in this work were provided by the HPC and Research Support Group, Queensland University of Technology, Brisbane, Australia. Pinnacle dose parsing was made possible with the help of Paul Reich, North Coast Cancer Institute, North Coast, New South Wales.
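
The gamma evaluation mentioned in the dose comparison step combines a dose-difference criterion with a distance-to-agreement criterion. The following is a minimal, illustrative 1-D sketch of that idea only; it is not the implementation used in this work, and the function name, criteria values and test profiles are hypothetical.

```python
import numpy as np

def gamma_1d(dose_ref, dose_eval, positions_mm, dose_tol=0.03, dist_tol_mm=3.0):
    """Brute-force global gamma index for two 1-D dose profiles sampled at the
    same positions. dose_tol is a fraction of the reference maximum dose and
    dist_tol_mm is the distance-to-agreement criterion."""
    dose_norm = dose_tol * dose_ref.max()                    # global dose-difference criterion
    gamma = np.empty_like(dose_ref, dtype=float)
    for i, (x_ref, d_ref) in enumerate(zip(positions_mm, dose_ref)):
        dist2 = ((positions_mm - x_ref) / dist_tol_mm) ** 2  # squared scaled distances
        dose2 = ((dose_eval - d_ref) / dose_norm) ** 2       # squared scaled dose differences
        gamma[i] = np.sqrt(np.min(dist2 + dose2))
    return gamma                                             # gamma <= 1 means the point passes

# Toy example: an evaluated profile shifted by 1 mm relative to the reference
x = np.arange(-50.0, 50.0, 1.0)
reference = np.exp(-(x / 20.0) ** 2)
evaluated = np.exp(-((x - 1.0) / 20.0) ** 2)
print((gamma_1d(reference, evaluated, x) <= 1.0).mean())     # fraction of points passing
```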

Relevance:

20.00%

Publisher:

Abstract:

Sol-gel synthesis in varied gravity is a relatively new topic in the literature and further investigation is required to explore its full potential as a method to synthesise novel materials. Although trialled for systems such as silica, the specific application of varied-gravity synthesis to other sol-gel systems such as titanium had not previously been undertaken. Current literature methods for the synthesis of sol-gel materials in reduced gravity could not be applied to titanium sol-gel processing, so a new strategy had to be developed in this study. To conduct experiments in varied gravity successfully, a refined titanium sol-gel chemical precursor had to be developed which allowed the single-solution precursor to remain unreactive at temperatures up to 50 °C and only begin to react when exposed to a pressure decrease under vacuum. Because of the novelty of this precursor, a thorough characterisation of the reaction precursors was undertaken using techniques such as Nuclear Magnetic Resonance, Infrared and UV-Vis spectroscopy in order to achieve sufficient understanding of the precursor chemistry and kinetic stability. This understanding was then used to propose gelation reaction mechanisms under varied gravity conditions. Two unique reactor systems were designed and built for the specific purpose of studying the effects of varied gravity (high, normal, reduced) during the synthesis of titanium sol-gels. The first system was a centrifuge capable of providing high-gravity environments of up to 70 g for extended periods, whilst applying a 100 mbar vacuum and a temperature of 40-50 °C to the reaction chambers. The second system, used in the QUT Microgravity Drop Tower Facility, was required to provide the same thermal and vacuum conditions as the centrifuge, but had to operate autonomously during free fall. Through the use of post-synthesis characterisation techniques such as Raman spectroscopy, X-ray diffraction (XRD) and N2 adsorption, it was found that increased gravity levels during synthesis had the greatest effect on the final products. Samples produced in reduced and normal gravity formed amorphous gels containing very small particles with moderate surface areas, whereas crystalline anatase (TiO2) formed in samples synthesised above 5 g, with significant increases in crystallinity, particle size and surface area observed when samples were produced at gravity levels up to 70 g. It is proposed that, for samples produced in higher gravity, an increased concentration gradient of water forms at the bottom of the reacting film due to forced convection. The particles formed in higher gravity diffuse downward towards this excess of water, which favours the condensation reaction of the remaining sol-gel precursors with the particles, promoting increased particle growth. Because downward convection is removed in reduced gravity, particle growth via condensation reactions is physically hindered and hydrolysis reactions are favoured instead. Another significant finding from this work was that anatase could be produced at relatively low temperatures of 40-50 °C, solely through sol-gel synthesis at higher gravity levels, instead of by the conventional method of calcination above 450 °C.
It is hoped that the outcomes of this research will lead to an increased understanding of the effects of gravity on the chemical synthesis of titanium sol-gels, potentially leading to the development of improved products suitable for diverse applications such as semiconductor or catalyst materials, as well as to reduced production and energy costs through manufacturing these materials at significantly lower temperatures.

Relevance:

20.00%

Publisher:

Abstract:

Practice-led journalism research techniques were used in this study to produce a 'first draft of history' recording the human experience of survivors and rescuers during the January 2011 flash flood disaster in Toowoomba and the Lockyer Valley in Queensland, Australia. The study aimed to discover what can be learnt from engaging in journalistic reporting of natural disasters. This exegesis demonstrates that journalism can be both a creative practice and a research methodology. About 120 survivors, rescuers and family members of victims participated in extended interviews about what happened to them and how they survived. Their stories are the basis for two creative outputs of the study: a radio documentary and a non-fiction book that document how and why people died, or survived, or were rescued. Listeners and readers are taken "into the flood", where they feel anxious for those in peril, relief when people are saved, and devastation when babies, children and adults are swept away to their deaths. In undertaking reporting about the human experience of the floods, several significant elements of journalistic reportage of disasters were exposed. The first related to the vital role that online social media played during the disaster for individuals, citizen reporters, journalists and emergency services organisations. Online social media offer reporters powerful new tools for both gathering and disseminating news. The second related to the performance of journalists in covering events involving traumatic experiences. Journalists are often required to cover trauma and are often among the first responders to disasters. This study found that almost all of the disaster survivors who were approached were willing to talk in detail about their traumatic experiences. A finding of this project is that journalists who interview trauma survivors can develop techniques for improving their ability to interview people who have experienced traumatic events. These include being flexible with interview timing and location; empowering interviewees to understand that they do not have to answer every question they are asked; providing emotional security for interviewees; and being committed to accuracy. Survivors may exhibit posttraumatic stress symptoms, but some exhibit and report posttraumatic growth. The willingness of a high proportion of the flood survivors to participate in the research made it possible to document a relatively unstudied question within the literature about journalism and trauma: when and why disaster survivors will want to speak to reporters. The study sheds light on the reasons why a group of traumatised people chose to speak about their experiences. Their reasons fell into six categories: lessons need to be learned from the disaster; a desire for the public to know what had happened; a sense of duty to make sure warning systems and disaster responses are improved in future; personal recovery; the financial disinterest of reporters in listening to survivors; and the timing of the request for an interview. Feedback on the creative-practice component of this thesis - the book and radio documentary - shows that these issues are not purely matters of ethics. By following appropriate protocols, it is possible to produce stories that engender strong audience responses, such as that the program was "amazing and deeply emotional" and "community storytelling at its most important".
Participants reported that the experience of the interview process was "healing" and that the creative outcome resulted in "a very precious record of an afternoon of tragedy and triumph and the bitter-sweetness of survival".

Relevance:

20.00%

Publisher:

Abstract:

The increased adoption of business process management approaches, tools and practices has led organizations to accumulate large collections of business process models. These collections can easily include hundreds to thousands of models, especially in the context of multinational corporations or as a result of organizational mergers and acquisitions. A concrete problem is thus how to maintain these large repositories in such a way that their complexity does not hamper their practical usefulness as a means to describe and communicate business operations. This paper proposes a technique to automatically infer suitable names for business process models and fragments thereof. This technique is useful for model abstraction scenarios, for instance when user-specific views of a repository are required, or as part of a refactoring initiative aimed at reducing the repository's complexity. The technique is grounded in an adaptation of the theory of meaning to the realm of business process models. We implemented the technique in a prototype tool and conducted an extensive evaluation using three process model collections from practice and a case study involving process modelers with different levels of experience.
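
The abstract does not spell out how names are derived from the theory of meaning, so the sketch below is only a loose, hypothetical illustration of the general task (deriving a candidate name for a process fragment from its activity labels). It is not the technique proposed in the paper, and the function name and toy labels are made up.

```python
from collections import Counter

def naive_fragment_name(activity_labels):
    """Toy heuristic: combine the most frequent action (first word) with the most
    frequent business object (remaining words) across a fragment's activities."""
    actions = Counter(label.split()[0].lower() for label in activity_labels)
    objects = Counter(" ".join(label.split()[1:]).lower()
                      for label in activity_labels if len(label.split()) > 1)
    action = actions.most_common(1)[0][0]
    obj = objects.most_common(1)[0][0] if objects else ""
    return f"{action.capitalize()} {obj}".strip()

print(naive_fragment_name(["Check invoice", "Approve invoice", "Archive invoice"]))
# -> "Check invoice" (ties are broken by insertion order)
```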

Relevance:

20.00%

Publisher:

Abstract:

Establishing a persistent presence in the ocean with an autonomous underwater vehicle (AUV) capable of observing temporal variability of large-scale ocean processes requires a unique sensor platform. In this paper, we examine the utility of vehicles that can only control their depth in the water column for such extended deployments. We present a strategy that utilizes ocean model predictions to facilitate a basic level of autonomy and enables general control for these profiling floats. The proposed method is based on experimentally validated techniques for utilizing ocean current models to control autonomous gliders. With the appropriate vertical actuation, and utilizing spatio-temporal variations in water speed and direction, we show that general controllability results can be met. First, we apply an A* planner to a local controllability map generated from predictions of ocean currents. This computes a path between start and goal waypoints that has the highest likelihood of successful execution. A depth plan is then generated with a model-predictive controller (MPC), which selects the depths for the vehicle so that ambient currents guide it toward the goal. Mission constraints are included to simulate and motivate a practical data collection mission. Simulation results for a mission off the coast of Los Angeles, CA, USA, are encouraging regarding the ability of a drifting vehicle to reach a desired location.
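
As a rough illustration of the planning step only (not the implementation used in the paper), the sketch below runs a textbook A* search over a small grid whose cell costs stand in for a local controllability map derived from current predictions. The grid, cost values and function names are all hypothetical.

```python
import heapq
import itertools

def a_star(cost_grid, start, goal):
    """Minimal A* over a 2-D grid where cost_grid[r][c] >= 1 is the difficulty of
    entering a cell (e.g. derived from predicted ocean currents). Returns the
    cell path from start to goal, or None if the goal is unreachable."""
    rows, cols = len(cost_grid), len(cost_grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # admissible if costs >= 1
    tie = itertools.count()                                   # avoids comparing cells on ties
    open_set = [(h(start), next(tie), start)]
    g_score = {start: 0.0}
    came_from = {}
    closed = set()
    while open_set:
        _, _, cell = heapq.heappop(open_set)
        if cell == goal:
            path = [cell]
            while path[-1] in came_from:
                path.append(came_from[path[-1]])
            return path[::-1]
        if cell in closed:
            continue
        closed.add(cell)
        r, c = cell
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols:
                tentative = g_score[cell] + cost_grid[nb[0]][nb[1]]
                if tentative < g_score.get(nb, float("inf")):
                    g_score[nb] = tentative
                    came_from[nb] = cell
                    heapq.heappush(open_set, (tentative + h(nb), next(tie), nb))
    return None

# Toy example: low cost where predicted currents assist, high cost where they oppose
grid = [[1, 1, 5],
        [1, 9, 1],
        [1, 1, 1]]
print(a_star(grid, (0, 0), (2, 2)))   # e.g. [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
```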

Relevance:

20.00%

Publisher:

Abstract:

Evidence within Australia and internationally suggests parenthood is a risk factor for inactivity; however, research into understanding parental physical activity is scarce. Given that active parents can create active families and that social factors are important for parents' decision making, the authors investigated a range of social influences on parents' intentions to be physically active. Parents (N = 580; 288 mothers and 292 fathers) of children younger than 5 years completed an extended Theory of Planned Behavior questionnaire either online or on paper. For both genders, attitude, control factors, group norms, general support from friends, and an active parent identity predicted intentions, with social pressure and family support further predicting mothers' intentions and active others further predicting fathers' intentions. Attention to these factors, and to those specific to each gender, may improve parents' intentions to be physically active, thus maximizing the benefits to their own health and to the healthy lifestyle practices of other family members.

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: Donor retention is vital to blood collection agencies. Past research has highlighted the importance of early career behavior for long-term donor retention, yet research investigating the determinants of early donor behavior is scarce. Using an extended Theory of Planned Behavior (TPB), this study sought to identify the predictors of first-time blood donors' early career retention. STUDY DESIGN AND METHODS: First-time donors (n = 256) completed three surveys on blood donation. The standard TPB predictors and self-identity as a donor were assessed at 3 weeks (Time 1) and 4 months (Time 2) after an initial donation. Path analyses examined the utility of the extended TPB to predict redonation at 4 and 8 months after the initial donation. RESULTS: The extended TPB provided a good fit to the data. Behavior after Times 1 and 2 was consistently predicted by intention to redonate. Further, intention was predicted by attitudes, perceived control, and self-identity (Times 1 and 2). Donors' intentions to redonate at Time 1 were the strongest predictor of intention to donate at Time 2, while donors' behavior at Time 1 strengthened self-identity as a blood donor at Time 2. CONCLUSION: An extended TPB framework proved efficacious in revealing the determinants of first-time donor retention in an initial 8-month period. The results suggest that collection agencies should intervene to bolster donors' attitudes, perceived control, and identity as a donor during this crucial period after the first donation.
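
As a rough sketch of the kind of two-stage path structure described (intention predicted by attitude, perceived control and self-identity; redonation predicted by intention), the snippet below fits two simple regressions on synthetic data. It is not the study's path analysis, and every variable, coefficient and sample value is illustrative only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 256                                   # same order of magnitude as the study's sample

# Synthetic, illustrative stand-ins for the TPB constructs
attitude  = rng.normal(size=n)
control   = rng.normal(size=n)
identity  = rng.normal(size=n)
intention = 0.5 * attitude + 0.3 * control + 0.4 * identity + rng.normal(scale=0.5, size=n)
redonated = (intention + rng.normal(scale=1.0, size=n) > 0).astype(int)

# Stage 1 of the path: intention regressed on attitude, perceived control and self-identity
X1 = sm.add_constant(np.column_stack([attitude, control, identity]))
print(sm.OLS(intention, X1).fit().params)

# Stage 2: redonation (binary) predicted by intention
X2 = sm.add_constant(intention)
print(sm.Logit(redonated, X2).fit(disp=0).params)
```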

Relevance:

20.00%

Publisher:

Abstract:

Genomic DNA obtained from patient whole blood samples is a key element for genomic research. Advantages and disadvantages, in terms of time efficiency, cost-effectiveness and laboratory requirements, of the procedures available to isolate nucleic acids need to be considered before choosing any particular method. These characteristics have not been fully evaluated for some laboratory techniques, such as the salting-out method for DNA extraction, which has been excluded from comparison in studies published to date. We compared three different protocols (a traditional salting-out method, a modified salting-out method and a commercially available kit method) to determine the most cost-effective and time-efficient method to extract DNA. We extracted genomic DNA from whole blood samples obtained from breast cancer patient volunteers and compared the products obtained in terms of quantity (concentration of DNA extracted and DNA obtained per ml of blood used) and quality (260/280 ratio and polymerase chain reaction product amplification). On average, the three methods showed no statistically significant differences in the final result, but when the time and cost of each method were taken into account, the differences were very significant. The modified salting-out method resulted in a seven- and twofold reduction in cost compared with the commercial kit and the traditional salting-out method, respectively, and reduced the time from 3 days to 1 hour compared with the traditional salting-out method. This highlights the modified salting-out method as a suitable choice for laboratories and research centres, particularly when dealing with a large number of samples.
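
A comparison of this kind typically reduces to a one-way analysis of variance on yield plus simple cost and time bookkeeping. The sketch below illustrates that pattern on synthetic numbers; none of the yields, costs or units are taken from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic DNA yields (e.g. micrograms per ml of blood) for three extraction methods;
# values are illustrative only, not the study's data.
traditional = rng.normal(loc=25, scale=5, size=20)
modified    = rng.normal(loc=26, scale=5, size=20)
kit         = rng.normal(loc=24, scale=5, size=20)

# One-way ANOVA: is there a statistically significant difference in mean yield?
f_stat, p_value = stats.f_oneway(traditional, modified, kit)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")   # a large p-value indicates no significant difference

# Simple cost bookkeeping of the kind that actually distinguishes the methods
cost_per_sample = {"traditional": 2.0, "modified": 1.0, "kit": 7.0}   # hypothetical units
print(min(cost_per_sample, key=cost_per_sample.get))                  # -> "modified"
```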

Relevance:

20.00%

Publisher:

Abstract:

Results of an interlaboratory comparison on the size characterization of airborne SiO2 nanoparticles using on-line and off-line measurement techniques are discussed. This study was performed in the framework of Technical Working Area (TWA) 34, "Properties of Nanoparticle Populations", of the Versailles Project on Advanced Materials and Standards (VAMAS), in project no. 3, "Techniques for characterizing size distribution of airborne nanoparticles". Two types of nano-aerosols, consisting of (1) one population of nanoparticles with a mean diameter between 30.3 and 39.0 nm and (2) two populations of non-agglomerated nanoparticles with mean diameters between, respectively, 36.2–46.6 nm and 80.2–89.8 nm, were generated for the characterization measurements. Scanning mobility particle size spectrometers (SMPS) were used for on-line measurements of the size distributions of the produced nano-aerosols. Transmission electron microscopy, scanning electron microscopy and atomic force microscopy were used as off-line measurement techniques for nanoparticle characterization. Samples were deposited on appropriate supports such as grids, filters and mica plates by electrostatic precipitation and by a filtration technique, using SMPS-controlled generation upstream. The main size distribution parameters (mean and mode diameters) obtained from the participating laboratories were compared on the basis of metrological approaches, including metrological traceability, calibration and evaluation of the measurement uncertainty. Internationally harmonized measurement procedures for the characterization of airborne SiO2 nanoparticles are proposed.
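
For context, the reported summary parameters are straightforward to compute from a binned size distribution. The snippet below shows one way to obtain a number-weighted mean diameter, a mode diameter and a crude type-A uncertainty from hypothetical SMPS-style data; it is not the metrological procedure used in the comparison, and all numbers are made up.

```python
import numpy as np

# Hypothetical SMPS-style binned size distribution: bin midpoints (nm) and particle counts
diameters_nm = np.array([20, 25, 30, 35, 40, 45, 50, 60, 80, 100], dtype=float)
counts       = np.array([ 5, 20, 60, 90, 70, 40, 20, 10,  4,   1], dtype=float)

# Number-weighted mean diameter of the distribution
mean_d = np.average(diameters_nm, weights=counts)

# Mode diameter: midpoint of the most populated bin
mode_d = diameters_nm[np.argmax(counts)]

# Crude type-A uncertainty of the mean (weighted standard deviation / sqrt(N)),
# standing in for the full metrological uncertainty budget used in the study
var = np.average((diameters_nm - mean_d) ** 2, weights=counts)
u_mean = np.sqrt(var / counts.sum())

print(f"mean = {mean_d:.1f} nm, mode = {mode_d:.0f} nm, u(mean) ~ {u_mean:.2f} nm")
```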

Relevance:

20.00%

Publisher:

Abstract:

Acoustic sensors can be used to estimate species richness for vocal species such as birds. They can continuously and passively record large volumes of data over extended periods. These data must subsequently be analyzed to detect the presence of vocal species. Automated analysis of acoustic data for large numbers of species is complex and can be subject to high levels of false positive and false negative results. Manual analysis by experienced surveyors can produce accurate results; however, the time and effort required to process even small volumes of data can make manual analysis prohibitive. This study examined the use of sampling methods to reduce the cost of analyzing large volumes of acoustic sensor data, while retaining high levels of species detection accuracy. Utilizing five days of manually analyzed acoustic sensor data from four sites, we examined a range of sampling frequencies and methods, including random, stratified, and biologically informed sampling. We found that randomly selecting 120 one-minute samples from the three hours immediately following dawn over five days of recordings detected the highest number of species. On average, this method detected 62% of total species from 120 one-minute samples, compared to 34% of total species detected using traditional area search methods. Our results demonstrate that targeted sampling methods can provide an effective means of analyzing large volumes of acoustic sensor data efficiently and accurately. Development of automated and semi-automated techniques is required to assist in analyzing large volumes of acoustic sensor data. Read More: http://www.esajournals.org/doi/abs/10.1890/12-2088.1
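
To illustrate the sampling strategy described (not the study's actual data or code), the sketch below draws 120 random one-minute samples from the three hours after dawn over five days of synthetic detections and counts the distinct species found.

```python
import random

random.seed(42)

# Hypothetical manual-analysis result: for each recorded minute, the set of
# species an experienced surveyor identified in that minute (synthetic data).
SPECIES_POOL = [f"species_{i}" for i in range(60)]
DAWN_MINUTES = [(day, minute) for day in range(5) for minute in range(180)]  # 3 h after dawn, 5 days
detections = {m: set(random.sample(SPECIES_POOL, k=random.randint(0, 4))) for m in DAWN_MINUTES}

def species_from_random_sample(n_samples=120):
    """Randomly choose n one-minute samples from the dawn period and count the
    distinct species detected, mimicking the sampling strategy in the study."""
    sampled = random.sample(DAWN_MINUTES, k=n_samples)
    found = set().union(*(detections[m] for m in sampled))
    return len(found)

total = len(set().union(*detections.values()))
print(f"{species_from_random_sample()} of {total} species detected from 120 samples")
```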

Relevance:

20.00%

Publisher:

Abstract:

A significant amount of speech is typically required for speaker verification system development and evaluation, especially in the presence of large intersession variability. This paper introduces source- and utterance-duration-normalized linear discriminant analysis (SUN-LDA) approaches to compensate for session variability in short-utterance i-vector speaker verification systems. Two variations of SUN-LDA are proposed in which normalization techniques are used to capture source variation from both short and full-length development i-vectors, one based upon pooling (SUN-LDA-pooled) and the other on concatenation (SUN-LDA-concat) across the duration- and source-dependent session variation. Both the SUN-LDA-pooled and SUN-LDA-concat techniques are shown to provide improvements over traditional LDA on the NIST 08 truncated 10sec-10sec evaluation conditions, with the highest improvement obtained with the SUN-LDA-concat technique, which achieves a relative improvement in EER of 8% for mismatched conditions and over 3% for matched conditions over traditional LDA approaches.
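
The snippet below is only a loose illustration of the pooling idea: it estimates a standard LDA transform (scikit-learn, not the paper's SUN-LDA) on synthetic development i-vectors pooled from a "full-length" and a "short-utterance" condition with different session variability. All dimensions, names and noise scales are made up.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
dim, n_speakers = 50, 20          # toy i-vector dimension and speaker count

def synth_ivectors(n_per_speaker, session_scale):
    """Synthetic i-vectors: a speaker offset plus session noise whose scale
    stands in for source/duration-dependent variability."""
    X, y = [], []
    for spk in range(n_speakers):
        centre = rng.normal(size=dim)
        X.append(centre + session_scale * rng.normal(size=(n_per_speaker, dim)))
        y.extend([spk] * n_per_speaker)
    return np.vstack(X), np.array(y)

# Pool development i-vectors from a "full-length" and a "short-utterance" condition
X_full,  y_full  = synth_ivectors(10, session_scale=0.5)
X_short, y_short = synth_ivectors(10, session_scale=1.5)
X_pool, y_pool = np.vstack([X_full, X_short]), np.concatenate([y_full, y_short])

# LDA estimated on the pooled development set, then used to project test i-vectors
lda = LinearDiscriminantAnalysis(n_components=n_speakers - 1).fit(X_pool, y_pool)
projected = lda.transform(X_short[:5])
print(projected.shape)            # (5, 19)
```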

Relevance:

20.00%

Publisher:

Abstract:

A people-to-people matching system (or match-making system) refers to a system in which users join with the objective of meeting other users with a common need. Some real-world examples of these systems are employer-employee (in job search networks), mentor-student (in university social networks), consumer-to-consumer (in marketplaces) and male-female (in an online dating network). The network underlying these systems consists of two groups of users, and the relationships between users need to be captured for developing an efficient match-making system. Most existing studies utilize information either about each of the users in isolation or about their interactions, and develop recommender systems using only one form of information. It is imperative to understand the linkages among the users in the network and to use them in developing a match-making system. This study utilizes several social network analysis methods, such as graph theory, the small-world phenomenon, centrality analysis and density analysis, to gain insight into the entities and the relationships present in this network. This paper also proposes a new type of graph called an "attributed bipartite graph". By using these analyses and the proposed type of graph, an efficient hybrid recommender system is developed which generates recommendations for new users as well as showing improvement in accuracy over the baseline methods.
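
As a toy illustration of an attributed bipartite graph and a simple cold-start recommendation over it (not the hybrid system proposed in the paper), the sketch below uses networkx; all node names, attributes and the scoring rule are hypothetical.

```python
import networkx as nx

# A toy attributed bipartite graph: two user groups (e.g. mentors and students),
# each node carrying attributes; edges record past interactions.
G = nx.Graph()
G.add_nodes_from(["m1", "m2", "m3"], bipartite=0, interests={"maths"})
G.nodes["m2"]["interests"] = {"physics"}
G.nodes["m3"]["interests"] = {"maths", "physics"}
G.add_nodes_from(["s1", "s2"], bipartite=1, interests={"maths"})
G.add_edges_from([("m1", "s1"), ("m3", "s2")])

def recommend_for_new_user(graph, interests, group=0, top_k=2):
    """Naive cold-start recommendation: rank candidates in the target group by
    attribute overlap, breaking ties with degree centrality (popularity)."""
    centrality = nx.degree_centrality(graph)
    candidates = [n for n, d in graph.nodes(data=True) if d.get("bipartite") == group]
    score = lambda n: (len(interests & graph.nodes[n]["interests"]), centrality[n])
    return sorted(candidates, key=score, reverse=True)[:top_k]

print(recommend_for_new_user(G, {"maths", "physics"}))   # e.g. ['m3', 'm1']
```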

Relevance:

20.00%

Publisher:

Abstract:

Gold is often considered an inert material, but it has been unequivocally demonstrated that it possesses unique electronic, optical, catalytic and electrocatalytic properties when in a nanostructured form.[1] For the latter, the electrochemical behaviour of gold in aqueous media has been widely studied on a plethora of gold samples, including bulk polycrystalline and single-crystal electrodes, nanoparticles, evaporated films, as well as electrodeposited nanostructures, particles and thin films.[1b, 2] It is now well established that the electrochemical behaviour of gold is not as simple as an extended double-layer charging region followed by a monolayer oxide-formation/-removal process. In fact, the so-called double-layer region of gold is significantly more complicated and has been investigated with a variety of electrochemical and surface science techniques. Burke and others[3] have demonstrated that significant processes due to the oxidation of low-lattice-stabilised atoms or clusters of atoms occur in this region at thermally and electrochemically treated electrodes; these processes were later confirmed by Bond[4] to be Faradaic in nature via large-amplitude Fourier-transformed ac voltammetric experiments. Supporting evidence for the oxidation of gold in the double-layer region was provided by Bard,[5] who used a surface interrogation mode of scanning electrochemical microscopy to quantify the extent of this process, which forms incipient oxides on the surface. These were estimated to be as high as 20% of a monolayer. This correlated with contact electrode resistance measurements,[6] capacitance measurements[7] and also electroreflection techniques...

Relevance:

20.00%

Publisher:

Abstract:

The continuous growth of XML data poses a great concern in the area of XML data management. The need to process large amounts of XML data brings complications to many applications, such as information retrieval, data integration and many others. One way of simplifying this problem is to break the massive amount of data into smaller groups through the application of clustering techniques. However, XML clustering is an intricate task that may involve processing both the structure and the content of XML data in order to identify similar XML data. This research presents four clustering methods: two utilizing only the structure of XML documents and two utilizing both the structure and the content. The two structural clustering methods have different data models: one is based on a path model and the other on a tree model. These methods employ rigid similarity measures which aim to identify corresponding elements between documents with different or similar underlying structure. The two clustering methods that utilize both structural and content information vary in how the structure and content similarities are combined. One clustering method calculates the document similarity by using a linear weighting combination strategy of structure and content similarities; the content similarity in this method is based on a semantic kernel. The other method calculates the distance between documents by a non-linear combination of the structure and content of XML documents using a semantic kernel. Empirical analysis shows that the structure-only clustering method based on the tree model is more scalable than the structure-only clustering method based on the path model, as the tree similarity measure does not need to visit the parents of an element many times. Experimental results also show that the clustering methods perform better with the inclusion of content information on most test document collections. To further the research, the structural clustering method based on the tree model is extended and employed in XML transformation. The results of the experiments show that the proposed transformation process is faster than the traditional transformation system, which translates and converts the source XML documents sequentially. Also, the schema matching process of XML transformation produces a better matching result in a shorter time.
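
To make the linear weighting combination concrete, the sketch below combines a toy structural similarity (Jaccard overlap of element paths) with a toy content similarity (cosine of term frequencies). It is only an illustration of the combination strategy; the thesis's actual measures, including the semantic kernel, are not reproduced here, and the example documents are made up.

```python
import math
from collections import Counter

def jaccard(a, b):
    """Structural similarity proxy: Jaccard overlap of the documents' element paths."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def cosine(a, b):
    """Content similarity proxy: cosine similarity of simple term-frequency vectors."""
    va, vb = Counter(a), Counter(b)
    dot = sum(va[t] * vb[t] for t in va)
    norm = math.sqrt(sum(v * v for v in va.values())) * math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

def combined_similarity(doc1, doc2, weight=0.5):
    """Linear weighting combination of structure and content similarity."""
    return weight * jaccard(doc1["paths"], doc2["paths"]) + \
           (1 - weight) * cosine(doc1["terms"], doc2["terms"])

d1 = {"paths": ["/book/title", "/book/author"], "terms": ["xml", "clustering", "semantic"]}
d2 = {"paths": ["/book/title", "/book/price"],  "terms": ["xml", "kernel", "semantic"]}
print(round(combined_similarity(d1, d2, weight=0.6), 3))
```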

Relevance:

20.00%

Publisher:

Abstract:

Using cooperative learning in classrooms promotes academic achievement, communication skills, problem-solving, social skills and student motivation. Yet it is reported that cooperative learning, as a Western educational concept, may be ineffective in Asian cultural contexts. This study aims to investigate the use of scaffolding techniques for cooperative learning in Thai primary mathematics classes. A teacher training program was designed to foster Thai primary school teachers' implementation of cooperative learning. Two teachers participated in this experimental program for one and a half weeks and then implemented cooperative learning strategies in their mathematics classes for six weeks. The data collected from teacher interviews and classroom observations indicate that the difficulty or failure of implementing cooperative learning in Thai education may not derive directly from cultural differences. Instead, the data indicate that Thai culture can be constructively merged with cooperative learning through a teacher training program and the practice of scaffolding techniques.