Abstract:
We have developed a digital image registration program for an MC 68000-based fundus image processing system (FIPS). FIPS is not only capable of executing typical image processing algorithms in the spatial as well as the Fourier domain; the execution time for many operations has also been made much quicker by using a hybrid of C, Fortran and MC68000 assembly languages.
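In modern terms, the core registration step of such a system can be sketched as Fourier-domain phase correlation. The snippet below is an illustrative reconstruction of the technique, not the original MC68000 implementation:

```python
import numpy as np

def phase_correlation_shift(ref, moving):
    """Estimate the integer (dy, dx) translation such that rolling `moving`
    by (dy, dx) aligns it with `ref`, via phase correlation in the
    Fourier domain (illustrative sketch of Fourier-domain registration)."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moving)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12          # keep only the phase information
    corr = np.fft.ifft2(cross).real         # correlation surface: sharp peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image size into negative offsets
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx
```

For a pair of fundus images differing by a pure translation, the peak of the correlation surface directly yields the offset; subpixel refinements and rotation handling would be layered on top in a full system.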
Abstract:
Brisbane, the capital city of Queensland, has catapulted from a small, provincial town to a larger metropolis within two decades of the inception of urban renewal in 1992. Once a low-density suburban city, its inner-city and some suburban areas are now medium to high density, with the rise in apartment buildings creating new and denser modes of living. This article suggests that urbanism demands different habits of living from suburbanism and examines the relationship between the material and representational city to explore the ways in which promotions of the “new” Brisbane during its early urban renewal period reproduce the ethos of suburban living.
Abstract:
The effects of tumour motion during radiation therapy delivery have been widely investigated. Motion effects have become increasingly important with the introduction of dynamic radiotherapy delivery modalities such as enhanced dynamic wedges (EDWs) and intensity modulated radiation therapy (IMRT), where a dynamically collimated radiation beam is delivered to the moving target, resulting in dose blurring and interplay effects which are a consequence of the combined tumour and beam motion. Prior to this work, reported studies on EDW-based interplay effects had been restricted to experimental methods for assessing single-field non-fractionated treatments. In this work, the interplay effects have been investigated for EDW treatments. Single and multiple field treatments have been studied using experimental and Monte Carlo (MC) methods. Initially this work experimentally studies interplay effects for single-field non-fractionated EDW treatments, using radiation dosimetry systems placed on a sinusoidally moving platform. A number of wedge angles (60°, 45° and 15°), field sizes (20 × 20, 10 × 10 and 5 × 5 cm2), amplitudes (10-40 mm in steps of 10 mm) and periods (2 s, 3 s, 4.5 s and 6 s) of tumour motion are analysed (using gamma analysis) for parallel and perpendicular motions (where the tumour and jaw motions are either parallel or perpendicular to each other). For parallel motion it was found that both the amplitude and period of tumour motion affect the interplay; this becomes more prominent when the collimator and tumour speeds become identical. For perpendicular motion the amplitude of tumour motion is the dominant factor, whereas varying the period of tumour motion has no observable effect on the dose distribution. The wedge angle results suggest that the use of a large wedge angle generates greater dose variation for both parallel and perpendicular motions.
The use of a small field size with a large tumour motion results in the loss of the wedged dose distribution for both parallel and perpendicular motion. From these single-field measurements a motion amplitude and period have been identified which show the poorest agreement between the target motion and dynamic delivery, and these are used as the 'worst case motion parameters'. The experimental work is then extended to multiple-field fractionated treatments. Here a number of pre-existing, multiple-field, wedged lung plans are delivered to the radiation dosimetry systems, employing the worst case motion parameters. Moreover a four-field EDW lung plan (using a 4D CT data set) is delivered to the IMRT quality control phantom with a dummy tumour insert over four fractions using the worst case parameters, i.e. 40 mm amplitude and 6 s period. The analysis of the film doses using gamma analysis at 3%-3 mm indicates that the interplay effects did not average out for this particular study, with a gamma pass rate of 49%. To enable Monte Carlo modelling of the problem, the DYNJAWS component module (CM) of the BEAMnrc user code is validated and automated. DYNJAWS has been recently introduced to model the dynamic wedges. DYNJAWS is therefore commissioned for 6 MV and 10 MV photon energies. It is shown that this CM can accurately model the EDWs for a number of wedge angles and field sizes. The dynamic and step-and-shoot modes of the CM are compared for their accuracy in modelling the EDW. It is shown that the dynamic mode is more accurate. An automation of the DYNJAWS-specific input file has been carried out. This file specifies the probability of selection of a subfield and the respective jaw coordinates. This automation simplifies the generation of the BEAMnrc input files for DYNJAWS. The commissioned DYNJAWS model is then used to study multiple-field EDW treatments using MC methods.
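The gamma analysis used throughout compares two dose distributions against combined dose-difference and distance-to-agreement criteria (here 3%/3 mm). A minimal 1D sketch of a global gamma computation, illustrative rather than the authors' specific implementation:

```python
import numpy as np

def gamma_index_1d(ref_dose, eval_dose, positions, dd=0.03, dta=3.0):
    """Global 1D gamma analysis between a reference and an evaluated dose
    profile sampled at the same positions (mm). dd is the dose criterion
    as a fraction of the reference maximum; dta is in mm. A point passes
    when its gamma value is <= 1."""
    ref_dose = np.asarray(ref_dose, dtype=float)
    eval_dose = np.asarray(eval_dose, dtype=float)
    positions = np.asarray(positions, dtype=float)
    ref_max = ref_dose.max()
    gammas = np.empty(ref_dose.shape)
    for i in range(ref_dose.size):
        dose_term = (eval_dose - ref_dose[i]) / (dd * ref_max)  # global dose criterion
        dist_term = (positions - positions[i]) / dta
        gammas[i] = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
    return gammas
```

The pass rate quoted in the abstract is then simply the fraction of points with gamma at or below one; a full 2D film analysis applies the same idea over a dose plane.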
The 4D CT data of an IMRT phantom with the dummy tumour is used to produce a set of Monte Carlo simulation phantoms, onto which the delivery of single-field and multiple-field EDW treatments is simulated. A number of static and motion multiple-field EDW plans have been simulated. The comparison of dose volume histograms (DVHs) and gamma volume histograms (GVHs) for four-field EDW treatments (where the collimator and patient motion is in the same direction) using small (15°) and large (60°) wedge angles indicates a greater mismatch between the static and motion cases for the large wedge angle. Finally, to use gel dosimetry as a validation tool, a new technique called the 'zero-scan method' is developed for reading the gel dosimeters with x-ray computed tomography (CT). It has been shown that multiple scans of a gel dosimeter (in this case 360 scans) can be used to reconstruct a zero-scan image. This zero-scan image has a similar precision to an image obtained by averaging the CT images, without the additional dose delivered by the CT scans. In this investigation the interplay effects have been studied for single and multiple-field fractionated EDW treatments using experimental and Monte Carlo methods. For the Monte Carlo methods, the DYNJAWS component module of the BEAMnrc code has been validated and automated, and further used to study the interplay for multiple-field EDW treatments. The zero-scan method, a new gel dosimetry readout technique, has been developed for reading the gel images using x-ray CT without losing precision or accuracy.
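The zero-scan idea can be illustrated numerically: each pixel's value is fitted against scan number across the stack of repeated CT scans and extrapolated back to the zeroth scan, retaining the noise-averaging benefit of many scans while removing the signal added by the scanning dose itself. A toy sketch, assuming for illustration a linear drift of pixel value with scan number:

```python
import numpy as np

def zero_scan_image(scans):
    """Given a stack of CT scans (n_scans, H, W) of a gel dosimeter, fit
    each pixel's value linearly against scan number and return the
    extrapolated scan-zero image (a sketch of the 'zero-scan' idea;
    assumes a linear drift with accumulated scanning dose)."""
    n = scans.shape[0]
    t = np.arange(n, dtype=float)
    flat = scans.reshape(n, -1)                 # one column per pixel
    A = np.vstack([t, np.ones(n)]).T            # value = slope * t + intercept
    coef, *_ = np.linalg.lstsq(A, flat, rcond=None)
    intercept = coef[1]                         # fitted value at scan 0
    return intercept.reshape(scans.shape[1:])
```

Because every scan contributes to the per-pixel fit, the reconstructed image has a precision comparable to the mean of all scans, which is the property the abstract reports.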
Abstract:
Visual adaptation regulates contrast sensitivity during dynamically changing light conditions (Crawford, 1947; Hecht, Haig & Chase, 1937). These adaptation dynamics are unknown under dim (mesopic) light levels when the rod (R) and long (L), medium (M) and short (S) wavelength cone photoreceptor classes contribute to vision via interactions in shared non-opponent Magnocellular (MC), chromatically opponent Parvocellular (PC) and Koniocellular (KC) visual pathways (Dacey, 2000). This study investigated the time-course of adaptation and the post-receptoral pathways mediating receptor-specific rod and cone interactions under mesopic illumination. A four-primary photostimulator (Pokorny, Smithson & Quinlan, 2004) was used to independently control the activity of the four photoreceptor classes and their post-receptoral visual pathways in human observers. In the first experiment, the contrast sensitivity and time-course of visual adaptation under mesopic illumination were measured for receptoral (L, S, R) and post-receptoral (LMS, LMSR, L-M) stimuli. An incremental (Rapid-ON) sawtooth conditioning pulse biased detection to ON-cells within the visual pathways and sensitivity was assayed relative to pulse onset using a briefly presented incremental probe that did not alter adaptation. Cone-Cone interactions with luminance stimuli (L cone, LMS, LMSR) reduced sensitivity by 15% and the time course of recovery was 25 ± 5 ms-1 (μ ± SEM). For PC-mediated (+L-M) chromatic stimuli, sensitivity loss was less (8%) than for luminance and recovery was slower (μ = 2.95 ± 0.05 ms-1), with KC-mediated (S cone) chromatic stimuli showing a high sensitivity loss (38%) and the slowest recovery time (1.6 ± 0.2 ms-1). Rod-Rod interactions increased sensitivity by 20% and the time course of recovery was 0.7 ± 0.2 ms-1 (μ ± SD). Compared to these interaction types, Rod-Cone interactions reduced sensitivity to a lesser degree (5%) and showed the fastest recovery (μ = 43 ± 7 ms-1).
In the second experiment, rod contribution to the magnocellular, parvocellular and koniocellular post-receptoral pathways under mesopic illumination was determined as a function of incremental stimulus duration and waveform (rectangular; sawtooth) using a rod colour match procedure (Cao, Pokorny & Smith, 2005; Cao, Pokorny, Smith & Zele, 2008a). For a 30% rod increment, a cone match required a decrease in [L/(L+M)] and an increase in [L+M] and [S/(L+M)], giving a greenish-blue and brighter appearance for probe durations of 75 ms or longer. Probe durations less than 75 ms showed an increase in [L+M] and no change in chromaticity [L/(L+M) or S/(L+M)], suggesting mediation by the MC pathway only for short duration rod stimuli. We advance previous studies by determining the time-course and nature of photoreceptor-specific retinal interactions in the three post-receptoral pathways under mesopic illumination. In the first experiment, the time-course of adaptation for ON cell processing was determined, revealing opponent cell facilitation in the chromatic PC and KC pathways. The Rod-Rod and Rod-Cone data identify previously unknown interaction types that act to maintain contrast sensitivity during dynamically changing light conditions and improve the speed of light adaptation under mesopic light levels. The second experiment determined the degree of rod contribution to the inferred post-receptoral pathways as a function of the temporal properties of the rod signal. The understanding of the mechanisms underlying interactions between photoreceptors under mesopic illumination has implications for the study of retinal disease. Visual function has been shown to be reduced in persons with age-related maculopathy (ARM) risk genotypes prior to clinical signs of the disease (Feigl, Cao, Morris & Zele, 2011) and disturbances in rod-mediated adaptation have been shown in early phases of ARM (Dimitrov, Guymer, Zele, Anderson & Vingrys, 2008; Feigl, Brown, Lovie-Kitchin & Swann, 2005).
Also, the understanding of retinal networks controlling vision enables the development of international lighting standards to optimise visual performance under dim light levels (e.g. work-place environments, transportation).
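Independent control of the four photoreceptor classes with a four-primary photostimulator rests on silent substitution: the four primary modulations are solved from a linear system linking primaries to L, M, S and rod excitations. A sketch with made-up sensitivity coefficients (real values would come from the photoreceptor spectral sensitivities and the measured primary spectra):

```python
import numpy as np

# Rows: photoreceptor classes (L, M, S, Rod); columns: the four primaries.
# These coefficients are illustrative placeholders, NOT measured sensitivities.
SENS = np.array([
    [0.60, 0.30, 0.05, 0.05],
    [0.30, 0.55, 0.10, 0.05],
    [0.02, 0.08, 0.80, 0.10],
    [0.20, 0.30, 0.20, 0.30],
])

def primary_modulation(target_excitation):
    """Solve SENS @ p = e for the primary modulation p producing a desired
    photoreceptor excitation change e, e.g. a rod-only increment that leaves
    L, M and S cone excitations constant (silent substitution)."""
    return np.linalg.solve(SENS, target_excitation)

# Example: a 30% rod contrast increment with the three cone classes silenced.
p = primary_modulation(np.array([0.0, 0.0, 0.0, 0.30]))
```

Stimuli such as the 30% rod increment in the second experiment correspond to exactly this kind of solution, with the resulting primary modulations delivered by the photostimulator.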
Abstract:
Multiple choice (MC) examinations are frequently used for the summative assessment of large classes because of their ease of marking and their perceived objectivity. However, traditional MC formats usually lead to a surface approach to learning, and do not allow students to demonstrate the depth of their knowledge or understanding. For these reasons, we have trialled the incorporation of short answer (SA) questions into the final examination of two first year chemistry units, alongside MC questions. Students’ overall marks were expected to improve, because they were able to obtain partial marks for the SA questions. Although large differences in some individual students’ performance in the two sections of their examinations were observed, most students received a similar percentage mark for their MC as for their SA sections and the overall mean scores were unchanged. In-depth analysis of all responses to a specific question, which was used previously as a MC question and in a subsequent semester in SA format, indicates that the SA format can have weaknesses due to marking inconsistencies that are absent for MC questions. However, inclusion of SA questions improved student scores on the MC section in one examination, indicating that their inclusion may lead to different study habits and deeper learning. We conclude that questions asked in SA format must be carefully chosen in order to optimise the use of marking resources, both financial and human, and questions asked in MC format should be very carefully checked by people trained in writing MC questions. These results, in conjunction with an analysis of the different examination formats used in first year chemistry units, have shaped a recommendation on how to reliably and cost-effectively assess first year chemistry, while encouraging higher order learning outcomes.
Abstract:
Purpose: Photoreceptor interactions reduce the temporal bandwidth of the visual system under mesopic illumination. The dynamics of these interactions are not clear. This study investigated cone-cone and rod-cone interactions when the rod (R) and three cone (L, M, S) photoreceptor classes contribute to vision via shared post-receptoral pathways. Methods: A four-primary photostimulator independently controlled photoreceptor activity in human observers. To determine the temporal dynamics of receptoral (L, S, R) and post-receptoral (LMS, LMSR, +L-M) pathways (5 Td, 7° eccentricity) in Experiment 1, ON-pathway sensitivity was assayed with an incremental probe (25 ms) presented relative to onset of an incremental sawtooth conditioning pulse (1000 ms). To define the post-receptoral pathways mediating the rod stimulus, Experiment 2 matched the color appearance of increased rod activation (30% contrast, 25-1000 ms; constant cone excitation) with cone stimuli (variable L+M, L/L+M, S/L+M; constant rod excitation). Results: Cone-cone interactions with luminance stimuli (LMS, LMSR, L-cone) reduced Weber contrast sensitivity by 13% and the time course of adaptation was 23.7 ± 1 ms (μ ± SE). With chromatic stimuli (+L-M, S), cone pathway sensitivity was also reduced and recovery was slower (+L-M: 8%, 2.9 ± 0.1 ms; S: 38%, 1.5 ± 0.3 ms). Threshold patterns at ON-conditioning pulse onset were monophasic for luminance and biphasic for chromatic stimuli. Rod-rod interactions increased sensitivity (19%) with a recovery time of 0.7 ± 0.2 ms. Compared to cone-cone interactions, rod-cone interactions with luminance stimuli reduced sensitivity to a lesser degree (5%) with faster recovery (42.9 ± 0.7 ms). Rod-cone interactions were absent with chromatic stimuli. Experiment 2 showed that rod activation generated luminance (L+M) signals at all durations, and chromatic signals (L/L+M, S/L+M) for durations >75 ms.
Conclusions: Temporal dynamics of cone-cone interactions are consistent with contrast sensitivity loss in the MC pathway for luminance stimuli and chromatically opponent responses in the PC and KC pathway with chromatic stimuli. Rod-cone interactions limit contrast sensitivity loss during dynamic illumination changes and increase the speed of mesopic light adaptation. The change in relative weighting of the temporal rod signal within the major post-receptoral pathways modifies the sensitivity and dynamics of photoreceptor interactions.
Abstract:
This study investigates the time-course and post-receptoral pathway signaling of photoreceptor interactions when the rod (R) and three cone (L, M, S) photoreceptor classes contribute to mesopic vision. A four-primary photostimulator independently controls photoreceptor activity in human observers. The first experiment defines the temporal adaptation response of receptoral (L-, S-cone, rod) and post-receptoral (LMS, LMSR, +L-M) signaling and interactions. Here we show that nonopponent cone-cone interactions (L-cone, LMS, LMSR) have monophasic temporal response patterns whereas opponent signals (+L-M, S-cone) show biphasic response patterns with slower recovery. By comparison, rod-cone interactions with nonopponent signals have faster adaptation responses and reduced sensitivity loss whereas opponent rod-cone interactions are small or absent. Additionally, the rod-rod interaction differs from these interaction types and acts to increase rod sensitivity due to temporal summation but with a slower time course. The second experiment shows that the temporal profile of the rod signal alters the relative rod contributions to the three primary post-receptoral pathways. We demonstrate that rod signals generate luminance (+L+M) signals mediated via the MC pathway with all rod temporal profiles and chromatic signals (L/L+M, S/L+M) in both the PC and KC pathways with durations >75 ms. Thus, we propose that the change in relative weighting of rod signals within the post-receptoral pathways contributes to the sensitivity and temporal response of rod and cone pathway signaling and interactions.
Abstract:
It is commonly assumed that rates of accumulation of organic-rich strata have varied through geologic time, with some periods that were particularly favourable for accumulation of petroleum source rocks or coals. A rigorous analysis of the validity of such an assumption requires consideration of the basic fact that sedimentary rocks have been lost through geologic time to erosion and metamorphism; consequently, their present-day global abundance decreases with their geologic age. Measurements of the global abundance of coal-bearing strata suggest that conditions for coal accumulation were exceptionally favourable during the late Carboniferous. Strata of this age constitute 21% of the world's coal-bearing strata. Global rates of coal accumulation appear to have been relatively constant since the end of the Carboniferous, with the exception of the Triassic, which contains only 1.75% of the world's coal-bearing strata. Estimation of the global amount of discovered oil by age of the source rock shows that 58% of the world's oil has been sourced from Cretaceous or younger strata and 99% from Silurian or younger strata. Although most geologic periods were favourable for oil source-rock accumulation, the mid-Permian to mid-Jurassic appears to have been particularly unfavourable, accounting for less than 2% of the world's oil. Estimation of the global amount of discovered natural gas by age of the source rock shows that 48% of the world's gas has been sourced from Cretaceous or younger strata and 99% from Silurian or younger strata. The Silurian and Late Carboniferous were particularly favourable for gas source-rock accumulation, respectively accounting for 12.9% and 6.9% of the world's gas. By contrast, Permian and Triassic source rocks account for only 1.7% of the world's natural gas.
Rather than invoking global climatic or oceanic events to explain the relative abundance of organic-rich sediments through time, examination of the data suggests the more critical control is tectonic. The majority of coals are associated with foreland basins and the majority of oil-prone source rocks are associated with rifting. The relative abundance of these types of basin through time determines the abundance and location of coals and petroleum source rocks.
Abstract:
Introduction: The accurate identification of tissue electron densities is of great importance for Monte Carlo (MC) dose calculations. When converting patient CT data into a voxelised format suitable for MC simulations, however, it is common to simplify the assignment of electron densities so that the complex tissues existing in the human body are categorized into a few basic types. This study examines the effects that the assignment of tissue types and the calculation of densities can have on the results of MC simulations, for the particular case of a Siemens Sensation 4 CT scanner located in a radiotherapy centre where QA measurements are routinely made using 11 tissue types (plus air). Methods: DOSXYZnrc phantoms are generated from CT data, using the CTCREATE user code, with the relationship between Hounsfield units (HU) and density determined via linear interpolation between a series of specified points on the ‘CT-density ramp’ (see Figure 1(a)). Tissue types are assigned according to HU ranges. Each voxel in the DOSXYZnrc phantom therefore has an electron density (electrons/cm3) defined by the product of the mass density (from the HU conversion) and the intrinsic electron density (electrons/gram) (from the material assignment) in that voxel. In this study, we consider the problems of density conversion and material identification separately: the CT-density ramp is simplified by decreasing the number of points which define it from 12 down to 8, 3 and 2; and the material-type assignment is varied by defining the materials which comprise our test phantom (a Supertech head) as two tissues and bone, two plastics and bone, water only and (as an extreme case) lead only. The effect of these parameters on radiological thickness maps derived from simulated portal images is investigated.
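The HU-to-density conversion described above amounts to piecewise-linear interpolation between control points on the CT-density ramp; the number and placement of those points is exactly what the study varies. An illustrative sketch (the control points below are placeholders, not the commissioned Siemens Sensation 4 ramp):

```python
import numpy as np

# Illustrative (NOT scanner-specific) control points on the CT-density ramp:
# (Hounsfield unit, mass density in g/cm^3). A real ramp comes from a
# calibration of the specific CT scanner, as the abstract stresses.
RAMP_HU      = [-1000, -100,  0,   100,  1000, 3000]
RAMP_DENSITY = [0.001, 0.93, 1.0, 1.07, 1.6,  2.8]

def hu_to_density(hu):
    """Linear interpolation between ramp control points, as CTCREATE does.
    The voxel's electron density (electrons/cm^3) is then this mass density
    multiplied by the assigned material's intrinsic electrons/gram."""
    return np.interp(hu, RAMP_HU, RAMP_DENSITY)
```

Coarsening the ramp corresponds to dropping control points from these lists, which is why the 2-point ramp in the results distorts radiological thicknesses so much more than the 8-point ramp.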
Results & Discussion: Increasing the degree of simplification of the CT-density ramp results in an increasing effect on the resulting radiological thickness calculated for the Supertech head phantom. For instance, defining the CT-density ramp using 8 points, instead of 12, results in a maximum radiological thickness change of 0.2 cm, whereas defining the CT-density ramp using only 2 points results in a maximum radiological thickness change of 11.2 cm. Changing the definition of the materials comprising the phantom between water and plastic and tissue results in millimetre-scale changes to the resulting radiological thickness. When the entire phantom is defined as lead, this alteration changes the calculated radiological thickness by a maximum of 9.7 cm. Evidently, the simplification of the CT-density ramp has a greater effect on the resulting radiological thickness map than does the alteration of the assignment of tissue types. Conclusions: It is possible to alter the definitions of the tissue types comprising the phantom (or patient) without substantially altering the results of simulated portal images. However, these images are very sensitive to the accurate identification of the HU-density relationship. When converting data from a patient’s CT into a MC simulation phantom, therefore, all possible care should be taken to accurately reproduce the conversion between HU and mass density, for the specific CT scanner used. Acknowledgements: This work is funded by the NHMRC, through a project grant, and supported by the Queensland University of Technology (QUT) and the Royal Brisbane and Women's Hospital (RBWH), Brisbane, Australia. The authors are grateful to the staff of the RBWH, especially Darren Cassidy, for assistance in obtaining the phantom CT data used in this study. The authors also wish to thank Cathy Hargrave, of QUT, for assistance in formatting the CT data, using the Pinnacle TPS. 
Computational resources and services used in this work were provided by the HPC and Research Support Group, QUT, Brisbane, Australia.
Abstract:
Introduction: The use of amorphous-silicon electronic portal imaging devices (a-Si EPIDs) for dosimetry is complicated by the effects of scattered radiation. In photon radiotherapy, primary signal at the detector can be accompanied by photons scattered from linear accelerator components, detector materials, intervening air, treatment room surfaces (floor, walls, etc) and from the patient/phantom being irradiated. Consequently, EPID measurements which presume to take scatter into account are highly sensitive to the identification of these contributions. One example of this susceptibility is the process of calibrating an EPID for use as a gauge of (radiological) thickness, where specific allowance must be made for the effect of phantom-scatter on the intensity of radiation measured through different thicknesses of phantom. This is usually done via a theoretical calculation which assumes that phantom scatter is linearly related to thickness and field-size. We have, however, undertaken a more detailed study of the scattering effects of fields of different dimensions when applied to phantoms of various thicknesses in order to derive scattered-primary ratios (SPRs) directly from simulation results. This allows us to make a more-accurate calibration of the EPID, and to qualify the appositeness of the theoretical SPR calculations. Methods: This study uses a full MC model of the entire linac-phantom-detector system simulated using EGSnrc/BEAMnrc codes. The Elekta linac and EPID are modelled according to specifications from the manufacturer and the intervening phantoms are modelled as rectilinear blocks of water or plastic, with their densities set to a range of physically realistic and unrealistic values. Transmissions through these various phantoms are calculated using the dose detected in the model EPID and used in an evaluation of the field-size-dependence of SPR, in different media, applying a method suggested for experimental systems by Swindell and Evans [1]. 
These results are compared firstly with SPRs calculated using the theoretical, linear relationship between SPR and irradiated volume, and secondly with SPRs evaluated from our own experimental data. An alternate evaluation of the SPR in each simulated system is also made by modifying the BEAMnrc user code READPHSP, to identify and count those particles in a given plane of the system that have undergone a scattering event. In addition to these simulations, which are designed to closely replicate the experimental setup, we also used MC models to examine the effects of varying the setup in experimentally challenging ways (changing the size of the air gap between the phantom and the EPID, changing the longitudinal position of the EPID itself). Experimental measurements used in this study were made using an Elekta Precise linear accelerator, operating at 6MV, with an Elekta iView GT a-Si EPID. Results and Discussion: 1. Comparison with theory: With the Elekta iView EPID fixed at 160 cm from the photon source, the phantoms, when positioned isocentrically, are located 41 to 55 cm from the surface of the panel. At this geometry, a close but imperfect agreement (differing by up to 5%) can be identified between the results of the simulations and the theoretical calculations. However, this agreement can be totally disrupted by shifting the phantom out of the isocentric position. Evidently, the allowance made for source-phantom-detector geometry by the theoretical expression for SPR is inadequate to describe the effect that phantom proximity can have on measurements made using an (infamously low-energy sensitive) a-Si EPID. 2. Comparison with experiment: For various square field sizes and across the range of phantom thicknesses, there is good agreement between simulation data and experimental measurements of the transmissions and the derived values of the primary intensities. 
However, the values of SPR obtained through these simulations and measurements seem to be much more sensitive to slight differences between the simulated and real systems, leading to difficulties in producing a simulated system which adequately replicates the experimental data. (For instance, small changes to simulated phantom density make large differences to the resulting SPR.) 3. Comparison with direct calculation: By developing a method for directly counting the number of scattered particles reaching the detector after passing through the various isocentric phantom thicknesses, we show that the experimental method discussed above provides a good measure of the actual degree of scattering produced by the phantom. This calculation also permits the analysis of the scattering sources/sinks within the linac and EPID, as well as the phantom and intervening air. Conclusions: This work challenges the assumption that scatter to and within an EPID can be accounted for using a simple, linear model. Simulations discussed here are intended to contribute to a fuller understanding of the contribution of scattered radiation to the EPID images that are used in dosimetry calculations. Acknowledgements: This work is funded by the NHMRC, through a project grant, and supported by the Queensland University of Technology (QUT) and the Royal Brisbane and Women's Hospital, Brisbane, Australia. The authors are also grateful to Elekta for the provision of manufacturing specifications which permitted the detailed simulation of their linear accelerators and amorphous-silicon electronic portal imaging devices. Computational resources and services used in this work were provided by the HPC and Research Support Group, QUT, Brisbane, Australia.
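The field-size dependence exploited above can be turned into SPR estimates in the manner of Swindell and Evans: transmission is measured for several field sizes, extrapolated to zero field area to isolate the primary component, and the scatter-to-primary ratio follows. A numerical sketch with fabricated transmission values (purely illustrative, not the measured Elekta data):

```python
import numpy as np

def spr_from_transmissions(field_areas, transmissions):
    """Swindell-and-Evans-style estimate: fit transmission linearly in field
    area, take the zero-area intercept as the primary-only transmission T_p,
    then form SPR(A) = T(A) / T_p - 1 for each field area A."""
    field_areas = np.asarray(field_areas, dtype=float)
    transmissions = np.asarray(transmissions, dtype=float)
    slope, T_p = np.polyfit(field_areas, transmissions, 1)
    return transmissions / T_p - 1.0, T_p
```

The comparison in the abstract is then between SPRs formed this way from simulated (or measured) transmissions and the theoretical linear model of SPR versus irradiated volume.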
Abstract:
Introduction: Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo (MC) methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the ‘gold standard’ for predicting dose deposition in the patient [1]. This project has three main aims: 1. To develop tools that enable the transfer of treatment plan information from the treatment planning system (TPS) to a MC dose calculation engine. 2. To develop tools for comparing the 3D dose distributions calculated by the TPS and the MC dose engine. 3. To investigate the radiobiological significance of any errors between the TPS patient dose distribution and the MC dose distribution in terms of Tumour Control Probability (TCP) and Normal Tissue Complication Probabilities (NTCP). The work presented here addresses the first two aims. Methods: (1a) Plan Importing: A database of commissioned accelerator models (Elekta Precise and Varian 2100CD) has been developed for treatment simulations in the MC system (EGSnrc/BEAMnrc). Beam descriptions can be exported from the TPS using the widespread DICOM framework, and the resultant files are parsed with the assistance of a software library (PixelMed Java DICOM Toolkit). The information in these files (such as the monitor units, the jaw positions and gantry orientation) is used to construct a plan-specific accelerator model which allows an accurate simulation of the patient treatment field. (1b) Dose Simulation: The calculation of a dose distribution requires patient CT images which are prepared for the MC simulation using a tool (CTCREATE) packaged with the system. 
Beam simulation results are converted to absolute dose per MU using calibration factors recorded during the commissioning process and treatment simulation. These distributions are combined according to the MU meter settings stored in the exported plan to produce an accurate description of the prescribed dose to the patient. (2) Dose Comparison: TPS dose calculations can be obtained using either a DICOM export or by direct retrieval of binary dose files from the file system. Dose difference, gamma evaluation and normalised dose difference algorithms [2] were employed for the comparison of the TPS dose distribution and the MC dose distribution. These implementations are spatial-resolution independent and able to interpolate for comparisons. Results and Discussion: The tools successfully produced Monte Carlo input files for a variety of plans exported from the Eclipse (Varian Medical Systems) and Pinnacle (Philips Medical Systems) planning systems, ranging in complexity from a single uniform square field to a five-field step-and-shoot IMRT treatment. The simulation of collimated beams has been verified geometrically, and validation of dose distributions in a simple body phantom (QUASAR) will follow. The developed dose comparison algorithms have also been tested with controlled dose distribution changes. Conclusion: The capability of the developed code to independently process treatment plans has been demonstrated. A number of limitations exist: only static fields are currently supported (dynamic wedges and dynamic IMRT will require further development), and the process has not been tested for planning systems other than Eclipse and Pinnacle. The tools will be used to independently assess the accuracy of the current treatment planning system dose calculation algorithms for complex treatment deliveries such as IMRT in treatment sites where patient inhomogeneities are expected to be significant.
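The per-beam combination step described above reduces to an MU-weighted sum of the calibrated per-beam dose grids. A minimal sketch (the beam names and numbers below are illustrative, not taken from a real plan):

```python
import numpy as np

def combine_beam_doses(dose_per_mu, monitor_units):
    """Combine per-beam Monte Carlo dose grids (each already converted to
    absolute dose per MU via the commissioning calibration factors) into a
    total plan dose, weighted by each beam's MU meter setting from the
    exported plan. Both arguments are dicts keyed by beam name."""
    total = np.zeros_like(next(iter(dose_per_mu.values())))
    for beam, grid in dose_per_mu.items():
        total += monitor_units[beam] * grid
    return total
```

The resulting grid is then what the dose-difference, gamma and normalised-dose-difference comparisons are run against.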
Acknowledgements: Computational resources and services used in this work were provided by the HPC and Research Support Group, Queensland University of Technology, Brisbane, Australia. Pinnacle dose parsing was made possible with the help of Paul Reich, North Coast Cancer Institute, North Coast, New South Wales.
Abstract:
The emergence of pseudo-marginal algorithms has led to improved computational efficiency for dealing with complex Bayesian models with latent variables. Here an unbiased estimator of the likelihood replaces the true likelihood in order to produce a Bayesian algorithm that remains on the marginal space of the model parameter (with latent variables integrated out), with a target distribution that is still the correct posterior distribution. Very efficient proposal distributions can be developed on the marginal space relative to the joint space of the model parameter and latent variables, so pseudo-marginal algorithms tend to have substantially better mixing properties. However, for pseudo-marginal approaches to perform well, the likelihood has to be estimated rather precisely, which can be difficult to achieve in complex applications. In this paper we propose to take advantage of the multiple central processing units (CPUs) that are readily available on most standard desktop computers. The likelihood is estimated independently on each of the multiple CPUs, with the ultimate estimate of the likelihood being the average of the estimates obtained from the individual CPUs. The averaged estimate remains unbiased, but its variability is reduced. We compare and contrast two different technologies that allow the implementation of this idea, both of which require a negligible amount of extra programming effort. The superior performance of this approach over the standard one is demonstrated on simulated data from a stochastic volatility model.
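The variance-reduction idea can be illustrated with a toy unbiased estimator. Everything below is a hypothetical stand-in for the paper's stochastic volatility application, and a serial loop stands in for dispatch to multiple CPUs (e.g. via the multiprocessing module): averaging K independent unbiased estimates leaves the estimate unbiased while dividing its variance by K.

```python
import random
import statistics

def noisy_likelihood_estimate(theta, rng, n_particles=50):
    # Toy unbiased estimator: each draw is uniform on [0, 2*theta],
    # so the sample mean is an unbiased estimate of L(theta) = theta.
    return statistics.fmean(rng.uniform(0.0, 2.0 * theta)
                            for _ in range(n_particles))

def averaged_estimate(theta, rng, n_workers=8):
    # Stand-in for the multi-CPU scheme: each "worker" contributes an
    # independent unbiased estimate and the results are averaged,
    # reducing the variance by roughly a factor of n_workers.
    return statistics.fmean(noisy_likelihood_estimate(theta, rng)
                            for _ in range(n_workers))
```

Repeating both estimators many times and comparing sample variances shows the averaged version is markedly less noisy, which is exactly what a pseudo-marginal sampler needs for good mixing.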
Abstract:
This research is in the field of arts education. Eisner claims that ‘teachers rarely view themselves as artists’ (Taylor, 1993:21). Situating professional dance artists and teacher-artists (Mc Lean, 2009) in close proximity to classroom dance teachers, spatially through a shared rehearsal studio and creatively by engaging them in a co-artistry approach, allows participants to map unique and new creative processes, kinaesthetically and experientially. This practice encourages teachers to attune and align themselves with artists’ states of mind and enables them to nurture both their teacher-self and their artist-self (Lichtenstein, 2009). The research question was: can interactions between professional dance artists, teacher-artists (Mc Lean, 2009) and classroom dance teachers change classroom dance teachers’ self-perceptions? The research found that Artists in Residence projects provide up-skilling in situ for classroom dance teachers, and give credence to the act of art making for classroom dance teachers within their peer context, positively enhancing their self-image and promoting self-identification as ‘teacher-artists’ (Mc Lean, 2009). This project received an Artist in Residence Grant (an Australia Council for the Arts, Education Queensland and Queensland Arts Council partnership). The research findings were chosen for inclusion in the Queensland Performing Arts Complex program, Feet First: an invitation to dance, 2013, and selected for inclusion on the Creative Campus website, http://www.creative-campus.org.uk.
Abstract:
Table of Contents
“your darkness also/rich and beyond fear”: Community Performance, Somatic Poetics and the Vessels of Self and Other - Petra Kuppers
“So what will you do on the plinth?”: A Personal Experience of Disclosure during Antony Gormley’s “One & Other” Project - Jill Francesca Dowse
Food Confessions: Disclosing the Self through the Performance of Food - Jenny Lawson
Participation Cartography: The Presentation of Self in Spatio-Temporal Terms - Luis Carlos Sotelo-Castro
Disclosure in Biographically-Based Fiction: The Challenges of Writing Narratives Based on True Life Stories - Donna Lee Brien
Closure through Mock-Disclosure in Bret Easton Ellis’s Lunar Park - Jennifer Anne Phillips
Disclosing the Ethnographic Self - Christine Lohmeier
Celebrity Twitter: Strategies of Intrusion and Disclosure in the Age of Technoculture - Nick Muntean, Anne Helen Petersen
“Just Emotional People”? Emo Culture and the Anxieties of Disclosure - Michelle Phillipov
Abstract:
Purpose: The goal of this work was to set out a methodology for measuring and reporting small field relative output, and to assess the application of published correction factors across a population of linear accelerators. Methods and materials: Measurements were made at 6 MV on five Varian iX accelerators using two PTW T60017 unshielded diodes. Relative output readings and profile measurements were made for nominal square field sizes of side 0.5 to 1.0 cm. The actual in-plane (A) and cross-plane (B) field widths were taken to be the FWHM at the 50% isodose level. An effective field size, defined as FSeff = √(A·B), was calculated and is presented as a field size metric. FSeff was used to linearly interpolate between published Monte Carlo (MC) calculated k_Qclin,Qmsr^fclin,fmsr values to correct for the diode over-response in small fields. Results: The relative output data reported as a function of the nominal field size differed across the accelerator population by up to nearly 10%. However, using the effective field size for reporting showed that the actual output ratios were consistent across the accelerator population to within the experimental uncertainty of ±1.0%. Correcting the measured relative output using k_Qclin,Qmsr^fclin,fmsr at both the nominal and effective field sizes produced output factors that were not identical, but the differences were much smaller than the reported experimental and/or MC statistical uncertainties. Conclusions: In general, the proposed methodology removes much of the ambiguity in reporting and interpreting small field dosimetric quantities and facilitates a clear dosimetric comparison across a population of linacs.
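The correction workflow can be sketched as follows, using the geometric-mean effective field size FSeff = √(A·B). The tabulated correction values in the example are placeholders, not the published k_Qclin,Qmsr^fclin,fmsr data, which must be taken from the MC-calculated tables for the specific detector and beam.

```python
import math

def effective_field_size(a_cm, b_cm):
    # Geometric mean of the in-plane (A) and cross-plane (B) FWHMs.
    return math.sqrt(a_cm * b_cm)

def interpolate_k(fs_eff, table):
    # Linear interpolation in a table of (field size, correction factor)
    # pairs sorted by field size, mirroring the interpolation between
    # published MC-calculated values described above.
    for (fs0, k0), (fs1, k1) in zip(table, table[1:]):
        if fs0 <= fs_eff <= fs1:
            t = (fs_eff - fs0) / (fs1 - fs0)
            return k0 + t * (k1 - k0)
    raise ValueError("effective field size outside tabulated range")

def corrected_output(reading_ratio, fs_eff, table):
    # Measured diode relative output times the interpolated correction
    # factor, compensating for diode over-response in small fields.
    return reading_ratio * interpolate_k(fs_eff, table)
```

For example, with placeholder values table = [(0.5, 0.95), (0.75, 0.97), (1.0, 0.99)], a field measuring 0.5 × 0.5 cm at FWHM has FSeff = 0.5 cm, and its diode reading ratio would be multiplied by 0.95.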