898 results for Portal frames


Relevance:

10.00%

Publisher:

Abstract:

Purpose: Electronic Portal Imaging Devices (EPIDs) are available with most linear accelerators (Antonuk, 2002), the current technology being amorphous silicon flat panel imagers. EPIDs are currently used routinely in patient positioning before radiotherapy treatments. There has been an increasing interest in using EPID technology for dosimetric verification of radiotherapy treatments (van Elmpt, 2008). A straightforward technique involves using the EPID panel to measure the fluence exiting the patient during a treatment, which is then compared to a prediction of the fluence based on the treatment plan. However, a number of significant limitations exist in this method, resulting in its limited proliferation in a clinical environment. In this paper, we present a technique for simulating IMRT fields using Monte Carlo methods to predict the dose in an EPID, which can then be compared to the measured EPID dose. Materials: Measurements were made using an iView GT flat panel a-Si EPID mounted on an Elekta Synergy linear accelerator. The images from the EPID were acquired using the XIS software (Heimann Imaging Systems). Monte Carlo simulations were performed using the BEAMnrc and DOSXYZnrc user codes. The IMRT fields to be delivered were taken from the treatment planning system in DICOM-RT format and converted into BEAMnrc and DOSXYZnrc input files using an in-house application (Crowe, 2009). Additionally, all image processing and analysis was performed using another in-house application written in the Interactive Data Language (IDL) (ITT Visual Information Systems). Comparison between the measured and Monte Carlo EPID images was performed using a gamma analysis (Low, 1998) incorporating dose and distance-to-agreement criteria. Results: The fluence maps recorded by the EPID were found to provide good agreement between measured and simulated data. Figure 1 shows an example of measured and simulated IMRT dose images and profiles in the x and y directions. References: D. A. Low et al., "A technique for the quantitative evaluation of dose distributions", Med Phys, 25(5), May 1998. S. Crowe, T. Kairn, A. Fielding, "The development of a Monte Carlo system to verify radiotherapy treatment dose calculations", Radiotherapy & Oncology, Volume 92, Supplement 1, August 2009, Pages S71.
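For readers unfamiliar with the metric, the gamma analysis of Low et al. (1998) combines a dose-difference criterion with a distance-to-agreement (DTA) criterion into a single per-pixel index. The following is a minimal Python sketch of a brute-force gamma calculation for 2D dose maps; the 3%/3 mm criteria, the uniform grid spacing and all array names are illustrative assumptions, not values taken from the paper.

```python
# Brute-force 2D gamma analysis (Low et al., 1998) -- illustrative sketch.
import numpy as np

def gamma_index(measured, simulated, spacing_mm=1.0,
                dose_crit=0.03, dta_crit_mm=3.0):
    """Per-pixel gamma for two 2D dose maps on the same uniform grid.
    dose_crit is a fraction of the maximum measured dose; dta_crit_mm is
    the distance-to-agreement criterion in millimetres."""
    dose_tol = dose_crit * measured.max()
    # Search window of twice the DTA radius, in pixels.
    search = int(np.ceil(dta_crit_mm / spacing_mm)) * 2
    ny, nx = measured.shape
    gamma = np.zeros_like(measured, dtype=float)
    for j in range(ny):
        for i in range(nx):
            j0, j1 = max(0, j - search), min(ny, j + search + 1)
            i0, i1 = max(0, i - search), min(nx, i + search + 1)
            jj, ii = np.mgrid[j0:j1, i0:i1]
            dist2 = ((jj - j) ** 2 + (ii - i) ** 2) * spacing_mm ** 2
            ddose2 = (simulated[j0:j1, i0:i1] - measured[j, i]) ** 2
            gamma[j, i] = np.sqrt((dist2 / dta_crit_mm ** 2
                                   + ddose2 / dose_tol ** 2).min())
    return gamma  # pixels with gamma <= 1 pass the combined criteria
```

A map of this kind, thresholded at gamma = 1, is what underlies the measured-versus-simulated agreement reported above.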

Relevance:

10.00%

Publisher:

Abstract:

Introduction: The accurate identification of tissue electron densities is of great importance for Monte Carlo (MC) dose calculations. When converting patient CT data into a voxelised format suitable for MC simulations, however, it is common to simplify the assignment of electron densities so that the complex tissues existing in the human body are categorised into a few basic types. This study examines the effects that the assignment of tissue types and the calculation of densities can have on the results of MC simulations, for the particular case of a Siemens Sensation 4 CT scanner located in a radiotherapy centre where QA measurements are routinely made using 11 tissue types (plus air). Methods: DOSXYZnrc phantoms are generated from CT data, using the CTCREATE user code, with the relationship between Hounsfield units (HU) and density determined via linear interpolation between a series of specified points on the 'CT-density ramp' (see Figure 1(a)). Tissue types are assigned according to HU ranges. Each voxel in the DOSXYZnrc phantom therefore has an electron density (electrons/cm3) defined by the product of the mass density (from the HU conversion) and the intrinsic electron density (electrons/gram) (from the material assignment). In this study, we consider the problems of density conversion and material identification separately: the CT-density ramp is simplified by decreasing the number of points which define it from 12 down to 8, 3 and 2; and the material-type assignment is varied by defining the materials which comprise our test phantom (a Supertech head) as two tissues and bone, two plastics and bone, water only and (as an extreme case) lead only. The effect of these parameters on radiological thickness maps derived from simulated portal images is investigated. Results & Discussion: Increasing the degree of simplification of the CT-density ramp has an increasing effect on the radiological thickness calculated for the Supertech head phantom. For instance, defining the CT-density ramp using 8 points, instead of 12, results in a maximum radiological thickness change of 0.2 cm, whereas defining the CT-density ramp using only 2 points results in a maximum radiological thickness change of 11.2 cm. Changing the definition of the materials comprising the phantom between water, plastic and tissue results in millimetre-scale changes to the resulting radiological thickness. When the entire phantom is defined as lead, this alteration changes the calculated radiological thickness by a maximum of 9.7 cm. Evidently, the simplification of the CT-density ramp has a greater effect on the resulting radiological thickness map than does the alteration of the assignment of tissue types. Conclusions: It is possible to alter the definitions of the tissue types comprising the phantom (or patient) without substantially altering the results of simulated portal images. However, these images are very sensitive to the accurate identification of the HU-density relationship. When converting data from a patient's CT into a MC simulation phantom, therefore, all possible care should be taken to accurately reproduce the conversion between HU and mass density, for the specific CT scanner used. Acknowledgements: This work is funded by the NHMRC, through a project grant, and supported by the Queensland University of Technology (QUT) and the Royal Brisbane and Women's Hospital (RBWH), Brisbane, Australia.
The authors are grateful to the staff of the RBWH, especially Darren Cassidy, for assistance in obtaining the phantom CT data used in this study. The authors also wish to thank Cathy Hargrave, of QUT, for assistance in formatting the CT data, using the Pinnacle TPS. Computational resources and services used in this work were provided by the HPC and Research Support Group, QUT, Brisbane, Australia.
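As a concrete illustration of the conversion described in the Methods, the sketch below (Python; not code from the study) assigns a voxel a material and an electron density from its Hounsfield value, via linear interpolation along a piecewise-linear CT-density ramp and simple HU ranges. The ramp points, HU bounds and electrons-per-gram values are illustrative placeholders, not the 12-point calibration of the Siemens scanner examined.

```python
# HU -> (material, electron density) conversion -- illustrative sketch.
import numpy as np

# Illustrative "CT-density ramp": (HU, mass density in g/cm^3) points.
RAMP_HU      = [-1000.0, 0.0, 1000.0, 3000.0]
RAMP_DENSITY = [0.001,   1.0, 1.6,    2.9]

# Illustrative material bins: (upper HU bound, name, electrons per gram).
MATERIALS = [(-950.0, "AIR",   3.01e23),
             ( 100.0, "WATER", 3.34e23),
             (3000.0, "BONE",  3.19e23)]

def voxel_electron_density(hu):
    """Electron density (electrons/cm^3) as the product of the mass density
    (interpolated from the ramp) and the intrinsic electron density
    (electrons/gram, from the material assignment)."""
    mass_density = float(np.interp(hu, RAMP_HU, RAMP_DENSITY))
    for upper_hu, name, electrons_per_gram in MATERIALS:
        if hu <= upper_hu:
            return name, mass_density * electrons_per_gram
    raise ValueError(f"HU value {hu} is above the calibrated range")
```

Coarsening the ramp in this sketch (e.g. dropping intermediate points) reproduces, in miniature, the simplification whose dosimetric effect the study quantifies.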

Relevance:

10.00%

Publisher:

Abstract:

Introduction: The use of amorphous-silicon electronic portal imaging devices (a-Si EPIDs) for dosimetry is complicated by the effects of scattered radiation. In photon radiotherapy, the primary signal at the detector can be accompanied by photons scattered from linear accelerator components, detector materials, intervening air, treatment room surfaces (floor, walls, etc.) and from the patient/phantom being irradiated. Consequently, EPID measurements which presume to take scatter into account are highly sensitive to the identification of these contributions. One example of this susceptibility is the process of calibrating an EPID for use as a gauge of (radiological) thickness, where specific allowance must be made for the effect of phantom scatter on the intensity of radiation measured through different thicknesses of phantom. This is usually done via a theoretical calculation which assumes that phantom scatter is linearly related to thickness and field size. We have, however, undertaken a more detailed study of the scattering effects of fields of different dimensions when applied to phantoms of various thicknesses, in order to derive scatter-to-primary ratios (SPRs) directly from simulation results. This allows us to make a more accurate calibration of the EPID, and to assess the validity of the theoretical SPR calculations. Methods: This study uses a full MC model of the entire linac-phantom-detector system simulated using the EGSnrc/BEAMnrc codes. The Elekta linac and EPID are modelled according to specifications from the manufacturer, and the intervening phantoms are modelled as rectilinear blocks of water or plastic, with their densities set to a range of physically realistic and unrealistic values. Transmissions through these various phantoms are calculated using the dose detected in the model EPID and used in an evaluation of the field-size dependence of SPR, in different media, applying a method suggested for experimental systems by Swindell and Evans [1]. These results are compared firstly with SPRs calculated using the theoretical, linear relationship between SPR and irradiated volume, and secondly with SPRs evaluated from our own experimental data. An alternative evaluation of the SPR in each simulated system is also made by modifying the BEAMnrc user code READPHSP to identify and count those particles in a given plane of the system that have undergone a scattering event. In addition to these simulations, which are designed to closely replicate the experimental setup, we also used MC models to examine the effects of varying the setup in experimentally challenging ways (changing the size of the air gap between the phantom and the EPID, changing the longitudinal position of the EPID itself). Experimental measurements used in this study were made using an Elekta Precise linear accelerator, operating at 6 MV, with an Elekta iView GT a-Si EPID. Results and Discussion: 1. Comparison with theory: With the Elekta iView EPID fixed at 160 cm from the photon source, the phantoms, when positioned isocentrically, are located 41 to 55 cm from the surface of the panel. In this geometry, a close but imperfect agreement (differing by up to 5%) can be identified between the results of the simulations and the theoretical calculations. However, this agreement can be totally disrupted by shifting the phantom out of the isocentric position. Evidently, the allowance made for source-phantom-detector geometry by the theoretical expression for SPR is inadequate to describe the effect that phantom proximity can have on measurements made using a (notoriously low-energy-sensitive) a-Si EPID. 2. Comparison with experiment: For various square field sizes and across the range of phantom thicknesses, there is good agreement between simulation data and experimental measurements of the transmissions and the derived values of the primary intensities. However, the values of SPR obtained through these simulations and measurements seem to be much more sensitive to slight differences between the simulated and real systems, leading to difficulties in producing a simulated system which adequately replicates the experimental data. (For instance, small changes to simulated phantom density make large differences to the resulting SPR.) 3. Comparison with direct calculation: By developing a method for directly counting the number of scattered particles reaching the detector after passing through the various isocentric phantom thicknesses, we show that the experimental method discussed above provides a good measure of the actual degree of scattering produced by the phantom. This calculation also permits the analysis of the scattering sources/sinks within the linac and EPID, as well as the phantom and intervening air. Conclusions: This work challenges the assumption that scatter to and within an EPID can be accounted for using a simple, linear model. The simulations discussed here are intended to contribute to a fuller understanding of the contribution of scattered radiation to the EPID images that are used in dosimetry calculations. Acknowledgements: This work is funded by the NHMRC, through a project grant, and supported by the Queensland University of Technology (QUT) and the Royal Brisbane and Women's Hospital, Brisbane, Australia. The authors are also grateful to Elekta for the provision of manufacturing specifications which permitted the detailed simulation of their linear accelerators and amorphous-silicon electronic portal imaging devices. Computational resources and services used in this work were provided by the HPC and Research Support Group, QUT, Brisbane, Australia.
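The field-size approach attributed to Swindell and Evans can be made concrete in a few lines: if, as in the theoretical model above, the scattered signal is assumed linear in the irradiated area, then extrapolating measured transmissions to zero field size isolates the primary component, and SPRs follow directly. The sketch below (Python) uses invented numbers purely for illustration.

```python
# SPR from field-size-dependent transmission -- illustrative sketch.
import numpy as np

field_areas  = np.array([25.0, 100.0, 225.0, 400.0])   # cm^2 (5x5..20x20 fields)
transmission = np.array([0.412, 0.431, 0.455, 0.482])  # EPID signal ratios

# Assume T(A) = T_primary + k * A; the zero-area intercept is scatter-free.
k, t_primary = np.polyfit(field_areas, transmission, 1)

# Scatter-to-primary ratio at each field size.
spr = transmission / t_primary - 1.0
for area, s in zip(field_areas, spr):
    print(f"field {area:6.1f} cm^2: SPR = {s:.3f}")
```

The simulations described above test exactly this linearity assumption, and the direct particle-counting method provides an independent check on the SPR values it yields.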

Relevance:

10.00%

Publisher:

Abstract:

Introduction: The motivation for developing megavoltage (and kilovoltage) cone beam CT (MV CBCT) capabilities in the radiotherapy treatment room was primarily based on the need to improve patient set-up accuracy. There has recently been an interest in using the cone beam CT data for treatment planning. Accurate treatment planning, however, requires knowledge of the electron density of the tissues receiving radiation, in order to calculate dose distributions. This is obtained from CT, utilising a conversion between CT number and the electron density of various tissues. The use of MV CBCT has particular advantages compared to treatment planning with kilovoltage CT in the presence of high atomic number materials, and requires the conversion of pixel values from the image sets to electron density. Therefore, a study was undertaken to characterise the pixel value to electron density (ED) relationship for the Siemens MV CBCT system, MVision, and to determine the effect, if any, of varying the number of monitor units (MU) used for acquisition. If a significant difference with the number of monitor units were seen, then separate pixel value to ED conversions might be required for each of the clinical settings. The calibration of the MV CBCT images for electron density offers the possibility of a daily recalculation of the dose distribution and the introduction of new adaptive radiotherapy treatment strategies. Methods: A Gammex Electron Density CT Phantom was imaged with the MV CBCT system. The pixel value for each of the sixteen inserts, which ranged from 0.292 to 1.707 in electron density relative to the background solid water, was determined by taking the mean value from within a region of interest centred on the insert, over 5 slices within the centre of the phantom. These results were averaged and plotted against the relative electron densities of each insert, and a linear least squares fit was performed. This procedure was performed for images acquired with 5, 8, 15 and 60 monitor units. Results: The linear relationship between MV CBCT pixel value and ED was demonstrated for all monitor unit settings and over a range of electron densities. The number of monitor units utilised was found to have no significant impact on this relationship. Discussion: It was found that the number of MU utilised does not significantly alter the pixel value obtained for different ED materials. However, to ensure the most accurate and reproducible pixel value to ED calibration, one MU setting should be chosen and used routinely. To ensure accuracy in the clinical situation, this MU setting should correspond to that which is used clinically. If more than one MU setting is used clinically, then an average of the CT values acquired with different numbers of MU could be utilised without loss of accuracy. Conclusions: No significant differences have been shown between the pixel value to ED conversions for the Siemens MV CBCT unit with change in monitor units. Thus a single conversion curve could be utilised for MV CBCT treatment planning. To fully utilise MV CBCT imaging for radiotherapy treatment planning, further work will be undertaken to ensure all corrections have been made and dose calculations verified. These dose calculations may be either for treatment planning purposes or for reconstructing the delivered dose distribution from transit dosimetry measurements made using electronic portal imaging devices. This will potentially allow the cumulative dose distribution to be determined through the patient's multi-fraction treatment, and adaptive treatment strategies developed to optimise the tumour response.
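The calibration at the heart of the Methods reduces to a one-dimensional linear least squares fit of relative ED against mean ROI pixel value. A minimal sketch follows (Python); the pixel values are invented for illustration, not the reported Gammex measurements.

```python
# Pixel value -> relative electron density calibration -- illustrative sketch.
import numpy as np

relative_ed = np.array([0.292, 0.48, 0.95, 1.0, 1.28, 1.707])        # known inserts
mean_pixel  = np.array([105.0, 310.0, 820.0, 870.0, 1150.0, 1590.0])  # ROI means

# Linear least squares fit: ED = slope * pixel + intercept.
slope, intercept = np.polyfit(mean_pixel, relative_ed, 1)

def pixel_to_ed(pixel_value):
    """Convert an MV CBCT pixel value to relative electron density."""
    return slope * pixel_value + intercept

print(pixel_to_ed(1000.0))
```

Repeating such a fit for images acquired at 5, 8, 15 and 60 MU, and comparing the fitted lines, is what establishes whether a single conversion curve suffices.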

Relevance:

10.00%

Publisher:

Abstract:

The latest paradigm shift in government, termed Transformational Government, puts the citizen at the centre of attention. Including citizens in the design of online one-stop portals can help governmental organisations to become more customer-focussed. This study describes the initial efforts of an Australian state government to develop an information architecture to structure the content of their future one-stop portal. To this end, card sorting exercises have been conducted and analysed, utilising contemporary approaches found in academic and non-scientific literature. This paper describes the findings of the card sorting exercises in this particular case and discusses the suitability of the applied approaches in general. These are distinguished into non-statistical, statistical, and hybrid approaches. Thus, on the one hand, this paper contributes to academia by describing the application of different card sorting approaches and discussing their strengths and weaknesses. On the other hand, this paper contributes to practice by explaining the approach that has been taken by the authors' research partner in order to develop a customer-focussed governmental one-stop portal. Thus, it provides decision support for practitioners with regard to the different analysis methods that can be used to complement recent approaches in Transformational Government.
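As one example of the statistical approaches mentioned, open card sorting results are commonly analysed by clustering a card-by-card co-occurrence matrix: cards that participants often place in the same group end up close together and form candidate portal categories. The sketch below (Python/SciPy) shows this with invented cards and sorts; the paper does not prescribe this exact procedure.

```python
# Hierarchical clustering of card-sort co-occurrence -- illustrative sketch.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

cards = ["renew licence", "pay fine", "school enrolment", "report pothole"]
# One group label per card, per participant.
sorts = [[0, 0, 1, 2],
         [0, 0, 1, 1],
         [0, 1, 2, 2]]

n = len(cards)
co = np.zeros((n, n))
for groups in sorts:
    for i in range(n):
        for j in range(n):
            co[i, j] += groups[i] == groups[j]

distance = 1.0 - co / len(sorts)   # frequently co-sorted cards are "close"
tree = linkage(squareform(distance), method="average")
print(fcluster(tree, t=2, criterion="maxclust"))  # candidate categories
```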

Relevance:

10.00%

Publisher:

Abstract:

This study presents a segmentation pipeline that fuses colour and depth information to automatically separate objects of interest in video sequences captured from a quadcopter. Many approaches assume that cameras are static with known position, a condition which cannot be preserved in most outdoor robotic applications. In this study, the authors compute depth information and camera positions from a monocular video sequence using structure from motion and use this information as an additional cue to colour for accurate segmentation. The authors model the problem similarly to standard segmentation routines as a Markov random field and perform the segmentation using graph cuts optimisation. Manual intervention is minimised and is only required to determine pixel seeds in the first frame which are then automatically reprojected into the remaining frames of the sequence. The authors also describe an automated method to adjust the relative weights for colour and depth according to their discriminative properties in each frame. Experimental results are presented for two video sequences captured using a quadcopter. The quality of the segmentation is compared to a ground truth and other state-of-the-art methods with consistently accurate results.
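To make the energy formulation concrete, the sketch below sets up a two-label Markov random field whose unary terms fuse colour and depth log-likelihoods with per-frame weights, and solves it with graph cuts. It assumes the PyMaxflow package for the min-cut step, and the likelihood maps and weights stand in for the authors' learned models; it is an illustration of the formulation, not their implementation.

```python
# Colour+depth MRF segmentation via graph cuts -- illustrative sketch.
# Requires the PyMaxflow package (pip install PyMaxflow).
import maxflow

def segment(fg_colour_ll, bg_colour_ll, fg_depth_ll, bg_depth_ll,
            w_colour=0.7, w_depth=0.3, smoothness=1.0):
    """All *_ll arguments are per-pixel log-likelihood maps (2D arrays);
    w_colour/w_depth are the relative cue weights for this frame."""
    # Fused unary (data) terms: weighted combination of the two cues.
    fg_cost = -(w_colour * fg_colour_ll + w_depth * fg_depth_ll)
    bg_cost = -(w_colour * bg_colour_ll + w_depth * bg_depth_ll)

    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(fg_cost.shape)
    g.add_grid_edges(nodes, weights=smoothness)  # Potts smoothness term
    g.add_grid_tedges(nodes, fg_cost, bg_cost)   # data terms
    g.maxflow()
    # Boolean label map; which side is "foreground" follows the tedge order.
    return g.get_grid_segments(nodes)
```

Raising w_depth in frames where depth is the more discriminative cue, as the automated weighting described above does, simply shifts the balance of the unary terms.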

Relevance:

10.00%

Publisher:

Abstract:

Achieving a robust, accurately scaled pose estimate in long-range stereo presents significant challenges. For large scene depths, triangulation from a single stereo pair is inadequate and noisy. Additionally, vibration and flexible rigs in airborne applications mean accurate calibrations are often compromised. This paper presents a technique for accurately initializing a long-range stereo VO algorithm at large scene depth, with accurate scale, without explicitly computing structure from rigidly fixed camera pairs. By performing a monocular pose estimate over a window of frames from a single camera, followed by adding the secondary camera frames in a modified bundle adjustment, an accurate, metrically scaled pose estimate can be found. To achieve this, the scale of the stereo pair is included in the optimization as an additional parameter. Results are presented on both simulated and field-gathered data from a fixed-wing UAV flying at significant altitude, where the epipolar geometry is inaccurate due to structural deformation and triangulation from a single pair is insufficient. Comparisons are made with more conventional VO techniques where the scale is not explicitly optimized, and robustness is demonstrated over repeated trials.
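The central design choice, treating the stereo scale as an explicit optimisation parameter rather than fixing it by triangulation, can be sketched in a toy form. The residual below reprojects monocularly reconstructed landmarks into the secondary camera after scaling the baseline; it ignores the inter-camera rotation and the full pose/structure parameters that the paper's modified bundle adjustment would also refine, so it is illustrative only.

```python
# Optimising stereo scale as an explicit parameter -- illustrative sketch.
import numpy as np
from scipy.optimize import least_squares

def scale_residuals(params, points, obs_uv, K, baseline_dir):
    """params = [log_scale]. points: (N, 3) landmarks in the primary camera
    frame (up-to-scale monocular reconstruction); obs_uv: (N, 2) matching
    pixel observations in the secondary camera; K: 3x3 intrinsics."""
    scale = np.exp(params[0])             # parametrise to keep scale > 0
    pts = points - scale * baseline_dir   # move into the secondary frame
    proj = (K @ pts.T).T
    uv = proj[:, :2] / proj[:, 2:3]       # perspective division
    return (uv - obs_uv).ravel()          # reprojection error

# Usage: start at scale = 1 and refine against the secondary-camera matches.
# fit = least_squares(scale_residuals, x0=[0.0],
#                     args=(points, obs_uv, K, baseline_dir))
# scale = float(np.exp(fit.x[0]))
```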

Relevance:

10.00%

Publisher:

Abstract:

Application of "advanced analysis" methods suitable for non-linear analysis and design of steel frame structures permits direct and accurate determination of ultimate system strengths, without resort to simplified elastic methods of analysis and semi-empirical specification equations. However, the application of advanced analysis methods has previously been restricted to steel frames comprising only compact sections that are not influenced by the effects of local buckling. A refined plastic hinge method suitable for practical advanced analysis of steel frame structures comprising non-compact sections is presented in a companion paper. The method implicitly accounts for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. The accuracy and precision of the method for the analysis of steel frames comprising non-compact sections is established in this paper by comparison with a comprehensive range of analytical benchmark frame solutions. The refined plastic hinge method is shown to be more accurate and precise than the conventional individual member design methods based on elastic analysis and specification equations.

Relevance:

10.00%

Publisher:

Abstract:

Application of "advanced analysis" methods suitable for non-linear analysis and design of steel frame structures permits direct and accurate determination of ultimate system strengths, without resort to simplified elastic methods of analysis and semi-empirical specification equations. However, the application of advanced analysis methods has previously been restricted to steel frames comprising only compact sections that are not influenced by the effects of local buckling. A research project has been conducted with the aim of developing concentrated plasticity methods suitable for practical advanced analysis of steel frame structures comprising non-compact sections. This paper contains a comprehensive set of analytical benchmark solutions for steel frames comprising non-compact sections, which can be used to verify the accuracy of simplified concentrated plasticity methods of advanced analysis. The analytical benchmark solutions were obtained using a distributed plasticity shell finite element model that explicitly accounts for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. A brief description and verification of the shell finite element model is provided in this paper.

Relevance:

10.00%

Publisher:

Abstract:

This work is motivated by the need to efficiently machine the edges of ophthalmic polymer lenses for mounting in spectacle or instrument frames. The polymer materials used are required to have suitable optical characteristics, such as high refractive index and Abbe number, combined with low density and high scratch and impact resistance. Edge surface finish is an important aesthetic consideration; its quality is governed by the material removal operation and the physical properties of the material being processed. The wear behaviour of polymer materials is not as straightforward as that of other materials, due to their molecular and structural complexity, not to mention their time-dependent properties. Four commercial ophthalmic polymers have been studied in this work using nanoindentation techniques, which are evaluated as tools for probing surface mechanical properties in order to better understand the grinding response of polymer materials.

Relevance:

10.00%

Publisher:

Abstract:

In late 2007, newly elected Prime Minister Kevin Rudd placed education reform on centre stage as a key policy in the Labor Party's agenda for social reform in Australia. A major policy strategy within this 'Education Revolution' was the development of a national curriculum, the Australian Curriculum. Within this political context, this study is an investigation into how social justice and equity have been used in political speeches to justify the need for, and the nature of, Australia's first official national curriculum. The aim is to provide understandings of what is said or not said; who is included or excluded, represented or misrepresented; for what purpose; and for whose benefit. The study investigates political speeches made by Education Ministers between 2008 and 2010; that is, from the inception of the Australian Curriculum to the release of the Phase 1 F-10 draft curriculum documents in English, mathematics, science and history. Curriculum development is defined here as an ongoing process of complex conversations. To contextualise the process of curriculum development within Australia, the thesis commences with an initial review of curriculum development in this nation over the past three decades. It then frames this review within contemporary curriculum theory; in particular, it calls upon the work of William Pinar and the key notions of currere and reconceptualised curriculum. This contextualisation work is then used as a foundation to examine how social justice and equity have been represented in political speeches delivered by the respective Education Ministers, Julia Gillard and Peter Garrett, at key junctures of Australian Curriculum document releases. A critical thematic policy analysis is the approach used to examine selected official speech transcripts released by the ministerial media centre through the DEEWR website. This approach provides a way to gain insights into, and understandings of, representations of social justice and equity issues in the policy agenda. Broader social implications are also discussed. The project develops an analytic framework that enables an investigation into the framing of social justice and equity issues, such as inclusion, equality, quality education, sharing of resources and access to learning opportunities, in political speeches aligned with the development of the Australian Curriculum. Through this analysis, the study adopts a focus on constructions of educationally disadvantaged students and how the solutions of 'fixing' teachers and providing the 'right' curriculum are presented as resolutions to the perceived problem. In this way, it aims to offer insights into political justifications for a national curriculum in Australia from a social justice perspective.

Relevance:

10.00%

Publisher:

Abstract:

This paper investigates the dimensions of time and effort and how these factors impact on the venture creation process in Australia. Findings of interest in this paper include:
• First and foremost, business planning is used as a thinking tool within the boundaries of the firm.
• Business planning is more strongly associated with firms that are more ambitious, are being built by teams, or draw on more experience or education.
• Industry characteristics partly drive the duration required for venture creation: manufacturing- and agriculture-based firms seemingly take longer than firms in other industries.
• Extended venture creation time frames are also associated with firms that aim to bring more novel concepts to market.

Relevance:

10.00%

Publisher:

Abstract:

Service bundles, in the context of e-government, are used to group together services that relate to a certain citizen need. These bundles can then be presented on a governmental one-stop portal to structure the available service offerings according to citizen expectations. In order to ensure that citizens utilise the one-stop portal and the service bundles it comprises for future transactions, the quality of these service bundles needs to be managed and maximised accordingly. Consequently, models and tools that focus on assessing service bundle quality play an important role when it comes to increasing or retaining the usage behaviour of citizens. This study provides a rigorous and structured literature review of e-government research outlets with regard to their coverage of service bundle quality and e-service quality themes. The study contributes to academia and practice by providing a framework that allows structuring and classifying existing studies relevant to the assessment of quality for government portals. Furthermore, this study provides insights into the status quo of quality models that can be used by governments to assess the quality of their service bundles. Directions for future research and limitations of the present study are provided as well.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we describe a method to represent and discover adversarial group behavior in a continuous domain. In comparison to other types of behavior, adversarial behavior is heavily structured, as the location of a player (or agent) depends both on their teammates and adversaries, in addition to the tactics or strategies of the team. We present a method which can exploit this relationship through the use of a spatiotemporal basis model. As players constantly change roles during a match, we show that employing a "role-based" representation, instead of one based on player "identity", can best exploit the playing structure. As vision-based systems currently do not provide perfect detection/tracking (e.g. missed or false detections), we show that our compact representation can effectively "denoise" erroneous detections as well as enable temporal analysis, which was previously prohibitive due to the dimensionality of the signal. To evaluate our approach, we used a fully instrumented field-hockey pitch with 8 fixed high-definition (HD) cameras, evaluated our approach on approximately 200,000 frames of data from a state-of-the-art real-time player detector, and compared it to manually labelled data.
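A simple instance of the "role-based" representation is to assign the detections in each frame to a fixed set of roles by solving an assignment problem, so that row r of the representation always holds whoever is currently playing role r rather than a fixed identity. The sketch below (Python/SciPy) uses the Hungarian algorithm with a squared-distance cost against illustrative role prototypes; the paper's spatiotemporal basis model is more elaborate than this.

```python
# Per-frame role assignment via the Hungarian algorithm -- illustrative sketch.
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_roles(detections, role_prototypes):
    """detections: (N, 2) player positions in this frame;
    role_prototypes: (N, 2) expected position of each role.
    Returns the detections reordered so that row r corresponds to role r."""
    # Cost of assigning detection i to role j: squared Euclidean distance.
    cost = ((detections[:, None, :] - role_prototypes[None, :, :]) ** 2).sum(-1)
    det_idx, role_idx = linear_sum_assignment(cost)
    ordered = np.empty_like(detections)
    ordered[role_idx] = detections[det_idx]
    return ordered
```

Stacking these role-ordered frames yields a low-dimensional, temporally consistent signal on which denoising and temporal analysis become tractable.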

Relevance:

10.00%

Publisher:

Abstract:

This research project frames an emerging field – fashion curation – through a theoretical, historical, and practical enquiry. Recent decades have seen fashion curation grow rapidly as a form of praxis and an area of academic attention, predominantly in museums and universities. Within this context, two major models for conceptualising the role of the fashion curator have emerged: the institutional and the independent curator. This project proposes and applies a third model: the adjunct fashion curator. In developing this model my project seeks to move the growing dialogue around fashion curation away from exclusively focusing on the museum. By proposing a third curatorial model for fashion, this research draws on the past of fashion display and exhibition for its context, while simultaneously exploring the adjunct model through my curatorial practice. The impact of sites of display, the role of gender, and the relationship between art and fashion are explored as pivotal themes in the development of fashion curation and thus provide contextual grounding for the proposal of the adjunct curatorial model. Alongside a theoretical and historical account of fashion curation, I conduct a practice-led inquiry that explores these themes through five exhibition projects and one photographic series. I argue that the introduction and application of the adjunct model enables curatorial practitioners to sensitively work around the dominant museum model, and circumvent the divide between institutional and independent curation. Introducing the adjunct model allows the curator to develop personalised narratives relating to the experience of fashion and clothing as an exhibited phenomenon in a variety of institutional and non-institutional sites. Hence this research project contributes to a developing field by proposing a valuable and nuanced model for fashion curation.