519 results for optimisation methods
Abstract:
Introduction The dose to the skin surface is an important factor for many radiotherapy treatment techniques. It is known that TPS-predicted surface doses can differ significantly from actual ICRP skin doses as defined at 70 μm. A number of methods have been implemented for the accurate determination of surface dose, including the use of specific dosimeters such as TLDs and radiochromic film, as well as Monte Carlo calculations. Stereotactic radiosurgery involves delivering very high doses per treatment fraction using small X-ray fields. To date, there has been limited data on surface doses for these very small field sizes. The purpose of this work is to evaluate surface doses by both measurements and Monte Carlo calculations for very small field sizes. Methods All measurements were performed on a Novalis Tx linear accelerator which has a 6 MV SRS X-ray beam mode using a special thin flattening filter. Beam collimation was achieved by circular cones with apertures giving field sizes ranging from 4 to 30 mm at the isocentre. Relative surface doses were measured using Gafchromic EBT3 film, which has its active layer at a depth similar to the ICRP skin dose depth. Monte Carlo calculations were performed using the BEAMnrc/EGSnrc Monte Carlo codes (V4 r225). The specifications of the linear accelerator, including the collimator, were provided by the manufacturer. Optimisation of the incident X-ray beam was achieved by iterative adjustment of the energy, spatial distribution and radial spread of the incident electron beam striking the target. The energy cutoff parameters were PCUT = 0.01 MeV and ECUT = 0.700 MeV. Directional bremsstrahlung splitting was switched on for all BEAMnrc calculations. Relative surface doses were determined in a layer defined in a water phantom with the same thickness and depth as the active layer in the film. Results Measured surface doses using the EBT3 film varied between 13% and 16% for the different cones, with an uncertainty of 3%. Monte Carlo calculated surface doses agreed with the measured doses to better than 2% for all treatment cones. Discussion and conclusions This work has shown that surface dose measurements using EBT3 film are consistent with Monte Carlo predicted values within the uncertainty of the measurements. As such, EBT3 film is recommended for in vivo surface dose measurements.
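The iterative source-tuning step described in this abstract can be sketched as a simple parameter search. The Python snippet below is a minimal illustration only: `toy_pdd` is a hypothetical stand-in for a full BEAMnrc/EGSnrc run, and all energies, spreads and depths are invented numbers, not values from the study.

```python
import itertools
import math

def toy_pdd(energy_mev, radial_spread_mm, depths_cm):
    """Toy stand-in for a BEAMnrc/EGSnrc depth-dose calculation. A real
    implementation would write an input file, run the Monte Carlo code and
    score dose in a thin surface layer of a water phantom."""
    dmax = 0.2 * energy_mev + 0.1 * radial_spread_mm  # fake buildup depth
    return [math.exp(-((d - dmax) ** 2)) for d in depths_cm]

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

depths = [0.007, 0.5, 1.0, 1.5, 2.0, 3.0]   # 70 um skin depth plus buildup
measured = toy_pdd(6.0, 1.0, depths)        # pretend these were measured

# Iterative source tuning: scan candidate electron-beam energies and radial
# spreads, keep the pair whose simulated curve best matches measurement.
best = min(
    itertools.product([5.8, 5.9, 6.0, 6.1, 6.2], [0.5, 1.0, 1.5]),
    key=lambda p: rmse(toy_pdd(p[0], p[1], depths), measured),
)
print("best (energy MeV, radial spread mm):", best)
```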
Abstract:
In 2009, BJSM's first editorial argued that ‘Physical inactivity is the greatest public health problem of the 21st century’.1 The data supporting that claim have not yet been challenged. Now, 5 years after BJSM published its first dedicated ‘Physical Activity is Medicine’ theme issue (http://bjsm.bmj.com/content/43/1.toc), we are pleased to highlight 23 new contributions from six countries. This issue contains an analysis of the cost of physical inactivity from the US Centers for Disease Control and Prevention.2 We also report the cost-effectiveness of one particular physical activity intervention for adults.3
Abstract:
In this paper, we present fully Bayesian experimental designs for nonlinear mixed effects models, developing simulation-based optimal design methods that search over both continuous and discrete design spaces. Although Bayesian inference has commonly been performed on nonlinear mixed effects models, there is a lack of research into Bayesian optimal design for nonlinear mixed effects models where searches must be performed over several design variables. This is likely because optimal experimental design for nonlinear mixed effects models is much more computationally intensive than Bayesian inference for such models. In this paper, the design problem is to determine the optimal number of subjects and samples per subject, as well as the (near) optimal urine sampling times, for a population pharmacokinetic study in horses, so that the population pharmacokinetic parameters can be precisely estimated subject to cost constraints. The optimal sampling strategies, in terms of the number of subjects and the number of samples per subject, were found to differ substantially between the examples considered in this work, which highlights that such designs are problem-dependent and require optimisation using the methods presented in this paper.
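A minimal sketch of this kind of simulation-based search over a discrete, cost-constrained design space is given below. Everything in it is an assumption for illustration: the one-compartment model C(t) = (DOSE/V)·exp(−kt), the lognormal prior, the cost figures and the fixed-effects D-optimality surrogate all stand in for the paper's actual mixed effects model and utility.

```python
import numpy as np

rng = np.random.default_rng(0)
DOSE, SIGMA = 100.0, 0.5          # invented dose and residual SD

def logdet_info(times, V, k):
    """log-det Fisher information of C(t) = (DOSE/V)*exp(-k*t) in (V, k);
    a deliberately simplified fixed-effects surrogate for the full model."""
    t = np.asarray(times)
    e = np.exp(-k * t)
    J = np.column_stack([-DOSE / V**2 * e, -DOSE / V * t * e])
    sign, ld = np.linalg.slogdet(J.T @ J / SIGMA**2)
    return ld if sign > 0 else -np.inf

def expected_utility(n_subjects, times, n_prior=200):
    """Prior-averaged (pseudo-Bayesian) D-optimality utility."""
    draws = rng.lognormal([np.log(10.0), np.log(0.3)], 0.2, (n_prior, 2))
    return np.mean([n_subjects * logdet_info(times, V, k) for V, k in draws])

# Discrete design space with a cost constraint:
# 50 units per subject plus 5 per sample, budget 600 (all numbers invented).
designs = [(n, tuple(np.linspace(0.5, 24.0, m)))
           for n in (5, 10, 15) for m in (2, 4, 6)
           if 50 * n + 5 * n * m <= 600]
best = max(designs, key=lambda d: expected_utility(*d))
print("best (subjects, sampling times):", best)
```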
Abstract:
1. Biodiversity, water quality and ecosystem processes in streams are known to be influenced by the terrestrial landscape over a range of spatial and temporal scales. Lumped attributes (i.e. per cent land use) are often used to characterise the condition of the catchment; however, they are not spatially explicit and do not account for the disproportionate influence of land located near the stream or connected to it by overland flow.
2. We compared seven landscape representation metrics to determine whether accounting for the spatial proximity and hydrological effects of land use can explain additional variability in indicators of stream ecosystem health. The landscape metrics included a lumped metric, four inverse-distance-weighted (IDW) metrics based on distance to the stream or to the survey site, and two modified IDW metrics that also accounted for the level of hydrologic activity (HA-IDW). Ecosystem health data were obtained from the Ecological Health Monitoring Programme in Southeast Queensland, Australia, and included measures of fish, invertebrates, physicochemistry and nutrients collected during two seasons over 4 years. Linear models were fitted to the stream indicators and landscape metrics, by season, and compared using an information-theoretic approach.
3. Although no single metric was most suitable for modelling all stream indicators, lumped metrics rarely performed as well as the other metric types. Metrics based on proximity to the stream (IDW and HA-IDW) were more suitable for modelling fish indicators, while the HA-IDW metric based on proximity to the survey site generally outperformed the others for invertebrates, irrespective of season. There was consistent support for metrics based on proximity to the survey site (IDW or HA-IDW) for all physicochemical indicators during the dry season, while a HA-IDW metric based on proximity to the stream was suitable for five of the six physicochemical indicators in the post-wet season. Only one nutrient indicator was tested, and results showed that catchment area had a significant effect on the relationship between land-use metrics and algal stable isotope ratios in both seasons.
4. Spatially explicit methods of landscape representation can clearly improve the predictive ability of many empirical models currently used to study the relationship between landscape, habitat and stream condition. A comparison of different metrics may provide clues about causal pathways and mechanistic processes behind correlative relationships and could be used to target restoration efforts strategically.
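As a concrete illustration of the difference between the metric families compared here, the short sketch below computes a lumped, an IDW and an HA-IDW land-use metric for a toy set of catchment cells. The weighting function and every number are invented for illustration, not the study's actual formulation.

```python
import numpy as np

# Toy catchment: each cell has a land-use flag (1 = intensive use) and a
# distance (m) to the stream network; all values are invented.
land_use = np.array([1, 0, 1, 1, 0, 0, 1])
dist_to_stream = np.array([30.0, 120.0, 450.0, 60.0, 800.0, 15.0, 300.0])

# Lumped metric: plain per-cent land use, ignoring where the cells sit.
lumped = land_use.mean() * 100

# IDW metric: cells nearer the stream get proportionally more weight
# (one possible weight choice; the paper compares several variants).
w = 1.0 / (dist_to_stream + 1.0)   # +1 avoids division by zero at the bank
idw = (w @ land_use) / w.sum() * 100

# HA-IDW variant: additionally scale weights by a hydrologic-activity index
# (e.g. how often a cell is saturated and connected by overland flow).
hydro_activity = np.array([0.9, 0.4, 0.1, 0.8, 0.05, 1.0, 0.2])  # invented
w_ha = w * hydro_activity
ha_idw = (w_ha @ land_use) / w_ha.sum() * 100

print(f"lumped {lumped:.1f}%  IDW {idw:.1f}%  HA-IDW {ha_idw:.1f}%")
```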
Abstract:
Social contexts are possible information sources that can foster connections between mobile application users, but they are also minefields of privacy concerns and have great potential for misinterpretation. This research establishes a framework for guiding the design of context-aware mobile social applications from a socio-technical perspective. Agile ridesharing was chosen as the test domain for the research because its success relies upon effectively connecting people through mobile technologies.
Abstract:
Sociological approaches to inquiry on emotion in educational settings are growing. Despite a long tradition of research and theory in disciplines such as psychology and sociology, methods and approaches for the naturalistic investigation of emotion are still in a developmental phase in educational settings. In this article, recent empirical studies on emotion in educational contexts are canvassed. The discussion focuses on the use of multiple methods within research conducted in high school and university classrooms, highlighting recent methodological progress. The methods discussed include facial expression analysis, verbal and non-verbal conduct, and self-report methods. Analyses drawn from different studies, informed by perspectives from microsociology, highlight the strengths and limitations of any one method. The power and limitations of multi-method approaches are discussed.
Abstract:
The oxides of copper (CuxO) are fascinating materials due to their remarkable optical, electrical, thermal and magnetic properties. Nanostructuring of CuxO can further enhance the performance of this important functional material and provide it with unique properties that do not exist in its bulk form. Three distinctly different phases of CuxO, namely CuO, Cu2O and Cu4O3, can be prepared by numerous synthesis techniques including vapour deposition and liquid-phase chemical methods. In this article, we present a review of nanostructured CuxO, focusing on its material properties and methods of synthesis, and give an overview of the various applications that have been associated with nanostructured CuxO.
Abstract:
Design Science is the process of solving ‘wicked problems’ through designing, developing, instantiating and evaluating novel solutions (Hevner, March, Park and Ram, 2004). Wicked problems are characterised by agent finitude in combination with problem complexity and normative constraint (Farrell and Hooker, 2013). In Information Systems Design Science, determining that problems are ‘wicked’ differentiates Design Science research from Solutions Engineering (Winter, 2008) and is a necessary part of proving relevance to Information Systems Design Science research (Hevner, 2007; Iivari, 2007). Problem complexity is characterised by many problem components with nested, dependent and co-dependent relationships interacting through multiple feedback and feed-forward loops. Farrell and Hooker (2013) state specifically that for wicked problems “it will often be impossible to disentangle the consequences of specific actions from those of other co-occurring interactions”. This paper discusses the application of an Enterprise Information Architecture modelling technique to disentangle this wicked-problem complexity for one case. It proposes that such a modelling technique can be applied to other wicked problems, can lay the foundations for proving relevance to DSR, can provide solution pathways for artefact development, and can help substantiate the elements required to produce Design Theory.
Abstract:
Bayesian experimental design is a fast-growing area of research with many real-world applications. As computational power has increased over the years, so has the development of simulation-based design methods, which draw on algorithms such as Markov chain Monte Carlo, sequential Monte Carlo and approximate Bayes methods, allowing more complex design problems to be solved. The Bayesian framework provides a unified approach for incorporating prior information and/or uncertainties regarding the statistical model with a utility function that describes the experimental aims. In this paper, we provide a general overview of the concepts involved in Bayesian experimental design, and focus on describing some of the more commonly used Bayesian utility functions and methods for their estimation, as well as a number of algorithms used to search over the design space to find the Bayesian optimal design. We also discuss other computational strategies for further research in Bayesian optimal design.
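The central quantity in this framework is the expected utility of a design d. In commonly used notation (assumed here, not quoted from the paper), with utility u(d, θ, y), it can be written, together with its basic Monte Carlo estimate, as:

```latex
% Expected utility of design d, and its Monte Carlo approximation from
% draws of the prior and the sampling model:
U(d) = \int_{\mathcal{Y}} \int_{\Theta} u(d,\theta,y)\,
       p(\theta \mid y, d)\, p(y \mid d)\, \mathrm{d}\theta\, \mathrm{d}y
\approx \frac{1}{N} \sum_{i=1}^{N} u\bigl(d, \theta^{(i)}, y^{(i)}\bigr),
\qquad (\theta^{(i)}, y^{(i)}) \sim p(\theta)\, p(y \mid \theta, d).
% The Bayesian optimal design maximises this over the design space:
d^{*} = \operatorname*{arg\,max}_{d \in \mathcal{D}} U(d).
```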
Abstract:
Thin plate spline finite element methods are used to fit a surface to an irregularly scattered dataset [S. Roberts, M. Hegland, and I. Altas. Approximation of a thin plate spline smoother using continuous piecewise polynomial functions. SIAM J. Numer. Anal., 41(1):208--234, 2003]. The computational bottleneck for this algorithm is the solution of large, ill-conditioned systems of linear equations at each step of a generalised cross validation algorithm. Preconditioning techniques are investigated to accelerate the convergence of Krylov subspace methods applied to these systems. The preconditioners under consideration are block diagonal, block triangular and constraint preconditioners [M. Benzi, G. H. Golub, and J. Liesen. Numerical solution of saddle point problems. Acta Numer., 14:1--137, 2005]. The effectiveness of each of these preconditioners is examined on a sample dataset taken from a known surface. From our numerical investigation, constraint preconditioners appear to provide improved convergence for this surface fitting problem compared to block preconditioners.
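A small self-contained sketch of the block-diagonal case is given below. The saddle-point system is synthetic (sizes and entries invented), and SciPy's MINRES stands in for whichever Krylov method one prefers, so this illustrates the preconditioning idea rather than the paper's actual thin plate spline system.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

rng = np.random.default_rng(1)

# Synthetic saddle-point system K = [[A, B.T], [B, 0]], the structure that
# arises from constrained smoothing problems (sizes and values invented).
n, m = 40, 10
A = np.diag(rng.uniform(1.0, 100.0, n))        # ill-conditioned SPD block
B = rng.standard_normal((m, n))
K = np.block([[A, B.T], [B, np.zeros((m, m))]])
b = rng.standard_normal(n + m)

# Block-diagonal preconditioner diag(A, S), with S the Schur complement
# B A^{-1} B^T; one of the three preconditioner families compared above.
S = B @ np.linalg.solve(A, B.T)
Ainv, Sinv = np.linalg.inv(A), np.linalg.inv(S)

def apply_prec(v):
    # Apply diag(A, S)^{-1} blockwise to a vector of length n + m.
    return np.concatenate([Ainv @ v[:n], Sinv @ v[n:]])

M = LinearOperator((n + m, n + m), matvec=apply_prec)
x, info = minres(K, b, M=M)
print("converged" if info == 0 else f"minres flag {info}",
      "| residual:", np.linalg.norm(K @ x - b))
```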
Abstract:
Introduction This investigation aimed to assess the consistency and accuracy of radiation therapists (RTs) performing cone beam computed tomography (CBCT) alignment to fiducial markers (FMs) (CBCTFM) and the soft tissue prostate (CBCTST). Methods Six patients receiving prostate radiation therapy underwent daily CBCTs. Manual alignment of CBCTFM and CBCTST was performed by three RTs. Inter-observer agreement was assessed using a modified Bland–Altman analysis for each alignment method. Clinically acceptable 95% limits of agreement with the mean (LoAmean) were defined as ±2.0 mm for CBCTFM and ±3.0 mm for CBCTST. Differences between CBCTST alignment and the observer-averaged CBCTFM (AvCBCTFM) alignment were analysed. Clinically acceptable 95% LoA were defined as ±3.0 mm for the comparison of CBCTST and AvCBCTFM. Results CBCTFM and CBCTST alignments were performed for 185 images. The CBCTFM 95% LoAmean were within ±2.0 mm in all planes. CBCTST 95% LoAmean were within ±3.0 mm in all planes. Comparison of CBCTST with AvCBCTFM resulted in 95% LoA of −4.9 to 2.6, −1.6 to 2.5 and −4.7 to 1.9 mm in the superior–inferior, left–right and anterior–posterior planes, respectively. Conclusions Significant differences were found between soft tissue alignment and the predicted FM position. FMs are useful in reducing inter-observer variability compared with soft tissue alignment. Consideration needs to be given to margin design when using soft tissue matching due to increased inter-observer variability. This study highlights some of the complexities of soft tissue guidance for prostate radiation therapy.
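The "limits of agreement with the mean" computation can be illustrated in a few lines. The sketch below uses simulated observer shifts rather than the study's data, and the sqrt(k/(k-1)) inflation is one simple published correction for comparing each observer against a mean they contribute to; the authors' exact modification may differ.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy alignment offsets (mm) along one axis: rows = images, columns = the
# three observers; a shared per-image shift plus observer noise (invented).
shifts = rng.normal(0.0, 1.0, (185, 1)) + rng.normal(0.0, 0.5, (185, 3))

k = shifts.shape[1]
diff_from_mean = shifts - shifts.mean(axis=1, keepdims=True)

# SD of observer-minus-mean differences, inflated by sqrt(k/(k-1)) because
# each observer contributes to the mean they are compared against.
sd = diff_from_mean.std(ddof=1) * np.sqrt(k / (k - 1))
loa = 1.96 * sd
print(f"95% limits of agreement with the mean: +/-{loa:.2f} mm")
```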
Abstract:
Purpose To establish whether a passive or an active technique of planning target volume (PTV) definition and treatment for non-small cell lung cancer (NSCLC) delivers the most effective results. This literature review assesses the advantages and disadvantages reported in recent studies of each, while assessing the validity of the two approaches for planning and treatment. Methods A systematic review of the literature on the planning and treatment of radiation therapy for NSCLC tumours. Different approaches published in recent articles are subjected to critical appraisal in order to determine their relative efficacy. Results Free-breathing (FB) is the optimal method for performing planning scans for patients and departments, as it involves no significant increase in cost, workload or education. Maximum intensity projection (MIP) is the fastest form of delineation; however, it is less accurate than the ten-phase overlap approach for computed tomography (CT). Although gating has been shown to reduce margins and facilitate sparing of organs at risk, treatment times can be longer and planning time can be as much as 15 times higher for intensity modulated radiation therapy (IMRT). This raises issues with patient comfort and stabilisation, increasing the chance of geometric miss. Stereotactic treatments can take up to 3 hours to deliver, with corresponding increases in planning and treatment time, as well as the additional hardware, software and training required. Conclusion Four-dimensional computed tomography (4DCT) is superior to 3DCT, and the passive FB approach is optimal for PTV delineation and treatment. Departments should use a combination of MIP with visual confirmation to ensure coverage for stage 1 disease. Stages 2-3 should be delineated using ten phases overlaid. Stereotactic and gated treatments for early-stage disease should be used accordingly; FB-IMRT is optimal for later-stage disease.
Abstract:
The Environmental Kuznets Curve (EKC) hypothesises an inverted U-shaped relationship between a measure of environmental pollution and per capita income. In this study, we apply non-parametric local polynomial regression (local quadratic fitting) to allow more flexibility in local estimation. This study uses a larger and globally representative sample covering many local and global pollutants and natural resources, including Biological Oxygen Demand (BOD) emission, CO2 emission, CO2 damage, energy use, energy depletion, mineral depletion, improved water source, PM10, particulate emission damage, forest area and net forest depletion. Copyright © 2009 Inderscience Enterprises Ltd.
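Local quadratic fitting is compact enough to sketch directly: at each evaluation point a quadratic is fitted by weighted least squares with kernel weights centred there, and the local intercept gives the fitted value. The data below are simulated to mimic an inverted-U relationship; the Gaussian kernel and bandwidth are arbitrary illustrative choices, not those of the study.

```python
import numpy as np

def local_quadratic(x, y, x0, bandwidth):
    """Local quadratic regression: at each point in x0, fit a weighted
    quadratic to (x, y) using Gaussian kernel weights centred at that
    point, and return the fitted value (the local intercept)."""
    fits = []
    for t in np.atleast_1d(x0):
        w = np.exp(-0.5 * ((x - t) / bandwidth) ** 2)
        X = np.column_stack([np.ones_like(x), x - t, (x - t) ** 2])
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        fits.append(beta[0])
    return np.array(fits)

# Toy EKC-style data: pollution rises then falls with per capita income.
rng = np.random.default_rng(3)
income = np.sort(rng.uniform(0.5, 10.0, 300))
pollution = 4 * income - 0.4 * income**2 + rng.normal(0.0, 1.0, 300)

grid = np.linspace(1.0, 9.0, 50)
curve = local_quadratic(income, pollution, grid, bandwidth=1.0)
print("estimated turning-point income:", grid[np.argmax(curve)])
```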
Abstract:
We implemented six different boarding strategies (Wilma, Steffen, Reverse Pyramid, Random, Blocks and By letter) to investigate boarding times for Boeing 777 and Airbus 380 aircraft, and introduced three new boarding methods in search of the optimum boarding strategy. Our models explicitly simulate the behaviour of groups of people travelling together, and we explicitly simulate the time taken to store luggage as part of the boarding process. Results from the simulation demonstrate that the Reverse Pyramid method is the best of the established boarding methods for the Boeing 777, and the Steffen method is the best for the Airbus 380. Among the newly suggested methods, aisle-first boarding is the best strategy for the Boeing 777 and the row-arrangement method is the best for the Airbus 380. Overall, the best boarding strategy is aisle-first boarding for the Boeing 777 and the Steffen method for the Airbus 380.
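A deliberately minimal version of such a boarding simulation is sketched below. It models only aisle blocking and a fixed luggage-stowing delay, ignoring travel groups and within-row seat shuffles, so it illustrates the mechanism rather than reproducing the paper's models; the function names, cabin layout and timings are all invented.

```python
import random

def simulate(order, stow_time=8):
    """Single-aisle toy model: one passenger per aisle cell, one row of
    movement per tick, fixed stowing delay at the passenger's own row."""
    queue = list(order)          # target rows of passengers still outside
    aisle = {}                   # aisle position -> [target_row, stow_left]
    t = 0
    while queue or aisle:
        for pos in sorted(aisle, reverse=True):   # move front-most first
            row, stow = aisle[pos]
            if pos == row:                        # at own row: stow bags
                if stow == 1:
                    del aisle[pos]                # done, sits down
                else:
                    aisle[pos][1] -= 1
            elif pos + 1 not in aisle:            # step forward if free
                aisle[pos + 1] = aisle.pop(pos)
        if queue and 0 not in aisle:              # next passenger enters
            aisle[0] = [queue.pop(0), stow_time]
        t += 1
    return t

rows, per_row = 30, 6
passengers = [r for r in range(1, rows + 1) for _ in range(per_row)]

random_order = random.sample(passengers, len(passengers))
back_to_front = sorted(passengers, reverse=True)
print("random boarding:", simulate(random_order), "ticks")
print("back-to-front boarding:", simulate(back_to_front), "ticks")
```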