Abstract:
The research objectives of this thesis were to contribute to Bayesian statistical methodology, both in risk assessment and in spatial and spatio-temporal modelling, by modelling error structures using complex hierarchical models. Specifically, I hoped to consider two applied areas, and to use these applications as a springboard for developing new statistical methods as well as undertaking analyses which might answer particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The research objective was to model error structures using hierarchical models in two problems: risk assessment analyses for wastewater, and a four-dimensional dataset assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and again to use Bayesian hierarchical models to explore the necessarily complex modelling of four-dimensional agricultural data. The specific objectives of the research were: to develop a method for the calculation of credible intervals for the point estimates of Bayesian networks; to develop a model structure incorporating all the experimental uncertainty associated with various constants, thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day's data from the agricultural dataset in a way that satisfactorily captured the complexities of the data; to build a model for several days' data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations.
This work forms five papers, two of which have been published, two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of that data as needing to be modelled as an 'errors-in-variables' problem [Fuller, 1987]. This illustrated a simple method for the incorporation of experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models were used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and non-structured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so would allow little insight into how the treatment effects vary with depth. Hence, a number of essentially non-parametric approaches were taken to see the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model; the applied contribution was the analysis of moisture over depth and the estimation of the contrast of interest together with its credible intervals.
These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper deals with the fact that with large datasets, the use of WinBUGS becomes more problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days' data, and we show that moisture in the soil for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with variances that increase with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility. However, this approach does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage being analysed as a set of time series. We see this spatio-temporal interaction model as being a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
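The CAR layered model and its block-updating sampler are specific to the thesis, but the block-updating idea itself can be illustrated on a much simpler hierarchical model. The sketch below (my own illustration, not code from the thesis or from pyMCMC; all names are invented) draws the entire vector of group-level means in one vectorised block per iteration, which is precisely the kind of update that avoids WinBUGS-style term-by-term sampling:

```python
import numpy as np

def gibbs_hierarchical(y, groups, n_groups, sigma2=1.0, tau2=1.0,
                       n_iter=2000, burn=500, seed=0):
    """Gibbs sampler for y_ij ~ N(theta_j, sigma2), theta_j ~ N(mu, tau2),
    with a flat prior on mu.  The theta vector is updated as a single
    block, since its components are conditionally independent given mu."""
    rng = np.random.default_rng(seed)
    n_j = np.bincount(groups, minlength=n_groups)
    sum_j = np.bincount(groups, weights=y, minlength=n_groups)
    mu = 0.0
    draws = []
    for it in range(n_iter):
        # Block update of all group means theta_1..theta_J at once.
        prec = n_j / sigma2 + 1.0 / tau2
        mean = (sum_j / sigma2 + mu / tau2) / prec
        theta = rng.normal(mean, np.sqrt(1.0 / prec))
        # Update the hyper-mean mu given theta (flat prior).
        mu = rng.normal(theta.mean(), np.sqrt(tau2 / n_groups))
        if it >= burn:
            draws.append(mu)
    return np.array(draws)
```

In the actual CAR layered model the layer effects are correlated, so the block draw is from a multivariate normal rather than independent normals, but the structure of the sampler is the same.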
Abstract:
The mineral schlossmacherite, (H3O,Ca)Al3(AsO4,PO4,SO4)2(OH)6, a multi-cation, multi-anion mineral of the beudantite mineral subgroup, has been characterised by Raman spectroscopy. The mineral and related minerals function as heavy metal collectors and are often amorphous or poorly crystalline, such that XRD identification is difficult. The Raman spectra are dominated by an intense band at 864 cm-1, assigned to the symmetric stretching mode of the AsO4^3- anion. Raman bands at 809 and 819 cm-1 are assigned to the antisymmetric stretching mode of AsO4^3-. The sulphate anion is characterised by bands at 1000 cm-1 (ν1), and at 1031, 1082 and 1139 cm-1 (ν3). Two sets of bands are observed in the OH stretching region: firstly between 2800 and 3000 cm-1, with bands at 2850, 2868 and 2918 cm-1, and secondly between 3300 and 3600 cm-1, with bands at 3363, 3382, 3410, 3449 and 3537 cm-1. These bands enabled the calculation of hydrogen bond distances, which span a wide range.
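The abstract does not state which correlation was used, but hydrogen bond distances are commonly estimated from OH stretching wavenumbers via Libowitzky's (1999) empirical relation, ν = 3592 − 304×10⁹·exp(−d/0.1321), with ν in cm-1 and the O⋯O distance d in Å. A minimal sketch of the inversion, assuming that correlation applies:

```python
import math

def oo_distance_from_oh_stretch(nu_cm1):
    """Estimate the O...O hydrogen-bond distance (Angstrom) from an OH
    stretching wavenumber (cm^-1) using the empirical correlation
    nu = 3592 - 304e9 * exp(-d / 0.1321)  (Libowitzky, 1999)."""
    if nu_cm1 >= 3592:
        raise ValueError("correlation only valid below 3592 cm^-1")
    return -0.1321 * math.log((3592.0 - nu_cm1) / 304e9)
```

Applied to the reported bands, the low-wavenumber set near 2850 cm-1 maps to short, strong hydrogen bonds (about 2.6 Å) and the set near 3537 cm-1 to long, weak ones (about 3.0 Å), consistent with the wide range of H-bond distances noted.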
Abstract:
The single-crystal Raman spectra of the natural mineral paulmooreite, Pb2As2O5, from the Långban locality, Filipstad district, Värmland province, Sweden, are presented for the first time. It is a monoclinic mineral containing an isolated [As2O5]4- group. Depolarised and single-crystal spectra of the natural and synthetic samples compare favourably and are characterised by strong bands around 186 and 140 cm-1 and three medium bands between 700 and 800 cm-1. Band assignments were made based on band symmetry and on comparison between experimental band positions and those resulting from a Hartree-Fock calculation of an isolated [As2O5]4- ion. Spectral comparison was also made with lead arsenites such as synthetic PbAs2O4 and Pb2(AsO2)3Cl and natural finnemanite, in order to determine the contributions of the terminal and bridging O in paulmooreite. Bands at 733 – 760 cm-1 were assigned to terminal As-O vibrations, whereas stretches of the bridging O occur at 562 and 503 cm-1. The single-crystal spectra showed good mode separation, allowing bands to be assigned a symmetry species of Ag or Bg.
Abstract:
Purpose -- DB clients play a vital role in the delivery of the design-build (DB) system, and clients' competences are critical to the success of DB projects. Most DB clients, however, remain inexperienced with the DB system. This study therefore aims to identify the key competences that DB clients should possess to ensure the success of DB projects in the construction market of China. Design/Methodology/Approach -- Five semi-structured face-to-face interviews and a two-round Delphi questionnaire survey were conducted in the construction market of China to identify the key competences of DB clients. Rankings were assigned to these key competences on the basis of their relative importance. Findings -- Six ranked key competences of DB clients have been identified, namely: (1) the ability to clearly define project scope and objectives; (2) financial capacity for the projects; (3) capacity in contract management; (4) adequate staff or consulting team; (5) effective coordination with DB contractors; and (6) experience with similar design-build projects. Calculation of Kendall's Coefficient of Concordance (W) indicates a statistically significant consensus of panel experts on these top six key competences. Practical implications -- Clients should clearly understand the competence requirements in DB projects and should assess their DB capability before opting for DB. Originality/Value -- The examination of DB clients' key competences will help clients deepen their understanding of the DB system. DB clients can also use the research findings as guidelines to improve their DB competence.
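Kendall's W, used above to test expert consensus, is straightforward to compute from the matrix of ranks. A minimal sketch (illustrative only; the study's own computation and any tie corrections are not shown in the abstract):

```python
import numpy as np

def kendalls_w(ranks):
    """Kendall's coefficient of concordance W for an (m raters x n items)
    array of ranks, without tie correction.
    W = 12*S / (m^2 * (n^3 - n)), where S is the sum of squared
    deviations of the item rank sums from their mean.
    W = 1 means perfect agreement; W = 0 means no agreement."""
    ranks = np.asarray(ranks, dtype=float)
    m, n = ranks.shape
    r = ranks.sum(axis=0)              # rank sum per item
    s = ((r - r.mean()) ** 2).sum()    # deviation of rank sums
    return 12.0 * s / (m ** 2 * (n ** 3 - n))
```

Significance is then assessed by comparing m(n−1)W against a chi-squared distribution with n−1 degrees of freedom (for moderately large n).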
Abstract:
Purpose: To demonstrate that relatively simple third-order theory can provide a framework which shows how peripheral refraction can be manipulated by altering the forms of spectacle lenses. Method: Third-order equations were used to yield lens forms that correct peripheral power errors, either for the lenses alone or in combination with typical peripheral refractions of myopic eyes. These results were compared with those of finite ray-tracing. Results: The approximate forms of spherical and conicoidal lenses provided by third-order theory were flatter over a moderate myopic range than the forms obtained by rigorous ray-tracing. Lenses designed to correct peripheral refractive errors produced large errors when used with foveal vision and a rotating eye. Correcting astigmatism tended to give large mean oblique errors, and vice versa. When only spherical lens forms are used, correction of the relative hypermetropic peripheral refractions of myopic eyes which are observed experimentally, or the provision of relative myopic peripheral refractions in such eyes, seems impossible in the majority of cases. Conclusion: The third-order spectacle lens design approach can readily be used to show trends in peripheral refraction.
Abstract:
Volume measurements are useful in many branches of science and medicine. They are usually accomplished by acquiring a sequence of cross-sectional images through the object using an appropriate scanning modality, for example, x-ray computed tomography (CT), magnetic resonance (MR) or ultrasound (US). In the cases of CT and MR, a dividing cubes algorithm can be used to describe the surface as a triangle mesh. However, such algorithms are not suitable for US data, especially when the image sequence is multiplanar (as it usually is). This problem may be overcome by manually tracing regions of interest (ROIs) on the registered multiplanar images and connecting the points into a triangular mesh. In this paper we describe and evaluate a new discrete form of Gauss' theorem which enables the calculation of the volume of any enclosed surface described by a triangular mesh. The volume is calculated by summing the vector product of the centroid, area and normal of each surface triangle. The algorithm was tested on computer-generated objects, US-scanned balloons, livers and kidneys and CT-scanned clay rocks. The results, expressed as the mean percentage difference ± one standard deviation, were 1.2 ± 2.3, 5.5 ± 4.7, 3.0 ± 3.2 and −1.2 ± 3.2% for balloons, livers, kidneys and rocks respectively. The results compare favourably with other volume estimation methods such as planimetry and tetrahedral decomposition.
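The summation described above follows from the divergence theorem applied to the position field: V = (1/3) Σ cᵢ · (Aᵢ n̂ᵢ), summed over the triangles, where cᵢ is the centroid and Aᵢn̂ᵢ the area-weighted outward normal. A minimal sketch of that calculation (my own illustration, not the authors' implementation):

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Volume enclosed by a closed triangular mesh via the divergence
    theorem: V = (1/3) * sum over triangles of centroid . (area * normal).
    Faces must be consistently wound so that normals point outward."""
    v = np.asarray(vertices, dtype=float)
    total = 0.0
    for i, j, k in faces:
        a, b, c = v[i], v[j], v[k]
        centroid = (a + b + c) / 3.0
        area_normal = 0.5 * np.cross(b - a, c - a)  # area * unit normal
        total += centroid @ area_normal
    return total / 3.0
```

For a consistently wound mesh this is exact for the polyhedron defined by the triangles, regardless of how irregular the ROI-derived mesh is.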
Abstract:
Housing demand and supply are not in balance. The Australian situation counters the experience demonstrated in many other parts of the world in the aftermath of the Global Financial Crisis, with residential housing prices proving particularly resilient. A seemingly inexorable housing demand remains a critical issue affecting the socio-economic landscape. Underpinned by high levels of population growth fuelled by immigration, and further buoyed by sustained historically low interest rates, increasing income levels, and increased government assistance for first home buyers, this strong housing demand ensures that problems related to housing affordability continue almost unabated. A significant, but less visible, factor impacting housing affordability relates to holding costs. Although only one contributor in the housing affordability matrix, the nature and extent of holding cost impact requires elucidation: for example, the computation and methodology behind the calculation of holding costs varies widely, and in some instances holding costs are ignored entirely. In addition, ambiguity exists in terms of the inclusion of the various elements that comprise holding costs, thereby affecting the assessment of their relative contribution. Such anomalies may be explained by considering that assessment is conducted over time in an ever-changing environment. A strong relationship with opportunity cost, in turn dependent inter alia upon prevailing inflation and/or interest rates, adds further complexity. By extending research in the general area of housing affordability, this thesis seeks to provide a detailed investigation of those elements related to holding costs, specifically in the context of mid-sized (i.e. between 15 and 200 lots) greenfield residential property developments in South East Queensland. With the dimensions of holding costs and their influence over housing affordability determined, the null hypothesis H0, that holding costs are not passed on, can be addressed.
Arriving at these conclusions involves the development of robust economic and econometric models which seek to clarify the component impacts of holding cost elements. An explanatory sequential design research methodology has been adopted, whereby the compilation and analysis of quantitative data and the development of an economic model are informed by the subsequent collection and analysis of primarily qualitative data derived from surveying development-related organisations. Ultimately, there are significant policy implications in relation to the framework used in Australian jurisdictions that promote, retain, or otherwise maximise the opportunities for affordable housing.
Abstract:
We present a formalism for the analysis of sensitivity of nuclear magnetic resonance pulse sequences to variations of pulse sequence parameters, such as radiofrequency pulses, gradient pulses or evolution delays. The formalism enables the calculation of compact, analytic expressions for the derivatives of the density matrix and the observed signal with respect to the parameters varied. The analysis is based on two constructs computed in the course of modified density-matrix simulations: the error interrogation operators and error commutators. The approach presented is consequently named the Error Commutator Formalism (ECF). It is used to evaluate the sensitivity of the density matrix to parameter variation based on the simulations carried out for the ideal parameters, obviating the need for finite-difference calculations of signal errors. The ECF analysis therefore carries a computational cost comparable to a single density-matrix or product-operator simulation. Its application is illustrated using a number of examples from basic NMR spectroscopy. We show that the strength of the ECF is its ability to provide analytic insights into the propagation of errors through pulse sequences and the behaviour of signal errors under phase cycling. Furthermore, the approach is algorithmic and easily amenable to implementation in the form of a programming code. It is envisaged that it could be incorporated into standard NMR product-operator simulation packages.
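The ECF itself is not reproduced in the abstract, but the identity it exploits can be shown on the simplest possible case: for a single rotation ρ(θ) = e^(−iθH) ρ₀ e^(+iθH), the sensitivity of the density matrix to the pulse parameter is exactly a commutator, dρ/dθ = −i[H, ρ(θ)], so no finite-difference simulation is needed. The sketch below (my own illustration for a single spin-1/2, not the authors' code) checks the analytic commutator expression against a central finite difference:

```python
import numpy as np

# Spin-1/2 operators (hbar = 1).
Ix = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Iz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

def propagate(rho0, H, theta):
    """rho(theta) = exp(-i theta H) rho0 exp(+i theta H), H Hermitian."""
    w, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * theta * w)) @ V.conj().T
    return U @ rho0 @ U.conj().T

def drho_dtheta_analytic(rho0, H, theta):
    """Analytic sensitivity via the commutator: d rho/d theta = -i [H, rho]."""
    rho = propagate(rho0, H, theta)
    return -1j * (H @ rho - rho @ H)

def drho_dtheta_fd(rho0, H, theta, h=1e-6):
    """Central finite difference, for comparison only."""
    return (propagate(rho0, H, theta + h) - propagate(rho0, H, theta - h)) / (2 * h)
```

The commutator route costs no more than the ideal-parameter simulation itself, which is the computational advantage the abstract describes for the full formalism.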
Abstract:
Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the "gold standard" for predicting dose deposition in the patient. In this study, software has been developed that enables the transfer of treatment plan information from the treatment planning system to a Monte Carlo dose calculation engine. A database of commissioned linear accelerator models (Elekta Precise and Varian 2100CD at various energies) has been developed using the EGSnrc/BEAMnrc Monte Carlo suite. Planned beam descriptions and CT images can be exported from the treatment planning system using the DICOM framework. The information in these files is combined with an appropriate linear accelerator model to allow the accurate calculation of the radiation field incident on a modelled patient geometry. The Monte Carlo dose calculation results are combined according to the monitor units specified in the exported plan. The result is a 3D dose distribution that could be used to verify treatment planning system calculations. The software, MCDTK (Monte Carlo Dicom ToolKit), has been developed in the Java programming language and produces BEAMnrc and DOSXYZnrc input files, ready for submission on a high-performance computing cluster. The code has been tested with the Eclipse (Varian Medical Systems), Oncentra MasterPlan (Nucletron B.V.) and Pinnacle3 (Philips Medical Systems) planning systems. In this study the software was validated against measurements in homogeneous and heterogeneous phantoms.
Monte Carlo models are commissioned through comparison with quality assurance measurements made using a large square field incident on a homogeneous volume of water. This study aims to confirm that Monte Carlo calculations also match experimental measurements for complex fields and heterogeneous media.
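The per-beam combination step described above (Monte Carlo results weighted by the plan's monitor units) amounts to a weighted sum of dose grids. A minimal sketch, assuming each beam has been normalised to dose per MU during commissioning (illustrative only; not MCDTK code, and the names are invented):

```python
import numpy as np

def combine_beam_doses(dose_per_mu, monitor_units):
    """Combine per-beam Monte Carlo dose grids (dose per MU) into a
    total plan dose, weighting each beam by its planned monitor units."""
    total = np.zeros_like(np.asarray(dose_per_mu[0], dtype=float))
    for dose, mu in zip(dose_per_mu, monitor_units):
        total += mu * np.asarray(dose, dtype=float)
    return total
```

In practice each grid would be a full 3D DOSXYZnrc dose array on a common voxel grid, but the weighting logic is the same.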
Abstract:
The effects of ethanol fumigation on the inter-cycle variability of key in-cylinder pressure parameters in a modern common rail diesel engine have been investigated. Specifically, maximum rate of pressure rise, peak pressure, peak pressure timing and ignition delay were investigated. A new methodology for investigating the start of combustion was also proposed and demonstrated, which is particularly useful with noisy in-cylinder pressure data, as noise can have a significant effect on the calculation of an accurate net rate of heat release indicator diagram. Inter-cycle variability has traditionally been investigated using the coefficient of variation. However, deeper insight into engine operation is given by presenting the results as kernel density estimates, allowing investigation of otherwise unnoticed phenomena, including multi-modal and skewed behaviour. This study has found that operation of a common rail diesel engine with high ethanol substitutions (>20% at full load, >30% at three-quarter load) results in a significant reduction in ignition delay. Further, this study also concluded that if the engine is operated with absolute air-to-fuel ratios (mole basis) less than 80, the inter-cycle variability is substantially increased compared to normal operation.
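The contrast drawn above, a single coefficient of variation versus a full kernel density estimate, can be sketched in a few lines. The following is my own illustration (not the study's analysis code); the bandwidth rule is the common Silverman rule of thumb, which the abstract does not specify:

```python
import numpy as np

def coefficient_of_variation(x):
    """Sample standard deviation over the mean: a single summary number."""
    x = np.asarray(x, dtype=float)
    return x.std(ddof=1) / x.mean()

def gaussian_kde_1d(x, grid, bandwidth=None):
    """Simple Gaussian kernel density estimate.  Unlike the coefficient
    of variation, the full density can reveal multi-modal or skewed
    cycle-to-cycle behaviour."""
    x = np.asarray(x, dtype=float)
    grid = np.asarray(grid, dtype=float)
    if bandwidth is None:  # Silverman's rule of thumb
        bandwidth = 1.06 * x.std(ddof=1) * len(x) ** (-1 / 5)
    z = (grid[:, None] - x[None, :]) / bandwidth
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(x) * bandwidth * np.sqrt(2 * np.pi))
```

Two engine operating modes with the same coefficient of variation can produce visibly different densities, which is the point the abstract makes about unnoticed phenomena.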
Abstract:
Introduction: The accurate identification of tissue electron densities is of great importance for Monte Carlo (MC) dose calculations. When converting patient CT data into a voxelised format suitable for MC simulations, however, it is common to simplify the assignment of electron densities so that the complex tissues existing in the human body are categorised into a few basic types. This study examines the effects that the assignment of tissue types and the calculation of densities can have on the results of MC simulations, for the particular case of a Siemens Sensation 4 CT scanner located in a radiotherapy centre where QA measurements are routinely made using 11 tissue types (plus air). Methods: DOSXYZnrc phantoms are generated from CT data, using the CTCREATE user code, with the relationship between Hounsfield units (HU) and density determined via linear interpolation between a series of specified points on the 'CT-density ramp' (see Figure 1(a)). Tissue types are assigned according to HU ranges. Each voxel in the DOSXYZnrc phantom therefore has an electron density (electrons/cm3) defined by the product of the mass density (from the HU conversion) and the intrinsic electron density (electrons/gram) (from the material assignment) in that voxel. In this study, we consider the problems of density conversion and material identification separately: the CT-density ramp is simplified by decreasing the number of points which define it from 12 down to 8, 3 and 2; and the material-type assignment is varied by defining the materials which comprise our test phantom (a Supertech head) as two tissues and bone, two plastics and bone, water only and (as an extreme case) lead only. The effect of these parameters on radiological thickness maps derived from simulated portal images is investigated.
Results & Discussion: Increasing the degree of simplification of the CT-density ramp results in an increasing effect on the resulting radiological thickness calculated for the Supertech head phantom. For instance, defining the CT-density ramp using 8 points, instead of 12, results in a maximum radiological thickness change of 0.2 cm, whereas defining the CT-density ramp using only 2 points results in a maximum radiological thickness change of 11.2 cm. Changing the definition of the materials comprising the phantom between water and plastic and tissue results in millimetre-scale changes to the resulting radiological thickness. When the entire phantom is defined as lead, this alteration changes the calculated radiological thickness by a maximum of 9.7 cm. Evidently, the simplification of the CT-density ramp has a greater effect on the resulting radiological thickness map than does the alteration of the assignment of tissue types. Conclusions: It is possible to alter the definitions of the tissue types comprising the phantom (or patient) without substantially altering the results of simulated portal images. However, these images are very sensitive to the accurate identification of the HU-density relationship. When converting data from a patient’s CT into a MC simulation phantom, therefore, all possible care should be taken to accurately reproduce the conversion between HU and mass density, for the specific CT scanner used. Acknowledgements: This work is funded by the NHMRC, through a project grant, and supported by the Queensland University of Technology (QUT) and the Royal Brisbane and Women's Hospital (RBWH), Brisbane, Australia. The authors are grateful to the staff of the RBWH, especially Darren Cassidy, for assistance in obtaining the phantom CT data used in this study. The authors also wish to thank Cathy Hargrave, of QUT, for assistance in formatting the CT data, using the Pinnacle TPS. 
Computational resources and services used in this work were provided by the HPC and Research Support Group, QUT, Brisbane, Australia.
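The HU-to-density conversion at the heart of the study above is a piecewise-linear interpolation between ramp points. A minimal sketch (the four ramp points below are hypothetical placeholders; the real 12-point ramp is scanner-specific and must come from the CT calibration, which is precisely the abstract's conclusion):

```python
import numpy as np

# Hypothetical CT-density ramp points (HU, mass density in g/cm^3).
RAMP_HU = [-1000.0, 0.0, 1000.0, 3000.0]
RAMP_DENSITY = [0.001, 1.0, 1.6, 2.8]

def hu_to_density(hu):
    """Piecewise-linear interpolation between CT-density ramp points,
    as CTCREATE does when building a DOSXYZnrc phantom."""
    return np.interp(hu, RAMP_HU, RAMP_DENSITY)
```

Coarsening the ramp (fewer points) changes the interpolated density over wide HU ranges at once, which is why the study found it to have a much larger effect on radiological thickness than the material-type assignment.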
Abstract:
Introduction: Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo (MC) methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the ‘gold standard’ for predicting dose deposition in the patient [1]. This project has three main aims: 1. To develop tools that enable the transfer of treatment plan information from the treatment planning system (TPS) to a MC dose calculation engine. 2. To develop tools for comparing the 3D dose distributions calculated by the TPS and the MC dose engine. 3. To investigate the radiobiological significance of any errors between the TPS patient dose distribution and the MC dose distribution in terms of Tumour Control Probability (TCP) and Normal Tissue Complication Probabilities (NTCP). The work presented here addresses the first two aims. Methods: (1a) Plan Importing: A database of commissioned accelerator models (Elekta Precise and Varian 2100CD) has been developed for treatment simulations in the MC system (EGSnrc/BEAMnrc). Beam descriptions can be exported from the TPS using the widespread DICOM framework, and the resultant files are parsed with the assistance of a software library (PixelMed Java DICOM Toolkit). The information in these files (such as the monitor units, the jaw positions and gantry orientation) is used to construct a plan-specific accelerator model which allows an accurate simulation of the patient treatment field. (1b) Dose Simulation: The calculation of a dose distribution requires patient CT images which are prepared for the MC simulation using a tool (CTCREATE) packaged with the system. 
Beam simulation results are converted to absolute dose per MU using calibration factors recorded during the commissioning process and treatment simulation. These distributions are combined according to the MU meter settings stored in the exported plan to produce an accurate description of the prescribed dose to the patient. (2) Dose Comparison: TPS dose calculations can be obtained using either a DICOM export or by direct retrieval of binary dose files from the file system. Dose difference, gamma evaluation and normalised dose difference algorithms [2] were employed for the comparison of the TPS dose distribution and the MC dose distribution. These implementations are spatial-resolution independent and able to interpolate for comparisons. Results and Discussion: The tools successfully produced Monte Carlo input files for a variety of plans exported from the Eclipse (Varian Medical Systems) and Pinnacle (Philips Medical Systems) planning systems, ranging in complexity from a single uniform square field to a five-field step-and-shoot IMRT treatment. The simulation of collimated beams has been verified geometrically, and validation of dose distributions in a simple body phantom (QUASAR) will follow. The developed dose comparison algorithms have also been tested with controlled dose distribution changes. Conclusion: The capability of the developed code to independently process treatment plans has been demonstrated. A number of limitations exist: only static fields are currently supported (dynamic wedges and dynamic IMRT will require further development), and the process has not been tested for planning systems other than Eclipse and Pinnacle. The tools will be used to independently assess the accuracy of the current treatment planning system dose calculation algorithms for complex treatment deliveries such as IMRT in treatment sites where patient inhomogeneities are expected to be significant.
Acknowledgements: Computational resources and services used in this work were provided by the HPC and Research Support Group, Queensland University of Technology, Brisbane, Australia. Pinnacle dose parsing made possible with the help of Paul Reich, North Coast Cancer Institute, North Coast, New South Wales.
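The gamma evaluation mentioned above combines a dose-difference tolerance and a distance-to-agreement tolerance into a single pass/fail index. The following is a deliberately simplified 1D sketch (my own illustration, not the cited implementation [2]; real gamma analysis works in 3D, normalises the dose tolerance to a reference dose, and interpolates the evaluated distribution):

```python
import numpy as np

def gamma_index_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.03, dist_tol=3.0):
    """1-D gamma evaluation: for each reference point, the minimum over
    evaluated points of sqrt((dose diff / dose_tol)^2 + (dist / dist_tol)^2).
    A point passes when gamma <= 1."""
    x_eval = np.asarray(x_eval, dtype=float)
    d_eval = np.asarray(d_eval, dtype=float)
    gammas = []
    for xr, dr in zip(x_ref, d_ref):
        dd = (d_eval - dr) / dose_tol      # dose-difference term
        dx = (x_eval - xr) / dist_tol      # distance-to-agreement term
        gammas.append(np.sqrt(dd ** 2 + dx ** 2).min())
    return np.array(gammas)
```

Identical distributions give gamma = 0 everywhere; a distribution shifted by less than the distance tolerance still passes, which is what makes gamma more forgiving than a raw dose difference in steep-gradient regions.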
Abstract:
Shaft fracture at an early stage of operation is a common problem for a certain type of wind turbine. To determine the cause of shaft failure, a series of experimental tests was conducted to evaluate the chemical composition and mechanical properties. A detailed analysis of the macroscopic features and microstructure of the shaft material was also performed to gain an in-depth understanding of the cause of fracture. The experimental tests and analysis show that there are no significant differences between the material properties of the main shaft and the standard EN 10083-3:2006. The results show that stress concentration on the shaft surface close to the critical section, caused by rubbing of the annular ring and coupled with the high stress concentration caused by the change of inner diameter of the main shaft, is the main reason for fracture of the main shaft. In addition, inhomogeneity of the main shaft microstructure accelerates the fracture process. A theoretical calculation of the equivalent stress at the end of the shaft was also performed, which demonstrates that cracks can easily occur under the action of impact loads. The contribution of this paper is to provide a reference for fracture analysis of similar wind turbine main shafts.
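The abstract does not state which equivalent-stress criterion was used; for a shaft under combined bending and torsion, the von Mises criterion is the standard choice, and a minimal sketch of it (assuming that criterion, with a simple stress-concentration factor applied to the nominal stress) is:

```python
import math

def equivalent_stress(sigma_bending, tau_torsion, kt=1.0):
    """Von Mises equivalent stress for combined bending and torsion at a
    shaft cross-section: sigma_eq = sqrt(sigma^2 + 3*tau^2).
    kt is an optional stress-concentration factor applied to the
    nominal bending stress (e.g. at a diameter change)."""
    sigma = kt * sigma_bending
    return math.sqrt(sigma ** 2 + 3.0 * tau_torsion ** 2)
```

A diameter change or surface rubbing raises kt locally, which is how a nominally safe shaft section can exceed its fatigue limit, consistent with the failure mechanism described above.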
Abstract:
A condensation technique for degrees of freedom is first proposed to improve the computational efficiency of the meshfree method with Galerkin weak form. In the present method, scattered nodes without connectivity are divided into several subsets by cells of arbitrary shape. The local discrete equations are established over each cell using moving kriging interpolation, in which the nodes located in the cell are used for the approximation. The condensation technique is then introduced into the local discrete equations by transferring the equations of the inner nodes to equations of the boundary nodes of the cell. In this scheme, the calculation over each cell is carried out by the meshfree method with Galerkin weak form, and a local search is used in the interpolation. Numerical examples show that the present method achieves high computational efficiency and good convergence and accuracy.
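The transfer of inner-node equations onto boundary nodes described above is, at the linear-algebra level, static condensation: partition the cell system into boundary (b) and inner (i) degrees of freedom and eliminate the inner block, K_c = K_bb − K_bi K_ii⁻¹ K_ib, f_c = f_b − K_bi K_ii⁻¹ f_i. A minimal dense-matrix sketch (illustrative only; the paper's cell assembly and moving kriging interpolation are not shown):

```python
import numpy as np

def condense(K, f, boundary, inner):
    """Static condensation: eliminate the inner DOFs of a cell so it is
    represented only by its boundary DOFs.
    K_c = K_bb - K_bi K_ii^{-1} K_ib,  f_c = f_b - K_bi K_ii^{-1} f_i."""
    K = np.asarray(K, dtype=float)
    f = np.asarray(f, dtype=float)
    Kbb = K[np.ix_(boundary, boundary)]
    Kbi = K[np.ix_(boundary, inner)]
    Kib = K[np.ix_(inner, boundary)]
    Kii = K[np.ix_(inner, inner)]
    # Solve K_ii X = [K_ib | f_i] once for both correction terms.
    X = np.linalg.solve(Kii, np.column_stack([Kib, f[inner]]))
    Kc = Kbb - Kbi @ X[:, :-1]
    fc = f[boundary] - Kbi @ X[:, -1]
    return Kc, fc

def recover_inner(K, f, boundary, inner, u_b):
    """Back-substitute the boundary solution to recover the inner DOFs."""
    K = np.asarray(K, dtype=float)
    f = np.asarray(f, dtype=float)
    Kib = K[np.ix_(inner, boundary)]
    Kii = K[np.ix_(inner, inner)]
    return np.linalg.solve(Kii, f[inner] - Kib @ u_b)
```

The global system then couples only boundary DOFs of the cells, which is the source of the efficiency gain; the inner solution is recovered cell by cell afterwards.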
Abstract:
Bacterial siderophores are a group of chemically diverse, virulence-associated secondary metabolites whose expression imposes metabolic costs. A combined bacterial genetic and metabolomic approach revealed differential metabolomic impacts associated with the biosynthesis of different siderophore structural families. Despite myriad genetic differences, the metabolome of a cheater mutant lacking a single set of siderophore biosynthetic genes more closely approximated that of a nonpathogenic K12 strain than that of its isogenic, uropathogenic parent strain. Siderophore types associated with greater metabolomic perturbations are less common among human isolates, suggesting that metabolic costs influence success in a human population. Although different siderophores share a common iron acquisition function, our analysis shows how a metabolomic approach can distinguish their relative metabolic impacts in E. coli.