40 results for markov chains monte carlo methods
Abstract:
This work analysed the feasibility of using a fast, customized Monte Carlo (MC) method to perform accurate computation of dose distributions during pre- and intraplanning of intraoperative electron radiation therapy (IOERT) procedures. The MC method that was implemented, which has been integrated into a specific innovative simulation and planning tool, is able to simulate the fate of thousands of particles per second, and the aim of this work was to determine the level of interactivity that could be achieved. The planning workflow enabled calibration of the imaging and treatment equipment, as well as manipulation of the surgical frame and insertion of protection shields around the organs at risk and other beam modifiers. In this way, the multidisciplinary team involved in IOERT has all the tools necessary to perform complex MC dose simulations adapted to their equipment in an efficient and transparent way. To assess the accuracy and reliability of this MC technique, dose distributions for a monoenergetic source were compared with those obtained using a general-purpose software package widely used in medical physics applications. Once the accuracy of the underlying simulator had been confirmed, a clinical accelerator was modelled and experimental measurements in water were conducted. A comparison was made with the output of the simulator to identify the conditions under which accurate dose estimations could be obtained in less than 3 min, the threshold imposed to allow interactive use of the tool in treatment planning. Finally, a clinically relevant scenario, namely early-stage breast cancer treatment, was simulated with pre- and intraoperative volumes to verify that the MC tool could be used intraoperatively and that dose delivery could be adjusted based on the simulation output without compromising accuracy. The workflow provided a satisfactory model of the treatment head and the imaging system, enabling proper configuration of the treatment planning system and providing good accuracy in the dose simulation.
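To make the kind of per-particle work such a dose engine performs concrete, the following toy Python sketch deposits energy from monoenergetic electrons along short straight steps in a voxelized water phantom. It is purely illustrative and not the authors' simulator: the phantom size, step length, energy-loss rate and scattering kick are all invented placeholders.

    import numpy as np

    rng = np.random.default_rng(0)
    grid = np.zeros((50, 50, 50))   # dose-scoring voxels (assumed 2 mm side)
    voxel = 0.2                     # voxel side length in cm

    def simulate_electron(energy_mev=6.0, step=0.1, loss_per_cm=2.0):
        pos = np.array([5.0, 5.0, 0.0])      # enter at the phantom surface (cm)
        direction = np.array([0.0, 0.0, 1.0])
        e = energy_mev
        while e > 0.0:
            i, j, k = (pos / voxel).astype(int)
            if not (0 <= i < 50 and 0 <= j < 50 and 0 <= k < 50):
                break                        # particle left the phantom
            de = min(e, loss_per_cm * step)  # toy continuous slowing-down model
            grid[i, j, k] += de              # score deposited energy in this voxel
            direction += rng.normal(0.0, 0.05, 3)   # toy multiple-scattering kick
            direction /= np.linalg.norm(direction)
            pos += direction * step
            e -= de

    for _ in range(10_000):
        simulate_electron()

A real engine replaces the invented constants with measured cross-sections and beam models; the transport-step/energy-deposition/direction-update loop is the part that must run fast enough to process thousands of particles per second.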
Abstract:
Low-energy X-ray Intra-Operative Radiation Therapy (XIORT) treatments delivered during surgery (e.g., INTRABEAM, Carl Zeiss; Axxent, Xoft) can benefit from accurate and fast dose prediction in a 3D patient volume.
Abstract:
The fixed-point implementation of IIR digital filters usually leads to the appearance of zero-input limit cycles, which degrade the performance of the system. In this paper, we develop an efficient Monte Carlo algorithm to detect and characterize limit cycles in fixed-point IIR digital filters. The proposed approach considers filters formulated in state space and is valid for any fixed-point representation and quantization function. Numerical simulations on several high-order filters, where an exhaustive search is infeasible, show the effectiveness of the proposed approach.
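As a schematic of how such a Monte Carlo search can work (a generic sketch under simple assumptions, not the authors' algorithm), one can draw random quantized initial states, iterate the zero-input state recursion x[k+1] = Q(A x[k]), and report any non-zero state that recurs:

    import numpy as np

    def quantize(x, step=2**-8):
        # round-to-nearest fixed-point quantization (one possible choice)
        return np.round(x / step) * step

    def find_limit_cycle(A, n_trials=10_000, max_steps=512, step=2**-8, seed=0):
        """Monte Carlo search for zero-input limit cycles of x[k+1] = Q(A @ x[k])."""
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        for _ in range(n_trials):
            x = quantize(rng.uniform(-1.0, 1.0, n), step)
            seen = {}
            for k in range(max_steps):
                key = tuple(x)
                if key in seen:
                    if np.any(x):                 # non-zero recurring state: limit cycle
                        return x, k - seen[key]   # a cycle state and its period
                    break                         # trajectory died out at zero
                seen[key] = k
                x = quantize(A @ x, step)
        return None

The paper's method is more general (any fixed-point representation and quantization function); this sketch fixes a uniform round-to-nearest quantizer just to show the detection idea.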
Abstract:
A fully 3D iterative image reconstruction algorithm, based on the ordered-subsets approach, has been developed for high-resolution PET cameras composed of pixelated scintillator crystal arrays and rotating planar detectors. The associated system matrix is precalculated with Monte Carlo methods that incorporate physical effects not included in analytical models, such as positron range effects and the interaction of the incident gammas with the scintillator material. Custom Monte Carlo methodologies have been developed and optimized for the modelling of system matrices for fast iterative image reconstruction adapted to specific scanner geometries, without redundant calculations. According to the methodology proposed here, only one-eighth of the voxels within two central transaxial slices need to be modelled in detail. The rest of the system matrix elements can be obtained with the aid of axial symmetries and redundancies, as well as in-plane symmetries within transaxial slices. Sparse-matrix techniques are employed for the non-zero system matrix elements, allowing fast execution of the image reconstruction process. This 3D image reconstruction scheme has been compared, in terms of image quality, to a fast 2D implementation of the OSEM algorithm combined with Fourier rebinning approaches. This work confirms the superiority of fully 3D OSEM in terms of spatial resolution, contrast recovery and noise reduction compared to conventional 2D approaches based on rebinning schemes. At the same time, it demonstrates that fully 3D methodologies can be applied efficiently to the image reconstruction problem for high-resolution rotational PET cameras by using accurate precalculated system models and taking advantage of the system's symmetries.
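For readers unfamiliar with the ordered-subsets scheme, the following minimal Python sketch shows the multiplicative OSEM update applied per subset of a precalculated sparse system matrix. The data layout (a list of sparse subset blocks and matching count vectors) is a hypothetical simplification, not the reconstruction code described here:

    import numpy as np

    def osem(A_subsets, y_subsets, n_iter=10, eps=1e-12):
        # A_subsets[s]: sparse (detector bins x voxels) block of the system matrix
        # y_subsets[s]: measured counts for the same subset of detector bins
        n_vox = A_subsets[0].shape[1]
        x = np.ones(n_vox)                                   # uniform initial image
        sens = [np.asarray(A.sum(axis=0)).ravel() + eps for A in A_subsets]
        for _ in range(n_iter):
            for A, y, s in zip(A_subsets, y_subsets, sens):
                proj = A @ x + eps                           # forward projection
                x *= (A.T @ (y / proj)) / s                  # EM update for this subset
        return x

Storing only the non-zero matrix elements (e.g. in SciPy's csr_matrix format) is what keeps each forward and back projection, and hence each subset update, fast.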
Abstract:
With the Bonner sphere spectrometer, the neutron spectrum is obtained through an unfolding procedure. Monte Carlo methods, regularization, parametrization, least-squares and maximum entropy are some of the techniques utilized for unfolding. In the last decade, methods based on artificial intelligence technology have also been used. Approaches based on genetic algorithms and artificial neural networks have been developed in order to overcome the drawbacks of previous techniques. Nevertheless, despite their advantages, artificial neural networks still have some drawbacks, mainly in the design process of the network, e.g. the optimal selection of the architectural and learning ANN parameters. In recent years, hybrid technologies combining artificial neural networks and genetic algorithms have therefore been employed. In this work, several ANN topologies were trained and tested using artificial neural networks and genetically evolved artificial neural networks, with the aim of unfolding neutron spectra from the count rates of a Bonner sphere spectrometer, and a comparative study of both procedures was carried out.
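A minimal sketch of the ANN side of the comparison (hypothetical file names and layer sizes; the training pairs would come from reference spectra and computed sphere responses) could look like this:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    n_spheres, n_bins = 7, 31                    # assumed spectrometer/binning sizes
    X_train = np.load("count_rates.npy")         # shape (n_samples, n_spheres)
    Y_train = np.load("spectra.npy")             # shape (n_samples, n_bins)

    ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000)
    ann.fit(X_train, Y_train)                    # learn count rates -> spectrum map

    measured = np.loadtxt("measured_rates.txt")  # hypothetical measurement file
    spectrum = ann.predict(measured.reshape(1, -1))[0]

In the genetically evolved variant, settings such as hidden_layer_sizes and the learning parameters are selected by a genetic algorithm rather than fixed by hand, which is precisely the design burden the hybrid approach aims to remove.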
Abstract:
The neutronics hall of the Nuclear Engineering Department at the Polytechnic University of Madrid has been characterized. The neutron spectra and the ambient dose equivalent produced by a ²⁴¹AmBe source were measured at various source-to-detector distances on the new bench. Using Monte Carlo methods, a detailed model of the neutronics hall was designed, and the neutron spectra and the ambient dose equivalent were calculated at the same locations where the measurements were carried out. Good agreement between measured and calculated values was found.
Abstract:
A passive neutron area monitor has been designed using Monte Carlo methods; the monitor is a polyethylene cylinder with pairs of thermoluminescent dosimeters (TLD600 and TLD700) as the thermal neutron detector. The monitor was calibrated with a bare and a thermalized ²⁴¹AmBe neutron source, and its performance was evaluated by measuring the ambient dose equivalent due to photoneutrons produced by a 15 MV linear accelerator for radiotherapy and due to the neutrons at the output of a TRIGA Mark III radial beam port.
Abstract:
Background: Several meta-analysis methods can be used to quantitatively combine the results of a group of experiments, including the weighted mean difference (WMD), statistical vote counting (SVC), the parametric response ratio (RR) and the non-parametric response ratio (NPRR). The software engineering community has focused on the weighted mean difference method. However, other meta-analysis methods have distinct strengths, such as being usable when variances are not reported. There are as yet no guidelines to indicate which method is best for use in each case. Aim: Compile a set of rules that SE researchers can use to ascertain which aggregation method is best suited to the synthesis phase of a systematic review. Method: Monte Carlo simulation, varying the number of experiments in the meta-analyses, the number of subjects they include, their variance and the effect size. We empirically calculated the reliability and statistical power in each case. Results: WMD is generally reliable if the variance is low, whereas its power depends on the effect size and the number of subjects per meta-analysis; the reliability of RR is generally unaffected by changes in variance, but it requires more subjects than WMD to be powerful; NPRR is the most reliable method, but it is not very powerful; SVC behaves well when the effect size is moderate, but is less reliable with other effect sizes. Detailed tables of results are annexed. Conclusions: Before undertaking statistical aggregation in software engineering, it is worthwhile checking whether there is any appreciable difference in the reliability and power of the methods. If there is, software engineers should select the method that optimizes both parameters.
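As an illustration of the Method step (not the authors' simulation code), the power of a fixed-effect WMD meta-analysis can be estimated by Monte Carlo roughly as follows, with all parameter values assumed:

    import numpy as np
    from scipy import stats

    def wmd_power(n_exp=5, n_subj=10, effect=0.5, sd=1.0,
                  alpha=0.05, n_sim=5000, seed=0):
        """Fraction of simulated meta-analyses that detect a true effect."""
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(n_sim):
            d = np.empty(n_exp)       # per-experiment mean differences
            w = np.empty(n_exp)       # inverse-variance weights
            for i in range(n_exp):
                treat = rng.normal(effect, sd, n_subj)
                ctrl = rng.normal(0.0, sd, n_subj)
                d[i] = treat.mean() - ctrl.mean()
                w[i] = 1.0 / (treat.var(ddof=1) / n_subj + ctrl.var(ddof=1) / n_subj)
            z = (np.sum(w * d) / np.sum(w)) / np.sqrt(1.0 / np.sum(w))
            if 2.0 * (1.0 - stats.norm.cdf(abs(z))) < alpha:
                hits += 1
        return hits / n_sim

Running the same loop with effect=0.0 estimates the false-positive rate, i.e. the reliability side of the comparison.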
Abstract:
This thesis aims to introduce some fundamental concepts underlying option valuation theory, including the implementation of computational tools. In many cases an analytical solution for option pricing does not exist, so the following numerical methods are used: binomial trees, Monte Carlo simulations and finite difference methods. First, an algorithm based on Hull and Wilmott is written for every method. Then these algorithms are improved in different ways. For the binomial tree, both speed and memory usage are significantly improved by using only one vector instead of a whole price-storing matrix. Computational time in Monte Carlo simulations is reduced by implementing a parallel algorithm (in C) that improves speed by a factor equal to the number of processors used. Furthermore, the MatLab code for Monte Carlo was made faster by vectorizing the simulation process. Finally, the option values obtained are compared to those obtained with popular finite difference methods, and it is discussed which of the algorithms is more appropriate for which purpose.
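For reference, a vectorized Monte Carlo pricer for a plain European call under Black-Scholes dynamics, in the spirit of the vectorization described above, fits in a few lines of Python (parameter values here are arbitrary examples):

    import numpy as np

    def mc_european_call(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                         n_paths=1_000_000, seed=0):
        rng = np.random.default_rng(seed)
        Z = rng.standard_normal(n_paths)                 # one normal draw per path
        ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
        payoff = np.maximum(ST - K, 0.0)                 # call payoff at expiry
        return np.exp(-r * T) * payoff.mean()            # discounted MC estimate

    print(mc_european_call())   # approaches the Black-Scholes value (~10.45)

Drawing all the terminal prices in one array operation, instead of looping path by path, is exactly the kind of vectorization that made the MatLab implementation faster.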
Abstract:
In this thesis, we formulate a theory to address simulations of slow transport effects in atomic systems. We first develop this theoretical framework, based on statistical mechanics, in the context of equilibrium atomic ensembles, and then adapt it to model ensembles away from equilibrium. The theory stands on Jaynes' maximum entropy principle, valid for the treatment of systems both in and out of equilibrium, and on mean-field approximation theory. It is expressed mathematically as a variational principle in which a free entropy, rather than a free energy, is maximized. The proposed formulation defines atomistic equivalents of macroscopic variables such as the temperature and the molar fractions, which are not required to be uniform but can vary from particle to particle, so that non-uniform macroscopic fields can be considered. We complement this theoretical framework with Monte Carlo summation (quadrature) rules, through which computable models are obtained. In addition, we develop the full set of equations governing transport processes. We derive a dissipation inequality for the entropic production involving discrete thermodynamic forces and fluxes. This discrete dissipation inequality identifies the structure that the discrete kinetic potentials must satisfy; these potentials couple the time rates of the microscopic variables to the corresponding driving forces. The kinetic potentials must finally be completed with a phenomenological relation of the Onsager type. Lastly, we provide numerical validations illustrating the ability of the presented theory to simulate equilibrium properties and surface segregation in metallic alloys.
We first assess the ability of a simple mean-field model to reproduce thermodynamic equilibrium properties in systems with atomic resolution. Then, we evaluate the ability of the model to reproduce transport processes in complex systems over times that are long compared with the characteristic timescales of the atomic scale.
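As a schematic of what a Monte Carlo summation rule does in this setting (a generic illustration, not the thesis' specific quadrature), an ensemble average <f> over a Gaussian single-particle distribution can be replaced by a sample mean:

    import numpy as np

    def mc_average(f, mean, sigma, n_samples=100_000, seed=0):
        # estimate <f> = E[f(x)] for x ~ N(mean, sigma^2) by Monte Carlo sampling
        rng = np.random.default_rng(seed)
        x = rng.normal(mean, sigma, n_samples)
        return f(x).mean()

    # e.g. mean potential energy of a harmonic degree of freedom with unit spread
    print(mc_average(lambda x: 0.5 * x**2, mean=0.0, sigma=1.0))   # ~ 0.5

Replacing exact phase averages with such sample means is what turns the variational mean-field model into a computable one.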