975 results for markov chains monte carlo methods
Abstract:
This thesis presents Bayesian solutions to inference problems for three types of social network data structures: a single observation of a social network, repeated observations on the same social network, and repeated observations on a social network developing through time. A social network is conceived as a structure consisting of actors and their social interactions with each other. A common conceptualisation of social networks is to represent the actors by nodes in a graph, with edges between pairs of nodes that are relationally tied to each other according to some definition. Statistical analysis of social networks is to a large extent concerned with modelling these relational ties, which lends itself to empirical evaluation. The first paper deals with a family of statistical models for social networks called exponential random graphs that takes various structural features of the network into account. In general, the likelihood functions of exponential random graphs are only known up to a constant of proportionality. A procedure for performing Bayesian inference using Markov chain Monte Carlo (MCMC) methods is presented. The algorithm consists of two basic steps, one in which an ordinary Metropolis-Hastings updating step is used, and another in which an importance sampling scheme is used to calculate the acceptance probability of the Metropolis-Hastings step. In the second paper a method for modelling reports given by actors (or other informants) on their social interaction with others is investigated in a Bayesian framework. The model contains two basic ingredients: the unknown network structure and functions that link this unknown network structure to the reports given by the actors. These functions take the form of probit link functions. An intrinsic problem is that the model is not identified, meaning that there are combinations of values of the unknown structure and the parameters in the probit link functions that are observationally equivalent. Instead of using restrictions to achieve identification, it is proposed that the different observationally equivalent combinations of parameters and unknown structure be investigated a posteriori. Estimation of parameters is carried out using Gibbs sampling with a switching device that enables transitions between posterior modal regions. The main goal of the procedures is to provide tools for comparisons of different model specifications. Papers 3 and 4 propose Bayesian methods for longitudinal social networks. The premise of the models investigated is that overall change in social networks occurs as a consequence of sequences of incremental changes. Models for the evolution of social networks using continuous-time Markov chains are meant to capture these dynamics. Paper 3 presents an MCMC algorithm for exploring the posteriors of parameters for such Markov chains. More specifically, the unobserved evolution of the network in between observations is explicitly modelled, thereby avoiding the need to deal with explicit formulas for the transition probabilities. This enables likelihood-based parameter inference in a wider class of network evolution models than has been available before. Paper 4 builds on the proposed inference procedure of Paper 3 and demonstrates how to perform model selection for a class of network evolution models.
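As an illustration of this type of update (a minimal sketch, not the algorithm of the thesis), the following Python fragment performs one Metropolis-Hastings step for an exponential random graph model in which the intractable ratio of normalising constants is estimated by importance sampling over graphs simulated at the current parameter value; `sample_graph_stats` and the Gaussian prior are hypothetical placeholders.

```python
import numpy as np

def log_prior(theta):
    # Hypothetical weak Gaussian prior on the ERGM parameters.
    return -0.5 * np.sum(theta ** 2) / 100.0

def mh_step(theta, s_obs, sample_graph_stats, rng, step=0.1):
    """One Metropolis-Hastings update of the ERGM parameters theta.

    The intractable ratio of normalising constants Z(theta_prop)/Z(theta)
    is replaced by an importance-sampling estimate built from graph
    statistics simulated at the current theta. `sample_graph_stats(theta, n)`
    is a user-supplied (hypothetical) inner sampler returning an (n, d) array.
    """
    theta_prop = theta + step * rng.standard_normal(theta.shape)

    stats = sample_graph_stats(theta, 500)                 # shape (500, d)

    # log of E_{y ~ p(.|theta)}[exp((theta_prop - theta)^T s(y))],
    # which estimates log Z(theta_prop) - log Z(theta).
    log_w = stats @ (theta_prop - theta)
    log_z_ratio = log_w.max() + np.log(np.mean(np.exp(log_w - log_w.max())))

    # Log acceptance probability of the outer Metropolis-Hastings step
    # (symmetric Gaussian proposal, so no proposal correction is needed).
    log_alpha = ((theta_prop - theta) @ s_obs
                 + log_prior(theta_prop) - log_prior(theta)
                 - log_z_ratio)

    return theta_prop if np.log(rng.uniform()) < log_alpha else theta
```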
Abstract:
This thesis deals with structure formation in poor solvent for one- and two-component polymer brushes, in which polymer chains are anchored to a substrate by grafting. Such systems show lateral structure formation, from which interesting applications arise. The polymers are moved by Monte Carlo simulations in the continuum, based on CBMC algorithms and local monomer displacements. A newly developed variant of the CBMC algorithm allows inner chain segments to be moved, since the previous algorithm does not relax the monomers close to the grafting monomer well. To investigate the phase behaviour, several analysis methods are developed and adapted: these include Minkowski measures for studying the structure of binary brushes and grafting correlations for studying the influence of grafting patterns. For one-component brushes, structure formation occurs only in the weakly grafted system; dense grafting leads to closed brushes without lateral structure. For the gradual transition between the closed and the ruptured brush, a temperature range in which the transition takes place is determined. The influence of the grafting pattern (disturbance of the formation of long-range order) on the brush configuration is evaluated using the grafting correlations. With irregular grafting, the resulting structures are larger than with regular grafting and also more stable against higher temperatures. In binary systems, structures form even at dense grafting. In addition to the parameters temperature, grafting density, and grafting pattern, the composition of the two components comes into play. Further structures thus become possible: at equal fractions of the two components, stripe-like, lamellar patterns form; at unequal fractions, the minority component forms clusters embedded in the majority component. Even in regularly grafted systems, no long-range order develops. For binary brushes, too, the grafting pattern has a strong influence on structure formation. Irregular grafting patterns lead to separation of the components already at higher temperatures, but the resulting structures are more irregular and somewhat larger than in regularly grafted systems. In contrast to self-consistent field theory, the simulations take fluctuations in the grafting into account and therefore show better agreement with experiment.
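As a rough illustration of configurational-bias chain regrowth (a generic sketch, not the continuum algorithm or the new inner-segment variant developed in the thesis), the fragment below regrows the free end of a grafted chain using several trial directions per monomer and returns the log Rosenbluth weight; `pair_energy` is a hypothetical helper for the non-bonded energy of a trial position. The full move regrows the old tail with the same procedure and accepts the proposal with probability min(1, exp(log_w_new - log_w_old)).

```python
import numpy as np

def cbmc_regrow_tail(chain, start, n_trials, bond_len, beta, pair_energy, rng):
    """Configurational-bias regrowth of chain[start:] (start >= 1).

    `chain` is an (N, 3) array of monomer positions; `pair_energy(pos, others)`
    is a hypothetical helper returning the non-bonded energy of a trial
    position. Returns the proposed tail and its log Rosenbluth weight.
    """
    fixed = chain[:start]
    new_tail = []
    log_w = 0.0
    prev = chain[start - 1]
    for _ in range(start, len(chain)):
        # n_trials trial directions on a sphere of radius bond_len around prev
        dirs = rng.standard_normal((n_trials, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        trials = prev + bond_len * dirs
        others = np.vstack([fixed] + new_tail) if new_tail else fixed
        u = np.array([pair_energy(t, others) for t in trials])
        w = np.exp(-beta * u)
        if w.sum() == 0.0:                 # dead end: reject the whole move
            return None, -np.inf
        log_w += np.log(w.sum() / n_trials)
        # pick one trial position with probability proportional to exp(-beta*u)
        idx = rng.choice(n_trials, p=w / w.sum())
        prev = trials[idx]
        new_tail.append(prev)
    return np.array(new_tail), log_w
```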
Abstract:
Supernovae are among the most energetic events occurring in the universe and are so far the only verified extrasolar source of neutrinos. As the explosion mechanism is still not well understood, recording a burst of neutrinos from such a stellar explosion would be an important benchmark for particle physics as well as for the core collapse models. The neutrino telescope IceCube is located at the Geographic South Pole and monitors the Antarctic glacier for Cherenkov photons. Even though it was conceived for the detection of high energy neutrinos, it is capable of identifying a burst of low energy neutrinos ejected from a supernova in the Milky Way by exploiting the low photomultiplier noise in the Antarctic ice and extracting a collective rate increase. A signal Monte Carlo specifically developed for water Cherenkov telescopes is presented. With its help, we will investigate how well IceCube can distinguish between core collapse models and oscillation scenarios. In the second part, nine years of data taken with the IceCube precursor AMANDA will be analyzed. Intensive data cleaning methods will be presented along with a background simulation. From the result, an upper limit on the expected occurrence of supernovae within the Milky Way will be determined.
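As a schematic illustration of what extracting a collective rate increase can mean (an assumption-laden sketch, not the analysis of the thesis), the snippet below sums the deviations of all optical-module rates from their expectations in a sliding time window and converts the sum into a significance; the Poissonian noise model, binning and window length are illustrative.

```python
import numpy as np

def collective_significance(counts, expected, window=10):
    """Significance of a collective rate increase summed over all modules.

    counts   : (n_modules, n_bins) measured counts per module and time bin
    expected : (n_modules,) expected mean counts per bin for each module
    window   : number of consecutive time bins summed together
    Returns one significance value per window position.
    """
    # deviation of the detector-wide count from its expectation, per time bin
    deviation = counts.sum(axis=0) - expected.sum()
    # Poisson-like standard deviation of the summed expectation per bin
    sigma_bin = np.sqrt(expected.sum())
    # sliding-window sum of the per-bin deviations
    dev_win = np.convolve(deviation, np.ones(window), mode="valid")
    return dev_win / (sigma_bin * np.sqrt(window))
```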
Abstract:
Despite the scientific achievements of the last decades in astrophysics and cosmology, the majority of the Universe's energy content is still unknown. A potential solution to the “missing mass problem” is the existence of dark matter in the form of WIMPs. Due to the very small cross section for WIMP-nucleon interactions, the number of expected events is very limited (about 1 event/tonne/year), thus requiring detectors with a large target mass and a low background level. The aim of the XENON1T experiment, the first tonne-scale LXe-based detector, is to be sensitive to WIMP-nucleon cross sections as low as 10^-47 cm^2. To investigate whether such a detector can reach this goal, Monte Carlo simulations are mandatory for estimating the background. To this aim, the GEANT4 toolkit has been used to implement the detector geometry and to simulate the decays from the various background sources, both electromagnetic and nuclear. From the analysis of the simulations, the background level has been found fully acceptable for the purposes of the experiment: about 1 background event in a 2 tonne-year exposure. Using the Maximum Gap method, the XENON1T sensitivity has been evaluated, and the minimum of the WIMP-nucleon cross section has been found at 1.87 x 10^-47 cm^2, at 90% CL, for a WIMP mass of 45 GeV/c^2. The results have been independently cross-checked using the Likelihood Ratio method, which confirmed them with an agreement within less than a factor of two; such a result is completely acceptable considering the intrinsic differences between the two statistical methods. Thus, this PhD thesis has shown that the XENON1T detector will be able to reach the design sensitivity, lowering the limits on the WIMP-nucleon cross section by about two orders of magnitude with respect to current experiments.
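To give a feel for the Maximum Gap method mentioned above (a toy Monte Carlo illustration of the idea, not the analytic construction used in the thesis), the sketch below estimates the probability that the largest event-free gap stays below the observed one for a given expected signal mu, and scans mu for a 90% CL upper limit. Event positions are expressed in the cumulative-expected-signal coordinate, where they are uniform under the signal hypothesis.

```python
import numpy as np

def c0_max_gap(x_obs, mu, n_toys=20000, rng=None):
    """Monte Carlo estimate of C0(x_obs, mu): the probability that the largest
    event-free gap is smaller than x_obs when mu signal events are expected."""
    rng = rng or np.random.default_rng(0)
    smaller = 0
    for _ in range(n_toys):
        events = np.sort(rng.uniform(0.0, mu, rng.poisson(mu)))
        edges = np.concatenate(([0.0], events, [mu]))
        if np.max(np.diff(edges)) < x_obs:
            smaller += 1
    return smaller / n_toys

def upper_limit(x_obs, cl=0.90, mu_max=20.0, n_steps=40):
    """Smallest expected signal mu excluded at the given confidence level."""
    for mu in np.linspace(0.5, mu_max, n_steps):
        if c0_max_gap(x_obs, mu) >= cl:
            return mu
    return None
```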
Abstract:
Recently, the new high definition multileaf collimator (HD120 MLC) was commercialized by Varian Medical Systems, providing high resolution in the central section of the treatment field. The aim of this work is to investigate the characteristics of the HD120 MLC using Monte Carlo (MC) methods.
Abstract:
Currently, photon Monte Carlo treatment planning (MCTP) for a patient stored in the patient database of a treatment planning system (TPS) can usually only be performed using a cumbersome multi-step procedure requiring many user interactions. This means automation is needed for use in clinical routine. In addition, because of the long computing times in MCTP, optimization of the MC calculations is essential. For these purposes a new graphical user interface (GUI)-based photon MC environment has been developed, resulting in a very flexible framework. By this means, appropriate MC transport methods are assigned to different geometric regions while still benefiting from the features included in the TPS. In order to provide a flexible MC environment, the MC particle transport has been divided into different parts: the source, the beam modifiers and the patient. The source part includes the phase-space source, source models and full MC transport through the treatment head. The beam modifier part consists of one module for each beam modifier. To simulate the radiation transport through each individual beam modifier, one out of three full MC transport codes can be selected independently. Additionally, for each beam modifier a simple or an exact geometry can be chosen. Thereby, different complexity levels of radiation transport are applied during the simulation. For the patient dose calculation, two different MC codes are available. A special plug-in in Eclipse, providing all necessary information by means of DICOM streams, was used to start the developed MC GUI. The implementation of this framework separates the MC transport from the geometry, and the modules pass the particles in memory; hence, no files are used as the interface. The implementation is realized for the 6 and 15 MV beams of a Varian Clinac 2300 C/D. Several applications demonstrate the usefulness of the framework. Apart from applications dealing with the beam modifiers, two patient cases are shown, in which MC-calculated dose distributions are compared with those calculated by a pencil beam algorithm or the AAA algorithm. Interfacing this flexible and efficient MC environment with Eclipse allows widespread use for all kinds of investigations, from timing and benchmarking studies to clinical patient studies. Additionally, it is possible to add modules, keeping the system highly flexible and efficient.
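Schematically, the modular structure described above (source, beam modifiers, patient, with particles handed over in memory rather than via files) could be organised as in the following sketch; all class and method names are illustrative and not those of the actual framework.

```python
from dataclasses import dataclass
from typing import Iterable, List, Protocol

@dataclass
class Particle:
    energy: float
    position: tuple
    direction: tuple
    weight: float = 1.0

class Source(Protocol):
    def generate(self, n: int) -> List[Particle]: ...

class BeamModifier(Protocol):
    def transport(self, particles: List[Particle]) -> List[Particle]: ...

class PatientDoseEngine(Protocol):
    def score(self, particles: List[Particle]) -> object: ...

def run_simulation(source: Source,
                   modifiers: Iterable[BeamModifier],
                   patient: PatientDoseEngine,
                   n_histories: int):
    """Chain the three parts of the simulation: particles are created by the
    source, passed in memory through each beam modifier in turn, and finally
    scored in the patient geometry; no intermediate files are written."""
    particles = source.generate(n_histories)
    for modifier in modifiers:      # e.g. jaws, MLC, wedge - one module each
        particles = modifier.transport(particles)
    return patient.score(particles)
```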
Abstract:
PURPOSE: To study the behavior and influence of a multileaf collimator (MLC) on dose calculation, verification, and portal energy spectra in the case of intensity-modulated fields obtained with a step-and-shoot or a dynamic technique. METHODS: The 80-leaf MLC for the Varian Clinac 2300 C/D was implemented in a previously developed Monte Carlo (MC) based multiple source model (MSM) for a 6 MV photon beam. Using this model and the MC program GEANT, dose distributions, energy fluence maps and energy spectra at different portal planes were calculated for three different MLC applications. RESULTS: The comparison of MC-calculated dose distributions in the phantom and portal plane with those measured with films showed agreement within 3% and 1.5 mm for all cases studied. The deviations mainly occur at the extremes of the intensity modulation. The MC method allows the investigation of, among other aspects, dose components, energy fluence maps, tongue-and-groove effects and energy spectra at portal planes. CONCLUSION: The MSM, together with the implementation of the MLC, is appropriate for a number of investigations in intensity-modulated radiation therapy (IMRT).
Abstract:
Introduction: Commercial treatment planning systems employ a variety of dose calculation algorithms to plan and predict the dose distributions a patient receives during external beam radiation therapy. Traditionally, the Radiological Physics Center (RPC) has relied on measurements to assure that institutions participating in National Cancer Institute sponsored clinical trials administer radiation in doses that are clinically comparable to those of other participating institutions. To complement the effort of the RPC, an independent dose calculation tool needs to be developed that will enable a generic method to determine patient dose distributions in three dimensions and to perform retrospective analysis of radiation delivered to patients enrolled in past clinical trials. Methods: A multi-source model representing output for Varian 6 MV and 10 MV photon beams was developed and evaluated. The Monte Carlo algorithm known as the Dose Planning Method (DPM) was used to perform the dose calculations. The dose calculations were compared to measurements made in a water phantom and in anthropomorphic phantoms. Intensity modulated radiation therapy and stereotactic body radiation therapy techniques were used with the anthropomorphic phantoms. Finally, past patient treatment plans were selected and recalculated using DPM and contrasted against a commercial dose calculation algorithm. Results: The multi-source model was validated for the Varian 6 MV and 10 MV photon beams. The benchmark evaluations demonstrated the ability of the model to accurately calculate dose for both source models. The patient calculations showed that the model was reproducible in determining dose under conditions similar to those of the benchmark tests. Conclusions: A dose calculation tool relying on a multi-source model approach and using the DPM code to calculate dose was developed, validated, and benchmarked for the Varian 6 MV and 10 MV photon beams. Several patient dose distributions were contrasted against a commercial algorithm to provide a proof of principle for its use as an application in monitoring clinical trial activity.
Abstract:
PURPOSE: Modulated electron radiotherapy (MERT) promises sparing of organs at risk for certain tumor sites. Any implementation of MERT treatment planning requires an accurate beam model. The aim of this work is the development of a beam model which reconstructs electron fields shaped using the Millennium photon multileaf collimator (MLC) (Varian Medical Systems, Inc., Palo Alto, CA) for a Varian linear accelerator (linac). METHODS: This beam model is divided into an analytical part (two photon and two electron sources) and a Monte Carlo (MC) transport through the MLC. For dose calculation purposes the beam model has been coupled with a macro MC dose calculation algorithm. The commissioning process requires a set of measurements and precalculated MC input. The beam model has been commissioned at a source to surface distance of 70 cm for a Clinac 23EX (Varian Medical Systems, Inc., Palo Alto, CA) and a TrueBeam linac (Varian Medical Systems, Inc., Palo Alto, CA). For validation purposes, measured and calculated depth dose curves and dose profiles are compared for four different MLC shaped electron fields and all available energies. Furthermore, a measured two-dimensional dose distribution for patched segments consisting of three 18 MeV segments, three 12 MeV segments, and a 9 MeV segment is compared with corresponding dose calculations. Finally, measured and calculated two-dimensional dose distributions are compared for a circular segment encompassed with a C-shaped segment. RESULTS: For 15 × 34, 5 × 5, and 2 × 2 cm² fields, differences between water phantom measurements and calculations using the beam model coupled with the macro MC dose calculation algorithm are generally within 2% of the maximal dose value or 2 mm distance to agreement (DTA) for all electron beam energies. For a more complex MLC pattern, differences between measurements and calculations are generally within 3% of the maximal dose value or 3 mm DTA for all electron beam energies. For the two-dimensional dose comparisons, the differences between calculations and measurements are generally within 2% of the maximal dose value or 2 mm DTA. CONCLUSIONS: The results of the dose comparisons suggest that the developed beam model is suitable to accurately reconstruct photon MLC shaped electron beams for a Clinac 23EX and a TrueBeam linac. Hence, in future work the beam model will be utilized to investigate the possibilities of MERT using the photon MLC to shape electron beams.
Abstract:
Monte Carlo integration is firmly established as the basis for most practical realistic image synthesis algorithms because of its flexibility and generality. However, the visual quality of rendered images often suffers from estimator variance, which appears as visually distracting noise. Adaptive sampling and reconstruction algorithms reduce variance by controlling the sampling density and aggregating samples in a reconstruction step, possibly over large image regions. In this paper we survey recent advances in this area. We distinguish between “a priori” methods that analyze the light transport equations and derive sampling rates and reconstruction filters from this analysis, and “a posteriori” methods that apply statistical techniques to sets of samples to drive the adaptive sampling and reconstruction process. They typically estimate the errors of several reconstruction filters, and select the best filter locally to minimize error. We discuss advantages and disadvantages of recent state-of-the-art techniques, and provide visual and quantitative comparisons. Some of these techniques are proving useful in real-world applications, and we aim to provide an overview for practitioners and researchers to assess these approaches. In addition, we discuss directions for potential further improvements.
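As a minimal example of the "a posteriori" strategy (estimating the per-pixel error of several candidate filters and keeping the local winner), the sketch below filters a noisy Monte Carlo rendering with Gaussian kernels of increasing radius and selects, per pixel, the filter with the lowest Stein-style error estimate; the filter bank and error estimator are simplified stand-ins for the techniques surveyed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def select_filters_sure(noisy, var, radii=(0.5, 1.0, 2.0, 4.0)):
    """Per-pixel filter selection for a Monte Carlo rendering.

    noisy : (H, W) per-pixel mean of the MC samples
    var   : (H, W) variance of that mean (sample variance / sample count)
    Returns the reconstructed image and the index of the winning filter.
    """
    h, w = noisy.shape
    candidates, errors = [], []
    for s in radii:
        filtered = gaussian_filter(noisy, s)
        # diagonal element of the linear filter: response to a unit impulse
        delta = np.zeros_like(noisy)
        delta[h // 2, w // 2] = 1.0
        w_center = gaussian_filter(delta, s)[h // 2, w // 2]
        # Stein-style unbiased estimate of the per-pixel squared error
        err = (filtered - noisy) ** 2 - var + 2.0 * var * w_center
        candidates.append(filtered)
        errors.append(err)
    candidates = np.stack(candidates)           # (K, H, W)
    errors = np.stack(errors)
    best = np.argmin(errors, axis=0)            # per-pixel winning filter
    return np.take_along_axis(candidates, best[None], axis=0)[0], best
```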
Abstract:
With the ongoing shift in the computer graphics industry toward Monte Carlo rendering, there is a need for effective, practical noise-reduction techniques that are applicable to a wide range of rendering effects and easily integrated into existing production pipelines. This course surveys recent advances in image-space adaptive sampling and reconstruction algorithms for noise reduction, which have proven very effective at reducing the computational cost of Monte Carlo techniques in practice. These approaches leverage advanced image-filtering techniques with statistical methods for error estimation. They are attractive because they can be integrated easily into conventional Monte Carlo rendering frameworks, they are applicable to most rendering effects, and their computational overhead is modest.
Abstract:
Uveal melanoma is a rare but life-threatening form of ocular cancer. Contemporary treatment techniques include proton therapy, which enables conservation of the eye and its useful vision. Dose to the proximal structures is widely believed to play a role in treatment side effects; therefore, reliable dose estimates are required for properly evaluating the therapeutic value and complication risk of treatment plans. Unfortunately, current simplistic dose calculation algorithms can result in errors of up to 30% in the proximal region. In addition, they lack predictive methods for absolute dose per monitor unit (D/MU) values. To facilitate more accurate dose predictions, a Monte Carlo model of an ocular proton nozzle was created and benchmarked against measured dose profiles to within ±3% or ±0.5 mm and against D/MU values to within ±3%. The benchmarked Monte Carlo model was used to develop and validate a new broad beam dose algorithm that included the influence of edge-scattered protons on the cross-field intensity profile, the effect of energy straggling in the distal portion of poly-energetic beams, and the proton fluence loss as a function of residual range. Generally, the analytical algorithm predicted relative dose distributions that were within ±3% or ±0.5 mm and absolute D/MU values that were within ±3% of Monte Carlo calculations. Slightly larger dose differences were observed at depths less than 7 mm, an effect attributed to the dose contributions of edge-scattered protons. Additional comparisons of Monte Carlo and broad beam dose predictions were made in a detailed eye model developed in this work, with generally similar findings. Monte Carlo was shown to be an excellent predictor of the measured dose profiles and D/MU values and a valuable tool for developing and validating a broad beam dose algorithm for ocular proton therapy. The more detailed physics modeling by the Monte Carlo and broad beam dose algorithms represents an improvement in the accuracy of relative dose predictions over current techniques, and both algorithms provide absolute dose predictions. It is anticipated that these improvements can be used to develop treatment strategies that reduce the incidence or severity of treatment complications by sparing normal tissue.
Abstract:
Object kinetic Monte Carlo (OKMC) models allow the evolution of the damage created by irradiation to be studied up to time scales that are comparable to those achieved experimentally. The essential OKMC parameters can therefore be validated through comparison with experiments. However, this validation is not trivial, since a large number of parameters is necessary, including migration energies of point defects and their clusters, binding energies of point defects in clusters, as well as the interaction radii. This is particularly cumbersome when describing an alloy, such as the Fe–Cr system, which is of interest for fusion energy applications. In this work we describe an OKMC model for Fe–Cr alloys in the dilute limit. The parameters used in the model come either from density functional theory calculations or from empirical interatomic potentials. This model is used to reproduce isochronal resistivity recovery experiments on electron-irradiated dilute Fe–Cr alloys performed by Abe and Kuramoto. The comparison between the calculated results and the experiments reveals that an important parameter is the capture radius between substitutional Cr and self-interstitial Fe atoms. A parametric study is presented on the effect of the capture radius on the simulated recovery curves.
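For readers unfamiliar with the method, one step of a much simplified object kinetic Monte Carlo simulation can be sketched as follows; the Arrhenius rates, attempt frequency and cubic-lattice jumps are illustrative and not the Fe–Cr parameterisation of this work.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def kmc_step(objects, temperature, rng, nu0=1e13):
    """One residence-time (BKL) step of a toy object kinetic Monte Carlo run.

    `objects` is a list of dicts, each with a position ("pos", an integer
    numpy array of length 3) and a migration energy ("E_mig", in eV).
    Returns the elapsed physical time of the step.
    """
    # Arrhenius migration rates for every mobile object
    rates = np.array([nu0 * np.exp(-o["E_mig"] / (K_B * temperature))
                      for o in objects])
    total = rates.sum()
    # choose which defect jumps, with probability proportional to its rate
    i = rng.choice(len(objects), p=rates / total)
    # random nearest-neighbour jump on a simple cubic lattice (illustrative)
    step = rng.choice([-1, 1]) * np.eye(3, dtype=int)[rng.integers(3)]
    objects[i]["pos"] = objects[i]["pos"] + step
    # advance the clock by an exponentially distributed residence time
    return -np.log(rng.uniform()) / total
```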
Abstract:
The uncertainty propagation in fuel cycle calculations due to Nuclear Data (ND) is an important issue both for present fuel cycles (e.g. the high burnup fuel programme) and for new fuel cycle designs (e.g. fast breeder reactors and ADS). Different error propagation techniques can be used: sensitivity analysis, the Response Surface Method, and the Monte Carlo technique. In this paper, the impact of ND uncertainties on decay heat and radiotoxicity is assessed in two applications: the Fission Pulse Decay Heat calculation (FPDH) and the conceptual design of the European Facility for Industrial Transmutation (EFIT).
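A generic Monte Carlo propagation of nuclear-data uncertainties, in the spirit of the technique listed above, might look like the following sketch; the two-nuclide decay-heat model and its parameters are purely illustrative.

```python
import numpy as np

def propagate_nd_uncertainty(nominal, cov, model, n_samples=1000, rng=None):
    """Monte Carlo propagation of nuclear-data uncertainties (generic sketch).

    nominal : 1-D array of nominal nuclear-data parameters
    cov     : their covariance matrix
    model   : function mapping a parameter vector to a response, e.g. decay
              heat at a given cooling time (hypothetical here)
    Returns the sample mean and standard deviation of the response.
    """
    rng = rng or np.random.default_rng(0)
    samples = rng.multivariate_normal(nominal, cov, size=n_samples)
    responses = np.array([model(p) for p in samples])
    return responses.mean(), responses.std(ddof=1)

# Toy two-nuclide decay-heat model (illustrative only): the decay constants
# lam1, lam2 and the energies q1, q2 play the role of the "nuclear data".
def toy_decay_heat(params, t=100.0, n0=(1e20, 5e19)):
    lam1, lam2, q1, q2 = params
    return (q1 * lam1 * n0[0] * np.exp(-lam1 * t)
            + q2 * lam2 * n0[1] * np.exp(-lam2 * t))
```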