963 results for Monte-Carlo Simulation Method
Abstract:
The phase behavior and interfacial properties of polymers in supercritical solution are investigated using a coarse-grained bead-spring model for the reference system hexadecane-CO2. The parameters of the potential are determined by matching the critical points of simulation and experiment. Interactions between the two components are modeled by a modified Lorentz-Berthelot rule. The agreement with experiment is very good; in particular, the phase diagram of the mixture, including the critical lines, is reproduced. A comparison with numerical perturbation calculations (TPT1) yields qualitative agreement and indicates how the equation of state employed can be improved. Building on these considerations, the early stages of nucleation are investigated. For the Lennard-Jones system, the transition from a homogeneous gas to a single droplet in a finite volume is demonstrated and quantified directly for the first time. The free energy of small clusters is determined with a simple classical nucleation model and bounded from above. The presented investigations were made possible by a further development of the umbrella sampling algorithm, in which the simulation is divided into several simulation windows that are processed successively. The method allows the free-energy landscape to be determined at an arbitrary point of the phase diagram. The error is controllable and independent of the choice of subdivision.
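The windowed umbrella sampling scheme can be illustrated with a minimal sketch. The toy model below is hypothetical (a quadratic stand-in for the cluster free energy, not the thesis code): each two-state window [n, n+1] is sampled in turn and the free-energy differences are chained, mirroring the successive-window idea.

```python
import math
import random

# Toy "energy landscape" (in units of kT) standing in for the cluster free
# energy; in the thesis this would come from the actual particle simulation.
def f(n):
    return 0.05 * (n - 10) ** 2

def sample_window(n, steps=20000, rng=random.Random(1)):
    """Metropolis MC restricted to the two states {n, n+1} of one window."""
    counts = [0, 0]
    state = 0  # 0 -> n, 1 -> n+1
    for _ in range(steps):
        other = 1 - state
        dE = f(n + other) - f(n + state)
        if dE <= 0 or rng.random() < math.exp(-dE):
            state = other
        counts[state] += 1
    return counts

# Chain the windows: F(n+1) - F(n) = -ln( P(n+1) / P(n) )
F = [0.0]
for n in range(20):
    c = sample_window(n)
    F.append(F[-1] - math.log(c[1] / c[0]))

for n, Fn in enumerate(F):
    print(n, round(Fn, 3), round(f(n) - f(0), 3))  # estimate vs exact
```

Because each window is sampled independently, the statistical error of each increment is controlled locally, which is the property claimed in the abstract.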
Abstract:
Supernovae are among the most energetic events occurring in the universe and are so far the only verified extrasolar source of neutrinos. As the explosion mechanism is still not well understood, recording a burst of neutrinos from such a stellar explosion would be an important benchmark for particle physics as well as for core collapse models. The neutrino telescope IceCube is located at the geographic South Pole and monitors the Antarctic glacier for Cherenkov photons. Even though it was conceived for the detection of high energy neutrinos, it is capable of identifying a burst of low energy neutrinos emitted by a supernova in the Milky Way by exploiting the low photomultiplier noise in the Antarctic ice and extracting a collective rate increase. A signal Monte Carlo specifically developed for water Cherenkov telescopes is presented. With its help, we investigate how well IceCube can distinguish between core collapse models and oscillation scenarios. In the second part, nine years of data taken with the IceCube precursor AMANDA are analyzed. Intensive data cleaning methods are presented along with a background simulation. From the result, an upper limit on the expected occurrence of supernovae within the Milky Way is determined.
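The extraction of a "collective rate increase" can be sketched as an inverse-variance-weighted estimate of a rate deviation common to all optical modules. The module count, noise rates and injected signal below are hypothetical, and the real analysis involves deadtime and correlated-noise corrections.

```python
import numpy as np

rng = np.random.default_rng(0)

n_dom = 5000                              # optical modules (hypothetical)
mu = rng.normal(540.0, 30.0, n_dom)       # per-module mean noise rate [Hz]
sigma = np.sqrt(mu)                       # Poisson-like spread per time bin

# One time bin of measured rates, with an injected common 1 Hz excess.
r = rng.normal(mu, sigma) + 1.0

# Maximum-likelihood estimate of a deviation common to all modules,
# weighting each module by its inverse variance.
w = 1.0 / sigma**2
dmu = np.sum(w * (r - mu)) / np.sum(w)
dmu_err = 1.0 / np.sqrt(np.sum(w))
print(f"collective deviation = {dmu:.3f} +/- {dmu_err:.3f} Hz "
      f"-> significance {dmu / dmu_err:.1f} sigma")
```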
Abstract:
Despite the scientific achievements of the last decades in astrophysics and cosmology, the majority of the energy content of the Universe is still unknown. A potential solution to the "missing mass problem" is the existence of dark matter in the form of WIMPs. Due to the very small cross section for WIMP-nucleon interactions, the number of expected events is very limited (about 1 event/tonne/year), thus requiring detectors with a large target mass and a low background level. The aim of the XENON1T experiment, the first tonne-scale LXe-based detector, is to be sensitive to WIMP-nucleon cross sections as low as 10^-47 cm^2. To investigate whether such a detector can reach this goal, Monte Carlo simulations are mandatory to estimate the background. To this aim, the GEANT4 toolkit has been used to implement the detector geometry and to simulate the decays from the various background sources, electromagnetic and nuclear. From the analysis of the simulations, the background level has been found to be fully acceptable for the purposes of the experiment: about 1 background event in a 2 tonne-year exposure. Using the Maximum Gap method, the XENON1T sensitivity has been evaluated, and the minimum of the WIMP-nucleon cross section curve is found at 1.87 x 10^-47 cm^2, at 90% CL, for a WIMP mass of 45 GeV/c^2. The results have been cross-checked independently with the Likelihood Ratio method, which confirmed them with an agreement within less than a factor of two; this is entirely acceptable considering the intrinsic differences between the two statistical methods. Thus, this PhD thesis demonstrates that the XENON1T detector will be able to reach its design sensitivity, lowering the limits on the WIMP-nucleon cross section by about two orders of magnitude with respect to current experiments.
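The Maximum Gap construction can be reproduced in form (not in the quoted numbers, which require the full XENON1T signal model) using Yellin's formula: C0(x, mu) = sum_{k=0}^{floor(mu/x)} [(kx - mu)^k e^{-kx} / k!] (1 + k/(mu - kx)) is the probability that the largest gap variable is below x when mu signal events are expected, and the 90% CL upper limit is the mu at which C0 = 0.9. A sketch:

```python
import math

def c0(x, mu):
    """Yellin's C0(x, mu), written with the pole at mu = kx removed:
    (kx-mu)^k (1 + k/(mu-kx)) = d^k - k d^(k-1) with d = kx - mu."""
    s = 0.0
    for k in range(int(mu / x) + 1):
        d = k * x - mu
        s += math.exp(-k * x) / math.factorial(k) * (d**k - k * d**(k - 1))
    return s

def upper_limit(gap_fraction, cl=0.90):
    """Bisect for the signal expectation mu excluded at the given CL,
    given the largest observed gap as a fraction of the full exposure."""
    lo, hi = 1e-6, 100.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if c0(gap_fraction * mid, mid) < cl:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sanity check: with no events the gap spans the full exposure and the
# limit reduces to the Poisson result -ln(0.1) ~ 2.30 expected events.
print(upper_limit(1.0))   # ~2.303
print(upper_limit(0.4))   # limit when the largest gap is 40% of the exposure
```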
Abstract:
This thesis develops and combines various numerical methods and applies them to problems of strongly correlated electron systems. Such materials exhibit many interesting physical properties, such as superconductivity and magnetic order, and play an important role in technical applications. Two different models are treated: the Hubbard model and the Kondo lattice model (KLM). Over the last decades, many insights have already been gained from numerical solutions of these models. Nevertheless, the physical origin of many effects remains hidden, because current methods are restricted to certain parameter regimes. One of the strongest restrictions is the lack of efficient algorithms for low temperatures.

Based on the Blankenbecler-Scalapino-Sugar quantum Monte Carlo (BSS-QMC) algorithm, we present a numerically exact method that solves the Hubbard model and the KLM efficiently at very low temperatures. This method is applied to the Mott transition in the two-dimensional Hubbard model. In contrast to earlier studies, we can clearly rule out a Mott transition at finite temperature and finite interaction strength.

On the basis of this exact BSS-QMC algorithm, we have developed an impurity solver for dynamical mean-field theory (DMFT) and its cluster extensions (CDMFT). DMFT is the predominant theory for strongly correlated systems in which conventional band-structure calculations fail. A main limitation is the availability of efficient impurity solvers for the intrinsic quantum problem. The algorithm developed in this work shows the same superior scaling with inverse temperature as BSS-QMC. We study the Mott transition within DMFT and analyze the influence of systematic errors on this transition.

Another prominent issue is the neglect of non-local interactions in DMFT. To address it, we combine direct BSS-QMC lattice calculations with CDMFT for the half-filled two-dimensional anisotropic Hubbard model, the doped Hubbard model and the KLM. The results differ strongly between the models: while non-local correlations play an important role in the two-dimensional (anisotropic) model, the momentum dependence of the self-energy in the paramagnetic phase is much weaker for strongly doped systems and for the KLM. A remarkable finding is that the self-energy can be parametrized by the non-interacting dispersion. This special structure of the self-energy in momentum space can be very useful for classifying electronic correlation effects and opens a route towards new schemes beyond the limits of DMFT.
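The reported parametrization of the self-energy by the non-interacting dispersion, Sigma(k, iw_n) ~ Sigma(eps_k, iw_n), can be made concrete: the lattice Green's function then needs Sigma only on a one-dimensional energy grid. The sketch below uses an invented model self-energy (not the thesis data) on a 2D square lattice.

```python
import numpy as np

t = 1.0                       # hopping; 2D square-lattice dispersion
beta = 20.0                   # inverse temperature
wn = np.pi / beta * (2 * np.arange(64) + 1)   # fermionic Matsubara freqs

# Toy self-energy that depends on k only through eps_k (the structure
# reported above); an atomic-limit-like form serves as a placeholder.
U = 4.0
def sigma(eps, iwn):
    return U**2 / (4.0 * (iwn - 0.1 * eps))

# Momentum grid and dispersion eps_k = -2t (cos kx + cos ky)
k = np.linspace(-np.pi, np.pi, 32, endpoint=False)
kx, ky = np.meshgrid(k, k)
eps = -2 * t * (np.cos(kx) + np.cos(ky))

# Lattice Green's function with the dispersion-parametrized self-energy
mu = 0.0  # half filling
G = 1.0 / (1j * wn[:, None, None] + mu - eps[None, :, :]
           - sigma(eps[None, :, :], 1j * wn[:, None, None]))

print(G.shape)   # (n_freq, n_kx, n_ky)
```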
Abstract:
The electron Monte Carlo (eMC) dose calculation algorithm in Eclipse (Varian Medical Systems) is based on the macro MC method and is able to predict dose distributions for high energy electron beams with high accuracy. However, there are limitations for low energy electron beams. This work aims to improve the accuracy of the dose calculation using eMC for 4 and 6 MeV electron beams of Varian linear accelerators. Improvements implemented into the eMC include (1) improved determination of the initial electron energy spectrum by increased resolution of the mono-energetic depth dose curves used during beam configuration; (2) inclusion of all scrapers of the applicator in the beam model; (3) reduction of the maximum size of the sphere to be selected within the macro MC transport when the energy of the incident electron is below certain thresholds. The impact of these changes in eMC is investigated by comparing calculated dose distributions for 4 and 6 MeV electron beams of a Varian Clinac 2300 C/D, at source-to-surface distances (SSD) of 100 and 110 cm and with applicators ranging from 6 x 6 to 25 x 25 cm^2, with the corresponding measurements. Dose differences between calculated and measured absolute depth dose curves are reduced from 6% to less than 1.5% for both energies and all applicators considered at an SSD of 100 cm. Using the original eMC implementation, absolute dose profiles at depths of 1 cm, d_max and R50 in water lead to dose differences of up to 8% for applicators larger than 15 x 15 cm^2 at SSD 100 cm. Those differences are now reduced to less than 2% for all dose profiles investigated when the improved version of eMC is used. At an SSD of 110 cm the dose differences for the original eMC version are even more pronounced and can be larger than 10%. Those differences are reduced to within 2% or 2 mm with the improved version of eMC. In this work several enhancements were made in the eMC algorithm, leading to significant improvements in the accuracy of the dose calculation for 4 and 6 MeV electron beams of Varian linear accelerators.
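Determining the initial energy spectrum from mono-energetic depth-dose curves is an inverse problem: find non-negative spectral weights whose weighted sum of basis curves matches the measured depth dose. A sketch with synthetic curves and grids (not the Varian implementation), using non-negative least squares:

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic mono-energetic depth-dose curves D_i(z) on a fine energy grid
# (stand-ins for the pre-calculated eMC basis curves).
z = np.linspace(0.0, 4.0, 80)                  # depth [cm]
energies = np.linspace(2.0, 7.0, 26)           # bin centers [MeV]

def mono_pdd(e):
    # crude electron-like depth dose: build-up, peak, steep falloff
    r = e / 2.5                                # rough practical range [cm]
    return np.exp(-((z - 0.6 * r) / (0.45 * r)) ** 2)

A = np.column_stack([mono_pdd(e) for e in energies])

# "Measured" depth dose from a known Gaussian spectrum plus noise
w_true = np.exp(-((energies - 5.0) / 0.7) ** 2)
d_meas = A @ w_true + np.random.default_rng(1).normal(0, 0.002, z.size)

# Non-negative least squares recovers the spectral weights
w_fit, residual = nnls(A, d_meas)
print("residual:", residual)
print("spectrum peak recovered at", energies[np.argmax(w_fit)], "MeV")
```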
Abstract:
This article presents the implementation and validation of a dose calculation approach for deforming anatomical objects. Deformation is represented by deformation vector fields, leading to deformed voxel grids representing the different deformation scenarios. Particle transport in the resulting deformed voxels is handled by approximating the voxel surfaces with triangles in the geometry implementation of the Swiss Monte Carlo Plan framework. The focus lies on the validation methodology, which uses computational phantoms representing the same physical object through regular and irregular voxel grids. These phantoms are chosen such that the new implementation for a deformed voxel grid can be compared directly with an established dose calculation algorithm for regular grids. Furthermore, the voxel geometry and the density changes resulting from deformation are validated separately through suitable design of the validation phantom. We show that equivalent results are obtained with the proposed method and that no statistically significant errors are introduced through the implementation for irregular voxel geometries. This enables the use of the presented and validated implementation for further investigations of dose calculation on deforming anatomy.
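Deformed voxels tie the geometry to mass conservation: if a deformation vector field changes a voxel's volume, its density must be rescaled so that mass is preserved. A sketch under that assumption, with a corner-based hexahedron volume from a standard five-tetrahedra decomposition (not the Swiss Monte Carlo Plan code):

```python
import numpy as np

def tet_volume(a, b, c, d):
    """Signed volume of a tetrahedron."""
    return np.dot(np.cross(b - a, c - a), d - a) / 6.0

def hex_volume(v):
    """Volume of a hexahedral voxel given its 8 corners (VTK-like order),
    decomposed into 5 tetrahedra."""
    tets = [(0, 1, 3, 4), (1, 2, 3, 6), (1, 4, 5, 6),
            (3, 4, 6, 7), (1, 3, 4, 6)]
    return sum(abs(tet_volume(v[i], v[j], v[k], v[l])) for i, j, k, l in tets)

# Unit voxel, then displaced by a toy deformation vector field at the corners
corners = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                    [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], float)
dvf = 0.15 * np.sin(corners.sum(axis=1))[:, None]
deformed = corners + dvf

v0, v1 = hex_volume(corners), hex_volume(deformed)
rho0 = 1.0                                # original density [g/cm^3]
rho1 = rho0 * v0 / v1                     # mass-conserving rescale
print(f"volume {v0:.3f} -> {v1:.3f}, density {rho0:.3f} -> {rho1:.3f}")
```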
Abstract:
The electron Monte Carlo (eMC) dose calculation algorithm available in the Eclipse treatment planning system (Varian Medical Systems) is based on the macro MC method and uses a beam model applicable to Varian linear accelerators. This leads to limitations in accuracy if eMC is applied to non-Varian machines. In this work eMC is generalized to also allow accurate dose calculations for electron beams from Elekta and Siemens accelerators. First, the changes made in the previous study to use eMC for low electron beam energies of Varian accelerators are applied. Then, a generalized beam model is developed, consisting of a main electron source and a main photon source, representing electrons and photons from the scattering foil, respectively; an edge source of electrons; a transmission source of photons; and a line source of electrons and photons, representing the particles from the scrapers or inserts and head-scatter radiation. Regarding the macro MC dose calculation algorithm, the transport code for the secondary particles is improved. The macro MC dose calculations are validated against corresponding dose calculations using EGSnrc in homogeneous and inhomogeneous phantoms. The validation of the generalized eMC is carried out by comparing calculated and measured dose distributions in water for Varian, Elekta and Siemens machines for a variety of beam energies, applicator sizes and SSDs. The comparisons are performed in units of cGy per MU. Overall, agreement between calculated and measured dose distributions for all machine types and all combinations of parameters investigated is found to be within 2% or 2 mm. The results of the dose comparisons suggest that the generalized eMC is now suitable to calculate dose distributions for Varian, Elekta and Siemens linear accelerators with sufficient accuracy in the range of the investigated combinations of beam energies, applicator sizes and SSDs.
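The "2% or 2 mm" criterion is the standard gamma test. A minimal 1D version with toy profiles (clinical tools use 3D variants and global/local normalization options):

```python
import numpy as np

def gamma_1d(x, d_ref, d_eval, dose_tol=0.02, dist_tol=2.0):
    """1D gamma index: for each reference point, the minimum combined
    dose/distance disagreement with the evaluated profile."""
    dd = (d_eval[None, :] - d_ref[:, None]) / (dose_tol * d_ref.max())
    dx = (x[None, :] - x[:, None]) / dist_tol
    return np.sqrt(dd**2 + dx**2).min(axis=1)

x = np.linspace(-50, 50, 201)                 # position [mm]
ref = np.exp(-(x / 30.0) ** 4)                # toy measured profile
ev = np.exp(-((x - 0.8) / 30.2) ** 4)         # toy calculated profile
g = gamma_1d(x, ref, ev)
print(f"gamma pass rate (gamma <= 1): {100 * np.mean(g <= 1):.1f}%")
```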
Abstract:
Permutation tests are useful for drawing inferences from imaging data because of their flexibility and ability to capture features of the brain that are difficult to capture parametrically. However, most implementations of permutation tests ignore important confounding covariates. To employ covariate control in a nonparametric setting, we have developed a Markov chain Monte Carlo (MCMC) algorithm for conditional permutation testing using propensity scores. We present the first use of this methodology for imaging data. Our MCMC algorithm is an extension of algorithms developed to approximate exact conditional probabilities in contingency tables, logit, and log-linear models. An application of our nonparametric method to remove potential bias due to the observed covariates is presented.
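The idea of conditioning on covariates can be approximated by permuting treatment labels only within strata of similar propensity scores; the MCMC algorithm above generalizes this to exact conditional distributions for discrete data. A stratified-permutation sketch on simulated data (logistic propensity model; not the authors' contingency-table MCMC):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400
age = rng.normal(50, 10, n)                     # confounding covariate
p_treat = 1 / (1 + np.exp(-(age - 50) / 10))    # treatment depends on age
treat = rng.random(n) < p_treat
y = 0.05 * age + 0.5 * treat + rng.normal(0, 1, n)   # imaging summary stat

# Propensity scores and quintile strata
ps = (LogisticRegression().fit(age[:, None], treat)
      .predict_proba(age[:, None])[:, 1])
strata = np.digitize(ps, np.quantile(ps, [0.2, 0.4, 0.6, 0.8]))

def stat(t):
    return y[t].mean() - y[~t].mean()

obs = stat(treat)
null = []
for _ in range(2000):
    t = treat.copy()
    for s in range(5):                  # permute only within each stratum
        idx = np.where(strata == s)[0]
        t[idx] = t[rng.permutation(idx)]
    null.append(stat(t))
p_value = np.mean(np.abs(null) >= abs(obs))
print(f"observed diff = {obs:.3f}, conditional p = {p_value:.3f}")
```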
Abstract:
Currently, photon Monte Carlo treatment planning (MCTP) for a patient stored in the patient database of a treatment planning system (TPS) can usually only be performed using a cumbersome multi-step procedure requiring many user interactions; automation is therefore needed for use in clinical routine. In addition, because of the long computing times in MCTP, optimization of the MC calculations is essential. For these purposes a new graphical user interface (GUI)-based photon MC environment has been developed, resulting in a very flexible framework. By this means, appropriate MC transport methods are assigned to different geometric regions while still benefiting from the features included in the TPS. In order to provide a flexible MC environment, the MC particle transport has been divided into different parts: the source, the beam modifiers and the patient. The source part includes the phase-space source, source models and full MC transport through the treatment head. The beam modifier part consists of one module for each beam modifier. To simulate the radiation transport through each individual beam modifier, one out of three full MC transport codes can be selected independently. Additionally, for each beam modifier a simple or an exact geometry can be chosen. Thereby, different complexity levels of radiation transport are applied during the simulation. For the patient dose calculation, two different MC codes are available. A special plug-in in Eclipse, providing all necessary information by means of DICOM streams, is used to start the developed MC GUI. The implementation of this framework separates the MC transport from the geometry, and the modules pass the particles in memory; hence, no files are used as the interface. The implementation is realized for the 6 and 15 MV beams of a Varian Clinac 2300 C/D. Several applications demonstrate the usefulness of the framework. Apart from applications dealing with the beam modifiers, two patient cases are shown, in which MC calculated dose distributions are compared with those calculated by a pencil beam or the AAA algorithm. Interfacing this flexible and efficient MC environment with Eclipse allows widespread use for all kinds of investigations, from timing and benchmarking studies to clinical patient studies. Additionally, it is possible to add modules, keeping the system highly flexible and efficient.
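The "modules pass the particles in memory" design can be sketched as a chain of transport stages sharing one particle buffer, with no files at the interfaces. All names and the toy physics below are hypothetical, not the actual framework API:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Particle:
    x: float
    y: float
    energy: float
    weight: float = 1.0

class BeamModifier:
    """Base class: each modifier transports a particle batch through itself."""
    def transport(self, batch: List[Particle]) -> List[Particle]:
        raise NotImplementedError

class SimpleJaw(BeamModifier):
    """Simple geometry level: absorb everything outside the opening."""
    def __init__(self, half_opening: float):
        self.half = half_opening
    def transport(self, batch):
        return [p for p in batch
                if abs(p.x) < self.half and abs(p.y) < self.half]

class Attenuator(BeamModifier):
    """Stand-in for a full MC transport code through a modifier."""
    def transport(self, batch):
        for p in batch:
            p.weight *= 0.98
        return batch

def run(source: List[Particle], modifiers: List[BeamModifier]):
    batch = source
    for m in modifiers:          # particles handed over in memory, no files
        batch = m.transport(batch)
    return batch

beam = [Particle(x=0.1 * i - 2, y=0.0, energy=6.0) for i in range(40)]
out = run(beam, [SimpleJaw(1.5), Attenuator()])
print(len(out), "particles reach the patient stage")
```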
Abstract:
Today, electronic portal imaging devices (EPIDs) are used primarily to verify patient positioning. However, they also have potential as 2D dosimeters and could be used as such for transit dosimetry or dose reconstruction. It has been shown that such devices, especially liquid-filled ionization chambers, have a stable dose response relationship which can be described in terms of the physical properties of the EPID and the pulsed linac radiation. For absolute dosimetry, however, an accurate method of calibration to an absolute dose is needed. In this work, we concentrate on calibration against dose in a homogeneous water phantom. Using a Monte Carlo model of the detector, we calculated dose spread kernels in units of absolute dose per incident energy fluence and compared them to calculated dose spread kernels in water at different depths. The energy of the incident pencil beams varied between 0.5 and 18 MeV. At the depth of dose maximum in water for a 6 MV beam (1.5 cm) and for an 18 MV beam (3.0 cm) we observed large absolute differences between water and detector dose above an incident energy of 4 MeV, but only small relative differences in the most frequent energy range of the beam energy spectra. It is shown that for a 6 MV beam the absolute reference dose measured at 1.5 cm water depth differs from the absolute detector dose by 3.8%. At a depth of 1.2 cm in water, however, the relative dose differences are almost constant between 2 and 6 MeV. The effects of changes in the energy spectrum of the beam on the dose responses in water and in the detector are also investigated. We show that differences larger than 2% can occur for different beam qualities of the incident photon beam behind water slabs of different thicknesses. It is therefore concluded that for high-precision dosimetry such effects have to be taken into account. Nevertheless, the precise information about the dose response of the detector provided in this Monte Carlo study forms the basis for extracting the basic radiometric quantities, photon fluence and photon energy fluence, directly from the detector's signal using a deconvolution algorithm. The results are therefore promising for future application in absolute transit dosimetry and absolute dose reconstruction.
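The proposed deconvolution can be sketched in 1D: the detector signal is the incident fluence convolved with the dose spread kernel, so a regularized inverse filter recovers the fluence. Kernel shape, noise level and the regularization constant below are invented:

```python
import numpy as np

n = 256
x = np.arange(n) - n // 2

# Synthetic, centered dose spread kernel and a true (open-field) fluence
kernel = np.exp(-np.abs(x) / 3.0)
kernel /= kernel.sum()
fluence = np.where(np.abs(x) < 40, 1.0, 0.0)

K = np.fft.fft(np.fft.ifftshift(kernel))        # kernel centered at index 0
signal = np.real(np.fft.ifft(K * np.fft.fft(fluence)))
signal += np.random.default_rng(2).normal(0, 1e-3, n)   # detector noise

# Wiener-regularized inverse filter: F = S K* / (|K|^2 + eps)
eps = 1e-3
F = np.fft.fft(signal) * np.conj(K) / (np.abs(K) ** 2 + eps)
recovered = np.real(np.fft.ifft(F))
print("max |error| in the flat region:",
      np.max(np.abs(recovered[np.abs(x) < 30] - 1.0)).round(3))
```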
Abstract:
Introduction: Commercial treatment planning systems employ a variety of dose calculation algorithms to plan and predict the dose distributions a patient receives during external beam radiation therapy. Traditionally, the Radiological Physics Center (RPC) has relied on measurements to assure that institutions participating in National Cancer Institute sponsored clinical trials administer radiation in doses that are clinically comparable to those of other participating institutions. To complement the effort of the RPC, an independent dose calculation tool needs to be developed that will enable a generic method to determine patient dose distributions in three dimensions and to perform retrospective analysis of radiation delivered to patients enrolled in past clinical trials. Methods: A multi-source model representing the output of Varian 6 MV and 10 MV photon beams was developed and evaluated. The Monte Carlo algorithm known as the Dose Planning Method (DPM) was used to perform the dose calculations. The dose calculations were compared to measurements made in a water phantom and in anthropomorphic phantoms. Intensity modulated radiation therapy and stereotactic body radiation therapy techniques were used with the anthropomorphic phantoms. Finally, past patient treatment plans were selected, recalculated using DPM, and contrasted against a commercial dose calculation algorithm. Results: The multi-source model was validated for the Varian 6 MV and 10 MV photon beams. The benchmark evaluations demonstrated the ability of the model to accurately calculate dose for both source models. The patient calculations showed that the model was reproducible in determining dose under conditions similar to those described by the benchmark tests. Conclusions: A dose calculation tool that relies on a multi-source model approach and uses the DPM code to calculate dose was developed, validated, and benchmarked for the Varian 6 MV and 10 MV photon beams. Several patient dose distributions were contrasted against a commercial algorithm to provide a proof of principle for its use as an application in monitoring clinical trial activity.
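A multi-source model reduces to sampling: each particle is drawn from one of several sub-sources with fixed relative weights, then assigned position and energy from that source's distributions. A schematic sampler with invented weights and distributions (not the actual DPM source model):

```python
import numpy as np

rng = np.random.default_rng(3)

# Relative weights of the sub-sources (hypothetical values)
sources = {"primary": 0.90, "flattening_filter": 0.08, "collimator": 0.02}
names = list(sources)
probs = np.array(list(sources.values()))
probs /= probs.sum()

def sample_particle():
    src = names[rng.choice(len(names), p=probs)]
    if src == "primary":
        xy = rng.normal(0.0, 0.1, 2)          # tight focal spot [cm]
        e = rng.gamma(3.0, 0.7)               # crude 6 MV-like spectrum
    elif src == "flattening_filter":
        xy = rng.normal(0.0, 1.5, 2)          # broad extra-focal source
        e = rng.gamma(2.0, 0.5)
    else:
        xy = rng.normal(0.0, 3.0, 2)
        e = rng.gamma(2.0, 0.4)
    return src, xy, e

counts = {}
for _ in range(10000):
    src, _, _ = sample_particle()
    counts[src] = counts.get(src, 0) + 1
print(counts)   # ~ proportional to the source weights
```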
Abstract:
PURPOSE: This paper describes the development of a forward planning process for modulated electron radiotherapy (MERT). The approach is based on a previously developed electron beam model used to calculate dose distributions of electron beams shaped by a photon multileaf collimator (pMLC). METHODS: As the electron beam model has already been implemented in the Swiss Monte Carlo Plan environment, the Eclipse treatment planning system (Varian Medical Systems, Palo Alto, CA) can be included in the planning process for MERT. In a first step, CT data are imported into Eclipse and a pMLC-shaped electron beam is set up. This initial electron beam is then divided into segments, with the electron energy in each segment chosen according to the distal depth of the planning target volume (PTV) in the beam direction. In order to improve the homogeneity of the dose distribution in the PTV, a feathering process (Gaussian edge feathering) is launched, which results in a number of feathered segments. For each of these segments a dose calculation is performed employing the in-house developed electron beam model along with the macro Monte Carlo dose calculation algorithm. Finally, an automated weight optimization of all segments is carried out and the total dose distribution is read back into Eclipse for display and evaluation. One academic and two clinical situations are investigated for possible benefits of MERT treatment compared to standard treatments performed in our clinics and to treatment with the bolus electron conformal (BolusECT) method. RESULTS: The MERT treatment plan of the academic case was superior to the standard single-segment electron treatment plan in terms of organ at risk (OAR) sparing. Further, a comparison between an unfeathered and a feathered MERT plan showed better PTV coverage and homogeneity for the feathered plan, with V95% increased from 90% to 96% and V107% decreased from 8% to nearly 0%. For a clinical breast boost irradiation, the MERT plan led to a similar homogeneity in the PTV compared to the standard treatment plan, while the mean body dose was lower for the MERT plan. Regarding the second clinical case, a whole breast treatment, MERT resulted in a reduction of the lung volume receiving more than 45% of the prescribed dose when compared to the standard plan. On the other hand, the MERT plan led to a larger low-dose lung volume and a degraded dose homogeneity in the PTV. For the clinical cases evaluated in this work, treatment plans using the BolusECT technique resulted in a more homogeneous PTV and CTV coverage but higher doses to the OARs than the MERT plans. CONCLUSIONS: MERT treatments were successfully planned for phantom and clinical cases, applying a newly developed, intuitive and efficient forward planning strategy that employs an MC based electron beam model for pMLC-shaped electron beams. It is shown that MERT can lead to a dose reduction in OARs compared to other methods. The process of feathering MERT segments results in an improvement of the dose homogeneity in the PTV.
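Gaussian edge feathering replaces a hard segment edge by sub-segments whose weights follow a Gaussian ramp across the abutment. A sketch of just the weight generation, with hypothetical feathering width, step and sub-segment count:

```python
import math

def feather_weights(n_sub=5, sigma_mm=4.0, step_mm=2.0):
    """Cumulative-Gaussian weights for sub-segments whose edges are
    shifted in steps of step_mm around the nominal field edge."""
    def cdf(u):
        return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))
    offsets = [step_mm * (i - (n_sub - 1) / 2) for i in range(n_sub)]
    # weight of each sub-segment = Gaussian probability mass of its slab
    edges = [o - step_mm / 2 for o in offsets] + [offsets[-1] + step_mm / 2]
    w = [cdf(edges[i + 1] / sigma_mm) - cdf(edges[i] / sigma_mm)
         for i in range(n_sub)]
    total = sum(w)
    return offsets, [wi / total for wi in w]

offsets, weights = feather_weights()
for o, w in zip(offsets, weights):
    print(f"edge shift {o:+.1f} mm -> weight {w:.3f}")
```

Superposing the shifted sub-segments with these weights turns the step-function edge into an approximately Gaussian-blurred penumbra, which is what smooths the abutment between neighboring energy segments.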
Abstract:
In this second part of our comparative study inspecting the (dis)similarities between “Stokes” and “Jones,” we present simulation results yielded by two independent Monte Carlo programs: (i) one developed in Bern with the Jones formalism and (ii) the other implemented in Ulm with the Stokes notation. The simulated polarimetric experiments involve suspensions of polystyrene spheres of varying size. Reflection and refraction at the sample/air interfaces are also considered. Both programs yield identical results when propagating pure polarization states; yet, with unpolarized illumination, second-order statistical differences appear, thereby highlighting the pre-averaged nature of the Stokes parameters. This study serves as a validation for both programs and dispels the misleading belief that “Jones cannot treat depolarizing effects.”
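The second-order discrepancy has a compact illustration: a Jones program represents unpolarized input as an ensemble of random pure states averaged after propagation, whereas the Stokes formalism pre-averages before propagation. A sketch of the Jones-to-Stokes conversion and ensemble averaging (sign conventions for S3 vary between textbooks; the element is a quarter-wave-plate-like unitary):

```python
import numpy as np

rng = np.random.default_rng(4)

def jones_to_stokes(E):
    ex, ey = E
    return np.array([abs(ex)**2 + abs(ey)**2,
                     abs(ex)**2 - abs(ey)**2,
                     2 * (ex * np.conj(ey)).real,
                     -2 * (ex * np.conj(ey)).imag])

# A non-depolarizing optical element (unitary Jones matrix)
J = 0.5 * np.array([[1 + 1j, 1 - 1j],
                    [1 - 1j, 1 + 1j]])

# Unpolarized input as an ensemble of random pure polarization states
S = np.zeros(4)
n = 20000
for _ in range(n):
    E = rng.normal(size=2) + 1j * rng.normal(size=2)
    E /= np.linalg.norm(E)          # unit-intensity pure state
    S += jones_to_stokes(J @ E)
print(S / n)   # ensemble-averaged Stokes vector after the element
```

The first moment converges to the Stokes result; differences between the two formalisms can only show up in higher-order statistics of such ensembles.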
Abstract:
Uncertainty propagation in fuel cycle calculations due to Nuclear Data (ND) is an important issue for:
• present fuel cycles (e.g. the high-burnup fuel programme);
• new fuel cycle designs (e.g. fast breeder reactors and ADS).
Different error propagation techniques can be used:
• sensitivity analysis;
• the Response Surface Method;
• the Monte Carlo technique.
In this paper, the impact of ND uncertainties on decay heat and radiotoxicity is assessed in two applications:
• Fission Pulse Decay Heat calculation (FPDH);
• conceptual design of the European Facility for Industrial Transmutation (EFIT).
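The Monte Carlo technique in the list above amounts to resampling the nuclear data within their uncertainties and rerunning the decay calculation each time. A toy two-nuclide decay-heat version with invented decay constants, energies and uncertainty levels:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy inventory: two independently decaying nuclides, decay constants with
# 5% / 10% (1 sigma) relative ND uncertainty (all values invented).
lam0 = np.array([1e-2, 3e-4])     # nominal decay constants [1/s]
rel_u = np.array([0.05, 0.10])
q = np.array([0.8, 1.6])          # energy release per decay [MeV]
n0 = np.array([1e15, 1e15])       # initial atoms
t = 3600.0                        # cooling time [s]

samples = []
for _ in range(10000):
    lam = lam0 * (1 + rel_u * rng.normal(size=2))   # sampled nuclear data
    heat = np.sum(q * lam * n0 * np.exp(-lam * t))  # activity x energy
    samples.append(heat)

samples = np.array(samples)
print(f"decay heat = {samples.mean():.3e} MeV/s "
      f"+/- {100 * samples.std() / samples.mean():.1f}% (ND uncertainty)")
```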
Abstract:
The new Spanish regulation on building acoustics establishes values and limits for the different acoustic quantities, whose fulfillment can be verified by means of field measurements. In this sense, an essential aspect of a field measurement is to report the measured quantity together with its associated uncertainty. To calculate the uncertainty, it is very common to follow the uncertainty propagation method described in the Guide to the Expression of Uncertainty in Measurement (GUM). Another option is numerical calculation based on the distribution propagation method by means of Monte Carlo simulation; indeed, several publications develop this method using different software programs. In the present work, we used Excel to perform the Monte Carlo simulation of the uncertainty associated with the different quantities derived from field measurements following ISO 140-4, 140-5 and 140-7, and we compare the results with those obtained by the uncertainty propagation method. Although both methods give similar values, some small differences have been observed. Arguments to explain such differences include the asymmetry of the probability distributions associated with the input quantities and the overestimation of the uncertainty by the GUM approach.
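The comparison can be reproduced in miniature for an ISO 140-4-type quantity, e.g. the apparent sound reduction index R' = L1 − L2 + 10·log10(S/A): the GUM propagates first-order sensitivities, u²(R') = u²(L1) + u²(L2) + (10/ln 10)²[(u(S)/S)² + (u(A)/A)²], while the Monte Carlo method samples the input distributions directly. Input values and uncertainties below are invented:

```python
import numpy as np

rng = np.random.default_rng(6)

# Airborne sound insulation, ISO 140-4 style (invented example values):
#   R' = L1 - L2 + 10*log10(S/A)
L1, uL1 = 92.0, 0.5      # source room level [dB]
L2, uL2 = 58.0, 0.7      # receiving room level [dB]
S, uS = 10.0, 0.2        # partition area [m^2]
A, uA = 8.0, 0.8         # equivalent absorption area [m^2]

# GUM (first-order) propagation
c = 10.0 / np.log(10.0)
uR_gum = np.sqrt(uL1**2 + uL2**2 + (c * uS / S)**2 + (c * uA / A)**2)

# Monte Carlo (distribution propagation), normal inputs assumed
n = 200000
R = (rng.normal(L1, uL1, n) - rng.normal(L2, uL2, n)
     + 10 * np.log10(rng.normal(S, uS, n) / rng.normal(A, uA, n)))
print(f"R' = {R.mean():.2f} dB; "
      f"u_MC = {R.std():.3f} dB, u_GUM = {uR_gum:.3f} dB")
```

With the logarithmic term, the MC output distribution is slightly asymmetric, which is exactly the kind of effect invoked above to explain the small differences between the two methods.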