973 results for Monte-carlo Calculations


Relevance: 100.00%

Abstract:

Liquid configurations generated by Metropolis Monte Carlo simulations are used in time-dependent density functional theory calculations of the spectral line shifts and line profiles of the lowest-lying excitation of the alkaline-earth atoms Be, Mg, Ca, Sr and Ba embedded in liquid helium. The results are in very good agreement with the available experimental data. Special attention is given to the calculated spectroscopic shift and the associated line broadening. The analysis resolves the inhomogeneous broadening into the three separate contributions arising from the splitting of the s → p transition of the alkaline-earth atom in the liquid environment.
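
As a rough illustration of the Metropolis step that generates such liquid configurations, here is a minimal sketch in Python. It assumes a generic Lennard-Jones-like pair potential in reduced units; the actual He-He and dopant-He interactions, system sizes and temperatures used in the study are not reproduced.

```python
# Minimal Metropolis Monte Carlo sketch for sampling liquid configurations.
# Illustrative only: a generic Lennard-Jones pair potential in reduced units.
import numpy as np

rng = np.random.default_rng(0)

def pair_energy(r2, eps=1.0, sigma=1.0):
    """Lennard-Jones energy as a function of the squared distance r2."""
    s6 = (sigma**2 / r2) ** 3
    return 4.0 * eps * (s6**2 - s6)

def total_energy(x, box):
    """Sum of pair energies with minimum-image periodic boundary conditions."""
    e = 0.0
    for i in range(len(x) - 1):
        d = x[i + 1:] - x[i]
        d -= box * np.round(d / box)            # minimum image
        e += np.sum(pair_energy(np.sum(d * d, axis=1)))
    return e

def metropolis(n_atoms=32, box=6.0, beta=2.0, max_disp=0.2, n_sweeps=200):
    x = rng.uniform(0.0, box, size=(n_atoms, 3))
    e = total_energy(x, box)
    configs = []
    for sweep in range(n_sweeps):
        for i in range(n_atoms):
            trial = x.copy()
            trial[i] = (trial[i] + rng.uniform(-max_disp, max_disp, size=3)) % box
            e_trial = total_energy(trial, box)
            de = e_trial - e
            # Metropolis acceptance: accept with probability min(1, exp(-beta*dE))
            if de <= 0 or rng.random() < np.exp(-beta * de):
                x, e = trial, e_trial
        if sweep % 10 == 0:
            configs.append(x.copy())            # snapshots passed on to later steps
    return configs

snapshots = metropolis()
print(f"stored {len(snapshots)} configurations")
```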

Relevance: 100.00%

Abstract:

This work is concerned with the numerical calculation of loop integrals that arise at higher orders of perturbation theory. In analogy with the real emission, subtraction terms can be introduced for the virtual contributions that remove the collinear and soft divergences of the loop integral. The phase-space integration and the loop integration can then be performed in a single Monte Carlo integration. In this work we show how such a numerical integration can be carried out with the help of a contour deformation. We also show how the required integrands can be computed using recursion relations.
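
A minimal sketch of the contour-deformation idea, for a toy one-dimensional Feynman-parameter integral rather than the multi-loop integrands treated in the thesis. The integrand, the deformation strength and the analytic comparison value (the Sokhotski-Plemelj imaginary part) are illustrative assumptions.

```python
# Toy integral I(s) = int_0^1 dx / (m^2 - s x(1-x) - i0).  For s > 4 m^2 the
# denominator vanishes on the real axis, so the contour is deformed into the
# complex plane, x -> z(x) = x + i*kappa(x), with kappa chosen so that
# Im D < 0 on the contour, consistent with the -i0 prescription.
import numpy as np

rng = np.random.default_rng(1)
m2, s, lam = 1.0, 9.0, 0.5            # toy mass^2, invariant s, deformation strength

def D(z):
    return m2 - s * z * (1.0 - z)

def Dprime(x):
    return -s * (1.0 - 2.0 * x)

def deformed_point(x):
    """Deformed contour z(x) and its Jacobian dz/dx (kappa vanishes at x = 0, 1)."""
    kappa = -lam * x * (1.0 - x) * Dprime(x)   # Im D = kappa*D' = -lam x(1-x) D'^2 <= 0
    dkappa = -lam * ((1.0 - 2.0 * x) * Dprime(x) + x * (1.0 - x) * 2.0 * s)
    return x + 1j * kappa, 1.0 + 1j * dkappa

def mc_integral(n=200_000):
    x = rng.random(n)
    z, jac = deformed_point(x)
    vals = jac / D(z)
    return vals.mean(), vals.std(ddof=1) / np.sqrt(n)

est, err = mc_integral()
beta = np.sqrt(1.0 - 4.0 * m2 / s)
print("MC estimate          :", est, "+-", err)
print("exact imaginary part :", 2.0 * np.pi / (s * beta))   # Sokhotski-Plemelj
```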

Relevance: 100.00%

Abstract:

The goal of this thesis is the acceleration of numerical calculations of QCD observables, both at leading order and next-to-leading order in the coupling constant. In particular, the optimization of helicity and spin summation in the context of VEGAS Monte Carlo algorithms is investigated. In the literature, two such methods are mentioned but without detailed analyses. Only one of these methods can be used at next-to-leading order. This work presents a total of five different methods that replace the helicity sums with a Monte Carlo integration. This integration can be combined with the existing phase space integral, in the hope that this causes less overhead than the complete summation. For three of these methods, an extension to existing subtraction terms is developed which is required to enable next-to-leading-order calculations. All methods are analyzed with respect to efficiency, accuracy, and ease of implementation before they are compared with each other. In this process, one method shows clear advantages in relation to all others.
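
The following sketch illustrates the general idea of trading the explicit helicity sum for a Monte Carlo estimate combined with the phase-space sampling. The toy "matrix element" and the uniform sampling of helicity configurations are assumptions made for illustration; this is not any of the five methods developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)
N_PARTICLES = 5                                # toy: 2^5 = 32 helicity configurations

def matrix_element_sq(x, hel):
    """Toy |M|^2: a positive function of one phase-space variable x in [0, 1]
    and a helicity configuration hel in {-1, +1}^n (purely illustrative)."""
    return (1.0 + 0.3 * float(np.sum(hel))) ** 2 * (1.0 + x * x)

def full_helicity_sum(x):
    """Reference: explicit sum over all 2^n helicity configurations."""
    total = 0.0
    for bits in range(2 ** N_PARTICLES):
        hel = [1 if (bits >> k) & 1 else -1 for k in range(N_PARTICLES)]
        total += matrix_element_sq(x, hel)
    return total

def combined_mc(n_events=50_000):
    """Sample the phase-space variable and ONE helicity configuration per event,
    weighted by the number of configurations: an unbiased estimator of
    int_0^1 dx sum_hel |M|^2 without ever performing the full sum."""
    n_hel = 2 ** N_PARTICLES
    acc = 0.0
    for _ in range(n_events):
        x = rng.random()
        hel = rng.choice((-1, 1), size=N_PARTICLES)
        acc += n_hel * matrix_element_sq(x, hel)
    return acc / n_events

reference = np.mean([full_helicity_sum(x) for x in rng.random(2000)])
print("sampled helicities:", combined_mc())
print("full helicity sum :", reference)
```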

Relevance: 100.00%

Abstract:

Power calculations in a small sample comparative study, with a continuous outcome measure, are typically undertaken using the asymptotic distribution of the test statistic. When the sample size is small, this asymptotic result can be a poor approximation. An alternative approach, using a rank-based test statistic, is an exact power calculation. When the number of groups is greater than two, the number of calculations required to perform an exact power calculation is prohibitive. To reduce the computational burden, a Monte Carlo resampling procedure is used to approximate the exact power function of a k-sample rank test statistic under the family of Lehmann alternative hypotheses. The motivating example for this approach is the design of animal studies, where the number of animals per group is typically small.
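
A minimal sketch of such a Monte Carlo power approximation, using scipy's Kruskal-Wallis statistic as the k-sample rank test and Uniform(0,1) data transformed to follow Lehmann alternatives. Note that the rejection rule below relies on the chi-square approximation to the null distribution, whereas the paper targets the exact power; group sizes and theta values are illustrative.

```python
# Under Lehmann alternatives H_j(x) = F(x)**theta_j, rank tests are
# distribution free, so F can be taken as Uniform(0,1); U**(1/theta) then
# has CDF u**theta.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(3)

def simulate_power(group_sizes, thetas, alpha=0.05, n_rep=5000):
    rejections = 0
    for _ in range(n_rep):
        samples = [rng.random(n) ** (1.0 / th) for n, th in zip(group_sizes, thetas)]
        _, pvalue = kruskal(*samples)
        if pvalue < alpha:
            rejections += 1
    return rejections / n_rep

# Small animal-study-like design: 3 groups of 5 animals each.
print("size  (theta = 1,1,1):", simulate_power([5, 5, 5], [1.0, 1.0, 1.0]))
print("power (theta = 1,2,4):", simulate_power([5, 5, 5], [1.0, 2.0, 4.0]))
```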

Relevance: 100.00%

Abstract:

In this contribution, a first look at simulations using maximally twisted mass Wilson fermions at the physical point is presented. A lattice action including clover and twisted mass terms is presented and the Monte Carlo histories of one run with two mass-degenerate flavours at a single lattice spacing are shown. Measurements from the light and heavy-light pseudoscalar sectors are compared to previous Nf = 2 results and their phenomenological values. Finally, the strategy for extending simulations to Nf = 2+1+1 is outlined.

Relevance: 100.00%

Abstract:

The electron pencil-beam redefinition algorithm (PBRA) of Shiu and Hogstrom has been developed for use in radiotherapy treatment planning (RTP). Earlier studies of Boyd and Hogstrom showed that the PBRA lacked an adequate incident-beam model, that the PBRA might require improved electron physics, and that no data existed which allowed adequate assessment of the PBRA-calculated dose accuracy in a heterogeneous medium such as one presented by patient anatomy. The hypothesis of this research was that by addressing the above issues the PBRA-calculated dose would be accurate to within 4% or 2 mm in regions of high dose gradients. A secondary electron source was added to the PBRA to account for collimation-scattered electrons in the incident beam. Parameters of the dual-source model were determined from a minimal data set to allow ease of beam commissioning. Comparisons with measured data showed 3% or better dose accuracy in water within the field for cases where 4% accuracy was not previously achievable. A measured data set was developed that allowed an evaluation of the PBRA in regions distal to localized heterogeneities. Geometries in the data set included irregular surfaces and high- and low-density internal heterogeneities. The data were estimated to have 1% precision and 2% agreement with an accurate, benchmarked Monte Carlo (MC) code. PBRA electron transport was enhanced by modeling local pencil-beam divergence. This required fundamental changes to the mathematics of electron transport (divPBRA). Evaluation of divPBRA with the measured data set showed marginal improvement in dose accuracy when compared to the PBRA; however, 4% or 2 mm accuracy was not achieved by either PBRA version for all data points. Finally, the PBRA was evaluated clinically by comparing PBRA- and MC-calculated dose distributions using site-specific patient RTP data. Results show the PBRA did not agree with MC to within 4% or 2 mm in a small fraction (<3%) of the irradiated volume. Although the hypothesis of the research was shown to be false, the minor dose inaccuracies should have little or no impact on RTP decisions or patient outcome. Therefore, given its ease of beam commissioning, documented accuracy, and calculational speed, the PBRA should be considered a practical tool for clinical use.
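
As background for the "4% or 2 mm" criterion used throughout, the sketch below implements a simplified one-dimensional dose-difference / distance-to-agreement check. The global normalisation, tolerances and toy penumbra profiles are assumptions; this is not the evaluation pipeline used in the study.

```python
# A point passes if the dose difference is within 4% of the maximum reference
# dose, OR the evaluated profile attains the reference dose value somewhere
# within 2 mm (distance to agreement, DTA).
import numpy as np

def pass_4pct_2mm(x_mm, d_ref, d_eval, dose_tol=0.04, dta_mm=2.0):
    """Return a boolean array: True where a point meets the 4% OR 2 mm test."""
    d_ref, d_eval = np.asarray(d_ref, float), np.asarray(d_eval, float)
    norm = d_ref.max()
    ok = np.zeros(d_ref.shape, dtype=bool)
    for i, xi in enumerate(x_mm):
        # dose-difference test (global normalisation)
        if abs(d_eval[i] - d_ref[i]) <= dose_tol * norm:
            ok[i] = True
            continue
        # DTA test: does the evaluated profile reach d_ref[i] within dta_mm?
        near = np.abs(x_mm - xi) <= dta_mm
        if near.any() and d_eval[near].min() <= d_ref[i] <= d_eval[near].max():
            ok[i] = True
    return ok

# Toy profiles: a penumbra-like fall-off, with the evaluated curve shifted by 1 mm.
x = np.arange(-20.0, 20.0, 0.25)                 # positions in mm
ref = 1.0 / (1.0 + np.exp(x / 2.0))              # reference dose profile
ev = 1.0 / (1.0 + np.exp((x - 1.0) / 2.0))       # evaluated profile, 1 mm shift
result = pass_4pct_2mm(x, ref, ev)
print(f"{100 * result.mean():.1f}% of points pass 4% / 2 mm")
```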

Relevance: 100.00%

Abstract:

The assessment of the accuracy of parameters related to reactor core performance (e.g., keff) and the fuel cycle (e.g., isotopic evolution/transmutation) due to uncertainties in the basic nuclear data (ND) is a critical issue. Different error-propagation techniques (adjoint/forward sensitivity analysis procedures and/or the Monte Carlo technique) can be used to address, by computational simulation, the systematic propagation of uncertainties onto the final parameters. To perform this uncertainty assessment, the ENDF covariance files (variance/correlation in energy and correlations across reactions and isotopes) are required. In this paper, we assess the impact of ND uncertainties on the isotopic prediction for a conceptual design of a modular European Facility for Industrial Transmutation (EFIT) for a discharge burnup of 150 GWd/tHM. The complete set of uncertainty data for cross sections (EAF2007/UN, SCALE6.0/COVA-44G), radioactive decay and fission yield data (JEFF-3.1.1) are processed and used in the ACAB code.
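
A minimal sketch of the Monte Carlo propagation idea, for a deliberately tiny two-nuclide chain solved analytically rather than the full EFIT inventory handled by ACAB; the cross section, flux, decay constant and 10% relative uncertainty are invented values.

```python
# Monte Carlo propagation of a one-group cross-section uncertainty onto an
# isotopic prediction for the chain
#   dN1/dt = -sigma1*phi*N1,   dN2/dt = sigma1*phi*N1 - lambda2*N2.
import numpy as np

rng = np.random.default_rng(4)

phi = 1.0e15            # flux [n/cm^2/s]          (illustrative)
sigma1 = 2.0e-24        # one-group cross section [cm^2]
rel_unc = 0.10          # 10% relative uncertainty on sigma1
lam2 = 1.0e-8           # decay constant of the product [1/s]
t = 3.0e7               # irradiation time [s] (~1 year)

def inventory(sig):
    """Analytic Bateman solution with N1(0) = 1, N2(0) = 0."""
    a = sig * phi
    n1 = np.exp(-a * t)
    n2 = a / (lam2 - a) * (np.exp(-a * t) - np.exp(-lam2 * t))
    return n1, n2

samples = rng.normal(sigma1, rel_unc * sigma1, size=10_000)
n2 = np.array([inventory(s)[1] for s in samples])
print(f"N2 mean = {n2.mean():.4e}, relative std = {100 * n2.std() / n2.mean():.2f}%")
```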

Relevance: 100.00%

Abstract:

For an adequate assessment of the safety margins of nuclear facilities, e.g. nuclear power plants, it is necessary to consider all possible uncertainties that affect their design, performance and possible accidents. Nuclear data are a source of uncertainty that is involved in neutronics, fuel depletion and activation calculations. These calculations can predict critical response functions during operation and in the event of an accident, such as the decay heat and the neutron multiplication factor. Thus, the impact of nuclear data uncertainties on these response functions needs to be addressed for a proper evaluation of the safety margins. Methodologies for performing uncertainty propagation calculations need to be implemented in order to analyse the impact of nuclear data uncertainties. It is also necessary to understand the current status of nuclear data and their uncertainties in order to be able to handle this type of data. Great efforts are under way to enhance the European capability to analyse, process and produce covariance data, especially for isotopes which are of importance for advanced reactors. At the same time, new methodologies and codes are being developed and implemented for using such data and evaluating their impact. These were the objectives of the European ANDES (Accurate Nuclear Data for nuclear Energy Sustainability) project, which provided the framework for this PhD thesis. Accordingly, a review of the state of the art of nuclear data and their uncertainties is first conducted, focusing on the three kinds of data: decay, fission yields and cross sections. A review of the current methodologies for propagating nuclear data uncertainties is also performed.
The Nuclear Engineering Department of UPM has proposed a methodology for propagating uncertainties in depletion calculations, the Hybrid Method, which has been taken as the starting point of this thesis. This methodology has been implemented, developed and extended, and its advantages, drawbacks and limitations have been analysed. It is used in conjunction with the ACAB depletion code and is based on Monte Carlo sampling of the nuclear data with uncertainties. Different approaches are presented depending on the cross-section energy structure: one-group, one-group with correlated sampling, and multi-group; their differences and applicability criteria are discussed. Sequences have been developed for using different nuclear data libraries in different storage formats: ENDF-6 (for evaluated libraries), COVERX (for the multi-group libraries of SCALE) and EAF (for activation libraries). A review of the state of the art of fission yield data shows a lack of uncertainty information, specifically of complete covariance matrices. Furthermore, the international community has expressed a renewed interest in the issue through the Working Party on International Nuclear Data Evaluation Co-operation (WPEC) Subgroup 37 (SG37), which is dedicated to assessing the need for complete nuclear data. This motivated a review of the methodologies for generating covariance data for fission yields, from which a Bayesian/generalised least squares (GLS) updating sequence has been selected and implemented to address this need. Once the Hybrid Method had been implemented, developed and extended, along with the fission yield covariance generation capability, different nuclear applications were studied. The fission pulse decay heat problem is tackled first, because of its importance for any event after reactor shutdown and because it is a clean exercise for showing the impact and importance of decay and fission yield data uncertainties in conjunction with the new covariance data. Two fuel cycles of advanced reactors are then studied: the European Facility for Industrial Transmutation (EFIT) and the European Sodium Fast Reactor (ESFR), for which the impact of nuclear data uncertainties on the isotopic composition, decay heat and radiotoxicity is addressed. Different nuclear data libraries are used and compared. These applications also serve as frameworks for comparing the different approaches of the Hybrid Method, and for comparing it with other methodologies: Total Monte Carlo (TMC), developed at NRG by A.J. Koning and D. Rochman, and NUDUNA, developed at AREVA GmbH by O. Buss and A. Hoefer. These comparisons reveal the advantages, limitations and range of application of the Hybrid Method.
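
To illustrate the Bayesian/GLS updating step used to build complete fission-yield covariance matrices, the sketch below applies the yield normalisation constraint (sum of yields equal to 2) to a toy set of four independent yields; the prior values, uncertainties and constraint variance are assumptions for illustration only.

```python
# GLS / Bayesian update: starting from independent (diagonal) fission-yield
# uncertainties, imposing sum(y) = 2 introduces correlations and yields a
# complete covariance matrix.
import numpy as np

y = np.array([0.62, 0.88, 0.42, 0.12])          # prior independent yields (toy)
rel = np.array([0.05, 0.04, 0.08, 0.20])        # prior relative uncertainties
C = np.diag((rel * y) ** 2)                     # prior (diagonal) covariance

G = np.ones((1, len(y)))                        # constraint operator: sum of yields
v = np.array([[1.0e-6]])                        # small variance assigned to the constraint
target = np.array([2.0])                        # two fragments per fission

# GLS update:
#   y' = y + C G^T (G C G^T + v)^(-1) (target - G y)
#   C' = C - C G^T (G C G^T + v)^(-1) G C
K = C @ G.T @ np.linalg.inv(G @ C @ G.T + v)
y_post = y + K @ (target - G @ y)
C_post = C - K @ G @ C

corr = C_post / np.sqrt(np.outer(np.diag(C_post), np.diag(C_post)))
print("updated yields:", y_post, " sum =", y_post.sum())
print("posterior correlation matrix:\n", np.round(corr, 3))
```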

Relevance: 100.00%

Abstract:

Free energy calculations are a computational method for determining thermodynamic quantities, such as free energies of binding, via simulation. Currently, due to computational and algorithmic limitations, free energy calculations are limited in scope. In this work, we propose two methods for improving the efficiency of free energy calculations. First, we expand the state space of alchemical intermediates, and show that this expansion enables us to calculate free energies along lower variance paths. We use Q-learning, a reinforcement learning technique, to discover and optimize paths at low computational cost. Second, we reduce the cost of sampling along a given path by using sequential Monte Carlo samplers. We develop a new free energy estimator, pCrooks (pairwise Crooks), a variant on the Crooks fluctuation theorem (CFT), which enables decomposition of the variance of the free energy estimate for discrete paths, while retaining beneficial characteristics of CFT. Combining these two advancements, we show that for some test models, optimal expanded-space paths have a nearly 80% reduction in variance relative to the standard path. Additionally, our free energy estimator converges at a more consistent rate and on average 1.8 times faster when we enable path searching, even when the cost of path discovery and refinement is considered.
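
For background on the Crooks fluctuation theorem that pCrooks builds on (this is not the pCrooks estimator itself), the sketch below applies the Bennett acceptance ratio, the maximum-likelihood estimator implied by CFT, to synthetic Gaussian work distributions that satisfy CFT exactly; units of kT and all numerical values are assumptions.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(5)
beta, dF_true, sigma = 1.0, 3.0, 2.0
n = 5000

# Gaussian work distributions consistent with CFT:
#   forward work ~ N(dF + beta*sigma^2/2, sigma^2)
#   reverse work ~ N(-dF + beta*sigma^2/2, sigma^2)
w_f = rng.normal(dF_true + beta * sigma**2 / 2, sigma, n)
w_r = rng.normal(-dF_true + beta * sigma**2 / 2, sigma, n)

def fermi(x):
    return 1.0 / (1.0 + np.exp(x))

def bar_residual(dF):
    # Self-consistent BAR condition for equal numbers of forward/reverse samples.
    return fermi(beta * (w_f - dF)).sum() - fermi(beta * (w_r + dF)).sum()

dF_bar = brentq(bar_residual, -50.0, 50.0)
print(f"true dF = {dF_true},  BAR estimate = {dF_bar:.3f}")
```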

Relevance: 100.00%

Abstract:

The development of a permanent, stable ice sheet in East Antarctica happened during the middle Miocene, about 14 million years (Myr) ago. The middle Miocene therefore represents one of the distinct phases of rapid change in the transition from the "greenhouse" of the early Eocene to the "icehouse" of the present day. Carbonate carbon isotope records of the period immediately following the main stage of ice sheet development reveal a major perturbation in the carbon system, represented by the positive δ13C excursion known as carbon maximum 6 ("CM6"), which has traditionally been interpreted as reflecting increased burial of organic matter and atmospheric pCO2 drawdown. More recently, it has been suggested that the δ13C excursion records a negative feedback resulting from the reduction of silicate weathering and an increase in atmospheric pCO2. Here we present high-resolution multi-proxy (alkenone carbon and foraminiferal boron isotope) records of atmospheric carbon dioxide and sea surface temperature across CM6. Similar to previously published records spanning this interval, our records document a world of generally low (~300 ppm) atmospheric pCO2 at a time generally accepted to be much warmer than today. Crucially, they also reveal a pCO2 decrease with associated cooling, which demonstrates that the carbon burial hypothesis for CM6 is feasible and could have acted as a positive feedback on global cooling.

Relevance: 90.00%

Abstract:

Purpose: The component modules in the standard BEAMnrc distribution may appear to be insufficient to model micro-multileaf collimators that have tri-faceted leaf ends and complex leaf profiles. This note indicates, however, that accurate Monte Carlo simulations of radiotherapy beams defined by a complex collimation device can be completed using BEAMnrc's standard VARMLC component module.

Methods: That this simple collimator model can produce spatially and dosimetrically accurate micro-collimated fields is illustrated using comparisons with ion chamber and film measurements of the dose deposited by square and irregular fields incident on planar, homogeneous water phantoms.

Results: Monte Carlo dose calculations for on- and off-axis fields are shown to produce good agreement with experimental values, even upon close examination of the penumbrae.

Conclusions: The use of a VARMLC model of the micro-multileaf collimator, along with a commissioned model of the associated linear accelerator, is therefore recommended as an alternative to the development or use of in-house or third-party component modules for simulating stereotactic radiotherapy and radiosurgery treatments. Simulation parameters for the VARMLC model are provided which should allow other researchers to adapt and use this model to study clinical stereotactic radiotherapy treatments.

Relevance: 90.00%

Abstract:

Knowledge of the accuracy of dose calculations in intensity-modulated radiotherapy of the head and neck is essential for clinical confidence in these highly conformal treatments. High dose gradients are frequently placed very close to critical structures, such as the spinal cord, and good coverage of complex shaped nodal target volumes is important for long-term local control. A phantom study is presented comparing the performance of standard clinical pencil-beam and collapsed-cone dose algorithms to Monte Carlo calculation and three-dimensional gel dosimetry measurement. All calculations and measurements are normalized to the median dose in the primary planning target volume, making this a purely relative study. The phantom simulates tissue, air and bone for a typical neck section and is treated using an inverse-planned 5-field IMRT treatment, similar in character to clinically used class solutions. Results indicate that the pencil-beam algorithm fails to correctly model the relative dose distribution surrounding the air cavity, leading to an overestimate of the target coverage. The collapsed-cone and Monte Carlo results are very similar, indicating that the clinical collapsed-cone algorithm is perfectly sufficient for routine clinical use. The gel measurement shows generally good agreement with the collapsed-cone and Monte Carlo calculated dose, particularly in the spinal cord dose and nodal target coverage, thus giving greater confidence in the use of this class solution.

Relevance: 90.00%

Abstract:

We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
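
A minimal sketch of the maximal discrepancy penalty and its Monte Carlo (random re-split) estimate for a toy class of decision stumps; for transparency it maximises the error difference by enumeration rather than via the flipped-label empirical risk minimisation noted above, and the data set is invented.

```python
import numpy as np

rng = np.random.default_rng(6)

def stump_predictions(x, thresholds):
    """All stump predictions: direction s in {+1, -1}, threshold t.
    Returns an array of shape (n_stumps, n_points) with labels in {0, 1}."""
    preds = []
    for t in thresholds:
        preds.append((x > t).astype(int))       # s = +1
        preds.append((x <= t).astype(int))      # s = -1
    return np.array(preds)

def maximal_discrepancy(x, y):
    """max over stumps of (error on the 2nd half minus error on the 1st half)."""
    n = len(x) // 2
    preds = stump_predictions(x, np.unique(x))
    err1 = (preds[:, :n] != y[:n]).mean(axis=1)
    err2 = (preds[:, n:2 * n] != y[n:2 * n]).mean(axis=1)
    return float(np.max(err2 - err1))

def expected_maximal_discrepancy(x, y, n_mc=200):
    """Monte Carlo estimate over random re-splits of the sample."""
    vals = []
    for _ in range(n_mc):
        idx = rng.permutation(len(x))
        vals.append(maximal_discrepancy(x[idx], y[idx]))
    return float(np.mean(vals))

# Toy data: a noisy threshold concept.
x = rng.random(200)
y = ((x > 0.5) ^ (rng.random(200) < 0.15)).astype(int)
print("maximal discrepancy (one split):", maximal_discrepancy(x, y))
print("expected maximal discrepancy   :", expected_maximal_discrepancy(x, y))
```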

Relevance: 90.00%

Abstract:

Polymeric graphitic carbon nitride materials have attracted increasing attention in recent years owing to their potential applications in energy conversion, environmental protection, and so on. Here, from first-principles calculations, we report the electronic structure modification of graphitic carbon nitride (g-C3N4) in response to carbon doping. We show that each dopant atom can induce a local magnetic moment of 1.0 μB in non-magnetic g-C3N4. At a doping concentration of 1/14, the local magnetic moments of the most stable doping configuration, which has the dopant atom at the center of the heptazine unit, prefer to align in parallel, leading to long-range ferromagnetic (FM) ordering. When the joint N atom is replaced by a C atom, the system favors antiferromagnetic (AFM) ordering in the unstrained state, but can be tuned to FM ordering by applying biaxial tensile strain. More interestingly, the FM state of the strained system is half-metallic, with abundant states at the Fermi level in one spin channel and a band gap of 1.82 eV in the other spin channel. The Curie temperature (Tc) was also evaluated using mean-field theory and Monte Carlo simulations within the Ising model. Such tunable electron spin polarization and ferromagnetism are quite promising for applications of graphitic carbon nitride in spintronics.
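
As an illustration of the final step, the sketch below runs a Metropolis Monte Carlo simulation of a small two-dimensional Ising model and prints the magnetisation curve from which a Curie temperature can be read off; the lattice size, the coupling J = 1 and the sweep counts are illustrative and unrelated to the exchange parameters extracted in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def sweep(spins, beta, J=1.0):
    """One Metropolis sweep over an L x L lattice with periodic boundaries."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * J * spins[i, j] * nb          # energy change on flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

def magnetisation_curve(L=16, temps=np.linspace(1.5, 3.5, 9),
                        n_equil=400, n_meas=400):
    spins = rng.choice([-1, 1], size=(L, L))
    for T in temps:
        beta = 1.0 / T
        for _ in range(n_equil):                 # equilibration
            sweep(spins, beta)
        m = 0.0
        for _ in range(n_meas):                  # measurement
            sweep(spins, beta)
            m += abs(spins.mean())
        yield T, m / n_meas

for T, m in magnetisation_curve():
    print(f"T = {T:.2f}  <|m|> = {m:.3f}")       # drops sharply near Tc ~ 2.27 J/kB
```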