961 results for the Low-variance deviational simulation Monte Carlo (LVDSMC)
Abstract:
Monte Carlo burnup codes use various schemes to solve the coupled criticality and burnup equations. Previous studies have shown that the simplest methods, such as the beginning-of-step and middle-of-step constant flux approximations, are numerically unstable in fuel cycle calculations of critical reactors. Here we show that even the predictor-corrector methods implemented in established Monte Carlo burnup codes can be numerically unstable in cycle calculations of large systems.
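As a hedged illustration of the schemes being compared, the following minimal sketch (not from the paper; a single-nuclide toy model with a hypothetical flux feedback phi(N)) contrasts a beginning-of-step constant-flux depletion step with a predictor-corrector step for dN/dt = -sigma*phi(N)*N.

```python
import numpy as np

# Toy depletion model: dN/dt = -sigma * phi(N) * N, where the flux phi
# depends on the nuclide density N (a crude stand-in for the criticality
# feedback present in a real burnup calculation).
sigma = 1.0

def phi(N):
    # Hypothetical feedback: flux rises as the absorber burns out.
    return 1.0 / (0.1 + N)

def step_beginning_of_step(N, dt):
    # Constant-flux approximation: hold phi at its start-of-step value.
    return N * np.exp(-sigma * phi(N) * dt)

def step_predictor_corrector(N, dt):
    # Predictor: deplete with the start-of-step flux.
    N_pred = N * np.exp(-sigma * phi(N) * dt)
    # Corrector: redo the step with the average of start/end fluxes.
    phi_avg = 0.5 * (phi(N) + phi(N_pred))
    return N * np.exp(-sigma * phi_avg * dt)

N_bos, N_pc, dt = 1.0, 1.0, 0.5
for _ in range(10):
    N_bos = step_beginning_of_step(N_bos, dt)
    N_pc = step_predictor_corrector(N_pc, dt)
print(N_bos, N_pc)
```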
Abstract:
This paper addresses advanced Monte Carlo methods for realistic image creation. It offers a new stratified approach for solving the rendering equation. We consider the numerical solution of the rendering equation by separation of the integration domain. The hemispherical integration domain is symmetrically separated into 16 parts. The first 9 sub-domains are orthogonal spherical triangles of equal size; they are symmetric to one another and grouped with a common vertex around the normal vector to the surface. The hemispherical integration domain is completed with 8 more sub-domains of equal size, spherical quadrangles that are also symmetric to one another. All sub-domains have fixed vertices and computable parameters. Bijections from the unit square onto an orthogonal spherical triangle and onto a spherical quadrangle are derived and used to generate sampling points. The symmetric sampling scheme is then applied to generate sampling points distributed over the hemispherical integration domain. The necessary transformations are made and the stratified Monte Carlo estimator is presented. The rate of convergence is obtained, showing that the algorithm is of super-convergent type.
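The paper's specific triangle/quadrangle bijections are not reproduced here; as a hedged sketch of the general idea, the code below stratifies the unit square into a grid, maps each stratum onto the hemisphere with the standard cosine-weighted transform, and forms a stratified estimator of a hemispherical integral. The grid stratification and the integrand are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_hemisphere(u, v):
    # Standard cosine-weighted map from the unit square to the hemisphere:
    # pdf(omega) = cos(theta) / pi.
    theta = np.arccos(np.sqrt(1.0 - u))
    phi = 2.0 * np.pi * v
    return theta, phi

def integrand(theta, phi):
    # Illustrative integrand (a stand-in for the rendering-equation kernel).
    return np.cos(theta) ** 2

def stratified_estimate(n_strata=4, samples_per_stratum=16):
    total = 0.0
    for i in range(n_strata):
        for j in range(n_strata):
            # Jittered samples inside stratum (i, j) of the unit square.
            u = (i + rng.random(samples_per_stratum)) / n_strata
            v = (j + rng.random(samples_per_stratum)) / n_strata
            theta, phi = to_hemisphere(u, v)
            # Divide by the cosine-weighted pdf, cos(theta)/pi.
            total += np.mean(integrand(theta, phi) * np.pi / np.cos(theta))
    return total / n_strata**2

print(stratified_estimate())  # integral of cos^2 over the hemisphere: 2*pi/3
```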
Abstract:
In this paper we consider bilinear forms of matrix polynomials and show that these polynomials can be used to construct solutions for the problems of solving systems of linear algebraic equations, matrix inversion, and finding extremal eigenvalues. An Almost Optimal Monte Carlo (MAO) algorithm for computing bilinear forms of matrix polynomials is presented. Results for the computational cost of a balanced algorithm for computing the bilinear form of a matrix power are presented, i.e., an algorithm for which the probability and systematic errors are of the same order, and this is compared with the computational cost of a corresponding deterministic method.
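A hedged sketch of the underlying idea, assuming the standard random-walk estimator for a bilinear form (v, A^k h): walks start from an index drawn proportionally to |v|, move with transition probabilities proportional to |a_ij|, and accumulate weights so that the estimator is unbiased. This illustrates the general MAO-style construction, not the paper's specific balanced algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_bilinear_form(v, A, h, k, n_walks=20000):
    """Monte Carlo estimate of (v, A^k h) via weighted random walks."""
    n = len(v)
    # "Almost optimal" initial and transition probabilities: proportional
    # to the magnitudes of v and of the rows of A.
    p0 = np.abs(v) / np.abs(v).sum()
    P = np.abs(A) / np.abs(A).sum(axis=1, keepdims=True)

    total = 0.0
    for _ in range(n_walks):
        i = rng.choice(n, p=p0)
        W = v[i] / p0[i]              # importance weight for the start index
        for _ in range(k):
            j = rng.choice(n, p=P[i])
            W *= A[i, j] / P[i, j]    # unbiased weight update along the walk
            i = j
        total += W * h[i]
    return total / n_walks

A = np.array([[0.4, 0.2], [0.1, 0.5]])
v = np.array([1.0, 2.0])
h = np.array([0.5, -1.0])
# Compare against the exact value v^T A^3 h.
print(mc_bilinear_form(v, A, h, k=3), v @ np.linalg.matrix_power(A, 3) @ h)
```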
Abstract:
The numerical simulation of the magnetic properties of extended three-dimensional networks containing M(II) ions with an S = 5/2 ground-state spin has been carried out within the framework of the isotropic Heisenberg model. Analytical expressions fitting the numerical simulations for the primitive cubic, diamond, and (10-3) cubic networks have been derived. With these empirical formulas in hand, the interaction between the magnetic ions can now be extracted from the experimental data for these networks. In the case of the primitive cubic network, these expressions are directly compared with those from the high-temperature expansions of the partition function. A fit of the experimental data for three complexes, namely [N(CH3)4][Mn(N3)3] 1, [Mn(CN)4]n 2, and [FeII(bipy)3][MnII2(ox)3] 3, has been carried out. The best fits were obtained with the following parameters: J = -3.5 cm⁻¹, g = 2.01 (1); J = -8.3 cm⁻¹, g = 1.95 (2); and J = -2.0 cm⁻¹, g = 1.95 (3).
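As a hedged illustration of the fitting step (the paper's empirical formulas are not reproduced here), the sketch below fits an exchange constant J and a g factor to susceptibility data with scipy, using a generic Curie-Weiss expression with a mean-field Weiss temperature as a hypothetical stand-in for the derived expressions; the data are synthetic, for the demo only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Physical constants in cgs-emu units (approximate values).
N_A, MU_B, K_B = 6.022e23, 9.274e-21, 1.381e-16
CM1_TO_ERG = 1.986e-16  # 1 cm^-1 in erg
S = 2.5                 # S = 5/2 ground-state spin

def chi_model(T, J_cm1, g):
    # Hypothetical stand-in for the paper's empirical formulas:
    # Curie-Weiss with a mean-field Weiss temperature from J
    # (z = 6 neighbors for the primitive cubic network).
    C = N_A * g**2 * MU_B**2 * S * (S + 1) / (3.0 * K_B)
    theta = 2 * 6 * J_cm1 * CM1_TO_ERG * S * (S + 1) / (3.0 * K_B)
    return C / (T - theta)

# T in K, chi in emu/mol; synthetic "measurements" for this demo only.
T_data = np.linspace(50, 300, 40)
chi_data = chi_model(T_data, -3.5, 2.01)

popt, _ = curve_fit(chi_model, T_data, chi_data, p0=(-1.0, 2.0))
print("J = %.2f cm^-1, g = %.3f" % tuple(popt))
```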
Abstract:
The dynamics of low-density flows is governed by the Boltzmann equation of the kinetic theory of gases. This is a nonlinear integro-differential equation and, in general, numerical methods must be used to obtain its solution. The present paper, after a brief review of the Direct Simulation Monte Carlo (DSMC) methods due to Bird, and to Belotserkovskii and Yanitskii, studies the details of the DSMC method of Deshpande for mono- as well as multicomponent gases. The present method is a statistical particle-in-cell method and is based upon the Kac-Prigogine master equation, which reduces to the Boltzmann equation under the hypothesis of molecular chaos. The proposed Markov model simulating the collisions uses a Poisson distribution for the number of collisions allowed in the cells into which the physical space is divided. The model is then extended to a binary mixture of gases, and it is shown that the collisions must be performed in a certain sequence to obtain an unbiased simulation.
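A hedged sketch of the collision stage described above, assuming a hard-sphere gas and a single cell: the number of collisions in a time step is drawn from a Poisson distribution whose mean follows from kinetic theory, and randomly chosen pairs then scatter isotropically. The parameter values and the mean-collision-rate formula used here are illustrative, not Deshpande's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(2)

def collide_cell(vel, n_density, d_ref, dt):
    """One collision step in a single DSMC cell (hard-sphere model)."""
    N = len(vel)
    # Illustrative mean collision number from kinetic theory:
    # nu ~ n * sigma * <v_rel>, with sigma = pi * d^2.
    sigma = np.pi * d_ref**2
    v_rel_mean = np.mean(np.linalg.norm(vel - vel.mean(axis=0), axis=1)) + 1e-12
    mean_collisions = 0.5 * N * n_density * sigma * v_rel_mean * dt
    # Poisson-distributed number of collisions in this cell and time step.
    for _ in range(rng.poisson(mean_collisions)):
        i, j = rng.choice(N, size=2, replace=False)
        # Hard-sphere collision: isotropic scattering in the COM frame.
        g = np.linalg.norm(vel[i] - vel[j])
        cos_t = 2.0 * rng.random() - 1.0
        sin_t = np.sqrt(1.0 - cos_t**2)
        phi = 2.0 * np.pi * rng.random()
        g_new = g * np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
        v_cm = 0.5 * (vel[i] + vel[j])
        vel[i], vel[j] = v_cm + 0.5 * g_new, v_cm - 0.5 * g_new

vel = rng.normal(size=(200, 3))       # placeholder particle velocities
collide_cell(vel, n_density=1e20, d_ref=4e-10, dt=1e-3)
```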
Abstract:
The density fluctuations below the onset of convection in the Rayleigh-Bénard problem are studied with the direct simulation Monte Carlo method. The particle simulation results clearly show the connection between the static correlation functions of fluctuations below the critical Rayleigh number and the flow patterns above the onset of convection for small Knudsen number flows (Kn = 0.01 and Kn = 0.005). Furthermore, the physical reason why no convection occurs in the Rayleigh-Bénard problem under large Knudsen number conditions (Kn > 0.028) is explained based on the dynamics of fluctuations.
Abstract:
Performing an event-based continuous kinetic Monte Carlo simulation, we investigate the modulating effect of substrate dislocations on the growth of semiconductor quantum dots (QDs). The relative positions between the QDs and the dislocations are studied, and stress effects on the growth of the QDs are included in the simulation. The simulation results are compared with experiment, and the agreement between them indicates that this simulation is useful for studying the growth mode and the atomic kinetics during the growth of semiconductor QDs.
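As a hedged sketch of the event-based (continuous-time) kinetic Monte Carlo loop assumed here (the rate table and the dislocation-induced bias are hypothetical placeholders, not the paper's model): each step selects an event with probability proportional to its rate and advances the clock by an exponentially distributed waiting time.

```python
import numpy as np

rng = np.random.default_rng(3)

def kmc_step(rates):
    """One continuous-time KMC (Gillespie) step.

    rates: 1-D array of event rates; returns (event_index, dt)."""
    R = rates.sum()
    # Select an event with probability rate_i / R.
    event = np.searchsorted(np.cumsum(rates), rng.random() * R)
    # Exponentially distributed waiting time with mean 1/R.
    dt = -np.log(rng.random()) / R
    return event, dt

# Hypothetical adatom hop rates on a 1-D lattice, biased near a
# "dislocation" site where strain lowers the hop barrier.
n_sites = 50
dislocation = 25
barriers = np.full(n_sites, 0.8)                  # eV, placeholder value
barriers[dislocation - 2:dislocation + 3] -= 0.2  # strain-modified barriers
kT = 0.05                                         # eV, placeholder
rates = 1e13 * np.exp(-barriers / kT)             # Arrhenius hop rates

t = 0.0
for _ in range(5):
    event, dt = kmc_step(rates)
    t += dt
    print(f"hop at site {event}, t = {t:.3e} s")
```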
Abstract:
Using Monte Carlo simulation for radiotherapy dose calculation can provide more accurate results than the analytical methods usually found in modern treatment planning systems, especially in regions with a high degree of inhomogeneity. These more accurate results, however, often require orders of magnitude more calculation time to attain high precision, which reduces their utility within the clinical environment. This work aims to improve the utility of Monte Carlo simulation within the clinical environment by developing techniques which enable faster Monte Carlo simulation of radiotherapy geometries. This is achieved principally through the use of new high performance computing environments and simpler, alternative yet equivalent representations of complex geometries.

Firstly, the use of cloud computing technology and its application to radiotherapy dose calculation is demonstrated. As with other supercomputer-like environments, the time to complete a simulation decreases as 1/n with n cloud-based computers performing the calculation in parallel. Unlike traditional supercomputer infrastructure, however, there is no initial outlay of cost, only modest ongoing usage fees; the simulations described in the following are performed using this cloud computing technology.

The definition of geometry within the chosen Monte Carlo simulation environment - Geometry & Tracking 4 (GEANT4) in this case - is also addressed in this work. At the simulation implementation level, a new computer aided design interface is presented for use with GEANT4, enabling direct coupling between manufactured parts and their equivalents in the simulation environment, which is of particular importance when defining linear accelerator treatment head geometry. Further, a new technique for navigating tessellated or meshed geometries is described, allowing up to 3 orders of magnitude performance improvement with the use of tetrahedral meshes in place of complex triangular surface meshes. The technique has application in the definition of both mechanical parts in a geometry and patient geometry.

Static patient CT datasets like those found in typical radiotherapy treatment plans are often very large and impose a significant performance penalty on a Monte Carlo simulation. By extracting the regions of interest in a radiotherapy treatment plan and representing them in a mesh-based form similar to those used in computer aided design, the above-mentioned optimisation techniques can be used to reduce the time required to navigate the patient geometry in the simulation environment. Results presented in this work show that these equivalent yet much simplified patient geometry representations enable significant performance improvements over simulations that consider raw CT datasets alone. Furthermore, this mesh-based representation allows for direct manipulation of the geometry, enabling motion augmentation for time-dependent dose calculation, for example.

Finally, an experimental dosimetry technique is described which allows the validation of time-dependent Monte Carlo simulations, like the ones made possible by the aforementioned patient geometry definition. A bespoke organic plastic scintillator dose rate meter is embedded in a gel dosimeter, thereby enabling simultaneous 3D dose distribution and dose rate measurement.
This work demonstrates the effectiveness of applying alternative and equivalent geometry definitions to complex geometries for the purposes of Monte Carlo simulation performance improvement. Additionally, these alternative geometry definitions allow for manipulations to be performed on otherwise static and rigid geometry.
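As a hedged illustration of why tetrahedral meshes simplify navigation (a generic barycentric point-in-tetrahedron test, not the thesis's GEANT4 implementation): locating a point in a tetrahedral mesh reduces to a few signed-volume checks per candidate cell, whereas a triangular surface mesh requires ray-crossing tests against many facets.

```python
import numpy as np

def point_in_tetrahedron(p, verts):
    """True if point p lies inside the tetrahedron given by 4 vertices.

    Uses barycentric coordinates from a 3x3 linear solve; the point is
    inside when all four coordinates are non-negative."""
    v0, v1, v2, v3 = verts
    T = np.column_stack((v1 - v0, v2 - v0, v3 - v0))
    b = np.linalg.solve(T, p - v0)       # coordinates w.r.t. v1, v2, v3
    bary = np.append(1.0 - b.sum(), b)   # coordinate w.r.t. v0
    return bool(np.all(bary >= -1e-12))

tet = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
print(point_in_tetrahedron(np.array([0.2, 0.2, 0.2]), tet))  # True
print(point_in_tetrahedron(np.array([0.6, 0.6, 0.6]), tet))  # False
```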
Abstract:
The present work deals with the prediction of the stiffness of an Indian nanoclay-reinforced polypropylene composite (which can be termed a nanocomposite) using a Monte Carlo finite element analysis (FEA) technique. Nanocomposite samples are first prepared in the laboratory using a torque rheometer, to achieve the desired dispersion of nanoclay during master batch preparation, followed by extrusion for the fabrication of tensile test dog-bone specimens. SEM (scanning electron microscopy) images of the prepared nanocomposite containing a given percentage (3-9% by weight) of the considered nanoclay show that the nanoclay platelets tend to remain in clusters. By ascertaining the average size of these nanoclay clusters from the aforementioned images, a planar finite element model is created in which nanoclay groups and the polymer matrix are modeled as separate entities, assuming a given homogeneous distribution of the nanoclay clusters. Using a Monte Carlo simulation procedure, the distribution of nanoclay is varied randomly in an automated manner in a commercial FEA code, and virtual tensile tests are performed to compute the linear stiffness for each case. The computed stiffness moduli of highest frequency for nanocomposites with different nanoclay contents correspond well with the experimentally obtained stiffness measurements, establishing the effectiveness of the present approach for further applications.
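A hedged sketch of the Monte Carlo loop described above, with a crude series/parallel mixture rule standing in for the commercial FEA solve (the modulus values, grid size, and cluster count are illustrative placeholders): cluster positions are randomized, a stiffness is computed per realization, and the most frequent (modal) stiffness is reported.

```python
import numpy as np

rng = np.random.default_rng(4)

E_MATRIX, E_CLAY = 1.5, 180.0   # GPa, illustrative moduli
GRID, N_CLUSTERS = 20, 24       # placeholder model size and cluster count

def stiffness_of_realization():
    # Random spatial distribution of nanoclay clusters on a planar grid
    # (a stand-in for the randomized FEA model of the paper).
    E = np.full((GRID, GRID), E_MATRIX)
    idx = rng.choice(GRID * GRID, size=N_CLUSTERS, replace=False)
    E.flat[idx] = E_CLAY
    # Crude effective modulus: cells in a column act in series (Reuss),
    # columns act in parallel (Voigt) -- a stand-in for the FEA solve.
    col_E = GRID / (1.0 / E).sum(axis=0)
    return col_E.mean()

samples = np.array([stiffness_of_realization() for _ in range(2000)])
hist, edges = np.histogram(samples, bins=30)
modal = 0.5 * (edges[hist.argmax()] + edges[hist.argmax() + 1])
print(f"modal stiffness ~ {modal:.2f} GPa (mean {samples.mean():.2f})")
```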
Abstract:
A quantum Monte Carlo algorithm is constructed starting from the standard perturbation expansion in the interaction representation. The resulting configuration space is strongly related to that of the Stochastic Series Expansion (SSE) method, which is based on a direct power series expansion of exp(-βH). Sampling procedures previously developed for the SSE method can therefore also be used in the interaction representation formulation. The new method is first tested on the S = 1/2 Heisenberg chain. Then, as an application to a model of great current interest, a Heisenberg chain including phonon degrees of freedom is studied. Einstein phonons are coupled to the spins via a linear modulation of the nearest-neighbor exchange. The simulation algorithm is implemented in the phonon occupation number basis, without Hilbert space truncations, and is exact. Results are presented for the magnetic properties of the system in a wide temperature regime, including the T → 0 limit, where the chain undergoes a spin-Peierls transition. Some aspects of the phonon dynamics are also discussed. The results suggest that the effects of dynamic phonons in spin-Peierls compounds such as CuGeO3 and NaV2O5 must be included in order to obtain a correct quantitative description of their magnetic properties, both above and below the dimerization temperature.
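For orientation, the two expansions mentioned above can be written side by side (standard textbook forms, not quoted from the paper): the SSE starts from a power series of the Boltzmann operator, while the interaction representation expands around a solvable part H_0 with V = H - H_0.

```latex
% SSE: direct power-series expansion of exp(-beta H)
Z \;=\; \sum_{\alpha} \sum_{n=0}^{\infty} \frac{\beta^{n}}{n!}\,
        \langle \alpha \vert (-H)^{n} \vert \alpha \rangle ,
% Interaction representation: perturbation expansion around H_0
Z \;=\; \operatorname{Tr}\!\left[ e^{-\beta H_0}\,
        T_{\tau} \exp\!\Big( -\!\int_{0}^{\beta} d\tau\, V(\tau) \Big) \right],
\qquad V(\tau) = e^{\tau H_0}\, V\, e^{-\tau H_0}.
```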
Abstract:
Despite the commonly held belief that aggregate data display short-run comovement, there has been little discussion about the econometric consequences of this feature of the data. We use exhaustive Monte Carlo simulations to investigate the importance of the restrictions implied by common cyclical features for estimates and forecasts based on vector autoregressive models. First, we show that the "best" empirical model developed without common cycle restrictions need not nest the "best" model developed with those restrictions. This is due to possible differences in the lag lengths chosen by model selection criteria for the two alternative models. Second, we show that the costs of ignoring common cyclical features in vector autoregressive modelling can be high, both in terms of forecast accuracy and efficient estimation of variance decomposition coefficients. Third, we find that the Hannan-Quinn criterion performs best among model selection criteria in simultaneously selecting the lag length and rank of vector autoregressions.
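A hedged sketch of one Monte Carlo replication of the lag-selection exercise, assuming statsmodels' VAR implementation (the data-generating process here is an arbitrary bivariate VAR(1), not the paper's design): simulate data, then let the Hannan-Quinn criterion choose the lag length.

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(5)

# Arbitrary bivariate VAR(1) data-generating process (placeholder design).
A1 = np.array([[0.5, 0.1],
               [0.2, 0.3]])
T = 200
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A1 @ y[t - 1] + rng.normal(scale=0.5, size=2)

# Hannan-Quinn lag selection, as studied in the Monte Carlo experiments.
res = VAR(y).fit(maxlags=8, ic='hqic')
print("HQ-selected lag length:", res.k_ar)
```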
Abstract:
In this paper we consider the adsorption of argon on the surface of graphitized thermal carbon black and in slit pores at temperatures ranging from subcritical to supercritical conditions by the method of grand canonical Monte Carlo simulation. Attention is paid to the variation of the adsorbed density as the temperature crosses the critical point. The adsorbed density versus pressure (bulk density) shows interesting behavior at temperatures in the vicinity of and above the critical point, and also at extremely high pressures. Isotherms at temperatures greater than the critical temperature exhibit a clear maximum, and near the critical temperature this maximum is a very sharp spike. Under supercritical conditions and very high pressure the excess adsorbed density decreases towards zero for a graphite surface, while for slit pores negative excess density is possible at extremely high pressures. For imperfect pores (defined as pores that cannot accommodate an integral number of parallel layers under moderate conditions) the pressure at which the excess pore density becomes negative is less than that for perfect pores, due to the packing effect in those imperfect pores. However, at extremely high pressure molecules can be packed in parallel layers once the chemical potential is great enough to overcome the repulsions among adsorbed molecules.
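A hedged sketch of the core GCMC moves assumed in such a study (textbook Metropolis acceptance rules for particle insertion and deletion in the grand canonical ensemble; the Lennard-Jones energy function, the short-range cutoff, and all parameter values are illustrative, in reduced units):

```python
import numpy as np

rng = np.random.default_rng(6)

beta, mu, V, L = 1.0, -3.0, 1000.0, 10.0  # illustrative reduced units
LAMBDA3 = 1.0                             # thermal de Broglie volume, set to 1

def lj_energy_with(pos, p):
    # Lennard-Jones energy of particle p with all others (minimum image).
    if len(pos) == 0:
        return 0.0
    d = pos - p
    d -= L * np.round(d / L)
    r2 = np.maximum((d**2).sum(axis=1), 0.64)  # clamp to avoid overlap blowups
    inv6 = 1.0 / r2**3
    return float(np.sum(4.0 * (inv6**2 - inv6)))

def gcmc_step(pos):
    N = len(pos)
    if rng.random() < 0.5:                     # insertion attempt
        p = rng.random(3) * L
        dU = lj_energy_with(pos, p)
        acc = V / (LAMBDA3 * (N + 1)) * np.exp(beta * (mu - dU))
        if rng.random() < min(1.0, acc):
            pos = np.vstack([pos, p])
    elif N > 0:                                # deletion attempt
        i = rng.integers(N)
        dU = lj_energy_with(np.delete(pos, i, axis=0), pos[i])
        acc = LAMBDA3 * N / V * np.exp(-beta * (mu - dU))
        if rng.random() < min(1.0, acc):
            pos = np.delete(pos, i, axis=0)
    return pos

pos = rng.random((10, 3)) * L
for _ in range(5000):
    pos = gcmc_step(pos)
print("density sample:", len(pos) / V)
```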
Abstract:
Dobri Dankov, Vladimir Rusinov, Maria Velinova, Jasmina Petrova - A chemical reaction is studied using two ways of modelling the reaction probability within the Direct Simulation Monte Carlo method. The order of magnitude of the differences in temperatures and concentrations produced by the two approaches is examined. As the activity of the chemical reaction decreases, the differences between the concentrations and temperatures obtained by the two approaches also decrease. Keywords: fluid mechanics, kinetic theory, rarefied gas, DSMC
Abstract:
This thesis applies Monte Carlo techniques to the study of X-ray absorptiometric methods of bone mineral measurement. These studies seek to obtain information that can be used in efforts to improve the accuracy of bone mineral measurements.

A Monte Carlo computer code for X-ray photon transport at diagnostic energies has been developed from first principles. This development was undertaken because there was no readily available code which included electron binding energy corrections for incoherent scattering, and one of the objectives of the project was to study the effects of including these corrections in Monte Carlo models. The code includes the main Monte Carlo program plus utilities for dealing with input data. A number of geometrical subroutines which can be used to construct complex geometries have also been written. The accuracy of the Monte Carlo code has been evaluated against the predictions of theory and the results of experiments. The results show a high correlation with theoretical predictions. In comparisons of model results with those of direct experimental measurements, agreement to within the model and experimental variances is obtained. The code is an accurate and valid modelling tool.

A study of the significance of including electron binding energy corrections for incoherent scatter in the Monte Carlo code has been made. The results show this significance to be very dependent upon the type of application. The most significant effect is a reduction of low-angle scatter flux for high atomic number scatterers.

To effectively apply the Monte Carlo code to the study of bone mineral density measurement by photon absorptiometry, the results must be considered in the context of a theoretical framework for the extraction of energy-dependent information from planar X-ray beams. Such a theoretical framework is developed, and the two-dimensional nature of tissue decomposition based on attenuation measurements alone is explained. This theoretical framework forms the basis for analytical models of bone mineral measurement by dual energy X-ray photon absorptiometry techniques.

Monte Carlo models of dual energy X-ray absorptiometry (DEXA) have been established. These models have been used to study the contribution of scattered radiation to the measurements. It has been demonstrated that the measurement geometry has a significant effect upon the scatter contribution to the detected signal. For the geometry of the models studied in this work, the scatter has no significant effect upon the results of the measurements.

The model has also been used to study a proposed technique which involves dual energy X-ray transmission measurements plus a linear measurement of the distance along the ray path, designated the DPA(+) technique. The addition of the linear measurement enables the tissue decomposition to be extended to three components; bone mineral, fat and lean soft tissue are the components considered here. The results of the model demonstrate that the measurement of bone mineral using this technique is stable over a wide range of soft tissue compositions, and hence indicate its potential to overcome a major problem of the two-component DEXA technique. However, the results also show that the accuracy of the DPA(+) technique is highly dependent upon the composition of the non-mineral components of bone, and that it has poorer precision (approximately twice the coefficient of variation) than standard DEXA measurements. These factors may limit the usefulness of the technique.
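As a hedged illustration of the two-component decomposition underlying DEXA (a textbook two-energy attenuation model, not the thesis's code; the attenuation coefficients below are placeholder values): measuring transmission at two energies gives two Beer-Lambert equations that can be solved as a 2x2 linear system for the bone mineral and soft tissue areal densities.

```python
import numpy as np

# Placeholder mass attenuation coefficients (cm^2/g) at a low and a high
# energy, for bone mineral (b) and soft tissue (s).
MU = np.array([[0.60, 0.25],    # low energy:  [mu_b, mu_s]
               [0.30, 0.20]])   # high energy: [mu_b, mu_s]

def dexa_decompose(I_low, I_high, I0_low=1.0, I0_high=1.0):
    """Solve the 2x2 Beer-Lambert system for areal densities (g/cm^2).

    ln(I0/I)_E = mu_b(E) * t_b + mu_s(E) * t_s  at both energies."""
    rhs = np.array([np.log(I0_low / I_low), np.log(I0_high / I_high)])
    t_bone, t_soft = np.linalg.solve(MU, rhs)
    return t_bone, t_soft

# Synthetic transmissions generated from known thicknesses, then recovered.
t_true = np.array([1.2, 18.0])     # g/cm^2 of bone mineral and soft tissue
I = np.exp(-MU @ t_true)
print(dexa_decompose(I[0], I[1]))  # ~ (1.2, 18.0)
```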
These studies illustrate the value of Monte Carlo computer modelling of quantitative X-ray measurement techniques. The Monte Carlo models of bone densitometry measurement have:
1. demonstrated the significant effects of the measurement geometry upon the contribution of scattered radiation to the measurements,
2. demonstrated that the statistical precision of the proposed DPA(+) three tissue component technique is poorer than that of the standard DEXA two tissue component technique,
3. demonstrated that the proposed DPA(+) technique has difficulty providing accurate simultaneous measurement of body composition in terms of a three component model of fat, lean soft tissue and bone mineral,
4. and provided a knowledge base for input to decisions about the development (or otherwise) of a physical prototype DPA(+) imaging system.
The Monte Carlo computer code, data, utilities and associated models represent a set of significant, accurate and valid modelling tools for quantitative studies of physical problems in the fields of diagnostic radiology and radiography.