971 results for Sequential Monte Carlo methods
Abstract:
Study carried out during a stay at the Karolinska University Hospital, Sweden, between March and June 2006. In stereotactic body radiotherapy (SBRT) of lung tumours there are mainly two problems in the dose calculation performed by the available treatment planning systems: the limited accuracy of the calculation algorithms in the presence of tissues with very different densities, and the motion caused by the patient's breathing during treatment. The aim of this work was to simulate, with the Monte Carlo (MC) code PENELOPE, the dose distribution in lung tumours for representative SBRT treatment cases, taking respiratory motion into account, and to compare the results with those of several treatment planning systems. Representative SBRT treatment cases from the Karolinska University Hospital were studied. The radiation beams were simulated with the PENELOPE code and used to obtain MC dose-profile results. The results for the static case (without respiratory motion) show that, compared with MC, the dose (Gy/MU) calculated by the planning systems in the tumour agrees to within 2-3%. In the interface region between tumour and lung tissue, planning systems based on the pencil-beam (PB) algorithm overestimate the dose by about 10%, whereas the collapsed-cone (CC) algorithm underestimates it by 3-4%. The MC simulations including respiratory motion indicate that the planning-system results are sufficiently accurate in the tumour, although at the interface the dose is underestimated more than in the static case. These results are consistent with the clinical experience acquired over 15 years at the Karolinska University Hospital. The results have been published in the journal Acta Oncologica.
Abstract:
In this paper, we present a computer simulation study of the ion binding process at an ionizable surface using a semi-grand canonical Monte Carlo method that models the surface as a discrete distribution of charged and neutral functional groups in equilibrium with explicit ions modelled in the context of the primitive model. The parameters of the simulation model were tuned and checked by comparison with experimental titrations of carboxylated latex particles in the presence of different ionic strengths of monovalent ions. The titration of these particles was analysed by calculating curves of the degree of dissociation of the latex functional groups vs. pH at different background salt concentrations. As the charge of the titrated surface changes during the simulation, a procedure to keep the system electroneutral is required. Here, two approaches are used, with the choice depending on the ion selected to maintain electroneutrality: counterion or coion procedures. We compare and discuss the differences between the two procedures. The simulations also provide a microscopic description of the electrostatic double layer (EDL) structure as a function of pH and ionic strength. The results allow us to quantify the effect of the size of the background salt ions and of the surface functional groups on the degree of dissociation. The non-homogeneous structure of the EDL is revealed by plotting the counterion density profiles around charged and neutral surface functional groups. © 2011 American Institute of Physics.
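As an illustration of the kind of titration move used in such simulations, the sketch below implements a bare-bones constant-pH Metropolis scheme for a planar array of ionizable sites. It is not the paper's semi-grand canonical model: explicit ions and the counterion/coion electroneutrality procedures are replaced by an implicit Debye-Hückel screening, and all parameters (number of sites, pKa, Debye length, patch size) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not taken from the paper)
N_SITES = 100          # carboxyl groups on a flat patch
PKA = 4.8              # intrinsic pKa of a carboxyl group
L = 10.0               # patch side length (nm)
KAPPA = 1.0            # inverse Debye length (1/nm), sets the ionic strength
LB = 0.71              # Bjerrum length in water at 298 K (nm)

positions = rng.uniform(0.0, L, size=(N_SITES, 2))   # site coordinates on the surface
charged = np.zeros(N_SITES, dtype=bool)              # True = dissociated (charge -1)

def pair_energy(i, others):
    """Screened-Coulomb (Debye-Hueckel) energy, in kT, between site i and the charged sites."""
    if not others.any():
        return 0.0
    r = np.linalg.norm(positions[others] - positions[i], axis=1)
    r = np.maximum(r, 0.3)                            # avoid divergence at contact
    return float(np.sum(LB * np.exp(-KAPPA * r) / r))

def sweep(pH):
    """One MC sweep of protonation/deprotonation trial moves at fixed pH."""
    for _ in range(N_SITES):
        i = rng.integers(N_SITES)
        others = charged.copy()
        others[i] = False
        dU = pair_energy(i, others)                   # cost, in kT, of charging site i
        if charged[i]:                                # try to protonate (remove the charge)
            d_ln = -np.log(10.0) * (pH - PKA) + dU
        else:                                         # try to deprotonate (add the charge)
            d_ln = +np.log(10.0) * (pH - PKA) - dU
        if np.log(rng.random()) < d_ln:               # Metropolis acceptance in log form
            charged[i] = not charged[i]

for pH in (3.0, 5.0, 7.0):
    charged[:] = False
    for _ in range(200):                              # equilibration + production sweeps
        sweep(pH)
    print(f"pH {pH:.1f}: degree of dissociation ~ {charged.mean():.2f}")
```

The acceptance rule couples the chemical term ln(10)·(pH − pKa) with the electrostatic energy change, which is what produces the salt-dependent shift of the titration curves discussed in the abstract.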
Abstract:
This chapter presents possible uses and examples of Monte Carlo methods for the evaluation of uncertainties in the field of radionuclide metrology. The method is already well documented in GUM supplement 1, but here we present a more restrictive approach, where the quantities of interest calculated by the Monte Carlo method are estimators of the expectation and standard deviation of the measurand, and the Monte Carlo method is used to propagate the uncertainties of the input parameters through the measurement model. This approach is illustrated by an example of the activity calibration of a 103Pd source by liquid scintillation counting and the calculation of a linear regression on experimental data points. An electronic supplement presents some algorithms which may be used to generate random numbers with various statistical distributions, for the implementation of this Monte Carlo calculation method.
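The restrictive use of the Monte Carlo method described here, i.e. propagating the input uncertainties through the measurement model and reading off estimators of the expectation and standard deviation, can be sketched in a few lines. The measurement model and all numerical values below are made up for illustration and are not taken from the 103Pd example of the chapter.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 10**6                      # number of Monte Carlo trials

# Hypothetical measurement model: massic activity A = (R - B) / (eps * m)
R   = rng.normal(1520.0, 4.0, M)       # gross counting rate (1/s), Gaussian
B   = rng.normal(12.0, 0.5, M)         # background rate (1/s), Gaussian
eps = rng.uniform(0.72, 0.76, M)       # detection efficiency, rectangular distribution
m   = rng.normal(0.1003, 0.0002, M)    # source mass (g), Gaussian

A = (R - B) / (eps * m)                # propagate every draw through the model

mean = A.mean()                        # estimator of the expectation of the measurand
u    = A.std(ddof=1)                   # estimator of its standard uncertainty
lo, hi = np.percentile(A, [2.5, 97.5]) # 95 % coverage interval from the empirical distribution

print(f"A = {mean:.1f} Bq/g, u(A) = {u:.1f} Bq/g, 95 % interval [{lo:.1f}, {hi:.1f}]")
```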
Abstract:
The paper presents an introductory and general discussion of quantum Monte Carlo methods, covering some fundamental algorithms, concepts, and their range of applicability. In order to introduce the quantum Monte Carlo method, preliminary concepts associated with Monte Carlo techniques are discussed.
Abstract:
The technique of Monte Carlo (MC) tests [Dwass (1957), Barnard (1963)] provides an attractive method of building exact tests from statistics whose finite-sample distribution is intractable but can be simulated (provided it does not involve nuisance parameters). We extend this method in two ways: first, by allowing for MC tests based on exchangeable, possibly discrete, test statistics; second, by generalizing the method to statistics whose null distributions involve nuisance parameters (maximized MC tests, MMC). Simplified, asymptotically justified versions of the MMC method are also proposed, and it is shown that they provide a simple way of improving standard asymptotics and dealing with nonstandard asymptotics (e.g., unit-root asymptotics). Parametric bootstrap tests may be interpreted as a simplified version of the MMC method (without the general validity properties of the latter).
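A minimal sketch of the basic MC test (not the MMC extension with nuisance parameters) is given below: the observed statistic is ranked among statistics simulated under the null, and the +1 correction keeps the test exact for a finite number of replicates. The test statistic and data-generating choices are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def mc_pvalue(stat, data, simulate_null, n_rep=999):
    """Monte Carlo p-value in the Dwass/Barnard sense:
    rank the observed statistic among n_rep statistics simulated under H0."""
    s0 = stat(data)
    s_null = np.array([stat(simulate_null(rng)) for _ in range(n_rep)])
    # +1 in numerator and denominator gives an exactly valid test for finite n_rep
    return (1 + np.sum(s_null >= s0)) / (n_rep + 1)

# Illustration (hypothetical): test H0 "sample of size 30 from N(0,1)" with |skewness| as statistic.
def abs_skewness(x):
    x = np.asarray(x)
    return abs(np.mean((x - x.mean())**3) / x.std()**3)

observed = rng.standard_exponential(30) - 1.0          # actually skewed data
p = mc_pvalue(abs_skewness, observed, lambda g: g.standard_normal(30))
print(f"Monte Carlo p-value: {p:.3f}")
```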
Abstract:
While channel coding is a standard method of improving a system’s energy efficiency in digital communications, its practice does not extend to high-speed links. Increasing demands on network speeds are placing a large burden on the energy efficiency of high-speed links and render the benefit of channel coding for these systems a timely subject. The low error rates of interest and the presence of residual intersymbol interference (ISI) caused by hardware constraints impede the analysis and simulation of coded high-speed links. Focusing on the residual ISI and combined noise as the dominant error mechanisms, this paper analyses error correlation through the concepts of error region, channel signature, and correlation distance. This framework provides a deeper insight into joint error behaviours in high-speed links, extends the range of statistical simulation for coded high-speed links, and provides a case against the use of biased Monte Carlo methods in this setting.
Abstract:
A model for the structure of amorphous molybdenum trisulfide, a-MoS3, has been created using reverse Monte Carlo methods. This model, which consists of chains of MoS6 units, each sharing three sulfurs with each of its two neighbors and forming alternating long (nonbonded) and short (bonded) Mo-Mo separations, is a good fit to the neutron diffraction data and is chemically and physically realistic. The paper identifies the limitations of previous models based on Mo3 triangular clusters in accounting for the available experimental data.
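For readers unfamiliar with reverse Monte Carlo, the sketch below shows the generic acceptance step on a toy monatomic system: a random atomic displacement is kept whenever it lowers the chi-squared misfit to the target data, and otherwise with probability exp(-Δχ²/2). The "experimental" histogram here is just a stand-in generated from another random configuration, and none of the chemistry of a-MoS3 (MoS6 units, Mo-Mo distance constraints) is included.

```python
import numpy as np

rng = np.random.default_rng(3)

N, L, SIGMA = 64, 10.0, 0.05          # atoms, box edge, assumed data uncertainty
BINS = np.linspace(0.5, 5.0, 46)      # r bins for the pair-distance histogram

def pair_histogram(pos):
    """Histogram of pair distances with minimum-image periodic boundaries."""
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    r = np.sqrt((d**2).sum(-1))[np.triu_indices(N, 1)]
    return np.histogram(r, bins=BINS)[0].astype(float)

pos = rng.uniform(0, L, (N, 3))
target = pair_histogram(rng.uniform(0, L, (N, 3)))    # stand-in for experimental data

def chi2(hist):
    # crude normalisation by an assumed uncertainty; a real RMC fit uses the measured errors
    return float(np.sum((hist - target)**2) / (SIGMA * target.max())**2)

cur = chi2(pair_histogram(pos))
for step in range(5000):
    i = rng.integers(N)
    trial = pos.copy()
    trial[i] = (trial[i] + rng.normal(0, 0.3, 3)) % L  # random displacement of one atom
    new = chi2(pair_histogram(trial))
    # RMC acceptance: always accept improvements, otherwise accept with exp(-dchi2/2)
    if new <= cur or rng.random() < np.exp(-(new - cur) / 2.0):
        pos, cur = trial, new

print(f"final chi^2 = {cur:.2f}")
```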
Abstract:
This paper addresses the numerical solution of the rendering equation in realistic image creation. The rendering equation is an integral equation describing light propagation in a scene according to a given illumination model; the illumination model used determines the kernel of the equation under consideration. Nowadays, Monte Carlo methods are widely used for solving the rendering equation in order to create photorealistic images. In this work we consider the Monte Carlo solution of the rendering equation in the context of a parallel sampling scheme for the hemisphere. Our aim is to apply this sampling scheme to a stratified Monte Carlo integration method for solving the rendering equation in parallel. The integration domain of the rendering equation is a hemisphere. We divide the hemispherical domain into a number of equal sub-domains of orthogonal spherical triangles. This domain partitioning allows the rendering equation to be solved in parallel. It is known that the Neumann series represents the solution of the integral equation as an infinite sum of integrals. We approximate this sum with a desired truncation error (systematic error), obtaining a fixed number of iterations. The rendering equation is then solved iteratively using a Monte Carlo approach. At each iteration we evaluate multi-dimensional integrals using the uniform hemisphere partitioning scheme. An estimate of the rate of convergence is obtained for the stratified Monte Carlo method. This domain partitioning allows easy parallel realization and leads to improved convergence of the Monte Carlo method. High-performance and Grid computing of the corresponding Monte Carlo scheme are discussed.
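The truncated Neumann series strategy described above can be illustrated on a toy Fredholm equation of the second kind with a known solution, in place of the rendering equation; stratifying the first transition over equal sub-intervals of [0, 1] plays the role of the equal sub-domain partitioning of the hemisphere. Everything below (kernel, truncation depth, sample counts) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy Fredholm equation of the second kind, used in place of the rendering equation:
#   f(x) = g(x) + \int_0^1 K(x, y) f(y) dy,  with g(x) = 1 and K(x, y) = x * y.
# Its exact solution is f(x) = 1 + 0.75 x, which lets us check the estimator.
g = lambda x: 1.0
K = lambda x, y: x * y

def neumann_walk(x0, depth, rng):
    """One sample of the Neumann series truncated after `depth` kernel applications,
    using a uniform transition density on [0, 1] for the random walk."""
    total, weight, x = g(x0), 1.0, x0
    for _ in range(depth):
        y = rng.random()                   # next state, p(y) = 1 on [0, 1]
        weight *= K(x, y) / 1.0            # importance weight of the path so far
        total += weight * g(y)             # contribution of this term of the series
        x = y
    return total

def estimate(x0, depth=8, n_strata=16, per_stratum=2000):
    """Stratify the first transition over equal sub-intervals of [0, 1]
    (a 1-D analogue of partitioning the hemisphere into equal sub-domains)."""
    est = 0.0
    for s in range(n_strata):
        acc = 0.0
        for _ in range(per_stratum):
            y1 = (s + rng.random()) / n_strata          # first step inside stratum s
            w1 = K(x0, y1)
            acc += g(x0) + w1 * neumann_walk(y1, depth - 1, rng)
        est += acc / per_stratum / n_strata
    return est

x0 = 0.5
print(f"MC estimate f({x0}) = {estimate(x0):.4f}   (exact = {1 + 0.75 * x0:.4f})")
```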
Abstract:
The sampling of a given solid angle is a fundamental operation in realistic image synthesis, where the rendering equation describing light propagation in closed domains is solved. Monte Carlo methods for solving the rendering equation sample the solid angle subtended by the unit hemisphere or the unit sphere in order to perform the numerical integration of the rendering equation. In this work we consider the problem of generating uniformly distributed random samples over the hemisphere and the sphere. Our aim is to construct and study a parallel sampling scheme for the hemisphere and the sphere. First we apply the symmetry property to partition the hemisphere and the sphere: the domain of the solid angle subtended by a hemisphere is divided into a number of equal sub-domains, each representing the solid angle subtended by an orthogonal spherical triangle with fixed vertices and computable parameters. We then introduce two new algorithms for sampling orthogonal spherical triangles. Both algorithms are based on a transformation of the unit square. Similarly to Arvo's algorithm for sampling an arbitrary spherical triangle, the suggested algorithms accommodate stratified sampling. We derive the necessary transformations for the algorithms. The first sampling algorithm generates a sample by mapping the unit square onto the orthogonal spherical triangle. The second algorithm directly computes the unit radius vector of a sampling point inside the orthogonal spherical triangle. The sampling of the total hemisphere and sphere is performed in parallel for all sub-domains simultaneously by using the symmetry property of the partitioning. The applicability of the corresponding parallel sampling scheme to Monte Carlo and quasi-Monte Carlo solution of the rendering equation is discussed.
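The sketch below shows the general idea of generating hemisphere directions from stratified points of the unit square; it uses the classical equal-solid-angle mapping of the whole hemisphere rather than the paper's orthogonal-spherical-triangle algorithms, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)

def square_to_hemisphere(u, v):
    """Map a point (u, v) of the unit square to a direction distributed
    uniformly (by solid angle) over the unit hemisphere z >= 0."""
    z = u                                  # cos(theta) uniform in [0, 1] <=> uniform solid angle
    phi = 2.0 * np.pi * v
    s = np.sqrt(1.0 - z * z)
    return np.stack([s * np.cos(phi), s * np.sin(phi), z], axis=-1)

def stratified_hemisphere_samples(n_u, n_v, rng):
    """One jittered sample per cell of an n_u x n_v grid on the unit square,
    i.e. stratified sampling of the hemisphere through the mapping above."""
    i, j = np.meshgrid(np.arange(n_u), np.arange(n_v), indexing="ij")
    u = (i + rng.random((n_u, n_v))) / n_u
    v = (j + rng.random((n_u, n_v))) / n_v
    return square_to_hemisphere(u.ravel(), v.ravel())

# Sanity check: estimate the integral of cos(theta) over the hemisphere,
# whose exact value is pi, using the stratified directions.
dirs = stratified_hemisphere_samples(32, 32, rng)
estimate = 2.0 * np.pi * np.mean(dirs[:, 2])       # (solid angle) * mean of cos(theta)
print(f"stratified estimate of the cosine integral = {estimate:.4f} (exact = {np.pi:.4f})")
```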
Abstract:
This paper is devoted to advanced Monte Carlo methods for realistic image creation. It offers a new stratified approach for solving the rendering equation. We consider the numerical solution of the rendering equation by separation of the integration domain. The hemispherical integration domain is symmetrically separated into 16 parts. The first 8 sub-domains are equal-size orthogonal spherical triangles; they are symmetric to each other and grouped with a common vertex around the normal vector to the surface. The hemispherical integration domain is completed with 8 more sub-domains of equal-size spherical quadrangles, also symmetric to each other. All sub-domains have fixed vertices and computable parameters. The bijections of the unit square onto an orthogonal spherical triangle and onto a spherical quadrangle are derived and used to generate sampling points. Then the symmetric sampling scheme is applied to generate sampling points distributed over the hemispherical integration domain. The necessary transformations are made and the stratified Monte Carlo estimator is presented. The rate of convergence is obtained, and one can see that the algorithm is of super-convergent type.
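To give a feel for why such stratification improves convergence, the toy comparison below contrasts crude Monte Carlo with a 16-cell stratified estimator for a smooth hemispherical integrand, using 16 equal cells of the unit square as a stand-in for the paper's partition into spherical triangles and quadrangles.

```python
import numpy as np

rng = np.random.default_rng(6)

# The hemisphere is parameterised by z = u, phi = 2*pi*v, so that dOmega = 2*pi du dv.
# The test integrand is cos(theta); its exact integral over the hemisphere is pi,
# which lets us measure the error spread of both estimators.

def integrand(u, v):
    return u                                   # cos(theta) expressed in square coordinates

def crude(n):
    u, v = rng.random(n), rng.random(n)
    return 2.0 * np.pi * integrand(u, v).mean()

def stratified(n, grid=4):                      # grid*grid = 16 sub-domains
    per = n // (grid * grid)
    total = 0.0
    for i in range(grid):
        for j in range(grid):
            u = (i + rng.random(per)) / grid
            v = (j + rng.random(per)) / grid
            total += integrand(u, v).mean() / (grid * grid)
    return 2.0 * np.pi * total

runs = 200
err_crude = np.std([crude(1600) - np.pi for _ in range(runs)])
err_strat = np.std([stratified(1600) - np.pi for _ in range(runs)])
print(f"error spread (std), crude MC:      {err_crude:.4f}")
print(f"error spread (std), stratified MC: {err_strat:.4f}")
```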
Abstract:
In this paper we consider hybrid (fast stochastic approximation and deterministic refinement) algorithms for Matrix Inversion (MI) and for Solving Systems of Linear Algebraic Equations (SLAE). Monte Carlo methods are used for the stochastic approximation, since it is known that they are very efficient in finding a quick rough approximation of an element or a row of the inverse matrix, or of a component of the solution vector. We show how the stochastic approximation of the MI can be combined with a deterministic refinement procedure to obtain the MI with the required precision, and how the SLAE can then be solved using the MI. We employ a splitting A = D − C of a given non-singular matrix A, where D is a diagonally dominant matrix and C is a diagonal matrix. In our algorithm for solving SLAE and MI, different choices of D can be considered in order to control the norm of the matrix T = D⁻¹C of the resulting SLAE and to minimize the number of Markov chains required to reach a given precision. Further, we run the algorithms on a mini-Grid and investigate their efficiency depending on the granularity. Corresponding experimental results are presented.
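A compressed sketch of the hybrid idea follows: a Monte Carlo rough inverse built from the Neumann series for (I − T)⁻¹ with T = D⁻¹C, refined deterministically. The refinement used here is a Newton-Schulz iteration, chosen only as a simple stand-in for the paper's refinement procedure; the test matrix and chain parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

n = 6
A = np.eye(n) * 4.0 + rng.uniform(-0.5, 0.5, (n, n))   # hypothetical diagonally dominant matrix
D = np.diag(np.diag(A))                                # splitting A = D - C
C = D - A
T = np.linalg.solve(D, C)                              # T = D^{-1} C, with norm < 1 here

def mc_rough_inverse(T, chains=2000, length=20):
    """Estimate M = (I - T)^{-1} entrywise with Markov chains of fixed length."""
    n = T.shape[0]
    P = np.abs(T)
    P = P / P.sum(axis=1, keepdims=True)               # transition probabilities per row
    M = np.zeros((n, n))
    for i in range(n):                                 # one set of chains per row of M
        for _ in range(chains):
            state, w = i, 1.0
            M[i, i] += 1.0                             # k = 0 term of the Neumann series
            for _ in range(length):
                nxt = rng.choice(n, p=P[state])
                w *= T[state, nxt] / P[state, nxt]     # importance weight of the path
                M[i, nxt] += w
                state = nxt
        M[i] /= chains
    return M

X = mc_rough_inverse(T) @ np.linalg.inv(D)             # rough A^{-1} = (I - T)^{-1} D^{-1}
print("max |A X - I| before refinement:", np.abs(A @ X - np.eye(n)).max())
for _ in range(10):                                    # deterministic refinement (Newton-Schulz)
    X = X @ (2.0 * np.eye(n) - A @ X)
print("max |A X - I| after refinement: ", np.abs(A @ X - np.eye(n)).max())
```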
Abstract:
In many data mining applications, automated retrieval of text and image information is needed. This becomes essential with the growth of the Internet and of digital libraries. Our approach is based on latent semantic indexing (LSI) and the corresponding term-by-document matrix suggested by Berry and his co-authors. Instead of using deterministic methods to find the required number of first "k" singular triplets, we propose a stochastic approach. First, we use a Monte Carlo method to sample and build a much smaller term-by-document matrix (e.g. a k x k matrix), from which we then find the first "k" triplets using standard deterministic methods. Second, we investigate how the problem can be reduced to finding the "k" largest eigenvalues using parallel Monte Carlo methods. We apply these methods to the initial matrix and also to the reduced one. The algorithms run on a cluster of workstations under MPI; results of experiments on textual retrieval of Web documents, as well as a comparison of the proposed stochastic methods, are presented. © 2003 IMACS. Published by Elsevier Science B.V. All rights reserved.
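As an illustration of the first, sampling-based step, the sketch below builds a much smaller matrix by sampling columns (documents) with probability proportional to their squared norms and rescaling them, then takes the leading singular triplets of the small matrix with a standard deterministic SVD. This particular column-sampling scheme is a common randomized construction and only stands in for the one proposed by the authors; the matrix sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(8)

terms, docs, k, s = 2000, 500, 10, 60
A = rng.random((terms, docs)) * (rng.random((terms, docs)) < 0.02)   # sparse-ish term-by-document matrix

col_norms2 = (A**2).sum(axis=0)
p = col_norms2 / col_norms2.sum()                     # sampling probabilities per document
idx = rng.choice(docs, size=s, replace=True, p=p)
C = A[:, idx] / np.sqrt(s * p[idx])                   # rescaled sampled columns

U, S, _ = np.linalg.svd(C, full_matrices=False)       # deterministic SVD of the small matrix
Uk, Sk = U[:, :k], S[:k]                              # approximate first k singular pairs of A

exact = np.linalg.svd(A, compute_uv=False)[:k]
print("approx singular values:", np.round(Sk, 2))
print("exact  singular values:", np.round(exact, 2))
```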
Abstract:
Regulatory authorities in many countries, in order to maintain an acceptable balance between appropriate customer service quality and cost, are introducing performance-based regulation. These regulations impose penalties, and in some cases rewards, which introduce a component of financial risk for an electric power utility due to the uncertainty associated with preserving a specific level of system reliability. In Brazil, for instance, one of the reliability indices receiving special attention from the utilities is the Maximum Continuous Interruption Duration per customer (MCID). This paper describes a chronological Monte Carlo simulation approach to evaluate probability distributions of reliability indices, including the MCID, and the corresponding penalties. In order to obtain the desired efficiency, modern computational techniques are used both for modeling (UML - Unified Modeling Language) and for programming (Object-Oriented Programming). Case studies on a simple distribution network and on real Brazilian distribution systems are presented and discussed. © Copyright KTH 2006.
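The chronological simulation idea can be sketched for a single load point: up/down cycles are drawn in time order over a year, the Maximum Continuous Interruption Duration is recorded per simulated year, and a penalty is charged for the excess above a regulatory limit. Failure and repair rates, the limit and the penalty rate below are all hypothetical, and a real study models the full distribution network rather than one component.

```python
import numpy as np

rng = np.random.default_rng(9)

LAMBDA = 4.0 / 8760.0       # failures per hour (4 failures/year on average), hypothetical
MU     = 1.0 / 3.0          # repairs per hour (mean repair time of 3 h), hypothetical
LIMIT  = 8.0                # regulatory MCID limit (h), hypothetical
PENALTY_PER_HOUR = 1000.0   # monetary penalty per hour above the limit, hypothetical
YEARS  = 20000              # number of simulated years

mcid = np.zeros(YEARS)
for y in range(YEARS):
    t, worst = 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / LAMBDA)      # time to the next failure
        if t >= 8760.0:
            break
        r = rng.exponential(1.0 / MU)           # duration of this interruption
        worst = max(worst, min(r, 8760.0 - t))  # truncate at the end of the year
        t += r
    mcid[y] = worst                             # per-year Maximum Continuous Interruption Duration

penalty = PENALTY_PER_HOUR * np.clip(mcid - LIMIT, 0.0, None)
print(f"mean MCID               = {mcid.mean():.2f} h")
print(f"P(MCID > {LIMIT:.0f} h)          = {(mcid > LIMIT).mean():.3f}")
print(f"expected yearly penalty = {penalty.mean():.1f}")
```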
Abstract:
Graduate Program in Physics - IFT
Abstract:
The sequential Monte Carlo / Quantum Mechanics method was used to obtain the solvatochromic shifts and the dipole moments of the following systems of organic molecules: uracil in aqueous solution, beta-carotene in oleic acid, ricinoleic acid in methanol and in ethanol, and oleic acid in methanol and in ethanol. Geometry optimizations and charge distributions were obtained with Density Functional Theory using the B3LYP functional and the 6-31G(d) basis set for all molecules except water and uracil, for which the 6-311++G(d,p) basis set was used. In the classical Monte Carlo treatment, the Metropolis algorithm was applied through the DICE program. The selection of statistically relevant configurations for the calculation of average properties was implemented using the autocorrelation function computed for each system. The radial distribution function of the molecular liquids was used to separate the first solvation shell, which accounts for the main solute-solvent interaction. The relevant configurations of the first solvation shell of each system were submitted to semi-empirical quantum calculations with the ZINDO/S-CI method. Absorption spectra were obtained for the solutes in the gas phase and for the molecular liquid systems mentioned above, and their electric dipole moments were also obtained. All absorption bands of the systems showed a blue shift, except the second band of the beta-carotene in oleic acid system, which showed a red shift. The results are in excellent agreement with experimental values found in the literature. All systems showed an increase in the electric dipole moment, since the solvent molecules are polar. The fatty acid in alcohol systems gave very similar results, i.e., the fatty acids mentioned have similar spectroscopic behaviour in the same solvents. The simulations with the sequential Monte Carlo / Quantum Mechanics method show that the methodology is effective for obtaining the spectroscopic properties of the molecular liquids analysed.
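The configuration-selection step mentioned above (autocorrelation function, correlation interval, first solvation shell) can be illustrated for the autocorrelation part alone: the sketch below estimates the correlation interval of an energy-like series and keeps only configurations separated by roughly twice that interval. A synthetic correlated series stands in for the output of a DICE simulation, and the 1/e criterion is just one simple way of defining the interval.

```python
import numpy as np

rng = np.random.default_rng(10)

# Synthetic AR(1) series standing in for the energy along a Metropolis Monte Carlo run.
n = 20000
energy = np.empty(n)
energy[0] = 0.0
for i in range(1, n):
    energy[i] = 0.95 * energy[i - 1] + rng.normal()

def autocorrelation(x, max_lag):
    """Normalised autocorrelation function C(k) for lags k = 1 .. max_lag."""
    x = x - x.mean()
    c0 = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:-k], x[k:]) / ((len(x) - k) * c0)
                     for k in range(1, max_lag + 1)])

acf = autocorrelation(energy, 200)
tau = 1 + int(np.argmax(acf < np.exp(-1)))     # first lag where C(k) drops below 1/e
kept = np.arange(0, n, 2 * tau)                # indices of quasi-uncorrelated configurations

print(f"correlation interval ~ {tau} MC steps")
print(f"configurations kept for the QM step: {len(kept)} out of {n}")
```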