960 results for variational Monte-Carlo method
Abstract:
Genetics, the science of heredity and variation in living organisms, has a central role in medicine, in breeding crops and livestock, and in studying fundamental topics of the biological sciences such as evolution and cell functioning. The field of genetics is currently undergoing rapid development because of recent advances in technologies by which molecular data can be obtained from living organisms. To extract the most information from such data, the analyses need to be carried out using statistical models that are tailored to take account of the particular genetic processes. In this thesis we formulate and analyze Bayesian models for genetic marker data of contemporary individuals. The major focus is on modeling the unobserved recent ancestry of the sampled individuals (say, for tens of generations or so), which is carried out by using explicit probabilistic reconstructions of the pedigree structures accompanied by the gene flows at the marker loci. For such a recent history, the recombination process is the major genetic force that shapes the genomes of the individuals, and it is included in the model by assuming that the recombination fractions between adjacent markers are known. The posterior distribution of the unobserved history of the individuals is studied conditionally on the observed marker data by using a Markov chain Monte Carlo (MCMC) algorithm. The example analyses consider estimation of the population structure, the relatedness structure (both at the level of whole genomes and at each marker separately), and haplotype configurations. For situations where the pedigree structure is partially known, an algorithm to create an initial state for the MCMC algorithm is given. Furthermore, the thesis includes an extension of the model for the recent genetic history to situations where a quantitative phenotype has also been measured on the contemporary individuals. In that case the goal is to identify positions on the genome that affect the observed phenotypic values. This task is carried out within the Bayesian framework, where the number and the relative effects of the quantitative trait loci are treated as random variables whose posterior distribution is studied conditionally on the observed genetic and phenotypic data. In addition, the thesis extends a widely used haplotyping method, the PHASE algorithm, to settings where genetic material from several individuals has been pooled together and the allele frequencies of each pool are determined in a single genotyping.
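As a concrete illustration of the transmission model this abstract assumes (known recombination fractions between adjacent markers), the sketch below samples one gamete from a parent's two marker haplotypes, treating the inheritance indicator as a Markov chain whose switch probabilities are the recombination fractions. It is a minimal illustration, not the thesis's pedigree reconstruction; the haplotypes and fractions are made up.

```python
import random

def sample_gamete(hap_a, hap_b, rec_fractions, rng=random):
    """Sample one gamete from a parent's two marker haplotypes.

    The inheritance indicator (which parental haplotype is being copied)
    forms a Markov chain along the chromosome: it starts at either haplotype
    with probability 1/2 and switches between markers k and k+1 with
    probability rec_fractions[k], the known recombination fraction.
    """
    haps = (hap_a, hap_b)
    state = rng.randrange(2)          # which haplotype the gamete starts on
    gamete = [haps[state][0]]
    for k, r in enumerate(rec_fractions):
        if rng.random() < r:          # recombination event in interval k
            state = 1 - state
        gamete.append(haps[state][k + 1])
    return "".join(gamete)

random.seed(1)
# Four markers, alleles written as letters, one fraction per marker interval.
print(sample_gamete("AbCd", "aBcD", [0.1, 0.3, 0.1]))
```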
Abstract:
A better understanding of the rate-limiting step in a first-order phase transition, the nucleation process, is of major importance to a variety of scientific fields ranging from atmospheric sciences to nanotechnology and even to cosmology. This is because in most phase transitions the new phase is separated from the mother phase by a free energy barrier, which is crossed in a process called nucleation. A significant fraction of all atmospheric particles is now thought to be produced by vapour-to-liquid nucleation. In atmospheric sciences, as in other scientific fields, the theoretical treatment of nucleation is mostly based on a theory known as the Classical Nucleation Theory. However, the Classical Nucleation Theory is known to have only limited success in predicting the rate at which vapour-to-liquid nucleation takes place under given conditions. This thesis studies unary homogeneous vapour-to-liquid nucleation from a statistical mechanics viewpoint. We apply Monte Carlo simulations of molecular clusters to calculate the free energy barrier separating the vapour and liquid phases and compare our results against laboratory measurements and Classical Nucleation Theory predictions. According to our results, the work of adding a monomer to a cluster in equilibrium vapour is accurately described by the liquid drop model applied by the Classical Nucleation Theory once the clusters are larger than some threshold size. The threshold cluster sizes contain only a few to some tens of molecules, depending on the interaction potential and temperature. However, the error made in modeling the smallest clusters as liquid drops results in an erroneous absolute value for the cluster work of formation throughout the size range, as predicted by the McGraw-Laaksonen scaling law. By calculating correction factors to Classical Nucleation Theory predictions for the nucleation barriers of argon and water, we show that the corrected predictions produce nucleation rates that are in good agreement with experiments. For the smallest clusters, the deviation between the simulation results and the liquid drop values is accurately modelled by the low-order virial coefficients at modest temperatures and vapour densities, in other words, in the validity range of the non-interacting cluster theory of Frenkel, Band, and Bijl. Our results do not indicate a need for a size-dependent replacement free energy correction. The results also indicate that the Classical Nucleation Theory predicts the size of the critical cluster correctly. We also present a new method for calculating the equilibrium vapour density, the size dependence of the surface tension, and the planar surface tension directly from cluster simulations. Finally, we show how the size dependence of the cluster surface tension at the equimolar surface is a function of the virial coefficients, a result confirmed by our cluster simulations.
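For reference, the liquid drop (capillarity) expression for the work of cluster formation on which the Classical Nucleation Theory rests is easy to evaluate; the sketch below locates the barrier top, i.e. the critical cluster. The temperature, supersaturation, surface tension, and molecular volume are rough argon-like placeholders, not values from the thesis.

```python
import math

# Liquid drop (capillarity) work of forming an n-molecule cluster, the form
# used by Classical Nucleation Theory: a bulk term -n*ln(S) plus a surface
# term proportional to n^(2/3).  All numbers are rough argon-like
# placeholders, not fitted values.
kB    = 1.380649e-23    # J/K
T     = 70.0            # K
S     = 10.0            # supersaturation ratio
sigma = 0.012           # J/m^2, planar surface tension
v_l   = 4.7e-29         # m^3, molecular volume in the liquid

s1 = (36.0 * math.pi) ** (1.0 / 3.0) * v_l ** (2.0 / 3.0)  # area of a monomer-sized drop

def delta_g(n):
    """Work of formation of an n-cluster, in units of kB*T."""
    return -n * math.log(S) + sigma * s1 * n ** (2.0 / 3.0) / (kB * T)

# The critical cluster sits at the top of the free energy barrier.
n_star = max(range(1, 2000), key=delta_g)
print(f"critical size n* = {n_star}, barrier height = {delta_g(n_star):.1f} kT")
```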
Abstract:
The dynamics of low-density flows is governed by the Boltzmann equation of the kinetic theory of gases. This is a nonlinear integro-differential equation and, in general, numerical methods must be used to obtain its solution. The present paper, after a brief review of the Direct Simulation Monte Carlo (DSMC) methods due to Bird, and Belotserkovskii and Yanitskii, studies the details of the DSMC method of Deshpande for mono- as well as multicomponent gases. The present method is a statistical particle-in-cell method and is based upon the Kac-Prigogine master equation, which reduces to the Boltzmann equation under the hypothesis of molecular chaos. The proposed Markov model simulating the collisions uses a Poisson distribution for the number of collisions allowed in the cells into which the physical space is divided. The model is then extended to a binary mixture of gases, and it is shown that it is necessary to perform the collisions in a certain sequence to obtain an unbiased simulation.
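A minimal sketch of the central ingredient described above: in each spatial cell the number of collision events is drawn from a Poisson distribution, and each event scatters a randomly chosen pair (here with an isotropic hard-sphere model). The cell population and mean collision number are illustrative; Deshpande's full scheme, the collision sequencing for mixtures, and the coupling to free flight are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def collide_cell(velocities, nu_dt):
    """One collision step in a single spatial cell: the number of collisions
    is drawn from a Poisson distribution with mean nu_dt (which would come
    from the local collision rate), and each colliding pair is picked at
    random and scattered isotropically (hard-sphere model).  Momentum and
    energy are conserved pair by pair."""
    n = len(velocities)
    for _ in range(rng.poisson(nu_dt)):
        i, j = rng.choice(n, size=2, replace=False)
        v_cm = 0.5 * (velocities[i] + velocities[j])       # center-of-mass velocity
        g = np.linalg.norm(velocities[i] - velocities[j])  # relative speed, preserved
        cos_t = rng.uniform(-1.0, 1.0)                     # isotropic scattering angle
        sin_t = np.sqrt(1.0 - cos_t ** 2)
        phi = rng.uniform(0.0, 2.0 * np.pi)
        g_new = 0.5 * g * np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
        velocities[i] = v_cm + g_new
        velocities[j] = v_cm - g_new
    return velocities

v = rng.normal(size=(200, 3))          # Maxwellian-like cell population
v = collide_cell(v, nu_dt=30.0)
print("mean speed after collisions:", np.linalg.norm(v, axis=1).mean())
```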
Monte Carlo simulation of network formation based on structural fragments in epoxy-anhydride systems
Abstract:
A method combining the Monte Carlo technique and the simple fragment approach has been developed for simulating network formation in amine-catalysed epoxy-anhydride systems. The method affords a detailed insight into the nature and composition of the network, showing the distribution of various fragments. It has been used to characterize network formation in the reaction of the diglycidyl ester of isophthalic acid with hexahydrophthalic anhydride, catalysed by benzyldimethylamine. Pre-gel properties such as number and weight distributions and average molecular weights have been calculated as a function of epoxy conversion, leading to a prediction of the gel-point conversion. Analysis of the simulated network further yields other characteristic properties such as the concentration of crosslink points, the distribution and concentration of elastically active chains, the average molecular weight between crosslinks, the sol content, and the mass fraction of pendant chains. A comparison has been made of the properties obtained through simulation with those predicted by the fragment approach alone, which, however, gives only average properties. The Monte Carlo simulation results clearly show that loops and other cyclic structures occur in the gel. This may account for the differences observed between the results of the simulation and the fragment model in the post-gel phase.
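The sketch below shows, under strong simplifications, how a Monte Carlo build-up of a network predicts a gel point: monomers carrying f reactive groups are bonded pairwise at random, and a union-find structure tracks the largest molecule, which blows up near the Flory conversion 1/(f-1). It is a generic random-bond illustration, not the fragment-based chemistry of the paper, which distinguishes chemically different groups and rate constants.

```python
import random

def find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def simulate_step_growth(n_monomers=20000, functionality=3, seed=1):
    """Toy Monte Carlo of network build-up: every monomer carries
    `functionality` reactive groups, and randomly chosen group pairs are
    bonded one at a time.  The largest molecule, tracked with union-find,
    blows up near the Flory gel conversion 1/(functionality - 1) = 0.5 here.
    """
    rng = random.Random(seed)
    groups = [m for m in range(n_monomers) for _ in range(functionality)]
    rng.shuffle(groups)                      # random pairing of reactive groups
    parent = list(range(n_monomers))
    size = [1] * n_monomers
    total = len(groups)
    for k in range(0, total - 1, 2):
        a, b = find(parent, groups[k]), find(parent, groups[k + 1])
        if a != b:                           # the bond joins two distinct molecules
            if size[a] < size[b]:
                a, b = b, a
            parent[b] = a
            size[a] += size[b]
        if k % (total // 10) == 0:
            print(f"conversion {(k + 2) / total:4.2f}  largest molecule {max(size)}")

simulate_step_growth()
```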
Abstract:
We report the results of Monte Carlo simulation of the phase diagram and oxygen ordering in YBa2Cu3O6+x for low intra-sublattice repulsion. At low temperatures, apart from the tetragonal (T), orthorhombic (OI), and 'double cell' ortho-II (OII) phases, there is evidence for two additional orthorhombic phases, labelled here OĪ and OIII. At high temperatures, there was no evidence for the decomposition of the OII phase into the T and OI phases. We find qualitative agreement with experimental observations and with cluster-variation method results.
Abstract:
Geometry and energy of argon clusters confined in zeolite NaCaA are compared with those of free clusters. Results indicate the possible existence of magic numbers among the confined clusters. Spectra obtained from instantaneous normal mode analysis of free and confined clusters give a larger percentage of imaginary frequencies for the latter, indicating that the confined cluster atoms significantly populate the saddle points of the potential energy surface. The variation of the percentage of imaginary frequencies with temperature during melting is akin to the variation of other properties. It is shown that confined clusters might exhibit inverse surface melting, unlike medium-to-large-sized free clusters, which exhibit surface melting. Configurational-bias Monte Carlo (CBMC) simulations of n-alkanes in zeolites Y and A are reported. The CBMC method gives reliable estimates of the properties relating to the conformation of molecules. Changes in the conformational properties of n-butane and of longer n-alkanes such as n-hexane and n-heptane when they are confined in different zeolites are presented. The changes in the conformational properties of n-butane and n-hexane with temperature and concentration are discussed. In general, in zeolite Y as well as in A, there is a significant enhancement of the gauche population as compared to the pure unconfined fluid.
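The instantaneous normal mode analysis mentioned above amounts to diagonalizing the Hessian of the potential energy at configurations drawn from the thermal ensemble and counting negative eigenvalues (imaginary frequencies). A minimal sketch for a free Lennard-Jones cluster in reduced units, with one hand-made configuration standing in for an ensemble sample and the confining zeolite field omitted:

```python
import numpy as np

def lj_energy(x):
    """Total Lennard-Jones energy (epsilon = sigma = 1) of an (n, 3) array."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    r = d[np.triu_indices(len(x), k=1)]
    return np.sum(4.0 * (r ** -12 - r ** -6))

def imaginary_mode_fraction(x, h=1e-4):
    """Fraction of negative Hessian eigenvalues (imaginary INM frequencies),
    with the Hessian built by central finite differences of the energy."""
    flat, n = x.ravel(), x.size
    hess = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            hess[i, j] = (lj_energy((flat + ei + ej).reshape(-1, 3))
                          - lj_energy((flat + ei - ej).reshape(-1, 3))
                          - lj_energy((flat - ei + ej).reshape(-1, 3))
                          + lj_energy((flat - ei - ej).reshape(-1, 3))) / (4 * h * h)
    eig = np.linalg.eigvalsh(0.5 * (hess + hess.T))
    return np.mean(eig < -1e-3)     # tolerance skips the exact zero modes

rng = np.random.default_rng(0)
# A thermally distorted 7-atom cluster: a cubic fragment plus random kicks.
base = 1.1 * np.array([[i, j, k] for i in range(2) for j in range(2) for k in range(2)], float)[:7]
x = base + rng.normal(scale=0.15, size=base.shape)
print("imaginary-frequency fraction:", imaginary_mode_fraction(x))
```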
Abstract:
Given the increasing cost of designing and building new highway pavements, reliability analysis has become vital to ensure that a given pavement performs as expected in the field. Recognizing the importance of failure analysis to safety, reliability, performance, and economy, back analysis has been employed in various engineering applications to evaluate the inherent uncertainties of the design and analysis. The probabilistic back analysis method formulated on Bayes' theorem and solved using Markov chain Monte Carlo simulation with a Metropolis-Hastings algorithm has proved highly efficient in addressing this issue. It is also quite flexible and is applicable to any type of prior information. In this paper, this method has been used to back-analyze the parameters that influence the pavement life and to account for the uncertainty of the mechanistic-empirical pavement design model. The load-induced pavement structural responses (e.g., stresses, strains, and deflections) used to predict the pavement life are estimated using a response surface methodology model developed from the results of linear elastic analysis. The failure criterion adopted for the analysis was based on the factor of safety (FOS), and the study was carried out for different sample sizes and jumping distributions to estimate the most robust posterior statistics. From the posterior statistics of the case considered, it was observed that after approximately 150 million standard axle load repetitions, the mean values of the pavement properties decrease as expected, with a significant decrease in the values of the elastic moduli of the affected layers. An analysis of the posterior statistics indicated that the parameters that contribute significantly to pavement failure were the moduli of the base and surface layers, which is consistent with the findings of other studies. After the back analysis, the base modulus shows a significant decrease of 15.8% and the surface layer modulus a decrease of 3.12% in the mean value. The usefulness of the back analysis methodology is further highlighted by estimating the design parameters for specified values of the factor of safety. The analysis revealed that for the pavement section considered, a reliability of 89% and 94% can be achieved by adopting FOS values of 1.5 and 2, respectively. The methodology proposed can therefore be effectively used to identify the parameters that are critical to pavement failure in the design of pavements for specified levels of reliability. DOI: 10.1061/(ASCE)TE.1943-5436.0000455.
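The mechanics of such a Metropolis-Hastings back analysis fit in a few lines; the sketch below updates a single layer modulus from one noisy performance observation. The forward model, prior, and all numbers are placeholders for illustration, not the paper's fitted response surface.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical forward model: predicted pavement life (log scale) as a simple
# function of a layer modulus E (MPa).  A real study would use the response
# surface fitted to linear elastic analysis; this stand-in only illustrates
# the Bayesian back-analysis mechanics.
def log_life(E):
    return 2.0 + 1.3 * np.log(E / 300.0)

observed = 1.55                              # observed log-life (hypothetical)
sigma_obs = 0.20                             # observation noise std
prior_mean, prior_sd = np.log(300.0), 0.3    # lognormal prior on E

def log_post(logE):
    lp = -0.5 * ((logE - prior_mean) / prior_sd) ** 2                  # prior
    ll = -0.5 * ((log_life(np.exp(logE)) - observed) / sigma_obs) ** 2 # likelihood
    return lp + ll

# Random-walk Metropolis-Hastings over log E.
chain, logE = [], prior_mean
for _ in range(20000):
    prop = logE + rng.normal(scale=0.1)      # jumping distribution
    if np.log(rng.uniform()) < log_post(prop) - log_post(logE):
        logE = prop
    chain.append(np.exp(logE))
posterior = np.array(chain[5000:])           # discard burn-in
print(f"posterior E: mean {posterior.mean():.0f} MPa, sd {posterior.std():.0f}")
```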
Abstract:
In this paper, we propose low-complexity algorithms based on Monte Carlo sampling for signal detection and channel estimation on the uplink in large-scale multiuser multiple-input-multiple-output (MIMO) systems with tens to hundreds of antennas at the base station (BS) and a similar number of uplink users. A BS receiver that employs a novel mixed sampling technique (which makes a probabilistic choice between Gibbs sampling and random uniform sampling in each coordinate update) for detection and a Gibbs-sampling-based method for channel estimation is proposed. The algorithm proposed for detection alleviates the stalling problem encountered at high signal-to-noise ratios (SNRs) in conventional Gibbs-sampling-based detection and achieves near-optimal performance in large systems with M-ary quadrature amplitude modulation (M-QAM). A novel ingredient in the detection algorithm that is responsible for achieving near-optimal performance at low complexity is the joint use of a mixed Gibbs sampling (MGS) strategy and a multiple restart (MR) strategy with an efficient restart criterion. Near-optimal detection performance is demonstrated for a large number of BS antennas and users (e.g., 64 and 128 BS antennas and users). The proposed Gibbs-sampling-based channel estimation algorithm refines an initial estimate of the channel obtained during the pilot phase through iterations with the proposed MGS-based detection during the data phase. In time-division duplex systems where channel reciprocity holds, these channel estimates can be used for multiuser MIMO precoding on the downlink. The proposed receiver is shown to achieve good performance and scale well for large dimensions.
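A minimal sketch of the mixed Gibbs update described above, for BPSK symbols and a small real-valued system: with probability q a coordinate is set by a blind uniform draw, otherwise by the usual Gibbs conditional, and the best vector visited is retained. The paper's M-QAM alphabets, restart criterion, and large dimensions are omitted; all sizes and noise levels here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def mixed_gibbs_detect(y, H, sigma2, q=0.1, n_sweeps=50):
    """Mixed Gibbs sampling detection for BPSK symbols x in {-1, +1}: each
    coordinate is updated by a blind uniform draw with probability q and by
    the usual Gibbs conditional otherwise; the mixing counters stalling at
    high SNR.  The best (lowest-cost) vector visited is returned."""
    n = H.shape[1]
    x = rng.choice([-1.0, 1.0], size=n)
    best, best_cost = x.copy(), np.sum((y - H @ x) ** 2)
    for _ in range(n_sweeps):
        for i in range(n):
            if rng.uniform() < q:                    # random uniform move
                x[i] = rng.choice([-1.0, 1.0])
            else:                                    # Gibbs conditional draw
                costs = []
                for s in (-1.0, 1.0):
                    x[i] = s
                    costs.append(np.sum((y - H @ x) ** 2))
                d = np.clip((costs[1] - costs[0]) / (2.0 * sigma2), -50, 50)
                x[i] = 1.0 if rng.uniform() < 1.0 / (1.0 + np.exp(d)) else -1.0
            cost = np.sum((y - H @ x) ** 2)
            if cost < best_cost:
                best, best_cost = x.copy(), cost
    return best

n_tx, n_rx, sigma = 16, 16, 0.3
H = rng.normal(size=(n_rx, n_tx)) / np.sqrt(n_rx)
x_true = rng.choice([-1.0, 1.0], size=n_tx)
y = H @ x_true + sigma * rng.normal(size=n_rx)
print("bit errors:", int(np.sum(mixed_gibbs_detect(y, H, sigma ** 2) != x_true)))
```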
Abstract:
We present a stochastic simulation technique for subset selection in time series models, based on the use of indicator variables with the Gibbs sampler within a hierarchical Bayesian framework. As an example, the method is applied to the selection of subset linear AR models, in which only significant lags are included. Joint sampling of the indicators and parameters is found to speed convergence. We discuss the possibility of model mixing where the model is not well determined by the data, and the extension of the approach to include non-linear model terms.
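A toy version of indicator-based subset selection for an AR model is sketched below: with fixed noise and coefficient-prior variances (rather than the paper's full hierarchical prior, and without the joint indicator-parameter sampling it recommends), the coefficients integrate out analytically and the Gibbs sampler scans only the inclusion indicators. Lags 1 and 4 drive the simulated series, so their posterior inclusion probabilities should come out near one.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate an AR series in which only lags 1 and 4 matter.
T, max_lag = 300, 6
y = np.zeros(T)
for t in range(4, T):
    y[t] = 0.5 * y[t - 1] - 0.4 * y[t - 4] + rng.normal()
X = np.column_stack([y[max_lag - k: T - k] for k in range(1, max_lag + 1)])
z = y[max_lag:]
sigma2, tau2 = 1.0, 1.0          # noise and coefficient-prior variances, fixed here

def log_marginal(gamma):
    """log p(z | gamma) up to a constant, with the included coefficients
    integrated out under beta ~ N(0, tau2 I), via the Woodbury identity."""
    idx = gamma.astype(bool)
    quad = z @ z / sigma2
    logdet = len(z) * np.log(sigma2)
    if idx.any():
        Xg = X[:, idx]
        M = np.eye(idx.sum()) / tau2 + Xg.T @ Xg / sigma2
        b = Xg.T @ z / sigma2
        quad -= b @ np.linalg.solve(M, b)
        logdet += idx.sum() * np.log(tau2) + np.linalg.slogdet(M)[1]
    return -0.5 * (logdet + quad)

# Gibbs scan over the inclusion indicators (prior inclusion probability 1/2).
gamma, counts, n_sweeps = np.ones(max_lag), np.zeros(max_lag), 500
for _ in range(n_sweeps):
    for j in range(max_lag):
        lm = []
        for v in (0.0, 1.0):
            gamma[j] = v
            lm.append(log_marginal(gamma))
        d = min(lm[0] - lm[1], 50.0)          # clip to avoid overflow
        gamma[j] = 1.0 if rng.uniform() < 1.0 / (1.0 + np.exp(d)) else 0.0
    counts += gamma
print("posterior inclusion probabilities:", np.round(counts / n_sweeps, 2))
```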
Abstract:
Sequential Monte Carlo (SMC) methods are popular computational tools for Bayesian inference in non-linear non-Gaussian state-space models. For this class of models, we propose SMC algorithms to compute the score vector and observed information matrix recursively in time. We propose two different SMC implementations, one with computational complexity $\mathcal{O}(N)$ and the other with complexity $\mathcal{O}(N^{2})$, where $N$ is the number of importance sampling draws. Although cheaper, the performance of the $\mathcal{O}(N)$ method degrades quickly in time, as it inherently relies on the SMC approximation of a sequence of probability distributions whose dimension increases linearly with time. In particular, even under strong mixing assumptions, the variance of the estimates computed with the $\mathcal{O}(N)$ method increases at least quadratically in time. The $\mathcal{O}(N^{2})$ method is a non-standard SMC implementation that does not suffer from this rapid degradation. We then show how both methods can be used to perform batch and recursive parameter estimation.
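A sketch of the cheaper $\mathcal{O}(N)$ path-space recursion for a linear-Gaussian model: each particle inherits a score accumulator from its resampling ancestor and adds the gradient of the transition log-density, and the final weighted average estimates the score. The model, parameter values, and particle count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Linear-Gaussian state space: x_t = phi x_{t-1} + v_t,  y_t = x_t + w_t.
phi, sv, sw, T = 0.8, 1.0, 1.0, 200
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + sv * rng.normal()
y = x + sw * rng.normal(size=T)

def smc_score(phi_eval, N=2000):
    """O(N) path-space SMC estimate of the score d/dphi log p(y_{1:T}): each
    particle carries an accumulator alpha inherited from its resampling
    ancestor plus the gradient of the transition log-density.  (The
    observation density does not depend on phi here.)"""
    parts = sv * rng.normal(size=N)              # diffuse initialization
    alpha = np.zeros(N)
    logw = -0.5 * ((y[0] - parts) / sw) ** 2
    for t in range(1, T):
        w = np.exp(logw - logw.max()); w /= w.sum()
        anc = rng.choice(N, size=N, p=w)         # multinomial resampling
        old = parts[anc]
        noise = sv * rng.normal(size=N)
        parts = phi_eval * old + noise
        alpha = alpha[anc] + noise * old / sv ** 2   # d/dphi log N(x'; phi x, sv^2)
        logw = -0.5 * ((y[t] - parts) / sw) ** 2
    w = np.exp(logw - logw.max()); w /= w.sum()
    return np.sum(w * alpha)

print("score at phi = 0.8:", smc_score(0.8))     # small in magnitude near the MLE
print("score at phi = 0.5:", smc_score(0.5))     # large and positive below the MLE
```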
Abstract:
An information preservation (IP) method has been used to simulate many micro scale gas flows. It may efficiently reduce the statistical scatter inherent in conventional particle approaches such as the direct simulation Monte Carlo (DSMC) method. This paper reviews applications of IP to some benchmark problems. Comparison of the IP results with those given by experiment, DSMC, and the linearized Boltzmann equation, as well as the Navier-Stokes equations with a slip boundary condition, and the lattice Boltzmann equation, shows that the IP method is applicable to micro scale gas flows over the entire flow regime from continuum to free molecular.
Abstract:
We investigate the 2d O(3) model with the standard action by Monte Carlo simulation at couplings β up to 2.05. We measure the energy density, mass gap and susceptibility of the model, and gather high statistics on lattices of size L ≤ 1024 using the Floating Point Systems T-series vector hypercube and the Thinking Machines Corp.'s Connection Machine 2. Asymptotic scaling does not appear to set in for this action, even at β = 2.10, where the correlation length is 420. We observe a 20% difference between our estimate $m/\Lambda_{\overline{\mathrm{MS}}} = 3.52(6)$ at this β and the recent exact analytical result. We use the overrelaxation algorithm interleaved with Metropolis updates and show that the decorrelation time scales with the correlation length and the number of overrelaxation steps per sweep. We determine its effective dynamical critical exponent to be z' = 1.079(10); thus critical slowing down is reduced significantly for this local algorithm, which is vectorizable and parallelizable.
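A compact sketch of the update mix described above for the 2d O(3) model: energy-preserving overrelaxation reflections about the local neighbor field, interleaved with Metropolis sweeps for ergodicity, with checkerboard ordering so that the vectorized updates remain valid. The lattice size, coupling, and 4:1 step ratio are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)
L, beta = 32, 1.5
spins = rng.normal(size=(L, L, 3))
spins /= np.linalg.norm(spins, axis=-1, keepdims=True)
parity_mask = lambda p: (np.add.outer(np.arange(L), np.arange(L)) % 2 == p)

def neighbor_sum(s):
    return (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
            np.roll(s, 1, 1) + np.roll(s, -1, 1))

def overrelax_sweep(s):
    """Microcanonical overrelaxation: reflect each spin about the local field
    h, s -> 2 (s.h / h.h) h - s, which preserves |s| and s.h, hence the energy.
    Checkerboard ordering keeps the simultaneous numpy update valid."""
    for p in (0, 1):
        h = neighbor_sum(s)
        proj = np.sum(s * h, axis=-1, keepdims=True) / np.sum(h * h, axis=-1, keepdims=True)
        s_new = 2.0 * proj * h - s
        mask = parity_mask(p)
        s[mask] = s_new[mask]
    return s

def metropolis_sweep(s, delta=0.5):
    """Ergodicity-restoring Metropolis sweep with small random spin rotations."""
    for p in (0, 1):
        trial = s + delta * rng.normal(size=s.shape)
        trial /= np.linalg.norm(trial, axis=-1, keepdims=True)
        h = neighbor_sum(s)
        dE = -np.sum((trial - s) * h, axis=-1)        # E = -sum_<ij> s_i . s_j
        accept = parity_mask(p) & (rng.uniform(size=(L, L)) < np.exp(-beta * dE))
        s[accept] = trial[accept]
    return s

for _ in range(200):            # e.g. 4 overrelaxation steps per Metropolis sweep
    for _ in range(4):
        spins = overrelax_sweep(spins)
    spins = metropolis_sweep(spins)
print("energy per site:", -np.mean(np.sum(spins * neighbor_sum(spins), axis=-1)) / 2)
```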
We also use cluster Monte Carlo algorithms, non-local update schemes that can greatly increase the efficiency of computer simulations of spin models. The major computational task in these algorithms is connected component labeling, which identifies clusters of connected sites on a lattice. We have devised some new SIMD component labeling algorithms and implemented them on the Connection Machine. We investigate their performance when applied to the cluster update of the two-dimensional Ising spin model.
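One data-parallel labeling strategy that maps naturally onto SIMD hardware is iterative minimum-label propagation; the numpy sketch below stands in for the parallel sweeps and is a generic illustration, not necessarily the algorithms devised in the thesis. Bonds are drawn with the Swendsen-Wang probability p = 1 - e^(-2β) on a small open-boundary Ising configuration.

```python
import numpy as np

def label_components(bonds_right, bonds_down):
    """Data-parallel connected-component labeling by iterative minimum-label
    propagation: every site starts with a unique label and repeatedly takes
    the minimum over labels reachable through open bonds until nothing
    changes.  Each pass corresponds to one SIMD step.  bonds_right[i, j]
    joins (i, j)-(i, j+1) and bonds_down[i, j] joins (i, j)-(i+1, j)."""
    L = bonds_right.shape[0]
    labels = np.arange(L * L).reshape(L, L)
    while True:
        new = labels.copy()
        new[:, :-1] = np.where(bonds_right[:, :-1], np.minimum(new[:, :-1], labels[:, 1:]), new[:, :-1])
        new[:, 1:] = np.where(bonds_right[:, :-1], np.minimum(new[:, 1:], labels[:, :-1]), new[:, 1:])
        new[:-1, :] = np.where(bonds_down[:-1, :], np.minimum(new[:-1, :], labels[1:, :]), new[:-1, :])
        new[1:, :] = np.where(bonds_down[:-1, :], np.minimum(new[1:, :], labels[:-1, :]), new[1:, :])
        if np.array_equal(new, labels):
            return labels
        labels = new

# Swendsen-Wang-style bonds on a small Ising configuration (open boundaries).
rng = np.random.default_rng(2)
L, beta = 16, 0.6
spins = rng.choice([-1, 1], size=(L, L))
p = 1.0 - np.exp(-2.0 * beta)
bonds_right = np.zeros((L, L), dtype=bool)
bonds_down = np.zeros((L, L), dtype=bool)
bonds_right[:, :-1] = (spins[:, :-1] == spins[:, 1:]) & (rng.uniform(size=(L, L - 1)) < p)
bonds_down[:-1, :] = (spins[:-1, :] == spins[1:, :]) & (rng.uniform(size=(L - 1, L)) < p)
print("number of clusters:", np.unique(label_components(bonds_right, bonds_down)).size)
```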
Finally we use a Monte Carlo Renormalization Group method to directly measure the couplings of block Hamiltonians at different blocking levels. For the usual averaging block transformation we confirm the renormalized trajectory (RT) observed by Okawa. For another, improved probabilistic block transformation we find the RT and show that it is much closer to the standard action. We then use this block transformation to obtain the discrete β-function of the model, which we compare to the perturbative result. We do not see convergence, except when using a rescaled coupling β_E to effectively resum the series. For the latter case we see agreement for $m/\Lambda_{\overline{\mathrm{MS}}}$ at β = 2.14, 2.26, 2.38 and 2.50. To three loops, $m/\Lambda_{\overline{\mathrm{MS}}} = 3.047(35)$ at β = 2.50, which is very close to the exact value $m/\Lambda_{\overline{\mathrm{MS}}} = 2.943$. Our last point at β = 2.62 however disagrees with this estimate.
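The averaging block transformation itself is simple; a sketch for O(3) spins is given below, where each 2x2 block is summed and renormalized back to unit length. The coupling-matching step of MCRG is not shown, and the lattice here is random rather than equilibrated.

```python
import numpy as np

def block_spins(s):
    """One 2x2 block-spin (averaging) transformation for an O(3) configuration
    of shape (L, L, 3): the four spins of each block are summed and the result
    is projected back onto the unit sphere.  Iterating this and matching block
    Hamiltonians at successive levels underlies the MCRG coupling measurement."""
    blocks = s[0::2, 0::2] + s[0::2, 1::2] + s[1::2, 0::2] + s[1::2, 1::2]
    return blocks / np.linalg.norm(blocks, axis=-1, keepdims=True)

rng = np.random.default_rng(4)
s = rng.normal(size=(64, 64, 3))
s /= np.linalg.norm(s, axis=-1, keepdims=True)
for level in range(3):
    s = block_spins(s)
    print("blocking level", level + 1, "lattice", s.shape[:2])
```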
Abstract:
Using Bird's method, we demonstrate that parametric resonance in a magnetic quadrupole trap can be exploited to cool atoms. In our simulations the parametric resonance was realized by anisotropically modulating the trap potential. The dependence of the temperature and of the fraction of trapped atoms on the modulation frequency is explored. Furthermore, the temperature after the modulation is studied as a function of the modulation amplitude and of the mean elastic collision time. These results are valuable for experiments on parametric resonance in a quadrupole trap.
Abstract:
This paper discusses the problem of restoring a digital input signal that has been degraded by an unknown FIR filter in noise, using the Gibbs sampler. A method for drawing a random sample of a sequence of bits is presented; this is shown to have faster convergence than a scheme by Chen and Li, which draws bits independently.
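The paper's contribution is a faster joint draw of the whole bit sequence; the sketch below instead implements the simpler baseline it improves upon, single-site bit updates alternating with a Gaussian draw of the unknown filter under a weak prior. All sizes, filter taps, and noise levels are illustrative, and the well-known (b, h) <-> (-b, -h) sign ambiguity of blind deconvolution is accounted for when scoring.

```python
import numpy as np

rng = np.random.default_rng(9)

# Bits b in {-1, +1} pass through an unknown FIR filter h and are observed
# in Gaussian noise; everything below is synthetic.
T, m, sigma = 200, 3, 0.3
h_true = np.array([1.0, 0.6, -0.3])
b_true = rng.choice([-1.0, 1.0], size=T)

def fir(b, h):
    return np.convolve(b, h)[: len(b)]

y = fir(b_true, h_true) + sigma * rng.normal(size=T)

b = rng.choice([-1.0, 1.0], size=T)
h = rng.normal(size=m)
for _ in range(200):
    # (1) filter | bits: Gaussian conditional under a weak N(0, I) prior
    B = np.column_stack([np.concatenate([np.zeros(k), b[: T - k]]) for k in range(m)])
    cov = np.linalg.inv(B.T @ B / sigma ** 2 + np.eye(m))
    h = rng.multivariate_normal(cov @ (B.T @ y) / sigma ** 2, cov)
    # (2) bits | filter: single-site Gibbs scan
    for t in range(T):
        res = []
        for v in (-1.0, 1.0):
            b[t] = v
            res.append(np.sum((y - fir(b, h)) ** 2))
        d = np.clip((res[1] - res[0]) / (2.0 * sigma ** 2), -50, 50)
        b[t] = 1.0 if rng.uniform() < 1.0 / (1.0 + np.exp(d)) else -1.0

ber = np.mean(b != b_true)
print("bit error rate (up to sign ambiguity):", min(ber, 1.0 - ber))
```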