57 results for markov chains monte carlo methods
Abstract:
Markov chain Monte Carlo (MCMC) is a methodology that is gaining widespread use in the phylogenetics community and is central to phylogenetic software packages such as MrBayes. An important issue for users of MCMC methods is how to select appropriate values for adjustable parameters such as the length of the Markov chain or chains, the sampling density, the proposal mechanism, and, if Metropolis-coupled MCMC is being used, the number of heated chains and their temperatures. Although some parameter settings have been examined in detail in the literature, others are frequently chosen with more regard to computational time or personal experience with other data sets. Such choices may lead to inadequate sampling of tree space or an inefficient use of computational resources. We performed a detailed study of convergence and mixing for 70 randomly selected, putatively orthologous protein sets with different sizes and taxonomic compositions. Replicated runs from multiple random starting points permit a more rigorous assessment of convergence, and we developed two novel statistics, delta and epsilon, for this purpose. Although likelihood values invariably stabilized quickly, adequate sampling of the posterior distribution of tree topologies took considerably longer. Our results suggest that multimodality is common for data sets with 30 or more taxa and that this results in slow convergence and mixing. However, we also found that the pragmatic approach of combining data from several short, replicated runs into a metachain to estimate bipartition posterior probabilities provided good approximations, and that such estimates were no worse in approximating a reference posterior distribution than those obtained using a single long run of the same length as the metachain. Precision appears to be best when heated Markov chains have low temperatures, whereas chains with high temperatures appear to sample trees with high posterior probabilities only rarely. 
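The kind of replicated-run convergence check described above can be sketched in a few lines. This is a generic split-frequency comparison in the spirit of (but not identical to) the delta and epsilon statistics the abstract introduces; the tree representation as sets of bipartitions is an assumption for illustration.

```python
from collections import Counter

def split_frequencies(sampled_trees):
    """Estimate bipartition (split) posterior probabilities from one MCMC run.
    Each sampled tree is represented as a set of splits, each split a
    frozenset of the taxon names on one side of an internal branch."""
    counts = Counter()
    for splits in sampled_trees:
        counts.update(splits)
    n = len(sampled_trees)
    return {s: c / n for s, c in counts.items()}

def mean_split_freq_difference(run_a, run_b):
    """Average absolute difference in estimated split probabilities between
    two replicated runs; values near zero suggest the runs are sampling the
    same posterior distribution of topologies."""
    fa, fb = split_frequencies(run_a), split_frequencies(run_b)
    splits = set(fa) | set(fb)
    return sum(abs(fa.get(s, 0.0) - fb.get(s, 0.0)) for s in splits) / len(splits)
```

A metachain estimate, as in the abstract's pragmatic approach, would simply concatenate the post-burn-in samples of several such runs before calling `split_frequencies`.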
[Bayesian phylogenetic inference; heating parameter; Markov chain Monte Carlo; replicated chains.]
Abstract:
This paper presents results on the simulation of the solid state sintering of copper wires using Monte Carlo techniques based on elements of lattice theory and cellular automata. The initial structure is superimposed onto a triangular, two-dimensional lattice, where each lattice site corresponds to either an atom or vacancy. The number of vacancies varies with the simulation temperature, while a cluster of vacancies is a pore. To simulate sintering, lattice sites are picked at random and reoriented in terms of an atomistic model governing mass transport. The probability that an atom has sufficient energy to jump to a vacant lattice site is related to the jump frequency, and hence the diffusion coefficient, while the probability that an atomic jump will be accepted is related to the change in energy of the system as a result of the jump, as determined by the change in the number of nearest neighbours. The jump frequency is also used to relate model time, measured in Monte Carlo Steps, to the actual sintering time. The model incorporates bulk, grain boundary and surface diffusion terms and includes vacancy annihilation on the grain boundaries. The predictions of the model were found to be consistent with experimental data, both in terms of the microstructural evolution and in terms of the sintering time. (C) 2002 Elsevier Science B.V. All rights reserved.
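The Metropolis-style jump rule described above can be sketched as follows. This is a minimal illustration on a square lattice (the paper uses a triangular lattice) with a simple nearest-neighbour bond energy; the energy scale and acceptance rule are generic Metropolis assumptions, not the paper's calibrated model.

```python
import math, random

def metropolis_sinter_step(lattice, beta, rng):
    """One attempted atom/vacancy exchange on a 2-D periodic lattice.
    Site value 1 = atom, 0 = vacancy. Bond energy: -1 per atom-atom
    nearest-neighbour pair, so dE = -(change in atom-atom bonds)."""
    n = len(lattice)
    i, j = rng.randrange(n), rng.randrange(n)
    di, dj = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    k, l = (i + di) % n, (j + dj) % n
    if lattice[i][j] == lattice[k][l]:
        return False                          # need an atom next to a vacancy
    if lattice[i][j] == 0:                    # ensure (i, j) holds the atom
        (i, j), (k, l) = (k, l), (i, j)
    def atom_neighbours(a, b, exclude):
        nb = [((a + 1) % n, b), ((a - 1) % n, b),
              (a, (b + 1) % n), (a, (b - 1) % n)]
        return sum(lattice[p][q] for p, q in nb if (p, q) != exclude)
    # bonds the atom would gain at (k, l) minus bonds it has at (i, j)
    d_bonds = atom_neighbours(k, l, (i, j)) - atom_neighbours(i, j, (k, l))
    dE = -d_bonds                             # more bonds = lower energy
    if dE <= 0 or rng.random() < math.exp(-beta * dE):
        lattice[i][j], lattice[k][l] = 0, 1   # accept the jump
        return True
    return False
```

Repeating such attempted jumps, with one Monte Carlo Step conventionally meaning one attempt per lattice site, is what lets the jump frequency tie model time to real sintering time as the abstract describes.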
Abstract:
We introduce a new class of quantum Monte Carlo methods, based on a Gaussian quantum operator representation of fermionic states. The methods enable first-principles dynamical or equilibrium calculations in many-body Fermi systems, and, combined with the existing Gaussian representation for bosons, provide a unified method of simulating Bose-Fermi systems. As an application relevant to the Fermi sign problem, we calculate finite-temperature properties of the two dimensional Hubbard model and the dynamics in a simple model of coherent molecular dissociation.
Abstract:
We shall be concerned with the problem of determining quasi-stationary distributions for Markovian models directly from their transition rates Q. We shall present simple conditions for a mu-invariant measure m for Q to be mu-invariant for the transition function, so that if m is finite, it can be normalized to produce a quasi-stationary distribution. (C) 2000 Elsevier Science Ltd. All rights reserved.
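Spelled out in the standard notation of the quasi-stationarity literature (the notation is assumed here, not taken from the paper), the two mu-invariance properties the abstract relates are, for a measure m = (m_j, j in C) on a transient class C:

```latex
% mu-invariance for the q-matrix Q:
\sum_{i \in C} m_i \, q_{ij} = -\mu \, m_j, \qquad j \in C,
% mu-invariance for the transition function P(t):
\sum_{i \in C} m_i \, p_{ij}(t) = e^{-\mu t} m_j, \qquad j \in C,\ t \ge 0.
```

The paper's conditions guarantee the first implies the second, and when m is finite the normalized measure m_j / \sum_{i \in C} m_i is then a quasi-stationary distribution.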
Abstract:
Krylov subspace techniques have been shown to yield robust methods for the numerical computation of large sparse matrix exponentials, and especially the transient solutions of Markov chains. The attractiveness of these methods results from the fact that they allow us to compute the action of a matrix exponential operator on an operand vector without having to compute the matrix exponential itself explicitly. In this paper we compare a Krylov-based method with some of the current approaches used for computing transient solutions of Markov chains. After a brief synthesis of the features of the methods used, wide-ranging numerical comparisons are performed on a Power Challenge Array supercomputer on three different models. (C) 1999 Elsevier Science B.V. All rights reserved. AMS Classification: 65F99; 65L05; 65U05.
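The central idea, approximating exp(tA)v from a small Krylov subspace without ever forming exp(tA), can be sketched with a textbook Arnoldi construction. This is a generic illustration, not the specific implementation the paper benchmarks; the small-matrix exponential is computed with a plain Taylor series for self-containment.

```python
import numpy as np

def arnoldi(A, v, m):
    """Orthonormal basis V of span{v, Av, ..., A^(m-1)v} and the small
    Hessenberg matrix H = V^T A V, via modified Gram-Schmidt."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:               # happy breakdown: exact subspace
            return V[:, :j + 1], H[:j + 1, :j + 1], beta
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m], beta

def expm_taylor(M, terms=30):
    """Dense matrix exponential by truncated Taylor series (fine for the
    small, well-scaled Hessenberg matrices used here)."""
    E = np.eye(M.shape[0]); T = np.eye(M.shape[0])
    for k in range(1, terms):
        T = T @ M / k
        E += T
    return E

def krylov_expm_apply(A, v, t=1.0, m=10):
    """Approximate exp(tA) v as beta * V exp(tH) e1, without forming exp(tA)."""
    V, H, beta = arnoldi(A, v, m)
    e1 = np.zeros(H.shape[0]); e1[0] = 1.0
    return beta * V @ (expm_taylor(t * H) @ e1)
```

For a CTMC with generator Q, the transient distribution pi(t) = pi(0) exp(Qt) is then `krylov_expm_apply(Q.T, pi0, t)`, which is exactly the operation whose cost dominates transient Markov chain analysis.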
Abstract:
The small-sample performance of Granger causality tests under different model dimensions, degrees of cointegration, directions of causality, and system stability is presented. Two tests based on maximum likelihood estimation of error-correction models (LR and WALD) are compared to a Wald test based on multivariate least squares estimation of a modified VAR (MWALD). In large samples all test statistics perform well in terms of size and power. For smaller samples, the LR and WALD tests perform better than the MWALD test. Overall, the LR test outperforms the other two in terms of size and power in small samples.
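The least-squares flavour of the comparison can be sketched with a bivariate F-type test: does adding lags of x to a regression of y on its own lags significantly reduce the residual sum of squares? This is a minimal textbook version, not the paper's MWALD statistic on a lag-augmented VAR.

```python
import numpy as np

def granger_f(y, x, p):
    """F statistic for the null that p lags of x add nothing to an OLS
    regression of y on a constant and its own p lags (large F => reject,
    i.e. x helps predict y)."""
    T = len(y)
    Y = y[p:]
    own   = [y[p - k: T - k] for k in range(1, p + 1)]   # y_{t-1..t-p}
    other = [x[p - k: T - k] for k in range(1, p + 1)]   # x_{t-1..t-p}
    Xr = np.column_stack([np.ones(T - p)] + own)          # restricted model
    Xf = np.column_stack([np.ones(T - p)] + own + other)  # full model
    def ssr(X):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
        return resid @ resid
    ssr_r, ssr_f = ssr(Xr), ssr(Xf)
    dof = (T - p) - Xf.shape[1]
    return ((ssr_r - ssr_f) / p) / (ssr_f / dof)
```

In a Monte Carlo size/power study like the one described, this statistic would be computed over many simulated systems and compared against F critical values.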
Abstract:
This paper presents a detailed analysis of adsorption of supercritical fluids on nonporous graphitized thermal carbon black. Two methods are employed in the analysis. One is the molecular layer structure theory (MLST), proposed recently by our group, and the other is the grand canonical Monte Carlo (GCMC) simulation. They were applied to describe the adsorption of argon, krypton, methane, ethylene, and sulfur hexafluoride on graphitized thermal carbon black. It was found that the MLST describes all the experimental data at various temperatures well. Results from GCMC simulations describe the data well at low pressure but show some deviations at higher pressures for all the adsorbates tested. The question of negative surface excess is also discussed in this paper.
Abstract:
Grand canonical Monte Carlo (GCMC) simulation was used for the systematic investigation of the supercritical methane adsorption at 273 K on an open graphite surface and in slitlike micropores of different sizes. For both considered adsorption systems the calculated excess adsorption isotherms exhibit a maximum. The effect of the pore size on the maximum surface excess and isosteric enthalpy of adsorption for methane storage at 273 K is discussed. The microscopic detailed picture of methane densification near the homogeneous graphite wall and in slitlike pores at 273 K is presented with selected local density profiles and snapshots. Finally, the reliable pore size distributions, obtained in the range of the microporosity, for two pitch-based microporous activated carbon fibers are calculated from the local excess adsorption isotherms obtained via the GCMC simulation. The current systematic study of supercritical methane adsorption both on an open graphite surface and in slitlike micropores performed by the GCMC summarizes recent investigations performed at slightly different temperatures and usually a lower pressure range by advanced methods based on the statistical thermodynamics.
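The grand canonical machinery behind such simulations reduces to two acceptance rules, for particle insertion and deletion at fixed chemical potential. The sketch below strips out the interaction energy (ideal gas), which is an assumption made purely so the exact answer ⟨N⟩ = zV is known and the rules can be checked; a real adsorption code adds the Boltzmann factor of the energy change to each rule.

```python
import math, random

def gcmc_ideal_gas(activity, volume, steps, rng):
    """Grand canonical MC for an ideal gas: only the insertion/deletion
    acceptance probabilities remain.  activity z = exp(beta*mu)/Lambda^3,
    acc(insert) = min(1, zV/(N+1)), acc(delete) = min(1, N/(zV)).
    Returns the time-averaged particle number (exact value: z*V)."""
    N = 0
    total = 0
    for _ in range(steps):
        if rng.random() < 0.5:                        # attempt insertion
            if rng.random() < min(1.0, activity * volume / (N + 1)):
                N += 1
        elif N > 0:                                    # attempt deletion
            if rng.random() < min(1.0, N / (activity * volume)):
                N -= 1
        total += N
    return total / steps
```

In the adsorption setting, running such chains over a range of chemical potentials yields the local excess isotherms from which, as above, pore size distributions are extracted.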
Abstract:
Aim: To identify an appropriate dosage strategy for patients receiving enoxaparin by continuous intravenous infusion (CII). Methods: Monte Carlo simulations were performed in NONMEM (200 replicates of 1000 patients) to predict steady-state anti-Xa concentrations (Css) for patients receiving a CII of enoxaparin. The covariate distribution model was simulated based on covariate demographics in the CII study population. The impact of patient weight, renal function (creatinine clearance, CrCL) and patient location (intensive care unit, ICU) was evaluated. A population pharmacokinetic model was used as the input-output model (1-compartment first-order output model with mixed residual error structure). Success of a dosing regimen was based on the percentage of Css values falling within the therapeutic range of 0.5 IU/ml to 1.2 IU/ml. Results: The best dose for patients in the ICU was 4.2 IU/kg/h (success mean 64.8%, 90% prediction interval (PI): 60.1–69.8%) if CrCL60 ml/min, the best dose was 8.3 IU/kg/h (success mean 65.4%, 90% PI: 58.5–73.2%). Simulations suggest that there was a 50% improvement in the success of the CII if the dose rate for ICU patients with CrCL
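The simulation strategy itself is simple to sketch: at steady state a 1-compartment model with infusion rate R gives Css = R / CL, so drawing between-subject clearances and counting the fraction of Css values inside 0.5–1.2 IU/ml scores a candidate dose rate. The clearance parameters and body weight below are illustrative assumptions only, not the published population model.

```python
import math, random

def simulate_css_success(dose_rate_iu_kg_h, n_patients, rng,
                         cl_typical=0.5, omega_cl=0.3, weight=80.0):
    """Fraction of simulated patients whose steady-state anti-Xa
    concentration Css = R / CL lands in the 0.5-1.2 IU/ml target range.
    cl_typical (L/h), omega_cl (log-normal SD) and weight (kg) are
    hypothetical values chosen for illustration."""
    rate = dose_rate_iu_kg_h * weight          # infusion rate, IU/h
    ok = 0
    for _ in range(n_patients):
        cl = cl_typical * math.exp(rng.gauss(0.0, omega_cl))  # L/h, log-normal
        css = rate / (cl * 1000.0)             # IU/ml (1 L = 1000 ml)
        ok += 0.5 <= css <= 1.2
    return ok / n_patients
```

The published study additionally layers covariate distributions (weight, CrCL, ICU status) and residual error onto this backbone, and repeats the whole simulation 200 times to obtain prediction intervals for the success percentage.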
Abstract:
Dimensionless spray flux Ψa is a dimensionless group that characterises the three most important variables in liquid dispersion: flowrate, drop size and powder flux through the spray zone. In this paper, the Poisson distribution was used to generate analytical solutions for the proportion of nuclei formed from single drops (fsingle) and the fraction of the powder surface covered by drops (fcovered) as a function of Ψa. Monte Carlo simulations were performed to simulate the spray zone and investigate how Ψa, fsingle and fcovered are related. The Monte Carlo data were an excellent match with the analytical solutions for fcovered and fsingle as a function of Ψa. At low Ψa, the proportion of the surface covered by drops (fcovered) was equal to Ψa. As Ψa increases, drop overlap becomes more dominant and the powder surface coverage levels off. The proportion of nuclei formed from single drops (fsingle) falls exponentially with increasing Ψa. In the ranges covered, these results were independent of drop size, number of drops, drop size distribution (mono-sized, bimodal and trimodal distributions), and the uniformity of the spray. Experimental data of nuclei size distributions as a function of spray flux were fitted to the analytical solution for fsingle by defining a cutsize for single-drop nuclei. The fitted cutsizes followed the spray drop sizes, suggesting that the method is robust and that the cutsize does indicate the transition size between single-drop and agglomerate nuclei. This demonstrates that the nuclei distribution is determined by the dimensionless spray flux and that the fraction of drop-controlled nuclei can be calculated analytically in advance.
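A spray-zone Monte Carlo of the kind described can be sketched by scattering circular drops on a unit periodic surface. Under the Poisson argument the analytical solutions take the form fcovered = 1 − exp(−Ψa) and fsingle = exp(−4Ψa), which reproduce the limits quoted above (fcovered ≈ Ψa at low Ψa, exponential decay of fsingle); taking Ψa as total projected drop area per unit surface area and treating a drop as "single" when no other drop centre lies within one diameter are simplifying assumptions of this sketch.

```python
import math, random

def spray_zone_mc(n_drops, radius, rng, grid=80):
    """Scatter n_drops circular drops uniformly on a unit periodic square and
    estimate (a) the fraction of the surface covered by at least one drop and
    (b) the fraction of drops farther than one diameter from all others
    (a proxy for nuclei formed from single drops)."""
    centres = [(rng.random(), rng.random()) for _ in range(n_drops)]
    def dist2(p, q):                    # squared distance on the torus
        dx = abs(p[0] - q[0]); dx = min(dx, 1 - dx)
        dy = abs(p[1] - q[1]); dy = min(dy, 1 - dy)
        return dx * dx + dy * dy
    covered = sum(
        any(dist2(((i + 0.5) / grid, (j + 0.5) / grid), c) < radius ** 2
            for c in centres)
        for i in range(grid) for j in range(grid))
    single = sum(
        all(dist2(c, o) > (2 * radius) ** 2 for o in centres if o is not c)
        for c in centres)
    return covered / grid ** 2, single / n_drops
```

With Ψa = n_drops * pi * radius**2 for the unit surface, both estimates can be compared directly against the Poisson expressions.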
Abstract:
The generalized Gibbs sampler (GGS) is a recently developed Markov chain Monte Carlo (MCMC) technique that enables Gibbs-like sampling of state spaces that lack a convenient representation in terms of a fixed coordinate system. This paper describes a new sampler, called the tree sampler, which uses the GGS to sample from a state space consisting of phylogenetic trees. The tree sampler is useful for a wide range of phylogenetic applications, including Bayesian, maximum likelihood, and maximum parsimony methods. A fast new algorithm to search for a maximum parsimony phylogeny is presented, using the tree sampler in the context of simulated annealing. The mathematics underlying the algorithm is explained and its time complexity is analyzed. The method is tested on two large data sets consisting of 123 sequences and 500 sequences, respectively. The new algorithm is shown to compare very favorably in terms of speed and accuracy to the program DNAPARS from the PHYLIP package.
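The simulated-annealing framing used for the parsimony search can be shown as a generic skeleton: a proposal mechanism (for phylogenies, a tree rearrangement drawn by the tree sampler), a cost to minimise (parsimony length), and an acceptance rule that tolerates worse states early on. The toy problem in the usage is purely illustrative; the actual tree moves and parsimony scoring are in the paper.

```python
import math, random

def simulated_annealing(initial, neighbour, score, steps, t0, alpha, rng):
    """Generic simulated annealing: `neighbour(state, rng)` proposes a new
    state, `score(state)` is the cost to minimise, and a worse state is
    accepted with probability exp(-dScore/T) under geometric cooling."""
    state, s = initial, score(initial)
    best, best_s = state, s
    t = t0
    for _ in range(steps):
        cand = neighbour(state, rng)
        cs = score(cand)
        if cs <= s or rng.random() < math.exp(-(cs - s) / t):
            state, s = cand, cs
            if cs < best_s:
                best, best_s = cand, cs
        t *= alpha                       # geometric cooling schedule
    return best, best_s
```

For example, minimising |x − 7| over the integers with ±1 moves converges quickly; in the paper's setting `neighbour` is a GGS tree move and `score` a parsimony evaluation.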
Abstract:
A significant problem in the collection of responses to potentially sensitive questions, such as relating to illegal, immoral or embarrassing activities, is non-sampling error due to refusal to respond or false responses. Eichhorn & Hayre (1983) suggested the use of scrambled responses to reduce this form of bias. This paper considers a linear regression model in which the dependent variable is unobserved but for which the sum or product with a scrambling random variable of known distribution, is known. The performance of two likelihood-based estimators is investigated, namely of a Bayesian estimator achieved through a Markov chain Monte Carlo (MCMC) sampling scheme, and a classical maximum-likelihood estimator. These two estimators and an estimator suggested by Singh, Joarder & King (1996) are compared. Monte Carlo results show that the Bayesian estimator outperforms the classical estimators in almost all cases, and the relative performance of the Bayesian estimator improves as the responses become more scrambled.
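The scrambling idea itself is easy to demonstrate. In the multiplicative scheme of Eichhorn & Hayre, each respondent reports y·s for a scrambling variable s of known distribution, so E[ys] = E[y]E[s] lets the analyst recover the mean without ever observing y. This sketch illustrates only that mechanism (with an assumed Gaussian s), not the paper's regression-based Bayesian or maximum-likelihood estimators.

```python
import random

def scrambled_mean_estimate(true_values, rng, s_mean=1.0, s_sd=0.25):
    """Simulate multiplicative scrambled responses and recover the mean of
    the sensitive variable: each report is y*s with s ~ N(s_mean, s_sd)
    (distribution known to the analyst, so we divide by E[s])."""
    reports = [y * rng.gauss(s_mean, s_sd) for y in true_values]
    return sum(reports) / len(reports) / s_mean
```

The regression setting in the abstract replaces this moment trick with full likelihood-based inference on the scrambled dependent variable, which is where the MCMC scheme enters.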
Abstract:
We shall study continuous-time Markov chains on the nonnegative integers which are both irreducible and transient, and which exhibit discernible stationarity before drift to infinity sets in. We will show how this 'quasi' stationary behaviour can be modelled using a limiting conditional distribution: specifically, the limiting state probabilities conditional on not having left 0 for the last time. By way of a dual chain, obtained by killing the original process on last exit from 0, we invoke the theory of quasistationarity for absorbing Markov chains. We prove that the conditioned state probabilities of the original chain are equal to the state probabilities of its dual conditioned on non-absorption, thus allowing us to establish the simultaneous existence, and then the equivalence, of their limiting conditional distributions. Although a limiting conditional distribution for the dual chain is always a quasistationary distribution in the usual sense, a similar statement is not possible for the original chain.