963 results for Monte Carlo models
Abstract:
State and parameter estimation of non-linear dynamical systems, based on incomplete and noisy measurements, is considered using Monte Carlo simulations. Given the measurements, the proposed method obtains the marginalized posterior distribution of an appropriately chosen (ideally small) subset of the state vector using a particle filter. Samples (particles) of the marginalized states are then used to construct a family of conditionally linearized systems of equations and thus obtain the posterior distribution of the states using a bank of Kalman filters. Discrete process equations for the marginalized states are derived through truncated Itô-Taylor expansions. Increased analyticity and reduced dispersion of weights computed over a smaller sample space of marginalized states are the key features of the filter that help achieve smaller sample variance of the estimates. Numerical illustrations are provided for state/parameter estimation of a Duffing oscillator and a 3-DOF non-linear oscillator. Performance of the filter in parameter estimation is also assessed using measurements obtained through laboratory experiments on simple models. Despite an added computational cost, the results verify that the proposed filter generally produces estimates with lower sample variance than the standard sequential importance sampling (SIS) filter.
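As a concrete (if simplified) illustration of this marginalization idea, the sketch below runs a particle filter on the nonlinear substate of a hypothetical scalar model while a bank of Kalman filters handles a conditionally linear substate. The drift f, the gains A and C, and all noise variances are illustrative assumptions, not the paper's oscillator models.

```python
# A minimal sketch of a marginalized (Rao-Blackwellized) particle filter for a
# hypothetical model: nonlinear substate n_t, conditionally linear substate l_t,
# observation y_t = n_t + C*l_t + e_t. All parameters below are assumptions.
import numpy as np

rng = np.random.default_rng(0)
f = lambda n: 0.5 * n + 25 * n / (1 + n**2)   # assumed nonlinear drift
A, C = 0.9, 1.0                               # linear dynamics / observation gains
q_n, q_l, r = 1.0, 0.1, 0.5                   # process / measurement noise variances

def mpf_step(particles, means, covs, logw, y):
    """One filter step: propagate particles, Kalman-update the linear substate."""
    particles = f(particles) + rng.normal(0, np.sqrt(q_n), particles.size)
    means, covs = A * means, A**2 * covs + q_l           # Kalman time update
    S = C**2 * covs + r                                  # innovation variance
    innov = y - particles - C * means
    logw = logw - 0.5 * (np.log(2 * np.pi * S) + innov**2 / S)  # marginal likelihood
    K = covs * C / S                                     # Kalman gain
    means, covs = means + K * innov, (1 - K * C) * covs  # measurement update
    w = np.exp(logw - logw.max()); w /= w.sum()
    if 1.0 / np.sum(w**2) < 0.5 * w.size:                # resample on low ESS
        idx = np.searchsorted(np.cumsum(w), (rng.random() + np.arange(w.size)) / w.size)
        particles, means, covs, logw = particles[idx], means[idx], covs[idx], np.zeros(w.size)
    return particles, means, covs, logw
```

Each particle thus carries a Gaussian (mean, covariance) for the linear substate, which is what lets the importance weights be computed over the smaller marginalized sample space.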
Abstract:
We use Bayesian model selection techniques to test extensions of the standard flat LambdaCDM paradigm. Dark-energy and curvature scenarios, and primordial perturbation models are considered. To that end, we calculate the Bayesian evidence in favour of each model using Population Monte Carlo (PMC), a new adaptive sampling technique which was recently applied in a cosmological context. The Bayesian evidence is immediately available from the PMC sample used for parameter estimation, without further computational effort, and it comes with an associated error evaluation. Moreover, it provides an unbiased estimator of the evidence after any fixed number of iterations and is naturally parallelizable, in contrast to MCMC and nested sampling methods. By comparison with analytical predictions for simulated data, we show that our results obtained with PMC are reliable and robust. The variability in the evidence evaluation and the stability for various cases are estimated both from simulations and from data. For the cases we consider, the log-evidence is calculated with a precision better than 0.08. Using a combined set of recent CMB, SNIa and BAO data, we find inconclusive evidence between flat LambdaCDM and simple dark-energy models. A curved Universe is moderately to strongly disfavoured with respect to a flat cosmology. Using physically well-motivated priors within the slow-roll approximation of inflation, we find a weak preference for a running spectral index. A Harrison-Zel'dovich spectrum is weakly disfavoured. With the current data, tensor modes are not detected; the large prior volume on the tensor-to-scalar ratio r results in moderate evidence in favour of r=0.
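To make the evidence computation concrete, here is a minimal Population Monte Carlo sketch on a toy one-dimensional problem. The Gaussian prior/likelihood pair and the moment-matching adaptation rule are illustrative assumptions, not the paper's cosmological setup; the point is that the unnormalized importance weights yield an unbiased evidence estimate at every iteration.

```python
# A minimal sketch of Population Monte Carlo (PMC) evidence estimation on a toy
# 1-D problem; the N(0,1) prior, N(1,1) likelihood, and Gaussian proposal
# adaptation are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
log_prior = lambda t: -0.5 * t**2 - 0.5 * np.log(2 * np.pi)          # N(0,1)
log_like  = lambda t: -0.5 * (t - 1.0)**2 - 0.5 * np.log(2 * np.pi)  # N(1,1)

mu, sig, N = 0.0, 2.0, 5000
for it in range(5):
    theta = rng.normal(mu, sig, N)                 # sample the current proposal
    log_q = -0.5 * ((theta - mu) / sig)**2 - np.log(sig * np.sqrt(2 * np.pi))
    w = np.exp(log_prior(theta) + log_like(theta) - log_q)  # importance weights
    Z = w.mean()                                   # unbiased evidence estimate
    wn = w / w.sum()                               # adapt proposal to the posterior
    mu = np.sum(wn * theta)
    sig = np.sqrt(np.sum(wn * (theta - mu)**2))
    print(f"iteration {it}: log-evidence = {np.log(Z):.4f}")
```

For this toy problem the exact log-evidence is about -1.516, which the estimates approach as the proposal adapts.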
Abstract:
Based on the analogy between polytypes and spin-half Ising chains with competing short- and infinite-range interactions, a Monte Carlo simulation of polytypes has been attempted. A general double-layer mechanism connects different states of the polytype chain with about the same probability as the spin-flip mechanism in magnetic Ising chains. It has been possible to simulate various polytypes with periodicities extending up to 12 layers. The Monte Carlo method should be useful in testing different interaction models that may be proposed in the future to describe polytypism.
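A minimal sketch of the kind of simulation described above is given below for a spin-1/2 chain with competing nearest-neighbour and infinite-range couplings; the Hamiltonian form, the coupling values, and the temperature are illustrative assumptions rather than the paper's polytype interaction model.

```python
# A minimal sketch of Metropolis spin-flip dynamics for a spin-1/2 chain with
# competing nearest-neighbour (J1) and infinite-range (Jinf) couplings:
#   H = -J1 sum_i s_i s_{i+1} - (Jinf / 2N) sum_{i != j} s_i s_j
# All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
N, J1, Jinf, T = 256, -1.0, 0.5, 0.8     # antiferro short-range vs ferro long-range
s = rng.choice([-1, 1], N)
M = s.sum()                               # total magnetization, updated on the fly

for sweep in range(500):
    for _ in range(N):
        k = rng.integers(N)
        # Energy change of flipping s[k] (periodic boundary conditions):
        dE_short = 2 * J1 * s[k] * (s[(k - 1) % N] + s[(k + 1) % N])
        dE_long = (2 * Jinf / N) * (s[k] * M - 1)
        if rng.random() < np.exp(-(dE_short + dE_long) / T):  # Metropolis rule
            M -= 2 * s[k]
            s[k] = -s[k]
print("magnetization per site:", M / N)
```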
Abstract:
The van der Waals and Platteeuw (vdWP) theory has been successfully used to model the thermodynamics of gas hydrates. However, earlier studies have shown that this success could be due to the presence of a large number of adjustable parameters whose values are obtained through regression against experimental data. To test this assertion, we carry out a systematic and rigorous study of the performance of various models of the vdWP theory that have been proposed over the years. The hydrate phase equilibrium data used for this study are obtained from Monte Carlo molecular simulations of methane hydrates. The parameters of the vdWP theory are regressed from these equilibrium data and compared with their true values obtained directly from simulations. This comparison reveals that (i) methane-water interactions beyond the first cage and methane-methane interactions make a significant contribution to the partition function and thus cannot be neglected, (ii) rigorous Monte Carlo integration should be used to evaluate the Langmuir constant instead of the spherically smoothed cell approximation, (iii) the parameter values describing the methane-water interactions cannot be correctly regressed from the equilibrium data using the vdWP theory in its present form, (iv) the regressed empty-hydrate property values closely match their true values irrespective of the level of rigor in the theory, and (v) the flexibility of the water lattice forming the hydrate phase needs to be incorporated in the vdWP theory. Since methane is among the simplest of hydrate-forming molecules, the conclusions from this study should also hold for more complicated hydrate guest molecules.
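The Langmuir constant mentioned in point (ii) is the configurational integral C(T) = (1/kB T) ∫_cage exp(-w(r)/kB T) dV, which can be evaluated by direct Monte Carlo integration instead of the smoothed-cell approximation. The sketch below does this for a crude stand-in spherical guest-cage potential; the potential form, cage radius, and Lennard-Jones parameters are all illustrative assumptions, not a real hydrate force field.

```python
# A minimal sketch of evaluating the Langmuir constant
#   C(T) = (1 / kB T) * integral over the cage of exp(-w(r) / kB T) dV
# by Monte Carlo integration over the cage volume. The potential w(r) is a
# crude stand-in, not a real guest-water force field.
import numpy as np

rng = np.random.default_rng(3)
kB, T, R = 1.380649e-23, 270.0, 2.6e-10       # J/K, K, assumed cage radius (m)
eps, sig = 1.7e-21, 3.1e-10                   # assumed LJ parameters (J, m)

def w(r):                                     # stand-in guest-cage potential
    d = R + 0.25 * sig - r                    # distance to an effective water shell
    return 4 * eps * ((sig / d) ** 12 - (sig / d) ** 6)

n = 200_000
pts = rng.normal(size=(n, 3))                 # uniform sampling inside the sphere:
pts *= (R * rng.random(n) ** (1 / 3) / np.linalg.norm(pts, axis=1))[:, None]
r = np.linalg.norm(pts, axis=1)
vol = 4 / 3 * np.pi * R**3
C = vol * np.mean(np.exp(-w(r) / (kB * T))) / (kB * T)
print(f"Langmuir constant ~ {C:.3e} Pa^-1")
```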
Abstract:
Noninvasive or minimally invasive identification of the sentinel lymph node (SLN) is essential to reduce the surgical effects of SLN biopsy. Photoacoustic (PA) imaging of the SLN in animal models has shown promise for future clinical use. Here, we present a Monte Carlo simulation of light transport in the SLN for various light delivery configurations with a clinical ultrasound probe. Our simulation assumes a realistic tissue-layer model and can also handle the transmission/reflectance at the SLN-tissue boundary due to the refractive-index mismatch. Simulations over various light incidence angles show that, for deeply situated SLNs, the maximum absorption of light in the SLN occurs at normal incidence. We also show that if part of the diffusely reflected photons is reflected back into the skin using a reflector, the absorption of light in the SLN can be increased significantly to enhance the PA signal. © 2013 Society of Photo-Optical Instrumentation Engineers (SPIE)
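The core of such a simulation is weighted photon transport through layered tissue. Below is a minimal MCML-style sketch that tallies absorption in a buried "node" layer; the optical properties (mu_a, mu_s, g), the layer depths, and the reduction to one spatial dimension are illustrative assumptions, not the paper's SLN geometry.

```python
# A minimal sketch of Monte Carlo photon transport in a layered medium:
# exponential free paths, weighted absorption, Henyey-Greenstein scattering.
# All optical properties and the two-layer geometry are assumptions.
import numpy as np

rng = np.random.default_rng(4)
mu_a, mu_s, g = 0.5, 10.0, 0.9                   # 1/mm, 1/mm, anisotropy (assumed)
mu_t = mu_a + mu_s
z_top, z_bot = 1.0, 1.5                          # assumed "node" layer depth (mm)
n_photons, absorbed = 5000, 0.0

for _ in range(n_photons):
    w, z, uz = 1.0, 0.0, 1.0                     # weight, depth, direction cosine
    while w > 1e-2 and z >= 0.0:                 # photon escapes when z < 0
        z += uz * (-np.log(rng.random()) / mu_t) # free path from Beer-Lambert
        dw = w * mu_a / mu_t                     # fraction of weight absorbed
        if z_top <= z <= z_bot:
            absorbed += dw                       # tally absorption in the node
        w -= dw
        # Henyey-Greenstein sample for the scattering angle cosine:
        tmp = (1 - g * g) / (1 - g + 2 * g * rng.random())
        cos_t = (1 + g * g - tmp * tmp) / (2 * g)
        # Update the polar direction cosine (azimuth sampled uniformly):
        sin_t = np.sqrt(max(0.0, 1 - cos_t**2))
        uz = uz * cos_t + np.sqrt(max(0.0, 1 - uz**2)) * sin_t * np.cos(2 * np.pi * rng.random())
        uz = float(np.clip(uz, -1.0, 1.0))
print("fraction of launched energy absorbed in node layer:", absorbed / n_photons)
```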
Abstract:
The flexibility of the water lattice in clathrate hydrates and guest-guest interactions have been shown in previous studies to significantly affect the values of thermodynamic properties, such as chemical potentials and free energies. Here we describe methods for computing occupancies, chemical potentials, and free energies that account for the flexibility of the water lattice and guest-guest interactions in the hydrate phase. The methods are validated for a wide variety of guest molecules, such as methane, ethane, carbon dioxide, and tetrahydrofuran, by comparing the predicted occupancy values of guest molecules with those obtained from isothermal-isobaric semigrand Monte Carlo simulations. The proposed methods extend the van der Waals and Platteeuw theory for clathrate hydrates, and the Langmuir constant is calculated based on the structure of the empty hydrate lattice. These methods, in combination with the development of advanced molecular models for water and guest molecules, should lead to a more thermodynamically consistent theory for clathrate hydrates.
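For reference, the central relations of the van der Waals and Platteeuw framework that these methods generalize are, in standard form (theta_Ji is the fractional occupancy of cavity type i by guest J, f_J the guest fugacity, and nu_i the number of type-i cavities per water molecule):

```latex
\theta_{Ji} = \frac{C_{Ji}\, f_J}{1 + \sum_K C_{Ki}\, f_K},
\qquad
\Delta\mu_w = -k_B T \sum_i \nu_i \ln\!\Bigl(1 - \sum_J \theta_{Ji}\Bigr)
```

Accounting for lattice flexibility and guest-guest interactions modifies the Langmuir constants C_Ji away from their rigid empty-lattice values.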
Abstract:
Methane and ethane are the simplest hydrocarbon molecules that can form clathrate hydrates. Previous studies have reported methods for calculating the three-phase equilibrium using Monte Carlo simulation methods in systems with a single component in the gas phase. Here we extend those methods to a binary gas mixture of methane and ethane. The methane-ethane system is an interesting one in that the pure components form the sI clathrate hydrate whereas a binary mixture of the two can form the sII clathrate. The phase equilibria computed from Monte Carlo simulations show good agreement with experimental data and are also able to predict the sI-sII structural transition in the clathrate hydrate. This is attributed to the quality of the TIP4P/Ice and TraPPE models used in the simulations. © 2014 Elsevier B.V. All rights reserved.
Abstract:
We develop methods for performing filtering and smoothing in non-linear non-Gaussian dynamical models. The methods rely on a particle-cloud representation of the filtering distribution, which evolves through time using importance sampling and resampling ideas. In particular, novel techniques are presented for the generation of random realisations from the joint smoothing distribution and for MAP estimation of the state sequence. Realisations of the smoothing distribution are generated in a forward-backward procedure, while the MAP estimation can be performed in a single forward pass of the Viterbi algorithm applied to a discretised version of the state space. An application to spectral estimation for time-varying autoregressions is described.
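In the spirit of the forward-backward procedure described above, the sketch below draws one realisation from the joint smoothing distribution of a toy AR(1)-plus-noise model: a bootstrap particle filter runs forward, then states are sampled backwards with weights proportional to w_t^i f(x_{t+1} | x_t^i). The model and its parameters are illustrative assumptions.

```python
# A minimal sketch of forward filtering / backward simulation for one draw
# from the joint smoothing distribution of a toy AR(1)-plus-noise model.
import numpy as np

rng = np.random.default_rng(5)
T, N, a, q, r = 100, 500, 0.95, 1.0, 1.0
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = a * x_true[t - 1] + rng.normal(0, np.sqrt(q))
y = x_true + rng.normal(0, np.sqrt(r), T)      # synthetic observations

# Forward pass: bootstrap particle filter, storing particles and weights.
X = np.zeros((T, N)); W = np.zeros((T, N))
x = rng.normal(0, 1, N)
for t in range(T):
    if t > 0:                                  # resample, then propagate
        x = a * x[rng.choice(N, N, p=W[t - 1])] + rng.normal(0, np.sqrt(q), N)
    w = np.exp(-0.5 * (y[t] - x) ** 2 / r); w /= w.sum()
    X[t], W[t] = x, w

# Backward pass: sample x_T from the final weights, then
# x_t with probability proportional to w_t^i f(x_{t+1} | x_t^i).
traj = np.zeros(T)
traj[-1] = X[-1, rng.choice(N, p=W[-1])]
for t in range(T - 2, -1, -1):
    bw = W[t] * np.exp(-0.5 * (traj[t + 1] - a * X[t]) ** 2 / q)
    traj[t] = X[t, rng.choice(N, p=bw / bw.sum())]
```

Repeating the backward pass yields independent smoothing realisations from the same stored forward filter output.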
Abstract:
The permeability of fractal porous media is simulated by a Monte Carlo technique in this work. Based on the fractal character of the pore size distribution in porous media, probability models for the pore diameter and for the permeability are derived. Taking bi-dispersed fractal porous media as examples, the permeability calculations are performed by the present Monte Carlo method. The results show that the present simulations agree well with the existing fractal analytical solution over the porosity range of general interest. The proposed simulation method may have potential for predicting other transport properties (such as thermal conductivity, dispersion conductivity and electrical conductivity) in fractal porous media, both saturated and unsaturated.
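The essential Monte Carlo step in such fractal capillary-bundle models is inverting the fractal cumulative size distribution to sample pore diameters, then accumulating Hagen-Poiseuille conductances. The sketch below illustrates this; the fractal dimension, size bounds, porosity, and the neglect of tortuosity are illustrative assumptions.

```python
# A minimal sketch of Monte Carlo permeability estimation for a fractal
# capillary bundle: sample pore diameters by inverse-CDF from the fractal
# distribution, then sum Hagen-Poiseuille conductances. Parameters assumed.
import numpy as np

rng = np.random.default_rng(6)
Df, lam_min, lam_max = 1.6, 1e-6, 1e-4   # fractal dimension, pore size bounds (m)
n = 100_000

# Fractal CDF: R = 1 - (lam_min / lam)^Df  =>  lam = lam_min * (1 - R)^(-1/Df)
R = rng.random(n)
lam = lam_min * (1.0 - R) ** (-1.0 / Df)
lam = lam[lam <= lam_max]                # truncate at the largest pore size

# Each capillary conducts ~ pi lam^4 / 128 per unit viscosity and pressure
# gradient; dividing by a representative cross-section A gives permeability.
A = np.sum(np.pi * lam**2 / 4) / 0.4     # area consistent with porosity 0.4 (assumed)
k = np.sum(np.pi * lam**4 / 128) / A
print(f"simulated permeability ~ {k:.3e} m^2")
```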
Abstract:
We present a stochastic simulation technique for subset selection in time series models, based on the use of indicator variables with the Gibbs sampler within a hierarchical Bayesian framework. As an example, the method is applied to the selection of subset linear AR models, in which only significant lags are included. Joint sampling of the indicators and parameters is found to speed convergence. We discuss the possibility of model mixing where the model is not well determined by the data, and the extension of the approach to include non-linear model terms.
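A minimal sketch of the indicator-variable idea is given below for subset selection in a linear AR model. A conjugate Zellner g-prior is used so that each indicator's conditional posterior is available in closed form; the lag order, the prior constant g, and the flat prior over models are illustrative assumptions, not the paper's hierarchical specification.

```python
# A minimal sketch of Gibbs-style subset selection with indicator variables
# for a linear AR(p) model under a Zellner g-prior (illustrative assumptions).
import numpy as np
from numpy.linalg import solve

rng = np.random.default_rng(7)
T, p, g = 400, 5, 100.0
y = np.zeros(T)                          # simulate an AR(5) with only lags 1 and 4
for t in range(p, T):
    y[t] = 0.5 * y[t - 1] - 0.4 * y[t - 4] + rng.normal()
Y = y[p:]
X = np.column_stack([y[p - k:-k] for k in range(1, p + 1)])   # lagged regressors

def log_marglik(gamma):
    """log p(Y | gamma) under the g-prior, up to an additive constant."""
    if not gamma.any():
        return -0.5 * len(Y) * np.log(Y @ Y)
    Xg = X[:, gamma]
    bhat = solve(Xg.T @ Xg, Xg.T @ Y)
    rss = Y @ Y - g / (1 + g) * (Y @ Xg @ bhat)
    return -0.5 * gamma.sum() * np.log(1 + g) - 0.5 * len(Y) * np.log(rss)

gamma = np.ones(p, dtype=bool)
counts = np.zeros(p)
for sweep in range(2000):
    for j in range(p):                   # Gibbs scan over the lag indicators
        lp = np.empty(2)
        for v in (0, 1):
            gamma[j] = v
            lp[v] = log_marglik(gamma)   # flat prior over models assumed
        gamma[j] = rng.random() < 1 / (1 + np.exp(np.clip(lp[0] - lp[1], -50, 50)))
    counts += gamma
print("posterior inclusion probabilities:", counts / 2000)
```

On this synthetic series, the inclusion probabilities concentrate on lags 1 and 4, mirroring the "only significant lags are included" behaviour described above.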
Abstract:
Sequential Monte Carlo (SMC) methods are a widely used set of computational tools for inference in non-linear non-Gaussian state-space models. We propose a new SMC algorithm to compute the expectation of additive functionals recursively. Essentially, it is an on-line or "forward only" implementation of the forward filtering backward smoothing SMC algorithm proposed by Doucet, Godsill and Andrieu (2000). Compared to the standard path-space SMC estimator, whose asymptotic variance increases quadratically with time even under favorable mixing assumptions, the non-asymptotic variance of the proposed SMC estimator increases only linearly with time. We show how this allows us to perform recursive parameter estimation using an SMC implementation of an on-line version of the Expectation-Maximization algorithm which does not suffer from the particle path degeneracy problem.
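The forward-only recursion carries, for each particle, a running statistic that is reweighted through the backward kernel at every step (an O(N^2) operation). The sketch below applies it to the additive functional S_t = sum_k x_k on a toy AR(1)-plus-noise model; the model, functional, and parameters are illustrative assumptions.

```python
# A minimal sketch of forward-only smoothing for an additive functional
# S_t = sum_k x_k: each particle carries a statistic updated by
# backward-kernel reweighting, so no particle paths need be stored.
import numpy as np

rng = np.random.default_rng(8)
T, N, a, q, r = 50, 300, 0.9, 1.0, 1.0
y = np.cumsum(rng.normal(size=T))              # a fixed synthetic record

x = rng.normal(0, 1, N)
w = np.full(N, 1.0 / N)
stat = np.zeros(N)                             # per-particle statistic T_t^i
for t in range(T):
    x_new = a * x + rng.normal(0, np.sqrt(q), N)
    # Backward-kernel weights K[i, j] proportional to w^j f(x_new^i | x^j):
    K = w * np.exp(-0.5 * (x_new[:, None] - a * x[None, :]) ** 2 / q)
    K /= K.sum(axis=1, keepdims=True)
    stat = K @ stat + x_new                    # recursion for s_t(x_{t-1}, x_t) = x_t
    w = np.exp(-0.5 * (y[t] - x_new) ** 2 / r)
    w /= w.sum()
    idx = rng.choice(N, N, p=w)                # resample particles and statistics
    x, stat, w = x_new[idx], stat[idx], np.full(N, 1.0 / N)

print("on-line smoothed estimate of E[sum_t x_t | y_{1:T}]:", stat.mean())
```

Because only the current statistics are propagated, the estimator avoids the path degeneracy that afflicts the path-space estimator mentioned above.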
Abstract:
We investigate the 2d O(3) model with the standard action by Monte Carlo simulation at couplings β up to 2.05. We measure the energy density, mass gap and susceptibility of the model, and gather high statistics on lattices of size L ≤ 1024 using the Floating Point Systems T-series vector hypercube and the Thinking Machines Corp.'s Connection Machine 2. Asymptotic scaling does not appear to set in for this action, even at β = 2.10, where the correlation length is 420. We observe a 20% difference between our estimate $m/\Lambda_{\overline{\mathrm{MS}}} = 3.52(6)$ at this β and the recent exact analytical result $m/\Lambda_{\overline{\mathrm{MS}}} = 2.943$. We use the overrelaxation algorithm interleaved with Metropolis updates and show that the decorrelation time scales with the correlation length and the number of overrelaxation steps per sweep. We determine its effective dynamical critical exponent to be z' = 1.079(10); thus critical slowing down is reduced significantly for this local algorithm, which is vectorizable and parallelizable.
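For reference, the standard energy-preserving overrelaxation move for O(3) spins (the move interleaved here with Metropolis updates) reflects each spin about its local field:

```latex
V_i = \sum_{j \in \mathrm{nn}(i)} s_j,
\qquad
s_i' = 2\,\frac{s_i \cdot V_i}{V_i \cdot V_i}\,V_i - s_i
```

Because the move conserves the action, it is microcanonical and must be mixed with ergodic Metropolis updates, which is exactly the interleaving described above.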
We also use cluster Monte Carlo algorithms, which are non-local Monte Carlo update schemes that can greatly increase the efficiency of computer simulations of spin models. The major computational task in these algorithms is connected component labeling, to identify clusters of connected sites on a lattice. We have devised some new SIMD component labeling algorithms and implemented them on the Connection Machine. We investigate their performance when applied to the cluster update of the two-dimensional Ising spin model.
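The labeling task itself is illustrated below with a serial union-find sketch applied to Swendsen-Wang bonds on a 2-D Ising lattice; the SIMD algorithms devised in the thesis are machine-specific, so this is only the sequential reference version of the same task, with assumed lattice size and coupling.

```python
# A minimal serial sketch of connected-component labeling for cluster Monte
# Carlo on a 2-D Ising lattice: place Swendsen-Wang bonds, then label clusters
# with union-find (path halving). Lattice size and coupling are assumptions.
import numpy as np

rng = np.random.default_rng(9)
L, beta = 64, 0.44
spin = rng.choice([-1, 1], (L, L))
parent = np.arange(L * L)

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]          # path halving
        i = parent[i]
    return i

def union(i, j):
    ri, rj = find(i), find(j)
    if ri != rj:
        parent[rj] = ri

p_bond = 1 - np.exp(-2 * beta)                 # Swendsen-Wang bond probability
for x in range(L):
    for y in range(L):
        for dx, dy in ((1, 0), (0, 1)):        # bonds to right and down neighbours
            xn, yn = (x + dx) % L, (y + dy) % L
            if spin[x, y] == spin[xn, yn] and rng.random() < p_bond:
                union(x * L + y, xn * L + yn)

labels = np.array([find(i) for i in range(L * L)])
print("number of clusters:", np.unique(labels).size)
```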
Finally we use a Monte Carlo Renormalization Group method to directly measure the couplings of block Hamiltonians at different blocking levels. For the usual averaging block transformation we confirm the renormalized trajectory (RT) observed by Okawa. For another, improved probabilistic block transformation we find the RT and show that it is much closer to the Standard Action. We then use this block transformation to obtain the discrete β-function of the model, which we compare to the perturbative result. We do not see convergence, except when using a rescaled coupling β_E to effectively resum the series. For the latter case we see agreement for $m/\Lambda_{\overline{\mathrm{MS}}}$ at β = 2.14, 2.26, 2.38 and 2.50. To three loops, $m/\Lambda_{\overline{\mathrm{MS}}} = 3.047(35)$ at β = 2.50, which is very close to the exact value $m/\Lambda_{\overline{\mathrm{MS}}} = 2.943$. Our last point, at β = 2.62, disagrees with this estimate however.
Abstract:
Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there aren't many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (a millimeter or sub-millimeter on human tissues). While OCT utilizes infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before getting back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.
This thesis addresses the above challenges by successfully developing an advanced Monte Carlo simulation platform which is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground truth of the simulated images at the same time, because we dictate it at the beginning of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, a clever implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which will be explained in detail later in the thesis.
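The importance-sampling ingredient mentioned above can be illustrated generically: bias the scattering direction toward the detector and correct each photon's weight by the ratio of the true to the biased probability density, so every tally stays unbiased while the rare, signal-carrying paths are sampled far more often. The biased density below and its parameter are illustrative assumptions, not the thesis's actual scheme.

```python
# A minimal sketch of importance-sampled scattering: sample the direction
# cosine from a biased pdf q(mu) that favours back-scattering, and correct
# the photon weight by p(mu)/q(mu). The biased pdf is an assumption.
import numpy as np

rng = np.random.default_rng(10)
g = 0.9                                        # Henyey-Greenstein anisotropy

def hg_pdf(mu):                                # true phase function p(cos theta)
    return 0.5 * (1 - g * g) / (1 + g * g - 2 * g * mu) ** 1.5

def sample_biased(n, a=0.8):
    """Biased pdf q(mu) = (1 - a*mu)/2 on [-1, 1]: favours mu < 0 (backward)."""
    u = rng.random(n)
    mu = (1 - np.sqrt((1 + a) ** 2 - 4 * a * u)) / a   # inverse CDF of q
    return mu, (1 - a * mu) / 2

mu, q = sample_biased(100_000)
weight = hg_pdf(mu) / q                        # importance-sampling correction
# Weighted tallies remain unbiased: the weighted mean cosine recovers g.
print("weighted mean cos:", np.average(mu, weights=weight), "(expect ~", g, ")")
```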
Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without help from a trained expert. It turns out that we can do much better. For simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we are looking at 93%. We achieve this through extensive use of Machine Learning. The success of the Monte Carlo simulation already puts us in a great position by providing a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model that determines its structure (e.g., the number and the types of layers present in the image); the image is then handed to a regression model trained specifically for that particular structure, which predicts the length of the different layers and thereby reconstructs the ground truth of the image. We also demonstrate that ideas from Deep Learning can be useful in further improving the performance.
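The two-stage "committee of experts" idea can be sketched as follows: a gating classifier predicts which structure an image has, then a structure-specific regressor predicts the layer dimensions. The scikit-learn models and the synthetic feature vectors below are stand-ins for illustration, not the thesis's architecture or features.

```python
# A minimal sketch of the committee-of-experts pipeline: classify the
# structure first, then regress layer dimensions with a structure-specific
# expert. Features, labels, and models are illustrative stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(11)
n, d, n_structures = 3000, 64, 3
X = rng.normal(size=(n, d))                        # stand-in image features
struct = rng.integers(n_structures, size=n)        # ground-truth structure label
thick = np.abs(rng.normal(size=(n, 2))) + struct[:, None]  # layer dimensions

gate = RandomForestClassifier(n_estimators=100).fit(X, struct)
experts = {s: RandomForestRegressor(n_estimators=100).fit(X[struct == s], thick[struct == s])
           for s in range(n_structures)}

def reconstruct(x):
    s = int(gate.predict(x[None])[0])              # stage 1: pick the structure
    return s, experts[s].predict(x[None])[0]       # stage 2: structure-specific expert

print(reconstruct(X[0]))
```

Training each expert only on images of its own structure is what lets the regression stage specialize, mirroring the specifically designed data sets described above.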
It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since the lower half of an OCT image (i.e., greater depth), which previously could hardly be seen, now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that, when fed into a well-trained machine learning model, yields precisely the true structure of the object being imaged. This is just another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a success but also the first attempt to reconstruct an OCT image at the pixel level. Even to attempt this kind of task would require fully annotated OCT images, and a lot of them (hundreds or even thousands). This is clearly impossible without a powerful simulation tool like the one developed in this thesis.
Abstract:
In this paper, we present an expectation-maximisation (EM) algorithm for maximum likelihood estimation in multiple target tracking (MTT) models with Gaussian linear state-space dynamics. We show that the estimation of sufficient statistics for EM in a single Gaussian linear state-space model can be extended to the MTT case, along with a Monte Carlo approximation for inference of the unknown associations of targets. The stochastic approximation EM algorithm that we present here can be used along with any Monte Carlo method which has been developed for tracking in MTT models, such as Markov chain Monte Carlo and sequential Monte Carlo methods. We demonstrate the performance of the algorithm with a simulation. © 2012 ISIF (International Society of Information Fusion).
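For reference, the generic stochastic approximation EM update that such Monte Carlo association inference plugs into has the standard textbook form (z_k denotes the latent states and associations sampled at iteration k; this is the generic scheme, not the paper's exact notation):

```latex
\hat{s}_k = (1 - \gamma_k)\,\hat{s}_{k-1} + \gamma_k\, S(y, z_k),
\qquad
\theta_k = \bar{\theta}(\hat{s}_k),
\qquad
\sum_k \gamma_k = \infty, \quad \sum_k \gamma_k^2 < \infty
```

where S denotes the complete-data sufficient statistics, theta-bar the M-step maximizer, and the step sizes gamma_k satisfy the usual Robbins-Monro conditions.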
Abstract:
The effects of chain flexibility on the conformation of homopolymers in good solvents have been investigated by Monte Carlo simulation. A bond-angle constraint coupled with the persistence length of the polymer chains has been introduced into the modified eight-site bond fluctuation simulation model. The study of the effects of chain flexibility on polymer sizes reveals that the orientation of polymer chains under confinement is driven by the loss of conformational entropy. The conformation of polymer chains undergoing a gradual change from a spherical iso-diametric ellipsoid to a rodlike iso-diametric ellipsoid with decreasing polymer chain flexibility over a wide region has been clearly illustrated from several aspects. Furthermore, a comparison of the freely jointed chain (FJC) model and the wormlike chain (WLC) model has also been made to describe the polymer sizes in terms of chain flexibility, with a quasi-quantitative boundary for the suitability of the two models.
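For reference, the standard mean-square end-to-end distances of the two limiting models compared here are (N bonds of length b; contour length L and persistence length l_p):

```latex
\langle R^2 \rangle_{\mathrm{FJC}} = N b^2,
\qquad
\langle R^2 \rangle_{\mathrm{WLC}} = 2 l_p L - 2 l_p^2 \left(1 - e^{-L/l_p}\right)
```

The FJC form applies in the fully flexible limit (l_p comparable to b), while the WLC form captures the stiff-chain regime, which is the basis for the quasi-quantitative boundary between the two models mentioned above.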