955 results for Simulations Monte Carlo de la chimie de trajectoires
Abstract:
The Cervarola Sandstones Formation (CSF), Aquitanian-Burdigalian in age, was deposited in an elongate, NW-stretched foredeep basin formed in front of the growing Northern Apennines orogenic wedge. The stratigraphic succession of the CSF, like that of other Apennine foredeep deposits, records the progressive closure of the basin due to the propagation of thrust fronts toward the north-east, i.e. toward the outer and shallower foreland ramp. This process produces a complex foredeep characterized by synsedimentary structural highs and depocenters that can strongly influence the lateral and vertical distribution of turbidite facies. Consequently, the main aim of this work is to describe and discuss this influence on the basis of a new high-resolution stratigraphic framework built by measuring ten stratigraphic logs, for a total thickness of about 2000 m, between the Secchia and Scoltenna valleys (30 km apart). In particular, the relationship between turbidite sedimentation and the ongoing tectonic activity during the foredeep evolution has been described through various stratigraphic cross sections oriented parallel and perpendicular to the main tectonic structures. On the basis of the high-resolution physical stratigraphy of the studied succession, we propose a facies tract and an evolutionary model for the Cervarola Sandstones in the studied area. Thanks to these results and the analogies with other foredeep deposits of the northern Apennines, such as the Marnoso-arenacea Formation, the Cervarola basin has been interpreted as a highly confined foredeep controlled by intense synsedimentary tectonic activity. The most important lines of evidence supporting this hypothesis are: 1) the upward increase, in the studied stratigraphic succession (about 1000 m thick), of the sandstone/mudstone ratio, grain size and Ophiomorpha-type trace fossils, testifying to the high degree of flow deceleration related to the progressive closure and uplift of the foredeep; 2) the occurrence, in the upper part of the stratigraphic succession, of coarse-grained massive sandstones overlain by tractive structures such as megaripples and traction carpets, passing downcurrent into fine-grained laminated contained-reflected beds; this facies tract is interpreted as related to the deceleration and decoupling of bipartite flows, with deposition of the basal dense flows and bypass of the upper turbulent flows; 3) the widespread occurrence of contained-reflected beds related to morphological obstacles created by tectonic structures parallel and perpendicular to the basin axis (see, for example, the Pievepelago line); 4) the occurrence of intra-formational slumps, constituted by highly deformed portions of the fine-grained succession, indicating syn-sedimentary activity of tectonic structures able to destabilize the margins of the basin; these deposits increase towards the upper part of the stratigraphic succession (see points 1 and 2); 5) the impressive lateral facies changes between intrabasinal topographic highs, characterized by fine-grained, thin sandstone beds and marlstones, and depocenters, characterized by thick to very thick coarse-grained massive sandstones; 6) the common occurrence of amalgamation surfaces, flow-impact structures and mud-draped scours related to sudden deceleration of the turbidite flows induced by the structurally controlled confinement and morphological irregularities.
In conclusion, the CSF shows many analogies with the facies associations occurring in other tectonically controlled foredeeps, such as those of the Marnoso-arenacea Formation (northern Italy) and the Annot Sandstones (southern France), showing how thrust fronts and transverse structures moving towards the foreland were able to produce a segmented foredeep that strongly influences turbidity-current deposition.
Abstract:
In recent work we have developed a novel variational inference method for partially observed systems governed by stochastic differential equations. In this paper we provide a comparison of the Variational Gaussian Process Smoother with an exact solution computed using a Hybrid Monte Carlo approach to path sampling, applied to a stochastic double-well potential model. It is demonstrated that the variational smoother provides a very accurate estimate of the mean path, while the conditional variance is slightly underestimated. We conclude with some remarks on the advantages and disadvantages of the variational smoother. © 2008 Springer Science + Business Media LLC.
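For reference, the sketch below simulates the kind of one-dimensional double-well diffusion this comparison is built around, using a simple Euler-Maruyama scheme; the drift 4x(1 − x²), the noise level, the time step and the horizon are illustrative assumptions rather than the settings used in the paper.

    import numpy as np

    def simulate_double_well(T=10.0, dt=0.01, sigma=0.5, x0=0.0, seed=0):
        """Euler-Maruyama discretisation of dx = 4x(1 - x^2) dt + sigma dW."""
        rng = np.random.default_rng(seed)
        n_steps = int(T / dt)
        x = np.empty(n_steps + 1)
        x[0] = x0
        for k in range(n_steps):
            drift = 4.0 * x[k] * (1.0 - x[k] ** 2)   # bistable drift pushing towards x = -1 or x = +1
            x[k + 1] = x[k] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        return x

    path = simulate_double_well()   # one latent trajectory of the kind a smoother is conditioned on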
Abstract:
In this paper we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence sampler and a random-walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. Flexible blocking strategies are introduced to further improve the mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with a double-well potential drift and another with a sine drift. The new algorithms' accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite-sample applications the algorithm is accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational-approximation-assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors, in which case both algorithms are equally efficient.
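To make the independence-sampler component concrete, the sketch below shows a generic independence Metropolis-Hastings update in which candidate paths are drawn from an approximating distribution; the function names are placeholders and the flat (unblocked) update is an illustrative simplification that does not reproduce the blocking strategies of the paper.

    import numpy as np

    def independence_mh(log_target, propose, log_proposal, x0, n_iter=5000, seed=0):
        """Generic independence Metropolis-Hastings sampler.

        log_target(x)   : unnormalised log posterior density of a path x (placeholder)
        propose()       : draws a candidate path from the approximating distribution (placeholder)
        log_proposal(x) : log density of the approximating distribution at x (placeholder)
        """
        rng = np.random.default_rng(seed)
        x, log_w = x0, log_target(x0) - log_proposal(x0)   # log importance weight w = target/proposal
        samples = []
        for _ in range(n_iter):
            y = propose()
            log_w_y = log_target(y) - log_proposal(y)
            if np.log(rng.uniform()) < log_w_y - log_w:     # accept with prob min(1, w(y)/w(x))
                x, log_w = y, log_w_y
            samples.append(x)
        return samples

The better the approximating distribution matches the true posterior over paths, the closer the importance weights are to constant and the higher the acceptance rate, which is the intuition behind using the variational approximation as the proposal.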
Abstract:
This study presents some quantitative evidence from a number of simulation experiments on the accuracy of the productivity-growth estimates derived from growth-accounting (GA) and frontier-based methods (namely data envelopment analysis-, corrected ordinary least squares-, and stochastic frontier analysis-based Malmquist indices) under various conditions. These include the presence of technical inefficiency, measurement error, misspecification of the production function (for the GA and parametric approaches), and increased input and price volatility from one period to the next. The study finds that the frontier-based methods usually outperform GA, but the overall performance varies by experiment. Parametric approaches generally perform best when there is no functional-form misspecification, but their accuracy diminishes greatly otherwise. The results also show that the deterministic approaches perform adequately even under conditions of (modest) measurement error; when measurement error becomes larger, the accuracy of all approaches (including the stochastic ones) deteriorates rapidly, to the point that their estimates could be considered unreliable for policy purposes.
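For context, growth accounting attributes the part of output growth not explained by share-weighted input growth to productivity. A minimal sketch of this Solow-residual calculation, assuming a Cobb-Douglas technology with constant returns to scale and a capital share alpha (both assumptions made here for illustration, not taken from the study), is given below.

    import numpy as np

    def solow_residual(dlnY, dlnK, dlnL, alpha=0.3):
        """Growth-accounting TFP growth: output growth minus share-weighted input growth,
        assuming Cobb-Douglas technology with constant returns to scale."""
        return np.asarray(dlnY) - alpha * np.asarray(dlnK) - (1.0 - alpha) * np.asarray(dlnL)

    # e.g. 3% output growth, 4% capital growth, 1% labour growth, capital share 0.3
    print(solow_residual(0.03, 0.04, 0.01, alpha=0.3))   # -> 0.011, i.e. 1.1% TFP growth

Because the residual absorbs everything the assumed production function misses, misspecification and measurement error feed directly into the GA estimate, which is the weakness the simulation experiments probe.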
Abstract:
∗This research, which was funded by a grant from the Natural Sciences and Engineering Research Council of Canada, formed part of G.A.’s Ph.D. thesis [1].
Abstract:
We present quasi-Monte Carlo analogs of Monte Carlo methods for some linear algebra problems: solving systems of linear equations, computing extreme eigenvalues, and matrix inversion. Reformulating the problems as solving integral equations with special kernels and domains permits us to analyze the quasi-Monte Carlo methods with bounds from numerical integration. Standard Monte Carlo methods for integration provide a convergence rate of O(N^(−1/2)) using N samples. Quasi-Monte Carlo methods use quasirandom sequences, with a resulting convergence rate for numerical integration as good as O((log N)^k N^(−1)). We have shown theoretically and through numerical tests that the use of quasirandom sequences improves both the magnitude of the error and the convergence rate of the considered Monte Carlo methods. We also analyze the complexity of the considered quasi-Monte Carlo algorithms and compare them to the complexity of the analogous Monte Carlo and deterministic algorithms.
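To illustrate the contrast in convergence rates mentioned above, the sketch below estimates a simple one-dimensional integral with pseudo-random points and with the van der Corput low-discrepancy sequence; the integrand, sample size and base are illustrative choices and not the linear-algebra test problems of the paper.

    import numpy as np

    def van_der_corput(n, base=2):
        """First n points of the van der Corput low-discrepancy sequence in the given base."""
        seq = np.empty(n)
        for i in range(n):
            x, denom, k = 0.0, 1.0, i + 1
            while k > 0:
                denom *= base
                k, rem = divmod(k, base)   # peel off digits and mirror them about the radix point
                x += rem / denom
            seq[i] = x
        return seq

    f = lambda x: np.exp(x)                 # test integrand with known integral e - 1 on [0, 1]
    N = 4096
    mc_est  = f(np.random.default_rng(0).uniform(size=N)).mean()
    qmc_est = f(van_der_corput(N)).mean()
    print(abs(mc_est - (np.e - 1)), abs(qmc_est - (np.e - 1)))   # quasi-random error is markedly smaller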
Abstract:
2000 Mathematics Subject Classification: 91B28, 65C05.
Abstract:
Implementation of a Monte Carlo simulation for the solution of population balance equations (PBEs) requires choice of the initial sample number (N0), the number of replicates (M), and the number of bins for probability distribution reconstruction (n). It is found that the squared Hellinger distance, H^2, is a useful measure of the accuracy of a Monte Carlo (MC) simulation, and can be related directly to N0, M, and n. Asymptotic approximations of H^2 are deduced and tested for both one-dimensional (1-D) and 2-D PBEs with coalescence. The central processing unit (CPU) cost, C, follows a power-law relationship, C = a·M·N0^b, with the CPU cost index, b, indicating the weighting of N0 in the total CPU cost. The bin number n must be chosen to balance accuracy and resolution. For fixed n, M × N0 determines the accuracy of the MC prediction; if b > 1, then the optimal solution strategy uses multiple replicates and a small sample size. Conversely, if 0 < b < 1, one replicate and a large initial sample size are preferred. © 2015 American Institute of Chemical Engineers AIChE J, 61: 2394–2402, 2015
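Since the squared Hellinger distance is the accuracy measure used above, a minimal sketch of its computation for two binned (histogram) distributions is given below; normalising both inputs inside the function is an assumption made here for convenience.

    import numpy as np

    def squared_hellinger(p, q):
        """Squared Hellinger distance between two discrete distributions on the same bins."""
        p = np.asarray(p, dtype=float)
        q = np.asarray(q, dtype=float)
        p /= p.sum()
        q /= q.sum()
        return 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)

    p = [0.2, 0.5, 0.3]    # e.g. reference bin probabilities
    q = [18, 55, 27]       # e.g. bin counts from an MC run (normalised inside)
    print(squared_hellinger(p, q))   # 0 for identical distributions, 1 for disjoint support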
Abstract:
Dobri Dankov, Vladimir Rusinov, Maria Velinova, Zhasmina Petrova - A chemical reaction is studied using two ways of modelling the reaction probability within the Direct Simulation Monte Carlo method. The order of magnitude of the differences in temperatures and concentrations obtained with the two approaches is examined. As the activity of the chemical reaction decreases, the differences between the concentrations and temperatures obtained by the two approaches also decrease. Keywords: Fluid mechanics, Kinetic theory, Rarefied gas, DSMC
Abstract:
MSC Subject Classification: 65C05, 65U05.
Abstract:
An iterative Monte Carlo algorithm for evaluating linear functionals of the solution of integral equations with polynomial non-linearity is proposed and studied. The method uses a simulation of branching stochastic processes. It is proved that the mathematical expectation of the introduced random variable is equal to a linear functional of the solution. The algorithm uses the so-called almost optimal density function. Numerical examples are considered. A parallel implementation of the algorithm is also realized, using the ATHAPASCAN package as an environment for parallel execution. The computational results demonstrate the high parallel efficiency of the presented algorithm and show that it gives a good solution when the almost optimal density function is used as a transition density.
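As a toy analogue of the branching-process estimator described above, the sketch below builds an unbiased estimate of the series solution of the scalar quadratic fixed-point equation u = b + a·u², which stands in for an integral equation with quadratic non-linearity; the termination probability p, the parameter values, and the simple fixed-probability branching rule are illustrative assumptions and do not reproduce the paper's almost optimal density.

    import numpy as np

    def branching_estimate(a, b, p=0.7, rng=None):
        """One draw Z with E[Z] equal to the series solution of u = b + a*u**2.

        With probability p the walk terminates and scores b/p; otherwise it branches
        into two independent copies and scores (a/(1-p)) * Z1 * Z2, so that
        E[Z] = b + a*E[Z]**2 by construction.
        """
        if rng is None:
            rng = np.random.default_rng()
        if rng.uniform() < p:
            return b / p
        return (a / (1.0 - p)) * branching_estimate(a, b, p, rng) * branching_estimate(a, b, p, rng)

    a, b = 0.2, 1.0
    rng = np.random.default_rng(1)
    est = np.mean([branching_estimate(a, b, rng=rng) for _ in range(100_000)])
    exact = (1.0 - np.sqrt(1.0 - 4.0 * a * b)) / (2.0 * a)   # series root of u = b + a*u^2
    print(est, exact)                                         # the two values should agree closely

Choosing p so that the branching process is subcritical keeps the simulated trees small; the role of an almost optimal transition density in the full algorithm is analogous, trading tree size against the variance of the estimator.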
Abstract:
2002 Mathematics Subject Classification: 65C05.
Abstract:
2000 Mathematics Subject Classification: Primary 62F35; Secondary 62P99
Abstract:
2000 Mathematics Subject Classification: 65C05