990 results for Monte Carlo simulations of trajectory chemistry
Abstract:
The hybrid Monte Carlo (HMC) method is a popular and rigorous method for sampling from a canonical ensemble. The HMC method is based on classical molecular dynamics simulations combined with a Metropolis acceptance criterion and a momentum resampling step. While the HMC method completely resamples the momentum after each Monte Carlo step, the generalized hybrid Monte Carlo (GHMC) method can be implemented with a partial momentum refreshment step. This property seems desirable for keeping some of the dynamic information throughout the sampling process, similar to stochastic Langevin and Brownian dynamics simulations. It is, however, vital to the success of the GHMC method that the rejection rate in the molecular dynamics part is kept at a minimum; otherwise an undesirable Zitterbewegung in the Monte Carlo samples is observed. In this paper, we describe a method to achieve very low rejection rates by using a modified energy, which is preserved to high order along molecular dynamics trajectories. The modified energy is based on backward error analysis results for symplectic time-stepping methods. The proposed generalized shadow hybrid Monte Carlo (GSHMC) method is applicable to NVT as well as NPT ensemble simulations.
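As a rough illustration of the partial momentum refreshment and Metropolis structure described above, here is a minimal GHMC sketch in Python. The potential U, step size, and mixing angle phi are generic placeholders, and the accept test uses the plain energy rather than the paper's modified shadow energy:

```python
import numpy as np

def partial_momentum_refresh(p, phi, rng):
    """Mix the old momentum with a fresh Gaussian draw.
    phi = pi/2 gives full resampling (plain HMC); a small phi
    retains most of the dynamic information between steps."""
    return np.cos(phi) * p + np.sin(phi) * rng.standard_normal(p.shape)

def ghmc_step(q, p, U, grad_U, eps, n_steps, phi, rng):
    """One GHMC step: partial refresh, leapfrog, Metropolis test."""
    p = partial_momentum_refresh(p, phi, rng)
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * eps * grad_U(q_new)          # leapfrog half kick
    for _ in range(n_steps):
        q_new += eps * p_new                    # drift
        p_new -= eps * grad_U(q_new)            # kick
    p_new += 0.5 * eps * grad_U(q_new)          # undo half of final kick
    dH = (U(q_new) + 0.5 * p_new @ p_new) - (U(q) + 0.5 * p @ p)
    if np.log(rng.uniform()) < -dH:
        return q_new, p_new                     # accept
    return q, -p                                # reject: flip momentum
```

In the paper's GSHMC variant, the energy in the accept test would be replaced by the modified (shadow) energy that the symplectic integrator preserves to high order, which is what drives the rejection rate down.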
Abstract:
The objective of this paper is to apply the mis-specification (M-S) encompassing perspective to the problem of choosing between linear and log-linear unit-root models. A simple M-S encompassing test, based on an auxiliary regression stemming from the conditional second moment, is proposed and its empirical size and power are investigated using Monte Carlo simulations. It is shown that by focusing on the conditional process the sampling distributions of the relevant statistics are well behaved under both the null and alternative hypotheses. The proposed M-S encompassing test is illustrated using US total disposable income quarterly data.
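The size/power study described here is specific to the paper's auxiliary-regression test; as a generic, hedged sketch, empirical size can be estimated by simulating under the null and counting rejections. The placeholder test and random-walk null below are illustrative, not the paper's M-S encompassing statistic:

```python
import numpy as np
from scipy import stats

def empirical_size(p_value, simulate_null, n_reps=5000, alpha=0.05, seed=0):
    """Monte Carlo estimate of a test's empirical size: simulate data
    under H0 many times and record how often the test rejects."""
    rng = np.random.default_rng(seed)
    rejections = sum(p_value(simulate_null(rng)) < alpha
                     for _ in range(n_reps))
    return rejections / n_reps

# Illustrative only: a Gaussian random walk as the linear unit-root null,
# with a normality test on first differences standing in for the real test.
size = empirical_size(
    p_value=lambda y: stats.jarque_bera(np.diff(y)).pvalue,
    simulate_null=lambda rng: np.cumsum(rng.standard_normal(200)),
)
print(f"empirical size at nominal 5%: {size:.3f}")
```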
Abstract:
Although difference-stationary (DS) and trend-stationary (TS) processes have been subject to considerable analysis, there are no direct comparisons for each being the data-generation process (DGP). We examine the consequences of incorrectly choosing between these models for forecasting, for both known and estimated parameters. Three sets of Monte Carlo simulations illustrate the analysis: they evaluate the biases in conventional standard errors when each model is mis-specified, compute the relative mean-square forecast errors of the two models for both DGPs, and investigate autocorrelated errors, so that both models can better approximate the converse DGP. The outcomes are surprisingly different from established results.
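A schematic Monte Carlo in this spirit, not the paper's design (function names, parameter values, and the one-step-ahead comparison are illustrative assumptions):

```python
import numpy as np

def simulate_ds(n, drift=0.1, rng=None):
    """Difference-stationary DGP: random walk with drift."""
    return np.cumsum(drift + rng.standard_normal(n))

def simulate_ts(n, trend=0.1, rho=0.8, rng=None):
    """Trend-stationary DGP: linear trend plus AR(1) noise."""
    e = np.zeros(n)
    for t in range(1, n):
        e[t] = rho * e[t - 1] + rng.standard_normal()
    return trend * np.arange(n) + e

def forecasts(y):
    """One-step forecasts from each (possibly mis-specified) model."""
    t = np.arange(len(y))
    b1, b0 = np.polyfit(t, y, 1)          # TS model: linear trend fit
    ts_fc = b0 + b1 * len(y)
    ds_fc = y[-1] + np.mean(np.diff(y))   # DS model: drifted random walk
    return ts_fc, ds_fc

rng = np.random.default_rng(1)
n, reps = 100, 2000
msfe = np.zeros((2, 2))   # rows: DGP (DS, TS); cols: model (TS, DS)
for _ in range(reps):
    for i, sim in enumerate((simulate_ds, simulate_ts)):
        path = sim(n + 1, rng=rng)
        ts_fc, ds_fc = forecasts(path[:-1])
        msfe[i] += [(path[-1] - ts_fc) ** 2, (path[-1] - ds_fc) ** 2]
print("MSFE (rows: DS, TS DGP; cols: TS, DS model):\n", msfe / reps)
```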
Abstract:
We compare linear autoregressive (AR) models and self-exciting threshold autoregressive (SETAR) models in terms of their point forecast performance and their ability to characterize the uncertainty surrounding those forecasts, i.e. interval or density forecasts. A two-regime SETAR process is used as the data-generating process in an extensive set of Monte Carlo simulations, and we consider the discriminatory power of recently developed methods of forecast evaluation for different degrees of non-linearity. We find that the interval and density evaluation methods are unlikely to show the linear model to be deficient on samples of the size typical of macroeconomic data.
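A minimal sketch of a two-regime SETAR data-generating process of the kind used in such studies (the coefficients, threshold, and delay below are illustrative, not the paper's):

```python
import numpy as np

def simulate_setar(n, phi1=0.9, phi2=-0.2, threshold=0.0, delay=1,
                   sigma=1.0, burn=200, seed=0):
    """Simulate a two-regime SETAR process:
    y_t = phi1 * y_{t-1} + e_t   if y_{t-delay} <= threshold,
          phi2 * y_{t-1} + e_t   otherwise,
    with e_t ~ N(0, sigma^2). A burn-in removes start-up effects."""
    rng = np.random.default_rng(seed)
    y = np.zeros(n + burn)
    for t in range(max(1, delay), n + burn):
        phi = phi1 if y[t - delay] <= threshold else phi2
        y[t] = phi * y[t - 1] + sigma * rng.standard_normal()
    return y[burn:]

y = simulate_setar(500)
```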
Abstract:
Runoff generation processes and pathways vary widely between catchments. Credible simulations of solute and pollutant transport in surface waters are dependent on models which facilitate appropriate, catchment-specific representations of perceptual models of the runoff generation process. Here, we present a flexible, semi-distributed landscape-scale rainfall-runoff modelling toolkit suitable for simulating a broad range of user-specified perceptual models of runoff generation and stream flow occurring in different climatic regions and landscape types. PERSiST (the Precipitation, Evapotranspiration and Runoff Simulator for Solute Transport) is designed for simulating present-day hydrology; projecting possible future effects of climate or land use change on runoff and catchment water storage; and generating hydrologic inputs for the Integrated Catchments (INCA) family of models. PERSiST has limited data requirements and is calibrated using observed time series of precipitation, air temperature and runoff at one or more points in a river network. Here, we apply PERSiST to the river Thames in the UK and describe a Monte Carlo tool for model calibration, sensitivity and uncertainty analysis.
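PERSiST itself is not reproduced here; as a flavour of Monte Carlo calibration of a bucket-type rainfall-runoff model, a minimal sketch (the single-store structure, parameter ranges, and Nash-Sutcliffe objective are simplifying assumptions):

```python
import numpy as np

def bucket_model(precip, pet, capacity, rate):
    """Single-store bucket: storage fills with rain, loses potential
    evapotranspiration, spills above capacity, and drains linearly."""
    storage, runoff = 0.0, np.zeros(len(precip))
    for t, (p, e) in enumerate(zip(precip, pet)):
        storage = max(storage + p - e, 0.0)
        spill = max(storage - capacity, 0.0)
        q = spill + rate * min(storage, capacity)
        storage -= q
        runoff[t] = q
    return runoff

def monte_carlo_calibrate(precip, pet, observed, n=10000, seed=0):
    """Sample parameter sets uniformly; keep the best by Nash-Sutcliffe."""
    rng = np.random.default_rng(seed)
    best, best_nse = None, -np.inf
    for _ in range(n):
        capacity, rate = rng.uniform(10, 500), rng.uniform(0.001, 0.5)
        sim = bucket_model(precip, pet, capacity, rate)
        nse = 1 - np.sum((observed - sim) ** 2) / np.sum(
            (observed - observed.mean()) ** 2)
        if nse > best_nse:
            best, best_nse = (capacity, rate), nse
    return best, best_nse
```

Retaining all sampled parameter sets, rather than only the best, is what turns the same loop into a sensitivity and uncertainty analysis.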
Abstract:
We study the orientational ordering on the surface of a sphere using Monte Carlo and Brownian dynamics simulations of rods interacting with an anisotropic potential. We restrict the orientations to the local tangent plane of the spherical surface and fix the position of each rod to be at a discrete point on the spherical surface. On the surface of a sphere, orientational ordering cannot be perfectly nematic due to the inevitable presence of defects. We find that the ground state of four +1/2 point defects is stable across a broad range of temperatures. We investigate the transition from disordered to ordered phase by decreasing the temperature and find a very smooth transition. We use fluctuations of the local directors to estimate the Frank elastic constant on the surface of a sphere and compare it to the planar case. We observe subdiffusive behavior in the mean square displacement of the defect cores and estimate their diffusion constants.
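A minimal sketch of the tangent-plane constraint described above, assuming a Metropolis scheme and a Lebwohl-Lasher-style pair energy as a stand-in for the paper's anisotropic potential (all names are illustrative):

```python
import numpy as np

def tangent_frame(r):
    """Orthonormal basis (e1, e2) of the tangent plane at unit position r."""
    a = np.array([1.0, 0, 0]) if abs(r[0]) < 0.9 else np.array([0, 1.0, 0])
    e1 = np.cross(r, a)
    e1 /= np.linalg.norm(e1)
    return e1, np.cross(r, e1)

def trial_rotation(u, r, dtheta_max, rng):
    """Rotate the director u at site r by a random angle within its
    local tangent plane (the orientational constraint in the paper)."""
    e1, e2 = tangent_frame(r)
    theta = np.arctan2(u @ e2, u @ e1) + dtheta_max * rng.uniform(-1, 1)
    return np.cos(theta) * e1 + np.sin(theta) * e2

def pair_energy(ui, uj, eps=1.0):
    """Nematic coupling -eps * P2(ui . uj); a stand-in for the
    paper's anisotropic potential."""
    c = ui @ uj
    return -eps * 0.5 * (3.0 * c * c - 1.0)
```

A Metropolis sweep would propose trial_rotation at each fixed site and accept with probability min(1, exp(-dE/kT)), with dE summed over the site's neighbouring pairs.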
Abstract:
Recent research into flood modelling has primarily concentrated on the simulation of inundation flow without considering the influence of channel morphology. River channels are often represented by a simplified geometry that is implicitly assumed to remain unchanged during flood simulations. However, field evidence demonstrates that significant morphological changes can occur during floods that mobilise the boundary sediments. Despite this, the effect of channel morphology on model results has been largely unexplored. To address this issue, the impact of channel cross-section geometry and channel long-profile variability on flood dynamics is examined using an ensemble of simulations of the 1:2102 year recurrence interval floods in Cockermouth, UK, with a 1D-2D hydraulic model (LISFLOOD-FP), within an uncertainty framework. A series of hypothetical scenarios of channel morphology was constructed based on a simple velocity-based model of critical entrainment. A Monte Carlo simulation framework was used to quantify the effects of channel morphology, together with variations in the channel and floodplain roughness coefficients, grain size characteristics and critical shear stress, on measures of flood inundation. The results showed that the bed elevation modifications generated by the simplistic equations provided a good approximation of the observed spatial patterns of erosion, despite overestimating erosion depths. Uncertainty in channel long-profile variability affected only the local flood dynamics and did not significantly affect the friction sensitivity or the flood inundation mapping. The results imply that hydraulic models generally do not need to account for within-event morphodynamic changes of the type and magnitude modelled, as these have a negligible impact that is smaller than other uncertainties, e.g. boundary conditions. Instead, morphodynamic change needs to accumulate over a series of events to become large enough to change the hydrodynamics of floods in supply-limited gravel-bed rivers like the one used in this research.
Abstract:
A procedure is presented for fitting incoherent scatter radar data from non-thermal F-region ionospheric plasma, using previously predicted theoretical spectra. It is found that values of the shape distortion factor D∗, associated with deviations of the ion velocity distribution from a Maxwellian distribution, and ion temperatures can be deduced (the results being independent of the path of iteration) if the angle between the line-of-sight and the geomagnetic field is larger than about 15–20°. The procedure can be used with one or both of two sets of assumptions. These concern the validity of the adopted model for the line-of-sight ion velocity distribution in the one case or for the full three-dimensional ion velocity distribution function in the other. The distribution function employed was developed to describe the line-of-sight velocity distribution for large aspect angles, but both experimental data and Monte Carlo simulations indicate that the form of the field-perpendicular distribution can also describe the distribution at more general aspect angles. The assumption of this form for the line-of-sight velocity distribution at a general aspect angle enables rigorous derivation of values of the one-dimensional, line-of-sight ion temperature. With some additional assumptions (principally that the field-parallel distribution is always Maxwellian and there is a simple relationship between the ion temperature anisotropy and the distortion of the field-perpendicular distribution from a Maxwellian), fits to data for large aspect angles enable determination of line-of-sight temperatures at all aspect angles and hence of the average ion temperature and the ion temperature anisotropy. For small aspect angles, the analysis is restricted to the determination of the line-of-sight ion temperature because the theoretical spectrum is insensitive to non-thermal effects when the plasma is viewed along directions almost parallel to the magnetic field. This limitation is expected to apply to any realistic model of the ion velocity distribution function and its consequences are discussed. Fit strategies which allow for mixed ion composition are also considered. Examples of fits to data from various EISCAT observing programmes are presented.
Abstract:
In recent years an increasing number of papers have employed meta-analysis to integrate effect sizes from a researcher's own series of studies within a single paper ("internal meta-analysis"). Although this approach has the obvious advantage of yielding narrower confidence intervals, we show that it can inadvertently inflate false-positive rates if researchers are motivated to use internal meta-analysis in order to obtain a significant overall effect. Specifically, if one decides whether to stop or run a further replication experiment depending on the significance of an internal meta-analysis of the results so far, false-positive rates increase beyond the nominal level. We conducted a set of Monte Carlo simulations to demonstrate this argument, and provide a literature review to gauge awareness and prevalence of the issue. Furthermore, we offer several recommendations for researchers who use internal meta-analysis to judge statistical significance.
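A hedged sketch of the kind of simulation the argument calls for: all studies are generated under a true null, yet stopping as soon as a fixed-effect meta-analysis of the studies so far is significant inflates the false-positive rate (sample sizes and the maximum number of studies are illustrative):

```python
import numpy as np
from scipy import stats

def internal_meta_fpr(n_per_study=50, max_studies=5, alpha=0.05,
                      n_sims=20000, seed=0):
    """False-positive rate when, after each new (null) study, a
    fixed-effect meta-analysis of all studies so far is tested and
    data collection stops as soon as it comes out significant."""
    rng = np.random.default_rng(seed)
    z_crit = stats.norm.ppf(1 - alpha / 2)
    se2 = 2.0 / n_per_study        # variance of a mean difference, sigma = 1
    false_pos = 0
    for _ in range(n_sims):
        effects = []
        for _ in range(max_studies):
            a = rng.standard_normal(n_per_study)
            b = rng.standard_normal(n_per_study)
            effects.append(a.mean() - b.mean())
            # equal-variance studies: the pooled estimate is a plain mean
            z = np.mean(effects) / np.sqrt(se2 / len(effects))
            if abs(z) > z_crit:
                false_pos += 1
                break
    return false_pos / n_sims

print(internal_meta_fpr())         # noticeably above the nominal 0.05
```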
Abstract:
We studied superclusters of galaxies in a volume-limited sample extracted from the Sloan Digital Sky Survey Data Release 7 and from mock catalogues based on a semi-analytical model of galaxy evolution in the Millennium Simulation. A density field method was applied to a sample of galaxies brighter than M_r = -21 + 5 log h_100 to identify superclusters, taking into account selection and boundary effects. In order to evaluate the influence of the threshold density, we chose two thresholds: the first maximizes the number of objects (D1) and the second constrains the maximum supercluster size to ~120 h^-1 Mpc (D2). We performed a morphological analysis, using Minkowski Functionals, based on a parameter that increases monotonically from filaments to pancakes. An anticorrelation was found between supercluster richness (and total luminosity or size) and the morphological parameter, indicating that filamentary structures tend to be richer, larger and more luminous than pancakes in both the observed and mock catalogues. We also used the mock samples to compare supercluster morphologies identified in position and velocity space, concluding that our morphological classification is not biased by peculiar velocities. Monte Carlo simulations designed to investigate the reliability of our results with respect to random fluctuations show that these results are robust. Our analysis indicates that filaments and pancakes have different luminosity and size distributions.
Abstract:
In this paper, we compare the performance of two statistical approaches for the analysis of data from social research. In the first approach, we use normal models with joint regression modelling of the mean and of the variance heterogeneity. In the second approach, we use hierarchical models. In the first case, individual and social variables are included as explanatory variables in the regression models for both the mean and the variance, while in the second case, the variance at level 1 of the hierarchical model depends on the individuals (their age), and at level 2 the variance is assumed to change according to socioeconomic stratum. Applying these methodologies, we analyze a Colombian height data set to find differences that can be explained by socioeconomic conditions. We also present some theoretical and empirical results concerning the two models. From this comparative study, we conclude that it is better to jointly model the mean and the variance heterogeneity in all cases. We also observe that the Gibbs sampling chain used in the Markov chain Monte Carlo method for the joint modelling of the mean and variance heterogeneity converges quickly.
Abstract:
In this paper, we introduce a Bayesian analysis for multivariate survival data in the presence of a covariate vector and censored observations. Different "frailties", or latent variables, are considered to capture the correlation among the survival times for the same individual. We assume Weibull or generalized gamma distributions for right-censored lifetime data. We develop the Bayesian analysis using Markov chain Monte Carlo (MCMC) methods.
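As a hedged, minimal building block (no frailty term, no covariates; Weibull only), a random-walk Metropolis sampler for right-censored data might look like this; the priors, step size, and log-scale parameterization are illustrative assumptions, not the paper's model:

```python
import numpy as np

def log_post(params, t, delta):
    """Log-posterior for Weibull (shape a, scale b), parameterized on
    the log scale, with right censoring (delta=1 event, 0 censored)
    and weak normal priors on (log a, log b) -- illustrative choices."""
    la, lb = params
    a, b = np.exp(la), np.exp(lb)
    loglik = np.sum(delta * (np.log(a) - np.log(b)
                             + (a - 1) * (np.log(t) - np.log(b)))
                    - (t / b) ** a)
    logprior = -0.5 * (la ** 2 + lb ** 2) / 100.0
    return loglik + logprior

def metropolis(t, delta, n_iter=20000, step=0.05, seed=0):
    """Random-walk Metropolis over (log shape, log scale)."""
    rng = np.random.default_rng(seed)
    x, lp = np.zeros(2), log_post(np.zeros(2), t, delta)
    chain = np.empty((n_iter, 2))
    for i in range(n_iter):
        prop = x + step * rng.standard_normal(2)
        lp_prop = log_post(prop, t, delta)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain[i] = np.exp(x)        # back to (shape, scale)
    return chain
```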
Abstract:
The purpose of this paper is to develop a Bayesian analysis for nonlinear regression models under scale mixtures of skew-normal distributions. This novel class of models provides a useful generalization of symmetrical nonlinear regression models, since the error distributions cover both skewed and heavy-tailed distributions such as the skew-t, skew-slash and skew-contaminated normal distributions. The main advantage of this class of distributions is that they have a convenient hierarchical representation that allows the implementation of Markov chain Monte Carlo (MCMC) methods to simulate samples from the joint posterior distribution. In order to examine the robustness of this flexible class against outlying and influential observations, we present Bayesian case-deletion influence diagnostics based on the Kullback-Leibler divergence. Further, some discussion of model selection criteria is given. The newly developed procedures are illustrated with two simulation studies and a real data set previously analyzed under normal and skew-normal nonlinear regression models.
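The hierarchical (stochastic) representation mentioned above is easy to state concretely; a sketch for sampling skew-normal and skew-t variates, with illustrative parameter names:

```python
import numpy as np

def rskew_normal(n, mu=0.0, sigma=1.0, alpha=2.0, rng=None):
    """Skew-normal draws via the hierarchical representation
    Y = mu + sigma * (delta*|Z0| + sqrt(1 - delta^2)*Z1),
    with delta = alpha / sqrt(1 + alpha^2) and Z0, Z1 iid N(0, 1)."""
    if rng is None:
        rng = np.random.default_rng()
    delta = alpha / np.sqrt(1.0 + alpha ** 2)
    z0, z1 = rng.standard_normal(n), rng.standard_normal(n)
    return mu + sigma * (delta * np.abs(z0) + np.sqrt(1 - delta ** 2) * z1)

def rskew_t(n, mu=0.0, sigma=1.0, alpha=2.0, nu=4.0, rng=None):
    """Skew-t as a scale mixture of skew-normals: the standardized
    skew-normal is divided by sqrt(W), W ~ Gamma(nu/2, rate nu/2)."""
    if rng is None:
        rng = np.random.default_rng()
    w = rng.gamma(nu / 2.0, 2.0 / nu, size=n)   # numpy takes scale = 2/nu
    return mu + sigma * rskew_normal(n, 0.0, 1.0, alpha, rng) / np.sqrt(w)
```

It is the conditionally normal structure of such representations (given |Z0| and W) that makes Gibbs-type MCMC updates tractable.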
Abstract:
We analyze a threshold contact process on a square lattice in which particles are created on empty sites with at least two neighboring particles and are annihilated spontaneously. We show by means of Monte Carlo simulations that the process undergoes a discontinuous phase transition at a definite value of the annihilation parameter, in accordance with the Gibbs phase rule, and that the discontinuous transition exhibits critical behavior. The simulations were performed by using boundary conditions in which the sites of the border of the lattice are permanently occupied by particles.
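A hedged sketch of such a simulation; the discrete-time update probabilities below are one conventional choice for creation rate 1 and annihilation rate alpha, not necessarily the paper's implementation:

```python
import numpy as np

def sweep(lattice, alpha, rng):
    """One Monte Carlo sweep of a threshold contact process with
    permanently occupied borders: an occupied interior site empties
    with probability alpha/(1+alpha); an empty site with at least two
    occupied nearest neighbours fills with probability 1/(1+alpha)."""
    L = lattice.shape[0]
    for _ in range((L - 2) ** 2):
        i, j = rng.integers(1, L - 1, size=2)      # interior site
        if lattice[i, j]:
            if rng.random() < alpha / (1.0 + alpha):
                lattice[i, j] = 0                  # spontaneous annihilation
        else:
            nn = (lattice[i - 1, j] + lattice[i + 1, j]
                  + lattice[i, j - 1] + lattice[i, j + 1])
            if nn >= 2 and rng.random() < 1.0 / (1.0 + alpha):
                lattice[i, j] = 1                  # threshold creation

rng = np.random.default_rng(0)
L, alpha = 64, 0.1
lattice = np.ones((L, L), dtype=int)   # border sites stay occupied throughout
for _ in range(1000):
    sweep(lattice, alpha, rng)
print("particle density (including fixed borders):", lattice.mean())
```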
Abstract:
We consider a non-equilibrium three-state model whose dynamics is Markovian and which displays the same symmetry as the three-state Potts model, i.e. the transition rates are invariant under cyclic permutation of the states. Unlike in the Potts model, detailed balance is, in general, not satisfied. The aging and stationary properties of the model defined on a square lattice are obtained by means of large-scale Monte Carlo simulations. We show that the phase diagram presents a critical line, belonging to the three-state Potts universality class, that ends at a point whose universality class is that of the voter model. Aging is considered on the critical line, at the voter point and in the ferromagnetic phase.