5 results for fixed path methods

at Duke University


Relevance:

30.00%

Publisher:

Abstract:

Predicting from first-principles calculations whether mixed metallic elements phase-separate or form ordered structures is a major challenge of current materials research. It can be partially addressed in cases where experiments suggest the underlying lattice is conserved, using cluster expansion (CE) and a variety of exhaustive evaluation or genetic search algorithms. Evolutionary algorithms have been recently introduced to search for stable off-lattice structures at fixed mixture compositions. The general off-lattice problem is still unsolved. We present an integrated approach of CE and high-throughput ab initio calculations (HT) applicable to the full range of compositions in binary systems where the constituent elements or the intermediate ordered structures have different lattice types. The HT method replaces the search algorithms by direct calculation of a moderate number of naturally occurring prototypes representing all crystal systems and guides CE calculations of derivative structures. This synergy achieves the precision of the CE and the guiding strengths of the HT. Its application to poorly characterized binary Hf systems, believed to be phase-separating, defines three classes of alloys where CE and HT complement each other to uncover new ordered structures.
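For context, a minimal sketch of the cluster expansion formalism the abstract refers to (the general textbook form, not this paper's specific parameterization): the energy of an occupation configuration σ on a fixed parent lattice is expanded in cluster correlation functions,

E(\sigma) = \sum_{\alpha} m_{\alpha} J_{\alpha} \, \langle \Phi_{\alpha}(\sigma) \rangle ,

where the sum runs over symmetry-distinct clusters α with multiplicities m_α, the effective cluster interactions J_α are fit to ab initio energies (here guided by the HT prototype calculations), and ⟨Φ_α(σ)⟩ are the averaged correlation functions of the configuration.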

Relevance:

30.00%

Publisher:

Abstract:

Computer simulations of reaction processes in solution generally rely on the definition of a reaction coordinate and the determination of the thermodynamic changes of the system along that coordinate. The reaction coordinate is often constituted of characteristic geometrical properties of the reactive solute species, while the contributions of solvent molecules are implicitly included in the thermodynamics of the solute degrees of freedom. However, solvent dynamics can provide the driving force for the reaction process, and in such cases an explicit description of the solvent contribution to the free energy of the reaction becomes necessary. We report here a method that can be used to analyze the solvent contributions to reaction activation free energies from combined QM/MM minimum free-energy path simulations. The method was applied to the self-exchange S(N)2 reaction of CH(3)Cl + Cl(-), showing the importance of solvent-solute interactions to the reaction process. The results are further discussed in the context of coupling between solvent and solute molecules in reaction processes.
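One common way to make the solvent contribution explicit (a schematic decomposition consistent with minimum free-energy path treatments, not necessarily the exact formulation of this paper) is to define the free energy as a potential of mean force over the QM (solute) coordinates, with the MM (solvent) degrees of freedom integrated out:

A(\mathbf{r}_{\mathrm{QM}}) = -\frac{1}{\beta}\,\ln \int e^{-\beta E(\mathbf{r}_{\mathrm{QM}}, \mathbf{r}_{\mathrm{MM}})}\, d\mathbf{r}_{\mathrm{MM}} .

The solvent contribution to the activation free energy can then be isolated by comparing A(r_QM) along the reaction path with the solute-only energy evaluated along the same path.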

Relevance:

30.00%

Publisher:

Abstract:

We describe a strategy for Markov chain Monte Carlo analysis of non-linear, non-Gaussian state-space models involving batch analysis for inference on dynamic, latent state variables and fixed model parameters. The key innovation is a Metropolis-Hastings method for the time series of state variables based on sequential approximation of filtering and smoothing densities using normal mixtures. These mixtures are propagated through the non-linearities using an accurate, local mixture approximation method, and we use a regenerating procedure to deal with potential degeneracy of mixture components. This provides accurate, direct approximations to sequential filtering and retrospective smoothing distributions, and hence a useful construction of global Metropolis proposal distributions for simulation of posteriors for the set of states. This analysis is embedded within a Gibbs sampler to include uncertain fixed parameters. We give an example motivated by an application in systems biology. Supplemental materials provide an example based on a stochastic volatility model as well as MATLAB code.
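A minimal sketch of the kind of independence Metropolis-Hastings update described above, with a normal-mixture approximation to the smoothing density used as the proposal for the whole state path. The toy target, mixture components, and dimensions are all illustrative; in the actual scheme the mixture would come from the sequential filtering/smoothing approximation, and the update would sit inside a Gibbs sweep over the fixed parameters (the paper's supplemental MATLAB code is the authoritative reference).

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

def log_target(x, theta):
    # Toy log-posterior of an AR(1)-like state path given a fixed parameter theta.
    return -0.5 * x[0] ** 2 - 0.5 * np.sum((x[1:] - theta * x[:-1]) ** 2)

def log_mixture(x, means, covs, weights):
    # Log-density of the normal-mixture proposal.
    return np.log(sum(w * multivariate_normal.pdf(x, m, c)
                      for w, m, c in zip(weights, means, covs)))

def mh_state_update(x_curr, theta, means, covs, weights):
    # Independence Metropolis-Hastings: propose a whole state path from the mixture
    # and accept or reject with the usual ratio.
    k = rng.choice(len(weights), p=weights)
    x_prop = rng.multivariate_normal(means[k], covs[k])
    log_alpha = (log_target(x_prop, theta) - log_target(x_curr, theta)
                 + log_mixture(x_curr, means, covs, weights)
                 - log_mixture(x_prop, means, covs, weights))
    return x_prop if np.log(rng.uniform()) < log_alpha else x_curr

# Illustrative two-component mixture over a length-3 state path.
means, covs, weights = [np.zeros(3), np.ones(3)], [np.eye(3), 0.5 * np.eye(3)], [0.6, 0.4]
x_new = mh_state_update(np.zeros(3), theta=0.9, means=means, covs=covs, weights=weights)
```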

Relevance:

30.00%

Publisher:

Abstract:

Continuous variables are one of the major data types collected by survey organizations. The data can be incomplete, so that the data collectors need to fill in the missing values, or they can contain sensitive information that needs protection from re-identification. One approach to protecting continuous microdata is to sum the values into cells defined by different combinations of features. In this thesis, I present novel methods of multiple imputation (MI) that can be applied to impute missing values and to synthesize confidential values for continuous and magnitude data.

The first method is for limiting the disclosure risk of the continuous microdata whose marginal sums are fixed. The motivation for developing such a method comes from the magnitude tables of non-negative integer values in economic surveys. I present approaches based on a mixture of Poisson distributions to describe the multivariate distribution so that the marginals of the synthetic data are guaranteed to sum to the original totals. At the same time, I present methods for assessing disclosure risks in releasing such synthetic magnitude microdata. The illustration on a survey of manufacturing establishments shows that the disclosure risks are low while the information loss is acceptable.
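A minimal illustration of the conditioning step that keeps the marginal sums fixed (a standard fact about Poisson counts, not the thesis's full mixture model): independent Poisson cell counts, conditioned on their total, are multinomial, so synthetic cells drawn this way reproduce the original row total exactly. The rates and totals below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def synthesize_row(total, rates):
    # Split a fixed row total across cells in proportion to (illustrative) Poisson rates.
    p = np.asarray(rates, dtype=float)
    return rng.multinomial(total, p / p.sum())

print(synthesize_row(120, [5.0, 2.0, 1.0, 4.0]))  # four synthetic cells summing to 120
```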

The second method is for releasing synthetic continuous microdata by a nonstandard MI method. Traditionally, MI fits a model to the confidential values and then generates multiple synthetic datasets from that model. Its disclosure risk tends to be high, especially when the original data contain extreme values. I present a nonstandard MI approach conditioned on protective intervals: the basic idea is to estimate the model parameters from these intervals rather than from the confidential values. The encouraging results of simple simulation studies suggest the potential of this new approach for limiting the posterior disclosure risk.
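As a rough illustration of estimating parameters from intervals rather than from the underlying values, here is a generic interval-censored maximum-likelihood fit for a normal model. This is only a stand-in for the idea: the thesis's approach is a Bayesian MI method, and the bounds below are invented.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def neg_log_lik(params, lo, hi):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    # Each observation contributes only the probability mass of its protective interval.
    p = norm.cdf(hi, mu, sigma) - norm.cdf(lo, mu, sigma)
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

lo = np.array([0.0, 2.0, 5.0, 1.0])   # illustrative lower bounds
hi = np.array([1.0, 4.0, 9.0, 3.0])   # illustrative upper bounds
fit = minimize(neg_log_lik, x0=np.array([1.0, 0.0]), args=(lo, hi))
print(fit.x)  # estimated mu and log(sigma), learned without seeing the raw values
```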

The third method is for imputing missing values in continuous and categorical variables. It extends a hierarchically coupled mixture model with local dependence, but separates the variables into non-focused (e.g., almost fully observed) and focused (e.g., largely missing) groups. The sub-model structure of the focused variables is more complex than that of the non-focused ones; their cluster indicators are linked together by tensor factorization, and the focused continuous variables depend locally on non-focused values. The model properties suggest that moving strongly associated non-focused variables to the focused side can help improve estimation accuracy, which is examined in several simulation studies. The method is applied to data from the American Community Survey.

Relevance:

30.00%

Publisher:

Abstract:

Free energy calculations are a computational method for determining thermodynamic quantities, such as free energies of binding, via simulation. Currently, due to computational and algorithmic limitations, free energy calculations are limited in scope. In this work, we propose two methods for improving the efficiency of free energy calculations.

First, we expand the state space of alchemical intermediates and show that this expansion enables us to calculate free energies along lower-variance paths. We use Q-learning, a reinforcement learning technique, to discover and optimize paths at low computational cost.
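A minimal sketch of tabular Q-learning applied to choosing a path through a small, hypothetical grid of alchemical intermediates, where the reward stands in for the negative per-step variance estimate. The state/action encoding and the cost function are invented for illustration and are not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(2)
n_states, n_actions = 10, 3            # hypothetical discretization of the path
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1     # learning rate, discount, exploration rate

def step_cost(s, a):
    # Stand-in for a per-step variance estimate obtained from short sampling runs.
    return (a - 1) ** 2 + 0.1 * s

for episode in range(500):
    s = 0
    while s < n_states - 1:
        # Epsilon-greedy action selection.
        a = int(rng.integers(n_actions)) if rng.uniform() < eps else int(np.argmax(Q[s]))
        reward = -step_cost(s, a)
        s_next = s + 1
        # Standard Q-learning update toward the bootstrapped target.
        Q[s, a] += alpha * (reward + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

# The greedy policy argmax(Q, axis=1) then defines the (illustrative) optimized path.
```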

Second, we reduce the cost of sampling along a given path by using sequential Monte Carlo samplers. We develop a new free energy estimator, pCrooks (pairwise Crooks), a variant of the Crooks fluctuation theorem (CFT) estimator that enables decomposition of the variance of the free energy estimate over discrete paths while retaining the beneficial characteristics of CFT.
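For reference, the Crooks fluctuation theorem that pCrooks builds on relates the forward and reverse work distributions across a path segment to the free energy difference:

\frac{P_F(W)}{P_R(-W)} = e^{\beta (W - \Delta F)} .

As described above, pCrooks applies this relation pairwise so that the variance of the overall estimate can be decomposed over the discrete segments of the path.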

Combining these two advancements, we show that for some test models, optimal expanded-space paths have a nearly 80% reduction in variance relative to the standard path. Additionally, our free energy estimator converges at a more consistent rate and, on average, 1.8 times faster when we enable path searching, even when the cost of path discovery and refinement is considered.