50 results for Monte-carlo Calculations
Abstract:
In this paper we consider bilinear forms of matrix polynomials and show that these polynomials can be used to construct solutions to the problems of solving systems of linear algebraic equations, matrix inversion, and finding extremal eigenvalues. An Almost Optimal Monte Carlo (MAO) algorithm for computing bilinear forms of matrix polynomials is presented. Results are given for the computational cost of a balanced algorithm for computing the bilinear form of a matrix power, i.e., an algorithm for which the probability and systematic errors are of the same order, and this cost is compared with that of a corresponding deterministic method.
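As a minimal illustration of the underlying idea (a sketch, not the authors' MAO algorithm), the bilinear form v^T A^k h can be estimated by random walks on the matrix indices, with initial and transition densities proportional to |v_i| and |a_ij|:

```python
import numpy as np

def mc_bilinear_form(A, v, h, k, n_samples=100_000, rng=None):
    """Estimate v^T A^k h by random walks on the matrix indices,
    with importance weights correcting for the sampling densities."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    # Initial density proportional to |v_i|, transitions proportional to |a_ij|.
    p0 = np.abs(v) / np.abs(v).sum()
    P = np.abs(A) / np.abs(A).sum(axis=1)[:, None]
    total = 0.0
    for _ in range(n_samples):
        i = rng.choice(n, p=p0)
        w = v[i] / p0[i]                # weight for the starting index
        for _ in range(k):
            j = rng.choice(n, p=P[i])
            w *= A[i, j] / P[i, j]      # weight update along the walk
            i = j
        total += w * h[i]
    return total / n_samples

# Small check against the exact value.
rng = np.random.default_rng(0)
A = rng.uniform(-0.5, 0.5, (5, 5)) / 5  # keep the powers well behaved
v, h = rng.normal(size=5), rng.normal(size=5)
print(mc_bilinear_form(A, v, h, k=3, rng=1))
print(v @ np.linalg.matrix_power(A, 3) @ h)
```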
Abstract:
Many well-established statistical methods in genetics were developed in a climate of severe constraints on computational power. Recent advances in simulation methodology now bring modern, flexible statistical methods within the reach of scientists having access to a desktop workstation. We illustrate the potential advantages now available by considering the problem of assessing departures from Hardy-Weinberg (HW) equilibrium. Several hypothesis tests of HW have been established, as well as a variety of point estimation methods for the parameter that measures departures from HW under the inbreeding model. We propose a computational, Bayesian method for assessing departures from HW, which has a number of important advantages over existing approaches. The method incorporates the effects of uncertainty about the nuisance parameters (the allele frequencies) as well as the boundary constraints on f (which are functions of the nuisance parameters). Results are naturally presented visually, exploiting the graphics capabilities of modern computer environments to allow straightforward interpretation. Perhaps most importantly, the method is founded on a flexible, likelihood-based modelling framework, which can incorporate the inbreeding model if appropriate, but also allows the assumptions of the model to be investigated and, if necessary, relaxed. Under appropriate conditions, information can be shared across loci and, possibly, across populations, leading to more precise estimation. The advantages of the method are illustrated by application both to simulated data and to data analysed by alternative methods in the recent literature.
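A schematic of the core computation, under stated assumptions (a single biallelic locus, a flat prior on the allowed (p, f) region, and hypothetical genotype counts; the paper's actual method is a richer likelihood-based framework with sharing across loci):

```python
import numpy as np

# Hypothetical genotype counts for illustration only.
n_AA, n_Aa, n_aa = 30, 40, 30

p_grid = np.linspace(0.01, 0.99, 199)
f_grid = np.linspace(-0.99, 0.99, 199)
P, F = np.meshgrid(p_grid, f_grid, indexing="ij")
Q = 1.0 - P

# Genotype probabilities under the inbreeding model.
pAA = P**2 + F * P * Q
pAa = 2 * P * Q * (1 - F)
paa = Q**2 + F * P * Q

# Boundary constraints on f (functions of p): all probabilities must be > 0.
valid = (pAA > 0) & (pAa > 0) & (paa > 0)

loglik = np.where(
    valid,
    n_AA * np.log(np.where(valid, pAA, 1))
    + n_Aa * np.log(np.where(valid, pAa, 1))
    + n_aa * np.log(np.where(valid, paa, 1)),
    -np.inf,
)
post = np.exp(loglik - loglik.max())
post /= post.sum()

# Marginal posterior for f, integrating out the nuisance parameter p.
post_f = post.sum(axis=0)
print("posterior mean of f:", np.sum(f_grid * post_f))
```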
Abstract:
Clusters of computers can be used together to provide a powerful computing resource. Large Monte Carlo simulations, such as those used to model particle growth, are computationally intensive and take considerable time to execute on conventional workstations. By spreading the work of the simulation across a cluster of computers, the elapsed execution time can be greatly reduced. A user can thus effectively obtain the performance of a supercomputer by using the spare cycles of other workstations.
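Because independent Monte Carlo replications need no communication, the speedup is near linear. A minimal sketch using Python's multiprocessing, with hit-or-miss estimation of pi standing in for a particle-growth simulation (on a real cluster, the Pool would be replaced by a batch scheduler or MPI):

```python
import numpy as np
from multiprocessing import Pool

def run_chunk(args):
    """One worker's share of the Monte Carlo job, with its own seed."""
    n, seed = args
    rng = np.random.default_rng(seed)
    x, y = rng.random(n), rng.random(n)
    return np.count_nonzero(x * x + y * y < 1.0)

if __name__ == "__main__":
    n_total, n_workers = 10_000_000, 8
    chunks = [(n_total // n_workers, seed) for seed in range(n_workers)]
    with Pool(n_workers) as pool:
        hits = sum(pool.map(run_chunk, chunks))
    print("pi ~", 4 * hits / n_total)
```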
Abstract:
The phase diagram for an AB diblock copolymer melt with polydisperse A blocks and monodisperse B blocks is evaluated using lattice-based Monte Carlo simulations. Experiments on this system have shown that the A-block polydispersity shifts the order-order transitions (OOTs) towards higher A-monomer content, while the order-disorder transition (ODT) moves towards higher temperatures when the A blocks form the minority domains and lower temperatures when the A blocks form the matrix. Although self-consistent field theory (SCFT) correctly accounts for the change in the OOTs, it incorrectly predicts the ODT to shift towards higher temperatures at all diblock copolymer compositions. In contrast, our simulations predict the correct shifts for both the OOTs and the ODT. This implies that polydispersity amplifies the fluctuation-induced correction to the mean-field ODT, which we attribute to a reduction in packing frustration. Consistent with this explanation, polydispersity is found to enhance the stability of the perforated-lamellar phase.
Abstract:
Using grand canonical Monte Carlo simulation we show, for the first time, the influence of carbon porosity and surface oxidation on the parameters of the Dubinin-Astakhov (DA) adsorption isotherm equation. We conclude that upon carbon surface oxidation, the adsorption decreases for all carbons studied. Moreover, the parameters of the DA model depend on the number of surface oxygen groups. Consequently, for carbons containing surface polar groups, SF6 adsorption isotherm data cannot be used to characterize the porosity.
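For reference, the DA equation relates the adsorbed volume W to the adsorption potential A = RT ln(p0/p) via W = W0 exp(-(A/E0)^n). A sketch of fitting these parameters, with hypothetical isotherm data standing in for simulated or measured points:

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314   # J mol^-1 K^-1
T = 298.0   # K, illustrative temperature

def da_isotherm(p_rel, W0, E0, n):
    """Dubinin-Astakhov: W = W0 exp(-(A/E0)^n), A = RT ln(p0/p)."""
    A = R * T * np.log(1.0 / p_rel)
    return W0 * np.exp(-((A / E0) ** n))

# Hypothetical isotherm points (relative pressure, adsorbed volume).
p_rel = np.array([1e-5, 1e-4, 1e-3, 1e-2, 0.05, 0.1, 0.3])
W = da_isotherm(p_rel, W0=0.45, E0=18_000.0, n=2.0)
W_noisy = W * (1 + 0.02 * np.random.default_rng(0).normal(size=W.size))

popt, _ = curve_fit(da_isotherm, p_rel, W_noisy, p0=(0.4, 15_000.0, 2.0))
print("W0 = %.3f cm3/g, E0 = %.0f J/mol, n = %.2f" % tuple(popt))
```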
Abstract:
Using the plausible model of activated carbon proposed by Harris and co-workers and grand canonical Monte Carlo simulations, we study the applicability of standard methods, widely used in adsorption science, for describing adsorption data on microporous carbons. Two carbon structures are studied, one with a narrow distribution of micropores up to 1 nm and the other with micropores covering a wide range of porosity. For both structures, adsorption isotherms of noble gases (from Ne to Xe), carbon tetrachloride, and benzene are simulated. The data obtained are considered in terms of Dubinin-Radushkevich plots. Moreover, for benzene and carbon tetrachloride the temperature invariance of the characteristic curve is also studied. We show that some empirical relationships obtained from experiment can be successfully recovered using simulated data. Next, we test the applicability of related Dubinin-type models, including the Dubinin-Izotova, Dubinin-Radushkevich-Stoeckli, and Jaroniec-Choma equations. The results demonstrate the limits and applicability of the studied models in the field of carbon porosity characterization.
Abstract:
The adsorption of gases on microporous carbons is still poorly understood, partly because the structure of these carbons is not well known. Here, a model of microporous carbons based on fullerene-like fragments is used as the basis for a theoretical study of Ar adsorption on carbon. First, a simulation box was constructed, containing a plausible arrangement of carbon fragments. Next, using a new Monte Carlo simulation algorithm, two types of carbon fragments were gradually placed into the initial structure to increase its microporosity. Thirty-six different microporous carbon structures were generated in this way. Using the method proposed recently by Bhattacharya and Gubbins (BG), the micropore size distributions of the obtained carbon models and the average micropore diameters were calculated. For ten chosen structures, Ar adsorption isotherms (87 K) were simulated via the hyper-parallel tempering Monte Carlo simulation method. The isotherms obtained in this way were described by widely applied methods of microporous carbon characterisation, i.e. Nguyen and Do, Horvath-Kawazoe, high-resolution αs plots, adsorption potential distributions, and the Dubinin-Astakhov (DA) equation. From simulated isotherms described by the DA equation, the average micropore diameters were calculated using empirical relationships proposed by different authors, and these were compared with those from the BG method.
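One of the named characterisation tools, the adsorption potential distribution X(A) = -dW/dA, can be evaluated from any isotherm by numerical differentiation. A sketch with hypothetical data standing in for the simulated Ar isotherms:

```python
import numpy as np

R, T = 8.314, 87.0  # Ar at 87 K, as in the simulated isotherms

def adsorption_potential_distribution(p_rel, W):
    """X(A) = -dW/dA from an isotherm W(p/p0), by numerical differentiation."""
    A = R * T * np.log(1.0 / p_rel)       # adsorption potential, J/mol
    order = np.argsort(A)
    A, W = A[order], W[order]
    return A, -np.gradient(W, A)

# Hypothetical DA-like isotherm standing in for simulated Ar data.
p_rel = np.logspace(-6, -0.5, 40)
W = 0.5 * np.exp(-((R * T * np.log(1 / p_rel)) / 9_000.0) ** 2)
A, X = adsorption_potential_distribution(p_rel, W)
print("peak of X(A) at A =", A[np.argmax(X)], "J/mol")
```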
Abstract:
The application of forecast ensembles to probabilistic weather prediction has spurred considerable interest in their evaluation. Such ensembles are commonly interpreted as Monte Carlo ensembles, meaning that the ensemble members are perceived as random draws from a distribution. Under this interpretation, a reasonable property to ask for is statistical consistency, which demands that the ensemble members and the verification behave like draws from the same distribution. A widely used technique for assessing the statistical consistency of a historical dataset is the rank histogram, which uses as its criterion the number of times that the verification falls between pairs of members of the ordered ensemble. Ensemble evaluation is rendered more specific by stratification, which means that ensembles satisfying a certain condition (e.g., a certain meteorological regime) are evaluated separately. Fundamental relationships between Monte Carlo ensembles, their rank histograms, and random sampling from the probability simplex according to the Dirichlet distribution are pointed out. Furthermore, the possible benefits and complications of ensemble stratification are discussed. The main conclusion is that a stratified Monte Carlo ensemble might appear inconsistent with the verification even though the original (unstratified) ensemble is consistent; the apparent inconsistency is merely a result of stratification. Stratified rank histograms are thus not necessarily flat. This result is demonstrated by perfect-ensemble simulations and supplemented by mathematical arguments. Possible methods to avoid or remove the artifacts that stratification induces in the rank histogram are suggested.
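A minimal sketch of the rank-histogram computation in a perfect-ensemble experiment, where the verification and the m members are exchangeable draws and the m+1 possible ranks should therefore be equally likely:

```python
import numpy as np

def rank_histogram(ensemble, verification):
    """Count, over all cases, the rank of the verification among the
    ordered ensemble members; a consistent ensemble gives a flat histogram."""
    n, m = ensemble.shape
    # rank = number of members strictly below the verification (0..m)
    ranks = (ensemble < verification[:, None]).sum(axis=1)
    return np.bincount(ranks, minlength=m + 1)

# Perfect-ensemble simulation: members and verification from one distribution.
rng = np.random.default_rng(0)
n_cases, m = 10_000, 9
ens = rng.normal(size=(n_cases, m))
verif = rng.normal(size=n_cases)
print(rank_histogram(ens, verif))   # roughly 1000 in each of the 10 bins
```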
Abstract:
The formation of complexes appearing in solutions containing oppositely charged polyelectrolytes has been investigated by Monte Carlo simulations using two different models. The polyions are described as flexible chains of 20 connected charged hard spheres immersed in a homogeneous dielectric background representing water. The small ions are either included explicitly or their effect is described by a screened Coulomb potential. The simulated solutions contained 10 positively charged polyions with 0, 2, or 5 negatively charged polyions and the respective counterions. Two different linear charge densities were considered, and structure factors, radial distribution functions, and polyion extensions were determined. A redistribution of positively charged polyions, involving strong complexes formed between the oppositely charged polyions, appeared as the number of negatively charged polyions was increased. The nature of the complexes was found to depend on the linear charge density of the chains. The simplified model involving the screened Coulomb potential gave qualitatively similar results to the model with explicit small ions. Finally, owing to the complex formation, sampling of configurational space is nontrivial, and the efficiency of different trial moves was examined.
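A sketch of the implicit-ion ingredient: the screened Coulomb (Debye-Hückel) pair energy between charged hard spheres, together with the standard Metropolis acceptance rule. Parameter values here are illustrative, not those of the paper:

```python
import numpy as np

# Illustrative parameters: energies in kT, lengths in nm.
l_B = 0.714      # Bjerrum length of water at room temperature
kappa = 1.0      # inverse Debye length (set by the ionic strength)
sigma = 0.4      # hard-sphere diameter of a bead

def pair_energy(r, z_i, z_j):
    """u(r)/kT for two charged hard spheres with implicit small ions."""
    if r < sigma:
        return np.inf                           # hard-core overlap
    return l_B * z_i * z_j * np.exp(-kappa * r) / r

def metropolis_accept(dU, rng):
    """Accept a trial move with energy change dU (in kT)."""
    return dU <= 0 or rng.random() < np.exp(-dU)

rng = np.random.default_rng(0)
print(pair_energy(0.8, +1, -1))                 # attraction of unlike beads
print(metropolis_accept(0.5, rng))
```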
Abstract:
The dependence of the blood oxygenation level dependent (BOLD) signal on the underlying hemodynamics is not well understood. Building a forward biophysical model of this relationship is important for the quantitative estimation of the hemodynamic changes and neural activity underlying functional magnetic resonance imaging (fMRI) signals. We have developed a general model of the BOLD signal which can represent both intra- and extravascular signals for an arbitrary tissue model across a wide range of imaging parameters. The model of the BOLD signal was instantiated as a look-up table (LUT) and verified against concurrent fMRI and optical imaging measurements of activation-induced hemodynamics.
Abstract:
This paper employs an extensive Monte Carlo study to test the size and power of the BDS and close return methods of testing for departures from independent and identical distribution. It is found that the finite-sample properties of the BDS test are far superior to those of the close return method, which cannot be recommended as a model diagnostic. Neither test can be reliably used for very small samples, while the close return test has low power even at large sample sizes.
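The generic shape of such a size/power study is easy to sketch. Here a simple lag-1 autocorrelation z-test stands in for the BDS and close return tests, and an MA(1) process for the alternative; both stand-ins are assumptions for illustration, not the paper's design:

```python
import numpy as np
from math import erf, sqrt

def ac1_test_pvalue(x):
    """Stand-in diagnostic for departures from iid: under H0 the lag-1
    autocorrelation r1 satisfies sqrt(n) * r1 ~ N(0, 1) approximately."""
    x = x - x.mean()
    r1 = (x[:-1] * x[1:]).sum() / (x * x).sum()
    z = sqrt(len(x)) * abs(r1)
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))

def rejection_rate(simulate, n_reps=2000, alpha=0.05, seed=0):
    """Fraction of replications rejecting at level alpha: the empirical
    size under an iid simulator, the power under an alternative."""
    rng = np.random.default_rng(seed)
    rejected = sum(ac1_test_pvalue(simulate(rng)) < alpha
                   for _ in range(n_reps))
    return rejected / n_reps

n = 250
print("size :", rejection_rate(lambda r: r.normal(size=n)))       # ~0.05
print("power:", rejection_rate(                                   # near 1
    lambda r: np.convolve(r.normal(size=n + 1), [1.0, 0.5])[1:n + 1]))
```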
Abstract:
Monte Carlo algorithms often aim to draw from a distribution π by simulating a Markov chain with transition kernel P such that π is invariant under P. However, there are many situations for which it is impractical or impossible to draw from the transition kernel P. This is the case, for instance, with massive datasets, where it is prohibitively expensive to calculate the likelihood, and for intractable likelihood models arising from, for example, Gibbs random fields, such as those found in spatial statistics and network analysis. A natural approach in these cases is to replace P by an approximation P̂. Using theory from the stability of Markov chains, we explore a variety of situations in which it is possible to quantify how 'close' the chain given by the transition kernel P̂ is to the chain given by P. We apply these results to several examples from spatial statistics and network analysis.
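A minimal sketch of the setup (not the paper's analysis): a random-walk Metropolis chain with the exact kernel P versus an approximate kernel P̂ whose log-likelihood is computed from a rescaled random subsample, which is cheaper per step but perturbs the invariant distribution:

```python
import numpy as np

def mh_chain(logpi, n_steps, step=1.0, rng=None):
    """Random-walk Metropolis with whatever (possibly noisy) log-target
    is supplied; the noisy case realises an approximate kernel P-hat."""
    rng = np.random.default_rng(rng)
    x, chain = 0.0, np.empty(n_steps)
    lp = logpi(x, rng)
    for t in range(n_steps):
        y = x + step * rng.normal()
        lp_y = logpi(y, rng)
        if np.log(rng.random()) < lp_y - lp:
            x, lp = y, lp_y
        chain[t] = x
    return chain

# Exact kernel P: full-data Gaussian log-likelihood for a mean.
data = np.random.default_rng(0).normal(1.0, 1.0, size=10_000)

def logpi_exact(theta, rng):
    return -0.5 * np.sum((data - theta) ** 2)

# Approximate kernel P-hat: rescaled log-likelihood from a random subsample.
def logpi_subsample(theta, rng, m=200):
    sub = data[rng.integers(0, data.size, m)]
    return -0.5 * (data.size / m) * np.sum((sub - theta) ** 2)

print(mh_chain(logpi_exact, 5_000, step=0.02, rng=1)[1000:].mean())
print(mh_chain(logpi_subsample, 5_000, step=0.02, rng=1)[1000:].mean())
```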