960 results for variational Monte-Carlo method
Abstract:
We report the results of Monte Carlo simulations aimed at clarifying the microscopic origin of exchange bias in the magnetization hysteresis loops of a model of individual core/shell nanoparticles. Increasing the exchange coupling across the core/shell interface enhances the exchange bias and increases the asymmetry between the two branches of the loops, which is due to different reversal mechanisms. A detailed study of the magnetic order of the interfacial spins provides compelling evidence that the existence of a net magnetization due to uncompensated spins at the shell interface is responsible for both phenomena and makes it possible to quantify the loop shifts directly in terms of microscopic parameters, in striking agreement with the macroscopically observed values.
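To make the mechanism concrete, the following is a minimal, hedged Python sketch of a field-swept Metropolis Monte Carlo simulation in which a chain of mobile "core" spins is exchange-coupled to a layer of pinned, uncompensated "shell" spins; the pinned interfacial moment shifts the hysteresis loop along the field axis. The 1D Ising-like model, couplings, temperature and sweep protocol are illustrative assumptions, not the paper's core/shell nanoparticle model.

```python
# Toy field-swept Metropolis simulation: mobile core spins coupled to a pinned,
# uncompensated shell layer. All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, J_CORE, J_INT, T = 64, 1.0, 0.5, 0.5
shell = np.ones(N)        # pinned interfacial shell spins (net uncompensated moment)
core = np.ones(N)         # mobile core spins, values +/-1

def metropolis_pass(h, n_steps=2000):
    """Single-spin-flip Metropolis updates of the core at applied field h."""
    for _ in range(n_steps):
        i = rng.integers(N)
        nb = core[(i - 1) % N] + core[(i + 1) % N]
        dE = 2.0 * core[i] * (J_CORE * nb + J_INT * shell[i] + h)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            core[i] = -core[i]

def branch(fields):
    """Magnetization along one branch of the field sweep."""
    m = []
    for h in fields:
        metropolis_pass(h)
        m.append(core.mean())
    return np.array(m)

def zero_crossing(h, m):
    """Field at which the magnetization first changes sign along the sweep."""
    k = int(np.argmax(np.sign(m) != np.sign(m[0])))
    return h[k - 1] + (h[k] - h[k - 1]) * (0.0 - m[k - 1]) / (m[k] - m[k - 1])

fields_down = np.linspace(3.0, -3.0, 61)
m_down = branch(fields_down)                 # decreasing-field branch
m_up = branch(fields_down[::-1])             # increasing-field branch

hc_minus = zero_crossing(fields_down, m_down)
hc_plus = zero_crossing(fields_down[::-1], m_up)
print("loop shift (exchange-bias field) ~", 0.5 * (hc_minus + hc_plus))
```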
Abstract:
The magnetic structure of the edge-sharing cuprate compound Li2CuO2 has been investigated with highly correlated ab initio electronic structure calculations. The first- and second-neighbor in-chain magnetic interactions are calculated to be 142 and -22 K, respectively. The ratio between the two parameters is smaller than suggested previously in the literature. The interchain interactions are antiferromagnetic in nature and of the order of a few K only. Monte Carlo simulations using the ab initio parameters to define the spin model Hamiltonian result in a Néel temperature in good agreement with experiment. Spin population analysis situates the magnetic moment on the copper and oxygen ions between the completely localized picture derived from experiment and the more delocalized picture based on local-density calculations.
Abstract:
The author studies random walk estimators for radiosity with generalized absorption probabilities. That is, a path will either die or survive on a patch according to an arbitrary probability. The estimators studied so far, the infinite-path-length estimator and the finite-path-length one, can be considered as particular cases. Practical applications of the random walks with generalized probabilities are given. A necessary and sufficient condition for the existence of the variance is given, together with heuristics to be used in practical cases. The optimal probabilities are also found for the case when one is interested in the whole scene, and are equal to the reflectivities.
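A minimal sketch of the estimator family described above: a gathering random walk in a toy three-patch scene where the survival probability on each patch is arbitrary and the scores are re-weighted so that the estimate stays unbiased; setting the survival probabilities equal to the reflectivities reproduces the choice the abstract reports as optimal for whole-scene estimation. The scene (form factors, reflectivities, emissions) is an invented toy, not taken from the paper.

```python
# Gathering random-walk radiosity estimator with generalized survival probabilities.
import numpy as np

rng = np.random.default_rng(1)
# toy closed environment: each row of form factors sums to 1
F = np.array([[0.0, 0.6, 0.4],
              [0.5, 0.0, 0.5],
              [0.4, 0.6, 0.0]])
rho = np.array([0.7, 0.5, 0.3])     # patch reflectivities
E = np.array([1.0, 0.0, 0.0])       # patch emissions
q = rho.copy()                      # survival probabilities; q = rho is the choice
                                    # the abstract reports as optimal for the whole scene

def gathering_walk(i):
    """One random walk started at patch i; returns an unbiased sample of B_i."""
    score, w = E[i], 1.0
    while rng.random() < q[i]:      # survive on the current patch with probability q[i]
        w *= rho[i] / q[i]          # re-weighting keeps the estimator unbiased
        i = rng.choice(3, p=F[i])   # jump according to the form factors
        score += w * E[i]
    return score

n = 20_000
B_mc = np.array([np.mean([gathering_walk(i) for _ in range(n)]) for i in range(3)])
B_exact = np.linalg.solve(np.eye(3) - rho[:, None] * F, E)
print("Monte Carlo:", np.round(B_mc, 3), " exact:", np.round(B_exact, 3))
```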
Abstract:
In this paper we extend the reuse of paths to the shot from a moving light source. In the classical algorithm, new paths have to be cast from each new position of the light source. We show that we can reuse all paths for all positions, obtaining in this way a theoretical maximum speed-up equal to the average length of the shooting path.
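A heavily simplified, hedged illustration of the reuse idea: the expensive part of a shooting path (everything after the first hit) does not depend on the light-source position, so one set of sampled paths can be shared by every position, with only the cheap source-dependent factor recomputed. The toy integrand, pdf and "scene" below are assumptions made purely for illustration.

```python
# Reusing one set of "paths" for several light positions via re-weighting.
import numpy as np

rng = np.random.default_rng(2)
light_positions = np.linspace(0.0, 1.0, 5)     # 5 positions of the moving source

def trace_rest_of_path(x):
    """Stand-in for the expensive multi-bounce part of a shooting path."""
    return np.exp(-3.0 * x) + 0.1 * np.sin(20 * x)

def first_segment(s, x):
    """Cheap source-dependent factor (e.g. power emitted towards first hit x)."""
    return 1.0 / (0.2 + (s - x) ** 2)

n = 10_000
x = rng.random(n)                     # first-hit points, drawn once; pdf = 1 on [0, 1]
g = trace_rest_of_path(x)             # expensive work done once, reused below

# Reuse: every light position shares the same paths; only the cheap factor changes.
estimates = [np.mean(first_segment(s, x) * g) for s in light_positions]
print(np.round(estimates, 3))
```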
Abstract:
Sequential techniques can enhance the efficiency of the approximate Bayesian computation algorithm, as in Sisson et al.'s (2007) partial rejection control version. While this method is based upon the theoretical works of Del Moral et al. (2006), the application to approximate Bayesian computation results in a bias in the approximation to the posterior. An alternative version based on genuine importance sampling arguments bypasses this difficulty, in connection with the population Monte Carlo method of Cappé et al. (2004), and it includes an automatic scaling of the forward kernel. When applied to a population genetics example, it compares favourably with two other versions of the approximate algorithm.
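A minimal sketch of ABC with population Monte Carlo importance sampling in the spirit of the abstract: particles are filtered through a decreasing tolerance schedule, perturbed with an automatically scaled forward kernel, and re-weighted with genuine importance weights (prior density over the mixture proposal). The toy model, prior and tolerance schedule are assumptions for illustration, not the population genetics example of the paper.

```python
# ABC population Monte Carlo with importance-sampling weights (toy normal model).
import numpy as np

rng = np.random.default_rng(3)
y_obs = 2.0                                   # observed summary statistic
n_data, n_part = 50, 500
prior_lo, prior_hi = -10.0, 10.0
tolerances = [2.0, 1.0, 0.5, 0.25]            # decreasing tolerance schedule

def simulate_summary(theta):
    """Simulate a data set from the model and return its summary statistic."""
    return rng.normal(theta, 1.0, n_data).mean()

def prior_pdf(theta):
    return (prior_lo <= theta <= prior_hi) / (prior_hi - prior_lo)

# stage 0: plain rejection ABC from the prior
theta = np.empty(n_part)
w = np.full(n_part, 1.0 / n_part)
for i in range(n_part):
    while True:
        cand = rng.uniform(prior_lo, prior_hi)
        if abs(simulate_summary(cand) - y_obs) < tolerances[0]:
            theta[i] = cand
            break

# later stages: perturb, filter through the tighter tolerance, re-weight
for eps in tolerances[1:]:
    tau = float(np.sqrt(2.0 * np.cov(theta, aweights=w)))   # automatic kernel scaling
    new_theta, new_w = np.empty(n_part), np.empty(n_part)
    for i in range(n_part):
        while True:
            pick = theta[rng.choice(n_part, p=w)]
            cand = rng.normal(pick, tau)                     # forward (perturbation) kernel
            if prior_pdf(cand) > 0 and abs(simulate_summary(cand) - y_obs) < eps:
                new_theta[i] = cand
                break
        # importance weight: prior density over the PMC mixture proposal
        kern = np.exp(-0.5 * ((cand - theta) / tau) ** 2) / (tau * np.sqrt(2 * np.pi))
        new_w[i] = prior_pdf(cand) / np.sum(w * kern)
    theta, w = new_theta, new_w / new_w.sum()

print("ABC-PMC posterior mean ~", np.sum(w * theta))
```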
Abstract:
We present a stochastic approach for solving the quantum-kinetic equation introduced in Part I. A Monte Carlo method based on backward time evolution of the numerical trajectories is developed. The computational complexity and the stochastic error are investigated numerically. Variance reduction techniques are applied, which demonstrate a clear advantage with respect to the approaches based on symmetry transformation. Parallel implementation is realized on a GRID infrastructure.
Abstract:
New ways of combining observations with numerical models are discussed in which the size of the state space can be very large, and the model can be highly nonlinear. Also the observations of the system can be related to the model variables in highly nonlinear ways, making this data-assimilation (or inverse) problem highly nonlinear. First we discuss the connection between data assimilation and inverse problems, including regularization. We explore the choice of proposal density in a Particle Filter and show how the ‘curse of dimensionality’ might be beaten. In the standard Particle Filter, ensembles of model runs are propagated forward in time until observations are encountered, rendering it a pure Monte-Carlo method. In large-dimensional systems this is very inefficient and very large numbers of model runs are needed to solve the data-assimilation problem realistically. In our approach we steer all model runs towards the observations, resulting in a much more efficient method. By further ‘ensuring almost equal weight’ we avoid performing model runs that are useless in the end. Results are shown for the 40- and 1000-dimensional Lorenz 1995 model.
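A minimal, hedged sketch of the central idea on a 1D toy problem: a particle filter whose proposal density nudges every model run towards the observation, with the importance weights corrected for the modified proposal. The toy dynamics and noise levels are assumptions; the paper itself works with the 40- and 1000-dimensional Lorenz 1995 model and a more sophisticated 'almost equal weight' scheme.

```python
# Particle filter with an observation-steered proposal and corrected weights.
import numpy as np

rng = np.random.default_rng(4)
n_steps, n_part = 50, 500
q_model, r_obs = 0.5, 0.5            # model-error and observation-error std devs
nudge = 0.5                          # strength of the pull towards the observation

def model(x):
    return 0.9 * x + 1.0             # toy deterministic model dynamics

# synthetic truth and observations
truth = np.zeros(n_steps)
obs = np.zeros(n_steps)
for k in range(1, n_steps):
    truth[k] = model(truth[k - 1]) + rng.normal(0, q_model)
    obs[k] = truth[k] + rng.normal(0, r_obs)

x = rng.normal(0, 1, n_part)         # initial ensemble
est = [x.mean()]
for k in range(1, n_steps):
    prior_mean = model(x)
    # proposal: relax (nudge) each particle towards the observation
    prop_mean = prior_mean + nudge * (obs[k] - prior_mean)
    x_new = prop_mean + rng.normal(0, q_model, n_part)
    # weights: likelihood * transition / proposal (log form for stability)
    logw = (-0.5 * ((obs[k] - x_new) / r_obs) ** 2
            - 0.5 * ((x_new - prior_mean) / q_model) ** 2
            + 0.5 * ((x_new - prop_mean) / q_model) ** 2)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # resample and continue
    x = x_new[rng.choice(n_part, n_part, p=w)]
    est.append(x.mean())

print("RMSE vs truth:", np.sqrt(np.mean((np.array(est) - truth) ** 2)))
```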
Abstract:
A new approach to the study of the local organization in amorphous polymer materials is presented. The method couples neutron diffraction experiments that explore the structure on the spatial scale 1–20 Å with the reverse Monte Carlo fitting procedure to predict structures that accurately represent the experimental scattering results over the whole momentum transfer range explored. Molecular mechanics and molecular dynamics techniques are also used to produce atomistic models independently from any experimental input, thereby providing a test of the viability of the reverse Monte Carlo method in generating realistic models for amorphous polymeric systems. An analysis of the obtained models in terms of single chain properties and of orientational correlations between chain segments is presented. We show the viability of the method with data from molten polyethylene. The analysis derives a model with average C-C and C-H bond lengths of 1.55 Å and 1.1 Å respectively, average backbone valence angle of 112°, a torsional angle distribution characterized by a fraction of trans conformers of 0.67 and, finally, a weak interchain orientational correlation at around 4 Å.
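A minimal reverse Monte Carlo sketch of the fitting step described above: atoms are displaced at random and moves are accepted when they improve (or, with Metropolis probability, slightly worsen) the chi-squared agreement between the model's pair-distance histogram and a target 'experimental' histogram. The 1D toy system and target below are assumptions; the paper fits neutron diffraction data for molten polyethylene in three dimensions.

```python
# Reverse Monte Carlo fit of a pair-distance histogram to a target (toy 1D system).
import numpy as np

rng = np.random.default_rng(5)
n_atoms, box, sigma = 60, 30.0, 0.05          # sigma plays the experimental error
bins = np.linspace(0.0, box / 2, 31)

def histogram(pos):
    """Pair-distance histogram with minimum-image periodic boundaries."""
    d = np.abs(pos[:, None] - pos[None, :])
    d = np.minimum(d, box - d)[np.triu_indices(n_atoms, 1)]
    return np.histogram(d, bins=bins)[0] / d.size

target = histogram(np.arange(n_atoms) * (box / n_atoms))   # stand-in for experiment
pos = rng.random(n_atoms) * box                            # random starting model
chi2 = np.sum((histogram(pos) - target) ** 2) / sigma ** 2

for step in range(10_000):
    i = rng.integers(n_atoms)
    trial = pos.copy()
    trial[i] = (trial[i] + rng.normal(0, 0.5)) % box        # random single-atom move
    chi2_new = np.sum((histogram(trial) - target) ** 2) / sigma ** 2
    if chi2_new <= chi2 or rng.random() < np.exp(-(chi2_new - chi2) / 2):
        pos, chi2 = trial, chi2_new                         # accept the move

print("final chi^2:", chi2)
```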
Abstract:
The decomposition of soil organic matter (SOM) is temperature dependent, but its response to a future warmer climate remains equivocal. Enhanced rates of decomposition of SOM under increased global temperatures might cause higher CO2 emissions to the atmosphere, and could therefore constitute a strong positive feedback. The magnitude of this feedback, however, remains poorly understood, primarily because of the difficulty in quantifying the temperature sensitivity of stored, recalcitrant carbon that comprises the bulk (>90%) of SOM in most soils. In this study we investigated the effects of climatic conditions on soil carbon dynamics using the attenuation of the 14C ‘bomb’ pulse as recorded in selected modern European speleothems. These new data were combined with published results to further examine soil carbon dynamics, and to explore the sensitivity of labile and recalcitrant organic matter decomposition to different climatic conditions. Temporal changes in 14C activity inferred from each speleothem were modelled using a three-pool soil carbon inverse model (applying a Monte Carlo method) to constrain soil carbon turnover rates at each site. Speleothems from sites that are characterised by semi-arid conditions, sparse vegetation, thin soil cover and high mean annual air temperatures (MAATs) exhibit weak attenuation of the atmospheric 14C ‘bomb’ peak (a low damping effect, D, in the range 55–77%) and low modelled mean respired carbon ages (MRCA), indicating that decomposition is dominated by young, recently fixed soil carbon. By contrast, humid and high-MAAT sites that are characterised by a thick soil cover and dense, well-developed vegetation display the highest damping effect (D = c. 90%) and the highest MRCA values (in the range from 350 ± 126 years to 571 ± 128 years). This suggests that carbon incorporated into these stalagmites originates predominantly from decomposition of old, recalcitrant organic matter. SOM turnover rates cannot be ascribed to a single climate variable (e.g. MAAT) but instead reflect a complex interplay of climate (e.g. MAAT and moisture budget) and vegetation development.
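A hedged sketch of what a three-pool soil carbon inverse model driven by Monte Carlo parameter sampling can look like: each pool's 14C activity is the atmospheric record convolved with an exponential turnover-time distribution, the speleothem signal is a mixture of the pools, and randomly drawn parameter sets are scored against a target series. The synthetic bomb curve, priors and 'observed' record below are invented stand-ins, not the paper's data or its exact model.

```python
# Three-pool 14C mixing model inverted by Monte Carlo parameter sampling (toy data).
import numpy as np

rng = np.random.default_rng(6)
years = np.arange(1900, 2011)

# crude synthetic stand-in for the atmospheric 14C bomb pulse (arbitrary units)
atm = np.where(years < 1955, 0.0, 700.0 * np.exp(-np.abs(years - 1964) / 8.0))

def pool_activity(tau):
    """14C of a pool with an exponential turnover-time distribution (mean tau years)."""
    out = np.zeros_like(atm)
    for k in range(len(years)):
        ages = np.arange(0, k + 1)
        w = np.exp(-ages / tau)
        w /= w.sum()
        out[k] = np.sum(w * atm[k - ages])
    return out

def speleothem(tau, frac):
    """Speleothem signal as a mixture of the three pools."""
    return sum(f * pool_activity(t) for f, t in zip(frac, tau))

# synthetic "observed" record to invert (generated here from known parameters)
obs = speleothem((5.0, 60.0, 400.0), (0.5, 0.3, 0.2)) + rng.normal(0, 10.0, years.size)

draws, scores = [], []
for _ in range(1000):
    tau = 10 ** rng.uniform([0.0, 1.0, 2.0], [1.0, 2.0, 3.0])   # fast/intermediate/slow pools
    frac = rng.dirichlet([1.0, 1.0, 1.0])                       # pool fractions sum to 1
    draws.append((tau, frac))
    scores.append(np.sqrt(np.mean((speleothem(tau, frac) - obs) ** 2)))

best_tau, best_frac = draws[int(np.argmin(scores))]
print("best-fit turnover times (yr):", np.round(best_tau, 1),
      "pool fractions:", np.round(best_frac, 2))
```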
Abstract:
Although the sunspot-number series have existed since the mid-19th century, they are still the subject of intense debate, with the largest uncertainty being related to the "calibration" of the visual acuity of individual observers in the past. Daisy-chain regression methods are applied to inter-calibrate the observers, which may lead to significant bias and error accumulation. Here we present a novel method to calibrate the visual acuity of the key observers to the reference data set of Royal Greenwich Observatory sunspot groups for the period 1900-1976, using the statistics of the active-day fraction. For each observer we independently evaluate their observational threshold [S_S], defined such that the observer is assumed to miss all of the groups with an area smaller than S_S and to report all the groups larger than S_S. Next, using a Monte-Carlo method, we construct, from the reference data set, a correction matrix for each observer. The correction matrices are significantly non-linear and cannot be approximated by a linear regression or proportionality. We emphasize that corrections based on a linear proportionality between annually averaged data lead to serious biases and distortions of the data. The correction matrices are applied to the original sunspot group records for each day, and finally the composite corrected series is produced for the period since 1748. The corrected series displays secular minima around 1800 (Dalton minimum) and 1900 (Gleissberg minimum), as well as the Modern grand maximum of activity in the second half of the 20th century. The uniqueness of the grand maximum is confirmed for the last 250 years. It is shown that the adoption of a linear relationship between the data of Wolf and Wolfer results in grossly inflated group numbers in the 18th and 19th centuries in some reconstructions.
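A hedged sketch of how a correction matrix for an observer with an area threshold S_S can be built by Monte Carlo: synthetic reference days with known group areas are sampled, the threshold is applied to obtain what the observer would have reported, and the conditional distribution of the true daily group count given the reported count is accumulated. The synthetic day generator below is an assumption standing in for the Royal Greenwich Observatory reference record.

```python
# Monte Carlo construction of a non-linear correction matrix for a thresholded observer.
import numpy as np

rng = np.random.default_rng(7)
S_S = 50.0                 # observer's area threshold (assumed units)
max_groups = 12

def synthetic_day():
    """One synthetic day: a random number of groups with log-normal areas."""
    n = rng.poisson(4)
    return rng.lognormal(mean=4.0, sigma=1.2, size=n)      # group areas

counts = np.zeros((max_groups + 1, max_groups + 1))        # [reported, true]
for _ in range(100_000):
    areas = synthetic_day()
    true_n = min(len(areas), max_groups)
    seen_n = min(int(np.sum(areas >= S_S)), max_groups)    # observer misses small groups
    counts[seen_n, true_n] += 1

# correction matrix: distribution of the true count given the reported count
row_sums = counts.sum(axis=1, keepdims=True)
correction = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

reported = 3
expected_true = correction[reported] @ np.arange(max_groups + 1)
print(f"observer reports {reported} groups -> expected true number ~ {expected_true:.2f}")
```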
Abstract:
In this paper, we compare the performance of two statistical approaches for the analysis of data obtained from the social research area. In the first approach, we use normal models with joint regression modelling for the mean and for the variance heterogeneity. In the second approach, we use hierarchical models. In the first case, individual and social variables are included as explanatory variables in the regression modelling for the mean and for the variance, while in the second case, the variance at level 1 of the hierarchical model depends on the individuals (their age), and at level 2 of the hierarchical model, the variance is assumed to change according to socioeconomic stratum. Applying these methodologies, we analyze a Colombian height data set to find differences that can be explained by socioeconomic conditions. We also present some theoretical and empirical results concerning the two models. From this comparative study, we conclude that it is better to jointly model the mean and the variance heterogeneity in all cases. We also observe that the Gibbs sampling chain used in the Markov chain Monte Carlo method for jointly modelling the mean and variance heterogeneity converges quickly.
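A minimal sketch of MCMC for a normal model with joint regression on the mean and on the variance heterogeneity, y_i ~ N(x_i'beta, exp(z_i'gamma)): the mean coefficients get a conjugate Gibbs update, while the variance coefficients get a random-walk Metropolis step (a Metropolis-within-Gibbs variant, not necessarily the exact sampler used in the paper). Data, covariates and priors are toy assumptions.

```python
# Metropolis-within-Gibbs for joint mean and variance-heterogeneity regression.
import numpy as np

rng = np.random.default_rng(8)
n = 300
x = np.column_stack([np.ones(n), rng.normal(size=n)])     # mean-model design
z = x.copy()                                               # variance-model design
beta_true, gamma_true = np.array([1.0, 2.0]), np.array([-1.0, 1.0])
y = rng.normal(x @ beta_true, np.exp(0.5 * (z @ gamma_true)))

beta, gamma = np.zeros(2), np.zeros(2)
prior_var = 100.0                                          # vague N(0, 100 I) priors
draws = []
for it in range(4000):
    # Gibbs step for beta: conjugate normal update given the current variances
    v = np.exp(z @ gamma)                                  # observation variances
    prec = x.T @ (x / v[:, None]) + np.eye(2) / prior_var
    cov = np.linalg.inv(prec)
    mean = cov @ (x.T @ (y / v))
    beta = rng.multivariate_normal(mean, cov)

    # random-walk Metropolis step for gamma (log-variance coefficients)
    def logpost(g):
        vv = np.exp(z @ g)
        return (-0.5 * np.sum(np.log(vv) + (y - x @ beta) ** 2 / vv)
                - 0.5 * np.sum(g ** 2) / prior_var)
    prop = gamma + rng.normal(0, 0.1, 2)
    if np.log(rng.random()) < logpost(prop) - logpost(gamma):
        gamma = prop
    if it >= 1000:                                         # discard burn-in
        draws.append(np.concatenate([beta, gamma]))

print("posterior means (beta, gamma):", np.round(np.mean(draws, axis=0), 2))
```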
Abstract:
MCNP has long stood as one of the main Monte Carlo radiation transport codes. Its use, as with any other Monte Carlo based code, has increased as computers have become faster and more affordable over time. However, the use of the Monte Carlo method to tally events in volumes which represent a small fraction of the whole system may turn out to be unfeasible if a straight analogue transport procedure (no variance reduction techniques) is employed and precise results are demanded. Calculations of reaction rates in activation foils placed in critical systems are one such case. The present work takes advantage of the fixed-source representation in MCNP to perform the above-mentioned task with more effective sampling (characterizing the neutron population in the vicinity of the tallying region and using it in a geometrically reduced coupled simulation). An extended analysis of source-dependent parameters is carried out in order to understand their influence on simulation performance and on the validity of the results. Although discrepant results have been observed for small enveloping regions, the procedure presents itself as very efficient, giving adequate and precise results in shorter times than the standard analogue procedure.
Abstract:
A high incidence of waterborne diseases is observed worldwide, and in order to address contamination problems prior to an outbreak, quantitative microbial risk assessment is a useful tool for estimating the risk of infection. The objective of this paper was to assess the probability of Giardia infection from consuming water from shallow wells in a peri-urban area. Giardia has been described as an important waterborne pathogen and reported in several water sources, including ground waters. Sixteen water samples were collected and examined according to US EPA Method 1623 (2005). A Monte Carlo method was used to address the potential risk as described by the exponential dose-response model. Giardia cysts occurred in 62.5% of the samples (0.1-36.1 cysts/l). A median risk of 10⁻¹ was estimated for the population, and adult ingestion was the highest risk driver. This study illustrates the vulnerability of shallow well water supply systems in peri-urban areas.
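A minimal sketch of the Monte Carlo risk calculation with the exponential dose-response model, P = 1 - exp(-r * dose): cyst concentration, daily intake and method recovery are sampled, the dose is formed, and the daily infection probability is summarized. All distributions and parameter values below (including the commonly cited Giardia parameter r ≈ 0.0199) are illustrative assumptions, not the study's fitted inputs.

```python
# Monte Carlo quantitative microbial risk assessment with the exponential model.
import numpy as np

rng = np.random.default_rng(9)
n = 100_000
r = 0.0199                                                  # commonly cited Giardia parameter

conc = rng.lognormal(mean=np.log(2.0), sigma=1.5, size=n)  # cysts per litre (assumed)
intake = rng.uniform(1.0, 2.0, size=n)                      # litres of water per day (assumed)
recovery = rng.uniform(0.3, 0.7, size=n)                    # method recovery efficiency (assumed)

dose = conc * intake / recovery                    # cysts ingested per day, recovery-corrected
p_daily = 1.0 - np.exp(-r * dose)                  # exponential dose-response model
p_annual = 1.0 - (1.0 - p_daily) ** 365            # naive extrapolation to a year

print("median daily risk:", np.median(p_daily))
print("median annual risk:", np.median(p_annual))
```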
Abstract:
This paper generalizes the HEGY-type test to detect seasonal unit roots in data at any frequency, based on the seasonal unit root tests for univariate time series by Hylleberg, Engle, Granger and Yoo (1990). We first introduce seasonal unit roots, and then derive the mechanism of the HEGY-type test for data at any frequency. Thereafter we provide the asymptotic distributions of our test statistics when different test regressions are employed. We find that the F-statistics for testing conjugate unit roots have the same asymptotic distributions. We then compute the finite-sample and asymptotic critical values for daily and hourly data by a Monte Carlo method. The power and size properties of our test for hourly data are investigated, and we find that tests including lag augmentations in the auxiliary regression without lag elimination have the smallest size distortion, and that tests with seasonal dummies included in the auxiliary regression have more power than tests without seasonal dummies. Finally, we apply our test to hourly wind power production data from Sweden and show that there are no seasonal unit roots in the series.
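A hedged sketch of how finite-sample critical values for a unit-root test can be tabulated by Monte Carlo: simulate many series under the unit-root null, compute the test statistic on each replication, and take empirical quantiles. For brevity the statistic below is the ordinary Dickey-Fuller t-ratio without deterministic terms, standing in for the HEGY-type seasonal statistics of the paper.

```python
# Monte Carlo tabulation of finite-sample critical values under the unit-root null.
import numpy as np

rng = np.random.default_rng(10)
T, n_rep = 200, 20_000
stats = np.empty(n_rep)
for b in range(n_rep):
    y = np.cumsum(rng.normal(size=T))            # random walk: the unit-root null
    dy, ylag = np.diff(y), y[:-1]
    # OLS of dy on ylag (no constant): t-ratio of the slope coefficient
    rho = ylag @ dy / (ylag @ ylag)
    resid = dy - rho * ylag
    se = np.sqrt(resid @ resid / (len(dy) - 1) / (ylag @ ylag))
    stats[b] = rho / se

print("MC critical values (1%, 5%, 10%):",
      np.round(np.quantile(stats, [0.01, 0.05, 0.10]), 2))
```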