62 results for Markov-chain Monte Carlo
Abstract:
Nano- and meso-scale simulation of chemical ordering kinetics in nano-layered L1(0)-AB binary intermetallics was performed. At the nano (atomistic) scale, a Monte Carlo (MC) technique with a vacancy mechanism of atomic migration, implemented with diverse models for the system energetics, was used. The meso-scale microstructure evolution was, in turn, simulated by means of an MC procedure applied to a system built of meso-scale voxels ordered in particular L1(0) variants. The voxels were free to change their L1(0) variant and interacted with antiphase-boundary energies evaluated within the nano-scale simulations. The study addressed FePt thin layers, considered as a material for ultra-high-density magnetic storage media, and revealed metastability of the L1(0) c-variant superstructure with monoatomic planes parallel to the (001)-oriented layer surface and off-plane easy magnetization. The layers, originally perfectly ordered in the c-variant, showed discontinuous precipitation of a- and b-L1(0)-variant domains running in parallel with homogeneous disordering (i.e. generation of antisite defects). The domains nucleated heterogeneously on the free monoatomic Fe surface of the layer, grew inwards into its volume and relaxed towards an equilibrium microstructure of the system. Two
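As a rough illustration of the meso-scale part only, the voxel system can be sketched as a three-state Potts-type Metropolis simulation in which unlike-variant neighbours pay an antiphase-boundary energy; the lattice size, temperature and single APB energy below are assumed toy values, not parameters from the study:

```python
import numpy as np

rng = np.random.default_rng(2)

# Each voxel holds one of three L1(0) variants (0 = c, 1 = a, 2 = b);
# unlike-variant neighbours pay an antiphase-boundary energy E_APB.
E_APB, KT, L = 1.0, 0.5, 16

def site_energy(grid, i, j):
    """APB energy between voxel (i, j) and its four periodic neighbours."""
    e = 0.0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        e += E_APB * (grid[i, j] != grid[(i + di) % L, (j + dj) % L])
    return e

grid = np.zeros((L, L), dtype=int)             # start perfectly c-ordered
for _ in range(50_000):                        # Metropolis variant flips
    i, j = rng.integers(L), rng.integers(L)
    old = grid[i, j]
    e_old = site_energy(grid, i, j)
    grid[i, j] = rng.integers(3)               # propose a new variant
    de = site_energy(grid, i, j) - e_old
    if de > 0.0 and rng.random() >= np.exp(-de / KT):
        grid[i, j] = old                       # reject the move
```

Below the ordering temperature the initial variant persists; heterogeneous nucleation at a surface would require additional boundary terms not modelled here.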
Abstract:
We address the question of the observed pinning of 1/2
Abstract:
We propose a new approach for the inversion of anisotropic P-wave data based on Monte Carlo methods combined with a multigrid approach. Simulated annealing facilitates objective minimization of the functional characterizing the misfit between observed and predicted traveltimes, as controlled by the Thomsen anisotropy parameters (epsilon, delta). Cycling between finer and coarser grids enhances the computational efficiency of the inversion process, thus accelerating the convergence of the solution while acting as a regularization technique of the inverse problem. Multigrid perturbation samples the probability density function without the requirement for the user to adjust tuning parameters. This increases the probability that the preferred global, rather than a poor local, minimum is attained. Undertaking multigrid refinement and Monte Carlo search in parallel produces more robust convergence than does the initially more intuitive approach of completing them sequentially. We demonstrate the usefulness of the new multigrid Monte Carlo (MGMC) scheme by applying it to (a) synthetic, noise-contaminated data reflecting an isotropic subsurface of constant slowness, horizontally layered geologic media and discrete subsurface anomalies; and (b) a crosshole seismic data set acquired by previous authors at the Reskajeage test site in Cornwall, UK. Inverted distributions of slowness (s) and the Thomsen anisotropy parameters (epsilon, delta) compare favourably with those obtained previously using a popular matrix-based method. Reconstruction of the Thomsen epsilon parameter is particularly robust compared to that of slowness and the Thomsen delta parameter, even in the face of complex subsurface anomalies. The Thomsen epsilon and delta parameters have enhanced sensitivities to bulk-fabric and fracture-based anisotropies in the TI medium at Reskajeage.
Because reconstruction of slowness (s) is intimately linked to that of epsilon and delta in the MGMC scheme, inverted images of phase velocity reflect the integrated effects of these two modes of anisotropy. The new MGMC technique thus promises to facilitate rapid inversion of crosshole P-wave data for seismic slownesses and the Thomsen anisotropy parameters, with minimal user input in the inversion process.
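A minimal sketch of the multigrid Monte Carlo idea: simulated annealing with geometric cooling, cycled coarse-to-fine so each refined grid is seeded by the annealed coarser one. The forward model here is a trivial identity map rather than anisotropic traveltime prediction, and all grid sizes, step sizes and schedules are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

FINE = 16
truth = np.full((FINE, FINE), 0.5)                  # constant-slowness "subsurface"
observed = truth + rng.normal(scale=0.01, size=truth.shape)  # noisy synthetic data

def upsample(model, n):
    """Nearest-neighbour refinement of a coarse model onto an n x n grid."""
    f = n // model.shape[0]
    return np.kron(model, np.ones((f, f)))

def misfit(model):
    """Sum-of-squares data misfit on the finest grid (identity forward model)."""
    return np.sum((upsample(model, FINE) - observed) ** 2)

def anneal(model, steps=3000, t0=1.0, cool=0.998):
    """Metropolis simulated annealing with a geometric cooling schedule."""
    cost = misfit(model)
    for k in range(steps):
        temp = max(t0 * cool ** k, 1e-12)
        trial = model.copy()
        trial.flat[rng.integers(model.size)] += rng.normal(scale=0.05)
        c = misfit(trial)
        if c < cost or rng.random() < np.exp((cost - c) / temp):
            model, cost = trial, c
    return model, cost

# Multigrid cycle: anneal on the coarse grid, refine, anneal again.
model = np.zeros((4, 4))
for n in (4, 8, 16):
    model = upsample(model, n)
    model, cost = anneal(model)
```

The coarse passes act as the regularization the abstract describes: large-scale structure is fixed cheaply before the fine grid explores local detail.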
Abstract:
In studies of radiation-induced DNA fragmentation and repair, analytical models may provide rapid and easy-to-use methods to test simple hypotheses regarding the breakage and rejoining mechanisms involved. The random breakage model, according to which lesions are distributed uniformly and independently of each other along the DNA, has been the model most used to describe spatial distribution of radiation-induced DNA damage. Recently several mechanistic approaches have been proposed that model clustered damage to DNA. In general, such approaches focus on the study of initial radiation-induced DNA damage and repair, without considering the effects of additional (unwanted and unavoidable) fragmentation that may take place during the experimental procedures. While most approaches, including measurement of total DNA mass below a specified value, allow for the occurrence of background experimental damage by means of simple subtractive procedures, a more detailed analysis of DNA fragmentation necessitates a more accurate treatment. We have developed a new, relatively simple model of DNA breakage and the resulting rejoining kinetics of broken fragments. Initial radiation-induced DNA damage is simulated using a clustered breakage approach, with three free parameters: the number of independently located clusters, each containing several DNA double-strand breaks (DSBs), the average number of DSBs within a cluster (multiplicity of the cluster), and the maximum allowed radius within which DSBs belonging to the same cluster are distributed. Random breakage is simulated as a special case of the DSB clustering procedure. When the model is applied to the analysis of DNA fragmentation as measured with pulsed-field gel electrophoresis (PFGE), the hypothesis that DSBs in proximity rejoin at a different rate from that of sparse isolated breaks can be tested, since the kinetics of rejoining of fragments of varying size may be followed by means of computer simulations. 
The problem of how to account for background damage from experimental handling is also carefully considered. We have shown that the conventional procedure of subtracting the background damage from the experimental data may lead to erroneous conclusions during the analysis of both initial fragmentation and DSB rejoining. Despite its relative simplicity, the method presented allows both the quantitative and qualitative description of radiation-induced DNA fragmentation and subsequent rejoining of double-stranded DNA fragments. (C) 2004 by Radiation Research Society.
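The clustered-breakage construction with its three free parameters (number of clusters, mean multiplicity, maximum cluster radius) can be sketched as follows; the genome length and parameter values are arbitrary placeholders, and random breakage falls out as the special case of single-DSB clusters:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_breaks(genome_length, n_clusters, mean_multiplicity, max_radius):
    """Clustered breakage: cluster centres are uniform on the genome; each
    cluster holds >= 1 DSBs placed uniformly within max_radius of its centre."""
    breaks = []
    for c in rng.uniform(0.0, genome_length, n_clusters):
        m = max(1, rng.poisson(mean_multiplicity))       # cluster multiplicity
        offsets = rng.uniform(-max_radius, max_radius, m)
        breaks.extend(np.clip(c + offsets, 0.0, genome_length))
    return np.sort(np.array(breaks))

def fragment_sizes(breaks, genome_length):
    """Fragment lengths implied by the sorted break positions."""
    edges = np.concatenate(([0.0], breaks, [genome_length]))
    return np.diff(edges)

# Random breakage is the special case of single-DSB 'clusters'.
breaks = simulate_breaks(1e6, n_clusters=50, mean_multiplicity=3, max_radius=1e3)
sizes = fragment_sizes(breaks, 1e6)
```

Comparing the resulting size spectrum against a PFGE-style mass-below-threshold measure, before and after adding simulated background breaks, is the kind of analysis the model supports.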
Abstract:
Massive young stellar objects (YSOs) are powerful infrared H i line emitters. It has been suggested that these lines form in an outflow from a disc surrounding the YSO. Here, new two-dimensional Monte Carlo radiative transfer calculations are described which test this hypothesis. Infrared spectra are synthesized for a YSO disc wind model based on earlier hydrodynamical calculations. The model spectra are in qualitative agreement with the observed spectra from massive YSOs, and therefore provide support for a disc wind explanation for the H i lines. However, there are some significant differences: the models tend to overpredict the Brα/Brγ ratio of equivalent widths and produce line profiles which are slightly too broad and, in contrast to typical observations, are double-peaked. The interpretation of these differences within the context of the disc wind picture, and suggestions for their resolution via modifications to the assumed disc and outflow structure, are discussed. © 2005 RAS.
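The double-peaked profiles arise generically from rotating outflows. A toy Monte Carlo sketch (sampling emitters in a Keplerian velocity field seen edge-on and histogramming line-of-sight Doppler shifts, with no actual radiative transfer) reproduces the twin peaks; radii, the velocity law and the viewing geometry are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(9)

# Sample emitting points in a Keplerian disc viewed edge-on and histogram the
# line-of-sight Doppler shifts; velocities are in units of the speed at r = 1.
n = 200_000
r = rng.uniform(1.0, 10.0, n)                  # emitter radii (arbitrary units)
phi = rng.uniform(0.0, 2.0 * np.pi, n)         # orbital phase
v_los = r ** -0.5 * np.cos(phi)                # projected Keplerian velocity
hist, edges = np.histogram(v_los, bins=41, range=(-1.1, 1.1))
centers = 0.5 * (edges[:-1] + edges[1:])
```

The histogram dips at zero velocity and peaks near the projected speed of the outermost emitters, the classic double-peaked signature that disc inclination or optical-depth effects must suppress to match single-peaked observations.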
Abstract:
In astrophysical systems, radiation-matter interactions are important in transferring energy and momentum between the radiation field and the surrounding material. This coupling often makes it necessary to consider the role of radiation when modelling the dynamics of astrophysical fluids. During the last few years, there have been rapid developments in the use of Monte Carlo methods for numerical radiative transfer simulations. Here, we present an approach to radiation hydrodynamics that is based on coupling Monte Carlo radiative transfer techniques with finite-volume hydrodynamical methods in an operator-split manner. In particular, we adopt an indivisible packet formalism to discretize the radiation field into an ensemble of Monte Carlo packets and employ volume-based estimators to reconstruct the radiation field characteristics. In this paper the numerical tools of this method are presented and their accuracy is verified in a series of test calculations. Finally, as a practical example, we use our approach to study the influence of the radiation-matter coupling on the homologous expansion phase and the bolometric light curve of Type Ia supernova explosions. © 2012 The Authors. Monthly Notices of the Royal Astronomical Society © 2012 RAS.
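The indivisible-packet discretization and the volume-based (path-length) estimator can be illustrated in a 1-D purely absorbing slab, far simpler than the full radiation-hydrodynamics scheme; the opacity, grid and packet count are assumed values:

```python
import numpy as np

rng = np.random.default_rng(7)

def propagate(n_packets, n_cells, w, kappa):
    """Indivisible packets in a 1-D purely absorbing slab: each packet flies
    to the right until absorbed; a volume-based (path-length) estimator
    accumulates the path travelled per cell, estimating the mean intensity."""
    estimator = np.zeros(n_cells)
    for _ in range(n_packets):
        s_abs = -np.log(1.0 - rng.random()) / kappa    # sampled path to absorption
        x = 0.0
        while s_abs > 0.0 and x < n_cells * w:
            cell = int(x // w)
            step = min((cell + 1) * w - x, s_abs)      # to cell edge or absorption
            estimator[cell] += step / n_packets        # path-length contribution
            x += step
            s_abs -= step
    return estimator / w                               # mean intensity per cell

intensity = propagate(n_packets=20_000, n_cells=4, w=0.5, kappa=1.0)
```

Because every traversal contributes its full path length, the estimator converges far faster than counting absorption events alone, which is the appeal of volume-based estimators in this class of methods.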
Abstract:
The range of potential applications for indoor and campus-based personnel localisation has led researchers to create a wide spectrum of different algorithmic approaches and systems. However, the majority of the proposed systems overlook the unique radio environment presented by the human body, leading to systematic errors and inaccuracies when deployed in this context. In this paper, RSSI-based Monte Carlo Localisation was implemented using commercial off-the-shelf 868 MHz hardware, and empirical data was gathered across a relatively large number of scenarios within a single indoor office environment. These data showed that the body shadowing effect caused by the human body introduced path skew into location estimates. It was also shown that, by using two body-worn nodes in concert, the effect of body shadowing can be mitigated by averaging the estimated positions of the two nodes worn on either side of the body. © Springer Science+Business Media, LLC 2012.
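A minimal sketch of RSSI-based Monte Carlo localisation with the two-node averaging idea, assuming a log-distance path-loss model and Gaussian shadowing; the anchor layout, path-loss parameters and noise level are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

ANCHORS = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
P0, PLE, SIGMA = -40.0, 2.5, 4.0   # assumed path-loss model (dBm, exponent, dB)

def rssi(pos):
    """Log-distance path loss: RSSI(d) = P0 - 10 * n * log10(d)."""
    d = np.linalg.norm(ANCHORS - pos, axis=1).clip(1e-3)
    return P0 - 10.0 * PLE * np.log10(d)

def mcl(measured, n_particles=5000):
    """One-shot Monte Carlo localisation: weight uniformly drawn particles by
    the Gaussian likelihood of the RSSI vector; return the weighted mean."""
    particles = rng.uniform(0.0, 10.0, size=(n_particles, 2))
    pred = np.array([rssi(p) for p in particles])
    logw = -0.5 * np.sum((pred - measured) ** 2, axis=1) / SIGMA ** 2
    w = np.exp(logw - logw.max())
    return (w / w.sum()) @ particles

# Two body-worn nodes on either side of the body: average the two estimates.
true_pos = np.array([3.0, 6.0])
est_a = mcl(rssi(true_pos) + rng.normal(0.0, SIGMA, 4))
est_b = mcl(rssi(true_pos) + rng.normal(0.0, SIGMA, 4))
fused = 0.5 * (est_a + est_b)
```

Averaging the two estimates roughly cancels the opposite-signed path skew each body-worn node suffers, which is the mitigation the abstract reports.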
Abstract:
A numerical method is developed to simulate complex two-dimensional crack propagation in quasi-brittle materials considering random heterogeneous fracture properties. Potential cracks are represented by pre-inserted cohesive elements with tension and shear softening constitutive laws modelled by spatially varying Weibull random fields. Monte Carlo simulations of a concrete specimen under uni-axial tension were carried out with extensive investigation of the effects of important numerical algorithms and material properties on numerical efficiency and stability, crack propagation processes and load-carrying capacities. It was found that the homogeneous model led to incorrect crack patterns and load–displacement curves with strong mesh-dependence, whereas the heterogeneous model predicted realistic, complicated fracture processes and a load-carrying capacity with little mesh-dependence. Increasing the variance of the tensile strength random fields, i.e. increasing the heterogeneity, led to a reduction in the mean peak load and an increase in its standard deviation. The developed method provides a simple but effective tool for assessment of structural reliability and calculation of characteristic material strength for structural design.
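The reported trend (more heterogeneity lowers the mean peak load and widens its scatter) can be illustrated with an i.i.d. Weibull weakest-link sketch, ignoring the spatial correlation of the actual random fields and the cohesive-element mechanics; all numerical values are assumed:

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(5)

def specimen_strength(n_elems, shape_m, mean_strength=3.0):
    """Weakest-link sketch: element tensile strengths are i.i.d. Weibull with
    shape modulus m (scaled so each element's mean is mean_strength, in MPa);
    the specimen's peak load is set by its weakest cohesive element."""
    scale = mean_strength / gamma(1.0 + 1.0 / shape_m)
    return scale * rng.weibull(shape_m, n_elems).min()

def monte_carlo(shape_m, n_sim=2000, n_elems=100):
    """Monte Carlo over random realizations of one specimen."""
    s = np.array([specimen_strength(n_elems, shape_m) for _ in range(n_sim)])
    return s.mean(), s.std()

mean_hi, std_hi = monte_carlo(shape_m=12.0)   # low scatter (near-homogeneous)
mean_lo, std_lo = monte_carlo(shape_m=4.0)    # high scatter (heterogeneous)
```

A smaller Weibull modulus means larger strength variance, so the minimum over many elements drops further below the element mean while fluctuating more between realizations.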
Abstract:
Research into localization has produced a wealth of algorithms and techniques to estimate the location of wireless network nodes; however, the majority of these schemes do not explicitly account for non-line-of-sight conditions. Disregarding this common situation reduces their accuracy and their potential for exploitation in real-world applications. This is a particular problem for personnel tracking, where the user's body itself will inherently cause time-varying blocking according to their movements. Using empirical data, this paper demonstrates that, by accounting for non-line-of-sight conditions and using received-signal-strength-based Monte Carlo localization, meter-scale accuracy can be achieved for a wrist-worn personnel tracking tag in a 120 m indoor office environment. © 2012 IEEE.
Abstract:
We present an implementation of quantum annealing (QA) via lattice Green's function Monte Carlo (GFMC), focusing on its application to the Ising spin glass in transverse field. In particular, we study whether or not such a method is more effective than path-integral Monte Carlo (PIMC) based QA, as well as classical simulated annealing (CA), previously tested on the same optimization problem. We identify the issue of importance sampling, i.e., the necessity of possessing reasonably good (variational) trial wave functions, as the key point of the algorithm. We performed GFMC-QA runs using a Boltzmann-type trial wave function, finding results for the residual energies that are qualitatively similar to those of CA (but at a much larger computational cost), and definitely worse than PIMC-QA. We conclude that, at present, without a serious effort in constructing reliable importance sampling variational wave functions for a quantum glass, GFMC-QA is not a true competitor of PIMC-QA.
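The classical simulated annealing (CA) baseline against which the quantum methods are compared can be sketched for a small Ising spin glass with single-spin Metropolis flips and a geometric temperature schedule; the system size and schedule are toy choices:

```python
import numpy as np

rng = np.random.default_rng(11)

N = 16
J = rng.normal(size=(N, N))
J = np.triu(J, 1)
J = J + J.T                        # symmetric random couplings, zero diagonal

def energy(s):
    """Ising spin-glass energy E = -1/2 * sum_ij J_ij s_i s_j."""
    return -0.5 * s @ J @ s

def classical_annealing(steps=20_000, t0=3.0, t1=0.01):
    """CA baseline: single-spin Metropolis flips, geometric cooling t0 -> t1."""
    s = rng.choice([-1, 1], N)
    e = energy(s)
    for k in range(steps):
        temp = t0 * (t1 / t0) ** (k / steps)
        i = rng.integers(N)
        de = 2.0 * s[i] * (J[i] @ s)          # energy cost of flipping spin i
        if de < 0.0 or rng.random() < np.exp(-de / temp):
            s[i] = -s[i]
            e += de
    return s, e

spins, residual = classical_annealing()
```

The residual energy above the true ground state, as a function of annealing time, is the figure of merit on which GFMC-QA, PIMC-QA and CA are compared in the abstract.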
Abstract:
Monte Carlo calculations of quantum yield in PtSi/p-Si infrared detectors are carried out taking into account the presence of a spatially distributed barrier potential. In the 1-4 mu m wavelength range it is found that the spatial inhomogeneity of the barrier has no significant effect on the overall device photoresponse. However, above lambda = 4.0 mu m and particularly as the cut-off wavelength (lambda approximate to 5.5 mu m) is approached, these calculations reveal a difference between the homogeneous and inhomogeneous barrier photoresponse which becomes increasingly significant and exceeds 50% at lambda = 5.3 mu m. It is, in fact, the inhomogeneous barrier which displays an increased photoyield, a feature that is confirmed by approximate analytical calculations assuming a symmetric Gaussian spatial distribution of the barrier. Furthermore, the importance of the silicide layer thickness in optimizing device efficiency is underlined as a trade-off between maximizing light absorption in the silicide layer and optimizing the internal yield. The results presented here address important features which determine the photoyield of PtSi/Si Schottky diodes at energies below the Si absorption edge and just above the Schottky barrier height in particular.
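The effect of a Gaussian barrier distribution on the photoyield near cut-off can be reproduced with a small Monte Carlo average over a modified-Fowler yield; the barrier height, spread and photon energies below are assumed values chosen only to mimic a roughly 5.5 mu m cut-off, not figures from the paper:

```python
import numpy as np

rng = np.random.default_rng(17)

PHI0, SIGMA = 0.225, 0.02        # mean barrier (eV, ~5.5 mu m cut-off) and spread

def fowler_yield(e_ph, phi):
    """Modified Fowler internal yield, ~ (hv - phi)^2 / hv above the barrier."""
    return np.maximum(e_ph - phi, 0.0) ** 2 / e_ph

def inhomogeneous_yield(e_ph, n=200_000):
    """Monte Carlo average over a symmetric Gaussian barrier distribution."""
    return fowler_yield(e_ph, rng.normal(PHI0, SIGMA, n)).mean()

e_near = 1.2398 / 5.3            # photon energy at lambda = 5.3 mu m (eV)
e_far = 1.2398 / 2.5             # well above the barrier
ratio_near = inhomogeneous_yield(e_near) / fowler_yield(e_near, PHI0)
ratio_far = inhomogeneous_yield(e_far) / fowler_yield(e_far, PHI0)
```

Because the yield is convex in the barrier height, low-barrier regions contribute disproportionately near cut-off, so the inhomogeneous barrier out-yields the homogeneous one there while the two agree at shorter wavelengths.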
Abstract:
Classification methods with embedded feature selection capability are very appealing for the analysis of complex processes since they allow the analysis of root causes even when the number of input variables is high. In this work, we investigate the performance of three techniques for classification within a Monte Carlo strategy with the aim of root cause analysis. We consider the naive Bayes classifier and the logistic regression model with two different implementations for controlling model complexity, namely a LASSO-like implementation with an L1-norm regularization and a fully Bayesian implementation of the logistic model, the so-called relevance vector machine. Several challenges can arise when estimating such models, mainly linked to the characteristics of the data: a large number of input variables, high correlation among subsets of variables, the situation where the number of variables is higher than the number of available data points, and the case of unbalanced datasets. Using an ecological and a semiconductor manufacturing dataset, we show the advantages and drawbacks of each method, highlighting the superior performance in terms of classification accuracy of the relevance vector machine with respect to the other classifiers. Moreover, we show how the combination of the proposed techniques and the Monte Carlo approach can be used to get more robust insights into the problem under analysis when faced with challenging modelling conditions.
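A sketch of the Monte Carlo (bootstrap) strategy around a LASSO-like logistic classifier, here implemented from scratch with proximal-gradient (ISTA) soft-thresholding rather than any particular library; the synthetic data, penalty and selection threshold are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(13)

def l1_logistic(X, y, lam=0.1, lr=0.1, iters=800):
    """LASSO-like logistic regression via proximal gradient (ISTA):
    a gradient step on the logistic loss, then L1 soft-thresholding."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        prob = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * (X.T @ (prob - y)) / len(y)
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

# Synthetic 'process' data: 10 variables, only the first two are root causes.
n, p = 200, 10
X = rng.normal(size=(n, p))
w_true = np.zeros(p)
w_true[:2] = [2.0, -1.5]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ w_true)))).astype(float)

# Monte Carlo strategy: refit on bootstrap resamples and record how often
# each variable survives the L1 penalty (a selection-frequency score).
freq = np.zeros(p)
for _ in range(30):
    idx = rng.integers(0, n, n)
    freq += np.abs(l1_logistic(X[idx], y[idx])) > 1e-6
freq /= 30.0
```

The selection frequencies give the "more robust insights" the abstract alludes to: variables that survive the penalty across most resamples are stable root-cause candidates, while spuriously selected ones wash out.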