44 results for SEQUENTIAL MONTE-CARLO
Abstract:
A numerical method is developed to simulate complex two-dimensional crack propagation in quasi-brittle materials considering random heterogeneous fracture properties. Potential cracks are represented by pre-inserted cohesive elements with tension and shear softening constitutive laws modelled by spatially varying Weibull random fields. Monte Carlo simulations of a concrete specimen under uni-axial tension were carried out with extensive investigation of the effects of important numerical algorithms and material properties on numerical efficiency and stability, crack propagation processes and load-carrying capacities. It was found that the homogeneous model led to incorrect crack patterns and load–displacement curves with strong mesh-dependence, whereas the heterogeneous model predicted realistic, complicated fracture processes and a load-carrying capacity with little mesh-dependence. Increasing the variance of the tensile strength random fields, and thus the heterogeneity, led to a reduction in the mean peak load and an increase in its standard deviation. The developed method provides a simple but effective tool for assessing structural reliability and calculating characteristic material strength for structural design.
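As a rough illustration of the sampling step described in this abstract, the sketch below draws spatially independent Weibull tensile strengths for a set of pre-inserted cohesive elements; the mean strength and Weibull modulus are invented placeholders rather than the paper's calibration, and any spatial correlation structure of the actual random fields is omitted.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(0)

def weibull_strengths(n_elements, mean_ft=3.0, shape_m=6.0):
    """Draw Weibull-distributed tensile strengths (MPa) for cohesive elements.

    mean_ft (mean strength) and shape_m (Weibull modulus) are illustrative
    placeholders; the scale is set so the distribution mean equals mean_ft.
    A smaller shape_m means a larger variance, i.e. stronger heterogeneity.
    """
    scale = mean_ft / gamma(1.0 + 1.0 / shape_m)
    return scale * rng.weibull(shape_m, size=n_elements)

# One Monte Carlo realisation: every cohesive element gets its own strength;
# a full study reruns the finite-element analysis for many such samples.
ft = weibull_strengths(10_000)
print(f"mean = {ft.mean():.2f} MPa, std = {ft.std():.2f} MPa")
```

Lowering shape_m raises the variance of the field, which is the knob the abstract links to a lower mean peak load and a larger spread across Monte Carlo samples.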
Abstract:
In this paper, we report a fully ab initio variational Monte Carlo study of the linear and periodic chain of hydrogen atoms, a prototype system providing the simplest example of strong electronic correlation in low dimensions. In particular, we prove that numerical accuracy comparable to that of benchmark density-matrix renormalization-group calculations can be achieved by using a highly correlated Jastrow-antisymmetrized geminal power variational wave function. Furthermore, by using the so-called "modern theory of polarization" and by studying the spin-spin and dimer-dimer correlation functions, we have characterized in detail the crossover between the weakly and strongly correlated regimes of this atomic chain. Our results show that variational Monte Carlo provides an accurate and flexible alternative to highly correlated methods of quantum chemistry which, unlike those methods, can also be applied to a strongly correlated solid in low dimensions close to a crossover or a phase transition.
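For readers unfamiliar with the variational Monte Carlo machinery the abstract relies on, here is a deliberately tiny sketch of the method for a one-dimensional harmonic oscillator with trial function psi(x) = exp(-alpha x^2): Metropolis sampling of |psi|^2 and averaging of the local energy. It only illustrates the sampling principle; the paper's Jastrow-antisymmetrized geminal power wave function for the hydrogen chain is vastly richer.

```python
import numpy as np

rng = np.random.default_rng(1)

def vmc_energy(alpha, n_steps=200_000, step=1.0):
    """Toy VMC for H = -1/2 d2/dx2 + 1/2 x^2 with psi = exp(-alpha x^2).

    Samples |psi|^2 by Metropolis; the local energy for this trial function
    is E_L(x) = alpha + x^2 (1/2 - 2 alpha^2), exactly 0.5 at alpha = 0.5.
    """
    x, e_sum, n_acc = 0.0, 0.0, 0
    for _ in range(n_steps):
        x_new = x + step * rng.uniform(-1, 1)
        # Metropolis acceptance with ratio |psi(x_new)|^2 / |psi(x)|^2
        if rng.random() < np.exp(-2 * alpha * (x_new**2 - x**2)):
            x, n_acc = x_new, n_acc + 1
        e_sum += alpha + x**2 * (0.5 - 2 * alpha**2)
    return e_sum / n_steps, n_acc / n_steps

for a in (0.3, 0.5, 0.7):
    e, acc = vmc_energy(a)
    print(f"alpha={a:.1f}  E={e:.4f}  acceptance={acc:.2f}")
```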
Abstract:
Research into localization has produced a wealth of algorithms and techniques to estimate the location of wireless network nodes; however, the majority of these schemes do not explicitly account for non-line-of-sight conditions. Disregarding this common situation reduces their accuracy and their potential for exploitation in real-world applications. This is a particular problem for personnel tracking, where the user's body itself will inherently cause time-varying blocking according to their movements. Using empirical data, this paper demonstrates that, by accounting for non-line-of-sight conditions and using received-signal-strength-based Monte Carlo localization, meter-scale accuracy can be achieved for a wrist-worn personnel tracking tag in a 120 m indoor office environment. © 2012 IEEE.
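The abstract's received-signal-strength Monte Carlo localization is, at heart, a particle filter. The sketch below shows the idea with a log-distance path-loss model and an inflated noise variance standing in for non-line-of-sight (body-blocked) links; every number here (anchor layout, transmit power, path-loss exponent, noise levels) is an invented placeholder, not the paper's empirical setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative log-distance path-loss model: rss = tx - 10*n*log10(d) + noise
TX_POWER, PATH_EXP, SIGMA_LOS, SIGMA_NLOS = -40.0, 2.5, 4.0, 8.0
ANCHORS = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], dtype=float)

def predicted_rss(pos):
    d = np.linalg.norm(ANCHORS - pos, axis=1).clip(min=0.1)
    return TX_POWER - 10 * PATH_EXP * np.log10(d)

def mcl_step(particles, rss_meas, nlos_mask, motion_std=0.3):
    """One sequential Monte Carlo update: predict, weight by RSS, resample."""
    particles = particles + rng.normal(0, motion_std, particles.shape)
    sigma = np.where(nlos_mask, SIGMA_NLOS, SIGMA_LOS)  # inflate NLOS noise
    weights = np.empty(len(particles))
    for i, p in enumerate(particles):
        err = rss_meas - predicted_rss(p)
        weights[i] = np.exp(-0.5 * np.sum((err / sigma) ** 2))
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# Track a static tag at (4, 6) with one anchor blocked by the body (NLOS).
true_pos = np.array([4.0, 6.0])
nlos = np.array([False, True, False, False])
particles = rng.uniform(0, 10, (500, 2))
for _ in range(20):
    noise = rng.normal(0, np.where(nlos, SIGMA_NLOS, SIGMA_LOS))
    particles = mcl_step(particles, predicted_rss(true_pos) + noise, nlos)
print("estimate:", particles.mean(axis=0).round(2), "truth:", true_pos)
```

A more faithful treatment would estimate the NLOS state of each link over time from the empirical data rather than assuming it known, but the variance-inflation step is where NLOS awareness enters the weight update.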
Abstract:
We present an implementation of quantum annealing (QA) via lattice Green's function Monte Carlo (GFMC), focusing on its application to the Ising spin glass in a transverse field. In particular, we study whether or not such a method is more effective than path-integral Monte Carlo (PIMC) based QA, as well as classical simulated annealing (CA), previously tested on the same optimization problem. We identify the issue of importance sampling, i.e., the necessity of possessing reasonably good (variational) trial wave functions, as the key point of the algorithm. We performed GFMC-QA runs using a Boltzmann-type trial wave function, finding results for the residual energies that are qualitatively similar to those of CA (but at a much larger computational cost), and definitely worse than PIMC-QA. We conclude that, at present, without a serious effort in constructing reliable importance-sampling variational wave functions for a quantum glass, GFMC-QA is not a true competitor of PIMC-QA.
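For orientation, the classical simulated annealing (CA) baseline the abstract benchmarks against can be sketched in a few lines for a small random-coupling Ising glass; the sizes, couplings and schedule below are illustrative only, and neither the GFMC projection nor the importance-sampling trial wave function is attempted here.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulated_annealing(n=64, sweeps=2000, t_start=3.0, t_end=0.01):
    """Classical SA on a random +/-J Ising glass (illustrative baseline only)."""
    J = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), 1)
    J = J + J.T                          # symmetric couplings, zero diagonal
    spins = rng.choice([-1.0, 1.0], size=n)
    for T in np.geomspace(t_start, t_end, sweeps):
        for _ in range(n):               # one Metropolis sweep per temperature
            i = rng.integers(n)
            dE = 2.0 * spins[i] * (J[i] @ spins)   # cost of flipping spin i
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i] = -spins[i]
    return -0.5 * spins @ J @ spins / n  # final energy per spin

print(f"final energy per spin: {simulated_annealing():.3f}")
```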
Abstract:
We present results for a variety of Monte Carlo annealing approaches, both classical and quantum, benchmarked against one another for the textbook optimization exercise of a simple one-dimensional double well. In classical (thermal) annealing, the dependence upon the move chosen in a Metropolis scheme is studied and correlated with the spectrum of the associated Markov transition matrix. In quantum annealing, the path integral Monte Carlo approach is found to yield nontrivial sampling difficulties associated with the tunneling between the two wells. The choice of fictitious quantum kinetic energy is also addressed. We find that a "relativistic" kinetic energy form, leading to a higher probability of long real-space jumps, can be considerably more effective than the standard nonrelativistic one.
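The abstract's point about move choice can be reproduced on its own textbook example. In the hedged sketch below, classical annealing on a one-dimensional double well is run with two proposal distributions: short Gaussian steps, and heavy-tailed Cauchy steps whose occasional long jumps play a role loosely analogous to the "relativistic" kinetic energy in the quantum case. The potential, tilt and schedule are illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(4)

def V(x):
    return (x**2 - 1.0) ** 2 - 0.2 * x    # tilted double well; deeper minimum near x = +1

def anneal(proposal, steps=20_000, t_start=0.2, t_end=1e-3):
    """Metropolis thermal annealing on V; the proposal move is the knob studied."""
    x = -1.0                              # start trapped in the shallower left well
    for T in np.geomspace(t_start, t_end, steps):
        x_new = x + proposal()
        dE = V(x_new) - V(x)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            x = x_new
    return x

gauss = lambda: rng.normal(0.0, 0.2)           # short local moves
cauchy = lambda: 0.2 * rng.standard_cauchy()   # rare long jumps across the barrier

for name, prop in [("gaussian", gauss), ("cauchy", cauchy)]:
    finals = np.array([anneal(prop) for _ in range(50)])
    print(f"{name}: fraction reaching the deeper well = {np.mean(finals > 0):.2f}")
```

With a low starting temperature the short-step proposal rarely diffuses over the barrier, while the heavy-tailed proposal can cross it in a single accepted jump, mirroring the advantage of long real-space moves reported in the abstract.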
Abstract:
Monte Carlo calculations of quantum yield in PtSi/p-Si infrared detectors are carried out taking into account the presence of a spatially distributed barrier potential. In the 1–4 μm wavelength range it is found that the spatial inhomogeneity of the barrier has no significant effect on the overall device photoresponse. However, above λ = 4.0 μm and particularly as the cut-off wavelength (λ ≈ 5.5 μm) is approached, these calculations reveal a difference between the homogeneous and inhomogeneous barrier photoresponse which becomes increasingly significant and exceeds 50% at λ = 5.3 μm. It is, in fact, the inhomogeneous barrier which displays an increased photoyield, a feature that is confirmed by approximate analytical calculations assuming a symmetric Gaussian spatial distribution of the barrier. Furthermore, the importance of the silicide layer thickness in optimizing device efficiency is underlined as a trade-off between maximizing light absorption in the silicide layer and optimizing the internal yield. The results presented here address important features which determine the photoyield of PtSi/Si Schottky diodes at energies below the Si absorption edge and just above the Schottky barrier height in particular.
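The paper's point about the inhomogeneous barrier can be seen in a back-of-the-envelope Monte Carlo average: in a modified-Fowler picture the internal yield goes as (hν − ψ)²/hν, which is convex in the barrier height ψ, so averaging over a Gaussian spread of barriers boosts the yield most strongly near cut-off. The sketch below uses invented barrier parameters and replaces the paper's full transport simulation with this simple average.

```python
import numpy as np

rng = np.random.default_rng(5)

HC_EV_UM = 1.2398                  # photon energy (eV) = HC_EV_UM / wavelength (um)
PSI_MEAN, PSI_STD = 0.22, 0.02     # illustrative barrier mean and spread (eV)

def fowler_yield(h_nu, psi):
    """Modified-Fowler internal yield ~ (h*nu - psi)^2 / (h*nu), zero below barrier."""
    return np.clip(h_nu - psi, 0.0, None) ** 2 / h_nu

for lam in (3.0, 4.0, 5.0, 5.3):
    h_nu = HC_EV_UM / lam
    homogeneous = fowler_yield(h_nu, PSI_MEAN)
    psi = rng.normal(PSI_MEAN, PSI_STD, 100_000)   # Monte Carlo barrier samples
    inhomogeneous = fowler_yield(h_nu, psi).mean()
    print(f"lambda = {lam} um: inhomogeneous / homogeneous yield = "
          f"{inhomogeneous / homogeneous:.2f}")
```

The ratio stays near 1 at short wavelengths and grows rapidly as λ approaches cut-off, in qualitative agreement with the behaviour the abstract reports.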
Abstract:
Classification methods with embedded feature selection capability are very appealing for the analysis of complex processes, since they allow the analysis of root causes even when the number of input variables is high. In this work, we investigate the performance of three techniques for classification within a Monte Carlo strategy with the aim of root cause analysis. We consider the naive Bayes classifier and the logistic regression model with two different implementations for controlling model complexity, namely a LASSO-like implementation with L1-norm regularization and a fully Bayesian implementation of the logistic model, the so-called relevance vector machine. Several challenges can arise when estimating such models, mainly linked to the characteristics of the data: a large number of input variables, high correlation among subsets of variables, the situation where the number of variables is higher than the number of available data points, and the case of unbalanced datasets. Using an ecological and a semiconductor manufacturing dataset, we show advantages and drawbacks of each method, highlighting the superior performance in terms of classification accuracy of the relevance vector machine with respect to the other classifiers. Moreover, we show how the combination of the proposed techniques and the Monte Carlo approach can be used to get more robust insights into the problem under analysis when faced with challenging modelling conditions.
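As a sketch of how a Monte Carlo strategy can be combined with the LASSO-like classifier, the snippet below repeatedly resamples a synthetic n < p dataset and records how often each variable survives the L1 penalty; stable selection frequencies are then read as root-cause evidence. The data, penalty strength and resampling scheme are illustrative assumptions, and the relevance vector machine variant is not shown.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)

# Synthetic stand-in: 40 samples, 100 variables, only x0 and x1 informative,
# mimicking the n < p regime discussed in the abstract.
n, p = 40, 100
X = rng.normal(size=(n, p))
y = (X[:, 0] + X[:, 1] + 0.5 * rng.normal(size=n) > 0).astype(int)

n_runs = 200
counts = np.zeros(p)
for _ in range(n_runs):
    idx = rng.choice(n, size=n, replace=True)      # Monte Carlo bootstrap resample
    clf = LogisticRegression(penalty="l1", C=0.5, solver="liblinear")
    clf.fit(X[idx], y[idx])
    counts += np.abs(clf.coef_[0]) > 1e-8          # variables kept by the L1 penalty

freq = counts / n_runs
print("top 5 variables by selection frequency:", np.argsort(freq)[::-1][:5])
```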
Abstract:
This paper proposes a continuous-time Markov chain (CTMC) based sequential analytical approach for composite generation and transmission system reliability assessment. The basic idea is to construct a CTMC model for the composite system. Based on this model, sequential analyses are performed. Various kinds of reliability indices can be obtained, including expectation, variance, frequency, duration and probability distribution. In order to reduce the dimension of the state space, the traditional CTMC modeling approach is modified by merging all high-order contingencies into a single state, which can be calculated by Monte Carlo simulation (MCS). Then a state mergence technique is developed to integrate all normal states to further reduce the dimension of the CTMC model. Moreover, a time discretization method is presented for the CTMC model calculation. Case studies are performed on the RBTS and a modified IEEE 300-bus test system. The results indicate that sequential reliability assessment can be performed by the proposed approach. Compared with the traditional sequential Monte Carlo simulation method, the proposed method is more efficient, especially for small-scale or very reliable power systems.
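For contrast with the proposed CTMC approach, the traditional sequential Monte Carlo simulation it is benchmarked against can be illustrated on the smallest possible case: a single repairable component with exponential failure and repair times. The rates below are invented placeholders; a composite-system study would draw synchronized histories for every generator and line.

```python
import numpy as np

rng = np.random.default_rng(7)

LAMBDA = 0.5   # failure rate, occurrences per year (illustrative)
MU = 50.0      # repair rate, repairs per year (illustrative)

def sequential_mcs(years=10_000.0):
    """Draw one chronological up/down history and accumulate reliability indices."""
    t, up = 0.0, True
    downtime, n_failures = 0.0, 0
    while t < years:
        dwell = rng.exponential(1.0 / (LAMBDA if up else MU))
        if up and t + dwell < years:
            n_failures += 1            # an up period ending inside the horizon
        dwell = min(dwell, years - t)
        if not up:
            downtime += dwell
        t += dwell
        up = not up
    return downtime / years, n_failures / years

unavail, freq = sequential_mcs()
print(f"unavailability ~ {unavail:.5f} (analytic {LAMBDA / (LAMBDA + MU):.5f})")
print(f"failure frequency ~ {freq:.3f}/yr (analytic {LAMBDA * MU / (LAMBDA + MU):.3f}/yr)")
```

The long chronological histories needed for converged frequency-and-duration indices are exactly the cost the CTMC-based approach aims to avoid in very reliable systems.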
Abstract:
Radiative pressure exerted by line interactions is a prominent driver of outflows in astrophysical systems, being at work in the outflows emerging from hot stars or from the accretion discs of cataclysmic variables, massive young stars and active galactic nuclei. In this work, a new radiation hydrodynamical approach to model line-driven hot-star winds is presented. By coupling a Monte Carlo radiative transfer scheme with a finite volume fluid dynamical method, line-driven mass outflows may be modelled self-consistently, benefiting from the advantages of Monte Carlo techniques in treating multiline effects, such as multiple scatterings, and in dealing with arbitrary multidimensional configurations. In this work, we introduce our approach in detail by highlighting the key numerical techniques and verifying their operation in a number of simplified applications, specifically in a series of self-consistent, one-dimensional, Sobolev-type, hot-star wind calculations. The utility and accuracy of our approach are demonstrated by comparing the obtained results with the predictions of various formulations of the so-called CAK theory and by confronting the calculations with modern sophisticated techniques of predicting the wind structure. Using these calculations, we also point out some useful diagnostic capabilities our approach provides. Finally, we discuss some of the current limitations of our method, some possible extensions and potential future applications.
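To make the coupling idea concrete, the toy sketch below runs a grey, isotropic-scattering Monte Carlo radiative transfer pass through a static 1D grid and tallies the momentum each scattering deposits per cell; in a scheme of the kind the abstract describes, such a force estimate would be handed to a finite-volume hydrodynamics update each time step. All quantities are in arbitrary code units, and none of the Sobolev line-transfer machinery is attempted.

```python
import numpy as np

rng = np.random.default_rng(8)

N_CELLS, N_PACKETS = 50, 20_000
DX = 1.0 / N_CELLS
CHI = np.full(N_CELLS, 2.0)           # opacity * density per cell (illustrative)
E_PACKET = 1.0 / N_PACKETS            # energy carried by each packet
C = 1.0                               # speed of light in code units

momentum = np.zeros(N_CELLS)          # radiative momentum deposited per cell

for _ in range(N_PACKETS):
    x, mu = 0.0, 1.0                  # packet injected at the inner boundary
    tau_left = -np.log(rng.random())  # optical depth until the next scattering
    while 0.0 <= x < 1.0:
        cell = min(int(x / DX), N_CELLS - 1)
        edge = (cell + (mu > 0)) * DX          # cell face ahead of the packet
        s_edge = (edge - x) / mu               # distance to that face
        s_scat = tau_left / CHI[cell]          # distance to the scattering event
        if s_scat < s_edge:
            x += mu * s_scat                   # scatter inside this cell
            mu_new = rng.uniform(-1.0, 1.0)
            momentum[cell] += E_PACKET * (mu - mu_new) / C
            mu = mu_new
            tau_left = -np.log(rng.random())
        else:
            x = edge + 1e-12 * mu              # step just across the face
            tau_left -= s_edge * CHI[cell]

print("total radial momentum deposited:", round(float(momentum.sum()), 4))
```

Line-driven winds need the multiline, frequency-dependent treatment the paper implements; this grey sketch only shows where the Monte Carlo force estimate plugs into the hydrodynamics.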
Abstract:
Gold nanoparticles (GNPs) have shown potential to be used as radiosensitizers for radiation therapy. Despite extensive research activity to study GNP radiosensitization using photon beams, only a few studies have been carried out using proton beams. In this work, Monte Carlo simulations were used to assess the dose enhancement of GNPs for proton therapy. The enhancement effect was compared between a clinical proton spectrum, a clinical 6 MV photon spectrum, and a kilovoltage photon source similar to those used in many radiobiology lab settings. We showed that the mechanism by which GNPs can lead to dose enhancements in radiation therapy differs when comparing photon and proton radiation. The GNP dose enhancement using protons can be up to a factor of 14 and is independent of proton energy, while the dose enhancement is highly dependent on the photon energy used. For the same amount of energy absorbed in the GNP, interactions with protons, kVp photons and MV photons produce similar doses within several nanometers of the GNP surface, and differences are below 15% for the first 10 nm. However, secondary electrons produced by kilovoltage photons have the longest range in water as compared to protons and MV photons; e.g., they cause a dose enhancement 20 times higher than that caused by protons at 10 μm from the GNP surface. We conclude that GNPs have the potential to enhance radiation therapy depending on the type of radiation source. Proton therapy can be enhanced significantly only if the GNPs are in close proximity to the biological target.
Abstract:
Objective To present a first- and second-trimester Down syndrome screening strategy, whereby second-trimester marker determination is contingent on the first-trimester results. Unlike non-disclosure sequential screening (the Integrated test), which requires all women to have markers measured in both trimesters, this allows a large proportion of women to complete screening in the first trimester. Methods Two first-trimester risk cut-offs defined three types of results: positive and referred for early diagnosis; negative with screening complete; and intermediate, needing second-trimester markers. Multivariate Gaussian modelling with Monte Carlo simulation was used to estimate the false-positive rate for a fixed 85% detection rate. The false-positive rate was evaluated for various early detection rates and early test completion rates. Model parameters were taken from the SURUSS trial. Results Completion of screening in the first trimester for 75% of women resulted in a 30% early detection rate and a 55% second-trimester detection rate (85% overall), with a false-positive rate only 0.1% above that achievable by the Integrated test. The screen-positive rate was 0.1% in the first trimester and 4.7% for those continuing to be tested in the second trimester. If the early detection rate were increased to 45%, or the early completion rate to 80%, there would be a further 0.1% increase in the false-positive rate. Conclusion Contingent screening can achieve results comparable with the Integrated test but with earlier completion of screening for most women. Both strategies need to be evaluated in large-scale prospective studies, particularly in relation to psychological impact and practicability.
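A stripped-down version of the Methods can be run in a few lines: Gaussian log-marker distributions for affected and unaffected pregnancies, likelihood-ratio risks, two first-trimester cut-offs for the contingent triage, and Monte Carlo estimation of the resulting rates. Every number below (marker separations, prevalence, cut-offs) is an invented placeholder, not a SURUSS parameter, so only the shape of the calculation carries over.

```python
import numpy as np

rng = np.random.default_rng(9)

# Invented separations (in SD units) between affected and unaffected
# Gaussian log-marker distributions, and an illustrative prevalence.
D1, D2, PREV = 1.5, 1.2, 1 / 500
N = 200_000

def lr(z, d):
    """Likelihood ratio affected:unaffected for a unit-variance Gaussian marker."""
    return np.exp(d * z - 0.5 * d * d)

def screen(affected):
    m1, m2 = (D1, D2) if affected else (0.0, 0.0)
    z1 = rng.normal(m1, 1, N)
    risk1 = PREV * lr(z1, D1)                    # first-trimester risk (odds form)
    hi, lo = risk1 > 1 / 50, risk1 < 1 / 2000    # the two contingent cut-offs
    mid = ~hi & ~lo                              # intermediate: test again later
    z2 = rng.normal(m2, 1, mid.sum())
    risk2 = risk1[mid] * lr(z2, D2)              # combined two-trimester risk
    positive = hi.sum() + (risk2 > 1 / 250).sum()
    return positive / N, mid.mean()

dr, _ = screen(affected=True)
fpr, cont = screen(affected=False)
print(f"detection ~ {dr:.1%}, false-positive ~ {fpr:.2%}, "
      f"needing second-trimester markers ~ {cont:.1%}")
```

The real model is multivariate, with several correlated markers per trimester and gestation-specific parameters, but the contingent logic of the two cut-offs works as sketched here.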
Abstract:
Objective To demonstrate the potential value of three-stage sequential screening for Down syndrome. Methods Protocols were considered in which maternal serum pregnancy-associated plasma protein-A (PAPP-A) and free β-human chorionic gonadotropin (β-hCG) measurements were taken on all women in the first trimester. Those women with very low Down syndrome risks were screened negative at that stage, and nuchal translucency (NT) was measured on the remainder and the risk reassessed. Those with very low risk were then screened negative and those with very high risk were offered early diagnostic testing. Those with intermediate risks received second-trimester maternal serum α-fetoprotein, free β-hCG, unconjugated estriol and inhibin-A. Risk was then reassessed and those with high risk were offered diagnosis. Detection rates and false-positive rates were estimated by multivariate Gaussian modelling using Monte Carlo simulation. Results The modelling suggests that, with full adherence to a three-stage policy, overall detection rates of nearly 90% and false-positive rates below 2.0% can be achieved. Approximately two-thirds of pregnancies are screened on the basis of first-trimester biochemistry alone, five out of six women complete their screening in the first trimester, and the first-trimester detection rate is over 60%. Conclusion Three-stage contingent sequential screening is potentially highly effective for Down syndrome screening. The acceptability of this protocol, and its performance in practice, should be tested in prospective studies. Copyright © 2006 John Wiley & Sons, Ltd.
Abstract:
In the highly competitive world of modern finance, new derivatives are continually required to take advantage of changes in financial markets and to hedge businesses against new risks. The research described in this paper aims to accelerate the development and pricing of new derivatives in two different ways. Firstly, new derivatives can be specified mathematically within a general framework, enabling new mathematical formulae to be specified rather than just new parameter settings. This Generic Pricing Engine (GPE) is expressive enough to specify a wide range of standard pricing engines. Secondly, the associated price simulation using the Monte Carlo method is accelerated using GPU or multicore hardware. The parallel implementation (in OpenCL) is automatically derived from the mathematical description of the derivative. As a test, for a Basket Option Pricing Engine (BOPE) generated using the GPE, on the largest problem size an NVIDIA GPU runs the generated pricing engine at 45 times the speed of a sequential, hand-coded implementation of the same BOPE. Thus a user can more rapidly devise, simulate and experiment with new derivatives without actual programming.
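As a flavour of the kind of pricing engine being generated and parallelized, here is a compact sequential Monte Carlo pricer for a European basket call under correlated geometric Brownian motion, written with NumPy. All market parameters are illustrative, and this is the style of sequential hand-coded baseline the GPE-generated OpenCL engine is benchmarked against, not the GPE itself.

```python
import numpy as np

rng = np.random.default_rng(10)

def basket_call_mc(s0, weights, corr, sigma, r=0.03, T=1.0, K=100.0, n_paths=200_000):
    """Price a European call on a weighted basket of GBM assets by Monte Carlo."""
    L = np.linalg.cholesky(corr)                 # correlate the Gaussian draws
    z = rng.standard_normal((n_paths, len(s0))) @ L.T
    s_T = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    disc = np.exp(-r * T) * np.maximum(s_T @ weights - K, 0.0)
    return disc.mean(), disc.std() / np.sqrt(n_paths)

price, stderr = basket_call_mc(
    s0=np.array([100.0, 95.0, 105.0]),
    weights=np.array([0.4, 0.3, 0.3]),
    corr=np.array([[1.0, 0.5, 0.3],
                   [0.5, 1.0, 0.4],
                   [0.3, 0.4, 1.0]]),
    sigma=np.array([0.20, 0.25, 0.18]),
)
print(f"basket call ~ {price:.3f} +/- {1.96 * stderr:.3f} (95% CI)")
```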