64 results for Markov chain Monte Carlo
Abstract:
The range of potential applications for indoor and campus-based personnel localisation has led researchers to create a wide spectrum of different algorithmic approaches and systems. However, the majority of the proposed systems overlook the unique radio environment presented by the human body, leading to systematic errors and inaccuracies when deployed in this context. In this paper, RSSI-based Monte Carlo Localisation was implemented using commercial off-the-shelf 868 MHz hardware, and empirical data was gathered across a relatively large number of scenarios within a single indoor office environment. This data showed that the body shadowing effect of the human body introduced path skew into location estimates. It was also shown that, by using two body-worn nodes in concert, the effect of body shadowing can be mitigated by averaging the estimated positions of the nodes worn on either side of the body. © Springer Science+Business Media, LLC 2012.
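As a rough illustration of the technique described above, the sketch below combines a particle-style Monte Carlo position estimate from RSSI measurements (under an assumed log-distance path-loss model) with the two-node averaging idea. The anchor layout, path-loss parameters and the extra body-shadowing loss are illustrative assumptions, not values from the paper.

```python
# A minimal sketch of RSSI-based Monte Carlo localisation with two-node
# averaging, assuming a log-distance path-loss model; anchors, path-loss
# exponent, noise level and body loss are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
ANCHORS = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # anchor positions, m
P0, N_EXP, SIGMA = -40.0, 2.5, 4.0  # dBm at 1 m, path-loss exponent, shadowing std (assumed)

def expected_rssi(pos):
    d = np.linalg.norm(ANCHORS - pos, axis=1).clip(0.1)
    return P0 - 10.0 * N_EXP * np.log10(d)

def mcl_estimate(observed_rssi, n_particles=2000):
    """Weight random position hypotheses by how well they explain the RSSI vector."""
    particles = rng.uniform(0.0, 10.0, size=(n_particles, 2))
    residuals = np.array([observed_rssi - expected_rssi(p) for p in particles])
    weights = np.exp(-0.5 * np.sum((residuals / SIGMA) ** 2, axis=1))
    weights /= weights.sum()
    return weights @ particles  # weighted-mean position estimate

# Two body-worn nodes (left/right side) see differently shadowed RSSI;
# averaging their individual estimates mitigates the body-shadowing skew.
true_pos = np.array([4.0, 6.0])
rssi_left = expected_rssi(true_pos) + rng.normal(0, SIGMA, 4) - 6.0   # extra body loss (assumed)
rssi_right = expected_rssi(true_pos) + rng.normal(0, SIGMA, 4)
fused = 0.5 * (mcl_estimate(rssi_left) + mcl_estimate(rssi_right))
print("fused position estimate:", fused)
```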
Abstract:
A numerical method is developed to simulate complex two-dimensional crack propagation in quasi-brittle materials considering random heterogeneous fracture properties. Potential cracks are represented by pre-inserted cohesive elements with tension and shear softening constitutive laws modelled by spatially varying Weibull random fields. Monte Carlo simulations of a concrete specimen under uni-axial tension were carried out, with extensive investigation of the effects of important numerical algorithms and material properties on numerical efficiency and stability, crack propagation processes and load-carrying capacities. It was found that the homogeneous model led to incorrect crack patterns and load–displacement curves with strong mesh-dependence, whereas the heterogeneous model predicted realistic, complicated fracture processes and load-carrying capacities with little mesh-dependence. Increasing the variance of the tensile strength random field, i.e. the degree of heterogeneity, reduced the mean peak load and increased its standard deviation. The developed method provides a simple but effective tool for assessing structural reliability and calculating characteristic material strength for structural design.
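The sketch below illustrates only the random-field ingredient of the method: drawing spatially varying Weibull tensile strengths for the cohesive elements and showing how a lower Weibull modulus increases heterogeneity. The mean strength, moduli and element count are assumed for illustration; the actual crack simulations require the full cohesive finite-element model.

```python
# A minimal sketch of assigning Weibull-distributed tensile strengths to
# pre-inserted cohesive elements for Monte Carlo crack simulations; mean
# strength, Weibull moduli and element count are illustrative assumptions.
import numpy as np
from math import gamma

rng = np.random.default_rng(1)

def weibull_strengths(n_elements, mean_ft, shape_m):
    """Sample element tensile strengths; lower shape m => higher heterogeneity."""
    scale = mean_ft / gamma(1.0 + 1.0 / shape_m)   # scale chosen so the mean equals mean_ft
    return scale * rng.weibull(shape_m, size=n_elements)

for m in (12.0, 6.0, 3.0):                          # decreasing m increases the variance
    ft = weibull_strengths(n_elements=5000, mean_ft=3.0, shape_m=m)  # 3.0 MPa mean (assumed)
    print(f"m = {m:4.1f}: mean = {ft.mean():.2f} MPa, std = {ft.std():.2f} MPa")
```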
Abstract:
In this paper, we report a fully ab initio variational Monte Carlo study of the linear and periodic chain of hydrogen atoms, a prototype system providing the simplest example of strong electronic correlation in low dimensions. In particular, we show that numerical accuracy comparable to that of benchmark density-matrix renormalization-group calculations can be achieved by using a highly correlated Jastrow-antisymmetrized geminal power variational wave function. Furthermore, by using the so-called "modern theory of polarization" and by studying the spin-spin and dimer-dimer correlation functions, we characterize in detail the crossover between the weakly and strongly correlated regimes of this atomic chain. Our results show that variational Monte Carlo provides an accurate and flexible alternative to highly correlated methods of quantum chemistry which, unlike those methods, can also be applied to a strongly correlated solid in low dimensions close to a crossover or a phase transition.
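For readers unfamiliar with the technique, the sketch below shows variational Monte Carlo in its simplest possible setting: Metropolis sampling of |ψ|² and the variational energy estimate for a single hydrogen atom with a one-parameter trial wave function. It illustrates the method only; it is not the Jastrow-antisymmetrized geminal power wave function or the periodic hydrogen-chain Hamiltonian studied in the paper.

```python
# A minimal variational Monte Carlo sketch: Metropolis sampling of |psi|^2
# for psi = exp(-alpha*r) and the corresponding variational energy of a
# single hydrogen atom (exact ground state, E = -0.5 Ha, at alpha = 1).
import numpy as np

rng = np.random.default_rng(2)

def local_energy(r, alpha):
    # E_L = -alpha^2/2 + (alpha - 1)/r for psi = exp(-alpha*r)
    return -0.5 * alpha**2 + (alpha - 1.0) / r

def vmc_energy(alpha, n_steps=50_000, step=0.5):
    x = np.array([1.0, 0.0, 0.0])
    energies = []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step, 3)
        r, r_new = np.linalg.norm(x), np.linalg.norm(x_new)
        # Metropolis acceptance with probability |psi_new / psi_old|^2
        if rng.random() < np.exp(-2.0 * alpha * (r_new - r)):
            x = x_new
        energies.append(local_energy(np.linalg.norm(x), alpha))
    return np.mean(energies[n_steps // 10:])     # discard a short equilibration

for alpha in (0.8, 1.0, 1.2):
    print(f"alpha = {alpha}: E ~ {vmc_energy(alpha):.4f} Ha")
```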
Abstract:
Research into localization has produced a wealth of algorithms and techniques to estimate the location of wireless network nodes; however, the majority of these schemes do not explicitly account for non-line-of-sight conditions. Disregarding this common situation reduces their accuracy and their potential for exploitation in real-world applications. This is a particular problem for personnel tracking, where the user's body itself inherently causes time-varying blocking according to their movements. Using empirical data, this paper demonstrates that, by accounting for non-line-of-sight conditions and using received signal strength based Monte Carlo localization, meter-scale accuracy can be achieved for a wrist-worn personnel tracking tag in a 120 m indoor office environment. © 2012 IEEE.
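A minimal sketch of the non-line-of-sight idea follows: each anchor link is scored under a mixture of a line-of-sight path-loss model and a biased NLOS model, and the resulting likelihood would weight the particles in a Monte Carlo localization filter. All parameters (path-loss exponent, NLOS bias, mixture weight) are illustrative assumptions, not values from the paper.

```python
# A minimal sketch of an NLOS-aware RSSI likelihood for Monte Carlo
# localization: each link is scored under a mixture of a line-of-sight
# model and a biased, noisier non-line-of-sight model.
import numpy as np

P0, N_EXP = -40.0, 2.5           # dBm at 1 m, path-loss exponent (assumed)
SIGMA_LOS, SIGMA_NLOS = 3.0, 6.0 # shadowing std, LOS vs NLOS (assumed)
NLOS_BIAS, P_NLOS = -8.0, 0.4    # extra attenuation and prior NLOS probability (assumed)

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def link_likelihood(rssi, distance):
    """Likelihood of one anchor's RSSI given a hypothesised distance."""
    mu_los = P0 - 10 * N_EXP * np.log10(max(distance, 0.1))
    mu_nlos = mu_los + NLOS_BIAS
    return ((1 - P_NLOS) * gaussian(rssi, mu_los, SIGMA_LOS)
            + P_NLOS * gaussian(rssi, mu_nlos, SIGMA_NLOS))

# Particle weights are the product of per-anchor link likelihoods.
print(link_likelihood(rssi=-62.0, distance=8.0))
```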
Abstract:
We present an implementation of quantum annealing (QA) via lattice Green's function Monte Carlo (GFMC), focusing on its application to the Ising spin glass in a transverse field. In particular, we study whether or not such a method is more effective than path-integral Monte Carlo (PIMC) based QA, as well as classical simulated annealing (CA), previously tested on the same optimization problem. We identify the issue of importance sampling, i.e., the necessity of possessing reasonably good (variational) trial wave functions, as the key point of the algorithm. We performed GFMC-QA runs using a Boltzmann-type trial wave function, finding residual energies that are qualitatively similar to those of CA (but at a much larger computational cost) and definitely worse than those of PIMC-QA. We conclude that, at present, without a serious effort in constructing reliable importance-sampling variational wave functions for a quantum glass, GFMC-QA is not a true competitor of PIMC-QA.
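Classical simulated annealing (CA), one of the reference methods in this comparison, is easy to sketch; the toy example below anneals a one-dimensional ring of Ising spins with random ±1 couplings. The system size, couplings and schedule are illustrative and not the paper's benchmark instances.

```python
# A minimal classical simulated-annealing sketch for an Ising spin glass on
# a 1D ring with random +/-1 couplings; size and schedule are illustrative.
import numpy as np

rng = np.random.default_rng(3)
N = 64
J = rng.choice([-1.0, 1.0], size=N)          # coupling between spin i and spin i+1 (ring)
spins = rng.choice([-1, 1], size=N)

def energy(s):
    return -np.sum(J * s * np.roll(s, -1))

for T in np.linspace(3.0, 0.01, 2000):       # linear annealing schedule
    for _ in range(N):                        # one sweep per temperature
        i = rng.integers(N)
        # energy change of flipping spin i (its two ring bonds)
        dE = 2.0 * spins[i] * (J[i] * spins[(i + 1) % N] + J[i - 1] * spins[i - 1])
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i] = -spins[i]

print("residual energy per spin:", energy(spins) / N)
```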
Abstract:
Monte Carlo calculations of quantum yield in PtSi/p-Si infrared detectors are carried out taking into account the presence of a spatially distributed barrier potential. In the 1–4 μm wavelength range it is found that the spatial inhomogeneity of the barrier has no significant effect on the overall device photoresponse. However, above λ = 4.0 μm, and particularly as the cut-off wavelength (λ ≈ 5.5 μm) is approached, these calculations reveal a difference between the homogeneous and inhomogeneous barrier photoresponse which becomes increasingly significant and exceeds 50% at λ = 5.3 μm. It is, in fact, the inhomogeneous barrier which displays an increased photoyield, a feature that is confirmed by approximate analytical calculations assuming a symmetric Gaussian spatial distribution of the barrier. Furthermore, the importance of the silicide layer thickness in optimizing device efficiency is underlined as a trade-off between maximizing light absorption in the silicide layer and optimizing the internal yield. The results presented here address important features which determine the photoyield of PtSi/Si Schottky diodes at energies below the Si absorption edge, and just above the Schottky barrier height in particular.
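The effect of barrier inhomogeneity near cut-off can be illustrated with a toy calculation: averaging a Fowler-type internal emission yield over a Gaussian distribution of barrier heights, as in the sketch below. The barrier parameters are assumed for illustration, and this stands in for, rather than reproduces, the paper's full Monte Carlo transport calculation.

```python
# A toy Monte Carlo sketch of why an inhomogeneous Schottky barrier raises
# the photoyield near cut-off: a Fowler-type yield is averaged over a
# Gaussian distribution of barrier heights. Parameter values are assumed.
import numpy as np

rng = np.random.default_rng(4)
PHI_MEAN, PHI_SIGMA = 0.22, 0.02        # eV: mean barrier and spread (assumed)

def fowler_yield(photon_ev, phi):
    # classic Fowler dependence, zero below the local barrier
    return np.where(photon_ev > phi, (photon_ev - phi) ** 2 / photon_ev, 0.0)

for wavelength_um in (4.0, 5.0, 5.3):
    e_ph = 1.2398 / wavelength_um       # photon energy in eV
    homogeneous = fowler_yield(e_ph, PHI_MEAN)
    barriers = rng.normal(PHI_MEAN, PHI_SIGMA, 100_000)      # sampled local barriers
    inhomogeneous = fowler_yield(e_ph, barriers).mean()
    print(f"{wavelength_um} um: inhomogeneous / homogeneous = {inhomogeneous / homogeneous:.2f}")
```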
Abstract:
Classification methods with embedded feature selection capability are very appealing for the analysis of complex processes, since they allow the analysis of root causes even when the number of input variables is high. In this work, we investigate the performance of three classification techniques within a Monte Carlo strategy with the aim of root cause analysis. We consider the naive Bayes classifier and the logistic regression model with two different implementations for controlling model complexity, namely a LASSO-like implementation with an L1-norm regularization and a fully Bayesian implementation of the logistic model, the so-called relevance vector machine. Several challenges can arise when estimating such models, mainly linked to the characteristics of the data: a large number of input variables, high correlation among subsets of variables, the situation where the number of variables is higher than the number of available data points, and the case of unbalanced datasets. Using an ecological and a semiconductor manufacturing dataset, we show the advantages and drawbacks of each method, highlighting the superior classification accuracy of the relevance vector machine with respect to the other classifiers. Moreover, we show how the combination of the proposed techniques and the Monte Carlo approach can be used to obtain more robust insights into the problem under analysis when faced with challenging modelling conditions.
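A minimal sketch of the Monte Carlo plus L1-regularized logistic regression idea is given below: over repeated random subsamples, the frequency with which each variable receives a non-zero coefficient is recorded as a root-cause indicator. The synthetic unbalanced dataset, subsample fraction and regularization strength are illustrative assumptions.

```python
# A minimal sketch of Monte Carlo feature selection with L1-regularised
# logistic regression: count how often each variable is selected across
# repeated random subsamples. Dataset and settings are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                           weights=[0.8, 0.2], random_state=0)  # unbalanced, synthetic

selection_counts = np.zeros(X.shape[1])
n_runs = 100
for _ in range(n_runs):
    idx = rng.choice(len(y), size=int(0.8 * len(y)), replace=False)  # random subsample
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    model.fit(X[idx], y[idx])
    selection_counts += (np.abs(model.coef_[0]) > 1e-8)

# Variables selected most often across Monte Carlo runs are root-cause candidates.
print("top candidate variables:", np.argsort(selection_counts)[::-1][:5])
print("max selection frequency:", selection_counts.max() / n_runs)
```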
Abstract:
Radiative pressure exerted by line interactions is a prominent driver of outflows in astrophysical systems, being at work in the outflows emerging from hot stars or from the accretion discs of cataclysmic variables, massive young stars and active galactic nuclei. In this work, a new radiation hydrodynamical approach to model line-driven hot-star winds is presented. By coupling a Monte Carlo radiative transfer scheme with a finite volume fluid dynamical method, line-driven mass outflows may be modelled self-consistently, benefiting from the advantages of Monte Carlo techniques in treating multiline effects, such as multiple scatterings, and in dealing with arbitrary multidimensional configurations. In this work, we introduce our approach in detail by highlighting the key numerical techniques and verifying their operation in a number of simplified applications, specifically in a series of self-consistent, one-dimensional, Sobolev-type, hot-star wind calculations. The utility and accuracy of our approach are demonstrated by comparing the obtained results with the predictions of various formulations of the so-called CAK theory and by confronting the calculations with modern sophisticated techniques of predicting the wind structure. Using these calculations, we also point out some useful diagnostic capabilities our approach provides. Finally, we discuss some of the current limitations of our method, some possible extensions and potential future applications.
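The core coupling idea can be sketched very simply: Monte Carlo photon packets are propagated through a grid and the momentum they deposit per cell provides the radiative force term for the fluid solver. The toy example below does this for a 1D grid with a uniform assumed opacity; the paper's scheme (multi-line, Sobolev-type, multidimensional) is far more elaborate.

```python
# A minimal sketch of the Monte Carlo radiative-force estimate used to couple
# radiative transfer to hydrodynamics: packets deposit momentum in the cell
# where they interact. Grid, opacities and packet energy are illustrative.
import numpy as np

rng = np.random.default_rng(6)
C = 3.0e10                                   # speed of light, cm/s
n_cells, n_packets = 50, 20_000
tau_cell = np.full(n_cells, 0.05)            # optical depth per cell (assumed)
packet_energy = 1.0                          # erg per packet (assumed)
momentum = np.zeros(n_cells)

for _ in range(n_packets):
    tau_target = -np.log(rng.random())       # optical depth to the next interaction
    tau_run = 0.0
    for i in range(n_cells):
        tau_run += tau_cell[i]
        if tau_run >= tau_target:            # packet interacts in cell i
            momentum[i] += packet_energy / C # and deposits its momentum there
            break                            # otherwise it escapes the grid

# momentum / (cell mass * time step) would enter the fluid momentum equation as a force.
print("deposited momentum, first 5 cells:", momentum[:5])
```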
Abstract:
Gold nanoparticles (GNPs) have shown potential as radiosensitizers for radiation therapy. Despite extensive research activity to study GNP radiosensitization using photon beams, only a few studies have been carried out using proton beams. In this work, Monte Carlo simulations were used to assess the dose enhancement of GNPs for proton therapy. The enhancement effect was compared between a clinical proton spectrum, a clinical 6 MV photon spectrum, and a kilovoltage photon source similar to those used in many radiobiology lab settings. We showed that the mechanism by which GNPs can lead to dose enhancements in radiation therapy differs between photon and proton radiation. The GNP dose enhancement using protons can be up to a factor of 14 and is independent of proton energy, whereas the dose enhancement is highly dependent on the photon energy used. For the same amount of energy absorbed in the GNP, interactions with protons, kVp photons and MV photons produce similar doses within several nanometers of the GNP surface, with differences below 15% for the first 10 nm. However, secondary electrons produced by kilovoltage photons have the longest range in water compared to those from protons and MV photons; for example, they cause a dose enhancement 20 times higher than that caused by protons 10 μm away from the GNP surface. We conclude that GNPs have the potential to enhance radiation therapy depending on the type of radiation source. Proton therapy can be enhanced significantly only if the GNPs are in close proximity to the biological target.
Abstract:
This paper proposes a continuous-time Markov chain (CTMC) based sequential analytical approach for composite generation and transmission system reliability assessment. The basic idea is to construct a CTMC model for the composite system and to perform sequential analyses based on this model. Various kinds of reliability indices can be obtained, including expectation, variance, frequency, duration and probability distribution. In order to reduce the dimension of the state space, the traditional CTMC modeling approach is modified by merging all high-order contingencies into a single state, which can be evaluated by Monte Carlo simulation (MCS). A state mergence technique is then developed to integrate all normal states, further reducing the dimension of the CTMC model. Moreover, a time discretization method is presented for the CTMC model calculation. Case studies are performed on the RBTS and a modified IEEE 300-bus test system. The results indicate that sequential reliability assessment can be performed by the proposed approach. Compared with the traditional sequential Monte Carlo simulation method, the proposed method is more efficient, especially for small-scale or very reliable power systems.
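A minimal CTMC sketch follows: a two-component system with assumed exponential failure and repair rates, its generator matrix, and state probabilities propagated by time discretization using the matrix exponential. The rates and the definition of the failure state are illustrative, not those of the RBTS or IEEE 300-bus studies.

```python
# A minimal CTMC reliability sketch: generator matrix for a two-component
# system and state probabilities obtained by time discretisation with the
# matrix exponential. Rates and the failure-state definition are assumed.
import numpy as np
from scipy.linalg import expm

lam, mu = 0.01, 0.1          # failure and repair rate per hour (assumed)
# States: 0 = both up, 1 = component 1 down, 2 = component 2 down, 3 = both down
Q = np.array([
    [-(2 * lam),        lam,         lam,        0.0],
    [        mu, -(mu + lam),        0.0,        lam],
    [        mu,        0.0, -(mu + lam),        lam],
    [       0.0,         mu,          mu, -(2 * mu)],
])

dt, horizon = 1.0, 1000.0
P_step = expm(Q * dt)                    # one-step transition matrix of the discretised chain
p = np.array([1.0, 0.0, 0.0, 0.0])       # start with both components up
failure_prob = []
for _ in range(int(horizon / dt)):
    p = p @ P_step
    failure_prob.append(p[3])            # only "both down" counted as system failure (assumed)

print("long-run unavailability ~", failure_prob[-1])
```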
Abstract:
Raised bog peat deposits form important archives for reconstructing past changes in climate. Precise and reliable age models are of vital importance for interpreting such archives. We propose enhanced Markov chain Monte Carlo based methods for obtaining age models from radiocarbon-dated peat cores, based on the assumption of piecewise linear accumulation. Included are automatic choice of sections, a measure of goodness of fit and outlier downweighting. The approach is illustrated using a peat core from the Netherlands.
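The piecewise-linear idea can be sketched with a toy Metropolis sampler: linear accumulation between fixed section boundaries, fitted to a few hypothetical dated depths with Gaussian errors and no radiocarbon calibration, unlike the full method of the paper.

```python
# A minimal Metropolis sketch of a piecewise-linear age-depth model fitted
# to dated depths. Dates are made up, errors are Gaussian and no radiocarbon
# calibration is applied; section boundaries are fixed rather than chosen
# automatically as in the paper.
import numpy as np

rng = np.random.default_rng(7)
depth = np.array([10.0, 30.0, 55.0, 80.0])      # cm (hypothetical core)
age = np.array([900.0, 2100.0, 3900.0, 5600.0]) # yr BP (hypothetical dates)
err = np.array([60.0, 70.0, 80.0, 80.0])        # 1-sigma errors (hypothetical)

knots = np.array([0.0, 40.0, 90.0])             # fixed section boundaries (assumed)

def log_like(knot_ages):
    if np.any(np.diff(knot_ages) <= 0):          # age must increase with depth
        return -np.inf
    model = np.interp(depth, knots, knot_ages)   # piecewise-linear age at each dated depth
    return -0.5 * np.sum(((age - model) / err) ** 2)

theta = np.array([500.0, 3000.0, 6000.0])        # initial knot ages
samples = []
for _ in range(20_000):
    prop = theta + rng.normal(0, 50.0, size=3)
    if np.log(rng.random()) < log_like(prop) - log_like(theta):
        theta = prop
    samples.append(theta.copy())

print("posterior mean knot ages:", np.mean(samples[5000:], axis=0))
```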
Abstract:
Some of the first results are reported from RISE, a new fast camera mounted on the Liverpool Telescope, primarily designed to obtain high time-resolution light curves of transiting extrasolar planets for the purpose of transit timing. A full and a partial transit of WASP-3 are presented, and a Markov chain Monte Carlo analysis is used to update the parameters from the discovery paper. This results in a planetary radius of 1.29 (+0.05, −0.12) R_J and therefore a density of 0.82 (+0.14, −0.09) ρ_J, consistent with previous results. The inclination is 85.06 (+0.16, −0.15) degrees, in agreement with the previously determined value but with a significant improvement in precision. Central transit times are found to be consistent with the ephemeris given in the discovery paper; however, a new ephemeris calculated using the longer baseline gives T_c(0) = 2454605.55915 ± 0.00023 HJD and P = 1.846835 ± 0.000002 days.
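The sketch below illustrates the general MCMC transit-fitting approach on synthetic data with a simple box-shaped transit model (depth, mid-time, duration); real analyses such as this one use limb-darkened transit models and the actual RISE photometry, so every number here is a stand-in.

```python
# A minimal Metropolis sketch of MCMC transit fitting: a box-shaped transit
# is fitted to synthetic photometry to obtain parameter posteriors. The box
# model and all numbers are illustrative, not the paper's analysis.
import numpy as np

rng = np.random.default_rng(8)
t = np.linspace(-0.1, 0.1, 300)                           # days around mid-transit
true = dict(depth=0.013, t0=0.002, dur=0.07)

def model(p, t):
    flux = np.ones_like(t)
    in_transit = np.abs(t - p["t0"]) < 0.5 * p["dur"]
    flux[in_transit] -= p["depth"]
    return flux

flux = model(true, t) + rng.normal(0, 0.002, t.size)       # synthetic light curve

def log_like(p):
    if p["depth"] <= 0 or p["dur"] <= 0:
        return -np.inf
    return -0.5 * np.sum(((flux - model(p, t)) / 0.002) ** 2)

p = dict(depth=0.01, t0=0.0, dur=0.06)                     # starting guess
steps = (5e-4, 5e-4, 2e-3)                                 # proposal widths per parameter
chain = []
for _ in range(30_000):
    prop = {k: v + rng.normal(0, s) for (k, v), s in zip(p.items(), steps)}
    if np.log(rng.random()) < log_like(prop) - log_like(p):
        p = prop
    chain.append([p["depth"], p["t0"], p["dur"]])

print("posterior means:", np.mean(chain[10_000:], axis=0))
print("posterior stds: ", np.std(chain[10_000:], axis=0))
```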
Abstract:
We present nine newly observed transits of TrES-3, taken as part of a transit timing program using the RISE instrument on the Liverpool Telescope. A Markov chain Monte Carlo analysis was used to determine the planet-to-star radius ratio and the inclination of the system, which were found to be R_p/R_* = 0.1664 (+0.0011, −0.0018) and i = 81.73 (+0.13, −0.04) degrees, respectively, consistent with previous results. The central transit times and uncertainties were also calculated, using a residual-permutation algorithm as an independent check on the errors. A re-analysis of eight previously published TrES-3 light curves was conducted to determine the transit times and uncertainties using consistent techniques. Whilst the transit times were not found to be in agreement with a linear ephemeris, giving χ² = 35.07 for 15 degrees of freedom, we interpret this to be the result of systematics in the light curves rather than a real transit timing variation, because the light curves that show the largest deviation from a constant period either have relatively little out-of-transit coverage or have clear systematics. A new ephemeris was calculated using the transit times and was found to be T_c(0) = 2454632.62610 ± 0.00006 HJD and P = 1.3061864 ± 0.0000005 days. The transit times were then used to place upper mass limits as a function of the period ratio of a potential perturbing planet, showing that our data are sufficiently sensitive to have probed sub-Earth-mass planets in both interior and exterior 2:1 resonances, assuming that the additional planet is in an initially circular orbit.
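The residual-permutation ("prayer bead") check mentioned above can be sketched as follows: the best-fit residuals are cyclically shifted, added back to the model, and the mid-transit time re-fitted for each shift, with the spread of the re-fitted values taken as the uncertainty. The box transit model and synthetic data below are illustrative assumptions, not the paper's RISE photometry.

```python
# A minimal sketch of the residual-permutation error estimate for a transit
# mid-time, using a synthetic light curve and a box transit model with fixed
# depth and duration; only the mid-time is fitted.
import numpy as np

rng = np.random.default_rng(9)
t = np.linspace(-0.1, 0.1, 400)
DEPTH, DUR, SIGMA = 0.025, 0.055, 0.002       # assumed depth, duration, noise

def box_model(t0):
    return 1.0 - DEPTH * (np.abs(t - t0) < 0.5 * DUR)

def fit_t0(flux, grid=np.linspace(-0.02, 0.02, 401)):
    chi2 = [np.sum((flux - box_model(t0)) ** 2) for t0 in grid]
    return grid[int(np.argmin(chi2))]

flux = box_model(0.003) + rng.normal(0, SIGMA, t.size)    # synthetic light curve
t0_best = fit_t0(flux)
residuals = flux - box_model(t0_best)

# Cyclically shift the residuals, add them back to the model, and re-fit t0;
# the scatter of the re-fitted values estimates the timing uncertainty.
shifted_fits = [fit_t0(box_model(t0_best) + np.roll(residuals, shift))
                for shift in range(1, t.size)]

print(f"t0 = {t0_best:.5f} +/- {np.std(shifted_fits):.5f} d (residual permutation)")
```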
Abstract:
We report the discovery of WASP-10b, a new transiting extrasolar planet (ESP) discovered by the Wide Angle Search for Planets (WASP) Consortium and confirmed using Nordic Optical Telescope FIbre-fed Echelle Spectrograph and SOPHIE radial velocity data. A 3.09-d period, 29 mmag transit depth and 2.36 h duration are derived for WASP-10b using WASP and high-precision photometric observations. Simultaneous fitting of the photometric and radial velocity data using a Markov chain Monte Carlo procedure leads to a planetary radius of 1.28 R_J, a mass of 2.96 M_J and an eccentricity of ≈ 0.06. WASP-10b is one of the more massive transiting ESPs, and we compare its characteristics to the current sample of transiting ESPs, for which there is currently little information on masses greater than ≈ 2 M_J and non-zero eccentricities. WASP-10's host star, GSC 2752-00114 (USNO-B1.0 1214-0586164), is among the fainter stars in the WASP sample, with V = 12.7 and a spectral type of K5. This result shows promise for future surveys of late-type dwarf stars.