950 results for Monte Carlo sampling
Abstract:
In this paper, we propose low-complexity algorithms based on Monte Carlo sampling for signal detection and channel estimation on the uplink in large-scale multiuser multiple-input multiple-output (MIMO) systems with tens to hundreds of antennas at the base station (BS) and a similar number of uplink users. A BS receiver that employs a novel mixed sampling technique (which makes a probabilistic choice between Gibbs sampling and random uniform sampling in each coordinate update) for detection and a Gibbs-sampling-based method for channel estimation is proposed. The algorithm proposed for detection alleviates the stalling problem encountered at high signal-to-noise ratios (SNRs) in conventional Gibbs-sampling-based detection and achieves near-optimal performance in large systems with M-ary quadrature amplitude modulation (M-QAM). A novel ingredient in the detection algorithm that is responsible for achieving near-optimal performance at low complexity is the mixed Gibbs sampling (MGS) strategy coupled with a multiple-restart (MR) strategy that uses an efficient restart criterion. Near-optimal detection performance is demonstrated for a large number of BS antennas and users (e.g., 64 and 128 BS antennas and users). The proposed Gibbs-sampling-based channel estimation algorithm refines an initial estimate of the channel obtained during the pilot phase through iterations with the proposed MGS-based detection during the data phase. In time-division duplex systems where channel reciprocity holds, these channel estimates can be used for multiuser MIMO precoding on the downlink. The proposed receiver is shown to achieve good performance and scale well for large dimensions.
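To make the mixed-sampling coordinate update concrete, here is a minimal sketch of an MGS-style detector, assuming a real-valued system model y = Hx + n and a small PAM alphabet (the real-axis component of 16-QAM); the mixing ratio q, the temperature in the Gibbs conditional, and the cost function are illustrative choices, not the paper's exact parameters.

```python
import numpy as np

def mgs_detect(y, H, alphabet, n_iters=100, q=None, sigma2=1.0, rng=None):
    """Mixed Gibbs sampling (sketch): each coordinate update is a Gibbs
    draw with probability 1 - q and a uniform random draw with probability
    q, which helps escape the high-SNR stalling of pure Gibbs detection."""
    rng = np.random.default_rng() if rng is None else rng
    K = H.shape[1]
    q = 1.0 / K if q is None else q              # illustrative mixing ratio
    x = rng.choice(alphabet, size=K)             # random initial symbol vector
    best, best_cost = x.copy(), float(np.sum((y - H @ x) ** 2))
    for _ in range(n_iters):
        for k in range(K):
            if rng.random() < q:                 # random (uniform) sampling
                x[k] = rng.choice(alphabet)
            else:                                # conventional Gibbs draw
                costs = np.array([np.sum((y - H @ np.concatenate(
                    (x[:k], [s], x[k + 1:]))) ** 2) for s in alphabet])
                p = np.exp(-(costs - costs.min()) / (2.0 * sigma2))
                x[k] = rng.choice(alphabet, p=p / p.sum())
            cost = float(np.sum((y - H @ x) ** 2))
            if cost < best_cost:                 # track the best vector seen
                best, best_cost = x.copy(), cost
    return best

rng = np.random.default_rng(0)
H = rng.standard_normal((64, 32))
x_true = rng.choice([-3, -1, 1, 3], size=32)
y = H @ x_true + 0.1 * rng.standard_normal(64)
x_hat = mgs_detect(y, H, np.array([-3, -1, 1, 3]), rng=rng)
```

A multiple-restart wrapper would simply rerun mgs_detect from fresh random initializations, keep the lowest-cost output, and stop once a restart criterion (e.g., a cost threshold) is met.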
Abstract:
The sampling of a given solid angle is a fundamental operation in realistic image synthesis, where the rendering equation describing light propagation in closed domains is solved. Monte Carlo methods for solving the rendering equation sample the solid angle subtended by the unit hemisphere or unit sphere in order to perform the numerical integration of the rendering equation. In this work we consider the problem of generating uniformly distributed random samples over the hemisphere and sphere. Our aim is to construct and study a parallel sampling scheme for the hemisphere and sphere. First we apply a symmetry property to partition the hemisphere and sphere: the solid angle subtended by a hemisphere is divided into a number of equal sub-domains, each representing the solid angle subtended by an orthogonal spherical triangle with fixed vertices and computable parameters. We then introduce two new algorithms for sampling orthogonal spherical triangles. Both algorithms are based on a transformation of the unit square. Like Arvo's algorithm for sampling an arbitrary spherical triangle, the suggested algorithms accommodate stratified sampling. We derive the necessary transformations for the algorithms. The first sampling algorithm generates a sample by mapping the unit square onto the orthogonal spherical triangle. The second algorithm directly computes the unit radius vector of a sampling point inside the orthogonal spherical triangle. The sampling of the entire hemisphere and sphere is performed in parallel for all sub-domains simultaneously by using the symmetry property of the partitioning. The applicability of the corresponding parallel sampling scheme to Monte Carlo and quasi-Monte Carlo solution of the rendering equation is discussed.
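The paper's triangle-specific mappings are not reproduced here, but the following sketch shows the standard baseline they build on: a measure-preserving map from the unit square to the unit hemisphere, under which stratifying (u1, u2) stratifies the sampled directions.

```python
import numpy as np

def uniform_hemisphere(u1, u2):
    """Map a unit-square point (u1, u2) to a direction distributed uniformly
    (by solid angle) over the unit hemisphere z >= 0: z = cos(theta) uniform
    in [0, 1] gives equal-area slices, phi uniform in [0, 2*pi)."""
    z = u1
    r = np.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * np.pi * u2
    return np.array([r * np.cos(phi), r * np.sin(phi), z])

# Stratified 4x4 sampling: jitter one sample per unit-square cell.
rng = np.random.default_rng(0)
dirs = [uniform_hemisphere((i + rng.random()) / 4, (j + rng.random()) / 4)
        for i in range(4) for j in range(4)]
```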
Abstract:
In this study a new, fully non-linear approach to Local Earthquake Tomography is presented. Local Earthquake Tomography (LET) is a non-linear inversion problem that allows the joint determination of earthquake parameters and velocity structure from arrival times of waves generated by local sources. Since the early development of seismic tomography, several inversion methods have been developed to solve this problem in a linearized way. In the framework of Monte Carlo sampling, we developed a new code based on the Reversible Jump Markov chain Monte Carlo (RJ-MCMC) sampling method. It is a trans-dimensional approach in which the number of unknowns, and thus the model parameterization, is treated as one of the unknowns. We show that our new code overcomes major limitations of linearized tomography, opening a new perspective in seismic imaging. Synthetic tests demonstrate that our algorithm is able to produce a robust and reliable tomography without the need to make subjective a priori assumptions about starting models and parameterization. Moreover, it provides a more accurate estimate of the uncertainties in the model parameters. It is therefore well suited for investigating the velocity structure in regions that lack accurate a priori information. Synthetic tests also reveal that the absence of regularization constraints allows more information to be extracted from the observed data, and that the velocity structure can be detected even in regions where the ray density is low and standard linearized codes fail. We also present high-resolution Vp and Vp/Vs models for two widely investigated regions: the Parkfield segment of the San Andreas Fault (California, USA) and the area around the Alto Tiberina fault (Umbria-Marche, Italy). In both cases, the models obtained with our code show a substantial improvement in data fit compared with models obtained from the same data set with linearized inversion codes.
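As a hedged illustration of the trans-dimensional machinery (not the authors' code), the sketch below implements one RJ-MCMC step on a toy 1-D piecewise-constant model. With uniform priors, the new node's value drawn from its prior, and symmetric move probabilities, the usual proposal and prior terms cancel and the acceptance ratio reduces to the likelihood ratio; the model and noise parameters here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(model, x_obs, y_obs, sigma=0.1):
    """Gaussian misfit of a piecewise-constant 1-D model: each observation
    takes the value of its nearest node (a 1-D Voronoi parameterization)."""
    pos, val = model
    pred = val[np.argmin(np.abs(pos[None, :] - x_obs[:, None]), axis=1)]
    return -0.5 * np.sum((y_obs - pred) ** 2) / sigma**2

def rjmcmc_step(model, x_obs, y_obs, vmin=0.0, vmax=1.0):
    """One reversible-jump step with birth/death/perturb moves."""
    pos, val = model
    move = rng.choice(["birth", "death", "perturb"])
    if move == "birth":                      # add a node, value from prior
        new = (np.append(pos, rng.random()),
               np.append(val, rng.uniform(vmin, vmax)))
    elif move == "death" and len(pos) > 1:   # remove a random node
        i = rng.integers(len(pos))
        new = (np.delete(pos, i), np.delete(val, i))
    else:                                    # perturb one node's value
        i = rng.integers(len(pos))
        v = val.copy()
        v[i] = rng.uniform(vmin, vmax)
        new = (pos, v)
    # With uniform priors and prior-drawn birth values, proposal and prior
    # terms cancel and the acceptance ratio is the likelihood ratio.
    log_a = log_likelihood(new, x_obs, y_obs) - log_likelihood(model, x_obs, y_obs)
    return new if np.log(rng.random()) < log_a else model
```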
Abstract:
In activation calculations, there are several approaches to quantifying uncertainties: deterministic, by means of sensitivity analysis, and stochastic, by means of Monte Carlo. Here, two different Monte Carlo approaches to nuclear data uncertainty are presented. The first is Total Monte Carlo (TMC). The second is Monte Carlo sampling of the covariance information included in the nuclear data libraries to propagate these uncertainties through the activation calculations; we call this latter approach Covariance Uncertainty Propagation (CUP). This work presents both approaches and their differences. They are also compared in an accelerator-driven system (ADS) activation calculation in which the cross-section uncertainties of 239Pu and 241Pu are propagated.
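A minimal sketch of the CUP idea, assuming a toy two-channel activation model: cross-sections are drawn from a multivariate normal built from a made-up covariance matrix and pushed through the calculation; real nominal values and covariances would come from the covariance files of a nuclear data library.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative nominal cross-sections (barns) and an invented relative
# covariance: 4% and 6% uncertainties with correlation 0.5.
sigma_nom = np.array([1.8, 1.4])
rel_cov = np.array([[0.04**2, 0.5 * 0.04 * 0.06],
                    [0.5 * 0.04 * 0.06, 0.06**2]])
cov = rel_cov * np.outer(sigma_nom, sigma_nom)

def activation(sigma, flux=1e14, n_atoms=1e20, t=3600.0):
    """Toy activation estimate: total reactions over time t from two channels."""
    return float(np.sum(sigma * 1e-24 * flux * n_atoms) * t)

# Sample cross-section sets from the covariance and propagate each one.
samples = rng.multivariate_normal(sigma_nom, cov, size=10_000)
results = np.array([activation(s) for s in samples])
print(f"mean = {results.mean():.3e}, relative std = {results.std() / results.mean():.2%}")
```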
Abstract:
The direct simulation Monte Carlo (DSMC) method is a widely used approach for simulating flows with rarefied or nonequilibrium effects. It relies heavily on sampling instantaneous values from prescribed distributions using random numbers. In this note, we briefly review the sampling techniques typically employed in the DSMC method and present two techniques to speed up the related sampling processes. One technique is very efficient for sampling the geometric locations of new particles, and the other is useful for sampling the Larsen-Borgnakke energy distribution.
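As an example of the kind of sampling DSMC performs (not the note's specific speedup techniques), the sketch below draws positions of new particles in a cell whose number density varies linearly, using the inverse-CDF method; the linear-density assumption is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_positions_linear(n, rho0, rho1, x0=0.0, x1=1.0):
    """Inverse-CDF sampling of n particle positions in a cell whose number
    density varies linearly from rho0 at x0 to rho1 at x1."""
    u = rng.random(n)
    if np.isclose(rho0, rho1):                # uniform density: trivial case
        return x0 + u * (x1 - x0)
    # Normalized CDF: F(s) = (2*rho0*s + (rho1 - rho0)*s**2) / (rho0 + rho1);
    # solving F(s) = u for s gives the expression below.
    s = (np.sqrt(rho0**2 + u * (rho1**2 - rho0**2)) - rho0) / (rho1 - rho0)
    return x0 + s * (x1 - x0)

x = sample_positions_linear(100_000, rho0=1.0, rho1=3.0)
```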
Abstract:
A new approach to treating large-Z systems by quantum Monte Carlo has been developed. It naturally leads to the notion of a 'valence energy'. The possibilities of the new approach have been explored by optimizing the wave functions for CuH and Cu and computing the dissociation energy and dipole moment of CuH using variational Monte Carlo. The dissociation energy obtained is about 40% smaller than the experimental value; the method is comparable with SCF and simple pseudopotential calculations. The dipole moment differs from the best theoretical estimate by about 50%, which is again comparable with other methods (complete active space SCF and pseudopotential methods).
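The following is a generic variational Monte Carlo sketch on the hydrogen atom rather than the CuH calculation: Metropolis sampling of |psi|^2 for the trial wavefunction psi = exp(-alpha*r), with the energy estimated as the average local energy. At alpha = 1 the trial function is exact and the estimate should reproduce -0.5 hartree.

```python
import numpy as np

rng = np.random.default_rng(7)

def local_energy(r, alpha):
    """Local energy of psi = exp(-alpha*r) for hydrogen (atomic units):
    E_L = -alpha^2/2 + (alpha - 1)/r."""
    return -0.5 * alpha**2 + (alpha - 1.0) / r

def vmc_energy(alpha, n_steps=100_000, step=0.5):
    """Metropolis sampling of |psi|^2, then averaging the local energy."""
    pos = np.array([0.0, 0.0, 1.0])
    r = np.linalg.norm(pos)
    energies = []
    for i in range(n_steps):
        trial = pos + step * rng.uniform(-1, 1, size=3)
        r_trial = np.linalg.norm(trial)
        # |psi(trial)|^2 / |psi(pos)|^2 = exp(-2*alpha*(r_trial - r))
        if rng.random() < np.exp(-2.0 * alpha * (r_trial - r)):
            pos, r = trial, r_trial
        if i > n_steps // 10:                 # discard burn-in
            energies.append(local_energy(r, alpha))
    return np.mean(energies)

print(vmc_energy(alpha=1.0))                  # exact ground state: -0.5
```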
Abstract:
Monte Carlo integration is firmly established as the basis for most practical realistic image synthesis algorithms because of its flexibility and generality. However, the visual quality of rendered images often suffers from estimator variance, which appears as visually distracting noise. Adaptive sampling and reconstruction algorithms reduce variance by controlling the sampling density and aggregating samples in a reconstruction step, possibly over large image regions. In this paper we survey recent advances in this area. We distinguish between “a priori” methods that analyze the light transport equations and derive sampling rates and reconstruction filters from this analysis, and “a posteriori” methods that apply statistical techniques to sets of samples to drive the adaptive sampling and reconstruction process. They typically estimate the errors of several reconstruction filters, and select the best filter locally to minimize error. We discuss advantages and disadvantages of recent state-of-the-art techniques, and provide visual and quantitative comparisons. Some of these techniques are proving useful in real-world applications, and we aim to provide an overview for practitioners and researchers to assess these approaches. In addition, we discuss directions for potential further improvements.
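A minimal sketch of the "a posteriori" idea, under strong simplifying assumptions: several Gaussian reconstruction filters are applied to the per-pixel sample means, each filter's per-pixel error is estimated as a crude bias^2 + variance proxy, and the lowest-error filter is selected locally. The variance shrinkage factor 1/(4*pi*sigma^2) is the continuous-Gaussian approximation to the filter's sum of squared weights; real methods use more careful error estimators.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def select_filters(mean_img, var_img, sigmas=(0.5, 1.0, 2.0, 4.0)):
    """Per-pixel a-posteriori selection among Gaussian reconstruction
    filters: estimate each filter's MSE as bias^2 + variance and keep the
    locally best filter output."""
    best_err = np.full(mean_img.shape, np.inf)
    best_out = mean_img.copy()
    for s in sigmas:
        filtered = gaussian_filter(mean_img, s)
        bias2 = (filtered - mean_img) ** 2        # deviation from sample mean
        var = gaussian_filter(var_img, s) / (4.0 * np.pi * s**2)
        err = bias2 + var
        better = err < best_err
        best_err[better] = err[better]
        best_out[better] = filtered[better]
    return best_out

rng = np.random.default_rng(3)
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noisy = clean + rng.normal(0.0, 0.1, (64, 64))
denoised = select_filters(noisy, np.full((64, 64), 0.1**2))
```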
Abstract:
With the ongoing shift in the computer graphics industry toward Monte Carlo rendering, there is a need for effective, practical noise-reduction techniques that are applicable to a wide range of rendering effects and easily integrated into existing production pipelines. This course surveys recent advances in image-space adaptive sampling and reconstruction algorithms for noise reduction, which have proven very effective at reducing the computational cost of Monte Carlo techniques in practice. These approaches leverage advanced image-filtering techniques with statistical methods for error estimation. They are attractive because they can be integrated easily into conventional Monte Carlo rendering frameworks, they are applicable to most rendering effects, and their computational overhead is modest.
Abstract:
In this study, a method for vehicle tracking through video analysis based on Markov chain Monte Carlo (MCMC) particle filtering with Metropolis sampling is proposed. The method handles multiple targets with low computational requirements and is therefore ideally suited for advanced driver-assistance systems that involve real-time operation. The method exploits the removed-perspective domain given by inverse perspective mapping (IPM) to define a fast and efficient likelihood model. Additionally, the method encompasses an interaction model using Markov random fields (MRF) that allows treatment of dependencies between the motions of targets. The proposed method is tested on highway sequences and compared to state-of-the-art methods for vehicle tracking, i.e., independent target tracking with Kalman filtering (KF) and joint tracking with particle filtering. The results show fewer tracking failures using the proposed method.
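A hedged single-target toy of the Metropolis step inside an MCMC particle filter (the paper's tracker is joint multi-target with an MRF interaction model, which is not reproduced): the predicted particle cloud serves as an independence proposal, i.e., the proposal equals the prior, so the acceptance ratio reduces to the likelihood ratio. The motion and observation models below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def log_likelihood(state, obs, sigma=1.0):
    """Toy observation model: Gaussian around the observed 2-D position."""
    return -0.5 * np.sum((state[:2] - obs) ** 2) / sigma**2

def mcmc_pf_step(particles, obs, n_mcmc=500):
    """One MCMC-particle-filter update: predict with a constant-velocity
    model, then run a Metropolis chain over the predicted cloud."""
    F = np.array([[1., 0., 1., 0.],
                  [0., 1., 0., 1.],
                  [0., 0., 1., 0.],
                  [0., 0., 0., 1.]])
    pred = particles @ F.T + rng.normal(0.0, 0.1, particles.shape)
    state = pred[rng.integers(len(pred))]
    chain = []
    for _ in range(n_mcmc):
        cand = pred[rng.integers(len(pred))]   # draw from the prior cloud
        log_a = log_likelihood(cand, obs) - log_likelihood(state, obs)
        if np.log(rng.random()) < log_a:
            state = cand
        chain.append(state)
    return np.array(chain)

particles = rng.normal(0.0, 2.0, (500, 4))     # [x, y, vx, vy] per particle
posterior = mcmc_pf_step(particles, obs=np.array([1.0, 2.0]))
estimate = posterior.mean(axis=0)
```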
Abstract:
A Monte Carlo simulation method for globular proteins, called extended scaled-collective-variable (ESCV) Monte Carlo, is proposed. This method combines two Monte Carlo algorithms known as the entropy-sampling and scaled-collective-variable algorithms. Entropy-sampling Monte Carlo is able to sample a large configurational space even in a disordered system that has a large number of potential barriers. In contrast, scaled-collective-variable Monte Carlo provides efficient sampling for a system whose dynamics is highly cooperative. Because a globular protein is a disordered system whose dynamics is characterized by collective motions, a combination of these two algorithms could provide an optimal Monte Carlo simulation for a globular protein. As a test case, we have carried out an ESCV Monte Carlo simulation for a cell-adhesive Arg-Gly-Asp-containing peptide, Lys-Arg-Cys-Arg-Gly-Asp-Cys-Met-Asp, and determined its conformational distribution at 300 K. The peptide contains a disulfide bridge between the two cysteine residues. This bond mimics the strong geometrical constraints that result from a protein's globular nature and that give rise to highly cooperative dynamics. The computational results show that the ESCV Monte Carlo simulation was not trapped at any local minimum and that the canonical distribution was correctly determined.
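A sketch of the entropy-sampling half of the method only (the scaled-collective-variable moves are omitted), on an illustrative 1-D toy system: configurations are weighted by exp(-S(E)), so the acceptance probability for a move from energy E to E' is min(1, exp(S(E) - S(E'))), and a flat energy histogram signals a converged entropy estimate. The energy function, proposal, and grid below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)

def entropy_sampling(energy_fn, propose, x0, S, bins, n_steps=100_000):
    """Entropy-sampling MC (sketch). S is a running estimate of the
    microcanonical entropy on an energy grid; in the full method it is
    refined between simulation rounds (refinement not shown here)."""
    def bin_of(E):
        return int(np.clip(np.searchsorted(bins, E) - 1, 0, len(S) - 1))
    x, E = x0, energy_fn(x0)
    hist = np.zeros(len(S))
    for _ in range(n_steps):
        x_new = propose(x, rng)
        E_new = energy_fn(x_new)
        if np.log(rng.random()) < S[bin_of(E)] - S[bin_of(E_new)]:
            x, E = x_new, E_new
        hist[bin_of(E)] += 1
    return x, hist                 # a flat histogram <=> converged S

# Toy demo: 1-D harmonic 'protein' with a flat initial entropy estimate.
bins = np.linspace(0.0, 25.0, 26)
x, hist = entropy_sampling(lambda x: x**2,
                           lambda x, r: x + r.normal(0, 0.5),
                           0.0, np.zeros(25), bins)
```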
Abstract:
Implementation of a Monte Carlo simulation for the solution of population balance equations (PBEs) requires choosing the initial sample number (N0), the number of replicates (M), and the number of bins for probability distribution reconstruction (n). It is found that the squared Hellinger distance, H^2, is a useful measure of the accuracy of a Monte Carlo (MC) simulation and can be related directly to N0, M, and n. Asymptotic approximations of H^2 are deduced and tested for both one-dimensional (1-D) and 2-D PBEs with coalescence. The central processing unit (CPU) cost, C, is found to follow a power-law relationship, C = a·M·N0^b, with the CPU cost index, b, indicating the weighting of N0 in the total CPU cost. n must be chosen to balance accuracy and resolution. For fixed n, M × N0 determines the accuracy of the MC prediction; if b > 1, the optimal solution strategy uses multiple replicates and a small sample size. Conversely, if 0 < b < 1, one replicate and a large initial sample size are preferred.
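A small worked example of the cost trade-off: for a fixed accuracy target the product T = M × N0 is held constant, so C = a·M·N0^b = a·T·N0^(b-1), which decreases with N0 when b < 1 and increases with N0 when b > 1; the values of a and b below are illustrative.

```python
# CPU cost model from the abstract, C = a * M * N0**b, with made-up a and b.
def cpu_cost(M, N0, a=1.0, b=1.5):
    return a * M * N0**b

T = 1_000_000                        # fixed total sample budget M * N0
for N0 in (1_000, 10_000, 100_000):
    M = T // N0
    print(f"N0 = {N0:>7}, M = {M:>5}, C = {cpu_cost(M, N0):.3e}")
# With b = 1.5 > 1, the smallest N0 (most replicates) is cheapest.
```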
Abstract:
The population Monte Carlo algorithm is an iterative importance sampling scheme for solving static problems. We examine the population Monte Carlo algorithm in a simplified setting, a single step of the general algorithm, and study a fundamental problem that occurs when applying importance sampling to high-dimensional problems. The precision of the computed estimate in the simplified setting is measured by the asymptotic variance of the estimate under conditions on the importance function. We demonstrate the exponential growth of the asymptotic variance with the dimension and show that the optimal covariance matrix for the importance function can be estimated in special cases.
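A short demonstration of the dimension effect described above, assuming a standard normal target and an isotropic normal proposal with a mildly mismatched scale: the variance of the normalized importance weights grows roughly exponentially with the dimension d.

```python
import numpy as np

rng = np.random.default_rng(17)

def weight_variance(d, scale=1.2, n=200_000):
    """Monte Carlo estimate of the variance of normalized importance weights
    for a standard normal target in d dimensions under an N(0, scale^2 I)
    proposal. Any scale != 1 makes this variance grow exponentially in d."""
    x = scale * rng.standard_normal((n, d))
    log_w = (-0.5 * np.sum(x**2, axis=1)               # log target density
             + 0.5 * np.sum((x / scale) ** 2, axis=1)  # minus log proposal
             + d * np.log(scale))
    w = np.exp(log_w - log_w.max())    # stabilize before exponentiating
    w /= w.mean()                      # normalized weights
    return float(np.var(w))

for d in (1, 5, 10, 20):
    print(f"d = {d:>2}: weight variance = {weight_variance(d):.3g}")
```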