960 results for Monte-Carlo approach

Relevance: 100.00%

Abstract:

Motor unit number estimation (MUNE) is a method which aims to provide a quantitative indicator of progression of diseases that lead to loss of motor units, such as motor neurone disease. However, the development of a reliable, repeatable and fast real-time MUNE method has hitherto proved elusive. Ridall et al. (2007) implement a reversible jump Markov chain Monte Carlo (RJMCMC) algorithm to produce a posterior distribution for the number of motor units, using a Bayesian hierarchical model that takes into account biological information about motor unit activation. However, we find that the approach can be unreliable for some datasets, since it can suffer from poor cross-dimensional mixing. Here we focus on improved inference by marginalising over latent variables to create the likelihood. In particular, we explore how this can improve the RJMCMC mixing and investigate alternative approaches that utilise the likelihood (e.g. DIC (Spiegelhalter et al., 2002)). For this model the marginalisation is over latent variables and, for a larger number of motor units, involves an intractable summation over all combinations of a set of latent binary variables whose joint sample space increases exponentially with the number of motor units. We provide a tractable and accurate approximation for this quantity and also investigate simulation approaches incorporated into RJMCMC using results of Andrieu and Roberts (2009).
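
The marginal likelihood here is a sum over all configurations of the latent binary firing indicators. A minimal sketch of the kind of unbiased Monte Carlo estimate that a pseudo-marginal (exact-approximate) scheme in the spirit of Andrieu and Roberts (2009) could use is given below; the toy Gaussian observation model, firing probabilities and unit amplitudes are illustrative assumptions, not the MUNE model of Ridall et al. (2007).

```python
import numpy as np

# Toy sketch: an unbiased Monte Carlo estimate of a marginal likelihood that sums over
# 2^K binary firing indicators z, of the kind usable inside a pseudo-marginal MCMC scheme.
# 'firing_prob', 'unit_amps' and the Gaussian noise model are illustrative assumptions.

def loglik_given_z(y, z, unit_amps, noise_sd):
    """Log-likelihood of an observed response amplitude y given which units fired."""
    mean = np.dot(z, unit_amps)
    return -0.5 * ((y - mean) / noise_sd) ** 2 - np.log(noise_sd * np.sqrt(2.0 * np.pi))

def estimate_marginal_loglik(y, firing_prob, unit_amps, noise_sd, n_mc=1000, rng=None):
    """Average p(y | z) over z ~ p(z): the average is unbiased for p(y | theta)."""
    rng = np.random.default_rng() if rng is None else rng
    K = len(unit_amps)
    z = rng.random((n_mc, K)) < firing_prob                 # binary firing indicators
    logp = np.array([loglik_given_z(y, zi, unit_amps, noise_sd) for zi in z])
    return np.logaddexp.reduce(logp) - np.log(n_mc)         # log of the Monte Carlo average
```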

Relevance: 100.00%

Abstract:

Dose kernels may be used to calculate dose distributions in radiotherapy (as described by Ahnesjo et al., 1999). Their calculation requires the use of Monte Carlo methods, usually by forcing interactions to occur at a point. The Geant4 Monte Carlo toolkit provides a capability to force interactions to occur in a particular volume. We have modified this capability and created a Geant4 application to calculate dose kernels in Cartesian, cylindrical, and spherical scoring systems. The simulation considers monoenergetic photons incident at the origin of a 3 m x 3 m x 3 m water volume. Photons interact via Compton scattering, the photoelectric effect, pair production, and Rayleigh scattering. By default, Geant4 models photon interactions by sampling a physical interaction length (PIL) for each process. The process returning the smallest PIL is then considered to occur. In order to force the interaction to occur within a given length, L_FIL, we scale each PIL according to the formula PIL_forced = L_FIL x (1 - exp(-PIL/PIL_0)), where PIL_0 is a constant. This ensures that the process occurs within L_FIL, whilst correctly modelling the relative probability of each process. Dose kernels were produced for incident photon energies of 0.1, 1.0, and 10.0 MeV. In order to benchmark the code, dose kernels were also calculated using the EGSnrc Edknrc user code. Identical scoring systems were used; namely, the collapsed cone approach of the Edknrc code. Relative dose difference images were then produced. Preliminary results demonstrate the ability of the Geant4 application to reproduce the shape of the dose kernels; median relative dose differences of 12.6, 5.75, and 12.6 % were found for incident photon energies of 0.1, 1.0, and 10.0 MeV respectively.
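
The forced-interaction rescaling described above is simple to illustrate. The sketch below is not Geant4 code: it applies PIL_forced = L_FIL x (1 - exp(-PIL/PIL_0)) to a PIL sampled for each process from a hypothetical set of macroscopic cross sections and selects the process with the smallest forced PIL.

```python
import numpy as np

def forced_interaction(cross_sections, L_FIL, PIL0, rng=None):
    """Sample a PIL per process, rescale each so the interaction occurs within L_FIL,
    and return the process with the smallest forced PIL (the one considered to occur)."""
    rng = np.random.default_rng() if rng is None else rng
    pils = {name: rng.exponential(1.0 / sigma) for name, sigma in cross_sections.items()}
    forced = {name: L_FIL * (1.0 - np.exp(-pil / PIL0)) for name, pil in pils.items()}
    winner = min(forced, key=forced.get)
    return winner, forced[winner]

# Example with made-up macroscopic cross sections (1/cm); the common monotone rescaling
# keeps every forced PIL below L_FIL while preserving which process wins.
process, depth = forced_interaction(
    {"compton": 0.05, "photoelectric": 0.01, "pair": 0.002, "rayleigh": 0.005},
    L_FIL=1.0, PIL0=10.0)
```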

Relevance: 100.00%

Abstract:

In this paper we present a unified sequential Monte Carlo (SMC) framework for performing sequential experimental design for discriminating between a set of models. The model discrimination utility that we advocate is fully Bayesian and based upon mutual information. SMC provides a convenient way to estimate the mutual information. Our experience suggests that the approach works well for both discrete and continuous sets of models and outperforms other model discrimination approaches.
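
As a rough illustration of how weighted particles can be used to estimate the mutual information utility at a candidate design, the sketch below assumes each candidate model carries a weighted particle approximation of its parameter posterior together with hypothetical simulate and likelihood methods; it is not the authors' implementation.

```python
import numpy as np

def mutual_information_utility(models, priors, d, n_sims=200, rng=None):
    """Estimate the mutual information between the model indicator and a future
    observation at design d, using each model's weighted parameter particles."""
    rng = np.random.default_rng() if rng is None else rng
    priors = np.asarray(priors)
    mi = 0.0
    for m, model in enumerate(models):
        for _ in range(n_sims):
            # draw a parameter particle by its weight, then simulate data y at design d
            theta = model.particles[rng.choice(len(model.particles), p=model.weights)]
            y = model.simulate(theta, d, rng)
            # particle approximation of the predictive density of y under every model
            evid = np.array([np.sum(mk.weights *
                                    np.array([mk.likelihood(y, t, d) for t in mk.particles]))
                             for mk in models])
            mi += priors[m] * (np.log(evid[m]) - np.log(priors @ evid)) / n_sims
    return mi
```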

Relevance: 100.00%

Abstract:

A computationally efficient sequential Monte Carlo algorithm is proposed for the sequential design of experiments for the collection of block data described by mixed effects models. The difficulty in applying a sequential Monte Carlo algorithm in such settings is the need to evaluate the observed data likelihood, which is typically intractable for all but linear Gaussian models. To overcome this difficulty, we propose to estimate the likelihood unbiasedly, and to perform inference and make decisions based on an exact-approximate algorithm. Two estimators are proposed: one based on quasi-Monte Carlo methods and one based on the Laplace approximation with importance sampling. Both of these approaches can be computationally expensive, so we propose exploiting parallel computational architectures to ensure designs can be derived in a timely manner. We also extend our approach to allow for model uncertainty. This research is motivated by important pharmacological studies related to the treatment of critically ill patients.
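
The Laplace-with-importance-sampling estimator can be sketched as follows, under simplifying assumptions: the random effects are marginalised with a Gaussian importance distribution centred at their conditional mode b_hat with covariance hessian_inv, and joint_logpdf (log p(y, b | theta, d)) is a model-specific input. This is an illustrative sketch, not the paper's code.

```python
import numpy as np

def laplace_is_loglik(joint_logpdf, b_hat, hessian_inv, n_samples=500, rng=None):
    """Importance-sampling estimate of log p(y | theta, d) with the random effects b
    integrated out; the importance density is N(b_hat, hessian_inv). The average
    importance weight is unbiased for the likelihood itself."""
    rng = np.random.default_rng() if rng is None else rng
    dim = len(b_hat)
    chol = np.linalg.cholesky(hessian_inv)
    draws = b_hat + rng.standard_normal((n_samples, dim)) @ chol.T
    diff = draws - b_hat
    prec = np.linalg.inv(hessian_inv)
    log_q = (-0.5 * np.einsum("ij,jk,ik->i", diff, prec, diff)
             - 0.5 * (dim * np.log(2.0 * np.pi) + np.linalg.slogdet(hessian_inv)[1]))
    log_w = np.array([joint_logpdf(b) for b in draws]) - log_q
    return np.logaddexp.reduce(log_w) - np.log(n_samples)
```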

Relevance: 100.00%

Abstract:

A new transdimensional Sequential Monte Carlo (SMC) algorithm called SMCVB is proposed. In an SMC approach, a weighted sample of particles is generated from a sequence of probability distributions which ‘converge’ to the target distribution of interest, in this case a Bayesian posterior distribution. The approach is based on the use of variational Bayes to propose new particles at each iteration of the SMCVB algorithm in order to target the posterior more efficiently. The variational-Bayes-generated proposals are not limited to a fixed dimension. This means that the weighted particle sets that arise can have varying dimensions, thereby also allowing us to estimate an appropriate dimension for the model. This novel algorithm is outlined within the context of finite mixture model estimation. It provides a less computationally demanding alternative to using reversible jump Markov chain Monte Carlo kernels within an SMC approach. We illustrate these ideas in a simulated data analysis and in applications.
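
A hedged sketch of the core idea, proposing particles from a variational Bayes approximation and importance-weighting them against the posterior, is given below; fit_vb, log_posterior and the q object with sample/logpdf methods are hypothetical placeholders rather than the SMCVB implementation.

```python
import numpy as np

def vb_proposal_particles(y, k, fit_vb, log_posterior, n_particles=1000, rng=None):
    """Fit a VB approximation q for a k-component mixture, propose particles from q,
    and importance-weight them against the (unnormalised) posterior."""
    rng = np.random.default_rng() if rng is None else rng
    q = fit_vb(y, k)                                       # variational approximation for dimension k
    thetas = [q.sample(rng) for _ in range(n_particles)]
    log_w = np.array([log_posterior(y, th, k) - q.logpdf(th) for th in thetas])
    weights = np.exp(log_w - np.logaddexp.reduce(log_w))   # self-normalised importance weights
    return thetas, weights
```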

Relevance: 100.00%

Abstract:

The problem of estimating the time-dependent statistical characteristics of a random dynamical system is studied under two different settings. In the first, the system dynamics are governed by a differential equation parameterized by a random parameter, while in the second, they are governed by a differential equation with an underlying parameter sequence characterized by a continuous-time Markov chain. We propose, for the first time in the literature, stochastic approximation algorithms for estimating various time-dependent process characteristics of the system. In particular, we provide efficient estimators for quantities such as the mean, variance and distribution of the process at any given time, as well as the joint distribution and the autocorrelation coefficient at different times. A novel aspect of our approach is that we assume that information on the parameter model (i.e., its distribution in the first case and the transition probabilities of the Markov chain in the second) is not available in either case. This is unlike most other work in the literature, which assumes availability of such information. Also, most prior work is geared towards analyzing the steady-state behavior of the random dynamical system, while our focus is on the time-dependent statistical characteristics, which are in general difficult to obtain. We prove the almost sure convergence of our stochastic approximation scheme in each case to the true value of the quantity being estimated. We provide a general class of strongly consistent estimators for the aforementioned statistical quantities, with regular sample average estimators being a specific instance. We also present an application of the proposed scheme to a widely used model in population biology. Numerical experiments in this framework show that the time-dependent process characteristics obtained using our algorithm in each case exhibit excellent agreement with exact results.
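
As a toy illustration of the estimators discussed above, the sketch below uses a Robbins-Monro style recursion to estimate the time-t mean of the process from independently simulated trajectories; simulate_at_t is a hypothetical forward simulator that draws the random parameter internally.

```python
import numpy as np

def sa_mean_estimate(simulate_at_t, t, n_iter=5000, rng=None):
    """Stochastic approximation estimate of E[X(t)] from independent simulated samples."""
    rng = np.random.default_rng() if rng is None else rng
    mu = 0.0
    for n in range(1, n_iter + 1):
        x = simulate_at_t(t, rng)   # one realisation of the process at time t
        a_n = 1.0 / n               # step size; with a_n = 1/n the recursion is the running mean
        mu += a_n * (x - mu)        # Robbins-Monro style update
    return mu
```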

Relevance: 100.00%

Abstract:

A method combining the Monte Carlo technique with the simple fragment approach has been developed for simulating network formation in amine-catalysed epoxy-anhydride systems. The method affords a detailed insight into the nature and composition of the network, showing the distribution of various fragments. It has been used to characterize network formation in the reaction of the diglycidyl ester of isophthalic acid with hexahydrophthalic anhydride, catalysed by benzyldimethylamine. Pre-gel properties such as number and weight distributions and average molecular weights have been calculated as a function of epoxy conversion, leading to a prediction of the gel-point conversion. Analysis of the simulated network further yields other characteristic properties such as the concentration of crosslink points, the distribution and concentration of elastically active chains, the average molecular weight between crosslinks, the sol content and the mass fraction of pendent chains. A comparison has been made of the properties obtained through simulation with those predicted by the fragment approach alone, which, however, gives only average properties. The Monte Carlo simulation results clearly show that loops and other cyclic structures occur in the gel. This may account for the differences observed between the results of the simulation and the fragment model in the post-gel phase.
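
For a loose illustration of Monte Carlo simulation of network formation (not the fragment-based model used in the paper), the toy sketch below randomly reacts the functional groups of f-functional monomers and uses the growth of the largest cluster to locate an approximate gel-point conversion.

```python
import numpy as np

def gel_point_mc(n_monomers=20000, functionality=3, rng=None):
    """Randomly pair functional groups (bond formation) and return the conversion at
    which the largest cluster first contains half of the monomers (a crude gel criterion)."""
    rng = np.random.default_rng() if rng is None else rng
    parent = np.arange(n_monomers)                       # union-find forest over monomers

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    groups = np.repeat(np.arange(n_monomers), functionality)
    rng.shuffle(groups)                                  # random order of reacting groups
    total_bonds = len(groups) // 2
    for b in range(total_bonds):
        i, j = find(groups[2 * b]), find(groups[2 * b + 1])
        parent[i] = j                                    # react the pair of groups
        if (b + 1) % 1000 == 0:
            sizes = np.bincount([find(k) for k in range(n_monomers)])
            if sizes.max() > 0.5 * n_monomers:
                return (b + 1) / total_bonds             # conversion of functional groups
    return 1.0
```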

Relevance: 100.00%

Abstract:

We present a stochastic simulation technique for subset selection in time series models, based on the use of indicator variables with the Gibbs sampler within a hierarchical Bayesian framework. As an example, the method is applied to the selection of subset linear AR models, in which only significant lags are included. Joint sampling of the indicators and parameters is found to speed convergence. We discuss the possibility of model mixing where the model is not well determined by the data, and the extension of the approach to include non-linear model terms.
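
A hedged sketch of an indicator-variable Gibbs step for lag selection is shown below; it assumes conjugate priors so that a marginal likelihood log_marglik(y, X_subset) is available, and it is not the paper's exact sampler.

```python
import numpy as np

def gibbs_update_indicators(y, X, gamma, log_marglik, prior_incl=0.5, rng=None):
    """One Gibbs sweep over the lag-inclusion indicators gamma (boolean array).
    X holds the lagged regressors column-wise; log_marglik(y, X_subset) is assumed
    available from a conjugate prior."""
    rng = np.random.default_rng() if rng is None else rng
    for j in range(X.shape[1]):
        g1, g0 = gamma.copy(), gamma.copy()
        g1[j], g0[j] = True, False
        l1 = log_marglik(y, X[:, g1]) + np.log(prior_incl)
        l0 = log_marglik(y, X[:, g0]) + np.log(1.0 - prior_incl)
        p_incl = 1.0 / (1.0 + np.exp(l0 - l1))   # conditional posterior inclusion probability
        gamma[j] = rng.random() < p_incl
    return gamma
```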

Relevance: 100.00%

Abstract:

This paper deals with the valuation of energy assets related to natural gas. In particular, we evaluate a baseload Natural Gas Combined Cycle (NGCC) power plant and an ancillary installation, namely a Liquefied Natural Gas (LNG) facility, in a realistic setting; specifically, these investments enjoy a long useful life but require some non-negligible time to build. We then focus on the valuation of several investment options, again in a realistic setting. These include the option to invest in the power plant when there is uncertainty concerning the initial outlay, the option's time to maturity, or the cost of CO2 emission permits, or when there is a chance to double the plant size in the future. Our model comprises three sources of risk. We consider uncertain gas prices with regard to both the current level and the long-run equilibrium level; the current electricity price is also uncertain. All of them are assumed to show mean reversion. The two-factor model for the natural gas price is calibrated using data from NYMEX NG futures contracts. The one-factor model for the electricity price is calibrated using data from the Spanish wholesale electricity market. We then use the estimated parameter values alongside actual physical parameters from a case study to value natural gas plants. Finally, the calibrated parameters are also used in a Monte Carlo simulation framework to evaluate several American-type options to invest in these energy assets. We accomplish this by following the least squares Monte Carlo approach.
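
The least squares Monte Carlo (Longstaff-Schwartz) valuation step can be sketched generically as below; the simulated price paths and payoff function stand in for the plant-specific cash flows of the case study, and the quadratic regression basis is an illustrative choice.

```python
import numpy as np

def lsm_value(paths, payoff, discount):
    """paths: (n_paths, n_steps + 1) simulated prices; payoff(s) -> immediate exercise value;
    discount: one-step discount factor. Returns the estimated option value at time 0."""
    cash = payoff(paths[:, -1])                          # value if exercise is deferred to the end
    for t in range(paths.shape[1] - 2, 0, -1):
        cash = cash * discount
        exercise = payoff(paths[:, t])
        itm = exercise > 0                               # regress only on in-the-money paths
        if itm.any():
            A = np.vander(paths[itm, t], 3)              # quadratic basis in the price
            coef, *_ = np.linalg.lstsq(A, cash[itm], rcond=None)
            continuation = A @ coef
            cash[itm] = np.where(exercise[itm] > continuation, exercise[itm], cash[itm])
    return discount * cash.mean()
```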

Relevance: 100.00%

Abstract:

The direct simulation Monte Carlo (DSMC) method is a widely used approach for flow simulations having rarefied or nonequilibrium effects. It involves heavily to sample instantaneous values from prescribed distributions using random numbers. In this note, we briefly review the sampling techniques typically employed in the DSMC method and present two techniques to speedup related sampling processes. One technique is very efficient for sampling geometric locations of new particles and the other is useful for the Larsen-Borgnakke energy distribution.
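
As a reminder of the kind of sampling the DSMC method relies on (this is generic acceptance-rejection, not the note's speed-up techniques), a minimal sketch:

```python
import numpy as np

def acceptance_rejection(f, f_max, proposal, rng=None, max_tries=100000):
    """Draw x from 'proposal' and accept with probability f(x) / f_max."""
    rng = np.random.default_rng() if rng is None else rng
    for _ in range(max_tries):
        x = proposal(rng)
        if rng.random() * f_max <= f(x):
            return x
    raise RuntimeError("acceptance rate too low; check that f_max bounds f")

# Example: sample a speed from an unnormalised Maxwellian-like density on [0, 4].
speed = acceptance_rejection(f=lambda v: v ** 2 * np.exp(-v ** 2), f_max=0.37,
                             proposal=lambda rng: rng.uniform(0.0, 4.0))
```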

Relevance: 100.00%

Abstract:

This work presents a study on the application of the Bayesian approach as a technique for solving the inverse problem of structural damage identification, in which the integrity of the structure is continuously described by a structural parameter called the cohesion parameter. The structure chosen for analysis is a simply supported Euler-Bernoulli beam. Damage identification is based on the changes that damage causes in the impulse response of the structure. The forward problem is solved via the Finite Element Method (FEM), which in turn is parameterized by the cohesion parameter of the structure. The damage identification problem is formulated as an inverse problem whose solution, from the Bayesian point of view, is a posterior probability distribution for each cohesion parameter of the structure, obtained using Markov chain Monte Carlo sampling. The uncertainties inherent in the measured data are accounted for in the likelihood function. Three solution strategies are presented. In Strategy 1, the cohesion parameters of the structure are sampled from posterior probability density functions with the same standard deviation. In Strategy 2, after a preliminary analysis of the damage identification process, potentially damaged regions of the beam are determined and the cohesion parameters associated with these regions are sampled from posterior probability density functions with different standard deviations. In Strategy 3, after a preliminary analysis of the damage identification process, only the parameters associated with the regions identified as potentially damaged are updated. A set of numerical results is presented for the three solution strategies, taking different noise levels into account.
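
A minimal sketch of the posterior sampling step, under simplifying assumptions: a random-walk Metropolis update of the cohesion parameter vector with a Gaussian likelihood comparing the measured impulse response to a hypothetical FEM prediction fem_response; priors, bounds and proposal scales are placeholders.

```python
import numpy as np

def metropolis_step(beta, y_meas, fem_response, noise_sd, prop_sd, rng=None):
    """One random-walk Metropolis update of the cohesion parameter vector beta."""
    rng = np.random.default_rng() if rng is None else rng

    def log_post(b):
        if np.any(b < 0.0) or np.any(b > 1.0):           # cohesion parameters kept in [0, 1]
            return -np.inf
        resid = y_meas - fem_response(b)                 # measured minus FEM impulse response
        return -0.5 * np.sum((resid / noise_sd) ** 2)    # Gaussian likelihood, flat prior

    proposal = beta + prop_sd * rng.standard_normal(len(beta))
    if np.log(rng.random()) < log_post(proposal) - log_post(beta):
        return proposal
    return beta
```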

Relevance: 100.00%

Abstract:

We present a new approach for estimating mixing between populations based on non-recombining markers, specifically Y-chromosome microsatellites. A Markov chain Monte Carlo (MCMC) Bayesian statistical approach is used to calculate the posterior probability

Relevance: 100.00%

Abstract:

The BGCore reactor analysis system was recently developed at Ben-Gurion University for calculating in-core fuel composition and spent fuel emissions following discharge. It couples the Monte Carlo transport code MCNP with an independently developed burnup and decay module, SARAF. Most existing MCNP-based depletion codes (e.g. MOCUP, Monteburns, MCODE) directly tally the one-group fluxes and reaction rates in order to prepare the one-group cross sections necessary for the fuel depletion analysis. BGCore, on the other hand, uses a multi-group (MG) approach for the generation of one-group cross sections. This coupling approach significantly reduces the code execution time without compromising the accuracy of the results. The substantial reduction in the BGCore code execution time allows consideration of problems with a much higher degree of complexity, such as the introduction of thermal-hydraulic (TH) feedback into the calculation scheme. Recently, a simplified TH feedback module, THERMO, was developed and integrated into the BGCore system. To demonstrate the capabilities of the upgraded BGCore system, a coupled neutronic-TH analysis of a full PWR core was performed. The BGCore results were compared with those of the state-of-the-art 3D deterministic nodal diffusion code DYN3D (Grundmann et al., 2000). Very good agreement between the BGCore and DYN3D results was observed in major core operational parameters, including the k-eff eigenvalue, axial and radial power profiles, and temperature distributions. This agreement confirms the consistency of the implementation of the TH feedback module. Although the upgraded BGCore system is capable of performing both depletion and TH analyses, the calculations in this study were performed for the beginning-of-cycle state with pre-generated fuel compositions.
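
The coupled calculation flow described above can be sketched schematically as below; all callables are user-supplied placeholders standing in for the MCNP transport run, the MG-to-one-group collapse, the THERMO-style TH feedback and the SARAF depletion step, so this is an outline of the coupling rather than BGCore code.

```python
def coupled_burnup(core_state, burnup_steps, transport, collapse_xs, th_solver,
                   deplete, converged, max_th_iter=5):
    """Schematic coupling loop: converge the TH feedback at each burnup step around the
    Monte Carlo transport solution, collapse the MG library with the resulting flux,
    then advance the fuel composition over the step."""
    for dt in burnup_steps:
        for _ in range(max_th_iter):
            flux, k_eff = transport(core_state)          # e.g. an MCNP criticality calculation
            temps = th_solver(core_state, flux)          # simplified TH feedback (fuel/coolant)
            if converged(core_state, temps):
                break
            core_state = core_state.with_temperatures(temps)   # hypothetical state update
        xs_1g = collapse_xs(flux)                        # MG library collapsed with the MC flux
        core_state = deplete(core_state, xs_1g, dt)      # burnup and decay over the step
    return core_state
```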

Relevance: 100.00%

Abstract:

Coupled Monte Carlo depletion systems provide a versatile and accurate tool for analyzing advanced thermal and fast reactor designs for a variety of fuel compositions and geometries. The main drawback of Monte Carlo-based systems is a long calculation time, imposing significant restrictions on the complexity and amount of design-oriented calculations. This paper presents an alternative approach to interfacing the Monte Carlo and depletion modules aimed at addressing this problem. The main idea is to calculate the one-group cross sections for all relevant isotopes required by the depletion module in a separate module external to the Monte Carlo calculations. Thus, the Monte Carlo module produces only the criticality and neutron spectrum, without tallying the individual isotope reaction rates. The one-group cross sections for all isotopes are generated in a separate module by collapsing a universal multigroup (MG) cross-section library using the Monte Carlo calculated flux. Here, the term "universal" means that a single MG cross-section set is applicable to all reactor systems and is independent of reactor characteristics such as the neutron spectrum; fuel composition; and fuel cell, assembly, and core geometries. This approach was originally proposed by Haeck et al. and implemented in the ALEPH code. Implementation of the proposed approach to Monte Carlo burnup interfacing was carried out through the BGCORE system. One-group cross sections generated by the BGCORE system were compared with those tallied directly by the MCNP code. Analysis of this comparison led to the conclusion that, in order to achieve the accuracy required for a reliable core and fuel cycle analysis, accounting for the background cross section (σ0) in the unresolved resonance energy region is essential. An extension of the one-group cross-section generation model was implemented and tested by tabulating and interpolating with a simplified σ0 model. A significant improvement in one-group cross-section accuracy was demonstrated.
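
The flux-weighted collapse at the heart of this approach reduces to a one-line formula, sigma_1g = sum_g(sigma_g * phi_g) / sum_g(phi_g); a minimal sketch with illustrative numbers follows.

```python
import numpy as np

def collapse_to_one_group(sigma_g, phi_g):
    """Flux-weighted collapse of a multi-group cross-section set to one group."""
    sigma_g, phi_g = np.asarray(sigma_g, dtype=float), np.asarray(phi_g, dtype=float)
    return np.sum(sigma_g * phi_g) / np.sum(phi_g)

# Example with a hypothetical 4-group cross-section set (barns) and group fluxes (a.u.).
sigma_1g = collapse_to_one_group([2.1, 5.4, 12.0, 45.0], [0.50, 0.30, 0.15, 0.05])
```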

Relevance: 100.00%

Abstract:

This paper reports on the use of a parallelised Model Predictive Control, Sequential Monte Carlo algorithm for solving the problem of conflict resolution and aircraft trajectory control in air traffic management, specifically around the terminal manoeuvring area of an airport. The target problem is nonlinear, highly constrained and non-convex, and involves a single decision-maker with multiple aircraft. The implementation includes a spatio-temporal wind model and rolling-window simulations for realistic ongoing scenarios. The method is capable of handling arriving and departing aircraft simultaneously, including some with very low fuel remaining. A novel flow field is proposed to smooth the approach trajectories for arriving aircraft, and all trajectories are planned in three dimensions. Massive parallelisation of the algorithm allows solution speeds to approach those required for real-time use.
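
A hedged sketch of one receding-horizon, sampling-based control step in the spirit of the approach above (not the paper's parallelised SMC implementation): candidate control sequences are sampled, rolled out through a dynamics model, infeasible rollouts are rejected, and only the first control of the best feasible candidate is applied before re-planning. All callables are placeholder models.

```python
import numpy as np

def mpc_sample_step(state, dynamics, cost, feasible, horizon=10, n_candidates=500,
                    control_sd=1.0, rng=None):
    """Sample candidate control sequences, roll them out, reject infeasible rollouts
    (e.g. loss of separation), and return the first control of the best candidate."""
    rng = np.random.default_rng() if rng is None else rng
    best_cost, best_u0 = np.inf, None
    for _ in range(n_candidates):
        controls = control_sd * rng.standard_normal((horizon, 3))   # e.g. heading/speed/climb
        x, total, ok = state, 0.0, True
        for u in controls:
            x = dynamics(x, u)
            if not feasible(x):
                ok = False
                break
            total += cost(x, u)
        if ok and total < best_cost:
            best_cost, best_u0 = total, controls[0]
    return best_u0          # apply only the first control, then re-plan on a rolling window
```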