954 results for Monte Carlo method
Abstract:
Multi-label classification (MLC) is the supervised learning problem where an instance may be associated with multiple labels. Modeling dependencies between labels allows MLC methods to improve their performance at the expense of an increased computational cost. In this paper we focus on the classifier chains (CC) approach for modeling dependencies. On the one hand, the original CC algorithm makes a greedy approximation, and is fast but tends to propagate errors down the chain. On the other hand, a recent Bayes-optimal method improves the performance, but is computationally intractable in practice. Here we present a novel double-Monte Carlo scheme (M2CC), both for finding a good chain sequence and performing efficient inference. The M2CC algorithm remains tractable for high-dimensional data sets and obtains the best overall accuracy, as shown on several real data sets with input dimension as high as 1449 and up to 103 labels.
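The Monte Carlo side of the scheme above can be illustrated with a minimal sketch (not the authors' M2CC implementation): sample full label vectors down the chain from hypothetical per-label conditional models, then return the most frequent vector as an approximate joint mode, avoiding greedy error propagation.

```python
import random
from collections import Counter

def mc_chain_inference(cond_probs, num_samples=1000, seed=0):
    """Monte Carlo inference down a classifier chain: sample label
    vectors by drawing each label from its conditional model given the
    labels sampled so far, then return the most frequent vector as an
    approximate joint mode. This avoids the greedy error propagation of
    plain CC without the exponential cost of Bayes-optimal search."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(num_samples):
        prefix = []
        for p in cond_probs:
            prefix.append(1 if rng.random() < p(prefix) else 0)
        counts[tuple(prefix)] += 1
    return max(counts, key=counts.get)

# Hypothetical 2-label chain for one fixed instance x: the second label
# strongly copies the first, so the joint mode is (1, 1).
chain = [
    lambda prefix: 0.6,                        # P(y1 = 1 | x)
    lambda prefix: 0.9 if prefix[0] else 0.1,  # P(y2 = 1 | x, y1)
]
print(mc_chain_inference(chain))
```

The `cond_probs` functions stand in for trained probabilistic classifiers; in the paper a second Monte Carlo search additionally optimizes the chain order itself.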
Abstract:
This work analysed the feasibility of using a fast, customized Monte Carlo (MC) method to perform accurate computation of dose distributions during pre- and intraplanning of intraoperative electron radiation therapy (IOERT) procedures. The MC method that was implemented, which has been integrated into a specific innovative simulation and planning tool, is able to simulate the fate of thousands of particles per second, and it was the aim of this work to determine the level of interactivity that could be achieved. The planning workflow enabled calibration of the imaging and treatment equipment, as well as manipulation of the surgical frame and insertion of the protection shields around the organs at risk and other beam modifiers. In this way, the multidisciplinary team involved in IOERT has all the tools necessary to perform complex MC dosage simulations adapted to their equipment in an efficient and transparent way. To assess the accuracy and reliability of this MC technique, dose distributions for a monoenergetic source were compared with those obtained using a general-purpose software package used widely in medical physics applications. Once accuracy of the underlying simulator was confirmed, a clinical accelerator was modelled and experimental measurements in water were conducted. A comparison was made with the output from the simulator to identify the conditions under which accurate dose estimations could be obtained in less than 3 min, which is the threshold imposed to allow for interactive use of the tool in treatment planning. Finally, a clinically relevant scenario, namely early-stage breast cancer treatment, was simulated with pre- and intraoperative volumes to verify that it was feasible to use the MC tool intraoperatively and to adjust dose delivery based on the simulation output, without compromising accuracy. 
The workflow provided a satisfactory model of the treatment head and the imaging system, enabling proper configuration of the treatment planning system and providing good accuracy in the dosage simulation.
Abstract:
A Monte Carlo simulation method for globular proteins, called extended-scaled-collective-variable (ESCV) Monte Carlo, is proposed. This method combines two Monte Carlo algorithms known as entropy-sampling and scaled-collective-variable algorithms. Entropy-sampling Monte Carlo is able to sample a large configurational space even in a disordered system that has a large number of potential barriers. In contrast, scaled-collective-variable Monte Carlo provides efficient sampling for a system whose dynamics is highly cooperative. Because a globular protein is a disordered system whose dynamics is characterized by collective motions, a combination of these two algorithms could provide an optimal Monte Carlo simulation for a globular protein. As a test case, we have carried out an ESCV Monte Carlo simulation for a cell adhesive Arg-Gly-Asp-containing peptide, Lys-Arg-Cys-Arg-Gly-Asp-Cys-Met-Asp, and determined the conformational distribution at 300 K. The peptide contains a disulfide bridge between the two cysteine residues. This bond mimics the strong geometrical constraints that result from a protein's globular nature and give rise to highly cooperative dynamics. Computational results show that the ESCV Monte Carlo was not trapped at any local minimum and that the canonical distribution was correctly determined.
Abstract:
We present Tethered Monte Carlo, a simple, general purpose method of computing the effective potential of the order parameter (Helmholtz free energy). This formalism is based on a new statistical ensemble, closely related to the micromagnetic one, but with an extended configuration space (through Creutz-like demons). Canonical averages for arbitrary values of the external magnetic field are computed without additional simulations. The method is put to work in the two-dimensional Ising model, where the existence of exact results enables us to perform high precision checks. A rather peculiar feature of our implementation, which employs a local Metropolis algorithm, is the total absence, within errors, of critical slowing down for magnetic observables. Indeed, high accuracy results are presented for lattices as large as L = 1024.
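The local Metropolis update mentioned above can be sketched in its plainest form (a single-spin-flip baseline for the 2D Ising model; the tethered ensemble and Creutz-like demons of the paper are not reproduced here):

```python
import math
import random

def metropolis_ising(L=16, beta=1.0, sweeps=50, seed=1):
    """Local Metropolis sampler for the 2D Ising model on an L x L
    periodic lattice (a plain single-spin-flip baseline, not the
    paper's tethered formalism). Returns the magnetization per spin
    after the final sweep; deep in the ordered phase (beta well above
    the critical value ~0.44) it stays near 1 from an ordered start."""
    rng = random.Random(seed)
    spin = [[1] * L for _ in range(L)]  # ordered start
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nn = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
                  + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
            dE = 2 * spin[i][j] * nn  # energy change of flipping s_ij
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                spin[i][j] = -spin[i][j]
    return sum(sum(row) for row in spin) / (L * L)

print(metropolis_ising())
```

The paper's point is that, embedded in the tethered ensemble, exactly this kind of local update loses its usual critical slowing down for magnetic observables.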
Abstract:
In the Monte Carlo simulation of both lattice field theories and of models of statistical mechanics, identities verified by exact mean values, such as Schwinger-Dyson equations, Guerra relations, Callen identities, etc., provide well-known and sensitive tests of thermalization bias as well as checks of pseudo-random-number generators. We point out that they can be further exploited as control variates to reduce statistical errors. The strategy is general, very simple, and almost costless in CPU time. The method is demonstrated in the two-dimensional Ising model at criticality, where the CPU gain factor lies between 2 and 4.
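The control-variate mechanism is easy to see on a toy integral (a minimal sketch; in the abstract's setting the exactly verified identities play the role that the uniform variable plays here):

```python
import math
import random
import statistics

def control_variate_demo(n=20000, seed=2):
    """Variance reduction with a control variate. Toy target: estimate
    E[exp(U)] for U ~ Uniform(0, 1); the control is U itself, whose
    mean 1/2 is known exactly. Subtracting c * (U - 1/2) leaves the
    estimator unbiased but cancels much of the fluctuation."""
    rng = random.Random(seed)
    u = [rng.random() for _ in range(n)]
    x = [math.exp(v) for v in u]                        # naive samples
    c = 1.0  # coefficient; the optimum Cov(X, U)/Var(U) is about 1.69
    y = [xi - c * (ui - 0.5) for xi, ui in zip(x, u)]   # controlled samples
    return statistics.variance(x), statistics.variance(y)

plain, controlled = control_variate_demo()
print(plain, controlled)  # the controlled variance is several times smaller
```

Both estimators converge to e - 1; only the variance changes, which is exactly the "almost costless" gain the abstract reports.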
Abstract:
We introduce a new class of quantum Monte Carlo methods, based on a Gaussian quantum operator representation of fermionic states. The methods enable first-principles dynamical or equilibrium calculations in many-body Fermi systems, and, combined with the existing Gaussian representation for bosons, provide a unified method of simulating Bose-Fermi systems. As an application relevant to the Fermi sign problem, we calculate finite-temperature properties of the two dimensional Hubbard model and the dynamics in a simple model of coherent molecular dissociation.
Abstract:
In this paper we consider the adsorption of argon on the surface of graphitized thermal carbon black and in slit pores at temperatures ranging from subcritical to supercritical conditions by the method of grand canonical Monte Carlo simulation. Attention is paid to the variation of the adsorbed density when the temperature crosses the critical point. The adsorbed density versus pressure (bulk density) shows interesting behavior at temperatures in the vicinity of and above the critical point, and also at extremely high pressures. Isotherms at temperatures greater than the critical temperature exhibit a clear maximum, and near the critical temperature this maximum is a very sharp spike. Under supercritical conditions and very high pressure the excess adsorbed density decreases towards zero for a graphite surface, while for slit pores negative excess density is possible at extremely high pressures. For imperfect pores (defined as pores that cannot accommodate an integral number of parallel layers under moderate conditions) the pressure at which the excess pore density becomes negative is less than that for perfect pores, and this is due to the packing effect in those imperfect pores. However, at extremely high pressure molecules can be packed in parallel layers once the chemical potential is great enough to overcome the repulsions among adsorbed molecules. (c) 2005 American Institute of Physics.
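The grand canonical move set underlying such simulations can be sketched for the simplest possible system (an ideal gas, a stand-in for the argon/graphite model; the acceptance ratios are the standard GCMC ones, with the interaction energy set to zero):

```python
import random

def gcmc_ideal_gas(zV=5.0, steps=20000, seed=3):
    """Grand canonical Monte Carlo for an ideal gas (zero interaction
    energy). zV is the activity times the volume: at equilibrium the
    particle number N is Poisson(zV), so the running average of N
    should converge to zV. Insertions and deletions are attempted with
    equal probability."""
    rng = random.Random(seed)
    N, total = 0, 0
    for _ in range(steps):
        if rng.random() < 0.5:                      # attempt insertion
            if rng.random() < min(1.0, zV / (N + 1)):
                N += 1
        elif N > 0:                                 # attempt deletion
            if rng.random() < min(1.0, N / zV):
                N -= 1
        total += N
    return total / steps

print(gcmc_ideal_gas())  # mean particle number, close to zV = 5
```

In the abstract's setting the same insertion/deletion acceptance ratios acquire Boltzmann factors for the argon-argon and argon-wall interaction energies.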
Abstract:
In recent work we have developed a novel variational inference method for partially observed systems governed by stochastic differential equations. In this paper we provide a comparison of the Variational Gaussian Process Smoother with an exact solution computed using a Hybrid Monte Carlo approach to path sampling, applied to a stochastic double well potential model. It is demonstrated that the variational smoother provides a very accurate estimate of the mean path, while the conditional variance is slightly underestimated. We conclude with some remarks as to the advantages and disadvantages of the variational smoother. © 2008 Springer Science + Business Media LLC.
Abstract:
In this paper we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence and a random walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. Flexible blocking strategies are introduced to further improve mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with double-well potential drift and another with SINE drift. The new algorithm's accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite-sample applications the algorithm is accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational approximation assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors in which case both algorithms are equally efficient.
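The proposal structure described above, a mixture of an independence sampler and a random-walk sampler, can be sketched on a one-dimensional toy target (both the target and the fixed independence proposal are illustrative stand-ins, not the paper's variational path proposal):

```python
import math
import random

def mixture_mh(n=30000, seed=4):
    """Metropolis-Hastings with a mixture of an independence sampler
    and a random-walk sampler. Target: a 1D Gaussian N(1, 1); the
    independence proposal N(0, 2^2) plays the role of the deterministic
    approximation to the posterior. Each kernel satisfies detailed
    balance, so picking one at random per step leaves the target
    invariant."""
    rng = random.Random(seed)

    def log_target(x):           # log N(1, 1) density, up to a constant
        return -0.5 * (x - 1.0) ** 2

    def log_q_ind(x):            # log density of the N(0, 2^2) proposal
        return -0.5 * (x / 2.0) ** 2

    x, total = 0.0, 0.0
    for _ in range(n):
        if rng.random() < 0.5:   # independence move
            y = rng.gauss(0.0, 2.0)
            log_a = (log_target(y) + log_q_ind(x)
                     - log_target(x) - log_q_ind(y))
        else:                    # symmetric random-walk move
            y = x + rng.gauss(0.0, 0.5)
            log_a = log_target(y) - log_target(x)
        if log_a >= 0 or rng.random() < math.exp(log_a):
            x = y
        total += x
    return total / n

print(mixture_mh())  # estimated mean of the target, close to 1
```

The independence component gives global jumps when the approximation is good; the random-walk component keeps the chain moving when it is not, which is the rationale for the mixture.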
Abstract:
2000 Mathematics Subject Classification: 91B28, 65C05.
Abstract:
An iterative Monte Carlo algorithm for evaluating linear functionals of the solution of integral equations with polynomial non-linearity is proposed and studied. The method uses a simulation of branching stochastic processes. It is proved that the mathematical expectation of the introduced random variable is equal to a linear functional of the solution. The algorithm uses the so-called almost optimal density function. Numerical examples are considered. A parallel implementation of the algorithm is also realized using the ATHAPASCAN package as the parallel environment. The computational results demonstrate high parallel efficiency of the presented algorithm and give a good solution when the almost optimal density function is used as the transition density.
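The underlying idea, an unbiased random-walk estimator for a functional of the solution, can be sketched for the linear case (a von Neumann-Ulam scheme on a toy discrete system; the abstract's branching construction extends this to polynomial non-linearity, and its "almost optimal" density would replace the naive transition choice used here):

```python
import random

def mc_linear_functional(A, f, i, walks=20000, seed=5):
    """Random-walk (von Neumann-Ulam) estimator for component u_i of
    the linear system u = A u + f, assuming nonnegative entries with
    row sums below one. The row entries serve directly as the
    transition density, and the row deficit 1 - sum_j A[i][j] acts as
    the absorption probability."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(walks):
        state, score = i, 0.0
        while state is not None:
            score += f[state]          # collision estimator: tally f at each visit
            r, acc, nxt = rng.random(), 0.0, None
            for j, a in enumerate(A[state]):
                acc += a
                if r < acc:
                    nxt = j
                    break
            state = nxt                # None => walk absorbed
        total += score
    return total / walks

# Toy system: u = A u + f with exact solution u = (2.0, 2.0)
A = [[0.2, 0.3], [0.1, 0.4]]
f = [1.0, 1.0]
print(mc_linear_functional(A, f, 0))  # close to the exact value 2.0
```

Each walk is one independent realization whose expectation equals the functional, which is the property the abstract proves for the branching version.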
Abstract:
In this paper we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence and a random walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. The novel diffusion bridge proposal derived from the variational approximation allows the use of a flexible blocking strategy that further improves mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with double-well potential drift and another with SINE drift. The new algorithm's accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite-sample applications the algorithm is accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational approximation assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors in which case both algorithms are equally efficient. © 2011 Springer-Verlag.
Abstract:
This work is an investigation into collimator designs for a deuterium-deuterium (DD) neutron generator for an inexpensive and compact neutron imaging system that can be implemented in a hospital. The envisioned application is for a spectroscopic imaging technique called neutron stimulated emission computed tomography (NSECT).
Previous NSECT studies have been performed using a Van de Graaff accelerator at the Triangle Universities Nuclear Laboratory (TUNL) at Duke University. This facility has provided invaluable research into the development of NSECT. To transition the current imaging method into a clinically feasible system, there is a need for a high-intensity fast neutron source that can produce collimated beams. The DD neutron generator from Adelphi Technologies Inc. is being explored as a possible candidate to provide the uncollimated neutrons. This DD generator is a compact source that produces 2.5 MeV fast neutrons with intensities of 10¹² n/s (4π). The neutron energy is sufficient to excite most isotopes of interest in the body with the exception of carbon and oxygen. However, a special collimator is needed to collimate the 4π neutron emission into a narrow beam. This work describes the development and evaluation of a series of collimator designs to collimate the DD generator for narrow beams suitable for NSECT imaging.
A neutron collimator made of high-density polyethylene (HDPE) and lead was modeled and simulated using the GEANT4 toolkit. The collimator was designed as a 52 x 52 x 52 cm³ HDPE block coupled with 1 cm lead shielding. Non-tapering (cylindrical) and tapering (conical) opening designs were modeled into the collimator to permit passage of neutrons. The shape, size, and geometry of the aperture were varied to assess the effects on the collimated neutron beam. Parameters varied were: inlet diameter (1-5 cm), outlet diameter (1-5 cm), aperture diameter (0.5-1.5 cm), and aperture placement (13-39 cm). For each combination of collimator parameters, the spatial and energy distributions of neutrons and gammas were tracked and analyzed to determine three performance parameters: neutron beam-width, primary neutron flux, and output quality. To evaluate these parameters, the simulated neutron beams were then regenerated for an NSECT breast scan involving a realistic breast lesion implanted into an anthropomorphic female phantom.
This work indicates potential for collimating and shielding a DD neutron generator for use in a clinical NSECT system. The proposed collimator designs produced a well-collimated neutron beam that can be used for NSECT breast imaging. The aperture diameter showed a strong correlation with the beam-width, the collimated neutron beam-width being about 10% larger than the physical aperture diameter. In addition, a collimator opening consisting of a tapering inlet and cylindrical outlet allowed greater neutron throughput when compared to a simple cylindrical opening. The tapering inlet design can allow additional neutron throughput when the neck is placed farther from the source. On the other hand, the tapering designs also decrease output quality (i.e. an increase in stray neutrons outside the primary collimated beam). All collimators are cataloged by beam-width, neutron flux, and output quality. For a particular NSECT application, an optimal choice should be based on the collimator specifications listed in this work.
Abstract:
Doctorate in Economics.