960 results for variational Monte-Carlo method
Abstract:
In this paper we introduce a new algorithm based on the successful work of Fathi and Alexandrov on hybrid Monte Carlo algorithms for matrix inversion and for solving systems of linear algebraic equations. The algorithm consists of two parts: approximate inversion by Monte Carlo and iterative refinement using a deterministic method. Here we present a parallel hybrid Monte Carlo algorithm that uses Monte Carlo to generate an approximate inverse and improves the accuracy of that inverse with iterative refinement. The new algorithm is applied efficiently to sparse non-singular matrices. When solving a system of linear algebraic equations, Bx = b, the inverse matrix is used to compute the solution vector x = B^{-1}b. We present results that show the efficiency of the parallel hybrid Monte Carlo algorithm in the case of sparse matrices.
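As an illustration of the two-part scheme described above (a minimal sketch, not the authors' parallel implementation), the following Python fragment builds a rough Monte Carlo approximation to B^{-1} by sampling the Neumann series of a diagonally preconditioned matrix and then refines it with a deterministic Newton-Schulz iteration; the sampling choices and parameter values are illustrative assumptions.

```python
import numpy as np

def mc_approx_inverse(B, n_walks=200, max_len=30, seed=0):
    """Monte Carlo sketch of B^{-1}: write B = D(I - C) with C = I - D^{-1}B and
    estimate (I - C)^{-1} = sum_m C^m by weighted random walks, one row at a time.
    Assumes the spectral radius of C is < 1 (e.g. B diagonally dominant) and
    that no row of C is identically zero."""
    rng = np.random.default_rng(seed)
    n = B.shape[0]
    d_inv = 1.0 / np.diag(B)
    C = np.eye(n) - d_inv[:, None] * B
    P = np.abs(C) / np.abs(C).sum(axis=1, keepdims=True)   # transition probabilities
    N = np.zeros((n, n))                                    # estimate of (I - C)^{-1}
    for i in range(n):
        for _ in range(n_walks):
            state, weight = i, 1.0
            N[i, i] += 1.0                                  # m = 0 (identity) term
            for _ in range(max_len):
                nxt = rng.choice(n, p=P[state])
                weight *= C[state, nxt] / P[state, nxt]
                N[i, nxt] += weight
                state = nxt
    N /= n_walks
    return N * d_inv[None, :]                               # B^{-1} ~ (I - C)^{-1} D^{-1}

def newton_schulz_refine(B, X, iters=5):
    """Deterministic iterative refinement X <- X(2I - BX); converges when
    ||I - BX|| < 1, which the Monte Carlo stage is meant to provide."""
    I = np.eye(B.shape[0])
    for _ in range(iters):
        X = X @ (2 * I - B @ X)
    return X

# Usage sketch: solve Bx = b through the refined approximate inverse.
# X = newton_schulz_refine(B, mc_approx_inverse(B)); x = X @ b
```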
Abstract:
In this paper we consider bilinear forms of matrix polynomials and show that these polynomials can be used to construct solutions for the problems of solving systems of linear algebraic equations, matrix inversion, and finding extremal eigenvalues. An Almost Optimal Monte Carlo (MAO) algorithm for computing bilinear forms of matrix polynomials is presented. Results on the computational cost of a balanced algorithm for computing the bilinear form of a matrix power, i.e., an algorithm for which the probabilistic and systematic errors are of the same order, are presented and compared with the computational cost of a corresponding deterministic method.
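A minimal sketch of a MAO-style importance-sampling estimator for the simplest case, the bilinear form of a matrix power v^T A^k h: the walk starts proportionally to |v_i|, moves proportionally to |a_ij|, and carries a correcting weight. The dense-matrix setup and sample sizes are illustrative assumptions, not the exact algorithm of the paper.

```python
import numpy as np

def mc_bilinear_power(v, A, h, k, n_samples=20_000, seed=0):
    """Monte Carlo estimate of v^T A^k h by weighted random walks of length k."""
    rng = np.random.default_rng(seed)
    n = len(v)
    p0 = np.abs(v) / np.abs(v).sum()                      # initial distribution ~ |v|
    P = np.abs(A) / np.abs(A).sum(axis=1, keepdims=True)  # transitions ~ |a_ij|
    total = 0.0
    for _ in range(n_samples):
        i = rng.choice(n, p=p0)
        w = v[i] / p0[i]
        for _ in range(k):
            j = rng.choice(n, p=P[i])
            w *= A[i, j] / P[i, j]                        # unbiased path weight
            i = j
        total += w * h[i]
    return total / n_samples

# Check against the deterministic value on a small example:
# v @ np.linalg.matrix_power(A, k) @ h
```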
Abstract:
The adsorption of gases on microporous carbons is still poorly understood, partly because the structure of these carbons is not well known. Here, a model of microporous carbons based on fullerene-like fragments is used as the basis for a theoretical study of Ar adsorption on carbon. First, a simulation box was constructed, containing a plausible arrangement of carbon fragments. Next, using a new Monte Carlo simulation algorithm, two types of carbon fragments were gradually placed into the initial structure to increase its microporosity. Thirty-six different microporous carbon structures were generated in this way. Using the method proposed recently by Bhattacharya and Gubbins (BG), the micropore size distributions of the obtained carbon models and the average micropore diameters were calculated. For ten chosen structures, Ar adsorption isotherms (87 K) were simulated via the hyper-parallel tempering Monte Carlo simulation method. The isotherms obtained in this way were described by widely applied methods of microporous carbon characterisation, i.e. Nguyen and Do, Horvath-Kawazoe, high-resolution alpha_s plots, adsorption potential distributions, and the Dubinin-Astakhov (DA) equation. From the simulated isotherms described by the DA equation, the average micropore diameters were calculated using empirical relationships proposed by different authors, and they were compared with those from the BG method.
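For the last step mentioned, a hedged sketch of fitting the DA equation to a simulated isotherm and converting the characteristic energy into an average micropore width; the affinity coefficient assumed for Ar and the Stoeckli-type empirical relation used here are illustrative choices, and other relations from the literature give different values.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # J mol^-1 K^-1

def da_equation(p_rel, W0, E0_kJ, n, T=87.0, beta=0.33):
    """Dubinin-Astakhov isotherm W = W0 exp[-(A/(beta*E0))^n], A = R*T*ln(p0/p).
    beta ~ 0.33 is an assumed affinity coefficient for Ar; T = 87 K as in the text."""
    A = R * T * np.log(1.0 / np.asarray(p_rel))
    return W0 * np.exp(-(A / (beta * E0_kJ * 1e3)) ** n)

def average_micropore_width(p_rel, W):
    """Fit the DA equation to an isotherm (p_rel, W) and convert E0 to an average
    width L with one commonly quoted Stoeckli-type relation (illustrative only)."""
    (W0, E0_kJ, n), _ = curve_fit(da_equation, p_rel, W, p0=[max(W), 20.0, 2.0])
    return 10.8 / (E0_kJ - 11.4)   # L in nm; empirical, valid in a limited E0 range
```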
Abstract:
This paper employs an extensive Monte Carlo study to test the size and power of the BDS and close return methods of testing for departures from an independent and identical distribution. It is found that the finite-sample properties of the BDS test are far superior and that the close return method cannot be recommended as a model diagnostic. Neither test can be reliably used for very small samples, while the close return test has low power even at large sample sizes.
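As a sketch of how such a size experiment is set up (with a simple lag-1 autocorrelation check standing in for the BDS and close return statistics, which are not implemented here), one simulates i.i.d. data under the null many times and records the rejection frequency:

```python
import numpy as np
from scipy.stats import norm

def acf1_reject(x, alpha=0.05):
    """Stand-in i.i.d. diagnostic: reject if the lag-1 autocorrelation exceeds
    its approximate null bound z_{alpha/2}/sqrt(n). Not the BDS statistic."""
    x = np.asarray(x, float) - np.mean(x)
    r1 = (x[:-1] * x[1:]).sum() / (x ** 2).sum()
    return abs(r1) > norm.ppf(1 - alpha / 2) / np.sqrt(len(x))

def empirical_size(test, n_obs, n_rep=5000, seed=0):
    """Monte Carlo size experiment: fraction of rejections on i.i.d. N(0,1) data;
    a correctly sized 5% test should reject in roughly 5% of replications."""
    rng = np.random.default_rng(seed)
    return np.mean([test(rng.standard_normal(n_obs)) for _ in range(n_rep)])

# empirical_size(acf1_reject, n_obs=50) vs. n_obs=1000 shows how size distortion
# shrinks with sample size; power is estimated the same way with data generated
# under an alternative process instead of the i.i.d. null.
```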
Abstract:
The Monte Carlo Independent Column Approximation (McICA) is a flexible method for representing subgrid-scale cloud inhomogeneity in radiative transfer schemes. It does, however, introduce conditional random errors, but these have been shown to have little effect on climate simulations, where the spatial and temporal scales of interest are large enough for the effects of noise to be averaged out. This article considers the effect of McICA noise on a numerical weather prediction (NWP) model, where the time and spatial scales of interest are much closer to those at which the errors manifest themselves; as we show, this means that the noise is more significant. We suggest methods for efficiently reducing the magnitude of McICA noise and test these methods in a global NWP version of the UK Met Office Unified Model (MetUM). The resulting errors are put into context by comparison with errors due to the widely used assumption of maximum-random overlap of plane-parallel homogeneous cloud. For a simple implementation of the McICA scheme, forecasts of near-surface temperature are found to be worse than those obtained using the plane-parallel, maximum-random-overlap representation of clouds. However, by applying the methods suggested in this article, we can reduce the noise enough to give forecasts of near-surface temperature that are an improvement on the plane-parallel, maximum-random-overlap forecasts. We conclude that the McICA scheme can be used to improve the representation of clouds in NWP models, provided that the associated noise is sufficiently small.
Abstract:
In this work we investigate the small-sample properties and the robustness of parameter estimates in DSGE models. We take the Smets and Wouters (2007) model as a baseline and evaluate the performance of two estimation procedures: the Simulated Method of Moments (SMM) and Maximum Likelihood (ML). We examine the empirical distribution of the parameter estimates and its implications for impulse-response and variance-decomposition analyses under correct specification and under misspecification. Our results point to poor performance of SMM and to some patterns of bias in the impulse-response and variance-decomposition analyses with ML estimates in the misspecification cases considered.
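A schematic illustration of the Simulated Method of Moments loop evaluated in the study, written for a trivial AR(1) stand-in rather than the Smets-Wouters model; the moment set, the identity weighting matrix, and the optimizer are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def smm_estimate(data, simulate, moments, theta0, n_sim=10, seed=0):
    """Simulated Method of Moments sketch: pick theta so that moments of series
    simulated from the model match the data moments (identity weighting)."""
    rng = np.random.default_rng(seed)
    m_data = moments(data)
    shocks = [rng.standard_normal(len(data)) for _ in range(n_sim)]   # draws held fixed

    def objective(theta):
        m_sim = np.mean([moments(simulate(theta, e)) for e in shocks], axis=0)
        g = m_sim - m_data
        return float(g @ g)

    return minimize(objective, theta0, method="Nelder-Mead")

# Stand-in "model": y_t = rho*y_{t-1} + sigma*e_t, matched on variance and lag-1 correlation.
def simulate_ar1(theta, eps):
    rho, sigma = theta
    y = np.zeros_like(eps)
    for t in range(1, len(eps)):
        y[t] = rho * y[t - 1] + sigma * eps[t]
    return y

def ar1_moments(y):
    return np.array([np.var(y), np.corrcoef(y[:-1], y[1:])[0, 1]])
```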
Abstract:
Four simulation studies were carried out to examine the distribution of inverses of normally distributed variables as a function of different variances, means, truncation points, and sample sizes. The simulated variables were GMD, normally distributed and representing the average daily gain, and DIAS, obtained as the inverse of GMD and representing the number of days needed to reach a given weight. In all studies, the SAS® (1990) system was used to simulate the data and to analyse the results. The sample means of DIAS depended on the standard deviations used in the simulation. The regression analyses showed a reduction in the mean and in the standard deviation of DIAS as the mean of GMD increased. Including a truncation point between 10 and 25% of the mean of GMD reduced the mean of GMD and increased the mean of DIAS when the coefficient of variation of GMD exceeded 25%. The effect of group size on the means of GMD and DIAS was not significant, but the average sample standard deviation and CV of GMD increased with group size. Because of the dependence between the mean and the standard deviation, and the variation observed in the standard deviations of DIAS as a function of group size, using DIAS as a selection criterion may reduce accuracy. Therefore, to replace GMD with DIAS, an analysis method robust enough to eliminate the heterogeneity of variance is required.
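A minimal sketch of one simulation cell of the design just described (all numerical values are illustrative, and the SAS implementation of the original study is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, cv, group_size, gain_needed = 1.0, 0.30, 50, 160.0    # mean GMD (kg/day), CV, animals, kg
gmd = rng.normal(mu, cv * mu, size=group_size)             # simulated average daily gain
gmd = gmd[gmd > 0.15 * mu]                                  # truncation point at 15% of the mean
dias = gain_needed / gmd                                    # DIAS: days to reach the target gain
print(f"GMD : mean={gmd.mean():.3f}  sd={gmd.std(ddof=1):.3f}")
print(f"DIAS: mean={dias.mean():.1f}  sd={dias.std(ddof=1):.1f}")
# Repeating this over grids of mu, cv, truncation point, and group size reproduces
# the kind of factorial design examined in the study.
```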
Abstract:
Using data from a single simulation, we obtain Monte Carlo renormalization-group information in a finite region of parameter space by adapting the Ferrenberg-Swendsen histogram method. Several quantities are calculated in the two-dimensional N = 2 Ashkin-Teller and Ising models to show the feasibility of the method. We show renormalization-group Hamiltonian flows and locate the critical point by matching correlations, using just two simulations at a single temperature on lattices of different sizes to partially eliminate finite-size effects.
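The histogram step at the core of this approach can be sketched as single-histogram (Ferrenberg-Swendsen) reweighting: from configurations sampled at one coupling, observables at a nearby coupling are estimated with exponential weights. The sketch below is illustrative and reliable only where the sampled and target energy histograms overlap.

```python
import numpy as np

def reweight(E, O, beta0, beta):
    """Ferrenberg-Swendsen single-histogram reweighting sketch: estimate <O> at
    inverse temperature beta from samples (E_i, O_i) generated at beta0."""
    E, O = np.asarray(E, float), np.asarray(O, float)
    logw = -(beta - beta0) * E
    logw -= logw.max()                  # stabilize the exponentials
    w = np.exp(logw)
    return float((w * O).sum() / w.sum())
```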
Analytical and Monte Carlo approaches to evaluate probability distributions of interruption duration
Abstract:
Regulatory authorities in many countries, in order to maintain an acceptable balance between appropriate customer service quality and costs, are introducing performance-based regulation. These regulations impose penalties (and, in some cases, rewards) that introduce a component of financial risk to an electric power utility due to the uncertainty associated with preserving a specific level of system reliability. In Brazil, for instance, one of the reliability indices receiving special attention from the utilities is the maximum continuous interruption duration (MCID) per customer. This parameter is responsible for the majority of penalties in many electric distribution utilities. This paper describes analytical and Monte Carlo simulation approaches to evaluate probability distributions of interruption duration indices. More emphasis is given to the development of an analytical method to assess the probability distribution associated with the parameter MCID and the corresponding penalties. Case studies on a simple distribution network and on a real Brazilian distribution system are presented and discussed.
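A rough Monte Carlo counterpart of the analytical method, under simple illustrative assumptions (Poisson interruption arrivals, exponential durations, a hypothetical regulatory limit), estimating the distribution of the maximum continuous interruption duration per customer-year and the probability of incurring a penalty:

```python
import numpy as np

def mcid_distribution(rate_per_year=4.0, mean_duration_h=1.5, limit_h=4.0,
                      n_years=100_000, seed=0):
    """Sketch: sample yearly interruption histories and record the maximum
    continuous interruption duration (MCID); all parameters are illustrative."""
    rng = np.random.default_rng(seed)
    mcid = np.zeros(n_years)
    for y in range(n_years):
        k = rng.poisson(rate_per_year)                     # interruptions this year
        if k:
            mcid[y] = rng.exponential(mean_duration_h, k).max()
    return mcid, float((mcid > limit_h).mean())            # empirical P(penalty)

# mcid, p_penalty = mcid_distribution(); a histogram of mcid approximates the
# probability distribution that the analytical method evaluates directly.
```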
Abstract:
The sequential Monte Carlo / Quantum Mechanics method was used to obtain the solvatochromic shifts and the dipole moments of the following systems of organic molecules: uracil in aqueous solution, beta-carotene in oleic acid, ricinoleic acid in methanol and in ethanol, and oleic acid in methanol and in ethanol. Geometry optimizations and charge distributions were obtained with Density Functional Theory using the B3LYP functional and the 6-31G(d) basis set for all molecules except water and uracil, for which the 6-311++G(d,p) basis set was used. In the classical Monte Carlo treatment, the Metropolis algorithm was applied through the DICE program. The selection of statistically relevant configurations for computing average properties was implemented using the autocorrelation function calculated for each system. The radial distribution function of the molecular liquids was used to separate the first solvation shell, which establishes the main solute-solvent interaction. The relevant configurations of the first solvation shell of each system were submitted to quantum calculations at the semi-empirical level with the ZINDO/S-CI method. Absorption spectra were obtained for the solutes in the gas phase and for the molecular liquid systems mentioned above, and their electric dipole moments were also obtained. All absorption bands of the systems showed a blue shift, except the second band of the beta-carotene in oleic acid system, which showed a red shift. The results are in excellent agreement with experimental values found in the literature. All systems showed an increase in the electric dipole moment, since the solvent molecules are polar. The fatty-acid-in-alcohol systems gave very similar results, i.e., the fatty acids mentioned have similar spectroscopic behaviour in the same solvents. The simulations with the sequential Monte Carlo / Quantum Mechanics method demonstrate that the methodology is effective for obtaining the spectroscopic properties of the molecular liquids analysed.
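The configuration-selection step mentioned above can be sketched as follows: compute the autocorrelation of a property (for example the energy) along the Metropolis chain and keep only configurations separated by the estimated correlation interval. The 1/e threshold used here is an illustrative choice, not necessarily the criterion used in the original work.

```python
import numpy as np

def autocorrelation(x):
    """Normalized autocorrelation of a Monte Carlo time series (e.g. the energy)."""
    x = np.asarray(x, float) - np.mean(x)
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return acf / acf[0]

def statistically_uncorrelated(x, threshold=np.exp(-1)):
    """Indices of configurations separated by the interval at which the
    autocorrelation first drops below the threshold (illustrative 1/e)."""
    acf = autocorrelation(x)
    below = np.flatnonzero(acf < threshold)
    tau = int(below[0]) if below.size else len(x)
    return np.arange(0, len(x), max(tau, 1))
```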
Abstract:
The reverse Monte Carlo (RMC) method generates sets of points in space which yield radial distribution functions (RDFs) that approximate those of the system of interest. Such sets of configurations should, in principle, be sufficient to determine the structural properties of the system. In this work we apply the RMC technique to fluids of hard diatomic molecules. The experimental RDFs of the hard-dimer fluid were generated by the conventional MC method and used as input in the RMC simulations. Our results indicate that, when only a single (mono-variable) RDF is used, the RMC method is satisfactory only in determining the local structure of the fluid studied. We also suggest that the use of multi-variable RDFs would improve the technique significantly. However, the accuracy of the method turned out to be very sensitive to the variance of the input experimental RDF.
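A compact sketch of the RMC move loop used in this kind of study (single-particle displacements accepted when they reduce, or only slightly worsen, the chi-square distance between the instantaneous and target RDFs); the cubic box, step size, and chi-square tolerance are illustrative, and the O(N^2) RDF recomputation is deliberately naive.

```python
import numpy as np

def rdf(pos, box, bin_edges):
    """Radial distribution function with periodic boundaries (naive O(N^2)).
    bin_edges must be a fixed array so successive RDFs are comparable."""
    n = len(pos)
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)                          # minimum-image convention
    r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(n, 1)]
    hist, edges = np.histogram(r, bins=bin_edges)
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    return hist / (0.5 * n * (n / box ** 3) * shell)      # normalized to the ideal gas

def reverse_mc(pos, box, g_target, bin_edges, n_steps=20_000, step=0.2, sigma2=1e-3, seed=0):
    """Reverse Monte Carlo sketch: accept single-particle moves that lower the
    chi^2 distance to g_target, or raise it with Metropolis-like probability."""
    rng = np.random.default_rng(seed)
    chi2 = ((rdf(pos, box, bin_edges) - g_target) ** 2).sum() / sigma2
    for _ in range(n_steps):
        i = rng.integers(len(pos))
        old = pos[i].copy()
        pos[i] = (pos[i] + rng.uniform(-step, step, 3)) % box
        new = ((rdf(pos, box, bin_edges) - g_target) ** 2).sum() / sigma2
        if new < chi2 or rng.random() < np.exp(-(new - chi2) / 2.0):
            chi2 = new
        else:
            pos[i] = old
    return pos, chi2
```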
Abstract:
Using fixed-node diffusion quantum Monte Carlo (FN-DMC) simulations and density functional theory (DFT) within the generalized gradient approximation, we calculate the total energies of the relaxed and unrelaxed neutral, cationic, and anionic aluminum clusters Al_n (n = 1-13). From the obtained total energies, we extract the ionization potential and the electron detachment energy and compare them with previous theoretical and experimental results. Our results for the electronic properties from both the FN-DMC and DFT calculations are in reasonably good agreement with the available experimental data. A comparison between the FN-DMC and DFT results reveals that their differences are a few tenths of an electronvolt for both the ionization potential and the electron detachment energy. We also observe two distinct behaviors in the electron-correlation contribution to the total energies from smaller to larger clusters, which could be assigned to the structural transition of the clusters from planar to three-dimensional occurring between n = 4 and 5.
Abstract:
Monte Carlo (MC) simulation techniques are becoming very common in the medical physics community. MC can be used for modeling Single Photon Emission Computed Tomography (SPECT) and for dosimetry calculations. 188Re is a promising candidate for radiotherapeutic applications, and understanding the mechanisms of the radioresponse of tumor cells in vitro is of crucial importance as a first step before in vivo studies. The dosimetry of 188Re, used to target different lines of cancer cells, has been evaluated with the MC code GEANT4. The simulations estimate the average energy deposition per event in the biological samples. The development of prototypes for medical imaging, based on LaBr3:Ce scintillation crystals coupled with a position-sensitive photomultiplier, has been studied using GEANT4 simulations. Having tested, in the simulation, surface treatments different from the one applied to the crystal used in our experimental measurements, we found that the Energy Resolution (ER) and the Spatial Resolution (SR) could, in principle, be improved by machining the lateral surfaces of the crystal differently. We have then studied a system able to acquire both echographic and scintigraphic images, so that the medical operator can obtain complete anatomical and functional information for tumor diagnosis. The scintigraphic part of the detector is simulated with GEANT4, and first attempts to reconstruct tomographic images have been made using a standard back-projection algorithm. The proposed camera is based on slant collimators and LaBr3:Ce crystals. Within the Field of View (FOV) of the camera, it is possible to distinguish point sources located in air at a distance of about 2 cm from each other. For particular conditions of uptake, tumor depth, and size, the preliminary results show that the Signal-to-Noise Ratio (SNR) values obtained are higher than the standard detection limit.
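A minimal unfiltered back-projection sketch for the reconstruction step mentioned above (the GEANT4 detector simulation itself is not reproduced; the sinogram layout, angles in degrees, and linear interpolation are assumptions):

```python
import numpy as np
from scipy.ndimage import rotate

def backproject(sinogram, angles_deg):
    """Standard unfiltered back-projection: smear each 1D projection back across
    the image along its acquisition angle and average over angles.
    sinogram has shape (n_angles, n_detector_bins)."""
    n_det = sinogram.shape[1]
    image = np.zeros((n_det, n_det))
    for projection, angle in zip(sinogram, angles_deg):
        smear = np.tile(projection, (n_det, 1))            # constant along the ray direction
        image += rotate(smear, angle, reshape=False, order=1)
    return image / len(angles_deg)
```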
Abstract:
A complete understanding of the glass transition is still a challenging problem. Some researchers attribute it to the (hypothetical) occurrence of a static phase transition, others emphasize the dynamical transition of mode-coupling theory from an ergodic to a non-ergodic state. A class of disordered spin models has been found which unifies both scenarios. One of these models is the p-state infinite-range Potts glass with p > 4, which exhibits in the thermodynamic limit both a dynamical phase transition at a temperature T_D and a static one at T_0 < T_D. In this model every spin interacts with all the others, irrespective of distance. Interactions are taken from a Gaussian distribution. In order to better understand its behavior for a finite number N of spins and the approach to the thermodynamic limit, we have performed extensive Monte Carlo simulations of the p = 10 Potts glass up to N = 2560. The time-dependent spin-autocorrelation function C(t) shows strong finite-size effects and does not show a plateau even for temperatures around the dynamical critical temperature T_D. We show that the N- and T-dependence of the relaxation time for T > T_D can be understood by means of a dynamical finite-size scaling Ansatz. The behavior in the spin-glass phase down to a temperature T = 0.7 (about 60% of the transition temperature) is studied. Well-equilibrated configurations are obtained with the parallel tempering method, which is also useful for properly establishing static properties, such as the order-parameter distribution function P(q). Evidence is given for compatibility with a one-step replica-symmetry-breaking scenario. The study of the cumulants of the order parameter does not permit a reliable estimation of the static transition temperature. The autocorrelation function at low T exhibits a two-step decay, and a scaling behavior typical of supercooled liquids, the time-temperature superposition principle, is observed. In this region the dynamics is governed by Arrhenius relaxations, with barriers growing like N^{1/2}. We analyzed the single-spin dynamics down to temperatures much lower than the dynamical transition temperature. We found strong dynamical heterogeneities, which explain the non-exponential character of the spin autocorrelation function. The spins seem to relax according to dynamical clusters. The model in three dimensions tends to acquire ferromagnetic order for equal concentration of ferro- and antiferromagnetic bonds. The ordering has different characteristics from the pure ferromagnet. The spin-glass susceptibility behaves like chi_{SG} proportional to 1/T in the region where a spin glass is predicted to exist in mean field. Also the analysis of the cumulants is consistent with the absence of spin-glass ordering at finite temperature. The dynamics shows multi-scale relaxations if a bimodal distribution of bonds is used. We propose to understand this with a model based on the local spin configuration. This is consistent with the absence of plateaus if Gaussian interactions are used.
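The parallel tempering driver used to equilibrate the low-temperature phase can be sketched generically; the `energy` and `propose` callables stand in for the Potts-glass Hamiltonian and a single-spin update, and the update/swap schedule is an illustrative simplification.

```python
import numpy as np

def parallel_tempering(energy, propose, init_states, betas, n_sweeps, seed=0):
    """Replica-exchange Monte Carlo sketch: one Metropolis update per replica per
    sweep, followed by nearest-neighbour swap attempts with probability
    min(1, exp[(beta_i - beta_j)(E_i - E_j)])."""
    rng = np.random.default_rng(seed)
    states = [s.copy() for s in init_states]
    E = [energy(s) for s in states]
    for _ in range(n_sweeps):
        for k, beta in enumerate(betas):                   # local updates
            trial = propose(states[k], rng)
            dE = energy(trial) - E[k]
            if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
                states[k], E[k] = trial, E[k] + dE
        for k in range(len(betas) - 1):                    # replica-exchange attempts
            delta = (betas[k] - betas[k + 1]) * (E[k] - E[k + 1])
            if rng.random() < np.exp(min(0.0, delta)):
                states[k], states[k + 1] = states[k + 1], states[k]
                E[k], E[k + 1] = E[k + 1], E[k]
    return states, E
```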
Abstract:
This thesis deals with structure formation in a poor solvent for one- and two-component polymer brushes, in which polymer chains are anchored to a substrate by grafting. Such systems show lateral structure formation, which gives rise to interesting applications. The polymers are moved by Monte Carlo simulations in the continuum, based on CBMC algorithms and local monomer displacements. A newly developed variant of the CBMC algorithm allows the movement of inner chain segments, since the previous algorithm does not relax the monomers near the grafting monomer well. Several analysis methods are developed and adapted to study the phase behavior: these include Minkowski measures for the structural analysis of binary brushes and grafting correlations to study the influence of grafting patterns. For one-component brushes, structure formation occurs only in weakly grafted systems; dense grafting leads to closed brushes without lateral structure. For the gradual transition between the closed and the broken-up brush, a temperature range is determined in which the transition takes place. The influence of the grafting pattern (which perturbs the formation of long-range order) on the brush configuration is evaluated with the grafting correlations. With irregular grafting, the structures formed are larger than with regular grafting and are also more stable against higher temperatures. In binary systems, structures form even at dense grafting. In addition to the parameters temperature, grafting density, and grafting pattern, the composition of the two components comes into play. Further structures are thus possible: at equal proportions of the two components, stripe-like, lamellar patterns form; at unequal proportions, the minority component forms clusters embedded in the majority component. Even in regularly grafted systems, no long-range order develops. For binary brushes, too, the grafting pattern has a strong influence on structure formation. Irregular grafting patterns lead to separation of the components already at higher temperatures, but the structures formed are more irregular and somewhat larger than in regularly grafted systems. In contrast to self-consistent field theory, the simulations take fluctuations in the grafting into account and therefore show better agreement with experiment.
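As a sketch of the simplest of the moves mentioned above, a local-monomer-displacement Metropolis sweep for a single grafted bead-spring chain; only harmonic bonds are included, the grafted monomer stays fixed, and the non-bonded and solvent-quality terms of the actual model are omitted, so all parameters are illustrative.

```python
import numpy as np

def local_displacement_sweep(chain, beta, k_bond=100.0, l0=1.0, step=0.1, rng=None):
    """One Metropolis sweep of random local displacements over a grafted chain.
    chain[0] is the grafting point and is never moved; the energy includes only
    harmonic bonds, an illustrative reduction of the full brush model."""
    rng = np.random.default_rng() if rng is None else rng

    def bond_energy(c):
        d = np.linalg.norm(np.diff(c, axis=0), axis=1)
        return 0.5 * k_bond * ((d - l0) ** 2).sum()

    E = bond_energy(chain)
    for i in rng.permutation(np.arange(1, len(chain))):
        old = chain[i].copy()
        chain[i] = old + rng.uniform(-step, step, 3)
        E_new = bond_energy(chain)
        if rng.random() < np.exp(min(0.0, -beta * (E_new - E))):
            E = E_new
        else:
            chain[i] = old
    return chain, E
```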