954 results for Monte Carlo method
Abstract:
New ways of combining observations with numerical models are discussed in which the size of the state space can be very large and the model can be highly nonlinear. The observations of the system can also be related to the model variables in highly nonlinear ways, making this data-assimilation (or inverse) problem highly nonlinear. First we discuss the connection between data assimilation and inverse problems, including regularization. We explore the choice of proposal density in a Particle Filter and show how the ‘curse of dimensionality’ might be beaten. In the standard Particle Filter, ensembles of model runs are propagated forward in time until observations are encountered, rendering it a pure Monte Carlo method. In large-dimensional systems this is very inefficient, and very large numbers of model runs are needed to solve the data-assimilation problem realistically. In our approach we steer all model runs towards the observations, resulting in a much more efficient method. By further ensuring almost equal weights we avoid performing model runs that turn out to be useless. Results are shown for the 40- and 1000-dimensional Lorenz 1995 model.
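As a point of reference for the inefficiency discussed above, here is a minimal sketch of the standard (bootstrap) Particle Filter on a toy one-dimensional model. The transition function, noise levels, and observation sequence are illustrative assumptions, not the Lorenz 1995 setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def transition(x):
    # Toy nonlinear model step (assumed for illustration).
    return 0.5 * x + 25 * x / (1 + x**2) + rng.normal(0, 1, x.shape)

def likelihood(y, x, obs_std=1.0):
    # Gaussian observation density p(y | x).
    return np.exp(-0.5 * ((y - x) / obs_std) ** 2)

def bootstrap_filter(observations, n_particles=500):
    particles = rng.normal(0, 1, n_particles)
    estimates = []
    for y in observations:
        # Propagate each particle with the model: the pure Monte Carlo step.
        particles = transition(particles)
        # Weight particles by the likelihood of the new observation.
        weights = likelihood(y, particles)
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))
        # Resample to combat weight degeneracy -- the step that becomes
        # hopeless in high dimensions without a guided proposal density.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
    return np.array(estimates)

obs = rng.normal(0, 2, 50)  # placeholder observation sequence
print(bootstrap_filter(obs)[:5])
```

The guided approach described in the abstract replaces the blind `transition` draw with a proposal that conditions on the upcoming observation, so that far fewer particles end up carrying negligible weight.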
Abstract:
A new approach to the study of the local organization in amorphous polymer materials is presented. The method couples neutron diffraction experiments that explore the structure on the spatial scale 1–20 Å with the reverse Monte Carlo fitting procedure to predict structures that accurately represent the experimental scattering results over the whole momentum transfer range explored. Molecular mechanics and molecular dynamics techniques are also used to produce atomistic models independently of any experimental input, thereby providing a test of the viability of the reverse Monte Carlo method in generating realistic models for amorphous polymeric systems. An analysis of the obtained models in terms of single-chain properties and of orientational correlations between chain segments is presented. We show the viability of the method with data from molten polyethylene. The analysis derives a model with average C-C and C-H bond lengths of 1.55 Å and 1.1 Å, respectively, an average backbone valence angle of 112°, a torsional angle distribution characterized by a fraction of trans conformers of 0.67 and, finally, a weak interchain orientational correlation at around 4 Å.
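The core of a reverse Monte Carlo fit is a Metropolis-like loop that accepts or rejects random atomic moves according to the agreement with the measured structure factor rather than an energy. Below is a minimal sketch under assumed toy inputs; the S(q) estimator, move size, and χ² weighting are illustrative, not the procedure of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def structure_factor(positions, q_grid):
    # Crude isotropic S(q) from pair distances (illustrative, not optimized).
    d = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    d = d[np.triu_indices(len(positions), k=1)]
    return np.array([1 + 2 * np.mean(np.sinc(q * d / np.pi)) for q in q_grid])

def chi_squared(model_sq, target_sq, sigma=0.05):
    # Misfit between model and "experimental" structure factors.
    return np.sum(((model_sq - target_sq) / sigma) ** 2)

def rmc_step(positions, target_sq, q_grid, step=0.1):
    old = chi_squared(structure_factor(positions, q_grid), target_sq)
    trial = positions.copy()
    trial[rng.integers(len(positions))] += rng.normal(0, step, 3)  # move one atom
    new = chi_squared(structure_factor(trial, q_grid), target_sq)
    # Accept if the fit improves, or occasionally even if it worsens.
    if new < old or rng.random() < np.exp(old - new):
        return trial, new
    return positions, old

pos = rng.uniform(0, 10, (40, 3))
q_grid = np.linspace(0.5, 10, 20)
target = structure_factor(rng.uniform(0, 10, (40, 3)), q_grid)  # stand-in data
for _ in range(200):
    pos, chi2 = rmc_step(pos, target, q_grid)
print(f"final chi^2: {chi2:.1f}")
```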
Abstract:
The decomposition of soil organic matter (SOM) is temperature dependent, but its response to a future warmer climate remains equivocal. Enhanced rates of decomposition of SOM under increased global temperatures might cause higher CO2 emissions to the atmosphere, and could therefore constitute a strong positive feedback. The magnitude of this feedback remains poorly understood, however, primarily because of the difficulty in quantifying the temperature sensitivity of stored, recalcitrant carbon that comprises the bulk (>90%) of SOM in most soils. In this study we investigated the effects of climatic conditions on soil carbon dynamics using the attenuation of the 14C ‘bomb’ pulse as recorded in selected modern European speleothems. These new data were combined with published results to further examine soil carbon dynamics, and to explore the sensitivity of labile and recalcitrant organic matter decomposition to different climatic conditions. Temporal changes in 14C activity inferred from each speleothem were modelled using a three-pool soil carbon inverse model (applying a Monte Carlo method) to constrain soil carbon turnover rates at each site. Speleothems from sites characterised by semi-arid conditions, sparse vegetation, thin soil cover and high mean annual air temperatures (MAATs) exhibit weak attenuation of the atmospheric 14C ‘bomb’ peak (a low damping effect, D, in the range 55–77%) and low modelled mean respired carbon ages (MRCA), indicating that decomposition is dominated by young, recently fixed soil carbon. By contrast, humid and high-MAAT sites characterised by a thick soil cover and dense, well-developed vegetation display the highest damping effect (D = c. 90%) and the highest MRCA values (in the range from 350 ± 126 years to 571 ± 128 years). This suggests that carbon incorporated into these stalagmites originates predominantly from the decomposition of old, recalcitrant organic matter. SOM turnover rates cannot be ascribed to a single climate variable (e.g. MAAT) but instead reflect a complex interplay of climate (e.g. MAAT and moisture budget) and vegetation development.
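The inverse-modelling step can be pictured as follows: sample candidate turnover times and pool fractions, forward-model the damping of the atmospheric 14C bomb pulse through the soil pools, and retain the parameter sets that best reproduce the speleothem record. The sketch below uses a stylized bomb curve and a simple exponential age spectrum per pool; all curves and priors are illustrative assumptions, not the calibrated model of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

years = np.arange(1950, 2001)
# Stylized atmospheric 14C bomb pulse (illustrative, not a calibration curve).
atm = 1 + 0.8 * np.exp(-0.5 * ((years - 1964) / 8.0) ** 2)

def forward(turnover_times, fractions):
    # Each pool damps the atmospheric signal with an exponential age spectrum.
    ages = np.arange(300)
    signal = np.zeros(len(years))
    for tau, f in zip(turnover_times, fractions):
        weights = np.exp(-ages / tau)
        weights /= weights.sum()
        pool = np.array([
            np.sum(weights * np.interp(y - ages, years, atm, left=1.0))
            for y in years
        ])
        signal += f * pool
    return signal

def monte_carlo_fit(observed, n_samples=2000):
    # Random search over pool turnover times and mixing fractions.
    best, best_err = None, np.inf
    for _ in range(n_samples):
        taus = rng.uniform([1, 10, 100], [10, 100, 1000])  # fast/slow/passive pools
        fracs = rng.dirichlet(np.ones(3))
        err = np.mean((forward(taus, fracs) - observed) ** 2)
        if err < best_err:
            best, best_err = (taus, fracs), err
    return best, best_err

obs = forward([5, 60, 600], [0.5, 0.3, 0.2])  # synthetic "speleothem" record
(taus, fracs), err = monte_carlo_fit(obs)
print(taus.round(1), fracs.round(2), f"mse={err:.2e}")
```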
Abstract:
Although the sunspot-number series have existed since the mid-19th century, they are still the subject of intense debate, with the largest uncertainty being related to the "calibration" of the visual acuity of individual observers in the past. Daisy-chain regression methods have been applied to inter-calibrate the observers, which may lead to significant bias and error accumulation. Here we present a novel method to calibrate the visual acuity of the key observers to the reference data set of Royal Greenwich Observatory sunspot groups for the period 1900-1976, using the statistics of the active-day fraction. For each observer we independently evaluate their observational threshold [S_S], defined such that the observer is assumed to miss all groups with an area smaller than S_S and to report all groups larger than S_S. Next, using a Monte Carlo method, we construct from the reference data set a correction matrix for each observer. The correction matrices are significantly non-linear and cannot be approximated by a linear regression or proportionality; we emphasize that corrections based on a linear proportionality between annually averaged data lead to serious biases and distortions of the data. The correction matrices are applied to the original sunspot group records for each day, and finally the composite corrected series is produced for the period since 1748. The corrected series displays secular minima around 1800 (Dalton minimum) and 1900 (Gleissberg minimum), as well as the Modern grand maximum of activity in the second half of the 20th century. The uniqueness of the grand maximum is confirmed for the last 250 years. It is shown that the adoption of a linear relationship between the data of Wolf and Wolfer results in grossly inflated group numbers in the 18th and 19th centuries in some reconstructions.
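The construction of a non-linear correction matrix can be sketched as follows: given an observer's area threshold, tabulate over a reference daily record the conditional distribution of the true group count for each count that observer would have reported. The synthetic reference record below is an illustrative stand-in for the Royal Greenwich Observatory data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic reference data: for each day, the areas of the visible groups.
reference_days = [rng.lognormal(4, 1, rng.poisson(5)) for _ in range(10_000)]

def correction_matrix(threshold, max_groups=15):
    # counts[r, t]: frequency of true count t on days when an observer who
    # misses all groups with area < threshold would have reported r groups.
    counts = np.zeros((max_groups + 1, max_groups + 1))
    for areas in reference_days:
        true = min(len(areas), max_groups)
        reported = min(int(np.sum(areas >= threshold)), max_groups)
        counts[reported, true] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts),
                     where=row_sums > 0)

M = correction_matrix(threshold=100.0)
# Expected true count on a day when this observer reported 3 groups:
print(M[3] @ np.arange(16))
```

Reading rows of the matrix shows immediately why a single proportionality constant cannot do the job: the mapping from reported to true counts changes shape with the count itself.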
Abstract:
In this paper we compare the performance of two statistical approaches for the analysis of data obtained from social research. In the first approach, we use normal models with joint regression modelling of the mean and of the variance heterogeneity. In the second approach, we use hierarchical models. In the first case, individual and social variables are included as explanatory variables in the regression modelling of both the mean and the variance, while in the second case, the variance at level 1 of the hierarchical model depends on the individuals (their age), and at level 2 the variance is assumed to change according to socioeconomic stratum. Applying these methodologies, we analyze a Colombian height data set to find differences that can be explained by socioeconomic conditions. We also present some theoretical and empirical results concerning the two models. From this comparative study, we conclude that it is better to jointly model the mean and the variance heterogeneity in all cases. We also observe that convergence of the Gibbs sampling chain used in the Markov chain Monte Carlo method for the joint modelling of the mean and variance heterogeneity is quickly achieved.
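The Gibbs-sampling machinery behind such models can be illustrated with a stripped-down version: a common mean with a separate variance per socioeconomic stratum, under conjugate priors. The priors and synthetic data below are illustrative assumptions, not the models compared in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic heights (cm) for three strata with different variances.
data = [rng.normal(165, s, 200) for s in (4.0, 7.0, 12.0)]

def gibbs(data, n_iter=5000, mu0=160.0, tau2=100.0, a0=2.0, b0=10.0):
    mu, sig2 = 160.0, np.array([25.0] * len(data))
    trace_mu, trace_sig2 = [], []
    for _ in range(n_iter):
        # Draw mu | sig2: the normal prior N(mu0, tau2) is conjugate.
        prec = 1 / tau2 + sum(len(y) / s for y, s in zip(data, sig2))
        mean = (mu0 / tau2 + sum(y.sum() / s for y, s in zip(data, sig2))) / prec
        mu = rng.normal(mean, np.sqrt(1 / prec))
        # Draw sig2_j | mu: the inverse-gamma prior IG(a0, b0) is conjugate.
        sig2 = np.array([
            1 / rng.gamma(a0 + len(y) / 2, 1 / (b0 + 0.5 * np.sum((y - mu) ** 2)))
            for y in data
        ])
        trace_mu.append(mu)
        trace_sig2.append(sig2)
    return np.array(trace_mu), np.array(trace_sig2)

mu_chain, sig2_chain = gibbs(data)
# Discard burn-in, then summarize the posterior.
print(mu_chain[1000:].mean(), sig2_chain[1000:].mean(axis=0))
```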
Abstract:
MCNP has long stood as one of the main Monte Carlo radiation transport codes. Its use, as with any other Monte Carlo based code, has increased as computers have become faster and more affordable over time. However, using the Monte Carlo method to tally events in volumes that represent a small fraction of the whole system may prove unfeasible if a straight analogue transport procedure (without variance reduction techniques) is employed and precise results are demanded. Calculation of reaction rates in activation foils placed in critical systems is one such case. The present work takes advantage of the fixed-source representation in MCNP to perform this task with more effective sampling: the neutron population in the vicinity of the tallying region is characterized and then used in a geometrically reduced coupled simulation. An extended analysis of source-dependent parameters is presented in order to understand their influence on simulation performance and on the validity of the results. Although discrepant results have been observed for small enveloping regions, the procedure is very efficient, giving adequate and precise results in shorter times than the standard analogue procedure.
Abstract:
A high incidence of waterborne diseases is observed worldwide, and in order to address contamination problems prior to an outbreak, quantitative microbial risk assessment is a useful tool for estimating the risk of infection. The objective of this paper was to assess the probability of Giardia infection from consuming water from shallow wells in a peri-urban area. Giardia has been described as an important waterborne pathogen and has been reported in several water sources, including groundwater. Sixteen water samples were collected and examined according to US EPA Method 1623 (2005). A Monte Carlo method was used to assess the potential risk as described by the exponential dose-response model. Giardia cysts occurred in 62.5% of the samples (0.1-36.1 cysts/l). A median risk of 10⁻¹ for the population was estimated, and adult ingestion was the main risk driver. This study illustrates the vulnerability of shallow-well water supply systems in peri-urban areas.
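The exponential dose-response calculation reduces to a short Monte Carlo loop: sample a cyst concentration and an ingestion volume, then evaluate P(infection) = 1 − exp(−r·dose). The distributions below, and the dose-response parameter r = 0.0199 often quoted for Giardia, are illustrative assumptions rather than the values fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def daily_infection_risk(n_sims=100_000, r=0.0199):
    # Cyst concentration (cysts/L): lognormal roughly spanning the
    # observed 0.1-36.1 cysts/L range (illustrative distribution).
    conc = rng.lognormal(mean=0.5, sigma=1.2, size=n_sims)
    # Daily ingestion of untreated well water (L/day), an assumed distribution.
    volume = rng.triangular(0.5, 1.0, 2.0, size=n_sims)
    # Exponential dose-response model: P(infection) = 1 - exp(-r * dose).
    return 1 - np.exp(-r * conc * volume)

risk = daily_infection_risk()
print(f"median daily risk: {np.median(risk):.4f}, "
      f"95th percentile: {np.percentile(risk, 95):.4f}")
```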
Abstract:
This paper generalizes the HEGY-type test to detect seasonal unit roots in data at any frequency, based on the seasonal unit root tests for univariate time series of Hylleberg, Engle, Granger and Yoo (1990). We first introduce the seasonal unit roots and then derive the mechanism of the HEGY-type test for data of any frequency. We then provide the asymptotic distributions of our test statistics when different test regressions are employed, and find that the F-statistics for testing conjugate unit roots have the same asymptotic distributions. We compute the finite-sample and asymptotic critical values for daily and hourly data by a Monte Carlo method. The power and size properties of our test for hourly data are investigated; we find that tests including lag augmentations in the auxiliary regression without lag elimination have the smallest size distortion, and that tests with seasonal dummies included in the auxiliary regression have more power than tests without them. Finally, we apply our test to hourly wind power production data from Sweden and show that there are no seasonal unit roots in the series.
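Obtaining finite-sample critical values by Monte Carlo follows a standard recipe: simulate many series under the null, compute the test statistic on each, and read off empirical quantiles. For brevity, the sketch below uses the ordinary Dickey-Fuller t-statistic on a random walk rather than the full HEGY-type auxiliary regression.

```python
import numpy as np

rng = np.random.default_rng(6)

def df_t_stat(y):
    # t-statistic for rho in: diff(y)_t = rho * y_{t-1} + e_t (no constant).
    dy, ylag = np.diff(y), y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)
    resid = dy - rho * ylag
    se = np.sqrt(resid @ resid / (len(dy) - 1) / (ylag @ ylag))
    return rho / se

def mc_critical_values(n_obs=500, n_reps=20_000, quantiles=(0.01, 0.05, 0.10)):
    stats = np.empty(n_reps)
    for i in range(n_reps):
        # A random walk: the null hypothesis of a unit root.
        y = np.cumsum(rng.normal(size=n_obs))
        stats[i] = df_t_stat(y)
    return dict(zip(quantiles, np.quantile(stats, quantiles)))

print(mc_critical_values())
```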
Abstract:
To illustrate an application of GARCH-family models to exchange rates, statistical techniques were employed encompassing multivariate principal component analysis and time-series modelling of the mean and of the variance (volatility), the first and second moments respectively. Principal component analysis helps reduce the dimension of the data, so that fewer models need to be estimated without losing information from the original data set. The use of GARCH models, in turn, is justified by the presence of heteroscedasticity in the variance of the exchange-rate return series. Based on the estimated models, new daily series were simulated via the Monte Carlo (MC) method, and these served as the basis for estimating confidence intervals for future exchange-rate scenarios. For the proposed application, the exchange rates with the largest market share were selected, according to the BIS study published every three years.
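The simulation step can be sketched directly: given estimated GARCH(1,1) parameters, generate many future return paths and take empirical quantiles as confidence intervals for the scenarios. The parameter values below are illustrative assumptions, not estimates from the exchange-rate series studied.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_garch_paths(omega=1e-6, alpha=0.08, beta=0.90,
                         horizon=30, n_paths=10_000):
    # GARCH(1,1): sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}
    sigma2 = np.full(n_paths, omega / (1 - alpha - beta))  # long-run variance
    returns = np.zeros((horizon, n_paths))
    for t in range(horizon):
        r = rng.normal(0.0, np.sqrt(sigma2))
        returns[t] = r
        sigma2 = omega + alpha * r**2 + beta * sigma2
    return returns.cumsum(axis=0)  # cumulative log-returns per path

paths = simulate_garch_paths()
lo, hi = np.percentile(paths[-1], [2.5, 97.5])
print(f"95% interval for the 30-day cumulative return: [{lo:.4f}, {hi:.4f}]")
```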
Abstract:
High-precision calculations of the correlation functions and order parameters were performed in order to investigate the critical properties of several two-dimensional ferromagnetic systems: (i) the q-state Potts model; (ii) the isotropic Ashkin-Teller model; (iii) the spin-1 Ising model. We deduced exact relations connecting specific damages (the difference between two microscopic configurations of a model) and the above-mentioned thermodynamic quantities, which permit their numerical calculation by computer simulation using any ergodic dynamics. The results obtained (critical temperatures and exponents) reproduced all the known values, with agreement up to several significant figures; of particular relevance were the estimates along the Baxter critical line (Ashkin-Teller model), where the exponents vary continuously. We also showed that this approach is less sensitive to finite-size effects than the standard Monte Carlo method. This analysis shows that the present approach produces results as accurate as, or more accurate than, the usual Monte Carlo simulation, and can be useful for investigating these models in circumstances where their behavior is not yet fully understood.
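Damage spreading is simple to state in code: evolve two replicas of the system that differ in a single spin, using an identical stream of random numbers, and track how the difference (the damage) grows or heals. Here is a minimal sketch for the two-dimensional Ising model with heat-bath dynamics; the lattice size and temperature are illustrative, and each of the three models above would need its own update rule.

```python
import numpy as np

rng = np.random.default_rng(8)
L, T = 32, 2.269  # lattice size and temperature (near the Ising T_c)

def heat_bath_sweep(spins, sites, randoms):
    # One sweep of heat-bath dynamics driven by a shared random stream.
    for (i, j), u in zip(sites, randoms):
        field = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                 + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        p_up = 1 / (1 + np.exp(-2 * field / T))
        spins[i, j] = 1 if u < p_up else -1

a = rng.choice([-1, 1], size=(L, L))
b = a.copy()
b[0, 0] *= -1  # initial damage: a single flipped spin

for sweep in range(200):
    # Identical update sites and random numbers for both replicas.
    sites = rng.integers(L, size=(L * L, 2))
    randoms = rng.random(L * L)
    heat_bath_sweep(a, sites, randoms)
    heat_bath_sweep(b, sites, randoms)

print(f"damage after 200 sweeps: {np.mean(a != b):.4f}")
```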
Abstract:
We study the critical behavior of the one-dimensional pair contact process (PCP), using the Monte Carlo method for several lattice sizes and three different updating schemes: random, sequential and parallel. We also added a small modification to the model, called "Monte Carlo com Ressucitamento" (Monte Carlo with Resuscitation, MCR), which consists of resuscitating one particle when the order parameter goes to zero. This was done because it is difficult to accurately determine the critical point of the model, since the order parameter (particle pair density) rapidly goes to zero with the traditional approach. With the MCR, the order parameter vanishes more smoothly, allowing us to use finite-size scaling to determine the critical point and the critical exponents β, ν and z. Our results are consistent with those already found in the literature for this model, showing that the process of resuscitating one particle not only leaves the critical behavior of the system unchanged, but also makes it easier to determine the critical point and critical exponents. This extension to the Monte Carlo method has already been used in other contact process models, leading us to believe in its usefulness for studying several other non-equilibrium models.
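The resuscitation idea is essentially a one-line change to a standard absorbing-state simulation: whenever the particle density reaches zero, re-seed a single particle and continue. The sketch below applies it to the simpler basic contact process rather than the pair contact process itself; the rate and lattice size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

def contact_process_mcr(L=200, lam=3.3, n_updates=100_000):
    occupied = np.zeros(L, dtype=bool)
    occupied[L // 2] = True
    densities = []
    for _ in range(n_updates):
        i = rng.integers(L)
        if occupied[i]:
            if rng.random() < 1 / (1 + lam):
                occupied[i] = False                    # annihilation
            else:
                occupied[(i + rng.choice([-1, 1])) % L] = True  # creation
        # MCR step: resuscitate one particle if the absorbing state is hit.
        if not occupied.any():
            occupied[rng.integers(L)] = True
        densities.append(occupied.mean())
    return np.array(densities)

rho = contact_process_mcr()
print(f"mean density over the second half: {rho[len(rho)//2:].mean():.4f}")
```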
Abstract:
The behavior of plasma and sheath characteristics under an applied magnetic field is important in many applications, including plasma probes and materials processing. Plasma immersion ion implantation (PIII) has been developed as a fast and efficient surface modification technique for complex-shaped three-dimensional objects. The PIII process relies on the acceleration of ions across a high-voltage plasma sheath that develops around the target. Recent studies have shown that the sheath dynamics is significantly affected by an external magnetic field. In this work we describe a two-dimensional computer simulation of a magnetic-field-enhanced plasma immersion ion implantation system. A negative bias voltage is applied to a cylindrical target located on the axis of a grounded cylindrical vacuum chamber filled with uniform nitrogen plasma, and an axial magnetic field is created by a solenoid installed inside the cylindrical target. The computer code employs the Monte Carlo method for collisions between electrons and neutrals in the plasma and a particle-in-cell (PIC) algorithm for simulating the movement of charged particles in the electromagnetic field. Secondary electron emission from the target under ion bombardment is also included. It is found that a high-density plasma region forms around the cylindrical target due to intense background gas ionization by the magnetized electrons drifting in the crossed E×B fields, and an increase of the implantation current density in front of this high-density plasma region is observed.
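The Monte Carlo collision step in such PIC codes is typically a per-particle Bernoulli trial against the local collision probability, followed by redirection of the velocity. A minimal sketch of an elastic electron-neutral collision step; the cross section, gas density, and time step are illustrative assumptions, not the parameters of this simulation.

```python
import numpy as np

rng = np.random.default_rng(12)

def mcc_step(velocities, dt, n_gas=1e20, sigma=1e-19):
    # Collision probability over dt: P = 1 - exp(-n * sigma * v * dt).
    speeds = np.linalg.norm(velocities, axis=1)
    p_coll = 1 - np.exp(-n_gas * sigma * speeds * dt)
    collided = rng.random(len(velocities)) < p_coll
    n_c = collided.sum()
    if n_c:
        # Isotropic elastic scattering: redirect velocity, keep the speed.
        phi = rng.uniform(0, 2 * np.pi, n_c)
        cos_t = rng.uniform(-1, 1, n_c)
        sin_t = np.sqrt(1 - cos_t**2)
        new_dirs = np.column_stack(
            [sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
        velocities[collided] = speeds[collided, None] * new_dirs
    return velocities

v = rng.normal(0, 1e5, (1000, 3))  # m/s, illustrative electron velocities
v = mcc_step(v, dt=1e-9)
```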
Abstract:
In this paper an efficient algorithm for the probabilistic analysis of unbalanced three-phase weakly meshed distribution systems is presented. The algorithm uses the Two-Point Estimate Method to calculate the probabilistic behavior of the system's random variables. Additionally, the deterministic analysis of the state variables is performed by means of a Compensation-Based Radial Load Flow (CBRLF), which efficiently exploits the topological characteristics of the network. To deal with distributed generation, a strategy to incorporate a simplified generator model in the CBRLF is proposed: depending on the type of control and the generator's operating conditions, the node with distributed generation can be modeled either as a PV or a PQ node. To validate the efficiency of the proposed algorithm, the IEEE 37-bus test system is used, and the probabilistic results are compared with those obtained using the Monte Carlo method.
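Hong's two-point estimate method replaces thousands of Monte Carlo load-flow runs with 2m deterministic runs for m random inputs: each input is concentrated at two points around its mean, and the output moments are accumulated with matching weights. A generic sketch, with a toy placeholder function standing in for the CBRLF solver:

```python
import numpy as np

def two_point_estimate(f, means, stds, skews=None):
    """Hong's 2m-point estimate of the mean/std of Y = f(X)."""
    m = len(means)
    skews = np.zeros(m) if skews is None else np.asarray(skews)
    moment1 = moment2 = 0.0
    for k in range(m):
        half_skew = skews[k] / 2
        root = np.sqrt(m + half_skew**2)
        xi = np.array([half_skew + root, half_skew - root])    # locations
        w = np.array([-xi[1], xi[0]]) / (m * (xi[0] - xi[1]))  # weights
        for i in range(2):
            x = np.array(means, dtype=float)
            x[k] = means[k] + xi[i] * stds[k]  # perturb one input at a time
            y = f(x)                           # one deterministic "load flow"
            moment1 += w[i] * y
            moment2 += w[i] * y**2
    return moment1, np.sqrt(max(moment2 - moment1**2, 0.0))

# Toy stand-in for the deterministic load-flow solver (illustrative only).
voltage = lambda loads: 1.0 - 0.01 * loads.sum() - 0.002 * loads[0] ** 2

mean, std = two_point_estimate(voltage, means=[1.0, 2.0, 1.5],
                               stds=[0.2, 0.3, 0.1])
print(f"E[V] ~ {mean:.4f}, std ~ {std:.4f}")
```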
Abstract:
Peng was the first to work with Detrended Fluctuation Analysis (DFA), a technique capable of detecting long-range autocorrelation in non-stationary time series. In this study, DFA is used to obtain the Hurst exponent (H) profile of the neutron porosity logs of the 52 oil wells in the Namorado Field, located in the Campos Basin, Brazil. The purpose is to determine whether the Hurst exponent can be used to characterize the spatial distribution of the wells, i.e. whether wells with close values of H are also spatially close. We used both the hierarchical clustering method and a non-hierarchical method (the k-means method), and compared the two to see which provides the better result. From this we computed a neighborhood index, which checks whether the groups generated by the k-means method display genuine spatial patterns or could have arisen at random. High values of the index indicate that the data are aggregated, while low values indicate that the data are scattered (no spatial correlation). Using the Monte Carlo method, we showed that randomly grouped data yield index values below the empirical one, so the values of H obtained from the 52 wells are indeed geographically clustered. Cross-checking the standard log curves against the k-means results confirms that the method is effective for correlating wells by their spatial distribution.
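The DFA computation itself is compact: integrate the demeaned series, split the profile into windows, detrend each window with a polynomial fit, and read the Hurst-like exponent off the log-log slope of the fluctuation function. A minimal sketch (window sizes illustrative):

```python
import numpy as np

def dfa_exponent(series, window_sizes=(8, 16, 32, 64, 128)):
    # Profile: cumulative sum of the demeaned series.
    profile = np.cumsum(series - np.mean(series))
    fluctuations = []
    for n in window_sizes:
        f2 = []
        for w in range(len(profile) // n):
            segment = profile[w * n:(w + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, segment, 1), t)  # linear detrend
            f2.append(np.mean((segment - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(f2)))
    # Hurst-like exponent: slope of log F(n) versus log n.
    slope, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
    return slope

rng = np.random.default_rng(10)
print(dfa_exponent(rng.normal(size=4096)))  # ~0.5 for white noise
```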
Abstract:
The diffusive epidemic process (PED) is a nonequilibrium stochastic model which exhibits a phase transition to an absorbing state. In the model, healthy (A) and sick (B) individuals diffuse on a lattice with diffusion constants DA and DB, respectively. According to a Wilson renormalization-group calculation, the system presents a first-order phase transition for the case DA > DB. Several researchers have performed simulations to test this conjecture, but the first-order phase transition was not observed; the explanation offered was that simulations in higher dimensions would be needed. This work was motivated by the investigation of the critical behavior of the diffusive epidemic process with Lévy interactions (PEDL) in one dimension. The Lévy distribution admits diffusion jumps of all sizes, effectively taking the one-dimensional system towards higher-dimensional behavior, and we thereby try to settle this controversy, which remains unresolved for the case DA > DB. We use the Monte Carlo method with resuscitation, which adds a sick individual to the system whenever the order parameter (the density of sick individuals) reaches zero. We apply finite-size scaling to estimate the critical point and the critical exponents β, ν, and z for the case DA > DB.
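The Lévy ingredient amounts to drawing jump lengths from a heavy-tailed power law instead of moving only to nearest neighbours, which is what lets a one-dimensional system mimic higher-dimensional mixing. A minimal sketch of one such diffusion step; the tail exponent and cutoff are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)

def levy_jump(L, mu=1.5):
    # Power-law jump length P(l) ~ l^(-1-mu), by inverse-transform sampling.
    length = int(rng.random() ** (-1 / mu))  # heavy-tailed integer jump
    return rng.choice([-1, 1]) * min(length, L // 2)

def diffuse(positions, L, mu=1.5):
    # Move every individual by a Levy-distributed displacement on a ring.
    return [(x + levy_jump(L, mu)) % L for x in positions]

print(diffuse([10, 50, 90], L=128))
```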