991 results for Sequential Monte Carlo
Abstract:
A theoretical model for the noise properties of n+nn+ diodes in the drift-diffusion framework is presented. In contrast with previous approaches, our model incorporates both the drift and diffusive parts of the current under inhomogeneous and hot-carrier conditions. Closed analytical expressions describing the transport and noise characteristics of submicrometer n+nn+ diodes, in which the diode base (n part) and the contacts (n+ parts) are coupled in a self-consistent way, are obtained.
Abstract:
The ellipticines constitute a broad class of molecules with antitumor activity. In the present work we analyzed the structure and properties of a series of ellipticine derivatives in the gas phase and in solution using quantum mechanical and Monte Carlo methods. The results showed a good correlation between the solvation energies in water obtained with the continuum model and the Monte Carlo simulation. Molecular descriptors were considered in the development of QSAR models using the DNA association constant (log Kapp) as biological data. The results showed that the DNA binding is dominated by electronic parameters, with small contributions from the molecular volume and area.
Abstract:
A simple and didactic experiment was developed for image monitoring of the browning of fruit tissues caused by the enzyme polyphenol oxidase. The procedure, easy and inexpensive, is a valuable tool to teach and demonstrate the redox reaction between the enzyme and the natural polyphenols. To obtain the browning percentage for apple, pear and banana, digital photographs were taken and the images were analyzed by means of Monte Carlo methods and digital analysis programs. The effects of several experimental conditions were studied, such as pH, light, temperature and the presence of oxygen or antioxidants. Each fruit presented a different condition that best minimized the oxidation process. The absence of oxygen and the application of a bisulfite solution were sufficient to preserve the quality of all fruits tested.
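As a minimal illustration of how a Monte Carlo estimate of the browning percentage could be obtained from a digital photograph, the sketch below samples random pixels of an image and classifies them with a simple brightness threshold. The threshold, sample size and synthetic image are invented for the demo, not values from the study.

```python
import numpy as np

def browning_fraction_mc(image, n_samples=20_000, threshold=120, seed=0):
    """Monte Carlo estimate of the fraction of 'browned' pixels.

    image: HxWx3 uint8 RGB array; a pixel counts as browned when its mean
    channel value falls below `threshold` (a stand-in classifier).
    """
    rng = np.random.default_rng(seed)
    h, w, _ = image.shape
    rows = rng.integers(0, h, n_samples)
    cols = rng.integers(0, w, n_samples)
    brightness = image[rows, cols].mean(axis=1)
    p = (brightness < threshold).mean()
    # Binomial standard error gives a rough precision for the MC estimate.
    se = np.sqrt(p * (1 - p) / n_samples)
    return p, se

# Synthetic stand-in for a photograph: about 30% dark ("browned") pixels.
rng = np.random.default_rng(1)
img = np.where(rng.random((400, 600, 1)) < 0.3, 60, 200).astype(np.uint8)
img = np.repeat(img, 3, axis=2)
p, se = browning_fraction_mc(img)
print(f"browning percentage ~ {100*p:.1f}% +/- {100*se:.1f}%")
```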
Abstract:
We analyze the timing of photons observed by the MAGIC telescope during a flare of the active galactic nucleus Mkn 501 for a possible correlation with energy, as suggested by some models of quantum gravity (QG), which predict a vacuum refractive index ≃ 1 + (E/M_{QGn})^n, n = 1, 2. Parametrizing the delay between gamma-rays of different energies as Δt = ±τ_l E or Δt = ±τ_q E^2, we find τ_l = (0.030 ± 0.012) s/GeV at the 2.5σ level, and τ_q = (3.71 ± 2.57) × 10^{-6} s/GeV^2, respectively. We use these results to establish lower limits M_{QG1} > 0.21 × 10^{18} GeV and M_{QG2} > 0.26 × 10^{11} GeV at the 95% C.L. Monte Carlo studies confirm the MAGIC sensitivity to propagation effects at these levels. Thermal plasma effects in the source are negligible, but we cannot exclude the importance of some other source effect.
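The MAGIC analysis itself relies on more elaborate likelihood and energy-cost methods; purely as an illustration of the linear parametrization Δt = ±τ_l E, the sketch below fits τ_l by least squares to synthetic photon arrival times. All numbers (spectrum, jitter, true delay) are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic photons: energies in GeV, arrival times in seconds,
# with an assumed linear dispersion of 0.03 s/GeV plus timing jitter.
n = 1000
energy = rng.exponential(0.5, n) + 0.15                 # GeV, toy spectrum
arrival = 0.03 * energy + rng.normal(0.0, 0.5, n)       # s

# Least-squares fit of arrival = t0 + tau * E.
A = np.column_stack([np.ones(n), energy])
coef, residuals, *_ = np.linalg.lstsq(A, arrival, rcond=None)
t0_hat, tau_hat = coef

# Rough 1-sigma error on tau from the residual variance.
sigma2 = residuals[0] / (n - 2)
cov = sigma2 * np.linalg.inv(A.T @ A)
print(f"tau = {tau_hat:.3f} +/- {np.sqrt(cov[1, 1]):.3f} s/GeV")
```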
Abstract:
The author studies random walk estimators for radiosity with generalized absorption probabilities; that is, a path either dies or survives on a patch according to an arbitrary probability. The estimators studied so far, the infinite and the finite path length estimators, can be considered as particular cases. Practical applications of random walks with generalized probabilities are given. A necessary and sufficient condition for the existence of the variance is given, together with heuristics to be used in practical cases. The optimal probabilities are also found for the case when one is interested in the whole scene, and they are equal to the reflectivities.
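A minimal sketch of a gathering random walk estimator with an arbitrary (generalized) survival probability per patch, checked against the direct matrix solution of B_i = E_i + ρ_i Σ_j F_ij B_j on a toy scene; the scene, form factors and probabilities are invented for illustration. Setting the survival probabilities equal to the reflectivities corresponds to the optimal choice mentioned in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy closed scene: N patches, row-stochastic form factor matrix F,
# reflectivities rho, emissions E (all values invented).
N = 5
F = rng.random((N, N)); F /= F.sum(axis=1, keepdims=True)
rho = rng.uniform(0.2, 0.8, N)
E = rng.uniform(0.0, 1.0, N)
q = rho.copy()                # generalized survival probabilities (here = rho)

def radiosity_walk(i, n_walks=100_000):
    """Estimate B_i = E_i + rho_i * sum_j F_ij B_j by gathering random walks."""
    total = 0.0
    for _ in range(n_walks):
        patch, weight, score = i, 1.0, 0.0
        while True:
            score += weight * E[patch]
            if rng.random() >= q[patch]:        # absorbed: the path dies here
                break
            weight *= rho[patch] / q[patch]     # survival correction factor
            patch = rng.choice(N, p=F[patch])   # next patch ~ form factors
        total += score
    return total / n_walks

B_exact = np.linalg.solve(np.eye(N) - np.diag(rho) @ F, E)
print("MC   :", radiosity_walk(0))
print("exact:", B_exact[0])
```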
Abstract:
The nonequilibrium phase transitions occurring in a fast-ionic-conductor model and in a reaction-diffusion Ising model are studied by Monte Carlo finite-size scaling to reveal nonclassical critical behavior; our results are compared with those in related models.
Abstract:
The identifiability of the parameters of a heat exchanger model without phase change was studied in this Master's thesis using synthetically generated data. A fast, two-step Markov chain Monte Carlo (MCMC) method was tested on a couple of case studies and a heat exchanger model. The two-step MCMC method worked well and reduced the computation time compared to the traditional MCMC method. The effect of the measurement accuracy of certain control variables on the identifiability of the parameters was also studied. The accuracy used did not seem to have a notable effect on the identifiability. The use of the posterior distribution of the parameters across different heat exchanger geometries was studied. It would be computationally most efficient to use the same posterior distribution for different geometries in the optimisation of heat exchanger networks. According to the results, this was possible when the frontal surface areas were the same across geometries. In the other cases the same posterior distribution can also be used for optimisation, but it will give a wider predictive distribution as a result. For condensing surface heat exchangers the numerical stability of the simulation model was studied, and as a result a stable algorithm was developed.
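The thesis-specific two-step scheme and the heat exchanger model are not reproduced here; the sketch below only illustrates the basic ingredient, a random-walk Metropolis MCMC identifying the parameters of a simple model from synthetic data. The toy model, priors and step sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from a toy "model": y = a*x + b*x**2 + noise.
x = np.linspace(0.0, 1.0, 50)
true_theta = np.array([2.0, -1.0])
sigma = 0.05
y = true_theta[0] * x + true_theta[1] * x**2 + rng.normal(0, sigma, x.size)

def log_post(theta):
    """Gaussian likelihood with a flat (improper) prior on the parameters."""
    resid = y - (theta[0] * x + theta[1] * x**2)
    return -0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis sampler.
n_iter, step = 20_000, 0.05
theta = np.zeros(2)
lp = log_post(theta)
chain = np.empty((n_iter, 2))
for k in range(n_iter):
    prop = theta + rng.normal(0, step, 2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:     # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain[k] = theta

burn = n_iter // 2
print("posterior mean:", chain[burn:].mean(axis=0))
print("posterior std :", chain[burn:].std(axis=0))
```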
Abstract:
The aim of this paper is to present a simple way of treating the general equation for acid-base titrations based on the concept of degree of dissociation, and to propose a new spreadsheet approach for simulating the titration of mixtures of polyprotic compounds. The general expression, without any approximation, is solved by a simple iteration method, making the numerical manipulation easy and painless. The user-friendly spreadsheet was developed using MS-Excel and Visual-Basic-for-Excel. Several graphs are drawn to help visualize the titration behavior. A Monte Carlo function for error simulation was also implemented. Two examples, the titration of alkalinity and of McIlvaine buffer, are presented.
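The spreadsheet itself is not reproduced here. As a much-simplified sketch of the same idea (degree of dissociation plus charge balance), the code below handles a single monoprotic weak acid titrated with strong base, solved by bisection on pH rather than the paper's iteration, and adds a small Monte Carlo perturbation of the acid concentration as an error simulation. All numerical values are illustrative.

```python
import numpy as np

Kw = 1e-14

def ph_weak_acid_titration(Ka, Ca, Va, Cb, Vb):
    """pH after adding Vb mL of strong base (Cb) to Va mL of weak acid (Ca).

    Charge balance: [Na+] + [H+] = [OH-] + alpha1 * C_HA,tot,
    with alpha1 = Ka / (Ka + [H+]) the degree of dissociation.
    """
    na = Cb * Vb / (Va + Vb)
    c_ha = Ca * Va / (Va + Vb)
    def balance(ph):
        h = 10.0 ** (-ph)
        alpha1 = Ka / (Ka + h)
        return na + h - Kw / h - alpha1 * c_ha
    lo, hi = 0.0, 14.0                 # balance() decreases with pH
    for _ in range(100):               # bisection on pH
        mid = 0.5 * (lo + hi)
        if balance(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Titration curve: 0.1 M acid (pKa 4.76) titrated with 0.1 M NaOH.
Ka, Ca, Va, Cb = 10**-4.76, 0.1, 50.0, 0.1
print([round(ph_weak_acid_titration(Ka, Ca, Va, Cb, vb), 2)
       for vb in np.linspace(0.0, 100.0, 11)])

# Monte Carlo error simulation: propagate a 1% uncertainty in Ca to the
# pH at half-neutralization (Vb = 25 mL).
rng = np.random.default_rng(0)
phs = [ph_weak_acid_titration(Ka, rng.normal(Ca, 0.001), Va, Cb, 25.0)
       for _ in range(2000)]
print("pH(25 mL) =", round(np.mean(phs), 3), "+/-", round(np.std(phs), 3))
```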
Abstract:
Standard Indirect Inference (II) estimators take a given finite-dimensional statistic, Z_{n}, and then estimate the parameters by matching the sample statistic with the model-implied population moment. We here propose a novel estimation method that utilizes all available information contained in the distribution of Z_{n}, not just its first moment. This is done by computing the likelihood of Z_{n} and then estimating the parameters by either maximizing the likelihood or computing the posterior mean for a given prior of the parameters. These are referred to as the maximum indirect likelihood (MIL) and Bayesian indirect likelihood (BIL) estimators, respectively. We show that the IL estimators are first-order equivalent to the corresponding moment-based II estimator that employs the optimal weighting matrix. However, due to higher-order features of Z_{n}, the IL estimators are higher-order efficient relative to the standard II estimator. The likelihood of Z_{n} will in general be unknown, so simulated versions of the IL estimators are developed. Monte Carlo results for a structural auction model and a DSGE model show that the proposed estimators indeed have attractive finite-sample properties.
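The proposed MIL/BIL estimators are not reconstructed here; the sketch below only shows the standard moment-based II baseline they are compared against, on a toy problem: an MA(1) model estimated by matching the lag-1 sample autocorrelation (playing the role of Z_{n}) with its simulated counterpart over a grid. All model choices are illustrative.

```python
import numpy as np

def simulate_ma1(theta, n, rng):
    """MA(1): y_t = e_t + theta * e_{t-1}."""
    e = rng.normal(0.0, 1.0, n + 1)
    return e[1:] + theta * e[:-1]

def lag1_autocorr(y):
    """Auxiliary statistic Z_n: lag-1 sample autocorrelation."""
    y = y - y.mean()
    return np.dot(y[1:], y[:-1]) / np.dot(y, y)

rng = np.random.default_rng(0)
n = 2000
y_obs = simulate_ma1(0.5, n, rng)          # "observed" data, true theta = 0.5
z_obs = lag1_autocorr(y_obs)

# Indirect inference: match Z_n against its simulated counterpart,
# averaged over S simulated paths, by grid search over theta.
S = 20
grid = np.linspace(-0.9, 0.9, 181)

def z_sim(theta):
    return np.mean([lag1_autocorr(simulate_ma1(theta, n, rng))
                    for _ in range(S)])

theta_hat = grid[np.argmin([(z_sim(t) - z_obs) ** 2 for t in grid])]
print("II estimate of theta:", theta_hat)
```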
Abstract:
Researchers across disciplines have debated for over a century the effects of using ratio-form variables on the results of correlation and regression analyses and on their correct interpretation. Within strategy research, however, the topic has received little attention. This is surprising, since ratio variables are very commonly used in empirical strategy research. This work reviews the debate surrounding ratio variables. In addition, a review of published articles is used to assess how common their use is in present-day strategy research. Monte Carlo simulations are used to study how the properties of ratio variables affect the results of correlation and regression analysis, especially in cases with a common denominator.
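A minimal Monte Carlo illustration of the common-denominator issue discussed in the thesis: two variables that are independent by construction become correlated once both are divided by the same third variable. The distributions are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rep, n = 2000, 100
corr_raw, corr_ratio = [], []

for _ in range(n_rep):
    # x, y, z mutually independent by construction.
    x = rng.normal(10.0, 2.0, n)
    y = rng.normal(10.0, 2.0, n)
    z = rng.uniform(5.0, 15.0, n)          # common denominator, kept away from 0
    corr_raw.append(np.corrcoef(x, y)[0, 1])
    corr_ratio.append(np.corrcoef(x / z, y / z)[0, 1])

print("mean corr(x, y)    :", round(np.mean(corr_raw), 3))    # ~ 0
print("mean corr(x/z, y/z):", round(np.mean(corr_ratio), 3))  # clearly positive
```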
Abstract:
The most widespread literature for the evaluation of uncertainty, the GUM and Eurachem guides, does not describe explicitly how to deal with the uncertainty of a concentration obtained from non-linear calibration curves. This work had the objective of describing and validating a methodology, following the approach of the recent GUM Supplement, to evaluate the uncertainty through second-order polynomial models. In the determination of the uncertainty of the benzatone concentration (C) by chromatography, the measurement uncertainties obtained with the proposed methodology and with Monte Carlo simulation do not diverge by more than 0.0005 units, thus validating the proposed model to one significant digit.
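A minimal sketch of the Monte Carlo side of such a comparison (in the spirit of GUM Supplement 1): propagate the uncertainties of a second-order calibration curve y = b0 + b1·C + b2·C² and of a measured response through to the concentration by drawing the inputs and inverting the quadratic for each draw. All coefficients and uncertainties are invented, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Second-order calibration y = b0 + b1*C + b2*C^2 (illustrative values).
b_mean = np.array([0.02, 1.50, -0.04])
b_std = np.array([0.01, 0.02, 0.005])
y_meas, y_std = 5.0, 0.05             # measured response and its uncertainty

def concentration(b0, b1, b2, y):
    """Invert y = b0 + b1*C + b2*C^2, keeping the physically meaningful root."""
    roots = np.roots([b2, b1, b0 - y])
    real = roots[np.isreal(roots)].real
    return real[real > 0].min()

M = 50_000
draws = np.empty(M)
for k in range(M):
    b0, b1, b2 = rng.normal(b_mean, b_std)   # draw calibration coefficients
    y = rng.normal(y_meas, y_std)            # draw the measured response
    draws[k] = concentration(b0, b1, b2, y)

print(f"C = {draws.mean():.4f} +/- {draws.std(ddof=1):.4f}")
print("95% coverage interval:", np.percentile(draws, [2.5, 97.5]).round(4))
```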
Abstract:
Identification of the order of an Autoregressive Moving Average (ARMA) model by the usual graphical method is subjective. Hence, there is a need to develop a technique to identify the order without employing the graphical investigation of the series autocorrelations. To avoid subjectivity, this thesis focuses on determining the order of the ARMA model using Reversible Jump Markov Chain Monte Carlo (RJMCMC). RJMCMC selects the model from a set of candidate models on the basis of goodness of fit, the standard errors, and the frequency with which each model is accepted. Together with a detailed analysis of the classical Box-Jenkins modelling methodology, the integration with MCMC algorithms is examined through parameter estimation and model fitting of ARMA models. This helps to verify how well the MCMC algorithms can handle ARMA models, by comparing the results with the graphical method. The MCMC approach produced better results than the classical time series approach.
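The RJMCMC sampler itself is beyond a short sketch. The code below only sets up the order-selection problem the thesis addresses and runs the classical (non-Bayesian) information-criterion route as a baseline, assuming the statsmodels package is available; the true process and coefficients are invented.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)

# Simulate an ARMA(1,1) series: y_t = phi*y_{t-1} + e_t + theta*e_{t-1}.
n, phi, theta = 500, 0.6, 0.4
e = rng.normal(0.0, 1.0, n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + e[t] + theta * e[t - 1]

# Fit ARMA(p, q) over a small grid of orders and rank the fits by AIC.
results = []
for p in range(0, 3):
    for q in range(0, 3):
        fit = ARIMA(y, order=(p, 0, q)).fit()
        results.append(((p, q), fit.aic))

results.sort(key=lambda item: item[1])
for (p, q), aic in results[:3]:
    print(f"ARMA({p},{q})  AIC = {aic:.1f}")
```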
Abstract:
A combination of the variational principle, expectation value and Quantum Monte Carlo method is used to solve the Schrödinger equation for some simple systems. The results are accurate and the simplicity of this version of the Variational Quantum Monte Carlo method provides a powerful tool to teach alternative procedures and fundamental concepts in quantum chemistry courses. Some numerical procedures are described in order to control accuracy and computational efficiency. The method was applied to the ground state energies and a first attempt to obtain excited states is described.
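As an illustration of the variational quantum Monte Carlo procedure described, the sketch below estimates the hydrogen-atom ground-state energy (in atomic units) with the trial wavefunction psi_alpha(r) = exp(-alpha*r), sampling |psi|^2 with Metropolis moves and averaging the local energy E_L = -alpha^2/2 + (alpha - 1)/r. The step size and sample counts are arbitrary choices, and the error bar ignores autocorrelation.

```python
import numpy as np

def vmc_hydrogen(alpha, n_steps=200_000, step=1.0, burn=10_000, seed=0):
    """Variational MC energy for hydrogen with trial psi = exp(-alpha*r).

    Local energy (atomic units): E_L(r) = -alpha**2/2 + (alpha - 1)/r.
    Positions are sampled from |psi|^2 with a random-walk Metropolis chain.
    """
    rng = np.random.default_rng(seed)
    r_vec = np.array([1.0, 0.0, 0.0])
    r = np.linalg.norm(r_vec)
    energies = []
    for k in range(n_steps):
        prop = r_vec + rng.uniform(-step, step, 3)
        r_prop = np.linalg.norm(prop)
        # |psi(prop)|^2 / |psi(curr)|^2 = exp(-2*alpha*(r_prop - r))
        if rng.random() < np.exp(-2.0 * alpha * (r_prop - r)):
            r_vec, r = prop, r_prop
        if k >= burn:
            energies.append(-0.5 * alpha**2 + (alpha - 1.0) / r)
    e = np.array(energies)
    return e.mean(), e.std(ddof=1) / np.sqrt(e.size)

for alpha in (0.8, 1.0, 1.2):
    mean, err = vmc_hydrogen(alpha)
    print(f"alpha = {alpha:.1f}:  E = {mean:.4f} +/- {err:.4f} hartree")
```

At alpha = 1.0 the local energy is constant (-0.5 hartree), which provides a built-in accuracy check of the kind mentioned in the abstract.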
Abstract:
The Practical Stochastic Model is a simple and robust method to describe coupled chemical reactions. The connection between this stochastic method and a deterministic method was initially established to understand how the parameters and variables that describe the concentration in the two methods are related. Two main concepts had to be defined to make this connection: the filling of compartments, or dilutions, and the rate of reaction enhancement. The parameters, variables and time of the stochastic method were scaled with the size of the compartment and compared with the deterministic method. The deterministic approach was employed as an initial reference to achieve a consistent stochastic result. Finally, an independent, robust stochastic method was obtained, which could be compared with the Stochastic Simulation Algorithm developed by Gillespie (1977). The Practical Stochastic Model produced the absolute values that are essential to describe non-linear chemical reactions with a simple structure, and allowed for a correct description of the chemical kinetics.
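For reference, the abstract mentions Gillespie's Stochastic Simulation Algorithm as the comparison method; a minimal direct-method SSA for the reversible isomerization A <-> B is sketched below. Rate constants and initial counts are invented.

```python
import numpy as np

def gillespie_ab(k1, k2, a0, b0, t_end, seed=0):
    """Gillespie direct method for A -> B (rate k1) and B -> A (rate k2)."""
    rng = np.random.default_rng(seed)
    t, a, b = 0.0, a0, b0
    times, counts = [t], [a]
    while t < t_end:
        rates = np.array([k1 * a, k2 * b])     # propensities
        total = rates.sum()
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)      # waiting time to next reaction
        if rng.random() < rates[0] / total:    # choose which reaction fires
            a, b = a - 1, b + 1
        else:
            a, b = a + 1, b - 1
        times.append(t)
        counts.append(a)
    return np.array(times), np.array(counts)

times, a_counts = gillespie_ab(k1=1.0, k2=0.5, a0=100, b0=0, t_end=10.0)
# Deterministic (mass-action) equilibrium for comparison: A_eq = k2/(k1+k2)*N.
print("final A:", a_counts[-1], " deterministic A_eq ~", round(0.5 / 1.5 * 100))
```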
Abstract:
This work presents models and methods that have been used to produce forecasts of population growth. The work is intended to emphasize the reliability bounds of the model forecasts. The Leslie model and various versions of logistic population models are presented, with references to the literature and to several studies; much of the relevant methodology has been developed in the biological sciences. The Leslie modelling approach uses current trends in mortality, fertility, migration and emigration. The model treats the population divided into age groups and is given as a recursive system. Another group of models is based on straightforward extrapolation of census data; trajectories of a simple exponential growth function and of logistic models are used to produce the forecasts. The work presents the basics of Leslie-type modelling and of the logistic models, including multi-parameter logistic functions. The latter model is also analysed from the model reliability point of view. A Bayesian approach and the MCMC method are used to create error bounds for the model predictions.
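A minimal sketch of the Leslie projection step, n(t+1) = L n(t), with invented fertility and survival rates for three age classes; the Bayesian/MCMC error-bound machinery described in the work is not reproduced.

```python
import numpy as np

# Invented demographic rates for three age classes.
fertility = np.array([0.0, 1.2, 0.8])     # offspring per individual per step
survival = np.array([0.7, 0.5])           # survival from class i to class i+1

# Leslie matrix: fertilities on the first row, survivals on the subdiagonal.
L = np.zeros((3, 3))
L[0, :] = fertility
L[1, 0], L[2, 1] = survival

n = np.array([100.0, 50.0, 20.0])         # initial age distribution
for t in range(1, 11):
    n = L @ n                              # recursive projection step
    print(f"t={t:2d}  total={n.sum():8.1f}  by age class={np.round(n, 1)}")

# The dominant eigenvalue gives the asymptotic growth rate per time step.
growth = np.max(np.abs(np.linalg.eigvals(L)))
print("asymptotic growth rate per step:", round(growth, 3))
```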