13 results for Metropolis Monte Carlo simulations
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
One of the most central tasks in the statistical analysis of mathematical models is the estimation of the models' unknown parameters. This master's thesis is concerned with the distributions of unknown parameters and with numerical methods suitable for constructing them, especially in cases where the model is non-linear with respect to its parameters. Among the various numerical methods, the main emphasis is on Markov chain Monte Carlo (MCMC) methods. These computationally intensive methods have recently grown in popularity, mainly because of increased computing power. The theory of both Markov chains and Monte Carlo simulation is presented to the extent needed to justify why the methods work. Of recently developed methods, adaptive MCMC methods in particular are examined. The approach of the thesis is practical, and various issues related to the implementation of MCMC methods are emphasized. In the empirical part of the thesis, the distributions of the unknown parameters of five example models are examined using the methods presented in the theoretical part. The models describe chemical reactions and are expressed as systems of ordinary differential equations. The models were collected from chemists at Lappeenranta University of Technology and Åbo Akademi University, Turku.
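As an illustration of the kind of MCMC machinery discussed in this abstract (a minimal sketch, not code from the thesis), a random-walk Metropolis sampler for an unnormalized log-posterior might look as follows; the target, step size, and iteration count are arbitrary assumptions:

```python
import numpy as np

def metropolis(log_post, theta0, n_iter=5000, step=0.5, seed=0):
    """Random-walk Metropolis sampler for an unnormalized log-posterior."""
    rng = np.random.default_rng(seed)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# Example: sample a standard normal target
chain = metropolis(lambda t: -0.5 * float(t @ t), [0.0])
```

The chain's empirical moments should approximate those of the target; in practice one would discard a burn-in period and monitor acceptance rates.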
Abstract:
The purpose of this master's thesis was to perform simulations involving random numbers while testing hypotheses, especially on two sample populations compared by their means, variances, or Sharpe ratios. Specifically, we simulated some well-known distributions in Matlab and checked the accuracy of hypothesis tests on them. Furthermore, we went deeper and examined what happens once the bootstrapping method, as described by Efron, is applied to the simulated data. In addition, the RobustSharpe hypothesis test stated in the paper of Ledoit and Wolf was applied to measure the statistical significance of the performance difference between two investment funds, based on testing whether there is a statistically significant difference between their Sharpe ratios. We collected a large body of literature on the topic and generated in Matlab as many simulated random numbers as needed for our purpose. As a result, we came to a good understanding that tests are not always accurate; for instance, when testing whether two normally distributed random vectors come from the same normal distribution, the Jarque-Bera test for normality showed that for the normal random vectors r1 and r2, only 94.7% and 95.7% respectively were identified as coming from a normal distribution, while 5.3% and 4.3% failed to confirm the truth already known. However, when we introduced Efron's bootstrapping methods for estimating the p-values on which the hypothesis decision is based, the test was 100% accurate. These results indicate that bootstrapping methods should always be considered when testing or estimating statistics, because in most cases the outcomes are accurate and computational errors are minimized.
Also, the RobustSharpe test, which is known to use one of the bootstrapping methods, the studentised one, was applied first to various simulated data covering distributions of many kinds and shapes, and second to real data on hedge and mutual funds. The test performed quite well, agreeing with the existence of a statistically significant difference between their Sharpe ratios as described in the paper of Ledoit and Wolf.
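As a hedged illustration of the Efron-style bootstrap testing described above (not the thesis's Matlab code), a resampling p-value for a two-sample difference in means can be sketched as follows; the centring scheme, sample sizes, and resample count are illustrative assumptions:

```python
import numpy as np

def bootstrap_pvalue_mean_diff(x, y, n_boot=2000, seed=0):
    """Two-sample bootstrap test for equality of means (Efron-style).

    Resamples from the mean-centred samples to approximate the null
    distribution of the difference in sample means.
    """
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    observed = x.mean() - y.mean()
    # Centre both samples so the null hypothesis (equal means) holds
    xc, yc = x - x.mean(), y - y.mean()
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(xc, size=x.size, replace=True)
        yb = rng.choice(yc, size=y.size, replace=True)
        diffs[b] = xb.mean() - yb.mean()
    # Two-sided p-value: fraction of resampled diffs at least as extreme
    return np.mean(np.abs(diffs) >= abs(observed))

rng = np.random.default_rng(1)
p = bootstrap_pvalue_mean_diff(rng.normal(0, 1, 100), rng.normal(0, 1, 100))
```

A studentised variant (as in the Ledoit and Wolf test) would resample a t-like statistic instead of the raw mean difference.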
Abstract:
The main aim of the thesis was to find out how well Monte Carlo simulation is suited to the valuation of strategic real options. The theoretical part of the thesis reviewed real option theory and the Monte Carlo simulation method using an action-analytical research approach. It was found that, in the context of real options, the simulation method has usually been used when no other method has been feasible. The main emphasis of the thesis is on the case-study-based empirical part, in which, following a decision-making methodological research approach, a simulation model was built to examine the financial impact of the alternative pricing strategies of Voest Alpine Stahl AG. The model was built on the company's financial statement data. It was observed that the company has hardly lost any revenue because of its chosen strategy, but on the other hand, financial statement data alone are not sufficient for a very reliable analysis. In addition, real options observed in the company's operations were analysed on the basis of the information given in its annual reports. The Monte Carlo simulation method is suitable for the valuation of real options, but the critical factors are the construction of the model and the correctness of the input data. It is therefore advisable to carry out qualitative real option analysis alongside the numerical model.
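The basic Monte Carlo valuation step referred to above can be illustrated with a generic sketch (not the thesis's model of Voest Alpine Stahl AG): simulate the underlying project value under geometric Brownian motion and discount the expected option payoff. All parameter values here are arbitrary assumptions:

```python
import numpy as np

def mc_option_value(v0, strike, r, sigma, t, n_paths=100_000, seed=0):
    """Monte Carlo value of a simple European-style (real) option:
    the right to invest `strike` at time `t` in a project worth V_t,
    where V follows geometric Brownian motion with risk-neutral drift r."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Terminal project values under GBM
    vt = v0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.maximum(vt - strike, 0.0)  # exercise only when profitable
    return np.exp(-r * t) * payoff.mean()

value = mc_option_value(v0=100, strike=100, r=0.05, sigma=0.2, t=1.0)
```

For these parameters the estimate should approach the Black-Scholes value of roughly 10.45; real-option models add problem-specific cash-flow logic on top of this skeleton.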
Abstract:
With the computing power available today, Monte Carlo reactor physics codes offer an interesting way to solve reactor physics problems. The new structures and materials used in fourth-generation nuclear reactors are challenging for calculation programs designed for current reactors. In this work, a Monte Carlo reactor physics code and a CFD code are combined into a coupled calculation for a pebble bed reactor, which is one type of high-temperature reactor. The approach used in this work is novel even by international standards.
Abstract:
This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modelling is applied to many different application areas such as global positioning systems, target tracking, navigation, brain imaging, the spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to the computation of the posterior probability density function. Except for a very restricted number of models, it is impossible to compute this density function in closed form; hence, we need approximation methods. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states based on available measurements. Among these filters, particle filters are numerical methods for approximating the filtering distributions of non-linear, non-Gaussian state space models via Monte Carlo. The performance of a particle filter depends heavily on the chosen importance distribution; for instance, an inappropriate choice of importance distribution can lead to failure of convergence of the particle filter algorithm. In this thesis, we analyze the theoretical Lᵖ particle filter convergence with general importance distributions, where p ≥ 2 is an integer. A parameter estimation problem is concerned with inferring the model parameters from measurements. For high-dimensional complex models, estimation of parameters can be done by Markov chain Monte Carlo (MCMC) methods. In its operation, the MCMC method requires the unnormalized posterior distribution of the parameters and a proposal distribution.
In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods, where the states are integrated out. This type of computation is then applied to estimate the parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use the hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends heavily on the chosen proposal distribution. A commonly used proposal distribution is Gaussian; for this kind of proposal, the covariance matrix must be well tuned, and adaptive MCMC methods can be used to tune it. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
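A bootstrap particle filter of the kind referred to above, using the transition density as the importance distribution, can be sketched for a simple scalar model (an illustration, not the thesis's implementation; the model coefficients and noise variances are assumptions):

```python
import numpy as np

def bootstrap_pf(ys, n_part=1000, q=0.1, r=0.5, seed=0):
    """Bootstrap particle filter for the scalar model
        x_k = 0.9 x_{k-1} + N(0, q),   y_k = x_k + N(0, r),
    using the transition density as the importance distribution."""
    rng = np.random.default_rng(seed)
    parts = rng.normal(0.0, 1.0, n_part)  # particles from a diffuse prior
    means = []
    for y in ys:
        # Propagate particles through the dynamics (importance distribution)
        parts = 0.9 * parts + rng.normal(0.0, np.sqrt(q), n_part)
        # Weight by the likelihood of the observation
        w = np.exp(-0.5 * (y - parts) ** 2 / r)
        w /= w.sum()
        means.append(np.sum(w * parts))  # filtering mean estimate
        # Multinomial resampling to avoid weight degeneracy
        parts = rng.choice(parts, size=n_part, p=w)
    return np.array(means)
```

Better importance distributions (e.g. incorporating the current observation) reduce weight degeneracy, which is exactly why the choice matters for convergence.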
Abstract:
Researchers in various disciplines have argued for more than a century about how the use of variables in ratio form affects the results of correlation and regression analyses and their correct interpretation. Within strategy research, however, little attention has been paid to the topic. This is surprising, since ratio variables are very commonly used in empirical strategy research. This thesis reviews the debate surrounding ratio variables and, by means of a review of articles, examines how common their use is in present-day strategy research. Using Monte Carlo simulations, the thesis investigates how the properties of ratio variables affect the results of correlation and regression analysis, especially in cases involving a common denominator.
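The common-denominator effect studied above can be demonstrated with a small Monte Carlo simulation (a generic sketch, not the thesis's experiment): even when x, y, and z are mutually independent, the ratios x/z and y/z correlate strongly because they share the denominator z:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
# Three mutually independent positive variables
x = rng.uniform(1, 2, n)
y = rng.uniform(1, 2, n)
z = rng.uniform(1, 2, n)

r_raw = np.corrcoef(x, y)[0, 1]            # near 0: x and y are independent
r_ratio = np.corrcoef(x / z, y / z)[0, 1]  # clearly positive: common denominator
```

With equal coefficients of variation, the classic approximation predicts a spurious correlation of about 0.5, which is roughly what this simulation produces.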
Abstract:
My thesis deals with how disordered materials conduct electric current. The materials studied include conducting polymers, i.e. plastics that conduct current, and organic semiconductors more generally. Electronic components have been built from these materials, and the hope is to be able to print entire circuits of organic materials. For these applications it is important to understand how the materials themselves conduct electric current. The term disordered materials refers to materials that lack crystal structure. The disorder causes the electron states to become localized in space, so that an electron in a given state is confined, for example, to one molecule or one segment of a polymer. This can be compared with crystalline materials, where an electron state is spread out over the entire crystal (but instead has a well-defined momentum). The electrons (or holes) in a disordered material can move by tunnelling between the localized states. Starting from the properties of this tunnelling process, the transport properties of the whole material can be determined. This is the starting point of the so-called hopping transport model, which I have used. The hopping transport model contains several drastic simplifications. For example, the electron states are treated as point-like, so that the tunnelling probability between two states depends only on the distance between them, and not on their relative orientation. Another simplification is to treat the quantum mechanical tunnelling problem as a classical process, a random walk. Despite these crude approximations, the hopping transport model still exhibits many of the phenomena observed in the real materials one wants to model. One might say that the hopping transport model is the simplest model of disordered materials that is still interesting to study.
No exact analytical solutions to the hopping transport model have been found, so approximations and numerical methods are used, often in the form of computer calculations. We have used both analytical methods and numerical calculations to study different aspects of the hopping transport model. An important part of the articles on which my thesis is based is the comparison of analytical and numerical results. My share of the work has mainly been to develop the numerical methods and apply them to the hopping transport model, so I focus on this part of the work in the introductory part of the thesis. One way to study the hopping transport model numerically is to carry out a random walk process directly with a computer program. By collecting statistics on the random walk, various transport properties of the model can be calculated. This is a so-called Monte Carlo method, since the calculation itself is a random process. Instead of following the trajectories of individual electrons, one can calculate the equilibrium probability of finding an electron in the various states. A system of equations is set up that relates the probabilities of finding the electron in the different states of the system to the flow, the current, between the states. Solving the system of equations yields the probability distribution of the electrons, from which the current and the transport properties of the material can then be calculated. One aspect of the hopping transport model we have studied is the diffusion of the electrons, i.e. their random motion. A collection of electrons spreads out over a larger region with time. It is known that the diffusion rate depends on the electric field, so that the electrons spread out faster when subject to an electric field. We have investigated this process and shown that the behaviour in one-dimensional systems is very different from that in two- and three-dimensional ones.
In two and three dimensions the diffusion coefficient depends quadratically on the electric field, whereas in one dimension the dependence is linear. Another aspect we have studied is negative differential conductivity, i.e. the current in a material decreasing when the voltage across it is increased. Since this phenomenon has been measured in organic memory cells, we wanted to investigate whether it can also occur in the hopping transport model. It turned out that there are two different mechanisms in the model that can give rise to negative differential conductivity. First, the electrons can get stuck in traps, dead ends in the system, that are harder to escape when the electric field is large. The mean velocity of the electrons, and thereby the current in the material, can then decrease with increasing electric field. Electrical interaction between the electrons can also lead to the same behaviour, through a so-called Coulomb blockade. A Coulomb blockade can arise if the number of conduction electrons in the material increases with increasing voltage. The electrons repel each other, and a larger number of electrons can make transport slower, i.e. decrease the current.
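The random-walk Monte Carlo approach described above can be illustrated with a deliberately minimal one-dimensional hopping sketch (not the thesis's code); the field-dependent hop rates are a Miller-Abrahams-style assumption, and the drift velocity is compared against the analytical value 2 sinh(E/2):

```python
import numpy as np

def hop_drift_velocity(field, n_steps=200_000, seed=0):
    """Monte Carlo random walk on a 1-D chain of localized sites.

    Hop rates right/left carry field factors exp(±field/2); the drift
    velocity is the net displacement per unit time, where time advances
    by the inverse total hop rate at each step."""
    rng = np.random.default_rng(seed)
    rate_r = np.exp(field / 2.0)
    rate_l = np.exp(-field / 2.0)
    p_right = rate_r / (rate_r + rate_l)
    # Each Monte Carlo step: hop right with probability p_right, else left
    steps = np.where(rng.random(n_steps) < p_right, 1, -1)
    total_time = n_steps / (rate_r + rate_l)
    return steps.sum() / total_time

v = hop_drift_velocity(field=1.0)
```

Collecting the variance of the displacement instead of the mean would give the field-dependent diffusion coefficient discussed in the abstract.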
Abstract:
This work presents new, efficient Markov chain Monte Carlo (MCMC) simulation methods for statistical analysis in various modelling applications. When using MCMC methods, the model is simulated repeatedly to explore the probability distribution describing the uncertainties in model parameters and predictions. In adaptive MCMC methods based on the Metropolis-Hastings algorithm, the proposal distribution needed by the algorithm learns from the target distribution as the simulation proceeds. Adaptive MCMC methods have recently been the subject of intensive research, as they open the way to an essentially easier use of the methodology; the lack of user-friendly computer programs has been a main obstacle to wider acceptance of the methods. This work provides two new adaptive MCMC methods: DRAM and AARJ. The DRAM method has been built especially to work in high-dimensional and non-linear problems. The AARJ method is an extension of DRAM to model selection problems, where the mathematical formulation of the model is uncertain and we want to fit several different models to the same observations simultaneously. The methods were developed with the needs of modelling applications typical in environmental sciences in mind, and the development work was pursued while working on several application projects. The applications presented in this work are: a winter-time oxygen concentration model for Lake Tuusulanjärvi and adaptive control of the aerator; a nutrition model for Lake Pyhäjärvi and lake management planning; and validation of the algorithms of the GOMOS ozone remote sensing instrument on board the Envisat satellite of the European Space Agency, together with a study of the effects of aerosol model selection on the GOMOS algorithm.
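In the spirit of the adaptive methods described above (though not the authors' DRAM or AARJ implementations), a minimal adaptive Metropolis sketch, in which the Gaussian proposal covariance is learned from the chain history, might look like this; the scaling constant 2.38²/d follows the standard adaptive-Metropolis heuristic:

```python
import numpy as np

def adaptive_metropolis(log_post, theta0, n_iter=6000, adapt_start=500, seed=0):
    """Adaptive Metropolis sketch: after a short initial period, the
    Gaussian proposal covariance is set from the empirical covariance
    of the chain so far, scaled by 2.38^2/d (Haario et al.-style)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    d = theta.size
    lp = log_post(theta)
    chain = np.empty((n_iter, d))
    cov = 0.1 * np.eye(d)  # fixed proposal before adaptation starts
    for i in range(n_iter):
        prop = rng.multivariate_normal(theta, cov)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[i] = theta
        if i >= adapt_start:
            # Learn the proposal from the history of the chain
            cov = (2.38**2 / d) * np.cov(chain[:i + 1].T) + 1e-8 * np.eye(d)
    return chain

# Example: sample a strongly correlated 2-D Gaussian target
cov_target = np.array([[1.0, 0.9], [0.9, 1.0]])
prec = np.linalg.inv(cov_target)
chain = adaptive_metropolis(lambda t: -0.5 * t @ prec @ t, np.zeros(2))
```

DRAM additionally combines this adaptation with delayed rejection, i.e. trying a scaled-down proposal after a rejection before giving up on the move.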
Abstract:
To obtain the desired accuracy of a robot, there are two techniques available. The first option would be to make the robot match the nominal mathematical model. In other words, the manufacturing and assembly tolerances of every part would be extremely tight so that all of the various parameters would match the "design" or "nominal" values as closely as possible. This method can satisfy most accuracy requirements, but the cost increases dramatically as the accuracy requirement increases. Alternatively, a more cost-effective solution is to build a manipulator with relaxed manufacturing and assembly tolerances. By modifying the mathematical model in the controller, the actual errors of the robot can be compensated. This is the essence of robot calibration. Simply put, robot calibration is the process of defining an appropriate error model and then identifying the various parameter errors that make the error model match the robot as closely as possible. This work focuses on the kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial-parallel hybrid robot. The robot consists of a 4-DOF serial mechanism and a 6-DOF hexapod parallel manipulator. The redundant 4-DOF serial structure is used to enlarge the workspace, and the 6-DOF hexapod manipulator is used to provide high load capability and stiffness for the whole structure. The main objective of the study is to develop a suitable calibration method to improve the accuracy of the redundant serial-parallel hybrid robot. To this end, a Denavit–Hartenberg (DH) hybrid error model and a product-of-exponentials (POE) error model are developed for error modelling of the proposed robot. Furthermore, two kinds of global optimization methods, i.e. the differential evolution (DE) algorithm and the Markov chain Monte Carlo (MCMC) algorithm, are employed to identify the parameter errors of the derived error models.
A measurement method based on a 3-2-1 wire-based pose estimation system is proposed and implemented in a SolidWorks environment to simulate real experimental validation. Numerical simulations and SolidWorks prototype-model validations are carried out on the hybrid robot to verify the effectiveness, accuracy, and robustness of the calibration algorithms.
Abstract:
Innovative gas-cooled reactors, such as the pebble bed reactor (PBR) and the gas-cooled fast reactor (GFR), offer higher efficiency and new application areas for nuclear energy. Numerical methods were applied and developed to analyse the specific features of these reactor types with fully three-dimensional calculation models. In the first part of this thesis, the discrete element method (DEM) was used for physically realistic modelling of the packing of fuel pebbles in PBR geometries, and methods were developed for utilising the DEM results in subsequent reactor physics and thermal-hydraulics calculations. In the second part, the flow and heat transfer for a single gas-cooled fuel rod of a GFR were investigated with computational fluid dynamics (CFD) methods. An in-house DEM implementation was validated and used for packing simulations, in which the effect of several parameters on the resulting average packing density was investigated. The restitution coefficient was found to have the most significant effect. The results can be utilised in further work to obtain a pebble bed with a specific packing density. The packing structures of selected pebble beds were also analysed in detail, and local variations in the packing density were observed, which should be taken into account especially in reactor core thermal-hydraulic analyses. Two open source DEM codes were used to produce stochastic pebble bed configurations to add realism and improve the accuracy of criticality calculations performed with the Monte Carlo reactor physics code Serpent. Russian ASTRA criticality experiments were calculated. Pebble beds corresponding to the experimental specifications within measurement uncertainties were produced in DEM simulations and successfully exported into the subsequent reactor physics analysis. With the developed approach, two typical issues in Monte Carlo reactor physics calculations of pebble bed geometries were avoided.
A novel method was developed and implemented as a MATLAB code to calculate porosities in the cells of a CFD calculation mesh constructed over a pebble bed obtained from DEM simulations. The code was further developed to distribute power and temperature data accurately between discrete-based reactor physics and continuum-based thermal-hydraulics models to enable coupled reactor core calculations. The developed method was also found useful for analysing sphere packings in general. CFD calculations were performed to investigate the pressure losses and heat transfer in three-dimensional air-cooled smooth and rib-roughened rod geometries, housed inside a hexagonal flow channel representing a sub-channel of a single fuel rod of a GFR. The CFD geometry represented the test section of the L-STAR experimental facility at Karlsruhe Institute of Technology, and the calculation results were compared to the corresponding experimental results. Knowledge was gained of the adequacy of various turbulence models and of the modelling requirements and issues related to this specific application. The obtained pressure loss results were in relatively good agreement with the experimental data. Heat transfer in the smooth rod geometry was somewhat underpredicted, which can partly be explained by unaccounted heat losses and uncertainties. In the rib-roughened geometry, heat transfer was severely underpredicted by the realisable k-epsilon turbulence model used. An additional calculation with a v2-f turbulence model showed a significant improvement in the heat transfer results, most likely due to the better performance of the model in separated flow problems. Further investigations are suggested before using CFD to draw conclusions about the heat transfer performance of rib-roughened GFR fuel rod geometries.
It is suggested that the viewpoints of numerical modelling be included in the planning of experiments, to ease the challenging model construction and simulations and to avoid introducing additional sources of uncertainty. To facilitate the use of advanced calculation approaches, multi-physical aspects of experiments should also be considered and documented in reasonable detail.
Abstract:
The original contribution of this thesis to knowledge is a set of novel digital readout architectures for hybrid pixel readout chips. The thesis presents an asynchronous bus-based architecture, a data-node-based column architecture, and a network-based pixel matrix architecture for data transportation. It is shown that the data-node architecture achieves a readout efficiency of 99% with half the output rate of a bus-based system. The network-based solution avoids "broken" columns caused by manufacturing errors, and it distributes internal data traffic more evenly across the pixel matrix than column-based architectures; an improvement of > 10% in efficiency is achieved with both uniform and non-uniform hit occupancies. Architectural design has been done using transaction-level modelling (TLM) and sequential high-level design techniques to reduce design and simulation time, which has made it possible to simulate tens of column and full-chip architectures. A decrease of > 10× in run-time is observed using these techniques compared to the register transfer level (RTL) design technique, and a 50% reduction in lines of code (LoC) has been achieved for the high-level models compared to the RTL description. Two architectures are then demonstrated in two hybrid pixel readout chips. The first chip, Timepix3, has been designed for the Medipix3 collaboration. According to the measurements, it consumes < 1 W/cm^2 and delivers up to 40 Mhits/s/cm^2 with 10-bit time-over-threshold (ToT) and 18-bit time-of-arrival (ToA) at 1.5625 ns binning. The chip uses a token-arbitrated, asynchronous two-phase handshake column bus for internal data transfer, and it has been successfully used in a multi-chip particle tracking telescope. The second chip, VeloPix, is a readout chip being designed for the upgrade of the Vertex Locator (VELO) of the LHCb experiment at CERN.
Based on the simulations, it consumes < 1.5 W/cm^2 while delivering up to 320 Mpackets/s/cm^2, each packet containing up to 8 pixels. VeloPix uses a node-based data fabric to achieve a throughput of 13.3 Mpackets/s from the column to the end of column (EoC). By combining Monte Carlo physics data with high-level simulations, it has been demonstrated that the architecture meets the requirements of the VELO (260 Mpackets/s/cm^2 with an efficiency of 99%).
Abstract:
Since its discovery, chaos has been a very interesting and challenging topic of research, and many great minds have spent their careers trying to bring rules to it. Nowadays, thanks to the research of the last century and the advent of computers, it is possible to predict chaotic natural phenomena for a certain limited amount of time. The aim of this study is to present a recently discovered method for parameter estimation of chaotic dynamical system models via the correlation integral likelihood, to give some hints for a more optimized use of it, and to outline a possible industrial application. The main part of our study concerned two chaotic attractors whose general behaviour differs, in order to capture possible differences in the results. In the various simulations that we performed, the initial conditions were varied quite exhaustively. The results obtained show that, under certain conditions, this method works very well in all cases. In particular, it turned out that the most important aspect is to be very careful while creating the training set and the empirical likelihood, since a lack of information in this part of the procedure leads to low-quality results.
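The correlation integral underlying the likelihood mentioned above can be sketched as follows (a generic Grassberger-Procaccia correlation sum, not the thesis's estimation procedure; the radii and point counts are arbitrary assumptions):

```python
import numpy as np

def correlation_sum(points, radii):
    """Correlation sum C(r): the fraction of distinct point pairs closer
    than r (Grassberger-Procaccia). The vector of C(r) values over a set
    of radii is the kind of summary statistic on which a
    correlation-integral likelihood can be built."""
    pts = np.asarray(points, dtype=float)
    if pts.ndim == 1:
        pts = pts[:, None]
    # Pairwise Euclidean distances; keep the upper triangle (distinct pairs)
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    iu = np.triu_indices(len(pts), k=1)
    d = dist[iu]
    return np.array([(d < r).mean() for r in radii])
```

In a correlation-integral likelihood, C(r) vectors computed from repeated trajectories of the attractor form the training set against which candidate parameter values are scored.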