7 results for cosmological parameters from CMBR

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance:

100.00%

Publisher:

Abstract:

One of the most central tasks in the statistical analysis of mathematical models is the estimation of the models' unknown parameters. This Master's thesis is concerned with the distributions of unknown parameters and with numerical methods suitable for constructing them, especially in cases where the model is nonlinear with respect to the parameters. Among the various numerical methods, the main emphasis is on Markov chain Monte Carlo (MCMC) methods. These computationally intensive methods have recently grown in popularity, mainly because of increased computing power. The theory of both Markov chains and Monte Carlo simulation is presented to the extent needed to justify the validity of the methods. Of recently developed methods, adaptive MCMC methods in particular are examined. The approach of the thesis is practical, and various issues related to the implementation of MCMC methods are emphasized. In the empirical part of the thesis, the distributions of the unknown parameters of five example models are examined using the methods presented in the theoretical part. The models describe chemical reactions and are formulated as systems of ordinary differential equations. The models were collected from chemists at Lappeenranta University of Technology and at Åbo Akademi University in Turku.
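The MCMC approach described above can be illustrated with a minimal random-walk Metropolis sampler for a model that is nonlinear in its parameters. The model, synthetic data, flat positivity prior, and tuning constants below are assumptions for illustration only, not one of the thesis's five chemical reaction models:

```python
import numpy as np

# Hypothetical nonlinear model y = a * (1 - exp(-b * t)), reminiscent of
# simple chemical kinetics; parameters theta = (a, b).
def model(theta, t):
    a, b = theta
    return a * (1.0 - np.exp(-b * t))

def log_posterior(theta, t, y, sigma=0.05):
    a, b = theta
    if a <= 0 or b <= 0:                 # flat prior restricted to a, b > 0
        return -np.inf
    r = y - model(theta, t)
    return -0.5 * np.sum(r**2) / sigma**2

def metropolis(t, y, theta0, n_iter=5000, step=0.02, seed=1):
    """Random-walk Metropolis: propose a Gaussian step, accept with
    probability min(1, posterior ratio)."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, float)
    lp = log_posterior(theta, t, y)
    chain = np.empty((n_iter, len(theta)))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.shape)
        lp_prop = log_posterior(prop, t, y)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept / reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# Synthetic data generated with known parameters (a, b) = (1.0, 0.5)
t = np.linspace(0.1, 10, 25)
rng = np.random.default_rng(0)
y = model((1.0, 0.5), t) + 0.05 * rng.standard_normal(t.size)
chain = metropolis(t, y, theta0=(0.5, 0.2))
print(chain[2500:].mean(axis=0))   # posterior means near (1.0, 0.5)
```

An adaptive MCMC method of the kind examined in the thesis would additionally tune `step` (or a full proposal covariance) from the accumulated chain history.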

Relevance:

100.00%

Publisher:

Abstract:

To remain competitive on the deregulated electricity market, a power plant's energy production costs must be as low as possible, without compromising high availability. Making the best possible use of the fuel's energy content is crucial to the profitability of a power plant. At conventional plants, fuel costs usually account for more than half of the total life-cycle costs. As emission limits keep tightening, the efficient use of fuel becomes even more important. High reliability and availability of energy production are also vital in minimizing costs. This thesis reviews concepts that affect power plant costs, such as efficiency, availability, fuel prices, start-ups and shutdowns, and the most important losses. The operating strategy and the management of deviations aim at good efficiency and low emissions in every operating situation. In addition, the thesis examines how deviations of certain quantities from their setpoints, namely steam temperature and pressure, flue gas oxygen content, flue gas exit temperature, and condenser pressure, affect energy production costs. The oxygen/carbon monoxide optimization also takes into account the unburned material in the bottom ash.
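The link between efficiency deviations and energy production costs discussed above can be sketched with a one-line relation: fuel cost per MWh of electricity = fuel price per MWh of fuel / net efficiency. The price and efficiency figures below are assumed for illustration and are not taken from the thesis:

```python
# Illustrative figures only: fuel price 20 EUR/MWh(fuel), design-point
# efficiency 42 %, and a 2 percentage-point efficiency deviation.
def fuel_cost_per_mwh(fuel_price_eur_per_mwh, efficiency):
    """Fuel cost of producing one MWh of electricity at a given net efficiency."""
    return fuel_price_eur_per_mwh / efficiency

base = fuel_cost_per_mwh(20.0, 0.42)   # at the design point
off  = fuel_cost_per_mwh(20.0, 0.40)   # after a 2 %-point efficiency loss
print(round(base, 2), round(off, 2), round(off - base, 2))
```

Even a small efficiency deviation thus adds a few euros to every produced MWh, which over a plant's life cycle explains why fuel costs dominate.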

Relevance:

100.00%

Publisher:

Abstract:

State-of-the-art predictions of atmospheric states rely on large-scale numerical models of chaotic systems. This dissertation studies numerical methods for state and parameter estimation in such systems. The motivation comes from weather and climate models, and a methodological perspective is adopted. The dissertation comprises three sections: state estimation, parameter estimation, and chemical data assimilation with real atmospheric satellite data. In the state estimation part of this dissertation, a new filtering technique based on a combination of ensemble and variational Kalman filtering approaches is presented, tested and discussed. This new filter is developed for large-scale Kalman filtering applications. In the parameter estimation part, three different techniques for parameter estimation in chaotic systems are considered. The methods are studied using the parameterized Lorenz 95 system, which is a benchmark model for data assimilation. In addition, a dilemma related to the uniqueness of weather and climate model closure parameters is discussed. In the data-oriented part of this dissertation, data from the Global Ozone Monitoring by Occultation of Stars (GOMOS) satellite instrument are considered, and an alternative algorithm to retrieve atmospheric parameters from the measurements is presented. The validation study presents the first global comparisons between two unique satellite-borne datasets of vertical profiles of nitrogen trioxide (NO3), retrieved using the GOMOS and Stratospheric Aerosol and Gas Experiment III (SAGE III) satellite instruments. The GOMOS NO3 observations are also used in a chemical state estimation study to retrieve stratospheric temperature profiles. The main result of this dissertation is the use of likelihood calculations based on Kalman filtering outputs. The concept has previously been used together with stochastic differential equations and in time series analysis.
In this work, the concept is applied to chaotic dynamical systems and used together with Markov chain Monte Carlo (MCMC) methods for statistical analysis. In particular, this methodology is advocated for use in numerical weather prediction (NWP) and climate model applications. In addition, the concept is shown to be useful in estimating filter-specific parameters, such as those of the model error covariance matrix.
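The idea of obtaining a likelihood from Kalman filter outputs can be illustrated on a toy linear-Gaussian model: the filter's innovations and their variances give the marginal likelihood of the data, which can then drive an MCMC sampler or, as below, a simple grid search. The scalar model, its parameter values, and the grid are assumptions for illustration; the dissertation applies the concept to chaotic systems:

```python
import numpy as np

def kf_loglik(y, a, q, r, m0=0.0, p0=1.0):
    """Log-likelihood of the scalar linear-Gaussian state space model
    x_k = a x_{k-1} + N(0, q),  y_k = x_k + N(0, r),
    accumulated from the Kalman filter's innovations."""
    m, p, ll = m0, p0, 0.0
    for yk in y:
        m, p = a * m, a * a * p + q        # predict
        s = p + r                          # innovation variance
        v = yk - m                         # innovation
        ll += -0.5 * (np.log(2 * np.pi * s) + v * v / s)
        k = p / s                          # update
        m, p = m + k * v, (1 - k) * p
    return ll

# Simulate data with true dynamics parameter a = 0.9, then scan the
# likelihood surface over a, as could feed an MCMC sampler.
rng = np.random.default_rng(0)
x, ys = 0.0, []
for _ in range(200):
    x = 0.9 * x + rng.normal(0, np.sqrt(0.1))
    ys.append(x + rng.normal(0, np.sqrt(0.2)))
grid = np.linspace(0.5, 0.99, 50)
best = grid[np.argmax([kf_loglik(ys, a, 0.1, 0.2) for a in grid])]
print(best)   # maximizer lands near the true a = 0.9
```

For chaotic, non-Gaussian systems the exact Kalman filter above would be replaced by an ensemble or particle filter, but the likelihood accumulation has the same structure.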

Relevance:

100.00%

Publisher:

Abstract:

Waste combustion has gone from being a volume-reducing disposal method to an energy recovery process for unwanted material that cannot be reused or recycled. Different fractions of waste are used as fuel today, such as municipal solid waste, refuse-derived fuel, and solid recovered fuel. Furthermore, industrial waste, normally a mixture of commercial waste and building and demolition waste, is common, either as a separate fuel or mixed with, for example, municipal solid waste. Compared to fossil or biomass fuels, waste mixtures are extremely heterogeneous, which makes waste a complicated fuel. Differences in calorific value, ash content and moisture content, and changing levels of elements such as Cl and alkali metals, are common in waste fuel. Moreover, waste contains much higher levels of troublesome trace elements, such as Zn, which is thought to accelerate corrosion processes. Varying fuel quality can be strenuous on the boiler system and may cause fouling and corrosion of heat exchanger surfaces. This thesis examines waste fuels and waste combustion from different angles, with the objective of giving a better understanding of waste as an important fuel in today's fuel economy. Several chemical characterisation campaigns of waste fuels over longer time periods (10-12 months) were used to determine the fossil content of Swedish waste fuels, to investigate possible seasonal variations, and to study the presence of Zn in waste. Data from the characterisation campaigns were used for thermodynamic equilibrium calculations to follow trends and determine the effect of changing concentrations of various elements. The thesis also includes a study of the thermal behaviour of Zn and a full-scale study of how the bed temperature affects the volatilisation of alkali metals and Zn from the fuel. As mixed waste fuel contains considerable amounts of fresh biomass, such as wood, food waste and paper, it would be wrong to classify it as a fossil fuel.
When Sweden brought waste combustion into the European Union emission trading system at the beginning of 2013, combustion plants needed a usable and reliable method to determine the fossil content. Four different methods were studied in full scale at seven combustion plants: 14C-analysis of solid waste, 14C-analysis of flue gas, sorting analysis followed by calculations, and a patented balance method that uses a software program to calculate the fossil content based on parameters from the plant. The study showed that approximately one third of the carbon in Swedish waste mixtures is of fossil origin, and presented the plants with information about the four methods and their advantages and disadvantages. The characterisation campaigns also showed that industrial waste contains higher levels of trace elements, such as Zn. The content of Zn in Swedish waste fuels was determined to be approximately 800 mg kg-1 on average, based on 42 samples of solid waste from seven different plants with varying mixtures of municipal solid waste and industrial waste. A review study of the occurrence of Zn in fuels confirmed that the highest amounts of Zn are present in waste fuels rather than in fossil or biomass fuels. In tires, Zn is used as a vulcanizing agent and can reach concentrations of 9600-16800 mg kg-1. Waste electrical and electronic equipment is the second Zn-richest fuel: although the average Zn content is around 4000 mg kg-1, values of over 19000 mg kg-1 have also been reported. Increased amounts of Zn are also found in municipal solid waste (3000-4000 mg kg-1), in sludge (over 2000 mg kg-1 on average, with some exceptions up to 49000 mg kg-1), and in other waste-derived fuels (over 1000 mg kg-1). Zn is also found in fossil fuels. In coal, the average level of Zn is 100 mg kg-1; higher amounts of Zn have been reported only for oil shale, with values between 20-2680 mg kg-1.
The content of Zn in biomass is basically determined by its natural occurrence and is typically 10-100 mg kg-1. The thermal behaviour of Zn is important for understanding the possible reactions taking place in the boiler. Using thermal analysis, three common Zn compounds (ZnCl2, ZnSO4, and ZnO) were studied and compared to phase diagrams produced with thermodynamic equilibrium calculations. The results of the study suggest that ZnCl2(s/l) cannot readily exist in the boiler, owing to its volatility at high temperatures and its conversion to ZnO in oxidising conditions. Also, ZnSO4 decomposes around 680°C, while ZnO is relatively stable in the temperature range prevailing in the boiler. Furthermore, by exposing ZnO to HCl in a hot environment (240-330°C), it was shown that chlorination of ZnO with HCl gas is possible. Waste fuels containing high levels of elements known to be corrosive, for example Na and K in combination with Cl, and also significant amounts of trace elements, such as Zn, are demanding on the whole boiler system. A full-scale study of how the volatilisation of Na, K, and Zn is affected by the bed temperature in a fluidised bed boiler was performed in parallel with a lab-scale study under the same conditions. The study showed that the fouling rate on deposit probes decreased by 20 % when the bed temperature was lowered from 870°C to below 720°C. In addition, the lab-scale experiments clearly indicated that the amount of alkali metals and Zn volatilised depends on the reactor temperature.
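The "sorting analysis followed by calculations" approach mentioned above can be sketched as a simple mass balance over waste fractions. All figures below (fraction names, mass fractions, fossil-carbon shares, and the uniform carbon content) are invented for illustration and are not data from the thesis:

```python
# Hypothetical sorting-analysis result: for each sorted fraction, its mass
# fraction of the fuel mixture and the share of that fraction's carbon
# assumed to be of fossil origin.
fractions = {
    #                 mass fraction, fossil share of carbon
    "plastics":        (0.15, 1.00),
    "paper/cardboard": (0.30, 0.00),
    "food waste":      (0.25, 0.00),
    "textiles":        (0.10, 0.50),
    "other":           (0.20, 0.40),
}

# Fossil share of total carbon, assuming (for simplicity) the same carbon
# content in every fraction; real sorting calculations weight fractions by
# their individual carbon contents and heating values.
fossil = sum(m * f for m, f in fractions.values())
total = sum(m for m, _ in fractions.values())
print(round(fossil / total, 2))
```

With these invented numbers the fossil carbon share comes out near one third, of the same order as the full-scale result reported above, but the agreement is by construction, not a validation.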

Relevance:

100.00%

Publisher:

Abstract:

This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modeling is applied to many different application areas, such as global positioning systems, target tracking, navigation, brain imaging, the spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to computing the posterior probability density function. Except for a very restricted class of models, it is impossible to compute this density function in closed form; hence, approximation methods are needed. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states based on available measurements. Among these filters, particle filters are numerical methods for approximating the filtering distributions of non-linear, non-Gaussian state space models via Monte Carlo. The performance of a particle filter depends heavily on the chosen importance distribution; an inappropriate choice can cause the particle filter algorithm to fail to converge. In this thesis, we analyze the theoretical Lᵖ convergence of the particle filter with general importance distributions, where p ≥ 2 is an integer. A parameter estimation problem is concerned with inferring the model parameters from measurements. For high-dimensional complex models, parameter estimation can be done with Markov chain Monte Carlo (MCMC) methods. In its operation, an MCMC method requires the unnormalized posterior distribution of the parameters and a proposal distribution.
In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods, in which the states are integrated out. This type of computation is then applied to estimate the parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends highly on the chosen proposal distribution. A commonly used proposal distribution is Gaussian; in this kind of proposal, the covariance matrix must be well tuned. To tune it, adaptive MCMC methods can be used. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
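A bootstrap particle filter is the simplest instance of the particle filters discussed above: it uses the state transition prior as the importance distribution, which is easy but, as noted, not always a good choice. The scalar model and all parameter values below are assumptions for illustration:

```python
import numpy as np

def bootstrap_pf(y, n_part=500, q=0.1, r=0.2, seed=0):
    """Bootstrap particle filter for the scalar model
    x_k = 0.9 x_{k-1} + N(0, q),  y_k = x_k + N(0, r).
    Importance distribution = transition prior; returns filtered means."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0, 1, n_part)
    means = []
    for yk in y:
        x = 0.9 * x + rng.normal(0, np.sqrt(q), n_part)  # propagate particles
        w = np.exp(-0.5 * (yk - x) ** 2 / r)             # weight by likelihood
        w /= w.sum()
        x = rng.choice(x, size=n_part, p=w)              # multinomial resampling
        means.append(x.mean())
    return np.array(means)

# Simulate a state trajectory and noisy observations, then filter.
rng = np.random.default_rng(1)
xs, ys, x = [], [], 0.0
for _ in range(100):
    x = 0.9 * x + rng.normal(0, np.sqrt(0.1))
    xs.append(x)
    ys.append(x + rng.normal(0, np.sqrt(0.2)))
m = bootstrap_pf(ys)
print(np.sqrt(np.mean((m - np.array(xs)) ** 2)) < np.std(xs))  # filter beats raw spread
```

For this linear-Gaussian toy an exact Kalman filter would suffice; the particle filter's value lies in non-linear, non-Gaussian models where no closed-form filter exists.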

Relevance:

100.00%

Publisher:

Abstract:

The objective of this thesis is to develop and further generalize the differential evolution based data classification method. For many years, evolutionary algorithms have been successfully applied to many classification tasks. Evolutionary algorithms are population-based, stochastic search algorithms that mimic natural selection and genetics. Differential evolution is an evolutionary algorithm that has gained popularity because of its simplicity and good observed performance. In this thesis, a differential evolution classifier with a pool of distances is proposed, demonstrated and initially evaluated. The differential evolution classifier is a nearest-prototype-vector-based classifier that applies a global optimization algorithm, differential evolution, to determine the optimal values for all free parameters of the classifier model during the training phase. The differential evolution classifier, which applies an individually optimized distance measure to each new data set to be classified, is generalized to cover a pool of distances. Instead of optimizing a single distance measure for the given data set, the selection of the optimal distance measure from a predefined pool of alternative measures is attempted systematically and automatically. Furthermore, instead of only selecting the optimal distance measure from a set of alternatives, an attempt is made to optimize the values of the possible control parameters related to the selected distance measure. Specifically, a pool of alternative distance measures is first created, and then the differential evolution algorithm is applied to select the optimal distance measure that yields the highest classification accuracy with the current data. After the optimal distance measures for the given data set have been determined together with their optimal parameters, all determined distance measures are aggregated to form a single total distance measure.
The total distance measure is applied to the final classification decisions. The actual classification process is still based on the nearest prototype vector principle: a sample belongs to the class represented by the nearest prototype vector when measured with the optimized total distance measure. During the training process, the differential evolution algorithm determines the optimal class vectors, selects the optimal distance metrics, and determines the optimal values for the free parameters of each selected distance measure. The results obtained with the above method confirm that the choice of distance measure is one of the most crucial factors for obtaining high classification accuracy. The results also demonstrate that it is possible to build a classifier that selects the optimal distance measure for the given data set automatically and systematically. After the optimal distance measures and their parameters have been found, the results are aggregated to form a total distance, which is used to measure the deviation between the class vectors and the samples and thus to classify the samples. This thesis also discusses two types of aggregation operators, namely ordered weighted averaging (OWA) based multi-distances and generalized ordered weighted averaging (GOWA). These aggregation operators were applied in this work to the aggregation of the normalized distance values. The results demonstrate that a proper combination of aggregation operator and weight generation scheme plays an important role in obtaining good classification accuracy. The main outcomes of the work are six new generalized versions of the previously proposed differential evolution classifier. All of these DE classifiers demonstrated good results in classification tasks.
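The global optimizer at the core of the classifier described above can be sketched as the classic DE/rand/1/bin scheme. The toy sphere objective and all control parameter values below are assumptions for illustration; in the classifier, the decision variables would encode the class prototype vectors and the distance-measure parameters:

```python
import numpy as np

def de_rand_1_bin(f, bounds, pop_size=30, F=0.7, CR=0.9, n_gen=200, seed=0):
    """Minimal DE/rand/1/bin minimizer: mutation a + F*(b - c),
    binomial crossover with rate CR, greedy one-to-one selection."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    dim = len(bounds)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    cost = np.array([f(p) for p in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)       # mutation
            cross = rng.uniform(size=dim) < CR              # binomial crossover
            cross[rng.integers(dim)] = True                 # keep >= 1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            tc = f(trial)
            if tc <= cost[i]:                               # greedy selection
                pop[i], cost[i] = trial, tc
    return pop[np.argmin(cost)], cost.min()

best, val = de_rand_1_bin(lambda x: np.sum(x**2), [(-5, 5)] * 3)
print(val < 1e-3)   # converges to the sphere minimum at the origin
```

Replacing the sphere objective with "one minus classification accuracy of the prototype classifier under the encoded distance measure" turns this generic optimizer into the training loop of a DE classifier.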

Relevance:

100.00%

Publisher:

Abstract:

Diabetic retinopathy, age-related macular degeneration and glaucoma are the leading causes of blindness worldwide. Automatic methods for diagnosis exist, but their performance is limited by the quality of the data. Spectral retinal images provide a significantly better representation of the colour information than common grayscale or red-green-blue retinal imaging, and thus have the potential to improve the performance of automatic diagnosis methods. This work studies the image processing techniques required for composing spectral retinal images with accurate reflection spectra, including wavelength channel image registration, spectral and spatial calibration, illumination correction, and the estimation of depth information from image disparities. The composition of a spectral retinal image database of patients with diabetic retinopathy is described. The database includes gold standards for a number of pathologies and retinal structures, marked by two expert ophthalmologists. The diagnostic applications of the reflectance spectra are studied using supervised classifiers for lesion detection. In addition, inversion of a model of light transport is used to estimate histological parameters from the reflectance spectra. Experimental results suggest that the methods for composing, calibrating and postprocessing spectral images presented in this work can be used to improve the quality of the spectral data. The experiments on the direct and indirect use of the data show the diagnostic potential of spectral retinal data over standard retinal images. The use of spectral data could improve automatic and semi-automated diagnostics in the screening of retinal diseases, in the quantitative detection of retinal changes for follow-up, in clinically relevant end-points for clinical studies, and in the development of new therapeutic modalities.
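Supervised lesion detection on spectral images of the kind described above ultimately reduces to classifying per-pixel reflectance spectra. A minimal sketch is a nearest-class-mean rule; the band count, the two classes, and the synthetic Gaussian spectra below are assumptions for illustration, not data or methods from the thesis:

```python
import numpy as np

def fit_class_means(spectra, labels):
    """Mean reflectance spectrum per class from labelled training pixels."""
    return {c: spectra[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(spectra, means):
    """Assign each spectrum to the class with the nearest mean spectrum."""
    classes = sorted(means)
    d = np.stack([np.linalg.norm(spectra - means[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

# Synthetic 30-band spectra: "background" pixels brighter than "lesion"
# pixels on average (entirely invented for this sketch).
rng = np.random.default_rng(0)
bands = 30
bg = rng.normal(0.5, 0.05, (200, bands))
lesion = rng.normal(0.35, 0.05, (200, bands))
X = np.vstack([bg, lesion])
y = np.array([0] * 200 + [1] * 200)
means = fit_class_means(X, y)
acc = (classify(X, means) == y).mean()
print(acc > 0.99)   # the two synthetic classes are well separated
```

Real spectral retinal data would of course need the registration, calibration and illumination correction steps described above before any per-pixel classifier becomes meaningful.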