946 results for Covariance
Abstract:
Methane-rich landfill gas is generated when biodegradable organic wastes disposed of in landfills decompose under anaerobic conditions. Methane is a significant greenhouse gas, and landfills are its major source in Finland. Methane production in a landfill depends on many factors, such as the composition of the waste and the landfill conditions, and it can vary considerably over time and space. Methane generation from waste can be estimated with various models. In this thesis, three spreadsheet applications, a reaction equation and a triangular model for estimating gas generation are introduced. The spreadsheet models are the IPCC Waste Model (2006), Metaanilaskentamalli by Jouko Petäjä of the Finnish Environment Institute, and LandGEM (3.02) of the U.S. Environmental Protection Agency. All of these are based on the first order decay (FOD) method. Gas recovery methods and gas emission measurements were also examined. Vertical wells and horizontal trenches are the most commonly used gas collection systems. Of the emission measurement techniques, the chamber method, the tracer method, soil core and isotope measurements, micrometeorological mass-balance and eddy covariance methods, and gas-measuring FID technology were discussed. Methane production at the Ämmässuo landfill of HSY Helsinki Region Environmental Services Authority was estimated with the methane generation models, and the results were compared with the volumes of collected gas. All spreadsheet models underestimated the methane generation at some point. LandGEM with default parameters and Metaanilaskentamalli with modified parameters corresponded best with the gas recovery figures. One possible reason for the differences between estimated and collected volumes is that the parameter values for degradable organic carbon (DOC) and the fraction of decomposable degradable organic carbon (DOCf) do not represent the real values well enough. Notable uncertainty is associated with the modelling results and model parameters; however, no simple explanation for the observed differences can be given within this thesis.
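All three spreadsheet models rest on the same first order decay idea: carbon deposited in year t0 decomposes exponentially, and a fixed fraction of the decomposed carbon leaves as methane. The sketch below illustrates the mechanics under that assumption; the function name and default parameter values are illustrative placeholders, not the calibrated values of any of the models named above.

```python
import math

def fod_methane(deposits, years, k=0.09, doc=0.15, doc_f=0.5, f=0.5):
    """Illustrative first order decay (FOD) sketch: annual CH4 generation
    from yearly waste deposits (same mass unit in and out)."""
    ch4 = [0.0] * years
    for t0, waste in enumerate(deposits):
        ddocm = waste * doc * doc_f          # decomposable organic carbon in the batch
        for t in range(t0 + 1, years):
            # carbon decomposed during year t under exponential decay
            decomposed = ddocm * (math.exp(-k * (t - t0 - 1)) - math.exp(-k * (t - t0)))
            ch4[t] += decomposed * f * 16.0 / 12.0   # carbon mass -> CH4 mass
    return ch4

print(fod_methane([100.0, 120.0, 90.0], years=10))
```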
Abstract:
Since its introduction, fuzzy set theory has become a useful tool in the mathematical modelling of problems in Operations Research and many other fields, and the number of applications is growing continuously. In this thesis we investigate a special type of fuzzy set, namely fuzzy numbers. Fuzzy numbers (considered in the thesis as possibility distributions) have been widely used in quantitative analysis in recent decades. In this work two measures of interactivity are defined for fuzzy numbers: the possibilistic correlation and the correlation ratio. We focus on both the theoretical and practical applications of these new indices. The approach is based on the level-sets of the fuzzy numbers and on the concept of the joint distribution of marginal possibility distributions. The measures possess properties similar to the corresponding probabilistic correlation and correlation ratio. The connections to real-life decision making problems are emphasized, with a focus on financial applications. We extend the definitions of possibilistic mean value, variance, covariance and correlation to quasi fuzzy numbers and prove necessary and sufficient conditions for the finiteness of the possibilistic mean value and variance. The connection between the concepts of probabilistic and possibilistic correlation is investigated using an exponential distribution. The use of fuzzy numbers in practical applications is demonstrated by the Fuzzy Pay-Off method. This model for real option valuation builds on findings from earlier real option valuation models. We illustrate the use of a number of different types of fuzzy numbers and mean value concepts with the method and provide a real-life application.
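For orientation, the level-set-based possibilistic moments that the thesis extends are usually written as follows (the notation $[a_1(\gamma), a_2(\gamma)]$ for the $\gamma$-level set of a fuzzy number $A$ is ours; this is the standard Carlsson–Fullér-style formulation, not a quotation from the thesis):

$$E(A) = \int_0^1 \gamma\,\bigl(a_1(\gamma) + a_2(\gamma)\bigr)\,d\gamma, \qquad \operatorname{Var}(A) = \frac{1}{2}\int_0^1 \gamma\,\bigl(a_2(\gamma) - a_1(\gamma)\bigr)^2\,d\gamma.$$

For quasi fuzzy numbers the endpoints $a_1(\gamma), a_2(\gamma)$ may blow up as $\gamma \to 0$, which is why the finiteness of these integrals becomes a genuine condition to be characterized.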
Abstract:
The Modified Bartlett-Lewis Rectangular Pulse (BLPRM) model simulates precipitation series at hourly and sub-hourly time scales and has six parameters for each of the twelve months of the year. This study aimed to evaluate the behavior of precipitation series with a duration of 15 min, obtained by simulation with the BLPRM model, in two situations: (a) the parameters are estimated from different combinations of statistics, creating five different parameter sets; (b) the suitability of the model to generate rainfall is assessed. To adjust the parameters, rain-gauge records from Pelotas/RS/Brazil were used, from which the following statistics were estimated: mean, variance, covariance, lag-1 autocorrelation coefficient, and the proportion of dry days in the period considered. The results showed that the parameters related to the time of onset of precipitation (λ) and to the intensities (μx) were the most stable, while the most unstable was the ν parameter, related to rain duration. The BLPRM model adequately represented the mean, variance, and proportion of dry periods of the 15-min precipitation series; the time dependence of rainfall depths, represented by the lag-1 autocorrelation coefficient, was the statistic least well reproduced by the simulated series for the 15-min duration.
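In this fitting approach, the model parameters are chosen so that the statistics of simulated rainfall match those estimated from the gauge records. A minimal sketch of the observed-statistics side, assuming a hypothetical 1-D array of 15-min rainfall depths:

```python
import numpy as np

def rainfall_statistics(depths):
    """Statistics of the kind used to fit Bartlett-Lewis-type models
    at a single aggregation level (here, 15-min depths)."""
    x = np.asarray(depths, dtype=float)
    return {
        "mean": x.mean(),
        "variance": x.var(ddof=1),
        "lag1_autocorr": np.corrcoef(x[:-1], x[1:])[0, 1],
        "dry_proportion": np.mean(x == 0.0),   # share of dry intervals
    }

rng = np.random.default_rng(0)
wet = rng.random(1000) < 0.1                   # mostly dry intervals
series = rng.exponential(0.4, 1000) * wet
print(rainfall_statistics(series))
```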
Abstract:
State-of-the-art predictions of atmospheric states rely on large-scale numerical models of chaotic systems. This dissertation studies numerical methods for state and parameter estimation in such systems. The motivation comes from weather and climate models, and a methodological perspective is adopted. The dissertation comprises three parts: state estimation, parameter estimation, and chemical data assimilation with real atmospheric satellite data. In the state estimation part, a new filtering technique based on a combination of ensemble and variational Kalman filtering approaches is presented, tested and discussed. This new filter is developed for large-scale Kalman filtering applications. In the parameter estimation part, three different techniques for parameter estimation in chaotic systems are considered. The methods are studied using the parameterized Lorenz 95 system, which is a benchmark model for data assimilation. In addition, a dilemma related to the uniqueness of weather and climate model closure parameters is discussed. In the data-oriented part, data from the Global Ozone Monitoring by Occultation of Stars (GOMOS) satellite instrument are considered, and an alternative algorithm for retrieving atmospheric parameters from the measurements is presented. The validation study presents the first global comparisons between two unique satellite-borne datasets of vertical profiles of nitrogen trioxide (NO3), retrieved using the GOMOS and Stratospheric Aerosol and Gas Experiment III (SAGE III) satellite instruments. The GOMOS NO3 observations are also used in a chemical state estimation study in order to retrieve stratospheric temperature profiles. The main result of this dissertation is the use of likelihood calculations based on Kalman filtering outputs. The concept has previously been used together with stochastic differential equations and in time series analysis. In this work, the concept is applied to chaotic dynamical systems and used together with Markov chain Monte Carlo (MCMC) methods for statistical analysis. In particular, this methodology is advocated for use in numerical weather prediction (NWP) and climate model applications. In addition, the concept is shown to be useful in estimating filter-specific parameters related, e.g., to the model error covariance matrix.
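The likelihood evaluation referred to above uses the standard prediction-error decomposition available from any Kalman-type filter run; in our notation (not necessarily the dissertation's), with innovation $v_k = y_k - H_k x_k^{\mathrm{pred}}$ and innovation covariance $S_k = H_k P_k^{\mathrm{pred}} H_k^{\mathsf{T}} + R_k$,

$$\log L(\theta \mid y_{1:T}) = -\frac{1}{2}\sum_{k=1}^{T}\Bigl[\log\det S_k(\theta) + v_k(\theta)^{\mathsf{T}} S_k(\theta)^{-1} v_k(\theta)\Bigr] + \text{const},$$

which can be handed directly to an MCMC sampler as the log-likelihood of the parameters $\theta$.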
Abstract:
The theme of this thesis is context-specific independence in graphical models. Considering a system of stochastic variables, it is often the case that the variables are dependent on each other. This can, for instance, be seen by measuring the covariance between a pair of variables. Using graphical models, it is possible to visualize the dependence structure found in a set of stochastic variables. In ordinary graphical models, such as Markov networks, Bayesian networks, and Gaussian graphical models, the types of dependencies that can be modeled are limited to marginal and conditional (in)dependencies. The models introduced in this thesis enable the graphical representation of context-specific independencies, i.e. conditional independencies that hold only in a subset of the outcome space of the conditioning variables. In the articles included in this thesis, we introduce several types of graphical models that can represent context-specific independencies. Models for both discrete variables and continuous variables are considered. A wide range of properties are examined for the introduced models, including identifiability, robustness, scoring, and optimization. In one article, a predictive classifier which utilizes context-specific independence models is introduced. This classifier clearly demonstrates the potential benefits of the introduced models. The purpose of the material included in the thesis prior to the articles is to provide the basic theory needed to understand the articles.
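As a point of reference, for discrete variables the notion can be stated as follows (our notation, not a quotation from the thesis): $X$ is contextually independent of $Y$ given $Z$ in the context $C = c$ if

$$P(X = x \mid Y = y, Z = z, C = c) = P(X = x \mid Z = z, C = c)$$

for all $x, y, z$ whenever the conditioning probabilities are positive; ordinary conditional independence is recovered when the equality holds for every context $c$.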
Abstract:
Rodents submitted to restraint stress show decreased activity in an elevated plus-maze (EPM) 24 h later. The objective of the present study was to determine whether a certain amount of time is needed after stress for these changes to develop. We also wanted to verify whether behavioral tolerance to repeated daily restraint would be detectable in this model. Male Wistar rats were restrained for 2 h and tested in the EPM 1, 2, 24 or 48 h later. Another group of animals was immobilized daily for 2 h for 7 days and tested in the EPM 24 h after the last restraint period. Restraint induced a significant decrease in the percentage of entries into and time spent in the open arms, as well as a decrease in the number of enclosed-arm entries. The significant effect on the number of entries and the percentage of time spent in the open arms disappeared when the data were submitted to analysis of covariance using the number of enclosed-arm entries as a covariate. This suggests that restraint-induced hypoactivity influences the measures of open-arm exploration. The restraint-induced changes are evident 24 or 48 h, but not 1 or 2 h, after stress. In addition, rats stressed daily for seven days became tolerant to this effect.
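The covariance analysis described here adjusts open-arm measures for general locomotion before testing the group effect. A minimal sketch of such an ANCOVA, with an entirely hypothetical data frame (column names and values invented for illustration):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per rat.
df = pd.DataFrame({
    "open_time_pct":    [12.0, 8.5, 10.2, 20.1, 25.3, 22.8],
    "group":            ["stress"] * 3 + ["control"] * 3,
    "enclosed_entries": [4, 3, 5, 9, 10, 8],
})

# ANCOVA: group effect on open-arm time, with enclosed-arm entries
# (a locomotion index) as the covariate.
model = smf.ols("open_time_pct ~ C(group) + enclosed_entries", data=df).fit()
print(model.summary())
```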
Abstract:
The reasons for the inconsistent association between salt consumption and blood pressure levels observed in within-society surveys are not known. A total of 157 normotensive subjects aged 18 to 35 years, selected at random in a cross-sectional population-based survey, answered a structured questionnaire. They were classified as strongly predisposed to hypertension when two or more first-degree relatives had a diagnosis of hypertension. Anthropometric parameters were obtained and sitting blood pressure was determined with aneroid sphygmomanometers. Sodium and potassium excretion was measured by flame spectrophotometry in an overnight urine sample. A positive correlation between blood pressure and urinary sodium excretion was detected only in the group of individuals strongly predisposed to hypertension, both for systolic blood pressure (r = 0.51, P<0.01) and diastolic blood pressure (r = 0.50, P<0.01). In a covariance analysis controlling for age, skin color and body mass index, individuals strongly predisposed to hypertension who excreted amounts of sodium above the median of the entire sample had higher systolic and diastolic blood pressure than subjects in the remaining categories. The influence of familial predisposition to hypertension on the association between salt intake and blood pressure may be an additional explanation for the weak association between urinary sodium excretion and blood pressure observed in within-population studies, since this predisposition can shape the association in some but not all individuals.
Abstract:
This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modeling is applied to many different application areas, such as global positioning systems, target tracking, navigation, brain imaging, the spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to computing the posterior probability density function. Except for a very restricted class of models, it is impossible to compute this density function in closed form; hence, we need approximation methods. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states based on available measurements. Among these filters, particle filters are numerical methods for approximating the filtering distributions of non-linear non-Gaussian state space models via Monte Carlo. The performance of a particle filter depends heavily on the chosen importance distribution; for instance, an inappropriate choice of importance distribution can make the particle filter algorithm fail to converge. In this thesis, we analyze the theoretical Lᵖ particle filter convergence with general importance distributions, where p ≥ 2 is an integer. A parameter estimation problem concerns inferring the model parameters from measurements. For high-dimensional complex models, parameter estimation can be done by Markov chain Monte Carlo (MCMC) methods. In its operation, an MCMC method requires the unnormalized posterior distribution of the parameters and a proposal distribution. In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods in which the states are integrated out. This type of computation is then applied to estimate the parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use the hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends strongly on the chosen proposal distribution. A commonly used proposal distribution is Gaussian, in which case the covariance matrix must be well tuned; to tune it, adaptive MCMC methods can be used. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
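Since the choice of importance distribution is central to the convergence results mentioned above, a minimal bootstrap particle filter (which uses the dynamics themselves as the importance distribution, the simplest but not necessarily best choice) may help fix ideas. Everything here, model and values alike, is an illustrative assumption:

```python
import numpy as np

def bootstrap_particle_filter(ys, n, f, h, q_std, r_std, rng):
    """Bootstrap particle filter sketch for a scalar model
       x_k = f(x_{k-1}) + q_k,  y_k = h(x_k) + r_k,  Gaussian noises."""
    x = rng.normal(0.0, 1.0, n)                       # initial particle cloud
    means = []
    for y in ys:
        x = f(x) + rng.normal(0.0, q_std, n)          # propagate (importance dist.)
        logw = -0.5 * ((y - h(x)) / r_std) ** 2       # weight by measurement likelihood
        w = np.exp(logw - logw.max()); w /= w.sum()
        means.append(float(w @ x))                    # filtering mean estimate
        x = x[rng.choice(n, n, p=w)]                  # multinomial resampling
    return np.array(means)

rng = np.random.default_rng(1)
obs = np.cumsum(rng.normal(0.0, 0.5, 50)) + rng.normal(0.0, 1.0, 50)
est = bootstrap_particle_filter(obs, 500, lambda x: x, lambda x: x, 0.5, 1.0, rng)
print(est[:5])
```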
Abstract:
The aim of this work is to invert the ionospheric electron density profile from riometer (relative ionospheric opacity meter) measurements. The new instrument KAIRA (Kilpisjärvi Atmospheric Imaging Receiver Array) is used to measure the cosmic HF radio noise absorption that takes place in the D-region ionosphere, between 50 and 90 km. To invert the electron density profile, synthetic data are used to feed the unknown parameter Neq with a spline-height method, which works by parameterizing the electron density profile at different altitudes. Moreover, a smoothing-prior method is also used to sample from the posterior distribution by truncating the prior covariance matrix. The smoothing-prior approach makes it easier to find the posterior with the MCMC (Markov chain Monte Carlo) method.
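One common way to build such a smoothing prior, sketched here under our own conventions (a second-difference penalty rather than whatever exact truncation the thesis uses), is to let a discrete second-derivative operator define the prior precision of the profile:

```python
import numpy as np

def smoothing_prior_covariance(n, alpha=1.0, ridge=1e-6):
    """Sketch: prior saying D x ~ N(0, alpha^2 I) for the second-difference
    operator D, i.e. penalizing curvature of the profile x. A small ridge
    keeps the (otherwise singular) precision invertible; truncating the
    covariance, as in the abstract, plays a comparable regularizing role."""
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]       # discrete second derivative
    precision = D.T @ D / alpha**2 + ridge * np.eye(n)
    return np.linalg.inv(precision)

cov = smoothing_prior_covariance(50)
print(cov.shape)
```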
Abstract:
This thesis examines the suitability of VaR for foreign exchange rate risk management from the perspective of a European investor. The suitability of four different VaR models is evaluated in order to gain insight into whether VaR is a valuable tool for managing foreign exchange rate risk. The models evaluated are the historical method, the historical bootstrap method, the variance-covariance method and Monte Carlo simulation. The data are divided into emerging and developed market currencies to allow a more nuanced analysis. The foreign exchange rate data in this thesis cover the period from 31 January 2000 to 30 April 2014. The results show that none of these VaR models should be considered a stand-alone tool in foreign exchange rate risk management. The variance-covariance method and Monte Carlo simulation perform the poorest in both currency portfolios. Both historical methods performed better but should likewise be treated as an additional tool alongside other, more sophisticated analysis tools. A comparative study of VaR estimates and forward prices is also included in the thesis. The study reveals that, despite the expensive hedging cost of emerging market currencies, the risk captured by VaR is more expensive still, and FX forward hedging is therefore recommended.
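As a reference for two of the models compared, the sketch below contrasts the variance-covariance estimate (a normal quantile of the portfolio return) with the plain historical estimate (an empirical quantile); the return data are simulated stand-ins, not the thesis's FX series:

```python
import numpy as np
from scipy.stats import norm

def parametric_var(returns, weights, value=1.0, level=0.99):
    """Variance-covariance VaR: assume normal portfolio returns."""
    mu = returns.mean(axis=0) @ weights
    sigma = np.sqrt(weights @ np.cov(returns, rowvar=False) @ weights)
    return -value * (mu + norm.ppf(1.0 - level) * sigma)

def historical_var(returns, weights, value=1.0, level=0.99):
    """Historical VaR: empirical quantile of past portfolio returns."""
    return -value * np.quantile(returns @ weights, 1.0 - level)

rng = np.random.default_rng(2)
rets = rng.normal(0.0, 0.01, size=(1000, 2))    # hypothetical daily FX returns
w = np.array([0.5, 0.5])
print(parametric_var(rets, w), historical_var(rets, w))
```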
Abstract:
The contribution of genetic factors to the development of obesity has been widely recognized, but the identity of the genes involved has not yet been fully clarified. Variation in genes involved in adipocyte differentiation and energy metabolism is expected to have a role in the etiology of obesity. We assessed the potential association between a polymorphism in one candidate gene involved in these pathways, peroxisome proliferator-activated receptor-gamma (PPARG), and obesity-related phenotypes in 335 Brazilians of European descent. All individuals included in the sample were adults. Pregnant women, as well as individuals with secondary hyperlipidemia due to renal, liver or thyroid disease, or diabetes, were not invited to participate in the study; all other individuals were included. The gene variant PPARG Pro12Ala was studied by a PCR-based method, and the association between this genetic polymorphism and obesity-related phenotypes was evaluated by analysis of covariance. The variant allele frequency was PPARG Ala12 = 0.09, which is in the same range as described for European and European-derived populations. No statistically significant differences were observed in mean total cholesterol, LDL cholesterol, HDL cholesterol, or triglyceride levels among PPARG genotypes in either gender. In the male sample, an association between the PPARG Pro12Ala variant and body mass index was detected, with male carriers of the Ala variant presenting a higher mean body mass index than wild-type homozygotes (28.3 vs 26.2 kg/m², P = 0.037). No effect of this polymorphism was detected in women. This finding suggests that the PPARG gene has a gender-specific effect and contributes to the susceptibility to obesity in this population.
Abstract:
This thesis studies the suitability of a recent data assimilation method, the Variational Ensemble Kalman Filter (VEnKF), for real-life fluid dynamics problems in hydrology. VEnKF combines a variational formulation of the data assimilation problem, based on minimizing an energy functional, with an ensemble Kalman filter approximation to the Hessian matrix, which also serves as an approximation to the inverse of the error covariance matrix. One of the significant features of VEnKF is the very frequent re-sampling of the ensemble: resampling is done at every observation step. This unusual feature is taken even further by observation interpolation, which is seen to benefit numerical stability; in that case the ensemble is resampled at every time step of the numerical model. VEnKF is applied in several configurations to data from a real laboratory-scale dam break problem modelled with the shallow water equations. It is also tried on a two-layer quasi-geostrophic atmospheric flow problem. In both cases VEnKF proves to be an efficient and accurate data assimilation method that renders the analysis more realistic than the numerical model alone. It also proves robust against filter instability thanks to its adaptive nature.
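In standard variational notation (ours, not necessarily the thesis's), the energy functional in question is the familiar quadratic data assimilation cost

$$J(x) = \frac{1}{2}\,(x - x^{b})^{\mathsf{T}} B^{-1} (x - x^{b}) + \frac{1}{2}\,\bigl(y - \mathcal{H}(x)\bigr)^{\mathsf{T}} R^{-1} \bigl(y - \mathcal{H}(x)\bigr),$$

where $x^{b}$ is the background state, $B$ and $R$ the background and observation error covariances, and $\mathcal{H}$ the observation operator; the Hessian of $J$ at the minimizer is what the ensemble approximates, and its inverse approximates the analysis error covariance.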
Abstract:
Optimization of quantum measurement processes has a pivotal role in carrying out better, more accurate or less disruptive, measurements and experiments on a quantum system. In particular, convex optimization, i.e., identifying the extreme points of the convex sets and subsets of quantum measuring devices, plays an important part in quantum optimization, since the typical figures of merit for measuring processes are affine functionals. In this thesis, we discuss results determining the extreme quantum devices and their relevance, e.g., in quantum-compatibility-related questions. In particular, we see that a compatible device pair where one device is extreme can be joined into a single apparatus in an essentially unique way. Moreover, we show that the question of whether a pair of quantum observables can be measured jointly can often be formulated in a weaker form when some of the observables involved are extreme. Another major line of research treated in this thesis deals with convex analysis of special restricted quantum device sets, covariance structures or, in particular, generalized imprimitivity systems. Some results on the structure of covariant observables and instruments are listed, as well as results identifying the extreme points of covariance structures in quantum theory. As a special case study, not published anywhere before, we study the structure of Euclidean-covariant localization observables for spin-0 particles. We also discuss the general form of Weyl-covariant phase-space instruments. Finally, certain optimality measures originating from convex geometry are introduced for quantum devices: boundariness, measuring how ‘close’ to the algebraic boundary of the device set a quantum apparatus is, and the robustness of incompatibility, quantifying the level of incompatibility for a quantum device pair by measuring the highest amount of noise the pair tolerates without becoming compatible. Boundariness is further associated with minimum-error discrimination of quantum devices, and robustness of incompatibility is shown to behave monotonically under certain compatibility-non-decreasing operations. Moreover, the value of the robustness of incompatibility is given for a few special device pairs.
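One common way to formalize the noise-tolerance idea behind the robustness of incompatibility (written here in our notation; the thesis may use an equivalent but differently normalized form) is as the smallest admixture of noise devices that renders the pair compatible:

$$R(\mathsf{A}, \mathsf{B}) = \inf\Bigl\{\, t \ge 0 \;\Bigm|\; \exists\, \text{devices } \mathsf{N}, \mathsf{M}:\ \tfrac{1}{1+t}\,\mathsf{A} + \tfrac{t}{1+t}\,\mathsf{N} \ \text{and}\ \tfrac{1}{1+t}\,\mathsf{B} + \tfrac{t}{1+t}\,\mathsf{M} \ \text{are compatible} \,\Bigr\}.$$

A compatible pair then has robustness $0$, and larger values mean the incompatibility survives more mixing with noise.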
Abstract:
This thesis concerns the analysis of epidemic models. We adopt the Bayesian paradigm and develop suitable Markov chain Monte Carlo (MCMC) algorithms. This is done by considering an Ebola outbreak in the Democratic Republic of Congo, former Zaïre, in 1995 as a case study for SEIR epidemic models. We model the Ebola epidemic deterministically using ODEs and stochastically through SDEs, to take into account a possible bias in each compartment. Since the model has unknown parameters, we use different methods to estimate them, such as least squares, maximum likelihood and MCMC. The motivation for choosing MCMC over the other existing methods in this thesis is its ability to tackle complicated nonlinear problems with large numbers of parameters. First, in a deterministic Ebola model, we compute the likelihood function by the sum-of-squared-residuals method and estimate parameters using the LSQ and MCMC methods. We sample parameters and then use them to calculate the basic reproduction number and to study the disease-free equilibrium. From the chain sampled from the posterior, we run convergence diagnostics and confirm the viability of the model. The results show that the Ebola model fits the observed onset data with high precision, and all the unknown model parameters are well identified. Second, we convert the ODE model into an SDE Ebola model. We compute the likelihood function using the extended Kalman filter (EKF) and estimate the parameters again. The motivation for using the SDE formulation here is to consider the impact of modelling errors; moreover, the EKF approach allows us to formulate a filtered likelihood for the parameters of such a stochastic model. We use the MCMC procedure to obtain the posterior distributions of the parameters of the drift and diffusion parts of the SDE Ebola model. In this thesis, we analyse two cases: (1) the model error covariance matrix of the dynamic noise is close to zero, i.e. only a small amount of stochasticity is added to the model; the results are then similar to those obtained from the deterministic Ebola model, even though the methods of computing the likelihood function differ; (2) the model error covariance matrix is different from zero, i.e. considerable stochasticity is introduced into the Ebola model. This accounts for the situation where we know that the model is not exact. As a result, we obtain parameter posteriors with larger variances, and the model predictions consequently show larger uncertainties, in accordance with the assumption of an incomplete model.
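For concreteness, a minimal deterministic SEIR skeleton of the kind estimated here, with invented parameter values (not the thesis's fitted Ebola estimates):

```python
import numpy as np
from scipy.integrate import solve_ivp

def seir_rhs(t, y, beta, kappa, gamma):
    """Basic SEIR right-hand side: S -> E -> I -> R."""
    S, E, I, R = y
    N = S + E + I + R
    return [-beta * S * I / N,              # susceptibles become exposed
            beta * S * I / N - kappa * E,   # exposed become infectious
            kappa * E - gamma * I,          # infectious are removed
            gamma * I]

beta, kappa, gamma = 0.3, 1 / 7, 1 / 10     # illustrative rates (per day)
sol = solve_ivp(seir_rhs, (0.0, 120.0), [1e6 - 1, 0.0, 1.0, 0.0],
                args=(beta, kappa, gamma))
print("R0 =", beta / gamma)                 # basic reproduction number for this model
print("final infectious:", sol.y[2, -1])
```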
Abstract:
For my Licentiate thesis, I conducted research on risk measures. Continuing with this research, I now focus on capital allocation. In the proportional capital allocation principle, the choice of risk measure plays a very important part. In the chapters Introduction and Basic concepts, we introduce three definitions of economic capital, discuss the purpose of capital allocation, give different viewpoints on capital allocation and present an overview of the relevant literature. Risk measures are defined and the concept of a coherent risk measure is introduced. Examples of important risk measures are given, e.g., Value at Risk (VaR) and Tail Value at Risk (TVaR). We also discuss the implications of dependence and review some important distributions. In the chapter on Capital allocation we introduce different principles for allocating capital; we prefer to work with the proportional allocation method. In the following chapter, Capital allocation based on tails, we focus on insurance business lines with heavy-tailed loss distributions. To emphasize capital allocation based on tails, we define the following risk measures: Conditional Expectation, Upper Tail Covariance and Tail Covariance Premium Adjusted (TCPA). In the final chapter, called Illustrative case study, we simulate two sets of data with five insurance business lines using Normal copulas and Cauchy copulas. The proportional capital allocation is calculated using TCPA as the risk measure, and is compared with the result when VaR is used as the risk measure and with covariance capital allocation. In this thesis, it is emphasized that no single allocation principle is perfect for all purposes. When focusing on the tail of losses, the allocation based on TCPA is a good one, since TCPA in a sense combines features of TVaR and tail covariance.
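The proportional principle itself is simple: each business line receives capital in proportion to its stand-alone risk measure. A sketch with TVaR standing in for the risk measure (TCPA, VaR or covariance-based measures would slot into the same place); the loss data are simulated placeholders:

```python
import numpy as np

def tvar(losses, level=0.99):
    """Tail Value at Risk sketch: mean loss at or beyond the VaR quantile."""
    q = np.quantile(losses, level)
    return losses[losses >= q].mean()

def proportional_allocation(loss_matrix, capital, level=0.99):
    """Allocate `capital` across columns (business lines) in proportion
    to each line's stand-alone risk measure."""
    rho = np.array([tvar(loss_matrix[:, j], level)
                    for j in range(loss_matrix.shape[1])])
    return capital * rho / rho.sum()

rng = np.random.default_rng(3)
losses = rng.lognormal(0.0, 1.0, size=(10_000, 5))   # five hypothetical lines
print(proportional_allocation(losses, capital=100.0))
```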