965 results for a posteriori error estimation


Relevance: 30.00%

Abstract:

In numerical weather prediction, parameterisations are used to simulate physics that is missing from the model, either because of a lack of scientific understanding or because the available computing power cannot address all the known physical processes. Parameterisations are a source of large uncertainty in a model: the parameter values they use cannot be measured directly and hence are often not well known, and the parameterisations themselves are only approximations of the processes present in the true atmosphere. While there are many efficient and effective methods for combined state/parameter estimation in data assimilation (DA), such as state augmentation, these are not effective at estimating the structure of parameterisations. A new method of parameterisation estimation is proposed that uses sequential DA methods to estimate the errors in the numerical model at each space-time point for each model equation. These errors are then fitted to pre-determined functional forms of the missing physics or parameterisations, based on prior information. The method is applied to a one-dimensional advection model with additive model error, and it is shown that it can accurately estimate parameterisations, with consistent error estimates. It is also shown how the method depends on the quality of the DA results. The results indicate that this new method is a powerful tool for systematic model improvement.
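As an illustration of the final fitting step only, the sketch below assumes the DA system has already produced pointwise model-error estimates and fits them to a pre-chosen functional form by least squares. The advection model, the candidate form (a diffusion-like term plus a constant), and all variable names are assumptions made for this sketch, not the paper's implementation.

```python
# Minimal sketch: fit DA-estimated model errors to a candidate functional form.
# Assumes the errors eta[t, x] have already been estimated by the DA system; the
# candidate form eta ~ a * d2u/dx2 + b (a diffusion-like closure) is purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
nx, nt, dx = 50, 40, 1.0
x = np.arange(nx) * dx

# Synthetic stand-ins for the model state and the DA-estimated model error
u = np.sin(2 * np.pi * x / (nx * dx))[None, :] * np.ones((nt, 1))
true_a, true_b = 0.3, 0.05
d2u = np.gradient(np.gradient(u, dx, axis=1), dx, axis=1)
eta = true_a * d2u + true_b + 0.01 * rng.standard_normal((nt, nx))

# Least-squares fit of the pointwise errors to the candidate form
A = np.column_stack([d2u.ravel(), np.ones(eta.size)])
coef, *_ = np.linalg.lstsq(A, eta.ravel(), rcond=None)
resid = eta.ravel() - A @ coef
print("estimated (a, b):", coef, " residual std:", resid.std())
```

The residual standard deviation plays the role of a consistency check on the fitted parameterisation: if the chosen functional form captures the missing physics, the residuals should look like the DA error level.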

Relevance: 30.00%

Abstract:

A new sparse kernel density estimator is introduced, based on the minimum integrated square error criterion combined with local component analysis for the finite mixture model. We start with a Parzen window estimator whose Gaussian kernels share a common covariance matrix; local component analysis is first applied to find this covariance matrix using the expectation-maximization algorithm. Since the constraint on the mixing coefficients of a finite mixture model places them on the multinomial manifold, the well-known Riemannian trust-region algorithm is then used to find the set of sparse mixing coefficients, exploiting the first- and second-order Riemannian geometry of the multinomial manifold. Numerical examples demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with accuracy competitive with existing kernel density estimators.
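The sketch below only shows the building blocks: a Parzen window estimator with a shared covariance matrix, followed by a deliberately crude sparsification of the mixing weights on the simplex. The Riemannian trust-region optimisation used in the paper is replaced here by simple thresholding and renormalisation, and the data and bandwidth rule are illustrative assumptions.

```python
# Minimal sketch: Parzen window density with a common covariance, then crude sparsification.
# The paper's Riemannian trust-region step on the multinomial manifold is replaced here by
# simple weight thresholding and renormalisation; all names and data are illustrative.
import numpy as np

rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(-2, 1, (100, 2)), rng.normal(2, 0.5, (100, 2))])
n, d = X.shape

# Common covariance for all kernels (Scott-style bandwidth on the sample covariance)
h = n ** (-1.0 / (d + 4))
S = h ** 2 * np.cov(X.T)
S_inv, S_det = np.linalg.inv(S), np.linalg.det(S)

def mixture_density(points, centres, weights):
    """Gaussian mixture with the shared covariance S, evaluated at `points`."""
    diff = points[:, None, :] - centres[None, :, :]            # (m, k, d)
    quad = np.einsum('mkd,de,mke->mk', diff, S_inv, diff)      # Mahalanobis terms
    kern = np.exp(-0.5 * quad) / np.sqrt((2 * np.pi) ** d * S_det)
    return kern @ weights

# Parzen window estimator: every sample is a centre with equal weight
w = np.full(n, 1.0 / n)

# Crude sparsification: drop the centres with the lowest density score, renormalise on the simplex
scores = mixture_density(X, X, w)
keep = scores >= np.quantile(scores, 0.75)
w_sparse = np.where(keep, w, 0.0)
w_sparse /= w_sparse.sum()

grid = rng.normal(0, 2, (5, 2))
print("full KDE:", mixture_density(grid, X, w))
print("sparse KDE:", mixture_density(grid, X, w_sparse))
```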

Relevance: 30.00%

Abstract:

In this article, we evaluate different parameter estimation strategies for a multiple linear regression model. The model parameters were estimated from data of a clinical trial whose aim was to verify whether the mechanical test of the maximum force property (EM-FM) is associated with femoral mass, femoral diameter and the experimental group of ovariectomised rats of the species Rattus norvegicus albinus, Wistar variety. Three methodologies are compared for estimating the model parameters: the classical methodology, based on the least squares method; the Bayesian methodology, based on Bayes' theorem; and the bootstrap method, based on resampling procedures.
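A minimal sketch of two of the three approaches being compared, a least-squares fit and case-resampling bootstrap intervals for its coefficients. The covariate names and data below are synthetic placeholders, not the clinical-trial data of the article.

```python
# Minimal sketch: least-squares fit of a multiple linear regression plus a case-resampling
# bootstrap for its coefficients. Data are synthetic placeholders, not the trial data.
import numpy as np

rng = np.random.default_rng(2)
n = 60
femoral_mass = rng.normal(10, 1, n)         # illustrative covariates
femoral_diameter = rng.normal(4, 0.3, n)
group = rng.integers(0, 2, n)               # 0/1 experimental group indicator
max_force = 5 + 2 * femoral_mass + 3 * femoral_diameter - 1.5 * group + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), femoral_mass, femoral_diameter, group])

# Classical (least squares) estimates
beta_ols, *_ = np.linalg.lstsq(X, max_force, rcond=None)

# Bootstrap: resample cases with replacement and refit
B = 2000
boot = np.empty((B, X.shape[1]))
for b in range(B):
    idx = rng.integers(0, n, n)
    boot[b], *_ = np.linalg.lstsq(X[idx], max_force[idx], rcond=None)

ci = np.percentile(boot, [2.5, 97.5], axis=0)
print("OLS estimates:", beta_ols)
print("bootstrap 95% CIs:\n", ci.T)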

Relevance: 30.00%

Abstract:

The purpose of this paper is to develop a Bayesian analysis for nonlinear regression models under scale mixtures of skew-normal distributions. This novel class of models provides a useful generalization of symmetrical nonlinear regression models, since the error distributions cover both skewness and heavy tails, as in the skew-t, skew-slash and skew-contaminated normal distributions. The main advantage of this class of distributions is that it has a convenient hierarchical representation that allows Markov chain Monte Carlo (MCMC) methods to be used to simulate samples from the joint posterior distribution. In order to examine the robustness of this flexible class against outlying and influential observations, we present Bayesian case-deletion influence diagnostics based on the Kullback-Leibler divergence. Some discussion of model selection criteria is also given. The newly developed procedures are illustrated with two simulation studies and a real data set previously analyzed under normal and skew-normal nonlinear regression models.
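A minimal sketch of the hierarchical representation the abstract alludes to: a skew-normal draw is a half-normal latent variable plus a conditionally normal one, and dividing by the square root of a gamma mixing variable gives a skew-t draw. The parameter names and values are illustrative, and this only shows the simulation side of the representation, not the paper's MCMC scheme.

```python
# Minimal sketch: simulate skew-normal and skew-t variates via their hierarchical
# representations (half-normal latent + conditional normal; gamma scale mixing).
# Parameters mu, sigma, lam (skewness) and nu (degrees of freedom) are illustrative.
import numpy as np

rng = np.random.default_rng(3)

def rskew_normal(n, mu=0.0, sigma=1.0, lam=3.0):
    delta = lam / np.sqrt(1.0 + lam ** 2)
    t0 = np.abs(rng.standard_normal(n))           # half-normal latent variable
    t1 = rng.standard_normal(n)
    z = delta * t0 + np.sqrt(1.0 - delta ** 2) * t1
    return mu + sigma * z

def rskew_t(n, mu=0.0, sigma=1.0, lam=3.0, nu=4.0):
    z = rskew_normal(n, 0.0, 1.0, lam)
    u = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)   # mixing variable
    return mu + sigma * z / np.sqrt(u)

x_sn, x_st = rskew_normal(10_000), rskew_t(10_000)
print("skew-normal sample mean:", x_sn.mean())
print("skew-t sample mean and std (heavier tails):", x_st.mean(), x_st.std())
```

Because each draw is built from independent normal, half-normal and gamma pieces, the same construction gives the conditional structure that makes Gibbs-type MCMC updates tractable.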

Relevance: 30.00%

Abstract:

In this paper we deal with robust inference in heteroscedastic measurement error models. Rather than the normal distribution, we postulate a Student t distribution for the observed variables. Maximum likelihood estimates are computed numerically. Consistent estimation of the asymptotic covariance matrices of the maximum likelihood and generalized least squares estimators is also discussed. Three test statistics are proposed for testing hypotheses of interest, with the asymptotic chi-square distribution, which guarantees correct asymptotic significance levels. Results of simulations and an application to a real data set are also reported.
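To make the "computed numerically" step concrete, the sketch below maximises a Student-t likelihood for a plain regression by numerical optimisation. It is a stand-in only: the full heteroscedastic measurement error structure of the paper is not reproduced, and the data and fixed degrees of freedom are assumptions for illustration.

```python
# Minimal sketch: numerical maximum likelihood for a regression with Student-t errors,
# illustrating numerically computed ML estimates (not the paper's full model).
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(4)
n, nu = 200, 4.0
x = rng.normal(0, 1, n)
y = 1.0 + 2.0 * x + stats.t.rvs(df=nu, size=n, random_state=rng) * 0.5

def neg_loglik(theta):
    a, b, log_s = theta
    s = np.exp(log_s)                      # log-parameterised scale keeps s > 0
    return -np.sum(stats.t.logpdf((y - a - b * x) / s, df=nu) - np.log(s))

res = optimize.minimize(neg_loglik, x0=np.array([0.0, 0.0, 0.0]), method="BFGS")
a_hat, b_hat, s_hat = res.x[0], res.x[1], np.exp(res.x[2])
print("ML estimates (intercept, slope, scale):", a_hat, b_hat, s_hat)
```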

Relevance: 30.00%

Abstract:

The multivariate skew-t distribution (J Multivar Anal 79:93-113, 2001; J R Stat Soc, Ser B 65:367-389, 2003; Statistics 37:359-363, 2003) includes the Student t, skew-Cauchy and Cauchy distributions as special cases, and the normal and skew-normal distributions as limiting cases. In this paper, we explore the use of Markov chain Monte Carlo (MCMC) methods to develop a Bayesian analysis of repeated-measures, pretest/post-test data under the multivariate null-intercept measurement error model (J Biopharm Stat 13(4):763-771, 2003), where the random errors and the unobserved value of the covariate (latent variable) follow a Student t and a skew-t distribution, respectively. The results and methods are numerically illustrated with an example from the field of dentistry.

Relevance: 30.00%

Abstract:

In this article, we discuss inferential aspects of measurement error regression models with null intercepts when the unknown quantity x (latent variable) follows a skew-normal distribution. We first examine the maximum-likelihood approach to estimation via the EM algorithm, exploring statistical properties of the model considered. Then the marginal likelihood, the score function and the observed information matrix of the observed quantities are presented, allowing direct implementation of inference. In order to discuss some diagnostic techniques for this type of model, we derive the appropriate matrices for assessing the local influence on the parameter estimates under different perturbation schemes. The results and methods developed in this paper are illustrated using part of a real data set from Hadgu and Koch [1999, Application of generalized estimating equations to a dental randomized clinical trial. Journal of Biopharmaceutical Statistics, 9, 161-178].

Relevance: 30.00%

Abstract:

Scale mixtures of the skew-normal (SMSN) distribution form a class of asymmetric thick-tailed distributions that includes the skew-normal (SN) distribution as a special case. The main advantage of this class is that its members are easy to simulate from and have a convenient hierarchical representation that facilitates implementation of the expectation-maximization algorithm for maximum-likelihood estimation. In this paper, we assume an SMSN distribution for the unobserved value of the covariates and a symmetric scale mixture of the normal distribution for the error term of the model. This provides a robust alternative for parameter estimation in multivariate measurement error models. Specific distributions examined include univariate and multivariate versions of the SN, skew-t, skew-slash and skew-contaminated normal distributions. The results and methods are applied to a real data set.
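To illustrate the "easy to simulate" point for two SMSN members named above, the sketch below draws skew-slash and skew-contaminated normal variates by scale-mixing a skew-normal draw. The skewness parameter, tail parameters and mixing recipes follow the usual SMSN construction, but all numerical values are illustrative assumptions.

```python
# Minimal sketch: scale-mixture simulation of two SMSN members, the skew-slash and the
# skew-contaminated normal. The skew-normal generator is included so the block is
# self-contained; parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(5)

def rskew_normal(n, lam=3.0):
    delta = lam / np.sqrt(1.0 + lam ** 2)
    return delta * np.abs(rng.standard_normal(n)) + np.sqrt(1 - delta ** 2) * rng.standard_normal(n)

def rskew_slash(n, lam=3.0, q=2.0):
    u = rng.uniform(size=n) ** (1.0 / q)          # mixing variable U ~ Beta(q, 1)
    return rskew_normal(n, lam) / np.sqrt(u)

def rskew_contaminated_normal(n, lam=3.0, nu=0.1, gamma=0.2):
    u = np.where(rng.uniform(size=n) < nu, gamma, 1.0)   # contamination with probability nu
    return rskew_normal(n, lam) / np.sqrt(u)

print("skew-slash sample std:", rskew_slash(50_000).std())
print("skew-contaminated normal sample std:", rskew_contaminated_normal(50_000).std())
```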

Relevance: 30.00%

Abstract:

In general, the normal distribution is assumed for the surrogate of the true covariates in the classical error model. This paper considers a class of distributions, which includes the normal one, for the variables subject to error. An estimation approach yielding consistent estimators is developed, and simulation studies are reported.

Relevance: 30.00%

Abstract:

The aim of this article is to discuss the estimation of the systematic risk in capital asset pricing models with heavy-tailed error distributions for the asset returns. Diagnostic methods for assessing departures from the model assumptions, as well as the influence of observations on the parameter estimates, are also presented. It can be shown that outlying observations are downweighted in the maximum likelihood equations of linear models with heavy-tailed error distributions such as the Student-t, power exponential and logistic type II. This robustness also extends to influential observations. An application in which the systematic risk estimate of Microsoft is compared under normal and heavy-tailed errors is presented as an illustration.
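The downweighting can be made explicit with an iteratively reweighted least-squares sketch of the market model under Student-t errors, where each observation receives weight (nu + 1) / (nu + standardised residual squared). The returns below are synthetic placeholders, not actual Microsoft data, and the fixed degrees of freedom are an assumption for illustration.

```python
# Minimal sketch: iteratively reweighted least squares for the market model
# r_asset = alpha + beta * r_market + error, with Student-t errors. The weights
# w_i = (nu + 1) / (nu + (resid_i / s)^2) show how outlying returns are downweighted.
import numpy as np

rng = np.random.default_rng(6)
n, nu = 250, 4.0
r_market = rng.normal(0.0005, 0.01, n)
r_asset = 0.0002 + 1.2 * r_market + rng.standard_t(nu, n) * 0.01   # synthetic returns

X = np.column_stack([np.ones(n), r_market])
beta = np.linalg.lstsq(X, r_asset, rcond=None)[0]    # OLS starting values
s = np.std(r_asset - X @ beta)

for _ in range(50):                                   # IRLS iterations
    resid = r_asset - X @ beta
    w = (nu + 1.0) / (nu + (resid / s) ** 2)          # Student-t downweighting of outliers
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ r_asset)
    s = np.sqrt(np.sum(w * resid ** 2) / n)

print("robust (Student-t) alpha and systematic risk beta:", beta)
```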

Relevance: 30.00%

Abstract:

Introduction. Leaf area is often related to plant growth, development, physiology and yield. Many non-destructive models have been proposed for leaf area estimation of several plant genotypes, demonstrating that leaf length, leaf width and leaf area are closely correlated. Thus, the objective of our study was to develop a reliable model for leaf area estimation from linear measurements of leaf dimensions for citrus genotypes. Materials and methods. Leaves of citrus genotypes were harvested, and their dimensions (length, width and area) were measured. Values of leaf area were regressed against length, width, the square of length, the square of width and the product (length x width). The most accurate equations, either linear or second-order polynomial, were regressed again with a new data set; then the most reliable equation was defined. Results and discussion. The first analysis showed that the variables length, width and the square of length gave better results in second-order polynomial equations, while the linear equations were more suitable and accurate when the width and the product (length x width) were used. When these equations were regressed with the new data set, the coefficient of determination (R²) and the agreement index 'd' were higher for the one that used the variable product (length x width), while the mean absolute percentage error was lower. Conclusion. The product of the simple leaf dimensions (length x width) can provide a reliable and simple non-destructive model for leaf area estimation across citrus genotypes.
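A minimal sketch of the recommended model, area regressed on the product length x width, with the fit statistics mentioned in the abstract (R² and mean absolute percentage error). The leaf measurements and the coefficient used to generate them are synthetic placeholders, not the citrus data of the study.

```python
# Minimal sketch: fit and check the single-variable model  area ~ b0 + b1 * (length * width).
# Leaf measurements are synthetic placeholders, not the citrus data of the study.
import numpy as np

rng = np.random.default_rng(7)
length = rng.uniform(4, 12, 80)                          # cm
width = rng.uniform(2, 6, 80)                            # cm
area = 0.7 * length * width + rng.normal(0, 1.5, 80)     # cm^2, illustrative "true" relation

lw = length * width
X = np.column_stack([np.ones_like(lw), lw])
(b0, b1), *_ = np.linalg.lstsq(X, area, rcond=None)

pred = b0 + b1 * lw
ss_res = np.sum((area - pred) ** 2)
ss_tot = np.sum((area - area.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
mape = np.mean(np.abs((area - pred) / area)) * 100.0
print(f"area = {b0:.3f} + {b1:.3f} * (length * width),  R^2 = {r2:.3f},  MAPE = {mape:.1f}%")
```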

Relevance: 30.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 30.00%

Abstract:

Projects on the performance of various crops have been increasingly refined through mathematical models, making ever more consistent equations indispensable for prediction and for a closer approximation to real behaviour, thereby reducing estimation error. Among the processes demanding further study are those related to crop growth, characterised by the temperature range ideal for the accumulation of dry matter. Given the wide use of mathematical methods for representing, analysing and estimating degree-days, together with the great importance of sugarcane to the Brazilian economy, the mathematical models commonly used and the numerical integration methods were evaluated for estimating the availability of degree-days for this crop in the region of Botucatu, São Paulo State. The integration models, with a discretisation of 6 h, gave satisfactory results for degree-day estimation. The traditional methodologies performed satisfactorily in estimating degree-days based on the hourly temperature curve for each day and for groupings of three, seven, 15 and 30 days. By the numerical integration method, the region of Botucatu, São Paulo State, showed a mean annual thermal availability of 1,070.6 degree-days for the sugarcane crop.
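To illustrate the two kinds of estimate being compared, the sketch below computes the degree-days of a single day from an hourly temperature curve integrated numerically in 6-hour steps, alongside the traditional formula based only on the daily extremes. The sinusoidal temperature curve, the daily extremes and the base temperature of 18 °C are illustrative assumptions, not values from the study.

```python
# Minimal sketch: degree-days for one day from an hourly temperature curve, integrated
# numerically with a 6-hour discretisation, versus the traditional (Tmax + Tmin)/2 - Tbase
# formula. The diurnal curve and Tbase = 18 °C are illustrative assumptions.
import numpy as np

t_base = 18.0                                    # assumed base temperature, °C
t_min, t_max = 16.0, 30.0                        # illustrative daily extremes, °C

hours = np.arange(0.0, 24.0 + 1e-9, 6.0)         # 6-hour discretisation
t_mean = 0.5 * (t_max + t_min)
t_amp = 0.5 * (t_max - t_min)
temp = t_mean + t_amp * np.sin(2 * np.pi * (hours - 9.0) / 24.0)   # crude diurnal curve

# Trapezoidal integration of the positive part of (T - Tbase), converted to °C·day
excess = np.maximum(temp - t_base, 0.0)
gd_numeric = np.sum(0.5 * (excess[1:] + excess[:-1]) * np.diff(hours)) / 24.0

# Traditional formula based only on the daily extremes
gd_traditional = max(t_mean - t_base, 0.0)

print(f"degree-days (6-h numerical integration): {gd_numeric:.2f}")
print(f"degree-days (traditional formula):       {gd_traditional:.2f}")
```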

Relevance: 30.00%

Abstract:

Systems based on artificial neural networks achieve high computational rates through the use of a massive number of simple processing elements and a high degree of connectivity between these elements. This paper presents a novel approach to solving the robust parameter estimation problem for nonlinear models with unknown-but-bounded errors and uncertainties. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to its equilibrium points. A solution of the robust estimation problem with unknown-but-bounded errors corresponds to an equilibrium point of the network. Simulation results are presented as an illustration of the proposed approach.
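The sketch below only illustrates the general idea of casting parameter estimation as relaxation of a network energy function: plain gradient (continuous Hopfield-style) dynamics are run on a least-squares energy whose minimum is the parameter estimate. The modified network and the valid-subspace technique of the paper are not reproduced, and the problem data are illustrative assumptions.

```python
# Minimal sketch: parameter estimation cast as minimisation of an energy function by
# continuous Hopfield-style (gradient) dynamics. Illustrative only; the paper's modified
# network and valid-subspace technique are not reproduced here.
import numpy as np

rng = np.random.default_rng(8)
n, p = 100, 3
A = rng.normal(size=(n, p))
theta_true = np.array([1.0, -2.0, 0.5])
bound = 0.2
b = A @ theta_true + rng.uniform(-bound, bound, n)   # unknown-but-bounded errors

# Energy E(theta) = 0.5 * ||A theta - b||^2 ; the state evolves down its gradient
theta = np.zeros(p)
dt = 0.01 / np.linalg.norm(A, 2) ** 2                # small step for stable dynamics
for _ in range(20_000):
    theta -= dt * (A.T @ (A @ theta - b))            # dtheta/dt = -grad E(theta)

resid = A @ theta - b
print("estimated parameters:", theta)
print("max |residual| (roughly comparable to the error bound", bound, "):", np.abs(resid).max())
```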

Relevance: 30.00%

Abstract:

From experimental population profiles of strains of the phorid fly Megaselia scalaris, the minimum number of sample profiles that must be replicated, via a bootstrap simulation process, to obtain a reliable estimate of the mean population profile was determined, and standard-error estimates are presented as a measure of the precision of the simulations performed. The original data come from experimental populations founded with the SR and R4 strains, with three replicates each, which were maintained for 33 weeks by the serial transfer technique in a chamber at constant temperature (25 ± 1.0 °C). The variable used was population size, and the model adopted for each profile was that of a stationary stochastic process. Through the simulations, the profiles of the three experimental populations were amplified, thereby determining the minimum sample size. With the sample size fixed, bootstrap simulations were performed to construct confidence intervals and to compare the mean population profiles of the two strains. The results show that the mean values begin to stabilise at a sample size of 50.
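A minimal sketch of the stabilisation check: bootstrap resampling is repeated at increasing sample sizes and the bootstrap mean and standard error are recorded, so one can see where the estimates settle. The synthetic stationary series below stands in for the experimental Megaselia scalaris profiles, and the noise levels are illustrative assumptions.

```python
# Minimal sketch: bootstrap estimation of how the mean population size and its standard
# error behave as the sample size grows, to locate the size at which the estimates
# stabilise. Synthetic data stand in for the experimental population profiles.
import numpy as np

rng = np.random.default_rng(9)
weeks = 33
base_profile = 200 + 30 * np.sin(np.linspace(0, 3 * np.pi, weeks))   # illustrative mean profile
replicates = base_profile + rng.normal(0, 20, (3, weeks))            # three experimental replicates
pool = replicates.ravel()                                            # pooled weekly population sizes

B = 2000
for n_sample in (10, 25, 50, 100):
    boot_means = np.array([pool[rng.integers(0, pool.size, n_sample)].mean() for _ in range(B)])
    print(f"sample size {n_sample:3d}: bootstrap mean = {boot_means.mean():7.2f}, "
          f"SE = {boot_means.std(ddof=1):6.3f}")
```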