972 results for Fisher information matrix


Relevância: 80.00%

Resumo:

The Aitchison vector space structure for the simplex is generalized to a Hilbert space structure A2(P) for distributions and likelihoods on arbitrary spaces. Central notions of statistics, such as information or likelihood, can be identified in the algebraic structure of A2(P), along with their corresponding notions in compositional data analysis, such as the Aitchison distance or the centered log-ratio transform. In this way, quite elaborate aspects of mathematical statistics can be understood easily in the light of a simple vector space structure and of compositional data analysis. For example, combinations of statistical information, such as Bayesian updating, combination of likelihoods, and robust M-estimation functions, are simple additions/perturbations in A2(Pprior). Weighting observations corresponds to a weighted addition of the corresponding evidence. Likelihood-based statistics for general exponential families turn out to have a particularly easy interpretation in terms of A2(P). Regular exponential families form finite-dimensional linear subspaces of A2(P), and they correspond to finite-dimensional subspaces formed by their posteriors in the dual information space A2(Pprior). The Aitchison norm can be identified with mean Fisher information. The closing constant itself is identified with a generalization of the cumulant function and shown to be the Kullback-Leibler directed information. Fisher information is the local geometry of the manifold induced by the A2(P) derivative of the Kullback-Leibler information, and the space A2(P) can therefore be seen as the tangential geometry of statistical inference at the distribution P. The discussion of A2(P)-valued random variables, such as estimation functions or likelihoods, gives a further interpretation of Fisher information as the expected squared norm of evidence and a scale-free understanding of unbiased reasoning.
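The compositional-data notions the abstract maps into A2(P) have simple closed forms on the simplex: the centered log-ratio (clr) transform and the Aitchison distance. A minimal sketch of those two definitions (function names are illustrative, not from the paper):

```python
import numpy as np

def clr(x):
    """Centered log-ratio transform: log(x) minus the mean of log(x)."""
    x = np.asarray(x, dtype=float)
    logx = np.log(x)
    return logx - logx.mean()

def aitchison_distance(x, y):
    """Aitchison distance: Euclidean distance between clr-transformed compositions."""
    return np.linalg.norm(clr(x) - clr(y))

# Two compositions on the 3-part simplex
a = np.array([0.2, 0.3, 0.5])
b = np.array([0.1, 0.4, 0.5])

# clr coordinates sum to zero, embedding the simplex in a hyperplane
print(clr(a).sum())
print(aitchison_distance(a, b))
```

The zero-sum constraint on clr coordinates is what makes the simplex behave like a Euclidean vector space, which is the structure the paper generalizes to A2(P).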

Relevância: 80.00%

Resumo:

We describe the evolution of the MIAR project (Matriu d'Informació per l'Avaluació de Revistes), a system originally designed to quantify the indexing of humanities and social science journals in databases. However, in view of the landscape of evaluation resources currently available in Spain, the authors propose transforming MIAR into a collaborative portal in which all interested parties can disseminate the main characteristics of the journals in which they participate, directly or indirectly. A transformation of the project is studied that contemplates the use of social networks, voting and suggestion systems, and the application of technologies such as linked open data, which allow greater dissemination and socialization of the data collected for each publication. In this way, the data could be better exploited by the three groups most directly concerned: evaluators, publishers, and authors/readers.

Relevância: 80.00%

Resumo:

A comparative performance analysis of four geolocation methods in terms of their theoretical root mean square positioning errors is provided. Comparison is established in two different ways: strict and average. In the strict type, methods are examined for a particular geometric configuration of base stations (BSs) with respect to the mobile position, which determines a given noise profile affecting the respective time-of-arrival (TOA) or time-difference-of-arrival (TDOA) estimates. In the average type, methods are evaluated in terms of the expected covariance matrix of the position error over an ensemble of random geometries, so that the comparison is geometry independent. Exact semianalytical equations and associated lower bounds (depending solely on the noise profile) are obtained for the average covariance matrix of the position error in terms of the so-called information matrix specific to each geolocation method. Statistical channel models inferred from field trials are used to define realistic prior probabilities for the random geometries. A final evaluation provides extensive results relating the expected position error to channel model parameters and the number of base stations.
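For intuition on the information matrix involved, in the textbook TOA case with independent Gaussian range errors it reduces to a sum of rank-one terms over base stations, and the trace of its inverse lower-bounds the mean-square position error. A sketch under those standard assumptions (the geometry and noise values are made up for illustration, not taken from the paper):

```python
import numpy as np

def toa_information_matrix(bs, mobile, sigmas):
    """Fisher information matrix for 2-D TOA positioning with independent
    Gaussian range errors: J = sum_i u_i u_i^T / sigma_i^2, where u_i is
    the unit vector from base station i to the mobile."""
    J = np.zeros((2, 2))
    for b, s in zip(bs, sigmas):
        d = mobile - b
        u = d / np.linalg.norm(d)
        J += np.outer(u, u) / s**2
    return J

# Illustrative geometry: three base stations around a mobile at the origin
bs = np.array([[1000.0, 0.0], [-500.0, 866.0], [-500.0, -866.0]])
mobile = np.array([0.0, 0.0])
sigmas = [10.0, 10.0, 10.0]  # range-error standard deviations in metres

J = toa_information_matrix(bs, mobile, sigmas)
crlb_rmse = np.sqrt(np.trace(np.linalg.inv(J)))  # lower bound on RMS position error
print(crlb_rmse)
```

With this symmetric geometry the bound is about 11.5 m; moving a station closer to the others degrades it, which is the geometry dependence the strict comparison captures.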


Relevância: 80.00%

Resumo:

In this article, we discuss inferential aspects of measurement error regression models with null intercepts when the unknown quantity x (a latent variable) follows a skew-normal distribution. We first examine the maximum-likelihood approach to estimation via the EM algorithm by exploring statistical properties of the model considered. Then, the marginal likelihood, the score function, and the observed information matrix of the observed quantities are presented, allowing direct inference implementation. In order to discuss some diagnostic techniques for this class of models, we derive the appropriate matrices for assessing the local influence on the parameter estimates under different perturbation schemes. The results and methods developed in this paper are illustrated with part of a real data set used by Hadgu and Koch [1999, Application of generalized estimating equations to a dental randomized clinical trial. Journal of Biopharmaceutical Statistics, 9, 161-178].

Relevância: 80.00%

Resumo:

In this article, we study some results related to a specific class of distributions, called the skew-curved-symmetric family of distributions, which depends on a parameter controlling the skewness and kurtosis at the same time. Special elements of this family that are studied include symmetric and well-known asymmetric distributions. General results are given for the score function and the observed information matrix. It is shown that the observed information matrix is always singular in some special cases. We illustrate the flexibility of this class of distributions with an application to a real dataset on characteristics of Australian athletes.

Relevância: 80.00%

Resumo:

In this article, we present the EM algorithm for performing maximum likelihood estimation of an asymmetric linear calibration model under the assumption of skew-normally distributed errors. A simulation study is conducted to evaluate the performance of the calibration estimator in interpolation and extrapolation situations. As an application to a real data set, we fitted the studied model to a dimensional measurement method used for calculating testicular volume with a caliper, calibrated against ultrasonography as the standard method. By applying this methodology, we do not need to transform the variables to obtain symmetrical errors. Another interesting aspect of the approach is that the transformation developed to make the information matrix nonsingular when the skewness parameter is near zero leaves the parameter of interest unchanged. Model fitting is implemented, and the best choice between the usual calibration model and the model proposed in this article is evaluated using the Akaike information criterion, Schwarz's Bayesian information criterion, and the Hannan-Quinn criterion.
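The three model-choice criteria named at the end share the same penalized-deviance form and differ only in the penalty; a quick sketch of their textbook definitions (the log-likelihood value below is arbitrary, not from the paper's fits):

```python
import math

def aic(loglik, k):
    """Akaike information criterion: -2*loglik + 2k."""
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    """Schwarz's Bayesian information criterion: -2*loglik + k*log(n)."""
    return -2.0 * loglik + k * math.log(n)

def hqc(loglik, k, n):
    """Hannan-Quinn criterion: -2*loglik + 2k*log(log(n))."""
    return -2.0 * loglik + 2.0 * k * math.log(math.log(n))

# Hypothetical fit with 3 parameters on 50 observations; smaller is better
print(aic(-120.5, 3), bic(-120.5, 3, 50), hqc(-120.5, 3, 50))
```

Because BIC's penalty grows with log(n), it favors the simpler of the two calibration models more strongly than AIC as the sample grows, with HQC in between.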

Relevância: 80.00%

Resumo:

The two-parameter Birnbaum-Saunders distribution has been used successfully to model fatigue failure times. Although censoring is typical in reliability and survival studies, little work has been published on the analysis of censored data for this distribution. In this paper, we address the issue of performing testing inference on the two parameters of the Birnbaum-Saunders distribution under type-II right-censored samples. The likelihood ratio statistic and a recently proposed statistic, the gradient statistic, provide a convenient framework for statistical inference in such a case, since they do not require one to obtain, estimate, or invert an information matrix, which is an advantage in problems involving censored data. An extensive Monte Carlo simulation study is carried out in order to investigate and compare the finite-sample performance of the likelihood ratio and the gradient tests. Our numerical results show evidence that the gradient test should be preferred. Further, we also consider the generalized Birnbaum-Saunders distribution under type-II right-censored samples and present some Monte Carlo simulations for testing the parameters in this class of models using the likelihood ratio and gradient tests. Three empirical applications are presented. (C) 2011 Elsevier B.V. All rights reserved.
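For a simple scalar null hypothesis, the two statistics compared here are easy to write down: the likelihood ratio statistic is 2{l(theta_hat) - l(theta0)} and the gradient statistic is U(theta0)(theta_hat - theta0), and indeed neither needs the information matrix. A sketch for an exponential mean, chosen only because it is the simplest case (this is not the Birnbaum-Saunders model of the paper):

```python
import numpy as np

def loglik(theta, x):
    """Exponential(mean=theta) log-likelihood."""
    return -x.size * np.log(theta) - x.sum() / theta

def score(theta, x):
    """Score function dl/dtheta for the exponential mean."""
    return -x.size / theta + x.sum() / theta**2

rng = np.random.default_rng(42)
x = rng.exponential(scale=2.0, size=100)  # simulated sample, true mean 2.0
theta_hat = x.mean()                      # MLE of the mean
theta0 = 2.0                              # hypothesized value under H0

lr = 2.0 * (loglik(theta_hat, x) - loglik(theta0, x))  # likelihood ratio statistic
grad = score(theta0, x) * (theta_hat - theta0)         # gradient statistic
print(lr, grad)  # both approximately chi-square(1) under H0
```

Note that the gradient statistic needs only the score at the null and the unrestricted estimate, which is what makes it attractive when the information matrix is awkward to obtain under censoring.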

Relevância: 80.00%

Resumo:

We introduce in this paper the class of linear models with first-order autoregressive elliptical errors. The score functions and the Fisher information matrices are derived for the parameters of interest, and an iterative process is proposed for the parameter estimation. Some robustness aspects of the maximum likelihood estimates are discussed. The normal curvatures of local influence are also derived for some usual perturbation schemes, and diagnostic graphics to assess the sensitivity of the maximum likelihood estimates are proposed. The methodology is applied to analyse the daily log excess returns on Microsoft stock, whose empirical distribution appears to exhibit AR(1) and heavy-tailed errors. (C) 2008 Elsevier B.V. All rights reserved.

Relevância: 80.00%

Resumo:

Influence diagnostics methods are extended in this article to the Grubbs model when the unknown quantity x (a latent variable) follows a skew-normal distribution. Diagnostic measures are derived from the case-deletion approach and the local influence approach under several perturbation schemes. The observed information matrix for the postulated model and the Delta matrices for the corresponding perturbed models are derived. Results obtained for one real data set are reported, illustrating the usefulness of the proposed methodology.
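The case-deletion idea is easiest to see in the ordinary linear model, where Cook's distance gives the influence of deleting each observation in closed form, without refitting. A sketch of that simpler analogue (this is standard OLS machinery, not the skew-normal Grubbs model of the paper; the data are simulated):

```python
import numpy as np

def cooks_distance(X, y):
    """Case-deletion influence for OLS: Cook's distance
    D_i = e_i^2 * h_ii / (p * s^2 * (1 - h_ii)^2),
    with h_ii the leverages from the hat matrix."""
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T  # hat matrix
    h = np.diag(H)
    e = y - H @ y                          # residuals
    s2 = e @ e / (n - p)                   # residual variance estimate
    return e**2 * h / (p * s2 * (1.0 - h)**2)

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(30), rng.normal(size=30)])
y = 1.0 + 2.0 * X[:, 1] + rng.normal(scale=0.5, size=30)
y[0] += 10.0                               # plant one influential outlier
D = cooks_distance(X, y)
print(int(np.argmax(D)))                   # the planted case should stand out
```

Local influence, the paper's second tool, replaces deletion with infinitesimal perturbations and examines curvatures of the likelihood displacement instead.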

Relevância: 80.00%

Resumo:

In this paper we discuss bias-corrected estimators for the regression and the dispersion parameters in an extended class of dispersion models (Jorgensen, 1997b). This class extends the regular dispersion models by letting the dispersion parameter vary across observations, and contains the dispersion models as a particular case. General formulae for the O(n(-1)) bias are obtained explicitly in dispersion models with dispersion covariates, generalizing previous results by Botter and Cordeiro (1998), Cordeiro and McCullagh (1991), Cordeiro and Vasconcellos (1999), and Paula (1992). The practical use of the formulae is that we can derive closed-form expressions for the O(n(-1)) biases of the maximum likelihood estimators of the regression and dispersion parameters when the information matrix has closed form. Various expressions for the O(n(-1)) biases are given for special models. The formulae have advantages for numerical purposes because they require only a supplementary weighted linear regression. We also compare these bias-corrected estimators with two different estimators, also bias-free to order O(n(-1)), that are based on bootstrap methods. These estimators are compared by simulation. (C) 2011 Elsevier B.V. All rights reserved.
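The flavour of an O(n^-1) bias correction can be seen in a textbook case: the MLE lambda_hat = 1/x_bar of an exponential rate satisfies E[lambda_hat] = n*lambda/(n-1), so rescaling by (n-1)/n removes the leading bias (here, exactly). A simulation sketch of that idea (this is not the dispersion-model formulae of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, n, reps = 2.0, 10, 200_000  # true rate, sample size, replications

x = rng.exponential(scale=1.0 / lam, size=(reps, n))
mle = 1.0 / x.mean(axis=1)           # lambda_hat = 1/x_bar, biased upward
corrected = (n - 1) / n * mle        # bias-corrected estimator

print(mle.mean(), corrected.mean())  # corrected mean sits much closer to 2.0
```

Analytic corrections like this need only the fitted model (here, via the information in closed form), whereas the bootstrap alternatives the paper compares against buy generality at the price of resampling.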

Relevância: 80.00%

Resumo:

The Laplace distribution is one of the earliest distributions in probability theory. Based on this distribution, we propose for the first time the so-called beta Laplace distribution, which extends the Laplace distribution. Various structural properties of the new distribution are derived, including expansions for its moments, the moment generating function, moments of the order statistics, and so forth. We discuss maximum likelihood estimation of the model parameters and derive the observed information matrix. The usefulness of the new model is illustrated by means of a real data set. (C) 2011 Elsevier B.V. All rights reserved.
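The beta-G construction underlying such proposals is standard: given a baseline CDF G with density g, the beta-G density is f(x) = g(x) G(x)^(a-1) (1-G(x))^(b-1) / B(a, b), and a = b = 1 recovers the baseline. A sketch with the Laplace baseline, assuming the paper follows this usual construction (parameter names are generic):

```python
import math

def laplace_pdf(x, mu=0.0, b=1.0):
    """Laplace density with location mu and scale b."""
    return math.exp(-abs(x - mu) / b) / (2.0 * b)

def laplace_cdf(x, mu=0.0, b=1.0):
    """Laplace CDF, piecewise exponential around mu."""
    z = (x - mu) / b
    return 0.5 * math.exp(z) if z < 0 else 1.0 - 0.5 * math.exp(-z)

def beta_laplace_pdf(x, a, bshape, mu=0.0, scale=1.0):
    """Beta-G density with a Laplace baseline:
    g(x) * G(x)^(a-1) * (1-G(x))^(bshape-1) / B(a, bshape)."""
    g = laplace_pdf(x, mu, scale)
    G = laplace_cdf(x, mu, scale)
    B = math.gamma(a) * math.gamma(bshape) / math.gamma(a + bshape)
    return g * G**(a - 1.0) * (1.0 - G)**(bshape - 1.0) / B

# a = b = 1 recovers the ordinary Laplace density
print(beta_laplace_pdf(0.3, 1.0, 1.0))  # equals laplace_pdf(0.3)
```

The two extra shape parameters let the tails and skewness vary independently of the baseline's location and scale, which is the source of the extra flexibility.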


Relevância: 80.00%

Resumo:

This thesis presents two contributions to geophysical interpretation, focused on the magnetotelluric (MT) method. The first is a new approach to MT interpretation, called the DESCRIPTIVE-GEOLOGICAL METHOD (DesG), in allusion to the explicit incorporation of a priori information that correlates easily with traditional geological description. Through geometric elements (points and lines), the interpreter defines the framework of geological features and assigns resistivity values to the presumed geological bodies. The method estimates the subsurface resistivity distribution in terms of anomalous sources close to the geometric elements, fitting the responses produced by these bodies to the field measurements. The solution obtained then provides information that can help revise imprecise a priori information, allowing successive inversions to be performed until the solution fits the data and makes geological sense. Notable features of the method include: (i) the bodies may have resistivity greater or smaller than that of the host medium, (ii) several host media, with or without anomalous bodies, may be crossed by the profile, and (iii) the resistivity contrast between body and host may be abrupt or gradual. Application of the method to synthetic data demonstrates, among other advantages, its potential for estimating the dip of faults with variable inclination, which is of special interest in tectonics, and for delineating diabase sills in sedimentary basins, a serious problem for oil prospecting. The method also allows joint interpretation of the causes of the static-shift effect and of the sources of interest. Application to real data is illustrated with the COPROD2 data set, whose inversion produced solutions compatible with the geological knowledge of the area. The second contribution concerns geophysical experiment design. Through several indicators, especially the data information density matrix, it is shown that the theoretical resolution of the data can be studied, which guides survey planning. Survey optimization makes it possible to determine the periods and the positions of the measurement stations best suited to the most precise delineation of bodies whose locations are approximately known.
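In the usual linear-inverse-theory sense, the information density matrix referred to in the second contribution is the data resolution matrix N = G G^+ built from a generalized inverse of the forward operator; its diagonal scores how much independent information each datum carries, which is what guides station placement. A generic sketch (the forward matrix below is made up, not an MT operator):

```python
import numpy as np

def data_resolution(G):
    """Data resolution (information density) matrix N = G @ pinv(G).
    diag(N) measures the independent information content of each datum."""
    return G @ np.linalg.pinv(G)

# Toy forward operator: 4 data, 2 model parameters; rows 1, 3, 4 nearly repeat
G = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0,  0.1],
              [1.0, -0.1]])
N = data_resolution(G)
importance = np.diag(N)
print(importance)        # redundant data get lower individual importance
print(importance.sum())  # trace(N) = rank(G): total resolvable information
```

A survey optimizer would drop or relocate measurements with small diagonal entries, since they duplicate information already carried by other stations and periods.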

Relevância: 80.00%

Resumo:

The use of markers distributed all along the genome may increase the accuracy of the predicted additive genetic value of young animals that are candidates for selection as reproducers. In commercial herds, due to the cost of genotyping, only some animals are genotyped, and procedures divided into two or three steps are used to include these genomic data in the genetic evaluation. However, genomic evaluation may be calculated in one unified step that combines phenotypic data, pedigree, and genomics. The aim of the study was to compare a multiple-trait model using only pedigree information with another using pedigree and genomic data. In this study, 9,318 lactations from 3,061 buffaloes were used; 384 buffaloes were genotyped using an Illumina bovine chip (Illumina Infinium (R) BovineHD BeadChip). Seven traits were analyzed: milk yield (MY), fat yield (FY), protein yield (PY), lactose yield (LY), fat percentage (F%), protein percentage (P%), and somatic cell score (SCSt). Two analyses were done: one using phenotypic and pedigree information (matrix A) and the other using a matrix based on pedigree and genomic information (one step, matrix H). The (co)variance components were estimated in a multiple-trait analysis by the Bayesian inference method, applying an animal model through Gibbs sampling. The model included the fixed effects of contemporary group (herd-year-calving season) and number of milkings (2 levels), with age of buffalo at calving as a (co)variable (quadratic and linear effects). The additive genetic, permanent environmental, and residual effects were included as random effects in the model. The heritability estimates using matrix A were 0.25, 0.22, 0.26, 0.17, 0.37, 0.42, and 0.26, and using matrix H were 0.25, 0.24, 0.26, 0.18, 0.38, 0.46, and 0.26 for MY, FY, PY, LY, F%, P%, and SCSt, respectively. The estimates of the additive genetic effect for the traits were similar in both analyses, but the accuracies were higher using matrix H (by more than 15% for the traits studied). The heritability estimates were moderate, indicating genetic gain under selection. The use of genomic information in the analyses increases the accuracy, permitting a better estimation of the additive genetic value of the animals.
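In single-step evaluations, the matrix H blends pedigree and genomic relationships; the standard result is that H^-1 equals A^-1 plus a correction, G^-1 - A22^-1, added in the block of genotyped animals. A sketch of that assembly with toy matrices (these relationship values are made up, not the buffalo data):

```python
import numpy as np

def h_inverse(A_inv, G_inv, A22_inv, genotyped):
    """Single-step relationship matrix inverse:
    H^-1 = A^-1 + [[0, 0], [0, G^-1 - A22^-1]] on the genotyped block."""
    H_inv = A_inv.copy()
    idx = np.ix_(genotyped, genotyped)
    H_inv[idx] += G_inv - A22_inv
    return H_inv

# Toy pedigree relationship matrix for 3 animals; only animal 2 is genotyped
A = np.array([[1.00, 0.50, 0.25],
              [0.50, 1.00, 0.50],
              [0.25, 0.50, 1.00]])
genotyped = [2]
A22 = A[np.ix_(genotyped, genotyped)]  # pedigree block of genotyped animals
G = np.array([[1.05]])                 # made-up genomic relationship

H_inv = h_inverse(np.linalg.inv(A), np.linalg.inv(G), np.linalg.inv(A22), genotyped)
print(np.allclose(H_inv, H_inv.T))     # H^-1 stays symmetric
```

Because only the genotyped block is touched, non-genotyped animals still receive genomic information indirectly through their pedigree ties, which is the mechanism behind the accuracy gains reported above.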