949 results for Maximum likelihood estimator (MLE)


Relevance: 100.00%

Publisher:

Abstract:

In this letter, a Box-Cox transformation-based radial basis function (RBF) neural network is introduced, using the RBF neural network to represent the transformed system output. Initially, a fixed, moderate-sized RBF model base is derived from a rank-revealing orthogonal matrix triangularization (QR decomposition). A new fast identification algorithm is then introduced, using the Gauss-Newton algorithm to derive the required Box-Cox transformation based on a maximum likelihood estimator. The main contribution of this letter is to exploit the special structure of the proposed RBF neural network for computational efficiency by utilizing the block matrix inversion lemma. Finally, the Box-Cox transformation-based RBF neural network, with good generalization and sparsity, is identified based on the derived optimal Box-Cox transformation and a D-optimality-based orthogonal forward regression algorithm. The proposed algorithm and its efficacy are demonstrated with an illustrative example, in comparison with support vector machine regression.
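The maximum likelihood step above can be illustrated in isolation. The sketch below estimates the Box-Cox parameter by ML with SciPy on synthetic data; it shows only the transformation component, not the authors' RBF network identification, and the lognormal test data is an assumption for illustration.

```python
# Illustrative sketch only: estimating the Box-Cox transformation
# parameter by maximum likelihood with SciPy. The letter embeds this
# step inside an RBF network identification, which is not shown here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Positive, right-skewed "system output" (lognormal), so a log-like
# transform (lambda near 0) should be favoured by the likelihood.
y = rng.lognormal(mean=0.0, sigma=0.7, size=500)

# scipy.stats.boxcox returns the transformed data and the MLE of lambda
y_transformed, lam = stats.boxcox(y)
print(f"estimated Box-Cox lambda: {lam:.3f}")
```

For lognormal data the log transform (lambda = 0) is exactly normalizing, so the estimated lambda lands near zero.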

Abstract:

Population size estimation with discrete or nonparametric mixture models is considered, and reliable ways of constructing the nonparametric mixture model estimator are reviewed and set in perspective. The maximum likelihood estimator of the mixing distribution is constructed for any number of components up to the global nonparametric maximum likelihood bound using the EM algorithm. In addition, the estimators of Chao and Zelterman are considered, with some generalisations of Zelterman's estimator. All computations are done with CAMCR, special-purpose software developed for population size estimation with mixture models. Several examples and data sets are discussed and the estimators illustrated. Problems in using the mixture model-based estimators are highlighted.
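Two of the simpler estimators mentioned above have closed forms in the frequency counts. The sketch below computes them on made-up counts; the frequency values are illustrative assumptions, and the mixture-model machinery handled by CAMCR is not reproduced.

```python
# Minimal sketch of two estimators named in the abstract, computed from
# frequency counts f[k] = number of units observed exactly k times.
# (CAMCR, used in the paper, additionally fits nonparametric mixtures.)
import numpy as np

def chao_estimator(f1, f2, n):
    """Chao's lower-bound estimator: n + f1^2 / (2 f2)."""
    return n + f1 ** 2 / (2 * f2)

def zelterman_estimator(f1, f2, n):
    """Zelterman's estimator based on lambda_hat = 2 f2 / f1."""
    lam = 2 * f2 / f1
    return n / (1 - np.exp(-lam))

# Hypothetical frequency counts: 50 units seen once, 20 seen twice, ...
freqs = {1: 50, 2: 20, 3: 8, 4: 2}
n = sum(freqs.values())          # number of distinct units observed
N_chao = chao_estimator(freqs[1], freqs[2], n)
N_zelt = zelterman_estimator(freqs[1], freqs[2], n)
print(f"observed={n}, Chao={N_chao:.1f}, Zelterman={N_zelt:.1f}")
```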

Abstract:

In this paper we introduce a new testing procedure, based on a pseudo-maximum likelihood estimator, for evaluating the rationality of fixed-event forecasts. The procedure is designed to be robust to departures from the normality assumption. A model is introduced to show that such departures are likely when forecasters suffer a credibility loss after making large changes to their forecasts. The test is illustrated using monthly fixed-event forecasts produced by four UK institutions. The robust test leads to the conclusion that certain forecasts are rational, while the Gaussian-based test implies that certain forecasts are irrational. The difference in the results is due to the nature of the underlying data. Copyright © 2001 John Wiley & Sons, Ltd.

Abstract:

This study examined the characteristics of stock portfolios optimized under the mean-variance criterion and formed using robust estimates of risk and return. The motivation is the typical distribution of financial assets, which exhibits outliers and greater kurtosis than the normal distribution. The portfolios were compared on their stability, variability, and the Sharpe ratios they attained. The overall result shows that portfolios built from robust estimates of risk and return improve in stability and variability; however, this improvement is insufficient to distinguish their Sharpe ratios from those of portfolios built with maximum likelihood estimates of risk and return.
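The comparison above can be sketched numerically. The example below contrasts minimum-variance weights from classical (ML) estimates with a crude median/MAD-based robust alternative on heavy-tailed synthetic returns; the specific robust estimators of the study are not given in the abstract, so this robust choice is an assumption for illustration.

```python
# Hedged sketch: minimum-variance weights from classical (ML) versus
# simple robust location/scale estimates (median and MAD). The study's
# exact robust estimators are not specified here, so this is only
# illustrative of the comparison it performs.
import numpy as np

rng = np.random.default_rng(1)
# Heavy-tailed returns for 4 hypothetical assets (Student-t, df=3)
returns = rng.standard_t(df=3, size=(1000, 4)) * 0.01

def min_variance_weights(cov):
    """w = Sigma^{-1} 1 / (1' Sigma^{-1} 1): minimum-variance portfolio."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# Classical (maximum likelihood under normality) covariance
w_ml = min_variance_weights(np.cov(returns, rowvar=False))

# Crude robust alternative: correlation matrix rescaled by each asset's
# median absolute deviation (MAD) instead of its sample std deviation.
mad = np.median(np.abs(returns - np.median(returns, axis=0)), axis=0)
corr = np.corrcoef(returns, rowvar=False)
cov_robust = corr * np.outer(mad, mad)
w_robust = min_variance_weights(cov_robust)

print("ML weights:    ", np.round(w_ml, 3))
print("robust weights:", np.round(w_robust, 3))
```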

Abstract:

The study of the association between two random variables that have a joint normal distribution is of interest in applied statistics, for example in statistical genetics. This article, targeted at applied statisticians, addresses inference about the coefficient of correlation (ρ) in the bivariate normal and standard bivariate normal distributions from likelihood, frequentist, and Bayesian perspectives. Some results are surprising. For instance, the maximum likelihood estimator and the posterior distribution of ρ in the standard bivariate normal distribution do not follow directly from results for a general bivariate normal distribution. An example employing bootstrap and rejection sampling procedures illustrates some of the peculiarities.
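In the standard bivariate normal (means 0, variances 1), the likelihood depends on ρ alone, and its MLE must be found numerically rather than read off from the general-model result. A minimal sketch, with an assumed true ρ of 0.6:

```python
# Sketch: numerically maximizing the likelihood in rho for the
# *standard* bivariate normal (means 0, variances 1). As the abstract
# notes, this MLE is not simply the general-model estimate.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
rho_true = 0.6
cov = np.array([[1.0, rho_true], [rho_true, 1.0]])
x, y = rng.multivariate_normal([0, 0], cov, size=2000).T

def neg_loglik(rho):
    # -log L(rho) = n log(2 pi) + (n/2) log(1 - rho^2)
    #             + sum(x^2 - 2 rho x y + y^2) / (2 (1 - rho^2))
    n = len(x)
    q = np.sum(x**2 - 2 * rho * x * y + y**2)
    return n * np.log(2 * np.pi) + 0.5 * n * np.log(1 - rho**2) \
        + q / (2 * (1 - rho**2))

res = minimize_scalar(neg_loglik, bounds=(-0.999, 0.999), method="bounded")
print(f"MLE of rho: {res.x:.3f} "
      f"(sample correlation: {np.corrcoef(x, y)[0, 1]:.3f})")
```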

Abstract:

Graduate program in Applied and Computational Mathematics - FCT

Abstract:

Graduate program in Applied and Computational Mathematics - FCT

Abstract:

A data set based on 50 studies of feed intake and utilization traits was used in a meta-analysis to obtain pooled estimates, accounting for between-study variance, of genetic parameters for average daily gain (ADG), residual feed intake (RFI), metabolic body weight (MBW), feed conversion ratio (FCR), and daily dry matter intake (DMI) in beef cattle. The data set comprised 128 heritability and 122 genetic correlation estimates published in the literature from 1961 to 2012. The meta-analysis used a random effects model in which the restricted maximum likelihood estimator evaluated the variances among clusters; cluster analysis was also used to group the heritability estimates. Two clusters were obtained for each trait by different variables. For all traits, the heterogeneity of variance between clusters and studies was significant for genetic correlation estimates. The pooled direct heritability estimates, adding the variance between clusters, were 0.32 ± 0.04 for ADG, 0.39 ± 0.03 for DMI, 0.31 ± 0.02 for RFI, 0.31 ± 0.03 for MBW, and 0.26 ± 0.03 for FCR. Pooled genetic correlation estimates among ADG, DMI, RFI, MBW and FCR ranged from -0.15 to 0.67. These pooled estimates of genetic parameters could be used to solve genetic prediction equations in populations where data are insufficient for variance component estimation. Cluster analysis is recommended as a statistical procedure for combining results from different studies to account for heterogeneity.
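The random-effects pooling above can be sketched in a few lines. The paper uses a REML estimator of the between-study variance; the simpler DerSimonian-Laird moment estimator stands in below, and the heritability estimates and standard errors are made-up inputs.

```python
# Sketch of random-effects pooling of heritability estimates. The paper
# uses REML for the between-study variance; here the simpler
# DerSimonian-Laird moment estimator stands in, on hypothetical inputs.
import numpy as np

h2 = np.array([0.25, 0.35, 0.30, 0.40, 0.28])   # hypothetical h^2 estimates
se = np.array([0.05, 0.06, 0.04, 0.08, 0.05])   # their standard errors

w_fixed = 1 / se**2
k = len(h2)
theta_fixed = np.sum(w_fixed * h2) / np.sum(w_fixed)

# DerSimonian-Laird between-study variance tau^2
Q = np.sum(w_fixed * (h2 - theta_fixed) ** 2)
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - (k - 1)) / c)

# Random-effects pooled estimate and its standard error
w_re = 1 / (se**2 + tau2)
theta_re = np.sum(w_re * h2) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
print(f"pooled h^2 = {theta_re:.3f} +/- {se_re:.3f}, tau^2 = {tau2:.4f}")
```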

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Abstract:

In many applications of lifetime data analysis, it is important to perform inference about the change-point of the hazard function. The change-point may be a maximum for unimodal hazard functions or a minimum for bathtub-shaped hazard functions, and is usually of great interest in medical or industrial applications. For lifetime distributions where this change-point can be calculated analytically, its maximum likelihood estimator is easily obtained from the invariance property of maximum likelihood estimators, and confidence intervals follow from their asymptotic normality. Under the exponentiated Weibull distribution, the hazard function can take constant, increasing, unimodal, decreasing, or bathtub forms. This model gives great flexibility of fit, but there is no analytic expression for the change-point of its hazard function. We therefore use Markov chain Monte Carlo methods to obtain posterior summaries for the change-point of the hazard function under the exponentiated Weibull distribution.
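Since the change-point has no closed form, it is natural to locate it numerically for given parameter values. A minimal sketch with SciPy's exponentiated Weibull (the shape values are illustrative assumptions, chosen so the hazard is unimodal); the MCMC machinery of the paper is not reproduced.

```python
# Numerical sketch: locate the change-point (here a maximum) of the
# exponentiated Weibull hazard on a grid, since no closed form exists.
# Shape values a (exponent) and c (Weibull shape) are illustrative.
import numpy as np
from scipy.stats import exponweib

a, c = 3.0, 0.5              # exponent and Weibull shape parameters
t = np.linspace(0.01, 10.0, 2000)
hazard = exponweib.pdf(t, a, c) / exponweib.sf(t, a, c)

t_star = t[np.argmax(hazard)]  # grid approximation to the change-point
print(f"approximate change-point of the hazard: t = {t_star:.3f}")
```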

Abstract:

The study of proportions is a common topic in many fields. The standard beta distribution or the inflated beta distribution is a reasonable choice for fitting a proportion in most situations; however, they do not fit well variables that assume no values in the open interval (0, c), 0 < c < 1. For such variables, the authors introduce the truncated inflated beta distribution (TBEINF), a mixture of a beta distribution bounded on the open interval (c, 1) and a trinomial distribution. The authors present the moments of the distribution, its score vector, and its Fisher information matrix, and discuss estimation of its parameters. The properties of the suggested estimators are studied using Monte Carlo simulation. In addition, the authors present an application of the TBEINF distribution to unemployment insurance data.
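To see why such mixtures are convenient to fit, note that the point-mass probabilities separate from the continuous component in the likelihood. The sketch below fits an ordinary zero/one-inflated beta model by ML, not the TBEINF itself; the simulated mixing weights and Beta(2, 5) core are assumptions for illustration.

```python
# Illustrative sketch (NOT the TBEINF): maximum likelihood for a
# zero/one-inflated beta model, where the point-mass probabilities
# factor out of the likelihood and the beta core is fit separately.
import numpy as np
from scipy.stats import beta as beta_dist

rng = np.random.default_rng(3)
# Hypothetical proportions: point masses at 0 and 1 plus a Beta(2, 5) core
u = rng.uniform(size=1000)
x = np.where(u < 0.10, 0.0,
    np.where(u < 0.15, 1.0, rng.beta(2, 5, size=1000)))

p0_hat = np.mean(x == 0)            # MLE of P(X = 0)
p1_hat = np.mean(x == 1)            # MLE of P(X = 1)
interior = x[(x > 0) & (x < 1)]
# MLE of the beta parameters on the continuous part (support fixed to (0,1))
a_hat, b_hat, _, _ = beta_dist.fit(interior, floc=0, fscale=1)
print(f"p0={p0_hat:.3f}, p1={p1_hat:.3f}, a={a_hat:.2f}, b={b_hat:.2f}")
```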

Abstract:

This thesis is motivated by biological questions about the behavior of membrane potentials in neurons. A widely studied model for spiking neurons is the following. Between spikes, the membrane potential behaves like a diffusion process X given by the SDE dX_t = beta(X_t) dt + sigma(X_t) dB_t, where (B_t) denotes a standard Brownian motion. Spikes are explained as follows: as soon as the potential X crosses a certain excitation threshold S, a spike occurs, after which the potential is reset to a fixed value x_0. In applications it is sometimes possible to observe the diffusion process X between spikes and to estimate the SDE coefficients beta() and sigma(). Nevertheless, the thresholds x_0 and S must be determined to fully specify the model. One way to approach this problem is to treat x_0 and S as parameters of a statistical model and estimate them. This thesis discusses four cases, assuming that the membrane potential X between spikes is, respectively, a Brownian motion with drift, a geometric Brownian motion, an Ornstein-Uhlenbeck process, or a Cox-Ingersoll-Ross process. In addition, we observe the times between consecutive spikes, which we regard as iid hitting times of the threshold S by X started at x_0. The first two cases are very similar, and in each the maximum likelihood estimator can be given explicitly; moreover, the optimality of these estimators is shown using LAN theory. In the Ornstein-Uhlenbeck and Cox-Ingersoll-Ross cases we choose a minimum-distance method based on comparing the empirical and true Laplace transforms with respect to a Hilbert space norm. We prove that all estimators are strongly consistent and asymptotically normal.
In the last chapter we assess the efficiency of the minimum-distance estimators on simulated data. Furthermore, applications to real data sets and their results are discussed in detail.
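The Brownian-motion-with-drift case admits a concrete sketch: the hitting time of a level S > x_0 by X_t = x_0 + mu t + sigma B_t is inverse Gaussian with mean (S - x_0)/mu and shape (S - x_0)^2/sigma^2, and its maximum likelihood estimates have closed forms. The parameter values below are illustrative assumptions, not taken from the thesis.

```python
# Sketch of the Brownian-motion-with-drift case: interspike intervals
# are iid inverse Gaussian hitting times, and the ML estimates of the
# inverse Gaussian parameters are explicit. Parameter values are
# illustrative.
import numpy as np

rng = np.random.default_rng(4)
mu, sigma, x0, S = 1.0, 0.5, 0.0, 2.0
d = S - x0
# Hitting-time law: inverse Gaussian with mean d/mu and shape d^2/sigma^2
mean_true, shape_true = d / mu, d**2 / sigma**2
times = rng.wald(mean_true, shape_true, size=5000)

# Closed-form MLEs for the inverse Gaussian parameters:
#   mean_hat  = sample mean
#   shape_hat = n / sum(1/t_i - 1/mean_hat)
mean_hat = times.mean()
shape_hat = len(times) / np.sum(1.0 / times - 1.0 / mean_hat)
print(f"mean: {mean_hat:.3f} (true {mean_true}), "
      f"shape: {shape_hat:.3f} (true {shape_true})")
```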

Abstract:

The aim of many genetic studies is to locate the genomic regions (called quantitative trait loci, QTLs) that contribute to variation in a quantitative trait (such as body weight). Confidence intervals for the locations of QTLs are particularly important for the design of further experiments to identify the gene or genes responsible for the effect. Likelihood support intervals are the most widely used method to obtain confidence intervals for QTL location, but the non-parametric bootstrap has also been recommended. Through extensive computer simulation, we show that bootstrap confidence intervals are poorly behaved and so should not be used in this context. The profile likelihood (or LOD curve) for QTL location has a tendency to peak at genetic markers, and so the distribution of the maximum likelihood estimate (MLE) of QTL location has the unusual feature of point masses at genetic markers; this contributes to the poor behavior of the bootstrap. Likelihood support intervals and approximate Bayes credible intervals, on the other hand, are shown to behave appropriately.

Abstract:

We investigate the interplay of smoothness and monotonicity assumptions in estimating a density from a sample of observations. The nonparametric maximum likelihood estimator of a decreasing density on the positive half line attains a rate of convergence at a fixed point if the density has a negative derivative. The same rate is obtained by a kernel estimator, but the limit distributions are different. If the density is both differentiable and known to be monotone, a third estimator is obtained by isotonization of a kernel estimator. We show that this again attains the rate of convergence and compare the limit distributions of the three types of estimators. Both isotonization and smoothing lead to a more concentrated limit distribution, and we study the dependence on the proportionality constant in the bandwidth. We also show that isotonization does not change the limit behavior of a kernel estimator with a larger bandwidth when the density is known to have more than one derivative.
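The nonparametric MLE of a decreasing density is the Grenander estimator: the left derivative of the least concave majorant of the empirical CDF, computable by pool-adjacent-violators. A self-contained sketch (the exponential test sample is an illustrative assumption):

```python
# Sketch of the nonparametric MLE (Grenander estimator) of a decreasing
# density: the left derivative of the least concave majorant of the
# empirical CDF, computed here via pool-adjacent-violators (PAVA).
import numpy as np

def grenander(sample):
    """Grenander estimate evaluated at the sample's order statistics."""
    x = np.sort(sample)
    n = len(x)
    gaps = np.diff(np.concatenate(([0.0], x)))   # spacings from 0
    slopes = (1.0 / n) / gaps                    # naive ECDF slopes
    # PAVA for a non-increasing, gap-weighted fit of the slopes
    vals, wts, cnts = [], [], []
    for s, g in zip(slopes, gaps):
        vals.append(s); wts.append(g); cnts.append(1)
        while len(vals) > 1 and vals[-2] < vals[-1]:
            w = wts[-2] + wts[-1]
            vals[-2:] = [(vals[-2] * wts[-2] + vals[-1] * wts[-1]) / w]
            wts[-2:] = [w]
            cnts[-2:] = [cnts[-2] + cnts[-1]]
    return x, np.repeat(vals, cnts), gaps

rng = np.random.default_rng(5)
x, f, gaps = grenander(rng.exponential(size=500))  # true density decreasing
print("non-increasing:", bool(np.all(np.diff(f) <= 0)))
print("total mass:", float(np.sum(f * gaps)))
```

PAVA preserves the weighted sum within each pooled block, so the fitted step density still integrates to one over (0, max(x)].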

Abstract:

This paper introduces a novel approach to inference about the regression parameters in the accelerated failure time (AFT) model for current status and interval-censored data. The estimator is constructed by inverting a Wald-type test of a null proportional hazards model. A numerically efficient Markov chain Monte Carlo (MCMC) based resampling method is proposed to simultaneously obtain the point estimator and a consistent estimator of its variance-covariance matrix. We illustrate our approach with interval-censored data sets from two clinical studies. Extensive numerical studies are conducted to evaluate the finite sample performance of the new estimators.