945 results for Maximum likelihood estimate
Abstract:
We investigate full-field detection-based maximum-likelihood sequence estimation (MLSE) for chromatic dispersion compensation in 10 Gbit/s OOK optical communication systems. Important design criteria are identified to optimize the system performance. It is confirmed that approximately 50% improvement in transmission reach can be achieved compared to conventional direct-detection MLSE at both 4 and 16 states. It is also shown that full-field MLSE is more robust to noise and the associated noise amplification in full-field reconstruction, and consequently exhibits better tolerance to non-optimized system parameters than a full-field feedforward equalizer. Experiments over 124 km spans of field-installed single-mode fiber without optical dispersion compensation using full-field MLSE verify the theoretically predicted performance benefits.
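The abstract does not describe the receiver internals, but MLSE equalizers of this kind are conventionally implemented with the Viterbi algorithm over a small intersymbol-interference trellis. A minimal sketch in Python, assuming a hypothetical 3-tap real-valued channel (hence 4 states, matching the smaller configuration above) and a squared-error branch metric appropriate for Gaussian noise; the tap values and function name are illustrative, not the paper's receiver.

```python
import numpy as np

def viterbi_mlse(received, taps):
    """Minimal MLSE for binary (OOK) symbols over a short ISI channel
    with additive Gaussian noise: Viterbi search over a trellis with
    2**(len(taps)-1) states and a squared-error branch metric."""
    m = len(taps) - 1                    # channel memory (2 -> 4 states)
    n_states = 2 ** m
    n = len(received)
    cost = np.full(n_states, np.inf)
    cost[0] = 0.0                        # assume an all-zeros starting history
    back = np.zeros((n, n_states), dtype=int)

    for k in range(n):
        new_cost = np.full(n_states, np.inf)
        for s in range(n_states):        # s packs (b[k-1], ..., b[k-m])
            if not np.isfinite(cost[s]):
                continue
            hist = [(s >> i) & 1 for i in range(m)]
            for b0 in (0, 1):            # hypothesised current bit
                y_hat = taps[0] * b0 + sum(t * h for t, h in zip(taps[1:], hist))
                ns = ((s << 1) | b0) & (n_states - 1)
                c = cost[s] + (received[k] - y_hat) ** 2
                if c < new_cost[ns]:
                    new_cost[ns] = c
                    back[k, ns] = s
        cost = new_cost

    s = int(np.argmin(cost))             # trace back the ML path
    bits = np.zeros(n, dtype=int)
    for k in range(n - 1, -1, -1):
        bits[k] = s & 1
        s = back[k, s]
    return bits

# tiny demo with made-up taps and noise level
rng = np.random.default_rng(0)
taps = np.array([1.0, 0.6, 0.2])
tx = rng.integers(0, 2, 200)
rx = np.convolve(tx, taps)[:200] + 0.1 * rng.normal(size=200)
print("bit accuracy:", (viterbi_mlse(rx, taps) == tx).mean())
```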
Abstract:
2010 Mathematics Subject Classification: 62J99.
Abstract:
This report reviews the literature on the rate of convergence of maximum likelihood estimators and establishes a Central Limit Theorem, which yields an O(1/sqrt(n)) rate of convergence of the maximum likelihood estimator under somewhat relaxed smoothness conditions. These conditions require only the existence of a one-sided derivative of the pdf in θ, compared with the up to three derivatives classically required. A verification through simulation is included at the end of the report.
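The report's own simulation is not reproduced here, but the kind of check it describes is easy to sketch. A minimal illustration, assuming an exponential rate parameter (whose MLE, one over the sample mean, is available in closed form): the rescaled error sqrt(n)*RMSE settles near a constant, consistent with an O(1/sqrt(n)) rate.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0                              # true rate of an Exponential(theta)
for n in (100, 400, 1600, 6400):
    # the MLE of the rate is 1 / sample mean, so no optimizer is needed
    est = 1.0 / rng.exponential(1 / theta, size=(20_000, n)).mean(axis=1)
    rmse = np.sqrt(np.mean((est - theta) ** 2))
    print(f"n={n:5d}  rmse={rmse:.4f}  sqrt(n)*rmse={np.sqrt(n) * rmse:.3f}")
# the last column hovers near theta = 2, the asymptotic standard
# deviation sqrt(n)*theta/sqrt(n) predicted by the CLT for this model
```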
Abstract:
Using the classical twin design, this study investigates the influence of genetic factors on the large phenotypic variance in inspection time (IT), and whether the well-established IT-IQ association can be explained by a common genetic factor. Three hundred ninety pairs of twins (184 monozygotic, MZ; 206 dizygotic, DZ) with a mean age of 16 years participated, and 49 pairs returned approximately 3 months later for retesting. As in many IT studies, the pi figure stimulus was used and IT was estimated from the cumulative normal ogive. IT ranged from 39.4 to 774.1 ms (159 +/- 110.1 ms), with faster ITs (by an average of 26.9 ms) found in the retest session, from which a reliability of .69 was estimated. Full-scale IQ (FIQ) was assessed by the Multidimensional Aptitude Battery (MAB) and ranged from 79 to 145 (111 +/- 13). The phenotypic association between IT and FIQ was confirmed (-.35), and bivariate results showed that a common genetic factor accounted for 36% of the variance in IT and 32% of the variance in FIQ. The maximum likelihood estimate of the genetic correlation was -.63. When performance and verbal IQ (PIQ and VIQ) were analysed with IT, a stronger phenotypic and genetic relationship was found between PIQ and IT than with VIQ. A large part of the IT variance (64%) was accounted for by a unique environmental factor. Further genetic factors were needed to explain the remaining variance in IQ, with a small component of unique environmental variance present. The separability of a shared genetic factor influencing IT and IQ from the total genetic variance in IQ suggests that IT affects a specific subcomponent of intelligence rather than a generalised efficiency. (C) 2001 Elsevier Science Inc. All rights reserved.
Abstract:
Robust estimators for accelerated failure time models with asymmetric (or symmetric) error distribution and censored observations are proposed. It is assumed that the error model belongs to a log-location-scale family of distributions and that the mean response is the parameter of interest. Since scale is a main component of the mean, scale is not treated as a nuisance parameter. A three-step procedure is proposed. In the first step, an initial high breakdown point S estimate is computed. In the second step, observations that are unlikely under the estimated model are rejected or downweighted. Finally, a weighted maximum likelihood estimate is computed. To define the estimates, functions of censored residuals are replaced by their estimated conditional expectation given that the response is larger than the observed censored value. The rejection rule in the second step is based on an adaptive cut-off that, asymptotically, does not reject any observation when the data are generated according to the model. Therefore, the final estimate attains full efficiency at the model, with respect to the maximum likelihood estimate, while maintaining the breakdown point of the initial estimator. Asymptotic results are provided. The new procedure is evaluated with the help of Monte Carlo simulations. Two examples with real data are discussed.
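A heavily simplified sketch of the three-step idea, for the uncensored log-normal case only: the paper's S-estimator, adaptive cut-off, and conditional-expectation handling of censored residuals are replaced here by simpler stand-ins (least absolute deviations, a fixed cut-off, and no censoring), so this illustrates the structure of the procedure, not the proposed estimator itself.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def robust_lognormal_aft(logt, X, cut=2.5):
    """Three-step skeleton for uncensored log-normal AFT data:
    (1) rough high-breakdown initial fit, (2) reject outlying residuals,
    (3) maximum likelihood on the retained observations."""
    # Step 1: least absolute deviations as a stand-in for the paper's
    # S-estimator, with a MAD scale estimate.
    p = X.shape[1]
    beta0 = minimize(lambda b: np.abs(logt - X @ b).sum(),
                     x0=np.zeros(p), method="Nelder-Mead").x
    r = logt - X @ beta0
    s0 = 1.4826 * np.median(np.abs(r - np.median(r)))

    # Step 2: fixed cut-off on standardized residuals (the paper instead
    # uses an adaptive cut-off that asymptotically rejects nothing).
    keep = np.abs(r / s0) <= cut

    # Step 3: Gaussian maximum likelihood on the retained subsample.
    def nll(par):
        b, log_s = par[:-1], par[-1]
        return -norm.logpdf(logt[keep], X[keep] @ b, np.exp(log_s)).sum()
    res = minimize(nll, x0=np.append(beta0, np.log(s0)), method="Nelder-Mead")
    return res.x[:-1], np.exp(res.x[-1])
```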
Abstract:
Logistic regression is a generalized linear model (GLM) used for binary response variables. The model estimates the probability of success of that variable through a linear combination of explanatory variables. When the objective is to estimate as precisely as possible the impact of different incentives in a marketing campaign (the logistic regression coefficients), one seeks the most precise estimation method. Using the slice-sampling MCMC method, we compare different prior densities specified by density type, location parameter, and scale parameter. These comparisons are applied to samples of different sizes, generated with different success probabilities. The maximum likelihood estimator, Gelman's method, and Genkin's method complete the comparison. Our results show that three estimation methods yield globally more precise estimates of the logistic regression coefficients: slice-sampling MCMC with a normal prior centred at 0 with variance 3.125, slice-sampling MCMC with a Student-t prior with 3 degrees of freedom, also centred at 0 with variance 3.125, and Gelman's method with a Cauchy prior centred at 0 with scale parameter 2.5.
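As an illustration of the slice-sampling configuration singled out above, here is a minimal coordinate-wise slice sampler (stepping-out and shrinkage, as in Neal, 2003) for logistic regression coefficients under independent normal priors centred at 0 with variance 3.125; the step width and iteration count are arbitrary defaults, not the thesis's settings.

```python
import numpy as np

def slice_sample_logistic(X, y, n_iter=2000, prior_var=3.125, w=1.0, rng=None):
    """Coordinate-wise slice sampler for logistic-regression coefficients
    under independent N(0, prior_var) priors; y is a 0/1 vector and X
    the design matrix (include an intercept column if wanted)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    p = X.shape[1]
    beta = np.zeros(p)

    def log_post(b):
        eta = X @ b
        return y @ eta - np.logaddexp(0.0, eta).sum() - b @ b / (2 * prior_var)

    draws = np.empty((n_iter, p))
    for it in range(n_iter):
        for j in range(p):
            def f(v):                        # log posterior as a function
                b = beta.copy(); b[j] = v    # of coordinate j only
                return log_post(b)
            log_level = f(beta[j]) + np.log(rng.uniform())   # slice height
            left = beta[j] - w * rng.uniform()
            right = left + w
            while f(left) > log_level:       # step the window out ...
                left -= w
            while f(right) > log_level:
                right += w
            while True:                      # ... then shrink until accepted
                v = rng.uniform(left, right)
                if f(v) > log_level:
                    beta[j] = v
                    break
                if v < beta[j]:
                    left = v
                else:
                    right = v
        draws[it] = beta
    return draws     # discard an initial stretch of draws as burn-in
```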
Abstract:
A number of authors have proposed clinical trial designs involving the comparison of several experimental treatments with a control treatment in two or more stages. At the end of the first stage, the most promising experimental treatment is selected, and all other experimental treatments are dropped from the trial. Provided it is good enough, the selected experimental treatment is then compared with the control treatment in one or more subsequent stages. The analysis of data from such a trial is problematic because of the treatment selection and the possibility of stopping at interim analyses. These aspects lead to bias in the maximum-likelihood estimate of the advantage of the selected experimental treatment over the control and to inaccurate coverage for the associated confidence interval. In this paper, we evaluate the bias of the maximum-likelihood estimate and propose a bias-adjusted estimate. We also propose an approach to the construction of a confidence region for the vector of advantages of the experimental treatments over the control based on an ordering of the sample space. These regions are shown to have accurate coverage, although they are also shown to be necessarily unbounded. Confidence intervals for the advantage of the selected treatment are obtained from the confidence regions and are shown to have more accurate coverage than the standard confidence interval based upon the maximum-likelihood estimate and its asymptotic standard error.
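The selection bias discussed above is easy to reproduce in simulation. A minimal sketch, assuming four experimental arms with no true advantage over control and a known variance (all numbers are illustrative): the naive maximum-likelihood estimate of the selected arm's advantage comes out clearly positive.

```python
import numpy as np

rng = np.random.default_rng(1)
k, n, sigma, reps = 4, 50, 1.0, 200_000       # 4 arms, known variance
z_ctrl = rng.normal(0.0, sigma / np.sqrt(n), reps)
z_trt = rng.normal(0.0, sigma / np.sqrt(n), (reps, k))  # no true advantage
naive = z_trt.max(axis=1) - z_ctrl   # MLE of the selected arm's advantage
print("mean naive estimate:", naive.mean())  # ~ +0.15, though the truth is 0
```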
Abstract:
The paper concerns the design and analysis of serial dilution assays to estimate the infectivity of a sample of tissue when it is assumed that the sample contains a finite number of indivisible infectious units such that a subsample will be infectious if it contains one or more of these units. The aim of the study is to estimate the number of infectious units in the original sample. The standard approach to the analysis of data from such a study is based on the assumption of independence of aliquots both at the same dilution level and at different dilution levels, so that the numbers of infectious units in the aliquots follow independent Poisson distributions. An alternative approach is based on calculation of the expected value of the total number of samples tested that are not infectious. We derive the likelihood for the data on the basis of the discrete number of infectious units, enabling calculation of the maximum likelihood estimate and likelihood-based confidence intervals. We use the exact probabilities that are obtained to compare the maximum likelihood estimate with those given by the other methods in terms of bias and standard error and to compare the coverage of the confidence intervals. We show that the methods have very similar properties and conclude that for practical use the method that is based on the Poisson assumption is to be recommended, since it can be implemented by using standard statistical software. Finally we consider the design of serial dilution assays, concluding that it is important that neither the dilution factor nor the number of samples that remain untested should be too large.
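Under the Poisson assumption recommended above, an aliquot containing a fraction f of the original sample is infectious with probability 1 − exp(−λf), where λ is the expected number of infectious units in the sample, and the maximum likelihood estimate follows from a binomial likelihood across dilution levels. A minimal sketch with made-up counts:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Made-up counts: at each level, `frac` is the fraction of the original
# sample in one aliquot, n aliquots tested, k of them infectious.
frac = np.array([0.1, 0.01, 0.001])
n = np.array([8, 8, 8])
k = np.array([8, 5, 1])

def neg_loglik(lam):
    # Poisson assumption: an aliquot holding a fraction f of the sample
    # is infectious with probability 1 - exp(-lam * f).
    p = -np.expm1(-lam * frac)
    return -(k * np.log(p) + (n - k) * np.log1p(-p)).sum()

res = minimize_scalar(neg_loglik, bounds=(1e-6, 1e6), method="bounded")
print("MLE of the number of infectious units:", res.x)
```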
Abstract:
This paper considers the problem of estimation when one of a number of populations, assumed normal with known common variance, is selected on the basis of it having the largest observed mean. Conditional on selection of the population, the observed mean is a biased estimate of the true mean. This problem arises in the analysis of clinical trials in which selection is made between a number of experimental treatments that are compared with each other either with or without an additional control treatment. Attempts to obtain approximately unbiased estimates in this setting have been proposed by Shen [2001. An improved method of evaluating drug effect in a multiple dose clinical trial. Statist. Medicine 20, 1913–1929] and Stallard and Todd [2005. Point estimates and confidence regions for sequential trials involving selection. J. Statist. Plann. Inference 135, 402–419]. This paper explores the problem in the simple setting in which two experimental treatments are compared in a single analysis. It is shown that in this case the estimate of Stallard and Todd is the maximum-likelihood estimate (m.l.e.), and this is compared with the estimate proposed by Shen. In particular, it is shown that the m.l.e. has infinite expectation whatever the true value of the mean being estimated. We show that there is no conditionally unbiased estimator, and propose a new family of approximately conditionally unbiased estimators, comparing these with the estimators suggested by Shen.
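The conditional bias at issue can be seen in the simplest case of two experimental treatments with equal true means: the observed mean of whichever treatment is selected has expectation μ + σ/√π rather than μ. A quick numerical check of that fact (not the paper's estimators):

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 0.0, 1.0
x = rng.normal(mu, sigma, size=(1_000_000, 2))  # two equally good treatments
sel = x.max(axis=1)          # observed mean of whichever arm is selected
print(sel.mean(), mu + sigma / np.sqrt(np.pi))  # both ~ 0.564
```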
Abstract:
The weak-constraint inverse for nonlinear dynamical models is discussed and derived in terms of a probabilistic formulation. The well-known result that for Gaussian error statistics the minimum of the weak-constraint inverse is equal to the maximum-likelihood estimate is rederived. Then several methods based on ensemble statistics that can be used to find the smoother (as opposed to the filter) solution are introduced and compared to traditional methods. A strong point of the new methods is that they avoid the integration of adjoint equations, which is a complex task for real oceanographic or atmospheric applications. They also avoid iterative searches in a Hilbert space, and error estimates can be obtained without much additional computational effort. The feasibility of the new methods is illustrated in a two-layer quasigeostrophic model.
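Schematically, the rederived equivalence is the following (the symbols are the usual weak-constraint ones and are assumptions here: x_b and B the background state and its error covariance, q_k = x_{k+1} − M(x_k) the model error with covariance Q, d the data, H the measurement operator, R the observation-error covariance):

```latex
-2\ln p(x \mid d)
  = (x_0 - x_b)^{\mathsf{T}} B^{-1} (x_0 - x_b)
  + \sum_k q_k^{\mathsf{T}} Q^{-1} q_k
  + (d - \mathcal{H}x)^{\mathsf{T}} R^{-1} (d - \mathcal{H}x)
  + \text{const} \;=\; J(x)
```

Minimizing J over the trajectory x is therefore the same as maximizing the posterior density, which for Gaussian statistics coincides with the maximum-likelihood estimate mentioned above.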
Abstract:
In this article, we give an asymptotic formula of order n^(-1/2), where n is the sample size, for the skewness of the distributions of the maximum likelihood estimates of the parameters in exponential family nonlinear models. We generalize the result by Cordeiro and Cordeiro (2001). The formula is given in matrix notation and is very suitable for computer implementation and for obtaining closed-form expressions for a great variety of models. Some special cases and two applications are discussed.
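The matrix formula itself is not quoted in the abstract, but the n^(-1/2) decay it describes is easy to see numerically in the simplest case, the MLE of an exponential rate (an assumption for illustration, not the paper's nonlinear models): sqrt(n) times the skewness of the estimator's distribution stabilizes.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(3)
theta = 2.0                      # MLE of an exponential rate, as the
for n in (25, 100, 400):         # simplest exponential-family example
    est = 1.0 / rng.exponential(1 / theta, size=(100_000, n)).mean(axis=1)
    s = skew(est)
    print(f"n={n:4d}  skewness={s:.3f}  sqrt(n)*skewness={np.sqrt(n) * s:.2f}")
# the last column stabilizes (near 4 for this model), i.e. the skewness
# of the MLE's distribution decays like n^(-1/2)
```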
Abstract:
In this paper, we derive score test statistics to discriminate between proportional hazards and proportional odds models for grouped survival data. These models are embedded within a power family of transformations in order to obtain the score tests. In simple cases, some small-sample results are obtained for the score statistics using Monte Carlo simulations. The score statistics have distributions that are well approximated by the chi-squared distribution. Real examples illustrate the proposed tests.
Abstract:
Considering the socio-economic importance of the Presidente Prudente region, this study aimed to estimate the expected maximum rainfall for different probability levels and to assess the goodness of fit of the data to the Gumbel model, with the parameter estimates obtained by the maximum likelihood method. The results of the Kolmogorov-Smirnov (K-S) test showed that the fitted Gumbel distribution had a p-value greater than 0.28 for all time periods considered, confirming that the Gumbel distribution fits the observed data well as a representation of maximum rainfall. The rainfall estimates obtained by the maximum likelihood method are consistent, reproducing the rainfall regime of the Presidente Prudente region quite faithfully. Knowledge of the distribution of maximum monthly rainfall and of the estimated expected maximum daily rainfall thus allows better strategic planning, minimizing the risk of economic losses in the region.
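The workflow described (maximum likelihood fit of a Gumbel model, a K-S goodness-of-fit check, and expected maxima at given probability levels) can be sketched as follows; the data below are simulated placeholders, not the Presidente Prudente series.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Illustrative annual-maximum rainfall series (mm), simulated here.
x = stats.gumbel_r.rvs(loc=80, scale=25, size=40, random_state=rng)

loc, scale = stats.gumbel_r.fit(x)            # maximum likelihood fit
ks = stats.kstest(x, "gumbel_r", args=(loc, scale))
print(f"mu={loc:.1f} mm, beta={scale:.1f} mm, K-S p-value={ks.pvalue:.2f}")

# Expected maximum for a given probability level, e.g. the rainfall
# exceeded with probability 0.01 in a year (100-year return level):
print("100-year level:", stats.gumbel_r.ppf(0.99, loc, scale))
```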
Abstract:
In this paper we propose a new two-parameter lifetime distribution with increasing failure rate. The new distribution arises from a latent complementary risk problem. The properties of the proposed distribution are discussed, including a formal proof of its probability density function and explicit algebraic formulae for its reliability and failure rate functions, quantiles and moments, including the mean and variance. A simple EM-type algorithm for iteratively computing maximum likelihood estimates is presented. The Fisher information matrix is derived analytically in order to obtain the asymptotic covariance matrix. The methodology is illustrated on a real data set. © 2010 Elsevier B.V. All rights reserved.
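The abstract does not give the density, so the sketch below assumes one concrete latent complementary-risk model, T = max(Z_1, ..., Z_N) with i.i.d. exponential risks and a geometric latent count N, for which an EM-type iteration is straightforward because the complete-data log-likelihood is linear in N. This illustrates the flavour of algorithm described, not necessarily the paper's model.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def em_complementary_risk(t, n_iter=200):
    """EM-type iteration for T = max(Z_1, ..., Z_N) with Z_j ~ Exp(lam)
    i.i.d. and N ~ Geometric(theta) on {1, 2, ...}; the complete-data
    log-likelihood is linear in N, so the E-step only needs E[N | t]."""
    m = len(t)
    lam, theta = 1.0 / t.mean(), 0.5             # crude starting values
    for _ in range(n_iter):
        # E-step: E[N | T = t] = (1 + u) / (1 - u), u = theta*(1 - e^(-lam t))
        u = theta * -np.expm1(-lam * t)
        en = (1.0 + u) / (1.0 - u)
        # M-step: theta in closed form, one-dimensional search for lam
        theta = 1.0 - m / en.sum()
        def q(lam_new):
            logp = np.log(-np.expm1(-lam_new * t))   # log(1 - e^(-lam t))
            return -(np.log(lam_new) - lam_new * t + (en - 1.0) * logp).sum()
        lam = minimize_scalar(q, bounds=(1e-8, 1e3), method="bounded").x
    return lam, theta
```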
Abstract:
The exponential-logarithmic distribution is a new lifetime distribution with decreasing failure rate and interesting applications in the biological and engineering sciences. Thus, a Bayesian analysis of its parameters would be desirable. Bayesian estimation requires the selection of prior distributions for all parameters of the model. In this case, researchers usually seek to choose a prior that carries little information on the parameters, allowing the data to be very informative relative to the prior. Assuming some noninformative prior distributions, we present a Bayesian analysis using Markov chain Monte Carlo (MCMC) methods. The Jeffreys prior is derived for the parameters of the exponential-logarithmic distribution and compared with other common priors such as beta, gamma, and uniform distributions. In this article, we show through a simulation study that the maximum likelihood estimate may not exist except under restrictive conditions. In addition, the posterior density is sometimes bimodal when an improper prior density is used. © 2013 Copyright Taylor and Francis Group, LLC.
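A minimal sketch of the non-existence issue, assuming the exponential-logarithmic density of Tahmasbi and Rezaei (2008), f(x; p, β) = β(1−p)e^(−βx) / (−ln(p) (1 − (1−p)e^(−βx))) for 0 < p < 1, β > 0: maximizing the likelihood on data that are in fact exponential tends to push p toward the boundary of (0, 1), where no interior MLE exists.

```python
import numpy as np
from scipy.optimize import minimize

def el_negloglik(params, x):
    """Negative log-likelihood of the exponential-logarithmic density
    f(x; p, b) = b(1-p) e^(-bx) / (-ln(p) (1 - (1-p) e^(-bx)))."""
    p, b = params
    if not (0.0 < p < 1.0 and b > 0.0):
        return np.inf
    return -(np.log(b) + np.log1p(-p) - b * x
             - np.log(-np.log(p))
             - np.log1p(-(1.0 - p) * np.exp(-b * x))).sum()

x = np.random.default_rng(5).exponential(2.0, size=50)   # exponential data
res = minimize(el_negloglik, x0=np.array([0.5, 0.5]), args=(x,),
               method="Nelder-Mead")
print(res.x)   # if p runs to the boundary of (0, 1), the likelihood has
               # no interior maximum, illustrating the non-existence issue
```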