82 results for Maximum pseudo-likelihood

at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain


Relevance:

90.00%

Publisher:

Abstract:

Given a sample from a fully specified parametric model, let Zn be a given finite-dimensional statistic - for example, an initial estimator or a set of sample moments. We propose to (re-)estimate the parameters of the model by maximizing the likelihood of Zn. We call this the maximum indirect likelihood (MIL) estimator. We also propose a computationally tractable Bayesian version of the estimator which we refer to as a Bayesian Indirect Likelihood (BIL) estimator. In most cases, the density of the statistic will be of unknown form, and we develop simulated versions of the MIL and BIL estimators. We show that the indirect likelihood estimators are consistent and asymptotically normally distributed, with the same asymptotic variance as that of the corresponding efficient two-step GMM estimator based on the same statistic. However, our likelihood-based estimators, by taking into account the full finite-sample distribution of the statistic, are higher order efficient relative to GMM-type estimators. Furthermore, in many cases they enjoy a bias reduction property similar to that of the indirect inference estimator. Monte Carlo results for a number of applications including dynamic and nonlinear panel data models, a structural auction model and two DSGE models show that the proposed estimators indeed have attractive finite sample properties.
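As a toy illustration of the simulated MIL idea sketched in this abstract (not the authors' implementation), the code below estimates a normal mean by maximizing a kernel-density estimate of the likelihood of the statistic Zn, here taken to be the sample mean; the grid search, bandwidth, and all names are illustrative assumptions:

```python
import numpy as np

def simulated_mil(z_obs, simulate_stat, theta_grid, n_sim=500, bandwidth=0.1, seed=0):
    """Grid-search simulated MIL: for each candidate theta, simulate the
    statistic n_sim times and evaluate a Gaussian-kernel density estimate
    of the statistic's likelihood at the observed value z_obs."""
    rng = np.random.default_rng(seed)
    best_theta, best_ll = None, -np.inf
    for theta in theta_grid:
        draws = np.array([simulate_stat(theta, rng) for _ in range(n_sim)])
        # kernel density estimate of the statistic's density at z_obs
        dens = np.mean(np.exp(-0.5 * ((draws - z_obs) / bandwidth) ** 2)) \
               / (bandwidth * np.sqrt(2 * np.pi))
        ll = np.log(dens + 1e-300)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta

# Toy model: data ~ N(theta, 1); the statistic is the mean of n = 50 draws.
def stat(theta, rng, n=50):
    return rng.normal(theta, 1.0, size=n).mean()

z_obs = 0.8  # hypothetical observed sample mean
theta_hat = simulated_mil(z_obs, stat, np.linspace(-2, 2, 81))
```

In practice the paper's estimators use far more refined machinery; the sketch only shows the basic loop — simulate the statistic under each candidate parameter, estimate its density at the observed value, and maximize.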

Relevance:

90.00%

Publisher:

Abstract:

Standard Indirect Inference (II) estimators take a given finite-dimensional statistic, Z_{n}, and then estimate the parameters by matching the sample statistic with the model-implied population moment. We here propose a novel estimation method that utilizes all available information contained in the distribution of Z_{n}, not just its first moment. This is done by computing the likelihood of Z_{n}, and then estimating the parameters either by maximizing the likelihood or by computing the posterior mean for a given prior over the parameters. These are referred to as the maximum indirect likelihood (MIL) and Bayesian Indirect Likelihood (BIL) estimators, respectively. We show that the IL estimators are first-order equivalent to the corresponding moment-based II estimator that employs the optimal weighting matrix. However, due to higher-order features of Z_{n}, the IL estimators are higher-order efficient relative to the standard II estimator. The likelihood of Z_{n} will in general be unknown, and so simulated versions of the IL estimators are developed. Monte Carlo results for a structural auction model and a DSGE model show that the proposed estimators indeed have attractive finite-sample properties.

Relevance:

40.00%

Publisher:

Abstract:

Precise estimation of propagation parameters in precipitation media is of interest to improve the performance of communications systems and in remote sensing applications. In this paper, we present maximum-likelihood estimators of specific attenuation and specific differential phase in rain. The model used for obtaining the cited estimators assumes coherent propagation, reflection symmetry of the medium, and Gaussian statistics of the scattering matrix measurements. No assumptions about the microphysical properties of the medium are needed. The performance of the estimators is evaluated through simulated data. Results show negligible estimator bias and variances close to the Cramér–Rao bounds.

Relevance:

40.00%

Publisher:

Abstract:

A new statistical parallax method using the Maximum Likelihood principle is presented, allowing the simultaneous determination of a luminosity calibration, kinematic characteristics and spatial distribution of a given sample. This method has been developed for the exploitation of the Hipparcos data and presents several improvements with respect to the previous ones: the effects of the selection of the sample, the observational errors, the galactic rotation and the interstellar absorption are taken into account as an intrinsic part of the formulation (as opposed to external corrections). Furthermore, the method is able to identify and characterize physically distinct groups in inhomogeneous samples, thus avoiding biases due to unidentified components. Moreover, the implementation used by the authors is based on the extensive use of numerical methods, so avoiding the need for simplification of the equations and thus the bias they could introduce. Several examples of application using simulated samples are presented, to be followed by applications to real samples in forthcoming articles.

Relevance:

40.00%

Publisher:

Abstract:

The work presented evaluates the statistical characteristics of regional bias and expected error in reconstructions of real positron emission tomography (PET) data from human brain fluorodeoxyglucose (FDG) studies carried out by the maximum likelihood estimator (MLE) method with a robust stopping rule, and compares them with the results of filtered backprojection (FBP) reconstructions and with the method of sieves. The task of evaluating radioisotope uptake in regions of interest (ROIs) is investigated. An assessment of bias and variance in uptake measurements is carried out with simulated data. Then, by using three different transition matrices with different degrees of accuracy and a components-of-variance model for statistical analysis, it is shown that the characteristics obtained from real human FDG brain data are consistent with the results of the simulation studies.

Relevance:

40.00%

Publisher:

Abstract:

Biometric system performance can be improved by means of data fusion. Several kinds of information can be fused in order to obtain a more accurate classification (identification or verification) of an input sample. In this paper we present a method for computing the weights of a weighted-sum fusion of scores by means of a likelihood model. The maximum likelihood estimation is set up as a linear programming problem. The scores are derived from GMM classifiers, each working on a different feature extractor. Our experimental results assessed the robustness of the system against changes over time (different sessions) and against a change of microphone. The improvements obtained were significantly better (error bars of two standard deviations) than a uniformly weighted sum, a uniformly weighted product, or the best single classifier. The proposed method scales computationally with the number of scores to be fused in the same way as the simplex method for linear programming.

Relevance:

40.00%

Publisher:

Abstract:

The restricted maximum likelihood is preferred by many to the full maximum likelihood for estimation with variance component and other random-coefficient models, because the variance estimator is unbiased. It is shown that this unbiasedness is accompanied in some balanced designs by an inflation of the mean squared error. An estimator of the cluster-level variance that is uniformly more efficient than the full maximum likelihood is derived. Estimators of the variance ratio are also studied.
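The unbiasedness-versus-MSE trade-off described in this abstract can be seen in the simplest possible setting — i.i.d. normal data, where the ML variance estimator divides by n and the unbiased (REML-style) estimator divides by n - 1. This Monte Carlo sketch is only an analogy to the paper's variance-component setting, not its model:

```python
import numpy as np

# Compare the ML variance estimator (divide by n, biased downward) with the
# unbiased estimator (divide by n - 1) by Monte Carlo: unbiasedness comes
# at the price of a larger mean squared error.
rng = np.random.default_rng(42)
n, reps, true_var = 10, 20000, 1.0

samples = rng.normal(0.0, np.sqrt(true_var), size=(reps, n))
ss = ((samples - samples.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

var_ml = ss / n          # ML: biased, smaller MSE
var_reml = ss / (n - 1)  # unbiased, larger MSE

mse_ml = np.mean((var_ml - true_var) ** 2)
mse_reml = np.mean((var_reml - true_var) ** 2)
```

For normal data the theoretical MSEs are (2n - 1)/n² for the ML version and 2/(n - 1) for the unbiased one, so the biased estimator wins in MSE for every n — the same flavor of result the abstract reports for some balanced designs.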

Relevance:

40.00%

Publisher:

Abstract:

This paper is concerned with the derivation of new estimators and performance bounds for the problem of timing estimation of (linearly) digitally modulated signals. The conditional maximum likelihood (CML) method is adopted, in contrast to the classical low-SNR unconditional ML (UML) formulation that is systematically applied in the literature for the derivation of non-data-aided (NDA) timing-error detectors (TEDs). A new CML TED is derived and proved to be self-noise free, in contrast to the conventional low-SNR-UML TED. In addition, the paper provides a derivation of the conditional Cramér–Rao bound (CRB), which is higher (less optimistic) than the modified CRB (MCRB) [which is only reached by decision-directed (DD) methods]. It is shown that the CRB is a lower bound on the asymptotic statistical accuracy of the set of consistent estimators that are quadratic with respect to the received signal. Although the obtained bound is not general, it applies to most NDA synchronizers proposed in the literature. A closed-form expression of the conditional CRB is obtained, and numerical results confirm that the CML TED attains the new bound for moderate to high Eg/N0.

Relevance:

40.00%

Publisher:

Abstract:

This paper provides a systematic approach to the problem of non-data-aided symbol-timing estimation for linear modulations. The study is performed under the unconditional maximum likelihood framework, where the carrier-frequency error is included as a nuisance parameter in the mathematical derivation. The second-order moments of the received signal are found to be the sufficient statistics for the problem at hand, and they allow the provision of a robust performance in the presence of a carrier-frequency error uncertainty. We particularly focus on the exploitation of the cyclostationary property of linear modulations. This enables us to derive simple and closed-form symbol-timing estimators which are found to be based on the well-known square timing recovery method by Oerder and Meyr. Finally, we generalize the OM method to the case of linear modulations with offset formats. In this case, the square-law nonlinearity is found to provide not only the symbol timing but also the carrier-phase error.

Relevance:

40.00%

Publisher:

Abstract:

In this letter, we obtain the Maximum Likelihood Estimator of position in the framework of Global Navigation Satellite Systems. This theoretical result is the basis of a completely different approach to the positioning problem, in contrast to the conventional two-step position estimation, consisting of estimating the synchronization parameters of the in-view satellites and then performing a position estimation with that information. To the authors' knowledge, this is a novel approach which copes with signal fading and mitigates multipath and jamming interference. Besides, the concept of Position-based Synchronization is introduced, which states that synchronization parameters can be recovered from a user position estimation. We provide computer simulation results showing the robustness of the proposed approach in fading multipath channels. The Root Mean Square Error performance of the proposed algorithm is compared to those achieved with state-of-the-art synchronization techniques. A Sequential Monte Carlo based method is used to deal with the multivariate optimization problem resulting from the ML solution in an iterative way.

Relevance:

20.00%

Publisher:

Abstract:

In survival analysis, the problem of interval-censored data is usually handled via maximum likelihood estimation. In order to use a simplified expression of the likelihood function, standard methods assume that the conditions producing the censoring do not affect the failure time. In this article we formalize the conditions that ensure the validity of this simplified likelihood. In particular, we make precise different non-informative censoring conditions and define a constant-sum condition analogous to the one derived in the right-censoring context. We also prove that the inferences obtained with the simplified likelihood are correct when these conditions hold. Finally, we address the identifiability of the failure-time distribution function from the observed information and study the possibility of testing whether the constant-sum condition holds.
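The simplified likelihood in question gives each interval-censored observation the contribution F(R) - F(L), the probability that the failure time falls in the observed interval. A minimal sketch under an assumed exponential model (the distribution, grid search, and data-generating setup are illustrative, not from the article):

```python
import numpy as np

# Simplified interval-censoring likelihood under non-informative censoring:
# each observation (L, R] contributes F(R) - F(L). Exponential model with
# unknown rate, fitted by a one-dimensional grid search.
def F(t, rate):
    return 1.0 - np.exp(-rate * t)

def neg_loglik(rate, left, right):
    p = F(right, rate) - F(left, rate)
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

rng = np.random.default_rng(1)
true_rate = 0.5
t = rng.exponential(1.0 / true_rate, size=400)
# Censor each failure time into the unit-width interval containing it.
left = np.floor(t)
right = left + 1.0

grid = np.linspace(0.05, 2.0, 400)
rate_hat = grid[np.argmin([neg_loglik(r, left, right) for r in grid])]
```

The article's point is precisely that this convenient product form is only valid under conditions such as non-informative censoring or the constant-sum condition; when censoring depends on the failure time, maximizing it can be misleading.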

Relevance:

20.00%

Publisher:

Abstract:

This paper shows how a high level matrix programming language may be used to perform Monte Carlo simulation, bootstrapping, estimation by maximum likelihood and GMM, and kernel regression in parallel on symmetric multiprocessor computers or clusters of workstations. The implementation of parallelization is done in a way such that an investigator may use the programs without any knowledge of parallel programming. A bootable CD that allows rapid creation of a cluster for parallel computing is introduced. Examples show that parallelization can lead to important reductions in computational time. Detailed discussion of how the Monte Carlo problem was parallelized is included as an example for learning to write parallel programs for Octave.
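Monte Carlo simulation is "embarrassingly parallel" in the way this abstract exploits: replications are independent, so each worker can run its own chunk with its own seed and the results are pooled at the end. A minimal Python analogue of that pattern (the paper itself uses a matrix language such as Octave; the toy experiment below — coverage of a 95% confidence interval — is an assumption for illustration):

```python
import numpy as np
from multiprocessing import Pool

def run_chunk(args):
    """One worker's share of replications, with its own independent seed."""
    seed, reps, n = args
    rng = np.random.default_rng(seed)
    covered = 0
    for _ in range(reps):
        x = rng.normal(0.0, 1.0, size=n)
        half = 1.96 * x.std(ddof=1) / np.sqrt(n)
        covered += (abs(x.mean()) <= half)
    return covered

def parallel_coverage(total_reps=4000, n=30, workers=4):
    reps = total_reps // workers
    tasks = [(seed, reps, n) for seed in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(run_chunk, tasks)) / (reps * workers)

if __name__ == "__main__":
    print(parallel_coverage())
```

As in the paper, the caller needs no knowledge of the parallelization: the split-seed-gather logic is hidden inside one function, and only the total replication count is exposed.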

Relevance:

20.00%

Publisher:

Abstract:

This comment corrects the errors in the estimation process that appear in Martins (2001). The first error is in the parametric probit estimation, as the previously presented results do not maximize the log-likelihood function. In the global maximum more variables become significant. As for the semiparametric estimation method, the kernel function used in Martins (2001) can take on both positive and negative values, which implies that the participation probability estimates may be outside the interval [0,1]. We have solved the problem by applying local smoothing in the kernel estimation, as suggested by Klein and Spady (1993).

Relevance:

20.00%

Publisher:

Abstract:

Ever since the appearance of the ARCH model [Engle (1982a)], an impressive array of variance specifications belonging to the same class of models has emerged [e.g. Bollerslev's (1986) GARCH; Nelson's (1990) EGARCH]. This recent domain has seen very successful developments. Nevertheless, several empirical studies seem to show that the performance of such models is not always appropriate [Boulier (1992)]. In this paper we propose a new specification: the Quadratic Moving Average Conditional Heteroskedasticity (QMACH) model. Its statistical properties, such as the kurtosis and the symmetry, as well as two estimators (Method of Moments and Maximum Likelihood), are studied. Two statistical tests are presented: the first tests for homoskedasticity and the second discriminates between the ARCH and QMACH specifications. A Monte Carlo study is presented in order to illustrate some of the theoretical results. An empirical study is undertaken for the DM–US exchange rate.

Relevance:

20.00%

Publisher:

Abstract:

The Hausman (1978) test is based on the vector of differences of two estimators. It is usually assumed that one of the estimators is fully efficient, since this simplifies calculation of the test statistic. However, this assumption limits the applicability of the test, since widely used estimators such as the generalized method of moments (GMM) or quasi maximum likelihood (QML) are often not fully efficient. This paper shows that the test may easily be implemented, using well-known methods, when neither estimator is efficient. To illustrate, we present both simulation results as well as empirical results for utilization of health care services.
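When neither estimator is efficient, the shortcut Var(d) = Var(inefficient) - Var(efficient) no longer applies, and the variance of the difference must be estimated directly. One well-known way to do that is to estimate the joint distribution of the two estimators, e.g. by the bootstrap. The sketch below is a generic illustration of that idea, not the paper's specific procedure; the mean-versus-median example and all constants are assumptions:

```python
import numpy as np

# Hausman-type statistic H = d^2 / Var(d), with Var(d) estimated by
# bootstrapping the joint distribution of the two estimators. Toy pair:
# sample mean vs. sample median of a symmetric sample (both consistent
# for the same center under the null).
def hausman_stat(x, n_boot=500, seed=0):
    rng = np.random.default_rng(seed)
    d_hat = x.mean() - np.median(x)
    boot_d = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=x.size, replace=True)
        boot_d[b] = xb.mean() - np.median(xb)
    var_d = boot_d.var(ddof=1)
    return d_hat ** 2 / var_d  # approximately chi-squared(1) under the null

rng = np.random.default_rng(7)
h = hausman_stat(rng.normal(0.0, 1.0, size=300))
```

Under the null the statistic is compared to a chi-squared critical value with as many degrees of freedom as parameters being compared; a large value signals that the two estimators disagree by more than sampling error allows.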