899 results for asymptotic inference


Relevance:

70.00%

Publisher:

Abstract:

We propose finite sample tests and confidence sets for models with unobserved and generated regressors, as well as various models estimated by instrumental variables methods. The validity of the procedures is unaffected by the presence of identification problems or "weak instruments", so no detection of such problems is required. We study two distinct approaches for various models considered by Pagan (1984). The first is an instrument substitution method which generalizes an approach proposed by Anderson and Rubin (1949) and Fuller (1987) for different (although related) problems, while the second is based on splitting the sample. The instrument substitution method uses the instruments directly, instead of generated regressors, to test hypotheses about the "structural parameters" of interest and to build confidence sets. The second approach relies on "generated regressors", which allow a gain in degrees of freedom, and on a sample split technique. For inference about general, possibly nonlinear transformations of model parameters, projection techniques are proposed. A distributional theory is obtained under the assumptions of Gaussian errors and strictly exogenous regressors. We show that the various tests and confidence sets proposed are (locally) "asymptotically valid" under much weaker assumptions. The properties of the proposed tests are examined in simulation experiments. In general, they outperform the usual asymptotic inference methods in terms of both reliability and power. Finally, the techniques suggested are applied to a model of Tobin’s q and to a model of academic performance.
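
As a point of reference for the instrument substitution idea, here is a minimal sketch of the classical Anderson-Rubin test of H0: beta = beta0 in a linear structural equation with instruments Z and included exogenous regressors X. The function name and argument layout are illustrative assumptions, not the paper's code for generated-regressor models.

```python
import numpy as np
from scipy import stats

def anderson_rubin_test(y, Y, X, Z, beta0):
    """F-test that the instruments Z have zero coefficients in a regression of
    y - Y @ beta0 on [constant, X, Z] (classical Anderson-Rubin statistic)."""
    y0 = y - Y @ beta0                         # impose the null on the structural part
    n = len(y0)
    W_r = np.column_stack([np.ones(n), X])     # restricted design: constant and X
    W_u = np.column_stack([W_r, Z])            # unrestricted design: add the instruments
    ssr = lambda W: np.sum((y0 - W @ np.linalg.lstsq(W, y0, rcond=None)[0]) ** 2)
    q, k = Z.shape[1], W_u.shape[1]
    F = ((ssr(W_r) - ssr(W_u)) / q) / (ssr(W_u) / (n - k))
    return F, stats.f.sf(F, q, n - k)
```

Under Gaussian errors and exogenous X and Z, this statistic follows an exact F(q, n - k) distribution under the null regardless of instrument strength, which is the key property the paper builds on.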

Relevance:

60.00%

Publisher:

Abstract:

The decomposition methods available when the data are fully observed are not valid when the variable of interest is censored. This may explain the scarcity of such exercises for duration variables, which are usually observed under censoring. This paper proposes an Oaxaca-Blinder-type method for decomposing differences in means in the context of censored data. The validity of the method rests on the identification and estimation of the joint distribution of the duration variable and a set of covariates. In addition, a more general method is proposed that allows other functionals of interest, such as the median or the Gini coefficient, to be decomposed; it is based on specifying the conditional distribution function of the duration variable given a set of covariates. Monte Carlo experiments are carried out to assess the performance of these methods. Finally, the proposed methods are applied to analyse gender gaps in several features of unemployment duration in Spain, such as mean duration, the probability of being long-term unemployed, and the Gini coefficient. The results indicate that factors other than observable characteristics (such as human capital or household structure) play a central role in explaining these gaps.
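
For orientation, a minimal sketch of the standard (uncensored) Oaxaca-Blinder decomposition of a mean gap is given below; the censored-duration version proposed in the paper additionally requires estimating the joint distribution of the duration and the covariates, which this sketch does not do. Names are illustrative.

```python
import numpy as np

def oaxaca_blinder(y_a, X_a, y_b, X_b):
    """Decompose mean(y_a) - mean(y_b) into an 'explained' part (characteristics)
    and an 'unexplained' part (coefficients), using group-specific OLS fits."""
    Xa = np.column_stack([np.ones(len(y_a)), X_a])
    Xb = np.column_stack([np.ones(len(y_b)), X_b])
    beta_a, *_ = np.linalg.lstsq(Xa, y_a, rcond=None)
    beta_b, *_ = np.linalg.lstsq(Xb, y_b, rcond=None)
    xbar_a, xbar_b = Xa.mean(axis=0), Xb.mean(axis=0)
    explained = (xbar_a - xbar_b) @ beta_b     # gap due to different characteristics
    unexplained = xbar_a @ (beta_a - beta_b)   # gap due to different coefficients
    return explained, unexplained
```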

Relevance:

60.00%

Publisher:

Abstract:

The generalized Birnbaum-Saunders (GBS) distribution is a new class of positively skewed models with lighter and heavier tails than the traditional Birnbaum-Saunders (BS) distribution, which is widely applied to study lifetimes. However, the theoretical arguments and interesting properties of the GBS model have made its application possible beyond lifetime analysis. The aim of this paper is to present the GBS distribution as a useful model for describing pollution data and to derive its positive and negative moments. Based on these moments, we develop estimation and goodness-of-fit methods. Some properties of the proposed estimators useful for developing asymptotic inference are also presented. Finally, an application with real data from the environmental sciences is given to illustrate the methodology developed. This example shows that the empirical fit of the GBS distribution to the data is very good. Thus, the GBS model is appropriate for describing air pollutant concentration data, and it produces better results than the lognormal model when the administrative target is determined for abating air pollution.
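
As a small, hedged illustration of moment-based estimation in this family, the sketch below fits the classical BS distribution (a special case of the GBS class) by a modified moment method using E[T] = beta(1 + alpha^2/2) and E[1/T] = (1 + alpha^2/2)/beta; it is not the paper's GBS methodology, and all names are illustrative.

```python
import numpy as np

def fit_bs_moments(t):
    """Return (alpha_hat, beta_hat) for a Birnbaum-Saunders sample t > 0."""
    t = np.asarray(t, dtype=float)
    s = t.mean()                    # arithmetic mean, estimates beta*(1 + alpha^2/2)
    r = 1.0 / np.mean(1.0 / t)      # harmonic mean, estimates beta/(1 + alpha^2/2)
    beta_hat = np.sqrt(s * r)
    alpha_hat = np.sqrt(2.0 * (np.sqrt(s / r) - 1.0))
    return alpha_hat, beta_hat

# quick check on simulated BS data via T = beta*(alpha*Z/2 + sqrt((alpha*Z/2)^2 + 1))^2
rng = np.random.default_rng(0)
z = rng.standard_normal(50_000)
alpha, beta = 0.5, 2.0
t = beta * (alpha * z / 2 + np.sqrt((alpha * z / 2) ** 2 + 1)) ** 2
print(fit_bs_moments(t))            # should be close to (0.5, 2.0)
```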

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a new anytime algorithm for the marginal MAP problem in graphical models of bounded treewidth. We show asymptotic convergence and theoretical error bounds for any fixed step. Experiments show that it compares well to a state-of-the-art systematic search algorithm.
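
For readers unfamiliar with the problem, the sketch below spells out the marginal MAP objective by brute force on a tiny discrete model: maximize over the MAP variables after summing out the remaining ones. It only defines the objective; it is not the paper's anytime, treewidth-bounded algorithm, and all names are illustrative.

```python
import itertools
import numpy as np

def marginal_map_bruteforce(potential, n_map, n_sum, card=2):
    """potential(m, s) -> unnormalized probability of MAP assignment m and
    summed-out assignment s; enumerate everything (exponential cost)."""
    best_m, best_val = None, -np.inf
    for m in itertools.product(range(card), repeat=n_map):
        val = sum(potential(m, s) for s in itertools.product(range(card), repeat=n_sum))
        if val > best_val:
            best_m, best_val = m, val
    return best_m, best_val

# toy example: two MAP variables, two summed-out variables, an arbitrary positive potential
phi = lambda m, s: np.exp(0.8 * m[0] + 0.3 * m[1] * s[0] - 0.5 * s[1] * m[0])
print(marginal_map_bruteforce(phi, n_map=2, n_sum=2))
```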

Relevance:

30.00%

Publisher:

Abstract:

In the context of multivariate linear regression (MLR) and seemingly unrelated regressions (SURE) models, it is well known that commonly employed asymptotic test criteria are seriously biased towards overrejection. In this paper, we propose finite- and large-sample likelihood-based test procedures for possibly nonlinear hypotheses on the coefficients of MLR and SURE systems.
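
As context, a minimal sketch of the Gaussian likelihood-ratio statistic for nested MLR models is given below; comparing it to its asymptotic chi-square reference in simulation is one way to see the overrejection the paper targets. Function and variable names are illustrative.

```python
import numpy as np

def mlr_lr_stat(Y, X_unrestricted, X_restricted):
    """Gaussian LR statistic n*[ln det(S0) - ln det(S1)] for nested MLR models,
    where S0 and S1 are the restricted and unrestricted MLE error covariances.
    Y is n x m; each X_* is an n x k design matrix."""
    n = Y.shape[0]
    def mle_cov(X):
        B, *_ = np.linalg.lstsq(X, Y, rcond=None)
        U = Y - X @ B
        return (U.T @ U) / n
    S1 = mle_cov(X_unrestricted)
    S0 = mle_cov(X_restricted)
    return n * (np.linalg.slogdet(S0)[1] - np.linalg.slogdet(S1)[1])
```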

Relevance:

30.00%

Publisher:

Abstract:

It is well known that standard asymptotic theory is not valid, or is extremely unreliable, in models with identification problems or weak instruments [Dufour (1997, Econometrica), Staiger and Stock (1997, Econometrica), Wang and Zivot (1998, Econometrica), Stock and Wright (2000, Econometrica), Dufour and Jasiak (2001, International Economic Review)]. One possible way out consists in using a variant of the Anderson-Rubin (1949, Ann. Math. Stat.) procedure. The latter, however, allows one to build exact tests and confidence sets only for the full vector of the coefficients of the endogenous explanatory variables in a structural equation, which in general does not allow inference on individual coefficients. This problem may in principle be overcome by using projection techniques [Dufour (1997, Econometrica), Dufour and Jasiak (2001, International Economic Review)]. AR-type procedures are emphasized because they are robust to both weak instruments and instrument exclusion. However, these techniques can be implemented only by using costly numerical methods. In this paper, we provide a complete analytic solution to the problem of building projection-based confidence sets from Anderson-Rubin-type confidence sets. The solution involves the geometric properties of “quadrics”, and the resulting confidence sets can be viewed as extensions of the usual confidence intervals and ellipsoids. Only least squares techniques are required to build the confidence intervals. We also study by simulation how “conservative” projection-based confidence sets are. Finally, we illustrate the proposed methods by applying them to three different examples: the relationship between trade and growth in a cross-section of countries, returns to education, and a study of production functions in the U.S. economy.

Relevance:

30.00%

Publisher:

Abstract:

We discuss statistical inference problems associated with identification and testability in econometrics, and we emphasize the common nature of the two issues. After reviewing the relevant statistical notions, we consider in turn inference in nonparametric models and recent developments on weakly identified models (or weak instruments). We point out that many hypotheses for which test procedures are commonly proposed are not testable at all, while some frequently used econometric methods are fundamentally inappropriate for the models considered. Such situations lead to ill-defined statistical problems and are often associated with a misguided use of asymptotic distributional results. Concerning nonparametric hypotheses, we discuss three basic problems for which such difficulties occur: (1) testing a mean (or a moment) under (too) weak distributional assumptions; (2) inference under heteroskedasticity of unknown form; (3) inference in dynamic models with an unlimited number of parameters. Concerning weakly identified models, we stress that valid inference should be based on proper pivotal functions, a condition not satisfied by standard Wald-type methods based on standard errors, and we discuss recent developments in this field, mainly from the viewpoint of building valid tests and confidence sets. The techniques discussed include alternative proposed statistics, bounds, projection, split-sampling, conditioning, and Monte Carlo tests. The possibility of deriving a finite-sample distributional theory, robustness to the presence of weak instruments, and robustness to the specification of a model for endogenous explanatory variables are stressed as important criteria for assessing alternative procedures.

Relevance:

30.00%

Publisher:

Abstract:

We propose methods for testing hypotheses of non-causality at various horizons, as defined in Dufour and Renault (1998, Econometrica). We study in detail the case of VAR models and we propose linear methods based on running vector autoregressions at different horizons. While the hypotheses considered are nonlinear, the proposed methods only require linear regression techniques as well as standard Gaussian asymptotic distributional theory. Bootstrap procedures are also considered. For the case of integrated processes, we propose extended regression methods that avoid nonstandard asymptotics. The methods are applied to a VAR model of the U.S. economy.
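
A hedged sketch of the "autoregression at horizon h" idea: project y[t+h] on p lags of y and p lags of x and F-test the x coefficients. It ignores the HAC and bootstrap refinements needed in practice (the overlap induced by h > 1 makes the errors serially correlated), and the function and variable names are illustrative.

```python
import numpy as np
from scipy import stats

def horizon_causality_fstat(y, x, p=2, h=2):
    """F-statistic for H0: x does not help predict y at horizon h, from an OLS
    projection of y[t+h] on p lags of y and p lags of x (no HAC correction)."""
    T = len(y)
    rows_r, rows_u, targets = [], [], []
    for t in range(p - 1, T - h):
        ylags = [y[t - j] for j in range(p)]
        xlags = [x[t - j] for j in range(p)]
        rows_r.append([1.0] + ylags)            # restricted: constant and y lags only
        rows_u.append([1.0] + ylags + xlags)    # unrestricted: add the x lags
        targets.append(y[t + h])
    yv = np.asarray(targets)
    ssr = lambda W: np.sum((yv - np.asarray(W) @ np.linalg.lstsq(np.asarray(W), yv, rcond=None)[0]) ** 2)
    n, k = len(yv), 1 + 2 * p
    F = ((ssr(rows_r) - ssr(rows_u)) / p) / (ssr(rows_u) / (n - k))
    return F, stats.f.sf(F, p, n - k)
```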

Relevance:

30.00%

Publisher:

Abstract:

The technique of Monte Carlo (MC) tests [Dwass (1957), Barnard (1963)] provides an attractive method of building exact tests from statistics whose finite sample distribution is intractable but can be simulated (provided it does not involve nuisance parameters). We extend this method in two ways: first, by allowing for MC tests based on exchangeable, possibly discrete test statistics; second, by generalizing the method to statistics whose null distributions involve nuisance parameters (maximized MC tests, MMC). Simplified, asymptotically justified versions of the MMC method are also proposed, and it is shown that they provide a simple way of improving standard asymptotics and dealing with nonstandard asymptotics (e.g., unit root asymptotics). Parametric bootstrap tests may be interpreted as a simplified version of the MMC method (without the general validity properties of the latter).
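
A minimal sketch of the basic MC test when the null distribution is free of nuisance parameters: simulate N independent copies of the statistic under the null and use the usual randomized p-value, which is exact for continuous statistics. Names are illustrative.

```python
import numpy as np

def mc_test_pvalue(stat_obs, simulate_stat, N=99, rng=None):
    """simulate_stat(rng) draws one copy of the test statistic under the null."""
    rng = rng or np.random.default_rng()
    sims = np.array([simulate_stat(rng) for _ in range(N)])
    # count simulated statistics at least as extreme as the observed one
    ge = np.sum(sims >= stat_obs)
    return (ge + 1) / (N + 1)
```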

Relevance:

30.00%

Publisher:

Abstract:

Statistical tests in vector autoregressive (VAR) models are typically based on large-sample approximations, involving the use of asymptotic distributions or bootstrap techniques. After documenting that such methods can be very misleading even with fairly large samples, especially when the number of lags or the number of equations is not small, we propose a general simulation-based technique that allows one to control completely the level of tests in parametric VAR models. In particular, we show that maximized Monte Carlo tests [Dufour (2002)] can provide provably exact tests for such models, whether they are stationary or integrated. Applications to order selection and causality testing are considered as special cases. The technique developed is applied to quarterly and monthly VAR models of the U.S. economy, comprising income, money, interest rates and prices, over the period 1965-1996.
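
Building on the basic MC p-value sketched above, here is a hedged sketch of the maximized Monte Carlo idea: compute the simulated p-value at each candidate nuisance-parameter value and reject only if the maximum falls below the level. A grid stands in for the global optimization used in practice, and all names are illustrative.

```python
import numpy as np

def mmc_pvalue(stat_obs, simulate_stat_given, nuisance_grid, N=99, rng=None):
    """simulate_stat_given(theta, rng) draws the statistic under the null when the
    nuisance parameter equals theta; return the maximized simulated p-value."""
    rng = rng or np.random.default_rng()
    pvals = []
    for theta in nuisance_grid:
        sims = np.array([simulate_stat_given(theta, rng) for _ in range(N)])
        pvals.append((np.sum(sims >= stat_obs) + 1) / (N + 1))
    return max(pvals)   # reject H0 at level alpha only if this maximum <= alpha
```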

Relevance:

30.00%

Publisher:

Abstract:

We consider the problem of conducting inference on nonparametric high-frequency estimators without knowing their asymptotic variances. We prove that a multivariate subsampling method achieves this goal under general conditions that were not previously available in the literature. We suggest a procedure for a data-driven choice of the bandwidth parameters. Our simulation study indicates that the subsampling method is much more robust than the plug-in method based on the asymptotic expression for the variance. Importantly, the subsampling method reliably estimates the variability of the Two Scale estimator even when its parameters are chosen to minimize the finite sample Mean Squared Error; in contrast, the plug-in estimator substantially underestimates the sampling uncertainty. By construction, the subsampling method delivers estimates of the variance-covariance matrices that are always positive semi-definite. We use the subsampling method to study the dynamics of financial betas of six stocks on the NYSE. We document significant variation in betas within the year 2006, and find that tick data captures more variation in betas than data sampled at moderate frequencies such as every five or twenty minutes. To capture this variation we estimate a simple dynamic model for betas. The variance estimation is also important for the correction of the errors-in-variables bias in such models. We find that the bias corrections are substantial, and that betas are more persistent than the naive estimators would lead one to believe.
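
As a generic illustration of the subsampling principle (not the paper's multivariate high-frequency construction, which must handle irregular observation times and microstructure noise), the sketch below estimates the asymptotic variance of a sqrt(n)-consistent estimator from overlapping blocks; names are illustrative.

```python
import numpy as np

def subsampling_variance(data, estimator, b):
    """Estimate Var[sqrt(n)*(theta_hat - theta)] by recomputing the estimator on
    overlapping blocks of length b and rescaling the dispersion by b."""
    n = len(data)
    theta_full = estimator(data)
    block_estimates = np.array([estimator(data[i:i + b]) for i in range(n - b + 1)])
    return b * np.mean((block_estimates - theta_full) ** 2)
```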

Relevance:

30.00%

Publisher:

Abstract:

In this paper we deal with robust inference in heteroscedastic measurement error models. Rather than the normal distribution, we postulate a Student t distribution for the observed variables. Maximum likelihood estimates are computed numerically. Consistent estimation of the asymptotic covariance matrices of the maximum likelihood and generalized least squares estimators is also discussed. Three test statistics are proposed for testing hypotheses of interest, with asymptotic chi-square distributions that guarantee correct asymptotic significance levels. Results of simulations and an application to a real data set are also reported.
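
A simplified, hedged sketch of numerical maximum likelihood with Student-t errors in an ordinary linear regression; the paper's heteroscedastic measurement error model is substantially more involved, and the function name, fixed degrees of freedom, and parameterization here are illustrative assumptions.

```python
import numpy as np
from scipy import optimize, stats

def t_regression_mle(y, X, nu=4.0):
    """Numerical MLE of a linear regression with Student-t(nu) errors and scale sigma."""
    X = np.column_stack([np.ones(len(y)), X])
    def negloglik(params):
        beta, log_sigma = params[:-1], params[-1]
        resid = (y - X @ beta) / np.exp(log_sigma)
        # per-observation log density: log f_t(resid; nu) - log sigma
        return -(stats.t.logpdf(resid, df=nu) - log_sigma).sum()
    start = np.append(np.linalg.lstsq(X, y, rcond=None)[0], 0.0)   # OLS start, log sigma = 0
    res = optimize.minimize(negloglik, start, method="BFGS")
    return res.x[:-1], np.exp(res.x[-1])    # coefficient estimates, scale estimate
```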

Relevance:

30.00%

Publisher:

Abstract:

In this paper we obtain asymptotic expansions, up to order n^{-1/2} and under a sequence of Pitman alternatives, for the nonnull distribution functions of the likelihood ratio, Wald, score and gradient test statistics in the class of symmetric linear regression models. This is a wide class of models which encompasses the t model and several other symmetric distributions with longer-than-normal tails. The asymptotic distributions of all four statistics are obtained for testing a subset of regression parameters. Furthermore, in order to compare the finite-sample performance of these tests in this class of models, Monte Carlo simulations are presented. An empirical application to a real data set is considered for illustrative purposes.
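
For reference, the four statistics being compared can be written, for a simple null $H_0:\theta=\theta_0$ (the subset hypotheses considered in the paper replace $\theta_0$ by the restricted MLE and use the corresponding blocks of the score and information), as:

$$
\begin{aligned}
\mathrm{LR} &= 2\{\ell(\hat\theta) - \ell(\theta_0)\}, &
W &= (\hat\theta - \theta_0)^\top K(\hat\theta)\,(\hat\theta - \theta_0),\\
S_R &= U(\theta_0)^\top K(\theta_0)^{-1} U(\theta_0), &
S_T &= U(\theta_0)^\top (\hat\theta - \theta_0),
\end{aligned}
$$

where $\ell$ is the log-likelihood, $U = \partial\ell/\partial\theta$ is the score, and $K$ is the information matrix.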

Relevance:

30.00%

Publisher:

Abstract:

We propose a new method for fitting proportional hazards models with error-prone covariates. Regression coefficients are estimated by solving an estimating equation that is the average of the partial likelihood scores based on imputed true covariates. For the purpose of imputation, a linear spline model is assumed on the baseline hazard. We discuss consistency and asymptotic normality of the resulting estimators, and propose a stochastic approximation scheme to obtain the estimates. The algorithm is easy to implement, and reduces to the ordinary Cox partial likelihood approach when the measurement error has a degenerate distribution. Simulations indicate high efficiency and robustness. We consider the special case where error-prone replicates are available on the unobserved true covariates. As expected, increasing the number of replicates for the unobserved covariates increases efficiency and reduces bias. We illustrate the practical utility of the proposed method with an Eastern Cooperative Oncology Group clinical trial in which a genetic marker, c-myc expression level, is subject to measurement error.
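
As a simpler point of comparison (not the paper's imputed-score method), here is a hedged sketch of regression calibration for a Cox model with replicate measurements: replace the error-prone covariate by an approximation of E[X | replicates] and fit an ordinary Cox model. The lifelines-based code and all names are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def regression_calibration_cox(durations, events, W):
    """W: n x k matrix of error-prone replicates of a single true covariate X."""
    W = np.asarray(W, dtype=float)
    k = W.shape[1]
    w_bar = W.mean(axis=1)
    sigma_u2 = W.var(axis=1, ddof=1).mean()                  # within-subject (error) variance
    sigma_x2 = max(w_bar.var(ddof=1) - sigma_u2 / k, 1e-8)   # between-subject (true) variance
    shrink = sigma_x2 / (sigma_x2 + sigma_u2 / k)
    x_hat = w_bar.mean() + shrink * (w_bar - w_bar.mean())   # approx E[X | replicates]
    df = pd.DataFrame({"T": durations, "E": events, "x_hat": x_hat})
    return CoxPHFitter().fit(df, duration_col="T", event_col="E")
```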