9 results for nonparametric inference



Relevance: 30.00%

Abstract:

We study semiparametric two-step estimators which have the same structure as parametric doubly robust estimators in their second step. The key difference is that we do not impose any parametric restriction on the nuisance functions that are estimated in a first stage, but retain a fully nonparametric model instead. We call these estimators semiparametric doubly robust estimators (SDREs), and show that they possess superior theoretical and practical properties compared to generic semiparametric two-step estimators. In particular, our estimators have substantially smaller first-order bias, allow for a wider range of nonparametric first-stage estimates, rate-optimal choices of smoothing parameters and data-driven estimates thereof, and their stochastic behavior can be well-approximated by classical first-order asymptotics. SDREs exist for a wide range of parameters of interest, particularly in semiparametric missing data and causal inference models. We illustrate our method with a simulation exercise.
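A minimal sketch of the doubly robust idea behind such two-step estimators, not the paper's SDRE construction: an augmented inverse-probability-weighted (AIPW) estimate of a mean outcome under missing-at-random data, with nonparametric first-stage fits. The simulated data, the random-forest first stage, and the propensity-score trimming are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 2))
p = 1 / (1 + np.exp(-(0.5 * x[:, 0] - 0.3 * x[:, 1])))   # P(outcome observed | x)
d = rng.binomial(1, p)                                    # observation indicator
y = x[:, 0] + 0.5 * x[:, 1] ** 2 + rng.normal(size=n)     # outcome (used only where d == 1)

# Nonparametric first stage: outcome regression and propensity score.
mu = RandomForestRegressor(n_estimators=200, random_state=0).fit(x[d == 1], y[d == 1])
pi = RandomForestClassifier(n_estimators=200, random_state=0).fit(x, d)
m_hat = mu.predict(x)
e_hat = np.clip(pi.predict_proba(x)[:, 1], 0.05, 0.95)

# Doubly robust (AIPW) estimate of E[Y]: regression prediction plus an
# inverse-probability-weighted correction on the observed cases.
theta_dr = np.mean(m_hat + d * (y - m_hat) / e_hat)
print(theta_dr)
```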

Relevance: 20.00%

Abstract:

This paper deals with the testing of autoregressive conditional duration (ACD) models by gauging the distance between the parametric density and hazard rate functions implied by the duration process and their non-parametric estimates. We derive the asymptotic justification using the functional delta method for fixed and gamma kernels, and then investigate the finite-sample properties through Monte Carlo simulations. Although our tests display some size distortion, bootstrapping suffices to correct the size without compromising their excellent power. We show the practical usefulness of such testing procedures for the estimation of intraday volatility patterns.
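A minimal sketch of the density-distance idea only, not the paper's test: it compares a parametric duration density with a kernel estimate through an integrated squared difference. The exponential benchmark, the Gaussian kernel, and the bandwidth rule are assumptions; the paper works with the density and hazard implied by a fitted ACD model and with fixed and gamma kernels.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
durations = rng.exponential(scale=1.0, size=1000)   # stand-in for trade durations

# Parametric density under the assumed duration model.
lam = 1.0 / durations.mean()
grid = np.linspace(0.01, durations.max(), 500)
f_param = stats.expon.pdf(grid, scale=1.0 / lam)

# Nonparametric kernel density estimate on the same grid.
f_np = stats.gaussian_kde(durations, bw_method="silverman")(grid)

# Distance statistic: integrated squared difference, approximated on the grid.
step = grid[1] - grid[0]
distance = np.sum((f_np - f_param) ** 2) * step
print(distance)
```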

Relevance: 20.00%

Abstract:

This paper proposes unit root tests based on partially adaptive estimation. The proposed tests provide an intermediate class of inference procedures that are more efficient than the traditional OLS-based methods and simpler than unit root tests based on fully adaptive estimation using nonparametric methods. The limiting distribution of the proposed test is a combination of the standard normal and the traditional Dickey-Fuller (DF) distribution, and includes the traditional ADF test as a special case when the Gaussian density is used. Taking into account the well-documented heavy-tail behavior of economic and financial data, we consider unit root tests coupled with a class of partially adaptive M-estimators based on the Student-t distribution, which includes the normal distribution as a limiting case. Monte Carlo experiments indicate that, in the presence of heavy-tailed distributions or innovations contaminated by outliers, the proposed test is more powerful than the traditional ADF test. We apply the proposed test to several macroeconomic time series that have heavy-tailed distributions. The unit root hypothesis is rejected for U.S. real GNP, supporting the literature on transitory shocks in output. However, evidence against unit roots is not found for the real exchange rate and the nominal interest rate, even when heavy tails are taken into account.
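A minimal sketch of the estimation idea, assuming an ADF-type regression fitted by a Student-t pseudo-likelihood (one form of partially adaptive M-estimation) next to the standard OLS-based ADF test. It does not reproduce the paper's limiting distribution or critical values; the simulated series, fixed degrees of freedom, and optimizer are assumptions.

```python
import numpy as np
from scipy import optimize, stats
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
n = 300
eps = stats.t.rvs(df=3, size=n, random_state=1)      # heavy-tailed innovations
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.95 * y[t - 1] + eps[t]                  # near-unit-root AR(1)

# Standard OLS-based ADF test for comparison.
adf_stat, adf_pval, *_ = adfuller(y, regression="c")

# ADF-type regression  dy_t = c + rho * y_{t-1} + u_t  fitted by maximizing
# a Student-t pseudo-likelihood (a partially adaptive M-estimator).
dy, lag = np.diff(y), y[:-1]
X = np.column_stack([np.ones_like(lag), lag])

def neg_loglik(params, df=3.0):
    c, rho, log_s = params
    resid = dy - X @ np.array([c, rho])
    return -np.sum(stats.t.logpdf(resid / np.exp(log_s), df=df) - log_s)

res = optimize.minimize(neg_loglik, x0=np.zeros(3), method="BFGS")
print("OLS-based ADF statistic and p-value:", adf_stat, adf_pval)
print("Student-t M-estimate of rho:", res.x[1])
```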

Relevance: 20.00%

Abstract:

This paper develops nonparametric tests of independence between two stationary stochastic processes. The testing strategy boils down to gauging the closeness between the joint and the product of the marginal stationary densities. For that purpose, I take advantage of a generalized entropic measure so as to build a class of nonparametric tests of independence. Asymptotic normality and local power are derived using the functional delta method for kernels, whereas finite sample properties are investigated through Monte Carlo simulations.
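A minimal sketch of the testing strategy, not the paper's statistic or its asymptotics: kernel estimates of the joint density and of the product of the marginals, compared through a generalized (Tsallis-type) entropic divergence. It uses i.i.d. draws rather than the stationary processes the paper covers, and the kernel, evaluation grid, and entropic index q are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)            # dependent by construction

grid = np.linspace(-3.0, 3.0, 60)
gx, gy = np.meshgrid(grid, grid)
pts = np.vstack([gx.ravel(), gy.ravel()])

f_joint = stats.gaussian_kde(np.vstack([x, y]))(pts)
f_prod = stats.gaussian_kde(x)(pts[0]) * stats.gaussian_kde(y)(pts[1])
f_prod = np.maximum(f_prod, 1e-12)

# Tsallis-type (q = 2) divergence between the joint density and the product
# of the marginals: zero under independence, positive otherwise.
q = 2.0
cell = (grid[1] - grid[0]) ** 2
div = np.sum(f_joint * ((f_joint / f_prod) ** (q - 1) - 1.0)) * cell / (q - 1)
print(div)
```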

Relevance: 20.00%

Abstract:

This paper provides a systematic and unified treatment of the developments in the area of kernel estimation in econometrics and statistics. Both estimation and hypothesis testing issues are discussed for nonparametric and semiparametric regression models. A discussion of the choice of window width is also presented.
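A minimal sketch of the two issues the survey treats, kernel regression and window-width choice: a Nadaraya-Watson estimator with a leave-one-out cross-validated bandwidth. The Gaussian kernel, the simulated data, and the candidate bandwidth grid are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
x = rng.uniform(-2, 2, n)
y = np.sin(np.pi * x) + 0.3 * rng.normal(size=n)

def nw_fit(x0, x, y, h):
    """Nadaraya-Watson estimate of E[y | x = x0] with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def loo_cv(x, y, h):
    """Leave-one-out squared prediction error for window width h."""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    np.fill_diagonal(w, 0.0)
    return np.mean((y - (w @ y) / w.sum(axis=1)) ** 2)

bandwidths = np.linspace(0.05, 1.0, 20)
h_star = bandwidths[np.argmin([loo_cv(x, y, h) for h in bandwidths])]
grid = np.linspace(-2, 2, 100)
m_hat = nw_fit(grid, x, y, h_star)
print("cross-validated window width:", h_star)
```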

Relevance: 20.00%

Abstract:

This paper proposes a novel method to calculate tail risks incorporating risk-neutral information without depending on options data. Proceeding via a nonparametric approach, we derive a stochastic discount factor that correctly prices a chosen panel of stock returns. Under the assumption that state probabilities are homogeneous, we back out the risk-neutral distribution and calculate five primitive tail risk measures, all extracted from this risk-neutral probability. The final measure is then set as the first principal component of the preliminary measures. Using six Fama-French size and book-to-market portfolios to calculate our tail risk, we find that it has significant predictive power when forecasting market returns one month ahead, aggregate U.S. consumption and GDP one quarter ahead, and also macroeconomic activity indexes. Conditional Fama-MacBeth two-pass cross-sectional regressions reveal that our factor carries a positive risk premium when controlling for traditional factors.
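A minimal sketch of the pipeline, not the paper's implementation: a minimum-norm stochastic discount factor that prices a small simulated panel of portfolio returns, risk-neutral state probabilities backed out under equally likely states, and one primitive tail measure. The simulated returns, the minimum-norm choice of SDF, and the 5% cutoff are assumptions; the paper combines five such measures through their first principal component.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 500, 6                                         # states (months) and portfolios
R = 1.0 + 0.01 * rng.standard_t(df=5, size=(T, N))    # gross returns, heavy-tailed

# Minimum-norm SDF m solving the pricing equations (1/T) * R' m = 1.
m = R @ np.linalg.solve(R.T @ R, T * np.ones(N))

# Risk-neutral state probabilities: physical weights (1/T) tilted by m,
# clipped at zero and renormalized for this rough sketch.
q = np.maximum(m, 0.0)
q = q / q.sum()

# One primitive tail measure: risk-neutral probability of a market return
# below its physical 5th percentile (market proxied by the equal-weight portfolio).
mkt = R.mean(axis=1)
cutoff = np.quantile(mkt, 0.05)
tail_risk = q[mkt <= cutoff].sum()
print(tail_risk)
```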

Relevance: 20.00%

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups. We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity and propose a modification of the test statistic that provides a better heteroskedasticity correction in our simulations.
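A minimal sketch in the spirit of the argument above, not the authors' exact procedure: group-level averages of individual errors have variance proportional to 1/(group size), so placebo effects from control groups are rescaled before being compared with the single treated group's estimate. The two-period design, the pure 1/n variance model, and the simulated group sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_groups = 50
sizes = rng.integers(50, 2000, size=n_groups)   # unequal numbers of observations per group
treated = 0                                      # a single treated group

def did_group_effect(g, effect=0.0):
    """Group g's post-minus-pre change in mean outcome, built from individual noise."""
    pre = rng.normal(size=sizes[g]).mean()
    post = (rng.normal(size=sizes[g]) + effect).mean()
    return post - pre

# Estimated effect for the treated group (true effect set to zero: a placebo world).
tau_hat = did_group_effect(treated, effect=0.0)

# Placebo effects from control groups, rescaled so their variance matches the
# treated group's under Var(group mean) = sigma^2 / n_g.
controls = np.arange(1, n_groups)
placebos = np.array([did_group_effect(g) for g in controls])
rescaled = placebos * np.sqrt(sizes[controls] / sizes[treated])

# Permutation-style p-value: how extreme is the treated estimate among the placebos?
p_value = np.mean(np.abs(rescaled) >= np.abs(tau_hat))
print(p_value)
```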

Relevance: 20.00%

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we know how the heteroskedasticity is generated, which is the case when it is generated by variation in the number of observations per group. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative application of our method that relies on assumptions about stationarity and convergence of the moments of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference method to linear factor models when there are few treated groups. We also propose a permutation test for the synthetic control estimator that provides a better heteroskedasticity correction in our simulations than the test suggested by Abadie et al. (2010).
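A minimal sketch of a placebo (permutation-in-space) test for a synthetic control estimate, using the post/pre RMSPE ratio of Abadie et al. (2010) as the test statistic. The heteroskedasticity-robust modification proposed above is not implemented here, and the simulated panel and the simplex-constrained fit are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
J, T0, T1 = 20, 15, 5                                    # donor units, pre and post periods
Y = rng.normal(size=(J + 1, T0 + T1)).cumsum(axis=1)     # outcomes; unit 0 is "treated"

def rmspe_ratio(y_treat, Y_donors):
    """Post/pre RMSPE ratio for a synthetic control fitted on the pre periods."""
    n_d = Y_donors.shape[0]
    loss = lambda w: np.sum((y_treat[:T0] - w @ Y_donors[:, :T0]) ** 2)
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    w0 = np.full(n_d, 1.0 / n_d)
    w = minimize(loss, w0, bounds=[(0, 1)] * n_d, constraints=cons).x
    gap = y_treat - w @ Y_donors
    return np.sqrt(np.mean(gap[T0:] ** 2)) / np.sqrt(np.mean(gap[:T0] ** 2))

# Treat every unit in turn as the placebo "treated" unit and rank the real one.
ratios = np.array([rmspe_ratio(Y[i], np.delete(Y, i, axis=0)) for i in range(J + 1)])
p_value = np.mean(ratios >= ratios[0])
print(p_value)
```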