76 results for "Inference"
Abstract:
We complete the development of a testing ground for axioms of discrete stochastic choice. Our contribution here is to develop new posterior simulation methods for Bayesian inference, suitable for a class of prior distributions introduced by McCausland and Marley (2013). These prior distributions are joint distributions over various choice distributions over choice sets of different sizes. Since choice distributions over different choice sets can be mutually dependent, previous methods relying on conjugate prior distributions do not apply. We demonstrate by analyzing data from a previously reported experiment and report evidence for and against various axioms.
Inference for nonparametric high-frequency estimators with an application to time variation in betas
Abstract:
We consider the problem of conducting inference on nonparametric high-frequency estimators without knowing their asymptotic variances. We prove that a multivariate subsampling method achieves this goal under general conditions that were not previously available in the literature. We suggest a procedure for a data-driven choice of the bandwidth parameters. Our simulation study indicates that the subsampling method is much more robust than the plug-in method based on the asymptotic expression for the variance. Importantly, the subsampling method reliably estimates the variability of the Two-Scale estimator even when its parameters are chosen to minimize the finite-sample Mean Squared Error; in contrast, the plug-in estimator substantially underestimates the sampling uncertainty. By construction, the subsampling method delivers estimates of the variance-covariance matrices that are always positive semi-definite. We use the subsampling method to study the dynamics of financial betas of six stocks on the NYSE. We document significant variation in betas within the year 2006, and find that tick data captures more variation in betas than data sampled at moderate frequencies such as every five or twenty minutes. To capture this variation we estimate a simple dynamic model for betas. The variance estimation is also important for the correction of the errors-in-variables bias in such models. We find that the bias corrections are substantial, and that betas are more persistent than the naive estimators would lead one to believe.
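As a rough illustration of the idea behind subsampling (this sketch is ours, with simulated one-minute returns; it is not the authors' multivariate subsampling scheme or bandwidth choice): re-estimate a realized beta on non-overlapping blocks and rescale the dispersion of the block estimates to the full sample size.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated intraday returns standing in for tick or one-minute data:
# 20 trading days of 390 one-minute returns, true beta of 0.8.
n = 390 * 20
m = rng.normal(0.0, 0.001, n)                 # market returns
x = 0.8 * m + rng.normal(0.0, 0.001, n)       # stock returns

def realized_beta(x, m):
    # realized covariance divided by realized market variance
    return np.sum(x * m) / np.sum(m * m)

beta_hat = realized_beta(x, m)

# Subsampling idea: the estimator computed on a block of size b has (after
# scaling) the same limiting distribution as on the full sample, so the
# variance of the block estimates, rescaled by b / n, estimates Var(beta_hat).
n_blocks = 20
blocks = np.array_split(np.arange(n), n_blocks)
block_betas = np.array([realized_beta(x[idx], m[idx]) for idx in blocks])
b = n // n_blocks
var_beta_hat = np.var(block_betas, ddof=1) * b / n

print(f"beta_hat = {beta_hat:.3f}, subsampling s.e. = {np.sqrt(var_beta_hat):.4f}")
```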
Abstract:
In the context of multivariate linear regression (MLR) models, it is well known that commonly employed asymptotic test criteria are seriously biased towards overrejection. In this paper, we propose a general method for constructing exact tests of possibly nonlinear hypotheses on the coefficients of MLR systems. For the case of uniform linear hypotheses, we present exact distributional invariance results concerning several standard test criteria. These include Wilks' likelihood ratio (LR) criterion as well as trace and maximum root criteria. The normality assumption is not necessary for most of the results to hold. The implications for inference are twofold. First, invariance to nuisance parameters entails that the technique of Monte Carlo tests can be applied to all these statistics to obtain exact tests of uniform linear hypotheses. Second, the invariance property of the LR criterion is exploited to derive general nuisance-parameter-free bounds on the distribution of the LR statistic for arbitrary hypotheses. Even though it may be difficult to compute these bounds analytically, they can easily be simulated, hence yielding exact bounds Monte Carlo tests. Illustrative simulation experiments show that the bounds are sufficiently tight to provide conclusive results with a high probability. Our findings illustrate the value of the bounds as a tool to be used in conjunction with more traditional simulation-based test methods (e.g., the parametric bootstrap), which may be applied when the bounds are not conclusive.
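For readers unfamiliar with the Monte Carlo test technique invoked above, here is a generic sketch (a toy statistic, not the MLR criteria from the paper): when the null distribution of a statistic is free of nuisance parameters, an exact p-value can be built from a small number of simulated statistics.

```python
import numpy as np

def mc_pvalue(stat_obs, simulate_stat, n_rep=99, rng=None):
    """Monte Carlo p-value: rank the observed statistic among n_rep
    statistics simulated under the null hypothesis."""
    rng = rng or np.random.default_rng()
    sims = np.array([simulate_stat(rng) for _ in range(n_rep)])
    # With a pivotal statistic, this p-value yields an exact-level test.
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)

# Toy example: the "test statistic" is a sum of three squared standard normals.
rng = np.random.default_rng(1)
stat_obs = 7.3                                   # hypothetical observed value
p = mc_pvalue(stat_obs, lambda r: np.sum(r.normal(size=3) ** 2), rng=rng)
print(p)
```

The bounds Monte Carlo tests mentioned in the abstract apply the same device to a simulated bounding distribution rather than the exact null distribution.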
Abstract:
In this paper, we introduce a new approach for volatility modeling in discrete and continuous time. We follow the stochastic volatility literature by assuming that the variance is a function of a state variable. However, instead of assuming that the loading function is ad hoc (e.g., exponential or affine), we assume that it is a linear combination of the eigenfunctions of the conditional expectation (resp. infinitesimal generator) operator associated with the state variable in discrete (resp. continuous) time. Special examples are the popular log-normal and square-root models, where the eigenfunctions are the Hermite and Laguerre polynomials, respectively. The eigenfunction approach has at least six advantages: i) it is general, since any square-integrable function may be written as a linear combination of the eigenfunctions; ii) the orthogonality of the eigenfunctions leads to the traditional interpretations of linear principal components analysis; iii) the implied dynamics of the variance and squared return processes are ARMA and, hence, simple for forecasting and inference purposes; iv) more importantly, this generates fat tails for the variance and return processes; v) in contrast to popular models, the variance of the variance is a flexible function of the variance; vi) these models are closed under temporal aggregation.
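In our own notation (not necessarily the paper's), the structure described above can be written as follows, with x_t the state variable, phi_i the eigenfunctions of its conditional expectation operator, and lambda_i the corresponding eigenvalues:

```latex
\mathrm{E}\left[\phi_i(x_{t+1}) \mid x_t\right] = \lambda_i\,\phi_i(x_t),
\qquad
\sigma_t^2 = a_0 + \sum_{i=1}^{p} a_i\,\phi_i(x_t).
```

Because each eigenfunction follows an AR(1) in conditional mean, any finite linear combination of them, and hence the variance process, inherits ARMA-type dynamics, which is the forecasting and inference advantage listed as iii).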
Abstract:
This paper develops a model of money demand where the opportunity cost of holding money is subject to regime changes. The regimes are fully characterized by the mean and variance of inflation and are assumed to be the result of alternative government policies. Agents are unable to directly observe whether government actions are indeed consistent with the inflation rate targeted as part of a stabilization program, but they can construct probability inferences on the basis of available observations of inflation and money growth. Government announcements are assumed to provide agents with additional, possibly truthful, information regarding the regime. This specification is estimated and tested using data from the Israeli and Argentine high-inflation periods. Results indicate that the successful stabilization program implemented in Israel in July 1985 was more credible than either the earlier Israeli attempt in November 1984 or the Argentine programs. The government's signaling might substantially simplify the inference problem and increase the speed of learning on the part of the agents; however, under certain conditions, it might increase the volatility of inflation. After the introduction of an inflation stabilization plan, the welfare gains from a temporary increase in real balances might be high enough to induce agents to raise their real balances in the short term, even if they are uncertain about the nature of government policy and the eventual outcome of the stabilization attempt. Statistically, the model restrictions cannot be rejected at the 1% significance level.
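A stylized version of the agents' inference problem (our toy numbers, not the estimated specification in the paper): two inflation regimes with known means and variances, with beliefs updated by Bayes' rule as inflation observations arrive.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical regimes: (mean, std) of monthly inflation under a credible
# stabilization and under continued high inflation.
regimes = {"stabilization": (0.01, 0.005), "high inflation": (0.08, 0.03)}
beliefs = {"stabilization": 0.5, "high inflation": 0.5}   # prior probabilities

def update(beliefs, obs):
    # Bayes' rule: posterior is proportional to prior times the likelihood
    # of the observed inflation rate under each regime.
    post = {k: p * norm.pdf(obs, *regimes[k]) for k, p in beliefs.items()}
    total = sum(post.values())
    return {k: v / total for k, v in post.items()}

for inflation_obs in [0.02, 0.015, 0.012]:     # made-up post-program data
    beliefs = update(beliefs, inflation_obs)
print(beliefs)
```

A truthful government announcement can be folded in as an additional likelihood term, which is one way to see how signaling can speed up learning in this kind of setup.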
Abstract:
The GARCH and Stochastic Volatility paradigms are often brought into conflict as two competing views of the appropriate conditional variance concept: conditional variance given past values of the same series, or conditional variance given a larger past information set (possibly including unobservable state variables). The main thesis of this paper is that, since in general the econometrician has no idea about something like a structural level of disaggregation, a well-written volatility model should be specified in such a way that one is always allowed to reduce the information set without invalidating the model. In this respect, the debate between observable past information (in the GARCH spirit) and unobservable conditioning information (in the state-space spirit) is irrelevant. In this paper, we stress a square-root autoregressive stochastic volatility (SR-SARV) model which remains true to the GARCH paradigm of ARMA dynamics for squared innovations but weakens the GARCH structure in order to obtain the required robustness properties with respect to various kinds of aggregation. It is shown that the lack of robustness of the usual GARCH setting is due to two very restrictive assumptions: perfect linear correlation between squared innovations and the conditional variance on the one hand, and a linear relationship between the conditional variance of the future conditional variance and the squared conditional variance on the other hand. By relaxing these assumptions, thanks to a state-space setting, we obtain aggregation results without giving up the conditional variance concept (and related leverage effects), as is the case for the recently suggested weak GARCH model, which obtains aggregation results by replacing conditional expectations with linear projections on symmetric past innovations. Moreover, unlike the weak GARCH literature, we are able to define multivariate models, including higher-order dynamics and risk premiums (in the spirit of GARCH(p,p) and GARCH-in-mean), and to derive conditional moment restrictions well suited for statistical inference. Finally, we are able to characterize the exact relationships between our SR-SARV models (including higher-order dynamics, leverage effects and in-mean effects), usual GARCH models and continuous-time stochastic volatility models, so that previous results on the aggregation of weak GARCH and on continuous-time GARCH modeling can be recovered in our framework.
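In generic notation (ours), the contrast drawn above can be sketched as follows: GARCH(1,1) makes the conditional variance an exact function of past observables, while an SR-SARV-type model restricts only the first two conditional moments of a latent variance factor f_t given a possibly larger information set J_t.

```latex
\text{GARCH}(1,1):\quad
\sigma_{t+1}^2 = \omega + \alpha\,\varepsilon_t^2 + \beta\,\sigma_t^2,
\qquad \varepsilon_t = \sigma_t z_t ;
\\[4pt]
\text{SR-SARV}(1):\quad
\operatorname{Var}(\varepsilon_{t+1} \mid J_t) = f_t,
\qquad
\mathrm{E}[f_{t+1} \mid J_t] = \omega + \gamma\, f_t .
```

GARCH(1,1) then appears as the degenerate case in which f_t is perfectly linearly correlated with the squared innovation; relaxing that restriction is what delivers the aggregation robustness discussed above.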
Abstract:
Recent work shows that a low correlation between the instruments and the included variables leads to serious inference problems. We extend the local-to-zero analysis of models with weak instruments to models with estimated instruments and regressors and with higher-order dependence between instruments and disturbances. This makes the framework applicable to linear models with expectation variables that are estimated non-parametrically. Two examples of such models are the risk-return trade-off in finance and the impact of inflation uncertainty on real economic activity. Results show that inference based on Lagrange Multiplier (LM) tests is more robust to weak instruments than Wald-based inference. Using LM confidence intervals leads us to conclude that no statistically significant risk premium is present in returns on the S&P 500 index, in excess holding yields between 6-month and 3-month Treasury bills, or in yen-dollar spot returns.
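A minimal sketch of identification-robust inference by test inversion, on simulated data with a deliberately weak instrument (we use an Anderson-Rubin-type statistic for brevity rather than the LM statistic studied in the paper): the confidence set collects every parameter value the robust test fails to reject.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
z = rng.normal(size=n)                        # instrument
x = 0.1 * z + rng.normal(size=n)              # weakly instrumented regressor
y = 0.5 * x + rng.normal(size=n)              # outcome, true coefficient 0.5

def robust_stat(beta0):
    # Under H0: beta = beta0 the residual y - x*beta0 is uncorrelated with z,
    # so n times the squared sample correlation is roughly chi2(1).
    u = y - x * beta0
    r = np.corrcoef(z, u)[0, 1]
    return n * r ** 2

grid = np.linspace(-5.0, 5.0, 1001)
crit = 3.84                                   # chi2(1) 5% critical value
conf_set = [b for b in grid if robust_stat(b) <= crit]
print(min(conf_set), max(conf_set))
```

With weak instruments this set can be very wide or even unbounded, which is exactly the information a standard Wald interval hides.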
Abstract:
This paper proposes exact inference methods (tests and confidence regions) for linear regression models with autocorrelated errors following a second-order autoregressive [AR(2)] process, which may be nonstationary. The proposed approach generalizes the one described in Dufour (1990) for a regression model with AR(1) errors and involves three steps. First, an exact confidence region is built for the vector of autoregressive coefficients (φ). This region is obtained by inverting tests of independence of the errors, applied to a transformed form of the model, against alternatives of dependence at lags one and two. Second, exploiting the duality between tests and confidence regions (test inversion), a joint confidence region is obtained for the vector φ and a vector of interest M consisting of linear combinations of the regression coefficients of the model. Third, using a projection method, "marginal" confidence intervals as well as exact bound tests are obtained for the components of M. These methods are applied to models of the U.S. money stock (M2) and price level (implicit GNP deflator).
Abstract:
This paper employs the one-sector Real Business Cycle model as a testing ground for four different procedures to estimate Dynamic Stochastic General Equilibrium (DSGE) models. The procedures are: 1) Maximum Likelihood, with and without measurement errors and incorporating Bayesian priors, 2) Generalized Method of Moments, 3) Simulated Method of Moments, and 4) Indirect Inference. Monte Carlo analysis indicates that all procedures deliver reasonably good estimates under the null hypothesis. However, there are substantial differences in statistical and computational efficiency in the small samples currently available to estimate DSGE models. GMM and SMM appear to be more robust to misspecification than the alternative procedures. The implications of the stochastic singularity of DSGE models for each estimation method are fully discussed.
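To fix ideas about one of the four procedures, here is a toy Simulated Method of Moments sketch in which an AR(1) stands in for the structural model; the moments, sample sizes and weighting are ours, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def simulate(rho, T, seed):
    # Toy "structural model": a Gaussian AR(1) with unit-variance innovations.
    rng = np.random.default_rng(seed)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + rng.normal()
    return y

def moments(y):
    # Moments to match: variance and first-order autocorrelation.
    return np.array([np.var(y), np.corrcoef(y[:-1], y[1:])[0, 1]])

data = simulate(0.7, 500, seed=0)             # pretend these are the data
m_data = moments(data)

def smm_objective(rho):
    # Distance between simulated and observed moments (identity weighting);
    # the same simulation seed is reused across rho to keep the objective smooth.
    diff = moments(simulate(rho, 5000, seed=42)) - m_data
    return diff @ diff

rho_hat = minimize_scalar(smm_objective, bounds=(-0.95, 0.95), method="bounded").x
print(rho_hat)
```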
Abstract:
This paper presents a new theory of random consumer demand. The primitive is a collection of probability distributions, rather than a binary preference. Various assumptions constrain these distributions, including analogues of common assumptions about preferences such as transitivity, monotonicity and convexity. Two results establish a complete representation of theoretically consistent random demand. The purpose of this theory of random consumer demand is its application to empirical consumer demand problems. To this end, the theory has several desirable properties. It is intrinsically stochastic, so the econometrician can apply it directly without adding extrinsic randomness in the form of residuals. Random demand is parsimoniously represented by a single function on the consumption set. Finally, we have a practical method for statistical inference based on the theory, described in McCausland (2004), a companion paper.
Abstract:
We propose an alternative parameterization of stationary regular finite-state Markov chains, and a decomposition of the parameter into time-reversible and time-irreversible parts. We demonstrate some useful properties of the decomposition, and propose an index for a certain type of time irreversibility. Two empirical examples illustrate the use of the proposed parameter, decomposition and index. One involves observed states; the other, latent states.
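As a rough illustration of what such a decomposition can look like (a generic construction on a made-up chain, not necessarily the parameterization or index proposed in the paper): split the stationary flow matrix into a symmetric part, which satisfies detailed balance, and an antisymmetric part, which captures time irreversibility.

```python
import numpy as np

def stationary_dist(P):
    # Left eigenvector of P associated with eigenvalue 1, normalized to sum to 1.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return pi / pi.sum()

P = np.array([[0.1, 0.6, 0.3],     # made-up transition matrix
              [0.2, 0.2, 0.6],
              [0.5, 0.3, 0.2]])
pi = stationary_dist(P)

F = pi[:, None] * P                # stationary flows: F[i, j] = pi_i * P[i, j]
S = 0.5 * (F + F.T)                # time-reversible part (detailed balance holds)
A = 0.5 * (F - F.T)                # time-irreversible part (net circulation)

irreversibility = np.abs(A).sum()  # zero if and only if the chain is reversible
print(irreversibility)
```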
Abstract:
McCausland (2004a) describes a new theory of random consumer demand. Theoretically consistent random demand can be represented by a "regular" "L-utility" function on the consumption set X. The present paper is about Bayesian inference for regular L-utility functions. We express prior and posterior uncertainty in terms of distributions over the infinite-dimensional parameter set of a flexible functional form. We propose a class of proper priors on the parameter set. The priors are flexible, in the sense that they put positive probability in the neighborhood of any L-utility function that is regular on a large subset bar(X) of X; and regular, in the sense that they assign zero probability to the set of L-utility functions that are irregular on bar(X). We propose methods of Bayesian inference for an environment with indivisible goods, leaving the more difficult case of infinitely divisible goods for another paper. We analyse individual choice data from a consumer experiment described in Harbaugh et al. (2001).
Abstract:
We introduce a procedure to infer the repeated-game strategies that generate actions in experimental choice data. We apply the technique to a set of experiments where human subjects play a repeated Prisoner's Dilemma. The technique suggests that two types of strategies underlie the data.
Abstract:
In this paper, we propose exact inference procedures for asset pricing models that can be formulated in the framework of a multivariate linear regression (CAPM), allowing for stable error distributions. The normality assumption on the distribution of stock returns is usually rejected in empirical studies, due to excess kurtosis and asymmetry. To model such data, we propose a comprehensive statistical approach which allows for alternative, possibly asymmetric, heavy-tailed distributions without the use of large-sample approximations. The methods suggested are based on Monte Carlo test techniques. Goodness-of-fit tests are formally incorporated to ensure that the error distributions considered are empirically sustainable, and from these tests exact confidence sets for the unknown tail and asymmetry parameters of the stable error distribution are derived. Tests for the efficiency of the market portfolio (zero intercepts) that explicitly allow for the presence of (unknown) nuisance parameters in the stable error distribution are derived. The methods proposed are applied to monthly returns on 12 portfolios of the New York Stock Exchange over the period 1926-1995 (five-year subperiods). We find that stable, possibly skewed, distributions provide a statistically significant improvement in goodness-of-fit and lead to fewer rejections of the efficiency hypothesis.
Abstract:
In this paper, we use identification-robust methods to assess the empirical adequacy of a New Keynesian Phillips Curve (NKPC) equation. We focus on the Gali and Gertler (1999) specification, using both U.S. and Canadian data. Two variants of the model are studied: one based on a rational-expectations assumption, and a modification of the latter which consists in using survey data on inflation expectations. The results based on these two specifications exhibit sharp differences concerning: (i) identification difficulties, (ii) backward-looking behavior, and (iii) the frequency of price adjustments. Overall, we find that there is some support for the hybrid NKPC for the U.S., whereas the model is not suited to Canada. Our findings underscore the need for employing identification-robust inference methods in the estimation of expectations-based dynamic macroeconomic relations.