34 results for "qualitative and quantitative methods"


Abstract:

We provide a theoretical framework to explain the empirical finding that estimated betas are sensitive to the sampling interval, even when continuously compounded returns are used. We suppose that stock prices have both permanent and transitory components. The permanent component is a standard geometric Brownian motion, while the transitory component is a stationary Ornstein-Uhlenbeck process. The discrete-time representation of the beta depends on the sampling interval and on two components labelled "permanent" and "transitory" betas. We show that if no transitory component is present in stock prices, then no sampling-interval effect occurs. However, the presence of a transitory component implies that the beta is an increasing (decreasing) function of the sampling interval for more (less) risky assets. In our framework, an asset is labelled more risky if its "permanent beta" is greater than its "transitory beta", and less risky in the opposite case. Simulations show that our theoretical results provide good approximations for the means and standard deviations of estimated betas in small samples. Our results can be viewed as indirect evidence for the presence of a transitory component in stock prices, as proposed by Fama and French (1988) and Poterba and Summers (1988).
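
As a rough formalization of the price dynamics described above (the notation is ours, not the authors'), write the log price as the sum of the two components:

\[
p_t = q_t + u_t, \qquad
dq_t = \mu\,dt + \sigma\,dW_t, \qquad
du_t = -\kappa u_t\,dt + \sigma_u\,dZ_t,
\]

where $q_t$ is the permanent component (a Brownian motion with drift, so the permanent price component is geometric Brownian motion), $u_t$ is the transitory Ornstein-Uhlenbeck component with mean-reversion speed $\kappa > 0$, and $W$, $Z$ are Wiener processes. A sampling-interval effect can then arise because the autocovariances of $u_t$ over an interval $h$ decay like $e^{-\kappa h}$, so discretely sampled returns mix the two components in interval-dependent proportions.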

Abstract:

The GARCH and stochastic volatility paradigms are often brought into conflict as two competing views of the appropriate conditional variance concept: conditional variance given past values of the same series, or conditional variance given a larger past information set (possibly including unobservable state variables). The main thesis of this paper is that, since the econometrician generally has no idea about a structural level of disaggregation, a well-written volatility model should be specified in such a way that one is always allowed to reduce the information set without invalidating the model. In this respect, the debate between observable past information (in the GARCH spirit) and unobservable conditioning information (in the state-space spirit) is irrelevant. In this paper, we stress a square-root autoregressive stochastic volatility (SR-SARV) model which remains true to the GARCH paradigm of ARMA dynamics for squared innovations but weakens the GARCH structure in order to obtain the required robustness properties with respect to various kinds of aggregation. It is shown that the lack of robustness of the usual GARCH setting is due to two very restrictive assumptions: perfect linear correlation between squared innovations and the conditional variance on the one hand, and a linear relationship between the conditional variance of the future conditional variance and the squared conditional variance on the other. By relaxing these assumptions through a state-space setting, we obtain aggregation results without renouncing the conditional variance concept (and related leverage effects), as is the case for the recently suggested weak GARCH model, which obtains aggregation results by replacing conditional expectations with linear projections on symmetric past innovations. Moreover, unlike the weak GARCH literature, we are able to define multivariate models, including higher-order dynamics and risk premiums (in the spirit of GARCH(p,p) and GARCH-in-mean), and to derive conditional moment restrictions well suited for statistical inference. Finally, we characterize the exact relationships between our SR-SARV models (including higher-order dynamics, leverage effects and in-mean effects), usual GARCH models and continuous-time stochastic volatility models, so that previous results on aggregation of weak GARCH and on continuous-time GARCH modeling can be recovered in our framework.
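
To fix ideas, here is a minimal sketch (in our own notation) of how the SR-SARV weakening differs from standard GARCH. A GARCH(1,1) model ties the conditional variance deterministically to observables,

\[
\varepsilon_t = \sigma_t \eta_t, \qquad
\sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2,
\]

whereas an SR-SARV-type model lets the conditional variance be a latent AR(1) state,

\[
E[\varepsilon_{t+1}^2 \mid J_t] = f_t, \qquad
f_t = \omega + \gamma f_{t-1} + \nu_t,
\]

with $J_t$ an information set containing the latent state $f_t$. Because $f_t$ is no longer an exact function of past squared innovations, marginalizing out part of the information set (for example under temporal aggregation) leaves the model class intact, which is the robustness property the abstract emphasizes.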

Abstract:

We propose two axiomatic theories of cost sharing with the common premise that agents demand comparable (though perhaps different) commodities and are responsible for their own demand. Under partial responsibility, the agents are not responsible for the asymmetries of the cost function: two agents consuming the same amount of output always pay the same price; this holds true under full responsibility only if the cost function is symmetric in all individual demands. If the cost function is additively separable, each agent pays her stand-alone cost under full responsibility; this holds true under partial responsibility only if, in addition, the cost function is symmetric. By generalizing Moulin and Shenker's (1999) Distributivity axiom to cost-sharing methods for heterogeneous goods, we identify in each of our two theories a different serial method. The subsidy-free serial method (Moulin, 1995) is essentially the only distributive method meeting Ranking and Dummy. The cross-subsidizing serial method (Sprumont, 1998) is the only distributive method satisfying Separability and Strong Ranking. Finally, we propose an alternative characterization of the latter method based on a strengthening of Distributivity.
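
For orientation, the serial idea being generalized can be stated for a single homogeneous good (the serial rule of Moulin and Shenker): order the demands $q_1 \le \dots \le q_n$, set $s^0 = 0$ and $s^k = q_1 + \dots + q_k + (n-k)\,q_k$; agent $i$ then pays

\[
x_i = \sum_{k=1}^{i} \frac{C(s^k) - C(s^{k-1})}{n - k + 1}.
\]

The subsidy-free and cross-subsidizing serial methods discussed above extend this construction to heterogeneous goods in two different ways.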

Abstract:

We study the problem of testing the error distribution in a multivariate linear regression (MLR) model. The tests are functions of appropriately standardized multivariate least-squares residuals whose distribution is invariant to the unknown cross-equation error covariance matrix. Empirical multivariate skewness and kurtosis criteria are then compared with simulation-based estimates of their expected values under the hypothesized distribution. Special cases considered include testing multivariate normal, Student t, normal mixture and stable error models. In the Gaussian case, finite-sample versions of the standard multivariate skewness and kurtosis tests are derived. To do this, we exploit simple, double and multi-stage Monte Carlo test methods. For non-Gaussian distribution families involving nuisance parameters, confidence sets are derived for the nuisance parameters and the error distribution. The procedures considered are evaluated in a small simulation experiment. Finally, the tests are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995.
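
As an illustration of the general approach (not the authors' implementation), the sketch below builds a simple Monte Carlo test of multivariate normality from Mardia-type skewness and kurtosis of the least-squares residuals. Because the residuals are standardized through their own covariance matrix, the statistics are invariant to the error covariance, so the null distribution can be simulated with identity-covariance Gaussian errors; the data-generating design and all names are illustrative.

```python
import numpy as np

def mardia_stats(U):
    """Mardia-type multivariate skewness and kurtosis of the rows of U (n x p)."""
    n, p = U.shape
    Uc = U - U.mean(axis=0)
    S = (Uc.T @ Uc) / n                      # MLE covariance of the rows
    D = Uc @ np.linalg.inv(S) @ Uc.T         # Mahalanobis cross-products
    skew = (D ** 3).mean()                   # b_{1,p}
    kurt = (np.diag(D) ** 2).mean()          # b_{2,p}; approx. p(p+2) under normality
    return skew, kurt

def mc_pvalue(stat0, sims):
    """Monte Carlo p-value: proportion of simulated stats >= observed."""
    return (1 + np.sum(np.asarray(sims) >= stat0)) / (len(sims) + 1)

rng = np.random.default_rng(0)
n, k, p, N = 120, 3, 5, 999                  # sample size, regressors, equations, MC draws
X = np.column_stack([np.ones(n), rng.standard_normal((n, k - 1))])
Y = X @ rng.standard_normal((k, p)) + rng.standard_normal((n, p))   # toy MLR data

M = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)   # residual-maker matrix
skew0, kurt0 = mardia_stats(M @ Y)

# Invariance lets us simulate the null with N(0, I) errors and the same X.
sims = np.array([mardia_stats(M @ rng.standard_normal((n, p))) for _ in range(N)])
print("skewness test p-value:", mc_pvalue(skew0, sims[:, 0]))
print("kurtosis test p-value:",
      mc_pvalue(abs(kurt0 - p * (p + 2)), np.abs(sims[:, 1] - p * (p + 2))))
```

This corresponds to the "simple" Monte Carlo case; the double and multi-stage versions mentioned in the abstract refine the same idea.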

Abstract:

We discuss statistical inference problems associated with identification and testability in econometrics, and we emphasize the common nature of the two issues. After reviewing the relevant statistical notions, we consider in turn inference in nonparametric models and recent developments on weakly identified models (or weak instruments). We point out that many hypotheses for which test procedures are commonly proposed are not testable at all, while some frequently used econometric methods are fundamentally inappropriate for the models considered. Such situations lead to ill-defined statistical problems and are often associated with a misguided use of asymptotic distributional results. Concerning nonparametric hypotheses, we discuss three basic problems for which such difficulties occur: (1) testing a mean (or a moment) under (too) weak distributional assumptions; (2) inference under heteroskedasticity of unknown form; (3) inference in dynamic models with an unlimited number of parameters. Concerning weakly identified models, we stress that valid inference should be based on proper pivotal functions, a condition not satisfied by standard Wald-type methods based on standard errors, and we discuss recent developments in this field, mainly from the viewpoint of building valid tests and confidence sets. The techniques discussed include alternative proposed statistics, bounds, projection, split-sampling, conditioning and Monte Carlo tests. The possibility of deriving a finite-sample distributional theory, robustness to the presence of weak instruments, and robustness to the specification of a model for endogenous explanatory variables are stressed as important criteria for assessing alternative procedures.
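
A classical example of a pivotal function in the weak-instrument setting (given here for illustration; notation ours) is the Anderson-Rubin statistic. In the linear IV model $y = Y\beta + u$ with instrument matrix $Z$ ($n \times k$),

\[
AR(\beta_0) = \frac{(y - Y\beta_0)'\, P_Z\, (y - Y\beta_0)/k}{(y - Y\beta_0)'\, M_Z\, (y - Y\beta_0)/(n-k)},
\qquad P_Z = Z(Z'Z)^{-1}Z',\quad M_Z = I - P_Z,
\]

is distributed as $F(k,\, n-k)$ under $H_0 : \beta = \beta_0$ with i.i.d. Gaussian errors, regardless of how weak the instruments are; Wald-type statistics based on standard errors have no such property.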

Abstract:

This paper employs the one-sector Real Business Cycle model as a testing ground for four different procedures to estimate Dynamic Stochastic General Equilibrium (DSGE) models: (1) Maximum Likelihood, with and without measurement errors and incorporating Bayesian priors; (2) Generalized Method of Moments (GMM); (3) Simulated Method of Moments (SMM); and (4) Indirect Inference. Monte Carlo analysis indicates that all procedures deliver reasonably good estimates under the null hypothesis. However, there are substantial differences in statistical and computational efficiency in the small samples currently available to estimate DSGE models. GMM and SMM appear to be more robust to misspecification than the alternative procedures. The implications of the stochastic singularity of DSGE models for each estimation method are fully discussed.
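
As a toy illustration of one of these procedures, the sketch below implements Simulated Method of Moments on a simple AR(1) law of motion, standing in for the solved DSGE model; the moment choice, parameter bounds and optimizer are our assumptions, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

def simulate(theta, e):
    """AR(1) path y_t = rho*y_{t-1} + sigma*e_t driven by fixed shocks e."""
    rho, sigma = theta
    y = np.zeros(len(e))
    for t in range(1, len(e)):
        y[t] = rho * y[t - 1] + sigma * e[t]
    return y

def moments(y):
    """Moments to match: variance and first-order autocovariance."""
    return np.array([y.var(), np.cov(y[1:], y[:-1])[0, 1]])

def smm_objective(theta, m_data, e_sim):
    if not (-0.99 < theta[0] < 0.99 and theta[1] > 0.01):
        return 1e10                                # crude parameter bounds
    g = moments(simulate(theta, e_sim)) - m_data   # moment gap
    return g @ g                                   # identity weighting matrix

rng = np.random.default_rng(0)
y_data = simulate((0.8, 1.0), rng.standard_normal(200))   # "observed" data
e_sim = rng.standard_normal(2000)       # simulation shocks held fixed across evaluations
res = minimize(smm_objective, x0=np.array([0.5, 0.5]),
               args=(moments(y_data), e_sim), method="Nelder-Mead")
print("SMM estimate of (rho, sigma):", res.x)
```

Holding the simulation shocks fixed across objective evaluations is the standard device that makes the SMM criterion deterministic in the parameters.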

Abstract:

A group of agents participates in a cooperative enterprise producing a single good. Each participant contributes a particular type of input, and output is nondecreasing in these contributions. How should output be shared? We analyze the implications of the axiom of Group Monotonicity: if a group of agents simultaneously decrease their input contributions, not all of them should receive a higher share of output. We show that, in combination with other more familiar axioms, this condition pins down a very small class of methods, which we dub nearly serial.
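
In symbols (notation ours): letting $\varphi_i(x)$ denote agent $i$'s share at input profile $x$, Group Monotonicity requires that for any group $S$ and any profiles with $x_i' < x_i$ for all $i \in S$ and $x_j' = x_j$ for $j \notin S$,

\[
\varphi_i(x') \le \varphi_i(x) \quad \text{for at least one } i \in S,
\]

that is, a coordinated reduction of inputs cannot strictly benefit every member of the reducing group.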

Abstract:

The technique of Monte Carlo (MC) tests [Dwass (1957), Barnard (1963)] provides an attractive method of building exact tests from statistics whose finite-sample distribution is intractable but can be simulated (provided it does not involve nuisance parameters). We extend this method in two ways: first, by allowing for MC tests based on exchangeable, possibly discrete, test statistics; second, by generalizing the method to statistics whose null distributions involve nuisance parameters (maximized MC tests, MMC). Simplified, asymptotically justified versions of the MMC method are also proposed, and it is shown that they provide a simple way of improving standard asymptotics and dealing with nonstandard asymptotics (e.g., unit-root asymptotics). Parametric bootstrap tests may be interpreted as a simplified version of the MMC method (without the general validity properties of the latter).
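
A minimal sketch of the basic MC test (our code, including a randomized tie-breaking device that keeps the test exact for discrete statistics); the maximized (MMC) variant would further maximize this p-value over the nuisance-parameter space:

```python
import numpy as np

def mc_test(stat0, simulate_stat, N=99, rng=None):
    """Exact Monte Carlo p-value (Dwass 1957, Barnard 1963):
    compare the observed statistic with N simulated draws under the null."""
    rng = rng or np.random.default_rng()
    sims = np.array([simulate_stat(rng) for _ in range(N)])
    # Randomized tie-breaking handles discrete statistics.
    u0, u = rng.uniform(), rng.uniform(size=N)
    ge = (sims > stat0) | ((sims == stat0) & (u >= u0))
    return (1 + ge.sum()) / (N + 1)

# Example: test H0: data ~ N(0, 1) using the absolute sample mean.
rng = np.random.default_rng(42)
x = rng.normal(0.3, 1.0, size=50)            # true mean 0.3, so H0 is false
p = mc_test(abs(x.mean()),
            lambda g: abs(g.standard_normal(50).mean()), N=999, rng=rng)
print("MC p-value:", p)
```

With N chosen so that (N + 1) times the level is an integer, rejecting when this p-value is at or below the level gives a test with exactly the nominal size.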

Abstract:

Statistical tests in vector autoregressive (VAR) models are typically based on large-sample approximations, involving the use of asymptotic distributions or bootstrap techniques. After documenting that such methods can be very misleading even with fairly large samples, especially when the number of lags or the number of equations is not small, we propose a general simulation-based technique that allows one to completely control the level of tests in parametric VAR models. In particular, we show that maximized Monte Carlo tests [Dufour (2002)] can provide provably exact tests for such models, whether they are stationary or integrated. Applications to order selection and causality testing are considered as special cases. The technique developed is applied to quarterly and monthly VAR models of the U.S. economy, comprising income, money, interest rates and prices, over the period 1965-1996.
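
Continuing the sketch from the previous abstract, a maximized MC test takes the largest MC p-value over the nuisance parameters of the VAR (here a simple grid; the grid, names and signatures are illustrative only):

```python
import numpy as np

def mmc_pvalue(stat0, simulate_stat, theta_grid, N=99, seed=0):
    """Maximized Monte Carlo p-value: the test rejects at level alpha
    iff the p-value maximized over the nuisance-parameter space is <= alpha."""
    pmax = 0.0
    for theta in theta_grid:                  # e.g., candidate VAR coefficient values
        rng = np.random.default_rng(seed)     # common random numbers across theta
        sims = np.array([simulate_stat(theta, rng) for _ in range(N)])
        pmax = max(pmax, (1 + np.sum(sims >= stat0)) / (N + 1))
    return pmax
```

Because the p-value is maximized over every admissible nuisance value, the resulting test is valid (level-controlled) whatever the true nuisance parameter, which is what makes it provably exact for stationary or integrated VARs.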

Abstract:

In this paper, we use identification-robust methods to assess the empirical adequacy of a New Keynesian Phillips Curve (NKPC) equation. We focus on Gali and Gertler's (1999) specification, using both U.S. and Canadian data. Two variants of the model are studied: one based on a rational-expectations assumption, and a modification of the latter which consists in using survey data on inflation expectations. The results based on these two specifications exhibit sharp differences concerning: (i) identification difficulties, (ii) backward-looking behavior, and (iii) the frequency of price adjustments. Overall, we find some support for the hybrid NKPC for the U.S., whereas the model is not suited to Canada. Our findings underscore the need to employ identification-robust inference methods in the estimation of expectations-based dynamic macroeconomic relations.
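
In the notation commonly used for Gali and Gertler's (1999) hybrid specification, the estimated relation has the form

\[
\pi_t = \lambda\, mc_t + \gamma_f\, E_t[\pi_{t+1}] + \gamma_b\, \pi_{t-1} + u_t,
\]

where $\pi_t$ is inflation, $mc_t$ is real marginal cost (typically proxied by the labor share), $\gamma_b$ captures backward-looking behavior, and the reduced-form coefficients depend on structural parameters including the frequency of price adjustment; the survey-based variant replaces $E_t[\pi_{t+1}]$ with measured inflation expectations.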

Abstract:

This paper proposes a systematic framework for analyzing the dynamic effects of permanent and transitory shocks on a system of n economic variables.

Abstract:

We instill rational cognition and learning in seemingly riskless choices and judgments. Preferences and possibilities are given in a stochastic sense and based on revisable expectations. The theory predicts experimental preference reversals and passes a sharp econometric test of the status quo bias drawn from a field study.

Abstract:

In the analysis of tax reform, when equity is traded off against efficiency, the measurement of the latter requires us to know how tax-induced price changes affect quantities supplied and demanded. In this paper, we present various econometric procedures for estimating how taxes affect demand.

Abstract:

Pulmonary rehabilitation is an intervention whose effectiveness is widely recognized. This effectiveness has been established through the use of instruments measuring global impact. Patients enrolled in pulmonary rehabilitation programs have varied characteristics and generally suffer from chronic obstructive pulmonary disease to differing degrees. Depending on their different needs, patients respond variably to the components of a pulmonary rehabilitation program. It is recommended that programs be individualized according to patients' needs in order to optimize their effects. To this end, an assessment of patients' rehabilitation needs is necessary. As no standardized instrument currently exists for this assessment, we undertook to develop one using qualitative and quantitative research methods. A conceptual model for assessing patients' rehabilitation needs was developed from the results of focus groups, a review of medical records and a literature review. Based on this model, items to be selected on an individualized basis from five domains (recognition of needs, knowledge, motivation, expectations and goals) were pre-tested. General trends concerning the validity of the items contained in the instrument prototype were examined in a pilot study of 50 rehabilitation respondents. The avenues of investigation identified in this thesis will inform the more thorough validation studies to which this instrument prototype should be subjected in the future.