Abstract:
In the literature on tests of normality, much concern has been expressed over the problems associated with residual-based procedures. Indeed, the specialized tables of critical points needed to perform the tests have been derived for the location-scale model, so reliance on the available significance points in the context of regression models may cause size distortions. We propose a general solution to the problem of controlling the size of normality tests for the disturbances of the standard linear regression model, based on the technique of Monte Carlo tests.
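To fix ideas, here is a minimal sketch of a Monte Carlo test of residual normality, assuming Gaussian errors under the null and a fixed design matrix `X`; the Jarque-Bera-type statistic and the number of replications are illustrative choices rather than the paper's exact procedure.

```python
import numpy as np

def jb_stat(e):
    """Jarque-Bera-type statistic on a residual vector (scale-invariant)."""
    e = e - e.mean()
    s = e.std(ddof=0)
    skew = np.mean(e**3) / s**3
    kurt = np.mean(e**4) / s**4
    return len(e) / 6 * (skew**2 + (kurt - 3) ** 2 / 4)

def ols_residuals(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def mc_normality_pvalue(y, X, N=999, seed=0):
    """Monte Carlo p-value: under H0 the statistic depends on the Gaussian
    errors only through the fixed design X, so it is pivotal and its null
    distribution can be simulated exactly."""
    rng = np.random.default_rng(seed)
    s0 = jb_stat(ols_residuals(y, X))
    sims = np.array([jb_stat(ols_residuals(rng.standard_normal(len(y)), X))
                     for _ in range(N)])
    return (1 + np.sum(sims >= s0)) / (N + 1)
```

With N = 999 replications the resulting test has exact size by construction, whatever the design matrix.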
Abstract:
We propose finite sample tests and confidence sets for models with unobserved and generated regressors, as well as various models estimated by instrumental variables methods. The validity of the procedures is unaffected by the presence of identification problems or "weak instruments", so no detection of such problems is required. We study two distinct approaches for various models considered by Pagan (1984). The first is an instrument substitution method which generalizes an approach proposed by Anderson and Rubin (1949) and Fuller (1987) for different (although related) problems, while the second is based on splitting the sample. The instrument substitution method uses the instruments directly, instead of generated regressors, in order to test hypotheses about the "structural parameters" of interest and build confidence sets. The second approach relies on "generated regressors", which allow a gain in degrees of freedom, and on a sample split technique. For inference about general, possibly nonlinear transformations of model parameters, projection techniques are proposed. A distributional theory is obtained under the assumptions of Gaussian errors and strictly exogenous regressors. We show that the various tests and confidence sets proposed are (locally) "asymptotically valid" under much weaker assumptions. The properties of the proposed tests are examined in simulation experiments. In general, they outperform the usual asymptotic inference methods in terms of both reliability and power. Finally, the techniques suggested are applied to a model of Tobin's q and to a model of academic performance.
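As a rough illustration of the instrument substitution idea, the sketch below tests H0: beta = beta0 in y = X beta + u with instrument matrix Z by regressing y - X beta0 on Z, in the simplest case with no additional exogenous regressors; this is the Anderson-Rubin-type construction, not necessarily the paper's exact statistic.

```python
import numpy as np
from scipy import stats

def anderson_rubin_test(y, X, Z, beta0):
    """F-test of whether the instruments Z explain y - X @ beta0; under
    Gaussian errors and strictly exogenous Z its null distribution is exact,
    whatever the strength of the instruments."""
    n, k = Z.shape
    v = y - X @ beta0
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection onto the span of Z
    explained = v @ Pz @ v
    residual = v @ v - explained
    F = (explained / k) / (residual / (n - k))
    return F, stats.f.sf(F, k, n - k)
```

A confidence set for beta then consists of all beta0 values the test does not reject, and projecting that set yields confidence intervals for transformations of the parameters.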
Abstract:
In the context of multivariate regression (MLR) and seemingly unrelated regressions (SURE) models, it is well known that commonly employed asymptotic test criteria are seriously biased towards overrejection. In this paper, we propose finite- and large-sample likelihood-based test procedures for possibly non-linear hypotheses on the coefficients of MLR and SURE systems.
Abstract:
This paper addresses the issue of estimating semiparametric time series models specified by their conditional mean and conditional variance. We stress the importance of using joint restrictions on the mean and variance. This leads us to take into account the covariance between the mean and the variance and the variance of the variance, that is, the skewness and kurtosis. We establish the direct links between the usual parametric estimation methods, namely QMLE, GMM and M-estimation. The usual univariate QMLE is, under non-normality, less efficient than the optimal GMM estimator. However, the bivariate QMLE based on the dependent variable and its square is as efficient as the optimal GMM estimator. A Monte Carlo analysis confirms the relevance of our approach, in particular the importance of skewness.
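The sketch below illustrates the kind of joint mean/variance moment conditions such a GMM estimator exploits; the AR(1)-in-mean/ARCH(1)-type specification and the identity weighting are toy choices for illustration, not the paper's model (optimal GMM would reweight the moments using the skewness and kurtosis).

```python
import numpy as np
from scipy.optimize import minimize

def moments(theta, y):
    a, b = theta                       # mean parameter a, variance level b
    m = a * y[:-1]                     # toy conditional mean: AR(1)
    v = b + 0.2 * y[:-1] ** 2          # toy conditional variance: ARCH(1)
    u = y[1:] - m
    g1 = u * y[:-1]                    # mean restriction, instrumented by y_{t-1}
    g2 = u ** 2 - v                    # variance restriction
    return np.column_stack([g1, g2])

def gmm_objective(theta, y):
    gbar = moments(theta, y).mean(axis=0)
    return gbar @ gbar                 # identity weighting for simplicity

# Simulate from the toy model and estimate (a, b) jointly.
rng = np.random.default_rng(1)
y = np.zeros(500)
for t in range(1, 500):
    v = 0.5 + 0.2 * y[t - 1] ** 2
    y[t] = 0.6 * y[t - 1] + np.sqrt(v) * rng.standard_normal()
theta_hat = minimize(gmm_objective, x0=[0.0, 1.0], args=(y,)).x
```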
Abstract:
This paper considers various asymptotic approximations in the near-integrated first-order autoregressive model with a non-zero initial condition. We first extend the work of Knight and Satchell (1993), who considered the random walk case with a zero initial condition, to derive the expansion of the relevant joint moment generating function in this more general framework. We also consider, as alternative approximations, the stochastic expansion of Phillips (1987c) and the continuous time approximation of Perron (1991). We assess whether these alternative methods provide an adequate approximation to the finite-sample distribution of the least-squares estimator in a first-order autoregressive model. The results show that, when the initial condition is non-zero, Perron's (1991) continuous time approximation performs very well, while the others only offer improvements when the initial condition is zero.
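The benchmark finite-sample distribution against which such approximations are judged can be simulated directly; here is a minimal sketch for the model y_t = rho y_{t-1} + e_t with rho = 1 + c/T, where the values of c, T and the initial condition y0 are illustrative.

```python
import numpy as np

def ls_distribution(T=100, c=-5.0, y0=5.0, reps=10_000, seed=2):
    """Simulate T*(rho_hat - rho) for the least-squares estimator in a
    near-integrated AR(1) with a non-zero initial condition."""
    rng = np.random.default_rng(seed)
    rho = 1 + c / T
    out = np.empty(reps)
    for r in range(reps):
        y = np.empty(T + 1)
        y[0] = y0
        e = rng.standard_normal(T)
        for t in range(T):
            y[t + 1] = rho * y[t] + e[t]
        ylag, ycur = y[:-1], y[1:]
        out[r] = T * ((ylag @ ycur) / (ylag @ ylag) - rho)
    return out
```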
Abstract:
We provide a theoretical framework to explain the empirical finding that the estimated betas are sensitive to the sampling interval even when using continuously compounded returns. We suppose that stock prices have both permanent and transitory components. The permanent component is a standard geometric Brownian motion while the transitory component is a stationary Ornstein-Uhlenbeck process. The discrete time representation of the beta depends on the sampling interval and two components labelled "permanent and transitory betas". We show that if no transitory component is present in stock prices, then no sampling interval effect occurs. However, the presence of a transitory component implies that the beta is an increasing (decreasing) function of the sampling interval for more (less) risky assets. In our framework, assets are labelled risky if their "permanent beta" is greater than their "transitory beta" and vice versa for less risky assets. Simulations show that our theoretical results provide good approximations for the means and standard deviations of estimated betas in small samples. Our results can be perceived as indirect evidence for the presence of a transitory component in stock prices, as proposed by Fama and French (1988) and Poterba and Summers (1988).
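A stylized simulation of this mechanism (not the paper's exact model): the log market price is the sum of a permanent Brownian component and a transitory Ornstein-Uhlenbeck component, the log asset price loads on them with a "permanent beta" bp and a "transitory beta" bt, and the OLS beta is computed from continuously compounded returns at sampling interval h. All parameter values are made up.

```python
import numpy as np

def beta_at_interval(h, bp=1.5, bt=0.5, kappa=2.0, T=40.0, dt=1/2520, seed=3):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    dW = rng.standard_normal(n) * np.sqrt(dt)     # permanent market shocks
    dV = rng.standard_normal(n) * np.sqrt(dt)     # transitory market shocks
    M = np.cumsum(dW)                             # permanent component (Brownian)
    U = np.zeros(n)                               # transitory component (OU)
    for t in range(1, n):
        U[t] = U[t - 1] - kappa * U[t - 1] * dt + dV[t]
    m = M + U                                     # log market price
    p = bp * M + bt * U                           # log asset price
    step = max(1, round(h / dt))
    rm, ra = np.diff(m[::step]), np.diff(p[::step])
    cov = np.cov(ra, rm)
    return cov[0, 1] / cov[1, 1]                  # OLS beta at interval h
```

As h grows, the permanent component dominates the variance of market returns, so the estimated beta drifts from bt towards bp: it increases with the interval when bp > bt and decreases otherwise.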
Abstract:
This note investigates the adequacy of the finite-sample approximation provided by the Functional Central Limit Theorem (FCLT) when the errors are allowed to be dependent. We compare the distribution of the scaled partial sums of some data with that of the Wiener process to which they converge. Our setup is purposely very simple in that it considers data generated from an ARMA(1,1) process. Yet, this is sufficient to bring out interesting conclusions about the particular elements which cause the approximations to be inadequate even in quite large sample sizes.
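A short sketch of this experiment, with illustrative parameter values: simulate ARMA(1,1) data, form the scaled partial sum at the endpoint, and compare its distribution with the standard normal implied by the FCLT.

```python
import numpy as np

def scaled_partial_sums(T=200, phi=0.8, theta=-0.5, reps=5000, seed=4):
    """Distribution of S_T / (sigma_lr * sqrt(T)), which the FCLT approximates
    by W(1), a standard normal."""
    rng = np.random.default_rng(seed)
    e = rng.standard_normal((reps, T + 1))
    x = np.zeros((reps, T + 1))
    for t in range(1, T + 1):
        x[:, t] = phi * x[:, t - 1] + e[:, t] + theta * e[:, t - 1]
    sigma_lr = (1 + theta) / (1 - phi)    # long-run std of an ARMA(1,1)
    return x[:, 1:].sum(axis=1) / (sigma_lr * np.sqrt(T))
```

Comparing quantiles of the output with those of N(0, 1) for various (phi, theta) pairs reveals which parameter configurations make the approximation poor.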
Abstract:
The GARCH and stochastic volatility paradigms are often brought into conflict as two competing views of the appropriate conditional variance concept: conditional variance given past values of the same series, or conditional variance given a larger past information set (possibly including unobservable state variables). The main thesis of this paper is that, since in general the econometrician has no idea about something like a structural level of disaggregation, a well-specified volatility model should be written in such a way that one is always allowed to reduce the information set without invalidating the model. In this respect, the debate between observable past information (in the GARCH spirit) and unobservable conditioning information (in the state-space spirit) is irrelevant. In this paper, we stress a square-root autoregressive stochastic volatility (SR-SARV) model which remains true to the GARCH paradigm of ARMA dynamics for squared innovations but weakens the GARCH structure in order to obtain the required robustness properties with respect to various kinds of aggregation. It is shown that the lack of robustness of the usual GARCH setting is due to two very restrictive assumptions: perfect linear correlation between squared innovations and the conditional variance on the one hand, and a linear relationship between the conditional variance of the future conditional variance and the squared conditional variance on the other hand. By relaxing these assumptions, thanks to a state-space setting, we obtain aggregation results without renouncing the conditional variance concept (and related leverage effects), as is the case for the recently suggested weak GARCH model, which obtains aggregation results by replacing conditional expectations by linear projections on symmetric past innovations. Moreover, unlike the weak GARCH literature, we are able to define multivariate models, including higher order dynamics and risk premiums (in the spirit of GARCH(p,p) and GARCH-in-mean), and to derive conditional moment restrictions well suited for statistical inference. Finally, we are able to characterize the exact relationships between our SR-SARV models (including higher order dynamics, leverage effect and in-mean effect), usual GARCH models and continuous time stochastic volatility models, so that previous results about aggregation of weak GARCH and continuous time GARCH modeling can be recovered in our framework.
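For intuition, here is an illustrative state-space volatility simulation in the SR-SARV spirit (all parameter values and the gamma-distributed shock are made up): the conditional variance follows a positive AR(1) with its own innovation, so squared returns inherit ARMA-type dynamics without the deterministic link between squared innovations and conditional variance that a GARCH recursion imposes.

```python
import numpy as np

rng = np.random.default_rng(5)
T, omega, gamma = 2000, 0.1, 0.9
f = np.empty(T)                        # conditional variance process
y = np.empty(T)                        # returns
f[0] = omega / (1 - gamma)             # start at the unconditional mean
for t in range(T):
    y[t] = np.sqrt(f[t]) * rng.standard_normal()
    if t + 1 < T:
        # AR(1) variance with a positive multiplicative shock of mean one,
        # so E[f_{t+1} | f_t] = omega + gamma * f[t]; a GARCH(1,1) would
        # instead set f[t+1] = omega + alpha * y[t]**2 + beta * f[t].
        f[t + 1] = omega + gamma * f[t] * rng.gamma(shape=5.0, scale=0.2)
```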
Abstract:
Recent work shows that a low correlation between the instruments and the included variables leads to serious inference problems. We extend the local-to-zero analysis of models with weak instruments to models with estimated instruments and regressors and with higher-order dependence between instruments and disturbances. This makes the framework applicable to linear models with expectation variables that are estimated non-parametrically. Two examples of such models are the risk-return trade-off in finance and the impact of inflation uncertainty on real economic activity. Results show that inference based on Lagrange Multiplier (LM) tests is more robust to weak instruments than Wald-based inference. Using LM confidence intervals leads us to conclude that no statistically significant risk premium is present in returns on the S&P 500 index, excess holding yields between 6-month and 3-month Treasury bills, or yen-dollar spot returns.
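Schematically, such an LM confidence interval is obtained by inverting the test over a grid of parameter values; in the sketch below, `lm_stat` is a placeholder for whichever identification-robust LM statistic the application uses.

```python
from scipy import stats

def lm_confidence_set(lm_stat, grid, data, level=0.95):
    """Collect every null value beta0 that the LM test fails to reject.
    With weak instruments the non-rejected region can be unbounded or
    disconnected, so the full set is returned rather than an interval."""
    crit = stats.chi2.ppf(level, df=1)
    return [b0 for b0 in grid if lm_stat(b0, data) <= crit]
```

The construction is robust to weak instruments because the statistic is evaluated under the null rather than at a possibly ill-behaved point estimate.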
Abstract:
We reconsider the discrete version of the axiomatic cost-sharing model. We propose a condition of (informational) coherence requiring that not all informational refinements of a given problem be solved differently from the original problem. We prove that strictly coherent linear cost-sharing rules must be simple random-order rules.
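To fix ideas, here is a small sketch of a random-order rule in the cooperative-game sense: each agent pays its incremental cost in a drawn arrival order, and averaging over orders gives the cost shares (uniform averaging yields the Shapley value). The paper's discrete model orders individual units of demand rather than agents, but the averaging idea is the same.

```python
from itertools import permutations

def random_order_shares(agents, cost):
    """Average incremental cost of each agent over all arrival orders;
    `cost` maps a frozenset of agents to the total cost of serving them."""
    shares = {a: 0.0 for a in agents}
    orders = list(permutations(agents))
    for order in orders:
        served = frozenset()
        for a in order:
            shares[a] += cost(served | {a}) - cost(served)
            served = served | {a}
    return {a: s / len(orders) for a, s in shares.items()}
```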
Abstract:
We characterize Paretian quasi-orders in the two-agent continuous case.
Abstract:
This article studies mobility patterns of German workers in light of a model of sector-specific human capital. Furthermore, I employ and describe little-used data on continuous on-the-job training occurring after apprenticeships. Results are presented describing the incidence and duration of continuous training. Continuous training is quite common, despite the high incidence of apprenticeships which precedes this part of a worker's career. Most previous studies have only distinguished between firm-specific and general human capital, usually concluding that training was general. Inconsistent with those conclusions, I show that German men are more likely to find a job within the same sector if they have received continuous training in that sector. These results are similar to those obtained for young U.S. workers, and suggest that sector-specific capital is an important feature of very different labor markets. In addition, they suggest that the observed effect of training on mobility is sensitive to the state of the business cycle, indicating a more complex interaction between supply and demand than most theoretical models allow for.
Abstract:
In this paper, we look at how labor market conditions at different points during the tenure of individuals with firms are correlated with current earnings. Using data on individuals from the German Socioeconomic Panel for the 1985-1994 period, we find that both the contemporaneous unemployment rate and prior values of the unemployment rate are significantly correlated with current earnings, contrary to results for the American labor market. Estimated elasticities of earnings vary between 9 and 15 percent with respect to current unemployment rates, and between 6 and 10 percent with respect to unemployment rates at the start of current firm tenure. Moreover, whereas local unemployment rates determine levels of earnings, national rates influence contemporaneous variations in earnings. We interpret this result as evidence that German unions do, in fact, bargain over wages and employment, but that models of individualistic contracts, such as the implicit contract model, may explain some of the observed wage drift and longer-term wage movements reasonably well. Furthermore, we explore the heterogeneity of contracts over a variety of worker and job characteristics. In particular, we find evidence that contracts differ across firm size and worker type. Workers at large firms are markedly more insulated from the job market than workers at any other type of firm, indicating the importance of internal job markets.
Abstract:
Using data from the National Longitudinal Survey of Youth (NLSY), we re-examine the effect of formal on-the-job training on mobility patterns of young American workers. By employing parametric duration models, we evaluate the economic impact of training on productive time with an employer. Confirming previous studies, we find a positive and statistically significant impact of formal on-the-job training on tenure with the employer providing the training. However, the expected net duration of the time spent in the training program is generally not significantly increased. We proceed to document and analyze intra-sectoral and cross-sectoral mobility patterns in order to infer whether training provides firm-specific, industry-specific, or general human capital. The econometric analysis rejects a sequential model of job separation in favor of a competing risks specification. We find significant evidence for the industry-specificity of training. The probability of sectoral mobility upon job separation decreases with training received in the current industry, whether with the last employer or previous employers, and employment attachment increases with on-the-job training. These results are robust to a number of variations on the base model.
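A bare-bones sketch of the competing-risks idea favoured by the analysis above, with made-up Weibull hazards and no covariates: each exit route (within-sector versus cross-sector job change) gets its own cause-specific hazard, and a completed spell contributes the density of the realized risk and the survival of the latent one.

```python
import numpy as np

def weibull_log_density(t, lam, p):
    return np.log(p) + p * np.log(lam) + (p - 1) * np.log(t) - (lam * t) ** p

def weibull_log_survival(t, lam, p):
    return -((lam * t) ** p)

def competing_risks_loglik(params, t, cause):
    """Independent Weibull risks: cause 0 = within-sector move,
    cause 1 = cross-sector move; censoring is omitted for brevity."""
    lam0, p0, lam1, p1 = params
    ll0 = weibull_log_density(t, lam0, p0) + weibull_log_survival(t, lam1, p1)
    ll1 = weibull_log_density(t, lam1, p1) + weibull_log_survival(t, lam0, p0)
    return np.sum(np.where(cause == 0, ll0, ll1))
```

Maximizing this log-likelihood (e.g. with scipy.optimize.minimize on its negative) and letting the hazard parameters depend on training variables is the kind of exercise the abstract describes.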
Abstract:
This report addresses the issues raised by the insistence of American financial circles and the U.S. Treasury on liberalizing and globalizing international capital movements. In this perspective, it attempts to answer three questions: 1) are international finance in general, and national and international monetary and financial institutions, sources of financial and economic instability for countries? 2) is it possible to prevent the benefits flowing from increased access to international capital markets from being reduced or even reversed by international and national monetary and financial crises? and, as a corollary, 3) is the current system of national currencies obsolete?