Abstract:
In a recent paper, Bai and Perron (1998) considered theoretical issues related to the limiting distribution of estimators and test statistics in the linear model with multiple structural changes. In this companion paper, we consider practical issues for the empirical applications of the procedures. We first address the problem of estimation of the break dates and present an efficient algorithm to obtain global minimizers of the sum of squared residuals. This algorithm is based on the principle of dynamic programming and requires least-squares operations of order at most O(T²) for any number of breaks. Our method can be applied to both pure and partial structural-change models. Second, we consider the problem of forming confidence intervals for the break dates under various hypotheses about the structure of the data and the errors across segments. Third, we address the issue of testing for structural changes under very general conditions on the data and the errors. Fourth, we address the issue of estimating the number of breaks. We present simulation results pertaining to the behavior of the estimators and tests in finite samples. Finally, a few empirical applications are presented to illustrate the usefulness of the procedures. All methods discussed are implemented in a GAUSS program available upon request for non-profit academic use.
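The dynamic-programming step described above can be sketched as follows. This is a minimal illustration for a pure mean-shift model with a known number of breaks; the function names and defaults are ours, and for clarity each segment SSR is recomputed directly rather than updated recursively, so this sketch does not attain the O(T²) cost of the authors' implementation:

```python
import numpy as np

def ssr(y, i, j):
    """Sum of squared residuals of a constant-mean fit on segment y[i:j]."""
    seg = y[i:j]
    return float(np.sum((seg - seg.mean()) ** 2))

def optimal_breaks(y, m, h):
    """Global minimizer of total SSR with m breaks (m+1 segments),
    each segment containing at least h observations, via dynamic programming."""
    T = len(y)
    # cost[(i, j)] = SSR of the segment starting at i and ending before j
    cost = {(i, j): ssr(y, i, j) for i in range(T) for j in range(i + h, T + 1)}
    # best[k][j] = minimal SSR of fitting k+1 segments to y[0:j]
    best = [dict() for _ in range(m + 1)]
    argbrk = [dict() for _ in range(m + 1)]
    for j in range(h, T + 1):
        best[0][j] = cost[(0, j)]
    for k in range(1, m + 1):
        for j in range((k + 1) * h, T + 1):
            cands = [(best[k - 1][i] + cost[(i, j)], i)
                     for i in range(k * h, j - h + 1)]
            best[k][j], argbrk[k][j] = min(cands)
    # backtrack the estimated break dates
    brks, j = [], T
    for k in range(m, 0, -1):
        j = argbrk[k][j]
        brks.append(j)
    return sorted(brks), best[m][T]
```

Replacing `ssr` with the SSR of a segment-specific least-squares regression extends the same recursion to models with regressors, which is how the pure structural-change case is handled.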
Abstract:
We propose finite sample tests and confidence sets for models with unobserved and generated regressors as well as various models estimated by instrumental variables methods. The validity of the procedures is unaffected by the presence of identification problems or "weak instruments", so no detection of such problems is required. We study two distinct approaches for various models considered by Pagan (1984). The first one is an instrument substitution method which generalizes an approach proposed by Anderson and Rubin (1949) and Fuller (1987) for different (although related) problems, while the second one is based on splitting the sample. The instrument substitution method uses the instruments directly, instead of generated regressors, in order to test hypotheses about the "structural parameters" of interest and build confidence sets. The second approach relies on "generated regressors", which allows a gain in degrees of freedom, and a sample split technique. For inference about general possibly nonlinear transformations of model parameters, projection techniques are proposed. A distributional theory is obtained under the assumptions of Gaussian errors and strictly exogenous regressors. We show that the various tests and confidence sets proposed are (locally) "asymptotically valid" under much weaker assumptions. The properties of the tests proposed are examined in simulation experiments. In general, they outperform the usual asymptotic inference methods in terms of both reliability and power. Finally, the techniques suggested are applied to a model of Tobin’s q and to a model of academic performance.
Abstract:
We provide a theoretical framework to explain the empirical finding that the estimated betas are sensitive to the sampling interval even when using continuously compounded returns. We suppose that stock prices have both permanent and transitory components. The permanent component is a standard geometric Brownian motion while the transitory component is a stationary Ornstein-Uhlenbeck process. The discrete time representation of the beta depends on the sampling interval and two components labelled "permanent and transitory betas". We show that if no transitory component is present in stock prices, then no sampling interval effect occurs. However, the presence of a transitory component implies that the beta is an increasing (decreasing) function of the sampling interval for more (less) risky assets. In our framework, assets are labelled risky if their "permanent beta" is greater than their "transitory beta" and vice versa for less risky assets. Simulations show that our theoretical results provide good approximations for the means and standard deviations of estimated betas in small samples. Our results can be perceived as indirect evidence for the presence of a transitory component in stock prices, as proposed by Fama and French (1988) and Poterba and Summers (1988).
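A minimal simulation sketch of this sampling-interval effect, using a discrete-time approximation: a random walk for the permanent component and an AR(1) as the discretized Ornstein-Uhlenbeck transitory component. The loadings and volatility parameters below are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_beta(bp, bt, delta, n_obs=10000, kappa=0.1, sig_m=0.01, sig_u=0.02):
    """Estimated beta when returns are sampled every `delta` periods.

    The market log-price is M_t + U_t, where M_t is a random walk
    (permanent component) and U_t a stationary AR(1) (discretized
    Ornstein-Uhlenbeck, transitory component).  The asset log-price
    loads bp on the permanent part ("permanent beta") and bt on the
    transitory part ("transitory beta"); idiosyncratic noise is
    omitted for clarity.
    """
    T = n_obs * delta
    M = np.cumsum(rng.normal(0.0, sig_m, T))   # permanent: random walk
    U = np.zeros(T)                            # transitory: AR(1)
    eps = rng.normal(0.0, sig_u, T)
    for t in range(1, T):
        U[t] = (1.0 - kappa) * U[t - 1] + eps[t]
    market = M + U
    asset = bp * M + bt * U
    r_m = np.diff(market[::delta])             # returns at interval delta
    r_a = np.diff(asset[::delta])
    C = np.cov(r_a, r_m)
    return C[0, 1] / C[1, 1]
```

For a risky asset (bp > bt) the estimated beta rises toward bp as the sampling interval grows, because the variance of transitory-component increments saturates while that of permanent-component increments grows linearly with the interval.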
Abstract:
Recent work shows that a low correlation between the instruments and the included variables leads to serious inference problems. We extend the local-to-zero analysis of models with weak instruments to models with estimated instruments and regressors and with higher-order dependence between instruments and disturbances. This makes this framework applicable to linear models with expectation variables that are estimated non-parametrically. Two examples of such models are the risk-return trade-off in finance and the impact of inflation uncertainty on real economic activity. Results show that inference based on Lagrange Multiplier (LM) tests is more robust to weak instruments than Wald-based inference. Using LM confidence intervals leads us to conclude that no statistically significant risk premium is present in returns on the S&P 500 index, excess holding yields between 6-month and 3-month Treasury bills, or in yen-dollar spot returns.
Abstract:
This article studies mobility patterns of German workers in light of a model of sector-specific human capital. Furthermore, I employ and describe little-used data on continuous on-the-job training occurring after apprenticeships. Results are presented describing the incidence and duration of continuous training. Continuous training is quite common, despite the high incidence of apprenticeships which precedes this part of a worker's career. Most previous studies have only distinguished between firm-specific and general human capital, usually concluding that training was general. Inconsistent with those conclusions, I show that German men are more likely to find a job within the same sector if they have received continuous training in that sector. These results are similar to those obtained for young U.S. workers, and suggest that sector-specific capital is an important feature of very different labor markets. In addition, they suggest that the observed effect of training on mobility is sensitive to the state of the business cycle, indicating a more complex interaction between supply and demand than most theoretical models allow for.
Abstract:
In this paper, we look at how labor market conditions at different points during the tenure of individuals with firms are correlated with current earnings. Using data on individuals from the German Socioeconomic Panel for the 1985-1994 period, we find that both the contemporaneous unemployment rate and prior values of the unemployment rate are significantly correlated with current earnings, contrary to results for the American labor market. The estimated elasticity of earnings with respect to the current unemployment rate lies between 9 and 15 percent, and that with respect to the unemployment rate at the start of the current firm tenure between 6 and 10 percent. Moreover, whereas local unemployment rates determine levels of earnings, national rates influence contemporaneous variations in earnings. We interpret this result as evidence that German unions do, in fact, bargain over wages and employment, but that models of individualistic contracts, such as the implicit contract model, may explain some of the observed wage drift and longer-term wage movements reasonably well. Furthermore, we explore the heterogeneity of contracts over a variety of worker and job characteristics. In particular, we find evidence that contracts differ across firm size and worker type. Workers at large firms are markedly more insulated from the job market than workers at any other type of firm, indicating the importance of internal job markets.
Abstract:
Recent work suggests that the conditional variance of financial returns may exhibit sudden jumps. This paper extends a non-parametric procedure to detect discontinuities in otherwise continuous functions of a random variable developed by Delgado and Hidalgo (1996) to higher conditional moments, in particular the conditional variance. Simulation results show that the procedure provides reasonable estimates of the number and location of jumps. This procedure detects several jumps in the conditional variance of daily returns on the S&P 500 index.
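A simplified sketch of the underlying idea, not the Delgado-Hidalgo statistic itself: compare one-sided local estimates of the second conditional moment on either side of each candidate point, and flag the location with the largest gap. The bandwidth and grid size below are illustrative assumptions:

```python
import numpy as np

def variance_jump_scan(x, y, h=0.1):
    """Scan for a discontinuity in the conditional variance of y given x.

    At each candidate point c, the second moment of y is estimated from
    the window (c-h, c] on the left and (c, c+h) on the right; a large
    absolute gap between the two one-sided estimates signals a jump.
    Returns the location of the largest gap and its size.
    """
    grid = np.linspace(x.min() + h, x.max() - h, 200)
    gaps = []
    for c in grid:
        left = y[(x > c - h) & (x <= c)] ** 2
        right = y[(x > c) & (x < c + h)] ** 2
        # require enough observations on both sides of the candidate point
        if len(left) > 5 and len(right) > 5:
            gaps.append(abs(right.mean() - left.mean()))
        else:
            gaps.append(0.0)
    gaps = np.array(gaps)
    return grid[gaps.argmax()], gaps.max()
```

In practice the gap statistic is compared with a threshold to decide whether a jump is present at all; here the sketch only locates the most likely jump point.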
Abstract:
Conditional heteroskedasticity is an important feature of many macroeconomic and financial time series. Standard residual-based bootstrap procedures for dynamic regression models treat the regression error as i.i.d. These procedures are invalid in the presence of conditional heteroskedasticity. We establish the asymptotic validity of three easy-to-implement alternative bootstrap proposals for stationary autoregressive processes with m.d.s. errors subject to possible conditional heteroskedasticity of unknown form. These proposals are the fixed-design wild bootstrap, the recursive-design wild bootstrap and the pairwise bootstrap. In a simulation study all three procedures tend to be more accurate in small samples than the conventional large-sample approximation based on robust standard errors. In contrast, standard residual-based bootstrap methods for models with i.i.d. errors may be very inaccurate if the i.i.d. assumption is violated. We conclude that in many empirical applications the proposed robust bootstrap procedures should routinely replace conventional bootstrap procedures for autoregressions based on the i.i.d. error assumption.
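The recursive-design wild bootstrap, for instance, can be sketched for an AR(1) without intercept as follows (a minimal illustration; the paper covers general stationary autoregressions):

```python
import numpy as np

def ar1_ols(y):
    """OLS slope of y_t on y_{t-1} (no intercept, for simplicity)."""
    x, z = y[:-1], y[1:]
    return float(x @ z / (x @ x))

def recursive_wild_bootstrap(y, n_boot=499, seed=0):
    """Recursive-design wild bootstrap for the AR(1) coefficient.

    Bootstrap errors are the OLS residuals multiplied by i.i.d.
    Rademacher draws, which preserves conditional heteroskedasticity
    of unknown form, and each bootstrap series is rebuilt recursively.
    Returns the point estimate and a 95% percentile interval.
    """
    rng = np.random.default_rng(seed)
    rho = ar1_ols(y)
    resid = y[1:] - rho * y[:-1]
    T = len(y)
    stats = []
    for _ in range(n_boot):
        eta = rng.choice([-1.0, 1.0], size=T - 1)  # Rademacher multipliers
        e_star = resid * eta
        y_star = np.empty(T)
        y_star[0] = y[0]
        for t in range(1, T):
            y_star[t] = rho * y_star[t - 1] + e_star[t - 1]
        stats.append(ar1_ols(y_star))
    return rho, np.percentile(stats, [2.5, 97.5])
```

The fixed-design variant keeps the original regressors and resamples only the errors, while the pairwise bootstrap resamples (y_t, y_{t-1}) pairs jointly.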
Abstract:
In this paper, we propose several finite-sample specification tests for multivariate linear regressions (MLR) with applications to asset pricing models. We focus on departures from the i.i.d. errors assumption, at both the univariate and multivariate levels, with Gaussian and non-Gaussian (including Student t) errors. The univariate tests studied extend existing exact procedures by allowing for unspecified parameters in the error distributions (e.g., the degrees of freedom in the case of the Student t distribution). The multivariate tests are based on properly standardized multivariate residuals to ensure invariance to MLR coefficients and error covariances. We consider tests for serial correlation, tests for multivariate GARCH and sign-type tests against general dependencies and asymmetries. The procedures proposed provide exact versions of those applied in Shanken (1990) which consist in combining univariate specification tests. Specifically, we combine tests across equations using the Monte Carlo (MC) test procedure to avoid Bonferroni-type bounds. Since non-Gaussian based tests are not pivotal, we apply the “maximized MC” (MMC) test method [Dufour (2002)], where the MC p-value for the tested hypothesis (which depends on nuisance parameters) is maximized (with respect to these nuisance parameters) to control the test’s significance level. The tests proposed are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995. Our empirical results reveal the following. Whereas univariate exact tests indicate significant serial correlation, asymmetries and GARCH in some equations, such effects are much less prevalent once error cross-equation covariances are accounted for. In addition, significant departures from the i.i.d. hypothesis are less evident once we allow for non-Gaussian errors.
Abstract:
It is well known that standard asymptotic theory is not valid or is extremely unreliable in models with identification problems or weak instruments [Dufour (1997, Econometrica), Staiger and Stock (1997, Econometrica), Wang and Zivot (1998, Econometrica), Stock and Wright (2000, Econometrica), Dufour and Jasiak (2001, International Economic Review)]. One possible way out consists in using a variant of the Anderson-Rubin (1949, Ann. Math. Stat.) procedure. The latter, however, allows one to build exact tests and confidence sets only for the full vector of the coefficients of the endogenous explanatory variables in a structural equation, which in general does not allow inference on individual coefficients. This problem may in principle be overcome by using projection techniques [Dufour (1997, Econometrica), Dufour and Jasiak (2001, International Economic Review)]. AR-type procedures are emphasized because they are robust to both weak instruments and instrument exclusion. However, such projections could so far be implemented only through costly numerical techniques. In this paper, we provide a complete analytic solution to the problem of building projection-based confidence sets from Anderson-Rubin-type confidence sets. The solution exploits the geometric properties of “quadrics”, and the resulting sets can be viewed as extensions of usual confidence intervals and ellipsoids. Only least squares techniques are required for building the confidence intervals. We also study by simulation how “conservative” projection-based confidence sets are. Finally, we illustrate the methods proposed by applying them to three different examples: the relationship between trade and growth in a cross-section of countries, returns to education, and a study of production functions in the U.S. economy.
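The Anderson-Rubin building block can be sketched as follows for a single endogenous regressor; the grid inversion in the second function stands in for the paper's analytic quadric-based construction and is purely illustrative:

```python
import numpy as np
from scipy.stats import f as f_dist

def anderson_rubin_pvalue(y, x, Z, beta0):
    """Anderson-Rubin test of H0: beta = beta0 in y = x*beta + u,
    with x endogenous and instrument matrix Z (T x k).

    Under H0 the restricted errors y - x*beta0 are uncorrelated with Z,
    so the F-statistic for regressing them on Z has an exact F(k, T-k)
    null distribution (Gaussian errors), regardless of instrument strength.
    """
    T, k = Z.shape
    v = y - x * beta0                       # restricted errors under H0
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection onto the instruments
    fitted = P @ v
    ar = (fitted @ fitted / k) / ((v - fitted) @ (v - fitted) / (T - k))
    return float(f_dist.sf(ar, k, T - k))

def ar_confidence_set(y, x, Z, grid, alpha=0.05):
    """Invert the AR test over a grid of candidate beta values."""
    return [b for b in grid if anderson_rubin_pvalue(y, x, Z, b) > alpha]
```

With weak instruments the inverted set can be unbounded, which is exactly the behavior the quadric geometry characterizes analytically.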
Abstract:
We discuss statistical inference problems associated with identification and testability in econometrics, and we emphasize the common nature of the two issues. After reviewing the relevant statistical notions, we consider in turn inference in nonparametric models and recent developments on weakly identified models (or weak instruments). We point out that many hypotheses, for which test procedures are commonly proposed, are not testable at all, while some frequently used econometric methods are fundamentally inappropriate for the models considered. Such situations lead to ill-defined statistical problems and are often associated with a misguided use of asymptotic distributional results. Concerning nonparametric hypotheses, we discuss three basic problems for which such difficulties occur: (1) testing a mean (or a moment) under (too) weak distributional assumptions; (2) inference under heteroskedasticity of unknown form; (3) inference in dynamic models with an unlimited number of parameters. Concerning weakly identified models, we stress that valid inference should be based on proper pivotal functions —a condition not satisfied by standard Wald-type methods based on standard errors — and we discuss recent developments in this field, mainly from the viewpoint of building valid tests and confidence sets. The techniques discussed include alternative proposed statistics, bounds, projection, split-sampling, conditioning, and Monte Carlo tests. The possibility of deriving a finite-sample distributional theory, robustness to the presence of weak instruments, and robustness to the specification of a model for endogenous explanatory variables are stressed as important criteria for assessing alternative procedures.
Abstract:
In the last decade, the potential macroeconomic effects of intermittent large adjustments in microeconomic decision variables such as prices, investment, consumption of durables or employment – a behavior which may be justified by the presence of kinked adjustment costs – have been studied in models where economic agents continuously observe the optimal level of their decision variable. In this paper, we develop a simple model which introduces infrequent information into a kinked adjustment cost model by assuming that agents do not continuously observe the frictionless optimal level of the control variable. Periodic releases of macroeconomic statistics or dividend announcements are examples of such infrequent information arrivals. We first solve for the optimal individual decision rule, which is found to be both state and time dependent. We then develop an aggregation framework to study the macroeconomic implications of such optimal individual decision rules. Our model has the distinct characteristic that a vast number of agents tend to act together, and more so when uncertainty is large. The average effect of an aggregate shock is inversely related to its size and to aggregate uncertainty. We show that these results differ substantially from the ones obtained with full information adjustment cost models.
Abstract:
This article illustrates the applicability of resampling methods to multiple (simultaneous) testing, for various econometric problems. Joint hypotheses are a usual consequence of economic theory, so that controlling the rejection probability of combinations of tests is a problem frequently encountered in various econometric and statistical contexts. In this regard, it is well known that ignoring the joint character of multiple hypotheses can cause the level of the overall procedure to exceed considerably the desired level. While most multiple-inference methods are conservative in the presence of non-independent statistics, the tests we propose aim to control the significance level exactly. To do so, we consider combined test criteria originally proposed for independent statistics. Applying the Monte Carlo test method, we show how these test-combination methods can be applied to such cases, without resorting to asymptotic approximations. After reviewing earlier results on this topic, we show how such a methodology can be used to build normality tests based on several moments for the errors of linear regression models. For this problem, we propose a finite-sample valid generalization of the asymptotic test of Kiefer and Salmon (1983), as well as combined tests following the methods of Tippett and of Pearson-Fisher. We find empirically that the test procedures corrected by the Monte Carlo test method do not suffer from the bias (or under-rejection) problem often reported in this literature (notably against platykurtic distributions) and yield substantial power gains relative to the usual combined methods.
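As an illustration of the kind of procedure described (a hedged sketch, not the paper's exact statistics), a Tippett-style combination of moment-based normality statistics can be calibrated by the Monte Carlo test method:

```python
import numpy as np

def skew_kurt_stats(e):
    """Standardized third- and fourth-moment statistics (absolute values)."""
    z = (e - e.mean()) / e.std()
    n = len(z)
    sk = np.sqrt(n / 6.0) * np.mean(z ** 3)
    ku = np.sqrt(n / 24.0) * (np.mean(z ** 4) - 3.0)
    return np.array([abs(sk), abs(ku)])

def tippett_mc_pvalue(e, n_mc=999, seed=0):
    """Combined Monte Carlo test of normality.

    The combined statistic is the max of the |skewness| and |kurtosis|
    statistics (equivalent to Tippett's min-p rule when the component
    statistics share a null distribution), and its exact null
    distribution is simulated rather than approximated asymptotically.
    """
    rng = np.random.default_rng(seed)
    n = len(e)
    s_obs = skew_kurt_stats(e).max()
    s_null = np.array([skew_kurt_stats(rng.normal(size=n)).max()
                       for _ in range(n_mc)])
    # Monte Carlo p-value: exact level control for finite n_mc
    return (1.0 + np.sum(s_null >= s_obs)) / (n_mc + 1.0)
```

Because the null distribution is simulated at the actual sample size, the size distortions of the asymptotic chi-squared approximation are avoided by construction.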
Abstract:
Digital reading occupies a growing share of students' overall reading. Although the first digital reading systems, commonly called electronic books, date back several years, opinions about their potential still diverge. A variety of digital academic content is now available to students, bringing with it a multiplication of uses and a variety of reading modes. Digital reading systems are now an integral part of the electronic environment to which students have access, and they deserve to be studied in greater depth. Many experiments with electronic books have been conducted in public and university libraries. Research has been carried out on their usability and on readers' satisfaction, with the aim of improving their design. However, very few studies have examined the actual reading practices of academics (notably students) and their perceptions of these new reading systems. Our research addresses these aspects by studying two digital reading systems: a Tablet PC (a mobile device) and a Web-book system, NetLibrary (a reading interface integrated into a Web browser). Our research examines students' reading practices on these digital reading systems. It is guided by three research questions, concerning (1) the reading strategies employed by students (before, during and after reading), (2) the elements of the reading system that influence the reading process, positively or negatively, and (3) students' perceptions of electronic book technology and its contribution to their academic work.
To carry out this research, a mixed methodological approach was adopted, using three modes of data collection: a questionnaire, semi-structured interviews with students who had used one or the other of the systems studied, and the collection of the reading traces left by the students in the systems after use. The respondents (n=46) were students at the Université de Montréal from three departments (Library & Information Science, Communication, and Linguistics & Translation). Nearly half of them (n=21) were interviewed. In parallel, the reading traces left in the reading systems by the students (annotations, highlights, etc.) were collected and analyzed. The interview data and the answers to the open questions of the questionnaire underwent content analysis, while the data from the closed questions of the questionnaire and from the reading traces were treated statistically. The results show that, in general, the reading objective, the novelty of the content, the student's reading habits and the capabilities of the reading system are the elements that guide the choice and application of reading strategies. Aids and obstacles to reading were identified for each of the reading systems studied. The aids include the presence of certain elements of the paper-book metaphor in the digital reading system (the notion of a delimited page, pagination, etc.), the dictionary integrated into the system, and the fact that the systems studied facilitate skimming. Among the obstacles, the instrumentation of reading made the reader's appropriation of the text difficult. Moreover, digital (thus on-screen) reading led to a lack of concentration and to visual fatigue, particularly with NetLibrary.
The Tablet PC, like NetLibrary, was perceived as easy to use but not always comfortable, the discomfort being more evident with NetLibrary. Students consider both reading systems to be practical tools for academic work, but for different reasons, specific to each system. Respondents' overall assessment of their digital reading experience was, on the whole, positive for the Tablet PC and rather mixed for NetLibrary. This research contributes to knowledge about (1) digital reading, notably by student academic readers, and (2) the impact of a reading system on reading effectiveness, on readers, on the attainment of the reading objective, and on the reading strategies used. In addition to the limitations of the study, avenues for future research are presented.
Abstract:
An abstract in English is also available.