7 results for “hétéroscédasticité”
Abstract:
The public primary school system in the State of Geneva, Switzerland, is characterized by centrally evaluated pupil performance measured with standardized tests. As a result, consistent data are collected across the system. The 2010-2011 dataset is used to develop a two-stage data envelopment analysis (DEA) of school efficiency. In the first stage, DEA is employed to calculate an individual efficiency score for each school. It shows that, on average, each school could reduce its inputs by 7% whilst maintaining the same quality of pupil performance. The cause of inefficiency lies in perfectible management. In the second stage, efficiency is regressed on school characteristics and environmental variables, i.e., external factors outside the control of headteachers. The model is tested for multicollinearity, heteroskedasticity and endogeneity. Four variables are identified as statistically significant. School efficiency is negatively influenced by (1) the provision of special education, (2) the proportion of disadvantaged pupils enrolled at the school and (3) operations being held on multiple sites, but positively influenced by school size (captured by the number of pupils). The proportion of allophone pupils, location in an urban area and the provision of reception classes for immigrant pupils are not significant. Although the significant variables influencing school efficiency are outside the control of headteachers, it is still possible to either boost the positive impact or curb the negative impact.

In the canton of Geneva (Switzerland), public primary schools are funded by the public authorities (canton and municipalities), and pupils are evaluated with standardized tests at three distinct points in their schooling. This makes it possible to gather consistent statistical information. The 2010-2011 dataset is used in a two-stage analysis of school efficiency. In the first stage, data envelopment analysis (DEA) is used to compute an efficiency score for each school. This analysis shows that average school efficiency is 93%: each school could, on average, reduce its resources by 7% while keeping pupils' results on the standardized tests constant. The source of the inefficiency lies in perfectible school management. In the second stage, the efficiency scores are regressed on school characteristics and on environmental variables, which are not under the control (or influence) of headteachers. The model is tested for multicollinearity, heteroskedasticity and endogeneity. Four variables are statistically significant. School efficiency is negatively influenced by (1) the provision of special education in separate classes, (2) the proportion of disadvantaged pupils and (3) operating on several different sites; it is positively influenced by school size, measured by the number of pupils. The proportion of allophone pupils, location in an urban area and the provision of reception classes for immigrant pupils are not significant. The fact that the variables influencing school efficiency are not under the control of headteachers does not mean that one should give in to fatalism. Various avenues are proposed either to curb the negative impact or to leverage the positive impact of the significant variables.
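The core computation in the first stage is one linear program per school. Below is a minimal sketch of such a two-stage analysis, assuming an input-oriented, constant-returns-to-scale (CCR) DEA model with a plain OLS second stage; all names and the toy data are illustrative, not taken from the Geneva study.

```python
# A minimal sketch of a two-stage DEA analysis (input-oriented CCR model).
# Toy data and variable names are illustrative, not from the cited study.
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X, Y):
    """Input-oriented CCR efficiency score for each DMU (school).

    X : (n, p) array of inputs, Y : (n, q) array of outputs.
    Returns scores in (0, 1]; a score of 1 means efficient.
    """
    n, p = X.shape
    q = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
        c = np.zeros(1 + n)
        c[0] = 1.0
        # Input constraints: sum_j lam_j * x_ij - theta * x_io <= 0.
        A_in = np.hstack([-X[o].reshape(p, 1), X.T])
        b_in = np.zeros(p)
        # Output constraints: -sum_j lam_j * y_rj <= -y_ro.
        A_out = np.hstack([np.zeros((q, 1)), -Y.T])
        b_out = -Y[o]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.concatenate([b_in, b_out]),
                      bounds=[(0, None)] * (1 + n))
        scores[o] = res.fun
    return scores

# Toy data: 20 schools, 2 inputs (staff, budget), 1 output (mean test score).
rng = np.random.default_rng(0)
X = rng.uniform(10, 50, size=(20, 2))
Y = (X.sum(axis=1) * rng.uniform(0.8, 1.2, size=20)).reshape(-1, 1)
theta = dea_input_efficiency(X, Y)

# Second stage: regress scores on school characteristics (here one dummy
# environmental variable) by OLS; the study also tests this regression for
# multicollinearity, heteroskedasticity and endogeneity.
Z = np.column_stack([np.ones(20), rng.normal(size=20)])
beta, *_ = np.linalg.lstsq(Z, theta, rcond=None)
print(theta.round(3), beta.round(3))
```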
Abstract:
A wide range of tests for heteroskedasticity have been proposed in the econometric and statistics literature. Although a few exact homoskedasticity tests are available, the commonly employed procedures are quite generally based on asymptotic approximations which may not provide good size control in finite samples. A number of recent studies have sought to improve the reliability of common heteroskedasticity tests using Edgeworth, Bartlett, jackknife and bootstrap methods, yet the latter remain approximate. In this paper, we describe a solution to the problem of controlling the size of homoskedasticity tests in linear regression contexts. We study procedures based on the standard test statistics [e.g., the Goldfeld-Quandt, Glejser, Bartlett, Cochran, Hartley, Breusch-Pagan-Godfrey, White and Szroeter criteria] as well as tests for autoregressive conditional heteroskedasticity (ARCH-type models). We also suggest several extensions of the existing procedures (sup-type and combined test statistics) to allow for unknown breakpoints in the error variance. We exploit the technique of Monte Carlo tests to obtain provably exact p-values for both the standard and the newly suggested tests. We show that the MC test procedure conveniently solves intractable null distribution problems, in particular those raised by the sup-type and combined test statistics, as well as (when relevant) problems of nuisance parameters unidentified under the null hypothesis. The proposed method works in exactly the same way with both Gaussian and non-Gaussian disturbance distributions [such as heavy-tailed or stable distributions]. The performance of the procedures is examined by simulation. The Monte Carlo experiments conducted focus on: (1) ARCH, GARCH, and ARCH-in-mean alternatives; (2) the case where the variance increases monotonically with (i) one exogenous variable, and (ii) the mean of the dependent variable; (3) grouped heteroskedasticity; (4) breaks in variance at unknown points. We find that the proposed tests achieve perfect size control and have good power.
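To illustrate the Monte Carlo test technique at the heart of this approach, the sketch below computes an exact MC p-value for a Breusch-Pagan-type homoskedasticity test, assuming fixed regressors and Gaussian errors so that the statistic is pivotal under the null; the code is an illustration, not the authors' implementation.

```python
# A minimal sketch of a Monte Carlo (exact) homoskedasticity test.
import numpy as np

def bp_statistic(y, X):
    """Breusch-Pagan n*R^2 statistic from regressing squared OLS residuals
    on the regressors; scale- and beta-invariant, hence pivotal here."""
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e2 = (y - X @ beta) ** 2
    g, *_ = np.linalg.lstsq(X, e2, rcond=None)
    r2 = 1 - np.sum((e2 - X @ g) ** 2) / np.sum((e2 - e2.mean()) ** 2)
    return n * r2

def mc_pvalue(y, X, n_rep=99, rng=None):
    """Exact MC p-value: compare the observed statistic with n_rep
    statistics simulated under the Gaussian null (same fixed X)."""
    rng = rng or np.random.default_rng(0)
    s0 = bp_statistic(y, X)
    sims = np.array([bp_statistic(rng.standard_normal(len(y)), X)
                     for _ in range(n_rep)])
    # Dwass/Dufour formula; the test has exact size alpha whenever
    # alpha * (n_rep + 1) is an integer (e.g. n_rep=99 for alpha=0.05).
    return (1 + np.sum(sims >= s0)) / (n_rep + 1)

# Toy example with heteroskedastic errors.
rng = np.random.default_rng(1)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n) * np.exp(0.5 * X[:, 1])
print(mc_pvalue(y, X, rng=rng))
```

Because the statistic is invariant to the regression coefficients and the error scale, simulating under standard normal errors reproduces the null distribution exactly, whatever the sample size.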
Abstract:
In this paper, we analyze recent developments in econometrics in light of the theory of statistical tests. We first review some basic principles of the philosophy of science and of statistical theory, emphasizing parsimony and falsifiability as criteria for evaluating models, the role of testing theory as a formalization of the falsification principle for probabilistic models, and the logical justification of the basic notions of testing theory (such as the level of a test). We then show that some of the most widely used statistical and econometric methods are fundamentally inappropriate for the problems and models considered, while many hypotheses for which test procedures are commonly proposed are not testable at all. Such situations lead to ill-posed statistical problems. We analyze several specific cases of such problems: (1) the construction of confidence intervals in structural models that raise identification problems; (2) the construction of tests for nonparametric hypotheses, including procedures robust to heteroskedasticity, non-normality or dynamic specification. We point out that these difficulties often stem from the ambition to weaken the regularity conditions required for any statistical analysis, as well as from an inappropriate use of asymptotic distributional results. Finally, we stress the importance of formulating testable hypotheses and models, and of proposing econometric techniques whose properties can be established in finite samples.
Abstract:
Presently, conditions ensuring the validity of bootstrap methods for the sample mean of (possibly heterogeneous) near epoch dependent (NED) functions of mixing processes are unknown. Here we establish the validity of the bootstrap in this context, extending the applicability of bootstrap methods to a class of processes broadly relevant for applications in economics and finance. Our results apply to two block bootstrap methods: the moving blocks bootstrap of Künsch (1989) and Liu and Singh (1992), and the stationary bootstrap of Politis and Romano (1994). In particular, the consistency of the bootstrap variance estimator for the sample mean is shown to be robust against heteroskedasticity and dependence of unknown form. The first order asymptotic validity of the bootstrap approximation to the actual distribution of the sample mean is also established in this heterogeneous NED context.
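As an illustration of the first of these methods, the sketch below implements the moving blocks bootstrap for the sample mean of a dependent, conditionally heteroskedastic series; the block length and the toy ARCH-type data are illustrative choices, not taken from the paper.

```python
# A minimal sketch of the moving blocks bootstrap (MBB) for the sample mean.
import numpy as np

def mbb_mean_samples(x, block_len, n_boot, rng=None):
    """Resample overlapping blocks with replacement and return the
    bootstrap distribution of the sample mean."""
    rng = rng or np.random.default_rng(0)
    n = len(x)
    n_blocks_avail = n - block_len + 1          # blocks x[i:i+block_len]
    k = int(np.ceil(n / block_len))             # blocks per bootstrap series
    means = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, n_blocks_avail, size=k)
        series = np.concatenate([x[s:s + block_len] for s in starts])[:n]
        means[b] = series.mean()
    return means

# Toy example: an AR(1) series with ARCH-type (conditionally
# heteroskedastic) innovations, the kind of heterogeneous dependence
# the consistency result covers.
rng = np.random.default_rng(2)
n = 500
x = np.zeros(n)
for t in range(1, n):
    sigma = np.sqrt(0.5 + 0.4 * x[t - 1] ** 2)
    x[t] = 0.5 * x[t - 1] + sigma * rng.standard_normal()
boot_means = mbb_mean_samples(x, block_len=10, n_boot=999, rng=rng)
# Bootstrap estimate of the variance of sqrt(n) times the sample mean.
print(n * boot_means.var())
```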
Abstract:
This paper addresses the issue of estimating semiparametric time series models specified by their conditional mean and conditional variance. We stress the importance of using joint restrictions on the mean and variance. This leads us to take into account the covariance between the mean and the variance and the variance of the variance, that is, the skewness and the kurtosis. We establish the direct links between the usual parametric estimation methods, namely QMLE, GMM and M-estimation. The usual univariate QMLE is, under non-normality, less efficient than the optimal GMM estimator. However, the bivariate QMLE based on the dependent variable and its square is as efficient as the optimal GMM estimator. A Monte Carlo analysis confirms the relevance of our approach, in particular the importance of skewness.
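For concreteness, the sketch below estimates a simple model specified by its conditional mean and variance with the usual univariate Gaussian QMLE mentioned in the abstract; the AR(1) mean with ARCH(1) variance is an assumption made for the example, not the paper's model.

```python
# A minimal sketch of Gaussian quasi-maximum likelihood (QMLE) estimation
# for an illustrative AR(1)-ARCH(1) model.
import numpy as np
from scipy.optimize import minimize

def neg_quasi_loglik(params, y):
    """Gaussian quasi-log-likelihood for y_t = c + phi*y_{t-1} + u_t,
    with Var(u_t | past) = omega + alpha * u_{t-1}^2."""
    c, phi, omega, alpha = params
    u = y[1:] - c - phi * y[:-1]
    # Initialize the first conditional variance with the sample variance.
    h = omega + alpha * np.concatenate([[np.var(y)], u[:-1] ** 2])
    if np.any(h <= 0):
        return np.inf
    return 0.5 * np.sum(np.log(h) + u ** 2 / h)

# Simulate data from the model, then estimate by QMLE.
rng = np.random.default_rng(3)
n = 2000
y = np.zeros(n)
u_prev = 0.0
for t in range(1, n):
    h = 0.2 + 0.3 * u_prev ** 2
    u_prev = np.sqrt(h) * rng.standard_normal()
    y[t] = 0.1 + 0.5 * y[t - 1] + u_prev
res = minimize(neg_quasi_loglik, x0=[0.0, 0.3, 0.1, 0.1], args=(y,),
               method="Nelder-Mead")
print(res.x.round(3))   # estimates of (c, phi, omega, alpha)
```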
Abstract:
Conditional heteroskedasticity is an important feature of many macroeconomic and financial time series. Standard residual-based bootstrap procedures for dynamic regression models treat the regression error as i.i.d. These procedures are invalid in the presence of conditional heteroskedasticity. We establish the asymptotic validity of three easy-to-implement alternative bootstrap proposals for stationary autoregressive processes with m.d.s. errors subject to possible conditional heteroskedasticity of unknown form. These proposals are the fixed-design wild bootstrap, the recursive-design wild bootstrap and the pairwise bootstrap. In a simulation study all three procedures tend to be more accurate in small samples than the conventional large-sample approximation based on robust standard errors. In contrast, standard residual-based bootstrap methods for models with i.i.d. errors may be very inaccurate if the i.i.d. assumption is violated. We conclude that in many empirical applications the proposed robust bootstrap procedures should routinely replace conventional bootstrap procedures for autoregressions based on the i.i.d. error assumption.
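The recursive-design wild bootstrap mentioned above can be sketched as follows for a zero-mean AR(1), assuming Rademacher multipliers; the setting and names are illustrative, not the authors' code.

```python
# A minimal sketch of the recursive-design wild bootstrap for an AR(1)
# with possibly conditionally heteroskedastic m.d.s. errors.
import numpy as np

def ar1_ols(y):
    """OLS slope of y_t on y_{t-1} (no intercept, for brevity)."""
    return np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)

def recursive_wild_bootstrap(y, n_boot=999, rng=None):
    """Bootstrap distribution of the AR(1) coefficient: rebuild the series
    recursively with residuals multiplied by external random signs, which
    preserves conditional heteroskedasticity of unknown form."""
    rng = rng or np.random.default_rng(0)
    rho = ar1_ols(y)
    resid = y[1:] - rho * y[:-1]
    n = len(y)
    rhos = np.empty(n_boot)
    for b in range(n_boot):
        eta = rng.choice([-1.0, 1.0], size=n - 1)   # Rademacher multipliers
        y_star = np.empty(n)
        y_star[0] = y[0]
        for t in range(1, n):
            y_star[t] = rho * y_star[t - 1] + resid[t - 1] * eta[t - 1]
        rhos[b] = ar1_ols(y_star)
    return rhos

# Toy series with ARCH errors; percentile interval for the AR coefficient.
rng = np.random.default_rng(4)
n = 300
y = np.zeros(n)
u_prev = 0.0
for t in range(1, n):
    u_prev = np.sqrt(0.3 + 0.5 * u_prev ** 2) * rng.standard_normal()
    y[t] = 0.6 * y[t - 1] + u_prev
rhos = recursive_wild_bootstrap(y, rng=rng)
print(np.percentile(rhos, [2.5, 97.5]).round(3))
```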
Abstract:
We discuss statistical inference problems associated with identification and testability in econometrics, and we emphasize the common nature of the two issues. After reviewing the relevant statistical notions, we consider in turn inference in nonparametric models and recent developments on weakly identified models (or weak instruments). We point out that many hypotheses, for which test procedures are commonly proposed, are not testable at all, while some frequently used econometric methods are fundamentally inappropriate for the models considered. Such situations lead to ill-defined statistical problems and are often associated with a misguided use of asymptotic distributional results. Concerning nonparametric hypotheses, we discuss three basic problems for which such difficulties occur: (1) testing a mean (or a moment) under (too) weak distributional assumptions; (2) inference under heteroskedasticity of unknown form; (3) inference in dynamic models with an unlimited number of parameters. Concerning weakly identified models, we stress that valid inference should be based on proper pivotal functions, a condition not satisfied by standard Wald-type methods based on standard errors, and we discuss recent developments in this field, mainly from the viewpoint of building valid tests and confidence sets. The techniques discussed include alternative proposed statistics, bounds, projection, split-sampling, conditioning, and Monte Carlo tests. The possibility of deriving a finite-sample distributional theory, robustness to the presence of weak instruments, and robustness to the specification of a model for endogenous explanatory variables are stressed as important criteria for assessing alternative procedures.