978 results for Loi de puissance


Relevance:

10.00%

Publisher:

Abstract:

A wide range of tests for heteroskedasticity have been proposed in the econometric and statistics literature. Although a few exact homoskedasticity tests are available, the commonly employed procedures are quite generally based on asymptotic approximations which may not provide good size control in finite samples. A number of recent studies have sought to improve the reliability of common heteroskedasticity tests using Edgeworth, Bartlett, jackknife and bootstrap methods, yet these remain approximate. In this paper, we describe a solution to the problem of controlling the size of homoskedasticity tests in linear regression contexts. We study procedures based on the standard test statistics [e.g., the Goldfeld-Quandt, Glejser, Bartlett, Cochran, Hartley, Breusch-Pagan-Godfrey, White and Szroeter criteria] as well as tests for autoregressive conditional heteroskedasticity (ARCH-type models). We also suggest several extensions of the existing procedures (sup-type and combined test statistics) to allow for unknown breakpoints in the error variance. We exploit the technique of Monte Carlo tests to obtain provably exact p-values, for both the standard and the newly suggested tests. We show that the MC test procedure conveniently solves intractable null distribution problems, in particular those raised by the sup-type and combined test statistics, as well as (when relevant) by nuisance parameters that are unidentified under the null hypothesis. The proposed method works in exactly the same way with both Gaussian and non-Gaussian disturbance distributions [such as heavy-tailed or stable distributions]. The performance of the procedures is examined by simulation. The Monte Carlo experiments conducted focus on: (1) ARCH, GARCH, and ARCH-in-mean alternatives; (2) the case where the variance increases monotonically with (i) one exogenous variable and (ii) the mean of the dependent variable; (3) grouped heteroskedasticity; (4) breaks in variance at unknown points. We find that the proposed tests achieve perfect size control and have good power.
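
As a concrete illustration of the Monte Carlo test idea described above (a sketch with assumed names and a toy design, not the paper's code), the following computes an exact MC p-value for a Breusch-Pagan-type statistic, which is pivotal under the Gaussian homoskedastic null with fixed regressors:

import numpy as np

def bp_statistic(y, X):
    # Breusch-Pagan-type LM statistic: n * R^2 from regressing scaled squared
    # OLS residuals on the regressors.
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ beta
    g = u**2 / (u @ u / n)                        # scale-free squared residuals
    gamma, *_ = np.linalg.lstsq(X, g, rcond=None)
    resid = g - X @ gamma
    r2 = 1.0 - (resid @ resid) / ((g - g.mean()) @ (g - g.mean()))
    return n * r2

def mc_pvalue(stat_obs, X, n_rep=99, seed=0):
    # The statistic depends only on the errors and X and is scale-invariant,
    # so under H0 (homoskedastic Gaussian errors, fixed X) its null
    # distribution can be simulated exactly from pure-noise samples.
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    sims = np.array([bp_statistic(rng.standard_normal(n), X) for _ in range(n_rep)])
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)

rng = np.random.default_rng(1)
n = 100
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(n)
print(mc_pvalue(bp_statistic(y, X), X))

With N replications, the p-value (1 + number of exceedances) / (N + 1) yields an exactly level-correct test whenever N is chosen so that alpha * (N + 1) is an integer.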

Relevance:

10.00%

Publisher:

Abstract:

Islam prohibits riba, an Arabic word meaning both usury and interest. The prohibition of riba is laid down in Islamic law, which arose in medieval Arabia, and it underlies Islamic finance, which expanded remarkably during the second half of the twentieth century. We examine the origins of this prohibition, the problems Islamic finance currently faces, and its future prospects.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we study several tests for the equality of two unknown distributions. Two are based on empirical distribution functions, three others on nonparametric probability density estimates, and the last ones on differences between sample moments. We suggest controlling the size of such tests (under nonparametric assumptions) by using permutational versions of the tests jointly with the method of Monte Carlo tests, properly adjusted to deal with discrete distributions. We also propose a combined test procedure whose level is again perfectly controlled through the Monte Carlo test technique and which has better power properties than the individual tests being combined. Finally, in a simulation experiment, we show that the suggested technique provides perfect control of test size and that the new tests proposed can yield sizeable power improvements.
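
The permutational version of such a test can be sketched as follows (a minimal illustration with an assumed Kolmogorov-Smirnov-type statistic and simulated data; the paper's Monte Carlo adjustment for discrete distributions and ties is omitted):

import numpy as np

def ks_stat(x, y):
    # Maximum absolute difference between the two empirical CDFs.
    grid = np.sort(np.concatenate([x, y]))
    Fx = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    Fy = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return np.max(np.abs(Fx - Fy))

def perm_pvalue(x, y, stat=ks_stat, n_rep=999, seed=0):
    # Permutation p-value: under H0 (equal distributions) the pooled sample is
    # exchangeable, so reshuffle it and count statistics at least as large as
    # the observed one.
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    n_x = len(x)
    obs = stat(x, y)
    count = 0
    for _ in range(n_rep):
        perm = rng.permutation(pooled)
        count += stat(perm[:n_x], perm[n_x:]) >= obs
    return (1 + count) / (n_rep + 1)

rng = np.random.default_rng(1)
print(perm_pvalue(rng.normal(0, 1, 50), rng.normal(0.5, 1, 50)))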

Relevance:

10.00%

Publisher:

Abstract:

In an economy where cash can be stored costlessly (in nominal terms), the nominal interest rate is bounded below by zero. This paper derives the implications of this nonnegativity constraint for the term structure and shows that it induces a nonlinear and convex relation between short- and long-term interest rates. As a result, the long-term rate responds asymmetrically to changes in the short-term rate, and by less than predicted by a benchmark linear model. In particular, a decrease in the short-term rate leads to a decrease in the long-term rate that is smaller in magnitude than the increase in the long-term rate associated with an increase in the short-term rate of the same size. To the extent that monetary policy acts by affecting long-term rates through the term structure, its power is considerably reduced at low interest rates. The empirical predictions of the model are examined using data from Japan.
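
A schematic simulation can illustrate the asymmetry (this is not the paper's model; the AR(1) shadow-rate process, the parameter values and the expectations-hypothesis long rate are assumptions of this sketch only): starting near zero, an increase in the short-term rate moves the long rate by more than an equal-sized decrease does.

import numpy as np

def long_rate(short0, rho=0.9, mu=0.02, sigma=0.01, horizon=40, n_paths=20000, seed=0):
    # Average expected future short rate over `horizon` periods when the
    # observed short rate is max(shadow rate, 0) and the shadow rate is AR(1).
    rng = np.random.default_rng(seed)
    shadow = np.full(n_paths, short0)
    acc = np.zeros(n_paths)
    for _ in range(horizon):
        acc += np.maximum(shadow, 0.0)
        shadow = mu + rho * (shadow - mu) + sigma * rng.standard_normal(n_paths)
    return acc.mean() / horizon

base, delta = 0.005, 0.005            # short rate near zero, +/- 50 bp moves
up = long_rate(base + delta) - long_rate(base)
down = long_rate(base) - long_rate(base - delta)
print(f"long-rate rise after +50bp: {up:.4%}, fall after -50bp: {down:.4%}")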

Relevance:

10.00%

Publisher:

Abstract:

In this paper we examine the effects of Quebec's agricultural zoning law, proclaimed in December 1978, on land prices in a suburb of Montreal. Using data on ordinary transactions in Carignan and Saint-Mathias from 1975 to 1981, we estimate by ordinary least squares an equation for the price per acre, with lot size, distance from Montreal, available services (sewers, ...) and whether or not the land falls under agricultural zoning as independent variables. Our results indicate that agricultural zoning reduces the price of an acre of land.
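
A minimal sketch of this kind of hedonic regression, using simulated placeholder data rather than the Carignan and Saint-Mathias transactions, looks as follows:

import numpy as np

rng = np.random.default_rng(0)
n = 200
lot_size = rng.uniform(1, 50, n)              # acres
distance = rng.uniform(10, 40, n)             # km from Montreal
services = rng.integers(0, 2, n)              # sewers etc. available
zoned_ag = rng.integers(0, 2, n)              # under the 1978 zoning law
price_per_acre = (5000 - 60 * distance + 800 * services
                  - 1500 * zoned_ag - 20 * lot_size
                  + rng.normal(0, 300, n))

X = np.column_stack([np.ones(n), lot_size, distance, services, zoned_ag])
beta, *_ = np.linalg.lstsq(X, price_per_acre, rcond=None)
for name, b in zip(["intercept", "lot_size", "distance", "services", "zoned_ag"], beta):
    print(f"{name:>10s}: {b:8.1f}")
# A negative coefficient on zoned_ag corresponds to the paper's finding that
# agricultural zoning lowers the price per acre.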

Relevance:

10.00%

Publisher:

Abstract:

This paper presents several exact results on the second moments of sample autocorrelations, for Gaussian or non-Gaussian series. We first give general formulae for the mean, variance and covariances of sample autocorrelations when the variables of the series are exchangeable. From these we derive bounds on the variances and covariances of sample autocorrelations. These bounds are used to obtain exact bounds on critical points when testing the randomness of a time series, without any assumption on the form of the underlying distribution. We give exact and explicit formulae for the variances and covariances of the autocorrelations when the series is Gaussian white noise, and we show that these results remain valid when the distribution of the series is spherically symmetric. We present simulation results which clearly indicate that the distribution of sample autocorrelations is much better approximated by standardizing them with the exact mean and variance and using the asymptotic N(0,1) law than by using the approximate second moments in common use. We also study the exact variances and covariances of autocorrelations based on the ranks of the observations.
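
The standardization idea can be sketched as follows (the paper's exact closed-form moments are replaced here by a brute-force Monte Carlo stand-in under Gaussian white noise; names and settings are illustrative):

import numpy as np

def sample_acf(x, max_lag):
    # Standard mean-corrected sample autocorrelations r_1, ..., r_max_lag.
    x = x - x.mean()
    denom = x @ x
    return np.array([x[:-k] @ x[k:] / denom for k in range(1, max_lag + 1)])

def white_noise_moments(n, max_lag, n_rep=20000, seed=0):
    # Monte Carlo stand-in for the exact mean and variance of r_k under
    # Gaussian white noise of length n.
    rng = np.random.default_rng(seed)
    acfs = np.array([sample_acf(rng.standard_normal(n), max_lag)
                     for _ in range(n_rep)])
    return acfs.mean(axis=0), acfs.var(axis=0)

n, max_lag = 50, 5
x = np.random.default_rng(3).standard_normal(n)    # series under test
mean_k, var_k = white_noise_moments(n, max_lag)
z = (sample_acf(x, max_lag) - mean_k) / np.sqrt(var_k)
print(np.round(z, 2))   # compare with N(0,1) critical values, e.g. +/- 1.96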

Relevance:

10.00%

Publisher:

Abstract:

We extend the class of M-tests for a unit root analyzed by Perron and Ng (1996) and Ng and Perron (1997) to the case where a change in the trend function is allowed to occur at an unknown time. These tests, denoted M(GLS), adopt the GLS detrending approach of Dufour and King (1991) and Elliott, Rothenberg and Stock (1996) (ERS). Following Perron (1989), we consider two models: one allowing for a change in slope and the other for both a change in intercept and slope. We derive the asymptotic distribution of the tests as well as that of the feasible point optimal tests PT(GLS) suggested by ERS. The asymptotic critical values of the tests are tabulated. We also compute the non-centrality parameter used for the local GLS detrending that permits the tests to have 50% asymptotic power at that value. We show that the M(GLS) and PT(GLS) tests have an asymptotic power function close to the power envelope. An extensive simulation study analyzes the size and power in finite samples under various methods for selecting the truncation lag of the autoregressive spectral density estimator. An empirical application is also provided.
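
A minimal sketch of the local GLS detrending step, in the style of ERS quasi-differencing with a constant and linear trend (the non-centrality parameter value is the usual linear-trend choice; the paper's break models and new non-centrality parameters are not reproduced here):

import numpy as np

def gls_detrend(y, c_bar=-13.5):
    # Quasi-difference y and the deterministic terms at alpha_bar = 1 + c_bar/T,
    # regress one on the other, and subtract the implied deterministic trend.
    T = len(y)
    a = 1.0 + c_bar / T
    Z = np.column_stack([np.ones(T), np.arange(1, T + 1)])   # constant + trend
    yq = np.concatenate([[y[0]], y[1:] - a * y[:-1]])         # first obs kept as is
    Zq = np.vstack([Z[0], Z[1:] - a * Z[:-1]])
    psi, *_ = np.linalg.lstsq(Zq, yq, rcond=None)
    return y - Z @ psi                                        # detrended series

rng = np.random.default_rng(0)
T = 200
y = np.cumsum(rng.standard_normal(T)) + 0.05 * np.arange(T)   # random walk with drift
y_tilde = gls_detrend(y)
# y_tilde would then feed the M-type unit root statistics (MZ_alpha, MSB, MZ_t).
print(y_tilde[:5])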

Relevance:

10.00%

Publisher:

Abstract:

We propose finite sample tests and confidence sets for models with unobserved and generated regressors as well as various models estimated by instrumental variables methods. The validity of the procedures is unaffected by the presence of identification problems or "weak instruments", so no detection of such problems is required. We study two distinct approaches for various models considered by Pagan (1984). The first is an instrument substitution method which generalizes an approach proposed by Anderson and Rubin (1949) and Fuller (1987) for different (although related) problems, while the second is based on splitting the sample. The instrument substitution method uses the instruments directly, instead of generated regressors, in order to test hypotheses about the "structural parameters" of interest and build confidence sets. The second approach relies on "generated regressors", which allow a gain in degrees of freedom, and on a sample split technique. For inference about general, possibly nonlinear transformations of model parameters, projection techniques are proposed. A distributional theory is obtained under the assumptions of Gaussian errors and strictly exogenous regressors. We show that the various tests and confidence sets proposed are (locally) "asymptotically valid" under much weaker assumptions. The properties of the proposed tests are examined in simulation experiments. In general, they outperform the usual asymptotic inference methods in terms of both reliability and power. Finally, the suggested techniques are applied to a model of Tobin's q and to a model of academic performance.
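
A minimal sketch of an Anderson-Rubin-type test of H0: beta = beta0, with an intercept as the only included exogenous regressor (the variable names, toy data and F reference distribution follow the usual textbook presentation, not the paper's notation):

import numpy as np
from scipy import stats

def anderson_rubin(y, X, Z, beta0):
    # Regress y - X @ beta0 on the instruments and test that the instrument
    # coefficients are jointly zero; exact F under Gaussian errors and
    # strictly exogenous instruments.
    n, k = Z.shape
    r = y - X @ beta0
    W = np.column_stack([np.ones(n), Z])            # unrestricted: constant + instruments
    coef, *_ = np.linalg.lstsq(W, r, rcond=None)
    rss1 = np.sum((r - W @ coef) ** 2)
    rss0 = np.sum((r - r.mean()) ** 2)              # restricted: constant only
    F = ((rss0 - rss1) / k) / (rss1 / (n - k - 1))
    return F, stats.f.sf(F, k, n - k - 1)

rng = np.random.default_rng(0)
n = 200
z = rng.standard_normal((n, 2))                     # instruments
x = z @ np.array([1.0, 0.5]) + rng.standard_normal(n)
y = 0.7 * x + rng.standard_normal(n)
print(anderson_rubin(y, x[:, None], z, np.array([0.7])))   # test at the true value

Inverting this test over a grid of beta0 values yields the kind of confidence set whose validity does not depend on instrument strength.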

Relevance:

10.00%

Publisher:

Abstract:

The GARCH and Stochastic Volatility paradigms are often brought into conflict as two competing views of the appropriate conditional variance concept: conditional variance given past values of the same series, or conditional variance given a larger past information set (possibly including unobservable state variables). The main thesis of this paper is that, since in general the econometrician has no idea about something like a structural level of disaggregation, a well-written volatility model should be specified in such a way that one is always allowed to reduce the information set without invalidating the model. In this respect, the debate between observable past information (in the GARCH spirit) and unobservable conditioning information (in the state-space spirit) is irrelevant. In this paper, we stress a square-root autoregressive stochastic volatility (SR-SARV) model which remains true to the GARCH paradigm of ARMA dynamics for squared innovations but weakens the GARCH structure in order to obtain the required robustness properties with respect to various kinds of aggregation. It is shown that the lack of robustness of the usual GARCH setting is due to two very restrictive assumptions: perfect linear correlation between squared innovations and the conditional variance on the one hand, and a linear relationship between the conditional variance of the future conditional variance and the squared conditional variance on the other hand. By relaxing these assumptions, thanks to a state-space setting, we obtain aggregation results without renouncing the conditional variance concept (and related leverage effects), as is the case for the recently suggested weak GARCH model, which obtains aggregation results by replacing conditional expectations with linear projections on symmetric past innovations. Moreover, unlike the weak GARCH literature, we are able to define multivariate models, including higher order dynamics and risk premiums (in the spirit of GARCH(p,p) and GARCH in mean), and to derive conditional moment restrictions well suited for statistical inference. Finally, we are able to characterize the exact relationships between our SR-SARV models (including higher order dynamics, leverage effects and in-mean effects), usual GARCH models and continuous time stochastic volatility models, so that previous results about aggregation of weak GARCH and continuous time GARCH modeling can be recovered in our framework.
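
An illustrative simulation of a discrete-time square-root autoregressive volatility factor, loosely in the spirit of the SR-SARV idea (the paper defines the class through conditional moment restrictions; the parameter values and the positivity clipping below are assumptions of this sketch only):

import numpy as np

def simulate_sr_sarv(T=1000, omega=0.02, gamma=0.95, nu=0.10, corr=-0.3, seed=0):
    rng = np.random.default_rng(seed)
    f = np.empty(T + 1)                  # volatility factor = conditional variance
    r = np.empty(T)                      # returns
    f[0] = omega / (1 - gamma)           # start at the unconditional mean
    for t in range(T):
        eps, eta = rng.multivariate_normal([0, 0], [[1, corr], [corr, 1]])
        r[t] = np.sqrt(f[t]) * eps                      # return innovation
        # AR(1) factor with square-root (level-dependent) innovation variance
        f[t + 1] = omega + gamma * f[t] + nu * np.sqrt(f[t]) * eta
        f[t + 1] = max(f[t + 1], 1e-8)                  # keep the variance positive
    return r, f[:-1]

r, f = simulate_sr_sarv()
# Squared returns inherit ARMA-type dynamics, the GARCH-like feature the text
# refers to; corr < 0 produces a leverage-type effect.
print(np.corrcoef(r[:-1] ** 2, r[1:] ** 2)[0, 1])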

Relevance:

10.00%

Publisher:

Abstract:

In this paper we propose exact likelihood-based mean-variance efficiency tests of the market portfolio in the context of the Capital Asset Pricing Model (CAPM), allowing for a wide class of error distributions which include normality as a special case. These tests are developed in the framework of multivariate linear regressions (MLR). It is well known, however, that despite their simple statistical structure, standard asymptotically justified MLR-based tests are unreliable. In financial econometrics, exact tests have been proposed for a few specific hypotheses [Jobson and Korkie (Journal of Financial Economics, 1982), MacKinlay (Journal of Financial Economics, 1987), Gibbons, Ross and Shanken (Econometrica, 1989), Zhou (Journal of Finance, 1993)], most of which depend on normality. For the Gaussian model, our tests correspond to Gibbons, Ross and Shanken's mean-variance efficiency tests. In non-Gaussian contexts, we reconsider mean-variance efficiency tests allowing for multivariate Student-t and Gaussian mixture errors. Our framework allows us to shed more light on whether the normality assumption is too restrictive when testing the CAPM. We also propose exact multivariate diagnostic checks (including tests for multivariate GARCH and a multivariate generalization of the well-known variance ratio tests) and goodness-of-fit tests, as well as a set estimate for the intervening nuisance parameters. Our results [over five-year subperiods] show the following: (i) multivariate normality is rejected in most subperiods, (ii) residual checks reveal no significant departures from the multivariate i.i.d. assumption, and (iii) mean-variance efficiency of the market portfolio is not rejected as frequently once the possibility of non-normal errors is allowed for.
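
For the Gaussian case, the Gibbons-Ross-Shanken statistic to which the proposed tests reduce can be computed as follows (a sketch with simulated placeholder returns and a single market factor; names and the data-generating choices are assumptions of this example):

import numpy as np
from scipy import stats

def grs_test(R, f):
    # R: T x N excess returns, f: length-T market excess return.
    T, N = R.shape
    X = np.column_stack([np.ones(T), f])
    B, *_ = np.linalg.lstsq(X, R, rcond=None)        # rows: [alpha; beta]
    alpha = B[0]
    resid = R - X @ B
    Sigma = resid.T @ resid / T                      # MLE residual covariance
    theta2 = (f.mean() / f.std(ddof=0)) ** 2         # squared market Sharpe ratio
    grs = (T - N - 1) / N * (alpha @ np.linalg.solve(Sigma, alpha)) / (1 + theta2)
    return grs, stats.f.sf(grs, N, T - N - 1)        # exact F under normality

rng = np.random.default_rng(0)
T, N = 240, 5
f = 0.005 + 0.04 * rng.standard_normal(T)
R = np.outer(f, rng.uniform(0.5, 1.5, N)) + 0.02 * rng.standard_normal((T, N))
print(grs_test(R, f))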

Relevance:

10.00%

Publisher:

Abstract:

This paper studies testing for a unit root in panels with large n and T in which the cross-sectional units are correlated. To model this cross-sectional correlation, we assume that the data are generated by an unknown number of unobservable common factors. We propose unit root tests in this environment and derive their (Gaussian) asymptotic distribution under the null hypothesis of a unit root and under local alternatives. We show that these tests have significant asymptotic power when the model has no incidental trends. However, when there are incidental trends in the model and it is necessary to remove heterogeneous deterministic components, we show that these tests have no power against the same local alternatives. Through Monte Carlo simulations, we provide evidence on the finite sample properties of these new tests.
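
A schematic sketch of the defactoring step used in factor-based panel unit root analysis (the pooled autoregressive coefficient at the end is only illustrative; the paper derives its own statistics and asymptotic critical values):

import numpy as np

def defactor(Y, n_factors=1):
    # Y: T x n panel in levels. Estimate common factors by principal components
    # on first differences, strip them out, and re-cumulate the idiosyncratic
    # components back to levels.
    dY = np.diff(Y, axis=0)
    U, S, Vt = np.linalg.svd(dY, full_matrices=False)
    F = U[:, :n_factors] * S[:n_factors]             # estimated factors
    L = Vt[:n_factors].T                             # estimated loadings
    d_idio = dY - F @ L.T                            # defactored differences
    return np.cumsum(np.vstack([Y[0], d_idio]), axis=0)

rng = np.random.default_rng(0)
T, n = 200, 30
common = np.cumsum(rng.standard_normal(T))           # one I(1) common factor
Y = np.outer(common, rng.uniform(0.5, 1.5, n)) + np.cumsum(rng.standard_normal((T, n)), axis=0)
E = defactor(Y, n_factors=1)
# Pooled first-order autoregressive coefficient of the idiosyncratic parts,
# close to 1 under the unit-root null.
print(np.sum(E[:-1] * E[1:]) / np.sum(E[:-1] ** 2))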

Relevance:

10.00%

Publisher:

Abstract:

This article illustrates the applicability of resampling methods to multiple (simultaneous) tests in various econometric problems. Joint hypotheses are a routine consequence of economic theory, so controlling the rejection probability of combinations of tests is a problem frequently encountered in various econometric and statistical contexts. In this regard, it is well known that ignoring the joint character of multiple hypotheses can cause the level of the overall procedure to exceed the nominal level considerably. Whereas most multiple-inference methods are conservative in the presence of non-independent statistics, the tests we propose aim to control the significance level exactly. To do so, we consider combined test criteria originally proposed for independent statistics. Applying the Monte Carlo test method, we show how these test-combination methods can be applied to such cases without resorting to asymptotic approximations. After reviewing earlier results on this topic, we show how such a methodology can be used to construct normality tests based on several moments for the errors of linear regression models. For this problem, we propose a finite-sample valid generalization of the asymptotic test of Kiefer and Salmon (1983), as well as combined tests following the Tippett and Pearson-Fisher methods. We find empirically that the test procedures corrected by the Monte Carlo test method do not suffer from the bias (or under-rejection) problem often reported in this literature, notably against platykurtic distributions, and yield substantial power gains relative to the usual combination methods.
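
A minimal sketch of the combination idea, with Tippett and Fisher-type criteria calibrated by Monte Carlo under the joint null (the skewness and kurtosis checks stand in for the moment-based normality tests of the text, and the regression-residual structure is simplified away; names and settings are assumptions of this example):

import numpy as np
from scipy import stats

def individual_pvalues(e):
    # Asymptotic p-values of simple skewness and excess-kurtosis checks.
    n = len(e)
    z = (e - e.mean()) / e.std(ddof=0)
    sk = np.mean(z ** 3) * np.sqrt(n / 6.0)
    ku = (np.mean(z ** 4) - 3.0) * np.sqrt(n / 24.0)
    return np.array([2 * stats.norm.sf(abs(sk)), 2 * stats.norm.sf(abs(ku))])

def combined_mc_pvalues(e_obs, n_rep=999, seed=0):
    rng = np.random.default_rng(seed)
    n = len(e_obs)
    tippett = lambda p: p.min()                # reject for small minimum p-value
    fisher = lambda p: -2.0 * np.log(p).sum()  # reject for large Fisher statistic
    p_obs = individual_pvalues(e_obs)
    t_obs, f_obs = tippett(p_obs), fisher(p_obs)
    t_cnt = f_cnt = 0
    for _ in range(n_rep):                     # joint null distribution by simulation
        p_sim = individual_pvalues(rng.standard_normal(n))
        t_cnt += tippett(p_sim) <= t_obs
        f_cnt += fisher(p_sim) >= f_obs
    return (1 + t_cnt) / (n_rep + 1), (1 + f_cnt) / (n_rep + 1)

e = np.random.default_rng(5).standard_t(df=5, size=100)   # fat-tailed "residuals"
print(combined_mc_pvalues(e))

Because the whole vector of individual p-values is simulated jointly, dependence between the component tests is accounted for automatically and the combined level is controlled exactly.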

Relevance:

10.00%

Publisher:

Abstract:

An English abstract is also available.

Relevance:

10.00%

Publisher:

Abstract:

An English abstract is also available.

Relevance:

10.00%

Publisher:

Abstract:

Article published with the permission of the Chambre des notaires du Québec, as part of the professional development courses for notaries (cours de perfectionnement du notariat).