971 results for Sequential Monte Carlo methods
Abstract:
In the literature on tests of normality, much concern has been expressed over the problems associated with residual-based procedures. Indeed, the specialized tables of critical points needed to perform the tests have been derived for the location-scale model; hence reliance on the available significance points in the context of regression models may cause size distortions. We propose a general solution to the problem of controlling the size of normality tests for the disturbances of standard linear regressions, based on the technique of Monte Carlo tests.
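As a rough illustration of the Monte Carlo test technique invoked in this abstract, the sketch below simulates the null distribution of a residual-based normality statistic and computes an exact MC p-value. The regression design, the skewness/kurtosis statistic and the replication count are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch of a Monte Carlo (MC) test of normality on regression residuals.
# Assumption: the statistic is pivotal under normal errors, so its null distribution
# can be simulated by drawing artificial normal disturbances through the same design X.
import numpy as np

rng = np.random.default_rng(0)

def ols_residuals(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def sk_kurt_stat(e):
    """Jarque-Bera-type statistic from residual skewness and excess kurtosis."""
    z = (e - e.mean()) / e.std()
    n = len(z)
    skew = np.mean(z**3)
    kurt = np.mean(z**4) - 3.0
    return n * (skew**2 / 6.0 + kurt**2 / 24.0)

def mc_pvalue(y, X, n_rep=99):
    """Exact MC p-value: rank the observed statistic among simulated null statistics."""
    s0 = sk_kurt_stat(ols_residuals(y, X))
    sims = np.array([sk_kurt_stat(ols_residuals(rng.standard_normal(len(y)), X))
                     for _ in range(n_rep)])
    return (1 + np.sum(sims >= s0)) / (n_rep + 1)

# toy regression with skewed (chi-square) errors, so normality should tend to be rejected
n = 100
X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])
y = X @ np.array([1.0, 0.5, -0.3]) + (rng.chisquare(3, n) - 3)
print("MC p-value:", mc_pvalue(y, X))
```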
Abstract:
In this paper we propose exact likelihood-based mean-variance efficiency tests of the market portfolio in the context of the Capital Asset Pricing Model (CAPM), allowing for a wide class of error distributions which includes normality as a special case. These tests are developed in the framework of multivariate linear regressions (MLR). It is well known, however, that despite their simple statistical structure, standard asymptotically justified MLR-based tests are unreliable. In financial econometrics, exact tests have been proposed for a few specific hypotheses [Jobson and Korkie (Journal of Financial Economics, 1982), MacKinlay (Journal of Financial Economics, 1987), Gibbons, Ross and Shanken (Econometrica, 1989), Zhou (Journal of Finance, 1993)], most of which depend on normality. For the Gaussian model, our tests correspond to Gibbons, Ross and Shanken's mean-variance efficiency tests. In non-Gaussian contexts, we reconsider mean-variance efficiency tests allowing for multivariate Student-t and Gaussian mixture errors. Our framework allows us to shed more light on whether the normality assumption is too restrictive when testing the CAPM. We also propose exact multivariate diagnostic checks (including tests for multivariate GARCH and a multivariate generalization of the well-known variance ratio tests) and goodness-of-fit tests, as well as a set estimate for the intervening nuisance parameters. Our results [over five-year subperiods] show the following: (i) multivariate normality is rejected in most subperiods, (ii) residual checks reveal no significant departures from the multivariate i.i.d. assumption, and (iii) mean-variance efficiency of the market portfolio is not rejected as frequently once the possibility of non-normal errors is allowed for.
Abstract:
In this paper, we propose several finite-sample specification tests for multivariate linear regressions (MLR) with applications to asset pricing models. We focus on departures from the assumption of i.i.d. errors, at the univariate and multivariate levels, with Gaussian and non-Gaussian (including Student t) errors. The univariate tests studied extend existing exact procedures by allowing for unspecified parameters in the error distributions (e.g., the degrees of freedom in the case of the Student t distribution). The multivariate tests are based on properly standardized multivariate residuals to ensure invariance to MLR coefficients and error covariances. We consider tests for serial correlation, tests for multivariate GARCH and sign-type tests against general dependencies and asymmetries. The procedures proposed provide exact versions of those applied in Shanken (1990), which consist in combining univariate specification tests. Specifically, we combine tests across equations using the MC test procedure to avoid Bonferroni-type bounds. Since non-Gaussian-based tests are not pivotal, we apply the "maximized MC" (MMC) test method [Dufour (2002)], where the MC p-value for the tested hypothesis (which depends on nuisance parameters) is maximized (with respect to these nuisance parameters) to control the test's significance level. The tests proposed are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995. Our empirical results reveal the following. Whereas univariate exact tests indicate significant serial correlation, asymmetries and GARCH in some equations, such effects are much less prevalent once error cross-equation covariances are accounted for. In addition, significant departures from the i.i.d. hypothesis are less evident once we allow for non-Gaussian errors.
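The following sketch illustrates the "maximized MC" (MMC) idea mentioned in this abstract: when the null distribution of the statistic depends on a nuisance parameter, the MC p-value is computed for each candidate value and the test rejects only if the largest p-value falls below the level. The kurtosis statistic, the grid of Student-t degrees of freedom and the data are illustrative assumptions, not the paper's specification (which maximizes over the nuisance-parameter space rather than a small grid).

```python
# Minimal sketch of a maximized Monte Carlo (MMC) test over a nuisance parameter.
# Hypothetical setup: Student-t errors with unknown degrees of freedom nu.
import numpy as np

rng = np.random.default_rng(1)

def kurtosis_stat(e):
    z = (e - e.mean()) / e.std()
    return np.mean(z**4)          # sample kurtosis of standardized residuals

def mc_pvalue_given_nu(s0, n, nu, n_rep=99):
    # MC p-value when the null distribution is simulated at a fixed nu
    sims = np.array([kurtosis_stat(rng.standard_t(nu, n)) for _ in range(n_rep)])
    return (1 + np.sum(sims >= s0)) / (n_rep + 1)

def mmc_pvalue(e, nu_grid=(5, 8, 12, 20, 50), n_rep=99):
    s0 = kurtosis_stat(e)
    # maximize the MC p-value over the nuisance-parameter grid (illustrative grid)
    return max(mc_pvalue_given_nu(s0, len(e), nu, n_rep) for nu in nu_grid)

e = rng.standard_normal(120)       # pretend these are standardized residuals
print("MMC p-value:", mmc_pvalue(e))
```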
Abstract:
We study the problem of testing the error distribution in a multivariate linear regression (MLR) model. The tests are functions of appropriately standardized multivariate least-squares residuals whose distribution is invariant to the unknown cross-equation error covariance matrix. Empirical multivariate skewness and kurtosis criteria are then compared to simulation-based estimates of their expected value under the hypothesized distribution. Special cases considered include testing multivariate normal, Student t, normal mixture and stable error models. In the Gaussian case, finite-sample versions of the standard multivariate skewness and kurtosis tests are derived. To do this, we exploit simple, double and multi-stage Monte Carlo test methods. For non-Gaussian distribution families involving nuisance parameters, confidence sets are derived for the nuisance parameters and the error distribution. The procedures considered are evaluated in a small simulation experiment. Finally, the tests are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995.
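A minimal sketch of the multivariate idea in this abstract: least-squares residuals are standardized so that their empirical covariance is the identity (making the criteria invariant to the unknown error covariance), Mardia-type skewness and kurtosis are computed, and MC p-values are obtained by redoing the computation on artificial Gaussian errors. The regression design, one-sided rejection regions and replication count are illustrative assumptions.

```python
# Minimal sketch: Monte Carlo tests based on standardized multivariate residuals.
import numpy as np

rng = np.random.default_rng(5)

def standardized_residuals(Y, X):
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    U = Y - X @ B
    S = U.T @ U / len(U)
    # transform so the empirical covariance of the residuals is the identity
    return U @ np.linalg.inv(np.linalg.cholesky(S)).T

def mardia(Z):
    n = len(Z)
    G = Z @ Z.T                                   # Gram matrix of standardized residuals
    skew = (G**3).sum() / n**2                    # Mardia-type skewness
    kurt = (np.diag(G)**2).sum() / n              # Mardia-type kurtosis
    return skew, kurt

def mc_pvalues(Y, X, n_rep=99):
    s0, k0 = mardia(standardized_residuals(Y, X))
    sims = np.array([mardia(standardized_residuals(
        rng.standard_normal(Y.shape), X)) for _ in range(n_rep)])
    p_skew = (1 + np.sum(sims[:, 0] >= s0)) / (n_rep + 1)
    p_kurt = (1 + np.sum(sims[:, 1] >= k0)) / (n_rep + 1)
    return p_skew, p_kurt

n, k, q = 120, 5, 2
X = np.column_stack([np.ones(n), rng.standard_normal((n, q))])
Y = X @ rng.standard_normal((q + 1, k)) + rng.standard_normal((n, k))
print("MC p-values (skewness, kurtosis):", mc_pvalues(Y, X))
```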
Abstract:
We discuss statistical inference problems associated with identification and testability in econometrics, and we emphasize the common nature of the two issues. After reviewing the relevant statistical notions, we consider in turn inference in nonparametric models and recent developments on weakly identified models (or weak instruments). We point out that many hypotheses for which test procedures are commonly proposed are not testable at all, while some frequently used econometric methods are fundamentally inappropriate for the models considered. Such situations lead to ill-defined statistical problems and are often associated with a misguided use of asymptotic distributional results. Concerning nonparametric hypotheses, we discuss three basic problems for which such difficulties occur: (1) testing a mean (or a moment) under (too) weak distributional assumptions; (2) inference under heteroskedasticity of unknown form; (3) inference in dynamic models with an unlimited number of parameters. Concerning weakly identified models, we stress that valid inference should be based on proper pivotal functions, a condition not satisfied by standard Wald-type methods based on standard errors, and we discuss recent developments in this field, mainly from the viewpoint of building valid tests and confidence sets. The techniques discussed include alternative proposed statistics, bounds, projection, split-sampling, conditioning and Monte Carlo tests. The possibility of deriving a finite-sample distributional theory, robustness to the presence of weak instruments, and robustness to the specification of a model for endogenous explanatory variables are stressed as important criteria for assessing alternative procedures.
Abstract:
We introduce a procedure to infer the repeated-game strategies that generate actions in experimental choice data. We apply the technique to a set of experiments in which human subjects play a repeated Prisoner's Dilemma. The technique suggests that two types of strategies underlie the data.
Abstract:
In this paper, we propose exact inference procedures for asset pricing models that can be formulated in the framework of a multivariate linear regression, such as the CAPM, allowing for stable error distributions. The normality assumption on the distribution of stock returns is usually rejected in empirical studies because of excess kurtosis and asymmetry. To model such data, we propose a comprehensive statistical approach which allows for alternative, possibly asymmetric, heavy-tailed distributions without the use of large-sample approximations. The methods suggested are based on Monte Carlo test techniques. Goodness-of-fit tests are formally incorporated to ensure that the error distributions considered are empirically sustainable, and from these tests exact confidence sets for the unknown tail-area and asymmetry parameters of the stable error distribution are derived. Tests for the efficiency of the market portfolio (zero intercepts) which explicitly allow for the presence of (unknown) nuisance parameters in the stable error distribution are also derived. The methods proposed are applied to monthly returns on 12 portfolios of the New York Stock Exchange over the period 1926-1995 (5-year subperiods). We find that stable, possibly skewed, distributions provide a statistically significant improvement in goodness-of-fit and lead to fewer rejections of the efficiency hypothesis.
Abstract:
This article illustrates the applicability of resampling-based methods to (simultaneous) multiple testing, for various econometric problems. Joint hypotheses are a routine consequence of economic theory, so that controlling the rejection probability of combinations of tests is a problem frequently encountered in econometric and statistical settings. In this regard, it is well known that ignoring the joint nature of multiple hypotheses can cause the level of the overall procedure to exceed the nominal level considerably. Whereas most multiple-inference methods are conservative in the presence of non-independent statistics, the tests we propose aim to control the significance level exactly. To do so, we consider combined test criteria originally proposed for independent statistics. Applying the Monte Carlo test method, we show how these test-combination methods can be applied to such cases without resorting to asymptotic approximations. After reviewing earlier results on this topic, we show how such a methodology can be used to construct normality tests based on several moments for the errors of linear regression models. For this problem, we propose a finite-sample valid generalization of the asymptotic test proposed by Kiefer and Salmon (1983), as well as combined tests following the methods of Tippett and Pearson-Fisher. We find empirically that the test procedures corrected by the Monte Carlo test method do not suffer from the bias (or under-rejection) problem often reported in this literature, notably against platykurtic distributions, and yield substantial power gains relative to the usual combination methods.
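The sketch below illustrates the combination idea described in this abstract: skewness- and kurtosis-based statistics are turned into individual MC p-values, combined with Tippett's (minimum p) and Fisher's (-2 sum of log p) rules, and the combined criteria are themselves calibrated by a second Monte Carlo layer so the joint level is controlled despite the dependence between components. The statistics, replication counts and leave-one-out calibration are illustrative assumptions, not the paper's exact generalization of Kiefer and Salmon (1983).

```python
# Minimal sketch of Monte Carlo tests combined via Tippett and Fisher criteria.
import numpy as np

rng = np.random.default_rng(2)

def skew_kurt(e):
    z = (e - e.mean()) / e.std()
    return np.mean(z**3) ** 2, (np.mean(z**4) - 3.0) ** 2

def mc_pvalues(stats, null_stats):
    """Individual MC p-values of each component statistic against its simulated null."""
    n_rep = null_stats.shape[0]
    return (1 + (null_stats >= stats).sum(axis=0)) / (n_rep + 1)

def combined(pvals):
    tippett = pvals.min()                      # reject for a small minimum p-value
    fisher = -2.0 * np.log(pvals).sum()        # reject for a large Fisher statistic
    return tippett, fisher

n, n_rep = 100, 99
e0 = rng.standard_normal(n)                    # observed "residuals" (here drawn under H0)
null = np.array([skew_kurt(rng.standard_normal(n)) for _ in range(n_rep)])

p_obs = mc_pvalues(np.array(skew_kurt(e0)), null)
t0, f0 = combined(p_obs)

# second MC layer: recompute the combined criteria on each simulated sample (leave-one-out)
t_sim, f_sim = [], []
for i in range(n_rep):
    ti, fi = combined(mc_pvalues(null[i], np.delete(null, i, axis=0)))
    t_sim.append(ti)
    f_sim.append(fi)

p_tippett = (1 + np.sum(np.array(t_sim) <= t0)) / (n_rep + 1)
p_fisher = (1 + np.sum(np.array(f_sim) >= f0)) / (n_rep + 1)
print("combined p-values (Tippett, Fisher):", p_tippett, p_fisher)
```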
Abstract:
Statistical tests in vector autoregressive (VAR) models are typically based on large-sample approximations, involving the use of asymptotic distributions or bootstrap techniques. After documenting that such methods can be very misleading even with fairly large samples, especially when the number of lags or the number of equations is not small, we propose a general simulation-based technique that allows one to control completely the level of tests in parametric VAR models. In particular, we show that maximized Monte Carlo tests [Dufour (2002)] can provide provably exact tests for such models, whether they are stationary or integrated. Applications to order selection and causality testing are considered as special cases. The technique developed is applied to quarterly and monthly VAR models of the U.S. economy, comprising income, money, interest rates and prices, over the period 1965-1996.
Abstract:
Continuous-time Markov processes are widely used in attempts to explain the evolution of protein and nucleotide sequences along phylogenies. Probabilistic models based on such assumptions are designed to accommodate the spatial non-homogeneity of the functional and environmental constraints acting on these sequences. Recently, Markov-modulated models have been introduced to describe temporal changes in site-specific evolutionary rates (heterotachy). Other studies have shown that not only the strength but also the nature of the selective constraint acting on a site can vary over time. Here we propose to account for this evolutionary reality with a Markov-modulated model for proteins in which sites are allowed to change their amino-acid preferences over time. Posterior estimation of the various modulating parameters of the stochastic kernel using Monte Carlo methods is a considerable challenge, which we have partially met through parallel programming. Computational adjustments are also considered to speed up convergence toward the global optimum of this relatively complex multidimensional landscape. Qualitatively, our model appears able to capture signals of temporal heterogeneity from a data set whose evolutionary history is known to be rich in changes of substitutional regime. Performance tests further suggest that it fits the data better than an equivalent time-homogeneous model. Nevertheless, the substitution histories drawn from the posterior distribution are noisy and remain difficult to interpret from a biological standpoint.
Abstract:
This paper reports an uncertainty analysis of critical loads for acid deposition for a site in southern England, using the Steady State Mass Balance Model. The uncertainty bounds, distribution type and correlation structure for each of the 18 input parameters were considered explicitly, and overall uncertainty was estimated by Monte Carlo methods. Estimates of deposition uncertainty were made from measured data and an atmospheric dispersion model, and hence the uncertainty in exceedance could also be calculated. The uncertainties of the calculated critical loads were generally much lower than those of the input parameters, due to a "compensation of errors" mechanism: coefficients of variation ranged from 13% for CLmaxN to 37% for CL(A). With 1990 deposition, the probability that the critical load was exceeded was > 0.99; to reduce this probability to 0.50, a 63% reduction in deposition is required, and to 0.05, an 82% reduction. With 1997 deposition, which was lower than that in 1990, exceedance probabilities declined and uncertainties in exceedance narrowed as deposition uncertainty had less effect. The parameters contributing most to the uncertainty in critical loads were weathering rates, base cation uptake rates, and the choice of critical chemical value, indicating possible research priorities. However, the different critical load parameters were to some extent sensitive to different input parameters. The application of such probabilistic results to environmental regulation is discussed.
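The sketch below illustrates the general Monte Carlo uncertainty analysis described in this abstract: input parameters are sampled from assumed distributions, a deliberately simplified (toy) mass-balance critical load is computed for each draw, and the output distribution is summarized by its coefficient of variation and the probability that a given deposition exceeds the critical load. The formula, distributions and numbers are illustrative assumptions, not the Steady State Mass Balance Model or the study's data.

```python
# Minimal sketch of Monte Carlo uncertainty propagation for a toy critical-load calculation.
import numpy as np

rng = np.random.default_rng(3)
n = 5000                                            # Monte Carlo sample size (illustrative)

# illustrative input distributions (keq ha-1 yr-1): "weathering", "uptake", "critical leaching"
weathering = rng.lognormal(mean=np.log(0.8), sigma=0.3, size=n)
bc_uptake  = rng.normal(loc=0.3, scale=0.08, size=n).clip(min=0.0)
crit_leach = rng.uniform(0.2, 0.5, size=n)

# toy critical load of acidity: buffering sources minus uptake plus tolerated leaching
cl_acidity = weathering - bc_uptake + crit_leach

cv = cl_acidity.std() / cl_acidity.mean()           # coefficient of variation of the output
deposition = 1.4                                    # hypothetical deposition to compare against
p_exceed = np.mean(deposition > cl_acidity)         # probability of exceedance

print(f"CV of critical load: {cv:.2f}")
print(f"P(exceedance) at deposition {deposition}: {p_exceed:.2f}")
```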
Abstract:
Critical loads are the basis for policies controlling emissions of acidic substances in Europe and elsewhere. They are assessed by several elaborate and ingenious models, each of which requires many parameters and has to be applied on a spatially distributed basis. Often the values of the input parameters are poorly known, calling into question the validity of the calculated critical loads. This paper attempts to quantify the uncertainty in the critical loads due to this "parameter uncertainty", using examples from the UK. Models used for calculating critical loads for deposition of acidity and nitrogen in forest and heathland ecosystems were tested at four contrasting sites. Uncertainty was assessed by Monte Carlo methods. Each input parameter or variable was assigned a value, range and distribution in as objective a fashion as possible. Each model was run 5000 times at each site using parameters sampled from these input distributions, and output distributions of various critical load parameters were calculated. The results were surprising. Confidence limits of the calculated critical loads were typically considerably narrower than those of most of the input parameters. This may be due to a "compensation of errors" mechanism. The range of possible critical load values at a given site is however rather wide, and the tails of the distributions are typically long. The deposition reductions required for a high level of confidence that the critical load is not exceeded are thus likely to be large. The implication for pollutant regulation is that requiring a high probability of non-exceedance is likely to carry high costs. The relative contribution of the input variables to critical load uncertainty varied from site to site: any input variable could be important, and thus it was not possible to identify variables as likely targets for research into narrowing uncertainties. Sites where a number of good measurements of input parameters were available had lower uncertainties, so in situ measurement could be a valuable way of reducing critical load uncertainty at particularly valuable or disputed sites. From a restricted number of samples, uncertainties in heathland critical loads appear comparable to those of coniferous forest, and nutrient nitrogen critical loads to those of acidity. It was important to include correlations between input variables in the Monte Carlo analysis, but the choice of statistical distribution type was of lesser importance. Overall, the analysis provided objective support for the continued use of critical loads in policy development.
Abstract:
The Boltzmann equation in the presence of boundary and initial conditions, which describes the general case of carrier transport in microelectronic devices, is analysed in terms of Monte Carlo theory. The classical Ensemble Monte Carlo algorithm, which was devised from merely phenomenological considerations of the initial and boundary carrier contributions, is now derived in a formal way. The approach allows us to suggest a set of event-biasing algorithms for statistical enhancement as an alternative to the population control technique, which is virtually the only algorithm currently used in particle simulators. The scheme for the self-consistent coupling of the Boltzmann and Poisson equations is considered for the case of weighted particles. It is shown that particles survive the successive iteration steps.
Abstract:
This paper introduces a method for simulating multivariate samples that have exact means, covariances, skewness and kurtosis. We introduce a new class of rectangular orthogonal matrices, which are fundamental to the methodology; we call these matrices L matrices. They may be deterministic, parametric or data-specific in nature. The target moments determine the L matrix; then infinitely many random samples with the same exact moments may be generated by multiplying the L matrix by arbitrary random orthogonal matrices. This methodology is thus termed "ROM simulation". Considering certain elementary types of random orthogonal matrices, we demonstrate that they generate samples with different characteristics. ROM simulation has applications to many problems that are resolved using standard Monte Carlo methods. But no parametric assumptions are required (unless parametric L matrices are used), so there is no sampling error caused by the discrete approximation of a continuous distribution, which is a major source of error in standard Monte Carlo simulations. For illustration, we apply ROM simulation to determine the value-at-risk of a stock portfolio.
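As a rough illustration of the exact-moment-matching idea behind ROM simulation, the sketch below builds a base sample whose sample mean and covariance are exact and then post-multiplies it by arbitrary random orthogonal matrices to obtain infinitely many distinct samples with the same exact first two moments. Matching skewness and kurtosis exactly requires the special L matrices of the paper, which are not reproduced here; everything in the sketch is an illustrative assumption.

```python
# Minimal sketch: exact mean/covariance samples regenerated via random orthogonal matrices.
import numpy as np

rng = np.random.default_rng(4)

def exact_mean_cov_sample(mu, sigma, n, rng):
    k = len(mu)
    z = rng.standard_normal((n, k))
    z -= z.mean(axis=0)                                # exact zero sample means
    # whiten so the sample covariance is exactly the identity
    c = np.linalg.cholesky(z.T @ z / n)
    z = z @ np.linalg.inv(c).T
    q, _ = np.linalg.qr(rng.standard_normal((k, k)))   # arbitrary random orthogonal matrix
    a = np.linalg.cholesky(sigma)
    return z @ q @ a.T + mu                            # exact mean mu and covariance sigma

mu = np.array([0.01, 0.02])
sigma = np.array([[0.04, 0.01], [0.01, 0.09]])
x = exact_mean_cov_sample(mu, sigma, n=500, rng=rng)
print("sample mean:", x.mean(axis=0))
print("sample cov :", x.T @ x / 500 - np.outer(x.mean(axis=0), x.mean(axis=0)))
```

Each new draw of the orthogonal matrix q yields a different sample with identical first two sample moments, which is the mechanism the abstract describes for the full four-moment case.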
Abstract:
We exploit a theory of price linkages that lends itself readily to empirical examination using Markov chain Monte Carlo methods. The methodology facilitates classification and discrimination among alternative regimes in economic time series. The theory and procedures are applied to annual series (1955-1992) on the U.S. beef sector.