82 results for heteroskedasticity


Abstract:

In this paper, we propose a multivariate GARCH model with a time-varying conditional correlation structure. The new double smooth transition conditional correlation (DSTCC) GARCH model extends the smooth transition conditional correlation (STCC) GARCH model of Silvennoinen and Teräsvirta (2005) by including a second variable according to which the correlations change smoothly between states of constant correlations. A Lagrange multiplier test is derived to test the constancy of correlations against the DSTCC-GARCH model, and another to test for an additional transition in the STCC-GARCH framework. Further specification tests, intended to aid the model-building procedure, are also considered. Analytical expressions for the test statistics and the required derivatives are provided. Applying the model to stock and bond futures data, we find that the correlation pattern between them changed dramatically around the turn of the century. The model is also applied to a selection of world stock indices, where we find evidence of an increasing degree of integration in capital markets.
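
The defining feature of the DSTCC specification is a conditional correlation matrix that moves smoothly between states of constant correlation as two transition variables evolve. The following is a minimal sketch of that idea, assuming logistic transition functions and illustrative correlation states and parameter values; it is not the estimated model from the paper.

```python
import numpy as np

def logistic_transition(s, gamma, c):
    """Smooth transition function G(s) in (0, 1); G -> 0 or 1 as s moves away from c."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

def dstcc_correlation(s1, s2, R11, R12, R21, R22,
                      gamma1=5.0, c1=0.0, gamma2=5.0, c2=0.0):
    """Time-varying correlation matrix interpolating between four constant-correlation
    states according to two transition variables s1 and s2 (illustrative parameters)."""
    g1 = logistic_transition(s1, gamma1, c1)
    g2 = logistic_transition(s2, gamma2, c2)
    return ((1 - g1) * (1 - g2) * R11 + (1 - g1) * g2 * R12
            + g1 * (1 - g2) * R21 + g1 * g2 * R22)

# Example: correlation drifting from 0.2 to 0.8 as calendar time (s1) and a second
# transition variable (s2) move through their transition midpoints.
R_low  = np.array([[1.0, 0.2], [0.2, 1.0]])
R_high = np.array([[1.0, 0.8], [0.8, 1.0]])
print(dstcc_correlation(-1.0, -1.0, R_low, R_low, R_low, R_high))
print(dstcc_correlation( 1.0,  1.0, R_low, R_low, R_low, R_high))
```

With one of the two transition functions pinned at zero or one, this expression collapses back to a single-transition STCC-type structure, which is how the second transition in the abstract can be tested against.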

Abstract:

This paper confirms the presence of GARCH(1,1) effects in stock return time series from Vietnam's newly established stock market. We performed tests on four different time series: market returns (VN-Index) and the return series of the first four individual stocks listed on the Vietnamese exchange (the Ho Chi Minh City Securities Trading Center) since August 2000. The results are broadly consistent with previously reported empirical studies on other markets.
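
A GARCH(1,1) fit of the kind tested here can be reproduced with the Python `arch` package; the file name, column names and choice of series below are placeholders for whichever return series (the VN-Index or one of the listed stocks) is being examined.

```python
import pandas as pd
from arch import arch_model

# Hypothetical input: a CSV of daily closing prices for the VN-Index or a listed stock.
prices = pd.read_csv("vn_index.csv", parse_dates=["date"], index_col="date")["close"]
returns = 100 * prices.pct_change().dropna()  # percentage returns help the optimiser

# GARCH(1,1) with a constant mean and Gaussian innovations.
model = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1, dist="normal")
result = model.fit(disp="off")
print(result.summary())

# Significant alpha[1] and beta[1] estimates point to GARCH(1,1) effects
# (volatility clustering) in the return series.
```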

Abstract:

A wide range of tests for heteroskedasticity has been proposed in the econometrics and statistics literature. Although a few exact homoskedasticity tests are available, the commonly employed procedures are generally based on asymptotic approximations which may not provide good size control in finite samples. A number of recent studies have sought to improve the reliability of common heteroskedasticity tests using Edgeworth, Bartlett, jackknife and bootstrap methods, yet these remain approximate. In this paper, we describe a solution to the problem of controlling the size of homoskedasticity tests in linear regression contexts. We study procedures based on the standard test statistics [e.g., the Goldfeld-Quandt, Glejser, Bartlett, Cochran, Hartley, Breusch-Pagan-Godfrey, White and Szroeter criteria] as well as tests for autoregressive conditional heteroskedasticity (ARCH-type models). We also suggest several extensions of the existing procedures (sup-type and combined test statistics) to allow for unknown breakpoints in the error variance. We exploit the technique of Monte Carlo tests to obtain provably exact p-values for both the standard and the newly suggested tests. We show that the MC test procedure conveniently solves the intractable null distribution problems raised in particular by the sup-type and combined test statistics, as well as (where relevant) unidentified nuisance parameter problems under the null hypothesis. The proposed method works in exactly the same way with Gaussian and non-Gaussian disturbance distributions [such as heavy-tailed or stable distributions]. The performance of the procedures is examined by simulation. The Monte Carlo experiments focus on: (1) ARCH, GARCH, and ARCH-in-mean alternatives; (2) the case where the variance increases monotonically with (i) one exogenous variable or (ii) the mean of the dependent variable; (3) grouped heteroskedasticity; (4) breaks in variance at unknown points. We find that the proposed tests achieve perfect size control and have good power.
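
The Monte Carlo test technique replaces the asymptotic null distribution of a chosen statistic with one simulated under the null using the same regressor matrix. A minimal sketch with a Breusch-Pagan statistic and Gaussian errors follows; the data-generating step is purely illustrative, and the paper's sup-type and combined statistics are not implemented here.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(0)

# Illustrative regression with heteroskedastic errors.
n = 100
X = sm.add_constant(rng.normal(size=(n, 2)))
u = rng.normal(size=n) * np.exp(0.5 * X[:, 1])   # error variance depends on a regressor
y = X @ np.array([1.0, 0.5, -0.3]) + u

def bp_stat(y, X):
    resid = sm.OLS(y, X).fit().resid
    return het_breuschpagan(resid, X)[0]          # LM statistic

# Monte Carlo p-value: under Gaussian errors the statistic is pivotal given X, so its
# exact null distribution can be simulated with homoskedastic draws and the same design.
observed = bp_stat(y, X)
n_rep = 999
simulated = np.array([bp_stat(rng.normal(size=n), X) for _ in range(n_rep)])
p_value = (1 + np.sum(simulated >= observed)) / (n_rep + 1)
print(f"observed LM = {observed:.2f}, MC p-value = {p_value:.3f}")
```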

Abstract:

Conditional heteroskedasticity is an important feature of many macroeconomic and financial time series. Standard residual-based bootstrap procedures for dynamic regression models treat the regression error as i.i.d.; these procedures are invalid in the presence of conditional heteroskedasticity. We establish the asymptotic validity of three easy-to-implement alternative bootstrap proposals for stationary autoregressive processes with martingale difference sequence (m.d.s.) errors subject to possible conditional heteroskedasticity of unknown form. These proposals are the fixed-design wild bootstrap, the recursive-design wild bootstrap and the pairwise bootstrap. In a simulation study, all three procedures tend to be more accurate in small samples than the conventional large-sample approximation based on robust standard errors. In contrast, standard residual-based bootstrap methods for models with i.i.d. errors may be very inaccurate if the i.i.d. assumption is violated. We conclude that in many empirical applications the proposed robust bootstrap procedures should routinely replace conventional bootstrap procedures for autoregressions based on the i.i.d. error assumption.
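
A minimal sketch of one of the three proposals, the recursive-design wild bootstrap for an AR(1) without intercept, is given below; the ARCH-type error process, the Rademacher multipliers and the bootstrapped quantity are illustrative choices rather than the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_ar1(y):
    """OLS estimate of rho in y_t = rho * y_{t-1} + e_t (no intercept, for brevity)."""
    y_lag, y_cur = y[:-1], y[1:]
    rho = np.dot(y_lag, y_cur) / np.dot(y_lag, y_lag)
    resid = y_cur - rho * y_lag
    return rho, resid

# Illustrative data: AR(1) with conditionally heteroskedastic (ARCH-type) errors.
n = 400
e = rng.standard_normal(n)
for t in range(1, n):
    e[t] *= np.sqrt(0.2 + 0.7 * e[t - 1] ** 2)
y = np.empty(n)
y[0] = e[0]
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + e[t]

rho_hat, resid = fit_ar1(y)

# Recursive-design wild bootstrap: rebuild the series recursively with residuals
# multiplied by i.i.d. Rademacher draws, which preserves conditional heteroskedasticity.
boot_rho = []
for _ in range(999):
    eta = rng.choice([-1.0, 1.0], size=resid.size)
    e_star = resid * eta
    y_star = np.empty(n)
    y_star[0] = y[0]
    for t in range(1, n):
        y_star[t] = rho_hat * y_star[t - 1] + e_star[t - 1]
    boot_rho.append(fit_ar1(y_star)[0])

ci = np.percentile(boot_rho, [2.5, 97.5])
print(f"rho_hat = {rho_hat:.3f}, 95% bootstrap interval = [{ci[0]:.3f}, {ci[1]:.3f}]")
```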

Abstract:

This paper presents a quantitative measure of the heteroskedasticity of a time series. Heteroskedasticity levels are measured by recursively decomposing the examined time series into homoskedastic segments: each segment is decomposed into smaller segments if it tests positive in heteroskedasticity tests. The final quantified heteroskedasticity level is the number of homoskedastic segments obtained. The proposed measure is robust and detects heteroskedasticity even in datasets with small average variance.
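
The recursive decomposition can be sketched roughly as follows; since the abstract does not fix a particular segment-level test, the sketch assumes a Levene-type comparison of the variances of the two halves of each segment and an arbitrary minimum segment length.

```python
import numpy as np
from scipy import stats

def segment_is_heteroskedastic(x, alpha=0.05):
    """Illustrative segment-level check: compare the variances of the two halves."""
    half = len(x) // 2
    _, p_value = stats.levene(x[:half], x[half:])
    return p_value < alpha

def count_homoskedastic_segments(x, min_len=30):
    """Recursively split the series until each segment looks homoskedastic;
    the heteroskedasticity measure is the number of resulting segments."""
    if len(x) < 2 * min_len or not segment_is_heteroskedastic(x):
        return 1
    half = len(x) // 2
    return (count_homoskedastic_segments(x[:half], min_len)
            + count_homoskedastic_segments(x[half:], min_len))

# Example: a series whose scale doubles twice should need more segments
# than a homoskedastic one of the same length.
rng = np.random.default_rng(2)
calm = rng.normal(scale=1.0, size=600)
bursty = np.concatenate([rng.normal(scale=s, size=200) for s in (1.0, 2.0, 4.0)])
print(count_homoskedastic_segments(calm), count_homoskedastic_segments(bursty))
```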

Abstract:

In econometrics, heteroskedasticity refers to the case in which the variances of the error terms are not equal. Heteroskedastic time series pose a challenge to many forecasting models, yet the available solutions adopt the strategy of accommodating heteroskedasticity in the time series and treating it as a type of noise. Several statistical tests have been developed over the past three decades to determine whether a time series exhibits heteroskedastic behaviour. This paper presents a novel strategy for handling this problem by deriving a quantifying measure of heteroskedasticity. The proposed measure relies on the definition of heteroskedasticity as time-varying variance in the series. In this work, heteroskedasticity is measured by calculating local variances using linear filters, estimating variance trends, calculating changes in variance slopes, and finally obtaining the average slope angle. The results confirm that the proposed index agrees with widely used heteroskedasticity tests.
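
Read literally, the index is built from local variances (a linear filter over squared deviations), variance trends, and an average slope angle. The sketch below follows that recipe under assumed choices of window length, number of trend blocks and angle scaling; it is an illustration of the idea, not the authors' exact construction.

```python
import numpy as np

def heteroskedasticity_index(x, window=50, n_blocks=10):
    """Illustrative index: average absolute slope angle of the local-variance profile."""
    x = np.asarray(x, dtype=float)
    dev2 = (x - x.mean()) ** 2
    # Local variances via a moving-average (linear) filter over squared deviations.
    kernel = np.ones(window) / window
    local_var = np.convolve(dev2, kernel, mode="valid")
    # Slope of the variance trend within successive blocks, converted to angles.
    blocks = np.array_split(local_var, n_blocks)
    slopes = [np.polyfit(np.arange(len(b)), b, 1)[0] for b in blocks]
    angles = np.degrees(np.arctan(slopes))
    return np.mean(np.abs(angles))

rng = np.random.default_rng(3)
homosked = rng.normal(size=2000)
heterosked = rng.normal(size=2000) * np.linspace(1.0, 3.0, 2000)  # growing variance
print(heteroskedasticity_index(homosked), heteroskedasticity_index(heterosked))
```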

Abstract:

When testing for a unit root in a time series, in spite of the well-known power problem of univariate tests, it is quite common to use only the information regarding the autoregressive behaviour contained in that series. In a series of influential papers, Elliott et al. (Efficient tests for an autoregressive unit root, Econometrica 64, 813–836, 1996), Hansen (Rethinking the univariate approach to unit root testing: using covariates to increase power, Econometric Theory 11, 1148–1171, 1995a) and Seo (Distribution theory for unit root tests with conditional heteroskedasticity, Journal of Econometrics 91, 113–144, 1999) showed that this practice can be rather costly and that the inclusion of the extraneous information contained in the near-integratedness of many economic variables, their heteroskedasticity and their correlation with other covariates can lead to substantial power gains. In this article, we show how these information sets can be combined into a single unit root test.
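
The combined test itself is not available in standard libraries, but one of its ingredients, the GLS-detrended unit root test of Elliott et al. (1996), is implemented in the Python `arch` package; comparing it with a plain ADF test on a near-integrated series illustrates the kind of power gain the article builds on. The simulated series and its parameters are illustrative.

```python
import numpy as np
from arch.unitroot import ADF, DFGLS

# Illustrative near-integrated series: rho = 0.97, where the plain ADF test has low power.
rng = np.random.default_rng(4)
n = 250
y = np.empty(n)
y[0] = 0.0
for t in range(1, n):
    y[t] = 0.97 * y[t - 1] + rng.standard_normal()

print(ADF(y, trend="c").summary())    # standard augmented Dickey-Fuller test
print(DFGLS(y, trend="c").summary())  # GLS-detrended test of Elliott et al. (1996)
```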

Abstract:

The heteroskedasticity-consistent covariance matrix estimator proposed by White (1980), also known as HC0, is commonly used in practical applications and is implemented in a number of statistical software packages. Cribari-Neto, Ferrari & Cordeiro (2000) developed a bias-adjustment scheme that delivers bias-corrected White estimators. There are several variants of the original White estimator that are also commonly used by practitioners, including the HC1, HC2 and HC3 estimators, which have proven to have superior small-sample behavior relative to White's estimator. This paper defines a general bias-correction mechanism that can be applied not only to White's estimator but also to its variants, such as HC1, HC2 and HC3. Numerical evidence on the usefulness of the proposed corrections is also presented. Overall, the results favor the sequence of improved HC2 estimators.
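
The HC0-HC3 variants compared here are available directly in statsmodels; the bias-corrected sequences of Cribari-Neto, Ferrari & Cordeiro (2000) are not part of that library, so the snippet below only reproduces the uncorrected estimators on an illustrative simulated design.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 40                                  # small sample, where HC2/HC3 matter most
x = rng.normal(size=n)
X = sm.add_constant(x)
y = 1.0 + 0.5 * x + rng.normal(size=n) * (1 + np.abs(x))   # heteroskedastic errors

res = sm.OLS(y, X).fit()
for name, se in [("HC0", res.HC0_se), ("HC1", res.HC1_se),
                 ("HC2", res.HC2_se), ("HC3", res.HC3_se)]:
    print(name, np.round(np.asarray(se), 4))

# Equivalently, request a given variant at fit time:
res_hc3 = sm.OLS(y, X).fit(cov_type="HC3")
print(res_hc3.bse)
```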

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups. We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity and propose a modification of the test statistic that provides a better heteroskedasticity correction in our simulations.
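
The mechanism behind the heteroskedasticity, namely that group × time cells averaged over different numbers of individuals have variance sigma^2 / n_g, can be illustrated with a short simulation; the group sizes and error scale below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)

# Group x time averages of individual-level outcomes: Var(mean) = sigma^2 / n_g, so
# small groups produce noisier cells and the aggregate DID model is heteroskedastic.
sigma = 1.0
group_sizes = [50, 200, 1000, 5000]
n_periods = 20

for n_g in group_sizes:
    cell_means = [rng.normal(0.0, sigma, size=n_g).mean() for _ in range(n_periods)]
    print(f"group size {n_g:>5}: sd of group-time means = {np.std(cell_means):.4f} "
          f"(theory: {sigma / np.sqrt(n_g):.4f})")
```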

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we know how the heteroskedasticity is generated, which is the case when it is generated by variation in the number of observations per group. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative application of our method that relies on assumptions about stationarity and convergence of the moments of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference method to linear factor models when there are few treated groups. We also propose a permutation test for the synthetic control estimator that provides a better heteroskedasticity correction in our simulations than the test suggested by Abadie et al. (2010).

Abstract:

This doctoral thesis was conceived with the aim of understanding, analysing and, above all, modelling the statistical behaviour of financial time series. In this respect, the models that best capture the special characteristics of these series are conditional heteroskedasticity models, in discrete time when the intervals at which the data are collected allow it, and in continuous time when daily or intraday data are available. To this end, the thesis proposes several Bayesian estimators for the parameters of the discrete-time GARCH model (Bollerslev (1986)) and the continuous-time COGARCH model (Kluppelberg et al. (2004)). Chapter 1 introduces the characteristics of financial time series and presents the ARCH, GARCH and COGARCH models, together with their main properties. Mandelbrot (1963) pointed out that financial series are not stationary and that their increments show no autocorrelation, although their squares are correlated. He also noted that their volatility is not constant and that volatility clusters appear. He observed the lack of normality of financial series, due mainly to their leptokurtic behaviour, and also highlighted the seasonal effects these series display, analysing how they are affected by the time of year or the day of the week. Later, Black (1976) completed the list of special features by including the so-called leverage effects, which refer to the way positive and negative fluctuations in asset prices affect the volatility of the series differently.
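
The stylized facts attributed to Mandelbrot (1963), essentially uncorrelated returns whose squares are clearly autocorrelated together with leptokurtosis, are easy to reproduce by simulating the GARCH(1,1) model of Bollerslev (1986); the parameter values below are arbitrary but satisfy the stationarity and finite-fourth-moment conditions.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_garch11(n, omega=0.05, alpha=0.1, beta=0.85):
    """Simulate r_t = sigma_t * z_t with sigma_t^2 = omega + alpha*r_{t-1}^2 + beta*sigma_{t-1}^2."""
    r = np.zeros(n)
    sigma2 = np.full(n, omega / (1 - alpha - beta))   # start at the unconditional variance
    for t in range(1, n):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
        r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    return r

def acf1(x):
    """First-order sample autocorrelation."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

r = simulate_garch11(5000)
print("lag-1 ACF of returns:        ", round(acf1(r), 3))       # close to zero
print("lag-1 ACF of squared returns:", round(acf1(r ** 2), 3))  # clearly positive
print("excess kurtosis of returns:  ", round(np.mean(r ** 4) / np.mean(r ** 2) ** 2 - 3, 3))
```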