949 results for "Variance equation"
Abstract:
This paper revisits the issue of conditional volatility in real GDP growth rates for Canada, Japan, the United Kingdom, and the United States. Previous studies find high persistence in the volatility. This paper shows that this finding largely reflects a nonstationary variance: output growth in the four countries became noticeably less volatile over the past few decades. We employ the modified ICSS algorithm to detect structural change in the unconditional variance of output growth; one structural break exists in each of the four countries. We then use generalized autoregressive conditional heteroskedasticity (GARCH) specifications to model output growth and its volatility with and without the break in volatility. Once we incorporate the break in the variance equation of output for the four countries, the time-varying variance falls sharply in Canada, Japan, and the U.K. and disappears in the U.S., while excess kurtosis vanishes in Canada, Japan, and the U.S. and drops substantially in the U.K. That is, the integrated GARCH (IGARCH) effect proves spurious and the GARCH model is misspecified if researchers neglect a nonstationary unconditional variance.
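The mechanism behind this spurious-IGARCH finding can be illustrated in a few lines. This is a minimal sketch, not the paper's estimator: i.i.d. Gaussian shocks with a single break in the unconditional standard deviation (break point, sample size, and scales are all assumptions of the sketch), and no true GARCH dynamics at all.

```python
import numpy as np

rng = np.random.default_rng(0)

# Growth shocks with one break in the unconditional standard deviation
# (2.0 in the first half, 0.5 in the second), mimicking a Great
# Moderation; there are no true volatility dynamics in either regime.
n = 400
x = np.concatenate([2.0 * rng.standard_normal(n // 2),
                    0.5 * rng.standard_normal(n // 2)])

def acf1(s):
    """Lag-1 sample autocorrelation."""
    d = s - s.mean()
    return float((d[:-1] * d[1:]).sum() / (d * d).sum())

# The absolute series inherits the level shift across the break, so its
# lag-1 autocorrelation is large: volatility *looks* highly persistent.
rho_full = acf1(np.abs(x))

# Within the first (stable-variance) subsample the apparent persistence
# vanishes, which is exactly the paper's point about the neglected break.
rho_first = acf1(np.abs(x[: n // 2]))

print(rho_full, rho_first)
```

Fitting a GARCH model to the pooled series would attribute this level shift to near-integrated volatility dynamics, which is the spurious IGARCH effect the abstract describes.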
Abstract:
The objective of the present work is to propose a numerical and statistical approach, using computational fluid dynamics, for the study of the atmospheric pollutant dispersion. Modifications in the standard k-epsilon turbulence model and additional equations for the calculation of the variance of concentration are introduced to enhance the prediction of the flow field and scalar quantities. The flow field, the mean concentration and the variance of a flow over a two-dimensional triangular hill, with a finite-size point pollutant source, are calculated by a finite volume code and compared with published experimental results. A modified low Reynolds k-epsilon turbulence model was employed in this work, using the constant of the k-epsilon model C(mu)=0.03 to take into account the inactive atmospheric turbulence. The numerical results for the velocity profiles and the position of the reattachment point are in good agreement with the experimental results. The results for the mean and the variance of the concentration are also in good agreement with experimental results from the literature. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
This paper combines and generalizes a number of recent time series models of daily exchange rate series by using a SETAR model which also allows the variance equation of a GARCH specification for the error terms to be drawn from more than one regime. An application of the model to the French Franc/Deutschmark exchange rate demonstrates that out-of-sample forecasts for the exchange rate volatility are also improved when the restriction that the data are drawn from a single regime is removed. This result highlights the importance of considering both types of regime shift (i.e. thresholds in variance as well as in mean) when analysing financial time series.
Abstract:
Previous studies (e.g., Hamori, 2000; Ho and Tsui, 2003; Fountas et al., 2004) find high volatility persistence of economic growth rates using generalized autoregressive conditional heteroskedasticity (GARCH) specifications. This paper reexamines the Japanese case, using the same approach and showing that this finding of high volatility persistence reflects the Great Moderation, which features a sharp decline in the variance as well as two falls in the mean of the growth rates identified by Bai and Perron's (1998, 2003) multiple structural change test. Our empirical results provide new evidence. First, excess kurtosis drops substantially or disappears in the GARCH or exponential GARCH model that corrects for an additive outlier. Second, using the outlier-corrected data, the integrated GARCH effect or high volatility persistence remains in the specification once we introduce intercept-shift dummies into the mean equation. Third, the time-varying variance falls sharply only when we incorporate the break in the variance equation. Fourth, the ARCH-in-mean model finds no effect of our more accurate measure of output volatility on output growth, or of output growth on its volatility.
Abstract:
Recently, Fagiolo et al. (2008) find fat tails of economic growth rates after adjusting outliers, autocorrelation and heteroskedasticity. This paper employs US quarterly real output growth, showing that this finding of fat tails may reflect the Great Moderation. That is, leptokurtosis disappears after GARCH adjustment once we incorporate the break in the variance equation.
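The link between a variance break and apparent fat tails is simple arithmetic: pooling two Gaussian regimes with different variances yields a scale mixture of normals, which is leptokurtic even though each regime is Gaussian. A minimal sketch (regime scales and sample size are illustrative assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(42)

def excess_kurtosis(x):
    """Sample excess kurtosis: E[z^4] - 3 for standardized data."""
    z = (x - x.mean()) / x.std()
    return float((z ** 4).mean() - 3.0)

n = 100_000
# Homogeneous Gaussian growth rates: excess kurtosis near zero.
homog = rng.standard_normal(n)

# A one-time variance break (pre- vs. post-Great Moderation scale)
# makes the pooled sample a scale mixture of normals: fat-tailed
# even though each regime is itself Gaussian.
mixed = np.concatenate([2.0 * rng.standard_normal(n // 2),
                        0.5 * rng.standard_normal(n // 2)])

ek_homog = excess_kurtosis(homog)
ek_mixed = excess_kurtosis(mixed)
print(ek_homog, ek_mixed)
```

Once the break is modeled (e.g., in the GARCH variance equation), each regime's residuals are approximately Gaussian and the leptokurtosis disappears, as the abstract reports.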
Abstract:
We compare Bayesian methodology utilizing the freeware BUGS (Bayesian Inference Using Gibbs Sampling) with the traditional structural equation modelling approach based on another freeware package, Mx. Dichotomous and ordinal (three category) twin data were simulated according to different additive genetic and common environment models for phenotypic variation. Practical issues are discussed in using Gibbs sampling as implemented by BUGS to fit subject-specific Bayesian generalized linear models, where the components of variation may be estimated directly. The simulation study (based on 2000 twin pairs) indicated that there is a consistent advantage in using the Bayesian method to detect a correct model under certain specifications of additive genetics and common environmental effects. For binary data, both methods had difficulty in detecting the correct model when the additive genetic effect was low (between 10 and 20%) or of moderate range (between 20 and 40%). Furthermore, neither method could adequately detect a correct model that included a modest common environmental effect (20%) even when the additive genetic effect was large (50%). Power was significantly improved with ordinal data for most scenarios, except for the case of low heritability under a true ACE model. We illustrate and compare both methods using data from 1239 twin pairs over the age of 50 years, who were registered with the Australian National Health and Medical Research Council Twin Registry (ATR) and presented symptoms associated with osteoarthritis occurring in joints of the hand.
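For readers unfamiliar with ACE notation, the variance components being estimated satisfy simple expected-correlation identities for monozygotic (MZ) and dizygotic (DZ) twins. The classical Falconer algebra below is standard background, not the BUGS or Mx estimation procedure used in the paper:

```python
def falconer_ace(r_mz, r_dz):
    """Classical ACE decomposition from MZ and DZ twin correlations.
    Expected correlations:  r_MZ = a2 + c2,  r_DZ = 0.5*a2 + c2,
    so  a2 = 2*(r_MZ - r_DZ),  c2 = 2*r_DZ - r_MZ,  e2 = 1 - r_MZ,
    where a2 = additive genetic, c2 = common environment,
    e2 = unique environment shares of phenotypic variance."""
    a2 = 2.0 * (r_mz - r_dz)
    c2 = 2.0 * r_dz - r_mz
    e2 = 1.0 - r_mz
    return a2, c2, e2

# Example: a large additive genetic effect (50%) with a modest shared
# environmental effect (20%) -- the case both methods struggled to detect.
a2, c2, e2 = falconer_ace(0.70, 0.45)
print(a2, c2, e2)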
Gaussian estimates for the density of the non-linear stochastic heat equation in any space dimension
Resumo:
In this paper, we establish lower and upper Gaussian bounds for the probability density of the mild solution to the stochastic heat equation with multiplicative noise and in any space dimension. The driving perturbation is a Gaussian noise which is white in time with some spatially homogeneous covariance. These estimates are obtained using tools of the Malliavin calculus. The most challenging part is the lower bound, which is obtained by adapting a general method developed by Kohatsu-Higa to the underlying spatially homogeneous Gaussian setting. Both lower and upper estimates have the same form: a Gaussian density with a variance which is equal to that of the mild solution of the corresponding linear equation with additive noise.
Resumo:
The interpretation of the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) is based on a 4-factor model, which is only partially compatible with the mainstream Cattell-Horn-Carroll (CHC) model of intelligence measurement. The structure of cognitive batteries is frequently analyzed via exploratory factor analysis and/or confirmatory factor analysis. With classical confirmatory factor analysis, almost all crossloadings between latent variables and measures are fixed to zero in order to allow the model to be identified. However, inappropriate zero cross-loadings can contribute to poor model fit, distorted factors, and biased factor correlations; most important, they do not necessarily faithfully reflect theory. To deal with these methodological and theoretical limitations, we used a new statistical approach, Bayesian structural equation modeling (BSEM), among a sample of 249 French-speaking Swiss children (8-12 years). With BSEM, zero-fixed cross-loadings between latent variables and measures are replaced by approximate zeros, based on informative, small-variance priors. Results indicated that a direct hierarchical CHC-based model with 5 factors plus a general intelligence factor better represented the structure of the WISC-IV than did the 4-factor structure and the higher order models. Because a direct hierarchical CHC model was more adequate, it was concluded that the general factor should be considered as a breadth rather than a superordinate factor. Because it was possible for us to estimate the influence of each of the latent variables on the 15 subtest scores, BSEM allowed improvement of the understanding of the structure of intelligence tests and the clinical interpretation of the subtest scores.
Resumo:
In this article, the results of a modified SERVQUAL questionnaire (Parasuraman et al., 1991) are reported. The modifications consisted in substituting questionnaire items particularly suited to a specific service (banking) and context (county of Girona, Spain) for the original rather general and abstract items. These modifications led to more interpretable factors which accounted for a higher percentage of item variance. The data were submitted to various structural equation models which made it possible to conclude that the questionnaire contains items with a high measurement quality with respect to five identified dimensions of service quality which differ from those specified by Parasuraman et al. And are specific to the banking service. The two dimensions relating to the behaviour of employees have the greatest predictive power on overall quality and satisfaction ratings, which enables managers to use a low-cost reduced version of the questionnaire to monitor quality on a regular basis. It was also found that satisfaction and overall quality were perfectly correlated thus showing that customers do not perceive these concepts as being distinct
Resumo:
In this paper we propose exact likelihood-based mean-variance efficiency tests of the market portfolio in the context of Capital Asset Pricing Model (CAPM), allowing for a wide class of error distributions which include normality as a special case. These tests are developed in the frame-work of multivariate linear regressions (MLR). It is well known however that despite their simple statistical structure, standard asymptotically justified MLR-based tests are unreliable. In financial econometrics, exact tests have been proposed for a few specific hypotheses [Jobson and Korkie (Journal of Financial Economics, 1982), MacKinlay (Journal of Financial Economics, 1987), Gib-bons, Ross and Shanken (Econometrica, 1989), Zhou (Journal of Finance 1993)], most of which depend on normality. For the gaussian model, our tests correspond to Gibbons, Ross and Shanken’s mean-variance efficiency tests. In non-gaussian contexts, we reconsider mean-variance efficiency tests allowing for multivariate Student-t and gaussian mixture errors. Our framework allows to cast more evidence on whether the normality assumption is too restrictive when testing the CAPM. We also propose exact multivariate diagnostic checks (including tests for multivariate GARCH and mul-tivariate generalization of the well known variance ratio tests) and goodness of fit tests as well as a set estimate for the intervening nuisance parameters. Our results [over five-year subperiods] show the following: (i) multivariate normality is rejected in most subperiods, (ii) residual checks reveal no significant departures from the multivariate i.i.d. assumption, and (iii) mean-variance efficiency tests of the market portfolio is not rejected as frequently once it is allowed for the possibility of non-normal errors.
Resumo:
In this article, the results of a modified SERVQUAL questionnaire (Parasuraman et al., 1991) are reported. The modifications consisted in substituting questionnaire items particularly suited to a specific service (banking) and context (county of Girona, Spain) for the original rather general and abstract items. These modifications led to more interpretable factors which accounted for a higher percentage of item variance. The data were submitted to various structural equation models which made it possible to conclude that the questionnaire contains items with a high measurement quality with respect to five identified dimensions of service quality which differ from those specified by Parasuraman et al. And are specific to the banking service. The two dimensions relating to the behaviour of employees have the greatest predictive power on overall quality and satisfaction ratings, which enables managers to use a low-cost reduced version of the questionnaire to monitor quality on a regular basis. It was also found that satisfaction and overall quality were perfectly correlated thus showing that customers do not perceive these concepts as being distinct
Resumo:
A truly variance-minimizing filter is introduced and its per for mance is demonstrated with the Korteweg– DeV ries (KdV) equation and with a multilayer quasigeostrophic model of the ocean area around South Africa. It is recalled that Kalman-like filters are not variance minimizing for nonlinear model dynamics and that four - dimensional variational data assimilation (4DV AR)-like methods relying on per fect model dynamics have dif- ficulty with providing error estimates. The new method does not have these drawbacks. In fact, it combines advantages from both methods in that it does provide error estimates while automatically having balanced states after analysis, without extra computations. It is based on ensemble or Monte Carlo integrations to simulate the probability density of the model evolution. When obser vations are available, the so-called importance resampling algorithm is applied. From Bayes’ s theorem it follows that each ensemble member receives a new weight dependent on its ‘ ‘distance’ ’ t o the obser vations. Because the weights are strongly var ying, a resampling of the ensemble is necessar y. This resampling is done such that members with high weights are duplicated according to their weights, while low-weight members are largely ignored. In passing, it is noted that data assimilation is not an inverse problem by nature, although it can be for mulated that way . Also, it is shown that the posterior variance can be larger than the prior if the usual Gaussian framework is set aside. However , i n the examples presented here, the entropy of the probability densities is decreasing. The application to the ocean area around South Africa, gover ned by strongly nonlinear dynamics, shows that the method is working satisfactorily . The strong and weak points of the method are discussed and possible improvements are proposed.
Resumo:
OBJETIVO: comparar medidas de tamanhos dentários, suas reprodutibilidades e a aplicação da equação de regressão de Tanaka e Johnston na predição do tamanho dos caninos e pré-molares em modelos de gesso e digital. MÉTODOS: trinta modelos de gesso foram escaneados para obtenção dos modelos digitais. As medidas do comprimento mesiodistal dos dentes foram obtidas com paquímetro digital nos modelos de gesso e nos modelos digitais utilizando o software O3d (Widialabs). A somatória do tamanho dos incisivos inferiores foi utilizada para obter os valores de predição do tamanho dos pré-molares e caninos utilizando equação de regressão, e esses valores foram comparados ao tamanho real dos dentes. Os dados foram analisados estatisticamente, aplicando-se aos resultados o teste de correlação de Pearson, a fórmula de Dahlberg, o teste t pareado e a análise de variância (p < 0,05). RESULTADOS: excelente concordância intraexaminador foi observada nas medidas realizadas em ambos os modelos. O erro aleatório não esteve presente nas medidas obtidas com paquímetro, e o erro sistemático foi mais frequente no modelo digital. A previsão de espaço obtida pela aplicação da equação de regressão foi maior que a somatória dos pré-molares e caninos presentes nos modelos de gesso e nos modelos digitais. CONCLUSÃO: apesar da boa reprodutibilidade das medidas realizadas em ambos os modelos, a maioria das medidas dos modelos digitais foram superiores às do modelos de gesso. O espaço previsto foi superestimado em ambos os modelos e significativamente maior nos modelos digitais.
Resumo:
In this paper, we consider the stochastic optimal control problem of discrete-time linear systems subject to Markov jumps and multiplicative noises under two criteria. The first one is an unconstrained mean-variance trade-off performance criterion along the time, and the second one is a minimum variance criterion along the time with constraints on the expected output. We present explicit conditions for the existence of an optimal control strategy for the problems, generalizing previous results in the literature. We conclude the paper by presenting a numerical example of a multi-period portfolio selection problem with regime switching in which it is desired to minimize the sum of the variances of the portfolio along the time under the restriction of keeping the expected value of the portfolio greater than some minimum values specified by the investor. (C) 2011 Elsevier Ltd. All rights reserved.
Resumo:
Cognitive impairments are currently regarded as important determinants of functional domains and are promising treatment goals in schizophrenia. Nevertheless, the exact nature of the interdependent relationship between neurocognition and social cognition as well as the relative contribution of each of these factors to adequate functioning remains unclear. The purpose of this article is to systematically review the findings and methodology of studies that have investigated social cognition as a mediator variable between neurocognitive performance and functional outcome in schizophrenia. Moreover, we carried out a study to evaluate this mediation hypothesis by the means of structural equation modeling in a large sample of 148 schizophrenia patients. The review comprised 15 studies. All but one study provided evidence for the mediating role of social cognition both in cross-sectional and in longitudinal designs. Other variables like motivation and social competence additionally mediated the relationship between social cognition and functional outcome. The mean effect size of the indirect effect was 0.20. However, social cognitive domains were differentially effective mediators. On average, 25% of the variance in functional outcome could be explained in the mediation model. The results of our own statistical analysis are in line with these conclusions: Social cognition mediated a significant indirect relationship between neurocognition and functional outcome. These results suggest that research should focus on differential mediation pathways. Future studies should also consider the interaction with other prognostic factors, additional mediators, and moderators in order to increase the predictive power and to target those factors relevant for optimizing therapy effects.