26 results for out-of-sample forecast
Abstract:
Based on the emergent findings of a pilot study which examined the issues around introducing Peer Mentoring into an Engineering School, this paper, which is very much a 'work in progress', describes and discusses results from the first year of what will be a three-year exploratory study. Focusing on three distinctive concepts integral to the student experience, Relationships, Variety and Synergy, the study follows an Action Research Design in that it aims to find a realistic and workable solution to issues of attrition within the Engineering School in which the Project and Study are set. Starting with the research question "Does Peer Mentoring improve engineering students' transition into university?", the Pilot Project and Study will run for three years, each year building on the lessons of the previous year.
Abstract:
This article presents out-of-sample inflation forecasting results based on relative price variability and skewness. It is demonstrated that forecasts at long horizons of 1.5-2 years are significantly improved if the forecast equation is augmented with skewness. © 2010 Taylor & Francis.
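
A minimal sketch of this kind of exercise (not the article's actual specification): a direct h-step-ahead inflation forecast regression estimated recursively, with and without a skewness term, compared by out-of-sample RMSE. The inputs pi (inflation) and skew (relative-price skewness) and the horizon and window settings are hypothetical.

    import numpy as np

    def recursive_oos_rmse(pi, skew, h=18, start=120):
        """Compare out-of-sample RMSE of a baseline and a skewness-augmented forecast regression."""
        e_base, e_aug = [], []
        for t in range(start, len(pi) - h):
            s = np.arange(0, t - h + 1)            # predictor dates whose h-step targets are observed by t
            y = pi[s + h]
            Xb = np.column_stack([np.ones(len(s)), pi[s]])
            Xa = np.column_stack([Xb, skew[s]])
            bb = np.linalg.lstsq(Xb, y, rcond=None)[0]
            ba = np.linalg.lstsq(Xa, y, rcond=None)[0]
            e_base.append(pi[t + h] - (bb[0] + bb[1] * pi[t]))
            e_aug.append(pi[t + h] - (ba[0] + ba[1] * pi[t] + ba[2] * skew[t]))
        rmse = lambda e: float(np.sqrt(np.mean(np.square(e))))
        return rmse(e_base), rmse(e_aug)
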
Abstract:
We use the Fleissig and Whitney (2003) weak separability test to determine admissible levels of monetary aggregation for the Euro area. We find that the Euro area monetary assets in M2 and M3 are weakly separable and construct admissible Divisia monetary aggregates for these assets. We evaluate the Divisia aggregates as indicator variables, building on Nelson (2002), Reimers (2002), and Stracca (2004). Specifically, we show that real growth of the admissible Divisia aggregates enters the Euro area IS curve positively and significantly for the period from 1980 to 2005. Out of sample, we show that Divisia M2 and M3 appear to contain useful information for forecasting Euro area inflation.
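
For reference, a minimal sketch of the standard Tornqvist-Theil Divisia construction behind such aggregates, assuming quantity and own-rate arrays q, r (T x N) and a benchmark rate R (length T); this illustrates the generic index formula, not the paper's exact data treatment.

    import numpy as np

    def divisia_index(q, r, R):
        """Tornqvist-Theil Divisia quantity index for N monetary assets over T periods."""
        u = (R[:, None] - r) / (1.0 + R[:, None])          # real user costs (benchmark rate assumed above own rates)
        s = (u * q) / (u * q).sum(axis=1, keepdims=True)   # user-cost expenditure shares
        s_bar = 0.5 * (s[1:] + s[:-1])                     # two-period average shares
        dlog = np.log(q[1:]) - np.log(q[:-1])              # component growth rates
        growth = (s_bar * dlog).sum(axis=1)                # Divisia growth rate of the aggregate
        return np.exp(np.concatenate([[0.0], np.cumsum(growth)]))  # index normalised to 1 at t = 0
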
Abstract:
This empirical study examines the extent of non-linearity in a multivariate model of monthly financial series. To capture the conditional heteroscedasticity in the series, both the GARCH(1,1) and GARCH(1,1)-in-mean models are employed. The conditional errors are assumed to follow the normal and Student-t distributions. The non-linearity in the residuals of a standard OLS regression is also assessed. It is found that the OLS residuals as well as the conditional errors of the GARCH models exhibit strong non-linearity. Under the Student-t density, the extent of non-linearity in the GARCH conditional errors was generally similar to that of the standard OLS residuals. The GARCH-in-mean regression generated the worst out-of-sample forecasts.
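
A minimal GARCH(1,1) sketch with Gaussian errors, shown only to illustrate the type of conditional-variance model involved (the study also uses GARCH-in-mean and Student-t errors, which this sketch omits); the input y is assumed to be a demeaned return series.

    import numpy as np
    from scipy.optimize import minimize

    def garch11_sigma2(params, y):
        """Conditional variance recursion sigma2_t = omega + alpha*y_{t-1}^2 + beta*sigma2_{t-1}."""
        omega, alpha, beta = params
        s2 = np.empty(len(y), dtype=float)
        s2[0] = np.var(y)                      # initialise at the sample variance
        for t in range(1, len(y)):
            s2[t] = omega + alpha * y[t - 1] ** 2 + beta * s2[t - 1]
        return s2

    def neg_loglik(params, y):
        s2 = garch11_sigma2(params, y)
        return 0.5 * np.sum(np.log(2 * np.pi) + np.log(s2) + y ** 2 / s2)

    def fit_and_forecast(y):
        """Fit GARCH(1,1) by maximum likelihood and return the one-step-ahead variance forecast."""
        res = minimize(neg_loglik, x0=[0.1 * np.var(y), 0.05, 0.90], args=(y,),
                       bounds=[(1e-8, None), (0.0, 1.0), (0.0, 1.0)])
        omega, alpha, beta = res.x
        s2 = garch11_sigma2(res.x, y)
        return omega + alpha * y[-1] ** 2 + beta * s2[-1]
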
Abstract:
This study seeks to explain the leverage effect in UK stock returns by reference to the return volatility, leverage and size characteristics of UK companies. A leverage effect is found that is stronger for smaller companies and has greater explanatory power over the returns of smaller companies. The properties of a theoretical model that predicts that companies with higher leverage ratios will experience greater leverage effects are explored. On examining leverage ratio data, it is found that there is a propensity for smaller companies to have higher leverage ratios. The transmission of volatility shocks between the companies is also examined and it is found that the volatility of larger firm returns is important in determining both the volatility and returns of smaller firms, but not the reverse. Moreover, it is found that where volatility spillovers are important, they improve out-of-sample volatility forecasts. © 2005 Taylor & Francis Group Ltd.
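
As an illustration of the spillover idea (not the paper's multivariate model), the sketch below checks whether lagged large-firm volatility improves simple out-of-sample forecasts of small-firm volatility, using squared returns as a crude volatility proxy; the inputs r_small and r_large are hypothetical return series on a common sample.

    import numpy as np

    def spillover_oos_mse(r_small, r_large, start=250):
        """Out-of-sample MSE of small-firm volatility forecasts, with and without the large-firm lag."""
        v_s, v_l = r_small ** 2, r_large ** 2
        e_own, e_spill = [], []
        for t in range(start, len(v_s) - 1):
            y = v_s[1:t]                                        # targets observed up to time t
            X_own = np.column_stack([np.ones(t - 1), v_s[:t - 1]])
            X_spill = np.column_stack([X_own, v_l[:t - 1]])     # add lagged large-firm volatility
            b_own = np.linalg.lstsq(X_own, y, rcond=None)[0]
            b_spill = np.linalg.lstsq(X_spill, y, rcond=None)[0]
            e_own.append(v_s[t + 1] - (b_own[0] + b_own[1] * v_s[t]))
            e_spill.append(v_s[t + 1] - (b_spill[0] + b_spill[1] * v_s[t] + b_spill[2] * v_l[t]))
        return float(np.mean(np.square(e_own))), float(np.mean(np.square(e_spill)))
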
Abstract:
Two main questions are addressed here: is there a long-run relationship between the trade balance and the real exchange rate for bilateral trade between Mauritius and the UK? Does a J-curve exist for this bilateral trade? Our findings suggest that the real exchange rate is cointegrated with the trade balance, and we find evidence of a J-curve effect. We also find bidirectional causality between the trade balance and the real exchange rate in the long run. The real exchange rate also causes the trade balance in the short run. In an out-of-sample forecasting experiment, we also find that the real exchange rate contains useful information that can explain future movements in the trade balance.
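
A rough sketch of the kind of cointegration and error-correction analysis described, under assumed inputs tb and rer (e.g. log trade balance and log real exchange rate); it is not the paper's exact methodology.

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.tsa.stattools import coint

    def ecm_sketch(tb, rer):
        """Engle-Granger cointegration check plus a simple short-run error-correction regression."""
        t_stat, p_value, _ = coint(tb, rer)                  # Engle-Granger cointegration test
        long_run = sm.OLS(tb, sm.add_constant(rer)).fit()    # long-run relation tb_t = a + b*rer_t + u_t
        u = long_run.resid                                   # equilibrium error
        d_tb, d_rer = np.diff(tb), np.diff(rer)
        # Short-run ECM: d_tb_t on the lagged equilibrium error and the lagged change in rer.
        y = d_tb[1:]
        X = sm.add_constant(np.column_stack([u[1:-1], d_rer[:-1]]))
        ecm = sm.OLS(y, X).fit()
        return p_value, ecm.params, ecm.pvalues
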
Abstract:
We use the Fleissig and Whitney [Fleissig, A.R., Whitney, G.A., 2003. A new PC-based test for Varian's weak separability conditions. Journal of Business & Economic Statistics 21 (1), 133–144] weak separability test to determine admissible levels of monetary aggregation for the Euro area. We find that the Euro area monetary assets in M2 and M3 are weakly separable and construct admissible Divisia monetary aggregates for these assets. We show that real growth of the admissible Divisia aggregates enters the Euro area IS curve positively and significantly for the period from 1980 to 2005. Out of sample, we show that Divisia M2 and M3 appear to contain useful information for forecasting Euro area inflation.
Abstract:
This paper compares the experience of forecasting the UK government bond yield curve before and after the dramatic lowering of short-term interest rates from October 2008. Out-of-sample forecasts for 1, 6 and 12 months are generated from each of a dynamic Nelson-Siegel model, autoregressive models for both yields and the principal components extracted from those yields, a slope regression and a random walk model. At short forecasting horizons, there is little difference in the performance of the models both prior to and after 2008. However, for medium- to longer-term horizons, the slope regression provided the best forecasts prior to 2008, while the recent experience of near-zero short interest rates coincides with a period of forecasting superiority for the autoregressive and dynamic Nelson-Siegel models. © 2014 John Wiley & Sons, Ltd.
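
A minimal dynamic Nelson-Siegel sketch in the spirit of this comparison: cross-sectional least squares for the level, slope and curvature factors with a fixed decay parameter, AR(1) dynamics per factor, and an h-step-ahead curve forecast. The decay value lam = 0.0609 is the conventional Diebold-Li choice rather than necessarily the paper's, and the inputs yields (T x M) and tau (M maturities in months) are hypothetical.

    import numpy as np

    def ns_loadings(tau, lam=0.0609):
        """Nelson-Siegel factor loadings for level, slope and curvature."""
        x = lam * tau
        slope = (1 - np.exp(-x)) / x
        return np.column_stack([np.ones(len(tau)), slope, slope - np.exp(-x)])

    def dns_forecast(yields, tau, h=12, lam=0.0609):
        """Forecast the yield curve h steps ahead via AR(1) dynamics on the Nelson-Siegel factors."""
        L = ns_loadings(np.asarray(tau, dtype=float), lam)
        beta = np.linalg.lstsq(L, yields.T, rcond=None)[0].T   # T x 3 factor estimates, one regression per date
        beta_h = np.empty(3)
        for j in range(3):                                      # AR(1) per factor, iterated h steps ahead
            b = beta[:, j]
            X = np.column_stack([np.ones(len(b) - 1), b[:-1]])
            c, phi = np.linalg.lstsq(X, b[1:], rcond=None)[0]
            f = b[-1]
            for _ in range(h):
                f = c + phi * f
            beta_h[j] = f
        return L @ beta_h                                       # forecast curve at the original maturities
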
Abstract:
The Dirichlet process mixture model (DPMM) is a ubiquitous, flexible Bayesian nonparametric statistical model. However, full probabilistic inference in this model is analytically intractable, so that computationally intensive techniques such as Gibbs sampling are required. As a result, DPMM-based methods, which have considerable potential, are restricted to applications in which computational resources and time for inference are plentiful. For example, they would not be practical for digital signal processing on embedded hardware, where computational resources are at a serious premium. Here, we develop a simplified yet statistically rigorous approximate maximum a posteriori (MAP) inference algorithm for DPMMs. This algorithm is as simple as DP-means clustering, solves the MAP problem as well as Gibbs sampling does, while requiring only a fraction of the computational effort. (For freely available code that implements the MAP-DP algorithm for Gaussian mixtures see http://www.maxlittle.net/.) Unlike related small variance asymptotics (SVA), our method is non-degenerate and so inherits the “rich get richer” property of the Dirichlet process. It also retains a non-degenerate closed-form likelihood which enables out-of-sample calculations and the use of standard tools such as cross-validation. We illustrate the benefits of our algorithm on a range of examples and contrast it to variational, SVA and sampling approaches from both a computational complexity perspective as well as in terms of clustering performance. We demonstrate the wide applicability of our approach by presenting an approximate MAP inference method for the infinite hidden Markov model whose performance contrasts favorably with a recently proposed hybrid SVA approach. Similarly, we show how our algorithm can be applied to a semiparametric mixed-effects regression model where the random effects distribution is modelled using an infinite mixture model, as used in longitudinal progression modelling in population health science. Finally, we propose directions for future research on approximate MAP inference in Bayesian nonparametrics.
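
A heavily simplified MAP-DP sketch for spherical Gaussian clusters with known observation variance and a conjugate Gaussian prior on the cluster means, intended only to convey the structure of the assignment step (leave-one-out predictive cost plus a rich-get-richer term, versus a new-cluster cost penalised by alpha); the released code at http://www.maxlittle.net/ is the reference implementation, and the hyperparameters alpha, sigma2, s0 and mu0 below are assumptions of this sketch.

    import numpy as np

    def map_dp(X, alpha=1.0, sigma2=1.0, s0=10.0, mu0=None, max_iter=100):
        """Approximate MAP clustering for a DP mixture of spherical Gaussians (simplified sketch)."""
        n, d = X.shape
        mu0 = np.zeros(d) if mu0 is None else mu0
        z = np.zeros(n, dtype=int)                       # start with all points in one cluster

        def nll(x, mean, var):                           # negative log N(x | mean, var * I)
            return 0.5 * np.sum((x - mean) ** 2 / var + np.log(2 * np.pi * var))

        for _ in range(max_iter):
            changed = False
            for i in range(n):
                labels = [k for k in np.unique(z) if np.sum(z == k) - (z[i] == k) > 0]
                costs = []
                for k in labels:
                    mask = (z == k)
                    mask[i] = False                      # leave point i out
                    nk = mask.sum()
                    xbar = X[mask].mean(axis=0)
                    prec = 1.0 / s0 ** 2 + nk / sigma2   # posterior precision of the cluster mean
                    m = (mu0 / s0 ** 2 + nk * xbar / sigma2) / prec
                    costs.append(nll(X[i], m, sigma2 + 1.0 / prec) - np.log(nk))   # rich-get-richer term
                costs.append(nll(X[i], mu0, sigma2 + s0 ** 2) - np.log(alpha))     # open a new cluster
                best = int(np.argmin(costs))
                if best < len(labels):
                    new_label = labels[best]
                elif np.sum(z == z[i]) == 1:
                    new_label = z[i]                     # already alone: keep its own cluster
                else:
                    new_label = z.max() + 1
                if new_label != z[i]:
                    z[i], changed = new_label, True
            if not changed:
                break
        return np.unique(z, return_inverse=True)[1]      # relabel clusters consecutively
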
Abstract:
The Vapnik-Chervonenkis (VC) dimension is a combinatorial measure of a certain class of machine learning problems, which may be used to obtain upper and lower bounds on the number of training examples needed to learn to prescribed levels of accuracy. Most of the known bounds apply to the Probably Approximately Correct (PAC) framework, which is the framework within which we work in this paper. For a learning problem with some known VC dimension, much is known about the order of growth of the sample-size requirement of the problem as a function of the PAC parameters. The exact value of the sample-size requirement is, however, less well known, and depends heavily on the particular learning algorithm being used. This is a major obstacle to the practical application of the VC dimension. Hence it is important to know exactly how the sample-size requirement depends on the VC dimension, and with that in mind, we describe a general algorithm for learning problems having VC dimension 1. Its sample-size requirement is minimal (as a function of the PAC parameters), and turns out to be the same for all non-trivial learning problems having VC dimension 1. While the method used cannot be naively generalised to higher VC dimension, it suggests that optimal algorithm-dependent bounds may improve substantially on current upper bounds.
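
For intuition only, the sketch below exhibits a concept class of VC dimension 1 (thresholds on the line) together with a trivially consistent learner, and checks empirically that its error shrinks with sample size; it is not the specific sample-size-optimal algorithm analysed in the paper.

    import numpy as np

    def fit_threshold(x, y):
        """Return a threshold consistent with labels y = 1[x >= theta_true] (realizable case)."""
        pos, neg = x[y == 1], x[y == 0]
        if len(pos) == 0:
            return x.max() + 1.0                  # label everything 0
        if len(neg) == 0:
            return pos.min()                      # label everything at or above the smallest sample 1
        return 0.5 * (neg.max() + pos.min())      # any point in this gap is consistent

    rng = np.random.default_rng(0)
    theta_true = 0.3
    for m in (10, 50, 250):                       # growing sample sizes
        x = rng.uniform(0, 1, size=m)
        theta_hat = fit_threshold(x, (x >= theta_true).astype(int))
        print(m, abs(theta_hat - theta_true))     # under the uniform distribution, this gap is the error
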