901 results for Forecast
Abstract:
Numerous econometric models have been proposed for forecasting property market performance, but limited success has been achieved in finding a reliable and consistent model for predicting property market movements over a five- to ten-year timeframe. This research focuses on office rental growth forecasts and reviews many of the office rent models that have evolved over the past 20 years. A model by DiPasquale and Wheaton is selected for testing in the Brisbane, Australia office market. The adaptation of this study did not yield explanatory variables that could assist in developing a reliable, predictive model of office rental growth. In light of this result, the paper suggests a system dynamics framework that combines an econometric model based on historical data with user-supplied guidance for the primary variables. The rent forecast outputs would be assessed against market expectations, with probability profiling undertaken for use in simulation exercises. The paper concludes with ideas for ongoing research.
Abstract:
This paper investigates how best to forecast optimal portfolio weights in the context of a volatility timing strategy. It measures the economic value of a number of methods for forming optimal portfolios on the basis of realized volatility. These include the traditional econometric approach of forming portfolios from forecasts of the covariance matrix, and a novel method in which a time series of optimal portfolio weights is constructed from observed realized volatility and forecast directly. The approach proposed here of directly forecasting portfolio weights shows a great deal of merit: the resulting portfolios deliver economic benefits equivalent to those of a number of competing approaches and are more stable across time. These findings have obvious implications for the manner in which volatility timing is undertaken in a portfolio allocation context.
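The "traditional" route this abstract contrasts against can be sketched in a few lines: forecast the covariance matrix, then derive minimum-variance portfolio weights from it. The two-asset closed form below is a minimal illustration of that step, not the paper's method, and all numbers are hypothetical; the direct-forecast alternative would instead model the weight series itself.

```python
# Minimal two-asset sketch of the covariance-forecast route: forecast the
# covariance matrix, then derive minimum-variance portfolio weights.
# All numbers are illustrative, not from the paper.

def min_variance_weights(var1, var2, cov12):
    """Closed-form minimum-variance weights for two assets:
    w1 = (var2 - cov12) / (var1 + var2 - 2*cov12), w2 = 1 - w1."""
    w1 = (var2 - cov12) / (var1 + var2 - 2.0 * cov12)
    return w1, 1.0 - w1

# A hypothetical one-step-ahead realized-covariance forecast.
var1, var2, cov12 = 0.04, 0.09, 0.006
w1, w2 = min_variance_weights(var1, var2, cov12)
print(w1, w2)  # the weights always sum to one
```

Directly forecasting the weight series, as the paper proposes, would replace the covariance-forecast step with a time-series model fitted to a history of such `(w1, w2)` observations.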
Abstract:
Time series classification has been extensively explored in many fields of study. Most methods are based on historical or current information extracted from the data. However, if interest lies in a specific future time period, methods that relate directly to forecasts of the time series are much more appropriate. An approach to time series classification is proposed based on a polarization measure of the forecast densities of time series. By fitting autoregressive models, forecast replicates of each time series are obtained via the bias-corrected bootstrap, with a stationarity correction applied when necessary. Kernel estimators are then employed to approximate the forecast densities, and discrepancies between the forecast densities of pairs of time series are estimated by a polarization measure, which evaluates the extent to which two densities overlap. Following the distributional properties of the polarization measure, a discriminant rule and a clustering method are proposed for supervised and unsupervised classification, respectively. The proposed methodology is applied to both simulated and real data sets, and the results show desirable properties.
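The core quantity here, the extent to which two kernel-estimated forecast densities overlap, can be illustrated with a simple overlap coefficient, the integral of the pointwise minimum of the two densities. This is a generic stand-in for the paper's polarization measure; the bandwidth, grid, and sample values below are all assumptions for illustration.

```python
import math

def kde(sample, h):
    """Gaussian kernel density estimate with bandwidth h."""
    n = len(sample)
    def f(t):
        return sum(math.exp(-0.5 * ((t - x) / h) ** 2)
                   for x in sample) / (n * h * math.sqrt(2 * math.pi))
    return f

def overlap(sample_a, sample_b, h=0.5, grid_points=2000):
    """Overlap coefficient int min(f, g) dt, approximated by the midpoint rule
    on a grid spanning both samples (a stand-in for a polarization measure)."""
    lo = min(min(sample_a), min(sample_b)) - 5 * h
    hi = max(max(sample_a), max(sample_b)) + 5 * h
    f, g = kde(sample_a, h), kde(sample_b, h)
    dt = (hi - lo) / grid_points
    return sum(min(f(lo + (i + 0.5) * dt), g(lo + (i + 0.5) * dt)) * dt
               for i in range(grid_points))

a = [0.1 * i for i in range(20)]          # hypothetical forecast replicates, series A
b = [0.1 * i + 0.2 for i in range(20)]    # nearly the same density
c = [0.1 * i + 10.0 for i in range(20)]   # a well-separated density
print(overlap(a, b), overlap(a, c))       # near 1 and near 0, respectively
```

Identical densities give an overlap of 1 and disjoint densities give 0, so low overlap indicates forecast distributions that polarize into distinct groups.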
Abstract:
Many statistical forecast systems are available to interested users. To be useful for decision-making, these systems must be based on evidence of underlying mechanisms. Once causal connections between the mechanism and its statistical manifestation have been firmly established, the forecasts must also provide some quantitative evidence of 'quality'. However, the quality of statistical climate forecast systems (forecast quality) is an ill-defined and frequently misunderstood property. Often, providers and users of such forecast systems are unclear about what 'quality' entails and how to measure it, leading to confusion and misinformation. Here we present a generic framework for quantifying aspects of forecast quality using an inferential approach to calculate nominal significance levels (p-values), which can be obtained either by directly applying non-parametric statistical tests such as the Kruskal-Wallis (KW) or Kolmogorov-Smirnov (KS) test, or by using Monte Carlo methods (in the case of forecast skill scores). Once converted to p-values, these forecast quality measures provide a means to objectively evaluate and compare temporal and spatial patterns of forecast quality across datasets and forecast systems. Our analysis demonstrates the importance of providing p-values rather than adopting arbitrarily chosen significance levels such as p < 0.05 or p < 0.01, which is still common practice. This is illustrated by applying non-parametric tests (KW and KS) and skill scoring methods (LEPS and RPSS) to the 5-phase Southern Oscillation Index classification system using historical rainfall data from Australia, the Republic of South Africa and India. The selection of quality measures is based solely on their common use and does not constitute endorsement. We found that non-parametric statistical tests can be adequate proxies for skill measures such as LEPS or RPSS. The framework can be implemented anywhere, regardless of dataset, forecast system or quality measure. Eventually, such inferential evidence should be complemented by descriptive statistical methods in order to fully assist operational risk management.
Abstract:
This paper proposes a method of short-term load forecasting with limited data, applicable even at 11 kV substation levels, where total power demand is relatively low and somewhat random and where weather data are usually not available, as in most developing countries. The Kalman filtering technique has been modified and used to forecast daily and hourly load. Generation planning, interstate energy-exchange scheduling at the load dispatch centre, and decentralized demand-side management at the substation level are intended to be carried out with the help of this short-term load forecasting technique, especially to achieve peak-power control without enforcing load-shedding.
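The paper's specific modification of the Kalman filter is not detailed in the abstract, but the baseline it builds on can be sketched: a scalar Kalman filter with a random-walk load model, which needs only the load history itself (no weather data). The noise variances and the demand series below are assumptions for illustration.

```python
def kalman_load_forecast(measurements, q=0.01, r=25.0):
    """One-step-ahead load forecasts from a scalar Kalman filter with a
    random-walk state model:  load_t = load_{t-1} + w_t,  y_t = load_t + v_t.
    q and r are the (assumed) process and measurement noise variances."""
    x = measurements[0]   # initial state estimate
    p = r                 # initial estimate variance
    forecasts = []
    for y in measurements:
        # Predict: under a random walk the forecast is the current estimate.
        p += q
        forecasts.append(x)
        # Update with the new measurement.
        k = p / (p + r)           # Kalman gain
        x = x + k * (y - x)
        p = (1.0 - k) * p
    return forecasts, x           # in-sample forecasts and next-hour forecast

# Hypothetical hourly demand (MW): a 100 MW level plus alternating noise.
loads = [100.0 + 5.0 * (-1) ** t for t in range(48)]
_, next_hour = kalman_load_forecast(loads)
print(next_hour)  # settles close to the underlying 100 MW level
```

A small `q` makes the filter smooth out hour-to-hour noise, which is the behavior wanted for peak-power planning; a larger `q` would track sudden demand shifts faster.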
Abstract:
In order to meet the ever-growing demand for predictions of oceanographic parameters in the Indian Ocean for a variety of applications, the Indian National Centre for Ocean Information Services (INCOIS) has recently set up an operational ocean forecast system, viz. the Indian Ocean Forecast System (INDOFOS). This fully automated system, based on a state-of-the-art ocean general circulation model, issues six-hourly forecasts of sea-surface temperature, surface currents and the depths of the mixed layer and the thermocline up to five days of lead time. A brief account of INDOFOS and a statistical validation of the forecasts of these parameters using in situ and remote sensing data are presented in this article. The accuracy of the system's sea-surface temperature forecasts is high in the Bay of Bengal and the Arabian Sea, whereas it is moderate in the equatorial Indian Ocean. Conversely, the accuracy of the forecasts of the thermocline and isothermal layer depths and of surface currents is higher near the equatorial region and relatively lower in the Bay of Bengal.
Abstract:
A data mining model that predicts whether a flight will depart late due to a weather delay. It can be used to rebook a later connection when you have a connecting flight.
Abstract:
The paper demonstrates the nonstationarity of algal population behaviors by analyzing the historical populations of Nostocales spp. in the River Darling, Australia. Freshwater ecosystems are more likely to be nonstationary than stationary. Nonstationarity implies that only the near-past behavior of a system can forecast its near future. However, nonstationarity was not considered seriously in previous research efforts to model and predict algal population behaviors. Therefore, the moving-window technique was combined with the radial basis function neural network (RBFNN) approach to deal with nonstationarity when modeling and forecasting the population behaviors of Nostocales spp. in the River Darling. The results showed that the RBFNN model could predict the timing and magnitude of algal blooms of Nostocales spp. with high accuracy. Moreover, a combined model built from the individual RBFNN models was implemented and showed superiority over the individual models. Hence, the combined model is recommended for modeling and forecasting phytoplankton populations, especially for forecasting.
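The moving-window idea itself is independent of the RBFNN (which is beyond a short sketch): refit the model on only the most recent observations so that stale, pre-shift data cannot bias the forecast. The sketch below illustrates this with a plain linear-trend model standing in for the RBFNN; the series and window length are hypothetical.

```python
def linear_trend_forecast(values, horizon=1):
    """Fit y = a + b*t by least squares on the given values and extrapolate."""
    n = len(values)
    t_mean = (n - 1) / 2.0
    y_mean = sum(values) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(values))
    den = sum((t - t_mean) ** 2 for t in range(n))
    b = num / den
    a = y_mean - b * t_mean
    return a + b * (n - 1 + horizon)

def moving_window_forecast(series, window=20, horizon=1):
    """Refit on only the most recent `window` observations -- the moving-window
    idea used to cope with nonstationary series."""
    return linear_trend_forecast(series[-window:], horizon)

# A hypothetical nonstationary series: the level shifts from 10 to 20 midway.
series = [10.0] * 50 + [20.0] * 50
print(moving_window_forecast(series))  # tracks the recent regime (~20)
print(linear_trend_forecast(series))   # full-history fit is biased (~22.6)
```

The same contrast motivates the paper's design: under nonstationarity, the window-restricted fit stays anchored to the current regime while the full-history fit mixes regimes.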
Abstract:
In this paper, the codes of the Pattern Informatics (PI) method put forward by Rundle et al. have been implemented according to their published algorithm, and retrospective forecasts of the PI method for North China (28.0 degrees-42.0 degrees N, 108.0 degrees-125.0 degrees E) and Southwest China (22.0 degrees-28.3 degrees N, 98.0 degrees-106.0 degrees E) have been tested. The results show that the hit rates differ greatly between regions. In Southwest China, 32 earthquakes of M(L)5.0 or larger occurred during the predicted time period 2000-2007, and 26 of them occurred in or near the hot spots. In North China, only 12 earthquakes of M(L)5.0 or larger occurred during the same period, and only 3 of them occurred in or near the hot spots. From these results, we hold that if the PI method is to be applied to all kinds of regions, the parameters associated with time points and time windows should be chosen carefully to obtain a higher hit rate. We also found that aftershocks in a strong earthquake sequence noticeably affect the PI results. Copyright (c) 2009 John Wiley & Sons, Ltd.
Abstract:
We develop general model-free adjustment procedures for the calculation of unbiased volatility loss functions based on practically feasible realized volatility benchmarks. The procedures, which exploit recent nonparametric asymptotic distributional results, are both easy to implement and highly accurate in empirically realistic situations. We also illustrate that properly accounting for measurement errors in the volatility forecast evaluations reported in the existing literature can result in markedly higher estimates of the true degree of return volatility predictability.
Abstract:
We show that "commodity currency" exchange rates have surprisingly robust power in predicting global commodity prices, both in-sample and out-of-sample, and against a variety of alternative benchmarks. This result is of particular interest to policy makers, given the lack of deep forward markets in many individual commodities, and broad aggregate commodity indices in particular. We also explore the reverse relationship (commodity prices forecasting exchange rates) but find it to be notably less robust. We offer a theoretical resolution, based on the fact that exchange rates are strongly forward-looking, whereas commodity price fluctuations are typically more sensitive to short-term demand imbalances. © 2010 by the President and Fellows of Harvard College and the Massachusetts Institute of Technology.
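The out-of-sample evaluation behind claims like this one has a standard shape: refit a predictive regression on an expanding window and compare its forecast errors with a no-change (random-walk) benchmark. The sketch below uses synthetic data in which the "exchange rate" signal is predictive by construction; it is a schematic of the evaluation design, not the paper's data or specification.

```python
import math

def ols_slope(x, y):
    """No-intercept OLS slope of y on x."""
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

def out_of_sample_mse(x, dy, split):
    """Expanding-window out-of-sample evaluation: at each t >= split, fit the
    slope on observations before t, then forecast dy[t] from x[t]. Returns
    (model MSE, no-change benchmark MSE)."""
    err_model = err_rw = 0.0
    n = 0
    for t in range(split, len(x)):
        beta = ols_slope(x[:t], dy[:t])
        err_model += (dy[t] - beta * x[t]) ** 2
        err_rw += dy[t] ** 2      # the no-change benchmark forecasts dy = 0
        n += 1
    return err_model / n, err_rw / n

# Synthetic illustration: the predictor x drives next-period price changes
# with coefficient 0.8, plus a small deterministic wiggle.
x = [math.sin(0.3 * t) for t in range(120)]
dy = [0.8 * xi + 0.05 * math.cos(1.1 * t) for t, xi in enumerate(x)]
mse_model, mse_rw = out_of_sample_mse(x, dy, split=60)
print(mse_model, mse_rw)  # the predictive regression beats the benchmark
```

Reversing the roles of `x` and `dy` in the same harness mirrors the paper's weaker reverse direction (commodity prices forecasting exchange rates), where such a gap would shrink or vanish.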