77 results for Forecasts


Relevance: 10.00%

Abstract:

Predicting ecological response to climate change is often limited by a lack of relevant local data from which directly applicable mechanistic models can be developed. This limits predictions to qualitative assessments or simplistic rules of thumb in data-poor regions, making management of the relevant systems difficult. We demonstrate a method for developing quantitative predictions of ecological response in data-poor ecosystems based on a space-for-time substitution, using distant, well-studied systems across an inherent climatic gradient to predict ecological response. Changes in biophysical data across the spatial gradient are used to generate quantitative hypotheses of temporal ecological responses that are then tested in a target region. Transferability of predictions among distant locations, the novel outcome of this method, is demonstrated via simple quantitative relationships that identify direct and indirect impacts of climate change on physical, chemical and ecological variables using commonly available data sources. Based on a limited subset of data, these relationships were demonstrably plausible in similar yet distant (>2000 km) ecosystems. Quantitative forecasts of ecological change based on climate-ecosystem relationships from distant regions provide a basis for research planning and informed management decisions, especially in the many ecosystems for which there are few data. This application of gradient studies across domains - to investigate ecological response to climate change - allows for the quantification of effects on potentially numerous, interacting and complex ecosystem components and how they may vary, especially over long time periods (e.g. decades). These quantitative and integrated long-term predictions will be of significant value to natural resource practitioners attempting to manage data-poor ecosystems to prevent or limit the loss of ecological value. The method is likely to be applicable to many ecosystem types, providing a robust scientific basis for estimating likely impacts of future climate change in ecosystems where no such method currently exists.
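
As a rough Python sketch of the space-for-time logic described above (hypothetical gradient data and variable names, not the study's observations), a climate-response relationship fitted across sites in a well-studied region is transferred to a warming scenario at a distant, data-poor target site:

    # Illustrative space-for-time substitution (hypothetical data): a
    # climate-response relationship fitted across a spatial gradient in a
    # well-studied region is used to predict temporal change at a data-poor site.
    import numpy as np

    # Sites along a climatic gradient in the well-studied source region.
    site_temp_c = np.array([12.0, 13.5, 15.0, 16.5, 18.0, 19.5])  # mean annual temperature
    site_response = np.array([8.2, 7.6, 6.9, 6.1, 5.4, 4.8])      # e.g. a cover/abundance index

    # Fit a simple linear climate-response relationship across space.
    slope, intercept = np.polyfit(site_temp_c, site_response, deg=1)

    # Transfer the spatial relationship in time at the distant target site:
    # current climate versus a projected warmer climate (e.g. +2 C).
    target_temp_now, target_temp_future = 14.0, 16.0
    predicted_change = slope * (target_temp_future - target_temp_now)
    print(f"Hypothesised response change at target site: {predicted_change:.2f} units")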

Relevance: 10.00%

Abstract:

Understanding how agents formulate their expectations about Fed behavior is important for market participants because they can potentially use this information to make more accurate estimates of stock and bond prices. Although it is commonly assumed that agents learn over time, there is scant empirical evidence in support of this assumption. Thus, in this paper we test whether the forecast of the three-month T-bill rate in the Survey of Professional Forecasters (SPF) is consistent with least squares learning when there are discrete shifts in monetary policy. We first derive the mean, variance and autocovariances of the forecast errors from a recursive least squares learning algorithm when there are breaks in the structure of the model. We then apply the Bai and Perron (1998) test for structural change to a forecasting model for the three-month T-bill rate in order to identify changes in monetary policy. Having identified the policy regimes, we then estimate the implied biases in the interest rate forecasts within each regime. We find that when the forecast errors from the SPF are corrected for the biases due to shifts in policy, the forecasts are consistent with least squares learning.
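
A minimal Python sketch of the mechanics (simulated data only; it does not reproduce the paper's derivations, the SPF data or the Bai and Perron test) shows how forecast errors from recursive least squares learning acquire a bias after a break in the policy rule:

    # Recursive least squares learning with a policy break (simulated AR(1) data).
    import numpy as np

    rng = np.random.default_rng(0)
    T, break_at = 400, 200
    mu = np.where(np.arange(T) < break_at, 3.0, 5.0)   # shift in the policy intercept
    r = np.zeros(T)
    for t in range(1, T):
        r[t] = 0.2 * mu[t] + 0.8 * r[t - 1] + 0.25 * rng.standard_normal()

    # One-step-ahead forecasts from least squares re-estimated on expanding windows.
    errors = []
    for t in range(40, T - 1):
        X = np.column_stack([np.ones(t), r[:t]])        # regressors: constant and lag
        beta, *_ = np.linalg.lstsq(X, r[1:t + 1], rcond=None)
        forecast = beta[0] + beta[1] * r[t]
        errors.append(r[t + 1] - forecast)

    errors = np.array(errors)
    print("mean forecast error before break:", errors[:break_at - 41].mean().round(3))
    print("mean forecast error after break: ", errors[break_at - 41:].mean().round(3))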

Relevance: 10.00%

Abstract:

Species distribution models have come under criticism for being too simplistic for making robust future forecasts, partly because they assume that climate is the main determinant of geographical range at large spatial extents and coarse resolutions, with non-climate predictors being important only at finer scales. We suggest that this paradigm might be obscured by species movement patterns. To explore this we used contrasting kangaroo (family Macropodidae) case studies: two species with relatively small, stable home ranges (Macropus giganteus and M. robustus) and three species with more extensive, adaptive ranging behaviour (M. antilopinus, M. fuliginosus and M. rufus). We predicted that non-climate predictors will be most influential to model fit and predictive performance at local spatial resolution for the former species and at landscape resolution for the latter species. We compared residuals autocovariate boosted regression tree (RAC-BRT) model statistics with and without species-specific non-climate predictors (habitat, soil, fire, water and topography), at local- and landscape-level spatial resolutions (5 and 50 km). As predicted, the influence of non-climate predictors on model fit and predictive performance (compared with climate-only models) was greater at 50 km than at 5 km resolution for M. rufus and M. fuliginosus, and the opposite trend was observed for M. giganteus. The results for M. robustus and M. antilopinus were inconclusive. Also notable was the difference in inter-scale importance of climate predictors in the presence of non-climate predictors. In conclusion, differences in autecology, particularly relating to space use, may contribute to the importance of non-climate predictors at a given scale, not model scale per se. Further exploration of this concept across a range of species is encouraged and findings may contribute to more effective conservation and management of species at ecologically meaningful scales.
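
The following Python sketch illustrates the climate-only versus climate-plus-non-climate comparison with ordinary boosted regression trees on synthetic presence/absence data; the residuals-autocovariate term and the 5 km / 50 km gridding of the actual RAC-BRT analysis are not reproduced:

    # Compare predictive performance with and without non-climate predictors.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n = 1000
    climate = rng.normal(size=(n, 2))              # e.g. temperature, rainfall
    non_climate = rng.normal(size=(n, 3))          # e.g. habitat, soil, water
    logit = 1.2 * climate[:, 0] - 0.8 * non_climate[:, 0]
    presence = (logit + rng.logistic(size=n)) > 0  # synthetic presence/absence

    def cv_auc(X, y):
        model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)
        return cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()

    print("climate-only AUC: ", round(cv_auc(climate, presence), 3))
    print("all-predictor AUC:", round(cv_auc(np.hstack([climate, non_climate]), presence), 3))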

Relevance: 10.00%

Abstract:

In contrast to point forecasts, prediction interval (PI)-based neural networks offer an effective tool to quantify the uncertainty and disturbances associated with process data. However, a single best neural network (NN) does not guarantee better-quality forecasts for different data sets or across a whole data set. The literature reports that an ensemble of NNs using forecast combination produces more stable and consistent forecasts than a single best NN. In this work, an NN ensemble procedure is introduced to construct better-quality PIs. A weighted averaging forecast combination mechanism is employed to combine the PI-based forecasts. As the key contribution of this paper, a new PI-based cost function is proposed to optimize the individual weights of each NN in the combination process. Simulated annealing (SA) is used to minimize the PI-based cost function. Finally, the proposed method is examined in two different case studies and the results are compared with those of the individual best NNs and the available simple averaging PI aggregation method. Simulation results demonstrate that the proposed method improves the quality of PIs relative to the individual best NNs and the simple averaging ensemble method.
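
A simplified Python sketch of the idea (synthetic bounds, and a generic coverage-width cost standing in for the paper's PI-based cost function) combines ensemble PIs by weighted averaging and tunes the weights with simulated annealing via scipy's dual_annealing:

    # Weighted averaging of ensemble PIs with SA-optimised combination weights.
    import numpy as np
    from scipy.optimize import dual_annealing

    rng = np.random.default_rng(2)
    n_models, n_val = 5, 300
    y = rng.normal(size=n_val)                                        # validation targets
    pred = y[None, :] + 0.6 * rng.standard_normal((n_models, n_val))  # noisy point forecasts
    half = rng.uniform(0.5, 1.5, (n_models, n_val))                   # model-specific half-widths
    lower, upper = pred - half, pred + half                           # each model's PI bounds

    def pi_cost(raw_w, nominal=0.90):
        w = np.abs(raw_w) / np.abs(raw_w).sum()        # normalised weights
        lo, up = w @ lower, w @ upper                  # combined bounds
        coverage = np.mean((y >= lo) & (y <= up))      # PI coverage probability
        width = np.mean(up - lo) / (y.max() - y.min()) # normalised average width
        penalty = max(0.0, nominal - coverage)         # punish under-coverage
        return width + 50.0 * penalty                  # generic coverage-width cost

    result = dual_annealing(pi_cost, bounds=[(0.01, 1.0)] * n_models, seed=3, maxiter=200)
    best_w = np.abs(result.x) / np.abs(result.x).sum()
    print("optimised combination weights:", best_w.round(3))

The 90% nominal level and the penalty weight of 50 are arbitrary choices for the sketch, not values from the paper.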

Relevance: 10.00%

Abstract:

The European Carbon Emissions Trading Scheme, introduced in 2005, has led to both spot and futures market trading of carbon emissions. However, despite seven years of trading, little is known about how profitable carbon emissions trading is. In this paper, we first test whether carbon forward returns predict carbon spot returns. We find strong evidence of both in-sample and out-of-sample predictability. Based on this evidence, we forecast carbon spot returns using both carbon forward returns and a constant. We consider a mean-variance investor and a CRRA investor, and show that they have higher utility and can make more statistically significant profits by following forecasts generated from the forward returns model than from a constant returns model.
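
A minimal Python sketch of the out-of-sample exercise on simulated series (not EU ETS data) compares forecasts from a lagged-forward-return regression with a constant (historical mean) benchmark:

    # Out-of-sample predictability check: forward-return model versus constant.
    import numpy as np

    rng = np.random.default_rng(4)
    T = 500
    fwd = rng.standard_normal(T)            # carbon forward returns
    spot = rng.standard_normal(T)           # noise component of spot returns
    spot[1:] += 0.3 * fwd[:-1]              # spot returns led by forward returns

    err_model, err_const = [], []
    for t in range(100, T - 1):
        X = np.column_stack([np.ones(t - 1), fwd[:t - 1]])   # predictive regression sample
        beta, *_ = np.linalg.lstsq(X, spot[1:t], rcond=None)
        err_model.append(spot[t + 1] - (beta[0] + beta[1] * fwd[t]))
        err_const.append(spot[t + 1] - spot[:t + 1].mean())  # historical-mean benchmark

    r2_oos = 1 - np.sum(np.square(err_model)) / np.sum(np.square(err_const))
    print("out-of-sample R^2 versus constant benchmark:", round(r2_oos, 3))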

Relevance: 10.00%

Abstract:

Uncertainty is a concomitant factor of almost all real-world quantities such as oil prices, stock prices, and the sales and demand of products. As a consequence, forecasting problems are becoming more and more challenging and ridden with uncertainty. Such uncertainties are generally quantified by statistical tools such as prediction intervals (PIs). PIs quantify the uncertainty related to forecasts by estimating the ranges of the targeted quantities. PIs generated by traditional neural network based approaches are limited by a high computational burden and impractical assumptions about the distribution of the data. A novel technique for constructing high-quality PIs using support vector machines (SVMs) is proposed in this paper. The proposed technique directly estimates the upper and lower bounds of the PI in a short time and without any assumptions about the data distribution. The SVM parameters are tuned using a particle swarm optimization technique by minimizing a modified PI-based objective function. Electricity price and demand data from the Ontario electricity market are used to validate the performance of the proposed technique. Several case studies for different months indicate the superior performance of the proposed method in terms of high-quality PI generation and shorter computational times.
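
A heavily simplified Python sketch of the general idea: SVR hyperparameters and a PI half-width are chosen by minimizing a coverage-width objective on validation data. A small grid search stands in for the particle swarm optimizer, and the bounds are a symmetric offset around an SVR point forecast rather than directly estimated bounds as in the paper:

    # PI-based objective used to tune an SVR and interval width on synthetic data.
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(5)
    X = rng.uniform(0, 6, size=(400, 1))
    y = np.sin(X[:, 0]) + 0.3 * rng.standard_normal(400)     # noisy demand-like signal
    X_tr, y_tr, X_va, y_va = X[:300], y[:300], X[300:], y[300:]

    def pi_objective(C, gamma, half_width, nominal=0.90):
        centre = SVR(C=C, gamma=gamma).fit(X_tr, y_tr).predict(X_va)
        lo, up = centre - half_width, centre + half_width
        coverage = np.mean((y_va >= lo) & (y_va <= up))
        width = np.mean(up - lo) / (y_va.max() - y_va.min())
        return width + 50.0 * max(0.0, nominal - coverage)   # coverage-width trade-off

    best = min(
        ((pi_objective(C, g, w), (C, g, w))
         for C in (1.0, 10.0, 100.0)
         for g in (0.1, 1.0)
         for w in (0.4, 0.6, 0.8)),
        key=lambda item: item[0],
    )
    print("best (C, gamma, half-width):", best[1], "objective:", round(best[0], 3))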

Relevance: 10.00%

Abstract:

The penetration of intermittent renewable energy sources (IRESs) into power grids has increased in the last decade. Integration of wind farms and solar systems as the major IRESs has significantly boosted the level of uncertainty in the operation of power systems. This paper proposes a comprehensive computational framework for the quantification and integration of uncertainties in distributed power systems (DPSs) with IRESs. Different sources of uncertainty in DPSs, such as electrical load, wind and solar power forecasts and generator outages, are covered by the proposed framework. Load forecast uncertainty is assumed to follow a normal distribution. Wind and solar forecast uncertainties are represented by a list of prediction intervals (PIs) with nominal confidence levels ranging from 5% to 95%, and are further converted into scenarios using a scenario generation method. Generator outage uncertainty is modeled as discrete scenarios. The integrated uncertainties are then incorporated into a stochastic security-constrained unit commitment (SCUC) problem, and a heuristic genetic algorithm is utilized to solve this stochastic SCUC problem. To demonstrate the effectiveness of the proposed method, five deterministic and four stochastic case studies are implemented. Generation costs as well as different reserve strategies are discussed from the perspectives of system economics and reliability. Comparative results indicate that the planned generation costs and reserves differ from the realized ones, that the stochastic models show better robustness than the deterministic ones, and that power systems run a higher level of risk during peak load hours.
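
An illustrative Python sketch (numbers invented, not the paper's data) of how a list of nested wind-power PIs can be converted into scenarios, alongside normally distributed load errors and Bernoulli generator-outage draws:

    # Turn nested wind-power PIs into sampled scenarios; add load and outage draws.
    import numpy as np

    rng = np.random.default_rng(6)

    # Central PIs for one hour of wind output (MW): nominal level -> (lower, upper).
    wind_pis = {0.05: (48, 52), 0.25: (44, 56), 0.50: (40, 60),
                0.75: (35, 65), 0.95: (28, 72)}

    # Convert the nested intervals into quantile points of an empirical inverse CDF.
    probs, values = [], []
    for level, (lo, up) in sorted(wind_pis.items()):
        probs += [0.5 - level / 2, 0.5 + level / 2]
        values += [lo, up]
    order = np.argsort(probs)
    probs, values = np.array(probs)[order], np.array(values)[order]

    n_scen = 5
    wind_scen = np.interp(rng.uniform(0.05, 0.95, n_scen), probs, values)
    load_scen = rng.normal(loc=300.0, scale=15.0, size=n_scen)   # MW, normal load error
    outage_scen = rng.random(n_scen) < 0.02                      # unit forced-outage draws
    for w, l, o in zip(wind_scen, load_scen, outage_scen):
        print(f"wind={w:6.1f} MW  load={l:6.1f} MW  outage={o}")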

Relevance: 10.00%

Abstract:

The value of accurate weather forecast information is substantial. In this paper we examine competition among forecast providers and its implications for the quality of forecasts. A simple economic model shows that an economic bias, in the form of geographical inequality in forecast accuracy, arises due to the extent of the market. Using unique data on daily high-temperature forecasts for 704 U.S. cities, we find that forecast accuracy increases with population and income. Furthermore, the economic bias becomes larger when the day of forecasting is closer to the target day, i.e. when people are more concerned about the quality of forecasts. The results hold even after we control for location-specific heterogeneity and the difficulty of forecasting.
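
A hypothetical Python sketch of the kind of accuracy regression described above, on simulated city-day data; the paper's exact specification, controls and sample differ:

    # Regress absolute forecast error on log population, log income, horizon and
    # a forecasting-difficulty proxy, with standard errors clustered by city.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    n_cities, n_days = 200, 30
    city = np.repeat(np.arange(n_cities), n_days)
    population = np.repeat(rng.lognormal(11, 1, n_cities), n_days)
    income = np.repeat(rng.lognormal(10.5, 0.3, n_cities), n_days)
    volatility = np.repeat(rng.uniform(1, 5, n_cities), n_days)   # difficulty proxy
    horizon = rng.integers(1, 8, n_cities * n_days)               # days ahead

    abs_error = (2.0 - 0.08 * np.log(population) - 0.05 * np.log(income)
                 + 0.15 * volatility + 0.1 * horizon
                 + rng.exponential(0.5, n_cities * n_days))

    df = pd.DataFrame(dict(abs_error=abs_error, population=population,
                           income=income, volatility=volatility, horizon=horizon))
    fit = smf.ols("abs_error ~ np.log(population) + np.log(income) + volatility + horizon",
                  data=df).fit(cov_type="cluster", cov_kwds={"groups": city})
    print(fit.params.round(3))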

Relevance: 10.00%

Abstract:

Uncertainty in electricity prices makes the task of accurate forecasting quite difficult for electricity market participants. Prediction intervals (PIs) are statistical tools which quantify the uncertainty related to forecasts by estimating the ranges of future electricity prices. Traditional approaches based on neural networks (NNs) generate PIs at the cost of a high computational burden and doubtful assumptions about data distributions. In this work, we propose a novel technique that avoids these limitations and generates high-quality PIs in a short time. The proposed method directly generates the lower and upper bounds of future electricity prices using support vector machines (SVMs). Optimal model parameters are obtained by minimizing a modified PI-based objective function using a particle swarm optimization (PSO) technique. The efficiency of the proposed method is illustrated using data from the Ontario and Pennsylvania-New Jersey-Maryland (PJM) interconnection day-ahead and real-time markets.

Relevance: 10.00%

Abstract:

Neural networks (NNs) are an effective tool to model nonlinear systems. However, their forecasting performance significantly drops in the presence of process uncertainties and disturbances. NN-based prediction intervals (PIs) offer an alternative solution to appropriately quantify uncertainties and disturbances associated with point forecasts. In this paper, an NN ensemble procedure is proposed to construct quality PIs. A recently developed lower-upper bound estimation method is applied to develop NN-based PIs. Then, constructed PIs from the NN ensemble members are combined using a weighted averaging mechanism. Simulated annealing and a genetic algorithm are used to optimally adjust the weights for the aggregation mechanism. The proposed method is examined for three different case studies. Simulation results reveal that the proposed method improves the average PI quality of individual NNs by 22%, 18%, and 78% for the first, second, and third case studies, respectively. The simulation study also demonstrates that a 3%-4% improvement in the quality of PIs can be achieved using the proposed method compared to the simple averaging aggregation method.

Relevance: 10.00%

Abstract:

The aim of this research is to examine the efficiency of different aggregation algorithms applied to the forecasts obtained from individual neural network (NN) models in an ensemble. In this study an ensemble of 100 NN models is constructed with a heterogeneous architecture. The outputs from the NN models are combined by three different aggregation algorithms: a simple average, a trimmed mean, and Bayesian model averaging. These methods are applied, with certain modifications, to the forecasts obtained from all individual NN models. The output of the aggregation algorithms is analyzed and compared with the individual NN models used in the NN ensemble and with a naive approach. Thirty-minute interval electricity demand data from the Australian Energy Market Operator (AEMO) and the New York Independent System Operator (NYISO) website are used in the empirical analysis. It is observed that the aggregation algorithms perform better than many of the individual NN models. In comparison with the naive approach, the aggregation algorithms exhibit somewhat better forecasting performance.
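
The three combiners can be sketched in Python as follows (synthetic forecasts; the Bayesian model averaging weights are a crude in-sample approximation rather than the paper's implementation):

    # Combine a matrix of ensemble forecasts by simple average, trimmed mean and
    # a BMA-style weighting by in-sample fit.
    import numpy as np
    from scipy.stats import trim_mean

    rng = np.random.default_rng(8)
    n_models, n_hours = 20, 48
    actual = 5000 + 800 * np.sin(np.linspace(0, 4 * np.pi, n_hours))    # MW demand
    forecasts = actual + rng.normal(0, rng.uniform(50, 300, n_models)[:, None],
                                    (n_models, n_hours))

    simple_avg = forecasts.mean(axis=0)
    trimmed = trim_mean(forecasts, proportiontocut=0.1, axis=0)

    # BMA-style weights: posterior model probabilities approximated from each
    # model's in-sample MSE (weights proportional to exp(-0.5 * n * log(MSE))).
    mse = ((forecasts - actual) ** 2).mean(axis=1)
    log_w = -0.5 * n_hours * np.log(mse)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    bma = w @ forecasts

    for name, comb in [("simple average", simple_avg), ("trimmed mean", trimmed), ("BMA", bma)]:
        print(f"{name:>14}: MAE = {np.abs(comb - actual).mean():.1f} MW")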

Relevance: 10.00%

Abstract:

The bulk of existing work on the statistical forecasting of air quality is based on either neural networks or linear regressions, which are both subject to important drawbacks. In particular, while neural networks are complicated and prone to in-sample overfitting, linear regressions are highly dependent on the specification of the regression function. The present paper shows how combining linear regression forecasts can circumvent these problems. The usefulness of the proposed combination approach is verified using both Monte Carlo simulation and an extensive application to air quality in Bogota, one of the largest and most polluted cities in Latin America.

Relevance: 10.00%

Abstract:

The objective of this paper is to demonstrate the ability of visualization and simulation techniques to aid in simulating current and future directions in coastal planning. The visualization process interrogates the coastal cities of Portland, Apollo Bay, Anglesea and Hobsons Bay on the south-eastern Australian coastal seaboard through a progression of projections and simulated forecasts from 2014 to 2050, to see whether a process or methodology could help in planning the future growth of coastal settlements. The analysis uses Geographic Information Systems (GIS) together with planning application software.

Relevance: 10.00%

Abstract:

This paper examines volatility asymmetry in a financial market using a stochastic volatility framework. We use the MCMC method for model estimation. There is evidence of volatility asymmetry in the data. Our asymmetric stochastic volatility in mean model, which nests both the asymmetric stochastic volatility (ASV) and stochastic volatility in mean (SVM) models, indicates that ASV sufficiently captures the risk-return relationship; therefore, augmenting it with volatility in mean does not improve its performance. ASV fits the data better and yields more accurate out-of-sample forecasts than the alternatives. We also demonstrate that asymmetry mainly emanates from the systematic parts of returns. As a result, it is more pronounced at the market level, and the volatility feedback effect dominates the leverage effect.
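
A short Python sketch of the asymmetric stochastic volatility data-generating process, in which negative correlation between return and log-volatility innovations produces the asymmetry; the MCMC estimation used in the paper is not shown, and the parameter values are illustrative:

    # Simulate an asymmetric stochastic volatility (ASV) process with leverage.
    import numpy as np

    rng = np.random.default_rng(9)
    T, mu, phi, sigma_eta, rho = 2000, -1.0, 0.97, 0.15, -0.6

    h = np.empty(T); h[0] = mu
    y = np.empty(T)
    for t in range(T):
        # Correlated innovations: eps drives returns, eta drives log-volatility.
        eps = rng.standard_normal()
        eta = rho * eps + np.sqrt(1 - rho ** 2) * rng.standard_normal()
        y[t] = np.exp(h[t] / 2) * eps                            # return
        if t + 1 < T:
            h[t + 1] = mu + phi * (h[t] - mu) + sigma_eta * eta  # log-volatility

    # Asymmetry check: correlation between today's return and tomorrow's |return|.
    print("corr(y_t, |y_{t+1}|):", round(np.corrcoef(y[:-1], np.abs(y[1:]))[0, 1], 3))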