915 results for Forecasting Volatility


Relevance: 20.00%

Abstract:

Doctorate in Economics

Relevance: 20.00%

Abstract:

For climate risk management, cumulative distribution functions (CDFs) are an important source of information. They are ideally suited to compare probabilistic forecasts of primary (e.g. rainfall) or secondary data (e.g. crop yields). Summarised as CDFs, such forecasts allow an easy quantitative assessment of possible alternative actions. Although the degree of uncertainty associated with CDF estimation could influence decisions, such information is rarely provided. Hence, we propose Cox-type regression models (CRMs) as a statistical framework for making inferences on CDFs in climate science. CRMs were designed for modelling probability distributions rather than just mean or median values. This makes the approach appealing for risk assessments, where probabilities of extremes are often more informative than central tendency measures. CRMs are semi-parametric approaches originally designed for modelling risks arising from time-to-event data. Here we extend this original concept beyond time-dependent measures to other variables of interest. We also provide tools for estimating CDFs and surrounding uncertainty envelopes from empirical data. These statistical techniques intrinsically account for non-stationarities in time series that might be the result of climate change. This feature makes CRMs attractive candidates for investigating the feasibility of developing rigorous global circulation model (GCM)-CRM interfaces for the provision of user-relevant forecasts. To demonstrate the applicability of CRMs, we present two examples of El Niño/Southern Oscillation (ENSO)-based forecasts: the onset date of the wet season (Cairns, Australia) and total wet season rainfall (Quixeramobim, Brazil). This study emphasises the methodological aspects of CRMs rather than discussing merits or limitations of the ENSO-based predictors.

Relevance: 20.00%

Abstract:

Dissertation (Master's)—Universidade de Brasília, Departamento de Administração, Programa de Pós-graduação em Administração, 2016.

Relevance: 20.00%

Abstract:

The extreme learning machine (ELM) was originally proposed for single-hidden-layer feedforward neural networks (SLFNs). Owing to the functional equivalence between fuzzy logic systems and SLFNs, fuzzy logic systems can be interpreted as a special case of SLFNs under some mild conditions, and can therefore be trained with SLFN learning algorithms. Exploiting this equivalence, ELM is used here to train interval type-2 fuzzy logic systems (IT2FLSs). Following the working principle of ELM, the antecedent parameters of the IT2FLSs are generated randomly, while the consequent part is optimized in closed form using the Moore-Penrose generalized inverse. Applying the developed model to electricity load forecasting is a further novelty of this work. Experimental results show that the proposed model outperforms two frequently used forecasting models.
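The core ELM training step described above (random hidden-layer parameters, closed-form output weights via the Moore-Penrose pseudoinverse) can be sketched for an ordinary SLFN as follows; the toy data, layer size, and function names are illustrative, not from the paper:

```python
import numpy as np

def train_elm(X, y, n_hidden=30, rng=None):
    """ELM-style training: hidden-layer weights are random,
    output weights come from the Moore-Penrose pseudoinverse."""
    rng = np.random.default_rng(rng)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                  # closed-form output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: fit y = sin(x) on a 1-D grid.
X = np.linspace(0, np.pi, 50).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = train_elm(X, y, rng=0)
mse = np.mean((predict_elm(X, W, b, beta) - y) ** 2)
```

The appeal is the absence of iterative training: the only "learning" is one least-squares solve, which is what makes ELM fast enough to retrain load-forecasting models frequently.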

Relevance: 20.00%

Abstract:

Accurate and timely traffic flow prediction is crucial to proactive traffic management and control in data-driven intelligent transportation systems (D2ITS), and has attracted great research interest in the last few years. In this paper, we propose a Spatial-Temporal Weighted K-Nearest Neighbor model, named STW-KNN, in a general MapReduce framework of distributed modeling on a Hadoop platform, to enhance the accuracy and efficiency of short-term traffic flow forecasting. More specifically, STW-KNN considers the spatial-temporal correlation and weight of traffic flow with trend adjustment features to optimize the search mechanisms, comprising the state vector, proximity measure, prediction function, and selection of K. Furthermore, STW-KNN is implemented on a widely adopted Hadoop distributed computing platform with the MapReduce parallel processing paradigm, for parallel prediction of traffic flow in real time. Finally, in extensive experiments on real-world big taxi trajectory data, STW-KNN is compared with state-of-the-art prediction models including conventional K-Nearest Neighbor (KNN), Artificial Neural Networks (ANNs), Naïve Bayes (NB), Random Forest (RF), and C4.5. The results demonstrate that the proposed model is superior to existing models in accuracy, decreasing the mean absolute percentage error (MAPE) by more than 11.9% in the time domain alone, and even achieves an 89.71% accuracy improvement, with MAPEs between 4% and 6.5%, in the combined space and time domains. It also significantly improves the efficiency and scalability of short-term traffic flow forecasting over existing approaches.
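The prediction function at the heart of any KNN forecaster of this kind — match the current state vector against history, then combine the neighbours' successors — can be illustrated with a distance-weighted univariate sketch (plain inverse-distance weighting stands in for STW-KNN's spatio-temporal weights; the series and function names are made up):

```python
import numpy as np

def knn_forecast(history, state, k=3):
    """Predict the next value from the k nearest historical state
    vectors, weighting neighbours by inverse distance."""
    m = len(state)
    # Build (state vector, next value) pairs from a univariate series.
    states = np.array([history[i:i + m] for i in range(len(history) - m)])
    nexts = np.array([history[i + m] for i in range(len(history) - m)])
    d = np.linalg.norm(states - state, axis=1)    # proximity measure
    idx = np.argsort(d)[:k]                       # K selection
    w = 1.0 / (d[idx] + 1e-9)                     # inverse-distance weights
    return float(np.dot(w, nexts[idx]) / w.sum())

# Toy flow series with a repeating "daily" pattern of period 4.
series = np.tile([10.0, 20.0, 30.0, 20.0], 6)
pred = knn_forecast(series[:-1], series[-4:-1], k=3)  # true next value is 20.0
```

In STW-KNN the state vector additionally spans upstream/downstream detectors and the weights carry the trend-adjustment features, but the search-match-combine skeleton is the same.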

Relevance: 20.00%

Abstract:

This paper presents a novel path planning method for minimizing the energy consumption of an autonomous underwater vehicle subjected to time-varying ocean disturbances and forecast model uncertainty. The algorithm determines 4-dimensional path candidates using Nonlinear Robust Model Predictive Control (NRMPC), with solutions optimised using A∗-like algorithms. Vehicle performance limits are incorporated into the algorithm, with disturbances represented as spatially and temporally varying ocean currents with a bounded uncertainty in their predictions. The proposed algorithm is demonstrated through simulations using a 4-dimensional, spatially distributed time-series predictive ocean current model. Results show the combined NRMPC and A∗ approach is capable of generating energy-efficient paths that are resistant to both dynamic disturbances and ocean model uncertainty.
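The A∗-like stage can be pictured as a best-first search over a cost field in which cell costs encode the energy of moving through the local current. The grid, costs, and function below are a hypothetical 2-D miniature of the 4-D problem, not the paper's algorithm:

```python
import heapq

def a_star(grid_cost, start, goal):
    """A*-like search on a grid; cell costs stand in for the energy
    needed to move against a (hypothetical) current field."""
    rows, cols = len(grid_cost), len(grid_cost[0])
    def h(p):  # Manhattan distance: admissible, since every step costs >= 1
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0.0, start, [start])]
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen:
                ng = g + grid_cost[nr][nc]
                heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc), path + [(nr, nc)]))
    return float("inf"), []

# 3x3 field: a high-cost "current" band in the middle column.
cost = [[1, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]
energy, path = a_star(cost, (0, 0), (0, 2))  # detours around the band, total energy 6
```

The detour through the bottom row (energy 6) beats the direct crossing (energy 10), mirroring how an energy-optimal planner routes around adverse currents rather than through them.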

Relevance: 20.00%

Abstract:

Electricity load forecasting has become one of the key tools for energy efficiency and load management in utility companies, and deregulation has made it considerably more complex. Given the importance of supplying consumers with secure and economical electricity, a reliable and robust forecast engine for short-term load management is essential. Fuzzy inference systems are one of the principal branches of Artificial Intelligence and have been widely used for decision making in complex systems. This paper develops a fuzzy inference system as the main forecast engine for Short-Term Load Forecasting (STLF) of a city in Iran. However, optimizing this platform for this particular case remains a basic problem. To address this issue, the Radial Movement Optimization (RMO) technique is proposed to optimize the whole fuzzy platform. To support this idea, the accuracy of the proposed model is analyzed using the MAPE index, and an average error of 1.38% is obtained for the forecast load demand, which indicates the reliability of the proposed method. Finally, the results achieved by this method demonstrate that an adaptive two-stage hybrid system consisting of fuzzy inference and RMO can be an accurate and robust choice for STLF problems.
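The MAPE index used to report the 1.38% average error is simply the mean of the absolute percentage deviations between actual and forecast load; a minimal sketch with made-up hourly load figures:

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical hourly load (MW) versus its forecast.
actual = [100.0, 120.0, 150.0]
forecast = [101.0, 118.0, 151.5]
err = mape(actual, forecast)  # = (1% + 1.667% + 1%) / 3 ≈ 1.22%
```

Because each error is scaled by the actual load, MAPE lets forecasts for cities of very different demand levels be compared on the same percentage scale.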

Relevance: 20.00%

Abstract:

OBJECTIVE: Our study investigates different models to forecast the total number of next-day discharges from an open ward having no real-time clinical data.

METHODS: We compared 5 popular regression algorithms for modeling total next-day discharges: (1) the autoregressive integrated moving average (ARIMA), (2) the autoregressive moving average with exogenous variables (ARMAX), (3) k-nearest neighbor regression, (4) random forest regression, and (5) support vector regression. While the ARIMA model relied on the past 3 months of discharges, nearest neighbor forecasting estimated the next-day discharge as the median of similar past discharges. In addition, the ARMAX model used the day of the week and the number of patients currently in the ward as exogenous variables. For the random forest and support vector regression models, we designed a predictor set of 20 patient features and 88 ward-level features.

RESULTS: Our data consisted of 12,141 patient visits over 1826 days. Forecasting quality was measured using mean forecast error, mean absolute error, symmetric mean absolute percentage error, and root mean square error. When compared with a moving average prediction model, all 5 models demonstrated superior performance with the random forests achieving 22.7% improvement in mean absolute error, for all days in the year 2014.

CONCLUSIONS: In the absence of clinical information, our study recommends using patient-level and ward-level data in predicting next-day discharges. Random forest and support vector regression models are able to use all available features from such data, resulting in superior performance over traditional autoregressive methods. An intelligent estimate of available beds in wards plays a crucial role in relieving access block in emergency departments.

Relevance: 20.00%

Abstract:

To compare the accuracy of different forecasting approaches, an error measure is required. Many error measures have been proposed in the literature; however, in practice there are situations where different measures yield different decisions on forecasting approach selection, and there is no agreement on which measure should be used. Generally, forecasting measures are ratios or percentages that provide an overall picture of how well the forecasting technique fits the observations. This paper proposes a multiplicative Data Envelopment Analysis (DEA) model to rank several forecasting techniques. We demonstrate the proposed model by applying it to the yearly time series of the M3 competition, where five error measures are computed and aggregated into a single DEA score.
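The kind of inputs such a DEA model aggregates can be sketched as a handful of standard error measures computed per forecasting technique; the particular measures and data below are illustrative, not the five used in the paper:

```python
import numpy as np

def error_measures(actual, forecast):
    """A few common forecast error measures that a DEA model
    could aggregate into a single score."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    e = a - f
    return {
        "MAE": np.mean(np.abs(e)),                                   # mean absolute error
        "RMSE": np.sqrt(np.mean(e ** 2)),                            # root mean square error
        "sMAPE": 100 * np.mean(2 * np.abs(e) / (np.abs(a) + np.abs(f))),  # symmetric MAPE, %
    }

actual = [10.0, 12.0, 9.0, 11.0]
forecast = [11.0, 11.0, 10.0, 10.0]
m = error_measures(actual, forecast)
```

Each technique would contribute one such vector of measures, and the multiplicative DEA model then ranks techniques by a single efficiency score rather than forcing a choice of one measure up front.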

Relevance: 20.00%

Abstract:

The ontology engineering research community has focused for many years on supporting the creation, development and evolution of ontologies. Ontology forecasting, which aims at predicting semantic changes in an ontology, represents instead a new challenge. In this paper, we contribute to this novel endeavour by focusing on the task of forecasting semantic concepts in the research domain. Indeed, ontologies representing scientific disciplines contain only research topics that are already popular enough to be selected by human experts or automatic algorithms. They are thus unfit to support tasks which require the ability to describe and explore the forefront of research, such as trend detection and horizon scanning. We address this issue by introducing the Semantic Innovation Forecast (SIF) model, which predicts new concepts of an ontology at time t + 1, using only data available at time t. Our approach relies on lexical innovation and adoption information extracted from historical data. We evaluated the SIF model on a very large dataset consisting of over one million scientific papers belonging to the Computer Science domain: the outcomes show that the proposed approach offers a competitive boost in mean average precision at ten compared to the baselines when forecasting over 5 years.

Relevance: 20.00%

Abstract:

This paper empirically investigates volatility transmission among stock and foreign exchange markets in seven major world economies over the period July 1988 to January 2015. To this end, we first perform a static and dynamic analysis to measure the total volatility connectedness over the entire period (the system-wide approach) using the framework recently proposed by Diebold and Yilmaz (2014). Second, we use a dynamic analysis to evaluate the net directional connectedness for each market. To gain further insights, we examine the time-varying behaviour of net pairwise directional connectedness during the financial turmoil episodes within the sample period. Our results suggest that slightly more than half of the total variance of the forecast errors is explained by shocks across markets rather than by idiosyncratic shocks. Furthermore, we find that volatility connectedness varies over time, with a surge during periods of increasing economic and financial instability.

Relevance: 20.00%

Abstract:

This paper estimates the small open economy model with financial frictions of Bejarano and Charry (2014) for the Colombian economy using Bayesian estimation techniques. Additionally, I compute the welfare gains of incorporating an optimal response to credit spreads into an augmented Taylor rule. The main result is that reacting to credit spreads does not imply significant welfare gains unless the volatility of economic disturbances increases, as in the disruption implied by a financial crisis. Otherwise, its impact on the macroeconomic variables is nil.

Relevance: 20.00%

Abstract:

We propose a method, denoted the synthetic portfolio, for event studies in market microstructure that is particularly attractive for high-frequency data and thinly traded markets. The method is based on the Synthetic Control Method (SCM) and provides a robust, data-driven way to build a counterfactual for evaluating the effects of volatility call auctions. We find that SCM can be applied if the loss function is defined as the difference between the returns of the asset and the returns of a synthetic portfolio. We apply SCM to test the performance of the volatility call auction as a circuit breaker in the context of an event study. We find that, for Colombian Stock Market securities, the asynchronicity of intraday data restricts the analysis to a selected group of stocks; however, it is possible to build a tracking portfolio. The realized volatility increases after the auction, indicating that the mechanism is not enhancing the price discovery process.

Relevance: 20.00%

Abstract:

This paper presents a methodology for short-term load forecasting based on genetic algorithm feature selection and artificial neural network modeling. A feed-forward artificial neural network is used to model the 24-h-ahead load based on past consumption, weather and stock index data. A genetic algorithm is used to find the best subset of variables for modeling. Three data sets from different geographical locations, covering areas of different sizes with distinct load profiles, are used to evaluate the methodology. The developed approach was found to generate models achieving a mean absolute percentage error (MAPE) below 2%. The feature selection algorithm was able to significantly reduce the number of features used and to increase the accuracy of the models.
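Genetic algorithm feature selection of this kind typically encodes each candidate feature subset as a binary mask and evolves a population of masks toward higher fitness (in the paper, forecast accuracy of the resulting network). A minimal sketch with a toy fitness function in place of training a network; all names and parameters here are illustrative:

```python
import random

def ga_feature_selection(fitness, n_features, pop_size=20, generations=30, seed=1):
    """Minimal binary-mask GA: truncation selection, one-point
    crossover, and single bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]            # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)       # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_features)            # bit-flip mutation
            child[i] = 1 - child[i]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: features 0-2 are informative, the rest only add cost.
def fitness(mask):
    return sum(mask[:3]) - 0.1 * sum(mask[3:])

best = ga_feature_selection(fitness, n_features=10)
```

In the actual methodology the fitness call would train and validate the neural network on the selected subset, which is why reducing the number of features also cuts the training cost of each evaluation.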

Relevance: 20.00%

Abstract:

This article addresses the effects of the prohibition against naked CDS buying implemented by the European Union in November 2012. Three aspects of market quality are analyzed: liquidity, volatility, and price informativeness. Overall, our results suggest that the ban produced negative effects on liquidity and price informativeness. First, we find that in territories within the scope of the EU regulation, the bid–ask spreads on sovereign CDS contracts rose after the ban, but fell for countries outside its bounds. Open interest declined for both groups of CDS reference entities in our sample, but significantly more in the constrained group. Price delay increased more prominently for countries affected by the ban, whereas price precision decreased for these countries while increasing for CDSs written on other sovereign reference entities. Most notably, our findings indicate that these negative effects were more pronounced among reference entities exhibiting lower credit risk. With respect to volatility, the evidence suggests that the ban was successful in stabilizing the CDS market in that volatility decreased, particularly for contracts written on riskier CDS entities.