951 results for vector auto-regressive model
Abstract:
The liberalization of the electricity sector in mainland Portugal followed a methodology similar to that of most European countries, with the market being opened progressively. In monitoring the national electricity sector, it is therefore of particular interest to characterize the most recent evolution of the liberalized market, namely with respect to the price of electricity. Electricity price forecasting is a very important issue for all participants in the electricity market, and because of this importance it has been the subject of numerous studies, with several methodologies proposed. This dissertation addresses the question using forecasting techniques, namely methods based on the history of the variable under study. According to some specialists, forecasts are one of the essential inputs managers rely on in the decision-making process; virtually every relevant operational decision depends on a forecast. The price-forecasting model was built with Autoregressive Integrated Moving Average (ARIMA) models, which generate forecasts from the information contained in the time series itself. Since the aim is to assess the structure of the electricity price in the energy market, it is also important to identify which of the study variables are most closely related to the price. To this end, an exploratory analysis is carried out in parallel, correlating the electricity price with the other variables using the Pearson correlation coefficient, a measure of the degree and direction of the linear relationship between two quantitative variables.
The model was applied to the history of electricity prices since the start of the liberalized market, producing daily, monthly and annual price forecasts. The methodology proved efficient at obtaining solutions and fast enough to forecast the electricity price within a few seconds, making it suitable as decision support in a market environment.
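The Pearson coefficient used in the exploratory analysis above follows directly from its definition: the covariance of the two series divided by the product of their standard deviations. A minimal sketch (the example series are invented, not from the dissertation):

```python
import math

def pearson_r(x, y):
    """Pearson correlation: covariance of x and y divided by the
    product of their standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative series: hourly electricity price vs. demand.
price  = [50.0, 55.0, 60.0, 58.0, 62.0]
demand = [100.0, 112.0, 121.0, 118.0, 126.0]
r = pearson_r(price, demand)   # close to +1: strong positive linear relation
```

A value near +1 or -1 indicates a strong linear relation; values near 0 indicate none, which is how candidate explanatory variables would be screened.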
Abstract:
Employing an endogenous growth model with human capital, this paper explores how productivity shocks in the goods and human capital producing sectors contribute to explaining aggregate fluctuations in output, consumption, investment and hours. Given the importance of accounting for both the dynamics and the trends in the data not captured by the theoretical growth model, we introduce a vector error correction model (VECM) of the measurement errors and estimate the model’s posterior density function using Bayesian methods. To contextualize our findings with those in the literature, we also assess whether the endogenous growth model or the standard real business cycle model better explains the observed variation in these aggregates. In addressing these issues we contribute to both the methods of analysis and the ongoing debate regarding the effects of innovations to productivity on macroeconomic activity.
Abstract:
Uncertainty quantification of petroleum reservoir models is one of the present challenges, usually approached with a wide range of geostatistical tools linked with statistical optimisation and/or inference algorithms. The paper considers a data-driven approach to modelling uncertainty in spatial predictions. The proposed semi-supervised Support Vector Regression (SVR) model has demonstrated its capability to represent realistic features and to describe the stochastic variability and non-uniqueness of spatial properties. It is able to capture and preserve key spatial dependencies such as connectivity, which is often difficult to achieve with two-point geostatistical models. Semi-supervised SVR is designed to integrate various kinds of conditioning data and to learn dependencies from them. A stochastic semi-supervised SVR model is integrated into a Bayesian framework to quantify uncertainty with multiple models fitted to dynamic observations. The developed approach is illustrated with a reservoir case study; the resulting probabilistic production forecasts are described by uncertainty envelopes.
Abstract:
Time series analysis can be categorized into three different approaches: classical, Box-Jenkins, and state space. The classical approach provides the foundation for the analysis; the Box-Jenkins approach is an improvement of the classical approach and deals with stationary time series; and the state-space approach allows time-variant factors and covers a broader area of time series analysis. This thesis focuses on parameter identifiability under different parameter estimation methods, such as LSQ, Yule-Walker and MLE, which are used in the above approaches. The Kalman filter and smoothing techniques are also integrated with the state-space approach and the MLE method to estimate parameters that are allowed to change over time. Parameter estimation is carried out repeatedly, and integrated with MCMC, to inspect how well the different estimation methods can identify the optimal model parameters. Identification is performed in both a probabilistic and a general sense, and the results are compared in order to study and represent identifiability in a more informative way.
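The filtering step for the simplest state-space model, a scalar local level, fits in a few lines; this is a generic illustration assuming known noise variances, not the estimation code of the thesis:

```python
def kalman_filter(ys, q, r, m0=0.0, p0=1.0):
    """Scalar Kalman filter for the local-level model
        x_t = x_{t-1} + w_t,  w_t ~ N(0, q)   (state)
        y_t = x_t     + v_t,  v_t ~ N(0, r)   (observation)
    Returns the sequence of filtered state means."""
    m, p = m0, p0
    means = []
    for y in ys:
        p = p + q                 # predict: state variance grows
        k = p / (p + r)           # Kalman gain
        m = m + k * (y - m)       # update: pull mean toward observation
        p = (1.0 - k) * p         # update: variance shrinks
        means.append(m)
    return means

# Illustrative noisy observations of a level near 5:
filtered = kalman_filter([5.1, 4.8, 5.3, 4.9], q=0.01, r=1.0)
```

In an MLE setting, the same recursion also yields the one-step prediction errors from which the likelihood of the parameters (here q and r) is evaluated.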
Abstract:
Over time the demand for quantitative portfolio management has increased among financial institutions, but there is still a lack of practical tools. In 2008 the EDHEC Risk and Asset Management Research Centre conducted a survey of European investment practices. It revealed that the majority of asset or fund management companies, pension funds and institutional investors do not use more sophisticated models to compensate for the flaws of Markowitz mean-variance portfolio optimization. Furthermore, tactical asset allocation managers employ a variety of methods to estimate the return and risk of assets, but also need sophisticated portfolio management models to outperform their benchmarks. Recent developments in portfolio management suggest that new innovations are slowly gaining ground, but they still need to be studied carefully. This thesis tries to provide a practical tactical asset allocation (TAA) application of the Black–Litterman (B–L) approach and an unbiased evaluation of the B–L model's qualities. The mean-variance framework, issues related to asset allocation decisions, and return forecasting are examined carefully to uncover issues affecting active portfolio management. European fixed-income data is employed in an empirical study that tries to reveal whether a B–L-model-based TAA portfolio is able to outperform its strategic benchmark. The tactical asset allocation utilizes a Vector Autoregressive (VAR) model to create return forecasts from lagged values of asset classes as well as economic variables. The sample data (31.12.1999–31.12.2012) is divided into two parts: in-sample data is used for calibrating a strategic portfolio, and the out-of-sample period for testing the tactical portfolio against the strategic benchmark. Results show that the B–L-model-based tactical asset allocation outperforms the benchmark portfolio in terms of risk-adjusted return and mean excess return.
The VAR model is able to pick up changes in investor sentiment, and the B–L model adjusts portfolio weights in a controlled manner. The TAA portfolio shows promise especially in moderately shifting allocation to riskier assets while the market is turning bullish, but without overweighting investments with high beta. Based on the findings of this thesis, the Black–Litterman model offers a good platform for active asset managers to quantify their views on investments and implement their strategies. The B–L model shows potential and offers interesting research avenues. However, the success of tactical asset allocation is still highly dependent on the quality of the input estimates.
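A one-step forecast from a fitted first-order VAR, the kind of return forecast fed into the B–L views, has the form y_{t+1} = c + A y_t. A toy sketch with invented coefficients (not the thesis's estimates):

```python
def var1_forecast(c, A, y):
    """One-step forecast of a VAR(1): y_hat = c + A y."""
    return [ci + sum(a * yj for a, yj in zip(row, y))
            for ci, row in zip(c, A)]

# Toy system: two series (an asset return and an economic variable).
c = [0.1, 0.0]                     # intercepts (invented)
A = [[0.5, 0.2],                   # lag-1 coefficient matrix (invented)
     [0.0, 0.8]]
y_t = [1.0, 2.0]
y_next = var1_forecast(c, A, y_t)  # approximately [1.0, 1.6]
```

Higher-order VARs with economic regressors work the same way, with one coefficient matrix per lag; each forecast then enters the B–L framework as an investor view with an attached confidence.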
Abstract:
The Central Governor Model (CGM) suggests that perturbations in the rate of heat storage (HS) are centrally integrated to regulate exercise intensity in a feed-forward fashion to prevent excessive thermal strain. We directly tested the CGM by manipulating ambient temperature (Tam) at 20-minute intervals from 20°C to 35°C, and returning to 20°C, while participants cycled at a set rating of perceived exertion (RPE). The synchronicity of power output (PO) with changes in HS and Tam was quantified using Auto-Regressive Integrated Moving Average (ARIMA) analysis. PO fluctuated irregularly but was not significantly correlated with changes in thermophysiological status. Repeated measures indicated no changes in lactate accumulation. In conclusion, real-time dynamic sensation of Tam and integration of HS does not directly influence voluntary pacing strategies during sub-maximal cycling at a constant RPE, while non-significant changes in blood lactate suggest an absence of peripheral fatigue.
Abstract:
The Meese-Rogoff forecasting puzzle states that foreign exchange (FX) rates are unpredictable. Since a country's macroeconomic conditions could affect the price of its national currency, we study the dynamic relations between FX rates and selected macroeconomic accounts. Our research tests whether the predictability of FX rates can be improved through advanced econometrics; improving it has important implications for various groups, including investors, business entities and governments. The present thesis examines the dynamic relations between FX rates, savings and investments for a sample of 25 countries from the Organization for Economic Cooperation and Development, using quarterly data on FX rates, macroeconomic indices and accounts, including savings and investments, over three decades. Through preliminary Augmented Dickey-Fuller unit root tests and Johansen cointegration tests, we find that the savings rate and the investment rate are cointegrated with the vector (1, -1). This result is consistent with many previous studies on the savings-investment relation and therefore confirms the validity of the Feldstein-Horioka puzzle. Because of this special cointegrating relation, we introduce the savings-investment rate differential (SID). Investigating each country through a vector autoregression (VAR) model, we observe highly insignificant coefficient estimates of the historical SIDs on the present FX rates, and we report similar findings with the panel VAR approach. We thus conclude that historical SIDs are of no use in forecasting the FX rate. Nonetheless, the coefficients of past FX rates on the current SIDs are statistically significant for both the country-specific and the panel VAR models. We therefore conclude that historical FX rates can conversely predict the SID to some degree; specifically, a depreciation of the domestic currency would cause an increase in the SID.
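The cointegrating vector (1, -1) means the differential SID_t = S_t - I_t is stationary even though each rate alone may not be. A hedged sketch of forming the SID and estimating a single lagged-FX coefficient by OLS, with invented data rather than the study's:

```python
def ols_slope(x, y):
    """OLS slope of y on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

# Illustrative quarterly rates (invented, not the study's data):
savings = [0.22, 0.24, 0.23, 0.25, 0.24, 0.26]
invest  = [0.21, 0.22, 0.24, 0.23, 0.25, 0.24]
sid = [s - i for s, i in zip(savings, invest)]   # stationary under (1, -1)

fx = [1.00, 1.02, 1.05, 1.04, 1.08, 1.10]        # illustrative FX levels
# The direction the VARs found significant: past FX rates predict the SID.
beta = ols_slope(fx[:-1], sid[1:])
```

The full VAR regresses each variable on lags of all variables jointly; this single-regressor sketch only isolates the one direction the abstract reports as significant.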
Abstract:
One of the major concerns of scoliosis patients undergoing surgical treatment is the aesthetic aspect of the surgery outcome. It would be useful to predict the postoperative appearance of the patient trunk in the course of a surgery planning process in order to take into account the expectations of the patient. In this paper, we propose to use least squares support vector regression for the prediction of the postoperative trunk 3D shape after spine surgery for adolescent idiopathic scoliosis. Five dimensionality reduction techniques used in conjunction with the support vector machine are compared. The methods are evaluated in terms of their accuracy, based on the leave-one-out cross-validation performed on a database of 141 cases. The results indicate that the 3D shape predictions using a dimensionality reduction obtained by simultaneous decomposition of the predictors and response variables have the best accuracy.
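Least-squares SVR reduces to solving a single linear system in the dual variables, (K + reg·I)α = y, where K is the kernel matrix. A stripped-down, single-feature sketch with an RBF kernel (the bias term and the paper's dimensionality-reduction step are omitted, and all parameter values are invented):

```python
import math

def rbf(a, b, gamma=1.0):
    """Gaussian (RBF) kernel on scalars."""
    return math.exp(-gamma * (a - b) ** 2)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [mr - f * mi for mr, mi in zip(M[r], M[i])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def lssvr_fit(xs, ys, gamma=1.0, reg=0.1):
    """Dual weights of least-squares SVR: solve (K + reg*I) alpha = y."""
    K = [[rbf(a, b, gamma) + (reg if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    return solve(K, ys)

def lssvr_predict(xs, alpha, x, gamma=1.0):
    """Prediction is a kernel-weighted sum over the training points."""
    return sum(ai * rbf(xi, x, gamma) for ai, xi in zip(alpha, xs))
```

In the paper's setting the inputs are reduced representations of the preoperative trunk geometry and the outputs are the 3D shape coordinates; the kernel and regularization parameters would be chosen by the leave-one-out cross-validation described above.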
Abstract:
The thesis has covered various aspects of modeling and analysis of finite-mean time series with symmetric stable distributed innovations. Time series analysis based on Box-Jenkins methods is the most popular approach, where the models are linear and the errors are Gaussian. We highlight the limitations of classical time series analysis tools, explore some generalized tools, and organize the approach parallel to the classical setup. In the present thesis we mainly study the estimation and prediction of the signal-plus-noise model, where both the signal and the noise are assumed to follow models with symmetric stable innovations. We start the thesis with some motivating examples and application areas of alpha-stable time series models. Classical time series analysis and the corresponding theory based on finite-variance models are discussed extensively in the second chapter, where we also survey the existing theory and methods for infinite-variance models. In the third chapter we present a linear filtering method for computing the filter weights assigned to the observations when estimating an unobserved signal in a general noisy environment. Here we consider both the signal and the noise to be stationary processes with infinite-variance innovations. We derive semi-infinite, doubly infinite and asymmetric signal-extraction filters based on a minimum dispersion criterion. Finite-length filters based on Kalman-Levy filters are developed and the pattern of the filter weights identified. Simulation studies show that the proposed methods are competent in signal extraction for processes with infinite variance. Parameter estimation of autoregressive signals observed in a symmetric stable noise environment is discussed in the fourth chapter, using higher-order Yule-Walker type estimation based on the auto-covariation function; the methods are exemplified by simulation and by an application to sea surface temperature data.
We increase the number of Yule-Walker equations and propose an ordinary least squares estimate of the autoregressive parameters. The singularity problem of the auto-covariation matrix is addressed, and a modified version of the Generalized Yule-Walker method is derived using the singular value decomposition. In the fifth chapter we introduce the partial covariation function as a tool for stable time series analysis where the covariance or partial covariance is ill defined. Asymptotic results for the partial auto-covariation are studied, and its application to model identification of stable autoregressive models is discussed. We generalize the Durbin-Levinson algorithm to infinite-variance models in terms of the partial auto-covariation function and introduce a new information criterion for consistent order estimation of stable autoregressive models. In chapter six we explore the application of the techniques discussed in the previous chapter to signal processing. Frequency estimation of a sinusoidal signal observed in a symmetric stable noisy environment is discussed in this context. We introduce a parametric spectrum analysis and a frequency estimate using the power transfer function, whose estimate is obtained using the modified generalized Yule-Walker approach. Another important problem in statistical signal processing is to identify the number of sinusoidal components in an observed signal; we use a modified version of the proposed information criterion for this purpose.
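Classical (finite-variance) Yule-Walker estimation, which the thesis generalizes via the auto-covariation function, solves moment equations built from sample autocovariances. A minimal AR(2) sketch of the classical version, not the stable-noise estimator of the thesis:

```python
def acvf(x, lag):
    """Sample autocovariance of x at the given lag."""
    n = len(x)
    m = sum(x) / n
    return sum((x[t] - m) * (x[t + lag] - m) for t in range(n - lag)) / n

def yule_walker_ar2(x):
    """Solve the two Yule-Walker moment equations for an AR(2):
        g1 = phi1*g0 + phi2*g1
        g2 = phi1*g1 + phi2*g0
    by Cramer's rule on the 2x2 system."""
    g0, g1, g2 = acvf(x, 0), acvf(x, 1), acvf(x, 2)
    det = g0 * g0 - g1 * g1
    phi1 = (g1 * g0 - g2 * g1) / det
    phi2 = (g2 * g0 - g1 * g1) / det
    return phi1, phi2
```

Under infinite-variance innovations the autocovariance is ill defined, which is exactly why the thesis replaces it with the auto-covariation and adds higher-order equations solved by least squares.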
Abstract:
The crisis that broke out in the US mortgage market in 2008, and which managed to spread throughout the entire financial system, exposed the degree of interconnection that currently exists among the entities of the sector and their relations with the productive sector, making evident the need to identify and characterize the systemic risk inherent in the system, so that regulators can pursue stability both of individual entities and of the system as a whole. This paper shows, through a model that combines the informational power of networks with a spatial autoregressive (panel-type) model, the importance of adding to the micro-prudential approach (proposed in Basel II) a variable that captures the effect of being connected with other entities, thereby carrying out a macro-prudential analysis (proposed in Basel III).
Abstract:
The aim of this paper is to explore the effects of macroeconomic variables on house prices and the lead-lag relationships of real estate markets, in order to examine house price diffusion across Asian financial centres. The analysis is based on the Global Vector Auto-Regression (GVAR) model estimated using quarterly data for six Asian financial centres (Hong Kong, Tokyo, Seoul, Singapore, Taipei and Bangkok) from 1991Q1 to 2011Q2. The empirical results indicate that global economic conditions play a significant role in shaping house price movements across Asian financial centres. In particular, small open economies that rely heavily on international trade, such as Singapore and Tokyo, show positive correlations between the economy's openness and house prices, consistent with the Balassa-Samuelson hypothesis in international trade. However, region-specific conditions also play important roles as determinants of house prices, partly due to restrictive housing policies and demand-supply imbalances, as found in Singapore and Bangkok.
Abstract:
The relationship between price volatility and competition is examined. Atheoretic vector autoregressions on farm prices of wheat and retail prices of derivatives (flour, bread, pasta, bulgur and cookies) are compared to results from a dynamic, simultaneous-equations model with theory-based farm-to-retail linkages. Analytical results yield insights about the number of firms and its impact on demand- and supply-side multipliers, but the applications to Turkish time series (1988:1-1996:12) yield mixed results.
Abstract:
Sweden, together with Norway, Finland and Denmark, has created a multi-national electricity market called NordPool. In this market, producers and retailers of electricity can buy and sell electricity, and the retailers then offer this electricity to end consumers such as households and industries. Previous studies have shown that pricing in the NordPool market functions quite well, but to my knowledge no other study has examined whether pricing in the retail market to consumers in Sweden is well functioning. If the market is well functioning, with competition and low transaction costs for changing electricity retailer, we would expect a homogeneous good such as electricity to be sold at approximately the same price, and price changes to be highly correlated, in this market. Thus, the aim of this study is to test whether the price of Vattenfall, the largest energy firm in the Swedish market, is highly correlated with the prices of other firms in the Swedish retail market for electricity. Descriptive statistics indicate that the price offered by Vattenfall is quite similar to the prices of other firms in the market. In addition, regression analysis shows that the correlation between the price of Vattenfall and other firms is as high as 0.98.
Abstract:
I start by presenting an explicit solution to Taylor's (2001) model, in order to illustrate the link between the target interest rate and the overnight interest rate prevailing in the economy. Next, I use Vector Auto Regressions to shed some light on the evolution of key macroeconomic variables after the Central Bank of Brazil increases the target interest rate by 1%. Point estimates show a four-year accumulated output loss ranging from 0.04% (whole sample, 1980:1-2004:2, quarterly data) to 0.25% (post-Real data only), with a first-year peak output response between 0.04% and 1.0%, respectively. Prices decline between 2% and 4% over a 4-year horizon. The accumulated output response is found to be between 3.5 and 6 times higher after the Real Plan than when the whole sample is considered. The 95% confidence bands obtained using a bias-corrected bootstrap always include the null output response when the whole sample is used, but not when the data is restricted to the post-Real period. Innovations to interest rates explain between 4.9% (whole sample) and 9.2% (post-Real sample) of the forecast error of GDP.
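An accumulated response of the kind reported above is the running sum of impulse-response coefficients over the horizon. A toy univariate sketch, collapsing the VAR dynamics to a single AR(1) persistence parameter, with invented numbers rather than the paper's estimates:

```python
def impulse_response(phi, shock, horizon):
    """Responses of an AR(1) to a one-time shock: phi**h * shock
    at horizon h, decaying geometrically."""
    return [shock * phi ** h for h in range(horizon)]

def accumulated_response(phi, shock, horizon):
    """Sum of the impulse responses over the horizon."""
    return sum(impulse_response(phi, shock, horizon))

# A -0.05% first-quarter output response with persistence 0.7,
# accumulated over a four-year (16-quarter) horizon (numbers invented):
total = accumulated_response(0.7, -0.05, 16)
```

In the VAR case the scalar phi**h is replaced by powers of the companion coefficient matrix, but the accumulation over quarters works the same way.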
Abstract:
We study the joint determination of the lag length, the dimension of the cointegrating space and the rank of the matrix of short-run parameters of a vector autoregressive (VAR) model using model selection criteria. We consider model selection criteria which have data-dependent penalties for a lack of parsimony, as well as the traditional ones. We suggest a new procedure which is a hybrid of traditional criteria and criteria with data-dependent penalties. In order to compute the fit of each model, we propose an iterative procedure to compute the maximum likelihood estimates of parameters of a VAR model with short-run and long-run restrictions. Our Monte Carlo simulations measure the improvements in forecasting accuracy that can arise from the joint determination of lag-length and rank, relative to the commonly used procedure of selecting the lag-length only and then testing for cointegration.
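The paper's joint selection covers lag length and rank restrictions at once; as a much simpler univariate analogue, lag length alone can be chosen by fitting AR(p) for increasing p with the Levinson-Durbin recursion and minimizing a BIC-type criterion. An illustrative sketch, not the authors' procedure:

```python
import math

def acvf(x, lag):
    """Sample autocovariance of x at the given lag."""
    n = len(x)
    m = sum(x) / n
    return sum((x[t] - m) * (x[t + lag] - m) for t in range(n - lag)) / n

def levinson_durbin(g, pmax):
    """AR(p) coefficients and innovation variances for p = 1..pmax,
    computed recursively from the autocovariances g[0..pmax]."""
    phi, prev, v, variances = {}, [], g[0], []
    for k in range(1, pmax + 1):
        acc = g[k] - sum(prev[j] * g[k - 1 - j] for j in range(k - 1))
        ref = acc / v                      # reflection coefficient
        cur = [prev[j] - ref * prev[k - 2 - j] for j in range(k - 1)] + [ref]
        v *= 1.0 - ref * ref               # innovation variance shrinks
        phi[k], prev = cur, cur
        variances.append(v)
    return phi, variances

def select_lag(x, pmax):
    """Pick the AR order minimizing n*log(v_p) + p*log(n)."""
    n = len(x)
    g = [acvf(x, k) for k in range(pmax + 1)]
    _, variances = levinson_durbin(g, pmax)
    bic = [n * math.log(v) + (p + 1) * math.log(n)
           for p, v in enumerate(variances)]
    return 1 + bic.index(min(bic))
```

The paper's point is precisely that doing this per-dimension choice jointly with the rank decisions, under short-run and long-run restrictions, can improve forecast accuracy over choosing the lag first and testing for cointegration afterwards.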