982 results for Stable Autoregressive Models
Abstract:
This study aims to analyze the development potential of the soybean futures contract in Brazil through the attraction of Brazilian and Argentine hedgers. To that end, it is necessary to understand the patterns of price linkages between the regions analyzed. Accordingly, Chapter 2 investigated the spatial integration of the physical soybean market in Brazil (the region of Sorriso, in Mato Grosso) and Argentina (the region of Rosario, in Santa Fe province) and compared it to the degree of integration with the United States. Threshold autoregressive models (TAR and M-TAR) and linear and threshold vector error-correction models (VECM and TVECM) were employed to capture the effects of transaction costs on the spatial integration among these regions. The results indicated that the Brazilian, Argentine and US soybean markets are integrated, even when the effects of transaction costs on spatial arbitrage decisions are taken into account. Consequently, soybean prices in the international market tend to reflect the behavior of the main producing countries. Nevertheless, the transmission time of price shocks proved, in general, shorter between Brazil and Argentina, reflecting their geographical proximity. The transmission of these shocks was also found to be asymmetric, since positive shocks to the long-run relationship tend to be more persistent than negative ones. If the futures contract reflects the price behavior of a single integrated physical market, basis risk should then be lower for that market and, therefore, hedging efficiency should be higher.
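As an illustration of the threshold idea in this abstract, a minimal TAR sketch on a simulated price spread might look as follows; the data, the transaction-cost band and the grid-search procedure here are all hypothetical stand-ins, not the thesis's actual estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a price spread that mean-reverts only outside a transaction-cost band.
n = 500
z = np.zeros(n)
for t in range(1, n):
    adj = -0.3 * z[t-1] if abs(z[t-1]) > 1.0 else 0.0   # no arbitrage inside the band
    z[t] = z[t-1] + adj + rng.normal(scale=0.2)

dz, lag = np.diff(z), z[:-1]

def tar_fit(tau):
    """Fit a two-regime TAR: dz_t = rho_out*z_{t-1} outside |z|>tau, rho_in inside."""
    out = (np.abs(lag) > tau).astype(float)
    X = np.column_stack([lag * out, lag * (1 - out)])
    beta, ssr, *_ = np.linalg.lstsq(X, dz, rcond=None)
    ssr_val = ssr[0] if ssr.size else np.sum((dz - X @ beta) ** 2)
    return beta, ssr_val

# Grid-search the threshold over interior quantiles of |z_{t-1}|.
grid = np.quantile(np.abs(lag), np.linspace(0.15, 0.85, 50))
best_tau = min(grid, key=lambda tau: tar_fit(tau)[1])
rho_out, rho_in = tar_fit(best_tau)[0]
print(best_tau, rho_out, rho_in)
```

The estimated adjustment outside the band (`rho_out`) should be clearly negative, while adjustment inside the band is weak, which is the signature of transaction costs in spatial arbitrage.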
In Chapter 3, the objective was to verify whether hedging with the March-maturity contracts is more efficient on the CME than on the BM&FBOVESPA, taking into account the long-run relationships between spot and futures prices as well as the dynamics of the conditional covariance structure, by means of vector error-correction models (VECM) and generalized conditional heteroskedasticity models with dynamic conditional correlation (DCC-GARCH). The results showed that, in general, introducing dynamics into the second moments of the error distributions tends to increase the efficiency of the hedging strategy. Moreover, producers in Sorriso were found to obtain better hedging conditions on the CME, although trading on the BM&FBOVESPA still reduces variance. On the other hand, hedging efficiency for producers in Rosario was significantly higher on the BM&FBOVESPA than on the CME, which points to a potential market of Argentine hedgers for trading the local soybean futures contract in Brazil.
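The minimum-variance hedge ratio and hedge-effectiveness calculation behind such comparisons can be sketched as below. A simple rolling covariance stands in (crudely) for the DCC-GARCH conditional moments, and all numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical spot and futures returns with correlated innovations.
n = 400
f = rng.normal(scale=0.02, size=n)
s = 0.9 * f + rng.normal(scale=0.01, size=n)

# Minimum-variance hedge ratio h_t = Cov_t(s, f) / Var_t(f).
# A 60-observation rolling window stands in for the conditional moments.
w = 60
h = np.array([np.cov(s[t-w:t], f[t-w:t])[0, 1] / np.var(f[t-w:t], ddof=1)
              for t in range(w, n)])

hedged = s[w:] - h * f[w:]
effectiveness = 1 - np.var(hedged) / np.var(s[w:])   # variance reduction of the hedge
print(round(effectiveness, 3))
```

`effectiveness` is the usual hedging-efficiency measure: the fraction of spot-return variance removed by the futures position.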
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
This study examines the forecasting accuracy of alternative vector autoregressive models, each in a seven-variable system comprising, in turn, daily, weekly and monthly foreign exchange (FX) spot rates. The vector autoregressions (VARs) are in non-stationary, stationary and error-correction forms and are estimated using OLS. Imposing Bayesian priors in the OLS estimations also allowed us to obtain another set of results. We find some tendency for the Bayesian estimation method to generate superior forecast measures relative to the OLS method. This result holds whether or not the data sets contain outliers. Also, the best forecasts under the non-stationary specification outperformed those of the stationary and error-correction specifications, particularly at long forecast horizons, while the best forecasts under the stationary and error-correction specifications are generally similar. The findings for the OLS forecasts are consistent with recent simulation results. The predictive ability of the VARs is, however, very weak.
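A bare-bones sketch of OLS estimation and iterated forecasting for a VAR, with a ridge-style shrinkage standing in (very loosely) for a Bayesian prior; the bivariate system and its parameters are simulated, not the paper's FX data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a bivariate VAR(1): y_t = A y_{t-1} + e_t, with a stable A.
A_true = np.array([[0.5, 0.1],
                   [0.0, 0.4]])
n = 1000
y = np.zeros((n, 2))
for t in range(1, n):
    y[t] = A_true @ y[t-1] + rng.normal(scale=0.1, size=2)

# Equation-by-equation OLS: regress y_t on y_{t-1}.
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

# A ridge penalty shrinking coefficients toward zero mimics, very loosely,
# the effect of a Bayesian prior on the VAR coefficients.
lam = 10.0
A_bayes = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ Y).T

# h-step forecasts iterate the estimated map: y_{t+h} = A_hat^h y_t.
forecast_5 = np.linalg.matrix_power(A_hat, 5) @ y[-1]
print(A_hat, forecast_5)
```

At long horizons the iterated forecast of a stable VAR decays toward the unconditional mean, which is why long-horizon comparisons across specifications are informative.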
Abstract:
This paper provides the most comprehensive evidence to date on whether monetary aggregates are valuable for forecasting US inflation in the early to mid-2000s. We explore a wide range of definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two non-linear techniques, namely recurrent neural networks and kernel recursive least squares regression, techniques that are new to macroeconomics. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite-memory predictor. The two methodologies compete to find the best-fitting US inflation forecasting models and are then compared to forecasts from a naive random walk model. The best models were non-linear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation.
Abstract:
This paper compares the experience of forecasting the UK government bond yield curve before and after the dramatic lowering of short-term interest rates from October 2008. Out-of-sample forecasts for 1, 6 and 12 months are generated from each of a dynamic Nelson-Siegel model, autoregressive models for both yields and the principal components extracted from those yields, a slope regression and a random walk model. At short forecasting horizons, there is little difference in the performance of the models both prior to and after 2008. However, for medium- to longer-term horizons, the slope regression provided the best forecasts prior to 2008, while the recent experience of near-zero short interest rates coincides with a period of forecasting superiority for the autoregressive and dynamic Nelson-Siegel models. © 2014 John Wiley & Sons, Ltd.
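The Nelson-Siegel factor structure referred to above can be sketched as follows, with illustrative parameters rather than values fitted to UK gilt data; the dynamic version of the model forecasts by fitting autoregressions to the three beta factors:

```python
import numpy as np

def nelson_siegel(maturities, beta0, beta1, beta2, lam):
    """Nelson-Siegel yield curve: level (beta0), slope (beta1), curvature (beta2)."""
    m = np.asarray(maturities, dtype=float)
    x = lam * m
    slope = (1 - np.exp(-x)) / x          # loads mostly on short maturities
    curve = slope - np.exp(-x)            # humps at medium maturities
    return beta0 + beta1 * slope + beta2 * curve

# Illustrative parameters (hypothetical, not fitted to any data set).
mats = np.array([0.25, 1, 2, 5, 10, 30])
y = nelson_siegel(mats, beta0=0.045, beta1=-0.02, beta2=0.01, lam=0.6)
print(np.round(y, 4))
```

With a negative slope factor the curve is upward-sloping, and yields converge to the level factor `beta0` as maturity grows.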
Abstract:
This paper provides the most comprehensive evidence to date on whether monetary aggregates are valuable for forecasting US inflation in the early to mid-2000s. We explore a wide range of definitions of money, including different methods of aggregation and different collections of included monetary assets. We use non-linear, artificial intelligence techniques, namely recurrent neural networks, evolution strategies and kernel methods, in our forecasting experiment. In the experiment, these three methodologies compete to find the best-fitting US inflation forecasting models and are then compared to forecasts from a naive random walk model. The best models were non-linear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation. There is evidence in the literature that evolutionary methods can be used to evolve kernels; our future work should therefore combine the evolutionary and kernel methods to obtain the benefits of both.
Abstract:
This paper provides the most comprehensive evidence to date on whether monetary aggregates are valuable for forecasting US inflation in the early to mid-2000s. We explore a wide range of definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two nonlinear techniques, namely recurrent neural networks and kernel recursive least squares regression, techniques that are new to macroeconomics. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite-memory predictor. The two methodologies compete to find the best-fitting US inflation forecasting models and are then compared to forecasts from a naive random walk model. The best models were nonlinear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation. Beyond its economic findings, our study is in the tradition of physicists' long-standing interest in the interconnections among statistical mechanics, neural networks, and related nonparametric statistical methods, and suggests potential avenues of extension for such studies. © 2010 Elsevier B.V. All rights reserved.
Abstract:
In time series analysis, the usual stochastic processes assume continuous marginal distributions and are, in general, unsuitable for modeling count series, since the nonlinear features of such series pose statistical problems, particularly in parameter estimation. Appropriate methodologies for the analysis and modeling of series with discrete marginal distributions were therefore investigated. In this context, Al-Osh and Alzaid (1987) and McKenzie (1988) introduced the class of non-negative integer-valued autoregressive models, the INAR processes. These models have been treated frequently in the scientific literature over recent decades, as their importance in applications across many fields of knowledge has attracted great interest. In this work, after a brief review of time series and the classical methods for their analysis, we present the first-order non-negative integer-valued autoregressive model, INAR(1), and its extension to order p, their properties, and some parameter-estimation methods, namely the Yule-Walker method, Conditional Least Squares (CLS), Conditional Maximum Likelihood (CML) and Quasi-Maximum Likelihood (QML). We also present an automatic order-selection criterion for INAR models based on the Corrected Akaike Information Criterion, AICc, one of the criteria used to determine the order of autoregressive (AR) models. Finally, the INAR methodology is applied to real count data from the maritime transport and insurance sectors of Cape Verde.
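A minimal sketch of an INAR(1) process with binomial thinning, together with Yule-Walker estimation of its parameters, might look as follows (simulated data; parameters hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)

# INAR(1): X_t = alpha o X_{t-1} + eps_t, where "o" is binomial thinning
# (each of the X_{t-1} counts survives with probability alpha) and
# eps_t ~ Poisson(lam) are new arrivals.
alpha, lam, n = 0.6, 2.0, 5000
x = np.zeros(n, dtype=int)
x[0] = rng.poisson(lam / (1 - alpha))          # start near the stationary mean
for t in range(1, n):
    x[t] = rng.binomial(x[t-1], alpha) + rng.poisson(lam)

# Yule-Walker estimation: alpha is the lag-1 autocorrelation of the process,
# and the stationary mean lam/(1 - alpha) then identifies lam.
xc = x - x.mean()
alpha_hat = (xc[1:] @ xc[:-1]) / (xc @ xc)
lam_hat = x.mean() * (1 - alpha_hat)
print(round(alpha_hat, 3), round(lam_hat, 3))
```

Thinning keeps the process integer-valued, which is exactly what an ordinary Gaussian AR(1) cannot do for count data.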
Abstract:
The investigation of pathogen persistence in vector-borne diseases is important in different ecological and epidemiological contexts. In this thesis, I develop deterministic and stochastic models to help investigate pathogen persistence in host-vector systems using efficient modelling paradigms. A general introduction with the aims and objectives of the studies conducted in the thesis is provided in Chapter 1. The mathematical treatment of the models used in the thesis is provided in Chapter 2, where the models are shown to be locally asymptotically stable. The models used in the rest of the thesis are based on either the same mathematical structure as that studied in this chapter or a similar one. After that, three different experiments are conducted in this thesis to study pathogen persistence. In Chapter 3, I characterize pathogen persistence in terms of the Critical Community Size (CCS) and find its relationship with the model parameters. In this study, the stochastic versions of two epidemiologically different host-vector models are used for estimating CCS. I note that the model parameters and their algebraic combination, in addition to the seroprevalence level of the host population, can be used to quantify CCS. The study undertaken in Chapter 4 estimates pathogen persistence using both deterministic and stochastic versions of a model with a seasonal birth rate of the vectors. Through stochastic simulations, I investigate the pattern of epidemics after the introduction of an infectious individual at different times of the year. The results show that the disease dynamics are altered by the seasonal variation, and that higher levels of pre-existing seroprevalence reduce the probability of invasion of dengue. In Chapter 5, I consider two alternative ways to represent the dynamics of a host-vector model. Both approximate models are investigated for the parameter regions where the approximation fails to hold.
Moreover, three metrics are used to compare them with the full model. In addition to the computational benefits, these approximations are used to investigate to what degree the inclusion of the vector population in the dynamics of the system is important. Finally, in Chapter 6, I present a summary of the studies undertaken and possible extensions for future work.
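A deterministic host-vector model of the general kind discussed above can be sketched as follows. This is a generic Ross-Macdonald-style SIR/SI system with hypothetical parameters, not one of the thesis's specific models:

```python
# Minimal deterministic host-vector model: hosts follow SIR dynamics,
# vectors SI with constant birth/death turnover. Euler time-stepping.
# All parameters are hypothetical, chosen only for illustration.
def final_host_seroprevalence(beta_hv=0.5, beta_vh=0.5, gamma=0.1,
                              mu_v=0.1, days=400, dt=0.1):
    s_h, i_h, r_h = 0.999, 0.001, 0.0     # host fractions (S, I, R)
    s_v, i_v = 1.0, 0.0                   # vector fractions (S, I)
    for _ in range(int(days / dt)):
        new_h = beta_vh * s_h * i_v       # vector-to-host transmission
        new_v = beta_hv * s_v * i_h       # host-to-vector transmission
        d = (-new_h,                      # dS_h/dt
             new_h - gamma * i_h,         # dI_h/dt
             gamma * i_h,                 # dR_h/dt
             mu_v * (1.0 - s_v) - new_v,  # dS_v/dt (birth/death turnover)
             new_v - mu_v * i_v)          # dI_v/dt
        s_h += dt * d[0]; i_h += dt * d[1]; r_h += dt * d[2]
        s_v += dt * d[3]; i_v += dt * d[4]
    return r_h

# Here R0^2 = beta_hv * beta_vh / (gamma * mu_v) = 25, so a major outbreak
# is expected and most hosts end up seropositive.
final_size = final_host_seroprevalence()
print(round(final_size, 3))
```

The final seroprevalence `r_h` is the deterministic counterpart of the pre-existing seroprevalence levels that, in the stochastic versions, govern persistence and the Critical Community Size.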
Abstract:
Implementing stable aeroelastic models able to capture the complex features of multi-concept smart blades is a key step in reducing the uncertainties that come along with blade dynamics. Numerical simulations of fluid-structure interaction can thus be used to test realistic scenarios comprising full-scale blades at a reasonably low computational cost. A code combining two advanced numerical models was designed and run on a parallel HPC supercomputer platform. The first model is based on a variation of the dimensional reduction technique proposed by Hodges and Yu and records the structural response of heterogeneous composite blades. This technique reduces the geometric complexity of the heterogeneous blade section to a stiffness matrix for an equivalent beam. The derived equivalent 1-D strain energy matrix is similar to the actual 3-D strain energy matrix in an asymptotic sense. Because this 1-D matrix allows the blade structure to be modeled accurately as a 1-D finite element problem, it substantially reduces the computational effort, and subsequently the computational cost, required to model the structural dynamics at each step. The second model implements Blade Element Momentum theory. In this approach, all velocities and forces are mapped through orthogonal matrices that capture the large deformations and the effects of rotations in calculating the aerodynamic forces, which ultimately accounts for the complex flexo-torsional deformations. In this thesis, these computational tools developed by MTU's research team were successfully tested for the aeroelastic analysis of wind-turbine blades. The validation in this thesis is based largely on several experiments performed on the NREL 5MW blade, as this is widely accepted as a benchmark blade in the wind industry.
Along with the use of this innovative model, the internal blade structure was also modified to add to the benefits of the already advanced numerical models.
Abstract:
Here I develop a model of a radiative-convective atmosphere with both radiative and convective schemes highly simplified. Atmospheric absorption of radiation at selected wavelengths is represented by constant mass absorption coefficients in finite-width spectral bands. The convective regime is introduced by prescribing the lapse rate in the troposphere. The main novelty of the radiative-convective model developed here is that it is solved without any angular approximation for the radiation field. The solution obtained in the purely radiative mode (i.e. with convection ignored) leads to multiple stable equilibrium states, very similar to some results recently found in simple models of planetary atmospheres. However, the introduction of convective processes removes these multiple equilibria. This shows the importance of taking convective processes into account even in qualitative analyses of planetary atmospheres.
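For illustration only, the classic convective-adjustment construction on a gray radiative-equilibrium profile can be sketched as below. Note that the profile uses the standard two-stream (Eddington) result, i.e. precisely the kind of angular approximation this paper's model avoids, and all numbers are hypothetical:

```python
import numpy as np

# Gray-atmosphere radiative-equilibrium profile (two-stream result),
# followed by a convective adjustment to a prescribed tropospheric
# lapse rate, holding the surface temperature fixed (no energy closure).
T_e = 255.0                                  # effective temperature (K)
tau = np.linspace(0.0, 4.0, 200)             # optical depth, top -> surface
T_rad = (0.5 * T_e**4 * (1.0 + 1.5 * tau)) ** 0.25

# Map optical depth to height assuming tau proportional to pressure with a
# fixed 8 km scale height (hypothetical numbers).
H = 8.0
z = -H * np.log(np.maximum(tau, 1e-3) / tau[-1])   # km above the surface

# Where radiative equilibrium is steeper than the prescribed 6.5 K/km lapse
# rate, replace it with the convective profile anchored at the surface.
gamma = 6.5
T_adj = np.maximum(T_rad, T_rad[-1] - gamma * z)
print(round(T_rad[-1], 1), round(T_adj[0], 1))
```

The crossover height where the two branches meet plays the role of the tropopause: below it the prescribed lapse rate governs, above it the profile relaxes back to radiative equilibrium.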
Abstract:
Individual learning (e.g., trial-and-error) and social learning (e.g., imitation) are alternative ways of acquiring and expressing the appropriate phenotype in an environment. The optimal choice between using individual learning and/or social learning may be dictated by the life-stage or age of an organism. Of special interest is a learning schedule in which social learning precedes individual learning, because such a schedule is apparently a necessary condition for cumulative culture. Assuming two obligatory learning stages per discrete generation, we obtain the evolutionarily stable learning schedules for the three situations where the environment is constant, fluctuates between generations, or fluctuates within generations. During each learning stage, we assume that an organism may target the optimal phenotype in the current environment by individual learning, and/or the mature phenotype of the previous generation by oblique social learning. In the absence of exogenous costs to learning, the evolutionarily stable learning schedules are predicted to be either pure social learning followed by pure individual learning ("bang-bang" control) or pure individual learning at both stages ("flat" control). Moreover, we find for each situation that the evolutionarily stable learning schedule is also the one that optimizes the learned phenotype at equilibrium.
Abstract:
In this paper, we propose exact inference procedures for asset pricing models that can be formulated in the framework of a multivariate linear regression (CAPM), allowing for stable error distributions. The normality assumption on the distribution of stock returns is usually rejected in empirical studies, due to excess kurtosis and asymmetry. To model such data, we propose a comprehensive statistical approach which allows for alternative, possibly asymmetric, heavy-tailed distributions without the use of large-sample approximations. The methods suggested are based on Monte Carlo test techniques. Goodness-of-fit tests are formally incorporated to ensure that the error distributions considered are empirically sustainable, and from these tests exact confidence sets for the unknown tail area and asymmetry parameters of the stable error distribution are derived. We also derive tests for the efficiency of the market portfolio (zero intercepts) which explicitly allow for the presence of (unknown) nuisance parameters in the stable error distribution. The methods proposed are applied to monthly returns on 12 portfolios of the New York Stock Exchange over the period 1926-1995 (5-year subperiods). We find that stable, possibly skewed, distributions provide a statistically significant improvement in goodness-of-fit and lead to fewer rejections of the efficiency hypothesis.
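The Monte Carlo test technique can be sketched as follows for a zero-intercept test with stable errors. A symmetric-stable generator (Chambers-Mallows-Stuck method) stands in for the paper's more general, possibly skewed, setting, and the univariate regression and all parameters are illustrative simplifications:

```python
import numpy as np

rng = np.random.default_rng(5)

def sym_stable(alpha, size, rng):
    """Symmetric alpha-stable variates via the Chambers-Mallows-Stuck method."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos(u - alpha * u) / w) ** ((1 - alpha) / alpha))

def t_stat(y, x):
    """|t|-statistic for the intercept in y = a + b x + e, estimated by OLS."""
    X = np.column_stack([np.ones_like(x), x])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ b
    s2 = e @ e / (len(y) - 2)
    cov00 = s2 * np.linalg.inv(X.T @ X)[0, 0]
    return abs(b[0]) / np.sqrt(cov00)

alpha_tail, n, N = 1.7, 200, 199   # hypothetical tail index; 199 MC replications
x = rng.normal(size=n)             # market factor (hypothetical)
y = 1.0 * x + sym_stable(alpha_tail, n, rng)   # data generated under the null a = 0

obs = t_stat(y, x)
# Monte Carlo test: re-simulate the statistic under the null with the same
# design and stable errors; the resulting p-value is exact for any fixed N.
sims = np.array([t_stat(1.0 * x + sym_stable(alpha_tail, n, rng), x)
                 for _ in range(N)])
p_value = (1 + np.sum(sims >= obs)) / (N + 1)
print(round(p_value, 3))
```

Because the null distribution of the statistic is simulated rather than taken from asymptotic normal theory, the heavy tails of the stable errors are handled exactly; in the paper, nuisance tail and skewness parameters are additionally dealt with through maximized Monte Carlo tests.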