28 results for two-factor models
in the Repositório digital da Fundação Getúlio Vargas - FGV
Abstract:
Multi-factor models constitute a useful tool to explain cross-sectional covariance in equity returns. We propose in this paper the use of irregularly spaced returns in the multi-factor model estimation and provide an empirical example with the 389 most liquid equities in the Brazilian market. The market index proves significant in explaining equity returns, while the US$/Brazilian Real exchange rate and the Brazilian standard interest rate do not. This example shows the usefulness of the estimation method in further using the model to fill in missing values and to provide interval forecasts.
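For reference, the standard multi-factor regression that the irregular-spacing estimator builds on (a generic statement, not the paper's exact estimator) is

r_{i,t} = \alpha_i + \beta_i^{\top} f_t + \varepsilon_{i,t},

where r_{i,t} is the return on equity i, f_t collects the candidate factors (market index, US$/Real exchange rate, interest rate), \beta_i are the factor loadings, and \varepsilon_{i,t} is an idiosyncratic error; with irregularly spaced data, the returns r_{i,t} are observed over intervals of varying length.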
Abstract:
In this article we use factor models to describe a certain class of covariance structure for financial time series models. More specifically, we concentrate on situations where the factor variances are modeled by a multivariate stochastic volatility structure. We build on previous work by allowing the factor loadings, in the factor model structure, to have a time-varying structure and to capture changes in asset weights over time, motivated by applications with multiple time series of daily exchange rates. We explore and discuss potential extensions to the models presented here in the prediction area. This discussion leads to open issues on real-time implementation and natural model comparisons.
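As a reference point, a baseline factor stochastic volatility model of the kind extended here (a sketch; the time-varying-loadings extension itself is not reproduced) is

y_t = \Lambda f_t + \varepsilon_t, \qquad f_{k,t} \sim N(0, e^{h_{k,t}}), \qquad h_{k,t} = \mu_k + \phi_k (h_{k,t-1} - \mu_k) + \eta_{k,t},

where y_t collects the exchange-rate returns, \Lambda is the loadings matrix (made time-varying in the paper), and each log factor variance h_{k,t} follows a stationary AR(1) process.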
Abstract:
The past decade has witnessed a series of (well accepted and defined) financial crisis periods in the world economy. Most of these events are country specific and eventually spread across neighbouring countries, with the concept of vicinity extrapolating geographic maps and entering contagion maps. Unfortunately, what contagion represents and how to measure it are still unanswered questions. In this article we measure the transmission of shocks by cross-market correlation coefficients, following Forbes and Rigobon's (2000) notion of shift-contagion. Our main contribution relies upon the use of traditional factor model techniques combined with stochastic volatility models to study the dependence among Latin American stock price indexes and the North American index. More specifically, we concentrate on situations where the factor variances are modeled by a multivariate stochastic volatility structure. From a theoretical perspective, we improve currently available methodology by allowing the factor loadings, in the factor model structure, to have a time-varying structure and to capture changes in the series' weights over time. By doing this, we believe that changes and interventions experienced by those five countries are well accommodated by our models, which learn and adapt reasonably fast to those economic and idiosyncratic shocks. We empirically show that the time-varying covariance structure can be modeled by one or two common factors and that some sort of contagion is present in most of the series' covariances during periods of economic instability, or crisis. Open issues on real-time implementation and natural model comparisons are thoroughly discussed.
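For context, Forbes and Rigobon's shift-contagion notion rests on the fact that the measured cross-market correlation rises mechanically when the source market's variance rises. Their heteroskedasticity-adjusted correlation is

\rho^{*} = \frac{\rho}{\sqrt{1 + \delta\,(1 - \rho^{2})}},

where \rho is the correlation measured in the high-volatility (crisis) period and \delta is the relative increase in the source market's return variance; shift-contagion is declared only if the adjusted correlation still increases significantly.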
Abstract:
In this paper we construct common-factor portfolios using a novel linear transformation of standard factor models extracted from large data sets of asset returns. The simple transformation proposed here keeps the basic properties of the usual factor transformations while attaching some interesting new properties to them. Some theoretical advantages are shown to be present, and their practical importance is confirmed in two applications: the performance of common-factor portfolios is shown to be superior to that of asset returns and of factors commonly employed in the finance literature.
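The paper's specific transformation is novel and not reproduced here; the standard starting point it modifies is principal-components factor extraction from a large T × N panel of returns X, with X_t = \Lambda F_t + e_t. Under the usual normalization \hat{\Lambda}^{\top}\hat{\Lambda}/N = I_k, the estimated factors

\hat{F}_t = N^{-1} \hat{\Lambda}^{\top} X_t

are themselves linear combinations of asset returns, i.e., portfolios, which is what makes a further linear transformation into common-factor portfolios natural.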
Abstract:
This paper investigates whether there is evidence of structural change in the Brazilian term structure of interest rates. Multivariate cointegration techniques are used to verify this evidence. Two econometric models are estimated. The first is a Vector Autoregressive Model with Error Correction Mechanism (VECM) with smooth transition in the deterministic coefficients (Ripatti and Saikkonen [25]). The second is a VECM with abrupt structural change formulated by Hansen [13]. Two datasets were analysed: the first contains nominal interest rates with maturities up to three years over a sample period from 1995 to 2010; the second focuses on maturities up to one year from 1998 to 2010. The frequency is monthly. The estimated models suggest the existence of structural change in the Brazilian term structure. It was possible to document the existence of multiple regimes using both techniques on both databases. The risk premium for different spreads varied considerably during the earliest period of both samples and seemed to converge to stable and lower values at the end of the sample period. Long-term risk premiums seemed to converge to international standards, although the Brazilian term structure is still subject to liquidity problems for longer maturities.
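In standard notation, the baseline VECM that both specifications extend is

\Delta y_t = \mu + \alpha \beta^{\top} y_{t-1} + \sum_{i=1}^{k-1} \Gamma_i \Delta y_{t-i} + \varepsilon_t,

where y_t stacks the interest rates, \beta contains the cointegrating vectors (the spreads), and \alpha the adjustment coefficients; the first model lets the deterministic terms change smoothly across regimes, while the second allows an abrupt shift in the parameters.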
Abstract:
We develop and empirically test a continuous-time equilibrium model for the pricing of oil futures. The model provides a link between no-arbitrage models and expectation-oriented models. It highlights the role of inventories in the identification of different pricing regimes. In an empirical study, the hedging performance of our model is compared with five other one- and two-factor pricing models. The hedging problem considered is related to Metallgesellschaft's strategy of hedging long-term forward commitments with short-term futures. The results show that the downside risk distribution of our inventory-based model stochastically dominates those of the other models.
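For context, a representative two-factor benchmark in this literature (in the spirit of Schwartz and Smith (2000); the paper's own inventory-based model is not reproduced here) decomposes the log spot price into short- and long-term factors:

\ln S_t = \chi_t + \xi_t, \qquad d\chi_t = -\kappa \chi_t \, dt + \sigma_{\chi} \, dW_{\chi}, \qquad d\xi_t = \mu_{\xi} \, dt + \sigma_{\xi} \, dW_{\xi},

with \chi_t a mean-reverting short-term deviation and \xi_t a random-walk equilibrium price level.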
Abstract:
Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed; instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups. We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity and propose a modification of the test statistic that provides a better heteroskedasticity correction in our simulations.
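A sketch of the mechanism: when the outcome Y_{gt} is an average over M_{gt} individual observations, the group × time aggregate model

Y_{gt} = \alpha_g + \gamma_t + \beta d_{gt} + \eta_{gt}, \qquad \operatorname{Var}(\eta_{gt}) = \sigma_{\nu}^{2} + \sigma_{\varepsilon}^{2} / M_{gt},

has errors whose variance shrinks with group size, so a small treated group is noisier than the dispersion of the control groups suggests (over-rejection), while a large treated group is less noisy (under-rejection).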
Abstract:
Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we know how the heteroskedasticity is generated, which is the case when it is generated by variation in the number of observations per group. With many pre-treatment periods, we show that this assumption can be relaxed; instead, we provide an alternative application of our method that relies on assumptions about stationarity and convergence of the moments of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference method to linear factor models when there are few treated groups. We also propose a permutation test for the synthetic control estimator that provided a better heteroskedasticity correction in our simulations than the test suggested by Abadie et al. (2010).
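A minimal Monte Carlo sketch of this over-/under-rejection pattern (illustrative parameter values and a deliberately naive placebo test; this is neither the authors' code nor their proposed correction):

import numpy as np

rng = np.random.default_rng(0)
n_groups, n_periods = 50, 2                 # number of groups and time periods
sizes = rng.integers(20, 2000, n_groups)    # unequal observations per group

def placebo_rejects(treated):
    """One placebo draw: the true effect is zero; does a naive test reject?"""
    eta = rng.normal(0.0, 0.1, (n_groups, n_periods))     # group-time shocks
    eps = np.array([rng.normal(0.0, 1.0, (s, n_periods)).mean(axis=0)
                    for s in sizes])                      # sampling error, variance ~ 1/size
    y = eta + eps                                         # aggregated group x time outcomes
    dy = y[:, 1] - y[:, 0]                                # pre/post change for each group
    effect = dy[treated] - np.delete(dy, treated).mean()  # DID estimate
    se = np.delete(dy, treated).std(ddof=1)               # naive SE from controls' dispersion
    return abs(effect / se) > 1.96

small, large = int(np.argmin(sizes)), int(np.argmax(sizes))
print("rejection rate, smallest group treated:",
      np.mean([placebo_rejects(small) for _ in range(2000)]))
print("rejection rate, largest group treated:",
      np.mean([placebo_rejects(large) for _ in range(2000)]))

With the smallest group treated, the naive rejection rate should land well above the nominal 5%; with the largest group treated, below it, reproducing the pattern described in the abstract.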
Abstract:
Recent studies indicate that several strategies implemented by hedge funds generate returns with nonlinear characteristics. Following the suggestions in the paper by Agarwal and Naik (2004), this work shows that a number of hedge funds within the Brazilian investment fund industry exhibit returns resembling those of a strategy in call and put options on the Bovespa market index. Starting from a factor model, we introduce an index referenced to option returns, so that this factor can explain the nonlinear character of fund returns better than the traditional risk factors.
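A stylized version of the option-augmented factor model (the dissertation's exact factor set is not reproduced here) is

R_{i,t} = \alpha_i + \sum_{k} \beta_{i,k} F_{k,t} + \gamma_i \, OF_t + \varepsilon_{i,t},

where OF_t is the return on a rule-based strategy of buying call or put options on the Bovespa index, in the spirit of Agarwal and Naik (2004); a significant \gamma_i captures the nonlinear, option-like component of a fund's returns that linear risk factors miss.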
Abstract:
This work explores, through empirical tests, which of the two main theories of optimal capital structure choice, the Static Trade-off Theory (STT) or the Pecking Order Theory (POT), better explains the financing decisions of Brazilian companies. Additionally, we study the effect of information asymmetry and of stock market performance and liquidity on these decisions. We apply econometric methods to data on publicly traded Brazilian companies over the period 1995 to 2005, testing two models representative of the Static Trade-off Theory (STT) and the Pecking Order Theory (POT). We first test the broad group of companies and then subgroups, controlling for the effects of stock market performance and liquidity, the liquidity of the borrowing companies' shares, and information asymmetry. The results indicate that the Pecking Order Theory, in its semi-strong form, is the theory that best explains the capital structure choices of Brazilian companies: internal cash generation together with financial and operating debt is the company's priority source of funds, with some, albeit low, use of equity issues. The empirical studies for the control subgroups suggest that market liquidity and the liquidity of companies' shares influence the propensity of companies to issue shares, as does information asymmetry. Stock market performance, based on the data analysed, appears to have little influence on fundraising via equity issues; the present study does not distinguish between public and private issues.
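The abstract does not name its two specifications; a standard pair in this literature, due to Shyam-Sunder and Myers (1999), is the pecking-order regression and the target-adjustment (trade-off) regression:

\Delta D_{i,t} = a + b_{PO} \, DEF_{i,t} + e_{i,t}, \qquad \Delta D_{i,t} = a + b_{TA} \,( D^{*}_{i,t} - D_{i,t-1}) + e_{i,t},

where DEF_{i,t} is the financing deficit and D^{*}_{i,t} the target debt level; a strict pecking order implies b_{PO} close to one, while trade-off behavior implies partial adjustment, 0 < b_{TA} < 1.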
Abstract:
This work aimed to include managerial flexibilities (such as gas and water injection techniques) in the valuation of oil reservoirs. We conclude that these techniques can increase reservoir value by up to 25% according to real options theory. The main advantage of the real options methodology over the traditional discounted cash flow technique is that it takes into account the operational issues of the oil industry. We used two classic models for pricing oil reservoirs and applied a sensitivity analysis to determine which factors are most relevant to their economic value. As expected, in both models the concession time and the convenience yield and/or dividend yield were the most important factors.
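The two classic models are not named in the abstract; one canonical choice in this literature, Paddock, Siegel and Smith (1988), values an undeveloped reserve as a call option on the developed reserve, with the convenience yield \delta playing the dividend-yield role in the Black-Scholes formula:

F = V e^{-\delta \tau} N(d_1) - D e^{-r \tau} N(d_2), \qquad d_1 = \frac{\ln(V/D) + (r - \delta + \sigma^{2}/2)\,\tau}{\sigma \sqrt{\tau}}, \qquad d_2 = d_1 - \sigma \sqrt{\tau},

where V is the developed reserve value, D the development cost, and \tau the time to expiry; this makes the concession time and the convenience/dividend yield first-order inputs, consistent with the sensitivity results reported above.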
Abstract:
This dissertation seeks to identify the corporate identity of the principal managers of PETROBRAS's Exploration and Production area, and to analyse the limits and possibilities of this identity in the formation of a competitive or cooperative internal environment. To this end, research was conducted based on two extremes of corporate identity: one representing the corporate identity of a company with mass-production characteristics and the other representing a company with lean-production characteristics.
Abstract:
Despite the commonly held belief that aggregate data display short-run comovement, there has been little discussion about the econometric consequences of this feature of the data. We use exhaustive Monte Carlo simulations to investigate the importance of restrictions implied by common cyclical features for estimates and forecasts based on vector autoregressive models. First, we show that the "best" empirical model developed without common cycle restrictions need not nest the "best" model developed with those restrictions. This is due to possible differences in the lag lengths chosen by model selection criteria for the two alternative models. Second, we show that the costs of ignoring common cyclical features in vector autoregressive modelling can be high, both in terms of forecast accuracy and efficient estimation of variance decomposition coefficients. Third, we find that the Hannan-Quinn criterion performs best among model selection criteria in simultaneously selecting the lag length and rank of vector autoregressions.
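Concretely, in the VECM representation

\Delta y_t = c + \alpha \beta^{\top} y_{t-1} + \sum_{i=1}^{p-1} \Gamma_i \Delta y_{t-i} + \varepsilon_t,

a serial-correlation common feature (common cycle) is a vector \tilde{\delta} with \tilde{\delta}^{\top} \alpha = 0 and \tilde{\delta}^{\top} \Gamma_i = 0 for all i, so that \tilde{\delta}^{\top} \Delta y_t is unpredictable from the past; these are the rank restrictions whose omission the simulations show to be costly.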
Abstract:
This dissertation evaluates how companies organize their knowledge management strategy. Two strategic models are presented in the literature: codification and personalization. A question raised in that literature concerns the importance of focusing primarily on only one of these strategic models. The main objective is to present a scale that can assess whether a company that leads its segment really has a strategic focus with respect to knowledge management. The work reviews a theoretical framework that explains how knowledge management became important to corporate strategy, touching on the characteristics of the Information Age and basic concepts of knowledge management, including theoretical models of how knowledge is created and transferred within companies. A model for measuring a company's strategic focus is then proposed, composed of indicators structured as a questionnaire: each indicator has a rating scale, and the questionnaire as a whole seeks to capture the strategic focus in question. Finally, the questionnaire is tested at an Intellectual Property firm and evaluated statistically using confirmatory factor analysis, a statistical tool that allows assessment of both the measurement model and the structural model.
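For reference, the confirmatory factor analysis used to evaluate the questionnaire posits the standard measurement model (a generic statement; the specific indicators are not reproduced)

x = \Lambda \xi + \delta, \qquad \Sigma = \Lambda \Phi \Lambda^{\top} + \Theta_{\delta},

where x is the vector of questionnaire items, \xi the latent constructs (here, the codification and personalization strategic foci), \Lambda the loadings, and the test compares the implied covariance matrix \Sigma with the sample covariance of the responses.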