8 results for Linear combination
in the Repositório digital da Fundação Getúlio Vargas - FGV
Abstract:
We use the information content in the decisions of the NBER Business Cycle Dating Committee to construct coincident and leading indices of economic activity for the United States. We identify the coincident index by assuming that the coincident variables have a common cycle with the unobserved state of the economy, and that the NBER business cycle dates signify the turning points in the unobserved state. This model allows us to estimate our coincident index as a linear combination of the coincident series. We establish that our index performs better than other currently popular coincident indices of economic activity.
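In schematic terms (the symbols below are assumptions for illustration, not the paper's notation), the construction treats each coincident series as sharing a common cycle with the unobserved state of the economy, whose turning points are pinned down by the NBER dates, and the index is then a weighted sum of the series:

```latex
% Schematic only; notation assumed, not taken from the paper.
% x_{it}: coincident series i at time t;  s_t: unobserved state of the economy;
% C_t: coincident index;  w_i: estimated weights.
x_{it} = \lambda_i \, s_t + u_{it}
  \quad \text{(common cycle with the unobserved state)},
\qquad
C_t = \sum_{i=1}^{n} w_i \, x_{it}
  \quad \text{(index as a linear combination of the series)}.
```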
Abstract:
We use the information content in the decisions of the NBER Business Cycle Dating Committee to construct coincident and leading indices of economic activity for the United States. We identify the coincident index by assuming that the coincident variables have a common cycle with the unobserved state of the economy, and that the NBER business cycle dates signify the turning points in the unobserved state. This model allows us to estimate our coincident index as a linear combination of the coincident series. We compare the performance of our index with other currently popular coincident indices of economic activity.
Abstract:
This dissertation studies the behavior of the Brazilian stock market with the goal of testing the price trajectories of pairs of stocks, as applied to a pair trading strategy. The assets studied comprise the stocks that make up the Ibovespa, and pair selection is purely statistical, based on the cointegration between assets, with no fundamental analysis involved in the choice. The theory applied here concerns the similar price movements of pairs of stocks that evolve so as to return to equilibrium. This evolution is measured by the instantaneous price difference compared with its historical mean. The strategy yields positive results when mean reversion takes place within a predetermined time interval. The data cover the years 2006 to 2010, with intraday prices for the Ibovespa stocks. The tools used for pair selection and for simulating market trading were MATLAB (selection) and Streambase (trading). Selection was performed with the augmented Dickey-Fuller test, applied in MATLAB, to check for a unit root in the residuals of the linear combination of the prices of the stocks in each pair. Trading was simulated through back-testing with the intraday data mentioned above. Within the tested interval, the strategy proved profitable in 2006, 2007, and 2010 (with returns above the Selic rate). The parameters calibrated on the first month of 2006 could be successfully applied to the rest of the interval: a return of Selic + 5.8% in 2006, a return very close to the Selic rate in 2007, and a return of Selic + 10.8% in 2010. In the more volatile years (2008 and 2009), tests with the same 2006 parameters produced losses, showing that the strategy is strongly affected by the volatility of stock price returns. This behavior suggests that, in live trading, the parameters should be recalibrated periodically in order to adapt them to more volatile scenarios.
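As a rough illustration of the selection step described above (the dissertation itself used MATLAB; this is a hypothetical Python analogue, and the function and variable names below are assumptions), one can estimate the linear combination of the two price series by OLS and then run an augmented Dickey-Fuller test on its residuals:

```python
# Hypothetical Python analogue of the pair-selection step (the original work used MATLAB).
# prices_a and prices_b are assumed to be aligned arrays of intraday prices
# for the two stocks in a candidate pair.
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

def is_cointegrated_pair(prices_a, prices_b, alpha=0.05):
    """Estimate the linear combination by OLS, then test its residuals for a unit root."""
    ols = sm.OLS(prices_a, sm.add_constant(prices_b)).fit()
    spread = ols.resid                      # residual of the linear combination
    p_value = adfuller(spread)[1]           # augmented Dickey-Fuller p-value
    return p_value < alpha, spread          # stationary spread -> candidate pair
```

A back-test along the lines described would then open a position whenever the spread deviates from its historical mean by more than a chosen threshold and close it when the spread reverts.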
Abstract:
Multivariate Affine term structure models have been increasingly used for pricing derivatives in fixed income markets. In these models, uncertainty of the term structure is driven by a state vector, while the short rate is an affine function of this vector. The model is characterized by a specific form for the stochastic differential equation (SDE) governing the evolution of the state vector. This SDE places restrictions on its drift term which rule out arbitrages in the market. In this paper we solve the following inverse problem: Suppose the term structure of interest rates is modeled by a linear combination of Legendre polynomials with random coefficients. Is there any SDE for these coefficients which rules out arbitrages? This problem is of particular empirical interest because the Legendre model is an example of a factor model with a clear interpretation for each factor with respect to movements of the term structure. Moreover, the Affine structure of the Legendre model implies knowledge of its conditional characteristic function. From the econometric perspective, we propose arbitrage-free Legendre models to describe the evolution of the term structure. From the pricing perspective, we follow Duffie et al. (2000) in exploring Legendre conditional characteristic functions to obtain a computationally tractable method to price fixed income derivatives. Closing the article, the empirical section presents precise evidence on the reward of implementing arbitrage-free parametric term structure models: the ability to obtain a good approximation for the state vector by simply using cross-sectional data.
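As a minimal sketch of the cross-sectional step mentioned at the end of the abstract (the inputs and names below are illustrative assumptions, not the paper's implementation), the state vector can be approximated by least-squares fitting Legendre-polynomial loadings to a single observed yield curve:

```python
# Minimal sketch, not the paper's implementation: approximate the state vector
# (Legendre coefficients) from one cross section of yields.
import numpy as np
from numpy.polynomial import legendre

maturities = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10])     # years (assumed)
yields = np.array([0.110, 0.112, 0.115, 0.118, 0.120,
                   0.121, 0.122, 0.1225])                  # observed curve (illustrative)

degree = 3                                                 # number of factors minus one (assumed)
x = 2 * maturities / maturities.max() - 1                  # map maturities into [-1, 1]
L = legendre.legvander(x, degree)                          # Legendre design matrix
beta, *_ = np.linalg.lstsq(L, yields, rcond=None)          # cross-sectional state estimate
fitted = L @ beta                                          # fitted term structure
```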
Abstract:
Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups. We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity, and we propose a modification of the test statistic that provides a better heteroskedasticity correction in our simulations.
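The group-size point can be made precise with a simple variance calculation (a stylized illustration assuming i.i.d. individual-level errors within each group × time cell, which is stronger than the paper requires): aggregating the DID model to group × time cells makes the error variance depend on cell size,

```latex
% Stylized illustration: eps_{igt} i.i.d. with variance sigma^2 within group g at time t.
\operatorname{Var}\bigl(\bar{\varepsilon}_{gt}\bigr)
  = \operatorname{Var}\Bigl(\tfrac{1}{N_{gt}} \textstyle\sum_{i=1}^{N_{gt}} \varepsilon_{igt}\Bigr)
  = \frac{\sigma^{2}}{N_{gt}} ,
```

so larger groups mechanically have lower error variance, which is the source of heteroskedasticity that distorts inference when the few treated groups are unusually large or small relative to the controls.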
Abstract:
In this work we focus on tests for the parameter of an endogenous variable in a weakly identified instrumental variable regression model. We propose a new unbiasedness restriction for the weighted average power (WAP) tests introduced by Moreira and Moreira (2013). This new boundary condition is motivated by score efficiency under strong identification. It allows reducing the computational costs of WAP tests by replacing the strongly unbiased condition. That restriction imposes, under the null hypothesis, that the test be uncorrelated with a given statistic whose dimension equals the number of instruments. The newly proposed boundary condition only imposes that the test be uncorrelated with a linear combination of the statistic. WAP tests under both restrictions are shown to perform similarly numerically. We apply the different tests discussed to an empirical example. Using data from Yogo (2004), we assess the effect of weak instruments on the estimation of the elasticity of intertemporal substitution in a CCAPM model.
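In schematic form (notation assumed here, not taken from the paper), the difference between the two restrictions is the number of orthogonality conditions imposed under the null: the strongly unbiased condition requires the test to be uncorrelated with the full k-dimensional statistic, while the proposed condition only involves one linear combination of it,

```latex
% Schematic only; notation assumed. phi: the test, T: the k-dimensional statistic
% (k = number of instruments), a: a fixed weighting vector.
\text{strongly unbiased:}\quad \operatorname{Cov}_{H_0}(\phi,\, T) = 0 \;\; (k \text{ conditions}),
\qquad
\text{proposed:}\quad \operatorname{Cov}_{H_0}\bigl(\phi,\, a^{\top} T\bigr) = 0 \;\; (1 \text{ condition}).
```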
Abstract:
This paper revisits Modern Portfolio Theory and derives eleven properties of Efficient Allocations and Portfolios in the presence of leverage. With different degrees of leverage, an Efficient Portfolio is a linear combination of two portfolios that lie on different efficient frontiers, which allows for an attractive reinterpretation of the Separation Theorem. In particular, a change in the investor's risk-return preferences will leave the allocation between the Minimum Risk and Risk Portfolios completely unaltered, but will change the magnitudes of the tactical risk allocations within the Risk Portfolio. The paper also discusses the role of diversification in an Efficient Portfolio, emphasizing its more tactical, rather than strategic, character.
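As a numerical illustration of the two-portfolio idea (a minimal sketch with assumed inputs, using the textbook minimum-variance and tangency portfolios rather than the paper's leverage-specific constructions):

```python
import numpy as np

# Minimal sketch of the separation idea: an efficient portfolio written as a
# linear combination of a minimum-risk portfolio and a risk portfolio.
# All inputs below are illustrative assumptions.
mu = np.array([0.06, 0.09, 0.12])            # expected returns (assumed)
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])       # covariance matrix (assumed)
rf = 0.03                                    # risk-free rate (assumed)

ones = np.ones(len(mu))
inv = np.linalg.inv(Sigma)

w_min = inv @ ones / (ones @ inv @ ones)               # minimum-variance portfolio
w_risk = inv @ (mu - rf) / (ones @ inv @ (mu - rf))    # risk (tangency-style) portfolio

alpha = 0.4                                  # preference/leverage mixing weight (assumed)
w_eff = (1 - alpha) * w_min + alpha * w_risk  # efficient portfolio as a linear combination
print(w_eff, w_eff.sum())
```

Varying the mixing weight traces out efficient portfolios; in the paper's setting, the two building blocks additionally lie on different efficient frontiers depending on the degree of leverage.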