34 results for dynamic factor models


Relevance:

80.00%

Publisher:

Abstract:

With the growth in the number of specialized managers and in the ever-wider range of investment possibilities in the Brazilian fund industry, multimanager funds (fundos Multigestor) have become an alternative for investors who seek to diversify their investments and delegate to financial institutions the task of allocating resources across the different strategies and funds available in the market. The purpose of this study is to evaluate the ability of funds of funds in the Brazilian industry, classified as Multimarket Multimanager Funds, to generate abnormal returns (alpha). To this end, we study a sample of 1,421 Multigestor funds under long-term taxation over the period from January 2005 to December 2011. The results of regressions on multi-factor models derived from Jensen's (1968) model suggest that only 3.03% of the funds studied manage to add value for their shareholders. We also examine the three main potential sources of alpha for funds of funds: the choice of the strategies that compose the fund's portfolio (strategic allocation), the anticipation of market movements (market timing), and the ability to select the best funds within each strategy (fund selection). By including quadratic terms, as proposed in the Treynor and Mazuy (1966) model, we find that Multigestor funds, on average, fail to add value by trying to anticipate market movements (market timing). By constructing an explanatory variable from the strategic composition of each fund in the sample at each point in time, we find that fund-of-funds managers, on average, also fail when trying to select the best funds/managers in the industry. The choice of the strategies that compose the fund's portfolio (strategic allocation), by contrast, contributed positively to fund returns.
We also evaluate the ability to generate alpha before costs, which raises the share of funds with positive alpha to 6.39% of the funds studied but does not change the sign of the average alpha, which remains negative.
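The market-timing test mentioned above augments a Jensen-style regression with a quadratic market term, R_p = α + β·R_m + γ·R_m² + ε, where a positive γ signals timing ability. A minimal sketch with simulated excess returns (all parameter values are hypothetical, not taken from the study's sample):

```python
import random

random.seed(0)

def ols(y, X):
    """Tiny OLS via normal equations and Gauss-Jordan elimination (illustration only)."""
    k = len(X[0])
    A = [[sum(X[t][i] * X[t][j] for t in range(len(y))) for j in range(k)]
         for i in range(k)]
    b = [sum(X[t][i] * y[t] for t in range(len(y))) for i in range(k)]
    for i in range(k):
        piv = A[i][i]
        A[i] = [a / piv for a in A[i]]
        b[i] /= piv
        for r in range(k):
            if r != i:
                f = A[r][i]
                A[r] = [a - f * p for a, p in zip(A[r], A[i])]
                b[r] -= f * b[i]
    return b

# simulated market excess returns and a fund with no timing skill (gamma = 0)
mkt = [random.gauss(0.01, 0.05) for _ in range(4000)]
fund = [0.002 + 0.9 * m + random.gauss(0, 0.01) for m in mkt]
X = [[1.0, m, m * m] for m in mkt]  # constant, market, squared market
alpha, beta, gamma = ols(fund, X)
```

Since the simulated fund has no timing ability by construction, the estimated gamma should be statistically indistinguishable from zero, mirroring the average finding reported in the abstract.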

Relevance:

80.00%

Publisher:

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group x time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups. 
We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity and propose a modification on the test statistic that provided a better heteroskedasticity correction in our simulations.
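The size-driven heteroskedasticity described above can be illustrated with a quick simulation (group sizes hypothetical): when the group × time aggregate is a mean over N_g individual errors, its variance falls roughly as 1/N_g, so large and small groups have very different error variances.

```python
import random
import statistics

random.seed(0)

def group_mean_var(n_obs, reps=2000):
    """Monte Carlo variance of a group-level mean of n_obs unit-variance draws."""
    means = [statistics.fmean(random.gauss(0, 1) for _ in range(n_obs))
             for _ in range(reps)]
    return statistics.pvariance(means)

v_small_group = group_mean_var(10)    # few observations per group
v_large_group = group_mean_var(1000)  # many observations per group
# the aggregate error variance is roughly 1/N_g, so unequal group sizes
# induce heteroskedasticity in the group x time aggregate regression
```

This is why inference methods that implicitly treat group-level errors as homoskedastic misbehave when treated and control groups differ in size.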

Relevance:

80.00%

Publisher:

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group x time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we know how the heteroskedasticity is generated, which is the case when it is generated by variation in the number of observations per group. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative application of our method that relies on assumptions about stationarity and convergence of the moments of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment groups. We extend our inference method to linear factor models when there are few treated groups. 
We also propose a permutation test for the synthetic control estimator that provided a better heteroskedasticity correction in our simulations than the test suggested by Abadie et al. (2010).
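The permutation (placebo) logic behind synthetic control inference can be sketched in a few lines: re-estimate the "effect" pretending each control unit were treated, then locate the true treated gap within that placebo distribution. The gap values below are hypothetical placeholders, not output of an actual synthetic control fit:

```python
import random

random.seed(1)

# hypothetical post-treatment gaps (treated outcome minus synthetic control),
# one for the treated unit and one per placebo run on each control unit
treated_gap = 2.5
placebo_gaps = [random.gauss(0, 1) for _ in range(49)]

# permutation p-value: share of all units (treated included) whose gap is
# at least as extreme as the treated unit's
all_gaps = placebo_gaps + [treated_gap]
p_value = sum(abs(g) >= abs(treated_gap) for g in all_gaps) / len(all_gaps)
```

A small p-value means few placebo runs produce a gap as large as the treated unit's; the abstract's point is that the choice of test statistic matters for how well this procedure handles heteroskedastic errors.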

Relevance:

40.00%

Publisher:

Abstract:

This thesis is composed of three articles on macroeconomics and finance. Each article corresponds to a chapter and is written in paper format. In the first article, co-authored with Axel Simonsen, we model and estimate a small open economy for Canada in a two-country Dynamic Stochastic General Equilibrium (DSGE) framework. We show that it is important to account for the correlation between domestic and foreign shocks and for incomplete pass-through. In the second chapter, co-authored with Hedibert Freitas Lopes, we estimate a regime-switching macro-finance model of the term structure of interest rates to study the joint behavior of macro variables and the yield curve in the US after World War II (WWII). We show that our model tracks the US NBER cycles well, that the addition of regime changes is important to explain the expectations theory of the term structure, and that macro variables become increasingly important in recessions for explaining the variability of the yield curve. We also present a novel sequential Monte Carlo algorithm to learn about the parameters and the latent states of the economy. In the third chapter, I present a Gaussian Affine Term Structure Model (ATSM) with latent jumps in order to address two questions: (1) what are the implications of incorporating jumps in an ATSM for Asian option pricing, in the particular case of the Brazilian DI Index (IDI) option, and (2) how jumps and options affect bond risk-premia dynamics. I show that the jump risk premium is negative in a scenario of decreasing interest rates (my sample period) and is important to explain the level of yields, and that Gaussian models without jumps and with constant-intensity jumps perform well in pricing Asian options.
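For context on the Gaussian ATSM discussion: affine models deliver zero-coupon bond prices of the exponential-affine form P(τ) = A(τ)·e^(−B(τ)·r). A sketch of the one-factor Vasicek special case, with illustrative parameter values and no jumps:

```python
import math

def vasicek_zcb_price(r, tau, kappa=0.5, theta=0.06, sigma=0.02):
    """Zero-coupon bond price P(tau) = A(tau) * exp(-B(tau) * r) under Vasicek.

    kappa: mean-reversion speed, theta: long-run short rate, sigma: volatility.
    Parameter values here are illustrative, not estimates from the thesis.
    """
    B = (1.0 - math.exp(-kappa * tau)) / kappa
    A = math.exp((theta - sigma**2 / (2.0 * kappa**2)) * (B - tau)
                 - sigma**2 * B**2 / (4.0 * kappa))
    return A * math.exp(-B * r)

price = vasicek_zcb_price(r=0.05, tau=2.0)
yield_2y = -math.log(price) / 2.0  # continuously compounded 2-year yield
```

Adding jumps, as the thesis does, changes these closed forms; the point of the sketch is only the exponential-affine structure that the "affine" label refers to.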

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we test a version of the conditional CAPM with respect to a local market portfolio, proxied by the Brazilian stock index, during the period 1976-1992. We also test a conditional APT model by using the difference between the 30-day rate (CDB) and the overnight rate as a second factor, in addition to the market portfolio, in order to capture the large inflation risk present during this period. The conditional CAPM and APT models are estimated by the Generalized Method of Moments (GMM) and tested on a set of size portfolios created from individual securities traded on the Brazilian markets. The inclusion of this second factor proves to be important for the appropriate pricing of the portfolios.

Relevance:

30.00%

Publisher:

Abstract:

Parametric term structure models have been successfully applied to numerous problems in fixed income markets, including pricing, hedging, managing risk, and studying monetary policy implications. In turn, dynamic term structure models, equipped with stronger economic structure, have mainly been adopted to price derivatives and explain empirical stylized facts. In this paper, we combine flavors of these two classes of models to test whether no-arbitrage affects forecasting. We construct cross-sectional (arbitrage-allowing) and arbitrage-free versions of a parametric polynomial model to analyze how well they predict out-of-sample interest rates. Based on U.S. Treasury yield data, we find that no-arbitrage restrictions significantly improve forecasts. Arbitrage-free versions achieve overall smaller biases and Root Mean Square Errors for most maturities and forecasting horizons. Furthermore, a decomposition of forecasts into forward rates and holding return premia indicates that the superior performance of no-arbitrage versions is due to a better identification of the bond risk premium.
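The evaluation criteria used here, bias and Root Mean Square Error of out-of-sample forecasts, are simple functions of the forecast errors. A minimal sketch with hypothetical yield numbers (in percent):

```python
import math
import statistics

# hypothetical out-of-sample yield forecasts vs realized yields, in percent
forecast = [4.10, 4.25, 4.05, 3.90, 4.00]
realized = [4.00, 4.20, 4.15, 3.85, 4.05]

errors = [f - r for f, r in zip(forecast, realized)]
bias = statistics.fmean(errors)                       # average signed error
rmse = math.sqrt(statistics.fmean([e * e for e in errors]))  # root mean square error
```

A model can have near-zero bias yet a large RMSE, which is why the paper reports both.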

Relevance:

30.00%

Publisher:

Abstract:

Several works in the shopping-time and human-capital literatures, due to the nonconcavity of the underlying Hamiltonian, use first-order conditions in dynamic optimization to characterize necessity, but not sufficiency, in intertemporal problems. In this work I choose one paper in each of these two areas and show that optimality can be characterized by means of a simple application of Arrow's (1968) sufficiency theorem.
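For reference, the sufficiency condition invoked here can be stated informally (notation mine, for a standard one-state control problem):

```latex
% Arrow (1968) sufficiency, informal statement.
% For the problem  \max_u \int_0^T f(x,u,t)\,dt  subject to  \dot{x} = g(x,u,t),
% define the maximized Hamiltonian
H^{0}(x,\lambda,t) \;=\; \max_{u}\,\bigl[\, f(x,u,t) + \lambda\, g(x,u,t) \,\bigr].
% If (x^{*},u^{*},\lambda) satisfies the Pontryagin (first-order) necessary
% conditions and H^{0}(x,\lambda(t),t) is concave in x for every t, then
% (x^{*},u^{*}) is optimal.
```

The point of the paper is that concavity of H⁰ in the state can hold even when the Hamiltonian itself is not jointly concave, which is what rescues sufficiency in the two applications.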

Relevance:

30.00%

Publisher:

Abstract:

This paper confronts the Capital Asset Pricing Model (CAPM) and the 3-factor Fama-French (FF) model using both Brazilian and US stock market data for the same sample period (1999-2007). The US data serve only as a benchmark for comparative purposes. We use two competing econometric methods, the Generalized Method of Moments (GMM) of Hansen (1982) and the Iterative Nonlinear Seemingly Unrelated Regression Estimation (ITNLSUR) of Burmeister and McElroy (1988). Both methods nest other options based on the procedure of Fama-MacBeth (1973). The estimations show that the FF model fits the Brazilian data better than the CAPM, although it is imprecise compared with its US analog. We argue that this is a consequence of the absence of clear-cut anomalies in Brazilian data, especially those related to firm size. The tests of the efficiency of the models - nullity of intercepts and fit of the cross-sectional regressions - yielded mixed conclusions. The intercept tests failed to reject the CAPM when Brazilian value-premium-sorted portfolios were used, contrasting with US data, a very well documented conclusion. The ITNLSUR estimated an economically reasonable and statistically significant market risk premium for Brazil of around 6.5% per year without resorting to any particular data set aggregation. However, we could not find the same for the US data over the identical period, or even using a larger data set. This study seeks to contribute to the Brazilian empirical literature on asset-pricing models. Two of the main pricing models are confronted: the Capital Asset Pricing Model (CAPM) and the Fama-French 3-factor model. Econometric tools little explored in the national literature are applied to the estimation of pricing equations: the GMM and ITNLSUR methods. The estimates are compared with those obtained from US data for the same period, and we conclude that in Brazil the success of the Fama-French model is limited.
As a by-product of the analysis, (i) we test for the presence of so-called anomalies in returns, and (ii) we compute the risk premium implicit in stock returns. The data reveal the presence of a value premium, but not of a size premium. Using the ITNLSUR method, the market risk premium is positive and significant, at around 6.5% per year.

Relevance:

30.00%

Publisher:

Abstract:

This dissertation proposes a bivariate Markov-switching dynamic conditional correlation model for estimating the optimal hedge ratio between spot and futures contracts. It accounts for the cointegration between the series and captures the leverage effect in the return equation. The model is applied to daily data on futures and spot prices of the Bovespa Index and the R$/US$ exchange rate. The results, in terms of variance reduction and utility, show that the bivariate Markov-switching model outperforms strategies based on ordinary least squares and error-correction models.

Relevance:

30.00%

Publisher:

Abstract:

We show that Judd's (1982) method can be applied to any finite system, contrary to what he claimed in 1987. An example shows how to employ the technique to study monetary models in the presence of capital accumulation.

Relevance:

30.00%

Publisher:

Abstract:

This paper considers the general problem of Feasible Generalized Least Squares Instrumental Variables (FGLS IV) estimation using optimal instruments. First we summarize the sufficient conditions for the FGLS IV estimator to be asymptotically equivalent to an optimal GLS IV estimator. Then we specialize to stationary dynamic systems with stationary VAR errors, and use the sufficient conditions to derive new moment conditions for these models. These moment conditions produce useful IVs from the lagged endogenous variables, despite the correlation between errors and endogenous variables. This use of the information contained in the lagged endogenous variables expands the class of IV estimators under consideration and thereby potentially improves both asymptotic and small-sample efficiency of the optimal IV estimator in the class. Some Monte Carlo experiments compare the new methods with those of Hatanaka [1976]. For the DGP used in the Monte Carlo experiments, asymptotic efficiency is strictly improved by the new IVs, and experimental small-sample efficiency is improved as well.

Relevance:

30.00%

Publisher:

Abstract:

In price-competition models, a positive consumer search cost alone does not generate an equilibrium with price dispersion. Dynamic switching-cost models, by contrast, consistently generate this phenomenon, which is well documented for retail prices. Although both literatures are vast, few models have tried to combine the two frictions in a single model. This work presents a dynamic price-competition model in which identical consumers face both search and switching costs. The equilibrium generates price dispersion. Moreover, because consumers must commit to a fixed sample of firms before prices are set, only two prices are considered before each purchase. This result is independent of the size of the consumer's individual search cost.

Relevance:

30.00%

Publisher:

Abstract:

In view of the changes of recent years, as the economy shifts from an industrial to a knowledge perspective, we face a new scenario in which the individual's intellectual capital is perceived as a factor of fundamental importance for the development and growth of the organization. For this evolution to occur, however, the individual's tacit knowledge must be disseminated and shared with the other members of the organization. Companies' attention then turns to devising strategies that help improve their processes. In addition, this whole dynamic of knowledge construction and development must be managed adequately and effectively, thereby enabling the emergence of new values and of competitive advantage. Numerous models to support the organizational learning process have been developed by various authors and scholars; among them we highlight the lessons-learned system, which is built from experiences, positive or negative, lived within a context, ordered by its own cultural patterns, with real and significant impact. Based on processes and procedures established by the coordination of one of the products offered by FGV and its distribution network, this work aims to analyze, in light of knowledge-management theory and, more specifically, of lessons-learned management, how knowledge is being managed in the Melhores Práticas project created by that coordination. We also seek to understand whether the acquisition, development, and dissemination phases in this setting are being carried out effectively, and whether the results achieved can serve as a basis for assessing the effective sharing of knowledge.

Relevance:

30.00%

Publisher:

Abstract:

Our main goal is to investigate which interest-rate option valuation models are better suited to support the management of interest-rate risk. We use the German market to test seven spot-rate and forward-rate models with one and two factors on interest-rate warrants for the period from 1990 to 1993. We identify a one-factor forward-rate model and two spot-rate models with two factors that are not significantly outperformed by any of the other four models. Further rankings are possible if additional criteria are applied.

Relevance:

30.00%

Publisher:

Abstract:

There is strong empirical evidence that risk premia in long-term interest rates are time-varying. These risk premia critically depend on interest rate volatility, yet existing research has not examined the impact of time-varying volatility on excess returns for long-term bonds. To address this issue, we incorporate interest rate option prices, which are very sensitive to interest rate volatility, into a dynamic model for the term structure of interest rates. We estimate three-factor affine term structure models using both swap rates and interest rate cap prices. When we incorporate option prices, the model better captures interest rate volatility and is better able to predict excess returns for long-term swaps over short-term swaps, both in- and out-of-sample. Our results indicate that interest rate options contain valuable information about risk premia and interest rate dynamics that cannot be extracted from interest rates alone.