21 results for Five-factor model


Relevance: 80.00%

Abstract:

The approach proposed here explores the hierarchical nature of item-level data on price changes. On the one hand, price data are naturally organized around a regional structure, with variations observed in separate cities. On the other hand, the items that comprise the natural structure of CPIs are also normally interpreted in terms of groups that have economic interpretations, such as tradables and non-tradables, energy-related, raw foodstuff, monitored prices, etc. The hierarchical dynamic factor model allows the estimation of multiple factors that are naturally interpreted as relating to each of these regional and economic levels.
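As an illustration of the hierarchy described above, the following sketch uses simulated data and plain PCA in place of the paper's full dynamic state-space estimation: a top-level factor is extracted from the whole panel, and region-level factors are then extracted from the residuals. All series and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical item-level inflation panel: T months x (2 regions x 10 items).
T, n_items = 120, 10
national = rng.normal(size=(T, 1))            # common national factor
regional = rng.normal(size=(T, 2))            # one factor per region
panels = []
for r in range(2):
    loadings_nat = rng.uniform(0.5, 1.5, size=n_items)
    loadings_reg = rng.uniform(0.5, 1.5, size=n_items)
    noise = 0.3 * rng.normal(size=(T, n_items))
    panels.append(national * loadings_nat + regional[:, [r]] * loadings_reg + noise)
X = np.hstack(panels)                         # T x 20 panel

def first_pc(Z):
    """First principal component (score) of a demeaned panel."""
    Z = Z - Z.mean(axis=0)
    u, s, vt = np.linalg.svd(Z, full_matrices=False)
    return u[:, 0] * s[0]

# Level 1: national factor from the full panel.
f_nat = first_pc(X)

# Level 2: regional factors from what the national factor leaves unexplained.
coefs = np.linalg.lstsq(f_nat[:, None], X, rcond=None)[0].ravel()
resid = X - np.outer(f_nat, coefs)
f_regs = [first_pc(resid[:, r * n_items:(r + 1) * n_items]) for r in range(2)]

# Up to sign, the estimated national factor should track the true one.
corr = abs(np.corrcoef(f_nat, national.ravel())[0, 1])
```

A full treatment would model factor dynamics explicitly (e.g. with a Kalman filter), but the residual-based layering above is the hierarchical idea in miniature.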

Relevance: 80.00%

Abstract:

The objective of this paper is to verify whether Multimercado (multimarket) investment funds in Brazil generate significantly positive alphas, that is, whether their managers have skill and contribute positively to the returns of their funds. To compute the funds' alphas, we use a seven-factor model, based mainly on Edwards and Caglayan (2001), with the inclusion of a stock illiquidity factor. The sample period runs from 2003 to 2013. We find that, on average, multimarket funds generate negative alpha. However, although the share of funds that generate positive intercepts is low, their magnitude is sizable. The results differ considerably by Anbima classification and by the database used. We also test whether the performance of these funds is persistent, using a non-parametric model based on contingency tables. We find no evidence of persistence, even when funds are separated by classification.
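In a multifactor model of this kind, a fund's alpha is the intercept of a time-series regression of its excess returns on the factor returns. A minimal sketch with simulated data follows; the seven factors below are placeholders, not the actual Edwards and Caglayan series or the illiquidity factor used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated monthly data: 7 placeholder factor return series and one fund,
# over roughly the 2003-2013 window (132 months).
T, K = 132, 7
factors = 0.02 * rng.normal(size=(T, K))
betas = rng.uniform(-0.5, 0.5, size=K)
true_alpha = -0.001                 # a negative alpha, as the paper finds on average
fund_excess = true_alpha + factors @ betas + 0.01 * rng.normal(size=T)

# OLS with an intercept: the intercept estimate is the fund's alpha.
X = np.column_stack([np.ones(T), factors])
coef, *_ = np.linalg.lstsq(X, fund_excess, rcond=None)
alpha_hat, betas_hat = coef[0], coef[1:]
```

In practice one would also compute standard errors (e.g. Newey-West) to judge whether the alpha is significantly different from zero.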

Relevance: 80.00%

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work with few treated groups tend to over-reject the null hypothesis when the treated groups are small relative to the control groups, and to under-reject it when they are large. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we know how the heteroskedasticity is generated, which is the case when it is generated by variation in the number of observations per group. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative application of our method that relies on assumptions about stationarity and convergence of the moments of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference method to linear factor models when there are few treated groups. We also propose a permutation test for the synthetic control estimator that provided a better heteroskedasticity correction in our simulations than the test suggested by Abadie et al. (2010).
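The permutation (placebo) idea underlying tests of this kind can be sketched generically: re-estimate the effect pretending each control group was the treated one, then rank the actual estimate within that placebo distribution. The sketch below uses simulated, equal-sized groups under the null; it illustrates the plain Abadie et al. style test, not the paper's heteroskedasticity-corrected version.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical group x time panel under the null (no treatment effect):
# 1 treated group (index 0), 24 controls, 10 periods, treatment at period 5.
G, T, t0 = 25, 10, 5
y = rng.normal(size=(G, T))

def did_estimate(y, g):
    """Simple DID: post-minus-pre change for group g, minus the same
    change averaged over all other groups."""
    others = np.delete(np.arange(y.shape[0]), g)
    own = y[g, t0:].mean() - y[g, :t0].mean()
    ctrl = y[others, t0:].mean() - y[others, :t0].mean()
    return own - ctrl

# Placebo distribution: treat each control group as if it were treated.
actual = did_estimate(y, 0)
placebos = np.array([did_estimate(y, g) for g in range(1, G)])
p_value = (np.sum(np.abs(placebos) >= abs(actual)) + 1) / G
```

The paper's point is precisely that when group sizes differ, the placebo estimates are not exchangeable with the actual one, so this naive ranking misstates the rejection rate.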

Relevance: 80.00%

Abstract:

Using the three-factor model proposed by Huse (2007), this paper relates observable macroeconomic and financial variables to the term structure of interest rates in Latin American countries (Brazil, Chile, Colombia, and Mexico). We consider the following macroeconomic determinants: the inflation rate, the growth rate of economic activity, the change in the exchange rate, the level of credit default swaps (CDS), the unemployment rate, the level of the nominal interest rate, and global factors (the slope of the US yield curve and changes in commodity indices). The models explain more than 75% of the variation in the cases of Brazil, Chile, and Colombia, and 68% in the case of Mexico. Positive changes in economic activity and inflation are accompanied, in all countries, by an increase in the term structure. Increases in the CDS, except in Chile, lead to higher long rates. Increases in the unemployment rate, in turn, have different effects across countries. At the same time, exchange rate depreciations are not accompanied by interest rate hikes, which can be explained by central banks regarding the effect of depreciations on inflation as transitory. In Mexico, increases in the term structure are directly related to the energy and metals commodity indices. In the Brazilian case, where gasoline prices are regulated and do not pass through to inflation, this channel is not relevant. Positive changes in the slope of the US curve have similar effects on Latin American curves, lowering short rates and raising long rates.
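The core estimation step in a model of this type, with observable rather than latent factors, is a maturity-by-maturity regression of yields on the macro variables. A minimal sketch with simulated stand-ins for the macro series (this is a generic illustration, not Huse's exact specification):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical monthly data: yields at 4 maturities driven by 3 observable
# macro variables (stand-ins for inflation, activity, CDS, etc.).
T, n_macro, n_mat = 150, 3, 4
macro = rng.normal(size=(T, n_macro))
loadings = rng.uniform(-0.5, 0.5, size=(n_macro, n_mat))
yields = macro @ loadings + 0.1 * rng.normal(size=(T, n_mat))

# OLS of each maturity's yield on the observed macro factors, in the
# spirit of replacing latent term-structure factors with observables.
X = np.column_stack([np.ones(T), macro])
coefs, *_ = np.linalg.lstsq(X, yields, rcond=None)
fitted = X @ coefs
r2 = 1 - ((yields - fitted) ** 2).sum(axis=0) / \
        ((yields - yields.mean(axis=0)) ** 2).sum(axis=0)
```

The per-maturity R² plays the role of the explanatory-power figures (75%, 68%) quoted in the abstract.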

Relevance: 80.00%

Abstract:

The synthetic control (SC) method has been recently proposed as an alternative to estimate treatment effects in comparative case studies. The SC relies on the assumption that there is a weighted average of the control units that reconstructs the potential outcome of the treated unit in the absence of treatment. If these weights were known, then one could estimate the counterfactual for the treated unit using this weighted average. With these weights, the SC would provide an unbiased estimator for the treatment effect even if selection into treatment is correlated with the unobserved heterogeneity. In this paper, we revisit the SC method in a linear factor model where the SC weights are considered nuisance parameters that are estimated to construct the SC estimator. We show that, when the number of control units is fixed, the estimated SC weights will generally not converge to the weights that reconstruct the factor loadings of the treated unit, even when the number of pre-intervention periods goes to infinity. As a consequence, the SC estimator will be asymptotically biased if treatment assignment is correlated with the unobserved heterogeneity. The asymptotic bias only vanishes when the variance of the idiosyncratic error goes to zero. We suggest a slight modification in the SC method that guarantees that the SC estimator is asymptotically unbiased and has a lower asymptotic variance than the difference-in-differences (DID) estimator when the DID identification assumption is satisfied. If the DID assumption is not satisfied, then both estimators would be asymptotically biased, and it would not be possible to rank them in terms of their asymptotic bias.
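The standard SC weights discussed here solve a least-squares problem over the pre-treatment periods, constrained so the weights are nonnegative and sum to one. A minimal sketch with simulated data, using SciPy's SLSQP solver for the constrained fit; note the idiosyncratic noise term, which is exactly what drives the asymptotic bias the paper analyzes.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Hypothetical pre-treatment outcomes: 1 treated unit, 5 controls, 30 periods.
T0, J = 30, 5
Y0 = rng.normal(size=(T0, J))                  # control-unit outcomes
w_true = np.array([0.5, 0.3, 0.2, 0.0, 0.0])   # treated is a convex combination
y1 = Y0 @ w_true + 0.1 * rng.normal(size=T0)   # plus idiosyncratic noise

# SC weights: minimize the pre-treatment fit ||y1 - Y0 w||^2
# subject to w >= 0 and sum(w) = 1.
def objective(w):
    r = y1 - Y0 @ w
    return r @ r

res = minimize(
    objective,
    x0=np.full(J, 1.0 / J),
    bounds=[(0.0, 1.0)] * J,
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    method="SLSQP",
)
w_hat = res.x
```

With noisy outcomes and a fixed number of controls, `w_hat` fits the noise as well as the signal, which is the mechanism behind the non-convergence result in the abstract.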

Relevance: 80.00%

Abstract:

The knowledge of the current state of the economy is crucial for policy makers, economists and analysts. However, a key economic variable, the gross domestic product (GDP), is typically collected on a quarterly basis and released with substantial delays by the national statistical agencies. The first aim of this paper is to use a dynamic factor model to forecast the current Russian GDP, using a timely set of monthly information. This approach can cope with the typical data flow problems of non-synchronous releases, mixed frequency and the curse of dimensionality. Given that the Russian economy is largely dependent on the commodity market, our second motivation is to study the effects of innovations in the Russian macroeconomic fundamentals on commodity price predictability. We identify these innovations through a news index which summarizes deviations of official data releases from the expectations generated by the DFM, and perform a forecasting exercise comparing the performance of different models.
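A stripped-down version of the mixed-frequency idea described above, with simulated data: extract a common factor from a monthly panel (plain PCA as a stand-in for the full dynamic factor model), then bridge it to the quarterly GDP series by averaging within quarters and regressing. All series and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical data: 40 quarters of GDP growth driven by a common factor
# that is also observed through 8 monthly indicators.
n_q = 40
factor_q = rng.normal(size=n_q)               # quarterly average of the factor
gdp = 0.5 + 0.8 * factor_q + 0.2 * rng.normal(size=n_q)

# Monthly indicators: 3 months per quarter, each loading on the factor.
factor_m = np.repeat(factor_q, 3) + 0.3 * rng.normal(size=3 * n_q)
loadings = rng.uniform(0.5, 1.5, size=8)
X_m = factor_m[:, None] * loadings + 0.5 * rng.normal(size=(3 * n_q, 8))

# Step 1: common factor from the monthly panel via PCA.
Z = X_m - X_m.mean(axis=0)
u, s, vt = np.linalg.svd(Z, full_matrices=False)
f_hat_m = u[:, 0] * s[0]

# Step 2: bridge equation - average the monthly factor within each quarter
# and regress GDP on it; fitted values are the nowcasts.
f_hat_q = f_hat_m.reshape(n_q, 3).mean(axis=1)
A = np.column_stack([np.ones(n_q), f_hat_q])
b, *_ = np.linalg.lstsq(A, gdp, rcond=None)
nowcast = A @ b
r2 = 1 - ((gdp - nowcast) ** 2).sum() / ((gdp - gdp.mean()) ** 2).sum()
```

The full DFM adds a state-space layer (Kalman filtering) that handles ragged-edge data, i.e. months for which only some indicators have been released; the sketch assumes a balanced panel.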