28 results for Factor Model


Relevance: 60.00%

Abstract:

The approach proposed here explores the hierarchical nature of item-level data on price changes. Price data are naturally organized around a regional structure, with variations observed in separate cities. Moreover, the items that comprise the natural structure of CPIs are also normally interpreted in terms of groups that have economic interpretations, such as tradables and non-tradables, energy-related, raw foodstuffs, and monitored prices. The hierarchical dynamic factor model allows the estimation of multiple factors that are naturally interpreted as relating to each of these regional and economic levels.
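
As a rough illustration of the two-level idea, the sketch below simulates city-level price changes driven by a national factor plus regional factors and recovers them by sequential principal components. The paper's model and estimation are richer, so treat the variable names and the two-stage procedure as assumptions:

```python
# A minimal two-level sketch of hierarchical factor extraction via
# sequential principal components (illustrative only; the hierarchical
# dynamic factor model in the paper is estimated differently).
import numpy as np

rng = np.random.default_rng(0)
T, cities, items = 120, 4, 10           # months, regions, items per region

national = rng.normal(size=(T, 1))      # common price-change factor
regional = rng.normal(size=(T, cities)) # one factor per city
panel = np.hstack([
    national @ rng.normal(size=(1, items))            # national loadings
    + regional[:, [c]] @ rng.normal(size=(1, items))  # city loadings
    + 0.5 * rng.normal(size=(T, items))               # idiosyncratic noise
    for c in range(cities)
])

def first_pc(X):
    """First principal component of a demeaned panel (T x N)."""
    X = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[0]

top_factor = first_pc(panel)                        # "national" level
# Strip the top-level factor, then extract one factor per city block.
beta = np.linalg.lstsq(top_factor[:, None], panel, rcond=None)[0]
resid = panel - top_factor[:, None] @ beta
city_factors = [first_pc(resid[:, c*items:(c+1)*items]) for c in range(cities)]

# Sign of a principal component is arbitrary, so compare in absolute value.
print("abs corr(top factor, true national):",
      abs(np.corrcoef(top_factor, national[:, 0])[0, 1]).round(2))
```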

Relevance: 60.00%

Abstract:

The goal of this paper is to verify whether multimarket (Multimercado) investment funds in Brazil generate significantly positive alphas, that is, whether their managers have skill and contribute positively to fund returns. To compute fund alphas, we use a seven-factor model based mainly on Edwards and Caglayan (2001), with the addition of a stock illiquidity factor. The sample period runs from 2003 to 2013. We find that, on average, multimarket funds generate negative alpha. However, although the percentage of funds with positive intercepts is low, their magnitude is substantial. Results differ considerably across Anbima classifications and across the databases used. We also test whether fund performance is persistent, using a non-parametric model based on contingency tables. We find no evidence of persistence, even when funds are split by classification.
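
For concreteness, alpha here is the intercept of a time-series regression of a fund's excess returns on the factors. A minimal sketch on simulated placeholder data (the seven-factor composition follows the abstract; the data are not the paper's):

```python
# Minimal sketch: fund alpha as the intercept of an OLS time-series
# regression of excess returns on seven factors (simulated placeholders;
# the paper's factors follow Edwards and Caglayan (2001) plus illiquidity).
import numpy as np

rng = np.random.default_rng(1)
T, K = 132, 7                               # monthly obs 2003-2013, 7 factors
factors = rng.normal(size=(T, K))           # e.g. market, size, ..., illiquidity
true_beta = rng.normal(size=K)
fund_excess = factors @ true_beta + rng.normal(scale=0.02, size=T)  # zero alpha

X = np.column_stack([np.ones(T), factors])  # add intercept column
coef, *_ = np.linalg.lstsq(X, fund_excess, rcond=None)
resid = fund_excess - X @ coef
# Classical OLS standard error of the intercept (the alpha estimate).
sigma2 = resid @ resid / (T - K - 1)
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
print(f"alpha = {coef[0]:.4f}, t-stat = {coef[0]/se:.2f}")
```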

Relevance: 60.00%

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we know how the heteroskedasticity is generated, which is the case when it is generated by variation in the number of observations per group. With many pre-treatment periods, we show that this assumption can be relaxed; instead, we provide an alternative application of our method that relies on assumptions about stationarity and convergence of the moments of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference method to linear factor models when there are few treated groups. We also propose a permutation test for the synthetic control estimator that provided a better heteroskedasticity correction in our simulations than the test suggested by Abadie et al. (2010).
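
The mechanism is easy to verify numerically: with i.i.d. individual errors, the error of a group × time average has variance proportional to 1/n_g, so unequal group sizes create heteroskedasticity by construction. A small illustrative simulation (not the authors' corrected inference procedure):

```python
# Sketch: group x time averages are heteroskedastic by construction when
# group sizes differ, since Var(mean error) = sigma^2 / n_g. This is the
# mechanism behind the over/under-rejection discussed above.
import numpy as np

rng = np.random.default_rng(2)
sizes = [50, 5000]                     # a small and a large group
reps = 2000
for n in sizes:
    # empirical variance of the group-average error across replications
    means = rng.normal(size=(reps, n)).mean(axis=1)
    print(f"n = {n:5d}: Var(group mean) = {means.var():.5f}  (~1/n = {1/n:.5f})")
```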

Relevance: 60.00%

Abstract:

Using the three-factor model proposed by Huse (2007), this paper relates observable macroeconomic and financial variables to the term structure of interest rates (yield curve) in Latin American countries (Brazil, Chile, Colombia, and Mexico). We consider the following macroeconomic determinants: the inflation rate, the growth rate of economic activity, exchange-rate variation, the level of credit default swaps (CDS), the unemployment rate, the nominal interest rate, and global factors (the slope of the U.S. yield curve and changes in commodity indices). The models explain more than 75% of the variation for Brazil, Chile, and Colombia, and 68% for Mexico. Positive changes in activity and inflation are accompanied, in all countries, by an increase in the yield curve. Increases in the CDS level, except in Chile, raise long rates, while increases in the unemployment rate have different effects across countries. At the same time, currency depreciations are not followed by interest-rate increases, which can be explained by central banks treating exchange-rate depreciations as having transitory effects on inflation. In Mexico, increases in the yield curve are directly related to energy and metals commodity indices; in the Brazilian case, where gasoline prices are regulated and do not pass through to inflation, this channel is not relevant. Positive changes in the slope of the U.S. curve have similar effects on Latin American curves, lowering short rates and raising long rates.
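
A minimal sketch of the estimation idea, maturity-by-maturity regressions of yields on observable macro determinants, on simulated placeholder data (the variable set and loading profile below are assumptions, not Huse's exact specification):

```python
# Sketch: regress yields at each maturity on observable macro variables
# and report the R^2, in the spirit of an observable-factor yield-curve
# model (simulated placeholder data throughout).
import numpy as np

rng = np.random.default_rng(3)
T = 150
macro = rng.normal(size=(T, 4))        # inflation, activity, FX change, CDS
maturities = [3, 12, 24, 60, 120]      # months
X = np.column_stack([np.ones(T), macro])

for m in maturities:
    loadings = rng.normal(size=4) / np.log(2 + m)   # fake maturity profile
    y = macro @ loadings + 0.3 * rng.normal(size=T)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    r2 = 1 - ((y - X @ coef) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    print(f"maturity {m:4d}m: R^2 = {r2:.2f}")
```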

Relevance: 60.00%

Abstract:

The synthetic control (SC) method has been recently proposed as an alternative to estimate treatment effects in comparative case studies. The SC relies on the assumption that there is a weighted average of the control units that reconstructs the potential outcome of the treated unit in the absence of treatment. If these weights were known, then one could estimate the counterfactual for the treated unit using this weighted average. With these weights, the SC would provide an unbiased estimator of the treatment effect even if selection into treatment is correlated with the unobserved heterogeneity. In this paper, we revisit the SC method in a linear factor model where the SC weights are considered nuisance parameters that are estimated to construct the SC estimator. We show that, when the number of control units is fixed, the estimated SC weights will generally not converge to the weights that reconstruct the factor loadings of the treated unit, even when the number of pre-intervention periods goes to infinity. As a consequence, the SC estimator will be asymptotically biased if treatment assignment is correlated with the unobserved heterogeneity. The asymptotic bias only vanishes when the variance of the idiosyncratic error goes to zero. We suggest a slight modification of the SC method that guarantees that the SC estimator is asymptotically unbiased and has a lower asymptotic variance than the difference-in-differences (DID) estimator when the DID identification assumption is satisfied. If the DID assumption is not satisfied, then both estimators would be asymptotically biased, and it would not be possible to rank them in terms of their asymptotic bias.
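
A sketch of the weight-estimation step, with weights constrained to be non-negative and to sum to one, plus a demeaned variant in the spirit of the modification suggested above (simulated placeholder data; an illustration, not the paper's exact procedure):

```python
# Sketch: synthetic control weights as constrained least squares on
# pre-treatment outcomes, with a demeaned variant that removes
# pre-treatment means before matching (simulated placeholder data).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
T0, J = 40, 8                           # pre-treatment periods, controls
controls = rng.normal(size=(T0, J)).cumsum(axis=0)
treated = controls[:, :3].mean(axis=1) + rng.normal(scale=0.5, size=T0)

def sc_weights(y, Y):
    """Minimize pre-treatment fit error s.t. w >= 0 and sum(w) = 1."""
    J = Y.shape[1]
    obj = lambda w: ((y - Y @ w) ** 2).sum()
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)
    res = minimize(obj, np.full(J, 1 / J), bounds=[(0, 1)] * J, constraints=cons)
    return res.x

w_plain = sc_weights(treated, controls)
w_demeaned = sc_weights(treated - treated.mean(),
                        controls - controls.mean(axis=0))
print("plain weights:   ", w_plain.round(2))
print("demeaned weights:", w_demeaned.round(2))
```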

Relevance: 60.00%

Abstract:

Knowledge of the current state of the economy is crucial for policy makers, economists, and analysts. However, a key economic variable, gross domestic product (GDP), is typically collected on a quarterly basis and released with substantial delays by the national statistical agencies. The first aim of this paper is to use a dynamic factor model (DFM) to forecast current Russian GDP using a set of timely monthly information. This approach can cope with the typical data-flow problems of non-synchronous releases, mixed frequencies, and the curse of dimensionality. Given that the Russian economy is largely dependent on the commodity market, our second motivation is to study the effects of innovations in Russian macroeconomic fundamentals on commodity price predictability. We identify these innovations through a news index, which summarizes deviations of official data releases from the expectations generated by the DFM, and perform a forecasting exercise comparing the performance of different models.
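
A minimal sketch of the nowcasting idea: extract a common factor from a monthly panel by principal components and bridge it to quarterly GDP growth with a regression. The paper's DFM additionally handles ragged edges and mixed frequencies; the data and bridge equation below are placeholder assumptions.

```python
# Sketch: principal-components factor from a monthly panel, bridged to
# quarterly GDP growth by a simple regression (simulated placeholders;
# the paper's DFM also handles non-synchronous releases).
import numpy as np

rng = np.random.default_rng(5)
months, N = 120, 20
f = rng.normal(size=months).cumsum() * 0.1            # latent monthly factor
panel = np.outer(f, rng.normal(size=N)) + rng.normal(size=(months, N))

Xd = panel - panel.mean(axis=0)
_, _, vt = np.linalg.svd(Xd, full_matrices=False)
factor = Xd @ vt[0]                                   # monthly common factor

# Bridge equation: quarterly GDP growth on the quarterly-averaged factor.
fq = factor.reshape(-1, 3).mean(axis=1)               # 40 quarters
gdp = 0.8 * fq + rng.normal(scale=0.3, size=fq.size)
X = np.column_stack([np.ones(fq.size), fq])
coef, *_ = np.linalg.lstsq(X, gdp, rcond=None)
print("nowcast for latest quarter:", (coef[0] + coef[1] * fq[-1]).round(3))
```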

Relevance: 30.00%

Abstract:

This paper investigates the income inequality generated by a job-search process when different cohorts of homogeneous workers are allowed to have different degrees of impatience. Using the fact that the average wage under the invariant Markovian distribution is a decreasing function of the discount factor (Cysne (2004, 2006)), I show that the Lorenz curve and the between-cohort Gini coefficient of income inequality can be easily derived in this case. An example with arbitrary measures regarding the wage offers and the distribution of time preferences among cohorts provides some insights into how much income inequality can be generated, and into how it varies as a function of the probability of unemployment and of the probability that the worker does not find a job offer each period.
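
A small sketch of the between-cohort inequality computation: given each cohort's average wage under its invariant distribution, the Gini coefficient follows directly (the wage numbers below are illustrative, not the paper's measures):

```python
# Sketch: between-cohort Gini from cohort average wages, where more
# impatient cohorts accept lower wages on average (illustrative numbers).
import numpy as np

def gini(x):
    """Gini coefficient of a vector of non-negative values."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

# Average accepted wage per cohort, decreasing in impatience.
avg_wage = [1.00, 0.93, 0.87, 0.80, 0.70]
print("between-cohort Gini:", round(gini(avg_wage), 3))
```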

Relevance: 30.00%

Abstract:

Parametric term structure models have been successfully applied to numerous problems in fixed income markets, including pricing, hedging, risk management, and the study of monetary policy implications. In turn, dynamic term structure models, equipped with stronger economic structure, have been mainly adopted to price derivatives and to explain empirical stylized facts. In this paper, we combine flavors of those two classes of models to test whether no-arbitrage affects forecasting. We construct cross-section (allowing arbitrages) and arbitrage-free versions of a parametric polynomial model to analyze how well they predict out-of-sample interest rates. Based on U.S. Treasury yield data, we find that no-arbitrage restrictions significantly improve forecasts. Arbitrage-free versions achieve overall smaller biases and root mean square errors for most maturities and forecasting horizons. Furthermore, a decomposition of forecasts into forward rates and holding-return premia indicates that the superior performance of the no-arbitrage versions is due to a better identification of the bond risk premium.
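
The forecast comparison reduces to computing the bias and root mean square error of each model's out-of-sample predictions against realized yields; a sketch on placeholder numbers (not the paper's models or U.S. Treasury data):

```python
# Sketch: out-of-sample bias and RMSE for two competing yield forecasts
# (placeholder simulated forecasts; illustrative of the evaluation only).
import numpy as np

rng = np.random.default_rng(6)
realized = rng.normal(5.0, 0.5, size=60)              # realized yields (%)
fc_cross = realized + rng.normal(0.10, 0.30, size=60) # cross-section model
fc_noarb = realized + rng.normal(0.02, 0.22, size=60) # arbitrage-free model

for name, fc in [("cross-section", fc_cross), ("arbitrage-free", fc_noarb)]:
    err = fc - realized
    print(f"{name:15s} bias = {err.mean():+.3f}  RMSE = {np.sqrt((err**2).mean()):.3f}")
```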

Relevance: 30.00%

Abstract:

We develop and calibrate a model where differences in factor endowments lead countries to trade intermediate goods, and gains from trade are reflected in total factor productivity. We perform several output and growth decompositions to assess the impact that barriers to trade, as well as changes in the terms of trade, have on measured TFP. We find that for very poor economies gains from trade are large, in some cases representing a doubling of GDP. We also find that an improvement in the terms of trade, by allowing the use of a better mix of intermediate inputs in the production process, translates into productivity growth.
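
A stylized piece of arithmetic behind the mechanism: if output is a CES aggregate of intermediate varieties, spreading the same spending over a better mix of inputs raises output, which growth accounting attributes to TFP. The parameter values below are illustrative assumptions, not the calibrated model:

```python
# Sketch: love-of-variety arithmetic. With a CES aggregator, the same
# total spending spread over more intermediate varieties yields more
# output, showing up as measured TFP (stylized, illustrative numbers).
import numpy as np

def ces_output(quantities, sigma=3.0):
    """CES aggregate of intermediate quantities with elasticity sigma."""
    rho = (sigma - 1) / sigma
    return np.sum(np.asarray(quantities) ** rho) ** (1 / rho)

spend = 10.0
autarky = ces_output([spend / 2] * 2)   # two domestic varieties
trade = ces_output([spend / 5] * 5)     # five varieties via imports
print(f"measured TFP gain from trade: {100 * (trade / autarky - 1):.1f}%")
```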

Relevance: 30.00%

Abstract:

We develop and calibrate a model where differences in factor endowments lead countries to trade different goods, so that the existence of international trade changes the sectoral composition of output from one country to another. Gains from trade are reflected in total factor productivity. We perform a development decomposition to assess the impact of trade and barriers to trade on measured TFP. In our sample, the median size of that effect is about 6.5% of output, with a mean of 17% and a maximum of 89%. The model also predicts that changes in the terms of trade cause a change in productivity, an effect with an average elasticity of 0.71.
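
A sketch of how such an elasticity is read off the model: the log change in measured TFP divided by the log change in the terms of trade between two equilibria (the numbers below are placeholders chosen only to illustrate the calculation):

```python
# Sketch: elasticity of measured TFP to the terms of trade, computed as a
# log-difference ratio between two equilibria (placeholder numbers).
import numpy as np

tot = np.array([1.00, 1.10])     # terms of trade before/after
tfp = np.array([1.00, 1.07])     # measured TFP before/after
elasticity = np.diff(np.log(tfp))[0] / np.diff(np.log(tot))[0]
print(f"TFP-to-terms-of-trade elasticity: {elasticity:.2f}")
```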

Relevance: 30.00%

Abstract:

In this paper we construct common-factor portfolios using a novel linear transformation of standard factor models extracted from large data sets of asset returns. The simple transformation proposed here keeps the basic properties of the usual factor transformations while adding some new and interesting properties. Some theoretical advantages are shown to be present, and their practical importance is confirmed in two applications: the performance of common-factor portfolios is shown to be superior to that of asset returns and of factors commonly employed in the finance literature.
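
As background for the transformation, principal-component factors of a return panel are themselves portfolio returns, with weights given by the eigenvectors; the sketch below illustrates that fact on simulated data, without reproducing the paper's specific transformation:

```python
# Sketch: the first principal component of a return panel viewed as a
# portfolio, whose diversification typically yields a higher Sharpe ratio
# than individual assets (simulated data; not the paper's transformation).
import numpy as np

rng = np.random.default_rng(7)
T, N = 500, 25
common = rng.normal(0.02, 0.04, size=(T, 1))           # one priced factor
returns = common * rng.uniform(0.5, 1.5, size=N) + rng.normal(0, 0.05, size=(T, N))

R = returns - returns.mean(axis=0)
_, _, vt = np.linalg.svd(R, full_matrices=False)
w = vt[0]                                              # eigenvector = weights
factor_port = returns @ w                              # factor as a portfolio

asset_sharpes = returns.mean(axis=0) / returns.std(axis=0)
fp_sharpe = abs(factor_port.mean() / factor_port.std())  # sign is arbitrary
print("mean asset Sharpe:      ", asset_sharpes.mean().round(3))
print("factor-portfolio Sharpe:", fp_sharpe.round(3))
```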

Relevance: 30.00%

Abstract:

In this article we use factor models to describe a certain class of covariance structures for financial time series models. More specifically, we concentrate on situations where the factor variances are modeled by a multivariate stochastic volatility structure. We build on previous work by allowing the factor loadings in the factor model structure to be time-varying, capturing changes in asset weights over time, motivated by applications with multiple time series of daily exchange rates. We explore and discuss potential extensions of the models presented here in the prediction area. This discussion leads to open issues on real-time implementation and natural model comparisons.
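
A sketch of the two ingredients combined in the model, a factor with stochastic volatility and a slowly time-varying loading, simulated with simple random-walk specifications (illustrative; not the paper's priors or estimation):

```python
# Sketch: simulate one factor with random-walk log-variance (stochastic
# volatility) and a time-varying loading on an observed series.
import numpy as np

rng = np.random.default_rng(8)
T = 500
log_var = np.cumsum(0.05 * rng.normal(size=T)) - 6     # SV: random-walk log-variance
factor = np.exp(log_var / 2) * rng.normal(size=T)      # factor with SV
loading = 1.0 + np.cumsum(0.01 * rng.normal(size=T))   # time-varying loading
series = loading * factor + 0.01 * rng.normal(size=T)  # one observed return

print("factor vol range:", np.exp(log_var / 2).min().round(4),
      "to", np.exp(log_var / 2).max().round(4))
```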

Relevance: 30.00%

Abstract:

We provide a review of the stochastic discount factor (SDF) bounds usually applied to diagnose asset pricing models. In particular, we discuss the bounds used to analyze the disaster model of Barro (2006). We focus on this disaster model because the SDF bounds applied to study the performance of disaster models usually follow the approach of Barro (2006). We first present the entropy bounds that provide a diagnosis of the analyzed disaster model, namely the methods of Almeida and Garcia (2012, 2016) and Ghosh et al. (2016). We then discuss how their results for the disaster model relate to each other, and also present the findings of other methodologies that are similar to these bounds but provide different evidence about the performance of the framework developed by Barro (2006).
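
For reference, the classic ancestor of these diagnostics is the Hansen-Jagannathan volatility bound, which the entropy bounds reviewed above refine beyond the first two moments; a sample version on placeholder returns:

```python
# Sketch: the Hansen-Jagannathan bound. Any valid SDF m must satisfy
# sigma(m)/E[m] >= |E[R^e]| / sigma(R^e) for every excess return R^e,
# i.e. the bound equals the Sharpe ratio (placeholder simulated returns;
# the entropy bounds in the review generalize this idea).
import numpy as np

rng = np.random.default_rng(9)
excess = rng.normal(0.006, 0.04, size=600)           # monthly excess returns
hj_lower_bound = abs(excess.mean()) / excess.std()   # bound on sigma(m)/E[m]
print(f"HJ bound on sigma(m)/E[m]: {hj_lower_bound:.3f} (monthly Sharpe ratio)")
```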