18 results for transitory
in the Repositório Digital da Fundação Getúlio Vargas - FGV
Abstract:
This paper investigates the degree of short-run and long-run co-movement in U.S. sectoral output data by estimating sectoral trends and cycles. A theoretical model based on Long and Plosser (1983) is used to derive a reduced form for sectoral output from first principles. Cointegration and common-features (cycles) tests are performed; sectoral output data seem to share a relatively high number of common trends and a relatively low number of common cycles. A special trend-cycle decomposition of the data set is performed, and the results indicate a very similar cyclical behavior across sectors and a very different behavior for trends. Indeed, sectors' cyclical components appear to move as one. In a variance decomposition analysis, prominent sectors such as Manufacturing and Wholesale/Retail Trade exhibit relatively important transitory shocks.
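As a rough sketch of the cointegration step mentioned above (not the authors' full common-features procedure), the Johansen trace test below counts the common stochastic trends shared by a hypothetical T x n matrix of log sectoral output series, using statsmodels; the function name, lag choice, and significance level are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

def count_common_trends(x, k_ar_diff=1, signif_col=1):
    """Johansen trace test on a T x n array of log sectoral outputs.

    With cointegrating rank r, the n series share n - r common trends.
    signif_col picks the critical-value column: 0 = 90%, 1 = 95%, 2 = 99%.
    """
    res = coint_johansen(x, det_order=0, k_ar_diff=k_ar_diff)
    rank = 0
    for trace_stat, crit_val in zip(res.lr1, res.cvt[:, signif_col]):
        if trace_stat > crit_val:   # reject "rank <= current value", move on
            rank += 1
        else:
            break
    return x.shape[1] - rank        # number of common stochastic trends
```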
Abstract:
Reduced-form estimation of multivariate data sets currently takes into account long-run co-movement restrictions by using Vector Error Correction Models (VECMs). However, short-run co-movement restrictions are completely ignored. This paper proposes a way of taking into account both short- and long-run co-movement restrictions in multivariate data sets, leading to efficient estimation of VECMs. It enables a more precise trend-cycle decomposition of the data that imposes no untested restrictions to recover these two components. The proposed methodology is applied to a multivariate data set containing U.S. per-capita output, consumption, and investment. Based on the results of a post-sample forecasting comparison between restricted and unrestricted VECMs, we show that a non-trivial loss of efficiency results whenever short-run co-movement restrictions are ignored. While permanent shocks to consumption still play a very important role in explaining consumption's variation, the improved estimates of trends and cycles of output, consumption, and investment show evidence of a more important role for transitory shocks than previously suspected. Furthermore, contrary to previous evidence, permanent shocks to output appear to play a much more important role in explaining unemployment fluctuations.
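For context, a bare-bones unrestricted VECM of the kind the restricted model is benchmarked against can be set up with statsmodels as sketched below; the lag length, deterministic terms, and variable ordering are illustrative assumptions, and the short-run (common-cycle) restrictions that are the paper's contribution are not imposed here.

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

def fit_unrestricted_vecm(y, lags=2):
    """Unrestricted VECM for a T x 3 array of log per-capita output,
    consumption and investment (illustrative ordering and lag length)."""
    rank = select_coint_rank(y, det_order=0, k_ar_diff=lags).rank
    model = VECM(y, k_ar_diff=lags, coint_rank=rank, deterministic="co")
    return model.fit()

# res = fit_unrestricted_vecm(y)
# res.predict(steps=8)  # out-of-sample forecasts for a post-sample comparison
```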
Abstract:
This paper proposes unit root tests based on partially adaptive estimation. The proposed tests provide an intermediate class of inference procedures that are more efficient than the traditional OLS-based methods and simpler than unit root tests based on fully adaptive estimation using nonparametric methods. The limiting distribution of the proposed test is a combination of the standard normal and the traditional Dickey-Fuller (DF) distribution, and it includes the traditional ADF test as a special case when the Gaussian density is used. Taking into account the well-documented heavy-tail behavior of economic and financial data, we consider unit root tests coupled with a class of partially adaptive M-estimators based on Student-t distributions, which includes the normal distribution as a limiting case. Monte Carlo experiments indicate that, in the presence of heavy-tailed distributions or innovations contaminated by outliers, the proposed test is more powerful than the traditional ADF test. We apply the proposed test to several macroeconomic time series that have heavy-tailed distributions. The unit root hypothesis is rejected for U.S. real GNP, supporting the literature on transitory shocks in output. However, evidence against unit roots is not found for the real exchange rate and the nominal interest rate, even when heavy tails are taken into account.
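The Gaussian ADF benchmark that the proposed test nests can be run as in the sketch below; the partially adaptive Student-t estimation itself is not shown, and the input series name is a hypothetical placeholder.

```python
from statsmodels.tsa.stattools import adfuller

def adf_report(x, regression="ct"):
    """Standard (Gaussian) ADF test with constant and trend; the paper's
    Student-t partially adaptive version reduces to this in the Gaussian limit."""
    stat, pvalue, usedlag, nobs, crit, _ = adfuller(x, regression=regression, autolag="AIC")
    return {"adf_stat": stat, "p_value": pvalue, "lags": usedlag, "crit_5pct": crit["5%"]}

# adf_report(log_real_gnp)  # hypothetical series of log U.S. real GNP
```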
Abstract:
The main objective of this study is to test for the phenomenon commonly described in economic theory as the J-curve, which is characterized by a short-run deterioration of the trade balance following an episode of real exchange-rate depreciation. Given the difficulty of defining depreciation/devaluation events, although the empirical tests cover the period from 1980 to 2005, three specific episodes are used in the descriptive analysis, in which a sharp positive variation of the real exchange rate provided the ideal setting for identifying a possible transitory deterioration of the trade balance. Based on three different econometric testing approaches, the empirical evidence suggests that the J-curve phenomenon does not explain the behavior of the trade balance after such episodes.
Abstract:
This study seeks to juxtapose two views of man in the world, the Cartesian and the Zen-Buddhist, highlighting the fragmenting character of the former and the integrating character of the latter. Its basic aim is to contribute to a fuller articulation and integration of contemporary man through three movements. First, through a critique of the rationalist, mechanical, overbearing, and anti-natural conception, founded on the dual and antithetical logic of the paradigm of Western science and thought, built in large part by René Descartes. Second, through the dissemination and emphatic exposition, in our Western academic milieu, of Buddhist metaphysics and mysticism, especially in their Zen version, which understands the world, in permanent transformation and construction, as a whole articulated with the universal equilibrium, in which words, knowledge, and perception are mere fleeting signs that conceal cosmic reality. The final movement seeks to reject a Manichaean view of reality in which opposition is seen as the elimination of one term by the other, proposing instead the search for a new synthesis: the gestation, in short, of a new frame of reference for the world - the question is merely raised here - to be built, perhaps, from the integration of the paradigms characterized today as Western and Eastern.
Abstract:
It is well known that cointegration between the levels of two variables (labeled Yt and yt in this paper) is a necessary condition to assess the empirical validity of a present-value model (PV and PVM, respectively, hereafter) linking them. The work on cointegration has been so prevalent that it is often overlooked that another necessary condition for the PVM to hold is that the forecast error entailed by the model be orthogonal to the past. The basis of this result is the use of rational expectations in forecasting future values of variables in the PVM. If this condition fails, the present-value equation will not be valid, since it will contain an additional term capturing the (non-zero) conditional expected value of future error terms. Our article has a few novel contributions, but two stand out. First, in testing for PVMs, we advise splitting the restrictions implied by PV relationships into orthogonality conditions (or reduced-rank restrictions) before conducting additional tests on parameter values. We show that PV relationships entail a weak-form common feature relationship as in Hecq, Palm, and Urbain (2006) and in Athanasopoulos, Guillén, Issler, and Vahid (2011), and also a polynomial serial-correlation common feature relationship as in Cubadda and Hecq (2001), which represent restrictions on dynamic models and allow several tests for the existence of PV relationships to be used. Because these relationships occur mostly with financial data, we propose tests based on generalized method of moments (GMM) estimates, where it is straightforward to propose robust tests in the presence of heteroskedasticity. We also propose a robust Wald test developed to investigate the presence of reduced-rank models. Their performance is evaluated in a Monte Carlo exercise. Second, in the context of asset pricing, we propose applying a permanent-transitory (PT) decomposition based on Beveridge and Nelson (1981), which focuses on extracting the long-run component of asset prices, a key concept in modern financial theory as discussed in Alvarez and Jermann (2005), Hansen and Scheinkman (2009), and Nieuwerburgh, Lustig, and Verdelhan (2010). Here again we can exploit the results developed in the common cycle literature to easily extract permanent and transitory components under both long- and short-run restrictions. The techniques discussed herein are applied to long-span annual data on long- and short-term interest rates and on prices and dividends for the U.S. economy. In both applications we do not reject the existence of a common cyclical feature vector linking the two series. Extracting the long-run component shows the usefulness of our approach and highlights the presence of asset-pricing bubbles.
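A minimal univariate version of the Beveridge and Nelson (1981) split referred to above is sketched below, assuming purely for illustration that the first difference of the series follows an AR(1); the paper's multivariate extraction under common-cycle restrictions is more involved.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

def beveridge_nelson_ar1(y):
    """Beveridge-Nelson trend/cycle split assuming an AR(1) for the first difference.

    With E_t[dy_{t+j} - mu] = phi**j * (dy_t - mu), the permanent component is
    y_t + phi/(1-phi) * (dy_t - mu) and the cycle is the remainder.
    """
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    res = AutoReg(dy, lags=1, trend="c").fit()
    intercept, phi = res.params
    mu = intercept / (1.0 - phi)                 # drift of the series
    trend = y[1:] + (phi / (1.0 - phi)) * (dy - mu)
    cycle = y[1:] - trend
    return trend, cycle
```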
Abstract:
Lucas (1987) has shown a surprising result in business-cycle research: the welfare cost of business cycles is very small. Our paper has several original contributions. First, in computing welfare costs, we propose a novel setup that separates the effects of uncertainty stemming from business-cycle fluctuations and from economic-growth variation. Second, we extend the sample from which the moments of consumption are computed: the literature has chosen primarily to work with post-WWII data. For this period, actual consumption is already the result of counter-cyclical policies and is potentially smoother than it would otherwise have been in their absence. So we also employ pre-WWII data. Third, we take an econometric approach and compute explicitly the asymptotic standard deviation of welfare costs using the Delta Method. Estimates of welfare costs show major differences between the pre-WWII and the post-WWII eras. They can differ by up to a factor of 15 for reasonable parameter values (β = 0.985 and φ = 5). For example, in the pre-WWII period (1901-1941), welfare cost estimates are 0.31% of consumption if we consider only permanent shocks and 0.61% of consumption if we consider only transitory shocks. In comparison, the post-WWII era is much quieter: welfare costs of economic growth are 0.11% and welfare costs of business cycles are 0.037%, the latter being very close to Lucas's estimate (0.040%). Estimates of marginal welfare costs are roughly twice the size of the total welfare costs. For the pre-WWII era, marginal welfare costs of economic-growth and business-cycle fluctuations are respectively 0.63% and 1.17% of per-capita consumption. The same figures for the post-WWII era are, respectively, 0.21% and 0.07% of per-capita consumption.
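For orientation only, the textbook calculation behind Lucas's number: assuming CRRA utility with risk-aversion coefficient φ and i.i.d. lognormal deviations of consumption around trend with variance σ², the compensating consumption increase is roughly

```latex
% Lucas (1987) welfare cost of consumption volatility, assuming CRRA utility
% u(c) = c^{1-\phi}/(1-\phi) and log consumption deviations ~ N(0, \sigma^2):
\lambda \;\approx\; \tfrac{1}{2}\,\phi\,\sigma^{2}.
```

The paper's estimates refine this benchmark by distinguishing permanent from transitory shocks and by attaching asymptotic standard errors via the Delta Method.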
Abstract:
The main objective of this paper is to propose a novel setup that allows estimating separately the welfare costs of the uncertainty stemming from business-cycle fluctuations and from economic-growth variation, when the two types of shocks associated with them (respectively, transitory and permanent shocks) hit consumption simultaneously. Separating these welfare costs requires dealing with degenerate bivariate distributions. Lévy's Continuity Theorem and the Disintegration Theorem allow us to adequately define the one-dimensional limiting marginal distributions. Under Normality, we show that the parameters of the original marginal distributions are not affected, providing the means for calculating separately the welfare costs of business-cycle fluctuations and of economic-growth variation. Our empirical results show that, if we consider only transitory shocks, the welfare cost of business cycles is much smaller than previously thought. Indeed, we find it to be negative: -0.03% of per-capita consumption! On the other hand, we find that the welfare cost of economic-growth variation is relatively large. Our estimate for reasonable preference-parameter values shows that it is 0.71% of consumption, or US$ 208.98 per person per year.
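One simple way to formalize the setup (a sketch under assumed Normality, not necessarily the paper's exact specification): log consumption is the sum of a random walk with drift, hit by permanent shocks, and an i.i.d. transitory component, and each welfare cost is tied to the variance of the corresponding shock.

```latex
% Sketch of the assumed consumption process:
\ln c_t = z_t + \epsilon_t, \qquad z_t = \mu + z_{t-1} + \eta_t,
\qquad \eta_t \sim N(0,\sigma_\eta^2), \quad \epsilon_t \sim N(0,\sigma_\epsilon^2),
% with the welfare cost of business cycles tied to \sigma_\epsilon^2 and the
% welfare cost of economic-growth variation tied to \sigma_\eta^2.
```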
Abstract:
We extend the standard price discovery analysis to estimate the information share of dual-class shares across domestic and foreign markets. By examining both common and preferred shares, we aim to extract information not only about the fundamental value of the firm, but also about the dual-class premium. In particular, our interest lies in the price discovery mechanism regulating the prices of common and preferred shares on the BM&FBovespa as well as the prices of their ADR counterparts on the NYSE and on the Arca platform. However, in the presence of contemporaneous correlation between the innovations, the standard information share measure depends heavily on the ordering we attribute to prices in the system. To remain agnostic about which are the leading share class and market, one could, for instance, compute some weighted average information share across all possible orderings. This is extremely inconvenient given that we are dealing with 2 share prices in Brazil, 4 share prices in the US, plus the exchange rate (and hence over 5,000 permutations!). We thus develop a novel methodology to carry out price discovery analyses that does not impose any ex-ante assumption about which share class or trading platform conveys more information about shocks in the fundamental price. As such, our procedure yields a single measure of information share, which is invariant to the ordering of the variables in the system. Simulations of a simple market microstructure model show that our information share estimator works quite well in practice. We then employ transactions data to study price discovery in two dual-class Brazilian stocks and their ADRs. We uncover two interesting findings. First, the foreign market is at least as informative as the home market. Second, shocks in the dual-class premium entail a permanent effect in normal times, but a transitory one in periods of financial distress. We argue that the latter is consistent with the expropriation of preferred shareholders as a class.
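To make the ordering problem concrete, the sketch below computes the conventional (ordering-dependent) Hasbrouck information-share bounds from a hypothetical common-trend loading vector psi and innovation covariance omega taken from a VECM; this is the standard measure the abstract criticizes, not the authors' ordering-invariant alternative.

```python
import numpy as np
from itertools import permutations

def hasbrouck_is_bounds(psi, omega):
    """Lower/upper Hasbrouck information shares across all Cholesky orderings.

    psi   : 1-D long-run impact (common-trend) vector from the VECM
    omega : innovation covariance matrix of the price system
    """
    psi, omega = np.asarray(psi, float), np.asarray(omega, float)
    n = len(psi)
    total = psi @ omega @ psi                      # variance of the permanent shock
    lo, hi = np.full(n, np.inf), np.full(n, -np.inf)
    for order in permutations(range(n)):
        idx = list(order)
        f = np.linalg.cholesky(omega[np.ix_(idx, idx)])
        share = (psi[idx] @ f) ** 2 / total        # shares under this ordering
        for pos, var in enumerate(idx):
            lo[var] = min(lo[var], share[pos])
            hi[var] = max(hi[var], share[pos])
    return lo, hi   # with 7 prices this loops over 5,040 orderings
```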
Abstract:
In recent years, many central banks have adopted inflation targeting policies, starting an intense debate about which measure of inflation to adopt. The literature on core inflation has tried to develop indicators of inflation that respond only to "significant" changes in inflation. This paper defines a measure of core inflation as the common trend of prices in a multivariate dynamic model that has, by construction, three properties: it filters idiosyncratic noise, it filters transitory macroeconomic noise, and it leads the future level of headline inflation. We also show that the popular trimmed mean estimator of core inflation can be regarded as a proxy for the ideal GLS estimator for heteroskedastic data. We employ an asymmetric trimmed mean estimator to take account of possible skewness of the distribution, and we obtain an unconditional measure of core inflation.
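A bare-bones weighted asymmetric trimmed mean along these lines is sketched below; the trim fractions are illustrative, and components straddling the trim points are simply dropped rather than partially weighted.

```python
import numpy as np

def asymmetric_trimmed_mean(component_inflation, weights, trim_low=0.20, trim_high=0.25):
    """Weighted trimmed mean of component inflation rates, removing trim_low
    of the weight from the left tail and trim_high from the right tail."""
    x = np.asarray(component_inflation, dtype=float)
    w = np.asarray(weights, dtype=float)
    order = np.argsort(x)                 # sort components by their inflation rate
    x, w = x[order], w[order] / w.sum()
    cum = np.cumsum(w)
    keep = (cum > trim_low) & (cum <= 1.0 - trim_high)
    return np.average(x[keep], weights=w[keep])
```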
Abstract:
The inability of rational expectations models with money supply rules to deliver inflation persistence following a transitory deviation of money growth from trend is due to the rapid adjustment of the price level to expected events. The observation of persistent inflation in macroeconomic data leads many economists to believe that prices adjust sluggishly and/or that expectations are not rational. Inflation persistence in U.S. data can be characterized by a vector autocorrelation function relating inflation and deviations of output from trend. In this vector autocorrelation function, both inflation and output are highly persistent, and there are significant positive dynamic cross-correlations relating inflation and output. This paper shows that a flexible-price general equilibrium business cycle model with money and a central bank following a Taylor rule can account for these patterns. There are no sticky prices and no liquidity effects. Agents' decisions in a period are taken only after all shocks are observed. The monetary policy rule transforms output persistence into inflation persistence and creates positive cross-correlations between inflation and output.
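The policy rule referred to here has the generic Taylor-rule form (the coefficients shown are the usual textbook values, not necessarily the paper's calibration):

```latex
% Generic Taylor rule for the nominal interest rate:
i_t = r^{*} + \pi_t + \phi_{\pi}\,(\pi_t - \pi^{*}) + \phi_{y}\,\tilde{y}_t,
\qquad \phi_{\pi},\ \phi_{y} > 0,
% where \pi^{*} is the inflation target and \tilde{y}_t the output gap;
% Taylor's original values are \phi_{\pi} = \phi_{y} = 0.5.
```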
Abstract:
In this paper, we decompose the variance of the logarithm of monthly earnings of prime-age males into its permanent and transitory components, using a five-wave rotating panel from the Venezuelan “Encuesta de Hogares por Muestreo” from 1995 to 1997. As far as we know, this is the first time a variance components model has been estimated for a developing country. We test several specifications and find that an error components model with individual random effects and first-order serially correlated errors fits the data well. In the simplest model, around 22% of the earnings variance is explained by the variance of the permanent component, 77% by purely stochastic variation, and the remaining 1% by serial correlation. These results contrast with studies from industrial countries, where the permanent component is predominant. The permanent component is usually interpreted as the result of productivity characteristics of individuals, whereas the transitory component is due to stochastic perturbations such as job and/or price instability, among others. Our findings may be due to the timing of the panel, which occurred precisely during the macroeconomic turmoil resulting from a severe financial crisis. The findings suggest that earnings instability is an important source of inequality in a region characterized by high inequality and macroeconomic instability.
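The simplest specification described above can be written as the following error-components model (a sketch; the regressors x_it and the variance notation are added here for exposition):

```latex
% Log monthly earnings of individual i in month t:
\ln y_{it} = x_{it}'\beta + \mu_i + \varepsilon_{it}, \qquad
\varepsilon_{it} = \rho\,\varepsilon_{i,t-1} + u_{it},
% with \mu_i a permanent individual random effect and u_{it} a transitory shock.
% The permanent share of the earnings variance is
\sigma_{\mu}^{2} \big/ \left( \sigma_{\mu}^{2} + \sigma_{\varepsilon}^{2} \right),
% estimated in the paper at roughly 22%, against 77% purely stochastic
% variation and about 1% attributable to serial correlation.
```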
Abstract:
The conventional wisdom is that the aggregate stock price is predictable by the lagged price-dividend ratio, and that aggregate dividends follow approximately a random walk. Contrary to this belief, this paper finds that variation in aggregate dividends and the price-dividend ratio is related to changes in expected dividend growth. The inclusion of labor income in a cointegrated vector autoregression with prices and dividends allows the identification of predictable variation in dividends. Most of the variation in the price-dividend ratio is due to changes in expected returns, but this paper shows that part of the variation is related to transitory dividend growth shocks. Moreover, most of the variation in dividend growth can be attributed to these temporary changes in dividends. I also show that the price-dividend ratio (or dividend yield) can be constructed as the sum of two distinct, but correlated, variables that separately predict dividend growth and returns. One of these components, which could be called the expected-return state variable, predicts returns better than the price-dividend ratio does.
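The accounting behind this decomposition is the standard Campbell-Shiller log-linear identity (stated here as background rather than as the paper's estimating equation), which writes the log price-dividend ratio as discounted expected future dividend growth minus discounted expected future returns:

```latex
% Campbell-Shiller log-linear present-value identity
% (\rho and \kappa are linearization constants):
pd_t \;\approx\; \frac{\kappa}{1-\rho}
  + \mathbb{E}_t \sum_{j=0}^{\infty} \rho^{\,j}
    \left( \Delta d_{t+1+j} - r_{t+1+j} \right),
% so persistent movements in pd_t must reflect news about future dividend
% growth, future returns, or both.
```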