12 results for Variance decomposition
in the Digital Repository of Fundação Getúlio Vargas (FGV)
Abstract:
This paper investigates the degree of short-run and long-run co-movement in U.S. sectoral output data by estimating sectoral trends and cycles. A theoretical model based on Long and Plosser (1983) is used to derive a reduced form for sectoral output from first principles. Cointegration and common-features (cycles) tests are performed; sectoral output data seem to share a relatively high number of common trends and a relatively low number of common cycles. A special trend-cycle decomposition of the data set is performed, and the results indicate very similar cyclical behavior across sectors and very different behavior for trends. Indeed, the sectors' cyclical components appear as one. In a variance decomposition analysis, prominent sectors such as Manufacturing and Wholesale/Retail Trade exhibit relatively important transitory shocks.
Abstract:
The paper investigates, on empirical and theoretical grounds, the Brazilian exchange rate dynamics under floating exchange rates. The empirical analysis examines the short- and long-term behavior of the exchange rate, interest rates (domestic and foreign) and country risk using econometric techniques such as variance decomposition, Granger causality, cointegration tests, error-correction models, and a GARCH model to estimate exchange rate volatility. The empirical findings suggest that one can argue in favor of a certain degree of endogeneity of the exchange rate, and that flexible rates have not been able to insulate the Brazilian economy to the extent predicted by the literature, owing both to its own specificities (managed floating with the use of international reserves and domestic interest rates set according to an inflation target) and to externally determined variables such as country risk. Another important outcome is the lack of a closer association between domestic and foreign interest rates since the new exchange regime was adopted. That is, from January 1999 to May 2004, U.S. monetary policy had no significant impact on the Brazilian exchange rate dynamics, which have been essentially endogenous, primarily when we consider the fiscal dominance expressed by the probability of default.
Abstract:
The present work investigates the dynamics of capital account liberalization and its impact on short-run capital flows to Brazil in the period 1995-2002, considering different segments such as the monetary, derivative and equity markets. This task is pursued through a comparative study of financial flows, examining how they are affected by uncovered interest parity, country risk and the legislation on portfolio capital flows. The empirical framework is based on a vector autoregressive (VAR) analysis using impulse-response functions, variance decomposition and Granger causality tests. In general terms, the results indicate a crucial role played by uncovered interest parity and country risk in explaining portfolio flows, while less restrictive (more liberalized) legislation is not significant in attracting such flows.
Abstract:
Despite the commonly held belief that aggregate data display short-run comovement, there has been little discussion about the econometric consequences of this feature of the data. We use exhaustive Monte-Carlo simulations to investigate the importance of restrictions implied by common-cyclical features for estimates and forecasts based on vector autoregressive models. First, we show that the “best” empirical model developed without common cycle restrictions need not nest the “best” model developed with those restrictions. This is due to possible differences in the lag-lengths chosen by model selection criteria for the two alternative models. Second, we show that the costs of ignoring common cyclical features in vector autoregressive modelling can be high, both in terms of forecast accuracy and efficient estimation of variance decomposition coefficients. Third, we find that the Hannan-Quinn criterion performs best among model selection criteria in simultaneously selecting the lag-length and rank of vector autoregressions.
Abstract:
We estimate and test two alternative functional forms representing the aggregate production function for a panel of countries: the extended neoclassical growth model, and a Mincerian formulation of schooling returns to skills. Estimation is performed using instrumental-variable techniques, and both functional forms are confronted using a Box-Cox test, since human capital inputs enter in levels in the Mincerian specification and in logs in the extended neoclassical growth model. Our evidence rejects the extended neoclassical growth model in favor of the Mincerian specification, with an estimated capital share of about 42%, a marginal return to education of about 7.5% per year, and an estimated productivity growth of about 1.4% per year. Differences in productivity cannot be disregarded as an explanation of why output per worker varies so much across countries: a variance decomposition exercise shows that productivity alone explains 54% of the variation in output per worker across countries.
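The cross-country variance decomposition behind a figure like the 54% is, at bottom, a covariance-share calculation: if log y = log A + x (log productivity plus the combined factor-input terms), then var(log y) = cov(log A, log y) + cov(x, log y), and each covariance term divided by var(log y) is that component's share. A minimal numpy sketch on made-up data (not the paper's sample):

```python
import numpy as np

# Hypothetical cross-country data (illustration only; not the paper's sample)
rng = np.random.default_rng(2)
n = 100
log_A = rng.normal(0.0, 0.5, n)              # log productivity
x = 0.3 * log_A + rng.normal(0.0, 0.4, n)    # combined factor-input terms
log_y = log_A + x                            # log output per worker

C = np.cov(np.vstack([log_A, x, log_y]))
share_A = C[0, 2] / C[2, 2]   # productivity's share of var(log y)
share_x = C[1, 2] / C[2, 2]   # factors' share of var(log y)
print(round(share_A + share_x, 10))  # 1.0 -- shares sum to one by construction
```

The shares sum to one exactly because covariance is linear in its first argument; when productivity and factors are correlated, this convention splits the covariance term equally between them.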
Abstract:
In this paper we investigate the small-sample properties and robustness of DSGE model parameter estimates. We take the Smets and Wouters (2007) model as a baseline and evaluate the performance of two estimation procedures: the Simulated Method of Moments (SMM) and Maximum Likelihood (ML). We examine the empirical distribution of the parameter estimates and its implications for impulse-response and variance decomposition analyses under both correct specification and misspecification. Our results point to poor performance of SMM and to some patterns of bias in the impulse-response and variance decomposition analyses based on ML estimates in the misspecification cases considered.
Abstract:
This paper analyzes both the levels and the evolution of wage inequality in the Brazilian formal labor market using administrative data from the Brazilian Ministry of Labor (RAIS) from 1994 to 2009. After estimating the covariance structure of the log of real weekly wages and decomposing its variance into permanent and transitory components, we verify that nearly 60% of the inequality within age and education groups is explained by the permanent component, i.e., by time-invariant individual productive characteristics. During this period, wage inequality decreased by 29%. In the first years immediately after the macroeconomic stabilization (1994
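The permanent/transitory split described here rests on a between/within variance decomposition of log wages by worker. A toy sketch of the underlying law-of-total-variance identity, on a made-up balanced panel (the paper itself estimates a full covariance structure, which this does not attempt):

```python
import numpy as np

# Toy balanced panel: 200 hypothetical workers observed over 10 years
rng = np.random.default_rng(3)
n_workers, n_years = 200, 10
alpha = rng.normal(0.0, 0.6, n_workers)            # permanent person effect
eps = rng.normal(0.0, 0.4, (n_workers, n_years))   # transitory shocks
log_w = alpha[:, None] + eps                       # log wages

total = log_w.var()                  # overall variance (ddof=0)
between = log_w.mean(axis=1).var()   # variance of person means ("permanent")
within = log_w.var(axis=1).mean()    # mean within-person variance ("transitory")

# Law of total variance: total = between + within, exact for a balanced panel
print(abs(total - (between + within)) < 1e-12)  # True
perm_share = between / total
```

Person means still contain averaged transitory noise, so `between` overstates the permanent component in short panels; that is why the paper works with the full covariance structure rather than this raw accounting identity.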
Abstract:
This paper applies the intertemporal asset pricing model developed by Campbell (1993) and Campbell and Vuolteenaho (2004) to the Brazilian Fama-French 2x3 portfolios from January 2003 to April 2012 and to the American Fama-French 5x5 portfolios over different periods. The variables suggested by Campbell and Vuolteenaho (2004) to predict excess returns on the American stock market over 1929-2001 also proved to be good predictors of excess returns for the Brazilian market in the recent period, with the exception of the slope of the term structure of interest rates. However, we show that an increase in the small-stock value spread indicates higher excess returns in the future, a behavior that is not consistent with the explanation for the value premium suggested by the intertemporal model. Furthermore, using the residuals of the predictive VAR to define the cash-flow-shock and discount-rate-shock risks of the test portfolios, we find that the resulting intertemporal model does not adequately explain the observed returns. For the American market, we conclude that the ability of the proposed variables to explain market excess returns varies over time. The success of Campbell and Vuolteenaho (2004) in explaining the value premium for the American market in the 1963-2001 sample results from specifying the VAR over the full sample, as we show that none of the variables is a statistically significant return predictor in that subsample.
Abstract:
This paper tackles the problem of aggregate TFP measurement using stochastic frontier analysis (SFA). Data from Penn World Table 6.1 are used to estimate a world production frontier for a sample of 75 countries over a long period (1950-2000), taking advantage of the model offered by Battese and Coelli (1992). We also apply the decomposition of TFP suggested by Bauer (1990) and Kumbhakar (2000) to a smaller sample of 36 countries over the period 1970-2000 in order to evaluate the effects of changes in efficiency (technical and allocative), scale effects and technical change. This allows us to analyze the role of productivity and its components in the economic growth of developed and developing nations, in addition to the importance of factor accumulation. Although not much explored in the study of economic growth, frontier techniques seem to be of particular interest for that purpose, since the separation of efficiency effects and technical change has a direct interpretation in terms of the catch-up debate. The estimated technical efficiency scores reveal the efficiency of nations in the production of non-tradable goods, since the GDP series used is PPP-adjusted. We also provide a second set of efficiency scores, corrected in order to reveal efficiency in the production of tradable goods, and rank them. When compared to the rankings of productivity indexes offered by the non-frontier studies of Hall and Jones (1996) and Islam (1995), our ranking shows a somewhat more intuitive order of countries. Rankings of the technical change and scale effects components of TFP change are also very intuitive. We also show that productivity is responsible for virtually all the differences in performance between developed and developing countries in terms of rates of growth of income per worker.
More important, we find that changes in allocative efficiency play a crucial role in explaining differences in the productivity of developed and developing nations, even larger than the role played by the technology gap.
Abstract:
This article presents a group of exercises in level and growth decomposition of output per worker using cross-country data from 1960 to 2000. It is shown that at least until 1975 factors of production (capital and education) were the main source of output dispersion across economies and that productivity variance was considerably smaller than in later years. Only after this date did the prominence of productivity start to show up in the data, as the majority of the literature has found. The growth decomposition exercises show that the reversal in the relative importance of productivity vis-à-vis factors is explained by the very good (bad) productivity performance of fast- (slow-) growing economies. Although growth in the period is, on average, mostly due to factor accumulation, its variance is explained by productivity.
Abstract:
We consider a class of sampling-based decomposition methods to solve risk-averse multistage stochastic convex programs. We prove a formula for the computation of the cuts necessary to build the outer linearizations of the recourse functions. This formula can be used to obtain an efficient implementation of Stochastic Dual Dynamic Programming applied to convex nonlinear problems. We prove the almost sure convergence of these decomposition methods when the relatively complete recourse assumption holds. We also prove the almost sure convergence of these algorithms when applied to risk-averse multistage stochastic linear programs that do not satisfy the relatively complete recourse assumption. The analysis is first done assuming the underlying stochastic process is interstage independent and discrete, with a finite set of possible realizations at each stage. We then indicate two ways of extending the methods and convergence analysis to the case when the process is interstage dependent.
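The cut construction the abstract generalizes is the classical Benders/SDDP one: solve the next-stage problem at a trial point, read off the duals of the coupling constraints, and use them to build an outer linearization of the recourse function. A minimal two-stage linear sketch with scipy (the toy shortfall-cost recourse problem below is made up; real SDDP adds sampling, many stages, and risk measures):

```python
from scipy.optimize import linprog

def recourse(x, d=5.0):
    """Second stage: Q(x) = min y  s.t.  y >= d - x,  y >= 0
    (shortfall cost after first-stage decision x)."""
    res = linprog(c=[1.0], A_ub=[[-1.0]], b_ub=[x - d],
                  bounds=[(0.0, None)], method="highs")
    dual = res.ineqlin.marginals[0]  # dQ/db_ub for the coupling constraint
    # b_ub = x - d, so the subgradient of Q with respect to x equals the dual
    return res.fun, dual

x0 = 2.0
q0, g = recourse(x0)  # here Q(2.0) = 3.0 with subgradient g = -1.0

# Outer-linearization (Benders) cut: Q(x) >= q0 + g * (x - x0)
cut = lambda x: q0 + g * (x - x0)

# A valid cut never overestimates the true recourse cost:
for x in (0.0, 2.0, 4.0, 6.0):
    assert cut(x) <= recourse(x)[0] + 1e-9
```

In SDDP these cuts accumulate in the first-stage (and intermediate-stage) problems, tightening the polyhedral lower approximation of each recourse function as sampling proceeds.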