13 results for Asymptotic Variance, Bayesian Models, Burn-in, Ergodic Average, Ising Model

in the Repositório digital da Fundação Getúlio Vargas - FGV


Relevance: 100.00%

Abstract:

Using national accounts data for the revenue-GDP and expenditure-GDP ratios from 1947 to 1992, we examine two central issues in public finance. First, was the path of public debt sustainable during this period? Second, if debt is sustainable, how has the government historically balanced the budget after shocks to either revenues or expenditures? The results show that (i) the public deficit is stationary (bounded asymptotic variance), with the budget in Brazil being balanced almost entirely through changes in taxes, regardless of the cause of the initial imbalance; expenditures are weakly exogenous, but tax revenues are not; (ii) a rational Brazilian consumer can have behavior consistent with Ricardian Equivalence; (iii) seigniorage revenues are critical to restore intertemporal budget equilibrium, since, when we exclude them from total revenues, debt is not sustainable in econometric tests.

Relevance: 100.00%

Abstract:

Using national accounts data for the revenue-GDP and expenditure-GDP ratios from 1947 to 1992, we examine three central issues in public finance. First, was the path of public debt sustainable during this period? Second, if debt is sustainable, how has the government historically balanced the budget after shocks to either revenues or expenditures? Third, are expenditures exogenous? The results show that (i) the public deficit is stationary (bounded asymptotic variance), with the budget in Brazil being balanced almost entirely through changes in taxes, regardless of the cause of the initial imbalance. Expenditures are weakly exogenous, but tax revenues are not; (ii) the behavior of a rational Brazilian consumer may be consistent with Ricardian Equivalence; (iii) seigniorage revenues are critical to restore intertemporal budget equilibrium, since, when we exclude them from total revenues, debt is not sustainable in econometric tests.
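
As a rough illustration of the stationarity claim in (i), the sketch below runs an augmented Dickey-Fuller test on a deficit-to-GDP series. The data here are simulated placeholders, not the paper's national accounts series, and the variable names are ours.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# Placeholder deficit/GDP series for 1947-1992 (46 observations),
# generated to be stationary by construction.
rng = np.random.default_rng(0)
deficit_gdp = 0.02 + 0.01 * rng.standard_normal(46)

# Augmented Dickey-Fuller test; H0: the series has a unit root.
stat, pvalue, *_ = adfuller(deficit_gdp, regression="c")
print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")
# Rejecting H0 is what "stationary (bounded asymptotic variance)"
# means operationally for the deficit series.
```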

Relevance: 100.00%

Abstract:

This dissertation studies inference using instrument-based generalized method of moments (GMM) estimation. The study is motivated by the fact that, under weak identification of the parameters, traditional inference can lead to misleading results. We therefore review the most commonly used tests to overcome this problem and present the frameworks proposed by Moreira (2002), Moreira & Moreira (2013), and Kleibergen (2005). The work then reconciles the statistics they use to conduct inference, rewrites the score test proposed in Kleibergen (2005) using the statistics of Moreira & Moreira (2013), and, using the asymptotic theory in Newey & McFadden (1984), obtains the statistic of the optimal score test. In addition, we show the equivalence between the GMM approach and the approach based on a system of equations and likelihood for addressing the weak identification problem.
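
The dissertation's optimal score statistic is not reproduced here; as a hedged illustration of the kind of weak-identification-robust inference this literature reviews, the sketch below implements an Anderson-Rubin test for a single endogenous regressor. All data, parameter values, and function names are hypothetical.

```python
import numpy as np
from scipy import stats

def anderson_rubin_test(y, x, Z, beta0):
    """Anderson-Rubin test of H0: beta = beta0 in y = x*beta + u, instruments Z.

    Regress (y - x*beta0) on a constant and Z, then F-test the hypothesis
    that all instrument coefficients are zero. The test remains valid
    under weak identification.
    """
    n, k = Z.shape
    e0 = y - x * beta0
    X = np.column_stack([np.ones(n), Z])
    coef, *_ = np.linalg.lstsq(X, e0, rcond=None)
    rss_u = np.sum((e0 - X @ coef) ** 2)      # unrestricted RSS
    rss_r = np.sum((e0 - e0.mean()) ** 2)     # restricted: constant only
    ar = ((rss_r - rss_u) / k) / (rss_u / (n - k - 1))
    return ar, stats.f.sf(ar, k, n - k - 1)

# Hypothetical weakly identified design: instruments barely explain x.
rng = np.random.default_rng(4)
n = 500
Z = rng.standard_normal((n, 3))
u, v = rng.standard_normal(n), rng.standard_normal(n)
x = Z @ np.array([0.05, 0.05, 0.05]) + v   # weak first stage
y = 1.0 * x + u + 0.5 * v                  # true beta = 1, endogenous x
print(anderson_rubin_test(y, x, Z, beta0=1.0))
```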

Relevance: 100.00%

Abstract:

The synthetic control (SC) method has been recently proposed as an alternative to estimate treatment effects in comparative case studies. The SC relies on the assumption that there is a weighted average of the control units that reconstructs the potential outcome of the treated unit in the absence of treatment. If these weights were known, then one could estimate the counterfactual for the treated unit using this weighted average. With these weights, the SC would provide an unbiased estimator for the treatment effect even if selection into treatment is correlated with the unobserved heterogeneity. In this paper, we revisit the SC method in a linear factor model where the SC weights are considered nuisance parameters that are estimated to construct the SC estimator. We show that, when the number of control units is fixed, the estimated SC weights will generally not converge to the weights that reconstruct the factor loadings of the treated unit, even when the number of pre-intervention periods goes to infinity. As a consequence, the SC estimator will be asymptotically biased if treatment assignment is correlated with the unobserved heterogeneity. The asymptotic bias only vanishes when the variance of the idiosyncratic error goes to zero. We suggest a slight modification in the SC method that guarantees that the SC estimator is asymptotically unbiased and has a lower asymptotic variance than the difference-in-differences (DID) estimator when the DID identification assumption is satisfied. If the DID assumption is not satisfied, then both estimators would be asymptotically biased, and it would not be possible to rank them in terms of their asymptotic bias.
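
A minimal sketch of the SC weight-estimation step described above, treating the weights as a constrained least-squares problem (non-negative, summing to one) on pre-treatment outcomes. The arrays and function name are hypothetical, and this illustrates the standard SC construction rather than the modification proposed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def sc_weights(y1_pre, y0_pre):
    """Estimate synthetic control weights from pre-treatment outcomes.

    y1_pre : (T0,)   outcomes of the treated unit before treatment
    y0_pre : (T0, J) outcomes of the J control units before treatment
    Returns w >= 0 with sum(w) = 1 minimizing ||y1_pre - y0_pre @ w||^2.
    """
    J = y0_pre.shape[1]
    obj = lambda w: np.sum((y1_pre - y0_pre @ w) ** 2)
    cons = {"type": "eq", "fun": lambda w: np.sum(w) - 1.0}
    bounds = [(0.0, 1.0)] * J
    res = minimize(obj, np.full(J, 1.0 / J), bounds=bounds, constraints=cons)
    return res.x

# Hypothetical example: 20 pre-treatment periods, 5 control units.
rng = np.random.default_rng(1)
y0_pre = rng.standard_normal((20, 5))
y1_pre = y0_pre @ np.array([0.4, 0.3, 0.2, 0.1, 0.0]) + 0.1 * rng.standard_normal(20)
print(np.round(sc_weights(y1_pre, y0_pre), 2))
```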

Relevance: 100.00%

Abstract:

This paper investigates the income inequality generated by a job-search process when different cohorts of homogeneous workers are allowed to have different degrees of impatience. Using the fact that the average wage under the invariant Markovian distribution is a decreasing function of the discount factor (Cysne (2004, 2006)), I show that the Lorenz curve and the between-cohort Gini coefficient of income inequality can be easily derived in this case. An example with arbitrary measures regarding the wage offers and the distribution of time preferences among cohorts provides some insights into how much income inequality can be generated, and into how it varies as a function of the probability of unemployment and of the probability that the worker does not find a job offer each period.
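
As a small worked illustration of the Gini calculation mentioned above (all wage values and cohort labels are hypothetical and unrelated to the paper's distributions):

```python
import numpy as np

def gini(x):
    """Gini coefficient, equivalent to sum_ij |x_i - x_j| / (2 n^2 mean(x))."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# Hypothetical wages for two cohorts with different discount factors:
# the more patient cohort holds out for better offers and earns more.
patient = np.array([12.0, 14.0, 15.0, 16.0, 18.0])
impatient = np.array([8.0, 9.0, 10.0, 11.0, 12.0])
pooled = np.concatenate([patient, impatient])
print(f"Gini, patient cohort only: {gini(patient):.3f}")
print(f"Gini, pooled cohorts:      {gini(pooled):.3f}")
```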

Relevance: 100.00%

Abstract:

The paper analyzes a two-period general equilibrium model with individual risk and moral hazard. Each household faces two individual states of nature in the second period. These states differ solely in the household's vector of initial endowments, which is strictly larger in the first state (good state) than in the second state (bad state). In the first period households choose a non-observable action. Higher levels of action give a higher probability that the good state of nature occurs, but lower levels of utility. Households have access to an insurance market that allows transfer of income across states of nature. I consider two models of financial markets, the price-taking behavior model and the nonlinear pricing model. In the price-taking behavior model, suppliers of insurance have a belief about each household's action and take asset prices as given. A variation of standard arguments shows the existence of a rational expectations equilibrium. For a generic set of economies every equilibrium is constrained sub-optimal: there are commodity prices and a reallocation of financial assets satisfying the first-period budget constraint such that, at each household's optimal choice given those prices and asset reallocation, markets clear and every household's welfare improves. In the nonlinear pricing model, suppliers of insurance behave strategically, offering nonlinear pricing contracts to the households. I provide sufficient conditions for the existence of equilibrium and investigate the optimality properties of the model. If there is a single commodity, then every equilibrium is constrained optimal. If there is more than one commodity, then for a generic set of economies every equilibrium is constrained sub-optimal.

Relevance: 100.00%

Abstract:

In this work we focus on tests for the parameter of an endogenous variable in a weakly identified instrumental variable regression model. We propose a new unbiasedness restriction for the weighted average power (WAP) tests introduced by Moreira and Moreira (2013). This new boundary condition is motivated by score efficiency under strong identification. It allows reducing the computational costs of WAP tests by replacing the strongly unbiased condition. That restriction imposes, under the null hypothesis, that the test be uncorrelated with a given statistic whose dimension equals the number of instruments. The newly proposed boundary condition only imposes that the test be uncorrelated with a linear combination of that statistic. WAP tests under both restrictions perform similarly numerically. We apply the different tests discussed to an empirical example. Using data from Yogo (2004), we assess the effect of weak instruments on the estimation of the elasticity of intertemporal substitution in a CCAPM model.

Relevance: 100.00%

Abstract:

This paper explores the use of an intertemporal job-search model in the investigation of within-cohort and between-cohort income inequality, the latter being generated by the heterogeneity of time preferences among cohorts of homogeneous workers and the former by the cross-sectional turnover in the job market. It also offers an alternative explanation for the empirically documented negative correlation between time preference and labor income. Under some specific distributions regarding wage offers and time preferences, we show how the within-cohort and between-cohort Gini coefficients of income distribution can be calculated, and how they vary as a function of the parameters of the model.

Relevance: 100.00%

Abstract:

This article studies the welfare and long-run allocation impacts of privatization. There are two types of capital in this model economy, one private and the other initially public ("infrastructure"). A positive externality due to infrastructure capital is assumed, so that the government could improve upon decentralized allocations by internalizing the externality, but public investment is financed through distortionary taxation. It is shown that privatization is welfare-improving for a large set of economies and that after privatization under-investment is optimal. When operational inefficiency in the public sector or a subsidy to infrastructure accumulation is introduced, gains from privatization are higher and positive for most reasonable combinations of parameters.

Relevance: 100.00%

Abstract:

We study the effects of population size in the Peck-Shell analysis of bank runs. We find that a contract featuring equal treatment for almost all depositors of the same type approximates the optimum. Because the approximation also satisfies Green-Lin incentive constraints when the planner discloses positions in the queue, welfare in these alternative specifications is sandwiched. Disclosure, however, is not needed, since our approximating contract is not subject to runs.

Relevance: 100.00%

Abstract:

We characterize optimal policy in a two-sector growth model with fixed coefficients and with no discounting. The model is a specialization to a single type of machine of a general vintage capital model originally formulated by Robinson, Solow and Srinivasan, and its simplicity is not mirrored in its rich dynamics, which seem to have been missed in earlier work. Our results are obtained by viewing the model as a specific instance of the general theory of resource allocation as initiated originally by Ramsey and von Neumann and brought to completion by McKenzie. In addition to the more recent literature on chaotic dynamics, we relate our results to the older literature on optimal growth with one state variable: specifically, to the one-sector setting of Ramsey, Cass and Koopmans, as well as to the two-sector setting of Srinivasan and Uzawa. The analysis is purely geometric, and from a methodological point of view, our work can be seen as an argument, at least in part, for the rehabilitation of geometric methods as an engine of analysis.

Relevance: 100.00%

Abstract:

This dissertation proposes a bivariate Markov switching dynamic conditional correlation model for estimating the optimal hedge ratio between spot and futures contracts. It accounts for the cointegration between the series and allows capturing the leverage effect in the return equation. The model is applied using daily data on futures and spot prices of the Bovespa Index and the R$/US$ exchange rate. The results in terms of variance reduction and utility show that the bivariate Markov switching model outperforms the strategies based on ordinary least squares and error correction models.
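
The Markov switching DCC model itself is not sketched here; below is a minimal illustration of the ordinary-least-squares benchmark hedge ratio and the variance-reduction metric used to compare hedging strategies, with simulated placeholder returns rather than Bovespa or exchange-rate data.

```python
import numpy as np

def ols_hedge_ratio(spot_returns, futures_returns):
    """Minimum-variance (OLS) hedge ratio: h* = Cov(r_s, r_f) / Var(r_f)."""
    cov = np.cov(spot_returns, futures_returns)
    return cov[0, 1] / cov[1, 1]

# Hypothetical daily returns for a spot index and its futures contract.
rng = np.random.default_rng(2)
r_f = 0.01 * rng.standard_normal(500)
r_s = 0.9 * r_f + 0.003 * rng.standard_normal(500)

h = ols_hedge_ratio(r_s, r_f)
hedged = r_s - h * r_f                       # return of the hedged position
reduction = 1 - np.var(hedged) / np.var(r_s) # variance-reduction criterion
print(f"hedge ratio = {h:.3f}, variance reduction = {reduction:.1%}")
```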

Relevance: 100.00%

Abstract:

Consumers often pay different prices for the same product bought in the same store at the same time. However, the demand estimation literature has ignored that fact, using instead aggregate measures such as the "list" or average price. In this paper we show that this leads to biased price coefficients. Furthermore, we perform simple comparative statics simulation exercises for the logit and random coefficient models. In the "list" price case we find that the bias is larger when discounts are higher, when the proportion of consumers facing discount prices is higher, and when consumers are so unwilling to buy the product that they do so almost only when facing a discount. In the average price case we find that the bias is larger when discounts are higher, when the proportion of consumers with access to discounts is similar to the proportion without access, and when consumers' willingness to buy depends heavily on idiosyncratic shocks. The bias is also less problematic in the average price case in markets with many bargain deals, so that average prices are close to individual prices. We conclude by proposing ways in which the econometrician can reduce this bias using different information that may be available.
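
A minimal simulation sketch in the spirit of the "list" price comparative statics described above, for a binary logit (all parameters, shares, and prices are hypothetical): consumers buy at either the list or a discounted price, but the estimation uses the list price for everyone, so the estimated price coefficient is biased.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 20000
beta_price, alpha = -2.0, 1.0                     # true utility parameters

list_price = rng.choice([0.8, 1.0, 1.2], size=n)  # posted prices across markets
discounted = rng.random(n) < 0.4                  # 40% face a 30% discount
paid = np.where(discounted, 0.7 * list_price, list_price)

# Purchase decision generated from the price actually paid.
util = alpha + beta_price * paid
buy = (rng.random(n) < 1 / (1 + np.exp(-util))).astype(int)

# The econometrician observes only the list price, not the price paid.
fit = sm.Logit(buy, sm.add_constant(list_price)).fit(disp=0)
print(f"true price coefficient:        {beta_price}")
print(f"estimate using the list price: {fit.params[1]:.2f}")
```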