9 results for Errors-in-variables model
in Repositório digital da Fundação Getúlio Vargas - FGV
Abstract:
In this work we focus on tests for the parameter of an endogenous variable in a weakly identified instrumental variable regression model. We propose a new unbiasedness restriction for the weighted average power (WAP) tests introduced by Moreira and Moreira (2013). This new boundary condition is motivated by score efficiency under strong identification. It reduces the computational cost of WAP tests by replacing the strongly unbiased condition. The latter restriction requires the test, under the null hypothesis, to be uncorrelated with a given statistic whose dimension equals the number of instruments. The new boundary condition only requires the test to be uncorrelated with a linear combination of that statistic. WAP tests under both restrictions perform similarly numerically. We apply the different tests discussed to an empirical example. Using data from Yogo (2004), we assess the effect of weak instruments on the estimation of the elasticity of intertemporal substitution in a CCAPM model.
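The WAP tests of Moreira and Moreira (2013) are not reproduced here, but the weak-instrument testing problem they address can be illustrated with the classical Anderson-Rubin statistic, which, like the tests the abstract discusses, remains valid however weak the instruments are. The sketch below (all parameter values are illustrative assumptions) simulates a weakly identified IV regression and evaluates the AR statistic at a hypothesized value of the structural parameter.

```python
import numpy as np

def anderson_rubin(y, x, Z, beta0):
    """Anderson-Rubin F-statistic for H0: beta = beta0 in y = x*beta + u,
    with instrument matrix Z. Its null distribution does not depend on
    instrument strength (F(k, n-k) under normal errors)."""
    n, k = Z.shape
    e = y - x * beta0                        # structural residual under the null
    PZ = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection onto the instruments
    num = (e @ PZ @ e) / k
    den = (e @ (e - PZ @ e)) / (n - k)
    return num / den

rng = np.random.default_rng(0)
n, k = 500, 4
Z = rng.standard_normal((n, k))
pi = np.full(k, 0.05)                        # deliberately weak first stage
v = rng.standard_normal(n)
x = Z @ pi + v                               # endogenous regressor
u = 0.8 * v + 0.6 * rng.standard_normal(n)   # error correlated with x
y = 1.0 * x + u                              # true beta = 1.0

ar_true = anderson_rubin(y, x, Z, beta0=1.0)
ar_false = anderson_rubin(y, x, Z, beta0=3.0)
print(ar_true, ar_false)
```

At the true parameter the statistic behaves like an F(4, 496) draw regardless of the first-stage strength; power against false values, however, does depend on how informative the instruments are.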
Abstract:
This article develops a life-cycle general equilibrium model with heterogeneous agents who choose nondurable consumption, investment in owner-occupied housing and labour supply. Agents retire at a specific age and receive Social Security benefits that depend on average past earnings. The model is calibrated and numerically solved; it matches stylized U.S. aggregate statistics and generates average life-cycle profiles of its decision variables consistent with the data and the literature. We also conduct an exercise that completely eliminates the Social Security system and compare its results with the benchmark economy. The results highlight the importance of endogenous labour supply and benefits for agents' consumption-smoothing behaviour.
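The full general equilibrium model is far richer than anything shown here, but the consumption-smoothing mechanism at its core can be sketched with a minimal finite-horizon consumption-savings problem solved by backward induction: agents work until a retirement age, then live on a flat benefit (a stand-in for Social Security), and save while working to smooth consumption into retirement. All numbers below are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

# Minimal life-cycle sketch: T periods, work until retirement age R,
# flat retirement benefit, log utility, discrete asset grid.
T, R = 10, 7
beta_disc, r = 0.96, 0.03
wage, benefit = 1.0, 0.4
grid = np.linspace(0.0, 8.0, 201)            # asset choices

def u(c):
    out = np.full_like(c, -1e10)             # heavy penalty: infeasible consumption
    pos = c > 1e-9
    out[pos] = np.log(c[pos])
    return out

V = np.zeros(len(grid))                      # continuation value after last period
policy = np.zeros((T, len(grid)))            # chosen next-period assets
for t in reversed(range(T)):
    income = wage if t < R else benefit
    Vnew = np.empty(len(grid))
    for i, a in enumerate(grid):
        c = a * (1 + r) + income - grid      # consumption for each asset choice
        val = u(c) + beta_disc * V
        j = int(np.argmax(val))
        Vnew[i] = val[j]
        policy[t, i] = grid[j]
    V = Vnew

# Simulate from zero assets: save while working, dissave in retirement.
a, path = 0.0, []
for t in range(T):
    i = int(np.argmin(np.abs(grid - a)))
    a_next = policy[t, i]
    income = wage if t < R else benefit
    path.append(a * (1 + r) + income - a_next)   # consumption this period
    a = a_next
print([round(c, 2) for c in path])
```

Retirement-period consumption ends up well above the benefit alone, which is the smoothing behaviour the abstract emphasizes; endogenizing labour supply, as the paper does, adds a further margin of adjustment.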
Abstract:
We report results on the optimal "choice of technique" in a model originally formulated by Robinson, Solow and Srinivasan (henceforth, the RSS model) and further discussed by Okishio and Stiglitz. By viewing this vintage-capital model without discounting as a specific instance of the general theory of intertemporal resource allocation associated with Brock, Gale and McKenzie, we resolve long-standing conjectures in the form of theorems on the existence and price support of optimal paths, and of conditions sufficient for the optimality of a policy first identified by Stiglitz. We dispose of the necessity of these conditions in surprisingly simple examples of economies in which (i) an optimal path is periodic, (ii) a path following Stiglitz' policy is bad, and (iii) there is optimal investment in different vintages at different times.
Abstract:
On using McKenzie’s taxonomy of optimal accumulation in the long run, we report a “uniform turnpike” theorem of the third kind in a model original to Robinson, Solow and Srinivasan (RSS), and further studied by Stiglitz. Our results are presented in the undiscounted, discrete-time setting emphasized in the recent work of Khan-Mitra, and they rely on the importance of strictly concave felicity functions, or alternatively, on the value of a “marginal rate of transformation”, ξσ, from one period to the next not being unity. Our results, despite their specificity, contribute to the methodology of intertemporal optimization theory, as developed in economics by Ramsey, von Neumann and their followers.
Abstract:
This paper studies electricity load demand behavior during the 2001 rationing period, which was implemented because of the Brazilian energy crisis. The hourly data refer to a utility in the southeast of the country. We use the model proposed by Soares and Souza (2003), employing generalized long memory to model the seasonal behavior of the load. The rationing period is shown to have imposed a structural break in the series, decreasing the load by about 20%. Even so, forecast accuracy deteriorates only marginally, and the forecasts rapidly readapt to the new situation. The forecast errors from this model also allow us to gauge the public response to information released regarding the crisis.
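The generalized long-memory model of Soares and Souza (2003) is not reproduced here, but the qualitative behavior the abstract describes — a roughly 20% level break followed by rapid readaptation of the forecasts — can be sketched with a much simpler adaptive forecaster on synthetic data (all series and parameters below are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(1)
n, break_t = 400, 200
level = np.where(np.arange(n) < break_t, 100.0, 80.0)  # 20% drop at the break
load = level + rng.normal(0.0, 2.0, n)                 # synthetic "load" series

# One-step-ahead exponential smoothing; the smoothing constant governs
# how quickly forecasts readapt after the structural break.
alpha = 0.2
f = np.empty(n)
f[0] = load[0]
for t in range(1, n):
    f[t] = alpha * load[t - 1] + (1 - alpha) * f[t - 1]

err = load - f
mae_before = np.mean(np.abs(err[50:break_t]))
mae_after = np.mean(np.abs(err[break_t + 30:]))        # after readaptation
print(round(mae_before, 2), round(mae_after, 2))
```

The forecast error spikes at the break (the forecaster is still anchored at the old level) and then shrinks back to its pre-break magnitude within a few dozen observations, mirroring the "rapidly readapt" finding.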
Abstract:
This article studies the welfare and long-run allocation impacts of privatization. There are two types of capital in this model economy, one private and the other initially public (“infrastructure”). A positive externality from infrastructure capital is assumed, so that the government could improve upon decentralized allocations by internalizing the externality, but public investment is financed through distortionary taxation. It is shown that privatization is welfare-improving for a large set of economies and that, after privatization, under-investment is optimal. When operational inefficiency in the public sector or subsidies to infrastructure accumulation are introduced, gains from privatization are higher and positive for most reasonable combinations of parameters.
Abstract:
There are four different hypotheses analyzed in the literature that explain deunionization, namely: the decrease in workers' demand for union representation; the impact of globalization on unionization rates; technical change; and changes in the legal and political systems against unions. This paper aims to test all of them. We estimate a logistic regression using a panel data procedure with 35 industries from 1973 to 1999 and conclude that the four hypotheses cannot be rejected by the data. We also use a variance analysis decomposition to study the impact of these variables on the drop in unionization rates. In the model with no demographic variables, the results show that these economic (tested) variables can account for 10% to 12% of the drop in unionization. However, when we include demographic variables, the tested variables can account for 10% to 35% of the total variation in unionization rates. In this case the four hypotheses tested can explain up to 50% of the total drop in unionization rates explained by the model.
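The paper's actual specification and data are not reproduced here, but the estimation strategy — a logistic regression on an industry-by-year panel — can be sketched on synthetic data. In the sketch below, the regressor names, the 35 × 27 panel shape, and all coefficient values are illustrative assumptions; the logit maximum-likelihood estimator is fit by Newton-Raphson.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic panel: 35 industries x 27 years (1973-1999). A hypothetical
# "openness" proxy and a time trend both lower the odds of unionization.
n_ind, n_yr = 35, 27
openness = rng.uniform(0, 1, (n_ind, n_yr))
trend = np.tile(np.arange(n_yr) / n_yr, (n_ind, 1))
X = np.column_stack([np.ones(n_ind * n_yr), openness.ravel(), trend.ravel()])
true_b = np.array([0.5, -1.5, -0.8])         # illustrative true coefficients
p = 1 / (1 + np.exp(-X @ true_b))
y = (rng.uniform(size=len(p)) < p).astype(float)  # union-membership indicator

# Newton-Raphson for the logit MLE: b <- b + (X'WX)^{-1} X'(y - mu).
b = np.zeros(3)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ b))
    W = mu * (1 - mu)                        # Bernoulli variance weights
    grad = X.T @ (y - mu)
    H = X.T @ (X * W[:, None])               # Fisher information
    b = b + np.linalg.solve(H, grad)
print(np.round(b, 2))
```

The estimated slopes recover the assumed negative signs, the analogue of finding that the deunionization hypotheses "cannot be rejected" in the real data.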
Abstract:
Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups. We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity, and propose a modification of the test statistic that provides a better heteroskedasticity correction in our simulations.
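The authors' heteroskedasticity-corrected statistic is not reproduced here, but the permutation logic behind placebo tests in the style of Abadie et al. (2010) — reassign "treatment" to each control group in turn and ask how extreme the actually-treated group's DID estimate is within that placebo distribution — can be sketched on synthetic data. Everything below (one treated group, 49 controls, two periods, all parameter values) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic group x time panel: group 0 is treated in the post period.
G, effect = 50, 2.0
pre = rng.normal(10.0, 1.0, G)
post = pre + rng.normal(0.5, 1.0, G)   # common trend plus group-level noise
post[0] += effect                      # true treatment effect on group 0

def did(g):
    """DID estimate pretending group g is treated:
    its pre/post change minus the mean change of all other groups."""
    delta = post - pre
    others = np.delete(delta, g)
    return delta[g] - others.mean()

observed = did(0)
placebos = np.array([did(g) for g in range(1, G)])
# Permutation p-value: share of placebo estimates at least as extreme,
# counting the observed one itself.
pval = (np.sum(np.abs(placebos) >= abs(observed)) + 1) / G
print(round(observed, 2), round(pval, 3))
```

When the group-level error variances differ — for example because groups have very different numbers of underlying observations — the exchangeability this comparison relies on breaks down, which is exactly the failure mode the paper's modified test statistic is designed to address.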