6 results for Current household survey

in the Repositório digital da Fundação Getúlio Vargas - FGV


Relevance:

80.00%

Publisher:

Abstract:

This paper argues that changes in the returns to occupational tasks have contributed to changes in the wage distribution over the last three decades. Using Current Population Survey (CPS) data, we first show that the 1990s polarization of wages is explained by changes in wage setting between and within occupations, which are well captured by task measures linked to technological change and offshorability. Using a decomposition based on Firpo, Fortin, and Lemieux (2009), we find that technological change and deunionization played a central role in the 1980s and 1990s, while offshorability became an important factor from the 1990s onwards.
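
The decomposition referenced above is built from recentered influence function (RIF) regressions. Below is a minimal sketch of the first step, computing the RIF of log wages at a given quantile; the simulated wage draws stand in for the CPS data and are purely illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

def rif_quantile(y, tau):
    """Recentered influence function of the tau-th quantile of y,
    RIF(y; q_tau) = q_tau + (tau - 1{y <= q_tau}) / f(q_tau),
    as in Firpo, Fortin and Lemieux (2009)."""
    q = np.quantile(y, tau)
    f_q = gaussian_kde(y)(q)[0]   # kernel density of y evaluated at q_tau
    return q + (tau - (y <= q)) / f_q

# Illustrative log wages; in the paper the data come from the CPS.
rng = np.random.default_rng(0)
log_wage = rng.normal(2.5, 0.6, size=5_000)

# The RIF replaces the outcome in a regression on task measures and other
# covariates, and the fitted coefficients are then decomposed into
# composition and wage-structure components.
rif_90 = rif_quantile(log_wage, 0.90)
print(rif_90.mean(), np.quantile(log_wage, 0.90))  # approximately equal
```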

Relevance:

80.00%

Publisher:

Abstract:

This paper analyzes the effects of the minimum wage on both earnings and employment, using a Brazilian rotating panel dataset (Pesquisa Mensal do Emprego - PME) whose design is similar to that of the US Current Population Survey (CPS). First, an intuitive description of the data is provided through graphical analysis. In particular, kernel densities are used to show that an increase in the minimum wage compresses the earnings distribution. This graphical analysis is then formalized with descriptive models, followed by a discussion of identification and endogeneity that leads to a respecification of the model. Second, models for employment are estimated, using a decomposition that makes it possible to separate the effects of an increase in the minimum wage on the number of hours worked from its effects on the number of jobs. The main result is that an increase in the minimum wage compresses the earnings distribution, with a moderately small effect on the level of employment, thereby contributing to alleviating inequality.
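
As a rough illustration of the graphical step described above, the sketch below compares kernel density estimates of log earnings before and after a minimum wage increase. The simulated earnings and the assumed minimum are illustrative stand-ins for the PME data, not the paper's estimates.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Simulated log earnings: in the "after" sample the lower tail is pushed
# up towards the new minimum, compressing the distribution from below.
before = rng.normal(loc=6.0, scale=0.5, size=10_000)
after = np.maximum(rng.normal(loc=6.0, scale=0.5, size=10_000), 5.6)

grid = np.linspace(4.5, 8.0, 400)
plt.plot(grid, gaussian_kde(before)(grid), label="before the increase")
plt.plot(grid, gaussian_kde(after)(grid), label="after the increase")
plt.xlabel("log earnings")
plt.ylabel("density")
plt.legend()
plt.show()
```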

Relevance:

80.00%

Publisher:

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group x time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups. We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity and propose a modification on the test statistic that provided a better heteroskedasticity correction in our simulations.
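
A minimal sketch of the group x time aggregation that generates the heteroskedasticity discussed above: cell means computed from small groups are noisier than those from large groups, so the errors of the aggregate DID regression have group-size-dependent variances. Group sizes, variable names, and data below are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Unequal group sizes: cell means based on few observations are noisier,
# which generates heteroskedasticity in the aggregate model.
sizes = {"g1": 50, "g2": 200, "g3": 2000, "g4": 5000}
rows = []
for g, n in sizes.items():
    for t in (0, 1):
        y = rng.normal(1.0, 1.0, size=n)       # true treatment effect is zero
        rows.append({"group": g, "time": t,
                     "treat_post": int(g == "g1" and t == 1),  # one small treated group
                     "y_bar": y.mean()})
cells = pd.DataFrame(rows)

# Group x time aggregate DID regression on the cell means.
fit = smf.ols("y_bar ~ treat_post + C(group) + C(time)", data=cells).fit()
print(fit.params["treat_post"], fit.bse["treat_post"])
```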

Relevance:

80.00%

Publisher:

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group x time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we know how the heteroskedasticity is generated, which is the case when it is generated by variation in the number of observations per group. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative application of our method that relies on assumptions about stationarity and convergence of the moments of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference method to linear factor models when there are few treated groups. We also propose a permutation test for the synthetic control estimator that provided a better heteroskedasticity correction in our simulations than the test suggested by Abadie et al. (2010).
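
For intuition on the placebo regressions mentioned above, the sketch below reassigns the "treatment" to each control group in turn and compares the actual DID estimate with the resulting placebo distribution. This shows only the generic mechanics of such a test, not the corrected inference method derived in the paper; indeed, the paper's point is that this kind of comparison can mislead when group sizes (and hence error variances) differ. All data and names are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Illustrative group x time cell means, one candidate treated group ("g1").
groups = [f"g{i}" for i in range(1, 11)]
cells = pd.DataFrame(
    [{"group": g, "time": t, "y_bar": rng.normal(1.0, 0.2)}
     for g in groups for t in (0, 1)]
)

def did_estimate(df, treated_group):
    """DID coefficient treating `treated_group` as treated in period 1."""
    df = df.copy()
    df["treat_post"] = ((df["group"] == treated_group) & (df["time"] == 1)).astype(int)
    fit = smf.ols("y_bar ~ treat_post + C(group) + C(time)", data=df).fit()
    return fit.params["treat_post"]

# Placebo distribution: reassign the "treatment" to each control group in turn.
actual = did_estimate(cells, "g1")
placebos = np.array([did_estimate(cells, g) for g in groups if g != "g1"])
p_value = np.mean(np.abs(placebos) >= abs(actual))
print(actual, p_value)
```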

Relevance:

80.00%

Publisher:

Abstract:

The synthetic control (SC) method has recently been proposed as an alternative method to estimate treatment effects in comparative case studies. Abadie et al. [2010] and Abadie et al. [2015] argue that one of the advantages of the SC method is that it imposes a data-driven process to select the comparison units, providing more transparency and less discretionary power to the researcher. However, an important limitation of the SC method is that it does not provide clear guidance on the choice of the predictor variables used to estimate the SC weights. We show that this lack of specific guidance provides significant opportunities for the researcher to search for specifications with statistically significant results, undermining one of the main advantages of the method. Considering six alternative specifications commonly used in SC applications, we calculate in Monte Carlo simulations the probability of finding a statistically significant result at the 5% level in at least one specification. We find that this probability can be as high as 13% (23% for a 10% significance test) when there are 12 pre-intervention periods, and that it decays slowly with the number of pre-intervention periods. With 230 pre-intervention periods, this probability is still around 10% (18% for a 10% significance test). We show that the specification that uses the average pre-treatment outcome values to estimate the weights performed particularly badly in our simulations. However, the specification-searching problem remains relevant even when we do not consider this specification. We also show that this specification-searching problem is relevant in simulations with real datasets looking at placebo interventions in the Current Population Survey (CPS). In order to mitigate this problem, we propose a criterion to select among different SC specifications based on the prediction error of each specification in placebo estimations.
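
To fix ideas on what a "specification" means here: the SC weights solve a constrained least-squares problem in the chosen predictors, so different predictor sets yield different weights and estimates. The sketch below uses the simplest specification (all pre-treatment outcome values as predictors) and reports its pre-treatment prediction error, the kind of quantity on which the proposed selection criterion is based; the data are simulated and purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def sc_weights(X1, X0):
    """Synthetic control weights: minimize ||X1 - X0 @ w|| subject to
    w >= 0 and sum(w) = 1, where X1 is the treated unit's predictor
    vector and X0 stacks the donor units' predictors column-wise."""
    n_donors = X0.shape[1]
    objective = lambda w: np.sum((X1 - X0 @ w) ** 2)
    result = minimize(
        objective,
        x0=np.full(n_donors, 1.0 / n_donors),
        bounds=[(0.0, 1.0)] * n_donors,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
        method="SLSQP",
    )
    return result.x

# Illustrative: 12 pre-treatment periods, 20 donor units.
rng = np.random.default_rng(4)
pre_treated = rng.normal(size=12)          # specification: all pre-treatment outcomes
pre_donors = rng.normal(size=(12, 20))
w = sc_weights(pre_treated, pre_donors)

# Pre-treatment prediction error of this specification; the proposed
# criterion compares such errors across specifications in placebo runs.
rmspe = np.sqrt(np.mean((pre_treated - pre_donors @ w) ** 2))
print(w.round(3), rmspe)
```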

Relevance:

30.00%

Publisher:

Abstract:

This study proposes that the extreme macroeconomic instability caused by hyperinflation in Brazil during the 1980s and 1990s had a long-run effect on individuals' saving behavior. Using data from the 2009 and 2011 waves of the Pesquisa Nacional por Amostra de Domicílios (PNAD) and a supplementary questionnaire, we find three significant pieces of evidence: (1) individuals who remember the hyperinflation period in Brazil are less likely to participate in the stock market; (2) there is strong evidence that people who were at a formative age during the hyperinflation are less willing to hold any type of financial instrument than people who experienced this macroeconomic shock at other stages of their lives; and (3) single women are much more likely to hold financial savings than single men.