899 results for "Current household survey"
Abstract:
© 2016 by the Midwest Political Science Association. Recent research has cast doubt on the potential for various electoral reforms to increase voter turnout. In this article, we examine the effectiveness of preregistration laws, which allow young citizens to register before being eligible to vote. We use two empirical approaches to evaluate the impact of preregistration on youth turnout. First, we implement difference-in-difference and lag models to bracket the causal effect of preregistration implementation using the 2000-2012 Current Population Survey. Second, focusing on the state of Florida, we leverage a discontinuity based on date of birth to estimate the effect of increased preregistration exposure on the turnout of young registrants. In both approaches, we find preregistration increases voter turnout, with equal effectiveness for various subgroups in the electorate. More broadly, observed patterns suggest that campaign context and supporting institutions may help to determine when and if electoral reforms are effective.
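The first approach pairs state and year fixed effects with clustered inference. A minimal sketch of such a difference-in-differences regression on simulated state-year turnout data (the simulated panel, variable names, and effect size are illustrative assumptions, not the authors' actual specification or data):

```python
# Sketch: two-way fixed-effects difference-in-differences on simulated
# state-year youth turnout data. Illustrative only, not the authors' model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
states = [f"s{i}" for i in range(20)]
years = list(range(2000, 2014, 2))           # even-year elections, 2000-2012
treated = set(states[:8])                    # states adopting preregistration
adoption_year = {s: rng.choice([2004, 2008]) for s in treated}

rows = []
for s in states:
    state_effect = rng.normal(0, 3)
    for y in years:
        post = s in treated and y >= adoption_year[s]
        turnout = (40 + state_effect + 0.5 * (y - 2000)
                   + 2.0 * post + rng.normal(0, 1.5))  # true effect: 2 points
        rows.append({"state": s, "year": y, "prereg": int(post),
                     "turnout": turnout})
df = pd.DataFrame(rows)

# State and year fixed effects absorb level differences and common shocks;
# standard errors are clustered by state.
m = smf.ols("turnout ~ prereg + C(state) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["state"]})
print(m.params["prereg"], m.bse["prereg"])
```

The lag-model and birthdate-discontinuity pieces of the design would be layered on top of this basic comparison.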
Abstract:
In the United States, poverty has historically been higher and disproportionately concentrated in the American South. Despite this fact, much of the conventional poverty literature in the United States has focused on urban poverty, particularly in cities of the Northeast and Midwest. Relatively less American poverty research has focused on the enduring economic distress of the South, which Wimberley (2008:899) calls "a neglected regional crisis of historic and contemporary urgency." Accordingly, this dissertation contributes to the inequality literature by focusing much-needed attention on poverty in the South.
Each empirical chapter focuses on a different aspect of poverty in the South. Chapter 2 examines why poverty is higher in the South relative to the Non-South. Chapter 3 focuses on poverty predictors within the South and whether there are differences in the sub-regions of the Deep South and Peripheral South. These two chapters compare the roles of family demography, economic structure, racial/ethnic composition and heterogeneity, and power resources in shaping poverty. Chapter 4 examines whether poverty in the South has been shaped by historical racial regimes.
The Luxembourg Income Study (LIS) United States datasets (2000, 2004, 2007, 2010, and 2013), derived from the U.S. Census Current Population Survey (CPS) Annual Social and Economic Supplement, provide all the individual-level data for this study. The LIS sample of 745,135 individuals is nested in rich economic, political, and racial state-level data compiled from multiple sources (e.g., the U.S. Census Bureau, the U.S. Department of Agriculture, and the University of Kentucky Center for Poverty Research). Analyses involve a combination of techniques, including linear probability regression models to predict poverty and binary decomposition of poverty differences.
Chapter 2 results suggest that power resources, followed by economic structure, are most important in explaining the higher poverty in the South. This underscores the salience of political and economic contexts in shaping poverty across place. Chapter 3 results indicate that individual-level economic factors are the largest predictors of poverty within the South, and even more so in the Deep South. Moreover, divergent results between the South, Deep South, and Peripheral South illustrate how the impact of poverty predictors can vary across contexts. Chapter 4 results show significant bivariate associations between historical racial regimes and poverty among Southern states, although regression models fail to yield significant effects. However, historical racial regimes do have a small but significant effect in explaining the Black-White poverty gap. Results also suggest that employment and education are key to understanding poverty among Blacks and the Black-White poverty gap. Collectively, these chapters underscore why place is so important for understanding poverty and inequality. They also illustrate the salience of micro and macro characteristics of place in helping to create, maintain, and reproduce systems of inequality across place.
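The binary decomposition used here splits a poverty gap between two populations into a portion explained by differences in characteristics and a residual portion attributed to differences in coefficients. A minimal Oaxaca-Blinder-style sketch with linear probability models on simulated data (group labels, covariates, and parameter values are illustrative assumptions, not the dissertation's models):

```python
# Sketch: linear probability models plus a two-fold Oaxaca-Blinder-style
# decomposition of a poverty gap between two regions. Simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000

def simulate(employment_rate, educ_mean):
    emp = rng.binomial(1, employment_rate, n)
    educ = rng.normal(educ_mean, 2, n)
    p = np.clip(0.45 - 0.25 * emp - 0.02 * educ, 0.01, 0.99)
    poor = rng.binomial(1, p)
    X = sm.add_constant(np.column_stack([emp, educ]))
    return X, poor

X_s, y_s = simulate(0.55, 12.0)   # "South": lower employment and education
X_n, y_n = simulate(0.70, 13.5)   # "Non-South"

b_s = sm.OLS(y_s, X_s).fit().params
b_n = sm.OLS(y_n, X_n).fit().params

gap = y_s.mean() - y_n.mean()
explained = (X_s.mean(0) - X_n.mean(0)) @ b_n    # characteristics part
unexplained = X_s.mean(0) @ (b_s - b_n)          # coefficients part
print(f"gap={gap:.3f} explained={explained:.3f} unexplained={unexplained:.3f}")
```

Because each OLS fit passes through the group means, the explained and unexplained parts sum exactly to the raw gap.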
Abstract:
Atypical employment, such as temporary, on-call and contract work, has been found to attract the jobless disproportionately. But there is no consensus in the literature as to the labour market consequences of such job choices by unemployed individuals. Using data from the Current Population Survey, we investigate the implications of the initial job-finding strategies pursued by the jobless for their short- and medium-term employment stability. At first sight, it appears that taking an offer of regular employment provides the greatest degree of employment continuity for the jobless. However, closer inspection indicates that the jobless who take up atypical employment are not only more likely to be employed 1 month and 1 year later than those who continue to search, but also enjoy employment continuity no less favourable than that offered by regular, open-ended employment.
Abstract:
The goal of this thesis is to extend bootstrap theory to panel data models. Panel data are obtained by observing several statistical units over several time periods. Their twofold individual and temporal dimension makes it possible to control for unobservable heterogeneity across individuals and across time periods, and thus to conduct richer studies than with time series or cross-sectional data. The advantage of the bootstrap is that it can provide more precise inference than classical asymptotic theory, or make inference possible when nuisance parameters would otherwise preclude it. The method consists of drawing random samples that resemble the analysis sample as closely as possible. The statistical object of interest is estimated on each of these random samples, and the set of estimated values is used for inference. The literature contains some applications of the bootstrap to panel data without rigorous theoretical justification, or under strong assumptions. This thesis proposes a bootstrap method better suited to panel data. Its three chapters analyze the method's validity and application. The first chapter posits a simple model with a single parameter and addresses the theoretical properties of the estimator of the mean. We show that the double resampling we propose, which accounts for both the individual and the temporal dimension, is valid in these models. Resampling only in the individual dimension is not valid in the presence of temporal heterogeneity, and resampling only in the temporal dimension is not valid in the presence of individual heterogeneity. The second chapter extends the first to the linear panel regression model. Three types of regressors are considered: individual characteristics, temporal characteristics, and regressors that vary over both time and individuals. Using a two-way error components model, the ordinary least squares estimator, and the residual bootstrap, we show that resampling in the individual dimension alone is valid for inference on the coefficients associated with regressors that vary only across individuals. Resampling in the temporal dimension alone is valid only for the subvector of parameters associated with regressors that vary only over time. Double resampling, for its part, is valid for inference on the entire parameter vector. The third chapter revisits the difference-in-differences exercise of Bertrand, Duflo, and Mullainathan (2004). This estimator is widely used in the literature to evaluate the impact of public policies. The empirical exercise uses panel data from the Current Population Survey on women's wages in the 50 states of the United States from 1979 to 1999. Placebo state-level policy intervention variables are generated, and the tests are expected to conclude that these placebo policies have no effect on women's wages. Bertrand, Duflo, and Mullainathan (2004) show that failing to account for heterogeneity and temporal dependence causes severe size distortions in tests when evaluating the impact of public policies with panel data.
One of the recommended solutions is to use the bootstrap method. The double resampling method developed in this thesis corrects the test size problem and thus allows the impact of public policies to be evaluated correctly.
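A minimal sketch of the double resampling idea described above: draw individuals and time periods independently with replacement, re-estimate the statistic on each pseudo-panel, and use the bootstrap distribution for inference (the simulated two-way error components panel and the basic-bootstrap interval are illustrative, not the thesis's formal framework):

```python
# Sketch: double bootstrap resampling for a balanced panel, resampling
# individuals and time periods independently with replacement. Illustrative.
import numpy as np

rng = np.random.default_rng(7)
N, T = 50, 10
# Simulated panel with both kinds of heterogeneity: y_it = a_i + g_t + e_it
y = (rng.normal(0, 1, (N, 1))          # individual effects
     + rng.normal(0, 1, (1, T))        # time effects
     + rng.normal(0, 1, (N, T)))       # idiosyncratic noise

def double_bootstrap(y, B=999):
    N, T = y.shape
    stats = np.empty(B)
    for b in range(B):
        ii = rng.integers(0, N, N)     # resample individuals
        tt = rng.integers(0, T, T)     # resample time periods
        stats[b] = y[np.ix_(ii, tt)].mean()
    return stats

draws = double_bootstrap(y)
lo, hi = np.percentile(draws - y.mean(), [2.5, 97.5])
# Basic bootstrap 95% interval for the overall mean:
print(f"[{y.mean() - hi:.3f}, {y.mean() - lo:.3f}]")
```

Resampling only the rows or only the columns of `y` would miss one source of dependence, which is the failure mode the first chapter documents.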
Abstract:
This paper argues that changes in the returns to occupational tasks have contributed to changes in the wage distribution over the last three decades. Using Current Population Survey (CPS) data, we first show that the 1990s polarization of wages is explained by changes in wage setting between and within occupations, which are well captured by task measures linked to technological change and offshorability. Using a decomposition based on Firpo, Fortin, and Lemieux (2009), we find that technological change and deunionization played a central role in the 1980s and 1990s, while offshorability became an important factor from the 1990s onwards.
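The decomposition rests on recentered influence function (RIF) regressions, in which the RIF of the τ-th quantile, RIF(y; q_τ) = q_τ + (τ - 1{y ≤ q_τ})/f(q_τ), replaces the outcome in an OLS regression. A minimal sketch on simulated wage data (the covariates and coefficients are illustrative assumptions, not the paper's specification):

```python
# Sketch: RIF regression for the 90th percentile of log wages on simulated
# data, following the recentered-influence-function idea of Firpo, Fortin,
# and Lemieux (2009). Illustrative only.
import numpy as np
import statsmodels.api as sm
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
n = 10000
union = rng.binomial(1, 0.25, n)
routine_task = rng.normal(0, 1, n)      # stand-in for an occupation task measure
logw = 2.5 + 0.15 * union - 0.10 * routine_task + rng.normal(0, 0.4, n)

tau = 0.9
q = np.quantile(logw, tau)
f_q = gaussian_kde(logw)(q)[0]          # density estimate at the quantile
rif = q + (tau - (logw <= q)) / f_q     # RIF(y; q_tau)

X = sm.add_constant(np.column_stack([union, routine_task]))
m = sm.OLS(rif, X).fit(cov_type="HC1")
print(m.params)  # unconditional quantile partial effects at the 90th percentile
```

Running this at several quantiles traces out how each covariate shifts different parts of the wage distribution, which is what the polarization analysis needs.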
Abstract:
This paper analyzes the effects of the minimum wage on both earnings and employment, using a Brazilian rotating panel dataset (Pesquisa Mensal do Emprego, PME) whose design is similar to that of the US Current Population Survey (CPS). First, the data are described intuitively through graphical analysis. In particular, kernel densities are used to show that an increase in the minimum wage compresses the earnings distribution. This graphical analysis is then formalized by descriptive models, followed by a discussion of identification and endogeneity that leads to the respecification of the model. Second, models for employment are estimated, using a decomposition that makes it possible to separate the effects of an increase in the minimum wage on hours worked from its effects on the number of jobs. The main result is that an increase in the minimum wage compresses the earnings distribution, with a moderately small effect on the level of employment, contributing to alleviating inequality.
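A minimal sketch of the kernel-density comparison the abstract describes, overlaying log-earnings densities before and after a wage-floor increase (the simulated distributions and the censoring-at-the-floor assumption are illustrative only, not the PME data):

```python
# Sketch: kernel densities of log earnings before and after a minimum wage
# increase, illustrating compression at the bottom of the distribution.
import numpy as np
from scipy.stats import gaussian_kde
import matplotlib.pyplot as plt

rng = np.random.default_rng(5)
before = rng.normal(6.0, 0.60, 20000)
after = np.maximum(rng.normal(6.0, 0.60, 20000), 5.6)  # new wage floor binds

grid = np.linspace(4.5, 8.0, 400)
plt.plot(grid, gaussian_kde(before)(grid), label="before increase")
plt.plot(grid, gaussian_kde(after)(grid), label="after increase")
plt.axvline(5.6, linestyle="--", label="new minimum (log)")
plt.xlabel("log earnings")
plt.legend()
plt.show()
```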
Abstract:
Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups. We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity and propose a modification of the test statistic that provided a better heteroskedasticity correction in our simulations.
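The mechanism the paper highlights, namely that cell averages from larger groups have lower variance (the variance of a group mean is roughly sigma^2/n_g), can be seen in a small simulation of the group × time aggregation step (illustrative only, not the paper's estimator or its correction):

```python
# Sketch: aggregating micro data to group x time cells makes the cell-level
# error variance shrink with cell size, so unequal group sizes generate
# heteroskedasticity in the aggregate DID model. Illustrative simulation.
import numpy as np

rng = np.random.default_rng(11)
G, T, sigma = 50, 50, 1.0
sizes = rng.integers(50, 5000, G)   # very unequal observations per group

# Each cell mean averages n_g iid micro errors.
cell_means = np.array([
    [rng.normal(0, sigma, n).mean() for _ in range(T)]
    for n in sizes
])
emp_var = cell_means.var(axis=1)    # empirical variance of each group's cells

# Strong positive correlation: Var(cell mean) is approximately sigma^2 / n_g.
print(np.corrcoef(1.0 / sizes, emp_var)[0, 1])
```

Inference methods that treat all cells as homoskedastic will therefore be miscalibrated exactly when the treated group's size differs from the controls'.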
Abstract:
Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we know how the heteroskedasticity is generated, which is the case when it is generated by variation in the number of observations per group. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative application of our method that relies on assumptions about stationarity and convergence of the moments of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference method to linear factor models when there are few treated groups. We also propose a permutation test for the synthetic control estimator that provided a better heteroskedasticity correction in our simulations than the test suggested by Abadie et al. (2010).
Abstract:
The synthetic control (SC) method has recently been proposed as an alternative method to estimate treatment effects in comparative case studies. Abadie et al. [2010] and Abadie et al. [2015] argue that one of the advantages of the SC method is that it imposes a data-driven process to select the comparison units, providing more transparency and less discretionary power to the researcher. However, an important limitation of the SC method is that it does not provide clear guidance on the choice of predictor variables used to estimate the SC weights. We show that this lack of specific guidance provides significant opportunities for the researcher to search for specifications with statistically significant results, undermining one of the main advantages of the method. Considering six alternative specifications commonly used in SC applications, we calculate in Monte Carlo simulations the probability of finding a statistically significant result at 5% in at least one specification. We find that this probability can be as high as 13% (23% for a 10% significance test) when there are 12 pre-intervention periods and decays slowly with the number of pre-intervention periods. With 230 pre-intervention periods, this probability is still around 10% (18% for a 10% significance test). We show that the specification that uses the average pre-treatment outcome values to estimate the weights performed particularly badly in our simulations. However, the specification-searching problem remains relevant even when we do not consider this specification. We also show that this specification-searching problem is relevant in simulations with real datasets looking at placebo interventions in the Current Population Survey (CPS). In order to mitigate this problem, we propose a criterion to select among different SC specifications based on the prediction error of each specification in placebo estimations.
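The proposed selection criterion scores each candidate specification by its prediction error in placebo estimations. A minimal sketch of the underlying comparison, fitting SC weights under two specifications and ranking them by pre-treatment root mean squared prediction error (the simulated donor pool and the SLSQP-based weight solver are illustrative assumptions, not the paper's implementation):

```python
# Sketch: comparing synthetic control specifications by pre-treatment RMSPE.
# Each "specification" is a choice of predictors used to fit the weights.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(13)
T0, J = 20, 15                                  # pre-periods, donor units
Y0 = rng.normal(0, 1, (T0, J)).cumsum(0)        # donor outcomes (random walks)
y1 = Y0[:, :5].mean(1) + rng.normal(0, .2, T0)  # treated unit resembles donors 0-4

def sc_weights(X1, X0):
    """Weights on the simplex minimizing ||X1 - X0 w||^2."""
    k = X0.shape[1]
    obj = lambda w: np.sum((X1 - X0 @ w) ** 2)
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)
    res = minimize(obj, np.full(k, 1 / k), bounds=[(0, 1)] * k,
                   constraints=cons, method="SLSQP")
    return res.x

# Spec A: match on all pre-treatment outcomes. Spec B: match on the
# pre-treatment mean only (the specification the paper finds performs badly).
specs = {"all pre-periods": (y1, Y0),
         "pre-period mean": (y1.mean(keepdims=True), Y0.mean(0, keepdims=True))}
for name, (X1, X0) in specs.items():
    w = sc_weights(X1, X0)
    rmspe = np.sqrt(np.mean((y1 - Y0 @ w) ** 2))  # pre-treatment fit
    print(f"{name}: RMSPE = {rmspe:.3f}")
```

Matching only the pre-period mean leaves the weights underdetermined, which is one intuition for why that specification fits the pre-treatment path poorly.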
Abstract:
The Brazilian Network for Continuous Monitoring of GNSS (RBMC) is a national network of continuously operating reference GNSS stations. Since its establishment in December 1996, it has played an essential role in the maintenance of, and user access to, the fundamental geodetic frame in the country. In order to provide better services for the RBMC, the Brazilian Institute of Geography and Statistics (IBGE) and the National Institute of Colonization and Land Reform (INCRA) are partners in the National Geospatial Framework Project (PIGN). This paper provides an overview of the recent modernization phases the RBMC network has undergone, highlighting its future steps. These steps involve installing new equipment, providing real-time data from a group of core stations, and computing real-time DGPS corrections based on CDGPS (the real-time Canada-Wide DGPS Service, http://www.cdgps.com/). In addition, a post-mission Precise Point Positioning (PPP) service has been established, based on the current Geodetic Survey Division of NRCan (CSRS-PPP) service. This service has been operational since April 2009 and is in wide use in the country. All of these activities are based on a cooperation agreement signed at the end of 2004 with the University of New Brunswick, supported by the Canadian International Development Agency and the Brazilian Cooperation Agency. The Geodetic Survey Division of NRCan is also participating in this modernization effort under the same project. This infrastructure of 66 GNSS stations, the real-time and post-processing services, and the potential to provide Wide Area DGPS corrections in the future show that the RBMC system is comparable to those available in the USA and Europe. © Springer-Verlag Berlin Heidelberg 2012.
Abstract:
The rapid technical advances in computed tomography have led to an increased number of clinical indications. Unfortunately, radiation exposure to the population has increased at the same time, owing to the growing total number of CT examinations. In recent years, various publications have demonstrated the feasibility of radiation dose reduction for CT examinations with no compromise in image quality and no loss of interpretation accuracy. The majority of the proposed methods for dose optimization are easy to apply and are independent of the detector array configuration. This article reviews indication-dependent principles (e.g., application of reduced tube voltage for CT angiography, selection of the collimation and the pitch, reducing the total number of imaging series, and lowering the tube voltage and tube current for non-contrast CT scans), manufacturer-dependent principles (e.g., accurate application of automatic tube current modulation, use of adaptive image noise filters, and use of iterative image reconstruction), and general principles (e.g., appropriate patient centering in the gantry, avoiding over-ranging of the CT scan, and lowering the tube voltage and tube current for survey CT scans) that lead to radiation dose reduction.
Abstract:
Objectives: Our objective in this study was to compare assistance received by individuals in the United States and Sweden with characteristics associated with low, moderate, or high 1-year placement risk in the United States. Methods: We used longitudinal nationally representative data from 4,579 participants aged 75 years and older in the 1992 and 1993 waves of the Medicare Current Beneficiary Survey (MCBS) and cross-sectional data from 1,379 individuals aged 75 years and older in the Swedish Aging at Home (AH) national survey for comparative purposes. We developed a logistic regression equation using U.S. data to identify individuals with 3 levels (low, moderate, or high) of predicted 1-year institutional placement risk. Groups with the same characteristics were identified in the Swedish sample and compared on formal and informal assistance received. Results: Formal service utilization was higher in the Swedish sample, whereas informal service use was lower overall. Individuals with characteristics associated with high placement risk received more formal and less informal assistance in Sweden relative to the United States. Discussion: Differences suggest that formal services supplement informal support in the United States and that formal and informal services are complementary in Sweden.
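A minimal sketch of the risk-stratification step: fit a logistic regression for 1-year placement and cut the predicted probabilities into low, moderate, and high groups (the covariates, coefficients, and cut points are illustrative assumptions, not the MCBS model):

```python
# Sketch: logistic regression for 1-year institutional placement risk, then
# stratification into low / moderate / high predicted-risk groups.
# Simulated data; illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(17)
n = 4000
age = rng.uniform(75, 95, n)
adl_limits = rng.poisson(1.5, n)       # activities-of-daily-living limitations
lives_alone = rng.binomial(1, 0.4, n)
logit = -9 + 0.07 * age + 0.5 * adl_limits + 0.6 * lives_alone
placed = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([age, adl_limits, lives_alone]))
model = sm.Logit(placed, X).fit(disp=False)
p_hat = model.predict(X)

# Hypothetical cut points define the three predicted-risk strata.
risk = pd.cut(p_hat, [0, 0.05, 0.15, 1.0],
              labels=["low", "moderate", "high"])
print(pd.Series(risk).value_counts())
```

The cross-national comparison then conditions on these strata: individuals with the same predicted-risk profile are located in the Swedish sample and compared on assistance received.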
Abstract:
Using data from the Current Population Survey, we examine recent trends in the relative economic status of black men. Our findings point to gains in the relative wages of black men (compared to whites) during the 1990s, especially among younger workers. In 1989, the average black male worker (experienced or not) earned about 69 percent as much per week as the average white male worker. By 2001, the average younger black worker was earning about 86 percent as much as an equally experienced white male, and black males at all experience levels earned 72 percent as much as the average white worker. Greater occupational diversity and a reduction in unobserved skill differences and/or labor market discrimination explain much of the trend. For both younger and older workers, general wage inequality tempered the rate of wage convergence between blacks and whites during the 1990s, although the effects were less pronounced than during the 1980s.
Abstract:
Past studies have tested the claim that blacks are the last hired during periods of economic growth and the first fired in recessions by examining the movement of relative unemployment rates over the business cycle. Any conclusion drawn from this type of analysis must be viewed as tentative because the cyclical movements in the underlying transitions into and out of unemployment are not examined. Using Current Population Survey data matched across adjacent months from 1989 to 2004, this paper examines labor market transitions for prime-age males to test this hypothesis. Considerable evidence is presented that blacks are the first fired as the business cycle weakens. However, no evidence is found that blacks are the last hired. Instead, blacks are initially hired from the ranks of the unemployed early in the business cycle and later are drawn from non-participation. Narrowing of the racial unemployment gap near the peak of the business cycle is driven by a reduction in the rate of job loss for blacks rather than increases in hiring. There is also evidence that residual differences in the racial unemployment gap vary systematically over the business cycle in a manner consistent with discrimination being more evident in the economy at times when its cost is lower.
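A minimal sketch of the matched-month transition calculation: link respondents across adjacent survey months on an identifier and tabulate flows among employment, unemployment, and non-participation (the identifiers and column names are hypothetical; matching the actual CPS also requires validating on demographics such as sex, race, and age):

```python
# Sketch: month-to-month labor force transition rates from two adjacent
# survey waves matched on a person identifier. Illustrative toy data.
import pandas as pd

month1 = pd.DataFrame({
    "person_id": [1, 2, 3, 4, 5],
    "status":    ["E", "U", "U", "N", "E"],  # Employed / Unemployed / Not in LF
})
month2 = pd.DataFrame({
    "person_id": [1, 2, 3, 4, 6],            # person 5 attrits, person 6 enters
    "status":    ["E", "E", "U", "U", "E"],
})

# Inner merge keeps only respondents observed in both months.
matched = month1.merge(month2, on="person_id", suffixes=("_t", "_t1"))
flows = pd.crosstab(matched["status_t"], matched["status_t1"],
                    normalize="index")
print(flows)  # rows: status in month t; columns: status in month t+1
```

Computing these transition matrices separately by race and by phase of the business cycle is what lets the paper distinguish "first fired" (the E to U flow) from "last hired" (the U to E and N to E flows).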
Abstract:
Latest issue consulted: 1987.