954 results for Current household survey
Resumo:
In this analysis, we examine the relationship between an individual's decision to volunteer and the average level of volunteering in the community where the individual resides. Our theoretical model is based on a coordination game in which volunteering by others is informative about the benefit from volunteering. We demonstrate that the interaction between this information and one's private information makes an individual more likely to volunteer when the level of contributions by his or her peers is higher. We complement this theoretical work with an empirical analysis using data from Census 2000 Summary File 3 and the Current Population Survey (CPS) September supplements for 2004-2007. We control for various individual and community characteristics and employ robustness checks to verify the results of the baseline analysis. We additionally use an innovative instrumental variables strategy to account for reflection bias and for the endogeneity caused by individuals sorting selectively into neighbourhoods, which allows us to argue for a causal interpretation. The empirical results in the baseline and in all robustness analyses verify the main result of our theoretical model, and we employ a more general structure to further strengthen our results.
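As a rough illustration of the instrumental variables idea mentioned in this abstract, the sketch below runs a manual two-stage least squares (2SLS) regression of an individual volunteering indicator on the community's average volunteering rate, instrumented by an excluded variable. All variable names and the simulated data are hypothetical; this is a minimal linear-probability sketch of the general technique, not the authors' estimator.

```python
# Minimal 2SLS sketch (hypothetical variables): volunteer = individual decision,
# peer_rate = community average volunteering (endogenous), z = excluded instrument.
import numpy as np

def two_sls(y, endog, instrument, controls):
    n = len(y)
    ones = np.ones((n, 1))
    # First stage: project the endogenous peer rate on the instrument and controls.
    Z = np.hstack([ones, instrument.reshape(-1, 1), controls])
    first = np.linalg.lstsq(Z, endog, rcond=None)[0]
    endog_hat = Z @ first
    # Second stage: regress the volunteering decision on the fitted peer rate and controls.
    X = np.hstack([ones, endog_hat.reshape(-1, 1), controls])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return beta  # beta[1] is the 2SLS coefficient on peer volunteering

# Purely illustrative simulated data.
rng = np.random.default_rng(0)
n = 1000
controls = rng.normal(size=(n, 2))
z = rng.normal(size=n)
peer_rate = 0.5 * z + controls @ np.array([0.2, -0.1]) + rng.normal(size=n)
volunteer = (0.4 * peer_rate + controls @ np.array([0.3, 0.1]) + rng.normal(size=n) > 0).astype(float)
print(two_sls(volunteer, peer_rate, z, controls))
```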
Resumo:
The aim of this thesis is to extend bootstrap theory to panel data models. Panel data are obtained by observing several statistical units over several time periods. Their double dimension, individual and temporal, makes it possible to control for unobservable heterogeneity across individuals and across time periods, and thus to carry out richer studies than with time series or cross-sectional data. The advantage of the bootstrap is that it yields inference that is more precise than that based on classical asymptotic theory, or inference that would otherwise be impossible in the presence of nuisance parameters. The method consists of drawing random samples that resemble the analysis sample as closely as possible. The statistical object of interest is estimated on each of these random samples, and the set of estimated values is used for inference. The literature contains some applications of the bootstrap to panel data without rigorous theoretical justification or under strong assumptions. This thesis proposes a bootstrap method better suited to panel data, and its three chapters analyze its validity and application. The first chapter posits a simple model with a single parameter and addresses the theoretical properties of the estimator of the mean. We show that the double resampling we propose, which accounts for both the individual dimension and the time dimension, is valid for these models. Resampling only in the individual dimension is not valid in the presence of temporal heterogeneity, and resampling only in the time dimension is not valid in the presence of individual heterogeneity. The second chapter extends the first to the linear panel regression model. Three types of regressors are considered: individual characteristics, time characteristics, and regressors that vary both over time and across individuals. Using a two-way error components model, the ordinary least squares estimator and the residual bootstrap, we show that resampling in the individual dimension alone is valid for inference on the coefficients associated with regressors that vary only across individuals. Resampling in the time dimension is valid only for the subvector of parameters associated with regressors that vary only over time. Double resampling, for its part, is valid for inference on the entire parameter vector. The third chapter re-examines the difference-in-differences exercise of Bertrand, Duflo and Mullainathan (2004). This estimator is commonly used in the literature to evaluate the impact of public policies. The empirical exercise uses panel data from the Current Population Survey on women's wages in the 50 states of the United States of America from 1979 to 1999. Placebo state-level policy interventions are generated, and the tests are expected to conclude that these placebo policies have no effect on women's wages. Bertrand, Duflo and Mullainathan (2004) show that failing to account for heterogeneity and temporal dependence leads to substantial size distortions when evaluating the impact of public policies with panel data.
One of the recommended solutions is to use the bootstrap. The double resampling method developed in this thesis corrects the test size problem and therefore allows the impact of public policies to be evaluated correctly.
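The sketch below illustrates the double (individual-and-time) resampling scheme described in this abstract for the simplest case, the panel mean. It assumes a balanced N × T panel stored as a NumPy array; the data and the percentile-interval usage are illustrative only.

```python
# Minimal sketch of the double resampling bootstrap for the panel mean:
# individuals (rows) and time periods (columns) are resampled independently.
import numpy as np

def double_bootstrap_mean(y, n_boot=999, seed=0):
    rng = np.random.default_rng(seed)
    n, t = y.shape
    stats = np.empty(n_boot)
    for b in range(n_boot):
        rows = rng.integers(0, n, size=n)   # resample individuals with replacement
        cols = rng.integers(0, t, size=t)   # resample time periods with replacement
        stats[b] = y[np.ix_(rows, cols)].mean()
    return stats

# Usage: percentile confidence interval for the overall mean of a simulated panel
# with both individual and time heterogeneity.
rng = np.random.default_rng(1)
alpha_i = rng.normal(size=(50, 1))          # individual effects
gamma_t = rng.normal(size=(1, 20))          # time effects
y = 1.0 + alpha_i + gamma_t + rng.normal(size=(50, 20))
draws = double_bootstrap_mean(y)
print(np.percentile(draws, [2.5, 97.5]))
```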
Resumo:
This paper argues that changes in the returns to occupational tasks have contributed to changes in the wage distribution over the last three decades. Using Current Population Survey (CPS) data, we first show that the 1990s polarization of wages is explained by changes in wage setting between and within occupations, which are well captured by task measures linked to technological change and offshorability. Using a decomposition based on Firpo, Fortin, and Lemieux (2009), we find that technological change and deunionization played a central role in the 1980s and 1990s, while offshorability became an important factor from the 1990s onwards.
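As a small illustration of the kind of regression underlying decompositions in the spirit of Firpo, Fortin, and Lemieux (2009), the sketch below computes the recentered influence function (RIF) of an unconditional wage quantile and regresses it on covariates. The variables (log_wage, task measures in X) and the simulated data are assumptions for illustration, not the paper's estimates.

```python
# Minimal RIF regression sketch for an unconditional quantile:
# RIF(y; q_tau) = q_tau + (tau - 1{y <= q_tau}) / f(q_tau), then OLS on covariates.
import numpy as np
from scipy.stats import gaussian_kde

def rif_quantile_regression(y, X, tau):
    q = np.quantile(y, tau)
    f_q = gaussian_kde(y)(q)[0]                      # density of y at the quantile
    rif = q + (tau - (y <= q).astype(float)) / f_q   # recentered influence function
    Xc = np.hstack([np.ones((len(y), 1)), X])
    return np.linalg.lstsq(Xc, rif, rcond=None)[0]   # OLS coefficients

# Illustrative use on simulated data.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))                       # e.g. task measures, union status
log_wage = 2.0 + X @ np.array([0.3, -0.1, 0.2]) + rng.normal(scale=0.5, size=2000)
print(rif_quantile_regression(log_wage, X, tau=0.9))
```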
Resumo:
This paper analyzes the effects of the minimum wage on both earnings and employment, using a Brazilian rotating panel dataset (Pesquisa Mensal do Emprego - PME) whose design is similar to that of the US Current Population Survey (CPS). First, an intuitive description of the data is given through graphical analysis. In particular, kernel densities are used to show that an increase in the minimum wage compresses the earnings distribution. This graphical analysis is then formalized with descriptive models, followed by a discussion of identification and endogeneity that leads to a respecification of the model. Second, models for employment are estimated, using a decomposition that makes it possible to separate the effects of an increase in the minimum wage on the number of hours worked and on the number of jobs. The main result is that an increase in the minimum wage compresses the earnings distribution, with a moderately small effect on the level of employment, thereby contributing to the alleviation of inequality.
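The sketch below shows the kind of kernel-density comparison described in this abstract: overlaying the earnings density before and after a minimum wage increase to visualize compression at the bottom of the distribution. The data are simulated (not PME microdata) and the censoring value is an arbitrary illustration.

```python
# Minimal sketch of a before/after kernel-density comparison of earnings.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
before = rng.lognormal(mean=6.0, sigma=0.6, size=5000)
# Illustrative "after" sample: small nominal growth plus censoring at a new minimum of 400.
after = np.maximum(before * rng.normal(1.02, 0.05, size=5000), 400.0)

grid = np.linspace(0, 3000, 400)
for sample, label in [(before, "before increase"), (after, "after increase")]:
    plt.plot(grid, gaussian_kde(sample)(grid), label=label)
plt.xlabel("monthly earnings")
plt.ylabel("density")
plt.legend()
plt.show()
```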
Resumo:
Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups. We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity and propose a modification of the test statistic that provides a better heteroskedasticity correction in our simulations.
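The Monte Carlo sketch below illustrates the over-rejection problem described in this abstract: when group-level error variances scale with 1/n_g and the single treated group is much smaller than the controls, a placebo-based test that treats control groups as exchangeable with the treated group rejects the true null far more often than its nominal level. The design and numbers are illustrative assumptions, not the paper's simulations.

```python
# Minimal Monte Carlo sketch: few treated groups + heteroskedasticity from group size.
import numpy as np

rng = np.random.default_rng(0)
n_controls, n_sims = 50, 2000
n_treated_obs, n_control_obs = 25, 400     # treated group much smaller than control groups
rejections = 0
for _ in range(n_sims):
    # First-differenced group-by-time aggregates under the null of no effect;
    # variance of each group average is inversely proportional to its size.
    d_treated = rng.normal(0, np.sqrt(2.0 / n_treated_obs))
    d_controls = rng.normal(0, np.sqrt(2.0 / n_control_obs), size=n_controls)
    effect = d_treated - d_controls.mean()
    # Placebo distribution built from the control groups only.
    placebos = np.array([d_controls[i] - np.delete(d_controls, i).mean()
                         for i in range(n_controls)])
    lo, hi = np.percentile(placebos, [2.5, 97.5])
    rejections += (effect < lo) or (effect > hi)
print("rejection rate under the null:", rejections / n_sims)  # well above the nominal 5%
```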
Resumo:
Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we know how the heteroskedasticity is generated, which is the case when it is generated by variation in the number of observations per group. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative application of our method that relies on assumptions about stationarity and convergence of the moments of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference method to linear factor models when there are few treated groups. We also propose a permutation test for the synthetic control estimator that provided a better heteroskedasticity correction in our simulations than the test suggested by Abadie et al. (2010).
Resumo:
The synthetic control (SC) method has recently been proposed as an alternative method to estimate treatment effects in comparative case studies. Abadie et al. [2010] and Abadie et al. [2015] argue that one of the advantages of the SC method is that it imposes a data-driven process to select the comparison units, providing more transparency and less discretionary power to the researcher. However, an important limitation of the SC method is that it does not provide clear guidance on the choice of predictor variables used to estimate the SC weights. We show that this lack of specific guidance provides significant opportunities for the researcher to search for specifications with statistically significant results, undermining one of the main advantages of the method. Considering six alternative specifications commonly used in SC applications, we calculate in Monte Carlo simulations the probability of finding a statistically significant result at 5% in at least one specification. We find that this probability can be as high as 13% (23% for a 10% significance test) when there are 12 pre-intervention periods and that it decays slowly with the number of pre-intervention periods. With 230 pre-intervention periods, this probability is still around 10% (18% for a 10% significance test). We show that the specification that uses the average pre-treatment outcome values to estimate the weights performed particularly badly in our simulations. However, the specification-searching problem remains relevant even when we do not consider this specification. We also show that this specification-searching problem is relevant in simulations with real datasets looking at placebo interventions in the Current Population Survey (CPS). In order to mitigate this problem, we propose a criterion to select among different SC specifications based on the prediction error of each specification in placebo estimations.
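The sketch below illustrates the selection idea described at the end of this abstract: estimate synthetic control weights under several predictor specifications and keep the one with the lowest pre-treatment prediction error. The weight solver, the three specifications, and the simulated data are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal synthetic control sketch: compare predictor specifications by pre-treatment fit.
import numpy as np
from scipy.optimize import minimize

def sc_weights(x1, X0):
    """Nonnegative weights summing to one that best reproduce the treated unit's predictors."""
    j = X0.shape[1]
    obj = lambda w: np.sum((x1 - X0 @ w) ** 2)
    res = minimize(obj, np.full(j, 1.0 / j), bounds=[(0, 1)] * j,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
    return res.x

rng = np.random.default_rng(0)
T0, J = 20, 15
Y0 = rng.normal(size=(T0, J))                                           # control outcomes, pre-treatment
y1 = Y0 @ rng.dirichlet(np.ones(J)) + rng.normal(scale=0.1, size=T0)    # treated unit

# Hypothetical specifications: which pre-treatment outcomes enter as predictors.
specs = {"all periods": np.arange(T0), "first half": np.arange(T0 // 2),
         "pre-treatment mean": None}
rmspe = {}
for name, idx in specs.items():
    x1 = y1.mean(keepdims=True) if idx is None else y1[idx]
    X0 = Y0.mean(axis=0, keepdims=True) if idx is None else Y0[idx]
    w = sc_weights(x1, X0)
    rmspe[name] = np.sqrt(np.mean((y1 - Y0 @ w) ** 2))                  # pre-treatment fit
print(min(rmspe, key=rmspe.get), rmspe)
```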
Resumo:
The Brazilian Network for Continuous Monitoring of GNSS - RBMC is a national network of continuously operating reference GNSS stations. Since its establishment in December 1996, it has played an essential role in maintaining the fundamental geodetic frame in the country and in providing user access to it. In order to provide better services for the RBMC, the Brazilian Institute of Geography and Statistics - IBGE and the National Institute of Colonization and Land Reform - INCRA are partners in the National Geospatial Framework Project - PIGN. This paper provides an overview of the recent modernization phases the RBMC network has undergone and highlights its future steps. These steps involve installing new equipment, providing real-time data from a group of core stations, and computing real-time DGPS corrections based on CDGPS (The Real-Time Canada-Wide DGPS Service, http://www.cdgps.com/, 2009a). In addition, a post-mission Precise Point Positioning (PPP) service has been established based on the current Geodetic Survey Division of NRCan (CSRS-PPP) service. This service has been operational since April 2009 and is widely used in the country. All of the activities mentioned above are based on a cooperation agreement signed at the end of 2004 with the University of New Brunswick, supported by the Canadian International Development Agency and the Brazilian Cooperation Agency. The Geodetic Survey Division of NRCan is also participating in this modernization effort under the same project. This infrastructure of 66 GNSS stations, together with the real-time and post-processing services and the potential to provide Wide Area DGPS corrections in the future, shows that the RBMC system is comparable to those available in the USA and Europe. © Springer-Verlag Berlin Heidelberg 2012.
Resumo:
The rapid technical advances in computed tomography have led to an increased number of clinical indications. Unfortunately, the radiation exposure to the population has increased at the same time, owing to the rising total number of CT examinations. In the last few years, various publications have demonstrated the feasibility of reducing the radiation dose of CT examinations with no compromise in image quality and no loss in interpretation accuracy. The majority of the proposed methods for dose optimization are easy to apply and are independent of the detector array configuration. This article reviews indication-dependent principles (e.g. application of reduced tube voltage for CT angiography, selection of the collimation and the pitch, reducing the total number of imaging series, lowering the tube voltage and tube current for non-contrast CT scans), manufacturer-dependent principles (e.g. accurate application of automatic tube current modulation, use of adaptive image noise filters and use of iterative image reconstruction) and general principles (e.g. appropriate patient centering in the gantry, avoiding over-ranging of the CT scan, lowering the tube voltage and tube current for survey CT scans) that lead to radiation dose reduction.
Resumo:
Objectives: Our objective in this study was to compare the assistance received by individuals in the United States and Sweden with characteristics associated with low, moderate, or high 1-year placement risk in the United States. Methods: We used longitudinal nationally representative data from 4,579 participants aged 75 years and older in the 1992 and 1993 waves of the Medicare Current Beneficiary Survey (MCBS) and cross-sectional data from 1,379 individuals aged 75 years and older in the Swedish Aging at Home (AH) national survey for comparative purposes. We developed a logistic regression equation using U.S. data to identify individuals with three levels (low, moderate, or high) of predicted 1-year institutional placement risk. Groups with the same characteristics were identified in the Swedish sample and compared on formal and informal assistance received. Results: Formal service utilization was higher in the Swedish sample, whereas informal service use was lower overall. Individuals with characteristics associated with high placement risk received more formal and less informal assistance in Sweden relative to the United States. Discussion: Differences suggest that formal services supplement informal support in the United States and that formal and informal services are complementary in Sweden.
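The sketch below illustrates the risk-stratification step described in the Methods: fit a logistic regression for 1-year institutional placement and split predicted probabilities into low, moderate, and high groups. The covariates and simulated data are hypothetical, and tertile cut-points are one arbitrary choice of thresholds; the study's actual predictors and cut-offs are not reproduced here.

```python
# Minimal sketch: logistic regression risk score split into three predicted-risk groups.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 3000
X = rng.normal(size=(n, 4))                      # hypothetical covariates (e.g. age, ADL limits)
p = 1.0 / (1.0 + np.exp(-(-2.0 + X @ np.array([0.8, 0.6, -0.4, 0.5]))))
placed = rng.binomial(1, p)                      # simulated 1-year placement indicator

model = LogisticRegression().fit(X, placed)
risk = model.predict_proba(X)[:, 1]
cuts = np.quantile(risk, [1 / 3, 2 / 3])
group = np.digitize(risk, cuts)                  # 0 = low, 1 = moderate, 2 = high predicted risk
for g, label in enumerate(["low", "moderate", "high"]):
    print(label, "mean predicted risk:", risk[group == g].mean().round(3))
```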
Resumo:
Using data from the Current Population Survey, we examine recent trends in the relative economic status of black men. Our findings point to gains in the relative wages of black men (compared to whites) during the 1990s, especially among younger workers. In 1989, the average black male worker (experienced or not) earned about 69 percent as much per week as the average white male worker. In 2001, the average younger black worker was earning about 86 percent as much as an equally experienced white male; black males at all experience levels earned 72 percent as much as the average white worker in 2001. Greater occupational diversity and a reduction in unobserved skill differences and/or labor market discrimination explain much of the trend. For both younger and older workers, general wage inequality tempered the rate of wage convergence between blacks and whites during the 1990s, although the effects were less pronounced than during the 1980s.
Resumo:
Past studies have tested the claim that blacks are the last hired during periods of economic growth and the first fired in recessions by examining the movement of relative unemployment rates over the business cycle. Any conclusion drawn from this type of analysis must be viewed as tentative because the cyclical movements in the underlying transitions into and out of unemployment are not examined. Using Current Population Survey data matched across adjacent months from 1989 to 2004, this paper examines labor market transitions for prime age males to test this hypothesis. Considerable evidence is presented that blacks are the first fired as the business cycle weakens. However, no evidence is found that blacks are the last hired. Instead, blacks are initially hired from the ranks of the unemployed early in the business cycle and later are drawn from non-participation. Narrowing of the racial unemployment gap near the peak of the business cycle is driven by a reduction in the rate of job loss for blacks rather than increases in hiring. There is also evidence that residual differences in the racial unemployment gap vary systematically over the business cycle in a manner consistent with discrimination being more evident in the economy at times when its cost is lower.
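As an illustration of the kind of computation behind this analysis, the sketch below derives month-to-month labor force transition rates from records matched across adjacent months, broken out by race. The column names (person_id, month, status, race) and the toy data are assumptions for illustration, and the sketch assumes consecutive monthly observations within each person.

```python
# Minimal sketch: transition rates between labor force states from matched monthly records.
import pandas as pd

def transition_rates(df):
    df = df.sort_values(["person_id", "month"])
    df["next_status"] = df.groupby("person_id")["status"].shift(-1)
    matched = df.dropna(subset=["next_status"])
    # Share of each origin state moving to each destination state, by race.
    return (matched.groupby(["race", "status"])["next_status"]
                   .value_counts(normalize=True)
                   .rename("rate"))

# Illustrative toy data: E = employed, U = unemployed, N = not in labor force.
toy = pd.DataFrame({
    "person_id": [1, 1, 1, 2, 2, 3, 3],
    "month":     [1, 2, 3, 1, 2, 1, 2],
    "status":    ["E", "U", "E", "U", "E", "N", "N"],
    "race":      ["black"] * 3 + ["white"] * 4,
})
print(transition_rates(toy))
```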
Resumo:
Latest issue consulted: 1987.
Resumo:
Thesis (Ph.D.)--University of Washington, 2016-06
Resumo:
As a new medium for questionnaire delivery, the Internet has the potential to revolutionize the survey process. Online questionnaires can provide many capabilities not found in traditional paper-based questionnaires. Despite this, and despite the introduction of a plethora of tools to support online-questionnaire creation, current electronic survey design typically replicates the look-and-feel of paper-based questionnaires, thus failing to harness the full power of the electronic delivery medium. A recent environmental scan of online-questionnaire design tools found that little, if any, support is incorporated within these tools to guide questionnaire designers according to best practice [Lumsden & Morgan 2005]. This paper briefly introduces a comprehensive set of guidelines for the design of online questionnaires. Drawn from relevant disparate sources, all the guidelines incorporated within the set are proven in their own right. As an initial assessment of the value of the set of guidelines as a practical reference guide, we undertook an informal study to observe the effect of introducing the guidelines into the design process for a complex online questionnaire. The paper discusses the qualitative findings of this case study, which are encouraging for the role of the guidelines in the 'bigger picture' of online survey delivery across many domains such as e-government, e-business, and e-health.