4 results for Historic American Buildings Survey

at Duke University


Relevance:

100.00%

Publisher:

Abstract:

The Elizabeth River system is an estuary in southeastern Virginia, surrounded by the cities of Chesapeake, Norfolk, Portsmouth, and Virginia Beach. The river has played important roles in U.S. history and has been the site of various military and industrial activities. These activities have been sources of chemical contamination in this aquatic system. Important industries, until the 1990s, included wood treatment plants that used creosote, an oil-derived product rich in polycyclic aromatic hydrocarbons (PAHs). These plants left a legacy of PAH pollution in the river; in particular, Atlantic Wood Industries is a designated Superfund site now undergoing remediation. Numerous studies have examined the distribution of PAHs in the river and their impacts on resident fauna. This review focuses on how a small estuarine fish with a limited home range, Fundulus heteroclitus (Atlantic killifish or mummichog), has responded to this pollution. While in certain areas of the river this species has clearly been impacted, as evidenced by elevated rates of liver cancer, some subpopulations, notably the one associated with the Atlantic Wood Industries site, have displayed a remarkable ability to resist the marked effects PAHs have on the embryonic development of fish. This review provides evidence of how pollutants have acted as evolutionary agents, causing changes in ecosystems that potentially last longer than the pollutants themselves. Mechanisms underlying this evolved resistance, as well as mechanisms underlying the effects of PAHs on embryonic development, are also described. The review concludes with a description of ongoing and promising efforts to restore this historic American river.

Relevance:

100.00%

Publisher:

Abstract:

Surveys can collect important data that inform policy decisions and drive social science research. Large government surveys collect information from the U.S. population on a wide range of topics, including demographics, education, employment, and lifestyle. Analysis of survey data presents unique challenges: in particular, one needs to account for missing data, for complex sampling designs, and for measurement error. Conceptually, a survey organization could spend substantial resources obtaining high-quality responses from a simple random sample, resulting in survey data that are easy to analyze, but this scenario often is not realistic. To address these practical issues, survey organizations can leverage the information available from other sources of data. For example, in longitudinal studies that suffer from attrition, they can use the information from refreshment samples to correct for potential attrition bias. They can use information from known marginal distributions or from the survey design to improve inferences. They can use information from gold standard sources to correct for measurement error.

This thesis presents novel approaches to combining information from multiple sources that address the three problems described above.

The first method addresses nonignorable unit nonresponse and attrition in a panel survey with a refreshment sample. Panel surveys typically suffer from attrition, which can lead to biased inference when basing analysis only on cases that complete all waves of the panel. Unfortunately, the panel data alone cannot inform the extent of the bias due to attrition, so analysts must make strong and untestable assumptions about the missing data mechanism. Many panel studies also include refreshment samples, which are data collected from a random sample of new individuals during some later wave of the panel. Refreshment samples offer information that can be utilized to correct for biases induced by nonignorable attrition while reducing reliance on strong assumptions about the attrition process. To date, these bias correction methods have not dealt with two key practical issues in panel studies: unit nonresponse in the initial wave of the panel and in the refreshment sample itself. As we illustrate, nonignorable unit nonresponse can significantly compromise the analyst's ability to use the refreshment samples for attrition bias correction. Thus, it is crucial for analysts to assess how sensitive their inferences, corrected for panel attrition, are to different assumptions about the nature of the unit nonresponse. We present an approach that facilitates such sensitivity analyses, both for suspected nonignorable unit nonresponse in the initial wave and in the refreshment sample. We illustrate the approach using simulation studies and an analysis of data from the 2007-2008 Associated Press/Yahoo News election panel study.
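To see why attrition alone cannot be diagnosed from the panel, and why a refreshment sample helps, the following minimal Python simulation sketch (all parameter values are assumptions for illustration) shows a complete-case estimate drifting away from the truth under nonignorable dropout, while a fresh wave-2 sample recovers it. This illustrates the problem only, not the thesis's correction model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Original panel: outcomes at wave 1 and wave 2.
y1 = rng.normal(0, 1, n)
y2 = 0.5 * y1 + rng.normal(0, 1, n)

# Nonignorable attrition: the chance of dropping out rises with y2 itself.
p_drop = 1 / (1 + np.exp(-1.5 * y2))
stayed = rng.random(n) > p_drop

print("true wave-2 mean:          ", round(y2.mean(), 3))          # ~0.0
print("complete-case wave-2 mean: ", round(y2[stayed].mean(), 3))  # biased low

# Refreshment sample: new individuals drawn at wave 2 and observed regardless
# of the attrition process, so their mean identifies the true wave-2 mean.
y1_r = rng.normal(0, 1, n)
y2_r = 0.5 * y1_r + rng.normal(0, 1, n)
print("refreshment-sample mean:   ", round(y2_r.mean(), 3))        # ~0.0
```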

The second method incorporates informative prior beliefs about marginal probabilities into Bayesian latent class models for categorical data. The basic idea is to append synthetic observations to the original data such that (i) the empirical distributions of the desired margins match those of the prior beliefs, and (ii) the values of the remaining variables are left missing. The degree of prior uncertainty is controlled by the number of augmented records. Posterior inferences can be obtained via typical MCMC algorithms for latent class models, tailored to deal efficiently with the missing values in the concatenated data.
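As a concrete illustration of the augmentation step, here is a minimal Python sketch; the column names, the prior margin, and the prior weight of 500 pseudo-records are hypothetical choices, not values from the thesis.

```python
import numpy as np
import pandas as pd

def augment_with_margin(df, column, probs, n_aug):
    """Append n_aug synthetic rows whose `column` matches the prior margin
    `probs` (up to rounding); every other column is left missing."""
    levels = list(probs)
    counts = np.round(np.array([probs[l] for l in levels]) * n_aug).astype(int)
    values = np.repeat(levels, counts)
    aug = pd.DataFrame(np.nan, index=range(len(values)), columns=df.columns)
    aug[column] = values
    return pd.concat([df, aug], ignore_index=True)

rng = np.random.default_rng(1)
survey = pd.DataFrame({"sex": rng.choice(["female", "male"], 1000),
                       "employed": rng.choice(["yes", "no"], 1000)})

# Prior belief P(sex = "female") = 0.52, weighted as 500 pseudo-observations;
# more augmented records encode a stronger (less uncertain) prior.
combined = augment_with_margin(survey, "sex",
                               {"female": 0.52, "male": 0.48}, n_aug=500)
```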

We illustrate the approach using a variety of simulations based on data from the American Community Survey, including an example of how augmented records can be used to fit latent class models to data from stratified samples.

The third method leverages the information from a gold standard survey to model reporting error. Survey data are subject to reporting error when respondents misunderstand the question or accidentally select the wrong response. Sometimes survey respondents knowingly select the wrong response, for example, by reporting a higher level of education than they actually have attained. We present an approach that allows an analyst to model reporting error by incorporating information from a gold standard survey. The analyst can specify various reporting error models and assess how sensitive their conclusions are to different assumptions about the reporting error process. We illustrate the approach using simulations based on data from the 1993 National Survey of College Graduates. We use the method to impute error-corrected educational attainments in the 2010 American Community Survey using the 2010 National Survey of College Graduates as the gold standard survey.
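A heavily simplified sketch of the gold-standard idea follows: it estimates P(true | reported) from a cross-tabulation and imputes a single corrected draw per respondent, whereas the thesis's approach is a fuller Bayesian model. All column names, categories, and data here are placeholders.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
cats = ["HS", "BA", "MA", "PhD"]

# Gold standard survey: both the reported and the true category are known,
# so P(true | reported) can be estimated from a cross-tabulation.
gold = pd.DataFrame({"reported": rng.choice(cats, 5000),
                     "true": rng.choice(cats, 5000)})
p_true_given_rep = pd.crosstab(gold["reported"], gold["true"], normalize="index")

# Target survey: only the (possibly misreported) value is observed.
target = pd.DataFrame({"reported": rng.choice(cats, 2000)})

def impute_corrected(reported):
    probs = p_true_given_rep.loc[reported]
    return rng.choice(probs.index.to_numpy(), p=probs.to_numpy())

target["education_corrected"] = target["reported"].map(impute_corrected)
```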

Relevance:

100.00%

Publisher:

Abstract:

Continuous variables are among the major data types collected by survey organizations. Such data can be incomplete, so that the data collectors need to fill in the missing values, or they can contain sensitive information that needs protection from re-identification. One approach to protecting continuous microdata is to aggregate the values into cells defined by different features. In this thesis, I present novel methods of multiple imputation (MI) that can be applied to impute missing values and to synthesize confidential values for continuous and magnitude data.

The first method is for limiting the disclosure risk of the continuous microdata whose marginal sums are fixed. The motivation for developing such a method comes from the magnitude tables of non-negative integer values in economic surveys. I present approaches based on a mixture of Poisson distributions to describe the multivariate distribution so that the marginals of the synthetic data are guaranteed to sum to the original totals. At the same time, I present methods for assessing disclosure risks in releasing such synthetic magnitude microdata. The illustration on a survey of manufacturing establishments shows that the disclosure risks are low while the information loss is acceptable.
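One way to honor the fixed-margin constraint in a sketch uses the classical fact that independent Poisson counts conditioned on their sum are multinomial with probabilities proportional to the Poisson means. The toy below substitutes a crude per-record mean estimate for the fitted mixture of Poissons used in the thesis; the data and smoothing constant are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

original = rng.poisson(lam=50, size=20)   # e.g., counts in 20 cells of a table
total = original.sum()                    # published margin that must be preserved

# In the thesis the per-record means come from a fitted mixture of Poissons;
# here a crude smoothed estimate stands in for them.
lam_hat = original + 0.5
probs = lam_hat / lam_hat.sum()

# Independent Poissons conditioned on their sum are multinomial, so sampling
# multinomially guarantees the synthetic records reproduce the original total.
synthetic = rng.multinomial(total, probs)
assert synthetic.sum() == total
```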

The second method is for releasing synthetic continuous microdata by a nonstandard MI method. Traditionally, MI fits a model on the confidential values and then generates multiple synthetic datasets from this model. Its disclosure risk tends to be high, especially when the original data contain extreme values. I present a nonstandard MI approach conditioned on protective intervals. Its basic idea is to estimate the model parameters from these intervals rather than from the confidential values. The encouraging results of simple simulation studies suggest the potential of this new approach for limiting the posterior disclosure risk.
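The interval-conditioning idea can be sketched as interval-censored maximum likelihood. The sketch below assumes a normal model and unit-width protective intervals, both illustrative choices rather than the thesis's specification.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)
x = rng.normal(10, 2, 500)               # confidential values
lo, hi = np.floor(x), np.floor(x) + 1    # unit-width protective intervals;
                                         # the estimator never sees x itself

def neg_loglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    p = norm.cdf(hi, mu, sigma) - norm.cdf(lo, mu, sigma)  # P(lo < X <= hi)
    return -np.log(np.clip(p, 1e-300, None)).sum()

mids = (lo + hi) / 2                     # start from the interval midpoints
x0 = np.array([mids.mean(), np.log(mids.std())])
fit = minimize(neg_loglik, x0, method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])  # should be close to (10, 2)
```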

The third method is for imputing missing values in continuous and categorical variables. It extends a hierarchically coupled mixture model with local dependence, but the new method separates the variables into non-focused (i.e., nearly fully observed) and focused (i.e., largely missing) ones. The sub-model structure for the focused variables is more complex than that for the non-focused ones; the two blocks' cluster indicators are linked together by tensor factorization, and the focused continuous variables depend locally on non-focused values. The model properties suggest that moving strongly associated non-focused variables to the side of the focused ones can help improve estimation accuracy, which I examine in several simulation studies. The method is applied to data from the American Community Survey.
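A generative toy can illustrate what "cluster indicators linked by tensor factorization" means: a shared latent component induces a low-rank joint distribution over the two blocks' indicators. The dimensions and distributions below are illustrative assumptions, not the thesis's model.

```python
import numpy as np

rng = np.random.default_rng(5)
K, C_nf, C_f, n = 3, 4, 4, 1000   # shared components; per-block cluster counts

w = rng.dirichlet(np.ones(K))                # weights of the shared components
A = rng.dirichlet(np.ones(C_nf), size=K)     # P(z_nonfocused | k), rows sum to 1
B = rng.dirichlet(np.ones(C_f), size=K)      # P(z_focused | k)

k = rng.choice(K, size=n, p=w)               # shared latent component per record
z_nf = np.array([rng.choice(C_nf, p=A[i]) for i in k])
z_f = np.array([rng.choice(C_f, p=B[i]) for i in k])

# The implied joint P(z_nf, z_f) = sum_k w_k * outer(A[k], B[k]) is a rank-K
# factorization of the indicator distribution; variables within each block
# would then be generated conditionally on that block's indicator.
joint = np.einsum("k,ki,kj->ij", w, A, B)
```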

Relevance:

30.00%

Publisher:

Abstract:

What role do state party organizations play in twenty-first century American politics? What is the nature of the relationship between the state and national party organizations in contemporary elections? These questions frame the three studies presented in this dissertation. More specifically, I examine the organizational development of the state party organizations and the strategic interactions and connections between the state and national party organizations in contemporary elections.

In the first empirical chapter, I argue that the Internet Age represents a significant transitional period for state party organizations. Using data collected from surveys of state party leaders, this chapter reevaluates and updates existing theories of party organizational strength and demonstrates the importance of new indicators of party technological capacity to our understanding of party organizational development in the early twenty-first century.

In the second chapter, I ask whether the national parties utilize different strategies in deciding how to allocate resources to state parties through fund transfers and through the 50-state-strategy party-building programs that both the Democratic and Republican National Committees advertised during the 2010 elections. Analyzing data collected from my 2011 state party survey and party-fund-transfer data collected from the Federal Election Commission, I find that the national parties considered a combination of state and national electoral concerns in directing assistance to the state parties through their 50-state strategies, as opposed to the strict battleground-state strategy that explains party fund transfers.

In my last chapter, I examine the relationships between platforms issued by Democratic and Republican state and national parties and the strategic considerations that explain why state platforms vary in their degree of similarity to the national platform. I analyze an extensive platform dataset, using cluster analysis and document similarity measures to compare platform content across the 1952 to 2014 period. The analysis shows that, as a group, Democratic and Republican state platforms exhibit greater intra-party homogeneity and inter-party heterogeneity starting in the early 1990s, and state-national platform similarity is higher in states that are key players in presidential elections, among other factors.

Together, these three studies demonstrate the significance of the state party organizations and the state-national party partnership in contemporary politics.
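For readers unfamiliar with document similarity measures, a minimal sketch of one common choice (TF-IDF cosine similarity) is below; the dissertation's exact measures may differ, and the platform snippets are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder snippets standing in for full platform texts.
platforms = {
    "national_dem": "We will invest in clean energy and public education ...",
    "state_dem": "Our party supports public education and clean energy jobs ...",
    "state_rep": "We stand for lower taxes and limited government ...",
}

tfidf = TfidfVectorizer(stop_words="english").fit_transform(platforms.values())
sim = cosine_similarity(tfidf)        # pairwise similarities, rows in dict order
print(dict(zip(platforms, sim[0])))   # similarity of each text to national_dem
```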