14 results for "large sample distributions" at Duke University
Abstract:
Fitting statistical models is computationally challenging when the sample size or the dimension of the dataset is huge. An attractive approach for down-scaling the problem size is to first partition the dataset into subsets and then fit using distributed algorithms. The dataset can be partitioned either horizontally (in the sample space) or vertically (in the feature space), and the challenge lies in defining an algorithm with low communication, theoretical guarantees, and excellent practical performance in general settings. For sample space partitioning, I propose a MEdian Selection Subset AGgregation Estimator (message) algorithm for solving these issues. The algorithm applies feature selection in parallel for each subset using regularized regression or Bayesian variable selection methods, calculates the 'median' feature inclusion index, estimates coefficients for the selected features in parallel for each subset, and then averages these estimates. The algorithm is simple, involves minimal communication, scales efficiently in sample size, and has theoretical guarantees. I provide extensive experiments showing excellent performance in feature selection, estimation, prediction, and computation time relative to the usual competitors.
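A minimal sketch of this workflow, using the lasso as the per-subset selector; the subset count m and the choice of scikit-learn estimators are illustrative assumptions rather than the thesis's exact procedure:

    # Sketch of the 'message' idea: select features on each subset, take the
    # median inclusion indicator, then refit and average coefficients.
    # X, y are a NumPy design matrix and response; m is illustrative.
    import numpy as np
    from sklearn.linear_model import LassoCV, LinearRegression

    def message(X, y, m=10):
        n, p = X.shape
        subsets = np.array_split(np.random.permutation(n), m)
        # Step 1: feature selection on each subset (in parallel in practice).
        inclusion = np.zeros((m, p))
        for k, idx in enumerate(subsets):
            fit = LassoCV(cv=5).fit(X[idx], y[idx])
            inclusion[k] = fit.coef_ != 0
        # Step 2: 'median' inclusion index -- keep features selected on
        # more than half of the subsets.
        selected = np.where(np.median(inclusion, axis=0) > 0.5)[0]
        # Step 3: estimate coefficients on each subset, then average.
        coefs = np.zeros((m, selected.size))
        for k, idx in enumerate(subsets):
            coefs[k] = LinearRegression().fit(X[idx][:, selected], y[idx]).coef_
        return selected, coefs.mean(axis=0)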
While sample space partitioning is useful in handling datasets with a large sample size, feature space partitioning is more effective when the data dimension is high. Existing methods for partitioning features, however, are either vulnerable to high correlations or inefficient in reducing the model dimension. In the thesis, I propose a new embarrassingly parallel framework named DECO for distributed variable selection and parameter estimation. In DECO, variables are first partitioned and allocated to m distributed workers. The decorrelated subset data within each worker are then fitted via any algorithm designed for high-dimensional problems. We show that by incorporating the decorrelation step, DECO can achieve consistent variable selection and parameter estimation on each subset with (almost) no assumptions. In addition, the convergence rate is nearly minimax optimal for both sparse and weakly sparse models and does not depend on the partition number m. Extensive numerical experiments are provided to illustrate the performance of the new framework.
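A minimal sketch of the decorrelate-then-partition idea, using one common form of the decorrelation matrix; the ridge term r and worker count m are illustrative, and the thesis's exact construction may differ:

    # Sketch of DECO: decorrelate the rows, then fit column blocks separately.
    import numpy as np
    from numpy.linalg import eigh
    from sklearn.linear_model import LassoCV

    def deco(X, y, m=4, r=1.0):
        n, p = X.shape
        # Decorrelation: F = sqrt(p) * (X X' + r I)^(-1/2), applied to X and y,
        # computed via an eigendecomposition of the n x n Gram matrix.
        G = X @ X.T + r * np.eye(n)
        w, V = eigh(G)
        F = np.sqrt(p) * (V * (1.0 / np.sqrt(w))) @ V.T
        Xt, yt = F @ X, F @ y
        # Partition features across m workers and fit each block separately.
        coef = np.zeros(p)
        for cols in np.array_split(np.arange(p), m):
            coef[cols] = LassoCV(cv=5).fit(Xt[:, cols], yt).coef_
        return coef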
For datasets with both large sample sizes and high dimensionality, I propose a new "divide-and-conquer" framework, DEME (DECO-message), by leveraging both the DECO and message algorithms. The new framework first partitions the dataset in the sample space into row cubes using message and then partitions the feature space of the cubes using DECO. This procedure is equivalent to partitioning the original data matrix into multiple small blocks, each with a feasible size that can be stored and fitted on a single computer in parallel. The results are then synthesized via the DECO and message algorithms in reverse order to produce the final output. The whole framework is extremely scalable.
Abstract:
BACKGROUND: Malignant glioma is a rare cancer with poor survival. The influence of diet and antioxidant intake on glioma survival is not well understood. The current study examines the association between antioxidant intake and survival after glioma diagnosis. METHODS: Adult patients diagnosed with malignant glioma during 1991-1994 and 1997-2001 were enrolled in a population-based study. Diagnosis was confirmed by review of pathology specimens. A modified food-frequency questionnaire interview was completed by each glioma patient or a designated proxy. Intake of each food item was converted to grams consumed per day. From this nutrient database, 16 antioxidants, calcium, a total antioxidant index, and 3 macronutrients were available for survival analysis. Cox regression estimated mortality hazard ratios associated with each nutrient and the antioxidant index, adjusting for potential confounders. Nutrient values were categorized into tertiles. Models were stratified by histology (Grades II, III, and IV) and conducted for all (including proxy) subjects and for a subset of self-reported subjects. RESULTS: Geometric mean values for 11 fat-soluble and 6 water-soluble individual antioxidants, the antioxidant index, and 3 macronutrients were virtually the same when comparing all cases (n=748) to self-reported cases only (n=450). For patients diagnosed with Grade II and Grade III histology, moderate (915.8-2118.3 mcg) intake of fat-soluble lycopene was associated with poorer survival when compared to low intake (0.0-914.8 mcg), for self-reported cases only. High intake of vitamin E and moderate/high intake of secoisolariciresinol among Grade III patients indicated greater survival for all cases. In Grade IV patients, moderate/high intake of cryptoxanthin and high intake of secoisolariciresinol were associated with poorer survival among all cases. Among Grade II patients, moderate intake of water-soluble folate was associated with greater survival for all cases; high intake of vitamin C and genistein and the highest level of the antioxidant index were associated with poorer survival for all cases. CONCLUSIONS: The associations observed in our study suggest that the influence of some antioxidants on survival following a diagnosis of malignant glioma is inconsistent and varies by histology group. Further research in a large sample of glioma patients is needed to confirm or refute our results.
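As a sketch of the kind of tertile-based Cox model described (not the study's actual model), using the lifelines package; the file name, column names, and the single adjustment covariate are hypothetical:

    # Tertile-based Cox regression for one nutrient, with the lowest
    # tertile as the referent category.
    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.read_csv("glioma_cohort.csv")               # hypothetical file
    df["lycopene_tertile"] = pd.qcut(df["lycopene_mcg"], 3, labels=False)
    dummies = pd.get_dummies(df["lycopene_tertile"], prefix="T", drop_first=True)
    model_df = pd.concat([df[["survival_months", "died", "age"]], dummies], axis=1)

    cph = CoxPHFitter()
    cph.fit(model_df, duration_col="survival_months", event_col="died")
    print(cph.summary[["exp(coef)", "p"]])              # hazard ratios per tertile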
Abstract:
BACKGROUND: There is considerable interest in the development of methods to efficiently identify all coding variants present in large sample sets of humans. Three approaches are possible: whole-genome sequencing, whole-exome sequencing using exon capture methods, and RNA-Seq. While whole-genome sequencing is the most complete, it remains sufficiently expensive that cost-effective alternatives are important. RESULTS: Here we provide a systematic exploration of how well RNA-Seq can identify human coding variants by comparing variants identified through high-coverage whole-genome sequencing to those identified by high-coverage RNA-Seq in the same individual. This comparison allowed us to directly evaluate the sensitivity and specificity of RNA-Seq in identifying coding variants, and to evaluate how key parameters such as the degree of coverage and the expression levels of genes interact to influence performance. We find that although only 40% of exonic variants identified by whole-genome sequencing were captured using RNA-Seq, this number rose to 81% when concentrating on genes known to be well expressed in the source tissue. We also find that a high false positive rate can be problematic when working with RNA-Seq data, especially at higher levels of coverage. CONCLUSIONS: We conclude that as long as a tissue relevant to the trait under study is available and suitable quality control screens are implemented, RNA-Seq is a fast and inexpensive alternative approach for finding coding variants in genes with sufficiently high expression levels.
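The core comparison reduces to set overlap between call sets; a minimal sketch, assuming variant calls have already been parsed from VCFs into tuples elsewhere:

    # Fraction of WGS variants captured by RNA-Seq, optionally restricted
    # to positions in well-expressed genes (as in the 40% -> 81% result).
    # Calls are sets of (chromosome, position, alt) tuples.
    def capture_rate(wgs_calls, rnaseq_calls, expressed_positions=None):
        if expressed_positions is not None:
            wgs_calls = {v for v in wgs_calls if (v[0], v[1]) in expressed_positions}
        captured = wgs_calls & rnaseq_calls
        return len(captured) / len(wgs_calls)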
Abstract:
The present study examined the impact of the developmental timing of trauma exposure on posttraumatic stress disorder (PTSD) symptoms and psychosocial functioning in a large sample of community-dwelling older adults (N = 1,995). Specifically, we investigated whether the negative consequences of exposure to traumatic events were greater for traumas experienced during childhood, adolescence, young adulthood, midlife, or older adulthood. Each of these developmental periods is characterized by age-related changes in cognitive and social processes that may influence psychological adjustment following trauma exposure. Results revealed that older adults who experienced their currently most distressing traumatic event during childhood exhibited more severe symptoms of PTSD and lower subjective happiness compared with older adults who experienced their most distressing trauma after the transition to adulthood. Similar findings emerged for measures of social support and coping ability. The differential effects of childhood compared with later life traumas were not fully explained by differences in cumulative trauma exposure or by differences in the objective and subjective characteristics of the events. Our findings demonstrate the enduring nature of traumatic events encountered early in the life course and underscore the importance of examining the developmental context of trauma exposure in investigations of the long-term consequences of traumatic experiences.
Abstract:
Post-traumatic stress disorder (PTSD) affects regions that support autobiographical memory (AM) retrieval, such as the hippocampus, amygdala and ventral medial prefrontal cortex (PFC). However, it is not well understood how PTSD may impact the neural mechanisms of memory retrieval for the personal past. We used a generic cue method combined with parametric modulation analysis and functional MRI (fMRI) to investigate the neural mechanisms affected by PTSD symptoms during the retrieval of a large sample of emotionally intense AMs. There were three main results. First, the PTSD group showed greater recruitment of the amygdala/hippocampus during the construction of negative versus positive emotionally intense AMs, when compared to controls. Second, across both the construction and elaboration phases of retrieval the PTSD group showed greater recruitment of the ventral medial PFC for negatively intense memories, but less recruitment for positively intense memories. Third, the PTSD group showed greater functional coupling between the ventral medial PFC and the amygdala for negatively intense memories, but less coupling for positively intense memories. In sum, the fMRI data suggest that there was greater recruitment and coupling of emotional brain regions during the retrieval of negatively intense AMs in the PTSD group when compared to controls.
Abstract:
Pezdek, Blandon-Gitlin, and Gabbay (2006) found that perceptions of the plausibility of events increase the likelihood that imagination may induce false memories of those events. Using a survey conducted by Gallup, we asked a large sample of the general population how plausible it would be for a person with longstanding emotional problems and a need for psychotherapy to be a victim of childhood sexual abuse, even though the person could not remember the abuse. Only 18% indicated that it was implausible or very implausible, whereas 67% indicated that such an occurrence was either plausible or very plausible. Combined with Pezdek et al.'s findings, and counter to their conclusions, our findings imply that there is a substantial danger of inducing false memories of childhood sexual abuse through imagination in psychotherapy.
Abstract:
The ability to quickly detect and respond to visual stimuli in the environment is critical to many human activities. While such perceptual and visual-motor skills are important in a myriad of contexts, considerable variability exists between individuals in these abilities. To better understand the sources of this variability, we assessed perceptual and visual-motor skills in a large sample of 230 healthy individuals via the Nike SPARQ Sensory Station, and compared variability in their behavioral performance to demographic, state, sleep and consumption characteristics. Dimension reduction and regression analyses indicated three underlying factors: Visual-Motor Control, Visual Sensitivity, and Eye Quickness, which accounted for roughly half of the overall population variance in performance on this battery. Inter-individual variability in Visual-Motor Control was correlated with gender and circadian patterns such that performance on this factor was better for males and for those who had been awake for a longer period of time before assessment. The current findings indicate that abilities involving coordinated hand movements in response to stimuli are subject to greater individual variability, while visual sensitivity and oculomotor control are largely stable across individuals.
Abstract:
OBJECTIVE: In a large sample of community-dwelling older adults with histories of exposure to a broad range of traumatic events, we examined the extent to which appraisals of traumatic events mediate the relations between insecure attachment styles and posttraumatic stress disorder (PTSD) symptom severity. METHOD: Participants completed an assessment of adult attachment, in addition to measures of PTSD symptom severity, event centrality, event severity, and ratings of the A1 PTSD diagnostic criterion for the potentially traumatic life event that bothered them most at the time of the study. RESULTS: Consistent with theoretical proposals and empirical studies indicating that individual differences in adult attachment systematically influence how individuals evaluate distressing events, individuals with higher attachment anxiety perceived their traumatic life events to be more central to their identity and more severe. Greater event centrality and event severity were each in turn related to higher PTSD symptom severity. In contrast, the relation between attachment avoidance and PTSD symptoms was not mediated by appraisals of event centrality or event severity. Furthermore, neither attachment anxiety nor attachment avoidance was related to participants' ratings of the A1 PTSD diagnostic criterion. CONCLUSION: Our findings suggest that attachment anxiety contributes to greater PTSD symptom severity through heightened perceptions of traumatic events as central to identity and severe.
Abstract:
This dissertation explores the complex interactions between organizational structure and the environment. In Chapter 1, I investigate the effect of financial development on the formation of European corporate groups. Since cross-country regressions are hard to interpret in a causal sense, we exploit exogenous industry measures to investigate a specific channel through which financial development may affect group affiliation: internal capital markets. Using a comprehensive firm-level dataset on European corporate groups in 15 countries, we find that countries with less developed financial markets have a higher percentage of group affiliates in more capital-intensive industries. This relationship is more pronounced for young and small firms and for affiliates of large and diversified groups. Our findings are consistent with the view that internal capital markets may, under some conditions, be more efficient than prevailing external markets, and that this may drive group affiliation even in developed economies. In Chapter 2, I bridge current streams of innovation research to explore the interplay between R&D, external knowledge, and organizational structure, three elements of a firm's innovation strategy which we argue should logically be studied together. Using within-firm patent assignment patterns, we develop a novel measure of structure for a large sample of American firms. We find that centralized firms invest more in research and patent more per R&D dollar than decentralized firms. Both types access technology via mergers and acquisitions, but their acquisitions differ in terms of frequency, size, and integration. Consistent with our framework, their sources of value creation differ: while centralized firms derive more value from internal R&D, decentralized firms rely more on external knowledge. We discuss how these findings should stimulate more integrative work on theories of innovation. In Chapter 3, I use novel data on 1,265 newly-public firms to show that innovative firms exposed to environments with lower M&A activity just after their initial public offering (IPO) adapt by engaging in fewer technological acquisitions and more internal research. However, this adaptive response becomes inertial shortly after IPO and persists well into maturity. This study advances our understanding of how the environment shapes heterogeneity and capabilities through its impact on firm structure. I discuss how my results can help bridge inertial versus adaptive perspectives in the study of organizations, by documenting an instance when the two interact.
Abstract:
Disparities in the crack/cocaine discourse have changed drastically since its inception over 30 years ago. Since the late 1980s, research examining crack abuse has become more complex as crack use/abuse has been examined within various national and global contexts. Crack use has often been framed as an African American problem, in part because of the high volume of African Americans seeking treatment for illnesses associated with their crack-cocaine use and the greater number of African Americans dying from crack-cocaine overdose. This logical fallacy persists despite evidence showing African Americans have lower rates of substance use/abuse compared to Caucasians. Given the impact of the crack epidemic, as well as its related drug policies, on African American communities and their families, further examination of crack use/abuse is necessary. This study discusses the crack epidemic historically and examines crack use among clients of a large sample of outpatient substance abuse treatment units over the decade between 1995 and 2005.
Abstract:
A popular way to account for unobserved heterogeneity is to assume that the data are drawn from a finite mixture distribution. A barrier to using finite mixture models is that parameters that could previously be estimated in stages must now be estimated jointly: using mixture distributions destroys any additive separability of the log-likelihood function. We show, however, that an extension of the EM algorithm reintroduces additive separability, thus allowing one to estimate parameters sequentially during each maximization step. In establishing this result, we develop a broad class of estimators for mixture models. Returning to the likelihood problem, we show that, relative to full information maximum likelihood, our sequential estimator can generate large computational savings with little loss of efficiency.
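To illustrate the flavor of a staged maximization step, here is a minimal two-component Gaussian mixture EM in which the M-step updates the weight, means, and variances in sequence; the example and its staging are illustrative only, not the paper's estimator:

    # EM for a two-component Gaussian mixture: the E-step computes
    # responsibilities; the M-step then maximizes the additively separable
    # weighted log-likelihood one parameter block at a time.
    import numpy as np
    from scipy.stats import norm

    def em_mixture(y, iters=100):
        pi, mu, sd = 0.5, np.array([y.min(), y.max()]), np.array([y.std(), y.std()])
        for _ in range(iters):
            # E-step: posterior responsibility of component 1.
            d0 = (1 - pi) * norm.pdf(y, mu[0], sd[0])
            d1 = pi * norm.pdf(y, mu[1], sd[1])
            g = d1 / (d0 + d1)
            # M-step in stages: weight, then means, then variances.
            pi = g.mean()
            mu = np.array([np.average(y, weights=1 - g), np.average(y, weights=g)])
            sd = np.sqrt(np.array([np.average((y - mu[0])**2, weights=1 - g),
                                   np.average((y - mu[1])**2, weights=g)]))
        return pi, mu, sd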
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, yet uncertainty quantification remains essential in the sciences, where the number of parameters to estimate often exceeds the sample size despite the huge increases in n now typical of many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the grounds that "n = all" is thus of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
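To fix notation, the latent structure (PARAFAC) factorization referenced above represents the joint pmf of p categorical variables as a finite mixture of product-multinomial kernels, while a Tucker decomposition allows a separate latent class index per variable; these displays reconstruct the standard forms rather than quote the thesis:

    P(y_1 = c_1, \ldots, y_p = c_p) = \sum_{h=1}^{k} \nu_h \prod_{j=1}^{p} \lambda^{(j)}_{h c_j}    (PARAFAC)

    P(y_1 = c_1, \ldots, y_p = c_p) = \sum_{h_1=1}^{k_1} \cdots \sum_{h_p=1}^{k_p} \phi_{h_1 \cdots h_p} \prod_{j=1}^{p} \lambda^{(j)}_{h_j c_j}    (Tucker)

The collapsed Tucker class proposed in Chapter 2 bridges these two extremes.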
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
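For reference, if "optimal" is read as minimizing the Kullback-Leibler divergence from the posterior \pi to a Gaussian q (one standard convention; whether the thesis orients the divergence this way is an assumption here), the minimizer matches the posterior's first two moments:

    q^* = \arg\min_{q = \mathrm{N}(\mu, \Sigma)} \mathrm{KL}(\pi \,\|\, q), \qquad \mu^* = \mathbb{E}_{\pi}[\theta], \quad \Sigma^* = \mathrm{Cov}_{\pi}(\theta)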
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
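As a concrete instance of an approximate kernel based on random subsets of data (one of the three approximation classes named above), here is a minimal sketch; the flat prior, the caching of the noisy log-likelihood, and the scalar parameter are simplifying assumptions, not the chapter's specific algorithm:

    # Random-walk Metropolis with a subsampled log-likelihood: an example
    # of an approximate transition kernel trading accuracy for speed.
    import numpy as np

    def subsampled_mh(loglik_i, theta0, data, n_steps=1000, batch=100, step=0.1):
        """loglik_i(theta, x) returns the log-likelihood of one observation x."""
        rng = np.random.default_rng(0)
        n = len(data)
        def approx_loglik(theta):
            idx = rng.choice(n, size=batch, replace=False)
            return (n / batch) * sum(loglik_i(theta, data[i]) for i in idx)
        theta, ll = theta0, approx_loglik(theta0)
        samples = []
        for _ in range(n_steps):
            prop = theta + step * rng.standard_normal()
            ll_prop = approx_loglik(prop)
            if np.log(rng.uniform()) < ll_prop - ll:   # flat prior assumed
                theta, ll = prop, ll_prop
            samples.append(theta)
        return np.array(samples)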
Data augmentation Gibbs samplers are arguably the most popular class of algorithms for approximately sampling from the posterior distribution of the parameters of generalized linear models. The truncated normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size, up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
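For concreteness, a compact version of the truncated-normal (Albert and Chib) data augmentation sampler for probit regression; the flat prior on beta is a simplifying assumption:

    # Albert-Chib data augmentation: alternately sample truncated-normal
    # latent variables z and the coefficient vector beta.
    import numpy as np
    from scipy.stats import truncnorm

    def probit_gibbs(X, y, n_iter=2000, seed=0):
        rng = np.random.default_rng(seed)
        n, p = X.shape
        XtX_inv = np.linalg.inv(X.T @ X)        # flat prior on beta (assumption)
        C = np.linalg.cholesky(XtX_inv)
        beta = np.zeros(p)
        draws = np.zeros((n_iter, p))
        for t in range(n_iter):
            m = X @ beta
            # z_i ~ N(m_i, 1) truncated to (0, inf) if y_i = 1, else (-inf, 0).
            lo = np.where(y == 1, -m, -np.inf)
            hi = np.where(y == 1, np.inf, -m)
            z = m + truncnorm.rvs(lo, hi, size=n, random_state=rng)
            # beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1}) under the flat prior.
            beta = XtX_inv @ (X.T @ z) + C @ rng.standard_normal(p)
            draws[t] = beta
        return draws

With rare events (few y = 1), the truncated draws for the successes move very little between iterations, which is the slow-mixing behavior the chapter quantifies.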
Abstract:
Habitat loss, fragmentation, and degradation threaten the World’s ecosystems and species. These, and other threats, will likely be exacerbated by climate change. Due to a limited budget for conservation, we are forced to prioritize a few areas over others. These places are selected based on their uniqueness and vulnerability. One of the most famous examples is the biodiversity hotspots: areas where large quantities of endemic species meet alarming rates of habitat loss. Most of these places are in the tropics, where species have smaller ranges, diversity is higher, and ecosystems are most threatened.
Species distributions are useful to understand ecological theory and evaluate extinction risk. Small-ranged species, or those endemic to one place, are more vulnerable to extinction than widely distributed species. However, current range maps often overestimate the distribution of species, including areas that are not within the suitable elevation or habitat for a species. Consequently, assessment of extinction risk using these maps could underestimate vulnerability.
In order to be effective in our quest to conserve the World’s most important places we must: 1) Translate global and national priorities into practical local actions, 2) Find synergies between biodiversity conservation and human welfare, 3) Evaluate the different dimensions of threats, in order to design effective conservation measures and prepare for future threats, and 4) Improve the methods used to evaluate species’ extinction risk and prioritize areas for conservation. The purpose of this dissertation is to address these points in Colombia and other global biodiversity hotspots.
In Chapter 2, I identified the global, strategic conservation priorities and then downscaled to practical local actions within the selected priorities in Colombia. I used existing range maps of 171 bird species to identify priority conservation areas that would protect the greatest number of species at risk in Colombia (endemic and small-ranged species). The Western Andes had the highest concentrations of such species—100 in total—but the lowest densities of national parks. I then adjusted the priorities for this region by refining these species' ranges to include only areas of suitable elevation and remaining habitat. The estimated ranges of these species shrank by 18–100% after accounting for habitat and suitable elevation. Setting conservation priorities on the basis of currently available range maps excluded priority areas in the Western Andes and, by extension, likely elsewhere and for other taxa. By incorporating detailed maps of remaining natural habitats, I made practical recommendations for conservation actions. One recommendation was to restore forest connections to a patch of cloud forest about to become isolated from the main Andes.
For Chapter 3, I identified areas where bird conservation met ecosystem service protection in the Central Andes of Colombia. Inspired by the November 11, 2011 landslide event near Manizales, and the current poor results of Colombia's Article 111 of Law 99 of 1993 as a conservation measure in this country, I set out to prioritize conservation and restoration areas where landslide prevention would complement bird conservation in the Central Andes. This area is one of the most biodiverse places on Earth, but also one of the most threatened. Using the case of the Rio Blanco Reserve, near Manizales, I identified areas for conservation where endemic and small-range bird diversity was high, and where landslide risk was also high. I further prioritized restoration areas by overlapping these conservation priorities with a forest cover map. Restoring forests in bare areas of high landslide risk and important bird diversity yields benefits for both biodiversity and people. I developed a simple landslide susceptibility model using slope, forest cover, aspect, and stream proximity. Using publicly available bird range maps, refined by elevation, I mapped concentrations of endemic and small-range bird species. I identified 1.54 km2 of potential restoration areas in the Rio Blanco Reserve, and 886 km2 in the Central Andes region. By prioritizing these areas, I facilitate the application of Article 111, which requires local and regional governments to invest in land purchases for the conservation of watersheds.
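A minimal sketch of a susceptibility index of the kind described, combining rescaled raster layers; the weights and the min-max rescaling are illustrative assumptions rather than the chapter's fitted model:

    # Combine slope, forest cover, aspect, and stream proximity rasters
    # into a simple landslide susceptibility index. Inputs are 2-D NumPy
    # arrays on a common grid; weights are illustrative.
    import numpy as np

    def rescale(a):
        return (a - np.nanmin(a)) / (np.nanmax(a) - np.nanmin(a))

    def susceptibility(slope_deg, forest_frac, aspect_risk, stream_dist_m,
                       weights=(0.4, 0.3, 0.1, 0.2)):
        layers = [rescale(slope_deg),            # steeper -> riskier
                  1.0 - rescale(forest_frac),    # less forest -> riskier
                  rescale(aspect_risk),          # aspect classes pre-scored
                  1.0 - rescale(stream_dist_m)]  # nearer streams -> riskier
        w = np.asarray(weights) / np.sum(weights)
        return sum(wi * Li for wi, Li in zip(w, layers))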
Chapter 4 dealt with elevational ranges of montane birds and the impact of lowland deforestation on their ranges in the Western Andes of Colombia, an important biodiversity hotspot. Using point counts and mist-nets, I surveyed six altitudinal transects spanning 2200 to 2800m. Three transects were forested from 2200 to 2800m, and three were partially deforested with forest cover only above 2400m. I compared abundance-weighted mean elevation, minimum elevation, and elevational range width. In addition to analyzing the effect of deforestation on 134 species, I tested its impact within trophic guilds and habitat preference groups. Abundance-weighted mean and minimum elevations were not significantly different between forested and partially deforested transects. Range width was marginally different: as expected, ranges were larger in forested transects. Species in different trophic guilds and habitat preference categories showed different trends. These results suggest that deforestation may affect species’ elevational ranges, even within the forest that remains. Climate change will likely exacerbate harmful impacts of deforestation on species’ elevational distributions. Future conservation strategies need to account for this by protecting connected forest tracts across a wide range of elevations.
In Chapter 5, I refine the ranges of 726 species from six biodiversity hotspots by suitable elevation and habitat. This set of 172 bird species for the Atlantic Forest, 138 for Central America, 100 for the Western Andes of Colombia, 57 for Madagascar, 102 for Sumatra, and 157 for Southeast Asia met the criteria for range size, endemism, threat, and forest use. Of these 586 species, the Red List deems 108 to be threatened: 15 critically endangered, 29 endangered, and 64 vulnerable. When ranges are refined by elevational limits and remaining forest cover, 10 of those critically endangered species have ranges < 100 km2, but so do 2 endangered, 7 vulnerable, and 8 non-threatened species. Similarly, 4 critically endangered, 20 endangered, and 12 vulnerable species have refined ranges < 5000 km2, but so do 66 non-threatened species. A striking 89% of the species that I classified into higher threat categories have < 50% of their refined ranges inside protected areas. I find that for 43% of the species I assessed, refined range sizes fall within thresholds that typically carry higher threat categories than their current assignments. I recommend these species for closer inspection by those who assess risk. These assessments are not only important on a species-by-species basis; by combining distributions of threatened species, I create maps of conservation priorities, which differ significantly from those created from unrefined ranges.
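In raster terms, the refinement step used throughout these chapters reduces to masking a range map by an elevational band and by remaining forest cover; a minimal sketch, with grid alignment and the per-cell area factor as assumptions:

    # Refine a species range raster by suitable elevation and remaining
    # forest, then report the refined range size. Inputs are aligned 2-D
    # NumPy arrays; cell_km2 converts pixels to area.
    import numpy as np

    def refine_range(range_mask, elevation_m, forest_mask,
                     elev_min, elev_max, cell_km2=0.001):
        suitable = (elevation_m >= elev_min) & (elevation_m <= elev_max)
        refined = range_mask & suitable & forest_mask
        return refined, refined.sum() * cell_km2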
Abstract:
Surveys can collect important data that inform policy decisions and drive social science research. Large government surveys collect information from the U.S. population on a wide range of topics, including demographics, education, employment, and lifestyle. Analysis of survey data presents unique challenges. In particular, one needs to account for missing data, for complex sampling designs, and for measurement error. Conceptually, a survey organization could spend lots of resources getting high-quality responses from a simple random sample, resulting in survey data that are easy to analyze. However, this scenario often is not realistic. To address these practical issues, survey organizations can leverage the information available from other sources of data. For example, in longitudinal studies that suffer from attrition, they can use the information from refreshment samples to correct for potential attrition bias. They can use information from known marginal distributions or survey design to improve inferences. They can use information from gold standard sources to correct for measurement error.
This thesis presents novel approaches to combining information from multiple sources that address the three problems described above.
The first method addresses nonignorable unit nonresponse and attrition in a panel survey with a refreshment sample. Panel surveys typically suffer from attrition, which can lead to biased inference when basing analysis only on cases that complete all waves of the panel. Unfortunately, the panel data alone cannot inform the extent of the bias due to attrition, so analysts must make strong and untestable assumptions about the missing data mechanism. Many panel studies also include refreshment samples, which are data collected from a random sample of new individuals during some later wave of the panel. Refreshment samples offer information that can be utilized to correct for biases induced by nonignorable attrition while reducing reliance on strong assumptions about the attrition process. To date, these bias correction methods have not dealt with two key practical issues in panel studies: unit nonresponse in the initial wave of the panel and in the refreshment sample itself. As we illustrate, nonignorable unit nonresponse can significantly compromise the analyst's ability to use the refreshment samples for attrition bias correction. Thus, it is crucial for analysts to assess how sensitive their inferences---corrected for panel attrition---are to different assumptions about the nature of the unit nonresponse. We present an approach that facilitates such sensitivity analyses, both for suspected nonignorable unit nonresponse in the initial wave and in the refreshment sample. We illustrate the approach using simulation studies and an analysis of data from the 2007-2008 Associated Press/Yahoo News election panel study.
The second method incorporates informative prior beliefs about marginal probabilities into Bayesian latent class models for categorical data. The basic idea is to append synthetic observations to the original data such that (i) the empirical distributions of the desired margins match those of the prior beliefs, and (ii) the values of the remaining variables are left missing. The degree of prior uncertainty is controlled by the number of augmented records. Posterior inferences can be obtained via typical MCMC algorithms for latent class models, tailored to deal efficiently with the missing values in the concatenated data. We illustrate the approach using a variety of simulations based on data from the American Community Survey, including an example of how augmented records can be used to fit latent class models to data from stratified samples.
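A minimal sketch of the augmentation step, assuming a pandas data frame and a single constrained margin; the function name and interface are hypothetical:

    # Append n_aug synthetic records whose values on the constrained margin
    # follow the prior probabilities, leaving all other variables missing.
    # Larger n_aug encodes stronger prior beliefs.
    import numpy as np
    import pandas as pd

    def augment(df, margin_col, prior_probs, n_aug, seed=0):
        """prior_probs: dict mapping category -> desired marginal probability."""
        rng = np.random.default_rng(seed)
        cats, probs = zip(*prior_probs.items())
        synth = pd.DataFrame(np.nan, index=range(n_aug), columns=df.columns)
        synth[margin_col] = rng.choice(cats, size=n_aug, p=probs)
        return pd.concat([df, synth], ignore_index=True)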
The third method leverages the information from a gold standard survey to model reporting error. Survey data are subject to reporting error when respondents misunderstand the question or accidentally select the wrong response. Sometimes survey respondents knowingly select the wrong response, for example, by reporting a higher level of education than they actually have attained. We present an approach that allows an analyst to model reporting error by incorporating information from a gold standard survey. The analyst can specify various reporting error models and assess how sensitive their conclusions are to different assumptions about the reporting error process. We illustrate the approach using simulations based on data from the 1993 National Survey of College Graduates. We use the method to impute error-corrected educational attainments in the 2010 American Community Survey using the 2010 National Survey of College Graduates as the gold standard survey.