940 results for MCMC sampling
Abstract:
The Tara Oceans Expedition (2009-2013) sampled the world's oceans on board a 36 m long schooner, collecting environmental data and organisms from viruses to planktonic metazoans for later analyses using modern sequencing and state-of-the-art imaging technologies. Tara Oceans data are particularly suited to studying the genetic, morphological and functional diversity of plankton. The present data publication contains measurements from the Continuous Surface Sampling System [CSSS] made during one campaign of the Tara Oceans Expedition. Water was pumped at the front of the vessel from ~2 m depth, then de-bubbled and circulated to a Sea-Bird TSG temperature and conductivity sensor. System maintenance (instrument cleaning, flushing) was done approximately once a week and in port between successive legs. All data were stamped with a GPS position.
Abstract:
Archaeological fish otoliths have the potential to serve as proxies for both season of site occupation and palaeoclimate conditions. By sampling along the distinctive sub-annual seasonal bands of the otolith and completing a stable isotope (δ¹⁸O, δ¹³C) analysis, variations within the fish's environment can be identified. Through the analysis of cod otoliths from two archaeological sites on Kiska Island, Gertrude Cove (KIS-010) and Witchcraft Point (KIS-005), this research evaluates a micromilling methodological approach to extracting climatic data from archaeological cod otoliths. In addition, δ¹⁸O otolith data and radiocarbon dates frame a discussion of Pacific cod harvesting, site occupation, and changing climatic conditions on Kiska Island. To aid in the interpretation of the archaeological Pacific cod results, archaeological and modern Atlantic cod otoliths were also analyzed as a component of this study. The Atlantic cod otoliths provided the methodological and interpretative framework for the study, and also served to assess the efficacy of this sampling strategy for archaeological materials and to add time-depth to existing datasets. The δ¹⁸O otolith values successfully illustrate relative variation in ambient water temperature. The Pacific cod δ¹⁸O values demonstrate a weak seasonal signal identifiable up to year 3, followed by relatively stable values until year 6/7, when values continuously increase. Based on the δ¹⁸O values, the Pacific cod were exposed to the coldest water temperatures immediately prior to capture. The lack of a clear cycle of seasonal variation and the continued increase in values towards the otolith edge obscure the season of capture and indicate that other behavioural, environmental, or methodological factors influenced the otolith δ¹⁸O values. It is suggested that Pacific cod would have been harvested throughout the year, and that the presence of cod remains in Aleutian archaeological sites cannot be used as a reliable indicator of summer occupation. In addition, when the δ¹⁸O otolith values are integrated with radiocarbon dates and known climatic regimes, it is demonstrated that climatic conditions play an integral role in the pattern of occupation at Gertrude Cove. Initial site occupation coincides with the end of a neoglacial cooling period, and the most recent and continuous occupation coincides with the end of a localized warming period and the onset of the Little Ice Age (LIA).
Abstract:
Sustainability can be indicated by a number of factors. Populations need an even age distribution to ensure a healthy equilibrium. Job opportunities must be numerous and varied to balance incomes from different employment sectors. Regions must also sustain the vital natural resources that are directly related to a place being self-sustaining. These indicators are especially relevant in Newfoundland, where people have struggled to remain in the small traditional communities that they consider to be their 'home.' The population of Corner Brook and the surrounding areas can be stratified according to the values people attach to their special place. Even though people in western Newfoundland hold strong ties to their home, some parts of the region struggle with employment, low incomes, out-migration, and dependency on declining natural resources. The aim of this paper is to present the process of designing a sampling strategy for a human values pilot survey conducted in the city of Corner Brook. It presents the theoretical background, covering the period 2002-2006, that informed the sampling strategy.
Abstract:
As the world's synchrotrons and X-FELs endeavour to meet the need to analyse ever-smaller protein crystals, there grows a requirement for a new technique to present nano-dimensional samples to the beam for X-ray diffraction experiments. The work presented here details developmental work to reconfigure the nano tweezer technology developed by Optofluidics (PA, USA) for the trapping of nano-dimensional protein crystals for X-ray crystallography experiments. The system in its standard configuration is used to trap nanoparticles for optical microscopy. It uses silicon nitride laser waveguides that bridge a microfluidic channel. These waveguides contain 180 nm apertures, enabling the system to use biologically compatible 1.6 micron wavelength laser light to trap nano-dimensional biological samples. Using conventional laser tweezers, the wavelength required to trap such nano-dimensional samples would destroy them. The system in its optical configuration has trapped protein molecules as small as 10 nanometres.
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even with the huge increases in n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n=all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
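For intuition, the latent class (PARAFAC) representation expresses the joint probability mass function of p categorical variables as a finite mixture of independent categorical distributions. The following minimal Python sketch illustrates only that representation, with made-up dimensions and parameter values; it is not the collapsed Tucker decomposition proposed in Chapter 2.

# Minimal sketch of the PARAFAC / latent-class representation of a joint pmf for
# multivariate categorical data: p(x1,...,xp) = sum_h lambda[h] * prod_j psi[j][h, x_j].
# Illustrative only; dimensions and parameter values are made up.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

p = 3                 # number of categorical variables
d = [2, 3, 4]         # number of levels of each variable
k = 5                 # number of latent classes

# Mixture weights over latent classes and per-class categorical probabilities.
lam = rng.dirichlet(np.ones(k))                                   # lambda_h, h = 1..k
psi = [rng.dirichlet(np.ones(d[j]), size=k) for j in range(p)]    # psi[j][h, c]

def cell_probability(x):
    """P(X1 = x[0], ..., Xp = x[p-1]) under the latent class (PARAFAC) model."""
    per_class = np.array([np.prod([psi[j][h, x[j]] for j in range(p)]) for h in range(k)])
    return float(lam @ per_class)

# The cell probabilities over the full contingency table sum to one.
total = sum(cell_probability(x) for x in product(*[range(dj) for dj in d]))
print(round(total, 6))  # ~1.0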
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
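As a rough point of reference for what a Gaussian approximation to a posterior looks like in practice, the sketch below computes a plain Laplace (mode-and-curvature) approximation for a toy Poisson log-linear model with a weak normal prior. This generic construction is not the KL-optimal Gaussian approximation under Diaconis-Ylvisaker priors derived in Chapter 4; the data, design matrix, and prior variance are placeholders.

# Plain Laplace Gaussian approximation to the posterior of a toy Poisson log-linear
# model with a weak N(0, tau2 I) prior. Generic stand-in for intuition only; NOT the
# KL-optimal approximation derived in Chapter 4. Data, design, and prior are made up.
import numpy as np

# Toy 2x2 contingency table flattened to cell counts, main-effects design.
y = np.array([18.0, 7.0, 12.0, 3.0])
X = np.array([[1, 0, 0],
              [1, 0, 1],
              [1, 1, 0],
              [1, 1, 1]], dtype=float)   # intercept + two main effects
tau2 = 100.0                             # weak prior variance

beta = np.zeros(X.shape[1])
for _ in range(50):                      # Newton iterations to the posterior mode
    mu = np.exp(X @ beta)
    grad = X.T @ (y - mu) - beta / tau2
    hess = -(X.T * mu) @ X - np.eye(X.shape[1]) / tau2
    beta = beta - np.linalg.solve(hess, grad)

# Gaussian approximation: N(beta_hat, (-H)^{-1}) at the mode.
posterior_mean_approx = beta
posterior_cov_approx = np.linalg.inv(-hess)
print(posterior_mean_approx, np.sqrt(np.diag(posterior_cov_approx)))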
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
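The quantity this paradigm builds on, waiting times between exceedances of a high threshold, is simple to compute from a time-indexed series. The toy sketch below only illustrates that quantity; the series, threshold level, and quantile choice are placeholders.

# Toy sketch: extracting waiting times between exceedances of a high threshold
# from a time-indexed series. The series and the threshold quantile are placeholders;
# this only illustrates the quantity the chapter builds inference on.
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_t(df=3, size=10_000)        # heavy-tailed toy series
u = np.quantile(x, 0.99)                     # high threshold (placeholder choice)

exceedance_times = np.flatnonzero(x > u)     # indices where the series exceeds u
waiting_times = np.diff(exceedance_times)    # gaps between successive exceedances

print(len(exceedance_times), waiting_times[:10])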
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
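To make the notion of an approximating transition kernel concrete, the toy sketch below runs a random-walk Metropolis chain for a normal mean whose log-likelihood is evaluated on a rescaled random subsample of the data, so each step uses a perturbed version of the exact kernel. It is meant only to illustrate the kind of subset-based approximation the framework analyzes, not the framework itself; the model, subsample size, and step size are placeholders.

# Toy sketch of an approximating MCMC kernel built from random subsets of data:
# random-walk Metropolis for a normal mean, with the log-likelihood evaluated on a
# subsample of size m and rescaled by n/m. Illustration only; not Chapter 6's framework.
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(loc=1.5, scale=1.0, size=100_000)
n, m = data.size, 1_000                       # full size and subsample size (placeholders)

def subsampled_loglik(theta, subset):
    # Rescaled subsample log-likelihood under a N(theta, 1) model with a flat prior.
    return (n / subset.size) * np.sum(-0.5 * (subset - theta) ** 2)

theta, step, chain = 0.0, 0.05, []
for _ in range(5_000):
    subset = rng.choice(data, size=m, replace=False)
    prop = theta + step * rng.standard_normal()
    # Use the same subset for both states so the acceptance comparison is consistent.
    if np.log(rng.uniform()) < subsampled_loglik(prop, subset) - subsampled_loglik(theta, subset):
        theta = prop
    chain.append(theta)

print(np.mean(chain[1_000:]))                 # should land near the true mean, 1.5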
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
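For reference, a minimal sketch of the standard truncated normal data augmentation Gibbs sampler for probit regression (Albert and Chib) is given below, with an intercept chosen so that successes are rare, the regime in which the chapter shows mixing degrades. The prior, data-generating values, and iteration counts are placeholders.

# Minimal sketch of the truncated-normal data augmentation Gibbs sampler for probit
# regression. Prior, simulated data, and iteration counts are placeholders.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(3)
n, p = 5_000, 2
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([-3.0, 0.5])                    # intercept chosen so successes are rare
y = (X @ beta_true + rng.standard_normal(n) > 0).astype(int)

B0_inv = np.eye(p) / 100.0                           # N(0, 100 I) prior on beta
V = np.linalg.inv(X.T @ X + B0_inv)                  # posterior covariance of beta given z
beta = np.zeros(p)
draws = []

for _ in range(2_000):
    # z_i | beta, y_i ~ N(x_i'beta, 1), truncated to (0, inf) if y_i = 1, (-inf, 0) if y_i = 0.
    mu = X @ beta
    lo = np.where(y == 1, -mu, -np.inf)              # truncation bounds standardized around mu
    hi = np.where(y == 1, np.inf, -mu)
    z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
    # beta | z ~ N(V X'z, V).
    beta = rng.multivariate_normal(V @ (X.T @ z), V)
    draws.append(beta.copy())

print(np.mean(draws[500:], axis=0))                  # posterior mean; mixing is slow when y is rare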
Abstract:
Surveys can collect important data that inform policy decisions and drive social science research. Large government surveys collect information from the U.S. population on a wide range of topics, including demographics, education, employment, and lifestyle. Analysis of survey data presents unique challenges. In particular, one needs to account for missing data, for complex sampling designs, and for measurement error. Conceptually, a survey organization could spend lots of resources getting high-quality responses from a simple random sample, resulting in survey data that are easy to analyze. However, this scenario often is not realistic. To address these practical issues, survey organizations can leverage the information available from other sources of data. For example, in longitudinal studies that suffer from attrition, they can use the information from refreshment samples to correct for potential attrition bias. They can use information from known marginal distributions or survey design to improve inferences. They can use information from gold standard sources to correct for measurement error.
This thesis presents novel approaches to combining information from multiple sources that address the three problems described above.
The first method addresses nonignorable unit nonresponse and attrition in a panel survey with a refreshment sample. Panel surveys typically suffer from attrition, which can lead to biased inference when basing analysis only on cases that complete all waves of the panel. Unfortunately, the panel data alone cannot inform the extent of the bias due to attrition, so analysts must make strong and untestable assumptions about the missing data mechanism. Many panel studies also include refreshment samples, which are data collected from a random sample of new individuals during some later wave of the panel. Refreshment samples offer information that can be utilized to correct for biases induced by nonignorable attrition while reducing reliance on strong assumptions about the attrition process. To date, these bias correction methods have not dealt with two key practical issues in panel studies: unit nonresponse in the initial wave of the panel and in the refreshment sample itself. As we illustrate, nonignorable unit nonresponse can significantly compromise the analyst's ability to use the refreshment samples for attrition bias correction. Thus, it is crucial for analysts to assess how sensitive their inferences, corrected for panel attrition, are to different assumptions about the nature of the unit nonresponse. We present an approach that facilitates such sensitivity analyses, both for suspected nonignorable unit nonresponse in the initial wave and in the refreshment sample. We illustrate the approach using simulation studies and an analysis of data from the 2007-2008 Associated Press/Yahoo News election panel study.
The second method incorporates informative prior beliefs about marginal probabilities into Bayesian latent class models for categorical data. The basic idea is to append synthetic observations to the original data such that (i) the empirical distributions of the desired margins match those of the prior beliefs, and (ii) the values of the remaining variables are left missing. The degree of prior uncertainty is controlled by the number of augmented records. Posterior inferences can be obtained via typical MCMC algorithms for latent class models, tailored to deal efficiently with the missing values in the concatenated data. We illustrate the approach using a variety of simulations based on data from the American Community Survey, including an example of how augmented records can be used to fit latent class models to data from stratified samples.
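A minimal sketch of the augmentation step described above is given below: synthetic records whose values on one chosen margin reproduce the assumed prior beliefs, with all remaining variables left missing for the latent class MCMC to handle. The variable names, the margin, its prior probabilities, and the number of augmented records are all placeholders.

# Minimal sketch of appending synthetic records that encode prior beliefs about one
# margin, leaving the other variables missing. All names and numbers are placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)

# Original categorical data (toy stand-in for survey microdata).
original = pd.DataFrame({
    "education": rng.choice(["HS", "BA", "Grad"], size=500),
    "employment": rng.choice(["employed", "unemployed"], size=500),
    "region": rng.choice(["NE", "S", "MW", "W"], size=500),
})

# Prior belief about the marginal distribution of `education`; the number of
# augmented records controls how strongly the prior pulls the posterior.
prior_margin = {"HS": 0.40, "BA": 0.45, "Grad": 0.15}
n_aug = 200
levels = list(prior_margin)
counts = (np.array([prior_margin[l] for l in levels]) * n_aug).round().astype(int)

aug = pd.DataFrame({
    "education": np.repeat(levels, counts),   # margin matches the prior belief exactly
    "employment": pd.NA,                      # remaining variables left missing,
    "region": pd.NA,                          # handled by the MCMC's missing-data step
})

concatenated = pd.concat([original, aug], ignore_index=True)
print(concatenated["education"].value_counts(normalize=True))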
The third method leverages the information from a gold standard survey to model reporting error. Survey data are subject to reporting error when respondents misunderstand the question or accidentally select the wrong response. Sometimes survey respondents knowingly select the wrong response, for example, by reporting a higher level of education than they actually have attained. We present an approach that allows an analyst to model reporting error by incorporating information from a gold standard survey. The analyst can specify various reporting error models and assess how sensitive their conclusions are to different assumptions about the reporting error process. We illustrate the approach using simulations based on data from the 1993 National Survey of College Graduates. We use the method to impute error-corrected educational attainments in the 2010 American Community Survey using the 2010 National Survey of College Graduates as the gold standard survey.
Abstract:
This material is based upon work supported by the National Science Foundation through the Florida Coastal Everglades Long-Term Ecological Research program under Cooperative Agreements #DBI-0620409 and #DEB-9910514. This image is made available for non-commercial or educational use only.