27 results for Inference.

at Duke University


Relevance:

20.00%

Publisher:

Abstract:

Aquifer denitrification is among the most poorly constrained fluxes in global and regional nitrogen budgets. The few direct measurements of denitrification in groundwaters provide limited information about its spatial and temporal variability, particularly at the scale of whole aquifers. Uncertainty in estimates of denitrification may also lead to underestimates of its effect on isotopic signatures of inorganic N, and thereby confound the inference of N source from these data. In this study, our objectives are to quantify the magnitude and variability of denitrification in the Upper Floridan Aquifer (UFA) and evaluate its effect on N isotopic signatures at the regional scale. Using dual noble gas tracers (Ne, Ar) to generate physical predictions of N2 gas concentrations for 112 observations from 61 UFA springs, we show that excess (i.e. denitrification-derived) N2 is highly variable in space and inversely correlated with dissolved oxygen (O2). Negative relationships between O2 and δ15N-NO3- across a larger dataset of 113 springs, well-constrained isotopic fractionation coefficients, and strong 15N:18O covariation further support inferences of denitrification in this uniquely organic-matter-poor system. Despite relatively low average rates, denitrification accounted for 32% of estimated aquifer N inputs across all sampled UFA springs. Back-calculations of source δ15N-NO3- based on denitrification progression suggest that isotopically enriched nitrate (NO3-) in many springs of the UFA reflects groundwater denitrification rather than urban- or animal-derived inputs. © Author(s) 2012.
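
To make the two calculations just described concrete, here is a minimal sketch assuming a closed-system Rayleigh fractionation model and an illustrative enrichment factor; none of the numbers below come from the study.

```python
# Minimal sketch (not the authors' code): excess N2 relative to the physical
# baseline, and a Rayleigh back-calculation of source d15N-NO3-. The enrichment
# factor and all numerical inputs are illustrative assumptions.
import math

def excess_n2(measured_n2, equilibrium_n2, excess_air_n2):
    """Denitrification-derived N2 = measured N2 minus the physically predicted
    component (air-water equilibrium plus excess air), the part constrained by
    the Ne/Ar tracers. Units are whatever the inputs share (e.g. mg N2 per L)."""
    return measured_n2 - equilibrium_n2 - excess_air_n2

def source_d15n(measured_d15n, fraction_no3_remaining, epsilon=-15.0):
    """Closed-system Rayleigh back-calculation:
    d15N_source = d15N_measured - epsilon * ln(f), where f is the fraction of
    the initial NO3- pool remaining and epsilon is an assumed enrichment factor
    in permil (negative for denitrification)."""
    return measured_d15n - epsilon * math.log(fraction_no3_remaining)

print(excess_n2(20.5, 17.0, 1.5))   # 2.0 units of "excess" N2 attributed to denitrification
print(source_d15n(9.0, 0.4))        # an enriched residual maps back to a lighter source (~ -4.7)
```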

Relevance:

20.00%

Publisher:

Abstract:

Technological advances in genotyping have given rise to hypothesis-based association studies of increasing scope. As a result, the scientific hypotheses addressed by these studies have become more complex and more difficult to address using existing analytic methodologies. Obstacles to analysis include inference in the face of multiple comparisons, complications arising from correlations among the SNPs (single nucleotide polymorphisms), choice of their genetic parametrization and missing data. In this paper we present an efficient Bayesian model search strategy that searches over the space of genetic markers and their genetic parametrization. The resulting method for Multilevel Inference of SNP Associations, MISA, allows computation of multilevel posterior probabilities and Bayes factors at the global, gene and SNP level, with the prior distribution on SNP inclusion in the model providing an intrinsic multiplicity correction. We use simulated data sets to characterize MISA's statistical power, and show that MISA has higher power to detect association than standard procedures. Using data from the North Carolina Ovarian Cancer Study (NCOCS), MISA identifies variants that were not identified by standard methods and have been externally "validated" in independent studies. We examine sensitivity of the NCOCS results to prior choice and method for imputing missing data. MISA is available in an R package on CRAN.
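
A minimal sketch of the multiplicity-correction idea, not the MISA implementation (which handles case-control outcomes, genetic parametrizations, and missing data): model averaging over a few SNPs with a beta-binomial prior on model size, treating the outcome as Gaussian and approximating each model's marginal likelihood by BIC.

```python
# Illustrative sketch only: Bayesian model averaging over a small SNP set with a
# beta-binomial prior on the number of included SNPs, which is what supplies the
# intrinsic multiplicity correction. Gaussian outcome and BIC are simplifications.
import itertools
import math
import numpy as np
from scipy.special import betaln

rng = np.random.default_rng(0)
n, p = 200, 6
X = rng.integers(0, 3, size=(n, p)).astype(float)    # additive genotype codes 0/1/2
y = 0.6 * X[:, 1] + rng.normal(size=n)                # SNP 1 is truly associated

def log_marglik_bic(y, X_model):
    """BIC approximation to a model's log marginal likelihood (Gaussian outcome)."""
    n_obs = len(y)
    design = np.column_stack([np.ones(n_obs), X_model]) if X_model.size else np.ones((n_obs, 1))
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    rss = np.sum((y - design @ beta) ** 2)
    return -0.5 * n_obs * math.log(rss / n_obs) - 0.5 * design.shape[1] * math.log(n_obs)

def log_prior(k, p, a=1.0, b=1.0):
    """Beta-binomial prior on model size: larger model spaces get automatic shrinkage."""
    return betaln(k + a, p - k + b) - betaln(a, b)

log_post = {}
for subset in itertools.chain.from_iterable(
        itertools.combinations(range(p), k) for k in range(p + 1)):
    log_post[subset] = log_marglik_bic(y, X[:, list(subset)]) + log_prior(len(subset), p)

log_z = np.logaddexp.reduce(list(log_post.values()))
inclusion = {j: math.exp(np.logaddexp.reduce([lp for s, lp in log_post.items() if j in s]) - log_z)
             for j in range(p)}
print({j: round(q, 3) for j, q in inclusion.items()})  # marginal SNP inclusion probabilities
```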

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: Nonparametric Bayesian techniques have been developed recently to extend the sophistication of factor models, allowing one to infer the number of appropriate factors from the observed data. We consider such techniques for sparse factor analysis, with application to gene-expression data from three virus challenge studies. Particular attention is placed on employing the Beta Process (BP), the Indian Buffet Process (IBP), and related sparseness-promoting techniques to infer a proper number of factors. The posterior density function on the model parameters is computed using Gibbs sampling and variational Bayesian (VB) analysis. RESULTS: Time-evolving gene-expression data are considered for respiratory syncytial virus (RSV), rhinovirus, and influenza, using blood samples from healthy human subjects. These data were acquired in three challenge studies, each executed after receiving institutional review board (IRB) approval from Duke University. Comparisons are made between several alternative means of performing nonparametric factor analysis on these data, with comparisons as well to sparse-PCA and Penalized Matrix Decomposition (PMD), closely related non-Bayesian approaches. CONCLUSIONS: Applying the Beta Process to the factor scores, or to the singular values of a pseudo-SVD construction, the proposed algorithms infer the number of factors in gene-expression data. For real data the "true" number of factors is unknown; in our simulations we consider a range of noise variances, and the proposed Bayesian models inferred the number of factors accurately relative to other methods in the literature, such as sparse-PCA and PMD. We have also identified a "pan-viral" factor of importance for each of the three viruses considered in this study. We have identified a set of genes associated with this pan-viral factor, of interest for early detection of such viruses based upon the host response, as quantified via gene-expression data.
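
The sparseness-promoting mechanism can be seen in isolation with the usual finite beta-Bernoulli approximation to the Beta Process; the sketch below shows only that mechanism, with assumed hyperparameters, and omits the Gibbs/VB inference that couples the indicators to loadings and factor scores.

```python
# Sketch of the sparseness mechanism only (not the paper's Gibbs or VB code):
# a finite K-atom beta-Bernoulli approximation to the Beta Process. Each row of
# Z records which of K candidate factors a sample uses; as K grows, the expected
# number of active factors stays small, which is how the number of factors can
# be inferred rather than fixed in advance. Hyperparameters a, b are assumptions.
import numpy as np

rng = np.random.default_rng(1)
K, n_samples, a, b = 100, 50, 3.0, 1.0
pi = rng.beta(a / K, b * (K - 1) / K, size=K)     # per-factor usage probabilities
Z = rng.binomial(1, pi, size=(n_samples, K))      # binary factor-usage indicators

active = np.flatnonzero(Z.sum(axis=0) > 0)        # factors used by at least one sample
print("candidate factors:", K, "-> active factors:", active.size)
# In the full models, Z is sampled jointly with the loadings and factor scores,
# so the data decide which candidate factors survive.
```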

Relevance:

20.00%

Publisher:

Abstract:

Determining how information flows along anatomical brain pathways is a fundamental requirement for understanding how animals perceive their environments, learn, and behave. Attempts to reveal such neural information flow have been made using linear computational methods, but neural interactions are known to be nonlinear. Here, we demonstrate that a dynamic Bayesian network (DBN) inference algorithm we originally developed to infer nonlinear transcriptional regulatory networks from gene expression data collected with microarrays is also successful at inferring nonlinear neural information flow networks from electrophysiology data collected with microelectrode arrays. The inferred networks we recover from the songbird auditory pathway are correctly restricted to a subset of known anatomical paths, are consistent with timing of the system, and reveal both the importance of reciprocal feedback in auditory processing and greater information flow to higher-order auditory areas when birds hear natural as opposed to synthetic sounds. A linear method applied to the same data incorrectly produces networks with information flow to non-neural tissue and over paths known not to exist. To our knowledge, this study represents the first biologically validated demonstration of an algorithm to successfully infer neural information flow networks.
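
For readers unfamiliar with how such a network is scored, the sketch below evaluates a single lagged edge with a Dirichlet-multinomial marginal likelihood on discretized signals; it conveys the flavour of Bayesian-network edge scoring on synthetic data, under assumed bin counts, and is not the authors' algorithm, which searches over multi-parent sets on microelectrode-array recordings.

```python
# Minimal edge-scoring sketch: each candidate edge parent(t-1) -> child(t) gets a
# Bayes-factor-style score, the Dirichlet-multinomial marginal likelihood of the
# discretized child given the lagged parent minus the no-parent marginal likelihood.
# Channel data, bin count, and lag are assumptions for illustration.
import numpy as np
from scipy.special import gammaln

def dirichlet_multinomial_logml(counts, alpha=1.0):
    """Log marginal likelihood of categorical counts under a symmetric Dirichlet prior."""
    counts = np.asarray(counts, dtype=float)
    K = counts.size
    return (gammaln(K * alpha) - gammaln(K * alpha + counts.sum())
            + np.sum(gammaln(counts + alpha)) - K * gammaln(alpha))

def edge_score(child, parent, bins=3):
    """log p(child_t | parent_{t-1}) - log p(child_t); positive values favour the edge."""
    edges_c = np.quantile(child, np.linspace(0, 1, bins + 1)[1:-1])
    edges_p = np.quantile(parent, np.linspace(0, 1, bins + 1)[1:-1])
    c_t = np.digitize(child, edges_c)[1:]
    p_lag = np.digitize(parent, edges_p)[:-1]
    with_parent = sum(
        dirichlet_multinomial_logml(np.bincount(c_t[p_lag == s], minlength=bins))
        for s in range(bins))
    without_parent = dirichlet_multinomial_logml(np.bincount(c_t, minlength=bins))
    return with_parent - without_parent

rng = np.random.default_rng(2)
upstream = rng.normal(size=500)
downstream = np.roll(np.tanh(upstream), 1) + 0.3 * rng.normal(size=500)  # nonlinear, lagged
print(edge_score(downstream, upstream))   # true direction: clearly positive
print(edge_score(upstream, downstream))   # reverse direction: near zero or negative
```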

Relevance:

20.00%

Publisher:

Abstract:

The correlation between diet and dental topography is of importance to paleontologists seeking to diagnose ecological adaptations in extinct taxa. Although the subject is well represented in the literature, few studies directly compare methods or evaluate dietary signals conveyed by both upper and lower molars. Here, we address this gap in our knowledge by comparing the efficacy of three measures of functional morphology for classifying an ecologically diverse sample of thirteen medium- to large-bodied platyrrhines by diet category (e.g., folivore, frugivore, hard object feeder). We used the Shearing Quotient (SQ), an index derived from linear measurements of molar cutting edges, and two indices of crown surface topography, Occlusal Relief (OR) and Relief Index (RFI). Using SQ, OR, and RFI, individuals were then classified by dietary category using Discriminant Function Analysis. Both upper and lower molar variables produce high classification rates in assigning individuals to diet categories, but lower molars are consistently more successful. SQs yield the highest classification rates. RFI and OR generally perform above chance. Upper molar RFI has a success rate below the level of chance. Adding molar length enhances the discriminatory power for all variables. We conclude that upper molar SQs are useful for dietary reconstruction, especially when combined with body size information. Additionally, we find that among our sample of platyrrhines, SQ remains the strongest predictor of diet, while RFI is less useful at signaling dietary differences in the absence of body size information. The study demonstrates new ways of inferring the diets of extinct platyrrhine primates when both upper and lower molars are available, or for taxa known only from upper molars. The techniques are useful in reconstructing diet in stem representatives of the anthropoid clade, who share key aspects of molar morphology with extant platyrrhines.
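
Both relief indices have simple geometric definitions, and the classification step is an ordinary discriminant analysis; the sketch below illustrates both, with a toy mesh, an invented training table, and made-up diet values standing in for the study's measurements.

```python
# Hedged sketch (not the authors' pipeline): surface-relief indices from a
# triangulated crown surface, then Discriminant Function Analysis on (SQ, OR, RFI).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def relief_indices(vertices, faces):
    """OR = 3D crown area / 2D projected (occlusal) area; RFI = ln(sqrt(3D)/sqrt(2D))."""
    tri = vertices[faces]                                    # (n_faces, 3, 3)
    cross = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    area3d = 0.5 * np.linalg.norm(cross, axis=1).sum()
    area2d = 0.5 * np.abs(cross[:, 2]).sum()                 # area of the xy-plane projection
    return area3d / area2d, np.log(np.sqrt(area3d) / np.sqrt(area2d))

# A single tilted triangle has more 3D area than its occlusal projection.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 1.0]])
print(relief_indices(verts, np.array([[0, 1, 2]])))

# Discriminant Function Analysis on (SQ, OR, RFI) with invented training values.
features = np.array([[ 4.2, 2.8, 0.66], [ 3.9, 2.7, 0.63], [ 4.6, 2.9, 0.68],   # "folivore"
                     [ 0.1, 2.2, 0.52], [-0.4, 2.1, 0.50], [ 0.5, 2.3, 0.54]])  # "frugivore"
diet = ["folivore"] * 3 + ["frugivore"] * 3
dfa = LinearDiscriminantAnalysis().fit(features, diet)
print(dfa.predict([[3.5, 2.6, 0.61]]))                        # classify a new specimen
```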

Relevance:

20.00%

Publisher:

Abstract:

We recently developed an approach for testing the accuracy of network inference algorithms by applying them to biologically realistic simulations with known network topology. Here, we seek to determine the degree to which the network topology and data sampling regime influence the ability of our Bayesian network inference algorithm, NETWORKINFERENCE, to recover gene regulatory networks. NETWORKINFERENCE performed well at recovering feedback loops and multiple targets of a regulator with small amounts of data, but required more data to recover multiple regulators of a gene. When collecting the same number of data samples at different intervals from the system, the best recovery was produced by sampling intervals long enough that sampling covered the propagation of regulation through the network, but not so long that the intervals missed internal dynamics. These results further elucidate the possibilities and limitations of network inference based on biological data.

Relevance:

20.00%

Publisher:

Abstract:

MOTIVATION: Although many network inference algorithms have been presented in the bioinformatics literature, no suitable approach has been formulated for evaluating their effectiveness at recovering models of complex biological systems from limited data. To overcome this limitation, we propose an approach to evaluate network inference algorithms according to their ability to recover a complex functional network from biologically reasonable simulated data. RESULTS: We designed a simulator to generate data representing a complex biological system at multiple levels of organization: behaviour, neural anatomy, brain electrophysiology, and gene expression of songbirds. About 90% of the simulated variables are unregulated by other variables in the system and are included simply as distracters. We sampled the simulated data at intervals as one would sample from a biological system in practice, and then used the sampled data to evaluate the effectiveness of an algorithm we developed for functional network inference. We found that our algorithm is highly effective at recovering the functional network structure of the simulated system, including the irrelevance of unregulated variables, from sampled data alone. To assess the reproducibility of these results, we tested our inference algorithm on 50 separately simulated data sets, and it consistently recovered the complex functional network structure underlying the simulated data almost perfectly. To our knowledge, this is the first approach for evaluating the effectiveness of functional network inference algorithms at recovering models from limited data. Our simulation approach also enables researchers to design, a priori, experiments and data-collection protocols that are amenable to functional network inference.

Relevance:

20.00%

Publisher:

Abstract:

Transcriptional regulation has been studied intensively in recent decades. One important aspect of this regulation is the interaction between regulatory proteins, such as transcription factors (TFs) and nucleosomes, and the genome. Different high-throughput techniques have been invented to map these interactions genome-wide, including ChIP-based methods (ChIP-chip, ChIP-seq, etc.), nuclease digestion methods (DNase-seq, MNase-seq, etc.), and others. However, a single experimental technique often provides only partial and noisy information about the whole picture of protein-DNA interactions. Therefore, the overarching goal of this dissertation is to provide computational developments for jointly modeling different experimental datasets to achieve holistic inference of the protein-DNA interaction landscape.

We first present a computational framework that can incorporate the protein binding information in MNase-seq data into a thermodynamic model of protein-DNA interaction. We use a correlation-based objective function to model the MNase-seq data and a Markov chain Monte Carlo method to maximize the function. Our results show that the inferred protein-DNA interaction landscape is concordant with the MNase-seq data and provides a mechanistic explanation for the experimentally collected MNase-seq fragments. Our framework is flexible and can easily incorporate other data sources. To demonstrate this flexibility, we use prior distributions to integrate experimentally measured protein concentrations.
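
A toy version of that fitting strategy, a correlation objective maximized by a Metropolis-style random walk, is sketched below; the one-parameter "thermodynamic model", the synthetic coverage profile, and the tempering constant are all stand-ins rather than the dissertation's actual model.

```python
# Toy sketch of the strategy described above (not the dissertation code): maximize
# a Pearson-correlation objective between predicted occupancy and observed MNase-seq
# coverage with a Metropolis-style random walk over the model's single parameter.
import numpy as np

rng = np.random.default_rng(3)
positions = np.arange(200)
observed = np.exp(-((positions - 80) ** 2) / 200.0) + 0.05 * rng.random(200)  # fake coverage

def predicted_occupancy(center, width=200.0):
    """Stand-in 'thermodynamic' prediction: occupancy peaked at a putative binding site."""
    return np.exp(-((positions - center) ** 2) / width)

def objective(center):
    """Pearson correlation between the predicted and observed profiles."""
    return np.corrcoef(predicted_occupancy(center), observed)[0, 1]

center, score = 10.0, objective(10.0)
for _ in range(2000):                                        # Metropolis-style maximization
    proposal = center + rng.normal(scale=5.0)
    new_score = objective(proposal)
    if np.log(rng.random()) < 50.0 * (new_score - score):    # tempered acceptance rule
        center, score = proposal, new_score
print(round(center, 1), round(score, 3))                     # should approach the true site near 80
```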

We also study the ability of DNase-seq data to position nucleosomes. Traditionally, DNase-seq has only been widely used to identify DNase hypersensitive sites, which tend to be open chromatin regulatory regions devoid of nucleosomes. We reveal for the first time that DNase-seq datasets also contain substantial information about nucleosome translational positioning, and that existing DNase-seq data can be used to infer nucleosome positions with high accuracy. We develop a Bayes-factor-based nucleosome scoring method to position nucleosomes using DNase-seq data. Our approach utilizes several effective strategies to extract nucleosome positioning signals from the noisy DNase-seq data, including jointly modeling data points across the nucleosome body and explicitly modeling the quadratic and oscillatory DNase I digestion pattern on nucleosomes. We show that our DNase-seq-based nucleosome map is highly consistent with previous high-resolution maps. We also show that the oscillatory DNase I digestion pattern is useful in revealing the nucleosome rotational context around TF binding sites.
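
The scoring idea can be conveyed with a deliberately simplified Bayes-factor calculation: DNase I cut counts across a candidate 147-bp window are compared under an assumed protected, oscillatory rate profile versus a flat background. The profile shape, rates, and the roughly 10.3-bp period below are illustrative assumptions, not the dissertation's fitted model.

```python
# Illustrative Bayes-factor style nucleosome score: Poisson likelihoods of DNase I
# cuts under a depressed, oscillatory "nucleosome" rate profile versus a flat
# background rate. All parameters are assumptions for illustration.
import numpy as np
from scipy.stats import poisson

WINDOW = 147
offsets = np.arange(WINDOW) - WINDOW // 2

def nucleosome_profile(base_rate):
    """Assumed protected-plus-oscillatory cut-rate profile across the nucleosome body."""
    protection = 0.3                                         # overall depression of cutting
    oscillation = 1.0 + 0.5 * np.cos(2 * np.pi * offsets / 10.3)
    return base_rate * protection * oscillation

def log_bayes_factor(cuts, flanking_rate):
    """log p(cuts | nucleosome profile) - log p(cuts | flat background)."""
    lam_nuc = nucleosome_profile(flanking_rate)
    lam_bg = np.full(WINDOW, flanking_rate)
    return poisson.logpmf(cuts, lam_nuc).sum() - poisson.logpmf(cuts, lam_bg).sum()

rng = np.random.default_rng(4)
nucleosomal = rng.poisson(nucleosome_profile(2.0))           # simulated nucleosomal window
open_chromatin = rng.poisson(np.full(WINDOW, 2.0))           # simulated open chromatin
print(log_bayes_factor(nucleosomal, 2.0))                    # strongly positive
print(log_bayes_factor(open_chromatin, 2.0))                 # strongly negative
```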

Finally, we present a state-space model (SSM) for jointly modeling different kinds of genomic data to provide an accurate view of the protein-DNA interaction landscape. We also provide an efficient expectation-maximization algorithm to learn model parameters from data. We first show in simulation studies that the SSM can effectively recover underlying true protein binding configurations. We then apply the SSM to model real genomic data (both DNase-seq and MNase-seq data). By incrementally increasing the types of genomic data in the SSM, we show that different data types contribute complementary information for the inference of the protein-binding landscape and that the most accurate inference comes from modeling all available datasets.

This dissertation provides a foundation for future research by taking a step toward genome-wide inference of the protein-DNA interaction landscape through data integration.

Relevance:

20.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even as n grows enormously in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the grounds that "n=all" is therefore of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. For the first, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been the major focus of previous theoretical work in this area. For the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the emphasis is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
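
For reference, the two standard factorizations being bridged are, in common notation (this is textbook background, not the chapter's new collapsed Tucker form):

```latex
% PARAFAC / latent-class form: a single latent index h with k levels
p(y_1 = c_1, \dots, y_p = c_p) = \sum_{h=1}^{k} \nu_h \prod_{j=1}^{p} \lambda^{(j)}_{h c_j}

% Tucker form: one latent index per variable, combined through a core tensor g
p(y_1 = c_1, \dots, y_p = c_p) = \sum_{h_1=1}^{k_1} \cdots \sum_{h_p=1}^{k_p}
    g_{h_1 \cdots h_p} \prod_{j=1}^{p} \lambda^{(j)}_{h_j c_j}
```

PARAFAC thus corresponds to a nonnegative low-rank representation of the probability tensor, whereas log-linear models reduce dimension through sparsity in interaction terms; the collapsed Tucker class described above sits between the two tensor forms.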

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and we provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed-form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
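
As a concrete, deliberately naive instance of the kind of approximating kernel studied in Chapter 6, the sketch below runs random-walk Metropolis for a Gaussian mean while evaluating the log-likelihood on a random subset of the data at each step; the model, subset size, and step size are arbitrary illustrative choices, and the chapter's framework for deciding how much approximation error to tolerate is not reproduced here.

```python
# Naive subsampling Metropolis: an approximate transition kernel in which each
# accept/reject step sees only a random subset of the data, scaled up by n/m.
# The subset size m trades per-step cost against kernel approximation error.
import numpy as np

rng = np.random.default_rng(7)
n, m = 100_000, 1_000
data = rng.normal(loc=1.5, scale=1.0, size=n)    # true mean 1.5, flat prior on theta

theta, samples = 0.0, []
for _ in range(5000):
    proposal = theta + 0.01 * rng.normal()
    subset = data[rng.integers(0, n, size=m)]    # fresh random subset each iteration
    scale = n / m
    ll_current = scale * np.sum(-0.5 * (subset - theta) ** 2)
    ll_proposal = scale * np.sum(-0.5 * (subset - proposal) ** 2)
    if np.log(rng.random()) < ll_proposal - ll_current:
        theta = proposal
    samples.append(theta)
print(np.mean(samples[1000:]))   # near the true mean, with extra error from the approximation
```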

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
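
For readers who have not seen these samplers, below is the textbook Albert-Chib truncated-normal data augmentation sampler for probit regression with a flat prior, set up on simulated data with a rare outcome so that the slow mixing discussed above can be observed; it is a standard construction, not the chapter's code.

```python
# Textbook Albert-Chib data augmentation for probit regression (flat prior on beta).
# The intercept makes y = 1 rare, the regime in which such chains mix slowly.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(5)
n, p = 1000, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-2.5, 0.5])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

XtX_inv = np.linalg.inv(X.T @ X)
chol = np.linalg.cholesky(XtX_inv)
beta = np.zeros(p)
draws = []
for it in range(2000):
    # 1. Latent utilities z_i from normals truncated according to the observed labels.
    mean = X @ beta
    lower = np.where(y == 1, -mean, -np.inf)     # standardized bound: z > 0 when y = 1
    upper = np.where(y == 1, np.inf, -mean)      # standardized bound: z < 0 when y = 0
    z = mean + truncnorm.rvs(lower, upper, size=n, random_state=rng)
    # 2. beta from its conjugate Gaussian full conditional N((X'X)^{-1} X'z, (X'X)^{-1}).
    beta = XtX_inv @ (X.T @ z) + chol @ rng.normal(size=p)
    draws.append(beta.copy())
draws = np.array(draws[500:])
print(draws.mean(axis=0), y.mean())              # posterior mean of beta and the event rate
```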

Relevance:

20.00%

Publisher:

Abstract:

Advances in three related areas (state-space modeling, sequential Bayesian learning, and decision analysis) are addressed, together with the statistical challenges of scalability and associated dynamic sparsity. The key theme that ties the three areas together is Bayesian model emulation: solving challenging analysis and computational problems using creative model emulators. This idea defines theoretical and applied advances in non-linear, non-Gaussian state-space modeling, dynamic sparsity, decision analysis, and statistical computation, across linked contexts of multivariate time series and dynamic network studies. Examples and applications in financial time series and portfolio analysis, macroeconomics, and internet studies from computational advertising demonstrate the utility of the core methodological innovations.

Chapter 1 summarizes the three areas and the key idea of emulation within them. Chapter 2 discusses the sequential analysis of latent threshold models, using emulating models that allow analytical filtering to enhance the efficiency of posterior sampling. Chapter 3 examines the emulator model in decision analysis, the synthetic model, which is equivalent to the loss function in the original minimization problem, and shows its performance in the context of sequential portfolio optimization. Chapter 4 describes a method for modeling streaming count data observed on a large network that relies on emulating the whole, dependent network model with independent, conjugate sub-models customized to each set of flows. Chapter 5 reviews these advances and offers concluding remarks.

Relevance:

10.00%

Publisher:

Abstract:

PREMISE OF THE STUDY: The Sphagnopsida, an early-diverging lineage of mosses (phylum Bryophyta), are morphologically and ecologically unique and have profound impacts on global climate. The Sphagnopsida are currently classified in two genera, Sphagnum (peat mosses) with some 350-500 species and Ambuchanania with one species. An analysis of phylogenetic relationships among species and genera in the Sphagnopsida was conducted to resolve the major lineages and the relationships among species within the class. • METHODS: Phylogenetic analyses of nucleotide sequences from the nuclear, plastid, and mitochondrial genomes (11,704 nucleotides total) were conducted using maximum likelihood and Bayesian inference, employing seven substitution models of varying complexity. • KEY RESULTS: Phylogenetic analyses resolved three lineages within the Sphagnopsida: (1) Sphagnum sericeum, (2) S. inretortum plus Ambuchanania leucobryoides, and (3) all remaining species of Sphagnum. Sister-group relationships among these three clades could not be resolved, but the phylogenetic results indicate that the highly divergent morphology of A. leucobryoides is derived within the Sphagnopsida rather than plesiomorphic. A new classification is proposed for class Sphagnopsida, with one order (Sphagnales), three families, and four genera. • CONCLUSIONS: The Sphagnopsida are an old lineage within the phylum Bryophyta, but the extant species of Sphagnum represent a relatively recent radiation. It is likely that additional species critical to understanding the evolution of peat mosses await discovery, especially in the Southern Hemisphere.

Relevance:

10.00%

Publisher:

Abstract:

PREMISE OF THE STUDY: The Frullania tamarisci complex includes eight Holarctic liverwort species. One of these, F. asagrayana, is distributed broadly throughout eastern North America from Canada to the Gulf Coast. Preliminary genetic data suggested that the species includes two groups of populations. This study was designed to test whether the two groups are reproductively isolated biological species. • METHODS: Eighty-eight samples from across the range of F. asagrayana, plus 73 samples from one population, were genotyped for 13 microsatellite loci. Sequences for two plastid loci and nrITS were obtained from 13 accessions. Genetic data were analyzed using coalescent models and Bayesian inference. • KEY RESULTS: Frullania asagrayana is sequence-invariant at the two plastid loci and ITS2, but two clear groups were resolved by microsatellites. The two groups are largely reproductively isolated, but there is a low level of gene flow from the southern to the northern group. No gene flow was detected in the other direction. A local population was heterogeneous but displayed strong genetic structure. • CONCLUSIONS: The genetic structure of F. asagrayana in eastern North America reflects morphologically cryptic differentiation between reproductively isolated groups of populations, near-panmixis within groups, and clonal propagation at local scales. Reproductive isolation between groups that are invariant at the level of nucleotide sequences shows that caution must be exercised in making taxonomic and evolutionary inferences from reciprocal monophyly (or lack thereof) between putative species.

Relevance:

10.00%

Publisher:

Abstract:

We propose an estimation-theoretic approach to the inference of an incoherent 3D scattering density from 2D scattered speckle field measurements. The object density is derived from the covariance of the speckle field. The inference is performed by a constrained optimization technique inspired by compressive sensing theory. Experimental results demonstrate and verify the performance of our estimates.
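
The constrained-optimization step can be illustrated with a generic sparse-recovery sketch: nonnegative, l1-regularized least squares solved by iterative soft-thresholding. The random sensing matrix below is only a placeholder for the true mapping from the 3D scattering density to 2D speckle-covariance measurements, which is not modeled here.

```python
# Conceptual sketch of compressive-sensing style recovery (not the paper's forward model):
# ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1 subject to x >= 0.
import numpy as np

rng = np.random.default_rng(6)
n_meas, n_voxels = 120, 400
A = rng.normal(size=(n_meas, n_voxels)) / np.sqrt(n_meas)                # placeholder sensing operator
x_true = np.zeros(n_voxels)
x_true[rng.choice(n_voxels, 8, replace=False)] = rng.uniform(1, 2, 8)    # sparse scatterers
y = A @ x_true + 0.01 * rng.normal(size=n_meas)

def ista(A, y, lam=0.02, iters=500):
    """Iterative soft-thresholding with a nonnegativity constraint on the density."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        v = x - step * (A.T @ (A @ x - y))        # gradient step on the quadratic term
        x = np.maximum(v - step * lam, 0.0)       # nonnegative soft-thresholding
    return x

x_hat = ista(A, y)
print("recovered support:", np.flatnonzero(x_hat > 0.1))
print("true support:     ", np.flatnonzero(x_true))
```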

Relevance:

10.00%

Publisher:

Abstract:

Previous studies have shown that the isoplanatic distortion due to turbulence and the image of a remote object may be jointly estimated from the 4D mutual intensity across an aperture. This Letter shows that decompressive inference on a 2D slice of the 4D mutual intensity, as measured by a rotational shear interferometer, is sufficient for estimation of sparse objects imaged through turbulence. The 2D slice is processed using an iterative algorithm that alternates between estimating the sparse objects and estimating the turbulence-induced phase screen. This approach may enable new systems that infer object properties through turbulence without exhaustive sampling of coherence functions.