3 results for local-to-zero analysis
at Duke University
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, even though uncertainty quantification remains essential in the sciences, where the number of parameters to estimate often exceeds the sample size despite the huge increases in n typical of many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the grounds that "n=all" is therefore of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation; the second is the design and characterization of computational algorithms that scale better in n or p. For the first, the focus is on joint inference outside of the standard setting of multivariate continuous data that has dominated previous theoretical work in this area. For the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the emphasis is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
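For reference, the latent class (PARAFAC) factorization referred to here expresses the joint probability mass function of p categorical variables as a finite mixture of product kernels. A minimal statement of the standard form (the notation is illustrative rather than taken from Chapter 2):

P(y_1 = c_1, \dots, y_p = c_p) \;=\; \sum_{h=1}^{k} \nu_h \prod_{j=1}^{p} \lambda^{(j)}_{h c_j}, \qquad \nu_h \ge 0, \quad \sum_{h=1}^{k} \nu_h = 1,

so the probability tensor has nonnegative rank at most k, whereas a log-linear model for the same table achieves parsimony by setting higher-order interaction terms to zero; Chapter 2 relates these two notions of dimensionality reduction.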
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and we provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback--Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
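As context for the approximation result, the "optimal Gaussian approximation" can be read as the member of the Gaussian family that is closest to the exact posterior in Kullback--Leibler divergence; schematically (the precise direction of the divergence and the form of the bounds are those given in Chapter 4, not fixed here),

\hat{q} \;=\; \operatorname*{arg\,min}_{q \in \mathcal{Q}} \, \mathrm{KL}\big( \pi(\theta \mid y) \,\|\, q \big), \qquad \mathcal{Q} = \{ \mathrm{N}(\mu, \Sigma) : \mu \in \mathbb{R}^{d}, \ \Sigma \succ 0 \},

with the chapter's results controlling how quickly this divergence shrinks as the sample size grows.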
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
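For comparison, the classical (non-temporal) notion of tail dependence that the waiting-time formulation generalizes is the upper tail dependence coefficient; in standard notation (not specific to Chapter 5),

\chi \;=\; \lim_{u \to 1^{-}} \Pr\big( F_1(X_1) > u \,\big|\, F_2(X_2) > u \big),

where \chi > 0 indicates asymptotic dependence in the upper tails. The chapter instead encodes both the strength and the temporal structure of extremal dependence in the distribution of waiting times between exceedances of high thresholds.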
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
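To make the idea of an approximating kernel concrete, the following is a minimal sketch (not the framework of Chapter 6, and with all names and tuning constants hypothetical) of a random-walk Metropolis sampler in Python whose transition kernel is perturbed by evaluating the log-likelihood on a random subset of the data, rescaled to the full sample size:

import numpy as np

rng = np.random.default_rng(0)

# Toy data: y_i ~ N(theta, 1) with a flat prior on theta.
n, theta_true = 10_000, 2.0
y = rng.normal(theta_true, 1.0, size=n)

def approx_loglik(theta, batch_size):
    # Subsampled log-likelihood, rescaled by n / batch_size.
    idx = rng.choice(n, size=batch_size, replace=False)
    return (n / batch_size) * np.sum(-0.5 * (y[idx] - theta) ** 2)

def approx_mh(n_iter=5_000, step=0.05, batch_size=500):
    theta, ll, samples = 0.0, approx_loglik(0.0, batch_size), []
    for _ in range(n_iter):
        prop = theta + step * rng.normal()
        ll_prop = approx_loglik(prop, batch_size)
        # Accept/reject using the noisy, subsampled likelihood ratio.
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
        samples.append(theta)
    return np.array(samples)

draws = approx_mh()
print(draws[1_000:].mean(), draws[1_000:].std())

The resulting chain no longer targets the exact posterior: each iteration is cheaper, but the stationary distribution and the estimation error are perturbed, and the framework in Chapter 6 is concerned with quantifying when that trade-off is worthwhile under a given loss function and computational budget.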
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
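For concreteness, the truncated Normal data augmentation sampler referenced here is the classical Albert-Chib scheme for probit regression; a minimal Python sketch with a flat prior on the coefficients (variable names are illustrative, and this is not code from Chapter 7):

import numpy as np
from scipy.stats import truncnorm

def probit_da_sampler(X, y, n_iter=2_000, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    V = np.linalg.inv(X.T @ X)        # posterior covariance of beta | z under a flat prior
    L = np.linalg.cholesky(V)
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        mu = X @ beta
        # z_i | beta, y_i ~ Normal(mu_i, 1), truncated to (0, inf) if y_i = 1 and (-inf, 0) if y_i = 0.
        lower = np.where(y == 1, -mu, -np.inf)
        upper = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lower, upper, size=n, random_state=rng)
        # beta | z ~ Normal((X'X)^{-1} X'z, (X'X)^{-1}).
        beta = V @ (X.T @ z) + L @ rng.standard_normal(p)
        draws[t] = beta
    return draws

In a rare-event regime (large n with very few y_i = 1), draws from this sampler exhibit the high autocorrelation and small effective sample sizes described above, which is the behavior characterized theoretically in Chapter 7.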
Abstract:
The pathogenesis of osteoarthritis is mediated in part by inflammatory cytokines including interleukin-1 (IL-1), which promote degradation of articular cartilage and prevent human mesenchymal stem cell (hMSC) chondrogenesis. We combined gene therapy and functional tissue engineering to develop engineered cartilage with immunomodulatory properties that allow chondrogenesis in the presence of pathologic levels of IL-1 by inducing overexpression of IL-1 receptor antagonist (IL-1Ra) in hMSCs via scaffold-mediated lentiviral gene delivery. A doxycycline-inducible vector was used to transduce hMSCs in monolayer or within 3D woven poly(ε-caprolactone) (PCL) scaffolds to enable tunable IL-1Ra production. In the presence of IL-1, IL-1Ra-expressing engineered cartilage produced cartilage-specific extracellular matrix, while resisting IL-1-induced upregulation of matrix metalloproteinases and maintaining mechanical properties similar to native articular cartilage. The ability of functional engineered cartilage to deliver tunable anti-inflammatory cytokines to the joint may enhance the long-term success of therapies for cartilage injuries or osteoarthritis.
Following this, we modified this anti-inflammatory engineered cartilage to incorporate rabbit MSCs and evaluated this therapeutic strategy in a pilot in vivo study in rabbit osteochondral defects. Rabbits were fed a custom doxycycline diet to induce gene expression in the engineered cartilage implanted in the joint. Serum and synovial fluid were collected, and the levels of doxycycline and inflammatory mediators were measured. Rabbits were euthanized 3 weeks following surgery and tissues were harvested for analysis. We found that doxycycline levels in serum and synovial fluid were too low to induce strong overexpression of hIL-1Ra in the joint, and hIL-1Ra was undetectable in synovial fluid via ELISA. Although hIL-1Ra expression local to the site of injury during the first few days may have had a beneficial effect, overall a higher doxycycline dose and a more readily transduced cell population would improve the application of this therapy.
In addition to the 3D woven PCL scaffold, cartilage-derived matrix (CDM) scaffolds have recently emerged as a promising option for cartilage tissue engineering. Spatially-defined, biomaterial-mediated lentiviral gene delivery of tunable and inducible morphogenetic transgenes may enable guided differentiation of hMSCs into both cartilage and bone within CDM scaffolds, enhancing the ability of the CDM scaffold to provide chondrogenic cues to hMSCs. In addition to controlled production of anti-inflammatory proteins within the joint, in situ production of chondro- and osteo-inductive factors within tissue-engineered cartilage, bone, or osteochondral tissue may be highly advantageous as it could eliminate the need for extensive in vitro differentiation involving supplementation of culture media with exogenous growth factors. To this end, we have utilized controlled overexpression of transforming growth factor-beta 3 (TGF-β3), bone morphogenetic protein-2 (BMP-2) or a combination of both factors, to induce chondrogenesis, osteogenesis, or both, within CDM hemispheres. We found that TGF-β3 overexpression led to robust chondrogenesis in vitro and BMP-2 overexpression led to mineralization but not accumulation of type I collagen. We also showed the development of a single osteochondral construct by combining tissues overexpressing BMP-2 (hemisphere insert) and TGF-β3 (hollow hemisphere shell) and culturing them together in the same media. Chondrogenic ECM was localized in the TGF-β3-expressing portion and osteogenic ECM was localized in the BMP-2-expressing region. Tissue also formed in the interface between the two pieces, integrating them into a single construct.
Since CDM scaffolds can be enzymatically degraded just like native cartilage, we hypothesized that IL-1 may have an even larger influence on CDM than on PCL tissue-engineered constructs. Additionally, anti-inflammatory engineered cartilage implanted in vivo will likely affect both cartilage and the underlying bone. There is some evidence that osteogenesis may be enhanced by IL-1 treatment rather than inhibited. To investigate the effects of an inflammatory environment on osteogenesis and chondrogenesis within CDM hemispheres, we evaluated the ability of IL-1Ra-expressing or control constructs to undergo chondrogenesis and osteogenesis in the presence of IL-1. We found that IL-1 prevented chondrogenesis in CDM hemispheres but did not produce discernible effects on osteogenesis. IL-1Ra-expressing CDM hemispheres produced robust cartilage-like ECM and did not upregulate inflammatory mediators during chondrogenic culture in the presence of IL-1.
Abstract:
There are many sociopolitical theories that help explain why governments and actors do what they do. Securitization Theory is a process-oriented theory in international relations that focuses on how an actor defines another actor as an “existential threat,” and the resulting responses that can be taken in order to address that threat. While Securitization Theory is an acceptable method for analyzing the relationships between actors in the international system, this thesis contends that the proper examination is multi-factorial, focusing on the addition of Role Theory to the analysis. Consideration of Role Theory, another international relations theory that explains how an actor’s strategies, relationships, and perceptions by others are based on pre-conceptualized definitions of that actor’s identity, is essential in order to fully explain why an actor might respond to another in a particular way. Certain roles an actor may enact produce a rival relationship with other actors in the system, and it is those rival roles that elicit securitized responses. The possibility of a securitized response lessens when a role or a relationship between roles becomes ambiguous. There are clear points of role rivalry and role ambiguity between Hizb’allah and Iran, which have directly impacted, and continue to impact, how the United States (US) responds to these actors. Because of role ambiguity, the US has still not conceptualized an effective way to deal with Hizb’allah and Iran holistically across all their various areas of operation and in their various enacted roles. It would be overly simplistic to see Hizb’allah and Iran solely through one lens depending on which hemisphere or continent one is observing. The reality is likely more nuanced. Both Role Theory and Securitization Theory can help to understand and articulate those nuances. By examining two case studies of Hizb’allah and Iran’s enactment of various roles in both the Middle East and Latin America, the situations where roles cause a securitized response and where the response is less securitized due to role ambiguity will become clear. Using this augmented approach of combining both theories, along with supplementing the manner in which an actor, action, or role is analyzed, will produce better methods for policy-making that can address the more ambiguous activities of Hizb’allah and Iran in these two regions.