5 results for Modified reflected normal loss function
at Duke University
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, even though uncertainty quantification remains essential in the sciences, where the number of parameters to estimate often exceeds the sample size despite the huge increases in n now typical in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the grounds that "n = all" is therefore of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation; the second is the design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and of concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
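For readers unfamiliar with the two factorizations being bridged, the standard forms for a probability mass function p over categorical variables y_1, ..., y_p are as follows (generic textbook notation, not necessarily the thesis's own):

    PARAFAC:  p(y_1 = c_1, ..., y_p = c_p) = \sum_{h=1}^{k} \nu_h \prod_{j=1}^{p} \lambda^{(j)}_{h c_j}
    Tucker:   p(y_1 = c_1, ..., y_p = c_p) = \sum_{h_1=1}^{k_1} \cdots \sum_{h_p=1}^{k_p} g_{h_1 \cdots h_p} \prod_{j=1}^{p} \lambda^{(j)}_{h_j c_j}

Here \nu and the core tensor g are nonnegative weights summing to one, and each \lambda^{(j)}_{h \cdot} is a probability vector over the levels of variable j. PARAFAC is the special case of Tucker with a superdiagonal core; loosely, a collapsed Tucker decomposition sits in between, letting groups of variables share latent indices and thereby reducing the dimension of the core.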
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and we provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed-form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and we provide a convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
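To make "optimal Gaussian approximation" concrete: if optimality is measured by the Kullback-Leibler divergence from the exact posterior \pi to the Gaussian family (our notation, stated as background), the minimizer is obtained by moment matching,

    \hat{q} = \arg\min_{\mu, \Sigma} KL(\pi \,\|\, N(\mu, \Sigma)),  with solution  \mu^* = E_\pi[\theta],  \Sigma^* = Cov_\pi(\theta),

since KL(\pi \| N(\mu, \Sigma)) equals a constant plus the cross-entropy -E_\pi[\log N(\theta; \mu, \Sigma)], and the cross-entropy is minimized over (\mu, \Sigma) by the posterior mean and covariance.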
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that both the strength of tail dependence and its temporal structure are encoded in the waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, we propose a new definition of tail dependence as a function of the distribution of waiting times between threshold exceedances, and we construct an inferential framework for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
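For background, one standard statement of de Haan's spectral representation, specialized to a Smith-type model, is (generic notation; the thesis's construction then adds dynamics to this):

    X(s) = \max_{i \ge 1} \zeta_i f(s - U_i),  s \in R^d,

where {(\zeta_i, U_i)} are the points of a Poisson process on (0, \infty) x R^d with intensity \zeta^{-2} d\zeta \, du and f is a probability density (a Gaussian density in Smith's model). The velocity processes described above can be read as endowing each support point U_i with a velocity, so that it moves over time, and a lifetime, after which it is removed.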
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
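To make the notion of an approximating transition kernel concrete, here is a minimal, hypothetical Python sketch of a Metropolis-Hastings step whose log-likelihood is computed on a random subset of the data. It illustrates the "random subsets of data" approximation named above in the simplest possible way; it is not the specific framework or error analysis of Chapter 6, and all names in it are illustrative.

    import numpy as np

    def approx_mh_step(theta, data, log_prior, log_lik_one, subset_size, step_sd, rng):
        """One random-walk Metropolis step using a subsampled log-likelihood.

        The full-data log-likelihood sum_i log_lik_one(theta, x_i) is replaced
        by (n / m) times the sum over a random subset of size m, so the chain
        targets the exact posterior only approximately: its transition kernel
        is a perturbation of the exact one.
        """
        n = len(data)
        idx = rng.choice(n, size=subset_size, replace=False)

        def approx_log_post(t):
            ll = sum(log_lik_one(t, data[i]) for i in idx)
            return log_prior(t) + (n / subset_size) * ll

        proposal = theta + step_sd * rng.standard_normal()
        log_accept = approx_log_post(proposal) - approx_log_post(theta)
        return proposal if np.log(rng.uniform()) < log_accept else theta

    # Illustrative usage: posterior for a Gaussian mean under a flat prior.
    rng = np.random.default_rng(0)
    data = rng.normal(2.0, 1.0, size=10_000)
    theta = 0.0
    for _ in range(1_000):
        theta = approx_mh_step(
            theta, data,
            log_prior=lambda t: 0.0,
            log_lik_one=lambda t, x: -0.5 * (x - t) ** 2,
            subset_size=100, step_sd=0.05, rng=rng,
        )

The tradeoff the chapter formalizes is visible even in this toy version: a larger subset_size lowers the kernel error but raises the per-step cost, and the framework above asks how to balance the two under a fixed computational budget.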
Data augmentation Gibbs samplers are arguably the most popular class of algorithms for approximately sampling from the posterior distribution of the parameters of generalized linear models. The truncated normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size, up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
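For reference, the Polya-Gamma sampler discussed here (Polson, Scott, and Windle's construction for logistic regression, written in generic notation with a N(b, B) prior on the coefficients \beta) alternates between two conditional draws:

    \omega_i \mid \beta \sim PG(1, x_i^T \beta),  i = 1, ..., n,
    \beta \mid \omega, y \sim N(m_\omega, V_\omega),  V_\omega = (X^T \Omega X + B^{-1})^{-1},  m_\omega = V_\omega (X^T \kappa + B^{-1} b),

where \Omega = diag(\omega_1, ..., \omega_n) and \kappa_i = y_i - 1/2. The slow-mixing result above concerns exactly this two-block chain: when the success count \sum_i y_i stays small as n grows, its spectral gap vanishes at a rate at least proportional to n^{-1/2}, up to the stated log factor.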
Abstract:
Understanding how genes affect behavior is critical to developing precise therapies for human behavioral disorders. The ability to investigate the relationship between genes and behavior has been greatly advanced over the last few decades by progress in gene-targeting technology. Recently, the Tet gene family was discovered and implicated in epigenetic modification of DNA methylation by converting 5-methylcytosine to 5-hydroxymethylcytosine (5hmC). 5hmC and its catalysts, the TET proteins, are highly abundant in the postnatal brain but have unclear functions. To investigate their neural functions, we generated new lines of Tet1 and Tet3 mutant mice using a gene-targeting approach. We designed both mutations to cause a frameshift, by deleting the largest coding exon of Tet1 (Tet1Δe4) and the catalytic domain of Tet3 (Tet3Δe7-9). As Tet1 is also highly expressed in embryonic stem cells (ESCs), we generated Tet1 homozygous deleted ESCs through sequential targeting to compare the function of Tet1 in the brain to its role in ESCs. To test our hypothesis that TET proteins epigenetically regulate transcription of key neural genes important for normal brain function, we examined transcriptional and epigenetic differences in the Tet1Δe4 mouse brain. The oxytocin receptor (OXTR), a neural gene implicated in social behaviors, is suggested to be epigenetically regulated by an unknown mechanism. Interestingly, several human studies have found associations between OXTR DNA hypermethylation and a wide spectrum of behavioral traits and neuropsychiatric disorders, including autism spectrum disorders. Here we report the first evidence for an epigenetic mechanism regulating Oxtr transcription: expression of Oxtr is reduced in the brains of Tet1Δe4-/- mice. Moreover, the CpG island overlapping the Oxtr promoter becomes hypermethylated during early embryonic development, and this hypermethylation persists into adulthood. We also discovered altered histone modifications at the hypermethylated regions, indicating that the loss of TET1 has broad effects on chromatin structure at Oxtr. Unexpectedly, we discovered an array of novel mRNA isoforms of Oxtr that are selectively reduced in Tet1Δe4-/- mice. Additionally, Tet1Δe4-/- mice display increased agonistic behaviors and impaired maternal care and short-term memory. Our findings support a novel role for TET1 in regulating Oxtr expression by preventing DNA hypermethylation, and they implicate TET1 in social behaviors, offering new insight into the epigenetic regulation of Oxtr and its role in neuropsychiatric disorders.
Abstract:
This thesis addresses advances in three related areas: state-space modeling, sequential Bayesian learning, and decision analysis, together with the statistical challenges of scalability and associated dynamic sparsity. The key theme that ties the three areas together is Bayesian model emulation: solving challenging analytical and computational problems using creative model emulators. This idea defines theoretical and applied advances in non-linear, non-Gaussian state-space modeling, dynamic sparsity, decision analysis, and statistical computation, across the linked contexts of multivariate time series and dynamic network studies. Examples and applications in financial time series and portfolio analysis, macroeconomics, and internet studies from computational advertising demonstrate the utility of the core methodological innovations.
Chapter 1 summarizes the three areas and the key idea of emulation within them. Chapter 2 discusses the sequential analysis of latent threshold models, with the use of emulating models that allow analytical filtering to enhance the efficiency of posterior sampling. Chapter 3 examines the emulator model in decision analysis, or synthetic model, which is equivalent to the loss function in the original minimization problem, and shows its performance in the context of sequential portfolio optimization. Chapter 4 describes a method for modeling streaming count data observed on a large network that relies on emulating the whole, dependent network model by independent, conjugate sub-models customized to each set of flows; a sketch of this kind of sub-model follows below. Chapter 5 reviews these advances and offers concluding remarks.
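As a concrete illustration of the kind of conjugate sub-model this emulation strategy can use (a generic sketch in our notation, not necessarily the thesis's exact specification), a discount-based gamma-Poisson filter for the counts y_t on a single flow updates in closed form:

    y_t \mid \lambda_t \sim Poisson(\lambda_t)
    \lambda_t \mid D_{t-1} \sim Gamma(\delta a_{t-1}, \delta b_{t-1}),  0 < \delta \le 1,
    \lambda_t \mid D_t \sim Gamma(a_t, b_t),  a_t = \delta a_{t-1} + y_t,  b_t = \delta b_{t-1} + 1.

The discount factor \delta inflates the prior variance at each step so the filter can track a slowly varying rate. Running one such filter per flow gives the independent, closed-form sequential updating described above, with the dependent network-wide model emulated by combining the sub-models.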
Abstract:
When the heart fails, there is often a constellation of biochemical alterations of the beta-adrenergic receptor (betaAR) signaling system, leading to the loss of cardiac inotropic reserve. betaAR down-regulation and functional uncoupling are mediated through enhanced activity of the betaAR kinase (betaARK1), the expression of which is increased in ischemic and failing myocardium. These changes are widely viewed as representing an adaptive mechanism, which protects the heart against chronic activation. In this study, we demonstrate, using in vivo intracoronary adenoviral-mediated gene delivery of a peptide inhibitor of betaARK1 (betaARKct), that the desensitization and down-regulation of betaARs seen in the failing heart may actually be maladaptive. In a rabbit model of heart failure induced by myocardial infarction, which recapitulates the biochemical betaAR abnormalities seen in human heart failure, delivery of the betaARKct transgene at the time of myocardial infarction prevents the rise in betaARK1 activity and expression and thereby maintains betaAR density and signaling at normal levels. Rather than leading to deleterious effects, cardiac function is improved, and the development of heart failure is delayed. These results appear to challenge the notion that dampening of betaAR signaling in the failing heart is protective, and they may lead to novel therapeutic strategies to treat heart disease via inhibition of betaARK1 and preservation of myocardial betaAR function.
Abstract:
Proper balancing of the activities of metabolic pathways, to meet the challenge of providing the products necessary for the biosynthetic and energy demands of the cell, is a key requirement for maintaining cell viability and allowing cell proliferation. Cell metabolism has been found to play a crucial role in numerous settings, including in the cells of the immune system, where a successful immune response requires rapid proliferation and clearance of dangerous pathogens, followed by resolution of the immune response. Additionally, it is now well known that in the setting of cancer, where tumor cells rapidly and persistently proliferate, cell metabolism is markedly altered relative to that of normal cells. In both settings, alterations to the metabolic profile of the cells play important roles in promoting cell proliferation and survival.
It has long been known that many types of tumor cells and actively proliferating immune cells adopt a metabolic phenotype of aerobic glycolysis, whereby the cell, even under normoxic conditions, imports large amounts of glucose and fluxes it through the glycolytic pathway to produce lactate. However, the metabolic programs utilized by various immune cell subsets have only recently begun to be explored in detail, and the metabolic features and pathways influencing tumor cell metabolism in vivo have not been studied in depth. The work presented here examines the role of metabolism in regulating the function of an important subset of the immune system, the regulatory T cell (Treg), and the role and regulation of metabolism in the context of malignant T cell acute lymphoblastic leukemia (T-ALL). We show that Treg cells, in order to properly suppress auto-inflammatory disease, adopt a metabolic program characterized by oxidative metabolism and active suppression of anabolic signaling and metabolic pathways. We found that the transcription factor FoxP3, which is highly expressed in Treg cells, drives this phenotype. Perturbing the metabolic phenotype of Treg cells, by enforcing increased glycolysis or by driving proliferation and anabolic signaling through inflammatory signaling pathways, results in reduced Treg suppressive function.
In our studies focused on the metabolism of T-ALL, we observed that while T-ALL cells use and require aerobic glycolysis, the glycolytic metabolism of T-ALL is restrained compared to that of an antigen-activated T cell. The metabolism of T-ALL is instead balanced, with mitochondrial metabolism also being increased. We observed that the pro-anabolic mTORC1 growth signaling pathway was limited in primary T-ALL cells as a result of AMPK pathway activity, and that AMPK pathway signaling was elevated as a result of oncogene-induced metabolic stress. AMPK played a key role in the regulation of T-ALL cell metabolism, as genetic deletion of AMPK in an in vivo murine model of T-ALL resulted in increased glycolysis and anabolic metabolism, yet paradoxically increased cell death and increased mouse survival time. AMPK acts to promote mitochondrial oxidative metabolism in T-ALL through the regulation of Complex I activity, and loss of AMPK reduced mitochondrial oxidative metabolism and resulted in increased metabolic stress. Confirming a role for mitochondrial metabolism in T-ALL, we observed that direct pharmacological inhibition of Complex I also resulted in a rapid loss of T-ALL cell viability in vitro and in vivo. Taken together, this work establishes an important role for AMPK in balancing the metabolic pathways utilized by T-ALL to allow cell proliferation and in promoting tumor cell viability by controlling metabolic stress.
Overall, this work demonstrates the importance of properly coupling metabolic pathway activity with the functional needs of particular types of immune cells. We show that Treg cells, which mainly act to keep immune responses well regulated, adopt a metabolic program in which glycolytic metabolism is actively repressed while oxidative metabolism is promoted. In the setting of malignant T-ALL cells, metabolic activity is surprisingly balanced, with both glycolysis and mitochondrial oxidative metabolism being utilized. In both cases, shifting the metabolic balance towards glycolytic metabolism results in negative outcomes for the cell: decreased Treg functionality and increased metabolic stress in T-ALL. This work has generated a new understanding of how metabolism couples to immune cell function, and it may allow for selective targeting of immune cell subsets through the specific targeting of metabolic pathways.