8 results for Markov Chains

at Duke University


Relevance:

100.00%

Publisher:

Abstract:

A RET network consists of photo-active molecules, called chromophores, that can exchange energy through inter-molecular resonance energy transfer (RET). RET networks are used in a variety of applications including cryptographic devices, storage systems, light-harvesting complexes, biological sensors, and molecular rulers. In this dissertation, we focus on creating a RET device called a closed-diffusive exciton valve (C-DEV), in which the input-to-output transfer function is controlled by an external energy source, analogous to a semiconductor transistor such as the MOSFET. Due to their biocompatibility, molecular devices like the C-DEV can be used to introduce computing power into biological, organic, and aqueous environments such as living cells. Furthermore, the underlying physics of RET devices is stochastic in nature, making them suitable for stochastic computing, in which the generation of true random distributions is critical.
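The exciton dynamics described here can be viewed as a continuous-time Markov chain over chromophore states. The following is a minimal toy sketch, not the dissertation's simulation tooling: a Gillespie-style simulation of a single exciton hopping through a hypothetical three-chromophore network with made-up transfer rates and an absorbing "output" state.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical RET network: states 0..2 are chromophores, state 3 is an
# absorbing "output" (emission/quenching). Rates are illustrative placeholders.
rates = np.array([
    [0.0, 2.0, 0.5, 0.1],   # transfer rates out of chromophore 0
    [1.0, 0.0, 3.0, 0.2],   # ... out of chromophore 1
    [0.2, 0.5, 0.0, 4.0],   # ... out of chromophore 2 (feeds the output)
    [0.0, 0.0, 0.0, 0.0],   # output state is absorbing
])

def simulate_exciton(rates, start=0, t_max=50.0):
    """Gillespie simulation of one exciton hopping until absorption or t_max."""
    state, t = start, 0.0
    while t < t_max:
        out = rates[state]
        total = out.sum()
        if total == 0.0:                              # absorbing state reached
            return state, t
        t += rng.exponential(1.0 / total)             # exponential waiting time
        state = rng.choice(len(out), p=out / total)   # next state
    return state, t

# Fraction of excitons reaching the output state: a crude stand-in for an
# input-to-output transfer function of a valve-like RET device.
hits = sum(simulate_exciton(rates)[0] == 3 for _ in range(5000))
print("output transfer fraction:", hits / 5000)
```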

In order to determine a valid configuration of chromophores for the C-DEV, we developed a systematic process based on user-guided design space pruning techniques and built-in simulation tools. We show that our C-DEV is 15x better than C-DEVs designed using ad hoc methods that rely on limited data from prior experiments. We also show ways in which the C-DEV can be improved further and how different varieties of C-DEVs can be combined to form more complex logic circuits. Moreover, the systematic design process can be used to search for valid chromophore network configurations for a variety of RET applications.

We also describe a feasibility study for a technique used to control the orientation of chromophores attached to DNA. Being able to control the orientation can expand the design space for RET networks because it provides another parameter to tune their collective behavior. While results showed limited control over orientation, the analysis required the development of a mathematical model that can be used to determine the distribution of dipoles in a given sample of chromophore constructs. The model can be used to evaluate the feasibility of other potential orientation control techniques.

Relevance:

60.00%

Publisher:

Abstract:

The computational detection of regulatory elements in DNA is a difficult but important problem impacting our progress in understanding the complex nature of eukaryotic gene regulation. Attempts to utilize cross-species conservation for this task have been hampered both by evolutionary changes of functional sites and poor performance of general-purpose alignment programs when applied to non-coding sequence. We describe a new and flexible framework for modeling binding site evolution in multiple related genomes, based on phylogenetic pair hidden Markov models which explicitly model the gain and loss of binding sites along a phylogeny. We demonstrate the value of this framework for both the alignment of regulatory regions and the inference of precise binding-site locations within those regions. As the underlying formalism is a stochastic, generative model, it can also be used to simulate the evolution of regulatory elements. Our implementation is scalable in terms of numbers of species and sequence lengths and can produce alignments and binding-site predictions with accuracy rivaling or exceeding current systems that specialize in only alignment or only binding-site prediction. We demonstrate the validity and power of various model components on extensive simulations of realistic sequence data and apply a specific model to study Drosophila enhancers in as many as ten related genomes and in the presence of gain and loss of binding sites. Different models and modeling assumptions can be easily specified, thus providing an invaluable tool for the exploration of biological hypotheses that can drive improvements in our understanding of the mechanisms and evolution of gene regulation.
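A core ingredient of such models is a birth/death process for binding sites along each branch of the phylogeny. As a hedged illustration only (with made-up rates, and omitting the pair-HMM alignment machinery entirely), a two-state continuous-time Markov chain gives gain and loss probabilities as a function of branch length:

```python
import numpy as np
from scipy.linalg import expm

# Two-state gain/loss model for a binding site: state 0 = absent, 1 = present.
# The rates are illustrative, not fitted values from the paper.
gain, loss = 0.3, 0.8
Q = np.array([[-gain, gain],
              [loss, -loss]])   # generator of the continuous-time Markov chain

def transition_probs(t):
    """P(state at time t | state at time 0) along a branch of length t."""
    return expm(Q * t)

for t in (0.1, 0.5, 2.0):
    P = transition_probs(t)
    print(f"branch length {t}: P(lost | present) = {P[1, 0]:.3f}, "
          f"P(gained | absent) = {P[0, 1]:.3f}")
```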

Relevance:

60.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, even though uncertainty quantification remains essential in the sciences, where the number of parameters to estimate often exceeds the sample size despite the huge increases in n now typical in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the grounds that "n=all" is therefore of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
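To make the latent-structure side of this comparison concrete, the following minimal sketch (with arbitrary, randomly drawn parameters rather than anything from Chapter 2) builds the joint probability mass function of multivariate categorical data as a PARAFAC-style latent class mixture, which yields a probability tensor whose nonnegative rank is at most the number of classes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Latent class (PARAFAC-style) model for p = 3 categorical variables with
# d = 4 levels each and k = 2 latent classes. Parameters are illustrative.
p, d, k = 3, 4, 2
weights = rng.dirichlet(np.ones(k))              # latent class probabilities
psi = rng.dirichlet(np.ones(d), size=(p, k))     # psi[j, h] = P(y_j = . | class h)

# Joint pmf tensor: P(y1, y2, y3) = sum_h w_h * prod_j psi[j, h, y_j]
pmf = np.einsum("h,ha,hb,hc->abc", weights, psi[0], psi[1], psi[2])
print("tensor shape:", pmf.shape, "| total probability:", pmf.sum().round(6))

# A mixture of k product measures gives a probability tensor with nonnegative
# rank at most k, the kind of quantity the chapter relates to log-linear supports.
```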

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed-form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
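For intuition only, the sketch below uses a Laplace-style Gaussian approximation (posterior mode plus curvature) to a toy Poisson log-linear posterior with a Gaussian prior; this is a stand-in for illustration, not the optimal Gaussian approximation under Diaconis-Ylvisaker priors derived in Chapter 4, and all model settings are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy Poisson log-linear model: log E[y] = X @ beta, with a N(0, tau2 * I)
# prior standing in for the conjugate prior used in the chapter.
n, p, tau2 = 200, 3, 4.0
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([0.5, -0.3, 0.8])
y = rng.poisson(np.exp(X @ beta_true))

beta = np.zeros(p)
for _ in range(50):                      # Newton iterations to the posterior mode
    mu = np.exp(X @ beta)
    grad = X.T @ (y - mu) - beta / tau2
    hess = -X.T @ (mu[:, None] * X) - np.eye(p) / tau2
    beta -= np.linalg.solve(hess, grad)

cov = np.linalg.inv(-hess)               # Gaussian approximation N(beta, cov)
print("posterior mode:", beta.round(3))
print("approximate posterior sd:", np.sqrt(np.diag(cov)).round(3))
```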

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
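The basic summary this paradigm builds on, waiting times between exceedances of a high threshold, is easy to illustrate. The sketch below simulates a temporally dependent AR(1) series with arbitrary parameters and tabulates inter-exceedance times; it does not implement the max-stable velocity process construction or the chapter's actual estimator.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a temporally dependent series (AR(1)); parameters are illustrative.
n, phi = 20_000, 0.7
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# Waiting times between exceedances of a high threshold carry information about
# both the strength and the temporal structure of extremal dependence.
u = np.quantile(x, 0.98)                  # high threshold
exceed_times = np.flatnonzero(x > u)
waits = np.diff(exceed_times)

print(f"{len(exceed_times)} exceedances of u = {u:.2f}")
print("mean / median waiting time:", waits.mean().round(1), np.median(waits))
print("fraction of immediate re-exceedances (clustering):",
      (waits == 1).mean().round(3))
```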

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC). MCMC is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
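As a minimal caricature of an approximation based on a random subset of data (not the framework of Chapter 6), the sketch below runs random-walk Metropolis on a toy normal-mean posterior in which the log-likelihood is replaced by a rescaled subsample log-likelihood, perturbing the transition kernel; every setting here is an invented illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy target: posterior for the mean theta of N(theta, 1) data with a flat prior.
n, m = 100_000, 1_000
data = rng.normal(loc=1.5, scale=1.0, size=n)

# Approximate the log-likelihood with a fixed random subset of m points,
# rescaled by n/m; this defines an approximate Metropolis transition kernel.
subset = rng.choice(data, size=m, replace=False)

def approx_logpost(theta):
    return (n / m) * np.sum(-0.5 * (subset - theta) ** 2)

def metropolis(logpost, theta0, iters=5_000, step=0.005):
    """Random-walk Metropolis against a (possibly approximate) log-posterior."""
    theta, lp = theta0, logpost(theta0)
    out = np.empty(iters)
    for i in range(iters):
        prop = theta + step * rng.normal()
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        out[i] = theta
    return out

draws = metropolis(approx_logpost, theta0=float(subset.mean()))
print("full-data posterior mean (exact):", data.mean().round(4))
print("subset-approximate chain mean:   ", draws[1_000:].mean().round(4))
```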

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
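As a hedged sketch of the truncated-normal data augmentation sampler in question (the Albert-Chib scheme for probit regression), the following simulates a rare-event dataset and reports the lag-1 autocorrelation of the intercept chain; the sample size, prior, and event rate are illustrative choices, not the settings analyzed in Chapter 7.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(5)

# Rare-event probit regression: large n, very few successes.
n = 5_000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-2.8, 0.3])                    # intercept makes y = 1 rare
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)
print("observed successes:", int(y.sum()), "out of", n)

# Truncated-normal data augmentation Gibbs sampler with a N(0, 100 I) prior
# on beta (the prior choice is illustrative).
B_inv = np.eye(2) / 100.0
V = np.linalg.inv(X.T @ X + B_inv)                   # posterior covariance given z
L = np.linalg.cholesky(V)
beta = np.zeros(2)
draws = np.empty((2_000, 2))

for s in range(draws.shape[0]):
    # 1. Sample latent z_i ~ N(x_i' beta, 1), truncated to agree with y_i.
    mean = X @ beta
    lo = np.where(y == 1, -mean, -np.inf)            # z > 0 when y = 1
    hi = np.where(y == 1, np.inf, -mean)             # z < 0 when y = 0
    z = mean + truncnorm.rvs(lo, hi, size=n, random_state=rng)
    # 2. Sample beta from its Gaussian full conditional.
    beta = V @ (X.T @ z) + L @ rng.normal(size=2)
    draws[s] = beta

# Slow mixing shows up as high lag-1 autocorrelation of the intercept chain.
b0 = draws[500:, 0]
print("lag-1 autocorrelation of intercept:",
      np.corrcoef(b0[:-1], b0[1:])[0, 1].round(3))
```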

Relevance:

20.00%

Publisher:

Abstract:

Contemporary globalization has been marked by significant shifts in the organization and governance of global industries. In the 1970s and 1980s, one such shift was characterized by the emergence of buyer-driven and producer-driven commodity chains. In the early 2000s, a more differentiated typology of governance structures was introduced, which focused on new types of coordination in global value chains (GVCs). Today the organization of the global economy is entering another phase, with transformations that are reshaping the governance structures of both GVCs and global capitalism at various levels: (1) the end of the Washington Consensus and the rise of contending centers of economic and political power; (2) a combination of geographic consolidation and value chain concentration in the global supply base, which, in some cases, is shifting bargaining power from lead firms in GVCs to large suppliers in developing economies; (3) new patterns of strategic coordination among value chain actors; (4) a shift in the end markets of many GVCs accelerated by the economic crisis of 2008-09, which is redefining regional geographies of investment and trade; and (5) a diffusion of the GVC approach to major international donor agencies, which is prompting a reformulation of established development paradigms. © 2013 Taylor & Francis.

Relevance:

20.00%

Publisher:

Abstract:

The global value chain (GVC) concept has gained popularity as a way to analyze the international expansion and geographical fragmentation of contemporary supply chains and value creation and capture therein. It has been used broadly in academic publications that examine a wide range of global industries, and by many of the international organizations concerned with economic development. This note highlights some of the main features of GVC analysis and discusses the relationship between the core concepts of governance and upgrading. The key dynamics of contemporary global supply chains and their implications for global production and trade are illustrated by: (1) the consolidation of global value chains and the new geography of value creation and capture, with an emphasis on China; (2) the key roles of global supermarkets and private standards in agri-food supply chains; and (3) how the recent economic crisis contributes to shifting end markets and the regionalization of value chains. It concludes with a discussion of the future direction of GVC analysis and a potential collaboration with supply chain researchers. © 2012 Institute for Supply Management, Inc.

Relevance:

20.00%

Publisher:

Abstract:

The rise of private food standards has brought forth an ongoing debate about whether they work as a barrier for smallholders and hinder poverty reduction in developing countries. This paper uses a global value chain approach to explain the relationship between value chain structure and agrifood safety and quality standards and to discuss the challenges and possibilities this entails for the upgrading of smallholders. It maps four potential value chain scenarios depending on the degree of concentration in the markets for agrifood supply (farmers and manufacturers) and demand (supermarkets and other food retailers) and discusses the impact of lead firms and key intermediaries on smallholders in different chain situations. Each scenario is illustrated with case examples. Theoretical and policy issues are discussed, along with proposals for future research in terms of industry structure, private governance, and sustainable value chains.

Relevance:

20.00%

Publisher:

Abstract:

The burgeoning literature on global value chains (GVCs) has recast our understanding of how industrial clusters are shaped by their ties to the international economy, but within this context, the role played by corporate social responsibility (CSR) continues to evolve. New research in the past decade allows us to better understand how CSR is linked to industrial clusters and GVCs. With geographic production and trade patterns in many industries becoming concentrated in the global South, lead firms in GVCs have been under growing pressure to link economic and social upgrading in more integrated forms of CSR. This is leading to a confluence of “private governance” (corporate codes of conduct and monitoring), “social governance” (civil society pressure on business from labor organizations and non-governmental organizations), and “public governance” (government policies to support gains by labor groups and environmental activists). This new form of “synergistic governance” is illustrated with evidence from recent studies of GVCs and industrial clusters, as well as advances in theorizing about new patterns of governance in GVCs and clusters. © 2014, Springer Science+Business Media Dordrecht.

Relevance:

20.00%

Publisher:

Abstract:

Purpose – The purpose of this paper is to introduce the global value chain (GVC) approach to understand the relationship between multinational enterprises (MNEs) and the changing patterns of global trade, investment and production, and its impact on economic and social upgrading. It aims to illuminate how GVCs can advance our understanding about MNEs and rising power (RP) firms and their impact on economic and social upgrading in fragmented and dispersed global production systems.

Design/methodology/approach – The paper reviews the GVC literature focusing on two conceptual elements of the GVC approach, governance and upgrading, and highlights three key recent developments in GVCs: concentration, regionalization and synergistic governance.

Findings – The paper underscores the complicated role of GVCs in shaping economic and social upgrading for emerging economies, RP firms and developing country firms in general. Rising geographic and organizational concentration in GVCs leads to the uneven distribution of upgrading opportunities in favor of RP firms, and yet economic upgrading may be elusive even for the most established suppliers because of power asymmetry with global buyers. Shifting end markets and the regionalization of value chains can benefit RP firms by presenting alternative markets for upgrading. Yet, without further upgrading, such benefits may be achieved at the expense of social downgrading. Finally, the ineffectiveness of private standards to achieve social upgrading has led to calls for synergistic governance through the cooperation of private, public and social actors, both global and local.

Originality/value – The paper illuminates how the GVC approach and its key concepts can contribute to the critical international business and RP firms literature by examining the latest dynamics in GVCs and their impacts on economic and social development in developing countries. © Emerald Group Publishing Limited.