10 results for Conditional CAPM
in CaltechTHESIS
Abstract:
In this thesis, I will discuss how information-theoretic arguments can be used to produce sharp bounds in the study of quantum many-body systems. The main advantage of this approach, as opposed to conventional field-theoretic arguments, is that it depends very little on the precise form of the Hamiltonian. The main idea behind this thesis lies in a number of results concerning the structure of quantum states that are conditionally independent. Depending on the application, some of these statements are generalized to quantum states that are approximately conditionally independent. These structures can be readily used in the study of gapped quantum many-body systems, especially those in two spatial dimensions. A number of rigorous results are derived, including (i) a universal upper bound for the maximal number of topologically protected states, expressed in terms of the topological entanglement entropy, (ii) a first-order perturbation bound for the topological entanglement entropy that decays superpolynomially with the size of the subsystem, and (iii) a correlation bound between an arbitrary local operator and a topological operator constructed from a set of local reduced density matrices. I also introduce exactly solvable models supported on a three-dimensional lattice that can be used as a reliable quantum memory.
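For reference, the conditional independence structure invoked above is conventionally expressed through the quantum conditional mutual information (standard definitions; the thesis's precise statements may differ):

I(A:C|B) = S_{AB} + S_{BC} - S_{B} - S_{ABC} \ge 0,

where S_{X} is the von Neumann entropy of the reduced density matrix on region X, and nonnegativity is the strong subadditivity inequality. A state is conditionally independent (a quantum Markov chain A-B-C) exactly when I(A:C|B) = 0; approximate conditional independence corresponds to I(A:C|B) being small.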
Abstract:
Nucleic acids are most commonly associated with the genetic code, transcription, and gene expression. Recently, interest has grown in engineering nucleic acids for biological applications such as controlling or detecting gene expression. The natural presence and functionality of nucleic acids within living organisms, coupled with their thermodynamic base-pairing properties, make them ideal for interfacing with (and possibly altering) biological systems. We use engineered small conditional RNA and DNA (scRNA and scDNA, respectively) molecules to control and detect gene expression. Three novel systems are presented: two for conditional down-regulation of gene expression via RNA interference (RNAi) and a third for simultaneous, sensitive detection of multiple RNAs using labeled scRNAs.
RNAi is a powerful tool to study genetic circuits by knocking down a gene of interest. RNAi executes the logic: if gene Y is detected, silence gene Y. The fact that detection and silencing are restricted to the same gene means that RNAi is constitutively on. This poses a significant limitation when spatiotemporal control is needed. In this work, we engineered small nucleic acid molecules that execute the logic: if mRNA X is detected, form a Dicer substrate that targets independent mRNA Y for silencing. This is a step towards implementing the logic of conditional RNAi: if gene X is detected, silence gene Y. We use scRNAs and scDNAs to engineer signal transduction cascades that produce an RNAi effector molecule in response to hybridization to a nucleic acid target X. The first mechanism is based solely on hybridization cascades and uses scRNAs to produce a double-stranded RNA (dsRNA) Dicer substrate against target gene Y. The second mechanism is based on hybridization of scDNAs to detect a nucleic acid target and produce a template for transcription of a short hairpin RNA (shRNA) Dicer substrate against target gene Y. Test-tube studies for both mechanisms demonstrate that the output Dicer substrate is produced predominantly in the presence of the correct input target and is cleaved by Dicer to produce a small interfering RNA (siRNA). Both output products can lead to gene knockdown in tissue culture. To date, signal transduction has not been observed in cells; possible reasons are explored.
Signal transduction cascades are composed of multiple scRNAs (or scDNAs). The need to study multiple molecules simultaneously has motivated the development of a highly sensitive method for multiplexed northern blots. The core of our system is the use of a hybridization chain reaction (HCR) of scRNAs as the detection signal for a northern blot. To achieve multiplexing (simultaneous detection of multiple genes), we use fluorescently tagged scRNAs. Moreover, by using radioactive labeling of scRNAs, the system achieves a five-fold increase in detection sensitivity over previously reported methods. Sensitive multiplexed northern blot detection provides an avenue for exploring the fate of scRNAs and scDNAs in tissue culture.
Abstract:
RNA interference (RNAi) is a powerful biological pathway allowing for sequence-specific knockdown of any gene of interest. While RNAi is a proven tool for probing gene function in biological circuits, it is limited by being constitutively ON, executing the logical operation: silence gene Y. To provide greater control over post-transcriptional gene silencing, we propose engineering a biological logic gate to implement “conditional RNAi.” Such a logic gate would silence gene Y only upon the expression of gene X, a completely unrelated gene, executing the logic: if gene X is transcribed, silence independent gene Y. Silencing of gene Y could be confined to a specific time and/or tissue by appropriately selecting gene X.
To implement the logic of conditional RNAi, we present the design and experimental validation of three nucleic acid self-assembly mechanisms which detect a sub-sequence of mRNA X and produce a Dicer substrate specific to gene Y. We introduce small conditional RNAs (scRNAs) to execute the signal transduction under isothermal conditions. scRNAs are small RNAs which change conformation, leading to both shape and sequence signal transduction, in response to hybridization to an input nucleic acid target. While all three conditional RNAi mechanisms execute the same logical operation, they explore various design alternatives for nucleic acid self-assembly pathways, including the use of duplex and monomer scRNAs, stable versus metastable reactants, multiple methods of nucleation, and 3-way and 4-way branch migration.
We demonstrate the isothermal execution of the conditional RNAi mechanisms in a test tube with recombinant Dicer. These mechanisms execute the logic: if mRNA X is detected, produce a Dicer substrate targeting independent mRNA Y. Only the final Dicer substrate, not the scRNA reactants or intermediates, is efficiently processed by Dicer. Additional work in human whole-cell extracts and a model tissue-culture system delves into both the promise and challenge of implementing conditional RNAi in vivo.
Abstract:
In this thesis we uncover a new relation which links thermodynamics and information theory. We consider time as a channel and the detailed state of a physical system as a message. As the system evolves with time, ever-present noise ensures that the "message" is corrupted. Thermodynamic free energy measures the approach of the system toward equilibrium. Information-theoretic mutual information measures the loss of memory of the initial state. We regard the free energy and the mutual information as operators which map probability distributions over state space to real numbers. In the limit of long times, we show how the free energy operator and the mutual information operator asymptotically attain a very simple relationship to one another. This relationship is founded on the common appearance of entropy in the two operators and on an identity between internal energy and conditional entropy. The use of conditional entropy is what distinguishes our approach from previous efforts to relate thermodynamics and information theory.
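As a hedged sketch of the two operators in standard notation (the thesis's exact definitions may differ; Boltzmann's constant is set to 1):

F[p_t] = \langle E \rangle_{p_t} - T\, S[p_t], \qquad I(X_0 : X_t) = S[p_t] - S(X_t \mid X_0),

with S[p] = -\sum_x p(x) \ln p(x). A related standard identity, F[p_t] - F_{\mathrm{eq}} = T\, D(p_t \,\|\, \pi) for the equilibrium Gibbs distribution \pi, expresses the approach to equilibrium entirely in entropic terms; the asymptotic relationship described above rests on the stated identity between internal energy and conditional entropy.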
Abstract:
The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems are also explored, which are motivated by power systems. The first one is “flow optimization over a flow network” and the second one is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.
Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix, describing marginal and conditional dependencies between brain regions, have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem is investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples is available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may recover most of the circuit topology if the exact covariance matrix is well-conditioned, but it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work is applied to the resting-state fMRI data of a number of healthy subjects.
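To illustrate the estimation step described above, here is a minimal sketch in Python (a hypothetical toy circuit, not the thesis's data or its modified algorithm):

import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Toy 4-node chain "circuit": a tridiagonal (sparse) precision matrix.
Theta = np.array([[ 2.0, -0.8,  0.0,  0.0],
                  [-0.8,  2.0, -0.8,  0.0],
                  [ 0.0, -0.8,  2.0, -0.8],
                  [ 0.0,  0.0, -0.8,  2.0]])
Sigma = np.linalg.inv(Theta)

# A limited number of samples of the nodal signals (e.g., voltages).
X = rng.multivariate_normal(np.zeros(4), Sigma, size=500)

# Sparse inverse covariance estimation; alpha controls sparsity.
model = GraphicalLasso(alpha=0.05).fit(X)

# Nonzero off-diagonal entries of the estimated precision matrix are
# read as edges of the network, i.e., the recovered circuit topology.
edges = np.abs(model.precision_) > 1e-3
print(edges)

With a well-conditioned covariance matrix, the support of model.precision_ matches the chain; ill-conditioning is what motivates the modified algorithm described above.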
Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
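For context, the classical fluid model that the refined model generalizes can be sketched as follows (the standard primal form with generic symbols, not necessarily the thesis's notation):

\dot{x}_r(t) = \kappa_r \left( w_r - x_r(t) \sum_{l \in r} p_l\big(y_l(t)\big) \right), \qquad y_l(t) = \sum_{s :\, l \in s} x_s(t),

where x_r is the rate of source r, p_l is the price (congestion signal) at link l, and y_l is the aggregate rate at link l. The assumption criticized above is implicit in y_l: every link on a route is taken to observe the original source rates x_s, ignoring how upstream buffering reshapes the flows.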
Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an operating point of a power network that minimizes the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective here is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
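A one-line sketch of the relaxation (sign conventions assumed here for illustration): the nodal power balance P_{G_i} - P_{D_i} = \sum_{j \in N(i)} P_{ij} is replaced by the inequality P_{G_i} - P_{D_i} \ge \sum_{j \in N(i)} P_{ij}, so that a bus may absorb more power than its nominal demand; this slack is the “power over-delivery” under which the convex relaxation is exact for the network classes identified above.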
Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is the OPF problem: the results on GNF prove that the relaxation of the power balance equations (i.e., power over-delivery) is not needed in practice under a very mild angle assumption.
Generalized Weighted Graphs: Motivated by power optimization problems, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real- or complex-valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph, such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.
Abstract:
Some of the most exciting developments in the field of nucleic acid engineering include the utilization of synthetic nucleic acid molecular devices as gene regulators, as disease marker detectors, and most recently, as therapeutic agents. The common thread between these technologies is their reliance on the detection of specific nucleic acid input markers to generate some desirable output, such as a change in the copy number of an mRNA (for gene regulation), a change in the emitted light intensity (for some diagnostics), and a change in cell state within an organism (for therapeutics). The research presented in this thesis likewise focuses on engineering molecular tools that detect specific nucleic acid inputs, and respond with useful outputs.
Four contributions to the field of nucleic acid engineering are presented: (1) the construction of a single nucleotide polymorphism (SNP) detector based on the mechanism of hybridization chain reaction (HCR); (2) the utilization of a single-stranded oligonucleotide molecular Scavenger as a means of enhancing HCR selectivity; (3) the implementation of Quenched HCR, a technique that facilitates transduction of a nucleic acid chemical input into an optical (light) output; and (4) the engineering of conditional probes that function as sequence transducers, receiving target signal as input and providing a sequence of choice as output. These programmable molecular systems are conceptually well-suited for performing wash-free, highly selective rapid genotyping and expression profiling in vitro, in situ, and potentially in living cells.
Abstract:
In this work, the development of a probabilistic approach to robust control is motivated by structural control applications in civil engineering. Often in civil structural applications, a system's performance is specified in terms of its reliability. In addition, the model and input uncertainty for the system may be described most appropriately using probabilistic or "soft" bounds on the model and input sets. The probabilistic robust control methodology contrasts with existing H∞/μ robust control methodologies that do not use probability information for the model and input uncertainty sets, yielding only the guaranteed (i.e., "worst-case") system performance and no information about the system's probable performance, which would be of interest to civil engineers.
The design objective for the probabilistic robust controller is to maximize the reliability of the uncertain structure/controller system for a probabilistically-described uncertain excitation. The robust performance is computed for a set of possible models by weighting the conditional performance probability for a particular model by the probability of that model, then integrating over the set of possible models. This integration is accomplished efficiently using an asymptotic approximation. The probable performance can be optimized numerically over the class of allowable controllers to find the optimal controller. Also, if structural response data becomes available from a controlled structure, its probable performance can easily be updated using Bayes's Theorem to update the probability distribution over the set of possible models. An updated optimal controller can then be produced, if desired, by following the original procedure. Thus, the probabilistic framework integrates system identification and robust control in a natural manner.
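In symbols, the computation described above takes the form (generic notation, assumed for illustration):

P(\text{fail}) = \int_{\Theta} P(\text{fail} \mid \theta)\, p(\theta)\, d\theta,

evaluated with an asymptotic approximation and minimized numerically over the allowable controllers. When response data D become available, the model probabilities are updated via Bayes's Theorem, p(\theta \mid D) \propto p(D \mid \theta)\, p(\theta), and the same integral is re-evaluated and re-optimized.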
The probabilistic robust control methodology is applied to two systems in this thesis. The first is a high-fidelity computer model of a benchmark structural control laboratory experiment. For this application, uncertainty in the input model only is considered. The probabilistic control design minimizes the failure probability of the benchmark system while remaining robust with respect to the input model uncertainty. The performance of an optimal low-order controller compares favorably with higher-order controllers for the same benchmark system which are based on other approaches. The second application is to the Caltech Flexible Structure, which is a light-weight aluminum truss structure actuated by three voice coil actuators. A controller is designed to minimize the failure probability for a nominal model of this system. Furthermore, the method for updating the model-based performance calculation given new response data from the system is illustrated.
Abstract:
This thesis studies decision making under uncertainty and how economic agents respond to information. The classic model of subjective expected utility and Bayesian updating is often at odds with empirical and experimental results; people exhibit systematic biases in information processing and often exhibit aversion to ambiguity. The aim of this work is to develop simple models that capture observed biases and study their economic implications.
In the first chapter I present an axiomatic model of cognitive dissonance, in which an agent's response to information explicitly depends upon past actions. I introduce novel behavioral axioms and derive a representation in which beliefs are directionally updated. The agent twists the information and overweights states in which his past actions provide a higher payoff. I then characterize two special cases of the representation. In the first case, the agent distorts the likelihood ratio of two states by a function of the utility values of the previous action in those states. In the second case, the agent's posterior beliefs are a convex combination of the Bayesian belief and the one which maximizes the conditional value of the previous action. Within the second case a unique parameter captures the agent's sensitivity to dissonance, and I characterize a way to compare sensitivity to dissonance between individuals. Lastly, I develop several simple applications and show that cognitive dissonance contributes to the equity premium and price volatility, asymmetric reaction to news, and belief polarization.
The second chapter characterizes a decision maker with sticky beliefs. That is, a decision maker who does not update enough in response to information, where "enough" means as much as a Bayesian decision maker would. This chapter provides axiomatic foundations for sticky beliefs by weakening the standard axioms of dynamic consistency and consequentialism. I derive a representation in which updated beliefs are a convex combination of the prior and the Bayesian posterior. A unique parameter captures the weight on the prior and is interpreted as the agent's measure of belief stickiness or conservatism bias. This parameter is endogenously identified from preferences and is easily elicited from experimental data.
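The representation admits a one-line sketch (symbols assumed here): given a prior \mu and a signal s,

\mu'(\cdot) = \lambda\, \mu(\cdot) + (1 - \lambda)\, \mathrm{Bayes}(\mu \mid s)(\cdot), \qquad \lambda \in [0, 1],

where \lambda = 0 recovers the Bayesian decision maker and larger \lambda corresponds to stickier beliefs.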
The third chapter deals with updating in the face of ambiguity, using the framework of Gilboa and Schmeidler. There is no consensus on the correct way to update a set of priors. Current methods either do not allow a decision maker to make an inference about her priors or require an extreme level of inference. In this chapter I propose and axiomatize a general model of updating a set of priors. A decision maker who updates her beliefs in accordance with the model can be thought of as one who chooses a threshold that is used to determine whether a prior is plausible, given some observation. She retains the plausible priors and applies Bayes' rule. This model includes generalized Bayesian updating and maximum likelihood updating as special cases.
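A hedged sketch of the rule described above (notation assumed): given a set of priors C, an observed event E, and a threshold \alpha \in [0, 1], the decision maker retains

C_\alpha(E) = \{ p \in C : p(E) \ge \alpha \max_{q \in C} q(E) \}

and applies Bayes' rule to each retained prior. Taking \alpha near 0 retains every prior assigning E positive probability (generalized Bayesian updating), while \alpha = 1 retains only the likelihood maximizers (maximum likelihood updating).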
Abstract:
Few credible source models are available from past large-magnitude earthquakes. A stochastic source model generation algorithm thus becomes necessary for robust risk quantification using scenario earthquakes. We present an algorithm that combines the physics of fault ruptures as imaged in laboratory earthquakes with stress estimates on the fault constrained by field observations to generate stochastic source models for large-magnitude (Mw 6.0-8.0) strike-slip earthquakes. The algorithm is validated through a statistical comparison of synthetic ground motion histories from a stochastically generated source model for a magnitude 7.9 earthquake and a kinematic finite-source inversion of an equivalent-magnitude past earthquake on a geometrically similar fault. The synthetic dataset comprises three-component ground motion waveforms, computed at 636 sites in southern California, for ten hypothetical rupture scenarios (five hypocenters, each with two rupture directions) on the southern San Andreas fault. A similar validation exercise is conducted for a magnitude 6.0 earthquake, the lower magnitude limit for the algorithm. Additionally, ground motions from the Mw 7.9 earthquake simulations are compared against predictions by the Campbell-Bozorgnia NGA relation as well as the ShakeOut scenario earthquake. The algorithm is then applied to generate fifty source models for a hypothetical magnitude 7.9 earthquake originating at Parkfield, with rupture propagating from north to south (towards Wrightwood), similar to the 1857 Fort Tejon earthquake. Using the spectral element method, three-component ground motion waveforms are computed in the Los Angeles basin for each scenario earthquake, and the sensitivity of ground shaking intensity to seismic source parameters (such as the percentage of asperity area relative to the fault area, rupture speed, and risetime) is studied.
Under plausible San Andreas fault earthquakes in the next 30 years, modeled using the stochastic source algorithm, the performance of two 18-story steel moment frame buildings (UBC 1982 and 1997 designs) in southern California is quantified. The approach integrates rupture-to-rafters simulations into the PEER performance-based earthquake engineering (PBEE) framework. Using stochastic sources and computational seismic wave propagation, three-component ground motion histories at 636 sites in southern California are generated for sixty scenario earthquakes on the San Andreas fault. The ruptures, with moment magnitudes in the range of 6.0-8.0, are assumed to occur at five locations on the southern section of the fault. Two unilateral rupture propagation directions are considered. The 30-year probabilities of all plausible ruptures in this magnitude range and in that section of the fault, as forecast by the United States Geological Survey, are distributed among these 60 earthquakes based on proximity and moment release. The response of the two 18-story buildings hypothetically located at each of the 636 sites under 3-component shaking from all 60 events is computed using 3-D nonlinear time-history analysis. Using these results, the probability of the structural response exceeding Immediate Occupancy (IO), Life-Safety (LS), and Collapse Prevention (CP) performance levels under San Andreas fault earthquakes over the next thirty years is evaluated.
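The last step can be sketched with the total probability theorem (generic notation assumed here):

P(\text{exceedance of PL in 30 yr}) \approx \sum_{k=1}^{60} P(\text{response} > \text{PL} \mid E_k)\, P(E_k),

where PL ranges over the IO, LS, and CP performance levels, E_k are the sixty scenario ruptures, and the P(E_k) are the 30-year rupture probabilities distributed from the USGS forecast.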
Furthermore, the conditional and marginal probability distributions of peak ground velocity (PGV) and peak ground displacement (PGD) in Los Angeles and surrounding basins, due to earthquakes occurring primarily on the mid-section of the southern San Andreas fault, are determined using Bayesian model class identification. Simulated ground motions at sites 55-75 km from the source, from a suite of 60 earthquakes (Mw 6.0-8.0) primarily rupturing the mid-section of the San Andreas fault, provide the PGV and PGD data.
Abstract:
The process of prophage integration by phage λ and the function and structure of the chromosomal elements required for λ integration have been studied with the use of λ deletion mutants. Since attφ, the substrate of the integration enzymes, is not essential for λ growth, and since attφ resides in a portion of the λ chromosome which is not necessary for vegetative growth, viable λ deletion mutants were isolated and examined to dissect the structure of attφ.
Deletion mutants were selected from wild type populations by treating the phage under conditions where phage are inactivated at a rate dependent on the DNA content of the particles. A number of deletion mutants were obtained in this way, and many of these mutants proved to have defects in integration. These defects were defined by analyzing the properties of Int-promoted recombination in these att mutants.
The types of mutants found and their properties indicated that attφ has three components: a cross-over point bordered on either side by recognition elements whose sequence is specifically required for normal integration. The interactions of the recognition elements in Int-promoted recombination between att mutants were examined and proved to be quite complex. In general, however, it appears that the λ integration system can function with a diverse array of mutant att sites.
The structure of attφ was examined by comparing the genetic properties of various att mutants with their location in the λ chromosome. To map these mutants, the techniques of heteroduplex DNA formation and electron microscopy were employed. It was found that integration cross-overs occur at only one point in attφ and that the recognition sequences that direct the integration enzymes to their site of action are quite small, less than 2000 nucleotides each. Furthermore, no base pair homology was detected between attφ and its bacterial analog, attB. This result clearly demonstrates that λ integration can occur between chromosomes which have little, if any, homology. In this respect, λ integration is unique as a system of recombination since most forms of generalized recombination require extensive base pair homology.
An additional study examined the genetic and physical distances in the left arm of the λ genome. Here, a large number of conditional lethal nonsense mutants were isolated and mapped, and a genetic map of the entire left arm, comprising a total of 18 genes, was constructed. Four of these genes were discovered in this study. A series of λdg transducing phages was mapped by heteroduplex electron microscopy and the relationship between physical and genetic distances in the left arm was determined. The results indicate that recombination frequency in the left arm is an accurate reflection of physical distance; moreover, there do not appear to be any undiscovered genes in this segment of the genome.