12 results for “Shift-and-add algorithms” in CaltechTHESIS


Relevance: 100.00%

Abstract:

Computer science and electrical engineering have been the great success story of the twentieth century. The neat modularity and mapping of a language onto circuits has led to robots on Mars, desktop computers, and smartphones. But these devices are not yet able to do some of the things that life takes for granted: repair a scratch, reproduce, regenerate, or grow exponentially fast, all while remaining functional.

This thesis explores and develops algorithms, molecular implementations, and theoretical proofs in the context of “active self-assembly” of molecular systems. The long-term vision of active self-assembly is the theoretical and physical implementation of materials that are composed of reconfigurable units with the programmability and adaptability of biology’s numerous molecular machines. En route to this goal, we must first find a way to overcome the memory limitations of molecular systems, and to discover the limits of complexity that can be achieved with individual molecules.

One of the main thrusts in molecular programming is to use computer science as a tool for figuring out what can be achieved. While molecular systems that are Turing-complete have been demonstrated [Winfree, 1996], these systems still cannot achieve some of the feats biology has achieved.

One might think that because a system is Turing-complete, capable of computing “anything,” it can perform any arbitrary task. But while such a system can simulate any digital computational problem, there are many behaviors that are not “computations” in a classical sense and cannot be directly implemented. Examples include exponential growth and molecular motion relative to a surface.

Passive self-assembly systems cannot implement these behaviors because (a) molecular motion relative to a surface requires a source of fuel that is external to the system, and (b) passive systems are too slow to assemble exponentially-fast-growing structures. We call these behaviors “energetically incomplete” programmable behaviors. This class of behaviors includes any behavior where a passive physical system simply does not have enough physical energy to perform the specified tasks in the requisite amount of time.

As we will demonstrate and prove, a sufficiently expressive implementation of an “active” molecular self-assembly approach can achieve these behaviors. Using an external source of fuel solves part of the problem, so the system is not “energetically incomplete.” But the programmable system also needs sufficient expressive power to achieve the specified behaviors. Perhaps surprisingly, some of these systems do not even require Turing completeness to be sufficiently expressive.

Building on a large variety of work by other scientists in the fields of DNA nanotechnology, chemistry and reconfigurable robotics, this thesis introduces several research contributions in the context of active self-assembly.

We show that simple primitives such as insertion and deletion are able to generate complex and interesting results, such as the growth of a linear polymer in logarithmic time and the ability of a linear polymer to treadmill. To this end we developed a formal model for active self-assembly that is directly implementable with DNA molecules. We show that this model is computationally equivalent to a machine capable of producing languages that are stronger than regular languages and, at most, as strong as context-free grammars. This is a great advance in the theory of active self-assembly, as prior models were either entirely theoretical or only implementable in the context of macro-scale robotics.

We developed a chain reaction method for the autonomous exponential growth of a linear DNA polymer. Our method is based on the insertion of molecules into the assembly, which generates two new insertion sites for every initial one employed. The building of a line in logarithmic time is a first step toward building a shape in logarithmic time. We demonstrate the first construction of a synthetic linear polymer that grows exponentially fast via insertion. We show that monomer molecules are converted into the polymer in logarithmic time via spectrofluorimetry and gel electrophoresis experiments. We also demonstrate the division of these polymers via the addition of a single DNA complex that competes with the insertion mechanism. This shows the growth of a population of polymers in logarithmic time. We characterize the DNA insertion mechanism that we utilize in Chapter 4. We experimentally demonstrate that we can control the kinetics of this reaction over at least seven orders of magnitude by programming the sequences of DNA that initiate the reaction.
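To see why insertion yields exponential growth, and hence a target length in logarithmic time, consider the toy sketch below; it is an illustrative counting model rather than the thesis's DNA-level mechanism, assuming every insertion site accepts one monomer per round and each insertion exposes two new sites.

```python
# Illustrative counting model: each insertion event splits one insertion
# site into two, so sites double every round and a length-N polymer is
# reached in O(log N) rounds.

def rounds_to_length(target_length):
    """Count synchronous insertion rounds needed to reach target_length."""
    length, sites, rounds = 1, 1, 0
    while length < target_length:
        length += sites      # every site accepts one inserted monomer
        sites *= 2           # each insertion exposes two new sites
        rounds += 1
    return rounds

for n in (10, 100, 1000, 10**6):
    print(n, rounds_to_length(n))   # grows like log2(n)
```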

In addition, we review co-authored work on programming molecular robots using prescriptive landscapes of DNA origami; this was the first microscopic demonstration of programming a molecular robot to walk on a 2-dimensional surface. We developed a snapshot method for imaging these random walking molecular robots and a CAPTCHA-like analysis method for difficult-to-interpret imaging data.

Relevance: 100.00%

Abstract:

Three different categories of flow problems of a fluid containing small particles are considered here: (i) a fluid containing small, non-reacting particles (Parts I and II); (ii) a fluid containing reacting particles (Parts III and IV); and (iii) a fluid containing particles of two distinct sizes, with collisions between the two groups of particles (Part V).

Part I

A numerical solution is obtained for a fluid containing small particles flowing over an infinite disc rotating at a constant angular velocity. It is a boundary-layer-type flow, and the boundary layer thickness for the mixture is estimated. For large Reynolds numbers, the solution suggests a boundary layer approximation for the fluid-particle mixture obtained by assuming W = W_p. The error introduced is consistent with Prandtl's boundary layer approximation. Outside the boundary layer, the flow field has to satisfy the “inviscid equation,” in which the viscous stress terms are absent while the drag force between the particle cloud and the fluid is still important. Increasing the particle concentration reduces the boundary layer thickness, and the amount of mixture transported outward is reduced. A new parameter, β = 1/(Ωτ_v), is introduced, which is also proportional to μ. The secondary flow of the particle cloud depends very much on β. For small values of β, the particle cloud velocity attains its maximum value on the surface of the disc; for infinitely large values of β, both the radial and axial particle velocity components vanish on the surface of the disc.
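For reference, a hedged reconstruction of the parameter definition: τ_v is the particle velocity-relaxation time, and under the standard Stokes-drag assumption for a sphere of mass m_p and radius a (an assumption not stated in the abstract), the proportionality to μ follows directly.

```latex
\beta = \frac{1}{\Omega\,\tau_v},
\qquad
\tau_v = \frac{m_p}{6\pi\mu a}
\quad\Longrightarrow\quad
\beta = \frac{6\pi\mu a}{\Omega\,m_p} \;\propto\; \mu
```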

Part II

The “inviscid” equation for a gas-particle mixture is linearized to describe the flow over a wavy wall. Corresponding to the Prandtl-Glauert equation for a pure gas, a fourth-order partial differential equation in terms of the velocity potential ϕ is obtained for the mixture. The solution is obtained for the flow over a periodic wavy wall. For equilibrium flows, where λ_v and λ_T approach zero, and frozen flows, in which λ_v and λ_T become infinitely large, the flow problem is basically similar to that obtained by Ackeret for a pure gas. For finite values of λ_v and λ_T, all quantities except v are out of phase with the wavy wall. Thus the drag coefficient C_D is present even in the subsonic case; similarly, all quantities decay exponentially for supersonic flows. The phase shift and the attenuation factor increase with increasing particle concentration.

Part III

Using the boundary layer approximation, the initial development of the combustion zone in the laminar mixing of two parallel streams, one of oxidizing agent and one of small, solid, combustible particles suspended in an inert gas, is investigated. For the special case when the two streams move at the same speed, a Green's function exists for the differential equations describing first-order gas temperature and oxidizer concentration. Solutions in terms of error functions and exponential integrals are obtained. Reactions occur within a relatively thin region of the order of λ_D. Thus, it seems advantageous in the general study of two-dimensional laminar flame problems to introduce a chemical boundary layer of thickness λ_D within which reactions take place. Outside this chemical boundary layer, the flow field corresponds to ordinary fluid dynamics without chemical reaction.

Part IV

The shock wave structure in a condensing medium of small liquid droplets suspended in a homogeneous gas-vapor mixture consists of the conventional compressive wave followed by a relaxation region in which the particle cloud and gas mixture attain momentum and thermal equilibrium. Immediately following the compressive wave, the partial pressure corresponding to the vapor concentration in the gas mixture is higher than the vapor pressure of the liquid droplets, and condensation sets in. Farther downstream of the shock, evaporation appears when the particle temperature is raised by the hot surrounding gas mixture. The thickness of the condensation region depends very much on the latent heat. For relatively high latent heat, the condensation zone is small compared with λ_D.

For solid particles suspended initially in an inert gas, the relaxation zone immediately following the compression wave consists of a region where the particle temperature is first raised to its melting point. Once the particles are totally melted and the particle temperature increases further, evaporation of the particles also plays a role.

The equilibrium condition downstream of the shock can be calculated and is independent of the model of the particle-gas mixture interaction.

Part V

For a gas containing particles of two distinct sizes and satisfying certain conditions, momentum transfer due to collisions between the two groups of particles can be taken into consideration using the classical elastic spherical ball model. Both in the relatively simple problem of a normal shock wave and in the perturbation solutions for nozzle flow, the transfer of momentum due to collisions, which decreases the velocity difference between the two groups of particles, is clearly demonstrated. The difference in temperature as compared with the collisionless case is quite negligible.

Relevance: 100.00%

Abstract:

We are at the cusp of a historic transformation of both communication systems and electricity systems. This creates challenges as well as opportunities for the study of networked systems. Problems in these systems typically involve a huge number of endpoints that require intelligent coordination in a distributed manner. In this thesis, we develop models, theories, and scalable distributed optimization and control algorithms to overcome these challenges.

This thesis focuses on two specific areas: multi-path TCP (Transmission Control Protocol) and electricity distribution system operation and control. Multi-path TCP (MP-TCP) is a TCP extension that allows a single data stream to be split across multiple paths. MP-TCP has the potential to greatly improve the reliability as well as the efficiency of communication devices. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation, and demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties on the behavior of existing algorithms and motivate a new algorithm, Balia (balanced linked adaptation), which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We have implemented Balia in the Linux kernel and use our prototype to compare it with existing MP-TCP algorithms.
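As a concrete (and heavily simplified) illustration of the fluid-model viewpoint, the sketch below integrates a toy coupled window-update law for two subflows; the update rule, parameter values, and path characteristics are all illustrative assumptions, not Balia's actual design.

```python
# A minimal coupled MP-TCP fluid-model sketch (illustrative; not Balia's
# exact update law). Each subflow r has window w_r on a path with RTT
# tau_r and loss probability p_r. Coupling the increase term to the
# subflow's share of the aggregate rate is the standard trick for keeping
# the aggregate TCP-friendly.
import numpy as np

def window_derivatives(w, tau, p):
    x = w / tau                      # per-path sending rates
    total = x.sum()
    # increase scaled by rate share; multiplicative decrease on loss
    return (x / total) * (1.0 / tau) * (1 - p) - p * x * (w / 2)

w = np.array([10.0, 10.0])
tau = np.array([0.05, 0.10])         # 50 ms and 100 ms paths
p = np.array([0.01, 0.001])
dt = 0.01
for _ in range(20000):               # crude forward-Euler integration
    w += dt * window_derivatives(w, tau, p)
print("equilibrium windows:", w.round(2))
```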

Our second focus is on designing computationally efficient algorithms for electricity distribution system operation and control. First, we develop efficient algorithms for feeder reconfiguration in distribution networks. The feeder reconfiguration problem chooses the on/off status of the switches in a distribution network in order to minimize a certain cost, such as power loss. It is a mixed-integer nonlinear program and hence hard to solve. We propose a heuristic algorithm based on the recently developed convex relaxation of the optimal power flow problem. The algorithm is efficient and successfully computes an optimal configuration on all networks that we have tested. Moreover, we prove that the algorithm solves the feeder reconfiguration problem optimally under certain conditions. We also propose an even more efficient algorithm that incurs an optimality loss of less than 3% on the test networks.
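The loop structure of such a relaxation-guided heuristic might look like the hedged sketch below: repeatedly open the switch whose edge carries the least flow in a relaxed solve until the network is radial. The `solve_relaxed_opf` stub and the toy graph are assumptions for illustration, not the thesis's exact heuristic; a real implementation would call an SOCP relaxation of OPF.

```python
# Hedged sketch of a relaxation-guided reconfiguration loop. A radial
# configuration is a spanning tree of the feeder graph.
import networkx as nx
import random

def solve_relaxed_opf(graph):
    """Stub for an SOCP-relaxed OPF solve; returns |flow| per edge."""
    return {e: random.random() for e in graph.edges}

def reconfigure(graph):
    g = graph.copy()                       # start with all switches closed
    while not nx.is_tree(g):
        flows = solve_relaxed_opf(g)
        # open the switch whose edge carries the least relaxed flow,
        # breaking a loop while (hopefully) perturbing the optimum the
        # least; never disconnect the network
        for e, _ in sorted(flows.items(), key=lambda kv: kv[1]):
            g.remove_edge(*e)
            if nx.is_connected(g):
                break
            g.add_edge(*e)
    return g

random.seed(3)
meshed = nx.cycle_graph(6)                 # toy meshed feeder
meshed.add_edges_from([(0, 3), (1, 4)])    # two tie switches
print(sorted(reconfigure(meshed).edges))
```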

Second, we develop efficient distributed algorithms that solve the optimal power flow (OPF) problem on distribution networks. The OPF problem determines a network operating point that minimizes a certain objective, such as generation cost or power loss. Traditionally OPF is solved in a centralized manner. With increasing penetration of volatile renewable energy resources in distribution systems, we need faster and distributed solutions for real-time feedback control. This is difficult because the power flow equations are nonlinear and Kirchhoff's law is global. We propose solutions for both balanced and unbalanced radial distribution networks. They exploit recent results showing that a globally optimal solution of OPF over a radial network can be obtained through a second-order cone program (SOCP) or semidefinite program (SDP) relaxation. Our distributed algorithms are based on the alternating direction method of multipliers (ADMM), but unlike standard ADMM-based distributed OPF algorithms, which require solving optimization subproblems using iterative methods, the proposed solutions exploit problem structure to greatly reduce the computation time. Specifically, for balanced networks, our decomposition allows us to derive closed-form solutions for these subproblems, speeding up convergence by 1000x in simulations. For unbalanced networks, the subproblems reduce to either closed-form solutions or eigenvalue problems whose size remains constant as the network scales up, reducing computation time by 100x compared with iterative methods.
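To make the closed-form-subproblem point concrete, here is a hedged ADMM sketch on an unrelated toy problem (lasso): both updates, a cached linear solve and a soft-threshold, are closed-form, so no inner iterative solver is needed. The thesis's OPF decomposition exploits the same principle on a very different problem.

```python
# ADMM for lasso: min_x 0.5||Ax - b||^2 + lam*||x||_1, split as x/z.
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA_rhoI_inv = np.linalg.inv(A.T @ A + rho * np.eye(n))  # factor once
    Atb = A.T @ b
    for _ in range(iters):
        x = AtA_rhoI_inv @ (Atb + rho * (z - u))             # closed form
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)
        u = u + x - z                                        # dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(admm_lasso(A, b, lam=1.0).round(2))    # recovers the sparse signal
```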

Relevance: 100.00%

Abstract:

This thesis studies mobile robotic manipulators, where one or more robot manipulator arms are integrated with a mobile robotic base. The base could be a wheeled or tracked vehicle, or it might be a multi-limbed locomotor. As robots are increasingly deployed in complex and unstructured environments, the need for mobile manipulation increases. Mobile robotic assistants have the potential to revolutionize human lives in a large variety of settings including home, industrial and outdoor environments.

Mobile manipulation is the study and use of such mobile robots as they interact with physical objects in their environment. Compared to fixed-base manipulators, mobile manipulators can take advantage of the base mechanism's added degrees of freedom in the task planning and execution process. But their use also poses new problems in the analysis and control of base system stability and in the planning of coordinated base and arm motions. For mobile manipulators to be successfully and efficiently used, a thorough understanding of their kinematics, stability, and capabilities is required. Moreover, because mobile manipulators typically possess a large number of actuators, new and efficient methods to coordinate their large numbers of degrees of freedom are needed to make them practically deployable. This thesis develops new kinematic and stability analyses of mobile manipulation, and new algorithms to efficiently plan their motions.

I first develop detailed and novel descriptions of the kinematics governing the operation of multi-limbed legged robots working in the presence of gravity, whose limbs may also be simultaneously used for manipulation. The fundamental stance constraint arising from simple assumptions about friction, ground contact, and feasible motions is derived. Thereafter, a local relationship between joint motions and motions of the robot abdomen and reaching limbs is developed. Based on these relationships, one can define and analyze local kinematic qualities including limberness, wrench resistance, and local dexterity. While previous researchers have noted the similarity between multi-fingered grasping and quasi-static manipulation, this thesis makes explicit connections between these two problems.

The kinematic expressions form the basis for a local motion planning problem that determines the joint motions to achieve several simultaneous objectives while maintaining stance stability in the presence of gravity. This problem is translated into a convex quadratic program, entitled the balanced priority solution, whose existence and uniqueness properties are developed. This problem is related in spirit to the classical redundancy resolution and task-priority approaches. With some simple modifications, this local planning and optimization problem can be extended to handle a large variety of goals and constraints that arise in mobile manipulation. This local planning problem applies readily to other mobile bases, including wheeled and articulated bases. This thesis describes the use of the local planning techniques to generate global plans, as well as for use within a feedback loop. The work in this thesis is motivated in part by many practical tasks involving the Surrogate and RoboSimian robots at NASA/JPL, and a large number of examples involving the two robots, both real and simulated, are provided.
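A minimal sketch of this kind of convex program, under illustrative assumptions (random task Jacobians, a linear stance constraint A q̇ = 0, and a small damping task to keep the problem well-posed; the thesis's balanced priority solution has more structure):

```python
# Weighted multi-task quadratic program with a linear stance constraint,
# solved in closed form via its KKT system:
#   min_qd  sum_i w_i ||J_i qd - v_i||^2   s.t.  A qd = 0
import numpy as np

def balanced_tasks(tasks, A):
    """tasks: list of (weight, J, v); A: stance-constraint matrix."""
    n, m = A.shape[1], A.shape[0]
    H = sum(w * J.T @ J for w, J, v in tasks)      # quadratic term
    g = sum(w * J.T @ v for w, J, v in tasks)      # linear term
    KKT = np.block([[H, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(KKT, np.concatenate([g, np.zeros(m)]))
    return sol[:n]                                  # joint velocities

rng = np.random.default_rng(1)
n = 8                                              # joint-space dimension
A = rng.standard_normal((3, n))                    # stance constraint
tasks = [(1.0, rng.standard_normal((3, n)), rng.standard_normal(3)),
         (0.1, np.eye(n), np.zeros(n))]            # reach task + damping
qd = balanced_tasks(tasks, A)
print("constraint residual:", np.linalg.norm(A @ qd))   # ~ 0
```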

Finally, this thesis provides an analysis of simultaneous force and motion control for multi-limbed legged robots. Starting with a classical linear stiffness relationship, an analysis of this problem for multiple point contacts is described. The local velocity planning problem is extended to include the generation of forces, as well as to maintain stability using force feedback. This thesis also provides a concise, novel definition of static stability, and proves some conditions under which it is satisfied.

Relevance: 100.00%

Abstract:

The brain is perhaps the most complex system to have ever been subjected to rigorous scientific investigation. The scale is staggering: over 10^11 neurons, each making an average of 10^3 synapses, with computation occurring on scales ranging from a single dendritic spine, to an entire cortical area. Slowly, we are beginning to acquire experimental tools that can gather the massive amounts of data needed to characterize this system. However, to understand and interpret these data will also require substantial strides in inferential and statistical techniques. This dissertation attempts to meet this need, extending and applying the modern tools of latent variable modeling to problems in neural data analysis.

It is divided into two parts. The first begins with an exposition of the general techniques of latent variable modeling. A new and extremely general optimization algorithm, called Relaxation Expectation Maximization (REM), is proposed that may be used to learn the optimal parameter values of arbitrary latent variable models. This algorithm appears to alleviate the common problem of convergence to local, sub-optimal likelihood maxima. REM leads to a natural framework for model size selection; in combination with standard model selection techniques, the quality of fits may be further improved, while the appropriate model size is automatically and efficiently determined. Next, a new latent variable model, the mixture of sparse hidden Markov models, is introduced, and approximate inference and learning algorithms are derived for it. This model is applied in the second part of the thesis.
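For orientation, the sketch below is the standard expectation-maximization loop for a two-component 1-D Gaussian mixture, i.e., the baseline whose local-maxima problem REM is designed to alleviate (the initialization and data are illustrative assumptions):

```python
# Standard EM for a two-component 1-D Gaussian mixture.
import numpy as np

def em_gmm(x, iters=100):
    mu = np.array([x.min(), x.max()])        # crude initialization
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        d = x[:, None] - mu[None, :]
        lik = pi * np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: closed-form parameter updates
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        d = x[:, None] - mu[None, :]
        var = (r * d**2).sum(axis=0) / nk
        pi = nk / len(x)
    return mu, var, pi

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 0.5, 500)])
print(em_gmm(x))    # recovers means ~(-2, 3)
```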

The second part brings the technology of Part I to bear on two important problems in experimental neuroscience. The first is known as spike sorting: the problem of separating the spikes from different neurons embedded within an extracellular recording. The dissertation offers the first thorough statistical analysis of this problem, which then yields the first powerful probabilistic solution. The second problem addressed is that of characterizing the distribution of spike trains recorded from the same neuron under identical experimental conditions. A latent variable model is proposed. Inference and learning in this model lead to new principled algorithms for smoothing and clustering of spike data.

Relevance: 100.00%

Abstract:

In this thesis we investigate atomic scale imperfections and fluctuations in the quantum transport properties of novel semiconductor nanostructures. For this purpose, we have developed a numerically efficient supercell model of quantum transport capable of representing potential variations in three dimensions. This flexibility allows us to examine new quantum device structures made possible through state-of-the-art semiconductor fabrication techniques such as molecular beam epitaxy and nanolithography. These structures, with characteristic dimensions on the order of a few nanometers, hold promise for much smaller, faster and more efficient devices than those in present operation, yet they are highly sensitive to structural and compositional variations such as defect impurities, interface roughness and alloy disorder. If these quantum structures are to serve as components of reliable, mass-produced devices, these issues must be addressed.

In Chapter 1 we discuss some of the important issues in resonant tunneling devices and mention some of their applications. In Chapters 2 and 3, we describe our supercell model of quantum transport and an efficient numerical implementation. In the remaining chapters, we present applications.

In Chapter 4, we examine transport in single and double barrier tunneling structures with neutral impurities. We find that an isolated attractive impurity in a single barrier can produce a transmission resonance whose position and strength are sensitive to the location of the impurity within the barrier. Multiple impurities can lead to a complex resonance structure that fluctuates widely with impurity configuration. In addition, impurity resonances can give rise to negative differential resistance. In Chapter 5, we study interface roughness and alloy disorder in double barrier structures. We find that interface roughness and alloy disorder can shift and broaden the n = 1 transmission resonance and give rise to new resonance peaks, especially in the presence of clusters comparable in size to the electron de Broglie wavelength. In Chapter 6 we examine the effects of interface roughness and impurities on transmission in a quantum dot electron waveguide. We find that variation in the configuration and stoichiometry of the interface roughness leads to substantial fluctuations in the transmission properties. These fluctuations are reduced by an attractive impurity placed near the center of the dot.
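A one-dimensional caricature of the impurity-resonance effect (hedged: the thesis's model is a three-dimensional supercell calculation) can be computed with plane-wave transfer matrices through piecewise-constant potential slices, in units where ħ = m = 1; lowering the potential in a thin region inside the barrier mimics an attractive impurity.

```python
# Transfer-matrix transmission through piecewise-constant potentials.
import numpy as np

def transmission(E, slices):
    """slices: list of (V, width); leads on both sides have V = 0."""
    k = [np.sqrt(2 * complex(E))]
    k += [np.sqrt(2 * complex(E - V)) for V, _ in slices]
    k += [np.sqrt(2 * complex(E))]
    X = np.concatenate([[0.0], np.cumsum([w for _, w in slices])])
    M = np.eye(2, dtype=complex)
    for j in range(len(k) - 1):
        kj, kn, x = k[j], k[j + 1], X[j]
        r = kj / kn          # from matching psi and psi' at interface x
        M = 0.5 * np.array(
            [[(1 + r) * np.exp(1j * (kj - kn) * x),
              (1 - r) * np.exp(-1j * (kj + kn) * x)],
             [(1 - r) * np.exp(1j * (kj + kn) * x),
              (1 + r) * np.exp(-1j * (kj - kn) * x)]]) @ M
    return abs(1 / M[1, 1]) ** 2   # det(M) = 1 for identical leads

clean = [(1.0, 3.0)]                              # plain barrier
doped = [(1.0, 1.4), (0.2, 0.2), (1.0, 1.4)]      # shallow well inside
for E in (0.2, 0.4, 0.6, 0.8):
    print(f"E={E}:  T_clean={transmission(E, clean):.2e}"
          f"  T_doped={transmission(E, doped):.2e}")
```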

Relevance: 100.00%

Abstract:

Storage systems are widely used and have played a crucial role in both consumer and industrial products, for example, personal computers, data centers, and embedded systems. However, such systems face issues of cost, restricted lifetime, and reliability, especially with the emergence of new systems and devices such as distributed storage and flash memory. Information theory, on the other hand, provides fundamental bounds and solutions to fully utilize resources such as data density, information I/O, and network bandwidth. This thesis bridges these two topics and proposes to solve challenges in data storage using a variety of coding techniques, so that storage becomes faster, more affordable, and more reliable.

We consider the system level and study the integration of RAID schemes and distributed storage. Erasure-correcting codes are the basis of the ubiquitous RAID schemes for storage systems, where disks correspond to symbols in the code and are located in a (distributed) network. Specifically, RAID schemes are based on MDS (maximum distance separable) array codes that enable optimal storage and efficient encoding and decoding algorithms. With r redundancy symbols, an MDS code can sustain r erasures. For example, consider an MDS code that can correct two erasures. It is clear that when two symbols are erased, one needs to access and transmit all the remaining information to rebuild the erasures. However, an interesting and practical question is: what is the smallest fraction of information that one needs to access and transmit in order to correct a single erasure? In Part I we will show that the lower bound of 1/2 is achievable and that the result can be generalized to codes with an arbitrary number of parities and optimal rebuilding.
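The flavor of the 1/2 result can be seen in a hedged toy example: a row parity plus a "zigzag" parity over two data columns of two bits each. The thesis's constructions are far more general and provably MDS; this miniature only illustrates the single-erasure access pattern (real constructions use coefficients over larger fields to guarantee the MDS property).

```python
# Toy array code: data columns a, b; row parity r; zigzag parity z.
a, b = [1, 0], [1, 1]
r = [a[0] ^ b[0], a[1] ^ b[1]]        # row parity (same-row XOR)
z = [a[0] ^ b[1], a[1] ^ b[0]]        # zigzag parity (crossed rows)

# Column a is erased. Rebuild it accessing only HALF of each surviving
# column: b[0], r[0], z[1] -- 3 of the 6 surviving symbols.
a0 = r[0] ^ b[0]                      # from row parity, row 0
a1 = z[1] ^ b[0]                      # from zigzag parity, row 1
assert [a0, a1] == a
print("rebuilt", [a0, a1], "accessing 3 of 6 surviving symbols")
```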

We consider the device level and study coding and modulation techniques for emerging non-volatile memories such as flash memory. In particular, rank modulation is a novel data representation scheme proposed by Jiang et al. for multi-level flash memory cells, in which a set of n cells stores information in the permutation induced by the different charge levels of the individual cells. It eliminates the need for discrete cell levels, as well as overshoot errors, when programming cells. In order to decrease the decoding complexity, we propose two variations of this scheme in Part II: bounded rank modulation, where only small sliding windows of cells are sorted to generate permutations, and partial rank modulation, where only part of the n cells are used to represent data. We study limits on the capacity of bounded rank modulation and propose encoding and decoding algorithms. We show that overlaps between windows will increase capacity. We present Gray codes spanning all possible partial-rank states and using only “push-to-the-top” operations. These Gray codes turn out to solve an open combinatorial problem on universal cycles: a universal cycle is a sequence of integers generating all possible partial permutations.
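A minimal sketch of the rank-modulation representation and its programming primitive (the function names and the margin parameter are illustrative assumptions):

```python
# Rank modulation: information lives in the permutation induced by the
# analog charge levels; "push-to-the-top" reprograms one cell to exceed
# the current maximum, avoiding overshoot errors.
def read_permutation(charges):
    """Return cell indices sorted from highest to lowest charge."""
    return sorted(range(len(charges)), key=lambda i: -charges[i])

def push_to_top(charges, i, margin=1.0):
    """The only programming primitive: raise cell i above all others."""
    charges[i] = max(charges) + margin
    return charges

cells = [3.2, 1.1, 4.0, 2.5]
print(read_permutation(cells))                   # [2, 0, 3, 1]
print(read_permutation(push_to_top(cells, 1)))   # [1, 2, 0, 3]
```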

Relevance: 100.00%

Abstract:

Epidural electrostimulation holds great potential for improving therapy for patients with spinal cord injury (SCI) (Harkema et al., 2011). Further promising results from combined therapies using electrostimulation have also been obtained recently (e.g., van den Brand et al., 2012). The devices being developed to deliver the stimulation are highly flexible, capable of delivering any individual stimulus among a combinatorially large set of stimuli (Gad et al., 2013). While this extreme flexibility is very useful for ensuring that the device can deliver an appropriate stimulus, the challenge of choosing good stimuli is quite substantial, even for expert human experimenters. To develop a fully implantable, autonomous device which can provide useful therapy, it is necessary to design an algorithmic method for choosing the stimulus parameters. Such a method can be used in a clinical setting, by caregivers who are not experts in the neurostimulator's use, and to allow the system to adapt autonomously between visits to the clinic. To create such an algorithm, this dissertation pursues the general class of active learning algorithms that includes Gaussian Process Upper Confidence Bound (GP-UCB; Srinivas et al., 2010), developing the Gaussian Process Batch Upper Confidence Bound (GP-BUCB; Desautels et al., 2012) and Gaussian Process Adaptive Upper Confidence Bound (GP-AUCB) algorithms. This dissertation develops new theoretical bounds for the performance of these and similar algorithms, empirically assesses these algorithms against a number of competitors in simulation, and applies a variant of the GP-BUCB algorithm in closed loop to control SCI therapy via epidural electrostimulation in four live rats. The algorithm was tasked with maximizing the amplitude of evoked potentials in the rats' left tibialis anterior muscle. These experiments show that the algorithm is capable of directing these experiments sensibly, finding effective stimuli in all four animals. Further, in direct competition with an expert human experimenter, the algorithm produced superior performance in terms of average reward and comparable or superior performance in terms of maximum reward. These results indicate that variants of GP-BUCB may be suitable for autonomously directing SCI therapy.
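The sketch below shows a GP-UCB-style selection loop with GP-BUCB's batch trick of "hallucinating" pending observations with the posterior mean, so the posterior variance (but not the mean) updates between batch picks. The kernel, β value, and reward surrogate are illustrative assumptions, not the dissertation's experimental settings.

```python
# GP-UCB batch selection with hallucinated feedback (GP-BUCB's key idea).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def select_batch(X_obs, y_obs, candidates, batch_size, beta=4.0):
    X, y, batch = list(X_obs), list(y_obs), []
    for _ in range(batch_size):
        gp = GaussianProcessRegressor(kernel=RBF(0.3), alpha=1e-3,
                                      optimizer=None)   # fixed kernel
        gp.fit(np.array(X), np.array(y))
        mu, sigma = gp.predict(candidates, return_std=True)
        pick = int(np.argmax(mu + np.sqrt(beta) * sigma))  # UCB rule
        x_new = candidates[pick]
        batch.append(float(x_new[0]))
        X.append(x_new)
        y.append(float(mu[pick]))      # hallucinate the pending outcome
    return batch

f = lambda x: np.sin(3 * x) * (1 - x)            # stand-in reward surface
cand = np.linspace(0, 1, 101).reshape(-1, 1)
X0 = [[0.1], [0.9]]
y0 = [float(f(x[0])) for x in X0]
print(select_batch(X0, y0, cand, batch_size=3))
```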

Relevance: 100.00%

Abstract:

Red fluorescent proteins (RFPs) have attracted significant engineering focus because of the promise of near-infrared fluorescent proteins, whose light penetrates biological tissue and which would allow imaging inside vertebrate animals. The RFP landscape, which numbers ~200 members, is mostly populated by engineered variants of four native RFPs, leaving the vast majority of native RFP biodiversity untouched. This is largely due to the fact that native RFPs are obligate tetramers, limiting their usefulness as fusion proteins. Monomerization has imposed critical costs on these evolved tetramers, however, as it has invariably led to loss of brightness, and often to many other adverse effects on the fluorescent properties of the derived monomeric variants. Here we have attempted to understand why monomerization has taken such a large toll on Anthozoa-class RFPs, and to outline a clear strategy for their monomerization. We begin with a structural study of the far-red fluorescence of AQ143, one of the furthest-red-emitting RFPs. We then try to separate the problem of stable and bright fluorescence from the design of a soluble monomeric β-barrel surface by engineering a hybrid protein (DsRmCh) from an oligomeric parent that had been previously monomerized, DsRed, and a pre-stabilized monomeric core from mCherry. This allows us to use computational design to successfully design a stable, soluble, fluorescent monomer. Next we took HcRed, a previously unmonomerized RFP with far-red fluorescence (λ_em = 633 nm), and attempted to monomerize it, making use of lessons learned from DsRmCh. We engineered two monomeric proteins by pre-stabilizing HcRed's core and then monomerizing in stages, making use of computational design and directed-evolution techniques such as error-prone mutagenesis and DNA shuffling. We call these proteins mGinger0.1 (λ_em = 637 nm, Φ = 0.02) and mGinger0.2 (λ_em = 631 nm, Φ = 0.04). They are the furthest-red first-generation monomeric RFPs yet developed, are significantly thermostabilized, and add diversity to a small field of far-red monomeric FPs. We anticipate that the techniques we describe will facilitate future RFP monomerization, and that further core optimization of the mGingers may allow significant improvements in brightness.

Relevance: 100.00%

Abstract:

This thesis advances our understanding of midlatitude storm tracks and how they respond to perturbations in the climate system. The midlatitude storm tracks are regions of maximal turbulent kinetic energy in the atmosphere. Through them, the bulk of the atmospheric transport of energy, water vapor, and angular momentum occurs in midlatitudes. Therefore, they are important regulators of climate, controlling basic features such as the distribution of surface temperatures, precipitation, and winds in midlatitudes. Storm tracks are robustly projected to shift poleward in global-warming simulations with current climate models. Yet the reasons for this shift have remained unclear. Here we show that this shift occurs even in extremely idealized (but still three-dimensional) simulations of dry atmospheres. We use these simulations to develop an understanding of the processes responsible for the shift and develop a conceptual model that accounts for it.

We demonstrate that changes in the convective static stability in the deep tropics alone can drive remote shifts in the midlatitude storm tracks. Through simulations with a dry idealized general circulation model (GCM), midlatitude storm tracks are shown to be located where the mean available potential energy (MAPE, a measure of the potential energy available to be converted into kinetic energy) is maximal. As the climate varies, even if only driven by tropical static stability changes, the MAPE maximum shifts primarily because of shifts of the maximum of near-surface meridional temperature gradients. The temperature gradients shift in response to changes in the width of the tropical Hadley circulation, whose width is affected by the tropical static stability. Storm tracks generally shift in tandem with shifts of the subtropical terminus of the Hadley circulation.

We develop a one-dimensional diffusive energy-balance model that links changes in the Hadley circulation to midlatitude temperature gradients and so to the storm tracks. It is the first conceptual model to incorporate a dynamical coupling between the tropical Hadley circulation and midlatitude turbulent energy transport. Numerical and analytical solutions of the model elucidate when and how the storm tracks shift in tandem with the terminus of the Hadley circulation. They illustrate how an increase of only the convective static stability in the deep tropics can lead to an expansion of the Hadley circulation and a poleward shift of storm tracks.
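A minimal numerical sketch in the spirit of such a model, assuming diffusive midlatitude energy transport relaxing toward a prescribed radiative-equilibrium profile (the thesis's model additionally couples a Hadley cell, which is omitted here); the maximum meridional temperature gradient serves as a crude storm-track proxy.

```python
# 1-D diffusive energy-balance model on the sphere (nondimensional).
import numpy as np

ny = 91
phi = np.linspace(-np.pi / 2, np.pi / 2, ny)      # latitude
dphi = phi[1] - phi[0]
T = 300 - 40 * np.sin(phi) ** 2                   # initial temperature
T_rad = 310 - 60 * np.sin(phi) ** 2               # radiative equilibrium
D, tau, dt = 0.3, 1.0, 1e-3                       # illustrative params

for _ in range(5000):
    # spherical diffusion: (1/cos phi) d/dphi (cos phi * D dT/dphi)
    flux = D * np.cos(0.5 * (phi[1:] + phi[:-1])) * np.diff(T) / dphi
    div = np.zeros(ny)
    div[1:-1] = np.diff(flux) / (dphi * np.cos(phi[1:-1]))
    T += dt * (div + (T_rad - T) / tau)           # diffusion + relaxation

grad = np.abs(np.diff(T) / dphi)                  # storm-track proxy
lat_mid = np.degrees(0.5 * (phi[1:] + phi[:-1]))
print("max |dT/dphi| at latitude (deg):", lat_mid[np.argmax(grad)])
```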

The simulations with the idealized GCM and the conceptual energy-balance model demonstrate a clear link between Hadley circulation dynamics and midlatitude storm track position. With the help of the hierarchy of models presented in this thesis, we obtain a closed theory of storm track shifts in dry climates. The relevance of this theory for more realistic moist climates is discussed.

Relevance: 100.00%

Abstract:

Over the last century, the silicon revolution has enabled us to build faster, smaller and more sophisticated computers. Today, these computers control phones, cars, satellites, assembly lines, and other electromechanical devices. Just as electrical wiring controls electromechanical devices, living organisms employ "chemical wiring" to make decisions about their environment and control physical processes. Currently, the big difference between these two substrates is that while we have the abstractions, design principles, verification and fabrication techniques in place for programming with silicon, we have no comparable understanding or expertise for programming chemistry.

In this thesis we take a small step towards the goal of learning how to systematically engineer prescribed non-equilibrium dynamical behaviors in chemical systems. We use the formalism of chemical reaction networks (CRNs), combined with mass-action kinetics, as our programming language for specifying dynamical behaviors. Leveraging the tools of nucleic acid nanotechnology (introduced in Chapter 1), we employ synthetic DNA molecules as our molecular architecture and toehold-mediated DNA strand displacement as our reaction primitive.

Abstraction, modular design and systematic fabrication can work only with well-understood and quantitatively characterized tools. Therefore, we embark on a detailed study of the "device physics" of DNA strand displacement (Chapter 2). We present a unified view of strand displacement biophysics and kinetics by studying the process at multiple levels of detail, using an intuitive model of a random walk on a 1-dimensional energy landscape, a secondary structure kinetics model with single base-pair steps, and a coarse-grained molecular model that incorporates three-dimensional geometric and steric effects. Further, we experimentally investigate the thermodynamics of three-way branch migration. Our findings are consistent with previously measured or inferred rates for hybridization, fraying, and branch migration, and provide a biophysical explanation of strand displacement kinetics. Our work paves the way for accurate modeling of strand displacement cascades, which would facilitate the simulation and construction of more complex molecular systems.
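As a cartoon of the random-walk picture, the sketch below treats branch migration as an unbiased walk between two absorbing states, detachment at position -1 and completed displacement at position N; for this simple model the success probability is 1/(N+1), consistent with the intuition that displacement slows with domain length (parameters are illustrative).

```python
# Strand displacement as an unbiased gambler's-ruin random walk.
import random

def displacement_succeeds(n_steps, p_forward=0.5):
    pos = 0                      # invader bound at the toehold
    while -1 < pos < n_steps:    # absorb at -1 (detach) or N (displace)
        pos += 1 if random.random() < p_forward else -1
    return pos == n_steps

random.seed(0)
for n in (5, 10, 20, 40):
    wins = sum(displacement_succeeds(n) for _ in range(20000))
    print(f"N={n:3d}  success ~ {wins / 20000:.3f}"
          f"  (theory 1/(N+1) = {1 / (n + 1):.3f})")
```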

In Chapters 3 and 4, we identify and overcome the crucial experimental challenges involved in using our general DNA-based technology for engineering dynamical behaviors in the test tube. In this process, we identify important design rules that inform our choice of molecular motifs and our algorithms for designing and verifying DNA sequences for our molecular implementation. We also develop flexible molecular strategies for "tuning" our reaction rates and stoichiometries in order to compensate for unavoidable non-idealities in the molecular implementation, such as imperfectly synthesized molecules and spurious "leak" pathways that compete with desired pathways.

We successfully implement three distinct autocatalytic reactions, which we then combine into a de novo chemical oscillator. Unlike biological networks, which use sophisticated evolved molecules (like proteins) to realize such behavior, our test tube realization is the first to demonstrate that Watson-Crick base pairing interactions alone suffice for oscillatory dynamics. Since our design pipeline is general and applicable to any CRN, our experimental demonstration of a de novo chemical oscillator could enable the systematic construction of CRNs with other dynamic behaviors.
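For concreteness, here is a hedged mass-action sketch of a three-autocatalyst "rock-paper-scissors" CRN of the same flavor as the oscillator described above (rate constants and initial conditions are illustrative assumptions):

```python
# Mass-action ODEs for the autocatalytic CRN
#   A + B -> 2B,   B + C -> 2C,   C + A -> 2A
# whose concentrations cycle around the symmetric fixed point.
import numpy as np
from scipy.integrate import solve_ivp

def rps(t, x, k=1.0):
    a, b, c = x
    return [k * (c * a - a * b),    # A produced from C+A, consumed by A+B
            k * (a * b - b * c),
            k * (b * c - c * a)]

sol = solve_ivp(rps, (0, 60), [1.2, 0.8, 1.0], dense_output=True)
for ti in np.linspace(0, 60, 7):
    a, b, c = sol.sol(ti)
    print(f"t={ti:5.1f}  A={a:.3f}  B={b:.3f}  C={c:.3f}")
```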

Relevance: 100.00%

Abstract:

This thesis presents methods for incrementally constructing controllers in the presence of uncertainty and nonlinear dynamics. The basic setting is motion planning subject to temporal logic specifications. Broadly, two categories of problems are treated. The first is reactive formal synthesis when so-called discrete abstractions are available. The fragment of linear-time temporal logic (LTL) known as GR(1) is used to express assumptions about an adversarial environment and requirements of the controller. Two problems of changes to a specification are posed, concerning the two major aspects of GR(1): safety and liveness. Algorithms providing incremental updates to strategies are presented as solutions. In support of these, an annotation of strategies is developed that facilitates repeated modifications. A variety of properties are proven about it, including necessity of its existence and sufficiency for a strategy to be winning. The second category of problems considered is non-reactive (open-loop) synthesis in the absence of a discrete abstraction. Instead, the presented stochastic optimization methods directly construct a control input sequence that achieves low cost and satisfies an LTL formula. Several relaxations are considered as heuristics to address the rarity of sampling trajectories that satisfy an LTL formula, and are demonstrated to improve convergence rates for a Dubins car and single integrators subject to a recurrence task.
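For reference, GR(1) specifications take the standard assume-guarantee form below (environment assumptions imply system guarantees): θ are initial conditions, ρ are safety (transition) rules, and the □◇J terms are the liveness goals; these are precisely the safety and liveness aspects targeted by the incremental algorithms.

```latex
\varphi \;=\;
\Bigl(\theta_e \wedge \square\,\rho_e \wedge \bigwedge_{i} \square\lozenge J^{e}_{i}\Bigr)
\;\longrightarrow\;
\Bigl(\theta_s \wedge \square\,\rho_s \wedge \bigwedge_{j} \square\lozenge J^{s}_{j}\Bigr)
```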