17 results for "Conflict of Interest" in CaltechTHESIS


Relevance: 90.00%

Abstract:

Nucleic acids are most commonly associated with the genetic code, transcription and gene expression. Recently, interest has grown in engineering nucleic acids for biological applications such as controlling or detecting gene expression. The natural presence and functionality of nucleic acids within living organisms coupled with their thermodynamic properties of base-pairing make them ideal for interfacing (and possibly altering) biological systems. We use engineered small conditional RNA or DNA (scRNA, scDNA, respectively) molecules to control and detect gene expression. Three novel systems are presented: two for conditional down-regulation of gene expression via RNA interference (RNAi) and a third system for simultaneous sensitive detection of multiple RNAs using labeled scRNAs.

RNAi is a powerful tool to study genetic circuits by knocking down a gene of interest. RNAi executes the logic: If gene Y is detected, silence gene Y. The fact that detection and silencing are restricted to the same gene means that RNAi is constitutively on. This poses a significant limitation when spatiotemporal control is needed. In this work, we engineered small nucleic acid molecules that execute the logic: If mRNA X is detected, form a Dicer substrate that targets independent mRNA Y for silencing. This is a step towards implementing the logic of conditional RNAi: If gene X is detected, silence gene Y. We use scRNAs and scDNAs to engineer signal transduction cascades that produce an RNAi effector molecule in response to hybridization to a nucleic acid target X. The first mechanism is solely based on hybridization cascades and uses scRNAs to produce a double-stranded RNA (dsRNA) Dicer substrate against target gene Y. The second mechanism is based on hybridization of scDNAs to detect a nucleic acid target and produce a template for transcription of a short hairpin RNA (shRNA) Dicer substrate against target gene Y. Test-tube studies for both mechanisms demonstrate that the output Dicer substrate is produced predominantly in the presence of a correct input target and is cleaved by Dicer to produce a small interfering RNA (siRNA). Both output products can lead to gene knockdown in tissue culture. To date, signal transduction is not observed in cells; possible reasons are explored.
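The conditional logic above ("if mRNA X is detected, produce a Dicer substrate against an independent mRNA Y") can be caricatured in a few lines of code. This is a toy sketch of the logic only; the sequences and the single-step reverse-complement test are invented for illustration and stand in for the actual multi-step scRNA hybridization cascades.

```python
# Toy illustration (not the thesis mechanism): the conditional logic
# "if mRNA X is detected, produce a substrate targeting mRNA Y",
# modeled as simple reverse-complement matching of invented sequences.

COMP = str.maketrans("AUGC", "UACG")

def revcomp(seq: str) -> str:
    """Reverse complement of an RNA sequence."""
    return seq.translate(COMP)[::-1]

def conditional_substrate(target_x: str, scrna_toehold: str, y_guide: str):
    """Release a duplex against Y only if X hybridizes to the scRNA toehold."""
    if revcomp(scrna_toehold) == target_x:    # X detected
        return (y_guide, revcomp(y_guide))    # dsRNA Dicer substrate vs. Y
    return None                               # no output: RNAi stays off

x = "AUGGCUA"
toehold = revcomp(x)             # scRNA designed to recognize X
guide_y = "GGAUCCAAGUU"

assert conditional_substrate(x, toehold, guide_y) is not None
assert conditional_substrate("AAAAAAA", toehold, guide_y) is None
```

The key point the code mirrors is decoupling: detection keys on X, while the released duplex targets an unrelated Y, unlike native RNAi where detection and silencing are tied to the same gene.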

Signal transduction cascades are composed of multiple scRNAs (or scDNAs). The need to study multiple molecules simultaneously has motivated the development of a highly sensitive method for multiplexed northern blots. The core technology of our system is the utilization of a hybridization chain reaction (HCR) of scRNAs as the detection signal for a northern blot. To achieve multiplexing (simultaneous detection of multiple genes), we use fluorescently tagged scRNAs. Moreover, by using radioactive labeling of scRNAs, the system exhibits a five-fold increase, compared to the literature, in detection sensitivity. Sensitive multiplexed northern blot detection provides an avenue for exploring the fate of scRNAs and scDNAs in tissue culture.

Relevance: 90.00%

Abstract:

Superprotonic phase transitions and thermal behaviors of three complex solid acid systems are presented, namely Rb3H(SO4)2-RbHSO4 system, Rb3H(SeO4)2-Cs3H(SeO4)2 solid solution system, and Cs6(H2SO4)3(H1.5PO4)4. These material systems present a rich set of phase transition characteristics that set them apart from other, simpler solid acids. A.C. impedance spectroscopy, high-temperature X-ray powder diffraction, and thermal analysis, as well as other characterization techniques, were employed to investigate the phase behavior of these systems.

Rb3H(SO4)2 is an atypical member of the M3H(XO4)2 class of compounds (M = alkali metal or NH4+ and X = S or Se) in that a transition to a high-conductivity state involves disproportionation into two phases rather than a simple polymorphic transition [1]. In the present work, investigations of the Rb3H(SO4)2-RbHSO4 system have revealed the disproportionation products to be Rb2SO4 and the previously unknown compound Rb5H3(SO4)4. The new compound becomes stable at a temperature between 25 and 140 °C and is isostructural to a recently reported trigonal phase with space group P3̅m of Cs5H3(SO4)4 [2]. At 185 °C the compound undergoes an apparently polymorphic transformation with a heat of transition of 23.8 kJ/mol and a slight additional increase in conductivity.

The compounds Rb3H(SeO4)2 and Cs3H(SeO4)2, though not isomorphous at ambient temperatures, are quintessential examples of superprotonic materials. Both adopt monoclinic structures at ambient temperatures and ultimately transform to a trigonal (R3̅m) superprotonic structure at slightly elevated temperatures, 178 and 183 °C, respectively. The compounds are completely miscible above the superprotonic transition and show extensive solubility below it. Beyond a careful determination of the phase boundaries, we find a remarkable 40-fold increase in the superprotonic conductivity in intermediate compositions rich in Rb as compared to either end-member.

The compound Cs6(H2SO4)3(H1.5PO4)4 is unusual amongst solid acid compounds in that it has a complex cubic structure at ambient temperature and apparently transforms to a simpler cubic structure of the CsCl-type (isostructural with CsH2PO4) at its transition temperature of 100-120 °C [3]. Here it is found that, depending on the level of humidification, the superprotonic transition of this material is superimposed with a decomposition reaction, which involves both exsolution of (liquid) acid and loss of H2O. This reaction can be suppressed by application of sufficiently high humidity, in which case Cs6(H2SO4)3(H1.5PO4)4 undergoes a true superprotonic transition. It is proposed that, under conditions of low humidity, the decomposition/dehydration reaction transforms the compound to Cs6(H2-0.5xSO4)3(H1.5PO4)4-x, also of the CsCl structure type at the temperatures of interest, but with a smaller unit cell. With increasing temperature, the decomposition/dehydration proceeds to an ever greater extent and the unit cell of the solid phase shrinks. This is identified as the source of the apparent negative thermal expansion behavior.

References

[1] L.A. Cowan, R.M. Morcos, N. Hatada, A. Navrotsky, S.M. Haile, Solid State Ionics 179 (2008) (9-10) 305.

[2] M. Sakashita, H. Fujihisa, K.I. Suzuki, S. Hayashi, K. Honda, Solid State Ionics 178 (2007) (21-22) 1262.

[3] C.R.I. Chisholm, Superprotonic Phase Transitions in Solid Acids: Parameters affecting the presence and stability of superprotonic transitions in the MHnXO4 family of compounds (X=S, Se, P, As; M=Li, Na, K, NH4, Rb, Cs), Materials Science, California Institute of Technology, Pasadena, California (2003).

Relevance: 90.00%

Abstract:

In a probabilistic assessment of the performance of structures subjected to uncertain environmental loads such as earthquakes, an important problem is to determine the probability that the structural response exceeds some specified limits within a given duration of interest. This problem is known as the first excursion problem, and it has been a challenging problem in the theory of stochastic dynamics and reliability analysis. In spite of the enormous amount of attention the problem has received, there is no procedure available for its general solution, especially for engineering problems of interest where the complexity of the system is large and the failure probability is small.

The application of simulation methods to solving the first excursion problem is investigated in this dissertation, with the objective of assessing the probabilistic performance of structures subjected to uncertain earthquake excitations modeled by stochastic processes. From a simulation perspective, the major difficulty in the first excursion problem comes from the large number of uncertain parameters often encountered in the stochastic description of the excitation. Existing simulation tools are examined, with special regard to their applicability in problems with a large number of uncertain parameters. Two efficient simulation methods are developed to solve the first excursion problem. The first method is developed specifically for linear dynamical systems, and it is found to be extremely efficient compared to existing techniques. The second method is more robust to the type of problem, and it is applicable to general dynamical systems. It is efficient for estimating small failure probabilities because the computational effort grows at a much slower rate with decreasing failure probability than standard Monte Carlo simulation. The simulation methods are applied to assess the probabilistic performance of structures subjected to uncertain earthquake excitation. Failure analysis is also carried out using the samples generated during simulation, which provide insight into the probable scenarios that will occur given that a structure fails.
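For concreteness, the brute-force baseline that the two methods above improve upon can be sketched as a standard Monte Carlo estimator of the first excursion probability P(max_t |x(t)| > b) for a toy SDOF oscillator under discretized white noise, where each time step contributes one uncertain parameter. All parameter values here are invented for illustration; the thesis's efficient estimators are not reproduced.

```python
import numpy as np

# Standard Monte Carlo estimate of a first-excursion probability for a
# linear SDOF oscillator x'' + 2*zeta*w0*x' + w0^2*x = w(t) driven by
# discretized Gaussian white noise. Every time step is one uncertain
# parameter, so each sample lives in a 500-dimensional space.
rng = np.random.default_rng(0)
w0, zeta, dt, n_steps = 2*np.pi, 0.05, 0.01, 500
b = 0.45                         # threshold, roughly 3 stationary std devs

def max_response(w):
    """Peak |x(t)| under one white-noise sample path (semi-implicit Euler)."""
    x = v = peak = 0.0
    for wk in w:
        v += (wk - 2*zeta*w0*v - w0**2 * x) * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

n_samples = 2000
W = rng.standard_normal((n_samples, n_steps)) / np.sqrt(dt)  # unit-intensity noise
p_fail = sum(max_response(w) > b for w in W) / n_samples
print(f"estimated failure probability: {p_fail:.4f}")
```

The computational effort of this estimator grows like 1/p_fail as the failure probability shrinks, which is precisely the bottleneck that the two methods developed in the dissertation address.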

Relevance: 90.00%

Abstract:

In this work, computationally efficient approximate methods are developed for analyzing uncertain dynamical systems. Uncertainties in both the excitation and the modeling are considered and examples are presented illustrating the accuracy of the proposed approximations.

For nonlinear systems under uncertain excitation, methods are developed to approximate the stationary probability density function and statistical quantities of interest. The methods are based on approximating solutions to the Fokker-Planck equation for the system and differ from traditional methods in which approximate solutions to stochastic differential equations are found. The new methods require little computational effort and examples are presented for which the accuracy of the proposed approximations compares favorably with results obtained by existing methods. The most significant improvements are made in approximating quantities related to the extreme values of the response, such as expected outcrossing rates, which are crucial for evaluating the reliability of the system.
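As a point of reference for the Fokker-Planck approach above, the stationary density is known exactly for a scalar gradient system dx = -V'(x) dt + sqrt(2D) dW, namely p(x) ∝ exp(-V(x)/D). The sketch below (a textbook special case, not one of the thesis's approximation methods) evaluates it for an invented quartic potential by quadrature.

```python
import numpy as np

# Closed-form stationary Fokker-Planck solution for dx = -V'(x)dt + sqrt(2D)dW:
#   p(x) ∝ exp(-V(x)/D)
# Here V is a quartic (Duffing-type) potential, chosen for illustration, and
# the moments are taken by simple Riemann-sum quadrature.
D = 0.5
V = lambda x: 0.25 * x**4

x = np.linspace(-5.0, 5.0, 20001)
dx = x[1] - x[0]
p = np.exp(-V(x) / D)
p /= p.sum() * dx                      # normalize to unit area

mean = (x * p).sum() * dx
var = (x**2 * p).sum() * dx
print(f"stationary mean = {mean:.4f}, variance = {var:.4f}")
```

For this potential the variance has the exact value sqrt(2)*Γ(3/4)/Γ(1/4) ≈ 0.478, so the quadrature result can be checked directly.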

Laplace's method of asymptotic approximation is applied to approximate the probability integrals which arise when analyzing systems with modeling uncertainty. The asymptotic approximation reduces the problem of evaluating a multidimensional integral to solving a minimization problem and the results become asymptotically exact as the uncertainty in the modeling goes to zero. The method is found to provide good approximations for the moments and outcrossing rates for systems with uncertain parameters under stochastic excitation, even when there is a large amount of uncertainty in the parameters. The method is also applied to classical reliability integrals, providing approximations in both the transformed (independently, normally distributed) variables and the original variables. In the transformed variables, the asymptotic approximation yields a very simple formula for approximating the value of SORM integrals. In many cases, it may be computationally expensive to transform the variables, and an approximation is also developed in the original variables. Examples are presented illustrating the accuracy of the approximations and results are compared with existing approximations.
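Laplace's method itself is easy to demonstrate on a one-dimensional toy integral: I(ε) = ∫ exp(-f(x)/ε) dx ≈ exp(-f(x*)/ε) · sqrt(2πε / f''(x*)), where x* minimizes f, and the approximation becomes exact as ε → 0. The integrand below is invented for illustration and is not one of the thesis's reliability integrals.

```python
import numpy as np

# Laplace's asymptotic approximation of I(eps) = ∫ exp(-f(x)/eps) dx,
# compared against trapezoidal quadrature for a toy integrand.
f = lambda x: 0.5 * x**2 - np.cos(x) + 1.0   # unique minimum at x* = 0, f(0) = 0
fpp0 = 2.0                                    # f''(0) = 1 + cos(0)

def laplace(eps):
    # exp(-f(x*)/eps) = 1 here since f(x*) = 0
    return np.sqrt(2.0 * np.pi * eps / fpp0)

def quadrature(eps, n=200001, L=6.0):
    xs = np.linspace(-L, L, n)
    ys = np.exp(-f(xs) / eps)
    return float(np.sum(0.5 * (ys[1:] + ys[:-1]) * np.diff(xs)))

for eps in (0.5, 0.1, 0.02):
    exact, approx = quadrature(eps), laplace(eps)
    print(f"eps={eps:5.2f}  quadrature={exact:.5f}  Laplace={approx:.5f}")
```

As ε shrinks, the relative error of the one-term Laplace formula decays linearly in ε, illustrating the asymptotic exactness property stated above.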

Relevance: 90.00%

Abstract:

This thesis presents a simplified state-variable method to solve for the nonstationary response of linear MDOF systems subjected to a modulated stationary excitation in both time and frequency domains. The resulting covariance matrix and evolutionary spectral density matrix of the response may be expressed as a product of a constant system matrix and a time-dependent matrix; the latter can be evaluated explicitly for most envelopes in common engineering use. The stationary correlation matrix of the response may be found by taking the limit of the covariance response when a unit step envelope is used. The reliability analysis can then be performed based on the first two moments of the response so obtained.
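The unit-step-envelope limit mentioned above can be checked numerically: the stationary covariance of an SDOF oscillator under white noise solves a Lyapunov equation, and its displacement-variance entry must equal the classical closed form πS0/(2ζω0³). This is a minimal sketch with invented parameter values, not the thesis's explicit time-domain solutions.

```python
import numpy as np

# Stationary covariance check for x'' + 2*zeta*w0*x' + w0^2*x = w(t), where
# w(t) is white noise of spectral density S0 (autocorrelation 2*pi*S0*delta).
# The stationary covariance P solves A P + P A^T + Q = 0, solved here in
# vectorized (Kronecker) form using NumPy only.
w0, zeta, S0 = 2.0, 0.05, 1.0
A = np.array([[0.0, 1.0],
              [-w0**2, -2.0 * zeta * w0]])
Q = np.array([[0.0, 0.0],
              [0.0, 2.0 * np.pi * S0]])

I = np.eye(2)
K = np.kron(A, I) + np.kron(I, A)          # A P + P A^T  ->  K vec(P)
P = np.linalg.solve(K, -Q.reshape(-1)).reshape(2, 2)

var_closed_form = np.pi * S0 / (2.0 * zeta * w0**3)
print(f"Lyapunov variance = {P[0,0]:.6f}, closed form = {var_closed_form:.6f}")
```

The off-diagonal entry of P vanishes, reflecting the fact that displacement and velocity are uncorrelated in the stationary state.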

The method presented facilitates obtaining explicit solutions for general linear MDOF systems and is flexible enough to be applied to different stochastic models of excitation such as the stationary models, modulated stationary models, filtered stationary models, and filtered modulated stationary models and their stochastic equivalents including the random pulse train model, filtered shot noise, and some ARMA models in earthquake engineering. This approach may also be readily incorporated into finite element codes for random vibration analysis of linear structures.

A set of explicit solutions for the response of simple linear structures subjected to modulated white noise earthquake models with four different envelopes is presented as an illustration. In addition, the method has been applied to three selected topics of interest in earthquake engineering, namely, nonstationary analysis of primary-secondary systems with classical or nonclassical damping, soil layer response and related structural reliability analysis, and the effect of vertical ground-motion components on the seismic performance of structures. For all three cases, explicit solutions are obtained, dynamic characteristics of structures are investigated, and some suggestions are given for aseismic design of structures.

Relevance: 90.00%

Abstract:

A general framework for multi-criteria optimal design is presented which is well-suited for automated design of structural systems. A systematic computer-aided optimal design decision process is developed which allows the designer to rapidly evaluate and improve a proposed design by taking into account the major factors of interest related to different aspects such as design, construction, and operation.

The proposed optimal design process requires the selection of the most promising choice of design parameters taken from a large design space, based on an evaluation using specified criteria. The design parameters specify a particular design, and so they relate to member sizes, structural configuration, etc. The evaluation of the design uses performance parameters which may include structural response parameters, risks due to uncertain loads and modeling errors, construction and operating costs, etc. Preference functions are used to implement the design criteria in a "soft" form. These preference functions give a measure of the degree of satisfaction of each design criterion. The overall evaluation measure for a design is built up from the individual measures for each criterion through a preference combination rule. The goal of the optimal design process is to obtain a design that has the highest overall evaluation measure - an optimization problem.

Genetic algorithms are stochastic optimization methods based on evolutionary theory. They provide the exploration power necessary to search high-dimensional design spaces for these optimal solutions. Two special genetic algorithms, hGA and vGA, are presented here for continuous and discrete optimization problems, respectively.
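A generic genetic algorithm for the preference-aggregation scheme above can be sketched in a few lines. This is an illustrative toy (a plain elitist GA with invented preference functions), not the hGA or vGA of the thesis; the overall evaluation measure is the product of the individual degrees of satisfaction.

```python
import random

# Toy elitist GA maximizing an overall preference measure built as the
# product of per-criterion preference functions (values in [0, 1]).
random.seed(1)

def pref_cost(x):      # prefer small "cost" x[0]
    return max(0.0, 1.0 - x[0] / 10.0)

def pref_strength(x):  # prefer "strength" x[1] near 5
    return max(0.0, 1.0 - abs(x[1] - 5.0) / 5.0)

def overall(x):
    return pref_cost(x) * pref_strength(x)

def ga(pop_size=40, generations=60, bounds=(0.0, 10.0)):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=overall, reverse=True)
        parents = pop[: pop_size // 2]            # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]   # crossover
            child = [min(hi, max(lo, g + random.gauss(0, 0.3)))
                     for g in child]                          # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=overall)

best = ga()
print(f"best design: {best}, overall preference: {overall(best):.3f}")
```

The multiplicative combination rule means a design that badly violates any single criterion scores near zero overall, which mirrors the "soft" preference-function framing described above.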

The methodology is demonstrated with several examples involving the design of truss and frame systems. These examples are solved by using the proposed hGA and vGA.

Relevance: 90.00%

Abstract:

Insect vector-borne diseases, such as malaria and dengue fever (both spread by mosquito vectors), continue to significantly impact health worldwide, despite the efforts put forth to eradicate them. Suppression strategies utilizing genetically modified disease-refractory insects have surfaced as an attractive means of disease control, and progress has been made on engineering disease-resistant insect vectors. However, laboratory-engineered disease refractory genes would probably not spread in the wild, and would most likely need to be linked to a gene drive system in order to proliferate in native insect populations. Underdominant systems like translocations and engineered underdominance have been proposed as potential mechanisms for spreading disease refractory genes. Not only do these threshold-dependent systems have certain advantages over other potential gene drive mechanisms, such as localization of gene drive and removability, extreme engineered underdominance can also be used to bring about reproductive isolation, which may be of interest in controlling the spread of GMO crops. Proof-of-principle establishment of such drive mechanisms in a well-understood and studied insect, such as Drosophila melanogaster, is essential before more applied systems can be developed for the less characterized vector species of interest, such as mosquitoes. This work details the development of several distinct types of engineered underdominance and of translocations in Drosophila, including ones capable of bringing about reproductive isolation and population replacement, as a proof of concept study that can inform efforts to construct such systems in insect disease vectors.
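The threshold dependence mentioned above is a classical property of underdominance and can be illustrated with a deterministic single-locus toy model in which heterozygotes have reduced fitness: allele frequencies above an unstable internal equilibrium spread to fixation, while those below it are eliminated. The fitness values here are invented for illustration and do not describe the engineered constructs of the thesis.

```python
# Deterministic single-locus underdominance: genotype fitnesses 1, 1-s, 1,
# giving an unstable internal equilibrium (the release threshold) at p = 0.5.

def next_freq(p, s=0.4):
    """One generation of selection on allele frequency p."""
    q = 1.0 - p
    w_bar = p*p + 2.0*p*q*(1.0 - s) + q*q        # mean fitness
    return (p*p + p*q*(1.0 - s)) / w_bar

def simulate(p0, generations=200):
    p = p0
    for _ in range(generations):
        p = next_freq(p)
    return p

print(f"start 0.60 -> {simulate(0.60):.4f}")   # above threshold: spreads
print(f"start 0.40 -> {simulate(0.40):.4f}")   # below threshold: eliminated
```

This threshold behavior is what makes such systems attractive for localized, removable gene drive: a sub-threshold release (or dilution by wild-type immigrants) pushes the construct back out of the population.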

Relevance: 90.00%

Abstract:

This thesis describes the expansion and improvement of the iterative in situ click chemistry OBOC peptide library screening technology. Previous work provided a proof-of-concept demonstration that this technique was advantageous for the production of protein-catalyzed capture (PCC) agents that could be used as drop-in replacements for antibodies in a variety of applications. Chapter 2 describes the technology development that was undertaken to optimize this screening process and make it readily available for a wide variety of targets. This optimization is what has allowed for the explosive growth of the PCC agent project over the past few years.

These technology improvements were applied to the discovery of PCC agents specific for single amino acid point mutations in proteins, which have many applications in cancer detection and treatment. Chapter 3 describes the use of a general all-chemical epitope-targeting strategy that can focus PCC agent development directly to a site of interest on a protein surface. This technique utilizes a chemically synthesized fragment of the protein, called an epitope, substituted with a click handle in combination with the OBOC in situ click chemistry libraries in order to focus ligand development at a site of interest. Specifically, Chapter 3 discusses the use of this technique in developing a PCC agent specific for the E17K mutation of Akt1. Chapter 4 details the expansion of this ligand into a mutation-specific inhibitor, with applications in therapeutics.

Relevance: 90.00%

Abstract:

The development of catalysts that selectively oligomerize light olefins for use in polymers and fuels remains of interest to the petrochemical and materials industries. For this purpose, two tantalum compounds, (FI)TaMe2Cl2 and (FI)TaMe4, implementing a previously reported phenoxy-imine (FI) ligand framework, have been synthesized and characterized with NMR spectroscopy and X-ray crystallography. When tested for ethylene oligomerization catalysis, (FI)TaMe2Cl2 was found to dimerize ethylene when activated with Et2Zn or EtMgCl, and (FI)TaMe4 dimerized ethylene when activated with B(C6F5)3, both at room temperature.

Relevance: 90.00%

Abstract:

The intensities and relative abundances of galactic cosmic ray protons and antiprotons have been measured with the Isotope Matter Antimatter Experiment (IMAX), a balloon-borne magnet spectrometer. The IMAX payload had a successful flight from Lynn Lake, Manitoba, Canada on July 16, 1992. Particles detected by IMAX were identified by mass and charge via the Cherenkov-rigidity and time-of-flight (TOF)-rigidity techniques, with measured rms mass resolution ≤0.2 amu for Z=1 particles.

Cosmic ray antiprotons are of interest because they can be produced by the interactions of high energy protons and heavier nuclei with the interstellar medium as well as by more exotic sources. Previous cosmic ray antiproton experiments have reported an excess of antiprotons over that expected solely from cosmic ray interactions.

Analysis of the flight data has yielded 124405 protons and 3 antiprotons in the energy range 0.19-0.97 GeV at the instrument, 140617 protons and 8 antiprotons in the energy range 0.97-2.58 GeV, and 22524 protons and 5 antiprotons in the energy range 2.58-3.08 GeV. These measurements are a statistical improvement over previous antiproton measurements, and they demonstrate improved separation of antiprotons from the more abundant fluxes of protons, electrons, and other cosmic ray species.

When these results are corrected for instrumental and atmospheric background and losses, the ratios at the top of the atmosphere are p̄/p = 3.21(+3.49, -1.97)x10^(-5) in the energy range 0.25-1.00 GeV, p̄/p = 5.38(+3.48, -2.45)x10^(-5) in the energy range 1.00-2.61 GeV, and p̄/p = 2.05(+1.79, -1.15)x10^(-4) in the energy range 2.61-3.11 GeV. The corresponding antiproton intensities, also corrected to the top of the atmosphere, are 2.3(+2.5, -1.4)x10^(-2) (m^2 s sr GeV)^(-1), 2.1(+1.4, -1.0)x10^(-2) (m^2 s sr GeV)^(-1), and 4.3(+3.7, -2.4)x10^(-2) (m^2 s sr GeV)^(-1) for the same energy ranges.
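As a rough consistency check, the raw (uncorrected, at-instrument) ratios implied by the event counts quoted above can be computed directly; they come out of the same order of magnitude as the corrected top-of-atmosphere values.

```python
# Raw antiproton/proton ratios at the instrument, computed from the event
# counts quoted above (before the atmospheric and instrumental corrections
# that yield the top-of-atmosphere values).
counts = {
    "0.19-0.97 GeV": (3, 124405),
    "0.97-2.58 GeV": (8, 140617),
    "2.58-3.08 GeV": (5, 22524),
}

for band, (pbar, p) in counts.items():
    print(f"{band}: pbar/p = {pbar / p:.2e}")
```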

The IMAX antiproton fluxes and antiproton/proton ratios are compared with recent Standard Leaky Box Model (SLBM) calculations of the cosmic ray antiproton abundance. According to this model, cosmic ray antiprotons are secondary cosmic rays arising solely from the interaction of high energy cosmic rays with the interstellar medium. The effects of solar modulation of protons and antiprotons are also calculated, showing that the antiproton/proton ratio can vary by as much as an order of magnitude over the solar cycle. When solar modulation is taken into account, the IMAX antiproton measurements are found to be consistent with the most recent calculations of the SLBM. No evidence is found in the IMAX data for excess antiprotons arising from the decay of galactic dark matter, which had been suggested as an interpretation of earlier measurements. Furthermore, the consistency of the current results with the SLBM calculations suggests that the mean antiproton lifetime is at least as large as the cosmic ray storage time in the galaxy (~10^7 yr, based on measurements of cosmic ray ^(10)Be). Recent measurements by two other experiments are consistent with this interpretation of the IMAX antiproton results.

Relevance: 90.00%

Abstract:

The quasicontinuum (QC) method was introduced to coarse-grain crystalline atomic ensembles in order to bridge the scales from individual atoms to the micro- and mesoscales. Though many QC formulations have been proposed with varying characteristics and capabilities, a crucial cornerstone of all QC techniques is the concept of summation rules, which attempt to efficiently approximate the total Hamiltonian of a crystalline atomic ensemble by a weighted sum over a small subset of atoms. In this work we propose a novel, fully-nonlocal, energy-based formulation of the QC method with support for legacy and new summation rules through a general energy-sampling scheme. Our formulation does not conceptually differentiate between atomistic and coarse-grained regions and thus allows for seamless bridging without domain-coupling interfaces. Within this structure, we introduce a new class of summation rules which leverage the affine kinematics of this QC formulation to most accurately integrate thermodynamic quantities of interest. By comparing this new class of summation rules to commonly-employed rules through analysis of energy and spurious force errors, we find that the new rules produce no residual or spurious force artifacts in the large-element limit under arbitrary affine deformation, while allowing us to seamlessly bridge to full atomistics. We verify that the new summation rules exhibit significantly smaller force artifacts and energy approximation errors than all comparable previous summation rules through a comprehensive suite of examples with spatially non-uniform QC discretizations in two and three dimensions. Due to the unique structure of these summation rules, we also use the new formulation to study scenarios with large regions of free surface, a class of problems previously out of reach of the QC method. 
Lastly, we present the key components of a high-performance, distributed-memory realization of the new method, including a novel algorithm for supporting unparalleled levels of deformation. Overall, this new formulation and implementation allows us to efficiently perform simulations containing an unprecedented number of degrees of freedom with low approximation error.
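The core idea of a summation rule (approximating the total Hamiltonian by a weighted sum over a few sampling atoms) can be illustrated with a toy 1D harmonic chain under affine, uniform-strain deformation, where every interior atom carries the same site energy and a weighted sample is nearly exact. This is a caricature of the exactness property discussed above, not the thesis's sampling scheme.

```python
# Toy QC-style summation rule: approximate the total energy of a 1D harmonic
# chain by a weighted sum over a handful of sampling atoms. Under uniform
# strain every interior site energy is identical, so any rule whose weights
# sum to the atom count is (up to boundary effects) exact.
N = 1000          # atoms
k, a = 1.0, 1.0   # spring stiffness, lattice spacing
strain = 0.02

def site_energy(i, positions):
    """Harmonic energy assigned to atom i (half of each adjacent bond)."""
    e = 0.0
    if i > 0:
        e += 0.25 * k * (positions[i] - positions[i - 1] - a) ** 2
    if i < N - 1:
        e += 0.25 * k * (positions[i + 1] - positions[i] - a) ** 2
    return e

positions = [i * a * (1.0 + strain) for i in range(N)]

E_full = sum(site_energy(i, positions) for i in range(N))

# Summation rule: a few interior sampling atoms, weights summing to N
# (boundary atoms are ignored here for simplicity).
samples = [100, 300, 500, 700, 900]
weights = [N / len(samples)] * len(samples)
E_approx = sum(w * site_energy(i, positions) for w, i in zip(weights, samples))

print(f"full sum: {E_full:.6f}, sampled sum: {E_approx:.6f}")
```

In a real QC discretization the sampling atoms and weights come from the mesh, and the point of the new summation rules above is that this exactness under affine deformation carries over without spurious residual forces.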

Relevance: 90.00%

Abstract:

The nuclear resonant reaction 19F(p,αγ)16O has been used to perform depth-sensitive analyses of fluorine in lunar samples and carbonaceous chondrites. The resonance at 0.83 MeV (center-of-mass) in this reaction is utilized to study fluorine surface films, with particular interest paid to the outer micron of Apollo 15 green glass, Apollo 17 orange glass, and lunar vesicular basalts. These results are distinguished from terrestrial contamination, and are discussed in terms of a volcanic origin for the samples of interest. Measurements of fluorine in carbonaceous chondrites are used to better define the solar system fluorine abundance. A technique for measurement of carbon on solid surfaces with applications to direct quantitative analysis of implanted solar wind carbon in lunar samples is described.
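The depth sensitivity of the technique comes from beam energy loss: protons entering above the resonance energy slow down in the target and reach the resonance at a well-defined depth, so scanning the beam energy scans depth. The sketch below uses the 0.83 MeV center-of-mass resonance converted to a lab energy of roughly 0.87 MeV, and a wholly hypothetical constant stopping power chosen only for illustration.

```python
# Sketch of resonant depth profiling: depth = (E_beam - E_res) / (dE/dx).
# The stopping power value is a made-up placeholder, not a measured one.

E_res_keV = 872.0            # ~0.83 MeV (c.m.) resonance converted to lab frame
dE_dx_keV_per_um = 50.0      # HYPOTHETICAL constant stopping power

def probe_depth_um(E_beam_keV):
    """Depth (micrometres) at which the beam slows to the resonance energy."""
    if E_beam_keV < E_res_keV:
        return None          # resonance never reached
    return (E_beam_keV - E_res_keV) / dE_dx_keV_per_um

for E in (872.0, 897.0, 922.0):
    print(f"E_beam = {E} keV -> depth = {probe_depth_um(E)} um")
```

In practice the stopping power depends on the target composition and beam energy, and the measured gamma yield versus beam energy is unfolded into a concentration-versus-depth profile.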

Relevance: 90.00%

Abstract:

I. The attenuation of sound due to particles suspended in a gas was first calculated by Sewell and later by Epstein in their classical works on the propagation of sound in a two-phase medium. In their work, and in more recent works which include calculations of sound dispersion, the calculations were made for systems in which there was no mass transfer between the two phases. In the present work, mass transfer between phases is included in the calculations.

The attenuation and dispersion of sound in a two-phase condensing medium are calculated as functions of frequency. The medium in which the sound propagates consists of a gaseous phase, a mixture of inert gas and condensable vapor, which contains condensable liquid droplets. The droplets, which interact with the gaseous phase through the interchange of momentum, energy, and mass (through evaporation and condensation), are treated from the continuum viewpoint. Limiting cases, for flow either frozen or in equilibrium with respect to the various exchange processes, help demonstrate the effects of mass transfer between phases. Included in the calculation is the effect of thermal relaxation within droplets. Pressure relaxation between the two phases is examined, but is not included as a contributing factor because it is of interest only at much higher frequencies than the other relaxation processes. The results for a system typical of sodium droplets in sodium vapor are compared to calculations in which there is no mass exchange between phases. It is found that the maximum attenuation is about 25 per cent greater and occurs at about one-half the frequency for the case which includes mass transfer, and that the dispersion at low frequencies is about 35 per cent greater. Results for different values of latent heat are compared.

II. In the flow of a gas-particle mixture through a nozzle, a normal shock may exist in the diverging section of the nozzle. In Marble’s calculation for a shock in a constant area duct, the shock was described as a usual gas-dynamic shock followed by a relaxation zone in which the gas and particles return to equilibrium. The thickness of this zone, which is the total shock thickness in the gas-particle mixture, is of the order of the relaxation distance for a particle in the gas. In a nozzle, the area may change significantly over this relaxation zone so that the solution for a constant area duct is no longer adequate to describe the flow. In the present work, an asymptotic solution, which accounts for the area change, is obtained for the flow of a gas-particle mixture downstream of the shock in a nozzle, under the assumption of small slip between the particles and gas. This amounts to the assumption that the shock thickness is small compared with the length of the nozzle. The shock solution, valid in the region near the shock, is matched to the well known small-slip solution, which is valid in the flow downstream of the shock, to obtain a composite solution valid for the entire flow region. The solution is applied to a conical nozzle. A discussion of methods of finding the location of a shock in a nozzle is included.

Relevance: 90.00%

Abstract:

My focus in this thesis is to contribute to a more thorough understanding of the mechanics of ice and deformable glacier beds. Glaciers flow under their own weight through a combination of deformation within the ice column and basal slip, which involves both sliding along and deformation within the bed. Deformable beds, which are made up of unfrozen sediment, are prevalent in nature and are often the primary contributors to ice flow wherever they are found. Their granular nature imbues them with unique mechanical properties that depend on the granular structure and hydrological properties of the bed. Despite their importance for understanding glacier flow and the response of glaciers to changing climate, the mechanics of deformable glacier beds are not well understood.

Our general approach to understanding the mechanics of bed deformation and their effect on glacier flow is to acquire synoptic observations of ice surface velocities and their changes over time and to use those observations to infer the mechanical properties of the bed. We focus on areas where changes in ice flow over time are due to known environmental forcings and where the processes of interest are largely isolated from other effects. To make this approach viable, we further develop observational methods that involve the use of mapping radar systems. Chapters 2 and 5 focus largely on the development of these methods and analysis of results from ice caps in central Iceland and an ice stream in West Antarctica. In Chapter 3, we use these observations to constrain numerical ice flow models in order to study the mechanics of the bed and the ice itself. We show that the bed in an Iceland ice cap deforms plastically and we derive an original mechanistic model of ice flow over plastically deforming beds that incorporates changes in bed strength caused by meltwater flux from the surface. Expanding on this work in Chapter 4, we develop a more detailed mechanistic model for till-covered beds that helps explain the mechanisms that cause some glaciers to surge quasi-periodically. In Antarctica, we observe and analyze the mechanisms that allow ocean tidal variations to modulate ice stream flow tens of kilometers inland. We find that the ice stream margins are significantly weakened immediately upstream of the area where ice begins to float and that this weakening likely allows changes in stress over the floating ice to propagate through the ice column.

Relevance: 90.00%

Abstract:

The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software defined networks or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and perhaps not surprisingly makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable to solve, with the Witsenhausen counterexample being a famous instance of this phenomenon. In response to this discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.
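For sparsity-constrained controllers, quadratic invariance reduces to a simple boolean test on the controller and plant sparsity patterns: the pattern of K G K must stay inside the constraint set for every admissible K. The sketch below applies this standard test to two toy patterns: a lower-triangular information structure, which is QI under a lower-triangular plant, and a fully decentralized (diagonal) structure under a dense plant, which is not.

```python
import numpy as np

# Boolean quadratic invariance (QI) test for sparsity constraints: S is QI
# under plant pattern G when the binary pattern of S @ G @ S is contained
# in S (entries of S and G are 0/1).
def is_qi(S, G):
    prod = (S @ G @ S) > 0                 # pattern of K G K
    return bool(np.all(~prod | (S > 0)))   # containment check

# Lower-triangular information flow with a lower-triangular plant: QI.
L = np.tril(np.ones((3, 3), dtype=int))
print(is_qi(L, L))      # expected: True

# Fully decentralized (diagonal) controllers with a dense plant: not QI.
Dg = np.eye(3, dtype=int)
F = np.ones((3, 3), dtype=int)
print(is_qi(Dg, F))     # expected: False
```

The first case captures the informal statement above: information flows down the triangle at least as fast as control actions propagate, so the constrained problem convexifies; in the second case the plant couples all channels while the controllers share nothing, and convexity is lost.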

The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to a renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis can be seen as part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control and fall under three broad categories: controller synthesis, architecture design, and system identification.

We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two-player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems, which considers lossy channels in the feedback loop.

Our next set of results is concerned with controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller, such as the placement of actuators, sensors, and the communication links between them, can no longer be taken as given -- indeed, the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying and computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it.

Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and they destroy rather than leverage any a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end, we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system.
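The actuator-placement flavor of the co-design task can be illustrated on a static least-squares surrogate: penalizing the sum of row norms of the decision variable (a group-lasso-style atomic norm) drives entire rows, and hence entire candidate actuators, toward zero. The sketch below, including the ISTA solver, the problem sizes, and the penalty weight, is an illustrative toy under these assumptions, not the thesis's actual RFD formulation:

```python
import numpy as np

def row_group_prox(X, t):
    """Proximal operator of t * sum of row l2-norms (block soft-threshold)."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))
    return scale * X

def actuator_selection(A, B, lam, n_iter=500):
    """ISTA for min_X 0.5*||A X - B||_F^2 + lam * sum_i ||X[i, :]||_2.
    Rows of X that shrink to (near) zero mark actuators that can be dropped."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2          # step size 1/L
    X = np.zeros((A.shape[1], B.shape[1]))
    for _ in range(n_iter):
        grad = A.T @ (A @ X - B)                 # gradient of the smooth term
        X = row_group_prox(X - t * grad, t * lam)
    return X

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 6))                     # columns = candidate actuator directions
B = A[:, [0, 2]] @ rng.normal(size=(2, 4))       # target reachable using actuators 0 and 2
X = actuator_selection(A, B, lam=2.0)
print(np.linalg.norm(X, axis=1))                 # rows 0 and 2 dominate; others shrink toward 0
```

The point of the convex penalty is exactly the one made above: architecture (which rows survive) and control law (the surviving coefficients) are selected jointly by a single tractable optimization, rather than in two separate combinatorial and continuous stages.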
We exploit the fact that the transfer function of the local dynamics is low-order, but full-rank, while the transfer function of the global dynamics is high-order, but low-rank, to formulate this separation task as a nuclear norm minimization problem. Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control and optimization in layered architectures.
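One standard way to attack nuclear norm minimization problems of this kind is with proximal methods, whose key primitive is singular value thresholding: soft-thresholding the singular values computes the proximal operator of the nuclear norm exactly. A self-contained sketch, in which a synthetic low-rank-plus-perturbation matrix stands in for the global/local split described above (the sizes and threshold are arbitrary illustrative choices):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the minimizer of
    0.5*||X - M||_F^2 + tau*||X||_*, i.e. prox of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)                 # soft-threshold the spectrum
    return U @ (s[:, None] * Vt)

rng = np.random.default_rng(0)
# "Global" component: high-dimensional but low-rank (rank 2).
L = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 30))
# "Local" component: small-norm, full-rank perturbation.
E = 0.05 * rng.normal(size=(30, 30))
M = L + E                                        # observed mixture is full rank
X = svt(M, tau=1.0)
print(np.linalg.matrix_rank(X))                  # thresholding recovers a low-rank estimate
```

Because the global dynamics contribute a few large singular values while the local, full-rank part contributes many small ones, a single thresholding step already separates the two scales; iterative schemes refine this split.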