13 results for Morphology Control Synthesis

in CaltechTHESIS


Relevance:

100.00%

Publisher:

Abstract:

Modern robots are increasingly expected to function in uncertain and dynamically challenging environments, often in close proximity to humans. In addition, wide-scale adoption of robots requires on-the-fly adaptability of software for diverse applications. These requirements strongly suggest the need to adopt formal representations of high-level goals and safety specifications, especially as temporal logic formulas. This approach allows the use of formal verification techniques for controller synthesis that can give guarantees of safety and performance. Robots operating in unstructured environments also face limited sensing capability, and correctly inferring a robot's progress toward a high-level goal can be challenging.

This thesis develops new algorithms for synthesizing discrete controllers in partially known environments under specifications represented as linear temporal logic (LTL) formulas. It is inspired by recent developments in finite abstraction techniques for hybrid systems and motion planning problems. The robot and its environment are assumed to admit a finite abstraction as a Partially Observable Markov Decision Process (POMDP), a powerful model class capable of representing a wide variety of problems. However, synthesizing controllers that satisfy LTL goals over POMDPs is a challenging problem that has received only limited attention.

This thesis proposes tractable, approximate algorithms for the control synthesis problem using Finite State Controllers (FSCs). The use of FSCs to control finite POMDPs allows the closed-loop system to be analyzed as a finite global Markov chain. The thesis explicitly shows how the transient and steady-state behavior of this global Markov chain can be related to two different criteria for satisfaction of LTL formulas. First, maximization of the probability of LTL satisfaction is related to an optimization problem over a parametrization of the FSC. Analytic gradients are derived, which allows the use of first-order optimization techniques.
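
The closed-loop construction described above has a compact matrix form. As a minimal sketch (all model and controller arrays below are random placeholders, not the thesis's parametrization), composing an FSC with a finite POMDP yields a global Markov chain over state/memory-node pairs:

```python
import numpy as np

nS, nA, nO, nG = 4, 2, 3, 2                 # POMDP states, actions, observations; FSC memory nodes
rng = np.random.default_rng(0)

def stochastic(shape):
    """Random array whose last axis sums to one (a conditional distribution)."""
    m = rng.random(shape)
    return m / m.sum(axis=-1, keepdims=True)

T = stochastic((nS, nA, nS))                # T[s, a, s']     = P(s' | s, a)
O = stochastic((nS, nA, nO))                # O[s', a, o]     = P(o | s', a)
omega = stochastic((nG, nA))                # omega[g, a]     = P(a | g)       (FSC action rule)
kappa = stochastic((nG, nO, nG))            # kappa[g, o, g'] = P(g' | g, o)   (FSC memory update)

# Induced chain over pairs (s, g):
# P[(s, g) -> (s', g')] = sum_{a, o} omega[g, a] * T[s, a, s'] * O[s', a, o] * kappa[g, o, g']
P = np.einsum("ga,sap,pao,gof->sgpf", omega, T, O, kappa).reshape(nS * nG, nS * nG)
assert np.allclose(P.sum(axis=1), 1.0)      # every row of the induced chain is a distribution
```

Transient and steady-state analysis of LTL satisfaction can then be carried out on P (together with an automaton for the formula), which is what the two criteria discussed here exploit.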

The second criterion encourages rapid and frequent visits to a restricted set of states over infinite executions. It is formulated as a constrained optimization problem with a discounted long-term reward objective through a novel use of a fundamental equation for Markov chains, the Poisson equation. A new constrained policy iteration technique is proposed to solve the resulting dynamic program, which also provides a way to escape local maxima.
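
For reference, the Poisson equation invoked here takes, for a finite ergodic chain with transition matrix P, reward vector r, and long-run average reward η, the standard form

\[
(I - P)\,h \;=\; r - \eta\,\mathbf{1},
\]

where the bias (relative value) vector h is unique up to an additive constant; this is the relation the constrained formulation above builds on.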

The algorithms proposed in the thesis are applied to the task planning and execution challenges faced during the DARPA Autonomous Robotic Manipulation - Software challenge.

Relevance:

100.00%

Publisher:

Abstract:

This thesis presents methods for incrementally constructing controllers in the presence of uncertainty and nonlinear dynamics. The basic setting is motion planning subject to temporal logic specifications. Broadly, two categories of problems are treated. The first is reactive formal synthesis when so-called discrete abstractions are available. The fragment of linear-time temporal logic (LTL) known as GR(1) is used to express assumptions about an adversarial environment and requirements of the controller. Two problems concerning changes to a specification are posed that address the two major aspects of GR(1): safety and liveness. Algorithms providing incremental updates to strategies are presented as solutions. In support of these, an annotation of strategies is developed that facilitates repeated modifications. A variety of properties are proven about it, including the necessity of its existence and its sufficiency for a strategy to be winning. The second category of problems considered is non-reactive (open-loop) synthesis in the absence of a discrete abstraction. Instead, the presented stochastic optimization methods directly construct a control input sequence that achieves low cost and satisfies an LTL formula. Several relaxations are considered as heuristics to address the rarity of sampling trajectories that satisfy an LTL formula and are demonstrated to improve convergence rates for a Dubins car and single integrators subject to a recurrence task.
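
For context, a GR(1) specification has the assume-guarantee form (standard notation; the atomic propositions themselves are application-specific and not drawn from this abstract)

\[
\varphi \;=\; \Big(\theta_e \,\wedge\, \square\rho_e \,\wedge\, \textstyle\bigwedge_i \square\lozenge J^e_i\Big) \;\rightarrow\; \Big(\theta_s \,\wedge\, \square\rho_s \,\wedge\, \textstyle\bigwedge_j \square\lozenge J^s_j\Big),
\]

where the θ terms are initial conditions, the ρ terms are transition (safety) rules, and the □◇J terms are liveness goals for the environment (e) and the system (s); the two classes of specification changes treated here concern the ρ and J parts, respectively.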

Relevance:

90.00%

Publisher:

Abstract:

This thesis is motivated by safety-critical applications involving autonomous air, ground, and space vehicles carrying out complex tasks in uncertain and adversarial environments. We use temporal logic as a language to formally specify complex tasks and system properties. Temporal logic specifications generalize the classical notions of stability and reachability that are studied in the control and hybrid systems communities. Given a system model and a formal task specification, the goal is to automatically synthesize a control policy for the system that ensures that the system satisfies the specification. This thesis presents novel control policy synthesis algorithms for optimal and robust control of dynamical systems with temporal logic specifications. Furthermore, it introduces algorithms that are efficient and extend to high-dimensional dynamical systems.

The first contribution of this thesis is the generalization of a classical linear temporal logic (LTL) control synthesis approach to optimal and robust control. We show how we can extend automata-based synthesis techniques for discrete abstractions of dynamical systems to create optimal and robust controllers that are guaranteed to satisfy an LTL specification. Such optimal and robust controllers can be computed at little extra computational cost compared to computing a feasible controller.

The second contribution of this thesis addresses the scalability of control synthesis with LTL specifications. A major limitation of the standard automaton-based approach for control with LTL specifications is that the automaton might be doubly-exponential in the size of the LTL specification. We introduce a fragment of LTL for which one can compute feasible control policies in time polynomial in the size of the system and specification. Additionally, we show how to compute optimal control policies for a variety of cost functions, and identify interesting cases when this can be done in polynomial time. These techniques are particularly relevant for online control, as one can guarantee that a feasible solution can be found quickly, and then iteratively improve on the quality as time permits.

The final contribution of this thesis is a set of algorithms for computing feasible trajectories for high-dimensional, nonlinear systems with LTL specifications. These algorithms avoid a potentially computationally-expensive process of computing a discrete abstraction, and instead compute directly on the system's continuous state space. The first method uses an automaton representing the specification to directly encode a series of constrained-reachability subproblems, which can be solved in a modular fashion by using standard techniques. The second method encodes an LTL formula as mixed-integer linear programming constraints on the dynamical system. We demonstrate these approaches with numerical experiments on temporal logic motion planning problems with high-dimensional (10+ states) continuous systems.
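
As a concrete, minimal illustration of the mixed-integer idea, the sketch below encodes a toy "eventually reach the goal box" task for a double integrator using big-M constraints. It is only a sketch under assumed parameter values, not the thesis's encoding.

```python
import numpy as np
import cvxpy as cp

T, M = 20, 100.0                                   # horizon and big-M constant
A = np.array([[1.0, 0.1], [0.0, 1.0]])             # double-integrator dynamics (dt = 0.1)
B = np.array([[0.005], [0.1]])
goal_lo, goal_hi = np.array([0.4, -1.0]), np.array([0.6, 1.0])   # goal box on (position, velocity)

x = cp.Variable((2, T + 1))
u = cp.Variable((1, T))
b = cp.Variable(T + 1, boolean=True)               # b[t] = 1 forces x[:, t] into the goal box

cons = [x[:, 0] == np.zeros(2)]
for t in range(T):
    cons += [x[:, t + 1] == A @ x[:, t] + B @ u[:, t], cp.abs(u[:, t]) <= 1.0]
for t in range(T + 1):
    cons += [x[:, t] + M * (1 - b[t]) >= goal_lo,  # big-M: binding only when b[t] = 1
             x[:, t] - M * (1 - b[t]) <= goal_hi]
cons += [cp.sum(b) >= 1]                           # LTL "eventually goal" over the finite horizon

prob = cp.Problem(cp.Minimize(cp.sum(cp.abs(u))), cons)
prob.solve()                                       # needs a mixed-integer-capable solver (e.g. GLPK_MI, GUROBI)
```

Richer formulas are typically handled by introducing one binary variable per atomic proposition per time step and composing them according to the formula's Boolean and temporal structure.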

Relevance:

80.00%

Publisher:

Abstract:

The overarching theme of this thesis is mesoscale optical and optoelectronic design of photovoltaic and photoelectrochemical devices. In a photovoltaic device, light absorption and charge carrier transport are coupled together on the mesoscale, and in a photoelectrochemical device, light absorption, charge carrier transport, catalysis, and solution species transport are all coupled together on the mesoscale. The work discussed herein demonstrates that simulation-based mesoscale optical and optoelectronic modeling can lead to detailed understanding of the operation and performance of these complex mesostructured devices, serve as a powerful tool for device optimization, and efficiently guide device design and experimental fabrication efforts. In-depth studies of two mesoscale wire-based device designs illustrate these principles—(i) an optoelectronic study of a tandem Si|WO3 microwire photoelectrochemical device, and (ii) an optical study of III-V nanowire arrays.

The study of the monolithic, tandem Si|WO3 microwire photoelectrochemical device begins with the development of an optoelectronic model and its validation against experiment. This study capitalizes on the synergy between experiment and simulation to demonstrate the model's predictive power for extractable device voltage and light-limited current density. The developed model is then used to understand the limiting factors of the device and to optimize its optoelectronic performance. The results of this work reveal that high-fidelity modeling can facilitate unequivocal identification of limiting phenomena, such as parasitic absorption via excitation of a surface plasmon-polariton mode, and rapid design optimization, achieving over a 300% enhancement in optoelectronic performance relative to a nominal design for this device architecture, which would be time-consuming and challenging to do via experiment.

The work on III-V nanowire arrays also starts as a collaboration of experiment and simulation aimed at gaining understanding of unprecedented, experimentally observed absorption enhancements in sparse arrays of vertically-oriented GaAs nanowires. To explain this resonant absorption in periodic arrays of high index semiconductor nanowires, a unified framework that combines a leaky waveguide theory perspective and that of photonic crystals supporting Bloch modes is developed in the context of silicon, using both analytic theory and electromagnetic simulations. This detailed theoretical understanding is then applied to a simulation-based optimization of light absorption in sparse arrays of GaAs nanowires. Near-unity absorption in sparse, 5% fill fraction arrays is demonstrated via tapering of nanowires and multiple wire radii in a single array. Finally, experimental efforts are presented towards fabrication of the optimized array geometries. A hybrid self-catalyzed and selective area MOCVD growth method is used to establish morphology control of GaP nanowire arrays. Similarly, morphology and pattern control of nanowires is demonstrated with ICP-RIE of InP. Optical characterization of the InP nanowire arrays gives proof of principle that tapering and multiple wire radii can lead to near-unity absorption in sparse arrays of InP nanowires.

Relevance:

40.00%

Publisher:

Abstract:

Cyber-physical systems integrate computation, networking, and physical processes. Substantial research challenges exist in the design and verification of such large-scale, distributed sensing, actuation, and control systems. Rapidly improving technology and recent advances in control theory, networked systems, and computer science give us the opportunity to drastically improve our approach to integrated flow of information and cooperative behavior. Current systems rely on text-based specifications and manual design. Using new technology advances, we can create easier, more efficient, and cheaper ways of developing these control systems. This thesis will focus on design considerations for system topologies, ways to formally and automatically specify requirements, and methods to synthesize reactive control protocols, all within the context of an aircraft electric power system as a representative application area.

This thesis consists of three complementary parts: synthesis, specification, and design. The first section focuses on the synthesis of centralized and distributed reactive controllers for an aircraft electric power system. This approach incorporates methodologies from computer science and control. The resulting controllers are correct by construction with respect to system requirements, which are formulated using the specification language of linear temporal logic (LTL). The second section addresses how to formally specify requirements and introduces a domain-specific language for electric power systems. A software tool automatically converts high-level requirements into LTL and synthesizes a controller.
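
For illustration only (these propositions are hypothetical and not requirements quoted from the thesis), requirements of this kind typically combine safety and liveness clauses such as

\[
\square\,\neg\,(c_L \wedge c_R) \;\;\wedge\;\; \square\lozenge\, p_{\mathrm{ess}},
\]

read as "never close the pair of contactors that would parallel the two AC generators, and the essential bus is powered infinitely often"; a domain-specific language can compile bus- and component-level requirements into formulas of this shape.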

The final section focuses on design space exploration. A design methodology is proposed that uses mixed-integer linear programming to obtain candidate topologies, which are then used to synthesize controllers. The discrete-time control logic is then verified in real time by two methods: hardware and simulation. Finally, the problem of partial observability and dynamic state estimation is explored. Given a fixed placement of sensors on an electric power system, measurements from these sensors can be used in conjunction with the control logic to infer the state of the system.

Relevance:

40.00%

Publisher:

Abstract:

Long linear polymers that are end-functionalized with associative groups were studied as additives to hydrocarbon fluids to mitigate the fire hazard associated with the presence of mist in a crash scenario. These polymers were molecularly designed to overcome both the shear-degradation of long polymer chains in turbulent flows, and the chain collapse induced by the random placement of associative groups along polymer backbones. Architectures of associative groups on the polymer chain ends that were tested included clusters of self-associative carboxyl groups and pairs of hetero-complementary associative units.

Linear polymers with clusters of discrete numbers of carboxyl groups on their chain ends were investigated first: an innovative synthetic strategy was devised to achieve unprecedented backbone lengths and precise control of the number of carboxyl groups on chain ends (N). We found that a very narrow range of N allows the co-existence of sufficient end-association strength and polymer solubility in apolar media. A subsequent steady-flow rheological study of the solution behavior of such soluble polymers in apolar media revealed that the end-association of very long chains leads to the formation of flower-like micelles interconnected by bridging chains, which trap a significant fraction of polymer chains in looped structures that contribute little to mist control. The efficacy of very long 1,4-polybutadiene chains end-functionalized with clusters of four carboxyl groups as mist-control additives for jet fuel was further tested. In addition to being shear-resistant, the polymer was found capable of providing fire protection to jet fuel at concentrations as low as 0.3 wt%. We also found that this polymer has excellent solubility in jet fuel over a wide range of temperatures (-30 to +70°C) and negligible interference with dewatering of jet fuel. It does not cause an adverse increase in viscosity at concentrations where mist-control efficacy exists.

Four pairs of hetero-complementary associative end-groups of varying strengths were subsequently investigated, in the hopes of achieving supramolecular aggregates with both mist-control ability and better utilization of polymer building blocks. Rheological study of solutions of the corresponding complementary associative polymer pairs in apolar media revealed the strength of complementary end-association required to achieve supramolecular aggregates capable of modulating rheological properties of the solution.

Both self-associating and complementary associating polymers have therefore been found to resist shear degradation. The successful strategy of building soluble, end-associative polymers with either self-associative or complementary associative groups will guide the next generation of mist-control technology.

Relevance:

40.00%

Publisher:

Abstract:

The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software-defined networks, or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange, and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and, perhaps not surprisingly, makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.
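
Formally, in the standard formulation, a subspace of controllers S is quadratically invariant with respect to a plant G if

\[
K G K \in S \quad \text{for all } K \in S,
\]

in which case K ∈ S if and only if K(I - GK)^{-1} ∈ S, so the structural constraint is convex in the parameter Q = K(I - GK)^{-1} and the distributed problem can be posed as a convex model-matching problem.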

The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis can be seen as part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control and fall under three broad categories: controller synthesis, architecture design, and system identification.

We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two-player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems, which considers lossy channels in the feedback loop.

Our next set of results is concerned with controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller, such as the placement of actuators, sensors, and the communication links between them, can no longer be taken as given -- indeed, the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying, computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it.

Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and they destroy rather than leverage any a priori information about the system's interconnection structure. We argue that, in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end, we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system. We exploit the fact that the transfer function of the local dynamics is low-order but full-rank, while the transfer function of the global dynamics is high-order but low-rank, to formulate this separation task as a nuclear norm minimization problem (a generic illustration of this mechanism is sketched below). Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control, and optimization in layered architectures.
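
The low-rank reasoning behind the identification heuristic can be illustrated with a generic nuclear-norm decomposition. The toy split below of a synthetic data matrix into a low-rank part plus a small residual is only a sketch of the mechanism, not the thesis's algorithm:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
# Synthetic "measurement": a rank-5 (global, low-rank) component plus small local/noise content.
D = rng.standard_normal((30, 5)) @ rng.standard_normal((5, 30)) + 0.05 * rng.standard_normal((30, 30))

L = cp.Variable((30, 30))                              # candidate low-rank component
fit = cp.norm(D - L, "fro") <= 0.1 * np.linalg.norm(D, "fro")
cp.Problem(cp.Minimize(cp.normNuc(L)), [fit]).solve()  # nuclear norm as a convex surrogate for rank

# The leading singular values of the recovered component reveal its numerical rank.
print(np.round(np.linalg.svd(L.value, compute_uv=False)[:8], 2))
```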

Relevance:

30.00%

Publisher:

Abstract:

Despite the complexity of biological networks, we find that certain common architectures govern network structure. These architectures impose fundamental constraints on system performance and create tradeoffs that the system must balance in the face of uncertainty in the environment. This means that while a system may be optimized for a specific function through evolution, the optimal achievable state must obey these constraints. One such constraining architecture is autocatalysis, as seen in many biological networks including glycolysis and ribosomal protein synthesis. Using a minimal model, we show that ATP autocatalysis in glycolysis imposes stability and performance constraints and that the experimentally well-studied glycolytic oscillations are in fact a consequence of a tradeoff between error minimization and stability. We also show that additional complexity in the network results in increased robustness. Ribosome synthesis is likewise autocatalytic, since ribosomes must be used to make more ribosomal proteins; when ribosomes have higher protein content, the autocatalysis is stronger. We show that this autocatalysis destabilizes the system, slows down the response, and constrains the system's performance. On a larger scale, transcriptional regulation of whole organisms also follows architectural constraints, and this can be seen in the differences between bacterial and yeast transcription networks. We show that the degree distribution of the bacterial transcription network follows a power law while that of the yeast network is exponential. We then explore previously proposed evolutionary models and show that neither the preferential linking model nor the duplication-divergence model of network evolution generates the power-law, hierarchical structure found in bacteria. However, in real biological systems, the generation of new nodes occurs through both duplication and horizontal gene transfer, and we show that a biologically reasonable combination of the two mechanisms generates the desired network.
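
As a self-contained illustration of the two growth models named above (synthetic graphs, not the transcription-network data analyzed in the thesis):

```python
import collections
import networkx as nx

pa = nx.barabasi_albert_graph(2000, 2, seed=1)             # preferential attachment
dd = nx.duplication_divergence_graph(2000, 0.4, seed=1)    # duplication-divergence

def degree_counts(g):
    """Histogram of node degrees, sorted by degree."""
    return sorted(collections.Counter(d for _, d in g.degree()).items())

# A power-law-like tail shows up as rare, very high-degree hubs;
# an exponential-like distribution decays much faster.
print("preferential attachment, largest degrees:", degree_counts(pa)[-5:])
print("duplication-divergence, largest degrees:", degree_counts(dd)[-5:])
```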

Relevance:

30.00%

Publisher:

Abstract:

The Hamilton-Jacobi-Bellman (HJB) equation is central to stochastic optimal control (SOC) theory, yielding the optimal solution to general problems specified by known dynamics and a cost functional. Under the assumption of a cost that is quadratic in the control input, it is well known that the HJB reduces to a particular partial differential equation (PDE). While powerful, this reduction is not commonly used, as the PDE is second order and nonlinear, and examples exist where the problem has no solution in a classical sense. Furthermore, each state of the system appears as another dimension of the PDE, giving rise to the curse of dimensionality: since the number of degrees of freedom required to solve the optimal control problem grows exponentially with dimension, the problem becomes intractable for systems of all but modest dimension.
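
In one standard continuous-time form (the notation here is generic, not quoted from the thesis), for dynamics dx = (f(x) + G(x)u) dt + B(x) dω and running cost q(x) + ½ uᵀRu, carrying out the minimization over u gives u* = -R⁻¹Gᵀ∇V and the second-order nonlinear PDE

\[
-\partial_t V \;=\; q + f^{\top}\nabla V \;-\; \tfrac{1}{2}\,\nabla V^{\top} G R^{-1} G^{\top} \nabla V \;+\; \tfrac{1}{2}\,\mathrm{tr}\!\left(B B^{\top} \nabla^2 V\right).
\]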

In the last decade researchers have found that under certain, fairly non-restrictive structural assumptions, the HJB may be transformed into a linear PDE, with an interesting analogue in the discretized domain of Markov Decision Processes (MDP). The work presented in this thesis uses the linearity of this particular form of the HJB PDE to push the computational boundaries of stochastic optimal control.
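
Concretely, in the widely used linearly solvable (path-integral) setting, assuming the noise and control-cost weights satisfy B Bᵀ = λ G R⁻¹ Gᵀ, the logarithmic transformation Ψ = exp(-V/λ) turns the PDE above into the linear equation

\[
\partial_t \Psi \;=\; \tfrac{q}{\lambda}\,\Psi \;-\; f^{\top}\nabla\Psi \;-\; \tfrac{1}{2}\,\mathrm{tr}\!\left(B B^{\top} \nabla^2 \Psi\right),
\]

whose solutions can be superposed; this linearity is what the thesis exploits computationally.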

This is done by bringing together previously disjoint lines of computational research. The first of these is the use of Sum of Squares (SOS) techniques for the synthesis of control policies. A candidate polynomial with variable coefficients is proposed as the solution to the stochastic optimal control problem. An SOS relaxation is then applied to the partial differential constraints, leading to a hierarchy of semidefinite relaxations with an improving sub-optimality gap. The resulting approximate solutions are shown to be guaranteed over- and under-approximations of the optimal value function. It is shown that these results extend to arbitrary parabolic and elliptic PDEs, yielding a novel method for Uncertainty Quantification (UQ) of systems governed by partial differential constraints. Domain decomposition techniques are also made available, allowing such problems to be solved via parallelization and low-order polynomials.
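
The SOS mechanism underlying this relaxation is standard: a polynomial p(x) is a sum of squares if

\[
p(x) \;=\; z(x)^{\top} Q\, z(x) \quad \text{for some } Q \succeq 0,
\]

where z(x) is a vector of monomials, which is a semidefinite feasibility condition; replacing pointwise inequalities on the HJB residual with SOS certificates of this form is what turns the bounding problem into a semidefinite program, and enlarging the monomial basis z(x) yields the hierarchy referred to above.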

The optimization-based SOS technique is then contrasted with the Separated Representation (SR) approach from the applied mathematics community. This technique allows systems of equations to be solved through a low-rank decomposition, resulting in algorithms that scale linearly with dimensionality. Its application to stochastic optimal control allows previously uncomputable problems to be solved quickly, scaling to systems as complex as quadcopter and VTOL aircraft models. The technique may be combined with the SOS approach, yielding not only a numerical technique but also an analytical one that allows entirely new classes of systems to be studied and stability properties to be guaranteed.

The analysis of the linear HJB is completed by the study of its implications in application. It is shown that the HJB and a popular technique in robotics, the use of navigation functions, sit on opposite ends of a spectrum of optimization problems, upon which tradeoffs may be made in problem complexity. Analytical solutions to the HJB in these settings are available in simplified domains, yielding guidance towards optimality for approximation schemes. Finally, the use of HJB equations in temporal multi-task planning problems is investigated. It is demonstrated that such problems are reducible to a sequence of SOC problems linked via boundary conditions. The linearity of the PDE allows us to pre-compute control policy primitives and then compose them, at essentially zero cost, to satisfy a complex temporal logic specification.

Relevance:

30.00%

Publisher:

Abstract:

On the materials scale, thermoelectric efficiency is defined by the dimensionless figure of merit zT. This value is made up of three material components in the form zT = Tα²/(ρκ), where α is the Seebeck coefficient, ρ is the electrical resistivity, and κ is the total thermal conductivity. Therefore, improving zT requires reducing κ and ρ while increasing α. However, due to the inter-relation of the electrical and thermal properties of materials, typical routes to thermoelectric enhancement come in one of two forms. The first is to isolate the electronic properties and increase α without negatively affecting ρ. Techniques like electron filtering, quantum confinement, and density-of-states distortions have been proposed to enhance the Seebeck coefficient in thermoelectric materials. However, it has been difficult to prove the efficacy of these techniques. More recently, efforts to manipulate the band degeneracy in semiconductors have been explored as a means to enhance α.

The other route to thermoelectric enhancement is through minimizing the thermal conductivity, κ. More specifically, thermal conductivity can be broken into two parts, an electronic and a lattice term, κe and κl respectively. From a functional materials standpoint, a reduction in the lattice thermal conductivity should have a minimal effect on the electronic properties, so most routes incorporate techniques that focus on reducing the lattice term. The components that make up κl (κl = (1/3)Cνl) are the heat capacity (C), the phonon group velocity (ν), and the phonon mean free path (l). Since the heat capacity and group velocity are extremely difficult to alter, the phonon mean free path is most often the target of reduction.
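
As a quick numerical illustration of the two formulas above (the property values are representative placeholders, not measurements from this thesis):

```python
# Illustrative values only, roughly typical of a good bulk thermoelectric.
alpha = 200e-6      # Seebeck coefficient [V/K]
rho   = 1.0e-5      # electrical resistivity [ohm m]
kappa = 1.5         # total thermal conductivity [W/(m K)]
T     = 600.0       # absolute temperature [K]
print(f"zT = {T * alpha**2 / (rho * kappa):.2f}")        # -> zT = 1.60

# Lattice contribution from kinetic theory, kappa_l = (1/3) C v l:
C = 1.2e6           # volumetric heat capacity [J/(m^3 K)]
v = 2000.0          # phonon group velocity [m/s]
l = 2e-9            # phonon mean free path [m]
print(f"kappa_l = {C * v * l / 3:.2f} W/(m K)")          # -> kappa_l = 1.60 W/(m K)
```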

Past routes to decreasing the phonon mean free path have been alloying and grain size reduction. However, these techniques often degrade the electron mobility, because in alloying any perturbation to the periodic potential can cause additional adverse carrier scattering. Grain size reduction has been another successful route to enhancing zT because of the significant difference in electron and phonon mean free paths, but it is erratic in anisotropic materials due to their orientation-dependent transport properties. Nevertheless, microstructure formation in both equilibrium and nonequilibrium processing routes can be used to effectively reduce the phonon mean free path and thereby enhance the figure of merit.

This work starts with a discussion of several different deliberate microstructure varieties. Control of the morphology and, finally, of the structure size and spacing is discussed at length. Since the material example used throughout this thesis is anisotropic, a short primer on zone melting is presented as an effective route to growing homogeneous and oriented polycrystalline material. The resulting microstructure formation and control are presented specifically for the case of In2Te3-Bi2Te3 composites, along with the transport properties pertinent to thermoelectric materials. Finally, the transport properties of iodine-doped Bi2Te3 are presented and discussed as a re-evaluation of the literature data in light of what is known today.

Relevance:

30.00%

Publisher:

Abstract:

Soft hierarchical materials often present unique functional properties that are sensitive to the geometry and organization of their micro- and nano-structural features across different length scales. Carbon nanotube (CNT) foams are hierarchical materials with a fibrous morphology that are known for their remarkable physical, chemical, and electrical properties. Their complex microstructure has led them to exhibit intriguing mechanical responses at different length scales and in different loading regimes. Even though these materials have been studied for their mechanical behavior over the past few years, their response to high-rate finite deformations and the influence of their microstructure on bulk mechanical behavior and energy-dissipative characteristics remain elusive.

In this dissertation, we study the response of aligned CNT foams in the high strain-rate regime of 10² to 10⁴ s⁻¹. We investigate their bulk dynamic response and the fundamental deformation mechanisms at different length scales, and correlate them with the microstructural characteristics of the foams. We develop an experimental platform for studying the mechanics of CNT foams in high-rate deformations that includes direct measurements of the strain and transmitted forces and allows full-field visualization of the sample's deformation through high-speed microscopy.

We synthesize various CNT foams (e.g., vertically aligned CNT (VACNT) foams, helical CNT foams, micro-architectured VACNT foams and VACNT foams with microscale heterogeneities) and show that the bulk functional properties of these materials are highly tunable either by tailoring their microstructure during synthesis or by designing micro-architectures that exploit the principles of structural mechanics. We also develop numerical models to describe the bulk dynamic response using multiscale mass-spring models and identify the mechanical properties at length scales that are smaller than the sample height.
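
A minimal lumped mass-spring sketch of the kind of model described above is shown below; the linear springs and parameter values are illustrative assumptions, not the thesis's calibrated multiscale model.

```python
import numpy as np
from scipy.integrate import solve_ivp

n, m, ks, c = 20, 1e-6, 200.0, 5e-4        # lumps, lump mass [kg], stiffness [N/m], damping [N s/m]

# Free-fixed chain: lump 0 is struck, lump n-1 is attached to a rigid support by a spring.
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
K[0, 0] = 1.0                              # the free end has only one neighboring spring
K *= ks

def rhs(t, y):
    x, v = y[:n], y[n:]
    a = (-K @ x - c * v) / m               # Newton's second law for every lump
    return np.concatenate([v, a])

y0 = np.zeros(2 * n)
y0[n] = -1.0                               # striker imparts 1 m/s to the top lump
sol = solve_ivp(rhs, (0.0, 2e-3), y0, max_step=1e-6)

transmitted = ks * sol.y[n - 1]            # force in the support spring ~ transmitted force
print("peak transmitted force [N]:", float(np.max(np.abs(transmitted))))
```

Hierarchy enters by assigning different stiffnesses and damping to springs representing different length scales, which is the spirit of the multiscale models mentioned above.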

The ability to control the geometry of microstructural features, and their local interactions, allows the creation of novel hierarchical materials with desired functional properties. The fundamental understanding provided by this work on the key structure-function relations that govern the bulk response of CNT foams can be extended to other fibrous, soft and hierarchical materials. The findings can be used to design materials with tailored properties for different engineering applications, like vibration damping, impact mitigation and packaging.

Relevance:

30.00%

Publisher:

Abstract:

Notwithstanding advances in modern chemical methods, the selective installation of sterically encumbered carbon stereocenters, in particular all-carbon quaternary centers, remains an unsolved problem in organic chemistry. The prevalence of all-carbon quaternary centers in biologically active natural products and pharmaceutical compounds provides a strong impetus to address current limitations in the state of the art of their generation. This thesis presents four related projects, all of which share in the goal of constructing highly-congested carbon centers in a stereoselective manner, and in the use of transition-metal catalyzed alkylation as a means to address that goal.

The first research described is an extension of allylic alkylation methodology previously developed in the Stoltz group to small, strained rings. This research constitutes the first transition metal-catalyzed enantioselective α-alkylation of cyclobutanones. Under Pd catalysis, this chemistry affords all-carbon α-quaternary cyclobutanones in good to excellent yields and enantioselectivities.

Next, we describe the development of a (trimethylsilyl)ethyl β-ketoester class of enolate precursors and their application in palladium-catalyzed asymmetric allylic alkylation to yield a variety of α-quaternary ketones and lactams. Independent synthesis of the coupling partners affords an enhanced allyl substrate scope relative to allyl β-ketoester substrates; highly functionalized α-quaternary ketones generated by the union of our fluoride-triggered β-ketoesters and sensitive allylic alkylation coupling partners demonstrate the utility of this method for complex fragment coupling.

Lastly, we detail the development of an Ir-catalyzed asymmetric allylic alkylation of cyclic β-ketoesters that affords highly congested, vicinal stereocenters composed of tertiary and all-carbon quaternary centers with outstanding regio-, diastereo-, and enantiocontrol. Implementation of a subsequent Pd-catalyzed alkylation affords dialkylated products with pinpoint stereochemical control of both chiral centers. The chemistry is then extended to acyclic β-ketoesters, and similar levels of selectivity and functional group tolerance are observed. Critical to the successful development of this method was the use of iridium catalysis in concert with N-aryl-phosphoramidite ligands.

Relevance:

30.00%

Publisher:

Abstract:

The application of principles from evolutionary biology has long been used to gain new insights into the progression and clinical control of both infectious diseases and neoplasms. This iterative evolutionary process consists of expansion, diversification, and selection within an adaptive landscape: species are subject to random genetic or epigenetic alterations that result in variation; genetic information is inherited through asexual reproduction; and strong selective pressures, such as therapeutic intervention, can lead to the adaptation and expansion of resistant variants. These principles lie at the center of the modern evolutionary synthesis and constitute the primary reasons for the development of resistance and therapeutic failure, but they also provide a framework that allows for more effective control.

A model system for studying the evolution of resistance and the control of therapeutic failure is the treatment of chronic HIV-1 infection by broadly neutralizing antibody (bNAb) therapy. A relatively recent discovery is that a minority of HIV-infected individuals can produce broadly neutralizing antibodies, that is, antibodies that inhibit infection by many strains of HIV. Passive transfer of human antibodies for the prevention and treatment of HIV-1 infection is increasingly being considered as an alternative to a conventional vaccine. However, recent evolution studies have shown that antibody treatment can exert selective pressure on the virus that results in the rapid evolution of resistance. In certain cases, complete resistance to an antibody is conferred by a single amino acid substitution on the viral envelope of HIV.

The challenges in uncovering resistance mechanisms and designing effective combination strategies to control evolutionary processes and prevent therapeutic failure apply more broadly. We are motivated by two questions: Can we predict the evolution to resistance by characterizing genetic alterations that contribute to modified phenotypic fitness? Given an evolutionary landscape and a set of candidate therapies, can we computationally synthesize treatment strategies that control evolution to resistance?

To address the first question, we propose a mathematical framework to reason about evolutionary dynamics of HIV from computationally derived Gibbs energy fitness landscapes -- expanding the theoretical concept of an evolutionary landscape originally conceived by Sewall Wright to a computable, quantifiable, multidimensional, structurally defined fitness surface upon which to study complex HIV evolutionary outcomes.

To design combination treatment strategies that control evolution to resistance, we propose a methodology that solves for optimal combinations and concentrations of candidate therapies, and that allows tradeoffs in treatment design to be explored quantifiably, such as limiting the number of candidate therapies in the combination, dosage constraints, and robustness to error. Our algorithm is based on the application of recent results in optimal control to an HIV evolutionary dynamics model and is constructed from experimentally derived antibody-resistant phenotypes and their single-antibody pharmacodynamics. This method represents a first step towards integrating principled engineering techniques with an experimentally based mathematical model in the rational design of combination treatment strategies, and it offers predictive understanding of the effects of combination therapies on the evolutionary dynamics and resistance of HIV. Preliminary in vitro studies suggest that the combination antibody therapies predicted by our algorithm can neutralize heterogeneous viral populations despite the presence of resistant mutations.
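
A toy version of the combination-design step can convey the flavor of the computation. The sketch below grid-searches antibody concentrations, under a total-dose budget, to maximize worst-case neutralization across variants, using single-antibody Hill curves combined by Bliss independence; the IC50 values are hypothetical and this is not the thesis's optimal-control algorithm.

```python
import itertools
import numpy as np

# Hypothetical IC50s [ug/mL]: rows are viral variants, columns are candidate antibodies.
ic50 = np.array([[0.1, 5.0, 50.0],
                 [20.0, 0.2, 1.0],
                 [50.0, 30.0, 0.5]])

def neutralization(conc):
    """Fraction of each variant neutralized: Hill slope 1 per antibody, Bliss-independent combination."""
    f_single = conc / (conc + ic50)               # per-variant, per-antibody inhibition
    return 1.0 - np.prod(1.0 - f_single, axis=1)  # combine across antibodies

budget, grid = 10.0, np.linspace(0.0, 10.0, 21)
best_val, best_conc = max((min(neutralization(np.array(c))), c)
                          for c in itertools.product(grid, repeat=3) if sum(c) <= budget)
print(f"worst-case neutralization {best_val:.3f} at {[float(v) for v in best_conc]} ug/mL")
```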