6 results for Mixed Binary Linear Programming

in CaltechTHESIS


Relevance:

100.00%

Publisher:

Abstract:

Cyber-physical systems integrate computation, networking, and physical processes. Substantial research challenges exist in the design and verification of such large-scale, distributed sensing, actuation, and control systems. Rapidly improving technology and recent advances in control theory, networked systems, and computer science give us the opportunity to drastically improve our approach to the integrated flow of information and cooperative behavior. Current systems rely on text-based specifications and manual design. Using new technology advances, we can create easier, more efficient, and cheaper ways of developing these control systems. This thesis focuses on design considerations for system topologies, ways to formally and automatically specify requirements, and methods to synthesize reactive control protocols, all within the context of an aircraft electric power system as a representative application area.

This thesis consists of three complementary parts: synthesis, specification, and design. The first section focuses on the synthesis of central and distributed reactive controllers for an aircraft electric power system. This approach incorporates methodologies from computer science and control. The resulting controllers are correct by construction with respect to system requirements, which are formulated using the specification language of linear temporal logic (LTL). The second section addresses how to formally specify requirements and introduces a domain-specific language for electric power systems. A software tool automatically converts high-level requirements into LTL and synthesizes a controller.

The final section focuses on design space exploration. A design methodology is proposed that uses mixed-integer linear programming to obtain candidate topologies, which are then used to synthesize controllers. The discrete-time control logic is then verified in real time by two methods: hardware and simulation. Finally, the problem of partial observability and dynamic state estimation is explored. Given a fixed placement of sensors on an electric power system, measurements from these sensors can be used in conjunction with the control logic to infer the state of the system.
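The MILP-based topology exploration described above can be illustrated with a toy problem. Everything below is a hypothetical sketch, not the thesis's actual formulation: the component names, costs, and redundancy rule are invented, and because the problem is tiny, the binary edge-selection variables are enumerated by brute force instead of being handed to a MILP solver.

```python
from itertools import product

# Invented toy power-system graph: (source, sink) -> installation cost.
edges = {("gen1", "bus1"): 4, ("gen2", "bus1"): 5,
         ("gen1", "bus2"): 3, ("gen2", "bus2"): 6,
         ("bus1", "load"): 2, ("bus2", "load"): 2}
names = list(edges)

def feasible(used):
    # Redundancy rule (illustrative): the load must be fed through both
    # buses, and each bus must itself be fed by at least one generator.
    if not {("bus1", "load"), ("bus2", "load")} <= used:
        return False
    return all(any((g, b) in used for g in ("gen1", "gen2"))
               for b in ("bus1", "bus2"))

# Enumerate all 0/1 assignments to the edge variables (the MILP's binaries)
# and keep the cheapest feasible topology.
best_cost, best_topology = None, None
for x in product((0, 1), repeat=len(names)):
    used = {e for e, xi in zip(names, x) if xi}
    if feasible(used):
        cost = sum(edges[e] for e in used)
        if best_cost is None or cost < best_cost:
            best_cost, best_topology = cost, used
print(best_cost)  # cheapest redundant topology costs 11
```

Each candidate topology found this way would then be passed to the controller-synthesis step; only topologies for which a correct-by-construction controller exists survive the exploration.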

Relevance:

100.00%

Publisher:

Abstract:

This thesis is motivated by safety-critical applications involving autonomous air, ground, and space vehicles carrying out complex tasks in uncertain and adversarial environments. We use temporal logic as a language to formally specify complex tasks and system properties. Temporal logic specifications generalize the classical notions of stability and reachability that are studied in the control and hybrid systems communities. Given a system model and a formal task specification, the goal is to automatically synthesize a control policy for the system that ensures that the system satisfies the specification. This thesis presents novel control policy synthesis algorithms for optimal and robust control of dynamical systems with temporal logic specifications. Furthermore, it introduces algorithms that are efficient and extend to high-dimensional dynamical systems.

The first contribution of this thesis is the generalization of a classical linear temporal logic (LTL) control synthesis approach to optimal and robust control. We show how we can extend automata-based synthesis techniques for discrete abstractions of dynamical systems to create optimal and robust controllers that are guaranteed to satisfy an LTL specification. Such optimal and robust controllers can be computed at little extra computational cost compared to computing a feasible controller.

The second contribution of this thesis addresses the scalability of control synthesis with LTL specifications. A major limitation of the standard automaton-based approach for control with LTL specifications is that the automaton might be doubly-exponential in the size of the LTL specification. We introduce a fragment of LTL for which one can compute feasible control policies in time polynomial in the size of the system and specification. Additionally, we show how to compute optimal control policies for a variety of cost functions, and identify interesting cases when this can be done in polynomial time. These techniques are particularly relevant for online control, as one can guarantee that a feasible solution can be found quickly, and then iteratively improve on the quality as time permits.

The final contribution of this thesis is a set of algorithms for computing feasible trajectories for high-dimensional, nonlinear systems with LTL specifications. These algorithms avoid the potentially computationally expensive process of computing a discrete abstraction, and instead compute directly on the system's continuous state space. The first method uses an automaton representing the specification to directly encode a series of constrained-reachability subproblems, which can be solved in a modular fashion by using standard techniques. The second method encodes an LTL formula as mixed-integer linear programming constraints on the dynamical system. We demonstrate these approaches with numerical experiments on temporal logic motion planning problems with high-dimensional (10+ states) continuous systems.
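The idea behind the second method can be sketched on a toy system with invented dynamics. In a genuine MILP encoding, one introduces binary variables z[t] that mark the time steps at which the goal set is reached and requires sum(z) >= 1 (via big-M constraints) to express the LTL formula "eventually reach the goal" over a finite horizon; here the search space is small enough to enumerate input sequences directly instead of calling a solver.

```python
from itertools import product

# Toy 1-D system x[t+1] = x[t] + u[t], u in {-1, 0, 1}; all numbers invented.
T, x0, goal = 4, 0, 3

def satisfies_eventually(us):
    # Check "F goal": does the trajectory visit the goal at some time <= T?
    x = x0
    if x == goal:
        return True
    for u in us:
        x += u
        if x == goal:
            return True
    return False

# Stand-in for the MILP solve: enumerate input sequences, keep the feasible
# ones, and pick the cheapest under a control-effort cost sum |u[t]|.
feasible = [us for us in product((-1, 0, 1), repeat=T)
            if satisfies_eventually(us)]
best = min(feasible, key=lambda us: sum(abs(u) for u in us))
print(sum(abs(u) for u in best))  # minimum control effort: 3
```

Richer LTL fragments extend this pattern with additional binaries per atomic proposition and per time step, which is why horizon length and formula size drive the MILP's difficulty.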

Relevance:

100.00%

Publisher:

Abstract:

In this work, the author presents a method called Convex Model Predictive Control (CMPC) to control systems whose states are elements of the group of rotation matrices SO(n) for n = 2, 3. This is done without charts or any local linearization; instead, the optimization is performed over the orbitope of rotation matrices. This results in a novel model predictive control (MPC) scheme without the drawbacks associated with conventional linearization techniques, such as slow computation time and local minima. Of particular emphasis is the application to aeronautical and vehicular systems, wherein the method removes many of the trigonometric terms associated with these systems’ state-space equations. Furthermore, the method is shown to be compatible with many existing variants of MPC, including obstacle avoidance via Mixed Integer Linear Programming (MILP).
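A minimal sketch of the convexity fact such a method can exploit for n = 2 (the CMPC formulation itself is not reproduced here): writing a 2x2 rotation as R(a, b) = [[a, -b], [b, a]], the set SO(2) is the unit circle a² + b² = 1, so its convex hull (the orbitope) is simply the unit disk a² + b² ≤ 1 — a convex constraint that needs no charts or linearization.

```python
import math

def rot(theta):
    # A rotation by theta in the (a, b) = (cos, sin) parameterization.
    return (math.cos(theta), math.sin(theta))

def in_so2_hull(a, b):
    # Membership in the SO(2) orbitope: the closed unit disk.
    return a * a + b * b <= 1 + 1e-12

def is_rotation(a, b):
    # Membership in SO(2) itself: the unit circle.
    return abs(a * a + b * b - 1) < 1e-12

# A convex combination of two rotations stays inside the orbitope even
# though it is not itself a rotation -- which is what lets a convex MPC
# program optimize over this set.
a1, b1 = rot(0.0)
a2, b2 = rot(math.pi / 2)
mid = ((a1 + a2) / 2, (b1 + b2) / 2)
print(in_so2_hull(*mid), is_rotation(*mid))  # True False
```

For n = 3 the orbitope of SO(3) also admits an exact convex description (via semidefinite/spectrahedral constraints), which is what makes the approach extend beyond the planar case.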

Relevance:

100.00%

Publisher:

Abstract:

An economic air pollution control model, which determines the least cost of reaching various air quality levels, is formulated. The model takes the form of a general, nonlinear, mathematical programming problem. Primary contaminant emission levels are the independent variables. The objective function is the cost of attaining various emission levels and is to be minimized subject to constraints that given air quality levels be attained.

The model is applied to a simplified statement of the photochemical smog problem in Los Angeles County in 1975, with emissions specified by a two-dimensional vector: total reactive hydrocarbon (RHC) and nitrogen oxide (NOx) emissions. Air quality, also two-dimensional, is measured by the expected number of days per year that nitrogen dioxide (NO2) and mid-day ozone (O3) exceed standards in Central Los Angeles.

The minimum cost of reaching various emission levels is found by a linear programming model. The base or "uncontrolled" emission levels are those that will exist in 1975 with the present new car control program and with the degree of stationary source control existing in 1971. Controls, basically "add-on" devices, are considered here for used cars, aircraft, and existing stationary sources. It is found that with these added controls, Los Angeles County emission levels [(1300 tons/day RHC, 1000 tons/day NOx) in 1969 and (670 tons/day RHC, 790 tons/day NOx) at the base 1975 level] can be reduced to 260 tons/day RHC (minimum RHC program) and 460 tons/day NOx (minimum NOx program).

"Phenomenological" or statistical air quality models provide the relationship between air quality and emissions. These models estimate the relationship by using atmospheric monitoring data taken at one (yearly) emission level and by using certain simple physical assumptions, (e. g., that emissions are reduced proportionately at all points in space and time). For NO2, (concentrations assumed proportional to NOx emissions), it is found that standard violations in Central Los Angeles, (55 in 1969), can be reduced to 25, 5, and 0 days per year by controlling emissions to 800, 550, and 300 tons /day, respectively. A probabilistic model reveals that RHC control is much more effective than NOx control in reducing Central Los Angeles ozone. The 150 days per year ozone violations in 1969 can be reduced to 75, 30, 10, and 0 days per year by abating RHC emissions to 700, 450, 300, and 150 tons/day, respectively, (at the 1969 NOx emission level).

The control cost-emission level and air quality-emission level relationships are combined in a graphical solution of the complete model to find the cost of various air quality levels. Best possible air quality levels with the controls considered here are 8 O3 and 10 NO2 violations per year (minimum ozone program) or 25 O3 and 3 NO2 violations per year (minimum NO2 program) with an annualized cost of $230,000,000 (above the estimated $150,000,000 per year for the new car control program for Los Angeles County motor vehicles in 1975).

Relevance:

30.00%

Publisher:

Abstract:

Life is the result of the execution of molecular programs: just as an embryo is fated to become a human or a whale, or as a person’s appearance is inherited from their parents, many biological phenomena are governed by genetic programs written in DNA molecules. At the core of such programs is the highly reliable base-pairing interaction between nucleic acids. DNA nanotechnology exploits the programming power of DNA to build artificial nanostructures, molecular computers, and nanomachines. In particular, DNA origami—a simple yet versatile technique that allows one to create various nanoscale shapes and patterns—is at the heart of the technology. In this thesis, I describe the development of programmable self-assembly and reconfiguration of DNA origami nanostructures based on a unique strategy: rather than relying on Watson-Crick base pairing, we developed programmable bonds via the geometric arrangement of stacking interactions, which we termed stacking bonds. We further demonstrated that such bonds can be dynamically reconfigurable.

The first part of this thesis describes the design and implementation of stacking bonds. Our work addresses the fundamental question of whether one can create diverse bond types out of a single kind of attractive interaction—a question first posed implicitly by Francis Crick while seeking a deeper understanding of the origin of life and primitive genetic code. For the creation of multiple specific bonds, we used two different approaches: binary coding and shape coding of geometric arrangement of stacking interaction units, which are called blunt ends. To construct a bond space for each approach, we performed a systematic search using a computer algorithm. We used orthogonal bonds to experimentally implement the connection of five distinct DNA origami nanostructures. We also programmed the bonds to control cis/trans configuration between asymmetric nanostructures.

The second part of this thesis describes the large-scale self-assembly of DNA origami into two-dimensional checkerboard-pattern crystals via surface diffusion. We developed a protocol where the diffusion of DNA origami occurs on a substrate and is dynamically controlled by changing the cationic condition of the system. We used stacking interactions to mediate connections between the origami, because of their potential for reconfiguring during the assembly process. Assembling DNA nanostructures directly on substrate surfaces can benefit nano/microfabrication processes by eliminating a pattern transfer step. At the same time, the use of DNA origami allows high complexity and unique addressability with six-nanometer resolution within each structural unit.

The third part of this thesis describes the use of stacking bonds as dynamically breakable bonds. To break the bonds, we used biological machinery called the ParMRC system, extracted from bacteria. This system ensures that, when a cell divides, each daughter cell gets one copy of the cell’s DNA by actively pushing each copy to the opposite poles of the cell. We demonstrate dynamically expandable nanostructures, making stacking bonds a promising candidate for reconfigurable connectors for nanoscale machine parts.

Relevance:

30.00%

Publisher:

Abstract:

Much of the chemistry that affects life on planet Earth occurs in the condensed phase. The TeraHertz (THz) or far-infrared (far-IR) region of the electromagnetic spectrum (from 0.1 THz to 10 THz, 3 cm-1 to 300 cm-1, or 3000 μm to 30 μm) has been shown to provide unique possibilities in the study of condensed-phase processes. The goal of this work is to expand the possibilities available in the THz region and undertake new investigations of fundamental interest to chemistry. Since we are fundamentally interested in condensed-phase processes, this thesis focuses on two areas where THz spectroscopy can provide new understanding: astrochemistry and solvation science. To advance these fields, we had to develop new instrumentation that would enable the experiments necessary to answer new questions in either astrochemistry or solvation science.

We first developed a new experimental setup capable of studying astrochemical ice analogs in both the TeraHertz (THz), or far-infrared (far-IR), region (0.3 - 7.5 THz; 10 - 250 cm-1) and the mid-IR (400 - 4000 cm-1). The importance of astrochemical ices lies in their key role in the formation of complex organic molecules, such as amino acids and sugars, in space. Thus, the instruments are capable of performing a variety of spectroscopic studies that can provide especially relevant laboratory data to support astronomical observations from telescopes such as the Herschel Space Observatory, the Stratospheric Observatory for Infrared Astronomy (SOFIA), and the Atacama Large Millimeter Array (ALMA). The experimental apparatus uses a THz time-domain spectrometer, with a 1750/875 nm plasma source and a GaP detector crystal, to cover the bandwidth mentioned above with ~10 GHz (~0.3 cm-1) resolution.
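The three equivalent unit systems quoted above (frequency in THz, wavenumber in cm-1, wavelength in μm) are related through the speed of light; a quick sanity check of the quoted band edges:

```python
C_CM_PER_S = 2.99792458e10       # speed of light in cm/s

def thz_to_wavenumber(f_thz):
    # Wavenumber in cm^-1: nu-tilde = f / c, with f in Hz and c in cm/s.
    return f_thz * 1e12 / C_CM_PER_S

def thz_to_wavelength_um(f_thz):
    # Wavelength in um: lambda = 1 / nu-tilde, converted from cm to um.
    return 1e4 / thz_to_wavenumber(f_thz)

print(round(thz_to_wavenumber(1.0), 1))   # ~33.4 cm^-1 corresponds to 1 THz
print(round(thz_to_wavelength_um(0.1)))   # ~2998 um: the long-wavelength edge
```

These conversions also confirm numbers appearing later in the abstract, e.g. that 3.66 THz corresponds to about 122 cm-1.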

Using the above instrumentation, experimental spectra of astrochemical ice analogs of water and carbon dioxide in pure, mixed, and layered ices were collected at different temperatures under high-vacuum conditions with the goal of investigating the structure of the ice. We tentatively observe a new feature in both amorphous solid water and crystalline water at 33 cm-1 (1 THz). In addition, our studies of mixed and layered ices show how it is possible to identify the location of carbon dioxide as it segregates within the ice by observing its effect on the THz spectrum of water ice. The THz spectra of mixed and layered ices are further analyzed by fitting their spectral features to those of pure amorphous solid water and crystalline water ice to quantify the effects of temperature changes on structure. From the results of this work, it appears that THz spectroscopy is potentially well suited to studying thermal transformations within the ice.

To advance the study of liquids with THz spectroscopy, we developed a new ultrafast nonlinear THz spectroscopic technique: heterodyne-detected, ultrafast THz Kerr effect (TKE) spectroscopy. We implemented a heterodyne-detection scheme into a TKE spectrometer that uses a stilbazolium-based THz emitter, 4-N,N-dimethylamino-4-N-methyl-stilbazolium 2,4,6-trimethylbenzenesulfonate (DSTMS), and high-numerical-aperture optics, which generate THz electric fields in excess of 300 kV/cm in the sample. This allows us to report the first measurement of quantum beats at terahertz (THz) frequencies that result from vibrational coherences initiated by the nonlinear, dipolar interaction of a broadband, high-energy, (sub)picosecond THz pulse with the sample. Our instrument improves on both the frequency coverage and the sensitivity previously reported; it also ensures a backgroundless measurement of the THz Kerr effect in pure liquids. For liquid diiodomethane, we observe a quantum beat at 3.66 THz (122 cm-1), in exact agreement with the fundamental transition frequency of the ν4 vibration of the molecule. This result provides new insight into dipolar vs. Raman selection rules at terahertz frequencies.

To conclude, we discuss future directions for nonlinear THz spectroscopy in the Blake lab. We report the first results from an experiment using a plasma-based THz source for nonlinear spectroscopy, which has the potential to enable nonlinear THz spectra with sub-100 fs temporal resolution, and we discuss how the optics involved in the plasma mechanism can enable THz pulse shaping. Finally, we discuss how a single-shot THz detection scheme could improve the acquisition of THz data and how such a scheme could be implemented in the Blake lab. The instruments developed herein will hopefully remain a part of the group’s core competencies and serve as building blocks for the next generation of THz instrumentation that pushes the frontiers of both chemistry and the scientific enterprise as a whole.