53 results for D-optimal design

in CaltechTHESIS


Relevance: 100.00%

Abstract:

A general framework for multi-criteria optimal design is presented which is well-suited for automated design of structural systems. A systematic computer-aided optimal design decision process is developed which allows the designer to rapidly evaluate and improve a proposed design by taking into account the major factors of interest related to different aspects such as design, construction, and operation.

The proposed optimal design process requires the selection of the most promising choice of design parameters taken from a large design space, based on an evaluation using specified criteria. The design parameters specify a particular design, and so they relate to member sizes, structural configuration, etc. The evaluation of the design uses performance parameters which may include structural response parameters, risks due to uncertain loads and modeling errors, construction and operating costs, etc. Preference functions are used to implement the design criteria in a "soft" form. These preference functions give a measure of the degree of satisfaction of each design criterion. The overall evaluation measure for a design is built up from the individual measures for each criterion through a preference combination rule. The goal of the optimal design process is to obtain a design that has the highest overall evaluation measure, which makes the design process an optimization problem.
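
The combination rule itself is not specified in the abstract; as a minimal sketch, assuming Gaussian-shaped preference functions and a multiplicative combination rule (both illustrative choices, not the thesis's actual formulation):

```python
import math

def preference(x, target, tolerance):
    """Soft preference: 1.0 at the target value, decaying smoothly
    (Gaussian-shaped) as the performance parameter departs from it.
    The Gaussian form is an illustrative assumption, not the thesis's rule."""
    return math.exp(-((x - target) / tolerance) ** 2)

def overall_measure(performance, criteria):
    """Combine per-criterion preferences multiplicatively, so a design
    that badly violates any one criterion scores near zero."""
    score = 1.0
    for name, (target, tol) in criteria.items():
        score *= preference(performance[name], target, tol)
    return score

# Hypothetical criteria: construction cost and maximum interstory drift.
criteria = {"cost": (1.0e5, 4.0e4), "max_drift": (0.005, 0.002)}
design_a = {"cost": 1.1e5, "max_drift": 0.006}
design_b = {"cost": 0.9e5, "max_drift": 0.011}
# design_a satisfies both criteria closely and should outrank design_b,
# which is cheap but badly violates the drift criterion.
print(overall_measure(design_a, criteria) > overall_measure(design_b, criteria))
```

Multiplicative combination encodes the intuition that a design failing any single criterion badly should score poorly overall; other combination rules (e.g. weighted minima) are equally plausible readings of the abstract.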

Genetic algorithms are stochastic optimization methods based on evolutionary theory. They provide the exploratory power needed to search high-dimensional design spaces for optimal solutions. Two special genetic algorithms, hGA and vGA, are presented here for continuous and discrete optimization problems, respectively.
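
A bare-bones real-coded genetic algorithm illustrates the ingredients named above (selection, crossover, mutation); the toy objective and all parameters are illustrative, and the thesis's hGA and vGA add problem-specific machinery this sketch omits:

```python
import random

random.seed(0)

def fitness(x):
    # Toy objective: maximize -(x - 3)^2, optimum at x = 3.
    return -(x - 3.0) ** 2

def evolve(pop_size=40, generations=60, mutation=0.3):
    """Bare-bones real-coded GA: tournament selection, blend crossover,
    Gaussian mutation."""
    pop = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)          # tournament of size 2
            parent1 = a if fitness(a) > fitness(b) else b
            c, d = random.sample(pop, 2)
            parent2 = c if fitness(c) > fitness(d) else d
            child = 0.5 * (parent1 + parent2)     # blend crossover
            child += random.gauss(0.0, mutation)  # Gaussian mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
print(best)  # converges near the optimum x = 3
```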

The methodology is demonstrated with several examples involving the design of truss and frame systems. These examples are solved by using the proposed hGA and vGA.

Relevance: 90.00%

Abstract:

Granular crystals are compact periodic assemblies of elastic particles in Hertzian contact whose dynamic response can be tuned from strongly nonlinear to linear by the addition of a static precompression force. This unique feature allows for a wide range of studies that include the investigation of new fundamental nonlinear phenomena in discrete systems such as solitary waves, shock waves, discrete breathers and other defect modes. In the absence of precompression, a particularly interesting property of these systems is their ability to support the formation and propagation of spatially localized soliton-like waves with highly tunable properties. The wealth of parameters one can modify (particle size, geometry and material properties, periodicity of the crystal, presence of a static force, type of excitation, etc.) makes them ideal candidates for the design of new materials for practical applications. This thesis describes several ways to optimally control and tailor the propagation of stress waves in granular crystals through the use of heterogeneities (interstitial defect particles and material heterogeneities) in otherwise perfectly ordered systems. We focus on uncompressed two-dimensional granular crystals with interstitial spherical intruders and composite hexagonal packings and study their dynamic response using a combination of experimental, numerical and analytical techniques. We first investigate the interaction of defect particles with a solitary wave and utilize this fundamental knowledge in the optimal design of novel composite wave guides, shock or vibration absorbers obtained using gradient-based optimization methods.
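
The strongly nonlinear behavior mentioned above comes from the Hertzian contact law, under which the force between two elastic spheres grows as the 3/2 power of their overlap and vanishes entirely in tension. A sketch with illustrative steel-like parameters (not values from the thesis):

```python
import math

def hertz_force(delta, R1, R2, E1, E2, nu1, nu2):
    """Normal contact force between two elastic spheres under overlap
    delta (Hertz): F = (4/3) * E_eff * sqrt(R_eff) * delta**1.5."""
    if delta <= 0.0:   # no tensile force: grains separate freely --
        return 0.0     # the source of the strong nonlinearity
    E_eff = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)
    R_eff = 1.0 / (1.0 / R1 + 1.0 / R2)
    return (4.0 / 3.0) * E_eff * math.sqrt(R_eff) * delta**1.5

# Illustrative steel-like spheres: R = 4.76 mm, E = 193 GPa, nu = 0.3.
F1 = hertz_force(1e-6, 4.76e-3, 4.76e-3, 193e9, 193e9, 0.3, 0.3)
F2 = hertz_force(2e-6, 4.76e-3, 4.76e-3, 193e9, 193e9, 0.3, 0.3)
# Doubling the overlap multiplies the force by 2**1.5, not 2:
print(round(F2 / F1, 2))  # → 2.83
```

Static precompression shifts the operating point of this power law, which is how the dynamic response can be tuned from strongly nonlinear toward linear.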

Relevance: 90.00%

Abstract:

The solar resource is the most abundant renewable resource on earth, yet it is currently exploited with relatively low efficiencies. To make solar energy more affordable, we can either reduce the cost of the cell or increase the efficiency with a similar cost cell. In this thesis, we consider several different optical approaches to achieve these goals. First, we consider a ray optical model for light trapping in silicon microwires. With this approach, much less material can be used, allowing for a cost savings. We next focus on reducing the escape of radiatively emitted and scattered light from the solar cell. With this angle restriction approach, light can only enter and escape the cell near normal incidence, allowing for thinner cells and higher efficiencies. In Auger-limited GaAs, we find that efficiencies greater than 38% may be achievable, a significant improvement over the current world record. To experimentally validate these results, we use a Bragg stack to restrict the angles of emitted light. Our measurements show an increase in voltage and a decrease in dark current, as less radiatively emitted light escapes. While the results in GaAs are interesting as a proof of concept, GaAs solar cells are not currently made on the production scale for terrestrial photovoltaic applications. We therefore explore the application of angle restriction to silicon solar cells. While our calculations show that Auger-limited cells give efficiency increases of up to 3% absolute, we also find that current amorphous silicon-crystalline silicon heterojunction with intrinsic thin layer (HIT) cells give significant efficiency gains with angle restriction of up to 1% absolute. Thus, angle restriction has the potential for unprecedented one-sun efficiencies in GaAs, but also may be applicable to current silicon solar cell technology.
Finally, we consider spectrum splitting, where optics direct light in different wavelength bands to solar cells with band gaps tuned to those wavelengths. This approach has the potential for very high efficiencies, and excellent annual power production. Using a light-trapping filtered concentrator approach, we design filter elements and find an optimal design. Thus, this thesis explores silicon microwires, angle restriction, and spectral splitting as different optical approaches for improving the cost and efficiency of solar cells.

Relevance: 50.00%

Abstract:

The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software defined networks or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and perhaps not surprisingly makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable to solve, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.
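
For controllers constrained to a sparsity subspace, quadratic invariance reduces to a finite check on binary support patterns: the support of K G K must stay inside the controller's own pattern. A sketch of that test (the patterns below are illustrative, not the thesis's systems):

```python
import numpy as np

def is_quadratically_invariant(K_pat, G_pat):
    """Check quadratic invariance for sparsity-constrained controllers:
    the subspace S = {K : supp(K) within K_pat} is QI under the plant
    pattern G_pat iff the boolean product K*G*K never creates a nonzero
    entry outside K_pat. Binary 0/1 matrices encode the patterns."""
    KGK = (K_pat @ G_pat @ K_pat) > 0          # boolean support of K G K
    return bool(np.all(~KGK | (K_pat > 0)))    # support contained in K_pat?

# Lower-triangular controller pattern with a lower-triangular plant
# (information flows downstream at least as fast as the dynamics): QI holds.
G = np.tril(np.ones((3, 3), dtype=int))
K_tri = np.tril(np.ones((3, 3), dtype=int))
print(is_quadratically_invariant(K_tri, G))        # → True

# A purely diagonal (fully decentralized) controller with a fully coupled
# plant is not QI: K G K couples entries the pattern forbids.
K_diag = np.eye(3, dtype=int)
G_full = np.ones((3, 3), dtype=int)
print(is_quadratically_invariant(K_diag, G_full))  # → False
```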

The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to a renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis can be seen as being a part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control, and fall under three broad categories, namely controller synthesis, architecture design and system identification.

We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems that considers lossy channels in the feedback loop. Our next set of results is concerned with controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller such as the placement of actuators, sensors, and the communication links between them can no longer be taken as given -- indeed the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, which is a unifying computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it. Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and destroy rather than leverage any a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system.
We exploit the fact that the transfer function of the local dynamics is low-order, but full-rank, while the transfer function of the global dynamics is high-order, but low-rank, to formulate this separation task as a nuclear norm minimization problem. Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control and optimization in layered architectures.
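
The rank-based separation can be illustrated numerically: at equal Frobenius norm, a low-rank matrix has a smaller nuclear norm (the sum of its singular values) than a generic full-rank one, which is what makes the nuclear norm a useful convex penalty for this separation. The matrices below are random illustrations, not transfer-function data from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

def nuclear_norm(A):
    """Sum of singular values: the convex surrogate for rank used in
    nuclear norm minimization."""
    return np.linalg.svd(A, compute_uv=False).sum()

# A rank-2 "global response" matrix vs. a generic full-rank "local" one,
# both scaled to unit Frobenius norm so the comparison is fair.
low_rank = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 8))
full_rank = rng.standard_normal((8, 8))
low_rank /= np.linalg.norm(low_rank)
full_rank /= np.linalg.norm(full_rank)

# At equal energy, the low-rank matrix concentrates its singular values,
# so it has the smaller nuclear norm -- penalizing the nuclear norm in a
# decomposition therefore pushes one term toward low-rank structure.
print(nuclear_norm(low_rank) < nuclear_norm(full_rank))  # → True
```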

Relevance: 40.00%

Abstract:

The prospect of terawatt-scale electricity generation using a photovoltaic (PV) device places strict requirements on the active semiconductor optoelectronic properties and elemental abundance. After reviewing the constraints placed on an "earth-abundant" solar absorber, we find zinc phosphide (α-Zn3P2) to be an ideal candidate. In addition to its near-optimal direct band gap of 1.5 eV, high visible-light absorption coefficient (>10^4 cm^-1), and long minority-carrier diffusion length (>5 μm), Zn3P2 is composed of abundant Zn and P elements and has excellent physical properties for scalable thin-film deposition. However, to date, a Zn3P2 device of sufficient efficiency for commercial applications has not been demonstrated. Record efficiencies of 6.0% for multicrystalline cells and 4.3% for thin-film cells have been reported. Performance has been limited by the intrinsic p-type conductivity of Zn3P2, which restricts us to Schottky and heterojunction device designs. Due to our poor understanding of Zn3P2 interfaces, an ideal heterojunction partner has not yet been found.

The goal of this thesis is to explore the upper limit of solar conversion efficiency achievable with a Zn3P2 absorber through the design of an optimal heterojunction PV device. To do so, we investigate three key aspects: material growth, interface energetics, and device design. First, the growth of Zn3P2 on GaAs(001) is studied using compound-source molecular-beam epitaxy (MBE). We successfully demonstrate the pseudomorphic growth of Zn3P2 epilayers of controlled orientation and optoelectronic properties. Next, the energy-band alignments of epitaxial Zn3P2 and II-VI and III-V semiconductor interfaces are measured via high-resolution x-ray photoelectron spectroscopy in order to determine the most appropriate heterojunction partner. From this work, we identify ZnSe as a nearly ideal n-type emitter for a Zn3P2 PV device. Finally, various II-VI/Zn3P2 heterojunction solar cell designs are fabricated, including substrate and superstrate architectures, and evaluated based on their solar conversion efficiency.

Relevance: 40.00%

Abstract:

Government procurement of a new good or service is a process that usually includes basic research, development, and production. Empirical evidence indicates that investments in research and development (R&D) before production are significant in many defense procurements. Thus, an optimal procurement policy should not only select the most efficient producer, but also induce the contractors to design the best product and to develop the best technology. The current economic theory of optimal procurement and contracting, which has emphasized production but ignored R&D, is therefore difficult to apply to many cases of procurement.

In this thesis, I provide basic models of both R&D and production in the procurement process, where a number of firms invest in private R&D and compete for a government contract. R&D is modeled as a stochastic cost-reduction process. The government is considered both as a profit maximizer and as a procurement cost minimizer. In comparison to the literature, the following results derived from my models are significant. First, R&D matters in procurement contracting. When offering the optimal contract, the government will be better off if it correctly takes into account costly private R&D investment. Second, competition matters. The optimal contract and the total equilibrium R&D expenditures vary with the number of firms. The government usually does not prefer infinite competition among firms. Instead, it prefers free entry of firms. Third, under an R&D technology with constant marginal returns to scale, it is socially optimal to have only one firm conduct all of the R&D and production. Fourth, in an independent private values environment with risk-neutral firms, an informed government should select one of four standard auction procedures with an appropriate announced reserve price, acting as if it does not have any private information.
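
The fourth result can be illustrated with the classical selling-auction benchmark behind it: in a second-price auction with i.i.d. uniform private values, the revenue-maximizing announced reserve is 0.5 for any number of bidders (Myerson's optimal reserve). The Monte Carlo sketch below checks that benchmark; it is not the thesis's procurement model:

```python
import random

random.seed(42)

def expected_revenue(reserve, n_bidders=2, trials=200_000):
    """Seller revenue in a second-price auction with an announced
    reserve, private values i.i.d. uniform on [0, 1]."""
    total = 0.0
    for _ in range(trials):
        bids = sorted(random.random() for _ in range(n_bidders))
        if bids[-1] >= reserve:                # item sells only above reserve
            total += max(bids[-2], reserve)    # price: 2nd bid or the reserve
    return total / trials

# Revenue peaks near reserve = 0.5, beating both no reserve and a
# very high reserve -- the reserve trades off a higher price against
# the risk of not selling at all.
rev_none, rev_half, rev_high = (expected_revenue(r) for r in (0.0, 0.5, 0.9))
print(rev_half > rev_none and rev_half > rev_high)  # → True
```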

Relevance: 30.00%

Abstract:

This dissertation contains three essays on mechanism design. The common goal of these essays is to assist in the solution of different resource allocation problems where asymmetric information creates obstacles to the efficient allocation of resources. In each essay, we present a mechanism that satisfactorily solves the resource allocation problem and study some of its properties. In our first essay, "Combinatorial Assignment under Dichotomous Preferences", we present a class of problems akin to time scheduling without a pre-existing time grid, and propose a mechanism that is efficient, strategy-proof and envy-free. Our second essay, "Monitoring Costs and the Management of Common-Pool Resources", studies what can happen to an existing mechanism — the individual tradable quotas (ITQ) mechanism, also known as the cap-and-trade mechanism — when quota enforcement is imperfect and costly. Our third essay, "Vessel Buyback", coauthored with John O. Ledyard, presents an auction design that can be used to buy back excess capital in overcapitalized industries.

Relevance: 30.00%

Abstract:

This thesis describes research pursued in two areas, both involving the design and synthesis of sequence specific DNA-cleaving proteins. The first involves the use of sequence-specific DNA-cleaving metalloproteins to probe the structure of a protein-DNA complex, and the second seeks to develop cleaving moieties capable of DNA cleavage through the generation of a non-diffusible oxidant under physiological conditions.

Chapter One provides a brief review of the literature concerning sequence-specific DNA-binding proteins. Chapter Two summarizes the results of affinity cleaving experiments using leucine zipper-basic region (bZip) DNA-binding proteins. Specifically, the NH_2-terminal locations of a dimer containing the DNA binding domain of the yeast transcriptional activator GCN4 were mapped on the binding sites 5'-CTGACTAAT-3' and 5'-ATGACTCTT-3' using affinity cleaving. Analysis of the DNA cleavage patterns from Fe•EDTA-GCN4(222-281) and (226-281) dimers reveals that the NH_2-termini are in the major groove nine to ten base pairs apart and symmetrically displaced four to five base pairs from the central C of the recognition site. These data are consistent with structural models put forward for this class of DNA binding proteins. The results of these experiments are evaluated in light of the recently published crystal structure for the GCN4-DNA complex. Preliminary investigations of affinity cleaving proteins based on the DNA-binding domains of the bZip proteins Jun and Fos are also described.

Chapter Three describes experiments demonstrating the simultaneous binding of GCN4(226-281) and 1-Methylimidazole-2-carboxamide-netropsin (2-ImN), a designed synthetic peptide which binds in the minor groove of DNA at 5'-TGACT-3' sites as an antiparallel, side-by-side dimer. Through the use of Fe•EDTA-GCN4(226-281) as a sequence-specific footprinting agent, it is shown that the dimeric protein GCN4(226-281) and the dimeric peptide 2-ImN can simultaneously occupy their common binding site in the major and minor grooves of DNA, respectively. The association constants for 2-ImN in the presence and in the absence of Fe•EDTA-GCN4(226-281) are found to be similar, suggesting that the binding of the two dimers is not cooperative.

Chapter Four describes the synthesis and characterization of PBA-β-OH-His-Hin(139-190), a hybrid protein containing the DNA-binding domain of Hin recombinase and the putative iron-binding and oxygen-activating domain of the antitumor antibiotic bleomycin. This 54-residue protein, comprising residues 139-190 of Hin recombinase with the dipeptide pyrimidoblamic acid-β-hydroxy-L-histidine (PBA-β-OH-His) at the NH_2 terminus, was synthesized by solid phase methods. PBA-β-OH-His-Hin(139-190) binds specifically to DNA at four distinct Hin binding sites with affinities comparable to those of the unmodified Hin(139-190). In the presence of dithiothreitol (DTT), Fe•PBA-β-OH-His-Hin(139-190) cleaves DNA with specificity remarkably similar to that of Fe•EDTA-Hin(139-190), although with lower efficiency. Analysis of the cleavage pattern suggests that DNA cleavage is mediated through a diffusible species, in contrast with cleavage by bleomycin, which occurs through a non-diffusible oxidant.

Relevance: 30.00%

Abstract:

This work describes the design and synthesis of a true, heterogeneous, asymmetric catalyst. The catalyst consists of a thin film that resides on a high-surface-area hydrophilic solid and is composed of a chiral, hydrophilic organometallic complex dissolved in ethylene glycol. Reactions of prochiral organic reactants take place predominantly at the ethylene glycol-bulk organic interface.

The synthesis of this new heterogeneous catalyst is accomplished in a series of designed steps. A novel, water-soluble, tetrasulfonated 2,2'-bis(diphenylphosphino)-1,1'-binaphthyl (BINAP-4SO_3Na) is synthesized by direct sulfonation of 2,2'-bis(diphenylphosphino)-1,1'-binaphthyl (BINAP). The rhodium (I) complex of BINAP-4SO_3Na is prepared and is shown to be the first homogeneous catalyst to perform asymmetric reductions of prochiral 2-acetamidoacrylic acids in neat water with enantioselectivities as high as those obtained in non-aqueous solvents. The ruthenium (II) complex, [Ru(BINAP-4SO_3Na)(benzene)Cl]Cl, is also synthesized and exhibits a broader substrate specificity as well as higher enantioselectivities for the homogeneous asymmetric reduction of prochiral 2-acylamino acid precursors in water. Aquation of the ruthenium-chloro bond in water is found to be detrimental to the enantioselectivity with some substrates. Replacement of water by ethylene glycol results in the same high e.e.'s as those found in neat methanol. The ruthenium complex is impregnated onto a controlled pore-size glass CPG-240 by the incipient wetness technique. Anhydrous ethylene glycol is used as the immobilizing agent in this heterogeneous catalyst, and a non-polar 1:1 mixture of chloroform and cyclohexane is employed as the organic phase.

Asymmetric reduction of 2-(6'-methoxy-2'-naphthyl)acrylic acid to the non-steroidal anti-inflammatory agent, naproxen, is accomplished with this heterogeneous catalyst at a third of the rate observed in homogeneous solution with an e.e. of 96% at a reaction temperature of 3°C and 1,400 psig of hydrogen. No leaching of the ruthenium complex into the bulk organic phase is found at a detection limit of 32 ppb. Recycling of the catalyst is possible without any loss in enantioselectivity. Long-term stability of this new heterogeneous catalyst is proven by a self-assembly test. That is, under the reaction conditions, the individual components of the present catalytic system self-assemble into the supported-catalyst configuration.

The strategies outlined here for the design and synthesis of this new heterogeneous catalyst are general, and can hopefully be applied to the development of other heterogeneous, asymmetric catalysts.

Relevance: 30.00%

Abstract:

Biological machines are active devices that are comprised of cells and other biological components. These functional devices are best suited for physiological environments that support cellular function and survival. Biological machines have the potential to revolutionize the engineering of biomedical devices intended for implantation, where the human body can provide the required physiological environment. For engineering such cell-based machines, bio-inspired design can serve as a guiding platform as it provides functionally proven designs that are attainable by living cells. In the present work, a systematic approach was used to tissue engineer one such machine by exclusively using biological building blocks and by employing a bio-inspired design. Valveless impedance pumps were constructed based on the working principles of the embryonic vertebrate heart and by using cells and tissue derived from rats. The function of these tissue-engineered muscular pumps was characterized by exploring their spatiotemporal and flow behavior in order to better understand the capabilities and limitations of cells when used as the engines of biological machines.

Relevance: 30.00%

Abstract:

Signal processing techniques play important roles in the design of digital communication systems. These include information manipulation, transmitter signal processing, channel estimation, channel equalization and receiver signal processing. By interacting with communication theory and system implementing technologies, signal processing specialists develop efficient schemes for various communication problems by wisely exploiting various mathematical tools such as analysis, probability theory, matrix theory, optimization theory, and many others. In recent years, researchers realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communications channels. Using the elegant matrix-vector notations, many MIMO transceiver (including the precoder and equalizer) design problems can be solved by matrix and optimization theory. Furthermore, the researchers showed that the majorization theory and matrix decompositions, such as singular value decomposition (SVD), geometric mean decomposition (GMD) and generalized triangular decomposition (GTD), provide unified frameworks for solving many of the point-to-point MIMO transceiver design problems.

In this thesis, we consider the transceiver design problems for linear time invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. Additionally, the channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions, and the uses of the matrix decompositions and majorization theory toward the practical transmit-receive scheme designs for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, ingenious algorithms are proposed, and performance analyses are derived.

The first part of the thesis focuses on transceiver design with LTI flat MIMO channels. We propose a novel matrix decomposition which decomposes a complex matrix as a product of several sets of semi-unitary matrices and upper triangular matrices in an iterative manner. The complexity of the new decomposition, generalized geometric mean decomposition (GGMD), is always less than or equal to that of geometric mean decomposition (GMD). The optimal GGMD parameters which yield the minimal complexity are derived. Based on the channel state information (CSI) at both the transmitter (CSIT) and receiver (CSIR), GGMD is used to design a butterfly structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under the total transmit power constraint. A novel iterative receiving detection algorithm for the specific receiver is also proposed. For the application to cyclic prefix (CP) systems in which the SVD of the equivalent channel matrix can be easily computed, the proposed GGMD transceiver has a K/log_2(K) times complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver can convert a MIMO channel into a set of parallel subchannels with the same bias and signal to interference plus noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate at subchannels.
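
The defining property of the GMD has a closed form in the 2x2 case, which makes the idea easy to verify: the channel's singular values are rotated into an upper-triangular R whose diagonal entries both equal their geometric mean, so both subchannels see the same gain. This is a sketch of the decomposition's property, not of the thesis's GGMD algorithm:

```python
import numpy as np

def gmd_2x2(s1, s2):
    """Closed-form 2x2 geometric mean decomposition: given singular
    values s1 >= s2 > 0, build orthogonal U, V and upper-triangular R
    with Sigma = U @ R @ V.T and R[0,0] == R[1,1] == sqrt(s1 * s2)."""
    sbar = np.sqrt(s1 * s2)
    c = np.sqrt(s2 / (s1 + s2))      # rotation chosen so the first column
    s = np.sqrt(s1 / (s1 + s2))      # of Sigma @ V has norm exactly sbar
    V = np.array([[c, -s], [s, c]])
    U = np.array([[s1 * c, -s2 * s], [s2 * s, s1 * c]]) / sbar
    R = U.T @ np.diag([s1, s2]) @ V
    return U, R, V

U, R, V = gmd_2x2(4.0, 1.0)
print(np.allclose(np.diag(R), [2.0, 2.0]))            # equal-gain diagonal
print(np.allclose(U @ R @ V.T, np.diag([4.0, 1.0])))  # reconstructs Sigma
```

With equal diagonal entries, every subchannel of the DFE has the same SINR, which is why no bit allocation is needed; the full GMD extends this 2x2 rotation step to arbitrary dimensions.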

In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels with zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR and channel prediction are available, by using the proposed ST-GTD, we develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks), and the average per ST-block BER in the moderately high SNR region. Moreover, the ST-GMD DFE transceiver designed under an MMSE criterion maximizes Gaussian mutual information over the equivalent channel seen by each ST-block. In general, the newly proposed transceivers perform better than the GGMD-based systems since the superimposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD based system which does not require channel prediction but shares the same asymptotic BER performance with the ST-GMD DFE transceiver is also proposed.

The third part of the thesis considers two quality of service (QoS) transceiver design problems for flat MIMO broadcast channels. The first is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum power JT broadcast DFE transceiver (MPJT) and the maximum rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on QR decomposition are proposed. They are realizable for an arbitrary number of users.

Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) with LTV scalar channels. For both cases with known LTV channels and unknown wide sense stationary uncorrelated scattering (WSSUS) statistical channels, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT such that the SINR at the receiver is maximized. Also, a novel pilot-aided subspace channel estimation algorithm is proposed for orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multi-path Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods, such as MUSIC and ESPRIT, can be used to estimate the multipath delays, and the number of identifiable paths is up to O(M^2), theoretically. With the delay information, an MMSE estimator for the frequency response is derived. It is shown through simulations that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.
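
The co-array idea can be sketched directly: the distinct pairwise differences of M pilot positions can number O(M^2), so a subspace estimator working on the co-array sees far more virtual lags than there are physical pilots. The pilot positions below are illustrative, not the thesis's specific alternating placement:

```python
import numpy as np

def difference_coarray(pilot_positions):
    """Distinct pairwise differences of the pilot tone indices -- the
    virtual 'co-pilot' lags a subspace estimator can exploit."""
    p = np.asarray(pilot_positions)
    return np.unique(p[:, None] - p[None, :])

# A nested-style placement of M = 6 physical pilots (illustrative).
pilots = [0, 1, 2, 3, 7, 11]
lags = difference_coarray(pilots)
# 6 pilots yield many more distinct lags than pilots -- here every
# integer lag from -11 to 11 -- which is what lets the estimator
# resolve more multipath delays than there are physical pilots.
print(len(lags))  # → 23
```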

Relevance: 30.00%

Abstract:

Cyber-physical systems integrate computation, networking, and physical processes. Substantial research challenges exist in the design and verification of such large-scale, distributed sensing, actuation, and control systems. Rapidly improving technology and recent advances in control theory, networked systems, and computer science give us the opportunity to drastically improve our approach to integrated flow of information and cooperative behavior. Current systems rely on text-based specifications and manual design. Using new technology advances, we can create easier, more efficient, and cheaper ways of developing these control systems. This thesis will focus on design considerations for system topologies, ways to formally and automatically specify requirements, and methods to synthesize reactive control protocols, all within the context of an aircraft electric power system as a representative application area.

This thesis consists of three complementary parts: synthesis, specification, and design. The first section focuses on the synthesis of central and distributed reactive controllers for an aircraft electric power system. This approach incorporates methodologies from computer science and control. The resulting controllers are correct by construction with respect to system requirements, which are formulated using the specification language of linear temporal logic (LTL). The second section addresses how to formally specify requirements and introduces a domain-specific language for electric power systems. A software tool automatically converts high-level requirements into LTL and synthesizes a controller.

The final section focuses on design space exploration. A design methodology is proposed that uses mixed-integer linear programming to obtain candidate topologies, which are then used to synthesize controllers. The discrete-time control logic is then verified in real time by two methods: hardware and simulation. Finally, the problem of partial observability and dynamic state estimation is explored. Given a fixed placement of sensors on an electric power system, measurements from these sensors can be used in conjunction with control logic to infer the state of the system.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The two most important digital-system design goals today are to reduce power consumption and to increase reliability. Reductions in power consumption improve battery life in the mobile space and reductions in energy lower operating costs in the datacenter. Increased robustness and reliability shorten down time, improve yield, and are invaluable in the context of safety-critical systems. While optimizing towards these two goals is important at all design levels, optimizations at the circuit level have the furthest reaching effects; they apply to all digital systems. This dissertation presents a study of robust minimum-energy digital circuit design and analysis. It introduces new device models, metrics, and methods of calculation—all necessary first steps towards building better systems—and demonstrates how to apply these techniques. It analyzes a fabricated chip (a full-custom QDI microcontroller designed at Caltech and taped-out in 40-nm silicon) by calculating the minimum energy operating point and quantifying the chip’s robustness in the face of both timing and functional failures.
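The minimum-energy operating point mentioned above can be illustrated with a generic first-order circuit model. All constants below are made up for illustration; they are not the parameters extracted for the 40-nm chip analyzed in the thesis:

```python
import numpy as np

# Generic first-order models (illustrative constants only):
C_eff = 1e-12        # switched capacitance per operation [F]
V_th = 0.3           # threshold voltage [V]
I_0 = 1e-5           # leakage current scale [A]

def energy_per_op(v):
    """Dynamic plus leakage energy for one operation at supply v.

    Delay grows sharply as v approaches V_th, so leakage energy
    (leakage power times delay) dominates at low voltage, while
    C*V^2 switching energy dominates at high voltage.
    """
    t_delay = 1e-9 * v / (v - V_th) ** 1.5   # alpha-power-style delay model
    e_dyn = C_eff * v ** 2                   # switching energy
    e_leak = I_0 * v * t_delay               # leakage power x delay
    return e_dyn + e_leak

v_grid = np.linspace(0.35, 1.1, 500)
e = np.array([energy_per_op(v) for v in v_grid])
v_min = v_grid[np.argmin(e)]
print(f"minimum-energy operating point near {v_min:.2f} V")
```

The two terms pull in opposite directions, so the total energy per operation has an interior minimum: below it, slower gates burn leakage energy; above it, switching energy dominates.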

Relevância:

30.00% 30.00%

Publicador:

Resumo:

In the past, many different methodologies have been devised to support software development, and different sets of methodologies have been developed to support the analysis of software artefacts. We have identified this mismatch as one of the causes of the poor reliability of embedded systems software. The issue with software development styles is that they are "analysis-agnostic": they do not try to structure the code in a way that lends itself to analysis. The analysis is usually applied post-mortem, after the software has been developed, and it requires a large amount of effort. The issue with software analysis methodologies is that they do not exploit available information about the system being analyzed.

In this thesis we address the above issues by developing a new methodology, called "analysis-aware" design, that links software development styles with the capabilities of analysis tools. This methodology forms the basis of a framework for interactive software development. The framework consists of an executable specification language and a set of analysis tools based on static analysis, testing, and model checking. The language enforces an analysis-friendly code structure and offers primitives that allow users to implement their own testers and model checkers directly in the language. We introduce a new approach to static analysis that takes advantage of the capabilities of a rule-based engine. We have applied the analysis-aware methodology to the development of a smart home application.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Many engineering applications face the problem of bounding the expected value of a quantity of interest (performance, risk, cost, etc.) that depends on stochastic uncertainties whose probability distribution is not known exactly. Optimal uncertainty quantification (OUQ) is a framework that aims at obtaining the best bound in these situations by explicitly incorporating available information about the distribution. Unfortunately, this often leads to non-convex optimization problems that are numerically expensive to solve.

This thesis focuses on efficient numerical algorithms for OUQ problems. It begins by investigating several classes of OUQ problems that can be reformulated as convex optimization problems. Conditions on the objective function and information constraints under which a convex formulation exists are presented. Since the size of the optimization problem can become quite large, solutions for scaling up are also discussed. Finally, the capability of analyzing a practical system through such convex formulations is demonstrated by a numerical example of energy storage placement in power grids.
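The flavor of these optimizations over distributions can be seen in a toy version with a single moment constraint. The discretized linear program below is a generic illustration, not the energy-storage formulation from the thesis:

```python
import numpy as np
from scipy.optimize import linprog

# Toy OUQ problem: maximize P(X >= a) over all distributions on [0, 10]
# subject to E[X] = m.  Restricted to a fixed grid, this is a linear
# program in the probability masses p_i, and its optimal value recovers
# Markov's bound m / a.
m, a = 1.0, 3.0
x = np.linspace(0.0, 10.0, 101)          # discretized support
c = -(x >= a).astype(float)              # linprog minimizes, so negate
A_eq = np.vstack([np.ones_like(x), x])   # normalization and mean constraints
b_eq = np.array([1.0, m])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * len(x))
print(-res.fun, m / a)                   # both ~ 1/3
```

The worst-case distribution puts mass only at 0 and at a, which is exactly the extremal measure that makes Markov's inequality tight; richer information constraints change the extremal measures but preserve this finite-support structure.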

When an equivalent convex formulation is unavailable, it is possible to find a convex problem that provides a meaningful bound for the original problem, also known as a convex relaxation. As an example, the thesis investigates the setting used in Hoeffding's inequality. The naive formulation requires solving a collection of non-convex polynomial optimization problems whose number grows doubly exponentially. After structures such as symmetry are exploited, it is shown that both the number and the size of the polynomial optimization problems can be reduced significantly. Each polynomial optimization problem is then bounded by its convex relaxation using sums-of-squares. These bounds are found to be tight in all the numerical examples tested in the thesis and are significantly better than Hoeffding's bounds.
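For reference, the classical Hoeffding bound that these relaxations improve upon is simple to evaluate and to check against simulation. The Bernoulli setup below is a generic illustration, not an example from the thesis:

```python
import numpy as np

def hoeffding_bound(t, ranges):
    """Hoeffding's bound on P(S - E[S] >= t) for a sum S of independent
    X_i with X_i in [a_i, b_i]; ranges holds the widths b_i - a_i."""
    return np.exp(-2.0 * t ** 2 / np.sum(np.square(ranges)))

# n fair coins in [0, 1]: compare the bound with a Monte Carlo estimate.
rng = np.random.default_rng(0)
n, t = 100, 10.0
samples = rng.integers(0, 2, size=(200_000, n)).sum(axis=1)
empirical = np.mean(samples - n / 2 >= t)
bound = hoeffding_bound(t, np.ones(n))
print(empirical, bound)                  # empirical tail probability <= bound
```

Here the bound is exp(-2) ≈ 0.135 while the true tail probability is several times smaller, which illustrates the slack that tighter, distribution-aware relaxations such as the sums-of-squares bounds can close.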