10 results for algorithm design and analysis
in CaltechTHESIS
Abstract:
Nucleic acids are a useful substrate for engineering at the molecular level. Designing the detailed energetics and kinetics of interactions between nucleic acid strands remains a challenge. Building on previous algorithms to characterize the ensemble of dilute solutions of nucleic acids, we present a design algorithm that allows optimization of structural features and binding energetics of a test tube of interacting nucleic acid strands. We extend this formulation to handle multiple thermodynamic states and combinatorial constraints to allow optimization of pathways of interacting nucleic acids. In both design strategies, low-cost estimates to thermodynamic properties are calculated using hierarchical ensemble decomposition and test tube ensemble focusing. These algorithms are tested on randomized test sets and on example pathways drawn from the molecular programming literature. To analyze the kinetic properties of designed sequences, we describe algorithms to identify dominant species and kinetic rates using coarse-graining at the scale of a small box containing several strands or a large box containing a dilute solution of strands.
Abstract:
Signal processing techniques play important roles in the design of digital communication systems, including information manipulation, transmitter signal processing, channel estimation, channel equalization, and receiver signal processing. Drawing on communication theory and system-implementation technologies, signal processing specialists develop efficient schemes for communication problems by exploiting mathematical tools such as analysis, probability theory, matrix theory, and optimization theory. In recent years, researchers have realized that multiple-input multiple-output (MIMO) channel models apply to a wide range of physical communication channels. In matrix-vector notation, many MIMO transceiver design problems (covering both the precoder and the equalizer) can be solved with matrix and optimization theory. Furthermore, researchers have shown that majorization theory and matrix decompositions, such as the singular value decomposition (SVD), geometric mean decomposition (GMD), and generalized triangular decomposition (GTD), provide unified frameworks for solving many point-to-point MIMO transceiver design problems.
In this thesis, we consider the transceiver design problems for linear time invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. Additionally, the channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions and the application of these decompositions, together with majorization theory, to practical transmit-receive scheme designs for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures and algorithms are developed, and performance analyses are derived.
The first part of the thesis focuses on transceiver design for LTI flat MIMO channels. We propose a novel matrix decomposition which decomposes a complex matrix as a product of several sets of semi-unitary matrices and upper triangular matrices in an iterative manner. The complexity of the new decomposition, the generalized geometric mean decomposition (GGMD), is always less than or equal to that of the geometric mean decomposition (GMD). The optimal GGMD parameters which yield the minimal complexity are derived. Based on the channel state information (CSI) at both the transmitter (CSIT) and receiver (CSIR), the GGMD is used to design a butterfly-structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under the total transmit power constraint. A novel iterative detection algorithm for this receiver is also proposed. For application to cyclic prefix (CP) systems, in which the SVD of the equivalent channel matrix can be computed easily, the proposed GGMD transceiver has a K/log_2(K)-fold complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver converts a MIMO channel into a set of parallel subchannels with the same bias and signal-to-interference-plus-noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate on the subchannels.
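The GMD referred to above factors a channel matrix H = QRP* with R upper triangular and every diagonal entry equal to the geometric mean of the singular values of H; it is this equal-diagonal property that equalizes the subchannel gains and removes the need for bit allocation. A minimal numerical check of that diagonal value (a NumPy sketch, not the thesis's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
# A random 4x4 complex channel matrix standing in for H.
H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# Singular values of the channel matrix.
s = np.linalg.svd(H, compute_uv=False)

# In the GMD H = Q R P*, every diagonal entry of the upper-triangular R
# equals the geometric mean of the singular values.
sigma_bar = np.exp(np.mean(np.log(s)))

# Sanity check: |det(H)| equals the product of the singular values,
# so sigma_bar**K must reproduce it (K = 4 here).
assert np.isclose(sigma_bar ** 4, abs(np.linalg.det(H)))
print(sigma_bar)
```

Because all K subchannels then share the gain `sigma_bar`, each carries the same rate, which is why no per-subchannel bit loading is needed.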
In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels under the zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal, since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop the space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR, and channel prediction are available, we use the proposed ST-GTD to develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks) and the average per-ST-block BER in the moderately high SNR region. Moreover, the ST-GMD DFE transceiver designed under the MMSE criterion maximizes the Gaussian mutual information over the equivalent channel seen by each ST-block. In general, the newly proposed transceivers perform better than the GGMD-based systems, since the superimposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD based system which does not require channel prediction, but shares the same asymptotic BER performance as the ST-GMD DFE transceiver, is also proposed.
The third part of the thesis considers two quality of service (QoS) transceiver design problems for flat MIMO broadcast channels. The first is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum power JT broadcast DFE transceiver (MPJT) and the maximum rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on the QR decomposition are proposed; they are realizable for an arbitrary number of users.
Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) over linear time-varying (LTV) scalar channels. For both known LTV channels and unknown wide-sense stationary uncorrelated scattering (WSSUS) statistical channels, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT so that the SINR at the receiver is maximized. Also, a novel pilot-aided subspace channel estimation algorithm is proposed for orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multipath Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods, such as MUSIC and ESPRIT, can be used to estimate the multipath delays, and theoretically the number of identifiable paths is up to O(M^2). With the delay information, an MMSE estimator of the frequency response is derived. Simulations show that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.
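The difference co-array idea above can be illustrated in a few lines: M physical pilots generate up to M(M-1)+1 distinct pairwise differences (lags), which is why on the order of M^2 delays become identifiable. The pilot placement below is a hypothetical sparse ruler chosen for illustration, not the thesis's actual alternating placement:

```python
import numpy as np

# Hypothetical placement of M = 4 pilot tones (a perfect sparse ruler).
pilots = np.array([0, 1, 4, 6])

# The difference co-array is the set of all pairwise differences p_i - p_j.
diffs = {int(a - b) for a in pilots for b in pilots}

# M pilots yield up to M*(M-1) + 1 distinct lags, i.e. O(M^2) virtual
# "co-pilots" on which subspace methods like MUSIC/ESPRIT can operate.
print(sorted(diffs))   # contiguous lags -6..6 for this placement
print(len(diffs))      # 13 = 4*3 + 1
```

With 4 physical pilots this placement fills every lag from -6 to 6, so the co-array behaves like a 13-element uniform pilot grid for delay estimation.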
Abstract:
The two most important digital-system design goals today are to reduce power consumption and to increase reliability. Reductions in power consumption improve battery life in the mobile space and reductions in energy lower operating costs in the datacenter. Increased robustness and reliability shorten down time, improve yield, and are invaluable in the context of safety-critical systems. While optimizing towards these two goals is important at all design levels, optimizations at the circuit level have the furthest reaching effects; they apply to all digital systems. This dissertation presents a study of robust minimum-energy digital circuit design and analysis. It introduces new device models, metrics, and methods of calculation—all necessary first steps towards building better systems—and demonstrates how to apply these techniques. It analyzes a fabricated chip (a full-custom QDI microcontroller designed at Caltech and taped-out in 40-nm silicon) by calculating the minimum energy operating point and quantifying the chip’s robustness in the face of both timing and functional failures.
Abstract:
STEEL, the Caltech-created nonlinear large-displacement analysis software, is currently used by a large number of researchers at Caltech. However, due to its complexity and lack of visualization tools (such as pre- and post-processing capabilities), rapid creation and analysis of models with this software was difficult. SteelConverter was created to facilitate model creation through the use of the industry-standard finite element solver ETABS. This software allows users to create models in ETABS and intelligently convert model information such as geometry, loading, releases, and fixity into a format that STEEL understands. Models that would take several days to create and verify now take several hours or less. Both the productivity of the researcher and the level of confidence in the model being analyzed are greatly increased.
It has always been a major goal of Caltech to spread the knowledge created here to other universities. However, due to the complexity of STEEL it was difficult for researchers or engineers from other universities to conduct analyses. While SteelConverter did help researchers at Caltech improve their research, sending SteelConverter and its documentation to other universities was less than ideal. Issues of version control, individual computer requirements, and the difficulty of releasing updates made a more centralized solution preferred. This is where the idea for Caltech VirtualShaker was born. Through the creation of a centralized website where users could log in, submit, analyze, and process models in the cloud, all of the major concerns associated with the utilization of SteelConverter were eliminated. Caltech VirtualShaker allows users to create profiles where defaults associated with their most commonly run models are saved, and allows them to submit multiple jobs to an online virtual server to be analyzed and post-processed. The creation of this website not only allowed for more rapid distribution of this tool, but also created a means for engineers and researchers with no access to powerful computer clusters to run computationally intensive analyses without the excessive cost of building and maintaining a computer cluster.
In order to increase confidence in the use of STEEL as an analysis system, as well as to verify the conversion tools, a series of comparisons was made between STEEL and ETABS. Six models of increasing complexity, ranging from a cantilever column to a twenty-story moment frame, were analyzed to determine the ability of STEEL to accurately calculate basic model properties, such as elastic stiffness and damping through a free vibration analysis, as well as more complex structural properties, such as overall structural capacity through a pushover analysis. These analyses showed very strong agreement between the two programs on every aspect of each analysis. However, they also showed the ability of the STEEL analysis algorithm to converge at significantly larger drifts than ETABS when using the more computationally expensive and structurally realistic fiber hinges. Following the ETABS analysis, it was decided to repeat the comparisons in software more capable of highly nonlinear analysis, Perform. These analyses again showed very strong agreement between the two programs in every aspect of each analysis through instability. However, due to some limitations in Perform, free vibration analyses for the three-story one-bay chevron brace frame, the two-bay chevron brace frame, and the twenty-story moment frame could not be conducted. With the current trend toward ultimate capacity analysis, the ability to use fiber-based models allows engineers to gain a better understanding of a building's behavior under these extreme load scenarios.
Following this, a final study was done on Hall's U20 structure [1], in which the structure was analyzed in all three programs and the results compared. The pushover curves from each program were compared and the differences caused by variations in software implementation explained. From this, conclusions can be drawn on the effectiveness of each analysis tool when attempting to analyze structures through the point of geometric instability. The analyses show that while ETABS was capable of accurately determining the elastic stiffness of the model, following the onset of inelastic behavior the analysis tool failed to converge. However, for the small number of time steps over which the ETABS analysis did converge, its results exactly matched those of STEEL, leading to the conclusion that ETABS is not an appropriate analysis package for analyzing a structure through the point of collapse when using fiber elements throughout the model. The analyses also showed that while Perform was capable of calculating the response of the structure accurately, restrictions in the material model resulted in a pushover curve that did not match that of STEEL exactly, particularly post-collapse. However, such problems could be alleviated by choosing a simpler material model.
Abstract:
The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software-defined networks, or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange, and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and, perhaps not surprisingly, makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable, with the Witsenhausen counterexample a famous instance of this phenomenon. In spite of this discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.
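For sparsity-constrained information structures, quadratic invariance reduces to a finite combinatorial test on binary support patterns: the constraint set is quadratically invariant with respect to the plant exactly when the boolean product of the patterns, supp(S G S), stays inside supp(S). A small sketch of that test (the 2x2 patterns are hypothetical, not drawn from the thesis):

```python
import numpy as np

def is_quadratically_invariant(S, G):
    """Test quadratic invariance of a binary controller sparsity pattern S
    with respect to a binary plant pattern G: the constraint set
    {K : support(K) <= S} is QI iff support(K G K) <= S in the worst
    case, i.e. the boolean product S G S stays inside S."""
    SGS = (S @ G @ S) > 0              # boolean matrix product via counting
    return bool(np.all(SGS <= (S > 0)))

# A lower-triangular controller with a lower-triangular plant: information
# flows "downward" at least as fast as control actions propagate, so QI holds.
S = np.array([[1, 0], [1, 1]])
G = np.array([[1, 0], [1, 1]])
print(is_quadratically_invariant(S, G))       # True

# A fully coupled plant feeds information upward faster than the
# lower-triangular controller can share it, breaking QI.
G_bad = np.array([[1, 1], [1, 1]])
print(is_quadratically_invariant(S, G_bad))   # False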
The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis are part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control, and fall under three broad categories, namely controller synthesis, architecture design, and system identification.
We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two-player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems, which considers lossy channels in the feedback loop. Our next set of results is concerned with controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller, such as the placement of actuators, sensors, and the communication links between them, can no longer be taken as given -- indeed, the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying, computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it. Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and destroy rather than leverage any a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end, we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system.
We exploit the fact that the transfer function of the local dynamics is low-order, but full-rank, while the transfer function of the global dynamics is high-order, but low-rank, to formulate this separation task as a nuclear norm minimization problem. Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control and optimization in layered architectures.
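Nuclear norm problems of the kind formulated above are typically attacked with first-order proximal methods, whose core step is singular value soft-thresholding. The sketch below shows that operator and its low-rank-promoting effect; this is a generic solver building block chosen for illustration, not the thesis's own algorithm:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the proximal operator of the nuclear
    norm tau*||.||_*, the basic step of first-order solvers for
    nuclear norm minimization problems."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)       # soft-threshold the spectrum
    return U @ np.diag(s_shrunk) @ Vt

# A rank-1 "low-rank" component plus small noise: thresholding zeroes
# the small singular values and returns a rank-1 estimate.
rng = np.random.default_rng(1)
u = np.ones(6)
v = np.linspace(1.0, 2.0, 6)
X = np.outer(u, v) + 0.01 * rng.standard_normal((6, 6))
L = svt(X, tau=0.5)
print(np.linalg.matrix_rank(L))   # 1
```

In the separation task described above, the same shrinkage mechanism is what lets a solver peel off the high-order, low-rank global dynamics from the full-rank local dynamics.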
Abstract:
This work describes the design and synthesis of a true, heterogeneous, asymmetric catalyst. The catalyst consists of a thin film that resides on a high-surface-area hydrophilic solid and is composed of a chiral, hydrophilic organometallic complex dissolved in ethylene glycol. Reactions of prochiral organic reactants take place predominantly at the interface between the ethylene glycol and the bulk organic phase.
The synthesis of this new heterogeneous catalyst is accomplished in a series of designed steps. A novel, water-soluble, tetrasulfonated 2,2'-bis(diphenylphosphino)-1,1'-binaphthyl (BINAP-4SO_3Na) is synthesized by direct sulfonation of 2,2'-bis(diphenylphosphino)-1,1'-binaphthyl (BINAP). The rhodium(I) complex of BINAP-4SO_3Na is prepared and is shown to be the first homogeneous catalyst to perform asymmetric reductions of prochiral 2-acetamidoacrylic acids in neat water with enantioselectivities as high as those obtained in non-aqueous solvents. The ruthenium(II) complex, [Ru(BINAP-4SO_3Na)(benzene)Cl]Cl, is also synthesized and exhibits a broader substrate specificity as well as higher enantioselectivities for the homogeneous asymmetric reduction of prochiral 2-acylamino acid precursors in water. Aquation of the ruthenium-chloro bond in water is found to be detrimental to the enantioselectivity with some substrates. Replacement of water by ethylene glycol results in the same high e.e.'s as those found in neat methanol. The ruthenium complex is impregnated onto a controlled-pore glass, CPG-240, by the incipient wetness technique. Anhydrous ethylene glycol is used as the immobilizing agent in this heterogeneous catalyst, and a non-polar 1:1 mixture of chloroform and cyclohexane is employed as the organic phase.
Asymmetric reduction of 2-(6'-methoxy-2'-naphthyl)acrylic acid to the non-steroidal anti-inflammatory agent, naproxen, is accomplished with this heterogeneous catalyst at a third of the rate observed in homogeneous solution with an e.e. of 96% at a reaction temperature of 3°C and 1,400 psig of hydrogen. No leaching of the ruthenium complex into the bulk organic phase is found at a detection limit of 32 ppb. Recycling of the catalyst is possible without any loss in enantioselectivity. Long-term stability of this new heterogeneous catalyst is proven by a self-assembly test. That is, under the reaction conditions, the individual components of the present catalytic system self-assemble into the supported-catalyst configuration.
The strategies outlined here for the design and synthesis of this new heterogeneous catalyst are general, and can hopefully be applied to the development of other heterogeneous, asymmetric catalysts.
Abstract:
For damaging response, the force-displacement relationship of a structure is highly nonlinear and history-dependent. For satisfactory analysis of such behavior, it is important to be able to characterize and to model the phenomenon of hysteresis accurately. A number of models have been proposed for response studies of hysteretic structures, some of which are examined in detail in this thesis. There are two popular classes of models used in the analysis of curvilinear hysteretic systems. The first is of the distributed element or assemblage type, which models the physical behavior of the system by using well-known building blocks. The second class of models is of the differential equation type, which is based on the introduction of an extra variable to describe the history dependence of the system.
Owing to their mathematical simplicity, the latter models have been used extensively for various applications in structural dynamics, most notably in the estimation of the response statistics of hysteretic systems subjected to stochastic excitation. But the fundamental characteristics of these models are still not clearly understood. A response analysis of systems using both the Distributed Element model and the differential equation model when subjected to a variety of quasi-static and dynamic loading conditions leads to the following conclusion: Caution must be exercised when employing the models belonging to the second class in structural response studies as they can produce misleading results.
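A widely used member of the differential-equation class is the Bouc-Wen model, chosen here purely to illustrate the class: an auxiliary state z evolves as dz = (A - (β sign(ẋ z) + γ)|z|^n) dx and carries the load history. A minimal quasi-static integration sketch (parameter values are illustrative):

```python
import numpy as np

def bouc_wen_z(x, A=1.0, beta=0.5, gamma=0.5, n=1):
    """Integrate the Bouc-Wen auxiliary state z along a displacement
    history x: dz = (A - (beta*sign(dx*z) + gamma)*|z|**n) * dx.
    z is the extra state variable that encodes history dependence."""
    z = np.zeros_like(x)
    for k in range(1, len(x)):
        dx = x[k] - x[k - 1]
        dz = (A - (beta * np.sign(dx * z[k - 1]) + gamma)
              * abs(z[k - 1]) ** n) * dx
        z[k] = z[k - 1] + dz
    return z

# One full loading cycle starting from the virgin state: the (x, z)
# trajectory does not retrace itself, leaving a residual offset --
# exactly the hysteresis the extra state variable is meant to capture.
t = np.linspace(0, 2 * np.pi, 1000)
x = np.sin(t)
z = bouc_wen_z(x)
print(abs(z[-1] - z[0]) > 1e-3)   # True: residual after a closed x-cycle
```

The residual offset after a closed displacement cycle is a simple fingerprint of the history dependence that the thesis's response studies probe under quasi-static and dynamic loading.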
Masing's hypothesis, originally proposed for steady-state loading, can be extended to general transient loading as well, leading to considerable simplification in the analysis of the Distributed Element models. A simple, nonparametric identification technique is also outlined, by means of which an optimal model representation involving one additional state variable is determined for hysteretic systems.
Abstract:
Despite over 30 years of effort, an HIV-1 vaccine that elicits protective antibodies still does not exist. Recent clinical studies have shown that during natural infection about 20% of the population is capable of mounting a potent and protective antibody response. Closer inspection of these individuals reveals that a subset of these antibodies, recently termed potent VRC01-like (PVL), derive exclusively from a single human germline heavy chain gene. Induced clonal expansion of the B cell encoding this gene is the first step through which PVL antibodies may be elicited. Unfortunately, naturally occurring HIV gp120s fail to bind to this germline, and as a result cannot be used as the initial prime in a vaccine regimen. We have determined the crystal structure of an important germline antibody that is a promising target for vaccine design efforts, and have set out to engineer a more likely candidate using computationally guided rational design.
In addition to prevention efforts on the vaccine design side, recently characterized broadly neutralizing anti-HIV antibodies have excellent potential for use in gene therapy and passive immunotherapy. The separation distance between functional Fabs on an antibody is important because of the sparse distribution of envelope spikes on HIV compared with other viruses. We set out to build and characterize novel antibody architectures by incorporating structured linkers into the hinge region of the anti-HIV antibody b12, with the goal of observing whether these linkers increased the arm-span of the IgG dimer. When incorporated, flexible Gly4Ser repeats did not result in detectable extensions of the IgG antigen binding domains, in contrast to linkers including more rigid domains such as β2-microglobulin, Zn-α2-glycoprotein, and tetratricopeptide repeats (TPRs). This study adds a set of linkers with varying lengths and rigidities to the available linker repertoire, which may be useful for the modification and construction of antibodies and other fusion proteins.
Abstract:
This dissertation focuses on the incorporation of non-innocent or multifunctional moieties into different ligand scaffolds to support one or multiple metal centers in close proximity. Chapter 2 describes initial efforts to synthesize hetero- or homometallic tri- or dinuclear metal carbonyl complexes supported by para-terphenyl diphosphine ligands. A series of [M2M’(CO)4]-type clusters (M = Ni, Pd; M’ = Fe, Co) could be accessed and used to relate the metal composition to the properties of the complexes. During these studies, non-innocent behavior was also observed in dinuclear Fe complexes resulting from changes in the oxidation state of the cluster. These studies motivated efforts to rationally incorporate central arene moieties capable of managing both protons and electrons during small-molecule activation.
Chapter 3 discusses the synthesis of metal complexes supported by a novel para-terphenyl diphosphine ligand containing a non-innocent 1,4-hydroquinone moiety as the central arene. A Pd0-hydroquinone complex was found to mediate the activation of a variety of small molecules to form the corresponding Pd0-quinone complexes in a formal two-proton/two-electron transformation. Mechanistic investigations of dioxygen activation revealed a metal-first activation process followed by subsequent proton and electron transfer from the ligand. These studies revealed the capacity of the central arene substituent to serve as a reservoir for a formal equivalent of dihydrogen, although the stability of the M-quinone compounds prevented access to the PdII-quinone oxidation state, thus hindering small-molecule transformations requiring more than two electrons per equivalent of metal complex.
Chapter 4 discusses the synthesis of metal complexes supported by a ligand containing a 3,5-substituted pyridine moiety as the linker separating the phenylene phosphine donors. Nickel and palladium complexes supported by this ligand were found to tolerate a wide variety of pyridine nitrogen-coordinated electrophiles which were found to alter central pyridine electronics, and therefore metal-pyridine π-system interactions, substantially. Furthermore, nickel complexes supported by this ligand were found to activate H-B and H-Si bonds and formally hydroborate and hydrosilylate the central pyridine ring. These systems highlight the potential use of pyridine π-system-coordinated metal complexes to reversibly store reducing equivalents within the ligand framework in a manner akin to the previously discussed 1,4-hydroquinone diphosphine ligand scaffold.
Chapter 5 departs from the phosphine-based chemistry and instead focuses on the incorporation of hydrogen bonding networks into the secondary coordination sphere of [Fe4(μ4-O)]-type clusters supported by various pyrazolate ligands. The aim of this project is to stabilize reactive oxygenic species, such as oxos, to study their spectroscopy and reactivity in the context of complicated multimetallic clusters. Herein are reported the synthesis and the electrochemical and Mössbauer characterization of a series of chloride clusters prepared using the parent pyrazolate and a 3-aminophenyl-substituted pyrazolate ligand. Efforts to rationally access hydroxo and oxo clusters from these chloride precursors represent ongoing work that will continue in the group.
Appendix A discusses attempts to access [Fe3Ni]-type clusters as models of the enzymatic active site of [NiFe] carbon monoxide dehydrogenase. Efforts to construct tetranuclear clusters with an interstitial sulfide proved unsuccessful, although a (μ3-S) ligand could be installed through non-oxidative routes into triiron clusters. While [Fe3Ni(μ4-O)]-type clusters could be assembled, accessing an open heterobimetallic edge site proved challenging, thus prohibiting efforts to study chemical transformations, such as hydroxide attack onto carbon monoxide or carbon dioxide coordination, relevant to the native enzyme. Appendix B discusses the attempts to synthesize models of the full H-cluster of [FeFe]-hydrogenase using a bioinorganic approach. A synthetic peptide containing three cysteine donors was successfully synthesized and found to chelate a preformed synthetic [Fe4S4] cluster. However, efforts to incorporate the diiron subsite model complex proved challenging as the planned thioester exchange reaction was found to non-selectively acetylate the peptide backbone, thus preventing the construction of the full six-iron cluster.
Abstract:
Part I
The physical phenomena which will ultimately limit the packing density of planar bipolar and MOS integrated circuits are examined. The maximum packing density is obtained by minimizing the supply voltage and the size of the devices. The minimum size of a bipolar transistor is determined by junction breakdown, punch-through, and doping fluctuations. The minimum size of a MOS transistor is determined by gate oxide breakdown and drain-source punch-through. The packing density of fully active bipolar or static non-complementary MOS circuits becomes limited by power dissipation. The packing density of circuits which are not fully active, such as read-only memories, becomes limited by the area occupied by the devices, and the frequency is limited by the circuit time constants and by metal migration. The packing density of fully active dynamic or complementary MOS circuits is limited by the area occupied by the devices, and the frequency is limited by power dissipation and metal migration. It is concluded that read-only memories will reach approximately the same performance and packing density with MOS and bipolar technologies, while fully active circuits will reach the highest levels of integration with dynamic MOS or complementary MOS technologies.
Part II
Because the Schottky diode is a one-carrier device, it has both advantages and disadvantages with respect to the junction diode, which is a two-carrier device. The advantage is that there are practically no excess minority carriers which must be swept out before the diode blocks current in the reverse direction, i.e., a much faster recovery time. The disadvantage of the Schottky diode is that for a high-voltage device it is not possible to use conductivity modulation as in the p-i-n diode; since the charge carriers are of one sign, no charge cancellation can occur and the current becomes space-charge limited. The Schottky diode design is developed in Section 2, and the characteristics of an optimally designed silicon Schottky diode are summarized in Fig. 9. Design criteria and a quantitative comparison of junction and Schottky diodes are given in Table 1 and Fig. 10. Although somewhat approximate, the treatment allows a systematic quantitative comparison of the devices for any given application.
Part III
We interpret measurements of the permittivity of perovskite strontium titanate as a function of orientation, temperature, electric field, and frequency performed by Dr. Richard Neville. The free energy of the crystal is calculated as a function of polarization. The Curie-Weiss law and the LST relation are verified. A generalized LST relation is used to calculate the permittivity of strontium titanate from zero to optic frequencies. Two active optic modes are important. The lower frequency mode is attributed mainly to motion of the strontium ions with respect to the rest of the lattice, while the higher frequency active mode is attributed to motion of the titanium ions with respect to the oxygen lattice. An anomalous resonance which multi-domain strontium titanate crystals exhibit below 65 K is described, and a plausible mechanism which explains the phenomenon is presented.
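The generalized LST relation invoked above gives the static permittivity as ε(0) = ε(∞) ∏_i (ω_LO,i / ω_TO,i)^2 over the polar modes. A minimal numeric sketch, where the mode frequencies are illustrative order-of-magnitude values for SrTiO3 and not Dr. Neville's measurements:

```python
import numpy as np

def lst_static_permittivity(eps_inf, w_LO, w_TO):
    """Generalized Lyddane-Sachs-Teller relation:
    eps(0) = eps(inf) * prod_i (w_LO_i / w_TO_i)**2."""
    return eps_inf * np.prod((np.asarray(w_LO) / np.asarray(w_TO)) ** 2)

# Illustrative mode frequencies (cm^-1): a very soft low-frequency TO
# mode is what drives the large static permittivity, consistent with
# the Sr-dominated soft mode described above.
eps_inf = 5.2
w_TO = [90.0, 175.0, 545.0]
w_LO = [172.0, 475.0, 795.0]
print(lst_static_permittivity(eps_inf, w_LO, w_TO))   # a few hundred
```

The soft TO mode contributes the largest (ω_LO/ω_TO)^2 factor, so as it softens with decreasing temperature the static permittivity grows, which is the Curie-Weiss behavior the abstract verifies.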