33 results for Precision Xtra®
in CaltechTHESIS
Abstract:
From studies of protoplanetary disks to extrasolar planets and planetary debris, we aim to understand the full evolution of a planetary system. Observational constraints from ground- and space-based instrumentation allow us to measure the properties of objects near and far, and are central to developing this understanding. We present here three observational campaigns that, when combined with theoretical models, reveal characteristics of different stages and remnants of planet formation. The Kuiper Belt provides evidence of chemical and dynamical activity that reveals clues to its primordial environment and subsequent evolution. Large samples of this population can only be assembled at optical wavelengths, with thermal measurements at infrared and sub-mm wavelengths currently available for only the largest and closest bodies. Here we precisely measure the size and shape of one particular object, in hopes of better understanding its unique dynamical history and layered composition.
Molecular organic chemistry is one of the most fundamental and widespread facets of the universe, and plays a key role in planet formation. A host of carbon-containing molecules vibrationally emit in the near-infrared when excited by warm gas (T ~ 1000 K). The NIRSPEC instrument at the W.M. Keck Observatory is uniquely configured to cover large ranges of this wavelength region at high spectral resolution. Using this facility we present studies of warm CO gas in protoplanetary disks, with a new code for precise excitation modeling. A parameterized suite of models demonstrates the capabilities of the code and matches observational constraints such as line strength and shape. We also use the models to probe various disk parameters, and the approach extends readily to other species with known disk emission spectra, such as water, carbon dioxide, acetylene, and hydrogen cyanide.
Lastly, the existence of molecules in extrasolar planet atmospheres can also be studied with NIRSPEC, and reveals a great deal about the evolution of the protoplanetary gas. The species we observe in protoplanetary disks are often present in exoplanet atmospheres as well, and are abundant in Earth's atmosphere too. Thus, a sophisticated telluric removal code is necessary to analyze these high dynamic range, high-resolution spectra. We present observations of a hot Jupiter, revealing water in its atmosphere and demonstrating a new technique for exoplanet mass determination and atmospheric characterization. We will also apply this atmospheric removal code to the aforementioned disk observations, to improve our data analysis and probe less abundant species. Guiding models with observations is the only way to develop an accurate understanding of the timescales and processes involved. The futures of both the modeling and the observations are bright, and the end goal of a unified model of planet formation will require both theory and data, drawn from a diverse collection of sources.
Abstract:
Part I: The dynamic response of an elastic half space to an explosion in a buried spherical cavity is investigated by two methods. The first is implicit, and the final expressions for the displacements at the free surface are given as a series of spherical wave functions whose coefficients are solutions of an infinite set of linear equations. The second method is based on Schwarz's technique for solving boundary value problems, and leads to an iterative solution, starting with the known expression for the point source in a half space as the first term. The iterative series is transformed into a system of two integral equations, and into an equivalent set of linear equations. In this way, a dual interpretation of the physical phenomena is achieved. The systems are treated numerically, and the Rayleigh wave part of the displacements is given in the frequency domain. Several comparisons with simpler cases are analyzed to show the effect of the cavity radius-to-depth ratio on the spectra of the displacements.
Part II: A high speed, large capacity, hypocenter location program has been written for an IBM 7094 computer. Important modifications to the standard method of least squares have been incorporated in it. Among them are a new way to obtain the depth of shocks from the normal equations, and the computation of variable travel times for the local shocks in order to account automatically for crustal variations. The multiregional travel times, largely based upon the investigations of the United States Geological Survey, are confronted with actual traverses to test their validity.
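The standard least-squares scheme for hypocenter location is Geiger's method: linearize the travel-time residuals around a trial hypocenter and iterate the normal-equation solution. Below is a minimal Python sketch under simplifying assumptions (a uniform-velocity half space stands in for the multiregional travel-time tables; all names are illustrative, not from the original program):

```python
import numpy as np

def travel_time(hypo, station, v=6.0):
    """P travel time (s) from hypocenter (x, y, z, t0) to a surface
    station (x, y), for a uniform half space with velocity v (km/s).
    A stand-in for the multiregional travel-time tables."""
    x, y, z, t0 = hypo
    dist = np.sqrt((x - station[0])**2 + (y - station[1])**2 + z**2)
    return t0 + dist / v

def locate(stations, t_obs, hypo0, n_iter=20, tol=1e-4):
    """Geiger's method: iterate linearized least squares on the
    travel-time residuals to solve for (x, y, z, t0)."""
    hypo = np.asarray(hypo0, dtype=float)
    for _ in range(n_iter):
        pred = np.array([travel_time(hypo, s) for s in stations])
        resid = t_obs - pred
        # Numerical Jacobian of predicted arrival times w.r.t. (x, y, z, t0).
        J = np.zeros((len(stations), 4))
        for j in range(4):
            dh = np.zeros(4)
            dh[j] = 1e-4
            J[:, j] = [(travel_time(hypo + dh, s) - travel_time(hypo, s)) / 1e-4
                       for s in stations]
        # Least-squares solution of the normal equations J^T J dm = J^T r.
        dm, *_ = np.linalg.lstsq(J, resid, rcond=None)
        hypo += dm
        if np.linalg.norm(dm[:3]) < tol:  # converged in position
            break
    return hypo
```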
It is shown that several crustal phases provide enough control to obtain good depth solutions for nuclear explosions, even though not all the recording stations are in the region where crustal corrections are applied. Using the European travel times to locate the French nuclear explosion of May 1962 in the Sahara proved more adequate than previous work.
A simpler program, with manual crustal corrections, is used to process the Kern County series of aftershocks, and a clearer picture of the tectonic mechanism of the White Wolf fault is obtained.
Shocks in the California region are processed automatically, and statistical frequency-depth and energy-depth curves are discussed in relation to the tectonics of the area.
Abstract:
Optical frequency combs (OFCs) provide a direct phase-coherent link between optical and RF frequencies, and enable precision measurement of optical frequencies. In recent years, a new class of frequency combs (microcombs) has emerged, based on parametric frequency conversion in dielectric microresonators. Microcombs have large line spacings, from tens to hundreds of GHz, allowing easy access to individual comb lines for arbitrary waveform synthesis. They also provide broadband parametric gain, not limited by the specific atomic or molecular transitions of conventional OFCs. Emerging applications of microcombs include low noise microwave generation, astronomical spectrograph calibration, direct comb spectroscopy, and high capacity telecommunications.
In this thesis, research is presented starting with the introduction of a new type of chemically etched, planar silica-on-silicon disk resonator. A record Q factor of 875 million is achieved for on-chip devices. A simple and accurate approach to characterizing the FSR and dispersion of microcavities is demonstrated. Microresonator-based frequency combs with microwave repetition rates below 80 GHz are demonstrated on a chip for the first time. Low threshold power (as low as 1 mW) is demonstrated across a wide range of resonator FSRs, from 2.6 to 220 GHz, in surface-loss-limited disk resonators. The rich and complex dynamics of microcomb RF noise are studied. High-coherence RF phase-locking of microcombs is demonstrated, in which injection locking of the subcomb offset frequencies is observed via pump-detuning alignment. Moreover, temporal mode locking, featuring subpicosecond pulses from a parametric 22 GHz microcomb, is observed. We further demonstrate, for the first time, shot-noise-limited white phase noise in a microcomb. Finally, stabilization of the microcomb repetition rate is realized by phase-locked-loop control.
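For reference, a common convention for quantifying resonator FSR and dispersion (assumed here; standard in the microcomb literature, though not necessarily the thesis's notation) expands the resonance frequencies around the pumped mode μ = 0:

$$\omega_\mu = \omega_0 + D_1\,\mu + \tfrac{1}{2}\,D_2\,\mu^2 + \cdots,$$

where D_1/2π is the FSR and D_2 measures the second-order dispersion (D_2 > 0 for anomalous dispersion).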
As another major nonlinear optical application of disk resonators, highly coherent stimulated Brillouin lasers (SBLs) on silicon are also demonstrated, with a record-low Schawlow-Townes noise for any chip-based laser (less than 0.1 Hz²/Hz) and technical noise comparable to commercial narrow-linewidth fiber lasers. The SBL devices are efficient, featuring more than 90% quantum efficiency and thresholds as low as 60 microwatts. Moreover, novel properties of the SBL are studied, including cascaded operation, threshold tuning, and mode-pulling phenomena. Furthermore, high performance microwave generation using on-chip cascaded Brillouin oscillation is demonstrated. It is robust enough to serve as the optical voltage-controlled oscillator in the first demonstration of a photonic-based microwave frequency synthesizer. Finally, applications of microresonators as frequency reference cavities and low-phase-noise optomechanical oscillators are presented.
Abstract:
The concept of seismogenic asperities and aseismic barriers has become a useful paradigm within which to understand the seismogenic behavior of major faults. Since asperities and barriers can be thought of as defining the potential rupture area of large megathrust earthquakes, it is important to identify their respective spatial extents, constrain their temporal longevity, and develop a physical understanding of their behavior. Space geodesy is making critical contributions to the identification of slip asperities and barriers, but progress in many geographical regions depends on improving the accuracy and precision of the basic measurements. This thesis begins with technical developments aimed at improving satellite radar interferometric measurements of ground deformation: we introduce an empirical correction algorithm for unwanted interferometric path delays caused by spatially and temporally variable radar wave propagation speeds in the atmosphere. In Chapter 2, I combine geodetic datasets with complementary spatio-temporal resolutions to improve our understanding of the spatial distribution of crustal deformation sources and their temporal evolution, using observations from Long Valley Caldera (California) as a test bed. In the third chapter I apply the tools developed in the first two chapters to analyze postseismic deformation associated with the 2010 Mw=8.8 Maule (Chile) earthquake. The result delimits patches where afterslip occurs, explores their relationship to coseismic rupture, quantifies frictional properties associated with the inferred patches of afterslip, and discusses the relationship of asperities and barriers to long-term topography. The final chapter investigates interseismic deformation of the eastern Makran subduction zone using satellite radar interferometry alone, and demonstrates that with state-of-the-art techniques it is possible to quantify tectonic signals of small amplitude and long wavelength. Portions of the eastern Makran for which we estimate low fault coupling correspond to areas where bathymetric features on the downgoing plate are presently subducting, whereas the region of the 1945 M=8.1 earthquake appears to be more strongly coupled.
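One simple and widely used form of such an empirical correction (a sketch only; the thesis's algorithm is more elaborate, and the linear phase-elevation model is an assumption here) fits and removes a phase-versus-elevation trend from each interferogram:

```python
import numpy as np

def remove_phase_elevation_trend(phase, elevation):
    """Empirical tropospheric correction for an unwrapped interferogram:
    fit a linear phase-vs-elevation trend over the scene and subtract it.
    phase and elevation are 2-D arrays on the same grid; NaN pixels
    (water, decorrelation) are ignored in the fit."""
    mask = np.isfinite(phase) & np.isfinite(elevation)
    # Least-squares fit of phi = a*h + b over valid pixels.
    a, b = np.polyfit(elevation[mask], phase[mask], 1)
    corrected = phase - (a * elevation + b)
    return corrected, a  # a is the phase-elevation slope (rad per elevation unit)
```

A pitfall worth noting: deformation that is itself correlated with topography can be removed along with the troposphere, which is one reason more sophisticated corrections are needed.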
Abstract:
Plate tectonics shapes our dynamic planet through the creation and destruction of lithosphere. This work focuses on increasing our understanding of the processes at convergent and divergent boundaries through geologic and geophysical observations at modern plate boundaries. Recent work has shown that the subducting slab in central Mexico is most likely the flattest on Earth, yet there has been no consensus on its origin. The first chapter of this thesis sets out to systematically test all previously proposed mechanisms for slab flattening against the Mexican case. We discover that there is only one model for which we can find no contradictory evidence. The inapplicability of the standard mechanisms used to explain flat subduction in the Mexican example led us to question their application globally. The second chapter expands the search for a cause of flat subduction, in both space and time. We focus on the historical record of flat slabs in South America and look for a correlation between the shallowing and steepening of slab segments and the inferred thickness of the subducting oceanic crust. Using plate reconstructions and the assumption that a crustal anomaly formed at a spreading ridge will produce two conjugate features, we recreate the history of subduction along the South American margin and find no correlation between the subduction of bathymetric highs and shallow subduction. These studies show that a subducting crustal anomaly is neither a sufficient nor a necessary condition for flat-slab subduction. The final chapter of this thesis examines the divergent plate boundary in the Gulf of California. Through geologic reconnaissance mapping and an intensive paleomagnetic sampling campaign, we attempt to constrain the location and orientation of a widespread volcanic marker unit, the Tuff of San Felipe. Although the resolution of the applied magnetic susceptibility technique proved inadequate to constrain the direction of the pyroclastic flow with high precision, we have been able to detect the tectonic rotation of coherent blocks as well as rotation within blocks.
Abstract:
The solution behavior of linear polymer chains is well understood, having been the subject of intense study throughout the previous century. As plastics have become ubiquitous in everyday life, polymer science has grown into a major field of study. The conformation of a polymer in solution depends on its molecular architecture and its interactions with the surroundings. Developments in synthetic techniques have led to the creation of precision-tailored polymeric materials with varied topologies and functionalities. In order to design materials with the desired properties, it is imperative to understand the relationships between polymer architecture, conformation, and behavior. To meet that need, this thesis investigates the conformation and self-assembly of three architecturally complex macromolecular systems with rich and varied behaviors driven by the resolution of intramolecular conflicts. First, we describe the development of a robust and facile synthetic approach to reproducible bottlebrush polymers (Chapter 2). The method was used to produce homologous series of bottlebrush polymers with polynorbornene backbones, which revealed the effect of side-chain and backbone length on the overall conformation in both good and theta solvent conditions (Chapter 3). The side-chain conformation was obtained from a series of SANS experiments and determined to be indistinguishable from the behavior of free linear polymer chains. Using deuterium-labeled bottlebrushes, we were able for the first time to directly observe the backbone conformation of a bottlebrush polymer, which showed self-avoiding-walk behavior. Second, a series of SANS experiments was conducted on a homologous series of Side Group Liquid Crystalline Polymers (SGLCPs) in a perdeuterated small-molecule liquid crystal (5CB). Monodomain, aligned, dilute samples of SGLCP-b-PS block copolymers were seen to self-assemble into complex micellar structures with mutually orthogonal anisotropies at different length scales (Chapter 4). Finally, we present the results from the first scattering experiments on a set of fuel-soluble, associating telechelic polymers. We observed the formation of supramolecular aggregates in dilute (≤ 0.5 wt%) solutions of telechelic polymers and determined that the choice of solvent has a significant effect on the strength of association and the size of the supramolecules (Chapter 5). A method was developed for the direct estimation of the supramolecular aggregation number from SANS data. The insight into structure-property relationships obtained from this work will enable more targeted development of these molecular architectures for their respective applications.
Abstract:
Adaptive optics (AO) corrects distortions created by atmospheric turbulence and delivers diffraction-limited images on ground-based telescopes. The vastly improved spatial resolution and sensitivity have been used to study everything from the magnetic fields of sunspots to the internal dynamics of high-redshift galaxies. This thesis, about AO science with small and large telescopes, is divided into two parts: Robo-AO and magnetar kinematics.
In the first part, I discuss the construction and performance of the world's first fully autonomous visible-light AO system, Robo-AO, at the Palomar 60-inch telescope. Robo-AO operates extremely efficiently, with an overhead of < 50 s, typically observing about 22 targets every hour. We have performed large AO programs, observing a total of over 7,500 targets since May 2012. In the visible band, the images have a Strehl ratio of about 10% and achieve a contrast of up to 6 magnitudes at a separation of 1′′. The full width at half maximum achieved is 110–130 milli-arcseconds. I describe how Robo-AO is used to constrain the evolutionary models of low-mass pre-main-sequence stars by measuring resolved spectral energy distributions of stellar multiples in the visible band, more than doubling the current sample. I conclude this part with a discussion of possible future improvements to the Robo-AO system.
In the second part, I describe a study of magnetar kinematics using high-resolution near-infrared (NIR) AO imaging from the 10-meter Keck II telescope. Measuring the proper motions of five magnetars with a precision of up to 0.7 milli-arcsecond/yr, we have more than tripled the previously known sample of magnetar proper motions and shown that magnetar kinematics are equivalent to those of radio pulsars. We conclusively showed that SGR 1900+14 and SGR 1806-20 were ejected from the stellar clusters with which they were traditionally associated. The inferred kinematic ages of these two magnetars are 6 ± 1.8 kyr and 650 ± 300 yr, respectively, a factor of three to four greater than their respective characteristic ages. The calculated braking index is close to unity, compared with three for the vacuum dipole model and the 2.5-2.8 measured for young pulsars. I conclude this section by describing a search for NIR counterparts of new magnetars and the promise of future polarimetric investigation of the magnetars' NIR emission mechanism.
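The tension between kinematic and characteristic ages translates directly into a braking index. Assuming a constant braking index n and a current period much longer than the birth period (a standard assumption, not specific to this thesis), the true age t and characteristic age τ_c are related by

$$t = \frac{P}{(n-1)\,\dot P} = \frac{2\,\tau_c}{n-1}, \qquad \tau_c \equiv \frac{P}{2\,\dot P} \quad\Longrightarrow\quad n = 1 + \frac{2\,\tau_c}{t},$$

so a kinematic age several times τ_c, as found here, drives n well below the vacuum-dipole value of 3, toward unity.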
Abstract:
This thesis presents a concept for ultra-lightweight deformable mirrors based on a thin substrate of optical surface quality coated with continuous active piezopolymer layers that provide modes of actuation and shape correction. This concept eliminates any kind of stiff backing structure for the mirror surface and exploits micro-fabrication technologies to provide tight integration of the active materials into the mirror structure, avoiding actuator print-through effects. Proof-of-concept, 10-cm-diameter mirrors with a low areal density of about 0.5 kg/m² have been designed, built, and tested to measure their shape-correction performance and verify the models used for design. The low-cost manufacturing scheme uses replication techniques and strives to minimize residual stresses that cause the optical figure to deviate from that of the master mandrel. It does not require precision tolerancing, is lightweight, and is therefore potentially scalable to larger diameters for use in large, modular space telescopes. Other potential applications for such a laminate include ground-based mirrors for solar energy collection, adaptive optics for atmospheric turbulence, laser communications, and other shape control applications.
The immediate application for these mirrors is for the Autonomous Assembly and Reconfiguration of a Space Telescope (AAReST) mission, which is a university mission under development by Caltech, the University of Surrey, and JPL. The design concept, fabrication methodology, material behaviors and measurements, mirror modeling, mounting and control electronics design, shape control experiments, predictive performance analysis, and remaining challenges are presented herein. The experiments have validated numerical models of the mirror, and the mirror models have been used within a model of the telescope in order to predict the optical performance. A demonstration of this mirror concept, along with other new telescope technologies, is planned to take place during the AAReST mission.
Abstract:
This dissertation describes studies of G protein-coupled receptors (GPCRs) and ligand-gated ion channels (LGICs) using unnatural amino acid mutagenesis to gain high precision insights into the function of these important membrane proteins.
Chapter 2 considers the functional role of highly conserved proline residues within the transmembrane helices of the D2 dopamine GPCR. Through mutagenesis employing unnatural α-hydroxy acids, proline analogs, and N-methyl amino acids, we find that the lack of a backbone hydrogen-bond donor is important to proline function. At one proline site we additionally find that a substituent on the proline backbone N is important to receptor function.
In Chapter 3, side chain conformation is probed by mutagenesis of GPCRs and the muscle-type nAChR. Specific side chain rearrangements of highly conserved residues have been proposed to accompany activation of these receptors. These rearrangements were probed using conformationally-biased β-substituted analogs of Trp and Phe and unnatural stereoisomers of Thr and Ile. We also modeled the conformational bias of the unnatural Trp and Phe analogs employed.
Chapters 4 and 5 examine details of ligand binding to nAChRs. Chapter 4 describes a study investigating the importance of hydrogen bonds between ligands and the complementary face of muscle-type and α4β4 nAChRs. A hydrogen bond involving the agonist appears to be important for ligand binding in the muscle-type receptor but not the α4β4 receptor.
Chapter 5 describes a study characterizing the binding of varenicline, an actively prescribed smoking cessation therapeutic, to the α7 nAChR. Additionally, binding interactions to the complementary face of the α7 binding site were examined for a small panel of agonists. We identified side chains important for binding large agonists such as varenicline, but dispensable for binding the small agonist ACh.
Chapter 6 describes efforts to image nAChRs site-specifically modified with a fluorophore by unnatural amino acid mutagenesis. While progress was hampered by high levels of fluorescent background, improvements to sample preparation and alternative strategies for fluorophore incorporation are described.
Chapter 7 describes efforts toward a fluorescence assay for G protein association with a GPCR, with the ultimate goal of probing key protein-protein interactions along the G protein/receptor interface. A wide range of fluorescent protein fusions were generated, expressed in Xenopus oocytes, and evaluated for their ability to associate with each other.
Abstract:
Chapter I
Theories for organic donor-acceptor (DA) complexes in solution and in the solid state are reviewed and compared with the available experimental data. As shown by McConnell et al. (Proc. Natl. Acad. Sci. U.S., 53, 46-50 (1965)), DA crystals fall into two classes: the holoionic class, with a fully or almost fully ionic ground state, and the nonionic class, with little or no ionic character. If the total lattice binding energy 2ε1 (per DA pair) gained in ionizing a DA lattice exceeds the cost 2ε0 of ionizing each DA pair, i.e. if ε1 + ε0 < 0, then the lattice is holoionic. The charge-transfer (CT) band in crystals and in solution can be explained, following Mulliken, by a second-order mixing of states, or by any theory that makes the CT transition strongly allowed while requiring only a small change in the ground state of the non-interacting components D and A (or D+ and A-). The magnetic properties of the DA crystals are discussed.
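In symbols, with 2ε1 (negative) the lattice energy gained per DA pair upon full ionization and 2ε0 (positive) the cost of ionizing an isolated DA pair, the McConnell stability criterion for a holoionic ground state reads

$$2\varepsilon_1 + 2\varepsilon_0 < 0 \quad\Longleftrightarrow\quad \varepsilon_1 + \varepsilon_0 < 0.$$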
Chapter II
A computer program, EWALD, was written to calculate by the Ewald fast-convergence method the crystal Coulomb binding energy EC due to classical monopole-monopole interactions for crystals of any symmetry. The precision of EC values obtained is high: the uncertainties, estimated by the effect on EC of changing the Ewald convergence parameter η, ranged from ± 0.00002 eV to ± 0.01 eV in the worst case. The charge distribution for organic ions was idealized as fractional point charges localized at the crystallographic atomic positions: these charges were chosen from available theoretical and experimental estimates. The uncertainty in EC due to different charge distribution models is typically ± 0.1 eV (± 3%): thus, even the simple Hückel model can give decent results.
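For illustration, a minimal Python sketch of an Ewald sum of this kind for fractional point charges in a cubic cell follows (the original EWALD handled arbitrary symmetry; the cubic-cell restriction, names, and truncation limits are illustrative only):

```python
import numpy as np
from scipy.special import erfc

def ewald_energy(pos, q, L, eta, n_real=2, n_recip=5):
    """Coulomb lattice energy per unit cell by Ewald summation
    (Gaussian units) for a cubic cell of side L, with point charges q
    at positions pos (N x 3 array). eta is the convergence parameter;
    re-running with different eta checks the precision of the result."""
    N = len(q)
    # Real-space sum over image cells, screened by erfc.
    E_real = 0.0
    for n in np.ndindex(2 * n_real + 1, 2 * n_real + 1, 2 * n_real + 1):
        shift = (np.array(n) - n_real) * L
        for i in range(N):
            for j in range(N):
                if i == j and not shift.any():
                    continue  # no self-interaction within the home cell
                r = np.linalg.norm(pos[i] - pos[j] + shift)
                E_real += 0.5 * q[i] * q[j] * erfc(eta * r) / r
    # Reciprocal-space sum over nonzero k-vectors.
    V = L**3
    E_recip = 0.0
    for m in np.ndindex(2 * n_recip + 1, 2 * n_recip + 1, 2 * n_recip + 1):
        mv = np.array(m) - n_recip
        if not mv.any():
            continue
        k = 2.0 * np.pi * mv / L
        k2 = k @ k
        S = np.sum(q * np.exp(1j * (pos @ k)))  # structure factor
        E_recip += (2.0 * np.pi / V) * np.exp(-k2 / (4.0 * eta**2)) / k2 * abs(S)**2
    # Self-energy of the Gaussian screening charges.
    E_self = -eta / np.sqrt(np.pi) * np.sum(np.asarray(q)**2)
    return E_real + E_recip + E_self
```

Comparing results across several values of eta gives the kind of uncertainty estimate quoted above.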
EC for Wurster's Blue Perchlorate is -4.1 eV/molecule: the crystal is stable under the binding provided by direct Coulomb interactions. EC for N-Methylphenazinium Tetracyanoquinodimethanide is 0.1 eV: exchange Coulomb interactions, which cannot be estimated classically, must provide the necessary binding.
EWALD was also used to test the McConnell classification of DA crystals. For the holoionic (1:1)-(N,N,N',N'-Tetramethyl-para-phenylenediamine:7,7,8,8-Tetracyanoquinodimethan), EC = -4.0 eV while 2ε0 = 4.65 eV: clearly, exchange forces must provide the balance. For the holoionic (1:1)-(N,N,N',N'-Tetramethyl-para-phenylenediamine:para-Chloranil), EC = -4.4 eV while 2ε0 = 5.0 eV: again EC falls short of 2ε0. As a Gedankenexperiment, two nonionic crystals were assumed to be ionized: for (1:1)-(Hexamethylbenzene:para-Chloranil), EC = -4.5 eV, 2ε0 = 6.6 eV; for (1:1)-(Naphthalene:Tetracyanoethylene), EC = -4.3 eV, 2ε0 = 6.5 eV. Thus, exchange energies in these nonionic crystals must not exceed 1 eV.
Chapter III
A rapid-convergence quantum-mechanical formalism is derived to calculate the electronic energy of an arbitrary molecular (or molecular-ion) crystal: this provides estimates of crystal binding energies which include the exchange Coulomb interactions. Previously obtained LCAO-MO wavefunctions for the isolated molecule(s) ("unit cell spin-orbitals") provide the starting point. Bloch's theorem is used to construct "crystal spin-orbitals". Overlap between unit cell orbitals localized in different unit cells is neglected, or is eliminated by Löwdin orthogonalization. Then simple formulas for the total kinetic energy Q^(XT)_λ, nuclear attraction [λ/λ]XT, direct Coulomb [λλ/λ'λ']XT, and exchange Coulomb [λλ'/λ'λ]XT integrals are obtained, and direct-space brute-force expansions in atomic wavefunctions are given. Fourier series are obtained for [λ/λ]XT, [λλ/λ'λ']XT, and [λλ'/λ'λ]XT with the help of the convolution theorem; the Fourier coefficients require the evaluation of Silverstone's two-center Fourier transform integrals. If the short-range interactions are calculated by brute-force integration in direct space, and the long-range effects are summed in Fourier space, then rapid convergence is possible for [λ/λ]XT, [λλ/λ'λ']XT, and [λλ'/λ'λ]XT. This is achieved, as in the Ewald method, by modifying each atomic wavefunction by a "Gaussian convergence acceleration factor" and evaluating separately, in direct and in Fourier space, appropriate portions of [λ/λ]XT, etc., where some of the portions contain the Gaussian factor.
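In the usual notation, the crystal spin-orbitals referred to here are Bloch sums of the unit-cell orbitals,

$$\psi^{\mathbf{k}}_{\lambda}(\mathbf{r}) = \frac{1}{\sqrt{N}} \sum_{\mathbf{R}} e^{i\mathbf{k}\cdot\mathbf{R}}\, \varphi_{\lambda}(\mathbf{r}-\mathbf{R}),$$

where the sum runs over the N lattice vectors R and φ_λ is an LCAO-MO unit-cell spin-orbital.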
Abstract:
The cytochromes P450 (P450s) are a remarkable class of heme enzymes that catalyze the metabolism of xenobiotics and the biosynthesis of signaling molecules. Controlled electron flow into the thiolate-ligated heme active site allows P450s to activate molecular oxygen and hydroxylate aliphatic C–H bonds via the formation of high-valent metal-oxo intermediates (compounds I and II). Due to the reactive nature and short lifetimes of these intermediates, many of the fundamental steps in catalysis have not been observed directly. The Gray group and others have developed photochemical methods, known as “flash-quench,” for triggering electron transfer (ET) and generating redox intermediates in proteins in the absence of native ET partners. Photo-triggering affords a high degree of temporal precision for the gating of an ET event; the initial ET and subsequent reactions can be monitored on the nanosecond-to-second timescale using transient absorption (TA) spectroscopies. Chapter 1 catalogues critical aspects of P450 structure and mechanism, including the native pathway for formation of compound I, and outlines the development of photochemical processes that can be used to artificially trigger ET in proteins. Chapters 2 and 3 describe the development of these photochemical methods to establish electronic communication between a photosensitizer and the buried P450 heme. Chapter 2 describes the design and characterization of a Ru-P450-BM3 conjugate containing a ruthenium photosensitizer covalently tethered to the P450 surface, and nanosecond-to-second kinetics of the photo-triggered ET event are presented. By analyzing data at multiple wavelengths, we have identified the formation of multiple ET intermediates, including the catalytically relevant compound II; this intermediate is generated by oxidation of a bound water molecule in the ferric resting state enzyme. The work in Chapter 3 probes the role of a tryptophan residue situated between the photosensitizer and heme in the aforementioned Ru-P450 BM3 conjugate. Replacement of this tryptophan with histidine does not perturb the P450 structure, yet it completely eliminates the ET reactivity described in Chapter 2. The presence of an analogous tryptophan in Ru-P450 CYP119 conjugates also is necessary for observing oxidative ET, but the yield of heme oxidation is lower. Chapter 4 offers a basic description of the theoretical underpinnings required to analyze ET. Single-step ET theory is first presented, followed by extensions to multistep ET: electron “hopping.” The generation of “hopping maps” and use of a hopping map program to analyze the rate advantage of hopping over single-step ET is described, beginning with an established rhenium-tryptophan-azurin hopping system. This ET analysis is then applied to the Ru-tryptophan-P450 systems described in Chapter 2; this strongly supports the presence of hopping in Ru-P450 conjugates. Chapter 5 explores the implementation of flash-quench and other phototriggered methods to examine the native reductive ET and gas binding events that activate molecular oxygen. In particular, TA kinetics that demonstrate heme reduction on the microsecond timescale for four Ru-P450 conjugates are presented. In addition, we implement laser flash-photolysis of P450 ferrous–CO to study the rates of CO rebinding in the thermophilic P450 CYP119 at variable temperature. 
Chapter 6 describes the development and implementation of air-sensitive potentiometric redox titrations to determine the solution reduction potentials of a series of P450 BM3 mutants, which were designed for non-native cyclopropanation of styrene in vivo. An important conclusion from this work is that substitution of serine for the axial cysteine shifts the wild-type reduction potential positive by 130 mV, facilitating reduction by biological redox cofactors in the presence of poorly bound substrates. While this mutation abolishes oxygenation activity, these mutants are capable of catalyzing the cyclopropanation of styrene, even within the confines of an E. coli cell. Four appendices are also provided, covering photochemical heme oxidation in ruthenium-modified nitric oxide synthase (Appendix A), general protocols (Appendix B), chapter-specific notes (Appendix C), and Matlab scripts used for data analysis (Appendix D).
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in different contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We first look at evidence from controlled laboratory experiments, in which subjects are presented with choices between monetary gambles, or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices, so theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests; this imposes computational and economic constraints on classical experimental design methods. We develop an adaptive testing methodology, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the most informative test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn determines the next test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees relative to the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
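To make the EC2 criterion concrete, here is a minimal Python sketch of one greedy test-selection step in the noiseless case (data structures and names are illustrative, not BROAD's implementation): hypotheses are partitioned into equivalence classes (the theories), edges join hypotheses in different classes with weight equal to the product of their priors, and a test's value is the expected weight of edges it cuts.

```python
import numpy as np
from itertools import combinations

def ec2_greedy_test(prior, classes, predictions):
    """One greedy EC2 step.

    prior:       (H,) array of hypothesis probabilities
    classes:     (H,) array of equivalence-class labels (theories)
    predictions: (T, H) array; predictions[t, h] is the outcome
                 hypothesis h predicts for test t (noiseless case)
    Returns the index of the test with the largest expected cut weight."""
    H = len(prior)
    # Edges connect hypotheses belonging to different theories.
    edges = [(i, j, prior[i] * prior[j])
             for i, j in combinations(range(H), 2)
             if classes[i] != classes[j]]
    best_t, best_gain = None, -1.0
    for t in range(predictions.shape[0]):
        gain = 0.0
        for outcome in np.unique(predictions[t]):
            consistent = predictions[t] == outcome
            p_outcome = prior[consistent].sum()
            # An edge is cut once either endpoint is ruled out.
            cut = sum(w for i, j, w in edges
                      if not (consistent[i] and consistent[j]))
            gain += p_outcome * cut
        if gain > best_gain:
            best_t, best_gain = t, gain
    return best_t
```

Adaptive submodularity of this objective is what licenses the greedy policy's near-optimality guarantee and the lazy-evaluation speedups mentioned above.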
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice and because we find no signatures of it in our data.
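For reference, the prospect-theoretic evaluation that describes most subjects uses a reference-dependent value function; in the standard Tversky-Kahneman parameterization (assumed here for illustration),

$$v(x) = \begin{cases} x^{\alpha}, & x \ge 0,\\ -\lambda\,(-x)^{\beta}, & x < 0, \end{cases}$$

where gains and losses x are measured relative to a reference point (here, the endowment) and λ > 1 captures loss aversion.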
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
In these models the passage of time is linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporally inconsistent choice.
We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, demand for it will be greater than its price elasticity explains; even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
In future work, BROAD can be applied widely to test different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, could be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
Thermal noise arising from mechanical loss in highly reflective dielectric coatings is a significant noise source in precision optical measurements. In particular, Advanced LIGO, a large-scale interferometer aiming to observe gravitational waves, is expected to be limited by coating thermal noise in its most sensitive region, around 30–300 Hz. Various theoretical calculations for predicting coating Brownian noise have been proposed; however, owing to the relatively limited knowledge of the coating material properties, an accurate prediction of the noise cannot yet be achieved. A testbed that can directly observe coating thermal noise close to the Advanced LIGO band will serve as an indispensable tool to verify the calculations, study the material properties of the coating, and estimate the detector's performance.
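The standard route to such predictions (assumed background here, not a result of the thesis) is the fluctuation-dissipation theorem in Levin's direct form: for a notional oscillating pressure of amplitude F_0 with the read-out beam's intensity profile applied to the mirror face, the displacement noise is

$$S_x(f) = \frac{2 k_B T}{\pi^2 f^2}\,\frac{W_{\rm diss}}{F_0^2},$$

where W_diss is the time-averaged power dissipated under that drive. W_diss is proportional to the coating mechanical loss angle φ, which is why a direct noise measurement constrains the coating loss.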
This dissertation reports a setup with the sensitivity to observe broadband (10 Hz to 1 kHz) thermal noise from fused-silica/tantala coatings at room temperature in fixed-spacer Fabry–Perot cavities. The important fundamental and technical noise sources associated with the setup are discussed. The coating loss obtained from the measurement agrees with results reported in the literature. The setup serves as a testbed to study thermal noise in highly reflective mirrors made from different materials; one example is an AlxGa1−xAs (AlGaAs) heterostructure. An optimized design that minimizes thermo-optic noise in such a coating is proposed and discussed in this work.
Abstract:
Understanding how transcriptional regulatory sequence maps to regulatory function remains a difficult problem in regulatory biology. Given a particular DNA sequence for a bacterial promoter region, we would like to be able to say which transcription factors bind there, how strongly they bind, and whether they interact with each other and/or RNA polymerase, with the ultimate objective of integrating knowledge of these parameters into a prediction of gene expression levels. Statistical thermodynamics provides a useful framework for doing so, enabling us to predict how gene expression levels depend on transcription factor binding energies and concentrations. We used thermodynamic models, coupled with models of the sequence-dependent binding energies of transcription factors and RNAP, to construct a genotype-to-phenotype map for the level of repression exhibited by the lac promoter, and tested it experimentally using a set of promoter variants from E. coli strains isolated from different natural environments. For this work, we sought to "reverse engineer" naturally occurring promoter sequences to understand how variations in promoter sequence affect gene expression. The natural inverse of this approach is to "forward engineer" promoter sequences to obtain targeted levels of gene expression. We used a high-precision model of RNAP-DNA sequence-dependent binding energy, coupled with a thermodynamic model relating binding energy to gene expression, to predictively design and verify a suite of synthetic E. coli promoters whose expression varied over nearly three orders of magnitude.
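A standard thermodynamic-model result of the kind used for the lac promoter (in the weak-promoter limit, with R repressors competing against N_NS nonspecific genomic sites; the notation is illustrative of the framework rather than taken from the thesis) is the repression, the fold-reduction in expression:

$$\text{repression} = 1 + \frac{R}{N_{NS}}\,e^{-\beta\,\Delta\varepsilon_{rd}},$$

where Δε_rd is the repressor-operator binding energy relative to the nonspecific background and β = 1/k_BT.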
However, although thermodynamic models enable predictions of mean levels of gene expression, it has become evident that cell-to-cell variability, or "noise", in gene expression can also play a biologically important role. To address this aspect of gene regulation, we developed models based on the chemical master equation framework and used them to explore the noise properties of a number of common E. coli regulatory motifs; these properties included the dependence of the noise on parameters such as transcription factor binding strength and copy number. We then performed experiments in which these parameters were systematically varied and measured the level of variability using mRNA FISH. The results showed a clear dependence of the noise on these parameters, in accord with the model predictions.
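As a concrete instance of the master-equation framework (the simplest case, not the regulated motifs studied here): for constitutive transcription at rate k and degradation at rate γ per transcript, the mRNA copy-number distribution p_m obeys

$$\frac{dp_m}{dt} = k\,(p_{m-1} - p_m) + \gamma\,\big[(m+1)\,p_{m+1} - m\,p_m\big],$$

whose steady state is Poisson with mean k/γ and Fano factor (variance/mean) equal to 1; regulation that switches the promoter between states generically pushes the Fano factor above 1, which is the kind of parameter dependence probed by the mRNA FISH measurements.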
Finally, one shortcoming of the preceding modeling frameworks is that their applicability is largely limited to systems that are already well-characterized, such as the lac promoter. Motivated by this fact, we used a high throughput promoter mutagenesis assay called Sort-Seq to explore the completely uncharacterized transcriptional regulatory DNA of the E. coli mechanosensitive channel of large conductance (MscL). We identified several candidate transcription factor binding sites, and work is continuing to identify the associated proteins.
Abstract:
The free-neutron beta decay correlation A0 between neutron polarization and electron emission direction provides the strongest constraint on the ratio λ = gA/gV of the axial-vector to vector coupling constants in weak decay. In conjunction with the CKM matrix element Vud and the neutron lifetime τn, λ provides a test of Standard Model assumptions for the weak interaction. Leading high-precision measurements of A0 and τn in the 1995-2005 period showed discrepancies with prior measurements and with Standard Model predictions for the relationship between λ, τn, and Vud. The UCNA experiment was developed to measure A0 from the decay of polarized ultracold neutrons (UCN), providing a complementary determination of λ with systematic uncertainties different from those of prior cold-neutron-beam experiments. This dissertation describes the analysis of the dataset collected by UCNA in 2010, with emphasis on detector response calibrations and systematics. The UCNA measurement is placed in the context of the most recent τn results and cold-neutron A0 experiments.
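For orientation, in the usual convention (λ real and negative, neglecting recoil-order corrections) the beta-decay asymmetry is related to λ by

$$A_0 = -2\,\frac{\lambda^2 + \lambda}{1 + 3\lambda^2},$$

so a measurement of A0 fixes λ, which together with τn and Vud over-constrains the Standard Model V-A description.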