5 results for Clock and Data Recovery in CaltechTHESIS


Relevance: 100.00%

Abstract:

The scalability of CMOS technology has driven computation into a diverse range of applications across the power consumption, performance and size spectra. Communication is a necessary adjunct to computation, and whether this is to push data from node-to-node in a high-performance computing cluster or from the receiver of a wireless link to a neural stimulator in a biomedical implant, interconnect can take up a significant portion of the overall system power budget. Although a single interconnect methodology cannot address such a broad range of systems efficiently, there are a number of key design concepts that enable good interconnect design in the age of highly-scaled CMOS: an emphasis on highly-digital approaches to solving ‘analog’ problems, hardware sharing between links as well as between different functions (such as equalization and synchronization) in the same link, and adaptive hardware that changes its operating parameters to mitigate not only variation in the fabrication of the link, but also link conditions that change over time. These concepts are demonstrated through the use of two design examples, at the extremes of the power and performance spectra.

A novel all-digital clock and data recovery technique for high-performance, high-density interconnect has been developed. Two independently adjustable clock phases are generated from a delay line calibrated to 2 UI. One clock phase is placed in the middle of the eye to recover the data, while the other is swept across the delay line. The samples produced by the two clocks are compared to generate eye information, which is used to determine the best phase for data recovery. The functions of the two clocks are swapped after the data phase is updated; this ping-pong action allows an infinite delay range without the use of a PLL or DLL. The scheme's generalized sampling and retiming architecture is used in a sharing technique that saves power and area in high-density interconnect. The eye information generated is also useful for tuning an adaptive equalizer, circumventing the need for dedicated adaptation hardware.
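The ping-pong phase update described above can be sketched in a few lines. This is an illustrative toy model, not the thesis implementation: the delay-line length, the eye model, and all function names are assumptions made here for clarity.

```python
# Toy sketch of the ping-pong CDR loop: one phase recovers data while the
# other sweeps the 2-UI delay line; after each update the clock roles swap.

TAPS_PER_2UI = 64          # assumed delay-line length, calibrated to 2 UI
UI = TAPS_PER_2UI // 2     # taps per unit interval

def sampling_margin(phase):
    """Toy eye model: margin is the distance from the nearest eye edge."""
    pos = phase % UI
    return min(pos, UI - pos)

def ping_pong_update(data_phase):
    """Sweep the search phase across the line, pick the best tap, swap roles."""
    scores = [sampling_margin(p) for p in range(TAPS_PER_2UI)]
    best = max(range(TAPS_PER_2UI), key=scores.__getitem__)
    # Ping-pong swap: the search clock that found the best tap becomes the
    # new data clock, and the old data clock becomes the next sweep's search
    # clock, so the sweep can continue indefinitely without a PLL or DLL.
    return best, data_phase   # (new data phase, new search phase)

new_data, new_search = ping_pong_update(10)
```

In this toy eye model the best sampling point sits mid-UI, so the update moves the data phase to the eye center while the displaced clock takes over the sweep.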

On the other side of the performance/power spectra, a capacitive proximity interconnect has been developed to support 3D integration of biomedical implants. In order to integrate more functionality while staying within size limits, implant electronics can be embedded onto a foldable parylene (‘origami’) substrate. Many of the ICs in an origami implant will be placed face-to-face with each other, so wireless proximity interconnect can be used to increase communication density while decreasing implant size, as well as facilitate a modular approach to implant design, where pre-fabricated parylene-and-IC modules are assembled together on-demand to make custom implants. Such an interconnect needs to be able to sense and adapt to changes in alignment. The proposed array uses a TDC-like structure to realize both communication and alignment sensing within the same set of plates, increasing communication density and eliminating the need to infer link quality from a separate alignment block. In order to distinguish the communication plates from the nearby ground plane, a stimulus is applied to the transmitter plate, which is rectified at the receiver to bias a delay generation block. This delay is in turn converted into a digital word using a TDC, providing alignment information.
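The sensing chain in the last few sentences (stimulus → rectified bias → delay → TDC word) can be sketched numerically. All constants and the linear coupling model below are illustrative assumptions, not values from the thesis.

```python
# Hedged sketch of the alignment-sensing chain: the rectified stimulus sets
# a bias, the bias sets a delay, and a TDC quantizes that delay into a
# digital alignment word.

def coupling(misalign_um, plate_um=100.0):
    """Toy model: capacitive coupling falls off linearly with lost overlap."""
    return max(0.0, 1.0 - misalign_um / plate_um)

def delay_ns(misalign_um):
    """Less coupling -> less rectified bias -> longer generated delay."""
    return 1.0 / max(coupling(misalign_um), 1e-3)  # avoid divide-by-zero

def tdc_word(misalign_um, lsb_ns=0.5, bits=6):
    """Quantize the generated delay into a digital word, as a TDC would."""
    code = int(delay_ns(misalign_um) / lsb_ns)
    return min(code, 2**bits - 1)  # saturate at full scale
```

The monotonic mapping is the point: better alignment gives a shorter delay and a smaller TDC code, so the same plates report link quality while they communicate.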

Relevance: 100.00%

Abstract:

This dissertation primarily describes chemical-scale studies of G protein-coupled receptors and Cys-loop ligand-gated ion channels to better understand ligand binding interactions and the mechanism of channel activation using recently published crystal structures as a guide. These studies employ the use of unnatural amino acid mutagenesis and electrophysiology to measure subtle changes in receptor function.

In Chapter 2, the role of a conserved aromatic microdomain predicted in the D3 dopamine receptor is probed in the closely related D2 and D4 dopamine receptors. This domain was found to act as a structural unit near the ligand binding site that is important for receptor function. The domain consists of several functionally important noncovalent interactions including hydrogen bond, aromatic-aromatic, and sulfur-π interactions that show strong couplings by mutant cycle analysis. We also assign an alternate interpretation for the linear fluorination plot observed at W6.48, a residue previously thought to participate in a cation-π interaction with dopamine.
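The mutant cycle analysis mentioned above compares wild type, two single mutants, and the double mutant to measure coupling between two residues. A standard way to compute the coupling energy from EC50 shifts is shown below; the EC50 values used are made up for illustration.

```python
# Double-mutant cycle: coupling energy from EC50 fold shifts.
import math

R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.0      # temperature, K

def coupling_energy(ec50_wt, ec50_a, ec50_b, ec50_ab):
    """DDG_int = RT * ln(Omega), Omega = (EC50_wt * EC50_ab) / (EC50_a * EC50_b)."""
    omega = (ec50_wt * ec50_ab) / (ec50_a * ec50_b)
    return R * T * math.log(omega)

# |DDG_int| well above ~1 kcal/mol indicates the two residues interact;
# Omega near 1 (DDG_int near 0) means their effects are simply additive.
ddg = coupling_energy(ec50_wt=1.0, ec50_a=10.0, ec50_b=8.0, ec50_ab=12.0)
```

Here the double mutant is much less impaired than additivity predicts (Ω < 1), which is the signature of a direct interaction between the two positions.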

Chapter 3 outlines attempts to incorporate chemically synthesized and in vitro acylated unnatural amino acids into mammalian cells. While our attempts were not successful, method optimizations and data for nonsense suppression with an in vivo acylated tRNA are included. This chapter is aimed to aid future researchers attempting unnatural amino acid mutagenesis in mammalian cells.

Chapter 4 identifies a cation-π interaction between glutamate and a tyrosine residue on loop C in the GluClβ receptor. Using the recently published crystal structure of the homologous GluClα receptor, other ligand-binding and protein-protein interactions are probed to determine the similarity between this invertebrate receptor and other more distantly related vertebrate Cys-loop receptors. We find that many of the interactions previously observed are conserved in the GluCl receptors; however, care must be taken when extrapolating structural data.

Chapter 5 examines inherent properties of the GluClα receptor that are responsible for the observed glutamate insensitivity of the receptor. Chimera synthesis and mutagenesis reveal the C-terminal portion of the M4 helix and the C-terminus as contributing to formation of the decoupled state, where ligand binding is incapable of triggering channel gating. Receptor mutagenesis was unable to identify single residue mismatches or impaired protein-protein interactions within this domain. We conclude that M4 helix structure and/or membrane dynamics are likely the cause of ligand insensitivity in this receptor and that the M4 helix has an important role in the activation process.

Relevance: 100.00%

Abstract:

From studies of protoplanetary disks to extrasolar planets and planetary debris, we aim to understand the full evolution of a planetary system. Observational constraints from ground- and space-based instrumentation allow us to measure the properties of objects near and far and are central to developing this understanding. We present here three observational campaigns that, when combined with theoretical models, reveal characteristics of different stages and remnants of planet formation. The Kuiper Belt provides evidence of chemical and dynamical activity that reveals clues to its primordial environment and subsequent evolution. Large samples of this population can only be assembled at optical wavelengths, with thermal measurements at infrared and sub-mm wavelengths currently available for only the largest and closest bodies. Here we precisely measure the size and shape of one particular object, in hopes of better understanding its unique dynamical history and layered composition.

Molecular organic chemistry is one of the most fundamental and widespread facets of the universe, and plays a key role in planet formation. A host of carbon-containing molecules vibrationally emit in the near-infrared when excited by warm gas, T~1000 K. The NIRSPEC instrument at the W.M. Keck Observatory is uniquely configured to study large ranges of this wavelength region at high spectral resolution. Using this facility we present studies of warm CO gas in protoplanetary disks, with a new code for precise excitation modeling. A parameterized suite of models demonstrates the abilities of the code and matches observational constraints such as line strength and shape. We also use the models to probe various disk parameters, and the approach is easily extended to other species with known disk emission spectra such as water, carbon dioxide, acetylene, and hydrogen cyanide.
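A minimal piece of the excitation physics behind the statement that warm gas at T~1000 K drives near-infrared vibrational emission: in LTE, the Boltzmann factor sets the fraction of CO in the emitting v=1 state. The thesis code is far more detailed; this sketch computes only the core two-level ratio using standard constants.

```python
# Boltzmann population ratio for the CO fundamental band.
import math

HC_OVER_K = 1.4388          # second radiation constant, cm*K
CO_FUNDAMENTAL = 2143.0     # cm^-1, v=1-0 band origin of CO

def boltzmann_ratio(t_kelvin, wavenumber=CO_FUNDAMENTAL):
    """n(v=1)/n(v=0) for a two-level population in LTE."""
    return math.exp(-HC_OVER_K * wavenumber / t_kelvin)

# At 1000 K a few percent of CO sits in v=1, enough to emit strongly in
# the near-infrared; at a few hundred K the band is essentially dark.
ratio = boltzmann_ratio(1000.0)
```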

Lastly, the existence of molecules in extrasolar planets can also be studied with NIRSPEC and reveals a great deal about the evolution of the protoplanetary gas. The species we observe in protoplanetary disks are also often present in exoplanet atmospheres, and are abundant in Earth's atmosphere as well. Thus, a sophisticated telluric removal code is necessary to analyze these high dynamic range, high-resolution spectra. We present observations of a hot Jupiter, revealing water in its atmosphere and demonstrating a new technique for exoplanet mass determination and atmospheric characterization. We will also be applying this atmospheric removal code to the aforementioned disk observations, to improve our data analysis and probe less abundant species. Guiding models using observations is the only way to develop an accurate understanding of the timescales and processes involved. The future of both the modeling and the observations is bright, and the end goal of realizing a unified model of planet formation will require both theory and data, from a diverse collection of sources.

Relevance: 100.00%

Abstract:

The Laser Interferometer Gravitational-Wave Observatory (LIGO) consists of two complex large-scale laser interferometers designed for the direct detection of gravitational waves from distant astrophysical sources in the frequency range 10 Hz to 5 kHz. Direct detection of these space-time ripples would support Einstein's general theory of relativity and provide invaluable information and new insight into the physics of the Universe.

The initial phase of LIGO started in 2002, and since then data were collected during six science runs. Instrument sensitivity improved from run to run thanks to the efforts of the commissioning team. Initial LIGO reached its design sensitivity during the last science run, which ended in October 2010.

In parallel with commissioning and data analysis on the initial detector, the LIGO group worked on research and development of the next generation of detectors. The major instrument upgrade from Initial to Advanced LIGO started in 2010 and lasted until 2014.

This thesis describes the results of commissioning work done at the LIGO Livingston site from 2013 until 2015, in parallel with and after the installation of the instrument. It also discusses new techniques and tools developed at the 40m prototype, including adaptive filtering, estimation of quantization noise in digital filters, and the design of isolation kits for ground seismometers.

The first part of this thesis is devoted to methods for bringing the interferometer to the linear regime, where the collection of data becomes possible. The states of the longitudinal and angular controls of the interferometer degrees of freedom during the lock acquisition process and in the low-noise configuration are discussed in detail.

Once the interferometer is locked and transitioned to the low-noise regime, the instrument produces astrophysical data that must be calibrated to units of meters or strain. The second part of this thesis describes the online calibration technique set up at both observatories to monitor the quality of the collected data in real time. A sensitivity analysis was performed to understand and eliminate noise sources in the instrument.
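The calibration mentioned above ultimately expresses a measured arm-length change as strain. The conversion itself is simple; the example values below are illustrative, chosen near the sensitivity scale of the advanced detectors.

```python
# Gravitational-wave strain from differential arm-length change, h = dL / L.

ARM_LENGTH_M = 4000.0   # LIGO arm length: 4 km

def strain(delta_l_m):
    """Convert a calibrated displacement in meters to dimensionless strain."""
    return delta_l_m / ARM_LENGTH_M

# A differential displacement of 4e-19 m corresponds to h = 1e-22.
h = strain(4e-19)
```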

Coupling of noise sources into the gravitational-wave channel can be reduced if robust feedforward and optimal feedback control loops are implemented. The last part of this thesis describes static and adaptive feedforward noise cancellation techniques applied to the Advanced LIGO interferometers and tested at the 40m prototype. Applications of optimal time-domain feedback control techniques and estimators to the aLIGO control loops are also discussed.
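The adaptive feedforward idea above can be illustrated with a textbook LMS canceller: a witness sensor measures the disturbance, an adaptively weighted FIR filter predicts its coupling into the target channel, and the prediction is subtracted. This is a generic sketch in the spirit of the paragraph, not the thesis code; the coupling model, tap count, and step size are arbitrary.

```python
# LMS adaptive feedforward cancellation sketch.
import random

def lms_cancel(witness, target, taps=8, mu=0.01):
    """Adapt FIR weights online; return the residual (target - prediction)."""
    w = [0.0] * taps
    residual = []
    for n in range(len(target)):
        x = [witness[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wi * xi for wi, xi in zip(w, x))        # predicted noise
        e = target[n] - y                               # residual after subtraction
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]  # LMS weight update
        residual.append(e)
    return residual

random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(5000)]
# Assume the target channel sees a delayed, scaled copy of the witness noise.
coupled = [0.8 * noise[n - 2] if n >= 2 else 0.0 for n in range(5000)]
res = lms_cancel(noise, coupled)
# After convergence the residual power is far below the coupled noise power.
```

The same structure with frozen weights is the "static" feedforward case; letting the weights keep adapting tracks couplings that drift over time.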

Commissioning work is still ongoing at the sites. The first science run of Advanced LIGO is planned for September 2015 and will last 3-4 months. This run will be followed by a set of small instrument upgrades that will be installed on a time scale of a few months. The second science run will start in spring 2016 and last for about 6 months. Since the current sensitivity of Advanced LIGO is already more than a factor of 3 higher than that of the initial detectors, and keeps improving on a monthly basis, the upcoming science runs have a good chance of making the first direct detection of gravitational waves.

Relevance: 100.00%

Abstract:

This thesis is an investigation into the nature of data analysis and computer software systems which support this activity.

The first chapter develops the notion of data analysis as an experimental science which has two major components: data-gathering and theory-building. The basic role of language in determining the meaningfulness of theory is stressed, and the informativeness of a language and data base pair is studied. The static and dynamic aspects of data analysis are then considered from this conceptual vantage point. The second chapter surveys the available types of computer systems which may be useful for data analysis. Particular attention is paid to the questions raised in the first chapter about the language restrictions imposed by the computer system and its dynamic properties.

The third chapter discusses the REL data analysis system, which was designed to satisfy the needs of the data analyzer in an operational relational data system. The major limitation on the use of such systems is the amount of access to data stored on a relatively slow secondary memory. This problem of the paging of data is investigated and two classes of data structure representations are found, each of which has desirable paging characteristics for certain types of queries. One representation is used by most of the generalized data base management systems in existence today, but the other is clearly preferred in the data analysis environment, as conceptualized in Chapter I.

This data representation has strong implications for a fundamental process of data analysis -- the quantification of variables. Since quantification is one of the few means of summarizing and abstracting, data analysis systems are under strong pressure to facilitate the process. Two implementations of quantification are studied: one analogous to the form of the lower predicate calculus and another more closely attuned to the data representation. A comparison of these indicates that the use of the "label class" method results in an orders-of-magnitude improvement over the lower predicate calculus technique.
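The contrast drawn above can be sketched in modern terms. The data, names, and framing below are illustrative assumptions, not taken from the REL system: quantifying "how many records have value v" either by evaluating the predicate against every record, or via a precomputed label-class map from each value to its member set.

```python
# Two quantification strategies over the same records.

records = ["red", "blue", "red", "green", "red", "blue"]

# Predicate-calculus style: evaluate the predicate against every record,
# touching the entire data set (and, on secondary storage, every page).
def count_by_scan(value):
    return sum(1 for r in records if r == value)

# Label-class style: group records by value once, then answer each
# quantification query by a single lookup into the class map.
label_classes = {}
for i, r in enumerate(records):
    label_classes.setdefault(r, []).append(i)

def count_by_label_class(value):
    return len(label_classes.get(value, []))
```

Both strategies give the same answer, but the label-class form touches only the relevant class rather than the whole data set, which is where the paging advantage described above comes from.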