9 results for Fault tolerant computing

in CaltechTHESIS


Relevance: 90.00%

Abstract:

Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security.

At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level.

In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations.

In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing-biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate greater error suppression and a reduction in the physical resources required for error correction.
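As a minimal illustration of why such a bias matters (this sketch is not from the thesis; the code, noise model, and parameters are illustrative assumptions), consider a three-qubit phase-flip repetition code under biased Pauli noise with p_z much larger than p_x. The code corrects any single dephasing error but converts a single bit-flip into a logical error, so it pays off exactly when the bias is strong:

```python
# Minimal Monte Carlo sketch (illustrative, not from the thesis): a three-qubit
# phase-flip repetition code under biased Pauli noise with p_z >> p_x.
import random

def logical_failure(p_z, p_x, trials=200_000, rng=random.Random(0)):
    """Estimate the logical error rate of the 3-qubit phase-flip code.

    The code corrects any single Z (dephasing) error, so two or more Z errors
    cause a logical failure.  A single physical X error acts as a logical
    operator for this code, so an odd number of X errors is also a failure.
    """
    failures = 0
    for _ in range(trials):
        z_errors = sum(rng.random() < p_z for _ in range(3))
        x_errors = sum(rng.random() < p_x for _ in range(3))
        if z_errors >= 2 or x_errors % 2 == 1:
            failures += 1
    return failures / trials

if __name__ == "__main__":
    p_z, p_x = 1e-2, 1e-4          # strongly dephasing-biased noise (assumed rates)
    unencoded = p_z + p_x           # error rate of a single bare qubit
    print(f"bare qubit : {unencoded:.2e}")
    print(f"encoded    : {logical_failure(p_z, p_x):.2e}")
```

With the assumed rates, the encoded logical error rate (roughly 3p_z^2 + 3p_x) falls well below the bare-qubit rate p_z + p_x, whereas for unbiased noise of the same total strength the same code would perform worse than no encoding at all.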

In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for the faulty gates and a second rate for errors in the distilled states, which decreases as the states are distilled to better quality. The interplay of these different rates sets limits on the achievable distillation and on how quickly states converge to that limit.
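As a concrete and commonly quoted example of such a hierarchy, offered here only for illustration and not necessarily one of the protocols analyzed in the thesis, a single round of 15-to-1 magic-state distillation with input error rate p_in and Clifford error rate ε_C is often estimated as

```latex
p_{\mathrm{out}} \;\approx\; 35\,p_{\mathrm{in}}^{3} \;+\; O(\varepsilon_C).
```

Iterating the round suppresses the first term rapidly, but the output error rate saturates at a floor of order ε_C set by the faulty Cliffords, which is one way that limits on the achievable distillation and on the rate of convergence can arise.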

Relevance: 80.00%

Abstract:

This thesis addresses whether it is possible to build a robust memory device for quantum information. Many schemes for fault-tolerant quantum information processing have been developed so far, one of which, called topological quantum computation, makes use of degrees of freedom that are inherently insensitive to local errors. However, this scheme is not so reliable against thermal errors. Other fault-tolerant schemes achieve better reliability through active error correction, but incur a substantial overhead cost. Thus, it is of practical importance and theoretical interest to design and assess fault-tolerant schemes that work well at finite temperature without active error correction.

In this thesis, a three-dimensional gapped lattice spin model is found which demonstrates for the first time that a reliable quantum memory at finite temperature is possible, at least to some extent. When quantum information is encoded into a highly entangled ground state of this model and subjected to thermal errors, the errors remain easily correctable for a long time without any active intervention, because a macroscopic energy barrier keeps the errors well localized. As a result, stored quantum information can be retrieved faithfully for a memory time which grows exponentially with the square of the inverse temperature. In contrast, for previously known types of topological quantum storage in three or fewer spatial dimensions the memory time scales exponentially with the inverse temperature, rather than its square.
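Writing β = 1/T for the inverse temperature, the contrast described above is

```latex
\tau_{\mathrm{mem}} \;\sim\; e^{\,c\,\beta^{2}}
\qquad\text{versus}\qquad
\tau_{\mathrm{mem}} \;\sim\; e^{\,c'\,\beta},
```

with the quadratic exponent for the model introduced here and the linear one for previously known topological memories in three or fewer spatial dimensions (c and c' are positive constants).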

This spin model exhibits a previously unexpected topological quantum order, in which ground states are locally indistinguishable, pointlike excitations are immobile, and the immobility is not affected by small perturbations of the Hamiltonian. The degeneracy of the ground state, though also insensitive to perturbations, is a complicated number-theoretic function of the system size, and the system bifurcates into multiple noninteracting copies of itself under real-space renormalization group transformations. The degeneracy, the excitations, and the renormalization group flow can be analyzed using a framework that exploits the spin model's symmetry and some associated free resolutions of modules over polynomial algebras.

Relevance: 20.00%

Abstract:

The scalability of CMOS technology has driven computation into a diverse range of applications across the power-consumption, performance, and size spectra. Communication is a necessary adjunct to computation, and whether this is to push data from node to node in a high-performance computing cluster or from the receiver of a wireless link to a neural stimulator in a biomedical implant, interconnect can take up a significant portion of the overall system power budget. Although a single interconnect methodology cannot address such a broad range of systems efficiently, there are a number of key design concepts that enable good interconnect design in the age of highly scaled CMOS: an emphasis on highly digital approaches to solving ‘analog’ problems, hardware sharing between links as well as between different functions (such as equalization and synchronization) in the same link, and adaptive hardware that changes its operating parameters to mitigate not only variation in the fabrication of the link, but also link conditions that change over time. These concepts are demonstrated through two design examples at the extremes of the power and performance spectra.

A novel all-digital clock and data recovery technique for high-performance, high density interconnect has been developed. Two independently adjustable clock phases are generated from a delay line calibrated to 2 UI. One clock phase is placed in the middle of the eye to recover the data, while the other is swept across the delay line. The samples produced by the two clocks are compared to generate eye information, which is used to determine the best phase for data recovery. The functions of the two clocks are swapped after the data phase is updated; this ping-pong action allows an infinite delay range without the use of a PLL or DLL. The scheme's generalized sampling and retiming architecture is used in a sharing technique that saves power and area in high-density interconnect. The eye information generated is also useful for tuning an adaptive equalizer, circumventing the need for dedicated adaptation hardware.
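The ping-pong operation described above can be summarized in a short behavioral model; everything below (tap count, sweep depth, the eye-center rule, and all names) is an illustrative assumption rather than a detail taken from the thesis:

```python
# Behavioral sketch of the ping-pong CDR idea: one clock phase recovers data
# while the other sweeps a 2-UI delay line to build an eye profile, then the
# two clocks swap roles after each phase update.
N_TAPS = 32          # delay-line taps covering 2 UI (assumed)
SWEEPS = 8           # scan passes accumulated before a phase update (assumed)

class PingPongCDR:
    def __init__(self):
        self.data_phase = N_TAPS // 4       # initial guess for the data clock
        self.scan_phase = 0
        self.eye = [0] * N_TAPS
        self.samples = 0

    def step(self, sample):
        """Process one scan step; sample(tap) is the comparator output at that tap."""
        bit = sample(self.data_phase)                 # recovered data bit
        self.eye[self.scan_phase] += (sample(self.scan_phase) == bit)
        self.scan_phase = (self.scan_phase + 1) % N_TAPS
        self.samples += 1
        if self.samples == SWEEPS * N_TAPS:           # full sweep finished
            self._update()
        return bit

    def _update(self):
        # Pick the middle of the widest run of taps that agreed on every sweep
        # (wrap-around ignored for brevity), then swap the two clocks' roles.
        runs, start = [], None
        for i, hits in enumerate(self.eye + [0]):     # sentinel closes last run
            if hits == SWEEPS and start is None:
                start = i
            elif hits < SWEEPS and start is not None:
                runs.append((i - start, (start + i - 1) // 2))
                start = None
        if runs:
            best = max(runs)[1]
            self.data_phase, self.scan_phase = best, self.data_phase
        self.eye, self.samples = [0] * N_TAPS, 0
```

The point captured here is that the phase update is decided entirely from comparator outputs, and the role swap lets the data phase walk without bound, which is why no PLL or DLL is needed.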

On the other side of the performance/power spectrum, a capacitive proximity interconnect has been developed to support 3D integration of biomedical implants. In order to integrate more functionality while staying within size limits, implant electronics can be embedded onto a foldable parylene (‘origami’) substrate. Many of the ICs in an origami implant will be placed face-to-face with each other, so wireless proximity interconnect can be used to increase communication density while decreasing implant size, as well as to facilitate a modular approach to implant design, where pre-fabricated parylene-and-IC modules are assembled together on demand to make custom implants. Such an interconnect needs to be able to sense and adapt to changes in alignment. The proposed array uses a TDC-like structure to realize both communication and alignment sensing within the same set of plates, increasing communication density and eliminating the need to infer link quality from a separate alignment block. In order to distinguish the communication plates from the nearby ground plane, a stimulus is applied to the transmitter plate, which is rectified at the receiver to bias a delay generation block. This delay is in turn converted into a digital word using a TDC, providing alignment information.
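A rough behavioral model of the alignment-sensing path described above is sketched below; every constant and mapping is an illustrative assumption, not a measured value from the thesis:

```python
# Rough behavioral model of the capacitive alignment-sensing path:
# misalignment -> coupling capacitance -> rectified bias -> delay -> TDC code.

def coupling_capacitance(offset_um, c0_ff=50.0, plate_um=100.0):
    """Plate capacitance falls off as the plates slide out of alignment."""
    overlap = max(0.0, 1.0 - abs(offset_um) / plate_um)
    return c0_ff * overlap

def rectified_bias(c_ff, v_stim=1.0, c_ref_ff=50.0):
    """Stimulus on the TX plate, rectified at RX, sets a bias voltage."""
    return v_stim * c_ff / (c_ff + c_ref_ff)

def delay_ps(v_bias, t0_ps=500.0):
    """Delay generator: more bias -> shorter delay (crude model)."""
    return t0_ps / max(v_bias, 1e-3)

def tdc_code(t_ps, lsb_ps=40.0, bits=6):
    """Time-to-digital converter: quantize the delay into a digital word."""
    return min(int(t_ps / lsb_ps), 2**bits - 1)

if __name__ == "__main__":
    for offset in (0, 25, 50, 75):
        code = tdc_code(delay_ps(rectified_bias(coupling_capacitance(offset))))
        print(f"misalignment {offset:3d} um -> alignment code {code}")
```

In this toy model, better alignment gives a larger coupling capacitance, a larger rectified bias, a shorter delay, and hence a smaller TDC code, so the digital word serves directly as an alignment metric.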

Relevance: 20.00%

Abstract:

Technology scaling has enabled drastic growth in the computational and storage capacity of integrated circuits (ICs). This constant growth drives an increasing demand for high-bandwidth communication between and within ICs. In this dissertation we focus on low-power solutions that address this demand. We divide communication links into three subcategories depending on the communication distance; each category has a different set of challenges and requirements and is affected by CMOS technology scaling in a different manner. We start with short-range chip-to-chip links for board-level communication, then move to board-to-board links, which demand a longer communication range, and finally discuss on-chip links with communication ranges of a few millimeters.

Electrical signaling is a natural choice for chip-to-chip communication due to efficient integration and low cost. IO data rates have increased to the point where electrical signaling is now limited by the channel bandwidth. In order to achieve multi-Gb/s data rates, complex designs that equalize the channel are necessary. In addition, a high level of parallelism is central to sustaining bandwidth growth. Decision feedback equalization (DFE) is one of the most commonly employed techniques to overcome the limited bandwidth problem of the electrical channels. A linear and low-power summer is the central block of a DFE. Conventional approaches employ current-mode techniques to implement the summer, which results in high power consumption. In order to achieve low-power operation, we propose performing the summation in the charge domain. This approach enables a low-power and compact realization of the DFE as well as crosstalk cancellation. A prototype receiver was fabricated in 45nm SOI CMOS to validate the functionality of the proposed technique and was tested over channels with different levels of loss and coupling. Measurement results show that the receiver can equalize channels with up to 21dB of loss while consuming about 7.5mW from a 1.2V supply. We also introduce a compact, low-power transmitter employing passive equalization. The efficacy of the proposed technique is demonstrated through implementation of a prototype in 65nm CMOS. The design achieves up to 20Gb/s data rate while consuming less than 10mW.
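Whatever the circuit realization (current-mode or the charge-domain summer proposed here), the signal-processing operation a DFE performs can be sketched as follows; the channel and tap values are invented for illustration:

```python
# Minimal decision-feedback equalizer model (illustrative; the channel and
# tap weights are made up, and no circuit-level behavior is modeled).
def dfe(received, taps):
    """Subtract post-cursor ISI estimated from past decisions, then slice."""
    decisions = []
    for y in received:
        # Summer: received sample minus weighted feedback of past decisions.
        fb = sum(c * d for c, d in zip(taps, reversed(decisions[-len(taps):])))
        decisions.append(1 if (y - fb) > 0 else -1)
    return decisions

if __name__ == "__main__":
    bits = [1, -1, -1, 1, 1, -1, 1, -1]
    h = [1.0, 0.4, 0.2]                      # main cursor plus two post-cursors
    rx = [sum(h[k] * (bits[n - k] if n - k >= 0 else 0) for k in range(len(h)))
          for n in range(len(bits))]
    print(dfe(rx, taps=[0.4, 0.2]) == bits)  # True: transmitted bits recovered
```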

An alternative to electrical signaling is to employ optical signaling for chip-to-chip interconnections, which offers low channel loss and cross-talk while providing high communication bandwidth. In this work we demonstrate the possibility of building compact and low-power optical receivers. A novel RC front-end is proposed that combines dynamic offset modulation and double-sampling techniques to eliminate the need for a short time constant at the input of the receiver. Unlike conventional designs, this receiver does not require a high-gain stage that runs at the data rate, making it suitable for low-power implementations. In addition, it allows time-division multiplexing to support very high data rates. A prototype was implemented in 65nm CMOS and achieved up to 24Gb/s with less than 0.4pJ/b power efficiency per channel. As the proposed design mainly employs digital blocks, it benefits greatly from technology scaling in terms of power and area saving.
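One way to model the double-sampling idea, offered purely as an assumption-laden sketch rather than the thesis circuit, is to treat the front-end as a slow RC node and let a data-dependent offset cancel the dependence of the sample-to-sample difference on where the node currently sits:

```python
# Toy model (all constants assumed) of double sampling with dynamic offset
# modulation for an RC front-end whose time constant greatly exceeds a bit period.
import math, random

RC_RATIO = 0.1                      # bit period / RC time constant (assumed)
A = 1 - math.exp(-RC_RATIO)         # fractional settling per bit
V_HI, V_LO = 1.0, 0.0               # asymptotic levels for 1s and 0s
V_MID = 0.5 * (V_HI + V_LO)

def front_end(bits, noise=0.002, rng=random.Random(1)):
    """Integrating front-end: the node drifts toward V_HI or V_LO each bit."""
    v, out = V_MID, []
    for b in bits:
        v += A * ((V_HI if b else V_LO) - v) + rng.gauss(0, noise)
        out.append(v)
    return out

def recover(samples):
    """Double sampling: decide from consecutive-sample differences, with a
    dynamic offset that cancels the dependence on the node's current level."""
    bits, prev = [], V_MID
    for v in samples:
        offset = A * (V_MID - prev)          # dynamic offset modulation
        bits.append(1 if (v - prev) - offset > 0 else 0)
        prev = v
    return bits

if __name__ == "__main__":
    tx = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0]
    print(recover(front_end(tx)) == tx)      # True: long runs decided correctly
```

The sketch shows why no high-gain, full-rate amplification stage is required in principle: the decision reduces to the sign of a small, offset-corrected difference between two stored samples.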

As the technology scales, the number of transistors on the chip grows. This necessitates a corresponding increase in the bandwidth of the on-chip wires. In this dissertation, we take a close look at wire scaling and investigate its effect on wire performance metrics. We explore a novel on-chip communication link based on a double-sampling architecture and a dynamic offset modulation technique that enables low power consumption and high data rates while achieving high bandwidth density in 28nm CMOS technology. The functionality of the link is demonstrated using minimum-pitch on-chip wires of different lengths. Measurement results show that the link achieves up to 20Gb/s of data rate (12.5Gb/s/μm) with better than 136fJ/b of power efficiency.

Relevance: 20.00%

Abstract:

Long paleoseismic histories are necessary for understanding the full range of behavior of faults, as the most destructive events often have recurrence intervals longer than local recorded history. The Sunda megathrust, the interface along which the Australian plate subducts beneath Southeast Asia, provides an ideal natural laboratory for determining a detailed paleoseismic history over many seismic cycles. The outer-arc islands above the seismogenic portion of the megathrust cyclically rise and subside in response to processes on the underlying megathrust, providing uncommonly good illumination of megathrust behavior. Furthermore, the growth histories of coral microatolls, which record tectonic uplift and subsidence via relative sea level, can be used to investigate the detailed coseismic and interseismic deformation patterns. One particularly interesting area is the Mentawai segment of the megathrust, which has been shown to characteristically fail in a series of ruptures over decades, rather than a single end-to-end rupture. This behavior has been termed a seismic “supercycle.” Prior to the current rupture sequence, which began in 2007, the segment previously ruptured during the 14th century, the late 16th to late 17th century, and most recently during historical earthquakes in 1797 and 1833. In this study, we examine each of these previous supercycles in turn.

First, we expand upon previous analysis of the 1797–1833 rupture sequence with a comprehensive review of previously published coral microatoll data and the addition of a significant amount of new data. We present detailed maps of coseismic uplift during the two great earthquakes and of interseismic deformation during the periods 1755–1833 and 1950–1997 and models of the corresponding slip and coupling on the underlying megathrust. We derive magnitudes of Mw 8.7–9.0 for the two historical earthquakes, and determine that the 1797 earthquake fundamentally changed the state of coupling on the fault for decades afterward. We conclude that while major earthquakes generally do not involve rupture of the entire Mentawai segment, they undoubtedly influence the progression of subsequent ruptures, even beyond their own rupture area. This concept is of vital importance for monitoring and forecasting the progression of the modern rupture sequence.
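For reference, magnitudes of this kind are conventionally computed from the seismic moment of a slip model via the standard relations (the rigidity and slip values actually used in the thesis are not reproduced here):

```latex
M_0 \;=\; \mu \int_A u \,\mathrm{d}A \;\approx\; \mu\,\bar{u}\,A,
\qquad
M_w \;=\; \tfrac{2}{3}\left(\log_{10} M_0 - 9.1\right), \quad M_0~\text{in N\,m},
```

where μ is the shear modulus, ū the average slip, and A the rupture area.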

Turning our attention to the 14th century, we present evidence of a shallow slip event in approximately A.D. 1314, which preceded the “conventional” megathrust rupture sequence. We calculate a suite of slip models, slightly deeper and/or larger than the 2010 Pagai Islands earthquake, that are consistent with the large amount of subsidence recorded at our study site. Sea-level records from older coral microatolls suggest that these events occur at least once every millennium, but likely far less frequently than their great downdip neighbors. The revelation that shallow slip events are important contributors to the seismic cycle of the Mentawai segment further complicates our understanding of this subduction megathrust and our assessment of the region’s exposure to seismic and tsunami hazards.

Finally, we present an outline of the complex intervening rupture sequence that took place in the 16th and 17th centuries, which involved at least five distinct uplift events. We conclude that each of the supercycles had unique features, and all of the types of fault behavior we observe are consistent with highly heterogeneous frictional properties of the megathrust beneath the south-central Mentawai Islands. We conclude that the heterogeneous distribution of asperities produces terminations and overlap zones between fault ruptures, resulting in the seismic “supercycle” phenomenon.

Relevance: 20.00%

Abstract:

This thesis describes a compositional framework for developing situation awareness applications: applications that provide ongoing information about a user's changing environment. The thesis describes how the framework is used to develop a situation awareness application for earthquakes. The applications are implemented as Cloud computing services connected to sensors and actuators. The architecture and design of the Cloud services are described, and measurements of performance metrics are provided. The thesis includes results of experiments on earthquake monitoring conducted over a year. The applications developed with the framework are (1) the Community Seismic Network (CSN), which uses relatively low-cost sensors deployed by members of the community, and (2) the Situation Awareness Framework (SAF), which integrates data from multiple sources, including the CSN; the California Integrated Seismic Network (CISN), a network of high-quality seismometers deployed carefully by professionals in the CISN organization and spread across Southern California; and prototypes of multi-sensor platforms that include carbon monoxide, methane, dust, and radiation sensors.

Relevance: 20.00%

Abstract:

Faults can slip either aseismically or through episodic seismic ruptures, but we still do not understand the factors that determine the partitioning between these two modes of slip. This challenge can now be addressed thanks to the dense geodetic and seismological networks that have been deployed in various areas of active tectonics. The data from such networks, together with modern remote sensing techniques, allow the spatial and temporal variability of the slip mode to be documented. This is the approach taken in this study, which focuses on the Longitudinal Valley Fault (LVF) in Eastern Taiwan. This fault is particularly appropriate since its very fast slip rate (about 5 cm/yr) is accommodated by both seismic and aseismic slip. Deformation of anthropogenic features shows that aseismic creep accounts for a significant fraction of fault slip near the surface, but this fault has also released energy seismically, producing five M_w > 6.8 earthquakes in 1951 and 2003. Moreover, owing to the thrust component of slip, the fault zone is exhumed, which allows investigation of its deformation mechanisms.

In order to put constraints on the factors that control the mode of slip, we apply a multidisciplinary approach that combines modeling of geodetic observations, structural analysis, and numerical simulation of the "seismic cycle". Analyzing a dense set of geodetic and seismological data across the Longitudinal Valley, including campaign-mode GPS, continuous GPS (cGPS), leveling, accelerometric, and InSAR data, we document the partitioning between seismic and aseismic slip on the fault. For the time period 1992 to 2011, we find that about 80-90% of slip on the LVF in the 0-26 km seismogenic depth range is actually aseismic. The clay-rich Lichi Mélange is identified as the key factor promoting creep at shallow depth. Microstructural investigations show that deformation within the fault zone must have resulted from a combination of frictional sliding at grain boundaries, cataclasis, and pressure solution creep.

Numerical modeling of earthquake sequences has been performed to investigate the possibility of reproducing the results from the kinematic inversion of geodetic and seismological data on the LVF. We first examine the different modeling strategies that have been developed to explore the role and relative importance of different factors in the manner in which slip accumulates on faults. We compare the results of quasi-dynamic and fully dynamic simulations, and we conclude that ignoring the transient, wave-mediated stress transfers would be inappropriate. We therefore carry out fully dynamic simulations and succeed in qualitatively reproducing the wide range of observations for the southern segment of the LVF. We conclude that the spatio-temporal evolution of fault slip on the Longitudinal Valley Fault over 1997-2011 is consistent, to first order, with predictions from a simple model in which a velocity-weakening patch is embedded in a velocity-strengthening area.
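The velocity-weakening/velocity-strengthening terminology refers to the standard rate-and-state friction framework commonly used in such earthquake-sequence simulations; the canonical form with the aging law for the state variable, quoted here for context and not necessarily the exact formulation or parameters used in the thesis, is

```latex
\tau \;=\; \sigma\left[\mu_0 + a\,\ln\frac{V}{V_0} + b\,\ln\frac{V_0\,\theta}{D_c}\right],
\qquad
\frac{\mathrm{d}\theta}{\mathrm{d}t} \;=\; 1 - \frac{V\theta}{D_c},
```

where steady-state friction is velocity-weakening (allowing stick-slip, i.e., seismic rupture) when a - b < 0 and velocity-strengthening (favoring stable creep) when a - b > 0.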

Relevance: 20.00%

Abstract:

The 0.2% experimental accuracy of the 1968 Beers and Hughes measurement of the annihilation lifetime of ortho-positronium motivates the attempt to compute the first-order quantum electrodynamic corrections to this lifetime. The theoretical problems arising in this computation are studied here in detail, up to the point of preparing the necessary computer programs and using them to carry out some of the less demanding steps, although the computation has not yet been completed. Analytic evaluation of the contributing Feynman diagrams is superior to numerical evaluation, and for this problem it can be carried out with the aid of the Reduce algebra manipulation computer program.

The relation of the positronium decay rate to the electron-positron annihilation-in-flight amplitude is derived in detail, and it is shown that at the annihilation-in-flight threshold, Coulomb divergences appear while infrared divergences vanish. The threshold Coulomb divergences in the amplitude cancel against like divergences in the modulating continuum wave function.

Using the lowest order diagrams of electron-positron annihilation into three photons as a test case, various pitfalls of computer algebraic manipulation are discussed along with ways of avoiding them. The computer manipulation of artificial polynomial expressions is preferable to the direct treatment of rational expressions, even though redundant variables may have to be introduced.
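The preference for polynomial forms can be illustrated with a small computer-algebra example, given here in Python/SymPy rather than Reduce and purely as an illustration: a redundant variable standing for a recurring denominator keeps every intermediate expression polynomial.

```python
# Small illustration (in SymPy rather than Reduce) of trading a rational
# expression for a polynomial one by introducing a redundant variable.
import sympy as sp

p, m, q = sp.symbols("p m q")   # e.g. a momentum invariant, a mass, a coefficient
D = sp.symbols("D")             # redundant variable standing for 1/(p**2 - m**2)

rational = (p**2 + q) / (p**2 - m**2) + q / (p**2 - m**2)**2

# Work with the polynomial surrogate instead: substitute D for the propagator
# denominator, then expand and collect; every operation stays polynomial.
polynomial = sp.expand((p**2 + q) * D + q * D**2)
print(sp.collect(polynomial, D))

# The rational form is recovered only at the very end, if needed at all.
print(sp.simplify(polynomial.subs(D, 1 / (p**2 - m**2)) - rational))  # prints 0
```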

Special properties of the contributing Feynman diagrams are discussed, including the need to restore gauge invariance to the sum of the virtual photon-photon scattering box diagrams by means of a finite subtraction.

A systematic approach to the Feynman-Brown method of decomposition of single-loop diagram integrals with spin-related tensor numerators is developed in detail. This approach allows the Feynman-Brown method to be straightforwardly programmed in the Reduce algebra manipulation language.

The fundamental integrals needed in the wake of the application of the Feynman-Brown decomposition are exhibited, and the methods used to evaluate them, primarily dispersion techniques, are briefly discussed.

Finally, it is pointed out that while the techniques discussed have permitted the computation of a fair number of the simpler integrals and diagrams contributing to the first order correction of the ortho-positronium annihilation rate, further progress with the more complicated diagrams and with the evaluation of traces is heavily contingent on obtaining access to adequate computer time and core capacity.

Relevance: 20.00%

Abstract:

Thrust fault earthquakes are investigated in the laboratory by generating dynamic shear ruptures along pre-existing frictional faults in rectangular plates. A considerable body of evidence suggests that dip-slip earthquakes exhibit enhanced ground motions in the acute hanging wall wedge as an outcome of broken symmetry between the hanging and foot wall plates with respect to the earth surface. To understand the physical behavior of thrust fault earthquakes, particularly ground motions near the earth surface, ruptures are nucleated in analog laboratory experiments and guided up-dip towards the simulated earth surface. The transient slip event and emitted radiation mimic a natural thrust earthquake.

High-speed photography and laser velocimeters capture the rupture evolution, providing a full-field view of photo-elastic fringe contours proportional to maximum shearing stresses as well as continuous ground motion velocity records at discrete points on the specimen. Earth surface-normal measurements validate selective enhancement of hanging wall ground motions for both sub-Rayleigh and super-shear rupture speeds. The earth surface breaks upon rupture-tip arrival at the fault trace, generating prominent Rayleigh surface waves. A rupture wave is sensed in the hanging wall but is absent from the foot wall plate, a direct consequence of the proximity of the fault to the seismometer. Signatures in earth surface-normal records attenuate with distance from the fault trace. Super-shear earthquakes feature greater amplitudes in their ground shaking profiles, as expected from the increased tectonic pressures required to induce super-shear transition.

Paired stations measure fault-parallel and fault-normal ground motions at various depths, which yield slip and opening rates through direct subtraction of like components. Peak fault slip and opening rates associated with the rupture tip increase with proximity to the fault trace, a result of selective ground motion amplification in the hanging wall. Fault opening rates indicate that the hanging and foot walls detach near the earth surface, a phenomenon promoted by a decrease in magnitude of the far-field tectonic loads. Subsequent shutting of the fault sends an opening pulse back down-dip. In the case of a sub-Rayleigh earthquake, feedback from the reflected S wave re-ruptures the locked fault at super-shear speeds, providing another mechanism of super-shear transition.
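The "direct subtraction of like components" mentioned above amounts to differencing the velocity records of each station pair across the fault; in generic notation (not the thesis's own),

```latex
\dot{s}(t) \;=\; v^{\mathrm{HW}}_{\parallel}(t) - v^{\mathrm{FW}}_{\parallel}(t),
\qquad
\dot{o}(t) \;=\; v^{\mathrm{HW}}_{\perp}(t) - v^{\mathrm{FW}}_{\perp}(t),
```

where HW and FW label the hanging-wall and foot-wall stations of a pair, and the parallel and perpendicular subscripts denote the fault-parallel and fault-normal velocity components, giving the slip rate and the opening rate respectively.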