10 results for Keenan, Bernard
in CaltechTHESIS
Abstract:
Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security.
At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level.
In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations.
In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction.
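The benefit of asymmetry can be illustrated with a toy model: a phase-flip repetition code corrects the dominant dephasing errors by majority vote while spending nothing on the rare bit flips. The code and error rates below are a generic illustration under assumed parameters, not the tailored codes analyzed in the thesis.

```python
from math import comb

def logical_z_rate(pz, n):
    """Probability that a majority of the n qubits in a phase-flip
    repetition code dephase, causing a logical Z error."""
    t = n // 2
    return sum(comb(n, k) * pz**k * (1 - pz)**(n - k)
               for k in range(t + 1, n + 1))

# Strongly biased noise (assumed rates): dephasing dominates bit flips.
pz, px = 1e-2, 1e-5
pl_z = logical_z_rate(pz, 5)  # dephasing suppressed by majority vote
pl_x = 5 * px                 # bit flips accumulate uncorrected
```

Under this bias the five-qubit code drives the dominant dephasing rate well below the raw bit-flip contribution, which is why asymmetric protection can reduce physical resource requirements.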
In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates, and a second rate for errors in the distilled states which decreases as the states are distilled to better quality. The interplay of these different rates sets limits on the achievable distillation and how quickly states converge to that limit.
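The convergence behavior described above can be sketched with a toy error map: under ideal Cliffords a 15-to-1 protocol suppresses the input error rate cubically (p_out ≈ 35p³), while faulty Cliffords at fixed rate ε contribute an error floor. The additive-floor form and all parameter values below are illustrative assumptions, not the thesis's exact analysis.

```python
def distill_step(p, eps, a=35.0, b=1.0):
    """One round of 15-to-1-style distillation: cubic suppression of the
    input error rate p, plus a floor b*eps from faulty Clifford gates."""
    return a * p**3 + b * eps

def distill(p0, eps, rounds=10):
    """Iterate distillation, recording the error rate after each round."""
    p, history = p0, [p0]
    for _ in range(rounds):
        p = distill_step(p, eps)
        history.append(p)
    return history

# Starting from p0 = 5% with eps = 1e-6, the output error rate converges
# to a fixed point near eps rather than to zero.
hist = distill(0.05, 1e-6)
```

The fixed point p* ≈ b·ε illustrates how a fixed Clifford error rate limits the achievable distillation, while the cubic map governs how quickly the states approach that limit.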
Abstract:
This thesis introduces fundamental equations and numerical methods for manipulating surfaces in three dimensions via conformal transformations. Conformal transformations are valuable in applications because they naturally preserve the integrity of geometric data. To date, however, there has been no clearly stated and consistent theory of conformal transformations that can be used to develop general-purpose geometry processing algorithms: previous methods for computing conformal maps have been restricted to the flat two-dimensional plane, or other spaces of constant curvature. In contrast, our formulation can be used to produce---for the first time---general surface deformations that are perfectly conformal in the limit of refinement. It is for this reason that we commandeer the title Conformal Geometry Processing.
The main contribution of this thesis is analysis and discretization of a certain time-independent Dirac equation, which plays a central role in our theory. Given an immersed surface, we wish to construct new immersions that (i) induce a conformally equivalent metric and (ii) exhibit a prescribed change in extrinsic curvature. Curvature determines the potential in the Dirac equation; the solution of this equation determines the geometry of the new surface. We derive the precise conditions under which curvature is allowed to evolve, and develop efficient numerical algorithms for solving the Dirac equation on triangulated surfaces.
From a practical perspective, this theory has a variety of benefits: conformal maps are desirable in geometry processing because they do not exhibit shear, and therefore preserve textures as well as the quality of the mesh itself. Our discretization yields a sparse linear system that is simple to build and can be used to efficiently edit surfaces by manipulating curvature and boundary data, as demonstrated via several mesh processing applications. We also present a formulation of Willmore flow for triangulated surfaces that permits extraordinarily large time steps and apply this algorithm to surface fairing, geometric modeling, and construction of constant mean curvature (CMC) surfaces.
Abstract:
This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data.

Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data are processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded.

Filtering is intended to be kept to a minimum, and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data.
The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
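A minimal sketch of the per-harmonic "noise filter" idea described above, assuming the signal and noise spectra are known: each Fourier coefficient of the accelerogram is scaled by a Wiener-type gain SNR/(1 + SNR), and velocity is then obtained by integration in the frequency domain. The function name and the exact gain form are illustrative assumptions, not the thesis's formulation.

```python
import numpy as np

def wiener_correct(acc, dt, signal_psd, noise_psd):
    """Attenuate each harmonic of a noisy accelerogram by SNR/(1+SNR),
    then integrate once in the frequency domain to get velocity."""
    A = np.fft.rfft(acc)
    snr = signal_psd / np.maximum(noise_psd, 1e-30)
    gain = snr / (1.0 + snr)            # per-harmonic attenuation
    A_corr = A * gain
    # Integrate acceleration -> velocity by dividing by i*omega.
    freqs = np.fft.rfftfreq(len(acc), dt)
    omega = 2.0 * np.pi * freqs
    V = np.zeros_like(A_corr)
    V[1:] = A_corr[1:] / (1j * omega[1:])   # drop the DC term
    return np.fft.irfft(V, n=len(acc))
```

Because the gain tends to 1 wherever the signal dominates, a high-SNR harmonic passes essentially unchanged; the low-frequency drift problem noted above arises precisely where the SNR, and hence the gain, is poorly constrained.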
Abstract:
Close to equilibrium, a normal Bose or Fermi fluid can be described by an exact kinetic equation whose kernel is nonlocal in space and time. The general expression derived for the kernel is evaluated to second order in the interparticle potential. The result is a wavevector- and frequency-dependent generalization of the linear Uehling-Uhlenbeck kernel with the Born approximation cross section.
The theory is formulated in terms of second-quantized phase space operators whose equilibrium averages are the n-particle Wigner distribution functions. Convenient expressions for the commutators and anticommutators of the phase space operators are obtained. The two-particle equilibrium distribution function is analyzed in terms of momentum-dependent quantum generalizations of the classical pair distribution function h(k) and direct correlation function c(k). The kinetic equation is presented as the equation of motion of a two-particle correlation function, the phase space density-density anticommutator, and is derived by a formal closure of the quantum BBGKY hierarchy. An alternative derivation using a projection operator is also given. It is shown that the method used for approximating the kernel by a second order expansion preserves all the sum rules to the same order, and that the second-order kernel satisfies the appropriate positivity and symmetry conditions.
Abstract:
This investigation demonstrates an application of a flexible wall nozzle for testing in a supersonic wind tunnel. It is conservative to say that the versatility of this nozzle is such that it warrants the expenditure of time to carefully engineer a nozzle and incorporate it in the wind tunnel as a permanent part of the system. The gradients in the test section were kept within one percent of the calibrated Mach number; the gradients occurring over the bodies tested, however, were only ±0.2 percent in Mach number.
The conditions existing on a finite cone with a vertex angle of 75° were investigated by considering the pressure distribution on the cone and the shape of the shock wave. The pressure distribution on the surface of the 75° cone when based on upstream conditions does not show any discontinuities at the theoretical attachment Mach number.
Both the angle of the shock wave and the pressure distribution on the 75° cone are in very close agreement with the theoretical values given in the Kopal report (Ref. 3).
The locations of the intersections of the sonic line with the surface of the cone and with the shock wave are given for the cone. The blocking characteristics of the GALCIT supersonic wind tunnel were investigated with a series of 60° cones.
Abstract:
The problem of finding the depths of glaciers and the current methods are discussed briefly. Radar methods are suggested as a possible improvement for, or adjunct to, seismic and gravity survey methods. The feasibility of propagating electromagnetic waves in ice and the maximum range to be expected are then investigated theoretically with the aid of experimental data on the dielectric properties of ice. It is found that the maximum expected range is great enough to measure the depth of many glaciers at the lower radar frequencies if there is not too much liquid water present. Greater ranges can be attained by going to lower frequencies.
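The range argument above can be made concrete with a back-of-the-envelope calculation: the system's loss margin (in dB) is consumed by two-way absorption in the ice, which bounds the soundable depth. All numerical figures below are assumed for illustration only; they stand in for the dielectric data the thesis analyzes.

```python
# Back-of-the-envelope maximum sounding depth: the system's loss margin
# (transmit power vs. detection threshold, in dB) is exhausted by
# two-way absorption in the ice. All numbers are illustrative.
def max_depth_m(system_margin_db, one_way_atten_db_per_100m):
    """Depth at which two-way absorption alone uses up the margin."""
    return 100.0 * system_margin_db / (2.0 * one_way_atten_db_per_100m)

# Colder, drier ice (and lower radar frequencies) attenuates less,
# so the same equipment can sound a deeper glacier:
depth_dry = max_depth_m(160.0, 5.0)   # low-loss ice
depth_wet = max_depth_m(160.0, 20.0)  # wet, lossy ice
```

The fourfold change in attenuation rate translates directly into a fourfold change in maximum depth, which is why liquid water content and choice of frequency dominate the feasibility question.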
The results of two expeditions, in two different years, to the Seward Glacier in the Yukon Territory are given. Experiments were conducted on a small valley glacier whose depth was determined by seismic sounding. Many echoes were received, but their identification was uncertain. Using the best echoes, a profile was obtained each year, but the two profiles were not in exact agreement with each other. It could not be definitely established that echoes had been received from bedrock. Agreement with seismic methods for a considerable number of glaciers would have to be obtained before radar methods could be relied upon. The presence of liquid water in the ice is believed to be one of the greatest obstacles: besides increasing the attenuation and possibly reflecting energy, it makes it impossible to predict the velocity of propagation. The equipment used was far from adequate for such purposes, so many of the difficulties could be attributed to this. Partly because of this, and partly because there are glaciers with very little liquid water present, radar methods are believed to be worthy of further research for the exploration of glaciers.
Abstract:
A model for some of the many physical-chemical and biological processes in intermittent sand filtration of wastewaters is described and an expression for oxygen transfer is formulated.
The model assumes that aerobic bacterial activity within the sand or soil matrix is limited, mostly by oxygen deficiency, while the surface is ponded with wastewater. Atmospheric oxygen reenters the soil after infiltration ends. Aerobic activity is resumed, but the extent of penetration of oxygen is limited and some depths may remain permanently anaerobic. These assumptions lead to the conclusion that the percolate shows large variations in the concentrations of certain contaminants, with some portions showing little change in a specific contaminant. Analyses of soil moisture in field studies and of effluent from laboratory sand columns substantiated the model.
The oxygen content of the system at sufficiently long times after addition of wastes can be described by a quasi-steady-state diffusion equation including a term for an oxygen sink. Measurements of oxygen content during laboratory and field studies show that the oxygen profile changes only slightly up to two days after the quasi-steady state is attained.
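A minimal sketch of such a quasi-steady-state balance, assuming a constant (zeroth-order) oxygen sink S and diffusivity D: solving D·C″(z) = S with surface concentration C(0) = C₀ and both C and C′ vanishing at a depth L yields a parabolic profile with a finite aerobic penetration depth, below which the soil stays anaerobic. The parameterization is illustrative, not the thesis's fitted model.

```python
import math

# Quasi-steady oxygen balance with a constant uptake sink:
#   D * C''(z) = S,  C(0) = C0,  C(L) = C'(L) = 0.
# The solution is C(z) = C0 * (1 - z/L)^2 with L = sqrt(2*D*C0/S).
def penetration_depth(D, C0, S):
    """Depth at which both the oxygen concentration and flux vanish."""
    return math.sqrt(2.0 * D * C0 / S)

def oxygen_profile(z, D, C0, S):
    """Oxygen concentration at depth z under the quasi-steady state."""
    L = penetration_depth(D, C0, S)
    return C0 * (1.0 - z / L) ** 2 if z < L else 0.0
```

The existence of a finite L is the quantitative counterpart of the model's claim that some depths may remain permanently anaerobic.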
Results of these hypotheses and their experimental verification can be applied in the operation of existing facilities and in the interpretation of data from pilot-plant studies.
Abstract:
A general definition of interpreted formal language is presented. The notion "is a part of" is formally developed and models of the resulting part theory are used as universes of discourse of the formal languages. It is shown that certain Boolean algebras are models of part theory.
With this development, the structure imposed upon the universe of discourse by a formal language is characterized by a group of automorphisms of the model of part theory. If the model of part theory is thought of as a static world, the automorphisms become the changes which take place in the world. Using this formalism, we discuss a notion of abstraction and the concept of definability. A Galois connection between the groups characterizing formal languages and a language-like closure over the groups is determined.
It is shown that a set theory can be developed within models of part theory such that certain strong formal languages can be said to determine their own set theory. This development is such that for a given formal language whose universe of discourse is a model of part theory, a set theory can be imbedded as a submodel of part theory so that the formal language has parts which are sets as its discursive entities.
Abstract:
The equations of relativistic, perfect-fluid hydrodynamics are cast in Eulerian form using six scalar "velocity-potential" fields, each of which has an equation of evolution. These equations determine the motion of the fluid through the equation
Uν = μ⁻¹(φ,ν + αβ,ν + θS,ν).
Einstein's equations and the velocity-potential hydrodynamical equations follow from a variational principle whose action is
I = ∫ (R + 16πp)(−g)^(1/2) d⁴x,
where R is the scalar curvature of spacetime and p is the pressure of the fluid. These equations are also cast into Hamiltonian form, with Hamiltonian density −T⁰₀(−g⁰⁰)^(−1/2).
The second variation of the action is used as the Lagrangian governing the evolution of small perturbations of differentially rotating stellar models. In Newtonian gravity this leads to linear dynamical stability criteria already known. In general relativity it leads to a new sufficient condition for the stability of such models against arbitrary perturbations.
By introducing three scalar fields defined by
ρξ = ∇λ + ∇×(χî + ∇×(γî))
(where ξ is the vector displacement of the perturbed fluid element, ρ is the mass density, and î is an arbitrary vector), the Newtonian stability criteria are greatly simplified for the purpose of practical applications. The relativistic stability criterion is not yet in a form that permits practical calculations, but ways to place it in such a form are discussed.
Abstract:
Experimental investigations were made of the nature of weak superconductivity in a structure having well-defined, controllable characteristics and geometry. Controlled experiments were made possible by using a thin-film structure which was entirely metallic and consisted of a superconducting film with a localized section that was weak in the sense that its transition temperature was depressed relative to the rest of the film. The depression of transition temperature was brought about by underlaying the superconductor with a normal metal.
The DC and AC electrical characteristics of this structure were studied. It was found that this structure exhibited a non-zero, time-average supercurrent at finite voltages to at least 0.2 mV, and generated an oscillating electric potential at a frequency given by the Josephson relation. The DC V-I characteristic and the amplitude of the AC oscillation were found to be consistent with a two-fluid (normal current-supercurrent) model of weak superconductivity based on a thermodynamically irreversible process of repetitive phase-slip, and featuring a periodic time dependence in the amplitude of the superconducting order parameter.
The observed linewidth of the AC oscillation could be accounted for by incorporating Johnson noise in the two-fluid model.
Experimentally it was found that the behavior of a short (length on the order of the coherence distance) weak superconductor could be characterized by its critical current and normal-state resistance, and an empirical expression was obtained for the time dependence of the supercurrent and voltage.
It was found that the results could not be explained on the basis of the theory of the Josephson junction.
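For scale, the Josephson relation cited above, f = 2eV/h, places the oscillation near 100 GHz at the largest voltage (0.2 mV) where a time-average supercurrent was observed. This is a generic evaluation of the relation with standard physical constants, not the thesis's phase-slip model.

```python
# Evaluating the Josephson frequency relation f = 2eV/h.
H = 6.62607015e-34    # Planck constant, J*s
E = 1.602176634e-19   # elementary charge, C

def josephson_freq_hz(voltage_v):
    """Frequency of the AC supercurrent oscillation at DC voltage V."""
    return 2.0 * E * voltage_v / H

# At 0.2 mV the oscillation frequency is roughly 97 GHz, which is why
# the AC potential must be studied with microwave techniques.
f_at_0p2mV = josephson_freq_hz(0.2e-3)
```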