12 results for size-extensivity error

in CaltechTHESIS


Abstract:

This thesis consists of three separate studies of roles that black holes might play in our universe.

In the first part we formulate a statistical method for inferring the cosmological parameters of our universe from LIGO/VIRGO measurements of the gravitational waves produced by coalescing black-hole/neutron-star binaries. This method is based on the cosmological distance-redshift relation, with "luminosity distances" determined directly, and redshifts indirectly, from the gravitational waveforms. Using the current estimates of binary coalescence rates and projected "advanced" LIGO noise spectra, we conclude that by our method the Hubble constant should be measurable to within an error of a few percent. The errors for the mean density of the universe and the cosmological constant will depend strongly on the size of the universe, varying from about 10% for a "small" universe up to and beyond 100% for a "large" universe. We further study the effects of random gravitational lensing and find that it may strongly impair the determination of the cosmological constant.
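
As orientation, the distance-redshift relation at the core of the method can be sketched numerically. Below is a minimal example for a flat universe with fiducial parameter values of my own choosing (the thesis itself treats general cosmologies):

import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light in km/s

def luminosity_distance(z, H0=70.0, omega_m=0.3, omega_lambda=0.7):
    """Luminosity distance (in Mpc) for a flat FRW cosmology: the
    distance-redshift relation through which waveform-derived
    distances constrain H0 and the density parameters."""
    E = lambda zp: np.sqrt(omega_m * (1.0 + zp) ** 3 + omega_lambda)
    comoving, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return (1.0 + z) * (C_KM_S / H0) * comoving

print(luminosity_distance(0.2))  # ~ 979 Mpc for these fiducial values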

In the second part of this thesis we disprove a conjecture that black holes cannot form in an early, inflationary era of our universe, because of a quantum-field-theory induced instability of the black-hole horizon. This instability was supposed to arise from the difference in temperatures of any black-hole horizon and the inflationary cosmological horizon; it was thought that this temperature difference would make every quantum state that is regular at the cosmological horizon be singular at the black-hole horizon. We disprove this conjecture by explicitly constructing a quantum vacuum state that is everywhere regular for a massless scalar field. We further show that this quantum state has all the nice thermal properties that one has come to expect of "good" vacuum states, both at the black-hole horizon and at the cosmological horizon.
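
For orientation, both horizons radiate at temperatures set by their surface gravities (in units with G = c = ħ = k_B = 1),

$$ T_{\mathrm{BH}} = \frac{\kappa_{\mathrm{BH}}}{2\pi}, \qquad T_{\mathrm{C}} = \frac{\kappa_{\mathrm{C}}}{2\pi}, $$

and for a black hole inside an inflationary (de Sitter) horizon the two surface gravities generically differ; this is the temperature mismatch on which the conjectured instability was based.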

In the third part of the thesis we study the evolution and implications of a hypothetical primordial black hole that might have found its way into the center of the Sun or any other solar-type star. As a foundation for our analysis, we generalize the mixing-length theory of convection to an optically thick, spherically symmetric accretion flow (and find in passing that the radial stretching of the inflowing fluid elements leads to a modification of the standard Schwarzschild criterion for convection). When the accretion is that of solar matter onto the primordial hole, the rotation of the Sun causes centrifugal hangup of the inflow near the hole, resulting in an "accretion torus" which produces an enhanced outflow of heat. We find, however, that the turbulent viscosity, which accompanies the convective transport of this heat, extracts angular momentum from the inflowing gas, thereby buffering the torus into a lower luminosity than one might have expected. As a result, the solar surface will not be influenced noticeably by the torus's luminosity until at most three days before the Sun is finally devoured by the black hole. As a simple consequence, accretion onto a black hole inside the Sun cannot be an answer to the solar neutrino puzzle.
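
For reference, the standard Schwarzschild criterion that the analysis generalizes states that a stratified fluid is convectively unstable when the actual temperature gradient is steeper than the adiabatic one,

$$ \nabla \equiv \frac{d\ln T}{d\ln P} > \nabla_{\mathrm{ad}}; $$

the radial stretching of the inflowing fluid elements modifies this condition in the accretion problem treated here.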

Abstract:

This thesis addresses whether it is possible to build a robust memory device for quantum information. Many schemes for fault-tolerant quantum information processing have been developed so far, one of which, called topological quantum computation, makes use of degrees of freedom that are inherently insensitive to local errors. However, this scheme is not so reliable against thermal errors. Other fault-tolerant schemes achieve better reliability through active error correction, but incur a substantial overhead cost. Thus, it is of practical importance and theoretical interest to design and assess fault-tolerant schemes that work well at finite temperature without active error correction.

In this thesis, a three-dimensional gapped lattice spin model is found which demonstrates for the first time that a reliable quantum memory at finite temperature is possible, at least to some extent. When quantum information is encoded into a highly entangled ground state of this model and subjected to thermal errors, the errors remain easily correctable for a long time without any active intervention, because a macroscopic energy barrier keeps the errors well localized. As a result, stored quantum information can be retrieved faithfully for a memory time which grows exponentially with the square of the inverse temperature. In contrast, for previously known types of topological quantum storage in three or fewer spatial dimensions the memory time scales exponentially with the inverse temperature, rather than its square.
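
Writing β = 1/T for the inverse temperature, the comparison can be summarized (constants schematic) as

$$ t_{\mathrm{mem}} \sim e^{c\,\beta^{2}} \qquad \text{versus} \qquad t_{\mathrm{mem}} \sim e^{\tilde{c}\,\beta}, $$

for this model and for previously known three-dimensional topological storage, respectively.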

This spin model exhibits a previously unexpected topological quantum order, in which ground states are locally indistinguishable, pointlike excitations are immobile, and the immobility is not affected by small perturbations of the Hamiltonian. The degeneracy of the ground state, though also insensitive to perturbations, is a complicated number-theoretic function of the system size, and the system bifurcates into multiple noninteracting copies of itself under real-space renormalization group transformations. The degeneracy, the excitations, and the renormalization group flow can be analyzed using a framework that exploits the spin model's symmetry and some associated free resolutions of modules over polynomial algebras.

Abstract:

Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security.

At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level.
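
The threshold theorem makes this quantitative. In its standard concatenated-code form (a textbook statement, not a result of this thesis), if the physical error rate p is below a threshold p_th, then k levels of concatenation suppress the logical error rate as

$$ p^{(k)} \le p_{\mathrm{th}} \left( \frac{p}{p_{\mathrm{th}}} \right)^{2^{k}}, $$

which can be made arbitrarily small with only polylogarithmic overhead.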

In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations.

In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction.
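
Schematically (notation mine): writing the bias as η = p_Z/p_X ≫ 1, and letting d_Z and d_X denote the code's distances against dephasing (Z) and bit-flip (X) errors respectively, the leading-order failure probability behaves as

$$ p_{\mathrm{fail}} \sim p_Z^{\lceil d_Z/2 \rceil} + p_X^{\lceil d_X/2 \rceil}, $$

so for strongly biased noise it is efficient to choose an asymmetric code with d_Z > d_X rather than spend qubits protecting equally against both error types.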

In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates, and a second rate for errors in the distilled states which decreases as the states are distilled to better quality. The interplay of these different rates sets limits on the achievable distillation and how quickly states converge to that limit.
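
For scale, the standard 15-to-1 distillation protocol maps an input error rate ε to roughly 35ε³ per output state when the Cliffords are perfect; with faulty Cliffords at rate p, the output quality saturates at a floor set by p (a standard benchmark, stated here for orientation):

$$ \varepsilon_{\mathrm{out}} \approx 35\,\varepsilon^{3} + O(p). $$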

Abstract:

The motion of a single Brownian particle of arbitrary size through a dilute colloidal dispersion of neutrally buoyant bath spheres of another characteristic size in a Newtonian solvent is examined in two contexts. First, the particle in question, the probe particle, is subject to a constant applied external force drawing it through the suspension as a simple model for active and nonlinear microrheology. The strength of the applied external force, normalized by the restoring forces of Brownian motion, is the Péclet number, Pe. This dimensionless quantity describes how strongly the probe is upsetting the equilibrium distribution of the bath particles. The mean motion and fluctuations in the probe position are interpreted in terms of an effective viscosity of the suspension. These interpreted quantities are calculated to first order in the volume fraction of bath particles and are intimately tied to the spatial distribution, or microstructure, of bath particles relative to the probe.

For weak Pe, the disturbance to the equilibrium microstructure is dipolar in nature, with accumulation and depletion regions on the front and rear faces of the probe, respectively. With increasing applied force, the accumulation region compresses to form a thin boundary layer whose thickness scales with the inverse of Pe. The depletion region lengthens to form a trailing wake. The magnitude of the microstructural disturbance is found to grow with increasing bath particle size: small bath particles in the solvent resemble a continuum with an effective microviscosity given by Einstein's viscosity correction for a dilute dispersion of spheres. Large bath particles readily advect toward the minimum approach distance possible between the probe and bath particle, and rotation of the probe-bath pair as a doublet is the primary mechanism by which the probe is able to move past; this process slows the motion of the probe by a factor of the size ratio.

The intrinsic microviscosity is found to force-thin at low Péclet number, due to decreasing contributions from Brownian motion, and to force-thicken at high Péclet number, due to the increasing influence of the configuration-averaged reduction in the probe's hydrodynamic self-mobility. Nonmonotonicity at finite sizes is evident in the limiting high-Pe intrinsic microviscosity plateau as a function of the bath-to-probe particle size ratio. The intrinsic microviscosity grows with the size ratio for very small probes even at large-but-finite Péclet numbers. However, even a small repulsive interparticle potential, one that excludes lubrication interactions, can reduce it back to an order-one quantity. The results of this active microrheology study are compared to previous theoretical studies of falling-ball and towed-ball rheometry and of sedimentation and diffusion in polydisperse suspensions, and the singular limit of full hydrodynamic interactions is noted.
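
Up to the precise choice of length scale (notation mine), the Péclet number here is

$$ \mathrm{Pe} = \frac{F a}{k_B T}, $$

the applied force F times a characteristic particle size a, normalized by the thermal energy k_B T; the high-Pe boundary layer mentioned above then has thickness of order a/Pe.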

Second, the probe particle in question is no longer subject to a constant applied external force. Rather, the particle is considered to be a catalytically active motor, consuming bath reactant particles on its reactive face while passively colliding with reactant particles on its inert face. By creating an asymmetric distribution of reactant about its surface, the motor is able to diffusiophoretically propel itself with some mean velocity. The effects of the finite size of the solute on the leading-order diffusive microstructure of reactant about the motor are examined. Brownian and interparticle contributions to the motor velocity are computed for several interparticle interaction potential lengths and finite reactant-to-motor particle size ratios, with the dimensionless motor velocity increasing with decreasing motor size. A discussion of Brownian rotation frames the context in which these results could be applicable, and future directions are proposed which properly incorporate reactant advection at high motor velocities.
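
For orientation, in the standard thin-interaction-layer picture of phoretic propulsion (a textbook limit; the finite-size effects studied here correct it), the asymmetric reactant distribution drives a surface slip proportional to the tangential concentration gradient, and the force-free motor translates at the negative surface average of that slip:

$$ \mathbf{u}_s = \mu_{\mathrm{ph}} \nabla_{\!s} c, \qquad \mathbf{U} = -\langle \mathbf{u}_s \rangle, $$

with μ_ph a phoretic mobility set by the interparticle interaction potential.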

Abstract:

Curve samplers are sampling algorithms that proceed by viewing the domain as a vector space over a finite field, and randomly picking a low-degree curve in it as the sample. Curve samplers exhibit a nice property besides the sampling property: the restriction of low-degree polynomials over the domain to the sampled curve is still low-degree. This property is often used in combination with the sampling property and has found many applications, including PCP constructions, local decoding of codes, and algebraic PRG constructions.
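
Concretely, the sampling property is the standard one (notation mine): the sampler outputs a random curve C in F_q^m such that, for every test set A ⊆ F_q^m,

$$ \Pr_{C}\left[\; \left| \frac{|C \cap A|}{|C|} - \frac{|A|}{q^{m}} \right| > \epsilon \;\right] \le \delta, $$

where ε is the accuracy and δ the confidence error.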

The randomness complexity of curve samplers is a crucial parameter for their applications. It is known that (non-explicit) curve samplers using O(log N + log(1/δ)) random bits exist, where N is the domain size and δ is the confidence error. The question of explicitly constructing randomness-efficient curve samplers was first raised in [TU06], which obtained curve samplers with near-optimal randomness complexity.

In this thesis, we present an explicit construction of low-degree curve samplers with optimal randomness complexity (up to a constant factor) that sample curves of degree (m log_q(1/δ))^{O(1)} in F_q^m. Our construction is a delicate combination of several components, including extractor machinery, limited independence, iterated sampling, and list-recoverable codes.

Abstract:

The discovery that the three-ring polyamide Im-Py-Py-Dp containing imidazole (Im) and pyrrole (Py) carboxamides binds the DNA sequence 5'-(A,T)G(A,T)C(A,T)-3' as an antiparallel dimer offers a new model for the design of ligands for specific recognition of sequences in the minor groove containing both G,C and A,T base pairs. In Chapter 2, experiments are described in which the sequential addition of five N-methylpyrrolecarboxamides to the imidazole-pyrrole polyamide Im-Py-Py-Dp affords a series of six homologous polyamides, Im-(Py)2-7-Dp, that differ in the size of their binding site, apparent first-order binding affinity, and sequence specificity. These results demonstrate that DNA sequences up to nine base pairs in length can be specifically recognized by imidazole-pyrrole polyamides containing three to seven rings by 2:1 polyamide-DNA complex formation in the minor groove. Recognition of a nine base pair site defines the new lower limit of the binding site size that can be recognized by polyamides containing exclusively imidazole and pyrrolecarboxamides. The results of this study should provide useful guidelines for the design of new polyamides that bind longer DNA sites with enhanced affinity and specificity.

In Chapter 3 the design and synthesis of the hairpin polyamide Im-Py-Im-Py-γ-Im-Py-Im-Py-Dp is described. Quantitative DNase I footprint titration experiments reveal that Im-Py-Im-Py-γ-Im-Py-Im-Py-Dp binds six base pair 5'-(A,T)GCGC(A,T)-3' sequences with 30-fold higher affinity than the unlinked polyamide Im-Py-Im-Py-Dp. The hairpin polyamide does not discriminate between A•T and T•A at the first and sixth positions of the binding site, as the three sites 5'-TGCGCT-3', 5'-TGCGCA-3', and 5'-AGCGCT-3' are bound with similar affinity. However, Im-Py-Im-Py-γ-Im-Py-Im-Py-Dp is specific for and discriminates between G•C and C•G base pairs in the 5'-GCGC-3' core, as evidenced by lower affinities for the mismatched sites 5'-AACGCA-3', 5'-TGCGTT-3', 5'-TGCGGT-3', and 5'-ACCGCT-3'.

In Chapter 4, experiments are described in which a kinetically stable hexa-aza Schiff base La3+ complex is covalently attached to a Tat(49-72) peptide which has been shown to bind the HIV-1 TAR RNA sequence. Although these metallo-peptides cleave TAR site-specifically in the hexanucleotide loop to afford products consistent with hydrolysis, a series of control experiments suggests that the observed cleavage is not caused by a sequence-specifically bound Tat(49-72)-La(L)3+ peptide.

Abstract:

Algorithmic DNA tile systems are fascinating. From a theoretical perspective, they can result in simple systems that assemble themselves into beautiful, complex structures through fundamental interactions and logical rules. As an experimental technique, they provide a promising method for programmably assembling complex, precise crystals that can grow to considerable size while retaining nanoscale resolution. In the journey from theoretical abstractions to experimental demonstrations, however, lie numerous challenges and complications.

In this thesis, to examine these challenges, we consider the physical principles behind DNA tile self-assembly. We survey recent progress in experimental algorithmic self-assembly, and explain the simple physical models behind this progress. Using direct observation of individual tile attachments and detachments with an atomic force microscope, we test some of the fundamental assumptions of the widely-used kinetic Tile Assembly Model, obtaining results that fit the model to within error. We then depart from the simplest form of that model, examining the effects of DNA sticky end sequence energetics on tile system behavior. We develop theoretical models, sequence assignment algorithms, and a software package, StickyDesign, for sticky end sequence design.
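
For reference, the kTAM's basic assumption, which the AFM experiments test, is that tiles attach at a rate set by their concentration and detach at a rate set by the total strength of their sticky-end bonds. A minimal sketch (parameter values are illustrative, not fitted):

import math

def ktam_rates(G_mc, G_se, b, k_hat=1e6):
    """Kinetic Tile Assembly Model rates: G_mc is the (dimensionless)
    free-energy cost of monomer concentration, G_se the strength of one
    sticky-end bond in units of kT, b the total bond strength by which
    a tile is attached, and k_hat a forward rate constant."""
    r_forward = k_hat * math.exp(-G_mc)      # attachment, same for all tiles
    r_reverse = k_hat * math.exp(-b * G_se)  # detachment, depends on bonds
    return r_forward, r_reverse

# Near the operating point G_mc ~ 2 G_se, a tile held by two bonds is
# roughly as likely to detach as a new tile is to attach, while a tile
# held by three bonds is strongly favored to stay.
for b in (1, 2, 3):
    f, r = ktam_rates(G_mc=17.0, G_se=8.6, b=b)
    print(f"b={b}: attach/detach ratio = {f/r:.3g}")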

As a demonstration of a specific tile system, we design a binary counting ribbon that can accurately count from a programmable starting value and stop growing after overflowing, resulting in a single system that can construct ribbons of precise and programmable length. In the process of designing the system, we explain numerous considerations that provide insight into more general tile system design, particularly with regards to tile concentrations, facet nucleation, the construction of finite assemblies, and design beyond the abstract Tile Assembly Model.

Finally, we present our crystals that count: experimental results with our binary counting system that represent a significant improvement in the accuracy of experimental algorithmic self-assembly, including crystals that count perfectly with 5 bits from 0 to 31. We show some preliminary experimental results on the construction of our capping system to stop growth after counters overflow, and offer some speculation on potential future directions of the field.

Abstract:

This dissertation studies the long-term behavior of random Riccati recursions and of a mathematical epidemic model. Riccati recursions arise in Kalman filtering: the error covariance matrix of the Kalman filter satisfies a Riccati recursion. Convergence conditions for time-invariant Riccati recursions are well studied. We focus on the time-varying case and assume that the regressor matrix is random, independent, and identically distributed according to a given distribution whose probability density function is continuous, supported on the whole space, and decaying faster than any polynomial. We study the geometric convergence of the probability distribution.

We also study the global dynamics of epidemic spread over complex networks for various models. For instance, in the discrete-time Markov chain model, each node is either healthy or infected at any given time, so the number of states increases exponentially with the size of the network. The Markov chain has a unique stationary distribution, in which all nodes are healthy with probability 1. Since the probability distribution of a Markov chain on a finite state space converges to its stationary distribution, this model predicts that the epidemic dies out after a long enough time. To analyze the Markov chain model, we study a nonlinear epidemic model whose state at any given time is the vector of marginal probabilities of infection of each node in the network at that time. Convergence to the origin in the epidemic map implies the extinction of the epidemic. The nonlinear model is upper-bounded by its linearization at the origin. As a result, the origin is the globally stable unique fixed point of the nonlinear model if the linear upper bound is stable. When the linear upper bound is unstable, the nonlinear model has a second fixed point, and we carry out a stability analysis of this second fixed point for both discrete-time and continuous-time models. Returning to the Markov chain model, we claim that the stability of the linear upper bound for the nonlinear model is strongly related to the extinction time of the Markov chain: a stable linear upper bound is a sufficient condition for fast extinction, and the probability of survival is bounded by the nonlinear epidemic map.
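
To make the linearization explicit, in a standard discrete-time SIS mean-field model (notation mine), with infection-probability vector p(t), network adjacency matrix A, infection rate β, and recovery rate δ, the epidemic map linearized at the origin is

$$ p(t+1) \approx \big( (1-\delta) I + \beta A \big)\, p(t), $$

so the linear upper bound is stable, and fast extinction follows, exactly when (1 − δ) + β λ_max(A) < 1, i.e., β λ_max(A) < δ.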

Abstract:

With continuing advances in CMOS technology, feature sizes of modern silicon chipsets have gone down drastically over the past decade. In addition to desktop and laptop processors, a vast majority of these chips are also being deployed in mobile communication devices like smart-phones and tablets, where multiple radio-frequency integrated circuits (RFICs) must be integrated into one device to cater to a wide variety of applications such as Wi-Fi, Bluetooth, NFC, wireless charging, etc. While a small feature size enables higher integration levels, leading to billions of transistors co-existing on a single chip, it also makes these silicon ICs more susceptible to variations. A part of these variations can be attributed to the manufacturing process itself, particularly to the stringent dimensional tolerances associated with the lithographic steps in modern processes. Additionally, RF and millimeter-wave communication chipsets are subject to another type of variation caused by dynamic changes in the operating environment. Another bottleneck in the development of high-performance RF/mm-wave silicon ICs is the lack of accurate analog/high-frequency models in nanometer CMOS processes. This can be primarily attributed to the fact that most cutting-edge processes are geared towards digital system implementation, and as such there is little model-to-hardware correlation at RF frequencies.

All these issues have significantly degraded the yield of high-performance mm-wave and RF CMOS systems, which often require multiple trial-and-error silicon validations, thereby incurring additional production costs. This dissertation proposes a low-overhead technique that counters the detrimental effects of these variations, thereby improving both the performance and the yield of chips post-fabrication in a systematic way. The key idea behind this approach is to dynamically sense the performance of the system, identify when a problem has occurred, and then actuate the system back to its desired performance level through an intelligent on-chip optimization algorithm. We term this technique self-healing, drawing inspiration from nature's own way of healing the body against adverse environmental effects. To demonstrate the efficacy of self-healing in CMOS systems, several representative examples are designed, fabricated, and measured under a variety of operating conditions.
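
The sense-identify-actuate loop can be sketched in a few lines. Below is a toy illustration of the closed-loop idea only (hypothetical sensor and actuator names; the actual on-chip optimization algorithms are not specified here):

import random

def self_heal(sense, actuators, steps=200):
    """Toy closed-loop healing: greedy random search over actuator
    settings, keeping any change that improves the sensed figure of
    merit. Structure is illustrative; real on-chip optimizers are far
    more constrained."""
    state = {name: 0.5 for name in actuators}   # normalized knob settings
    best = sense(state)
    for _ in range(steps):
        knob = random.choice(list(state))
        trial = dict(state)
        trial[knob] = min(1.0, max(0.0, state[knob] + random.gauss(0, 0.1)))
        score = sense(trial)
        if score > best:                        # keep only improvements
            state, best = trial, score
    return state, best

# Hypothetical example: recover a power amplifier's output power by
# tuning bias and matching-network actuators after process variations
# shift the optimum away from its nominal design point.
def sensed_power(s):
    return -(s["bias"] - 0.7) ** 2 - (s["match"] - 0.3) ** 2

settings, power = self_heal(sensed_power, ["bias", "match"])
print(settings, power)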

We demonstrate a high-power mm-wave transmitter architecture based on a segmented power-mixer array, capable of generating high-speed, non-constant-envelope modulations at higher efficiencies than existing conventional designs. We then incorporate several sensors and actuators into the design and demonstrate closed-loop healing against a wide variety of non-ideal operating conditions. We also demonstrate fully integrated self-healing in the context of another mm-wave power amplifier, where measurements performed across several chips show significant improvements in performance as well as reduced variability in the presence of process variations, load impedance mismatch, and catastrophic transistor failure. Finally, on the receiver side, a closed-loop self-healing phase-synthesis scheme is demonstrated in conjunction with a wide-band voltage-controlled oscillator to generate phase-shifted local oscillator (LO) signals for a phased-array receiver. The system is shown to heal against non-idealities in the LO signal generation and distribution, significantly reducing phase errors across a wide range of frequencies.

Abstract:

The problem motivating this investigation is that of pure axisymmetric torsion of an elastic shell of revolution. The analysis is carried out within the framework of the three-dimensional linear theory of elastic equilibrium for homogeneous, isotropic solids. The objective is the rigorous estimation of errors involved in the use of approximations based on thin shell theory.

The underlying boundary-value problem is one of Neumann type for a second-order elliptic operator. A systematic procedure for constructing pointwise estimates for the solution and its first derivatives is given for a general class of second-order elliptic boundary-value problems which includes the torsion problem as a special case.
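
Schematically, the class of problems treated is of the form (notation mine)

$$ \mathcal{L}u \equiv \sum_{i,j} \frac{\partial}{\partial x_i}\!\left( a_{ij}\, \frac{\partial u}{\partial x_j} \right) = 0 \ \ \text{in } \Omega, \qquad \sum_{i,j} a_{ij}\, \frac{\partial u}{\partial x_j}\, n_i = g \ \ \text{on } \partial\Omega, $$

a Neumann problem for a second-order elliptic operator, of which the torsion problem is a special case.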

The method used here rests on the construction of “energy inequalities” and on the subsequent deduction of pointwise estimates from the energy inequalities. This method removes certain drawbacks characteristic of pointwise estimates derived in some investigations of related areas.

Special interest is directed towards thin shells of constant thickness. The method enables us to estimate the error involved in a stress analysis in which the exact solution is replaced by an approximate one, and thus provides us with a means of assessing the quality of approximate solutions for axisymmetric torsion of thin shells.

Finally, the results of the present study are applied to the stress analysis of a circular cylindrical shell, and the quality of stress estimates derived here and those from a previous related publication are discussed.

Abstract:

Part I

The slow, viscous flow past a thin screen is analyzed based on Stokes equations. The problem is reduced to an associated electric potential problem as introduced by Roscoe. Alternatively, the problem is formulated in terms of a Stokeslet distribution, which turns out to be equivalent to the first approach.
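
For reference, the analysis is based on the Stokes (creeping-flow) equations,

$$ \mu \nabla^{2}\mathbf{u} = \nabla p, \qquad \nabla \cdot \mathbf{u} = 0, $$

whose fundamental point-force solution, the Stokeslet, underlies the distribution method used here.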

Special interest is directed towards the solution of the Stokes flow past a circular annulus. A "Stokeslet" formulation is used in this analysis. The problem is finally reduced to solving a Fredholm integral equation of the second kind. Numerical data for the drag coefficient and the mean velocity through the hole of the annulus are obtained.
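
That is, the problem reduces to an equation of the form (notation mine)

$$ f(x) = g(x) + \lambda \int_{a}^{b} K(x,y)\, f(y)\, dy, $$

which is then solved numerically to obtain the drag coefficient and mean velocity.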

Stokes flow past a circular screen with numerous holes is also attempted by assuming a set of approximate boundary conditions. An "electric potential" formulation is used, and the problem is also reduced to solving a Fredholm integral equation of the second kind. Drag coefficient and mean velocity through the screen are computed.

Part II

The purpose of this investigation is to formulate correctly a set of boundary conditions to be prescribed at the interface between a viscous flow region and a porous medium so that the problem of a viscous flow past a porous body can be solved.

General macroscopic equations of motion for flow through porous media are first derived by averaging the Stokes equations over a volume element of the medium. These equations, which include viscous stresses in the description, are more general than Darcy's law. They reduce to Darcy's law when the Darcy number becomes extremely small.
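
A Brinkman-type form consistent with this description (notation mine) is

$$ \mu_e \nabla^{2} \langle \mathbf{u} \rangle - \frac{\mu}{k}\, \langle \mathbf{u} \rangle = \nabla \langle p \rangle, $$

where k is the permeability; as the Darcy number Da = k/L² tends to zero, the viscous term drops out and Darcy's law, ⟨u⟩ = −(k/μ)∇⟨p⟩, is recovered.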

The interface boundary conditions of the first kind are then formulated with respect to the general macroscopic equations applied within the porous region. An application of such equations and boundary conditions to a Poiseuille shear flow problem demonstrates that there usually exists a thin interface layer immediately inside the porous medium in which the tangential velocity varies exponentially and Darcy's law does not apply.

With Darcy's law assumed within the porous region, interface boundary conditions of the second kind are established which relate the flow variables across the interface layer. The primary feature is a jump condition on the tangential velocity, which is found to be directly proportional to the normal gradient of the tangential velocity immediately outside the porous medium. This is in agreement with the experimental results of Beavers et al.
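
This is the form of the Beavers-Joseph condition; in its usual statement (notation mine), with the interface at y = 0 and the free fluid in y > 0,

$$ \left. \frac{\partial u}{\partial y} \right|_{y=0^{+}} = \frac{\alpha}{\sqrt{k}} \left( u_{\mathrm{slip}} - u_{\mathrm{Darcy}} \right), $$

so the tangential-velocity jump is proportional to the normal gradient of the tangential velocity just outside the porous medium.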

The derived boundary conditions are applied in the solutions of two other problems: (1) Viscous flow between a rotating solid cylinder and a stationary porous cylinder, and (2) Stokes flow past a porous sphere.

Abstract:

Part I. The regions of sequence homology and non-homology between the DNA molecules of T2, T4, and T6 have been mapped by the electron microscopic heteroduplex method. The heteroduplex maps have been oriented with respect to the T4 genetic map. They show characteristic, reproducible patterns of substitution and deletion loops. All heteroduplex molecules show more than 85% homology. Some of the loop patterns in T2/T4 heteroduplexes are similar to those in T4/T6.

We find that the rII, lysozyme, and ac genes, the D region, and gene 52 are homologous in T2, T4, and T6. Genes 43 and 47 are probably homologous between T2 and T4. The region of greatest homology is that bearing the late genes. The host range region, which comprises a part of gene 37 and all of gene 38, is heterologous in T2, T4, and T6. The remainder of gene 37 is partially homologous in the T2/T4 heteroduplex (Beckendorf, Kim and Lielausis, 1972) but is heterologous in T4/T6 and in T2/T6. Some of the tRNA genes are homologous and some are not. The internal protein genes in general seem to be non-homologous.

The molecular lengths of the T-even DNAs are the same within the limit of experimental error; their calculated molecular weights are correspondingly different due to unequal glucosylation. The size of the T2 genome is smaller than that of T4 or T6, but the terminally repetitious region in T2 is larger. There is a length distribution of the terminal repetition for any one phage DNA, indicating a variability in length of the DNA molecules packaged within the phage.

Part II. E. coli cells infected with phage strains carrying extensive deletions encompassing the gene for the phage ser-tRNA are missing the phage tRNAs normally present in wild type infected cells. By DNA-RNA hybridization we have demonstrated that the DNA complementary to the missing tRNAs is also absent in such deletion mutants. Thus the genes for these tRNAs must be clustered in the same region of the genome as the ser-tRNA gene. Physical mapping of several deletions of the ser-tRNA and lysozyme genes, by examination of heteroduplex DNA in the electron microscope, has enabled us to locate the cluster, to define its maximum size, and to order a few of the tRNA genes within it. That such deletions can be isolated indicates that the phage-specific tRNAs from this cluster are dispensable.

Part III. Genes 37 and 38 of the closely related phages T2 and T4 have been compared by genetic, biochemical, and heteroduplex studies. Homologous, partially homologous, and non-homologous regions of gene 37 have been mapped. The host range determinant which interacts with the gene 38 product is identified.

Part IV. A population of double-stranded ØX-RF DNA molecules carrying a deletion of about 9% of the wild-type DNA has been discovered in a sample cultivated under conditions where the phage lysozyme gene is nonessential. The structures of deleted monomers, dimers, and trimers have been studied by the electron microscope heteroduplex method. The dimers and trimers are shown to be head-to-tail repeats of the deleted monomers. Some interesting examples of the dynamical phenomenon of branch migration in vitro have been observed in heteroduplexes of deleted dimer and trimer strands with undeleted wild-type monomer viral strands.