7 results for RELIABLE RESULT

in CaltechTHESIS


Relevance: 20.00%

Abstract:

Amorphous metals that form fully glassy parts over a few millimeters in thickness are still relatively new materials. Their glassy structure gives them particularly high strengths, high yield strains, high hardness values, high resilience, and low damping losses, but it can also result in an extremely low tolerance for flaws in the material. Because this glassy structure lacks an ordered crystal structure, it also lacks the crystalline defects (dislocations) that provide the micromechanism of toughening and flaw insensitivity in conventional metals. Without sufficient and reliable toughness, and the damage tolerance it confers, metallic glasses will struggle to be adopted commercially. Here, we identify the origin of toughness in metallic glass as the competition between the intrinsic toughening mechanism of shear banding ahead of a crack and crack propagation by cavitation of the liquid inside the shear bands. The first three chapters present a detailed study of the process of shear banding: its crucial role in giving rise to one of the most damage-tolerant materials known, its extreme sensitivity to the configurational state of a glass with moderate toughness, and how the configurational state can be changed by the addition of minor elements. The last chapter is a novel investigation of the cavitation barrier in glass-forming liquids, the process that competes with shear banding. Together, these results deepen our understanding of the major influences on the fracture toughness of metallic glasses and thus provide a path for the improvement and development of tougher metallic glasses.

Relevance: 20.00%

Abstract:

This thesis addresses whether it is possible to build a robust memory device for quantum information. Many schemes for fault-tolerant quantum information processing have been developed, one of which, called topological quantum computation, makes use of degrees of freedom that are inherently insensitive to local errors. However, this scheme is not reliable against thermal errors. Other fault-tolerant schemes achieve better reliability through active error correction but incur a substantial overhead cost. Thus, it is of practical importance and theoretical interest to design and assess fault-tolerant schemes that work well at finite temperature without active error correction.

In this thesis, a three-dimensional gapped lattice spin model is found which demonstrates for the first time that a reliable quantum memory at finite temperature is possible, at least to some extent. When quantum information is encoded into a highly entangled ground state of this model and subjected to thermal errors, the errors remain easily correctable for a long time without any active intervention, because a macroscopic energy barrier keeps the errors well localized. As a result, stored quantum information can be retrieved faithfully for a memory time which grows exponentially with the square of the inverse temperature. In contrast, for previously known types of topological quantum storage in three or fewer spatial dimensions the memory time scales exponentially with the inverse temperature, rather than its square.
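
Stated schematically (with β = 1/T the inverse temperature, and c, c' model-dependent constants introduced here only for illustration, not values quoted from the thesis):

```latex
% Memory-time scalings described above; c and c' are model-dependent constants.
\tau_{\text{this model}} \sim e^{\,c\,\beta^{2}}
\qquad \text{vs.} \qquad
\tau_{\text{prior 3D models}} \sim e^{\,c'\,\beta}.
```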

This spin model exhibits a previously unexpected topological quantum order, in which ground states are locally indistinguishable, pointlike excitations are immobile, and the immobility is not affected by small perturbations of the Hamiltonian. The degeneracy of the ground state, though also insensitive to perturbations, is a complicated number-theoretic function of the system size, and the system bifurcates into multiple noninteracting copies of itself under real-space renormalization group transformations. The degeneracy, the excitations, and the renormalization group flow can be analyzed using a framework that exploits the spin model's symmetry and some associated free resolutions of modules over polynomial algebras.

Relevance: 20.00%

Abstract:

Life is the result of the execution of molecular programs: from how an embryo is fated to become a human or a whale to how a person's appearance is inherited from their parents, many biological phenomena are governed by genetic programs written in DNA molecules. At the core of such programs is the highly reliable base-pairing interaction between nucleic acids. DNA nanotechnology exploits the programming power of DNA to build artificial nanostructures, molecular computers, and nanomachines. In particular, DNA origami, a simple yet versatile technique for creating various nanoscale shapes and patterns, is at the heart of the technology. In this thesis, I describe the development of programmable self-assembly and reconfiguration of DNA origami nanostructures based on a unique strategy: rather than relying on Watson-Crick base pairing, we developed programmable bonds via the geometric arrangement of stacking interactions, which we termed stacking bonds. We further demonstrated that such bonds can be dynamically reconfigured.

The first part of this thesis describes the design and implementation of stacking bonds. Our work addresses the fundamental question of whether one can create diverse bond types out of a single kind of attractive interaction, a question first posed implicitly by Francis Crick while seeking a deeper understanding of the origin of life and the primitive genetic code. To create multiple specific bonds, we used two different approaches: binary coding and shape coding of the geometric arrangement of stacking interaction units, which are called blunt ends. To construct a bond space for each approach, we performed a systematic search using a computer algorithm, sketched below. We used orthogonal bonds to experimentally connect five distinct DNA origami nanostructures. We also programmed the bonds to control the cis/trans configuration between asymmetric nanostructures.
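
The abstract does not spell out the search algorithm; as a hypothetical sketch of a systematic search for mutually orthogonal binary bond codes, one might greedily filter fixed-length blunt-end patterns. The pattern length, thresholds, and the binding-strength model (counting aligned blunt-end pairs, checked in both facing orientations) are all illustrative assumptions:

```python
from itertools import product

N_POS = 8        # blunt-end positions per edge (illustrative)
MIN_SELF = 5     # minimum aligned pairs for a strong cognate bond (assumed)
MAX_CROSS = 2    # maximum tolerated cross-talk between distinct bonds (assumed)

def aligned_pairs(a, b):
    """Toy binding model: strength is the number of positions where both
    facing edges present a stacking-active blunt end."""
    return sum(x & y for x, y in zip(a, b))

def cross_talk(a, b):
    # A facing edge can meet a pattern in either orientation, so take the worst case.
    return max(aligned_pairs(a, b), aligned_pairs(a, b[::-1]))

codes = []
for cand in product((0, 1), repeat=N_POS):    # all binary blunt-end patterns
    if aligned_pairs(cand, cand) < MIN_SELF:  # cognate bond would be too weak
        continue
    if all(cross_talk(cand, c) <= MAX_CROSS for c in codes):
        codes.append(cand)                    # orthogonal to every code kept so far

print(f"{len(codes)} mutually orthogonal binary bond codes")
```

A shape-coding search would follow the same outline, with aligned_pairs replaced by a test of geometric complementarity between protrusion and recess profiles.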

The second part of this thesis describes the large-scale self-assembly of DNA origami into two-dimensional checkerboard-pattern crystals via surface diffusion. We developed a protocol in which the diffusion of DNA origami occurs on a substrate and is dynamically controlled by changing the cationic conditions of the system. We used stacking interactions to mediate connections between the origami because of their potential for reconfiguration during the assembly process. Assembling DNA nanostructures directly on substrate surfaces can benefit nano- and microfabrication processes by eliminating a pattern-transfer step. At the same time, the use of DNA origami allows high complexity and unique addressability, with six-nanometer resolution within each structural unit.

The third part of this thesis describes the use of stacking bonds as dynamically breakable bonds. To break the bonds, we used biological machinery called the ParMRC system, extracted from bacteria. This system ensures that, when a cell divides, each daughter cell gets one copy of the cell's DNA by actively pushing the copies to opposite poles of the cell. We demonstrate dynamically expandable nanostructures, making stacking bonds a promising candidate for reconfigurable connectors between nanoscale machine parts.

Relevance: 20.00%

Abstract:

Storage systems are widely used and play a crucial role in both consumer and industrial products, such as personal computers, data centers, and embedded systems. However, with the emergence of new systems and devices such as distributed storage and flash memory, storage faces challenges of cost, restricted lifetime, and reliability. Information theory, on the other hand, provides fundamental bounds and solutions for fully utilizing resources such as data density, information I/O, and network bandwidth. This thesis bridges these two topics and proposes to solve challenges in data storage using a variety of coding techniques, so that storage becomes faster, more affordable, and more reliable.

We consider the system level and study the integration of RAID schemes and distributed storage. Erasure-correcting codes are the basis of the ubiquitous RAID schemes for storage systems, where disks correspond to symbols in the code and are located in a (distributed) network. Specifically, RAID schemes are based on MDS (maximum distance separable) array codes that enable optimal storage and efficient encoding and decoding algorithms. With r redundancy symbols, an MDS code can sustain r erasures. For example, consider an MDS code that can correct two erasures. It is clear that when two symbols are erased, one needs to access and transmit all of the remaining information to rebuild the erasures. However, an interesting and practical question is: what is the smallest fraction of information that one needs to access and transmit in order to correct a single erasure? In Part I we show that the lower bound of 1/2 is achievable and that the result can be generalized to codes with an arbitrary number of parities and optimal rebuilding.
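
In this notation, the generalization mentioned above can be stated as follows (a hedged restatement: the 1/2 result is the r = 2 case, and the abstract itself does not give the constants for other r):

```latex
% Fraction of surviving information accessed to rebuild a single erasure
% in an (n, k) MDS array code with r = n - k parities:
\frac{\text{data accessed}}{\text{data surviving}} \;\ge\; \frac{1}{r},
% with equality achievable by codes with optimal rebuilding.
```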

We consider the device level and study coding and modulation techniques for emerging non-volatile memories such as flash memory. In particular, rank modulation is a novel data-representation scheme, proposed by Jiang et al. for multi-level flash memory cells, in which a set of n cells stores information in the permutation induced by the different charge levels of the individual cells. It eliminates the need for discrete cell levels, as well as overshoot errors, when programming cells. To decrease the decoding complexity, we propose two variations of this scheme in Part II: bounded rank modulation, where only small sliding windows of cells are sorted to generate permutations, and partial rank modulation, where only a subset of the n cells is used to represent data. We study limits on the capacity of bounded rank modulation and propose encoding and decoding algorithms; we show that overlaps between windows increase capacity. We also present Gray codes that span all possible partial-rank states using only "push-to-the-top" operations. These Gray codes turn out to solve an open combinatorial problem: constructing a universal cycle, a sequence of integers that generates all possible partial permutations.
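
As a minimal sketch of the two primitives rank modulation relies on (the cell count and charge values below are illustrative, not from the thesis):

```python
def read_permutation(charges):
    """Rank modulation: the stored symbol is the permutation induced by
    the relative charge levels of the cells, highest cell first."""
    return sorted(range(len(charges)), key=lambda i: -charges[i])

def push_to_top(charges, i, delta=1.0):
    """The push-to-the-top programming primitive: inject charge into cell i
    until it exceeds every other cell. Only the relative order matters, so
    no precise target level is needed and overshoot is harmless."""
    charges[i] = max(charges) + delta

charges = [0.3, 1.2, 0.7, 0.9]       # analog charge levels of n = 4 cells
print(read_permutation(charges))      # -> [1, 3, 2, 0]
push_to_top(charges, 2)
print(read_permutation(charges))      # -> [2, 1, 3, 0]
```

Bounded rank modulation would apply read_permutation to small sliding windows of cells rather than to all n cells at once.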

Relevance: 20.00%

Abstract:

Heparin has been used as an anticoagulant drug for more than 70 years. The global distribution of contaminated heparin in 2007, which resulted in adverse clinical effects and over 100 deaths, underscores the need for safer alternatives to animal-sourced heparin. The structural complexity and heterogeneity of animal-sourced heparin not only impede safe access to these biologically active molecules but also hinder investigation of the significance of structural constituents at the molecular level. Efficient methods for preparing new synthetic heparins with targeted biological activity are necessary not only to ensure clinical safety but also to optimize derivative design and minimize potential side effects. Low-molecular-weight heparins have become a reliable alternative to heparin due to their predictable dosages, long half-lives, and reduced side effects. However, heparin oligosaccharide synthesis is a challenging endeavor because of the complex protecting-group manipulation and stereoselective glycosidic-linkage chemistry it requires, which often result in lengthy synthetic routes and low yields. Recently, chemoenzymatic syntheses have produced targeted ultralow-molecular-weight heparins with high efficiency, but they remain restricted by the substrate specificities of the enzymes.

To address the need for access to homogeneous, complex glycosaminoglycan structures, we have synthesized novel heparan sulfate (HS) glycopolymers with well-defined carbohydrate structures and tunable chain lengths through ring-opening metathesis polymerization. These polymers recapitulate the key features of anticoagulant heparan sulfate by displaying the sulfation pattern responsible for heparin's anticoagulant activity. The use of polymerization chemistry greatly simplifies the synthesis of complex glycosaminoglycan structures, providing a facile method for generating homogeneous macromolecules with tunable biological and chemical properties. Through in vitro chromogenic substrate assays and ex vivo clotting assays, we found that the HS glycopolymers exhibited anticoagulant activity in a sulfation-pattern- and length-dependent manner. Compared to heparin standards, our short polymers did not display any activity, but our longer polymers combined in vitro and ex vivo characteristics of both low-molecular-weight heparin derivatives and heparin, displaying hybrid anticoagulant properties. These studies emphasize the significance of sulfation-pattern specificity in carbohydrate-protein interactions and demonstrate the effectiveness of multivalent molecules in recapitulating the activity of natural polysaccharides.

Relevance: 20.00%

Abstract:

Optical microscopy has become an indispensable tool for biological research since its invention, owing mostly to its sub-cellular spatial resolution, non-invasiveness, instrumental simplicity, and the intuitive observations it provides. Nonetheless, obtaining reliable, quantitative spatial information from conventional wide-field optical microscopy is not as straightforward as it appears, because in the acquired images the information about out-of-focus regions is spatially blurred and mixed with the in-focus information. In other words, conventional wide-field optical microscopy compresses the three-dimensional, volumetric information about an object into a two-dimensional form in each acquired image, and therefore distorts the spatial information about the object. Several fluorescence holography-based methods have demonstrated the ability to obtain three-dimensional information about objects, but these methods generally rely on decomposing stereoscopic visualizations to extract volumetric information and are unable to resolve complex three-dimensional structures such as a multi-layer sphere.
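
This mixing can be written as the standard wide-field image-formation model (a textbook relation, not a formula quoted from the thesis): each acquired two-dimensional image integrates every object plane blurred by the corresponding defocused point-spread function,

```latex
% I: acquired image, O(.,.,z): object plane at depth z, PSF_z: defocused PSF
I(x, y) \;=\; \int \big[\, O(\cdot, \cdot, z) \ast \mathrm{PSF}_{z} \,\big](x, y)\, \mathrm{d}z ,
```

where the asterisk denotes two-dimensional convolution in the lateral plane.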

The concept of optical-sectioning techniques, on the other hand, is to detect only two-dimensional information about an object at each acquisition. Specifically, each image obtained by optical-sectioning techniques contains mainly the information about an optically thin layer inside the object, as if only a thin histological section is being observed at a time. Using such a methodology, obtaining undistorted volumetric information about the object simply requires taking images of the object at sequential depths.

Among existing methods of obtaining volumetric information, its practicability has made optical sectioning the most commonly used and most powerful approach in biological science. However, when applied to imaging living biological systems, conventional single-point-scanning optical-sectioning techniques often cause some degree of photodamage because of the high focal intensity at the scanning point. To overcome this issue, several wide-field optical-sectioning techniques have been proposed and demonstrated, although not without introducing new limitations and compromises, such as low signal-to-background ratios and reduced axial resolution. As a result, single-point-scanning optical-sectioning techniques remain the most widely used instruments for volumetric imaging of living biological systems to date.

To develop wide-field optical-sectioning techniques whose optical performance is equivalent to that of single-point-scanning ones, this thesis first introduces the mechanisms and limitations of existing wide-field optical-sectioning techniques and then presents our innovations aimed at overcoming these limitations. We demonstrate, theoretically and experimentally, that our proposed wide-field optical-sectioning techniques can achieve diffraction-limited optical sectioning, low out-of-focus excitation, and high-frame-rate imaging in living biological systems. In addition to these imaging capabilities, our techniques can be instrumentally simple and economical, and are straightforward to implement on conventional wide-field microscopes. Together, these advantages show the potential of our innovations to be widely used for high-speed volumetric fluorescence imaging of living biological systems.

Relevance: 20.00%

Abstract:

The Earth's largest geoid anomalies occur at the lowest spherical harmonic degrees, or longest wavelengths, and are primarily the result of mantle convection. Thermal density contrasts due to convection are partially compensated by boundary deformations due to viscous flow whose effects must be included in order to obtain a dynamically consistent model for the geoid. These deformations occur rapidly with respect to the timescale for convection, and we have analytically calculated geoid response kernels for steady-state, viscous, incompressible, self-gravitating, layered Earth models which include the deformation of boundaries due to internal loads. Both the sign and magnitude of geoid anomalies depend strongly upon the viscosity structure of the mantle as well as the possible presence of chemical layering.
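
For reference, the geoid height N is conventionally expanded in spherical harmonics (a standard convention rather than a formula taken from the thesis), with the lowest degrees l = 2, 3 corresponding to the longest wavelengths:

```latex
N(\theta, \lambda) \;=\; R \sum_{l=2}^{\infty} \sum_{m=0}^{l}
  \left( C_{lm} \cos m\lambda + S_{lm} \sin m\lambda \right)
  \bar{P}_{lm}(\cos\theta),
```

where R is the Earth's mean radius, the P-bar terms are normalized associated Legendre functions, and C_lm, S_lm are the geoid coefficients.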

Correlations of various global geophysical data sets with the observed geoid can be used to construct theoretical geoid models that constrain the dynamics of mantle convection. Surface features such as topography and plate velocities are not obviously related to the low-degree geoid, with the exception of subduction zones, which are characterized by geoid highs (degrees 4-9). Recent models of seismic heterogeneity in the mantle provide additional constraints, and much of the low-degree (2-3) geoid can be attributed to seismically inferred density anomalies in the lower mantle. The Earth's largest geoid highs are underlain by low-density material in the lower mantle, thus requiring compensating deformations of the Earth's surface. A dynamical model for whole-mantle convection with a low-viscosity upper mantle can explain these observations and successfully predicts more than 80% of the observed geoid variance.
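
Here "variance explained" can be read in the usual least-squares sense (a standard definition, not notation quoted from the thesis):

```latex
% Fraction of observed geoid variance explained by the model:
1 \;-\; \frac{\sum_i \big( N^{\mathrm{obs}}_i - N^{\mathrm{model}}_i \big)^2}
             {\sum_i \big( N^{\mathrm{obs}}_i \big)^2} \;>\; 0.8 .
```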

Temperature variations associated with density anomalies in the mantle cause lateral viscosity variations whose effects are not included in the analytical models. However, perturbation theory and numerical tests show that broad-scale lateral viscosity variations are much less important than radial variations; in this respect, geoid models, which depend upon steady-state surface deformations, may provide more reliable constraints on mantle structure than inferences from transient phenomena such as postglacial rebound. Stronger, smaller-scale viscosity variations associated with mantle plumes and subducting slabs may be more important. On the basis of numerical modelling of low-viscosity plumes, we conclude that the global association of geoid highs (after slab effects are removed) with hotspots and, perhaps, mantle plumes, is the result of hot, upwelling material in the lower mantle; this conclusion does not depend strongly upon plume rheology. The global distribution of hotspots and the dominant, low-degree geoid highs may correspond to a dominant mode of convection stabilized by the ancient Pangean continental assemblage.