10 results for Religions (Proposed, universal, etc.)
in CaltechTHESIS
Abstract:
Politically the Colorado River is an interstate as well as an international stream. Physically the basin divides itself distinctly into three sections. The upper section, from the headwaters to the mouth of the San Juan, comprises about 40 percent of the total area of the basin and affords about 87 percent of the total runoff, or an average of about 15,000,000 acre-feet per annum. High mountains and cold weather are found in this section. The middle section, from the mouth of the San Juan to the mouth of the Williams, comprises about 35 percent of the total area of the basin and supplies about 7 percent of the annual runoff. Narrow canyons and mild weather prevail in this section. The lower third of the basin is composed mainly of hot, arid plains of low altitude. It comprises some 25 percent of the total area of the basin and furnishes about 6 percent of the average annual runoff.
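These shares imply a basin total; a quick arithmetic check (a sketch in Python, reading the 15,000,000 acre-feet figure as the upper section's runoff, i.e., 87 percent of the total, as the sentence implies):

```python
# Rough consistency check of the quoted basin figures (values taken
# directly from the abstract; units are acre-feet per annum).
upper_runoff_acre_ft = 15_000_000   # quoted average annual runoff, upper section
upper_share = 0.87                  # upper section's share of total runoff

total_runoff = upper_runoff_acre_ft / upper_share
middle_runoff = 0.07 * total_runoff
lower_runoff = 0.06 * total_runoff

print(f"Estimated total runoff: {total_runoff:,.0f} acre-feet/yr")   # ~17,200,000
print(f"Middle section (7%):    {middle_runoff:,.0f} acre-feet/yr")  # ~1,200,000
print(f"Lower section (6%):     {lower_runoff:,.0f} acre-feet/yr")   # ~1,000,000
```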
The proposed Diamond Creek reservoir is located in the middle section and lies wholly within the boundary of Arizona. The site is at the mouth of Diamond Creek, only 16 miles from Peach Springs, a station on the Santa Fe railroad. It is solely a power project with a limited storage capacity. The dam which creates the reservoir is of the gravity type, to be constructed across the river; the walls and foundation are of granite. For a dam 290 feet in height, the backwater will extend about 25 miles up the river.
The power house will be placed directly below the dam, perpendicular to the axis of the river. It is entirely a concrete structure. The power installation would consist of eighteen 37,500-hp vertical, variable-head turbines, directly connected to 28,000-kVA, 110,000-V, 3-phase, 60-cycle generators with the necessary switching and auxiliary apparatus. Each unit is to be fed by a separate penstock wholly embedded in the masonry.
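For a sense of the installed capacity these figures imply, a back-of-the-envelope total (using only the numbers quoted above; the horsepower-to-kilowatt factor is the standard 1 hp ≈ 0.7457 kW):

```python
# Back-of-the-envelope totals for the quoted installation.
units = 18
hp_per_turbine = 37_500
kva_per_generator = 28_000
HP_TO_KW = 0.7457  # mechanical horsepower to kilowatts

total_hp = units * hp_per_turbine        # 675,000 hp of turbine capacity
total_mw = total_hp * HP_TO_KW / 1000    # ~503 MW mechanical
total_kva = units * kva_per_generator    # 504,000 kVA of generator capacity

print(f"Turbine capacity:   {total_hp:,} hp  (~{total_mw:,.0f} MW)")
print(f"Generator capacity: {total_kva:,} kVA")
```

The roughly 503 MW of mechanical capacity sits consistently just under the 504,000 kVA of generator capacity.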
Concerning the power market, the main electric transmission lines would extend to Prescott, Phoenix, Mesa, Florence, etc. The mining regions of the mountains of Arizona would be the most suitable market. The demand for power in the above-named places may not be large at present, but it will, in the writer's observation, increase rapidly with the remarkable advancement of industrial development of all kinds.
All these things being comparatively feasible, there remains one difficult problem: the silt. At the Diamond Creek dam site the average annual silt discharge is about 82,650 acre-feet. The geographical conditions, however, will not permit silt deposits right in the reservoir. So this design will be made under the assumption given in Section 4.
The silt condition and the shifting lower course of the Colorado are much like those of the Yellow River in China. But one thing is different: on the Colorado most of the canyon walls are of granite, while those on the Yellow are of alluvial loess, so it is very hard, if not impossible, to find a favorable dam site on the lower part. As a visitor to this country, I should like to see the full development of the Colorado: but how about THE YELLOW!
Abstract:
Disorder and interactions both play crucial roles in quantum transport. Decades ago, Mott showed that electron-electron interactions can lead to insulating behavior in materials that conventional band theory predicts to be conducting. Soon thereafter, Anderson demonstrated that disorder can localize a quantum particle through the wave interference phenomenon of Anderson localization. Although interactions and disorder both separately induce insulating behavior, the interplay of these two ingredients is subtle and often leads to surprising behavior at the periphery of our current understanding. Modern experiments probe these phenomena in a variety of contexts (e.g. disordered superconductors, cold atoms, photonic waveguides, etc.); thus, theoretical and numerical advancements are urgently needed. In this thesis, we report progress on understanding two contexts in which the interplay of disorder and interactions is especially important.
The first is the so-called “dirty” or random boson problem. In the past decade, a strong-disorder renormalization group (SDRG) treatment by Altman, Kafri, Polkovnikov, and Refael has raised the possibility of a new unstable fixed point governing the superfluid-insulator transition in the one-dimensional dirty boson problem. This new critical behavior may take over from the weak-disorder criticality of Giamarchi and Schulz when disorder is sufficiently strong. We analytically determine the scaling of the superfluid susceptibility at the strong-disorder fixed point and connect our analysis to recent Monte Carlo simulations by Hrahsheh and Vojta. We then shift our attention to two dimensions and use a numerical implementation of the SDRG to locate the fixed point governing the superfluid-insulator transition there. We identify several universal properties of this transition, which are fully independent of the microscopic features of the disorder.
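A minimal sketch of the SDRG decimation loop for a chain of Josephson-coupled superfluid grains (rules in the spirit of Altman, Kafri, Polkovnikov, and Refael; the chain setup, disorder distributions, and stopping rule are illustrative assumptions, not the thesis's implementation):

```python
import random

def sdrg_chain(U, J, n_final=2):
    """Strong-disorder RG for a 1D chain of superfluid grains.

    U[i] is the charging energy of grain i; J[i] couples grains i and i+1.
    Decimation rules: the strongest coupling Omega = max(U, J) is treated
    first.  A strong bond J locks its two grains into one cluster with
    1/U_eff = 1/U_l + 1/U_r (capacitances add); a strong charging energy U
    freezes that grain out, leaving an effective bond J_eff = J_l*J_r/U
    (second-order perturbation theory).
    """
    U, J = list(U), list(J)
    while len(U) > n_final:
        iU = max(range(len(U)), key=lambda i: U[i])
        iJ = max(range(len(J)), key=lambda i: J[i])
        if J[iJ] >= U[iU]:
            # Bond decimation: merge grains iJ and iJ+1 into one cluster.
            U[iJ] = U[iJ] * U[iJ + 1] / (U[iJ] + U[iJ + 1])
            del U[iJ + 1], J[iJ]
        else:
            # Site decimation: freeze out grain iU (boundary grains just drop).
            if 0 < iU < len(U) - 1:
                J[iU - 1] = J[iU - 1] * J[iU] / U[iU]
                del J[iU]
            else:
                del J[0 if iU == 0 else -1]
            del U[iU]
    return U, J

random.seed(0)
N = 200
U0 = [random.lognormvariate(0, 2) for _ in range(N)]      # broad disorder
J0 = [random.lognormvariate(0, 2) for _ in range(N - 1)]
print(sdrg_chain(U0, J0))  # surviving effective couplings
```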
The second focus of this thesis is the interplay of localization and interactions in systems with high energy density (i.e., far from the usual low energy limit of condensed matter physics). Recent theoretical and numerical work indicates that localization can survive in this regime, provided that interactions are sufficiently weak. Stronger interactions can destroy localization, leading to a so-called many-body localization transition. This dynamical phase transition is relevant to questions of thermalization in isolated quantum systems: it separates a many-body localized phase, in which localization prevents transport and thermalization, from a conducting (“ergodic”) phase in which the usual assumptions of quantum statistical mechanics hold. Here, we present evidence that many-body localization also occurs in quasiperiodic systems that lack true disorder.
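A standard way to realize this setting (the interacting Aubry-André model is the canonical example; the specific models studied here may differ in detail) replaces random disorder with a deterministic, incommensurate potential:

$$H = -t\sum_j \left(c_j^\dagger c_{j+1} + \mathrm{h.c.}\right) + V\sum_j \cos(2\pi\alpha j + \phi)\, n_j + U\sum_j n_j n_{j+1},$$

where $\alpha$ is irrational (e.g., the golden ratio), so the on-site potential never repeats even though nothing is random; for $U = 0$ the model localizes for $V > 2t$, and the localized phase survives at weak interactions $U$.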
Abstract:
Storage systems are widely used and have played a crucial role in both consumer and industrial products, for example, personal computers, data centers, and embedded systems. However, such systems suffer from issues of cost, restricted lifetime, and reliability with the emergence of new systems and devices, such as distributed storage and flash memory, respectively. Information theory, on the other hand, provides fundamental bounds and solutions for fully utilizing resources such as data density, information I/O, and network bandwidth. This thesis bridges these two topics and proposes to solve challenges in data storage using a variety of coding techniques, so that storage becomes faster, more affordable, and more reliable.
We consider the system level and study the integration of RAID schemes and distributed storage. Erasure-correcting codes are the basis of the ubiquitous RAID schemes for storage systems, where disks correspond to symbols in the code and are located in a (distributed) network. Specifically, RAID schemes are based on MDS (maximum distance separable) array codes that enable optimal storage and efficient encoding and decoding algorithms. With r redundancy symbols, an MDS code can sustain up to r erasures. For example, consider an MDS code that can correct two erasures. It is clear that when two symbols are erased, one needs to access and transmit all the remaining information to rebuild the erasures. However, an interesting and practical question is: what is the smallest fraction of information that one needs to access and transmit in order to correct a single erasure? In Part I we will show that the lower bound of 1/2 is achievable and that the result can be generalized to codes with an arbitrary number of parities and optimal rebuilding.
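A toy instance of the rebuilding question (a hand-rolled sketch over GF(2) with two data columns and two parity columns in the zigzag spirit; it illustrates the 1/2 access ratio for one erased column and is not a full MDS construction like those developed in Part I):

```python
# Toy binary array layout: columns C1, C2 hold data, R holds row parities,
# Z holds "zigzag" parities pairing each row of C1 with the *other* row of C2.
# Rebuilding the erased column C1 with row parities alone reads 4 of the 6
# surviving elements (all of C2 and R); the zigzag layout needs only 3.
a = [1, 0]                       # column C1 (erased below)
b = [0, 1]                       # column C2
R = [a[0] ^ b[0], a[1] ^ b[1]]   # row parity column
Z = [a[0] ^ b[1], a[1] ^ b[0]]   # zigzag parity column

# Rebuild C1 reading only b[0], R[0], Z[1] -- 3 of the 6 surviving elements:
a0 = R[0] ^ b[0]                 # row parity gives a0
a1 = Z[1] ^ b[0]                 # zigzag parity reuses b[0] to give a1
assert [a0, a1] == a
print("rebuilt C1 =", [a0, a1], "reading 3 of 6 surviving elements")
```

The key design choice is that the zigzag parity lets the two recovered elements share one read (b[0]), which is how the access fraction drops to 1/2.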
We consider the device level and study coding and modulation techniques for emerging non-volatile memories such as flash memory. In particular, rank modulation is a novel data representation scheme proposed by Jiang et al. for multi-level flash memory cells, in which a set of n cells stores information in the permutation induced by the different charge levels of the individual cells. It eliminates the need for discrete cell levels, as well as overshoot errors, when programming cells. In order to decrease the decoding complexity, we propose two variations of this scheme in Part II: bounded rank modulation, where only small sliding windows of cells are sorted to generate permutations, and partial rank modulation, where only part of the n cells is used to represent data. We study limits on the capacity of bounded rank modulation and propose encoding and decoding algorithms. We show that overlaps between windows will increase capacity. We present Gray codes spanning all possible partial-rank states and using only "push-to-the-top" operations. These Gray codes turn out to solve an open combinatorial problem called the universal cycle, which is a sequence of integers generating all possible partial permutations.
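A minimal sketch of the basic rank-modulation representation and its single programming primitive (cell values and names here are illustrative assumptions):

```python
def permutation_from_charges(charges):
    """Rank modulation: data lives in the ranking of analog cell charges,
    not in their absolute values.  Returns cell indices from highest charge
    to lowest, which is the stored permutation."""
    return sorted(range(len(charges)), key=lambda i: -charges[i])

def push_to_top(charges, i):
    """The programming primitive: raise cell i above all others.  Charges
    only ever increase, so no overshoot/erase cycle is needed."""
    charges[i] = max(charges) + 1.0
    return charges

cells = [2.1, 0.4, 3.7, 1.2]              # analog charge levels of 4 cells
print(permutation_from_charges(cells))    # [2, 0, 3, 1]
push_to_top(cells, 3)                     # rewrite: move cell 3 to the top
print(permutation_from_charges(cells))    # [3, 2, 0, 1]
```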
Abstract:
A central objective in signal processing is to infer meaningful information from a set of measurements or data. While most signal models have an overdetermined structure (fewer unknowns than equations), traditionally very few statistical estimation problems have considered a data model which is underdetermined (more unknowns than equations). In recent times, however, an explosion of theoretical and computational methods has been developed, primarily to study underdetermined systems by imposing sparsity on the unknown variables. This is motivated by the observation that in spite of the huge volume of data that arises in sensor networks, genomics, imaging, particle physics, web search, etc., its information content is often much smaller than the number of raw measurements. This has given rise to the possibility of reducing the number of measurements by downsampling the data, which automatically gives rise to underdetermined systems.
In this thesis, we provide new directions for estimation in underdetermined systems, both for a class of parameter estimation problems and for the problem of sparse recovery in compressive sensing. There are two main contributions: the design of new sampling and statistical estimation algorithms for array processing, and the development of improved guarantees for sparse reconstruction by introducing a statistical framework for the recovery problem.
We consider underdetermined observation models in array processing where the number of unknown sources simultaneously received by the array can be considerably larger than the number of physical sensors. We study new sparse spatial sampling schemes (array geometries) and propose new recovery algorithms that can exploit priors on the unknown signals and unambiguously identify all the sources. The proposed sampling structure is generic enough to be extended to multiple dimensions as well as to exploit different kinds of priors in the model, such as correlation, higher-order moments, etc.
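To see how such geometries let the number of identifiable sources exceed the number of sensors, consider the difference coarray of a coprime pair (a sketch; taking N multiples of M and 2M multiples of N is one common convention and is assumed here):

```python
# Difference coarray of a coprime array (M, N coprime).  Physical sensors sit
# at multiples of M and of N; correlations between sensor pairs behave like
# measurements from "virtual" sensors at the pairwise differences, so O(MN)
# sources can be resolved with O(M + N) physical elements.
M, N = 3, 5  # coprime pair (illustrative choice)
positions = sorted({M * n for n in range(N)} | {N * m for m in range(2 * M)})
diffs = sorted({p - q for p in positions for q in positions})

contig = 0
while contig + 1 in diffs:          # longest contiguous run of positive lags
    contig += 1
print(f"{len(positions)} physical sensors -> "
      f"{len(diffs)} virtual lags, contiguous through +/-{contig}")
```

Second-order statistics computed across sensor pairs act like measurements on this larger virtual array, which is exactly what the correlation priors below exploit.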
Recognizing the role of correlation priors and suitable sampling schemes for underdetermined estimation in array processing, we introduce a correlation-aware framework for recovering sparse support in compressive sensing. We show that it is possible to strictly increase the size of the recoverable sparse support using this framework, provided the measurement matrix is suitably designed. The proposed nested and coprime arrays are shown to be appropriate candidates in this regard. We also provide new guarantees for convex and greedy formulations of the support recovery problem and demonstrate that it is possible to strictly improve upon existing guarantees.
This new paradigm of underdetermined estimation, which explicitly establishes the fundamental interplay between sampling, statistical priors, and the underlying sparsity, leads to exciting future research directions in a variety of application areas, and also gives rise to new questions that can lead to stand-alone theoretical results in their own right.
Abstract:
A general framework for multi-criteria optimal design is presented that is well suited for the automated design of structural systems. A systematic computer-aided optimal design decision process is developed that allows the designer to rapidly evaluate and improve a proposed design by taking into account the major factors of interest related to different aspects such as design, construction, and operation.
The proposed optimal design process requires the selection of the most promising choice of design parameters taken from a large design space, based on an evaluation using specified criteria. The design parameters specify a particular design, and so they relate to member sizes, structural configuration, etc. The evaluation of the design uses performance parameters which may include structural response parameters, risks due to uncertain loads and modeling errors, construction and operating costs, etc. Preference functions are used to implement the design criteria in a "soft" form. These preference functions give a measure of the degree of satisfaction of each design criterion. The overall evaluation measure for a design is built up from the individual measures for each criterion through a preference combination rule. The goal of the optimal design process is to obtain a design that has the highest overall evaluation measure; this is an optimization problem.
Genetic algorithms are stochastic optimization methods based on evolutionary theory. They provide the power necessary to explore high-dimensional search spaces and seek these optimal solutions. Two special genetic algorithms, hGA and vGA, are presented here for continuous and discrete optimization problems, respectively.
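A compact sketch of how the two ingredients fit together (the preference functions, product combination rule, design variables, and GA operators here are simplified stand-ins, not the thesis's hGA/vGA):

```python
import random

def preference(x, best, tol):
    """Soft design criterion: 1 at the target value, decaying to 0 as the
    performance parameter x drifts more than `tol` from `best`."""
    return max(0.0, 1.0 - abs(x - best) / tol)

def overall(design):
    """Combine individual criterion measures (here via a product rule) into
    one evaluation measure.  `design` = (member_area, span) is hypothetical."""
    area, span = design
    weight_pref = preference(area * span, best=10.0, tol=8.0)  # material use
    drift_pref = preference(span / area, best=4.0, tol=3.0)    # stiffness proxy
    return weight_pref * drift_pref

def ga(pop_size=40, gens=60):
    """Generic real-coded GA: rank selection, averaging crossover, Gaussian
    mutation, maximizing the overall evaluation measure."""
    pop = [(random.uniform(0.5, 5), random.uniform(2, 20)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=overall, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            (a1, s1), (a2, s2) = random.sample(parents, 2)
            children.append(((a1 + a2) / 2 + random.gauss(0, 0.1),
                             (s1 + s2) / 2 + random.gauss(0, 0.5)))
        pop = parents + children
    return max(pop, key=overall)

best = ga()
print(f"best design {best}, overall measure {overall(best):.3f}")
```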
The methodology is demonstrated with several examples involving the design of truss and frame systems. These examples are solved using the proposed hGA and vGA.
Abstract:
As the worldwide prevalence of diabetes mellitus continues to increase, diabetic retinopathy remains the leading cause of visual impairment and blindness in many developed countries. Between 32 and 40 percent of the roughly 246 million people with diabetes develop diabetic retinopathy, and approximately 4.1 million American adults 40 years and older are affected by it. This glucose-induced microvascular disease progressively damages the tiny blood vessels that nourish the retina, the light-sensitive tissue at the back of the eye, leading to retinal ischemia (i.e., inadequate blood flow), retinal hypoxia (i.e., oxygen deprivation), and retinal nerve cell degeneration or death. It is one of the most serious sight-threatening complications of diabetes, resulting in significant irreversible vision loss and even total blindness.
Unfortunately, although current treatments of diabetic retinopathy (i.e., laser therapy, vitrectomy surgery, and anti-VEGF therapy) can reduce vision loss, they only slow down, but cannot stop, the degradation of the retina, and patients require repeated treatment to protect their sight. The current treatments also have significant drawbacks. Laser therapy preserves the macula, the area of the retina responsible for sharp, clear, central vision, by sacrificing the peripheral retina, since the available oxygen supply is limited; it therefore results in a constricted peripheral visual field, reduced color vision, delayed dark adaptation, and weakened night vision. Vitrectomy surgery increases the risk of neovascular glaucoma, another devastating ocular disease, characterized by the proliferation of fibrovascular tissue in the anterior chamber angle. Anti-VEGF agents have potential adverse effects, and currently there is insufficient evidence to recommend their routine use.
In this work, for the first time, a paradigm shift in the treatment of diabetic retinopathy is proposed: providing localized, supplemental oxygen to the ischemic tissue via an implantable MEMS device. The retinal architecture (e.g., thickness, cell densities, layered structure, etc.) of the rabbit eye exposed to ischemic hypoxic injuries was well preserved after targeted oxygen delivery to the hypoxic tissue, showing that the use of an external source of oxygen could improve retinal oxygenation and prevent the progression of the ischemic cascade.
The proposed MEMS device transports oxygen from an oxygen-rich space to the oxygen-deficient vitreous, the gel-like fluid that fills the inside of the eye, and then to the ischemic retina. This oxygen transport process is purely passive, driven entirely by the gradient of oxygen partial pressure (pO2). Two types of devices were designed. For the first type, the oxygen-rich space is underneath the conjunctiva, the membrane covering the sclera (the white part of the eye) beneath the eyelids, which is highly permeable to atmospheric oxygen when the eye is open; sub-conjunctival pO2 is therefore very high during the daytime. For the second type, the oxygen-rich space is inside the device, and pure oxygen is needle-injected into the device on a regular basis.
To prevent oxygen from permeating too quickly or too slowly through the device, which is made of parylene and silicone (two biocompatible polymers widely used in medical devices), the material properties of the hybrid parylene/silicone were investigated, including mechanical behavior, permeation rates, and adhesive forces. The thicknesses of the parylene and silicone layers then became important design parameters that were fine-tuned to reach the optimal oxygen permeation rate.
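The underlying sizing calculation is ordinary steady-state permeation; a rough sketch (all numerical values are illustrative placeholders, not the thesis's measured properties):

```python
# Steady-state oxygen permeation through a polymer membrane (Fick's law):
#   Q = P * A * delta_pO2 / L
# where P is the material permeability, A the membrane area, L its thickness,
# and delta_pO2 the partial-pressure difference driving the flux.
P = 8e-13         # permeability, mol*m/(m^2*s*Pa) (silicone-like, assumed)
A = 1e-6          # membrane area, m^2 (1 mm^2, assumed)
L = 50e-6         # membrane thickness, m (50 um, assumed)
delta_pO2 = 10e3  # pO2 gradient, Pa (~75 mmHg, assumed)

Q = P * A * delta_pO2 / L  # mol/s of O2 delivered
print(f"O2 delivery rate: {Q:.2e} mol/s")
# Thickness is the tunable knob: halving L doubles Q, which is why the
# parylene/silicone layer thicknesses are treated as design parameters.
```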
The passive MEMS oxygen transporter devices were designed, built, and tested in both bench-top artificial eye models and in-vitro porcine cadaver eyes. The 3D unsteady saccade-induced laminar flow of water inside the eye model was modeled by computational fluid dynamics to study the convective transport of oxygen inside the eye induced by saccade (rapid eye movement). The saccade-enhanced transport effect was also demonstrated experimentally. Acute in-vivo animal experiments were performed in rabbits and dogs to verify the surgical procedure and the device functionality. Various hypotheses were confirmed both experimentally and computationally, suggesting that both types of devices are very promising for treating diabetic retinopathy. The chronic implantation of devices in ischemic dog eyes is still underway.
The proposed MEMS oxygen transporter devices can also be applied to treat other ocular and systemic diseases accompanied by retinal ischemia, such as central retinal artery occlusion, carotid artery disease, and some forms of glaucoma.
Abstract:
Topological superconductors are particularly interesting in light of the active ongoing experimental efforts for realizing exotic physics such as Majorana zero modes. These systems have excitations with non-Abelian exchange statistics, which provides a path towards topological quantum information processing. Intrinsic topological superconductors are quite rare in nature. However, one can engineer topological superconductivity by inducing effective p-wave pairing in materials which can be grown in the laboratory. One possibility is to induce the proximity effect in topological insulators; another is to use hybrid structures of superconductors and semiconductors.
The proposal of interfacing s-wave superconductors with quantum spin Hall systems provides a promising route to engineered topological superconductivity. Given the exciting recent progress on the fabrication side, identifying experiments that definitively expose the topological superconducting phase (and clearly distinguish it from a trivial state) poses an increasingly important problem. With this goal in mind, we proposed a detection scheme to obtain an unambiguous signature of topological superconductivity, even in the presence of ordinarily detrimental effects such as thermal fluctuations and quasiparticle poisoning. We considered a Josephson junction built on top of a quantum spin Hall material. This system allows the proximity effect to turn the edge states into effective topological superconductors. Such a setup is promising because experimentalists have demonstrated that supercurrents indeed flow through quantum spin Hall edges. To demonstrate the topological nature of the superconducting quantum spin Hall edges, theorists have proposed examining the periodicity of Josephson currents with respect to the phase across a Josephson junction: the periodicity of the tunneling current in a topological superconductor Josephson junction is double that of a conventional one. In practice, this modification of the periodicity is extremely difficult to observe because noise sources, such as quasiparticle poisoning, wash out the signature of topological superconductivity. For this reason, we propose a new, relatively simple DC measurement that can compellingly reveal topological superconductivity in such quantum spin Hall/superconductor heterostructures. More specifically, we develop a general framework for capturing the junction's current-voltage characteristics as a function of applied magnetic flux. Our analysis reveals sharp signatures of topological superconductivity in the field-dependent critical current. These signatures include the presence of multiple critical currents and a non-vanishing critical current at all magnetic field strengths, providing a reliable identification scheme for topological superconductivity.
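The periodicity statement can be summarized compactly (the standard current-phase relations, sketched to show the distinction rather than the thesis's full junction model):

$$I_{\mathrm{conv}}(\varphi) = I_c \sin\varphi \quad (2\pi\text{-periodic}), \qquad I_{\mathrm{topo}}(\varphi) = I_c \sin(\varphi/2) \quad (4\pi\text{-periodic}).$$

The doubled period reflects single-electron (rather than Cooper-pair) tunneling through the Majorana bound states; quasiparticle poisoning flips the junction's fermion parity and restores an effective $2\pi$ period, which is why the DC critical-current signatures above are more robust.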
This system becomes even more interesting when interactions between electrons are involved. By modeling the edge states as a Luttinger liquid, we find that conductance provides universal signatures to distinguish between normal (trivial) and topological superconductors. More specifically, we use renormalization group methods to extract universal transport characteristics of superconductor/quantum spin Hall heterostructures where the native edge states serve as a lead. Interestingly, arbitrarily weak interactions induce qualitative changes in the behavior relative to the free-fermion limit, leading to a sharp dichotomy in conductance for the trivial (narrow superconductor) and topological (wide superconductor) cases. Furthermore, we find that strong interactions can in principle induce parafermion excitations at a superconductor/quantum spin Hall junction.
Having identified the existence of a topological superconductor, we can take a step further: one can use a topological superconductor to realize Majorana modes by breaking time-reversal symmetry. An advantage of 2D topological insulators is that the networks required for braiding Majoranas along the edge channels can be obtained by adjoining 2D topological insulators to form corner junctions. Physically cutting quantum wells for this purpose, however, presents technical challenges. For this reason, I propose a more accessible means of forming networks that relies on dynamically manipulating the location of edge states inside a single 2D topological insulator sheet. In particular, I show that edge states can effectively be dragged into the system's interior by gating a region near the edge into a metallic regime and then removing the resulting gapless carriers via proximity-induced superconductivity. This method allows one to construct rather general quasi-1D networks along which Majorana modes can be exchanged by electrostatic means.
Apart from 2D topological insulators, Majorana fermions can also be generated in other, more accessible materials such as semiconductors. Following up on a suggestion by the experimentalist Charlie Marcus, I proposed a novel geometry for creating Majorana fermions by placing a 2D electron gas in proximity to an interdigitated superconductor-ferromagnet structure. This architecture evades several manufacturing challenges by allowing single-side fabrication and widening the class of 2D electron gases that may be used, such as the surface states of bulk semiconductors. Furthermore, it naturally allows one to trap and manipulate Majorana fermions through the application of currents. Thus, this structure may lead to the development of a circuit that enables fully electrical manipulation of topologically protected quantum memory. To reveal these exotic Majorana zero modes, I also proposed an interference scheme for detecting Majorana fermions that is broadly applicable to any 2D topological superconductor platform.
Abstract:
The Los Angeles Harbor at San Pedro, with its natural advantages and the major developments of these now underway, will very soon be the key to the traffic routes of Southern California. The Atchison, Topeka and Santa Fe Railway Company, realizing this and not wishing to be caught asleep, has planned to build a line from El Segundo to the harbor. The developments of the harbor are not the only developments taking place in these localities, and the proposed new line is intended to serve these as well.
Abstract:
The hydroxyketone C-3, an intermediate in the stereoselective total synthesis of dl-desoxypodocarpic acid (ii), has been shown by both degradative and synthetic pathways to rearrange in the presence of base to the diosphenol E-1 (5-isoabietic acid series). The exact spatial arrangements of the systems represented by formulas C-3 and E-1 have been investigated (as the p-bromobenzoates) by single-crystal X-ray diffraction analyses. The hydroxyketone F-1, the proposed intermediate in the rearrangement, has been synthesized. Its conversion to diosphenol E-1 has been studied, and a single-crystal analysis of the p-bromobenzoate derivative has been performed. The initially desired diosphenol C-6 has been prepared and shown to be stable under the potassium t-butoxide rearrangement conditions. Oxidative cleavage of diosphenol E-1 and subsequent cyclization with the aid of polyphosphoric acid has been shown to lead to keto acid I-2 (benzobicyclo[3.3.1]nonane series) rather than keto acid H-2 (5-isoabietic acid series).
Abstract:
A mathematical model is proposed in this thesis for the control mechanism of free fatty acid-glucose metabolism in healthy individuals under resting conditions. The objective is to explain in a consistent manner some clinical laboratory observations, such as the glucose, insulin, and free fatty acid responses to intravenous injection of glucose, insulin, etc. Only responses up to about two hours from the beginning of infusion are considered. The model is an extension of the one for glucose homeostasis proposed by Charette, Kadish and Sridhar (Modeling and Control Aspects of Glucose Homeostasis, Mathematical Biosciences, 1969). It is based upon a systems approach and agrees with the current theories of glucose and free fatty acid metabolism. The description is in terms of ordinary differential equations. Validation of the model is based on the clinical laboratory data available at the present time. Finally, procedures are suggested for systematically identifying the parameters associated with the free fatty acid portion of the model.
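A minimal sketch of the kind of compartment model involved (a generic linear glucose-insulin-FFA system; all rate constants are made-up placeholders, and the thesis's equations and identification procedures are more detailed):

```python
import numpy as np
from scipy.integrate import solve_ivp

def metabolism(t, y):
    """Toy linear compartment model: deviations of glucose (g), insulin (i),
    and free fatty acids (f) from their basal levels.  All coefficients are
    illustrative placeholders, not identified parameters."""
    g, i, f = y
    dg = -0.05 * g - 0.5 * i   # insulin promotes glucose uptake
    di = 0.04 * g - 0.08 * i   # glucose stimulates insulin secretion
    df = -0.3 * i - 0.02 * f   # insulin suppresses FFA release
    return [dg, di, df]

# Response to an intravenous glucose injection: glucose starts elevated.
sol = solve_ivp(metabolism, t_span=(0, 120), y0=[100.0, 0.0, 0.0],
                t_eval=np.linspace(0, 120, 7))  # minutes, about two hours
for t, g, i, f in zip(sol.t, *sol.y):
    print(f"t={t:5.1f} min  glucose={g:7.2f}  insulin={i:6.2f}  FFA={f:7.2f}")
```

Parameter identification then amounts to fitting such rate constants to the clinical response curves, which is the procedure the final part of the abstract refers to.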