9 results for Hierarchy of beings
in CaltechTHESIS
Abstract:
Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security.
At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level.
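For orientation (our schematic, not the thesis's statement): for a code of distance 3 concatenated over L levels, the threshold theorem bounds the logical error rate as

\[
p_L \;\lesssim\; p_{\mathrm{th}} \left( \frac{p}{p_{\mathrm{th}}} \right)^{2^{L}},
\]

so any target error rate is reachable with modest overhead, provided the physical error rate p lies below the threshold p_th.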
In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations.
In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction.
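To see schematically why asymmetry saves resources (our illustration, with heuristic exponents): for a code with separate Z- and X-type distances d_Z and d_X, the logical error rate behaves roughly as

\[
p_L \;\sim\; \left(\frac{\epsilon_Z}{\epsilon_{\mathrm{th}}}\right)^{\lceil d_Z/2 \rceil} + \left(\frac{\epsilon_X}{\epsilon_{\mathrm{th}}}\right)^{\lceil d_X/2 \rceil},
\]

so under strong dephasing bias ($\epsilon_Z \gg \epsilon_X$) the two terms can be balanced by choosing $d_Z > d_X$, protecting against the dominant error type without paying the qubit cost of a symmetric code of distance d_Z.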
In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates, and a second rate for errors in the distilled states, which decreases as the states are distilled to better quality. The interplay of these different rates sets limits on the achievable distillation quality and on how quickly states converge to that limit.
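A toy model of this interplay (ours, patterned on the 15-to-1 protocol of Bravyi and Kitaev rather than on the thesis's exact protocols): with ideal Cliffords, one distillation round maps an input error rate $\epsilon_n$ to roughly $35\,\epsilon_n^3$, while Cliffords faulty at rate p add an error floor each round,

\[
\epsilon_{n+1} \;\approx\; 35\,\epsilon_n^{3} + O(p),
\]

so iteration improves the states cubically at first but stalls at a fixed point $\epsilon_* = O(p)$ set by the Clifford noise; the achievable limit and the rate of convergence toward it are what the chapter quantifies across protocols.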
Abstract:
The Madden-Julian Oscillation (MJO) is a pattern of intense rainfall and associated planetary-scale circulations in the tropical atmosphere, with a recurrence interval of 30-90 days. Although the MJO was first discovered 40 years ago, it remains a challenge to simulate in general circulation models (GCMs), and even for simple models there is little agreement on its basic mechanisms. This deficiency stems mainly from our poor understanding of moist convection (deep cumulus clouds and thunderstorms), which occurs at scales smaller than the resolution elements of GCMs. Moist convection is the most important mechanism for transporting energy from the ocean to the atmosphere. Success in simulating the MJO will improve our understanding of moist convection and thereby improve weather and climate forecasting.
We address this fundamental subject by analyzing observational datasets, constructing a hierarchy of numerical models, and developing theories. Parameters of the models are taken from observation, and the simulated MJO fits the data without further adjustments. The major findings include: 1) the MJO may be an ensemble of convection events linked together by small-scale high-frequency inertia-gravity waves; 2) the eastward propagation of the MJO is determined by the difference between the eastward and westward phase speeds of the waves; 3) the planetary scale of the MJO is the length over which temperature anomalies can be effectively smoothed by gravity waves; 4) the strength of the MJO increases with the typical strength of convection, which increases in a warming climate; 5) the horizontal scale of the MJO increases with the spatial frequency of convection; and 6) triggered convection, where potential energy accumulates until a threshold is reached, is important in simulating the MJO. Our findings challenge previous paradigms, which consider the MJO as a large-scale mode, and point to ways for improving the climate models.
Abstract:
The Hamilton-Jacobi-Bellman (HJB) equation is central to stochastic optimal control (SOC) theory, yielding the optimal solution to general problems specified by known dynamics and a cost functional. Under the assumption of quadratic cost on the control input, it is well known that the HJB reduces to a particular partial differential equation (PDE). While powerful, this reduction is not commonly used, as the PDE is of second order, is nonlinear, and examples exist where the problem may not have a solution in a classical sense. Furthermore, each state of the system appears as another dimension of the PDE, giving rise to the curse of dimensionality: since the number of degrees of freedom required to solve the optimal control problem grows exponentially with dimension, the problem becomes intractable for all but systems of modest dimension.
In the last decade researchers have found that under certain, fairly non-restrictive structural assumptions, the HJB may be transformed into a linear PDE, with an interesting analogue in the discretized domain of Markov Decision Processes (MDP). The work presented in this thesis uses the linearity of this particular form of the HJB PDE to push the computational boundaries of stochastic optimal control.
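For concreteness, the standard linearly solvable setting (a sketch in the notation of, e.g., Todorov and Kappen; the thesis's assumptions are of this type but stated in its own form) has dynamics $dx = (f(x) + G(x)u)\,dt + B(x)\,d\omega$ and cost rate $q(x) + \tfrac{1}{2}u^{\top} R\,u$. Under the structural assumption $\lambda\, G R^{-1} G^{\top} = B \Sigma_{\epsilon} B^{\top}$, the logarithmic transformation $V = -\lambda \log \Psi$ cancels the quadratic nonlinearity, and the stationary HJB becomes linear in the desirability $\Psi$:

\[
0 \;=\; -\frac{q}{\lambda}\,\Psi \;+\; f^{\top}\nabla\Psi \;+\; \tfrac{1}{2}\,\mathrm{Tr}\!\left(B\Sigma_{\epsilon} B^{\top}\,\nabla^{2}\Psi\right).
\]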
This is done by crafting together previously disjoint lines of research in computation. The first of these is the use of Sum of Squares (SOS) techniques for the synthesis of control policies. A candidate polynomial with variable coefficients is proposed as the solution to the stochastic optimal control problem. An SOS relaxation is then applied to the partial differential constraints, leading to a hierarchy of semidefinite relaxations with shrinking sub-optimality gap. The resulting approximate solutions are shown to be guaranteed over- and under-approximations of the optimal value function. It is shown that these results extend to arbitrary parabolic and elliptic PDEs, yielding a novel method for Uncertainty Quantification (UQ) of systems governed by partial differential constraints. Domain decomposition techniques are also made available, allowing for such problems to be solved via parallelization and low-order polynomials.
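As a minimal sketch of the SOS-to-SDP mechanism (our toy example, not the thesis's code; assumes cvxpy with its default SDP-capable solver): a polynomial is certified as a sum of squares by finding a positive semidefinite Gram matrix Q with p(x) = m(x)^T Q m(x) over a monomial basis m(x). Applied instead to the PDE residual of a candidate polynomial value function, the same feasibility machinery yields the semidefinite relaxations described above.

    import cvxpy as cp
    import numpy as np

    # Certify p(x) = 1 - 2x^2 + x^4 = (x^2 - 1)^2 as a sum of squares.
    # Coefficients of p in increasing degree: 1, 0, -2, 0, 1.
    coeffs = np.array([1.0, 0.0, -2.0, 0.0, 1.0])

    # Gram matrix over the monomial basis m(x) = [1, x, x^2].
    Q = cp.Variable((3, 3), PSD=True)

    # Match coefficients of m(x)^T Q m(x) with those of p, degree by degree.
    constraints = [
        Q[0, 0] == coeffs[0],                # constant term
        2 * Q[0, 1] == coeffs[1],            # x
        2 * Q[0, 2] + Q[1, 1] == coeffs[2],  # x^2
        2 * Q[1, 2] == coeffs[3],            # x^3
        Q[2, 2] == coeffs[4],                # x^4
    ]

    problem = cp.Problem(cp.Minimize(0), constraints)
    problem.solve()
    print(problem.status)  # "optimal" => an SOS certificate (hence p >= 0) exists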
The optimization-based SOS technique is then contrasted with the Separated Representation (SR) approach from the applied mathematics community. The SR technique allows systems of equations to be solved through a low-rank decomposition, yielding algorithms that scale linearly with dimensionality. Its application to stochastic optimal control allows previously uncomputable problems to be solved quickly, scaling to systems as complex as quadcopter and VTOL aircraft models. This technique may be combined with the SOS approach, yielding not only a numerical technique but also an analytical one that allows entirely new classes of systems to be studied and stability properties to be guaranteed.
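The storage economy behind the SR idea can be shown in a few lines (illustrative only; the dimensions, grid size, and rank below are made up): a rank-r separated representation keeps d*r one-dimensional factors rather than an n^d grid, so storage and evaluation scale linearly in d.

    import numpy as np

    d, n, r = 10, 64, 5  # hypothetical: 10 dimensions, 64 grid points each, rank 5
    rng = np.random.default_rng(1)

    # factors[k][l]: 1D profile of separated term l along dimension k
    factors = [[rng.standard_normal(n) for _ in range(r)] for _ in range(d)]

    def evaluate(idx):
        """f(x_1,...,x_d) = sum_l prod_k g_{l,k}(x_k), evaluated at grid index idx."""
        return sum(
            np.prod([factors[k][l][idx[k]] for k in range(d)])
            for l in range(r)
        )

    print(evaluate([0] * d))
    # Storage: d * r * n = 3,200 numbers, versus n**d ~ 1.2e18 for the full grid.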
The analysis of the linear HJB concludes with a study of its implications in applications. It is shown that the HJB and a popular technique in robotics, the use of navigation functions, sit at opposite ends of a spectrum of optimization problems along which tradeoffs in problem complexity may be made. Analytical solutions to the HJB in these settings are available in simplified domains, providing guidance toward optimality for approximation schemes. Finally, the use of HJB equations in temporal multi-task planning problems is investigated. It is demonstrated that such problems are reducible to a sequence of SOC problems linked via boundary conditions. The linearity of the PDE allows us to pre-compute control policy primitives and then compose them, at essentially zero cost, to satisfy a complex temporal logic specification.
Abstract:
Fluvial systems form landscapes and sedimentary deposits with a rich hierarchy of structures that extend from grain to valley scale. Large-scale pattern formation in fluvial systems is commonly attributed to forcing by external factors, including climate change, tectonic uplift, and sea-level change. Yet over geologic timescales, rivers may also develop large-scale erosional and depositional patterns that do not reflect environmental history. This dissertation uses a combination of numerical modeling and topographic analysis to identify and quantify patterns in river valleys that form as a consequence of river meandering alone, under constant external forcing. Chapter 2 identifies a numerical artifact in existing, grid-based models that represent the co-evolution of river channel migration and bank strength over geologic timescales. A new, vector-based technique for bank-material tracking is shown to improve predictions for the evolution of meander belts, floodplains, sedimentary deposits formed by aggrading channels, and bedrock river valleys, particularly when spatial contrasts in bank strength are strong. Chapters 3 and 4 apply this numerical technique to establish the valley topography formed by a vertically incising, meandering river subject to constant external forcing, which should serve as the null hypothesis for valley evolution. In Chapter 3, this scenario is shown to explain a variety of common bedrock river valley types and smaller-scale features within them, including entrenched channels; long-wavelength, arcuate scars in valley walls; and bedrock-cored river terraces. Chapter 4 describes the age and geometric statistics of river terraces formed by meandering under constant external forcing and compares them to terraces in natural river valleys. The frequency of intrinsic terrace formation by meandering is shown to reflect a characteristic relief-generation timescale, and terrace length is identified as a key criterion for distinguishing these terraces from terraces formed by externally forced pulses of vertical incision. In a separate study, Chapter 5 uses image and topographic data from the Mars Reconnaissance Orbiter to quantitatively identify spatial structures in the polar layered deposits of Mars, identifying sequences of beds, consistently 1-2 meters thick, that have accumulated hundreds of kilometers apart in the north polar layered deposits.
Abstract:
This thesis advances our understanding of midlatitude storm tracks and how they respond to perturbations in the climate system. The midlatitude storm tracks are regions of maximal turbulent kinetic energy in the atmosphere. Through them, the bulk of the atmospheric transport of energy, water vapor, and angular momentum occurs in midlatitudes. Therefore, they are important regulators of climate, controlling basic features such as the distribution of surface temperatures, precipitation, and winds in midlatitudes. Storm tracks are robustly projected to shift poleward in global-warming simulations with current climate models. Yet the reasons for this shift have remained unclear. Here we show that this shift occurs even in extremely idealized (but still three-dimensional) simulations of dry atmospheres. We use these simulations to develop an understanding of the processes responsible for the shift and develop a conceptual model that accounts for it.
We demonstrate that changes in the convective static stability in the deep tropics alone can drive remote shifts in the midlatitude storm tracks. Through simulations with a dry idealized general circulation model (GCM), midlatitude storm tracks are shown to be located where the mean available potential energy (MAPE, a measure of the potential energy available to be converted into kinetic energy) is maximal. As the climate varies, even if only driven by tropical static stability changes, the MAPE maximum shifts primarily because of shifts of the maximum of near-surface meridional temperature gradients. The temperature gradients shift in response to changes in the width of the tropical Hadley circulation, whose width is affected by the tropical static stability. Storm tracks generally shift in tandem with shifts of the subtropical terminus of the Hadley circulation.
We develop a one-dimensional diffusive energy-balance model that links changes in the Hadley circulation to midlatitude temperature gradients and thereby to the storm tracks. It is the first conceptual model to incorporate a dynamical coupling between the tropical Hadley circulation and midlatitude turbulent energy transport. Numerical and analytical solutions of the model elucidate when and how the storm tracks shift in tandem with the terminus of the Hadley circulation. They illustrate how an increase of only the convective static stability in the deep tropics can lead to an expansion of the Hadley circulation and a poleward shift of the storm tracks.
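A schematic form of such an energy-balance model (our notation, not necessarily the thesis's): the zonal-mean surface temperature T(y) evolves as

\[
C\,\frac{\partial T}{\partial t} \;=\; \frac{\partial}{\partial y}\!\left(D\,\frac{\partial T}{\partial y}\right) \;+\; S(y) \;-\; \big(A + B\,T\big),
\]

where the diffusivity D stands in for midlatitude turbulent energy transport, S(y) is the absorbed solar flux, and A + BT parameterizes outgoing longwave radiation; one way to realize the dynamical coupling described above is to replace the diffusive transport with a Hadley-cell transport law equatorward of the circulation's terminus, whose position then enters the midlatitude problem as a moving boundary.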
The simulations with the idealized GCM and the conceptual energy-balance model demonstrate a clear link between Hadley circulation dynamics and midlatitude storm track position. With the help of the hierarchy of models presented in this thesis, we obtain a closed theory of storm track shifts in dry climates. The relevance of this theory for more realistic moist climates is discussed.
Abstract:
How powerful are Quantum Computers? Despite the prevailing belief that Quantum Computers are more powerful than their classical counterparts, this remains a conjecture backed by little formal evidence. Shor's famous factoring algorithm [Shor97] gives an example of a problem that can be solved efficiently on a quantum computer with no known efficient classical algorithm. Factoring, however, is unlikely to be NP-Hard, meaning that few unexpected formal consequences would arise, should such a classical algorithm be discovered. Could it then be the case that any quantum algorithm can be simulated efficiently classically? Likewise, could it be the case that Quantum Computers can quickly solve problems much harder than factoring? If so, where does this power come from, and what classical computational resources do we need to solve the hardest problems for which there exist efficient quantum algorithms?
We make progress toward understanding these questions through studying the relationship between classical nondeterminism and quantum computing. In particular, is there a problem that can be solved efficiently on a Quantum Computer that cannot be efficiently solved using nondeterminism? In this thesis we address this problem from the perspective of sampling problems. Namely, we give evidence that approximately sampling the Quantum Fourier Transform of an efficiently computable function, while easy quantumly, is hard for any classical machine in the Polynomial Time Hierarchy. In particular, we prove the existence of a class of distributions that can be sampled efficiently by a Quantum Computer, that likely cannot be approximately sampled in randomized polynomial time with an oracle for the Polynomial Time Hierarchy.
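To make the sampling task concrete, here is a brute-force classical illustration (ours, with a hypothetical random f; it enumerates all 2^n amplitudes and so is tractable only at toy sizes, which is exactly the point): the quantum sampler prepares the normalized superposition $\sum_x f(x)\,|x\rangle$, applies the QFT, and measures, obtaining y with probability proportional to $|\hat{f}(y)|^2$.

    import numpy as np

    n = 4
    N = 2 ** n
    rng = np.random.default_rng(0)

    # Hypothetical "efficiently computable" Boolean function, tabulated explicitly here.
    f = rng.integers(0, 2, size=N).astype(float)

    psi = f / np.linalg.norm(f)          # normalized superposition sum_x f(x)|x>
    phi = np.fft.fft(psi) / np.sqrt(N)   # QFT over Z_N (unitary normalization)
    p = np.abs(phi) ** 2                 # Born-rule measurement distribution
    p /= p.sum()                         # guard against floating-point round-off

    print(rng.choice(N, size=8, p=p))    # classical samples from the QFT distribution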
Our work complements and generalizes the evidence given in Aaronson and Arkhipov's work [AA2013] where a different distribution with the same computational properties was given. Our result is more general than theirs, but requires a more powerful quantum sampler.
Abstract:
Close to equilibrium, a normal Bose or Fermi fluid can be described by an exact kinetic equation whose kernel is nonlocal in space and time. The general expression derived for the kernel is evaluated to second order in the interparticle potential. The result is a wavevector- and frequency-dependent generalization of the linear Uehling-Uhlenbeck kernel with the Born approximation cross section.
The theory is formulated in terms of second-quantized phase space operators whose equilibrium averages are the n-particle Wigner distribution functions. Convenient expressions for the commutators and anticommutators of the phase space operators are obtained. The two-particle equilibrium distribution function is analyzed in terms of momentum-dependent quantum generalizations of the classical pair distribution function h(k) and direct correlation function c(k). The kinetic equation is presented as the equation of motion of a two-particle correlation function, the phase space density-density anticommutator, and is derived by a formal closure of the quantum BBGKY hierarchy. An alternative derivation using a projection operator is also given. It is shown that the method used for approximating the kernel by a second-order expansion preserves all the sum rules to the same order, and that the second-order kernel satisfies the appropriate positivity and symmetry conditions.
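For orientation, the equation whose linearized kernel is being generalized is the Uehling-Uhlenbeck equation, schematically (upper signs for bosons, lower for fermions, $\sigma$ the Born-approximation cross section):

\[
\left(\frac{\partial}{\partial t} + \mathbf{v}_1\!\cdot\!\nabla_{\mathbf{r}}\right) f_1
= \int d^3 v_2\, d\Omega\;\sigma\,|\mathbf{v}_1-\mathbf{v}_2|
\left[ f_1' f_2' (1 \pm f_1)(1 \pm f_2) - f_1 f_2 (1 \pm f_1')(1 \pm f_2') \right],
\]

where primes denote post-collision momenta; the wavevector- and frequency-dependent kernel derived in the thesis reduces to the linearization of this local form in the appropriate limit.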
Abstract:
Electronic structures and dynamics are the key to linking the material composition and structure to functionality and performance.
An essential issue in developing semiconductor devices for photovoltaics is to design materials with optimal band gaps and relative positioning of band levels. Approximate DFT methods have been justified for predicting band gaps from KS/GKS eigenvalues, but the accuracy depends decisively on the choice of XC functional. We show here that for CuInSe2 and CuGaSe2, the parent compounds of the promising CIGS solar cells, conventional LDA and GGA obtain gaps of 0.0-0.01 and 0.02-0.24 eV (versus experimental values of 1.04 and 1.67 eV), while the historically first global hybrid functional, B3PW91, is surprisingly the best, with band gaps of 1.07 and 1.58 eV. Furthermore, we show that for 27 related binary and ternary semiconductors, B3PW91 predicts gaps with a mean absolute deviation (MAD) of only 0.09 eV, substantially better than all modern hybrid functionals, including B3LYP (MAD of 0.19 eV) and the screened hybrid functional HSE06 (MAD of 0.18 eV).
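For the two parent compounds quoted above, the reported statistic is simple to reproduce (a two-point illustration of the MAD computed for the full 27-compound set):

    # Band gaps in eV: B3PW91 predictions vs experiment (values quoted above).
    predicted = [1.07, 1.58]    # CuInSe2, CuGaSe2
    experiment = [1.04, 1.67]

    mad = sum(abs(p - e) for p, e in zip(predicted, experiment)) / len(predicted)
    print(f"MAD = {mad:.2f} eV")  # 0.06 eV for these two compounds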
The laboratory performance of CIGS solar cells (> 20% efficiency) makes them promising candidate photovoltaic devices. However, there remains little understanding of how defects at the CIGS/CdS interface affect the band offsets and interfacial energies, and hence the performance of manufactured devices. To determine these relationships, we use the B3PW91 hybrid functional of DFT together with the AEP method, which we validate to provide very accurate descriptions of both band gaps and band offsets. This confirms the weak dependence of band offsets on surface orientation observed experimentally. We predict that the conduction band offset (CBO) of the perfect CuInSe2/CdS interface is large, 0.79 eV, which would dramatically degrade performance. Moreover, we show that the band gap widening induced by Ga adjusts only the valence band offset (VBO), and we find that Cd impurities do not significantly affect the CBO. Thus we show that Cu vacancies at the interface play the key role in enabling the tunability of the CBO. We predict that Na further improves performance by electrostatically elevating the valence levels to decrease the CBO, explaining the observed essential role of Na in high-performance devices. Moreover, we find that K leads to a dramatic decrease in the CBO, to 0.05 eV, much better than Na. We suggest that the efficiency of CIGS devices might be improved substantially by tuning the ratio of Na to K, with the improved phase stability from Na balancing the phase instability from K. All these defects reduce interfacial stability slightly, but not significantly.
A number of exotic structures have been formed through high-pressure chemistry, but applications have been hindered by difficulties in recovering the high-pressure phase to ambient conditions (i.e., one atmosphere and room temperature). Here we use dispersion-corrected DFT (PBE-ulg flavor) to predict that above 60 GPa the most stable form of N2O (laughing gas in its molecular form) is a 1D polymer with an all-nitrogen backbone, analogous to cis-polyacetylene, in which alternate N are bonded (ionic covalent) to O. The analogous trans-polymer is only 0.03-0.10 eV/molecular unit less stable. Upon decompression toward ambient conditions, both polymers relax below 14 GPa to the same stable non-planar trans-polymer, accompanied by possible electronic structure transitions. The predicted phonon spectrum and dissociation kinetics validate the stability of this trans-poly-NNO at ambient conditions, which has potential applications as a new type of conducting polymer with all-nitrogen chains and as a high-energy oxidizer for rocket propulsion. This work illustrates in silico materials discovery, particularly in the realm of extreme conditions.
Modeling non-adiabatic electron dynamics has been a long-standing challenge for computational chemistry and materials science, and the electron force field (eFF) method presents a cost-efficient alternative. However, owing to deficiencies of the floating spherical Gaussian (FSG) representation, eFF is limited to low-Z elements with electrons of predominantly s-character. To overcome this, we introduce a formal set of effective core potential (ECP) extensions that enable an accurate description of p-block elements. The extensions consist of a model in which the core electrons and the nucleus are represented as a single FSG pseudo-particle that interacts with valence electrons through ECPs. We demonstrate and validate the ECP extensions for complex bonding structures, geometries, and energetics of systems with p-block character (C, O, Al, Si) and apply them to study materials under extreme mechanical loading conditions.
Despite its success, the eFF framework has some limitations, originating from both the design of the Pauli potentials and the FSG representation. To overcome these, we develop a new two-level hierarchical framework that is a more rigorous and accurate successor to the eFF method. The fundamental level, GHA-QM, is based on a new set of Pauli potentials that render exact QM-level accuracy for any FSG-represented electron system. To achieve this, we start from exactly derived energy expressions for the same-spin electron pair and fit a simple functional form, inspired by DFT, against open-singlet electron pair curves (H2 systems). Symmetric and asymmetric scaling factors are then introduced at this level to recover the QM total energies of multiple-electron-pair systems from the sum of local interactions. To complement the imperfect FSG representation, the AMPERE extension is implemented, aiming to embed the interactions associated with both the cusp condition and explicit nodal structures. The whole GHA-QM+AMPERE framework is tested on the H element, and the preliminary results are promising.
Abstract:
A variety of neural signals have been measured as correlates of consciousness. In particular, late current sinks in layer 1, distributed activity across the cortex, and feedback processing have all been implicated. What are the physiological underpinnings of these signals? What computational role do they play in the brain? Why do they correlate with consciousness? This thesis begins to answer these questions by focusing on the pyramidal neuron. As the primary communicator of long-range feedforward and feedback signals in the cortex, the pyramidal neuron is set up to play an important role in establishing distributed representations. Additionally, its dendritic extent, reaching layer 1, is well situated to receive feedback inputs and contribute to current sinks in the upper layers. An investigation of pyramidal neuron physiology is therefore necessary to understand how the brain creates, and potentially uses, the neural correlates of consciousness. An important part of this thesis is establishing the computational role that dendritic physiology plays. To do this, a combined experimental and modeling approach is used.
This thesis begins with single-cell experiments in layer 5 and layer 2/3 pyramidal neurons. In both cases, dendritic nonlinearities are characterized and found to be integral regulators of neural output. Particular attention is paid to calcium spikes and NMDA spikes, which both exist in the apical dendrites, at considerable distances from the spike initiation zone. These experiments are then used to create detailed multicompartmental models. These models are used to test hypotheses regarding the spatial distribution of membrane channels, to quantify the effects of certain experimental manipulations, and to establish the computational properties of the single cell. We find that pyramidal neuron physiology can carry out a coincidence detection mechanism. Further abstraction of these models reveals potential mechanisms for spike-time control, frequency modulation, and tuning. Finally, a set of experiments is carried out to establish the effect of long-range feedback inputs onto the pyramidal neuron. A final discussion then explores a potential way in which the physiology of pyramidal neurons can establish distributed representations and contribute to consciousness.
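As a cartoon of the coincidence-detection finding (our toy abstraction, not one of the thesis's multicompartmental models): a rule in the spirit of BAC firing converts a single somatic spike into a burst only when a distal apical input arrives within a narrow coincidence window.

    # Toy abstraction (ours): dendritic coincidence detection as a timing rule.
    def response(t_somatic, t_apical, window=0.005):
        """Burst only if somatic and apical events coincide within `window` seconds."""
        return "burst" if abs(t_somatic - t_apical) <= window else "single spike"

    print(response(0.100, 0.102))  # coincident inputs -> burst
    print(response(0.100, 0.150))  # non-coincident   -> single spike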