9 results for Taphonomic signatures in CaltechTHESIS
Abstract:
Topological superconductors are particularly interesting in light of the active ongoing experimental efforts for realizing exotic physics such as Majorana zero modes. These systems host excitations with non-Abelian exchange statistics, which provide a path towards topological quantum information processing. Intrinsic topological superconductors are quite rare in nature. However, one can engineer topological superconductivity by inducing effective p-wave pairing in materials which can be grown in the laboratory. One possibility is to induce the proximity effect in topological insulators; another is to use hybrid structures of superconductors and semiconductors.
The proposal of interfacing s-wave superconductors with quantum spin Hall systems provides a promising route to engineered topological superconductivity. Given the exciting recent progress on the fabrication side, identifying experiments that definitively expose the topological superconducting phase (and clearly distinguish it from a trivial state) has become an increasingly important problem. With this goal in mind, we proposed a detection scheme that yields an unambiguous signature of topological superconductivity, even in the presence of ordinarily detrimental effects such as thermal fluctuations and quasiparticle poisoning. We considered a Josephson junction built on top of a quantum spin Hall material. This system allows the proximity effect to turn the edge states into effective topological superconductors. Such a setup is promising because experimentalists have demonstrated that supercurrents indeed flow through quantum spin Hall edges. To demonstrate the topological nature of the superconducting quantum spin Hall edges, theorists have proposed examining the periodicity of Josephson currents with respect to the phase across a Josephson junction. The periodicity of the tunneling current in a topological superconductor Josephson junction is double that of a conventional Josephson junction. In practice, this modification of the periodicity is extremely difficult to observe because noise sources, such as quasiparticle poisoning, wash out the signature of topological superconductivity. For this reason, we propose a new, relatively simple DC measurement that can compellingly reveal topological superconductivity in such quantum spin Hall/superconductor heterostructures. More specifically, we develop a general framework for capturing the junction's current-voltage characteristics as a function of applied magnetic flux. Our analysis reveals sharp signatures of topological superconductivity in the field-dependent critical current.
These signatures, the presence of multiple critical currents and a non-vanishing critical current at all magnetic field strengths, provide a reliable identification scheme for topological superconductivity.
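The periodicity contrast underlying this proposal can be summarized with the standard textbook current-phase relations for such junctions (a generic sketch, not the specific model developed in the thesis):

```latex
% Conventional junction: 2\pi-periodic current-phase relation
I_{\mathrm{conv}}(\varphi) = I_c \sin\varphi ,
% Topological junction: Majorana bound states mediate a 4\pi-periodic term
I_{\mathrm{topo}}(\varphi) = \pm\, I_M \sin\!\frac{\varphi}{2} ,
```

where the sign tracks the fermion parity of the Majorana pair. Quasiparticle poisoning flips this parity and restores an effective 2π periodicity in time-averaged measurements, which is why a DC critical-current measurement that does not rely on the 4π periodicity is valuable.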
This system becomes even more interesting when electron-electron interactions are involved. By modeling the edge states as a Luttinger liquid, we find that the conductance provides universal signatures to distinguish between normal and topological superconductors. More specifically, we use renormalization-group methods to extract universal transport characteristics of superconductor/quantum spin Hall heterostructures in which the native edge states serve as a lead. Interestingly, arbitrarily weak interactions induce qualitative changes in the behavior relative to the free-fermion limit, leading to a sharp dichotomy in conductance between the trivial (narrow superconductor) and topological (wide superconductor) cases. Furthermore, we find that strong interactions can in principle induce parafermion excitations at a superconductor/quantum spin Hall junction.
Having identified a topological superconductor, one can take a step further and use it to realize Majorana modes by breaking time-reversal symmetry. An advantage of 2D topological insulators is that the networks required for braiding Majoranas along the edge channels can be obtained by adjoining 2D topological insulators to form corner junctions. Physically cutting quantum wells for this purpose, however, presents technical challenges. For this reason, I propose a more accessible means of forming networks that relies on dynamically manipulating the location of edge states inside a single 2D topological insulator sheet. In particular, I show that edge states can effectively be dragged into the system's interior by gating a region near the edge into a metallic regime and then removing the resulting gapless carriers via proximity-induced superconductivity. This method allows one to construct rather general quasi-1D networks along which Majorana modes can be exchanged by electrostatic means.
Apart from 2D topological insulators, Majorana fermions can also be generated in other, more accessible materials such as semiconductors. Following up on a suggestion by experimentalist Charlie Marcus, I proposed a novel geometry to create Majorana fermions by placing a 2D electron gas in proximity to an interdigitated superconductor-ferromagnet structure. This architecture evades several manufacturing challenges by allowing single-side fabrication and widening the class of 2D electron gases that may be used, such as the surface states of bulk semiconductors. Furthermore, it naturally allows one to trap and manipulate Majorana fermions through the application of currents. Thus, this structure may lead to the development of a circuit that enables fully electrical manipulation of topologically-protected quantum memory. To reveal these exotic Majorana zero modes, I also proposed an interference scheme to detect Majorana fermions that is broadly applicable to any 2D topological superconductor platform.
Abstract:
Microbes have profoundly influenced the Earth’s environments through time. Records of these interactions come primarily from the development and implementation of proxies that relate known modern processes to chemical signatures in the sedimentary record. This thesis is presented in two parts, focusing first on novel proxy development in the modern and second on interpretation of past environments using well-established methods. Part 1, presented in two chapters, builds on previous observations that different microbial metabolisms produce vastly different lipid hydrogen isotopic compositions. Chapter 1 evaluates the potential environmental expression of metabolism-based fractionation differences by exploiting the natural microbial community gradients in hydrothermal springs. We find a very large range in isotopic composition that can be demonstrably linked to the microbial source(s) of the fatty acids at each sample site. In Chapter 2, anaerobic culturing techniques are used to evaluate the hydrogen isotopic fractionations produced by anaerobic microbial metabolisms. Although the observed fractionation patterns are similar to those reported for aerobic cultures for some organisms, others show large differences. Part 2 changes focus from the modern to the ancient and uses classical stratigraphic methods combined with isotope stratigraphy to interpret microbial and environmental changes during the latest Precambrian Era. Chapter 3 presents a detailed characterization of the facies, parasequence development, and stratigraphic architecture of the Ediacaran Khufai Formation. Chapter 4 presents measurements of carbon, oxygen, and sulfur isotopic ratios in stratigraphic context. Large oscillations in the isotopic composition of sulfate constrain the size of the marine sulfate reservoir and suggest incorporation of an enriched isotopic source. 
Because these data were measured in stratigraphic context, we can assert with confidence that the isotopic shifts are not related to stratigraphic surfaces or facies type but instead reflect the evolution of the ocean through time. These data integrate into the global chemostratigraphic record and contribute to the emerging picture of changing marine chemistry during the latest Precambrian Era.
Abstract:
The dynamic properties of a structure are a function of its physical properties, and changes in the physical properties of the structure, including the introduction of structural damage, can cause changes in its dynamic behavior. Structural health monitoring (SHM) and damage detection methods provide a means to assess the structural integrity and safety of a civil structure using measurements of its dynamic properties. In particular, these techniques enable a quick damage assessment following a seismic event. In this thesis, the application of high-frequency seismograms to damage detection in civil structures is investigated.
Two novel methods for SHM are developed and validated using small-scale experimental testing, existing structures in situ, and numerical testing. The first method is developed for pre-Northridge steel-moment-resisting frame buildings that are susceptible to weld fracture at beam-column connections. The method is based on using the response of a structure to a nondestructive force (i.e., a hammer blow) to approximate the response of the structure to a damage event (i.e., weld fracture). The method is applied to a small-scale experimental frame, where the impulse response functions of the frame are generated during an impact hammer test. The method is also applied to a numerical model of a steel frame, in which weld fracture is modeled as the tensile opening of a Mode I crack. Impulse response functions are experimentally obtained for a steel moment-resisting frame building in situ. Results indicate that while acceleration and velocity records generated by a damage event are best approximated by the acceleration and velocity records generated by a colocated hammer blow, the method may not be robust to noise. The method seems to be better suited for damage localization, where information such as arrival times and peak accelerations can also provide indication of the damage location. This is of significance for sparsely-instrumented civil structures.
The second SHM method is designed to extract features from high-frequency acceleration records that may indicate the presence of damage. As short-duration high-frequency signals (i.e., pulses) can be indicative of damage, this method relies on the identification and classification of pulses in the acceleration records. It is recommended that, in practice, the method be combined with a vibration-based method that can be used to estimate the loss of stiffness. Briefly, pulses observed in the acceleration time series when the structure is known to be in an undamaged state are compared with pulses observed when the structure is in a potentially damaged state. By comparing the pulse signatures from these two situations, changes in the high-frequency dynamic behavior of the structure can be identified, and damage signals can be extracted and subjected to further analysis. The method is successfully applied to a small-scale experimental shear beam that is dynamically excited at its base using a shake table and damaged by loosening a screw to create a moving part. Although the damage is aperiodic and nonlinear in nature, the damage signals are accurately identified, and the location of damage is determined using the amplitudes and arrival times of the damage signal. The method is also successfully applied to detect the occurrence of damage in a test bed data set provided by the Los Alamos National Laboratory, in which nonlinear damage is introduced into a small-scale steel frame by installing a bumper mechanism that inhibits the amount of motion between two floors. The method is successfully applied and is robust despite a low sampling rate, though false negatives (undetected damage signals) begin to occur at high levels of damage when the frequency of damage events increases. The method is also applied to acceleration data recorded on a damaged cable-stayed bridge in China, provided by the Center of Structural Monitoring and Control at the Harbin Institute of Technology. 
Acceleration records recorded after the date of damage show a clear increase in high-frequency, short-duration pulses compared to those recorded previously. One pulse from the undamaged state and two damage pulses are identified from the data. The occurrence of the detected damage pulses is consistent with a progression of damage and matches the known chronology of damage.
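The pulse-identification idea can be illustrated with a minimal sketch (not the thesis's algorithm; the threshold rule, filter, and grouping parameters here are illustrative assumptions). A first difference acts as a crude high-pass filter, and samples whose filtered amplitude exceeds a multiple of the median absolute level are grouped into pulses:

```python
import math

def detect_pulses(accel, dt, k=8.0, max_gap=3):
    """Illustrative pulse extraction sketch (not the thesis algorithm).
    A first difference serves as a crude high-pass filter; samples whose
    filtered amplitude exceeds k times the median absolute level are
    grouped into pulses, reported as (arrival time, peak amplitude)."""
    hp = [accel[i + 1] - accel[i] for i in range(len(accel) - 1)]
    scale = sorted(abs(x) for x in hp)[len(hp) // 2]  # robust noise level
    thresh = k * max(scale, 1e-12)
    groups = []
    for i, x in enumerate(hp):
        if abs(x) > thresh:
            if groups and i - groups[-1][-1] <= max_gap:
                groups[-1].append(i)   # extend the current pulse
            else:
                groups.append([i])     # start a new pulse
    return [(g[0] * dt, max(abs(hp[j]) for j in g)) for g in groups]

# Demo: a 1 Hz sine sampled at 100 Hz, with a one-sample spike at t = 1.50 s
accel = [math.sin(2 * math.pi * 0.01 * i) for i in range(200)]
accel[150] += 1.0
pulses = detect_pulses(accel, 0.01)
```

The reported arrival times and amplitudes are the quantities the method uses for damage localization.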
Abstract:
Thrust fault earthquakes are investigated in the laboratory by generating dynamic shear ruptures along pre-existing frictional faults in rectangular plates. A considerable body of evidence suggests that dip-slip earthquakes exhibit enhanced ground motions in the acute hanging wall wedge as an outcome of broken symmetry between hanging and foot wall plates with respect to the earth surface. To understand the physical behavior of thrust fault earthquakes, particularly ground motions near the earth surface, ruptures are nucleated in analog laboratory experiments and guided up-dip towards the simulated earth surface. The transient slip event and emitted radiation mimic a natural thrust earthquake. High-speed photography and laser velocimeters capture the rupture evolution, yielding a full-field view of photo-elastic fringe contours proportional to maximum shearing stresses as well as continuous ground motion velocity records at discrete points on the specimen. Earth surface-normal measurements validate selective enhancement of hanging wall ground motions for both sub-Rayleigh and super-shear rupture speeds. The earth surface breaks upon the rupture tip's arrival at the fault trace, generating prominent Rayleigh surface waves. A rupture wave is sensed in the hanging wall but is absent from the foot wall plate: a direct consequence of the station's proximity to the fault. Signatures in earth surface-normal records attenuate with distance from the fault trace. Super-shear earthquakes feature greater amplitudes of ground shaking profiles, as expected from the increased tectonic pressures required to induce super-shear transition. Paired stations measure fault parallel and fault normal ground motions at various depths, which yield slip and opening rates through direct subtraction of like components.
Peak fault slip and opening rates associated with the rupture tip increase with proximity to the fault trace, a result of selective ground motion amplification in the hanging wall. Fault opening rates indicate that the hanging and foot walls detach near the earth surface, a phenomenon promoted by a decrease in magnitude of far-field tectonic loads. Subsequent shutting of the fault sends an opening pulse back down-dip. In the case of a sub-Rayleigh earthquake, feedback from the reflected S wave re-ruptures the locked fault at super-shear speeds, providing another mechanism of super-shear transition.
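In notation chosen here for illustration (the symbols are not the abstract's own), the "direct subtraction of like components" from paired stations amounts to:

```latex
\dot{\delta}_{\parallel} \;=\; \dot{u}^{\,\mathrm{HW}}_{\parallel} - \dot{u}^{\,\mathrm{FW}}_{\parallel}
\quad \text{(slip rate)}, \qquad
\dot{\delta}_{\perp} \;=\; \dot{u}^{\,\mathrm{HW}}_{\perp} - \dot{u}^{\,\mathrm{FW}}_{\perp}
\quad \text{(opening rate)},
```

where HW and FW denote the hanging-wall and foot-wall stations of a pair straddling the fault, and the subscripts denote fault-parallel and fault-normal velocity components.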
Abstract:
The works presented in this thesis explore a variety of extensions of the standard model of particle physics which are motivated by baryon number (B) and lepton number (L), or some combination thereof. In the standard model, both baryon number and lepton number are accidental global symmetries violated only by non-perturbative weak effects, though the combination B-L is exactly conserved. Although there is currently no evidence for considering these symmetries as fundamental, there are strong phenomenological bounds restricting the existence of new physics violating B or L. In particular, there are strict limits on the lifetime of the proton whose decay would violate baryon number by one unit and lepton number by an odd number of units.
The first paper included in this thesis explores some of the simplest possible extensions of the standard model in which baryon number is violated, but the proton does not decay as a result. The second paper extends this analysis to explore models in which baryon number is conserved, but lepton flavor violation is present. Special attention is given to the processes of μ to e conversion and μ → eγ which are bound by existing experimental limits and relevant to future experiments.
The final two papers explore extensions of the minimal supersymmetric standard model (MSSM) in which both baryon number and lepton number, or the combination B-L, are elevated to the status of being spontaneously broken local symmetries. These models have a rich phenomenology including new collider signatures, stable dark matter candidates, and alternatives to the discrete R-parity symmetry usually built into the MSSM in order to protect against baryon and lepton number violating processes.
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. Now there are a plethora of models, based on different assumptions, applicable in differing contextual settings, and selecting the right model to use tends to be an ad-hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioral theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodular property of EC2 to implement an accelerated greedy version of BROAD, which leads to orders of magnitude speedup over other methods.
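The greedy edge-cutting idea can be sketched in a few lines (an illustrative toy, not the thesis's BROAD implementation; the deterministic predictions, noise model, and stopping threshold are assumptions made here). Theories that disagree on some test are joined by an "edge" weighted by posterior mass, and each round picks the test minimizing the expected surviving edge weight:

```python
import random

def run_broad(theories, tests, true_theory, noise=0.1, seed=0):
    """Greedy EC2-style adaptive testing (illustrative sketch only).
    `theories` maps a name to its predicted binary choice on each test."""
    rng = random.Random(seed)
    post = {t: 1.0 / len(theories) for t in theories}
    unused = set(tests)

    def edge_weight(p, remaining):
        # posterior mass on pairs of theories still distinguishable
        return sum(p[a] * p[b] for a in p for b in p
                   if a < b and any(theories[a][s] != theories[b][s]
                                    for s in remaining))

    def expected_weight(test):
        total = 0.0
        for resp in (0, 1):
            like = {t: (1 - noise) if theories[t][test] == resp else noise
                    for t in post}
            p_resp = sum(post[t] * like[t] for t in post)
            if p_resp == 0:
                continue
            new = {t: post[t] * like[t] / p_resp for t in post}
            total += p_resp * edge_weight(new, unused - {test})
        return total

    while unused and max(post.values()) < 0.95:
        test = min(unused, key=expected_weight)   # greedy EC2 surrogate
        unused.discard(test)
        truth = theories[true_theory][test]
        resp = truth if rng.random() >= noise else 1 - truth
        like = {t: (1 - noise) if theories[t][test] == resp else noise
                for t in post}
        z = sum(post[t] * like[t] for t in post)
        post = {t: post[t] * like[t] / z for t in post}   # Bayes update
    return post
```

With noiseless responses, the posterior collapses onto the data-generating theory after at most as many tests as are needed to separate it from its rivals.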
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models: quasi-hyperbolic (α, β) discounting and fixed cost discounting, and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between 2 options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
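For reference, the standard functional forms of these discounting families (textbook definitions, which may differ from the thesis's exact parameterization) are:

```latex
D_{\mathrm{exp}}(t) = \delta^{t}, \qquad
D_{\mathrm{hyp}}(t) = \frac{1}{1 + k t}, \qquad
D_{\mathrm{quasi}}(t) = \begin{cases} 1 & t = 0 \\ \beta\,\delta^{t} & t > 0 \end{cases}, \qquad
D_{\mathrm{gen}}(t) = (1 + \alpha t)^{-\beta/\alpha},
```

where the quasi-hyperbolic form captures "present bias" through the one-time penalty β applied to all delayed payoffs, and the generalized-hyperbolic form nests the exponential and hyperbolic cases in appropriate parameter limits.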
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioral theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments of risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is being offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
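The loss-averse utility underlying this prediction typically takes the Kahneman-Tversky form (standard in the literature; the thesis's exact specification may differ):

```latex
v(x) = \begin{cases} x^{\alpha} & x \ge 0 \\ -\lambda\,(-x)^{\alpha} & x < 0 \end{cases},
\qquad \lambda > 1,
```

where x is the gain or loss relative to a reference point (for example, an expected or previously observed price) and λ measures the degree of loss aversion: losses loom larger than equal-sized gains, so the removal of a discount is felt as a loss and depresses demand beyond what price elasticity alone would explain.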
In future work, BROAD can be widely applicable for testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
Understanding the origin of life on Earth has long fascinated the minds of the global community, and has been a driving factor in interdisciplinary research for centuries. Beyond the pioneering work of Darwin, perhaps the most widely known study in the last century is that of Miller and Urey, who examined the possibility of the formation of prebiotic chemical precursors on the primordial Earth [1]. More recent studies have shown that amino acids, the chemical building blocks of the biopolymers that comprise life as we know it on Earth, are present in meteoritic samples, and that the molecules extracted from the meteorites display isotopic signatures indicative of an extraterrestrial origin [2]. The most recent major discovery in this area has been the detection of glycine (NH2CH2COOH), the simplest amino acid, in pristine cometary samples returned by the NASA STARDUST mission [3]. Indeed, the open questions left by these discoveries, both in the public and scientific communities, hold such fascination that NASA has designated the understanding of our "Cosmic Origins" as a key mission priority.
Despite these exciting discoveries, our understanding of the chemical and physical pathways to the formation of prebiotic molecules is woefully incomplete. This is largely because we do not yet fully understand how the interplay between grain-surface and sub-surface ice reactions and the gas-phase affects astrophysical chemical evolution, and our knowledge of chemical inventories in these regions is incomplete. The research presented here aims to directly address both these issues, so that future work to understand the formation of prebiotic molecules has a solid foundation from which to work.
From an observational standpoint, a dedicated campaign to identify hydroxylamine (NH2OH), potentially a direct precursor to glycine, in the gas phase was undertaken. No trace of NH2OH was found. These observations motivated a refinement of the chemical models of glycine formation and have largely ruled out a gas-phase route to the synthesis of the simplest amino acid in the ISM. A molecular mystery, the identity of the carrier of a series of transitions known as B11244, was resolved using observational data toward a large number of sources, confirming this important carbon-chemistry intermediate as l-C3H+ and identifying it in at least two new environments. Finally, the doubly-nitrogenated molecule carbodiimide (HNCNH) was identified in the ISM for the first time through maser emission features in the centimeter-wavelength regime.
In the laboratory, a TeraHertz Time-Domain Spectrometer was constructed to obtain the experimental spectra necessary to search for solid-phase species in the ISM in the THz region of the spectrum. These investigations have shown a striking dependence of the THz spectra on the large-scale, long-range (i.e., lattice) structure of the ices. A database of molecular spectra has been started, and both the simplest and most abundant ice species, which have already been identified, as well as a number of more complex species, have been studied. The exquisite sensitivity of the THz spectra to both the structure and thermal history of these ices may lead to better probes of complex chemical and dynamical evolution in interstellar environments.
Abstract:
Since the discovery of the Higgs boson at the LHC, its use as a probe to search for beyond the standard model physics, such as supersymmetry, has become important, as seen in a recent search by the CMS experiment using razor variables in the diphoton final state. Motivated by this search, this thesis examines the LHC discovery potential of a SUSY scenario involving bottom squark pair production with a Higgs boson in the final state. We design and implement a software-based trigger using the razor variables for the CMS experiment to record events with a bottom quark-antiquark pair from a Higgs boson. We characterize the full range of signatures at the LHC from this Higgs-aware SUSY scenario and demonstrate the sensitivity of the CMS data to this model.
Abstract:
Let F = Q(ζ + ζ^{-1}) be the maximal real subfield of the cyclotomic field Q(ζ), where ζ is a primitive q-th root of unity and q is an odd rational prime. The numbers u_1 = -1 and u_k = (ζ^k - ζ^{-k})/(ζ - ζ^{-1}), k = 2, …, p, with p = (q-1)/2, are units in F and are called the cyclotomic units. In this thesis the sign distribution of the conjugates in F of the cyclotomic units is studied.
Let G(F/Q) denote the Galois group of F over Q, and let V denote the units in F. For each σ ∈ G(F/Q) and μ ∈ V, define a mapping sgn_σ : V → GF(2) by sgn_σ(μ) = 1 if σ(μ) < 0 and sgn_σ(μ) = 0 if σ(μ) > 0. Let {σ_1, …, σ_p} be a fixed ordering of G(F/Q). The matrix M_q = (sgn_{σ_j}(v_i)), i, j = 1, …, p, is called the matrix of cyclotomic signatures. The rank of this matrix determines the sign distribution of the conjugates of the cyclotomic units. The matrix of cyclotomic signatures is associated with an ideal in the ring GF(2)[x]/(x^p + 1) in such a way that the rank of the matrix equals the GF(2)-dimension of the ideal. It is shown that if p = (q-1)/2 is a prime and 2 is a primitive root mod p, then M_q is non-singular. Also, let p be arbitrary, let ℓ be a primitive root mod q, and let L = {i | 0 ≤ i ≤ p-1, the least positive residue of ℓ^i mod q is greater than p}. Let H_q(x) ∈ GF(2)[x] be defined by H_q(x) = gcd((Σ_{i ∈ L} x^i)(x + 1) + 1, x^p + 1). It is shown that the rank of M_q equals p - deg H_q(x).
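As a sanity check on these definitions (an illustrative sketch, not code from the thesis), M_q can be computed numerically from the real embeddings σ_a : ζ ↦ ζ^a, since σ_a(u_k) = sin(2πak/q)/sin(2πa/q) and σ_a(-1) = -1:

```python
import math

def sign_matrix(q):
    """Matrix of cyclotomic signatures M_q (entries in GF(2)).
    Rows are the units v_1 = -1, v_k = u_k for k >= 2; columns are the
    automorphisms sigma_a; entry (i, j) is 1 iff sigma_j(v_i) < 0."""
    p = (q - 1) // 2
    M = []
    for k in range(1, p + 1):
        row = []
        for a in range(1, p + 1):
            if k == 1:
                conj = -1.0    # sigma_a(-1) = -1 for every embedding
            else:
                conj = math.sin(2 * math.pi * a * k / q) / math.sin(2 * math.pi * a / q)
            row.append(1 if conj < 0 else 0)
        M.append(row)
    return M

def gf2_rank(M):
    """Rank over GF(2) via bitmask Gaussian elimination."""
    masks = [int("".join(map(str, row)), 2) for row in M]
    rank = 0
    while masks:
        pivot = masks.pop()
        if pivot == 0:
            continue
        rank += 1
        bit = pivot & -pivot               # pivot's lowest set bit
        masks = [m ^ pivot if m & bit else m for m in masks]
    return rank
```

For q = 5 and q = 7 (both with p = (q-1)/2 prime) the matrix comes out non-singular, consistent with the conjecture stated at the end of this abstract.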
Further results are obtained by using the reciprocity theorem of class field theory. The reciprocity maps for a certain abelian extension of F and for the infinite primes in F are associated with the signs of conjugates. The product formula for the reciprocity maps is used to associate the signs of conjugates with the reciprocity maps at the primes which lie above (2). The case when (2) is a prime in F is studied in detail. Let T denote the group of totally positive units in F, and let U be the group generated by the cyclotomic units. Assume that (2) is a prime in F and that p is odd. Let F(2) denote the completion of F at (2) and let V(2) denote the units in F(2). The following statements are shown to be equivalent: 1) the matrix of cyclotomic signatures is non-singular; 2) U ∩ T = U^2; 3) U ∩ F(2)^2 = U^2; 4) V(2)/V(2)^2 = ⟨v_1 V(2)^2⟩ ⊕ … ⊕ ⟨v_p V(2)^2⟩ ⊕ ⟨3 V(2)^2⟩.
The rank of M_q was computed for 5 ≤ q ≤ 929 and the results appear in tables. On the basis of these results and additional calculations, the following conjecture is made: if q and p = (q-1)/2 are both primes, then M_q is non-singular.