16 results for "Relative disturbance sensitivity"

in CaltechTHESIS


Relevance: 20.00%

Publisher:

Abstract:

The branching theory of solutions of certain nonlinear elliptic partial differential equations is developed for the case in which the nonlinear term is perturbed from unforced to forced. We find families of branching points and the associated nonisolated solutions which emanate from a bifurcation point of the unforced problem. Nontrivial solution branches are constructed which contain the nonisolated solutions, and the branching is exhibited. An iteration procedure is used to establish the existence of these solutions, and a formal perturbation theory is shown to give asymptotically valid results. The stability of the solutions is examined, and certain solution branches are shown to consist of minimal positive solutions. Other solution branches which do not contain branching points are also found in a neighborhood of the bifurcation point.

The qualitative features of branching points and their associated nonisolated solutions are used to obtain useful information about buckling of columns and arches. Global stability characteristics for the buckled equilibrium states of imperfect columns and arches are discussed. Asymptotic expansions for the imperfection sensitive buckling load of a column on a nonlinearly elastic foundation are found and rigorously justified.
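For orientation, imperfection-sensitivity expansions of this kind generically take a fractional-power form (the classic Koiter scaling, quoted here as background rather than as the thesis's specific result):

λ(ε) ≈ λ_c (1 − C ε^{1/2}) or λ(ε) ≈ λ_c (1 − C ε^{2/3}),

where λ_c is the buckling load of the perfect structure, ε is the imperfection amplitude, and C > 0 is a structure-dependent constant; which power arises depends on the symmetry of the post-buckling branch.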

Relevance: 20.00%

Publisher:

Abstract:

The construction and LHC phenomenology of the razor variables MR, an event-by-event indicator of the heavy particle mass scale, and R, a dimensionless variable related to the transverse momentum imbalance and missing transverse energy of events, are presented. The variables are used in the analysis of the first proton-proton collision dataset at CMS (35 pb⁻¹) in a search for superpartners of the quarks and gluons, targeting indirect hints of dark matter candidates in the context of supersymmetric theoretical frameworks. The analysis produced the most sensitive SUSY results to date and extended the LHC reach far beyond the previous Tevatron results. A generalized inclusive search is subsequently presented for new heavy particle pairs produced in √s = 7 TeV proton-proton collisions at the LHC using 4.7±0.1 fb⁻¹ of integrated luminosity from the second LHC run of 2011. The selected events are analyzed in the 2D razor space of MR and R, and the analysis is performed in 12 tiers of all-hadronic, single-lepton, and double-lepton final states, in the presence and absence of b-quarks, probing the third-generation sector using the event heavy-flavor content. The search is sensitive to generic supersymmetry models with minimal assumptions about the superpartner decay chains. No excess is observed in the number or shape of event yields relative to Standard Model predictions. Exclusion limits are derived in the CMSSM framework, with gluino masses up to 800 GeV and squark masses up to 1.35 TeV excluded at 95% confidence level, depending on the model parameters. The results are also interpreted for a collection of simplified models, in which gluinos are excluded with masses as large as 1.1 TeV for small neutralino masses, and first- and second-generation squarks, stops, and sbottoms are excluded for masses up to about 800, 425 and 400 GeV, respectively.
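The abstract does not reproduce the razor definitions; the sketch below uses the standard ones from the razor literature (the function and variable names are illustrative, and the thesis's conventions may differ by constant factors):

    import numpy as np

    def razor_variables(j1, j2, met):
        # j1, j2: 'megajet' four-momenta as (E, px, py, pz); met: (mex, mey)
        E1, px1, py1, pz1 = j1
        E2, px2, py2, pz2 = j2
        mex, mey = met
        # M_R: longitudinally boost-invariant estimator of the heavy mass scale
        MR = np.sqrt((E1 + E2)**2 - (pz1 + pz2)**2)
        # M_T^R: transverse mass built from MET and the megajet pT's
        pt1, pt2 = np.hypot(px1, py1), np.hypot(px2, py2)
        met_mag = np.hypot(mex, mey)
        MTR = np.sqrt(0.5 * (met_mag * (pt1 + pt2)
                             - (mex * (px1 + px2) + mey * (py1 + py2))))
        # R = M_T^R / M_R: dimensionless; signal events populate higher R
        # than the QCD multijet background
        return MR, MTR / MR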

With the discovery of a new boson by the CMS and ATLAS experiments in the γγ and 4 lepton final states, the identity of the putative Higgs candidate must be established through measurements of its properties. The spin and quantum numbers are of particular importance, and we describe a method, developed before the discovery, for measuring the J^PC of this particle using the observed signal events in the H to ZZ* to 4 lepton channel. Adaptations of the razor kinematic variables are introduced for the H to WW* to 2 lepton/2 neutrino channel, improving the resonance mass resolution and increasing the discovery significance. The prospects for incorporating this channel in an examination of the new boson's J^PC are discussed, with indications that it could provide information complementary to the H to ZZ* to 4 lepton final state, particularly for measuring CP violation in these decays.

Relevance: 20.00%

Publisher:

Abstract:

The theories of relativity and quantum mechanics, the two most important physics discoveries of the 20th century, not only revolutionized our understanding of the nature of space-time and the way matter exists and interacts, but also became the building blocks of what we currently know as modern physics. My thesis studies both subjects in great depth; the two intersect in gravitational-wave physics.

Gravitational waves are "ripples of space-time", long predicted by general relativity. Although indirect evidence of gravitational waves has been discovered from observations of binary pulsars, direct detection of these waves is still actively being pursued. An international array of laser interferometer gravitational-wave detectors has been constructed in the past decade, and a first generation of these detectors has taken several years of data without a discovery. At this moment, these detectors are being upgraded into second-generation configurations, which will have ten times better sensitivity. Kilogram-scale test masses of these detectors, highly isolated from the environment, are probed continuously by photons. The sensitivity of such a quantum measurement can often be limited by the Heisenberg Uncertainty Principle, and during such a measurement, the test masses can be viewed as evolving through a sequence of nearly pure quantum states.

The first part of this thesis (Chapter 2) concerns how to minimize the adverse effect of thermal fluctuations on the sensitivity of advanced gravitational-wave detectors, thereby bringing them closer to being quantum-limited. My colleagues and I present a detailed analysis of coating thermal noise in advanced gravitational-wave detectors, the dominant noise source of Advanced LIGO in the middle of its detection frequency band. We identify the two elastic loss angles, clarify the different components of the coating Brownian noise, and obtain their cross spectral densities.

The second part of this thesis (Chapters 3-7) concerns formulating experimental concepts and analyzing experimental results that demonstrate the quantum mechanical behavior of macroscopic objects, as well as developing theoretical tools for analyzing quantum measurement processes. In Chapter 3, we study the open quantum dynamics of optomechanical experiments in which a single photon strongly influences the quantum state of a mechanical object. We also explain how to engineer the mechanical oscillator's quantum state by modifying the single photon's wave function.

In Chapters 4-5, we build theoretical tools for analyzing so-called "non-Markovian" quantum measurement processes. Chapter 4 establishes a mathematical formalism that describes the evolution of a quantum system (the plant) which is coupled to a non-Markovian bath (i.e., one with a memory) while at the same time being under continuous quantum measurement (by the probe field). This aims at providing a general framework for analyzing a large class of non-Markovian measurement processes. Chapter 5 develops a way of characterizing the non-Markovianity of a bath (i.e., whether and to what extent the bath remembers information about the plant) by perturbing the plant and watching for changes in its subsequent evolution. Chapter 6 re-analyzes a recent measurement of a mechanical oscillator's zero-point fluctuations, revealing a nontrivial correlation between the measurement device's sensing noise and the quantum back-action noise.

Chapter 7 describes a model in which gravity is classical and matter motions are quantized, elaborating how the quantum motions of matter are affected by the fact that gravity is classical. It offers an experimentally plausible way to test this model (hence the nature of gravity) by measuring the center-of-mass motion of a macroscopic object.

The most promising gravitational waves for direct detection are those emitted from highly energetic astrophysical processes, sometimes involving black holes - a type of object predicted by general relativity whose properties depend strongly on the strong-field regime of the theory. Although black holes have been inferred to exist at the centers of galaxies and in certain so-called X-ray binary objects, detecting gravitational waves emitted by systems containing black holes will offer a much more direct way of observing black holes, providing unprecedented details of space-time geometry in the black holes' strong-field region.

The third part of this thesis (Chapters 8-11) studies black-hole physics in connection with gravitational-wave detection.

Chapter 8 applies black hole perturbation theory to model the dynamics of a light compact object orbiting a massive central Schwarzschild black hole. In this chapter, we present a Hamiltonian formalism in which the low-mass object and the metric perturbations of the background spacetime are jointly evolved. Chapter 9 uses WKB techniques to analyze the oscillation modes (quasi-normal modes, or QNMs) of spinning black holes. We obtain analytical approximations to the spectrum of the weakly-damped QNMs, with relative error O(1/L^2), and connect these frequencies to geometrical features of spherical photon orbits in Kerr spacetime. Chapter 10 focuses mainly on near-extremal Kerr black holes, for which we discuss a bifurcation in their QNM spectra for certain ranges of (l, m) (the angular quantum numbers) as a/M → 1. With the tools prepared in Chapters 9 and 10, in Chapter 11 we obtain an analytical approximation for the scalar Green function in Kerr spacetime.
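As a point of reference for the eikonal connection, the Schwarzschild special case (standard background, not the thesis's Kerr result) reads, in geometric units,

ω_{lmn} ≈ [(l + 1/2) − i(n + 1/2)] / (3√3 M),

with the real part set by (l + 1/2) times the orbital frequency of the photon sphere at r = 3M and the imaginary part set by the Lyapunov exponent of perturbed photon orbits; the Kerr analysis generalizes both ingredients to spherical photon orbits.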

Relevance: 20.00%

Publisher:

Abstract:

For a toric Del Pezzo surface S, a new instance of mirror symmetry, termed relative mirror symmetry, is introduced and developed. On the A-model side, this relative mirror symmetry conjecture concerns the genus 0 relative Gromov-Witten invariants of maximal tangency of S. These correspond, on the B-model side, to relative periods of the mirror to S. Furthermore, for S not necessarily toric, two conjectures for BPS state counts are related. It is proven that the integrality of the BPS state counts of the total space of the canonical bundle on S implies the integrality of the relative BPS state counts of S. Finally, a prediction of homological mirror symmetry for the open complement is explored. The B-model prediction is calculated in all cases and matches the known A-model computation for the projective plane.

Relevance: 20.00%

Publisher:

Abstract:

The motion of a single Brownian particle of arbitrary size through a dilute colloidal dispersion of neutrally buoyant bath spheres of another characteristic size in a Newtonian solvent is examined in two contexts. First, the particle in question, the probe particle, is subject to a constant applied external force drawing it through the suspension, as a simple model for active and nonlinear microrheology. The strength of the applied external force, normalized by the restoring forces of Brownian motion, is the Péclet number, Pe. This dimensionless quantity describes how strongly the probe is upsetting the equilibrium distribution of the bath particles. The mean motion of, and fluctuations in, the probe position are interpreted in terms of an effective viscosity of the suspension. These interpreted quantities are calculated to first order in the volume fraction of bath particles and are intimately tied to the spatial distribution, or microstructure, of bath particles relative to the probe. For weak Pe, the disturbance to the equilibrium microstructure is dipolar in nature, with accumulation and depletion regions on the front and rear faces of the probe, respectively. With increasing applied force, the accumulation region compresses to form a thin boundary layer whose thickness scales with the inverse of Pe, while the depletion region lengthens to form a trailing wake. The magnitude of the microstructural disturbance is found to grow with increasing bath particle size: small bath particles in the solvent resemble a continuum, with an effective microviscosity given by Einstein's viscosity correction for a dilute dispersion of spheres, while large bath particles readily advect toward the minimum approach distance possible between the probe and bath particle, and rotation of the probe-bath pair as a doublet is the primary mechanism by which the probe is able to move past, a process that slows the motion of the probe by a factor of the size ratio. The intrinsic microviscosity is found to force-thin at low Péclet number, due to decreasing contributions from Brownian motion, and to force-thicken at high Péclet number, due to the increasing influence of the configuration-averaged reduction in the probe's hydrodynamic self-mobility. Nonmonotonicity at finite sizes is evident in the limiting high-Pe intrinsic microviscosity plateau as a function of the bath-to-probe particle size ratio, and the intrinsic microviscosity grows with the size ratio for very small probes even at large-but-finite Péclet numbers. However, even a small repulsive interparticle potential, which excludes lubrication interactions, can reduce this intrinsic microviscosity back to an order-one quantity. The results of this active microrheology study are compared to previous theoretical studies of falling-ball and towed-ball rheometry and of sedimentation and diffusion in polydisperse suspensions, and the singular limit of full hydrodynamic interactions is noted.
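For reference, one common normalization of the Péclet number in this setting (an assumption here; the thesis may adopt a different characteristic length scale) is

Pe = F a / (k_B T),

i.e., the applied force F measured against the Brownian force scale k_B T / a for a particle of size a. Pe « 1 corresponds to the near-equilibrium dipolar microstructure, while Pe » 1 produces the boundary-layer and trailing-wake structure described above.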

Second, the probe particle in question is no longer subject to a constant applied external force. Rather, the particle is considered to be a catalytically active motor, consuming bath reactant particles on its reactive face while passively colliding with reactant particles on its inert face. By creating an asymmetric distribution of reactant about its surface, the motor is able to propel itself diffusiophoretically with some mean velocity. The effects of the finite size of the solute on the leading-order diffusive microstructure of reactant about the motor are examined. Brownian and interparticle contributions to the motor velocity are computed for several interparticle interaction potential lengths and finite reactant-to-motor particle size ratios, with the dimensionless motor velocity increasing with decreasing motor size. A discussion of Brownian rotation frames the context in which these results could be applicable, and future directions are proposed which properly incorporate reactant advection at high motor velocities.

Relevance: 20.00%

Publisher:

Abstract:

In the quest to develop viable designs for third-generation optical interferometric gravitational-wave detectors, one strategy is to monitor the relative momentum or speed of the test-mass mirrors rather than monitoring their relative position. The most straightforward design for a speed-meter interferometer that accomplishes this is described and analyzed in Chapter 2. This design (due to Braginsky, Gorodetsky, Khalili, and Thorne) is analogous to a microwave-cavity speed meter conceived by Braginsky and Khalili. A mathematical mapping between the microwave speed meter and the optical interferometric speed meter is developed and used to show (in accord with the speed being a quantum nondemolition observable) that in principle the interferometric speed meter can beat the gravitational-wave standard quantum limit (SQL) by an arbitrarily large amount, over an arbitrarily wide range of frequencies. However, in practice, to reach or beat the SQL, this specific speed meter requires exorbitantly high input light power. The physical reason for this is explored, along with other issues such as constraints on performance due to optical dissipation.

Chapter 3 proposes a more sophisticated version of a speed meter. This new design requires only a modest input power and appears to be a fully practical candidate for third-generation LIGO. It can beat the SQL (the approximate sensitivity of second-generation LIGO interferometers) over a broad range of frequencies (~ 10 to 100 Hz in practice) by a factor h/h_SQL ~ √(W^SQL_circ/W_circ). Here W_circ is the light power circulating in the interferometer arms and W^SQL_circ ≃ 800 kW is the circulating power required to beat the SQL at 100 Hz (the LIGO-II power). If squeezed vacuum (with a power squeeze factor e^{-2R}) is injected into the interferometer's output port, the SQL can be beat with a much reduced laser power: h/h_SQL ~ √(W^SQL_circ/(W_circ e^{2R})). For realistic parameters (e^{2R} ≃ 10 and W_circ ≃ 800 to 2000 kW), the SQL can be beat by a factor ~ 3 to 4 from 10 to 100 Hz. [However, as the power increases in these expressions, the speed meter becomes more narrow-band; additional power and re-optimization of some parameters are required to maintain the wide band.] By performing frequency-dependent homodyne detection on the output (with the aid of two kilometer-scale filter cavities), one can markedly improve the interferometer's sensitivity at frequencies above 100 Hz.
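A quick numerical check of the quoted factors, using only the numbers given above (a sketch, not the thesis's noise model):

    import numpy as np

    # h/h_SQL ~ sqrt(W_sql / (W_circ * e^{2R})) with the abstract's numbers
    W_sql = 800e3               # circulating power needed to reach the SQL [W]
    e2R = 10.0                  # power squeeze factor e^{2R}
    for W_circ in (800e3, 2000e3):
        factor = np.sqrt(W_sql / (W_circ * e2R))
        print(f"W_circ = {W_circ/1e3:.0f} kW -> h/h_SQL ~ {factor:.2f}")
    # prints ~0.32 and ~0.20; the abstract's overall gain of ~3 to 4 reflects
    # the band-narrowing caveat noted in brackets above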

Chapters 2 and 3 are part of an ongoing effort to develop a practical variant of an interferometric speed meter and to combine the speed meter concept with other ideas to yield a promising third-generation interferometric gravitational-wave detector that entails low laser power.

Chapter 4 is a contribution to the foundations for analyzing sources of gravitational waves for LIGO. Specifically, it presents an analysis of the tidal work done on a self-gravitating body (e.g., a neutron star or black hole) in an external tidal field (e.g., that of a binary companion). The change in the mass-energy of the body as a result of the tidal work, or "tidal heating," is analyzed using the Landau-Lifshitz pseudotensor and the local asymptotic rest frame of the body. It is shown that the work done on the body is gauge invariant, while the body-tidal-field interaction energy contained within the body's local asymptotic rest frame is gauge dependent. This is analogous to Newtonian theory, where the interaction energy is shown to depend on how one localizes gravitational energy, but the work done on the body is independent of that localization. These conclusions play a role in analyses, by others, of the dynamics and stability of the inspiraling neutron-star binaries whose gravitational waves are likely to be seen and studied by LIGO.
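For orientation, the leading-order rate of tidal work takes the standard quadrupolar form (quoted as the textbook expression, an assumption about the thesis's notation):

dW/dt = −(1/2) E_{ij} dI^{ij}/dt,

where E_{ij} is the external tidal field and I^{ij} is the body's mass quadrupole moment. The chapter's point is that this work, unlike the interaction energy, is independent of how one localizes gravitational energy.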

Relevance: 20.00%

Publisher:

Abstract:

While concentrator photovoltaic cells have shown significant improvements in efficiency in the past ten years, once these cells are integrated into concentrating optics, connected to a power conditioning system, and deployed in the field, the overall module efficiency drops to only 34 to 36%. This efficiency is impressive compared to conventional flat-plate modules, but it falls far short of the theoretical limits for solar energy conversion. Ultra-high efficiency of 50% or greater cannot be achieved by refinement and iteration of current design approaches.

This thesis takes a systems approach to designing a photovoltaic system capable of 50% efficient performance using conventional diode-based solar cells. The effort began with an exploration of the limiting efficiency of spectrum-splitting ensembles with 2 to 20 sub-cells in different electrical configurations. Incorporating realistic non-ideal performance into the computationally simple detailed-balance approach resulted in practical limits that are useful for identifying specific cell performance requirements (a minimal version of this bookkeeping is sketched below). This effort quantified the relative benefit of additional cells and of concentration for system efficiency, which will help in designing practical optical systems.
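The sketch below makes illustrative assumptions (6000 K blackbody sun, full concentration, independently operated sub-cells, radiative recombination only, and a hypothetical four-gap set); the thesis's calculation incorporates realistic non-idealities:

    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import minimize_scalar

    k, q, h, c = 1.380649e-23, 1.602176634e-19, 6.62607015e-34, 2.99792458e8
    T_sun, T_cell = 6000.0, 300.0

    def photon_flux(E_lo, E_hi, T, mu=0.0):
        # Photon flux [m^-2 s^-1] in an energy window (E in eV): generalized
        # Planck law with chemical potential mu (the cell voltage)
        integrand = lambda E: E**2 / (np.exp((E - mu) / (k * T / q)) - 1.0)
        pre = 2 * np.pi * (q / h)**3 / c**2
        return pre * quad(integrand, E_lo, E_hi, limit=200)[0]

    def subcell_power(Eg, E_hi):
        # Max power density of a sub-cell absorbing photons in [Eg, E_hi]
        J_sun = q * photon_flux(Eg, E_hi, T_sun)
        def neg_power(V):
            J = J_sun - q * photon_flux(Eg, E_hi, T_cell, mu=V)  # radiative loss
            return -J * V
        res = minimize_scalar(neg_power, bounds=(0.0, Eg - 0.05), method='bounded')
        return -res.fun

    gaps = [0.7, 1.1, 1.6, 2.1]          # hypothetical band-gap set [eV]
    edges = gaps + [4.0]
    P_in = 5.670374419e-8 * T_sun**4     # blackbody input power density
    P_out = sum(subcell_power(g, e) for g, e in zip(gaps, edges[1:]))
    print(f"ensemble efficiency ~ {P_out / P_in:.1%}")

Varying the length of `gaps` exhibits the shrinking marginal benefit of additional cells noted above.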

Efforts to improve the quality of the solar cells themselves focused on the development of tunable-lattice-constant epitaxial templates. Initially intended to enable lattice-matched multijunction solar cells, these templates would allow increased flexibility in band gap selection for spectrum-splitting ensembles and enhanced radiative quality relative to metamorphic growth. The III-V material family is commonly used for multijunction solar cells both for its high radiative quality and for the ease of integrating multiple band gaps into one monolithic growth; its band gap flexibility is limited by the lattice constants of available growth templates. The virtual substrate consists of a thin III-V film with the desired lattice constant. The film is grown strained on an available wafer substrate, but its thickness is kept below the dislocation nucleation threshold. By removing the film from the growth substrate, allowing the strain to relax elastically, and bonding it to a supportive handle, a template with the desired lattice constant is formed. Experimental efforts towards this structure and an initial proof of concept are presented.

Cells with high radiative quality present the opportunity to recover a large fraction of their radiative losses if they are incorporated in an ensemble that couples emission from one cell to another. This effect is well known, but it has previously been explored only in the context of sub-cells that operate independently at their maximum power points. This analysis explicitly accounts for the system interaction and identifies ways to enhance overall performance by operating some cells in an ensemble at voltages that reduce the power converted in the individual cell. Series-connected multijunctions, which by their nature facilitate strong optical coupling between sub-cells, are reoptimized with substantial performance benefit.

Photovoltaic efficiency is usually measured relative to a standard incident spectrum to allow comparison between systems. Deployed in the field, systems may differ in energy production due to their sensitivity to changes in the spectrum. The series-connection constraint in particular causes system efficiency to decrease as the incident spectrum deviates from the standard spectral composition. This thesis performs a case study comparing the performance of systems over a year at a particular location to identify the energy production penalty caused by series connection relative to independent electrical connection.

Relevance: 20.00%

Publisher:

Abstract:

The epoch of reionization remains one of the last uncharted eras of cosmic history, yet this time is of crucial importance, encompassing the formation of both the first galaxies and the first metals in the universe. In this thesis, I present four related projects that characterize the abundance and properties of these first galaxies and use follow-up observations of them to achieve one of the first measurements of the neutral fraction of the intergalactic medium during the heart of the reionization era.

First, we present the results of a spectroscopic survey using the Keck telescopes targeting 6.3 < z < 8.8 star-forming galaxies. We secured observations of 19 candidates, initially selected by applying the Lyman break technique to infrared imaging data from the Wide Field Camera 3 (WFC3) onboard the Hubble Space Telescope (HST). This survey builds upon earlier work from Stark et al. (2010, 2011), which showed that star-forming galaxies at 3 < z < 6, when the universe was highly ionized, displayed a significant increase in strong Lyman alpha emission with redshift. Our work uses the LRIS and NIRSPEC instruments to search for Lyman alpha emission in candidates at greater redshift in the observed near-infrared, in order to discern whether this evolution continues or is quenched by an increase in the neutral fraction of the intergalactic medium. Our spectroscopic observations typically reach a 5-sigma limiting sensitivity of < 50 Å. Despite expecting to detect Lyman alpha at 5-sigma in 7-8 galaxies based on our Monte Carlo simulations, we achieve secure detections in only two of 19 sources. Combining these results with a similar sample of 7 galaxies from Fontana et al. (2010), we determine that these few detections would occur in < 1% of simulations if the intrinsic distribution were the same as that at z ~ 6. We consider other explanations for this decline, but find the most convincing explanation to be an increase in the neutral fraction of the intergalactic medium. Using theoretical models, we infer a neutral fraction of X_HI ~ 0.44 at z = 7.
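A back-of-envelope version of that significance estimate (a sketch in which one effective detection probability per galaxy stands in for the thesis's per-object simulations over line flux and wavelength coverage):

    import numpy as np

    rng = np.random.default_rng(0)
    n_gal, n_expected, n_observed = 19, 7.5, 2
    p_detect = n_expected / n_gal            # ~0.39 per galaxy if z~6-like
    trials = rng.binomial(n_gal, p_detect, size=1_000_000)
    print((trials <= n_observed).mean())     # ~0.006, i.e. <1% of simulations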

Second, we characterize the abundance of star-forming galaxies at z > 6.5, again using WFC3 onboard the HST. This project conducted a detailed search for candidates both in the Hubble Ultra Deep Field and in a number of wider Hubble Space Telescope surveys to construct luminosity functions at z ~ 7 and 8, reaching 0.65 and 0.25 mag fainter, respectively, than any previous survey. With this increased depth, we achieve some of the most robust constraints on the Schechter function faint-end slopes at these redshifts, finding very steep values of α(z~7) = -1.87 ± 0.18 and α(z~8) = -1.94 ± 0.23. We discuss these results in the context of cosmic reionization and show that, given reasonable assumptions about the ionizing spectra and escape fraction of ionizing photons, only half the photons needed to maintain reionization are provided by currently observable galaxies at z ~ 7-8. We show that an extension of the luminosity function down to M_UV = -13.0, coupled with a low level of star formation out to higher redshift, can fit all available constraints on the ionization history of the universe.
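To illustrate why the faint-end extrapolation matters, the sketch below integrates a Schechter function with the quoted z ~ 7 slope down to different limiting magnitudes (M* and φ* are hypothetical placeholders, not the thesis's fitted values):

    import numpy as np
    from scipy.integrate import quad

    alpha, M_star, phi_star = -1.87, -20.1, 1.0e-3   # phi_star: Mpc^-3 mag^-1

    def schechter_mag(M):
        # Schechter function in absolute-magnitude form, phi(M) per mag
        x = 10.0 ** (0.4 * (M_star - M))
        return 0.4 * np.log(10) * phi_star * x ** (alpha + 1) * np.exp(-x)

    def lum_density(M_faint):
        # UV luminosity density (relative units) integrated down to M_faint
        return quad(lambda M: schechter_mag(M) * 10 ** (-0.4 * M), -24.0, M_faint)[0]

    # Extending from typical observed depths (~ -17) down to M_UV = -13
    # roughly doubles the luminosity density for a slope this steep:
    print(lum_density(-13.0) / lum_density(-17.0))   # ~2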

Third, we investigate the strength of nebular emission in 3 < z < 5 star-forming galaxies. We begin by using the Infrared Array Camera (IRAC) onboard the Spitzer Space Telescope to investigate the strength of H alpha emission in a sample of 3.8 < z < 5.0 spectroscopically confirmed galaxies. We then conduct near-infrared observations of star-forming galaxies at 3 < z < 3.8 to investigate the strength of the [OIII] 4959/5007 and H beta emission lines from the ground using MOSFIRE. In both cases, we uncover near-ubiquitous strong nebular emission and find excellent agreement between the fluxes derived using the separate methods. For a subset of 9 objects in our MOSFIRE sample that have secure Spitzer IRAC detections, we compare the emission line flux derived from the excess in the K_s band photometry to that derived from direct spectroscopy and find 7 to agree within a factor of 1.6, with only one catastrophic outlier. Finally, for a different subset for which we also have DEIMOS rest-UV spectroscopy, we compare the relative velocities of Lyman alpha and the rest-optical nebular lines, which should trace the sites of star formation. We find a median velocity offset of only v_{Ly alpha} = 149 km/s, significantly less than the 400 km/s observed for star-forming galaxies with weaker Lyman alpha emission at z = 2-3 (Steidel et al. 2010), and show that this decrease can be explained by a decrease in the neutral hydrogen column density covering the galaxy. We discuss how this implies a lower neutral fraction for a given observed extinction of Lyman alpha when its visibility is used to probe the ionization state of the intergalactic medium.

Finally, we utilize the recent CANDELS wide-field infrared photometry over the GOODS-N and GOODS-S fields to re-analyze the use of Lyman alpha emission to evaluate the neutrality of the intergalactic medium. With these new data, we derive accurate ultraviolet spectral slopes for a sample of 468 star-forming galaxies at 3 < z < 6, already observed in the rest-UV with the Keck spectroscopic survey (Stark et al. 2010). We use a Bayesian fitting method, which accurately accounts for contamination and obscuration by skylines, to derive a relationship between the UV slope of a galaxy and its intrinsic Lyman alpha equivalent width probability distribution. We then apply these results to spectroscopic surveys during the reionization era, including our own, to accurately interpret the drop in observed Lyman alpha emission. From our most recent such MOSFIRE survey, we also present evidence for the most distant galaxy confirmed through emission line spectroscopy, at z = 7.62, as well as a first detection of the CIII] 1907/1909 doublet at z > 7.

We conclude the thesis by exploring future prospects and summarizing the results of Robertson et al. (2013). This work synthesizes many of the measurements in this thesis, along with external constraints, to create a model of reionization that fits nearly all available constraints.

Relevance: 20.00%

Publisher:

Abstract:

Elements with even atomic number (Z) in the interval 50 ≤ Z ≤ 58 have been resolved in the cosmic radiation using the Heavy Nuclei Experiment on the HEAO-3 satellite. Their relative abundances have been compared with the results expected from pure r-process material, pure s-process material, and solar system material, both with and without a modification due to possible first ionization potential effects. Such effects may be the result of the preferential acceleration, and hence enhancement in the cosmic rays, of those elements having low first ionization potentials. We find that our measurements are inconsistent with pure r-process material at greater than the 98% confidence level, whether or not the first ionization potential adjustments are made.

In addition, we have compared our results with mixtures having varying ratios of pure r-process material to pure s-process material. We find that, if no first ionization potential effects are included,

(r/s)_{CRS}/(r/s)_{SS} = 0.20^{+0.18}_{-0.14}

where CRS refers to the cosmic ray source and SS refers to the solar system, consistent with an almost pure s-process source. If the first ionization potential adjustments are applied, we obtain

(r/s)_{CRS}/(r/s)_{SS} = 1.5^{+1.1}_{-0.7}

consistent with a solar system mixture.

Relevance: 20.00%

Publisher:

Abstract:

The Laser Interferometer Gravitational-wave Observatory (LIGO) consists of two complex large-scale laser interferometers designed for the direct detection of gravitational waves from distant astrophysical sources in the frequency range 10 Hz - 5 kHz. Direct detection of these space-time ripples will support Einstein's general theory of relativity and provide invaluable information and new insight into the physics of the Universe.

The initial phase of LIGO started in 2002, and since then data were collected during six science runs. The instrument sensitivity improved from run to run due to the efforts of the commissioning team. Initial LIGO reached its design sensitivity during the last science run, which ended in October 2010.

In parallel with commissioning and data analysis with the initial detector, the LIGO group worked on research and development of the next generation of detectors. The major instrument upgrade from initial to Advanced LIGO started in 2010 and lasted until 2014.

This thesis describes the results of commissioning work done at the LIGO Livingston site from 2013 until 2015, in parallel with and after the installation of the instrument. It also discusses new techniques and tools developed at the 40m prototype, including adaptive filtering, estimation of quantization noise in digital filters, and the design of isolation kits for ground seismometers.

The first part of this thesis is devoted to methods for bringing the interferometer to the linear regime, in which the collection of data becomes possible. The longitudinal and angular controls of the interferometer degrees of freedom, during the lock acquisition process and in the low-noise configuration, are discussed in detail.

Once the interferometer is locked and transitioned to the low-noise regime, the instrument produces astrophysical data that must be calibrated to units of meters or strain. The second part of this thesis describes the online calibration technique set up at both observatories to monitor the quality of the collected data in real time. A sensitivity analysis was performed to understand and eliminate the noise sources of the instrument.

The coupling of noise sources to the gravitational-wave channel can be reduced if robust feedforward and optimal feedback control loops are implemented. The last part of this thesis describes static and adaptive feedforward noise cancellation techniques applied to the Advanced LIGO interferometers and tested at the 40m prototype; a minimal sketch of the adaptive technique follows below. Applications of optimal time-domain feedback control techniques and estimators to aLIGO control loops are also discussed.
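The sketch below is a textbook normalized-LMS canceller on synthetic data, not LIGO's production filters: a witness sensor measures a disturbance that couples into the target channel through an unknown path, and an FIR filter adapts to subtract it.

    import numpy as np

    rng = np.random.default_rng(1)
    n, taps, mu = 20000, 32, 0.01
    w = rng.standard_normal(n)                  # witness (e.g., a seismometer)
    path = np.exp(-np.arange(taps) / 6.0)       # unknown coupling path (toy)
    d = np.convolve(w, path)[:n] + 0.1 * rng.standard_normal(n)  # target channel

    coeffs, residual = np.zeros(taps), np.zeros(n)
    for i in range(taps, n):
        x = w[i - taps:i][::-1]                 # most recent witness samples
        e = d[i] - coeffs @ x                   # residual after subtraction
        coeffs += 2 * mu * e * x / (x @ x + 1e-12)   # normalized LMS update
        residual[i] = e

    print("rms before:", d[taps:].std(), "after:", residual[taps:].std())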

Commissioning work is still ongoing at the sites. The first science run of Advanced LIGO is planned for September 2015 and will last for 3-4 months. This run will be followed by a set of small instrument upgrades to be installed on a time scale of a few months. The second science run will start in spring 2016 and last for about 6 months. Since the current sensitivity of Advanced LIGO is already more than a factor of 3 higher than that of the initial detectors and keeps improving on a monthly basis, the upcoming science runs have a good chance of achieving the first direct detection of gravitational waves.

Relevance: 20.00%

Publisher:

Abstract:

Our understanding of the processes and mechanisms by which secondary organic aerosol (SOA) is formed is derived from laboratory chamber studies. In the atmosphere, SOA formation is primarily driven by progressive photooxidation of SOA precursors, coupled with their gas-particle partitioning. In the chamber environment, SOA-forming vapors undergo multiple chemical and physical processes that involve production and removal via gas-phase reactions; partitioning onto suspended particles vs. particles deposited on the chamber wall; and direct deposition on the chamber wall. The main focus of this dissertation is to characterize the interactions of organic vapors with suspended particles and the chamber wall and explore how these intertwined processes in laboratory chambers govern SOA formation and evolution.

A Functional Group Oxidation Model (FGOM), which represents SOA formation and evolution in terms of the competition between functionalization and fragmentation, the extent of oxygen atom addition, and the change in volatility, is developed. The FGOM contains a set of parameters that are determined by fitting the model to laboratory chamber data. The sensitivity of the model prediction to variation of the adjustable parameters allows one to assess the relative importance of the various pathways involved in SOA formation.

A critical aspect of the environmental chamber is the presence of the wall, which can induce deposition of SOA-forming vapors and promote heterogeneous reactions. An experimental protocol and model framework are first developed to constrain the vapor-wall interactions. By optimally fitting the model predictions to the observed wall-induced decay profiles of 25 oxidized organic compounds, the dominant parameter governing the extent of wall deposition of a compound is identified: the wall accommodation coefficient. By correlating this parameter with the molecular properties of a compound via its volatility, the wall-induced deposition rate of an organic compound can be predicted from the numbers of carbon and oxygen atoms in the molecule. A sketch of the decay-profile fitting step is given below.
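This sketch fits a first-order decay to a synthetic concentration trace (the mapping from the fitted rate k_w to an accommodation coefficient additionally involves the chamber's surface-to-volume ratio and the vapor's thermal speed):

    import numpy as np
    from scipy.optimize import curve_fit

    def first_order_decay(t, c0, k_w):
        # vapor concentration under first-order wall loss
        return c0 * np.exp(-k_w * t)

    t = np.linspace(0.0, 3600.0, 60)            # time since injection [s]
    rng = np.random.default_rng(2)
    c_obs = first_order_decay(t, 10.0, 2.5e-4) \
            * (1 + 0.03 * rng.standard_normal(t.size))   # synthetic trace

    (c0_fit, kw_fit), _ = curve_fit(first_order_decay, t, c_obs, p0=(8.0, 1e-4))
    print(f"k_w ~ {kw_fit:.2e} s^-1")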

Heterogeneous transformation of δ-hydroxycarbonyl, a major first-generation product from long-chain alkane photochemistry, is observed on the surface of particles and walls. The uniqueness of this reaction scheme is the production of substituted dihydrofuran, which is highly reactive towards ozone, OH, and NO3, thereby opening a reaction pathway that is not usually accessible to alkanes. A spectrum of highly-oxygenated products with carboxylic acid, ester, and ether functional groups is produced from the substituted dihydrofuran chemistry, thereby affecting the average oxidation state of the alkane-derived SOA.

The vapor wall loss correction is applied to several chamber-derived SOA systems generated from both anthropogenic and biogenic sources. Experimental and modeling approaches are employed to constrain the partitioning behavior of SOA-forming vapors onto suspended particles vs. chamber walls. It is demonstrated that deposition of SOA-forming vapors to the chamber wall during photooxidation experiments can lead to substantial and systematic underestimation of SOA. Therefore, it is likely that a lack of proper accounting for the vapor wall losses that suppress chamber-derived SOA yields contributes substantially to the underprediction of ambient SOA concentrations in atmospheric models.

Relevance: 20.00%

Publisher:

Abstract:

Precision polarimetry of the cosmic microwave background (CMB) has become a mainstay of observational cosmology. The ΛCDM model predicts a polarization of the CMB at the level of a few μK, with a characteristic E-mode pattern. On small angular scales, a B-mode pattern arises from the gravitational lensing of E-mode power by the large-scale structure of the universe. Inflationary gravitational waves (IGW) may be a source of B-mode power on large angular scales, and their relative contribution to primordial fluctuations is parameterized by a tensor-to-scalar ratio r. BICEP2 and Keck Array are a pair of CMB polarimeters at the South Pole designed and built for optimal sensitivity to the primordial B-mode peak around multipole l ~ 100. The BICEP2/Keck Array program aims to achieve a sensitivity sufficient to detect r ≥ 0.02. Auxiliary science goals include the study of the gravitational lensing of E-mode into B-mode signal at medium angular scales and a high-precision survey of Galactic polarization. These goals require low noise and tight control of systematics. We describe the design and calibration of the instrument. We also describe the analysis of the first three years of science data. BICEP2 observes a significant B-mode signal at 150 GHz in excess of the level predicted by the lensed-ΛCDM model, and Keck Array confirms the excess signal at > 5σ. We combine the maps from the two experiments to produce 150 GHz Q and U maps with a depth of 57 nK deg (3.4 μK arcmin) over an effective area of 400 deg², for an equivalent survey weight of 248,000 μK⁻². We also show preliminary Keck Array 95 GHz maps. A joint analysis with the Planck collaboration reveals that much of BICEP2/Keck Array's observed 150 GHz signal at low l is more likely a Galactic dust foreground than a measurement of r. Marginalizing over dust and r, lensing B-modes are detected at 7.0σ significance.
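As a quick unit check on the quoted map depth (simple arithmetic, not an additional result): since 1 deg = 60 arcmin, 57 nK·deg × 60 arcmin/deg = 3420 nK·arcmin ≈ 3.4 μK·arcmin, matching the figure in parentheses.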

Relevance: 20.00%

Publisher:

Abstract:

Detection of biologically relevant targets, including small molecules, proteins, DNA, and RNA, is vital for fundamental research as well as clinical diagnostics. Sensors with biological recognition elements provide a natural foundation for such devices because of the inherent recognition capabilities of biomolecules. Electrochemical DNA platforms are simple and sensitive, and they do not require complex target labeling or expensive instrumentation. Sensitivity and specificity are added to electrochemical DNA platforms when the physical properties of DNA are harnessed. The inherent structure of DNA, with its stacked core of aromatic bases, enables DNA to act as a wire via DNA-mediated charge transport (DNA CT). DNA CT is not only robust over long molecular distances of at least 34 nm, but is also especially sensitive to anything that perturbs proper base stacking, including DNA mismatches, lesions, or DNA-binding proteins that distort the π-stack. Electrochemical sensors based on DNA CT have previously been used for single-nucleotide polymorphism detection, hybridization assays, and DNA-binding protein detection. Here, improvements to (i) the structure of DNA monolayers and (ii) signal amplification with DNA CT platforms are described, yielding improved sensitivity and detection.

First, improvements to the control over DNA monolayer formation are reported through the incorporation of copper-free click chemistry into DNA monolayer assembly. As opposed to conventional film formation involving the self-assembly of thiolated DNA, copper-free click chemistry enables DNA to be tethered to a pre-formed mixed alkylthiol monolayer. The total amount of DNA in the final film is directly related to the amount of azide in the underlying alkylthiol monolayer. DNA monolayers formed with this technique are significantly more homogeneous and lower density, with a larger amount of individual helices exposed to the analyte solution. With these improved monolayers, significantly more sensitive detection of the transcription factor TATA binding protein (TBP) is achieved.

Using low-density DNA monolayers, two-electrode DNA arrays were designed and fabricated to enable the placement of multiple DNA sequences onto a single underlying electrode. To pattern DNA onto the primary electrode surface of these arrays, a copper precatalyst for click chemistry was electrochemically activated at the secondary electrode. The location of the secondary electrode relative to the primary electrode enabled the patterning of up to four sequences of DNA onto a single electrode surface. As opposed to conventional electrochemical readout from the primary, DNA-modified electrode, a secondary microelectrode, coupled with electrocatalytic signal amplification, enables more sensitive detection with spatial resolution on the DNA array electrode surface. Using this two-electrode platform, arrays have been formed that facilitate differentiation between well-matched and mismatched sequences, detection of transcription factors, and sequence-selective DNA hybridization, all with the incorporation of internal controls.

For effective clinical detection, the two working electrode platform was multiplexed to contain two complementary arrays, each with fifteen electrodes. This platform, coupled with low density DNA monolayers and electrocatalysis with readout from a secondary electrode, enabled even more sensitive detection from especially small volumes (4 μL per well). This multiplexed platform has enabled the simultaneous detection of two transcription factors, TBP and CopG, with surface dissociation constants comparable to their solution dissociation constants.

With the sensitivity and selectivity obtained from the multiplexed, two-working-electrode array, an electrochemical signal-on assay for the activity of the human methyltransferase DNMT1 was incorporated. DNMT1 is the most abundant human methyltransferase, and its aberrant methylation activity has been linked to the development of cancer. However, current methods to monitor methyltransferase activity are either ineffective with crude samples or impractical to develop for clinical applications due to a reliance on radioactivity. Electrochemical detection of methyltransferase activity, in contrast, circumvents these issues. The signal-on detection assay translates methylation events into electrochemical signals via a methylation-specific restriction enzyme. Using the two-working-electrode platform combined with this assay, DNMT1 activity in tumor and healthy adjacent tissue lysates was evaluated. Our electrochemical measurements revealed significant differences in methyltransferase activity between tumor tissue and healthy adjacent tissue.

As differential activity was observed between colorectal tumor tissue and healthy adjacent tissue, ten tumor sets were subsequently analyzed for DNMT1 activity, both electrochemically and by tritium incorporation. These results were compared to the expression levels of DNMT1, measured by qPCR, and the total DNMT1 protein content, measured by Western blot. The only trend detected was hyperactivity in the tumor samples as compared to the healthy adjacent tissue when activity was measured electrochemically. These advances in DNA CT-based platforms have propelled this class of sensors from the purely academic realm into the realm of clinically relevant detection.

Relevance: 20.00%

Publisher:

Abstract:

Few credible source models are available from large-magnitude past earthquakes. A stochastic source model generation algorithm thus becomes necessary for robust risk quantification using scenario earthquakes. We present an algorithm that combines the physics of fault ruptures, as imaged in laboratory earthquakes, with stress estimates on the fault constrained by field observations to generate stochastic source models for large-magnitude (Mw 6.0-8.0) strike-slip earthquakes. The algorithm is validated through a statistical comparison of synthetic ground motion histories from a stochastically generated source model for a magnitude 7.90 earthquake and a kinematic finite-source inversion of an equivalent-magnitude past earthquake on a geometrically similar fault. The synthetic dataset comprises three-component ground motion waveforms, computed at 636 sites in southern California, for ten hypothetical rupture scenarios (five hypocenters, each with two rupture directions) on the southern San Andreas fault. A similar validation exercise is conducted for a magnitude 6.0 earthquake, the lower magnitude limit for the algorithm. Additionally, ground motions from the Mw 7.9 earthquake simulations are compared against predictions by the Campbell-Bozorgnia NGA relation as well as the ShakeOut scenario earthquake. The algorithm is then applied to generate fifty source models for a hypothetical magnitude 7.9 earthquake originating at Parkfield, with rupture propagating from north to south (towards Wrightwood), similar to the 1857 Fort Tejon earthquake. Using the spectral element method, three-component ground motion waveforms are computed in the Los Angeles basin for each scenario earthquake, and the sensitivity of ground shaking intensity to seismic source parameters (such as the percentage of asperity area relative to the fault area, rupture speed, and rise time) is studied.

Under plausible San Andreas fault earthquakes in the next 30 years, modeled using the stochastic source algorithm, the performance of two 18-story steel moment frame buildings (UBC 1982 and 1997 designs) in southern California is quantified. The approach integrates rupture-to-rafters simulations into the PEER performance-based earthquake engineering (PBEE) framework. Using stochastic sources and computational seismic wave propagation, three-component ground motion histories at 636 sites in southern California are generated for sixty scenario earthquakes on the San Andreas fault. The ruptures, with moment magnitudes in the range of 6.0-8.0, are assumed to occur at five locations on the southern section of the fault, with two unilateral rupture propagation directions considered for each. The 30-year probabilities of all plausible ruptures in this magnitude range and in that section of the fault, as forecast by the United States Geological Survey, are distributed among these 60 earthquakes based on proximity and moment release. The response of the two 18-story buildings, hypothetically located at each of the 636 sites, under 3-component shaking from all 60 events is computed using 3-D nonlinear time-history analysis. Using these results, the probability of the structural response exceeding the Immediate Occupancy (IO), Life-Safety (LS), and Collapse Prevention (CP) performance levels under San Andreas fault earthquakes over the next thirty years is evaluated.

Furthermore, the conditional and marginal probability distributions of peak ground velocity (PGV) and peak ground displacement (PGD) in Los Angeles and surrounding basins, due to earthquakes occurring primarily on the mid-section of the southern San Andreas fault, are determined using Bayesian model class identification. Simulated ground motions at sites within 55-75 km of the source, from a suite of 60 earthquakes (Mw 6.0-8.0) primarily rupturing the mid-section of the San Andreas fault, are considered for the PGV and PGD data.

Relevance: 20.00%

Publisher:

Abstract:

Current technological advances in fabrication methods have provided pathways to creating architected structural meta-materials similar to those found in natural organisms that are structurally robust and lightweight, such as diatoms. Structural meta-materials are materials whose mechanical properties are determined by material properties at various length scales, ranging from the material microstructure (nm) to the macro-scale architecture (μm - mm). It is now possible to exploit material size effects, which emerge at the nanometer length scale, as well as structural effects, to tune the material properties and failure mechanisms of small-scale cellular solids, such as nanolattices. This work demonstrates the fabrication and mechanical properties of 3-dimensional hollow nanolattices in both tension and compression.

Hollow gold nanolattices loaded in uniaxial compression demonstrate that strength and stiffness vary as a function of geometry and tube wall thickness. Structural effects were explored by increasing the unit cell angle from 30° to 60° while keeping all other parameters constant; material size effects were probed by varying the tube wall thickness, t, from 200 nm to 635 nm at constant relative density and grain size. In-situ uniaxial compression experiments reveal an order-of-magnitude increase in yield stress and modulus in nanolattices with greater lattice angles, and a 150% increase in the yield strength, without a concomitant change in modulus, in thicker-walled nanolattices at fixed lattice angles. These results imply that independent control of structural and material size effects enables tunability of the mechanical properties of 3-dimensional architected meta-materials, and they highlight the importance of material, geometric, and microstructural effects in small-scale mechanics.

This work also explores the flaw tolerance of 3D hollow-tube alumina kagome nanolattices, with and without pre-fabricated notches, in both experiment and simulation. Experiments demonstrate that hollow kagome nanolattices in uniaxial tension always fail at the same load when the ratio of notch length (a) to sample width (w) is no greater than 1/3, with no correlation between failure occurring at or away from the notch. For notches with (a/w) > 1/3, the samples fail at lower peak loads; this is attributed to the increased compliance as fewer unit cells span the un-notched region. Finite element simulations of the kagome tension samples show that failure is governed by tensile loading for (a/w) < 1/3, but as (a/w) increases, bending begins to play a significant role in the failure. These experiments and simulations demonstrate that the discrete-continuum duality of architected structural meta-materials gives rise to their flaw insensitivity, even when they are made entirely of intrinsically brittle materials.