27 results for Third Space

in CaltechTHESIS

Relevance: 30.00%

Abstract:

The concept of seismogenic asperities and aseismic barriers has become a useful paradigm within which to understand the seismogenic behavior of major faults. Since asperities and barriers can be thought of as defining the potential rupture area of large megathrust earthquakes, it is important to identify their respective spatial extents, constrain their temporal longevity, and develop a physical understanding of their behavior. Space geodesy is making critical contributions to the identification of slip asperities and barriers, but progress in many geographical regions depends on improving the accuracy and precision of the basic measurements. This thesis begins with technical developments aimed at improving satellite radar interferometric measurements of ground deformation: I introduce an empirical correction algorithm for unwanted interferometric path delays caused by spatially and temporally variable radar wave propagation speeds in the atmosphere. In Chapter 2, I combine geodetic datasets with complementary spatio-temporal resolutions to improve our understanding of the spatial distribution of crustal deformation sources and their temporal evolution, using observations from Long Valley Caldera (California) as a test bed. In Chapter 3, I apply the tools developed in the first two chapters to analyze postseismic deformation associated with the 2010 Mw 8.8 Maule (Chile) earthquake. The result delimits patches where afterslip occurs, explores their relationship to the coseismic rupture, quantifies frictional properties associated with the inferred patches of afterslip, and discusses the relationship of asperities and barriers to long-term topography.
The final chapter investigates interseismic deformation of the eastern Makran subduction zone using satellite radar interferometry alone, and demonstrates that with state-of-the-art techniques it is possible to quantify tectonic signals of small amplitude and long wavelength. Portions of the eastern Makran for which we estimate low fault coupling correspond to areas where bathymetric features on the downgoing plate are presently subducting, whereas the region of the 1945 M = 8.1 earthquake appears to be more highly coupled.
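The atmospheric correction of Chapter 1 is described only qualitatively here. One common empirical approach (an assumption for illustration, not necessarily the algorithm developed in the thesis) removes the component of interferometric phase that correlates linearly with elevation, since a stratified troposphere produces exactly such a trend. A minimal sketch:

```python
import random

def fit_phase_elevation(phase, elevation):
    """Least-squares fit of phase = k * elevation + c.

    A stratified troposphere adds a path delay that correlates with
    topography; subtracting the fitted trend suppresses that component.
    """
    n = len(phase)
    mean_p = sum(phase) / n
    mean_e = sum(elevation) / n
    cov = sum((p - mean_p) * (e - mean_e) for p, e in zip(phase, elevation))
    var = sum((e - mean_e) ** 2 for e in elevation)
    k = cov / var
    return k, mean_p - k * mean_e

def correct(phase, elevation):
    k, c = fit_phase_elevation(phase, elevation)
    return [p - (k * e + c) for p, e in zip(phase, elevation)]

# Synthetic interferogram: pure stratified delay (0.01 rad per meter of
# elevation) plus measurement noise.
elev = [100.0 * i for i in range(50)]
rng = random.Random(0)
phase = [0.01 * e + rng.gauss(0.0, 0.05) for e in elev]
residual = correct(phase, elev)
print(max(abs(r) for r in residual))  # only the noise floor remains
```

Real corrections must avoid fitting out deformation that itself correlates with topography, which is presumably part of what makes the thesis algorithm nontrivial.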

Relevance: 20.00%

Abstract:

The construction and LHC phenomenology of the razor variables MR, an event-by-event indicator of the heavy-particle mass scale, and R, a dimensionless variable related to the transverse momentum imbalance of events and the missing transverse energy, are presented. The variables are used in the analysis of the first proton-proton collision dataset at CMS (35 pb^-1) in a search for superpartners of the quarks and gluons, targeting indirect hints of dark matter candidates in the context of supersymmetric theoretical frameworks. The analysis produced the highest-sensitivity results for SUSY to date and extended the LHC reach far beyond the previous Tevatron results. A generalized inclusive search is subsequently presented for new heavy particle pairs produced in √s = 7 TeV proton-proton collisions at the LHC, using 4.7±0.1 fb^-1 of integrated luminosity from the second LHC run of 2011. The selected events are analyzed in the 2D razor space of MR and R, and the analysis is performed in 12 tiers of all-hadronic, single-lepton, and double-lepton final states, in the presence and absence of b-quarks, probing the third-generation sector using the event heavy-flavor content. The search is sensitive to generic supersymmetry models with minimal assumptions about the superpartner decay chains. No excess is observed in the number or shape of event yields relative to Standard Model predictions. Exclusion limits are derived in the CMSSM framework, with gluino masses up to 800 GeV and squark masses up to 1.35 TeV excluded at 95% confidence level, depending on the model parameters. The results are also interpreted for a collection of simplified models, in which gluinos are excluded with masses as large as 1.1 TeV for small neutralino masses, and first- and second-generation squarks, stops, and sbottoms are excluded for masses up to about 800, 425, and 400 GeV, respectively.
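The razor variables themselves are compact enough to state in code. Up to sign and normalization conventions (treat the exact forms here as an assumption to be checked against the analysis), MR is built from the two "megajet" 3-momenta and R from the missing transverse energy:

```python
from math import hypot, sqrt

def razor(j1, j2, met):
    """Razor variables (MR, R) from two megajet 3-momenta and the MET 2-vector.

    j1, j2: (px, py, pz) of the two megajets; met: (mex, mey).
    MR estimates the heavy-particle mass scale; R measures how much
    transverse imbalance accompanies that scale.
    """
    p1 = sqrt(j1[0]**2 + j1[1]**2 + j1[2]**2)
    p2 = sqrt(j2[0]**2 + j2[1]**2 + j2[2]**2)
    mr = sqrt((p1 + p2)**2 - (j1[2] + j2[2])**2)
    met_mag = hypot(met[0], met[1])
    pt1, pt2 = hypot(j1[0], j1[1]), hypot(j2[0], j2[1])
    dot = met[0] * (j1[0] + j2[0]) + met[1] * (j1[1] + j2[1])
    mtr = sqrt((met_mag * (pt1 + pt2) - dot) / 2.0)  # transverse-mass-like term
    return mr, mtr / mr

# Toy event (GeV): two jets with MET balancing their transverse momenta.
mr, r = razor((100.0, 0.0, 50.0), (-80.0, 0.0, 40.0), (-20.0, 0.0))
print(mr, r)
```

QCD multijet backgrounds pile up at low R, while genuine invisible particles push events toward high R at large MR, which is what makes the (MR, R) plane a useful search space.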

With the discovery of a new boson by the CMS and ATLAS experiments in the γγ and 4-lepton final states, the identity of the putative Higgs candidate must be established through measurements of its properties. The spin and quantum numbers are of particular importance, and we describe a method, developed before the discovery, for measuring the J^PC of this particle using the observed signal events in the H to ZZ* to 4-lepton channel. Adaptations of the razor kinematic variables are introduced for the H to WW* to 2-lepton/2-neutrino channel, improving the resonance mass resolution and increasing the discovery significance. The prospects for incorporating this channel in an examination of the new boson's J^PC are discussed, with indications that it could provide complementary information to the H to ZZ* to 4-lepton final state, particularly for measuring CP violation in these decays.

Relevance: 20.00%

Abstract:

The theories of relativity and quantum mechanics, the two most important physics discoveries of the 20th century, not only revolutionized our understanding of the nature of space-time and the way matter exists and interacts, but also became the building blocks of what we currently know as modern physics. My thesis studies both subjects in great depth; the intersection takes place in gravitational-wave physics.

Gravitational waves are "ripples of space-time", long predicted by general relativity. Although indirect evidence of gravitational waves has been discovered from observations of binary pulsars, direct detection of these waves is still actively being pursued. An international array of laser interferometer gravitational-wave detectors has been constructed in the past decade, and a first generation of these detectors has taken several years of data without a discovery. At this moment, these detectors are being upgraded into second-generation configurations, which will have ten times better sensitivity. Kilogram-scale test masses of these detectors, highly isolated from the environment, are probed continuously by photons. The sensitivity of such a quantum measurement can often be limited by the Heisenberg Uncertainty Principle, and during such a measurement, the test masses can be viewed as evolving through a sequence of nearly pure quantum states.

The first part of this thesis (Chapter 2) concerns how to minimize the adverse effect of thermal fluctuations on the sensitivity of advanced gravitational-wave detectors, thereby bringing them closer to being quantum-limited. My colleagues and I present a detailed analysis of coating thermal noise in advanced gravitational-wave detectors, which is the dominant noise source of Advanced LIGO in the middle of the detection frequency band. We identified the two elastic loss angles, clarified the different components of the coating Brownian noise, and obtained their cross spectral densities.

The second part of this thesis (Chapters 3-7) concerns formulating experimental concepts and analyzing experimental results that demonstrate the quantum mechanical behavior of macroscopic objects, as well as developing theoretical tools for analyzing quantum measurement processes. In Chapter 3, we study the open quantum dynamics of optomechanical experiments in which a single photon strongly influences the quantum state of a mechanical object. We also explain how to engineer the mechanical oscillator's quantum state by modifying the single photon's wave function.

In Chapters 4-5, we build theoretical tools for analyzing the so-called "non-Markovian" quantum measurement processes. Chapter 4 establishes a mathematical formalism that describes the evolution of a quantum system (the plant), which is coupled to a non-Markovian bath (i.e., one with a memory) while at the same time being under continuous quantum measurement (by the probe field). This aims at providing a general framework for analyzing a large class of non-Markovian measurement processes. Chapter 5 develops a way of characterizing the non-Markovianity of a bath (i.e., whether and to what extent the bath remembers information about the plant) by perturbing the plant and watching for changes in its subsequent evolution. Chapter 6 re-analyzes a recent measurement of a mechanical oscillator's zero-point fluctuations, revealing a nontrivial correlation between the measurement device's sensing noise and the quantum back-action noise.

Chapter 7 describes a model in which gravity is classical and matter motions are quantized, elaborating how the quantum motions of matter are affected by the fact that gravity is classical. It offers an experimentally plausible way to test this model (hence the nature of gravity) by measuring the center-of-mass motion of a macroscopic object.

The most promising gravitational waves for direct detection are those emitted from highly energetic astrophysical processes, sometimes involving black holes, a type of object predicted by general relativity whose properties depend strongly on the strong-field regime of the theory. Although black holes have been inferred to exist at the centers of galaxies and in certain so-called X-ray binary objects, detecting gravitational waves emitted by systems containing black holes will offer a much more direct way of observing them, providing unprecedented details of the space-time geometry in the black holes' strong-field region.

The third part of this thesis (Chapters 8-11) studies black-hole physics in connection with gravitational-wave detection.

Chapter 8 applies black-hole perturbation theory to model the dynamics of a light compact object orbiting a massive central Schwarzschild black hole. In this chapter, we present a Hamiltonian formalism in which the low-mass object and the metric perturbations of the background spacetime are jointly evolved. Chapter 9 uses WKB techniques to analyze the oscillation modes (quasi-normal modes, or QNMs) of spinning black holes. We obtain analytical approximations to the spectrum of the weakly-damped QNMs, with relative error O(1/L^2), and connect these frequencies to geometric features of spherical photon orbits in Kerr spacetime. Chapter 10 focuses mainly on near-extremal Kerr black holes, for which we discuss a bifurcation in the QNM spectrum for certain ranges of (l, m) (the angular quantum numbers) as a/M → 1. With the tools prepared in Chapters 9 and 10, in Chapter 11 we obtain an analytical approximation to the scalar Green function in Kerr spacetime.
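The WKB/photon-orbit connection of Chapter 9 has a well-known Schwarzschild limit that fits in a few lines: the real part of a weakly-damped QNM frequency is set by the photon-sphere orbital frequency and the imaginary part by that orbit's instability (Lyapunov) exponent, both equal to 1/(3√3 M). A sketch in geometric units (for Kerr, these quantities are replaced by those of the corresponding spherical photon orbit):

```python
from math import sqrt

def qnm_eikonal_schwarzschild(l, n, M=1.0):
    """Leading-order WKB (eikonal) quasi-normal-mode frequency of a
    Schwarzschild black hole, in geometric units (G = c = 1).

    Real part: (l + 1/2) times the angular frequency of the circular
    photon orbit at r = 3M; imaginary part: (n + 1/2) times the orbit's
    Lyapunov exponent. Both factors equal 1/(3*sqrt(3)*M) here.
    """
    omega_c = 1.0 / (3.0 * sqrt(3.0) * M)  # photon-sphere orbital frequency
    lam = 1.0 / (3.0 * sqrt(3.0) * M)      # Lyapunov (decay) exponent
    return complex((l + 0.5) * omega_c, -(n + 0.5) * lam)

# For l = 2, n = 0 the accepted numerical value is about 0.374 - 0.089j
# (M = 1); the eikonal estimate is high by ~30% in the real part and ~10%
# in the imaginary part, with errors shrinking as l grows.
est = qnm_eikonal_schwarzschild(2, 0)
print(est)
```

This leading term carries the O(1/L^2)-accurate corrections of the thesis only implicitly; it is the geometric skeleton, not the refined approximation quoted above.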

Relevance: 20.00%

Abstract:

Signal processing techniques play important roles in the design of digital communication systems. These include information manipulation, transmitter signal processing, channel estimation, channel equalization, and receiver signal processing. By interacting with communication theory and system implementation technologies, signal processing specialists develop efficient schemes for various communication problems by exploiting mathematical tools such as analysis, probability theory, matrix theory, and optimization theory. In recent years, researchers realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communication channels. In matrix-vector notation, many MIMO transceiver (precoder and equalizer) design problems can be solved by matrix and optimization theory. Furthermore, researchers showed that majorization theory and matrix decompositions, such as the singular value decomposition (SVD), the geometric mean decomposition (GMD), and the generalized triangular decomposition (GTD), provide unified frameworks for solving many point-to-point MIMO transceiver design problems.

In this thesis, we consider transceiver design problems for linear time-invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. Additionally, the channel estimation problem is considered. The main contributions of this dissertation are the development of new matrix decompositions, and the use of matrix decompositions and majorization theory in practical transmit-receive scheme designs for transceiver optimization problems. Solutions are obtained, novel transceiver structures are developed, new algorithms are proposed, and performance analyses are derived.

The first part of the thesis focuses on transceiver design for LTI flat MIMO channels. We propose a novel matrix decomposition that iteratively decomposes a complex matrix into a product of sets of semi-unitary matrices and upper triangular matrices. The complexity of the new decomposition, the generalized geometric mean decomposition (GGMD), is always less than or equal to that of the geometric mean decomposition (GMD). The optimal GGMD parameters that yield the minimal complexity are derived. Based on the channel state information (CSI) at both the transmitter (CSIT) and the receiver (CSIR), the GGMD is used to design a butterfly-structured decision feedback equalizer (DFE) MIMO transceiver that achieves the minimum average mean square error (MSE) under a total transmit power constraint. A novel iterative detection algorithm for the receiver is also proposed. For application to cyclic prefix (CP) systems, in which the SVD of the equivalent channel matrix can be computed easily, the proposed GGMD transceiver has a K/log_2(K) complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver converts a MIMO channel into a set of parallel subchannels with the same bias and the same signal to interference plus noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate on the subchannels.
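The reason the GMD-family subchannels share one SINR is the defining property of the GMD: H = Q R P^H with Q, P semi-unitary and R upper triangular whose diagonal entries all equal the geometric mean of the nonzero singular values. That common diagonal is easy to illustrate (the singular values below are arbitrary):

```python
from math import prod

def gmd_diagonal(singular_values):
    """Common diagonal entry of R in the GMD H = Q R P^H: the geometric
    mean of the (nonzero) singular values of H."""
    k = len(singular_values)
    return prod(singular_values) ** (1.0 / k)

sigma = [4.0, 2.0, 1.0]        # hypothetical channel singular values
r = gmd_diagonal(sigma)        # every subchannel sees this same gain
print(r)                       # (4 * 2 * 1) ** (1/3) = 2.0

# The product of the diagonal entries is preserved, so total capacity
# scaling is unchanged; with equal per-subchannel gains, equal-rate
# scalar codes suffice -- the "no bit allocation needed" point above.
assert abs(r ** 3 - prod(sigma)) < 1e-12
```

The GGMD generalizes this construction; the equal-diagonal target is what equalizes bias and SINR across the parallel subchannels.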

In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels under the zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal, since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop the space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR, and channel prediction are available, we use the proposed ST-GTD to develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks) and the average per-ST-block BER in the moderately high SNR region. Moreover, the ST-GMD DFE transceiver designed under the MMSE criterion maximizes the Gaussian mutual information over the equivalent channel seen by each ST block. In general, the newly proposed transceivers perform better than the GGMD-based systems, since the superimposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD based system that does not require channel prediction but shares the same asymptotic BER performance with the ST-GMD DFE transceiver is also proposed.

The third part of the thesis considers two quality-of-service (QoS) transceiver design problems for flat MIMO broadcast channels. The first is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum-power JT broadcast DFE transceiver (MPJT) and the maximum-rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on the QR decomposition are proposed; they are realizable for an arbitrary number of users.

Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) for LTV scalar channels. For both known LTV channels and unknown wide-sense stationary uncorrelated scattering (WSSUS) statistical channels, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT such that the SINR at the receiver is maximized. Also, a novel pilot-aided subspace channel estimation algorithm is proposed for orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multi-path Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods, such as MUSIC and ESPRIT, can be used to estimate the multipath delays, and the number of identifiable paths is theoretically up to O(M^2). With the delay information, an MMSE estimator for the channel frequency response is derived. Simulations show that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.
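The M^2-co-pilots claim rests on the difference co-array idea: pilots at tone positions p_i give access to all lags p_i − p_j, so a well-chosen non-uniform placement yields on the order of M^2 distinct lags from only M tones. A toy illustration with hypothetical pilot positions (a perfect-difference-set style choice, not the thesis's alternating placement):

```python
def difference_coarray(pilots):
    """Distinct lags p_i - p_j generated by a set of pilot tone indices."""
    return sorted({a - b for a in pilots for b in pilots})

# M = 4 physical pilots on a non-uniform grid.
pilots = [0, 1, 4, 9]
lags = difference_coarray(pilots)
print(len(pilots), len(lags))  # 4 physical pilots -> 13 distinct lags
```

Here 4 pilots produce M^2 − M + 1 = 13 distinct lags, which is what lets subspace methods like MUSIC and ESPRIT resolve far more delay taps than there are physical pilots.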

Relevance: 20.00%

Abstract:

Life is the result of the execution of molecular programs: just as an embryo is fated to become a human or a whale, or a person's appearance is inherited from their parents, many biological phenomena are governed by genetic programs written in DNA molecules. At the core of such programs is the highly reliable base-pairing interaction between nucleic acids. DNA nanotechnology exploits the programming power of DNA to build artificial nanostructures, molecular computers, and nanomachines. In particular, DNA origami, a simple yet versatile technique for creating various nanoscale shapes and patterns, is at the heart of the technology. In this thesis, I describe the development of programmable self-assembly and reconfiguration of DNA origami nanostructures based on a unique strategy: rather than relying on Watson-Crick base pairing, we developed programmable bonds via the geometric arrangement of stacking interactions, which we termed stacking bonds. We further demonstrated that such bonds can be dynamically reconfigured.

The first part of this thesis describes the design and implementation of stacking bonds. Our work addresses the fundamental question of whether one can create diverse bond types out of a single kind of attractive interaction—a question first posed implicitly by Francis Crick while seeking a deeper understanding of the origin of life and primitive genetic code. For the creation of multiple specific bonds, we used two different approaches: binary coding and shape coding of geometric arrangement of stacking interaction units, which are called blunt ends. To construct a bond space for each approach, we performed a systematic search using a computer algorithm. We used orthogonal bonds to experimentally implement the connection of five distinct DNA origami nanostructures. We also programmed the bonds to control cis/trans configuration between asymmetric nanostructures.
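The binary-coding search can be caricatured in a few lines: encode each edge as a pattern of active (1) and inactive (0) blunt ends, score a pairing by the number of aligned active-active stacks when one edge meets the other reversed, and keep patterns whose self-pairing is strong while every cross-pairing with previously kept patterns stays weak. This is a toy model (the thresholds, pattern length, and greedy order are arbitrary choices, not the thesis algorithm), but it already yields a small orthogonal set:

```python
from itertools import product

def bond_strength(a, b):
    """Aligned active-active stacks when edge b meets edge a reversed."""
    return sum(x & y for x, y in zip(a, reversed(b)))

def orthogonal_codes(patterns, on=4, off=2):
    """Greedily keep patterns that self-bind strongly (>= on stacks) and
    cross-bind weakly (< off stacks) with everything kept so far."""
    kept = []
    for p in patterns:
        if bond_strength(p, p) < on:
            continue
        if all(bond_strength(p, q) < off and bond_strength(q, p) < off
               for q in kept):
            kept.append(p)
    return kept

# Exhaustive search over all 8-blunt-end edge patterns.
candidates = list(product((0, 1), repeat=8))
codes = orthogonal_codes(candidates)
print(len(codes), codes)
```

The real design space is richer (shape coding, asymmetric cis/trans control), which is why the thesis's systematic search could support five mutually orthogonal origami connections.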

The second part of this thesis describes the large-scale self-assembly of DNA origami into two-dimensional checkerboard-pattern crystals via surface diffusion. We developed a protocol where the diffusion of DNA origami occurs on a substrate and is dynamically controlled by changing the cationic condition of the system. We used stacking interactions to mediate connections between the origami, because of their potential for reconfiguring during the assembly process. Assembling DNA nanostructures directly on substrate surfaces can benefit nano/microfabrication processes by eliminating a pattern transfer step. At the same time, the use of DNA origami allows high complexity and unique addressability with six-nanometer resolution within each structural unit.

The third part of this thesis describes the use of stacking bonds as dynamically breakable bonds. To break the bonds, we used biological machinery, the ParMRC system, extracted from bacteria. This system ensures that when a cell divides, each daughter cell gets one copy of the cell's DNA, by actively pushing the copies toward opposite poles of the cell. We demonstrate dynamically expandable nanostructures, which makes stacking bonds promising candidates for reconfigurable connectors for nanoscale machine parts.

Relevance: 20.00%

Abstract:

This thesis presents a concept for ultra-lightweight deformable mirrors based on a thin substrate of optical surface quality coated with continuous active piezopolymer layers that provide modes of actuation and shape correction. This concept eliminates any kind of stiff backing structure for the mirror surface and exploits micro-fabrication technologies to provide a tight integration of the active materials into the mirror structure, avoiding actuator print-through effects. Proof-of-concept, 10-cm-diameter mirrors with a low areal density of about 0.5 kg/m² have been designed, built, and tested to measure their shape-correction performance and to verify the models used for design. The low-cost manufacturing scheme uses replication techniques and strives to minimize residual stresses that pull the optical figure away from that of the master mandrel. It does not require precision tolerancing, is lightweight, and is therefore potentially scalable to larger diameters for use in large, modular space telescopes. Other potential applications for such a laminate include ground-based mirrors for solar energy collection, adaptive optics for atmospheric turbulence, laser communications, and other shape-control applications.

The immediate application for these mirrors is for the Autonomous Assembly and Reconfiguration of a Space Telescope (AAReST) mission, which is a university mission under development by Caltech, the University of Surrey, and JPL. The design concept, fabrication methodology, material behaviors and measurements, mirror modeling, mounting and control electronics design, shape control experiments, predictive performance analysis, and remaining challenges are presented herein. The experiments have validated numerical models of the mirror, and the mirror models have been used within a model of the telescope in order to predict the optical performance. A demonstration of this mirror concept, along with other new telescope technologies, is planned to take place during the AAReST mission.

Relevance: 20.00%

Abstract:

The concept of a "projection function" in a finite-dimensional real or complex normed linear space H (the function PM which carries every element into the closest element of a given subspace M) is set forth and examined.

If dim M = dim H - 1, then PM is linear. If PN is linear for all k-dimensional subspaces N, where 1 ≤ k < dim M, then PM is linear.

The projective bound Q, defined to be the supremum of the operator norm of PM over all subspaces M, lies in the range 1 ≤ Q < 2, and these limits are the best possible. For norms with Q = 1, PM is always linear, and a characterization of those norms is given.
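The linearity statements are easy to probe numerically. In H = R^3 with the 4-norm (smooth and strictly convex, so nearest points are unique), the nearest-point projection onto the one-dimensional subspace M spanned by (1, 1, 1) fails additivity; this is consistent with the results above, which guarantee linearity unconditionally only when dim M = dim H − 1. A sketch using ternary search for the minimizer:

```python
def proj_coeff(v, p=4, lo=-10.0, hi=10.0, iters=200):
    """Coefficient t minimizing ||v - t*(1,1,1)||_p by ternary search
    (the p-norm objective is strictly convex in t for even p > 1)."""
    def f(t):
        return sum(abs(x - t) ** p for x in v)
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0

t1 = proj_coeff((1.0, 0.0, 0.0))   # closed form: 1 / (1 + 2**(1/3))
t2 = proj_coeff((0.0, 1.0, 0.0))   # same, by symmetry
t12 = proj_coeff((1.0, 1.0, 0.0))
print(t1 + t2, t12)   # the two differ: PM is not additive, hence not linear
```

For the Euclidean norm the same experiment gives t12 = t1 + t2 exactly, matching the Q = 1 case in which PM is always linear.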

If H also has an inner product (defined independently of the norm), so that a dual norm can be defined, then when PM is linear its adjoint PM^H is the projection onto (kernel PM)^⊥ determined by the dual norm. The projective bounds of a norm and its dual are equal.

The notion of a pseudo-inverse F+ of a linear transformation F is extended to non-Euclidean norms. The distance from F to the set of linear transformations G of lower rank (in the sense of the operator norm ∥F - G∥) is c/∥F+∥, where c = 1 if the range of F fills its space, and 1 ≤ c < Q otherwise. The norms on both the domain and range spaces have Q = 1 if and only if (F+)+ = F for every F. This condition is also sufficient to guarantee that (F+)^H = (F^H)+, where the latter pseudo-inverse is taken using the dual norms.
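In the Euclidean case (Q = 1, c = 1) the distance formula reduces to the classical fact that the nearest lower-rank transformation sits at distance σ_min, while ∥F+∥ = 1/σ_min; and (F+)+ = F holds. A diagonal worked example (diagonal matrices keep the arithmetic in plain Python):

```python
def pinv_diag(d):
    """Pseudo-inverse of a diagonal matrix, represented by its diagonal."""
    return [1.0 / x if x != 0 else 0.0 for x in d]

F = [3.0, 1.0, 0.5]    # diagonal entries = singular values; range fills space
Fp = pinv_diag(F)      # [1/3, 1, 2]

op_norm_Fp = max(abs(x) for x in Fp)          # operator 2-norm of diagonal F+
dist_to_lower_rank = min(abs(x) for x in F)   # zero out the smallest entry

# c / ||F+|| with c = 1: both sides equal sigma_min = 0.5.
assert abs(dist_to_lower_rank - 1.0 / op_norm_Fp) < 1e-12
# (F+)+ = F, the Q = 1 case of the theorem above.
assert all(abs(a - b) < 1e-12 for a, b in zip(pinv_diag(Fp), F))
print(dist_to_lower_rank, 1.0 / op_norm_Fp)
```

The content of the thesis result is that this picture survives for general norms, with the constant c degrading only as far as the projective bound Q allows.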

In all results, the real and complex cases are handled in a completely parallel fashion.

Relevance: 20.00%

Abstract:

The low-thrust guidance problem is defined as the minimum terminal variance (MTV) control of a space vehicle subjected to random perturbations of its trajectory. To accomplish this control task, only bounded thrust level and thrust angle deviations are allowed, and these must be calculated based solely on the information gained from noisy, partial observations of the state. In order to establish the validity of various approximations, the problem is first investigated under the idealized conditions of perfect state information and negligible dynamic errors. To check each approximate model, an algorithm is developed to facilitate the computation of the open loop trajectories for the nonlinear bang-bang system. Using the results of this phase in conjunction with the Ornstein-Uhlenbeck process as a model for the random inputs to the system, the MTV guidance problem is reformulated as a stochastic, bang-bang, optimal control problem. Since a complete analytic solution seems to be unattainable, asymptotic solutions are developed by numerical methods. However, it is shown analytically that a Kalman filter in cascade with an appropriate nonlinear MTV controller is an optimal configuration. The resulting system is simulated using the Monte Carlo technique and is compared to other guidance schemes of current interest.
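The stochastic model and estimator structure can be sketched for a scalar state: an Ornstein-Uhlenbeck disturbance has an exact one-step discretization, and a discrete Kalman filter tracks it from noisy partial observations, with the MTV controller then acting on the estimate rather than on the raw measurement. All parameter values below are hypothetical:

```python
import random
from math import exp, sqrt

theta, sigma, dt = 0.5, 1.0, 0.1          # OU mean reversion, diffusion, step
a = exp(-theta * dt)                      # exact one-step OU transition
q = sigma**2 * (1.0 - a * a) / (2.0 * theta)  # exact process-noise variance
r = 0.25                                  # measurement-noise variance

rng = random.Random(1)
x, xhat, P = 0.0, 0.0, 1.0
for _ in range(500):
    x = a * x + rng.gauss(0.0, sqrt(q))   # true OU disturbance state
    z = x + rng.gauss(0.0, sqrt(r))       # noisy partial observation
    # Kalman predict / update
    xhat, P = a * xhat, a * a * P + q
    K = P / (P + r)
    xhat, P = xhat + K * (z - xhat), (1.0 - K) * P

print(P)   # steady-state posterior variance; a bang-bang MTV controller
           # would switch thrust based on xhat, not on z
```

The cascade structure (filter feeding a nonlinear bang-bang controller) mirrors the configuration the abstract shows to be optimal, though the thesis's controller is derived for the full guidance problem, not this scalar toy.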

Relevance: 20.00%

Abstract:

Motivated by recent Mars Science Laboratory (MSL) results, in which the ablation rate of the PICA heatshield was over-predicted, and staying true to the objectives outlined in the NASA Space Technology Roadmaps and Priorities report, this work focuses on advancing entry, descent, and landing (EDL) technologies for future space missions.

Due to the difficulty of performing flight tests in the hypervelocity regime, a new ground testing facility, the vertical expansion tunnel (VET), is proposed. The adverse effects of secondary diaphragm rupture in an expansion tunnel may be reduced or eliminated by orienting the tunnel vertically, matching the test gas pressure to the accelerator gas pressure, and initially separating the test gas from the accelerator gas by density stratification. If some sacrifice of the reservoir conditions can be made, the VET can be used for hypervelocity ground testing without the problems associated with secondary diaphragm rupture.

The performance of different constraints for the Rate-Controlled Constrained-Equilibrium (RCCE) method is investigated in the context of modeling reacting flows characteristic of ground testing facilities and re-entry conditions. The effectiveness of different constraints is isolated, and new constraints previously unmentioned in the literature are introduced. Three main benefits of the RCCE method were determined: 1) a reduction in the number of equations that must be solved to model a reacting flow; 2) a reduction in the stiffness of the system of equations to be solved; and 3) the ability to tabulate chemical properties as a function of a constraint once, prior to running a simulation, and to reuse the same table across multiple simulations.

Finally, published physical properties of PICA are compiled, and the composition of the pyrolysis gases that form at high temperatures inside a heatshield is investigated. A necessary link between the composition of the solid resin and the composition of the pyrolysis gases it creates is provided. This link, combined with a detailed investigation of a reacting pyrolysis gas mixture, allows a much-needed, consistent, and thorough description of many of the physical phenomena occurring in a PICA heatshield, and of their implications, to be presented.

Through the use of computational fluid mechanics and computational chemistry methods, significant contributions have been made to advancing ground testing facilities, computational methods for reacting flows, and ablation modeling.

Relevance: 20.00%

Abstract:

The epoch of reionization remains one of the last uncharted eras of cosmic history, yet this time is of crucial importance, encompassing the formation of both the first galaxies and the first metals in the universe. In this thesis, I present four related projects that both characterize the abundance and properties of these first galaxies and use follow-up observations of these galaxies to achieve one of the first measurements of the neutral fraction of the intergalactic medium during the heart of the reionization era.

First, we present the results of a spectroscopic survey using the Keck telescopes targeting 6.3 < z < 8.8 star-forming galaxies. We secured observations of 19 candidates, initially selected by applying the Lyman break technique to infrared imaging data from the Wide Field Camera 3 (WFC3) onboard the Hubble Space Telescope (HST). This survey builds upon earlier work from Stark et al. (2010, 2011), which showed that star-forming galaxies at 3 < z < 6, when the universe was highly ionized, displayed a significant increase in strong Lyman alpha emission with redshift. Our work uses the LRIS and NIRSPEC instruments to search for Lyman alpha emission in candidates at greater redshifts in the observed near-infrared, in order to discern whether this evolution continues, or is quenched by an increase in the neutral fraction of the intergalactic medium. Our spectroscopic observations typically reach a 5-sigma limiting sensitivity of < 50 Å. Despite expecting, on the basis of our Monte Carlo simulations, to detect Lyman alpha at 5-sigma in 7-8 galaxies, we achieve secure detections in only two of the 19 sources. Combining these results with a similar sample of 7 galaxies from Fontana et al. (2010), we determine that so few detections would occur in < 1% of simulations if the intrinsic distribution were the same as that at z ~ 6. We consider other explanations for this decline, but find the most convincing to be an increase in the neutral fraction of the intergalactic medium. Using theoretical models, we infer a neutral fraction of X_HI ~ 0.44 at z = 7.
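The "< 1% of simulations" figure is consistent with a back-of-the-envelope binomial check: if each of the 19 sources independently carried the no-evolution detection probability (7.5 expected detections → p ≈ 0.39), two or fewer detections would indeed be very improbable. This is only an order-of-magnitude illustration; the thesis's Monte Carlo folds in per-source depths and equivalent-width distributions:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1.0 - p) ** (n - i) for i in range(k + 1))

p_detect = 7.5 / 19.0          # per-source detection probability, no evolution
prob = binom_cdf(2, 19, p_detect)
print(prob)                    # well below 0.01
```

That the toy calculation lands comfortably under 1% shows the significance is driven by the detection count itself, not by fine details of the simulation.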

Second, we characterize the abundance of star-forming galaxies at z > 6.5, again using WFC3 onboard the HST. This project conducted a detailed search for candidates in both the Hubble Ultra Deep Field and a number of wider Hubble Space Telescope surveys to construct luminosity functions at z ~ 7 and 8, reaching 0.65 and 0.25 mag fainter than any previous survey, respectively. With this increased depth, we achieve some of the most robust constraints on the Schechter-function faint-end slopes at these redshifts, finding very steep values of alpha_{z~7} = -1.87 +/- 0.18 and alpha_{z~8} = -1.94 +/- 0.23. We discuss these results in the context of cosmic reionization and show that, given reasonable assumptions about the ionizing spectra and the escape fraction of ionizing photons, only half the photons needed to maintain reionization are provided by currently observable galaxies at z ~ 7-8. We show that an extension of the luminosity function down to M_{UV} = -13.0, coupled with a low level of star formation out to higher redshift, can fit all available constraints on the ionization history of the universe.
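The faint-end extension argument can be made quantitative with the Schechter form itself: with α ≈ −1.87, integrating L·φ(M) down to M_UV = −13 rather than to a typical observed limit boosts the UV luminosity density by roughly a factor of two. In the sketch below, M* = −20.0 and the −17.0 "observed" limit are assumed values for illustration, not numbers from the abstract:

```python
from math import exp

def uv_luminosity_density(alpha, m_star, m_faint, m_bright=-23.0, n=20000):
    """Integral of L * phi(M) dM (arbitrary units) for a Schechter
    function, by the trapezoid rule.

    phi(M) is proportional to 10**(-0.4*(M - M*)*(alpha + 1)) *
    exp(-10**(-0.4*(M - M*))); weighting by L adds one more factor
    of 10**(-0.4*(M - M*)).
    """
    dm = (m_faint - m_bright) / n
    total = 0.0
    for i in range(n + 1):
        m = m_bright + i * dm
        x = 10.0 ** (-0.4 * (m - m_star))     # L / L*
        w = x ** (alpha + 2.0) * exp(-x)      # L * phi(M), up to constants
        total += w * (0.5 if i in (0, n) else 1.0)
    return total * dm

alpha, m_star = -1.87, -20.0                  # alpha from above; M* assumed
shallow = uv_luminosity_density(alpha, m_star, m_faint=-17.0)
deep = uv_luminosity_density(alpha, m_star, m_faint=-13.0)
print(deep / shallow)    # roughly a factor ~2 more ionizing output
```

This is the arithmetic behind the statement that observable galaxies supply only about half the required photons: with so steep a slope, much of the luminosity density hides below current detection limits.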

Third, we investigate the strength of nebular emission in 3 < z < 5 star-forming galaxies. We begin by using the Infrared Array Camera (IRAC) onboard the Spitzer Space Telescope to investigate the strength of H alpha emission in a sample of 3.8 < z < 5.0 spectroscopically confirmed galaxies. We then conduct near-infrared observations of star-forming galaxies at 3 < z < 3.8 to investigate the strength of the [OIII] 4959/5007 and H beta emission lines from the ground using MOSFIRE. In both cases, we uncover near-ubiquitous strong nebular emission, and find excellent agreement between the fluxes derived using the separate methods. For a subset of 9 objects in our MOSFIRE sample that have secure Spitzer IRAC detections, we compare the emission line flux derived from the excess in the K_s band photometry to that derived from direct spectroscopy and find 7 to agree within a factor of 1.6, with only one catastrophic outlier. Finally, for a different subset for which we also have DEIMOS rest-UV spectroscopy, we compare the relative velocities of Lyman alpha and the rest-optical nebular lines, which should trace the sites of star formation. We find a median velocity offset of only v_{Ly alpha} = 149 km/s, significantly less than the 400 km/s observed for star-forming galaxies with weaker Lyman alpha emission at z = 2-3 (Steidel et al. 2010), and show that this decrease can be explained by a decrease in the neutral hydrogen column density covering the galaxy. We discuss how this implies a lower neutral fraction for a given observed extinction of Lyman alpha when its visibility is used to probe the ionization state of the intergalactic medium.
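The velocity offsets quoted here come from comparing the redshift of the Lyman alpha line with the systemic redshift measured from the nebular lines. A minimal sketch of that conversion (the example wavelength and redshift below are hypothetical, not objects from the sample):

```python
C_KMS = 299792.458   # speed of light, km/s
LYA_REST = 1215.67   # Lyman-alpha rest wavelength, Angstroms

def lya_velocity_offset(lam_lya_obs, z_sys):
    """Velocity of Lyman alpha relative to the systemic redshift (km/s).

    z_sys is the systemic redshift from rest-optical nebular lines
    (e.g. [OIII], H beta), which trace the sites of star formation;
    lam_lya_obs is the observed Lyman-alpha wavelength in Angstroms.
    """
    z_lya = lam_lya_obs / LYA_REST - 1.0
    return C_KMS * (z_lya - z_sys) / (1.0 + z_sys)
```

A positive offset means Lyman alpha emerges redward of systemic, as expected when the line escapes through outflowing neutral gas.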

Finally, we utilize the recent CANDELS wide-field, infrared photometry over the GOODS-N and S fields to re-analyze the use of Lyman alpha emission to evaluate the neutrality of the intergalactic medium. With these new data, we derive accurate ultraviolet spectral slopes for a sample of 468 3 < z < 6 star-forming galaxies, already observed in the rest-UV with the Keck spectroscopic survey (Stark et al. 2010). We use a Bayesian fitting method, which accurately accounts for contamination and obscuration by skylines, to derive a relationship between the UV slope of a galaxy and its intrinsic Lyman alpha equivalent width probability distribution. We then apply this relationship to spectroscopic surveys during the reionization era, including our own, to accurately interpret the drop in observed Lyman alpha emission. From our most recent such MOSFIRE survey, we also present evidence for the most distant galaxy confirmed through emission line spectroscopy, at z = 7.62, as well as a first detection of the CIII] 1907/1909 doublet at z > 7.
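The UV spectral slope beta is conventionally defined by f_lambda proportional to lambda^beta and, in the simplest case, can be estimated from two broadband AB magnitudes. A two-band sketch of that conversion (the filter wavelengths below are placeholder values, not the CANDELS filter set, and the survey's actual fit uses all available bands):

```python
import math

def uv_slope_beta(m1, m2, lam1, lam2):
    """UV continuum slope beta (f_lambda ~ lambda^beta) from two AB magnitudes.

    m1, m2 are AB magnitudes in filters of effective wavelength lam1, lam2
    (Angstroms). Since f_nu ~ lambda^(beta + 2), the slope follows from the
    color: beta = -0.4 (m1 - m2) / log10(lam1 / lam2) - 2.
    """
    return -0.4 * (m1 - m2) / math.log10(lam1 / lam2) - 2.0
```

A source that is flat in f_nu (equal AB magnitudes in both bands) has beta = -2, the canonical value for a young, dust-free stellar population.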

We conclude the thesis by exploring future prospects and summarizing the results of Robertson et al. (2013). This work synthesizes many of the measurements in this thesis, along with external constraints, to create a model of reionization that fits nearly all available constraints.

DC and transient measurements of space-charge-limited currents through alloyed and symmetrical n^+ν n^+ structures made of nominally 75 kΩcm ν-type silicon are studied before and after the introduction of defects by 14 MeV neutron radiation. In the transient measurements, the current response to a large turn-on voltage step is analyzed. Right after the voltage step is applied, the current transient reaches a value which we shall call the "initial current." At longer times, the transient current decays from this initial value if traps are present.

Before the irradiation, the initial current density-voltage characteristics J(V) agree quantitatively with the theory of trap-free space-charge-limited current in solids. We obtain for the electron mobility a temperature dependence which indicates that scattering due to impurities is weak, as expected for the high-purity silicon used. The drift velocity-field relationships for electrons at room temperature and 77 K, derived from the initial current density-voltage characteristics, are shown to fit the relationships obtained by other workers using other methods. The transient current response for t > 0 remains practically constant at the initial value, indicating negligible trapping.
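The trap-free theory referred to above predicts the Mott-Gurney square law, J = (9/8) eps mu V^2 / L^3. A sketch in SI units; the mobility below is a generic room-temperature figure for electrons in silicon, assuming a constant (low-field) mobility, not the measured drift velocity-field relationship discussed in the text:

```python
EPS0 = 8.854e-12        # vacuum permittivity, F/m
EPS_SI = 11.7 * EPS0    # static permittivity of silicon

def mott_gurney_current_density(V, L, mu=0.135):
    """Trap-free space-charge-limited current density (A/m^2).

    Mott-Gurney law J = (9/8) * eps * mu * V^2 / L^3 for applied voltage V
    (volts), sample thickness L (m), and constant electron mobility mu
    (m^2/V/s). Valid only while the mobility is field-independent.
    """
    return 9.0 / 8.0 * EPS_SI * mu * V ** 2 / L ** 3
```

The hallmarks are the quadratic dependence on voltage and the inverse-cube dependence on thickness; shallow trapping reduces J by the fraction of injected charge that remains free, which is how the post-irradiation DC characteristics reveal the trap levels.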

Measurement of the initial (trap-free) current density-voltage characteristics after the irradiation indicates that the drift velocity-field relationship of electrons in silicon is affected by the radiation only at low temperature in the low field range. The effect is not sufficiently pronounced to be readily analyzed and no formal description of it is offered. In the transient response after irradiation for t > 0, the current decays from its initial value, thus revealing the presence of traps. To study these traps, in addition to transient measurements, the DC current characteristics were measured and shown to follow the theory of trap-dominated space-charge-limited current in solids. This theory was applied to a model consisting of two discrete levels in the forbidden band gap. Calculations and experiments agreed and the capture cross-sections of the trapping levels were obtained. This is the first experimental case known to us through which the flow of space-charge-limited current is so simply representable.

These results demonstrate the sensitivity of space-charge-limited current flow as a tool to detect traps and changes in the drift velocity-field relationship of carriers caused by radiation. They also establish that devices based on the mode of space-charge-limited current flow will be affected considerably by any type of radiation capable of introducing traps. This point has generally been overlooked so far, but is obviously quite significant.

I. Trimesic acid (1,3,5-benzenetricarboxylic acid) crystallizes with a monoclinic unit cell of dimensions a = 26.52 A, b = 16.42 A, c = 26.55 A, and β = 91.53°, with 48 molecules/unit cell. Extinctions indicated a space group of Cc or C2/c; a satisfactory structure was obtained in the latter with 6 molecules/asymmetric unit (C54O36H36, formula weight 1261 g). Of approximately 12,000 independent reflections within the CuKα sphere, intensities of 11,563 were recorded visually from equi-inclination Weissenberg photographs.
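The reported cell parameters can be checked for internal consistency: a monoclinic cell has volume V = a*b*c*sin(beta), and with 48 molecules per cell (8 six-molecule asymmetric units of formula weight 1261) this implies a crystal density typical of an organic acid. A sketch of that arithmetic:

```python
import math

AVOGADRO = 6.02214e23

def monoclinic_volume(a, b, c, beta_deg):
    """Unit-cell volume (in the cubed units of a, b, c) for a monoclinic cell."""
    return a * b * c * math.sin(math.radians(beta_deg))

# Cell reported for trimesic acid: a = 26.52, b = 16.42, c = 26.55 A,
# beta = 91.53 deg.
V = monoclinic_volume(26.52, 16.42, 26.55, 91.53)   # cubic Angstroms

# 8 asymmetric units per cell, each of formula weight 1261 g/mol.
mass_per_cell = 8 * 1261.0 / AVOGADRO               # grams
density = mass_per_cell / (V * 1e-24)               # g/cm^3
```

The resulting density of about 1.45 g/cm^3 is a reasonable value for a hydrogen-bonded aromatic carboxylic acid, supporting the cell contents quoted above.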

The structure was solved by packing considerations aided by molecular transforms and two- and three-dimensional Patterson functions. Hydrogen positions were found on difference maps. A total of 978 parameters were refined by least squares; these included hydrogen parameters and anisotropic temperature factors for the C and O atoms. The final R factor was 0.0675; the final "goodness of fit" was 1.49. All calculations were carried out on the Caltech IBM 7040-7094 computer using the CRYRM Crystallographic Computing System.

The six independent molecules fall into two groups of three nearly parallel molecules. All molecules are connected by carboxyl-to-carboxyl hydrogen bond pairs to form a continuous array of six-molecule rings with a chicken-wire appearance. These arrays bend to assume two orientations, forming pleated sheets. Arrays in different orientations interpenetrate, with three molecules in one orientation passing through the holes of three parallel arrays in the alternate orientation, to produce a completely interlocking network. One third of the carboxyl hydrogen atoms were found to be disordered.

II. Optical transforms as related to x-ray diffraction patterns are discussed with reference to the theory of Fraunhofer diffraction.

The use of a systems approach in crystallographic computing is discussed with special emphasis on the way in which this has been done at the California Institute of Technology.

An efficient manner of calculating Fourier and Patterson maps on a digital computer is presented. Expressions for the calculation of to-scale maps for standard sections and for general-plane sections are developed; space-group-specific expressions in a form suitable for computers are given for all space groups except the hexagonal ones.
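A Fourier map is a sum rho(x) = sum_h F_h exp(-2*pi*i*h*x); a Patterson map is the same synthesis with phase-free |F_h|^2 coefficients, so its peaks fall at interatomic vectors. The one-dimensional direct-summation sketch below illustrates the synthesis itself, not the efficient computational scheme developed in the thesis:

```python
import cmath

def fourier_map_1d(coeffs, n_points=60):
    """One-dimensional Fourier synthesis rho(x) = sum_h F_h exp(-2*pi*i*h*x).

    `coeffs` maps integer index h to a (possibly complex) coefficient F_h,
    and must include both +h and -h so the synthesis is real. Passing
    |F_h|^2 as coefficients yields a Patterson map.
    """
    rho = []
    for j in range(n_points):
        x = j / n_points
        val = sum(F * cmath.exp(-2j * cmath.pi * h * x)
                  for h, F in coeffs.items())
        rho.append(val.real)
    return rho
```

For example, two point atoms at fractional coordinates 0.1 and 0.3 give Patterson peaks at u = 0 and u = +/-0.2, the interatomic vector.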

Expressions for the calculation of settings for an Eulerian-cradle diffractometer are developed for both the general triclinic case and the orthogonal case.

Photographic materials on pp. 4, 6, 10, and 20 are essential and will not reproduce clearly on Xerox copies. Photographic copies should be ordered.

Part I: The dynamic response of an elastic half space to an explosion in a buried spherical cavity is investigated by two methods. The first is implicit, and the final expressions for the displacements at the free surface are given as a series of spherical wave functions whose coefficients are solutions of an infinite set of linear equations. The second method is based on Schwarz's technique to solve boundary value problems, and leads to an iterative solution, starting with the known expression for the point source in a half space as first term. The iterative series is transformed into a system of two integral equations, and into an equivalent set of linear equations. In this way, a dual interpretation of the physical phenomena is achieved. The systems are treated numerically and the Rayleigh wave part of the displacements is given in the frequency domain. Several comparisons with simpler cases are analyzed to show the effect of the cavity radius-depth ratio on the spectra of the displacements.

Part II: A high speed, large capacity, hypocenter location program has been written for an IBM 7094 computer. Important modifications to the standard method of least squares have been incorporated in it. Among them are a new way to obtain the depth of shocks from the normal equations, and the computation of variable travel times for the local shocks in order to account automatically for crustal variations. The multiregional travel times, largely based upon the investigations of the United States Geological Survey, are confronted with actual traverses to test their validity.
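The least-squares location method referred to above is Geiger's classical linearization: predicted arrival times are expanded to first order in the hypocentral parameters and the normal equations are solved iteratively. The much-reduced sketch below handles only an epicenter and origin time in a uniform-velocity medium; the thesis program additionally solves for depth and applies multiregional travel times. Station layout and wave speed are hypothetical:

```python
import math

def solve3(N, g):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [gi] for row, gi in zip(N, g)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def locate_epicenter(stations, arrivals, v, x0=10.0, y0=10.0, t0=0.0, iters=20):
    """Geiger-style least-squares epicenter location in a uniform medium.

    stations: list of (x, y) in km; arrivals: observed P times in s;
    v: wave speed in km/s. Iterates linearized normal equations for the
    corrections to (origin time, x, y).
    """
    x, y, t = x0, y0, t0
    for _ in range(iters):
        A, b = [], []
        for (sx, sy), tobs in zip(stations, arrivals):
            d = math.hypot(x - sx, y - sy) or 1e-9
            # Partial derivatives of predicted time t + d/v.
            A.append([1.0, (x - sx) / (v * d), (y - sy) / (v * d)])
            b.append(tobs - (t + d / v))
        N = [[sum(ai[r] * ai[c] for ai in A) for c in range(3)] for r in range(3)]
        g = [sum(ai[r] * bi for ai, bi in zip(A, b)) for r in range(3)]
        dt, dx, dy = solve3(N, g)
        t, x, y = t + dt, x + dx, y + dy
    return x, y, t
```

With a synthetic event and noise-free arrivals, the iteration recovers the true epicenter and origin time to high accuracy; in practice, the quality of the solution (particularly in depth) hinges on the travel-time model, which is why the thesis tests multiregional times against actual traverses.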

It is shown that several crustal phases provide control enough to obtain good solutions in depth for nuclear explosions, though not all the recording stations are in the region where crustal corrections are considered. The use of the European travel times, to locate the French nuclear explosion of May 1962 in the Sahara, proved to be more adequate than previous work.

A simpler program, with manual crustal corrections, is used to process the Kern County series of aftershocks, and a clearer picture of tectonic mechanism of the White Wolf fault is obtained.

Shocks in the California region are processed automatically and statistical frequency-depth and energy depth curves are discussed in relation to the tectonics of the area.

This thesis presents a new class of solvers for the subsonic compressible Navier-Stokes equations in general two- and three-dimensional spatial domains. The proposed methodology incorporates: 1) A novel linear-cost implicit solver based on use of higher-order backward differentiation formulae (BDF) and the alternating direction implicit approach (ADI); 2) A fast explicit solver; 3) Dispersionless spectral spatial discretizations; and 4) A domain decomposition strategy that negotiates the interactions between the implicit and explicit domains. In particular, the implicit methodology is quasi-unconditionally stable (it does not suffer from CFL constraints for adequately resolved flows), and it can deliver orders of time accuracy between two and six in the presence of general boundary conditions. In fact, this thesis presents, for the first time in the literature, high-order time-convergence curves for Navier-Stokes solvers based on the ADI strategy---previous ADI solvers for the Navier-Stokes equations have not demonstrated orders of temporal accuracy higher than one. An extended discussion is presented in this thesis which places on a solid theoretical basis the observed quasi-unconditional stability of the methods of orders two through six. The performance of the proposed solvers is favorable. For example, a two-dimensional rough-surface configuration including boundary layer effects at Reynolds number equal to one million and Mach number 0.85 (with a well-resolved boundary layer, run up to a sufficiently long time that single vortices travel the entire spatial extent of the domain, and with spatial mesh sizes near the wall of the order of one hundred-thousandth the length of the domain) was successfully tackled in a relatively short (approximately thirty-hour) single-core run; for such discretizations an explicit solver would require truly prohibitive computing times.
As demonstrated via a variety of numerical experiments in two and three dimensions, furthermore, the proposed multi-domain parallel implicit-explicit implementations exhibit high-order convergence in space and time, useful stability properties, limited dispersion, and high parallel efficiency.
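The solvers above are far more general, but the essential ADI idea, reducing a multidimensional implicit step to tridiagonal sweeps alternated over each coordinate direction, can be illustrated with the classic second-order Peaceman-Rachford scheme for the 2D heat equation. This sketch is not the high-order BDF-based method developed in the thesis; it simply shows why an ADI step costs only tridiagonal solves yet remains stable far beyond the explicit CFL limit:

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_heat_step(u, nu, dt, h):
    """One Peaceman-Rachford ADI step for u_t = nu*(u_xx + u_yy).

    u is an n x n grid of interior values on a uniform mesh of spacing h
    with homogeneous Dirichlet boundaries. Each half-step is implicit in
    one direction (tridiagonal solve) and explicit in the other.
    """
    n = len(u)
    r = nu * dt / (2.0 * h * h)
    a = [-r] * n; b = [1.0 + 2.0 * r] * n; c = [-r] * n
    # Half-step 1: implicit in x, explicit second difference in y.
    half = [[0.0] * n for _ in range(n)]
    for j in range(n):
        rhs = [u[i][j] + r * ((u[i][j + 1] if j + 1 < n else 0.0)
                              - 2.0 * u[i][j]
                              + (u[i][j - 1] if j > 0 else 0.0))
               for i in range(n)]
        col = thomas(a, b, c, rhs)
        for i in range(n):
            half[i][j] = col[i]
    # Half-step 2: implicit in y, explicit second difference in x.
    new = [[0.0] * n for _ in range(n)]
    for i in range(n):
        rhs = [half[i][j] + r * ((half[i + 1][j] if i + 1 < n else 0.0)
                                 - 2.0 * half[i][j]
                                 + (half[i - 1][j] if i > 0 else 0.0))
               for j in range(n)]
        new[i] = thomas(a, b, c, rhs)
    return new
```

Taking the initial condition sin(pi x) sin(pi y), whose exact solution decays as exp(-2 pi^2 nu t), the scheme stays accurate with a time step more than an order of magnitude beyond the explicit stability limit h^2/(4 nu), the same qualitative behavior the thesis establishes (to much higher order) for its BDF-ADI Navier-Stokes solvers.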

An exciting frontier in quantum information science is the integration of otherwise "simple" quantum elements into complex quantum networks. The laboratory realization of even small quantum networks enables the exploration of physical systems that have not heretofore existed in the natural world. Within this context, there is active research to achieve nanoscale quantum optical circuits, for which atoms are trapped near nanoscopic dielectric structures and "wired" together by photons propagating through the circuit elements. Single atoms and atomic ensembles endow otherwise linear optical circuits with quantum functionality and thereby enable the capability of building quantum networks component by component. Toward these goals, we have experimentally investigated three different systems, ranging from conventional to rather exotic: free-space atomic ensembles, optical nanofibers, and photonic crystal waveguides. First, we demonstrate measurement-induced quadripartite entanglement among four quantum memories. Next, following the landmark realization of a nanofiber trap, we demonstrate the implementation of a state-insensitive, compensated nanofiber trap. Finally, we reach more exotic systems based on photonic crystal devices. Beyond conventional topologies of resonators and waveguides, new opportunities emerge from the powerful capabilities of dispersion and modal engineering in photonic crystal waveguides. We have implemented an integrated optical circuit with a photonic crystal waveguide capable of both trapping and interfacing atoms with guided photons, and have observed a collective effect, superradiance, mediated by the guided photons. These advances provide an important capability for engineered light-matter interactions, enabling explorations of novel quantum transport and quantum many-body phenomena.