15 results for "Substrate and source of explant"

in CaltechTHESIS


Relevance: 100.00%

Abstract:

In this thesis, a method to retrieve the source finiteness, depth of faulting, and the mechanisms of large earthquakes from long-period surface waves is developed and applied to several recent large events.

In Chapter 1, source finiteness parameters of eleven large earthquakes were determined from long-period Rayleigh waves recorded at IDA and GDSN stations. The basic data set is the seismic spectra of periods from 150 to 300 sec. Two simple models of source finiteness are studied. The first model is a point source with finite duration. In the determination of the duration or source-process times, we used Furumoto's phase method and a linear inversion method, in which we simultaneously inverted the spectra and determined the source-process time that minimizes the error in the inversion. These two methods yielded consistent results. The second model is the finite fault model. Source finiteness of large shallow earthquakes with rupture on a fault plane with a large aspect ratio was modeled with the source-finiteness function introduced by Ben-Menahem. The spectra were inverted to find the extent and direction of the rupture of the earthquake that minimize the error in the inversion. This method is applied to the 1977 Sumbawa, Indonesia, 1979 Colombia-Ecuador, 1983 Akita-Oki, Japan, 1985 Valparaiso, Chile, and 1985 Michoacan, Mexico earthquakes. The method yielded results consistent with the rupture extent inferred from the aftershock area of these earthquakes.
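
For reference, the Ben-Menahem source-finiteness (directivity) factor referred to above is commonly written, for unilateral rupture of length L at rupture velocity v_r observed at azimuth θ from the rupture direction by a wave of phase velocity c,

    F(ω, θ) = (sin X / X) e^(-iX),   X = (ωL/2) (1/v_r - cos θ / c),

so the inversion for the extent and direction of rupture amounts to finding the L, v_r, and rupture azimuth whose predicted spectral modulation best matches the observed 150-300 s spectra (sign and phase conventions vary between treatments).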

In Chapter 2, the depths and source mechanisms of nine large shallow earthquakes were determined. We inverted the data set of complex source spectra for a moment tensor (linear) or a double couple (nonlinear). By solving a least-squares problem, we obtained the centroid depth or the extent of the distributed source for each earthquake. The depths and source mechanisms of large shallow earthquakes determined from long-period Rayleigh waves depend on the models of source finiteness, wave propagation, and the excitation. We tested various models of the source finiteness, Q, the group velocity, and the excitation in the determination of earthquake depths.
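
The linear (moment-tensor) branch of this inversion can be summarized generically as an overdetermined least-squares problem: the observed complex spectra are stacked into a data vector d, the excitation and propagation kernels for a trial centroid depth into a matrix G, and the six independent moment-tensor elements into m, so that

    d ≈ G m,   m̂ = (G^H G)^(-1) G^H d,

and the centroid depth is the trial depth that minimizes the residual |d - G m̂|²; the double-couple (nonlinear) case replaces m by the fault parameters and requires iterative minimization. This is a generic statement of the procedure, not the thesis's exact notation.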

The depth estimates obtained using the Q model of Dziewonski and Steim (1982) and the excitation functions computed for the average ocean model of Regan and Anderson (1984) are considered most reasonable. Dziewonski and Steim's Q model represents a good global average of Q determined over the period range of the Rayleigh waves used in this study. Since most of the earthquakes studied here occurred in subduction zones, Regan and Anderson's average ocean model is considered most appropriate.

Our depth estimates are in general consistent with the Harvard CMT solutions. The centroid depths and their 90% confidence intervals (numbers in parentheses) determined by the Student's t test are: Colombia-Ecuador earthquake (12 December 1979), d = 11 km, (9, 24) km; Santa Cruz Is. earthquake (17 July 1980), d = 36 km, (18, 46) km; Samoa earthquake (1 September 1981), d = 15 km, (9, 26) km; Playa Azul, Mexico earthquake (25 October 1981), d = 41 km, (28, 49) km; El Salvador earthquake (19 June 1982), d = 49 km, (41, 55) km; New Ireland earthquake (18 March 1983), d = 75 km, (72, 79) km; Chagos Bank earthquake (30 November 1983), d = 31 km, (16, 41) km; Valparaiso, Chile earthquake (3 March 1985), d = 44 km, (15, 54) km; Michoacan, Mexico earthquake (19 September 1985), d = 24 km, (12, 34) km.

In Chapter 3, the vertical extent of faulting of the 1983 Akita-Oki, Japan, and 1977 Sumbawa, Indonesia earthquakes is determined from fundamental and overtone Rayleigh waves. Using fundamental Rayleigh waves, the depths are determined from the moment tensor inversion and fault inversion. The observed overtone Rayleigh waves are compared to synthetic overtone seismograms to estimate the depth of faulting of these earthquakes. The depths obtained from overtone Rayleigh waves are consistent with the depths determined from fundamental Rayleigh waves for the two earthquakes. Appendix B gives the observed seismograms of fundamental and overtone Rayleigh waves for eleven large earthquakes.

Relevance: 100.00%

Abstract:

Nucleic acids are most commonly associated with the genetic code, transcription and gene expression. Recently, interest has grown in engineering nucleic acids for biological applications such as controlling or detecting gene expression. The natural presence and functionality of nucleic acids within living organisms, coupled with the thermodynamic properties of their base-pairing, make them ideal for interfacing with (and possibly altering) biological systems. We use engineered small conditional RNA or DNA (scRNA, scDNA, respectively) molecules to control and detect gene expression. Three novel systems are presented: two for conditional down-regulation of gene expression via RNA interference (RNAi) and a third system for simultaneous sensitive detection of multiple RNAs using labeled scRNAs.

RNAi is a powerful tool to study genetic circuits by knocking down a gene of interest. RNAi executes the logic: If gene Y is detected, silence gene Y. The fact that detection and silencing are restricted to the same gene means that RNAi is constitutively on. This poses a significant limitation when spatiotemporal control is needed. In this work, we engineered small nucleic acid molecules that execute the logic: If mRNA X is detected, form a Dicer substrate that targets independent mRNA Y for silencing. This is a step towards implementing the logic of conditional RNAi: If gene X is detected, silence gene Y. We use scRNAs and scDNAs to engineer signal transduction cascades that produce an RNAi effector molecule in response to hybridization to a nucleic acid target X. The first mechanism is solely based on hybridization cascades and uses scRNAs to produce a double-stranded RNA (dsRNA) Dicer substrate against target gene Y. The second mechanism is based on hybridization of scDNAs to detect a nucleic acid target and produce a template for transcription of a short hairpin RNA (shRNA) Dicer substrate against target gene Y. Test-tube studies for both mechanisms demonstrate that the output Dicer substrate is produced predominantly in the presence of a correct input target and is cleaved by Dicer to produce a small interfering RNA (siRNA). Both output products can lead to gene knockdown in tissue culture. To date, signal transduction is not observed in cells; possible reasons are explored.

Signal transduction cascades are composed of multiple scRNAs (or scDNAs). The need to study multiple molecules simultaneously has motivated the development of a highly sensitive method for multiplexed northern blots. The core technology of our system is the use of a hybridization chain reaction (HCR) of scRNAs as the detection signal for a northern blot. To achieve multiplexing (simultaneous detection of multiple genes), we use fluorescently tagged scRNAs. Moreover, by using radioactive labeling of scRNAs, the system exhibits a five-fold increase in detection sensitivity compared to the literature. Sensitive multiplexed northern blot detection provides an avenue for exploring the fate of scRNAs and scDNAs in tissue culture.

Relevance: 100.00%

Abstract:

The rapid growth and development of Los Angeles City and County has been one of the phenomena of the present age. The growth of a city from 50,600 to 576,000, an increase of over 1000% in thirty years, is an unprecedented occurrence. It has given rise to a variety of problems of increasing magnitude.

Chief among these are: supply of food, water, and shelter; development of industry and markets; prevention and removal of downtown congestion; and protection of life and property. These, of course, are the problems that any city must face. But in the case of a community which doubles its population every ten years, radical and heroic measures must often be taken.

Relevance: 100.00%

Abstract:

This work describes the design and synthesis of a true, heterogeneous, asymmetric catalyst. The catalyst consists of a thin film that resides on a high-surface-area hydrophilic solid and is composed of a chiral, hydrophilic organometallic complex dissolved in ethylene glycol. Reactions of prochiral organic reactants take place predominantly at the ethylene glycol-bulk organic interface.

The synthesis of this new heterogeneous catalyst is accomplished in a series of designed steps. A novel, water-soluble, tetrasulfonated 2,2'-bis(diphenylphosphino)-1,1'-binaphthyl (BINAP-4SO_3Na) is synthesized by direct sulfonation of 2,2'-bis(diphenylphosphino)-1,1'-binaphthyl (BINAP). The rhodium(I) complex of BINAP-4SO_3Na is prepared and is shown to be the first homogeneous catalyst to perform asymmetric reductions of prochiral 2-acetamidoacrylic acids in neat water with enantioselectivities as high as those obtained in non-aqueous solvents. The ruthenium(II) complex, [Ru(BINAP-4SO_3Na)(benzene)Cl]Cl, is also synthesized and exhibits a broader substrate specificity as well as higher enantioselectivities for the homogeneous asymmetric reduction of prochiral 2-acylamino acid precursors in water. Aquation of the ruthenium-chloro bond in water is found to be detrimental to the enantioselectivity with some substrates. Replacement of water by ethylene glycol results in the same high e.e.'s as those found in neat methanol. The ruthenium complex is impregnated onto a controlled pore-size glass, CPG-240, by the incipient wetness technique. Anhydrous ethylene glycol is used as the immobilizing agent in this heterogeneous catalyst, and a non-polar 1:1 mixture of chloroform and cyclohexane is employed as the organic phase.

Asymmetric reduction of 2-(6'-methoxy-2'-naphthyl)acrylic acid to the non-steroidal anti-inflammatory agent, naproxen, is accomplished with this heterogeneous catalyst at a third of the rate observed in homogeneous solution with an e.e. of 96% at a reaction temperature of 3°C and 1,400 psig of hydrogen. No leaching of the ruthenium complex into the bulk organic phase is found at a detection limit of 32 ppb. Recycling of the catalyst is possible without any loss in enantioselectivity. Long-term stability of this new heterogeneous catalyst is proven by a self-assembly test. That is, under the reaction conditions, the individual components of the present catalytic system self-assemble into the supported-catalyst configuration.
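
For reference, the enantiomeric excess (e.e.) quoted above is the standard quantity

    e.e. = ([R] - [S]) / ([R] + [S]) × 100%,

where [R] and [S] are the concentrations of the two product enantiomers; an e.e. of 96% therefore corresponds to a 98:2 ratio of the major to the minor enantiomer.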

The strategies outlined here for the design and synthesis of this new heterogeneous catalyst are general, and can hopefully be applied to the development of other heterogeneous, asymmetric catalysts.

Relevance: 100.00%

Abstract:

Life is the result of the execution of molecular programs: like how an embryo is fated to become a human or a whale, or how a person’s appearance is inherited from their parents, many biological phenomena are governed by genetic programs written in DNA molecules. At the core of such programs is the highly reliable base pairing interaction between nucleic acids. DNA nanotechnology exploits the programming power of DNA to build artificial nanostructures, molecular computers, and nanomachines. In particular, DNA origami—which is a simple yet versatile technique that allows one to create various nanoscale shapes and patterns—is at the heart of the technology. In this thesis, I describe the development of programmable self-assembly and reconfiguration of DNA origami nanostructures based on a unique strategy: rather than relying on Watson-Crick base pairing, we developed programmable bonds via the geometric arrangement of stacking interactions, which we termed stacking bonds. We further demonstrated that such bonds can be dynamically reconfigurable.

The first part of this thesis describes the design and implementation of stacking bonds. Our work addresses the fundamental question of whether one can create diverse bond types out of a single kind of attractive interaction—a question first posed implicitly by Francis Crick while seeking a deeper understanding of the origin of life and primitive genetic code. For the creation of multiple specific bonds, we used two different approaches: binary coding and shape coding of geometric arrangement of stacking interaction units, which are called blunt ends. To construct a bond space for each approach, we performed a systematic search using a computer algorithm. We used orthogonal bonds to experimentally implement the connection of five distinct DNA origami nanostructures. We also programmed the bonds to control cis/trans configuration between asymmetric nanostructures.
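
To make the idea of the bond-space search concrete, the sketch below enumerates binary blunt-end patterns and greedily keeps a mutually orthogonal set. It is only an illustration, not the algorithm used in this thesis: the edge length, the requirement of a fixed number of active blunt ends, the face-to-face matching rule, and the crosstalk threshold are all hypothetical stand-ins for the actual design criteria.

    from itertools import combinations

    N_POSITIONS = 8    # blunt-end positions along one bond edge (hypothetical)
    N_ACTIVE = 4       # active blunt ends per edge, so every bond has equal strength
    MAX_CROSSTALK = 2  # largest allowed overlap between non-partner edges (hypothetical)

    def patterns():
        """All binary edge patterns with exactly N_ACTIVE active blunt ends."""
        for active in combinations(range(N_POSITIONS), N_ACTIVE):
            yield frozenset(active)

    def match(a, b):
        """Stacking score when edge b faces edge a: position i on a meets
        position N_POSITIONS-1-i on b, and the score counts coinciding
        active blunt ends."""
        return sum(1 for i in a if (N_POSITIONS - 1 - i) in b)

    def complement(a):
        """The intended partner edge, active exactly opposite a's active positions."""
        return frozenset(N_POSITIONS - 1 - i for i in a)

    # Greedily collect bond types (pattern, partner) such that every edge binds
    # strongly to its intended partner and only weakly to every other edge.
    bonds = []
    for p in patterns():
        q = complement(p)
        if match(p, p) > MAX_CROSSTALK:      # skip self-complementary edges
            continue
        ok = all(match(x, y) <= MAX_CROSSTALK
                 for (r, s) in bonds
                 for (x, y) in ((p, r), (p, s), (q, r), (q, s)))
        if ok:
            bonds.append((p, q))

    print(f"found {len(bonds)} mutually orthogonal bond types")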

The second part of this thesis describes the large-scale self-assembly of DNA origami into two-dimensional checkerboard-pattern crystals via surface diffusion. We developed a protocol where the diffusion of DNA origami occurs on a substrate and is dynamically controlled by changing the cationic condition of the system. We used stacking interactions to mediate connections between the origami, because of their potential for reconfiguring during the assembly process. Assembling DNA nanostructures directly on substrate surfaces can benefit nano/microfabrication processes by eliminating a pattern transfer step. At the same time, the use of DNA origami allows high complexity and unique addressability with six-nanometer resolution within each structural unit.

The third part of this thesis describes the use of stacking bonds as dynamically breakable bonds. To break the bonds, we used biological machinery called the ParMRC system, extracted from bacteria. The system ensures that, when a cell divides, each daughter cell gets one copy of the cell’s DNA by actively pushing each copy to opposite poles of the cell. We demonstrate dynamically expandable nanostructures, making stacking bonds a promising candidate for reconfigurable connectors between nanoscale machine parts.

Relevance: 100.00%

Abstract:

The design, synthesis and magnetic characterization of thiophene-based models for the polaronic ferromagnet are described. Synthetic strategies based on Wittig and Suzuki coupling were employed to produce polymers with extended π-systems. Oxidative doping using AsF_5 or I_2 produces radical cations (polarons) that are stable at room temperature. Magnetic characterization of the doped polymers, using SQUID-based magnetometry, indicates that in several instances ferromagnetic coupling of polarons occurs along the polymer chain. An investigation of the influence of polaron stability and delocalization on the magnitude of ferromagnetic coupling is pursued. A lower limit for mild, solution-phase I_2 doping is established. A comparison of the variable-temperature data of various polymers reveals that deleterious antiferromagnetic interactions are relatively insensitive to spin concentration, doping protocols or spin state. Comparison of the various polymers reveals useful design principles and suggests new directions for the development of magnetic organic materials. Novel strategies for solubilizing neutral polymeric materials in polar solvents are investigated.

The incorporation of stable bipyridinium spin-containing units into a polymeric high-spin array is explored. Preliminary results suggest that substituted diquat derivatives may serve as stable spin-containing units for the polaronic ferromagnet and are amenable to electrochemical doping. Synthetic efforts to prepare high-spin polymeric materials using viologens as a spin source have been unsuccessful.

Relevance: 100.00%

Abstract:

Nucleic acids are a useful substrate for engineering at the molecular level. Designing the detailed energetics and kinetics of interactions between nucleic acid strands remains a challenge. Building on previous algorithms to characterize the ensemble of dilute solutions of nucleic acids, we present a design algorithm that allows optimization of structural features and binding energetics of a test tube of interacting nucleic acid strands. We extend this formulation to handle multiple thermodynamic states and combinatorial constraints to allow optimization of pathways of interacting nucleic acids. In both design strategies, low-cost estimates of thermodynamic properties are calculated using hierarchical ensemble decomposition and test tube ensemble focusing. These algorithms are tested on randomized test sets and on example pathways drawn from the molecular programming literature. To analyze the kinetic properties of designed sequences, we describe algorithms to identify dominant species and kinetic rates using coarse-graining at the scale of a small box containing several strands or a large box containing a dilute solution of strands.
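
A loose sketch of the kind of objective such a test-tube design optimizes (a paraphrase for orientation, not the thesis's exact formulation): for each on-target complex j with target structure s_j and target concentration y_j, penalize both incorrectly paired nucleotides and missing concentration,

    C ≈ Σ_j [ c_j · n(φ_j, s_j) + |s_j| · max(0, y_j - c_j) ],

where c_j is the equilibrium concentration predicted for the candidate sequences φ_j, n(φ_j, s_j) is the expected number of nucleotides paired differently from s_j over the complex ensemble, and |s_j| is the number of nucleotides in the complex; sequence optimization lowers C, and the multistate extension applies such a defect across several thermodynamic states (test tubes) at once.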

Relevance: 100.00%

Abstract:

This thesis describes the use of multiply-substituted stable isotopologues of carbonate minerals and methane gas to better understand how these environmentally significant minerals and gases form and are modified throughout their geological histories. Stable isotopes have a long tradition in earth science as a tool for providing quantitative constraints on how molecules, in or on the earth, formed in both the present and past. Nearly all studies, until recently, have only measured the bulk concentrations of stable isotopes in a phase or species. However, the abundance of various isotopologues within a phase, for example, the concentration of isotopologues with multiple rare isotopes (multiply substituted or 'clumped' isotopologues), also carries potentially useful information. Specifically, the abundances of clumped isotopologues in an equilibrated system are a function of temperature, and thus knowledge of their abundances can be used to calculate a sample’s formation temperature. In this thesis, measurements of clumped isotopologues are made on both carbonate-bearing minerals and methane gas in order to better constrain the environmental and geological histories of various samples.
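
As a point of reference for the chapters that follow, clumped-isotope compositions are conventionally reported as deviations of a multiply substituted isotopologue's abundance from the value expected if isotopes were distributed randomly (stochastically) among all isotopologues:

    Δ_i = (R_i / R_i* - 1) × 1000 (in per mil),

where R_i is the measured ratio of isotopologue i to the unsubstituted isotopologue and R_i* is the same ratio computed for the stochastic distribution. At internal equilibrium Δ_i decreases with increasing temperature (roughly as 1/T² at low temperatures), which is what makes these abundances usable as a thermometer.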

Clumped-isotope-based measurements of ancient carbonate-bearing minerals, including apatites, have opened up paleotemperature reconstructions to a variety of systems and time periods. However, a critical issue when using clumped-isotope based measurements to reconstruct ancient mineral formation temperatures is whether the samples being measured have faithfully recorded their original internal isotopic distributions. These original distributions can be altered, for example, by diffusion of atoms in the mineral lattice or through diagenetic reactions. Understanding these processes quantitatively is critical for the use of clumped isotopes to reconstruct past temperatures, quantify diagenesis, and calculate time-temperature burial histories of carbonate minerals. In order to help orient this part of the thesis, Chapter 2 provides a broad overview and history of clumped-isotope based measurements in carbonate minerals.

In Chapter 3, the effects of elevated temperatures on a sample’s clumped-isotope composition are probed in both natural and experimental apatites (which contain structural carbonate groups) and calcites. A quantitative model is created that is calibrated by the experiments and consistent with the natural samples. The model allows for calculations of the change in a sample’s clumped isotope abundances as a function of any time-temperature history.
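
A minimal way to see how such a model ties the clumped-isotope composition to a time-temperature path (a generic first-order sketch, not the specific exchange model developed in this chapter) is

    dΔ/dt = -k(T) [Δ - Δ_eq(T)],   k(T) = A exp(-E_a / RT),

so that integrating along any prescribed T(t) history gives the evolved Δ: the sample re-equilibrates while k(T) is fast at high temperature and then locks in an apparent temperature as it cools, analogous to a closure temperature.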

In Chapter 4, the effects of diagenesis on the stable isotopic compositions of apatites are explored on samples from a variety of sedimentary phosphorite deposits. Clumped isotope temperatures and bulk isotopic measurements from carbonate and phosphate groups are compared for all samples. These results demonstrate that samples have experienced isotopic exchange of oxygen atoms in both the carbonate and phosphate groups. A kinetic model is developed that allows for the calculation of the amount of diagenesis each sample has experienced and yields insight into the physical and chemical processes of diagenesis.

The thesis then switches gears and turns its attention to clumped isotope measurements of methane. Methane is a critical greenhouse gas, energy resource, and microbial metabolic product and substrate. Despite its importance both environmentally and economically, much about methane’s formational mechanisms and the relative sources of methane to various environments remains poorly constrained. In order to add new constraints to our understanding of the formation of methane in nature, I describe the development and application of methane clumped isotope measurements to environmental deposits of methane. To help orient the reader, a brief overview of the formation of methane in both high and low temperature settings is given in Chapter 5.

In Chapter 6, a method for the measurement of methane clumped isotopologues via mass spectrometry is described. This chapter demonstrates that the measurement is precise and accurate. Additionally, the measurement is calibrated experimentally such that measurements of methane clumped isotope abundances can be converted into equivalent formational temperatures. This study represents the first time that methane clumped isotope abundances have been measured at useful precisions.

In Chapter 7, the methane clumped isotope method is applied to natural samples from a variety of settings. These settings include thermogenic gases formed and reservoired in shales, migrated thermogenic gases, biogenic gases, mixed biogenic and thermogenic gas deposits, and experimentally generated gases. In all cases, calculated clumped isotope temperatures make geological sense as formation temperatures or mixtures of high and low temperature gases. Based on these observations, we propose that the clumped isotope temperature of an unmixed gas represents its formation temperature — this was neither an obvious nor an expected result and has important implications for how methane forms in nature. Additionally, these results demonstrate that methane clumped isotope compositions provide valuable additional constraints for studying natural methane deposits.

Relevance: 100.00%

Abstract:

Thermal noise arising from mechanical loss in highly reflective dielectric coatings is a significant source of noise in precision optical measurements. In particular, Advanced LIGO, a large-scale interferometer aiming to observe gravitational waves, is expected to be limited by coating thermal noise in its most sensitive region, around 30–300 Hz. Various theoretical calculations for predicting coating Brownian noise have been proposed. However, due to the relatively limited knowledge of the coating material properties, an accurate approximation of the noise cannot be achieved. A testbed that can directly observe coating thermal noise close to the Advanced LIGO band will serve as an indispensable tool to verify the calculations, study material properties of the coating, and estimate the detector’s performance.
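
The link between mechanical loss and displacement noise invoked here is the fluctuation-dissipation theorem; in the form commonly used for mirror thermal noise estimates (stated generically, not as this work's derivation),

    S_x(f) = (k_B T / π² f²) · Re[Y(f)],

where S_x(f) is the one-sided power spectral density of the sensed displacement and Y(f) is the mechanical admittance seen by a force applied with the spatial profile of the laser beam; mechanical loss in the coating (loss angle φ) makes Re[Y] non-zero, so higher loss or a smaller beam produces more coating Brownian noise.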

This dissertation reports a setup that has the sensitivity to observe wide-band (10 Hz to 1 kHz) thermal noise from fused silica/tantala coatings at room temperature in fixed-spacer Fabry–Perot cavities. Important fundamental noises and technical noises associated with the setup are discussed. The coating loss obtained from the measurement agrees with results reported in the literature. The setup serves as a testbed to study thermal noise in highly reflective mirrors made from different materials. One example is a heterostructure of Al_xGa_(1-x)As (AlGaAs). An optimized design to minimize thermo-optic noise in the coating is proposed and discussed in this work.

Relevance: 100.00%

Abstract:

An instrument, the Caltech High Energy Isotope Spectrometer Telescope (HEIST), has been developed to measure isotopic abundances of cosmic ray nuclei in the charge range 3 ≤ Z ≤ 28 and the energy range between 30 and 800 MeV/nuc by employing an energy-loss/residual-energy technique. Measurements of particle trajectories and energy losses are made using a multiwire proportional counter hodoscope and a stack of CsI(Tl) crystal scintillators, respectively. A detailed analysis has been made of the mass resolution capabilities of this instrument.
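
The energy-loss/residual-energy technique determines mass roughly as follows (a textbook sketch using the usual power-law range-energy approximation, not the instrument-specific analysis): with a range-energy relation of the form

    R(E) ≈ k (E/M)^a · M / Z²    (a ≈ 1.7),

a nucleus of total energy E that traverses a (path-length-corrected) detector thickness L and stops with residual energy E' = E - ΔE satisfies L = R(E) - R(E'), so

    M ≈ [ k (E^a - E'^a) / (L Z²) ]^(1/(a-1)),

which is why the attainable mass resolution is set by the accuracy of the trajectory (path-length) and energy-loss measurements and, fundamentally, by Landau fluctuations in the energy deposited.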

Landau fluctuations set a fundamental limit on the attainable mass resolution, which for this instrument ranges between ~0.07 AMU for Z ≈ 3 and ~0.2 AMU for Z ≈ 26. Contributions to the mass resolution due to uncertainties in measuring the path-length and energy losses of the detected particles are shown to degrade the overall mass resolution to between ~0.1 AMU (Z ≈ 3) and ~0.3 AMU (Z ≈ 26).

A formalism, based on the leaky box model of cosmic ray propagation, is developed for obtaining isotopic abundance ratios at the cosmic ray sources from abundances measured in local interstellar space for elements having three or more stable isotopes, one of which is believed to be absent at the cosmic ray sources. This purely secondary isotope is used as a tracer of secondary production during propagation. This technique is illustrated for the isotopes of the elements O, Ne, S, Ar and Ca.
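
In the leaky-box model, the equilibrium abundance N_i of each species in the galaxy is set by a balance of the form (written schematically, neglecting energy dependence and radioactive decay)

    N_i (1/Λ_esc + 1/Λ_i) = Q_i + Σ_j N_j / Λ_(j→i),

where Λ_esc is the escape mean free path, Λ_i the destruction (total inelastic) mean free path, Q_i the source term, and Λ_(j→i) the mean free path for fragmentation of a heavier species j into i. For a purely secondary isotope Q_i = 0, so its observed abundance calibrates the amount of secondary production during propagation; the secondary contributions to the other isotopes of the element can then be subtracted before forming the source ratios.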

The uncertainties in the derived source ratios due to errors in fragmentation and total inelastic cross sections, in observed spectral shapes, and in measured abundances are evaluated. It is shown that the dominant sources of uncertainty are uncorrelated errors in the fragmentation cross sections and statistical uncertainties in measuring local interstellar abundances.

These results are applied to estimate the extent to which uncertainties must be reduced in order to distinguish between cosmic ray production in a solar-like environment and in various environments with greater neutron enrichments.

Relevance: 100.00%

Abstract:

Part I: The dynamic response of an elastic half space to an explosion in a buried spherical cavity is investigated by two methods. The first is implicit, and the final expressions for the displacements at the free surface are given as a series of spherical wave functions whose coefficients are solutions of an infinite set of linear equations. The second method is based on Schwarz's technique to solve boundary value problems, and leads to an iterative solution, starting with the known expression for the point source in a half space as first term. The iterative series is transformed into a system of two integral equations, and into an equivalent set of linear equations. In this way, a dual interpretation of the physical phenomena is achieved. The systems are treated numerically and the Rayleigh wave part of the displacements is given in the frequency domain. Several comparisons with simpler cases are analyzed to show the effect of the cavity radius-depth ratio on the spectra of the displacements.

Part II: A high-speed, large-capacity hypocenter location program has been written for an IBM 7094 computer. Important modifications to the standard method of least squares have been incorporated in it. Among them are a new way to obtain the depth of shocks from the normal equations, and the computation of variable travel times for the local shocks in order to account automatically for crustal variations. The multiregional travel times, largely based upon the investigations of the United States Geological Survey, are confronted with actual traverses to test their validity.
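
The least-squares core of such a location program is the classical Geiger linearization (stated generically here, not as the thesis's exact formulation): each arrival-time residual is expanded to first order in perturbations of the trial origin time and hypocenter,

    r_i = t_i(obs) - t_i(calc) ≈ δt_0 + (∂T_i/∂x) δx + (∂T_i/∂y) δy + (∂T_i/∂z) δz,

and the normal equations of the resulting overdetermined system are solved iteratively for (δt_0, δx, δy, δz). The modifications described above concern how the depth term is extracted from these normal equations and how region-dependent travel times enter the partial derivatives and residuals.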

It is shown that several crustal phases provide enough control to obtain good solutions in depth for nuclear explosions, though not all the recording stations are in the region where crustal corrections are considered. The use of European travel times to locate the French nuclear explosion of May 1962 in the Sahara proved more adequate than previous work.

A simpler program, with manual crustal corrections, is used to process the Kern County series of aftershocks, and a clearer picture of the tectonic mechanism of the White Wolf fault is obtained.

Shocks in the California region are processed automatically, and statistical frequency-depth and energy-depth curves are discussed in relation to the tectonics of the area.

Relevance: 100.00%

Abstract:

(1) Equation of State of Komatiite

The equation of state (EOS) of a molten komatiite (27 wt% MgO) was determined in the 5 to 36 GPa pressure range via shock wave compression from 1550°C and 0 bar. Shock wave velocity, U_S, and particle velocity, U_P, in km/s follow the linear relationship U_S = 3.13(±0.03) + 1.47(±0.03) U_P. Based on a calculated density at 1550°C and 0 bar of 2.745±0.005 g/cc, this U_S-U_P relationship gives the isentropic bulk modulus K_S = 27.0 ± 0.6 GPa, and its first and second isentropic pressure derivatives, K'_S = 4.9 ± 0.1 and K''_S = -0.109 ± 0.003 GPa^-1.
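
These quantities are tied together by the standard identities for a linear U_S-U_P Hugoniot (quoted here as a consistency check, not as the thesis's derivation): the intercept gives the zero-pressure isentropic bulk modulus,

    K_S = ρ_0 C_0² = 2.745 g/cc × (3.13 km/s)² ≈ 26.9 GPa,

in agreement with the quoted 27.0 ± 0.6 GPa; the slope gives K'_S = 4S - 1 = 4(1.47) - 1 ≈ 4.9; and densities along the Hugoniot follow from mass conservation, ρ = ρ_0 U_S / (U_S - U_P).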

The calculated liquidus compression curve agrees within error with the static compression results of Agee and Walker [1988a] to 6 GPa. We determine that olivine (Fo_94) will be neutrally buoyant in komatiitic melt of the composition we studied near 8.2 GPa. Clinopyroxene would also be neutrally buoyant near this pressure. Liquidus garnet-majorite may be less dense than this komatiitic liquid in the 20-24 GPa interval; however, pyropic garnet and perovskite phases are denser than this komatiitic liquid in their respective liquidus pressure intervals to 36 GPa. Liquidus perovskite may be neutrally buoyant near 70 GPa.

At 40 GPa, the density of shock-compressed molten komatiite would be approximately equal to the calculated density of an equivalent mixture of dense solid oxide components. This observation supports the model of Rigden et al. [1989] for compressibilities of liquid oxide components. Using their theoretical EOS for liquid forsterite and fayalite, we calculate the densities of a spectrum of melts from basaltic through peridotitic that are related to the experimentally studied komatiitic liquid by addition or subtraction of olivine. At low pressure, olivine fractionation lowers the density of basic magmas, but above 14 GPa this trend is reversed. All of these basic to ultrabasic liquids are predicted to have similar densities at 14 GPa, and this density is approximately equal to that of the bulk (PREM) mantle. This suggests that melts derived from a peridotitic mantle may be inhibited from ascending from depths greater than 400 km.

The EOS of ultrabasic magmas was used to model adiabatic melting in a peridotitic mantle. If komatiites are formed by >15% partial melting of a peridotitic mantle, then komatiites generated by adiabatic melting come from source regions in the lower transition zone (≈500-670 km) or the lower mantle (>670 km). The great depth of incipient melting implied by this model, and the melt density constraint mentioned above, suggest that komatiitic volcanism may be gravitationally hindered. Although komatiitic magmas are thought to separate from their coexisting crystals at a temperature ≈200°C greater than that for modern MORBs, their ultimate sources are predicted to be diapirs that, if adiabatically decompressed from initially solid mantle, were more than 700°C hotter than the sources of MORBs and derived from great depth.

We considered the evolution of an initially molten mantle, i.e., a magma ocean. Our model considers the thermal structure of the magma ocean, density constraints on crystal segregation, and approximate phase relationships for a nominally chondritic mantle. Crystallization will begin at the core-mantle boundary. Perovskite buoyancy at >70 GPa may lead to a compositionally stratified lower mantle with iron-enriched magnesiowüstite content increasing with depth. The upper mantle may be depleted in perovskite components. Olivine neutral buoyancy may lead to the formation of a dunite septum in the upper mantle, partitioning the ocean into upper and lower reservoirs, but this septum must be permeable.

(2) Viscosity Measurement with Shock Waves

We have examined in detail the analytical method for measuring shear viscosity from the decay of perturbations on a corrugated shock front. The relevance of initial conditions, finite shock amplitude, bulk viscosity, and the sensitivity of the measurements to the shock boundary conditions are discussed. The validity of the viscous perturbation approach is examined by numerically solving the second-order Navier-Stokes equations. These numerical experiments indicate that shock instabilities may occur even when the Kontorovich-D'yakov stability criteria are satisfied. The experimental results for water at 15 GPa are discussed, and it is suggested that the large effective viscosity determined by this method may reflect the existence of ice VII on the Rayleigh path of the Hugoniot. This interpretation reconciles the experimental results with estimates and measurements obtained by other means, and is consistent with the relationship of the Hugoniot with the phase diagram for water. Sound waves are generated at 4.8 MHz in the water experiments at 15 GPa. The existence of anelastic absorption modes near this frequency would also lead to large effective viscosity estimates.

(3) Equation of State of Molybdenum at 1400°C

Shock compression data to 96 GPa for pure molybdenum, initially heated to 1400°C, are presented. Finite strain analysis of the data gives a bulk modulus at 1400°C, K_0S, of 244±2 GPa and its pressure derivative, K'_0S, of 4. A fit of shock velocity to particle velocity gives the coefficients of U_S = C_0 + S U_P to be C_0 = 4.77±0.06 km/s and S = 1.43±0.05. From the zero-pressure sound speed, C_0, a bulk modulus of 232±6 GPa is calculated that is consistent with extrapolation of ultrasonic elasticity measurements. The temperature derivative of the bulk modulus at zero pressure, ∂K_0S/∂T|_P, is approximately -0.012 GPa/K. A thermodynamic model is used to show that the thermodynamic Grüneisen parameter is proportional to the density and independent of temperature. The Mie-Grüneisen equation of state adequately describes the high temperature behavior of molybdenum under the present range of shock loading conditions.
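
For reference, the Mie-Grüneisen form referred to here writes the pressure at a given volume relative to the Hugoniot as a reference curve,

    P(V, E) = P_H(V) + [γ(V)/V] · [E - E_H(V)],

so that once the volume (density) dependence of the Grüneisen parameter γ is fixed, as by the thermodynamic model described above, thermal states away from the Hugoniot of the pre-heated molybdenum can be computed from the measured Hugoniot data.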

Relevance: 100.00%

Abstract:

n-Heptane/air premixed turbulent flames in the high-Karlovitz portion of the thin reaction zone regime are characterized and modeled in this thesis using Direct Numerical Simulations (DNS) with detailed chemistry. In order to perform these simulations, a time-integration scheme that can efficiently handle the stiffness of the equations solved is developed first. A first simulation with unity Lewis number is considered in order to assess the effect of turbulence on the flame in the absence of differential diffusion. A second simulation with non-unity Lewis numbers is considered to study how turbulence affects differential diffusion. In the absence of differential diffusion, minimal departure from the 1D unstretched flame structure (species vs. temperature profiles) is observed. In the non-unity Lewis number case, the flame structure lies between that of 1D unstretched flames with "laminar" non-unity Lewis numbers and unity Lewis number. This is attributed to effective Lewis numbers resulting from intense turbulent mixing, and a first model is proposed. The reaction zone is shown to be thin for both flames, yet large chemical source term fluctuations are observed. The fuel consumption rate is found to be only weakly correlated with stretch, although local extinctions in the non-unity Lewis number case are well correlated with high curvature. These results explain the apparent turbulent flame speeds. Other variables that better correlate with this fuel burning rate are identified through a coordinate transformation. It is shown that the unity Lewis number turbulent flames can be accurately described by a set of 1D (in progress variable space) flamelet equations parameterized by the dissipation rate of the progress variable. In the non-unity Lewis number flames, the flamelet equations suggest a dependence on a second parameter, the diffusion of the progress variable. A new tabulation approach is proposed for the simulation of such flames with these dimensionally-reduced manifolds.
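
A schematic of the flamelet description referred to above (written generically; the thesis's exact derivation carries additional terms) is a one-dimensional balance in progress-variable space for each species mass fraction Y_k,

    ρ (χ_c / 2) ∂²Y_k/∂c² + ω̇_k ≈ 0,   with χ_c = 2 D_c |∇c|²,

so the solution family is parameterized by the scalar dissipation rate χ_c of the progress variable c; the non-unity Lewis number results indicate that a second parameter tied to the diffusion of c is also needed.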

Relevance: 100.00%

Abstract:

The process of prophage integration by phage λ and the function and structure of the chromosomal elements required for λ integration have been studied with the use of λ deletion mutants. Since attφ, the substrate of the integration enzymes, is not essential for λ growth, and since attφ resides in a portion of the λ chromosome which is not necessary for vegetative growth, viable λ deletion mutants were isolated and examined to dissect the structure of attφ.

Deletion mutants were selected from wild type populations by treating the phage under conditions where phage are inactivated at a rate dependent on the DNA content of the particles. A number of deletion mutants were obtained in this way, and many of these mutants proved to have defects in integration. These defects were defined by analyzing the properties of Int-promoted recombination in these att mutants.

The types of mutants found and their properties indicated that attφ has three components: a cross-over point which is bordered on either side by recognition elements whose sequence is specifically required for normal integration. The interactions of the recognition elements in Int-promoted recombination between att mutants were examined and proved to be quite complex. In general, however, it appears that the λ integration system can function with a diverse array of mutant att sites.

The structure of attφ was examined by comparing the genetic properties of various att mutants with their location in the λ chromosome. To map these mutants, the techniques of heteroduplex DNA formation and electron microscopy were employed. It was found that integration cross-overs occur at only one point in attφ and that the recognition sequences that direct the integration enzymes to their site of action are quite small, less than 2000 nucleotides each. Furthermore, no base pair homology was detected between attφ and its bacterial analog, attB. This result clearly demonstrates that λ integration can occur between chromosomes which have little, if any, homology. In this respect, λ integration is unique as a system of recombination since most forms of generalized recombination require extensive base pair homology.

An additional study on the genetic and physical distances in the left arm of the λ genome was described. Here, a large number of conditional lethal nonsense mutants were isolated and mapped, and a genetic map of the entire left arm, comprising a total of 18 genes, was constructed. Four of these genes were discovered in this study. A series of λdg transducing phages was mapped by heteroduplex electron microscopy and the relationship between physical and genetic distances in the left arm was determined. The results indicate that recombination frequency in the left arm is an accurate reflection of physical distances, and moreover, there do not appear to be any undiscovered genes in this segment of the genome.

Relevance: 100.00%

Abstract:

In four chapters, various aspects of the earthquake source are studied.

Chapter I

Surface displacements that followed the Parkfield, 1966, earthquakes were measured for two years with six small-scale geodetic networks straddling the fault trace. The logarithmic rate and the periodic nature of the creep displacement recorded on a strain meter made it possible to predict creep episodes on the San Andreas fault. Some individual earthquakes were related directly to surface displacement, while in general, slow creep and aftershock activity were found to occur independently. The Parkfield earthquake is interpreted as a buried dislocation.

Chapter II

The source parameters of earthquakes between magnitude 1 and 6 were studied using field observations, fault plane solutions, and surface wave and S-wave spectral analysis. The seismic moment, M_0, was found to be related to local magnitude, M_L, by log M_0 = 1.7 M_L + 15.1. The source length vs. magnitude relation for the San Andreas system was found to be M_L = 1.9 log L - 6.7. The surface wave envelope parameter AR gives the moment according to log M_0 = log AR_300 + 30.1, and the stress drop, τ, was found to be related to the magnitude by τ = 0.54 M - 2.58. The relation between surface wave magnitude M_S and M_L is proposed to be M_S = 1.7 M_L - 4.1. It is proposed to estimate the relative stress level (and possibly the strength) of a source region by the amplitude ratio of high-frequency to low-frequency waves. An apparent stress map for Southern California is presented.
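
As a worked example of the first relation (assuming, as is conventional for such relations, that M_0 is in dyne·cm), a local magnitude M_L = 5 event gives

    log M_0 = 1.7 × 5 + 15.1 = 23.6,   i.e.   M_0 ≈ 4 × 10^23 dyne·cm,

which is the order of magnitude expected for a moderate earthquake and illustrates how strongly the inferred moment grows with magnitude (a factor of about 50 per magnitude unit).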

Chapter III

Seismic triggering and seismic shaking are proposed as two closely related mechanisms of strain release which explain observations of the character of the P wave generated by the Alaskan earthquake of 1964, and distant fault slippage observed after the Borrego Mountain, California earthquake of 1968. The Alaska, 1964, earthquake is shown to be adequately described as a series of individual rupture events. The first of these events had a body wave magnitude of 6.6 and is considered to have initiated or triggered the whole sequence. The propagation velocity of the disturbance is estimated to be 3.5 km/sec. On the basis of circumstantial evidence it is proposed that the Borrego Mountain, 1968, earthquake caused release of tectonic strain along three active faults at distances of 45 to 75 km from the epicenter. It is suggested that this mechanism of strain release is best described as "seismic shaking."

Chapter IV

The changes of apparent stress with depth are studied in the South American deep seismic zone. For shallow earthquakes the apparent stress is 20 bars on the average, the same as for earthquakes in the Aleutians and on Oceanic Ridges. At depths between 50 and 150 km the apparent stresses are relatively high, approximately 380 bars, and around 600 km depth they are again near 20 bars. The seismic efficiency is estimated to be 0.1. This suggests that the true stress is obtained by multiplying the apparent stress by ten. The variation of apparent stress with depth is explained in terms of the hypothesis of ocean floor consumption.
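
For reference, the apparent stress discussed here is conventionally defined as

    σ_a = η σ̄ = μ E_s / M_0,

the product of the seismic efficiency η and the average acting stress σ̄, estimated from the rigidity μ, the radiated seismic energy E_s, and the seismic moment M_0; with η ≈ 0.1, the average (true) stress is therefore taken to be about ten times the apparent stress, as stated above.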