15 results for functional interpretation

in CaltechTHESIS


Relevance:

30.00%

Publisher:

Abstract:

This dissertation primarily describes chemical-scale studies of G protein-coupled receptors and Cys-loop ligand-gated ion channels to better understand ligand binding interactions and the mechanism of channel activation using recently published crystal structures as a guide. These studies employ the use of unnatural amino acid mutagenesis and electrophysiology to measure subtle changes in receptor function.

In Chapter 2, the role of a conserved aromatic microdomain predicted in the D3 dopamine receptor is probed in the closely related D2 and D4 dopamine receptors. This domain was found to act as a structural unit near the ligand binding site that is important for receptor function. The domain consists of several functionally important noncovalent interactions, including hydrogen-bonding, aromatic-aromatic, and sulfur-π interactions, that show strong couplings by mutant cycle analysis. We also propose an alternative interpretation of the linear fluorination plot observed at W6.48, a residue previously thought to participate in a cation-π interaction with dopamine.
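
For readers unfamiliar with the method, double mutant cycle analysis is conventionally applied to dose-response (EC50) data as follows; this is the standard formulation rather than a quotation of the thesis's own notation.

```latex
\[
\Omega \;=\; \frac{\mathrm{EC_{50}}(\mathrm{WT})\cdot\mathrm{EC_{50}}(\mathrm{double\ mutant})}
                 {\mathrm{EC_{50}}(\mathrm{mutant\ 1})\cdot\mathrm{EC_{50}}(\mathrm{mutant\ 2})},
\qquad
\Delta\Delta G_{\mathrm{int}} \;=\; -RT\,\ln\Omega .
\]
```

A coupling parameter Ω far from unity (equivalently, a nonzero ΔΔG_int) indicates that the two positions interact energetically, which is the sense of "strong couplings" above.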

Chapter 3 outlines attempts to incorporate chemically synthesized and in vitro acylated unnatural amino acids into mammalian cells. While our attempts were not successful, method optimizations and data for nonsense suppression with an in vivo acylated tRNA are included. This chapter is intended to aid future researchers attempting unnatural amino acid mutagenesis in mammalian cells.

Chapter 4 identifies a cation-π interaction between glutamate and a tyrosine residue on loop C in the GluClβ receptor. Using the recently published crystal structure of the homologous GluClα receptor, other ligand-binding and protein-protein interactions are probed to determine the similarity between this invertebrate receptor and other more distantly related vertebrate Cys-loop receptors. We find that many of the interactions previously observed are conserved in the GluCl receptors; however, care must be taken when extrapolating structural data.

Chapter 5 examines inherent properties of the GluClα receptor that are responsible for the observed glutamate insensitivity of the receptor. Chimera synthesis and mutagenesis reveal the C-terminal portion of the M4 helix and the C-terminus as contributing to formation of the decoupled state, in which ligand binding is incapable of triggering channel gating. Receptor mutagenesis was unable to identify single-residue mismatches or impaired protein-protein interactions within this domain. We conclude that M4 helix structure and/or membrane dynamics are likely the cause of ligand insensitivity in this receptor and that the M4 helix has an important role in the activation process.

Relevance:

30.00%

Publisher:

Abstract:

The main focus of this thesis is the use of high-throughput sequencing technologies in functional genomics (in particular in the form of ChIP-seq, chromatin immunoprecipitation coupled with sequencing, and RNA-seq) and the study of the structure and regulation of transcriptomes. Some parts of it are of a more methodological nature while others describe the application of these functional genomic tools to address various biological problems. A significant part of the research presented here was conducted as part of the ENCODE (ENCyclopedia Of DNA Elements) Project.

The first part of the thesis focuses on the structure and diversity of the human transcriptome. Chapter 1 contains an analysis of the diversity of the human polyadenylated transcriptome based on RNA-seq data generated for the ENCODE Project. Chapter 2 presents a simulation-based examination of the performance of some of the most popular computational tools used to assemble and quantify transcriptomes. Chapter 3 includes a study of variation in gene expression, alternative splicing and allelic expression bias at the single-cell level and on a genome-wide scale in human lymphoblastoid cells; it also brings forward a number of methodological considerations critical to the practice of single-cell RNA-seq measurements.

The second part presents several studies applying functional genomic tools to the study of the regulatory biology of organellar genomes, primarily in mammals but also in plants. Chapter 5 contains an analysis of the occupancy of the human mitochondrial genome by TFAM, an important structural and regulatory protein in mitochondria, using ChIP-seq. In Chapter 6, the mitochondrial DNA occupancy of the TFB2M transcriptional regulator, the MTERF termination factor, and the mitochondrial RNA and DNA polymerases is characterized. Chapter 7 consists of an investigation into the curious phenomenon of the physical association of nuclear transcription factors with mitochondrial DNA, based on the diverse collections of transcription factor ChIP-seq datasets generated by the ENCODE, mouseENCODE and modENCODE consortia. In Chapter 8 this line of research is further extended to existing publicly available ChIP-seq datasets in plants and their mitochondrial and plastid genomes.

The third part is dedicated to the analytical and experimental practice of ChIP-seq. As part of the ENCODE Project, a set of metrics for assessing the quality of ChIP-seq experiments was developed, and the results of this activity are presented in Chapter 9. These metrics were later used to carry out a global analysis of ChIP-seq quality in the published literature (Chapter 10). In Chapter 11, the development and initial application of automated robotic ChIP-seq (in which these metrics also played a major role) are presented.
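
As one concrete illustration of the kind of quality metric referred to here, the sketch below computes FRiP (fraction of reads in peaks), one of the measures used for ChIP-seq quality assessment; it is a simplified, hypothetical implementation operating on plain coordinate lists rather than the consortium's actual tooling.

```python
from bisect import bisect_right

def frip(read_positions, peaks):
    """Fraction of Reads in Peaks (FRiP), a simple ChIP-seq quality metric.

    read_positions: read start coordinates on a single chromosome (this sketch
                    ignores multi-chromosome bookkeeping for brevity).
    peaks: list of non-overlapping (start, end) intervals on the same chromosome.
    """
    starts = sorted(read_positions)
    in_peaks = 0
    for start, end in peaks:
        # count reads whose start coordinate falls inside the peak interval
        lo = bisect_right(starts, start - 1)
        hi = bisect_right(starts, end)
        in_peaks += hi - lo
    return in_peaks / len(starts) if starts else 0.0

# toy example: 7 of 8 reads fall inside the two peaks
reads = [10, 12, 55, 57, 60, 100, 300, 305]
peaks = [(5, 65), (295, 310)]
print(frip(reads, peaks))  # 0.875
```

Low FRiP values generally indicate weak enrichment, although the appropriate threshold depends on the factor and the peak caller.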

The fourth part presents the results of some additional projects the author has been involved in, including the study of the role of the Piwi protein in the transcriptional regulation of transposon expression in Drosophila (Chapter 12), and the use of single-cell RNA-seq to characterize the heterogeneity of gene expression during cellular reprogramming (Chapter 13).

The last part of the thesis provides a review of the results of the ENCODE Project and of the interpretation of the complexity of biochemical activity in mammalian genomes that they have revealed (Chapters 15 and 16), an overview of the technical developments expected in the near future and their impact on the field of functional genomics (Chapter 14), and a discussion of some so far insufficiently explored research areas whose future study will, in the opinion of the author, provide deep insights into many fundamental but not yet completely answered questions about the transcriptional biology of eukaryotes and its regulation.

Relevance:

30.00%

Publisher:

Abstract:

Single-cell functional proteomics assays can connect genomic information to biological function through quantitative and multiplex protein measurements. Tools for single-cell proteomics have developed rapidly over the past 5 years and are providing unique opportunities. This thesis describes an emerging microfluidics-based toolkit for single cell functional proteomics, focusing on the development of the single cell barcode chips (SCBCs) with applications in fundamental and translational cancer research.

The thesis begins with a microchip designed to simultaneously quantify a panel of secreted, cytoplasmic, and membrane proteins from single cells; this chip is the prototype for subsequent proteomic microchips with more sophisticated designs used in preclinical cancer research or clinical applications. The SCBCs are a highly versatile and information-rich tool for single-cell functional proteomics. They are based upon isolating individual cells, or defined numbers of cells, within microchambers, each of which is equipped with a large antibody microarray (the barcode), with between a few hundred and ten thousand microchambers included within a single microchip. Functional proteomics assays at single-cell resolution yield unique pieces of information that significantly shape thinking in cancer research. An in-depth discussion of the analysis and interpretation of this unique information, such as functional protein fluctuations and protein-protein correlative interactions, follows.

The SCBC is a powerful tool for resolving the functional heterogeneity of cancer cells. It has the capacity to extract a comprehensive picture of the signal transduction network from single tumor cells and thus provides insight into the effect of targeted therapies on protein signaling networks. We demonstrate this point by applying the SCBCs to investigate three isogenic cell lines of glioblastoma multiforme (GBM).

The cancer cell population is highly heterogeneous, with high-amplitude fluctuations at the single-cell level that in turn confer robustness on the entire population. The concept of a stable population existing in the presence of random fluctuations is reminiscent of many physical systems that are successfully understood using statistical physics. Thus, tools derived from that field can probably be applied, using the fluctuations to determine the nature of the signaling networks. In the second part of the thesis, we focus on such a case, using thermodynamics-motivated principles to understand cancer cell hypoxia: single-cell proteomics assays coupled with a quantitative version of Le Chatelier's principle derived from statistical mechanics yield detailed and surprising predictions, which were found to be correct in both cell line and primary tumor models.

The third part of the thesis demonstrates the application of this technology in preclinical cancer research to study GBM cancer cell resistance to molecular targeted therapy. Physical approaches to anticipating therapy resistance and to identifying effective therapy combinations are discussed in detail. Our approach is based upon elucidating the signaling coordination within the phosphoprotein signaling pathways that are hyperactivated in human GBMs, and interrogating how that coordination responds to perturbation by a targeted inhibitor. Strongly coupled protein-protein interactions constitute most signaling cascades. A physical analogy for such a system is the strongly coupled atom-atom interactions in a crystal lattice. Similar to decomposing the atomic interactions into a series of independent normal vibrational modes, a simplified picture of signaling network coordination can be achieved by diagonalizing protein-protein correlation or covariance matrices to decompose the pairwise correlative interactions into a set of distinct linear combinations of signaling proteins (i.e., independent signaling modes). By doing so, two independent signaling modes, one associated with mTOR signaling and a second associated with ERK/Src signaling, have been resolved, which in turn allows us to anticipate resistance and to design effective combination therapies, as well as to identify those therapies and therapy combinations that will be ineffective. We validated our predictions in mouse tumor models, and all predictions were borne out.
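
To make the "independent signaling modes" idea concrete, here is a minimal sketch (with invented data and protein names) of the decomposition described above: computing a protein-protein covariance matrix from single-cell measurements and diagonalizing it, so that the leading eigenvectors play the role of the independent modes.

```python
import numpy as np

# hypothetical single-cell data: rows = cells, columns = phosphoproteins
rng = np.random.default_rng(0)
proteins = ["p-mTOR", "p-S6K", "p-ERK", "p-Src"]
data = rng.normal(size=(500, len(proteins)))
data[:, 1] += 0.8 * data[:, 0]   # impose an "mTOR-like" correlated block
data[:, 3] += 0.8 * data[:, 2]   # impose an "ERK/Src-like" correlated block

cov = np.cov(data, rowvar=False)          # protein-protein covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)    # diagonalize the symmetric matrix

# the largest eigenvalues correspond to the dominant independent modes;
# each eigenvector is a linear combination of the measured proteins
for val, vec in sorted(zip(eigvals, eigvecs.T), key=lambda t: -t[0])[:2]:
    weights = ", ".join(f"{p}: {w:+.2f}" for p, w in zip(proteins, vec))
    print(f"mode variance {val:.2f} -> {weights}")
```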

In the last part, some preliminary results on the clinical translation of single-cell proteomics chips are presented. The successful demonstration of our work on human-derived xenografts provides the rationale to extend our current work into the clinic. It will enable us to interrogate GBM tumor samples in a way that could potentially yield a straightforward, rapid interpretation, so that we can give therapeutic guidance to the attending physicians within a clinically relevant time scale. The technical challenges of clinical translation are presented, and our solutions to address them are discussed as well. A clinical case study then follows, in which preliminary data collected from a pediatric GBM patient bearing an EGFR-amplified tumor are presented to demonstrate the general protocol and workflow of the proposed clinical studies.

Relevance:

20.00%

Publisher:

Abstract:

A number of cell-cell interactions in the nervous system are mediated by immunoglobulin gene superfamily members. For example, neuroglian, a homophilic neural cell adhesion molecule in Drosophila, has an extracellular portion comprising six C2-type immunoglobulin-like domains followed by five fibronectin type III (FnIII) repeats. Neuroglian shares this domain organization and significant sequence identity with L1, a murine neural adhesion molecule that could be a functional homologue. Here I report the crystal structure, at 2.0 Å resolution, of a proteolytic fragment containing the first two FnIII repeats of neuroglian (NgFn1,2). The interpretation of photomicrographs of rotary-shadowed Ng, the entire extracellular portion of neuroglian, and NgFn1-5, the five neuroglian FnIII domains, is also discussed.

The structure of NgFn1,2 consists of two roughly cylindrical β-barrel structural motifs arranged in a head-to-tail fashion, with the domains meeting at an angle of ~120° as defined by the cylinder axes. The folding topology of each domain is identical to that previously observed for single FnIII domains from tenascin and fibronectin. The domains of NgFn1,2 are related by an approximate two-fold screw axis that is nearly parallel to the longest dimension of the fragment. Assuming this relative orientation is a general property of tandem FnIII repeats, the multiple tandem FnIII domains in neuroglian and other proteins are modeled as thin straight rods with a two-domain zig-zag repeat. When combined with the dimensions of pairs of tandem immunoglobulin-like domains from CD4 and CD2, this model suggests that neuroglian is a long narrow molecule (20-30 Å in diameter) that extends up to 370 Å from the cell surface.

In photomicrographs, rotary-shadowed Ng and NgFn1-5 appear to be highly flexible rod-like molecules. NgFn1-5 is observed to bend in at least two positions and has a mean total length consistent with models generated from the NgFn1,2 structure. Ng molecules have up to four bends and a mean total length of 392 Å, consistent with a head-to-tail packing of neuroglian's C2-type domains.

Relevance:

20.00%

Publisher:

Abstract:

Waking up from a dreamless sleep, I open my eyes, recognize my wife’s face and am filled with joy. In this thesis, I used functional Magnetic Resonance Imaging (fMRI) to gain insights into the mechanisms involved in this seemingly simple daily occurrence, which poses at least three great challenges to neuroscience: how does conscious experience arise from the activity of the brain? How does the brain process visual input to the point of recognizing individual faces? How does the brain store semantic knowledge about people that we know? To start tackling the first question, I studied the neural correlates of unconscious processing of invisible faces. I was unable to detect significant activations related to the processing of completely invisible faces, despite existing reports in the literature. I thus moved on to the next question and studied how recognition of a familiar person was achieved in the brain; I focused on finding invariant representations of person identity – representations that would be activated any time we think of a familiar person, read their name, see their picture, hear them talk, etc. There again, I could not find significant evidence for such representations with fMRI, even in regions where they had previously been found with single-unit recordings in human patients (the Jennifer Aniston neurons). Faced with these null outcomes, the scope of my investigations eventually turned back towards the technique that I had been using, fMRI, and the recently praised analytical tools that I had been trusting, Multivariate Pattern Analysis. After a mostly disappointing attempt at replicating, with fMRI, a strong single-unit finding of a categorical response to animals in the right human amygdala, I put fMRI decoding to an ultimate test with a unique dataset acquired in the macaque monkey. There I showed a dissociation between the ability of fMRI to pick up face-viewpoint information and its inability to pick up face-identity information, which I mostly traced back to the poor clustering of identity-selective units. Though fMRI decoding is a powerful new analytical tool, it does not rid fMRI of its inherent limitations as a hemodynamics-based measure.
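
For readers unfamiliar with Multivariate Pattern Analysis, the sketch below shows the basic cross-validated decoding procedure referred to above: a linear classifier is trained on voxel patterns and tested on held-out trials, with accuracy above chance taken as evidence that the patterns carry stimulus information. The data are synthetic and the pipeline is greatly simplified relative to real fMRI analyses.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# synthetic "voxel patterns": 80 trials x 200 voxels, two stimulus classes
rng = np.random.default_rng(1)
n_trials, n_voxels = 80, 200
labels = np.repeat([0, 1], n_trials // 2)      # e.g. two face identities
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1, :20] += 0.5              # weak class-dependent signal

# cross-validated decoding accuracy; chance level is 0.5
clf = SVC(kernel="linear")
scores = cross_val_score(clf, patterns, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```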

Relevance:

20.00%

Publisher:

Abstract:

This work seeks to understand past and present surface conditions on the Moon using two different but complementary approaches: topographic analysis using high-resolution elevation data from recent spacecraft missions and forward modeling of the dominant agent of lunar surface modification, impact cratering. The first investigation focuses on global surface roughness of the Moon, using a variety of statistical parameters to explore slopes at different scales and their relation to competing geological processes. We find that highlands topography behaves as a nearly self-similar fractal system on scales of order 100 meters, and there is a distinct change in this behavior above and below approximately 1 km. Chapter 2 focuses this analysis on two localized regions: the lunar south pole, including Shackleton crater, and the large mare-filled basins on the nearside of the Moon. In particular, we find that differential slope, a statistical measure of roughness related to the curvature of a topographic profile, is extremely useful in distinguishing between geologic units. Chapter 3 introduces a numerical model that simulates a cratered terrain by emplacing features of characteristic shape geometrically, allowing for tracking of both the topography and surviving rim fragments over time. The power spectral density of cratered terrains is estimated numerically from model results and benchmarked against a 1-dimensional analytic model. The power spectral slope is observed to vary predictably with the size-frequency distribution of craters, as well as the crater shape. The final chapter employs the rim-tracking feature of the cratered terrain model to analyze the evolving size-frequency distribution of craters under different criteria for identifying "visible" craters from surviving rim fragments. A geometric bias exists that systematically overcounts large or small craters, depending on the rim fraction required to count a given feature as either visible or erased.
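
As a sketch of the kind of scale-dependent roughness statistic discussed here (under the simplifying assumption of a one-dimensional, evenly sampled profile), the code below estimates RMS slope as a function of baseline; for a self-affine surface this quantity follows a power law in baseline, and a break in its log-log slope marks the kind of scale transition described above.

```python
import numpy as np

def rms_slope(profile, dx, baselines):
    """RMS slope of a 1-D elevation profile at several baselines.

    profile:   evenly sampled elevations in meters (e.g. an altimetry track)
    dx:        sample spacing in meters
    baselines: baselines in meters, each an integer multiple of dx
    """
    result = {}
    for L in baselines:
        step = int(round(L / dx))
        dz = profile[step:] - profile[:-step]        # elevation differences at lag L
        result[L] = np.sqrt(np.mean((dz / L) ** 2))  # RMS slope at this baseline
    return result

# synthetic random-walk profile, for illustration only
rng = np.random.default_rng(0)
profile = np.cumsum(rng.normal(size=4096)) * 0.5     # meters
print(rms_slope(profile, dx=100.0, baselines=[100, 200, 400, 800, 1600]))
```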

Relevance:

20.00%

Publisher:

Abstract:

Methods that exploit the intrinsic locality of molecular interactions show significant promise in making tractable the electronic structure calculation of large-scale systems. In particular, embedded density functional theory (e-DFT) offers a formally exact approach to electronic structure calculations in which the interactions between subsystems are evaluated in terms of their electronic density. In the following dissertation, methodological advances of embedded density functional theory are described, numerically tested, and applied to real chemical systems.

First, we describe an e-DFT protocol in which the non-additive kinetic energy component of the embedding potential is treated exactly. Then, we present a general implementation of the exact calculation of the non-additive kinetic potential (NAKP) and apply it to molecular systems. We demonstrate that the implementation using the exact NAKP is in excellent agreement with reference Kohn-Sham calculations, whereas the approximate functionals lead to qualitative failures in the calculated energies and equilibrium structures.
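
For reference, the non-additive kinetic energy whose exact treatment is described here, and the corresponding potential entering the embedding potential, are conventionally written as follows (standard e-DFT notation, not quoted from the thesis):

```latex
\[
T_s^{\mathrm{nad}}[\rho_A,\rho_B] \;=\; T_s[\rho_A+\rho_B] - T_s[\rho_A] - T_s[\rho_B],
\qquad
v_{\mathrm{NAKP}}(\mathbf{r}) \;=\; \frac{\delta T_s^{\mathrm{nad}}[\rho_A,\rho_B]}{\delta\rho_A(\mathbf{r})} .
\]
```

Approximate kinetic-energy functionals evaluate this term from the densities alone; the exact treatment avoids the qualitative failures mentioned above.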

Next, we introduce density-embedding techniques to enable the accurate and stable calculation of correlated wavefunction (CW) methods in complex environments. Embedding potentials calculated using e-DFT introduce the effect of the environment on a subsystem for CW calculations (WFT-in-DFT). We demonstrate that WFT-in-DFT calculations are in good agreement with CW calculations performed on the full complex.

We significantly improve the numerics of the algorithm by enforcing orthogonality between subsystems through the introduction of a projection operator. Utilizing the projection-based embedding scheme, we rigorously analyze the sources of error in quantum embedding calculations in which an active subsystem is treated using CWs and the remainder using density functional theory. We show that the embedding potential felt by the electrons in the active subsystem makes only a small contribution to the error of the method, whereas the error in the nonadditive exchange-correlation energy dominates. We develop an algorithm that corrects this term and demonstrate the accuracy of this corrected embedding scheme.
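
A common way to write the projection technique alluded to here is the level-shift form of projection-based embedding, in which the subsystem-A operator is augmented by a projector onto the occupied orbitals of subsystem B scaled by a large parameter μ (shown schematically; the exact operator used in the thesis may differ):

```latex
\[
\hat{F}^{A\text{-in-}B} \;=\; \hat{F}^{A} \;+\; \mu \sum_{i\,\in\,\mathrm{occ}(B)} \lvert\phi_i^{B}\rangle\langle\phi_i^{B}\rvert ,
\qquad
\mu \to \infty \ \Rightarrow\ \langle\phi_a^{A}\,|\,\phi_i^{B}\rangle = 0 .
\]
```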

Relevance:

20.00%

Publisher:

Abstract:

In this work we chiefly deal with two broad classes of problems in computational materials science: determining the doping mechanism in a semiconductor and developing an extreme-condition equation of state. While solving certain aspects of these questions is well-trodden ground, both require extending the reach of existing methods to fully answer them. Here we choose to build upon the framework of density functional theory (DFT), which provides an efficient means to investigate a system from a quantum mechanical description.

Zinc phosphide (Zn3P2) could be the basis for cheap and highly efficient solar cells. Its use in this regard is limited by the difficulty of n-type doping the material. In an effort to understand the mechanism behind this, the energetics and electronic structure of intrinsic point defects in zinc phosphide are studied using generalized Kohn-Sham theory with the Heyd, Scuseria, and Ernzerhof (HSE) hybrid functional for exchange and correlation. A novel 'perturbation extrapolation' scheme is utilized to extend the use of the computationally expensive HSE functional to this large-scale defect system. According to the calculations, the formation energies of charged phosphorus interstitial defects are very low in n-type Zn3P2; these defects act as 'electron sinks', nullifying the desired doping and lowering the Fermi level back towards the p-type regime. Going forward, this insight provides clues to fabricating useful zinc phosphide based devices. In addition, the methodology developed for this work can be applied to further doping studies in other systems.
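
For context, the charged-defect formation energies referred to here are conventionally obtained from supercell total energies via the standard expression below; chemical-potential references and finite-size corrections are system-specific choices not spelled out in this abstract.

```latex
\[
E_f[X^{q}] \;=\; E_{\mathrm{tot}}[X^{q}] - E_{\mathrm{tot}}[\mathrm{bulk}]
\;-\; \sum_i n_i\,\mu_i \;+\; q\,\bigl(E_F + E_{\mathrm{VBM}}\bigr) \;+\; E_{\mathrm{corr}} ,
\]
```

where n_i counts atoms of species i added to (n_i > 0) or removed from (n_i < 0) the supercell, E_F is referenced to the valence-band maximum, and E_corr collects finite-size corrections.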

Accurate determination of high-pressure, high-temperature equations of state is fundamental in a variety of fields. However, it is often very difficult to cover a wide range of temperatures and pressures in a laboratory setting. Here we develop methods to determine a multi-phase equation of state for Ta through computation. The typical means of investigating thermodynamic properties is 'classical' molecular dynamics, in which the atomic motion is calculated from Newtonian mechanics with the electronic effects abstracted away into an interatomic potential function. For our purposes, a 'first principles' approach such as DFT is useful, as a classical potential is typically valid for only the portion of the phase diagram to which it has been fit. Furthermore, at extremes of temperature and pressure, quantum effects become critical to accurately capturing an equation of state and are very hard to capture in even complex model potentials. This requires extending the inherently zero-temperature DFT to predict the finite-temperature response of the system. Statistical modelling and thermodynamic integration are used to extend our results over all phases, as well as phase-coexistence regions, which are at the limits of typical DFT validity. We deliver the most comprehensive and accurate equation of state that has been produced for Ta. This work also lends insights that can be applied to further equation-of-state work in many other materials.
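
The thermodynamic integration mentioned above typically takes the following form, in which the free energy of the DFT system is obtained by switching from a reference potential whose free energy is known (a generic statement of the method, not the specific protocol of the thesis):

```latex
\[
F_{\mathrm{DFT}}(V,T) \;=\; F_{\mathrm{ref}}(V,T)
\;+\; \int_{0}^{1} \bigl\langle U_{\mathrm{DFT}} - U_{\mathrm{ref}} \bigr\rangle_{\lambda}\,\mathrm{d}\lambda ,
\qquad
U_\lambda = (1-\lambda)\,U_{\mathrm{ref}} + \lambda\,U_{\mathrm{DFT}} .
\]
```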

Relevance:

20.00%

Publisher:

Abstract:

An instrument, the Caltech High Energy Isotope Spectrometer Telescope (HEIST), has been developed to measure isotopic abundances of cosmic ray nuclei in the charge range 3 ≤ Z ≤ 28 and the energy range between 30 and 800 MeV/nuc by employing an energy loss versus residual energy (ΔE-E′) technique. Measurements of particle trajectories and energy losses are made using a multiwire proportional counter hodoscope and a stack of CsI(Tl) crystal scintillators, respectively. A detailed analysis has been made of the mass resolution capabilities of this instrument.
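
The energy loss versus residual energy technique determines mass roughly as follows: approximating the range-energy relation by a power law, a measured energy loss ΔE in a detector of path-length-corrected thickness L·secθ, together with the residual energy E′, fixes the mass (a textbook sketch rather than the thesis's detailed analysis):

```latex
\[
R(E) \;\approx\; \frac{kM}{Z^{2}}\left(\frac{E}{M}\right)^{a},
\qquad
L\sec\theta \;=\; R(E'+\Delta E) - R(E')
\;=\; \frac{kM}{Z^{2}}\left[\left(\frac{E'+\Delta E}{M}\right)^{a} - \left(\frac{E'}{M}\right)^{a}\right],
\]
```

which, with the charge Z determined separately, can be solved for the mass M.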

Landau fluctuations set a fundamental limit on the attainable mass resolution, which for this instrument ranges between ~0.07 AMU for Z ≈ 3 and ~0.2 AMU for Z ≈ 26. Contributions to the mass resolution due to uncertainties in measuring the path length and energy losses of the detected particles are shown to degrade the overall mass resolution to between ~0.1 AMU (Z ≈ 3) and ~0.3 AMU (Z ≈ 26).

A formalism, based on the leaky box model of cosmic ray propagation, is developed for obtaining isotopic abundance ratios at the cosmic ray sources from abundances measured in local interstellar space for elements having three or more stable isotopes, one of which is believed to be absent at the cosmic ray sources. This purely secondary isotope is used as a tracer of secondary production during propagation. This technique is illustrated for the isotopes of the elements O, Ne, S, Ar and Ca.
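
Schematically, the steady-state leaky box balance underlying this formalism equates production against escape and interaction losses for each species (notation simplified relative to the thesis):

```latex
\[
\frac{N_i}{\tau_{\mathrm{esc}}} \;+\; \frac{N_i}{\tau_{\mathrm{int},\,i}}
\;=\; Q_i \;+\; \sum_{j} \frac{N_j}{\tau_{j\to i}} ,
\]
```

where Q_i is the source term; setting Q_i = 0 for the purely secondary tracer isotope calibrates the secondary-production terms, which can then be subtracted for the other isotopes of the element.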

The uncertainties in the derived source ratios due to errors in fragmentation and total inelastic cross sections, in observed spectral shapes, and in measured abundances are evaluated. It is shown that the dominant sources of uncertainty are uncorrelated errors in the fragmentation cross sections and statistical uncertainties in measuring local interstellar abundances.

These results are applied to estimate the extent to which uncertainties must be reduced in order to distinguish between cosmic ray production in a solar-like environment and in various environments with greater neutron enrichments.

Relevance:

20.00%

Publisher:

Abstract:

Kohn-Sham density functional theory (KSDFT) is currently the main workhorse of quantum mechanical calculations in physics, chemistry, and materials science. From a mechanical engineering perspective, we are interested in studying the role of defects in the mechanical properties of materials. In real materials, defects are typically found at very small concentrations: e.g., vacancies occur at parts per million, dislocation density in metals ranges from $10^{10} m^{-2}$ to $10^{15} m^{-2}$, and grain sizes vary from nanometers to micrometers in polycrystalline materials. In order to model materials at realistic defect concentrations using DFT, we would need to work with system sizes beyond millions of atoms. Due to the cubic-scaling computational cost with respect to the number of atoms in conventional DFT implementations, such system sizes are unreachable. Since the early 1990s, there has been great interest in developing DFT implementations that have linear-scaling computational cost. A promising approach to achieving linear-scaling cost is to approximate the density matrix in KSDFT. The focus of this thesis is to provide a firm mathematical framework for studying the convergence of these approximations. We reformulate Kohn-Sham density functional theory as a nested variational problem in the density matrix, the electrostatic potential, and a field dual to the electron density. The corresponding functional is linear in the density matrix and thus amenable to spectral representation. Based on this reformulation, we introduce a new approximation scheme, called spectral binning, which does not require smoothing of the occupancy function and thus applies at arbitrarily low temperatures. We prove convergence of the approximate solutions with respect to spectral binning and with respect to an additional spatial discretization of the domain. For a standard one-dimensional benchmark problem, we present numerical experiments in which spectral binning exhibits excellent convergence characteristics and outperforms other linear-scaling methods.
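
The density-matrix reformulation referenced here is typically built on a finite-temperature free-energy functional of the one-particle density matrix of the Mermin type, shown schematically below; the thesis's nested variational formulation additionally introduces the electrostatic potential and a dual field, which are omitted here.

```latex
\[
\mathcal{F}[\Gamma] \;=\; \mathrm{Tr}\,[\Gamma H] \;-\; T\,S[\Gamma],
\qquad
S[\Gamma] \;=\; -k_B\,\mathrm{Tr}\bigl[\Gamma\ln\Gamma + (I-\Gamma)\ln(I-\Gamma)\bigr],
\qquad
0 \preceq \Gamma \preceq I,\quad \mathrm{Tr}\,\Gamma = N_e .
\]
```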

Relevance:

20.00%

Publisher:

Abstract:

In the first part of this thesis (Chapters I and II), the synthesis, characterization, reactivity, and photophysics of per(difluoroborated) tetrakis(pyrophosphito)diplatinate(II) (Pt(POP-BF2)) are discussed. Pt(POP-BF2) was obtained by reaction of [Pt2(POP)4]4- with neat boron trifluoride diethyl etherate (BF3·Et2O). While Pt(POP-BF2) and [Pt2(POP)4]4- have similar structures and absorption spectra, they differ in significant ways. Firstly, as discussed in Chapter I, the former is less susceptible to oxidation, as evidenced by the reversibility of its oxidation by I2. Secondly, while the first excited triplet states (T1) of both Pt(POP-BF2) and [Pt2(POP)4]4- exhibit long lifetimes (ca. 0.01 ms at room temperature) and substantial zero-field splitting (40 cm-1), Pt(POP-BF2) also has a remarkably long-lived (1.6 ns at room temperature) singlet excited state (S1), indicating slow intersystem crossing (ISC). The fluorescence lifetime and quantum yield (QY) of Pt(POP-BF2) were measured over a range of temperatures, providing insight into the slow ISC process. The remarkable spectroscopic and photophysical properties of Pt(POP-BF2), both in solution and as a microcrystalline powder, form the theme of Chapter II.

In the second part of the thesis (Chapters III and IV), the electrochemical reduction of CO2 to CO by [(L)Mn(CO)3]- catalysts is investigated using density functional theory (DFT). As discussed in Chapter III, the turnover frequency (TOF)-limiting step is the dehydroxylation of [(bpy)Mn(CO)3(CO2H)]0/- (bpy = bipyridine) by trifluoroethanol (TFEH) to form [(bpy)Mn(CO)4]+/0. Because the dehydroxylation of [(bpy)Mn(CO)3(CO2H)]- is faster, the maximum TOF (TOFmax) is achieved at potentials sufficient to completely reduce [(bpy)Mn(CO)3(CO2H)]0 to [(bpy)Mn(CO)3(CO2H)]-. Substitution of bipyridine with bipyrimidine reduces the overpotential needed, but at the expense of TOFmax. In Chapter IV, decoration of the bipyrimidine ligand with a pendant alcohol is discussed as a strategy to increase CO2 reduction activity. Our calculations predict that the pendant alcohol acts in concert with an external TFEH molecule, the latter acidifying the former, resulting in an ~80,000-fold improvement in the rate of the TOF-limiting dehydroxylation of [(L)Mn(CO)3(CO2H)]-.
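
The potential dependence of the TOF described here is commonly summarized by the catalytic Tafel relationship for a reduction (a standard molecular-electrocatalysis expression; the thesis's microkinetic analysis is more detailed):

```latex
\[
\mathrm{TOF}(E) \;=\; \frac{\mathrm{TOF}_{\max}}{1 + \exp\!\left[\dfrac{F}{RT}\bigl(E - E^{0}_{\mathrm{cat}}\bigr)\right]} ,
\]
```

so that the TOF approaches TOFmax once the applied potential is well negative of the catalyst couple, consistent with the statement above that TOFmax is reached at potentials sufficient to fully reduce the intermediate.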

An interesting strategy for the co-upgrading of light olefins and alkanes into heavier alkanes is the subject of Appendix B. The proposed scheme involves dimerization of the light olefin, operating in tandem with transfer hydrogenation between the olefin dimer and the light alkane. The work presented therein involved a Ta olefin dimerization catalyst and a silica-supported Ir transfer hydrogenation catalyst. Olefin dimer was formed under reaction conditions; however, this did not undergo transfer hydrogenation with the light alkane. A significant challenge is that the Ta catalyst selectively produces highly branched dimers, which are unable to undergo transfer hydrogenation.

Relevance:

20.00%

Publisher:

Abstract:

Hopanoids are a class of sterol-like lipids produced by select bacteria. Their preservation in the rock record for billions of years as fossilized hopanes lends them geological significance. Much of the structural diversity present in this class of molecules, which likely underpins important biological functions, is lost during fossilization. Yet, one type of modification that persists during preservation is methylation at C-2. The resulting 2-methylhopanoids are prominent molecular fossils and have an intriguing pattern over time, exhibiting increases in abundance associated with Ocean Anoxic Events during the Phanerozoic. This thesis uses diverse methods to address what the presence of 2-methylhopanes tells us about the microbial life and environmental conditions of their ancient depositional settings. Through an environmental survey of hpnP, the gene encoding the C-2 hopanoid methylase, we found that many different taxa are capable of producing 2-methylhopanoids in more diverse modern environments than expected. This study also revealed that hpnP is significantly overrepresented in organisms that are plant symbionts, in environments associated with plants, and with metabolisms that support plant-microbe interactions; collectively, these correlations provide a clue about the biological importance of 2-methylhopanoids. Phylogenetic reconstruction of the evolutionary history of hpnP revealed that 2-methylhopanoid production arose in the Alphaproteobacteria, indicating that the origin of these molecules is younger than originally thought. Additionally, we took a genetic approach to understanding the role of 2-methylhopanoids in Cyanobacteria, using the filamentous symbiotic Nostoc punctiforme. We found that hopanoids likely aid in rigidifying the cell membrane but do not appear to provide resistance to osmotic or outer membrane stressors, as has been shown in other organisms. The work presented in this thesis supports previous findings that 2-methylhopanoids are not biomarkers for oxygenic photosynthesis and provides new insights by defining their distribution in modern environments, identifying their evolutionary origin, and investigating their role in Cyanobacteria. These efforts in modern settings aid the formation of a robust interpretation of 2-methylhopanes in the rock record.

Relevance:

20.00%

Publisher:

Abstract:

An array of two spark chambers and six trays of plastic scintillation counters was used to search for unaccompanied fractionally charged particles in cosmic rays near sea level. No acceptable events were found with energy losses by ionization between 0.04 and 0.7 times that of unit-charged minimum-ionizing particles. New 90%-confidence upper limits were thereby established for the fluxes of fractionally charged particles in cosmic rays, namely, (1.04 ± 0.07)×10^-10 and (2.03 ± 0.16)×10^-10 cm^-2 sr^-1 sec^-1 for minimum-ionizing particles with charges 1/3 and 2/3, respectively.
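
For orientation, with zero candidate events a 90%-confidence Poisson upper limit corresponds to 2.30 expected events, so a flux limit of the quoted form follows from the exposure; this is a schematic statement, and the quoted uncertainties presumably reflect the uncertainty in the acceptance.

```latex
\[
\Phi_{90\%} \;<\; \frac{2.30}{A\Omega\; T\; \epsilon} ,
\]
```

where AΩ is the geometry factor, T the live time, and ε the detection efficiency for the charge in question.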

In order to be certain that the spark chambers could have functioned for the low levels of ionization expected from particles with small fractional charges, tests were conducted to estimate the efficiency of the chambers as they had been used in this experiment. These tests showed that the spark-chamber system with the track-selection criteria used might have been over 99% efficient for the entire range of energy losses considered.

Lower limits were then obtained for the mass of a quark by considering the above flux limits and a particular model for the production of quarks in cosmic rays. In this model, which involves the multi-peripheral Regge hypothesis, the production cross section and the corresponding mass limit are critically dependent on the Regge trajectory assigned to a quark. If quarks are "elementary," with a flat trajectory, the mass of a quark can be expected to be at least 6 ± 2 BeV/c^2. If quarks have a trajectory with unit slope, just as the existing hadrons do, the mass of a quark might be as small as 1.3 ± 0.2 BeV/c^2. For a trajectory with unit slope and a mass larger than a couple of BeV/c^2, the production cross section may be so low that quarks might never be observed in nature.

Relevance:

20.00%

Publisher:

Abstract:

In four chapters various aspects of earthquake source are studied.

Chapter I

Surface displacements that followed the Parkfield, 1966, earthquakes were measured for two years with six small-scale geodetic networks straddling the fault trace. The logarithmic rate and the periodic nature of the creep displacement recorded on a strain meter made it possible to predict creep episodes on the San Andreas fault. Some individual earthquakes were related directly to surface displacement, while in general, slow creep and aftershock activity were found to occur independently. The Parkfield earthquake is interpreted as a buried dislocation.

Chapter II

The source parameters of earthquakes between magnitude 1 and 6 were studied using field observations, fault plane solutions, and surface wave and S-wave spectral analysis. The seismic moment, M0, was found to be related to local magnitude, ML, by log M0 = 1.7 ML + 15.1. The source length vs. magnitude relation for the San Andreas system was found to be ML = 1.9 log L - 6.7. The surface wave envelope parameter AR gives the moment according to log M0 = log AR300 + 30.1, and the stress drop, τ, was found to be related to the magnitude by τ = 0.54 M - 2.58. The relation between surface wave magnitude MS and ML is proposed to be MS = 1.7 ML - 4.1. It is proposed to estimate the relative stress level (and possibly the strength) of a source region by the amplitude ratio of high-frequency to low-frequency waves. An apparent stress map for Southern California is presented.
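
As a quick worked example of the moment-magnitude relation quoted above (units as in studies of that era, presumably dyne·cm):

```latex
\[
M_L = 5.0:\qquad \log M_0 = 1.7\,(5.0) + 15.1 = 23.6,
\qquad M_0 \approx 4\times10^{23}\ \mathrm{dyne\,cm} .
\]
```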

Chapter III

Seismic triggering and seismic shaking are proposed as two closely related mechanisms of strain release which explain observations of the character of the P wave generated by the Alaskan earthquake of 1964, and distant fault slippage observed after the Borrego Mountain, California earthquake of 1968. The Alaska, 1964, earthquake is shown to be adequately described as a series of individual rupture events. The first of these events had a body wave magnitude of 6.6 and is considered to have initiated or triggered the whole sequence. The propagation velocity of the disturbance is estimated to be 3.5 km/sec. On the basis of circumstantial evidence it is proposed that the Borrego Mountain, 1968, earthquake caused release of tectonic strain along three active faults at distances of 45 to 75 km from the epicenter. It is suggested that this mechanism of strain release is best described as "seismic shaking."

Chapter IV

The changes of apparent stress with depth are studied in the South American deep seismic zone. For shallow earthquakes the apparent stress is 20 bars on the average, the same as for earthquakes in the Aleutians and on Oceanic Ridges. At depths between 50 and 150 km the apparent stresses are relatively high, approximately 380 bars, and around 600 km depth they are again near 20 bars. The seismic efficiency is estimated to be 0.1. This suggests that the true stress is obtained by multiplying the apparent stress by ten. The variation of apparent stress with depth is explained in terms of the hypothesis of ocean floor consumption.
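
Apparent stress as used here is conventionally defined from the radiated seismic energy E_s, the rigidity μ, and the seismic moment M_0, with the seismic efficiency η relating it to the average stress acting during faulting:

```latex
\[
\sigma_{\mathrm{a}} \;=\; \eta\,\bar{\sigma} \;=\; \mu\,\frac{E_s}{M_0} ,
\]
```

so an estimated efficiency of η ≈ 0.1 implies, as stated above, that the true stress is roughly ten times the apparent stress.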

Relevance:

20.00%

Publisher:

Abstract:

The Everett interpretation of quantum mechanics is an increasingly popular alternative to the traditional Copenhagen interpretation, but a few major issues prevent its widespread adoption. One of these issues is the origin of probabilities in the Everett interpretation, which this thesis attempts to survey. The most successful resolution of the probability problem thus far is the decision-theoretic program, which attempts to frame probabilities as outcomes of rational decision making. This marks a departure from orthodox interpretations of probabilities in the physical sciences, where probabilities are thought to be objective, stemming from symmetry considerations. This thesis will attempt to offer an evaluation of the decision-theoretic program.