19 results for NOR
in CaltechTHESIS
Abstract:
Hematopoiesis is a well-established system used to study developmental choices amongst cells with multiple lineage potentials, as well as the transcription factor network interactions that drive these developmental paths. Multipotent progenitors travel from the bone marrow to the thymus where T-cell development is initiated, and these early T-cell precursors retain lineage plasticity even after initiating a T-cell program. The development of these early cells is driven by Notch signaling and the combinatorial expression of many transcription factors, several of which are also involved in the development of other cell lineages. The ETS family transcription factor PU.1 is involved in the development of progenitor, myeloid, and lymphoid cells, and can divert progenitor T-cells from the T-lineage to a myeloid lineage. This diversion of early T-cells by PU.1 can be blocked by Notch signaling. The PU.1 and Notch interaction creates a switch wherein PU.1 in the presence of Notch promotes T-cell identity and PU.1 in the absence of Notch signaling promotes a myeloid identity. Here we characterized an early T-cell line, Scid.adh.2c2, as a good model system for studying the myeloid vs. lymphoid developmental choice dependent on PU.1 and Notch signaling. We then used the Scid.adh.2c2 system to identify mechanisms mediating PU.1 and Notch signaling interactions during early T-cell development. We show that the mechanism by which Notch signaling protects pro-T cells is neither degradation nor modification of the PU.1 protein. Instead, we give evidence that Notch signaling blocks the PU.1-driven inhibition of a key set of T-cell regulatory genes including Myb, Tcf7, and Gata3. We show that the protection of Gata3 from PU.1-mediated inhibition, by Notch signaling and Myb, is important for retaining a T-lineage identity. We also discuss a PU.1-driven mechanism involving E-protein inhibition that leads to the inhibition of Notch target genes. This mechanism may serve as a lockdown mechanism in pro-T cells that have made the decision to divert to the myeloid pathway.
Abstract:
Interleukin-2 (IL-2) is an important mediator in the vertebrate immune system. IL-2 is a potent growth factor that mature T lymphocytes use as a proliferation signal and the production of IL-2 is crucial for the clonal expansion of antigen-specific T cells in the primary immune response. IL-2 driven proliferation is dependent on the interaction of the lymphokine with its cognate multichain receptor. IL-2 expression is induced only upon stimulation and transcriptional activation of the IL-2 gene relies extensively on the coordinate interaction of numerous inducible and constitutive trans-acting factors. Over the past several years, thousands of papers have been published regarding molecular and cellular aspects of IL-2 gene expression and IL-2 function. The vast majority of these reports describe work that has been carried out in vitro. However, considerably less is known about control of IL-2 gene expression and IL-2 function in vivo.
To gain new insight into the regulation of IL-2 gene expression in vivo, anatomical and developmental patterns of IL-2 gene expression in the mouse were established by employing in situ hybridization and immunohistochemical staining methodologies to tissue sections generated from normal mice and mutant animals in which T-cell development was perturbed. Results from these studies revealed several interesting aspects of IL-2 gene expression, such as (1) induction of IL-2 gene expression and protein synthesis in the thymus, the primary site of T-cell development in the body, (2) cell-type specificity of IL-2 gene expression in vivo, (3) participation of IL-2 in the extrathymic expansion of mature T cells in particular tissues, independent of an acute immune response to foreign antigen, (4) involvement of IL-2 in maintaining immunologic balance in the mucosal immune system, and (5) potential function of IL-2 in early events associated with hematopoiesis.
Extensive analysis of IL-2 mRNA accumulation and protein production in the murine thymus at various stages of development established the existence of two classes of intrathymic IL-2 producing cells. One class of intrathymic IL-2 producers was found exclusively in the fetal thymus. Cells belonging to this subset were restricted to the outermost region of the thymus. IL-2 expression in the fetal thymus was highly transient; a dramatic peak of IL-2 mRNA accumulation was identified at day 14.5 of gestation and maximal IL-2 protein production was observed 12 hours later, after which both IL-2 mRNA and protein levels rapidly decreased. Significantly, the presence of IL-2 expressing cells in the day 14-15 fetal thymus was not contingent on the generation of T-cell receptor (TcR) positive cells. The second class of IL-2 producing cells was also detectable in the fetal thymus (cells found in this class represented a minority subset of IL-2 producers in the fetal thymus) but persisted in the thymus during later stages of development and after birth. Intrathymic IL-2 producers in postnatal animals were located in the subcapsular region and cortex, indicating that these cells reside in the same areas where immature T cells are consigned. The frequency of IL-2 expressing cells in the postnatal thymus was extremely low, indicating that induction of IL-2 expression and protein synthesis are indicative of a rare activation event. Unlike the fetal class of intrathymic IL-2 producers, the presence of IL-2 producing cells in the postnatal thymus was dependent on the generation of TcR^+ cells. Subsequent examination of intrathymic IL-2 production in mutant postnatal mice unable to produce either αβ or γδ T cells showed that postnatal IL-2 producers in the thymus belong to both αβ and γδ lineages. Additionally, further studies indicated that IL-2 synthesis by immature αβ T cells depends on the expression of bona fide TcR αβ-heterodimers. Taken together, IL-2 production in the postnatal thymus relies on the generation of αβ- or γδ-TcR^+ cells and induction of IL-2 protein synthesis can be linked to an activation event mediated via the TcR.
With regard to tissue specificity of IL-2 gene expression in vivo, analysis of whole body sections obtained from normal neonatal mouse pups by in situ hybridization demonstrated that IL-2 mRNA^+ cells were found in both lymphoid and nonlymphoid tissues with which T cells are associated, such as the thymus (as described above), dermis and gut. Tissues devoid of IL-2 mRNA^+ cells included brain, heart, lung, liver, stomach, spine, spinal cord, kidney, and bladder. Additional analysis of isolated tissues taken from older animals revealed that IL-2 expression was undetectable in bone marrow and in nonactivated spleen and lymph nodes. Thus, it appears that extrathymic IL-2 expressing cells in nonimmunologically challenged animals are relegated to particular epidermal and epithelial tissues in which characterized subsets of T cells reside and that induction of IL-2 gene expression associated with these tissues may be a result of T-cell activation therein.
Based on the neonatal in situ hybridization results, a detailed investigation into possible induction of IL-2 expression resulting in IL-2 protein synthesis in the skin and gut revealed that IL-2 expression is induced in the epidermis and intestine and IL-2 protein is available to drive cell proliferation of resident cells and/or participate in immune function in these tissues. Pertaining to IL-2 expression in the skin, maximal IL-2 mRNA accumulation and protein production were observed when resident Vγ_3^+ T-cell populations were expanding. At this age, both IL-2 mRNA^+ cells and IL-2 protein production were intimately associated with hair follicles. Likewise, at this age a significant number of CD3ε^+ cells were also found in association with follicles. The colocalization of IL-2 expression and CD3ε^+ cells suggests that IL-2 expression is induced when T cells are in contact with hair follicles. In contrast, neither IL-2 mRNA nor IL-2 protein were readily detected once T-cell density in the skin reached steady-state proportions. At this point, T cells were no longer found associated with hair follicles but were evenly distributed throughout the epidermis. In addition, IL-2 expression in the skin was contingent upon the presence of mature T cells therein and induction of IL-2 protein synthesis in the skin did not depend on the expression of a specific TcR on resident T cells. These newly disclosed properties of IL-2 expression in the skin indicate that IL-2 may play an additional role in controlling mature T-cell proliferation by participating in the extrathymic expansion of T cells, particularly those associated with the epidermis.
Finally, regarding IL-2 expression and protein synthesis in the gut, IL-2 producing cells were found associated with the lamina propria of neonatal animals and gut-associated IL-2 production persisted throughout life. In older animals, the frequency of IL-2 producing cells in the small intestine was not identical to that in the large intestine and this difference may reflect regional specialization of the mucosal immune system in response to enteric antigen. Similar to other instances of IL-2 gene expression in vivo, a failure to generate mature T cells also led to an abrogation of IL-2 protein production in the gut. The presence of IL-2 producing cells in the neonatal gut suggested that these cells may be generated during fetal development. Examination of the fetal gut to determine the distribution of IL-2 producing cells therein indicated that there was a tenfold increase in the number of gut-associated IL-2 producers at day 20 of gestation compared to that observed four days earlier and there was little difference between the frequency of IL-2 producing cells in prenatal versus neonatal gut. The origin of these fetally-derived IL-2 producing cells is unclear. Prior to the immigration of IL-2 inducible cells to the fetal gut and/or induction of IL-2 expression therein, IL-2 protein was observed in the fetal liver and fetal omentum, as well as the fetal thymus. Considering that induction of IL-2 protein synthesis may be an indication of future functional capability, detection of IL-2 producing cells in the fetal liver and fetal omentum raises the possibility that IL-2 producing cells in the fetal gut may be extrathymic in origin and IL-2 producing cells in these fetal tissues may not belong solely to the T lineage. Overall, these results provide increased understanding of the nature of IL-2 producing cells in the gut and how the absence of IL-2 production therein and in fetal hematopoietic tissues can result in the acute pathology observed in IL-2 deficient animals.
Abstract:
This dissertation comprises three essays that use theory-based experiments to gain understanding of how cooperation and efficiency are affected by certain variables and institutions in different types of strategic interactions prevalent in our society.
Chapter 2 analyzes indefinite-horizon two-person dynamic favor exchange games with private information in the laboratory. Using a novel experimental design to implement a dynamic game with a stochastic jump signal process, this study provides insights into a relationship in which cooperation occurs without immediate reciprocity. The primary finding is that favor provision under these conditions is considerably less than under the most efficient equilibrium. Also, individuals do not engage in exact score-keeping of net favors; rather, the time since the last favor was provided affects decisions to stop or restart providing favors.
Evidence from experiments in Cournot duopolies is presented in Chapter 3, where players engage in a form of pre-play communication, termed a revision phase, before playing the one-shot game. During this revision phase individuals announce their tentative quantities, which are publicly observed, and revisions are costless. Under real-time revision, the payoffs are determined only by the quantities selected at the end, whereas in a Poisson revision game, opportunities to revise arrive according to a synchronous Poisson process and the tentative quantity corresponding to the last revision opportunity is implemented. Contrasting results emerge. While real-time revision of quantities results in choices that are more competitive than the static Cournot-Nash, significantly lower quantities are implemented in the Poisson revision games. This shows that partial cooperation can be sustained even when individuals interact only once.
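For orientation, the static reference points can be stated for an illustrative textbook parameterization with linear inverse demand and constant marginal cost (an assumption for exposition, not necessarily the parameters used in the experiment):

\[
P = a - b\,(q_1 + q_2), \qquad C_i(q_i) = c\,q_i \quad (a > c > 0,\; b > 0),
\]
\[
q_i^{\text{Cournot-Nash}} = \frac{a-c}{3b}, \qquad q_i^{\text{collusive}} = \frac{a-c}{4b},
\]

so quantities above \((a-c)/(3b)\) are "more competitive than the static Cournot-Nash," while the lower quantities implemented in the Poisson revision games lie toward the collusive benchmark.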
Chapter 4 investigates the effect of varying the message space in a public good game with pre-play communication where player endowments are private information. We find that neither binary communication nor a larger finite numerical message space results in any efficiency gain relative to the situation without any form of communication. Payoffs and public good provision are higher only when participants are provided with a discussion period through unrestricted text chat.
Abstract:
This thesis belongs to the growing field of economic networks. In particular, we develop three essays in which we study the problem of bargaining, discrete choice representation, and pricing in the context of networked markets. Despite analyzing very different problems, the three essays share the common feature of making use of a network representation to describe the market of interest.
In Chapter 1 we present an analysis of bargaining in networked markets. We make two contributions. First, we characterize market equilibria in a bargaining model, and find that players' equilibrium payoffs coincide with their degree of centrality in the network, as measured by Bonacich's centrality measure. This characterization allows us to map, in a simple way, network structures into market equilibrium outcomes, so that payoff dispersion in networked markets is driven by players' network positions. Second, we show that the market equilibrium for our model converges to the so-called eigenvector centrality measure. We show that the economic condition for reaching convergence is that the players' discount factor goes to one. In particular, we show how the discount factor, the matching technology, and the network structure interact in a specific way so that the eigenvector centrality emerges as the limiting case of our market equilibrium.
We point out that the eigenvector approach is a way of finding the most central or relevant players in terms of the “global” structure of the network, while paying less attention to patterns that are more “local”. Mathematically, the eigenvector centrality captures the relevance of players in the bargaining process using the eigenvector associated with the largest eigenvalue of the adjacency matrix of a given network. Thus our result may be viewed as an economic justification of the eigenvector approach in the context of bargaining in networked markets.
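As a concrete illustration of the computation (a minimal sketch on a hypothetical five-player network, not the thesis's bargaining model), eigenvector centrality can be obtained by power iteration on the adjacency matrix:

```python
import numpy as np

def eigenvector_centrality(A, tol=1e-10, max_iter=10000):
    """Power iteration: converges to the eigenvector of the largest
    eigenvalue of a nonnegative, connected adjacency matrix A."""
    n = A.shape[0]
    x = np.ones(n) / n
    for _ in range(max_iter):
        x_new = A @ x
        x_new /= np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new / x_new.sum()   # normalize so centralities sum to 1

# Hypothetical 5-player network: player 2 is linked to everyone (a hub).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 1],
              [0, 0, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)
print(eigenvector_centrality(A))   # the hub receives the largest centrality
```

In this toy example the best-connected player receives the highest centrality, which is the sense in which equilibrium payoff dispersion tracks network position.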
As an application, we analyze the special case of seller-buyer networks, showing how our framework may be useful for analyzing price dispersion as a function of sellers and buyers' network positions.
Finally, in Chapter 3 we study the problem of price competition and free entry in networked markets subject to congestion effects. In many environments, such as communication networks in which network flows are allocated, or transportation networks in which traffic is directed through the underlying road architecture, congestion plays an important role. In particular, we consider a network with multiple origins and a common destination node, where each link is owned by a firm that sets prices in order to maximize profits, whereas users want to minimize the total cost they face, which is given by the congestion cost plus the prices set by firms. In this environment, we introduce the notion of Markovian traffic equilibrium to establish the existence and uniqueness of a pure strategy price equilibrium, without assuming that the demand functions are concave or imposing particular functional forms for the latency functions. We derive explicit conditions to guarantee existence and uniqueness of equilibria. Given this existence and uniqueness result, we apply our framework to study entry decisions and welfare, and establish that in congested markets with free entry, the number of firms exceeds the social optimum.
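Schematically, and using hypothetical notation rather than the thesis's own, the setup can be summarized as follows, with each link \(e\) carrying flow \(x_e\), charging price \(p_e\), and imposing a congestion (latency) cost \(\ell_e(x_e)\):

\[
\text{cost faced by a user on link } e:\quad \ell_e(x_e) + p_e,
\qquad
\text{profit of the firm owning link } e:\quad \pi_e(p) = p_e\, x_e(p),
\]

where \(x_e(p)\) is the equilibrium flow induced by the whole price vector \(p\) through the Markovian traffic equilibrium; existence and uniqueness of the pure-strategy price equilibrium are then established without concavity assumptions on demand or specific functional forms for \(\ell_e\).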
Abstract:
In noncooperative cost sharing games, individually strategic agents choose resources based on how the welfare (cost or revenue) generated at each resource (which depends on the set of agents that choose the resource) is distributed. The focus is on finding distribution rules that lead to stable allocations, which is formalized by the concept of Nash equilibrium, e.g., Shapley value (budget-balanced) and marginal contribution (not budget-balanced) rules.
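For concreteness, the two canonical rules named above can be written in their standard form (with \(W\) the local welfare function at a resource and \(S \subseteq N\) the set of agents choosing that resource):

\[
\phi_i^{\mathrm{SV}}(W,S) = \sum_{T \subseteq S \setminus \{i\}} \frac{|T|!\,\bigl(|S|-|T|-1\bigr)!}{|S|!}\,\bigl[W(T \cup \{i\}) - W(T)\bigr],
\qquad
\phi_i^{\mathrm{MC}}(W,S) = W(S) - W(S \setminus \{i\}),
\]

where the Shapley value is budget-balanced, \(\sum_{i \in S} \phi_i^{\mathrm{SV}}(W,S) = W(S)\), while the marginal-contribution rule in general is not.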
Recent work that seeks to characterize the space of all such rules shows that the only budget-balanced distribution rules that guarantee equilibrium existence in all welfare sharing games are generalized weighted Shapley values (GWSVs), by exhibiting a specific 'worst-case' welfare function which requires that GWSV rules be used. Our work provides an exact characterization of the space of distribution rules (not necessarily budget-balanced) for any fixed local welfare functions, for a general class of scalable and separable games with well-known applications, e.g., facility location, routing, network formation, and coverage games.
We show that all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to GWSV rules on some 'ground' welfare functions. Therefore, it is neither the existence of some worst-case welfare function, nor the restriction of budget-balance, which limits the design to GWSVs. Also, in order to guarantee equilibrium existence, it is necessary to work within the class of potential games, since GWSVs result in (weighted) potential games.
We also provide an alternative characterization: all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to generalized weighted marginal contribution (GWMC) rules on some 'ground' welfare functions. This result is due to a deeper fundamental connection between Shapley values and marginal contributions that our proofs expose: they are equivalent given a transformation connecting their ground welfare functions. (This connection leads to novel closed-form expressions for the GWSV potential function.) Since GWMCs are more tractable than GWSVs, a designer can trade off budget balance against computational tractability in deciding which rule to implement.
Abstract:
Despite the complexity of biological networks, we find that certain common architectures govern network structures. These architectures impose fundamental constraints on system performance and create tradeoffs that the system must balance in the face of uncertainty in the environment. This means that while a system may be optimized for a specific function through evolution, the optimal achievable state must follow these constraints. One such constraining architecture is autocatalysis, as seen in many biological networks including glycolysis and ribosomal protein synthesis. Using a minimal model, we show that ATP autocatalysis in glycolysis imposes stability and performance constraints and that the experimentally well-studied glycolytic oscillations are in fact a consequence of a tradeoff between error minimization and stability. We also show that additional complexity in the network results in increased robustness. Ribosome synthesis is also autocatalytic, since ribosomes must be used to make more ribosomal proteins. When ribosomes have higher protein content, the autocatalysis is increased. We show that this autocatalysis destabilizes the system, slows down response, and also constrains the system’s performance. On a larger scale, transcriptional regulation of whole organisms also follows architectural constraints, and this can be seen in the differences between bacterial and yeast transcription networks. We show that the degree distribution of the bacterial transcription network follows a power law, while that of the yeast network follows an exponential distribution. We then explore the evolutionary models that have previously been proposed and show that neither the preferential linking model nor the duplication-divergence model of network evolution generates the power-law, hierarchical structure found in bacteria. However, in real biological systems, the generation of new nodes occurs through both duplication and horizontal gene transfer, and we show that a biologically reasonable combination of the two mechanisms generates the desired network.
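The qualitative distinction between the two tail behaviors can be illustrated with a toy growth model (a sketch for intuition, not the models or data analyzed in the thesis): attaching new nodes preferentially by degree produces a heavy, power-law-like tail, while attaching them uniformly at random produces an exponential-like tail.

```python
import random
from collections import Counter

def grow_network(n_nodes, preferential=True, m=2, seed=0):
    """Grow a network by adding nodes one at a time, each with m links.

    preferential=True: targets chosen with probability proportional to
    degree (heavy, power-law-like tail). preferential=False: targets
    chosen uniformly at random (exponential-like tail).
    """
    rng = random.Random(seed)
    degrees = [m] * (m + 1)                  # small complete core
    pool = list(range(m + 1)) * m            # node ids repeated by degree
    for new in range(m + 1, n_nodes):
        if preferential:
            chosen = set()
            while len(chosen) < m:
                chosen.add(rng.choice(pool))  # degree-weighted sampling
        else:
            chosen = set(rng.sample(range(new), m))
        degrees.append(0)
        for t in chosen:
            degrees[t] += 1
            degrees[new] += 1
            pool.extend([t, new])
    return degrees

for label, pref in [("preferential (power-law tail)", True),
                    ("uniform (exponential tail)", False)]:
    deg = grow_network(20000, preferential=pref)
    hist = Counter(deg)
    frac_hubs = sum(v for k, v in hist.items() if k >= 50) / len(deg)
    print(f"{label}: max degree = {max(deg)}, "
          f"fraction of nodes with degree >= 50 = {frac_hubs:.4f}")
```

Running this shows large hubs only under preferential attachment, which is the signature separating the bacterial (power-law) from the yeast (exponential) degree distributions described above.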
Abstract:
Home to hundreds of millions of souls and land of excessiveness, the Himalaya is also the locus of a unique seismicity whose scope and peculiarities still remain to this day somewhat mysterious. Having claimed the lives of kings, or turned ancient timeworn cities into heaps of rubble and ruins, earthquakes eerily inhabit Nepalese folk tales with the fatalistic message that nothing lasts forever. From a scientific point of view as much as from a human perspective, solving the mysteries of Himalayan seismicity thus represents a challenge of prime importance.

Documenting geodetic strain across the Nepal Himalaya with various GPS and leveling data, we show that unlike other subduction zones that exhibit a heterogeneous and patchy coupling pattern along strike, the last hundred kilometers of the Main Himalayan Thrust fault, or MHT, appear to be uniformly locked, devoid of any of the “creeping barriers” that traditionally ward off the propagation of large events. Since the approximately 20 mm/yr of convergence reckoned across the Himalaya matches previously established estimates of the secular deformation at the front of the arc, the slip accumulated at depth has to somehow propagate elastically all the way to the surface at some point. And yet, neither large events from the past nor currently recorded microseismicity come close to compensating for the massive moment deficit that quietly builds up under the giant mountains. Along with this large unbalanced moment deficit, the uncommonly homogeneous coupling pattern on the MHT raises the question of whether or not the locked portion of the MHT can rupture all at once in a giant earthquake. Unequivocally answering this question appears contingent on the still elusive estimate of the magnitude of the largest possible earthquake in the Himalaya, and requires tight constraints on local fault properties.

What makes the Himalaya enigmatic also makes it the potential source of an incredible wealth of information, and we exploit some of the oddities of Himalayan seismicity in an effort to improve the understanding of earthquake physics and decipher the properties of the MHT. Thanks to the Himalaya, the Indo-Gangetic plain is deluged each year under a tremendous amount of water during the annual summer monsoon, which collects and bears down on the Indian plate enough to pull it slightly away from the Eurasian plate, temporarily relieving a small portion of the stress mounting on the MHT. As the rainwater evaporates in the dry winter season, the plate rebounds and tension increases back on the fault. Interestingly, the mild waggle of stress induced by the monsoon rains is about the same size as that from solid-Earth tides, which gently tug at the planet's solid layers; but whereas changes in earthquake frequency correspond with the annually occurring monsoon, there is no such correlation with Earth tides, which oscillate back and forth twice a day. We therefore investigate the general response of the creeping and seismogenic parts of the MHT to periodic stresses in order to link these observations to physical parameters. First, the response of the creeping part of the MHT is analyzed with a simple spring-and-slider system bearing rate-strengthening rheology, and we show that at the transition with the locked zone, where the friction becomes nearly velocity neutral, the response of the slip rate may be amplified at some periods, whose values are analytically related to the physical parameters of the problem.
Such predictions therefore hold the potential of constraining fault properties on the MHT, but they still await observational counterparts to be applied, as nothing indicates that the variations of seismicity rate on the locked part of the MHT are the direct expression of variations of the slip rate on its creeping part, and no variations of the slip rate have been singled out from the GPS measurements to this day. When shifting to the locked, seismogenic part of the MHT, spring-and-slider models with rate-weakening rheology are insufficient to explain the contrasting responses of the seismicity to the periodic loads that tides and the monsoon both place on the MHT. Instead, we resort to numerical simulations using the Boundary Integral CYCLes of Earthquakes algorithm and examine the response of a 2D finite fault embedded with a rate-weakening patch to harmonic stress perturbations of various periods. We show that such simulations are able to reproduce results consistent with a gradual amplification of sensitivity as the perturbing period gets larger, up to a critical period corresponding to the characteristic time of evolution of the seismicity in response to a step-like perturbation of stress. This increase of sensitivity was not reproduced by simple 1D spring-slider systems, probably because of the complexity of the nucleation process, reproduced only by 2D fault models. When the nucleation zone is close to its critical unstable size, its growth becomes highly sensitive to any external perturbation, and the timing of the resulting events may therefore be strongly affected. A fully analytical framework has yet to be developed, and further work is needed to fully describe the behavior of the fault in terms of physical parameters, which will likely provide the keys to deducing constitutive properties of the MHT from seismological observations.
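For orientation, the standard rate-and-state spring-slider system commonly used in this kind of analysis (a generic form; the thesis's exact formulation and parameter choices may differ) is

\[
\tau = \sigma\!\left[\mu_0 + a\ln\frac{V}{V_0} + b\ln\frac{V_0\,\theta}{D_c}\right],
\qquad
\dot\theta = 1 - \frac{V\theta}{D_c},
\qquad
\dot\tau = k\,\bigl(V_{\mathrm{load}} - V\bigr),
\]

where \(V\) is the slip rate, \(\theta\) the state variable, \(D_c\) the characteristic slip distance, and \(k\) the loading stiffness; \(a-b>0\) gives the rate-strengthening (creeping) regime and \(a-b<0\) the rate-weakening (seismogenic) regime, and periodic loads such as tides or the monsoon enter as a harmonic term added to the loading.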
Abstract:
Assembling a nervous system requires exquisite specificity in the construction of neuronal connectivity. One method by which such specificity is implemented is the presence of chemical cues within the tissues, differentiating one region from another, and the presence of receptors for those cues on the surface of neurons and their axons that are navigating within this cellular environment.
Connections from one part of the nervous system to another often take the form of a topographic mapping. One widely studied model system that involves such a mapping is the vertebrate retinotectal projection: the set of connections between the eye and the optic tectum of the midbrain, which is the primary visual center in non-mammals and is homologous to the superior colliculus in mammals. In this projection the two-dimensional surface of the retina is mapped smoothly onto the two-dimensional surface of the tectum, such that light from neighboring points in visual space excites neighboring cells in the brain. This mapping is implemented at least in part via differential chemical cues in different regions of the tectum.
The Eph family of receptor tyrosine kinases and their cell-surface ligands, the ephrins, have been implicated in a wide variety of processes, generally involving cellular movement in response to extracellular cues. In particular, they possess expression patterns (complementary gradients of receptor in retina and ligand in tectum) and in vitro and in vivo activities and phenotypes (repulsive guidance of axons and defective mapping in mutants, respectively) consistent with the long-sought retinotectal chemical mapping cues.
The tadpole of Xenopus laevis, the South African clawed frog, is advantageous for in vivo retinotectal studies because of its transparency and manipulability. However, neither the expression patterns nor the retinotectal roles of these proteins have been well characterized in this system. We report here comprehensive descriptions in swimming stage tadpoles of the messenger RNA expression patterns of eleven known Xenopus Eph and ephrin genes, including xephrin-A3, which is novel, and xEphB2, whose expression pattern has not previously been published in detail. We also report the results of in vivo protein injection perturbation studies on Xenopus retinotectal topography, which were negative, and of in vitro axonal guidance assays, which suggest a previously unrecognized attractive activity of ephrins at low concentrations on retinal ganglion cell axons. This raises the possibility that these axons find their correct targets in part by seeking out a preferred concentration of ligands appropriate to their individual receptor expression levels, rather than by being repelled to greater or lesser degrees by the ephrins but attracted by some as-yet-unknown cue(s).
Abstract:
A comprehensive study was made of the flocculation of dispersed E. coli bacterial cells by the cationic polymer polyethyleneimine (PEI). The three objectives of this study were to determine the primary mechanism involved in the flocculation of a colloid with an oppositely charged polymer, to determine quantitative correlations between four commonly-used measurements of the extent of flocculation, and to record the effect of varying selected system parameters on the degree of flocculation. The quantitative relationships derived for the four measurements of the extent of flocculation should be of direct assistance to the sanitary engineer in evaluating the effectiveness of specific coagulation processes.
A review of prior statistical mechanical treatments of adsorbed polymer configuration revealed that at low degrees of surface site coverage, an oppositely-charged polymer molecule is strongly adsorbed to the colloidal surface, with only short loops or end sequences extending into the solution phase. Even for high molecular weight PEI species, these extensions from the surface are theorized to be less than 50 Å in length. Although the radii of gyration of the five PEI species investigated were found to be large enough to form interparticle bridges, the low surface site coverage at optimum flocculation doses indicates that the predominant mechanism of flocculation is adsorption coagulation.
The effectiveness of the high-molecular weight PEI species in producing rapid flocculation at small doses is attributed to the formation of a charge mosaic on the oppositely-charged E. coli surfaces. The large adsorbed PEI molecules not only neutralize the surface charge at the adsorption sites, but also cause charge reversal with excess cationic segments. The alignment of these positive surface patches with negative patches on approaching cells results in strong electrostatic attraction in addition to a reduction of the double-layer interaction energies. The comparative ineffectiveness of low-molecular weight PEI species in producing E. coli flocculation is caused by the size of the individual molecules, which is insufficient to both neutralize and reverse the negative E. coli surface charge. Consequently, coagulation produced by low molecular weight species is attributed solely to the reduction of double-layer interaction energies via adsorption.
Electrophoretic mobility experiments supported the above conclusions, since only the high-molecular weight species were able to reverse the mobility of the E. coli cells. In addition, electron microscope examination of the seam of agglutination between E. coli cells flocculated by PEI revealed tightly-bound cells, with intercellular separation distances of less than 100-200 Å in most instances. This intercellular separation is partially due to cell shrinkage during preparation of the electron micrographs.
The extent of flocculation was measured as a function of PEI molecular weight, PEI dose, and the intensity of reactor chamber mixing. Neither the intensity of mixing, within the common treatment practice limits, nor the time of mixing for up to four hours appeared to play any significant role in either the size or number of E. coli aggregates formed. The extent of flocculation was highly molecular weight dependent: the high-molecular-weight PEI species produced the larger aggregates, the greater turbidity reductions, and the higher filtration flow rates. The PEI dose required for optimum flocculation decreased as the species molecular weight increased. At large doses of high-molecular-weight species, redispersion of the macroflocs occurred, caused by excess adsorption of cationic molecules. The excess adsorption reversed the surface charge on the E. coli cells, as recorded by electrophoretic mobility measurements.
Successful quantitative comparisons were made between changes in suspension turbidity with flocculation and the corresponding changes in aggregate size distribution. E. coli aggregates were treated as coalesced spheres, with Mie scattering coefficients determined for spheres in the anomalous diffraction regime. Good quantitative comparisons were also found to exist between the reduction in refiltration time and the reduction of the total colloid surface area caused by flocculation. As with the turbidity measurements, a coalesced sphere model was used since the equivalent spherical volume is the only information available from the Coulter particle counter. However, the coalesced sphere model was not applicable to electrophoretic mobility measurements. The aggregates produced at each PEI dose moved at approximately the same velocity, almost independently of particle size.
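For reference, the standard form of the extinction efficiency for a sphere in the anomalous diffraction regime (van de Hulst's approximation, quoted here for orientation; the exact coefficients used in the thesis may differ) is

\[
Q_{\mathrm{ext}}(\rho) = 2 - \frac{4}{\rho}\sin\rho + \frac{4}{\rho^{2}}\bigl(1 - \cos\rho\bigr),
\qquad
\rho = 2x\,(m-1), \quad x = \frac{2\pi r}{\lambda},
\]

where \(r\) is the equivalent sphere radius, \(\lambda\) the wavelength in the medium, and \(m\) the relative refractive index; the suspension turbidity then follows by summing \(\pi r^{2} Q_{\mathrm{ext}}\) over the measured size distribution of coalesced spheres.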
PEI was found to be an effective flocculant of E. coli cells at weight ratios of 1 mg PEI per 100 mg E. coli. While PEI itself is toxic to E. coli at these levels, similar cationic polymers could be effectively applied in water and wastewater treatment facilities to enhance sedimentation and filtration characteristics.
Abstract:
This thesis describes the use of multiply-substituted stable isotopologues of carbonate minerals and methane gas to better understand how these environmentally significant minerals and gases form and are modified throughout their geological histories. Stable isotopes have a long tradition in earth science as a tool for providing quantitative constraints on how molecules, in or on the earth, formed in both the present and the past. Nearly all studies, until recently, have only measured the bulk concentrations of stable isotopes in a phase or species. However, the abundance of various isotopologues within a phase, for example the concentration of isotopologues with multiple rare isotopes (multiply substituted or 'clumped' isotopologues), also carries potentially useful information. Specifically, the abundances of clumped isotopologues in an equilibrated system are a function of temperature, and thus knowledge of their abundances can be used to calculate a sample’s formation temperature. In this thesis, measurements of clumped isotopologues are made on both carbonate-bearing minerals and methane gas in order to better constrain the environmental and geological histories of various samples.
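The central quantity throughout, stated here in its standard general form, is the clumped-isotope anomaly of an isotopologue \(i\), the deviation of its measured abundance from the abundance expected for a purely random (stochastic) distribution of the rare isotopes:

\[
\Delta_i = \left(\frac{R_i^{\text{measured}}}{R_i^{\text{stochastic}}} - 1\right) \times 1000\ \text{(per mil)},
\]

where \(R_i\) is the abundance of isotopologue \(i\) relative to the isotopologue with no rare-isotope substitutions; at internal equilibrium \(\Delta_i\) decreases with increasing temperature, which is what allows a measured abundance to be converted into an apparent formation temperature.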
Clumped-isotope-based measurements of ancient carbonate-bearing minerals, including apatites, have opened up paleotemperature reconstructions to a variety of systems and time periods. However, a critical issue when using clumped-isotope based measurements to reconstruct ancient mineral formation temperatures is whether the samples being measured have faithfully recorded their original internal isotopic distributions. These original distributions can be altered, for example, by diffusion of atoms in the mineral lattice or through diagenetic reactions. Understanding these processes quantitatively is critical for the use of clumped isotopes to reconstruct past temperatures, quantify diagenesis, and calculate time-temperature burial histories of carbonate minerals. In order to help orient this part of the thesis, Chapter 2 provides a broad overview and history of clumped-isotope based measurements in carbonate minerals.
In Chapter 3, the effects of elevated temperatures on a sample’s clumped-isotope composition are probed in both natural and experimental apatites (which contain structural carbonate groups) and calcites. A quantitative model is created that is calibrated by the experiments and consistent with the natural samples. The model allows for calculations of the change in a sample’s clumped isotope abundances as a function of any time-temperature history.
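Such models are typically of first-order Arrhenius form (a generic sketch of the kind of formulation involved; the thesis's calibrated model may be more elaborate):

\[
\frac{d\Delta}{dt} = -\,k\bigl(T(t)\bigr)\,\Bigl[\Delta - \Delta_{\mathrm{eq}}\bigl(T(t)\bigr)\Bigr],
\qquad
k(T) = k_0\, e^{-E_a / RT},
\]

so that integrating along any prescribed time-temperature path \(T(t)\) predicts how far a sample's clumped-isotope composition has relaxed from its original value toward the equilibrium value at the ambient temperature.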
In Chapter 4, the effects of diagenesis on the stable isotopic compositions of apatites are explored on samples from a variety of sedimentary phosphorite deposits. Clumped isotope temperatures and bulk isotopic measurements from carbonate and phosphate groups are compared for all samples. These results demonstrate that samples have experienced isotopic exchange of oxygen atoms in both the carbonate and phosphate groups. A kinetic model is developed that allows for the calculation of the amount of diagenesis each sample has experienced and yields insight into the physical and chemical processes of diagenesis.
The thesis then switches gears and turns its attention to clumped isotope measurements of methane. Methane is a critical greenhouse gas, energy resource, and microbial metabolic product and substrate. Despite its importance both environmentally and economically, much about methane’s formational mechanisms and the relative sources of methane to various environments remains poorly constrained. In order to add new constraints to our understanding of the formation of methane in nature, I describe the development and application of methane clumped isotope measurements to environmental deposits of methane. To help orient the reader, a brief overview of the formation of methane in both high and low temperature settings is given in Chapter 5.
In Chapter 6, a method for the measurement of methane clumped isotopologues via mass spectrometry is described. This chapter demonstrates that the measurement is precise and accurate. Additionally, the measurement is calibrated experimentally such that measurements of methane clumped isotope abundances can be converted into equivalent formational temperatures. This study represents the first time that methane clumped isotope abundances have been measured at useful precisions.
In Chapter 7, the methane clumped isotope method is applied to natural samples from a variety of settings. These settings include thermogenic gases formed and reservoired in shales, migrated thermogenic gases, biogenic gases, mixed biogenic and thermogenic gas deposits, and experimentally generated gases. In all cases, calculated clumped isotope temperatures make geological sense as formation temperatures or as mixtures of high and low temperature gases. Based on these observations, we propose that the clumped isotope temperature of an unmixed gas represents its formation temperature; this was neither an obvious nor an expected result and has important implications for how methane forms in nature. Additionally, these results demonstrate that methane clumped isotope compositions provide valuable additional constraints for studying natural methane deposits.
Abstract:
While some of the deepest results in nature are those that give explicit bounds between important physical quantities, some of the most intriguing and celebrated of such bounds come from fields where there is still a great deal of disagreement and confusion regarding even the most fundamental aspects of the theories. For example, in quantum mechanics, there is still no complete consensus as to whether the limitations associated with Heisenberg's Uncertainty Principle derive from an inherent randomness in physics, or rather from limitations in the measurement process itself, resulting from phenomena like back action. Likewise, the second law of thermodynamics makes a statement regarding the increase in entropy of closed systems, yet the theory itself has neither a universally-accepted definition of equilibrium, nor an adequate explanation of how a system with underlying microscopically Hamiltonian dynamics (reversible) settles into a fixed distribution.
Motivated by these physical theories, and perhaps their inconsistencies, in this thesis we use dynamical systems theory to investigate how the very simplest of systems, even with no physical constraints, are characterized by bounds that give limits to the ability to make measurements on them. Using an existing interpretation, we start by examining how dissipative systems can be viewed as high-dimensional lossless systems, and how taking this view necessarily implies the existence of a noise process that results from the uncertainty in the initial system state. This fluctuation-dissipation result plays a central role in a measurement model that we examine, in particular describing how noise is inevitably injected into a system during a measurement, noise that can be viewed as originating either from the randomness of the many degrees of freedom of the measurement device, or of the environment. This noise constitutes one component of measurement back action, and ultimately imposes limits on measurement uncertainty. Depending on the assumptions we make about active devices, and their limitations, this back action can be offset to varying degrees via control. It turns out that using active devices to reduce measurement back action leads to estimation problems that have non-zero uncertainty lower bounds, the most interesting of which arise when the observed system is lossless. One such lower bound, a main contribution of this work, can be viewed as a classical version of a Heisenberg uncertainty relation between the system's position and momentum. We finally also revisit the murky question of how macroscopic dissipation appears from lossless dynamics, and propose alternative approaches for framing the question using existing systematic methods of model reduction.
Abstract:
The Low Energy Telescopes on the Voyager spacecraft are used to measure the elemental composition (2 ≤ Z ≤ 28) and energy spectra (5 to 15 MeV/nucleon) of solar energetic particles (SEPs) in seven large flare events. Four flare events are selected which have SEP abundance ratios approximately independent of energy/nucleon. The abundances for these events are compared from flare to flare and are compared to solar abundances from other sources: spectroscopy of the photosphere and corona, and solar wind measurements.
The selected SEP composition results may be described by an average composition plus a systematic flare-to-flare deviation about the average. For each of the four events, the ratios of the SEP abundances to the four-flare average SEP abundances are approximately monotonic functions of nuclear charge Z in the range 6 ≤ Z ≤ 28. An exception to this Z-dependent trend occurs for He, whose abundance relative to Si is nearly the same in all four events.
The four-flare average SEP composition is significantly different from the solar composition determined by photospheric spectroscopy: The elements C, N and O are depleted in SEPs by a factor of about five relative to the elements Na, Mg, Al, Si, Ca, Cr, Fe and Ni. For some elemental abundance ratios (e.g. Mg/O), the difference between SEP and photospheric results is persistent from flare to flare and is apparently not due to a systematic difference in SEP energy/nucleon spectra between the elements, nor to propagation effects which would result in a time-dependent abundance ratio in individual flare events.
The four-flare average SEP composition is in agreement with solar wind abundance results and with a number of recent coronal abundance measurements. The evidence for a common depletion of oxygen in SEPs, the corona and the solar wind relative to the photosphere suggests that the SEPs originate in the corona and that both the SEPs and solar wind sample a coronal composition which is significantly and persistently different from that of the photosphere.
Abstract:
The access of 1.2-40 MeV protons and 0.4-1.0 MeV electrons from interplanetary space to the polar cap regions has been investigated with an experiment on board a low altitude, polar orbiting satellite (OGO-4).
A total of 333 quiet time observations of the electron polar cap boundary give a mapping of the boundary between open and closed geomagnetic field lines which is an order of magnitude more comprehensive than previously available.
Persistent features (north/south asymmetries) in the polar cap proton flux, which are established as normal during solar proton events, are shown to be associated with different flux levels on open geomagnetic field lines than on closed field lines. The pole in which these persistent features are observed is strongly correlated to the sector structure of the interplanetary magnetic field and uncorrelated to the north/south component of this field. The features were observed in the north (south) pole during a negative (positive) sector 91% of the time, while the solar field had a southward component only 54% of the time. In addition, changes in the north/south component have no observable effect on the persistent features.
Observations of events associated with co-rotating regions of enhanced proton flux in interplanetary space are used to establish the characteristics of the 1.2 - 40 MeV proton access windows: the access window for low polar latitudes is near the earth, that for one high polar latitude region is ~250 R⊕ behind the earth, while that for the other high polar latitude region is ~1750 R⊕ behind the earth. All of the access windows are of approximately the same extent (~120 R⊕). The following phenomena contribute to persistent polar cap features: limited interplanetary regions of enhanced flux propagating past the earth, radial gradients in the interplanetary flux, and anisotropies in the interplanetary flux.
These results are compared to the particle access predictions of the distant geomagnetic tail configurations proposed by Michel and Dessler, Dungey, and Frank. The data are consistent with neither the model of Michel and Dessler nor that of Dungey. The model of Frank can yield a consistent access window configuration provided the following constraints are satisfied: the merging rate for open field lines at one polar neutral point must be ~5 times that at the other polar neutral point, related to the solar magnetic field configuration in a consistent fashion; the migration time for open field lines to move across the polar cap region must be the same in both poles; and the open field line merging rate at one of the polar neutral points must be at least as large as that required for almost all the open field lines to have merged in a time on the order of one hour. The possibility of satisfying these constraints is investigated in some detail.
The role played by interplanetary anisotropies in the observation of persistent polar cap features is discussed. Special emphasis is given to the problem of non-adiabatic particle entry through regions where the magnetic field is changing direction. The degree to which such particle entry can be assumed to be nearly adiabatic is related to the particle rigidity, the angle through which the field turns, and the rate at which the field changes direction; this relationship is established for the case of polar cap observations.
Abstract:
Part I
The infection of E. coli by ΦX174 at 15°C is abortive; the cells are killed by the infection but neither mature phage nor SS (single-stranded) DNA is synthesized. Parental RF (replicative form) is formed and subsequently replicated at 15°C. The RF made at 15°C shows normal infectivity and full competence to act as a precursor to progeny SS DNA after an increase in temperature to 37°C. These investigations suggest that all of the proteins required for SS DNA synthesis and phage maturation are present in the abortive infection at 15°C.
Three possible causes are suggested for the abortive infection at 15°C: (a) A virus-coded protein whose role is essential to the infection is made at 15°C and assumes its native conformation, but its rate of activity is too low at this temperature to sustain the infection process. (b) Virus maturation may involve the formation of a DNA-protein complex and conformational changes which have an energy threshold infrequently reached at 15°C. (c) A host-coded protein present in uninfected cells, whose activity is essential to the infection at all temperatures but not to the host at 15°C, is inactive at 15°C. A hypothesis of this type is offered which proposes that the temperature-limiting factor in SS DNA synthesis in vivo may reflect a temperature-dependent property of the host DNA polymerase.
Part II
Three distinct stages are demonstrated in the process whereby ΦX174 invades its host: (1) Attachment: The phage attach to the cell in a manner that does not irreversibly alter the phage particle and which exhibits "single-hit" kinetics. The total charge on the phage particle is demonstrated to be important in determining the rate at which stable attachment is effected. The proteins specified by ΦX cistrons II, III and VII play roles, which may be indirect, in the attachment reaction. (2) Eclipse: The attached phage undergo a conformational change. Some of the altered phage particles spontaneously detach from the cell (in a non-infective form) while the remainder are more tightly bound to the cell. The altered phage particles detached (spontaneously or chemically) from such complexes have at least 40% of their DNA extruded from the phage coat. It is proposed that this particle is, or derives from, a direct intermediate in the penetration of the viral DNA.
The kinetics for the eclipse of attached phage particles are first-order with respect to phage concentration and biphasic; about 85% of the phage eclipse at one rate (k = 0.86 min^-1) and the remainder do so at a distinctly lesser rate (k = 0.21 min^-1).
The eclipse event is very temperature-dependent and has the relatively high Arrhenius activation energy of 36.6 kcal/mole, indicating the cooperative nature of the process. The temperature threshold for eclipse is 17 to 18°C.
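Reading the quoted percentages and rate constants as the two components of a first-order decay (a schematic restatement, with t in minutes), the un-eclipsed fraction of attached phage and the temperature dependence of the rates take the form

\[
f(t) \approx 0.85\, e^{-0.86\,t} + 0.15\, e^{-0.21\,t},
\qquad
k(T) = A\, e^{-E_a / RT}, \quad E_a \approx 36.6\ \mathrm{kcal/mol}.
\]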
At present no specific ΦX cistron is identified as affecting the eclipse process. (3) DNA penetration: A fraction of the attached, eclipsed phage particles corresponding in number to the plaque-forming units complete DNA penetration. The penetrated DNA is found in the cell as RF, and the empty phage protein coat remains firmly attached to the exterior of the cell. This step is inhibited by prior irradiation of the phage with relatively high doses of UV light and is insensitive to the presence of KCN and NaN3. Temporally excluded superinfecting phages do not achieve DNA penetration.
Both eclipsed phage particles and empty phage protein coats may be dissociated from infected cells; some of their properties are described.
Abstract:
The feedback coding problem for Gaussian systems in which the noise is neither white nor statistically independent between channels is formulated in terms of arbitrary linear codes at the transmitter and at the receiver. This new formulation is used to determine a number of feedback communication systems. In particular, the optimum linear code that satisfies an average power constraint on the transmitted signals is derived for a system with noiseless feedback and forward noise of arbitrary covariance. The noisy feedback problem is considered and signal sets for the forward and feedback channels are obtained with an average power constraint on each. The general formulation and results are valid for non-Gaussian systems in which the second order statistics are known, the results being applicable to the determination of error bounds via the Chebychev inequality.
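The bound in question, in its standard form, requires only second-order statistics: for any random variable \(X\) with mean \(\mu\) and variance \(\sigma^2\),

\[
\Pr\bigl(\lvert X - \mu \rvert \ge \varepsilon\bigr) \le \frac{\sigma^{2}}{\varepsilon^{2}},
\]

so the error covariance produced by a given pair of linear codes is sufficient to bound the probability that the decoding error exceeds any threshold, even when the noise is not Gaussian.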