19 results for Complexity of Distribution in CaltechTHESIS
Abstract:
Complexity in the earthquake rupture process can result from many factors. This study investigates the origin of such complexity by examining several recent, large earthquakes in detail. In each case the local tectonic environment plays an important role in understanding the source of the complexity.
Several large shallow earthquakes (Ms > 7.0) along the Middle American Trench have similarities and differences between them that may lead to a better understanding of fracture and subduction processes. They are predominantly thrust events consistent with the known subduction of the Cocos plate beneath N. America. Two events occurring along this subduction zone close to triple junctions show considerable complexity. This may be attributable to a more heterogeneous stress environment in these regions and as such has implications for other subduction zone boundaries.
An event which looks complex but is actually rather simple is the 1978 Bermuda earthquake (Ms ~ 6). It is located predominantly in the mantle. Its mechanism is one of pure thrust faulting with a strike N 20°W and dip 42°NE. Its apparent complexity is caused by local crustal structure. This is an important event in terms of understanding and estimating seismic hazard on the eastern seaboard of N. America.
A study of several large strike-slip continental earthquakes identifies characteristics that are common to them and may be useful in determining what to expect from the next great earthquake on the San Andreas fault. The events are the 1976 Guatemala earthquake on the Motagua fault and two events on the Anatolian fault in Turkey (the 1967 Mudurnu Valley and 1976 E. Turkey events). An attempt to model the complex P-waveforms of these events results in good synthetic fits for the Guatemala and Mudurnu Valley events. However, the E. Turkey event proves to be too complex, as it may have associated thrust or normal faulting. Several individual sources occurring at intervals of between 5 and 20 seconds characterize the Guatemala and Mudurnu Valley events. The maximum size of an individual source appears to be bounded at about 5 × 10^26 dyne-cm. A detailed source study including directivity is performed on the Guatemala event. The source time history of the Mudurnu Valley event illustrates its significance in modeling strong ground motion in the near field. At a station 20 km from the fault, the complex source time series of the 1967 event produces amplitudes a factor of 2.5 greater than those of a uniform model scaled to the same size.
Three large and important earthquakes demonstrate an important type of complexity --- multiple-fault complexity. The first, the 1976 Philippine earthquake, an oblique thrust event, represents the first seismological evidence for a northeast dipping subduction zone beneath the island of Mindanao. A large event, following the mainshock by 12 hours, occurred outside the aftershock area and apparently resulted from motion on a subsidiary fault since the event had a strike-slip mechanism.
An aftershock of the great 1960 Chilean earthquake on June 6, 1960, proved to be an interesting discovery. It appears to be a large strike-slip event at the main rupture's southern boundary. It most likely occurred on the landward extension of the Chile Rise transform fault, in the subducting plate. The results for this event suggest that a small event triggered a series of slow events, with the whole sequence lasting longer than 1 hour. This is indeed a "slow earthquake".
Perhaps one of the most complex of events is the recent Tangshan, China event. It began as a large strike-slip event. Within several seconds of the mainshock it may have triggered thrust faulting to the south of the epicenter. There is no doubt, however, that it triggered a large oblique normal event to the northeast, 15 hours after the mainshock. This event certainly contributed to the great loss of life sustained as a result of the Tangshan earthquake sequence.
What has been learned from these studies has been applied to predict what one might expect from the next great earthquake on the San Andreas. The expectation from this study is that such an event would be a large complex event, not unlike, but perhaps larger than, the Guatemala or Mudurnu Valley events. That is to say, it will most likely consist of a series of individual events in sequence. It is also quite possible that the event could trigger associated faulting on neighboring fault systems such as those occurring in the Transverse Ranges. This has important bearing on the earthquake hazard estimation for the region.
Abstract:
Despite the complexity of biological networks, we find that certain common architectures govern network structures. These architectures impose fundamental constraints on system performance and create tradeoffs that the system must balance in the face of uncertainty in the environment. This means that while a system may be optimized for a specific function through evolution, the optimal achievable state must follow these constraints. One such constraining architecture is autocatalysis, as seen in many biological networks including glycolysis and ribosomal protein synthesis. Using a minimal model, we show that ATP autocatalysis in glycolysis imposes stability and performance constraints and that the experimentally well-studied glycolytic oscillations are in fact a consequence of a tradeoff between error minimization and stability. We also show that additional complexity in the network results in increased robustness. Ribosome synthesis is also autocatalytic, since ribosomes must be used to make more ribosomal proteins. When ribosomes have higher protein content, the autocatalysis is increased. We show that this autocatalysis destabilizes the system, slows down response, and also constrains the system's performance. On a larger scale, transcriptional regulation of whole organisms also follows architectural constraints, and this can be seen in the differences between bacterial and yeast transcription networks. We show that the degree distribution of the bacterial transcription network follows a power law while that of the yeast network follows an exponential distribution. We then explore the evolutionary models that have previously been proposed and show that neither the preferential linking model nor the duplication-divergence model of network evolution generates the power-law, hierarchical structure found in bacteria. However, in real biological systems, the generation of new nodes occurs through both duplication and horizontal gene transfer, and we show that a biologically reasonable combination of the two mechanisms generates the desired network.
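To make the last point concrete, the following minimal Python sketch (our illustration, not the thesis's calibrated model) grows a toy transcription network by mixing duplication-divergence steps with random-attachment steps that stand in for horizontal gene transfer, then tabulates the resulting out-degree distribution; the parameter values and seed network are assumptions chosen for brevity.

import random
from collections import Counter

def grow_network(n_final, p_dup=0.5, p_keep=0.4, seed=0):
    """Toy network growth mixing gene duplication with horizontal
    transfer (random attachment). Illustrative assumptions only."""
    random.seed(seed)
    targets = {0: {1}, 1: {2}, 2: {0}}  # node -> set of regulated targets
    while len(targets) < n_final:
        new = len(targets)
        if random.random() < p_dup:
            # duplication-divergence: copy a parent's edges, then randomly lose some
            parent = random.choice(list(targets))
            targets[new] = {t for t in targets[parent] if random.random() < p_keep}
        else:
            # horizontal transfer: the new gene wires to one random existing node
            targets[new] = {random.choice(list(targets))}
    return targets

def out_degree_distribution(targets):
    """Count how many nodes have each out-degree."""
    return dict(sorted(Counter(len(v) for v in targets.values()).items()))

print(out_degree_distribution(grow_network(2000)))

Varying p_dup between 0 and 1 interpolates between a pure duplication-divergence process and pure random attachment, which is the kind of comparison the abstract alludes to.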
Abstract:
Climate change is arguably the most critical issue facing our generation and the next. As we move towards a sustainable future, the grid is rapidly evolving with the integration of more and more renewable energy resources and the emergence of electric vehicles. In particular, large-scale adoption of residential and commercial solar photovoltaic (PV) plants is completely changing the traditional slowly varying, unidirectional power flow nature of distribution systems. A high share of intermittent renewables poses several technical challenges, including voltage and frequency control. But along with these challenges, renewable generators also bring with them millions of new DC-AC inverter controllers each year. These fast power electronic devices can provide an unprecedented opportunity to increase energy efficiency and improve power quality, if combined with well-designed inverter control algorithms. The main goal of this dissertation is to develop scalable power flow optimization and control methods that achieve system-wide efficiency, reliability, and robustness for the power distribution networks of the future with high penetration of distributed inverter-based renewable generators.
Proposed solutions to power flow control problems in the literature range from fully centralized to fully local ones. In this thesis, we will focus on the two ends of this spectrum. In the first half of this thesis (chapters 2 and 3), we seek optimal solutions to voltage control problems given a centralized architecture with complete information. These solutions are particularly important for better understanding the overall system behavior and can serve as a benchmark against which to compare the performance of other control methods. To this end, we first propose a branch flow model (BFM) for the analysis and optimization of radial and meshed networks. This model leads to a new approach to solving optimal power flow (OPF) problems using a two-step relaxation procedure, which has proven to be both reliable and computationally efficient in dealing with the non-convexity of power flow equations in radial and weakly meshed distribution networks. We will then apply the results to the fast-timescale inverter var control problem and evaluate the performance on real-world circuits in Southern California Edison's service territory.
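For orientation, here is a schematic statement of the relaxed branch flow equations in the form commonly used in this line of work (the notation is ours; shunt elements, voltage limits, and other engineering constraints are omitted):

\[
\begin{aligned}
v_j &= v_i - 2\,(r_{ij}P_{ij} + x_{ij}Q_{ij}) + (r_{ij}^2 + x_{ij}^2)\,\ell_{ij}, \\
P_{ij} - r_{ij}\ell_{ij} &= p_j + \sum_{k:\,j\to k} P_{jk}, \qquad
Q_{ij} - x_{ij}\ell_{ij} = q_j + \sum_{k:\,j\to k} Q_{jk}, \\
\ell_{ij} &\ge \frac{P_{ij}^2 + Q_{ij}^2}{v_i},
\end{aligned}
\]

where \(v\) is the squared voltage magnitude, \(\ell_{ij}\) the squared branch current magnitude, \((P_{ij},Q_{ij})\) the power sent from bus \(i\) to bus \(j\), and \((p_j,q_j)\) the net load at bus \(j\). The last inequality is the second-order cone relaxation of the exact relation \(\ell_{ij} v_i = P_{ij}^2 + Q_{ij}^2\); an OPF might, for example, minimize total losses \(\sum_{(i,j)} r_{ij}\ell_{ij}\) subject to these constraints.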
The second half (chapters 4 and 5), however, is dedicated to the study of local control approaches, as they are the only options available for immediate implementation on today's distribution networks, which lack sufficient monitoring and communication infrastructure. In particular, we will follow a reverse- and forward-engineering approach to study the recently proposed piecewise linear volt/var control curves. It is the aim of this dissertation to tackle some key problems in these two areas and to contribute by providing a rigorous theoretical basis for future work.
Abstract:
The evoked response, a signal present in the electroencephalogram when specific sense modalities are stimulated with brief sensory inputs, has not yet revealed as much about brain function as it apparently promised when first recorded in the late 1940s. One of the problems has been to record the responses at a large number of points on the surface of the head; thus, in order to achieve greater spatial resolution than previously attained, a 50-channel recording system was designed to monitor experiments with human visually evoked responses.
Conventional voltage versus time plots of the responses were found inadequate as a means of making qualitative studies of such a large data space. This problem was solved by creating a graphical display of the responses in the form of equipotential maps of the activity at successive instants during the complete response. In order to ascertain the necessary complexity of any models of the responses, factor analytic procedures were used to show that models characterized by only five or six independent parameters could adequately represent the variability in all recording channels.
One type of equivalent source for the responses which meets these specifications is the electrostatic dipole. Two different dipole models were studied: the dipole in a homogeneous sphere, and the dipole in a sphere composed of two spherical shells (of different conductivities) concentric with and enclosing a homogeneous sphere of a third conductivity. These models were used to determine nonlinear least squares fits of dipole parameters to a given potential distribution on the surface of a spherical approximation to the head. Numerous tests of the procedures were conducted with problems having known solutions. After these theoretical studies demonstrated the applicability of the technique, the models were used to determine inverse solutions for the evoked response potentials at various times throughout the responses. It was found that reliable estimates of the location and strength of cortical activity were obtained, and that the two models differed only slightly in their inverse solutions. These techniques enabled information flow in the brain, as indicated by the locations and strengths of active sites, to be followed throughout the evoked response.
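As a rough illustration of the inverse step described above, the Python sketch below fits the six parameters of a single dipole (location and moment) to surface potentials by nonlinear least squares. For brevity it uses the infinite homogeneous-medium dipole potential rather than the bounded-sphere or three-shell solutions used in the thesis, and the electrode layout, conductivity, noise level, and initial guess are all assumptions made for the example.

import numpy as np
from scipy.optimize import least_squares

def dipole_potential(params, electrodes, sigma=0.33):
    """Potential of a current dipole at the electrode positions, using the
    infinite homogeneous-medium formula (a simplification of the spherical
    head models described in the thesis)."""
    r0, p = params[:3], params[3:]
    d = electrodes - r0                      # vectors from dipole to electrodes (m)
    dist = np.linalg.norm(d, axis=1)
    return (d @ p) / (4 * np.pi * sigma * dist**3)

def fit_dipole(electrodes, measured):
    """Nonlinear least-squares fit of dipole location and moment (6 parameters)."""
    x0 = np.array([0.0, 0.0, 0.05, 0.0, 0.0, 1e-8])   # rough initial guess
    return least_squares(lambda q: dipole_potential(q, electrodes) - measured, x0)

rng = np.random.default_rng(0)
pts = rng.standard_normal((50, 3))
pts[:, 2] = np.abs(pts[:, 2])                          # keep sites on the upper hemisphere
electrodes = 0.09 * pts / np.linalg.norm(pts, axis=1, keepdims=True)  # 9 cm "head"
true = np.array([0.01, -0.02, 0.05, 1e-8, 0.0, 2e-8])  # hypothetical source
data = dipole_potential(true, electrodes) + 1e-9 * rng.standard_normal(50)
print(fit_dipole(electrodes, data).x[:3])              # recovered location (m)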
Abstract:
Demixing is the task of identifying multiple signals given only their sum and prior information about their structures. Examples of demixing problems include (i) separating a signal that is sparse with respect to one basis from a signal that is sparse with respect to a second basis; (ii) decomposing an observed matrix into low-rank and sparse components; and (iii) identifying a binary codeword with impulsive corruptions. This thesis describes and analyzes a convex optimization framework for solving an array of demixing problems.
Our framework includes a random orientation model for the constituent signals that ensures the structures are incoherent. This work introduces a summary parameter, the statistical dimension, that reflects the intrinsic complexity of a signal. The main result indicates that the difficulty of demixing under this random model depends only on the total complexity of the constituent signals involved: demixing succeeds with high probability when the sum of the complexities is less than the ambient dimension; otherwise, it fails with high probability.
The fact that a phase transition between success and failure occurs in demixing is a consequence of a new inequality in conic integral geometry. Roughly speaking, this inequality asserts that a convex cone behaves like a subspace whose dimension is equal to the statistical dimension of the cone. When combined with a geometric optimality condition for demixing, this inequality provides precise quantitative information about the phase transition, including the location and width of the transition region.
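In symbols (notation ours, with constants and the width of the transition region suppressed), the result can be summarized as follows, where \(\delta(\cdot)\) denotes the statistical dimension of a descent cone and \(Q\) is a uniformly random rotation of \(\mathbb{R}^d\):

\[
z_0 = x_0 + Q y_0, \qquad
\begin{cases}
\delta\bigl(\mathcal{D}(f, x_0)\bigr) + \delta\bigl(\mathcal{D}(g, y_0)\bigr) < d & \Longrightarrow \ \text{demixing succeeds with high probability,} \\
\delta\bigl(\mathcal{D}(f, x_0)\bigr) + \delta\bigl(\mathcal{D}(g, y_0)\bigr) > d & \Longrightarrow \ \text{demixing fails with high probability,}
\end{cases}
\]

where \(f\) and \(g\) are the convex complexity measures used in the recovery program (for example, the \(\ell_1\) norm for a sparse component or the nuclear norm for a low-rank component).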
Abstract:
Cdc48/p97 is an essential, highly abundant hexameric member of the AAA (ATPase associated with various cellular activities) family. It has been linked to a variety of processes throughout the cell but it is best known for its role in the ubiquitin proteasome pathway. In this system it is believed that Cdc48 behaves as a segregase, transducing the chemical energy of ATP hydrolysis into mechanical force to separate ubiquitin-conjugated proteins from their tightly-bound partners.
Current models posit that Cdc48 is linked to its substrates through a variety of adaptor proteins, including a family of seven proteins (13 in humans) that contain a Cdc48-binding UBX domain. Due to the complexity of the network of adaptor proteins for which it serves as the hub, Cdc48/p97 has the potential to exert a profound influence on the ubiquitin proteasome pathway. However, the number of known substrates of Cdc48/p97 remains relatively small, and smaller still is the number of substrates that have been linked to a specific UBX domain protein. Accordingly, the goal of this dissertation research has been to discover new substrates and better understand the functions of the Cdc48 network. With this objective in mind, we established a proteomic screen to assemble a catalog of candidate substrates/targets of the Ubx adaptor system.
Here we describe the implementation and optimization of a cutting-edge quantitative mass spectrometry method to measure relative changes in the Saccharomyces cerevisiae proteome. Utilizing this technology, and in order to better understand the breadth of function of Cdc48 and its adaptors, we then performed a global screen to identify accumulating ubiquitin conjugates in cdc48-3 and ubxΔ mutants. In this screen different ubx mutants exhibited reproducible patterns of conjugate accumulation that differed greatly from each other, pointing to various unexpected functional specializations of the individual Ubx proteins.
As validation of our mass spectrometry findings, we then examined in detail the endoplasmic reticulum-bound transcription factor Spt23, which we identified as a putative Ubx2 substrate. In these studies, ubx2Δ cells were deficient in processing of Spt23 to its active p90 form and in localizing p90 to the nucleus. Additionally, consistent with reduced processing of Spt23, ubx2Δ cells demonstrated a defect in expression of the Spt23 target gene OLE1, a fatty acid desaturase. Overall, this work demonstrates the power of proteomics as a tool to identify new targets of various pathways and reveals Ubx2 as a key regulator of lipid membrane biosynthesis.
Abstract:
Terpenes represent about half of known natural products, and terpene synthases catalyze reactions that increase the complexity of their substrates, cyclizing the linear diphosphate substrates to form rings and stereocenters. With their diverse functionality, terpene synthases may be highly evolvable, with the ability to accept a wide range of non-natural compounds and with high product selectivity. Our hypothesis is that directed evolution of terpene synthases can be used to increase the selectivity of a synthase on a specific substrate. In the first part of the work presented herein, three natural terpene synthases, Cop2, BcBOT2, and SSCG_02150, were tested for activity against the natural substrate and a non-natural substrate, called Surrogate 1, and their relative activities on the natural and non-natural substrates were compared. In the second part of this work, a terpene synthase variant of BcBOT2 that had been evolved for thermostability was used for directed evolution toward increased activity and selectivity on the non-natural substrate referred to as Surrogate 2. Mutations for this evolution were introduced using random mutagenesis, via error-prone polymerase chain reaction, and using site-specific saturation mutagenesis, in which an NNK library is designed with a specific active-site amino acid targeted for mutation. The mutant enzymes were then screened and selected for enhancement of the desired functionality. Two neutral mutants, 19B7 W367F and 19B7 W118Q, were found to maintain activity on Surrogate 2, as measured by the screen.
Abstract:
In noncooperative cost sharing games, individually strategic agents choose resources based on how the welfare (cost or revenue) generated at each resource (which depends on the set of agents that choose the resource) is distributed. The focus is on finding distribution rules that lead to stable allocations, formalized by the concept of Nash equilibrium; well-known examples include the Shapley value (budget-balanced) and marginal contribution (not budget-balanced) rules.
Recent work that seeks to characterize the space of all such rules shows that the only budget-balanced distribution rules that guarantee equilibrium existence in all welfare sharing games are generalized weighted Shapley values (GWSVs), by exhibiting a specific 'worst-case' welfare function which requires that GWSV rules be used. Our work provides an exact characterization of the space of distribution rules (not necessarily budget-balanced) for any fixed local welfare functions, for a general class of scalable and separable games with well-known applications, e.g., facility location, routing, network formation, and coverage games.
We show that all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to GWSV rules on some 'ground' welfare functions. Therefore, it is neither the existence of some worst-case welfare function nor the restriction to budget-balance that limits the design to GWSV rules. Also, in order to guarantee equilibrium existence, it is necessary to work within the class of potential games, since GWSV rules result in (weighted) potential games.
We also provide an alternative characterization: all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to generalized weighted marginal contribution (GWMC) rules on some 'ground' welfare functions. This result stems from a deeper fundamental connection between Shapley values and marginal contributions that our proofs expose: they are equivalent given a transformation connecting their ground welfare functions. (This connection leads to novel closed-form expressions for the GWSV potential function.) Since GWMC rules are more tractable than GWSV rules, a designer can trade off budget-balance against computational tractability in deciding which rule to implement.
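For orientation, the unweighted special cases of the two families of distribution rules can be written as follows (notation ours; the generalized weighted versions additionally involve weight systems and orderings over agents). Here \(W\) is the local welfare function at a resource and \(S\) the set of agents selecting that resource:

\[
\mathrm{SV}_i(S; W) = \sum_{T \subseteq S \setminus \{i\}} \frac{|T|!\,(|S|-|T|-1)!}{|S|!}\,\bigl(W(T \cup \{i\}) - W(T)\bigr),
\qquad
\mathrm{MC}_i(S; W) = W(S) - W\bigl(S \setminus \{i\}\bigr).
\]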
Abstract:
Home to hundreds of millions of souls and a land of excess, the Himalaya is also the locus of a unique seismicity whose scope and peculiarities remain to this day somewhat mysterious. Having claimed the lives of kings, or turned ancient timeworn cities into heaps of rubble and ruins, earthquakes eerily inhabit Nepalese folk tales with the fatalistic message that nothing lasts forever. From a scientific point of view as much as from a human perspective, solving the mysteries of Himalayan seismicity thus represents a challenge of prime importance. Documenting geodetic strain across the Nepal Himalaya with various GPS and leveling data, we show that unlike other subduction zones that exhibit a heterogeneous and patchy coupling pattern along strike, the last hundred kilometers of the Main Himalayan Thrust fault, or MHT, appear to be uniformly locked, devoid of any of the "creeping barriers" that traditionally ward off the propagation of large events. Since the approximately 20 mm/yr of convergence inferred across the Himalaya matches previously established estimates of the secular deformation at the front of the arc, the slip accumulated at depth must at some point propagate elastically all the way to the surface. And yet, neither large events from the past nor currently recorded microseismicity come close to compensating for the massive moment deficit that quietly builds up under the giant mountains. Along with this large unbalanced moment deficit, the uncommonly homogeneous coupling pattern on the MHT raises the question of whether or not the locked portion of the MHT can rupture all at once in a giant earthquake. Unequivocally answering this question appears contingent on the still elusive estimate of the magnitude of the largest possible earthquake in the Himalaya, and requires tight constraints on local fault properties. What makes the Himalaya enigmatic also makes it the potential source of an incredible wealth of information, and we exploit some of the oddities of Himalayan seismicity in an effort to improve the understanding of earthquake physics and decipher the properties of the MHT. Thanks to the Himalaya, the Indo-Gangetic plain is deluged each year under a tremendous amount of water during the annual summer monsoon, which collects and bears down on the Indian plate enough to pull it slightly away from the Eurasian plate, temporarily relieving a small portion of the stress mounting on the MHT. As the rainwater evaporates in the dry winter season, the plate rebounds and tension increases back on the fault. Interestingly, the mild waggle of stress induced by the monsoon rains is about the same size as that from solid-Earth tides, which gently tug at the planet's solid layers; but whereas changes in earthquake frequency correspond with the annually occurring monsoon, there is no such correlation with Earth tides, which oscillate back and forth twice a day. We therefore investigate the general response of the creeping and seismogenic parts of the MHT to periodic stresses in order to link these observations to physical parameters. First, the response of the creeping part of the MHT is analyzed with a simple spring-and-slider system bearing rate-strengthening rheology, and we show that at the transition with the locked zone, where the friction becomes nearly velocity neutral, the response of the slip rate may be amplified at some periods, whose values are analytically related to the physical parameters of the problem.
Such predictions therefore hold the potential of constraining fault properties on the MHT, but they still await observational counterparts before they can be applied, as nothing indicates that the variations of seismicity rate on the locked part of the MHT are the direct expression of variations of the slip rate on its creeping part, and no variations of the slip rate have been singled out from the GPS measurements to this day. When shifting to the locked seismogenic part of the MHT, spring-and-slider models with rate-weakening rheology are insufficient to explain the contrasting responses of the seismicity to the periodic loads that tides and the monsoon both place on the MHT. Instead, we resort to numerical simulations using the Boundary Integral CYCLes of Earthquakes algorithm and examine the response of a 2D finite fault embedded with a rate-weakening patch to harmonic stress perturbations of various periods. We show that such simulations are able to reproduce results consistent with a gradual amplification of sensitivity as the perturbing period gets larger, up to a critical period corresponding to the characteristic time of evolution of the seismicity in response to a step-like perturbation of stress. This increase of sensitivity was not reproduced by simple 1D spring-slider systems, probably because of the complexity of the nucleation process, which is reproduced only by 2D fault models. When the nucleation zone is close to its critical unstable size, its growth becomes highly sensitive to any external perturbation, and the timing of the resulting events may therefore be strongly affected. A fully analytical framework has yet to be developed, and further work is needed to fully describe the behavior of the fault in terms of physical parameters, which will likely provide the keys to deducing constitutive properties of the MHT from seismological observations.
Abstract:
In a probabilistic assessment of the performance of structures subjected to uncertain environmental loads such as earthquakes, an important problem is to determine the probability that the structural response exceeds some specified limits within a given duration of interest. This problem is known as the first excursion problem, and it has been a challenging problem in the theory of stochastic dynamics and reliability analysis. In spite of the enormous amount of attention the problem has received, there is no procedure available for its general solution, especially for engineering problems of interest where the complexity of the system is large and the failure probability is small.
The application of simulation methods to solving the first excursion problem is investigated in this dissertation, with the objective of assessing the probabilistic performance of structures subjected to uncertain earthquake excitations modeled by stochastic processes. From a simulation perspective, the major difficulty in the first excursion problem comes from the large number of uncertain parameters often encountered in the stochastic description of the excitation. Existing simulation tools are examined, with special regard to their applicability in problems with a large number of uncertain parameters. Two efficient simulation methods are developed to solve the first excursion problem. The first method is developed specifically for linear dynamical systems, and it is found to be extremely efficient compared to existing techniques. The second method is more robust to the type of problem, and it is applicable to general dynamical systems. It is efficient for estimating small failure probabilities because the computational effort grows at a much slower rate with decreasing failure probability than standard Monte Carlo simulation. The simulation methods are applied to assess the probabilistic performance of structures subjected to uncertain earthquake excitation. Failure analysis is also carried out using the samples generated during simulation, which provide insight into the probable scenarios that will occur given that a structure fails.
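To illustrate why standard Monte Carlo becomes impractical for small failure probabilities (the motivation for the two methods developed here), the following Python sketch estimates a first-excursion probability for a linear single-degree-of-freedom oscillator under discrete white-noise excitation; the oscillator parameters, excitation model, and threshold are assumptions for the example, not the dissertation's benchmark problems.

import numpy as np

def first_excursion_mc(n_samples=20000, threshold=0.05, seed=1):
    """Crude Monte Carlo estimate of P(max |u(t)| > threshold) for a linear
    SDOF oscillator driven by discrete white noise (illustrative assumptions)."""
    rng = np.random.default_rng(seed)
    dt, n_steps = 0.01, 2000
    omega, zeta = 2 * np.pi, 0.05                 # natural frequency (rad/s), damping ratio
    u = np.zeros(n_samples)                       # displacement, one entry per sample
    v = np.zeros(n_samples)                       # velocity
    peak = np.zeros(n_samples)
    for _ in range(n_steps):
        a = rng.standard_normal(n_samples)        # excitation increment for every sample
        acc = a - 2 * zeta * omega * v - omega**2 * u
        v += acc * dt
        u += v * dt
        np.maximum(peak, np.abs(u), out=peak)
    p_f = np.mean(peak > threshold)
    # The estimator's coefficient of variation grows as p_f shrinks, so the number
    # of samples needed for a fixed accuracy scales roughly like 1 / p_f.
    cov = np.sqrt((1 - p_f) / (n_samples * max(p_f, 1.0 / n_samples)))
    return p_f, cov

print(first_excursion_mc())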
Abstract:
The negative impacts of ambient aerosol particles, or particulate matter (PM), on human health and climate are well recognized. However, owing to the complexity of aerosol particle formation and chemical evolution, emissions control strategies remain difficult to develop in a cost-effective manner. In this work, three studies are presented to address several key issues currently stymieing California's efforts to continue improving its air quality.
Gas-phase organic mass (GPOM) and CO emission factors are used in conjunction with measured enhancements in oxygenated organic aerosol (OOA) relative to CO to quantify the significant lack of closure between expected and observed organic aerosol concentrations attributable to fossil-fuel emissions. Two possible conclusions emerge from the analysis if consistency with the ambient organic data is to be achieved: (1) vehicular emissions are not a dominant source of anthropogenic fossil SOA in the Los Angeles Basin, or (2) the ambient SOA mass yields used to determine the SOA formation potential of vehicular emissions are substantially higher than those derived from laboratory chamber studies. Additional laboratory chamber studies confirm that, owing to vapor-phase wall loss, the SOA mass yields currently used in virtually all 3D chemical transport models are biased low by as much as a factor of 4. Furthermore, predictions from the Statistical Oxidation Model suggest that this bias could be as high as a factor of 8 if the influence of the chamber walls could be removed entirely.
Once vapor-phase wall loss has been accounted for in a new suite of laboratory chamber experiments, the SOA parameterizations within atmospheric chemical transport models should also be updated. To address the numerical challenges of implementing the next generation of SOA models in atmospheric chemical transport models, a novel mathematical framework, termed the Moment Method, is designed and presented. Assessment of the Moment Method's strengths and weaknesses provides valuable insight that can guide future development of SOA modules for atmospheric CTMs.
Finally, regional inorganic aerosol formation and evolution is investigated via detailed comparison of predictions from the Community Multiscale Air Quality (CMAQ version 4.7.1) model against a suite of airborne and ground-based meteorological measurements, gas- and aerosol-phase inorganic measurements, and black carbon (BC) measurements over Southern California during the CalNex field campaign in May/June 2010. Results suggest that continuing to target sulfur emissions in the hope of reducing ambient PM concentrations may not be the most effective strategy for Southern California. Instead, targeting dairy emissions is likely to be an effective strategy for substantially reducing ammonium nitrate concentrations in the eastern part of the Los Angeles Basin.
Abstract:
The main focus of this thesis is the use of high-throughput sequencing technologies in functional genomics (in particular in the form of ChIP-seq, chromatin immunoprecipitation coupled with sequencing, and RNA-seq) and the study of the structure and regulation of transcriptomes. Some parts of it are of a more methodological nature while others describe the application of these functional genomic tools to address various biological problems. A significant part of the research presented here was conducted as part of the ENCODE (ENCyclopedia Of DNA Elements) Project.
The first part of the thesis focuses on the structure and diversity of the human transcriptome. Chapter 1 contains an analysis of the diversity of the human polyadenylated transcriptome based on RNA-seq data generated for the ENCODE Project. Chapter 2 presents a simulation-based examination of the performance of some of the most popular computational tools used to assemble and quantify transcriptomes. Chapter 3 includes a study of variation in gene expression, alternative splicing, and allelic expression bias at the single-cell level and on a genome-wide scale in human lymphoblastoid cells; it also brings forward a number of methodological considerations critical to the practice of single-cell RNA-seq measurements.
The second part presents several studies applying functional genomic tools to the study of the regulatory biology of organellar genomes, primarily in mammals but also in plants. Chapter 5 contains an analysis of the occupancy of the human mitochondrial genome by TFAM, an important structural and regulatory protein in mitochondria, using ChIP-seq. In Chapter 6, the mitochondrial DNA occupancy of the TFB2M transcriptional regulator, the MTERF termination factor, and the mitochondrial RNA and DNA polymerases is characterized. Chapter 7 consists of an investigation into the curious phenomenon of the physical association of nuclear transcription factors with mitochondrial DNA, based on the diverse collections of transcription factor ChIP-seq datasets generated by the ENCODE, mouseENCODE and modENCODE consortia. In Chapter 8 this line of research is further extended to existing publicly available ChIP-seq datasets in plants and their mitochondrial and plastid genomes.
The third part is dedicated to the analytical and experimental practice of ChIP-seq. As part of the ENCODE Project, a set of metrics for assessing the quality of ChIP-seq experiments was developed, and the results of this activity are presented in Chapter 9. These metrics were later used to carry out a global analysis of ChIP-seq quality in the published literature (Chapter 10). In Chapter 11, the development and initial application of an automated robotic ChIP-seq protocol (in which these metrics also played a major role) are presented.
The fourth part presents the results of some additional projects the author has been involved in, including the study of the role of the Piwi protein in the transcriptional regulation of transposon expression in Drosophila (Chapter 12), and the use of single-cell RNA-seq to characterize the heterogeneity of gene expression during cellular reprogramming (Chapter 13).
The last part of the thesis provides a review of the results of the ENCODE Project and the interpretation of the complexity of the biochemical activity exhibited by mammalian genomes that they have revealed (Chapters 15 and 16), an overview of the technical developments expected in the near future and their impact on the field of functional genomics (Chapter 14), and a discussion of some thus far insufficiently explored research areas, the future study of which will, in the opinion of the author, provide deep insights into many fundamental but not yet completely answered questions about the transcriptional biology of eukaryotes and its regulation.
Abstract:
Oxygenic photosynthesis fundamentally transformed our planet by releasing molecular oxygen and altering major biogeochemical cycles, and this exceptional metabolism relies on a redox-active cubane cluster of four manganese atoms. Not only is manganese essential for producing oxygen, but manganese is also only oxidized by oxygen and oxygen-derived species. Thus the history of manganese oxidation provides a valuable perspective on our planet's environmental past, the ancient availability of oxygen, and the evolution of oxygenic photosynthesis. Broadly, the geologic record of manganese deposition is a chronicle of ancient manganese oxidation: manganese is introduced into the fluid Earth as Mn(II) and remains only a trace component in sedimentary rocks until it is oxidized, forming insoluble Mn(III,IV) precipitates that are concentrated in the rock record. Because these manganese oxides are highly favorable electron acceptors, they often undergo reduction in sediments through anaerobic respiration and abiotic reaction pathways.
The following dissertation presents five chapters investigating manganese cycling, both by examining ancient examples of manganese enrichments in the geologic record and by exploring the mineralogical products of various pathways of manganese oxide reduction that may occur in sediments. The first chapter explores the mineralogical record of manganese and reports abundant manganese reduction recorded in six representative manganese-enriched sedimentary sequences. This is followed by a second chapter that further analyzes the earliest significant manganese deposit, 2.4 billion years ago, and determines that it predated the origin of oxygenic photosynthesis, thus providing supporting evidence for manganese-oxidizing photosynthesis as an evolutionary precursor to oxygenic photosynthesis. The lack of oxygen during this early manganese deposition was partially established using oxygen-sensitive detrital grains, and so a third chapter delves into what these grains mean for oxygen constraints using a mathematical model. The fourth chapter returns to processes affecting manganese post-deposition, and explores the relationships between manganese mineral products and (bio)geochemical reduction processes to understand how various manganese minerals can reveal ancient environmental conditions and biological metabolisms. Finally, a fifth chapter considers whether manganese can be mobilized and enriched in sedimentary rocks and determines that manganese was concentrated secondarily in a 2.5-billion-year-old example from South Africa. Overall, this thesis demonstrates how microbial processes, namely photosynthesis and metal oxide-reducing metabolisms, are linked to and recorded in the rich complexity of the manganese mineralogical record.
Abstract:
Magnetic resonance techniques have given us a powerful means for investigating dynamical processes in gases, liquids, and solids. Dynamical effects manifest themselves in both resonance line shifts and linewidths and, accordingly, require detailed analyses to extract the desired information. The success of a magnetic resonance experiment depends critically on relaxation mechanisms to maintain thermal equilibrium between spin states. Consequently, there must be an interaction between the excited spin states and their immediate molecular environment which promotes changes in spin orientation while excess magnetic energy is coupled into other degrees of freedom by non-radiative processes. This is well known as spin-lattice relaxation. Certain dynamical processes cause fluctuations in the spin state energy levels, leading to spin-spin relaxation, and here again the environment at the molecular level plays a significant role in the magnitude of the interaction. Relatively few electron spin relaxation studies of solutions have been conducted, and the present work is addressed toward the extension of our knowledge in this area and the retrieval of dynamical information from line shape analyses on a time scale comparable to diffusion-controlled phenomena.
Specifically, the electron spin relaxation of three Mn²⁺ (3d⁵) complexes, Mn(CH₃CN)₆²⁺, MnCl₄²⁻, and MnBr₄²⁻, in acetonitrile has been studied in considerable detail. The effective spin Hamiltonian constants were carefully evaluated under a wide range of experimental conditions. Resonance widths of these Mn²⁺ complexes were studied in the presence of various excess ligand ions and as a function of concentration, viscosity, temperature, and frequency (X-band, ~9.5 GHz, and K-band, ~35 GHz).
A number of interesting conclusions were drawn from these studies. For the Et₄NCl–MnCl₄²⁻ system, several relaxation mechanisms leading to resonance broadening were observed. One source appears to arise through spin-orbit interactions caused by modulation of the ligand field resulting from transient distortions of the complex imparted by solvent fluctuations in the immediate surroundings of the paramagnetic ion. An additional spin relaxation mechanism was assigned to the formation of ion pairs [Et₄N⁺⋯MnCl₄²⁻], and it was possible to estimate the dissociation constant for this species in acetonitrile.
The Bu₄NBr–MnBr₄²⁻ study was considerably more interesting. As in the former case, solvent fluctuations and ion pairing of the paramagnetic complex [Bu₄N⁺⋯MnBr₄²⁻] provide significant relaxation for the electronic spin system. Most interesting, without doubt, is the onset of a new relaxation mechanism leading to resonance broadening which is best interpreted as chemical exchange. Thus, assuming that resonance widths were simply governed by electron spin state lifetimes, we were able to extract dynamical information from an interaction in which the initial and final states are the same:
MnBr₄²⁻ + Br⁻ = MnBr₄²⁻ + Br⁻.
The bimolecular rate constants were obtained at six different temperatures, and their magnitudes suggested that the exchange is probably diffusion controlled with essentially a zero energy of activation. The most important source of spin relaxation in this system stems directly from dipolar interactions between the manganese 3d⁵ electrons. Moreover, the dipolar broadening is strongly frequency dependent, indicating a deviation between the transverse and longitudinal relaxation times. We are led to the conclusion that the 3d⁵ spin states of ion-paired MnBr₄²⁻ are significantly correlated, so that dynamical processes are also entering the picture. It was possible to estimate the correlation time, τd, characterizing this dynamical process.
In Part II we study nuclear magnetic relaxation of bromide ions in the MnBr₄²⁻–Bu₄NBr–acetonitrile system. Essentially, we monitor the ⁷⁹Br and ⁸¹Br linewidths in response to the [MnBr₄²⁻]/[Br⁻] ratio with the express purpose of supporting our contention that exchange is occurring between "free" bromide ions in the solvent and bromine in the first coordination sphere of the paramagnetic anion. The complexity of the system elicited a two-part study: (1) the linewidth behavior of Bu₄NBr in anhydrous CH₃CN in the absence of MnBr₄²⁻, and (2) in the presence of MnBr₄²⁻. It was concluded in study (1) that dynamical association, Bu₄NBr ⇌ Bu₄N⁺ + Br⁻, was modulating field-gradient interactions at frequencies high enough to provide an estimate of the unimolecular rate constant, k₁. A comparison of the two isotopic bromine linewidth-mole fraction results led to the conclusion that quadrupole interactions provided the dominant relaxation mechanism. In study (2) the "residual" bromine linewidths for both ⁷⁹Br and ⁸¹Br are clearly controlled by quadrupole interactions which appear to be modulated by very rapid dynamical processes other than molecular reorientation. We conclude that the "residual" linewidth has its origin in chemical exchange and that bromine nuclei exchange rapidly between a "free" solvated ion and the paramagnetic complex, MnBr₄²⁻.
Abstract:
Picric acid possesses the property, which is rare among strong electrolytes, of having a convenient distribution ratio between water and certain organic solvents such as benzene, chloroform, etc. Because of this property, picric acid offers peculiar advantages for studying the well-known deviations of strong electrolytes from the law of mass action, for, by means of distribution experiments, the activities of picric acid in various aqueous solutions may be compared.
In order to interpret the results of such distribution experiments, it is necessary to know the degree of ionization of picric acid in aqueous solutions.
At least three series of determinations of the equivalent conductance of picric acid have been published, but the results are not concordant; therefore, the degree of ionization cannot be calculated with any certainty.
The object of the present investigation was to redetermine the conductance of picric acid solutions in order to obtain satisfactory data from which the degrees of ionization of its solutions might be calculated.