Abstract:
Part I. Complexes of Biological Bases and Oligonucleotides with RNA
The physical nature of complexes of several biological bases and oligonucleotides with single-stranded ribonucleic acids has been studied by high-resolution proton magnetic resonance spectroscopy. The importance of various forces in stabilizing these complexes is also discussed.
Previous work has shown that purine forms an intercalated complex with single-stranded nucleic acids. This complex formation led to severe and stereospecific broadening of the purine resonances. From the field dependence of the linewidths, T1 measurements of the purine protons and nuclear Overhauser enhancement experiments, the mechanism for the line broadening was ascertained to be dipole-dipole interactions between the purine protons and the ribose protons of the nucleic acid.
The interactions of ethidium bromide (EB) with several RNA residues have been studied. EB forms vertically stacked aggregates with itself as well as with uridine, 3'-uridine monophosphate and 5'-uridine monophosphate and forms an intercalated complex with uridylyl (3' → 5') uridine and polyuridylic acid (poly U). The geometry of EB in the intercalated complex has also been determined.
The effect of chain length of oligo-A-nucleotides on their mode of interaction with poly U in D2O at neutral pD has also been studied. Below room temperature, ApA and ApApA form a rigid triple-stranded complex with a stoichiometry of one adenine to two uracil bases, presumably via specific adenine-uracil base pairing and cooperative stacking of the adenine bases. While no evidence was obtained for the interaction of ApA with poly U above room temperature, ApApA formed a 1:1 complex with poly U through Watson-Crick base pairs. The thermodynamics of these systems are discussed.
Part II. Template Recognition and the Degeneracy of the Genetic Code
The interaction of ApApG with poly U was studied as a model system for the codon-anticodon interaction of tRNA and mRNA in vivo. ApApG was shown to interact with poly U below ~20°C. The interaction was of a 1:1 nature and exhibited the Hoogsteen bonding scheme. The three bases of ApApG are in the anti conformation, and the guanosine base appears to be in the lactim tautomeric form in the complex.
Due to the inadequacies of previous models for the degeneracy of the genetic code in explaining the observed interactions of ApApG with poly U, the "tautomeric doublet" model is proposed as a possible explanation of the degenerate interactions of tRNA with mRNA during protein synthesis in vivo.
Abstract:
This contribution is the first part of a four-part series documenting the development of B:RUN, a software program which reads data from common spreadsheets and presents them as low-resolution maps of states and processes. The program emerged from a need, which arose during a project in Brunei Darussalam, for a 'low level' approach allowing researchers to communicate findings as efficiently and expeditiously as possible. Part I provides an overview of the concept and design elements of B:RUN. Part II will highlight results of the economics components of the program, evaluating different fishing regimes, sailing distances from ports and fleet operating costs. Environmental aspects will be presented in Part III in the form of overlay maps. Part IV will summarize the implications of B:RUN results for coastal and fishery resources management in Brunei Darussalam and show how this approach can be adapted to other coastlines and used as a teaching and training tool. The following three parts will be published in future editions of Naga, the ICLARM Quarterly. The program is available through ICLARM.
Abstract:
These three papers describe an approach to the synthesis of solutions to a class of mechanical design problems; these involve transmission and transformation of mechanical forces and motion, and can be described by a set of inputs and outputs. The approach involves (1) identifying a set of primary functional elements and rules of combining them, and (2) developing appropriate representations and reasoning procedures for synthesising solution concepts using these elements and their combination rules; these synthesis procedures can produce an exhaustive set of solution concepts, in terms of their topological as well as spatial configurations, to a given design problem. This paper (Part III) describes a constraint propagation procedure which, using a knowledge base of spatial information about a set of primary functional elements, can produce possible spatial configurations of solution concepts generated in Part II.
Abstract:
The potential adverse human health and climate impacts of emissions from UK airports have become a significant political issue, yet the emissions, air quality impacts and health impacts attributable to UK airports remain largely unstudied. We produce an inventory of UK airport emissions - including aircraft landing and takeoff (LTO) operations and airside support equipment - with uncertainties quantified. The airports studied account for more than 95% of UK air passengers in 2005. We estimate that in 2005, UK airports emitted 10.2 Gg [-23 to +29%] of NOx, 0.73 Gg [-29 to +32%] of SO2, 11.7 Gg [-42 to +77%] of CO, 1.8 Gg [-59 to +155%] of HC, 2.4 Tg [-13 to +12%] of CO2, and 0.31 Gg [-36 to +45%] of PM2.5. This translates to 2.5 Tg [-12 to +12%] CO2-eq using Global Warming Potentials for a 100-year time horizon. Uncertainty estimates were based on analysis of data from aircraft emissions measurement campaigns and analyses of aircraft operations. The First-Order Approximation (FOA3) - currently the standard approach used to estimate particulate matter emissions from aircraft - is compared to measurements, and discrepancies greater than an order of magnitude are found in 40% of cases for both the organic carbon and black carbon emissions indices. Modified methods are proposed to approximate black carbon emissions and organic carbon emissions arising from incomplete combustion and lubrication oil. These alterations lead to a factor-of-8 increase and a 44% increase in the annual emissions estimates of black and organic carbon particulate matter, respectively, and to a factor-of-3.4 increase in total PM2.5 emissions compared to the current FOA3 methodology. Our estimates of emissions are used in Part II to quantify the air quality and health impacts of UK airports, to assess mitigation options, and to estimate the impacts of a potential London airport expansion. © 2011 Elsevier Ltd.
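The CO2-equivalent figure quoted above is an aggregation of species emissions weighted by their 100-year Global Warming Potentials. A minimal sketch of that weighting, with assumed illustrative GWP values rather than the paper's exact inputs:

```python
# Illustrative GWP-100 weights (assumed AR5-style values, not the paper's inputs).
GWP_100 = {"CO2": 1.0, "CH4": 28.0, "N2O": 265.0}

def co2_equivalent_tg(emissions_tg):
    """Sum species emissions (in Tg) weighted by their GWP-100."""
    return sum(mass * GWP_100[species] for species, mass in emissions_tg.items())

# The abstract's 2.4 Tg of CO2 contributes 2.4 Tg CO2-eq on its own;
# non-CO2 species make up the remainder of the reported 2.5 Tg CO2-eq.
print(co2_equivalent_tg({"CO2": 2.4}))  # → 2.4
```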
Abstract:
We demonstrate how a prior assumption of smoothness can be used to enhance the reconstruction of free energy profiles from multiple umbrella sampling simulations using the Bayesian Gaussian process regression approach. The method we derive allows the concurrent use of histograms and free energy gradients and can easily be extended to include further data. In Part I we review the necessary theory and test the method for one collective variable. We demonstrate improved performance with respect to the weighted histogram analysis method and obtain meaningful error bars without any significant additional computation. In Part II we consider the case of multiple collective variables and compare to a reconstruction using least squares fitting of radial basis functions. We find substantial improvements in the regimes of spatially sparse data or short sampling trajectories. A software implementation is made available on www.libatoms.org.
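A minimal, self-contained sketch of Gaussian process regression in one collective variable, in the spirit of the smoothness prior described above: a squared-exponential kernel encodes the smoothness assumption and the posterior yields both a mean profile and error bars. The toy "free energy" data, kernel hyperparameters, and function names are illustrative assumptions, not taken from the paper or its software.

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=0.2, variance=1.0):
    """Squared-exponential (RBF) kernel encoding the smoothness prior."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean and standard deviation of the GP at x_test."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    Kss = rbf_kernel(x_test, x_test)
    mean = Ks.T @ np.linalg.solve(K, y_train)
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))  # error bars

x = np.linspace(0.0, 1.0, 8)
f = np.cos(2 * np.pi * x)  # toy free-energy-like profile
mu, sigma = gp_posterior(x, f, np.linspace(0.0, 1.0, 50))
```

In the actual method, histogram counts and mean-force gradients from the umbrella windows would enter as (possibly heterogeneous) observations; the error bars come essentially for free from the posterior covariance, as the abstract notes.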
Abstract:
Two classes of techniques have been developed to whiten the quantization noise in digital delta-sigma modulators (DDSMs): deterministic and stochastic. In this two-part paper, a design methodology for reduced-complexity DDSMs is presented. The design methodology is based on error masking. Rules for selecting the word lengths of the stages in multistage architectures are presented. We show that the hardware requirement can be reduced by up to 20% compared with a conventional design, without sacrificing performance. Simulation and experimental results confirm theoretical predictions. Part I addresses MultistAge noise SHaping (MASH) DDSMs; Part II focuses on single-quantizer DDSMs.
Abstract:
This paper presents a three-dimensional continuum damage mechanics-based material model which was implemented in an implicit finite element code to simulate the progressive intralaminar degradation of fibre reinforced laminates. The damage model is based on ply failure mechanisms and uses seven damage variables assigned to tensile, compressive and shear damage at the ply level. Non-linear behaviour and irreversibility were taken into account and modelled. Some issues in the numerical implementation of the damage model are discussed and solutions proposed. Applications of the methodology are presented in Part II.
Abstract:
Bottom hinged oscillating wave surge converters are known to be an efficient method of extracting power from ocean waves. The present work deals with experimental and numerical studies of wave interactions with an oscillating wave surge converter. It focuses on two aspects: (1) viscous effects on device performance under normal operating conditions; and (2) effects of slamming on device survivability under extreme conditions. Part I deals with the viscous effects while the extreme sea conditions will be presented in Part II. The numerical simulations are performed using the commercial CFD package ANSYS FLUENT. The comparison between numerical results and experimental measurements shows excellent agreement in terms of capturing local features of the flow as well as the dynamics of the device. A series of simulations is conducted with various wave conditions, flap configurations and model scales to investigate the viscous and scaling effects on the device. It is found that the diffraction/radiation effects dominate the device motion and that the viscous effects are negligible for wide flaps.
Abstract:
A poor representation of cloud structure in a general circulation model (GCM) is widely recognised as a potential source of error in the radiation budget. Here, we develop a new way of representing both horizontal and vertical cloud structure in a radiation scheme. This combines the ‘Tripleclouds’ parametrization, which introduces inhomogeneity by using two cloudy regions in each layer as opposed to one, each with different water content values, with ‘exponential-random’ overlap, in which clouds in adjacent layers are not overlapped maximally, but according to a vertical decorrelation scale. This paper, Part I of two, aims to parametrize the two effects such that they can be used in a GCM. To achieve this, we first review a number of studies for a globally applicable value of fractional standard deviation of water content for use in Tripleclouds. We obtain a value of 0.75 ± 0.18 from a variety of different types of observations, with no apparent dependence on cloud type or gridbox size. Then, through a second short review, we create a parametrization of decorrelation scale for use in exponential-random overlap, which varies the scale linearly with latitude from 2.9 km at the Equator to 0.4 km at the poles. When applied to radar data, both components are found to have radiative impacts capable of offsetting biases caused by cloud misrepresentation. Part II of this paper implements Tripleclouds and exponential-random overlap into a radiation code and examines both their individual and combined impacts on the global radiation budget using re-analysis data.
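The decorrelation-scale parametrization above varies linearly with latitude between the two quoted endpoints. A minimal sketch of that relation, using the abstract's values of 2.9 km at the Equator and 0.4 km at the poles (the function name is ours, and linearity in absolute latitude is assumed):

```python
def decorrelation_scale_km(latitude_deg):
    """Vertical decorrelation scale (km) for exponential-random overlap,
    varying linearly from 2.9 km at the Equator to 0.4 km at the poles."""
    frac = min(abs(latitude_deg), 90.0) / 90.0  # 0 at the Equator, 1 at the poles
    return 2.9 + (0.4 - 2.9) * frac

print(round(decorrelation_scale_km(0.0), 2))   # → 2.9
print(round(decorrelation_scale_km(90.0), 2))  # → 0.4
```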
Abstract:
Current mathematical models in building research have, in most studies, been limited to linear dynamic systems. A literature review of past studies investigating chaos-theory approaches in building simulation models suggests that a chaos-based model is valid and can handle the increasing complexity of building systems, which have dynamic interactions among all the distributed and hierarchical subsystems on the one hand, and with the environment and occupants on the other. The review also identifies the paucity of literature and the need for a suitable methodology for linking chaos theory to mathematical models in building design and management studies. This study is broadly divided into two parts and presented in two companion papers. Part I reviews the current state of chaos-theory models as a starting point for establishing theories that can be effectively applied to building simulation models. Part II develops conceptual frameworks that approach current model methodologies from the theoretical perspective provided by chaos theory, with a focus on the key concepts and their potential to help better understand the nonlinear dynamic nature of built environment systems. Case studies are also presented which demonstrate the potential usefulness of chaos-theory-driven models in a wide variety of leading areas of building research. This study distills the fundamental properties and the most relevant characteristics of chaos theory essential to building simulation scientists, initiates a dialogue and builds bridges between scientists and engineers, and stimulates future research on a wide range of issues concerning building environmental systems.
Abstract:
This study examines criteria for the existence of two stable states of the Atlantic Meridional Overturning Circulation (AMOC) using a combination of theory and simulations from a numerical coupled atmosphere–ocean climate model. By formulating a simple collection of state parameters and their relationships, the authors reconstruct the North Atlantic Deep Water (NADW) OFF state behavior under a varying external salt-flux forcing. This part (Part I) of the paper examines the steady-state solution, which gives insight into the mechanisms that sustain the NADW OFF state in this coupled model; Part II deals with the transient behavior predicted by the evolution equation. The nonlinear behavior of the Antarctic Intermediate Water (AAIW) reverse cell is critical to the OFF state. Higher Atlantic salinity leads both to a reduced AAIW reverse cell and to a greater vertical salinity gradient in the South Atlantic. The former tends to reduce Atlantic salt export to the Southern Ocean, while the latter tends to increase it. These competing effects produce a nonlinear response of Atlantic salinity and salt export to salt forcing, and the existence of maxima in these quantities. Thus the authors obtain a natural and accurate analytical saddle-node condition for the maximal surface salt flux for which a NADW OFF state exists. By contrast, the bistability indicator proposed by De Vries and Weber does not generally work in this model. It is applicable only when the effect of the AAIW reverse cell on the Atlantic salt budget is weak.
Abstract:
As a major mode of intraseasonal variability, which interacts with weather and climate systems on a near-global scale, the Madden–Julian Oscillation (MJO) is a crucial source of predictability for numerical weather prediction (NWP) models. Despite its global significance and comprehensive investigation, improvements in the representation of the MJO in an NWP context remain elusive. However, recent modifications to the model physics in the ECMWF model led to advances in the representation of atmospheric variability and the unprecedented propagation of the MJO signal through the entire integration period. In light of these recent advances, a set of hindcast experiments has been designed to assess the sensitivity of MJO simulation to the formulation of convection. Through the application of established MJO diagnostics, it is shown that the improvements in the representation of the MJO can be directly attributed to the modified convective parametrization. Furthermore, the improvements are attributed to the move from a moisture-convergent- to a relative-humidity-dependent formulation for organized deep entrainment. It is concluded that, in order to understand the physical mechanisms through which a relative-humidity-dependent formulation for entrainment led to an improved simulation of the MJO, a more process-based approach should be taken. The application of process-based diagnostics to the hindcast experiments presented here will be the focus of Part II of this study.
Abstract:
A common bias among global climate models (GCMs) is that they exhibit tropospheric southern annular mode (SAM) variability that is much too persistent in the Southern Hemisphere (SH) summertime. This is of concern for the ability to accurately predict future SH circulation changes, so it is important that it be understood and alleviated. In this two-part study, specifically targeted experiments with the Canadian Middle Atmosphere Model (CMAM) are used to improve understanding of the enhanced summertime SAM persistence. Given the ubiquity of this bias among comprehensive GCMs, it is likely that the results will be relevant for other climate models. Here, in Part I, the influence of climatological circulation biases on SAM variability is assessed, with a particular focus on two common biases that could enhance summertime SAM persistence: the too-late breakdown of the Antarctic stratospheric vortex and the equatorward bias in the SH tropospheric midlatitude jet. Four simulations are used to investigate the role of each of these biases in CMAM. Nudging and bias correcting procedures are used to systematically remove zonal-mean stratospheric variability and/or remove climatological zonal wind biases. The SAM time-scale bias is not alleviated by improving either the timing of the stratospheric vortex breakdown or the climatological jet structure. Even in the absence of stratospheric variability and with an improved climatological circulation, the model time scales are biased long. This points toward a bias in internal tropospheric dynamics that is not caused by the tropospheric jet structure bias. The underlying cause of this is examined in more detail in Part II of this study.