155 results for SIMPLE SEQUENCES
Abstract:
A simple and coherent framework for partitioning uncertainty in multi-model climate ensembles is presented. The analysis of variance (ANOVA) is used to decompose a measure of total variation additively into scenario uncertainty, model uncertainty and internal variability. This approach requires fewer assumptions than existing methods and can be easily used to quantify uncertainty related to model-scenario interaction: the contribution to model uncertainty arising from the variation across scenarios of model deviations from the ensemble mean. Uncertainty in global mean surface air temperature is quantified as a function of lead time for a subset of the Coupled Model Intercomparison Project phase 3 ensemble and results largely agree with those published by other authors: scenario uncertainty dominates beyond 2050 and internal variability remains approximately constant over the 21st century. Both elements of model uncertainty, due to scenario-independent and scenario-dependent deviations from the ensemble mean, are found to increase with time. Estimates of model deviations that arise as by-products of the framework reveal significant differences between models that could lead to a deeper understanding of the sources of uncertainty in multi-model ensembles. For example, three models are shown to follow a diverging pattern over the 21st century, while another model exhibits an unusually large variation among its scenario-dependent deviations.
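The additive ANOVA decomposition described above can be sketched for a toy model-by-scenario table. All numbers here are hypothetical, and internal variability (which requires replicate runs) is omitted for brevity; the point is only that the total sum of squares splits exactly into scenario, model, and interaction terms:

```python
import numpy as np

# Hypothetical ensemble: rows = models, columns = scenarios.
# Entries are projected global mean temperature anomalies (K) at one lead time.
y = np.array([
    [1.8, 2.6, 3.9],
    [2.1, 2.9, 4.4],
    [1.5, 2.2, 3.1],
    [2.0, 2.8, 4.0],
])
m, s = y.shape
grand = y.mean()

row_dev = y.mean(axis=1) - grand   # model main effects (deviations from ensemble mean)
col_dev = y.mean(axis=0) - grand   # scenario main effects

ss_model    = s * np.sum(row_dev ** 2)
ss_scenario = m * np.sum(col_dev ** 2)
# Interaction: what remains after removing both main effects.
interaction = y - grand - row_dev[:, None] - col_dev[None, :]
ss_inter    = np.sum(interaction ** 2)
ss_total    = np.sum((y - grand) ** 2)

# The decomposition is exactly additive.
assert np.isclose(ss_total, ss_model + ss_scenario + ss_inter)
```

The `interaction` matrix is the by-product mentioned in the abstract: its rows show how each model's deviation from the ensemble mean varies across scenarios.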
Abstract:
In this study we quantify the relationship between the aerosol optical depth increase from a volcanic eruption and the severity of the subsequent surface temperature decrease. This investigation is made by simulating 10 different sizes of eruption in a global circulation model (GCM) by changing stratospheric sulfate aerosol optical depth at each time step. The sizes of the simulated eruptions range from Pinatubo‐sized up to the magnitude of supervolcanic eruptions around 100 times the size of Pinatubo. From these simulations we find that there is a smooth monotonic relationship between the global mean maximum aerosol optical depth anomaly and the global mean temperature anomaly and we derive a simple mathematical expression which fits this relationship well. We also construct similar relationships between global mean aerosol optical depth and the temperature anomaly at every individual model grid box to produce global maps of best‐fit coefficients and fit residuals. These maps are used with caution to find the eruption size at which a local temperature anomaly is clearly distinct from the local natural variability and to approximate the temperature anomalies which the model may simulate following a Tambora‐sized eruption. To our knowledge, this is the first study which quantifies the relationship between aerosol optical depth and resulting temperature anomalies in a simple way, using the wealth of data that is available from GCM simulations.
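Deriving a best-fit expression relating peak aerosol optical depth (AOD) anomaly to temperature anomaly can be sketched as a standard curve fit. The functional form and all data values below are invented for illustration (a saturating logarithmic form is assumed); the study's actual fitted expression and simulation output are not reproduced here:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (peak AOD anomaly, temperature anomaly) pairs spanning
# Pinatubo-sized to ~100x-Pinatubo eruptions; values are illustrative only.
tau = np.array([0.15, 0.4, 1.0, 2.5, 6.0, 15.0])
dT  = np.array([-0.4, -0.9, -1.8, -3.0, -4.5, -6.2])

def f(tau, a, b):
    # Assumed saturating form; the paper's derived expression may differ.
    return -a * np.log1p(b * tau)

(a, b), _ = curve_fit(f, tau, dT, p0=(1.0, 1.0))
residuals = dT - f(tau, a, b)
```

The same fit repeated at every model grid box, storing `a`, `b`, and the residuals, would yield maps like those described in the abstract.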
Abstract:
Background: Medication errors in general practice are an important source of potentially preventable morbidity and mortality. Building on previous descriptive, qualitative and pilot work, we sought to investigate the effectiveness, cost-effectiveness and likely generalisability of a complex pharmacist-led IT-based intervention aiming to improve prescribing safety in general practice. Objectives: We sought to:
• Test the hypothesis that a pharmacist-led IT-based complex intervention using educational outreach and practical support is more effective than simple feedback in reducing the proportion of patients at risk from errors in prescribing and medicines management in general practice.
• Conduct an economic evaluation of the cost per error avoided, from the perspective of the National Health Service (NHS).
• Analyse data recorded by pharmacists, summarising the proportions of patients judged to be at clinical risk, the actions recommended by pharmacists, and the actions completed in the practices.
• Explore the views and experiences of healthcare professionals and NHS managers concerning the intervention; investigate potential explanations for the observed effects; and inform decisions on the future roll-out of the pharmacist-led intervention.
• Examine secular trends in the outcome measures of interest, allowing for informal comparison between trial practices and practices that did not participate in the trial but contributed to the QRESEARCH database.
Methods: Two-arm cluster randomised controlled trial of 72 English general practices with embedded economic analysis and longitudinal descriptive and qualitative analysis. Informal comparison of the trial findings with a national descriptive study investigating secular trends undertaken using data from practices contributing to the QRESEARCH database. The main outcomes of interest were prescribing errors and medication monitoring errors at six and 12 months following the intervention.
Results: Participants in the pharmacist intervention arm practices were significantly less likely to have been prescribed a non-selective NSAID without a proton pump inhibitor (PPI) if they had a history of peptic ulcer (OR 0.58, 95% CI 0.38 to 0.89), to have been prescribed a beta-blocker if they had asthma (OR 0.73, 95% CI 0.58 to 0.91) or (in those aged 75 years and older) to have been prescribed an ACE inhibitor or diuretic without a measurement of urea and electrolytes in the last 15 months (OR 0.51, 95% CI 0.34 to 0.78). The economic analysis suggests that the PINCER pharmacist intervention has a 95% probability of being cost-effective if the decision-maker's ceiling willingness to pay reaches £75 (6 months) or £85 (12 months) per error avoided. The intervention addressed an issue that was important to professionals and their teams and was delivered in a way that was acceptable to practices, with minimum disruption of normal work processes. Comparison of the trial findings with changes seen in QRESEARCH practices indicated that any reductions achieved in the simple feedback arm were likely, in the main, to have been related to secular trends rather than to the intervention.
Conclusions: Compared with simple feedback, the pharmacist-led intervention resulted in reductions in the proportions of patients at risk of prescribing and monitoring errors for the primary outcome measures and the composite secondary outcome measures at six months and (with the exception of the NSAID/peptic ulcer outcome measure) 12 months post-intervention. The intervention is acceptable to pharmacists and practices, and is likely to be seen as cost-effective by decision makers.
Abstract:
Background: Efficient gene expression involves a trade-off between (i) premature termination of protein synthesis; and (ii) readthrough, where the ribosome fails to dissociate at the terminal stop. Sense codons that are similar in sequence to stop codons are more susceptible to nonsense mutation, and are also likely to be more susceptible to transcriptional or translational errors causing premature termination. We therefore expect this trade-off to be influenced by the number of stop codons in the genetic code. Although genetic codes are highly constrained, stop codon number appears to be their most volatile feature.
Results: In the human genome, codons readily mutable to stops are underrepresented in coding sequences. We construct a simple mathematical model based on the relative likelihoods of premature termination and readthrough. When readthrough occurs, the resultant protein has a tail of amino acid residues incorrectly added to the C-terminus. Our results depend strongly on the number of stop codons in the genetic code. When the code has more stop codons, premature termination is relatively more likely, particularly for longer genes. When the code has fewer stop codons, the length of the tail added by readthrough will, on average, be longer, and thus more deleterious. Comparative analysis of taxa with a range of stop codon numbers suggests that genomes whose code includes more stop codons have shorter coding sequences.
Conclusions: We suggest that the differing trade-offs presented by alternative genetic codes may result in differences in genome structure. More speculatively, multiple stop codons may mitigate readthrough, counteracting the disadvantage of a higher rate of nonsense mutation. This could help explain the puzzling overrepresentation of stop codons in the canonical genetic code and most variants.
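The qualitative trade-off above can be illustrated with a toy cost model. This is not the paper's model: the error rate, cost weights, and functional forms below are invented purely to show the two opposing effects of stop codon number:

```python
# Toy version of the premature-termination vs readthrough trade-off.
# All rates and cost weights are invented for illustration.
def expected_cost(n_stop, gene_len, err_rate=1e-4, w_term=1.0, w_tail=0.05):
    """Expected cost of expressing one gene under a code with n_stop stop codons."""
    # Chance that some error along the gene yields a premature stop:
    # grows with both the number of stop codons and gene length.
    p_premature = 1 - (1 - err_rate * n_stop / 64) ** gene_len
    # Mean readthrough tail length (codons) until a stop occurs by chance
    # in downstream sequence: shrinks as stop codons become more common.
    tail_len = 64 / n_stop
    return w_term * p_premature + w_tail * tail_len

# More stop codons make premature termination likelier (especially for long
# genes) but shorten the deleterious tail added by readthrough.
```

Under these assumptions, a one-stop code pays a large readthrough-tail cost, while a three-stop code pays more in premature termination as genes lengthen, matching the direction of the trade-off the abstract describes.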
Abstract:
Mobile genetic elements are widespread in Pseudomonas syringae, and often associate with virulence genes. Genome reannotation of the model bean pathogen P. syringae pv. phaseolicola 1448A identified seventeen types of insertion sequences and two miniature inverted-repeat transposable elements (MITEs) with a biased distribution, representing 2.8% of the chromosome, 25.8% of the 132-kb virulence plasmid and 2.7% of the 52-kb plasmid. Employing an entrapment vector containing sacB, we estimated that transposition frequency varied between 2.6×10⁻⁵ and 1.1×10⁻⁶, depending on the clone, although it was stable for each clone after consecutive transfers in culture media. Transposition frequency was similar for bacteria grown in rich or minimal media, and for cells recovered from compatible and incompatible plant hosts, indicating that growth conditions do not influence transposition in strain 1448A. Most of the entrapped insertions contained a full-length IS801 element, with the remaining insertions corresponding to sequences smaller than any transposable element identified in strain 1448A, and collectively identified as miniature sequences. Of these, fragments of 229, 360 and 679 nt of the right end of IS801 ended in a consensus tetranucleotide and likely resulted from one-ended transposition of IS801. An average 0.7% of the insertions analyzed consisted of IS801 carrying a fragment of variable size from gene PSPPH_0008/PSPPH_0017, showing that IS801 can mobilize DNA in vivo. Retrospective analysis of complete plasmids and genomes of P. syringae suggests, however, that most fragments of IS801 are likely the result of reorganizations rather than one-ended transpositions, and that this element might preferentially contribute to genome flexibility by generating homologous regions for recombination.
A further miniature sequence previously found to affect host range specificity and virulence, designated MITEPsy1 (100-nt), represented an average 2.4% of the total number of insertions entrapped in sacB, demonstrating for the first time the mobilization of a MITE in bacteria.
Abstract:
Trimethyltin compounds Me3SnR (R = CH=CH2, CF=CF2, or C≡CPh) are selective reagents for the synthesis of unsaturated hydrocarbyl derivatives such as trans-PtCl(R)(PPhEt2)2, by R/Cl exchange or oxidative addition (e.g., to Pt(PPh3)3); single-crystal X-ray analyses of two such compounds (R = CH=CH2 or C≡CPh) show that the trans-influence of R has only a low sensitivity to hybridisation at carbon, with sp3 > sp ⩾ sp2.
Abstract:
We investigate a simplified form of variational data assimilation in a fully nonlinear framework with the aim of extracting dynamical development information from a sequence of observations over time. Information on the vertical wind profile, w(z), and profiles of temperature, T(z, t), and total water content, qt(z, t), as functions of height, z, and time, t, are converted to brightness temperatures at a single horizontal location by defining a two-dimensional (vertical and time) variational assimilation testbed. The profiles of T and qt are updated using a vertical advection scheme. A basic cloud scheme is used to obtain the fractional cloud amount and, when combined with the temperature field, this information is converted into a brightness temperature, using a simple radiative transfer scheme. It is shown that our model exhibits realistic behaviour with regard to the prediction of cloud, but the effects of nonlinearity become non-negligible in the variational data assimilation algorithm. A careful analysis of the application of the data assimilation scheme to this nonlinear problem is presented, the salient difficulties are highlighted, and suggestions for further developments are discussed.
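The core of any variational scheme like this is minimising a cost function that balances distance from a background state against misfit to observations through a (here nonlinear) observation operator. The sketch below is a deliberately minimal scalar 1D-Var analogue with invented numbers and a stand-in operator `H`; the testbed in the abstract works on full T and qt profiles mapped to brightness temperatures:

```python
import numpy as np
from scipy.optimize import minimize

# Minimal variational assimilation sketch: one control variable,
# a nonlinear observation operator H, and synthetic values throughout.
x_b   = 1.0                     # background estimate of the control variable
y_obs = np.array([0.95, 1.30])  # two synthetic observations
B     = 0.5                     # background error variance
R     = 0.1                     # observation error variance

def H(x):
    # Stand-in nonlinear forward operator (the real system maps profiles
    # of T and qt through cloud and radiative transfer schemes).
    return np.array([np.tanh(x), x ** 2 / 2.0])

def J(x):
    # Variational cost: background term + observation misfit term.
    x = x[0]
    d = y_obs - H(x)
    return (x - x_b) ** 2 / B + d @ d / R

x_a = minimize(J, np.array([x_b])).x[0]   # analysis: the minimising state
```

With a nonlinear `H`, the cost function is non-quadratic, which is exactly where the difficulties highlighted in the abstract arise: incremental (linearised) solvers can struggle and multiple minima become possible.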
Abstract:
We present a simple device for multiplex quantitative enzyme-linked immunosorbent assays (ELISA) made from a novel melt-extruded microcapillary film (MCF) containing a parallel array of 200 µm capillaries along its length. To make ELISA devices, different protein antigens or antibodies were immobilised inside individual microcapillaries within long reels of MCF extruded from fluorinated ethylene propylene (FEP). Short pieces of coated film were cut and interfaced with a pipette, allowing sequential uptake of samples and detection solutions into all capillaries from a reagent well. As well as being simple to produce, these FEP MCF devices have excellent light transmittance, allowing direct optical interrogation of the capillaries for simple signal quantification. Proof-of-concept experiments demonstrate both quantitative and multiplex assays in FEP MCF devices using a standard direct ELISA procedure and read using a flatbed scanner. This new multiplex immunoassay platform should find applications ranging from lab detection to point-of-care and field diagnostics.
Abstract:
Nitrogen adsorption on carbon nanotubes is wide- ly studied because nitrogen adsorption isotherm measurement is a standard method applied for porosity characterization. A further reason is that carbon nanotubes are potential adsorbents for separation of nitrogen from oxygen in air. The study presented here describes the results of GCMC simulations of nitrogen (three site model) adsorption on single and multi walled closed nanotubes. The results obtained are described by a new adsorption isotherm model proposed in this study. The model can be treated as the tube analogue of the GAB isotherm taking into account the lateral adsorbate-adsorbate interactions. We show that the model describes the simulated data satisfactorily. Next this new approach is applied for a description of experimental data measured on different commercially available (and characterized using HRTEM) carbon nanotubes. We show that generally a quite good fit is observed and therefore it is suggested that the observed mechanism of adsorption in the studied materials is mainly determined by adsorption on tubes separated at large distances, so the tubes behave almost independently.
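Fitting an isotherm model of this family to adsorption data is a routine nonlinear least-squares problem. The sketch below fits the classical GAB form to synthetic data; it is shown only as the baseline that the study's tube analogue modifies, and all data values are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def gab(x, qm, C, K):
    # Classical GAB isotherm: uptake q as a function of relative pressure x,
    # with monolayer capacity qm and energetic constants C and K.
    # (The tube analogue proposed in the study modifies this form.)
    return qm * C * K * x / ((1 - K * x) * (1 - K * x + C * K * x))

# Synthetic relative-pressure / uptake data (illustrative only).
x = np.linspace(0.05, 0.8, 12)
rng = np.random.default_rng(0)
q = gab(x, 5.0, 20.0, 0.7) + rng.normal(0, 0.02, x.size)

# Recover the parameters from the noisy data.
(qm, C, K), _ = curve_fit(gab, x, q, p0=(4.0, 10.0, 0.5))
```

In practice the quality of such a fit (parameter values and residuals) is what supports the kind of mechanistic conclusion drawn in the abstract, namely that the tubes adsorb almost independently.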