980 results for Data-bank
Abstract:
After decades of slow progress, the pace of research on membrane protein structures is beginning to quicken thanks to various improvements in technology, including protein engineering and microfocus X-ray diffraction. Here we review these developments and, where possible, highlight generic new approaches to solving membrane protein structures based on recent technological advances. Rational approaches to overcoming the bottlenecks in the field are urgently required as membrane proteins, which typically comprise ~30% of the proteomes of organisms, are dramatically under-represented in the structural database of the Protein Data Bank.
Abstract:
Approximately 60% of pharmaceuticals target membrane proteins and 30% of the human genome codes for them, yet they represent less than 1% of the unique crystal structures deposited in the Protein Data Bank (PDB); 50% of the structures derived from recombinant membrane proteins were synthesized in yeasts. G protein-coupled receptors (GPCRs) are an important class of membrane proteins that are not naturally abundant in their native membranes. Unfortunately, their recombinant synthesis often suffers from low yields; moreover, function may be lost during extraction and purification from cell membranes, impeding research aimed at structural and functional determination. We therefore devised two novel strategies to improve functional yields of recombinant membrane proteins in the yeast Saccharomyces cerevisiae. We used the human adenosine A2A receptor (hA2AR) as a model GPCR since it is functionally and structurally well characterised. In the first strategy, we investigated whether it is possible to provide yeast cells with a selective advantage (SA) in producing the fusion protein hA2AR-Ura3p when grown in medium lacking uracil; Ura3p is a decarboxylase that catalyzes the sixth enzymatic step in the de novo biosynthesis of pyrimidines, generating uridine monophosphate. The first transformant (H1) selected using the SA strategy gave high total yields of hA2AR-Ura3p but low functional yields, as determined by radioligand binding, leading to the discovery that the majority of the hA2AR-Ura3p had been internalized to the vacuole. The yeast deletion strain spt3Δ is thought to have slower translation rates and improved folding capabilities compared to wild-type cells and was therefore utilised for the SA strategy to generate a second transformant, SU1, which gave higher functional yields than H1.
Subsequently, hA2AR-Ura3p from H1 was solubilised with n-dodecyl-β-D-maltoside and cholesteryl hemisuccinate, which yielded functional hA2AR-Ura3p at the highest yield of all approaches used. The second strategy used knowledge of translational processes to increase the functional yield of recombinant protein synthesis. Existing expression vectors were modified with an internal ribosome entry site (IRES) inserted into the 5′ untranslated region (UTR) of the gene encoding hA2AR, in order to circumvent regulatory controls on recombinant synthesis in the yeast host cell. The mechanisms involved were investigated using yeast deletion strains and drugs that cause translation inhibition, which is known to improve protein folding and yield. The data highlight the potential of deletion strains to increase IRES-mediated expression of recombinant hA2AR. Overall, the data presented in this thesis provide mechanistic insights into two novel strategies that can increase functional membrane protein yields in the eukaryotic microbe S. cerevisiae.
Abstract:
The Protein pKa Database (PPD) v1.0 provides a compendium of protein residue-specific ionization equilibria (pKa values), collated from the primary literature, in the form of a web-accessible PostgreSQL relational database. Ionizable residues play key roles in the molecular mechanisms that underlie many biological phenomena, including protein folding and enzyme catalysis. The PPD serves as a general protein pKa archive and as a source of data for the development and improvement of pKa prediction systems. The database is accessed through an HTML interface, which offers two fast, efficient search methods: an amino acid-based query and a Basic Local Alignment Search Tool (BLAST) search. Entries also give details of experimental techniques and links to other key resources, such as the National Center for Biotechnology Information and the Protein Data Bank, providing the user with considerable background information.
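A residue-based lookup like the PPD's amino acid query can be sketched as follows; the record fields and toy entries are assumptions for illustration only (the real PPD is a PostgreSQL database behind an HTML interface):

```python
# Hypothetical sketch of a residue-based pKa lookup, mimicking the PPD's
# amino acid query. The schema (protein, residue, position, pka) and the
# records are invented for illustration.

from dataclasses import dataclass

@dataclass
class PkaEntry:
    protein: str      # e.g. a PDB identifier
    residue: str      # three-letter amino acid code
    position: int     # sequence position
    pka: float        # experimentally determined pKa

# Toy records; real PPD entries are collated from the primary literature.
ENTRIES = [
    PkaEntry("1XYZ", "ASP", 26, 3.9),
    PkaEntry("1XYZ", "GLU", 35, 6.2),
    PkaEntry("2ABC", "HIS", 57, 6.8),
]

def query_by_residue(entries, residue):
    """Return all pKa records for a given residue type, sorted by pKa."""
    hits = [e for e in entries if e.residue == residue.upper()]
    return sorted(hits, key=lambda e: e.pka)

# Example: list all glutamate entries
for hit in query_by_residue(ENTRIES, "glu"):
    print(hit.protein, hit.position, hit.pka)
```

In the real database the same filter would be a SQL `WHERE` clause; the point here is only the shape of an amino acid-based query.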
Abstract:
Full text: The idea of producing proteins from recombinant DNA hatched almost half a century ago. In his PhD thesis, Peter Lobban foresaw the prospect of inserting foreign DNA (from any source, including mammalian cells) into the genome of a λ phage in order to detect and recover protein products from Escherichia coli [1,2]. Only a few years later, in 1977, Herbert Boyer and his colleagues succeeded in the first ever expression of a peptide-coding gene in E. coli — they produced recombinant somatostatin [3], followed shortly after by human insulin. The field has advanced enormously since those early days, and today recombinant proteins have become indispensable in advancing research and development in all fields of the life sciences. Structural biology, in particular, has benefitted tremendously from recombinant protein biotechnology, and an overwhelming proportion of the entries in the Protein Data Bank (PDB) are based on heterologously expressed proteins. Nonetheless, synthesizing, purifying and stabilizing recombinant proteins can still be thoroughly challenging. For example, the soluble proteome is organized in large part into multicomponent complexes (in humans often comprising ten or more subunits), posing critical challenges for recombinant production. A third of all proteins in cells are located in the membrane and pose special challenges that require a more bespoke approach. Recent advances may now mean that even these most recalcitrant of proteins could become tenable structural biology targets on a more routine basis. In this special issue, we examine progress in key areas that suggests this is indeed the case. Our first contribution examines the importance of understanding quality control in the host cell during recombinant protein production, and pays particular attention to the synthesis of recombinant membrane proteins.
A major challenge faced by any host cell factory is the balance it must strike between its own requirements for growth and the fact that its cellular machinery has essentially been hijacked by an expression construct. In this context, Bill and von der Haar examine emerging insights into the role of the dependent pathways of translation and protein folding in defining high-yielding recombinant membrane protein production experiments for the common prokaryotic and eukaryotic expression hosts. Rather than acting as isolated entities, many membrane proteins form complexes to carry out their functions. To understand their biological mechanisms, it is essential to study the molecular structure of the intact membrane protein assemblies. Recombinant production of membrane protein complexes is still a formidable, at times insurmountable, challenge. In these cases, extraction from natural sources is the only option to prepare samples for structural and functional studies. Zorman and co-workers, in our second contribution, provide an overview of recent advances in the production of multi-subunit membrane protein complexes and highlight recent achievements in membrane protein structural research brought about by state-of-the-art near-atomic resolution cryo-electron microscopy techniques. E. coli has been the dominant host cell for recombinant protein production. Nonetheless, eukaryotic expression systems, including yeasts, insect cells and mammalian cells, are increasingly gaining prominence in the field. The yeast species Pichia pastoris is a well-established recombinant expression system for a number of applications, including the production of a range of different membrane proteins. Byrne reviews high-resolution structures that have been determined using this methylotroph as an expression host. Although it is not yet clear why P.
pastoris is suited to producing such a wide range of membrane proteins, its ease of use and the availability of diverse tools that can be readily implemented in standard bioscience laboratories mean that it is likely to become an increasingly popular option in structural biology pipelines. The contribution by Columbus concludes the membrane protein section of this volume. In her overview of post-expression strategies, Columbus surveys the four most common biochemical approaches for the structural investigation of membrane proteins. Limited proteolysis has successfully aided structure determination of membrane proteins in many cases. Deglycosylation of membrane proteins following production and purification has also facilitated membrane protein structure analysis. Moreover, chemical modifications, such as lysine methylation and cysteine alkylation, have proven their worth in facilitating crystallization of membrane proteins, as well as NMR investigations of membrane protein conformational sampling. Together these approaches have greatly facilitated the structure determination of more than 40 membrane proteins to date. It may be an advantage to produce a target protein in mammalian cells, especially if authentic post-translational modifications such as glycosylation are required for proper activity. Chinese Hamster Ovary (CHO) cells and Human Embryonic Kidney (HEK) 293 cell lines have emerged as excellent hosts for heterologous production. The generation of stable cell lines is often an aspiration for synthesizing proteins expressed in mammalian cells, in particular if high volumetric yields are to be achieved. In his report, Buessow surveys recent structures of proteins produced using stable mammalian cells and summarizes both well-established and novel approaches to facilitate stable cell-line generation for structural biology applications. The ambition of many biologists is to observe a protein's structure in the native environment of the cell itself.
Until recently, this seemed to be more of a dream than a reality. Advances in nuclear magnetic resonance (NMR) spectroscopy techniques, however, have now made possible the observation of mechanistic events at the molecular level of protein structure. Smith and colleagues, in an exciting contribution, review emerging ‘in-cell NMR’ techniques that demonstrate the potential to monitor biological activities by NMR in real time in native physiological environments. A current drawback of NMR as a structure determination tool derives from size limitations of the molecule under investigation, and the structures of large proteins and their complexes are therefore typically intractable by NMR. A solution to this challenge is the use of selective isotope labeling of the target protein, which results in a marked reduction of the complexity of NMR spectra and allows dynamic processes to be investigated even in very large proteins and ribosomes. Kerfah and co-workers introduce methyl-specific isotopic labeling as a molecular tool-box, and review its applications to the solution NMR analysis of large proteins. Tyagi and Lemke next examine single-molecule FRET and crosslinking following the co-translational incorporation of non-canonical amino acids (ncAAs); the goal here is to move beyond static snapshots of proteins and their complexes and to observe them as dynamic entities. The encoding of ncAAs through codon-suppression technology allows biomolecules to be investigated with diverse structural biology methods. In their article, Tyagi and Lemke discuss these approaches and speculate on the design of improved host organisms for ‘integrative structural biology research’. Our volume concludes with two contributions that resolve particular bottlenecks in the protein structure determination pipeline. The contribution by Crepin and co-workers introduces the concept of polyproteins in contemporary structural biology. Polyproteins are widespread in nature.
They represent long polypeptide chains in which individual smaller proteins with different biological function are covalently linked together. Highly specific proteases then tailor the polyprotein into its constituent proteins. Many viruses use polyproteins as a means of organizing their proteome. The concept of polyproteins has now been exploited successfully to produce hitherto inaccessible recombinant protein complexes. For instance, by means of a self-processing synthetic polyprotein, the influenza polymerase, a high-value drug target that had remained elusive for decades, has been produced, and its high-resolution structure determined. In the contribution by Desmyter and co-workers, a further, often imposing, bottleneck in high-resolution protein structure determination is addressed: The requirement to form stable three-dimensional crystal lattices that diffract incident X-ray radiation to high resolution. Nanobodies have proven to be uniquely useful as crystallization chaperones, to coax challenging targets into suitable crystal lattices. Desmyter and co-workers review the generation of nanobodies by immunization, and highlight the application of this powerful technology to the crystallography of important protein specimens including G protein-coupled receptors (GPCRs). Recombinant protein production has come a long way since Peter Lobban's hypothesis in the late 1960s, with recombinant proteins now a dominant force in structural biology. The contributions in this volume showcase an impressive array of inventive approaches that are being developed and implemented, ever increasing the scope of recombinant technology to facilitate the determination of elusive protein structures. Powerful new methods from synthetic biology are further accelerating progress. Structure determination is now reaching into the living cell with the ultimate goal of observing functional molecular architectures in action in their native physiological environment. 
We anticipate that even the most challenging protein assemblies will be tackled by recombinant technology in the near future.
Abstract:
Historically, recombinant membrane protein production has been a major challenge, meaning that far fewer membrane protein structures have been published than those of soluble proteins. However, there has been a recent, almost exponential increase in the number of membrane protein structures being deposited in the Protein Data Bank. This suggests that empirical methods are now available that can ensure the required protein supply for these difficult targets. This review focuses on methods that are available for protein production in yeast, which is an important source of recombinant eukaryotic membrane proteins. We provide an overview of approaches to optimize the expression plasmid, host cell and culture conditions, as well as the extraction and purification of functional protein for crystallization trials in preparation for structural studies.
Abstract:
Membrane proteins account for a third of the eukaryotic proteome but are greatly under-represented in the Protein Data Bank. Unfortunately, recent technological advances in X-ray crystallography and EM cannot compensate for the poor solubility and stability of membrane protein samples. A limitation of conventional detergent-based methods is that detergent molecules destabilize membrane proteins, leading to their aggregation. The use of orthologues, mutants and fusion tags has helped improve protein stability, but at the expense of no longer working with the sequence of interest. Novel detergents such as glucose neopentyl glycol (GNG), maltose neopentyl glycol (MNG) and calixarene-based detergents can improve protein stability without compromising their solubilizing properties. Styrene maleic acid lipid particles (SMALPs) instead retain the native lipid bilayer of a membrane protein during purification and biophysical analysis. Overcoming bottlenecks in the membrane protein structural biology pipeline, primarily by maintaining protein stability, will facilitate the elucidation of many more membrane protein structures in the near future.
Abstract:
In the light of the empirical evidence accumulated up to 2011, can one speak of a uniform crisis pattern that generally characterizes the advanced industrial countries and can also be identified in the major economies? Can universal changes in output, labor markets, consumption and investment be established that fit earlier experience and the predictions of the established macro models? Using official statistics from the OECD data bank and the US Commerce Department, the article addresses whether generally observable recession/crisis patterns existed across all major industrial countries (the G7). The answer, at least at the time of writing, is a firm no: neither in the character of the crisis and the pace of macroeconomic deterioration, nor in the depth and duration of the downturn, are there clearly identifiable common features that fit the existing theoretical frameworks. Changes and volatility in most major macroeconomic indicators, such as the output gap, labor market distortions and large deviations from trend in consumption and investment, all exhibited wide differences in depth and width across the G7 countries. The study reviews the empirical literature on crises and macroeconomic shocks considered relevant in light of interpretations of financial globalization, and then assesses the performance of the US economy in recession periods over a span of 60 years, so that the severity of the recent crisis can be judged objectively, at least as regards the order of magnitude of movements in the major macro variables.
The large deviations in output gaps, and especially the strong distortions in labor market inputs and hours worked per capita over the crisis months, can hardly be explained by the existing model classes of DSGE or real business cycle theory. Especially troubling are the difficulties in fitting the data into any established model, whether a business cycle model or some other type in which financial distress reduces economic activity. It is argued that standard business cycle models with financial market imperfections have no mechanism for generating deviations from standard theory, and thus shed no light on the key factors underlying the 2007–2009 recession. That does not imply that the financial crisis is unimportant for understanding the recession, but it does indicate that we do not fully understand the channels through which financial distress reduced labor input. Long historical trends in the privately held portion of the federal debt in the US economy indicate that the standard macro proposition that public debt crowds out private investment and thus inhibits growth can be strongly challenged, insofar as this ratio is a direct indicator of neither slowing growth nor recession.
Abstract:
Intense precipitation events (IPE) have been causing great social and economic losses in the affected regions. In the Amazon, these events can have serious impacts, primarily for populations living on the margins of its countless rivers, because when water levels are elevated, floods and/or inundations are generally observed. Thus, the main objective of this research is to study IPE through Extreme Value Theory (EVT), in order to estimate return periods of these events and identify the regions of the Brazilian Amazon where IPE are largest. The study used daily rainfall data from the hydrometeorological network managed by the National Water Agency (Agência Nacional de Água) and from the Meteorological Data Bank for Education and Research (Banco de Dados Meteorológicos para Ensino e Pesquisa) of the National Institute of Meteorology (Instituto Nacional de Meteorologia), covering the period 1983-2012. First, homogeneous rainfall regions were determined through cluster analysis, using the hierarchical agglomerative Ward method. Then synthetic series representing the homogeneous regions were created. Next, EVT was applied to these series through the Generalized Extreme Value (GEV) distribution and the Generalized Pareto Distribution (GPD). The goodness of fit of these distributions was evaluated with the Kolmogorov-Smirnov test, which compares the empirical cumulative distributions with the theoretical ones. Finally, the composition technique was used to characterize the prevailing atmospheric patterns for the occurrence of IPE. The results suggest that the Brazilian Amazon has six homogeneous rainfall regions. More severe IPE are expected to occur in the south and on the Amazon coast.
More intense rainfall events are expected during the rainy or transition seasons of each sub-region, with total daily precipitation of 146.1, 143.1 and 109.4 mm (GEV) and 201.6, 209.5 and 152.4 mm (GPD), at least once a year, in the south, on the coast and in the northwest of the Brazilian Amazon, respectively. For southern Amazonia, the composition analysis revealed that IPE are associated with the configuration and formation of the South Atlantic Convergence Zone. Along the coast, intense precipitation events are associated with mesoscale systems, such as squall lines. In northwest Amazonia, IPE are apparently associated with the Intertropical Convergence Zone and/or local convection.
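The EVT workflow described above (fit a GEV distribution to extreme-rainfall series, check goodness of fit with a Kolmogorov-Smirnov test, and derive return levels) can be sketched in a few lines. The synthetic series and all parameter values below are stand-ins, not the study's data:

```python
# Minimal sketch of the abstract's EVT pipeline, on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic "annual maximum daily rainfall" series (mm); stand-in data.
annual_max = stats.genextreme.rvs(c=-0.1, loc=100, scale=20,
                                  size=30, random_state=rng)

# Fit the GEV (scipy's genextreme; note scipy's shape-sign convention).
c, loc, scale = stats.genextreme.fit(annual_max)

# Kolmogorov-Smirnov test: empirical CDF vs the fitted theoretical CDF.
ks_stat, p_value = stats.kstest(annual_max, "genextreme", args=(c, loc, scale))

# Return level for a T-year return period: the (1 - 1/T) quantile.
T = 50
return_level = stats.genextreme.ppf(1 - 1 / T, c, loc, scale)
print(f"KS p-value: {p_value:.3f}, {T}-year return level: {return_level:.1f} mm")
```

The GPD branch of the study would follow the same pattern with `stats.genpareto` fitted to threshold exceedances rather than block maxima.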
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
Nucleic acids (DNA and RNA) play essential roles in the central dogma of biology for the storage and transfer of genetic information. The unique chemical and conformational structures of nucleic acids, the double helix composed of complementary Watson-Crick base pairs, provide the structural basis for their biological functions. The DNA double helix can dynamically accommodate both Watson-Crick and Hoogsteen base-pairing, in which the purine base is flipped by ~180° to adopt the syn rather than the anti conformation found in Watson-Crick base pairs. There is growing evidence that Hoogsteen base pairs play important roles in DNA replication, recognition, damage or mispair accommodation, and repair. Here, we constructed a database of existing Hoogsteen base pairs in DNA duplexes through a structure-based survey of the Protein Data Bank, and structural analyses of the resulting Hoogsteen structures revealed that Hoogsteen base pairs occur in a wide variety of biological contexts and can induce DNA kinking towards the major groove. Given documented difficulties in distinguishing Hoogsteen from Watson-Crick base pairs by crystallography, we collaborated with the Richardson lab and identified potential Hoogsteen base pairs that had been mis-modeled as Watson-Crick, suggesting that Hoogsteen base pairs may be more prevalent than previously thought. We developed a solution NMR method, combined with site-specific isotope labeling, to characterize the formation of, or conformational exchange with, Hoogsteen base pairs in large DNA-protein complexes under solution conditions, in the absence of crystal packing forces. We showed that there is enhanced chemical exchange, potentially between Watson-Crick and Hoogsteen, at a sharp kink site in the complex formed by DNA and the Integration Host Factor protein. In stark contrast to B-form DNA, we found that Hoogsteen base pairs are strongly disfavored in A-form RNA duplexes.
The chemical modifications N1-methyladenosine and N1-methylguanosine, which block Watson-Crick base-pairing, can be absorbed as Hoogsteen base pairs in DNA, but potently destabilize A-form RNA and cause helix melting. The intrinsic instability of Hoogsteen base pairs in A-form RNA makes N1-methylation effective as a post-transcriptional modification, known to facilitate RNA folding and translation and potentially to play roles in the epitranscriptome. On the other hand, the dynamic ability of DNA to accommodate Hoogsteen base pairs could be critical to maintaining genome stability.
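A structure-based survey of this kind might flag candidate Hoogsteen base pairs from the purine glycosidic torsion, since Hoogsteen pairing requires the syn conformation. The torsion window and the records below are illustrative assumptions, not the actual survey criteria:

```python
# Toy sketch of flagging Hoogsteen candidates by glycosidic torsion (chi).
# In Hoogsteen geometry the purine is syn; in Watson-Crick it is anti.
# The syn window used here is a rough assumption for illustration.

def is_syn(chi_deg):
    """Crude syn/anti call from the glycosidic torsion angle in degrees."""
    chi = chi_deg % 360
    return 0 <= chi <= 120   # assumed syn window

# Invented base-pair records standing in for parsed PDB entries.
base_pairs = [
    {"pdb": "XXXX", "purine": "A", "chi": -160},  # anti: Watson-Crick-like
    {"pdb": "YYYY", "purine": "G", "chi": 55},    # syn: Hoogsteen candidate
]

hoogsteen_candidates = [bp for bp in base_pairs if is_syn(bp["chi"])]
print(hoogsteen_candidates)
```

A real survey would also check hydrogen-bonding geometry and C1′-C1′ distance before calling a pair Hoogsteen; the torsion filter is only the first screen.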
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
VITULLO, Nadia Aurora Vanti. Avaliação do banco de dissertações e teses da Associação Brasileira de Antropologia: uma análise cienciométrica. 2001. 143 f. Dissertação (Mestrado) - Curso de Mestrado em Biblioteconomia e Ciência da Informação, Pontifícia Universidade Católica de Campinas, Campinas, 2001.
Abstract:
Integral membrane proteins play an indispensable role in cell survival, and 20-30% of open reading frames code for this class of proteins. The majority of membrane proteins in the Protein Data Bank do not have a known orientation and insertion. The orientation, insertion and conformation that membrane proteins adopt when they interact with a lipid bilayer are important for understanding their function, but these characteristics are difficult to obtain by experimental methods. Computational methods can reduce the time and cost of identifying the characteristics of membrane proteins. In this master's project, we propose a new computational method that predicts the orientation and insertion of a protein in a membrane. The method is based on potentials of mean force for the membrane insertion of amino acid side chains in a model membrane composed of dioleoylphosphatidylcholine.
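The idea of scoring membrane insertion with side-chain potentials of mean force can be illustrated with a toy grid search over insertion depth. The PMF function, the idealized helix and all numerical values below are invented for illustration and are not the thesis' parameters:

```python
# Toy sketch: pick the insertion depth that minimizes the summed
# per-residue transfer free energies (PMF) across a model bilayer.
import math

def pmf(residue, z):
    """Assumed transfer free energy (kcal/mol) of a residue at depth z
    (Angstroms from the bilayer center). Invented functional forms."""
    if residue == "LEU":
        return -2.0 * math.exp(-(z / 10.0) ** 2)   # favors the hydrophobic core
    if residue == "LYS":
        return 3.0 * math.exp(-(z / 10.0) ** 2)    # penalized in the core
    return 0.0

def insertion_energy(sequence, z_center, spacing=1.5):
    """Total PMF of an idealized helix whose middle sits at z_center;
    residues are spaced `spacing` Angstroms apart along the membrane normal."""
    n = len(sequence)
    return sum(pmf(res, z_center + (i - n / 2) * spacing)
               for i, res in enumerate(sequence))

def best_insertion(sequence, z_range=range(-30, 31)):
    """Grid-search the insertion depth with the lowest total energy."""
    return min(z_range, key=lambda z: insertion_energy(sequence, z))

helix = ["LEU"] * 15 + ["LYS"] * 4        # hydrophobic stretch + charged cap
print(f"optimal insertion depth: {best_insertion(helix)} A")
```

The real method additionally searches over tilt and rotation angles and uses side-chain PMFs derived for a dioleoylphosphatidylcholine bilayer; the grid search over a single depth coordinate is only the simplest version of the same optimization.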
Abstract:
This work analyzes information technology (IT) risks in data migration procedures. It considers ALEPH, an Integrated Library System (ILS) whose data were migrated to the Library Module of the software Sistema Integrado de Gestão de Atividades Acadêmicas (SIGAA) at the Zila Mamede Central Library of the Federal University of Rio Grande do Norte (UFRN) in Natal, Brazil. The methodological procedure was qualitative exploratory research, with a case study conducted at the library in order to better understand this phenomenon. Data were collected through a semi-structured interview applied to eleven (11) subjects employed at the library and in the Technology Superintendence at UFRN, and were examined through content analysis and a thematic review process. After data migration, the interview results were linked to the analysis units and their system registers by category correspondence. The main risks detected were data destruction, data loss, database communication failure, user response delay, data inconsistency and data duplication. These problems affect external and internal system users and lead to stress, duplicated work and inconvenience. Some risk management measures were therefore adopted, such as adequate planning, central management support and pilot test simulations; these reduced risk, the occurrence of problems and possible unforeseen costs, and helped achieve organizational objectives, among other benefits. It is therefore inferred that risks in database conversion in libraries exist and some are predictable; however, librarians often do not know, or ignore, these risks and show little concern for identifying them, even though acknowledging them would minimize or even eliminate them.
Another important aspect to consider is the scarcity of empirical research dealing specifically with this subject, which points to the need for new approaches to promote a better understanding of the matter in the corporate environment of information units.
Abstract:
For nearly two centuries, gas hydrates have gained an important role in process engineering because of their economic and environmental impact on industry. Every day more companies and engineers take an interest in this topic, as new challenges show gas hydrates to be a crucial factor, making their study a solution for the near future. Gas hydrates are ice-like structures composed of host water molecules containing gaseous compounds. They exist naturally under conditions of high pressure and low temperature, conditions typical of some chemical and petrochemical processes [1]. Based on the doctoral work of Windmeier [2] and the doctoral work of Rock [3], the thermodynamic description of the gas hydrate phases is implemented following the state of the art in science and technology. With the help of the Dortmund Data Bank (DDB) and the corresponding software package (DDBSP) [26], the performance of the method was improved and compared against a large amount of data published around the world. The applicability of gas hydrate prediction was also studied with a focus on process engineering, with a case study related to the extraction, production and transport of natural gas. It was determined that gas hydrate prediction is crucial in natural gas process design: while no hydrate formation occurs in the gas treatment and liquid processing stages, in the dehydration stage a minimum temperature of 290.15 K is critical, and for extraction and transport the use of inhibitors is essential. A mass composition of 40% ethylene glycol was found appropriate to prevent gas hydrate formation in extraction, and a mass composition of 20% methanol in transport.