957 results for DATABASES
Abstract:
INTRODUCTION HIV care and treatment programmes worldwide are transforming as they push to deliver universal access to essential prevention, care and treatment services to persons living with HIV and their communities. The characteristics and capacity of these HIV programmes affect patient outcomes and quality of care. Despite the importance of ensuring optimal outcomes, few studies have addressed the capacity of HIV programmes to deliver comprehensive care. We sought to describe such capacity in HIV programmes in seven regions worldwide. METHODS Staff from 128 sites in 41 countries participating in the International epidemiologic Databases to Evaluate AIDS completed a site survey from 2009 to 2010, including sites in the Asia-Pacific region (n=20), Latin America and the Caribbean (n=7), North America (n=7), Central Africa (n=12), East Africa (n=51), Southern Africa (n=16) and West Africa (n=15). We computed a measure of the comprehensiveness of care based on seven World Health Organization-recommended essential HIV services. RESULTS Most sites reported serving urban populations (61%; region range (rr): 33-100%) and both adult and paediatric populations (77%; rr: 29-96%). Only 45% of HIV clinics that reported treating children had paediatricians on staff. As for the seven essential services, survey respondents reported that CD4+ cell count testing was available at all but one site, while tuberculosis (TB) screening and community outreach services were available at 80% and 72% of sites, respectively. The remaining four essential services - nutritional support (82%), combination antiretroviral therapy adherence support (88%), prevention of mother-to-child transmission (PMTCT) (94%) and other prevention and clinical management services (97%) - were widely, though not universally, available. Approximately half (46%) of sites reported offering all seven services. Newer sites and sites in settings with low rankings on the UN Human Development Index (HDI), especially those in President's Emergency Plan for AIDS Relief focus countries, tended to offer a more comprehensive array of essential services; programme characteristics and comprehensiveness thus varied with the number of years a site had been in operation and with the HDI of its setting, with more recently established clinics in low-HDI settings reporting a more comprehensive array of available services. Survey respondents frequently identified contact tracing of patients, patient outreach, nutritional counselling, onsite viral load testing, universal TB screening and the provision of isoniazid preventive therapy as unavailable services. CONCLUSIONS This study serves as a baseline for ongoing monitoring of the evolution of care delivery over time and lays the groundwork for evaluating HIV treatment outcomes in relation to site capacity for comprehensive care.
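The comprehensiveness measure described above is, in essence, a count of how many of the seven WHO-recommended essential services a site reports. A minimal sketch of such a score, assuming each site is recorded as a set of yes/no service indicators (the field names below are invented for illustration, not the survey's):

# Hedged sketch (not the study's actual code): score each site by how many of the
# seven WHO-recommended essential services it reports offering.
ESSENTIAL_SERVICES = [
    "cd4_testing", "tb_screening", "community_outreach", "nutritional_support",
    "art_adherence_support", "pmtct", "prevention_clinical_management",
]

def comprehensiveness(site: dict) -> int:
    """Number of essential services (0-7) the site reports as available."""
    return sum(1 for service in ESSENTIAL_SERVICES if site.get(service, False))

sites = [
    {"cd4_testing": True, "tb_screening": True, "community_outreach": False,
     "nutritional_support": True, "art_adherence_support": True,
     "pmtct": True, "prevention_clinical_management": True},
]

for site in sites:
    score = comprehensiveness(site)
    print(score, "offers all seven" if score == 7 else "partial coverage")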
Abstract:
Background Simple Sequence Repeats (SSRs) are widely used in population genetic studies, but their classical development is costly and time-consuming. The ever-increasing DNA datasets generated by high-throughput techniques offer an inexpensive alternative for SSR discovery. Expressed Sequence Tags (ESTs) have been widely used as an SSR source for plants of economic relevance, but their application to non-model species is still modest. Methods Here, we explored the use of publicly available ESTs (GenBank at the National Center for Biotechnology Information, NCBI) for SSR development in non-model plants, focusing on genera listed by the International Union for Conservation of Nature (IUCN). We also searched two model genera with fully annotated genomes, Arabidopsis and Oryza, for EST-SSRs and used them as controls for genome distribution analyses. Overall, we downloaded 16 031 555 sequences for 258 plant genera, which were mined for SSRs and their primers with the help of QDD1. Genome distribution analyses in Oryza and Arabidopsis were carried out by BLASTing the SSR-containing sequences against the Oryza sativa and Arabidopsis thaliana reference genomes using the Basic Local Alignment Search Tool (BLAST) on the NCBI website. Finally, we performed an empirical test of the performance of our EST-SSRs in a few individuals from four species of two eudicot genera, Trifolium and Centaurea. Results We explored a total of 14 498 726 EST sequences from the dbEST database (NCBI) in 257 plant genera from the IUCN Red List. We identified a very large number (17 102) of ready-to-test EST-SSRs in most plant genera (193) at no cost. Overall, dinucleotide and trinucleotide repeats were the prevalent types, but the abundance of the various repeat types differed between taxonomic groups. The control genomes revealed that trinucleotide repeats were mostly located in coding regions, while dinucleotide repeats were largely associated with untranslated regions. Our empirical test revealed considerable amplification success and transferability between congeneric species. Conclusions The present work represents the first large-scale study developing SSRs from publicly accessible EST databases in threatened plants. Here we provide a very large number of ready-to-test EST-SSRs (17 102) for 193 genera. The cross-species transferability suggests that the number of possible target species is large. Since trinucleotide repeats are abundant and mainly linked to exons, they may be useful in evolutionary and conservation studies. Altogether, our study strongly supports the use of EST databases as an extremely affordable and fast alternative for SSR development in threatened plants.
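For illustration only, a minimal sketch of the kind of microsatellite scan such a pipeline performs; the study itself used QDD for SSR detection and primer design, and the repeat-count thresholds below are assumptions rather than its settings:

# Toy SSR scan over an EST sequence: find di- and trinucleotide repeats above
# assumed minimum repeat counts (>=6 and >=5 copies, respectively).
import re

MIN_REPEATS = {2: 6, 3: 5}  # motif length -> assumed minimum number of copies

def find_ssrs(seq: str):
    hits = []
    for motif_len, min_rep in MIN_REPEATS.items():
        # e.g. ([ACGT]{2})\1{5,} : a 2-base motif followed by at least 5 more copies
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (motif_len, min_rep - 1))
        for m in pattern.finditer(seq.upper()):
            hits.append((m.start(), m.group(1), len(m.group(0)) // motif_len))
    return hits  # list of (position, motif, copy number)

est = "TTTAGAGAGAGAGAGAGACCGATCATCATCATCATCATCGGA"
for pos, motif, copies in find_ssrs(est):
    print(f"SSR at {pos}: ({motif})x{copies}")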
Abstract:
The population of space debris has increased drastically in recent years. Collisions involving massive objects may produce large numbers of fragments, leading to significant growth of the space debris population. An effective remediation measure to stabilize the population in low Earth orbit (LEO) is therefore the removal of large, massive space debris. To remove these objects, not only precise orbits but also more detailed information about their attitude states will be required. One important property of an object targeted for removal is its spin period and spin axis orientation. If we observe a rotating object, the observer sees different surface areas of the object, which leads to changes in the measured intensity. Rotating objects therefore produce periodic brightness variations with frequencies related to their spin periods. Photometric monitoring is thus a key tool for remote diagnostics of a satellite's rotation around its center of mass. This information is also useful, for example, in case of contingency. Moreover, it is important to take the orientation of a non-spherical body (e.g. space debris) into account in the numerical integration of its motion when a close approach with another spacecraft is predicted. We introduce two databases of light curves: the AIUB database, which contains about a thousand light curves of LEO, MEO and high-altitude debris objects (including a few functional objects) obtained over more than seven years, and the database of the Astronomical Observatory of Odessa University (Ukraine), which contains the results of more than 10 years of photometric monitoring of functioning satellites and large space debris objects in low Earth orbit. AIUB used its 1 m ZIMLAT telescope for all light curves. For tracking low-orbit satellites, the Astronomical Observatory of Odessa used the KT-50 telescope, which has an alt-azimuth mount and allows tracking of objects moving at high angular velocity. The diameter of the KT-50 main mirror is 0.5 m, and the focal length is 3 m. The Odessa atlas of light curves includes almost 5,500 light curves for ~500 correlated objects from the period 2005-2014. Processing the light curves and determining the rotation period in the inertial frame is challenging. Extracted frequencies and reconstructed phases for some interesting targets, e.g. GLONASS satellites, for which SLR data were also available for confirmation, will be presented. The rotation of the Envisat satellite after its sudden failure will be analyzed: the deceleration of its rotation rate over three years is studied, together with an attempt to determine the orientation of its rotation axis.
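A minimal sketch of the period-extraction step described above, using a Lomb-Scargle periodogram on a synthetic, unevenly sampled light curve. It assumes astropy is available and is not the AIUB or Odessa processing chain; epochs, magnitudes and the frequency grid are placeholders:

# Estimate a candidate spin period from an unevenly sampled light curve.
import numpy as np
from astropy.timeseries import LombScargle  # assumes astropy is installed

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 600.0, 400))        # observation epochs [s]
true_period = 37.0                               # synthetic spin period [s]
mag = 0.3 * np.sin(2 * np.pi * t / true_period) + 0.02 * rng.normal(size=t.size)

frequency, power = LombScargle(t, mag).autopower(minimum_frequency=1 / 300,
                                                 maximum_frequency=1.0)
best = frequency[np.argmax(power)]
print(f"best-fit light-curve period: {1 / best:.2f} s")
# Note: for a symmetric body the brightness period can be half the true rotation
# period, so the spin period may be twice the periodogram peak.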
Abstract:
This paper describes the spatial data handling procedures used to create a vector database of the Connecticut shoreline from Coastal Survey Maps. The appendix contains detailed information on how the procedures were implemented using Geographic Transformer Software 5 and ArcGIS 8.3. The work was a joint project of the Connecticut Department of Environmental Protection and the University of Connecticut Center for Geographic Information and Analysis.
Abstract:
Correct species identifications are of tremendous importance for invasion ecology, as mistakes could lead to misdirecting limited resources against harmless species or to inaction against problematic ones. DNA barcoding is becoming a promising and reliable tool for species identification; however, the efficacy of such molecular taxonomy depends on gene region(s) that provide a unique sequence to differentiate among species and on the availability of reference sequences in existing genetic databases. Here, we assembled a list of aquatic and terrestrial non-indigenous species (NIS) and checked two leading genetic databases for corresponding sequences of six genome regions used for DNA barcoding. The genetic databases were checked in 2010, 2012, and 2016. All four aquatic kingdoms (Animalia, Chromista, Plantae and Protozoa) were initially roughly equally represented in the genetic databases, with 64, 65, 69, and 61% of NIS included, respectively. Sequences for terrestrial NIS were present at rates of 58 and 78% for Animalia and Plantae, respectively. Six years later, coverage of aquatic NIS had increased to 75, 75, 74, and 63%, respectively, while coverage of terrestrial NIS had increased to 74 and 88%, respectively. Genetic databases are marginally better populated with sequences of terrestrial NIS of plants than with aquatic NIS and terrestrial NIS of animals. The rate at which sequences are added to databases is not equal among taxa. Though some groups of NIS - mostly aquatic ones - are not detectable at all based on available data, encouragingly, the current availability of sequences for taxa with environmental and/or economic impact is relatively good and continues to increase with time.
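A minimal sketch of such a database-coverage check, assuming Biopython's Entrez interface and GenBank's nucleotide database; the marker, query syntax and species names below are illustrative only, not the study's actual protocol:

# Count GenBank nucleotide records for one barcoding marker (here COI) per species,
# roughly what a coverage check of a genetic database might do.
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI requires a contact address (placeholder)

species_list = ["Dreissena polymorpha", "Carcinus maenas"]  # example non-indigenous species

for species in species_list:
    term = f'"{species}"[Organism] AND (COI[Gene] OR COX1[Gene])'
    handle = Entrez.esearch(db="nucleotide", term=term, retmax=0)
    record = Entrez.read(handle)
    handle.close()
    print(species, "records found:", record["Count"])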
Abstract:
Despite the fact that input–output (IO) tables form a central part of the System of National Accounts, each country's national IO table exhibits features and characteristics that differ to a greater or lesser extent, reflecting the country's socioeconomic idiosyncrasies. Consequently, the compilers of a multi-regional input–output table (MRIOT) are advised to thoroughly examine the conceptual as well as methodological differences among countries in the estimation of basic statistics for national IO tables and, if necessary, to pre-adjust these tables into a common format prior to MRIOT compilation. The objective of this study is to provide a practical guide for harmonizing national IO tables to construct a consistent MRIOT, referring to the adjustment practices used by the Institute of Developing Economies, JETRO (IDE-JETRO) in compiling the Asian International Input–Output Table.
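As a toy illustration of one harmonization step, pre-adjusting a national table into a common sector classification can be written with a 0/1 concordance (aggregation) matrix C, so that T_common = C * T_national * C^T. A minimal sketch with made-up numbers; IDE-JETRO's actual adjustment practices involve far more than this:

# Harmonize a national IO transactions block into a common 2-sector classification.
import numpy as np

# National table: 4 domestic sectors (intermediate transactions only, placeholder values).
T_national = np.array([
    [10.0,  2.0, 1.0, 0.0],
    [ 3.0, 20.0, 4.0, 1.0],
    [ 0.5,  6.0, 9.0, 2.0],
    [ 1.0,  1.5, 2.0, 5.0],
])

# Concordance: rows = common sectors, columns = national sectors.
C = np.array([
    [1, 1, 0, 0],   # common sector 1 = national sectors 1 + 2
    [0, 0, 1, 1],   # common sector 2 = national sectors 3 + 4
])

T_common = C @ T_national @ C.T
print(T_common)                                        # 2x2 harmonized block
assert np.isclose(T_common.sum(), T_national.sum())   # aggregation preserves totals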
Abstract:
Geographic Information Systems (GIS) are developed to handle enormous volumes of data and are equipped with numerous functionalities intended to capture, store, edit, organise, process, analyse and represent geographically referenced information. On the other hand, industrial simulators for driver training are real-time applications that require a virtual environment, either geospecific, geogeneric or a combination of the two, over which the simulation programs are run. Ultimately, this environment constitutes a geographic location with its specific characteristics of geometry, appearance, functionality, topography, etc. The set of elements that enables the virtual simulation environment to be created, and in which the simulator user can move, is usually called the Visual Database (VDB). The main idea behind the work being developed addresses a topic of major interest in the field of industrial training simulators: the problem of analysing, structuring and describing the virtual environments to be used in large driving simulators. This paper sets out a methodology that uses the capabilities and benefits of Geographic Information Systems for organising, optimising and managing the Visual Database of the simulator and for generally enhancing the quality and performance of the simulator.
Abstract:
The need to refine models for best-estimate calculations, based on good-quality experimental data, has been expressed at many recent meetings in the field of nuclear applications. The modeling needs arising in this respect should not be limited to the currently available macroscopic methods but should be extended to next-generation analysis techniques that focus on more microscopic processes. One of the most valuable databases identified for thermal-hydraulics modeling was developed by the Nuclear Power Engineering Corporation (NUPEC), Japan. From 1987 to 1995, NUPEC performed steady-state and transient critical power and departure from nucleate boiling (DNB) test series based on equivalent full-size mock-ups. Considering the reliability not only of the measured data but also of other relevant parameters such as system pressure, inlet sub-cooling and rod surface temperature, these test series supplied the first substantial database for the development of truly mechanistic and consistent models for boiling transition and critical heat flux. Over the last few years the Pennsylvania State University (PSU), under the sponsorship of the U.S. Nuclear Regulatory Commission (NRC), has prepared, organized, conducted and summarized the OECD/NRC Full-size Fine-mesh Bundle Tests (BFBT) Benchmark. The international benchmark activities have been conducted in cooperation with the Nuclear Energy Agency/Organization for Economic Co-operation and Development (NEA/OECD) and the Japan Nuclear Energy Safety (JNES) Organization, Japan. Consequently, JNES made the Boiling Water Reactor (BWR) NUPEC database available for the purposes of the benchmark. Based on the success of the OECD/NRC BFBT benchmark, JNES decided to also release the data from the NUPEC Pressurized Water Reactor (PWR) subchannel and bundle tests for a follow-up international benchmark entitled the OECD/NRC PWR Subchannel and Bundle Tests (PSBT) benchmark. This paper presents an application of the joint Pennsylvania State University/Technical University of Madrid (UPM) version of the well-known subchannel code COBRA-TF, namely CTF, to the critical power and departure from nucleate boiling (DNB) exercises of the OECD/NRC BFBT and PSBT benchmarks.
Abstract:
Over the last few years, the Pennsylvania State University (PSU), under the sponsorship of the US Nuclear Regulatory Commission (NRC), has prepared, organized, conducted, and summarized two international benchmarks based on the NUPEC data: the OECD/NRC Full-Size Fine-Mesh Bundle Test (BFBT) Benchmark and the OECD/NRC PWR Sub-Channel and Bundle Test (PSBT) Benchmark. The benchmark activities have been conducted in cooperation with the Nuclear Energy Agency/Organization for Economic Co-operation and Development (NEA/OECD) and the Japan Nuclear Energy Safety (JNES) Organization. This paper presents an application of the joint Penn State University/Technical University of Madrid (UPM) version of the well-known sub-channel code COBRA-TF (Coolant Boiling in Rod Array-Two Fluid), namely CTF, to the steady-state critical power and departure from nucleate boiling (DNB) exercises of the OECD/NRC BFBT and PSBT benchmarks. The goal is two-fold: first, to assess the code's models and examine their strengths and weaknesses; and second, to identify areas for improvement.
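For readers unfamiliar with the figure of merit in the DNB exercises, a minimal sketch of how a departure-from-nucleate-boiling ratio (DNBR, the predicted critical heat flux divided by the local heat flux) is evaluated along a heated channel; the profiles below are placeholders, not benchmark data or CTF output:

# Locate the minimum DNBR along a channel and compare it with a limit of 1.0.
import numpy as np

z = np.linspace(0.0, 3.66, 25)                       # axial positions [m]
q_local = 1.2e6 * np.sin(np.pi * z / z[-1]) + 1e5    # local heat flux [W/m^2], symmetric profile
q_chf = 2.8e6 - 4.0e5 * (z / z[-1])                  # critical heat flux from some correlation [W/m^2]

dnbr = q_chf / q_local
i_min = int(np.argmin(dnbr))
print(f"minimum DNBR = {dnbr[i_min]:.2f} at z = {z[i_min]:.2f} m")
print("DNB predicted" if dnbr[i_min] < 1.0 else "no DNB predicted")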
Abstract:
The main objective of this project was to introduce machine learning into the FleSe application. FleSe is a web application that runs fuzzy queries over crisp databases; to do so, it uses a set of criteria that define the fuzzy concepts employed in the queries, and it allows users to personalize those criteria. This is where machine learning is introduced, so that the default criteria change and are learned from the personalizations that users make. Secondary objectives were to become familiar with web development and design, and to refresh and extend knowledge of fuzzy logic and the Ciao-Prolog logic programming language. Over the course of the project, and especially after studying the results, it became clear that grouping users is what distinguishes this version from the previous one. The idea is the following: a machine learning algorithm could be run over the criterion personalizations of all users, but the great diversity of user opinions can lead the algorithm to infer erroneous or unrepresentative criteria. To solve this problem, users are first clustered so that each group shares roughly the same opinion or criterion about a concept, and the learning algorithm is then applied within each group to refine that group's default criterion.
Possible improvements for future versions of FleSe include better control and handling of the plserver executable. This file allows the web application to use the Ciao-Prolog logic programming language to carry out the fuzzy logic associated with the queries; one of its most serious problems is that it blocks the execution thread when it tries to load a file containing errors, and if this happens repeatedly it blocks all subsequent requests and hence the application. With users and potential clients in mind, it would also be important to allow FleSe to work with SQL databases instead of storing the data in Prolog files. Another possible improvement would be to base the user clustering on different features depending on the fuzzy concepts to be used in the queries; each fuzzy concept would then yield its own groups of users with distinct opinions about that concept, producing more precise default criteria for each user and each fuzzy concept.
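A minimal sketch of the group-then-learn idea described above (not FleSe's Ciao-Prolog implementation): cluster users by their personalized thresholds for one fuzzy concept and take each cluster's centroid as that group's default criterion. It assumes scikit-learn; the concept, thresholds and cluster count are invented for illustration:

# Cluster users by their fuzzy-concept customizations, then derive per-group defaults.
import numpy as np
from sklearn.cluster import KMeans

# Each row: one user's customization of the fuzzy concept "expensive restaurant"
# as (price where membership starts, price where membership reaches 1), in euros.
user_criteria = np.array([
    [10, 30], [12, 35], [15, 40],      # frugal users
    [40, 80], [45, 90], [50, 100],     # less price-sensitive users
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(user_criteria)
for group in range(kmeans.n_clusters):
    default = kmeans.cluster_centers_[group]
    members = np.flatnonzero(kmeans.labels_ == group)
    print(f"group {group}: default criterion ({default[0]:.0f}, {default[1]:.0f}) euros; users {list(members)}")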
Abstract:
Expressed sequence tags (ESTs) are randomly sequenced cDNA clones. Currently, nearly 3 million human and 2 million mouse ESTs provide valuable resources that enable researchers to investigate the products of gene expression. The EST databases have proven to be useful tools for detecting homologous genes, mapping exons, revealing differential splicing, and so on. With the increasing availability of large amounts of poorly characterised eukaryotic (notably human) genomic sequence, ESTs have now become a vital tool for gene identification, sometimes yielding the only unambiguous evidence for the existence of a gene expression product. However, BLAST-based Web servers available to the general user have not kept pace with these developments and do not provide appropriate tools for querying EST databases with large, highly spliced genes, often spanning 50 000–100 000 bases or more. Here we describe Gene2EST (http://woody.embl-heidelberg.de/gene2est/), a server that brings together a set of tools enabling efficient retrieval of ESTs matching large DNA queries and their subsequent analysis. RepeatMasker is used to mask dispersed repetitive sequences (such as Alu elements) in the query, BLAST2 for searching EST databases and Artemis for graphical display of the findings. Gene2EST combines these components into a Web resource targeted at the researcher who wishes to study one or a few genes to a high level of detail.
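An offline approximation of the mask-then-search steps that such a server chains together might look like the sketch below. The original used RepeatMasker and BLAST2, whereas the blastn invocation here uses current BLAST+ flags, and all file and database names are placeholders:

# Mask repeats in a large genomic query, then search a local EST database.
import subprocess

query = "gene_region.fa"     # large genomic query (placeholder path)
est_db = "est_human"         # pre-formatted local EST BLAST database (placeholder name)

# 1. Mask dispersed repeats (e.g. Alu elements); RepeatMasker writes gene_region.fa.masked.
subprocess.run(["RepeatMasker", "-species", "human", query], check=True)

# 2. Search the EST database with the masked query; tabular output for later filtering.
subprocess.run([
    "blastn", "-query", f"{query}.masked", "-db", est_db,
    "-evalue", "1e-10", "-outfmt", "6", "-out", "est_hits.tsv",
], check=True)

# 3. The tabular hits (est_hits.tsv) can then be inspected, e.g. alongside the query
#    in a viewer such as Artemis.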
Abstract:
The ARKdb genome databases provide comprehensive public repositories for genome mapping data from farmed species and other animals (http://www.thearkdb.org), providing a resource similar in function to that offered by GDB or MGD for human or mouse genome mapping data, respectively. Because we have attempted to build a generic mapping database, the system has wide utility, particularly for those species for which development of a specific resource would be prohibitive. The ARKdb genome database model has been implemented for 10 species to date: pig, chicken, sheep, cattle, horse, deer, tilapia, cat, turkey and salmon. Access to the ARKdb databases is effected via the World Wide Web using the ARKdb browser and Anubis map viewer. The information stored includes details of loci, maps, experimental methods and the source references. Links to other information sources such as PubMed and EMBL/GenBank are provided. Responsibility for data entry and curation is shared amongst scientists active in genome research in the species of interest. Mirror sites in the United States are maintained in addition to the central genome server at Roslin.
Abstract:
There is no control over the information provided with sequences when they are deposited in the sequence databases. Consequently, mistakes can seed the incorrect annotation of other sequences. Grouping genes into families and applying controlled annotation overcomes the problems of incorrect annotation associated with individual sequences. Two databases (http://www.mendel.ac.uk) were created to apply controlled annotation to plant genes and plant ESTs: Mendel-GFDb is a database of plant protein (gene) families based on gapped-BLAST analysis of all sequences in the SWISS-PROT family of databases. Sequences are aligned (ClustalW) and identical and similar residues are shaded. The families are visually curated to ensure that one or more criteria, for example overall relatedness and/or domain similarity, relate all sequences within a family. Sequence families are assigned a ‘Gene Family Number’ and a unified description is developed which best describes the family and its members. If authority exists, the gene family is assigned a ‘Gene Family Name’. This information is placed in Mendel-GFDb. Mendel-ESTS is primarily a database of plant ESTs, which have been compared to Mendel-GFDb, completely sequenced genomes and domain databases. This approach associates ESTs with individual sequences and with the controlled annotation of gene families and protein domains, the resulting information being placed in Mendel-ESTS. The controlled annotation applied to genes and ESTs provides a basis from which a plant transcription database can be developed.
Abstract:
High throughput genome (HTG) and expressed sequence tag (EST) sequences are currently the most abundant nucleotide sequence classes in the public databases. The large volume, high degree of fragmentation and lack of gene structure annotations prevent efficient and effective searches of HTG and EST data for protein sequence homologies by standard search methods. Here, we briefly describe three newly developed resources that should make discovery of interesting genes in these sequence classes easier in the future, especially for biologists without access to a powerful local bioinformatics environment. trEST and trGEN are regularly regenerated databases of hypothetical protein sequences predicted from EST and HTG sequences, respectively. Hits is a web-based data retrieval and analysis system providing access to precomputed matches between protein sequences (including sequences from trEST and trGEN) and patterns and profiles from Prosite and Pfam. The three resources can be accessed via the Hits home page (http://hits.isb-sib.ch).