957 results for DATABASES


Relevance: 20.00%

Publisher:

Abstract:

Congenital erythrocytosis (CE), or congenital polycythemia, is a rare and heterogeneous clinical entity. It is caused by deregulated red blood cell production, in which erythrocyte overproduction results in elevated hemoglobin and hematocrit levels. Primary congenital familial erythrocytosis is associated with low erythropoietin (Epo) levels and results from mutations in the Epo receptor gene (EPOR). Secondary congenital erythrocytosis arises from conditions causing tissue hypoxia and results in increased Epo production. These include hemoglobin variants with increased affinity for oxygen (HBB, HBA mutations), decreased production of 2,3-bisphosphoglycerate due to BPGM mutations, and mutations in genes involved in the hypoxia-sensing pathway (VHL, EPAS1 and EGLN1). Depending on the affected gene, CE can be inherited in either an autosomal dominant or recessive mode, with sporadic cases arising de novo. Despite recent important discoveries in the molecular pathogenesis of CE, the molecular causes remain to be identified in about 70% of patients. With the objective of collecting all published and unpublished cases of CE, the COST action MPN&MPNr-Euronet developed a comprehensive internet-based database focusing on the registration of clinical history and hematological, biochemical and molecular data (http://www.erythrocytosis.org/). In addition, unreported mutations are curated in the corresponding Leiden Open Variation Database (LOVD).

Relevance: 20.00%

Publisher:

Abstract:

Advances in computational and information technologies have facilitated the acquisition of geospatial information for regional and national soil and geology databases. These have been compiled for a range of purposes, from geological and soil baseline mapping to economic prospecting and land resource assessment, but have become increasingly used for forensic purposes. On the question of the provenance of a questioned sample, the geologist or soil scientist will invariably draw on prior expert knowledge and available digital map and database sources in a 'pseudo-Bayesian' approach. The context of this paper is the debate on whether existing (digital) geology and soil databases are indeed useful and suitable for forensic inference. Published and new case studies are used to explore issues of completeness, consistency, compatibility and applicability in relation to the use of digital geology and soil databases in environmental and criminal forensics. One key theme that emerges is that, although databases can be neither exhaustive nor precise enough to portray spatial variability at the crime-scene scale, coupled with expert knowledge they play an invaluable role in providing background or reference material in a criminal investigation. Moreover, databases can offer an independent control set of samples.

Relevance: 20.00%

Publisher:

Abstract:

Introduction Asthma is now one of the most common long-term conditions in the UK. It is therefore important to develop a comprehensive appreciation of its healthcare and societal costs in order to inform decisions on care provision and planning. We plan to build on our earlier estimates of national prevalence and costs of asthma by filling the data gaps previously identified in relation to healthcare and by broadening the field of enquiry to include societal costs. This work will provide the first UK-wide estimates of the costs of asthma. In the context of asthma for the UK and its member countries (ie, England, Northern Ireland, Scotland and Wales), we seek to: (1) produce a detailed overview of estimates of incidence, prevalence and healthcare utilisation; (2) estimate health and societal costs; (3) identify any remaining information gaps and explore the feasibility of filling these; and (4) provide insights into future research that has the potential to inform changes in policy leading to the provision of more cost-effective care.

Methods and analysis Secondary analyses of data from national health surveys and from primary care, prescribing, emergency care, hospital, mortality and administrative data sources will be undertaken to estimate prevalence, healthcare utilisation and outcomes from asthma. Data linkage and economic modelling will be undertaken to fill data gaps and estimate costs. Separate prevalence and cost estimates will be calculated for each of the UK member countries, and these will then be aggregated to generate UK-wide estimates.

Ethics and dissemination Approvals have been obtained from the NHS Scotland Information Services Division's Privacy Advisory Committee, the Secure Anonymised Information Linkage Collaboration Review System, the NHS South-East Scotland Research Ethics Service and The University of Edinburgh's Centre for Population Health Sciences Research Ethics Committee. We will produce a report for Asthma UK, submit papers to peer-reviewed journals and construct an interactive map.

Relevance: 20.00%

Publisher:

Abstract:

Nematodes are the most abundant metazoans, comprising more than 80% of all animals alive today. Since 1743, when Needham (Needham, 1743) described the first nematode, approximately 20,000 to 30,000 species have been named, with estimates of species remaining to be described ranging from 100,000 to 1 million (Blaxter, 2004; De Ley, 2000). Unfortunately, the taxonomic community is woefully inadequate for this task. Fewer than 100 taxonomists worldwide are currently describing new species of nematodes, and significant increases are not expected. Even if each of these taxonomists described 10 new species every year, it would take between 100 and 1,000 years to name the yet-to-be-described species.
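The back-of-the-envelope timeline above follows directly from the figures in the abstract (100 taxonomists, 10 descriptions per taxonomist per year, 100,000 to 1 million undescribed species); the function name below is purely illustrative:

```python
def years_to_describe(undescribed_species, taxonomists=100,
                      species_per_taxonomist_per_year=10):
    """Estimate how long it would take to name all undescribed species."""
    described_per_year = taxonomists * species_per_taxonomist_per_year
    return undescribed_species / described_per_year

# Bounds taken from the abstract's estimates of undescribed nematode species.
print(years_to_describe(100_000))    # 100.0 years at the lower bound
print(years_to_describe(1_000_000))  # 1000.0 years at the upper bound
```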

Relevance: 20.00%

Publisher:

Abstract:

Geographic information systems (GIS) are now widely applied in coastal resource management. Their ability to organise and interface information from a wide range of public and private data sources, and to combine this information using management criteria to develop a comprehensive picture of the system, explains the success of GIS in this area. The use of numerical models as a tool to improve coastal management is also widespread. Less usual is a GIS-based management tool that implements a comprehensive management model and integrates a numerical modelling system. In this paper such a methodology is proposed: a GIS-based management tool based on the DPSIR model is presented. An overview of the MOHID numerical modelling system is given, and the method of integrating this model into the management tool is described. The system is applied to the Sado Estuary (Portugal), and some preliminary results of the integration are presented, demonstrating the capabilities of the management system.

Relevance: 20.00%

Publisher:

Abstract:

The changes introduced into the European Higher Education Area (EHEA) by the Bologna Process, together with renewed pedagogical and methodological practices, have created a new teaching-learning paradigm: student-centred learning. In addition, the last few years have been characterized by the application of information technologies, especially the Semantic Web, not only to the teaching-learning process but also to administrative processes within learning institutions. The aim of this study was, on the one hand, to present a model for identifying and classifying Competencies and Learning Outcomes and, on the other, to develop the computer applications of the information management model, namely a relational database and an ontology.
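A minimal sketch of what such a relational store might look like, using Python's built-in sqlite3 module; the abstract does not publish its schema, so all table and column names here are hypothetical, intended only to illustrate a many-to-many link between competencies and learning outcomes:

```python
import sqlite3

# Hypothetical schema: competencies and learning outcomes in an N:M relation.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE competency (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE learning_outcome (
    id INTEGER PRIMARY KEY,
    description TEXT NOT NULL
);
CREATE TABLE competency_outcome (
    competency_id INTEGER REFERENCES competency(id),
    outcome_id INTEGER REFERENCES learning_outcome(id),
    PRIMARY KEY (competency_id, outcome_id)
);
""")
conn.execute("INSERT INTO competency VALUES (1, 'Database design')")
conn.execute("INSERT INTO learning_outcome VALUES (1, 'Normalise a relational schema')")
conn.execute("INSERT INTO competency_outcome VALUES (1, 1)")

# List each competency together with the outcomes that evidence it.
rows = conn.execute("""
    SELECT c.name, o.description
    FROM competency c
    JOIN competency_outcome co ON co.competency_id = c.id
    JOIN learning_outcome o ON o.id = co.outcome_id
""").fetchall()
print(rows)  # [('Database design', 'Normalise a relational schema')]
```

The junction table is what makes the classification flexible: one competency can be evidenced by several learning outcomes, and one outcome can contribute to several competencies.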

Relevance: 20.00%

Publisher:

Abstract:

Current computer systems have evolved from featuring a single processing unit and limited RAM, on the order of kilobytes or a few megabytes, to featuring several multicore processors, offering tens of concurrent execution contexts, and main memory on the order of tens to hundreds of gigabytes. This makes it possible to keep all the data of many applications in main memory, leading to the development of in-memory databases. Compared to disk-backed databases, in-memory databases (IMDBs) are expected to provide better performance by incurring less I/O overhead. In this dissertation, we present a scalability study of two general-purpose IMDBs on multicore systems. The results show that current general-purpose IMDBs do not scale on multicores, due to contention among threads running concurrent transactions. In this work, we explore different directions to overcome the scalability issues of IMDBs on multicores, while enforcing strong isolation semantics. First, we present a solution that requires no modification to either the database systems or the applications, called MacroDB. MacroDB replicates the database among several engines, using a master-slave replication scheme, where update transactions execute on the master while read-only transactions execute on the slaves. This reduces contention, allowing MacroDB to offer scalable performance under read-only workloads, while update-intensive workloads suffer a performance loss when compared to the standalone engine. Second, we delve into the database engine and identify the concurrency control mechanism used by the storage sub-component as a scalability bottleneck. We then propose a new locking scheme that allows the removal of such mechanisms from the storage sub-component. This modification offers a performance improvement under all workloads, when compared to the standalone engine, while scalability is limited to read-only workloads.
Next, we address the scalability limitations for update-intensive workloads and propose reducing the locking granularity from the table level to the attribute level. This further improves performance for intensive and moderate update workloads, at a slight cost for read-only workloads; scalability is limited to read-intensive and read-only workloads. Finally, we investigate the impact applications have on the performance of database systems by studying how the order of operations inside transactions influences database performance. We then propose a Read before Write (RbW) interaction pattern, under which transactions perform all read operations before executing write operations. The RbW pattern allowed TPC-C to achieve scalable performance on our modified engine for all workloads, scaling almost up to the total number of cores while enforcing strong isolation.
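The MacroDB routing idea described above can be sketched in a few lines: updates serialise on the master replica, while read-only transactions are spread across the slaves. This is a hedged illustration of the general master-slave routing scheme, not the dissertation's actual code; all class and method names are invented:

```python
import random

class ReplicaRouter:
    """Illustrative MacroDB-style router: update transactions go to the
    master engine, read-only transactions to a randomly chosen slave."""

    def __init__(self, master, slaves):
        self.master = master
        self.slaves = slaves

    def route(self, read_only):
        if read_only:
            # Spread read load over the slave replicas to reduce contention.
            return random.choice(self.slaves)
        # All updates execute on the master, which propagates them to slaves.
        return self.master

router = ReplicaRouter(master="engine-0", slaves=["engine-1", "engine-2"])
print(router.route(read_only=False))  # engine-0
```

This shows why the scheme scales for read-only workloads (reads fan out over replicas) but not for update-intensive ones (every update still funnels through the single master).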

Relevance: 20.00%

Publisher:

Abstract:

Expert curation and complete collection of mutations in genes that affect human health are essential for proper genetic healthcare and research. Expert curation is provided by the curators of gene-specific mutation databases, or locus-specific databases (LSDBs). While there are over 700 such databases, they vary in their content, completeness, time available for curation, and the expertise of the curator. Curation and LSDBs have been discussed and written about, and protocols have been provided, for over 10 years, but there have been no formal recommendations on the ideal form of these entities. This work initiates a discussion on this topic to assist future efforts in human genetics. Further discussion is welcome.

Relevance: 20.00%

Publisher:

Abstract:

The SIB Swiss Institute of Bioinformatics (www.isb-sib.ch) provides world-class bioinformatics databases, software tools, services and training to the international life science community in academia and industry. These solutions allow life scientists to turn the exponentially growing amount of data into knowledge. Here, we provide an overview of SIB's resources and competence areas, with a strong focus on curated databases and SIB's most popular and widely used resources. In particular, SIB's Bioinformatics resource portal ExPASy features over 150 resources, including UniProtKB/Swiss-Prot, ENZYME, PROSITE, neXtProt, STRING, UniCarbKB, SugarBindDB, SwissRegulon, EPD, arrayMap, Bgee, SWISS-MODEL Repository, OMA, OrthoDB and other databases, which are briefly described in this article.

Relevance: 20.00%

Publisher:

Abstract:

Classical relational databases lack proper ways to manage certain real-world situations involving imprecise or uncertain data. Fuzzy databases overcome this limitation by allowing each entry in a table to be a fuzzy set, where each element of the corresponding domain is assigned a membership degree from the real interval [0, 1]. But this fuzzy mechanism becomes inappropriate for modelling scenarios where data might be incomparable. We are therefore interested in a further generalization of the fuzzy database: the L-fuzzy database. In such a database, the characteristic function of a fuzzy set maps into an arbitrary complete Brouwerian lattice L. From the query-language perspective, the fuzzy database language FSQL extends the regular Structured Query Language (SQL) by adding fuzzy-specific constructions. In addition, the L-fuzzy query language LFSQL introduces appropriate linguistic operations to define and manipulate inexact data in an L-fuzzy database. This research mainly focuses on defining the semantics of LFSQL. Doing so requires an abstract algebraic theory in which all the properties of, and operations on, L-fuzzy relations can be proved. In our study, we show that the theory of arrow categories forms a suitable framework for this, and we therefore define the semantics of LFSQL in the abstract notion of an arrow category. In addition, we implement the operations of L-fuzzy relations in Haskell and develop a parser that translates algebraic expressions into our implementation.
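The thesis implements L-fuzzy relations in Haskell; as a language-neutral illustration of the core idea, the Python sketch below builds the smallest interesting complete lattice, with two incomparable elements a and b between bottom and top, and shows membership degrees that the interval [0, 1] cannot linearly order. The element names and the table-based encoding are illustrative choices, not taken from the thesis:

```python
# Four-element lattice: '0' <= 'a' <= '1' and '0' <= 'b' <= '1',
# with 'a' and 'b' incomparable. Meets are tabulated for one ordering
# of each pair; meet() handles the symmetric case.
MEET = {
    ('0', '0'): '0', ('0', 'a'): '0', ('0', 'b'): '0', ('0', '1'): '0',
    ('a', 'a'): 'a', ('a', 'b'): '0', ('a', '1'): 'a',
    ('b', 'b'): 'b', ('b', '1'): 'b',
    ('1', '1'): '1',
}

def meet(x, y):
    return MEET.get((x, y)) or MEET[(y, x)]

# Upper sets of each element, used to compute joins as least upper bounds.
UPPERS = {'0': {'0', 'a', 'b', '1'}, 'a': {'a', '1'},
          'b': {'b', '1'}, '1': {'1'}}

def join(x, y):
    common = UPPERS[x] & UPPERS[y]
    # The join is the unique element of `common` below all other common
    # upper bounds (guaranteed to exist in a complete lattice).
    for c in common:
        if all(meet(c, d) == c for d in common):
            return c

# An L-fuzzy relation assigns each (row, column) pair a lattice degree,
# so two tuples can hold genuinely incomparable membership values.
likes = {('alice', 'sql'): '1', ('alice', 'haskell'): 'a', ('bob', 'sql'): 'b'}

print(meet('a', 'b'))  # '0': incomparable degrees meet at bottom
print(join('a', 'b'))  # '1': and join at top
```

With degrees 'a' and 'b' neither dominates the other, which is exactly the situation the abstract says the plain [0, 1] mechanism cannot model.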