986 results for database integration


Relevance: 60.00%

Publisher:

Abstract:

Cancer is the second leading cause of death in Brazil, and according to statistics released by Brazil's National Cancer Institute (INCA), 466,730 new cases of cancer were forecast for 2008. Analysing tumour tissues of various types together with patients' clinical data, genetic profiles, disease characteristics and epidemiological data may lead to more precise diagnoses and more effective treatments. In this work we present a clinical decision support system for cancer that manages a relational database containing information on tumour tissue samples and their locations in freezers, on patients and on medical forms. We also discuss some problems encountered, such as database integration and the adoption of a standard for describing topography and morphology, as well as the dynamic report generation functionality, which presents data in table and graph form according to the user's configuration. © ACM 2008.
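The abstract describes, at a high level, a relational schema linking patients, tumour tissue samples and freezer positions. As a purely illustrative sketch (table and column names are assumptions, not the schema of the system described), such a tissue bank could be modelled as below, with the "dynamic reports" reducing to parameterised aggregate queries:

    # Illustrative sketch only: a simplified tissue-bank schema with freezer
    # locations, using SQLite; all names are hypothetical.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE patient (
        patient_id   INTEGER PRIMARY KEY,
        name         TEXT NOT NULL,
        birth_date   TEXT
    );
    CREATE TABLE freezer_position (
        position_id  INTEGER PRIMARY KEY,
        freezer      TEXT NOT NULL,
        rack         INTEGER,
        box          INTEGER,
        slot         INTEGER
    );
    CREATE TABLE tumour_sample (
        sample_id    INTEGER PRIMARY KEY,
        patient_id   INTEGER REFERENCES patient(patient_id),
        position_id  INTEGER REFERENCES freezer_position(position_id),
        topography   TEXT,   -- standardized topography code
        morphology   TEXT,   -- standardized morphology code
        collected_on TEXT
    );
    """)

    # A "dynamic report" then becomes a parameterised aggregate query,
    # e.g. sample counts per topography code, ready for a table or chart.
    rows = conn.execute(
        "SELECT topography, COUNT(*) FROM tumour_sample GROUP BY topography"
    ).fetchall()
    print(rows)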

Relevance: 60.00%

Publisher:

Abstract:

In recent years, and particularly in the context of the COMBIOMED network, our biomedical informatics (BMI) group at the Universidad Politecnica de Madrid has pursued several approaches to a fundamental issue: facilitating open access to, and retrieval of, BMI resources, including software, databases and services. In this regard, we have followed various directions: a) a text-mining-based approach to automatically build a "resourceome", an inventory of open resources; b) methods for heterogeneous database integration, including clinical, -omics and nanoinformatics sources; c) services that provide African users and professionals with access to different resources; and d) an approach to facilitate access to open resources from research projects.

Relevance: 60.00%

Publisher:

Abstract:

The current state of Russian databases on the properties of substances and materials is reviewed. A brief survey of methods for integrating such information systems is given, and an approach to integrating distributed databases based on a metabase is proposed. Implementation details are presented for applying the proposed approach to the integration of databases on electronics materials. An operating pilot version of the resulting integrated information system, implemented at IMET RAS, is described.
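As a minimal sketch of the metabase idea described above, under the assumption that the metabase is essentially a registry recording which member database answers queries about which property (all names and values below are hypothetical):

    # Hypothetical illustration of a metabase: a registry mapping property
    # names to the member database (here, a fetch function) that provides them.
    from typing import Callable, Dict

    metabase: Dict[str, Callable[[str], float]] = {}

    def register(property_name: str, fetcher: Callable[[str], float]) -> None:
        """Record which source answers queries for a given property."""
        metabase[property_name] = fetcher

    def query(material: str, property_name: str) -> float:
        """Route the query to the member database registered for the property."""
        return metabase[property_name](material)

    # Two stand-in "member databases" backed by local lookup tables.
    register("melting_point_K", lambda m: {"GaAs": 1511.0}.get(m, float("nan")))
    register("band_gap_eV",     lambda m: {"GaAs": 1.42}.get(m, float("nan")))

    print(query("GaAs", "band_gap_eV"))   # routed to the band-gap source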

Relevance: 40.00%

Publisher:

Abstract:

The Mouse Genome Database (MGD) is the community database resource for the laboratory mouse, a key model organism for interpreting the human genome and for understanding human biology and disease (http://www.informatics.jax.org). MGD provides standard nomenclature and consensus map positions for mouse genes and genetic markers; it provides a curated set of mammalian homology records, user-defined chromosomal maps, experimental data sets and the definitive mouse ‘gene to sequence’ reference set for the research community. The integration and standardization of these data sets facilitates the transition between mouse DNA sequence, gene and phenotype annotations. A recent focus on allele and phenotype representations enhances the ability of MGD to organize and present data for exploring the relationship between genotype and phenotype. This link between the genome and the biology of the mouse is especially important as phenotype information grows from large mutagenesis projects and genotype information grows from large-scale sequencing projects.

Relevance: 40.00%

Publisher:

Abstract:

Today, databases have become an integral part of information systems. In the past two decades, we have seen different database systems being developed independently and used in different application domains. Today's interconnected networks and advanced applications, such as data warehousing, data mining and knowledge discovery, and intelligent data access to information on the Web, have created a need for integrated access to such heterogeneous, autonomous, distributed database systems. Heterogeneous/multidatabase research has focused on this issue, resulting in many different approaches. However, no single, generally accepted methodology has emerged in academia or industry that provides ubiquitous intelligent data access to heterogeneous, autonomous, distributed information sources. This thesis describes a heterogeneous database system being developed at the High-performance Database Research Center (HPDRC). A major impediment to ubiquitous deployment of multidatabase technology is the difficulty of resolving semantic heterogeneity, that is, identifying related information sources for integration and querying purposes. Our approach considers the semantics of the meta-data constructs in resolving this issue. The major contributions of this thesis are: (i) a scalable, easy-to-implement architecture for developing a heterogeneous multidatabase system, utilizing the Semantic Binary Object-oriented Data Model (Sem-ODM) and the Semantic SQL query language to capture the semantics of the data sources being integrated and to provide an easy-to-use query facility; (ii) a methodology for resolving semantic heterogeneity by investigating the extents of the meta-data constructs of component schemas, shown to be correct, complete and unambiguous; (iii) a semi-automated technique for identifying semantic relations, which form the basis of semantic knowledge for integration and querying, using shared ontologies for context mediation; (iv) resolutions for schematic conflicts and a language for defining global views over a set of component Sem-ODM schemas; (v) the design of a knowledge base for storing and manipulating meta-data and knowledge acquired during the integration process, which acts as the interface between the integration and query processing modules; (vi) techniques for Semantic SQL query processing and optimization based on semantic knowledge in a heterogeneous database environment; and (vii) a framework for intelligent computing and communication on the Internet applying the concepts of this work.
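Purely as a toy illustration of the shared-ontology idea in contribution (iii), and not the Sem-ODM/Semantic SQL implementation itself (all schema and concept names below are invented), candidate semantic relations can be surfaced by mapping each component schema's attributes onto ontology concepts and grouping attributes that share a concept:

    # Toy sketch: find candidate semantic relations between two component
    # schemas via a shared ontology; attributes mapped to the same concept
    # become candidates for integration. All names are hypothetical.
    from collections import defaultdict

    schema_a = {"pat_name": "Person.name", "dob": "Person.birthDate"}
    schema_b = {"full_name": "Person.name", "birth_dt": "Person.birthDate",
                "ward": "Hospital.ward"}

    by_concept = defaultdict(list)
    for schema, mapping in (("A", schema_a), ("B", schema_b)):
        for attr, concept in mapping.items():
            by_concept[concept].append((schema, attr))

    for concept, attrs in by_concept.items():
        if len(attrs) > 1:
            # e.g. Person.name -> [('A', 'pat_name'), ('B', 'full_name')]
            print(concept, "->", attrs)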

Relevance: 30.00%

Publisher:

Abstract:

Purpose - This paper examines the complex relationships between urban planning, infrastructure management and sustainable urban development, and illustrates why there is an urgent need for local governments to develop a robust planning support system that integrates advanced urban computer modelling tools to facilitate better infrastructure management and improve knowledge sharing between the community, urban planners, engineers and decision makers. Design/methodology/approach - The methods used in this paper include a literature review and practical project case observations. Originality/value - This paper provides insight into how the planning support system established by Brisbane City Council has significantly improved the effectiveness of urban planning, infrastructure management and community engagement through better knowledge management processes. Practical implications - This paper presents a practical framework for setting up a functional planning support system within local government. The integration of the Brisbane Urban Growth model, Virtual Brisbane and the Brisbane Economic Activity Monitoring (BEAM) database has proven initially successful in providing a dynamic platform that helps elected officials, planners and engineers understand the limitations of the local environment, its urban systems and the planning implications for the city. With this planning support system, planners and decision makers are able to deliver planning outcomes, policies and infrastructure that better address local needs and achieve sustainable spatial forms.

Relevance: 30.00%

Publisher:

Abstract:

The 3D Water Chemistry Atlas is an intuitive, open source, Web-based system that enables three-dimensional (3D) sub-surface visualization of groundwater monitoring data, overlaid on the local geological model (formation and aquifer strata). This paper first describes the results of evaluating existing virtual globe technologies, which led to the decision to use the Cesium open source WebGL Virtual Globe and Map Engine as the underlying platform. It then describes the backend database and the search, filtering, browse and analysis tools that were developed to enable users to interactively explore the groundwater monitoring data and interpret it spatially and temporally relative to the local geological formations and aquifers via the Cesium interface. The result is an integrated 3D visualization system that enables environmental managers and regulators to assess groundwater conditions, identify inconsistencies in the data, manage impacts and risks, and make more informed decisions about coal seam gas extraction, waste water extraction and water reuse.
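As a rough sketch of the kind of backend filtering such tools rely on before records are handed to the 3D viewer (record fields, bore identifiers and values below are assumptions, not the Atlas's actual schema):

    # Hypothetical sketch: filter groundwater samples by aquifer and time
    # window, the sort of query a backend exposes to a Cesium-based viewer.
    from datetime import date

    samples = [
        {"bore": "B1", "aquifer": "Aquifer A", "date": date(2013, 5, 1), "ec_uS_cm": 4200},
        {"bore": "B2", "aquifer": "Aquifer B", "date": date(2014, 2, 3), "ec_uS_cm": 950},
    ]

    def filter_samples(aquifer, start, end):
        """Return samples from one aquifer collected within [start, end]."""
        return [s for s in samples
                if s["aquifer"] == aquifer and start <= s["date"] <= end]

    print(filter_samples("Aquifer B", date(2014, 1, 1), date(2014, 12, 31)))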

Relevance: 30.00%

Publisher:

Abstract:

With new national targets for patient flow in public hospitals designed to increase efficiencies in patient care and resource use, better knowledge of events affecting length of stay will support improved bed management and scheduling of procedures. This paper presents a case study involving the integration of material from three databases in operation at one tertiary hospital and demonstrates that it is possible to follow patient journeys from admission to discharge. What is known about this topic? At present, patient data at one Queensland tertiary hospital are held in three information systems: (1) the Hospital Based Corporate Information System (HBCIS), which tracks patients from in-patient admission to discharge; (2) the Emergency Department Information System (EDIS), which contains patient data from presentation to departure from the emergency department; and (3) the Operating Room Management Information System (ORMIS), which records surgical operations. What does this paper add? This paper describes how a new enquiry tool may be used to link the three hospital information systems in order to study hospital journeys through different wards and/or operating theatres, for both individual patients and groups of patients. What are the implications for practitioners? An understanding of patients' journeys provides better insight into patient flow and a tool for research on access block, as well as for optimising the use of physical and human resources.
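As a minimal sketch of the linkage idea, assuming the three systems can be keyed on a shared patient identifier (field names and events below are invented, not the enquiry tool described):

    # Hypothetical sketch: merge events from three hospital systems into one
    # time-ordered journey per patient, keyed on a shared patient identifier.
    from itertools import chain

    edis  = [{"patient": 42, "time": "2015-03-01T08:10", "event": "ED presentation"}]
    hbcis = [{"patient": 42, "time": "2015-03-01T11:40", "event": "Ward admission"},
             {"patient": 42, "time": "2015-03-04T09:00", "event": "Discharge"}]
    ormis = [{"patient": 42, "time": "2015-03-02T13:20", "event": "Appendicectomy"}]

    def journey(patient_id):
        """All events for one patient across the three systems, in time order."""
        events = [e for e in chain(edis, hbcis, ormis) if e["patient"] == patient_id]
        return sorted(events, key=lambda e: e["time"])

    for e in journey(42):
        print(e["time"], e["event"])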

Relevance: 30.00%

Publisher:

Abstract:

BACKGROUND: In the current climate of high-throughput computational biology, inferring a protein's function from related measurements, such as protein-protein interaction relations, has become a canonical task. Most existing technologies pursue this task as a classification problem, on a term-by-term basis, for each term in a database such as the Gene Ontology (GO), a popular rigorous vocabulary for biological functions. However, ontology structures are essentially hierarchies, with top-to-bottom annotation rules that protein function predictions should in principle follow. Currently, the most common approach to imposing these hierarchical constraints on network-based classifiers is to apply transitive closure to the predictions.

RESULTS: We propose a probabilistic framework that integrates information in relational data, in the form of a protein-protein interaction network, and a hierarchically structured database of terms, in the form of the GO database, for the purpose of protein function prediction. At the heart of our framework is a factorization of local neighborhood information in the protein-protein interaction network across successive ancestral terms in the GO hierarchy. We introduce a classifier within this framework, with a computationally efficient implementation, that produces GO-term predictions that naturally obey a hierarchical 'true-path' consistency from root to leaves, without the need for further post-processing.

CONCLUSION: A cross-validation study, using data from the yeast Saccharomyces cerevisiae, shows that our method offers substantial improvements over both standard 'guilt-by-association' (i.e., nearest-neighbor) methods and more refined Markov random field methods, whether in their original form or when post-processed to artificially impose 'true-path' consistency. Further analysis indicates that these improvements are associated with increased predictive capability (i.e., increased positive predictive value), and that this increase is uniform across GO-term depths. Additional in silico validation on a collection of new annotations recently added to GO confirms the advantages suggested by the cross-validation study. Taken as a whole, our results show that a hierarchical approach to network-based protein function prediction, which exploits the ontological structure of protein annotation databases in a principled manner, can offer substantial advantages over the successive application of 'flat' network-based methods.
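To make the 'true-path' constraint concrete: a term's score should never exceed the scores of its ancestors on the path to the root. The toy sketch below enforces that property post hoc by capping each child at its parent's value; it is only a generic illustration of the constraint, not the authors' factorized probabilistic classifier (term identifiers and scores are invented):

    # Toy sketch of true-path consistency: cap each GO term's score by its
    # parent's score, processing terms from the root downwards.
    parents = {"GO:B": "GO:A", "GO:C": "GO:A", "GO:D": "GO:B"}   # child -> parent
    scores  = {"GO:A": 0.9, "GO:B": 0.4, "GO:C": 0.7, "GO:D": 0.6}

    def enforce_true_path(scores, parents):
        """Return scores in which no child term exceeds its parent's score."""
        def depth(term):
            d = 0
            while term in parents:
                term, d = parents[term], d + 1
            return d
        consistent = dict(scores)
        for term in sorted(scores, key=depth):      # parents before children
            if term in parents:
                consistent[term] = min(consistent[term], consistent[parents[term]])
        return consistent

    print(enforce_true_path(scores, parents))   # GO:D is capped at GO:B's 0.4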

Relevance: 30.00%

Publisher:

Abstract:

In order to reduce potential uncertainties and conservatism in welded panel analysis procedures, an understanding of the relationships between welding process parameters and static strength is required. The aim of this study is to determine and characterize the key process-induced effects of advanced welding assembly methods on stiffened-panel local buckling and collapse performance. To this end, an in-depth experimental and computational study of the static strength of a friction stir welded fuselage skin-stiffener panel subjected to compression loading has been undertaken. Four welding process effects are investigated: the weld joint width, the width of the weld Heat Affected Zone, the strength of material within the weld Heat Affected Zone, and the magnitude of welding-induced residual stress. A fractional factorial experiment design method (Taguchi) has been applied to identify the relative importance of each welding process effect and to investigate effect interactions on both local skin buckling and crippling collapse performance. For the identified dominant welding process effects, parametric studies have been undertaken to identify critical welding process effect magnitudes and boundaries. The studies show that local skin buckling is principally influenced by the magnitude of welding-induced residual stress, and that the strength of material in the Heat Affected Zone and the magnitude of welding-induced residual stress have the greatest influence on crippling collapse behavior.
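For context, one way to lay out a two-level screening design for the four welding effects named above is a half-fraction (2^(4-1), eight runs) in which the fourth factor is aliased with the three-factor interaction; the sketch below is a generic illustration with coded -1/+1 levels, not the specific Taguchi array used in the study:

    # Generic sketch of a 2^(4-1) fractional factorial (eight runs) for the
    # four welding process effects; coded levels -1/+1, D aliased with ABC.
    from itertools import product

    factors = ["weld joint width", "HAZ width",
               "HAZ material strength", "residual stress magnitude"]

    runs = [(a, b, c, a * b * c) for a, b, c in product((-1, 1), repeat=3)]

    for i, run in enumerate(runs, 1):
        print(f"run {i}: " + ", ".join(f"{name}={lvl:+d}" for name, lvl in zip(factors, run)))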

