807 results for Clinical Data Integration


Relevance: 100.00%

Abstract:

The decisions people make about medical treatments have a great impact on their lives. Health care practitioners, providers and patients often make decisions about medical treatments without a complete understanding of the circumstances. The main reason for this is that medical data are held in fragmented, disparate and heterogeneous data silos. Without a centralised data warehouse structure to integrate these silos, it is impractical for users to obtain all the information required in time to make a correct decision. This paper presents a clinical data integration approach using SAS Clinical Data Integration Server tools.

Relevance: 100.00%

Abstract:

This research was a step forward in developing a data integration framework for Electronic Health Records. The outcome of the research is a conceptual and logical data warehousing model for integrating Cardiac Surgery electronic data records. The thesis investigates the main obstacles to healthcare data integration and proposes a data warehousing model suitable for integrating fragmented data in a Cardiac Surgery Unit.
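
The thesis's own schema is not reproduced in the abstract; as a rough, generic illustration of what a conceptual star schema for a cardiac surgery unit might look like, the sketch below builds hypothetical dimension and fact tables and runs one analytical roll-up (all table and column names are assumptions, not the thesis's design).

```python
# Illustrative star schema for a cardiac surgery data warehouse
# (hypothetical table and column names; not the model proposed in the thesis).
import pandas as pd

# Dimension tables: descriptive context for each surgical event.
dim_patient = pd.DataFrame(
    [(1, "F", 1958), (2, "M", 1946)],
    columns=["patient_key", "sex", "birth_year"],
)
dim_procedure = pd.DataFrame(
    [(10, "CABG"), (11, "Valve replacement")],
    columns=["procedure_key", "procedure_name"],
)

# Fact table: one row per operation, holding foreign keys and measures.
fact_surgery = pd.DataFrame(
    [(1, 10, "2015-03-02", 245, 0), (2, 11, "2015-04-17", 310, 1)],
    columns=["patient_key", "procedure_key", "date", "bypass_minutes", "readmitted_30d"],
)

# A typical analytical query: readmission rate by procedure.
report = (
    fact_surgery.merge(dim_procedure, on="procedure_key")
    .groupby("procedure_name")["readmitted_30d"]
    .mean()
)
print(report)
```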

Relevance: 100.00%

Abstract:

Background: The use of the knowledge produced by the sciences to promote human health is the main goal of translational medicine. To make this feasible, we need computational methods to handle the large amount of information that arises from bench to bedside, and to deal with its heterogeneity. A computational challenge that must be faced is the integration of clinical, socio-demographic and biological data. In this effort, ontologies play an essential role as a powerful artifact for knowledge representation. Chado is a modular, ontology-oriented database model that gained popularity due to its robustness and flexibility as a generic platform for storing biological data; however, it lacks support for representing clinical and socio-demographic information.

Results: We have implemented an extension of Chado, the Clinical Module, to allow the representation of this kind of information. Our approach consists of a framework for data integration through the use of a common reference ontology. The design of this framework has four levels: a data level, to store the data; a semantic level, to integrate and standardize the data by the use of ontologies; an application level, to manage clinical databases, ontologies and the data integration process; and a web interface level, to allow interaction between the user and the system. The Clinical Module was built on the Entity-Attribute-Value (EAV) model. We also proposed a methodology to migrate data from legacy clinical databases to the integrative framework. A Chado instance was initialized using a relational database management system, the Clinical Module was implemented, and the framework was loaded using data from a factual clinical research database. Clinical and demographic data, as well as biomaterial data, were obtained from patients with head and neck tumors. We implemented the IPTrans tool, a complete environment for data migration that comprises: the construction of a model to describe the legacy clinical data, based on an ontology; the Extraction, Transformation and Load (ETL) process to extract the data from the source clinical database and load it into the Clinical Module of Chado; and the development of a web tool and a Bridge Layer to adapt the web tool to Chado, as well as other applications.

Conclusions: Open-source computational solutions currently available for translational science do not have a model to represent biomolecular information and are not integrated with existing bioinformatics tools. On the other hand, existing genomic data models do not represent clinical patient data. A framework was developed to support translational research by integrating biomolecular information from different "omics" technologies with patients' clinical and socio-demographic data. This framework should present some features: flexibility, compression and robustness. The experiments carried out on a use case demonstrated that the proposed system meets the requirements of flexibility and robustness, leading to the desired integration. The Clinical Module can be accessed at http://dcm.ffclrp.usp.br/caib/pg=iptrans.
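
As a minimal sketch of the Entity-Attribute-Value pattern the Clinical Module is based on, the fragment below stores heterogeneous clinical facts as (entity, attribute, value) rows and pivots them back into a per-patient view (the attribute names and the pandas-based pivot are illustrative assumptions, not Chado's actual schema or tooling).

```python
import pandas as pd

# Each clinical fact is one (entity, attribute, value) row, so new attributes
# can be added without altering the table schema (illustrative names only).
eav = pd.DataFrame(
    [
        (101, "tumor_site", "larynx"),
        (101, "smoking_status", "former"),
        (102, "tumor_site", "oral cavity"),
        (102, "hpv_status", "positive"),
    ],
    columns=["patient_id", "attribute", "value"],
)

# Pivot back to a wide, per-patient view for analysis; attributes a patient
# lacks simply come out as missing values.
wide = eav.pivot(index="patient_id", columns="attribute", values="value")
print(wide)
```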

Relevance: 100.00%

Abstract:

Heterogeneous health data are a critical issue when managing health information for quality decision-making processes. In this paper we examine the efficient aggregation of lifestyle information through a data warehousing architecture lens. We present a proof of concept for a clinical data warehouse architecture that enables evidence-based decision making by integrating and organising disparate data silos in support of healthcare service improvement.
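
As a hedged sketch of the kind of extract-transform-load step such an architecture implies, the following fragment consolidates two invented lifestyle silos with mismatched keys and column names into one integrated view (all source names and fields are assumptions, not the architecture described in the paper).

```python
import pandas as pd

# Two hypothetical source silos with inconsistent column names.
gp_visits = pd.DataFrame(
    {"patient": ["A1", "B2"], "weight_kg": [92.0, 71.5], "visit": ["2020-01-10", "2020-02-03"]}
)
gym_app = pd.DataFrame(
    {"member_id": ["A1", "B2"], "weekly_active_minutes": [60, 210]}
)

# Transform: align keys and column names to the warehouse's conventions.
gp_clean = gp_visits.rename(columns={"patient": "patient_id", "visit": "visit_date"})
gym_clean = gym_app.rename(columns={"member_id": "patient_id"})

# Load: a single integrated view that downstream reports can query.
integrated = gp_clean.merge(gym_clean, on="patient_id", how="outer")
print(integrated)
```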

Relevance: 100.00%

Abstract:

Background: Historically, the paper hand-held record (PHR) has been used for sharing information between hospital clinicians, general practitioners and pregnant women in a maternity shared-care environment. Recently, in alignment with a National e-health agenda, an electronic health record (EHR) was introduced at an Australian tertiary maternity service to replace the PHR for the collection and transfer of data. The aim of this study was to examine and compare the completeness of clinical data collected in a PHR and an EHR.

Methods: We undertook a comparative cohort design study to determine differences in completeness between data collected from maternity records in two phases. Phase 1 data were collected from the PHR and Phase 2 data from the EHR. Records were compared for completeness of the best practice variables collected. The primary outcome was the presence of best practice variables and the secondary outcomes were the differences in individual variables between the records.

Results: Ninety-four percent of paper medical charts were available in Phase 1 and 100% of records from an obstetric database in Phase 2. No PHR or EHR had a complete dataset of best practice variables. The variables with significant improvement in completeness of data documented in the EHR, compared with the PHR, were urine culture, glucose tolerance test, nuchal screening, morphology scans, folic acid advice, tobacco smoking, illicit drug assessment and domestic violence assessment (p = 0.001). Additionally, the documentation of immunisations (pertussis, hepatitis B, varicella, fluvax) was markedly improved in the EHR (p = 0.001). The variables of blood pressure, proteinuria, blood group, antibody, rubella and syphilis status showed no significant differences in completeness of recording.

Conclusion: This is the first paper to report on the comparison of clinical data collected in a PHR and an EHR in a maternity shared-care setting. The use of an EHR demonstrated significant improvements in the collection of best practice variables. Additionally, the data in the EHR were more available to relevant clinical staff with the appropriate log-in and more easily retrieved than from the PHR. This study contributes to an under-researched area: determining the quality of data collected in patient records.
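
The abstract reports only p-values, not the exact statistical procedure; the sketch below shows one plausible way to compare per-variable completeness between the two record types with a chi-square test (the counts are invented for illustration and are not the study's data).

```python
# Sketch of a per-variable completeness comparison between record types.
from scipy.stats import chi2_contingency

# (documented, missing) counts for one best-practice variable in each phase;
# numbers are made up for illustration.
phr = (120, 180)   # paper hand-held record
ehr = (260, 40)    # electronic health record

table = [list(phr), list(ehr)]
chi2, p_value, dof, expected = chi2_contingency(table)

phr_rate = phr[0] / sum(phr)
ehr_rate = ehr[0] / sum(ehr)
print(f"PHR completeness: {phr_rate:.1%}, EHR completeness: {ehr_rate:.1%}, p = {p_value:.4f}")
```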

Relevance: 100.00%

Abstract:

Large volumes of heterogeneous health data held in silos pose a significant challenge when exploring for information to support evidence-based decision making and ensure quality outcomes. In this paper, we present a proof of concept for adopting data warehousing technology to aggregate and analyse disparate health data in order to understand the impact of various lifestyle factors on obesity. We present a practical model for data warehousing, with a detailed explanation, which can be adopted similarly for studying various other health issues.
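
As an illustrative sketch of the kind of roll-up such a warehouse would serve, the following fragment aggregates invented lifestyle records into an obesity rate per activity level (the fields, values and thresholds are assumptions, not the paper's model).

```python
import pandas as pd

# Hypothetical integrated records: one row per person with lifestyle factors
# and BMI pulled together from separate silos (values are invented).
df = pd.DataFrame(
    {
        "activity_level": ["low", "low", "high", "high", "moderate", "moderate"],
        "sugary_drinks_per_week": [14, 10, 2, 1, 6, 7],
        "bmi": [33.1, 31.4, 24.2, 23.0, 27.5, 28.9],
    }
)
df["obese"] = df["bmi"] >= 30

# The kind of roll-up a lifestyle-focused warehouse would serve: obesity rate
# and mean sugary-drink intake per activity level.
summary = df.groupby("activity_level").agg(
    obesity_rate=("obese", "mean"),
    mean_sugary_drinks=("sugary_drinks_per_week", "mean"),
)
print(summary)
```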

Relevance: 100.00%

Abstract:

Clinical Data Warehousing: A Business Analytic approach for managing health data

Relevance: 100.00%

Abstract:

Background: Using array comparative genomic hybridization (aCGH), a large number of deleted genomic regions have been identified in human cancers. However, subsequent efforts to identify the target genes selected for inactivation in these regions have often been challenging.

Methods: Here we integrated genome-wide copy number data with gene expression data and nonsense-mediated mRNA decay rates in breast cancer cell lines to prioritize candidate genes that are likely to be tumour suppressor genes inactivated by bi-allelic genetic events. The candidates were sequenced to identify potential mutations.

Results: This integrated genomic approach led to the identification of RIC8A at 11p15 as a putative candidate target gene of the genomic deletion in the ZR-75-1 breast cancer cell line. We identified a truncating mutation in this cell line, leading to loss of expression and rapid decay of the transcript. We screened 127 breast cancers for RIC8A mutations but did not find any pathogenic mutations, and no promoter hypermethylation was detected in these tumours either. However, analysis of gene expression data from breast tumours identified a small group of aggressive tumours that displayed low levels of RIC8A transcripts. qRT-PCR analysis of 38 breast tumours showed a strong association between low RIC8A expression and the presence of TP53 mutations (P = 0.006).

Conclusion: We demonstrate a data integration strategy that led to the identification of RIC8A as a gene undergoing a classical double-hit genetic inactivation in a breast cancer cell line, as well as in vivo evidence of loss of RIC8A expression in a subgroup of aggressive TP53-mutant breast cancers.
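
The authors' actual prioritization pipeline is not described in detail in the abstract; the toy sketch below illustrates only the general filtering idea, keeping genes that are deleted, under-expressed and subject to rapid transcript decay (all values and thresholds are invented).

```python
import pandas as pd

# Toy per-gene measurements (values and thresholds are illustrative only).
genes = pd.DataFrame(
    {
        "gene": ["RIC8A", "GENE_B", "GENE_C"],
        "log2_copy_ratio": [-0.9, -0.8, 0.1],   # aCGH: negative = deletion
        "expression_z": [-2.1, 0.3, -1.5],      # low expression vs. other lines
        "nmd_decay_rate": [0.8, 0.2, 0.3],      # high = rapidly degraded transcript
    }
)

# Keep genes that look like bi-allelically inactivated tumour suppressors:
# deleted, under-expressed, and with transcripts subject to rapid decay.
candidates = genes[
    (genes["log2_copy_ratio"] < -0.5)
    & (genes["expression_z"] < -1.0)
    & (genes["nmd_decay_rate"] > 0.5)
]
print(candidates["gene"].tolist())  # would flag RIC8A for follow-up sequencing
```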

Relevance: 100.00%

Abstract:

The problem of scaling up data integration, such that new sources can be quickly utilized as they are discovered, remains open: global schemas for integrated data are difficult to develop and expand, and schema and record matching techniques are limited by the fact that data and metadata are often under-specified and must be disambiguated by data experts. One promising approach is to avoid using a global schema and instead develop keyword search-based data integration, where the system lazily discovers associations enabling it to join together matches to keywords, and returns ranked results. The user is expected to understand the data domain and provide feedback about answers' quality, and the system generalizes such feedback to learn how to correctly integrate data. A major open challenge is that under this model the user only sees, and offers feedback on, a few "top" results: this result set must be carefully selected to include answers of high relevance as well as answers that are highly informative when feedback is given on them. Existing systems merely focus on predicting relevance by composing the scores of various schema and record matching algorithms. In this paper, we show how to predict the uncertainty associated with a query result's score, as well as how informative feedback on a given result would be. We build upon these foundations to develop an active learning approach to keyword search-based data integration, and we validate the effectiveness of our solution over real data from several very different domains.
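
The paper's scoring and learning machinery is not reproduced here; the sketch below only illustrates the selection idea, trading off predicted relevance against score uncertainty when choosing which results to present for feedback (the data structures, weights and example answers are assumptions).

```python
from dataclasses import dataclass

@dataclass
class Result:
    answer: str
    relevance: float    # predicted relevance of the joined answer
    uncertainty: float  # predicted uncertainty of that score

def select_for_feedback(results, k=3, trade_off=0.5):
    """Pick k results balancing relevance with how informative feedback
    on them would be (approximated here by score uncertainty)."""
    scored = sorted(
        results,
        key=lambda r: (1 - trade_off) * r.relevance + trade_off * r.uncertainty,
        reverse=True,
    )
    return scored[:k]

results = [
    Result("join(patients, labs) #1", 0.92, 0.05),
    Result("join(patients, billing) #2", 0.60, 0.45),
    Result("join(patients, notes) #3", 0.55, 0.50),
    Result("join(labs, billing) #4", 0.30, 0.10),
]
for r in select_for_feedback(results):
    print(r.answer)
```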

Relevance: 100.00%

Abstract:

Motivation: We present a method for directly inferring transcriptional modules (TMs) by integrating gene expression and transcription factor binding (ChIP-chip) data. Our model extends a hierarchical Dirichlet process mixture model to allow data fusion on a gene-by-gene basis. This encodes the intuition that co-expression and co-regulation are not necessarily equivalent, and hence we do not expect all genes to group similarly in both datasets. In particular, it allows us to identify the subset of genes that share the same structure of transcriptional modules in both datasets.

Results: We find that by working on a gene-by-gene basis, our model is able to extract clusters with greater functional coherence than existing methods. By combining gene expression and transcription factor binding (ChIP-chip) data in this way, we are better able to determine the groups of genes that are most likely to represent underlying TMs.

Availability: If interested in the code for the work presented in this article, please contact the authors.

Supplementary information: Supplementary data are available at Bioinformatics online.
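
The hierarchical Dirichlet process machinery is well beyond a short snippet; the sketch below only illustrates the underlying intuition, clustering each dataset separately and flagging the genes whose groupings agree across both, using plain k-means on synthetic data (this is a deliberate simplification, not the authors' model).

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy data: 60 genes x 20 expression samples and 60 genes x 15 binding profiles.
expression = rng.normal(size=(60, 20))
binding = rng.normal(size=(60, 15))

expr_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(expression)
bind_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(binding)

# For each gene, check how often its co-cluster partners in the expression
# clustering are also co-cluster partners in the binding clustering.
def agreement(g):
    same_expr = expr_labels == expr_labels[g]
    same_bind = bind_labels == bind_labels[g]
    same_expr[g] = False
    return (same_expr & same_bind).sum() / max(same_expr.sum(), 1)

fused = [g for g in range(60) if agreement(g) > 0.5]
print(f"{len(fused)} genes group consistently in both datasets")
```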

Relevance: 100.00%

Abstract:

The performance of the register insertion protocol for mixed voice-data traffic is investigated by simulation. The simulation model incorporates a common insertion buffer for station and ring packets. Bandwidth allocation is achieved by imposing a queue limit at each node. A simple priority scheme is introduced by allowing the queue limit to vary from node to node, enabling voice traffic to be given priority over data. The effect on performance of various operational and design parameters, such as the ratio of voice to data traffic, queue limit and voice packet size, is investigated. Comparisons are made where possible with related work on other protocols proposed for voice-data integration. The main conclusions are: (a) there is a general degradation of performance as the ratio of voice traffic to data traffic increases, (b) substantial improvement in performance can be achieved by restricting the queue length at data nodes, and (c) for a given ring utilisation, smaller voice packets result in lower delays for both voice and data traffic.
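
The paper's simulation model is not reproduced here; the toy sketch below only illustrates the priority mechanism it describes, a per-node queue limit that throttles data nodes more aggressively than voice nodes (the limits and packet counts are invented).

```python
from collections import deque

# Toy per-node queue limits: a data node gets a small limit, so its packets
# are rejected once its insertion buffer is full, while a voice node gets a
# larger limit. Ring service (draining of queues) is not modelled; the point
# is only how the limit throttles one traffic class more than the other.
class Node:
    def __init__(self, name, queue_limit):
        self.name = name
        self.queue = deque()
        self.queue_limit = queue_limit
        self.rejected = 0

    def offer(self, packet):
        if len(self.queue) < self.queue_limit:
            self.queue.append(packet)
        else:
            self.rejected += 1

voice_node = Node("voice", queue_limit=8)
data_node = Node("data", queue_limit=2)

for i in range(10):
    voice_node.offer(f"v{i}")
    data_node.offer(f"d{i}")

print("voice queued:", len(voice_node.queue), "rejected:", voice_node.rejected)
print("data queued:", len(data_node.queue), "rejected:", data_node.rejected)
```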