976 results for electronic healthcare data


Relevance:

100.00%

Publisher:

Abstract:

This article is protected by copyright. All rights reserved.


Relevance:

100.00%

Publisher:

Abstract:

Background: Through this paper, we present the initial steps in the creation of an integrated platform for the provision of a series of eHealth tools and services to both citizens and travellers in isolated areas of the southeast Mediterranean, and on board ships travelling across it. The platform was created through an INTERREG IIIB ARCHIMED project called INTERMED.

Methods: The proposed platform aims to facilitate three major issues: the support of primary healthcare, home care, and the continuous education of physicians. The proposed system is based on state-of-the-art telemedicine systems and provides the following healthcare services: i) telecollaboration and teleconsultation services between remotely located healthcare providers; ii) telemedicine services in emergencies; iii) home telecare services for "at-risk" citizens such as the elderly and patients with chronic diseases; and iv) eLearning services for continuous training, through seminars, of both healthcare personnel (physicians, nurses, etc.) and persons supporting "at-risk" citizens. These systems support data transmission over simple phone lines, internet connections, integrated services digital network/digital subscriber lines, satellite links, mobile networks (GPRS/3G), and wireless local area networks. The transmitted data include, among others, voice, vital biosignals, still medical images, video, and data used by eLearning applications. The platform comprises several systems, each supporting different services, integrated through a common data storage and exchange scheme in order to achieve system interoperability in terms of software, language, and national characteristics.

Results: The platform has been installed and evaluated in different rural and urban sites in Greece, Cyprus, and Italy. The evaluation mainly addressed technical issues and user satisfaction. The selected sites include, among others, rural health centers, ambulances, homes of "at-risk" citizens, and a ferry.

Conclusions: The results demonstrated the functionality and utility of the platform in various rural places in Greece, Cyprus, and Italy. However, further actions are needed to familiarize the local healthcare systems and the different population groups with mature technological solutions for the provision of healthcare services, and to bring these solutions into everyday use.

Relevance:

100.00%

Publisher:

Abstract:

The availability of electronic health data favors scientific advance through the creation of repositories for secondary use. Data anonymization is a mandatory step to comply with current legislation. We present a service for the pseudonymization of electronic healthcare record (EHR) extracts, aimed at facilitating the exchange of clinical information for secondary use in compliance with legislation on data protection. According to ISO/TS 25237, pseudonymization is a particular type of anonymization. The tool performs anonymization while retaining three quasi-identifiers (gender, date of birth, and place of residence) at a degree of specification selected by the user. The system is based on the ISO/EN 13606 standard, exploiting characteristics of the standard that are specifically favorable for anonymization. The service is made up of two independent modules: the demographic server and the pseudonymizing module. The demographic server supports the permanent storage of the demographic entities and the management of the identifiers. The pseudonymizing module anonymizes the ISO/EN 13606 extracts. The pseudonymizing process consists of four phases: storage of the demographic information included in the extract, substitution of the identifiers, elimination of the demographic information from the extract, and elimination of key data in free-text fields. The described pseudonymizing system was used in three telemedicine research projects with satisfactory results. A problem was detected with the data type of a demographic field, and a proposal for modification was prepared for the group in charge of drawing up and revising the ISO/EN 13606 standard.
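The four-phase pseudonymizing process described above can be sketched roughly as follows. This is a minimal illustration, not the ISO/EN 13606 implementation: the record structure, field names, identifier format, and the `secrets`-based pseudonym generation are all assumptions.

```python
import re
import secrets

# Toy EHR extract; real extracts follow the ISO/EN 13606 reference model.
extract = {
    "patient_id": "NHS-1234567",
    "gender": "F",
    "birth_date": "1957-03-14",
    "residence": "Madrid",
    "free_text": "Patient NHS-1234567 reports chest pain.",
}

# Role of the demographic server: permanent storage of demographic
# entities and management of the identifier-to-pseudonym mapping.
demographic_store = {}

def pseudonymize(rec):
    # Phase 1: store the demographic information included in the extract.
    pseudonym = "PSN-" + secrets.token_hex(4)
    demographic_store[pseudonym] = {
        k: rec[k] for k in ("patient_id", "gender", "birth_date", "residence")
    }
    out = dict(rec)
    # Phase 2: substitute the identifier with the pseudonym.
    out["patient_id"] = pseudonym
    # Phase 3: eliminate demographic detail from the extract, keeping the
    # three quasi-identifiers (gender, date of birth, place of residence)
    # at a coarsened degree of specification (here: year of birth only).
    out["birth_date"] = rec["birth_date"][:4]
    # Phase 4: eliminate key data from free-text fields.
    out["free_text"] = re.sub(r"NHS-\d+", pseudonym, rec["free_text"])
    return out

anon = pseudonymize(extract)
```

The demographic server's mapping is what makes this pseudonymization rather than full anonymization: re-identification remains possible, but only through the separately protected store.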

Relevance:

100.00%

Publisher:

Abstract:

This paper describes the application of existing and novel adaptations of visualisation techniques to routinely collected health data. The aim of this case study is to examine the capacity for visualisation approaches to quickly and effectively inform clinical, policy, and fiscal decision making to improve healthcare provision. We demonstrate the use of interactive graphics, fluctuation plots, mosaic plots, time plots, heatmaps, and disease maps to visualise patient admission, transfer, in-hospital mortality, morbidity coding, execution of diagnosis and treatment guidelines, and the temporal and spatial variations of diseases. The relative effectiveness of these techniques and associated challenges are discussed.
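The aggregation underlying a fluctuation plot or heatmap is just a cross-tabulation of categorical counts; a minimal sketch with invented admission records and field names (the plotting step itself is omitted):

```python
from collections import Counter

# Hypothetical admission records: (weekday, ward).
admissions = [
    ("Mon", "Cardiology"), ("Mon", "Cardiology"), ("Mon", "Oncology"),
    ("Tue", "Cardiology"), ("Tue", "Oncology"), ("Tue", "Oncology"),
    ("Wed", "Cardiology"),
]

# Cross-tabulate counts; a heatmap or fluctuation plot then renders this
# matrix with cell colour (heatmap) or cell size (fluctuation plot)
# proportional to each count.
counts = Counter(admissions)
weekdays = ["Mon", "Tue", "Wed"]
wards = ["Cardiology", "Oncology"]
matrix = [[counts[(d, w)] for w in wards] for d in weekdays]
```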

Relevance:

100.00%

Publisher:

Abstract:

Healthcare plays an important role in promoting the general health and well-being of people around the world. The difficulty in healthcare data classification arises from the uncertainty and the high-dimensional nature of the medical data collected. This paper proposes an integration of the fuzzy standard additive model (SAM) with a genetic algorithm (GA), called GSAM, to deal with these uncertainty and computational challenges. The GSAM learning process comprises three sequential steps: rule initialization by unsupervised learning using adaptive vector quantization clustering, evolutionary rule optimization by the GA, and parameter tuning by gradient-descent supervised learning. Wavelet transformation is employed to extract discriminative features from high-dimensional datasets. GSAM becomes highly capable when deployed with a small number of wavelet features, as its computational burden is remarkably reduced. The proposed method is evaluated using two frequently used medical datasets, the Wisconsin breast cancer and Cleveland heart disease datasets from the UCI Machine Learning Repository. Experiments are organized with five-fold cross-validation, and the performance of the classification techniques is measured by a number of important metrics: accuracy, F-measure, mutual information, and area under the receiver operating characteristic curve. Results demonstrate the superiority of GSAM compared to other machine learning methods including the probabilistic neural network, support vector machine, fuzzy ARTMAP, and adaptive neuro-fuzzy inference system. The proposed approach is thus helpful as a decision support system for medical practitioners in healthcare practice.
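Two of the evaluation metrics mentioned, accuracy and F-measure, reduce to confusion-matrix arithmetic; a minimal sketch with made-up binary labels and predictions (not the paper's actual experiment):

```python
def accuracy_and_f1(y_true, y_pred):
    # Confusion-matrix cells for binary labels (1 = positive class).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    acc = (tp + tn) / len(y_true)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # F-measure (F1) is the harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall)
    return acc, f1

# Toy example: tp=2, fp=1, fn=1, tn=2, so acc = f1 = 2/3.
acc, f1 = accuracy_and_f1([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
```

In a five-fold cross-validation these metrics would be computed on each held-out fold and averaged.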

Relevance:

100.00%

Publisher:

Abstract:

Assessing prognostic risk is crucial to clinical care, and critically dependent on both diagnosis and medical interventions. Current methods use this augmented information to build a single prediction rule. But this may not be expressive enough to capture differential effects of interventions on prognosis. To this end, we propose a supervised, Bayesian nonparametric framework that simultaneously discovers the latent intervention groups and builds a separate prediction rule for each intervention group. The prediction rule is learnt using diagnosis data through a Bayesian logistic regression. For inference, we develop an efficient collapsed Gibbs sampler. We demonstrate that our method outperforms baselines in predicting 30-day hospital readmission using two patient cohorts - Acute Myocardial Infarction and Pneumonia. The significance of this model is that it can be applied widely across a broad range of medical prognosis tasks. © 2014 Springer International Publishing.

Relevance:

100.00%

Publisher:

Abstract:

Medical interventions critically determine clinical outcomes. But prediction models either ignore interventions or dilute their impact by building a single prediction rule that amalgamates interventions with other features. One rule across all interventions may not capture their differential effects. Also, interventions change over time as innovations are made, requiring prediction models to evolve with them. To address these gaps, we propose a prediction framework that explicitly models interventions by extracting a set of latent intervention groups through a hierarchical Dirichlet process (HDP) mixture. Data are split into temporal windows, and for each window a separate distribution over the intervention groups is learnt. This ensures that the model evolves with changing interventions. The outcome is modeled as conditional on both the latent grouping and the patient's condition, through a Bayesian logistic regression. Learning distributions for each time window results in an overly complex model when interventions do not change in every window. We show that by replacing the HDP with a dynamic HDP prior, a more compact set of distributions can be learnt. Experiments performed on two hospital datasets demonstrate the superiority of our framework over many existing clinical and traditional prediction frameworks.

Relevance:

100.00%

Publisher:

Abstract:

Medical outcomes are inexorably linked to patient illness and clinical interventions. Interventions change the course of disease, crucially determining outcome. Traditional outcome prediction models build a single classifier by augmenting interventions with disease information. Interventions, however, differentially affect prognosis, thus a single prediction rule may not suffice to capture variations. Interventions also evolve over time as more advanced interventions replace older ones. To this end, we propose a Bayesian nonparametric, supervised framework that models a set of intervention groups through a mixture distribution building a separate prediction rule for each group, and allows the mixture distribution to change with time. This is achieved by using a hierarchical Dirichlet process mixture model over the interventions. The outcome is then modeled as conditional on both the latent grouping and the disease information through a Bayesian logistic regression. Experiments on synthetic and medical cohorts for 30-day readmission prediction demonstrate the superiority of the proposed model over clinical and data mining baselines.
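The core idea running through the abstracts above, a separate logistic prediction rule per latent intervention group, can be caricatured as follows. The group assignments and weights here are invented for illustration; in the papers the groups are inferred with an HDP mixture and the weights learnt by Bayesian logistic regression, not fixed by hand.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical logistic weights for two latent intervention groups over
# two disease features (e.g. severity score, comorbidity count).
group_weights = {
    0: [2.0, -1.0],   # e.g. an aggressive-intervention group
    1: [-0.5, 1.5],   # e.g. a conservative-management group
}

def readmission_prob(group, features):
    # The outcome is conditional on BOTH the latent grouping and the
    # disease information: each group carries its own prediction rule.
    z = sum(w * x for w, x in zip(group_weights[group], features))
    return sigmoid(z)

# The same patient condition yields different predicted risks under
# different intervention groups - the differential effect a single
# amalgamated rule would miss.
p0 = readmission_prob(0, [1.0, 0.0])
p1 = readmission_prob(1, [1.0, 0.0])
```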

Relevance:

100.00%

Publisher:

Abstract:

Includes bibliography

Relevance:

100.00%

Publisher:

Abstract:

We have searched for periodic variations of the electronic recoil event rate in the (2-6) keV energy range recorded between February 2011 and March 2012 with the XENON100 detector, adding up to 224.6 live days in total. Following a detailed study to establish the stability of the detector and its background contributions during this run, we performed an un-binned profile likelihood analysis to identify any periodicity up to 500 days. We find a global significance of less than 1 sigma for all periods suggesting no statistically significant modulation in the data. While the local significance for an annual modulation is 2.8 sigma, the analysis of a multiple-scatter control sample and the phase of the modulation disfavor a dark matter interpretation. The DAMA/LIBRA annual modulation interpreted as a dark matter signature with axial-vector coupling of WIMPs to electrons is excluded at 4.8 sigma.
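As a toy illustration of searching for a periodic modulation in an event rate: with the period fixed, the modulation amplitude can be estimated by least squares against a cosine template. The synthetic, noiseless data below are invented; the actual XENON100 analysis is an unbinned profile likelihood over many candidate periods, not this sketch.

```python
import math

# Synthetic event-rate samples with an injected annual modulation:
# rate(t) = 10.0 + 0.8 * cos(2*pi*t / period), sampled every 5 days.
period = 365.25
days = list(range(0, 400, 5))
rate = [10.0 + 0.8 * math.cos(2 * math.pi * t / period) for t in days]

# Least-squares amplitude estimate for a cosine template of known period
# and phase: subtract the mean rate, then project onto the template.
mean = sum(rate) / len(rate)
c = [math.cos(2 * math.pi * t / period) for t in days]
a_hat = (sum((r - mean) * ci for r, ci in zip(rate, c))
         / sum(ci * ci for ci in c))
# a_hat recovers roughly the injected 0.8 amplitude (a small bias remains
# because the 400-day span is not an integer number of periods).
```

A real analysis would repeat this over a grid of periods and phases and convert the best-fit amplitudes into local and global significances.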
