902 results for Medical data
Abstract:
Magdeburg, Univ., Fak. für Informatik, Diss., 2014
Abstract:
Data mining is a relatively new field of research whose objective is to acquire knowledge from large amounts of data. In the medical and health care domains, a large amount of data is becoming available due to regulations and the widespread availability of computers [27]. Practitioners are expected to use all these data in their work, yet such volumes cannot be processed by humans quickly enough to produce diagnoses, prognoses and treatment schedules. A major objective of this thesis is to evaluate data mining tools in medical and health care applications in order to develop a tool that can help make reasonably accurate decisions. The goal of this thesis is to find a pattern among patients who contracted pneumonia by clustering lab values that were recorded every day. This pattern can then be generalized to patients who have not been diagnosed with the disease but whose lab values show the same trend as those of pneumonia patients. Ten tables were extracted from a large hospital database in Jena for this work. In the ICU (intensive care unit), the COPRA system, a patient management system, is used. All tables and data are stored in a German-language database.
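The clustering idea described in this abstract can be sketched as follows. This is a hypothetical illustration, not the thesis's actual pipeline: the synthetic lab series, the choice of the least-squares slope as the trend feature, and the simple two-group split are all assumptions made for the sketch.

```python
# Hypothetical sketch: group patients by the trend of a daily lab value and
# flag the group whose values rise like those of pneumonia patients.
# All data below are synthetic; feature and split choices are illustrative.
import random

random.seed(0)

def slope(series):
    """Least-squares slope of a daily lab-value series (trend feature)."""
    n = len(series)
    mx = (n - 1) / 2
    my = sum(series) / n
    num = sum((x - mx) * (y - my) for x, y in enumerate(series))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

# Synthetic patients: 5 with a rising 7-day lab trend, 5 with stable values.
rising = [[2 * day + random.random() for day in range(7)] for _ in range(5)]
stable = [[random.random() for _ in range(7)] for _ in range(5)]
slopes = [slope(p) for p in rising + stable]

# Crude two-group split on the trend feature: cut halfway between extremes.
cut = (min(slopes) + max(slopes)) / 2
labels = [int(s > cut) for s in slopes]  # 1 = rising trend, 0 = stable
```

In a real setting the daily series would come from the ICU lab tables and a proper clustering algorithm (e.g. k-means over whole trajectories) would replace the one-feature split, but the principle of grouping patients by lab-value trend is the same.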
Abstract:
Clinicians can model a patient's brain injury through his or her brain activity. However, how this model is defined and how it changes as the patient recovers are questions that remain unanswered. In this paper, the MedVir framework is proposed with the aim of answering these questions. Based on complex data mining techniques, the framework provides not only the differentiation between TBI patients and control subjects (with 72% accuracy using 0.632 bootstrap validation), but also the ability to detect whether a patient may recover or not, and all of that in a quick and easy way through an interactive visualization technique.
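The 0.632 bootstrap validation mentioned above combines the optimistic resubstitution accuracy with the pessimistic out-of-bag accuracy as 0.368·acc_resub + 0.632·acc_oob. A minimal sketch, assuming synthetic 1-D data and a toy nearest-mean classifier (neither is from the paper):

```python
# Minimal sketch of the 0.632 bootstrap accuracy estimate.
# The data and the nearest-mean classifier are illustrative stand-ins.
import random

random.seed(1)

# Two synthetic classes of 1-D points.
data = [(random.gauss(0, 1), 0) for _ in range(30)] + \
       [(random.gauss(3, 1), 1) for _ in range(30)]

def fit(train):
    """Toy classifier: remember the mean of each class."""
    c0 = [x for x, y in train if y == 0]
    c1 = [x for x, y in train if y == 1]
    return sum(c0) / len(c0), sum(c1) / len(c1)

def acc(model, sample):
    c0, c1 = model
    hits = sum((abs(x - c1) < abs(x - c0)) == (y == 1) for x, y in sample)
    return hits / len(sample)

model = fit(data)
resub = acc(model, data)  # resubstitution accuracy (optimistic)

oob_accs = []
for _ in range(100):                          # bootstrap resamples
    boot = [random.choice(data) for _ in data]
    oob = [p for p in data if p not in boot]  # out-of-bag points
    if oob:
        oob_accs.append(acc(fit(boot), oob))
oob_acc = sum(oob_accs) / len(oob_accs)       # out-of-bag accuracy (pessimistic)

est_632 = 0.368 * resub + 0.632 * oob_acc     # the 0.632 estimator
```

The weighting corrects for the fact that a bootstrap sample contains, on average, about 63.2% of the distinct original points.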
Abstract:
Thesis (M.S.)--University of Illinois at Urbana-Champaign.
Abstract:
This article describes a medical data-advisory web resource developed by the authors. The resource enables information interchange between consumers of medical services, the medical establishments that provide these services, and the manufacturers of medical equipment and medicaments. The main sections of the web site, their purposes and their capabilities are also described.
Abstract:
The use of secondary data in health care research has become a very important issue over the past few years. Data from the treatment context are being used for evaluation of medical data for external quality assurance, as well as to answer medical questions in the form of registers and research databases. Additionally, the establishment of electronic clinical systems like data warehouses provides new opportunities for the secondary use of clinical data. Because health data is among the most sensitive information about an individual, the data must be safeguarded from disclosure.
Abstract:
Wireless medical systems comprise four stages: the medical device, data transport, data collection and data evaluation. Whereas the performance of the first stage is highly regulated, the others are not. This paper concentrates on the data transport stage and argues that it is necessary to establish standardized tests to be used by medical device manufacturers to provide comparable results concerning the communication performance of the wireless networks used to transport medical data. It also suggests test parameters and procedures for producing such comparable results.
Abstract:
Hospitals nowadays collect vast amounts of data related to patient records. All these data hold valuable knowledge that can be used to improve hospital decision making, and data mining techniques aim precisely at the extraction of useful knowledge from raw data. This work describes an implementation of a medical data mining project based on the CRISP-DM methodology. Recent real-world data, from 2000 to 2013, were collected from a Portuguese hospital and relate to inpatient hospitalization. The goal was to predict generic hospital Length Of Stay based on indicators that are commonly available at the hospitalization process (e.g., gender, age, episode type, medical specialty). At the data preparation stage, the data were cleaned and variables were selected and transformed, leading to 14 inputs. Next, at the modeling stage, a regression approach was adopted, where six learning methods were compared: Average Prediction, Multiple Regression, Decision Tree, Artificial Neural Network ensemble, Support Vector Machine and Random Forest. The best learning model was obtained by the Random Forest method, which presents a high coefficient of determination (0.81). This model was then opened by using a sensitivity analysis procedure that revealed three influential input attributes: the hospital episode type, the physical service where the patient is hospitalized and the associated medical specialty. Such extracted knowledge confirmed that the obtained predictive model is credible and has potential value for supporting the decisions of hospital managers.
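The regression comparison above ranks models by the coefficient of determination, R² = 1 − SS_res/SS_tot. A minimal sketch of that metric, assuming synthetic length-of-stay data and a one-variable linear fit as a stand-in for the stronger learners compared in the paper (the Average Prediction baseline scores R² = 0 by construction):

```python
# Illustrative sketch (not the paper's pipeline): score an Average Prediction
# baseline and a simple linear model with R^2, the metric the study reports.
import random

random.seed(2)

# Synthetic "length of stay" driven by one hypothetical indicator, plus noise.
X = [random.uniform(0, 10) for _ in range(200)]
y = [2.0 * x + 3 + random.gauss(0, 2) for x in X]

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Average Prediction baseline: always predict the mean (R^2 = 0 by definition).
mean_y = sum(y) / len(y)
baseline = r2(y, [mean_y] * len(y))

# Least-squares linear fit, standing in for the compared learners.
mx = sum(X) / len(X)
b = sum((x - mx) * (t - mean_y) for x, t in zip(X, y)) \
    / sum((x - mx) ** 2 for x in X)
a = mean_y - b * mx
model_score = r2(y, [a + b * x for x in X])
```

Any method scoring above the zero baseline captures real structure; the paper's Random Forest reaching 0.81 on held-out hospital data is a strong result on this scale.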
Abstract:
For more than 20 years, many countries have been trying to set up a standardised medical record at the regional or at the national level. Most of them have not reached this goal, essentially due to two main difficulties related to patient identification and medical records standardisation. Moreover, the issues raised by the centralisation of all gathered medical data have to be tackled particularly in terms of security and privacy. We discuss here the interest of a noncentralised management of medical records which would require a specific procedure that gives to the patient access to his/her distributed medical data, wherever he/she is located.
Abstract:
Through this article, we propose a mixed management of patients' medical records, so as to share responsibilities between the patient and the Medical Practitioner (MP) by making patients responsible for the validation of their administrative information, and MPs responsible for the validation of their patients' medical information. Our proposal can be considered a solution to the main problem faced by patients, health practitioners and the authorities, namely the gathering and updating of administrative and medical data belonging to the patient in order to accurately reconstitute a patient's medical history. The method is based on two processes. The aim of the first process is to provide a patient's administrative data, in order to know where and when the patient received care (name of the health structure or health practitioner, type of care: outpatient or inpatient). The aim of the second process is to provide a patient's medical information and to validate it under the accountability of the MP, with the help of the patient if needed. During these two processes, the patient's privacy is ensured through cryptographic hash functions such as the Secure Hash Algorithm, which allow pseudonymisation of a patient's identity. The proposed Medical Record Search Engines are able to retrieve and provide, upon a request formulated by the MP, all the available information concerning a patient who has received care in different health structures, without divulging the patient's identity. Our method can lead to improved efficiency of personal medical record management under the mixed responsibilities of the patient and the MP.
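The pseudonymisation step described above can be sketched with a Secure Hash Algorithm from the standard library. The choice of identity traits, the normalisation, and the salt parameter are illustrative assumptions, not the paper's exact scheme:

```python
# Sketch of identity pseudonymisation with a cryptographic hash (SHA-256).
# The identity traits and normalisation below are illustrative assumptions.
import hashlib

def pseudonym(last_name, first_name, birth_date, salt=""):
    """Derive a stable pseudonym from identity traits; the raw identity
    cannot be recovered from the one-way digest."""
    traits = "|".join([last_name.upper(), first_name.upper(),
                       birth_date, salt])
    return hashlib.sha256(traits.encode("utf-8")).hexdigest()

# The same person queried from two health structures yields the same
# pseudonym, so records can be linked without revealing the identity.
p1 = pseudonym("Dupont", "Marie", "1970-01-01")
p2 = pseudonym("dupont", "MARIE", "1970-01-01")  # normalisation makes these match
```

In practice a secret salt (or a keyed construction such as HMAC) would be needed so that an attacker cannot rebuild the pseudonym table by hashing known identities.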
Abstract:
Statistics has become an indispensable tool in biomedical research. Thanks, in particular, to computer science, the researcher has easy access to elementary "classical" procedures. These are often of a "confirmatory" nature: their aim is to test hypotheses (for example the efficacy of a treatment) prior to experimentation. However, doctors often use them in situations more complex than foreseen, to discover interesting data structures and formulate hypotheses. This inverse process may lead to misuse which increases the number of "statistically proven" results in medical publications. The help of a professional statistician thus becomes necessary. Moreover, good, simple "exploratory" techniques are now available. In addition, medical data contain quite a high percentage of outliers (data that deviate from the majority). With classical methods it is often very difficult (even for a statistician!) to detect them and the reliability of results becomes questionable. New, reliable ("robust") procedures have been the subject of research for the past two decades. Their practical introduction is one of the activities of the Statistics and Data Processing Department of the University of Social and Preventive Medicine, Lausanne.
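The robust procedures this abstract advocates can be illustrated with the classic median/MAD outlier rule: unlike the mean and standard deviation, the median and the median absolute deviation are not dragged toward the outliers they are meant to detect. A small sketch with synthetic lab values; the conventional 3.5 cutoff and the data are illustrative:

```python
# Robust outlier detection with the median and the MAD (median absolute
# deviation). Cutoff 3.5 is a common convention; data are synthetic.
import statistics

def mad_outliers(values, cutoff=3.5):
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    # 0.6745 rescales the MAD so the score is comparable to a z-score.
    return [v for v in values
            if mad and abs(0.6745 * (v - med) / mad) > cutoff]

# One gross measurement error hidden among plausible lab values.
labs = [5.1, 4.9, 5.3, 5.0, 5.2, 4.8, 5.1, 19.7]
```

A mean/SD rule applied to `labs` would struggle here, because the single outlier inflates the standard deviation enough to shrink its own z-score; the median-based rule flags it cleanly.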
Abstract:
IMPORTANCE: The 16p11.2 BP4-BP5 duplication is the copy number variant most frequently associated with autism spectrum disorder (ASD), schizophrenia, and comorbidities such as decreased body mass index (BMI). OBJECTIVES: To characterize the effects of the 16p11.2 duplication on cognitive, behavioral, medical, and anthropometric traits and to understand the specificity of these effects by systematically comparing results in duplication carriers and reciprocal deletion carriers, who are also at risk for ASD. DESIGN, SETTING, AND PARTICIPANTS: This international cohort study of 1006 study participants compared 270 duplication carriers with their 102 intrafamilial control individuals, 390 reciprocal deletion carriers, and 244 deletion controls from European and North American cohorts. Data were collected from August 1, 2010, to May 31, 2015 and analyzed from January 1 to August 14, 2015. Linear mixed models were used to estimate the effect of the duplication and deletion on clinical traits by comparison with noncarrier relatives. MAIN OUTCOMES AND MEASURES: Findings on the Full-Scale IQ (FSIQ), Nonverbal IQ, and Verbal IQ; the presence of ASD or other DSM-IV diagnoses; BMI; head circumference; and medical data. RESULTS: Among the 1006 study participants, the duplication was associated with a mean FSIQ score that was lower by 26.3 points between proband carriers and noncarrier relatives and a lower mean FSIQ score (16.2-11.4 points) in nonproband carriers. The mean overall effect of the deletion was similar (-22.1 points; P < .001). However, broad variation in FSIQ was found, with a 19.4- and 2.0-fold increase in the proportion of FSIQ scores that were very low (≤40) and higher than the mean (>100) compared with the deletion group (P < .001). Parental FSIQ predicted part of this variation (approximately 36.0% in hereditary probands). 
Although the frequency of ASD was similar in deletion and duplication proband carriers (16.0% and 20.0%, respectively), the FSIQ was significantly lower (by 26.3 points) in the duplication probands with ASD. There also were lower head circumference and BMI measurements among duplication carriers, which is consistent with the findings of previous studies. CONCLUSIONS AND RELEVANCE: The mean effect of the duplication on cognition is similar to that of the reciprocal deletion, but the variance in the duplication is significantly higher, with severe and mild subgroups not observed with the deletion. These results suggest that additional genetic and familial factors contribute to this variability. Additional studies will be necessary to characterize the predictors of cognitive deficits.
Abstract:
This paper describes the open source framework MARVIN for rapid application development in the field of biomedical and clinical research. MARVIN applications consist of modules that can be plugged together in order to provide the functionality required for a specific experimental scenario. Application modules work on a common patient database that is used to store and organize medical data as well as derived data. MARVIN provides a flexible input/output system with support for many file formats including DICOM, various 2D image formats and surface mesh data. Furthermore, it implements an advanced visualization system and interfaces to a wide range of 3D tracking hardware. Since it uses only highly portable libraries, MARVIN applications run on Unix/Linux, Mac OS X and Microsoft Windows.