832 results for "databases and data mining"


Relevance: 100.00%

Abstract:

Data mining is a relatively new field of research whose objective is to acquire knowledge from large amounts of data. In medical and health care settings, regulations and the widespread availability of computers mean that large amounts of data are becoming available [27]. Practitioners are expected to use all of this data in their work, yet such volumes cannot be processed by humans quickly enough to support diagnosis, prognosis and treatment scheduling. A major objective of this thesis is to evaluate data mining tools in medical and health care applications in order to develop a tool that can support reasonably accurate decisions. The specific goal is to find a pattern among patients who developed pneumonia by clustering the laboratory values recorded for them every day, and then to generalize this pattern to patients who have not been diagnosed with the disease but whose laboratory values show the same trend as those of pneumonia patients. Ten tables were extracted from a large hospital database in Jena for this work. The ICU (intensive care unit) uses COPRA, a patient management system, and all tables and data are stored in a German-language database.
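The thesis names clustering of daily lab values as its core step without fixing a particular algorithm; the sketch below is one plausible reading, using k-means from scikit-learn over per-patient summary features. The synthetic array and the idea of summarising each series by its mean and trend are illustrative assumptions, not the COPRA lab tables.

```python
# Minimal sketch of the clustering step described above: group patients by the
# trend of their daily lab values. No specific algorithm is named in the thesis,
# so k-means over per-patient summary features is assumed here; the data are
# synthetic placeholders, not values from the Jena hospital database.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for daily lab values: 100 patients x 7 days x 3 lab tests.
daily_labs = rng.normal(size=(100, 7, 3))

# Summarise each patient's time series as the mean and the day-to-day change per
# test, so that patients with a similar trend end up close together in feature space.
means = daily_labs.mean(axis=1)
trends = daily_labs[:, -1, :] - daily_labs[:, 0, :]
features = np.hstack([means, trends])

X = StandardScaler().fit_transform(features)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

print("cluster sizes:", np.bincount(kmeans.labels_))
```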

Relevance: 100.00%

Abstract:

Data mining can be used in the healthcare industry to "mine" clinical data and discover hidden information for intelligent and effective decision making. Hidden patterns and relationships often go undiscovered, and advanced data mining techniques can help remedy this. This thesis deals mainly with Intelligent Prediction of Chronic Renal Disease (IPCRD). The data cover blood tests, urine tests and external symptoms used to predict chronic renal disease. Data from the database are first imported into Weka (3.6), and the Chi-Square method is used for feature selection. After normalizing the data, three classifiers are applied and the quality of the output is evaluated: Decision Tree, Naïve Bayes and the K-Nearest Neighbour algorithm. The results show that each technique has its own strengths in realizing the objectives of the defined mining goals. The Decision Tree and KNN performed almost equally well, but Naïve Bayes showed a comparative edge over the others. Sensitivity and specificity are also used as statistical measures of binary classification performance: sensitivity (also called recall in some fields) measures the proportion of actual positives that are correctly identified, while specificity measures the proportion of negatives that are correctly identified. The CRISP-DM methodology is applied to build the mining models; it consists of six major phases: business understanding, data understanding, data preparation, modeling, evaluation, and deployment.
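A minimal sketch of the pipeline described above, with scikit-learn standing in for Weka 3.6: Chi-Square feature selection after normalization, followed by the three classifiers compared in the thesis. The synthetic matrix and the choice of six selected attributes are assumptions made for illustration, not the actual blood/urine/symptom dataset.

```python
# Chi-Square feature selection plus the three classifiers named in the abstract,
# evaluated by cross-validated accuracy. Data are synthetic placeholders.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.preprocessing import MinMaxScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((200, 12))          # 200 patients, 12 candidate attributes
y = rng.integers(0, 2, size=200)   # 1 = chronic renal disease, 0 = healthy

X_scaled = MinMaxScaler().fit_transform(X)           # chi2 needs non-negative inputs
X_sel = SelectKBest(chi2, k=6).fit_transform(X_scaled, y)

for name, clf in [("Decision Tree", DecisionTreeClassifier(random_state=0)),
                  ("Naive Bayes", GaussianNB()),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    acc = cross_val_score(clf, X_sel, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```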

Relevance: 100.00%

Abstract:

HydroShare is an online, collaborative system being developed for open sharing of hydrologic data and models. The goal of HydroShare is to enable scientists to easily discover and access hydrologic data and models, retrieve them to their desktop or perform analyses in a distributed computing environment that may include grid, cloud or high performance computing model instances as necessary. Scientists may also publish outcomes (data, results or models) into HydroShare, using the system as a collaboration platform for sharing data, models and analyses. HydroShare is expanding the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated, creating new capability to share models and model components, and taking advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. One of the fundamental concepts in HydroShare is that of a Resource. All content is represented using a Resource Data Model that separates system and science metadata and has elements common to all resources as well as elements specific to the types of resources HydroShare will support. These will include different data types used in the hydrology community and models and workflows that require metadata on execution functionality. The HydroShare web interface and social media functions are being developed using the Drupal content management system. A geospatial visualization and analysis component enables searching, visualizing, and analyzing geographic datasets. The integrated Rule-Oriented Data System (iRODS) is being used to manage federated data content and perform rule-based background actions on data and model resources, including parsing to generate metadata catalog information and the execution of models and workflows. This presentation will introduce the HydroShare functionality developed to date, describe key elements of the Resource Data Model and outline the roadmap for future development.
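As an illustration of the Resource concept, the sketch below separates system metadata from science metadata in the way the abstract describes; the field names are assumptions chosen for the example, not the actual HydroShare Resource Data Model schema.

```python
# Illustrative sketch only: every resource carries system metadata common to all
# resources plus science metadata specific to its resource type. Field names are
# hypothetical, not the real HydroShare schema.
from dataclasses import dataclass, field

@dataclass
class SystemMetadata:
    resource_id: str
    owner: str
    created: str             # ISO 8601 timestamp
    resource_type: str        # e.g. "TimeSeries", "Model", "Workflow"

@dataclass
class ScienceMetadata:
    title: str
    abstract: str
    keywords: list[str] = field(default_factory=list)
    extra: dict[str, str] = field(default_factory=dict)   # type-specific elements

@dataclass
class Resource:
    system: SystemMetadata
    science: ScienceMetadata
    files: list[str] = field(default_factory=list)

r = Resource(
    SystemMetadata("abc123", "jdoe", "2013-06-01T00:00:00Z", "TimeSeries"),
    ScienceMetadata("Streamflow at gauge X", "Daily discharge observations",
                    ["hydrology", "streamflow"]),
    ["discharge.csv"],
)
print(r.system.resource_type, r.science.title)
```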

Relevance: 100.00%

Abstract:

Enriquillo and Azuei are saltwater lakes located in a closed water basin in the southwestern region of the island of La Hispaniola; they have been experiencing dramatic changes in total lake-surface area during the period 1980-2012. Lake Enriquillo had a surface area of approximately 276 km2 in 1984, gradually decreasing to 172 km2 in 1996. The surface area of the lake reached its lowest point in the satellite observation record in 2004, at 165 km2; the lake then began to grow again, regaining its 1984 size by 2006. Based on surface area measurements for June and July 2013, Lake Enriquillo now has a surface area of ~358 km2. Lake Azuei measured 116 km2 in 1984 and 134 km2 in 2013 at the two ends of the record, an overall 15.8% increase in 30 years. Determining the causes of these lake surface area changes is extremely important because of their environmental, social, and economic impacts. The overall goal of this study is to quantify the changing water balance in these lakes and their catchment area using satellite and ground observations and a regional atmospheric-hydrologic modeling approach. Analyses of environmental variables in the region reflect a hydrological imbalance of the lakes due to changing regional hydro-climatic conditions. Historical data show precipitation, land surface temperature and humidity, and sea surface temperature (SST) increasing over the region during the past decades. Salinity levels have also decreased by more than 30% from previously reported baseline levels. Here we present a summary of the historical data obtained, the new sensors deployed in the surrounding sierras and on the lakes, and the integrated modeling exercises, as well as the challenges of gathering, storing, sharing, and analyzing this large volume of data from such a diverse number of sources at a remote location.
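The water balance the study sets out to quantify can be written, in its generic closed-basin form, as

\[
\frac{dV}{dt} = (P - E)\,A_{\text{lake}} + Q_{\text{in}} + Q_{\text{gw}}
\]

where V is lake volume, P and E are precipitation and evaporation over the lake surface area A_lake, Q_in is surface runoff from the catchment and Q_gw is net groundwater exchange; with no surface outlet, changes in storage translate directly into surface-area change. This is the textbook formulation only, not the specific configuration of the regional atmospheric-hydrologic model used in the study.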

Relevance: 100.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 100.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 100.00%

Abstract:

Clustering data is a very important task in data mining, image processing and pattern recognition problems. One of the most popular clustering algorithms is Fuzzy C-Means (FCM). This thesis proposes a new way of calculating the cluster centers within the FCM procedure, called ckMeans, and applies it to some FCM variants, in particular those that use other distances. The goal of this change is to reduce the number of iterations and the processing time of these algorithms without degrading the quality of the partition, and in some cases even to improve the number of correct classifications. We also developed a ckMeans-based algorithm to handle interval data with interval membership degrees. This algorithm allows interval data to be represented without converting it into point data, as happens in other FCM extensions that deal with interval data. To validate the proposed methodologies, the clusterings produced by the ckMeans, K-Means and FCM algorithms were compared (since the center calculation proposed here is similar to that of K-Means), considering three different distances and several well-known databases. The results of the interval ckMeans were also compared with those of other clustering algorithms applied to an interval database of minimum and maximum monthly temperatures for a given year, covering 37 cities distributed across the continents.
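The abstract does not spell out the ckMeans center formula, so the sketch below shows standard FCM in NumPy together with a K-Means-style (crisp) center update as one plausible reading of the proposal; treat the crisp variant as an assumption made for illustration.

```python
# Standard Fuzzy C-Means with the usual fuzzy-weighted center update and, as an
# alternative, a K-Means-style crisp center update suggested by the ckMeans idea.
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, tol=1e-5, crisp_centers=False, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per point
    for _ in range(iters):
        if crisp_centers:
            # Assumed ckMeans-style step: each point contributes only to the
            # cluster where its membership is highest, as in K-Means.
            hard = np.eye(c)[U.argmax(axis=1)]
            centers = (hard.T @ X) / np.maximum(hard.sum(axis=0)[:, None], 1)
        else:
            Um = U ** m
            centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2 / (m - 1)) *
                       np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Three well-separated synthetic clusters as a quick sanity check.
X = np.vstack([np.random.default_rng(1).normal(mu, 0.3, size=(50, 2)) for mu in (0, 2, 4)])
centers, U = fcm(X, c=3)
print(np.round(centers, 2))
```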

Relevance: 100.00%

Abstract:

Background: Leptospirosis is an important zoonotic disease associated with poor areas of urban settings in developing countries, and early diagnosis and prompt treatment may prevent disease. Although rodents are considered the main reservoirs of leptospirosis, dogs may develop the disease, may become asymptomatic carriers, and may be used as sentinels for disease epidemiology. The use of Geographical Information Systems (GIS) combined with spatial analysis techniques allows the disease to be mapped and health risk factors to be identified and assessed. In addition to GIS and spatial analysis, the data mining technique of decision trees has great potential for finding patterns in the behavior of the variables that determine the occurrence of leptospirosis. The objective of the present study was to apply Geographical Information Systems and data mining (decision trees) to evaluate the risk factors for canine leptospirosis in an area of Curitiba, PR.

Materials, Methods & Results: The study was performed in Vila Pantanal, an urban poor community in the city of Curitiba. A total of 287 dog blood samples were randomly obtained house-by-house in a two-day sampling in January 2010. In addition, a questionnaire was applied to owners at the time of sampling. Geographical coordinates for each household with a tested dog were obtained using a Global Positioning System (GPS) receiver in order to map the spatial distribution of dogs reagent and non-reagent to leptospirosis. For the decision tree, risk factors included the results of the microagglutination test (MAT) on dog serum, previous disease in the household, contact with rats or other dogs, dog breed, outdoor access, feeding, trash around the house or backyard, open sewer proximity, and flooding. A total of 189 samples (about 2/3 of all samples) were randomly selected for the training file and the consequent decision rules; the remaining 98 samples were used for the testing file. The seroprevalence showed a spatial distribution covering the whole Pantanal area, without clustering of reagent animals. Regarding the data mining, of the 189 samples used in the decision tree, 165 (87.3%) were correctly classified, giving a Kappa index of 0.413. A total of 154 out of 159 (96.8%) non-reagent samples were correctly classified and only 5/159 (3.2%) were wrongly identified. On the other hand, only 11 (36.7%) reagent samples were correctly classified, with 19 (63.3%) misclassified.

Discussion: The spatial distribution covering the whole Pantanal area showed that all animals in the area are at risk of infection by Leptospira spp. Although most samples were classified correctly by the decision tree, the seropositive animals proved difficult to separate, with only 36.7% of those samples classified correctly. This may occur because the number of seronegative animals greatly exceeds the number of seropositive ones, which skews the learned patterns of variable behavior. Data mining helped to evaluate the most important risk factors for leptospirosis in an urban poor community of Curitiba. The variables selected by the decision tree reflected factors important to the existence of the disease (lack of sewerage, presence of rats and rubbish, and dogs with free access to the street). The analyses showed the multifactorial character of the epidemiology of canine leptospirosis.
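The Kappa index reported above can be reproduced from the confusion matrix implied by the figures in the abstract (154 true negatives, 5 false positives, 19 false negatives, 11 true positives on the 189 training samples); a quick check:

```python
# Reproduces the Kappa value from the counts given in the abstract.
def cohen_kappa(tn: int, fp: int, fn: int, tp: int) -> float:
    n = tn + fp + fn + tp
    observed = (tn + tp) / n
    expected = ((tn + fp) * (tn + fn) + (tp + fn) * (tp + fp)) / n**2
    return (observed - expected) / (1 - expected)

print(round(cohen_kappa(tn=154, fp=5, fn=19, tp=11), 3))  # 0.413
```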

Relevance: 100.00%

Abstract:

Whereas genome sequencing defines the genetic potential of an organism, transcript sequencing defines the utilization of this potential and links the genome with most areas of biology. To exploit the information within the human genome in the fight against cancer, we have deposited some two million expressed sequence tags (ESTs) from human tumors and their corresponding normal tissues in the public databases. The data currently define approximately 23,500 genes, of which only approximately 1,250 are still represented only by ESTs. Examination of the EST coverage of known cancer-related (CR) genes reveals that <1% do not have corresponding ESTs, indicating that the representation of genes associated with commonly studied tumors is high. The careful recording of the origin of all ESTs we have produced has enabled detailed definition of where the genes they represent are expressed in the human body. More than 100,000 ESTs are available for seven tissues, indicating a surprising variability of gene usage that has led to the discovery of a significant number of genes with restricted expression, and that may thus be therapeutically useful. The ESTs also reveal novel nonsynonymous germline variants (although the one-pass nature of the data necessitates careful validation) and many alternatively spliced transcripts. Although widely exploited by the scientific community, vindicating our totally open source policy, the EST data generated still provide extensive information that remains to be systematically explored, and that may further facilitate progress toward both the understanding and treatment of human cancers.

Relevance: 100.00%

Abstract:

As a new modeling method, support vector regression (SVR) has been regarded as a state-of-the-art technique for regression and approximation. In this study, SVR models were developed to predict body and carcass-related characteristics of 2 strains of broiler chicken. To evaluate the prediction ability of the SVR models, we compared their performance with that of neural network (NN) models. Evaluation of the prediction accuracy of the models was based on the R², MS error, and bias. The output variables of interest were BW, empty BW, carcass, breast, drumstick, thigh, and wing weight in the 2 strains of Ross and Cobb chickens, predicted from dietary nutrient intake, including ME (kcal/bird per week), CP, TSAA, and Lys, all as grams per bird per week. A data set composed of 64 measurements taken from each strain was used for this analysis, with 44 data lines used for model training and the remaining 20 lines used to test the created models. The results of this study revealed that it is possible to satisfactorily estimate the BW and carcass parts of broiler chickens from their dietary nutrient intake. Based on the statistical criteria used to evaluate the performance of the SVR and NN models, the overall results demonstrate that both can be effective for accurate prediction of the body and carcass-related characteristics investigated here. However, the SVR method achieved better accuracy and generalization than the NN method. This indicates that the new data mining technique (the SVR model) can be used as an alternative modeling tool to NN models. However, further reevaluation of this algorithm in the future is suggested.
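A minimal sketch of the comparison described above, using scikit-learn's SVR and MLPRegressor with the 44/20 train/test split mentioned in the abstract; the synthetic nutrient-intake data and the model hyperparameters are assumptions, not the study's dataset or settings.

```python
# SVR vs. a small neural network for predicting body weight from nutrient intake.
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(size=(64, 4))                    # stand-in for ME, CP, TSAA, Lys intake
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.1, 64)   # stand-in for BW

X_train, X_test, y_train, y_test = X[:44], X[44:], y[:44], y[44:]

models = {
    "SVR": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)),
    "NN": make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(f"{name}: R2={r2_score(y_test, pred):.2f}, MSE={mean_squared_error(y_test, pred):.3f}")
```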

Relevance: 100.00%

Abstract:

This paper presents a technique for sharing the data stored in an object-oriented database aimed at design environments. The technique shares data between two related databases, called the Original and Product databases, and is composed of three processes: data separation, evolution and integration. Whenever a block of data needs to be shared, it is spread across both databases, producing one block in the Original database and another in the Product database, with special links between them controlled by the Object Manager. The two blocks do not need to be kept identical during the evolution phase of the sharing process. Six types of links were defined, and by choosing one the designer controls the evolution and reintegration of the block in both databases. The process uses the composite object concept as its unit of control. The concepts presented can be applied to any data model that supports composite objects.
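Purely as an illustration of the sharing scheme, the sketch below ties an Original-database block and a Product-database block together with a typed link that governs evolution and reintegration; the abstract does not name the six link types, so the ones shown are hypothetical placeholders, not the paper's actual taxonomy.

```python
# Toy model of a shared block: one copy per database, linked by a type that decides
# how the copies may diverge and how they are reintegrated. Names are hypothetical.
from dataclasses import dataclass, field
from enum import Enum, auto

class LinkType(Enum):                  # placeholders; the paper defines six types
    MIRRORED = auto()                  # copies must stay identical
    INDEPENDENT = auto()               # copies may diverge freely
    MERGE_ON_INTEGRATION = auto()      # divergence allowed, reconciled at integration

@dataclass
class Block:
    block_id: str
    objects: dict[str, dict] = field(default_factory=dict)   # composite objects

@dataclass
class SharedBlockLink:
    original: Block
    product: Block
    link_type: LinkType

    def integrate(self) -> Block:
        """Toy reintegration: Product changes win unless the link forbids divergence."""
        if self.link_type is LinkType.MIRRORED:
            return self.original
        merged = dict(self.original.objects)
        merged.update(self.product.objects)
        return Block(self.original.block_id, merged)

link = SharedBlockLink(
    Block("b1", {"part:42": {"rev": 1}}),
    Block("b1", {"part:42": {"rev": 2}}),
    LinkType.MERGE_ON_INTEGRATION,
)
print(link.integrate().objects)   # {'part:42': {'rev': 2}}
```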

Relevance: 100.00%

Abstract:

This paper describes a data mining environment for knowledge discovery in bioinformatics applications. The system has a generic kernel that implements the mining functions to be applied to input primary databases of biomedical information organized in a warehouse architecture. Both supervised and unsupervised classification can be implemented within the kernel and applied to data extracted from the primary database, with the results stored in a complex-object database for knowledge discovery. The kernel also includes a specific high-performance library that allows the mining functions to be designed and executed on parallel machines. Experimental results obtained by applying the kernel functions are reported.
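A toy sketch of the kernel idea: mining functions share one interface and a generic kernel dispatches them over worker processes. The interface and names are assumptions made for illustration, not the actual API of the system described in the paper.

```python
# Generic kernel dispatching pluggable mining functions to worker processes.
from concurrent.futures import ProcessPoolExecutor
from typing import Protocol, Sequence

class MiningFunction(Protocol):
    name: str
    def run(self, records: Sequence[dict]) -> dict: ...

class CountByLabel:
    """Trivial stand-in for a mining function: class distribution of labelled records."""
    name = "count_by_label"
    def run(self, records: Sequence[dict]) -> dict:
        counts: dict[str, int] = {}
        for r in records:
            counts[r["label"]] = counts.get(r["label"], 0) + 1
        return counts

def _execute(function: MiningFunction, records: Sequence[dict]) -> dict:
    return function.run(records)

def kernel(functions: Sequence[MiningFunction], records: Sequence[dict]) -> dict:
    # Each mining function is shipped to its own worker process.
    with ProcessPoolExecutor() as pool:
        futures = {f.name: pool.submit(_execute, f, records) for f in functions}
        return {name: fut.result() for name, fut in futures.items()}

if __name__ == "__main__":
    data = [{"label": "tumor"}, {"label": "normal"}, {"label": "tumor"}]
    print(kernel([CountByLabel()], data))   # {'count_by_label': {'tumor': 2, 'normal': 1}}
```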

Relevance: 100.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 100.00%

Abstract:

Hemoglobinopathies were included in the Brazilian Neonatal Screening Program on June 6, 2001, and automated high-performance liquid chromatography (HPLC) was indicated as one of the diagnostic methods. The amount of information generated by these systems is immense, and the behavior of groups cannot always be observed in individual analyses. Three-dimensional (3-D) visualization techniques can be applied to extract this information, revealing patterns, trends and relations in the results stored in databases. We applied a 3-D visualization tool to analyze patterns in hemoglobinopathy results based on neonatal diagnosis by HPLC. The laboratory results of 2520 newborn analyses carried out in 2001 and 2002 were used. The Fast, F1, F and A peaks detected by the analytical system were chosen as attributes for mapping. To establish a behavior pattern, the results were classified into groups according to hemoglobin phenotype: normal (N = 2169), variant (N = 73) and thalassemia (N = 279). The 3-D visualization was made with the FastMap DB tool; there were two distribution patterns in the normal group, due to variation in the amplitude of the values obtained by HPLC for the F1 window. The tool allowed separation of samples with normal Hb from those with alpha thalassemia, based on a significant difference (P > 0.05) between the mean values of the Fast and A peaks, demonstrating the need for better evaluation of chromatograms; this method could be used to help diagnose alpha thalassemia in newborns.
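A rough idea of the 3-D mapping described above can be sketched with matplotlib standing in for the FastMap DB tool: newborn analyses plotted by peak values and coloured by phenotype group. The peak values below are synthetic placeholders, not the 2520 real results.

```python
# 3-D scatter of HPLC peak attributes by phenotype group, with synthetic data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
groups = {"normal": (2169, [1.0, 5.0, 80.0]),        # illustrative (Fast, F1, A) means
          "variant": (73, [1.5, 5.5, 70.0]),
          "thalassemia": (279, [2.5, 4.0, 75.0])}

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
for label, (n, mean) in groups.items():
    pts = rng.normal(mean, [0.3, 0.5, 3.0], size=(n, 3))
    ax.scatter(pts[:, 0], pts[:, 1], pts[:, 2], s=4, label=label)
ax.set_xlabel("Fast peak")
ax.set_ylabel("F1 peak")
ax.set_zlabel("A peak")
ax.legend()
plt.show()
```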