818 results for MULTI-RELATIONAL DATA MINING


Relevance: 100.00%

Abstract:

Human brain imaging techniques, such as Magnetic Resonance Imaging (MRI) and Diffusion Tensor Imaging (DTI), are established scientific and diagnostic tools, and their adoption is growing in popularity. Statistical methods, machine learning and data mining algorithms have been successfully adopted to extract predictive and descriptive models from neuroimaging data. However, the knowledge discovery process typically also requires pre-processing, post-processing and visualisation techniques in complex data workflows. A main problem for the integrated pre-processing and mining of MRI data is currently the lack of comprehensive platforms that avoid the manual invocation of pre-processing and mining tools, which yields an error-prone and inefficient process. In this work we present K-Surfer, a novel plug-in for the Konstanz Information Miner (KNIME) workbench that automates the pre-processing of brain images and leverages the mining capabilities of KNIME in an integrated way. K-Surfer supports importing, filtering, merging and pre-processing neuroimaging data from FreeSurfer, a tool for human brain MRI feature extraction and interpretation. By automating the steps for importing FreeSurfer data, K-Surfer reduces time costs, eliminates human errors and enables the design of complex analytics workflows for neuroimaging data that leverage the rich functionality available in the KNIME workbench.
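
K-Surfer itself is a KNIME plug-in, but the kind of FreeSurfer table import it automates can be sketched in a few lines of Python. A minimal sketch, assuming a standard FreeSurfer `aseg.stats` file; the column layout below is the usual FreeSurfer convention, not something taken from this paper:

```python
import pandas as pd

def read_aseg_stats(path):
    """Parse a FreeSurfer aseg.stats file into a DataFrame.

    aseg.stats is whitespace-separated; lines starting with '#'
    are header/metadata and are skipped here.
    """
    cols = ["Index", "SegId", "NVoxels", "Volume_mm3", "StructName",
            "normMean", "normStdDev", "normMin", "normMax", "normRange"]
    return pd.read_csv(path, comment="#", sep=r"\s+", names=cols)

# Usage sketch: structure volumes indexed by anatomical name
# stats = read_aseg_stats("subject01/stats/aseg.stats")
# print(stats.set_index("StructName")["Volume_mm3"].head())
```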

Relevance: 100.00%

Abstract:

An important application of Big Data analytics is the real-time analysis of streaming data. Streaming data imposes unique challenges on data mining algorithms: concept drift, the need to analyse data on the fly because streams are unbounded, and the need for scalable algorithms to cope with potentially high data throughput. Fast, real-time classification algorithms that adapt to concept drift exist; however, most approaches are not naturally parallel and are thus limited in their scalability. This paper presents work on the Micro-Cluster Nearest Neighbour (MC-NN) classifier. MC-NN builds on an adaptive statistical data summary based on Micro-Clusters. It is very fast and adaptive to concept drift whilst maintaining the parallel properties of the base KNN classifier, and it is competitive with existing data stream classifiers in terms of accuracy and speed.
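
For illustration, a minimal sketch of a nearest-micro-cluster classifier in the spirit of MC-NN: each micro-cluster keeps incremental statistics (count and linear sum), misclassified instances spawn new clusters, and correctly classified ones update the matching summary. The actual MC-NN split and error-threshold mechanics are omitted, and all names are illustrative:

```python
import numpy as np

class MicroCluster:
    def __init__(self, x, label):
        self.n, self.ls, self.label = 1, np.array(x, float), label
    def centroid(self):
        return self.ls / self.n
    def absorb(self, x):            # incremental update: O(d) per instance
        self.n += 1
        self.ls += x

class NearestMicroCluster:
    """Classify by the nearest micro-cluster centroid; correct clusters
    absorb the instance, so the summary adapts as the stream evolves."""
    def __init__(self):
        self.clusters = []
    def predict(self, x):
        c = min(self.clusters, key=lambda c: np.linalg.norm(c.centroid() - x))
        return c, c.label
    def learn(self, x, y):
        x = np.asarray(x, float)
        if not self.clusters:
            self.clusters.append(MicroCluster(x, y))
            return
        c, pred = self.predict(x)
        if pred == y:
            c.absorb(x)                                # reinforce the summary
        else:
            self.clusters.append(MicroCluster(x, y))   # new local summary
```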

Relevance: 100.00%

Abstract:

Most multidimensional projection techniques rely on distance (dissimilarity) information between data instances to embed high-dimensional data into a visual space. When data are endowed with Cartesian coordinates, an extra computational effort is necessary to compute the needed distances, making multidimensional projection prohibitive in applications dealing with interactivity and massive data. The novel multidimensional projection technique proposed in this work, called Part-Linear Multidimensional Projection (PLMP), has been tailored to handle multivariate data represented in Cartesian high-dimensional spaces, requiring only distance information between pairs of representative samples. This characteristic renders PLMP faster than previous methods when processing large data sets while still being competitive in terms of precision. Moreover, knowing the range of variation for data instances in the high-dimensional space, we can make PLMP a truly streaming data projection technique, a trait absent in previous methods.
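
The trait that makes PLMP fast is its second stage: once a small set of representative samples has been positioned in the visual space by any conventional projection, the remaining points are mapped by a linear transformation fitted to those samples. A sketch of that linear stage (the sample-positioning step is left as a placeholder):

```python
import numpy as np

def fit_linear_map(X_samples, Y_samples):
    """Least-squares linear map P (d x 2) with X_samples @ P ~= Y_samples."""
    P, *_ = np.linalg.lstsq(X_samples, Y_samples, rcond=None)
    return P

def project(X, P):
    """Project all instances with a single matrix product."""
    return X @ P

# Usage sketch: position k representative samples with any projection
# (force-based, MDS, ...), then map the whole data set linearly.
# rng = np.random.default_rng(0)
# X = rng.normal(size=(10000, 50))
# idx = rng.choice(len(X), size=200, replace=False)
# Y_samples = some_projection(X[idx])   # placeholder for the sample stage
# Y = project(X, fit_linear_map(X[idx], Y_samples))
```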

Relevance: 100.00%

Abstract:

Data mining is a relatively new field of research whose objective is to acquire knowledge from large amounts of data. In medical and health care areas, due to regulations and the availability of computers, large amounts of data are becoming available [27]. Practitioners are expected to use all these data in their work, yet such volumes cannot be processed by humans in a short time to make diagnoses, prognoses and treatment schedules. A major objective of this thesis is to evaluate data mining tools in medical and health care applications in order to develop a tool that can help make reasonably accurate decisions. The goal is to find patterns among patients who contracted pneumonia by clustering laboratory values recorded daily; such patterns can then be generalised to patients not yet diagnosed with the disease whose laboratory values show the same trend as those of pneumonia patients. Ten tables were extracted from a large database of a hospital in Jena for this work. In the ICU (intensive care unit), COPRA, a patient management system, is used. All tables and data are stored in a German-language database.
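
As a hedged illustration of this kind of analysis, and not the thesis's actual pipeline, one might cluster per-patient summaries of daily laboratory series; the lab variable, the features and the choice of k-means below are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def trend_features(series):
    """Summarise one patient's daily lab values: mean, spread, slope."""
    t = np.arange(len(series))
    slope = np.polyfit(t, series, 1)[0]
    return [np.mean(series), np.std(series), slope]

# Hypothetical data: one daily CRP series per patient.
patients = [np.array([12, 30, 55, 80, 70]),    # rising then falling
            np.array([5, 6, 5, 7, 6]),         # flat
            np.array([10, 25, 50, 90, 120])]   # steadily rising

X = StandardScaler().fit_transform([trend_features(s) for s in patients])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # patients grouped by the similarity of their lab trend
```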

Relevance: 100.00%

Abstract:

The main purpose of this thesis project is the prediction of symptom severity and cause from test-battery data of Parkinson's disease patients, based on data mining. The data were collected via a test battery on a hand-held computer. We use the Chi-Square method to check which variables are important and which are not, then apply different data mining techniques to the normalised data and compare their results. The implementation is in WEKA. The methods used are Naïve Bayes, CART and KNN. We use Bland-Altman plots and Spearman's correlation to check the final results and predictions: the Bland-Altman plot shows how well the predicted values agree with the actual ones, and Spearman's correlation shows how strong the relationship is. Based on the results and analysis, all three methods give nearly the same results, but CART (J48 decision tree) gives good results, with under- and over-predicted values lying between -2 and +2. The correlation between the actual and predicted values is 0.794 for CART. Cause gives a better percentage classification result than disability because it uses only two classes.
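
The thesis works in WEKA; an equivalent pipeline can be sketched in Python terms. This is a sketch under the assumption of numeric test-battery features (scaled to be non-negative for the chi-square test); the parameter choices are illustrative:

```python
from scipy.stats import spearmanr
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.tree import DecisionTreeClassifier

def evaluate(X, y):
    """Chi-square feature ranking followed by the three classifiers
    named in the thesis (Naive Bayes, a CART-style tree, KNN)."""
    results = {}
    for name, clf in [("NaiveBayes", GaussianNB()),
                      ("CART", DecisionTreeClassifier(random_state=0)),
                      ("KNN", KNeighborsClassifier(n_neighbors=5))]:
        pipe = make_pipeline(MinMaxScaler(),           # chi2 needs >= 0
                             SelectKBest(chi2, k=10),  # keep top-10 features
                             clf)
        pred = cross_val_predict(pipe, X, y, cv=10)
        rho, _ = spearmanr(y, pred)        # actual-vs-predicted correlation
        results[name] = rho
    return results
```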

Relevance: 100.00%

Abstract:

The opening of the Brazilian electricity market and the competition between companies in the energy sector have increased the utilities' search for useful information and for tools that assist decision-making activities. An important source of knowledge for these utilities is the time series of energy demand. Identifying behaviour patterns and describing events become important for planning, seeking improvements in service quality and financial benefits. This dissertation presents a methodology based on time series mining and representation tools to extract knowledge relating electricity demand series at several interconnected substations of an electric utility. The method exploits relationships of duration, coincidence and partial order of events in multi-dimensional time series. The knowledge is represented with the Time Series Knowledge Representation (TSKR) language proposed by Mörchen (2005). We conducted a case study using energy demand time series from 8 substations interconnected by a ring system that feeds the metropolitan area of Goiânia-GO, provided by CELG (Companhia Energética de Goiás), responsible for power distribution in the state of Goiás (Brazil). Using the proposed methodology, three levels of knowledge describing the behaviour of the studied system were extracted, clearly representing the system dynamics and providing a tool to assist planning activities.
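
TSKR expresses duration, coincidence and partial order of interval events. A toy sketch of the coincidence level for two demand series: threshold each series into HIGH intervals and intersect them. The threshold rule and the numbers are invented, not CELG data:

```python
def high_intervals(series, threshold):
    """Return (start, end) index intervals where demand exceeds threshold."""
    intervals, start = [], None
    for i, v in enumerate(series):
        if v > threshold and start is None:
            start = i
        elif v <= threshold and start is not None:
            intervals.append((start, i))
            start = None
    if start is not None:
        intervals.append((start, len(series)))
    return intervals

def coincidences(a, b):
    """Overlaps of two interval lists: TSKR-style coincidence ('chords')."""
    return [(max(s1, s2), min(e1, e2))
            for s1, e1 in a for s2, e2 in b
            if max(s1, s2) < min(e1, e2)]

# Usage: hourly demand at two substations (hypothetical numbers)
sub_a = [3, 8, 9, 9, 4, 2, 7, 8]
sub_b = [2, 2, 8, 9, 9, 3, 2, 2]
print(coincidences(high_intervals(sub_a, 5), high_intervals(sub_b, 5)))
# -> [(2, 4)]: both substations were simultaneously in HIGH demand
```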

Relevance: 100.00%

Abstract:

The analysis of large amounts of data is better performed by humans when the data are represented in a graphical format. A research area called Visual Data Mining is therefore being developed, endeavouring to combine the number-crunching power of computers, used to prepare data for visualisation, with the ability of humans to interpret data presented graphically. This work presents the results of applying a visual data mining tool, called FastMapDB, to detect the behavioural pattern exhibited by a dataset of clinical information about hemoglobinopathies known as thalassemia. FastMapDB takes tabular data stored in a relational database, such as dates, numbers and text, and, by considering them as points in a multidimensional space, maps them to a three-dimensional space. The intuitive three-dimensional representation of objects enables a data analyst to see how the characteristics of abnormal forms of hemoglobin behave, highlighting the differences from data of a group without alteration.
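
FastMapDB's mapping builds on the classic FastMap idea: choose two far-apart pivot objects per axis and place every object with the cosine law, using only pairwise distances. A minimal one-axis sketch with a simplified pivot heuristic:

```python
def fastmap_axis(objects, dist):
    """One FastMap coordinate: project objects on the line through
    two far-apart pivots, using only pairwise distances."""
    a = objects[0]
    b = max(objects, key=lambda o: dist(a, o))   # far from a
    a = max(objects, key=lambda o: dist(b, o))   # far from b: pivot pair
    d_ab = dist(a, b)
    # Cosine law: x_i = (d(a,i)^2 + d(a,b)^2 - d(b,i)^2) / (2 d(a,b))
    return [(dist(a, o) ** 2 + d_ab ** 2 - dist(b, o) ** 2) / (2 * d_ab)
            for o in objects]

# Repeating this on residual distances for three axes yields the kind of
# 3-D embedding that tools like FastMapDB display.
```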

Relevance: 100.00%

Abstract:

Aiming to ensure greater reliability and consistency of the data stored in a database, the data cleaning stage is set early in the process of Knowledge Discovery in Databases (KDD) and is responsible for eliminating problems and adjusting the data for the later stages, especially data mining. Such problems occur at the instance and schema levels, namely missing values, null values, duplicate tuples and values outside the domain, among others. Several algorithms have been developed to perform the cleaning step in databases; some of them work specifically with the phonetics of words, since a word can be written in different ways. Within this perspective, this work presents as its original contribution a multithreading-based optimisation of an algorithm for detecting duplicate tuples in databases through phonetics, requiring neither training data nor support for a specific language. © 2011 IEEE.
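
The abstract does not spell out the phonetic algorithm itself, so as a hedged illustration of the general approach, the sketch below computes Soundex-style keys in a thread pool and flags key collisions as duplicate candidates; the key function and pool size are assumptions:

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def phonetic_key(word):
    """Tiny Soundex-like key: first letter + consonant class codes."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    key, last = word[0], codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != last:
            key += code
        last = code
    return (key + "000")[:4]

def duplicate_candidates(tuples, workers=4):
    """Compute keys in a thread pool, then group tuples sharing a key.
    (In CPython the GIL limits speed-up for pure-Python key functions;
    this only illustrates the multithreaded structure.)"""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        keys = list(pool.map(phonetic_key, tuples))
    groups = defaultdict(list)
    for t, k in zip(tuples, keys):
        groups[k].append(t)
    return [g for g in groups.values() if len(g) > 1]

print(duplicate_candidates(["Meyer", "Maier", "Smith", "Smyth"]))
# -> [['Meyer', 'Maier'], ['Smith', 'Smyth']]
```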

Relevance: 100.00%

Abstract:

In soil surveys, several sampling systems can be used to define the most representative sites for sample collection and description of soil profiles. In recent years, the conditioned Latin hypercube sampling system has gained prominence for soil surveys. In Brazil, most soil maps are at small scales and in paper format, which hinders their refinement. The objectives of this work are: (i) to compare two sampling systems based on the conditioned Latin hypercube for mapping soil classes and soil properties; (ii) to retrieve information from a detailed-scale soil map of a pilot watershed for its refinement, comparing two data mining tools and validating the new soil map; and (iii) to create and validate a soil map of a much larger, similar area by extrapolating information extracted from the existing soil map. Two sampling designs were created, one by the conditioned Latin hypercube and one by the cost-constrained conditioned Latin hypercube. At each prospection site, the soil was classified and the A horizon thickness measured. Maps were generated and validated for each sampling system, comparing their efficiency. The conditioned Latin hypercube captured greater variability of soils and properties than the cost-constrained variant, although it entailed greater difficulty in field work; the cost-constrained conditioned Latin hypercube presents great potential for use in soil surveys, especially in areas of difficult access. From an existing detailed-scale soil map of a pilot watershed, topographical information for each soil class was extracted from a Digital Elevation Model and its derivatives using two data mining tools, and maps were generated with each tool. The more accurate tool was used to extrapolate soil information to a much larger, similar area, and the generated map was validated. It was possible to retrieve the existing soil map information and apply it to a larger area with similar soil-forming factors at much lower financial cost. The KnowledgeMiner data mining tool and ArcSIE, used to create the soil map, presented the better results and enabled the use of an existing soil map to extract soil information and apply it to similar, larger areas at reduced cost, which is especially important in developing countries with limited financial resources for such activities, such as Brazil.
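
Conditioned Latin hypercube sampling selects n sites whose covariate values jointly cover one quantile stratum per covariate per sample. The published cLHS optimises this with simulated annealing; the greedy sketch below only illustrates the stratification idea, and the covariates named in the usage comment are invented:

```python
import numpy as np

def greedy_clhs(covariates, n):
    """Pick up to n rows so each covariate's n quantile strata are each
    hit about once (greedy stand-in for the annealing-based cLHS)."""
    m = covariates.shape[1]
    # Stratum index (0 .. n-1) of every candidate site, per covariate
    ranks = covariates.argsort(0).argsort(0)
    strata = (ranks * n) // len(covariates)
    filled = [set() for _ in range(m)]
    chosen = []
    for i in np.random.default_rng(0).permutation(len(covariates)):
        # Accept a site only if it fills a still-empty stratum everywhere
        if all(strata[i, j] not in filled[j] for j in range(m)):
            chosen.append(i)
            for j in range(m):
                filled[j].add(strata[i, j])
        if len(chosen) == n:
            break
    return chosen

# Usage sketch: covariates could be elevation, slope, NDVI per site
# sites = greedy_clhs(np.column_stack([elev, slope, ndvi]), n=30)
```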

Relevance: 100.00%

Abstract:

Concept drift, which refers to learning problems that are non-stationary over time, has increasing importance in machine learning and data mining. Many concept drift applications require fast response, which means an algorithm must always be (re)trained with the latest available data. But the process of data labeling is usually expensive and/or time-consuming compared to the acquisition of unlabeled data, so usually only a small fraction of the incoming data can be effectively labeled. Semi-supervised learning methods may help in this scenario, as they use both labeled and unlabeled data in the training process. However, most of them assume that the data are static. Semi-supervised learning under concept drift therefore remains an open and challenging task in machine learning. Recently, a particle competition and cooperation approach was developed to realise graph-based semi-supervised learning from static data. We have extended that approach to handle data streams and concept drift. The result is a passive, single-classifier algorithm that adapts naturally to concept changes without any explicit drift detection mechanism. It has built-in mechanisms that provide a natural way of learning from new data, gradually "forgetting" older knowledge as older data items are no longer useful for classifying newer ones. The proposed algorithm is applied to the KDD Cup 1999 network intrusion data, showing its effectiveness.
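
The particle competition and cooperation mechanism is too involved for a snippet, but the stream setting it operates in can be illustrated with a simpler stand-in: graph-based label propagation over a sliding window, where evicting old items gives the same gradual forgetting. This is a related baseline, not the authors' algorithm:

```python
from collections import deque

import numpy as np
from sklearn.semi_supervised import LabelPropagation

class WindowedGraphSSL:
    """Graph-based semi-supervised stream learner over a sliding window.
    Unlabeled items get label -1 (scikit-learn's convention); evicting
    old items from the window 'forgets' outdated concepts."""
    def __init__(self, window=500):
        self.X = deque(maxlen=window)
        self.y = deque(maxlen=window)
    def partial_fit(self, x, label=-1):
        self.X.append(x)
        self.y.append(label)
    def predict(self, x):
        # Refit on the current window: a kNN graph over recent items
        model = LabelPropagation(kernel="knn", n_neighbors=7)
        model.fit(np.array(self.X), np.array(self.y))
        return model.predict(np.asarray(x).reshape(1, -1))[0]
```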

Relevance: 100.00%

Abstract:

People store data about the most varied daily events in order to know themselves, detect behaviours, predict events and gain solid knowledge for making decisions. The growth of such events results in a large amount of collected data, which needs to be processed to obtain valuable information. These data carry a temporal component from the collection process (daily, monthly or annually) that must be considered during exploration, and exploration based on the temporal component can be uni-scale or multi-scale. Data mining aims to extract knowledge from large databases and, when combined with visualisation tools, can be more effective at revealing information. Visualisation tools display data and let users manipulate and change them through interaction features oriented to their goals; users can also combine tools and coordinate visualisation steps among tools through messages. This monograph aims to add interactivity to the AdaptaVis information visualisation (InfoVis) architecture model developed by Shimabukuro (2004), extending its exploration capabilities and providing a consistent basis for users to handle data and extract information.
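
The message-based coordination described here is essentially a publish/subscribe pattern between visualisation tools. A minimal sketch of such an interaction layer; class, topic and message names are invented for illustration:

```python
class MessageBus:
    """Tools subscribe to topics; one tool's interaction is broadcast
    to the others, coordinating their visualisation steps."""
    def __init__(self):
        self.subscribers = {}
    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)
    def publish(self, topic, payload):
        for cb in self.subscribers.get(topic, []):
            cb(payload)

# Usage: brushing a time range in one view updates the other views.
bus = MessageBus()
bus.subscribe("select_range", lambda r: print("detail view shows", r))
bus.subscribe("select_range", lambda r: print("overview highlights", r))
bus.publish("select_range", ("2004-01", "2004-03"))
```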

Relevance: 100.00%

Abstract:

In [1], the authors proposed a framework for automated clustering and visualization of biological data sets named AUTO-HDS. This letter is intended to complement that framework by showing that it is possible to eliminate a user-defined parameter in such a way that the clustering stage can be implemented more accurately and with reduced computational complexity.

Relevance: 100.00%

Abstract:

The attributes describing a data set may often be arranged in meaningful subsets, each of which corresponds to a different aspect of the data. An unsupervised algorithm (SCAD) that simultaneously performs fuzzy clustering and aspect weighting was proposed in the literature. However, SCAD may fail and halt under certain conditions. To fix this problem, its steps are modified and then reordered to reduce the number of parameters the user must set. In this paper we prove that each step of the resulting algorithm, named ASCAD, globally minimizes its cost function with respect to the argument being optimized. The asymptotic analysis of ASCAD leads to a time complexity that is the same as that of fuzzy c-means. A hard version of the algorithm and a novel validity criterion that considers aspect weights to estimate the number of clusters are also described. The proposed method is assessed on several artificial and real data sets.
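
For reference, the fuzzy c-means update that ASCAD's complexity is compared against can be written compactly; this is plain FCM, without the per-aspect weights that SCAD/ASCAD add:

```python
import numpy as np

def fcm_step(X, centers, m=2.0):
    """One fuzzy c-means iteration: memberships, then centers.
    ASCAD adds per-aspect weights on top of updates of this form."""
    # Distances of every point to every center, (n x c)
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    u = 1.0 / (d ** (2 / (m - 1)))
    u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1
    um = u ** m
    centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return u, centers
```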

Relevance: 100.00%

Abstract:

In multi-label classification, examples can be associated with multiple labels simultaneously. The task of learning from multi-label data can be addressed by methods that transform the multi-label classification problem into several single-label classification problems. The binary relevance approach is one of these methods, where the multi-label learning task is decomposed into several independent binary classification problems, one for each label in the set of labels, and the final labels for each example are determined by aggregating the predictions from all binary classifiers. However, this approach fails to consider any dependency among the labels. Aiming to accurately predict label combinations, in this paper we propose a simple approach that enables the binary classifiers to discover existing label dependency by themselves. An experimental study using decision trees, a kernel method as well as Naive Bayes as base-learning techniques shows the potential of the proposed approach to improve the multi-label classification performance.
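
One generic way to let binary classifiers discover label dependency, in the spirit described here, is two-stage stacking: train plain binary relevance first, then retrain each label's classifier with the other labels' first-stage predictions appended to the features. The sketch below shows that generic scheme and is not necessarily the paper's exact variant:

```python
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier

def fit_stacked_br(X, Y, base=DecisionTreeClassifier(random_state=0)):
    """Binary relevance with label stacking: the stage-2 classifier for
    label j sees X plus the stage-1 predictions of all other labels."""
    stage1 = [clone(base).fit(X, Y[:, j]) for j in range(Y.shape[1])]
    P = np.column_stack([c.predict(X) for c in stage1])
    stage2 = []
    for j in range(Y.shape[1]):
        others = np.delete(P, j, axis=1)       # other labels' predictions
        stage2.append(clone(base).fit(np.hstack([X, others]), Y[:, j]))
    return stage1, stage2

def predict_stacked_br(X, stage1, stage2):
    P = np.column_stack([c.predict(X) for c in stage1])
    return np.column_stack([
        c.predict(np.hstack([X, np.delete(P, j, axis=1)]))
        for j, c in enumerate(stage2)])
```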

Relevance: 100.00%

Abstract:

This work arises from the objective of identifying statistical tools to investigate, under several aspects, the workflow of a Pathological Anatomy laboratory. The starting point of the study is the working environment of ATHENA, the management software used in Pathological Anatomy, developed by NoemaLife S.p.A., a company specialised in healthcare informatics. Starting from this application, the laboratory workflow was first formalised (Chapter 2), in its characteristics and possible variants, identifying the main operations through a series of "phases". These phases, together with the additional information associated with them, remain at the centre of the study throughout, under several points of view. For completeness, the analysis was developed in two scenarios that account for different aspects of the available information. The first scenario considers the sequences of phases in their chronological order, including any repetitions or cycles of earlier phases before conclusion. By processing the data into specific formats, an initial graphical Workflow Mining investigation was carried out (Chapter 3) with the help of EMiT, a software tool that, given a set of process logs, graphically renders the workflow they represent. This investigation already makes it possible to assess how completely an application is used with respect to its potential. Subsequently, the same phases were processed through a specific adaptation of a common global alignment algorithm, the Needleman-Wunsch algorithm (Chapter 4). Alignment techniques applied to process sequences can identify, within a specific encoding of the phases, similarities between clinical cases. The Needleman-Wunsch algorithm identifies the matches and mismatches between two strings of characters, assigning scores that lead to an assessment of their similarity; it was suitably modified so as to recognise cycles and repetitions and penalise them differently from missing phases. Still from an alignment perspective, the heuristic algorithm Clustal is used: starting from pairwise comparisons between sequences, it builds a dendrogram that graphically represents the aggregation of cases as a function of their similarity. Thanks to its tree-like structure, the dendrogram intuitively shows how the similarity of a pattern of cases evolves. The second scenario (Chapter 5) adds temporal information to the sequences, namely the execution instant of each phase: from a domain based on sequences of phases we thus move to a time-series scenario. Times are indeed essential data for assessing the performance of a laboratory and for checking conformity with the required standards. Cases were compared in several ways, so as to establish the distance between all pairs under different aspects: the sequences, represented in a specific reference system, were compared using the Euclidean distance and Dynamic Time Warping, which express their temporal and shape discrepancies, and hence their process discrepancies, respectively.

In light of the results and their comparison, initial assessments of the relevance of these distances and of the information deducible from them are presented already at this stage. Chapter 6 addresses the search for correlations between characteristic elements of the process and its performance. Various factors, such as the procedures used, the users involved and further specificities, directly or indirectly determine the quality of the service provided. The previously computed distances are therefore subjected to clustering, a technique that, starting from a heterogeneous set of elements, identifies similar families or groups. The algorithm used is UPGMA, commonly applied in clustering because, by using a weighted-average logic, it yields pertinent clusterings in very different fields, from biology to industry. The resulting clusters can then finally be subjected to a search for useful correlations, which are identified and interpreted with respect to the laboratory's management activity. This work therefore proposes experimental models tailored to the case at hand but ideally extensible, in whole or in part, to all processes with similar characteristics.
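
Since the thesis hinges on adapting Needleman-Wunsch to phase sequences, a compact reference implementation of the standard algorithm may help; the scoring below is the textbook version, without the thesis's custom penalties for cycles versus missing phases:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score of two sequences via dynamic programming.
    For process mining, a and b would be encoded phase sequences."""
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)
    return score[n][m]

# Two clinical cases encoded as phase strings (hypothetical encoding):
print(needleman_wunsch("ACCEPTED", "ACEPTED"))  # high score = similar flows
```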