818 results for Clustering algorithm
Abstract:
An overview of known spatial clustering algorithms. The space of interest can be the two-dimensional abstraction of the surface of the earth, a man-made space such as the layout of a VLSI design, a volume containing a model of the human brain, or another 3D space representing the arrangement of chains of protein molecules. The data consist of geometric information and can be either discrete or continuous. The explicit location and extension of spatial objects define implicit relations of spatial neighborhood (such as topological, distance and direction relations), which are used by spatial data mining algorithms. Therefore, spatial data mining algorithms are required for spatial characterization and spatial trend analysis. Spatial data mining, or knowledge discovery in spatial databases, differs from regular data mining in ways analogous to the differences between non-spatial and spatial data. The attributes of a spatial object stored in a database may be affected by the attributes of that object's spatial neighbors. In addition, spatial location, and implicit information about the location of an object, may be exactly the information that can be extracted through spatial data mining.
Abstract:
Cerebral glioma is the most prevalent primary brain tumor; gliomas are broadly classified into low and high grades according to the degree of malignancy. High grade gliomas are highly malignant, carry a poor prognosis, and patients typically survive less than eighteen months after diagnosis. Low grade gliomas are slow growing, less malignant and respond better to therapy. To date, histological grading is used as the standard technique for diagnosis, treatment planning and survival prediction. The main objective of this thesis is to propose novel methods for automatic extraction of low and high grade glioma and other brain tissues, grade detection techniques for glioma using conventional magnetic resonance imaging (MRI) modalities, and 3D modelling of glioma from segmented tumor slices in order to assess the growth rate of tumors. Two new methods are developed for extracting tumor regions, of which the second, named the Adaptive Gray level Algebraic set Segmentation Algorithm (AGASA), can also extract white matter and grey matter from T1 FLAIR and T2 weighted images. The methods were validated against manually delineated ground truth images and showed promising results. The developed methods were compared with the widely used fuzzy c-means clustering technique, and the robustness of the algorithms to noise was checked at different noise levels. Image texture can provide significant information on the (ab)normality of tissue, and this thesis extends this idea to tumour texture grading and detection. Based on thresholds of discriminant first order and gray level co-occurrence matrix based second order statistical features, three feature sets were formulated, and a decision system was developed for grade detection of glioma from the conventional T2 weighted MRI modality. Quantitative performance analysis using the ROC curve showed 99.03% accuracy for distinguishing between advanced (aggressive) and early stage (non-aggressive) malignant glioma. The developed brain texture analysis techniques can improve the physician's ability to detect and analyse pathologies, leading to more reliable diagnosis and treatment of disease. The segmented tumors were also used for volumetric modelling, which can provide an estimate of tumor growth rate; this can be used for assessing response to therapy and patient prognosis.
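The grading pipeline above relies on first order and gray level co-occurrence matrix (GLCM) based second order texture statistics. As a rough illustration only, the sketch below computes a single-offset GLCM and a few common second-order features in NumPy; the quantization level, offset and feature choices are assumptions and do not reproduce the thesis's exact feature sets.

```python
import numpy as np

def glcm(image, levels=8, dx=1, dy=0):
    """Gray level co-occurrence matrix for one offset (dx, dy), normalized to probabilities."""
    # Quantize the image to `levels` gray levels.
    q = np.floor(image.astype(float) / image.max() * (levels - 1)).astype(int)
    mat = np.zeros((levels, levels))
    rows, cols = q.shape
    for i in range(rows - dy):
        for j in range(cols - dx):
            mat[q[i, j], q[i + dy, j + dx]] += 1
    return mat / mat.sum()

def texture_features(p):
    """A few common second-order statistics computed from a normalized GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return {"contrast": contrast, "energy": energy, "homogeneity": homogeneity}

# Example: features for a synthetic 2D "slice".
rng = np.random.default_rng(0)
slice_img = rng.integers(0, 256, size=(64, 64))
print(texture_features(glcm(slice_img)))
```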
Abstract:
Decision trees are very powerful tools for classification in data mining tasks that involve different types of attributes. When handling numeric data sets, the attributes are usually first converted to categorical types and then classified using information gain concepts. Information gain is a popular and useful measure that indicates whether splitting on a given attribute yields any benefit in terms of information content. However, this process is computationally intensive for large data sets, and popular decision tree algorithms such as ID3 cannot handle numeric data sets directly. This paper proposes statistical variance as an alternative to information gain, with the statistical mean as the split point, for completely numerical data sets. The new algorithm is shown to be competitive with its information gain counterpart C4.5 and with many existing decision tree algorithms on the standard UCI benchmark datasets, verified using the ANOVA test. The specific advantages of the proposed algorithm are that it avoids the computational overhead of information gain computation for large data sets with many attributes, and it avoids the time-consuming conversion of large numeric data sets to categorical data. In summary, large numeric datasets can be submitted directly to this algorithm without any attribute mapping or information gain computation. The approach also blends the two closely related fields of statistics and data mining.
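As a rough illustration of the proposed idea, the sketch below scores a numeric attribute by the reduction in label variance obtained when splitting at the attribute mean; the exact formulation used in the paper may differ, so treat this as an assumed variant.

```python
import numpy as np

def variance_reduction(values, labels):
    """Split a numeric attribute at its mean and measure how much the label
    variance drops, as a cheap alternative to information gain.
    Labels are assumed to be numerically encoded classes."""
    values = np.asarray(values, dtype=float)
    labels = np.asarray(labels, dtype=float)
    split = values.mean()                          # mean as the split point
    left, right = labels[values <= split], labels[values > split]
    if len(left) == 0 or len(right) == 0:
        return 0.0
    weighted = (len(left) * left.var() + len(right) * right.var()) / len(labels)
    return labels.var() - weighted                 # larger means a better split

# Pick the attribute with the largest variance reduction.
X = np.array([[2.1, 30.0], [2.3, 80.0], [7.9, 35.0], [8.2, 90.0]])
y = np.array([0, 0, 1, 1])
best = max(range(X.shape[1]), key=lambda a: variance_reduction(X[:, a], y))
print("best attribute:", best)
```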
Abstract:
The aim of this study is to show the importance of two classification techniques, viz. decision trees and clustering, in the prediction of learning disabilities (LD) in school-age children. LDs affect about 10 percent of all children enrolled in schools. The problems of children with specific learning disabilities have been a cause of concern to parents and teachers for some time. Decision trees and clustering are powerful and popular tools used for classification and prediction in data mining. Different rules extracted from the decision tree are used for prediction of learning disabilities. Clustering is the assignment of a set of observations into subsets, called clusters, which is useful for finding the different signs and symptoms (attributes) present in LD-affected children. In this paper, the J48 algorithm is used for constructing the decision tree and the K-means algorithm is used for creating the clusters. By applying these classification techniques, LD in any child can be identified.
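A minimal sketch of this two-step idea using scikit-learn stand-ins (an entropy-based DecisionTreeClassifier in place of Weka's J48, plus KMeans); the symptom attributes and labels below are invented purely for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.cluster import KMeans

# Hypothetical binary sign/symptom attributes for a handful of children.
X = np.array([[1, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 0, 0, 0],
              [0, 1, 0, 0],
              [1, 1, 1, 1],
              [0, 0, 1, 0]])
y = np.array([1, 1, 0, 0, 1, 0])          # 1 = LD, 0 = not LD
features = ["reading_difficulty", "spelling_errors", "attention_issue", "writing_difficulty"]

# Decision tree (entropy criterion, in the spirit of J48/C4.5): rules for prediction.
tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)
print(export_text(tree, feature_names=features))

# K-means: group children by their symptom patterns.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster assignments:", clusters)
```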
Abstract:
This work proposes a parallel genetic algorithm for compressing scanned document images. A fitness function based on the Hausdorff distance is designed, which also determines the terminating condition. The algorithm helps to locate the text lines. A higher compression ratio is achieved with less distortion.
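A minimal sketch of a Hausdorff-distance-based fitness term between two point sets (for instance, black-pixel coordinates of an original and a reconstructed text line), using SciPy's directed_hausdorff; mapping the distance to a fitness via 1/(1+d) is an assumption for illustration, not the paper's exact fitness function.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_fitness(original_pts, reconstructed_pts):
    """Symmetric Hausdorff distance between two 2D point sets, mapped to a
    fitness value in (0, 1]; a smaller distance means a higher fitness."""
    d_ab = directed_hausdorff(original_pts, reconstructed_pts)[0]
    d_ba = directed_hausdorff(reconstructed_pts, original_pts)[0]
    d = max(d_ab, d_ba)
    return 1.0 / (1.0 + d)

# Example: black-pixel coordinates of an "original" and a slightly shifted copy.
orig = np.array([[0, 0], [0, 1], [1, 0], [5, 5]], dtype=float)
recon = orig + np.array([0.5, 0.0])
print("fitness:", hausdorff_fitness(orig, recon))
```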
Abstract:
A spectral angle based feature extraction method, Spectral Clustering Independent Component Analysis (SC-ICA), is proposed in this work to improve brain tissue classification from Magnetic Resonance Images (MRI). SC-ICA gives equal priority to global and local features, thereby addressing the inefficiency of conventional approaches in abnormal tissue extraction. First, the input multispectral MRI is divided into different clusters by a spectral distance based clustering. Then, Independent Component Analysis (ICA) is applied to the clustered data, in conjunction with Support Vector Machines (SVM), for brain tissue analysis. Normal and abnormal datasets, consisting of real and synthetic T1-weighted, T2-weighted and proton density/fluid-attenuated inversion recovery images, were used to evaluate the performance of the new method. Comparative analysis with ICA based SVM and other conventional classifiers established the stability and efficiency of SC-ICA based classification, especially in the reproduction of small abnormalities. Analysis of clinical abnormal cases demonstrated this through the highest Tanimoto index/accuracy values, 0.75/98.8%, observed against ICA based SVM results of 0.17/96.1%, for reproduced lesions. The experimental results recommend the proposed method as a promising approach in clinical and pathological studies of brain diseases.
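A minimal sketch of the cluster-then-ICA-then-SVM pipeline with scikit-learn components; plain K-means on voxel spectra stands in for the spectral-angle clustering, and the data and parameters are synthetic and illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import FastICA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical multispectral MRI voxels: 300 voxels x 3 channels (T1, T2, PD/FLAIR).
voxels = rng.normal(size=(300, 3))
labels = (voxels[:, 1] > 0.5).astype(int)        # dummy "abnormal tissue" labels

# Step 1: divide voxels into spectral clusters (stand-in for spectral-angle clustering).
cluster_ids = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(voxels)

# Steps 2-3: within each cluster, apply ICA and train an SVM on the unmixed sources.
for c in np.unique(cluster_ids):
    idx = cluster_ids == c
    sources = FastICA(n_components=2, random_state=0).fit_transform(voxels[idx])
    if len(np.unique(labels[idx])) < 2:
        continue                                  # the SVM needs both classes present
    svm = SVC(kernel="rbf").fit(sources, labels[idx])
    print(f"cluster {c}: training accuracy {svm.score(sources, labels[idx]):.2f}")
```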
Abstract:
Reinforcement Learning (RL) refers to a class of learning algorithms in which the learning system learns which action to take in different situations by using a scalar evaluation received from the environment after performing an action. RL has been successfully applied to many multi-stage decision making problems (MDPs), where in each stage the learning system decides which action to take. The Economic Dispatch (ED) problem is an important scheduling problem in power systems: it decides the amount of generation to be allocated to each generating unit so that the total cost of generation is minimized without violating system constraints. In this paper we formulate the economic dispatch problem as a multi-stage decision making problem and develop an RL based algorithm to solve it. The performance of our algorithm is compared with other recent methods. The main advantage of our method is that it can learn the schedule for all possible demands simultaneously.
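A toy tabular Q-learning sketch of dispatch as a multi-stage problem (stage = generating unit, state = remaining demand, action = discrete output level). The cost curves, discretization and learning parameters are assumptions, not the paper's formulation, but the sketch shows how one table can learn schedules for all demand levels at once.

```python
import numpy as np

rng = np.random.default_rng(0)

levels = np.arange(0, 101, 10)            # allowed output per unit, in 10 MW steps
demands = np.arange(0, 201, 10)           # possible remaining-demand values
cost = [lambda p: 0.010 * p**2 + 2.0 * p,   # unit 1 cost curve (assumed)
        lambda p: 0.015 * p**2 + 1.5 * p]   # unit 2 cost curve (assumed)

# Q[stage, remaining-demand index, action index], interpreted as expected cost-to-go.
Q = np.zeros((2, len(demands), len(levels)))
alpha, eps, penalty = 0.1, 0.2, 1e4

for episode in range(20000):
    remaining = rng.choice(demands)                       # learn all demands at once
    for stage in range(2):
        s = int(remaining // 10)
        a = rng.integers(len(levels)) if rng.random() < eps else int(np.argmin(Q[stage, s]))
        p = levels[a]
        step_cost = cost[stage](p)
        remaining = max(remaining - p, 0)
        if stage == 1 and remaining > 0:
            step_cost += penalty * remaining              # unmet demand is penalized
        # One-step Q update (costs are minimized, so take the min over next actions).
        future = 0.0 if stage == 1 else np.min(Q[stage + 1, int(remaining // 10)])
        Q[stage, s, a] += alpha * (step_cost + future - Q[stage, s, a])

# Greedy schedule for a 150 MW demand.
remaining, schedule = 150, []
for stage in range(2):
    p = levels[int(np.argmin(Q[stage, int(remaining // 10)]))]
    schedule.append(p)
    remaining = max(remaining - p, 0)
print("schedule (MW per unit):", schedule)
```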
Abstract:
Short term load forecasting is one of the key inputs for optimizing the management of a power system. Almost 60-65% of the revenue expenditure of a distribution company goes toward power purchase, and the cost of power depends on its source. Hence any optimization strategy involves optimizing the scheduling of power from various sources. As the scheduling involves many technical and commercial considerations and constraints, its efficiency depends on the accuracy of the load forecast. Load forecasting is a well-studied topic, and a number of papers using different techniques have been published. The accuracy of the forecast required for merit order dispatch decisions depends on the extent of the permissible variation in generation limits. For a system with a low load factor, the peak and the off-peak trough are prominent, and the forecast should identify these points accurately rather than merely minimizing the error in the energy content. In this paper, an Artificial Neural Network (ANN) with a supervised learning based approach is applied to short term load forecasting for a power system with a comparatively low load factor. Such power systems are common in tropical areas with a concentrated rainy season for a considerable part of the year.
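A minimal sketch of supervised short-term load forecasting with a small feed-forward network (scikit-learn's MLPRegressor standing in for the paper's ANN); the synthetic low-load-factor profile and the chosen input features (hour of day and lagged loads) are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
hours = np.arange(24 * 60)                           # 60 days of hourly data
# Synthetic low-load-factor profile: sharp evening peak plus noise.
load = 200 + 300 * np.exp(-((hours % 24 - 19) ** 2) / 4.0) + rng.normal(0, 10, hours.size)

# Features: hour of day, previous hour's load, load 24 hours earlier.
X = np.column_stack([hours[24:] % 24, load[23:-1], load[:-24]])
y = load[24:]

scaler = StandardScaler().fit(X[:-24])               # fit on the training portion only
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(scaler.transform(X[:-24]), y[:-24])        # train on all but the last day

pred = model.predict(scaler.transform(X[-24:]))      # forecast the last 24 hours
peak_err = abs(pred.max() - y[-24:].max())
print(f"peak forecast error: {peak_err:.1f} (arbitrary load units)")
```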
Abstract:
Adaptive filtering is a primary method for filtering the electrocardiogram (ECG) because it does not require the signal's statistical characteristics. In this paper, an adaptive filtering technique for denoising the ECG based on a Genetic Algorithm (GA) tuned Sign-Data Least Mean Square (SD-LMS) algorithm is proposed. This technique minimizes the mean-squared error between the primary input, which is a noisy ECG, and a reference input, which can be either noise that is correlated in some way with the noise in the primary input or a signal that is correlated only with the ECG in the primary input. Noise is used as the reference signal in this work. The algorithm was applied to records from the MIT-BIH Arrhythmia database for removing baseline wander and 60 Hz power line interference. The proposed algorithm gave an average signal to noise ratio improvement of 10.75 dB for baseline wander and 24.26 dB for power line interference, which is better than previously reported works.
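A minimal sketch of the sign-data LMS update on a synthetic signal; the filter length and step size used here are exactly the kind of parameters a GA would tune, and the values are simply assumed.

```python
import numpy as np

def sd_lms(primary, reference, n_taps=8, mu=0.02):
    """Sign-data LMS adaptive filter.  `primary` is signal + noise, `reference`
    is correlated with the noise; the output estimates the clean signal."""
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]       # most recent reference samples
        e = primary[n] - w @ x                  # error = cleaned-sample estimate
        w += mu * e * np.sign(x)                # sign-data update: sign of the input
        out[n] = e
    return out

# Synthetic example: a slow "ECG-like" wave corrupted by 60 Hz interference.
fs = 360
t = np.arange(0, 5, 1 / fs)
clean = np.sin(2 * np.pi * 1.2 * t)                    # stand-in for the ECG
noise = 0.5 * np.sin(2 * np.pi * 60 * t)
filtered = sd_lms(clean + noise, np.sin(2 * np.pi * 60 * t + 0.3))
print("residual noise power:", np.mean((filtered[fs:] - clean[fs:]) ** 2))
```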
Abstract:
A Multi-Objective Antenna Placement Genetic Algorithm (MO-APGA) has been proposed for the synthesis of matched antenna arrays on complex platforms. The total number of antennas required, their positions on the platform, the location of loads, loading circuit parameters, decoupling and matching network topology, matching network parameters and feed network parameters are optimized simultaneously. The optimization goal was to provide a given minimum gain, a specific gain discrimination between the main and back lobes, and broadband performance. The algorithm is developed on the basis of the non-dominated sorting genetic algorithm (NSGA-II) and the Minimum Spanning Tree (MST) technique for producing diverse solutions when the number of objectives increases beyond two. The proposed method is validated through the design of a wideband airborne SAR.
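The NSGA-II machinery underlying MO-APGA is built on Pareto dominance. As generic background only (not the MO-APGA implementation), the sketch below extracts the first non-dominated front from a small set of objective vectors, with all objectives minimized.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return np.all(a <= b) and np.any(a < b)

def first_front(objectives):
    """Indices of the non-dominated solutions (the first NSGA-II front)."""
    front = []
    for i, fi in enumerate(objectives):
        if not any(dominates(fj, fi) for j, fj in enumerate(objectives) if j != i):
            front.append(i)
    return front

# Example: three objectives per candidate antenna layout, e.g.
# (-gain, -front-to-back ratio, mismatch).  The values are made up.
pop = np.array([[-12.0, -15.0, 0.30],
                [-10.0, -18.0, 0.25],
                [-12.0, -15.0, 0.40],
                [ -8.0, -10.0, 0.50]])
print("non-dominated candidates:", first_front(pop))
```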
Abstract:
Considerable research effort has been devoted to predicting the exon regions of genes. The binary indicator (BI), electron-ion interaction pseudopotential (EIIP) and filter methods are some examples. All of these methods make use of the period-three behavior of exon regions. Although the method suggested in this paper is similar to the above mentioned methods, it introduces a set of sequences for mapping the nucleotides, selected by applying a genetic algorithm, and is found to be more promising.
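A minimal sketch of the period-three signal these methods exploit: map the DNA string to numeric indicator sequences and inspect the DFT power at k = N/3. The standard binary indicator mapping is used here; the paper's GA-selected mapping sequences would take its place.

```python
import numpy as np

def period3_power(dna):
    """Power of the DFT coefficient at k = N/3, summed over the four binary
    indicator sequences; coding (exon) regions tend to show a peak here."""
    N = len(dna)
    k = N // 3
    total = 0.0
    for base in "ACGT":
        indicator = np.array([1.0 if b == base else 0.0 for b in dna])
        spectrum = np.fft.fft(indicator)
        total += np.abs(spectrum[k]) ** 2
    return total / N

# A sequence with an artificial period-3 pattern scores higher than a shuffled one.
periodic = "ATG" * 60
rng = np.random.default_rng(0)
shuffled = "".join(rng.permutation(list(periodic)))
print(period3_power(periodic), ">", period3_power(shuffled))
```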
Abstract:
Combinational digital circuits can be evolved automatically using Genetic Algorithms (GA). Until recently, this technique used linear chromosomes and one-dimensional crossover and mutation operators. In this paper, a new method for representing combinational digital circuits as two-dimensional (2D) chromosomes, together with suitable 2D crossover and mutation techniques, is proposed. Using this method, the convergence speed of the GA can be increased significantly compared to conventional methods. Moreover, the 2D representation and crossover operation provide the designer with better visualization of the evolved circuits. In addition, a technique to automatically display the evolved circuits has been developed with the help of MATLAB.
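A minimal sketch of a 2D chromosome as a matrix of gate codes with a block-swap crossover and cell-wise mutation; the gate encoding and crossover geometry are illustrative assumptions, and the sketch is in Python rather than MATLAB.

```python
import numpy as np

rng = np.random.default_rng(0)

# 2D chromosomes: each cell holds an integer gate code (e.g. 0=WIRE, 1=AND, 2=OR, 3=XOR, 4=NOT).
rows, cols, n_gate_types = 4, 5, 5
parent_a = rng.integers(0, n_gate_types, size=(rows, cols))
parent_b = rng.integers(0, n_gate_types, size=(rows, cols))

def crossover_2d(a, b):
    """Swap a random rectangular block between two 2D chromosomes."""
    r0, r1 = sorted(rng.integers(0, a.shape[0] + 1, size=2))
    c0, c1 = sorted(rng.integers(0, a.shape[1] + 1, size=2))
    child_a, child_b = a.copy(), b.copy()
    child_a[r0:r1, c0:c1], child_b[r0:r1, c0:c1] = b[r0:r1, c0:c1], a[r0:r1, c0:c1]
    return child_a, child_b

def mutate_2d(chrom, rate=0.05):
    """Replace a fraction of cells with random gate codes."""
    mask = rng.random(chrom.shape) < rate
    chrom = chrom.copy()
    chrom[mask] = rng.integers(0, n_gate_types, size=mask.sum())
    return chrom

child_a, child_b = crossover_2d(parent_a, parent_b)
print(mutate_2d(child_a))
```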
Abstract:
This paper presents a new approach to the design of combinational digital circuits with multiplexers using evolutionary techniques. A Genetic Algorithm (GA) is used as the optimization tool. Several circuits are synthesized with this method and compared with two design techniques: standard implementation of logic functions using multiplexers, and implementation using Shannon's decomposition technique with GA. With the proposed method, the complexity of the circuit and the associated delay can be reduced significantly.
Abstract:
Many recent Web 2.0 resource sharing applications can be subsumed under the "folksonomy" moniker. Regardless of the type of resource shared, all of these share a common structure describing the assignment of tags to resources by users. In this report, we generalize the notions of clustering and characteristic path length, which play a major role in current research on networks, where they are used to describe small-world effects in many observed network datasets. To that end, we show that the notion of clustering has two facets which are not equivalent in the generalized setting. The new measures are evaluated on two large-scale folksonomy datasets from resource sharing systems on the web.
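For background, the sketch below computes the classical graph-level versions of the two measures being generalized, using NetworkX on a small small-world graph; the folksonomy generalization itself operates on the tripartite user-tag-resource structure and is not reproduced here.

```python
import networkx as nx

# A small "small-world" example graph (kept connected so path length is defined).
G = nx.connected_watts_strogatz_graph(n=100, k=4, p=0.1, seed=0)

# Classical notions that the report generalizes to folksonomies:
clustering = nx.average_clustering(G)               # how locally dense neighbourhoods are
path_length = nx.average_shortest_path_length(G)    # characteristic path length

print(f"average clustering coefficient: {clustering:.3f}")
print(f"characteristic path length:     {path_length:.3f}")
```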
Abstract:
Formal Concept Analysis is an unsupervised learning technique for conceptual clustering. We introduce the notion of iceberg concept lattices and show their use in Knowledge Discovery in Databases (KDD). Iceberg lattices are designed for analyzing very large databases. In particular, they serve as a condensed representation of frequent patterns as known from association rule mining. In order to show the interplay between Formal Concept Analysis and association rule mining, we discuss the TITANIC algorithm. We show that iceberg concept lattices are a starting point for computing condensed sets of association rules without loss of information, and are a visualization method for the resulting rules.
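A minimal sketch of what an iceberg concept lattice contains: attribute sets closed under the usual FCA derivation operators whose support meets a minimum threshold. The brute-force enumeration below is only illustrative and is not the TITANIC algorithm.

```python
from itertools import combinations

# Toy formal context: objects (transactions) and their attributes (items).
context = {
    "o1": {"a", "b", "c"},
    "o2": {"a", "b"},
    "o3": {"a", "c"},
    "o4": {"a", "b", "c"},
    "o5": {"b"},
}
attributes = sorted(set().union(*context.values()))

def extent(intent):
    """Objects having all attributes in `intent` (the ' operator on attribute sets)."""
    return {o for o, attrs in context.items() if intent <= attrs}

def closure(intent):
    """Attributes shared by all objects in the extent (the '' closure)."""
    objs = extent(intent)
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

def iceberg_intents(min_support):
    """Closed attribute sets with support >= min_support (iceberg concept intents)."""
    found = set()
    for r in range(len(attributes) + 1):
        for combo in combinations(attributes, r):
            intent = closure(set(combo))
            if len(extent(intent)) / len(context) >= min_support:
                found.add(frozenset(intent))
    return found

for intent in sorted(iceberg_intents(0.4), key=len):
    print(sorted(intent), "support:", len(extent(set(intent))) / len(context))
```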