750 results for CLASSIFIER
Abstract:
Classifier ensembles are systems composed of a set of individual classifiers and a combination module, which is responsible for providing the final output of the system. In the design of these systems, diversity is considered one of the main aspects to take into account, since there is no gain in combining identical classifiers. The ideal situation is a set of individual classifiers with uncorrelated errors; in other words, the individual classifiers should be diverse among themselves. One way of increasing diversity is to provide different datasets (patterns and/or attributes) to the individual classifiers: diversity increases because the classifiers perform the same task (classification of the same input patterns) but are built using different subsets of patterns and/or attributes. Most papers using feature selection for ensembles address homogeneous ensemble structures, i.e., ensembles composed of only one type of classifier. In this investigation, two genetic algorithm approaches (single- and multi-objective) are used to guide the distribution of features among the classifiers in the context of homogeneous and heterogeneous ensembles. The experiments are divided into two phases that use a filter approach to feature selection guided by a genetic algorithm.
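As a concrete illustration of this idea, the sketch below runs a single-objective genetic algorithm that assigns a binary feature mask to each ensemble member and scores a candidate assignment with a simple filter criterion (mean mutual information between each member's features and the class label). The fitness function, operators, and all parameters are illustrative assumptions, not the procedure used in the thesis.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import mutual_info_classif

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=300, n_features=20, random_state=0)
    mi = mutual_info_classif(X, y, random_state=0)   # filter score per feature

    N_MEMBERS, POP, GENS = 3, 30, 40

    def fitness(chrom):
        # chrom: (N_MEMBERS, n_features) binary masks, one row per classifier
        scores = [mi[row.astype(bool)].mean() if row.any() else 0.0 for row in chrom]
        return float(np.mean(scores))

    pop = rng.integers(0, 2, size=(POP, N_MEMBERS, X.shape[1]))
    for _ in range(GENS):
        fit = np.array([fitness(c) for c in pop])
        parents = pop[np.argsort(fit)[-POP // 2:]]   # truncation selection
        children = parents.copy()
        flips = rng.random(children.shape) < 0.05    # bit-flip mutation
        children[flips] ^= 1
        pop = np.concatenate([parents, children])

    best = pop[np.argmax([fitness(c) for c in pop])]
    print("features per member:", best.sum(axis=1))

A multi-objective variant would replace the scalar fitness with a vector (e.g., filter score and inter-member diversity) and select by Pareto dominance instead of truncation.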
Abstract:
In everyday life we constantly perform two frequent and important actions: classifying (sorting into classes) and making decisions. When we face problems of relatively high complexity, we tend to seek other opinions, usually from people who have some knowledge of, or are even experts in, the problem domain in question, to help us in the decision-making process. In both classification and decision making we are guided by the characteristics involved in the specific problem, and characterizing a set of objects is part of decision making in general. In Machine Learning, this classification happens through a learning algorithm, and the characterization is applied to databases. Classification algorithms can be employed individually or in machine committees. Choosing the best methods for constructing a committee is a very arduous task. In this work, meta-learning techniques are investigated for selecting the best configuration parameters of homogeneous committees across various classification problems. These parameters are: the base classifier, the architecture, and the size of this architecture. Nine candidate inductors for the base classifier, two architecture-generation methods, and nine architecture size groups were investigated. Dimensionality reduction techniques were applied to the meta-databases in search of improvement. Five classification methods are investigated as meta-learners in the process of choosing the best parameters of a homogeneous committee.
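A minimal sketch of the meta-learning loop described here, under heavy assumptions: each toy problem contributes one meta-example whose attributes are simple dataset meta-features and whose label is the base classifier that scored best inside a bagging committee. The meta-features, candidate inductors, and committee settings are illustrative stand-ins, not the dissertation's configuration.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    base_learners = {"tree": DecisionTreeClassifier(),
                     "nb": GaussianNB(),
                     "knn": KNeighborsClassifier()}

    meta_X, meta_y = [], []
    for seed in range(8):                    # each seed = one toy "problem"
        X, y = make_classification(n_samples=200, n_features=10,
                                   n_informative=seed % 5 + 2, random_state=seed)
        scores = {name: cross_val_score(BaggingClassifier(est, n_estimators=10),
                                        X, y, cv=3).mean()
                  for name, est in base_learners.items()}
        # meta-features: size, dimensionality, majority-class proportion
        meta_X.append([X.shape[0], X.shape[1], np.bincount(y).max() / len(y)])
        meta_y.append(max(scores, key=scores.get))   # best base classifier

    meta_learner = DecisionTreeClassifier().fit(meta_X, meta_y)
    print(meta_learner.predict([[500, 20, 0.5]]))    # recommendation for a new dataset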
Abstract:
Committees of classifiers may be used to improve the accuracy of classification systems; in other words, different classifiers applied to the same problem can be combined to create a more accurate system, called a committee of classifiers. For this to succeed, the classifiers must make mistakes on different objects of the problem, so that the errors of one classifier can be compensated by the correct outputs of the others when the committee's combination method is applied. This property of erring on different objects is called diversity. However, most diversity measures fail to capture its importance. Recently, two diversity measures (good and bad diversity) were proposed with the aim of helping to build more accurate committees. This work performs an experimental analysis of these measures applied directly to the construction of committees of classifiers. The construction method adopted is modeled as a search, over the feature sets of the problem databases and over the candidate committee members, for the committee of classifiers producing the most accurate classification. This problem is solved by metaheuristic optimization techniques in their mono- and multi-objective versions. Analyses are performed to verify whether using or adding the measures of good and bad diversity as optimization objectives creates more accurate committees. Thus, the contribution of this study is to determine whether good and bad diversity can be used in mono-objective and multi-objective optimization techniques as objectives for building committees of classifiers more accurate than those built by the same process using only classification accuracy as the optimization objective.
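For reference, a sketch of the good/bad diversity decomposition for majority voting (following Brown and Kuncheva's formulation, which appears to be the one referenced here; the code and variable names are illustrative). With votes[i, j] = 1 when classifier i errs on example j, the majority-vote error equals the average individual error minus good diversity plus bad diversity (ties aside).

    import numpy as np

    def good_bad_diversity(votes):
        # votes[i, j] = 1 if classifier i misclassifies example j
        L, N = votes.shape
        wrong_frac = votes.mean(axis=0)              # fraction of wrong voters per example
        maj_wrong = wrong_frac > 0.5                 # examples the majority vote gets wrong
        good = wrong_frac[~maj_wrong].sum() / N      # disagreement where the committee is right
        bad = (1 - wrong_frac[maj_wrong]).sum() / N  # wasted correct votes where it is wrong
        return good, bad, votes.mean(), maj_wrong.mean()

    votes = np.random.default_rng(1).integers(0, 2, size=(5, 100))
    good, bad, avg_err, maj_err = good_bad_diversity(votes)
    print(abs(maj_err - (avg_err - good + bad)) < 1e-12)   # decomposition holds

Used as optimization objectives, good diversity is to be maximized and bad diversity minimized, alongside (or instead of) raw committee accuracy.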
Abstract:
The municipality of Areia Branca lies within the West Potiguar mesoregion and the Mossoró microregion, covering an area of 357.58 km². Together with the municipality of Grossos-RN, it encompasses an environmentally fragile area in terms of environment and housing: the estuary of the Apodi-Mossoró River. Areia Branca has historically suffered from a lack of planning regarding land use and occupation, as some economic activities, attracted by extremely favorable natural conditions, have exploited its natural resources improperly. The aim of this study is to quantify and analyze environmental degradation in the municipality. Initially, a characterization of land use was performed using remote sensing, geoprocessing, and a geographic information system (GIS) in order to generate data and information at the municipal scale that may serve as input for environmental planning and land use planning in the region. A Landsat 5 TM image from 2010 was used; it was processed in SPRING 5.2 by applying a supervised region-based classification with the Bhattacharyya distance method and a 30% acceptance threshold. This produced a land use map from which the spatial distribution of the different types of use occurring in the municipality was analyzed, identifying areas being used incorrectly and the main types of environmental degradation. Furthermore, the methodology proposed by Beltrame (1994), the Physical Conservationist Diagnosis, was applied with some adaptations to quantify the level of degradation or conservation of the study area. As a result, indexes were obtained for the parameters of the proposed methodology, allowing a quantitative analysis of the degradation potential of each sector. On a scale of 0 to 100, sectors A and B scored 31.20 units of physical deterioration risk, while sector C scored 34.64 units and should therefore be considered a priority for conservation actions.
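For context, the statistic behind SPRING's region-based Bhattacharyya classifier is the Bhattacharyya distance between class-conditional Gaussian models of the segmented regions; the sketch below implements the textbook formula (this is not SPRING's code, and the example vectors are invented).

    import numpy as np

    def bhattacharyya(mu1, cov1, mu2, cov2):
        # Distance between two multivariate Gaussians N(mu1, cov1) and N(mu2, cov2)
        cov = (cov1 + cov2) / 2.0
        diff = mu1 - mu2
        term1 = diff @ np.linalg.solve(cov, diff) / 8.0
        term2 = 0.5 * np.log(np.linalg.det(cov)
                             / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
        return term1 + term2

    mu_a, mu_b = np.array([0.2, 0.3]), np.array([0.6, 0.1])   # mean spectral vectors
    cov = np.eye(2) * 0.01                                     # shared toy covariance
    print(bhattacharyya(mu_a, cov, mu_b, cov))

A region is assigned to the class whose training statistics minimize this distance, subject to the acceptance threshold.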
Abstract:
In this work, we propose a two-stage algorithm for real-time fault detection and identification in industrial plants. Our proposal is based on the analysis of selected features using recursive density estimation and a new evolving classifier algorithm. More specifically, the proposed approach for the detection stage is based on the concept of density in the data space, which is not the same as a probability density function but is a very useful measure for abnormality/outlier detection. This density can be expressed by a Cauchy function and calculated recursively, which makes it memory- and computation-efficient and, therefore, suitable for on-line applications. The identification/diagnosis stage is based on a self-developing (evolving) fuzzy rule-based classifier system proposed in this work, called AutoClass. An important property of AutoClass is that it can start learning "from scratch": neither the fuzzy rules nor the number of classes need to be prespecified (the number may grow, with new class labels being added by the on-line learning process), in a fully unsupervised manner. If an initial rule base exists, AutoClass can evolve and develop it further based on newly arriving faulty-state data. To validate our proposal, we present experimental results from a didactic level-control process, where control and error signals are used as features for the fault detection and identification systems; the approach is generic, however, and the number of features can be large thanks to the computationally lean methodology, since covariance or more complex calculations, as well as storage of old data, are not required. The results obtained are significantly better than those of the traditional approaches used for comparison.
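A minimal sketch of recursive density estimation with a Cauchy-type kernel, in the spirit of the detection stage described above: only a running mean and a running mean of squared norms are kept, so no covariance matrices or past samples are stored. The class name and the idea of flagging low-density samples are illustrative, not the paper's implementation.

    import numpy as np

    class RDE:
        def __init__(self, dim):
            self.k = 0
            self.mu = np.zeros(dim)   # running mean of the samples
            self.S = 0.0              # running mean of the squared norms

        def update(self, x):
            # One recursive step per arriving sample
            self.k += 1
            self.mu += (x - self.mu) / self.k
            self.S += (x @ x - self.S) / self.k
            # Cauchy-type density: 1 / (1 + ||x - mu||^2 + S - ||mu||^2)
            return 1.0 / (1.0 + (x - self.mu) @ (x - self.mu)
                          + self.S - self.mu @ self.mu)

    rde = RDE(dim=2)
    for x in np.random.default_rng(0).normal(size=(200, 2)):
        density = rde.update(x)
    print("density of last sample:", density)   # a low value would flag an anomaly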
Abstract:
The objective is to establish a methodology for monitoring oil spills on the sea surface in the submerged exploration area of the Guamaré pole region, in the State of Rio Grande do Norte, using orbital Synthetic Aperture Radar (SAR) images integrated with meteo-oceanographic products. The methodology was applied in the following stages: (1) creation of a base map of the exploration area; (2) processing of NOAA/AVHRR and ERS-2 images to generate meteo-oceanographic products; (3) processing of RADARSAT-1 images for oil spill monitoring; (4) integration of the RADARSAT-1 images with the NOAA/AVHRR and ERS-2 image products; and (5) structuring of a database. The integration of the RADARSAT-1 image of the Potiguar Basin from 21.05.99 with the base map of the exploration area was used successfully to identify the probable source of the oil slick detected next to the outlet of the submarine outfall in the exploration area. To support the integration of RADARSAT-1 images with the NOAA/AVHRR and ERS-2 products, a methodology was developed for classifying the oil spills identified in the RADARSAT-1 images. For this purpose, the following unsupervised classification algorithms were tested: K-means, fuzzy K-means, and Isodata. These algorithms are part of the PCI Geomatics software, which was also used for filtering the RADARSAT-1 images. To validate the results, the oil spills submitted to unsupervised classification were compared with the results of the Semivariogram Textural Classifier (STC), a classifier developed specifically for oil spill classification that requires the PCI software for the whole processing of RADARSAT-1 images. Finally, the classification results were analyzed through visual analysis, magnitude proportionality calculations, and statistical analysis. Among the three classification algorithms tested, no significant differences were observed relative to the spills classified with the STC in any of the analyses considered. Therefore, the described methodology can be applied successfully using the unsupervised classifiers tested, reducing the time needed to identify and classify oil spills compared with the use of the STC classifier.
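As a toy illustration of the unsupervised stage (not the study's PCI Geomatics workflow), the sketch below segments a synthetic SAR-like intensity image with K-means, one of the three algorithms compared; oil slicks appear as low-backscatter (dark) regions, so a two-cluster split separates slick candidates from open sea.

    import numpy as np
    from sklearn.cluster import KMeans

    img = np.random.default_rng(0).gamma(shape=2.0, scale=40.0, size=(64, 64))
    img[20:30, 15:45] *= 0.2                     # synthetic low-backscatter "slick"
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
        img.reshape(-1, 1)).reshape(img.shape)
    slick = labels == labels[25, 30]             # cluster containing the dark patch
    print("slick pixels:", int(slick.sum()))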
Abstract:
In this paper we deal with the problem of feature selection by introducing a new approach based on the Gravitational Search Algorithm (GSA). The proposed algorithm combines the optimization behavior of GSA with the speed of the Optimum-Path Forest (OPF) classifier in order to provide a fast and accurate framework for feature selection. Experiments on datasets from a wide range of applications, such as vowel recognition, image classification, and fraud detection in power distribution systems, are conducted in order to assess the robustness of the proposed technique against Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and a Particle Swarm Optimization (PSO)-based algorithm for feature selection.
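A heavily simplified, hedged sketch of this wrapper loop: agent positions are thresholded into binary feature masks and scored by a classifier's cross-validated accuracy. A k-NN stands in for OPF (which has no scikit-learn implementation), and the update below keeps only the essential mass/force/velocity mechanics of GSA; constants and schedules are illustrative.

    import numpy as np
    from sklearn.datasets import load_wine
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    X, y = load_wine(return_X_y=True)
    A, T = 10, 25                                 # agents, iterations
    pos, vel = rng.random((A, X.shape[1])), np.zeros((A, X.shape[1]))

    def fitness(p):
        mask = p > 0.5                            # threshold position into a feature mask
        if not mask.any():
            return 0.0
        return cross_val_score(KNeighborsClassifier(3), X[:, mask], y, cv=3).mean()

    for t in range(T):
        f = np.array([fitness(p) for p in pos])
        m = (f - f.min()) / (f.max() - f.min() + 1e-12)   # masses from fitness
        m = m / (m.sum() + 1e-12)
        G = 1.0 - t / T                                   # decaying gravitational constant
        acc = np.zeros_like(pos)
        for i in range(A):
            for j in range(A):
                if i != j:
                    r = np.linalg.norm(pos[j] - pos[i]) + 1e-12
                    acc[i] += rng.random() * G * m[j] * (pos[j] - pos[i]) / r
        vel = rng.random(pos.shape) * vel + acc
        pos = np.clip(pos + vel, 0.0, 1.0)

    best = pos[np.argmax([fitness(p) for p in pos])] > 0.5
    print("selected features:", np.flatnonzero(best))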
Abstract:
The identification of genes essential for survival is important for understanding the minimal requirements for cellular life and for drug design. As experimental studies aimed at building a catalog of essential genes for a given organism are time-consuming and laborious, a computational approach that could predict gene essentiality with high accuracy would be of great value. We present here a novel computational approach, called NTPGE (Network Topology-based Prediction of Gene Essentiality), that relies on the network topology features of a gene to estimate its essentiality. The first step of NTPGE is to construct the integrated molecular network for a given organism, comprising protein physical, metabolic, and transcriptional regulation interactions. The second step consists of training a decision-tree-based machine-learning algorithm on known essential and non-essential genes of the organism of interest, using the network topology information for each of these genes as learning attributes. Finally, the decision-tree classifier generated is applied to the genes of the organism to estimate the essentiality of each one. We applied the NTPGE approach to discover the essential genes in Escherichia coli and then assessed its performance.
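A conceptual sketch of the NTPGE pipeline on invented data: node-level topology features extracted from a stand-in network feed a decision-tree classifier of essentiality. The network model, the three features, and the labels are synthetic placeholders, not the paper's integrated E. coli network.

    import networkx as nx
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    G = nx.barabasi_albert_graph(200, 3, seed=0)      # stand-in molecular network
    deg = dict(G.degree())
    btw = nx.betweenness_centrality(G)
    clu = nx.clustering(G)
    X = np.array([[deg[n], btw[n], clu[n]] for n in G.nodes])
    # Toy labels: pretend high-degree nodes are "essential" (illustration only)
    y = (X[:, 0] > np.median(X[:, 0])).astype(int)

    clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
    print("training accuracy:", clf.score(X, y))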
Abstract:
Background: The genome-wide identification of both morbid genes, i.e., those genes whose mutations cause hereditary human diseases, and druggable genes, i.e., genes coding for proteins whose modulation by small molecules elicits phenotypic effects, requires experimental approaches that are time-consuming and laborious. Thus, a computational approach that could accurately predict such genes on a genome-wide scale would be invaluable for accelerating the pace of discovery of causal relationships between genes and diseases, as well as for determining the druggability of gene products. Results: In this paper we propose a machine learning-based computational approach to predict morbid and druggable genes on a genome-wide scale. For this purpose, we constructed a decision tree-based meta-classifier and trained it on datasets containing, for each morbid and druggable gene, network topological features, tissue expression profile, and subcellular localization data as learning attributes. This meta-classifier correctly recovered 65% of known morbid genes with a precision of 66% and correctly recovered 78% of known druggable genes with a precision of 75%. It was then used to assign morbidity and druggability scores to genes not known to be morbid or druggable, and we showed a good match between these scores and literature data. Finally, we generated decision trees by training the J48 algorithm on the morbidity and druggability datasets to discover cellular rules for morbidity and druggability; among these rules, we found that the number of regulating transcription factors and plasma membrane localization are the most important factors for morbidity and druggability, respectively. Conclusions: We were able to demonstrate that network topological features, along with tissue expression profile and subcellular localization, can reliably predict human morbid and druggable genes on a genome-wide scale. Moreover, by constructing decision trees based on these data, we could discover cellular rules governing morbidity and druggability.
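The precision and recall figures quoted above follow the standard definitions; a quick illustration with toy labels (nothing here is from the paper's data):

    from sklearn.metrics import precision_score, recall_score

    y_true = [1, 1, 1, 1, 0, 0, 0, 0]   # toy gold labels (1 = morbid/druggable)
    y_pred = [1, 1, 1, 0, 1, 0, 0, 0]   # toy classifier output
    print(precision_score(y_true, y_pred))  # 3 of 4 predicted positives correct -> 0.75
    print(recall_score(y_true, y_pred))     # 3 of 4 true positives recovered   -> 0.75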
Abstract:
Mobile robots need autonomy to fulfill their tasks, and such autonomy is related to their capacity to explore and recognize their navigation environments. In this context, the present work considers techniques for the classification and extraction of features from images using artificial neural networks. These images are used in the mapping and localization system of the LACE (Automation and Evolutive Computing Laboratory) mobile robot. The robot uses a sensorial system composed of ultrasound sensors and a catadioptric vision system equipped with a camera and a conical mirror. The mapping system is composed of three modules, two of which are presented in this paper: the classifier module and the characterizer module. Simulation results for these modules are also presented.
Abstract:
The objective of the present study, developed in a mountainous region of Brazil where many landslides occur, is to present a method for detecting landslide scars that couples image processing techniques with spatial analysis tools. An IKONOS image was initially segmented and then classified with a Bhattacharyya classifier, with an acceptance limit of 99%, resulting in 216 polygons whose spectral response was similar to that of landslide scars. Spatial analysis tools that took into account a susceptibility map, a map of local drainage channels and highways, and the maximum expected scar size in the study area were then used to exclude features misinterpreted as scars; a schematic sketch of this filtering step follows below. The 43 resulting features were compared with visually interpreted landslide scars and field observations. The proposed method can be reproduced and enhanced by adding filtering criteria and was able to find new scars in the image, with a final error rate of 2.3%.
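A schematic sketch of the post-classification filtering step (the thresholds and attribute names are invented for illustration; the study's actual criteria combine the susceptibility map, drainage and highway maps, and the maximum expected scar size):

    # Candidate polygons from the classification, with precomputed attributes
    candidates = [
        {"id": 1, "area_m2": 1200.0,  "in_susceptible_zone": True,  "near_road": False},
        {"id": 2, "area_m2": 90000.0, "in_susceptible_zone": True,  "near_road": False},
        {"id": 3, "area_m2": 800.0,   "in_susceptible_zone": False, "near_road": True},
    ]
    MAX_SCAR_AREA = 50000.0   # assumed upper bound on plausible scar size

    scars = [c for c in candidates
             if c["area_m2"] <= MAX_SCAR_AREA
             and c["in_susceptible_zone"]
             and not c["near_road"]]          # roads cause spectral confusion
    print([c["id"] for c in scars])           # -> [1]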
Abstract:
Meteorological conditions are decisive for agricultural production; precipitation, in particular, can be cited as the most influential factor because of its direct relation to the water balance. Accordingly, agrometeorological models, which are based on crop responses to meteorological conditions, have been increasingly used to estimate agricultural yields. Given the difficulty of obtaining data to feed such models, methods for estimating precipitation from the spectral-channel imagery of meteorological satellites have been employed for this purpose. The present work aims to use the optimum-path forest pattern classifier to correlate information available in the infrared spectral channel of the GOES-12 meteorological satellite with the reflectivity obtained by the IPMET/UNESP radar located in the municipality of Bauru, with a view to developing a model for detecting the occurrence of precipitation. The experiments compared four classification algorithms: artificial neural networks (ANN), k-nearest neighbors (k-NN), support vector machines (SVM), and optimum-path forest (OPF). The last obtained the best results in both efficiency and accuracy.
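A hedged sketch of the kind of comparison run in these experiments, on synthetic data: three of the four algorithms are available in scikit-learn, while OPF would be plugged into the same cross-validation loop from a dedicated implementation (none is assumed here).

    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=500, n_features=8, random_state=0)
    models = {"ANN": MLPClassifier(max_iter=1000, random_state=0),
              "k-NN": KNeighborsClassifier(5),
              "SVM": SVC()}
    for name, model in models.items():
        print(name, cross_val_score(model, X, y, cv=5).mean().round(3))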
Abstract:
Spiking neural networks - networks that use a temporal coding of information - have emerged as a promising approach within the connectionist paradigm arising from cognitive science. One of these new models is the spiking neural network with radial basis function, which is able to store information in the axonal delay times of the neurons. A learning algorithm has been successfully applied to this spiking network, which proved capable of mapping an input spike sequence to an output spike sequence. More recently, a method based on Gaussian receptive fields was proposed to encode constant data into a sequence of temporal spikes, making it possible for this network to handle computational data. The learning process of this new network is not yet fully understood, and deeper investigation is needed to situate the model within the context of machine learning and to establish the network's abilities and limitations. This work presents an investigation of this new classifier and a study of its capacity to cluster data in three dimensions, particularly seeking to establish its domains of application and its horizons in the field of computer vision.
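A small sketch of the Gaussian receptive-field coding scheme mentioned above (in the style of Bohte et al.'s population coding, which appears to be the method referenced; centers, widths, and the time window are illustrative): each real value excites a population of neurons, and each neuron's activation is converted into a firing time, with stronger activation firing earlier.

    import numpy as np

    def encode(value, n_neurons=8, v_min=0.0, v_max=1.0, t_max=10.0):
        # Evenly spaced Gaussian receptive fields covering [v_min, v_max]
        centers = np.linspace(v_min, v_max, n_neurons)
        sigma = (v_max - v_min) / ((n_neurons - 2) * 1.5)
        activation = np.exp(-0.5 * ((value - centers) / sigma) ** 2)
        return t_max * (1.0 - activation)   # strong activation -> early spike time

    print(np.round(encode(0.3), 2))   # firing times for one input value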
Abstract:
This article introduces an efficient method to generate structural models for medium-sized silicon clusters. Geometrical information obtained from previous investigations of small clusters is initially sorted and then fed into our predictor algorithm to generate structural models for larger clusters. The method predicts geometries whose binding energies are within 95% of the corresponding ground-state value at very low computational cost. These predictions can be used as very good initial guesses for any global optimization algorithm. As a test case, information from clusters of up to 14 atoms was used to predict good models for silicon clusters of up to 20 atoms. We believe the new algorithm may enhance the performance of most optimization methods whenever some previous information is available.