973 results for self-organising maps
Abstract:
The motivation for this thesis work is the need to improve the reliability of equipment and the quality of service to railway passengers, as well as the requirement for cost-effective and efficient condition maintenance management in rail transportation. This thesis develops a fusion of various machine vision analysis methods to achieve high performance in the automation of wooden rail track inspection. Condition monitoring in rail transport is done manually by human operators, who rely on inference and assumptions to reach conclusions. Condition monitoring allows maintenance to be scheduled, or other actions to be taken, to avoid the consequences of failure before the failure occurs. Manual or automated condition monitoring of materials in public transportation fields such as railways, aerial navigation and traffic safety, where safety is of primary importance, requires non-destructive testing (NDT). In general, wooden railway sleeper inspection is done manually by a human operator, who moves along the track and gathers information through visual and sound analysis to examine the presence of cracks. Human inspectors working on the lines visually inspect the wooden sleepers to judge their quality. In this project a machine vision system is developed based on the manual visual analysis procedure, using digital cameras and image processing software to perform inspections similar to the manual ones. Manual inspection requires considerable effort, is prone to error, and the frequent changes in the inspected material make discrimination difficult even for a human operator. The machine vision system developed classifies the condition of the material by examining individual pixels of the images, processing them and attempting to reach conclusions with the assistance of knowledge bases and features. A pattern recognition approach is developed based on the methodological knowledge of the manual procedure and realised through a non-destructive testing method to identify the flaws of manually performed condition monitoring of sleepers. In this method, a test vehicle is designed to capture sleeper images in a manner similar to visual inspection by a human operator, and the captured images of the wooden sleepers provide the raw data for the pattern recognition approach. The data from the NDT method were further processed and appropriate features were extracted. The data collection by the NDT method aims at high accuracy and reliable classification results. A key idea is to use an unsupervised classifier, based on the features extracted from the method, to discriminate the condition of the wooden sleepers into either good or bad. A self-organising map is used as the classifier for the wooden sleeper classification. In order to achieve greater integration, the data collected by the machine vision system were combined through a strategy called fusion. Data fusion was examined at two different levels, namely sensor-level fusion and feature-level fusion. Since the goal was to reduce human error in classifying rail sleepers as good or bad, the results obtained by feature-level fusion, compared with those of the actual classification, were satisfactory.
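As an illustration only, and not the thesis's actual pipeline, the sketch below shows how extracted sleeper feature vectors might be fed to a small self-organising map whose units are then labelled good/bad by majority vote; the feature dimensionality, map size and training schedule are assumptions, and the data are synthetic.

```python
# Illustrative sketch only: a tiny self-organising map trained on hypothetical
# sleeper feature vectors, with units labelled "good"/"bad" by majority vote.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))            # 200 sleepers x 6 extracted features (hypothetical)
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in labels: 1 = bad, 0 = good

rows, cols, dim = 5, 5, X.shape[1]
W = rng.normal(size=(rows, cols, dim))   # SOM codebook
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

def bmu(x):
    d = np.linalg.norm(W - x, axis=2)    # distance of x to every unit
    return np.unravel_index(d.argmin(), d.shape)

for t in range(2000):                    # online SOM training
    x = X[rng.integers(len(X))]
    lr = 0.5 * np.exp(-t / 1000)         # decaying learning rate
    sigma = 2.0 * np.exp(-t / 1000)      # decaying neighbourhood radius
    b = np.array(bmu(x))
    h = np.exp(-np.sum((grid - b) ** 2, axis=2) / (2 * sigma ** 2))
    W += lr * h[..., None] * (x - W)     # pull units toward the sample

# Label each unit by the majority class of the samples it wins, then classify.
votes = np.zeros((rows, cols, 2))
for xi, yi in zip(X, y):
    votes[bmu(xi)][yi] += 1
unit_label = votes.argmax(axis=2)
pred = np.array([unit_label[bmu(xi)] for xi in X])
print("agreement with stand-in labels:", (pred == y).mean())
```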
Abstract:
This article highlights the potential benefits of the Kohonen method for the classification of rivers with similar characteristics when determining regional ecological flows using the ELOHA (Ecological Limits of Hydrologic Alteration) methodology. Currently there are many methodologies for the classification of rivers; however, none of them offer the characteristics of the Kohonen method, which (i) provides the number of groups that actually underlie the information presented, (ii) can be used for variable-importance analysis, (iii) can display the classification process in two dimensions, and (iv) preserves the clustering structure regardless of the parameters used in the model. In order to evaluate these potential benefits, 174 flow stations distributed along the large “Magdalena-Cauca” river basin (Colombia) were analyzed, and 73 variables were obtained for the classification process in each case. Six trials were carried out using different combinations of variables, and the results were validated against a reference classification obtained by Ingfocol in 2010, which was also framed within the ELOHA guidelines. In the validation process it was found that two of the tested models reproduced more than 80% of the reference classification in the first trial, meaning that more than 80% of the flow stations analyzed in both models formed invariant groups of streams.
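A minimal sketch of the general idea, not the article's actual experiment, is given below: flow stations described by standardised hydrologic variables are grouped with a Kohonen map and the grouping is compared with a reference classification. MiniSom is assumed as an off-the-shelf SOM implementation, and the station matrix, map size and reference labels are synthetic placeholders.

```python
# Illustrative sketch only: clustering flow stations from standardised hydrologic
# variables with a Kohonen map and measuring agreement with a reference
# classification (MiniSom and scikit-learn assumed available).
import numpy as np
from minisom import MiniSom
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(1)
stations = rng.normal(size=(174, 73))        # 174 stations x 73 variables (synthetic)
reference = rng.integers(0, 6, size=174)     # hypothetical reference classes

som = MiniSom(3, 2, 73, sigma=1.0, learning_rate=0.5, random_seed=1)  # 6 units ~ 6 groups
som.random_weights_init(stations)
som.train_random(stations, 5000)

# Each station is assigned to the group of its best-matching unit.
groups = np.array([som.winner(s)[0] * 2 + som.winner(s)[1] for s in stations])
print("agreement with reference:", adjusted_rand_score(reference, groups))
```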
Abstract:
Originally aimed at operational objectives, the continuous measurement of well bottomhole pressure and temperature, recorded by permanent downhole gauges (PDG), finds vast applicability in reservoir management. It contributes to the monitoring of well performance and makes it possible to estimate reservoir parameters over the long term. However, notwithstanding its unquestionable value, data from PDGs are characterized by a large noise content. Moreover, the presence of outliers within valid signal measurements is a major problem as well. In this work, the initial treatment of PDG signals is addressed, based on curve smoothing, self-organizing maps and the discrete wavelet transform. Additionally, a system based on the coupling of fuzzy clustering with feed-forward neural networks is proposed for transient detection. The results obtained were considered quite satisfactory for offshore wells and met real-world requirements for use.
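One ingredient mentioned above, discrete-wavelet denoising of a noisy pressure signal, is sketched below on a synthetic PDG-like trace. PyWavelets is assumed available, and the wavelet, decomposition level and threshold rule are assumptions rather than the choices made in the work.

```python
# Illustrative sketch only: soft-threshold wavelet denoising of a noisy pressure signal.
import numpy as np
import pywt

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 2048)
pressure = 250 - 30 * np.exp(-5 * t) + rng.normal(scale=2.0, size=t.size)  # synthetic PDG-like signal

coeffs = pywt.wavedec(pressure, "db4", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate from finest scale
thr = sigma * np.sqrt(2 * np.log(pressure.size))        # universal threshold
coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: pressure.size]

print("residual std:", np.std(pressure - denoised))
```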
Abstract:
In this paper, artificial neural networks (ANNs) based on supervised and unsupervised algorithms were investigated for use in the study of rheological parameters of solid pharmaceutical excipients, in order to develop computational tools for manufacturing solid dosage forms. Among the four supervised neural networks investigated, the best learning performance was achieved by a feed-forward multilayer perceptron whose architecture was composed of eight neurons in the input layer, sixteen neurons in the hidden layer and one neuron in the output layer. Learning and predictive performance for the angle of repose was poor, while the Carr index and Hausner ratio (CI and HR, respectively) showed very good fitting capacity and learning; therefore, HR and CI were considered suitable descriptors for the next stage of development of supervised ANNs. Clustering capacity was evaluated for five unsupervised strategies. Networks based on purely unsupervised competitive strategies, the classic "Winner-Take-All", "Frequency-Sensitive Competitive Learning" and "Rival-Penalized Competitive Learning" (WTA, FSCL and RPCL, respectively), were able to perform clustering of the database; however, the classification was very poor, showing severe errors by grouping data with conflicting properties into the same cluster or even the same neuron. Moreover, it could not be established which criteria the networks adopted for this clustering. The Self-Organizing Map (SOM) and Neural Gas (NG) networks showed better clustering capacity. Both recognized the two major groupings of the data, corresponding to lactose (LAC) and cellulose (CEL). However, SOM showed some errors in classifying data from the minority excipients: magnesium stearate (EMG), talc (TLC) and attapulgite (ATP). The NG network, in turn, performed a very consistent classification of the data and resolved the misclassifications of SOM, being the most appropriate network for classifying the data in this study. The use of the NG network in pharmaceutical technology had not been reported previously. NG therefore has great potential for use in the development of software for automated classification systems of pharmaceutical powders and as a new tool for mining and clustering data in drug development.
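For concreteness, the sketch below fits a feed-forward MLP with the 8-16-1 architecture cited above on synthetic stand-ins for the excipient descriptors and the Hausner ratio; scikit-learn is assumed, and neither the data nor the training setup correspond to the paper's.

```python
# Illustrative sketch only: an 8-16-1 feed-forward MLP on synthetic descriptor data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 8))                       # 8 input descriptors (synthetic)
hausner_ratio = 1.2 + 0.1 * X[:, 0] - 0.05 * X[:, 3] + rng.normal(scale=0.02, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, hausner_ratio, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X_tr, y_tr)
print("R^2 on held-out data:", mlp.score(X_te, y_te))
```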
Abstract:
Background: Sugarcane is an increasingly important C4 grass, economically and environmentally, used for the production of sugar and bioethanol, a low-carbon-emission fuel. Sugarcane originated from crosses of Saccharum species and is noted for its unique capacity to accumulate high amounts of sucrose in its stems. Environmental stresses severely limit sugarcane productivity worldwide. To investigate transcriptome changes in response to environmental inputs that alter yield, we used cDNA microarrays to profile the expression of 1,545 genes in plants submitted to drought, phosphate starvation, herbivory and N2-fixing endophytic bacteria. We also investigated the response to phytohormones (abscisic acid and methyl jasmonate). The arrayed elements correspond mostly to genes involved in signal transduction, hormone biosynthesis, transcription factors, novel genes and genes corresponding to unknown proteins. Results: Adopting an outlier-searching method, 179 genes with strikingly different expression levels were identified as differentially expressed in at least one of the treatments analysed. Self-Organizing Maps were used to cluster the expression profiles of 695 genes that showed a highly correlated expression pattern among replicates. The expression data for 22 genes were evaluated at 36 experimental data points by quantitative RT-PCR, indicating a validation rate of 80.5% using three biological replicates. The SUCAST database was created to provide public access to the data described in this work, linked to tissue expression profiling and to the SUCAST gene categories and sequence analysis. The SUCAST database also includes a categorization of the sugarcane kinome based on a phylogenetic grouping that included 182 undefined kinases. Conclusion: An extensive study of the sugarcane transcriptome was performed. Sugarcane genes responsive to phytohormones and to challenges sugarcane commonly faces in the field were identified. Additionally, the protein kinases were annotated based on a phylogenetic approach. The experimental design and statistical analysis applied proved robust in unravelling genes associated with a diverse array of conditions, attributing novel functions to previously unknown or undefined genes. The data consolidated in the SUCAST database resource can guide further studies and be useful for the development of improved sugarcane varieties.
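The SOM clustering step can be pictured with the short sketch below, which groups replicate-consistent expression profiles with a self-organising map; MiniSom is assumed available, and the profile matrix is synthetic rather than SUCAST data.

```python
# Illustrative sketch only: grouping expression profiles with a self-organising map.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(4)
profiles = rng.normal(size=(695, 6))      # 695 genes x 6 treatments (synthetic log-ratios)

som = MiniSom(4, 4, 6, sigma=1.0, learning_rate=0.5, random_seed=4)
som.train_random(profiles, 10000)
clusters = som.win_map(profiles)          # dict: unit coordinates -> list of member profiles
for unit, members in sorted(clusters.items()):
    print(unit, len(members), "genes")
```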
Abstract:
In this work the implementation of the SOM (Self-Organizing Maps) algorithm, or Kohonen neural network, is presented in the form of hierarchical structures applied to image compression. The main objective of this approach is to develop a hierarchical SOM algorithm with a static structure and another with a dynamic structure to generate codebooks in the process of image Vector Quantization (VQ), reducing processing time and obtaining a good image compression rate with minimal degradation of quality relative to the original image. The two self-organizing neural networks developed here were named HSOM, for the static case, and DHSOM, for the dynamic case. In the former, the hierarchical structure is defined beforehand, while in the latter the structure grows automatically according to heuristic rules that explore the data of the training set without the use of external parameters. For this network, the heuristic rules determine the growth dynamics, the branch-pruning criteria, the flexibility and the size of the child maps. The LBG (Linde-Buzo-Gray) algorithm, or K-means, one of the most widely used algorithms for generating codebooks for Vector Quantization, was used together with the Kohonen algorithm in its basic (non-hierarchical) form as a reference to compare the performance of the algorithms proposed here. A performance analysis between the two hierarchical structures is also carried out in this work. The efficiency of the proposed processing is verified by the reduction in computational complexity compared with the traditional algorithms, as well as through quantitative analysis of the reconstructed images in terms of the peak signal-to-noise ratio (PSNR) and the mean squared error (MSE).
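To make the reference setup concrete, the sketch below performs baseline vector quantisation of 4x4 image blocks with a K-means (LBG-style) codebook and computes the MSE/PSNR figures used for comparison; scikit-learn is assumed, the image is a random stand-in, and the hierarchical SOM variants themselves are not shown.

```python
# Illustrative sketch only: K-means codebook VQ of 4x4 blocks plus MSE/PSNR.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
image = rng.integers(0, 256, size=(128, 128)).astype(float)   # stand-in greyscale image

blocks = image.reshape(32, 4, 32, 4).transpose(0, 2, 1, 3).reshape(-1, 16)  # 4x4 blocks as vectors
codebook = KMeans(n_clusters=64, n_init=10, random_state=0).fit(blocks)     # 64-entry codebook
recon_blocks = codebook.cluster_centers_[codebook.predict(blocks)]
recon = recon_blocks.reshape(32, 32, 4, 4).transpose(0, 2, 1, 3).reshape(128, 128)

mse = np.mean((image - recon) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)
print(f"MSE = {mse:.1f}, PSNR = {psnr:.2f} dB")
```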
Abstract:
Self-organizing maps (SOM) are artificial neural networks widely used in the data mining field, mainly because they constitute a dimensionality reduction technique given the fixed grid of neurons associated with the network. In order to properly partition and visualize the SOM network, the various methods available in the literature must be applied in a post-processing stage, which consists of inferring, through the neurons, relevant characteristics of the data set. In general, applying such processing to the network neurons, instead of to the entire database, reduces the computational cost thanks to vector quantization. This work proposes a post-processing of the SOM neurons in the input and output spaces, combining visualization techniques with algorithms based on gravitational forces and on the search for the shortest path with the greatest reward. These methods take into account the connection strength between neighbouring neurons and characteristics of pattern density and distances among neurons, both associated with the positions that the neurons occupy in the data space after training the network. The goal is thus to define more clearly the arrangement of the clusters present in the data. Experiments were carried out to evaluate the proposed methods using various artificially generated data sets, as well as real-world data sets. The results obtained were compared with those from a number of well-known methods in the literature.
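The neighbour-distance quantity that this kind of post-processing builds on can be sketched as below: a U-matrix-style map of each unit's mean distance to its grid neighbours, computed from an (assumed) trained weight grid. The gravitational and path-based methods proposed in the work are not reproduced here, and the weights are random stand-ins.

```python
# Illustrative sketch only: U-matrix-style mean neighbour distance per SOM unit.
import numpy as np

rng = np.random.default_rng(6)
W = rng.normal(size=(10, 10, 3))              # stand-in for trained SOM weights (10x10 map, 3-D data)

rows, cols, _ = W.shape
u_matrix = np.zeros((rows, cols))
for i in range(rows):
    for j in range(cols):
        neigh = [W[i + di, j + dj]
                 for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                 if 0 <= i + di < rows and 0 <= j + dj < cols]
        u_matrix[i, j] = np.mean([np.linalg.norm(W[i, j] - n) for n in neigh])

# High values mark borders between clusters; low values mark dense regions.
print(u_matrix.round(2))
```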
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The use of non-human primates in scientific research has contributed significantly to the biomedical area and, in the case of Callithrix jacchus, has provided important evidence on physiological mechanisms that help explain its biology, making the species a valuable experimental model for different pathologies. However, raising non-human primates in captivity for long periods is accompanied by behavioral disorders and chronic diseases, as well as progressive weight loss in most of the animals. The Primatology Center of the Universidade Federal do Rio Grande do Norte (UFRN) has housed a colony of C. jacchus for nearly 30 years, and during this period the animals have been weighed systematically to detect possible alterations in their clinical condition. This procedure has generated a large volume of data on the weight of animals in different age ranges, which is of great importance for studying this variable from different perspectives. Accordingly, this work presents three studies using weight data collected over 15 years (1985-2000) as a way of verifying the health status and development of the animals. The first study produced the first article, which describes the histopathological findings in animals with a probable diagnosis of wasting marmoset syndrome (WMS). All of these animals were carriers of trematode parasites (Platynosomum spp.) and had obstruction of the hepatobiliary system; it is suggested that this agent is one of the etiological factors of the syndrome. In the second article, the analysis focused on comparing the environmental profile and cortisol levels of animals with normal weight curve evolution and those with WMS. We observed a marked decrease in locomotion, increased use of the lower levels of the cage and hypocortisolemia. The latter is likely associated with an adaptation of the mechanisms that make up the hypothalamus-pituitary-adrenal axis, as observed in other mammals under conditions of chronic malnutrition. Finally, in the third study, the animals with weight alterations were excluded from the sample and, using computational tools (K-means and SOM) in an unsupervised way, we suggest new ontogenetic development classes for C. jacchus. These were expanded from five to eight classes: infant I, infant II, infant III, juvenile I, juvenile II, sub-adult, young adult and elderly adult, in order to provide a more suitable classification for detailed studies that require better control over animal development.
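The final, unsupervised step can be pictured with the short sketch below, which groups (age, weight) records into eight classes with K-means; scikit-learn is assumed, and the records are synthetic rather than the UFRN colony data.

```python
# Illustrative sketch only: K-means grouping of synthetic (age, weight) records into eight classes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
age_months = rng.uniform(0, 120, size=500)
weight_g = 30 + 350 * (1 - np.exp(-age_months / 8)) + rng.normal(scale=15, size=500)

X = StandardScaler().fit_transform(np.column_stack([age_months, weight_g]))
classes = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)
for k in range(8):
    print(f"class {k}: mean age {age_months[classes == k].mean():.1f} months")
```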
Abstract:
This paper considers the role of automatic estimation of crowd density and its importance for the automatic monitoring of areas where crowds are expected to be present. A new technique is proposed which is able to estimate densities ranging from very low to very high concentrations of people, a difficult problem because in a crowd only parts of people's bodies are visible. The new technique is based on differences in the texture patterns of crowd images. Images of low-density crowds tend to present coarse textures, while images of dense crowds tend to present fine textures. The image pixels are classified into different texture classes, and statistics of these classes are used to estimate the number of people. The texture classification and the estimation of the density of people are carried out by means of self-organising neural networks. Results obtained for the estimation of the number of people in a specific area of Liverpool Street Railway Station in London (UK) are presented. (C) 1998 Elsevier B.V. Ltd. All rights reserved.
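A rough sketch of the first stage, assigning image regions to texture classes and summarising the class proportions that a density estimator could use, is given below; the frame is synthetic, the local-variance measure and quantile thresholds are assumptions, and the paper's neural estimation stage is not reproduced.

```python
# Illustrative sketch only: coarse/fine texture classes from local variance.
import numpy as np

rng = np.random.default_rng(8)
frame = rng.integers(0, 256, size=(120, 160)).astype(float)   # stand-in greyscale frame

# Local variance over 8x8 tiles as a crude texture-fineness measure.
tiles = frame.reshape(15, 8, 20, 8).transpose(0, 2, 1, 3).reshape(-1, 64)
local_var = tiles.var(axis=1)

# Three texture classes (low/medium/high variance) via quantile thresholds.
q1, q2 = np.quantile(local_var, [0.33, 0.66])
texture_class = np.digitize(local_var, [q1, q2])
proportions = np.bincount(texture_class, minlength=3) / texture_class.size
print("texture-class proportions (coarse -> fine):", proportions.round(2))
```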
Abstract:
Human beings perceive images through their properties, such as colour, shape, size, and texture. Texture is a fertile source of information about the physical environment. Images of low-density crowds tend to present coarse textures, while images of dense crowds tend to present fine textures. This paper describes a new technique for the automatic estimation of crowd density, which is part of the problem of automatic crowd monitoring, using texture information based on grey-level transition probabilities in digitised images. Crowd density feature vectors are extracted from such images and used by a self-organising neural network, which is responsible for the crowd density estimation. Results obtained for the estimation of the number of people in a specific area of Liverpool Street Railway Station in London (UK) are presented.
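The feature-extraction idea, a grey-level transition probability matrix and descriptors derived from it, is sketched below in the spirit of the description above; the quantisation to 8 grey levels, the horizontal-neighbour displacement and the two descriptors are assumptions, not the paper's exact feature set.

```python
# Illustrative sketch only: grey-level transition probabilities and two texture descriptors.
import numpy as np

rng = np.random.default_rng(9)
frame = rng.integers(0, 256, size=(120, 160))

levels = 8
q = (frame * levels // 256).astype(int)                 # quantise to 8 grey levels
pairs = np.stack([q[:, :-1].ravel(), q[:, 1:].ravel()]) # horizontal neighbour pairs
T = np.zeros((levels, levels))
np.add.at(T, (pairs[0], pairs[1]), 1)
T /= T.sum()                                            # transition probabilities

i, j = np.indices(T.shape)
contrast = np.sum((i - j) ** 2 * T)                     # tends to be high for fine textures
homogeneity = np.sum(T / (1 + np.abs(i - j)))           # tends to be high for coarse textures
print(f"contrast = {contrast:.2f}, homogeneity = {homogeneity:.2f}")
```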
Abstract:
The presence of precipitates in metallic materials affects their durability, strength and mechanical properties. Hence, their automatic identification by image processing and machine learning techniques may lead to reliable and efficient assessment of the materials. In this paper, we apply four widely used supervised pattern recognition techniques to accomplish metallic precipitate segmentation in scanning electron microscope images from dissimilar welding on a Hastelloy C-276 alloy: Support Vector Machines, Optimum-Path Forest, Self-Organizing Maps and a Bayesian classifier. Experimental results demonstrated that all classifiers achieved similar recognition rates, with good results validated by an expert in metallographic image analysis. © 2011 Springer-Verlag Berlin Heidelberg.
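A small sketch of pixel-wise precipitate/background classification with two of the four classifiers named above, an SVM and a (naive) Bayesian classifier, follows; scikit-learn is assumed, the feature vectors and labels are synthetic, and OPF and the SOM classifier are omitted.

```python
# Illustrative sketch only: SVM vs. naive Bayes on synthetic pixel feature vectors.
import numpy as np
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(10)
features = rng.normal(size=(2000, 3))                   # e.g. intensity plus two texture measures
labels = (features[:, 0] + 0.5 * features[:, 2] > 0.3).astype(int)  # 1 = precipitate (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)
for clf in (SVC(kernel="rbf"), GaussianNB()):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, "recognition rate:", clf.score(X_te, y_te))
```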
Abstract:
Graduate Program in Philosophy - FFC
Abstract:
To guarantee that their networks are reliable, energy utilities need to carry out a study and analysis procedure based on the energy delivery functions at the points of consumption. This study, usually called electric power distribution system planning, is essential to ensure that variations in energy demand do not affect system performance, which must keep operating in a technically and economically viable manner. These studies generally analyse demand, load curve typology, load factor and other aspects of the existing loads. Considering the importance of determining load curve typologies for energy utilities in their planning process, the Companhia de Eletricidade do Amapá (CEA) carried out a measurement campaign of distribution transformer load curves in order to obtain the load curve typologies that characterise its consumers. This work presents the satisfactory results obtained from the use of Data Mining based on Computational Intelligence (Kohonen Self-Organising Maps) for the selection of typical curves and the determination of the load curve typologies of residential and industrial consumers in the city of Macapá, located in the state of Amapá. The Kohonen self-organising map is a type of Artificial Neural Network that combines projection and clustering operations, allowing exploratory data analysis with the aim of producing summarised descriptions of large data sets.
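A minimal sketch of the idea, extracting typical daily load curves by training a Kohonen map on normalised 24-point load curves and reading the unit prototypes as typologies, is shown below; MiniSom is assumed available, and the curves are synthetic rather than CEA measurements.

```python
# Illustrative sketch only: SOM prototypes as typical daily load curves.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(11)
hours = np.arange(24)
base = 0.5 + 0.4 * np.exp(-((hours - 20) ** 2) / 8)          # evening-peak shape
curves = base + rng.normal(scale=0.05, size=(400, 24))       # 400 daily curves (synthetic)
curves /= curves.max(axis=1, keepdims=True)                  # normalise each curve to its peak

som = MiniSom(2, 3, 24, sigma=1.0, learning_rate=0.5, random_seed=11)
som.train_random(curves, 5000)
typologies = som.get_weights().reshape(-1, 24)               # 6 prototype load curves
print("prototype peak hours:", typologies.argmax(axis=1))
```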
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)