907 results for Data clustering
Abstract:
Facies identification in an uncored well is one of the classic problems of formation evaluation. In this work the problem is treated in two steps. The first step encodes the geological information, that is, the description of the facies crossed by a cored well, in terms of the physical properties recorded in the geophysical well logs and expressed by the parameters L and K, which are obtained from the porosity logs (density, sonic, and neutron porosity), and by the shale volume (Vsh) computed from the natural gamma ray log. These three parameters are conveniently represented in the form of the Vsh-L-K plot. The second step performs the computational interpretation of the Vsh-L-K plot with an intelligent algorithm built on the generalized angular competitive neural network, which specializes in the classification of angular patterns, that is, in the grouping of points in n-dimensional space whose envelope is approximately ellipsoidal. The operational parameters of the intelligent algorithm, such as the neural network architecture and the synaptic weights, are obtained from a Vsh-L-K plot constructed and interpreted with the information from a cored well. The algorithm can then identify and classify the layers present in an uncored well in terms of the facies identified in the cored well or, when these are absent from the cored well, in terms of the principal mineral. The methodology is demonstrated with synthetic data and with logs from cored wells of the Namorado Field, Campos Basin, located on the continental shelf of Rio de Janeiro, Brazil.
Abstract:
Competitive learning is an important machine learning approach that is widely employed in artificial neural networks. In this paper, we present a rigorous definition of a new type of competitive learning scheme realized on large-scale networks. The model consists of several particles walking within the network and competing with each other to occupy as many nodes as possible, while attempting to reject intruder particles. The particles' walking rule is a stochastic combination of random and preferential movements. The model has been applied to community detection and data clustering problems. Computer simulations reveal that the proposed technique achieves high precision in community and cluster detection at low computational complexity. Moreover, we have developed an efficient method for estimating the most likely number of clusters, using an evaluator index that monitors the information generated by the competition process itself. We hope this paper will provide an alternative approach to the study of competitive learning.
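A minimal sketch of this particle-competition idea on a graph follows. It is not the authors' implementation: the domination-level update of 0.1, the `p_pref` mixing parameter, and the adjacency-dict representation are all assumptions made for illustration.

```python
import random

def particle_competition(adj, n_particles, n_steps=2000, p_pref=0.6, seed=0):
    """Toy particle-competition clustering on an undirected graph.

    adj: dict mapping node -> list of neighbour nodes.
    Each particle raises its own domination level on visited nodes and
    (via renormalisation) lowers the levels of rival particles; in the
    end each node is labelled by the particle that dominates it.
    """
    rng = random.Random(seed)
    nodes = list(adj)
    # domination[node][particle] -> level, kept normalised to sum 1
    domination = {v: [1.0 / n_particles] * n_particles for v in nodes}
    position = [rng.choice(nodes) for _ in range(n_particles)]

    for _ in range(n_steps):
        for k in range(n_particles):
            nbrs = adj[position[k]]
            if rng.random() < p_pref:
                # preferential move: favour neighbours this particle dominates
                weights = [domination[u][k] for u in nbrs]
                total = sum(weights)
                r, acc, nxt = rng.random() * total, 0.0, nbrs[-1]
                for u, w in zip(nbrs, weights):
                    acc += w
                    if r <= acc:
                        nxt = u
                        break
            else:
                # random (exploratory) move
                nxt = rng.choice(nbrs)
            position[k] = nxt
            # strengthen this particle on the visited node, then renormalise
            lv = domination[nxt]
            lv[k] = min(1.0, lv[k] + 0.1)
            s = sum(lv)
            domination[nxt] = [x / s for x in lv]

    return {v: max(range(n_particles), key=lambda k: domination[v][k])
            for v in nodes}
```

Running it on a graph of two triangles joined by a single edge, with two particles, assigns every node a particle label; on well-separated communities the labels tend to follow the community structure.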
Abstract:
Diseases of the skeletal system, such as osteoporosis or osteoarthritis, are, along with cardiovascular diseases and tumours, among the most common human diseases. A better understanding of the formation and maintenance of bone and cartilage tissue is therefore of particular importance. Many previous approaches to identifying the relevant genes, their products, and their interactions are based on the study of pathological situations, so the function of many genes has been described only in the context of disease. Studies targeting gene activity during the normal development of bone- and cartilage-forming tissues have been carried out far less often. One of the developmentally most interesting tissues is the epiphyseal plate of the long bones. In this so-called growth plate, very high activity of the genes involved in bone and cartilage formation is to be expected, particularly in fetal tissue. In the present work, RNA was therefore isolated from the epiphyseal plate of calf bones and a cDNA library was constructed, from which about 4000 clones were sequenced in a classical EST project. The analysis yielded an expression profile comprising roughly 900 genes and identified many transcripts for components of the regulatory and structural machinery of bone and cartilage development. Besides the typical genes of bone development, components of embryonic developmental processes are also clearly represented. The former are primarily the collagens, above all collagen II alpha 1, by far the most highly expressed gene in the fetal growth plate. After the ribosomal proteins, the collagens, with about 10% of all evaluable sequences, form the second-largest gene group in the expression profile.
Proteoglycans and other regulatory elements expressed at low levels, such as transcription factors, were found only in very small copy numbers in the EST project because of its limited coverage. The EST analysis did, however, bring to light several interesting, previously unknown transcripts, which were examined in more detail. Among them are transcripts that could be assigned to LOC618319. In addition to the three exon regions described so far, a further exon was identified in the 3'-UTR. In the derived protein, which is at least 121 amino acids long, a signal peptide and a transmembrane domain were detected. Together with a possible glycosylation, the gene product can be assigned to the group of proteoglycans. Although it deviates slightly from the typical structures of bone- and cartilage-specific proteoglycans, a role for this gene product in the interaction with integrins and in cell-cell interaction, but also in signal transduction, is conceivable. EST sequencing of about 4000 cDNA clones can, however, generally cover only a fraction of the possible transcripts of the tissue under study. The new "next generation sequencing" technologies offer entirely new possibilities for sequencing and analysing complete transcriptomes at very high coverage. To support the EST data and to broaden the data basis considerably, the transcriptome of the bovine fetal growth plate was sequenced with both the Roche 454/FLX and the Illumina/Solexa technology. In evaluating the roughly 40,000 454 reads and 75 million Illumina reads, procedures were established for the general handling, quality control, clustering, annotation, and quantitative evaluation of large amounts of sequence data. A comparison of the high-throughput BLAST analyses in a classical read-count approach with the EST expression profile showed good agreement.
Deviations between the individual methods could not be explained methodologically in all cases; in some cases correlations between transcript length and read distribution are apparent. Although even simple methods such as normalisation to RPKM (reads per kilobase of transcript per million mappable reads) improve the interpretation, systematic errors introduced by the type of sequencing could not always be eliminated. Appropriate normalisation of the data is therefore particularly important when comparing differently generated data sets. The results discussed here from the various analyses show the new sequencing technologies to be a good complement to, and a potential replacement for, established methods of gene expression analysis.
Abstract:
The task considered in this paper is performance evaluation of region segmentation algorithms in the ground-truth-based paradigm. Given a machine segmentation and a ground-truth segmentation, performance measures are needed. We propose to consider the image segmentation problem as one of data clustering and, as a consequence, to use measures for comparing clusterings developed in statistics and machine learning. By doing so, we obtain a variety of performance measures which have not been used before in image processing. In particular, some of these measures have the highly desired property of being a metric. Experimental results are reported on both synthetic and real data to validate the measures and compare them with others.
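The borrowing of clustering-comparison measures can be illustrated with the variation of information, one measure from this literature with the metric property the abstract highlights. The sketch below is a generic implementation of that measure, not code from the paper; segmentations are represented as flat per-pixel label lists, which is an assumption.

```python
from collections import Counter
from math import log

def variation_of_information(a, b):
    """Variation of information VI(A, B) = H(A) + H(B) - 2 I(A; B).

    a, b: two labelings (e.g. per-pixel segment labels) of the same items.
    VI is a true metric on clusterings: non-negative, zero iff the two
    clusterings are identical, and it satisfies the triangle inequality.
    """
    n = len(a)
    ca, cb = Counter(a), Counter(b)
    joint = Counter(zip(a, b))
    # entropies of the two clusterings
    h = lambda c: -sum((m / n) * log(m / n) for m in c.values())
    # mutual information from the joint label distribution
    mi = sum((m / n) * log((m / n) * n * n / (ca[x] * cb[y]))
             for (x, y), m in joint.items())
    return h(ca) + h(cb) - 2 * mi
```

For identical labelings the distance is 0; splitting each of two clusters in half, for example, costs exactly log 2.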
Abstract:
We present an algorithm for estimating dense image correspondences. Our versatile approach lends itself to various tasks typical of video post-processing, including image morphing, optical flow estimation, stereo rectification, disparity/depth reconstruction, and baseline adjustment. We incorporate recent advances in feature matching, energy minimization, stereo vision, and data clustering into our approach. At the core of our correspondence estimation we use Efficient Belief Propagation for energy minimization. While state-of-the-art algorithms only work on thumbnail-sized images, our novel feature downsampling scheme, in combination with a simple yet efficient data term compression, can cope with high-resolution data. The incorporation of SIFT (Scale-Invariant Feature Transform) features into the data term computation further resolves matching ambiguities, making long-range correspondence estimation possible. We detect occluded areas by evaluating the correspondence symmetry and further apply geodesic matting to automatically determine plausible values in these regions.
Abstract:
The analysis of time-dependent data is an important problem in many application domains, and interactive visualization of time-series data can help in understanding patterns in large time series data. Many effective approaches already exist for the visual analysis of univariate time series, supporting tasks such as assessment of data quality, detection of outliers, or identification of periodically or frequently occurring patterns. Far fewer approaches, however, support multivariate time series. The existence of multiple values per time stamp makes the analysis task per se harder, and existing visualization techniques often do not scale well. We introduce an approach for the visual analysis of large multivariate time-dependent data, based on the idea of projecting multivariate measurements to a 2D display and visualizing the time dimension by trajectories. We use visual data aggregation metaphors based on grouping of similar data elements to scale with multivariate time series. Aggregation procedures can be based either on statistical properties of the data or on data clustering routines. Appropriately defined user controls allow the user to navigate and explore the data and to interactively steer the parameters of the data aggregation to enhance the analysis. We present an implementation of our approach and apply it to a comprehensive data set from the field of Earth observation, demonstrating the applicability and usefulness of our approach.
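A toy version of the projection idea can be sketched as follows: plain PCA computed by power iteration maps each multivariate measurement to 2D, and connecting the points in time order yields the trajectory. PCA is only one possible choice of projection here, and the helper name and iteration count are assumptions, not the paper's method.

```python
import random

def pca_2d_trajectory(series, iters=200, seed=0):
    """Project a multivariate time series (a time-ordered list of d-dim
    tuples) onto its first two principal components, giving a 2D trajectory."""
    rng = random.Random(seed)
    n, d = len(series), len(series[0])
    mean = [sum(x[j] for x in series) / n for j in range(d)]
    X = [[x[j] - mean[j] for j in range(d)] for x in series]
    # sample covariance matrix (d x d)
    C = [[sum(X[i][a] * X[i][b] for i in range(n)) / n
          for b in range(d)] for a in range(d)]

    def top_eigvec(M):
        # power iteration for the dominant eigenvector of M
        v = [rng.gauss(0, 1) for _ in range(d)]
        for _ in range(iters):
            w = [sum(M[a][b] * v[b] for b in range(d)) for a in range(d)]
            norm = sum(x * x for x in w) ** 0.5 or 1.0
            v = [x / norm for x in w]
        lam = sum(v[a] * sum(M[a][b] * v[b] for b in range(d)) for a in range(d))
        return v, lam

    v1, l1 = top_eigvec(C)
    # deflate the first component, then find the second
    C2 = [[C[a][b] - l1 * v1[a] * v1[b] for b in range(d)] for a in range(d)]
    v2, _ = top_eigvec(C2)
    return [(sum(x[j] * v1[j] for j in range(d)),
             sum(x[j] * v2[j] for j in range(d))) for x in X]
```

The returned list of 2D points, drawn as a polyline in time order, is exactly the kind of trajectory the approach visualizes; aggregation would then group nearby trajectory segments.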
Abstract:
The cognitive wireless sensor network (CWSN) is a new paradigm that integrates cognitive features into traditional wireless sensor networks (WSNs) to mitigate important problems such as spectrum occupancy. Security in cognitive wireless sensor networks is an important problem, since these kinds of networks manage critical applications and data. The specific constraints of WSNs make the problem even more critical, and effective solutions have not yet been implemented. The primary user emulation (PUE) attack is the most studied attack specific to the new cognitive features. This work discusses a new approach, based on anomaly behavior detection and collaboration, to detect the primary user emulation attack in CWSN scenarios. Two non-parametric algorithms, suitable for low-resource networks like CWSNs, have been used in this work: the cumulative sum and data clustering algorithms. The comparison is based on characteristics such as detection delay, learning time, scalability, resources, and scenario dependency. The algorithms have been tested using a cognitive simulator that provides important results in this area. Both algorithms have been shown to be valid for detecting PUE attacks, reaching a detection rate of 99% with less than 1% false positives when collaboration is used.
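A generic one-sided cumulative-sum (CUSUM) detector of the kind named above can be sketched as follows. This is not the authors' implementation; the `mean_ref`, `drift`, and `threshold` parameters, and the scalar input stream, are assumptions chosen for illustration.

```python
def cusum_detect(samples, mean_ref, drift=0.5, threshold=5.0):
    """One-sided CUSUM change detector.

    Accumulates positive deviations of the samples from a reference mean
    (minus a drift term that absorbs normal fluctuation) and flags the
    index at which the accumulated deviation exceeds the threshold.
    Returns None if no change is detected.
    """
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - mean_ref - drift))
        if s > threshold:
            return i
    return None
```

In a PUE setting the samples could be, say, received-signal-strength readings: a sustained jump caused by an emulated primary signal drives the statistic over the threshold within a few samples, while noise around the reference mean keeps it at zero.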
Abstract:
Wireless sensor networks (WSNs) are one of the fastest-growing sectors in wireless networking. The rapid adoption of these networks as a solution for many new applications has increased the traffic in the radio spectrum. Because WSNs operate in the licence-free industrial, scientific, and medical (ISM) bands, these frequencies have become saturated, and within a few years the current mode of operation will no longer be viable. The cognitive radio (CR) paradigm has appeared as a solution to this problem. Adding cognitive capabilities to WSNs allows these networks to be used in applications with stricter requirements on reliability, coverage, or quality of service. Networks that combine all these features are called cognitive wireless sensor networks (CWSNs).
The improved performance of CWSNs allows their use in critical applications where WSNs could not be used before, such as structural monitoring, medical care, military scenarios, or surveillance systems. Nevertheless, these applications also need features that cognitive radio does not provide directly, such as security. Security in CWSNs has received little attention because it is not essential to their basic operation, unlike spectrum sensing or collaboration; its study and improvement are nonetheless essential for the growth of these networks. The main objective of this thesis is therefore to study the impact of certain cognitive radio attacks on CWSNs and to implement countermeasures using the new cognitive capabilities, especially in the physical layer, while taking the limitations of WSNs into account.
In the course of this thesis, security strategies were developed against two attacks of particular importance in cognitive networks: the primary user emulation (PUE) attack and the eavesdropping attack on privacy. To mitigate the PUE attack, a countermeasure based on anomaly detection was developed, with two different detection algorithms implemented: the cumulative sum algorithm and the data clustering algorithm. After their validity was verified, the two were compared and the side effects that can affect their performance were investigated. To combat eavesdropping, a countermeasure based on the injection of artificial noise was developed, so that an attacker cannot distinguish the information-bearing signals from the noise while the legitimate communication remains unaffected. The impact of this countermeasure on network resources was also studied. As a parallel result, a test framework for CWSNs was developed, consisting of a simulator and a network of real cognitive nodes. These tools were essential for the implementation and the extraction of the results presented in this thesis.
Abstract:
This paper presents an effective decision making system for leak detection based on multiple generalized linear models and clustering techniques. The training data for the proposed decision system is obtained from a fully operational experimental pipeline distribution system. The system is equipped with data logging of three variables, namely inlet pressure, outlet pressure, and outlet flow. The experimental setup is designed so that multiple operational conditions of the distribution system, covering several pressure and flow levels, can be obtained. We then statistically test and show that the pressure and flow variables can be used as a signature of a leak under the designed multi-operational conditions. It is then shown that leak detection based on training and testing the proposed multi-model decision system, with prior data clustering, under multi-operational conditions produces better recognition rates than training based on a single-model approach. The decision system is further equipped with the estimation of confidence limits, and a method is proposed for using these confidence limits to obtain more robust leakage recognition results.
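The cluster-then-model scheme can be illustrated with a deliberately simplified sketch: 1-D 2-means on pressure separates the operating conditions, a least-squares line of flow against pressure is fitted per cluster (standing in for the paper's generalized linear models), and a fixed residual tolerance replaces the confidence limits. All numbers, function names, and the deterministic min/max initialisation are assumptions, not the paper's setup.

```python
def kmeans_2means(values, iters=50):
    """Plain 2-means on scalar operating-condition values,
    deterministically initialised at the extremes."""
    centers = [min(values), max(values)]
    assign = [0] * len(values)
    for _ in range(iters):
        assign = [0 if (v - centers[0]) ** 2 <= (v - centers[1]) ** 2 else 1
                  for v in values]
        for c in (0, 1):
            members = [v for v, a in zip(values, assign) if a == c]
            if members:
                centers[c] = sum(members) / len(members)
    return centers, assign

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx if sxx else 0.0
    return a, my - a * mx

def detect_leak(train_p, train_q, test_p, test_q, tol=0.5):
    """Cluster training pressures into two operating conditions, fit a
    flow-vs-pressure line per condition, and flag a leak when the test
    flow deviates from the matching model by more than tol."""
    centers, assign = kmeans_2means(train_p)
    models = {}
    for c in (0, 1):
        xs = [p for p, a in zip(train_p, assign) if a == c]
        ys = [q for q, a in zip(train_q, assign) if a == c]
        models[c] = fit_line(xs, ys)
    c = 0 if (test_p - centers[0]) ** 2 <= (test_p - centers[1]) ** 2 else 1
    a, b = models[c]
    return abs(test_q - (a * test_p + b)) > tol
```

Training a single line across both operating regimes would smear the two pressure-flow relationships together; clustering first gives each regime its own model, which is the effect the paper reports.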
Abstract:
Geospatial clustering must be designed in such a way that it takes into account the special features of geoinformation and the peculiar nature of geographical environments, in order to successfully derive geospatially interesting global concentrations and localized excesses. This paper examines families of geospatial clustering methods recently proposed in the data mining community and identifies several features and issues especially important to geospatial clustering in data-rich environments.
Abstract:
Motivation: This paper introduces the software EMMIX-GENE, developed for the specific purpose of a model-based approach to the clustering of microarray expression data, in particular of tissue samples on a very large number of genes. The latter is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. A feasible approach is provided by first selecting a subset of the genes relevant to the clustering of the tissue samples: mixtures of t distributions are fitted to rank the genes in order of increasing size of the likelihood ratio statistic for the test of one versus two components in the mixture model. Imposing a threshold on the likelihood ratio statistic, in conjunction with a threshold on the size of a cluster, allows the selection of a relevant set of genes. However, even this reduced set of genes will usually be too large for a normal mixture model to be fitted directly to the tissues, so mixtures of factor analyzers are used to effectively reduce the dimension of the feature space of genes. Results: The usefulness of the EMMIX-GENE approach for the clustering of tissue samples is demonstrated on two well-known data sets on colon and leukaemia tissues. For both data sets, relevant subsets of the genes can be selected that reveal interesting clusterings of the tissues, which are either consistent with the external classification of the tissues or with background biological knowledge of these sets.
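The one-versus-two-component likelihood-ratio screen can be sketched with normal components fitted by EM, a stand-in for the t mixtures that EMMIX-GENE actually fits; the quartile-based initialisation, iteration count, and variance floor are assumptions made for this illustration.

```python
from math import exp, log, pi, sqrt

def _norm_pdf(x, mu, var):
    return exp(-(x - mu) ** 2 / (2 * var)) / sqrt(2 * pi * var)

def lr_statistic(xs, iters=100):
    """2*(log L_2 - log L_1): likelihood ratio statistic for one versus
    two Gaussian components fitted to the 1-D sample xs. A large value
    suggests the gene's expression values split into two groups."""
    n = len(xs)
    # one-component fit: sample mean and variance
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n or 1e-9
    ll1 = sum(log(_norm_pdf(x, mu, var)) for x in xs)
    # two-component fit by EM, initialised at the quartiles
    xs_sorted = sorted(xs)
    m1, m2 = xs_sorted[n // 4], xs_sorted[3 * n // 4]
    v1 = v2 = var
    w = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = []
        for x in xs:
            p1 = w * _norm_pdf(x, m1, v1)
            p2 = (1 - w) * _norm_pdf(x, m2, v2)
            r.append(p1 / (p1 + p2))
        # M-step: reweight means, variances, and mixing proportion
        n1 = max(sum(r), 1e-9)
        n2 = max(n - n1, 1e-9)
        w = n1 / n
        m1 = sum(ri * x for ri, x in zip(r, xs)) / n1
        m2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / n2
        v1 = max(1e-9, sum(ri * (x - m1) ** 2 for ri, x in zip(r, xs)) / n1)
        v2 = max(1e-9, sum((1 - ri) * (x - m2) ** 2 for ri, x in zip(r, xs)) / n2)
    ll2 = sum(log(w * _norm_pdf(x, m1, v1) + (1 - w) * _norm_pdf(x, m2, v2))
              for x in xs)
    return 2 * (ll2 - ll1)
```

Ranking genes by this statistic, as in the paper's first step, would simply mean computing it for each gene's expression vector and sorting; clearly bimodal genes receive much larger values than unimodal ones.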
Abstract:
In microarray studies, clustering techniques are often applied to derive meaningful insights into the data. In the past, hierarchical methods have been the primary clustering tool employed for this task, and they have mainly been applied heuristically to these cluster analysis problems. Further, a major limitation of these methods is their inability to determine the number of clusters. There is thus a need for a model-based approach to these clustering problems. To this end, McLachlan et al. [7] developed a mixture model-based algorithm (EMMIX-GENE) for the clustering of tissue samples. To further investigate the EMMIX-GENE procedure as a model-based approach, we present a case study involving the application of EMMIX-GENE to the breast cancer data studied recently in van 't Veer et al. [10]. Our analysis considers the problem of clustering the tissue samples on the basis of the genes, which is a non-standard problem because the number of genes greatly exceeds the number of tissue samples. We demonstrate how EMMIX-GENE can be useful in reducing the initial set of genes to a more computationally manageable size. The results from this analysis also emphasise the difficulty associated with the task of separating two tissue groups on the basis of a particular subset of genes. These results also shed light on why supervised methods have such a high misallocation error rate for the breast cancer data.