901 results for self-organizing map

Relevance:

100.00%

Publisher:

Abstract:

The study of the relationship between macroscopic traffic parameters, such as flow, speed and travel time, is essential to understanding the behaviour of freeway and arterial roads. However, the temporal dynamics of these parameters are difficult to model, especially for arterial roads, where the process of traffic change is driven by a variety of variables. The introduction of Bluetooth technology into the transportation area has proven exceptionally useful for monitoring vehicular traffic, as it allows reliable estimation of travel times and traffic demands. In this work, we propose an approach based on Bayesian networks for analyzing and predicting the complex dynamics of flow or volume, based on travel time observations from Bluetooth sensors. The spatio-temporal relationship between volume and travel time is captured through a first-order transition model and a univariate Gaussian sensor model. The two models are trained and tested on travel time and volume data from an arterial link, collected over a period of six days. To reduce the computational cost of the inference tasks, volume is converted into a discrete variable; the discretization is carried out with a self-organizing map. Preliminary results show that a simple Bayesian network can effectively estimate and predict the complex temporal dynamics of arterial volumes from travel time data. Not only is the model well suited to producing posterior distributions over single past, current and future states; it also allows computing joint distributions over sequences of states. Furthermore, the Bayesian network can achieve excellent prediction even when the stream of travel time observations is partially incomplete.
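The first-order transition model with a univariate Gaussian sensor model described above corresponds to a standard forward-filtering recursion over the discretized volume states. A minimal sketch under assumed, illustrative parameters (the three states, transition matrix and sensor means/deviations below are hypothetical, not the paper's):

```python
import numpy as np

def forward_filter(obs, T, mu, sigma, prior):
    """Forward filtering for a discrete-state model with a
    univariate Gaussian sensor model.
    obs:    travel-time observations (None = missing reading)
    T:      K x K transition matrix, T[i, j] = P(s_t = j | s_{t-1} = i)
    mu, sigma: per-state Gaussian sensor parameters
    prior:  initial state distribution
    """
    belief = np.asarray(prior, dtype=float)
    beliefs = []
    for o in obs:
        belief = belief @ T                     # predict step
        if o is not None:                       # update step (skipped if missing)
            lik = np.exp(-0.5 * ((o - mu) / sigma) ** 2) / sigma
            belief = belief * lik
        belief = belief / belief.sum()
        beliefs.append(belief)
    return np.array(beliefs)

# Three hypothetical volume states (low / medium / high) with
# illustrative travel-time sensor parameters (seconds).
T = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
mu = np.array([60.0, 90.0, 140.0])
sigma = np.array([10.0, 15.0, 20.0])
prior = np.array([1/3, 1/3, 1/3])

obs = [62.0, 95.0, None, 150.0]   # one missing observation
beliefs = forward_filter(obs, T, mu, sigma, prior)
```

The `None` entry illustrates why such a model degrades gracefully on an incomplete observation stream: the predict step still propagates the belief even when the update step has nothing to condition on.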

Abstract:

Close to one half of the LHC events are expected to be due to elastic or inelastic diffractive scattering. Still, predictions based on extrapolations of experimental data at lower energies differ by large factors in estimating the relative rates of diffractive event categories at LHC energies. By identifying diffractive events, detailed studies of proton structure can be carried out. The combined forward physics objects (rapidity gaps, forward multiplicity, and transverse energy flows) can be used to efficiently classify proton-proton collisions. Data samples recorded by the forward detectors, with a simple extension, will allow first estimates of the single diffractive (SD), double diffractive (DD), central diffractive (CD), and non-diffractive (ND) cross sections. The approach, which uses the measurement of inelastic activity in the forward and central detector systems, is complementary to the detection and measurement of leading beam-like protons. In this investigation, three different multivariate analysis approaches are assessed for classifying forward physics processes at the LHC. It is shown that with gene expression programming, neural networks and support vector machines, diffraction can be efficiently identified within a large sample of simulated proton-proton scattering events. The event characteristics are visualized using the self-organizing map algorithm.

Abstract:

In this paper, we propose a novel method using wavelet features as input to a self-organizing map neural network and a support vector machine for the classification of magnetic resonance (MR) images of the human brain. The proposed method classifies MR brain images as either normal or abnormal. We tested the proposed approach on a dataset of 52 MR brain images. A classification accuracy of more than 94% was achieved with the self-organizing map (SOM), and 98% with the support vector machine. We observed that the classification rate is higher for the support vector machine classifier than for the self-organizing map-based approach.
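The wavelet front end of such a pipeline can be sketched with a single-level 2-D Haar decomposition whose approximation (LL) coefficients form a compact feature vector for the downstream SOM or SVM. The choice of the Haar family and a single decomposition level is an illustrative assumption, not the paper's stated configuration:

```python
import numpy as np

def haar2d(image):
    """One level of a 2-D Haar wavelet transform.
    Returns (LL, LH, HL, HH) sub-bands; LL holds the coarse
    approximation typically used as a feature vector."""
    img = np.asarray(image, dtype=float)
    # Row transform: pairwise averages (low-pass) and differences (high-pass)
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Column transform on each intermediate result
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, LH, HL, HH

# Toy 4x4 "image": the LL band is a 2x2 block-average summary
# whose flattened values would feed the SOM or SVM classifier.
img = np.arange(16, dtype=float).reshape(4, 4)
LL, LH, HL, HH = haar2d(img)
features = LL.ravel()
```

Repeating the transform on LL yields a multi-level decomposition; the detail bands (LH, HL, HH) can be appended to the feature vector when edge-like texture matters for the classification.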

Abstract:

For active contour modeling (ACM), we propose a novel self-organizing map (SOM)-based approach, called the batch-SOM (BSOM), that attempts to integrate the advantages of SOM- and snake-based ACMs in order to extract the desired contours from images. We employ feature points, in the form of an edge map (as obtained from a standard edge-detection operation), to guide the contour (as in the case of SOM-based ACMs), along with the gradient and intensity variations in a local region to ensure that the contour does not "leak" into the object boundary in the case of faulty feature points (weak or broken edges). In contrast with snake-based ACMs, however, we do not use an explicit energy functional (based on gradient or intensity) for controlling the contour movement. We extend the BSOM to handle the extraction of contours of multiple objects, by splitting a single contour into as many subcontours as there are objects in the image. The BSOM and its extended version are tested on synthetic binary and gray-level images with both single and multiple objects. We also demonstrate the efficacy of the BSOM on images of objects having both convex and nonconvex boundaries. The results demonstrate the superiority of the BSOM over comparable approaches. Finally, we analyze the limitations of the BSOM.
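The batch update at the heart of a SOM-based ACM can be sketched as follows: each edge point votes for its best-matching contour node, and every node then moves to a neighbourhood-weighted mean of the edge points, so that neighbouring nodes are dragged along the contour together. This sketch omits the BSOM's gradient/intensity safeguards against weak or broken edges; the kernel width and iteration count are illustrative assumptions:

```python
import numpy as np

def batch_som_contour_step(contour, edge_points, sigma=1.0):
    """One batch-SOM update of a closed contour toward edge points.
    contour:     (N, 2) array of contour node positions
    edge_points: (M, 2) array of feature (edge-map) points
    sigma:       neighbourhood width along the contour, in node-index units
    """
    contour = np.asarray(contour, dtype=float)
    edge_points = np.asarray(edge_points, dtype=float)
    n = len(contour)
    # Best-matching contour node for every edge point
    d2 = ((edge_points[:, None, :] - contour[None, :, :]) ** 2).sum(-1)
    bmu = d2.argmin(axis=1)
    # Circular index distance between nodes (the contour is closed)
    idx = np.arange(n)
    diff = np.abs(idx[None, :] - bmu[:, None])
    ring = np.minimum(diff, n - diff)
    h = np.exp(-(ring ** 2) / (2 * sigma ** 2))   # neighbourhood kernel (M, N)
    # Batch step: each node moves to a kernel-weighted mean of edge points
    w = h.sum(axis=0)
    new = (h.T @ edge_points) / np.maximum(w, 1e-12)[:, None]
    # Nodes that attract no edge point keep their position
    return np.where(w[:, None] > 1e-12, new, contour)

# Toy example: a large circular contour shrinking onto unit-circle "edges"
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
edges = np.c_[np.cos(theta), np.sin(theta)]
contour = 2.0 * np.c_[np.cos(theta), np.sin(theta)]   # initial radius 2
for _ in range(5):
    contour = batch_som_contour_step(contour, edges, sigma=1.5)
```

After a few iterations the contour settles close to the unit circle; in practice the neighbourhood width is annealed so the contour first moves coherently and then snaps to fine edge detail.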

Abstract:

A self-organizing map (SOM) was used to cluster the water quality data of the Xiangxi River in the Three Gorges Reservoir region. The results showed that the 81 sampling sites could be divided into several groups representing different land use types. The forest-dominated region had low concentrations of most nutrient variables except COD, whereas the agricultural region had high concentrations of NO3-N, TN, alkalinity, and hardness. The sites downstream of an urban area were high in NH3-N, NO2-N, PO4-P and TP. Redundancy analysis was used to identify the individual effects of topography and land use on river water quality. The results revealed that watershed factors accounted for 61.7% of the variation in water quality in the Xiangxi River. Specifically, topographical characteristics explained 26.0% of the variation, land use explained 10.2%, and topography and land use together explained 25.5%. More than 50% of the variation in most water quality variables was explained by watershed characteristics. However, the water quality variables that are strongly influenced by urban and industrial point source pollution (NH3-N, NO2-N, PO4-P and TP) were not as well correlated with watershed characteristics.
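Clustering sampling sites with a SOM amounts to fitting a small grid of codebook vectors to the multivariate water-quality records and grouping sites by their best-matching unit. A minimal sketch on synthetic data (the grid size, learning schedule, and the two synthetic "site" groups are illustrative assumptions, not the Xiangxi data):

```python
import numpy as np

def train_som(data, rows=2, cols=2, iters=500, lr=0.5, sigma=1.0, seed=0):
    """Train a small rectangular SOM; returns the (rows*cols, d) codebook."""
    rng = np.random.default_rng(seed)
    d = data.shape[1]
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    W = rng.normal(size=(rows * cols, d))
    for t in range(iters):
        x = data[rng.integers(len(data))]
        frac = t / iters
        lr_t = lr * (1 - frac)                  # decaying learning rate
        sig_t = sigma * (1 - frac) + 0.1        # decaying neighbourhood width
        bmu = ((W - x) ** 2).sum(1).argmin()    # best-matching unit
        g = ((grid - grid[bmu]) ** 2).sum(1)    # squared grid distance to BMU
        h = np.exp(-g / (2 * sig_t ** 2))       # neighbourhood function
        W += lr_t * h[:, None] * (x - W)
    return W

def assign(data, W):
    """Best-matching unit index for each sample."""
    return ((data[:, None, :] - W[None, :, :]) ** 2).sum(-1).argmin(1)

# Synthetic "sampling sites": two well-separated multivariate groups,
# standing in for e.g. low-nutrient forest sites vs high-NO3-N/TN
# agricultural sites.
rng = np.random.default_rng(1)
forest = rng.normal([0, 0, 0], 0.1, size=(20, 3))
agri = rng.normal([5, 5, 5], 0.1, size=(20, 3))
data = np.vstack([forest, agri])
W = train_som(data)
labels = assign(data, W)
```

Sites sharing a best-matching unit (or adjacent units on the grid) form the groups that are then read against land use; in practice variables are standardized first so no single constituent dominates the distance.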

Abstract:

Four microsatellites were used to examine the genetic variability of the spawning stocks of Chinese sturgeon, Acipenser sinensis, from the Yangtze River, sampled over a 3-year period (1999-2001). Within 60 individuals, a total of 28 alleles were detected over four polymorphic microsatellite loci. The number of alleles per locus ranged from 4 to 15, with an average of 7. The number of genotypes per locus ranged from 6 to 41. The genetic diversity of the four microsatellite loci varied from 0.34 to 0.67, with an average value of 0.54. For the four microsatellite loci, the deviation from Hardy-Weinberg equilibrium was mainly due to null alleles. The mean number of alleles per locus and the mean heterozygosity were lower than the average values known for anadromous fishes. Fish were clustered according to their microsatellite characteristics using an unsupervised artificial neural network method, the self-organizing map. The results revealed no significant genetic differentiation, in terms of genetic distance, among samples collected in different years. The lack of heterogeneity among the annual groups of spawning stocks was explained by the complex age structure of Chinese sturgeon (8 to 27 years for males and 12 to 35 years for females), leading to a hypothesis about the maintenance of genetic diversity and stability in long-lived animals.

Abstract:

The largest damming project to date, the Three Gorges Dam has been built along the Yangtze River (China), the most species-rich river in the Palearctic region. Among 162 species of fish inhabiting the main channel of the upper Yangtze, 44 are endemic and are therefore under serious threat of global extinction from the dam. Accordingly, it is urgently necessary to develop strategies to minimize the impacts of the drastic environmental changes associated with the dam. We sought to identify potential reserves for the endemic species among the 17 tributaries in the upper Yangtze, based on presence/absence data for the 44 endemic species. Potential reserves for the endemic species were identified by characterizing the distribution patterns of endemic species with an adaptive learning algorithm called a "self-organizing map" (SOM). Using this method, we also predicted occurrence probabilities of species in potential reserves based on the distribution patterns of communities. Considering both SOM model results and actual knowledge of the biology of the considered species, our results suggested that 24 species may survive in the tributaries, 14 have an uncertain future, and 6 have a high probability of becoming extinct after dam filling.

Abstract:

The successful design of biomaterial scaffolds for articular cartilage tissue engineering requires an understanding of the impact of combinations of material formulation parameters on diverse and competing functional outcomes of biomaterial performance. This study sought to explore the use of a type of unsupervised artificial neural network, the self-organizing map, to identify relationships between scaffold formulation parameters (crosslink density, molecular weight, and concentration) and 11 such outcomes (including mechanical properties, matrix accumulation, metabolite usage and production, and histological appearance) for scaffolds formed from crosslinked elastin-like polypeptide (ELP) hydrogels. The artificial neural network recognized patterns in functional outcomes and provided a set of relationships between ELP formulation parameters and measured outcomes. Mapping resulted in the best mean separation amongst neurons for mechanical properties and pointed to crosslink density as the strongest predictor of most outcomes, followed by ELP concentration. The map also grouped together formulations that simultaneously resulted in the highest values for matrix production, the greatest changes in metabolite consumption or production, and the highest histological scores, indicating that the network was able to recognize patterns amongst diverse measurement outcomes. These results demonstrate the utility of artificial neural network tools for recognizing relationships in systems with competing parameters, toward the goal of optimizing and accelerating the design of biomaterial scaffolds for articular cartilage tissue engineering.

Abstract:

Dissertation presented as a partial requirement for obtaining a Master's degree in Statistics and Information Management

Abstract:

Innovation is regarded by economists as a determining factor for sustainable economic and social growth. In the context of the current global economy, marked by a deep crisis, it is imperative to understand innovation patterns in order to support better policies and responses to the challenges ahead. This understanding leads to the conclusion that the significant deviations in economic growth observed between different regions are also explained by spatial differences in innovation patterns. Accordingly, there has been a renewed and growing interest in studying innovation from a territorial perspective, together with a growing production and availability of data for studying and understanding its dynamics. The main objective of this dissertation is to demonstrate the usefulness of a data mining technique, the Self Organizing Map neural network, in exploring these data for the study of innovation. Specifically, we aim to demonstrate the ability of this technique both to identify regional innovation profiles and to visualize the evolution of those profiles over time in a virtual topological map, the SOM feature space, by comparison with a geographic map. Euronext data for 236 European regions were used, covering the years 2003 to 2009. The Self Organizing Map was built with GeoSOM, software developed by the Instituto Superior de Estatística e Gestão de Informação. The results obtained demonstrate the usefulness of this technique for visualizing the innovation patterns of European regions in space and time.

Abstract:

Currently, one of the main challenges affecting public health in Brazil is the steady growth in the number of cases and epidemics caused by the dengue virus. There are not enough studies to elucidate which factors contribute to the evolution of dengue epidemics. Factors such as sanitary conditions, geographic location, financial investment in infrastructure, and quality of life may be related to dengue incidence. In addition, another question that deserves greater attention is identifying the degree of impact of the determinant variables of dengue, and whether there is a pattern correlated with the incidence rate. Accordingly, the main objective of this work is to correlate the dengue incidence rate in the population of each Brazilian municipality with data on social, economic, demographic, and environmental aspects. Another relevant contribution of this work is the analysis of the spatial distribution patterns of the dengue incidence rate and their relation to the patterns found using the socioeconomic and environmental variables, in particular analyzing the temporal evolution over the period from 2008 to 2012. For these analyses, a Geographic Information System (GIS) was used together with data mining, through a neural network methodology, specifically the Kohonen self-organizing map (SOM). This methodology was employed to identify clustering patterns in these variables and their relation to the dengue incidence classes in Brazil (high, medium, and low). Thus, this project contributes significantly to a better understanding of the factors associated with the occurrence of dengue, and of how this disease is correlated with factors such as the environment, infrastructure, and location in geographic space.

Abstract:

The interest in using information to improve the quality of living in large urban areas and the efficiency of their governance has been around for decades. Nevertheless, the improvements in Information and Communications Technology have sparked a new dynamic in academic research, usually under the umbrella term of Smart Cities. This concept of Smart City can probably be translated, in a simplified version, into cities that are lived, managed and developed in an information-saturated environment. While it makes perfect sense and we can easily foresee the benefits of such a concept, there are presently still several significant challenges that need to be tackled before we can materialize this vision. In this work we aim at providing a small contribution in this direction, one that maximizes the relevance of the available information resources. One of the most detailed and geographically relevant information resources available for the study of cities is the census, more specifically the data available at block level (Subsecção Estatística). In this work, we use Self-Organizing Maps (SOM) and the variant Geo-SOM to explore the block-level data from the Portuguese census of Lisbon city for the years 2001 and 2011. We focus on gauging change, proposing ways that allow the comparison of the two time periods, which have two different underlying geographical bases. We proceed with the analysis of the data using different SOM variants, aiming at producing a two-fold portrait: one of the evolution of Lisbon during the first decade of the 21st century, and another of how the census dataset and SOMs can be used to produce an informational framework for the study of cities.

Abstract:

Euclidean distance matrix analysis (EDMA) methods are used to determine whether a significant difference exists between conformational samples of antibody complementarity determining region (CDR) loops: the isolated L1 loop and L1 in a three-loop assembly (L1, L3 and H3), obtained from Monte Carlo simulation. Once a significant difference is detected, the specific inter-Cα distance that contributes to the difference is identified using EDMA. The estimated and improved mean forms of the conformational samples of the isolated L1 loop and of L1 in the three-loop assembly, CDR loops of the antibody binding site, are described using EDMA and distance geometry (DGEOM). To the best of our knowledge, this is the first time EDMA methods have been used to analyze conformational samples of molecules obtained from Monte Carlo simulations. Therefore, validations of the EDMA methods, using both positive and negative control tests, were performed for the conformational samples of the isolated L1 loop and of L1 in the three-loop assembly. The EDMA-I bootstrap null hypothesis tests showed false positive results for the comparison of six samples of the isolated L1 loop, and true positive results for the comparison of conformational samples of the isolated L1 loop and L1 in the three-loop assembly. The bootstrap confidence interval tests revealed true negative results for comparisons of six samples of the isolated L1 loop, and false negative results for the conformational comparisons between the isolated L1 loop and L1 in the three-loop assembly. Different conformational sample sizes were further explored, by combining the samples of the isolated L1 loop to increase the sample size, or by clustering the samples using a self-organizing map (SOM) to narrow the conformational distribution of the samples being compared. However, neither strategy improved the results of the bootstrap null hypothesis or confidence interval tests. These results show that more work is required before EDMA methods can be used reliably for comparing samples obtained from Monte Carlo simulations.
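EDMA works on inter-landmark (here inter-Cα) Euclidean distance matrices, which are invariant to rotation and translation; the entry-wise ratio of two mean form matrices then flags the specific distances that drive a detected difference. A minimal sketch on synthetic landmark coordinates (not actual CDR loop conformations):

```python
import numpy as np

def distance_matrix(coords):
    """Euclidean distance matrix between landmarks (e.g. C-alpha atoms)."""
    c = np.asarray(coords, dtype=float)
    return np.linalg.norm(c[:, None, :] - c[None, :, :], axis=-1)

def form_difference(sample_a, sample_b):
    """EDMA-style form difference matrix: entry-wise ratio of the
    mean distance matrices of two conformational samples.
    Entries far from 1 flag the inter-landmark distances that
    contribute most to the detected shape difference."""
    mean_a = np.mean([distance_matrix(c) for c in sample_a], axis=0)
    mean_b = np.mean([distance_matrix(c) for c in sample_b], axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(mean_b > 0, mean_a / mean_b, 1.0)

# Two synthetic "conformational samples" of a 4-landmark loop:
# sample B stretches the distance between landmarks 0 and 3.
rng = np.random.default_rng(0)
base = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
stretched = base.copy()
stretched[3] = [0, 2, 0]
sample_a = [base + rng.normal(0, 0.01, base.shape) for _ in range(30)]
sample_b = [stretched + rng.normal(0, 0.01, base.shape) for _ in range(30)]
fdm = form_difference(sample_a, sample_b)
```

Here `fdm[0, 3]` comes out close to 0.5 (the stretched pair) while unchanged pairs stay near 1; the bootstrap tests described above wrap exactly this statistic in a resampling procedure to attach significance to it.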

Abstract:

The increasing interconnection of information and communication systems leads to further growth in complexity and thus also to a further increase in security vulnerabilities. Classical protection mechanisms such as firewall systems and anti-malware solutions have long ceased to offer adequate protection against intrusions into IT infrastructures. Intrusion detection systems (IDS) have established themselves as a highly effective instrument for protection against cyber attacks. Such systems collect and analyze information from network components and hosts in order to automatically detect unusual behaviour and security violations. While signature-based approaches can only detect already known attack patterns, anomaly-based IDS are also able to detect new, previously unknown attacks (zero-day attacks) at an early stage. The core problem of intrusion detection systems, however, lies in the optimal processing of the enormous volume of network data and the development of an adaptive detection model that works in real time. To meet these challenges, this dissertation provides a framework consisting of two main parts. The first part, called OptiFilter, uses a dynamic queuing concept to process the continuously arriving network data, continuously builds network connections, and exports structured input data for the IDS. The second part is an adaptive classifier comprising a classifier model based on an Enhanced Growing Hierarchical Self Organizing Map (EGHSOM), a model of normal network behaviour (NNB), and an update model. In OptiFilter, tcpdump and SNMP traps are used to continuously aggregate network packets and host events. These aggregated network packets and host events are further analyzed and converted into connection vectors.
To improve the detection rate of the adaptive classifier, the artificial neural network GHSOM is studied intensively and substantially extended. Different approaches are proposed and discussed in this dissertation: a classification-confidence margin threshold is defined to uncover unknown malicious connections; the stability of the growth topology is increased by novel approaches for initializing the weight vectors and by strengthening the winner neurons; and a self-adaptive procedure is introduced to keep the model continuously up to date. In addition, the main task of the NNB model is the further examination of the unknown connections detected by the EGHSOM and the verification of whether they are in fact normal. However, due to the concept drift phenomenon, network traffic data change constantly, which leads to the generation of non-stationary network data in real time. This phenomenon is better controlled by the update model. The EGHSOM model can effectively detect new anomalies, and the NNB model optimally adapts to the changes in the network data. In the experimental investigations, the framework showed promising results. In the first experiment, the framework was evaluated in offline mode: OptiFilter was evaluated with offline, synthetic, and realistic data, and the adaptive classifier was evaluated with 10-fold cross-validation to estimate its accuracy. In the second experiment, the framework was installed on a 1 to 10 GB network link and evaluated online in real time. OptiFilter successfully converted the enormous volume of network data into structured connection vectors, and the adaptive classifier classified them precisely.
The comparative study between the developed framework and other well-known IDS approaches shows that the proposed IDS framework outperforms all of them. This can be attributed to the following key points: the processing of the collected network data, the best overall performance (e.g. total accuracy), the detection of unknown connections, and the development of a real-time intrusion detection model.
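The classification-confidence idea above can be illustrated in its simplest form: a connection vector whose distance to its best-matching unit exceeds a threshold learned from normal traffic is flagged as unknown and possibly malicious. In this sketch a plain SOM codebook stands in for the EGHSOM, and the mean-plus-k-sigma threshold rule is an illustrative assumption:

```python
import numpy as np

def quantization_errors(X, codebook):
    """Distance from each connection vector to its best-matching unit."""
    d = np.linalg.norm(X[:, None, :] - codebook[None, :, :], axis=-1)
    return d.min(axis=1)

def fit_threshold(normal_X, codebook, k=3.0):
    """Margin threshold from normal traffic: mean + k * std of the
    quantization error (the value of k is an illustrative choice)."""
    qe = quantization_errors(normal_X, codebook)
    return qe.mean() + k * qe.std()

# Synthetic "normal" connection vectors around two behaviour modes,
# with a toy codebook placed at the mode centres.
rng = np.random.default_rng(7)
normal = np.vstack([rng.normal([0, 0], 0.1, (100, 2)),
                    rng.normal([3, 3], 0.1, (100, 2))])
codebook = np.array([[0.0, 0.0], [3.0, 3.0]])
thr = fit_threshold(normal, codebook)

# A connection far from any learned mode exceeds the margin and is flagged.
queries = np.array([[0.05, -0.02], [8.0, -5.0]])
flags = quantization_errors(queries, codebook) > thr
```

Flagged connections would then be handed to a normal-behaviour model for a second opinion, mirroring the EGHSOM/NNB division of labour described above; the threshold itself must be re-estimated as traffic drifts.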