774 results for outlier detection, data mining, gpgpu, gpu computing, supercomputing
Abstract:
Online Social Network (OSN) services provided by Internet companies bring people together to chat, share information, and consume content. Meanwhile, these services (which can be regarded as social media) generate huge amounts of data every day, every hour, even every minute and every second. Researchers are currently interested in analyzing OSN data, extracting interesting patterns from it, and applying those patterns to real-world applications. However, the sheer scale of OSN data makes it difficult to analyze effectively. This dissertation focuses on applying data mining and information retrieval techniques to mine two key components of social media data: users and user-generated content. Specifically, it addresses three problems related to social media users and content: (1) how does one organize the users and the content? (2) how does one summarize the textual content so that users do not have to read every post to capture the general idea? (3) how does one identify influential users in social media to benefit other applications, e.g., marketing campaigns? The contributions of this dissertation are briefly summarized as follows. (1) It provides a comprehensive and versatile data mining framework for analyzing users and user-generated content from social media. (2) It designs a hierarchical co-clustering algorithm to organize the users and content. (3) It proposes multi-document summarization methods to extract core information from social network content. (4) It introduces three important dimensions of social influence and a dynamic influence model for identifying influential users.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
In this paper, we implement an anomaly detection system using the Dempster-Shafer method. Using two standard benchmark problems, we show that combining multiple signals achieves better results than using a single signal. We further show, by applying this approach to a real-world email dataset, that the algorithm works for email worm detection. Dempster-Shafer is a promising method for anomaly detection problems with multiple features (data sources) and two or more classes.
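The core of this approach is Dempster's rule of combination, which fuses basic probability assignments (masses) from independent evidence sources while renormalizing away conflicting mass. Below is a minimal sketch in Python; the two-class frame {normal, anomalous} and the example mass values are illustrative assumptions, not figures from the paper.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: fuse two mass functions keyed by frozenset focal elements."""
    combined = {}
    conflict = 0.0  # total mass falling on contradictory (empty) intersections
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    # Normalize by the non-conflicting mass (assumes conflict < 1)
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Illustrative masses from two detector signals over the frame {normal, anomalous}
N, A = frozenset(["normal"]), frozenset(["anomalous"])
theta = N | A  # mass on the full frame expresses ignorance
m_signal1 = {A: 0.6, N: 0.1, theta: 0.3}
m_signal2 = {A: 0.5, N: 0.2, theta: 0.3}
print(combine(m_signal1, m_signal2))
```

Because each signal leaves some mass on the full frame, neither source alone is decisive; after combination the mass on {anomalous} rises, which is exactly the "combining multiple signals beats a single signal" effect the abstract reports.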
Abstract:
Thanks to advanced technologies and social networks that allow data to be widely shared across the Internet, there has been an explosion of pervasive multimedia data, generating high demand for multimedia services and applications in various areas that let people easily access and manage multimedia data. In response to these demands, multimedia big data analysis has become an emerging hot topic in both industry and academia, ranging from basic infrastructure, management, search, and mining to security, privacy, and applications. Within the scope of this dissertation, a multimedia big data analysis framework is proposed for semantic information management and retrieval, with a focus on rare event detection in videos. The proposed framework is able to explore hidden semantic feature groups in multimedia data and incorporate temporal semantics, especially for video event detection. First, a hierarchical semantic data representation is presented to alleviate the semantic gap issue, and the Hidden Coherent Feature Group (HCFG) analysis method is proposed to capture the correlation between features and separate the original feature set into semantic groups, seamlessly integrating multimedia data in multiple modalities. Next, an Importance Factor based Temporal Multiple Correspondence Analysis (IF-TMCA) approach is presented for effective event detection. Specifically, the HCFG algorithm is integrated with the Hierarchical Information Gain Analysis (HIGA) method to generate the Importance Factor (IF) for producing the initial detection results. The TMCA algorithm is then proposed to efficiently incorporate temporal semantics for re-ranking and improving the final performance. Finally, a sampling-based ensemble learning mechanism is applied to further accommodate imbalanced datasets. In addition to the multimedia semantic representation and class imbalance problems, lack of organization is another critical issue for multimedia big data analysis. In this framework, an affinity propagation-based summarization method is also proposed to transform unorganized data into a better structure with clean, well-organized information. The whole framework has been thoroughly evaluated across multiple domains, such as soccer goal event detection and disaster information management.
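As one concrete illustration of the summarization step, affinity propagation clusters items by message passing and returns exemplars that can stand in for groups of unorganized data. The sketch below uses scikit-learn's AffinityPropagation on toy feature vectors; the synthetic feature matrix and its interpretation as multimedia descriptors are assumptions for illustration, not the dissertation's actual pipeline.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Toy feature vectors standing in for multimedia items (e.g., keyframe descriptors)
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(0.0, 0.3, size=(20, 8)),   # one loose group of items
    rng.normal(3.0, 0.3, size=(20, 8)),   # a second, well-separated group
])

# Affinity propagation picks exemplars automatically; no cluster count is preset
ap = AffinityPropagation(random_state=0).fit(X)
print("exemplar indices:", ap.cluster_centers_indices_)
print("cluster of first item:", ap.labels_[0])
```

The exemplars it selects are actual data points, which is what makes the method attractive for summarization: the "summary" is a small set of real, representative items rather than synthetic centroids.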
Abstract:
Clustering data streams is an important task in data mining research. Recently, some algorithms have been proposed to cluster data streams as a whole, but only a few of them deal with multivariate data streams, and even these merely aggregate the attributes without considering the correlations among them. To overcome this issue, we propose a new framework that clusters multivariate data streams based on their evolving behavior over time, exploring the correlations among their attributes by computing the fractal dimension. Experimental results with climate data streams show that cluster quality and compactness improve compared to the competing method, reinforcing the point that attribute correlations cannot be set aside. In fact, cluster compactness is 7 to 25 times better using our method. Our framework also proves to be a useful tool for assisting meteorologists in understanding climate behavior over a period of time.
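A standard way to estimate the fractal dimension of a dataset is box counting: count occupied grid cells at several scales and fit the slope of log(count) against log(scale). The sketch below is a minimal, generic estimator under that assumption; it is not the paper's specific stream algorithm, which must maintain such counts incrementally.

```python
import numpy as np

def box_counting_dimension(points, scales=(2, 4, 8, 16, 32)):
    """Estimate the box-counting (fractal) dimension of a point set."""
    points = np.asarray(points, dtype=float)
    # Normalize each attribute into the unit hypercube
    mins, maxs = points.min(axis=0), points.max(axis=0)
    points = (points - mins) / np.where(maxs > mins, maxs - mins, 1.0)
    counts = []
    for s in scales:
        # Assign each point to a grid cell of side 1/s and count occupied cells
        cells = np.minimum((points * s).astype(int), s - 1)
        counts.append(len({tuple(c) for c in cells}))
    # Slope of log(count) vs. log(scale) approximates the dimension
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

# Two perfectly correlated attributes trace a line in 2-D: dimension close to 1
t = np.linspace(0, 1, 5000)
print(box_counting_dimension(np.column_stack([t, t])))
```

This also shows why the fractal dimension captures attribute correlations: two correlated attributes embedded in 2-D yield a dimension near 1, not 2, so a drop in the measured dimension reveals redundancy that simple per-attribute aggregation would miss.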
Abstract:
Background: The inherent complexity of statistical methods and clinical phenomena compels researchers with diverse domains of expertise to work in interdisciplinary teams, where none has complete knowledge of their counterpart's field. As a result, knowledge exchange may often be characterized by miscommunication leading to misinterpretation, ultimately resulting in errors in research and even in clinical practice. Although communication plays a central role in interdisciplinary collaboration and miscommunication can have a negative impact on research processes, to the best of our knowledge no study has yet explored how data analysis specialists and clinical researchers communicate over time. Methods/Principal Findings: We conducted qualitative analysis of encounters between clinical researchers and data analysis specialists (an epidemiologist, a clinical epidemiologist, and a data mining specialist). These encounters were recorded and systematically analyzed using a grounded theory methodology to extract emerging themes, followed by data triangulation and analysis of negative cases for validation. A policy analysis was then performed using a system dynamics methodology, looking for potential interventions to improve this process. Four major themes emerged. Definitions using lay language were frequently employed as a way to bridge the language gap between the specialties. Thought experiments presented a series of "what if" situations that helped clarify how the method or information from the other field would behave if exposed to alternative situations, ultimately aiding in explaining their main objective. Metaphors and analogies were used to translate concepts across fields, from the unfamiliar to the familiar. Prolepsis was used to anticipate study outcomes, helping specialists understand the current context based on an understanding of their final goal. Conclusion/Significance: The communication between clinical researchers and data analysis specialists presents multiple challenges that can lead to errors.
Abstract:
The application of laser-induced breakdown spectrometry (LIBS) to the direct analysis of plant materials is a great challenge that still requires effort for its development and validation. To this end, a series of experimental approaches has been carried out to show that LIBS can be used as an alternative to wet-acid-digestion-based methods for the analysis of agricultural and environmental samples. The large amount of information provided by LIBS spectra for these complex samples increases the difficulty of selecting the most appropriate wavelengths for each analyte. Some applications have suggested that improvements in both accuracy and precision can be achieved by applying multivariate calibration to LIBS data, compared to univariate regression developed from line emission intensities. In the present work, the performance of univariate and multivariate calibration, based on partial least squares regression (PLSR), was compared for the analysis of pellets of plant materials made from an appropriate mixture of cryogenically ground samples with cellulose as the binding agent. Developing a specific PLSR model for each analyte and selecting spectral regions containing only lines of the analyte of interest were the best conditions for the analysis. In this particular application, the models showed similar performance, but PLSR seemed to be more robust due to a lower occurrence of outliers in comparison to the univariate method. The data suggest that efforts concerning sample presentation and the fitness of standards for LIBS analysis are needed to fulfill the boundary conditions for matrix-independent development and validation. (C) 2009 Elsevier B.V. All rights reserved.
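PLSR projects the spectra onto a small number of latent components chosen to maximize covariance with the analyte concentration, which is why it tolerates the thousands of correlated intensities in a LIBS spectrum better than a single-line univariate fit. A minimal sketch with scikit-learn follows; the synthetic spectra, line positions, and component count are illustrative assumptions, not the paper's data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for LIBS spectra: 100 samples x 500 wavelengths,
# with the analyte signal concentrated in a few emission lines plus noise
rng = np.random.default_rng(1)
concentration = rng.uniform(0, 10, size=100)
lines = np.zeros(500)
lines[[120, 121, 340]] = 1.0                      # hypothetical analyte emission lines
X = np.outer(concentration, lines) + rng.normal(0, 0.2, size=(100, 500))
X_train, X_test, y_train, y_test = train_test_split(X, concentration, random_state=0)

pls = PLSRegression(n_components=3)               # a few latent variables suffice here
pls.fit(X_train, y_train)
print("R^2 on held-out spectra:", round(pls.score(X_test, y_test), 3))
```

In practice the number of components is tuned by cross-validation; too many components refit the noise, reproducing the very outlier sensitivity the multivariate approach is meant to reduce.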
Abstract:
3rd SMTDA Conference Proceedings, 11-14 June 2014, Lisbon, Portugal.
Abstract:
Given the constant evolution of the Internet, its use is almost mandatory. Through the web it is possible to check bank statements, shop in distant countries, and pay for services without leaving home, among many other things. There are countless ways to use this network. As it became so useful and so close to people, users also began to acquire more computing knowledge. The Internet also hosts various guides to illicit system intrusion, as well as manuals for other criminal practices. This kind of information, combined with users' growing computing skills, has changed the current paradigms of computer security. Today, computer security is less concerned with hardware; the main goal is safeguarding data and ensuring service continuity. This is fundamentally due to organizations' dependence on their digital data and, increasingly, on the services they make available online. Given the change in the threats and in what is to be protected, the security mechanisms must also change. It becomes necessary to know the attacker, in order to predict what motivates them and what they intend to attack. In this context, we proposed implementing systems that record illicit access attempts at five higher-education institutions and then analyzing the collected information with the help of data mining techniques. This solution is rarely used for this purpose in research, so it was necessary to look for analogies with other application areas to gather documentation relevant to its implementation. The resulting solution proved effective, leading to the development of an application that fuses the logs of the Honeyd and Snort applications (and is also responsible for their processing, preparation, and delivery in a Comma Separated Values (CSV) file), adding knowledge about what can be obtained statistically and revealing useful, previously unknown characteristics of the attackers. This knowledge can be used by a system administrator to improve the performance of security mechanisms such as firewalls and Intrusion Detection Systems (IDS).
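The log fusion step can be pictured as merging time-stamped records from both tools into one chronologically ordered CSV ready for data mining. The sketch below is a generic illustration in Python; the file names and the simplified "timestamp message" line format are assumptions, since real Honeyd and Snort logs need format-specific parsers.

```python
import csv

def parse_simple_log(path, source):
    """Yield (timestamp, source, message) rows; assumes 'timestamp<space>message' lines."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            ts, _, msg = line.partition(" ")
            yield ts, source, msg

# Hypothetical input paths standing in for the two tools' log files
rows = list(parse_simple_log("honeyd.log", "honeyd"))
rows += parse_simple_log("snort.log", "snort")
rows.sort(key=lambda r: r[0])  # chronological merge of both sources

with open("fused.csv", "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out)
    writer.writerow(["timestamp", "source", "message"])
    writer.writerows(rows)
```

Keeping a source column in the fused file preserves which sensor produced each event, which matters later when mining for attacker characteristics across both data sources.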
Abstract:
Project work carried out to obtain the Master's degree in Informatics and Computer Engineering
Abstract:
The monitoring of undesirable effects following vaccination is complex. Several confounding factors can give rise to merely temporal but spurious associations that can alter risk perception and cause generalized distrust about the safe use of vaccines. Indeed, vaccines are complex drugs with unique characteristics, so their monitoring requires specifically designed methodological approaches. It is therefore understandable that, since the development of pharmacovigilance, there has been a drive to develop new methodologies that complement the Spontaneous Reporting Systems already in place. We proposed to develop and test a new model for monitoring adverse reactions to vaccines, based on users' self-reporting of events following vaccination, and to test its capability to generate disproportionality signals by applying quantitative methods of signal generation to data mining. To that end, we set up an uncontrolled cohort of users vaccinated in healthcare centers, with a follow-up period of fifteen days. Adverse vaccine events were registered by the users themselves in a paper diary. The data were analyzed using descriptive statistics and two quantitative methods of signal generation: Proportional Reporting Ratio and Information Component.
The methodology we used allowed for the generation of a sufficient body of evidence for signal generation; four signals were generated. Regarding the data mining, the use of the Information Component as a method for generating disproportionality signals seems to increase scientific efficiency by reducing the number of events needed for signal detection. The information reported by users seems valid as an indicator of non-serious adverse vaccine reactions, allowing events to be registered without the bias of the reporter's evaluation of the causal relation. The main adverse events reported were injection site reactions (62.7%) and fever (31.4%).
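Both disproportionality measures come from a 2x2 table of reports: with a = reports pairing the vaccine with the event, b = the vaccine without the event, c = the event with other vaccines, and d = neither, PRR = [a/(a+b)] / [c/(c+d)], and the maximum-likelihood Information Component is the log2 ratio of observed to expected co-reporting. The sketch below implements these textbook formulas; the counts are invented for illustration, and the study's actual IC likely used the Bayesian (BCPNN) shrinkage variant rather than this plain estimate.

```python
import math

def prr(a, b, c, d):
    """Proportional Reporting Ratio from a 2x2 report table."""
    return (a / (a + b)) / (c / (c + d))

def information_component(a, b, c, d):
    """Maximum-likelihood IC: log2(observed / expected) co-report count."""
    n = a + b + c + d
    expected = (a + b) * (a + c) / n  # count expected if vaccine and event were independent
    return math.log2(a / expected)

# Invented counts: 12 reports pair this vaccine with the event, out of 5000 total reports
a, b, c, d = 12, 988, 20, 3980
print("PRR:", round(prr(a, b, c, d), 2))  # PRR > 2 is a common signalling threshold
print("IC:", round(information_component(a, b, c, d), 2))  # IC > 0 means over-reporting
```

With these invented counts, PRR = 2.4 and IC is about 0.91, i.e., the event is co-reported with the vaccine nearly twice as often as independence would predict, which is the kind of disproportionality the four generated signals reflect.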
Abstract:
Master's in Informatics Engineering, specialization area in Knowledge and Decision Technologies
Abstract:
Dissertation presented as a partial requirement for obtaining the Master's degree in Statistics and Information Management