53 results for Spatial Data mining


Relevance: 80.00%

Abstract:

This paper describes the establishment of a Virtual Producer/Consumer Agent (VPCA) to optimize the integrated management of distributed energy resources and to improve and control Demand Side Management (DSM) and its aggregated loads. The paper presents the VPCA architecture and the proposed function-based organization used to coordinate the several generation technologies, the different load types and the storage systems. This VPCA organization uses a framework based on data mining techniques to characterize the customers. The paper includes results of several experimental test cases, using real data and taking into account electricity generation resources as well as consumption data.
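
As a rough illustration of the coordination idea, the sketch below aggregates generation, load and storage figures into a net position for one period; the resource names and the balancing rule are illustrative assumptions, not the VPCA's actual function-based organization.

```python
# A minimal sketch of aggregating distributed resources into a net
# position; resource names and the balancing rule are illustrative
# assumptions, not the VPCA's actual function-based organization.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    power_kw: float  # positive = generation, negative = consumption

def net_position(resources, storage_kw):
    """kW the aggregate can sell (positive) or must buy (negative)."""
    balance = sum(r.power_kw for r in resources)
    if balance < 0:  # deficit: discharge storage up to the shortfall
        return balance + min(storage_kw, -balance)
    return balance   # surplus: offer to the market (charging omitted)

fleet = [Resource("pv_plant", 120.0), Resource("wind", 80.0),
         Resource("industrial_load", -150.0), Resource("residential", -70.0)]
print(net_position(fleet, storage_kw=30.0))  # -> 0.0: storage covers the deficit
```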

Relevance: 80.00%

Abstract:

Many current e-commerce systems personalize the content shown to users. In this sense, recommender systems make personalized suggestions and provide information about items available in the system. Nowadays there is a vast number of methods, including data mining techniques, that can be employed for personalization in recommender systems. However, these methods are still quite vulnerable to limitations and shortcomings related to the recommender environment. To deal with some of them, in this work we implement a recommendation methodology in a recommender system for tourism, where classification based on association is applied. Classification based on association, also named associative classification, is an alternative data mining technique that combines concepts from classification and association so that association rules can be employed in a prediction context. The proposed methodology was evaluated in case studies, where we verified that it is able to mitigate limitations of recommender systems and to enhance recommendation quality.
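
For illustration, the sketch below shows the core of an associative classification step using the mlxtend library: association rules are mined from one-hot transaction data, and rules whose antecedents match a user's profile predict the recommended items. The toy dataset and thresholds are assumptions, not the paper's.

```python
# A minimal sketch of classification based on association for
# recommendation; items and thresholds are illustrative.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# One-hot visit history: each row is a tourist, each column an attraction.
visits = pd.DataFrame(
    [[1, 1, 0, 1], [1, 1, 1, 0], [0, 1, 1, 1], [1, 1, 0, 1], [1, 0, 1, 1]],
    columns=["museum", "old_town", "beach", "winery"],
).astype(bool)

# Mine frequent itemsets, then derive association rules from them.
frequent = apriori(visits, min_support=0.4, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)

def recommend(profile, rules):
    """Use rules whose antecedent is covered by the user's profile
    to predict (recommend) the consequent items."""
    hits = rules[rules["antecedents"].apply(lambda a: a <= profile)]
    return set().union(*hits["consequents"]) - profile if len(hits) else set()

print(recommend({"museum", "old_town"}, rules))
```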

Relevance: 80.00%

Abstract:

This paper presents an integrated system that helps both retail companies and electricity consumers define the best retail contracts and tariffs. The system is composed of a Decision Support System (DSS) based on a Consumer Characterization Framework (CCF). The CCF relies on data mining techniques applied to large amounts of consumption data to obtain useful knowledge about electricity consumers. This knowledge is acquired through an innovative and systematic approach that identifies different consumer classes, each represented by a load profile, and characterizes them using decision trees. The framework generates inputs for the knowledge base and the database of the DSS: the rule sets derived from the decision trees are integrated into the knowledge base, while the load profiles, together with information about contracts and electricity prices, form the database. The DSS is able to classify different consumers, present their load profiles and test different electricity tariffs and contracts. Its final outputs are a comparative economic analysis of the different contracts and advice on the most economical contract for each consumer class. The presentation of the DSS is completed with an application example using a real database of consumers from the Portuguese distribution company.
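
A minimal sketch of the decision-tree characterization step is shown below: a tree classifies consumers into (clustering-derived) classes, and its textual rule set is the kind of input that could feed a DSS knowledge base. The consumption indices and labels are synthetic assumptions.

```python
# A minimal sketch: classify consumers into load-profile classes and
# export the tree as rules for a knowledge base. Features and class
# labels are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Synthetic consumption indices per consumer.
X = rng.random((200, 2))
y = (X[:, 0] > 0.5).astype(int)  # stand-in for a clustering-derived class

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
# The textual rule set would feed the DSS knowledge base.
print(export_text(tree, feature_names=["night_day_ratio", "peak_share"]))
```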

Relevance: 80.00%

Abstract:

Introduction: A major focus of the data mining process, and especially of machine learning research, is to automatically learn to recognize complex patterns and help make adequate decisions based strictly on the acquired data. Since imaging techniques like Myocardial Perfusion Imaging (MPI) in Nuclear Cardiology can take up a large part of the daily workflow and generate gigabytes of data, computerized analysis may offer advantages over human analysis: shorter time, homogeneity and consistency, automatic recording of analysis results, relatively low cost, etc. Objectives: This study evaluates the efficacy of this methodology for assessing MPI stress studies and for deciding whether or not to continue the evaluation of each patient. The objective was to automatically classify each patient test into one of three groups: “Positive”, “Negative” and “Indeterminate”. “Positive” patients would proceed directly to the rest part of the exam, “Negative” patients would be exempted from continuation, and only the “Indeterminate” group would require the clinician’s analysis, thus saving clinician effort, increasing workflow fluidity at the technologist’s level and probably saving patients’ time. Methods: The WEKA v3.6.2 open source software was used for a comparative analysis of three WEKA algorithms (“OneR”, “J48” and “Naïve Bayes”) in a retrospective study on the “SPECT Heart Dataset”, available at the Machine Learning Repository of the University of California, Irvine, using the corresponding clinical results, signed off by expert nuclear cardiologists, as the reference. For evaluation purposes, criteria such as “Precision”, “Incorrectly Classified Instances” and “Receiver Operating Characteristic (ROC) Areas” were considered. Results: The interpretation of the data suggests that the Naïve Bayes algorithm has the best performance among the three selected algorithms. Conclusions: It is believed, and apparently supported by the findings, that machine learning algorithms could significantly assist, at an intermediary level, in the analysis of scintigraphic data obtained with MPI, namely after stress acquisition, eventually increasing the efficiency of the entire system and potentially easing the roles of both technologists and nuclear cardiologists. As a continuation of this study, it is planned to use more patient information and to significantly increase the population under study, in order to improve system accuracy.
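
A rough scikit-learn analogue of the WEKA comparison is sketched below: a depth-1 stump stands in for OneR, an unpruned tree for J48 (C4.5), and Bernoulli Naïve Bayes suits the binary SPECT features; models are scored by cross-validated ROC area. The data here is synthetic, not the UCI SPECT Heart Dataset itself.

```python
# A rough analogue of the WEKA comparison: a one-rule stump (~OneR),
# a C4.5-style tree (~J48) and Naive Bayes, scored by ROC area on
# synthetic binary features shaped like the SPECT dataset.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import BernoulliNB
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(267, 22))      # 22 binary partial-diagnosis features
y = (X[:, :5].sum(axis=1) > 2).astype(int)  # synthetic Positive/Negative label

models = {
    "OneR-like stump": DecisionTreeClassifier(max_depth=1),
    "J48-like tree": DecisionTreeClassifier(),
    "Naive Bayes": BernoulliNB(),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean ROC area = {auc:.3f}")
```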

Relevance: 80.00%

Abstract:

Given the constant evolution of the Internet, its use has become almost mandatory. Through the web it is possible to check bank statements, shop in distant countries and pay for services without leaving home, among many other things: there are countless ways of using this network. As it became so useful and so close to people, they also began to acquire more computing knowledge. Several guides to illicit system intrusion are also published on the Internet, as well as manuals for other criminal practices. This kind of information, combined with users' growing computing skills, has changed today's information security paradigms. Nowadays information security is less concerned with hardware; the main goal is safeguarding data and ensuring service continuity. This is fundamentally due to organizations' dependence on their digital data and, increasingly, on the services they make available online. Given the change in threats and in what needs protecting, security mechanisms must also change. It becomes necessary to know the attacker, anticipating what motivates them and what they intend to attack. In this context, we proposed deploying systems to record illicit access attempts at five higher education institutions and subsequently analysing the collected information with the help of data mining techniques. This solution is rarely used for this purpose in research, so it was necessary to look for analogies with other application areas to gather documentation relevant to its implementation. The resulting solution proved effective, leading to the development of an application that fuses the logs of Honeyd and Snort (and also handles their processing, preparation and delivery as a Comma Separated Values (CSV) file), adding knowledge about what can be obtained statistically and revealing useful, previously unknown characteristics of the attackers. This knowledge can be used by a systems administrator to improve the performance of security mechanisms such as firewalls and Intrusion Detection Systems (IDS).
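
A minimal sketch of the log-fusion step appears below: records from the two sources are normalized into common fields and written to a single CSV sorted by timestamp. The line layout parsed here is a simplified assumption; real Honeyd and Snort logs require format-specific parsers.

```python
# A minimal sketch of log fusion: normalise records from both tools
# into common fields and emit one CSV ordered by timestamp. The line
# layout assumed here is simplified; real Honeyd and Snort logs need
# format-specific parsers.
import csv

FIELDS = ["timestamp", "source", "src_ip", "dst_ip", "detail"]

def parse_simple(line, source):
    # Assumed layout: "<timestamp> <src_ip> <dst_ip> <detail...>"
    ts, src, dst, *rest = line.split()
    return {"timestamp": ts, "source": source,
            "src_ip": src, "dst_ip": dst, "detail": " ".join(rest)}

records = []
for path, source in [("honeyd.log", "honeyd"), ("snort.log", "snort")]:
    with open(path) as fh:
        records += [parse_simple(line, source) for line in fh if line.strip()]

records.sort(key=lambda r: r["timestamp"])  # ISO timestamps sort lexically
with open("fused.csv", "w", newline="") as out:
    writer = csv.DictWriter(out, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(records)
```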

Relevance: 80.00%

Abstract:

Over recent years, association rules have played an important role in extracting information and knowledge from databases, thereby supporting the decision-making process. Most research on association rules is based on the support-confidence model, which yields association rules involving, in particular, frequent itemsets. In recent years, however, itemsets that occur less often have also been explored, giving rise to so-called rare or infrequent association rules. Many rules based on these items are of particular interest to the user. Current research on association rules therefore seeks to generate as many interesting rules as possible, combining rare and frequent items. This study begins with a survey of the main data mining algorithms that address association rules. The aim of this work is to examine existing techniques and algorithms for extracting association rules, to identify their main advantages and disadvantages, and finally to develop an algorithm whose goal is to generate association rules involving both rare and frequent items.
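
As an illustration of one common route to rare rules, the sketch below mines with a very low minimum support using mlxtend and then keeps only high-confidence rules whose support falls under a rarity ceiling; the thresholds and transactions are assumptions, not the algorithm developed in this work.

```python
# A minimal sketch of surfacing rare association rules: mine with a
# very low minimum support, then keep rules that are infrequent but
# reliable. Thresholds and transactions are illustrative.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

baskets = pd.DataFrame(
    [[1, 1, 0, 0], [1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1], [1, 1, 0, 0]],
    columns=["a", "b", "c", "d"],
).astype(bool)

itemsets = apriori(baskets, min_support=0.15, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.8)
rare_rules = rules[rules["support"] <= 0.25]  # infrequent but high-confidence
print(rare_rules[["antecedents", "consequents", "support", "confidence"]])
```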

Relevance: 80.00%

Abstract:

Thesis submitted to Universidade Portucalense for the degree of Master in Informatics, prepared under the supervision of Prof. Doutor Reis Lima and Eng. Jorge S. Coelho.

Relevance: 80.00%

Abstract:

Master's Dissertation

Relevance: 80.00%

Abstract:

Master's degree in Electrical Engineering – Electrical Power Systems

Relevance: 80.00%

Abstract:

Searching data for patterns so as to form groups is known as data clustering, one of the most common tasks in data mining and pattern recognition. This dissertation addresses the concept of entropy and uses algorithms with entropic criteria to cluster biomedical data. The use of entropy for clustering is relatively recent. It stems from an attempt to exploit entropy's ability to extract higher-order information from the data distribution, either using it as the criterion for forming groups (clusters) or using it to complement and improve existing algorithms in search of better results. Some work involving algorithms based on entropic criteria has shown positive results in the analysis of real data. In this work, several algorithms based on entropic criteria were explored and applied to biomedical data, in an attempt to assess their suitability for this type of data. The results of the tested algorithms are compared with those obtained by more "conventional" algorithms such as k-means, spectral clustering algorithms and a density-based algorithm.
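
As an illustration of an entropic criterion, the sketch below estimates each cluster's Shannon entropy from per-feature histograms and prefers the k-means partition with the lowest total within-cluster entropy; this is a generic criterion for illustration, not one of the dissertation's specific algorithms.

```python
# A minimal sketch of an entropic partition criterion: estimate each
# cluster's Shannon entropy from feature histograms and prefer the
# partition with the lowest total within-cluster entropy.
import numpy as np
from scipy.stats import entropy
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

def partition_entropy(X, labels, bins=10):
    total = 0.0
    for k in np.unique(labels):
        cluster = X[labels == k]
        for j in range(X.shape[1]):
            hist, _ = np.histogram(cluster[:, j], bins=bins)
            total += entropy(hist) * len(cluster) / len(X)
    return total

for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, partition_entropy(X, labels))  # lower is more homogeneous
```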

Relevance: 80.00%

Abstract:

Extracting the semantic relatedness of terms is an important topic in several areas, including data mining, information retrieval and web recommendation. This paper presents an approach to computing the semantic relatedness of terms using the knowledge base of DBpedia, a community effort to extract structured information from Wikipedia. Several approaches that extract semantic relatedness from Wikipedia using bag-of-words vector models are already available in the literature. The research presented in this paper explores a novel approach using paths on an ontological graph extracted from DBpedia. It is based on an algorithm for finding and weighting a collection of paths connecting concept nodes. This algorithm was implemented in a tool called Shakti, which extracts relevant ontological data for a given domain from DBpedia using its SPARQL endpoint. To validate the proposed approach, Shakti was used to recommend web pages on a Portuguese social site devoted to alternative music, and the results of that experiment are reported in this paper.
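
A minimal sketch of the path-based idea appears below: short paths between two concept nodes are enumerated on a toy graph, and each path contributes a weight that decays with its length. The toy graph and decay rule stand in for the DBpedia subgraph and weighting scheme used by Shakti.

```python
# A minimal sketch of path-based relatedness on an ontological graph:
# enumerate short paths between two concept nodes and weight each path
# inversely to its length. The toy graph stands in for a subgraph
# extracted from DBpedia via its SPARQL endpoint.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Punk_rock", "Rock_music"), ("Rock_music", "Alternative_rock"),
    ("Punk_rock", "Post-punk"), ("Post-punk", "Alternative_rock"),
    ("Rock_music", "Music_genre"), ("Jazz", "Music_genre"),
])

def relatedness(graph, a, b, max_len=4):
    # Sum a decaying weight over all simple paths up to max_len edges.
    paths = nx.all_simple_paths(graph, a, b, cutoff=max_len)
    return sum(1.0 / (2 ** (len(p) - 1)) for p in paths)

print(relatedness(G, "Punk_rock", "Alternative_rock"))  # closely related
print(relatedness(G, "Punk_rock", "Jazz"))              # weakly related
```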

Relevance: 80.00%

Abstract:

OBJECTIVE: To evaluate the predictive value of genetic polymorphisms in the context of BCG immunotherapy outcome and create a predictive profile that may allow discriminating the risk of recurrence. MATERIAL AND METHODS: In a dataset of 204 patients treated with BCG, we evaluated 42 genetic polymorphisms in 38 genes involved in the BCG mechanism of action, using Sequenom MassARRAY technology. Stepwise multivariate Cox regression was used for data mining. RESULTS: In agreement with previous studies, we observed that gender, age, tumor multiplicity and treatment scheme were associated with BCG failure. Using stepwise multivariate Cox regression analysis, we propose the first predictive profile of BCG immunotherapy outcome and a risk score based on polymorphisms in immune system molecules (SNPs TNFA-1031T/C (rs1799964), IL2RA rs2104286 T/C, IL17A-197G/A (rs2275913), IL17RA-809A/G (rs4819554), IL18R1 rs3771171 T/C, ICAM1 K469E (rs5498), FASL-844T/C (rs763110) and TRAILR1-397T/G (rs79037040)) in association with clinicopathological variables. This risk score allows the categorization of patients into risk groups: patients in the Low Risk group have a 90% chance of successful treatment, whereas patients in the High Risk group present a 75% chance of recurrence after BCG treatment. CONCLUSION: We have established the first predictive score of BCG immunotherapy outcome combining clinicopathological characteristics and a panel of genetic polymorphisms. Further studies using an independent cohort are warranted. Moreover, the inclusion of other biomarkers may help to improve the proposed model.
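
For illustration, the sketch below fits a Cox model over clinical variables plus binary SNP indicators with the lifelines library and derives a per-patient risk score from the log partial hazard, splitting patients into Low/High risk groups at the median. The data, column names and cutoff are synthetic assumptions, and the stepwise variable selection is omitted.

```python
# A minimal sketch of the modelling step: Cox regression over clinical
# variables plus binary SNP indicators, yielding a per-patient risk
# score. Data and column names are synthetic stand-ins, not the
# study's cohort; stepwise selection is not shown.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n = 204
df = pd.DataFrame({
    "age": rng.normal(65, 10, n),
    "multiplicity": rng.integers(0, 2, n),        # multiple tumours yes/no
    "snp_TNFA_rs1799964": rng.integers(0, 2, n),  # hypothetical SNP indicator
    "snp_IL17A_rs2275913": rng.integers(0, 2, n),
    "time_to_recurrence": rng.exponential(24, n), # months
    "recurred": rng.integers(0, 2, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_recurrence", event_col="recurred")
df["risk_score"] = cph.predict_log_partial_hazard(df)
# Split into Low/High risk groups at the median score (assumed cutoff).
df["risk_group"] = np.where(df["risk_score"] > df["risk_score"].median(),
                            "High", "Low")
print(df["risk_group"].value_counts())
```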

Relevance: 80.00%

Abstract:

Master's degree in Informatics Engineering, Specialization Area in Knowledge and Decision Technologies

Relevance: 80.00%

Abstract:

Load forecasting has gradually become a major field of research in the electricity industry. It is extremely important for the electric sector in a deregulated environment, as it provides useful support to power system management. Accurate load forecasting models are required for the operation and planning of a utility company, and they have received increasing attention from researchers in this field. Many mathematical methods have been developed for load forecasting. This work aims to develop and implement a method for short-term load forecasting (STLF) based on Holt-Winters exponential smoothing and an artificial neural network (ANN). One of the main contributions of this paper is the application of the Holt-Winters exponential smoothing approach to the forecasting problem; in addition, as an evaluation of past forecasting work, data mining techniques are also applied to short-term load forecasting. Both the ANN and the Holt-Winters exponential smoothing approaches are compared and evaluated.
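
A minimal sketch of the Holt-Winters side of the comparison is given below: additive trend and daily seasonality fitted to an hourly load series with statsmodels, forecasting the next 24 hours. The series is synthetic; the ANN counterpart would be trained on lagged load values.

```python
# A minimal sketch of Holt-Winters exponential smoothing for STLF:
# additive trend and daily seasonality on a synthetic hourly load
# series, forecasting the next 24 hours.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

hours = pd.date_range("2024-01-01", periods=14 * 24, freq="h")
load = (100 + 20 * np.sin(2 * np.pi * np.arange(len(hours)) / 24)
        + np.random.default_rng(0).normal(0, 2, len(hours)))
series = pd.Series(load, index=hours)

model = ExponentialSmoothing(series, trend="add", seasonal="add",
                             seasonal_periods=24).fit()
print(model.forecast(24))  # next-day hourly load forecast
```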

Relevance: 80.00%

Abstract:

This paper presents the characterization of high voltage (HV) electric power consumers based on a data clustering approach. The typical load profiles (TLP) are obtained by selecting the best partition of a power consumption database from a pool of partitions produced by several clustering algorithms, with the choice of the best partition supported by several cluster validity indices. The proposed data mining (DM) based methodology, which includes all the steps of the knowledge discovery in databases (KDD) process, features an automatic data treatment application that preprocesses the initial database, saving time and improving accuracy during this phase. These methods are intended to be used in a smart grid environment to extract useful knowledge about customers' consumption behavior. To validate our approach, a case study with a real database of 185 HV consumers was used.
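
As an illustration of the partition-selection step, the sketch below produces candidate partitions with several clustering algorithms and keeps the one with the best silhouette index; the 24-point daily profiles are synthetic, not the 185-consumer HV database, and the silhouette index stands in for the paper's pool of validity indices.

```python
# A minimal sketch of partition selection: generate candidate
# partitions with several clustering algorithms and keep the one with
# the best silhouette index. Load curves are synthetic 24-point
# daily profiles.
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(3)
profiles = np.vstack([  # three synthetic consumption patterns
    rng.normal(np.sin(np.linspace(0, 2 * np.pi, 24)) + 2, 0.1, (60, 24)),
    rng.normal(np.ones(24), 0.1, (60, 24)),
    rng.normal(np.linspace(0.5, 2.0, 24), 0.1, (65, 24)),
])

candidates = {
    "kmeans_k3": KMeans(n_clusters=3, n_init=10, random_state=0),
    "kmeans_k4": KMeans(n_clusters=4, n_init=10, random_state=0),
    "ward_k3": AgglomerativeClustering(n_clusters=3),
}
best = max(candidates.items(),
           key=lambda kv: silhouette_score(profiles, kv[1].fit_predict(profiles)))
print("best partition:", best[0])
```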