778 results for Machine Learning. Semi-supervised learning. Multi-label classification. Reliability Parameter
Abstract:
In almost all industrialized countries, the energy sector has undergone a severe restructuring that has brought greater complexity to market players' interactions. This complexity paved the way for the creation of decision support tools that facilitate the study and understanding of these markets. MASCEM ("Multiagent Simulator for Competitive Electricity Markets") arose in this context, providing a framework for evaluating new rules, new behaviour, and new participants in deregulated electricity markets. MASCEM uses game theory, machine learning techniques, scenario analysis, and optimisation techniques to model market agents and to provide them with decision support. ALBidS is a multiagent system created to provide decision support to market negotiating players. Fully integrated with MASCEM, it considers several different methodologies based on very distinct approaches. The Six Thinking Hats (STH) is a powerful technique used to look at decisions from different perspectives. Its goal is to force the thinker to move outside his or her habitual thinking style. It was developed to be used mainly in meetings, in order to "run better meetings, make faster decisions". This dissertation presents a study on the applicability of the Six Thinking Hats technique in Decision Support Systems, particularly those following the multiagent paradigm, like the MASCEM simulator. This work therefore proposes a new agent: a meta-learner based on the STH technique that organizes several different ALBidS strategies and combines their distinct answers into a single one that is expected to outperform any of them.
Abstract:
The principal topic of this work is the application of data mining techniques, in particular machine learning, to the discovery of knowledge in a protein database. In the first chapter a general background is presented. Namely, in section 1.1 we overview the methodology of a Data Mining project and its main algorithms. In section 1.2 an introduction to proteins and their supporting file formats is outlined. This chapter concludes with section 1.3, which defines the main problem we intend to address with this work: to determine, in a discrete way (i.e. not continuous), whether an amino acid is exposed or buried in a protein, for five exposure levels: 2%, 10%, 20%, 25% and 30%. In the second chapter, following closely the CRISP-DM methodology, the whole process of constructing the database that supported this work is presented. Namely, the process of loading data from the Protein Data Bank, DSSP and SCOP is described. Then an initial data exploration is performed and a simple prediction model (baseline) of the relative solvent accessibility of an amino acid is introduced. The Data Mining Table Creator, a program developed to produce the data mining tables required for this problem, is also introduced. In the third chapter the results obtained are analyzed with statistical significance tests. Initially the several classifiers used (Neural Networks, C5.0, CART and CHAID) are compared, and it is concluded that C5.0 is the most suitable for the problem at stake. The influence of parameters such as the amino acid information level, the amino acid window size and the SCOP class type on the accuracy of the predictive models is also compared. The fourth chapter starts with a brief review of the literature on amino acid relative solvent accessibility. Then we overview the main results achieved and finally discuss possible future work. The fifth and last chapter consists of appendices. Appendix A contains the schema of the database that supported this thesis. Appendix B contains a set of tables with additional information. Appendix C describes the software provided on the DVD accompanying this thesis, which allows the reconstruction of the present work.
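A minimal sketch of the central discretization step, under our own assumptions (the relative solvent accessibility values are taken as already computed, e.g. from DSSP output; names are illustrative, not the thesis's code):

```python
# Hypothetical sketch: discretize relative solvent accessibility (RSA)
# into buried/exposed labels at the five exposure levels studied here.
THRESHOLDS = [0.02, 0.10, 0.20, 0.25, 0.30]  # 2%, 10%, 20%, 25%, 30%

def discretize_rsa(rsa, threshold):
    """Label an amino acid as 'exposed' if its RSA exceeds the threshold."""
    return "exposed" if rsa > threshold else "buried"

# Example: one residue with 18% relative solvent accessibility.
rsa = 0.18
for t in THRESHOLDS:
    print(f"{int(t * 100)}% level: {discretize_rsa(rsa, t)}")
```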
Abstract:
With the advent of wearable sensing and mobile technologies, biosignals have found a growing number of application areas, leading to the collection of large volumes of data. One of the difficulties in dealing with these data sets, and in the development of automated machine learning systems which use them as input, is the lack of reliable ground truth information. In this paper we present a new web-based platform for visualization, retrieval and annotation of biosignals by non-technical users, aimed at improving the process of ground truth collection for biomedical applications. Moreover, a novel extendable and scalable data representation model and persistence framework is presented. The results of the experimental evaluation with prospective users further confirmed the potential of the presented framework.
Abstract:
The iterative simulation of the Brownian bridge is well known. In this article, we present a vectorial simulation alternative based on Gaussian processes for machine learning regression that is suitable for implementations in interpreted programming languages. We extend the vectorial simulation of path-dependent trajectories to other Gaussian processes, namely, sequences of Brownian bridges, geometric Brownian motion, fractional Brownian motion, and the Ornstein-Uhlenbeck mean reversion process.
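As a hedged illustration of the vectorial (non-iterative) idea, the sketch below samples a whole Brownian bridge trajectory on [0, 1] in one draw from its Gaussian-process covariance k(s, t) = min(s, t) - st, instead of stepping through time; variable names are ours, not the article's:

```python
import numpy as np

# Vectorial sketch: sample a Brownian bridge pinned at 0 on both endpoints,
# drawn in one shot from its covariance k(s, t) = min(s, t) - s*t.
n = 500
t = np.linspace(0, 1, n + 2)[1:-1]             # interior time grid
S, T = np.meshgrid(t, t, indexing="ij")
K = np.minimum(S, T) - S * T                   # bridge covariance matrix
L = np.linalg.cholesky(K + 1e-12 * np.eye(n))  # jitter for numerical stability
path = L @ np.random.standard_normal(n)        # one whole trajectory, no loop
```

The other processes mentioned follow the same pattern by swapping in their respective covariance kernels (and, for geometric Brownian motion, applying an exponential transform to the sampled log-path).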
Abstract:
This paper presents several forecasting methodologies based on the application of Artificial Neural Networks (ANN) and Support Vector Machines (SVM), directed at the prediction of solar radiance intensity. The methodologies differ from each other in the information used to train the methods, i.e., different complementary environmental variables such as wind speed, temperature, and humidity. Additionally, different ways of considering the data series information have been considered. Sensitivity tests have been performed on all methodologies in order to achieve the best parameterizations for the proposed approaches. Results show that the SVM approach using the exponential Radial Basis Function (eRBF) kernel achieves the best forecasting results, in half the execution time of the ANN-based approaches.
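A minimal sketch of this kind of SVM regression setup, assuming scikit-learn and a custom exponential RBF kernel written here for illustration (the paper's exact kernel form and parameters are not reproduced; training data are placeholders):

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.svm import SVR

def erbf_kernel(X, Y, sigma=1.0):
    """Exponential RBF: exp(-||x - y|| / (2 * sigma**2)); an assumed form."""
    return np.exp(-cdist(X, Y) / (2 * sigma ** 2))

# Rows: [wind speed, temperature, humidity, lagged radiance]; y: radiance.
X = np.random.rand(200, 4)                     # placeholder training data
y = np.random.rand(200)
model = SVR(kernel=lambda A, B: erbf_kernel(A, B, sigma=0.5))
model.fit(X, y)
forecast = model.predict(X[:5])
```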
Abstract:
Harnessing the idle CPU cycles, storage space and other resources of networked computers for collaborative work is the main focus of all major grid computing research projects. Most university computer labs are nowadays equipped with powerful desktop PCs, and it is easy to notice that, most of the time, these machines lie idle, their computing power wasted. However, complex problems and the analysis of very large amounts of data require substantial computational resources. For such problems, one may run the analysis algorithms on very powerful and expensive computers, which reduces the number of users that can afford such data analysis tasks. Instead of using single expensive machines, distributed computing systems offer the possibility of using a set of much less expensive machines to do the same task. The BOINC and Condor projects have been successfully used to support real scientific research around the world at low cost. The main goal of this work is to explore both distributed computing frameworks, Condor and BOINC, and to use their power to harness idle PC resources for academic researchers to use in their research work. In this thesis, data mining tasks were performed by implementing several machine learning algorithms on the distributed computing environment.
Abstract:
The energy sector has suffered a significant restructuring that has increased the complexity in electricity market players' interactions. The complexity that these changes brought requires the creation of decision support tools to facilitate the study and understanding of these markets. The Multiagent Simulator of Competitive Electricity Markets (MASCEM) arose in this context, providing a simulation framework for deregulated electricity markets. The Adaptive Learning strategic Bidding System (ALBidS) is a multiagent system created to provide decision support to market negotiating players. Fully integrated with MASCEM, ALBidS considers several different strategic methodologies based on highly distinct approaches. Six Thinking Hats (STH) is a powerful technique used to look at decisions from different perspectives, forcing the thinker to move outside their usual way of thinking. This paper aims to complement the ALBidS strategies by combining them and taking advantage of their different perspectives through the use of the STH group decision technique. The combination of ALBidS' strategies is performed through the application of a genetic algorithm, resulting in an evolutionary learning approach.
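A toy sketch of the evolutionary combination idea, under our own assumptions (a mutation-only genetic algorithm evolving weights that blend several strategies' answers into one bid; data, fitness and names are illustrative, not ALBidS internals):

```python
import numpy as np

rng = np.random.default_rng(0)
n_strategies, pop_size, generations = 6, 40, 100
answers = rng.uniform(30, 60, n_strategies)  # each strategy's proposed bid (toy data)
target = 48.0                                # market outcome the blend should match

def fitness(w):
    w = np.abs(w) / np.abs(w).sum()          # normalize to a convex combination
    return -abs(answers @ w - target)        # closer blended bid -> higher fitness

pop = rng.random((pop_size, n_strategies))
for _ in range(generations):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]   # selection: keep best half
    children = parents[rng.integers(len(parents), size=pop_size - len(parents))].copy()
    children += rng.normal(0, 0.1, children.shape)       # mutation (no crossover, for brevity)
    pop = np.vstack([parents, children])
best_weights = pop[np.argmax([fitness(ind) for ind in pop])]
```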
Abstract:
Personalization is a key aspect of effective human-computer interaction. In an era with an abundance of information and so many people interacting with it in so many ways, the ability to adjust to its users is crucial for any modern system. The creation of adaptive systems is a rather complex domain that requires very specific methods to succeed. However, to this day there is still no standard model or architecture to use in modern adaptive systems. The main motivation of this thesis is the proposal of a user modeling architecture capable of incorporating the different modules needed to create a system with scalable intelligence through modeling techniques. The modules cooperate to analyze users and characterize their behavior, using that information to provide a customized system experience that increases not only the system's usability but also the user's productivity and knowledge. The proposed architecture consists of three components: a user information unit, a mathematical structure capable of classifying users, and the technique to use when adapting the content. The user information unit is responsible for knowing the various types of individuals who may use the system and for capturing every detail of relevant interactions between the system and its users, and it also contains the database that stores that information. The mathematical structure is the user classifier, whose task is to analyze users and classify them into one of three profiles: beginner, intermediate or advanced. Both Bayesian and neural networks are used, and an explanation of how to prepare and train them to handle user information is presented. With the user profile defined, a technique to adapt the system's content becomes necessary. In this proposal, a mixed-initiative approach is presented, based on the freedom of both the user and the system to control the communication between them. The proposed architecture was developed as an integral part of the ADSyS project, a dynamic scheduling system used to solve scheduling problems subject to dynamic events. It is highly complex even for frequent users, hence the need to adapt its content in order to increase its usability. To evaluate the contributions of this work, a computational study on user recognition was carried out, based on two usability evaluation sessions with distinct groups of users. It was possible to conclude on the benefits of using user modeling techniques with the proposed architecture.
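As a rough illustration of the classifying component, the sketch below uses one of the two classifier families mentioned (a small neural network, via scikit-learn) to map interaction data to one of the three profiles; the features and data are invented stand-ins, not the thesis's model:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy interaction features: [error rate, mean task time (s), help requests/session]
X = np.array([[0.40, 120, 9], [0.25, 80, 4], [0.05, 30, 0],
              [0.35, 100, 7], [0.10, 45, 1], [0.20, 70, 3]])
y = ["beginner", "intermediate", "advanced",
     "beginner", "advanced", "intermediate"]

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict([[0.30, 90, 5]]))  # classify a new user into one of the three profiles
```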
Abstract:
Dissertation to obtain the Master's Degree in Informatics Engineering
Abstract:
Dissertation to obtain the Master's Degree in Biomedical Engineering
Abstract:
Genetic Programming (GP) is a Machine Learning (ML) technique applied to optimization problems where the goal is to find the best solution within a set of possible solutions. GP belongs to the paradigm known as Evolutionary Computation (EC), which draws inspiration from the theory of natural evolution of species to guide the search for solutions. In this work, the performance of GP is evaluated on the problem of predicting pharmacokinetic parameters used in the drug development process. This is an optimization problem in which, given a set of molecular descriptors of drugs and the corresponding values of their pharmacokinetic parameters or molecular activity, GP is used to build a mathematical function that estimates those values. To this end, data from drugs with known values of some pharmacokinetic parameters were used. To evaluate the performance of GP in solving the problem at hand, different GP models with different fitness functions and configurations were implemented. The results obtained by the different models were compared with results currently published in the literature, and they confirm that GP is a promising technique in terms of the accuracy of the solutions found, generalization ability, and the correlation between predicted and actual values.
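A hedged sketch of the symbolic-regression idea using the gplearn library (our choice for illustration; the thesis's own GP implementation, fitness functions and pharmacokinetic data are not reproduced): GP evolves a mathematical expression mapping molecular descriptors to a parameter value.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

# Toy stand-ins: rows are drugs, columns are molecular descriptors,
# y is a pharmacokinetic parameter to estimate (e.g., clearance).
rng = np.random.default_rng(0)
X = rng.random((100, 5))
y = 2.0 * X[:, 0] - X[:, 1] * X[:, 2] + 0.1 * rng.standard_normal(100)

gp = SymbolicRegressor(population_size=500, generations=20,
                       function_set=("add", "sub", "mul", "div"),
                       metric="rmse", random_state=0)
gp.fit(X, y)
print(gp._program)  # the evolved mathematical expression
```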
Abstract:
Human Activity Recognition systems require objective and reliable methods that can be used in the daily routine and must offer consistent results according to the performed activities. These systems are under development and offer objective and personalized support for several applications, such as the healthcare area. This thesis aims to create a framework for human activity recognition based on accelerometry signals. Some new features and techniques inspired by the audio recognition methodology are introduced in this work, namely the Log Scale Power Bandwidth and the application of Markov Models. Forward Feature Selection was adopted as the feature selection algorithm in order to improve clustering performance and limit the computational demands. This method selects the most suitable set of features for activity recognition in accelerometry from a 423-dimensional feature vector. Several machine learning algorithms were applied to the accelerometry databases used (the FCHA and PAMAP databases) and showed promising results in activity recognition. The developed algorithm set constitutes a significant contribution to the development of reliable evaluation methods of movement disorders for diagnosis and treatment applications.
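A compact sketch of the Forward Feature Selection loop described here, under our own assumptions (a generic scikit-learn classifier and toy data stand in for the thesis's 423-dimensional accelerometry features and FCHA/PAMAP loaders):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def forward_feature_selection(X, y, n_keep=10):
    """Greedily add the single feature that most improves CV accuracy."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < n_keep:
        scores = {f: cross_val_score(KNeighborsClassifier(),
                                     X[:, selected + [f]], y, cv=5).mean()
                  for f in remaining}
        best = max(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
    return selected

# In the thesis the vector would be 423-dimensional; toy data stand in here.
X = np.random.rand(60, 20)
y = np.random.randint(0, 3, 60)
print(forward_feature_selection(X, y, n_keep=5))
```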
Abstract:
Botnets are groups of computers infected with a specific subset of a malware family and controlled by one individual, called the botmaster. These networks are used for, among other things, virtual extortion, spam campaigns and identity theft. They implement different types of evasion techniques that make it harder to group and detect botnet traffic. This thesis introduces a methodology, called CONDENSER, that outputs clusters through a self-organizing map and identifies domain names generated by an unknown pseudo-random seed known to the botnet herder(s). Additionally, DNS Crawler is proposed, a system that saves historic DNS data for fast-flux and double fast-flux detection and is used to identify live C&C IPs used by real botnets. A program, called CHEWER, was developed to automate the calculation of the SVM parameters and features that perform best against the available domain names associated with DGAs. CONDENSER and DNS Crawler were developed with scalability in mind, so that the detection of fast-flux and double fast-flux networks becomes faster. We used an SVM for the DGA classifier, selecting a total of 11 attributes and achieving a precision of 77.9% and an F-measure of 83.2%. The feature selection method identified the 3 most significant attributes of the full attribute set. For clustering, a Self-Organizing Map was used on a total of 81 attributes. The conclusions of this thesis were accepted at Botconf through a submitted article. Botconf is a well-known conference on the research, mitigation and discovery of botnets, tailored to the industry, where current work and research are presented. The conference is attended by security and anti-virus companies, law enforcement agencies and researchers.
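A small sketch of the kind of lexical features an SVM-based DGA classifier could use (generic illustrations under our own assumptions; the 11 attributes actually selected in the thesis are not listed here):

```python
import math
from collections import Counter
from sklearn.svm import SVC

def domain_features(domain):
    """Generic lexical features often used for DGA detection (illustrative only)."""
    name = domain.split(".")[0]
    counts = Counter(name)
    entropy = -sum(c / len(name) * math.log2(c / len(name)) for c in counts.values())
    vowels = sum(name.count(v) for v in "aeiou")
    return [len(name), entropy, vowels / len(name), sum(ch.isdigit() for ch in name)]

X = [domain_features(d) for d in
     ["google.com", "xkqjzhw3f9.net", "wikipedia.org", "qx7vbn2p.biz"]]
y = [0, 1, 0, 1]                      # 0 = legitimate, 1 = DGA-generated (toy labels)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([domain_features("zq8wkfh4jr.com")]))
```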
Abstract:
The growth and expansion of social networks brought new forms of interaction between human beings that have repercussions in real life. The texts shared on social networks and the interactions resulting from all virtual activities have gained a great impact on society's daily life, and in the economic and financial sphere social networks have been the subject of several studies, particularly in terms of predicting and describing the stock market (Zhang, Fuehres, & Gloor, 2011) (Bollen, Mao, & Zeng, 2010). In this research we examine whether the sentiment of Twitter, a microblogging social network, relates directly to the stock market, thus seeking to understand the impact of social networks on the financial market. We attempted to relate two dimensions, social and financial, in order to understand how we could use the values of one to predict the other. This is an especially interesting topic for companies and investors, as it tries to understand whether what is said about a given company on Twitter can be related to that company's market value. We used two sentiment analysis techniques, one based on lexical word comparison and another based on machine learning, to understand which of the two had better accuracy in classifying tweets into three attributes: positive, negative or neutral. The machine learning model was chosen, and we related these data to stock market data through a Granger causality test. We found that for certain companies there is a relationship between the two variables, Twitter sentiment and the change in the stock's position between two periods of time in the stock market, the latter variable depending on the time window into which we group our Twitter sentiment. This study thus sought to follow up on the work developed by Bollen, Mao, and Zeng (2010), who found that one sentiment dimension (calm) can be used to predict the direction of stocks in the market, although they rejected the hypothesis that general sentiment (positive, negative or neutral) relates globally to the stock market. In their work they compared the sentiment of all tweets in a given period, without exclusion, against the general stock market index, while the methodology adopted in this research was carried out per company, and we were only interested in tweets related to that specific company. With this difference we obtained different results, and certain companies showed a relationship between several combinations, especially technology companies. We tested grouping Twitter sentiment into 3-minute, 1-hour and 1-day windows, and certain companies only showed a relationship when we increased the time window. This leads us to believe that general sentiment about a company, if it is a technology company, is linked to the stock market, with this relationship conditioned by the time window under analysis.
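A hedged sketch of the Granger causality step using statsmodels (column names, lag choice and the toy series are ours; the thesis's data preparation is not reproduced):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Toy series: hourly Twitter sentiment score and the stock's price change.
rng = np.random.default_rng(0)
sentiment = rng.standard_normal(200)
price_change = np.roll(sentiment, 2) + 0.5 * rng.standard_normal(200)  # lagged dependence

# Column order matters: this tests whether the 2nd column Granger-causes the 1st.
data = pd.DataFrame({"price_change": price_change, "sentiment": sentiment})
results = grangercausalitytests(data[["price_change", "sentiment"]], maxlag=4)
```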
Abstract:
The reduction of greenhouse gas emissions is one of the big global challenges for the next decades, due to its severe impact on the atmosphere that leads to changes in the climate and other environmental factors. One of the main sources of greenhouse gas is energy consumption; therefore a number of initiatives and calls for awareness and sustainability in energy use are issued among different types of institutions and organizations. In 2007 the European Council adopted energy and climate change objectives targeting a 20% improvement by 2020, and all European countries are required to use energy more efficiently. Several steps can be taken towards energy reduction: understanding the buildings' behavior through time, revealing the factors that influence consumption, applying the right measures for reduction and sustainability, visualizing the hidden connections between our daily habits and their impact on the natural world, and promoting a more sustainable life. Researchers have suggested that feedback visualization can effectively encourage conservation, with an energy reduction rate of 18%. Furthermore, researchers have contributed to the identification of a set of factors which are very likely to influence consumption, such as occupancy level, occupant behavior, environmental conditions, building thermal envelope, and climate zone. Nowadays, the amount of energy consumed on university campuses is huge, and great effort is needed to meet the reduction requested by the European Council, as well as the cost reduction. Thus, the present study was performed on university buildings as a use case to: a. investigate the most dynamic factors influencing energy consumption on campus; b. implement prediction models for electricity consumption using different techniques, such as traditional regression and alternative machine learning techniques; and c. assist energy management by providing real-time energy feedback and visualization on campus for more awareness and better decision making. This methodology is applied to the use case of Universitat Jaume I (UJI), located in Castellón, Spain.
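As a rough sketch of objective (b), comparing a traditional regression baseline with an alternative machine learning technique on electricity consumption data (all variable names and data are invented stand-ins, not the UJI dataset):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Toy stand-ins: [outdoor temperature, occupancy level, hour of day] -> kWh.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(5, 35, 500),
                     rng.integers(0, 300, 500),
                     rng.integers(0, 24, 500)])
y = (50 + 2.0 * X[:, 0] + 0.3 * X[:, 1]
     + 5 * np.sin(X[:, 2] / 24 * 2 * np.pi) + rng.normal(0, 5, 500))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, mean_absolute_error(y_te, model.predict(X_te)))
```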