16 results for text and data mining


Relevance:

100.00%

Publisher:

Abstract:

Complex systems, i.e. systems composed of a large set of elements interacting in a non-linear way, are found all around us. In recent decades, different approaches have been proposed for understanding them, one of the most interesting being the complex network perspective. This legacy of the 18th-century mathematical concepts proposed by Leonhard Euler remains current and increasingly relevant to real-world problems. In recent years, it has been demonstrated that network-based representations can yield relevant knowledge about complex systems. Nevertheless, several problems have been identified, mainly related to the degree of subjectivity involved in the creation and evaluation of such network structures. In this thesis, we propose addressing these problems by means of different data mining techniques, thus obtaining a novel hybrid approach that intermingles complex networks and data mining. Results indicate that such techniques can be effectively used to i) enable the creation of novel network representations, ii) reduce the dimensionality of the analyzed systems by pre-selecting the most important elements, iii) describe complex networks, and iv) assist in the analysis of different network topologies. The soundness of this approach is validated through several validation cases drawn from actual biomedical problems, e.g. the diagnosis of cancer from tissue analysis, or the study of the dynamics of the brain under different neurological disorders.
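As an illustration of the kind of hybrid network/data-mining pipeline the abstract describes, the sketch below builds one correlation-based network per subject and feeds simple topological descriptors into a classifier. This is a minimal sketch on synthetic data, not the thesis's actual method; the threshold, metrics, and data shapes are illustrative assumptions.

```python
# Minimal sketch: one correlation network per subject, topological
# descriptors used as classifier features. Synthetic data, illustrative only.
import numpy as np
import networkx as nx
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_channels, n_samples = 60, 16, 200
series = rng.normal(size=(n_subjects, n_channels, n_samples))  # assumed signals
labels = rng.integers(0, 2, size=n_subjects)                   # assumed classes

def topo_features(ts, threshold=0.3):
    """Correlation network of one subject's channels, summarized by topology."""
    corr = np.corrcoef(ts)
    G = nx.from_numpy_array((np.abs(corr) > threshold).astype(int))
    G.remove_edges_from(nx.selfloop_edges(G))
    return [nx.density(G),
            nx.average_clustering(G),
            np.mean([d for _, d in G.degree()])]

X = np.array([topo_features(ts) for ts in series])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```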

Relevance:

100.00%

Publisher:

Abstract:

Project Work presented as a partial requirement for obtaining the degree of Master in Statistics and Information Management

Relevance:

100.00%

Publisher:

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Electrical and Computer Engineering

Relevance:

100.00%

Publisher:

Abstract:

Dissertation to obtain the degree of Master in Physics Engineering

Relevance:

100.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance:

100.00%

Publisher:

Abstract:

Any subject related to health is always a sensitive topic, given its importance to the population, since it interacts directly with people's well-being and, essentially, with the sense of security they expect from the provision of basic health care. Statistical data show that the population is increasingly ageing, reinforcing the importance of good hospital centres and a good National Health Service (SNS) (Plano Nacional de Saúde, 2010). In Portugal, patients who need more urgent care can turn to the Emergency Service made available to the whole population through the SNS. However, the management and planning of this service are complex, since it is frequently used by patients who do not need urgent care, which prevents hospitals from delivering the expected response and sometimes results in a lower-quality service.

In this context, data from a hospital in the north of the country were analyzed in order to assess the current state of the emergency service and to find relevant patterns through cluster analysis and association rules. Starting with the cluster analysis, only the variables considered important for the problem were used, and the final analysis produced 3 clusters. The first cluster consists of male patients of all ages, the second of younger male patients and female patients up to 60 years old, and the third only of female patients aged 40 and over. In the end, many similarities were found between clusters 1 and 3, since both contained the oldest patients, who showed a common behaviour pattern. No epidemic was recorded in 2012, so no disease stood out from the rest. It was also concluded that most cases required urgent intervention (Yellow triage wristband); nevertheless, most of the observed patients were able to return home after being seen in the Hospital Emergency Department, without additional medical intervention.

Regarding the association rules, some variables had to be transformed or removed to avoid biasing the study. After the rules were generated, it became clear that they were very similar to one another, showing higher confidence for the variables that appeared most frequently ("patients with a Yellow wristband", "district of Porto" or "medical discharge to home").
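To make the two techniques mentioned above concrete, the sketch below runs a k-means clustering (k = 3) on encoded sex/age attributes and mines association rules over one-hot-encoded visit attributes. The column names, example records, and thresholds are hypothetical; this only illustrates the workflow, not the study's actual data or parameters.

```python
# Minimal sketch of the two analyses: k-means clustering and association
# rules on hypothetical emergency-department records.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical visit records (column names are assumptions).
visits = pd.DataFrame({
    "sex":      ["M", "M", "F", "F", "M", "F"],
    "age":      [25, 70, 55, 80, 40, 30],
    "triage":   ["Yellow", "Yellow", "Green", "Yellow", "Green", "Yellow"],
    "district": ["Porto", "Porto", "Braga", "Porto", "Porto", "Braga"],
    "outcome":  ["Home", "Home", "Home", "Admitted", "Home", "Home"],
})

# Clustering on sex and age, with k = 3 as in the study.
features = pd.get_dummies(visits[["sex"]]).assign(age=visits["age"])
scaled = StandardScaler().fit_transform(features)
visits["cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)

# Association rules over one-hot-encoded categorical attributes.
baskets = pd.get_dummies(visits[["triage", "district", "outcome"]]).astype(bool)
frequent = apriori(baskets, min_support=0.3, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```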

Relevance:

100.00%

Publisher:

Abstract:

A thesis submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information Systems.

Relevance:

100.00%

Publisher:

Abstract:

The principal topic of this work is the application of data mining techniques, in particular machine learning, to the discovery of knowledge in a protein database. In the first chapter a general background is presented. Namely, in section 1.1 we give an overview of the methodology of a data mining project and its main algorithms, and in section 1.2 an introduction to proteins and their supporting file formats is outlined. The chapter concludes with section 1.3, which defines the main problem we intend to address with this work: determine, in a discrete (i.e. non-continuous) way, whether an amino acid is exposed or buried in a protein, for five exposure levels: 2%, 10%, 20%, 25% and 30%. In the second chapter, following closely the CRISP-DM methodology, the whole process of constructing the database that supports this work is presented. Namely, the process of loading data from the Protein Data Bank, DSSP and SCOP is described. Then an initial data exploration is performed and a simple prediction model (baseline) of the relative solvent accessibility of an amino acid is introduced. The Data Mining Table Creator, a program developed to produce the data mining tables required for this problem, is also introduced. In the third chapter the results obtained are analyzed with statistical significance tests. Initially the classifiers used (Neural Networks, C5.0, CART and CHAID) are compared, and it is concluded that C5.0 is the most suitable for the problem at stake. The influence of parameters such as the amino acid information level, the amino acid window size and the SCOP class type on the accuracy of the predictive models is also compared. The fourth chapter starts with a brief review of the literature on the relative solvent accessibility of amino acids. Then we give an overview of the main results achieved and finally discuss possible future work. The fifth and last chapter consists of appendices. Appendix A contains the schema of the database that supported this thesis, Appendix B a set of tables with additional information, and Appendix C a description of the software provided on the DVD accompanying this thesis that allows the present work to be reconstructed.
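The thesis uses C5.0 among other classifiers; as a rough, openly hedged stand-in, the sketch below trains a CART-style decision tree to predict buried vs. exposed residues from a sliding window of amino-acid identities at the 20% relative-accessibility level. The encoding, window size, and randomly generated data are illustrative assumptions, not the thesis's actual tables or models.

```python
# Hedged stand-in: decision tree predicting buried/exposed residues from a
# window of neighboring amino acids. Synthetic data, illustrative encoding.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
rng = np.random.default_rng(0)

# Assumed inputs: a protein sequence and per-residue relative solvent
# accessibility (RSA); here both are randomly generated.
sequence = rng.choice(list(AMINO_ACIDS), size=500)
rsa = rng.uniform(0, 1, size=500)
exposed = (rsa >= 0.20).astype(int)   # 20% exposure threshold

def window_features(seq, i, half_width=3):
    """Encode the residues around position i as integer indices (-1 = gap)."""
    idx = range(i - half_width, i + half_width + 1)
    return [AMINO_ACIDS.index(seq[j]) if 0 <= j < len(seq) else -1 for j in idx]

X = np.array([window_features(sequence, i) for i in range(len(sequence))])
X_tr, X_te, y_tr, y_te = train_test_split(X, exposed, test_size=0.25, random_state=0)

tree = DecisionTreeClassifier(max_depth=8, random_state=0)
tree.fit(X_tr, y_tr)
print("Accuracy:", accuracy_score(y_te, tree.predict(X_te)))
```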

Relevance:

100.00%

Publisher:

Abstract:

Project Work presented as a partial requirement for obtaining the degree of Master in Statistics and Information Management

Relevance:

100.00%

Publisher:

Abstract:

Dissertation presented at the Faculty of Sciences and Technology of the New University of Lisbon to obtain the degree of Doctor in Electrical Engineering, in the specialty of Robotics and Integrated Manufacturing

Relevance:

100.00%

Publisher:

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Electrical and Computer Engineering

Relevance:

100.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance:

100.00%

Publisher:

Abstract:

Companies are increasingly dependent on distributed, web-based software systems to support their businesses. This increases the need to maintain and extend software systems with up-to-date new features. Thus, the development process for introducing new features usually needs to be swift and agile, and the supporting software evolution process needs to be safe, fast, and efficient. However, this is usually a difficult and challenging task for a developer, due to the lack of support offered by programming environments, frameworks, and database management systems. Changes needed at the level of the code, the database model, and the actual data contained in the database must be planned and developed together and executed in a synchronized way. Even under a careful development discipline, the impact of changing an application's data model is hard to predict. The lifetime of an application comprises changes and updates designed and tested using data that is usually far from the real production data. So, coding DDL and DML SQL scripts to update the database schema and data is the usual (and hard) approach taken by developers. Such a manual approach is error-prone and disconnected from the real data in production, because developers may not know the exact impact of their changes. This work aims to improve the maintenance process in the context of the Agile Platform by OutSystems. Our goal is to design and implement new data-model evolution features that ensure safe support for change and a sound migration process. Our solution includes impact analysis mechanisms targeting the data model and the data itself. This provides developers with a safe, simple, and guided evolution process.
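The coordinated schema-and-data change described above is the part that is easy to get wrong by hand. The sketch below shows the kind of manual DDL + DML migration the abstract argues against, wrapped in a single transaction with a naive pre-check on the existing data; it uses SQLite purely to stay self-contained and has no relation to the OutSystems Agile Platform internals.

```python
# Illustration of a hand-written schema + data migration run as one
# transaction, preceded by a naive impact check. SQLite is used only to
# keep the sketch self-contained; table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, phone TEXT)")
conn.execute("INSERT INTO customer VALUES (1, 'Ana', '912000000'), (2, 'Rui', NULL)")

# Impact analysis (naive): how many rows lack a value the new column needs?
nulls = conn.execute("SELECT COUNT(*) FROM customer WHERE phone IS NULL").fetchone()[0]
print(f"{nulls} row(s) need a default before the new column can be made mandatory")

try:
    with conn:  # DDL and DML committed (or rolled back) together
        conn.execute("ALTER TABLE customer ADD COLUMN phone_e164 TEXT")        # DDL
        conn.execute("UPDATE customer SET phone_e164 = '+351' || phone "
                     "WHERE phone IS NOT NULL")                                # DML
        conn.execute("UPDATE customer SET phone_e164 = '+351000000000' "
                     "WHERE phone IS NULL")                                    # backfill
except sqlite3.Error as exc:
    print("Migration rolled back:", exc)

print(conn.execute("SELECT id, name, phone_e164 FROM customer").fetchall())
```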

Relevance:

100.00%

Publisher:

Abstract:

In the recent past, hardly anyone could have predicted this course of GIS development. GIS is moving from the desktop to the cloud. Web 2.0 enabled people to input data into the web, and these data are becoming increasingly geolocated. Large amounts of data have formed what is called "Big Data", and scientists still do not fully know how to deal with it. Different data mining tools are used to try to extract useful information from this Big Data. In our study, we deal with one part of these data: User Generated Geographic Content (UGGC). The Panoramio initiative allows people to upload photos and describe them with tags. These photos are geolocated, meaning that they have an exact location on the Earth's surface according to a certain spatial reference system. Using data mining tools, we try to answer whether it is possible to extract land use information from Panoramio photo tags, and to what extent this information could be accurate. Finally, we compared different data mining methods in order to determine which one performs best for this kind of data, which is text. Our answers are quite encouraging: with more than 70% accuracy, we showed that extracting land use information is possible to some extent. We also found the Memory Based Reasoning (MBR) method to be the most suitable for this kind of data in all cases.
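Memory Based Reasoning is essentially a nearest-neighbour approach, so a hedged approximation of this experiment is a k-NN classifier over TF-IDF vectors of photo tags. The tag strings and land-use classes below are made up; they only illustrate the text-mining setup, not the actual Panoramio dataset or the study's parameters.

```python
# Hedged approximation of the MBR (nearest-neighbour) setup: classify land
# use from photo tags using TF-IDF + k-NN. Made-up example tags and classes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

tags = [
    "beach sea sand sunset",
    "office skyscraper downtown traffic",
    "forest hiking trail trees",
    "harbour boats fishing pier",
    "shopping mall parking street",
    "vineyard fields farm tractor",
]
land_use = ["coastal", "urban", "natural", "coastal", "urban", "agricultural"]

model = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=3))
model.fit(tags, land_use)

print(model.predict(["old town square cafe shops"]))  # hypothetical new photo tags
```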

Relevance:

100.00%

Publisher:

Abstract:

The reduction of greenhouse gas emissions is one of the big global challenges for the next decades, due to its severe impact on the atmosphere, which leads to changes in the climate and other environmental factors. One of the main sources of greenhouse gases is energy consumption, so a number of initiatives and calls for awareness and sustainability in energy use have been issued among different types of institutions and organizations. In 2007 the European Council adopted energy and climate change objectives targeting a 20% improvement by 2020, and all European countries are required to use energy more efficiently. Several steps can be taken towards energy reduction: understanding the buildings' behavior over time, revealing the factors that influence consumption, applying the right measures for reduction and sustainability, visualizing the hidden connection between our daily habits and their impact on the natural world, and promoting a more sustainable life. Researchers have suggested that feedback visualization can effectively encourage conservation, with an energy reduction rate of 18%. Furthermore, researchers have contributed to identifying a set of factors that are very likely to influence consumption, such as occupancy level, occupant behavior, environmental conditions, building thermal envelope, and climate zone. Nowadays, the amount of energy consumed on university campuses is huge, and great effort is needed to meet the reduction requested by the European Council, as well as the cost reduction. Thus, the present study was performed on university buildings as a use case to: a. investigate the most dynamic factors influencing energy consumption on campus; b. implement prediction models for electricity consumption using different techniques, such as traditional regression and alternative machine learning techniques; and c. assist energy management by providing real-time energy feedback and visualization on campus for greater awareness and better decision making. This methodology is applied to the use case of University Jaume I (UJI), located in Castellon, Spain.
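As a hedged illustration of point b above, the sketch compares a traditional linear regression with a machine-learning alternative (a random forest) for predicting hourly electricity consumption from calendar and weather features. The feature set and synthetic data are assumptions; the study's actual variables and models for the UJI campus are not reproduced here.

```python
# Hedged illustration: linear regression vs. random forest for hourly
# electricity consumption, on synthetic calendar/weather features.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 2000
hour = rng.integers(0, 24, n)
weekday = rng.integers(0, 7, n)
temp = rng.normal(20, 8, n)          # assumed outdoor temperature (degrees C)
occupancy = rng.uniform(0, 1, n)     # assumed occupancy level

# Synthetic consumption: daily cycle + occupancy + cooling load + noise.
kwh = (50 + 30 * np.sin(np.pi * hour / 24) + 40 * occupancy
       + 2 * np.clip(temp - 24, 0, None) + rng.normal(0, 5, n))

X = np.column_stack([hour, weekday, temp, occupancy])
X_tr, X_te, y_tr, y_te = train_test_split(X, kwh, test_size=0.25, random_state=0)

for name, model in [("linear regression", LinearRegression()),
                    ("random forest", RandomForestRegressor(n_estimators=200, random_state=0))]:
    model.fit(X_tr, y_tr)
    print(name, "MAE:", round(mean_absolute_error(y_te, model.predict(X_te)), 2))
```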