864 results for Data mining methods


Relevance: 90.00%

Abstract:

Sensor networks are increasingly becoming one of the main sources of Big Data on the Web. However, the observations that they produce are made available with heterogeneous schemas, vocabularies and data formats, making it difficult to share and reuse these data for purposes other than those for which they were originally set up. In this thesis we address these challenges, considering how we can transform streaming raw data into rich ontology-based information that is accessible through continuous queries for streaming data. Our main contribution is an ontology-based approach for providing data access and query capabilities to streaming data sources, allowing users to express their needs at a conceptual level, independent of implementation and language-specific details. We introduce novel query rewriting and data translation techniques that rely on mapping definitions relating streaming data models to ontological concepts. Specific contributions include:
• The syntax and semantics of the SPARQLStream query language for ontology-based data access, and a query rewriting approach for transforming SPARQLStream queries into streaming algebra expressions.
• The design of an ontology-based streaming data access engine that can internally reuse an existing data stream engine, complex event processor or sensor middleware, using R2RML mappings to define relationships between streaming data models and ontology concepts.
Concerning the sensor metadata of such streaming data sources, we have investigated how raw measurements can be used to characterize streaming data, producing enriched data descriptions in terms of ontological models. Our specific contributions are:
• A representation of sensor data time series that captures gradient information useful for characterizing types of sensor data.
• A method for classifying sensor data time series and determining the type of data, using data mining techniques, and a method for extracting semantic sensor metadata features from the time series.
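For illustration only, here is a minimal Python sketch of the general idea behind the last two contributions: summarizing a time series by gradient statistics and feeding them to an off-the-shelf classifier. The data, features and classifier are assumptions made for the example, not the thesis' actual method.

```python
# Minimal sketch: classify sensor time series by the type of data they carry,
# using gradient-based features (an assumption inspired by the abstract,
# not the thesis' actual feature set).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def gradient_features(series):
    """Summarize a 1-D time series by statistics of its first differences."""
    grad = np.diff(series)
    return [grad.mean(), grad.std(), np.abs(grad).max(), (grad > 0).mean()]

# Hypothetical data: smooth "temperature-like" vs. noisy "wind-like" streams.
rng = np.random.default_rng(0)
smooth = [np.cumsum(rng.normal(0, 0.1, 200)) for _ in range(50)]
noisy = [rng.normal(0, 1.0, 200) for _ in range(50)]
X = np.array([gradient_features(s) for s in smooth + noisy])
y = np.array([0] * 50 + [1] * 50)  # 0 = temperature-like, 1 = wind-like

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```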

Relevance: 90.00%

Abstract:

In the last decade, complex networks have widely been applied to the study of many natural and man-made systems, and to the extraction of meaningful information from the interaction structures created by genes and proteins. Nevertheless, less attention has been devoted to metabonomics, due to the lack of a natural network representation of spectral data. Here we define a technique for reconstructing networks from spectral data sets, where nodes represent spectral bins, and pairs of them are connected when their intensities follow a pattern associated with a disease. The structural analysis of the resulting network can then be used to feed standard data-mining algorithms, for instance for the classification of new (unlabeled) subjects. Furthermore, we show how the structure of the network is resilient to the presence of external additive noise, and how it can be used to extract relevant knowledge about the development of the disease.
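A rough sketch of the general recipe follows. Since the abstract does not specify the exact disease-associated pattern used to connect bins, the example substitutes a simple correlation threshold and uses networkx for the structural features; both are assumptions, not the paper's method.

```python
# Sketch of the general idea: turn spectral bins into network nodes, connect
# bins whose intensities co-vary across subjects (correlation is a stand-in
# for the paper's disease-associated pattern), then use structural features
# of the network to feed ordinary data-mining algorithms.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
spectra = rng.normal(size=(40, 30))        # 40 subjects x 30 spectral bins
corr = np.corrcoef(spectra, rowvar=False)  # bin-by-bin correlation matrix

G = nx.Graph()
G.add_nodes_from(range(corr.shape[0]))
threshold = 0.4                            # assumed threshold, not from the paper
for i in range(corr.shape[0]):
    for j in range(i + 1, corr.shape[1]):
        if abs(corr[i, j]) > threshold:
            G.add_edge(i, j, weight=corr[i, j])

# Node-level structural features that could feed a standard classifier.
features = {n: (G.degree(n), nx.clustering(G, n)) for n in G.nodes}
print(len(G.edges), "edges; sample features for bin 0:", features[0])
```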

Relevance: 90.00%

Abstract:

Nanotechnology represents an area of particular promise and significant opportunity across multiple scientific disciplines. Ongoing nanotechnology research ranges from the characterization of nanoparticles and nanomaterials to the analysis and processing of experimental data seeking correlations between nanoparticles and their functionalities and side effects. Due to their special properties, nanoparticles are suitable for cellular-level diagnostics and therapy, offering numerous applications in medicine, e.g. development of biomedical devices, tissue repair, drug delivery systems and biosensors. In nanomedicine, recent studies are producing large amounts of structural and property data, highlighting the role for computational approaches in information management. While in vitro and in vivo assays are expensive, the cost of computing is falling. Furthermore, improvements in the accuracy of computational methods (e.g. data mining, knowledge discovery, modeling and simulation) have enabled effective tools to automate the extraction, management and storage of these vast data volumes. Since this information is widely distributed, one major issue is how to locate and access data where it resides (which also poses data-sharing limitations). The novel discipline of nanoinformatics addresses the information challenges related to nanotechnology research. In this paper, we summarize the needs and challenges in the field and present an overview of extant initiatives and efforts.

Relevance: 90.00%

Abstract:

Objective: The main purpose of this research is the novel use of artificial metaplasticity on a multilayer perceptron (AMMLP) as a data mining tool for predicting the outcome of patients with acquired brain injury (ABI) after cognitive rehabilitation. The final goal is to increase knowledge in the field of rehabilitation theory based on cognitive affectation.
Methods and materials: The data set used in this study contains records belonging to 123 ABI patients with moderate to severe cognitive affectation (according to the Glasgow Coma Scale) who underwent rehabilitation at Institut Guttmann Neurorehabilitation Hospital (IG) using the tele-rehabilitation platform PREVIRNEC©. The variables included in the analysis comprise the initial neuropsychological evaluation of the patient (cognitive affectation profile), the results of the rehabilitation tasks performed by the patient in PREVIRNEC© and the outcome of the patient after a 3–5 month treatment. To predict the treatment outcome, we apply and compare three different data mining techniques: the AMMLP model, a backpropagation neural network (BPNN) and a C4.5 decision tree.
Results: The prediction performance of the models was measured by ten-fold cross-validation and several architectures were tested. The results obtained by the AMMLP model are clearly superior, with an average predictive performance of 91.56%. The BPNN and C4.5 models have average prediction accuracies of 80.18% and 89.91% respectively. The best single AMMLP model provided a specificity of 92.38%, a sensitivity of 91.76% and a prediction accuracy of 92.07%.
Conclusions: The prediction model proposed in this study makes it possible to increase knowledge about the factors contributing to an ABI patient's recovery and to estimate treatment efficacy in individual patients. The ability to predict treatment outcomes may provide new insights toward improving effectiveness and creating personalized therapeutic interventions based on clinical evidence.
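The evaluation protocol (ten-fold cross-validation with accuracy, sensitivity and specificity) can be sketched generically as below. A standard MLP and a CART decision tree stand in for AMMLP and C4.5, and the data are synthetic rather than the PREVIRNEC records.

```python
# Sketch of the evaluation protocol only: ten-fold cross-validation of a
# standard MLP (a stand-in for AMMLP, which is not an off-the-shelf model)
# and a decision tree (CART here; the paper used C4.5), with sensitivity and
# specificity computed from the pooled confusion matrix. Data are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=123, n_features=20, random_state=0)

for name, model in [("MLP", MLPClassifier(max_iter=2000, random_state=0)),
                    ("Tree", DecisionTreeClassifier(random_state=0))]:
    pred = cross_val_predict(model, X, y, cv=10)
    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
    print(name,
          "accuracy=%.3f" % ((tp + tn) / len(y)),
          "sensitivity=%.3f" % (tp / (tp + fn)),
          "specificity=%.3f" % (tn / (tn + fp)))
```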

Relevance: 90.00%

Abstract:

The study of the effectiveness of cognitive rehabilitation processes, and the identification of cognitive profiles in order to define comparable populations, is a controversial area, yet one that is strongly needed in order to improve therapies. There is limited evidence about the efficacy of cognitive rehabilitation. Many trials conclude that, in spite of an apparently good clinical response, the differences do not reach statistical significance. The common feature in all these trials is the heterogeneity of the populations. In this situation, observational studies on very well controlled cohorts, together with innovative methods of knowledge extraction, could provide methodological insights for the design of more accurate comparative trials. Correlation studies between neuropsychological tests and patients' capacities have been carried out [1], [2], as well as correlations between tests and morphological changes in the brain [3]. The efficacy of the procedures depends on three main factors: the affectation profile, the scheduled tasks and the execution results. The relationship between them makes up cognitive rehabilitation as a discipline, but its structure is not properly defined. In this work we present a clustering method used in Neuro Personal Trainer (NPT) to group patients into cognitive profiles using data mining techniques. The system uses these clusters to personalize treatments, using the patient's assigned cluster to select which tasks are most suitable for his or her concrete needs, by comparing the results obtained in the past by patients with the same profile.
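Since the abstract does not name NPT's actual clustering algorithm, the following sketch assumes k-means over standardized test scores, purely to illustrate how profiles could be formed and how a new patient could be assigned to one.

```python
# Illustration only: group patients into cognitive profiles with k-means
# (the abstract does not name NPT's actual algorithm) and assign a new
# patient to the nearest profile.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
scores = rng.normal(size=(60, 5))     # 60 patients x 5 hypothetical test scores

scaler = StandardScaler().fit(scores)
profiles = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scaler.transform(scores))

new_patient = rng.normal(size=(1, 5))
cluster = profiles.predict(scaler.transform(new_patient))[0]
print("Assigned cognitive profile:", cluster)
# Tasks whose results were best for patients in this cluster would then be
# preferred when personalizing the new patient's treatment.
```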

Relevance: 90.00%

Abstract:

Effective data summarization methods that use AI techniques can help humans understand large sets of data. In this paper, we describe a knowledge-based method for automatically generating summaries of geospatial and temporal data, i.e. data with geographical and temporal references. The method is useful for summarizing data streams, such as GPS traces and traffic information, that are becoming more prevalent with the increasing use of sensors in computing devices. The method presented here is an initial architecture for our ongoing research in this domain. In this paper we describe the data representations we have designed for our method and our implementation of the components that perform data abstraction and natural language generation. We also discuss evaluation results that show the ability of our method to generate certain types of geospatial and temporal descriptions.
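A toy two-stage pipeline in the spirit of the described method (not the authors' architecture) might look as follows, with a hypothetical GPS trace, a trivial stop/move abstraction and template-based generation.

```python
# Toy two-stage pipeline in the spirit of the paper (not its actual system):
# (1) data abstraction: collapse a GPS trace into stay/move segments;
# (2) natural language generation: verbalize each segment with a template.
from datetime import datetime

trace = [  # (timestamp, latitude, longitude); hypothetical data
    (datetime(2013, 5, 1, 8, 0), 40.416, -3.703),
    (datetime(2013, 5, 1, 8, 30), 40.417, -3.703),
    (datetime(2013, 5, 1, 9, 15), 40.453, -3.688),
]

def abstract_trace(points, eps=0.005):
    """Label consecutive point pairs as 'stayed in place' or 'moved'."""
    segments = []
    for (t0, lat0, lon0), (t1, lat1, lon1) in zip(points, points[1:]):
        moved = abs(lat1 - lat0) > eps or abs(lon1 - lon0) > eps
        segments.append((t0, t1, "moved" if moved else "stayed in place"))
    return segments

def generate_text(segments):
    template = "Between {start:%H:%M} and {end:%H:%M} the subject {event}."
    return " ".join(template.format(start=s, end=e, event=ev) for s, e, ev in segments)

print(generate_text(abstract_trace(trace)))
```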

Relevance: 90.00%

Abstract:

Data mining is a field of computer science concerned with the process of discovering patterns in large volumes of data. It seeks to generate information similar to that which a human expert could produce, and it is also the process of discovering interesting knowledge, such as patterns, associations, changes, anomalies and significant structures, from large amounts of data stored in databases, data warehouses or any other information storage medium. Machine learning is a branch of artificial intelligence whose goal is to develop techniques that allow computers to learn; more concretely, it aims to create programs capable of generalizing behaviours from unstructured information supplied in the form of examples. Data mining uses machine learning methods to discover and enumerate the patterns present in data. In recent years, classification and machine learning techniques have been applied in a large number of areas, such as healthcare, commerce and security. A very current example is the detection of fraudulent behaviour and transactions in banks.
An application of interest is the use of the techniques developed for detecting fraudulent behaviour to identify the users present inside intelligent environments without the need for an authentication process. To verify that these techniques are effective during the analysis phase of a given solution, it is necessary to create a platform that supports the development, validation and evaluation of learning and classification algorithms in the application environments under study. The proposed project is defined around the creation of a platform for evaluating machine learning algorithms as identification mechanisms in intelligent spaces. Both the algorithms characteristic of these techniques and the existing platforms will be studied in order to define a set of specific requirements for the platform to be developed. After this analysis the platform will be partially developed; it will then be validated with proof-of-concept tests and finally verified in a research environment yet to be defined.
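The following sketch only illustrates the kind of evaluation such a platform would support: several off-the-shelf classifiers compared as identification mechanisms on synthetic sensor-derived features, where the class label is the user's identity. The models and features are assumptions, not the platform described in the project.

```python
# Sketch of the kind of evaluation such a platform would support: several
# classifiers compared as identification mechanisms on hypothetical
# sensor-derived features, where the class label is the user's identity.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

# 5 users ("classes"), 12 features per observation (e.g. location, gait, schedule).
X, y = make_classification(n_samples=500, n_features=12, n_informative=8,
                           n_classes=5, random_state=0)

candidates = {"kNN": KNeighborsClassifier(), "SVM": SVC(), "NaiveBayes": GaussianNB()}
for name, model in candidates.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(name, "identification accuracy: %.3f" % acc)
```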

Relevance: 90.00%

Abstract:

A series of motion compensation algorithms is run on the challenge data, including methods that optimize only a linear transformation, only a non-linear transformation, or both, first a linear and then a non-linear transformation. Methods that optimize a linear transformation run an initial segmentation of the area of interest around the left myocardium by means of an independent component analysis (ICA) (ICA-*). Methods that optimize non-linear transformations may run directly on the full images, or after linear registration. The non-linear motion compensation approaches applied include one method that only registers pairs of images in temporal succession (SERIAL), one method that registers all images to one common reference (AllToOne), one method that was designed to exploit quasi-periodicity in image data acquired during free breathing and was adapted to also be usable with image data acquired with an initial breath-hold (QUASI-P), a method that uses ICA to identify the motion and eliminate it (ICA-SP), and a method that relies on the estimation of a pseudo ground truth (PG) to guide the motion compensation.
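To make the difference between the SERIAL and AllToOne orchestration schemes concrete, the sketch below reduces registration to a translation estimated by phase correlation, a drastic simplification of the non-linear methods actually compared in the challenge; frames and shifts are simulated.

```python
# Contrast of the two simplest orchestration schemes mentioned above, SERIAL
# (register consecutive frames and accumulate) and AllToOne (register every
# frame to one common reference). Registration itself is reduced here to a
# translation estimated by phase correlation, purely for illustration.
import numpy as np
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

rng = np.random.default_rng(3)
base = rng.random((64, 64))
frames = [nd_shift(base, (dy, dx)) for dy, dx in [(0, 0), (1, 2), (2, 3), (4, 1)]]

def serial(frames):
    """Accumulate pairwise shifts between consecutive frames."""
    total, out = np.zeros(2), [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        total += phase_cross_correlation(prev, cur)[0]
        out.append(nd_shift(cur, total))
    return out

def all_to_one(frames, ref_index=0):
    """Register every frame directly to one common reference."""
    ref = frames[ref_index]
    return [nd_shift(f, phase_cross_correlation(ref, f)[0]) for f in frames]

print("serial frames:", len(serial(frames)), "| all-to-one frames:", len(all_to_one(frames)))
```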

Relevance: 90.00%

Abstract:

Clinicians could model a patient's brain injury through his or her brain activity. However, how this model is defined and how it changes as the patient recovers are questions yet to be answered. In this paper, the use of the MedVir framework is proposed with the aim of answering these questions. Based on complex data mining techniques, MedVir provides not only the differentiation between TBI patients and control subjects (with 72% accuracy using 0.632 bootstrap validation), but also the ability to detect whether a patient may recover or not, and all of that in a quick and easy way through a visualization technique that allows interaction.
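The 0.632 bootstrap estimate quoted above combines resubstitution and out-of-bag performance; a generic sketch of that validation scheme (with a plain classifier on synthetic data, not the MedVir framework itself) is given below.

```python
# Generic 0.632 bootstrap accuracy estimate (the validation scheme quoted
# above), illustrated with a plain classifier on synthetic data; this is
# not the MedVir framework itself.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=100, n_features=10, random_state=0)
rng = np.random.default_rng(0)
scores = []
for _ in range(50):                                   # 50 bootstrap rounds
    idx = rng.integers(0, len(X), len(X))             # sample with replacement
    oob = np.setdiff1d(np.arange(len(X)), idx)        # out-of-bag indices
    if len(oob) == 0:
        continue
    model = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    acc_train = model.score(X[idx], y[idx])           # resubstitution accuracy
    acc_oob = model.score(X[oob], y[oob])             # out-of-bag accuracy
    scores.append(0.368 * acc_train + 0.632 * acc_oob)
print("0.632 bootstrap accuracy: %.3f" % np.mean(scores))
```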

Relevance: 90.00%

Abstract:

The mobile apps market is a tremendous success, with millions of apps downloaded and used every day by users spread all around the world. For app developers, having their apps published on one of the major app stores (e.g. the Google Play market) is just the beginning of the app's lifecycle. Indeed, in order to compete successfully with the other apps in the market, an app has to be updated frequently by adding new attractive features and by fixing existing bugs. Clearly, any developer interested in increasing the success of her app should try to implement features desired by the app's users and to fix bugs affecting the user experience of many of them. A precious source of information about users' opinions and wishes is the reviews they leave on the store from which they downloaded the app. However, to exploit such information the app's developer would have to read each user review manually and verify whether it contains useful information (e.g. suggestions for new features). This is not doable if the app receives hundreds of reviews per day, as happens for the very popular apps on the market. In this work, our aim is to support mobile app developers by proposing a novel approach that exploits data mining, natural language processing, machine learning and clustering techniques in order to classify user reviews on the basis of the information they contain (e.g. useless, suggestion for new features, bug reporting). The approach has been empirically evaluated and made available in a web-based tool publicly available to all app developers. The results show that the developed tool: (i) is able to correctly categorise user reviews on the basis of their content (e.g. isolating those reporting bugs) with 78% accuracy, (ii) produces clusters of reviews (e.g. grouping together reviews indicating exactly the same bug to be fixed) that are meaningful from a developer's point of view, and (iii) is considered useful by a software company working in the mobile app development market.
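A compressed sketch of such a pipeline (TF-IDF features, a review classifier, and clustering of the bug reports) follows; the reviews, categories and models are illustrative assumptions, not the tool described in the paper.

```python
# Compressed sketch of the kind of pipeline described above: TF-IDF features,
# a classifier that separates review categories, and clustering of the reviews
# labelled as bug reports. Reviews, labels and models are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

reviews = ["app crashes when I open the camera", "please add a dark mode",
           "love it!!!", "crashes on startup every time", "needs offline support",
           "five stars", "camera screen freezes and closes the app", "great app"]
labels = ["bug", "feature", "useless", "bug", "feature", "useless", "bug", "useless"]

vec = TfidfVectorizer()
X = vec.fit_transform(reviews)
clf = LogisticRegression(max_iter=1000).fit(X, labels)

new = vec.transform(["the app closes itself when the camera opens"])
print("predicted category:", clf.predict(new)[0])

# Cluster the bug reports so reviews describing the same defect end up together.
bug_X = vec.transform([r for r, l in zip(reviews, labels) if l == "bug"])
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(bug_X)
print("bug clusters:", clusters.tolist())
```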

Relevance: 90.00%

Abstract:

Accurate forecasting in the field of transport logistics is of vital importance for proper planning and optimization of means and resources. To date, port planning studies have been based mainly on empirical models, used to plan new terminals and develop master plans when no initial data are available; on analytical models, closer to queuing theory and waiting times, with complex mathematical formulations that must be simplified to keep the model practical and manageable; or on simulation models, which require a significant investment in software and complex developments to obtain acceptable results.
Data Mining (DM) is a modern interdisciplinary field that encompasses those techniques that operate automatically (requiring minimal human intervention) and are efficient when working with the large amounts of information available in the databases of many practical problems. The practical application of these disciplines extends to numerous commercial and research areas in prediction, classification and diagnosis problems. Among the different techniques available in data mining, artificial neural networks (ANN) and probabilistic or Bayesian networks (BN) allow all the information relevant to a given problem to be modelled jointly. This PhD thesis analyses two applications of these techniques in the port domain, specifically for container terminals. ANN are developed as a tool to obtain future traffic and resource forecasts for different ports from exploitation variables, yielding continuous values. For the Bayesian networks (BN), similar work is carried out, yielding discrete values (an interval). The main result is the possibility of using both ANN and BN to estimate future physical parameters of a terminal, as well as the relationships between them, allowing a correct assignment of the means to be used and thus increasing the terminal's productive efficiency. As a final step, a short-term complementarity study of both models is carried out, confirming the good acceptance of the results obtained. It can therefore be concluded that these prediction methods can be of great help in port planning.
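As a rough illustration of the two outputs discussed (a continuous ANN forecast and an interval estimate), the sketch below trains a small neural network on synthetic exploitation variables and simply bins its prediction; a genuine Bayesian network would require a dedicated library and is not attempted here.

```python
# Rough illustration of the two outputs discussed above: a neural network
# giving a continuous traffic forecast from exploitation variables, and an
# interval ("discrete") output obtained here by simply binning that forecast.
# Synthetic data; not the thesis' models.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
X = rng.random((200, 3))                  # e.g. berth occupancy, cranes, vessel calls
teu = 1000 * X[:, 0] + 500 * X[:, 1] + rng.normal(0, 20, 200)   # container traffic

ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X, teu)
forecast = ann.predict(rng.random((1, 3)))[0]

bins = np.arange(0, 2001, 250)            # 250-TEU intervals
low = bins[max(np.searchsorted(bins, forecast), 1) - 1]
print("continuous forecast: %.0f TEU; interval: [%d, %d) TEU" % (forecast, low, low + 250))
```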

Relevance: 90.00%

Abstract:

The Biomolecular Interaction Network Database (BIND; http://binddb.org) is a database designed to store full descriptions of interactions, molecular complexes and pathways. Development of the BIND 2.0 data model has led to the incorporation of virtually all components of molecular mechanisms including interactions between any two molecules composed of proteins, nucleic acids and small molecules. Chemical reactions, photochemical activation and conformational changes can also be described. Everything from small molecule biochemistry to signal transduction is abstracted in such a way that graph theory methods may be applied for data mining. The database can be used to study networks of interactions, to map pathways across taxonomic branches and to generate information for kinetic simulations. BIND anticipates the coming large influx of interaction information from high-throughput proteomics efforts including detailed information about post-translational modifications from mass spectrometry. Version 2.0 of the BIND data model is discussed as well as implementation, content and the open nature of the BIND project. The BIND data specification is available as ASN.1 and XML DTD.
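Because BIND records can be treated as a graph, the style of graph-theoretic data mining mentioned above can be illustrated very simply; the molecules and interactions in the sketch below are invented, not BIND content.

```python
# Toy illustration of applying graph theory to interaction records of the
# kind BIND stores. The molecules and interactions below are invented and
# only show the style of analysis, not actual BIND data.
import networkx as nx

interactions = [("proteinA", "proteinB"), ("proteinB", "dnaX"),
                ("proteinB", "smallMol1"), ("proteinC", "proteinA"),
                ("proteinD", "proteinC")]

G = nx.Graph()
G.add_edges_from(interactions)

hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)   # most connected molecules
print("hubs:", hubs[:2])
print("path proteinD -> dnaX:", nx.shortest_path(G, "proteinD", "dnaX"))
print("connected components:", nx.number_connected_components(G))
```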

Relevance: 90.00%

Abstract:

We set out to define patterns of gene expression during kidney organogenesis by using high-density DNA array technology. Expression analysis of 8,740 rat genes revealed five discrete patterns or groups of gene expression during nephrogenesis. Group 1 consisted of genes with very high expression in the early embryonic kidney, many with roles in protein translation and DNA replication. Group 2 consisted of genes that peaked in midembryogenesis and contained many transcripts specifying proteins of the extracellular matrix. Many additional transcripts allied with groups 1 and 2 had known or proposed roles in kidney development and included LIM1, POD1, GFRA1, WT1, BCL2, Homeobox protein A11, timeless, pleiotrophin, HGF, HNF3, BMP4, TGF-α, TGF-β2, IGF-II, met, FGF7, BMP4, and ganglioside-GD3. Group 3 consisted of transcripts that peaked in the neonatal period and contained a number of retrotransposon RNAs. Group 4 contained genes that steadily increased in relative expression levels throughout development, including many genes involved in energy metabolism and transport. Group 5 consisted of genes with relatively low levels of expression throughout embryogenesis but with markedly higher levels in the adult kidney; this group included a heterogeneous mix of transporters, detoxification enzymes, and oxidative stress genes. The data suggest that the embryonic kidney is committed to cellular proliferation and morphogenesis early on, followed sequentially by extracellular matrix deposition and acquisition of markers of terminal differentiation. The neonatal burst of retrotransposon mRNA was unexpected and may play a role in a stress response associated with birth. Custom analytical tools were developed including “The Equalizer” and “eBlot,” which contain improved methods for data normalization, significance testing, and data mining.
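The grouping of genes by temporal expression pattern can be illustrated generically (this is not "The Equalizer" or "eBlot"): each gene's profile across developmental stages is normalized and then clustered, as in the sketch below, which uses simulated profiles rather than the 8,740-gene rat array data.

```python
# Generic illustration of grouping genes by temporal expression pattern:
# each simulated gene profile across developmental stages is z-score
# normalized and then clustered. Not the authors' tools or data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
stages = 6                                                   # e.g. embryo ... adult
early = rng.normal([5, 4, 3, 2, 1, 1], 0.3, (40, stages))    # early-peaking genes
late = rng.normal([1, 1, 2, 3, 4, 5], 0.3, (40, stages))     # adult-enriched genes
profiles = np.vstack([early, late])

z = (profiles - profiles.mean(axis=1, keepdims=True)) / profiles.std(axis=1, keepdims=True)
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(z)
print("genes per expression group:", np.bincount(groups))
```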

Relevance: 90.00%

Abstract:

© The Author(s) 2014. Acknowledgements We thank the Information Services Division, Scotland, who provided the SMR01 data, and NHS Grampian, who provided the biochemistry data. We also thank the University of Aberdeen’s Data Management Team. Funding This work was supported by the Chief Scientists Office for Scotland (grant no. CZH/4/656).

Relevance: 90.00%

Abstract:

Given rising life expectancy worldwide, the probability of natural accidents and physical trauma in everyday life is growing, which leads to an increase in the demand for rehabilitation. Physical therapy under the paradigm of robotic rehabilitation with serious games offers greater patient motivation and engagement with the treatment; its use has been recommended by the American Heart Association (AHA), which assigned it the highest rating (Level A) for inpatients and outpatients. However, the analytical potential of the data collected by the robotic devices involved remains little explored, leaving unextracted information that could be of great value to treatment. The focus of this work is the application of knowledge discovery techniques to classify the performance of patients diagnosed with chronic hemiparesis. The patients were placed in a robotic rehabilitation environment using InMotion ARM, a robotic device for upper-limb rehabilitation and collection of performance data. A knowledge discovery in databases (KDD) workflow was applied to the data, performing preprocessing, transformation (feature extraction) and then data mining with machine learning algorithms. The strategy adopted in this work resulted in a pattern classifier capable of distinguishing hemiparetic sides with a precision of 94%, with eight attributes feeding the input of the resulting mechanism. Interpreting this set of attributes, it was observed that force data are the most significant, accounting for half of the composition of a sample.
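A generic sketch in the spirit of the described workflow follows: simple features extracted from simulated force recordings, then a cross-validated classifier distinguishing the hemiparetic side. The data, features and model are assumptions for illustration, not the study's.

```python
# Generic KDD-style sketch in the spirit of the workflow described above:
# simple features extracted from simulated force recordings of each session,
# then a classifier distinguishing the hemiparetic side, evaluated with
# cross-validation. Not the study's actual data, features or model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)

def session_features(force):
    """Transformation step: summarize one force time series."""
    return [force.mean(), force.std(), force.max(), np.percentile(force, 90)]

# Simulated sessions: the affected side produces weaker, more variable force.
left = [rng.normal(8, 3, 300) for _ in range(60)]      # hemiparetic side
right = [rng.normal(15, 1.5, 300) for _ in range(60)]  # unaffected side
X = np.array([session_features(f) for f in left + right])
y = np.array([0] * 60 + [1] * 60)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy: %.3f" % cross_val_score(clf, X, y, cv=10).mean())
```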