801 results for Knowledge Discovery Tools


Relevance: 90.00%

Abstract:

This chapter introduces the latest practices and technologies in the interactive interpretation of environmental data. With environmental data becoming ever larger, more diverse and more complex, there is a need for a new generation of tools that provide new capabilities over and above those of the standard workhorses of science. These new tools aid the scientist in discovering interesting new features (and also problems) in large datasets by allowing the data to be explored interactively using simple, intuitive graphical tools. In this way, new discoveries are made that are commonly missed by automated batch data processing. This chapter discusses the characteristics of environmental science data, common current practice in data analysis, and the supporting tools and infrastructure. New approaches are introduced and illustrated from the points of view of both the end user and the underlying technology. We conclude by speculating on future developments in the field and on what must be achieved to fulfil this vision.
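As a minimal sketch of the kind of interactive exploration the chapter describes (not the chapter's own tooling), the snippet below links a time-series plot to a pick handler so an outlier spotted visually can be inspected immediately. The synthetic "temperature" series and the injected anomaly are assumptions for illustration.

```python
# Interactive exploration sketch: a synthetic environmental series with an
# anomaly a batch pipeline might smooth over; clicking a point inspects it.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
days = np.arange(365)
temperature = 10 + 8 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 1.5, 365)
temperature[200:203] += 12  # injected anomaly (an assumption)

fig, (ax_series, ax_hist) = plt.subplots(1, 2, figsize=(10, 4))
ax_series.plot(days, temperature, picker=True, pickradius=5)
ax_series.set(xlabel="day of year", ylabel="temperature (degC)")
ax_hist.hist(temperature, bins=40)
ax_hist.set(xlabel="temperature (degC)", ylabel="count")

def on_pick(event):
    # Print the value of the clicked point so visual outliers can be checked.
    idx = event.ind[0]
    print(f"day {days[idx]}: {temperature[idx]:.1f} degC")

fig.canvas.mpl_connect("pick_event", on_pick)
plt.show()
```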

Relevance: 90.00%

Abstract:

Background: Since their inception, Twitter and related microblogging systems have provided a rich source of information for researchers and have attracted interest in their affordances and use. Since 2009, PubMed has included 123 journal articles on medicine and Twitter, but no overview exists of how the field uses Twitter in research.

Objective: This paper aims to identify published work relating to Twitter indexed by PubMed, and then to classify it. This classification provides a framework in which future researchers can position their work, and an understanding of the current reach of research using Twitter in medical disciplines. Limiting the study to papers indexed by PubMed ensures the work provides a reproducible benchmark.

Methods: Papers indexed by PubMed on Twitter and related topics were identified and reviewed. The papers were then qualitatively classified based on title and abstract to determine their focus. The Twitter-focused work was studied in detail to determine what data, if any, it was based on, and from this a categorization of the dataset sizes used in the studies was developed. Using open-coded content analysis, additional important categories were identified relating to primary methodology, domain, and aspect.

Results: As of 2012, PubMed comprises more than 21 million citations from the biomedical literature, from which a corpus of 134 potentially Twitter-related papers was identified; eleven were subsequently found not to be relevant. There were no papers prior to 2009 relating to microblogging, a term first used in 2006. Of the remaining 123 papers that mentioned Twitter, thirty focused on Twitter (the others referring to it tangentially). The early Twitter-focused papers introduced the topic and highlighted its potential without carrying out any form of data analysis. The majority of published papers used analytic techniques to sort through thousands, if not millions, of individual tweets, often depending on automated tools to do so. Our analysis demonstrates that researchers are starting to use knowledge discovery methods and data mining techniques to understand vast quantities of tweets: the study of Twitter is becoming quantitative research.

Conclusions: This work is, to the best of our knowledge, the first overview study of medicine-related research based on Twitter and related microblogging. We have used five dimensions to categorize published medicine-related research on Twitter. This classification provides a framework within which researchers studying the development and use of Twitter within medicine-related research, and those undertaking comparative studies of research relating to Twitter in medicine and beyond, can position and ground their work.
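The retrieval step of such a review can be sketched with Biopython's Entrez wrapper; this is an illustration of the general approach, not the authors' actual pipeline, and the query string and date range are assumptions based on the abstract.

```python
# Hedged sketch: find PubMed citations mentioning Twitter or microblogging,
# which could then be hand-classified by title and abstract.
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI requires a contact address

handle = Entrez.esearch(
    db="pubmed",
    term="twitter OR microblogging",
    datetype="pdat",
    mindate="2006",
    maxdate="2012",
    retmax=200,
)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} matching citations")
for pmid in record["IdList"][:10]:
    print(pmid)  # each PMID would then be reviewed and classified manually
```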

Relevance: 90.00%

Abstract:

This paper introduces an ontology-based knowledge model for knowledge management. The model can facilitate knowledge discovery that provides users with insight for decision making. The users requiring this insight normally play different roles, with different requirements, within an organisation. To meet these requirements, insights are created by purposefully aggregating transactional data, which involves a semantic data integration process. In this paper, we present a knowledge management system capable of representing knowledge requirements in a domain context and enabling semantic data integration through ontology modeling. The knowledge domain context of the United Bible Societies is used to illustrate the system's knowledge management capabilities.
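A minimal sketch of the integration idea, using rdflib: facts from separate transactional sources are expressed against shared ontology terms so a single semantic query can answer a role-specific question. All names in the toy ontology (ex:Region, ex:salesTotal) are illustrative assumptions, not the paper's model.

```python
# Semantic data integration sketch with rdflib.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/km#")
g = Graph()
g.bind("ex", EX)

# Facts that might originate in two separate transactional systems.
g.add((EX.north, RDF.type, EX.Region))
g.add((EX.north, EX.salesTotal, Literal(120000)))
g.add((EX.south, RDF.type, EX.Region))
g.add((EX.south, EX.salesTotal, Literal(87000)))

# One SPARQL query gives a decision maker an aggregated, role-relevant view.
query = """
    SELECT ?region ?total WHERE {
        ?region a ex:Region ;
                ex:salesTotal ?total .
    } ORDER BY DESC(?total)
"""
for region, total in g.query(query, initNs={"ex": EX}):
    print(region, total)
```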

Relevance: 90.00%

Abstract:

This paper describes an application of Social Network Analysis methods to the identification of knowledge demands in public organisations. Affiliation networks established in a postgraduate programme were analysed. The course was run in distance education mode and its students worked at public agencies. Relations established among course participants were mediated through a Moodle virtual learning environment. Data available in Moodle can be extracted using knowledge discovery in databases (KDD) techniques. Potential degrees of closeness among different organisations and among the subjects researched were assessed, suggesting how organisations could cooperate on knowledge management and how to identify their common interests. The study points out that closeness among organisations and research topics may be assessed through affiliation networks, opening up opportunities for applying knowledge management between organisations and creating communities of practice. Concepts from knowledge management and social network analysis provide the theoretical and methodological basis.
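The core affiliation-network step can be sketched with networkx: build a bipartite participant/agency graph and project it onto agencies, with edge weights counting shared participants as a proxy for organisational closeness. The participant/agency pairs below are invented; in the study the equivalent data would come from Moodle.

```python
# Affiliation-network sketch: project a bipartite graph onto organisations.
import networkx as nx
from networkx.algorithms import bipartite

enrolment = [  # (participant, public agency) -- invented example data
    ("ana", "AgencyA"), ("bruno", "AgencyA"),
    ("ana", "AgencyB"), ("carla", "AgencyB"),
    ("carla", "AgencyC"), ("bruno", "AgencyC"),
]
B = nx.Graph()
people = {p for p, _ in enrolment}
agencies = {a for _, a in enrolment}
B.add_nodes_from(people, bipartite=0)
B.add_nodes_from(agencies, bipartite=1)
B.add_edges_from(enrolment)

# Edge weight = number of shared course participants between two agencies.
org_net = bipartite.weighted_projected_graph(B, agencies)
for u, v, data in org_net.edges(data=True):
    print(u, v, "shared participants:", data["weight"])
```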

Relevance: 90.00%

Abstract:

Background: In many experimental pipelines, clustering of multidimensional biological datasets is used to detect hidden structures in unlabelled input data. Taverna is a popular workflow management system used to design and execute scientific workflows and aid in silico experimentation. The availability of fast unsupervised methods for clustering and visualization in the Taverna platform is important to support data-driven scientific discovery in complex and explorative bioinformatics applications.

Results: This work presents a Taverna plugin, the Biological Data Interactive Clustering Explorer (BioDICE), that performs clustering of high-dimensional biological data and provides a nonlinear, topology-preserving projection for visualizing the input data and their similarities. The core algorithm in the BioDICE plugin is the Fast Learning Self-Organizing Map (FLSOM), an improved variant of the Self-Organizing Map (SOM) algorithm. The plugin generates an interactive 2D map that allows visual exploration of multidimensional data and the identification of groups of similar objects. The effectiveness of the plugin is demonstrated on a case study involving chemical compounds.

Conclusions: The number and variety of available tools, together with its extensibility, have made Taverna a popular choice for the development of scientific data workflows. This work presents a novel plugin, BioDICE, which adds a data-driven knowledge discovery component to Taverna. BioDICE provides an effective and powerful clustering tool that can be adopted for the explorative analysis of biological datasets.
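BioDICE itself is a Taverna plugin, but the underlying idea, projecting high-dimensional points onto a 2D SOM grid and reading off groups of similar objects, can be sketched with the MiniSom library. Note this uses a plain SOM, not the FLSOM variant, and the synthetic "compound" data is an assumption.

```python
# SOM sketch: map 8-dimensional points onto a 6x6 grid; nearby best-matching
# units indicate similar objects.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
# Two synthetic groups of "compounds" in 8 dimensions (an assumption).
data = np.vstack([
    rng.normal(0.2, 0.05, size=(50, 8)),
    rng.normal(0.8, 0.05, size=(50, 8)),
])

som = MiniSom(6, 6, input_len=8, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(data, num_iteration=500)

# Each sample maps to its best-matching unit on the 2D grid.
for sample in data[:5]:
    print(som.winner(sample))
```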

Relevance: 90.00%

Abstract:

Human brain imaging techniques, such as Magnetic Resonance Imaging (MRI) and Diffusion Tensor Imaging (DTI), have been established as scientific and diagnostic tools, and their adoption continues to grow. Statistical methods, machine learning and data mining algorithms have been successfully adopted to extract predictive and descriptive models from neuroimaging data. However, the knowledge discovery process typically also requires pre-processing, post-processing and visualisation techniques within complex data workflows. A main obstacle to the integrated preprocessing and mining of MRI data is currently the lack of comprehensive platforms that avoid the manual invocation of preprocessing and mining tools, which makes the process error-prone and inefficient. In this work we present K-Surfer, a novel plug-in for the Konstanz Information Miner (KNIME) workbench that automates the preprocessing of brain images and leverages the mining capabilities of KNIME in an integrated way. K-Surfer supports the importing, filtering, merging and pre-processing of neuroimaging data from FreeSurfer, a tool for human brain MRI feature extraction and interpretation. By automating the steps for importing FreeSurfer data, K-Surfer reduces time costs, eliminates human errors and enables the design of complex analytics workflows for neuroimaging data that leverage the rich functionality available in the KNIME workbench.
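The import step K-Surfer automates can be sketched in a few lines of Python: FreeSurfer's aseg.stats output is a whitespace-separated table whose comment lines start with '#'. The column names below follow the usual aseg.stats layout but should be treated as an assumption for other FreeSurfer versions.

```python
# Sketch: load a FreeSurfer aseg.stats file into a pandas DataFrame.
import pandas as pd

columns = ["Index", "SegId", "NVoxels", "Volume_mm3", "StructName",
           "normMean", "normStdDev", "normMin", "normMax", "normRange"]

def read_aseg_stats(path):
    """Parse the segmentation-statistics table, skipping '#' header lines."""
    return pd.read_csv(path, comment="#", sep=r"\s+", names=columns)

# Example usage (assumes a FreeSurfer subject directory exists):
# stats = read_aseg_stats("subject01/stats/aseg.stats")
# print(stats[["StructName", "Volume_mm3"]].head())
```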

Relevance: 90.00%

Abstract:

Point placement strategies aim at mapping data points represented in higher dimensions to two-dimensional spaces and are frequently used to visualize relationships amongst data instances. They have been valuable tools for the analysis and exploration of data sets of various kinds. Many conventional techniques, however, do not behave well when the number of dimensions is high, as in the case of document collections. Later approaches handle that shortcoming, but may cause too much clutter to allow flexible exploration. In this work we present a novel hierarchical point placement technique that addresses these problems. Good grouping and separation of highly similar data is maintained without increasing computational cost, while the hierarchical structure lends itself both to exploration at various levels of detail and to handling data in subsets, improving analysis capability and allowing manipulation of larger data sets.
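The paper's hierarchical technique is not reproduced here, but the basic point-placement task it builds on, mapping high-dimensional points to 2D while preserving similarity, can be illustrated with scikit-learn's MDS. The synthetic 50-dimensional "document" vectors are an assumption.

```python
# Point placement sketch: project high-dimensional vectors to 2D with MDS.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
docs = np.vstack([  # two synthetic "topic" clusters in 50 dimensions
    rng.normal(0.0, 0.3, size=(30, 50)),
    rng.normal(1.0, 0.3, size=(30, 50)),
])

xy = MDS(n_components=2, random_state=1).fit_transform(docs)
print(xy.shape)  # (60, 2): one 2D position per document, ready to plot
```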

Relevance: 90.00%

Abstract:

Interactive visual representations complement traditional statistical and machine learning techniques for data analysis, allowing users to play a more active role in the knowledge discovery process and making the whole process more understandable. Though visual representations are applicable at several stages of knowledge discovery, a common use of visualization is in the initial stages, to explore and organize a sometimes unknown and complex data set. In this context, the integrated and coordinated use of multiple graphical representations (that is, where user actions can affect multiple visualizations when desired) allows data to be observed from several perspectives and offers richer information than isolated representations. In this paper we propose an underlying model for an extensible and adaptable environment that allows independently developed visualization components to be gradually integrated into a user-configured knowledge discovery application. Because a major requirement when using multiple visual techniques is the ability to link them, so that user actions executed on one representation propagate to others when desired, the model also allows runtime configuration of coordinated user actions across different visual representations. We illustrate how this environment is being used to assist data exploration and organization in a climate classification problem.
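A minimal sketch of the coordination idea: independently developed views register with a mediator, and a user action on one view propagates to the others when coordination is enabled. Class and method names are illustrative assumptions, not the paper's actual model.

```python
# Coordinated-views sketch using a simple mediator pattern.
class Coordinator:
    def __init__(self):
        self._views = []

    def register(self, view):
        self._views.append(view)

    def select(self, source, item_ids):
        # Propagate a selection made on `source` to every other linked view.
        for view in self._views:
            if view is not source:
                view.on_selection(item_ids)

class View:
    def __init__(self, name, coordinator):
        self.name = name
        coordinator.register(self)

    def on_selection(self, item_ids):
        print(f"{self.name}: highlighting {sorted(item_ids)}")

hub = Coordinator()
projection, histogram = View("projection", hub), View("histogram", hub)
hub.select(projection, {3, 7, 21})  # selecting in one view highlights in the other
```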

Relevance: 90.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 90.00%

Abstract:

The current model of the Brazilian electricity sector gives all agents equal conditions and reduces the role of the State in the sector. This model forces the sector's companies to continually improve the quality of their product and, as a prerequisite, to make more effective use of the enormous amount of operational data stored in databases, generated by the operation of their electrical systems, whose main source of power generation is hydroelectric plants (UHE). One of the main tools for managing these plants is Supervisory Control and Data Acquisition (SCADA) systems. The immense amount of data accumulated in databases by SCADA systems, very likely containing relevant information, should therefore be processed to discover relationships and patterns, helping to clarify many important operational aspects and to evaluate the performance of electric power systems. Knowledge Discovery in Databases (KDD) is the process of identifying valid, novel, useful and understandable patterns in large data sets in order to improve the understanding of a problem or a decision-making procedure. Data Mining is the step within KDD that extracts useful information from large databases. In this scenario, the present work carries out data mining experiments on data generated by SCADA systems at hydroelectric plants, in order to produce relevant information that supports the planning, operation, maintenance and safety of the plants and fosters a culture of data mining applied to them.
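One data-mining experiment of the kind proposed above can be sketched with scikit-learn: clustering SCADA operating points of a generating unit to expose typical operating regimes. The variable names, thresholds and synthetic readings are all assumptions for illustration.

```python
# SCADA mining sketch: cluster operating points into regimes with k-means.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
# Columns: active power (MW), bearing temperature (degC) -- two regimes.
readings = np.vstack([
    np.column_stack([rng.normal(50, 5, 200), rng.normal(45, 2, 200)]),
    np.column_stack([rng.normal(95, 5, 200), rng.normal(60, 2, 200)]),
])

X = StandardScaler().fit_transform(readings)
labels = KMeans(n_clusters=2, n_init=10, random_state=7).fit_predict(X)
for k in range(2):
    power, temp = readings[labels == k].mean(axis=0)
    print(f"regime {k}: ~{power:.0f} MW at ~{temp:.0f} degC")
```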

Relevance: 90.00%

Abstract:

The structural peculiarities of a protein are related to its biological function. In the fatty acid elongation cycle, one small carrier protein shuttles and delivers the acyl intermediates from one enzyme to the other. The carrier has to recognize several enzymatic counterparts, interact specifically with each of them, and finally deliver the carried substrate transiently to the active site. Carrying out such a complex game requires the players to be flexible and to adapt their structure efficiently to the interacting protein or substrate. In a drug discovery effort, the structure-function relationships of a target system should be taken into account in order to interfere effectively with its biological function. In this doctoral work, the essential role of structural plasticity in key steps of fatty acid biosynthesis in Plasmodium falciparum is investigated by means of molecular simulations. The key steps considered include the delivery of acyl substrates and the structural rearrangements of catalytic pockets upon ligand binding. The ground-level bases for carrier/enzyme recognition and interaction are also put forward. The structural features of the target have driven the selection of appropriate drug discovery tools, which captured the dynamics of the biological processes and could allow the rational design of novel inhibitors. The model may prospectively be used for the identification of novel pathway-based antimalarial compounds.

Relevance: 90.00%

Abstract:

In 2011, researchers at Bucknell University and Illinois Wesleyan University compared the search efficacy of Serials Solutions Summon, EBSCO Discovery Service, Google Scholar and conventional library databases. Using a mixed-methods approach, qualitative and quantitative data were gathered on students' usage of these tools. Regardless of the search system, students exhibited a marked inability to evaluate sources effectively and a heavy reliance on default search settings. On the quantitative benchmarks measured by this study, the EBSCO Discovery Service tool outperformed the other search systems in almost every category. This article describes these results and makes recommendations for libraries considering these tools.

Relevance: 90.00%

Abstract:

Web-scale knowledge retrieval can be enabled by distributed information retrieval, clustering Web clients into a large-scale computing infrastructure for knowledge discovery from Web documents. On top of this infrastructure, we propose to apply semiotic (i.e., sub-syntactical) and inductive (i.e., probabilistic) methods for inferring concept associations in human knowledge. These associations can be combined to form a fuzzy (i.e., gradual) semantic net representing a map of the knowledge in the Web. We thus propose to provide interactive visualizations of these cognitive concept maps to end users, who can browse and search the Web through a human-oriented, visual, and associative interface.
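A small sketch of the inductive step described above: inferring graded concept associations from co-occurrence counts in documents. The toy corpus and the association measure (a conditional probability) are assumptions; the paper's actual semiotic methods are richer.

```python
# Fuzzy concept-association sketch from document co-occurrence counts.
from collections import Counter
from itertools import combinations

docs = [  # each document reduced to its set of concepts (toy data)
    {"web", "search", "knowledge"},
    {"web", "visualization", "map"},
    {"knowledge", "map", "concept"},
    {"web", "knowledge", "concept"},
    {"knowledge", "discovery"},
]

term_count = Counter(t for d in docs for t in d)
pair_count = Counter(frozenset(p) for d in docs for p in combinations(sorted(d), 2))

def association(a, b):
    # Fuzzy edge weight P(b | a): a graded value in [0, 1], not a crisp link.
    return pair_count[frozenset((a, b))] / term_count[a]

print(association("web", "knowledge"))  # how strongly "web" evokes "knowledge"
print(association("knowledge", "web"))  # note the asymmetry of the measure
```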

Relevance: 90.00%

Abstract:

Knowledge modeling tools are software tools that follow a modeling approach to help developers build a knowledge-based system. The purpose of this article is to show the advantages of using this type of tool in the development of complex knowledge-based decision support systems. To do so, the article describes the development of a system called SAIDA in the domain of hydrology with the help of the KSM modeling tool. SAIDA operates in real time on data recorded by sensors (rainfall, water levels, flows, etc.). It follows a multi-agent architecture to interpret the data, predict future behavior and recommend control actions. The system includes an advanced knowledge-based architecture with multiple symbolic representations. KSM was especially useful for designing and implementing this complex knowledge-based architecture efficiently.
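As a toy sketch of the pattern the SAIDA description suggests, the snippet below has one agent per sensor stream interpreting its readings and a coordinating rule turning the findings into a recommendation. The thresholds, names and rules are illustrative assumptions, not the actual SAIDA knowledge base.

```python
# Multi-agent interpretation sketch: sensor agents plus a coordinating rule.
class LevelAgent:
    ALERT_LEVEL_M = 4.0  # assumed river-level alert threshold

    def interpret(self, level_m):
        return "rising-risk" if level_m >= self.ALERT_LEVEL_M else "normal"

class RainAgent:
    HEAVY_MM_PER_H = 20.0  # assumed heavy-rainfall threshold

    def interpret(self, mm_per_h):
        return "heavy-rain" if mm_per_h >= self.HEAVY_MM_PER_H else "normal"

def recommend(level_m, rain_mm_h):
    findings = {LevelAgent().interpret(level_m), RainAgent().interpret(rain_mm_h)}
    if findings == {"rising-risk", "heavy-rain"}:
        return "open spillway gates and notify civil defence"
    if "rising-risk" in findings:
        return "increase monitoring frequency"
    return "no action"

print(recommend(level_m=4.3, rain_mm_h=25.0))
```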