917 results for semiautomatic knowledge acquisition
Abstract:
Increased understanding of knowledge transfer (KT) from universities to the wider regional knowledge ecosystem offers opportunities for increased regional innovation and commercialisation. The aim of this article is to improve the understanding of the KT phenomenon in an open innovation context where multiple diverse quadruple helix stakeholders are interacting. An absorptive capacity-based conceptual framework, built from a priori constructs, is proposed that portrays the multidimensional process of KT between universities and their constituent stakeholders in pursuit of open innovation and commercialisation. Given the lack of overarching theory in the field, an exploratory, inductive theory-building methodology was adopted, using semi-structured interviews, document analysis and longitudinal observation data over a three-year period. The findings identify five factors, namely human-centric factors, organisational factors, knowledge characteristics, power relationships and network characteristics, which mediate both the ability of stakeholders to engage in KT and the effectiveness of knowledge acquisition, assimilation, transformation and exploitation. This research has implications for policy makers and practitioners, identifying the need to implement interventions to overcome the barriers to KT effectiveness between regional quadruple helix stakeholders within an open innovation ecosystem.
Abstract:
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal
The process of acquiring new knowledge in urban planning: the case of the urban heat island
Abstract:
In the context of climate change, heat has been a growing concern since the early 2000s, first as a public health issue and then as a problem affecting citizens' quality of life. In Quebec, the concept of the urban heat island (UHI), drawn from urban climatology, has gradually entered the discourse of public authorities and of certain planning actors. Yet there is a noticeable gap between the scientific knowledge and the interpretation urban planners make of it. This thesis attempts to identify the factors explaining that gap by examining the knowledge acquisition process of Quebec's urban planners. Through interviews with the main actors who contributed to the emergence of the UHI concept in Quebec, we were able to identify the elements that introduced certain distortions into the knowledge. The lack of interdisciplinarity between urban climatology and urban planning throughout the knowledge acquisition process, together with a truncated interpretation of the surface temperature map, largely explains the nature of the observed gap.
Abstract:
Conceptual Information Systems unfold the conceptual structure of data stored in relational databases. In the design phase of the system, conceptual hierarchies have to be created which describe different aspects of the data. In this paper, we describe two principal ways of designing such conceptual hierarchies, data-driven design and theory-driven design, and discuss their advantages and drawbacks. The central part of the paper shows how Attribute Exploration, a knowledge acquisition tool developed by B. Ganter, can be applied to narrow the gap between the two approaches.
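Attribute Exploration itself enumerates candidate implications in lectic order via Ganter's Next-Closure algorithm; the drastically simplified Python sketch below (toy data, hypothetical names, none of it from the paper) conveys only the expert-in-the-loop idea: implications suggested by the data are either confirmed by the expert or refuted with a counterexample object, which extends the context.

```python
def derive(context, premise, all_attrs):
    """Attributes common to every object that has all attributes in `premise`."""
    extents = [attrs for attrs in context.values() if premise <= attrs]
    return set.intersection(*extents) if extents else set(all_attrs)

# Hypothetical formal context: objects and the attributes they have.
context = {
    "duck":    {"aquatic", "can_fly", "lays_eggs", "has_wings"},
    "penguin": {"aquatic", "lays_eggs", "has_wings"},
}
all_attrs = set().union(*context.values())

def explore(context, ask_expert):
    """Propose each implication the data suggests; the expert confirms it
    or supplies a counterexample object, which refutes it and is added
    to the context (the real algorithm then re-derives from scratch)."""
    confirmed = []
    for a in sorted(all_attrs):
        conclusion = derive(context, {a}, all_attrs) - {a}
        if not conclusion:
            continue
        counterexample = ask_expert({a}, conclusion)  # None means "holds"
        if counterexample is None:
            confirmed.append(({a}, conclusion))
        else:
            context.update(counterexample)
    return confirmed

# An expert stub that refutes {lays_eggs} -> {aquatic, ...} with "ostrich".
def expert(premise, conclusion):
    if premise == {"lays_eggs"} and "aquatic" in conclusion:
        return {"ostrich": {"lays_eggs", "has_wings"}}
    return None

print(explore(context, expert))
```

The confirmed implications form the theory-driven part of the hierarchy, while unconfirmed ones are corrected by data, which is exactly how the technique narrows the gap between the two design approaches.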
Abstract:
The goal of the work reported here is to capture the commonsense knowledge of non-expert human contributors. Achieving this goal will enable more intelligent human-computer interfaces and pave the way for computers to reason about our world. In the domain of natural language processing, it will provide the world knowledge much needed for semantic processing of natural language. To acquire knowledge from contributors not trained in knowledge engineering, I take the following four steps: (i) develop a knowledge representation (KR) model for simple assertions in natural language, (ii) introduce cumulative analogy, a class of nearest-neighbor based analogical reasoning algorithms over this representation, (iii) argue that cumulative analogy is well suited for knowledge acquisition (KA), based on a theoretical analysis of the effectiveness of KA with this approach, and (iv) test the KR model and the effectiveness of the cumulative analogy algorithms empirically. To investigate the effectiveness of cumulative analogy for KA empirically, Learner, an open source system for KA by cumulative analogy, has been implemented, deployed, and evaluated. (The site "1001 Questions" is available at http://teach-computers.org/learner.html.) Learner acquires assertion-level knowledge by constructing shallow semantic analogies between a KA topic and its nearest neighbors and posing these analogies as natural language questions to human contributors. Suppose, for example, that based on the knowledge about "newspapers" already present in the knowledge base, Learner judges "newspaper" to be similar to "book" and "magazine." Further suppose that the assertions "books contain information" and "magazines contain information" are also already in the knowledge base. Then Learner will use cumulative analogy from the similar topics to ask humans whether "newspapers contain information." Because similarity between topics is computed based on what is already known about them, Learner exhibits bootstrapping behavior: the quality of its questions improves as it gathers more knowledge. By summing evidence for and against posing any given question, Learner also exhibits noise tolerance, limiting the effect of incorrect similarities. The KA power of shallow semantic analogy from nearest neighbors is one of the main findings of this thesis. I analyze commonsense knowledge collected by another research effort that did not rely on analogical reasoning and demonstrate that there is indeed a sufficient amount of correlation in the knowledge base to motivate using cumulative analogy from nearest neighbors as a KA method. Empirically, the percentages of questions answered affirmatively, answered negatively, and judged nonsensical in the cumulative analogy case compare favorably with a no-similarity baseline that relies on random objects rather than nearest neighbors. Of the questions generated by cumulative analogy, contributors answered 45% affirmatively, 28% negatively and marked 13% as nonsensical; in the control, no-similarity case, 8% of questions were answered affirmatively, 60% negatively, and 26% were marked as nonsensical.
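The bootstrapping loop described above is easy to sketch. In the toy below (hypothetical names and data, not Learner's actual code), similarity is simply the number of shared assertions, and candidate questions are ranked by summed votes from the nearest neighbors, reproducing the newspaper/book/magazine example from the abstract.

```python
from collections import defaultdict

def nearest_neighbors(kb, topic, k=3):
    """Rank topics by the number of assertions they share with `topic`."""
    props = kb.get(topic, set())
    scores = {t: len(props & p) for t, p in kb.items() if t != topic}
    return sorted(scores, key=scores.get, reverse=True)[:k]

def candidate_questions(kb, topic, k=3):
    """Project assertions from nearest neighbors onto `topic`.

    Each neighbor votes for the assertions it has and `topic` lacks;
    summing votes (cumulative analogy) limits the effect of any one
    incorrectly similar neighbor."""
    votes = defaultdict(int)
    for neighbor in nearest_neighbors(kb, topic, k):
        for prop in kb[neighbor] - kb.get(topic, set()):
            votes[prop] += 1
    return sorted(votes, key=votes.get, reverse=True)

# Toy knowledge base mirroring the worked example in the abstract.
kb = {
    "book":      {"contain information", "have pages", "can be read"},
    "magazine":  {"contain information", "have pages"},
    "newspaper": {"can be read"},
}
for prop in candidate_questions(kb, "newspaper", k=2):
    print(f"Do newspapers {prop}?")
```

Because the knowledge base is both the source of similarity judgments and the target of acquisition, every affirmative answer sharpens the next round of neighbor selection, which is the bootstrapping behavior the thesis reports.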
Abstract:
Successful knowledge transfer is an important process which requires continuous improvement in today's knowledge-intensive economy. However, improving knowledge transfer processes represents a challenge for construction practitioners due to the complexity of knowledge acquisition, codification and sharing. Although knowledge transfer is context based, understanding the critical success factors can lead to improvements in the transfer process. This paper seeks to identify and evaluate the most significant critical factors for improving knowledge transfer processes in Public Private Partnership/Private Finance Initiative (PPP/PFI) projects. Drawing upon a questionnaire survey of 52 construction firms located in the UK, the data are analysed using the Severity Index (SI) and the Coefficient of Variation (COV) to examine and identify these factors in PPP/PFI schemes. The findings suggest that supportive leadership, participation and commitment from the relevant parties, and good communication between those parties are crucial to improving knowledge transfer processes in PFI schemes. Practitioners, managers and researchers can use the findings to efficiently design performance measures for analysing and improving knowledge transfer processes.
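For readers unfamiliar with the two statistics, one common formulation (the paper's exact definitions may differ) computes, per factor, the Severity Index as the summed Likert ratings over the maximum possible sum, and the COV as the standard deviation relative to the mean; a low COV indicates respondent agreement on a factor's importance.

```python
import statistics

def severity_index(responses, scale_max=5):
    """Severity Index: sum of ratings over the maximum possible sum, as a %.

    `responses` are Likert ratings (1..scale_max), one per respondent."""
    return 100 * sum(responses) / (scale_max * len(responses))

def coefficient_of_variation(responses):
    """COV: sample standard deviation relative to the mean, as a %."""
    return 100 * statistics.stdev(responses) / statistics.mean(responses)

# Hypothetical ratings of "supportive leadership" by ten surveyed firms.
ratings = [5, 4, 5, 4, 5, 3, 4, 5, 4, 4]
print(f"SI  = {severity_index(ratings):.1f}%")
print(f"COV = {coefficient_of_variation(ratings):.1f}%")
```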
Abstract:
Based on numerous studies showing that testing studied material can improve long-term retention more than restudying the same material, it is often suggested that the number of tests in education should be increased to enhance knowledge acquisition. However, testing in real-life educational settings often entails a high degree of extrinsic motivation of learners due to the common practice of placing important consequences on the outcome of a test. Such an effect on the motivation of learners may undermine the beneficial effects of testing on long-term memory because it has been shown that extrinsic motivation can reduce the quality of learning. To examine this issue, participants learned foreign language vocabulary words, followed by an immediate test in which one-third of the words were tested and one-third restudied. To manipulate extrinsic motivation during immediate testing, participants received either monetary reward contingent on test performance or no reward. After 1 week, memory for all words was tested. In the immediate test, reward reduced correct recall and increased commission errors, indicating that reward reduced the number of items that can benefit from successful retrieval. The results in the delayed test revealed that reward additionally reduced the gain received from successful retrieval because memory for initially successfully retrieved words was lower in the reward condition. However, testing was still more effective than restudying under reward conditions because reward undermined long-term memory for concurrently restudied material as well. These findings indicate that providing performance-contingent reward in a test can undermine long-term knowledge acquisition.
Abstract:
This paper describes a data mining environment for knowledge discovery in bioinformatics applications. The system has a generic kernel that implements the mining functions to be applied to input primary databases of biomedical information, organized in a warehouse architecture. Both supervised and unsupervised classification can be implemented within the kernel and applied to data extracted from the primary database, with the results stored in a complex object database for knowledge discovery. The kernel also includes a specific high-performance library that allows the mining functions to be designed and applied on parallel machines. Experimental results obtained by applying the kernel functions are reported.
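As a rough illustration of the kernel idea (the same supervised and unsupervised mining functions applied to data extracted from a primary store), consider this scikit-learn sketch; the kernel's real API, warehouse layer, and parallel library are not shown, and the synthetic data merely stands in for biomedical records.

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier   # supervised mining function
from sklearn.cluster import KMeans                # unsupervised mining function

# Synthetic stand-in for data extracted from the primary database.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

supervised = DecisionTreeClassifier().fit(X, y)
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)

# Stand-in for the complex object database that stores mining results.
results = {
    "classifier_accuracy": supervised.score(X, y),
    "cluster_assignments": clusters.tolist(),
}
print(results["classifier_accuracy"])
```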
Abstract:
In recent years, Intelligent Tutoring Systems have been a very successful way of improving the learning experience. Many issues must still be addressed before this technology can be considered mature. One of the main problems within Intelligent Tutoring Systems is the process of content authoring: knowledge acquisition and manipulation are difficult tasks because they require specialised skills in computer programming and knowledge engineering. In this thesis we discuss a general framework for knowledge management in an Intelligent Tutoring System and propose a mechanism based on first-order data mining to partially automate the acquisition of the knowledge that the ITS uses during the tutoring process. Such a mechanism can be applied in Constraint-Based Tutors and in Pseudo-Cognitive Tutors. We design and implement part of the proposed architecture, mainly the module for knowledge acquisition from examples based on first-order data mining. We then show that the algorithm can be applied in at least two different domains: first-order algebra equations and some topics of the C programming language. Finally, we discuss the limitations of the current approach and possible improvements to the whole framework.
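In a constraint-based tutor, the knowledge being acquired takes the form of (relevance, satisfaction) condition pairs; the thesis proposes inducing such pairs from examples via first-order data mining. The hand-written sketch below shows only the target representation, with hypothetical, illustrative conditions for the algebra domain.

```python
from collections import namedtuple

# A constraint pairs a relevance condition with a satisfaction condition:
# whenever the former holds for a student's step, the latter must hold too,
# otherwise the tutor intervenes with the associated feedback.
Constraint = namedtuple("Constraint", "relevance satisfaction feedback")

constraints = [
    Constraint(
        relevance=lambda step: step["moved_term_across_equals"],
        satisfaction=lambda step: step["sign_flipped"],
        feedback="When you move a term across '=', change its sign.",
    ),
]

def check(step):
    """Return feedback for every constraint the student's step violates."""
    return [c.feedback for c in constraints
            if c.relevance(step) and not c.satisfaction(step)]

print(check({"moved_term_across_equals": True, "sign_flipped": False}))
```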
Abstract:
The welfare sector has seen considerable changes in its operational context. Welfare services respond to an increasing number of challenges as citizens are confronted with life's uncertainties and a variety of complex situations. At the same time the service-delivery system is facing problems of co-operation and of developing staff competence, as well as demands to improve service effectiveness and outcomes. In order to ensure optimal user outcomes in this complex, evolving environment it is necessary to enhance professional knowledge and skills, and to increase efforts to develop the services. Changes are also evident in the newly emerging knowledge-production models. There has been a shift from knowledge acquisition and transmission to its construction and production. New actors have stepped in and the roles of researchers are subject to critical discussion. Research outcomes, in other words the usefulness of research for practice development, are a topical agenda item. Research is needed, but if it is to be useful it needs to be not only credible but also useful in action. What do we know about different research processes in practice? What conceptions, approaches, methods and actor roles are embedded in them? What is their effect on practice? How does 'here and now' practice challenge research methods? This article is based on the research processes conducted in the institutes of practice research in social work in Finland. It analyses the different approaches applied, elucidating their theoretical standpoints and the critical elements embedded in them, and reflects on the outcomes in and for practice. It highlights the level of change and progression in practice research, arguing for diverse practice research models with a solid theoretical grounding, rigorous research processes, and a supportive infrastructure.
Abstract:
This chapter introduces a conceptual model that combines creativity techniques with fuzzy cognitive maps (FCMs) and aims to support knowledge management methods by improving expert knowledge acquisition and aggregation. The aim of the conceptual model is to represent acquired knowledge in a manner that is as computer-understandable as possible, with the intention of developing automated reasoning in the future as part of intelligent information systems. The formally represented knowledge may thus provide businesses with intelligent information integration. To this end, we introduce and evaluate various creativity techniques against a list of attributes to determine the most suitable one to combine with FCMs. The proposed combination enables enhanced knowledge management through the acquisition and representation of expert knowledge with FCMs. Our evaluation indicates that the creativity technique known as mind mapping is the most suitable in our set. Finally, a scenario from stakeholder management demonstrates the combination of mind mapping with FCMs as an integrated system.
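A fuzzy cognitive map distilled from a mind map is essentially a signed, weighted digraph over concepts whose activations are updated iteratively. The minimal sketch below uses hypothetical stakeholder-management concepts and weights, not the chapter's actual scenario, with the standard sigmoid-squashed update rule.

```python
import math

def fcm_step(state, weights, squash=lambda x: 1 / (1 + math.exp(-x))):
    """One synchronous update of a fuzzy cognitive map.

    `weights[i][j]` is the causal influence of concept i on concept j,
    typically in [-1, 1]; activations are squashed into (0, 1)."""
    n = len(state)
    return [squash(state[j] + sum(state[i] * weights[i][j] for i in range(n)))
            for j in range(n)]

# Hypothetical map distilled from a mind map:
# 0: stakeholder engagement, 1: project support, 2: schedule risk
weights = [
    [0.0,  0.8, -0.4],   # engagement raises support, lowers risk
    [0.0,  0.0, -0.5],   # support lowers schedule risk
    [0.0, -0.3,  0.0],   # risk erodes support
]
state = [0.9, 0.5, 0.5]
for _ in range(10):       # iterate until the map settles
    state = fcm_step(state, weights)
print([round(s, 2) for s in state])
```

Deriving the weight matrix from mind-map branches is exactly the acquisition step the chapter's combined technique is meant to support.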
Abstract:
Traditionally, the use of data analysis techniques has been one of the main ways of discovering knowledge hidden in large amounts of data collected by experts in different domains, and visualization techniques have been used to enhance and facilitate this process. However, there are serious limitations in the process of knowledge acquisition: it is often slow, tedious and frequently fruitless, owing to the difficulty human beings have in understanding large datasets. Another major drawback, rarely considered by the experts who analyze large datasets, is the involuntary degradation to which they subject the data during analysis tasks, prior to drawing final conclusions. Degradation means that data can lose part of their original properties; it is usually caused by improper data reduction, which alters the data's original nature and often leads to erroneous interpretations and conclusions that can have serious implications. This fact becomes critically important when the data belong to the medical or biological domain and people's lives depend on the final decision-making, which is sometimes conducted improperly. This is the motivation of this thesis, which proposes a new visual framework, called MedVir, that combines the power of advanced visualization techniques and data mining to address these major problems in the process of discovering valid information. The main objective is to make the knowledge acquisition process that experts face when working with large datasets in different domains easier, more understandable, more intuitive and faster. To achieve this, first, a strong reduction in the size of the data is carried out in order to make the data easier for the expert to manage, while preserving their original properties intact as far as possible. Then, effective visualization techniques are used to represent the resulting data, allowing the expert to interact easily and intuitively with the data and to carry out different analysis tasks, thus visually stimulating their capacity for comprehension. The underlying objective is to abstract the expert, as far as possible, from the complexity of the original data and to present a more understandable version, thereby facilitating and accelerating the task of knowledge discovery. MedVir has been successfully applied to, among others, the field of magnetoencephalography (MEG), in this case predicting rehabilitation outcomes after traumatic brain injury (TBI). The results demonstrate the effectiveness of the framework in accelerating and facilitating the process of knowledge discovery on real-world datasets.
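The two-stage pipeline the abstract describes, strong size reduction followed by interactive visualization, can be approximated in a few lines. PCA and a static scatter plot stand in here for MedVir's actual reduction method and visual encodings; the random data is a placeholder for a high-dimensional MEG dataset.

```python
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))        # stand-in for a high-dimensional dataset
labels = rng.integers(0, 2, size=200)  # e.g. rehabilitation outcome classes

# Stage 1: strong size reduction (500 dimensions -> 2) while trying to
# preserve the data's dominant structure.
X2 = PCA(n_components=2).fit_transform(X)

# Stage 2: visual representation the expert can inspect and reason about.
plt.scatter(X2[:, 0], X2[:, 1], c=labels)
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.show()
```

The thesis's caution about degradation applies precisely at stage 1: an ill-chosen reduction can discard the properties the expert's conclusions depend on.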
Abstract:
The aim of the paper is to discuss the use of knowledge models to formulate general applications. First, the paper presents the recent evolution of the software field, in which increasing attention is paid to conceptual modeling. Then, the current state of knowledge modeling techniques is described, where increased reliability is achieved through modern knowledge acquisition techniques and supporting tools. The KSM (Knowledge Structure Manager) tool is described next. First, the concept of a knowledge area is introduced as a building block that groups methods for performing a collection of tasks together with the bodies of knowledge providing the basic methods for the basic tasks. Then, the CONCEL language for defining domain vocabularies and the LINK language for formulating methods are introduced. Finally, the object-oriented implementation of a knowledge area is described, and a general methodology for application design and maintenance supported by KSM is proposed. To illustrate the concepts and methods, an example system for intelligent traffic management in a road network is described, followed by a proposal for generalising the resulting architecture for reuse. Finally, some concluding remarks are offered on the feasibility of using the knowledge modeling tools and methods for general application design.
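Read as an object-oriented design, a knowledge area bundles bodies of knowledge with the methods that use them to perform tasks. The class below is an illustrative reading of that idea, not KSM's actual implementation or API; the traffic example merely echoes the paper's road-network scenario with invented numbers.

```python
class KnowledgeArea:
    """A building block pairing bodies of knowledge with task methods."""

    def __init__(self, name, knowledge_bases):
        self.name = name
        self.knowledge_bases = knowledge_bases  # bodies of knowledge
        self.methods = {}                       # task name -> method

    def add_method(self, task, method):
        self.methods[task] = method

    def perform(self, task, *args):
        # Each method runs against the area's own knowledge bases.
        return self.methods[task](self.knowledge_bases, *args)

# Hypothetical traffic-management area in the spirit of the paper's example.
traffic = KnowledgeArea("traffic_state", {"capacity": {"ring_road": 4000}})
traffic.add_method(
    "diagnose",
    lambda kb, flow: "congested" if flow > kb["capacity"]["ring_road"]
    else "free flowing",
)
print(traffic.perform("diagnose", 4500))
```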
Abstract:
This paper describes the approach used to adapt reusable knowledge representation components in the KSM environment for the formulation and operationalisation of structured knowledge models. Reusable knowledge representation components in KSM are called primitives of representation. A primitive of representation provides: (1) a knowledge representation formalism; (2) a set of tasks that use this knowledge, together with several problem-solving methods to carry out these tasks; (3) a knowledge acquisition module that provides different services for acquiring and validating this knowledge; and (4) an abstract terminology covering the linguistic categories included in the representation language associated with the primitive. Primitives of representation are usually domain-independent. A primitive of representation can be adapted to support knowledge in a given domain by importing concepts from that domain. The paper describes how this can be carried out by means of a terminological importation. Informally, a terminological importation partially populates an abstract terminology with concepts taken from a given domain. The information provided by the importation can be used by the acquisition and validation facilities to constrain the classes of knowledge that can be described using the representation formalism according to the domain knowledge. KSM provides the LINK-S language to specify a terminological importation from a domain terminology to an abstract one. These terminologies are described in KSM by means of the CONCEL language. Terminological importation is used to adapt reusable primitives of representation in order to increase the usability of such components in the target domains. In addition, two primitives of representation can share a common vocabulary by importing common domain CONCEL terminologies (conceptual vocabularies). This is a necessary condition for interoperability between different, heterogeneous knowledge representation components within complex knowledge-based architectures.
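Informally, then, a terminological importation is a partial mapping from domain concepts onto the abstract linguistic categories of a primitive, which acquisition-time validation can exploit. The schematic sketch below uses illustrative names only, not LINK-S or CONCEL syntax.

```python
# Abstract terminology of a hypothetical primitive of representation:
# the linguistic categories its representation language provides.
abstract_terminology = {"node", "link", "state"}

# Terminological importation: domain concepts partially populate the
# abstract categories (illustrative road-network domain).
importation = {
    "road_section": "node",
    "connects_to":  "link",
    "congested":    "state",
}

def validate(assertion):
    """Acquisition-time check: every term used must be imported, and must
    populate a category the representation formalism actually provides."""
    return all(importation.get(term) in abstract_terminology
               for term in assertion)

print(validate(["road_section", "connects_to", "road_section"]))  # True
print(validate(["bridge", "connects_to", "road_section"]))        # False
```

Two primitives importing the same domain vocabulary would agree on these term-to-category bindings, which is the shared-vocabulary condition for interoperability the paper names.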
Abstract:
AKT is a major research project applying a variety of technologies to knowledge management. Knowledge is a dynamic, ubiquitous resource, which is to be found equally in an expert's head, under terabytes of data, or explicitly stated in manuals. AKT will extend knowledge management technologies to exploit the potential of the semantic web, covering the use of knowledge over its entire lifecycle, from acquisition to maintenance and deletion. In this paper we discuss how human language technology (HLT) will be used in AKT and how its use will affect different areas of KM, such as knowledge acquisition, retrieval and publishing.