948 results for Knowledge representation


Relevance:

100.00%

Publisher:

Abstract:

Identity management is an important research field from both theoretical and practical perspectives. The task itself is not new: identification and authentication have always been necessary in public administration and business life. The Information Society offers new services for citizens, which dramatically change the way administration is carried out and bring additional risks as well as opportunities. The goal of the research presented here was to formulate a common conceptual basis for the identity management domain in order to support the management of the surrounding multidimensional environment. There is a need for capturing, mapping, and processing knowledge concerning identity management in order to support reusability and interoperability, to help common sharing and understanding of the domain, and to avoid inconsistency. The paper summarizes research activities for the identification, conceptualisation, and representation of domain knowledge related to identity management, using the results of knowledge management, artificial intelligence, and information technology. I utilized the experiences of the GUIDE project, in which I participate. The paper demonstrates that domain ontologies can offer a proper solution for conceptualisation of the identity management domain.

Relevance:

100.00%

Publisher:

Abstract:

Competition between Higher Education Institutions is increasing at an alarming rate, while changes in the surrounding environment and the demands of the labour market are frequent and substantial. Universities must meet the requirements of both the national and the European legislative environment. The Bologna Declaration aims at providing guidelines and solutions for these problems and challenges of European Higher Education. One of its main goals is the introduction of a common framework of transparent and comparable degrees that ensures the recognition of the knowledge and qualifications of citizens all across the European Union. This paper discusses a knowledge management approach that highlights the importance of knowledge representation tools such as ontologies. The discussed ontology-based model supports the creation of transparent curriculum content (Educational Ontology) and the promotion of reliable knowledge testing (Adaptive Knowledge Testing System).

Relevance:

100.00%

Publisher:

Abstract:

Knowledge is the key to success. The adequate treatment of data for generating knowledge can make a difference in projects, processes, and networks. Such treatment is the main goal of two important areas: knowledge representation and knowledge management. Our aim in this book is to collect some innovative ways of representing and managing knowledge, proposed by several Latin American researchers, under the premise of improving knowledge.

Relevance:

80.00%

Publisher:

Abstract:

One of the fundamental issues in building autonomous agents is the ability to sense, represent, and react to the world. Some of the earlier work [Mor83, Elf90, AyF89] aimed at a reconstructionist approach, in which a number of sensors are used to obtain input from which a model of the world is constructed that mirrors the real world. Sensing and sensor fusion were thus important aspects of such work. These approaches have had limited success, and some of the main problems were the uncertainty arising from sensor error and the errors that accumulated in metric, quantitative models. Recent research has therefore looked at different ways of examining the problem. Instead of attempting to obtain the most accurate and correct model of the world, these approaches use qualitative models to represent the world, which maintain the relative and significant aspects of the environment rather than all aspects of the world. The relevant aspects of the world that are retained are determined by the task at hand, which in turn determines how to sense. That is, task-directed or purposive sensing is used to build a qualitative model of the world which, though inaccurate and incomplete, is sufficient to solve the problem at hand. This paper examines the issues of building a hierarchical knowledge representation of the environment from limited sensor input that can be actively acquired by an agent capable of interacting with the environment. Different tasks require different aspects of the environment to be abstracted out. For example, low-level tasks such as navigation require aspects of the environment related to layout and obstacle placement. For the agent to be able to reposition itself in an environment, significant features of spatial situations and their relative placement need to be kept.
For the agent to reason about objects in space, for example to determine the position of one object relative to another, the representation needs to retain information on the relative locations of the start and finish of the objects, that is, the endpoints of objects on a grid. For the agent to be able to do high-level planning, it may need only the relative position of the starting point and destination, and not the low-level details of endpoints, visual clues, and so on. This indicates that a hierarchical approach would be suitable, such that each level in the hierarchy is at a different level of abstraction and thus suited to a different task. At the lowest level, the representation contains low-level details of the agent's motion and visual clues to allow the agent to navigate and reposition itself. At the next level of abstraction, the representation allows the agent to perform spatial reasoning, and finally the highest level of abstraction can be used by the agent for high-level planning.
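A minimal Python sketch of such a three-level hierarchy; all class names, fields, and the grid data are illustrative assumptions, not the paper's implementation:

```python
# Hypothetical sketch: each level abstracts the one below it and serves a
# different task (navigation, spatial reasoning, high-level planning).

class NavigationLevel:
    """Lowest level: motion details and visual clues for repositioning."""
    def __init__(self):
        self.landmarks = {}          # landmark name -> (x, y) grid position

    def add_landmark(self, name, pos):
        self.landmarks[name] = pos

class SpatialLevel:
    """Middle level: qualitative relations over endpoints on the grid."""
    def __init__(self, nav):
        self.nav = nav

    def left_of(self, a, b):
        # Qualitative relation derived from the quantitative level below.
        return self.nav.landmarks[a][0] < self.nav.landmarks[b][0]

class PlanningLevel:
    """Highest level: only start/goal relations, no endpoint detail."""
    def __init__(self, spatial):
        self.spatial = spatial

    def route_direction(self, start, goal):
        return "east" if self.spatial.left_of(start, goal) else "west"

nav = NavigationLevel()
nav.add_landmark("door", (0, 0))
nav.add_landmark("desk", (5, 2))
spatial = SpatialLevel(nav)
planner = PlanningLevel(spatial)
print(planner.route_direction("door", "desk"))  # -> east
```

The point of the layering is that the planner never touches raw coordinates; it only consumes the qualitative relations the middle level exposes.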

Relevance:

80.00%

Publisher:

Abstract:

This paper proposes an ontology-based approach to the representation of courseware knowledge in different domains. The focus is on a three-level semantic graph, modeling respectively the course as a whole, its structure, and the domain contents themselves. The authors plan to use this representation for flexible e-learning and the generation of different study plans for learners.
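As a rough illustration (the dictionary layout, function names, and course data are assumptions, not the authors' model), the three levels and study-plan generation might look like:

```python
# Level 1: the course as a whole; level 2: its structure (modules);
# level 3: the domain concepts inside each module.
course = {
    "title": "Knowledge Representation",
    "modules": [
        {"name": "Logic", "concepts": ["predicate calculus", "inference"]},
        {"name": "Ontologies", "concepts": ["classes", "relations"]},
    ],
}

def study_plan(course, known):
    """Generate a plan that skips concepts the learner already knows."""
    plan = []
    for module in course["modules"]:
        todo = [c for c in module["concepts"] if c not in known]
        if todo:
            plan.append((module["name"], todo))
    return plan

print(study_plan(course, known={"classes"}))
```

Because the plan is computed from the graph rather than hard-coded, different learners (different `known` sets) get different plans from the same representation.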

Relevance:

70.00%

Publisher:

Abstract:

Modern enterprise knowledge management systems typically require distributed approaches and the integration of numerous heterogeneous sources of information. A powerful foundation for these tasks can be Topic Maps, which not only provide a semantic-net-like means of knowledge representation and the possibility of using ontologies for modelling knowledge structures, but also offer concepts for linking these knowledge structures with unstructured data stored in files, external documents, etc. In this paper, we present the architecture and prototypical implementation of a Topic Map application infrastructure, the ‘Topic Grid’, which enables transparent, node-spanning access to different Topic Maps distributed in a network.
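A toy sketch of node-spanning access in the spirit of the Topic Grid; the class names, data, and merge rule are assumptions, not the actual infrastructure:

```python
# Each node hosts its own topic map; the grid queries all nodes so the
# caller need not know where a topic's occurrences actually live.

class TopicMap:
    def __init__(self, topics):
        self.topics = topics   # topic name -> list of occurrence links

class TopicGrid:
    """Transparent access across topic maps hosted on different nodes."""
    def __init__(self, nodes):
        self.nodes = nodes     # node name -> TopicMap

    def occurrences(self, topic):
        found = []
        for node, tmap in self.nodes.items():
            found.extend(tmap.topics.get(topic, []))
        return found

grid = TopicGrid({
    "node-a": TopicMap({"ontology": ["file://a/intro.pdf"]}),
    "node-b": TopicMap({"ontology": ["http://b/spec.html"]}),
})
print(grid.occurrences("ontology"))
```

The key property illustrated is transparency: the same query returns occurrences from every node, merged, regardless of distribution.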

Relevance:

70.00%

Publisher:

Abstract:

Nowadays, Workflow Management Systems (WfMSs) and, more generally, Process Management Systems (PMSs), both of which are process-aware Information Systems (PAISs), are widely used to support many human organizational activities, ranging from well-understood, relatively stable and structured processes (supply chain management, postal delivery tracking, etc.) to processes that are more complicated, less structured, and may exhibit a high degree of variation (health care, emergency management, etc.). Every aspect of a business process involves a certain amount of knowledge, which may be complex depending on the domain of interest. The adequate representation of this knowledge is determined by the modeling language used. Some processes behave in a way that is well understood, predictable, and repeatable: the tasks are clearly delineated and the control flow is straightforward. Recent discussions, however, illustrate the increasing demand for solutions for knowledge-intensive processes, where these characteristics are less applicable. The actors involved in the conduct of a knowledge-intensive process have to deal with a high degree of uncertainty. Tasks may be hard to perform and the order in which they need to be performed may be highly variable. Modeling knowledge-intensive processes can be complex, as it may be hard to capture at design time what knowledge will be available at run time. In realistic environments, for example, actors lack important knowledge at execution time, or this knowledge can become obsolete as the process progresses. Even if each actor (at some point) has perfect knowledge of the world, it may not be certain of its beliefs at later points in time, since tasks by other actors may change the world without those changes being perceived. Typically, a knowledge-intensive process cannot be adequately modeled by classical, state-of-the-art process/workflow modeling approaches.
In some respects there is a lack of maturity when it comes to capturing the semantic aspects involved and reasoning about them. The main focus of the 1st International Workshop on Knowledge-intensive Business Processes (KiBP 2012) was investigating how techniques from different fields, such as Artificial Intelligence (AI), Knowledge Representation (KR), Business Process Management (BPM), Service Oriented Computing (SOC), etc., can be combined with the aim of improving the modeling and enactment phases of a knowledge-intensive process. The workshop was held as part of the program of the 2012 Knowledge Representation & Reasoning International Conference (KR 2012) in Rome, Italy, in June 2012, and was hosted by the Dipartimento di Ingegneria Informatica, Automatica e Gestionale Antonio Ruberti of Sapienza Università di Roma, with financial support of the University, through grant 2010-C26A107CN9 TESTMED, and of the EU Commission, through the projects FP7-25888 Greener Buildings and FP7-257899 Smart Vortex. This volume contains the 5 papers accepted and presented at the workshop. Each paper was reviewed by three members of the internationally renowned Program Committee. In addition, a further paper was invited for inclusion in the workshop proceedings and for presentation at the workshop. Two keynote talks completed the scientific program: one by Marlon Dumas (Institute of Computer Science, University of Tartu, Estonia) on "Integrated Data and Process Management: Finally?" and the other by Yves Lesperance (Department of Computer Science and Engineering, York University, Canada) on "A Logic-Based Approach to Business Processes Customization".
We would like to thank all the Program Committee members for their valuable work in selecting the papers, Andrea Marrella for his valuable work as publication and publicity chair of the workshop, and Carola Aiello and the consulting agency Consulta Umbria for the organization of this successful event.

Relevance:

70.00%

Publisher:

Abstract:

This research is concerned with designing representations for analytical reasoning problems (of the sort found on the GRE and LSAT). These problems test the ability to draw logical conclusions. A computer program was developed that takes as input a straightforward predicate calculus translation of a problem, requests additional information if necessary, decides what to represent and how, designs representations capturing the constraints of the problem, and creates and executes a LISP program that uses those representations to produce a solution. Even though these problems are typically difficult for theorem provers to solve, the LISP program that uses the designed representations is very efficient.
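A hedged illustration of the general idea, not the program described: representing a toy puzzle's constraints directly as a purpose-built filter over candidate orderings, rather than handing the predicate calculus form to a general theorem prover. The puzzle and names are invented:

```python
# Analytical-reasoning puzzles (GRE/LSAT style) often reduce to finding
# orderings that satisfy a handful of constraints. A representation
# specialized to orderings makes the search trivial.
from itertools import permutations

people = ["A", "B", "C"]

def satisfies(order):
    # Toy constraints: A sits to the left of B; C is not first.
    return order.index("A") < order.index("B") and order[0] != "C"

solutions = [order for order in permutations(people) if satisfies(order)]
print(solutions)
```

The efficiency claim in the abstract rests on exactly this kind of move: once constraints are compiled into a representation that matches the problem's structure, checking them is a few comparisons per candidate instead of general logical inference.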

Relevance:

70.00%

Publisher:

Abstract:

This paper describes ARLO, a representation language loosely modelled after Greiner and Lenat's RLL-1. ARLO is a structure-based representation language for describing structure-based representation languages, including itself. A given representation language is specified in ARLO by a collection of structures describing how its descriptions are interpreted, defaulted, and verified. This high-level description is compiled into Lisp code and ARLO structures whose interpretation fulfills the specified semantics of the representation. In addition, ARLO itself, as a representation language for expressing and compiling partial and complete language specifications, is described and interpreted in the same manner as the languages it describes and implements. This self-description can be extended or modified to expand or alter the expressive power of ARLO's initial configuration. Languages which describe themselves, like ARLO, provide powerful media for systems which perform automatic self-modification, optimization, debugging, or documentation. AI systems implemented in such a self-descriptive language can reflect on their own capabilities and limitations, applying general learning and problem-solving strategies to enlarge or alleviate them.

Relevance:

70.00%

Publisher:

Abstract:

With the rapid growth in the quantity and complexity of scientific knowledge available to scientists and allied professionals, the problems associated with harnessing this knowledge are well recognized. Some of these problems result from the uncertainties and inconsistencies that arise in this knowledge. Other problems arise from heterogeneous and informal formats for this knowledge. To address these problems, developments in the application of knowledge representation and reasoning technologies allow scientific knowledge to be captured in logic-based formalisms. Using such formalisms, we can reason with the uncertainty and inconsistency, allowing automated techniques to be used for querying and combining scientific knowledge. Furthermore, by harnessing background knowledge, the querying and combining tasks can be carried out more intelligently. In this paper, we review some of the significant proposals for formalisms for representing and reasoning with scientific knowledge.

Relevance:

70.00%

Publisher:

Abstract:

Knowledge discovery support environments include, besides classical data analysis tools, also data mining tools. For supporting both kinds of tools, a unified knowledge representation is needed. We show that concept lattices, which are used as the knowledge representation in Conceptual Information Systems, can also be used for structuring the results of mining association rules. Vice versa, we use ideas from association rules for reducing the complexity of the visualization of Conceptual Information Systems.
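For readers unfamiliar with concept lattices, here is a brute-force Python sketch that enumerates the formal concepts of a tiny object/attribute context; the context data is invented for illustration, and real systems use dedicated algorithms such as NextClosure rather than this exhaustive enumeration:

```python
# A formal concept is a pair (extent, intent): a maximal set of objects
# and the maximal set of attributes they all share.
from itertools import combinations

objects = {
    "sparrow": {"flies", "has_wings"},
    "penguin": {"has_wings", "swims"},
    "trout":   {"swims"},
}

def concepts(ctx):
    objs = list(ctx)
    result = set()
    for r in range(len(objs) + 1):
        for subset in combinations(objs, r):
            # intent: attributes shared by all objects in the subset
            if subset:
                intent = set.intersection(*(ctx[o] for o in subset))
            else:
                intent = set.union(*ctx.values())
            # extent: all objects having every attribute of the intent
            extent = frozenset(o for o in ctx if intent <= ctx[o])
            result.add((extent, frozenset(intent)))
    return result

lattice = concepts(objects)
print(len(lattice))
```

Ordered by extent inclusion, these concepts form the lattice that the paper uses as the unified representation shared by the data analysis and data mining sides.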

Relevance:

70.00%

Publisher:

Abstract:

In this paper, we discuss Conceptual Knowledge Discovery in Databases (CKDD) in its connection with Data Analysis. Our approach is based on Formal Concept Analysis, a mathematical theory which has been developed and proven useful during the last 20 years. Formal Concept Analysis has led to a theory of conceptual information systems which has been applied by using the management system TOSCANA in a wide range of domains. In this paper, we use such an application in database marketing to demonstrate how methods and procedures of CKDD can be applied in Data Analysis. In particular, we show the interplay and integration of data mining and data analysis techniques based on Formal Concept Analysis. The main concern of this paper is to explain how the transition from data to knowledge can be supported by a TOSCANA system. To clarify the transition steps we discuss their correspondence to the five levels of knowledge representation established by R. Brachman and to the steps of empirically grounded theory building proposed by A. Strauss and J. Corbin.

Relevance:

70.00%

Publisher:

Abstract:

The goal of the work reported here is to capture the commonsense knowledge of non-expert human contributors. Achieving this goal will enable more intelligent human-computer interfaces and pave the way for computers to reason about our world. In the domain of natural language processing, it will provide the world knowledge much needed for semantic processing of natural language. To acquire knowledge from contributors not trained in knowledge engineering, I take the following four steps: (i) develop a knowledge representation (KR) model for simple assertions in natural language, (ii) introduce cumulative analogy, a class of nearest-neighbor based analogical reasoning algorithms over this representation, (iii) argue that cumulative analogy is well suited for knowledge acquisition (KA), based on a theoretical analysis of the effectiveness of KA with this approach, and (iv) test the KR model and the effectiveness of the cumulative analogy algorithms empirically. To investigate the effectiveness of cumulative analogy for KA empirically, Learner, an open-source system for KA by cumulative analogy, has been implemented, deployed, and evaluated. (The site, "1001 Questions," is available at http://teach-computers.org/learner.html.) Learner acquires assertion-level knowledge by constructing shallow semantic analogies between a KA topic and its nearest neighbors and posing these analogies as natural language questions to human contributors. Suppose, for example, that based on the knowledge about "newspapers" already present in the knowledge base, Learner judges "newspaper" to be similar to "book" and "magazine." Further suppose that the assertions "books contain information" and "magazines contain information" are also already in the knowledge base. Then Learner will use cumulative analogy from the similar topics to ask humans whether "newspapers contain information."
Because similarity between topics is computed based on what is already known about them, Learner exhibits bootstrapping behavior: the quality of its questions improves as it gathers more knowledge. By summing evidence for and against posing any given question, Learner also exhibits noise tolerance, limiting the effect of incorrect similarities. The KA power of shallow semantic analogy from nearest neighbors is one of the main findings of this thesis. I perform an analysis of commonsense knowledge collected by another research effort that did not rely on analogical reasoning and demonstrate that there is indeed a sufficient amount of correlation in the knowledge base to motivate using cumulative analogy from nearest neighbors as a KA method. Empirically, the percentages of questions answered affirmatively, answered negatively, and judged to be nonsensical in the cumulative analogy case compare favorably with the baseline, no-similarity case that relies on random objects rather than nearest neighbors. Of the questions generated by cumulative analogy, contributors answered 45% affirmatively, 28% negatively, and marked 13% as nonsensical; in the control, no-similarity case, 8% of questions were answered affirmatively, 60% negatively, and 26% were marked as nonsensical.
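A toy reconstruction of the cumulative analogy loop described above; the knowledge base contents, the overlap-based similarity measure, and the function names are illustrative assumptions, not Learner's actual implementation:

```python
# Similarity = number of shared assertions; candidate questions are the
# neighbors' properties not yet known for the target topic, with evidence
# summed across neighbors (the "cumulative" part).
kb = {
    "book":      {"contains information", "is made of paper"},
    "magazine":  {"contains information", "has pictures"},
    "newspaper": {"is made of paper"},
}

def neighbors(kb, topic, k=2):
    """Rank other topics by the number of assertions shared with `topic`."""
    others = [(len(kb[topic] & kb[t]), t) for t in kb if t != topic]
    return [t for _, t in sorted(others, reverse=True)[:k]]

def questions(kb, topic):
    """Sum evidence across neighbors; ask about their unshared properties."""
    votes = {}
    for n in neighbors(kb, topic):
        for prop in kb[n] - kb[topic]:
            votes[prop] = votes.get(prop, 0) + 1
    return sorted(votes, key=votes.get, reverse=True)

print(questions(kb, "newspaper"))
```

In this toy run, "contains information" is proposed first because both neighbors vote for it, mirroring the newspaper/book/magazine example in the abstract; answers fed back into `kb` would then sharpen the similarity computation, which is the bootstrapping behavior described above.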