67 results for KNOWLEDGE REPRESENTATION AND REASONING

in Aston University Research Archive


Abstract:

Recently, we have seen an explosion of interest in ontologies as artifacts to represent human knowledge and as critical components in knowledge management, the semantic Web, business-to-business applications, and several other application areas. Various research communities commonly assume that ontologies are the appropriate modeling structure for representing knowledge. However, little discussion has occurred regarding the actual range of knowledge an ontology can successfully represent.
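
As a purely illustrative aside (not taken from the article), the short Python sketch below shows knowledge held as ontology-style subject-predicate-object triples with simple subclass reasoning; it makes concrete the kind of taxonomic knowledge an ontology handles easily, while tacit or procedural knowledge has no obvious encoding of this kind. The class and property names are invented.

# Minimal, hypothetical sketch: an ontology as a set of triples with
# transitive subclass reasoning. All names below are invented examples.

TRIPLES = {
    ("Protein", "subClassOf", "Molecule"),
    ("Enzyme", "subClassOf", "Protein"),
    ("catalyses", "domain", "Enzyme"),
    ("catalyses", "range", "Reaction"),
}

def superclasses(cls, triples):
    """Transitively collect every superclass of `cls`."""
    parents = {o for (s, p, o) in triples if s == cls and p == "subClassOf"}
    for parent in set(parents):
        parents |= superclasses(parent, triples)
    return parents

def is_a(sub, cls, triples):
    """True if `sub` is `cls` or one of its transitive subclasses."""
    return sub == cls or cls in superclasses(sub, triples)

if __name__ == "__main__":
    print(is_a("Enzyme", "Molecule", TRIPLES))  # True: class hierarchies are easy to encode
    # Tacit knowledge ("how an expert decides") has no comparable triple encoding,
    # which is the kind of representational limit the abstract alludes to.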

Abstract:

Since much knowledge is tacit, eliciting knowledge is a common bottleneck during the development of knowledge-based systems. Visual interactive simulation (VIS) has been proposed as a means for eliciting experts’ decision-making by getting them to interact with a visual simulation of the real system in which they work. In order to explore the effectiveness and efficiency of VIS-based knowledge elicitation, an experiment was carried out with decision-makers in a Ford Motor Company engine assembly plant. The model properties under investigation were the level of visual representation (2-dimensional, 2½-dimensional and 3-dimensional) and the model parameter settings (unadjusted and adjusted to represent more uncommon and extreme situations). The conclusion from the experiment is that a 2-dimensional representation with adjusted parameter settings provides the better simulation-based means for eliciting knowledge, at least for the case modelled.

Abstract:

This paper describes the knowledge elicitation and knowledge representation aspects of a system being developed to help with the design and maintenance of relational databases. The domain is large and includes algorithmic components. In addition, it contains multiple experts, but any given expert's knowledge of this large domain is only partial. The paper discusses the methods and techniques used for knowledge elicitation, which was based on a "broad and shallow" approach at first, moving to a "narrow and deep" one later, and describes the models used for knowledge representation, which were based on a layered "generic and variants" approach. © 1995.
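
As a rough illustration of what a layered "generic and variants" representation might look like, the Python sketch below shows a generic frame whose default slots are overridden by a variant layer; the slot names and values are hypothetical and are not drawn from the system described in the paper.

# Hypothetical sketch of a layered "generic and variants" model:
# a generic frame supplies defaults, a variant overrides some of them.

class GenericModel:
    """Generic layer: default knowledge shared across the domain."""
    def __init__(self, name, slots):
        self.name = name
        self.slots = dict(slots)

class Variant(GenericModel):
    """Variant layer: overrides or extends the generic defaults."""
    def __init__(self, name, parent, overrides):
        merged = dict(parent.slots)
        merged.update(overrides)
        super().__init__(name, merged)
        self.parent = parent

generic_index = GenericModel("index_design", {
    "structure": "B-tree",
    "when_to_use": "frequent equality or range lookups",
})
bulk_load_variant = Variant("index_for_bulk_load", generic_index, {
    "when_to_use": "build after loading; drop indexes during bulk insert",
})

print(bulk_load_variant.slots["structure"])    # inherited from the generic layer
print(bulk_load_variant.slots["when_to_use"])  # overridden by the variant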

Abstract:

Ontologies have become the knowledge representation medium of choice in recent years for a range of computer science specialities including the Semantic Web, Agents, and Bio-informatics. There has been a great deal of research and development in this area combined with hype and reaction. This special issue is concerned with the limitations of ontologies and how these can be addressed, together with a consideration of how we can circumvent or go beyond these constraints. The introduction places the discussion in context and presents the papers included in this issue.

Abstract:

In a certain automobile factory, batch-painting of the body types in colours is controlled by an allocation system. This tries to balance production with orders, whilst making optimally-sized batches of colours. Sequences of cars entering painting cannot be optimised for easy selection of colour and batch size. 'Over-production' is not allowed, in order to reduce buffer stocks of unsold vehicles. Paint quality is degraded by random effects. This thesis describes a toolkit which supports IKBS in an object-centred formalism. The intended domain of use for the toolkit is flexible manufacturing. A sizeable application program was developed, using the toolkit, to test the validity of the IKBS approach in solving the real manufacturing problem above, for which an existing conventional program was already being used. A detailed statistical analysis of the operating circumstances of the program was made to evaluate the likely need for the more flexible type of program for which the toolkit was intended. The IKBS program captures the many disparate and conflicting constraints in the scheduling knowledge and emulates the behaviour of the program installed in the factory. In the factory system, many possible, newly-discovered heuristics would be awkward to represent and it would be impossible to make many new extensions. The representation scheme is capable of admitting changes to the knowledge, relying on the inherent encapsulating properties of object-centred programming to protect and isolate data. The object-centred scheme is supported by an enhancement of the 'C' programming language and runs under BSD 4.2 UNIX. The structuring technique, using objects, provides a mechanism for separating control of the expression of rule-based knowledge from the knowledge itself, allowing explicit 'contexts' within which the appropriate expression of knowledge can be done. Facilities are provided for acquisition of knowledge in a consistent manner.
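
The toolkit itself is an enhancement of C; purely to illustrate the idea of grouping rule-based knowledge into explicit contexts, separate from the objects that encapsulate the data, here is a hedged Python sketch. The rules, thresholds and state fields are invented for illustration and are not the thesis's scheduling knowledge.

# Minimal sketch: rules grouped under an explicit context, acting on an
# encapsulated state. All rules and values below are hypothetical.

class Car:
    def __init__(self, body_type, colour):
        self.body_type = body_type
        self.colour = colour

class Context:
    """A named context holding the rules that may fire within it."""
    def __init__(self, name):
        self.name = name
        self.rules = []  # list of (condition, action) pairs
    def add_rule(self, condition, action):
        self.rules.append((condition, action))
    def run(self, state):
        for condition, action in self.rules:
            if condition(state):
                action(state)

# Context: deciding whether to close the current paint batch.
batching = Context("batching")
batching.add_rule(
    lambda s: s["batch_size"] >= s["target_batch"],  # batch is big enough
    lambda s: s.update(batch_size=0, batches_closed=s["batches_closed"] + 1),
)
batching.add_rule(
    lambda s: s["queue"] and s["queue"][0].colour != s["current_colour"],  # colour change forced
    lambda s: s.update(current_colour=s["queue"][0].colour),
)

state = {"batch_size": 6, "target_batch": 5, "batches_closed": 0,
         "current_colour": "red", "queue": [Car("saloon", "blue")]}
batching.run(state)
print(state["batches_closed"], state["current_colour"])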

Abstract:

This article characterizes key weaknesses in the ability of current digital libraries to support scholarly inquiry, and as a way to address these, proposes computational services grounded in semiformal models of the naturalistic argumentation commonly found in research literatures. It is argued that a design priority is to balance formal expressiveness with usability, making it critical to coevolve the modeling scheme with appropriate user interfaces for argument construction and analysis. We specify the requirements for an argument modeling scheme for use by untrained researchers and describe the resulting ontology, contrasting it with other domain modeling and semantic web approaches, before discussing passive and intelligent user interfaces designed to support analysts in the construction, navigation, and analysis of scholarly argument structures in a Web-based environment. © 2007 Wiley Periodicals, Inc. Int J Int Syst 22: 17–47, 2007.
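
To make the notion of a semiformal argument structure concrete, the sketch below models claims as nodes with typed links between them; the relation names and example claims are illustrative assumptions, not the article's ontology.

# Hypothetical sketch of a scholarly argument map: claims plus typed links.

from dataclasses import dataclass, field

@dataclass
class Claim:
    id: str
    text: str

@dataclass
class Link:
    source: str    # id of the claim making the move
    target: str    # id of the claim it addresses
    relation: str  # e.g. "supports", "challenges"

@dataclass
class ArgumentMap:
    claims: dict = field(default_factory=dict)
    links: list = field(default_factory=list)
    def add_claim(self, claim):
        self.claims[claim.id] = claim
    def connect(self, source, target, relation):
        self.links.append(Link(source, target, relation))
    def challenges_to(self, claim_id):
        """All claims that challenge the given claim."""
        return [self.claims[link.source] for link in self.links
                if link.target == claim_id and link.relation == "challenges"]

m = ArgumentMap()
m.add_claim(Claim("c1", "Ontologies can represent all scholarly knowledge."))
m.add_claim(Claim("c2", "Tacit knowledge resists formal representation."))
m.connect("c2", "c1", "challenges")
print([c.text for c in m.challenges_to("c1")])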

Abstract:

Knowledge maintenance is a major challenge for both knowledge management and the Semantic Web. Operating over the Semantic Web, there will be a network of collaborating agents, each with their own ontologies or knowledge bases. Change in the knowledge state of one agent may need to be propagated across a number of agents and their associated ontologies. The challenge is to decide how to propagate a change of knowledge state. The effects of a change in knowledge state cannot be known in advance, and so an agent cannot know who should be informed unless it adopts a simple ‘tell everyone – everything’ strategy. This situation is highly reminiscent of the classic Frame Problem in AI. We argue that for agent-based technologies to succeed, far greater attention must be given to creating an appropriate model for knowledge update. In a closed system, simple strategies are possible (e.g. ‘sleeping dog’ or ‘cheap test’ or even complete checking). However, in an open system where cause and effect are unpredictable, a coherent cost-benefit based model of agent interaction is essential. Otherwise, the effectiveness of every act of knowledge update/maintenance is brought into question.
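
As a toy illustration, and not the authors' model, the sketch below contrasts the 'tell everyone, everything' strategy with a cost-benefit filter on update propagation; the agents, costs and relevance scores are invented.

# Hypothetical sketch: two update-propagation strategies for a network of agents.

def tell_everyone(agents, update):
    """Naive strategy: every agent receives every update, whatever the cost."""
    return {agent: update for agent in agents}

def cost_benefit(agents, update, relevance, notify_cost=1.0):
    """Propagate only where the estimated benefit of knowing exceeds the cost of telling."""
    plan = {}
    for agent in agents:
        benefit = relevance.get((agent, update["topic"]), 0.0)
        if benefit > notify_cost:
            plan[agent] = update
    return plan

agents = ["planner", "scheduler", "ontology_curator"]
update = {"topic": "part_numbering", "change": "new supplier code format"}
relevance = {("scheduler", "part_numbering"): 2.5,
             ("ontology_curator", "part_numbering"): 1.8,
             ("planner", "part_numbering"): 0.2}

print(len(tell_everyone(agents, update)))               # 3: everyone is told
print(sorted(cost_benefit(agents, update, relevance)))  # only agents for whom it pays off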

Abstract:

The work reported in this paper is part of a project simulating maintenance operations in an automotive engine production facility. The decisions made by the people in charge of these operations form a crucial element of this simulation. Eliciting this knowledge is problematic. One approach is to use the simulation model as part of the knowledge elicitation process. This paper reports on the experience so far with using a simulation model to support knowledge management in this way. Issues are discussed regarding the data available, the use of the model, and the elicitation process itself. © 2004 Elsevier B.V. All rights reserved.

Abstract:

Purpose - The idea that knowledge needs to be codified is central to many claims that knowledge can be managed. However, there appear to be no empirical studies in the knowledge management context that examine the process of knowledge codification. This paper therefore seeks to explore codification as a knowledge management process. Design/methodology/approach - The paper draws on findings from research conducted around a knowledge management project in a section of the UK Post Office, using a methodology of participant-observation. Data were collected through observations of project meetings, correspondence between project participants, and individual interviews. Findings - The principal findings about the nature of knowledge codification are, first, that the process of knowledge codification also involves the process of defining the codes needed to codify knowledge, and, second, that people who participate in the construction of these codes are able to interpret and use the codes more similarly. From this it can be seen that the extent to which people can decodify codes in the same way places restrictions on the transferability of knowledge between them. Research limitations/implications - The paper therefore argues that a new conceptual approach is needed for the role of knowledge codification in knowledge management, one that emphasizes the importance of knowledge decodification. Such an approach would start with one's ability to decodify rather than codify knowledge as a prerequisite for knowledge management. Originality/value - The paper provides a conceptual basis for explaining limitations to the management and transferability of knowledge. © Emerald Group Publishing Limited.

Abstract:

Knowledge is of crucial and growing importance in social, political and economic relations in modern society. The range and variety of available knowledge dramatically enlarges the available options of social action. This five-volume collection brings together a broad array of contributions from a variety of disciplines. Featuring essays from philosophers who have investigated the foundations of knowledge, and addressing different forms of knowledge in society such as common sense and practical knowledge, this collection also discusses the role of knowledge in economic processes and gives attention to the role of expert knowledge in political decision making. Including a collection of articles from the sociology of knowledge and science, the set also provides a new introduction by the editors, making it a unique and invaluable research resource for both student and scholar.

Abstract:

Benchmarking exercises have become increasingly popular within the sphere of regional policy making. However, most exercises are restricted to comparing regions within a particular continental bloc or nation. This article introduces the World Knowledge Competitiveness Index (WKCI), which is one of the very few benchmarking exercises established to compare regions across continents. The article discusses the formulation of the WKCI and analyzes the results of the most recent editions. The results suggest that there are significant variations in the knowledge-based regional economic development models at work across the globe. Further analysis also indicates that Silicon Valley, as the highest-ranked WKCI region, holds a unique economic position among the globe’s leading regions. However, significant changes in the sources of regional competitiveness are evolving as a result of the emergence of new regional hot spots in Asia. It is concluded that benchmarking is imperative to the learning process of regional policy making.

Abstract:

A multi-chromosome GA (Multi-GA) was developed, based upon concepts from the natural world, allowing improved flexibility in a number of areas including representation, genetic operators, their parameter rates, and real-world multi-dimensional applications. A series of experiments was conducted comparing the performance of the Multi-GA to a traditional GA on a number of recognised and increasingly complex test optimisation surfaces, with promising results. Further experiments demonstrated the Multi-GA's flexibility through the use of non-binary chromosome representations and its applicability to dynamic parameterisation. A number of alternative and new methods of dynamic parameterisation were investigated, in addition to a new non-binary 'Quotient crossover' mechanism. Finally, the Multi-GA was applied to two real-world problems, demonstrating its ability to handle mixed-type chromosomes within an individual, the limited use of a chromosome-level fitness function, the introduction of new genetic operators for structural self-adaptation, and its viability as a serious real-world analysis tool. The first problem involved the optimum placement of computers within a building, allowing the Multi-GA to use multiple chromosomes with different type representations and different operators in a single individual. The second problem, commonly associated with Geographical Information Systems (GIS), required a spatial analysis to locate the optimum number and distribution of retail sites over two different population grids. In applying the Multi-GA, two new genetic operators (addition and deletion) were developed and explored, resulting in the definition of a mechanism for self-modification of genetic material within the Multi-GA structure and a study of this behaviour.
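
The sketch below is an invented illustration, not the Multi-GA toolkit itself: an individual carrying chromosomes with different representations (binary and real-valued) plus structural 'addition' and 'deletion' operators on the variable-length chromosome. All parameter values are arbitrary.

# Hypothetical sketch of a multi-chromosome individual with mixed representations
# and structural addition/deletion operators.

import random

def random_binary(n):
    return [random.randint(0, 1) for _ in range(n)]

def random_real(n, lo=0.0, hi=1.0):
    return [random.uniform(lo, hi) for _ in range(n)]

class MultiChromosomeIndividual:
    def __init__(self):
        # Chromosome 1: binary string, e.g. on/off placement decisions.
        self.binary = random_binary(8)
        # Chromosome 2: variable-length list of real-valued genes, e.g. site coordinates.
        self.sites = [random_real(2) for _ in range(3)]

    def mutate_binary(self, rate=0.1):
        self.binary = [1 - g if random.random() < rate else g for g in self.binary]

    def add_site(self):
        """Structural 'addition' operator: grow the variable-length chromosome."""
        self.sites.append(random_real(2))

    def delete_site(self):
        """Structural 'deletion' operator: shrink it, keeping at least one gene."""
        if len(self.sites) > 1:
            self.sites.pop(random.randrange(len(self.sites)))

ind = MultiChromosomeIndividual()
ind.mutate_binary()
ind.add_site()
ind.delete_site()
print(len(ind.binary), len(ind.sites))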