888 results for Knowledge representation (Information theory)
Abstract:
This paper highlights the importance of design expertise, including subjective judgment and professional experience, in the design of liquid retaining structures. The design of liquid retaining structures has special features that distinguish it from other structural design: being more vulnerable to corrosion, such structures must satisfy stringent requirements on the serviceability limit state of cracking. The premise of the study is to transfer expert knowledge into a computerized blackboard system. Hybrid knowledge representation schemes, including production rules, object-oriented programming, and procedural methods, are employed to express engineering heuristics and standard design knowledge during the development of the knowledge-based system (KBS) for the design of liquid retaining structures. This approach makes it possible to take advantage of the characteristics of each method. The system can provide the user with advice on preliminary design, loading specification, optimized configuration selection, and detailed design analysis of liquid retaining structures. It benefits the field of retaining structure design by focusing on the acquisition and organization of expert knowledge through recent artificial intelligence technology. (C) 2003 Elsevier Ltd. All rights reserved.
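As a loose illustration (not the authors' actual system), the hybrid scheme described above — objects for design entities, production rules for heuristics, and procedural methods for control — might be sketched as follows. The class, thresholds, and rule texts are hypothetical:

```python
# Hypothetical sketch of a hybrid knowledge representation scheme:
# object-oriented design entities + production rules + a procedural driver.

from dataclasses import dataclass

@dataclass
class Tank:                      # object-oriented representation of a design
    height_m: float
    wall_thickness_mm: float
    crack_width_mm: float

# Production rules: (condition, advice) pairs encoding design heuristics.
RULES = [
    (lambda t: t.crack_width_mm > 0.2,
     "Crack width exceeds serviceability limit; increase reinforcement."),
    (lambda t: t.wall_thickness_mm < 200 and t.height_m > 4.0,
     "Wall may be too thin for hydrostatic load; increase thickness."),
]

def advise(tank):                # procedural method driving the rule base
    return [msg for cond, msg in RULES if cond(tank)]

tank = Tank(height_m=5.0, wall_thickness_mm=180, crack_width_mm=0.25)
for advice in advise(tank):
    print(advice)
```

The design point is that each formalism handles what it expresses best: objects encapsulate the design data, rules carry the heuristics, and the procedural layer sequences their application.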
Abstract:
In the context of a hostile funding environment, universities are increasingly asked to justify their output in narrowly defined economic terms, and this can be difficult in Humanities or Arts faculties where productivity is rarely reducible to a simple financial indicator. This can lead to a number of immediate consequences that I have no need to rehearse here, but can also result in some interesting tensions within the academic community itself. First is that which has become known as the ‘Science Wars’: the increasingly acrimonious exchanges between scientists and scientific academics and cultural critics or theorists about who has the right to describe the world. Much has already been said—and much remains to be said—about this issue, but it is not my intention to discuss it here. Rather, I will look at a second area of contestation: the incorporation of scientific theory into literary or cultural criticism. Much of this work comes from a genuine commitment to interdisciplinarity, and an appreciation of the insights that a fresh perspective can bring to a familiar object. However, some of it can be seen as a cynical attempt to lend literary studies the empirical legitimacy of the sciences. In particular, I want to look at a number of critics who have applied information theory to the literary work. In this paper, I will examine several instances of this sort of criticism, and then, through an analysis of a novel by American author Richard Powers, Three Farmers on Their Way to a Dance, show how this sort of criticism merely reduces the meaningful analysis of a complex literary text.
Abstract:
Ontologies have become the knowledge representation medium of choice in recent years for a range of computer science specialities including the Semantic Web, Agents, and Bio-informatics. There has been a great deal of research and development in this area combined with hype and reaction. This special issue is concerned with the limitations of ontologies and how these can be addressed, together with a consideration of how we can circumvent or go beyond these constraints. The introduction places the discussion in context and presents the papers included in this issue.
Abstract:
Recently, we have seen an explosion of interest in ontologies as artifacts to represent human knowledge and as critical components in knowledge management, the semantic Web, business-to-business applications, and several other application areas. Various research communities commonly assume that ontologies are the appropriate modeling structure for representing knowledge. However, little discussion has occurred regarding the actual range of knowledge an ontology can successfully represent.
Abstract:
In a certain automobile factory, batch-painting of the body types in colours is controlled by an allocation system. This tries to balance production with orders, whilst making optimally-sized batches of colours. Sequences of cars entering painting cannot be optimised for easy selection of colour and batch size. `Over-production' is not allowed, in order to reduce buffer stocks of unsold vehicles. Paint quality is degraded by random effects. This thesis describes a toolkit which supports IKBS (intelligent knowledge-based systems) in an object-centred formalism. The intended domain of use for the toolkit is flexible manufacturing. A sizeable application program was developed, using the toolkit, to test the validity of the IKBS approach in solving the real manufacturing problem above, for which an existing conventional program was already in use. A detailed statistical analysis of the operating circumstances of the program was made to evaluate the likely need for the more flexible type of program for which the toolkit was intended. The IKBS program captures the many disparate and conflicting constraints in the scheduling knowledge and emulates the behaviour of the program installed in the factory. In the installed factory system, many newly-discovered heuristics would be awkward to represent, and many extensions would be impossible to make. The representation scheme is capable of admitting changes to the knowledge, relying on the inherent encapsulating properties of object-centred programming to protect and isolate data. The object-centred scheme is supported by an enhancement of the `C' programming language and runs under BSD 4.2 UNIX. The structuring technique, using objects, provides a mechanism for separating control of the expression of rule-based knowledge from the knowledge itself, allowing explicit `contexts' within which appropriate expression of knowledge can be done. Facilities are provided for acquisition of knowledge in a consistent manner.
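The thesis's toolkit extends C, but the idea of grouping rules under explicit `contexts' that control when knowledge is expressed can be sketched more compactly in Python. The batch sizes, context names, and rules here are invented for illustration:

```python
# Illustrative sketch (in Python, not the thesis's extended C) of separating
# rule-based knowledge from its control via explicit contexts.

class Batch:
    def __init__(self, colour, size):
        self.colour, self.size = colour, size

# Knowledge: rules grouped by named context; control chooses the context.
CONTEXTS = {
    "normal": [
        lambda b: "close batch" if b.size >= 20 else None,
    ],
    "low_orders": [
        lambda b: "close batch" if b.size >= 8 else None,  # smaller batches
    ],
}

def schedule(batch, context):
    """Apply only the rules belonging to the active context."""
    for rule in CONTEXTS[context]:
        action = rule(batch)
        if action:
            return action
    return "keep filling"

print(schedule(Batch("red", 10), "normal"))      # keep filling
print(schedule(Batch("red", 10), "low_orders"))  # close batch
```

The same batch yields different decisions under different contexts, which is the point: new heuristics are added to a context's rule list without touching the control loop.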
Abstract:
This paper describes the knowledge elicitation and knowledge representation aspects of a system being developed to help with the design and maintenance of relational databases. The size of the domain precludes purely algorithmic components. In addition, the domain contains multiple experts, but any given expert's knowledge of this large domain is only partial. The paper discusses the methods and techniques used for knowledge elicitation, which was based on a "broad and shallow" approach at first, moving to a "narrow and deep" one later, and describes the models used for knowledge representation, which were based on a layered "generic and variants" approach. © 1995.
Abstract:
We present an information-theory analysis of the tradeoff between bit-error-rate improvement and data-rate loss when skewed channel coding is used to suppress pattern-dependent errors in digital communications. Without loss of generality, we apply the developed general theory to the particular example of a high-speed fiber communication system with a strong patterning effect. © 2007 IEEE.
Abstract:
The basic structure of the General Information Theory (GIT) is presented in the paper. The main divisions of the GIT are outlined, and some new results are pointed out.
Abstract:
In this paper we summarize our recently proposed work on the information-theory analysis of regenerative channels. We discuss how the design and transfer-function properties of the regenerator affect the noise statistics and enable Shannon capacities higher than those of the corresponding linear channels (in the absence of regeneration).
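The baseline in that comparison is the linear (non-regenerative) channel; for an AWGN model its Shannon capacity is C = log2(1 + SNR) bits per channel use. A sketch of that reference point (the SNR values are arbitrary examples, and the regenerative capacity itself depends on the reshaped noise statistics analysed in the paper):

```python
# Shannon capacity of the linear AWGN reference channel,
# C = log2(1 + SNR) bits per channel use.

import math

def awgn_capacity(snr_db):
    snr = 10 ** (snr_db / 10)          # convert dB to a linear power ratio
    return math.log2(1 + snr)

for snr_db in (0, 10, 20):
    print(f"{snr_db:>2} dB SNR -> {awgn_capacity(snr_db):.2f} bits/use")
```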
Abstract:
Knowledge is the key to success. The treatment applied to data for generating knowledge can make a difference in projects, processes, and networks. Such treatment is the main goal of two important areas: knowledge representation and knowledge management. Our aim in this book is to collect some innovative ways of representing and managing knowledge, proposed by several Latin American researchers under the premise of improving knowledge.
Abstract:
This paper proposes a principal-agent model between banks and firms with risk and asymmetric information. A mixed form of finance to firms is assumed. The capital structure of firms is a relevant cause for the final aggregate level of investment in the economy. In the model analyzed, there may be a separating equilibrium, which is not economically efficient, because aggregate investments fall short of the first-best level. Based on European firm-level data, an empirical model is presented which validates the result of the relevance of the capital structure of firms. The relative magnitude of equity in the capital structure makes a real difference to the profits obtained by firms in the economy.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Information can be expressed in many ways according to the different capacities of humans to perceive it. Current systems deal with multimedia, multiformat and multiplatform content, but another «multi» is still pending to guarantee global access to information: multilinguality. Different languages imply different replications of a system for each language in question. No solution yet appears to bridge the human representation (natural language) and a system-oriented representation. In 1997, the United Nations University defined a language to support effective multilingualism on the Internet. In this paper, we describe this language and its possible applications beyond multilingual services, as a possible future standard for language-independent applications.
Abstract:
In this paper, we discuss Conceptual Knowledge Discovery in Databases (CKDD) in its connection with Data Analysis. Our approach is based on Formal Concept Analysis, a mathematical theory which has been developed and proven useful during the last 20 years. Formal Concept Analysis has led to a theory of conceptual information systems which has been applied by using the management system TOSCANA in a wide range of domains. In this paper, we use such an application in database marketing to demonstrate how methods and procedures of CKDD can be applied in Data Analysis. In particular, we show the interplay and integration of data mining and data analysis techniques based on Formal Concept Analysis. The main concern of this paper is to explain how the transition from data to knowledge can be supported by a TOSCANA system. To clarify the transition steps we discuss their correspondence to the five levels of knowledge representation established by R. Brachman and to the steps of empirically grounded theory building proposed by A. Strauss and J. Corbin.
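A toy sketch of the Formal Concept Analysis step underlying such conceptual information systems (the object-attribute context here is invented, not taken from the TOSCANA application): each formal concept pairs an extent (a set of objects) with an intent (the attributes they all share), computed here by naive closure over all attribute subsets:

```python
# Minimal Formal Concept Analysis: derive all formal concepts
# (extent, intent) of a small object-attribute context by naive closure.

from itertools import chain, combinations

# Hypothetical context: customers (objects) vs. product interests (attributes).
context = {
    "anna": {"books", "music"},
    "ben":  {"books"},
    "cara": {"music", "travel"},
}
attributes = set().union(*context.values())

def extent(attrs):   # objects possessing all the given attributes
    return frozenset(o for o, a in context.items() if attrs <= a)

def intent(objs):    # attributes shared by all the given objects
    return frozenset(attributes) if not objs else frozenset.intersection(
        *(frozenset(context[o]) for o in objs))

concepts = set()
for attrs in chain.from_iterable(
        combinations(attributes, r) for r in range(len(attributes) + 1)):
    e = extent(set(attrs))
    concepts.add((e, intent(e)))   # closing each subset yields a concept

for e, i in sorted(concepts, key=lambda c: -len(c[0])):
    print(sorted(e), sorted(i))
```

The resulting concept lattice is what a TOSCANA-style system visualises; this brute-force enumeration is exponential in the number of attributes and serves only to make the extent/intent duality concrete.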