959 results for Knowledge representation
Abstract:
Dissertation for obtaining the degree of Master in Electrical and Computer Engineering
Abstract:
Magdeburg, Univ., Faculty of Computer Science, Diss., 2012
Abstract:
The goal of this thesis is to estimate the effect of the form of knowledge representation on the efficiency of knowledge sharing. The objectives include the design of an experimental framework that allows this effect to be established, data collection, and statistical analysis of the collected data. The study follows an experimental quantitative design. The experimental questionnaire features three sample forms of knowledge: text, mind maps, and concept maps. In the interview, these forms are presented to an interviewee, after which knowledge sharing time and knowledge sharing quality are measured. According to the statistical analysis of 76 interviews, text performs worse than the visualized forms of knowledge representation in both knowledge sharing time and quality. Mind maps and concept maps, however, do not differ measurably in knowledge sharing time or quality: the difference between them is not statistically significant. Since visualized, structured forms of knowledge outperform unstructured text in knowledge sharing, companies are advised to foster the use of these forms in their internal knowledge sharing processes. Aside from their performance in knowledge sharing, the visualized, structured forms are preferable because they can also be used in a system of ontological knowledge management within an enterprise.
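The text-versus-visualization comparison described in this abstract can be sketched as a two-sample test. The timing data below are invented for illustration (the study's 76 interviews are not reproduced here), and the statistic is Welch's t, a standard choice when group variances may differ:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t-statistic for two independent samples with unequal variances."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)  # sample variances
    standard_error = math.sqrt(var_a / len(a) + var_b / len(b))
    return (mean_a - mean_b) / standard_error

# Hypothetical knowledge sharing times in minutes (text vs. mind maps).
text_times = [12.1, 10.4, 13.0, 11.7, 12.5, 10.9]
mindmap_times = [8.3, 9.1, 7.8, 8.9, 9.4, 8.0]

t_stat = welch_t(text_times, mindmap_times)  # positive: text is slower
```

A large positive t-statistic here would, as in the study, point to text underperforming the visualized form; the degrees of freedom and p-value would still need the Welch-Satterthwaite correction, omitted for brevity.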
Abstract:
Ontic is an interactive system for developing and verifying mathematics. Ontic's verification mechanism is capable of automatically finding and applying information from a library containing hundreds of mathematical facts. Starting with only the axioms of Zermelo-Fraenkel set theory, the Ontic system has been used to build a database of definitions and lemmas leading to a proof of the Stone representation theorem for Boolean lattices. The Ontic system has been used to explore issues in knowledge representation, automated deduction, and the automatic use of large databases.
Abstract:
Focusing on Open Data and the need for data cleaning
Abstract:
One of the most pervasive concepts underlying computational models of information processing in the brain is linear input integration of rate-coded univariate information by neurons. After a suitable learning process, this results in neuronal structures that statically represent knowledge as a vector of real-valued synaptic weights. Although this general framework has contributed to the many successes of connectionism, in this paper we argue that for all but the most basic cognitive processes a more complex, multivariate, dynamic neural coding mechanism is required: knowledge should not be spatially bound to a particular neuron or group of neurons. We conclude the paper with a discussion of a simple experiment that illustrates dynamic knowledge representation in a spiking-neuron connectionist system.
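The dynamic, spike-based coding the authors contrast with static weight vectors can be illustrated with a minimal leaky integrate-and-fire neuron, a standard textbook model; the parameters below are arbitrary and are not taken from the paper's experiment:

```python
def lif_spike_times(inputs, tau=10.0, threshold=1.0, dt=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks toward
    zero and integrates input current; a spike is emitted at threshold."""
    v, spikes = 0.0, []
    for t, current in enumerate(inputs):
        v += dt * (-v / tau + current)  # leaky integration step
        if v >= threshold:
            spikes.append(t)  # information is carried by spike timing...
            v = 0.0           # ...and the potential resets after firing
    return spikes

# A constant drive yields a regular spike train: the representation lives in
# the temporal pattern, not in a static vector of weights.
train = lif_spike_times([0.15] * 50)
```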
Abstract:
Characteristics of speech, especially figures of speech, are used by specific communities or domains and thus reflect their identities through their choice of vocabulary. This topic should be an object of study in the context of knowledge representation, since it deals with different contexts of document production. This study aims to explore the dimensions of the concepts of euphemism, dysphemism, and orthophemism, focusing on the latter with the goal of extracting a concept that can be included in discussions about subject analysis and indexing. Euphemism is used as an alternative to a non-preferred expression or to an offensive attribution, in order to avoid potential offense taken by the listener or by other persons; for instance, pass away. Dysphemism, on the other hand, is used by speakers to talk about people and things that frustrate and annoy them; their choice of language indicates disapproval, and the topic is therefore denigrated, humiliated, or degraded; for instance, kick the bucket. While euphemism tries to make something sound better, dysphemism tries to make something sound worse. Orthophemism (Allan and Burridge 2006) is also used as an alternative to other expressions, but it is the preferred, formal, and direct language of expression when representing an object or a situation; for instance, die. This paper suggests that the comprehension and use of such concepts could support the following issues: possible contributions from linguistics and terminology to subject analysis, as demonstrated by Talamo et al. (1992); reduction of the polysemy and ambiguity of terms used to represent certain topics of documents; and the construction and evaluation of indexing languages. The concept of orthophemism can also serve to support associative relationships in the context of subject analysis, indexing, and even information retrieval for more specific requests.
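In an indexing language, the distinction drawn above maps naturally onto preferred and non-preferred terms. A minimal sketch follows; the entries are the abstract's own examples, while the dictionary-based resolver is a hypothetical illustration:

```python
# Non-preferred lead-in terms (euphemisms, dysphemisms) point to the
# orthophemistic descriptor, as a thesaurus USE reference would.
USE = {
    "pass away": "die",        # euphemism -> orthophemism
    "kick the bucket": "die",  # dysphemism -> orthophemism
}

def preferred_term(term):
    """Resolve a term to its preferred (orthophemistic) descriptor."""
    return USE.get(term, term)
```

Resolving variant expressions to a single orthophemistic descriptor is one way such concepts could reduce polysemy and ambiguity in subject indexing.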
Abstract:
One common problem in all basic techniques of knowledge representation is the handling of the trade-off between precision of inferences and resource constraints, such as time and memory. Michalski and Winston (1986) suggested the Censored Production Rule (CPR) as an underlying representation and computational mechanism to enable logic-based systems to exhibit variable precision, in which certainty varies while specificity stays constant. As an extension of CPR, the Hierarchical Censored Production Rules (HCPRs) system of knowledge representation, proposed by Bharadwaj & Jain (1992), exhibits both variable certainty and variable specificity, and offers mechanisms for handling the trade-off between the two. An HCPR has the form: Decision If (preconditions) Unless (censor) Generality (general_information) Specificity (specific_information). As an attempt towards evolving a generalized knowledge representation, an Extended Hierarchical Censored Production Rules (EHCPRs) system is suggested in this paper. With the inclusion of new operators, an Extended Hierarchical Censored Production Rule (EHCPR) takes the general form: Concept If (Preconditions) Unless (Exceptions) Generality (General-Concept) Specificity (Specific-Concepts) Has_part (default: structural-parts) Has_property (default: characteristic-properties) Has_instance (instances). How semantic networks and frames are represented in terms of EHCPRs is shown. Multiple inheritance, inheritance with and without cancellation, recognition with partial match, and a few default logic problems are shown to be handled efficiently in the proposed system.
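The general EHCPR form quoted above can be sketched as a plain data structure. The `fires` check and the bird example below are illustrative additions under a closed-world reading, not part of the cited system:

```python
from dataclasses import dataclass, field

@dataclass
class EHCPR:
    """Fields follow the general form: Concept, If, Unless, Generality,
    Specificity, Has_part, Has_property, Has_instance."""
    concept: str
    preconditions: list                                 # If(...)
    exceptions: list = field(default_factory=list)      # Unless(censor)
    generality: str = ""                                # more general concept
    specificity: list = field(default_factory=list)     # more specific concepts
    has_part: list = field(default_factory=list)
    has_property: list = field(default_factory=list)
    has_instance: list = field(default_factory=list)

    def fires(self, facts):
        """Variable precision in miniature: conclude when the preconditions
        hold and no censor (exception) is known to hold."""
        return (all(p in facts for p in self.preconditions)
                and not any(e in facts for e in self.exceptions))

flier = EHCPR("can_fly", preconditions=["is_bird"], exceptions=["is_penguin"])
```

Skipping the exception check when resources are short would trade certainty for speed, which is the precision/resource trade-off this rule family is designed around.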
Abstract:
Since its emergence as a discipline in the nineteenth century (1889), the theory and practice of Archival Science have focused on the arrangement and description of archival materials as complementary and inseparable core processes that aim to classify, order, describe, and give access to records. These processes have their specific goals while sharing one in common: the representation of archival knowledge. In the late 1980s a paradigm shift was announced in Archival Science, especially after the appearance of new forms of document production and information technologies. The discipline was then invited to rethink its theoretical and methodological bases, founded in the nineteenth century, so that it could handle contemporary archival knowledge production, organization, and representation. In this sense, the present paper aims to discuss, from a theoretical perspective, archival representation, and more specifically archival description, in the face of these changes and proposals, in order to illustrate the challenges faced by Contemporary Archival Science in a new context of production, organization, and representation of archival knowledge.
Abstract:
The goal of the present research is to define a Semantic Web framework for precedent modelling, using knowledge extracted from text, metadata, and rules, while maintaining a strong text-to-knowledge morphism between legal texts and legal concepts, in order to fill the gap between a legal document and its semantics. The framework is composed of four different models that make use of standard languages from the Semantic Web stack of technologies: a document metadata structure, modelling the main parts of a judgement and creating a bridge between a text and its semantic annotations of legal concepts; a legal core ontology, modelling abstract legal concepts and institutions contained in a rule of law; a legal domain ontology, modelling the main legal concepts in a specific domain covered by case law; and an argumentation system, modelling the structure of argumentation. The input to the framework includes metadata associated with judicial concepts and an ontology library representing the structure of case law. The research relies on the previous efforts of the community in the field of legal knowledge representation and rule interchange for applications in the legal domain, in order to apply the theory to a set of real legal documents, stressing OWL axiom definitions as much as possible so that they can provide a semantically powerful representation of the legal document and a solid ground for an argumentation system using a defeasible subset of predicate logic. It appears that some new features of OWL 2 unlock useful reasoning features for legal knowledge, especially when combined with defeasible rules and argumentation schemes. The main task is thus to formalize the legal concepts and argumentation patterns contained in a judgement, with the following requirement: to check, validate, and reuse the discourse of a judge, and the argumentation he produces, as expressed by the judicial text.
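The defeasible layer mentioned above can be sketched with plain rules whose conclusions are blocked by known defeaters; the predicates below are hypothetical and do not come from the framework's actual rule language:

```python
# Each rule is (conclusion, preconditions, defeaters): it fires when all
# preconditions are among the facts and no defeater is.
defeasible_rules = [
    ("contract_valid", {"signed_contract"}, {"minor_party", "duress"}),
]

def conclusions(facts, rules):
    """Single-step defeasible inference over a set of ground facts."""
    derived = set()
    for head, body, defeaters in rules:
        if body <= facts and not (defeaters & facts):
            derived.add(head)
    return derived
```

An argumentation system in the sense of the abstract would additionally record which defeater blocked which conclusion, so that a judge's reasoning can be checked and reused rather than merely recomputed.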