948 results for Knowledge representation


Relevance: 100.00%

Publisher:

Abstract:

The topic of designers' knowledge and how they conduct the design process has been widely investigated in design research. Understanding theoretical and experiential knowledge in design has involved recognizing the importance of designers' experience of seeing and absorbing ideas from the world as points of reference (or precedents) that are consulted whenever a design problem arises (Lawson, 2004). Various types of design knowledge have therefore been categorized (Lawson, 2004), and the nature of design knowledge continues to be studied (Cross, 2006); nevertheless, the experiential aspects embedded in design knowledge remain not fully addressed. In particular, little emphasis has been placed on investigating the ways in which designers' individual experience influences different types of design tasks. This research focuses on the ways in which designers inform a usability design process. It aims to understand how designers design product usability, what informs their process, and the role their individual experience (and episodic knowledge) plays within the design process. This paper introduces initial outcomes from an empirical study involving observation of a design task that emphasized usability issues. It discusses the experiential knowledge observed in the visual representations (sketches) produced by designers as part of the design task. Through the use of visuals as a means to represent experiential knowledge, the paper presents initial research outcomes demonstrating how designers' individual experience is integrated into design tasks and communicated within the design process. These outcomes show the influence of designers' experience on the design of product usability. It is expected that the outcomes will help identify the causal relationships between experience, context of use, and product usability, which will contribute to enhancing our understanding of the design of user-product interactions.

Relevance: 100.00%

Publisher:

Abstract:

With the phenomenal growth of electronic data and information, there is strong demand for efficient and effective systems (tools) to perform data mining tasks on multidimensional databases. Association rules describe associations between items in the same transaction (intra) or in different transactions (inter). Association mining attempts to find interesting or useful association rules in databases; this is the crucial issue for the application of data mining in the real world. Association mining can be used in many application areas, such as the discovery of associations between customers' locations and shopping behaviours in market basket analysis. Association mining includes two phases. The first phase, called pattern mining, is the discovery of frequent patterns. The second phase, called rule generation, is the discovery of interesting and useful association rules among the discovered patterns. The first phase, however, often takes a long time to find all frequent patterns, which also include much noise. The second phase is also a time-consuming activity that can generate many redundant rules. To improve the quality of association mining in databases, this thesis proposes an alternative technique, granule-based association mining, for knowledge discovery in databases, where a granule refers to a predicate that describes common features of a group of transactions. The new technique first transforms transaction databases into basic decision tables, then uses multi-tier structures to integrate pattern mining and rule generation in one phase for both intra- and inter-transaction association rule mining. To evaluate the proposed technique, this research defines the concept of meaningless rules by considering the correlations between data dimensions for intra-transaction association rule mining. It also uses precision to evaluate the effectiveness of inter-transaction association rules. The experimental results show that the proposed technique is promising.
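
The two-phase process summarized above can be illustrated with a short sketch; it shows only the standard support/confidence pipeline, not the thesis's granule-based technique, and the transactions, thresholds, and helper names are assumptions made for the example.

from itertools import combinations

# Toy transaction database and thresholds (illustrative values only).
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"butter", "milk"},
    {"bread", "butter"},
]
MIN_SUPPORT = 0.5     # fraction of transactions a pattern must appear in
MIN_CONFIDENCE = 0.7  # required confidence for a generated rule


def support(itemset, data):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in data) / len(data)


# Phase 1: pattern mining -- enumerate frequent patterns by support counting.
# (Brute force for clarity; real miners such as Apriori prune this search.)
items = set().union(*transactions)
frequent = {
    frozenset(candidate)
    for size in range(1, len(items) + 1)
    for candidate in combinations(items, size)
    if support(set(candidate), transactions) >= MIN_SUPPORT
}

# Phase 2: rule generation -- split each frequent pattern into
# antecedent -> consequent and keep the rules with enough confidence.
rules = []
for pattern in frequent:
    for size in range(1, len(pattern)):
        for antecedent in map(frozenset, combinations(pattern, size)):
            consequent = pattern - antecedent
            confidence = support(pattern, transactions) / support(antecedent, transactions)
            if confidence >= MIN_CONFIDENCE:
                rules.append((set(antecedent), set(consequent), confidence))

for lhs, rhs, conf in rules:
    print(f"{lhs} -> {rhs} (confidence {conf:.2f})")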

Relevance: 100.00%

Publisher:

Abstract:

This thesis examines current cognitive science theories of concepts and their modelling with object-centred knowledge representation methods. The concept theories covered are the classical definitional theory, prototype theory, dual theories, the neoclassical theory, the theory-theory, and the atomistic theory. Object-centred methods have recently divided into two types of languages: object-based and class-based. New object-based object-oriented programming languages offer possibilities for representing concepts that earlier class-based languages, and also frame-based methods, lack. The thesis shows that the new features of object-based languages provide means by which concepts can be represented in symbolic form better than with traditional methods. They can simulate everything that class-based languages can, but in addition they can simulate family-resemblance concepts and allow objects to be modified dynamically without violating the principle of psychological essentialism. The thesis also points out serious shortcomings that concern the object-centred approach as a whole. Keywords: concepts, concept theories, artificial intelligence, computational psychology, object-oriented programming, knowledge representation
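
As a rough illustration of the object-based style discussed above, the sketch below simulates prototype delegation in Python (which is itself class-based): each concept is an individual object that delegates missing slots to a prototype, so a single exemplar can be changed at run time without editing a shared class definition. The ProtoObject helper and all concept names are assumptions made for the example, not part of the thesis.

# Prototype-style concept objects with slot delegation (illustrative sketch).
class ProtoObject:
    def __init__(self, prototype=None, **slots):
        self._prototype = prototype
        self._slots = dict(slots)

    def get(self, name):
        """Look the slot up locally, then delegate along the prototype chain."""
        if name in self._slots:
            return self._slots[name]
        if self._prototype is not None:
            return self._prototype.get(name)
        raise AttributeError(name)

    def set(self, name, value):
        """Modify this object only; others sharing the prototype are untouched."""
        self._slots[name] = value


# A "bird" prototype holds typical rather than defining features, in the
# spirit of prototype theory rather than the classical definitional theory.
bird = ProtoObject(flies=True, lays_eggs=True, has_feathers=True)

robin = ProtoObject(prototype=bird, colour="red-breasted")
penguin = ProtoObject(prototype=bird)
penguin.set("flies", False)   # dynamic, per-object exception to the prototype

print(robin.get("flies"))     # True, inherited from the prototype
print(penguin.get("flies"))   # False, overridden locally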

Relevance: 100.00%

Publisher:

Abstract:

TYPICAL is a package for describing and making automatic inferences about a broad class of SCHEME predicate functions. These functions, called types following popular usage, delineate classes of primitive SCHEME objects, composite data structures, and abstract descriptions. TYPICAL types are generated by an extensible combinator language from either existing types or primitive terminals. These generated types are located in a lattice of predicate subsumption which captures necessary entailment between types; if satisfaction of one type necessarily entails satisfaction of another, the first type is below the second in the lattice. The inferences made by TYPICAL compute the position of a new definition within the lattice and establish it there. This information is then accessible both to later inferences and to other programs (reasoning systems, code analyzers, etc.) that may need it for their own purposes. TYPICAL was developed as a representation language for the discovery program Cyrano; particular examples are given of TYPICAL's application in the Cyrano program.
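
TYPICAL itself is a SCHEME package; the Python sketch below only illustrates the idea it describes: types as membership predicates ordered by subsumption, with a conjunction combinator whose result lies below both operands in the lattice. The Type and both helpers and the example types are assumptions made for the illustration.

# Predicate types in a subsumption (entailment) ordering -- illustrative only.
class Type:
    def __init__(self, name, predicate, parents=()):
        self.name = name
        self.predicate = predicate    # object -> bool membership test
        self.parents = set(parents)   # types whose satisfaction this type entails

    def satisfied_by(self, obj):
        return self.predicate(obj)

    def below(self, other):
        """True if satisfying this type necessarily entails satisfying `other`."""
        if other is self or other in self.parents:
            return True
        return any(p.below(other) for p in self.parents)


def both(name, a, b):
    """Conjunction combinator: the generated type sits below both operands."""
    return Type(name, lambda x: a.satisfied_by(x) and b.satisfied_by(x), parents=(a, b))


number = Type("number", lambda x: isinstance(x, (int, float)))
positive = Type("positive", lambda x: isinstance(x, (int, float)) and x > 0, parents=(number,))
even = Type("even", lambda x: isinstance(x, int) and x % 2 == 0, parents=(number,))

positive_even = both("positive-even", positive, even)

print(positive_even.satisfied_by(4))   # True
print(positive_even.below(number))     # True: entailment follows the lattice
print(positive.below(even))            # False: no necessary entailment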

Relevance: 100.00%

Publisher:

Abstract:

This report describes a system which maintains canonical expressions for designators under a set of equalities. Substitution is used to maintain all knowledge in terms of these canonical expressions. A partial order on designators, termed the better-name relation, is used in the choice of canonical expressions. It is shown that with an appropriate better-name relation an important engineering reasoning technique, propagation of constraints, can be implemented as a special case of this substitution process. Special-purpose algebraic simplification procedures are embedded such that they interact effectively with the equality system. An electrical circuit analysis system is developed which relies upon constraint propagation and algebraic simplification as primary reasoning techniques. The reasoning is guided by a better-name relation in which referentially transparent terms are preferred to referentially opaque ones. Multiple descriptions of subcircuits are shown to interact strongly with the reasoning mechanism.
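
The sketch below conveys, under assumed names, the flavour of that idea: a union-find keeps one canonical designator per equivalence class, and a toy better-name preference (numbers over symbolic names) decides which designator survives. It is not the report's system; the final lines only hint at how known values propagate once equalities are asserted.

# Canonical designators under asserted equalities (illustrative sketch).
parent = {}

def find(x):
    """Return the canonical designator of x's equivalence class."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path compression
        x = parent[x]
    return x

def better_name(a, b):
    """Toy better-name relation: prefer concrete numbers to symbolic names."""
    return isinstance(a, (int, float)) and not isinstance(b, (int, float))

def assert_equal(a, b):
    """Merge two designators, keeping the better name as the canonical one."""
    ra, rb = find(a), find(b)
    if ra == rb:
        return
    if better_name(rb, ra):
        ra, rb = rb, ra
    parent[rb] = ra   # ra survives as the canonical expression

# Asserting i = 2 and r = 5 lets v (asserted equal to i * r) be named by a
# number, a toy stand-in for propagation of constraints.
assert_equal("i", 2)
assert_equal("r", 5)
assert_equal("v", find("i") * find("r"))
print(find("v"))   # 10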

Relevance: 100.00%

Publisher:

Abstract:

N.J. Lacey and M.H. Lee, 'The Implications of Philosophical Foundations for Knowledge Representation and Learning in Agents', Springer-Verlag Lecture Notes in Artificial Intelligence, Vol. 2636 on Adaptive Agents and Multi-Agent Systems, 2002.

Relevance: 100.00%

Publisher:

Abstract:

Lacey N. and Lee M.H., The Implications of Philosophical Foundations for Knowledge Representation and Learning in Agents, in Proc. AISB'01 Symposium on Adaptive Agents and Multi-agent Systems, York, March 2001, pp. 13-24.

Relevance: 100.00%

Publisher:

Abstract:

X. Fu and Q. Shen. 'Knowledge representation for fuzzy model composition', in Proceedings of the 21st International Workshop on Qualitative Reasoning, 2007, pp. 47-54. Sponsorship: EPSRC

Relevance: 100.00%

Publisher:

Abstract:

Doctoral thesis, Informatics (Bioinformatics), Universidade de Lisboa, Faculdade de Ciências, 2014

Relevance: 100.00%

Publisher:

Abstract:

The goal of this thesis is to estimate the effect of the form of knowledge representation on the efficiency of knowledge sharing. The objectives include the design of an experimental framework that allows this effect to be established, data collection, and statistical analysis of the collected data. The study follows an experimental quantitative design. The experimental questionnaire features three sample forms of knowledge: text, mind maps, and concept maps. In each interview, these forms are presented to an interviewee, after which knowledge sharing time and knowledge sharing quality are measured. According to the statistical analysis of 76 interviews, text performs worse than the visualized forms of knowledge representation in both knowledge sharing time and quality. However, mind maps and concept maps do not differ in knowledge sharing time or quality, as the difference between them is not statistically significant. Since visualized structured forms of knowledge perform better than unstructured text in knowledge sharing, companies are advised to foster the use of these forms in knowledge sharing processes inside the company. Aside from their performance in knowledge sharing, the visualized structured forms are preferable because they can be used in a system of ontological knowledge management within an enterprise.
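
The abstract does not state which statistical tests were used, so the sketch below only illustrates one standard way to compare two representation forms on knowledge sharing time: an independent-samples t-test over made-up measurements. The variable names and numbers are assumptions for the example.

# Hypothetical sharing-time measurements (minutes) for two representation forms.
from scipy import stats

sharing_time_text = [14.2, 16.5, 15.1, 17.8, 13.9, 16.0]
sharing_time_mindmap = [11.0, 12.4, 10.8, 13.1, 11.9, 12.2]

t_stat, p_value = stats.ttest_ind(sharing_time_text, sharing_time_mindmap)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Sharing-time difference is statistically significant at the 5% level.")
else:
    print("No statistically significant difference in sharing time.")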

Relevance: 100.00%

Publisher:

Abstract:

Ontic is an interactive system for developing and verifying mathematics. Ontic's verification mechanism is capable of automatically finding and applying information from a library containing hundreds of mathematical facts. Starting with only the axioms of Zermelo-Fraenkel set theory, the Ontic system has been used to build a data base of definitions and lemmas leading to a proof of the Stone representation theorem for Boolean lattices. The Ontic system has been used to explore issues in knowledge representation, automated deduction, and the automatic use of large data bases.

Relevance: 100.00%

Publisher:

Abstract:

Focussing on Open Data and the need for cleaning data