997 results for knowledge capture
Abstract:
The conceptual design phase is only partially supported by product lifecycle management/computer-aided design (PLM/CAD) systems, causing discontinuity in the design information flow: customer needs — functional requirements — key characteristics — design parameters (DPs) — geometric DPs. To address this issue, a knowledge-based approach is proposed to integrate quality function deployment, failure mode and effects analysis, and axiomatic design into a commercial PLM/CAD system. A case study, the main subject of this article, was carried out to validate the proposed process; to evaluate, through a pilot development, how the commercial PLM/CAD modules and application programming interface could support the information flow; and, based on the pilot results, to propose a full development framework.
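The traceability chain the abstract describes can be sketched with a few hypothetical Python dataclasses (the field names and linkage below are illustrative assumptions, not the article's actual PLM/CAD data model): each downstream design artifact keeps a reference to the upstream artifact it satisfies, so the flow from customer needs to geometric DPs stays connected.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Artifact:
    """One node in the design information flow; `satisfies` points upstream."""
    kind: str          # e.g. "customer need", "functional requirement", ...
    description: str
    satisfies: Optional["Artifact"] = None

    def trace(self) -> str:
        """Walk back from this artifact to the originating customer need."""
        chain, node = [], self
        while node:
            chain.append(f"{node.kind}: {node.description}")
            node = node.satisfies
        return " <- ".join(chain)

need = Artifact("customer need", "easy one-hand opening")
fr   = Artifact("functional requirement", "open with < 10 N force", need)
kc   = Artifact("key characteristic", "hinge torque", fr)
dp   = Artifact("design parameter", "spring stiffness k", kc)
gdp  = Artifact("geometric DP", "spring wire diameter 1.2 mm", dp)

print(gdp.trace())
```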
Abstract:
Despite years of effort in building organisational taxonomies, the potential of ontologies to support knowledge management in complex technical domains is under-exploited. The authors of this chapter present an approach to using rich domain ontologies to support sense-making tasks associated with resolving mechanical issues. Using Semantic Web technologies, the authors have built a framework and a suite of tools which support the whole semantic knowledge lifecycle. These are presented by describing the process of issue resolution for a simulated investigation concerning the failure of bicycle brakes. The work has focused on ensuring that semantic tasks fit in with users' everyday tasks, so as to achieve user acceptance, and on supporting the flexibility required by communities of practice with differing local sub-domains, tasks, and terminology.
Abstract:
The goal of the work reported here is to capture the commonsense knowledge of non-expert human contributors. Achieving this goal will enable more intelligent human-computer interfaces and pave the way for computers to reason about our world. In the domain of natural language processing, it will provide the world knowledge much needed for semantic processing of natural language. To acquire knowledge from contributors not trained in knowledge engineering, I take the following four steps: (i) develop a knowledge representation (KR) model for simple assertions in natural language, (ii) introduce cumulative analogy, a class of nearest-neighbor-based analogical reasoning algorithms over this representation, (iii) argue that cumulative analogy is well suited for knowledge acquisition (KA), based on a theoretical analysis of the effectiveness of KA with this approach, and (iv) test the KR model and the effectiveness of the cumulative analogy algorithms empirically. To investigate the effectiveness of cumulative analogy for KA empirically, Learner, an open-source system for KA by cumulative analogy, has been implemented, deployed, and evaluated. (The site "1001 Questions" is available at http://teach-computers.org/learner.html.) Learner acquires assertion-level knowledge by constructing shallow semantic analogies between a KA topic and its nearest neighbors and posing these analogies as natural language questions to human contributors. Suppose, for example, that based on the knowledge about "newspapers" already present in the knowledge base, Learner judges "newspaper" to be similar to "book" and "magazine." Further suppose that the assertions "books contain information" and "magazines contain information" are also already in the knowledge base. Then Learner will use cumulative analogy from the similar topics to ask humans whether "newspapers contain information." Because similarity between topics is computed based on what is already known about them, Learner exhibits bootstrapping behavior: the quality of its questions improves as it gathers more knowledge. By summing evidence for and against posing any given question, Learner also exhibits noise tolerance, limiting the effect of incorrect similarities. The KA power of shallow semantic analogy from nearest neighbors is one of the main findings of this thesis. I analyze commonsense knowledge collected by another research effort that did not rely on analogical reasoning and demonstrate that there is indeed a sufficient amount of correlation in the knowledge base to motivate using cumulative analogy from nearest neighbors as a KA method. Empirically, the percentages of questions answered affirmatively, answered negatively, and judged nonsensical in the cumulative-analogy case compare favorably with a baseline, no-similarity case that relies on random objects rather than nearest neighbors. Of the questions generated by cumulative analogy, contributors answered 45% affirmatively, 28% negatively, and marked 13% as nonsensical; in the control, no-similarity case, 8% of questions were answered affirmatively, 60% negatively, and 26% were marked as nonsensical.
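The cumulative-analogy loop described above is concrete enough to sketch. Below is a minimal, hypothetical Python illustration (the knowledge-base contents, the Jaccard similarity measure, and the function names are assumptions, not the thesis's actual implementation): topics are compared by the assertions already known about them, and assertions held by neighbors but missing from the target topic become candidate questions, with evidence summed across neighbors for noise tolerance.

```python
from collections import defaultdict

# Hypothetical knowledge base: topic -> set of known assertions.
kb = {
    "book":      {"contains information", "has pages", "can be read"},
    "magazine":  {"contains information", "has pages"},
    "newspaper": {"has pages"},
}

def similarity(a: str, b: str) -> float:
    """Jaccard overlap of known assertions (one plausible measure)."""
    sa, sb = kb[a], kb[b]
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def cumulative_analogy_questions(topic: str, k: int = 2):
    """Project assertions from the k nearest neighbors onto `topic`,
    summing similarity as evidence so that noisy neighbors count less."""
    neighbors = sorted((t for t in kb if t != topic),
                       key=lambda t: similarity(topic, t), reverse=True)[:k]
    evidence = defaultdict(float)
    for n in neighbors:
        for assertion in kb[n] - kb[topic]:
            evidence[assertion] += similarity(topic, n)
    # Highest-evidence assertions become questions for human contributors.
    return sorted(evidence.items(), key=lambda kv: -kv[1])

for assertion, score in cumulative_analogy_questions("newspaper"):
    print(f"Does a newspaper ... {assertion}? (evidence={score:.2f})")
```

Because similarity is recomputed as answers accumulate, the same loop also reproduces the bootstrapping behavior the abstract mentions: more knowledge yields better neighbors, and better neighbors yield better questions.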
Abstract:
Project managers in the construction industry increasingly seek to learn from other industrial sectors. Knowledge sharing between different contexts is thus viewed as an essential source of competitive advantage. It is therefore important for project managers from all sectors to address and develop appropriate methods of knowledge sharing. However, too often it is assumed that knowledge freely exists and can be captured and shared between contexts. Such assumptions belie the complexities and problems awaiting the unsuspecting knowledge-sharing protagonist. Knowledge per se is a problematic, esoteric concept that does not lend itself easily to codification. In particular, the tacit knowledge possessed by individuals presents methodological issues for those considering harnessing its utility in return for competitive advantage. The notion that knowledge is also embedded in specific social contexts compounds this complexity. It is argued that knowledge is highly individualistic and concomitant with the various surrounding contexts within which it is shaped and enacted. Indeed, these contexts are themselves shaped by knowledge, adding further complexity to the problem domain. Current methods of knowledge capture, transfer, and sharing fall short of addressing these problematic issues. Research is presented that addresses these problems and proposes an alternative method of knowledge sharing. Drawing on data and observations collected from its application, the findings clearly demonstrate the crucial role of re-contextualisation, social interaction, and dialectic debate in understanding knowledge sharing.
Abstract:
In large organizations the resources needed to solve challenging problems are typically dispersed over systems within and beyond the organization, and also across different media. However, knowledge environments still lack extraction methods able to combine evidence for a fact from across different media. In many cases the whole is more than the sum of its parts: only when the different media are considered simultaneously can enough evidence be obtained to derive facts otherwise inaccessible to the knowledge worker via traditional methods that work on each medium separately. In this paper, we present a cross-media knowledge extraction framework specifically designed to handle large volumes of documents composed of three types of media (text, images, and raw data) and to exploit the evidence across the media. Our goal is to improve the quality and depth of automatically extracted knowledge.
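The core idea, that evidence from several media can jointly support a fact that no single medium establishes on its own, can be illustrated with a minimal sketch. The extractors, confidence values, pooling rule, and threshold below are assumptions for illustration, not the paper's framework.

```python
# Toy cross-media evidence combination: a fact is accepted only when the
# pooled confidence from per-medium extractors crosses a threshold that
# no single medium reaches by itself.

evidence = {
    # fact -> {medium: extractor confidence}, all values hypothetical
    "plant_A_emits_CO2": {"text": 0.4, "image": 0.35, "raw_data": 0.3},
    "plant_B_closed":    {"text": 0.45},
}

def combined_confidence(per_medium: dict) -> float:
    """Noisy-OR pooling: independent evidence sources reinforce each other."""
    p_none = 1.0
    for conf in per_medium.values():
        p_none *= (1.0 - conf)
    return 1.0 - p_none

for fact, per_medium in evidence.items():
    score = combined_confidence(per_medium)
    print(fact, f"{score:.2f}", "accepted" if score >= 0.6 else "rejected")
```

Here the first fact clears the threshold only because three weak signals from different media agree, which is exactly the "whole is more than the sum of its parts" effect the abstract describes.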
Abstract:
We propose a knowledge fusion architecture, KnoFuss, based on the application of problem-solving methods technology, which allows methods for subtasks of the fusion process to be combined and the best methods to be selected, depending on the domain and task at hand.
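The method-selection idea can be sketched as a registry of candidate methods per fusion subtask, with the best candidate chosen per context. The subtask names, methods, and applicability scoring below are illustrative assumptions, not KnoFuss's actual components.

```python
# Hypothetical registry: fusion subtask -> candidate methods, each paired
# with a function estimating its applicability to the domain/task context.

registry = {
    "coreference": [
        ("string-match",  lambda ctx: 0.9 if ctx["labels_clean"] else 0.3),
        ("attribute-sim", lambda ctx: 0.7),
    ],
    "conflict-resolution": [
        ("trust-source",  lambda ctx: 0.8 if ctx["has_provenance"] else 0.1),
        ("majority-vote", lambda ctx: 0.5),
    ],
}

def select_methods(context: dict) -> dict:
    """Pick the highest-scoring method for each subtask in this context."""
    plan = {}
    for subtask, candidates in registry.items():
        name, _ = max(candidates, key=lambda c: c[1](context))
        plan[subtask] = name
    return plan

print(select_methods({"labels_clean": False, "has_provenance": True}))
# -> {'coreference': 'attribute-sim', 'conflict-resolution': 'trust-source'}
```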
Abstract:
The present work consists of the development and exploration of a knowledge mapping model, presented and tested in an exploratory manner covering both its theoretical operationalization and its practical application. To this end, it approaches knowledge management through three dimensions of management: process control, results control, and leadership. Assuming the inherently personal character of knowledge, and admitting that management cannot command individuals' mental knowledge processes, this study asserts that management can control the organization's information processes, state and monitor objectives, and lead people. The tool developed therefore seeks to represent people's knowledge graphically, whereby it becomes information. It also evaluates individuals' maturity with respect to the identified knowledge and prescribes, according to Situational Leadership Theory, a leadership style matching the respective maturities. The knowledge map proposed here is a graphical representation of the knowledge deemed relevant to reaching the organization's objectives. This enables management by results for two reasons: first, the knowledge items are directly or indirectly connected to an objective; second, the map indicates an importance grade of each identified knowledge item for the stated main objective. The research strategy adopted to explore the knowledge mapping model and test its applicability, while also identifying its possible contributions and limitations, is action research. Following the stages prescribed by this strategy, the knowledge map was applied at Tergus Systems and Consulting, a company that develops management software for hospitals, clinics, and restaurants; more precisely, it was applied in the company's customer support area for hospitals. The empirical evidence showed that the model is applicable with a low level of complexity, but with a high demand on time. The most important contributions relate to the identification of the knowledge relevant to the department's objectives, together with a prescription of knowledge capture and transfer needs, as well as guidance on the appropriate leadership style for each subordinate with respect to the knowledge item in question.
Abstract:
Provenance plays a major role in understanding and reusing the methods applied in a scientific experiment, as it provides a record of inputs, the processes carried out, and the use and generation of intermediate and final results. In the specific case of in-silico scientific experiments, a large variety of scientific workflow systems (e.g., Wings, Taverna, Galaxy, VisTrails) have been created to support scientists. All of these systems produce some sort of provenance about the executions of the workflows that encode scientific experiments. However, provenance is normally recorded at a very low level of detail, which complicates the understanding of what happened during execution. In this paper we propose an approach to automatically obtain abstractions from low-level provenance data by finding common workflow fragments in workflow execution provenance and relating them to templates. We have tested our approach on a dataset of workflows published by the Wings workflow system. Our results show that by using these kinds of abstractions we can highlight the most common abstract methods used in the executions of a repository, relating different runs and workflow templates with each other.
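One simple way to picture "finding common workflow fragments" is frequent contiguous-subsequence mining over runs reduced to their step types. The sketch below is a naive stand-in under that assumption; the run data, thresholds, and function names are hypothetical, not the paper's algorithm.

```python
from collections import Counter

# Hypothetical provenance: each run reduced to its ordered list of step types.
runs = [
    ["download", "clean", "align", "plot"],
    ["download", "clean", "align", "summarize"],
    ["download", "clean", "plot"],
]

def common_fragments(runs, min_len=2, min_support=2):
    """Count contiguous step subsequences across runs; frequent fragments
    are candidate abstractions to relate back to workflow templates."""
    counts = Counter()
    for run in runs:
        seen = set()  # count each fragment at most once per run
        for i in range(len(run)):
            for j in range(i + min_len, len(run) + 1):
                seen.add(tuple(run[i:j]))
        counts.update(seen)
    return {frag: c for frag, c in counts.items() if c >= min_support}

for frag, support in sorted(common_fragments(runs).items(), key=lambda kv: -kv[1]):
    print(" -> ".join(frag), f"(appears in {support} runs)")
```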
Abstract:
The high costs of knowledge acquisition, intensified competition, and the need to get closer to consumers have been driving companies to seek alternative ways of increasing their innovation potential by integrating users. However, the literature and common sense agree that not every user is able to contribute knowledge that sustains the competitive advantage of innovations. In this context emerges the figure of the lead user, who, by definition, is able to sense needs for products and services not yet expressed by regular users. This knowledge, when properly absorbed, brings significant benefits to the companies that incorporate it into new product development (NPD). Given that the forms of user integration vary, this study seeks to understand how companies in different sectors absorb lead users' knowledge through different integration practices. To this end, an embedded multiple case study method was chosen, observed in three large multinationals: Natura, Whirlpool, and Microsoft (Bing). In all, five distinct integration modes were evaluated, chosen from two formations: individual (isolated knowledge from distinct users) and collective (knowledge articulated in group discussions), analyzed through analytic induction with cross-case synthesis. The results showed that the theoretical categories used for the initial observation of the phenomenon (identification parameter and selection technique (acquisition); interaction mechanism (assimilation); socialization mechanisms (transformation); and formalization system (exploitation)) only partially supported the understanding of the process activities and therefore needed to be complemented by emergent categories collected in the empirical phase (context creation and motivation (acquisition); stimuli, observation parameter, and interpretation (assimilation); role definition, process coordination, and knowledge combination (transformation); and knowledge management (exploitation)). This complementation increased the robustness of the initial model and showed how knowledge absorption can be evaluated through the absorptive dimensions. However, the within-case and cross-case analyses that followed showed that this understanding was insufficient to explain absorptive capacity across different practices, since the phenomenon is influenced by contextual factors associated both with the integration practice and with the way each company organizes itself to innovate (type of access to the contributor). The theoretical reflections based on these results contribute to the existing theory in two ways: (i) an extended understanding of the absorption activities required to incorporate lead users' knowledge, and (ii) the proposal of a broad conceptual model encompassing different integration practices while also considering innovation antecedents, absorptive activities, and the adjunct factors inherent to each practice. This research aims to contribute to theoretical knowledge on innovation and to motivate reflections that may be useful to managers and executives interested in improving their practices and processes for capturing lead users' knowledge.
Abstract:
In view of the need to provide tools to facilitate the re-use of existing knowledge structures such as ontologies, we present in this paper a system, AKTiveRank, for the ranking of ontologies. AKTiveRank uses as input the search terms provided by a knowledge engineer and, using the output of an ontology search engine, ranks the ontologies. We apply a number of metrics in an attempt to investigate their appropriateness for ranking ontologies, and compare the results with a questionnaire-based human study. Our results show that AKTiveRank will have great utility although there is potential for improvement.
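Metric-based ontology ranking of this kind reduces to scoring each candidate with a weighted combination of measures computed against the search terms. The sketch below is illustrative only: the two metrics, their weights, and the ontology representation are hypothetical placeholders, not AKTiveRank's actual measures.

```python
# A minimal sketch of metric-based ontology ranking in the spirit of
# AKTiveRank; metric names, weights, and data layout are assumptions.

def class_match(ontology: dict, terms: list) -> float:
    """Fraction of search terms that match class labels (hypothetical)."""
    labels = {c.lower() for c in ontology["classes"]}
    return sum(t.lower() in labels for t in terms) / len(terms)

def density(ontology: dict) -> float:
    """Relations per class, capped at 1.0 (hypothetical)."""
    n = len(ontology["classes"]) or 1
    return min(ontology["relations"] / n, 1.0)

def rank(ontologies, terms, weights=(0.6, 0.4)):
    """Score each candidate as a weighted sum of metrics, highest first."""
    scored = [(weights[0] * class_match(o, terms) + weights[1] * density(o),
               o["uri"]) for o in ontologies]
    return sorted(scored, reverse=True)

candidates = [
    {"uri": "http://example.org/onto1",
     "classes": ["Student", "University"], "relations": 5},
    {"uri": "http://example.org/onto2",
     "classes": ["Project"], "relations": 1},
]
print(rank(candidates, ["student", "university"]))
```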
Abstract:
Because poor-quality semantic metadata can destroy the effectiveness of semantic web technology by hampering applications from producing accurate results, it is important to have frameworks that support its evaluation. However, no such framework has been developed to date. In this context, we propose i) an evaluation reference model, SemRef, which sketches some fundamental principles for evaluating semantic metadata, and ii) an evaluation framework, SemEval, which provides a set of instruments to support the detection of quality problems and the collection of quality metrics for these problems. A preliminary case study of SemEval shows encouraging results.
Abstract:
The realization of the Semantic Web is constrained by a knowledge acquisition bottleneck, i.e. the problem of how to add RDF mark-up to the millions of ordinary web pages that already exist. Information Extraction (IE) has been proposed as a solution to this annotation bottleneck. In the task-based evaluation reported here, we compared the performance of users without access to annotation, users working with annotations produced from manually constructed knowledge bases, and users working with annotations augmented using IE. We looked at retrieval performance, overlap between retrieved items and the two sets of annotations, and usage of annotation options. Automatically generated annotations were found to add value to the browsing experience in the scenario investigated.
Abstract:
We present a new method for term extraction from a domain-relevant corpus using natural language processing, for the purposes of semi-automatic ontology learning. The literature shows that topical words occur in bursts. We find that the ranking of extracted terms is insensitive to the choice of population model, but that calculating frequencies relative to burst size, rather than to document length in words, yields significantly different results.
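The burst-relative idea can be made concrete with a small sketch: score a term by its count within its burst spans rather than against the whole document. The gap-based burst detection below is a naive stand-in for illustration, not the paper's population model, and all names and thresholds are assumptions.

```python
def burst_spans(positions, max_gap=50):
    """Group token positions into bursts separated by gaps > max_gap."""
    spans, start, prev = [], positions[0], positions[0]
    for p in positions[1:]:
        if p - prev > max_gap:
            spans.append((start, prev))
            start = p
        prev = p
    spans.append((start, prev))
    return spans

def burst_relative_freq(tokens, term, max_gap=50):
    """Frequency relative to total burst size instead of len(tokens)."""
    positions = [i for i, t in enumerate(tokens) if t == term]
    if not positions:
        return 0.0
    spans = burst_spans(positions, max_gap)
    burst_size = sum(end - start + 1 for start, end in spans)
    return len(positions) / burst_size

# A bursty term: dense at the start and end, absent in between. Its
# burst-relative frequency is high even though its document-relative
# frequency (5 / 205) is tiny.
tokens = ["ontology"] * 3 + ["the"] * 200 + ["ontology"] * 2
print(burst_relative_freq(tokens, "ontology"))
```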
Abstract:
One of the leading motivations behind the multilingual Semantic Web is to make resources digitally accessible in an online global multilingual context. Consequently, it is fundamental for knowledge bases to manage multilingualism and thus to be equipped with procedures for its conceptual modelling. In this context, the goal of this paper is to discuss how common-sense knowledge and cultural knowledge are modelled in a multilingual framework. More particularly, multilingualism and conceptual modelling are dealt with from the perspective of FunGramKB, a lexico-conceptual knowledge base for natural language understanding. This project argues for a clear division between the lexical and the conceptual dimensions of knowledge. Moreover, the conceptual layer is organized into three modules, which result from a strong commitment to capturing semantic knowledge (Ontology), procedural knowledge (Cognicon), and episodic knowledge (Onomasticon). Cultural mismatches are discussed and formally represented at the three conceptual levels of FunGramKB.