918 results for knowledge construction


Relevance:

40.00%

Publisher:

Abstract:

"October 1959."

Relevance:

40.00%

Publisher:

Abstract:

Ontologies have become a key component in the Semantic Web and knowledge management. One accepted goal is to construct ontologies from a domain-specific set of texts. An ontology reflects the background knowledge used in writing and reading a text. However, a text is an act of knowledge maintenance, in that it reinforces background assumptions, alters links and associations in the ontology, and adds new concepts. This means that background knowledge is rarely expressed in a machine-interpretable manner. When it is, it is usually at the conceptual boundaries of the domain, e.g. in textbooks or when ideas are borrowed into other domains. We argue that a partial solution lies in searching external resources such as specialized glossaries and the internet. We show that a random selection of concept pairs from the Gene Ontology does not occur in a relevant corpus of texts from the journal Nature. In contrast, a significant proportion can be found on the internet. Thus, we conclude that sources external to the domain corpus are necessary for the automatic construction of ontologies.
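
The corpus test described here can be sketched in a few lines. This is a minimal sketch, assuming a locally stored corpus file and sentence-level co-occurrence as the criterion; the file name and the concept pairs are hypothetical, not the authors' setup:

```python
# Sketch of the co-occurrence test: do both terms of an ontology
# concept pair ever appear in the same sentence of the domain corpus?
import re

def sentences(text):
    """Naive sentence splitter; adequate for a rough co-occurrence count."""
    return re.split(r"(?<=[.!?])\s+", text)

def cooccurs(term_a, term_b, corpus_text):
    """True if both terms appear together in at least one sentence."""
    a, b = term_a.lower(), term_b.lower()
    return any(a in s.lower() and b in s.lower() for s in sentences(corpus_text))

# Hypothetical concept pairs sampled from an ontology such as the Gene Ontology.
pairs = [("apoptosis", "caspase activation"),
         ("cell cycle", "DNA replication")]

corpus = open("nature_corpus.txt", encoding="utf-8").read()  # assumed local corpus dump
found = [p for p in pairs if cooccurs(*p, corpus)]
print(f"{len(found)}/{len(pairs)} pairs co-occur in the domain corpus")
# Pairs missing here would then be searched in external resources
# (specialized glossaries, web search), as the abstract proposes.
```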

Relevance:

40.00%

Publisher:

Abstract:

This paper has two objectives: first, to provide a brief review of developments in the sociology of scientific knowledge (SSK); second, to apply an aspect of SSK theorising concerned with the construction of scientific knowledge. The paper reviews the streams of thought that can be identified within SSK and then illustrates the theoretic constructs introduced in the earlier discussion by analysing a particular contribution to the literature on research methodology in accounting and organisation studies. The paper chosen for analysis is titled "Middle Range Thinking". The objective is not to argue that the approach used in that paper is invalid, but to expose the rhetorical nature of the argumentation used by its author.

Relevance:

40.00%

Publisher:

Abstract:

Intranet technologies accessible through a web-based platform are used to share and build knowledge bases in many industries. Previous research suggests that intranets provide a useful means to share, collaborate on and transact information within an organisation. To compete and survive, business organisations must effectively manage the various risks affecting their businesses, and in the construction industry, too, this is becoming an increasingly important element of business planning. The ability of businesses, especially of SMEs, which represent a significant portion of most economies, to manage various risks is often hindered by knowledge fragmented across a large number of businesses. As a solution, this paper argues that intranet technologies can be an effective means of building and sharing knowledge, and of building up effective knowledge bases for risk management in SMEs, specifically considering the risks of extreme weather events. The paper discusses and evaluates relevant literature in this regard and identifies the potential for further research to explore this concept.

Relevance:

40.00%

Publisher:

Abstract:

The Resource Space Model is a data model that can effectively and flexibly manage the digital resources in a cyber-physical system from multidimensional and hierarchical perspectives. This paper focuses on constructing a resource space automatically. We propose a framework that organizes a set of digital resources along different semantic dimensions by combining human background knowledge from WordNet and Wikipedia. The construction process includes four steps: extracting candidate keywords, building semantic graphs, detecting semantic communities and generating the resource space. An unsupervised statistical topic model (Latent Dirichlet Allocation) is applied to extract candidate keywords for the facets. To better interpret the meanings of the facets found by LDA, we map the keywords to Wikipedia concepts, calculate word relatedness using WordNet's noun synsets and construct the corresponding semantic graphs. Semantic communities are then identified by the Girvan-Newman (GN) algorithm. After extracting candidate axes based on the Wikipedia concept hierarchy, the final axes of the resource space are ranked and selected through three different ranking strategies. The experimental results demonstrate that the proposed framework can organize resources automatically and effectively.
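
The first three steps of the pipeline can be condensed into a short sketch. This is illustrative only: the toy documents, the relatedness threshold and topic counts are assumptions, and the Wikipedia concept-mapping and axis-ranking steps are omitted:

```python
# LDA keyword extraction -> semantic graph weighted by WordNet noun-synset
# relatedness -> Girvan-Newman community detection.
# Requires: pip install gensim nltk networkx, plus nltk.download('wordnet').
from gensim import corpora, models
from nltk.corpus import wordnet as wn
import networkx as nx

docs = [["gene", "protein", "expression"],
        ["network", "graph", "community"]]          # toy tokenized resources

# Step 1: candidate keywords via LDA topics.
dictionary = corpora.Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]
lda = models.LdaModel(bow, num_topics=2, id2word=dictionary)
keywords = {w for t in range(2) for w, _ in lda.show_topic(t, topn=5)}

# Step 2: semantic graph, edges weighted by WordNet similarity.
def relatedness(w1, w2):
    sims = [a.path_similarity(b)
            for a in wn.synsets(w1, pos=wn.NOUN)
            for b in wn.synsets(w2, pos=wn.NOUN)]
    sims = [s for s in sims if s is not None]
    return max(sims, default=0.0)

G = nx.Graph()
kw = sorted(keywords)
for i, w1 in enumerate(kw):
    for w2 in kw[i + 1:]:
        r = relatedness(w1, w2)
        if r > 0.2:                                  # assumed threshold
            G.add_edge(w1, w2, weight=r)

# Step 3: semantic communities via the Girvan-Newman algorithm.
if G.number_of_edges():
    communities = next(nx.algorithms.community.girvan_newman(G))
    print([sorted(c) for c in communities])          # candidate facets
```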

Relevance:

40.00%

Publisher:

Abstract:

An approach is developed for extracting knowledge from the information arriving at the knowledge base input, and for distributing the new knowledge over the knowledge subsets already present in the knowledge base. The knowledge must also be transformed into parameters (data) of the model for subsequent decision-making on the given subset. The decision-making is assumed to be realized with the apparatus of fuzzy sets.
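
As a minimal sketch of fuzzy-set decision-making of the kind mentioned here, one common formulation (Bellman-Zadeh style) intersects fuzzy goals by taking the minimum membership over criteria; the triangular membership functions, criteria and values below are illustrative, not from the paper:

```python
def triangular(a, b, c):
    """Membership function peaking at b, zero outside [a, c]."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)
    return mu

# Knowledge transformed into model parameters: one fuzzy goal per criterion.
goals = {"cost": triangular(0, 20, 60), "quality": triangular(40, 90, 101)}

alternatives = {"A": {"cost": 25, "quality": 80},
                "B": {"cost": 15, "quality": 60}}

def decision_value(alt):
    # Fuzzy intersection: minimum membership over all criteria.
    return min(goals[c](v) for c, v in alt.items())

best = max(alternatives, key=lambda k: decision_value(alternatives[k]))
print(best, {k: round(decision_value(v), 2) for k, v in alternatives.items()})
```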

Relevance:

40.00%

Publisher:

Abstract:

In the past decade, systems that extract information from millions of Internet documents have become commonplace. Knowledge graphs -- structured knowledge bases that describe entities, their attributes and the relationships between them -- are a powerful tool for understanding and organizing this vast amount of information. However, a significant obstacle to knowledge graph construction is the unreliability of the extracted information, due to noise and ambiguity in the underlying data, errors made by the extraction system, and the complexity of reasoning about the dependencies between these noisy extractions. My dissertation addresses these challenges by exploiting the interdependencies between facts to improve the quality of the knowledge graph in a scalable framework. I introduce a new approach called knowledge graph identification (KGI), which resolves the entities, attributes and relationships in the knowledge graph by incorporating uncertain extractions from multiple sources, entity co-references, and ontological constraints. I define a probability distribution over possible knowledge graphs and infer the most probable knowledge graph using a combination of probabilistic and logical reasoning. Such probabilistic models are frequently dismissed due to scalability concerns, but my implementation of KGI maintains tractable performance on large problems through the use of hinge-loss Markov random fields, which have a convex inference objective. This allows the inference of large knowledge graphs with 4M facts and 20M ground constraints in 2 hours. To further scale the solution, I develop a distributed approach to the KGI problem which runs in parallel across multiple machines, reducing inference time by 90%. Finally, I extend my model to the streaming setting, where a knowledge graph is continuously updated by incorporating newly extracted facts. I devise a general approach for approximately updating inference in convex probabilistic models, and quantify the approximation error by defining and bounding inference regret for online models. Together, my work retains the attractive features of probabilistic models while providing the scalability necessary for large-scale knowledge graph construction. These models have been applied to a number of real-world knowledge graph projects, including the NELL project at Carnegie Mellon and the Google Knowledge Graph.
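
A toy sketch of the hinge-loss MRF inference idea: truth values live in [0, 1], each rule contributes a hinge-loss potential, and because the objective is convex a bounded solver finds the MAP state. The example rules, weights and confidences are illustrative, not the dissertation's actual model:

```python
# Two candidate labels for one entity, with an ontological mutual exclusion.
import numpy as np
from scipy.optimize import minimize

# x[0]: Lbl(kiwi, Bird), x[1]: Lbl(kiwi, Fruit)
extractor_conf = np.array([0.9, 0.4])   # noisy extraction confidences
w_extract, w_mutex = 1.0, 5.0           # assumed rule weights

def objective(x):
    # Extractions support their labels: hinge on (confidence - truth value).
    support = w_extract * np.maximum(0.0, extractor_conf - x).sum()
    # Ontological constraint: Bird and Fruit should not both be true.
    mutex = w_mutex * max(0.0, x[0] + x[1] - 1.0)
    return support + mutex

res = minimize(objective, x0=[0.5, 0.5], bounds=[(0, 1), (0, 1)],
               method="L-BFGS-B")
print(np.round(res.x, 2))   # Bird stays high; Fruit is pushed down
```

Because every potential is a hinge of a linear expression, the objective stays convex no matter how many ground rules are added, which is what makes inference over millions of facts tractable.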

Relevance:

30.00%

Publisher:

Abstract:

This is an analysis of the theoretical and practical construction of the methodology of Matrix Support, by means of studies on Paideia Support (Institutional and Matrix Support), an inter-professional approach to joint care described in recent literature and in official documents of the Unified Health System (SUS). An attempt was made to describe its methodological concepts and strategies. A comparative analysis of Institutional Support and Matrix Support was also conducted using the epistemological framework of Field and Core Knowledge and Practices.

Relevance:

30.00%

Publisher:

Abstract:

We describe two ways of optimizing score functions for protein sequence-to-structure threading. The first method adjusts parameters to improve sequence-to-structure alignment. The second adjusts parameters so as to improve a score function's ability to rank alignments calculated with the first score function. Unlike the functions known as knowledge-based force fields, the resulting parameter sets do not rely on Boltzmann statistics, have no claim to representing free energies, and are purely constructions for recognizing protein folds. The methods give a small improvement, but suggest that functions can be profitably optimized for very specific aspects of protein fold recognition.
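
A schematic sketch of the second idea, ranking-driven parameter tuning: adjust linear score weights until the correct alignment outscores an alternative. The feature vectors and the perceptron-style update are illustrative assumptions, not the authors' method:

```python
import numpy as np

def score(weights, features):
    """Linear score: weighted sum of alignment feature counts."""
    return weights @ features

# Hypothetical feature vectors (e.g. contact, burial, gap terms) for the
# native alignment and a decoy alignment of the same sequence.
native = np.array([3.0, 1.0, 0.5])
decoy = np.array([1.0, 2.0, 2.0])

w = np.zeros(3)
for _ in range(100):
    if score(w, native) <= score(w, decoy):      # ranking violated
        w += 0.1 * (native - decoy)              # nudge toward correct ranking
print(np.round(w, 2))
```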

Relevance:

30.00%

Publisher:

Abstract:

Background: Microarray transcript profiling has the potential to illuminate the molecular processes involved in the responses of cattle to disease challenges. This knowledge may allow the development of strategies that exploit these genes to enhance resistance to disease in an individual or an animal population. Results: The Bovine Innate Immune Microarray developed in this study consists of 1480 characterised genes identified by literature searches, 31 positive and negative control elements, and 5376 cDNAs derived from subtracted and normalised libraries. The cDNA libraries were produced from 'challenged' bovine epithelial and leukocyte cells. The microarray was found to have a limit of detection of 1 pg/µg of total RNA and a mean slide-to-slide correlation coefficient of 0.88. The profiles of differentially expressed genes from Concanavalin A (ConA) stimulated bovine peripheral blood lymphocytes were determined. Three distinct profiles highlighted 19 genes that were rapidly up-regulated within 30 minutes and returned to basal levels by 24 h; 76 genes that were up-regulated between 2 and 8 hours and sustained high levels of expression until 24 h; and 10 genes that were down-regulated. Quantitative real-time RT-PCR on selected genes was used to confirm the results of the microarray analysis. The results indicate a dynamic process involving gene activation and regulatory mechanisms re-establishing homeostasis in the ConA-activated lymphocytes. The Bovine Innate Immune Microarray was also used to determine its cross-species hybridisation capabilities with an ovine PBL sample. Conclusion: A Bovine Innate Immune Microarray has been developed which contains a set of well-characterised genes and anonymous cDNAs from a number of different bovine cell types. The microarray can be used to determine the gene expression profiles underlying innate immune responses in cattle and sheep.
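
Grouping genes into the three time-course patterns reported here is typically done by clustering expression profiles. A minimal sketch, assuming log-ratio profiles over the sampled time points and simple k-means; the data rows are invented placeholders, not the study's measurements:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

# Rows: genes; columns: log2 expression ratios at 0.5, 2, 8, 24 h post-ConA.
profiles = np.array([
    [2.5, 0.4, 0.1, 0.0],     # rapid, transient up-regulation
    [0.2, 1.8, 2.0, 1.9],     # later, sustained up-regulation
    [-0.1, -1.2, -1.5, -1.4], # down-regulation
    # ... thousands more rows in a real experiment
])

centroids, labels = kmeans2(profiles, 3, minit="points", seed=42)
for cluster in range(3):
    print(f"cluster {cluster}: genes {np.where(labels == cluster)[0]}")
```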

Relevance:

30.00%

Publisher:

Abstract:

In the design of lattice domes, design engineers need expertise in areas such as configuration processing, nonlinear analysis, and optimization. These are extensive numerical, iterative, and time-consuming processes that are prone to error without an integrated design tool. This article presents the application of a knowledge-based system to solving lattice-dome design problems. An operational prototype knowledge-based system, LADOME, has been developed using a combined knowledge-representation approach that employs rules, procedural methods, and an object-oriented blackboard concept. The system's objective is to assist engineers in lattice-dome design by integrating all design tasks into a single computer-aided environment through a knowledge-based system approach. For system verification, results from design examples are presented.
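
The blackboard concept named here can be sketched compactly: knowledge sources fire rules against a shared workspace until no new facts are posted. This is a minimal sketch of the pattern only; the names and the stand-in formulas are hypothetical and do not reflect LADOME's actual rules:

```python
class Blackboard(dict):
    """Shared workspace that design knowledge sources read and write."""

def configure(bb):
    if "span" in bb and "geometry" not in bb:
        bb["geometry"] = f"lattice dome, span {bb['span']} m"   # configuration step

def analyse(bb):
    if "geometry" in bb and "max_stress" not in bb:
        bb["max_stress"] = 0.8 * bb["span"]       # stand-in for nonlinear analysis

def optimise(bb):
    if "max_stress" in bb and "member_size" not in bb:
        bb["member_size"] = bb["max_stress"] / 100.0   # stand-in for optimization

bb = Blackboard(span=60)
sources = [configure, analyse, optimise]
# Control loop: let each knowledge source fire until the board stabilizes.
changed = True
while changed:
    before = dict(bb)
    for ks in sources:
        ks(bb)
    changed = dict(bb) != before
print(bb)
```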