995 results for dynamic ontology
Abstract:
Ontologies have become a key component in the Semantic Web and knowledge management. One accepted goal is to construct ontologies from a domain-specific set of texts. An ontology reflects the background knowledge used in writing and reading a text. However, a text is an act of knowledge maintenance, in that it reinforces background assumptions, alters links and associations in the ontology, and adds new concepts. This means that background knowledge is rarely expressed in a machine-interpretable manner. When it is, it is usually at the conceptual boundaries of the domain, e.g. in textbooks or when ideas are borrowed into other domains. We argue that a partial solution to this lies in searching external resources such as specialized glossaries and the internet. We show that a random selection of concept pairs from the Gene Ontology does not occur in a relevant corpus of texts from the journal Nature. In contrast, a significant proportion can be found on the internet. We therefore conclude that sources external to the domain corpus are necessary for the automatic construction of ontologies.
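The corpus test described here is straightforward to prototype. Below is a minimal Python sketch of checking whether concept pairs co-occur in a document collection; the corpus and concept pairs are illustrative placeholders, not the paper's actual Gene Ontology sample or Nature corpus.

```python
# Hedged sketch: substring co-occurrence of concept-pair labels in a corpus.
# The documents and pairs below are toy placeholders.

def cooccurs(pair, documents):
    """True if both concept labels appear together in at least one document."""
    a, b = (label.lower() for label in pair)
    return any(a in doc and b in doc for doc in (d.lower() for d in documents))

corpus = [
    "Mitochondrial transport requires carrier proteins in the inner membrane.",
    "Regulation of transcription is central to the yeast stress response.",
]
concept_pairs = [
    ("mitochondrial transport", "membrane"),
    ("apoptosis", "cell cycle arrest"),
]

found = [p for p in concept_pairs if cooccurs(p, corpus)]
print(f"{len(found)}/{len(concept_pairs)} pairs co-occur in the corpus")
```

The same predicate run against web search hit counts, rather than a fixed corpus, would reproduce the paper's second comparison.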
Abstract:
The fundamental failure of current approaches to ontology learning is to view it as a single pipeline with one or more specific inputs and a single static output. In this paper, we present a novel approach to ontology learning that takes an iterative view of knowledge acquisition for ontologies. Our approach is founded on three open-ended resources: a set of texts, a set of learning patterns, and a set of ontological triples; the system seeks to maintain these in equilibrium. As events occur that disturb this equilibrium, actions are triggered to re-establish a balance between the resources. We present a gold-standard evaluation of the system's final output, intermediate output showing the iterative process, and a comparison of performance using different seed inputs. The results are comparable to existing performance reported in the literature.
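The equilibrium-driven loop lends itself to a compact illustration. The following Python sketch is a toy rendering of that control flow, not the paper's actual system: `knowledge_gap` is a simplistic stand-in for the real trigger conditions, and the placeholder triple insertion takes the place of pattern-based extraction.

```python
# Toy sketch of an equilibrium-seeking acquisition loop: acquire knowledge
# until the texts and the ontological triples are in balance.

def knowledge_gap(texts, triples):
    """Concepts mentioned in the texts but absent from the ontology triples."""
    mentioned = {w for t in texts for w in t.lower().split()}
    known = {s for s, _, _ in triples} | {o for _, _, o in triples}
    return mentioned - known

def learn(texts, patterns, triples, max_iterations=100):
    for _ in range(max_iterations):
        gap = knowledge_gap(texts, triples)
        if not gap:
            break                              # resources are in equilibrium
        concept = sorted(gap)[0]
        # The real system would apply a learning pattern from `patterns` to
        # extract a relation; a placeholder triple restores the balance here.
        triples.add((concept, "related_to", "thing"))
    return triples

triples = learn(["ontology learning uses patterns"], patterns=set(), triples=set())
print(len(triples), "triples acquired")
```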
Abstract:
Companies face the challenges of expanding their markets, improving their products, services, and processes, and exploiting intellectual capital in a dynamic network. As a result, more companies are turning to Enterprise Systems (ES). Knowledge management (KM) has also received considerable attention and continues to gain interest from industry, enterprises, and academia. For ES, KM can provide support across the entire lifecycle, from selection and implementation to use. It is also recognised that an ontology is an appropriate methodology for reaching a common consensus on communication, as well as for supporting a diversity of KM activities, such as knowledge repositories, retrieval, sharing, and dissemination. This paper examines the role of ontology-based KM for ES (OKES) and investigates the possible integration of ontology-based KM and ES. The authors develop a taxonomy as a framework for understanding OKES research. To achieve this objective, a systematic review of existing research was conducted and, based on a theoretical framework spanning the ES lifecycle, KM, KM for ES, ontology, and ontology-based KM, a taxonomy for OKES is established.
Abstract:
Ontologies play a core role in providing shared knowledge models to the semantics-driven applications targeted by the Semantic Web. Ontology metrics have become an important area because they help ontology engineers assess ontologies, better control the management and development of ontology-based systems, and thereby reduce the risk of project failure. In this paper, we propose a set of ontology cohesion metrics that focus on measuring (possibly inconsistent) ontologies in the context of a dynamic and changing Web: the Number of Ontology Partitions (NOP), the Number of Minimally Inconsistent Subsets (NMIS), and the Average Value of Axiom Inconsistencies (AVAI). These metrics measure ontological semantics rather than ontological structure. They are theoretically validated to ensure their soundness, and further empirically validated on a standard test set of debugging ontologies. Algorithms for computing the metrics are also discussed. The proposed metrics can serve as a useful complement to existing ontology cohesion metrics.
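For intuition, a brute-force rendering of two of these metrics fits in a few lines. The sketch below assumes a black-box consistency checker (a real implementation would call a description logic reasoner such as Pellet or HermiT) and toy string axioms; the reading of AVAI as "average minimal inconsistent subsets per axiom" is our interpretation of the abstract, and NOP, computable as the number of connected components of the ontology's axiom graph, is omitted.

```python
# Hedged sketch of NMIS and AVAI over a toy knowledge base. Assumes
# inconsistency is monotonic: every superset of an inconsistent set is
# inconsistent, so minimality only needs checking one size down.
from itertools import combinations

def minimally_inconsistent_subsets(axioms, is_consistent):
    """Naive enumeration of inconsistent subsets whose proper subsets are
    all consistent. Exponential; only viable for tiny ontologies."""
    mis = []
    for r in range(1, len(axioms) + 1):
        for subset in combinations(axioms, r):
            if not is_consistent(subset) and \
               all(is_consistent(s) for s in combinations(subset, r - 1)):
                mis.append(set(subset))
    return mis

def avai(axioms, mis):
    """Average, over axioms, of how many minimal inconsistent subsets
    each axiom appears in (one plausible reading of AVAI)."""
    return sum(sum(1 for m in mis if a in m) for a in axioms) / len(axioms)

# Toy stand-in: any subset containing both axioms below is inconsistent.
axioms = ["A subClassOf B", "A disjointWith B", "C subClassOf D"]
is_consistent = lambda s: not {"A subClassOf B", "A disjointWith B"} <= set(s)

mis = minimally_inconsistent_subsets(axioms, is_consistent)
print("NMIS =", len(mis), " AVAI =", avai(axioms, mis))
```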
Abstract:
As the application requirements of enterprises continuously change, Web resources must be updated, and so must the underlying ontologies associated with them. In this situation, it is very challenging for ontology engineers to specify ontology changes, maintain consistency, and answer semantic queries over Web resources based on the evolving ontologies. We propose a construct called a Prioritized Knowledge Base (PKB), based on the SHOQ(D) description logic, and discuss some of its properties. A PKB can be used to describe the evolution and update of ontologies in the presence of conflicting information. Furthermore, we develop algorithms for checking conflict rules and performing semantic queries based on the PKB.
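The priority idea itself is simple to illustrate. In the hypothetical Python sketch below, each axiom carries a priority, and the paper's SHOQ(D)-based conflict rules are replaced by an explicit list of conflicting pairs.

```python
# Toy prioritized knowledge base: when two axioms conflict, the one with
# the higher priority (lower number) survives. The axioms, priorities, and
# conflict list are all invented for illustration.

def resolve(pkb, conflicts):
    """pkb: list of (priority, axiom). Drop the lower-priority axiom of
    each conflicting pair."""
    dropped = set()
    for a, b in conflicts:
        pa = next(p for p, ax in pkb if ax == a)
        pb = next(p for p, ax in pkb if ax == b)
        dropped.add(b if pa <= pb else a)
    return [(p, ax) for p, ax in pkb if ax not in dropped]

pkb = [(1, "Product hasPrice xsd:decimal"),
       (2, "Product hasPrice xsd:string")]   # older, conflicting axiom
conflicts = [("Product hasPrice xsd:decimal", "Product hasPrice xsd:string")]
print(resolve(pkb, conflicts))               # the priority-1 axiom wins
```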
Abstract:
Background: In recent years, various types of cellular networks have penetrated biology and are now used pervasively for studying both eukaryote and prokaryote organisms. Still, the relation and the biological overlap between phenomenological and inferential gene networks, e.g., between the protein interaction network and the gene regulatory network inferred from large-scale transcriptomic data, is largely unexplored.
Results: In this study we provide an in-depth analysis of the structural, functional, and chromosomal relationships between a protein-protein interaction network, a transcriptional regulatory network, and an inferred gene regulatory network, for S. cerevisiae and E. coli. Further, we study global and local aspects of these networks and their biological information overlap by comparing, e.g., the functional co-occurrence of Gene Ontology terms, exploiting the available interaction structure among the genes.
Conclusions: Although the individual networks represent different levels of cellular interaction, with global structural and functional dissimilarities, we observe crucial functions of their network interfaces for the assembly of protein complexes, proteolysis, transcription, translation, and metabolic and regulatory interactions. Overall, our results shed light on the integrability of these networks and the biological processes at their interfaces.
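One of the comparisons described above, functional co-occurrence of Gene Ontology terms across interacting genes, reduces to a simple set computation. The sketch below uses toy gene annotations and edge lists; the study's actual networks and annotation sources are of course far larger.

```python
# Hedged sketch: for each network, the fraction of interacting gene pairs
# that share at least one GO annotation. All data below are placeholders.

go = {"g1": {"GO:0006412"}, "g2": {"GO:0006412", "GO:0006351"},
      "g3": {"GO:0006351"}, "g4": set()}

def functional_cooccurrence(edges, annotations):
    """Share-rate among edges whose endpoints are both annotated."""
    annotated = [(a, b) for a, b in edges if annotations[a] and annotations[b]]
    shared = sum(1 for a, b in annotated if annotations[a] & annotations[b])
    return shared / len(annotated) if annotated else 0.0

ppi_edges = [("g1", "g2"), ("g2", "g3")]
inferred_edges = [("g1", "g3"), ("g2", "g4")]
print("PPI      :", functional_cooccurrence(ppi_edges, go))
print("inferred :", functional_cooccurrence(inferred_edges, go))
```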
Abstract:
There is still a lack of effective paradigms and tools for analysing and discovering the content and relationships of project knowledge contexts in the field of project management. In this paper, a new framework for extracting and representing project knowledge contexts using topic models and dynamic knowledge maps in big-data environments is proposed and developed. The conceptual paradigm, theoretical underpinning, extended topic model, and illustrative examples of the ontology model for project knowledge maps are presented, and further research directions are outlined.
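As a rough illustration of the topic-model step only, the sketch below fits a plain LDA model to toy project documents using gensim; the paper's extended topic model and big-data pipeline are not reproduced here, and the corpus is a placeholder.

```python
# Hedged sketch: latent topics in project documents as candidate knowledge
# contexts. gensim's standard LDA stands in for the paper's extended model.
from gensim import corpora
from gensim.models import LdaModel

docs = [
    "schedule risk milestone delay contractor".split(),
    "budget cost estimate overrun finance".split(),
    "schedule milestone budget cost review".split(),
]
dictionary = corpora.Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]

lda = LdaModel(bow, num_topics=2, id2word=dictionary, passes=20, random_state=0)
for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)   # each topic: a weighted word list, i.e. a context
```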
Abstract:
Context awareness, dynamic reconfiguration at runtime, and heterogeneity are key characteristics of future distributed systems, particularly in ubiquitous and mobile computing scenarios. The main contributions of this dissertation are theoretical as well as architectural concepts that facilitate information exchange and fusion in heterogeneous and dynamic distributed environments. Our main focus is on bridging heterogeneity while, at the same time, accounting for uncertain, imprecise, and unreliable sensor information in information fusion and reasoning. A domain ontology is used to establish a common vocabulary for the exchanged information. We explicitly support different representations for the same kind of information and provide Inter-Representation Operations that convert between them. Special account is taken of the conversion of associated metadata expressing uncertainty and imprecision. The Unscented Transformation, for example, is applied to propagate Gaussian normal distributions across highly non-linear Inter-Representation Operations. Uncertain sensor information is fused using the Dempster-Shafer Theory of Evidence, as it allows explicit modelling of partial and complete ignorance. We also show how to incorporate the Dempster-Shafer Theory of Evidence into probabilistic reasoning schemes such as Hidden Markov Models, in order to account for the uncertainty of sensor information when deriving high-level information from low-level data. For all of these concepts we provide architectural support as a guideline for developers of innovative information exchange and fusion infrastructures targeted at heterogeneous dynamic environments. Two case studies serve as proof of concept. The first focuses on heterogeneous autonomous robots that must spontaneously form a cooperative team to achieve a common goal. The second concerns user activity recognition, which serves as the baseline for a context-aware adaptive application. Both case studies demonstrate the viability and strengths of the proposed solution and emphasize that the Dempster-Shafer Theory of Evidence should be preferred to pure probability theory in applications involving non-linear Inter-Representation Operations.
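The fusion step at the heart of this work, Dempster's rule of combination, is compact enough to sketch directly. The mass functions below (an accelerometer and a gyroscope voting on a user's activity) are invented for illustration; only the combination rule itself is standard.

```python
# Dempster's rule of combination over mass functions. A mass function maps
# frozensets of hypotheses to belief mass; frozenset unions model ignorance.
from itertools import product

def combine(m1, m2):
    """Intersect focal elements, accumulate mass, renormalize by 1 - conflict."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

WALK, SIT = frozenset({"walk"}), frozenset({"sit"})
EITHER = WALK | SIT                       # explicit (partial) ignorance
accel = {WALK: 0.6, EITHER: 0.4}          # accelerometer: fairly sure of "walk"
gyro  = {WALK: 0.3, SIT: 0.3, EITHER: 0.4}
print(combine(accel, gyro))               # mass on "walk" rises to ~0.66
```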
Abstract:
Many online services access a large number of autonomous data sources while needing to meet diverse user requirements. It is essential for these services to achieve semantic interoperability among the entities exchanging information. With a growing number of proprietary business processes, heterogeneous data standards, and diverse user requirements, it is critical that such services be implemented with adaptable, extensible, and scalable technology. The COntext INterchange (COIN) approach, inspired by goals similar to those of the Semantic Web, provides a robust solution. In this paper, we describe how COIN can be used to implement dynamic online services in which semantic differences are reconciled on the fly. We show that COIN is flexible and scalable by comparing it with several conventional approaches. For a given ontology, the number of conversions in COIN is quadratic in the number of distinctions of the semantic aspect with the most distinctions. These semantic aspects are modeled as modifiers in a conceptual ontology, and in most cases the number of conversions is linear in the number of modifiers, which is significantly smaller than in the traditional hard-wired middleware approach, where the number of conversion programs is quadratic in the number of sources and data receivers. In the example scenario in the paper, the COIN approach needs only 5 conversions to be defined, while traditional approaches require 20,000 to 100 million. COIN achieves this scalability by automatically composing all the comprehensive conversions from a small number of declaratively defined sub-conversions.
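The scalability claim can be made concrete with back-of-the-envelope arithmetic. The Python sketch below encodes one plausible reading of the abstract's counting argument; COIN's exact formulas may differ.

```python
# Hedged counting sketch: declarative sub-conversions per modifier in a
# COIN-style design vs. one hand-written program per source-receiver pair.

def coin_conversions(distinctions_per_modifier):
    # One sub-conversion per ordered pair of distinctions, summed over
    # modifiers: linear in modifiers, quadratic in distinctions.
    return sum(n * (n - 1) for n in distinctions_per_modifier)

def hardwired_conversions(sources, receivers):
    # One dedicated conversion program per (source, receiver) pair.
    return sources * receivers

print(coin_conversions([2, 3]))          # 2 + 6 = 8 sub-conversions
print(hardwired_conversions(200, 100))   # 20,000 dedicated programs
```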
Abstract:
Semantic Web technologies offer a promising framework for the integration of disparate biomedical data. In this paper we present the semantic information integration platform under development at the Center for Clinical and Translational Sciences (CCTS) at the University of Texas Health Science Center at Houston (UTHSC-H) as part of our Clinical and Translational Science Award (CTSA) program. We use Semantic Web technologies not only for integrating, repurposing, and classifying multi-source clinical data, but also for constructing a distributed environment for online information sharing and collaboration. A Service-Oriented Architecture (SOA) is used to modularize and distribute reusable services in a dynamic and distributed environment. The components of the semantic solution and its overall architecture are described.
Abstract:
The term secretome has been defined as a set of secreted proteins (Grimmond et al. [2003] Genome Res 13:1350-1359). The term secreted protein encompasses all proteins exported from the cell, including growth factors, extracellular proteinases, morphogens, and extracellular matrix molecules. Defining the genes encoding secreted proteins whose expression changes during organogenesis, the dynamic secretome, is likely to point to key drivers of morphogenesis. Such secreted proteins are involved in the reciprocal interactions between the ureteric bud (UB) and the metanephric mesenchyme (MM) that occur during organogenesis of the metanephros. Some key metanephric secreted proteins have been identified, but many remain to be determined. In this study, microarray expression profiling of E10.5, E11.5, and E13.5 kidneys and consensus bioinformatic analysis were used to define a dynamic secretome of early metanephric development. In situ hybridisation was used to confirm the microarray results and clarify spatial expression patterns for these genes. Forty-one secreted factors were dynamically expressed within the E10.5 to E13.5 timeframe profiled, and 25 of these had not previously been implicated in kidney development. A text-based anatomical ontology was used to spatially annotate the expression patterns of these genes in cultured metanephric explants.
Abstract:
In this paper we present a new approach to ontology learning, based on a dynamic and iterative view of knowledge acquisition for ontologies. The Abraxas approach is founded on three resources: a set of texts, a set of learning patterns, and a set of ontological triples, which the system must keep in equilibrium. As events occur that disturb this equilibrium, various actions are triggered to re-establish a balance between the resources. Such events include the acquisition of a further text from external resources such as the Web, or the addition of ontological triples to the ontology. We develop the concept of a knowledge gap, between the coverage of an ontology and the corpus of texts, as a measure that triggers actions. We present an overview of the algorithm and its functionality.
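The knowledge-gap measure can be caricatured in a few lines: terms prominent in the corpus but absent from the ontology signal a gap. The tokenization, frequency threshold, and lack of stop-word filtering below are deliberate simplifications of whatever Abraxas actually computes.

```python
# Hedged sketch of a corpus-vs-ontology knowledge gap. A real system would
# lemmatize, filter stop words, and extract multi-word terms.
from collections import Counter

def knowledge_gap(corpus, ontology_concepts, min_freq=2):
    counts = Counter(w for doc in corpus for w in doc.lower().split())
    frequent = {w for w, c in counts.items() if c >= min_freq}
    return frequent - ontology_concepts

corpus = ["the gene regulates the protein",
          "the protein binds the receptor gene"]
print(knowledge_gap(corpus, ontology_concepts={"gene"}))
# -> {'protein', 'the'}; the stop word shows why real systems filter tokens
```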
Abstract:
Software architecture plays an essential role in the high-level description of a system design, where structure and communication are emphasized. Despite its importance in the software engineering process, the lack of formal description and automated verification hinders the development of good software architecture models. In this paper, we present an approach that supports the rigorous design and verification of software architecture models using Semantic Web technology. We view software architecture models as ontology representations, in which their structures and communication constraints are captured by the Web Ontology Language (OWL) and the Semantic Web Rule Language (SWRL). Specific configurations of a design are represented as concrete instances of the ontology and must conform to its structural and behavioral constraints. Furthermore, ontology reasoning tools can be applied to perform various automated verification tasks on the design to ensure correctness, such as consistency checking, style recognition, and behavioral inference.
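The kind of constraint the OWL/SWRL encoding captures can be mimicked in plain Python for intuition. The layered-style rule below is an invented example; the paper's approach would express it as ontology axioms and SWRL rules and delegate the check to a reasoner.

```python
# Hedged sketch of an architecture consistency check: in a strict layered
# style, a component may only call components in the layer directly below.
# The component model and rule are illustrative, not the paper's ontology.

layers = {"ui": 3, "logic": 2, "storage": 1}
connectors = [("ui", "logic"), ("logic", "storage"), ("ui", "storage")]

def violations(layers, connectors):
    """Return connectors that break the layered-architecture rule."""
    return [(a, b) for a, b in connectors if layers[a] - layers[b] != 1]

print(violations(layers, connectors))   # [('ui', 'storage')] skips a layer
```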