76 results for Semantic Web Technologies


Relevance: 80.00%

Abstract:

Automated ontology population using information extraction algorithms can produce inconsistent knowledge bases. Confidence values assigned by the extraction algorithms may serve as evidence in helping to repair inconsistencies. The Dempster-Shafer theory of evidence is a formalism that allows an appropriate interpretation of extractors' confidence values. This chapter presents an algorithm for translating the subontologies containing conflicts into belief propagation networks and repairing conflicts based on Dempster-Shafer plausibility.
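As a purely illustrative aside, Dempster's rule of combination - the operation underlying the plausibility values the chapter relies on - can be sketched in a few lines of Python; the frame of discernment and the two extractors' mass assignments below are invented for the example.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    A mass function maps frozensets of hypotheses (subsets of the frame
    of discernment) to belief mass; the empty set carries no mass."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass falling on contradictory intersections
    if conflict >= 1.0:
        raise ValueError("sources are completely conflicting")
    # Normalise by the non-conflicting mass
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

def plausibility(mass, hypothesis):
    """Pl(H) = sum of the masses of all sets intersecting H."""
    return sum(v for s, v in mass.items() if s & hypothesis)

# Two extractors disagree on whether an extracted entity is a Person or an Organisation.
frame = frozenset({"Person", "Organisation"})
m_extractor1 = {frozenset({"Person"}): 0.8, frame: 0.2}
m_extractor2 = {frozenset({"Organisation"}): 0.6, frame: 0.4}

m = combine(m_extractor1, m_extractor2)
print(plausibility(m, frozenset({"Person"})))
```

After normalising away the conflicting mass, the combined plausibility quantifies how much evidence remains for each of the competing hypotheses.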

Relevance: 80.00%

Abstract:

This article characterizes key weaknesses in the ability of current digital libraries to support scholarly inquiry and, as a way to address these, proposes computational services grounded in semiformal models of the naturalistic argumentation commonly found in research literatures. It is argued that a design priority is to balance formal expressiveness with usability, making it critical to coevolve the modeling scheme with appropriate user interfaces for argument construction and analysis. We specify the requirements for an argument modeling scheme for use by untrained researchers and describe the resulting ontology, contrasting it with other domain modeling and semantic web approaches, before discussing passive and intelligent user interfaces designed to support analysts in the construction, navigation, and analysis of scholarly argument structures in a Web-based environment. © 2007 Wiley Periodicals, Inc. Int J Int Syst 22: 17–47, 2007.

Relevance: 80.00%

Abstract:

The realization of the Semantic Web is constrained by a knowledge acquisition bottleneck, i.e. the problem of how to add RDF mark-up to the millions of ordinary web pages that already exist. Information Extraction (IE) has been proposed as a solution to the annotation bottleneck. In the task-based evaluation reported here, we compared the performance of users without access to annotation, users working with annotations which had been produced from manually constructed knowledge bases, and users working with annotations augmented using IE. We looked at retrieval performance, overlap between retrieved items and the two sets of annotations, and usage of annotation options. Automatically generated annotations were found to add value to the browsing experience in the scenario investigated. Copyright 2005 ACM.

Relevance: 80.00%

Abstract:

Yorick Wilks is a central figure in the fields of Natural Language Processing and Artificial Intelligence. His influence extends to many areas of these fields and includes contributions to Machine Translation, word sense disambiguation, dialogue modeling and Information Extraction. This book celebrates the work of Yorick Wilks from the perspective of his peers. It consists of original chapters, each of which analyses an aspect of his work and links it to current thinking in that area. His work has spanned over four decades but is shown to be pertinent to recent developments in language processing such as the Semantic Web. This volume forms a two-part set together with Words and Intelligence I, Selected Works by Yorick Wilks, by the same editors.

Relevance: 80.00%

Abstract:

The performance of a supply chain depends critically on the coordinating actions and decisions undertaken by the trading partners. The sharing of product and process information plays a central role in this coordination and is a key driver for the success of the supply chain. In this paper we propose the concept of "linked pedigrees" - linked datasets that enable the sharing of traceability information about products as they move along the supply chain. We present a distributed, decentralised, linked data driven architecture that consumes real-time supply chain linked data to generate linked pedigrees. We then present a communication protocol to enable the exchange of linked pedigrees among trading partners. We exemplify the utility of linked pedigrees by illustrating examples from the perishable goods logistics supply chain.
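The pedigree model itself is not given in the abstract; the rdflib sketch below only illustrates the general shape of a linked pedigree - a small RDF dataset listing product EPCs and pointing back to the pedigree received from the upstream partner - using a hypothetical ex: vocabulary.

```python
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF

EX = Namespace("http://example.org/pedigree#")   # hypothetical vocabulary

def build_pedigree(pedigree_uri, epcs, previous_pedigree=None):
    """Build a pedigree graph listing the EPCs of the shipped products
    and linking back to the pedigree received from the upstream partner."""
    g = Graph()
    g.bind("ex", EX)
    pedigree = URIRef(pedigree_uri)
    g.add((pedigree, RDF.type, EX.Pedigree))
    for epc in epcs:
        g.add((pedigree, EX.hasProduct, URIRef(epc)))
    if previous_pedigree:
        # The "linked" part: a dereferenceable link to the upstream pedigree
        g.add((pedigree, EX.hasReceivedPedigree, URIRef(previous_pedigree)))
    return g

g = build_pedigree(
    "http://manufacturer.example/pedigree/1001",
    epcs=["urn:epc:id:sgtin:0614141.107346.2017"],
)
print(g.serialize(format="turtle"))
```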

Relevance: 80.00%

Abstract:

Data integration for the purposes of tracking, tracing and transparency is an important challenge in the agri-food supply chain. The Electronic Product Code Information Services (EPCIS) is an event-oriented GS1 standard that aims to enable tracking and tracing of products through the sharing of event-based datasets that encapsulate the Electronic Product Code (EPC). In this paper, the authors propose a framework that utilises events and EPCs in the generation of "linked pedigrees" - linked datasets that enable the sharing of traceability information about products as they move along the supply chain. The authors exploit two ontology-based information models, EEM and CBVVocab, within a distributed and decentralised framework that consumes real-time EPCIS events as linked data to generate the linked pedigrees. The authors exemplify the usage of linked pedigrees within the fresh fruit and vegetables supply chain in the agri-food sector.
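For illustration only, a single EPCIS ObjectEvent might be expressed as linked data roughly as below; the EEM and CBV term names used here are assumptions for the sketch, not the actual identifiers defined by the EEM or CBVVocab ontologies.

```python
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF, XSD

# Assumed namespaces for illustration; the real EEM/CBVVocab IRIs may differ.
EEM = Namespace("http://example.org/eem#")
CBV = Namespace("http://example.org/cbv#")

g = Graph()
g.bind("eem", EEM)
g.bind("cbv", CBV)

event = URIRef("http://distributor.example/event/42")
g.add((event, RDF.type, EEM.ObjectEvent))
g.add((event, EEM.eventTime,
       Literal("2014-03-10T09:30:00Z", datatype=XSD.dateTime)))
g.add((event, EEM.associatedWithEPC,
       URIRef("urn:epc:id:sgtin:0614141.107346.2017")))  # the traced product
g.add((event, EEM.hasBusinessStep, CBV.Shipping))        # business step from the CBV
g.add((event, EEM.hasReadPointLocation,
       URIRef("urn:epc:id:sgln:0614141.00777.0")))       # where the tag was read

print(g.serialize(format="turtle"))
```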

Relevance: 80.00%

Abstract:

The sharing of near real-time traceability knowledge in supply chains plays a central role in coordinating business operations and is a key driver for their success. However, before traceability datasets received from external partners can be integrated with datasets generated internally within an organisation, they need to be validated against information recorded for the physical goods received, as well as against bespoke rules defined to ensure uniformity, consistency and completeness within the supply chain. In this paper, we present a knowledge-driven framework for the runtime validation of critical constraints on incoming traceability datasets encapsulated as EPCIS event-based linked pedigrees. Our constraints are defined using SPARQL queries and SPIN rules. We present a novel validation architecture based on the integration of the Apache Storm framework for real-time, distributed computation with popular Semantic Web/Linked Data libraries, and exemplify our methodology on an abstraction of the pharmaceutical supply chain.
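The paper's constraints are expressed as SPARQL queries and SPIN rules evaluated inside a Storm topology; the self-contained sketch below shows only the flavour of such a check - a SPARQL ASK completeness constraint run with rdflib over an artificially incomplete incoming pedigree, using a hypothetical ex: vocabulary (SPIN and the Storm integration are out of scope here).

```python
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF

EX = Namespace("http://example.org/pedigree#")  # hypothetical vocabulary

# An incoming pedigree that (deliberately) lists no products.
incoming = Graph()
pedigree = URIRef("http://supplier.example/pedigree/77")
incoming.add((pedigree, RDF.type, EX.Pedigree))

# Completeness constraint: every pedigree must list at least one product EPC.
constraint = """
PREFIX ex: <http://example.org/pedigree#>
ASK {
    ?p a ex:Pedigree .
    FILTER NOT EXISTS { ?p ex:hasProduct ?epc }
}
"""

violated = bool(incoming.query(constraint).askAnswer)
print("constraint violated:", violated)
```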

Relevance: 80.00%

Abstract:

Sentiment lexicons for sentiment analysis offer a simple, yet effective way to obtain the prior sentiment information of opinionated words in texts. However, words’ sentiment orientations and strengths often change throughout various contexts in which the words appear. In this paper, we propose a lexicon adaptation approach that uses the contextual semantics of words to capture their contexts in tweet messages and update their prior sentiment orientations and/or strengths accordingly. We evaluate our approach on one state-of-the-art sentiment lexicon using three different Twitter datasets. Results show that the sentiment lexicons adapted by our approach outperform the original lexicon in accuracy and F-measure in two datasets, but give similar accuracy and slightly lower F-measure in one dataset.
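The abstract does not spell out the update rules, so the sketch below is only a schematic of the general idea: each word's prior score is nudged toward the average prior score of the words it co-occurs with in tweets. The blending factor, the scoring scale and the tiny lexicon are invented for the example.

```python
from collections import defaultdict

# Prior lexicon scores in [-1, 1]; the values here are illustrative.
prior = {"great": 0.8, "ill": -0.6, "not": -0.2}

def adapt_lexicon(prior, tweets, alpha=0.3):
    """Shift each word's prior score toward the mean prior score of its
    contextual neighbours (lexicon words co-occurring in the same tweet).
    `alpha` controls how strongly context overrides the prior."""
    context_scores = defaultdict(list)
    for tweet in tweets:
        tokens = [t.lower() for t in tweet.split()]
        known = [t for t in tokens if t in prior]
        for word in known:
            neighbours = [prior[w] for w in known if w != word]
            if neighbours:
                context_scores[word].append(sum(neighbours) / len(neighbours))
    adapted = dict(prior)
    for word, scores in context_scores.items():
        contextual = sum(scores) / len(scores)
        adapted[word] = (1 - alpha) * prior[word] + alpha * contextual
    return adapted

tweets = ["Not feeling great today", "great show , not ill at all"]
print(adapt_lexicon(prior, tweets))
```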

Relevance: 80.00%

Abstract:

In this paper we show how event processing over semantically annotated streams of events can be exploited for implementing tracing and tracking of products in supply chains through the automated generation of linked pedigrees. In our abstraction, events are encoded as spatially and temporally oriented named graphs, while linked pedigrees, expressed as RDF datasets, are specific compositions of these graphs. We propose an algorithm that operates over streams of RDF-annotated EPCIS events to generate linked pedigrees. We exemplify our approach using the pharmaceuticals supply chain and show how counterfeit detection is an implicit part of our pedigree generation. Our evaluation results show that for fast-moving supply chains, smaller window sizes on event streams provide significantly higher efficiency in the generation of pedigrees as well as enable early counterfeit detection.
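As a toy illustration of the windowing idea only (the actual algorithm composes RDF named graphs, which is omitted here), the sketch below groups a time-ordered event stream into tumbling windows and emits one pedigree per window; smaller windows emit pedigrees sooner, which is what permits earlier counterfeit checks downstream.

```python
from dataclasses import dataclass
from typing import Iterable, Iterator, List

@dataclass
class Event:
    timestamp: float      # seconds since epoch
    epc: str              # Electronic Product Code observed in the event

@dataclass
class Pedigree:
    window_start: float
    epcs: List[str]

def pedigrees_from_stream(events: Iterable[Event],
                          window_seconds: float) -> Iterator[Pedigree]:
    """Group a time-ordered event stream into tumbling windows and
    emit one pedigree per window."""
    window_start = None
    epcs: List[str] = []
    for event in events:
        if window_start is None:
            window_start = event.timestamp
        if event.timestamp - window_start >= window_seconds:
            yield Pedigree(window_start, epcs)        # close the current window
            window_start, epcs = event.timestamp, []
        epcs.append(event.epc)
    if epcs:
        yield Pedigree(window_start, epcs)            # flush the last window

stream = [Event(0, "urn:epc:id:sgtin:1"), Event(3, "urn:epc:id:sgtin:2"),
          Event(7, "urn:epc:id:sgtin:3")]
for p in pedigrees_from_stream(stream, window_seconds=5):
    print(p)
```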

Relevance: 80.00%

Abstract:

This paper provides a summary of the Social Media and Linked Data for Emergency Response (SMILE) workshop, co-located with the Extended Semantic Web Conference, at Montpellier, France, 2013. Following paper presentations and question answering sessions, an extensive discussion and roadmapping session was organised which involved the workshop chairs and attendees. Three main topics guided the discussion - challenges, opportunities and showstoppers. In this paper, we present our roadmap towards effectively exploiting social media and semantic web techniques for emergency response and crisis management.

Relevance: 80.00%

Abstract:

Software architecture plays an essential role in the high-level description of a system design, where the structure and communication are emphasized. Despite its importance in the software engineering process, the lack of formal description and automated verification hinders the development of good software architecture models. In this paper, we present an approach to support the rigorous design and verification of software architecture models using Semantic Web technology. We view software architecture models as ontology representations, where their structures and communication constraints are captured by the Web Ontology Language (OWL) and the Semantic Web Rule Language (SWRL). Specific configurations of the design are represented as concrete instances of the ontology, to which their structures and dynamic behaviors must conform. Furthermore, ontology reasoning tools can be applied to perform various automated verification tasks on the design to ensure correctness, such as consistency checking, style recognition, and behavioral inference.
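A minimal sketch of this style of modelling, assuming the owlready2 library and invented class names (SWRL behavioural rules are omitted): an architectural constraint is stated as an OWL disjointness axiom, a configuration is asserted as instances, and a DL reasoner flags a violating configuration as inconsistent. Running the check requires a Java reasoner (HermiT) available to owlready2.

```python
from owlready2 import (get_ontology, Thing, ObjectProperty, AllDisjoint,
                       sync_reasoner, OwlReadyInconsistentOntologyError)

onto = get_ontology("http://example.org/arch.owl")

with onto:
    class Component(Thing): pass
    class Connector(Thing): pass
    AllDisjoint([Component, Connector])      # style rule: a component is never a connector

    class attachedTo(ObjectProperty):
        domain = [Connector]
        range = [Component]

    # A configuration instance: one connector wired to two components.
    client = Component("client")
    server = Component("server")
    link = Connector("rpc_link")
    link.attachedTo = [client, server]

# Deliberate violation: declaring the connector to also be a component.
link.is_a.append(Component)

try:
    sync_reasoner()                          # invokes HermiT via Java
    print("configuration is consistent with the architectural style")
except OwlReadyInconsistentOntologyError:
    print("configuration violates the architectural style")
```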

Relevance: 80.00%

Abstract:

The sharing of product and process information plays a central role in coordinating supply chain operations and is a key driver for their success. "Linked pedigrees" - linked datasets that encapsulate event-based traceability information about artifacts as they move along the supply chain - provide a scalable mechanism to record and facilitate the sharing of track-and-trace knowledge among supply chain partners. In this paper we present "OntoPedigree", a content ontology design pattern for the representation of linked pedigrees, which can be specialised and extended to define domain-specific traceability ontologies. Events captured within the pedigrees are specified using EPCIS - a GS1 standard for the specification of traceability information within and across enterprises - while certification information is described using PROV - a vocabulary for modelling the provenance of resources. We exemplify the utility of OntoPedigree in linked pedigrees generated for supply chains within the perishable goods and pharmaceuticals sectors.
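OntoPedigree itself is not reproduced here; the rdflib sketch below merely illustrates how certification statements of the kind the pattern describes might be attached to a pedigree using the W3C PROV vocabulary, with the pedigree terms standing in as hypothetical placeholders.

```python
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF, XSD, PROV

PED = Namespace("http://example.org/ontopedigree#")   # stand-in for OntoPedigree terms

g = Graph()
g.bind("prov", PROV)
g.bind("ped", PED)

pedigree = URIRef("http://manufacturer.example/pedigree/1001")
manufacturer = URIRef("http://manufacturer.example/org")

g.add((pedigree, RDF.type, PED.Pedigree))
g.add((pedigree, RDF.type, PROV.Entity))
# Certification expressed as provenance: who issued the pedigree and when.
g.add((pedigree, PROV.wasAttributedTo, manufacturer))
g.add((pedigree, PROV.generatedAtTime,
       Literal("2014-06-01T08:00:00Z", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```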

Relevance: 40.00%

Abstract:

The manufacturing industry faces many challenges, such as reducing time-to-market and cutting costs. In order to meet these increasing demands, effective methods are needed to support the early product development stages by bridging the gap between communicating early design ideas and evaluating manufacturing performance. This paper introduces methods of linking the design and manufacturing domains using disparate technologies. The combined technologies include knowledge management support for product lifecycle management systems, Enterprise Resource Planning (ERP) systems, aggregate process planning systems, workflow management and data exchange formats. A case study has been used to demonstrate the use of these technologies, illustrated by adding manufacturing knowledge to generate alternative early process plans, which are in turn used by an ERP system to obtain and optimise a rough-cut capacity plan. Copyright © 2010 Inderscience Enterprises Ltd.

Relevance: 30.00%

Abstract:

The convergence of technologies in the Internet and the field of expert systems has offered new ways of sharing and distributing knowledge. However, there has been a general lack of research in the area of web-based expert systems (ES). This paper addresses the issues associated with the design, development, and use of web-based ES from the standpoint of the benefits and challenges of developing and using them. The original theory and concepts in conventional ES were reviewed and a knowledge engineering framework for developing them was revisited. The study considered three web-based ES: WITS-advisor, for e-business strategy development; Fish-Expert, for fish disease diagnosis; and IMIS, to promote intelligent interviews. The benefits and challenges in developing and using ES are discussed by comparing them with traditional standalone systems from development and application perspectives. © 2004 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

Models are central tools for modern scientists and decision makers, and there are many existing frameworks to support their creation, execution and composition. Many frameworks are based on proprietary interfaces and do not lend themselves to the integration of models from diverse disciplines. Web-based systems, or systems based on web services, such as Taverna and Kepler, allow the composition of models based on standard web service technologies. At the same time, the Open Geospatial Consortium has been developing its own service stack, which includes the Web Processing Service, designed to facilitate the execution of geospatial processing - including complex environmental models. The current Open Geospatial Consortium service stack employs Extensible Markup Language as its default data exchange standard, and widely used encodings such as JavaScript Object Notation can often only be used when incorporated within Extensible Markup Language. Similarly, no successful engagement of the Web Processing Service standard with the well-supported technologies of the Simple Object Access Protocol and the Web Services Description Language has been seen. In this paper we propose a pure Simple Object Access Protocol/Web Services Description Language processing service which addresses some of the issues with the Web Processing Service specification and brings us closer to achieving a degree of interoperability between geospatial models, and thus to realising the vision of a useful 'model web'.
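No WSDL is given in the abstract, so the snippet below is only a generic illustration of invoking a WSDL-described processing operation from Python with the zeep SOAP client; the endpoint URL, the Execute operation and its parameters are hypothetical placeholders, not part of the proposed service.

```python
from zeep import Client

# Hypothetical WSDL for a geospatial processing service; not a real endpoint.
WSDL_URL = "http://models.example.org/processing-service?wsdl"

client = Client(WSDL_URL)

# Operation and parameter names depend entirely on the service's WSDL;
# these stand in for a model-execution request.
result = client.service.Execute(
    modelId="hydrology.runoff",
    inputs={"rainfall_mm": 12.5, "catchment_area_km2": 42.0},
)
print(result)
```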