846 results for Semantic Publishing, Linked Data, Bibliometrics, Informetrics, Data Retrieval, Citations
Abstract:
Due to copyright restrictions, only available for consultation at Aston University Library and Information Services, with prior arrangement.
Abstract:
Six actions for collating collective intelligence to inform and accelerate change
Abstract:
The increased data complexity and task interdependency associated with servitization represent significant barriers to its adoption. The outline of a business game is presented that demonstrates the increasing complexity of the management problem when moving through Base, Intermediate and Advanced levels of servitization. Linked Data is proposed as an agile set of technologies, based on well-established standards, for data exchange both in the game and, more generally, in supply chains.
Abstract:
PRELIDA (PREserving LInked DAta) is an FP7 Coordination Action funded by the European Commission under the Digital Preservation Theme. PRELIDA targets the particular stakeholders of the Linked Data community, including data providers, service providers, technology providers and end-user communities. These stakeholders have not traditionally been targeted by the Digital Preservation community, and are typically not aware of the digital preservation solutions already available. An important task of PRELIDA is therefore to raise awareness of existing preservation solutions and to facilitate their uptake. At the same time, the Linked Data cloud has specific characteristics in terms of structure, interlinkage, dynamicity and distribution that pose new challenges to the preservation community. PRELIDA organises in-depth discussions between the two communities to identify which of these characteristics require novel solutions, and to develop road maps for addressing the new challenges. PRELIDA will complete its lifecycle at the end of this year, and the talk will report on its major findings.
Abstract:
Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2015.
Abstract:
Methods for accessing data on the Web have been the focus of active research over the past few years. In this thesis we propose a method for representing Web sites as data sources. We designed Data Extractor, a data retrieval solution that allows us to define queries to Web sites and process the resulting data sets. Data Extractor is being integrated into the MSemODB heterogeneous database management system. With its help, database queries can be distributed over both local and Web data sources within the MSemODB framework. Data Extractor treats Web sites as data sources, controlling query execution and data retrieval. It works as an intermediary between the applications and the sites. Data Extractor utilizes a twofold "custom wrapper" approach to information retrieval. Wrappers for the majority of sites are easily built using a powerful and expressive scripting language, while complex cases are processed using Java-based wrappers that utilize a specially designed library of data retrieval, parsing and Web access routines. In addition to wrapper development, we thoroughly investigate issues associated with Web site selection, analysis and processing. Data Extractor is designed to act as a data retrieval server as well as an embedded data retrieval solution. We also use it to create mobile agents that are shipped over the Internet to the client's computer to perform data retrieval on behalf of the user. This approach allows Data Extractor to distribute and scale well. This study confirms the feasibility of building custom wrappers for Web sites. This approach provides accurate data retrieval, along with power and flexibility in handling complex cases.
Abstract:
Museums are institutions that play an important role in society, holding collections of great cultural and scientific value. It is the duty of museums to promote access to their collections and to carry out communication initiatives for the dissemination of, and public access to, the cultural assets that make up those collections. Museums have been employing Information and Communication Technology to support their activities, broaden the range of services provided to society, promote culture, science and knowledge, and publicise and make their collections available on the Web. To make museum collection information available with more intuitive and natural navigation, and to enable the exchange of information between museums, with a view to Information Retrieval and the reuse and interoperability of data, that information must be adapted to the Semantic Web format. This study proposes a solution to integrate the collection data of the Rede de Museus e Espaços de Ciências e Cultura of the Universidade Federal de Minas Gerais and make it available on the Web, using Semantic Web and Linked Data concepts. To achieve this goal, an experimental study and an application prototype will be developed to validate the proposal and answer the competency question.
Abstract:
In 2005, the University of Maryland acquired over 70 digital videos spanning 35 years of Jim Henson’s groundbreaking work in television and film. To support in-house discovery and use, the collection was cataloged in detail using AACR2 and MARC21, and a web-based finding aid was also created. In the past year, I created an "r-ball" (a linked data set described using RDA) of these same resources. The presentation will compare and contrast these three ways of accessing the Jim Henson Works collection, with insights gleaned from providing resource discovery using RIMMF (RDA in Many Metadata Formats).
Abstract:
The main databases related to metabolic pathways, such as KEGG, BRENDA, Reactome and BioCyc, provide only partially interlinked data on metabolic pathways. This limitation means that cross-database information on metabolism can only be retrieved through independent searches, and it restricts the more complex queries that could uncover new knowledge or relationships.
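If such databases exposed fully interlinked data through SPARQL endpoints, a single federated query could join them. The sketch below illustrates the idea; the endpoint URLs and the vocabulary terms are illustrative assumptions, not the actual schemas or services of the databases named above.

```sparql
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX ex:   <http://example.org/vocab#>

SELECT ?pathway ?reaction ?enzymeLabel
WHERE {
  # Pathway/reaction structure from one (hypothetical) endpoint...
  SERVICE <http://example.org/pathways/sparql> {
    ?pathway a ex:MetabolicPathway ;
             ex:hasReaction ?reaction .
  }
  # ...joined with enzyme annotations from another.
  SERVICE <http://example.org/enzymes/sparql> {
    ?reaction ex:catalysedBy ?enzyme .
    ?enzyme   rdfs:label ?enzymeLabel .
  }
}
```

A query of this shape replaces the independent per-database searches with one cross-database join, which is precisely what the partial interlinking currently prevents.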
Abstract:
Presentation given as part of the EPrints/dotAC training day on 26 Mar 2010.
Abstract:
WAIS Seminar, presented 29 Mar 2012
Abstract:
The sharing of near real-time traceability knowledge in supply chains plays a central role in coordinating business operations and is a key driver for their success. However, before traceability datasets received from external partners can be integrated with datasets generated internally within an organisation, they need to be validated against information recorded for the physical goods received, as well as against bespoke rules defined to ensure uniformity, consistency and completeness within the supply chain. In this paper, we present a knowledge-driven framework for the runtime validation of critical constraints on incoming traceability datasets encapsulated as EPCIS event-based linked pedigrees. Our constraints are defined using SPARQL queries and SPIN rules. We present a novel validation architecture based on the integration of the Apache Storm framework for real-time, distributed computation with popular Semantic Web/Linked Data libraries, and exemplify our methodology on an abstraction of the pharmaceutical supply chain.
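A completeness constraint of the kind described can be expressed as a SPARQL query that flags incoming pedigree events missing a required field. The sketch below is a minimal illustration of that pattern; the eem: namespace and property names are assumptions made for the example, not the actual terms of the EPCIS Event Model vocabulary.

```sparql
PREFIX eem: <http://example.org/eem#>

# Flag any event in the incoming pedigree that lacks a timestamp,
# so it can be rejected before integration.
SELECT ?event
WHERE {
  ?event a eem:EPCISEvent .
  FILTER NOT EXISTS { ?event eem:eventTime ?time }
}
```

At runtime, a validation topology would run such queries over each arriving dataset and quarantine any events the query returns.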
Abstract:
Supply chains comprise complex processes spanning multiple trading partners. The various operations involved generate a large number of events that need to be integrated in order to enable internal and external traceability. Further, provenance of the artifacts and agents involved in supply chain operations is now a key traceability requirement. In this paper we propose a Semantic Web/Linked Data powered framework for the event-based representation and analysis of supply chain activities governed by the EPCIS specification. We specifically show how a new EPCIS event type called "Transformation Event" can be semantically annotated using EEM, the EPCIS Event Model, to generate linked data that can be exploited for internal event-based traceability in supply chains involving transformation of products. For integrating provenance with traceability, we propose a mapping from EEM to PROV-O. We exemplify our approach on an abstraction of the production processes that are part of the wine supply chain.
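An EEM-to-PROV-O mapping of the kind the abstract describes can be written as a SPARQL CONSTRUCT query. The sketch below shows the general shape: a transformation event becomes a prov:Activity that used its inputs and generated its outputs. The prov: terms are from the W3C PROV-O ontology; the eem: namespace and property names are illustrative assumptions for this example, not the model's actual terms.

```sparql
PREFIX eem:  <http://example.org/eem#>
PREFIX prov: <http://www.w3.org/ns/prov#>

# Map each transformation event to a PROV-O activity linking the
# products consumed and produced, so standard provenance tooling
# can traverse the supply chain history.
CONSTRUCT {
  ?event a prov:Activity ;
         prov:used          ?inputProduct ;
         prov:generated     ?outputProduct ;
         prov:startedAtTime ?time .
}
WHERE {
  ?event a eem:TransformationEvent ;
         eem:hasInputEPC  ?inputProduct ;
         eem:hasOutputEPC ?outputProduct ;
         eem:eventTime    ?time .
}
```

Running such a query over the EEM-annotated event log yields a PROV-O graph in which product provenance can be queried independently of the EPCIS-specific representation.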