976 results for Linked Open Data Android iOS Semantic Web Turismo Tourism


Relevance: 100.00%

Publisher:

Abstract:

The paper discusses the Europeana Creative project, which aims to facilitate the re-use of cultural heritage metadata and content by the creative industries. The paper focuses on the contribution of Ontotext to the project activities. The Europeana Data Model (EDM) is further discussed as a new proposal for structuring the data that Europeana will ingest, manage and publish, and the advantages of using EDM instead of the current ESE metadata set are highlighted. Finally, Ontotext's EDM Endpoint is presented, based on the OWLIM semantic repository and the SPARQL query language. A user-friendly RDF view is presented in order to illustrate the possibilities of Forest, an extensible modular user interface framework for creating linked data and Semantic Web applications.
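
The abstract gives no sample query, but a minimal sketch of querying such an EDM SPARQL endpoint could look as follows (the endpoint URL is a placeholder, not Ontotext's published address; the EDM and Dublin Core terms are standard vocabularies):

    from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

    # Placeholder endpoint URL; the real EDM Endpoint address is not given in the abstract.
    endpoint = SPARQLWrapper("http://example.org/edm/sparql")
    endpoint.setQuery("""
        PREFIX edm: <http://www.europeana.eu/schemas/edm/>
        PREFIX dc:  <http://purl.org/dc/elements/1.1/>
        SELECT ?object ?title WHERE {
            ?object a edm:ProvidedCHO ;
                    dc:title ?title .
        } LIMIT 10
    """)
    endpoint.setReturnFormat(JSON)
    results = endpoint.query().convert()
    for binding in results["results"]["bindings"]:
        print(binding["object"]["value"], ":", binding["title"]["value"])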

Relevance: 100.00%

Publisher:

Abstract:

Indicators are widely used by organizations as a way of evaluating, measuring and classifying organizational performance. As part of performance evaluation systems, indicators are often shared or compared across internal sectors or with other organizations. However, indicators can be vague and imprecise, and can also lack semantics, making comparisons with other indicators difficult. This paper therefore presents a knowledge model, based on an ontology, that can be used to represent indicators semantically and generically, dealing with their imprecision and vagueness and thus facilitating better comparison. Semantic technologies are shown to be suitable for this solution, as they are able to represent the complex data involved in indicator comparison.
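
To make the idea concrete, a minimal sketch of a semantically described indicator, written with rdflib and an illustrative vocabulary (the paper's actual ontology terms are not given in the abstract), might be:

    from rdflib import Graph, Literal, Namespace, RDF, XSD

    # Illustrative namespace and terms; the paper's real ontology vocabulary is not shown here.
    IND = Namespace("http://example.org/indicator#")

    g = Graph()
    g.bind("ind", IND)

    # A semantically described indicator: what it measures, its unit, its value,
    # and a fuzzy qualifier capturing the imprecision/vagueness the paper addresses.
    g.add((IND.StaffTurnover2023, RDF.type, IND.Indicator))
    g.add((IND.StaffTurnover2023, IND.measures, IND.StaffTurnover))
    g.add((IND.StaffTurnover2023, IND.unit, Literal("percent")))
    g.add((IND.StaffTurnover2023, IND.value, Literal(4.2, datatype=XSD.decimal)))
    g.add((IND.StaffTurnover2023, IND.vagueness, Literal("approximately", lang="en")))

    print(g.serialize(format="turtle"))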

Relevance: 100.00%

Publisher:

Abstract:

An overview of the growth of policies and a critical appraisal of the issues affecting open access, open data and open science. Example policies and a roadmap for open access, open research data and open science are included.

Relevance: 100.00%

Publisher:

Abstract:

The Semantic Web has come a long way since its inception in 2001, especially in terms of technical development and research progress. However, adoption by non-technical practitioners is still an ongoing process, and in some areas this process is only now starting. Emergency response is an area where the reliability and timeliness of information and technologies is of the essence. It is therefore quite natural that widespread adoption in this area has not been seen until now, when Semantic Web technologies are mature enough to support the high requirements of the application area. Nevertheless, to leverage the full potential of Semantic Web research results for this application area, there is a need for an arena where practitioners and researchers can meet and exchange ideas and results. Our intention is for this workshop, and hopefully coming workshops in the same series, to be such an arena for discussion. The Extended Semantic Web Conference (ESWC, formerly the European Semantic Web Conference) is one of the major research conferences in the Semantic Web field, which makes it a suitable venue for discussing the application of Semantic Web technology to our specific area. Hence, we chose to arrange our first SMILE workshop at ESWC 2013. However, this workshop does not focus solely on semantic technologies for emergency response, but rather on Semantic Web technologies in combination with technologies and principles for what is sometimes called the "social web". Social media has already been used successfully in many cases as a tool for supporting emergency response. The aim of this workshop is therefore to take this to the next level and answer questions like: how can we make sense of, and furthermore make use of, all the data that is produced by different kinds of social media platforms in an emergency situation?

For the first edition of this workshop the chairs collected the following main topics of interest:

• Semantic annotation for understanding the content and context of social media streams.
• Integration of social media with Linked Data.
• Interactive interfaces and visual analytics methodologies for managing multiple large-scale, dynamic, evolving datasets.
• Stream reasoning and event detection.
• Social data mining.
• Collaborative tools and services for citizens, organisations and communities.
• Privacy, ethics, trustworthiness and legal issues in the Social Semantic Web.
• Use case analysis, with specific interest in use cases that involve the application of social media and Linked Data methodologies in real-life scenarios.

All of these, applied in the context of:

• Crisis and disaster management
• Emergency response
• Security and citizen journalism

The workshop received 6 high-quality paper submissions, and based on a thorough review process, thanks to our program committee, the decision was made to accept four of these papers for the workshop (a 67% acceptance rate). These four papers can be found later in this proceedings volume. Three of the four papers discuss the integration and analysis of social media data using Semantic Web technologies, e.g. for detecting complex events in social media streams, for visualizing and analysing sentiment towards certain topics in social media, or for detecting small-scale incidents entirely through the use of social media information. The fourth paper presents an architecture for using Semantic Web technologies in resource management during a disaster.

Additionally, the workshop featured an invited keynote speech by Dr. Tomi Kauppinen from Aalto University. Dr. Kauppinen shared experiences from his work on applying Semantic Web technologies to application fields such as geoinformatics and scientific research, i.e. so-called Linked Science, as well as recent ideas and applications in the emergency response field. His input was also highly valuable for the roadmapping discussion, which was held at the end of the workshop. A separate summary of the roadmapping session can be found at the end of these proceedings. Finally, we would like to thank our invited speaker Dr. Tomi Kauppinen, all our program committee members, and the workshop chair of ESWC 2013, Johanna Völker (University of Mannheim), for helping us to make this first SMILE workshop a highly interesting and successful event!

Relevance: 100.00%

Publisher:

Abstract:

Supply chains comprise complex processes spanning multiple trading partners. The various operations involved generate a large number of events that need to be integrated in order to enable internal and external traceability. Further, the provenance of artifacts and agents involved in supply chain operations is now a key traceability requirement. In this paper we propose a Semantic Web/Linked Data powered framework for the event-based representation and analysis of supply chain activities governed by the EPCIS specification. We specifically show how a new EPCIS event type called "Transformation Event" can be semantically annotated using EEM (the EPCIS Event Model) to generate linked data that can be exploited for internal event-based traceability in supply chains involving the transformation of products. For integrating provenance with traceability, we propose a mapping from EEM to PROV-O. We exemplify our approach on an abstraction of the production processes that are part of the wine supply chain.
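
A minimal sketch of this kind of annotation, written with rdflib, might look as follows; the EEM namespace URI and property names here are assumptions for illustration (consult the EEM specification for the authoritative terms), while the PROV-O terms are from the W3C vocabulary:

    from rdflib import Graph, Literal, Namespace, RDF, XSD

    # EEM terms below are illustrative assumptions; PROV-O terms are standard.
    EEM  = Namespace("http://purl.org/eem#")
    PROV = Namespace("http://www.w3.org/ns/prov#")
    EX   = Namespace("http://example.org/wine#")

    g = Graph()
    g.bind("eem", EEM); g.bind("prov", PROV); g.bind("ex", EX)

    # A TransformationEvent: grapes (input) are transformed into wine (output).
    g.add((EX.crush42, RDF.type, EEM.TransformationEvent))
    g.add((EX.crush42, EEM.hasInputEPC, EX.grapeBatch17))
    g.add((EX.crush42, EEM.hasOutputEPC, EX.wineLot3))
    g.add((EX.crush42, EEM.eventOccurredAt,
           Literal("2014-09-01T10:00:00", datatype=XSD.dateTime)))

    # Sketch of an EEM-to-PROV-O mapping for provenance-aware traceability.
    g.add((EX.crush42, RDF.type, PROV.Activity))
    g.add((EX.wineLot3, RDF.type, PROV.Entity))
    g.add((EX.wineLot3, PROV.wasGeneratedBy, EX.crush42))
    g.add((EX.wineLot3, PROV.wasDerivedFrom, EX.grapeBatch17))

    print(g.serialize(format="turtle"))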

Relevance: 100.00%

Publisher:

Abstract:

Methods for accessing data on the Web have been the focus of active research over the past few years. In this thesis we propose a method for representing Web sites as data sources. We designed Data Extractor, a data retrieval solution that allows us to define queries to Web sites and process the resulting data sets. Data Extractor is being integrated into the MSemODB heterogeneous database management system. With its help, database queries can be distributed over both local and Web data sources within the MSemODB framework. Data Extractor treats Web sites as data sources, controlling query execution and data retrieval. It works as an intermediary between the applications and the sites. Data Extractor utilizes a twofold "custom wrapper" approach to information retrieval: wrappers for the majority of sites are easily built using a powerful and expressive scripting language, while complex cases are processed using Java-based wrappers that utilize a specially designed library of data retrieval, parsing and Web access routines. In addition to wrapper development, we thoroughly investigate issues associated with Web site selection, analysis and processing. Data Extractor is designed to act as a data retrieval server as well as an embedded data retrieval solution. We also use it to create mobile agents that are shipped over the Internet to the client's computer to perform data retrieval on behalf of the user. This approach allows Data Extractor to distribute and scale well. This study confirms the feasibility of building custom wrappers for Web sites. The approach provides accuracy of data retrieval, as well as power and flexibility in handling complex cases.
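
The thesis's scripting language and wrapper library are not reproduced here, but a minimal Python analogue of the "custom wrapper" idea, with a hypothetical site URL and extraction pattern, could be:

    import re
    import urllib.request

    class SiteWrapper:
        """Minimal analogue of a 'custom wrapper': turn one Web site into a
        queryable data source. URL template and pattern are placeholders."""

        def __init__(self, url_template: str, row_pattern: str):
            self.url_template = url_template
            self.row_pattern = re.compile(row_pattern, re.DOTALL)

        def query(self, **params) -> list:
            # Build the site-specific URL, fetch the page, extract a result set.
            url = self.url_template.format(**params)
            with urllib.request.urlopen(url) as resp:
                html = resp.read().decode("utf-8", errors="replace")
            return self.row_pattern.findall(html)

    # Usage sketch (hypothetical site and pattern):
    # wrapper = SiteWrapper("http://example.org/books?q={title}",
    #                       r"<td class='title'>(.*?)</td>\s*<td class='price'>(.*?)</td>")
    # rows = wrapper.query(title="semantic+web")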

Relevance: 100.00%

Publisher:

Abstract:

Cloud computing can be defined as a distributed computational model through which resources (hardware, storage, development platforms and communication) are shared as paid services, accessible with minimal management effort and interaction. A great benefit of this model is that it enables the use of multiple providers (i.e. a multi-cloud architecture) to compose a set of services in order to obtain an optimal configuration for performance and cost. However, multi-cloud use is hindered by the problem of cloud lock-in: the dependency between an application and a cloud platform. It is commonly addressed by three strategies: (i) use of an intermediary layer between the consumers of cloud services and the provider; (ii) use of standardized interfaces to access the cloud; or (iii) use of models with open specifications. This paper outlines an approach to evaluating these strategies. The evaluation was performed, and it was found that, despite the advances made by these strategies, none of them actually solves the cloud lock-in problem. In this sense, this work proposes the use of Semantic Web technologies to avoid cloud lock-in, where RDF models are used to specify the features of a cloud, managed through SPARQL queries. In this direction, this work: (i) presents an evaluation model that quantifies the cloud lock-in problem; (ii) evaluates cloud lock-in across three multi-cloud solutions and three cloud platforms; (iii) proposes the use of RDF and SPARQL for the management of cloud resources; (iv) presents the Cloud Query Manager (CQM), a SPARQL server that implements the proposal; and (v) compares three multi-cloud solutions against CQM in terms of response time and effectiveness in resolving cloud lock-in.
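
The abstract does not show CQM's vocabulary, but the underlying idea (describing cloud features in RDF and selecting providers with SPARQL) can be sketched with rdflib and invented terms:

    from rdflib import Graph, Literal, Namespace, RDF

    # Illustrative vocabulary; the RDF terms actually used by CQM are not given in the abstract.
    CLOUD = Namespace("http://example.org/cloud#")

    g = Graph()
    g.bind("cloud", CLOUD)
    for name, storage_api in [("ProviderA", "S3-compatible"), ("ProviderB", "Swift")]:
        node = CLOUD[name]
        g.add((node, RDF.type, CLOUD.Platform))
        g.add((node, CLOUD.storageInterface, Literal(storage_api)))

    # Selecting platforms by feature instead of hard-coding one provider's API is
    # the essence of using RDF/SPARQL to loosen application-to-platform coupling.
    q = """
    PREFIX cloud: <http://example.org/cloud#>
    SELECT ?platform WHERE {
        ?platform a cloud:Platform ;
                  cloud:storageInterface "S3-compatible" .
    }
    """
    for row in g.query(q):
        print(row.platform)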

Relevance: 100.00%

Publisher:

Abstract:

Describes and analyzes the results obtained from an analysis of the publications indexed in the Scopus database, together with the rankings generated by the Scimago research group, concerning the output of the different countries of Central America on the topic of documentation in the mass communication media. A comparison across the countries of the region is performed and their scientific output is analyzed. Finally, based on the data analysis, a number of recommendations are made to improve both production and presence in indexed databases.

Relevance: 100.00%

Publisher:

Abstract:

The World Wide Web Consortium, W3C, is known for standards like HTML and CSS, but there's a lot more to it than that: mobile, automotive, publishing, graphics, TV and more. Then there are horizontal issues like privacy, security, accessibility and internationalisation. Many of these assume that there is an underlying data infrastructure to power applications. In this session, W3C's Data Activity Lead, Phil Archer, will describe the overall vision for better use of the Web as a platform for sharing data and how that translates into recent, current and possible future work. What's the difference between using the Web as a data platform and as a glorified USB stick? Why does it matter? And what makes a standard a standard anyway?

Speaker biography: Phil Archer is Data Activity Lead at W3C, the industry standards body for the World Wide Web, coordinating W3C's work in the Semantic Web and related technologies. He is most closely involved in the Data on the Web Best Practices, Permissions and Obligations Expression, and Spatial Data on the Web Working Groups. His key themes are interoperability through common terminology and URI persistence. As well as his work at the W3C, his career has encompassed broadcasting, teaching, linked data publishing, copy writing, and, perhaps incongruously, countryside conservation. The common thread throughout has been a knack for communication, particularly communicating complex technical ideas to a more general audience.

Relevance: 100.00%

Publisher:

Abstract:

The generation of heterogeneous big data sources with ever-increasing volumes, velocities and veracities over the last few years has inspired the data science and research community to address the challenge of extracting knowledge from big data. Such a wealth of generated data across the board can be intelligently exploited to advance our knowledge about our environment, public health, critical infrastructure and security. In recent years we have developed generic approaches to process such big data at multiple levels for advancing decision support. This specifically concerns data processing with semantic harmonisation, low-level fusion, analytics, and knowledge modelling with high-level fusion and reasoning. Such approaches will be introduced and presented in the context of the TRIDEC project results on critical oil and gas industry drilling operations, and of the ongoing large-scale eVacuate project on critical crowd behaviour detection in confined spaces.

Relevance: 100.00%

Publisher:

Abstract:

Edge-labeled graphs have proliferated rapidly over the last decade due to the increased popularity of social networks and the Semantic Web. In social networks, relationships between people are represented by edges, and each edge is labeled with a semantic annotation. Hence, a huge single graph can express many different relationships between entities. The Semantic Web represents each single fragment of knowledge as a triple (subject, predicate, object), which is conceptually identical to an edge from subject to object labeled with the predicate. A set of triples constitutes an edge-labeled graph on which knowledge inference is performed. Subgraph matching has been extensively used as a query language for patterns in the context of edge-labeled graphs. For example, in social networks, users can specify a subgraph matching query to find all people that have certain neighborhood relationships. Heavily used fragments of the SPARQL query language for the Semantic Web, and the graph queries of other graph DBMSs, can also be viewed as subgraph matching over large graphs. Though subgraph matching has been extensively studied as a query paradigm in the Semantic Web and in social networks, a user can get a large number of answers in response to a query. These answers can be shown to the user in accordance with an importance ranking. In this thesis proposal, we present four different scoring models along with scalable algorithms to find the top-k answers via a suite of intelligent pruning techniques. The suggested models consist of a practically important subset of the SPARQL query language augmented with some additional useful features. The first model, called Substitution Importance Query (SIQ), identifies the top-k answers whose scores are calculated from matched vertices' properties in each answer in accordance with a user-specified notion of importance. The second model, called Vertex Importance Query (VIQ), identifies important vertices in accordance with a user-defined scoring method that builds on top of various subgraphs articulated by the user. Approximate Importance Query (AIQ), our third model, allows partial and inexact matchings and returns the top-k of them according to user-specified approximation terms and scoring functions. In the fourth model, called Probabilistic Importance Query (PIQ), a query consists of several sub-blocks: one mandatory block that must be mapped and other blocks that can be opportunistically mapped. The probability is calculated from various aspects of the answers, such as the number of mapped blocks and the vertices' properties in each block, and the top-k most probable answers are returned. An important distinguishing feature of our work is that we allow the user a huge amount of freedom in specifying: (i) what pattern and approximation he considers important, (ii) how to score answers, irrespective of whether they are vertices or substitutions, and (iii) how to combine and aggregate scores generated by multiple patterns and/or multiple substitutions. Because so much power is given to the user, indexing is more challenging than in situations where additional restrictions are imposed on the queries the user can ask. The proposed algorithms for the first model can also be used for answering SPARQL queries with ORDER BY and LIMIT, and the method for the second model also works for SPARQL queries with GROUP BY, ORDER BY and LIMIT. We test our algorithms on multiple real-world graph databases, showing that our algorithms are far more efficient than popular triple stores.
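
As a toy illustration of the SIQ idea (scoring answers from matched vertices' properties and returning the top-k), here is a brute-force sketch using networkx; it enumerates matches exhaustively and therefore lacks the pruning techniques that are the thesis's actual contribution:

    import heapq
    import networkx as nx
    from networkx.algorithms import isomorphism

    # Data graph: edge-labeled, with a numeric property on each vertex.
    G = nx.DiGraph()
    G.add_node("alice", influence=9); G.add_node("bob", influence=4)
    G.add_node("carol", influence=7); G.add_node("post1", influence=0)
    G.add_edge("alice", "bob", label="follows")
    G.add_edge("carol", "bob", label="follows")

    # Pattern: ?x --follows--> ?y
    P = nx.DiGraph()
    P.add_edge("x", "y", label="follows")

    matcher = isomorphism.DiGraphMatcher(
        G, P, edge_match=isomorphism.categorical_edge_match("label", None))

    # Score each answer from the matched vertices' properties, then keep the top-k.
    def score(mapping):  # mapping: data-graph node -> pattern variable
        return sum(G.nodes[v]["influence"] for v in mapping)

    k = 2
    answers = matcher.subgraph_isomorphisms_iter()
    print(heapq.nlargest(k, answers, key=score))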

Relevance: 100.00%

Publisher:

Abstract:

In recent years the technological world has grown by incorporating billions of small sensing devices, collecting and sharing real-world information. As the number of such devices grows, it becomes increasingly difficult to manage all these new information sources. There is no uniform way to share, process and understand context information. In previous publications we discussed efficient ways to organize context information that are independent of structure and representation. However, our previous solution suffers from semantic sensitivity. In this paper we review semantic methods that can be used to minimize this issue, and propose an unsupervised semantic similarity solution that combines distributional profiles with public web services. Our solution was evaluated against the Miller-Charles dataset, achieving a correlation of 0.6.
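
As a minimal sketch of comparing distributional profiles, one could compute a cosine similarity over context-word counts; the counts below are toy values, whereas in the paper's setting they would be built from public web services:

    import math
    from collections import Counter

    def cosine(p: Counter, q: Counter) -> float:
        """Cosine similarity between two distributional profiles (context-word counts)."""
        shared = set(p) & set(q)
        dot = sum(p[w] * q[w] for w in shared)
        norm = (math.sqrt(sum(v * v for v in p.values()))
                * math.sqrt(sum(v * v for v in q.values())))
        return dot / norm if norm else 0.0

    # Toy profiles for two terms whose similarity we want to estimate.
    car  = Counter({"road": 8, "drive": 6, "wheel": 4, "fuel": 3})
    auto = Counter({"road": 7, "drive": 5, "engine": 4, "fuel": 2})
    print(round(cosine(car, auto), 3))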

Relevance: 100.00%

Publisher:

Abstract:

The spread of the Semantic Web and of semantic data in RDF format has created the need for a mechanism that transforms such information, which is simple for a machine to interpret, into natural language that is easy for humans to understand. This dissertation discusses the solutions found in the literature and, in detail, RSLT, a JavaScript library that attempts to solve this problem by allowing the creation of web applications able to perform these transformations through declarative templates. It also illustrates all the changes and modifications introduced in version 1.1 of the library, whose main new feature is support for SPARQL 1.0.
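
RSLT's actual template syntax is not reproduced here, but the underlying idea (a declarative template applied to SPARQL selections over RDF to produce natural language) can be sketched in Python with rdflib:

    from rdflib import Graph

    # The RDF data would normally be fetched from the Semantic Web; inlined for brevity.
    g = Graph()
    g.parse(data="""
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    <http://example.org/tim> foaf:name "Tim" ; foaf:knows <http://example.org/ann> .
    <http://example.org/ann> foaf:name "Ann" .
    """, format="turtle")

    # A declarative template applied to each SPARQL result row: a rough analogue
    # of template-driven RDF-to-text transformation, not RSLT's real syntax.
    template = "{person} knows {friend}."
    q = """
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?pn ?fn WHERE { ?p foaf:name ?pn ; foaf:knows ?f . ?f foaf:name ?fn . }
    """
    for row in g.query(q):
        print(template.format(person=row.pn, friend=row.fn))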

Relevance: 100.00%

Publisher:

Abstract:

This dissertation proposes an analysis of the governance of European scientific research, focusing on the emergence of the Open Science paradigm: a new way of doing science, oriented towards the openness of every phase of the scientific research process and able to take full advantage of digital ICTs. The emergence of this paradigm is relatively recent, but in recent years it has become increasingly relevant. The European institutions have expressed a clear intention to embrace the Open Science paradigm (e.g., the European Open Science Cloud, EOSC, or the establishment of the Horizon Europe programme). This dissertation provides a conceptual framework for the multiple interventions of the European institutions in the field of Open Science, addressing the major legal challenges of its implementation. The study investigates the notion of Open Science, proposing a definition that takes into account all its dimensions related to the human and fundamental rights framework in which Open Science is grounded. The inquiry addresses the legal challenges related to the openness of research data, in light of the European Open Data framework and the impact of the GDPR on the context of Open Science. The last part of the study is devoted to the infrastructural dimension of the Open Science paradigm, exploring the e-infrastructures. The focus is on a specific type of computational infrastructure: the High Performance Computing (HPC) facility. The adoption of HPC for research is analysed from the European perspective, investigating the EuroHPC project, and from the local perspective, through a case study of the HPC facility of the University of Luxembourg, the ULHPC. This dissertation underlines the relevance of a legal coordination approach, involving all actors and phases of the process, in order to develop and implement the Open Science paradigm in adherence with the underlying human and fundamental rights.