89 results for Semantic Web, Exploratory Search, Recommendation Systems
Abstract:
The Semantic Web is growing at a fast pace, recently boosted by the creation of the Linked Data initiative and principles. Methods, standards, and techniques are maturing, and the state of the technology is easing the tasks of publishing and consuming semantic information on the Web.
Abstract:
The Semantic Web aims to allow machines to make inferences using the explicit conceptualisations contained in ontologies. By pointing to ontologies, Semantic Web-based applications can interoperate and share common information easily. Nevertheless, multilingual semantic applications are still rare, because most online ontologies are monolingual in English. Solving this issue requires techniques for ontology localisation and translation. However, traditional machine translation is difficult to apply to ontologies, because ontology labels tend to be short and linguistically different from the free-text paradigm. In this paper, we propose an approach to enhance machine translation of ontologies that exploits the well-structured concept descriptions contained in the ontology. In particular, our approach leverages the semantics contained in the ontology by using Cross-Lingual Explicit Semantic Analysis (CLESA) for context-based disambiguation in phrase-based Statistical Machine Translation (SMT). The work is novel in that, to the best of our knowledge, CLESA has not previously been applied in SMT.
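To make the disambiguation step concrete, here is a minimal sketch of CLESA-style candidate ranking: the ontology label's context and each candidate translation are represented as vectors in a shared Wikipedia concept space, and the candidate closest to the context wins. All vectors and candidate strings below are toy assumptions, not data from the paper.

    import math

    def cosine(u, v):
        # u, v: sparse concept vectors {concept_id: weight}
        dot = sum(w * v.get(c, 0.0) for c, w in u.items())
        nu = math.sqrt(sum(w * w for w in u.values()))
        nv = math.sqrt(sum(w * w for w in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    # Toy ESA vectors in a shared, language-independent Wikipedia concept space.
    source_context = {"Bank_(finance)": 0.9, "Loan": 0.4}
    candidates = {
        "banco (entidad financiera)": {"Bank_(finance)": 0.8, "Loan": 0.3},
        "banco (asiento)":            {"Bench_(furniture)": 0.9},
    }

    best = max(candidates, key=lambda t: cosine(source_context, candidates[t]))
    print(best)  # -> banco (entidad financiera)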
Abstract:
RDB2RDF systems generate RDF from relational databases, operating in two different manners: materialising the database content into RDF, or acting as virtual RDF datastores that transform SPARQL queries into SQL. In the former, inferences on the RDF data (taking into account the ontologies that it is related to) are normally done by the RDF triple store where the RDF data is materialised, and hence the results of the query answering process depend on the store. In the latter, existing RDB2RDF systems do not normally perform such inferences at query time. This paper shows how the algorithm used in the REQUIEM system, focused on handling run-time inferences for query answering, can be adapted to handle such inferences in combination with RDB2RDF systems.
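The flavour of such query-time inference can be illustrated with a small sketch: before the SPARQL-to-SQL translation, a query atom over a class is expanded into a union that also covers its subclasses (REQUIEM itself derives a union of conjunctive queries from the ontology axioms; the hierarchy below is a toy assumption).

    # Toy subclass hierarchy standing in for ontology axioms.
    subclass_of = {":Professor": [":AssistantProfessor", ":FullProfessor"]}

    def rewrite(cls, seen=None):
        # Return cls plus every class entailed to be a subclass of it.
        if seen is None:
            seen = set()
        if cls in seen:
            return []
        seen.add(cls)
        result = [cls]
        for sub in subclass_of.get(cls, []):
            result += rewrite(sub, seen)
        return result

    union = " UNION ".join("{ ?x a %s }" % c for c in rewrite(":Professor"))
    print("SELECT ?x WHERE { %s }" % union)
    # Each member of the union can then be translated to SQL independently.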
Abstract:
Social software tools have become an integral part of students' personal lives and their primary communication medium. Likewise, these tools are increasingly entering the enterprise world (within the recent trend known as Enterprise 2.0) and becoming a part of everyday work routines. Aiming to keep pace with job requirements and to position learning as an integral part of students' lives, the field of education is challenged to embrace social software. Personal Learning Environments (PLEs) emerged as a concept that makes use of social software to facilitate collaboration, knowledge sharing, group formation around common interests, active participation and reflective thinking in online learning settings. Furthermore, social software allows for establishing and maintaining one's presence in the online world. By being aware of a student's online presence, a PLE is better able to personalize the learning settings, e.g., through recommendation of content to use or people to collaborate with. Aiming to explore the potential of online presence for the provision of recommendations in PLEs, within the scope of the OP4L project, we have developed a software solution based on a synergy of Semantic Web technologies, online presence and socially-oriented learning theories. In this paper we present the current results of this research work.
Abstract:
Testbeds proposed so far to evaluate, compare, and eventually improve SPARQL query federation systems still have some limitations. Some variables and configurations that may have an impact on the behavior of these systems (e.g., network latency, data partitioning and query properties) are not sufficiently defined; this affects the results and repeatability of independent evaluation studies, and hence the insights that can be obtained from them. In this paper we evaluate FedBench, the most comprehensive testbed to date, and empirically probe the need to consider additional dimensions and variables. The evaluation has been conducted on three SPARQL query federation systems, and the analysis of the results has allowed us to uncover properties of these systems that would normally remain hidden with the original testbeds.
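For reference, the kind of workload under test looks like the following sketch: a federated query whose subqueries are shipped to remote endpoints with SERVICE, timed end to end. The endpoint URLs and predicates are placeholders; the timing is exactly the sort of measurement that uncontrolled variables (network latency, data partitioning) can distort.

    import time
    from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

    query = """
    SELECT ?drug ?company WHERE {
      SERVICE <http://example.org/drugbank/sparql> { ?drug <http://example.org/producedBy> ?company . }
      SERVICE <http://example.org/dbpedia/sparql>  { ?company <http://example.org/country> ?c . }
    }
    """

    endpoint = SPARQLWrapper("http://example.org/federation/sparql")
    endpoint.setQuery(query)
    endpoint.setReturnFormat(JSON)

    start = time.time()
    results = endpoint.query().convert()
    print("answers:", len(results["results"]["bindings"]),
          "elapsed: %.2fs" % (time.time() - start))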
Abstract:
The uptake of Linked Data (LD) has promoted the proliferation of datasets and their associated ontologies, which bring semantics to the data being published. These ontologies should be evaluated at different stages, both during their development and at publication time. Just as important as correctly modelling the intended part of the world to be captured in an ontology is publishing, sharing and facilitating the (re)use of the resulting model. In this paper, 11 evaluation characteristics related to publishing, sharing and facilitating reuse are proposed; in particular, 6 good practices and 5 pitfalls are presented, together with their associated detection methods. In addition, a grid-based rating system is generated. Both contributions, the set of evaluation characteristics and the grid system, can help ontologists to reuse existing LD vocabularies or to check the one being built.
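As an illustration of what a detection method can look like, the sketch below checks a single publication-related good practice: that the ontology URI dereferences to an RDF serialization under content negotiation. The URI is a placeholder and this is one hypothetical check, not the paper's full grid.

    import urllib.request

    def serves_rdf(ontology_uri):
        # Good practice: the ontology URI should answer content negotiation
        # for RDF media types with an RDF serialization.
        req = urllib.request.Request(
            ontology_uri,
            headers={"Accept": "application/rdf+xml, text/turtle"})
        with urllib.request.urlopen(req) as resp:
            ctype = resp.headers.get("Content-Type", "")
        return any(t in ctype for t in ("rdf+xml", "turtle", "ld+json"))

    print(serves_rdf("http://example.org/ontology"))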
Abstract:
The use of semantic and Linked Data technologies for Enterprise Application Integration (EAI) has been increasing in recent years. Linked Data and Semantic Web technologies such as the Resource Description Framework (RDF) data model provide several key advantages over the current de facto Web Service and XML based integration approaches. The flexibility gained by representing the data in the more versatile RDF model using ontologies avoids complex schema transformations, makes data more accessible through Web standards, and prevents the formation of data silos. These three benefits give Linked Data-based EAI an edge. However, work still has to be done so that these technologies can cope with the particularities of EAI scenarios in areas such as data control, ownership, consistency, and accuracy. The first part of the paper introduces Enterprise Application Integration using Linked Data and the requirements that EAI imposes on Linked Data technologies, focusing on one of the problems that arise in this scenario, the coreference problem, and presents a coreference service that supports the use of Linked Data in EAI systems. The proposed solution introduces the use of a context that aggregates a set of related identities and mappings from those identities to the different resources that reside in distinct applications and provide different views or aspects of the same entity. A detailed architecture of the Coreference Service is presented, explaining how it can be used to manage the contexts, identities, resources, and applications to which they relate. The paper shows how the proposed service can be used in an EAI scenario through an example involving a dashboard that integrates data from different systems, together with the proposed workflow for registering and resolving identities. As most enterprise applications are driven by business processes and involve legacy data, the proposed approach can be easily incorporated into enterprise applications.
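A minimal sketch of the context idea follows; the class and method names are illustrative, not the service's actual API.

    class Context:
        # A context aggregates the identities under which one real-world
        # entity is known across applications, mapping each identity to the
        # resource that holds that application's view of the entity.
        def __init__(self, entity_label):
            self.entity_label = entity_label
            self.identities = {}  # identity URI -> (application, resource URI)

        def register(self, identity, application, resource):
            self.identities[identity] = (application, resource)

        def resolve(self, identity):
            # All other views of the same entity, e.g. for a dashboard.
            return {i: r for i, r in self.identities.items() if i != identity}

    ctx = Context("ACME Corp.")
    ctx.register("urn:crm:cust/42", "CRM", "http://crm.example.org/customers/42")
    ctx.register("urn:erp:acct/7",  "ERP", "http://erp.example.org/accounts/7")
    print(ctx.resolve("urn:crm:cust/42"))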
Abstract:
New digital artifacts are emerging in data-intensive science. For example, scientific workflows are executable descriptions of scientific procedures that define the sequence of computational steps in an automated data analysis, supporting reproducible research and the sharing and replication of best practice and know-how through reuse. Workflows are specified at design time and interpreted through their execution in a variety of situations, environments, and domains. Hence it is essential to preserve both their static and dynamic aspects, along with the research context in which they are used. To achieve this, we propose the use of multidimensional digital objects (Research Objects) that aggregate the resources used and/or produced in scientific investigations, including workflow models, the provenance of their executions, and links to the relevant associated resources, along with technological support for their preservation and efficient retrieval and reuse. To this end, we specified a software architecture for the design and implementation of a Research Object preservation system, and realized this architecture with a set of services and clients, drawing together practices in digital libraries, preservation systems, workflow management, social networking and Semantic Web technologies. In this paper, we describe the backbone system of this realization, a digital library system built on top of dLibra.
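The aggregation at the heart of a Research Object can be sketched with the ORE vocabulary (the ORE terms are real; the aggregated URIs below are placeholders standing in for a workflow model, its execution provenance, and an associated resource).

    from rdflib import Graph, Namespace, URIRef
    from rdflib.namespace import RDF

    ORE = Namespace("http://www.openarchives.org/ore/terms/")
    ro = URIRef("http://example.org/ro/experiment-1")

    g = Graph()
    g.add((ro, RDF.type, ORE.Aggregation))
    g.add((ro, ORE.aggregates, URIRef("http://example.org/ro/experiment-1/workflow.t2flow")))
    g.add((ro, ORE.aggregates, URIRef("http://example.org/ro/experiment-1/provenance.ttl")))
    g.add((ro, ORE.aggregates, URIRef("http://example.org/ro/experiment-1/paper.pdf")))
    print(g.serialize(format="turtle"))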
Abstract:
Two complementary benchmarks have been proposed so far for the evaluation and continuous improvement of RDF stream processors: SRBench and LSBench. They put a special focus on different features of the evaluated systems, including the coverage of the streaming extensions of SPARQL supported by each processor, query processing throughput, and an early analysis of query evaluation correctness based on comparing the results obtained by different processors for a set of queries. However, neither of them has analysed the operational semantics of these processors in order to assess the correctness of query evaluation results. In this paper, we propose a characterization of the operational semantics of RDF stream processors, adapting well-known models used in the stream processing engine community: CQL and SECRET. Through this formalization, we address correctness in RDF stream processor benchmarks, allowing us to determine the multiple answers that systems should provide. Finally, we present CSRBench, an extension of SRBench that addresses query result correctness verification using an automatic method.
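A toy example of why operational semantics matter: under a SECRET-style analysis, two report policies for the same sliding window yield different, equally defensible answers over the same stream. The stream and window width below are illustrative assumptions.

    # Toy stream of (timestamp, value) pairs.
    stream = [(1, "a"), (2, "b"), (4, "c"), (5, "d")]

    def window_content(now, width):
        # Elements inside a sliding window of the given width ending at `now`.
        return [v for t, v in stream if now - width < t <= now]

    WIDTH = 3
    # Policy 1: report on every element arrival.
    on_arrival = {t: window_content(t, WIDTH) for t, _ in stream}
    # Policy 2: report only when the window closes, every WIDTH time units.
    on_close = {t: window_content(t, WIDTH) for t in range(WIDTH, 7, WIDTH)}

    print(on_arrival)  # {1: ['a'], 2: ['a', 'b'], 4: ['b', 'c'], 5: ['c', 'd']}
    print(on_close)    # {3: ['a', 'b'], 6: ['c', 'd']}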
Abstract:
Several different standardised and widespread formats exist to represent emotions, yet there is no standard semantic model. This paper presents a new ontology, called Onyx, that aims to become such a standard while adding concepts from the latest Semantic Web models. The ontology focuses in particular on representing the results of Emotion Analysis, but the model is abstract and inherits from previous standards and formats, so it can be used as a reference representation of emotions in any future application or ontology. To prove this, we have translated resources from their EmotionML representation to Onyx. We also present several ways in which developers could benefit from using this ontology instead of an ad-hoc representation. Our ultimate goal is to foster the use of semantic technologies for Emotion Analysis while following the Linked Data ideals.
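A sketch of what an Onyx-annotated result might look like follows; the property names are written from memory of the published ontology and should be checked against the Onyx specification before reuse.

    from rdflib import Graph, Namespace, Literal
    from rdflib.namespace import RDF

    ONYX = Namespace("http://www.gsi.dit.upm.es/ontologies/onyx/ns#")
    ex = Namespace("http://example.org/")

    g = Graph()
    # An analysis process producing an emotion set for an analysed text.
    g.add((ex.analysis1, RDF.type, ONYX.EmotionAnalysis))
    g.add((ex.review1, ONYX.hasEmotionSet, ex.set1))
    g.add((ex.set1, ONYX.hasEmotion, ex.emo1))
    g.add((ex.emo1, ONYX.hasEmotionCategory, ex.joy))
    g.add((ex.emo1, ONYX.hasEmotionIntensity, Literal(0.8)))
    print(g.serialize(format="turtle"))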
Abstract:
One of the challenges facing the current web is the efficient use of all the available information. The Web 2.0 phenomenon has favored the creation of content by average users, and thus the amount of information available on diverse topics has grown exponentially in recent years. Initiatives such as Linked Data are helping to build the Semantic Web, in which a set of standards is proposed for the exchange of data among heterogeneous systems. However, these standards are sometimes not used, and there are still plenty of websites that require naive techniques to discover their contents and services. This paper proposes an integrated framework for content and service discovery and extraction. The framework is divided into several layers where the discovery of contents and services is performed in a representational state transfer (REST) system such as the web. It employs several web mining techniques as well as feature-oriented modeling for the discovery of cross-cutting features in web resources. The framework is applied in a scenario of electronic newspapers: an intelligent agent crawls the web for related news, and uses services and visits links automatically according to its goal. This scenario illustrates how the discovery is made at different levels and how the use of semantics helps implement an agent that performs high-level tasks.
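The first, naive discovery step can be sketched as follows: fetch a page, collect the links to crawl next, and note whether the resource already exposes semantic markup (RDFa or microdata), in which case naive extraction is unnecessary. The URL is a placeholder.

    import urllib.request
    from html.parser import HTMLParser

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links, self.has_semantics = [], False

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "a" and "href" in attrs:
                self.links.append(attrs["href"])
            if "typeof" in attrs or "itemtype" in attrs:  # RDFa / microdata
                self.has_semantics = True

    with urllib.request.urlopen("http://example.org/news") as resp:
        html = resp.read().decode("utf-8", errors="replace")

    parser = LinkCollector()
    parser.feed(html)
    print(parser.has_semantics, parser.links[:5])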
Abstract:
Today's networked services are based on documents and hypertext links that relate those documents to one another without providing real information about the contents they represent; it could be said that this is a web designed by people to be interpreted by people. The main goal of recent years has been to guide this network towards a web of knowledge, in which information can be interpreted automatically by software agents. This transformation requires new technologies specially designed for content description, such as ontologies. Conventional networks are not the only ones evolving: the rapid growth of sensor networks and the sharp increase in the number of devices connected to the Internet make it necessary to bring Semantic Web technologies to this kind of network as well. This Final Degree Project uses the SSN ontology, designed for the semantic description of sensors and the networks they form part of, in order to allow better interaction between devices and the systems that use them. The work developed throughout this Final Degree Project revolves around this ontology, with the main objective being the semiautomatic generation of code from a system model described in terms of the classes and properties provided by SSN. To reach this goal, the project is divided into several parts. First, an analysis of the ontology is carried out; next, a simulated sensor system is described; and finally, applications are implemented for the automatic generation of interfaces and for the graphical representation of the system's devices, starting from the system's representation in an OWL file.
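A minimal sketch of the generation step, assuming the original SSN namespace and a placeholder file name: read the OWL system model with rdflib, enumerate the individuals typed as ssn:Sensor, and emit a stub interface for each.

    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF

    SSN = Namespace("http://purl.oclc.org/NET/ssnx/ssn#")

    g = Graph()
    g.parse("system.owl")  # the OWL description of the simulated sensor network

    for sensor in g.subjects(RDF.type, SSN.Sensor):
        # Derive a class name from the local part of the sensor URI.
        name = str(sensor).rsplit("/", 1)[-1].rsplit("#", 1)[-1]
        print(f"class {name.capitalize()}Interface:")
        print("    def read_observation(self): ...")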
Abstract:
Language resources, such as multilingual lexica and multilingual electronic dictionaries, contain collections of lexical entries in several languages. Access to the corresponding explicit or implicit translation relations between such entries is of great interest for many NLP-based applications. By using Semantic Web-based techniques, translations can be made available on the Web to be consumed by other (semantics-enabled) resources directly, without relying on application-specific formats. To that end, in this paper we propose a model for representing translations as linked data, as an extension of the lemon model. Our translation module represents the core information associated with term translations and does not commit to specific views or translation theories. As a proof of concept, we have extracted the translations of the terms contained in Terminesp, a multilingual terminological database, and represented them as linked data. We have made them accessible on the Web both for humans (via a Web interface) and for software agents (via a SPARQL endpoint).
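The shape of such a translation module can be sketched as follows; the translation namespace and property names are illustrative stand-ins for the module's actual vocabulary, while the lemon namespace is the published one.

    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF

    LEMON = Namespace("http://lemon-model.net/lemon#")
    TR = Namespace("http://example.org/translation#")  # placeholder namespace
    ex = Namespace("http://example.org/lexicon/")

    g = Graph()
    # Two lexical entries in different languages, linked by a reified
    # translation resource rather than a bare property.
    g.add((ex.bank_en, RDF.type, LEMON.LexicalEntry))
    g.add((ex.banco_es, RDF.type, LEMON.LexicalEntry))
    g.add((ex.trans1, RDF.type, TR.Translation))
    g.add((ex.trans1, TR.translationSource, ex.bank_en))
    g.add((ex.trans1, TR.translationTarget, ex.banco_es))
    print(g.serialize(format="turtle"))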
Abstract:
The W3C Linked Data Platform (LDP) candidate recommendation defines a standard HTTP-based protocol for read/write Linked Data. The W3C R2RML recommendation defines a language to map relational databases (RDBs) and RDF. This paper presents morph-LDP, a novel system that combines these two W3C standardization initiatives to expose relational data as read/write Linked Data for LDP-aware applications, whilst allowing legacy applications to continue using their relational databases.
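The read/write interaction an LDP-aware client would have with such a system can be sketched as follows (the server URL and resource data are placeholders): a GET reads relational rows as Turtle, and a POST to a container is translated back into the database through the R2RML mappings.

    import requests  # pip install requests

    base = "http://example.org/ldp/employees/"

    # Read: relational data exposed as Linked Data.
    r = requests.get(base + "42", headers={"Accept": "text/turtle"})
    print(r.status_code, r.text[:200])

    new_employee = """
    @prefix ex: <http://example.org/ns#> .
    <> a ex:Employee ; ex:name "Ada Lovelace" .
    """
    # Write: the LDP POST becomes an insert in the underlying database.
    r = requests.post(base, data=new_employee.encode("utf-8"),
                      headers={"Content-Type": "text/turtle"})
    print(r.status_code, r.headers.get("Location"))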
Abstract:
Background: Semantic Web technologies have been widely applied in the life sciences, for example by data providers such as OpenLifeData and through web services frameworks such as SADI. The recently reported OpenLifeData2SADI project offers access to the vast OpenLifeData data store through SADI services. Findings: This article describes how to merge data retrieved from OpenLifeData2SADI with other SADI services using the Galaxy bioinformatics analysis platform, thus making this semantic data more amenable to complex analyses. This is demonstrated with a working example, which is made distributable and reproducible through a Docker image that includes the SADI tools, along with the data and workflows that constitute the demonstration. Conclusions: The combination of Galaxy and Docker offers a solution for faithfully reproducing and sharing complex data-retrieval and analysis workflows based on the SADI Semantic Web service design patterns.
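The basic SADI interaction that the Galaxy tools wrap can be sketched as follows: a service is invoked by POSTing input RDF and reading the output RDF that decorates the same input resource. The service URL, input data and media types below are placeholder assumptions.

    import requests

    input_rdf = """
    @prefix ex: <http://example.org/ns#> .
    <http://example.org/protein/P12345> a ex:Protein .
    """
    # POST input RDF to the service; the response is RDF about the same
    # resource, enriched with the service's output properties.
    r = requests.post("http://example.org/sadi/getSequence",
                      data=input_rdf.encode("utf-8"),
                      headers={"Content-Type": "text/rdf+n3",
                               "Accept": "text/rdf+n3"})
    print(r.status_code)
    print(r.text[:300])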