908 results for Semantic Web Services


Relevance:

80.00%

Publisher:

Abstract:

Testbeds proposed so far to evaluate, compare, and eventually improve SPARQL query federation systems still have some limitations. Some variables and configurations that may have an impact on the behavior of these systems (e.g., network latency, data partitioning and query properties) are not sufficiently defined; this affects the results and repeatability of independent evaluation studies, and hence the insights that can be obtained from them. In this paper we evaluate FedBench, the most comprehensive testbed to date, and empirically probe the need to consider additional dimensions and variables. The evaluation has been conducted on three SPARQL query federation systems, and the analysis of the results has uncovered properties of these systems that would normally remain hidden with the original testbeds.
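A minimal sketch of the kind of federated query such systems evaluate: the SPARQL 1.1 SERVICE keyword delegates parts of a query to remote endpoints. The endpoint URLs and the mediator address below are illustrative assumptions, not part of FedBench.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

query = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?person ?name WHERE {
  SERVICE <http://example.org/endpointA/sparql> {   # hypothetical endpoint
    ?person a foaf:Person .
  }
  SERVICE <http://example.org/endpointB/sparql> {   # hypothetical endpoint
    ?person foaf:name ?name .
  }
}
LIMIT 10
"""

sparql = SPARQLWrapper("http://example.org/federator/sparql")  # hypothetical federation engine
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["person"]["value"], row["name"]["value"])
```

Dimensions such as network latency between the two endpoints or how the data is partitioned across them are exactly the variables the paper argues must be controlled for repeatable results.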

Relevance:

80.00%

Publisher:

Abstract:

In the context of the Semantic Web, resources on the net can be enriched with well-defined, machine-understandable metadata describing their associated conceptual meaning. These metadata, consisting of natural language descriptions of concepts, are the focus of the activity described in this chapter, namely ontology localization. In the framework of the NeOn Methodology, ontology localization is defined as the activity of adapting an ontology to a particular language and culture. This adaptation mainly involves translating the natural language descriptions of the ontology from a source natural language to a target natural language, with the final objective of obtaining a multilingual ontology, that is, an ontology documented in several natural languages. The purpose of this chapter is to provide detailed and prescriptive methodological guidelines to support the performance of this activity.
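A minimal sketch, with rdflib, of the core outcome of ontology localization: attaching natural-language labels in target languages to an ontology concept, yielding a multilingual ontology. The ontology namespace and the translations are illustrative assumptions.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS, OWL

EX = Namespace("http://example.org/ontology#")  # hypothetical ontology namespace
g = Graph()

river = EX.River
g.add((river, RDF.type, OWL.Class))
g.add((river, RDFS.label, Literal("River", lang="en")))   # source-language label

# Localization step: add target-language labels for the same concept
g.add((river, RDFS.label, Literal("Río", lang="es")))
g.add((river, RDFS.label, Literal("Fluss", lang="de")))

# Inspect the labels per language
for _, _, label in g.triples((river, RDFS.label, None)):
    print(label.language, str(label))
```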

Relevance:

80.00%

Publisher:

Abstract:

The use of semantic and Linked Data technologies for Enterprise Application Integration (EAI) has been increasing in recent years. Linked Data and Semantic Web technologies such as the Resource Description Framework (RDF) data model provide several key advantages over the current de facto Web Service and XML based integration approaches. The flexibility of representing the data in the more versatile RDF model using ontologies avoids complex schema transformations, makes data more accessible through Web standards, and prevents the formation of data silos. These three benefits give Linked Data-based EAI an edge. However, work still has to be done so that these technologies can cope with the particularities of EAI scenarios in terms of data control, ownership, consistency, and accuracy. The first part of the paper introduces Enterprise Application Integration using Linked Data and the requirements that EAI imposes on Linked Data technologies, focusing on one of the problems that arise in this scenario, the coreference problem, and presents a coreference service that supports the use of Linked Data in EAI systems. The proposed solution introduces the use of a context that aggregates a set of related identities and mappings from those identities to the different resources that reside in distinct applications and provide different views or aspects of the same entity. A detailed architecture of the Coreference Service is presented, explaining how it can be used to manage the contexts, identities, resources, and applications to which they relate. The paper then shows how the proposed service can be used in an EAI scenario, using an example involving a dashboard that integrates data from different systems, together with the proposed workflow for registering and resolving identities. As most enterprise applications are driven by business processes and involve legacy data, the proposed approach can be easily incorporated into enterprise applications.
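An illustrative sketch (not the paper's implementation) of the context idea: a context groups the identities that different applications use for the same real-world entity and maps each identity to the resource that describes it in each application. All names and URIs are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    """Aggregates identities and resource mappings for one entity."""
    entity: str
    identities: set = field(default_factory=set)    # identifiers used across applications
    mappings: dict = field(default_factory=dict)    # identity -> (application, resource URI)

    def register(self, identity: str, application: str, resource: str):
        self.identities.add(identity)
        self.mappings[identity] = (application, resource)

    def resolve(self, identity: str):
        """Return every known view of the entity behind a given identity."""
        return list(self.mappings.values()) if identity in self.identities else []

ctx = Context(entity="ACME Ltd.")
ctx.register("crm:customer/42", "CRM", "http://crm.example.org/customers/42")
ctx.register("erp:account/A-17", "ERP", "http://erp.example.org/accounts/A-17")
print(ctx.resolve("crm:customer/42"))   # both the CRM and ERP views of the same entity
```

A dashboard like the one in the paper's example would call a resolve operation of this kind to pull together the different views of an entity scattered across applications.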

Relevance:

80.00%

Publisher:

Abstract:

This project aims to create a general procedure for implementing image processing applications on IP video cameras and for distributing the resulting information through Service Oriented Architectures (SOA). The main goal is to create an application that runs on an IP video camera and performs basic processing on the captured images (detection of colours, shapes and patterns), distributing the results of this processing through the SOA architectures described in the DPWS (Device Profile for Web Services) specification. The study focuses primarily on the automatic transformation of image processing code written in Matlab (.m files) into ANSI C code (.c files), which is then compiled for the camera's processor architecture (CRIS, similar to RISC but with a reduced instruction set).
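A toy sketch, in Python rather than the Matlab-to-C pipeline described above, of the kind of basic processing mentioned (colour detection): counting the pixels of an RGB frame that fall inside a "red" threshold. The thresholds and frame dimensions are illustrative assumptions.

```python
import numpy as np

def detect_red(frame: np.ndarray, min_red: int = 150, max_other: int = 100) -> float:
    """Return the fraction of pixels considered red in an HxWx3 uint8 frame."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    mask = (r >= min_red) & (g <= max_other) & (b <= max_other)
    return float(mask.mean())

# Synthetic test frame: left half red, right half black
frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[:, :80, 0] = 200
print(f"red fraction: {detect_red(frame):.2f}")   # ~0.50
```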

Relevance:

80.00%

Publisher:

Abstract:

This Final Degree Project focuses on the study of technologies applied to the Digital Home in order to develop new tools for this sector. The first technology studied is LonWorks, chosen because the EUITT has a home automation environment based on this technology, materialized as a mock-up of a LonWorks installation in a Digital Home. On this environment, a new way of managing and controlling the elements of a LonWorks network has been developed using Universal Plug & Play (UPnP) technology. The LonWorks devices have been wrapped with a layer that allows them to be treated as UPnP devices, so that all elements can work under a single device format, defined by a document called the UPnP Device Description. It is therefore not necessary to know the state of the devices in the LonWorks network; instead, one works only with a UPnP network, creating new UPnP devices, control points and services from the LonWorks elements. Once the LonWorks system had been wrapped with UPnP, an Android application was designed and developed that allows the elements of the Digital Home mock-up to be controlled from a smartphone or a tablet. Access to the LonWorks network of the mock-up is provided through the SOAP/XML Web Services interface of the iLON100 device.
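A minimal sketch of the "single device format" referred to above: generating a UPnP Device Description document for a LonWorks actuator exposed as a standard UPnP BinaryLight. All field values are illustrative assumptions, not taken from the project.

```python
import xml.etree.ElementTree as ET

def device_description(friendly_name: str, udn: str) -> str:
    """Build a minimal UPnP Device Description for a wrapped LonWorks node."""
    root = ET.Element("root", xmlns="urn:schemas-upnp-org:device-1-0")
    spec = ET.SubElement(root, "specVersion")
    ET.SubElement(spec, "major").text = "1"
    ET.SubElement(spec, "minor").text = "0"
    dev = ET.SubElement(root, "device")
    ET.SubElement(dev, "deviceType").text = "urn:schemas-upnp-org:device:BinaryLight:1"
    ET.SubElement(dev, "friendlyName").text = friendly_name
    ET.SubElement(dev, "manufacturer").text = "LonWorks wrapper (illustrative)"
    ET.SubElement(dev, "UDN").text = udn
    return ET.tostring(root, encoding="unicode")

print(device_description("Living room lamp (LonWorks node 12)",        # hypothetical device
                         "uuid:00000000-0000-0000-0000-000000000012"))
```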

Relevance:

80.00%

Publisher:

Abstract:

Two complementary benchmarks have been proposed so far for the evaluation and continuous improvement of RDF stream processors: SRBench and LSBench. They put a special focus on different features of the evaluated systems, including the coverage of the streaming extensions of SPARQL supported by each processor, query processing throughput, and an early analysis of query evaluation correctness based on comparing the results obtained by different processors for a set of queries. However, neither of them has analysed the operational semantics of these processors in order to assess the correctness of query evaluation results. In this paper, we propose a characterization of the operational semantics of RDF stream processors, adapting well-known models from the stream processing engine community: CQL and SECRET. Through this formalization, we address correctness in RDF stream processor benchmarks, making it possible to determine the multiple answers that systems should provide. Finally, we present CSRBench, an extension of SRBench that addresses query result correctness verification using an automatic method.
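An illustrative sketch of why operational semantics matter here: a CQL-style time-based sliding window over a timestamped RDF stream. Depending on window width, slide, and when evaluation is triggered (the dimensions SECRET makes explicit), different engines may legitimately report different results for the same query. Stream contents and parameters are made up for illustration.

```python
def time_windows(stream, width, slide, t_end):
    """stream: list of (timestamp, triple); yields (t, triples in (t - width, t])."""
    t = slide
    while t <= t_end:
        window = [triple for ts, triple in stream if t - width < ts <= t]
        yield t, window
        t += slide

stream = [(1, "(:s1 :temp 20)"), (2, "(:s1 :temp 21)"), (4, "(:s1 :temp 25)")]
for t, contents in time_windows(stream, width=2, slide=2, t_end=4):
    print(f"t={t}: {contents}")
# t=2 sees the readings at ts 1 and 2; t=4 sees only the reading at ts 4
```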

Relevance:

80.00%

Publisher:

Abstract:

Query rewriting is one of the fundamental steps in ontology-based data access (OBDA) approaches. It takes as input an ontology and a query written according to that ontology, and produces as output a set of queries that should be evaluated to account for the inferences that have to be considered for that query and ontology. Different query rewriting systems support different ontology languages of varying expressiveness, and the rewritten queries they produce also vary in expressiveness. This heterogeneity has traditionally made it difficult to compare approaches, and the area generally lacks commonly agreed benchmarks that could be used not only for such comparisons but also for improving OBDA support. In this paper we compile the data, dimensions and measurements that have been used to evaluate some of the most recent systems, analyse and characterise these assets, and provide a unified set of them that can be used as a starting point towards a more systematic benchmarking process for such systems. Finally, we apply this initial benchmark to some of the most relevant OBDA approaches in the state of the art.
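A toy sketch of the rewriting step for a very simple ontology language: given subclass axioms, a query over a class is rewritten into the union of queries over that class and all its direct or indirect subclasses. Real rewriting systems handle far richer languages (e.g. OWL 2 QL); the class names here are illustrative.

```python
def subclasses_of(cls, axioms):
    """axioms: set of (sub, sup) pairs; returns cls plus all of its descendants."""
    result, frontier = {cls}, {cls}
    while frontier:
        frontier = {sub for sub, sup in axioms if sup in frontier} - result
        result |= frontier
    return result

axioms = {("Professor", "Teacher"), ("Lecturer", "Teacher"),
          ("FullProfessor", "Professor")}

# The single query over :Teacher is rewritten into four queries whose union
# of answers accounts for the inferences licensed by the subclass axioms.
rewriting = [f"SELECT ?x WHERE {{ ?x a :{c} }}"
             for c in sorted(subclasses_of("Teacher", axioms))]
for q in rewriting:
    print(q)
```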

Relevance:

80.00%

Publisher:

Abstract:

The implementation of Internet technologies has led to e-Manufacturing technologies becoming more widely used and to the development of tools for compiling, transforming and synchronising manufacturing data through the Web. In this context, a potential area for development is the extension of virtual manufacturing to performance measurement (PM) processes, a critical area for decision making and for implementing improvement actions in manufacturing. This paper proposes a PM information framework to integrate decision support systems in e-Manufacturing. Specifically, the proposed framework offers a homogeneous PM information exchange model that can be applied to decision support in e-Manufacturing environments. Its application improves the interoperability needed in decision-making data processing tasks. It comprises three sub-systems: a data model, a PM information platform and a PM Web services architecture. A practical example of data exchange for measurement processes in the area of equipment maintenance demonstrates the utility of the model.
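An illustrative sketch (not the paper's actual schema) of what a homogeneous exchange record for a maintenance-related performance measurement could look like, serialised to JSON for consumption by decision-support services. The field names and values are assumptions.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class PerformanceMeasurement:
    indicator: str          # e.g. "MTBF" (mean time between failures)
    equipment_id: str
    value: float
    unit: str
    period: str             # ISO 8601 interval

record = PerformanceMeasurement(indicator="MTBF",
                                equipment_id="PRESS-07",       # hypothetical machine
                                value=412.5,
                                unit="hours",
                                period="2024-01-01/2024-03-31")
print(json.dumps(asdict(record), indent=2))   # payload a PM web service could exchange
```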

Relevance:

80.00%

Publisher:

Abstract:

This paper describes a knowledge acquisition tool for the construction and maintenance of the knowledge model of an intelligent system for emergency management in the field of hydrology. The tool has been developed following an innovative approach aimed at end users who are not familiar with computer-oriented terminology. According to this approach, the tool is conceived as a document processor specialized in a particular domain (hydrology), so that the whole knowledge model is viewed by the user as an electronic document. The paper first describes the characteristics of the knowledge model of the intelligent system and summarizes the problems we found during the development and maintenance of this type of model. It then describes the KATS tool, a software application that we designed to help with this task and that is intended for users who are not experts in computer programming. Finally, the paper compares KATS with other approaches to knowledge acquisition.

Relevance:

80.00%

Publisher:

Abstract:

The purpose of this project is to develop the courses module of CloudRoom, a Massive Open Online Courses (MOOC) platform. This module is part of a service-oriented architecture (SOA) and of a cloud computing infrastructure built on Amazon Web Services (AWS). Our goal is to design a robust Software as a Service (SaaS) with the qualities expected of such a product: high availability, high performance, a great user experience and high extensibility. To achieve this, we integrate recent technology trends in the development of distributed systems: Neo4j, Node.JS, RESTful services and CoffeeScript. All of this follows a PLAN-DO-CHECK development strategy using Scrum and agile practices.
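A minimal sketch, written in Python/Flask rather than the project's Node.JS and CoffeeScript stack, of the style of RESTful resource a courses module exposes. Routes and fields are illustrative assumptions.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
COURSES = {}          # in-memory stand-in for a Neo4j-backed course store

@app.route("/courses", methods=["GET"])
def list_courses():
    """Return every registered course."""
    return jsonify(list(COURSES.values()))

@app.route("/courses", methods=["POST"])
def create_course():
    """Register a new course from the JSON body of the request."""
    course = request.get_json()
    COURSES[course["id"]] = course
    return jsonify(course), 201

if __name__ == "__main__":
    app.run()
```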

Relevance:

80.00%

Publisher:

Abstract:

Cloud Computing is currently booming, and more and more major companies in the Information Technology sector are betting heavily on these services. On the one hand, some of them offer such services, as Amazon does with its IaaS (Infrastructure as a Service) offering Amazon Web Services (AWS); on the other hand, some of them consume them, as is the case in this project, in which Telefonica I+D makes use of the services provided by AWS for its projects. Given this growth in the use of distributed applications, it is important to consider the role of the developers and system administrators who have to work with and maintain all the remote machines of one or several projects from a single local machine. The main goal of this project is to help perform these tasks in the most convenient and automated way possible. Specifically, the goal of this project is the design and implementation of a software solution that improves productivity in the development and deployment of applications on a set of remote machines from a single local machine, starting from a previously developed proof of concept that exercised the most basic functionality of the libraries used to build the tool. Throughout this project, the alternatives available on the market that offer at least part of a solution to the problems addressed have been studied, even though the company's requirements stated that the tool had to be implemented in full. The initial proof of concept was then studied in depth and, with the knowledge acquired on the subject, improved to meet the stated goals. After the complete development and implementation of the tool, possible lines of future work are proposed.
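An illustrative sketch (not the tool itself) of the basic task such a solution automates: running the same command on several remote machines from one local machine over SSH. Host names, user name and key path are assumptions.

```python
import os
import paramiko

HOSTS = ["app-1.example.org", "app-2.example.org"]   # hypothetical remote instances

def run_everywhere(command: str):
    """Execute one command on every remote host and print its output."""
    for host in HOSTS:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username="deploy",                       # hypothetical user
                       key_filename=os.path.expanduser("~/.ssh/id_rsa"))
        _, stdout, _ = client.exec_command(command)
        print(f"[{host}] {stdout.read().decode().strip()}")
        client.close()

run_everywhere("uptime")
```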

Relevance:

80.00%

Publisher:

Abstract:

The web has undergone a drastic transformation in recent years, mainly due to its popularization and the enormous amount of information it hosts. These factors have driven the leap from the so-called Web of Documents to the Semantic Web, where all information is interlinked. The main advantages of linked information lie in its ease of reuse, its accessibility and its availability to be found by users. This work aims to highlight the usefulness of Linked Data applied to the geographic domain and to show how it can be used today. To this end, spatial Linked Data from different sources has been exploited through external servers, also called SPARQL endpoints. In addition, a private server capable of providing Linked Data stored on a personal computer has been used. The exploitation of the linked information has been implemented in a web application written in JavaScript, which tries to completely abstract the user from the internal data handling of the application. This application also includes modules and options for interacting with the queries sent to the servers, providing a more intuitive and pleasant environment for the user.
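A minimal sketch of consuming spatial Linked Data from a public SPARQL endpoint (DBpedia is used here as an example), analogous to what the JavaScript application does against its configured endpoints; shown in Python with SPARQLWrapper rather than in the application's own code.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setReturnFormat(JSON)
sparql.setQuery("""
PREFIX dbo:  <http://dbpedia.org/ontology/>
PREFIX geo:  <http://www.w3.org/2003/01/geo/wgs84_pos#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?name ?lat ?long WHERE {
  ?city a dbo:City ;
        rdfs:label ?name ;
        geo:lat ?lat ;
        geo:long ?long .
  FILTER (lang(?name) = "en")
} LIMIT 5
""")

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["name"]["value"], row["lat"]["value"], row["long"]["value"])
```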

Relevance:

80.00%

Publisher:

Abstract:

Within the framework of the European FI-WARE project, the CoNWeT Lab (a laboratory of ETSIINF, UPM) has implemented the WStore web platform, a reference implementation of the Store Generic Enabler belonging to that project. The goal of FI-WARE is to create the core platform of the Future Internet (IoF) with the intention of increasing Europe's global competitiveness in IT. The project introduces an innovative infrastructure for the creation and distribution of digital services on the Internet. WStore offers service providers a platform on which to publish their offerings and from which customers can access them. These providers offer web services, applications, widgets and data sets, in the same way that Google offers applications in the Google Play online store or Apple in the App Store. WStore is currently implemented as a web platform, so an organization wishing to offer the store service needs to install the software on its own server and have a domain from which to offer its products. The goal of this work is to migrate WStore to a cloud computing environment so that a single instance offers the service to organizations that want their own platform, over which they will have full control as if it were running on their own infrastructure. To this end, a version of WStore is implemented, deployed on a cloud infrastructure and offered as Software as a Service. The implementation includes a set of code modules that can optionally be added during the installation process if the installed instance is to be multi-tenant. In addition, this work studies and tests the tools that MongoDB offers for deploying the multi-tenant WStore platform on a cloud infrastructure: replica sets and sharding, which make it possible to deploy a scalable, highly available database.
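A minimal sketch, via PyMongo, of the two MongoDB mechanisms mentioned: connecting to a replica set for high availability, and enabling sharding on a per-tenant key for horizontal scalability. Host names, database name and shard key are illustrative assumptions; the sharding commands must be sent to a mongos router of an already configured sharded cluster.

```python
from pymongo import MongoClient

# Replica set connection: the driver fails over between members automatically
client = MongoClient("mongodb://db-1.example.org,db-2.example.org,db-3.example.org"
                     "/?replicaSet=rs0")

# Sharding (issued against a mongos router): spread a multi-tenant collection
# across shards using the tenant identifier as the shard key
admin = MongoClient("mongodb://mongos.example.org:27017").admin
admin.command("enableSharding", "wstore")                              # hypothetical database
admin.command("shardCollection", "wstore.offerings", key={"tenant_id": 1})
```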

Relevance:

80.00%

Publisher:

Abstract:

The network services we know today are based on documents and hypertext links that relate them to one another without providing real information about the content they represent. It could be said that this is "a web designed by people to be interpreted by people". The main goal of recent years has been to steer this web towards a web of knowledge, in which information can be interpreted automatically by software agents. This transformation requires new technologies specifically designed for describing content, such as ontologies. Conventional networks are not the only ones evolving: the rapid growth of sensor networks and the significant increase in the number of devices connected to the Internet make it necessary to bring Semantic Web technologies to this kind of network. This Final Degree Project uses the SSN ontology, designed for the semantic description of sensors and of the networks they belong to, in order to allow better interaction between the devices and the systems that make use of them. The work developed throughout this project revolves around this ontology, its main objective being the semi-automatic generation of code from a system model described in terms of the classes and properties provided by SSN. To reach this goal, the project is divided into several parts. First, an analysis of the ontology is carried out. Next, a simulated sensor system is described. Finally, the applications are implemented: one automatically generates the interfaces and the other graphically represents the devices of the sensor system, starting from its representation in an OWL file.
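A small sketch of the first step of such code generation: loading an OWL system description with rdflib and enumerating the systems and sensors it declares. The W3C SSN/SOSA namespaces are used here as an assumption, and the input file name is hypothetical.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

SSN  = Namespace("http://www.w3.org/ns/ssn/")
SOSA = Namespace("http://www.w3.org/ns/sosa/")

g = Graph()
g.parse("sensor_system.owl")             # hypothetical system model file

for system in g.subjects(RDF.type, SSN.System):
    print("System:", system)
    # ssn:hasSubSystem links a system to its parts (e.g. individual sensors)
    for part in g.objects(system, SSN.hasSubSystem):
        print("  part:", part)

for sensor in g.subjects(RDF.type, SOSA.Sensor):
    for prop in g.objects(sensor, SOSA.observes):
        print(f"{sensor} observes {prop}")
```

A generator of the kind described above would walk these classes and properties and emit interface code for each declared device instead of just printing them.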

Relevance:

80.00%

Publisher:

Abstract:

Online services are no longer isolated. The release of public APIs and technologies such as web hooks is allowing users and developers to access their information easily. Intelligent agents could use this information to provide a better user experience across services, connecting services with smart automatic behaviours or actions. However, agent platforms are not prepared to easily incorporate external sources such as web services, which hinders the use of agents in the so-called Evented or Live Web. As a solution, this paper introduces an event-based architecture for agent systems, in line with the new tendencies in web programming. In particular, it focuses on personal agents that interact with several web services. With this architecture, called MAIA, connecting to new web services does not require any modification to the platform.
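An illustrative sketch (not the MAIA implementation) of the event-based idea: incoming web hook notifications from external services are turned into events and dispatched to whichever personal-agent handlers have subscribed, so supporting a new service only means registering new handlers. Routes, event names and payload fields are assumptions.

```python
from collections import defaultdict
from flask import Flask, request

app = Flask(__name__)
handlers = defaultdict(list)                 # event type -> subscribed agent callbacks

def subscribe(event_type, handler):
    """Register an agent behaviour for a given event type."""
    handlers[event_type].append(handler)

@app.route("/webhooks/<event_type>", methods=["POST"])
def receive(event_type):
    """Web hook endpoint: convert the notification into an event and dispatch it."""
    payload = request.get_json(force=True)
    for handler in handlers[event_type]:
        handler(payload)                     # deliver the event to each subscribed agent
    return "", 204

# A personal agent reacting to a (hypothetical) calendar service notification
subscribe("calendar.event_created",
          lambda e: print("agent: scheduling reminder for", e.get("title")))

if __name__ == "__main__":
    app.run()
```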