43 results for SOAP web service

at Universidad Politécnica de Madrid


Relevance:

100.00%

Publisher:

Abstract:

Background: Semantic Web technologies have been widely applied in the life sciences, for example by data providers such as OpenLifeData and through web services frameworks such as SADI. The recently reported OpenLifeData2SADI project offers access to the vast OpenLifeData data store through SADI services. Findings: This article describes how to merge data retrieved from OpenLifeData2SADI with other SADI services using the Galaxy bioinformatics analysis platform, thus making this semantic data more amenable to complex analyses. This is demonstrated using a working example, which is made distributable and reproducible through a Docker image that includes SADI tools, along with the data and workflows that constitute the demonstration. Conclusions: The combination of Galaxy and Docker offers a solution for faithfully reproducing and sharing complex data retrieval and analysis workflows based on the SADI Semantic web service design patterns.
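The Docker-based packaging described above lends itself to a short illustration. The sketch below uses the Python Docker SDK to pull and launch an image bundling Galaxy with the SADI tools; the image name and port are hypothetical placeholders, not the image actually published with the article.

```python
# Minimal sketch: launching a Galaxy + SADI Docker image with the Python Docker SDK.
# "example/galaxy-sadi" and port 8080 are hypothetical placeholders, not the
# actual image published with the article.
import docker

client = docker.from_env()

# Pull the (hypothetical) image and start it, mapping Galaxy's web port to the host.
client.images.pull("example/galaxy-sadi")
container = client.containers.run(
    "example/galaxy-sadi",
    detach=True,
    ports={"8080/tcp": 8080},  # Galaxy UI then reachable on http://localhost:8080
    name="galaxy-sadi-demo",
)
print(f"Started container {container.short_id}; the demo workflows can now be run in Galaxy.")
```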

Relevance:

90.00%

Publisher:

Abstract:

Interlinking text documents with Linked Open Data enables the Web of Data to be used as background knowledge within document-oriented applications such as search and faceted browsing. As a step towards interconnecting the Web of Documents with the Web of Data, we developed DBpedia Spotlight, a system for automatically annotating text documents with DBpedia URIs. DBpedia Spotlight allows users to configure the annotations to their specific needs through the DBpedia Ontology and quality measures such as prominence, topical pertinence, contextual ambiguity and disambiguation confidence. We compare our approach with the state of the art in disambiguation, and evaluate our results in light of three baselines and six publicly available annotation systems, demonstrating the competitiveness of our system. DBpedia Spotlight is shared as open source and deployed as a Web Service freely available for public use.
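Since the abstract notes that DBpedia Spotlight is deployed as a freely available web service, a short call illustrates the idea. The sketch below assumes the commonly documented public REST endpoint at api.dbpedia-spotlight.org and its text/confidence parameters; if that endpoint or its parameter names have changed, adjust accordingly.

```python
# Minimal sketch: annotating text with the public DBpedia Spotlight web service.
# Endpoint and parameter names follow the commonly documented public API and may
# differ from the deployment described in the paper.
import requests

ENDPOINT = "https://api.dbpedia-spotlight.org/en/annotate"  # assumed public endpoint

text = "Berlin is the capital of Germany and a hub for semantic web research."
response = requests.get(
    ENDPOINT,
    params={"text": text, "confidence": 0.5},  # confidence filters low-quality spots
    headers={"Accept": "application/json"},
    timeout=30,
)
response.raise_for_status()

# Each returned resource carries the DBpedia URI chosen for a surface form in the text.
for resource in response.json().get("Resources", []):
    print(resource.get("@surfaceForm"), "->", resource.get("@URI"))
```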

Relevance:

90.00%

Publisher:

Abstract:

Compile-time program analysis techniques can be applied to Web service orchestrations to prove or check various properties. In particular, service orchestrations can be subjected to resource analysis, in which safe approximations of upper and lower resource usage bounds are deduced. A uniform analysis can be simultaneously performed for different generalized resources that can be directly correlated with cost- and performance-related quality attributes, such as invocations of partners, network traffic, number of activities, iterations, and data accesses. The resulting safe upper and lower bounds do not depend on probabilistic assumptions, and are expressed as functions of the size or length of data components from an initiating message, using a fine-grained structured data model that corresponds to the XML style of information structuring. The analysis is performed by transforming a BPEL-like representation of an orchestration into an equivalent program in another programming language for which the appropriate analysis tools already exist.
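As an illustration of the kind of output such an analysis produces, consider a hypothetical orchestration that invokes one partner service per item of the incoming message; the concrete bound functions below are invented for this example, not taken from the paper.

```python
# Hypothetical example of the bound functions a resource analysis might infer
# for an orchestration that loops over the n items of an incoming message and
# calls one partner service per item. The formulas are illustrative only.

def partner_invocations_bounds(n: int) -> tuple[int, int]:
    """Lower and upper bounds on partner invocations as a function of input size n."""
    return n, n  # exactly one invocation per item

def exchanged_messages_bounds(n: int) -> tuple[int, int]:
    """Bounds on messages exchanged: request/reply per item plus the initiating message."""
    return 2 * n + 1, 2 * n + 1

if __name__ == "__main__":
    for n in (0, 10, 100):
        print(n, partner_invocations_bounds(n), exchanged_messages_bounds(n))
```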

Relevance:

90.00%

Publisher:

Abstract:

This Final Year Project (Proyecto Fin de Carrera, PFC) covers the analysis, design and implementation of a web system that helps users become familiar with the Human Development Index (HDI), published annually by the United Nations, offering a service for managing and downloading a mobile application related to that index. The mobile application is an educational game based on questions about the HDI of different countries, developed in parallel with this project. The web service implemented in this project supports the download, administration and updating of content as well as interaction between users through the cooperative game. The system consists of a web server, a database of users and content, and a web portal from which the mobile application can be downloaded, game statistics can be queried, and the HDI can be explored without playing. The advanced HDI search engine that was developed allows users to acquire skills and train on their own to improve their game results. System administrators can manage the portal content, the users who request accounts, and the functionality offered, that is, game updates, forums and news. Installing the implemented system on a web server allowed its successful verification, as well as the provision of the information and awareness service on the HDI, kept up to date with United Nations data, which was the original motivation of the project.

Relevance:

90.00%

Publisher:

Abstract:

The use of semantic and Linked Data technologies for Enterprise Application Integration (EAI) has been increasing in recent years. Linked Data and Semantic Web technologies such as the Resource Description Framework (RDF) data model provide several key advantages over the current de facto Web Service and XML based integration approaches. The flexibility of representing data in the more versatile RDF model using ontologies makes it possible to avoid complex schema transformations, makes data more accessible through Web standards, and prevents the formation of data silos. These three benefits give Linked Data-based EAI an edge. However, work still has to be done so that these technologies can cope with the particularities of EAI scenarios in areas such as data control, ownership, consistency, or accuracy. The first part of the paper introduces Enterprise Application Integration using Linked Data and the requirements that EAI imposes on Linked Data technologies, focusing on one of the problems that arise in this scenario, the coreference problem, and presents a coreference service that supports the use of Linked Data in EAI systems. The proposed solution introduces the use of a context that aggregates a set of related identities and mappings from those identities to different resources that reside in distinct applications and provide different views or aspects of the same entity. A detailed architecture of the Coreference Service is presented, explaining how it can be used to manage the contexts, identities, resources, and applications to which they relate. The paper shows how the proposed service can be used in an EAI scenario through an example involving a dashboard that integrates data from different systems, together with the proposed workflow for registering and resolving identities. As most enterprise applications are driven by business processes and involve legacy data, the proposed approach can be easily incorporated into enterprise applications.
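To make the context idea concrete, the following sketch models, in plain Python and with invented names, a coreference context that groups identities and maps each identity to the resources exposing it in different applications, together with a resolve step in the spirit of the workflow described above. It is an illustration of the concept, not the data model of the Coreference Service itself.

```python
# Illustrative-only data model for a coreference context: a set of related
# identities plus mappings from each identity to the resources that represent
# the same entity in different enterprise applications. All names are invented
# for this sketch and do not come from the Coreference Service in the paper.
from dataclasses import dataclass, field

@dataclass
class Context:
    name: str
    identities: set[str] = field(default_factory=set)
    # identity URI -> {application name -> resource URI}
    resources: dict[str, dict[str, str]] = field(default_factory=dict)

    def register(self, identity: str, application: str, resource: str) -> None:
        """Attach a resource from some application to an identity in this context."""
        self.identities.add(identity)
        self.resources.setdefault(identity, {})[application] = resource

    def resolve(self, identity: str) -> dict[str, str]:
        """Return every known view of the entity, keyed by application."""
        return self.resources.get(identity, {})

# Usage: a dashboard resolving one customer across CRM and billing systems.
ctx = Context("customers")
ctx.register("urn:example:customer:42", "crm", "http://crm.example.org/customers/42")
ctx.register("urn:example:customer:42", "billing", "http://billing.example.org/accounts/A-42")
print(ctx.resolve("urn:example:customer:42"))
```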

Relevance:

90.00%

Publisher:

Abstract:

This document addresses the problems encountered while developing a platform to manage the learning guides of the Universidad Politécnica de Madrid, focusing on the use of JavaScript technologies as well as the algorithms, plugins and auxiliary libraries that were created and used. Finally, it presents the results obtained from the analysis and implementation of what is described in the document, together with conclusions and suggestions for future lines of work on this same project.

Relevance:

90.00%

Publisher:

Abstract:

This work surveys the technologies currently used on the web, explaining their main components, purpose and operation. Based on a theoretical scenario of deploying a web service with a very large number of users, and drawing on the technologies studied, a complete system design is proposed that would be able to handle all incoming requests correctly, avoiding failures and downtime. A theoretical analysis of the costs derived from deploying the system is included, comparing it with a conventional web system, together with an analysis of how a cache works and the load reduction derived from its use.
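The load benefit of the cache mentioned at the end can be illustrated with a back-of-the-envelope calculation; the request rates and hit ratios below are invented for the example, not figures from the work itself.

```python
# Back-of-the-envelope illustration (invented numbers) of how a cache reduces
# the load that reaches the origin servers of a heavily used web service.
def origin_load(requests_per_second: float, cache_hit_ratio: float) -> float:
    """Requests per second that still reach the origin after the cache absorbs its share."""
    return requests_per_second * (1.0 - cache_hit_ratio)

if __name__ == "__main__":
    incoming = 50_000  # requests per second at the front end (hypothetical)
    for hit_ratio in (0.0, 0.8, 0.95):
        print(f"hit ratio {hit_ratio:.0%}: {origin_load(incoming, hit_ratio):,.0f} req/s reach the origin")
```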

Relevance:

90.00%

Publisher:

Abstract:

One important step in a successful project-based learning (PBL) methodology is providing students with convenient feedback that allows them to keep developing their projects or to improve them. However, this task is more difficult in massive courses, especially when the project deadline is close. Besides, the continuous evaluation methodology makes it necessary to find ways to measure students' performance objectively and continuously without excessively increasing the instructors' workload. In order to alleviate these problems, we have developed a web service that allows students to request personal tutoring assistance during the laboratory sessions by specifying the kind of problem they have and the person who could help them solve it. This service provides tools for the staff to manage the laboratory, to perform continuous evaluation of all students and of the student collaborators, and to prioritize tutoring according to the progress of each student's project. Additionally, the application provides objective metrics which can be used during the evaluation process at the end of the subject to support some students' final scores. Different usability statistics and the results of a subjective evaluation with more than 330 students confirm the success of the proposed application.
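The prioritisation of tutoring requests by project progress mentioned above can be sketched as a simple ordering rule; the fields and weighting below are hypothetical and do not reflect the rule actually used by the described service.

```python
# Hypothetical sketch of prioritising pending tutoring requests: students whose
# projects are furthest behind, and who have waited longest, are served first.
# Fields and weights are illustrative, not taken from the paper.
from dataclasses import dataclass

@dataclass
class TutoringRequest:
    student: str
    problem_kind: str        # e.g. "hardware", "firmware", "report"
    project_progress: float  # 0.0 (nothing done) .. 1.0 (project finished)
    minutes_waiting: int

def priority(req: TutoringRequest) -> float:
    """Higher value means served earlier: low progress and long waits raise priority."""
    return (1.0 - req.project_progress) * 10 + req.minutes_waiting / 5

pending = [
    TutoringRequest("alice", "firmware", 0.2, 15),
    TutoringRequest("bob", "report", 0.9, 40),
]
for req in sorted(pending, key=priority, reverse=True):
    print(req.student, req.problem_kind, round(priority(req), 1))
```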

Relevance:

80.00%

Publisher:

Abstract:

This paper describes the first five SEALS Evaluation Campaigns over the semantic technologies covered by the SEALS project (ontology engineering tools, ontology reasoning tools, ontology matching tools, semantic search tools, and semantic web service tools). It presents the evaluations and test data used in these campaigns and the tools that participated in them along with a comparative analysis of their results. It also presents some lessons learnt after the execution of the evaluation campaigns and draws some final conclusions.

Relevance:

80.00%

Publisher:

Abstract:

Biomedical researchers and clinicians working with molecular technologies in routine clinical practice often need to review the available literature to gather information regarding specific sequences of nucleic acids. This includes, for instance, finding articles related to a concrete DNA sequence, or identifying empirically validated primer/probe sequences to evaluate the presence of different micro-organisms. Unfortunately, these hard and time-consuming tasks often have to be performed manually by the researchers themselves, since no publicly available biomedical literature search engine, such as PubMed or PubMed Central (PMC), provides the required search functionalities. In this article, we describe PubDNA Finder, a web service that enables users to perform advanced searches on PubMed Central-indexed full-text articles with sequences of nucleic acids.

Relevance:

80.00%

Publisher:

Abstract:

This paper describes the main goals and outcomes of the EU-funded Framework 7 project entitled Semantic Evaluation at Large Scale (SEALS). The growth and success of the Semantic Web is built upon a wide range of semantic technologies, from ontology engineering tools through to semantic web service discovery and semantic search. The evaluation of such technologies (and, indeed, assessments of their mutual compatibility) is critical for their sustained improvement and adoption. The SEALS project is creating an open and sustainable platform on which all aspects of an evaluation can be hosted and executed and has been designed to accommodate most technology types. It is envisaged that the platform will become the de facto repository of test datasets and will allow anyone to organise, execute and store the results of technology evaluations free of charge and without corporate bias. The demonstration will show how individual tools can be prepared for evaluation, uploaded to the platform, evaluated according to some criteria and the subsequent results viewed. In addition, the demonstration will show the flexibility and power of the SEALS Platform for evaluation organisers by highlighting some of the key technologies used.

Relevance:

80.00%

Publisher:

Abstract:

In the context of the European FI-WARE project, the CoNWeT Lab (a laboratory of the ETSI Informáticos at UPM) has implemented the WStore web platform, a reference implementation of the Store Generic Enabler belonging to that project. The goal of FI-WARE is to create the core platform of the Future Internet (IoF) with the intention of increasing Europe's global competitiveness in IT. The project introduces an innovative infrastructure for the creation and distribution of digital services over the Internet. WStore offers service providers a platform on which to publish their offerings and through which customers can access them. Providers offer web services, applications, widgets and data sets in the same way that Google offers applications in the Google Play online store or Apple in the App Store. WStore is currently implemented as a web platform, so an organization that wants to offer the store service needs to install the software on its own server and have a domain from which to offer its products. The goal of this work is to migrate WStore to a cloud computing environment so that a single instance offers the service to organizations that want their own platform, over which they will have full control as if it were running on their own infrastructure. To this end, a version of WStore is implemented that is deployed on a cloud infrastructure and offered as Software as a Service. The implementation includes a set of code modules that can optionally be added during the installation process if the installed instance is to be multi-tenant. In addition, this work studies and tests the tools that MongoDB provides for scalability and high availability (replica sets and sharding), with the purpose of deploying the multi-tenant WStore on a cloud infrastructure.
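The MongoDB features mentioned in the abstract can be exercised with a few commands. The sketch below uses PyMongo against a hypothetical replica set with invented host, database and collection names, so it illustrates replica sets and sharding in general rather than the exact WStore deployment.

```python
# Minimal PyMongo sketch of the two MongoDB features discussed: connecting to a
# replica set (high availability) and sharding a collection (scalability).
# Host names, the "wstore" database and the shard key are hypothetical examples.
from pymongo import MongoClient

# Connect to a three-member replica set; the driver fails over automatically
# if the primary becomes unavailable.
client = MongoClient(
    "mongodb://db1.example.org:27017,db2.example.org:27017,db3.example.org:27017/"
    "?replicaSet=rs0"
)

# Sharding is enabled through a mongos router using admin commands.
admin = client.admin
admin.command("enableSharding", "wstore")
admin.command(
    "shardCollection",
    "wstore.offerings",
    key={"tenant_id": "hashed"},  # spread each tenant's offerings across the shards
)
print(client.wstore.offerings.estimated_document_count())
```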

Relevance:

80.00%

Publisher:

Abstract:

The REpresentational State Transfer (REST) architectural style describes the design principles that made the World Wide Web scalable, and the same principles can be applied in an enterprise context to achieve loosely coupled and scalable application integration. In recent years, RESTful services have been gaining traction in industry and are commonly used as a simpler alternative to SOAP Web Services. However, one of the main drawbacks of RESTful services is the lack of standard mechanisms to support the advanced quality-of-service requirements that are common in enterprises. Transaction processing is one of the essential features of enterprise information systems, and several transaction models have been proposed in past years to fill the gap of transaction processing in RESTful services. The goal of this paper is to analyze the state-of-the-art RESTful transaction models and identify the current challenges.
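One family of the transaction models alluded to here treats the transaction itself as a resource that is created, confirmed or cancelled over plain HTTP. The sketch below shows that general pattern with the Python requests library; the URLs, paths and payloads are invented for illustration and do not correspond to any specific model analyzed in the paper.

```python
# Generic sketch of the "transaction as a resource" pattern sometimes proposed for
# RESTful services: the client creates a transaction resource, enlists work under it,
# and then confirms (or deletes) it. All URLs and fields are hypothetical.
import requests

BASE = "https://api.example.org"  # hypothetical service

# 1. Create a transaction resource; the service returns its URL in Location.
tx = requests.post(f"{BASE}/transactions", json={}, timeout=10)
tx.raise_for_status()
tx_url = tx.headers["Location"]

try:
    # 2. Perform tentative work scoped to the transaction.
    requests.post(f"{tx_url}/reservations", json={"seat": "12A"}, timeout=10).raise_for_status()
    requests.post(f"{tx_url}/payments", json={"amount": 120}, timeout=10).raise_for_status()

    # 3. Confirm: the service applies all enlisted work atomically.
    requests.put(f"{tx_url}/state", json={"state": "confirmed"}, timeout=10).raise_for_status()
except requests.RequestException:
    # 4. On any failure, cancel the transaction so tentative work is discarded.
    requests.delete(tx_url, timeout=10)
    raise
```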

Relevance:

80.00%

Publisher:

Abstract:

This final degree project (TFG) is framed within synthetic biology, more specifically within protocol automation, and represents one step forward in this area. It consists of a management platform for autonomous laboratories. The resulting tool helps the operator coordinate the machines available in a laboratory when executing an experiment based on a synthetic biology protocol. At present, biological experiments have a very low success rate in conventional laboratories because of the number of external factors that interfere during the protocol. These experiments are also expensive and require an operator to monitor every phase of the protocol. Laboratory automation can increase the success rate and reduce costs and risks for workers in the laboratory environment. The proposed approach divides the different entities of a laboratory into functional units, which are the elements to be coordinated by the tool produced in this TFG. To make the tool flexible, a service-oriented architecture (SOA) is used: each functional unit deploys a web service that exposes its functionality to the rest of the laboratory. SOA is essential for machine-to-machine communication because it abstracts away the type of machine involved and how its functionality is implemented. The main difficulty of the TFG lies in dealing with the integration and coordination of the different functional units so that the lifecycle of an experiment can be managed properly. To address this, an analysis of the available open-source tools was carried out, and the Apache Camel platform was finally chosen as the framework on which to build the specific tool proposed in the TFG. Apache Camel plays a central role in this project: it establishes the connection layers to the different services and routes the appropriate messages to each service based on the content of the input file. To prepare the prototype, a set of web services was developed that makes it possible to run tests and proof-of-concept demonstrations of the tool itself. In addition, a preliminary version of the web application was developed that the laboratory operator will use to manage requests, deciding which protocol is executed next and following the flow of tasks of the experiment.
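The content-based routing that Apache Camel performs here, inspecting the incoming file and forwarding each step to the web service of the right functional unit, can be sketched in a few lines. The sketch below is plain Python pseudologic for the pattern, not Camel's actual Java/XML DSL, and all step types and service URLs are hypothetical.

```python
# Plain-Python illustration of the content-based router pattern that Apache Camel
# implements in this project: each step of an incoming protocol file is forwarded
# to the web service of the functional unit that can execute it. The step types
# and service URLs are hypothetical examples, not the project's real endpoints.
import json
import requests

# Hypothetical mapping from protocol step type to the functional unit's web service.
UNIT_ENDPOINTS = {
    "pipetting": "http://lab.example.org/units/liquid-handler/execute",
    "incubation": "http://lab.example.org/units/incubator/execute",
    "measurement": "http://lab.example.org/units/plate-reader/execute",
}

def route_protocol(path: str) -> None:
    """Read a protocol file and dispatch each step to the matching unit, in order."""
    with open(path) as handle:
        protocol = json.load(handle)
    for step in protocol["steps"]:
        endpoint = UNIT_ENDPOINTS[step["type"]]  # choose the target from the content
        requests.post(endpoint, json=step, timeout=60).raise_for_status()

if __name__ == "__main__":
    route_protocol("experiment_protocol.json")  # hypothetical input file
```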