57 results for: Web services. Service orchestration languages. PEWS. Graph-reduction machines
Abstract:
In this demo paper we describe an iOS-based application for visualizing live bus transport data in Madrid from static and streaming RDF endpoints, reusing the Web services provided by the city's bus transport authority and wrapping them using SPARQLStream.
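As a minimal sketch of the consumption side described above, the following Java snippet queries a SPARQL endpoint for live arrival data using Apache Jena. The endpoint URL and the bus: vocabulary are invented for illustration; the paper's actual SPARQLStream wrappers and URIs are not reproduced here.

```java
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;

public class BusArrivalsQuery {
    public static void main(String[] args) {
        // Hypothetical endpoint and vocabulary, used only to illustrate
        // how a client could consume the wrapped RDF data.
        String endpoint = "http://example.org/madrid-bus/sparql";
        String query =
            "PREFIX bus: <http://example.org/ns/bus#> " +
            "SELECT ?stop ?arrival WHERE { " +
            "  ?vehicle bus:line \"27\" ; " +
            "           bus:nextStop ?stop ; " +
            "           bus:estimatedArrival ?arrival . " +
            "} LIMIT 10";
        try (QueryExecution qe = QueryExecutionFactory.sparqlService(endpoint, query)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                System.out.println(row.get("stop") + " -> " + row.get("arrival"));
            }
        }
    }
}
```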
Abstract:
The implementation of Internet technologies has led to e-Manufacturing technologies becoming more widely used and to the development of tools for compiling, transforming and synchronising manufacturing data through the Web. In this context, a potential area for development is the extension of virtual manufacturing to performance measurement (PM) processes, a critical area for decision making and for implementing improvement actions in manufacturing. This paper proposes a PM information framework to integrate decision support systems in e-Manufacturing. Specifically, the proposed framework offers a homogeneous PM information exchange model that can be applied to decision support in e-Manufacturing environments. Its application improves the interoperability needed in decision-making data processing tasks. It comprises three sub-systems: a data model, a PM information platform and a PM Web services architecture. A practical example of data exchange for measurement processes in the area of equipment maintenance demonstrates the utility of the model.
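To make the idea of a homogeneous exchange model concrete, here is a hedged sketch of a maintenance measurement record serialised to XML with JAXB. The MaintenanceMeasure class and its fields are hypothetical; the paper's actual schema is not reproduced.

```java
import java.io.StringWriter;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.annotation.XmlRootElement;

// Hypothetical record in a homogeneous PM exchange model, serialised to
// XML so that heterogeneous e-Manufacturing systems can consume it.
@XmlRootElement
public class MaintenanceMeasure {
    public String equipmentId;   // asset being measured
    public String indicator;     // e.g. "MTBF" (mean time between failures)
    public double value;
    public String unit;          // e.g. "hours"

    public static void main(String[] args) throws Exception {
        MaintenanceMeasure m = new MaintenanceMeasure();
        m.equipmentId = "PRESS-07";
        m.indicator = "MTBF";
        m.value = 412.5;
        m.unit = "hours";
        StringWriter out = new StringWriter();
        JAXBContext.newInstance(MaintenanceMeasure.class)
                   .createMarshaller().marshal(m, out);
        System.out.println(out); // XML ready to travel over a PM Web service
    }
}
```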
Abstract:
Online services are no longer isolated. The release of public APIs and technologies such as webhooks allow users and developers to access their information easily. Intelligent agents could use this information to provide a better user experience across services, connecting services with smart automatic behaviours or actions. However, agent platforms are not prepared to easily add external sources such as web services, which hinders the use of agents in the so-called Evented or Live Web. As a solution, this paper introduces an event-based architecture for agent systems, in line with the new tendencies in web programming. In particular, it focuses on personal agents that interact with several web services. With this architecture, called MAIA, connecting to new web services does not involve any modification to the platform.
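As a rough illustration of the event-based style the paper builds on, the snippet below stands up a minimal webhook endpoint with the JDK's built-in HTTP server. The /events path and the port are arbitrary, and the routing of events to agent behaviours, which is the substance of MAIA, is elided.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class WebhookListener {
    public static void main(String[] args) throws Exception {
        // Minimal callback endpoint a personal agent could register as a
        // webhook with an external web service.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/events", exchange -> {
            try (InputStream in = exchange.getRequestBody()) {
                String event = new String(in.readAllBytes(), StandardCharsets.UTF_8);
                System.out.println("Event received: " + event); // hand off to agent behaviours
            }
            exchange.sendResponseHeaders(204, -1); // acknowledge with no body
            exchange.close();
        });
        server.start();
    }
}
```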
Abstract:
The REpresentational State Transfer (REST) architectural style describes the design principles that made the World Wide Web scalable, and the same principles can be applied in an enterprise context to achieve loosely coupled and scalable application integration. In recent years, RESTful services have been gaining traction in industry and are commonly used as a simpler alternative to SOAP Web Services. However, one of the main drawbacks of RESTful services is the lack of standard mechanisms to support the advanced quality-of-service requirements that are common in enterprises. Transaction processing is one of the essential features of enterprise information systems, and several transaction models have been proposed over the past years to fill the gap of transaction processing in RESTful services. The goal of this paper is to analyze the state-of-the-art RESTful transaction models and identify the current challenges.
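One pattern that recurs in RESTful transaction proposals is the provisional resource: create a tentative resource with one request, then confirm or cancel it with another. The sketch below shows that pattern with the JDK HttpClient; the URIs, payloads and Location-based confirmation step are invented for illustration and do not correspond to any single model from the paper.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TentativeBooking {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Step 1: create a provisional resource.
        HttpRequest reserve = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.org/bookings"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"seat\":\"12A\"}"))
                .build();
        HttpResponse<String> created =
                client.send(reserve, HttpResponse.BodyHandlers.ofString());
        String bookingUri = created.headers().firstValue("Location").orElseThrow();

        // Step 2: confirm it (a DELETE on the same URI would cancel).
        HttpRequest confirm = HttpRequest.newBuilder()
                .uri(URI.create(bookingUri))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString("{\"status\":\"confirmed\"}"))
                .build();
        client.send(confirm, HttpResponse.BodyHandlers.ofString());
    }
}
```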
Abstract:
Since the onset of globalization, learning English has become a necessity. Today, with the adoption of the European Higher Education Area, this language is not only imposed as a requirement for students: a B2 level is demanded, which means a greater effort for both student and teacher to make the exercise a habit and achieve continuous assessment. This project extends the functionality of an existing application called ILLLab with exercises suited to the B2 level that allow interaction between students while the exercises are performed. The aim is to develop additional exercises in ILLLab that add complexity for learning English at B2 level and also enable activities among students. The idea is a multiple-choice question-and-answer game with four options per question. The strength of the game lies in presenting varied material on use of English and in allowing play between students. The ILLLab extension is conceived as a project to develop interfaces and additional functionality in the old application.
The main functionality added is a multiple-choice question-and-answer game for B2 level, together with interfaces that meet the requirements for exchanging and managing content over the Internet using standards accepted in the digital learning world, such as Common Cartridge or SCORM. The project also adapts the application for use in an activity assessment environment in which the teacher has access to the activities performed by the students of a course for later evaluation. Previously, ILLLab only contained exercises carried out on the mobile device, so these activities could not be monitored. The improvement comprises a Common Cartridge interface for content management, a communication interface based on REST web services, and database access through Hibernate, which groups a set of Java libraries for persisting objects to the database.
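Since the abstract mentions Hibernate persistence of quiz content, here is a hedged sketch of how a four-option question might be mapped as a JPA entity; the QuizQuestion class and its fields are hypothetical, as the real ILLLab schema is not described.

```java
import java.util.List;
import javax.persistence.ElementCollection;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// Hypothetical shape of a persisted multiple-choice question.
@Entity
public class QuizQuestion {
    @Id
    @GeneratedValue
    private Long id;

    private String prompt;         // a B2-level use-of-English item

    @ElementCollection
    private List<String> options;  // four choices per question

    private int correctIndex;      // position of the right answer

    // getters and setters omitted for brevity
}
```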
Abstract:
The aim of this project is the development of an e-procurement system to manage the orders that shops place with a storehouse using SOAP messaging. The system consists of two Web applications: the first is installed in the storehouse and the second in the shops associated with that storehouse. Both applications are developed in Java and JSP using the Spring Framework and Hibernate for database persistence. Messaging between the applications is performed with SOAP messages sent to Web services published by both applications. The first part of the project explains the Spring Framework and Hibernate, focusing on the modules used in the project, and also covers SOAP messaging and Web services. The second part develops the two system applications. The shop management application allows users to place orders with the storehouse, receive goods, and consult the history of orders placed; it also publishes two Web services to receive order shipments and new or modified storehouse products. The storehouse management application allows users to create and modify products, ship the orders received from shops, and consult the history of orders received; it also publishes two Web services to receive orders and goods receipts from the shops. This application also implements a scheduled task, run every three minutes, that synchronises new or modified storehouse products with the shops via SOAP messages.
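The three-minute synchronisation job maps naturally onto Spring's scheduling support. A minimal sketch, assuming a Spring context with @EnableScheduling; the class name and placeholder body are invented, and the actual SOAP call would go where the comments indicate.

```java
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class ProductSyncTask {

    // 180000 ms = 3 minutes, matching the interval described above.
    @Scheduled(fixedRate = 180000)
    public void pushChangedProducts() {
        // 1. Query products created or modified since the last run.
        // 2. Build a SOAP message per shop and send it to the shop's
        //    published Web service (depends on the application's model).
    }
}
```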
Abstract:
This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a problem of information flood when searching for appropriate services or resources to combine. To contribute to overcoming this problem, a framework is defined for the discovery of services and resources. In this framework, discovery is performed at three levels: content, service and agent. The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as Screen Scraping in the literature, is content discovery according to the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them. The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level, feature-oriented service methodology is proposed at this level. This lightweight description framework allows defining service discovery rules to identify operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after analysing service repositories from the web. The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules allow specifying behaviours for crawling and executing services, which results in the fulfilment of a high-level goal. Agent rules are plans that introspect the discovered data and services from the web, and the knowledge present in service and content discovery rules, to anticipate the contents and services to be found on specific resources from the web. Through the definition of plans, an agent can be configured to target specific resources. The discovery framework has been evaluated on different scenarios, each one covering different levels of the framework. The Contenidos a la Carta project deals with the mashing-up of news from electronic newspapers, and the framework was used for the discovery and extraction of pieces of news from the web.
Similarly, the Resulta and VulneraNET projects cover the discovery of ideas and security knowledge on the web, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered in component repositories from the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can provide complex behaviours and orchestrations of tasks on the web. The main contributions of the thesis are the unified discovery framework, which allows configuring agents to perform automated tasks; a scraping ontology defined for the construction of mappings for scraping web resources; a novel first-order logic rule induction algorithm for the automated construction and maintenance of these mappings from the visual information in web resources; and a common unified model for the discovery of services, which allows sharing service descriptions. Future work comprises the further extension of service probing, resource ranking, the extension of the scraping ontology, extensions of the agent model, and constructing a base of discovery rules.
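A content discovery rule at the lowest level boils down to mapping fragments of an HTML representation onto semantic fields. The toy sketch below does this imperatively with jsoup over an invented news page and selectors; the thesis's actual rules are declarative and tied to its scraping ontology.

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class NewsScraper {
    public static void main(String[] args) throws Exception {
        // Hypothetical URL and CSS selectors, for illustration only.
        Document doc = Jsoup.connect("https://news.example.org/front-page").get();
        for (Element item : doc.select("article.headline")) {
            String title = item.select("h2").text();        // maps to a "title" entity
            String link = item.select("a").attr("abs:href"); // maps to a "source" entity
            System.out.println(title + " <" + link + ">");
        }
    }
}
```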
Abstract:
Next Generation Networks (NGN) provide telecommunications operators with the possibility to share their resources and infrastructure, facilitate interoperability with other networks, and simplify and unify the management, operation and maintenance of service offerings, thus enabling the fast and cost-effective creation of new personal, broadband, ubiquitous services. Unfortunately, service creation over NGN is far from matching the success of service creation on the Web, especially when it comes to Web 2.0. This paper presents a novel approach to service creation and delivery, with a platform that opens to non-technically skilled users the possibility to create, manage and share their own convergent (NGN-based and Web-based) services. To this end, the business approach to user-generated services is analyzed and the technological bases supporting the proposal are explained.
Abstract:
The Web has witnessed an enormous growth in the amount of semantic information published in recent years. This growth has been stimulated to a large extent by the emergence of Linked Data. Although this brings us a big step closer to the vision of a Semantic Web, it also raises new issues such as the need for dealing with information expressed in different natural languages. Indeed, although the Web of Data can contain any kind of information in any language, it still lacks explicit mechanisms to automatically reconcile such information when it is expressed in different languages. This leads to situations in which data expressed in a certain language is not easily accessible to speakers of other languages. The Web of Data shows the potential for being extended to a truly multilingual web as vocabularies and data can be published in a language-independent fashion, while associated language-dependent (linguistic) information supporting the access across languages can be stored separately. In this sense, the multilingual Web of Data can be realized in our view as a layer of services and resources on top of the existing Linked Data infrastructure adding i) linguistic information for data and vocabularies in different languages, ii) mappings between data with labels in different languages, and iii) services to dynamically access and traverse Linked Data across different languages. In this article we present this vision of a multilingual Web of Data. We discuss challenges that need to be addressed to make this vision come true and discuss the role that techniques such as ontology localization, ontology mapping, and cross-lingual ontology-based information access and presentation will play in achieving this. Further, we propose an initial architecture and describe a roadmap that can provide a basis for the implementation of this vision.
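The kind of cross-language access the article argues for can be previewed with plain SPARQL over existing Linked Data. As a small sketch, the query below selects the labels of one resource in several languages via Apache Jena; DBpedia is used here only as a familiar public endpoint, not as part of the proposed architecture.

```java
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.ResultSet;

public class MultilingualLabels {
    public static void main(String[] args) {
        String query =
            "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> " +
            "SELECT ?label WHERE { " +
            "  <http://dbpedia.org/resource/Madrid> rdfs:label ?label . " +
            "  FILTER (lang(?label) IN ('en', 'es', 'de')) " +
            "}";
        try (QueryExecution qe = QueryExecutionFactory.sparqlService(
                "https://dbpedia.org/sparql", query)) {
            ResultSet rs = qe.execSelect();
            while (rs.hasNext()) {
                System.out.println(rs.next().get("label")); // one label per language
            }
        }
    }
}
```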
Abstract:
The Internet of Things (IoT), as part of the Future Internet, has become one of the main research topics nowadays, partly thanks to the attention society is paying to the development of a particular kind of services (smart metering, smart grids, eHealth, etc.), and to recent business forecasts that place some players, such as telecom operators (which are desperately seeking new opportunities), at the forefront pushing interrelated technologies like Machine-to-Machine (M2M) communications. In this context, a large number of research activities are taking place worldwide at different levels: sensor network communications, information processing, big-data storage, semantics, service-level architectures, etc. All of them, in isolation, are reaching a level of maturity that makes the Internet of Things look less like a dream and more like a tangible goal. However, the aforementioned services cannot wait to be developed until holistic research delivers complete solutions. It is important to produce intermediate results that avoid vertical solutions tailored to particular deployments.
In the present work, we focus on the creation of a service-level platform intended to facilitate, on the one hand, the integration of heterogeneous and geographically dispersed Sensor and Actuator Networks (SANs) and, on the other, the development of horizontal services using those networks and the information they provide. This enabler will be used for horizontal service development and for IoT experimentation. Prior to the definition of the platform, an extensive study was carried out covering not only research works and projects but also standardization activities. The results can be summarized in the following assertions: a) the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) data models represent today the most complete solution to describe SANs and observations; b) OGC interfaces, despite limitations that require changes and extensions, could be used as the basis for accessing sensors and data; c) Next Generation Networks (NGN) offer a good substrate that facilitates the integration of SANs and the development of services.
Consequently, a new service-layer platform, called Ubiquitous Sensor Networks (USN), has been defined in this Thesis to help fill the previous gaps. The main highlights of the proposed USN Platform are: a) from an architectural point of view, it follows a two-layer approach (Enabler and Gateway), similar to other enablers that run on top of NGN (like OMA Presence); b) data models and interfaces are based on the OGC SWE standards; c) it is integrated in NGN but can also be used without it over open IP infrastructures; d) its main functions are sensor discovery, observation storage, publish-subscribe-notify, homogeneous remote execution, security, data dictionary handling, monitoring facilities, authorization support, protocol conversion utilities, synchronous and asynchronous interactions, streaming support and basic resource arbitration. To demonstrate the functionality the proposed USN Platform can offer to future IoT scenarios, experimental results are presented for three real-life, small-scale proofs of concept (smart metering, smart places and environmental monitoring) and a study on semantics (an in-vehicle information system). Furthermore, the proposed USN Platform is currently being used as an Enabler to develop both experimentation and real services in the SmartSantander EU project (which aims at integrating around 20,000 IoT devices).
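Since the USN data models follow OGC SWE, an observation essentially binds a sensor, an observed property, a feature of interest, a time and a result. A simplified, hypothetical Java rendering of that structure (not the platform's actual classes) is shown below.

```java
import java.time.Instant;

// Simplified O&M-style observation, as used conceptually by SWE models.
public class Observation {
    private final String procedure;          // sensor that produced the value
    private final String observedProperty;   // e.g. "air temperature"
    private final String featureOfInterest;  // e.g. a city district
    private final Instant phenomenonTime;    // when the value was observed
    private final double result;

    public Observation(String procedure, String observedProperty,
                       String featureOfInterest, Instant phenomenonTime,
                       double result) {
        this.procedure = procedure;
        this.observedProperty = observedProperty;
        this.featureOfInterest = featureOfInterest;
        this.phenomenonTime = phenomenonTime;
        this.result = result;
    }
    // getters omitted for brevity
}
```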
Abstract:
Compile-time program analysis techniques can be applied to Web service orchestrations to prove or check various properties. In particular, service orchestrations can be subjected to resource analysis, in which safe approximations of upper and lower resource usage bounds are deduced. A uniform analysis can be performed simultaneously for different generalized resources that can be directly correlated with cost- and performance-related quality attributes, such as invocations of partners, network traffic, number of activities, iterations, and data accesses. The resulting safe upper and lower bounds do not depend on probabilistic assumptions, and are expressed as functions of the size or length of data components from an initiating message, using a fine-grained structured data model that corresponds to the XML style of information structuring. The analysis is performed by transforming a BPEL-like representation of an orchestration into an equivalent program in another programming language for which the appropriate analysis tools already exist.
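To illustrate what such bounds look like, consider a toy orchestration, written here as plain Java rather than BPEL, that invokes a partner once per item of the initiating message; the analysis would then derive the number of invocations as an exact function of the message size. The example is ours, not taken from the paper.

```java
import java.util.List;

public class OrderOrchestration {
    interface Partner { void reserve(String item); }

    // For an order with n line items, the loop performs exactly one
    // partner invocation per item, so both the safe lower and upper
    // bounds on partner invocations are n = lineItems.size().
    public void process(List<String> lineItems, Partner partner) {
        for (String item : lineItems) {
            partner.reserve(item); // cost: 1 invocation per iteration
        }
    }
}
```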
Abstract:
Because of the growing availability of third-party APIs, services, widgets and other reusable web components, mashup developers now face a vast number of candidate components for their developments. Moreover, these components are quite often scattered across many different repositories and web sites, which makes their selection or discovery difficult. In this paper, we discuss the problem of component selection in Service-Oriented Architectures (SOA) and Mashup-Driven Development, and introduce the Linked Mashups Ontology (LiMOn), a model for describing mashups and their components in order to integrate and share mashup information such as categorization or dependencies. The model has enabled the construction of an integrated, centralized metadirectory of web components for query and selection, which has served to evaluate the model. The metadirectory provides access to various heterogeneous repositories of mashups and web components while using external information from the Linked Data cloud, helping mashup development.
Abstract:
One of the main challenges facing next generation Cloud platform services is the need to simultaneously achieve ease of programming, consistency, and high scalability. Big Data applications have so far focused on batch processing. The next step for Big Data is to move to the online world. This shift will raise the requirements for transactional guarantees. CumuloNimbo is a new EC-funded project led by Universidad Politécnica de Madrid (UPM) that addresses these issues via a highly scalable multi-tier transactional platform as a service (PaaS) that bridges the gap between OLTP and Big Data applications.
Abstract:
The LifeWear-Mobilized Lifestyle with Wearables (LifeWear) project attempts to create Ambient Intelligence (AmI) ecosystems by composing personalized services based on user information, environmental conditions and reasoning outputs. Two of the most important benefits over traditional environments are that it 1) takes advantage of wearable devices to obtain user information in a non-intrusive way and 2) integrates this information with other intelligent services and environmental sensors. This paper proposes a new ontology, built by integrating user and service information, to represent this information semantically. Using an Enterprise Service Bus, this ontology is integrated in a semantic middleware to provide context-aware, personalized and semantically annotated services, with discovery, composition and orchestration tasks. We show how these services support a real scenario proposed in the LifeWear project.
Abstract:
The evolution of communications networks to Next Generation Networks (NGN) has encouraged the development of new services. Nowadays, several technologies are being integrated into telecommunications services in order to provide new functionality, resulting in what are known as converged services. The objective is to adapt the behavior of the services to the needs of different users, generating customized services. Some of the main technologies involved in their development are those related to the Web. But because this type of service implies the combination of different technologies, their development is a very complex process that has to be improved to reduce the time and cost required, with the aim of promoting the success of such services. This paper proposes applying software reuse through the use of a component library, and presents one focused on ECharts for SIP Servlets (E4SS). It is a framework, based on the SIP Servlet specification, which uses finite state machines to define converged communications services. Also, to promote the use of the library, a methodology is proposed to facilitate the integration between the library operations and the software development cycle.
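For orientation, this is the kind of SIP Servlet handler over which E4SS layers its finite state machines. The servlet below is a generic, minimal example of the specification's API, not an E4SS component; a real converged service would branch on session state where the comment indicates.

```java
import java.io.IOException;
import javax.servlet.sip.SipServlet;
import javax.servlet.sip.SipServletRequest;

public class GreetingService extends SipServlet {
    @Override
    protected void doInvite(SipServletRequest req) throws IOException {
        // Accept every incoming call; a converged service would decide
        // here based on user preferences, web data or session state.
        req.createResponse(200).send();
    }
}
```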