898 results for embedded Web services
Abstract:
Agent Communication Languages (ACLs) have been developed to provide a way for agents to communicate with each other, supporting cooperation in Multi-Agent Systems (MASs). In the past few years many ACLs, such as KQML and FIPA-ACL, have been proposed for Multi-Agent Systems. The goal of these languages is to support high-level, human-like communication among agents, exploiting Knowledge Level features rather than symbol-level ones. Building on these ACLs, and mainly on the FIPA-ACL specifications, many agent platforms and prototypes have been developed. Despite these efforts, an important issue in the research on ACLs is still open: how these languages should deal, at the Knowledge Level, with possible failures of agents. Indeed, the notion of Knowledge Level cannot be straightforwardly extended to a distributed framework such as a MAS, because problems concerning communication and concurrency may arise when several Knowledge Level agents interact (for example, deadlock or starvation). The main contribution of this thesis is the design and implementation of NOWHERE, a platform to support Knowledge Level agents on the Web. NOWHERE exploits an advanced Agent Communication Language, FT-ACL, which provides high-level fault-tolerant communication primitives and satisfies a set of well-defined Knowledge Level programming requirements. NOWHERE is well integrated with current technologies, for example providing full integration with Web services. Because it supports different message-transport middleware, it can be adapted to various scenarios. In this thesis we present the design and implementation of the architecture, together with a discussion of the most interesting details and a comparison with other emerging agent platforms. We also present several case studies in which we discuss the benefits of programming agents using the NOWHERE architecture, comparing the results with other solutions. Finally, the complete source code of the basic examples can be found in the appendix.
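The key idea behind FT-ACL's fault-tolerant primitives is anonymous interaction: an agent asks for a competence rather than a specific agent, so the crash of one provider is handled at the Knowledge Level instead of surfacing as a network error. Below is a minimal Python sketch of that idea; the class and method names are illustrative assumptions, not the actual FT-ACL API.

```python
# A minimal sketch of FT-ACL-style fault-tolerant, anonymous interaction.
# Names are illustrative assumptions, not the real FT-ACL primitives.
from typing import Callable, Dict, List

class FaultTolerantBus:
    def __init__(self) -> None:
        # competence name -> agents registered as able to answer it
        self.providers: Dict[str, List[Callable[[str], str]]] = {}

    def register(self, competence: str, agent: Callable[[str], str]) -> None:
        self.providers.setdefault(competence, []).append(agent)

    def ask_anonymous(self, competence: str, query: str,
                      on_reply: Callable[[str], None],
                      on_failure: Callable[[], None]) -> None:
        """The asker names a competence, never a specific agent, so a
        crashed provider is skipped instead of breaking the conversation."""
        for agent in self.providers.get(competence, []):
            try:
                on_reply(agent(query))
                return
            except Exception:
                continue  # this provider failed; try the next one
        on_failure()      # failure continuation: no provider answered

def crashed_agent(query: str) -> str:
    raise RuntimeError("agent down")

bus = FaultTolerantBus()
bus.register("weather", crashed_agent)
bus.register("weather", lambda q: "sunny")
bus.ask_anonymous("weather", "Rome?", print, lambda: print("no agent available"))
# prints: sunny
```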
Abstract:
Matita (Italian for "pencil") is a new interactive theorem prover under development at the University of Bologna. Compared with state-of-the-art proof assistants, Matita presents both traditional and innovative aspects. The underlying calculus of the system, namely the Calculus of (Co)Inductive Constructions (CIC for short), is well known and is used as the basis of another mainstream proof assistant, Coq, with which Matita is to some extent compatible. In the same spirit as several other systems, proof authoring is conducted by the user as a goal-directed proof search, using a script to store textual commands for the system. In the LCF tradition, the proof language of Matita is procedural and relies on tactics and tacticals to proceed toward proof completion. The interaction paradigm offered to the user is based on the script-management technique that underlies the popularity of the Proof General generic interface for interactive theorem provers: while editing a script, the user can move the execution point forward to deliver commands to the system, or backward to retract (or "undo") past commands. Matita has been developed from scratch over the past 8 years by several members of the Helm research group, of which the author of this thesis is one. Matita is now a full-fledged proof assistant with a library of about 1,000 concepts. Several innovative solutions spun off from this development effort. This thesis is about the design and implementation of some of those solutions, in particular those relevant to user interaction with theorem provers, to which the author of this thesis was a major contributor. Joint work with other members of the research group is pointed out where needed. The main topics discussed in this thesis are briefly summarized below.

Disambiguation. Most activities connected with interactive proving require the user to input mathematical formulae. Since mathematical notation is ambiguous, parsing formulae typeset as mathematicians like to write them down on paper is a challenging task, a challenge neglected by several theorem provers, which usually prefer to fix an unambiguous input syntax. Exploiting features of the underlying calculus, Matita offers an efficient disambiguation engine that permits typing formulae in familiar mathematical notation.

Step-by-step tacticals. Tacticals are higher-order constructs used in proof scripts to combine tactics. With tacticals, scripts can be made shorter, more readable, and more resilient to changes. Unfortunately, they are de facto incompatible with state-of-the-art user interfaces based on script management: such interfaces do not permit positioning the execution point inside complex tacticals, thus introducing a trade-off between the usefulness of structured scripts and a tedious big-step execution behavior during script replaying. In Matita we break this trade-off with tinycals: an alternative to a subset of LCF tacticals which can be evaluated in a more fine-grained manner.

Extensible yet meaningful notation. Proof assistant users often face the need to create new mathematical notation to ease the use of new concepts. The framework used in Matita for extensible notation both accounts for high-quality bidimensional rendering of formulae (with the expressivity of MathML Presentation) and provides meaningful notation, where presentational fragments are kept synchronized with the semantic representation of terms. Using our approach, interoperability with other systems can be achieved at the content level, and direct manipulation of formulae acting on their rendered forms is also possible.

Publish/subscribe hints. Automation plays an important role in interactive proving, as users like to delegate tedious proving sub-tasks to decision procedures or external reasoners. Exploiting the Web-friendliness of Matita, we experimented with a broker and a network of web services (called tutors) which independently try to complete open sub-goals of the proof currently being authored in Matita. The user receives hints from the tutors on how to complete sub-goals and can apply them to the current proof interactively or automatically.

Another innovative aspect of Matita, only marginally touched on by this thesis, is the embedded content-based search engine Whelp, which is exploited to various ends, from automatic theorem proving to avoiding duplicate work for the user. We also discuss the (potential) reusability in other systems of the widgets presented in this thesis and how we envisage the evolution of user interfaces for interactive theorem provers in the Web 2.0 era.
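To illustrate the tinycals idea, here is a minimal Python sketch (an invented toy, not Matita's OCaml implementation): the proof state stays inspectable after every atomic tactic, which is exactly what lets a UI place the execution point inside a structured script instead of executing a whole tactical atomically.

```python
# A toy model of fine-grained tactical evaluation. An LCF 'THEN' would run
# both tactics atomically; here every atomic step leaves a visible state.
from typing import Callable, List

Goal = str
Tactic = Callable[[Goal], List[Goal]]  # a tactic maps one goal to subgoals

class StepwiseScript:
    """Evaluates a sequence of tactics one step at a time, exposing the
    intermediate proof state after each step."""
    def __init__(self, goals: List[Goal], steps: List[Tactic]) -> None:
        self.goals = goals        # open goals (the proof state)
        self.steps = list(steps)  # remaining atomic tactics

    def step(self) -> None:
        """Run one atomic tactic on the first open goal."""
        tac = self.steps.pop(0)
        first, rest = self.goals[0], self.goals[1:]
        self.goals = tac(first) + rest

# Hypothetical toy tactics for illustration:
intro = lambda g: [g + ".body"]                 # peel a binder
split = lambda g: [g + ".left", g + ".right"]   # split a conjunction

script = StepwiseScript(["thesis"], [intro, split])
script.step(); print(script.goals)  # ['thesis.body']
script.step(); print(script.goals)  # ['thesis.body.left', 'thesis.body.right']
```

The design point is that suspendable evaluation, not the combinators themselves, is what reconciles structured scripts with script-management interfaces.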
End-User Development Success Factors and their Application to Composite Web Development Environments
Abstract:
The Future Internet is expected to be composed of a mesh of interoperable Web services accessed from all over the Web. This approach has not yet caught on, since global user-service interaction is still an open issue. Successful composite applications rely on heavyweight service orchestration technologies that raise the bar far above end-user skills. The weakness lies in the abstraction of the underlying service front-end architecture rather than in the infrastructure technologies themselves. In our opinion, the best approach is to offer end-to-end composition, from user interface to service invocation, along with an understandable abstraction of the building blocks and a visual composition technique. In this paper we formalize our vision of the next-generation front-end Web technology that will enable integrated access to services, contents and things in the Future Internet. We present a novel reference architecture designed to empower non-technical end users to create and share their own self-service composite applications. A tool implementing this architecture has been developed as part of the European FP7 FAST project and the EzWeb project, allowing us to validate the rationale behind our approach.
Abstract:
In this introductory chapter we put into context and briefly outline the work presented in detail in the rest of the dissertation. We consider this work divided into two main parts. The first part is the Firenze Framework, a knowledge level description framework rich enough to express the semantics required for describing both semantic Web services and semantic Grid services. We start by defining what the Semantic Grid is, its relation to the Semantic Web, and the possibility of their convergence, since both initiatives have become mainly service-oriented. We also introduce the main motivations for creating this framework: one is to provide a valid description framework that works at the knowledge level; the other is to provide a description framework that takes into account the characteristics of Grid services so as to describe them properly. The other part of the dissertation is devoted to Vega, an event-driven architecture that, by means of the proposed knowledge level description framework, is able to achieve large-scale provisioning of knowledge-intensive services. In this introductory chapter we portray the anatomy of a generic event-driven architecture and briefly enumerate its main characteristics, which are the reason for our choice.
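As a companion to that anatomy, the following minimal Python sketch (names are illustrative, not taken from Vega) shows the defining trait of an event-driven architecture: emitters publish events to a broker by topic and are fully decoupled from the consumers that react to them, which is what makes such architectures scale.

```python
# A minimal event-driven skeleton: emitters, a broker, and subscribers.
from collections import defaultdict
from typing import Any, Callable, DefaultDict, List

class EventBroker:
    def __init__(self) -> None:
        self.subscribers: DefaultDict[str, List[Callable[[Any], None]]] = \
            defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, event: Any) -> None:
        # The emitter only knows the topic, never the consumers.
        for handler in self.subscribers[topic]:
            handler(event)

broker = EventBroker()
broker.subscribe("service.requested", lambda e: print("provisioning", e))
broker.publish("service.requested", {"service": "knowledge-intensive-task"})
```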
Abstract:
In this paper, we present the design and implementation of a prototype Smart Parking Service system based on Wireless Sensor Networks (WSNs) that allows vehicle drivers to find free parking spaces effectively. The proposed scheme consists of a wireless sensor network, an embedded web server, a central web server, and a mobile phone application. In the system, low-cost wireless sensor network modules are deployed in each parking slot, with one sensor node per slot. The state of each parking slot is detected by its sensor node and reported periodically to the embedded web server via the deployed wireless sensor network. This information is sent to the central web server over Wi-Fi in real time, so that vehicle drivers can find vacant parking slots using standard mobile devices.
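To make the data flow concrete, here is a minimal Python sketch of a single sensor node's reporting loop; the endpoint address, payload format and reporting period are assumptions for illustration, not taken from the paper.

```python
# One parking-slot sensor node: sample the slot state periodically and
# push it to the embedded web server, which aggregates and forwards the
# states to the central server over Wi-Fi.
import json
import time
import urllib.request

EMBEDDED_SERVER = "http://192.168.0.10/slots"  # hypothetical embedded web server

def read_slot_state() -> bool:
    """Stand-in for the real sensor driver: True if the slot is occupied."""
    return False

def report_forever(slot_id: int, period_s: float = 5.0) -> None:
    while True:
        payload = json.dumps({"slot": slot_id, "occupied": read_slot_state()})
        req = urllib.request.Request(
            EMBEDDED_SERVER, data=payload.encode(),
            headers={"Content-Type": "application/json"})
        try:
            urllib.request.urlopen(req, timeout=2)
        except OSError:
            pass  # WSN links are lossy; the next period retries
        time.sleep(period_s)
```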
Abstract:
Citizen science is the result of involving all kinds of people in scientific research: they can participate in a given experiment by analyzing or collecting data. No scientific training is needed to take part; anyone can contribute. Citizen science has become an element to be reckoned with for scientific tasks that require great dedication, or whose sheer volume of work makes them nearly impossible for a single person or a small working group to carry out. The GLORIA project (GLObal Robotic-telescopes Intelligent Array) is the world's first free-access network of robotic telescopes; it lets users participate in astronomical research by observing with robotic telescopes and/or by analyzing data that other users have acquired with GLORIA or taken from other free-access databases. To contribute to this initiative, we proposed a web platform, to become part of the GLORIA project, on which astronomical experiments can be carried out. With the aim of promoting science and collaborative learning, the platform is built as a web application that runs on Facebook. The experiments are provided by the GLORIA telescope network through web services and are defined in XML. The web application receives the XML description of an experiment, interprets it, and renders it on Facebook so that potential users can perform the experiment. The results of the experiments are sent to a free-access database managed by the GLORIA project for later analysis by experts.
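To illustrate the interpretation step, the following Python sketch parses a toy XML experiment description and maps it to abstract form widgets; the XML schema shown is invented for illustration and differs from GLORIA's real experiment definitions.

```python
# Interpret an XML experiment description and turn it into UI elements
# that the front end (here, the Facebook app) can render as a form.
import xml.etree.ElementTree as ET

EXPERIMENT_XML = """
<experiment name="sun-spotting">
  <parameter id="telescope" type="choice">TAD,BOOTES-2</parameter>
  <parameter id="exposure" type="number">30</parameter>
</experiment>
"""

def render_form(xml_text: str) -> list:
    """Map each <parameter> to a (widget-kind, id, default) triple."""
    root = ET.fromstring(xml_text)
    widgets = []
    for param in root.iter("parameter"):
        kind = "select" if param.get("type") == "choice" else "input"
        widgets.append((kind, param.get("id"), param.text))
    return widgets

print(render_form(EXPERIMENT_XML))
# [('select', 'telescope', 'TAD,BOOTES-2'), ('input', 'exposure', '30')]
```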
Abstract:
The Web currently provides an immense collection of services (WS-*, RESTful, OGC WFS), normally exposed through standards that specify how to locate and invoke them. These services are usually described with mostly textual information and without formal descriptions; that is, existing service descriptions stay at a syntactic level. To make such services easier to understand and use, we need to annotate them formally, by means of descriptive metadata. The objective of this thesis is to propose an approach for the semantic annotation of Web services in the geospatial domain. Our approach automates some stages of the annotation process by combining ontological resources and third-party services, and it has been successfully evaluated with a set of geospatial services. The main contribution of this work is the partial automation of the semantic annotation process for RESTful and WFS services, which improves on the current state of the art in this area. The detailed list of contributions is:
• A model for representing Web services from both the syntactic and the semantic point of view, covering both schema and instances.
• A method for annotating Web services using ontological and external resources.
• A system that implements the proposed annotation process.
• A gold standard for the semantic annotation of RESTful and OGC WFS services, and algorithms for evaluating the annotations.
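One stage of such a pipeline can be sketched as matching the syntactic elements of a service description against labels of ontology concepts. The following Python sketch is a toy stand-in: the ontology, its labels and the string-similarity scoring are invented for illustration, not the thesis's actual method.

```python
# Annotate service parameters by fuzzy-matching them to ontology labels.
from difflib import SequenceMatcher

GEO_ONTOLOGY = {  # hypothetical concept URIs and their labels
    "http://example.org/geo#Latitude": ["latitude", "lat"],
    "http://example.org/geo#Longitude": ["longitude", "lon", "lng"],
    "http://example.org/geo#FeatureType": ["typename", "feature type"],
}

def annotate(parameter: str, threshold: float = 0.8):
    """Return (concept, score) pairs whose label is close to the parameter."""
    hits = []
    for concept, labels in GEO_ONTOLOGY.items():
        score = max(SequenceMatcher(None, parameter.lower(), label).ratio()
                    for label in labels)
        if score >= threshold:
            hits.append((concept, round(score, 2)))
    return sorted(hits, key=lambda h: -h[1])

print(annotate("lat"))       # [('http://example.org/geo#Latitude', 1.0)]
print(annotate("typeName"))  # [('http://example.org/geo#FeatureType', 1.0)]
```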
Abstract:
The author of this project is a recent member of the SoloBoulder association, dedicated to the climbing discipline of bouldering, news, multimedia content, the promotion of a team of climbers, and the defense of environmental values in the mountains. Its main content distribution channel is a web page that predates this project. The association has detected a scarcity of good-quality resources on the internet regarding guides to bouldering areas, which motivated this final-year project. The general objective is the development of a new application that provides users worldwide with an interactive guide to bouldering areas and other points of interest, a social network that allows the cooperative and organic creation of content, and web services for consuming the information from other platforms or organizations. The new software is independent of the previous SoloBoulder web page, but both parts are integrated under the same domain and appearance. The new application offers climbers and tourists a quality informative and interactive service, with which we hope to increase the number of visits across the whole web site, expand the dissemination of environmental values, diversify bouldering areas and regulate overcrowded ones, encourage the sport, and offer climbers an opportunity for self-promotion. A strong motivation for the author was also the process of research and training in technologies, architectural design patterns, and working methodologies adapted to current trends in software engineering, with special curiosity about the web world. In this regard we can highlight: project working methodologies, project analysis, software architectures, software design, databases, programming and good practices, security, graphical web interfaces, graphic design, Web Performance Optimization, Search Engine Optimization, etc. In summary, this project constitutes the learning and practice of diverse knowledge acquired during its execution, as well as the consolidation of subjects studied in the degree. In addition, the product developed offers a quality service to users and favors the sport and the self-promotion of climbers.
Abstract:
Background: Semantic Web technologies have been widely applied in the life sciences, for example by data providers such as OpenLifeData and through web services frameworks such as SADI. The recently reported OpenLifeData2SADI project offers access to the vast OpenLifeData data store through SADI services. Findings: This article describes how to merge data retrieved from OpenLifeData2SADI with other SADI services using the Galaxy bioinformatics analysis platform, thus making this semantic data more amenable to complex analyses. This is demonstrated with a working example, which is made distributable and reproducible through a Docker image that includes the SADI tools, along with the data and workflows that constitute the demonstration. Conclusions: The combination of Galaxy and Docker offers a solution for faithfully reproducing and sharing complex data-retrieval and analysis workflows based on the SADI Semantic Web service design patterns.
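The SADI interaction pattern that such tools wrap is plain HTTP: RDF describing the input subjects is POSTed to a service, which returns RDF with the output properties attached. A minimal Python sketch follows; the service URL is a placeholder, and the third-party rdflib package is assumed.

```python
# Generic SADI-style call: POST input RDF, parse the returned output RDF.
import urllib.request
from rdflib import Graph  # third-party: pip install rdflib

def call_sadi(service_url: str, input_rdf: str) -> Graph:
    req = urllib.request.Request(
        service_url, data=input_rdf.encode(),
        headers={"Content-Type": "application/rdf+xml",
                 "Accept": "application/rdf+xml"})
    with urllib.request.urlopen(req) as resp:
        out = Graph()
        out.parse(data=resp.read().decode(), format="xml")
        return out  # output triples attached to the input subjects
```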
Abstract:
Roiling financial markets, constantly changing tax law, and the increasing complexity of planning transactions increase the demand for aggregated family wealth management (FWM) services. However, the current trend in developing such advisory systems focuses mainly on the financial or investment side. In addition, existing systems lack flexibility and are hard to integrate with other organizational information systems, such as CRM systems. In this paper, we propose a novel architecture for Web-service-agent-based FWM systems. Multiple intelligent agents are wrapped as Web services and can communicate with each other via Web service protocols. On the one hand, these agents can collaborate with each other and provide comprehensive FWM advice. On the other hand, each service can work independently to achieve its own tasks. A prototype system for supporting financial advice is also presented to demonstrate the advantages of the proposed Web-service-agent-based FWM system architecture.
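The core pattern, an intelligent agent wrapped as a Web service so it can act alone or be orchestrated with other agent services, can be sketched as follows in Python; the agent logic and route are invented for illustration, not taken from the paper.

```python
# One FWM agent exposed as a tiny Web service using only the stdlib.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def tax_agent(portfolio: dict) -> dict:
    """Toy stand-in for one specialised FWM agent."""
    return {"advice": "harvest losses"
            if portfolio.get("unrealised", 0) < 0 else "hold"}

class AgentService(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        reply = json.dumps(tax_agent(body)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply)

if __name__ == "__main__":
    # Other agent services (or an orchestrator) POST portfolios here.
    HTTPServer(("localhost", 8080), AgentService).serve_forever()
```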
Abstract:
When constructing and using environmental models, it is typical that many of the inputs to the models will not be known perfectly. In some cases it will be possible to make observations, or occasionally to use physics-based uncertainty propagation, to ascertain the uncertainty on these inputs. However, such observations are often unavailable or even impossible, and another approach to characterising the uncertainty on the inputs must be sought. Even when observations are available, if the analysis is being carried out within a Bayesian framework then prior distributions will have to be specified. One option for gathering, or at least estimating, this information is to employ expert elicitation. Expert elicitation is well studied within statistics and psychology and involves the assessment of the beliefs of a group of experts about an uncertain quantity (for example an input or parameter within a model), typically in terms of obtaining a probability distribution. One of the challenges in expert elicitation is to minimise the biases that might enter into the judgements made by the individual experts, and then to come to a consensus decision within the group of experts. Effort is made in the elicitation exercise to prevent biases clouding the judgements through well-devised questioning schemes. It is also important that, when reaching a consensus, the experts are exposed to the knowledge of the others in the group.

Within the FP7 UncertWeb project (http://www.uncertweb.org/), there is a requirement to build a Web-based tool for expert elicitation. In this paper we discuss some of the issues of building a Web-based elicitation system, covering both the technological aspects and the statistical and scientific issues. In particular, we demonstrate two tools: a Web-based system for the elicitation of continuous random variables, and a system designed to elicit uncertainty about categorical random variables in the setting of landcover classification uncertainty. The first of these is a generic tool developed to elicit uncertainty about univariate continuous random variables. It is designed to be used within an application context and extends the existing SHELF method, adding a web interface and access to metadata. The tool is developed so that it can be readily integrated with environmental models exposed as web services. The second example was developed for the TREES-3 initiative, which monitors tropical landcover change through ground-truthing at confluence points. It allows experts to validate the accuracy of automated landcover classifications using site-specific imagery and local knowledge. Experts may provide uncertainty information at various levels: from a general rating of their confidence in a site validation to a numerical ranking of the possible landcover types within a segment.

A key challenge in the web-based setting is the design of the user interface and the method of interaction between the problem owner and the problem experts. We show the workflow of the elicitation tool, and show how we can represent the final elicited distributions and confusion matrices using UncertML, ready for integration into uncertainty-enabled workflows. We also show how the metadata associated with the elicitation exercise is captured and can be referenced from the elicited result, providing crucial lineage information and thus traceability in the decision-making process.
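For the continuous case, the essence of turning elicited judgements into a distribution can be sketched as fitting parametric quantiles to the expert's stated ones. The Python sketch below is a simplification (SHELF offers richer fitting and feedback, and scipy is assumed): it fits a normal distribution to three elicited quantiles by least squares.

```python
# Fit a normal prior to an expert's elicited 5%, 50% and 95% quantiles.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

elicited = {0.05: 12.0, 0.50: 20.0, 0.95: 31.0}  # expert's judgements

def loss(params):
    mu, sigma = params
    qs = norm.ppf(list(elicited.keys()), loc=mu, scale=abs(sigma))
    return float(np.sum((qs - np.array(list(elicited.values()))) ** 2))

fit = minimize(loss, x0=[20.0, 5.0])
mu, sigma = fit.x[0], abs(fit.x[1])
print(f"elicited prior ~ N({mu:.1f}, {sigma:.1f}^2)")
```

In a group setting the same fit would be repeated per expert, with the fitted curves fed back as the starting point for reaching consensus.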
Abstract:
Component-based development (CBD) has become an important emerging topic in the software engineering field. It promises long-sought-after benefits such as increased software reuse, reduced development time to market and, hence, reduced software production cost. Despite this huge potential, the lack of reasoning support and of development environments for component modeling and verification may hinder its adoption. Methods and tools that can support component model analysis are highly appreciated by industry. Such tool support should be fully automated as well as efficient. At the same time, the reasoning tool should scale well, as it may need to handle the hundreds or even thousands of components that a modern software system may have. Furthermore, a distributed environment that can effectively manage and compose components is also desirable. In this paper, we present an approach to the modeling and verification of a newly proposed component model using Semantic Web languages and their reasoning tools. We use the Web Ontology Language and the Semantic Web Rule Language to precisely capture the inter-relationships and constraints among the entities in a component model. Semantic Web reasoning tools are deployed to perform automated analysis of the component models. Moreover, we also propose a service-oriented architecture (SOA)-based Semantic Web environment for CBD. The adoption of Semantic Web services and SOA makes our component environment more reusable, scalable, dynamic and adaptive.
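As a flavour of such machine-checkable component models, the following Python sketch expresses require/provide relationships in RDF and detects an unsatisfied constraint; rdflib and SPARQL are stand-ins for the OWL/SWRL reasoning tools used in the paper, and the cm: vocabulary is invented for illustration.

```python
# Capture component relationships as triples, then check a constraint.
from rdflib import Graph  # third-party: pip install rdflib

TTL = """
@prefix cm: <http://example.org/cm#> .
cm:Billing  cm:requires cm:IPayment .
cm:Payment  cm:provides cm:IPayment .
cm:Shipping cm:requires cm:ITracking .
"""

g = Graph()
g.parse(data=TTL, format="turtle")

# Constraint: every required interface must be provided by some component.
unsatisfied = g.query("""
    PREFIX cm: <http://example.org/cm#>
    SELECT ?component ?interface WHERE {
        ?component cm:requires ?interface .
        FILTER NOT EXISTS { ?provider cm:provides ?interface }
    }""")

for component, interface in unsatisfied:
    print(f"unbound requirement: {component} needs {interface}")
# prints: unbound requirement: http://example.org/cm#Shipping needs
#         http://example.org/cm#ITracking
```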
Abstract:
Models are central tools for modern scientists and decision makers, and there are many existing frameworks to support their creation, execution and composition. Many frameworks are based on proprietary interfaces and do not lend themselves to the integration of models from diverse disciplines. Web-based systems, or systems based on web services, such as Taverna and Kepler, allow composition of models based on standard web service technologies. At the same time the Open Geospatial Consortium has been developing its own service stack, which includes the Web Processing Service, designed to facilitate the execution of geospatial processes, including complex environmental models. The current Open Geospatial Consortium service stack employs Extensible Markup Language as the default data exchange standard, and widely used encodings such as JavaScript Object Notation can often only be used when incorporated within Extensible Markup Language. Similarly, no successful engagement of the Web Processing Service standard with the well-supported technologies of Simple Object Access Protocol and Web Services Description Language has been seen. In this paper we propose a pure Simple Object Access Protocol/Web Services Description Language processing service which addresses some of the issues with the Web Processing Service specification and brings us closer to achieving a degree of interoperability between geospatial models, and thus realising the vision of a useful 'model web'.
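The practical benefit of a WSDL contract is that generic clients can generate typed bindings to a model instead of hand-building XML requests as a bare WPS endpoint requires. A minimal sketch using the third-party zeep SOAP client follows; the WSDL URL and operation name are hypothetical.

```python
# Call a SOAP/WSDL-described processing service with generated bindings.
from zeep import Client  # third-party: pip install zeep

# Hypothetical geospatial model exposed with a WSDL contract:
client = Client("http://models.example.org/erosion?wsdl")

# Operations and their argument types are discovered from the WSDL itself,
# so no hand-written XML handling is needed on the client side.
result = client.service.RunModel(rainfall=42.0, soilType="clay")
print(result)
```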
Abstract:
Semantic Web Services, one of the most significant research areas within the Semantic Web vision, have attracted increasing attention from both the research community and industry. The Web Service Modelling Ontology (WSMO) has been proposed as an enabling framework for the total or partial automation of the tasks (e.g., discovery, selection, composition, mediation, execution and monitoring) involved in both intra- and inter-enterprise integration of Web services. To support the standardisation and tooling of WSMO, a formal model of the language is highly desirable. As several variants of WSMO have been proposed by the WSMO community, and these are still under development, the syntax and semantics of WSMO should be formally defined to facilitate easy reuse and future development. In this paper we present a formal Object-Z model of WSMO, in which different aspects of the language are precisely defined within one unified framework. This model not only provides an unambiguous specification that can be used to develop tools and facilitate future development but, as demonstrated in this paper, can also be used to identify and eliminate errors present in the existing documentation.
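To give a flavour of the notation, here is a minimal, invented Object-Z fragment (not from the paper's actual model, and assuming the `oz` LaTeX package): a class with a state schema and one operation guarded by a precondition, which is the shape such a unified WSMO model takes.

```latex
% An invented Object-Z fragment, assuming the `oz' LaTeX package.
% A WSMO-like Goal class: state schema plus one guarded operation.
\begin{class}{Goal}
\begin{state}
  capability : Capability \\
  interfaces : \power Interface
\end{state}
\begin{op}{AddInterface}
  \Delta (interfaces) \\
  i? : Interface
\where
  i? \notin interfaces \\
  interfaces' = interfaces \cup \{ i? \}
\end{op}
\end{class}
```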
Abstract:
This thesis provides a set of tools for managing uncertainty in Web-based models and workflows. To support the use of these tools, the thesis first provides a framework for exposing models through Web services. An introduction to uncertainty management, Web service interfaces, and workflow standards and technologies is given, with a particular focus on the geospatial domain. An existing specification for exposing geospatial models and processes, the Web Processing Service (WPS), is critically reviewed, and a processing service framework is presented as a solution to usability issues with the WPS standard. The framework implements support for Simple Object Access Protocol (SOAP), Web Service Description Language (WSDL) and JavaScript Object Notation (JSON), allowing models to be consumed by a variety of tools and software. Strategies for communicating with models from Web service interfaces are discussed, demonstrating the difficulty of exposing existing models on the Web. The thesis then reviews existing mechanisms for uncertainty management, with an emphasis on emulator methods for building efficient statistical surrogate models. A tool is developed to solve accessibility issues with such methods, by providing a Web-based user interface and backend to ease the process of building and integrating emulators. These tools, plus the processing service framework, are applied to a real case study as part of the UncertWeb project. The usability of the framework is demonstrated with the implementation of a Web-based workflow for predicting future crop yields in the UK, which also demonstrates the abilities of the tools for emulator building and integration. Future directions for the development of the tools are discussed.
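The emulator idea at the heart of these tools can be sketched briefly: fit a statistical surrogate to a few runs of an expensive model, then predict (with uncertainty) at new inputs instead of re-running the model. The Python sketch below uses a Gaussian process from scikit-learn; the toy model and kernel choice are illustrative assumptions, not the thesis's configuration.

```python
# A Gaussian-process emulator as a cheap surrogate for an expensive model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_model(x):          # stand-in for a long-running simulator
    return np.sin(x) + 0.1 * x

X_design = np.linspace(0, 10, 8).reshape(-1, 1)   # a few training runs
y_design = expensive_model(X_design).ravel()

emulator = GaussianProcessRegressor(kernel=RBF(length_scale=2.0))
emulator.fit(X_design, y_design)

# Predictions come with a standard deviation, i.e. emulator uncertainty:
X_new = np.array([[3.3], [7.1]])
mean, std = emulator.predict(X_new, return_std=True)
for x, m, s in zip(X_new.ravel(), mean, std):
    print(f"f({x:.1f}) ~ {m:.2f} +/- {2 * s:.2f}")
```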