910 results for web service django vagrant reproducible research reproducibility
Abstract:
This thesis describes the main features of the service-oriented programming language Jolie, analyzing its syntax in depth and offering examples of the use of its operators and constructs. It gives an overview of SOC, SOA, Web Services, Cloud Computing, Orchestration, Choreography, Deployment and Behaviour, the last two examined in separate chapters. The thesis closes with an example of converting WSDL services into Jolie, producing a usage example for the converted Web Service. The document also touches on the historical development of the language and its developers, as well as the APIs the language provides.
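The closing example converts a WSDL contract into Jolie; for comparison, here is a minimal sketch of consuming a WSDL-described service from Python with the zeep SOAP client. The WSDL URL and the Add operation are hypothetical placeholders, not taken from the thesis.

```python
# A hedged sketch of invoking a WSDL-described Web Service; the service
# URL and operation name are invented for illustration.
from zeep import Client

client = Client("http://example.org/calculator?wsdl")  # read the WSDL contract
result = client.service.Add(a=2, b=3)                   # call a declared operation
print(result)
```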
Abstract:
The ever-growing number of providers offering web-service-based services has highlighted one of the limits of this approach: the impossibility of automating the tasks of discovering, invoking and orchestrating services. This goal is out of reach because of the lack of machine-understandable information through which a software agent could choose among the services on offer. The failure of "intelligent search" for a published service lies in how services are modelled. The languages currently available allow a service to be modelled only at the syntactic level: defining the operations offered, the types of parameters accepted and the type of output produced is not enough to understand what the service can do. Semantic web services overcome this limit by providing a semantic stack, whose job is to capture information about the services, how they work and the goals they can achieve, organizing this knowledge into ontologies. Formalizing ontological models and integrating them with existing services is one of the most interesting problems to have attracted the attention of numerous studies in the field. In recent years many solutions have been proposed. Among them, two main lines of development have seen intense experimental activity. The first aims to formally model the knowledge tied to the exposed services; the second integrates existing services with new semantic structures so as to preserve the infrastructure already in place. Both strands aim to supply expert systems with the knowledge needed to automate service discovery based on client requirements, enabling dynamic service composition based on a useful interaction that is independent of the protocols constraining the transport of information.
Abstract:
The goal of this thesis is to determine the best Python web framework among the three main contenders: Django, web2py and TurboGears. It begins with a general analysis of web frameworks, in particular those with an MVC architecture, since that is the architecture used by Django, web2py and TurboGears. Next, the overall structure and core components of each framework are analyzed. However, to decide which is the best, one must also examine how they handle other areas of web development, so all the tools each framework provides are analyzed as well. Finally, conclusions are drawn, clarifying which web framework is best for a developer and why, summarizing the characteristics of all three.
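For orientation, the following minimal Django sketch shows the kind of MVC-style separation the comparison revolves around: a view function and the URL route that dispatches to it. The names and module layout are illustrative only.

```python
# views.py -- a minimal Django view: take a request, return a response.
from django.http import JsonResponse

def hello(request):
    return JsonResponse({"framework": "Django", "greeting": "hello"})

# urls.py -- route a URL to the view (import paths are illustrative).
from django.urls import path

urlpatterns = [
    path("hello/", hello),
]
```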
Abstract:
Community research fatigue has been understudied within the context of community-university relationships and knowledge production. Community-based research (CBR), often occurring within a limited geography and population, increases the possibility that community members feel exhausted or overwhelmed by university research, particularly when they do not see tangible results from research activities. Prompted by informal stories of research fatigue from community members, a small graduate student team sought to understand the extent to which community members experienced research fatigue, and what factors contributed to or relieved feelings of research fatigue. In order to explore these dimensions of research fatigue, semi-structured, face-to-face interviews were conducted with 21 participants, including community members (n = 9), staff and faculty (n = 10), and students (n = 2). The objective of the research was to identify university practices that contribute to research fatigue and how to address the issue at the university level. Qualitative data analysis revealed several important actionable findings concerning the structure and conduct of community-based research, structured reciprocity and impact, and the role of trust in research. This study's findings are used to assess the quality of Clark University's research relationship with its adjacent community. Recommendations are offered, such as improving partnerships, strengthening the impact of CBR, and developing clear principles of practice.
Abstract:
We present the cacher and CodeDepends packages for R, which provide tools for (1) caching and analyzing the code for statistical analyses and (2) distributing these analyses to others in an efficient manner over the web. The cacher package takes objects created by evaluating R expressions and stores them in key-value databases. These databases of cached objects can subsequently be assembled into “cache packages” for distribution over the web. The cacher package also provides tools to help readers examine the data and code in a statistical analysis and reproduce, modify, or improve upon the results. In addition, readers can easily conduct alternate analyses of the data. The CodeDepends package provides complementary tools for analyzing and visualizing the code for a statistical analysis and this functionality has been integrated into the cacher package. In this chapter we describe the cacher and CodeDepends packages and provide examples of how they can be used for reproducible research.
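The evaluate-once, store-by-key workflow that cacher implements for R expressions can be sketched in Python as well; the snippet below illustrates the idea using only the standard library and is not the cacher package's actual API.

```python
# Key an expression by a hash of its source text, evaluate it once, and
# serve later calls from a key-value database (shelve stands in for
# cacher's cache database). Illustrative only.
import hashlib
import pickle
import shelve

def cached_eval(expr, store="cache.db"):
    key = hashlib.sha1(expr.encode("utf-8")).hexdigest()
    with shelve.open(store) as db:
        if key not in db:
            db[key] = pickle.dumps(eval(expr))  # evaluate only on a miss
        return pickle.loads(db[key])

print(cached_eval("sum(range(1000))"))  # computed and stored
print(cached_eval("sum(range(1000))"))  # served from the database
```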
Abstract:
Web 2.0 opens up new ways for researchers to handle knowledge and information: searching for information and sources, exchanging knowledge with others, managing resources and creating one's own content on the web are all simple and inexpensive. This article discusses the significance of Web 2.0 for dealing with knowledge and information and shows how the cooperation of many individuals makes it possible to create new knowledge and innovations. It discusses the influence of Web 2.0 on science and the possible advantages and disadvantages of its use, and gives a brief overview of studies examining the use of Web 2.0 in the general population. The empirical part of the article presents the method and results of the survey study "Wissenschaftliches Arbeiten im Web 2.0" ("Scholarly work in Web 2.0"), in which early-career researchers in Germany were asked about their use of Web 2.0 for their own scholarly work. The results show that Wikipedia in particular is used intensively to very intensively by a large share of respondents as a starting point for literature research. Active use of Web 2.0, for example writing one's own blog or contributing to the online encyclopedia Wikipedia, is still limited; many services are unknown or viewed rather skeptically, and the local desktop computer has not yet been replaced by the web as the central storage location.
Abstract:
A web service is a collection of industry standards that enables reusability of services and interoperability of heterogeneous applications. The UMLS Knowledge Source (UMLSKS) Server provides remote access to the UMLSKS and related resources. We propose a Web Services Architecture that encapsulates the UMLSKS-API and makes it available in distributed and heterogeneous environments. This is a first step towards intelligent and automatic discovery and invocation of UMLS services by computer systems in distributed environments such as the web.
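The encapsulation pattern described, a local API placed behind a web-service facade, can be sketched generically. In the Flask snippet below, the /concept route and the lookup_concept stub are assumptions made for illustration; the abstract does not specify the UMLSKS-API's operations.

```python
# A generic facade sketch: expose a local lookup function over HTTP.
# The route and the stubbed lookup are hypothetical, not the UMLSKS-API.
from flask import Flask, jsonify, request

app = Flask(__name__)

def lookup_concept(term):
    # Stand-in for a call into the wrapped knowledge-source API.
    return {"term": term, "cui": "C0000000"}

@app.route("/concept")
def concept():
    return jsonify(lookup_concept(request.args.get("term", "")))

if __name__ == "__main__":
    app.run()
```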
Abstract:
RESTful services have gained a lot of attention recently, even in the enterprise world, which is traditionally more web-service-centric. Data-centric RESTful services, previously known mainly from web environments, have established themselves as a second paradigm complementing functional WSDL-based SOA. In the Internet of Things, and in particular when talking about sensor motes, the Constrained Application Protocol (CoAP) is currently in the focus of both research and industry. In the enterprise world, a protocol called OData (Open Data Protocol) is becoming the future RESTful data access standard. To integrate sensor motes seamlessly into enterprise networks, an embedded OData implementation on top of CoAP is desirable, one that does not require an intermediary gateway device. In this paper we introduce and evaluate an embedded OData implementation. We evaluate the OData protocol in terms of performance and energy consumption, considering different data encodings, and compare it to a pure CoAP implementation. We were able to demonstrate that the additional resources needed for an OData/JSON implementation are reasonable when aiming for enterprise interoperability, where OData is suggested to solve both the semantic and technical interoperability problems we have today when connecting systems.
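Client-side, fetching an OData-style JSON resource over CoAP can be sketched with the aiocoap library; the mote address and resource path below are hypothetical, and the paper's embedded implementation sits on the server side.

```python
# Fetch a JSON-encoded resource from a CoAP server; the address and
# path are invented for illustration.
import asyncio
import json

from aiocoap import Context, Message, GET

async def main():
    ctx = await Context.create_client_context()
    request = Message(code=GET, uri="coap://sensor-mote.local/odata/Readings")
    response = await ctx.request(request).response  # await the CoAP reply
    print(json.loads(response.payload))             # OData entities as JSON

asyncio.run(main())
```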
Abstract:
Interlinking text documents with Linked Open Data enables the Web of Data to be used as background knowledge within document-oriented applications such as search and faceted browsing. As a step towards interconnecting the Web of Documents with the Web of Data, we developed DBpedia Spotlight, a system for automatically annotating text documents with DBpedia URIs. DBpedia Spotlight allows users to configure the annotations to their specific needs through the DBpedia Ontology and quality measures such as prominence, topical pertinence, contextual ambiguity and disambiguation confidence. We compare our approach with the state of the art in disambiguation, and evaluate our results in light of three baselines and six publicly available annotation systems, demonstrating the competitiveness of our system. DBpedia Spotlight is shared as open source and deployed as a Web Service freely available for public use.
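Since the service is deployed publicly, annotating a text is a single HTTP call; the sketch below uses the public endpoint as deployed at the time of writing, which may change.

```python
# Annotate a sentence with DBpedia URIs via the public Spotlight endpoint.
import requests

resp = requests.get(
    "https://api.dbpedia-spotlight.org/en/annotate",
    params={"text": "Berlin is the capital of Germany.", "confidence": 0.5},
    headers={"Accept": "application/json"},
)
for res in resp.json().get("Resources", []):
    print(res["@surfaceForm"], "->", res["@URI"])  # surface form and URI
```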
Abstract:
This paper proposes a new methodology focused on implementing cost-effective architectures on Cloud Computing systems. Using this methodology, the paper presents some disadvantages of systems based on single-Cloud architectures and offers advice to take into account when developing hybrid systems. The work also includes a validation of these ideas in a complete videoconference service developed with our research group. This service allows a large number of users per conference, multiple simultaneous conferences, different client software (requiring transcoding of audio and video flows), and provides services such as automatic recording. Furthermore, it offers different kinds of connectivity, including SIP clients and a client based on Web 2.0. The ideas proposed in this article are intended to be a useful resource for any researcher or developer who wants to implement cost-effective systems across several Clouds.
Abstract:
Compile-time program analysis techniques can be applied to Web service orchestrations to prove or check various properties. In particular, service orchestrations can be subjected to resource analysis, in which safe approximations of upper and lower resource usage bounds are deduced. A uniform analysis can be performed simultaneously for different generalized resources that can be directly correlated with cost- and performance-related quality attributes, such as invocations of partners, network traffic, number of activities, iterations, and data accesses. The resulting safe upper and lower bounds do not depend on probabilistic assumptions, and are expressed as functions of the size or length of data components from an initiating message, using a fine-grained structured data model that corresponds to the XML style of information structuring. The analysis is performed by transforming a BPEL-like representation of an orchestration into an equivalent program in another programming language for which the appropriate analysis tools already exist.
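The shape of such a bound, a closed-form function of the size of the initiating message, can be illustrated with a toy orchestration; the Python loop below is a stand-in for a BPEL-like activity, not the analysis itself.

```python
# Toy orchestration: one partner invocation per data item, so for a
# message with n items the lower and upper bounds coincide at cost(n) = n.
def invoke_partner(item):
    return {"processed": item}  # stand-in for a remote partner call

def orchestrate(message_items):
    results = []
    for item in message_items:  # one invocation per item
        results.append(invoke_partner(item))
    return results

items = ["a", "b", "c"]
print(len(orchestrate(items)))  # 3 invocations, matching cost(n) = n
```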
Abstract:
This Final Year Project covers the analysis, design and implementation of a web system that lets users familiarize themselves with the Human Development Index (HDI), published annually by the United Nations, offering a service for managing and downloading a mobile application related to that index. The mobile application is an educational game based on questions about the HDI of different countries, developed in parallel with this project. The web service implemented in this project supports downloading, administering and updating content, as well as interaction between users. The system consists of a web server, a database of users and content, and a web portal from which the mobile application can be downloaded, game statistics queried, and the HDI explored without playing. The advanced search engine developed for the HDI allows users to build skills and train on their own to improve their game results. System administrators can manage the portal content, the users requesting registration, and the functionality offered, i.e. game updates, forums and news. Installing the system on a web server allowed its successful verification, as well as the provision of the HDI information and awareness service, kept current with United Nations data, which was the original motivation for the project.
Abstract:
The use of semantic and Linked Data technologies for Enterprise Application Integration (EAI) has been increasing in recent years. Linked Data and Semantic Web technologies such as the Resource Description Framework (RDF) data model provide several key advantages over the current de-facto Web Service and XML-based integration approaches. The flexibility gained by representing the data in the more versatile RDF model using ontologies makes it possible to avoid complex schema transformations, makes data more accessible through Web standards, and prevents the formation of data silos. These three benefits give Linked Data-based EAI an edge. However, work still has to be done so that these technologies can cope with the particularities of EAI scenarios in areas such as data control, ownership, consistency, or accuracy. The first part of the paper introduces Enterprise Application Integration using Linked Data and the requirements EAI imposes on Linked Data technologies, focusing on one of the problems that arise in this scenario, the coreference problem, and presents a coreference service that supports the use of Linked Data in EAI systems. The proposed solution introduces the use of a context that aggregates a set of related identities, together with mappings from those identities to resources that reside in distinct applications and provide different views or aspects of the same entity. A detailed architecture of the Coreference Service is presented, explaining how it can be used to manage the contexts, identities, resources, and the applications to which they relate. The paper shows how the proposed service can be used in an EAI scenario through an example involving a dashboard that integrates data from different systems, along with the proposed workflow for registering and resolving identities. As most enterprise applications are driven by business processes and involve legacy data, the proposed approach can be easily incorporated into enterprise applications.
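The context-of-identities idea can be sketched in RDF terms with rdflib; the ex: vocabulary below is invented for illustration and is not the Coreference Service's actual schema.

```python
# One context node aggregates the identities that different applications
# use for the same entity; everything in the ex: namespace is invented.
from rdflib import Graph, Namespace, URIRef

EX = Namespace("http://example.org/coref/")
g = Graph()

context = EX["context/customer-42"]
crm_id = URIRef("http://crm.example.org/customers/42")
erp_id = URIRef("http://erp.example.org/partners/A-0042")

g.add((context, EX.aggregates, crm_id))  # identity as seen by the CRM
g.add((context, EX.aggregates, erp_id))  # identity as seen by the ERP

# Resolution: identities aggregated in the same context co-refer.
for identity in g.objects(context, EX.aggregates):
    print(identity)
```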
Abstract:
This document addresses the problems encountered while developing a platform for managing the learning guides of Universidad Politécnica de Madrid, focusing on the use of JavaScript technologies as well as the algorithms, plugins and auxiliary libraries created and used. Finally, it presents the results obtained from the analysis and implementation of the ideas set out in the document, together with conclusions and suggestions for future lines of work on this project.