803 results for composite web services
Abstract:
Scholarly communication has gained tremendous momentum over the past 10 to 15 years with the advent of the Internet and the World Wide Web. The web has transformed the ways in which people search, find, use and communicate information. Innovations in web technology since 2005 have brought about an array of new services and facilities and an enhanced version of the web known as Web 2.0. Web 2.0 provides a collaborative environment in which users can interact with information: it enables them to create, annotate, review, share, re-use and represent information in new ways, thereby optimizing information dissemination.
Abstract:
For most medium-sized and large companies, enterprise resource planning (ERP) systems are an essential part of the IT landscape for managing business data and business processes. Business data are represented in ERP systems as business objects. A business object can contain several attributes and, through associations with other business objects, span a business object graph. Existing interfaces allow business objects to be queried, particularly with respect to their attributes; queries that refer to an object's position within the business object graph, however, are often very difficult to express through these interfaces. Semantic technologies such as RDF and the graph-based query language SPARQL can be used to simplify such queries: SPARQL allows queries against business object graphs to be formulated far more compactly and intuitively than the existing interfaces do. The motivation for this thesis is to simplify certain queries against the SAP ERP system considered in this work by using SPARQL. ERP systems typically store business objects in relational databases. Providing SPARQL endpoints on top of relational databases is a long-studied field, and various approaches and tools exist that allow querying via SPARQL. Because of the complexity, size, and frequency of change of the ERP database schema, however, approaches that operate directly on the database schema cannot be used. A more practical approach is to realise the SPARQL endpoint on top of the existing interfaces. These are less complex than the database schema because they allow business objects to be queried directly, which considerably simplifies the definition of the mapping. The ERP system offers several interfaces that differ in structure, purpose, and underlying technology. Among them is an interface based on OData, a REST-based protocol for querying and manipulating data. Of the available interfaces, the OData interface has several advantages for realising a SPARQL endpoint: it defines a query language and a link-addressing mechanism with which the number of service calls needed to answer a query, and the amount of data to be transferred, can be reduced considerably. The goal of this thesis is to develop a method for realising a SPARQL endpoint on top of OData services. To this end, an architecture is first presented that can serve as the basis for implementing such a system. Starting from this architecture, the areas not yet covered by the current state of research are identified. To the best of our knowledge, this is the first work to investigate querying OData interfaces with SPARQL. As part of this work, a novel concept for the semantic description of OData services is introduced, which allows the data provided by the services to be mapped onto RDF graphs.
Building on the concepts for semantic description, an evaluation semantics is developed that defines how expressions of the SPARQL algebra are resolved against semantically annotated OData services. In the process, the data of all OData services needed to answer a query completely are determined. To retrieve the relevant data, concepts for generating the corresponding OData URIs were developed. The proposed method was implemented as a prototype and evaluated in two use cases against the sets of services relevant to the scenario under consideration. With the concepts presented here it is possible not only to realise a SPARQL endpoint for an ERP system: any data source that offers an OData interface can be queried with SPARQL. This makes large volumes of data that were previously inaccessible to semantic technologies available for integration with the Semantic Web. In particular, data sources whose integration with one another was previously impossible or difficult can now be integrated through federated query systems.
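As a rough illustration of the kind of translation the thesis describes, the Python sketch below shows how a single SPARQL basic graph pattern might be rewritten into one OData request that uses $filter and $expand to cut down the number of service calls and the data transferred. The endpoint, entity set, and property names (SalesOrders, GrossAmount, Customer, etc.) are invented for this example; the actual mapping in the thesis is driven by its semantic service descriptions.

```python
# Illustrative only: hypothetical OData endpoint and entity/property names.
# A SPARQL pattern asking for the customers of high-value sales orders ...
SPARQL_QUERY = """
SELECT ?order ?customerName WHERE {
    ?order a erp:SalesOrder ;
           erp:grossAmount ?amount ;
           erp:customer ?customer .
    ?customer erp:name ?customerName .
    FILTER (?amount > 100000)
}
"""

# ... could, under a suitable semantic description, be answered with a single
# OData call instead of one call per order plus one follow-up per customer:
BASE = "https://erp.example.com/odata/SalesService"  # hypothetical endpoint
odata_uri = (
    f"{BASE}/SalesOrders"
    "?$filter=GrossAmount gt 100000"   # pushes the SPARQL FILTER to the service
    "&$expand=Customer"                # follows the association in one request
    "&$select=OrderID,GrossAmount,Customer/Name"
)
print(odata_uri)
```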
Abstract:
Many online services access a large number of autonomous data sources and at the same time need to meet different user requirements. It is essential for these services to achieve semantic interoperability among the entities exchanging information. In the presence of a growing number of proprietary business processes, heterogeneous data standards, and diverse user requirements, it is critical that the services be implemented using adaptable, extensible, and scalable technology. The COntext INterchange (COIN) approach, inspired by goals similar to those of the Semantic Web, provides a robust solution. In this paper, we describe how COIN can be used to implement dynamic online services in which semantic differences are reconciled on the fly. We show that COIN is flexible and scalable by comparing it with several conventional approaches. For a given ontology, the number of conversions in COIN is quadratic in the number of distinctions of the semantic aspect with the most distinctions. These semantic aspects are modeled as modifiers in a conceptual ontology; in most cases the number of conversions is linear in the number of modifiers, which is significantly smaller than in the traditional hard-wired middleware approach, where the number of conversion programs is quadratic in the number of sources and data receivers. In the example scenario in the paper, the COIN approach needs only 5 conversions to be defined, whereas traditional approaches require between 20,000 and 100 million. COIN achieves this scalability by automatically composing all the required conversions from a small number of declaratively defined sub-conversions.
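To make the scalability claim concrete, here is a back-of-the-envelope calculation, assuming the conventional point-to-point approach needs roughly one conversion program per (source, receiver) pair, while COIN declares sub-conversions per distinction of each modifier. The counts of sources, receivers, and modifier distinctions below are invented; the paper's exact figures (20,000 to 100 million) depend on its specific example scenario.

```python
# Back-of-the-envelope comparison (illustrative numbers, not the paper's exact model).

sources, receivers = 100, 200  # hypothetical numbers of data sources and receivers
modifiers = {"currency": 5, "scaleFactor": 4, "dateFormat": 3}  # distinctions per aspect

# Hard-wired middleware: roughly one conversion program per source-receiver pair,
# i.e. quadratic in the number of communicating parties.
hardwired = sources * receivers
print(f"hard-wired conversions: {hardwired}")  # 20000

# COIN: declare sub-conversions per modifier; in the worst case quadratic in the
# number of distinctions of the largest modifier, often linear in practice.
worst_case = max(n * (n - 1) for n in modifiers.values())
typical = sum(modifiers.values())
print(f"COIN worst case: {worst_case}, typical: {typical}")
```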
Abstract:
Network connectivity is reaching further and further into the physical world. This is potentially transformative: it allows every object and service in the world to talk to one another, and to their users, through any networked interface, so that online services become the connective tissue of the physical world and physical objects become avatars of online services.
Abstract:
There is a lot of hype around the Internet of Things, along with talk of 100 billion devices within 10 years' time. The promise of innovative new services and efficiency savings is fueling interest in a wide range of potential applications across many sectors, including smart homes, healthcare, smart grids, smart cities, retail, and smart industry. The current reality, however, is one of fragmentation and data silos. W3C is seeking to fix that by exposing IoT platforms through the Web, with shared semantics and data formats as the basis for interoperability. This talk will address the abstractions needed to move from a Web of pages to a Web of things, and introduce the work being done on standards and on open source projects for a new breed of Web servers, ranging from microcontrollers to cloud-based server farms.
Speaker Biography: Dave Raggett. Dave has been involved at the heart of web standards since 1992, and has been part of the W3C Team since 1995. As well as working on standards, he likes to dabble with software, and more recently with IoT hardware. He has participated in a wide range of European research projects on behalf of W3C/ERCIM. He currently focuses on Web payments and on realising the potential of the Web of Things as an evolution from the Web of pages. Dave has a doctorate from the University of Oxford. He is a visiting professor at the University of the West of England, and lives in the UK in a small town near Bath.
Abstract:
A frequent assumption about Social Media is that its open nature leads to a representative view of the world. In this talk we consider bias occurring in the Social Web. We examine a case study of LiquidFeedback, a direct-democracy platform of the German Pirate Party, as well as models of (non-)discriminating systems. As a conclusion of this talk, we stipulate the need for Social Media systems to bias their workings according to social norms and to publish the bias they introduce.
Speaker Biography: Prof. Steffen Staab. Steffen studied computer science and computational linguistics in Erlangen (Germany), Philadelphia (USA) and Freiburg (Germany). Afterwards he worked as a researcher at Univ. Stuttgart/Fraunhofer and Univ. Karlsruhe before becoming a professor in Koblenz (Germany). Since March 2015 he has also held a chair for Web and Computer Science at the Univ. of Southampton, sharing his time between Southampton and Koblenz. In his research career he has managed to avoid almost all of the good advice that he now gives to his team members. Such advice includes focusing on research (vs. running a company) and concentrating on only one or two research areas (vs. considering ontologies, semantic web, social web, data engineering, text mining, peer-to-peer, multimedia, HCI, services, software modelling and programming, and more). Actually, though, improving how we understand and use text and data is a good common denominator for a lot of Steffen's professional activities.
Abstract:
The project to create a web application on the market profiles of the Canadian provinces consists of compiling information from various academic sources in order to build an electronic guide that gives Colombian exporters precise, up-to-date information on each of the Canadian provinces.
Abstract:
The iSAC project (Intelligent Citizen Assistance Service via the web) began in January 2006, drawing on new scientific knowledge about intelligent agents together with the application of Information and Communication Technologies (ICT) and search engines. At present, the citizen assistance service comprises two areas: face-to-face assistance at the offices and telephone assistance through the call center. Staffing and opening-hour limitations make this service lose effectiveness. The aim is to develop a product with technology capable of extending and improving the capacity and quality of citizen assistance in public administrations of any size. Even so, this project will be exploited especially by town councils, which citizens approach with all kinds of questions and doubts, often not restricted to local matters. More concretely, the goal is to automate citizen assistance through a web portal in order to provide a more effective service.
Abstract:
Many producers of geographic information are now disseminating their data using open web service protocols, notably those published by the Open Geospatial Consortium. There are many challenges inherent in running robust and reliable services at reasonable cost. Cloud computing provides a new kind of scalable infrastructure that could address many of these challenges. In this study we implement a Web Map Service for raster imagery within the Google App Engine environment. We discuss the challenges of developing GIS applications within this framework and the performance characteristics of the implementation. Results show that the application scales well to multiple simultaneous users and performance will be adequate for many applications, although concerns remain over issues such as latency spikes. We discuss the feasibility of implementing services within the free usage quotas of Google App Engine and the possibility of extending the approaches in this paper to other GIS applications.
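For readers unfamiliar with the protocol the study implements, a WMS GetMap call is simply a parameterised HTTP GET; the Python sketch below builds one. The endpoint URL and layer name are placeholders, and a real deployment (on Google App Engine or elsewhere) would answer such a request with a rendered raster image.

```python
from urllib.parse import urlencode

# Hypothetical endpoint; a GAE-hosted WMS would expose an equivalent URL.
WMS_ENDPOINT = "https://example.appspot.com/wms"

params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "raster_imagery",     # placeholder layer name
    "SRS": "EPSG:4326",
    "BBOX": "-10.0,49.0,2.0,61.0",  # minx,miny,maxx,maxy
    "WIDTH": "512",
    "HEIGHT": "512",
    "FORMAT": "image/png",
}
print(f"{WMS_ENDPOINT}?{urlencode(params)}")
```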
Abstract:
This paper describes the development and validation of a novel web-based interface for gathering feedback from building occupants about their environmental discomfort, including signs of Sick Building Syndrome (SBS). Gathering such feedback may enable environmental discomfort to be targeted down to the individual, as well as the early detection, and subsequent resolution by building services, of more complex issues such as SBS. The occupant's discomfort is interpreted and converted into air-conditioning system set points using Fuzzy Logic. Experimental results from a multi-zone air-conditioning test rig are included in this paper.
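The abstract does not give the paper's fuzzy rule base, but the general idea of fuzzy set-point conversion can be sketched: fuzzify a discomfort vote, apply simple rules, and defuzzify to a set-point adjustment. The membership shapes, rules, and the ±2 °C range below are invented for illustration only.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def setpoint_adjustment(vote):
    """Map an occupant comfort vote (-3 = too cold ... +3 = too hot)
    to a temperature set-point change in deg C (illustrative rules only)."""
    # Fuzzify the vote into three overlapping categories.
    cold = tri(vote, -3.5, -2.5, 0.0)
    ok   = tri(vote, -1.5,  0.0, 1.5)
    hot  = tri(vote,  0.0,  2.5, 3.5)
    # Rules: cold -> raise set point, ok -> keep, hot -> lower.
    # Defuzzify with a weighted average over the rule consequents.
    num = cold * (+2.0) + ok * 0.0 + hot * (-2.0)
    den = cold + ok + hot
    return num / den if den else 0.0

print(setpoint_adjustment(2.0))  # occupant feels hot -> negative adjustment
```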
Abstract:
As part of a large European coastal operational oceanography project (ECOOP), we have developed a web portal for the display and comparison of model and in situ marine data. The distributed model and in situ datasets are accessed via an Open Geospatial Consortium Web Map Service (WMS) and Web Feature Service (WFS) respectively. These services were developed independently and were readily integrated for the purposes of the ECOOP project, illustrating the ease of interoperability that results from adherence to international standards. The key feature of the portal is the ability to display co-plotted time series of the in situ and model data and to quantify the misfits between the two. By using standards-based web technology we allow the user to quickly and easily explore over twenty model data feeds and compare these with dozens of in situ data feeds without being concerned with the low-level details of differing file formats or the physical location of the data. Scientific and operational benefits of this work include model validation, quality control of observations, data assimilation and decision support in near real time. In these areas it is essential to be able to bring together different data streams from often disparate locations.
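A minimal version of the misfit quantification the abstract highlights could look like the sketch below, assuming the model and in situ series have already been co-located in time (the portal handles the WMS/WFS access and alignment). The sea-surface temperature values are invented.

```python
import math

def misfit_stats(model, observed):
    """Bias and RMSE between co-located model and in situ values."""
    pairs = [(m, o) for m, o in zip(model, observed)
             if m is not None and o is not None]  # skip gaps in either series
    diffs = [m - o for m, o in pairs]
    bias = sum(diffs) / len(diffs)
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return bias, rmse

# Toy sea-surface temperature series (deg C), hypothetical values.
model_sst    = [14.2, 14.5, None, 15.1, 15.4]
observed_sst = [14.0, 14.6, 14.8, 15.3, None]
print(misfit_stats(model_sst, observed_sst))
```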
Abstract:
Much consideration is rightly given to the design of metadata models to describe data. At the other end of the data-delivery spectrum, much thought has also been given to the design of geospatial delivery interfaces, such as the Open Geospatial Consortium standards Web Coverage Service (WCS), Web Map Service (WMS) and Web Feature Service (WFS). Our recent experience with the Climate Science Modelling Language shows that an implementation gap exists in which many challenges remain unsolved. Bridging this gap requires transposing information and data from one world view of geospatial climate data to another. The issues include the loss of information in mapping to a common information model, the need to create 'views' onto file-based storage, and the need to map onto an appropriate delivery interface (as with the choice between WFS and WCS for feature types with coverage-valued properties). Here we summarise the approaches we have taken in facing up to these problems.
Abstract:
This paper approaches the subject of brand equity measurement online and offline. The existing body of research on brand equity measurement has derived from classical contexts; however, the majority of today's brands prosper simultaneously online and offline. Since branding on the Web needs to address the unique characteristics of computer-mediated environments, it was posited that classical measures of brand equity are inadequate for this category of brands. Aaker's guidelines for building a brand equity measurement system were therefore followed, and his Brand Equity Ten was employed as a point of departure. The main challenge was complementing traditional measures of brand equity with new measures pertinent to the Web. Following 16 semi-structured interviews with experts, ten additional measures were identified.
Abstract:
With the rapid advancement of web technology, more and more educational resources, including software applications for teaching/learning methods, are available across the web, which enables learners to access learning materials and use various ways of learning at any time and any place. Moreover, various web-based teaching/learning approaches have been developed during the last decade to enhance the capability of both educators and learners. In particular, researchers from both computer science and education are working together, collaboratively focusing on the development of pedagogically enabling technologies that are believed to improve the infrastructure of education systems and processes, including curriculum development models, teaching/learning methods, management of educational resources, systematic organization of communication, and dissemination of the knowledge and skills required by and adapted to users. Despite this fast development, however, there are still great gaps between learning intentions, the organization of supporting resources, the management of educational structures, the knowledge points to be learned and inter-knowledge-point relationships such as prerequisites, the assessment of learning outcomes, and technical and pedagogic approaches. More concretely, the issues that have been widely addressed in the literature include a) availability and usefulness of resources, b) smooth integration of various resources and their presentation, c) learners' requirements and intended learning outcomes, d) automation of the learning process in terms of its schedule and interaction, and e) customization of the resources and agile management of the learning services for delivery, as well as necessary human intervention. Considering these problems, and bearing in mind the advanced web technology of which we should make full use, in this report we address the following two aspects of the systematic architecture of learning/teaching systems: 1) learning objects, i.e. a semantic description and organization of learning resources using web service models and methods, and 2) learning service discovery and learning goal matching for educational coordination and learning service planning.
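As a toy illustration of the second aspect (matching learning goals against prerequisite relationships between knowledge points), the sketch below orders a small, invented prerequisite graph so that supporting resources could be scheduled in a valid sequence; the report itself frames this in terms of web service models rather than this simplified data structure.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical knowledge points mapped to their prerequisites.
prerequisites = {
    "HTML basics": set(),
    "HTTP": {"HTML basics"},
    "Web services": {"HTTP"},
    "Semantic descriptions": {"Web services"},
}

# A learning plan is any order consistent with the prerequisite relation.
plan = list(TopologicalSorter(prerequisites).static_order())
print(plan)  # ['HTML basics', 'HTTP', 'Web services', 'Semantic descriptions']
```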
Abstract:
Scientific workflows are becoming a valuable tool for scientists to capture and automate e-Science procedures. Their success brings the opportunity to publish, share, reuse and repurpose this explicitly captured knowledge. Within the myGrid project, we have identified key resources that can be shared, including complete workflows, fragments of workflows and constituent services. We have examined the alternative ways these can be described by their authors (and subsequent users), and developed a unified descriptive model to support their later discovery. By basing this model on existing standards, we have been able to extend existing Web Service and Semantic Web Service infrastructure whilst still supporting the specific needs of the e-Scientist. myGrid components enable a workflow life-cycle that extends beyond execution to include the discovery of previous relevant designs, the reuse of those designs, and subsequent publication. Experience with example groups of scientists indicates that this cycle is valuable. The growing number of workflows and services means that more work is needed to support the user in effectively ranking search results and in the repurposing process.