810 results for web processing service (WPS)
Abstract:
With the rapid growth of heterogeneous application systems on the Internet and the unprecedented development of SOA technology, Web service technology has become increasingly important and is now a focal point for both academia and industry. Within Web service technology, service discovery is the bridge that lets Web service consumers invoke the services offered by Web service providers; this connecting role makes it a central topic. There are currently two main approaches to Web service discovery: the first is the traditional approach, based on pure keyword lookup in UDDI (Universal Description, Discovery and Integration); the second performs semantic matching between Web services based on their semantic information. UDDI, the foundation of the first approach, is an international standard and the most widely deployed, but its Web service descriptions are purely syntactic and lack information specific to Web services, such as I/O attributes and quality-of-service attributes. The second approach uses the semantic information of Web services, including their I/O attributes, but its adoption has been limited by the lack of flexible, effective matching methods and of a corresponding matching framework. On the basis of an analysis of current semantic Web service matching techniques, this thesis improves the existing matching methods and proposes a filter-based semantic Web service matching framework model. The main contributions are: 1) a fairly comprehensive survey and discussion of current semantic Web service matching techniques; 2) a detailed analysis of each stage of semantic Web service matching and improvements to the matching methods, including a method for computing ontology weights based on the vector space model (VSM) and the TF-IDF (Term Frequency-Inverse Document Frequency) idea, a method for computing the weights of edges in the ontology hierarchy graph, and a method for computing similarity between ontologies; 3) semantic matching based on the black-box attributes of Web services; 4) on the framework side, a filter-based semantic Web service matching framework, which is also extended to non-semantic Web service systems.
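The abstract does not give the weighting or similarity formulas, so the following is only a minimal sketch of the VSM/TF-IDF idea it builds on; the function names and concept descriptions are illustrative, not taken from the thesis:

```python
import math
from collections import Counter


def tf_idf_vectors(docs):
    """One sparse TF-IDF weight vector (term -> weight) per document."""
    tokenized = [doc.lower().split() for doc in docs]
    n_docs = len(tokenized)
    df = Counter(term for doc in tokenized for term in set(doc))  # document frequency
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vectors.append({
            term: (count / len(doc)) * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return vectors


def cosine_similarity(u, v):
    """Cosine of the angle between two sparse term-weight vectors."""
    dot = sum(weight * v.get(term, 0.0) for term, weight in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0


# Hypothetical textual descriptions of concepts from two service ontologies.
descriptions = [
    "returns the price of a book given its isbn",
    "looks up the cost of a publication by isbn",
    "converts temperatures between celsius and fahrenheit",
]
vectors = tf_idf_vectors(descriptions)
print(cosine_similarity(vectors[0], vectors[1]))  # overlapping concepts -> positive score
print(cosine_similarity(vectors[0], vectors[2]))  # unrelated concepts -> 0.0
```

In a matcher along these lines, such scores would feed into the edge-weight and ontology-similarity computations the abstract mentions.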
Abstract:
Five basic logical structures of the Web service composition process are formally defined and represented with coloured Petri nets, and are then abstracted into algebraic operations on services. On this basis, the properties of services obtained from these operations and a construction method for composite services are presented. Finally, a worked example shows that the modelling approach guarantees that the composed service is correct and terminating.
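The abstract names the operations only abstractly; purely as an illustration of what an algebra of composition operators over black-box services can look like (the Service type, the three operators shown, and the travel example are assumptions, not the paper's notation), consider:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Service:
    """A service viewed as a black box with input and output message sets."""
    name: str
    inputs: frozenset
    outputs: frozenset


def sequence(a: Service, b: Service) -> Service:
    """Run a, then b; b consumes (part of) a's outputs."""
    assert a.outputs & b.inputs, "b must be able to consume some output of a"
    return Service(f"({a.name};{b.name})", a.inputs | (b.inputs - a.outputs), b.outputs)


def parallel(a: Service, b: Service) -> Service:
    """Run a and b concurrently on independent inputs."""
    return Service(f"({a.name}||{b.name})", a.inputs | b.inputs, a.outputs | b.outputs)


def choice(a: Service, b: Service) -> Service:
    """Run exactly one of a or b, chosen at run time."""
    return Service(f"({a.name}+{b.name})", a.inputs | b.inputs, a.outputs | b.outputs)


flight = Service("Flight", frozenset({"itinerary"}), frozenset({"ticket"}))
hotel = Service("Hotel", frozenset({"itinerary"}), frozenset({"booking"}))
pay = Service("Pay", frozenset({"ticket", "booking"}), frozenset({"receipt"}))

trip = sequence(parallel(flight, hotel), pay)
print(trip)  # composite service with inputs {'itinerary'} and outputs {'receipt'}
```

In the paper these structures additionally carry a coloured-Petri-net semantics, which is what supports the correctness and termination argument.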
Abstract:
Verifying the correctness of Web service composition is important for improving software development efficiency and realising value-added services. To study the correctness of Web service composition and its formal verification at a high level of abstraction, and taking the real-time characteristics of Web service composition into account, the composition is first described with the software architecture description language XYZ/ADL; its real-time description part, XYZ/RE, is then translated into a timed automata model, and the properties the composed system should satisfy are expressed as formulas of the branching-time temporal logic CTL. Finally, the model checker UPPAAL is applied to automate the verification of the correctness of the Web service composition.
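The abstract does not list the verified properties; as an illustration only (not formulas from the paper), typical correctness requirements for a composed service can be written in CTL along these lines:

```latex
% Safety: the composition never reaches an error state.
\mathrm{AG}\,\neg\,\mathit{error}
% Responsiveness: every accepted request is eventually answered on all paths.
\mathrm{AG}\,\bigl(\mathit{requested} \rightarrow \mathrm{AF}\,\mathit{responded}\bigr)
% Termination: every execution eventually reaches the final state.
\mathrm{AF}\,\mathit{final}
```

UPPAAL itself accepts a timed subset of such properties in its query language (invariance with A[], reachability with E<>), with clock constraints capturing the real-time requirements.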
Abstract:
With the development of network technology, and Internet technology in particular, the high performance, high reliability, responsiveness, scalability and transparency of distributed systems have made distributed applications increasingly widespread. In a distributed environment, the integration of information systems must be considered carefully. How to integrate and uniformly access distributed, heterogeneous data resources, enabling the conversion, exchange and sharing of various kinds of data, and how to build an open, extensible and heterogeneity-tolerant new generation of information management systems, have become important research directions for network applications. Focusing on the data-integration and operation-integration aspects of building a lunar exploration data management and integration system, this thesis analyses and studies data storage management and system integration technology and, drawing on the integration requirements and characteristics of lunar exploration data management, proposes an XML-based data storage and management scheme and a Web Service based system integration scheme. Using XML/Web Service technology on the .NET platform, a lunar exploration data management and integration system was developed as an instance. The research covers four main aspects. (1) XML-based data model and database storage. Exploiting XML's self-description, independence from platform and application, semi-structured nature, machine processability, extensibility, suitability for network transmission and broad support, XML data storage over a relational database was implemented. Both structured and unstructured information in the system was given XML markup definitions, enabling fine-grained storage and querying and improving the capability and efficiency of data representation, querying, insertion and deletion. Using XML as the format for lunar exploration data exchange and information transfer also provides an attractive solution for interoperating with the data of heterogeneous systems. (2) A Web Services based architecture for distributed information system integration. Building on a study of the key technologies and standards XML, SOAP, WSDL and UDDI, a three-tier distributed architecture based on XML/Web Service, comprising a presentation tier, an application logic tier and a server-side data tier, is proposed for the requirements of lunar exploration data management. It provides migratable and composable system functions, XML-encoded data flows between tiers, and dynamically defined interfaces. Compared with the tightly coupled distributed applications developed with traditional technologies, the system is substantially improved in cross-platform capability, configurability, scalability and maintainability. (3) Development and implementation on the .NET platform. The core technologies and overall framework of the Microsoft .NET platform were analysed in depth, and a unified identity authentication system and the lunar exploration data management and integration system were implemented in the Visual Studio .NET environment using C#, ASP.NET and ADO.NET over the Oracle9i relational database. The unified identity authentication system is a general-purpose user identity management system, providing user management, authentication, entity management, log monitoring and message management, so that a single sign-on is shared by all systems. The lunar exploration data management and integration system comprises five functional modules: data management, information publishing, system administration, integrated query and application integration. Compared with the earlier tightly coupled applications, development efficiency, reusability, coupling, flexibility and adaptability are all greatly improved. (4) XML/Web Services based dynamic system integration. After analysing the drawbacks of traditional distributed object models for integration in heterogeneous environments, information integration of the unified identity authentication system, the lunar exploration data management and integration system, a small space debris database system and applications developed in other languages was realised on the basis of XML, SOAP and WSDL, achieving effective integration of resources across time and space. These integration examples demonstrate the advantages of Web Services technology for application system integration. The results of this work also provide a reference example for the processing, management and integration of the massive data involved in geochemistry research, promoting the fusion and comprehensive application of geochemical data.
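The thesis describes its XML-over-relational storage only at this level of detail; as a loose sketch of the general technique (shredding XML records into relational rows while retaining the original markup), using Python's standard library instead of the thesis's C#/ASP.NET/Oracle9i stack, and with an invented record schema:

```python
import sqlite3
import xml.etree.ElementTree as ET

# A hypothetical XML record; element names are illustrative only.
record_xml = """
<observation id="CE1-0042">
  <instrument>CCD stereo camera</instrument>
  <acquired>2008-03-14T02:15:00Z</acquired>
  <product>DOM_tile_042.img</product>
</observation>
"""

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE observation (
        id TEXT PRIMARY KEY,
        instrument TEXT,
        acquired TEXT,
        product TEXT,
        raw_xml TEXT          -- keep the original document for exchange/round-tripping
    )
""")

root = ET.fromstring(record_xml)
conn.execute(
    "INSERT INTO observation VALUES (?, ?, ?, ?, ?)",
    (
        root.get("id"),
        root.findtext("instrument"),
        root.findtext("acquired"),
        root.findtext("product"),
        record_xml.strip(),
    ),
)

# Fine-grained querying over the shredded fields.
for row in conn.execute(
    "SELECT id, product FROM observation WHERE instrument LIKE '%camera%'"
):
    print(row)
```

Querying over the shredded columns gives the fine-grained access the abstract mentions, while the stored raw_xml keeps the document available for exchange with other systems.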
Abstract:
The exploding demand for services like the World Wide Web reflects the potential that is presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing --- the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem at four levels: (1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework, with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representation can be chosen to meet real-time and reliability constraints. (2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, etc. We develop customizable middleware services that exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploiting the self-similarity of Web traffic), derive document access patterns via multiserver cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements. (3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must get at the network layer, which can provide the basic guarantees of bandwidth, latency, and reliability. Therefore, the third area is a set of new techniques in network service and protocol design. (4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault-tolerance, quality of service, complex resources such as ATM channels, probabilistic models, etc., and models must be tailored to represent the best tradeoff for a particular setting.
This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. This presents a software engineering challenge requiring integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
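The abstract stops at the architectural level; a minimal sketch of what "abstract class interfaces, concretized per approach and environment" can look like in code follows (the ResourceManager interface, its methods, and both concrete models are illustrative assumptions, not the project's actual API):

```python
from abc import ABC, abstractmethod


class ResourceManager(ABC):
    """Abstract interface every concrete resource-management model implements."""

    @abstractmethod
    def register_resource(self, resource_id: str, capacity: float) -> None:
        """Announce a resource and its capacity to the registry."""

    @abstractmethod
    def admit(self, task_id: str, demand: float, deadline: float) -> bool:
        """Decide whether a task's demand can be accepted under current commitments."""


class BestEffortManager(ResourceManager):
    """One possible concretization: admit everything, offer no guarantees."""

    def __init__(self):
        self.resources = {}

    def register_resource(self, resource_id, capacity):
        self.resources[resource_id] = capacity

    def admit(self, task_id, demand, deadline):
        return True


class ConservativeManager(ResourceManager):
    """Another concretization: admit a task only if total capacity covers the demand."""

    def __init__(self):
        self.resources = {}
        self.committed = 0.0

    def register_resource(self, resource_id, capacity):
        self.resources[resource_id] = capacity

    def admit(self, task_id, demand, deadline):
        # Toy model: checks aggregate capacity only and ignores the deadline.
        if self.committed + demand <= sum(self.resources.values()):
            self.committed += demand
            return True
        return False


# The framework can swap models without changing the components that call them.
for manager in (BestEffortManager(), ConservativeManager()):
    manager.register_resource("cpu-0", 1.0)
    decisions = [manager.admit(f"job-{i}", demand=0.7, deadline=5.0) for i in range(2)]
    print(type(manager).__name__, decisions)
# BestEffortManager [True, True]
# ConservativeManager [True, False]
```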
Abstract:
Server performance has become a crucial issue for improving the overall performance of the World-Wide Web. This paper describes Webmonitor, a tool for evaluating and understanding server performance, and presents new results for a realistic workload. Webmonitor measures activity and resource consumption, both within the kernel and in HTTP processes running in user space. Webmonitor is implemented using an efficient combination of sampling and event-driven techniques that exhibit low overhead. Our initial implementation is for the Apache World-Wide Web server running on the Linux operating system. We demonstrate the utility of Webmonitor by measuring and understanding the performance of a Pentium-based PC acting as a dedicated WWW server. Our workload uses a file size distribution with a heavy tail. This captures the fact that Web servers must concurrently handle some requests for large audio and video files, and a large number of requests for small documents, containing text or images. Our results show that in a Web server saturated by client requests, over 90% of the time spent handling HTTP requests is spent in the kernel. Furthermore, keeping TCP connections open, as required by TCP, causes a factor of 2-9 increase in the elapsed time required to service an HTTP request. Data gathered from Webmonitor provide insight into the causes of this performance penalty. Specifically, we observe a significant increase in resource consumption along three dimensions: the number of HTTP processes running at the same time, CPU utilization, and memory utilization. These results emphasize the important role of operating system and network protocol implementation in determining Web server performance.
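Webmonitor itself combines kernel-level and user-space instrumentation; purely as a loose, user-space-only illustration of the sampling half of such a tool (this is not Webmonitor's code), one can periodically read the Linux /proc counters it would track, e.g. CPU utilization from /proc/stat:

```python
import time


def cpu_times():
    """Aggregate CPU jiffies from the first line of /proc/stat (Linux only)."""
    with open("/proc/stat") as f:
        values = list(map(int, f.readline().split()[1:]))
    idle = values[3]          # the 'idle' column
    return sum(values), idle


def sample_cpu_utilization(interval=1.0):
    """CPU utilization over one sampling interval, as a fraction of total time."""
    total0, idle0 = cpu_times()
    time.sleep(interval)
    total1, idle1 = cpu_times()
    busy = (total1 - total0) - (idle1 - idle0)
    return busy / (total1 - total0)


if __name__ == "__main__":
    for _ in range(5):
        print(f"cpu utilization: {sample_cpu_utilization():.1%}")
```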
Abstract:
The SafeWeb anonymizing system has been lauded by the press and loved by its users; self-described as "the most widely used online privacy service in the world," it served over 3,000,000 page views per day at its peak. SafeWeb was designed to defeat content blocking by firewalls and to defeat Web server attempts to identify users, all without degrading Web site behavior or requiring users to install specialized software. In this article we describe how these fundamentally incompatible requirements were realized in SafeWeb's architecture, resulting in spectacular failure modes under simple JavaScript attacks. These exploits allow adversaries to turn SafeWeb into a weapon against its users, inflicting more damage on them than would have been possible if they had never relied on SafeWeb technology. By bringing these problems to light, we hope to remind readers of the chasm that continues to separate popular and technical notions of security.
Abstract:
Internet measurements show that the size distribution of Web-based transactions is usually very skewed; a few large requests constitute most of the total traffic. Motivated by the advantages of scheduling algorithms which favor short jobs, we propose to perform differentiated control over Web-based transactions to give preferential service to short web requests. The control is realized through service semantics provided by Internet Traffic Managers, a Diffserv-like architecture. To evaluate the performance of such a control system, it is necessary to have a fast but accurate analytical method. To this end, we model the Internet as a time-shared system and propose a numerical approach which utilizes Kleinrock's conservation law to solve the model. The numerical results are shown to match well those obtained by packet-level simulation, which runs orders of magnitude slower than our numerical method.
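The abstract invokes Kleinrock's conservation law without stating it; in its standard M/G/1 form for non-preemptive, work-conserving disciplines over job classes i = 1, ..., P (the paper's model may differ in detail), it reads:

```latex
% Kleinrock's conservation law (standard M/G/1, work-conserving, non-preemptive form):
\sum_{i=1}^{P} \rho_i W_i \;=\; \frac{\rho\, W_0}{1-\rho},
\qquad
W_0 = \sum_{i=1}^{P} \frac{\lambda_i\, \overline{x_i^{2}}}{2},
\qquad
\rho = \sum_{i=1}^{P} \rho_i < 1
```

Here λ_i, ρ_i = λ_i·x̄_i and W_i are the arrival rate, offered load and mean waiting time of class i, and W_0 is the mean residual service time built from the second moments of the per-class service times. Because the right-hand side does not depend on the scheduling discipline, shortening the waits of small web requests must be paid for by quantifiably longer waits for the few large ones, which is exactly the trade-off such a model lets one evaluate numerically.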
Abstract:
As new multi-party edge services are deployed on the Internet, application-layer protocols with complex communication models and event dependencies are increasingly being specified and adopted. To ensure that such protocols (and compositions thereof with existing protocols) do not result in undesirable behaviors (e.g., livelocks) there needs to be a methodology for the automated checking of the "safety" of these protocols. In this paper, we present ingredients of such a methodology. Specifically, we show how SPIN, a tool from the formal systems verification community, can be used to quickly identify problematic behaviors of application-layer protocols with non-trivial communication models—such as HTTP with the addition of the "100 Continue" mechanism. As a case study, we examine several versions of the specification for the Continue mechanism; our experiments mechanically uncovered multi-version interoperability problems, including some which motivated revisions of HTTP/1.1 and some which persist even with the current version of the protocol. One such problem resembles a classic degradation-of-service attack, but can arise between well-meaning peers. We also discuss how the methods we employ can be used to make explicit the requirements for hardening a protocol's implementation against potentially malicious peers, and for verifying an implementation's interoperability with the full range of allowable peer behaviors.
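The paper's models are written in Promela and checked with SPIN; purely to illustrate the underlying idea (exhaustive exploration of the product state space of communicating peers, looking for states where the exchange can get stuck), here is a toy client/server check with invented state machines, not the HTTP "100 Continue" models from the paper:

```python
from collections import deque

# Toy protocol: (state, observed_message) -> (new_state, sent_message).
# A single channel holds at most one message and is overwritten by whichever side moves.
CLIENT = {
    ("idle", None): ("waiting", "request"),
    ("waiting", "ack"): ("done", None),
}
SERVER = {
    ("listening", "request"): ("replying", "ack"),
    ("replying", None): ("listening", None),
}


def successors(state):
    """All joint states reachable in one step."""
    client, server, channel = state
    nexts = []
    for (c_state, msg), (c_next, out) in CLIENT.items():
        if c_state == client and msg in (None, channel):
            nexts.append((c_next, server, out))
    for (s_state, msg), (s_next, out) in SERVER.items():
        if s_state == server and msg in (None, channel):
            nexts.append((client, s_next, out))
    return nexts


def find_stuck_states(initial):
    """Breadth-first search of the product state space for non-final states with no successor."""
    seen, queue, stuck = {initial}, deque([initial]), []
    while queue:
        state = queue.popleft()
        nexts = successors(state)
        if not nexts and state[0] != "done":
            stuck.append(state)
        for nxt in nexts:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return stuck


print(find_stuck_states(("idle", "listening", None)))
```

The search reports ('waiting', 'listening', None): the toy server may move on before its ack is consumed, leaving the client waiting forever, which is the flavor of interoperability problem the paper uncovers mechanically at much larger scale.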
Abstract:
In this paper, we propose and evaluate an implementation of a prototype scalable web server. The prototype consists of a load-balanced cluster of hosts that collectively accept and service TCP connections. The host IP addresses are advertised using the Round Robin DNS technique, allowing any host to receive requests from any client. Once a client attempts to establish a TCP connection with one of the hosts, a decision is made as to whether or not the connection should be redirected to a different host---namely, the host with the lowest number of established connections. We use the low-overhead Distributed Packet Rewriting (DPR) technique to redirect TCP connections. In our prototype, each host keeps information about connections in hash tables and linked lists. Every time a packet arrives, it is examined to see if it has to be redirected or not. Load information is maintained using periodic broadcasts amongst the cluster hosts.
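DPR itself rewrites and forwards packets below the socket layer; the redirection decision the abstract describes (send a new connection to the host with the fewest established connections, based on periodically broadcast load reports) can be sketched at user level as follows (host names and the report format are invented for the example):

```python
import time


class LoadTable:
    """Per-host connection counts, refreshed by periodic broadcasts from cluster peers."""

    def __init__(self, local_host):
        self.local_host = local_host
        self.connections = {local_host: 0}
        self.last_update = {local_host: time.time()}  # kept so stale entries could be aged out

    def on_broadcast(self, host, connection_count):
        """Apply a load report received from another cluster member."""
        self.connections[host] = connection_count
        self.last_update[host] = time.time()

    def choose_target(self):
        """Host that should service a newly arriving TCP connection."""
        return min(self.connections, key=self.connections.get)

    def should_redirect(self):
        """Return the redirection target, or None if the local host is already least loaded."""
        target = self.choose_target()
        return None if target == self.local_host else target


table = LoadTable("host-a")
table.on_broadcast("host-b", 12)
table.on_broadcast("host-c", 3)
table.connections["host-a"] = 9          # locally established connections
print(table.should_redirect())           # -> 'host-c'
```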
Abstract:
Under high loads, a Web server may be servicing many hundreds of connections concurrently. In traditional Web servers, the question of the order in which concurrent connections are serviced has been left to the operating system. In this paper we ask whether servers might provide better service by using non-traditional service ordering. In particular, for the case when a Web server is serving static files, we examine the costs and benefits of a policy that gives preferential service to short connections. We start by assessing the scheduling behavior of a commonly used server (Apache running on Linux) with respect to connection size and show that it does not appear to provide preferential service to short connections. We then examine the potential performance improvements of a policy that does favor short connections (shortest-connection-first). We show that mean response time can be improved by factors of four or five under shortest-connection-first, as compared to an (Apache-like) size-independent policy. Finally we assess the costs of shortest-connection-first scheduling in terms of unfairness (i.e., the degree to which long connections suffer). We show that under shortest-connection-first scheduling, long connections pay very little penalty. This surprising result can be understood as a consequence of heavy-tailed Web server workloads, in which most connections are small, but most server load is due to the few large connections. We support this explanation using analysis.
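Shortest-connection-first is simple to state; as a user-level sketch of the dispatch order it induces for static files (not the kernel or server modifications a real deployment would need), with invented connection IDs and sizes:

```python
import heapq


class ShortestConnectionFirst:
    """Serve the pending request with the smallest file size first."""

    def __init__(self):
        self._heap = []
        self._order = 0                     # tie-breaker keeps equal sizes in arrival order

    def add_request(self, connection_id, file_size_bytes):
        heapq.heappush(self._heap, (file_size_bytes, self._order, connection_id))
        self._order += 1

    def next_request(self):
        size, _, connection_id = heapq.heappop(self._heap)
        return connection_id, size


scheduler = ShortestConnectionFirst()
scheduler.add_request("conn-1", 4 * 1024 * 1024)   # large video file
scheduler.add_request("conn-2", 12 * 1024)         # small HTML page
scheduler.add_request("conn-3", 30 * 1024)         # small image

for _ in range(3):
    print(scheduler.next_request())   # conn-2, conn-3, then conn-1
```

Under a heavy-tailed size distribution most requests look like conn-2 and conn-3, so the small requests that jump ahead of conn-1 add little total work, which is the intuition behind the paper's finding that long connections pay only a small penalty.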
Abstract:
Web services based systems have recently found their way into many applications such as e-commerce, corporate integration and e-learning. Constructing new services or introducing new functions to existing services requires composition of web services. Current approaches to service composition often require major programming effort; this is time-consuming and requires considerable developer expertise. In this paper, we explore the real and rich scenarios found in e-learning, where education services are offered through the Internet by networked universities to potentially millions around the world. These services are derived from existing/emerging business operation processes and are commonly offered through a web interface, combined with other services such as email and ftp services, to support partial/full business processes. We identify the requirements for a generic portal framework for easy integration of the existing expertise and services of individual institutions (enterprises). We examine the existing technologies and standards, and point out the gaps to be filled in designing the architecture of the framework.
Abstract:
Kurzel (2004) points out that researchers in e-learning and educational technologists, in a quest to provide improved Learning Environments (LE) for students, are focusing on personalising the experience through a Learning Management System (LMS) that attempts to tailor the LE to the individual (see amongst others Eklund & Brusilovsky, 1998; Kurzel, Slay, & Hagenus, 2003; Martinez, 2000; Sampson, Karagiannidis, & Kinshuk, 2002; Voigt & Swatman, 2003). According to Kurzel (2004), this tailoring can have an impact on content and how it is accessed; the media forms used; the method of instruction employed; and the learning styles supported. This project aims to move personalisation forward to the next generation by tackling the issue of Personalised e-Learning platforms as pre-requisites for building and generating individualised learning solutions. The proposed development is to create an e-learning platform with personalisation built in. This personalisation is to be set at different levels within the system, ranging from being guided by the information that the user inputs into the system down to the lower level, where it is set using information inferred by the system's processing engine. This paper will discuss some of our early work and ideas.
Abstract:
With the emergence of the "Semantic Web" there has been much discussion about the impact of technologies such as XML and RDF on the way we use the Web for developing e-learning applications and, perhaps more importantly, on how we can personalise these applications. Personalisation of e-learning is viewed by many authors (see amongst others Eklund & Brusilovsky, 1998; Kurzel, Slay, & Hagenus, 2003; Martinez, 2000; Sampson, Karagiannidis, & Kinshuk, 2002; Voigt & Swatman, 2003) as the key challenge for learning technologists. According to Kurzel (2004), the tailoring of e-learning applications can have an impact on content and how it is accessed; the media forms used; the method of instruction employed; and the learning styles supported. This paper reports on a research project currently underway at the eCentre at the University of Greenwich which is exploring different approaches and methodologies to create an e-learning platform with personalisation built in. This personalisation is to be set at different levels within the system, ranging from being guided by the information that the user inputs into the system down to the lower level, where it is set using information inferred by the system's processing engine.
Abstract:
This short position paper considers issues in developing a Data Architecture for the Internet of Things (IoT) through the medium of an exemplar project, Domain Expertise Capture in Authoring and Development Environments (DECADE). A brief discussion sets the background for IoT and the development of the distinction between things and computers. The paper makes a strong argument to avoid reinventing the wheel, and instead to reuse approaches to distributed heterogeneous data architectures and the lessons learned from that work, applying them to this situation. DECADE requires an autonomous recording system, local data storage, a semi-autonomous verification model, a sign-off mechanism, and qualitative and quantitative analysis carried out when and where required through a web-service architecture based on ontology and analytic agents, with a self-maintaining ontology model. To develop this, we describe a web-service architecture combining a distributed data warehouse, web services for analysis agents, ontology agents and a verification engine, with a centrally verified outcome database maintained by a certifying body for qualification/professional status.