837 results for Long-Polling, GCM, Google Cloud Messaging, RESTful Web services, Push, Notifiche
Abstract:
Most current Web service discovery mechanisms rely on centralized Universal Description, Discovery and Integration (UDDI) registries, but for security and geographical reasons organizations tend to build private, distributed registries in which only registered and trusted requesters can browse the service information they are authorized to access. This paper presents RBAC4WSD, a role-based access control model for the Web service discovery phase, in which a discovery agent enforces access control on requesters according to the security policies specified by service providers. A prototype implementation is described using the internal document services of a multinational company as an example.
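As a rough illustration of role-based filtering at discovery time, the sketch below keeps only the registry entries whose permitted roles intersect the requester's roles; all names (Service, DiscoveryAgent, the example roles) are invented here, not taken from RBAC4WSD.

```python
# Minimal sketch of role-based access control at service-discovery time.
from dataclasses import dataclass

@dataclass(frozen=True)
class Service:
    name: str
    required_roles: frozenset  # roles allowed to see this registry entry

@dataclass
class DiscoveryAgent:
    registry: list
    role_assignments: dict  # requester id -> set of roles

    def browse(self, requester: str):
        """Return only the registry entries the requester may see."""
        roles = self.role_assignments.get(requester, set())
        return [s for s in self.registry if roles & s.required_roles]

registry = [
    Service("InternalDocService", frozenset({"employee"})),
    Service("PublicCatalogService", frozenset({"employee", "partner"})),
]
agent = DiscoveryAgent(registry, {"alice": {"partner"}})
print([s.name for s in agent.browse("alice")])  # ['PublicCatalogService']
```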
Abstract:
With the rapid growth of network applications, Web services are becoming increasingly widespread, and in practice existing Web services often need to be integrated. The usual enterprise Web service integration process first builds a Web service process model from the enterprise's business processes and then derives the application system from that model. A problem common to existing modeling techniques is that they cannot guarantee that the model is correct and closely matches the actual business. This paper proposes a Petri-net-based method for modeling Web service processes. Introducing Petri nets into the modeling process compensates for the lack of an intuitive view of the model; a complete set of formal definitions guarantees the correctness of the resulting model; and introducing tightly synchronized stochastic Petri nets allows the actual business to be described more faithfully. The method resolves the problems present in current Web process modeling and also offers a useful approach to process modeling and simulation in other domains.
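To make the modelling idea concrete, here is a minimal place/transition Petri net in which a transition fires by consuming and producing tokens. The toy net is an assumption of this sketch; the paper's tightly synchronized stochastic Petri nets add timing and synchronization on top of this core.

```python
# A minimal place/transition Petri net with marking-based firing.
class PetriNet:
    def __init__(self, transitions):
        # transitions: name -> (input places with weights, output places with weights)
        self.transitions = transitions

    def enabled(self, marking, t):
        pre, _ = self.transitions[t]
        return all(marking.get(p, 0) >= w for p, w in pre.items())

    def fire(self, marking, t):
        if not self.enabled(marking, t):
            raise ValueError(f"{t} is not enabled")
        pre, post = self.transitions[t]
        m = dict(marking)
        for p, w in pre.items():   # consume input tokens
            m[p] -= w
        for p, w in post.items():  # produce output tokens
            m[p] = m.get(p, 0) + w
        return m

# "invoice" consumes a token from p_order and produces one in p_invoice
net = PetriNet({"invoice": ({"p_order": 1}, {"p_invoice": 1})})
print(net.fire({"p_order": 1}, "invoice"))  # {'p_order': 0, 'p_invoice': 1}
```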
Abstract:
Starting from the browser-server communication requirements of the WebOffice system, this paper proposes a browser-side proxy method for invoking Web services. The advantages and disadvantages of this method are compared with those of the traditional server-side approach, and the scenarios in which the method is applicable are analyzed. Finally, the key implementation points are presented: loading and parsing WSDL, serializing and deserializing object types, and packing and binding SOAP messages.
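The "packing" step listed above can be illustrated with a minimal sketch that builds a SOAP 1.1 request envelope from an operation name and parameters; the namespace, operation, and parameter names are placeholders, not values from the paper.

```python
# Build a SOAP 1.1 request envelope for a WSDL-described operation.
import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"

def build_soap_request(operation: str, ns: str, params: dict) -> bytes:
    ET.register_namespace("soapenv", SOAP_ENV)
    envelope = ET.Element(f"{{{SOAP_ENV}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_ENV}}}Body")
    op = ET.SubElement(body, f"{{{ns}}}{operation}")
    for name, value in params.items():  # serialize simple parameters
        child = ET.SubElement(op, f"{{{ns}}}{name}")
        child.text = str(value)
    return ET.tostring(envelope, encoding="utf-8", xml_declaration=True)

# Hypothetical operation and namespace, for illustration only:
print(build_soap_request("GetDocument", "http://example.com/weboffice",
                         {"docId": 42}).decode())
```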
Abstract:
As service-oriented computing matures, service composition has become a new paradigm for developing inter-enterprise business collaboration on the Internet, with WS-BPEL as its de facto standard. However, because the third-party partner services on which a composite service depends are distributed, autonomous, and loosely coupled, execution is vulnerable to partner-service failures and reliability cannot be guaranteed, so partner services need to be dynamically replaceable at run time. The current BPEL specification offers only limited service-replacement capabilities, and replacement becomes even more complicated when the interaction with a partner service involves a series of stateful conversational operations. Drawing on aspect-oriented techniques, this paper proposes a state-aspect extension to the BPEL language. The state aspect records the conversation built up while interacting with a partner service; when a partner service fails, it is replaced transparently and the recorded conversation is propagated to a functionally equivalent partner so that the process continues to execute normally. This approach gives BPEL processes a degree of self-healing capability and improves the reliability of process execution.
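A hypothetical sketch of the replacement idea: a proxy records each stateful call to a partner service and, on failure, swaps in a functionally equivalent partner and replays the recorded conversation. The class and exception names are invented; the paper realizes this as an aspect woven into BPEL, not as a Python proxy.

```python
# Record a stateful conversation so it can be replayed on a backup partner.
class PartnerFailure(Exception):
    pass

class ConversationProxy:
    """Wraps a partner service; logs stateful calls and can fail over."""

    def __init__(self, partner, equivalents):
        self.partner = partner
        self.equivalents = list(equivalents)  # functionally equivalent backups
        self.log = []                         # recorded (operation, args) pairs

    def invoke(self, operation, *args):
        try:
            result = getattr(self.partner, operation)(*args)
        except PartnerFailure:
            self.partner = self.equivalents.pop(0)   # transparent replacement
            for op, prev_args in self.log:           # replay the conversation
                getattr(self.partner, op)(*prev_args)
            result = getattr(self.partner, operation)(*args)
        self.log.append((operation, args))
        return result
```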
Abstract:
The Semantic Web and Web services are two popular application technologies, and their combination, Semantic Web services, promises wide application over the next few years. Semantics makes automatic discovery and composition of Web services easy and efficient, but it raises the question of how to ensure that an automatically assembled Web service composition is consistent with the state of the ontology knowledge base and remains consistent throughout execution. This paper presents a path-based verification scheme for Semantic Web service compositions: each Web service in a composition is semantically annotated with its inputs, outputs, preconditions, and effects (IOPE), and the scheme aims to detect inconsistencies that may arise at any node during execution. The paper defines explicit and implicit inconsistency, proposes a normalized representation for the preconditions and effects of Semantic Web services, introduces an algorithm that automatically accumulates the effects of multiple Web services executed sequentially or concurrently, and then gives the path-based consistency verification algorithm. A platform architecture implementing the scheme is presented, together with the experimental procedure and results. The paper also describes in full the construction and extension of an ontology for an institute of the Chinese Academy of Sciences, defines a service composition for graduate-graduation applications, describes the IOPE annotation of its services, and applies the verification scheme to the composition, reporting the verification results.
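As an illustration of effect accumulation along one execution path, the sketch below folds per-service effects into a state (later effects overriding earlier ones) and checks the result against pairs of mutually exclusive predicates. The literal encoding and the toy axiom are assumptions; the paper's normalized representation and its treatment of concurrency are richer than this.

```python
# Accumulate effects along a path and check for explicit inconsistency.
def accumulate(effects_per_service):
    """Fold service effects along one execution path; newest effect wins."""
    state = {}
    for effects in effects_per_service:
        for predicate, truth in effects:
            state[predicate] = truth
    return state

def consistent(state, axioms):
    """Check the accumulated state against pairs that may not both hold."""
    return not any(state.get(p) is True and state.get(q) is True
                   for p, q in axioms)

path = [[("Enrolled", True)], [("Graduated", True), ("Enrolled", False)]]
axioms = [("Enrolled", "Graduated")]  # assumed mutually exclusive
state = accumulate(path)
print(state, consistent(state, axioms))  # consistent: True
```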
Abstract:
Understanding the nature of the workloads and system demands created by users of the World Wide Web is crucial to properly designing and provisioning Web services. Previous measurements of Web client workloads have been shown to exhibit a number of characteristic features; however, it is not clear how those features may be changing with time. In this study we compare two measurements of Web client workloads separated in time by three years, both captured from the same computing facility at Boston University. The older dataset, obtained in 1995, is well known in the research literature and has been the basis for a wide variety of studies. The newer dataset was captured in 1998 and is comparable in size to the older dataset. The new dataset has the drawback that the collection of users measured may no longer be representative of general Web users; however, using it has the advantage that many comparisons can be drawn more clearly than would be possible using a new, different source of measurement. Our results fall into two categories. First, we compare the statistical and distributional properties of Web requests across the two datasets. This serves to reinforce and deepen our understanding of the characteristic statistical properties of Web client requests. We find that the kinds of distributions that best describe document sizes have not changed between 1995 and 1998, although the specific values of the distributional parameters are different. Second, we explore how the observed differences in the properties of Web client requests, particularly the popularity and temporal locality properties, affect the potential for Web file caching in the network. We find that, for the computing facility represented by our traces between 1995 and 1998, (1) the benefits of using size-based caching policies have diminished, and (2) the potential for caching requested files in the network has declined.
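To make "size-based caching policy" concrete, here is one simple size-aware policy (evict the largest resident file first), sketched under the assumption of a byte-capacity cache; it illustrates the policy family, not one of the exact policies the study evaluates.

```python
# A toy size-aware web cache: on pressure, evict the largest resident file.
from collections import OrderedDict

class SizeAwareCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.files = OrderedDict()  # url -> size, in recency order

    def access(self, url, size):
        if url in self.files:               # hit: refresh recency
            self.files.move_to_end(url)
            return True
        while self.used + size > self.capacity and self.files:
            victim = max(self.files, key=self.files.get)  # evict largest file
            self.used -= self.files.pop(victim)
        if size <= self.capacity:           # admit only if it can ever fit
            self.files[url] = size
            self.used += size
        return False
```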
Abstract:
A Web-service-based approach is presented that enables geographically dispersed users to share software resources over the Internet. A service-oriented software sharing system has been developed, consisting of shared applications, client applications, and three types of services: an application proxy service, a proxy implementation service, and an application manager service. With the aid of these services, the client applications interact with the shared applications to carry out a software sharing task. The approach satisfies the requirements of copyright protection and reuse of legacy code. In this paper, the role of the Web services and the architecture of the system are presented first, followed by a case study that illustrates the approach.
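A minimal sketch of the proxy pattern implied above: the client invokes an application proxy service, which forwards the call to the shared application so the legacy code never leaves the provider's site. All class and method names are invented for illustration; the real system uses Web service calls rather than direct method calls.

```python
# Proxy pattern: the shared (legacy) application stays with its provider.
class SharedApplication:
    """Legacy code that remains on the provider's machine (copyright protected)."""
    def run(self, job: str) -> str:
        return f"result of {job}"

class ApplicationProxyService:
    """Web-service facade that the client application actually invokes."""
    def __init__(self, app: SharedApplication):
        self._app = app

    def submit(self, job: str) -> str:
        # In the real system this would be a SOAP/HTTP call, not a direct one.
        return self._app.run(job)

proxy = ApplicationProxyService(SharedApplication())
print(proxy.submit("mesh-generation"))  # result of mesh-generation
```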
Abstract:
An orchestration is a multi-threaded computation that invokes a number of remote services. In practice, the responsiveness of a web-service fluctuates with demand; during surges in activity service responsiveness may be degraded, perhaps even to the point of failure. An uncertainty profile formalizes a user's perception of the effects of stress on an orchestration of web-services; it describes a strategic situation, modelled by a zero-sum angel–daemon game. Stressed web-service scenarios are analysed, using game theory, in a realistic way, lying between over-optimism (services are entirely reliable) and over-pessimism (all services are broken). The ‘resilience’ of an uncertainty profile can be assessed using the valuation of its associated zero-sum game. In order to demonstrate the validity of the approach, we consider two measures of resilience and a number of different stress models. It is shown how (i) uncertainty profiles can be ordered by risk (as measured by game valuations) and (ii) the structural properties of risk partial orders can be analysed.
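As a stand-in for the game valuation used above, the sketch below computes the pure-strategy lower and upper values of a small zero-sum matrix game; the payoff matrix is invented, and real angel-daemon valuations may require mixed strategies (for example via linear programming).

```python
# Pure-strategy minimax bounds on the value of a finite zero-sum game.
def upper_value(matrix):
    """min over daemon columns of max over angel rows (pure strategies)."""
    cols = range(len(matrix[0]))
    return min(max(row[c] for row in matrix) for c in cols)

def lower_value(matrix):
    """max over angel rows of min over daemon columns (pure strategies)."""
    return max(min(row) for row in matrix)

payoff = [[3, 1],   # angel strategy A against the daemon's two strategies
          [2, 2]]   # angel strategy B
lo, hi = lower_value(payoff), upper_value(payoff)
print(lo, hi)  # equal iff the game has a pure saddle point; here: 2 2
```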
Abstract:
The risks associated with zoonotic infections transmitted by companion animals are a serious public health concern: controlling the incidence of zoonoses in domestic dogs, both owned and stray, is therefore important for protecting human health. Integrated dog population management (DPM) programs, based on information systems that provide reliable data on the structure and composition of the existing dog population in a given area, are fundamental for making realistic plans for any disease surveillance and action system. Traceability systems, based on the compulsory electronic identification of dogs and their registration in a computerised database, are one of the most effective ways to ensure the usefulness of DPM programs. Although this approach provides many advantages, several areas for improvement have emerged in countries where it has been applied. In Italy, every region hosts its own dog register, but these registers are not compatible with one another. This paper shows the advantages of a web-based application for improving the data management of regional dog registers. The approach used to build the system was inspired by farm-animal traceability schemes; it relies on a network of services that allows multi-channel access from different devices and data exchange via the web with other existing applications, without changing the pre-existing platforms. Today the system manages a database of over 300,000 dogs registered in three different Italian regions. By integrating multiple Web services, this approach could be a way to gather data at national and international levels at reasonable cost, creating a large-scale, cross-border traceability system that can be used for disease surveillance and for developing population management plans.
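A hedged sketch of the kind of web-service endpoint such a register network might expose for cross-register lookup by microchip code, using only the Python standard library; the route, fields, and sample record are hypothetical.

```python
# Minimal read-only lookup service keyed by microchip code.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

DOGS = {"380260000000001": {"name": "Rex", "region": "Abruzzo"}}

class RegisterHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        parts = self.path.strip("/").split("/")  # expected: dogs/<microchip>
        if len(parts) == 2 and parts[0] == "dogs" and parts[1] in DOGS:
            body = json.dumps(DOGS[parts[1]]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), RegisterHandler).serve_forever()
```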
Abstract:
Scientific dissertation carried out to obtain the degree of Master in Computer Network and Multimedia Engineering (Mestre em Engenharia de Redes de Computadores e Multimédia).
Abstract:
Consolidation consists in scheduling multiple virtual machines onto fewer servers in order to improve resource utilization and to reduce the operational costs due to power consumption. However, virtualization technologies do not offer performance isolation, so co-located applications slow one another down. In this work, we propose a performance-enforcing mechanism composed of a slowdown estimator and an interference- and power-aware scheduling algorithm. The slowdown estimator determines, based on noisy slowdown data samples obtained from state-of-the-art slowdown meters, whether tasks will complete within their deadlines, invoking the scheduling algorithm if needed. When invoked, the scheduling algorithm builds performance- and power-aware virtual clusters to execute the tasks successfully. We conduct simulations injecting synthetic jobs whose characteristics follow the latest version of the Google Cloud tracelogs. The results indicate that our strategy can be efficiently integrated with state-of-the-art slowdown meters to fulfil contracted SLAs in real-world environments, while reducing operational costs by about 12%.
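A sketch of the deadline test a slowdown estimator could perform: average the noisy slowdown samples and check whether the remaining work, inflated by the estimated slowdown, still fits before the deadline. The smoothing method and safety margin are assumptions, not the paper's estimator.

```python
# Decide from noisy slowdown samples whether to invoke the rescheduler.
from statistics import mean

def will_miss_deadline(samples, remaining_isolated_time, time_left,
                       safety_margin=1.1):
    """samples: observed slowdown factors (>= 1.0) from a slowdown meter."""
    estimated_slowdown = mean(samples)            # simple smoothing of noise
    projected = remaining_isolated_time * estimated_slowdown * safety_margin
    return projected > time_left                  # True -> reschedule the task

# 30 s of isolated work left, 50 s to the deadline, ~1.8x measured slowdown:
print(will_miss_deadline([1.7, 1.9, 1.8], 30.0, 50.0))  # True, reschedule
```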
Abstract:
This master's thesis was written to explore an inequality: an inequality in the practices surrounding the capture and exploitation of user data in the sphere of Web technologies and services, more particularly in the sphere of GIS (Geographic Information Systems). In 2014, many companies exploit their users' data to improve their services or to generate advertising revenue. On the public and governmental side, this shift has not taken place, leaving federal and municipal governments without the data that would allow them to improve public infrastructure and services. Cities around the world are trying to improve their services and become "smart", but they lack the resources and know-how to ensure a transition that respects the privacy and wishes of their residents. How can a city create geo-referenced datasets without infringing on residents' rights? To answer these questions, we conducted a comparative study of the use of OpenStreetMap (OSM) and Google Maps (GM). Through a series of interviews with GM and OSM users, we came to understand the meanings and use values of the two platforms. An analysis drawing on the concepts of appropriation and collective action, together with a range of critical perspectives, allowed us to analyze our interview data in order to understand the stakes and problems behind the use of geolocation technologies, as well as those tied to users' contributions to these GIS. Following this analysis, our understanding of the contribution to and use of these services was recontextualized to explore the potential means by which cities could use geolocation technologies to improve their public infrastructure while respecting their citizens.
Abstract:
This report gives a detailed discussion of the system, algorithms, and techniques that we applied to solve the Web Service Challenges (WSC) of 2006 and 2007. These international contests focus on semantic web service composition. In each challenge, a repository of web services is given, and the input and output parameters of the services in the repository are annotated with semantic concepts. A query to a semantic composition engine contains a set of available input concepts and a set of wanted output concepts. In order to employ an offered service for a requested role, the concepts of the input parameters of the offered operations must be more general than requested (contravariance), while the concepts of the output parameters of the offered service must be more specific than requested (covariance). The engine should respond to a query by providing a valid composition as fast as possible. We discuss three different methods for web service composition: an uninformed search in the form of an iterative-deepening depth-first search (IDDFS) algorithm, a greedy informed search based on heuristic functions, and a multi-objective genetic algorithm.
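The contravariance/covariance rule stated above can be written down directly: over a toy concept taxonomy (invented here), a service fits if each of its input concepts subsumes some available concept, and each wanted output concept subsumes some offered output concept.

```python
# Subsumption-based matching of an offered service against a query.
SUPER = {"Sedan": "Car", "Car": "Vehicle", "Vehicle": "Thing"}  # toy taxonomy

def subsumes(general, specific):
    """True if `general` is `specific` or one of its ancestors."""
    while specific is not None:
        if specific == general:
            return True
        specific = SUPER.get(specific)
    return False

def matches(offered_inputs, offered_outputs, available_inputs, wanted_outputs):
    ins_ok = all(any(subsumes(o, a) for a in available_inputs)
                 for o in offered_inputs)        # inputs: offered more general
    outs_ok = all(any(subsumes(w, o) for o in offered_outputs)
                  for w in wanted_outputs)       # outputs: offered more specific
    return ins_ok and outs_ok

# Query provides a Sedan and wants a Vehicle; service takes a Car, returns a Sedan:
print(matches(["Car"], ["Sedan"], ["Sedan"], ["Vehicle"]))  # True
```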