853 results for Web service
Abstract:
WS-BPEL (Web Services Business Process Execution Language, abbreviated BPEL) is an important standard at the service composition layer of the Web services specification family. BPEL supports building business processes through the orchestration of Web services, allowing developers to concentrate on business logic. A BPEL engine is a server-side middleware system that runs business processes described in BPEL; with such an engine, processes written in BPEL can be executed directly. As a network server system, a BPEL engine inevitably has to handle large numbers of concurrent requests, and designing the engine so that it handles this concurrency efficiently is the key issue for a high-performance BPEL engine. Concurrent server systems usually adopt one of two concurrency models: multi-threading or event-driven processing. Traditionally, most server software has been built on the multi-threaded (or multi-process) model, but under heavy load the excessive number of threads and the context switching between them impose significant overhead, which is the main cause of degraded performance. The event-driven model uses only a small, fixed number of threads; in general it scales better and processes requests more efficiently. This thesis analyses the event-driven model used in highly concurrent server systems and, taking the characteristics of the BPEL specification into account, proposes an implementation approach for an event-driven BPEL engine. The thesis focuses on the BPEL event structure and on how finite state machines (FSMs) characterise the behaviour of BPEL processes and activities, and, based on the syntax of the BPEL language, constructs a complete BPEL FSM model comprising the state space and state transition rules in the ECA (Event-Condition-Action) style. Guided by this event-driven BPEL engine architecture, we designed and implemented the event-driven OnceBPEL2.0 engine. We also carried out performance tests comparing the multi-threaded OnceBPEL1.0 system with the event-driven OnceBPEL2.0 system; the test data and analysis show that the event-driven OnceBPEL2.0 achieves a substantial performance improvement over the multi-threaded OnceBPEL1.0.
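To make the ECA-based finite state machine idea concrete, the sketch below (in Python, purely illustrative and not the OnceBPEL2.0 code) shows a single engine thread driving one BPEL invoke activity from an event queue; the state names, events and rule table are assumptions made for the example.

    # Illustrative event-driven execution of a single BPEL invoke activity as an
    # ECA (Event-Condition-Action) finite state machine. All state, event and
    # action names are assumptions made for this sketch, not the OnceBPEL2.0 code.
    from collections import deque

    # (current_state, event) -> list of (condition, action, next_state) rules
    RULES = {
        ("Inactive", "enable"): [(lambda ctx: True, "send_request", "WaitingReply")],
        ("WaitingReply", "reply"): [
            (lambda ctx: ctx.get("fault") is None, "store_result", "Completed"),
            (lambda ctx: ctx.get("fault") is not None, "propagate_fault", "Faulted"),
        ],
    }

    class InvokeActivityFSM:
        def __init__(self):
            self.state = "Inactive"

        def handle(self, event, ctx):
            for condition, action, next_state in RULES.get((self.state, event), []):
                if condition(ctx):
                    print(f"{self.state} --{event}/{action}--> {next_state}")
                    self.state = next_state
                    break

    # A single engine thread consumes events from a queue instead of blocking one
    # thread per process instance.
    events = deque([("enable", {}), ("reply", {"fault": None})])
    fsm = InvokeActivityFSM()
    while events:
        event, ctx = events.popleft()
        fsm.handle(event, ctx)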
Abstract:
Migrating legacy systems with web services is an effective and economical way of reusing legacy software in an SOA environment. In this paper, we present an approach for migrating a three-tier object-oriented legacy system to an SOA environment. The key issue of the approach is the identification of services from large numbers of classes. We propose a bottom-up method that models the system with UML and then identifies services from the UML model. This approach can serve as a reference for an automated migration process.
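As a hedged illustration of the bottom-up idea of deriving service candidates from a class model, the toy sketch below groups mutually dependent classes into candidate services; the class names, dependencies and the simple connected-components heuristic are assumptions, not the method proposed in the paper.

    # Toy bottom-up service identification: classes that depend on each other in the
    # recovered UML class model end up in the same candidate service (connected
    # components of the dependency graph). Class names and dependencies are invented.
    from collections import defaultdict

    dependencies = {
        "OrderFacade": {"OrderDAO", "OrderEntity"},
        "OrderDAO": {"OrderEntity"},
        "OrderEntity": set(),
        "CustomerDAO": {"CustomerEntity"},
        "CustomerEntity": set(),
    }

    graph = defaultdict(set)  # undirected view of the dependency graph
    for cls, deps in dependencies.items():
        graph[cls] |= deps
        for dep in deps:
            graph[dep].add(cls)

    seen, candidates = set(), []
    for cls in graph:
        if cls in seen:
            continue
        stack, component = [cls], set()
        while stack:
            node = stack.pop()
            if node not in component:
                component.add(node)
                stack.extend(graph[node] - component)
        seen |= component
        candidates.append(sorted(component))

    for i, service in enumerate(candidates, 1):
        print(f"Candidate service {i}: {service}")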
Abstract:
Supply chain coordination has received wide attention in the international supply chain management community; it is a frontier research area that crosses several disciplines and integrates a broad body of knowledge. Drawing on economics, management science, marketing, and modern network and information technologies, it seeks to improve the overall performance of the supply chain through competition and cooperation among its member entities. As an important route towards optimising and upgrading China's industrial structure, it has become a third source of profit growth for enterprises, after natural resources and human capital. Building on an in-depth study of supply chain management, supply chain coordination and contract theory, this thesis identifies the trends in research on contract-based supply chain coordination, and its main contributions complement and extend current theory along those trends. Focusing on practical problems of supply chain management and coordination in China, and on the personalised, diverse and uncertain nature of customer demand in modern markets, the thesis develops a unified framework, methods and a technical roadmap for supply chain contracts, with an emphasis on demand-driven coordination of distribution contracts. It establishes contract optimisation models under different conditions, gives a series of proofs based on game-theoretic equilibrium solutions, analyses how contract parameters affect the performance of the whole supply chain and of its individual members, and finally designs and implements an automated contract negotiation platform based on SOA. The main contributions cover the following four aspects.

1. A contract optimisation model for a many-to-one supply chain with deterministic customer demand and complete information. For a distribution process formed by several competing manufacturers and one independent, common retailer, with demand linearly related to the retail price, complete information on both sides, and the retailer's reservation profit given endogenously, the thesis builds a Stackelberg game model under a revenue-sharing contract and parameter optimisation models for quantity-discount and two-part-tariff contracts. It analyses how the choice of the contract-offering party, the contract type and the optimised parameters affect overall supply chain performance and the allocation of profit among the parties, and proves that when the manufacturers' products are highly substitutable the retailer's endogenous reservation profit effectively increases, strengthening its bargaining power relative to the manufacturers; in that case the manufacturers prefer to offer a wholesale-price contract rather than more complex contract forms. Numerical simulations verify these theoretical results.

2. A mixed contract optimisation model for a two-echelon supply chain with myopic customer demand, asymmetric information and capacity constraints. For a duopoly supply chain consisting of a single manufacturer and a single retailer, modelled as a newsvendor problem with myopic customer demand and asymmetric demand-forecast information, an improved buy-back contract combining an advance-purchase contract with a buy-back contract is developed, which simultaneously achieves collaborative forecasting, capacity optimisation and coordination of distribution. Numerical simulations verify coordination under the improved contract, and the inefficiency of the two-echelon chain is explained in terms of the risk-profit margin and the degree of information asymmetry.

3. A contract optimisation model for a three-echelon supply chain with strategic demand and complete information. In a three-echelon structure consisting of a manufacturer, a retailer and rational customers whose product demand is strategic, a modelling framework for strategic customer behaviour is built on the rational-expectations equilibrium between the retailer and the customers. Comparing the performance of the centralised supply chain with that of the decentralised chain under wholesale-price and price-compensation contracts leads to the novel conclusion that, in distribution channels driven by strategic customer demand and coordinated by these contracts, the decentralised supply chain strictly outperforms the centralised one.

4. An SOA-based platform for automated contract negotiation. Combining the above theoretical results with existing work, a library of contract optimisation service models covering contract types and parameters is abstracted and encapsulated. Following the service-oriented architecture (SOA) approach, the data mining tool Weka is used to refine customer demand types and the supply chain environment, Web Service technology wraps the individual contract services, an Enterprise Service Bus (ESB) provides the channels for binding, interaction and management of the service components, and BPEL modelling tools orchestrate, recombine and publish the services. The resulting contract negotiation platform makes contract negotiation in supply chain distribution networked, automated, intelligent and flexible.
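For readers unfamiliar with contract coordination, the following math block recalls the classic single-manufacturer, single-retailer newsvendor benchmark with a revenue-sharing contract; it is a standard result from the contract literature, given only for orientation and not the multi-manufacturer Stackelberg model developed in the thesis.

    % Classic single-manufacturer, single-retailer newsvendor benchmark under a
    % revenue-sharing contract (standard in the contract literature; shown only
    % for orientation, not the thesis's multi-manufacturer Stackelberg model).
    % Integrated supply chain profit with retail price p, unit cost c, order q and
    % expected sales S(q) = E[min(q, D)]:
    \[
      \Pi_{SC}(q) = p\,S(q) - c\,q .
    \]
    % Retailer profit when it keeps a revenue share \phi and pays wholesale price w:
    \[
      \pi_R(q) = \phi\, p\, S(q) - w\, q .
    \]
    % Choosing w = \phi c yields \pi_R(q) = \phi\,\Pi_{SC}(q), so the retailer orders
    % the integrated optimum and \phi divides the coordinated profit between the parties.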
Abstract:
After enterprises deployed information systems on a large scale, the drawbacks of these systems gradually became apparent: the individual systems were built on heterogeneous platforms and lack interoperability, forming information islands; the systems are tightly coupled to particular work processes and cannot easily change those processes to follow rapid market changes; communication between systems is difficult, making effective coordination hard; and the systems cannot interoperate with systems outside the enterprise to strengthen inter-enterprise business relationships. Against this background, in which existing systems no longer meet enterprise needs, developing a new workflow system that integrates the existing information systems can satisfy those needs while preserving the enterprise's prior investment in IT, and has become a new direction for enterprise applications. This thesis first examines the characteristics and shortcomings of early workflow systems and formulates new functional requirements for information systems from the perspective of business staff and managers. It then analyses the current mainstream technologies and standards in search of a solution that supports rapid change and deployment of business processes, cross-platform communication, and effective coordination of the various information systems. Based on an in-depth study of COM, CORBA, RMI, Web Service and BPEL, the adopted solution uses Web Services as business units and the BPEL language to compose these Web Services into business processes. On this basis, and following the idea of layered software design, the thesis designs and implements a six-layer BPEL engine, together with a module composition scheme that matches the layered structure so that engine components can be extended or replaced as required. To improve system integration, the BPEL endpoint reference mechanism is modified and extended so that the engine can integrate not only standard Web Service endpoints but also existing processes in the system and subsystems that implement defined interfaces. The thesis also studies service-oriented architecture and enterprise service bus technology in depth, and the engine is designed as a plug-in running on a JBI-compliant enterprise service bus, further reducing its coupling with other systems. Finally, using the operating environment of a domestic home-appliance company as the background, a set of test cases is designed to verify the functional completeness and performance reliability of the BPEL engine; the results show that the engine meets the users' requirements and exhibits a reasonable degree of stability.
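The extended endpoint-reference mechanism can be pictured with the following Python sketch, in which a partner link resolves either to a remote Web Service endpoint or to an in-process subsystem that implements an agreed interface; the class names and the toy registry are assumptions, not the engine's actual design.

    # Illustrative endpoint-reference resolution: a partner link resolves either to
    # a remote Web Service endpoint or to a local subsystem that implements an
    # agreed interface. Class names and the toy registry are assumptions only.
    from abc import ABC, abstractmethod

    class Endpoint(ABC):
        @abstractmethod
        def invoke(self, operation, payload): ...

    class WebServiceEndpoint(Endpoint):
        def __init__(self, url):
            self.url = url

        def invoke(self, operation, payload):
            # A real engine would issue a SOAP/HTTP request here.
            return f"SOAP call {operation} -> {self.url}"

    class LocalSubsystemEndpoint(Endpoint):
        def __init__(self, subsystem):
            self.subsystem = subsystem

        def invoke(self, operation, payload):
            # Call the already deployed subsystem directly, bypassing SOAP.
            return getattr(self.subsystem, operation)(payload)

    class InventorySubsystem:  # an existing in-house system with a defined interface
        def check_stock(self, payload):
            return {"sku": payload["sku"], "in_stock": True}

    partner_links = {
        "shippingService": WebServiceEndpoint("http://partner.example/ship"),
        "inventoryService": LocalSubsystemEndpoint(InventorySubsystem()),
    }
    print(partner_links["shippingService"].invoke("ship", {"order": 42}))
    print(partner_links["inventoryService"].invoke("check_stock", {"sku": "A-1"}))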
Abstract:
The mobile cloud computing paradigm can offer relevant and useful services to the users of smart mobile devices. Such public services already exist on the web and in cloud deployments, by implementing common web service standards. However, these services are described by mark-up languages, such as XML, that cannot be comprehended by non-specialists. Furthermore, the lack of common interfaces for related services makes discovery and consumption difficult for both users and software. The problem of service description, discovery, and consumption for the mobile cloud must be addressed to allow users to benefit from these services on mobile devices. This paper introduces our work on a mobile cloud service discovery solution, which is utilised by our mobile cloud middleware, Context Aware Mobile Cloud Services (CAMCS). The aim of our approach is to remove complex mark-up languages from the description and discovery process. By means of the Cloud Personal Assistant (CPA) assigned to each user of CAMCS, relevant mobile cloud services can be discovered and consumed easily by the end user from the mobile device. We present the discovery process, the architecture of our own service registry, and service description structure. CAMCS allows services to be used from the mobile device through a user's CPA, by means of user defined tasks. We present the task model of the CPA enabled by our solution, including automatic tasks, which can perform work for the user without an explicit request.
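The following sketch suggests, under stated assumptions, what a CPA-style task model might look like as a data structure, with user-defined and automatic tasks matched against a simplified registry; the field names and the category-keyed registry are hypothetical and are not the CAMCS API.

    # Hypothetical sketch of a CPA-style task model: user-defined tasks plus
    # automatic tasks that run without an explicit request. Field names and the
    # category-keyed registry are assumptions, not the CAMCS API.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Task:
        name: str
        service_category: str            # matched against the service registry
        parameters: dict = field(default_factory=dict)
        automatic: bool = False          # True: the CPA runs it on the user's behalf
        schedule: Optional[str] = None   # e.g. "daily" for automatic tasks

    REGISTRY = {  # simplified registry keyed by category instead of mark-up descriptions
        "weather": "https://example.org/weather-service",
        "news": "https://example.org/news-service",
    }

    tasks = [
        Task("morning-forecast", "weather", {"city": "Cork"}, automatic=True, schedule="daily"),
        Task("find-headlines", "news", {"topic": "cloud computing"}),
    ]
    for task in tasks:
        mode = "automatic" if task.automatic else "user-requested"
        print(f"{task.name}: {mode}, consumed via {REGISTRY[task.service_category]}")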
Abstract:
The needs for various forms of information systems relating to the European environment and ecosystem are reviewed, and their limitations indicated. Existing information systems are reviewed and compared in terms of aims and functionalities. We consider two technical challenges involved in attempting to develop an IEEICS. First, there is the challenge of developing an Internet-based communication system which allows fluent access to information stored in a range of distributed databases; some of the currently available solutions are considered, i.e. Web service federations. The second main challenge arises from the fact that there is general intra-national heterogeneity in the definitions adopted and the measurement systems used throughout the nations of Europe. Integrated strategies are needed.
Abstract:
eScience is an umbrella concept covering Internet technologies, such as web service orchestration, that involve the manipulation and processing of high volumes of data using simple and efficient methodologies. The concept is normally associated with bioinformatics, but nothing prevents an identical approach being used for geoinformatics and OGC (Open Geospatial Consortium) web services such as the WPS (Web Processing Service). In this paper we present an extended WPS implementation based on the PyWPS framework, using an automatically generated WSDL (Web Service Description Language) XML document that replicates the WPS input/output document structure used during an Execute request to a server. Services are accessed using a modified SOAP (Simple Object Access Protocol) interface provided by PyWPS, which uses service and input/output identifiers as element names. The WSDL XML document is dynamically generated by applying XSLT (Extensible Stylesheet Language Transformation) to the getCapabilities XML document generated by PyWPS. The availability of the SOAP interface and the WSDL description allows WPS instances to be accessed by workflow development software such as Taverna, enabling users to build complex workflows with web services represented as interconnected graphical elements. Taverna transforms the visual representation of the workflow into a SCUFL (Simple Conceptual Unified Flow Language) based XML document that can be run internally or sent to a Taverna orchestration server. SCUFL uses a dataflow-centric orchestration model, as opposed to the more commonly used orchestration language BPEL (Business Process Execution Language), which is process-centric.
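A minimal sketch of the XSLT-based generation step is given below, assuming the lxml library and toy stand-ins for the getCapabilities and service-description documents (the real PyWPS and WSDL documents are far richer):

    # Toy illustration of generating a service description by applying XSLT to a
    # getCapabilities-style document with lxml. Both XML snippets are invented
    # stand-ins, not real PyWPS/WPS or WSDL documents.
    from lxml import etree

    capabilities = etree.XML(b"""
    <Capabilities>
      <Process><Identifier>buffer</Identifier></Process>
      <Process><Identifier>reproject</Identifier></Process>
    </Capabilities>
    """)

    stylesheet = etree.XML(b"""
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/Capabilities">
        <definitions>
          <xsl:for-each select="Process">
            <operation name="{Identifier}"/>
          </xsl:for-each>
        </definitions>
      </xsl:template>
    </xsl:stylesheet>
    """)

    transform = etree.XSLT(stylesheet)
    print(etree.tostring(transform(capabilities), pretty_print=True).decode())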
Abstract:
When orchestrating Web service workflows, the geographical placement of the orchestration engine(s) can greatly affect workflow performance. Data may have to be transferred across long geographical distances, which in turn increases execution time and degrades the overall performance of a workflow. In this paper, we present a framework that, given a DAG-based workflow specification, computes the optimal Amazon EC2 cloud regions in which to deploy the orchestration engines and execute a workflow. The framework incorporates a constraint model that solves the workflow deployment problem and is generated using an automated constraint modelling system. The feasibility of the framework is evaluated by executing different sample workflows representative of scientific workloads. The experimental results indicate that the framework reduces workflow execution time and provides a speedup of 1.3x to 2.5x over centralised approaches.
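A brute-force toy version of the deployment decision is sketched below: it picks the engine region that minimises total data-transfer cost for a small workflow. The regions, per-MB costs and data sizes are invented, and the paper itself uses an automatically generated constraint model rather than enumeration.

    # Brute-force toy version of the deployment decision: choose the engine region
    # that minimises total data-transfer cost for a small workflow. All regions,
    # per-MB costs and data sizes below are invented for the example.
    regions = ["us-east-1", "eu-west-1", "ap-southeast-1"]
    cost = {  # symmetric per-MB transfer cost between regions (arbitrary units)
        ("us-east-1", "us-east-1"): 0.1, ("us-east-1", "eu-west-1"): 0.8,
        ("us-east-1", "ap-southeast-1"): 1.2, ("eu-west-1", "eu-west-1"): 0.1,
        ("eu-west-1", "ap-southeast-1"): 1.0, ("ap-southeast-1", "ap-southeast-1"): 0.1,
    }

    def c(a, b):
        return cost[(a, b)] if (a, b) in cost else cost[(b, a)]

    # Each workflow step runs as a service fixed in some region and ships its
    # output through the orchestration engine.
    service_region = {"ingest": "us-east-1", "transform": "eu-west-1", "publish": "eu-west-1"}
    output_mb = {"ingest": 500, "transform": 200, "publish": 50}

    best = min(regions, key=lambda engine: sum(
        c(region, engine) * output_mb[step] for step, region in service_region.items()))
    print("Deploy orchestration engine in:", best)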
Abstract:
Project work carried out to obtain the degree of Master in Informatics and Computer Engineering
Abstract:
Over the last two decades there has been a proliferation of programming exercise formats, which hinders interoperability in automatic assessment. In the absence of a widely accepted standard, a pragmatic solution is to convert content among the existing formats. BabeLO is a programming exercise converter providing services to a network of heterogeneous e-learning systems, such as contest management systems, programming exercise authoring tools, evaluation engines and repositories of learning objects. Its main feature is the use of a pivotal format to achieve greater extensibility: extending the converter to a new format only requires conversion to and from the pivotal format. This paper starts with an analysis of programming exercise formats representative of the existing diversity. This analysis sets the context for the proposed approach to exercise conversion and for the description of the pivotal data format. The abstract service definition is the basis for the design of BabeLO, its components and its web service interface. The paper includes a report on the use of BabeLO in two concrete scenarios: relocating exercises to a different repository, and using an evaluation engine in a network of heterogeneous systems.
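The pivotal-format idea can be sketched as follows: each supported format needs only a reader into and a writer out of the pivot, so adding a format never touches the others. The format names and fields below are simplified assumptions, not BabeLO's actual schemas.

    # Sketch of the pivotal-format idea: each format needs only a reader into and a
    # writer out of the pivot, so adding a new format never touches the others.
    # The format names ("contest", "authoring") and fields are simplified assumptions.
    PIVOT_FIELDS = {"title", "statement", "tests"}

    readers = {
        "contest":   lambda e: {"title": e["Name"], "statement": e["Desc"], "tests": e["Tests"]},
        "authoring": lambda e: {"title": e["title"], "statement": e["text"], "tests": e["cases"]},
    }
    writers = {
        "contest":   lambda p: {"Name": p["title"], "Desc": p["statement"], "Tests": p["tests"]},
        "authoring": lambda p: {"title": p["title"], "text": p["statement"], "cases": p["tests"]},
    }

    def convert(exercise, source_fmt, target_fmt):
        pivot = readers[source_fmt](exercise)
        assert set(pivot) == PIVOT_FIELDS  # every reader must fill the whole pivot
        return writers[target_fmt](pivot)

    exercise = {"title": "Sum", "text": "Add two numbers", "cases": ["1 2 -> 3"]}
    print(convert(exercise, "authoring", "contest"))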
Abstract:
Final Master's project carried out to obtain the degree of Master in Informatics and Computer Engineering
Abstract:
This paper presents a system developed to promote the rational use of electric energy among consumers and thus increase energy efficiency. The goal is to provide energy consumers with an application that displays energy consumption/production profiles, sets consumption ceilings, defines automatic alerts and alarms, anonymously compares consumers with identical energy usage profiles by region and, in the case of non-residential installations, predicts the expected consumption/production values. The resulting distributed system is organised in two main blocks: front-end and back-end. The front-end includes user interface applications for Android mobile devices and Web browsers. The back-end provides data storage and processing functionalities and is installed on a cloud computing platform - the Google App Engine - which provides a standard Web service interface. This option ensures the interoperability, scalability and robustness of the system.
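As a hedged illustration of the kind of rule the back-end could evaluate, the sketch below checks stored daily readings against a consumption ceiling and produces alerts; the units, thresholds and field names are invented for the example.

    # Hedged sketch of a ceiling/alert rule the back-end could evaluate over stored
    # readings; the thresholds, units and field names are invented for the example.
    from datetime import date

    daily_ceiling_kwh = 12.0
    readings_kwh = {date(2014, 5, 1): 9.2, date(2014, 5, 2): 11.7, date(2014, 5, 3): 14.1}

    alerts = [
        f"ALERT {day.isoformat()}: {kwh:.1f} kWh exceeds the {daily_ceiling_kwh} kWh ceiling"
        for day, kwh in sorted(readings_kwh.items())
        if kwh > daily_ceiling_kwh
    ]
    print("\n".join(alerts) or "Consumption within the configured ceiling")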
Abstract:
Metabolic homeostasis is achieved by complex molecular and cellular networks that differ significantly among individuals and are difficult to model with genetically engineered lines of mice optimized to study single gene function. Here, we systematically acquired metabolic phenotypes by using the EUMODIC EMPReSS protocols across a large panel of isogenic but diverse strains of mice (BXD type) to study the genetic control of metabolism. We generated and analyzed 140 classical phenotypes and deposited these in an open-access web service for systems genetics (www.genenetwork.org). Heritability, influence of sex, and genetic modifiers of traits were examined singly and jointly by using quantitative-trait locus (QTL) and expression QTL-mapping methods. Traits and networks were linked to loci encompassing both known variants and novel candidate genes, including alkaline phosphatase (ALPL), here linked to hypophosphatasia. The assembled and curated phenotypes provide key resources and exemplars that can be used to dissect complex metabolic traits and disorders.
Abstract:
The theme of the thesis centres on one important aspect of wireless sensor networks: energy efficiency. The limited energy source of the sensor nodes calls for the design of energy-efficient routing protocols, and protocol designs should try to minimise the number of communications among the nodes to save energy. Cluster-based techniques have been found to be energy-efficient: clusters are formed, data from the nodes are collected by a cluster head belonging to each cluster, and the aggregated data are then forwarded to the base station. An appropriate cluster head selection process and a desirable distribution of the clusters can reduce the energy consumption of the network and prolong its lifetime. In this work two such schemes were developed for static wireless sensor networks. The first scheme addresses the energy wasted when clusters are rebuilt using all the nodes; a tree-based scheme is presented that alleviates this problem by rebuilding only sub-clusters of the network. An analytical model of the energy consumption of the proposed scheme is developed, the scheme is compared with an existing cluster-based scheme, and the simulation study confirms the energy savings. The second scheme concentrates on building load-balanced, energy-efficient clusters to prolong the lifetime of the network. A voting-based approach is proposed that uses neighbour-node information in the cluster head selection process, and the number of nodes joining a cluster is restricted to obtain equal-sized, optimal clusters. Multi-hop communication among the cluster heads is also introduced to reduce energy consumption. The simulation study shows that the scheme produces balanced clusters and that the network achieves a reduction in energy consumption. The main conclusion from this study is that a routing scheme should pay attention to successful data delivery from node to base station in addition to energy efficiency. Cluster-based protocols have been extended from static to mobile scenarios by various authors, but none of the proposals addresses cluster head election appropriately in view of mobility. An elegant scheme for electing cluster heads is presented to meet the challenge of maintaining cluster durability when all the nodes in the network are moving; the scheme has been simulated and compared with a similar approach. The proliferation of sensor networks provides users with large sets of sensor information to exploit in various applications, yet sensor network programming remains inherently difficult for various reasons, so there must be an elegant way to collect the data gathered by sensor networks without worrying about the underlying structure of the network. The final work presented addresses a way to collect data from a sensor network and present it to users in a flexible way: a service-oriented-architecture-based application is built and the data collection task is exposed as a web service, enabling the composition of sensor data from different sensor networks to build interesting applications. The main objective of the thesis was to design energy-efficient routing schemes for both static and mobile sensor networks, and a progressive approach was followed to achieve this goal.
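The voting-based cluster-head election can be illustrated with the toy sketch below, in which every node votes for the neighbour with the highest residual energy, the most-voted nodes become heads, and cluster size is capped; the topology, energy values and cap are invented, and the scheme in the thesis is more elaborate.

    # Toy voting-based cluster-head election: every node votes for the neighbour
    # with the highest residual energy, the most-voted nodes become heads, and
    # cluster size is capped. Topology, energies and the cap are invented values.
    from collections import Counter

    neighbours = {
        "n1": ["n2", "n3"], "n2": ["n1", "n3", "n4"],
        "n3": ["n1", "n2", "n4"], "n4": ["n2", "n3", "n5"], "n5": ["n4"],
    }
    energy = {"n1": 0.9, "n2": 0.7, "n3": 0.95, "n4": 0.6, "n5": 0.8}
    max_cluster_size = 3

    votes = Counter(max(nbrs, key=energy.get) for nbrs in neighbours.values())
    heads = [node for node, _ in votes.most_common(2)]

    clusters = {head: [head] for head in heads}
    for node in neighbours:
        if node in heads:
            continue
        # Join a neighbouring head that still has room (unassigned nodes would be
        # handled in a later round in a full protocol).
        for head in heads:
            if head in neighbours[node] and len(clusters[head]) < max_cluster_size:
                clusters[head].append(node)
                break
    print(clusters)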
Abstract:
The technology of service-oriented architectures (SOA) raises great expectations in industry as well as in research. It has proven to be an ideal solution for environments in which IT requirements change rapidly. Today's IT systems must allow management tasks such as software installation, adaptation or replacement without significantly disturbing ongoing operation. Service-oriented architectures, in which software components are made available as services, provide the necessary flexibility. Through its interface, a service gives local as well as remote applications access to its functionality. In the following we consider only service-oriented architectures in which services can be dynamically discovered, bound, composed, negotiated and adapted at run time. An application can work with different services, for example when services fail or a new service meets the application's requirements better. One of our basic assumptions is therefore that both the supply of services and the demand side are variable. Service-oriented architectures carry particular weight in the implementation of business processes. Within the Enterprise Integration Architecture paradigm, individual work steps are implemented as services and a business process is executed as a workflow of services. Such a service composition is also called an orchestration. Especially for so-called B2B (business-to-business) integration, services are the appropriate means of supporting communication across company boundaries. Here, services are usually realised as Web Services, which are orchestrated by means of BPEL4WS. XML-based messaging and the HTTP protocol ensure compatibility between heterogeneous systems and transparency of the message exchange. Providers expect a high benefit from their public services: on the one hand, they hope that their services will increasingly be incorporated into software processes; on the other hand, they count on new software being developed on the basis of their services. In the future, hundreds of such services will be available, and it will be difficult for developers to find suitable service offers. The ADDO project has achieved important results in this area. In the course of the project it was shown that the use of semantic specifications makes it possible to automatically inspect services with respect to both their functional and their non-functional properties, in particular quality of service, and to bind them into service aggregates [15]. To this end, ontology schemata [10, 16], matching algorithms [16, 9] and tools were developed and implemented as a framework [16]. The quality-of-service matching algorithm developed in this context supports the automatic negotiation of contracts for service usage, for example to incorporate fee-based services. ADDO provides an approach for creating templates for service aggregates in BPEL4WS that are managed automatically at run time. The approach demonstrated its effectiveness at the international Web Service Challenge 2006 in San Francisco: the semantic service composition algorithm developed for ADDO took first place. The algorithm makes it possible to select a suitable subset from a very large set of offered services, to combine these services into service aggregates, and thereby to provide the functionality of a given requested service. Further results of the ADDO project have been published at international workshops and conferences [12, 11].
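To give a flavour of input/output-driven service composition, the simplified sketch below chains services forward from the concepts a requester can provide until the requested outputs are produced; the service names and concepts are invented, and the actual ADDO matchmaker additionally reasons over ontologies and quality of service.

    # Greatly simplified flavour of I/O-based service composition: starting from the
    # concepts the requester can provide, repeatedly add any service whose inputs
    # are already satisfied until the requested outputs are produced. The real ADDO
    # matchmaker additionally reasons over ontologies and quality of service.
    services = {
        "GeoCoder":   ({"Address"}, {"Coordinates"}),
        "WeatherSvc": ({"Coordinates"}, {"Forecast"}),
        "TrafficSvc": ({"Coordinates"}, {"TrafficLevel"}),
    }
    provided, requested = {"Address"}, {"Forecast"}

    composition, known = [], set(provided)
    while not requested <= known:
        candidates = [name for name, (ins, outs) in services.items()
                      if ins <= known and not outs <= known]
        if not candidates:
            raise RuntimeError("no composition found")
        chosen = candidates[0]
        composition.append(chosen)
        known |= services[chosen][1]

    print(" -> ".join(composition))  # GeoCoder -> WeatherSvc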