811 results for android, web, service, REST, wearable, computing, bluetooth, activity, recognition
Abstract:
With the rapid increase of heterogeneous application systems on the Internet and the unprecedented development of SOA technology, Web service technology has become increasingly important and is now a focus of attention in academia and industry alike. Within Web service technology, service discovery is the bridge through which Web service consumers invoke the services offered by Web service providers; it plays a crucial connecting role and has become a key topic. There are currently two main approaches to Web service discovery. The first is the traditional approach, based mainly on pure keyword lookup in UDDI (Universal Description, Discovery and Integration); the second performs semantic matching between Web services based on their semantic information. UDDI, the foundation of the first approach, is an international standard and the most widely deployed, but its descriptions of Web services are purely syntactic and lack information specific to Web services such as I/O attributes and quality-of-service attributes. The second approach draws on the semantic information of Web services, including their I/O attributes, but its adoption has been limited by the lack of a flexible and effective matching method and a corresponding matching framework. On this basis, building on an analysis of current semantic Web service matching techniques, this thesis improves the existing matching methods and proposes a filter-based semantic Web service matching framework. The main contributions are: 1) a fairly comprehensive survey and discussion of current semantic Web service matching techniques; 2) a detailed analysis of each stage of semantic Web service matching and improvements to the matching methods used there, including a method for computing ontology concept weights based on the vector space model (VSM) and the TF-IDF (Term Frequency-Inverse Document Frequency) idea, a method for computing the weights of edges in the ontology hierarchy graph, and a method for computing similarity between ontologies; 3) a semantic matching approach based on the black-box attributes of Web services; 4) on the framework side, a filter-based semantic Web service matching framework, which is also extended to non-semantic Web service systems.
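The ontology-weighting idea in contribution 2) can be illustrated with a small, self-contained example. The sketch below is only a minimal rendering of the TF-IDF/VSM ingredient, not the thesis's actual formulas: it treats each concept's textual description as a document, weights terms by TF-IDF, and scores concept similarity by cosine; the concept names and descriptions are invented.

```python
import math
from collections import Counter

# Invented ontology concepts with short keyword descriptions.
concepts = {
    "WeatherService": "current weather forecast temperature humidity city",
    "ClimateData":    "historical weather temperature statistics city",
    "StockQuote":     "current stock price ticker symbol market",
}

docs = {name: text.split() for name, text in concepts.items()}
N = len(docs)
df = Counter(term for terms in docs.values() for term in set(terms))

def tfidf(terms):
    tf = Counter(terms)
    return {t: (tf[t] / len(terms)) * math.log(N / df[t]) for t in tf}

def cosine(a, b):
    dot = sum(w * b[t] for t, w in a.items() if t in b)
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

vectors = {name: tfidf(terms) for name, terms in docs.items()}
print(cosine(vectors["WeatherService"], vectors["ClimateData"]))  # higher: shared weather terms
print(cosine(vectors["WeatherService"], vectors["StockQuote"]))   # lower: little overlap
```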
Abstract:
This paper formally defines the five basic logical structures that occur in Web service composition and represents them with coloured Petri nets, then abstracts them as algebraic operations on services. On this basis, it establishes the properties of the services obtained by applying these operations and gives a construction method for composite services. Finally, a worked example shows that the modelling approach guarantees that the composed service is correct and terminating.
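To make the "services as algebraic operations" idea concrete, the toy sketch below models a service as a Python callable and defines sequence, parallel and exclusive-choice combinators (three common composition structures; the paper defines five). It is an informal rendering only, not the paper's coloured-Petri-net semantics, and the atomic services are invented.

```python
from concurrent.futures import ThreadPoolExecutor

# A "service" is modelled here as a function from an input message to an output message.

def sequence(s1, s2):
    """Sequential composition: the output of s1 feeds s2."""
    return lambda x: s2(s1(x))

def parallel(s1, s2):
    """Parallel composition (AND-split/AND-join): run both branches and join their results."""
    def composed(x):
        with ThreadPoolExecutor(max_workers=2) as pool:
            f1, f2 = pool.submit(s1, x), pool.submit(s2, x)
            return (f1.result(), f2.result())
    return composed

def choice(predicate, s1, s2):
    """Exclusive choice (XOR-split): route the input to exactly one branch."""
    return lambda x: s1(x) if predicate(x) else s2(x)

# Invented atomic services for an order-handling composition.
validate = lambda order: {**order, "valid": order["amount"] > 0}
bill     = lambda order: f"billed {order['amount']}"
ship     = lambda order: f"shipped to {order['customer']}"
reject   = lambda order: "rejected"

process = sequence(validate, choice(lambda o: o["valid"], parallel(bill, ship), reject))
print(process({"customer": "alice", "amount": 42}))  # ('billed 42', 'shipped to alice')
```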
Abstract:
Verifying the correctness of Web service composition is important for improving software development efficiency and realising value-added services. To study the correctness of Web service composition and its formal verification at a high level of abstraction, and taking the real-time characteristics of Web service composition into account, the composition is first described with the software architecture description language XYZ/ADL; its real-time part, XYZ/RE, is then translated into a timed-automata model, the properties the composed system should satisfy are expressed as formulas of the branching-time temporal logic CTL, and the model checker UPPAAL is finally applied to verify the correctness of the Web service composition automatically.
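To give a flavour of the properties involved, the formulas below are illustrative CTL-style requirements of the kind one might check for a composed service in UPPAAL; the proposition names and the timing bound are invented, not taken from the paper.

```latex
\begin{align*}
  &\mathrm{AG}\,\neg\,\mathit{deadlock}
      && \text{the composition never deadlocks}\\
  &\mathrm{AG}\,(\mathit{requestSent} \rightarrow \mathrm{AF}\,\mathit{replyReceived})
      && \text{every request is eventually answered}\\
  &\mathrm{AG}\,(\mathit{invoked} \rightarrow \mathrm{AF}_{\le 5}\,\mathit{completed})
      && \text{each invocation completes within 5 time units}
\end{align*}
```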
Abstract:
With the development of network technology, and of Internet technology in particular, the high performance, high reliability, high responsiveness, scalability and transparency of distributed systems have made distributed applications increasingly widespread. In a distributed environment, the integration of information systems must be considered carefully. How to integrate and uniformly access distributed, heterogeneous data resources so as to convert, exchange and share data, and how to build an open, extensible and heterogeneity-tolerant new generation of information management systems, have become important research directions in network applications. Focusing on the data-integration and operation-integration aspects of building a lunar exploration data management and integration system, this thesis analyses and studies data storage management and system integration technologies. Combining the integration requirements and characteristics of lunar exploration data management, it proposes an XML-based data storage and management scheme and a Web Service-based system integration scheme, and uses XML/Web Service technology on the .NET platform to develop a prototype lunar exploration data management and integration system. The research covers four main aspects. (1) XML-based data model and database storage. Exploiting XML's self-describing, platform- and application-independent, semi-structured, machine-processable and extensible nature, together with its suitability for network transmission and its broad support, XML data storage over a relational database is implemented. Both structured and unstructured information in the system are given XML markup definitions, enabling fine-grained storage and querying and improving the capability and efficiency of representing, querying, inserting and deleting data. Using XML as the format for lunar exploration data exchange and information transmission also provides an ideal solution for interoperating with heterogeneous systems. (2) A Web Services-based architecture for distributed information system integration. Building on a study of the key technologies and standards XML, SOAP, WSDL and UDDI, a three-tier distributed architecture based on XML/Web Service is proposed for the needs of lunar exploration data management, consisting of a presentation tier, an application logic tier and a server-side data tier. It provides migratable and composable system functionality, XML-formatted data flows between tiers, and dynamic interface definitions. Compared with tightly coupled distributed applications built with traditional technologies, the system is substantially improved in cross-platform capability, configurability, scalability and maintainability. (3) Development and implementation on the .NET platform. After an in-depth analysis of the core technologies and overall framework of the Microsoft .NET platform, a unified identity authentication system and the lunar exploration data management and integration system were developed in the Visual Studio .NET environment using C#, ASP.NET and ADO.NET over the relational database Oracle 9i. The unified identity authentication system is a general-purpose user authentication management system with user management, authentication, entity management, log monitoring and message management, achieving single sign-on across all systems. The lunar exploration data management and integration system comprises five functional modules: data management, information publishing, system administration, integrated query and application integration. Compared with the original tightly coupled applications, development efficiency, reusability, coupling, flexibility and adaptability are all greatly improved. (4) Dynamic system integration based on XML/Web Services. After analysing the shortcomings of traditional distributed object models for integration in heterogeneous environments, information integration based on XML, SOAP and WSDL was implemented across the unified identity authentication system, the lunar exploration data management and integration system, the small space debris database system and applications developed in other languages, effectively integrating resources across time and space. These integration examples fully demonstrate the advantages of Web Services technology for application system integration. The results of this research also provide a reference example for the processing, management and system integration of massive data in geochemistry, promoting the fusion and comprehensive application of geochemical data.
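As a rough sketch of point (1), storing XML-tagged records over a relational database and querying them could look like the following. The thesis used Oracle 9i on the .NET platform; this example uses Python's standard library purely for illustration, and the record schema is invented.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Invented XML record for a single lunar exploration observation.
record = """<observation id="OBS-0042">
  <instrument>stereo camera</instrument>
  <target>Mare Imbrium</target>
  <acquired>2008-03-14T02:10:00Z</acquired>
</observation>"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE observation (id TEXT PRIMARY KEY, instrument TEXT, target TEXT, xml TEXT)")

# Shred selected elements into columns for fast querying; keep the full document for exchange.
doc = ET.fromstring(record)
conn.execute(
    "INSERT INTO observation VALUES (?, ?, ?, ?)",
    (doc.get("id"), doc.findtext("instrument"), doc.findtext("target"), record),
)

row = conn.execute("SELECT xml FROM observation WHERE target = ?", ("Mare Imbrium",)).fetchone()
print(ET.fromstring(row[0]).findtext("acquired"))  # 2008-03-14T02:10:00Z
```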
Abstract:
The mobile cloud computing paradigm can offer relevant and useful services to the users of smart mobile devices. Such public services already exist on the web and in cloud deployments, by implementing common web service standards. However, these services are described by mark-up languages, such as XML, that cannot be comprehended by non-specialists. Furthermore, the lack of common interfaces for related services makes discovery and consumption difficult for both users and software. The problem of service description, discovery, and consumption for the mobile cloud must be addressed to allow users to benefit from these services on mobile devices. This paper introduces our work on a mobile cloud service discovery solution, which is utilised by our mobile cloud middleware, Context Aware Mobile Cloud Services (CAMCS). The aim of our approach is to remove complex mark-up languages from the description and discovery process. By means of the Cloud Personal Assistant (CPA) assigned to each user of CAMCS, relevant mobile cloud services can be discovered and consumed easily by the end user from the mobile device. We present the discovery process, the architecture of our own service registry, and service description structure. CAMCS allows services to be used from the mobile device through a user's CPA, by means of user defined tasks. We present the task model of the CPA enabled by our solution, including automatic tasks, which can perform work for the user without an explicit request.
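The registry and description structure presented in the paper are not reproduced here; the sketch below only illustrates the general shape of a registry of plainly described services that a personal assistant could filter by keyword. All field names, entries and URLs are invented.

```python
# Invented, simplified registry entries: plain descriptions instead of XML/WSDL mark-up.
registry = [
    {"name": "CityWeather", "endpoint": "https://example.org/weather",
     "description": "current weather forecast for a city", "inputs": ["city"]},
    {"name": "TransitTimes", "endpoint": "https://example.org/transit",
     "description": "next bus and train departures near a location", "inputs": ["latitude", "longitude"]},
]

def discover(query):
    """Return registry entries whose description mentions every query term."""
    terms = query.lower().split()
    return [s for s in registry if all(t in s["description"].lower() for t in terms)]

for service in discover("weather city"):
    print(service["name"], service["endpoint"])  # CityWeather https://example.org/weather
```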
Abstract:
This short position paper considers issues in developing a Data Architecture for the Internet of Things (IoT) through the medium of an exemplar project, Domain Expertise Capture in Authoring and Development Environments (DECADE). A brief discussion sets the background for IoT and the development of the distinction between things and computers. The paper makes a strong argument to avoid reinventing the wheel: to reuse approaches to distributed heterogeneous data architectures, together with the lessons learned from that work, and to apply them to this situation. DECADE requires an autonomous recording system, local data storage, a semi-autonomous verification model, a sign-off mechanism, and qualitative and quantitative analysis carried out when and where required through a web-service architecture based on ontology and analytic agents, with a self-maintaining ontology model. To develop this, we describe a web-service architecture combining a distributed data warehouse, web services for analysis agents, ontology agents and a verification engine, with a centrally verified outcome database maintained by a certifying body for qualification/professional status.
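One way to picture the "analysis agents exposed as web services" ingredient of this architecture is a minimal HTTP endpoint that accepts recorded authoring events and returns a verification verdict. This sketch uses Flask with entirely invented routes and payload fields; it is not taken from the DECADE project.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/verify", methods=["POST"])
def verify():
    """Accept a batch of recorded authoring events and return a simple verdict."""
    events = request.get_json().get("events", [])
    # Placeholder check: a real agent would consult the ontology and analytic models.
    signed_off = any(e.get("type") == "sign-off" for e in events)
    return jsonify({"events_seen": len(events), "verified": signed_off})

if __name__ == "__main__":
    app.run(port=5000)
```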
Abstract:
The open service network for marine environmental data (NETMAR) project uses semantic web technologies in its pilot system, which aims to allow users to search, download and integrate satellite, in situ and model data from open ocean and coastal areas. The semantic web is an extension of the fundamental ideas of the World Wide Web, building a web of data through annotation of metadata and data with hyperlinked resources. Within the framework of the NETMAR project, an interconnected semantic web resource was developed to aid in data and web service discovery and to validate Open Geospatial Consortium Web Processing Service orchestration. A second semantic resource was developed to support interoperability of coastal web atlases across jurisdictional boundaries. This paper outlines the approach taken to producing the resource registry used within the NETMAR project and demonstrates the use of these semantic resources to support user interactions with systems. Such interconnected semantic resources increase the ability to share and disseminate data by facilitating interoperability between data providers. The formal representation of geospatial knowledge to advance geospatial interoperability is a growing research area. Tools and methods such as those outlined in this paper have the potential to support these efforts.
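Interacting with an interconnected semantic resource of this kind typically happens over SPARQL. The sketch below uses the SPARQLWrapper library with an invented endpoint URL and vocabulary, so it should be read as a general pattern rather than the actual NETMAR registry interface.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Invented registry endpoint and vocabulary; the real NETMAR resources may differ.
sparql = SPARQLWrapper("https://registry.example.org/sparql")
sparql.setQuery("""
    PREFIX dct: <http://purl.org/dc/terms/>
    SELECT ?service ?title WHERE {
        ?service a <http://example.org/def/WebProcessingService> ;
                 dct:title ?title .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)

for binding in sparql.query().convert()["results"]["bindings"]:
    print(binding["service"]["value"], "-", binding["title"]["value"])
```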
Abstract:
A web-service is a remote computational facility which is made available for general use by means of the internet. An orchestration is a multi-threaded computation which invokes remote services. In this paper game theory is used to analyse the behaviour of orchestration evaluations when the underlying web-services are unreliable. Uncertainty profiles are proposed as a means of defining bounds on the number of service failures that can be expected during an orchestration evaluation. An uncertainty profile describes a strategic situation that can be analysed using a zero-sum angel-daemon game with two competing players: an angel a, whose objective is to minimise damage to an orchestration, and a daemon d, who acts in a destructive fashion. An uncertainty profile is assessed using the value of its angel-daemon game. It is shown that uncertainty profiles form a partial order which is monotonic with respect to assessment.
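For readers who want to see what "assessing an uncertainty profile by the value of its game" can amount to numerically, the sketch below computes the value of a small zero-sum matrix game via the standard linear-programming reduction. The payoff matrix is invented and the code is a generic game solver, not the paper's angel-daemon construction.

```python
import numpy as np
from scipy.optimize import linprog

def game_value(payoff):
    """Value of a zero-sum matrix game (row player maximises), via the standard LP reduction."""
    A = np.asarray(payoff, dtype=float)
    m, n = A.shape
    shift = 1.0 - A.min()          # make all payoffs positive; shifts the value, not the strategies
    A = A + shift
    # Minimise sum(y) subject to A^T y >= 1, y >= 0; the game value is then 1 / sum(y).
    res = linprog(c=np.ones(m), A_ub=-A.T, b_ub=-np.ones(n),
                  bounds=[(0, None)] * m, method="highs")
    return 1.0 / res.fun - shift

# Invented payoffs: rows = angel's choice of services to shield, columns = daemon's attack.
payoffs = [[3, -1],
           [0,  2]]
print(game_value(payoffs))  # ~1.0: the assessment of this toy uncertainty profile
```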
Abstract:
This research presents a fast algorithm for projected support vector machines (PSVM): a basis vector set (BVS) is selected for the kernel-induced feature space, and the training points are projected onto the subspace spanned by the selected BVS. A standard linear support vector machine (SVM) is then produced in the subspace with the projected training points. As the dimension of the subspace is determined by the size of the selected basis vector set, the size of the produced SVM expansion can be specified. A two-stage algorithm is derived which selects and refines the basis vector set, achieving a locally optimal model. The model expansion coefficients and bias are updated recursively as the basis set and support vector set grow and shrink. The condition for a point to lie outside the span of the current basis vector set, and hence to be selected as a new basis vector, is derived and embedded in the recursive procedure. This guarantees the linear independence of the produced basis set. The proposed algorithm is tested and compared with an existing sparse primal SVM (SpSVM) and a standard SVM (LibSVM) on seven public benchmark classification problems. Our new algorithm is designed for the application area of human activity recognition using smart devices and embedded sensors, where sometimes limited memory and processing resources must be exploited to the full and where more robust and accurate classification means a more satisfied user. Experimental results demonstrate the effectiveness and efficiency of the proposed algorithm. This work builds upon a previously published algorithm created specifically for activity recognition within mobile applications for the EU Haptimap project [1]. The algorithms detailed in this paper are more memory- and resource-efficient, making them suitable for use with bigger data sets and more easily trained SVMs.
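The projection step can be pictured independently of the paper's two-stage selection procedure: pick a basis vector set, build the empirical kernel map onto the subspace it spans, and train a linear SVM on the projected points. The sketch below does exactly that with scikit-learn on synthetic data; it is a Nyström-style illustration under invented parameters, not the authors' algorithm.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import LinearSVC

def project_onto_basis(X, basis, gamma=0.5):
    """Coordinates of phi(X) projected onto span{phi(b) : b in basis} in the RBF feature space."""
    K_bb = rbf_kernel(basis, basis, gamma=gamma)
    K_xb = rbf_kernel(X, basis, gamma=gamma)
    L = np.linalg.cholesky(K_bb + 1e-8 * np.eye(len(basis)))  # K_bb = L L^T
    return np.linalg.solve(L, K_xb.T).T                       # z(x) = L^{-1} k_b(x)

# Synthetic data standing in for accelerometer features in an activity-recognition task.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

basis = X[:20]                      # a fixed-size basis bounds the size of the final model
Z = project_onto_basis(X, basis)
clf = LinearSVC(max_iter=5000).fit(Z, y)
print("training accuracy:", clf.score(Z, y))
```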
Abstract:
When orchestrating Web service workflows, the geographical placement of the orchestration engine(s) can greatly affect workflow performance. Data may have to be transferred across long geographical distances, which in turn increases execution time and degrades the overall performance of a workflow. In this paper, we present a framework that, given a DAG-based workflow specification, computes the optimal Amazon EC2 cloud regions in which to deploy the orchestration engines and execute a workflow. The framework incorporates a constraint model, generated using an automated constraint modelling system, that solves the workflow deployment problem. The feasibility of the framework is evaluated by executing different sample workflows representative of scientific workloads. The experimental results indicate that the framework reduces workflow execution time and provides a speedup of 1.3x to 2.5x over centralised approaches.
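The essence of the deployment problem can be shown with a brute-force placement search: assign each workflow task to a region so that the estimated transfer cost along the DAG edges is minimal. This is a toy model with invented cost figures and a tiny invented workflow, not the paper's automatically generated constraint model.

```python
from itertools import product

regions = ["us-east-1", "eu-west-1", "ap-southeast-1"]

# Invented pairwise transfer cost (seconds per GB) between regions.
cost = {
    ("us-east-1", "us-east-1"): 0.1, ("us-east-1", "eu-west-1"): 0.9, ("us-east-1", "ap-southeast-1"): 1.6,
    ("eu-west-1", "eu-west-1"): 0.1, ("eu-west-1", "ap-southeast-1"): 1.4,
    ("ap-southeast-1", "ap-southeast-1"): 0.1,
}
cost.update({(b, a): c for (a, b), c in list(cost.items())})  # make the table symmetric

# DAG edges as (producer, consumer, data volume in GB); the input data set is pinned to eu-west-1.
edges = [("input", "t1", 5), ("input", "t2", 3), ("t1", "t3", 4), ("t2", "t3", 2)]
tasks = ["t1", "t2", "t3"]

def plan_cost(assignment):
    placed = {"input": "eu-west-1", **assignment}
    return sum(vol * cost[(placed[a], placed[b])] for a, b, vol in edges)

best = min((dict(zip(tasks, combo)) for combo in product(regions, repeat=len(tasks))),
           key=plan_cost)
print(best, plan_cost(best))  # places the tasks near the pinned data set
```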
Abstract:
Project work carried out to obtain the Master's degree in Informatics and Computer Engineering.
Abstract:
Over the last two decades there has been a proliferation of programming exercise formats, which hinders interoperability in automatic assessment. In the absence of a widely accepted standard, a pragmatic solution is to convert content among the existing formats. BabeLO is a programming exercise converter providing services to a network of heterogeneous e-learning systems such as contest management systems, programming exercise authoring tools, evaluation engines and repositories of learning objects. Its main feature is the use of a pivotal format to achieve greater extensibility: supporting another format only requires conversion to and from the pivot. This paper starts with an analysis of programming exercise formats representative of the existing diversity. This analysis sets the context for the proposed approach to exercise conversion and for the description of the pivotal data format. The abstract service definition is the basis for the design of BabeLO, its components and its web service interface. The paper includes a report on the use of BabeLO in two concrete scenarios: relocating exercises to a different repository, and using an evaluation engine in a network of heterogeneous systems.
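The pivotal-format idea can be sketched as a pair of conversion maps per supported format, so that each new format only needs translation to and from the pivot. The format names, field names and converters below are invented placeholders, not BabeLO's actual schema or service interface.

```python
# Pivotal representation: a plain dict holding the common exercise metadata.
def format_a_to_pivot(src):
    return {"title": src["Title"], "statement": src["Description"], "tests": src["Tests"]}

def pivot_to_format_b(p):
    return {"name": p["title"], "text": p["statement"], "testcases": p["tests"]}

CONVERTERS = {
    "format_a": {"to_pivot": format_a_to_pivot},
    "format_b": {"from_pivot": pivot_to_format_b},
}

def convert(exercise, source_fmt, target_fmt):
    """Convert between any two formats by passing through the pivotal format."""
    pivot = CONVERTERS[source_fmt]["to_pivot"](exercise)
    return CONVERTERS[target_fmt]["from_pivot"](pivot)

exercise = {"Title": "Sum two numbers", "Description": "Read two integers and print their sum.", "Tests": ["1 2 -> 3"]}
print(convert(exercise, "format_a", "format_b"))
```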
Abstract:
OWL-S is an application of OWL, the Web Ontology Language, that describes the semantics of Web Services so that their discovery, selection, invocation and composition can be automated. The research literature reports the use of UML diagrams for the automatic generation of Semantic Web Service descriptions in OWL-S. This paper demonstrates a higher level of automation by generating complete Web applications from OWL-S descriptions that have themselves been generated from UML. Previously, we proposed an approach for processing OWL-S descriptions in order to produce MVC-based skeletons for Web applications. The OWL-S ontology undergoes a series of transformations in order to generate a Model-View-Controller application implemented by a combination of Java Beans, JSP, and Servlet code, respectively. In this paper, we show in detail the documents produced at each processing step. We highlight the connections between OWL-S specifications and executable code in the various Java dialects and show the Web interfaces that result from this process.
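A very rough flavour of the OWL-S-to-code step: parse the service ontology with rdflib, pull out the service name, and emit a servlet-style controller stub. The namespace URI is assumed for illustration and the generated Java is a bare placeholder; this is not the transformation pipeline described in the paper.

```python
from rdflib import Graph, Namespace, RDF

# Namespace assumed for illustration; real OWL-S files may use a different version URI.
OWLS = Namespace("http://www.daml.org/services/owl-s/1.2/Service.owl#")

def generate_controller(owls_file):
    """Emit a minimal servlet controller stub named after the first owls:Service found."""
    g = Graph()
    g.parse(owls_file)
    service = next(g.subjects(RDF.type, OWLS.Service), None)
    name = service.split("#")[-1] if service is not None else "Generated"
    return (
        f"public class {name}Controller extends javax.servlet.http.HttpServlet {{\n"
        f"    // TODO: wire the {name} model bean and forward to the JSP view\n"
        f"}}\n"
    )

# print(generate_controller("BookFinder.owl"))  # hypothetical OWL-S description file
```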