452 results for QoS
Abstract:
In a system whose performance varies unpredictably, applications compete for and share limited system resources and are affected by changes in those resources. At run time, an application that requires QoS guarantees must be able to adapt to this environment in order to preserve its QoS properties. Because system resources are managed globally, adding adaptation mechanisms inside the application alone is not enough; QoS management mechanisms are also needed at the system level. To this end, a QoS management model for component-based systems, QuCOM (quality component), is presented together with a method for integrating it into a system component framework, so that component applications developed on QuCOM can adapt to a changing system environment. To validate the effectiveness of QuCOM, a video streaming application is taken as an example and an analysis of the relevant experimental data is given.
Abstract:
The growing diversity and complexity of application requirements in networked distributed computing environments require the Web application server (WAS), which sits at the middleware layer, to move from its original best-effort service model to a quality-of-service (QoS) guarantee model that provides each application with guarantees appropriate to its requirements. Current WAS systems are still weak in this respect. OnceAS/Q is a QoS-oriented WAS that, on the basis of QoS specifications, provides different QoS guarantee capabilities for different applications. OnceAS/Q implements an application QoS guarantee framework and offers a set of QoS service components that support the development and operation of applications with QoS requirements. This paper introduces the architecture and main components of OnceAS/Q and elaborates on two key issues: the definition and mapping of QoS specifications, and the QoS-oriented dynamic reconfiguration of service components and resources. The QoS guarantee capability of the OnceAS/Q prototype was evaluated under the ECperf benchmark. The experimental data show that, in larger-scale application environments, OnceAS/Q provides better service quality at an acceptable overhead.
Abstract:
The first-come-first-served (FCFS) scheduling framework commonly used by today's Web application servers has difficulty guaranteeing the quality-of-service (QoS) requirements of applications under overload. The QoS-benefit-driven (QBD) scheduling framework is a request-scheduling solution proposed to address these shortcomings. The QoS benefit is derived from an application's QoS requirements and is used to evaluate how well the provided QoS guarantee satisfies those requirements. The QBD scheduling framework contains several components for guaranteeing application QoS requirements and implements a resource-planning algorithm based on QoS benefit, which improves the server's ability to guarantee application QoS requirements. Experimental results on the OnceAS platform verify the effectiveness of the QBD scheduling framework.
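The abstract does not spell out how the QoS benefit is computed or used. Purely as a sketch of the general idea of benefit-driven rather than first-come-first-served dispatching, the Python fragment below orders pending requests by a made-up benefit score derived from importance and remaining slack; the class, the function names, and the benefit formula are assumptions, not the QBD framework's actual design.

    import heapq
    import itertools
    import time

    _counter = itertools.count()   # tie-breaker so equal benefits pop in FIFO order

    def qos_benefit(deadline, importance, now):
        """Toy benefit score: more important and more urgent requests score higher.
        The real QBD framework derives benefit from the application's QoS
        requirements; this formula is only an illustrative assumption."""
        slack = max(deadline - now, 1e-3)
        return importance / slack

    class BenefitDrivenQueue:
        """Serves the pending request with the highest QoS benefit, not the oldest."""
        def __init__(self):
            self._heap = []                      # max-heap via negated benefit

        def submit(self, request, deadline, importance):
            b = qos_benefit(deadline, importance, time.time())
            heapq.heappush(self._heap, (-b, next(_counter), request))

        def next_request(self):
            if not self._heap:
                return None
            _, _, request = heapq.heappop(self._heap)
            return request

    # Under overload, low-benefit requests wait (or can be shed) while
    # high-benefit requests are served first, unlike FCFS.
    q = BenefitDrivenQueue()
    q.submit("report-page", deadline=time.time() + 5.0, importance=1.0)
    q.submit("checkout",    deadline=time.time() + 1.0, importance=3.0)
    print(q.next_request())                      # -> "checkout"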
Abstract:
As the main implementation technology of the service-oriented computing paradigm, Web services have effectively improved the development efficiency of distributed applications in heterogeneous environments and reduced their development cost. Service discovery and selection, key techniques in the Web service technology stack, raise the degree of software reuse and thereby further promote inter-enterprise application integration and large-scale resource sharing. On the one hand, the number of Web services keeps growing as service computing technology develops; on the other hand, the demand for business agility keeps rising. Selecting suitable services for users from a large pool of candidates, while ensuring effective reuse and compatible composition, has therefore become a major challenge in service computing. Current Web service selection techniques lack effective support for non-functional properties, and the precision of service selection is poor. To address this problem, this thesis adopts a QoS-aware service selection mechanism based on active monitoring feedback and, building on trustworthy monitoring and accurate prediction of service quality, designs a QoS-aware service selection algorithm that effectively improves the precision of service selection.

The thesis first studies the modeling of Web service quality (QoS) and designs a lightweight service metadata model for describing the QoS of Web services. On this basis, it designs and implements a QoS constraint matching algorithm based on ordered data structures and a service ranking algorithm that jointly considers the degree of functional matching, QoS guarantee capability, past reputation, and user preferences. Besides improving service precision and recall, this reduces the user's burden during service selection and helps automate the selection process.

The thesis also designs a client-side monitoring feedback scheme that supplies more accurate QoS data for service selection through trustworthy monitoring and effective prediction of historical service state. It uses an AOP-based trustworthy monitoring scheme and two prediction algorithms, one based on a low-pass filter and one based on adaptive least squares, to ensure that monitoring is real-time, trustworthy, and non-intrusive and that QoS prediction is accurate. This further improves selection precision while reducing user involvement and increasing automation.

Finally, the thesis discusses the design and implementation of the ONCE service selection system OnceSC, into which the above results are incorporated. Experimental evaluation of the system's functional and non-functional properties shows that OnceSC is QoS-aware, achieves high service precision and recall, and requires no user involvement.
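The abstract names a low-pass-filter-based QoS predictor but does not give the filter itself. The sketch below assumes a first-order exponential-smoothing filter as a stand-in, showing how monitored response times could be smoothed into a prediction that damps outlier spikes; the function name, the alpha parameter, and the sample values are illustrative assumptions.

    def low_pass_predict(samples, alpha=0.3):
        """Predict the next QoS value (e.g. response time in ms) from monitored
        samples with a first-order low-pass (exponential smoothing) filter.
        Smaller alpha means heavier smoothing."""
        if not samples:
            raise ValueError("need at least one monitored sample")
        estimate = samples[0]
        for x in samples[1:]:
            estimate = alpha * x + (1 - alpha) * estimate
        return estimate

    # Response times reported by the client-side monitor (ms); one outlier spike.
    history = [120, 130, 500, 125, 118, 122]
    print(round(low_pass_predict(history), 1))   # the spike is damped, not tracked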
Abstract:
To address the problems of Internet-based robot teleoperation, and drawing on the latest developments in Internet technology and on the characteristics and advantages of IP QoS techniques, this paper studies and designs a networked robot teleoperation system based on the Integrated Services architecture. An analysis of the feasibility and soundness of combining IP QoS techniques with robot teleoperation shows that the system can overcome the problems currently encountered in Internet-based teleoperation and can play a role in a future Internet that supports IP QoS. The paper presents a design prototype of the system and a method for implementing it.
Abstract:
Internet-based robot teleoperation systems provide a good platform and opportunity for research at the intersection of networked control systems (NCS), multimedia communication, and related disciplines. The concept of quality of service (QoS) originated in the multimedia and telecommunication fields, and the literature shows that research and applications combining NCS and QoS are still rare. This thesis analyzes the characteristics of Internet-based robot teleoperation as a field spanning NCS, multimedia applications over the Internet, and real-time systems, and studies in depth the synchronization and coordination problems of such systems and the architecture of QoS-aware networked robot teleoperation systems.

To address the shortcomings of existing Internet-based teleoperation systems, such as the lack of adaptation to available network bandwidth and the lack of coordination among multiple data streams, the thesis proposes AeQTA, an end-to-end QoS-adaptive architecture for networked robot teleoperation. Since Internet-wide QoS deployment has not yet fully taken off, the goal of AeQTA is to move QoS methods and policies to the end systems as far as possible: the end systems expose QoS configuration interfaces and enforce QoS-driven control and management policies, reconciling maximum network efficiency, the best achievable application performance, and a reasonable allocation of resources among traffic flows.

The coordination and synchronization problems of Internet-based teleoperation are analyzed from several angles: clock synchronization, rate control, congestion control, multi-sensor information synchronization, and end-to-end scheduling. To quantify the synchronization tolerance of an NCS, the concept of synchronization distance in multi-sensor NCS feedback is proposed and defined. Then, following the basic idea of formula-based, TCP-friendly rate control and combining it with a presentation-synchronization method based on a primary media stream, regulated on two scales (application-level QoS and network QoS), the thesis unifies multi-sensor information synchronization and rate control into a rate-control method called TTFRC, which improves the real-time behavior and TCP-friendliness of the system.

To provide a realistic experimental environment for Internet-based teleoperation research and a platform for validating the related policies and algorithms, we built an open, flexible, portable, tailorable, and low-cost MOMR prototype system. The prototype has already contributed substantially to in-depth theoretical study of, and practical experience with, Internet-based robot teleoperation. On this basis, in a collaboration between the Shenyang Institute of Automation of the Chinese Academy of Sciences and The Chinese University of Hong Kong, MOMR remote collaboration among Shenyang, Hong Kong, and Michigan was carried out over the Internet in January 2002. Striving to combine control engineering, computer network engineering, and other disciplines is the direction of this thesis.
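TTFRC builds on formula-based, TCP-friendly rate control. The abstract does not reproduce the formula, but the standard TCP-friendly (TFRC) throughput equation that this line of work relies on is evaluated below as an illustration; TTFRC's coupling of rate control with multi-sensor synchronization is not modeled, and the function name and example numbers are assumptions.

    from math import sqrt

    def tfrc_rate(s, rtt, p, b=1, t_rto=None):
        """Standard TCP-friendly throughput equation, returning bytes per second.
        s: packet size (bytes); rtt: round-trip time (s); p: loss event rate (> 0);
        b: packets acknowledged per ACK; t_rto: retransmit timeout, roughly 4*rtt."""
        if t_rto is None:
            t_rto = 4 * rtt
        denom = (rtt * sqrt(2 * b * p / 3)
                 + t_rto * 3 * sqrt(3 * b * p / 8) * p * (1 + 32 * p ** 2))
        return s / denom

    # Example: 1000-byte packets, 100 ms RTT, 1% loss -> allowed sending rate.
    print(round(tfrc_rate(1000, 0.1, 0.01)))     # bytes per second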
Abstract:
Current research on Internet-based distributed systems emphasizes the scalability of overlay topologies for efficient search and retrieval of data items, as well as routing amongst peers. However, most existing approaches fail to address the transport of data across these logical networks in accordance with quality of service (QoS) constraints. Consequently, this paper investigates the use of scalable overlay topologies for routing real-time media streams between publishers and potentially many thousands of subscribers. Specifically, we analyze the costs of using k-ary n-cubes for QoS-constrained routing. Given a number of nodes in a distributed system, we calculate the optimal k-ary n-cube structure for minimizing the average distance between any pair of nodes. Using this structure, we describe a greedy algorithm that selects paths between nodes in accordance with the real-time delays along physical links. We show this method improves the routing latencies by as much as 67%, compared to approaches that do not consider physical link costs. We are in the process of developing a method for adaptive node placement in the overlay topology, based upon the locations of publishers, subscribers, physical link costs and per-subscriber QoS constraints. One such method for repositioning nodes in logical space is discussed, to improve the likelihood of meeting service requirements on data routed between publishers and subscribers. Future work will evaluate the benefits of such techniques more thoroughly.
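As a rough sketch of the calculation described above (choosing, for a given node count, the k-ary n-cube that minimizes the average distance between nodes), the fragment below enumerates (k, n) pairs and compares their expected hop distances, using the fact that distance in a k-ary n-cube with wraparound links decomposes per dimension into ring distances. The paper's exact cost model and tie-breaking may differ, and the function names and bounds are assumptions.

    def ring_avg(k):
        """Average wraparound distance between two uniformly chosen positions
        (possibly equal) on a ring of k nodes."""
        return sum(min(d, k - d) for d in range(k)) / k

    def best_kary_ncube(n_nodes, k_max=64, n_max=32):
        """Return (average distance, k, n) for the k-ary n-cube with k**n >= n_nodes
        whose expected inter-node hop distance is smallest."""
        best = None
        for k in range(2, k_max + 1):
            for n in range(1, n_max + 1):
                if k ** n >= n_nodes:
                    avg = n * ring_avg(k)        # distance adds up across dimensions
                    if best is None or avg < best[0]:
                        best = (avg, k, n)
                    break                        # more dimensions only add distance
        return best

    avg, k, n = best_kary_ncube(4096)
    print(f"{k}-ary {n}-cube, average distance ~ {avg:.2f} hops")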
Abstract:
Quality of Service (QoS) guarantees are required by an increasing number of applications to ensure a minimal level of fidelity in the delivery of application data units through the network. Application-level QoS does not necessarily follow from any transport-level QoS guarantees regarding the delivery of the individual cells (e.g. ATM cells) which comprise the application's data units. The distinction between application-level and transport-level QoS guarantees is due primarily to the fragmentation that occurs when transmitting large application data units (e.g. IP packets, or video frames) using much smaller network cells, whereby the partial delivery of a data unit is useless and the bandwidth spent to partially transmit it is wasted. The data units transmitted by an application may vary in size while being constant in rate, which results in a variable bit rate (VBR) data flow that requires QoS guarantees. Statistical multiplexing is inadequate, because no guarantees can be made and no firewall property exists between different data flows. In this paper, we present a novel resource management paradigm for the maintenance of application-level QoS for VBR flows. Our paradigm is based on Statistical Rate Monotonic Scheduling (SRMS), in which (1) each application generates its variable-size data units at a fixed rate, (2) the partial delivery of data units is of no value to the application, and (3) the QoS guarantee extended to the application is the probability that an arbitrary data unit will be successfully transmitted through the network to/from the application.
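As a toy illustration of the QoS guarantee named in point (3), the probability that an arbitrary data unit is transmitted completely, the sketch below estimates that probability for a VBR source whose variable-size data units must fit within a fixed per-period cell allocation. It is not SRMS itself: the admission and scheduling rules of SRMS are not reproduced, and the sizes and budgets are assumptions.

    import random

    def delivery_probability(unit_sizes_cells, budget_cells):
        """Fraction of data units that fit entirely within the per-period
        allocation; partially transmitted units count as lost."""
        ok = sum(1 for s in unit_sizes_cells if s <= budget_cells)
        return ok / len(unit_sizes_cells)

    random.seed(0)
    sizes = [random.randint(20, 60) for _ in range(10_000)]   # cells per data unit
    for budget in (40, 50, 60):
        print(budget, round(delivery_probability(sizes, budget), 3))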
Abstract:
To provide real-time service or engineer constraint-based paths, networks require the underlying routing algorithm to be able to find low-cost paths that satisfy given Quality-of-Service (QoS) constraints. However, the problem of constrained shortest (least-cost) path routing is known to be NP-hard, and a number of heuristics have been proposed to find near-optimal solutions. These heuristics either impose relationships among the link metrics to reduce the complexity of the problem, which may limit their general applicability, or are too costly in execution time to be applicable to large networks. In this paper, we focus on solving the delay-constrained minimum-cost path problem and present a fast algorithm to find a near-optimal solution. This algorithm, called DCCR (for Delay-Cost-Constrained Routing), is a variant of the k-shortest-path algorithm. DCCR uses a new adaptive path weight function together with an additional constraint imposed on the path cost to restrict the search space, so it can return a near-optimal solution in a very short time. Furthermore, we use the method proposed by Blokh and Gutin to further reduce the search space by using a tighter bound on path cost. This makes our algorithm more accurate and even faster. We call this improved algorithm SSR+DCCR (for Search Space Reduction + DCCR). Extensive simulations confirm that SSR+DCCR performs very well compared to the optimal but very expensive solution.
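DCCR's adaptive weight function and k-shortest-path search are not given in the abstract. As a stand-in, the sketch below solves the same delay-constrained least-cost path problem exactly by dynamic programming over an integer delay budget; this baseline is pseudo-polynomial and practical only for small graphs, which is precisely why heuristics such as DCCR exist. The function name, the positive-integer-delay assumption, and the example graph are assumptions.

    import math

    def dclc(n, edges, src, dst, max_delay):
        """Least-cost src->dst path whose total delay is <= max_delay.
        edges: list of (u, v, cost, delay) with positive integer delays.
        Returns (cost, path) or None. Runs in O(|E| * max_delay)."""
        INF = math.inf
        best = [[INF] * (max_delay + 1) for _ in range(n)]
        pred = [[None] * (max_delay + 1) for _ in range(n)]
        for t in range(max_delay + 1):
            best[src][t] = 0.0
        for t in range(1, max_delay + 1):
            for u, v, cost, delay in edges:
                if delay <= t and best[u][t - delay] + cost < best[v][t]:
                    best[v][t] = best[u][t - delay] + cost
                    pred[v][t] = (u, t - delay)
        if math.isinf(best[dst][max_delay]):
            return None
        path, node, t = [dst], dst, max_delay
        while pred[node][t] is not None:         # walk predecessors back to src
            node, t = pred[node][t]
            path.append(node)
        return best[dst][max_delay], path[::-1]

    edges = [(0, 1, 1, 4), (0, 2, 5, 1), (2, 1, 1, 1), (1, 3, 1, 4), (2, 3, 8, 1)]
    print(dclc(4, edges, 0, 3, 6))               # cheapest path with delay <= 6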
Abstract:
In this position paper, we review basic control strategies that machines acting as "traffic controllers" could deploy in order to improve the management of Internet services. Such traffic controllers are likely to spur the widespread emergence of advanced applications, which have (so far) been hindered by the inability of the networking infrastructure to deliver on the promise of Quality-of-Service (QoS).
Abstract:
Research on the construction of logical overlay networks has gained significance in recent times. This is partly due to work on peer-to-peer (P2P) systems for locating and retrieving distributed data objects, and also scalable content distribution using end-system multicast techniques. However, there are emerging applications that require the real-time transport of data from various sources to potentially many thousands of subscribers, each having their own quality-of-service (QoS) constraints. This paper primarily focuses on the properties of two popular topologies found in interconnection networks, namely k-ary n-cubes and de Bruijn graphs. The regular structure of these graph topologies makes them easier to analyze and determine possible routes for real-time data than complete or irregular graphs. We show how these overlay topologies compare in their ability to deliver data according to the QoS constraints of many subscribers, each receiving data from specific publishing hosts. Comparisons are drawn on the ability of each topology to route data in the presence of dynamic system effects, due to end-hosts joining and departing the system. Finally, experimental results show the service guarantees and physical link stress resulting from efficient multicast trees constructed over both kinds of overlay networks.
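To make the two overlay topologies concrete, the sketch below generates the neighbour set of a node in a k-ary n-cube (a step of plus or minus one, modulo k, in one coordinate) and the successor set of a node in a de Bruijn graph over k symbols of length n (shift the label and append a symbol), then compares degree and diameter. Routing, per-subscriber QoS constraints, and churn are not modeled, and the function names are assumptions.

    def kary_ncube_neighbours(label, k):
        """label: tuple of n digits in [0, k). Returns the 2n neighbours
        (n neighbours when k == 2, since +1 and -1 coincide)."""
        out = set()
        for i, d in enumerate(label):
            for step in (1, -1):
                nb = list(label)
                nb[i] = (d + step) % k
                out.add(tuple(nb))
        out.discard(label)
        return out

    def de_bruijn_successors(label, k):
        """Out-neighbours in B(k, n): drop the first digit, append any digit."""
        return {label[1:] + (a,) for a in range(k)}

    n, k = 3, 4                                    # 64 nodes in both graphs
    node = (0, 1, 2)
    print(sorted(kary_ncube_neighbours(node, k)))  # degree 2n = 6
    print(sorted(de_bruijn_successors(node, k)))   # out-degree k = 4
    print("diameters:", n * (k // 2), "vs", n)     # k-ary n-cube vs de Bruijn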
Abstract:
Overlay networks have become popular in recent times for content distribution and end-system multicasting of media streams. In the latter case, the motivation is based on the lack of widespread deployment of IP multicast and the ability to perform end-host processing. However, constructing routes between various end-hosts, so that data can be streamed from content publishers to many thousands of subscribers, each having their own QoS constraints, is still a challenging problem. First, any routes between end-hosts using trees built on top of overlay networks can increase stress on the underlying physical network, due to multiple instances of the same data traversing a given physical link. Second, because overlay routes between end-hosts may traverse physical network links more than once, they increase the end-to-end latency compared to IP-level routing. Third, algorithms for constructing efficient, large-scale trees that reduce link stress and latency are typically more complex. This paper therefore compares various methods of constructing multicast trees between end-systems that vary in their implementation costs and in their ability to support per-subscriber QoS constraints. We describe several algorithms that make trade-offs between algorithmic complexity, physical link stress and latency. While no algorithm is best in all three cases, we show how it is possible to efficiently build trees for several thousand subscribers with latencies within a factor of two of the optimal, and link stresses comparable to, or better than, existing technologies.
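None of the paper's tree-construction algorithms are reproduced here. As a generic sketch of the trade-off it discusses, the fragment below greedily attaches each subscriber to the already-attached node that minimizes its end-to-end latency from the publisher, subject to a per-node fan-out bound that acts as a crude proxy for limiting physical link stress; the function, the latency table, and the fan-out parameter are assumptions.

    def build_tree(publisher, subscribers, latency, max_fanout=4):
        """Greedy end-system multicast tree. latency[a][b] is the measured
        unicast latency between end-hosts a and b."""
        dist = {publisher: 0.0}              # end-to-end latency from the publisher
        children = {publisher: []}
        parent = {}
        # Attach closer subscribers first so they can relay to farther ones.
        for s in sorted(subscribers, key=lambda s: latency[publisher][s]):
            candidates = [p for p in dist if len(children[p]) < max_fanout]
            best = min(candidates, key=lambda p: dist[p] + latency[p][s])
            parent[s] = best
            children[best].append(s)
            children[s] = []
            dist[s] = dist[best] + latency[best][s]
        return parent, dist

    latency = {
        "pub": {"a": 10, "b": 12, "c": 40},
        "a":   {"pub": 10, "b": 3,  "c": 25},
        "b":   {"pub": 12, "a": 3,  "c": 30},
        "c":   {"pub": 40, "a": 25, "b": 30},
    }
    parent, dist = build_tree("pub", ["a", "b", "c"], latency, max_fanout=1)
    print(parent)   # with fan-out 1 the tree degenerates into a relay chain
    print(dist)     # each subscriber's end-to-end latency from the publisher

A tighter fan-out bound lowers the stress any single host's access link must carry, at the price of longer relay chains and higher end-to-end latency, which is the kind of complexity, link-stress, and latency trade-off the paper quantifies.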
Abstract:
Closing feedback loops over an IEEE 802.11b ad hoc wireless communication network incurs many challenges: sensitivity to varying channel conditions and lower physical transmission rates tend to limit the bandwidth of the communication channel. Given that bandwidth usage and control performance are linked, a method of adapting the sampling interval based on an a priori static sampling policy has been proposed, and, more significantly, stability in the mean-square sense is assured using discrete-time Markov jump linear system theory. Practical issues, including current limitations of the 802.11b protocol, the sampling policy, and stability, are highlighted. Simulation results on a cart-mounted inverted pendulum show that closed-loop stability can be improved using sample-rate adaptation and that the control design criteria can be met in the presence of channel errors and severe channel contention.
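The abstract appeals to mean-square stability of a discrete-time Markov jump linear system. A standard numerical test for that property checks the spectral radius of a matrix assembled from the per-mode closed-loop dynamics and the Markov chain's transition probabilities; the sketch below applies that test to two made-up modes (a nominal channel state and a degraded one). The matrices and probabilities are assumptions and are not the paper's pendulum model.

    import numpy as np

    def ms_stable(A_modes, P):
        """Mean-square stability test for x[k+1] = A[r_k] x[k], where r_k is a
        Markov chain with transition matrix P (P[i, j] = Pr(r_{k+1}=j | r_k=i)).
        The system is mean-square stable iff the spectral radius of
        (P^T kron I) * blockdiag(A_i kron A_i) is strictly below one."""
        n = A_modes[0].shape[0]
        m = n * n
        D = np.zeros((len(A_modes) * m, len(A_modes) * m))
        for i, A in enumerate(A_modes):
            D[i * m:(i + 1) * m, i * m:(i + 1) * m] = np.kron(A, A)
        Lam = np.kron(P.T, np.eye(m)) @ D
        return bool(np.max(np.abs(np.linalg.eigvals(Lam))) < 1.0)

    A_good = np.array([[0.6, 0.1], [0.0, 0.5]])   # closed loop, nominal sampling
    A_bad  = np.array([[1.1, 0.2], [0.0, 0.9]])   # closed loop, degraded channel
    P = np.array([[0.9, 0.1],                     # channel mostly stays nominal
                  [0.5, 0.5]])
    print(ms_stable([A_good, A_bad], P))          # True

The test can come out stable even though the degraded mode is unstable on its own, which is the sense in which an a priori sampling policy can preserve closed-loop stability despite channel errors and contention.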
Abstract:
The performance of a new pointer-based medium-access control protocol that was designed to significantly improve the energy efficiency of user terminals in quality-of-service-enabled wireless local area networks was analysed. The new protocol, pointer-controlled slot allocation and resynchronisation protocol (PCSARe), is based on the hybrid coordination function-controlled channel access mode of the IEEE 802.11e standard. PCSARe reduces energy consumption by removing the need for power-saving stations to remain awake for channel listening. Discrete event network simulations were performed to compare the performance of PCSARe with the non-automatic power save delivery (APSD) and scheduled-APSD power-saving modes of IEEE 802.11e. The simulation results show a demonstrable improvement in energy efficiency without significant reduction in performance when using PCSARe. For a wireless network consisting of an access point and eight stations in power-saving mode, the energy saving was up to 39% when using PCSARe instead of IEEE 802.11e non-APSD. The results also show that PCSARe offers significantly reduced uplink access delay over IEEE 802.11e non-APSD, while modestly improving the uplink throughput. Furthermore, although both had the same energy consumption, PCSARe gave a 25% reduction in downlink access delay compared with IEEE 802.11e S-APSD.