452 results for QoS
Abstract:
As computing and communication capabilities grow, computing devices keep shrinking, and sensors, mobile devices, and wireless network equipment in ever new form factors are becoming commonplace. This has greatly promoted the formation and development of the pervasive computing paradigm built on wireless, mobile, and embedded devices. Service discovery in pervasive environments lets users share and access services seamlessly, anytime and anywhere, from a variety of mobile devices. The heterogeneity and dynamism of networks in pervasive environments, together with the diversity and heterogeneity of the services themselves, pose new challenges for service discovery. Academia and industry have explored service discovery extensively; existing protocols and systems such as SLP, UPnP, and INS generally describe services syntactically and focus on functional requirements, but matching requests against service descriptions by keyword often yields poor results in pervasive environments. To address these new requirements, this thesis defines SML, a lightweight XML-based service semantic modeling language. SML provides rich data types and describes domain entities through templates and attributes, so it can express rich semantic knowledge. Building on the lightweight inference engine Jess, the thesis automatically converts the service templates and semantic knowledge of each application domain, defined in SML, into Jess rules and facts. It specifies an XPath-like service query language and, on top of exact matching, proposes an approximate matching strategy with a set of approximation rules. Since different users weight service attributes differently, a preference-based matching strategy is also proposed. Dynamic context is an important factor in matching: Jess rules match services against the user's context so that services suited to the user's current situation are selected. SML also supports describing the QoS of services; accordingly, the thesis proposes a Pareto-optimality-based selection strategy that chooses Pareto-optimal services according to their QoS and their degree of match with the request. All of this work has been implemented in the service discovery system Service CatalogNet Extended.
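The Pareto-optimal selection step lends itself to a small illustration. The sketch below is a hypothetical stand-in rather than the thesis's implementation: the service names and the two criteria (a match degree and an aggregate QoS score, both normalized so higher is better) are invented, and only the generic non-dominance test is shown.

```python
# Illustrative sketch of Pareto-optimal service selection over QoS attributes;
# service names and criteria values are hypothetical, not from the thesis.

def dominates(a, b):
    """a dominates b if a is >= b on every criterion and > on at least one
    (all criteria normalized so that higher is better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_optimal(services):
    """Return the services whose criteria vectors no other service dominates."""
    return [s for s in services
            if not any(dominates(t["criteria"], s["criteria"])
                       for t in services if t is not s)]

candidates = [
    {"name": "printer-A", "criteria": (0.9, 0.7)},  # (match degree, QoS score)
    {"name": "printer-B", "criteria": (0.8, 0.9)},
    {"name": "printer-C", "criteria": (0.7, 0.6)},  # dominated by A and B
]
print([s["name"] for s in pareto_optimal(candidates)])  # -> ['printer-A', 'printer-B']
```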
Abstract:
With the wide application of information processing in communications, finance, industrial production, and other fields, data is no longer confined to traditional forms such as files and tables. Large volumes of continuous, changing streaming data arise in more and more modern applications, for example military command, traffic control, sensor data processing, network monitoring, and financial data analysis. In these applications data arrives continuously as streams, and the system must process it continuously and promptly. Although existing data stream applications collect vast amounts of stream data, the events users care about are usually the anomalous ones, because anomalies tend to hide the information most worth attending to. To dispatch anomalous events on individual streams to a composite event detection module promptly and accurately, a QoS-adaptive event notification model is an ideal choice. Composite events answer the complex needs of real applications; a composite event is typically built from atomic events combined through logical connectives and various operators. Furthermore, to respond in time, predefined actions should be triggered, for example raising an alarm or photographing the site of the anomaly, which requires the detection system to be proactive; ECA rules, the cornerstone of the "detect and respond" model, satisfy this requirement. Most current research concentrates on offline mining and analysis of normal events, while atomic anomaly detection over real-time streams, QoS-adaptive event notification, and composite event detection over out-of-order streams have received little attention, so this thesis focuses on these three topics. First, the thesis systematically analyzes the requirements and characteristics of data stream applications and proposes HAPS, a framework for anomalous event detection over real-time data streams. HAPS has four layers: atomic anomaly detection over data streams, a QoS-adaptive real-time event notification service, composite event detection, and action execution. Next, the thesis gives a comprehensive, detailed survey of existing work on these four aspects and analyzes its shortcomings; it then studies each aspect in depth and obtains several novel results. Finally, a prototype of the framework is implemented and evaluated with extensive synthetic and real data streams, and the experiments show that the goals were met in all four aspects. The main contributions are:
1. An incremental anomaly detection algorithm for data streams based on the local correlation integral (incLOCI), with time complexity of only O(N log N); it is proven that inserting a new event or deleting an expired one affects only a bounded number of its neighbors.
2. An "approximate" top-k real-time event notification model that uses the relevance between event content and subscriptions as the matching criterion and adaptively selects the approximately top-k relevant data within a deadline.
3. A composite event detection model for streams arriving in arbitrary order that supports three typical event contexts as well as aggregation functions; the data structures and algorithms it uses are designed and implemented.
4. A prototype system for anomalous event detection over real-time streams: it detects atomic anomalies in real-time streams, dispatches them to the composite event layer through the QoS-adaptive real-time notification service to form composite events, and finally reacts to anomalies in time through RECA rules.
Anomalous event detection over real-time data streams has high practical value and broad application prospects. The results of this thesis provide a solid basis for further work on atomic anomaly detection and composite event detection over real-time streams.
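As a rough illustration of the local-density idea behind incLOCI, the much-simplified sketch below flags a point whose neighborhood count at a small radius falls well below the average count over its sampling neighborhood, in the spirit of LOCI's deviation test. The incremental updates and the O(N log N) bound of incLOCI itself are not reproduced, and all radii, thresholds, and data are made up.

```python
# Much-simplified, hypothetical sketch of local-density anomaly flagging in
# the spirit of LOCI; not the thesis's incremental incLOCI algorithm.
import math

def count_within(points, j, radius):
    """Number of points within `radius` of points[j] (itself included)."""
    return sum(1 for p in points if math.dist(p, points[j]) <= radius)

def is_anomalous(points, i, r, alpha=0.1, threshold=0.5):
    """Flag points[i] when its count at radius alpha*r falls well below the
    average such count over its sampling neighborhood of radius r."""
    sampling = [j for j, p in enumerate(points) if math.dist(p, points[i]) <= r]
    own = count_within(points, i, alpha * r)
    avg = sum(count_within(points, j, alpha * r) for j in sampling) / len(sampling)
    return own < threshold * avg

stream = [(0.0, 0.0), (0.1, 0.1), (0.2, 0.0), (0.1, -0.1), (5.0, 5.0)]
print([i for i in range(len(stream)) if is_anomalous(stream, i, r=10.0)])  # -> [4]
```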
Abstract:
Integrated automation is the core of real-time production management and optimization in process enterprises, and its foundation is the effective integration of production process data. Process data mainly comprises the real-time and historical data, events, and messages involved in production operation and management. Unlike traditional enterprise integration, process data comes with different time periods, different conceptual extensions, constraints from production process knowledge, and real-time requirements. In addition, the complexity of the integration environment, such as noisy sensor readings and asynchronous sampling, makes integration harder still. Against this background, this thesis studies model-driven process data integration, focusing on the integration model, a model-driven integration framework, a QoS-adaptive real-time publish/subscribe mechanism, and a feedback-based multi-sensor data fusion algorithm, with the goal of building a tool that supports process data integration at different scales. The thesis first analyzes the characteristics of process data integration and the shortcomings of traditional enterprise data integration modeling, and proposes a model-driven integration method that uses domain ontologies to build the integration model from three perspectives: temporal objects, the integration process, and semantic integration. All three models are formally described, and mapping rules are defined to map relations between application ontologies. Building on an analysis of distributed event notification architectures, a publish/subscribe-based integration framework is proposed in which event-condition-action (ECA) rules support the model-driven method. To simplify modeling, a visual ECA rule notation is specified and a corresponding compiler implemented. Because traditional distributed event processing cannot meet the real-time and QoS requirements of process data integration, the thesis extends it with QoS guarantee policies and ECA rules with deadlines (RECA), and proposes an adaptive publish/subscribe mechanism that dynamically adjusts system parameters to provide multiple levels of service quality simultaneously. Experiments confirm that under concurrent service requests this mechanism improves the handling and predictability of responses across QoS levels. Further, to deal with noisy and asynchronously sampled sensor data, the thesis gives a mathematical formulation of multi-sensor data fusion, compares representative fusion algorithms, and proposes a fusion algorithm based on feedback control, together with an implementation and experimental validation. Finally, building on these results, the thesis designs and implements an adaptive real-time publish/subscribe service system for production process data integration in large, distributed process enterprises; as a core component of a manufacturing execution system (SMES), it has been deployed successfully in several petrochemical enterprises.
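The feedback idea behind the fusion algorithm can be sketched as follows: fuse simultaneous readings by weighted average, then feed the fused value back to discount sensors that strayed from it. This is a hypothetical illustration of feedback-weighted fusion, not the thesis's algorithm; the sensor model and all constants are invented.

```python
# Hypothetical feedback-weighted multi-sensor fusion sketch (not the thesis's
# algorithm): sensor weights adapt to deviations from the previous fusion.
import random

def fuse(readings, weights):
    """Weighted average of simultaneous sensor readings."""
    return sum(w * x for w, x in zip(weights, readings)) / sum(weights)

def update_weights(readings, fused, eps=1e-3):
    """Feedback step: weight each sensor inversely to its squared deviation
    from the previous fused value, discounting noisy or biased sensors."""
    raw = [1.0 / ((x - fused) ** 2 + eps) for x in readings]
    total = sum(raw)
    return [w / total for w in raw]

random.seed(0)
true_value, weights = 20.0, [1 / 3, 1 / 3, 1 / 3]
for t in range(5):
    readings = [true_value + random.gauss(0, 0.1),  # two well-behaved sensors
                true_value + random.gauss(0, 0.1),
                true_value + random.gauss(3, 0.5)]  # one biased, noisy sensor
    fused = fuse(readings, weights)
    weights = update_weights(readings, fused)
    print(f"t={t}  fused={fused:.2f}  weights={[round(w, 2) for w in weights]}")
```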
Abstract:
The fusion of information technology with robot teleoperation has produced network-based robot teleoperation, the latest achievement of telescience and a new direction for teleoperation technology; it is also instructive for research on closed-loop networked control systems. Network-based robot teleoperation is a comprehensive network application: at the control level it falls within closed-loop networked control, its architecture reflects distributed control, and its teleoperation strategy must consider network performance and controller design together. This thesis studies the problems faced when teleoperating robots over a network, including characterizing network delay, relating control performance to network state, and controlling stability, and it explores the general theoretical and technical questions these raise. Five significant and original pieces of work are carried out. (1) Architecture of network-based robot teleoperation systems: to meet requirements of openness, extensibility, and independence, the thesis proposes structuring the system in a client/server model and studies its unit architecture and functional modules. (2) Measurement, analysis, and prediction of network transmission delay: a series of delay measurements is used to analyze the delay characteristics of the Chinese Internet and to compare them with related work at home and abroad; two delay prediction methods are proposed, validated, and analyzed. (3) Application of network QoS to teleoperation: an adaptive teleoperation strategy is proposed that uses QoS to tie controller design and tuning to the real-time state of the network; QoS description, mapping, guarantees, and application in teleoperation are studied. (4) Design and implementation of an Internet-based robot teleoperation system: an experimental system is built in which a force-feedback joystick controls a remote omnidirectional mobile robot in real time. (5) Experimental work: teleoperation experiments over the Internet are conducted with this system, covering information exchange and position feedback control during teleoperation. The five pieces of work are connected as follows: the client/server architecture is the guiding principle for building the system; delay measurement and prediction reveal the laws governing network data transmission; QoS ties stability analysis to the real-time state of the network; and the simulations and mobile robot teleoperation experiments show that the proposed architecture and methods, and the controller software implemented, are effective and usable.
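The abstract does not spell out its two delay-prediction methods, so as a stand-in the sketch below uses the classic exponentially weighted moving average (familiar from TCP's smoothed RTT estimator) to predict the next delay from past measurements; the sample delays are invented.

```python
# EWMA delay prediction, a common baseline for network delay estimation;
# shown only as a hypothetical stand-in for the thesis's two methods.

def ewma_predict(delays, alpha=0.125):
    """Return the running EWMA prediction after each observed delay (ms)."""
    prediction = delays[0]
    predictions = []
    for d in delays:
        prediction = (1 - alpha) * prediction + alpha * d
        predictions.append(prediction)
    return predictions

measured = [120, 130, 115, 300, 140, 135]  # round-trip delays in ms, with a spike
print([round(p) for p in ewma_predict(measured)])
```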
Abstract:
The best-effort nature of the Internet poses a significant obstacle to the deployment of many applications that require guaranteed bandwidth. In this paper, we present a novel approach that enables two edge/border routers, which we call Internet Traffic Managers (ITMs), to use an adaptive number of TCP connections to set up a tunnel of desirable bandwidth between them. The number of TCP connections that comprise this tunnel is elastic in the sense that it increases/decreases in tandem with competing cross traffic to maintain a target bandwidth. An origin ITM would then schedule incoming packets from an application requiring guaranteed bandwidth over that elastic tunnel. Unlike many proposed solutions that aim to deliver soft QoS guarantees, our elastic-tunnel approach does not require any support from core routers (as with IntServ and DiffServ); it is scalable in the sense that core routers do not have to maintain per-flow state (as with IntServ); and it is readily deployable within a single ISP or across multiple ISPs. To evaluate our approach, we develop a flow-level control-theoretic model to study the transient behavior of established elastic TCP-based tunnels. The model captures the effect of cross-traffic connections on our bandwidth allocation policies. Through extensive simulations, we confirm the effectiveness of our approach in providing soft bandwidth guarantees. We also outline our kernel-level ITM prototype implementation.
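At the flow level, the elastic-tunnel controller can be caricatured with a fair-share model: if C cross-traffic flows share a bottleneck of capacity B, each of the tunnel's N connections gets roughly B/(N+C), so N can be solved for to hold a target bandwidth. This toy calculation, with made-up numbers, only gestures at the paper's control-theoretic model and is not the ITM implementation.

```python
# Flow-level caricature of the elastic tunnel: grow/shrink the number of TCP
# connections N so that the tunnel's fair share meets a target bandwidth.
import math

def required_connections(target_bw, bottleneck_bw, cross_flows):
    """Smallest N with N * B/(N + C) >= target, i.e. N >= T*C/(B - T)."""
    if target_bw >= bottleneck_bw:
        raise ValueError("target exceeds bottleneck capacity")
    return max(1, math.ceil(target_bw * cross_flows / (bottleneck_bw - target_bw)))

bottleneck, target = 100.0, 30.0            # Mbps (invented numbers)
for cross in [10, 40, 80]:                  # competing cross-traffic flows
    n = required_connections(target, bottleneck, cross)
    share = n * bottleneck / (n + cross)
    print(f"cross={cross:2d} -> open {n:2d} connections, tunnel gets {share:.1f} Mbps")
```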
Abstract:
The Science of Network Service Composition has clearly emerged as one of the grand themes driving many of our research questions in the networking field today [NeXtworking 2003]. This driving force stems from the rise of sophisticated applications and new networking paradigms. By "service composition" we mean that the performance and correctness properties local to the various constituent components of a service can be readily composed into global (end-to-end) properties without re-analyzing any of the constituent components in isolation, or as part of the whole composite service. The set of laws that would govern such composition is what will constitute that new science of composition. The combined heterogeneity and dynamic open nature of network systems makes composition quite challenging, and thus programming network services has been largely inaccessible to the average user. We identify (and outline) a research agenda in which we aim to develop a specification language that is expressive enough to describe different components of a network service, and that will include type hierarchies inspired by type systems in general programming languages that enable the safe composition of software components. We envision this new science of composition to be built upon several theories (e.g., control theory, game theory, network calculus, percolation theory, economics, queuing theory). In essence, different theories may provide different languages by which certain properties of system components can be expressed and composed into larger systems. We then seek to lift these lower-level specifications to a higher level by abstracting away details that are irrelevant for safe composition at the higher level, thus making theories scalable and useful to the average user. In this paper we focus on services built upon an overlay management architecture, and we use control theory and QoS theory as example theories from which we lift up compositional specifications.
Abstract:
High-speed networks, such as ATM networks, are expected to support diverse Quality of Service (QoS) constraints, including real-time QoS guarantees. Real-time QoS is required by many applications such as those that involve voice and video communication. To support such services, routing algorithms that allow applications to reserve the needed bandwidth over a Virtual Circuit (VC) have been proposed. Commonly, these bandwidth-reservation algorithms assign VCs to routes using the least-loaded concept, and thus result in balancing the load over the set of all candidate routes. In this paper, we show that for such reservation-based protocols, which allow for the exclusive use of a preset fraction of a resource's bandwidth for an extended period of time, load balancing is not desirable as it results in resource fragmentation, which adversely affects the likelihood of accepting new reservations. In particular, we show that load-balancing VC routing algorithms are not appropriate when the main objective of the routing protocol is to increase the probability of finding routes that satisfy incoming VC requests, as opposed to equalizing the bandwidth utilization along the various routes. We present an on-line VC routing scheme that is based on the concept of "load profiling", which allows a distribution of "available" bandwidth across a set of candidate routes to match the characteristics of incoming VC QoS requests. We show the effectiveness of our load-profiling approach when compared to traditional load-balancing and load-packing VC routing schemes.
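A crude way to contrast the two policies is best-fit versus worst-fit placement: least-loaded routing spreads reservations across routes and fragments bandwidth, while a best-fit rule preserves large contiguous capacity for future high-bandwidth VCs. The paper's actual load-profiling scheme, which matches the available-bandwidth distribution to the request mix, is richer than this sketch; the route names and demands below are invented.

```python
# Toy contrast: least-loaded ("worst-fit") vs a best-fit stand-in for load
# profiling, over two candidate routes with invented capacities/demands.

def least_loaded(routes, demand):
    """Pick the feasible route with the most available bandwidth."""
    feasible = [r for r in routes if routes[r] >= demand]
    return max(feasible, key=routes.get) if feasible else None

def best_fit(routes, demand):
    """Pick the feasible route with the least leftover bandwidth."""
    feasible = [r for r in routes if routes[r] >= demand]
    return min(feasible, key=routes.get) if feasible else None

def admit_all(policy, capacities, demands):
    routes = dict(capacities)
    accepted = 0
    for d in demands:
        r = policy(routes, d)
        if r is not None:
            routes[r] -= d
            accepted += 1
    return accepted

capacities = {"vp1": 10, "vp2": 10}
demands = [4, 4, 8]                 # two small VCs, then one large VC
for policy in (least_loaded, best_fit):
    print(policy.__name__, "accepts", admit_all(policy, capacities, demands),
          "of", len(demands))       # least_loaded: 2, best_fit: 3
```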
Abstract:
To support the diverse Quality of Service (QoS) requirements of real-time (e.g. audio/video) applications in integrated services networks, several routing algorithms that allow for the reservation of the needed bandwidth over a Virtual Circuit (VC) established on one of several candidate routes have been proposed. Traditionally, such routing is done using the least-loaded concept, and thus results in balancing the load across the set of candidate routes. In a recent study, we have established the inadequacy of this load balancing practice and proposed the use of load profiling as an alternative. Load profiling techniques allow the distribution of "available" bandwidth across a set of candidate routes to match the characteristics of incoming VC QoS requests. In this paper we thoroughly characterize the performance of VC routing using load profiling and contrast it to routing using load balancing and load packing. We do so both analytically and via extensive simulations of multi-class traffic routing in Virtual Path (VP) based networks. Our findings confirm that for routing guaranteed bandwidth flows in VP networks, load balancing is not desirable as it results in VP bandwidth fragmentation, which adversely affects the likelihood of accepting new VC requests. This fragmentation is more pronounced when the granularity of VC requests is large. Typically, this occurs when a common VC is established to carry the aggregate traffic flow of many high-bandwidth real-time sources. For VP-based networks, our simulation results show that our load-profiling VC routing scheme performs as well as or better than the traditional load-balancing VC routing in terms of revenue under both skewed and uniform workloads. Furthermore, load-profiling routing improves routing fairness by proactively increasing the chances of admitting high-bandwidth connections.
Abstract:
The development and deployment of distributed network-aware applications and services over the Internet require the ability to compile and maintain a model of the underlying network resources with respect to (one or more) characteristic properties of interest. To be manageable, such models must be compact, and must enable a representation of properties along temporal, spatial, and measurement resolution dimensions. In this paper, we propose a general framework for the construction of such metric-induced models using end-to-end measurements. We instantiate our approach using one such property, packet loss rates, and present an analytical framework for the characterization of Internet loss topologies. From the perspective of a server the loss topology is a logical tree rooted at the server with clients at its leaves, in which edges represent lossy paths between a pair of internal network nodes. We show how end-to-end unicast packet probing techniques could be used to (1) infer a loss topology and (2) identify the loss rates of links in an existing loss topology. Correct, efficient inference of loss topology information enables new techniques for aggregate congestion control, QoS admission control, connection scheduling and mirror site selection. We report on simulation, implementation, and Internet deployment results that show the effectiveness of our approach and its robustness in terms of its accuracy and convergence over a wide range of network conditions.
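The shared-link identification can be illustrated with the classic two-leaf estimator from loss tomography: if A is the success rate of the shared path and B1, B2 those of the private paths, then p1 = A·B1, p2 = A·B2, and p12 = A·B1·B2, so A = p1·p2/p12. The simulation below, with invented loss rates standing in for real probes (the paper adapts this idea to unicast packet-pair probing), recovers the shared-link loss rate from end-to-end observations.

```python
# Two-leaf shared-link loss inference: estimate the shared path's success
# rate A from end-to-end delivery statistics, using A = p1*p2/p12.
# Loss rates and probe counts are simulated, not measurements from the paper.
import random
random.seed(1)

A_true, B1_true, B2_true = 0.95, 0.90, 0.80   # per-link success probabilities
n = 200_000
r1 = r2 = r12 = 0
for _ in range(n):
    shared = random.random() < A_true          # outcome on the shared link
    d1 = shared and random.random() < B1_true  # delivery to leaf 1
    d2 = shared and random.random() < B2_true  # delivery to leaf 2
    r1 += d1
    r2 += d2
    r12 += d1 and d2

p1, p2, p12 = r1 / n, r2 / n, r12 / n
A_hat = p1 * p2 / p12                          # shared-path success estimate
print(f"estimated shared-link loss rate: {1 - A_hat:.3f} (true {1 - A_true:.3f})")
```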
Abstract:
The congestion control mechanisms of TCP make it vulnerable in an environment where flows with different congestion-sensitivity compete for scarce resources. With the increasing amount of unresponsive UDP traffic in today's Internet, new mechanisms are needed to enforce fairness in the core of the network. We propose a scalable Diffserv-like architecture, where flows with different characteristics are classified into separate service queues at the routers. Such class-based isolation provides protection so that flows with different characteristics do not negatively impact one another. In this study, we examine different aspects of UDP and TCP interaction and possible gains from segregating UDP and TCP into different classes. We also investigate the utility of further segregating TCP flows into two classes: short flows and long flows. Results are obtained analytically for both Tail-drop and Random Early Drop (RED) routers. Class-based isolation has the following salient features: (1) better fairness, (2) improved predictability for all kinds of flows, (3) lower transmission delay for delay-sensitive flows, and (4) better control over Quality of Service (QoS) of a particular traffic type.
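A minimal sketch of class-based isolation follows, with invented class names and a plain round-robin service discipline rather than the paper's Tail-drop/RED router models: packets are classified into per-class queues with independent tail-drop, so an unresponsive UDP burst fills only its own queue and TCP traffic keeps being served.

```python
# Minimal class-based isolation sketch: per-class queues plus round-robin
# service, so unresponsive UDP cannot starve the TCP classes.
from collections import deque

queues = {"udp": deque(), "tcp_short": deque(), "tcp_long": deque()}

def classify(pkt):
    if pkt["proto"] == "UDP":
        return "udp"
    return "tcp_short" if pkt["flow_bytes"] < 10_000 else "tcp_long"

def enqueue(pkt, limit=100):
    q = queues[classify(pkt)]
    if len(q) < limit:          # per-class tail-drop: UDP overload drops UDP only
        q.append(pkt)

def serve_round_robin(n):
    """Transmit up to n packets, visiting each class queue in turn."""
    sent = []
    while n > 0 and any(queues.values()):
        for q in queues.values():
            if q and n > 0:
                sent.append(q.popleft())
                n -= 1
    return sent

for i in range(300):            # unresponsive UDP burst
    enqueue({"proto": "UDP", "flow_bytes": 0, "id": i})
enqueue({"proto": "TCP", "flow_bytes": 2_000, "id": "web"})      # short TCP flow
enqueue({"proto": "TCP", "flow_bytes": 5_000_000, "id": "ftp"})  # long TCP flow
print([p["id"] for p in serve_round_robin(4)])  # TCP packets are not starved
```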
Abstract:
The objective of unicast routing is to find a path from a source to a destination. Conventional routing has been used mainly to provide connectivity. It lacks the ability to provide any kind of service guarantees and smart usage of network resources. Improving performance is possible by being aware of both traffic characteristics and currently available resources. This paper surveys a range of routing solutions, which can be categorized depending on the degree of the awareness of the algorithm: (1) QoS/Constraint-based routing solutions are aware of traffic requirements of individual connection requests; (2) Traffic-aware routing solutions assume knowledge of the location of communicating ingress-egress pairs and possibly the traffic demands among them; (3) Routing solutions that are both QoS-aware as (1) and traffic-aware as (2); (4) Best-effort solutions are oblivious to both traffic and QoS requirements, but are adaptive only to current resource availability. The best performance can be achieved by having all possible knowledge so that while finding a path for an individual flow, one can make a smart choice among feasible paths to increase the chances of supporting future requests. However, this usually comes at the cost of increased complexity and decreased scalability. In this paper, we discuss such cost-performance tradeoffs by surveying proposed heuristic solutions and hybrid approaches.
Abstract:
The advent of virtualization and cloud computing technologies necessitates the development of effective mechanisms for the estimation and reservation of resources needed by content providers to deliver large numbers of video-on-demand (VOD) streams through the cloud. Unfortunately, capacity planning for the QoS-constrained delivery of a large number of VOD streams is inherently difficult as VBR encoding schemes exhibit significant bandwidth variability. In this paper, we present a novel resource management scheme to make such allocation decisions using a mixture of per-stream reservations and an aggregate reservation, shared across all streams to accommodate peak demands. The shared reservation provides capacity slack that enables statistical multiplexing of peak rates, while assuring analytically bounded frame-drop probabilities, which can be adjusted by trading off buffer space (and consequently delay) and bandwidth. Our two-tiered bandwidth allocation scheme enables the delivery of any set of streams with less bandwidth (or equivalently with higher link utilization) than state-of-the-art deterministic smoothing approaches. The algorithm underlying our proposed framework uses three per-stream parameters and is linear in the number of servers, making it particularly well suited for use in an on-line setting. We present results from extensive trace-driven simulations, which confirm the efficiency of our scheme especially for small buffer sizes and delay bounds, and which underscore the significant realizable bandwidth savings, typically yielding losses that are an order of magnitude or more below our analytically derived bounds.
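The two-tiered idea can be caricatured in a few lines: reserve a base rate per stream (here an arbitrary 80th-percentile rate) and size one shared pool by the worst simultaneous overflow above those bases. The traces and percentile choice below are invented, and the paper's analytical frame-drop bounds and three per-stream parameters are not reproduced.

```python
# Toy two-tiered reservation: per-stream base rates plus one shared slack pool
# for residual peaks; contrast with reserving every stream's peak rate.

def percentile(samples, q):
    """Crude q-quantile: the value at rank floor(q*n) of the sorted samples."""
    s = sorted(samples)
    return s[min(len(s) - 1, int(q * len(s)))]

def two_tier_allocation(traces, q=0.8):
    """Base reservation per stream plus one shared pool for residual peaks."""
    base = {name: percentile(rates, q) for name, rates in traces.items()}
    horizon = len(next(iter(traces.values())))
    pool = max(sum(max(0, traces[name][t] - base[name]) for name in traces)
               for t in range(horizon))
    return base, pool

traces = {   # per-interval bandwidth demand (Mbps) of three VBR streams
    "movie1": [2, 3, 2, 9, 2, 3],
    "movie2": [1, 1, 6, 1, 2, 1],
    "movie3": [4, 4, 4, 4, 8, 4],
}
base, pool = two_tier_allocation(traces)
peak_sum = sum(max(r) for r in traces.values())
print("per-stream reservations:", base)
print("two-tier total:", sum(base.values()) + pool,
      "vs naive peak-rate total:", peak_sum)
```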
Abstract:
In this paper, we present Slack Stealing Job Admission Control (SSJAC)---a methodology for scheduling periodic firm-deadline tasks with variable resource requirements, subject to controllable Quality of Service (QoS) constraints. In a system that uses Rate Monotonic Scheduling, SSJAC augments the slack stealing algorithm of Thuel et al. with an admission control policy to manage the variability in the resource requirements of the periodic tasks. This enables SSJAC to take advantage of the 31% of utilization that RMS cannot use, as well as any utilization unclaimed by jobs that are not admitted into the system. Using SSJAC, each task in the system is assigned a resource utilization threshold that guarantees the minimal acceptable QoS for that task (expressed as an upper bound on the rate of missed deadlines). Job admission control is used to ensure that (1) only those jobs that will complete by their deadlines are admitted, and (2) tasks do not interfere with each other, thus a job can only monopolize the slack in the system, but not the time guaranteed to jobs of other tasks. We have evaluated SSJAC against RMS and Statistical RMS (SRMS). Ignoring overhead issues, SSJAC consistently provides better performance than RMS in overload, and, in certain conditions, better performance than SRMS. In addition, to evaluate optimality of SSJAC in an absolute sense, we have characterized the performance of SSJAC by comparing it to an inefficient, yet optimal scheduler for task sets with harmonic periods.
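A toy version of the admission test: admit a released job only if its demand fits within its task's guaranteed budget plus the slack currently available before its deadline. The slack bookkeeping of the real slack-stealing algorithm is far more involved than this, and the numbers below are invented.

```python
# Toy admission test in the spirit of SSJAC: a job is admitted only if it can
# finish by its deadline without touching time guaranteed to other tasks.

def admit(job_demand, guaranteed_budget, slack_before_deadline):
    """Admit iff the demand fits the task's own budget plus available slack."""
    return job_demand <= guaranteed_budget + slack_before_deadline

slack = 3.0   # ms of slack available before this job's deadline (invented)
for demand in [1.5, 4.0, 6.0]:
    verdict = "admit" if admit(demand, guaranteed_budget=2.0,
                               slack_before_deadline=slack) else "reject"
    print(f"demand={demand} ms -> {verdict}")
```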
Abstract:
In this paper we present Statistical Rate Monotonic Scheduling (SRMS), a generalization of the classical RMS results of Liu and Layland that allows scheduling periodic tasks with highly variable execution times and statistical QoS requirements. Similar to RMS, SRMS has two components: a feasibility test and a scheduling algorithm. The feasibility test for SRMS ensures that using SRMS' scheduling algorithms, it is possible for a given periodic task set to share a given resource (e.g. a processor, communication medium, switching device, etc.) in such a way that such sharing does not result in the violation of any of the periodic tasks' QoS constraints. The SRMS scheduling algorithm incorporates a number of unique features. First, it allows for fixed priority scheduling that keeps the tasks' value (or importance) independent of their periods. Second, it allows for job admission control, which allows the rejection of jobs that are not guaranteed to finish by their deadlines as soon as they are released, thus enabling the system to take necessary compensating actions. Also, admission control allows the preservation of resources since no time is spent on jobs that will miss their deadlines anyway. Third, SRMS integrates reservation-based and best-effort resource scheduling seamlessly. Reservation-based scheduling ensures the delivery of the minimal requested QoS; best-effort scheduling ensures that unused, reserved bandwidth is not wasted, but rather used to improve QoS further. Fourth, SRMS allows a system to deal gracefully with overload conditions by ensuring a fair deterioration in QoS across all tasks---as opposed to penalizing tasks with longer periods, for example. Finally, SRMS has the added advantage that its schedulability test is simple and its scheduling algorithm has a constant overhead in the sense that the complexity of the scheduler is not dependent on the number of the tasks in the system. We have evaluated SRMS against a number of alternative scheduling algorithms suggested in the literature (e.g. RMS and slack stealing), as well as refinements thereof, which we describe in this paper. Consistently throughout our experiments, SRMS provided the best performance. In addition, to evaluate the optimality of SRMS, we have compared it to an inefficient, yet optimal scheduler for task sets with harmonic periods.
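A hypothetical sketch of the budgeted-admission flavor of SRMS: each task has an allowance replenished at fixed intervals, a job whose demand does not fit the remaining allowance is rejected at release (so no time is wasted on doomed work), and admitted jobs never exceed the task's reserved share. The parameter values are made up and the real SRMS rules are more subtle; consult the paper for the actual algorithm.

```python
# Hypothetical SRMS-flavored budgeted admission: per-task allowance per
# "superperiod", jobs rejected at release if they cannot fit the budget.

class TaskBudget:
    def __init__(self, allowance, superperiod):
        self.allowance = allowance      # execution time granted per superperiod
        self.superperiod = superperiod
        self.remaining = allowance
        self.window_start = 0.0

    def admit(self, release_time, demand):
        # Replenish the allowance at superperiod boundaries.
        while release_time >= self.window_start + self.superperiod:
            self.window_start += self.superperiod
            self.remaining = self.allowance
        if demand <= self.remaining:    # job fits its task's reserved share
            self.remaining -= demand
            return True
        return False                    # reject at release; compensate elsewhere

budget = TaskBudget(allowance=5.0, superperiod=10.0)
jobs = [(0.0, 3.0), (2.0, 3.0), (4.0, 1.0), (12.0, 4.0)]  # (release, demand)
print([budget.admit(t, c) for t, c in jobs])  # -> [True, False, True, True]
```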
Abstract:
Statistical Rate Monotonic Scheduling (SRMS) is a generalization of the classical RMS results of Liu and Layland [LL73] for periodic tasks with highly variable execution times and statistical QoS requirements. The main tenet of SRMS is that the variability in task resource requirements could be smoothed through aggregation to yield guaranteed QoS. This aggregation is done over time for a given task and across multiple tasks for a given period of time. Similar to RMS, SRMS has two components: a feasibility test and a scheduling algorithm. SRMS feasibility test ensures that it is possible for a given periodic task set to share a given resource without violating any of the statistical QoS constraints imposed on each task in the set. The SRMS scheduling algorithm consists of two parts: a job admission controller and a scheduler. The SRMS scheduler is a simple, preemptive, fixed-priority scheduler. The SRMS job admission controller manages the QoS delivered to the various tasks through admit/reject and priority assignment decisions. In particular, it ensures the important property of task isolation, whereby tasks do not infringe on each other. In this paper we present the design and implementation of SRMS within the KURT Linux Operating System [HSPN98, SPH 98, Sri98]. KURT Linux supports conventional tasks as well as real-time tasks. It provides a mechanism for transitioning from normal Linux scheduling to a mixed scheduling of conventional and real-time tasks, and to a focused mode where only real-time tasks are scheduled. We overview the technical issues that we had to overcome in order to integrate SRMS into KURT Linux and present the API we have developed for scheduling periodic real-time tasks using SRMS.