885 results for Quality of Service (QoS)
Abstract:
We have developed SmartConnect, a tool that addresses the growing need to design and deploy multihop wireless relay networks for connecting sensors to a control center. Given the locations of the sensors, the traffic that each sensor generates, the quality of service (QoS) requirements, and the potential locations at which relays can be placed, SmartConnect helps design and deploy a low-cost wireless multihop relay network. SmartConnect adopts a field-interactive, iterative approach: model-based network design, field evaluation, and relay augmentation are performed iteratively until the desired QoS is met. The design process is based on approximate combinatorial optimization algorithms. In this paper, we present the design choices made in SmartConnect and describe the experimental work that led to them. Finally, we provide results from several experimental deployments.
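The iterate-until-QoS-is-met loop described above can be sketched as follows. The helper functions (design_network, field_evaluate, augment_relays) are hypothetical stand-ins for the paper's model-based optimization, field measurement, and relay-augmentation steps, replaced here by toy implementations so the sketch runs.

```python
import random

# Toy stand-ins for SmartConnect's model-based design, field evaluation and
# relay augmentation; the real tool uses approximate combinatorial
# optimization and actual field measurements.
def design_network(sensors, candidates):
    return set(candidates[: max(1, len(sensors) // 2)])    # initial relay set

def field_evaluate(sensors, relays):
    # pretend the measured QoS improves with every relay added (plus noise)
    return min(1.0, 0.5 + 0.1 * len(relays) + random.uniform(-0.05, 0.05))

def augment_relays(relays, candidates):
    unused = [c for c in candidates if c not in relays]
    return relays | {unused[0]} if unused else relays       # add one more relay

def deploy_relay_network(sensors, candidates, qos_target=0.9, max_rounds=10):
    """Field-interactive loop: design, measure in the field, augment relays,
    and repeat until the measured QoS meets the target."""
    relays = design_network(sensors, candidates)
    for _ in range(max_rounds):
        if field_evaluate(sensors, relays) >= qos_target:
            break
        relays = augment_relays(relays, candidates)
    return relays

print(deploy_relay_network(sensors=["s1", "s2", "s3", "s4"],
                           candidates=["r1", "r2", "r3", "r4", "r5"]))
```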
Abstract:
We consider the problem of allocating wireless channels (whenever they are free) to multiple cognitive radio users in a Cognitive Radio Network (CRN) so as to satisfy their Quality of Service (QoS) requirements efficiently. The CRN base station may not know the channel states of all the users, and the multiple channels become available at random times. In this setup, Opportunistic Splitting can be an attractive solution. A disadvantage of that algorithm is that it requires the metrics of all users to form an independent, identically distributed sequence. We therefore use a recently generalized version of the algorithm in which the optimal parameters are learnt online through stochastic approximation and the metrics may be Markovian. We provide scheduling algorithms that maximize the weighted-sum system throughput or are throughput- or delay-optimal. We also consider the scenario in which some traffic streams are delay sensitive.
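As an illustration of the threshold-adaptation idea, here is a minimal, self-contained sketch of opportunistic splitting with a stochastic-approximation style threshold update. It is a simplified single-threshold variant with i.i.d. uniform metrics; it does not reproduce the paper's generalized algorithm, its learnt parameters, or the Markovian metric case.

```python
import random

def opportunistic_splitting(n_users=8, slots=20000, step=0.005):
    """Each slot, users whose channel metric exceeds a threshold contend.
    The threshold is nudged by a stochastic-approximation style rule so that,
    on average, one user contends per slot (idle slot -> lower it,
    collision -> raise it)."""
    threshold, successes = 0.5, 0
    for _ in range(slots):
        metrics = [random.random() for _ in range(n_users)]   # i.i.d. metrics
        contenders = sum(m >= threshold for m in metrics)
        if contenders == 1:
            successes += 1                                    # one winner transmits
        threshold += step * (contenders - 1)                  # drive E[contenders] to 1
        threshold = min(max(threshold, 0.0), 1.0)
    return successes / slots

print("fraction of successful slots:", opportunistic_splitting())
```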
Abstract:
In this paper, we design a new dynamic packet scheduling scheme suitable for differentiated services (DiffServ) networks. The proposed dynamic benefit weighted scheduling (DBWS) scheme uses a dynamic weight computation loosely based on the weighted round robin (WRR) policy. It predicts the weight required by the expedited forwarding (EF) service for the current time slot (t) based on two criteria: (i) the weight allocated to it at time (t-1), and (ii) the average increase in the length of the EF buffer queue. This prediction provides smooth bandwidth allocation to all services by avoiding overbooking of resources for the EF service while still providing guaranteed service for it. The performance is analyzed for various scenarios under high, medium, and low traffic conditions. The results show that packet loss and end-to-end delay are minimized and jitter is reduced, thereby meeting the quality of service (QoS) requirements of the network.
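The abstract does not give the exact prediction formula; the sketch below is a hypothetical linear rule consistent with the two stated criteria (previous weight plus a term proportional to the average growth of the EF queue), with a cap so the other classes keep a guaranteed share.

```python
def predict_ef_weight(prev_weight, ef_queue_history, total_weight=100,
                      reserved_for_others=40, alpha=0.5):
    """Hypothetical DBWS-style weight prediction for the EF class at slot t:
    start from the weight used at t-1 and adjust it by the average per-slot
    increase of the EF queue, never taking more than the share left after the
    other classes' guaranteed minimum."""
    deltas = [b - a for a, b in zip(ef_queue_history, ef_queue_history[1:])]
    avg_increase = sum(deltas) / len(deltas) if deltas else 0.0
    new_weight = prev_weight + alpha * avg_increase
    return max(0.0, min(new_weight, total_weight - reserved_for_others))

# Example: the EF queue grew by roughly 3 packets per slot recently.
print(predict_ef_weight(prev_weight=50, ef_queue_history=[10, 13, 17, 19]))
```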
Abstract:
This paper proposes a probabilistic-prediction-based approach for providing Quality of Service (QoS) to delay-sensitive traffic in the Internet of Things (IoT). A joint packet scheduling and dynamic bandwidth allocation scheme is proposed to provide service differentiation and preferential treatment to delay-sensitive traffic. The scheduler focuses on reducing the waiting time of high-priority delay-sensitive services in the queue while keeping the waiting time of other services within tolerable limits. The scheme uses the difference between the probability of the average queue length of high-priority packets in the previous cycle and in the current cycle to determine the probability of the average weight required in the current cycle. This yields optimized bandwidth allocation to all services by avoiding the assignment of excess resources to high-priority services while still guaranteeing their service. The performance of the algorithm is investigated using MPEG-4 traffic traces under different system loads. The results show improved waiting times for scheduling high-priority packets while keeping the waiting time and packet loss of other services within tolerable limits.
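A minimal sketch of the cycle-by-cycle weight update suggested by the description above; the exact probabilistic rule is not in the abstract, so the drift term, gain, and minimum low-priority share below are hypothetical.

```python
def update_weights(p_hp_prev, p_hp_curr, hp_weight_prev,
                   total_weight=1.0, min_lp_share=0.2, gain=0.5):
    """Adjust the high-priority (HP) bandwidth weight for the current cycle
    by the change in the estimated probability that the HP queue is long,
    while low-priority (LP) traffic keeps a guaranteed minimum share."""
    drift = p_hp_curr - p_hp_prev                 # growth of the HP backlog probability
    hp_weight = hp_weight_prev + gain * drift
    hp_weight = max(0.0, min(hp_weight, total_weight - min_lp_share))
    return hp_weight, total_weight - hp_weight    # (HP weight, LP weight)

# Example: the HP backlog probability rose from 0.30 to 0.45 between cycles.
print(update_weights(p_hp_prev=0.30, p_hp_curr=0.45, hp_weight_prev=0.5))
```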
Abstract:
We develop an approximate analytical technique for evaluating the performance of multi-hop networks based on beaconless IEEE 802.15.4 (the "ZigBee" PHY and MAC), a popular standard for wireless sensor networks. The network comprises sensor nodes, which generate measurement packets, relay nodes, which only forward packets, and a data sink (base station). We consider a detailed stochastic process at each node and analyse this process taking into account the interaction with neighbouring nodes via certain time-averaged unknown variables (e.g., channel sensing rates, collision probabilities, etc.). By coupling the analyses at the various nodes, we obtain fixed-point equations that can be solved numerically to obtain the unknown variables, thereby yielding approximations of time-average performance measures such as packet discard probabilities and average queueing delays. The model incorporates packet generation at the sensor nodes and queues at both the sensor and relay nodes. We demonstrate the accuracy of our model by an extensive comparison with simulations. As an additional assessment of its accuracy, we use the model in an algorithm for sensor network design with quality-of-service (QoS) objectives and show that the resulting designs actually satisfy the QoS constraints (as validated by simulating the networks); the predictions are accurate to well within 10% of the simulation results in the regime where the packet discard probability is low.
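The per-node equations themselves are not reproduced in the abstract; the sketch below only illustrates the numerical machinery it refers to, a damped fixed-point iteration applied to a toy two-variable system standing in for coupled quantities such as a collision probability and a channel-busy probability.

```python
def solve_fixed_point(update, x0, tol=1e-9, max_iter=10000, damping=0.5):
    """Damped fixed-point iteration x <- (1 - d) * x + d * F(x), the kind of
    numerical procedure used to solve coupled per-node equations."""
    x = list(x0)
    for _ in range(max_iter):
        fx = update(x)
        new_x = [(1 - damping) * xi + damping * fi for xi, fi in zip(x, fx)]
        if max(abs(a - b) for a, b in zip(new_x, x)) < tol:
            return new_x
        x = new_x
    raise RuntimeError("fixed-point iteration did not converge")

# Toy system standing in for (collision probability p, busy probability q):
#   p = 1 - (1 - q)**3        (collide if any of 3 neighbours transmits)
#   q = 0.2 + 0.5 * p         (nodes sense/retransmit more when p grows)
F = lambda v: [1 - (1 - v[1]) ** 3, 0.2 + 0.5 * v[0]]
print(solve_fixed_point(F, [0.1, 0.1]))
```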
Abstract:
[EN] The present research work, based on some of the components of the Common Assessment Framework, sets out to analyse the influence of leadership on specific factors that constitute the organisational climate, and the impact that these factors have on the quality of municipal public services. For the purposes of this study, we draw on Likert's exploitative autocratic and participative leadership styles to explain their genesis, structure and workflow. As far as the organisational climate is concerned, the variables used are motivation, satisfaction, empowerment, conflict and stress. The main conclusion that emerged was that a participative leader confers greater relevance on the quality of service, through motivation, satisfaction, empowerment and positive human resources results, than an exploitative autocratic leader does. The contributions made are grounded in the empirical research presented here, and new research directions are proposed. The research methodology used was qualitative, based on a case study.
Abstract:
[ES] This document presents a study of different methodologies for measuring several quality of service (QoS) parameters. The study was carried out with a view to a future implementation of these methodologies in the QoSMeter platform of the NQaS group, and to a possible contribution towards their international standardisation. The document also includes an analysis of the latest contribution presented in this field by the administration of the Russian Federation, as well as a contribution from the Universidad del País Vasco [6], resulting in part from the study presented here, in which improvements to the aforementioned contribution are proposed.
Abstract:
[ES] The aim of this project is the design and implementation of a tool for integrating the Internet quality of service (QoS) data published by the Spanish regulator. The tool intends, on the one hand, to unify the different formats in which the QoS data are published and, on the other, to preserve the data, making it easier to build historical records, statistics and reports. The regulator's website only provides access to the data for the last 5 quarters, and previously published data do not remain accessible; they are replaced by the most recent figures, so from the end user's point of view these data are lost. The tool proposed in this work solves this problem, in addition to unifying formats and easing access to the data of interest. The system has been designed using the latest web application development technologies, so its capabilities and the possibilities for future extensions are considerable.
Abstract:
[ES] This document presents the work carried out to integrate the various tools available for measuring Quality of Service (QoS) with the Container that supports them within the QoSMETER infrastructure developed by the NQaS research group at the UPV/EHU. The different alternatives available for solving the problem are analysed, and the design is developed on the basis of the best of them.
Abstract:
The growing diversity and complexity of application requirements in networked distributed computing environments require Web application servers (WAS), which sit at the middleware layer, to move from the traditional best-effort service model to a quality of service (QoS) guarantee model, providing appropriate QoS guarantees for applications with different requirements. Current WAS systems are still weak in this respect. OnceAS/Q is a QoS-oriented WAS that, starting from QoS specifications, provides different QoS guarantee capabilities for different applications. OnceAS/Q implements an application QoS guarantee framework and provides a set of QoS service components that support the development and execution of applications with QoS requirements. This paper introduces the architecture and main components of OnceAS/Q and elaborates on two key issues: the definition and mapping of QoS specifications, and the QoS-oriented dynamic reconfiguration of service components and resources. The QoS guarantee capability of the OnceAS/Q prototype was evaluated with the Ecperf benchmark. The experimental data show that, in larger-scale application environments, OnceAS/Q provides better quality of service at an acceptable overhead.
Abstract:
To support the diverse Quality of Service (QoS) requirements of real-time (e.g. audio/video) applications in integrated services networks, several routing algorithms have been proposed that allow the needed bandwidth to be reserved over a Virtual Circuit (VC) established on one of several candidate routes. Traditionally, such routing is done using the least-loaded concept, and thus results in balancing the load across the set of candidate routes. In a recent study, we established the inadequacy of this load-balancing practice and proposed the use of load profiling as an alternative. Load-profiling techniques distribute the "available" bandwidth across a set of candidate routes so as to match the characteristics of incoming VC QoS requests. In this paper we thoroughly characterize the performance of VC routing using load profiling and contrast it with routing using load balancing and load packing. We do so both analytically and via extensive simulations of multi-class traffic routing in Virtual Path (VP) based networks. Our findings confirm that for routing guaranteed-bandwidth flows in VP networks, load balancing is not desirable, as it results in VP bandwidth fragmentation, which adversely affects the likelihood of accepting new VC requests. This fragmentation is more pronounced when the granularity of VC requests is large, as typically occurs when a common VC is established to carry the aggregate traffic flow of many high-bandwidth real-time sources. For VP-based networks, our simulation results show that our load-profiling VC routing scheme performs as well as or better than traditional load-balancing VC routing in terms of revenue, under both skewed and uniform workloads. Furthermore, load-profiling routing improves routing fairness by proactively increasing the chances of admitting high-bandwidth connections.
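To make the contrast concrete, here is a small, self-contained sketch of three route-selection rules over a set of candidate paths with known residual bandwidth: least-loaded (load balancing), best-fit (load packing), and a crude profile-aware rule that tries to leave behind residuals still useful for the request sizes in the traffic mix. The profile_match rule is only an illustrative stand-in for the paper's load-profiling scheme, not its actual algorithm.

```python
def least_loaded(residuals, demand):
    """Load balancing: pick the feasible route with the most spare bandwidth."""
    feasible = [i for i, r in enumerate(residuals) if r >= demand]
    return max(feasible, key=lambda i: residuals[i], default=None)

def best_fit(residuals, demand):
    """Load packing: pick the feasible route that leaves the least leftover."""
    feasible = [i for i, r in enumerate(residuals) if r >= demand]
    return min(feasible, key=lambda i: residuals[i] - demand, default=None)

def profile_match(residuals, demand, request_sizes):
    """Crude profile-aware rule: prefer the route whose leftover bandwidth most
    closely matches some request size still seen in the traffic mix, so the
    leftover is less likely to end up as unusable fragments."""
    feasible = [i for i, r in enumerate(residuals) if r >= demand]
    def wasted(i):
        leftover = residuals[i] - demand
        fits = [s for s in request_sizes if s <= leftover]
        return min((leftover - s for s in fits), default=leftover)
    return min(feasible, key=wasted, default=None)

caps = [10, 6, 3]   # residual bandwidth on three candidate virtual paths
print(least_loaded(caps, 2), best_fit(caps, 2), profile_match(caps, 2, [1, 5]))
```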
Abstract:
The congestion control mechanisms of TCP make it vulnerable in an environment where flows with different congestion sensitivity compete for scarce resources. With the increasing amount of unresponsive UDP traffic in today's Internet, new mechanisms are needed to enforce fairness in the core of the network. We propose a scalable DiffServ-like architecture in which flows with different characteristics are classified into separate service queues at the routers. Such class-based isolation provides protection so that flows with different characteristics do not negatively impact one another. In this study, we examine different aspects of the interaction between UDP and TCP and the possible gains from segregating UDP and TCP into different classes. We also investigate the utility of further segregating TCP flows into two classes, one for short flows and one for long flows. Results are obtained analytically for both Tail-Drop and Random Early Drop (RED) routers. Class-based isolation has the following salient features: (1) better fairness, (2) improved predictability for all kinds of flows, (3) lower transmission delay for delay-sensitive flows, and (4) better control over the Quality of Service (QoS) of a particular traffic type.
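A minimal sketch of the classification step implied above: packets are steered into one of three service queues (unresponsive UDP, short TCP flows, long TCP flows). The packet representation, byte threshold, and queue names are hypothetical.

```python
from collections import deque

SHORT_FLOW_BYTES = 100_000   # hypothetical cutoff between short and long TCP flows

queues = {"udp": deque(), "tcp_short": deque(), "tcp_long": deque()}

def classify(proto, flow_bytes_sent):
    """Map a packet to a service class: unresponsive UDP traffic is isolated,
    and TCP flows are split by how much their flow has already sent."""
    if proto == "udp":
        return "udp"
    return "tcp_short" if flow_bytes_sent < SHORT_FLOW_BYTES else "tcp_long"

def enqueue(packet, flow_bytes_sent):
    queues[classify(packet["proto"], flow_bytes_sent)].append(packet)

enqueue({"proto": "udp", "len": 1200}, flow_bytes_sent=0)
enqueue({"proto": "tcp", "len": 1500}, flow_bytes_sent=250_000)
enqueue({"proto": "tcp", "len": 1500}, flow_bytes_sent=4_000)
print({name: len(q) for name, q in queues.items()})
```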
Abstract:
Quality of Service (QoS) guarantees are required by an increasing number of applications to ensure a minimal level of fidelity in the delivery of application data units through the network. Application-level QoS does not necessarily follow from any transport-level QoS guarantees on the delivery of the individual cells (e.g. ATM cells) that make up the application's data units. The distinction between application-level and transport-level QoS guarantees is due primarily to the fragmentation that occurs when large application data units (e.g. IP packets or video frames) are transmitted using much smaller network cells: the partial delivery of a data unit is useless, and the bandwidth spent partially transmitting it is wasted. The data units transmitted by an application may vary in size while being generated at a constant rate, which results in a variable bit rate (VBR) data flow that requires QoS guarantees. Statistical multiplexing is inadequate, because no guarantees can be made and no firewall property exists between different data flows. In this paper, we present a novel resource management paradigm for the maintenance of application-level QoS for VBR flows. Our paradigm is based on Statistical Rate Monotonic Scheduling (SRMS), in which (1) each application generates its variable-size data units at a fixed rate, (2) the partial delivery of data units is of no value to the application, and (3) the QoS guarantee extended to the application is the probability that an arbitrary data unit will be successfully transmitted through the network to/from the application.
Abstract:
Statistical Rate Monotonic Scheduling (SRMS) is a generalization of the classical RMS results of Liu and Layland [LL73] to periodic tasks with highly variable execution times and statistical QoS requirements. The main tenet of SRMS is that the variability in task resource requirements can be smoothed through aggregation to yield guaranteed QoS. This aggregation is done over time for a given task and across multiple tasks for a given period of time. Like RMS, SRMS has two components: a feasibility test and a scheduling algorithm. The SRMS feasibility test ensures that it is possible for a given periodic task set to share a given resource without violating any of the statistical QoS constraints imposed on each task in the set. The SRMS scheduling algorithm consists of two parts: a job admission controller and a scheduler. The SRMS scheduler is a simple, preemptive, fixed-priority scheduler. The SRMS job admission controller manages the QoS delivered to the various tasks through admit/reject and priority assignment decisions. In particular, it ensures the important property of task isolation, whereby tasks do not infringe on each other. In this paper we present the design and implementation of SRMS within the KURT Linux operating system [HSPN98, SPH 98, Sri98]. KURT Linux supports conventional tasks as well as real-time tasks. It provides a mechanism for transitioning from normal Linux scheduling to a mixed scheduling of conventional and real-time tasks, and to a focused mode in which only real-time tasks are scheduled. We give an overview of the technical issues that we had to overcome in order to integrate SRMS into KURT Linux and present the API we have developed for scheduling periodic real-time tasks using SRMS.
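The budget-charging idea behind the job admission controller can be sketched as follows. This is only an illustration of task isolation via per-task execution-time allowances; the actual SRMS admission rule aggregates budgets over superperiods and couples them with priority assignment, which this hypothetical class does not reproduce.

```python
class TaskBudget:
    """Per-task execution-time allowance: a released job is admitted only if
    the remaining allowance covers its announced execution time, so an
    overrunning task cannot steal capacity reserved for other tasks."""
    def __init__(self, period, allowance):
        self.period = period           # release period of the periodic task
        self.allowance = allowance     # execution-time budget per replenishment
        self.remaining = allowance

    def replenish(self):
        self.remaining = self.allowance

    def admit(self, exec_time):
        if exec_time <= self.remaining:
            self.remaining -= exec_time   # charge the admitted job to the budget
            return True
        return False                      # reject: run as best-effort or skip

task = TaskBudget(period=100, allowance=30)
print([task.admit(c) for c in (10, 15, 10)])   # -> [True, True, False]
```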
Abstract:
To provide real-time service or to engineer constraint-based paths, networks require the underlying routing algorithm to be able to find low-cost paths that satisfy given Quality-of-Service (QoS) constraints. However, the problem of constrained shortest (least-cost) path routing is known to be NP-hard, and heuristics have been proposed to find near-optimal solutions. These heuristics either impose relationships among the link metrics to reduce the complexity of the problem, which may limit their general applicability, or are too costly in terms of execution time to be applicable to large networks. In this paper, we focus on solving the delay-constrained minimum-cost path problem and present a fast algorithm to find a near-optimal solution. The algorithm, called DCCR (Delay-Cost-Constrained Routing), is a variant of the k-shortest-path algorithm. DCCR uses a new adaptive path weight function together with an additional constraint imposed on the path cost to restrict the search space, so it can return a near-optimal solution in a very short time. Furthermore, we use the method proposed by Blokh and Gutin to further reduce the search space by means of a tighter bound on the path cost. This makes the algorithm more accurate and even faster. We call this improved algorithm SSR+DCCR (Search Space Reduction + DCCR). Through extensive simulations, we confirm that SSR+DCCR performs very well compared to the optimal but very expensive solution.
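For reference, the underlying problem can be solved exactly on a toy graph by a plain labeling search, as sketched below (nonnegative costs and delays assumed). This is not DCCR or SSR+DCCR, whose whole point is to restrict this search space with an adaptive weight function and a bound on path cost so that near-optimal paths are found much faster on large networks.

```python
import heapq

def delay_constrained_min_cost(graph, src, dst, delay_bound):
    """Least-cost path from src to dst whose total delay stays within
    delay_bound. graph: {u: [(v, cost, delay), ...]} with nonnegative weights.
    Labels (cost, delay) are expanded in cost order; dominated labels are pruned."""
    heap = [(0, 0, src, (src,))]           # (cost, delay, node, path so far)
    expanded = {}                          # node -> expanded (cost, delay) labels
    while heap:
        cost, delay, u, path = heapq.heappop(heap)
        if u == dst:
            return cost, delay, path       # first dst label popped is the cheapest feasible one
        if any(c <= cost and d <= delay for c, d in expanded.get(u, [])):
            continue                       # a better-or-equal label was already expanded
        expanded.setdefault(u, []).append((cost, delay))
        for v, c, d in graph.get(u, []):
            if v not in path and delay + d <= delay_bound:
                heapq.heappush(heap, (cost + c, delay + d, v, path + (v,)))
    return None                            # no feasible path

g = {"s": [("a", 1, 5), ("b", 4, 1)], "a": [("t", 1, 5)], "b": [("t", 4, 1)]}
print(delay_constrained_min_cost(g, "s", "t", delay_bound=4))  # (8, 2, ('s', 'b', 't'))
```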