949 results for "Bottleneck bandwidth"
Abstract:
Accurate measurement of network bandwidth is crucial for flexible Internet applications and protocols which actively manage and dynamically adapt to changing utilization of network resources. These applications must do so to perform tasks such as distributing and delivering high-bandwidth media, scheduling service requests and performing admission control. Extensive work has focused on two approaches to measuring bandwidth: measuring it hop-by-hop, and measuring it end-to-end along a path. Unfortunately, best-practice techniques for the former are inefficient and techniques for the latter are only able to observe bottlenecks visible at end-to-end scope. In this paper, we develop and simulate end-to-end probing methods which can measure bottleneck bandwidth along arbitrary, targeted subpaths of a path in the network, including subpaths shared by a set of flows. As another important contribution, we describe a number of practical applications which we foresee as standing to benefit from solutions to this problem, especially in emerging, flexible network architectures such as overlay networks, ad-hoc networks, peer-to-peer architectures and massively accessed content servers.
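The abstract does not spell out the probing mechanics, but the packet-pair principle on which bottleneck-bandwidth estimation of this kind builds is easy to sketch. The following minimal illustration (the function name and values are ours, not the paper's) estimates capacity from the receiver-side spacing of back-to-back probe packets; subpath-capable schemes of the kind the paper develops refine this basic idea with more elaborate probe patterns.

```python
# Illustrative sketch of the classic packet-pair estimate of bottleneck
# bandwidth (not the subpath-probing method developed in the paper).
# Two back-to-back packets of size P bytes are spread out by the
# bottleneck link; the receiver-side gap dt yields capacity ~ P / dt.

def packet_pair_estimate(packet_size_bytes, arrival_gaps_s):
    """Median-filtered packet-pair estimate in bits per second.

    arrival_gaps_s: inter-arrival gaps (seconds) of back-to-back probe
    pairs measured at the receiver. Taking the median damps noise from
    cross-traffic and queueing.
    """
    gaps = sorted(arrival_gaps_s)
    median_gap = gaps[len(gaps) // 2]
    return 8 * packet_size_bytes / median_gap

# Example: 1500-byte pairs arriving ~1.2 ms apart suggest ~10 Mb/s.
print(packet_pair_estimate(1500, [0.0012, 0.0011, 0.0013]))
```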
Abstract:
Recent measurements of local-area and wide-area traffic have shown that network traffic exhibits variability at a wide range of scales: self-similarity. In this paper, we examine a mechanism that gives rise to self-similar network traffic and present some of its performance implications. The mechanism we study is the transfer of files or messages whose size is drawn from a heavy-tailed distribution. We examine its effects through detailed transport-level simulations of multiple TCP streams in an internetwork. First, we show that in a "realistic" client/server network environment (i.e., one with bounded resources and coupling among traffic sources competing for resources), the degree to which file sizes are heavy-tailed can directly determine the degree of traffic self-similarity at the link level. We show that this causal relationship is not significantly affected by changes in network resources (bottleneck bandwidth and buffer capacity), network topology, the influence of cross-traffic, or the distribution of interarrival times. Second, we show that properties of the transport layer play an important role in preserving and modulating this relationship. In particular, the reliable transmission and flow control mechanisms of TCP (Reno, Tahoe, or Vegas) serve to maintain the long-range dependence structure induced by heavy-tailed file size distributions. In contrast, if a non-flow-controlled and unreliable (UDP-based) transport protocol is used, the resulting traffic shows little self-similarity: although still bursty at short time scales, it has little long-range dependence. If flow-controlled, unreliable transport is employed, the degree of traffic self-similarity is positively correlated with the degree of throttling at the source. Third, in exploring the relationship between file sizes, transport protocols, and self-similarity, we are also able to show some of the performance implications of self-similarity. We present data on the relationship between traffic self-similarity and network performance as captured by performance measures including packet loss rate, retransmission rate, and queueing delay. Increased self-similarity, as expected, results in degraded performance. Queueing delay, in particular, exhibits a drastic increase with increasing self-similarity. Throughput-related measures such as packet loss and retransmission rate, however, increase only gradually with increasing traffic self-similarity as long as a reliable, flow-controlled transport protocol is used.
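As a rough illustration of the causal mechanism the paper studies (a self-contained sketch with arbitrary parameters, not the paper's transport-level simulation), superposing ON/OFF sources whose burst lengths are heavy-tailed Pareto variates produces an aggregate rate process whose variance-time behaviour indicates long-range dependence; for tail index alpha in (1, 2), the expected Hurst parameter is roughly (3 - alpha) / 2.

```python
import random, math

# Illustrative sketch: many ON/OFF sources with Pareto (heavy-tailed)
# ON and OFF durations; the aggregate count of active sources per time
# slot is a long-range dependent series. alpha = 1.4 suggests H ~ 0.8.

def pareto(alpha, xm=1.0):
    return xm / random.random() ** (1.0 / alpha)

def aggregate_rate(n_sources=200, slots=20000, alpha=1.4):
    rate = [0] * slots
    for _ in range(n_sources):
        t = 0
        while t < slots:
            on = int(pareto(alpha))          # heavy-tailed burst length
            for s in range(t, min(t + on, slots)):
                rate[s] += 1
            t += on + int(pareto(alpha))     # heavy-tailed idle gap
    return rate

def hurst_variance_time(x, scales=(1, 2, 4, 8, 16, 32, 64)):
    """Variance-time estimate: var of the m-aggregated series ~ m^(2H-2)."""
    pts = []
    for m in scales:
        agg = [sum(x[i:i + m]) / m for i in range(0, len(x) - m + 1, m)]
        mean = sum(agg) / len(agg)
        var = sum((a - mean) ** 2 for a in agg) / len(agg)
        pts.append((math.log(m), math.log(var)))
    n = len(pts)                             # least-squares slope
    sx = sum(p[0] for p in pts); sy = sum(p[1] for p in pts)
    sxx = sum(p[0] ** 2 for p in pts); sxy = sum(p[0] * p[1] for p in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
    return 1 + slope / 2                     # slope = 2H - 2

rate = aggregate_rate()
print("estimated Hurst parameter:", round(hurst_variance_time(rate), 2))
```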
Abstract:
The quality of available network connections can often have a large impact on the performance of distributed applications. For example, document transfer applications such as FTP, Gopher, and the World Wide Web suffer increased response times as a result of network congestion. For these applications, the document transfer time is directly related to the available bandwidth of the connection. Available bandwidth depends on two things: 1) the underlying capacity of the path from client to server, which is limited by the bottleneck link; and 2) the amount of other traffic competing for links on the path. If measurements of these quantities were available to the application, the current utilization of connections could be calculated. Network utilization could then be used as a basis for selecting from a set of alternative connections or servers, thus providing reduced response time. Such a dynamic server selection scheme would be especially important in a mobile computing environment, in which the set of available servers changes frequently. In order to provide these measurements at the application level, we introduce two tools: bprobe, which provides an estimate of the uncongested bandwidth of a path; and cprobe, which gives an estimate of the current congestion along a path. These two measures may be used in combination to provide the application with an estimate of available bandwidth between server and client, thereby enabling application-level congestion avoidance. In this paper we discuss the design and implementation of our probe tools, specifically illustrating the techniques used to achieve accuracy and robustness. We present validation studies for both tools which demonstrate their reliability in the face of actual Internet conditions, and we give results of a survey of available bandwidth to a random set of WWW servers as a sample application of our probe technique. We conclude with descriptions of other applications of our measurement tools, several of which are currently under development.
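The arithmetic by which the two measurements combine into an available-bandwidth estimate is straightforward. The sketch below is illustrative only; the function names and figures are hypothetical, not the tools' actual interfaces.

```python
# Illustrative combination of the two probe-style measurements the
# abstract describes (names and values are hypothetical): a bprobe-style
# tool estimates the uncongested path capacity, a cprobe-style tool the
# current competing traffic along the path.

def available_bandwidth(capacity_bps, competing_bps):
    """Available bandwidth = uncongested capacity - competing traffic."""
    return max(0.0, capacity_bps - competing_bps)

def utilization(capacity_bps, competing_bps):
    return competing_bps / capacity_bps

cap = 10e6       # assumed capacity estimate: 10 Mb/s bottleneck link
cross = 6.5e6    # assumed congestion estimate: 6.5 Mb/s competing traffic
print(f"available: {available_bandwidth(cap, cross) / 1e6:.1f} Mb/s, "
      f"utilization: {utilization(cap, cross):.0%}")
```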
Abstract:
The evolution of the television market is led by 3DTV technology, and expert forecasts suggest this trend may accelerate over the next few years. However, 3DTV delivery over broadcast networks is not yet sufficiently developed, and acts as a bottleneck for the full deployment of the technology. Thus, increasing interest is devoted to stereo 3DTV formats compatible with current HDTV video equipment and infrastructure, as they may greatly encourage 3D acceptance. In this paper, different subsampling schemes for HDTV-compatible transmission of both progressive and interlaced stereo 3DTV are studied and compared. The frequency characteristics and preserved frequency content of each scheme are analyzed, and a simple interpolation filter is specially designed. Finally, the advantages and disadvantages of the different schemes and filters are evaluated through quality testing on several progressive and interlaced video sequences.
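As a concrete example of a frame-compatible scheme of the kind the paper compares (an illustrative sketch with a basic linear interpolator, not the paper's designed filter or its other sampling patterns), side-by-side packing keeps every second column of each view so a stereo pair fits in a single HDTV frame.

```python
import numpy as np

# Illustrative side-by-side stereo packing: horizontal 2:1 subsampling
# of the left and right views into one HDTV frame, with simple linear
# interpolation of the dropped columns at the receiver.

def side_by_side_pack(left, right):
    # Keep every second column of each view, then concatenate.
    return np.concatenate([left[:, ::2], right[:, ::2]], axis=1)

def unpack_and_interpolate(frame):
    h, w = frame.shape
    half = w // 2
    out = []
    for view in (frame[:, :half], frame[:, half:]):
        full = np.zeros((h, half * 2), dtype=float)
        full[:, ::2] = view
        # Linearly interpolate the missing odd columns from neighbours.
        full[:, 1:-1:2] = (full[:, 0:-2:2] + full[:, 2::2]) / 2
        full[:, -1] = full[:, -2]
        out.append(full)
    return out

left = np.random.rand(1080, 1920)
right = np.random.rand(1080, 1920)
packed = side_by_side_pack(left, right)        # still a 1080 x 1920 frame
rec_left, rec_right = unpack_and_interpolate(packed)
print(packed.shape, rec_left.shape, rec_right.shape)
```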
Abstract:
Technology is continually changing and evolving throughout the construction industry, particularly in the design process. One of the principal manifestations of this is a move away from team working in a shared work space to team working in a virtual space, using increasingly sophisticated electronic media. Because of the significant operating differences between shared and virtual spaces, team members must adjust the generic skills they use when moving between the two conditions. This paper reports one aspect of a CRC-CI research project investigating the 'generic skills' used by individuals and teams when engaging with high-bandwidth information and communication technologies (ICT). It aligns with the project's other two aspects of collaboration in virtual environments: 'processes' and 'models'. The entire project focuses on the early stages of a project (i.e. design) in which models for the project are being developed and revised. The paper summarises the first stage of the research project, which reviews the literature to identify factors of virtual teaming that may affect team member skills. It concludes that design team participants require appropriate skills to function efficiently and effectively, and that the introduction of high-bandwidth technologies reinforces the need for skills mapping and measurement.
Abstract:
In today's global design world, architectural and other related design firms design across time zones and geographically distant locations. High-bandwidth virtual environments have the potential to make a major impact on these global design teams. However, there is insufficient evidence about the way designers collaborate in their normal working environments using traditional and/or digital media. This paper presents a method to study the impact of communication and information technologies on collaborative design practice by comparing design tasks done in a normal working environment with design tasks done in a virtual environment. Before introducing high-bandwidth collaboration technology to the work environment, a baseline study is conducted to observe and analyze the existing collaborative process. Designers currently rely on phone, fax, email, and image files for communication and collaboration. Describing the current context is important for comparison with the following phases. We developed a coding scheme to be used in analyzing the three stages of collaborative design activity. The results will establish the basis for measures of collaborative design activity when a new technology is later introduced to the same work environment, for example electronic whiteboards, 3D virtual worlds, webcams, and Internet telephony. The results of this work will form the basis of guidelines for the introduction of technology into global design offices.
Abstract:
We describe the design and evaluation of a platform for networks of cameras in low-bandwidth, low-power sensor networks. In our work to date, we have investigated two different DSP hardware/software platforms for undertaking the tasks of compression and of object detection and tracking. We compare the relative merits of each of the hardware and software platforms in terms of both performance and energy consumption. Finally, we discuss what we believe are the ongoing research questions for image processing in WSNs.
Abstract:
This paper presents an overview of our demonstration of a low-bandwidth, wireless camera network where image compression is undertaken at each node. We briefly introduce the Fleck hardware platform we have developed as well as describe the image compression algorithm which runs on individual nodes. The demo will show real-time image data coming back to base as individual camera nodes are added to the network.
Abstract:
In this paper we describe the recent development of a low-bandwidth wireless camera sensor network. We propose a simple yet effective network architecture which allows multiple cameras to be connected to the network and to synchronize their communication schedules. Image compression of greater than 90% is performed at each node on a local DSP coprocessor, so that nodes use roughly one-eighth of the energy required to stream uncompressed images. We briefly introduce the Fleck wireless node and the DSP/camera sensor, and then outline the network architecture and compression algorithm. The system is able to stream color QVGA images over the network to a base station at up to 2 frames per second.
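A back-of-envelope energy model makes the claimed saving plausible; the per-byte radio and DSP costs below are assumed placeholders, not measured Fleck figures.

```python
# Back-of-envelope check of the energy claim (the radio and DSP costs
# are hypothetical placeholders): compressing by >90% before transmit
# cuts the dominant radio cost at the price of some DSP work.

QVGA_BYTES = 320 * 240 * 2          # color QVGA frame, ~2 bytes/pixel
RADIO_NJ_PER_BYTE = 2000            # assumed radio cost per byte sent
DSP_NJ_PER_BYTE = 100               # assumed DSP cost per input byte

raw = QVGA_BYTES * RADIO_NJ_PER_BYTE
compressed = (QVGA_BYTES * DSP_NJ_PER_BYTE             # compression work
              + 0.1 * QVGA_BYTES * RADIO_NJ_PER_BYTE)  # 10% of bytes sent
print(f"energy ratio (compressed/raw): {compressed / raw:.2f}")
```

Under these assumed costs the ratio comes out near 0.15, the same order as the one-eighth figure quoted above; the exact saving depends on the real radio/DSP energy split.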
Abstract:
In the rate-based flow control for the ATM Available Bit Rate service, fairness is an important requirement, i.e., each flow should be allocated a fair share of the available bandwidth in the network. Max–min fairness, which is widely adopted in ATM, is appropriate only when the minimum cell rates (MCRs) of the flows are zero or neglected. Generalised max–min (GMM) fairness extends the max–min fairness principle to accommodate MCRs. In this paper, we formulate the GMM fair rate allocation, propose a centralised algorithm, analyse its bottleneck structure, and develop an efficient distributed explicit rate allocation algorithm to achieve GMM fairness in an ATM network. The study addresses both theoretical and practical issues of GMM fair rate allocation.
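A centralised GMM allocation can be pictured as water-filling with MCR floors: a common level rises, each unfrozen flow receives max(MCR, level), and the flows crossing the first link to saturate are frozen at that level. The sketch below is an illustrative reconstruction under that reading, not the paper's algorithm verbatim.

```python
# Hedged sketch of a centralised water-filling computation of the
# generalised max-min (GMM) allocation: every flow is guaranteed its
# MCR, and a common "water level" rises until some link saturates.

def gmm_allocate(links, flows, mcr, routes):
    """links: {link: capacity}; mcr: {flow: min cell rate};
    routes: {flow: set of links}. Returns {flow: rate}."""
    rate = dict(mcr)                     # every flow starts at its MCR
    frozen = set()
    while len(frozen) < len(flows):
        active = [f for f in flows if f not in frozen]

        def load(link, level):          # link load at a given water level
            return sum(max(mcr[f], level) if f in active else rate[f]
                       for f in flows if link in routes[f])

        best_level, best_link = None, None
        for link, cap in links.items():
            if not any(link in routes[f] for f in active):
                continue
            lo, hi = 0.0, cap + max(max(mcr.values()), cap)
            for _ in range(60):         # bisect the saturation level
                mid = (lo + hi) / 2
                if load(link, mid) > cap:
                    hi = mid
                else:
                    lo = mid
            if best_level is None or lo < best_level:
                best_level, best_link = lo, link
        for f in active:                # freeze flows on the tightest link
            if best_link in routes[f]:
                rate[f] = max(mcr[f], best_level)
                frozen.add(f)
    return rate

links = {"L1": 10.0, "L2": 6.0}
flows = ["A", "B", "C"]
mcr = {"A": 1.0, "B": 4.0, "C": 0.0}
routes = {"A": {"L1", "L2"}, "B": {"L1"}, "C": {"L2"}}
print(gmm_allocate(links, flows, mcr, routes))
```

In this example, link L2 bottlenecks flows A and C at 3.0 each, after which flow B takes the residual 7.0 on L1, above its MCR of 4.0.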
Abstract:
In this paper, weighted fair rate allocation for the ATM available bit rate (ABR) service is discussed, taking the minimum cell rate (MCR) into account. Weighted fairness with an MCR guarantee has been discussed recently in the literature. In those studies, each ABR virtual connection (VC) is first allocated its MCR; the remaining available bandwidth is then shared among the VCs according to their weights. For the weighted fairness defined in this paper, the bandwidth is instead first allocated according to each VC's weight; if a VC's weighted share is less than its MCR, it is allocated its MCR instead of the weighted share. This weighted fairness with an MCR guarantee is referred to as extended weighted (EXW) fairness. Certain theoretical issues related to EXW, such as its global solution and bottleneck structure, are first discussed. A distributed explicit rate allocation algorithm is then proposed to achieve EXW fairness in ATM networks. The algorithm is a general-purpose explicit rate algorithm in the sense that it can realise almost all the fairness principles proposed for ABR to date, with only minor modifications needed.
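For a single link, EXW fairness reduces to a short fixed-point computation; the sketch below (names and numbers are ours, and the paper's distributed, multi-link algorithm is not reproduced) lifts any VC whose weighted share falls below its MCR and re-shares the remainder by weight.

```python
# Hedged single-link sketch of extended weighted (EXW) fairness:
# split capacity by weight; any VC whose weighted share is below its
# MCR is pinned at its MCR, and the remainder is re-shared by weight.

def exw_single_link(capacity, weights, mcrs):
    pinned = {}                                  # VCs held at their MCR
    while True:
        free = [v for v in weights if v not in pinned]
        remaining = capacity - sum(pinned.values())
        wsum = sum(weights[v] for v in free)
        share = {v: remaining * weights[v] / wsum for v in free}
        below = [v for v in free if share[v] < mcrs[v]]
        if not below:
            share.update(pinned)
            return share
        for v in below:                          # lift to MCR and repeat
            pinned[v] = mcrs[v]

weights = {"VC1": 1, "VC2": 2, "VC3": 5}
mcrs = {"VC1": 3.0, "VC2": 1.0, "VC3": 2.0}
print(exw_single_link(16.0, weights, mcrs))
```

In the example, VC1's weighted share of 2.0 falls below its MCR of 3.0, so it is pinned at 3.0 and the remaining 13.0 units are split 2:5 between VC2 and VC3.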
Abstract:
In this paper, two different high-bandwidth converter control strategies are discussed, one for voltage control and the other for current control. In each case the converter is equipped with a passive output filter: an LC filter for the voltage controller and an LCL filter for the current controller. An important aspect discussed in the paper is the use of high-pass filters in the feedback loop to avoid computing unnecessary references. The stability of the overall system, including the high-pass filters, is analyzed. The choice of filter parameters is crucial for achieving the desired system performance. The achievable control bandwidth is presented through frequency-response (Bode) plots of the system gains. The proposed controllers are shown to track fundamental-frequency components along with low-order harmonic components. Extensive simulation results are presented to validate the control concepts presented in the paper.
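As an illustration of the kind of frequency-domain analysis the abstract refers to (component values and the corner frequency are assumptions, not the paper's design), one can compute the gains of an ideal LC output filter, H(s) = 1 / (LCs^2 + 1), and of a first-order feedback high-pass, G(s) = s / (s + wc).

```python
import numpy as np
from scipy import signal

# Illustrative Bode sketch (hypothetical component values): the LC
# output filter of the voltage-controlled converter, and a first-order
# high-pass of the kind placed in the feedback loop.

L, C = 2e-3, 50e-6             # assumed filter inductance / capacitance
wc = 2 * np.pi * 30            # assumed high-pass corner at 30 Hz

lc = signal.TransferFunction([1.0], [L * C, 0.0, 1.0])   # 1/(LCs^2 + 1)
hp = signal.TransferFunction([1.0, 0.0], [1.0, wc])      # s/(s + wc)

w = np.logspace(1, 5, 400)     # frequency grid in rad/s
for name, sys in (("LC filter", lc), ("high-pass", hp)):
    w_out, mag, phase = signal.bode(sys, w)
    print(f"{name}: gain at {w_out[0]:.0f} rad/s = {mag[0]:.1f} dB")
```

With these values the LC filter resonates near 1/sqrt(LC), about 3.2 krad/s, which is the sort of constraint that makes the filter parameter choice crucial for the achievable control bandwidth.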