43 results for available bandwidth

at Indian Institute of Science - Bangalore - India


Relevance:

70.00%

Publisher:

Abstract:

The integration of different wireless networks, such as GSM and WiFi, into a two-tier hybrid wireless network is popular and economical. Efficient bandwidth management, call admission control strategies, and mobility management are important issues in supporting multiple types of services with different bandwidth requirements in hybrid networks. In particular, bandwidth is a critical commodity because of the type of transactions supported by these hybrid networks, which may have varying bandwidth and time requirements. In this paper, we consider such a problem in a hybrid wireless network installed in a superstore environment and design a bandwidth management algorithm based on a priority-level classification of the incoming transactions. Our scheme uses a downlink transaction scheduling algorithm, which decides how to schedule the outgoing transactions based on their priority level while making efficient use of the available bandwidth. The transaction scheduling algorithm is used to maximize the number of transaction executions. The proposed scheme is simulated in a superstore environment with multiple rooms. The performance results show that the proposed scheme can considerably improve bandwidth utilization by reducing transaction blocking and accommodating more essential transactions at the peak time of the business.
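
As an illustration of the priority-driven downlink scheduling described above, here is a minimal Python sketch, assuming hypothetical transaction tuples of (priority, bandwidth demand, id) and a fixed downlink budget; the paper's actual classification rules and parameters are not given in the abstract.

    def schedule_downlink(pending, capacity_kbps):
        # pending: list of (priority, bandwidth_kbps, tx_id); lower priority value = more essential
        admitted, blocked, used = [], [], 0.0
        for prio, bw, tx in sorted(pending):
            if used + bw <= capacity_kbps:
                admitted.append(tx)
                used += bw
            else:
                blocked.append(tx)
        return admitted, blocked

    # Example: essential (priority 0) and background (priority 2) transactions share 1000 kbps.
    print(schedule_downlink([(0, 400, "pay-1"), (0, 300, "pay-2"), (0, 250, "pay-3"),
                             (2, 200, "ad-1"), (2, 150, "ad-2")], 1000))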

Relevance:

60.00%

Publisher:

Abstract:

Orthogonal frequency-division multiple access (OFDMA) systems divide the available bandwidth into orthogonal subchannels and exploit multiuser diversity and frequency selectivity to achieve high spectral efficiencies. However, they require a significant amount of channel state feedback for scheduling and rate adaptation and are sensitive to feedback delays. We develop a comprehensive analysis of OFDMA system throughput in the presence of feedback delays as a function of the feedback scheme, frequency-domain scheduler, and rate adaptation rule. Also derived are expressions for the outage probability, which captures the inability of a subchannel to successfully carry data due to the feedback scheme or feedback delays. Our model encompasses the popular best-n and threshold-based feedback schemes and the greedy, proportional fair, and round-robin schedulers, which cover a wide range of throughput versus fairness tradeoffs. It helps quantify the differing robustness of the schedulers to feedback overhead and delays. It shows that, even at low vehicular speeds, small feedback delays markedly degrade the throughput and increase the outage probability. Further, for a given feedback delay, the throughput degradation depends primarily on the feedback overhead and not on the feedback scheme itself. We also show how to optimize the rate adaptation thresholds as a function of the feedback delay.
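
A rough Monte Carlo sketch of one configuration from the model above - best-n feedback with a greedy scheduler - assuming i.i.d. Rayleigh subchannels and a single correlation coefficient rho standing in for feedback delay; the paper's closed-form analysis, remaining schedulers, and optimized adaptation thresholds are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)
    n_users, n_sub, best_n, rho = 8, 16, 4, 0.95   # rho: correlation between fed-back and actual channel

    def complex_gauss(shape):
        return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

    def one_slot(snr=10.0):
        h_fb = complex_gauss((n_users, n_sub))                                   # channel at feedback time
        h_now = rho * h_fb + np.sqrt(1 - rho**2) * complex_gauss((n_users, n_sub))  # aged (delayed) channel
        gains_fb = np.abs(h_fb) ** 2
        # best-n feedback: each user reports only its n strongest subchannels
        reported = np.zeros_like(gains_fb)
        idx = np.argsort(gains_fb, axis=1)[:, -best_n:]
        np.put_along_axis(reported, idx, np.take_along_axis(gains_fb, idx, axis=1), axis=1)
        # greedy scheduler: each subchannel goes to the user with the best reported gain
        winners = reported.argmax(axis=0)
        # rate is adapted to the reported gain but realised on the true (aged) channel
        rate_sched = np.log2(1 + snr * reported[winners, np.arange(n_sub)])
        rate_true = np.log2(1 + snr * np.abs(h_now[winners, np.arange(n_sub)]) ** 2)
        ok = rate_true >= rate_sched        # outage when the aged channel cannot support the scheduled rate
        return rate_sched[ok].sum(), (~ok).mean()

    tp, outage = map(np.mean, zip(*(one_slot() for _ in range(2000))))
    print(f"throughput/slot ~ {tp:.1f} bits, outage ~ {outage:.2%}")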

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we present Bi-Modal Cache - a flexible stacked DRAM cache organization which simultaneously achieves several objectives: (i) improved cache hit ratio, (ii) moving the tag storage overhead to DRAM, (iii) lower cache hit latency than tags-in-SRAM, and (iv) reduction in off-chip bandwidth wastage. The Bi-Modal Cache addresses the miss rate versus off-chip bandwidth dilemma by organizing the data in a bi-modal fashion - blocks with high spatial locality are organized as large blocks and those with little spatial locality as small blocks. By adaptively selecting the right granularity of storage for individual blocks at run-time, the proposed DRAM cache organization is able to make judicious use of the available DRAM cache capacity as well as reduce the off-chip memory bandwidth consumption. The Bi-Modal Cache improves cache hit latency despite moving the metadata to DRAM by means of a small SRAM-based Way Locator. Further, by leveraging the tremendous internal bandwidth and capacity that stacked DRAM organizations provide, the Bi-Modal Cache enables efficient concurrent accesses to tags and data to reduce hit time. Through detailed simulations, we demonstrate that the Bi-Modal Cache achieves an overall performance improvement (in terms of Average Normalized Turnaround Time (ANTT)) of 10.8%, 13.8% and 14.0% for 4-core, 8-core and 16-core workloads, respectively.
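
A toy sketch of the bi-modal fill decision, assuming a simple per-region spatial-locality counter chooses between a large and a small fill on a miss; the actual predictor, tag organization, and Way Locator of the Bi-Modal Cache are not described at this level of detail in the abstract.

    SMALL, LARGE = 64, 512          # block sizes in bytes (illustrative values)

    class BiModalFill:
        def __init__(self, threshold=3):
            self.touched = {}        # region address -> distinct small blocks touched recently
            self.threshold = threshold

        def record_access(self, addr):
            region = addr // LARGE
            self.touched.setdefault(region, set()).add((addr % LARGE) // SMALL)

        def fill_granularity(self, addr):
            """On a miss, fetch a large block only if the region has shown spatial locality;
            otherwise fetch a small block and save off-chip bandwidth."""
            region = addr // LARGE
            return LARGE if len(self.touched.get(region, ())) >= self.threshold else SMALL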

Relevance:

20.00%

Publisher:

Abstract:

In this paper, a new technique is presented to increase the bandwidth of a single-stage amplifier. Usually, the -3 dB bandwidth of a single-stage amplifier is limited to a few MHz. High output impedance together with capacitive loading decreases the bandwidth of the amplifier. The presented technique uses a load which itself acts as a bandwidth enhancer. This high-speed amplifier is designed in 180 nm CMOS technology and operates from a 2.5 V power supply. The amplifier is followed by an output buffer to achieve better linearity, high output swing, and the output impedance required for matching.
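
The bandwidth limitation mentioned above comes from the dominant pole formed by the amplifier's high output resistance and the capacitive load; a small back-of-the-envelope check with assumed, purely illustrative component values:

    import math

    R_out = 100e3      # output resistance of the single-stage amplifier, ohms (assumed)
    C_load = 1e-12     # capacitive loading at the output, farads (assumed)
    f_3dB = 1 / (2 * math.pi * R_out * C_load)
    print(f"-3 dB bandwidth ~ {f_3dB / 1e6:.2f} MHz")   # ~1.59 MHz: a few MHz, as stated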

Relevance:

20.00%

Publisher:

Abstract:

The StreamIt programming model has been proposed to exploit parallelism in streaming applications on general-purpose multi-core architectures. This model allows programmers to specify the structure of a program as a set of filters that act upon data, and a set of communication channels between them. StreamIt graphs describe task, data, and pipeline parallelism, which can be exploited on modern Graphics Processing Units (GPUs), as they support abundant parallelism in hardware. In this paper, we describe the challenges in mapping StreamIt to GPUs and propose an efficient technique to software-pipeline the execution of stream programs on GPUs. We formulate this problem - both the scheduling and the assignment of filters to processors - as an efficient Integer Linear Program (ILP), which is then solved using ILP solvers. We also describe a novel buffer layout technique for GPUs which facilitates exploiting the high memory bandwidth available in GPUs. The proposed scheduling utilizes both the scalar units in the GPU, to exploit data parallelism, and the multiprocessors, to exploit task and pipeline parallelism. Further, it takes into consideration the synchronization and bandwidth limitations of GPUs, and yields speedups between 1.87X and 36.83X over a single-threaded CPU.
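
The ILP itself is not given in the abstract; as a stand-in only, the following greedy load-balancing heuristic illustrates the filter-to-processor assignment problem that the ILP solves optimally (filter names and work estimates below are hypothetical).

    import heapq

    def assign_filters(filter_work, n_processors):
        """Place each filter (heaviest first) on the currently least-loaded processor,
        approximating the load balance that an exact ILP formulation would optimize."""
        loads = [(0.0, p, []) for p in range(n_processors)]
        heapq.heapify(loads)
        for name, work in sorted(filter_work.items(), key=lambda kv: -kv[1]):
            load, p, assigned = heapq.heappop(loads)
            assigned.append(name)
            heapq.heappush(loads, (load + work, p, assigned))
        return {p: (load, assigned) for load, p, assigned in loads}

    print(assign_filters({"src": 1.0, "fir": 8.0, "fft": 20.0, "scale": 4.0, "sink": 1.5}, 2))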

Relevance:

20.00%

Publisher:

Abstract:

Bandwidth allocation for multimedia applications in the case of network congestion and failure poses technical challenges due to the bursty and delay-sensitive nature of the applications. The growth of multimedia services on the Internet and the development of agent technology have led us to investigate new techniques for resolving bandwidth issues in multimedia communications. Agent technology is emerging as a flexible and promising solution for network resource management and QoS (Quality of Service) control in a distributed environment. In this paper, we propose an adaptive bandwidth allocation scheme for multimedia applications that deploys static and mobile agents. It is a run-time allocation scheme that functions at the network nodes. The technique adaptively finds an alternate patch-up route for every congested/failed link and reallocates the bandwidth for the affected multimedia applications. The designed method has been tested (analytically and through simulation) with various network sizes and conditions. The results are presented to assess the performance and effectiveness of the approach. This work also demonstrates some of the benefits of agent-based schemes in providing flexibility, adaptability, software reusability, and maintainability. (C) 2004 Elsevier Inc. All rights reserved.
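
A minimal sketch of the patch-up step described above: when a link carrying a flow fails or congests, search for an alternate route around that single link whose residual bandwidth still covers the flow's demand. The static/mobile agent machinery and the adaptation policy are omitted; the graph, capacities, and demand below are hypothetical.

    from collections import deque

    def patchup_route(adjacency, capacity, bad_link, demand):
        """BFS from one endpoint of the failed/congested link to the other, using only
        links with at least `demand` residual bandwidth and skipping the bad link."""
        u, v = bad_link
        parents, frontier = {u: None}, deque([u])
        while frontier:
            x = frontier.popleft()
            if x == v:
                path, n = [], v
                while n is not None:
                    path.append(n)
                    n = parents[n]
                return path[::-1]
            for y in adjacency[x]:
                link = tuple(sorted((x, y)))
                if y not in parents and link != tuple(sorted(bad_link)) and capacity[link] >= demand:
                    parents[y] = x
                    frontier.append(y)
        return None   # no alternate route with enough residual bandwidth

    adj = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
    cap = {("A", "B"): 10, ("A", "C"): 6, ("B", "D"): 8, ("C", "D"): 6}
    print(patchup_route(adj, cap, ("A", "B"), demand=5))   # reroutes via ['A', 'C', 'D', 'B']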

Relevance:

20.00%

Publisher:

Abstract:

The move towards IT outsourcing is the first step towards an environment where compute infrastructure is treated as a service. In utility computing, this IT service has to honor Service Level Agreements (SLAs) in order to meet the desired Quality of Service (QoS) guarantees. Such an environment requires reliable services in order to maximize the utilization of the resources and to decrease the Total Cost of Ownership (TCO). Such reliability cannot come at the cost of resource duplication, since that increases the TCO of the data center and hence the cost per compute unit. In this paper, we look into aspects of projecting the impact of hardware failures on SLAs and the techniques required to take proactive recovery steps in case of a predicted failure. By maintaining health vectors of all hardware and system resources, we predict the failure probability of resources at runtime, based on observed hardware errors/failure events. This in turn enables an availability-aware middleware to take proactive action, even before the application is affected, in case the system and the application have low recoverability. The proposed framework has been prototyped on a system running HP-UX. Our offline analysis of the prediction system on hardware error logs indicates no more than 10% false positives. To the best of our knowledge, this work is the first of its kind to perform an end-to-end analysis of the impact of a hardware fault on application SLAs in a live system.
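
A toy sketch of the proactive-recovery decision the abstract outlines, assuming a hypothetical health vector of recent correctable-error counts per resource, an illustrative mapping from that vector to a failure probability, and a fixed migration threshold; none of these specifics are given in the abstract.

    import math

    def failure_probability(error_counts, weights=(0.5, 0.3, 0.2)):
        """Map a health vector of recent error counts (most recent window first)
        to a probability via an illustrative weighted score (not the paper's model)."""
        score = sum(w * c for w, c in zip(weights, error_counts))
        return 1.0 - math.exp(-0.1 * score)

    def proactive_step(resources, threshold=0.6):
        """Return the resources whose predicted failure probability warrants migrating
        the applications hosted on them before an SLA violation occurs."""
        return [r for r, health in resources.items()
                if failure_probability(health) >= threshold]

    hosts = {"node1": (1, 0, 0), "node2": (12, 9, 4), "node3": (0, 0, 1)}
    print(proactive_step(hosts))    # node2's rising error counts flag it for proactive recovery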

Relevance:

20.00%

Publisher:

Abstract:

A simple and efficient algorithm for the bandwidth reduction of sparse symmetric matrices is proposed. It involves column-row permutations and is well suited to mapping onto the linear array topology of SIMD architectures. The efficiency of the algorithm is compared with that of other existing algorithms. The interconnectivity and memory requirement of the linear array are discussed, and the complexity of its layout area is derived. The parallel version of the algorithm mapped onto the linear array is then introduced and explained with the help of an example. The optimality of the parallel algorithm is proved by deriving the time complexities of the algorithm on a single processor and on the linear array.
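
As a point of reference for the quantity being minimized, a short sketch of how the bandwidth of a sparse symmetric matrix is measured and how a symmetric (column-row) permutation changes it; this is a generic illustration, not the proposed algorithm or its SIMD linear-array mapping.

    import numpy as np

    def bandwidth(A):
        """Half-bandwidth of a symmetric matrix: max |i - j| over nonzero entries."""
        i, j = np.nonzero(A)
        return int(np.max(np.abs(i - j))) if len(i) else 0

    def permute_sym(A, p):
        """Apply the same permutation p to rows and columns (a column-row permutation)."""
        return A[np.ix_(p, p)]

    A = np.array([[4, 0, 0, 1],
                  [0, 4, 1, 0],
                  [0, 1, 4, 0],
                  [1, 0, 0, 4]])
    p = [0, 3, 1, 2]                                    # reorder so coupled rows become adjacent
    print(bandwidth(A), bandwidth(permute_sym(A, p)))   # bandwidth drops from 3 to 1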

Relevance:

20.00%

Publisher:

Abstract:

A large external memory bandwidth requirement leads to increased system power dissipation and cost in video coding applications. The majority of the external memory traffic in a video encoder is due to reference data accesses. We describe a lossy reference frame compression technique that can be used in video coding with minimal impact on quality while significantly reducing the power and bandwidth requirements. The low-cost, transform-less compression technique uses a lossy reference for motion estimation to reduce memory traffic, and a lossless reference for motion compensation (MC) to avoid drift. Thus, it is compatible with all existing video standards. We calculate the quantization error bound and show that, by storing the quantization error separately, the bandwidth overhead due to MC can be reduced significantly. The technique meets key requirements specific to the video encode application. A 24-39% reduction in peak bandwidth and a 23-31% reduction in total average power consumption are observed for IBBP sequences.
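
A minimal sketch of the store-the-error-separately idea, assuming a uniform scalar quantizer with an illustrative step size (the paper's actual quantizer, error bound, and memory layout are not given in the abstract): the lossy copy serves motion estimation, while adding back the stored residual restores the exact reference for motion compensation.

    import numpy as np

    Q = 8                                        # quantization step (assumed)

    def compress_reference(frame):
        """Split a reference frame into a lossy part (used for ME) and the small,
        bounded quantization error (fetched only for MC, for lossless reconstruction)."""
        lossy = (frame // Q) * Q + Q // 2        # reconstruct at bin midpoints
        error = frame.astype(np.int16) - lossy   # magnitude bounded by Q/2
        return lossy.astype(np.int16), error.astype(np.int8)

    frame = np.random.default_rng(1).integers(0, 256, size=(8, 8), dtype=np.int16)
    lossy, err = compress_reference(frame)
    assert np.array_equal(lossy + err, frame)    # lossless reconstruction for MC, no drift
    print("max |quantization error| =", int(np.abs(err).max()), "<=", Q // 2)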

Relevance:

20.00%

Publisher:

Abstract:

In the distributed storage setting that we consider, data is stored across n nodes in the network such that the data can be recovered by connecting to any subset of k nodes. Additionally, one can repair a failed node by connecting to any d nodes while downloading β units of data from each. Dimakis et al. show that the repair bandwidth dβ can be considerably reduced if each node stores slightly more than the minimum required, and they characterize the tradeoff between the amount of storage per node and the repair bandwidth. In the exact regeneration variation, unlike functional regeneration, the replacement for a failed node is required to store data identical to that in the failed node. This greatly reduces the complexity of system maintenance. The main result of this paper is an explicit construction of codes for all values of the system parameters at one of the two most important and extreme points of the tradeoff - the Minimum Bandwidth Regenerating point - which performs optimal exact regeneration of any failed node. A second result is a non-existence proof showing that, with one possible exception, no other point on the tradeoff can be achieved for exact regeneration.
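
For context, the Minimum Bandwidth Regenerating (MBR) point mentioned above is the extreme point of the Dimakis et al. tradeoff at which a node stores no more than it downloads during a repair; with file size B, per-node storage α, and β units downloaded from each of d helpers, it is usually characterized by

    \alpha_{\mathrm{MBR}} = d\,\beta_{\mathrm{MBR}} = \frac{2Bd}{k(2d-k+1)},
    \qquad \beta_{\mathrm{MBR}} = \frac{2B}{k(2d-k+1)}.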

Relevance:

20.00%

Publisher:

Abstract:

The convective available potential energy (CAPE) based on monthly mean soundings has been shown to be relevant to deep convection in the tropics. The variation of CAPE with SST has been found to be similar to the variation of the frequency of deep convection at one station each in the tropical Atlantic and western Pacific Oceans. This suggests a strong link between the frequency of tropical convection and CAPE. It has been shown that CAPE so derived can be interpreted as the work potential of the atmosphere above the boundary layer, with ascent in the convective region and subsidence in the surrounding cloud-free region.
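
For reference, the standard definition of CAPE is the vertically integrated parcel buoyancy between the level of free convection (LFC) and the equilibrium level (EL); the quantity discussed above is this integral evaluated from monthly mean soundings:

    \mathrm{CAPE} = \int_{z_{\mathrm{LFC}}}^{z_{\mathrm{EL}}} g\,
    \frac{T_{v,\mathrm{parcel}} - T_{v,\mathrm{env}}}{T_{v,\mathrm{env}}}\, dz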

Relevance:

20.00%

Publisher:

Abstract:

Interest in low bit rate video coding has increased considerably. Despite rapid progress in storage density and digital communication system performance, demand for data-transmission bandwidth and storage capacity continues to exceed the capabilities of available technologies. The growth of data-intensive digital audio and video applications, and the increased use of bandwidth-limited media such as video conferencing and full-motion video, have not only sustained the need for efficient ways to encode analog signals, but have also made signal compression central to digital communication and data-storage technology. In this paper we explore techniques for compressing image sequences in a manner that optimizes the results for the human receiver. We propose a new motion estimator using two novel block match algorithms which are based on human perception. Simulations with image sequences have shown an improved bit rate while maintaining "image quality" when compared to conventional motion estimation techniques using the MAD block match criterion.
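
For reference, the conventional mean absolute difference (MAD) block-match criterion that the proposed perception-based matchers are compared against; this is a generic full-search sketch, not the new algorithms.

    import numpy as np

    def mad(block, candidate):
        """Mean absolute difference between a current block and a candidate reference block."""
        return np.mean(np.abs(block.astype(np.int16) - candidate.astype(np.int16)))

    def full_search(cur, ref, bx, by, bsize=8, radius=4):
        """Return the motion vector (dx, dy) minimizing MAD within +/- radius pixels."""
        block = cur[by:by + bsize, bx:bx + bsize]
        best, best_mv = np.inf, (0, 0)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                y, x = by + dy, bx + dx
                if 0 <= y and 0 <= x and y + bsize <= ref.shape[0] and x + bsize <= ref.shape[1]:
                    cost = mad(block, ref[y:y + bsize, x:x + bsize])
                    if cost < best:
                        best, best_mv = cost, (dx, dy)
        return best_mv, best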

Relevance:

20.00%

Publisher:

Abstract:

The half-duplex constraint, which mandates that a cooperative relay cannot transmit and receive simultaneously, considerably simplifies the demands made on the hardware and signal processing capabilities of a relay. However, the very inability of a relay to transmit and receive simultaneously leads to a potential under-utilization of time and bandwidth resources available to the system. We analyze the impact of the half-duplex constraint on the throughput of a cooperative relay system that uses rateless codes to harness spatial diversity and efficiently transmit information from a source to a destination. We derive closed-form expressions for the throughput of the system, and show that as the number of relays increases, the throughput approaches that of a system that uses more sophisticated full-duplex nodes. Thus, half-duplex nodes are well suited for cooperation using rateless codes despite the simplicity of both the cooperation protocol and the relays.
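
The following is only an idealized toy model, not the paper's closed-form analysis: it assumes mutual-information accumulation over i.i.d. Rayleigh links at an assumed SNR, with decoded relays switching from listening to transmitting. It illustrates the trend that throughput improves as half-duplex relays are added.

    import numpy as np

    rng = np.random.default_rng(0)

    def slots_to_deliver(b_bits, n_relays, snr=5.0):
        # In each slot, every node that has already decoded transmits (source + decoded
        # relays); each still-listening node accumulates log2(1 + snr*|h|^2) bits per
        # active transmitter over i.i.d. Rayleigh links (exponential power gains).
        acc = np.zeros(n_relays + 1)               # index 0 = destination, 1.. = relays
        decoded = np.zeros(n_relays + 1, dtype=bool)
        slots = 0
        while acc[0] < b_bits:
            slots += 1
            n_tx = 1 + int(decoded[1:].sum())      # half-duplex: decoded relays transmit, not listen
            listeners = ~decoded
            gains = rng.exponential(1.0, size=(n_tx, n_relays + 1))
            acc[listeners] += np.log2(1 + snr * gains).sum(axis=0)[listeners]
            decoded[1:] = acc[1:] >= b_bits
        return slots

    b = 100.0
    for m in (0, 1, 2, 4):
        t = np.mean([slots_to_deliver(b, m) for _ in range(200)])
        print(f"{m} relays: ~{b / t:.1f} bits/slot")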

Relevance:

20.00%

Publisher:

Abstract:

A distributed storage setting is considered where a file of size B is to be stored across n storage nodes. A data collector should be able to reconstruct the entire data by downloading the symbols stored in any k nodes. When a node fails, it is replaced by a new node that downloads data from some of the existing nodes. The amount of data downloaded is termed the repair bandwidth. One way to implement such a system is to store one fragment of an (n, k) MDS code in each node, in which case the repair bandwidth is B. Since the repair of a failed node consumes network bandwidth, codes reducing the repair bandwidth are of great interest. Most of the recent work in this area focuses on reducing the repair bandwidth of a set of k nodes which store the data in uncoded form, while the reduction in the repair bandwidth of the remaining nodes is only marginal. In this paper, we present an explicit code which reduces the repair bandwidth for all the nodes to approximately B/2. To the best of our knowledge, this is the first explicit code which reduces the repair bandwidth of all the nodes for all feasible values of the system parameters.
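
A small worked example of the baseline figure quoted above, with illustrative numbers only (the construction achieving roughly B/2 for all nodes is the paper's contribution and is not reproduced here): for a file of B = 12 units stored with a (6, 3) MDS code,

    \text{per-node storage} = \tfrac{B}{k} = 4, \qquad
    \text{MDS repair download} = k \cdot \tfrac{B}{k} = B = 12, \qquad
    \text{proposed code} \approx \tfrac{B}{2} = 6.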