957 results for bandwidth
Abstract:
The amount of data contained in electroencephalogram (EEG) recordings is quite massive, and this places constraints on bandwidth and storage. The requirement of online transmission of data calls for a scheme that allows higher performance with lower computation. Single-channel algorithms, when applied to multichannel EEG data, fail to meet this requirement. While many methods have been proposed for multichannel ECG compression, little work appears to have been done on multichannel EEG compression. In this paper, we present an EEG compression algorithm based on a multichannel model, which gives higher performance compared to other algorithms. Simulations have been performed on both normal and pathological EEG data, and a high compression ratio with very high SNR is obtained in both cases. The reconstructed signals match the original signals very closely, confirming that diagnostic information is preserved during transmission.
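The abstract above reports quality in terms of compression ratio and SNR. As a minimal illustration of how these two reconstruction metrics are conventionally computed (this is not the paper's multichannel algorithm; the toy sine signal and noise level are assumptions for the example):

```python
import numpy as np

def compression_ratio(original_bits, compressed_bits):
    """Ratio of original to compressed size; higher is better."""
    return original_bits / compressed_bits

def reconstruction_snr_db(x, x_hat):
    """SNR (dB) between an original signal x and its reconstruction x_hat."""
    x = np.asarray(x, dtype=float)
    x_hat = np.asarray(x_hat, dtype=float)
    noise = x - x_hat
    return 10.0 * np.log10(np.sum(x ** 2) / np.sum(noise ** 2))

# Toy example: a 10 Hz sine standing in for one EEG channel, with a
# small additive reconstruction error.
t = np.linspace(0, 1, 256)
x = np.sin(2 * np.pi * 10 * t)
x_hat = x + 0.001 * np.random.default_rng(0).standard_normal(t.size)
print(round(reconstruction_snr_db(x, x_hat), 1))
```

A near-lossless reconstruction like this one yields an SNR of several tens of dB, which is the regime the abstract refers to as preserving diagnostic information.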
Abstract:
We provide a filterbank precoding framework (FBP) for frequency selective channels using the minimum mean squared error (MMSE) criterion. The design obviates the need for introducing a guard interval between successive blocks, and hence can achieve the maximum possible bandwidth efficiency. This is especially useful in cases where the channel is of a high order. We treat both the presence and the absence of channel knowledge at the transmitter. In the former case, we obtain the jointly optimal precoder-equalizer pair of the specified order. In the latter case, we use a zero padding precoder, and obtain the MMSE equalizer. No restriction on the dimension or nature of the channel matrix is imposed. Simulation results indicate that the filterbank approach outperforms block based methods like OFDM and eigenmode precoding.
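As background to the MMSE criterion used above, here is a minimal sketch of the textbook linear MMSE equalizer W = (H^H H + sigma^2 I)^{-1} H^H for a channel y = Hx + n with known channel matrix H. This is not the paper's jointly optimized filterbank precoder-equalizer pair; the channel dimensions, noise level, and BPSK symbols are illustrative assumptions:

```python
import numpy as np

def mmse_equalizer(H, noise_var):
    """Textbook linear MMSE equalizer for y = H x + n with unit-power
    symbols: W = (H^H H + noise_var * I)^{-1} H^H."""
    n = H.shape[1]
    return np.linalg.solve(H.conj().T @ H + noise_var * np.eye(n), H.conj().T)

rng = np.random.default_rng(1)
H = rng.standard_normal((6, 4)) + 1j * rng.standard_normal((6, 4))
x = np.sign(rng.standard_normal(4))          # BPSK symbols (+/-1)
y = H @ x + 0.01 * (rng.standard_normal(6) + 1j * rng.standard_normal(6))
x_hat = mmse_equalizer(H, 1e-4) @ y          # equalized soft estimates
print(np.sign(x_hat.real))
```

At low noise the MMSE solution approaches the zero-forcing pseudo-inverse; the regularizing noise_var term is what distinguishes the two criteria.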
Abstract:
The ability of a metal to resist strain localisation, and hence reduction in local thickness, is one of the most important forming properties upon stretching. The uniform strain represents in this regard a critical factor for describing stretching ability, especially when the material under consideration exhibits negative strain rate sensitivity and dynamic strain ageing (DSA). A newly developed Laser Speckle Technique (LST), e.g. see [1], was used in-situ during tensile testing with two extensometers. The applied technique provides quantitative information on the propagating plasticity (i.e. the so-called PLC bands) known to take place during deformation where DSA is active. The band velocity (V-band) and the bandwidth (W-band) were monitored with increasing accumulated strain. The knowledge obtained with the LST was useful for understanding the underlying mechanisms of the formability limit when DSA and negative strain rate sensitivity operate. The goal was to understand the relationship between PLC/DSA phenomena and the formability limit, physically manifested as shear band formation. Two principally different alloys were used to discover alloying effects.
Abstract:
The half-duplex constraint, which mandates that a cooperative relay cannot transmit and receive simultaneously, considerably simplifies the demands made on the hardware and signal processing capabilities of a relay. However, the very inability of a relay to transmit and receive simultaneously leads to a potential under-utilization of time and bandwidth resources available to the system. We analyze the impact of the half-duplex constraint on the throughput of a cooperative relay system that uses rateless codes to harness spatial diversity and efficiently transmit information from a source to a destination. We derive closed-form expressions for the throughput of the system, and show that as the number of relays increases, the throughput approaches that of a system that uses more sophisticated full-duplex nodes. Thus, half-duplex nodes are well suited for cooperation using rateless codes despite the simplicity of both the cooperation protocol and the relays.
Abstract:
This paper presents the design of a broadband antenna suitable for wireless communications operating over the frequency range of 3.1-10.6 GHz. Parametric studies on the effect of stub and elliptic slot have been carried out to arrive at optimum dimensions to achieve enhanced bandwidth of the proposed antenna. An experimental antenna has been designed and tested to validate the proposed design. Measured return loss characteristics have been compared against the simulation results. Simulated radiation patterns at 3.1 GHz, 6.85 GHz and 10.6 GHz have also been presented in this paper.
Abstract:
A powder neutron diffraction study has been carried out at 300 and 10 K on La0.85Pb0.15Mn1-xTixO3 (0 ≤ x ≤ 0.15). The samples crystallize in the rhombohedral phase. The magnetic moment decreases nonlinearly with increasing Ti content and correlates well with the reported behavior of T-C. The change in the moment and T-C could not be related to a change in the one-electron bandwidth, W. The reduction is instead attributed to the effect of dilution, which weakens the double-exchange ferromagnetic interaction.
Abstract:
Hybrid wireless networks are extensively used in superstores, marketplaces, malls, etc., and providing high QoS (Quality of Service) to end-users in such networks has become a challenging task. In this paper, we propose a policy-based transaction-aware QoS management architecture for a hybrid wireless superstore environment. The proposed scheme operates at the transaction level for downlink QoS management. We derive a policy for the estimation of QoS parameters, such as delay, jitter, bandwidth, availability, and packet loss, for every transaction before scheduling on the downlink. We also propose a QoS monitor, which monitors the specified QoS and automatically adjusts it according to the requirement. The proposed scheme has been simulated in a hybrid wireless superstore environment and tested for various superstore transactions. The results show that policy-based transaction QoS management enhances performance and utilizes network resources efficiently at the peak time of superstore business.
Abstract:
In this paper, we outline an approach to the task of designing network codes in a non-multicast setting. Our approach makes use of the concept of interference alignment. As an example, we consider the distributed storage problem where the data is stored across the network in n nodes and where a data collector can recover the data by connecting to any k of the n nodes and where furthermore, upon failure of a node, a new node can replicate the data stored in the failed node while minimizing the repair bandwidth.
Abstract:
In this work, we evaluate the performance of a real-world image processing application that uses a cross-correlation algorithm to compare a given image with a reference one. The algorithm processes individual images represented as 2-dimensional matrices of single-precision floating-point values using O(n^4) operations involving dot-products and additions. We implement this algorithm on an nVidia GTX 285 GPU using CUDA, and also parallelize it for the Intel Xeon (Nehalem) and IBM Power7 processors, using both manual and automatic techniques. Pthreads and OpenMP with SSE and VSX vector intrinsics are used for the manually parallelized version, while a state-of-the-art optimization framework based on the polyhedral model is used for automatic compiler parallelization and optimization. The performance of this algorithm on the nVidia GPU suffers from: (1) a smaller shared memory, (2) unaligned device memory access patterns, (3) expensive atomic operations, and (4) weaker single-thread performance. On commodity multi-core processors, the application dataset is small enough to fit in caches, and when parallelized using a combination of task and short-vector data parallelism (via SSE/VSX) or through fully automatic optimization from the compiler, the application matches or beats the performance of the GPU version. The primary reasons for better multi-core performance include larger and faster caches, higher clock frequency, higher on-chip memory bandwidth, and better compiler optimization and support for parallelization. The best performing versions on the Power7, Nehalem, and GTX 285 run in 1.02 s, 1.82 s, and 1.75 s, respectively. These results conclusively demonstrate that, under certain conditions, it is possible for a FLOP-intensive structured application running on a multi-core processor to match or even beat the performance of an equivalent GPU version.
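For context on the kind of kernel being parallelized, here is a minimal normalized cross-correlation score between two equal-size images, written with the same dot-product-and-addition structure the abstract describes. This is an illustrative NumPy sketch, not the authors' CUDA/SSE/VSX implementation; the image size is an assumption:

```python
import numpy as np

def ncc(image, ref):
    """Normalized cross-correlation score between two equal-size
    single-precision images (zero-mean dot-product form, in [-1, 1])."""
    a = image.astype(np.float32) - image.mean(dtype=np.float64)
    b = ref.astype(np.float32) - ref.mean(dtype=np.float64)
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

rng = np.random.default_rng(0)
ref = rng.standard_normal((64, 64)).astype(np.float32)
print(round(ncc(ref, ref), 3))   # identical images correlate at 1.0
```

Evaluating this score for every shift of one image over the other is what produces the O(n^4) operation count mentioned above.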
Abstract:
In the distributed storage coding problem we consider, data is stored across n nodes in a network, each capable of storing α symbols. It is required that the complete data can be reconstructed by downloading data from any k nodes. There is also the key additional requirement that a failed node be regenerated by connecting to any d nodes and downloading β symbols from each of them. Our goal is to minimize the repair bandwidth dβ. In this paper we provide explicit constructions for several parameter sets of interest.
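The (α, β) notation above follows the standard regenerating-codes framework. As a reference point, the minimum-storage (MSR) and minimum-bandwidth (MBR) operating points can be computed from the file size B and the parameters (k, d) using the well-known cut-set formulas from that literature; this sketch shows those formulas, not this paper's explicit constructions:

```python
from fractions import Fraction

def msr_point(B, k, d):
    """Minimum Storage Regenerating point: per-node storage alpha = B/k,
    per-helper download beta, and total repair bandwidth d*beta."""
    alpha = Fraction(B, k)
    beta = Fraction(B, k * (d - k + 1))
    return alpha, beta, d * beta

def mbr_point(B, k, d):
    """Minimum Bandwidth Regenerating point: repair bandwidth is minimal
    and equals the per-node storage (alpha = d*beta)."""
    beta = Fraction(2 * B, k * (2 * d - k + 1))
    return d * beta, beta, d * beta

B, k, d = 12, 2, 3
print(msr_point(B, k, d))   # alpha = 6, beta = 3, repair bandwidth 9
print(mbr_point(B, k, d))
```

Note the tradeoff: MSR stores the minimum per node but repairs cost more than one node's worth of data, while MBR downloads only exactly what the replacement node stores.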
Abstract:
Conventional hardware implementation techniques for FIR filters require the computation of filter coefficients in software, which are then stored in memory. This approach is static in the sense that any further fine-tuning of the filter requires computation of new coefficients in software. In this paper, we propose an alternate technique for implementing FIR filters in hardware. We store a considerably large number of impulse response coefficients of the ideal filter (having a box-type frequency response) in memory. We then perform the windowing process on these coefficients in hardware, using integer sequences as window functions. The integer sequences are also generated in hardware. This approach offers flexibility in fine-tuning the filter, such as varying the transition bandwidth around a particular cutoff frequency.
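The window-method design described above can be sketched in a few lines: the ideal box-response filter has a sinc impulse response, which is truncated and multiplied pointwise by a window. Here a standard Bartlett (triangular) window stands in for the paper's hardware-generated integer sequences, and the tap count and cutoff are illustrative assumptions:

```python
import numpy as np

def windowed_lowpass(num_taps, cutoff, window):
    """Window-method FIR design: the ideal (box-response) sinc impulse
    response, truncated to num_taps and multiplied by a window.
    cutoff is the normalized cutoff frequency (0..1, 1 = Nyquist)."""
    n = np.arange(num_taps) - (num_taps - 1) / 2.0   # centered tap indices
    h_ideal = cutoff * np.sinc(cutoff * n)           # ideal lowpass taps
    return h_ideal * window

taps = 31
win = np.bartlett(taps)                 # triangular window, peak 1 at center
h = windowed_lowpass(taps, 0.25, win)
print(round(float(h[taps // 2]), 2))    # center tap = cutoff = 0.25
```

Changing the window (or its length) while keeping the stored ideal coefficients fixed is what lets the transition bandwidth be tuned around a given cutoff, as the abstract notes.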
Abstract:
The prevalent virtualization technologies provide QoS support within the software layers of the virtual machine monitor (VMM) or the operating system of the virtual machine (VM). The QoS features are mostly provided as extensions to the existing software used for accessing the I/O device, because of which the applications sharing the I/O device experience loss of performance due to crosstalk effects or reduced usable bandwidth. In this paper we examine the NIC sharing effects across VMs on a Xen virtualized server and present an alternate paradigm that improves the shared bandwidth and reduces the crosstalk effect on the VMs. We implement the proposed hardware-software changes in a layered queuing network (LQN) model and use simulation techniques to evaluate the architecture. We find that simple changes in the device architecture and associated system software lead to application throughput improvement of up to 60%. The architecture also enables finer QoS controls at the device level and increases the scalability of device sharing across multiple virtual machines. We find that the performance improvement derived using the LQN model is comparable to that reported by similar but real implementations.
Abstract:
Antenna selection allows multiple-antenna systems to achieve most of their promised diversity gain, while keeping the number of RF chains and, thus, cost/complexity low. In this paper we investigate antenna selection for fourth-generation OFDMA-based cellular communications systems, in particular, 3GPP LTE (long-term evolution) systems. We propose a training method for antenna selection that is especially suitable for OFDMA. By means of simulation, we evaluate the SNR gain that can be achieved with our design. We find that the performance depends on the bandwidth assigned to each user, the scheduling method (round-robin or frequency-domain scheduling), and the Doppler spread. Furthermore, the signal-to-noise ratio of the training sequence plays a critical role. Typical SNR gains are around 2 dB, with larger values obtainable in certain circumstances.
Abstract:
A built-in-self-test (BIST) subsystem embedded in a 65-nm mobile broadcast video receiver is described. The subsystem is designed to perform analog and RF measurements at multiple internal nodes of the receiver. It uses a distributed network of CMOS sensors and a low-bandwidth, 12-bit A/D converter to perform the measurements, with a serial bus interface enabling digital transfer of measured data to automatic test equipment (ATE). A perturbation/correlation-based BIST method is described, which makes pass/fail determinations on parts, resulting in significant test time and cost reduction.
Abstract:
Regenerating codes are a class of distributed storage codes that allow for efficient repair of failed nodes, as compared to traditional erasure codes. An [n, k, d] regenerating code permits the data to be recovered by connecting to any k of the n nodes in the network, while requiring that a failed node be repaired by connecting to any d nodes. The amount of data downloaded for repair is typically much smaller than the size of the source data. Previous constructions of exact-regenerating codes have been confined to the case n = d + 1. In this paper, we present optimal, explicit constructions of (a) Minimum Bandwidth Regenerating (MBR) codes for all values of [n, k, d] and (b) Minimum Storage Regenerating (MSR) codes for all [n, k, d >= 2k - 2], using a new product-matrix framework. The product-matrix framework is also shown to significantly simplify system operation. To the best of our knowledge, these are the first constructions of exact-regenerating codes that allow the number n of nodes in the network to be chosen independently of the other parameters. The paper also contains a simpler description, in the product-matrix framework, of a previously constructed MSR code with [n = d + 1, k, d >= 2k - 1].