993 results for Relational complexity


Relevance:

20.00%

Publisher:

Abstract:

Recently, we reported a low-complexity likelihood ascent search (LAS) detection algorithm for large MIMO systems with several tens of antennas that can achieve high spectral efficiencies of the order of tens to hundreds of bps/Hz. Through simulations, we showed that this algorithm achieves increasingly near-SISO AWGN performance for an increasing number of antennas in i.i.d. Rayleigh fading. However, no bit error performance analysis of the algorithm was reported. In this paper, we extend our work on this low-complexity large-MIMO detector in two directions: i) we report an asymptotic bit error probability analysis of the LAS algorithm in the large system limit, where N_t, N_r → ∞ keeping N_t = N_r, and N_t and N_r are the number of transmit and receive antennas, respectively. Specifically, we prove that the error performance of the LAS detector for V-BLAST with 4-QAM in i.i.d. Rayleigh fading converges to that of the maximum-likelihood (ML) detector as N_t, N_r → ∞ keeping N_t = N_r. ii) We present simulated BER and nearness-to-capacity results for V-BLAST as well as high-rate non-orthogonal STBCs from Division Algebras (DA), in a more realistic spatially correlated MIMO channel model. Our simulation results show that a) at an uncoded BER of 10^-3, the performance of the LAS detector in decoding the 16 × 16 STBC from DA with N_t = 16 and 16-QAM degrades in spatially correlated fading by about 7 dB compared to that in i.i.d. fading, and b) with a rate-3/4 outer turbo code and 48 bps/Hz spectral efficiency, the performance degrades by about 6 dB at a coded BER of 10^-4. Our results further show that by providing asymmetry in the number of antennas such that N_r > N_t, keeping the total receiver array length the same as that for N_r = N_t, the detector is able to pick up the extra receive diversity, thereby significantly improving the BER performance.
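
A minimal sketch of the likelihood ascent search idea for antipodal symbols (e.g., the real decomposition of 4-QAM): starting from a linear-detector output, flip the single symbol that most reduces the ML cost ||y - Hx||^2 until no flip helps. The MMSE initialization, the real-valued model and the stopping rule are illustrative assumptions, not the exact algorithm analyzed in the paper.

```python
import numpy as np

def las_detect(H, y, x_init):
    """Greedy one-symbol-flip ascent on the ML cost ||y - H x||^2, x_k in {-1, +1}."""
    x = x_init.copy()
    r = y - H @ x                       # current residual
    col_norm2 = np.sum(H * H, axis=0)   # ||h_k||^2 for every column of H
    while True:
        z = H.T @ r
        # Cost change if symbol k alone is flipped: 4*x_k*z_k + 4*||h_k||^2
        delta = 4.0 * x * z + 4.0 * col_norm2
        k = int(np.argmin(delta))
        if delta[k] >= 0:               # no single flip lowers the cost: local optimum
            return x
        r = r + 2.0 * x[k] * H[:, k]    # update residual before flipping x_k
        x[k] = -x[k]

# Example: MMSE-initialized LAS on a random 8x8 real channel
rng = np.random.default_rng(0)
n = 8
H = rng.standard_normal((n, n))
x_true = rng.choice([-1.0, 1.0], size=n)
y = H @ x_true + 0.1 * rng.standard_normal(n)
x_mmse = np.where(np.linalg.solve(H.T @ H + 0.01 * np.eye(n), H.T @ y) >= 0, 1.0, -1.0)
print(np.array_equal(las_detect(H, y, x_mmse), x_true))
```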

Relevance:

20.00%

Publisher:

Abstract:

Complexity theory is an important and growing area in computer science that has caught the imagination of many researchers in mathematics, physics and biology. In order to reach out to a large section of scientists and engineers, this paper introduces elementary concepts in complexity theory in an informal manner, motivating the reader with many examples.

Relevance:

20.00%

Publisher:

Abstract:

The problem of sensor-network-based distributed intrusion detection in the presence of clutter is considered. It is argued that sensing is best regarded as a local phenomenon, in that only sensors in the immediate vicinity of an intruder are triggered. In such a setting, lack of knowledge of the intruder location gives rise to correlated sensor readings. A signal-space viewpoint is introduced in which the noise-free sensor readings associated with the intruder and with clutter appear as surfaces f_s and f_g, and the problem reduces to one of determining, in distributed fashion, whether the current noisy sensor reading is best classified as intruder or clutter. Two approaches to distributed detection are pursued. In the first, a decision surface separating f_s and f_g is identified using Neyman-Pearson criteria. Thereafter, the individual sensor nodes interactively exchange bits to determine whether the sensor readings are on one side or the other of the decision surface. Bounds on the number of bits that need to be exchanged are derived, based on communication-complexity (CC) theory. A lower bound derived for the two-party average-case CC of general functions is compared against the performance of a greedy algorithm. Extensions to the multi-party case are straightforward and are briefly discussed. The average-case CC of the relevant greater-than (GT) function is characterized within two bits. Under the second approach, each sensor node broadcasts a single bit arising from an appropriate two-level quantization of its own sensor reading, keeping in mind the fusion rule to be subsequently applied at a local fusion center. The optimality of a threshold test as a quantization rule is proved under simplifying assumptions. Finally, results from a QualNet simulation of the algorithms are presented, including intruder tracking using a naive polynomial-regression algorithm.
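
A minimal sketch of the second, one-bit-per-sensor approach described above: each node applies a threshold test to its noisy reading, and a fusion center declares an intruder when enough bits are set. The Gaussian noise model, the threshold value and the K-out-of-N fusion rule are illustrative assumptions rather than the paper's optimized design.

```python
import numpy as np

def sensor_bits(readings, threshold):
    """Two-level (one-bit) quantization at each sensor node."""
    return (readings > threshold).astype(int)

def fusion_decision(bits, k):
    """Declare 'intruder' (1) if at least k of the N sensors report a 1."""
    return int(bits.sum() >= k)

rng = np.random.default_rng(1)
n_sensors = 20
clutter_level, intruder_level, noise_std = 0.0, 1.0, 0.5

# Noisy readings under the two hypotheses; sensing is local, so only a handful
# of nodes near the intruder see an elevated level.
clutter = clutter_level + noise_std * rng.standard_normal(n_sensors)
intruder = clutter.copy()
intruder[:5] += intruder_level          # 5 nearby sensors are triggered

threshold, k = 0.6, 3
print("clutter  ->", fusion_decision(sensor_bits(clutter, threshold), k))
print("intruder ->", fusion_decision(sensor_bits(intruder, threshold), k))
```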

Relevance:

20.00%

Publisher:

Abstract:

This paper looks at the complexity of four different incremental problems: (1) interval partitioning of a flow graph, (2) breadth-first search (BFS) of a directed graph, (3) lexicographic depth-first search (DFS) of a directed graph, and (4) constructing the postorder listing of the nodes of a binary tree. The last problem arises out of the need to incrementally compute the Sethi-Ullman (SU) ordering [1] of the subtrees of a tree after it has undergone changes of a given type. These problems are among those that claimed our attention in the process of designing algorithmic techniques for incremental code generation. BFS and DFS certainly have numerous other applications, but as far as our work is concerned, incremental code generation is the common thread linking these problems. The study of the complexity of these problems is done from two different perspectives. The theory of incremental relative lower bounds (IRLBs) is given in [2]; we use this theory to derive the IRLBs of the first three problems. We then use the notion of a bounded incremental algorithm [4] to prove the unboundedness of the fourth problem with respect to the locally persistent model of computation. The lower bound result for lexicographic DFS is possibly the most interesting. In [5], the author considers lexicographic DFS to be a problem for which the incremental version may require recomputation of the entire solution from scratch. In that sense, our IRLB result provides further evidence for this possibility, with the proviso that the incremental DFS algorithms considered are ones that do not require too much preprocessing.
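
Since problem (4) is motivated by Sethi-Ullman ordering, a short from-scratch (non-incremental) sketch of the SU labelling and the postorder listing it induces may help fix ideas. The Node class and the recursion below are illustrative; the paper's concern is how to recompute such an ordering incrementally after the tree changes.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    name: str
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def su_label(node: Optional[Node]) -> int:
    """Classic Sethi-Ullman register count for a binary expression tree."""
    if node is None:
        return 0
    if node.left is None and node.right is None:
        return 1
    l, r = su_label(node.left), su_label(node.right)
    return max(l, r) if l != r else l + 1

def su_order(node: Optional[Node], out: List[str]) -> None:
    """Postorder listing that visits the register-heavier subtree first."""
    if node is None:
        return
    first, second = node.left, node.right
    if su_label(node.right) > su_label(node.left):   # recomputed here; fine for a sketch
        first, second = node.right, node.left
    su_order(first, out)
    su_order(second, out)
    out.append(node.name)

# (a + b) * (c + d)
tree = Node("*", Node("+", Node("a"), Node("b")), Node("+", Node("c"), Node("d")))
order: List[str] = []
su_order(tree, order)
print(su_label(tree), order)
```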

Relevance:

20.00%

Publisher:

Abstract:

Recently, Guo and Xia gave sufficient conditions for an STBC to achieve full diversity when a PIC (partial interference cancellation) or a PIC-SIC (PIC with successive interference cancellation) decoder is used at the receiver. In this paper, we give alternative conditions for an STBC to achieve full diversity with PIC and PIC-SIC decoders, which are equivalent to Guo and Xia's conditions but are much easier to check. Using these conditions, we construct a new class of full-diversity PIC-SIC decodable codes, which contain the Toeplitz codes and a family of codes recently proposed by Zhang, Xu et al. as proper subclasses. With the help of the new criteria, we also show that a class of PIC-SIC decodable codes recently proposed by Zhang, Shi et al. can be decoded with much lower complexity than reported, without compromising on full diversity.

Relevance:

20.00%

Publisher:

Abstract:

Zero-padded systems with linear receivers are shown to be robust and amenable to fast implementations in single-antenna scenarios. In this paper, properties of such systems are investigated when multiple antennas are present at both ends of the communication link. In particular, their diversity and complexity are evaluated for precoded transmissions. The linear receivers are shown to exploit multipath and receive diversities, even in the absence of any coding at the transmitter. The use of additional redundancy to improve performance is considered, and the effect of transmission rate on diversity order is analyzed. Low-complexity implementations of zero-forcing (ZF) receivers are devised to further enhance their applicability.
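
A minimal sketch of the zero-forcing linear receiver discussed above, for a flat model y = Hs + n with 4-QAM symbols. The random channel, the noise level and the plain pseudo-inverse implementation are illustrative; the paper's point is that the zero-padded, precoded structure admits much faster implementations of this filter while still extracting multipath and receive diversity.

```python
import numpy as np

def zf_receive(H, y):
    """Apply the zero-forcing filter pinv(H) and slice to the nearest 4-QAM point."""
    s_hat = np.linalg.pinv(H) @ y
    return np.sign(s_hat.real) + 1j * np.sign(s_hat.imag)

rng = np.random.default_rng(2)
nt, nr = 4, 6
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
s = rng.choice([-1, 1], nt) + 1j * rng.choice([-1, 1], nt)      # 4-QAM symbols
noise = 0.1 * (rng.standard_normal(nr) + 1j * rng.standard_normal(nr))
print(zf_receive(H, H @ s + noise))
print(s)
```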

Relevance:

20.00%

Publisher:

Abstract:

We consider a time division duplex multiple-input multiple-output (n_t × n_r MIMO) system. Using channel state information (CSI) at the transmitter, a singular value decomposition (SVD) of the channel matrix is performed. This transforms the MIMO channel into parallel subchannels, but has a low overall diversity order. Hence, we propose X-Codes, which achieve a higher diversity order by pairing the subchannels prior to SVD precoding. In particular, each pair of information symbols is encoded by a fixed 2 × 2 real rotation matrix. X-Codes can be decoded using n_r very-low-complexity two-dimensional real sphere decoders. Error probability analysis for X-Codes enables us to choose the optimal pairing and the optimal rotation angle for each pair. Finally, we show that our new scheme outperforms other low-complexity precoding schemes.
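
A minimal sketch of the pairing idea behind X-Codes: the channel is diagonalized by its SVD, subchannels are paired, and each pair of real information symbols is rotated by a fixed 2 × 2 rotation before transmission over its pair of subchannels. The strongest-with-weakest pairing and the rotation angle below are illustrative placeholders, not the optimal choices derived from the error probability analysis in the paper.

```python
import numpy as np

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def xcode_precode(H, symbols, theta=np.pi / 8):
    """SVD precoding with pairwise 2x2 rotation of the information symbols."""
    U, sigma, Vh = np.linalg.svd(H)
    n = len(sigma)
    z = np.zeros(n)
    R = rotation(theta)
    for i in range(n // 2):
        pair = [i, n - 1 - i]                 # pair a strong subchannel with a weak one
        z[pair] = R @ symbols[pair]
    x = Vh.conj().T @ z                       # transmit vector (precoded by V)
    return x, U, sigma

rng = np.random.default_rng(3)
n = 4
H = rng.standard_normal((n, n))
s = rng.choice([-1.0, 1.0], n)                # real information symbols
x, U, sigma = xcode_precode(H, s)
# Receiver side: U^T y yields n parallel subchannels with gains sigma (noise-free here).
y = H @ x
print(np.round(U.T @ y / sigma, 3))           # recovers the rotated pairs
```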

Relevance:

20.00%

Publisher:

Abstract:

Very Long Instruction Word (VLIW) architectures exploit instruction-level parallelism (ILP) with the help of the compiler to achieve higher instruction throughput with minimal hardware. However, control and data dependencies between operations limit the available ILP, which not only hinders the scalability of VLIW architectures but also results in code size expansion. Although speculation and predicated execution mitigate ILP limitations due to control dependencies to a certain extent, they increase hardware cost and exacerbate code size expansion. Simultaneous multistreaming (SMS) can significantly improve operation throughput by allowing interleaved execution of operations from multiple instruction streams. In this paper, we study SMS for VLIW architectures and quantify the benefits associated with it using a case study of the MPEG-2 video decoder. We also propose the notion of virtual resources for VLIW architectures, which decouple architectural resources (resources exposed to the compiler) from microarchitectural resources, to limit code size expansion. Our results for a VLIW architecture demonstrate that: (1) SMS delivers much higher throughput than that achieved by speculation and predicated execution; (2) the increase in performance due to adding speculation and predicated execution support on top of SMS averages around 12%, a minor gain that might not warrant the additional hardware complexity involved; and (3) the notion of virtual resources is very effective in reducing no-operations (NOPs) and consequently reduces code size with little or no impact on performance.

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a low-ML-decoding-complexity, full-rate, full-diversity space-time block code (STBC) for a 2-transmit-antenna, 2-receive-antenna multiple-input multiple-output (MIMO) system, with coding gain equal to that of the best and well-known Golden code for any QAM constellation. Recently, two codes have been proposed (by Paredes, Gershman and Alkhansari, and by Sezginer and Sari) which enjoy a lower decoding complexity relative to the Golden code but have a smaller coding gain. The 2 × 2 STBC presented in this paper has lower decoding complexity than the Golden code for non-square QAM constellations, while having the same decoding complexity for square QAM constellations. Compared with the Paredes-Gershman-Alkhansari and Sezginer-Sari codes, the proposed code has the same decoding complexity for non-rectangular QAM constellations. Simulation results comparing the codeword error rate (CER) performance are presented.

Relevance:

20.00%

Publisher:

Abstract:

We develop a Gaussian mixture model (GMM) based vector quantization (VQ) method for coding wideband speech line spectrum frequency (LSF) parameters at low complexity. The PDF of the LSF source vector is modeled using a Gaussian mixture (GM) density with a higher number of uncorrelated Gaussian mixtures, and an optimum scalar quantizer (SQ) is designed for each Gaussian mixture. The reduction in quantization complexity is achieved by using only the relevant subset of the available optimum SQs. For an input vector, the subset of quantizers is chosen using a nearest-neighbor criterion. The developed method is compared with recent VQ methods and shown to provide high-quality rate-distortion (R/D) performance at lower complexity. In addition, the developed method also provides the advantages of bit-rate scalability and rate-independent complexity.
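
A minimal sketch of the scheme described above: a diagonal-covariance Gaussian mixture is fitted to training vectors, a simple per-dimension scalar quantizer is associated with each mixture, and an input vector is coded with the quantizer of its nearest mixture only. The uniform quantizer, the fixed bit split and the Euclidean nearest-mixture rule are illustrative simplifications of the optimum SQ design and selection criteria in the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def quantize_with_mixture(x, mean, std, bits_per_dim=4):
    """Uniform scalar quantization of each dimension, matched to one Gaussian."""
    levels = 2 ** bits_per_dim
    u = np.clip((x - mean) / (4 * std) + 0.5, 0, 1)      # map +/-2 sigma into [0, 1]
    idx = np.minimum((u * levels).astype(int), levels - 1)
    return mean + ((idx + 0.5) / levels - 0.5) * 4 * std  # reconstruction values

rng = np.random.default_rng(4)
dim, n_train = 10, 2000
train = rng.standard_normal((n_train, dim)) + rng.choice([-2.0, 2.0], (n_train, 1))

gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0).fit(train)

x = train[0]
m = int(np.argmin(np.linalg.norm(gmm.means_ - x, axis=1)))   # nearest mixture only
x_hat = quantize_with_mixture(x, gmm.means_[m], np.sqrt(gmm.covariances_[m]))
print(np.round(x - x_hat, 3))                                # quantization error
```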

Relevance:

20.00%

Publisher:

Abstract:

We have developed two reduced-complexity bit-allocation algorithms for MP3/AAC-based audio encoding, which can be useful at low bit rates. One algorithm derives the optimum bit allocation using constrained optimization of the weighted noise-to-mask ratio; the second uses decoupled iterations for distortion control and rate control, with convergence criteria. MUSHRA-based evaluation indicated that the new algorithm would be comparable to AAC while requiring only about one-tenth of the complexity.
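
A minimal sketch of the decoupled distortion-control / rate-control idea behind the second algorithm: an inner loop coarsens a global quantizer step until the bit budget is met, and an outer loop refines the step of bands whose quantization noise exceeds the masking threshold, iterating until convergence or an iteration limit. The quantization-noise model, bit-count model and thresholds below are toy stand-ins for the encoder's actual psychoacoustic values.

```python
import numpy as np

def quantize(band_energy, step):
    """Crude models: noise ~ step^2/12 per band, bits ~ 0.5*log2(SNR) per band."""
    noise = np.minimum(step ** 2 / 12.0, band_energy)
    bits = 0.5 * np.log2(np.maximum(band_energy / (step ** 2 / 12.0), 1.0))
    return noise, bits.sum()

def two_loop_allocate(band_energy, mask, bit_budget, max_iters=50):
    step = np.ones_like(band_energy)
    for _ in range(max_iters):
        # Rate-control loop: coarsen all steps until the frame fits the bit budget.
        while quantize(band_energy, step)[1] > bit_budget:
            step *= 1.1
        noise, bits = quantize(band_energy, step)
        # Distortion-control loop: refine bands whose noise exceeds the mask.
        above = noise > mask
        if not above.any():
            break                                   # converged: all bands masked
        step[above] /= 1.1
    return step, bits

rng = np.random.default_rng(5)
energy = rng.uniform(1.0, 100.0, 20)                # per-band signal energy
mask = 0.05 * energy                                # toy masking threshold
steps, bits = two_loop_allocate(energy, mask, bit_budget=120)
print(np.round(bits, 1), "bits used")
```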

Relevance:

20.00%

Publisher:

Abstract:

A "plan diagram" is a pictorial enumeration of the execution plan choices of a database query optimizer over the relational selectivity space. We have shown recently that, for industrial-strength database engines, these diagrams are often remarkably complex and dense, with a large number of plans covering the space. However, they can often be reduced to much simpler pictures, featuring significantly fewer plans, without materially affecting the query processing quality. Plan reduction has useful implications for the design and usage of query optimizers, including quantifying redundancy in the plan search space, enhancing useability of parametric query optimization, identifying error-resistant and least-expected-cost plans, and minimizing the overheads of multi-plan approaches. We investigate here the plan reduction issue from theoretical, statistical and empirical perspectives. Our analysis shows that optimal plan reduction, w.r.t. minimizing the number of plans, is an NP-hard problem in general, and remains so even for a storage-constrained variant. We then present a greedy reduction algorithm with tight and optimal performance guarantees, whose complexity scales linearly with the number of plans in the diagram for a given resolution. Next, we devise fast estimators for locating the best tradeoff between the reduction in plan cardinality and the impact on query processing quality. Finally, extensive experimentation with a suite of multi-dimensional TPCH-based query templates on industrial-strength optimizers demonstrates that complex plan diagrams easily reduce to "anorexic" (small absolute number of plans) levels incurring only marginal increases in the estimated query processing costs.

Relevance:

20.00%

Publisher:

Abstract:

To effectively support today's global economy, database systems need to manage data in multiple languages simultaneously. While current database systems do support the storage and management of multilingual data, they are not capable of querying across different natural languages. To address this lacuna, we have recently proposed two cross-lingual functionalities, LexEQUAL [13] and SemEQUAL [14], for matching multilingual names and concepts, respectively. In this paper, we investigate the native implementation of these multilingual functionalities as first-class operators on relational engines. Specifically, we propose a new multilingual storage datatype and an associated algebra of multilingual operators on this datatype. These components have been successfully implemented in the PostgreSQL database system, including integration of the algebra with the query optimizer and inclusion of a metric index in the access layer. Our experiments demonstrate that the performance of the native implementation is up to two orders of magnitude faster than the corresponding outside-the-server implementation. Further, these multilingual additions do not adversely impact the existing functionality and performance. To the best of our knowledge, our prototype represents the first practical implementation of a cross-lingual database query engine.