94 results for "Textual complexity for Romanian language"


Relevance: 20.00%

Abstract:

A space-time block code (STBC) is said to be multigroup decodable if the information symbols encoded by it can be partitioned into two or more groups such that each group of symbols can be maximum-likelihood (ML) decoded independently of the other symbol groups. In this paper, we show that the upper triangular matrix encountered during the sphere decoding of a linear dispersion STBC can be rank-deficient even when the rate of the code is less than the minimum of the number of transmit and receive antennas. We then show that all known families of high-rate (rate greater than 1) multigroup decodable codes have a rank-deficient matrix even when the rate is less than the number of transmit and receive antennas, and that this rank-deficiency problem arises only in asymmetric MIMO systems, where the number of receive antennas is strictly less than the number of transmit antennas. Unlike for codes with a full-rank matrix, the complexity of the sphere-decoding-based ML decoder for STBCs with a rank-deficient matrix is polynomial in the constellation size, and hence high. We derive the ML sphere-decoding complexity of most of the known high-rate multigroup decodable codes and show that, for each code, the complexity is a decreasing function of the number of receive antennas.
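
As a rough illustration of the object involved (not code from the paper), the sketch below builds the real-valued equivalent channel of a linear dispersion STBC from its weight matrices and checks the rank of the R factor a sphere decoder would work with. The dimensions and the randomly drawn weight matrices are placeholders; for the structured codes studied in the paper this rank can drop even when the rate condition is met.

# Hedged sketch: form the equivalent channel of a linear dispersion STBC and
# inspect the rank of the R factor used by a sphere decoder.  The weight
# matrices here are random placeholders, not a code from the paper.
import numpy as np

def real_equivalent_channel(H, weights):
    # Column k of the complex equivalent channel is vec(H @ A_k); stacking the
    # real and imaginary parts gives the matrix seen by the real symbols.
    cols = [(H @ A).reshape(-1, order="F") for A in weights]
    Heq = np.column_stack(cols)
    return np.vstack([Heq.real, Heq.imag])

rng = np.random.default_rng(0)
nt, nr, T, K = 4, 2, 4, 12            # asymmetric MIMO (nr < nt), K real symbols
weights = [rng.standard_normal((nt, T)) + 1j * rng.standard_normal((nt, T))
           for _ in range(K)]         # rate = K / (2T) = 1.5 < min(nt, nr)
H = rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))

R = np.linalg.qr(real_equivalent_channel(H, weights), mode="r")
print("R is", R.shape, "with rank", np.linalg.matrix_rank(R))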

Relevance: 20.00%

Abstract:

The construction of high-rate Space-Time Block Codes (STBCs) with low decoding complexity has been studied widely using techniques such as sphere decoding and non-Maximum-Likelihood (ML) decoders such as the QR-decomposition decoder with M paths (QRDM decoder). Recently, Ren et al. presented a new class of STBCs known as block orthogonal STBCs (BOSTBCs), which can be exploited by QRDM decoders to achieve a significant reduction in decoding complexity without performance loss. The block orthogonal property of the codes constructed was, however, only shown via simulations. In this paper, we give analytical proofs of the block orthogonal structure of various existing codes in the literature, including the codes constructed by Ren et al. We show that codes formed as the sum of Clifford Unitary Weight Designs (CUWDs) or Coordinate Interleaved Orthogonal Designs (CIODs) exhibit block orthogonal structure. We also provide new constructions of block orthogonal codes from Cyclic Division Algebras (CDAs) and Crossed-Product Algebras (CPAs). In addition, we show how the block orthogonal property of the STBCs can be exploited to reduce the decoding complexity of a sphere decoder using a depth-first search approach. Simulation results of the decoding complexity show a 30% reduction in the number of floating-point operations (FLOPs) for BOSTBCs compared with STBCs without the block orthogonal structure.
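
As a toy illustration of the kind of structure a QRDM or depth-first sphere decoder can exploit, the sketch below computes the R factor for the standard Alamouti code, an orthogonal design and hence an extreme case of block orthogonality, and prints its zero pattern. The BOSTBC constructions discussed in the abstract have a coarser block pattern; none of their specifics are reproduced here.

# Hedged sketch: zero pattern of R for the Alamouti code.  An orthogonal
# design gives a diagonal R for any channel realization, the extreme case of
# the block structure that QRDM / depth-first sphere decoders exploit.
import numpy as np

# Alamouti weight matrices for real symbols s1..s4, with x1 = s1 + j*s2 and
# x2 = s3 + j*s4:  X = s1*A1 + s2*A2 + s3*A3 + s4*A4 = [[x1, -x2*], [x2, x1*]].
A = [np.array([[1, 0], [0, 1]], dtype=complex),
     np.array([[1j, 0], [0, -1j]]),
     np.array([[0, -1], [1, 0]], dtype=complex),
     np.array([[0, 1j], [1j, 0]])]

rng = np.random.default_rng(1)
H = rng.standard_normal((1, 2)) + 1j * rng.standard_normal((1, 2))  # 1 rx antenna
cols = [(H @ Ak).reshape(-1, order="F") for Ak in A]
Heq = np.column_stack(cols)
Heq = np.vstack([Heq.real, Heq.imag])

R = np.linalg.qr(Heq, mode="r")
print((np.abs(R) > 1e-9).astype(int))   # diagonal pattern: the symbols decouple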

Relevance: 20.00%

Abstract:

Decoding of linear space-time block codes (STBCs) with sphere decoding (SD) is well known. A fast version of the SD, known as fast sphere decoding (FSD), was introduced by Biglieri, Hong and Viterbo. Viewing a linear STBC as a vector space spanned by its defining weight matrices over the real number field, we define a quadratic form (QF) on this vector space, called the Hurwitz-Radon QF (HRQF), and give a QF interpretation of the FSD complexity of a linear STBC. It is shown that the FSD complexity is a function only of the weight matrices defining the code and their ordering, and not of the channel realization (even though the equivalent channel used by the SD depends on the channel realization) or the number of receive antennas. It is also shown that the FSD complexity is completely captured by a single matrix obtained from the HRQF. Moreover, for a given set of weight matrices, an algorithm to obtain an optimal ordering that leads to the least FSD complexity is presented. The well-known classes of low FSD-complexity codes (multigroup decodable codes, fast-decodable codes and fast group decodable codes) are presented in the framework of the HRQF.
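
The exact HRQF definition is given in the paper and not reproduced here; as a hedged illustration of the kind of channel-independent, pairwise information such a matrix can encode, the sketch below tabulates the Frobenius norms of the Hurwitz-Radon products A_i A_j^H + A_j A_i^H of a code's weight matrices, using the standard Alamouti weight matrices as the running example (all off-diagonal entries vanish).

# Hedged sketch (assumption: pairwise Hurwitz-Radon products are the relevant
# quantity; the precise HRQF construction is in the paper).
import numpy as np

A = [np.array([[1, 0], [0, 1]], dtype=complex),
     np.array([[1j, 0], [0, -1j]]),
     np.array([[0, -1], [1, 0]], dtype=complex),
     np.array([[0, 1j], [1j, 0]])]

K = len(A)
M = np.zeros((K, K))
for i in range(K):
    for j in range(K):
        # ||A_i A_j^H + A_j A_i^H||_F : zero off-diagonal entries indicate
        # Hurwitz-Radon orthogonal pairs of weight matrices.
        M[i, j] = np.linalg.norm(A[i] @ A[j].conj().T + A[j] @ A[i].conj().T)
print(np.round(M, 3))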

Relevance: 20.00%

Abstract:

An n-length block code C is said to be r-query locally correctable if, for any codeword x ∈ C, one can probabilistically recover any one of the n coordinates of x by querying at most r coordinates of a possibly corrupted version of x. It is known that linear codes whose duals contain 2-designs are locally correctable. In this article, we consider linear codes whose duals contain t-designs for larger t. It is shown here that for such codes, for a given number of queries r, under linear decoding, one can in general handle a larger number of corrupted bits. We exhibit, to our knowledge for the first time, a finite-length code whose dual contains 4-designs and which can tolerate a fraction of up to 0.567/r corrupted symbols, as against a maximum of 0.5/r in prior constructions. We also present an upper bound showing that 0.567 is the best possible for this code length and query complexity over this symbol alphabet, thereby establishing the optimality of this code in this respect. A second result in the article is a finite-length bound relating the number of queries r and the fraction of errors that can be tolerated, for a locally correctable code that employs a randomized algorithm in which each instance of the algorithm involves t-error correction.
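
As standard background (not the 4-design code constructed in the article), the sketch below locally corrects one coordinate of a [7,4] Hamming codeword with r = 3 queries, using a weight-4 dual codeword whose support passes through the target coordinate; the supports of these dual (simplex-code) codewords form a 2-design. Recovery is correct as long as the queried positions are uncorrupted, which is exactly what the tolerated-fraction analysis in the abstract quantifies.

# Hedged sketch of local correction via dual codewords, using the [7,4]
# Hamming code; this is textbook material, not the article's construction.
import itertools
import numpy as np

Hchk = np.array([[1, 0, 1, 0, 1, 0, 1],     # parity-check matrix of Hamming(7,4)
                 [0, 1, 1, 0, 0, 1, 1],
                 [0, 0, 0, 1, 1, 1, 1]])

# All codewords of the Hamming code and all nonzero dual codewords.
code = [np.array(v) for v in itertools.product([0, 1], repeat=7)
        if not (Hchk @ v % 2).any()]
dual = [np.array(m) @ Hchk % 2 for m in itertools.product([0, 1], repeat=3)
        if any(m)]

def correct_coordinate(y, i):
    # Pick a dual codeword through position i and XOR the other 3 positions of y.
    h = next(h for h in dual if h[i] == 1)
    queries = [j for j in np.flatnonzero(h) if j != i]
    return int(sum(y[j] for j in queries) % 2), queries

x = code[5]                       # some codeword
y = x.copy()
y[5] ^= 1                         # corrupt a position outside the query set used below
est, queried = correct_coordinate(y, 0)
print("true bit:", x[0], "recovered:", est, "queried positions:", queried)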

Relevance: 20.00%

Abstract:

In this paper, we consider signal detection in nt × nr underdetermined MIMO (UD-MIMO) systems, where i) nt > nr with an overload factor α = nt/nr > 1, ii) nt symbols are transmitted per channel use through spatial multiplexing, and iii) nt, nr are large (in the range of tens). A low-complexity detection algorithm based on reactive tabu search is considered. A variable-threshold stopping criterion is proposed which offers near-optimal performance in large UD-MIMO systems at low complexity. A lower bound on the maximum-likelihood (ML) bit error performance of large UD-MIMO systems is also obtained for comparison. The proposed algorithm is shown to achieve BER performance within 0.6 dB of the ML lower bound at an uncoded BER of 10^-2 in a 16 × 8 V-BLAST UD-MIMO system with 4-QAM (32 bps/Hz). Similar near-ML performance results are shown for 32 × 16 and 32 × 24 V-BLAST UD-MIMO with 4-QAM/16-QAM as well. A performance and complexity comparison between the proposed algorithm and the λ-generalized sphere decoder (λ-GSD) algorithm for UD-MIMO shows that the proposed algorithm achieves almost the same performance as λ-GSD but at significantly lower complexity.
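
To make the search idea concrete, here is a hedged sketch of a plain tabu-search MIMO detector: fixed tabu tenure, single-coordinate flips, no reactive adaptation and none of the paper's variable-threshold stopping rule. The 8×4 dimensions, 4-QAM real decomposition and directly drawn real-valued channel are illustrative choices only.

# Hedged sketch: plain (non-reactive) tabu search over candidate symbol
# vectors, minimising the ML metric ||y - Hx||^2.
import numpy as np

rng = np.random.default_rng(2)
nt, nr = 8, 4                                   # underdetermined: nt > nr
alphabet = np.array([-1, 1])                    # real components of 4-QAM
H = rng.standard_normal((2 * nr, 2 * nt))       # stand-in real-valued equivalent channel
x_true = rng.choice(alphabet, size=2 * nt)
y = H @ x_true + 0.1 * rng.standard_normal(2 * nr)

def cost(x):
    return np.sum((y - H @ x) ** 2)

x = rng.choice(alphabet, size=2 * nt)           # random initial vector
best, best_cost = x.copy(), cost(x)
tabu = {}                                       # coordinate -> iteration until which it is tabu
for it in range(200):
    # Neighbourhood: flip one coordinate; take the best non-tabu move.
    moves = [i for i in range(2 * nt) if tabu.get(i, -1) < it]
    costs = []
    for i in moves:
        x[i] = -x[i]
        costs.append(cost(x))
        x[i] = -x[i]
    i_best = moves[int(np.argmin(costs))]
    x[i_best] = -x[i_best]
    tabu[i_best] = it + 10                      # fixed tabu tenure
    if cost(x) < best_cost:
        best, best_cost = x.copy(), cost(x)
print("bit errors:", int(np.sum(best != x_true)))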

Relevance: 20.00%

Abstract:

Low-complexity joint estimation of synchronization impairments and channel in a single-user MIMO-OFDM system is presented in this paper. Based on a system model that takes into account the effects of synchronization impairments such as carrier frequency offset, sampling frequency offset, and symbol timing error, as well as the channel, a Maximum Likelihood (ML) algorithm for the joint estimation is proposed. To reduce the complexity of the ML grid search, the number of received signal samples used for estimation needs to be reduced. Conventional channel estimation techniques using Least-Squares (LS) or Maximum a Posteriori (MAP) methods fail for the resulting underdetermined system with reduced samples, which leads to poor performance of the joint estimator. The proposed ML algorithm instead uses a Compressed Sensing (CS) based channel estimation method in a sparse fading scenario, where the received samples used for estimation are fewer than those required for LS or MAP based estimation. The performance of the estimation method is studied through numerical simulations, and it is observed that the CS based joint estimator performs better than the LS and MAP based joint estimators.
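
As generic background on the CS step (not the paper's joint impairment/channel estimator), the sketch below recovers a sparse channel with Orthogonal Matching Pursuit from fewer observations than an LS estimate would need; the measurement matrix is a random placeholder, whereas in practice it would be pilot-dependent.

# Hedged sketch: Orthogonal Matching Pursuit (OMP) for sparse channel estimation.
import numpy as np

rng = np.random.default_rng(3)
L, K, M = 64, 4, 24                      # channel length, sparsity, observations (M < L)
h = np.zeros(L)
h[rng.choice(L, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, L)) / np.sqrt(M)   # placeholder measurement matrix
y = Phi @ h + 0.01 * rng.standard_normal(M)

def omp(y, Phi, K):
    residual, support = y.copy(), []
    for _ in range(K):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # most correlated column
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    h_hat = np.zeros(Phi.shape[1])
    h_hat[support] = coef
    return h_hat

h_hat = omp(y, Phi, K)
print("relative error:", np.linalg.norm(h - h_hat) / np.linalg.norm(h))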

Relevance: 20.00%

Abstract:

We address the task of mapping a given textual domain model (e.g., an industry-standard reference model) for a given domain (e.g., ERP) to the source code of an independently developed application in the same domain. This has applications in improving the understandability of an existing application, migrating it to a more flexible architecture, or integrating it with other related applications. We use the vector-space model to abstractly represent domain model elements as well as source-code artifacts. The key novelty in our approach is to leverage the relationships between source-code artifacts in a principled way to improve the mapping process. We describe experiments wherein we apply our approach to the task of matching two real, open-source applications to corresponding industry-standard domain models. We demonstrate the overall usefulness of our approach, as well as the role of our propagation techniques in improving the precision and recall of the mapping task.
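
To make the vector-space step concrete, here is a hedged toy sketch of TF-IDF plus cosine similarity matching between domain-model elements and source-code artifacts; it covers only the baseline matching, not the paper's relationship-based propagation, and the strings and file names are made up.

# Hedged sketch: baseline vector-space matching of domain-model elements to
# source-code artifacts (toy data, no propagation step).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

domain_elements = [
    "purchase order line item quantity price",
    "customer account billing address",
]
code_artifacts = {
    "OrderLine.java": "class OrderLine int quantity double unitPrice getTotal",
    "Customer.java":  "class Customer String name Address billingAddress",
    "Logger.java":    "class Logger void info void warn void error",
}

vec = TfidfVectorizer()
tfidf = vec.fit_transform(list(code_artifacts.values()) + domain_elements)
n = len(code_artifacts)
sims = cosine_similarity(tfidf[n:], tfidf[:n])   # domain elements vs. artifacts

for elem, row in zip(domain_elements, sims):
    best = max(range(n), key=lambda j: row[j])
    print(f"{elem!r} -> {list(code_artifacts)[best]} (score {row[best]:.2f})")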

Relevance: 20.00%

Abstract:

H.264/Advanced Video Coding (AVC) surveillance video encoders use the Skip mode specified by the standard to reduce bandwidth. They also use multiple frames as reference for motion-compensated prediction. In this paper, we propose two techniques to reduce the bandwidth and computational cost of static-camera surveillance video encoders without affecting detection and recognition performance. A spatial sampler is proposed to sample pixels that are segmented using a Gaussian mixture model. Modified weight updates are derived for the parameters of the mixture model to reduce floating-point computations. The storage pattern of the parameters in memory is also modified to improve cache performance. Skip selection is performed using the segmentation results of the sampled pixels. The second contribution is a low-computational-cost algorithm to choose the reference frames. The proposed reference frame selection algorithm reduces the cost of coding uncovered background regions. We also study the number of reference frames required to achieve good coding efficiency. Distortion over foreground pixels is measured to quantify the performance of the proposed techniques. Experimental results show bit-rate savings of up to 94.5% over methods proposed in the literature on video surveillance data sets. The proposed techniques also provide up to a 74.5% reduction in compression complexity without increasing the distortion over the foreground regions in the video sequence.
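
As a rough, hedged sketch of the sampling-plus-Skip idea (not the paper's method): a single Gaussian per pixel stands in for the Gaussian mixture model, pixels are sampled on a checkerboard, and a macroblock is marked Skip when none of its sampled pixels look like foreground. The image size, thresholds and learning rate are arbitrary placeholders.

# Hedged sketch: sampled background segmentation and a per-macroblock Skip map.
import numpy as np

H, W, MB = 64, 64, 16
rng = np.random.default_rng(4)
mean = np.full((H, W), 128.0)
var = np.full((H, W), 100.0)
alpha = 0.05                                     # learning rate

sample_mask = (np.add.outer(np.arange(H), np.arange(W)) % 2 == 0)  # checkerboard sampling

def update_and_segment(frame):
    d = frame - mean
    fg = (d ** 2 > 9 * var) & sample_mask        # ~3-sigma test on sampled pixels only
    bg = sample_mask & ~fg                       # update the model only where background-like
    mean[bg] += alpha * d[bg]
    var[bg] += alpha * (d[bg] ** 2 - var[bg])
    return fg

def skip_macroblocks(fg):
    # A macroblock is coded as Skip if none of its sampled pixels are foreground.
    blocks = fg.reshape(H // MB, MB, W // MB, MB)
    return ~blocks.any(axis=(1, 3))

frame = np.full((H, W), 128.0)
frame[20:36, 20:36] = 200.0                      # an object enters the scene
fg = update_and_segment(frame)
print("Skip map:\n", skip_macroblocks(fg).astype(int))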

Relevance: 20.00%

Abstract:

We investigate the parameterized complexity of the following edge coloring problem motivated by the problem of channel assignment in wireless networks. For an integer q >= 2 and a graph G, the goal is to find a coloring of the edges of G with the maximum number of colors such that every vertex of the graph sees at most q colors. This problem is NP-hard for q >= 2, and has been well studied from the point of view of approximation. Our main focus is the case when q = 2, which is already theoretically intricate and practically relevant. We give fixed-parameter tractable algorithms for both the standard and the dual parameterization, and for the latter problem the result is based on a linear vertex kernel.
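
For intuition only (the paper's contribution is the FPT algorithms, not this), here is a hedged brute-force sketch of the objective on a tiny, made-up graph: maximise the number of distinct edge colours while every vertex sees at most q colours.

# Hedged sketch: brute-force enumeration of edge colourings on a small graph.
from itertools import product

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # a 4-cycle plus a chord
q = 2

def vertex_ok(colouring):
    seen = {}
    for (u, v), c in zip(edges, colouring):
        seen.setdefault(u, set()).add(c)
        seen.setdefault(v, set()).add(c)
    return all(len(cs) <= q for cs in seen.values())

best = max((col for col in product(range(len(edges)), repeat=len(edges))
            if vertex_ok(col)),
           key=lambda col: len(set(col)))
print("max number of colours:", len(set(best)), "colouring:", best)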

Relevance: 20.00%

Abstract:

Polyhedral techniques for program transformation are now used in several proprietary and open source compilers. However, most of the research on polyhedral compilation has focused on imperative languages such as C, where the computation is specified in terms of statements with zero or more nested loops and other control structures around them. Graphical dataflow languages, where there is no notion of statements or a schedule specifying their relative execution order, have so far not been studied using a powerful transformation or optimization approach. The execution semantics and referential transparency of dataflow languages impose a different set of challenges. In this paper, we attempt to bridge this gap by presenting techniques that can be used to extract a polyhedral representation from dataflow programs and to synthesize dataflow programs from their equivalent polyhedral representation. We then describe PolyGLoT, a framework for automatic transformation of dataflow programs which we built using our techniques and other popular research tools such as Clan and Pluto. For the purpose of experimental evaluation, we used our tools to compile LabVIEW, one of the most widely used dataflow programming languages. Results show that dataflow programs transformed using our framework are able to outperform those compiled otherwise by up to a factor of seventeen, with a mean speed-up of 2.30x while running on an 8-core Intel system.
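
As a hedged toy illustration of the representation itself (not of dataflow extraction, and not the PolyGLoT / Clan / Pluto tool chain), the sketch below writes a triangular loop nest's iteration domain as affine constraints and applies a unimodular loop-interchange schedule to it.

# Hedged sketch: a polyhedral iteration domain and an affine transformation.
import numpy as np
from itertools import product

N = 4
# Domain of:  for i in 0..N-1: for j in 0..i:  S(i, j)
# written as {(i, j) : A @ (i, j) + b >= 0}.
A = np.array([[ 1,  0],    # i >= 0
              [-1,  0],    # i <= N-1
              [ 0,  1],    # j >= 0
              [ 1, -1]])   # j <= i
b = np.array([0, N - 1, 0, 0])

domain = [p for p in product(range(N), repeat=2) if (A @ p + b >= 0).all()]

T = np.array([[0, 1],      # schedule (i, j) -> (j, i): loop interchange
              [1, 0]])
order = sorted(domain, key=lambda p: tuple(T @ np.array(p)))
print("original order:    ", domain)
print("interchanged order:", order)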

Relevance: 20.00%

Abstract:

Communication complexity refers to the minimum rate of public communication required for generating a maximal-rate secret key (SK) in the multiterminal source model of Csiszar and Narayan. Tyagi recently characterized this communication complexity for a two-terminal system. We extend the ideas in Tyagi's work to derive a lower bound on communication complexity in the general multiterminal setting. In the important special case of the complete graph pairwise independent network (PIN) model, our bound allows us to determine the exact linear communication complexity, i.e., the communication complexity when the communication and SK are restricted to be linear functions of the randomness available at the terminals.
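
To make the objects in the abstract concrete, here is a hedged toy instance of the complete-graph PIN model with three terminals, not the paper's bound: each pair shares an independent uniform bit, terminal 1 publicly broadcasts b12 XOR b13, every terminal can then compute b12, and that one-bit key is independent of the broadcast.

# Hedged sketch: a 1-bit secret key from 1 bit of public communication in a
# 3-terminal PIN model; the tally shows the key is uniform given the broadcast.
import itertools

count = {}
for b12, b13, b23 in itertools.product([0, 1], repeat=3):
    f = b12 ^ b13                 # public communication from terminal 1
    key = b12                     # terminal 2 knows b12; terminal 3 computes b12 = f ^ b13
    count[(f, key)] = count.get((f, key), 0) + 1

# Every (communication, key) pair occurs equally often, so the key is uniform
# and independent of what an eavesdropper observes.
print(count)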

Relevance: 20.00%

Abstract:

QR decomposition (QRD) is a widely used Numerical Linear Algebra (NLA) kernel with applications ranging from SONAR beamforming to wireless MIMO receivers. In this paper, we propose a novel Givens Rotation (GR) based QRD (GR QRD) in which we reduce the computational complexity of GR and exploit a higher degree of parallelism. This low-complexity Column-wise GR (CGR) can annihilate multiple elements of a column of a matrix simultaneously. The algorithm is first realized on a two-dimensional (2D) systolic array and then implemented on REDEFINE, a Coarse-Grained Runtime-Reconfigurable Architecture (CGRA). We benchmark the proposed implementation against state-of-the-art implementations and report better throughput, convergence and scalability.
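
For reference, here is a hedged textbook sketch of Givens-rotation QR decomposition, one rotation per annihilated element; the paper's CGR variant annihilates several elements of a column at once and maps the computation onto the REDEFINE CGRA, none of which is reflected here.

# Hedged sketch: classical Givens-rotation QR decomposition.
import numpy as np

def givens_qr(A):
    m, n = A.shape
    R = A.astype(float)
    Q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):          # zero out R[i, j] against the pivot R[j, j]
            a, b = R[j, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            G = np.eye(m)                      # rotation acting on rows j and i
            G[[j, i], [j, i]] = c
            G[j, i], G[i, j] = s, -s
            R = G @ R
            Q = Q @ G.T
    return Q, R

A = np.random.default_rng(5).standard_normal((5, 3))
Q, R = givens_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(5)))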

Relevance: 20.00%

Abstract:

We consider optimal power allocation policies for a single-server, multiuser system. Power is consumed only in the transmission of data. The transmission channel may experience multipath fading. We obtain very efficient, low-computational-complexity algorithms which minimize power and ensure stability of the data queues. We also obtain policies for the case where the users have mean delay constraints. If the required power is a linear function of the rate, then we exploit this linearity and obtain low-complexity linear programs.
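
As a hedged toy version of the linear-rate case (single user, made-up fading gains and arrival rate, not the policies derived in the paper), the sketch below writes per-fading-state power allocation for queue stability as a small linear program.

# Hedged sketch: minimise mean power subject to a mean service-rate (stability)
# constraint when the rate is linear in the transmit power.
import numpy as np
from scipy.optimize import linprog

gains = np.array([0.2, 1.0, 2.5])        # channel gain in each fading state
probs = np.array([0.3, 0.4, 0.3])        # state probabilities
arrival_rate = 1.0                        # required mean service rate

# minimise  E[P] = sum_k probs[k] * P[k]
# subject to  sum_k probs[k] * gains[k] * P[k] >= arrival_rate,  P >= 0
res = linprog(c=probs,
              A_ub=[-(probs * gains)],
              b_ub=[-arrival_rate],
              bounds=[(0, None)] * len(gains))
print("mean power:", res.fun, "allocation per state:", res.x)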

Relevance: 20.00%

Abstract:

We study the problem of finding small s-t separators that induce graphs having certain properties. It is known that finding a minimum clique s-t separator is polynomial-time solvable (Tarjan in Discrete Math. 55:221-232, 1985), while, for example, the problems of finding a minimum s-t separator that induces a connected graph or forms an independent set are fixed-parameter tractable when parameterized by the size of the separator (Marx et al. in ACM Trans. Algorithms 9(4):30, 2013). Motivated by these results, we study properties that generalize cliques, independent sets, and connected graphs, and determine the complexity of finding separators satisfying these properties. We investigate these problems also on bounded-degree graphs. Our results are as follows: finding a minimum c-connected s-t separator is FPT for c = 2 and W[1]-hard for any c >= 3; finding a minimum s-t separator with diameter at most d is W[1]-hard for any d >= 2; finding a minimum r-regular s-t separator is W[1]-hard for any r >= 1; for any decidable graph property, finding a minimum s-t separator with this property is FPT parameterized jointly by the size of the separator and the maximum degree; and finding a connected s-t separator of minimum size does not have a polynomial kernel, even when restricted to graphs of maximum degree at most 3, unless NP ⊆ coNP/poly.
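
For intuition only, the hedged sketch below brute-forces a minimum s-t separator whose induced subgraph is connected on a small 3x3 grid graph, and contrasts its size with that of an unrestricted minimum separator; this is exponential in general and is not one of the paper's algorithms.

# Hedged sketch: smallest s-t separator inducing a connected subgraph (brute force).
from itertools import combinations
import networkx as nx

G = nx.grid_2d_graph(3, 3)
s, t = (0, 0), (2, 2)

def separates(S):
    H = G.copy()
    H.remove_nodes_from(S)
    return not nx.has_path(H, s, t)

print("plain minimum s-t separator size:", len(nx.minimum_node_cut(G, s, t)))

candidates = [v for v in G if v not in (s, t)]
for size in range(1, len(candidates) + 1):
    good = [S for S in combinations(candidates, size)
            if separates(S) and nx.is_connected(G.subgraph(S))]
    if good:
        print("minimum connected s-t separator:", good[0])
        break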