858 results for Sparse Coding
A Low ML-Decoding Complexity, High Coding Gain, Full-Rate, Full-Diversity STBC for 4 x 2 MIMO System
Abstract:
This paper proposes a full-rate, full-diversity space-time block code (STBC) with low maximum-likelihood (ML) decoding complexity and high coding gain for the 4 transmit antenna, 2 receive antenna (4 x 2) multiple-input multiple-output (MIMO) system employing 4-/16-QAM. For such a system, the best known code is the DjABBA code; recently, Biglieri, Hong and Viterbo proposed another STBC (the BHV code) for 4-QAM that has lower ML-decoding complexity than the DjABBA code but, unlike the DjABBA code, does not have full diversity. The code proposed in this paper has the same ML-decoding complexity as the BHV code for any square M-QAM constellation but has full diversity for 4- and 16-QAM. Compared with the DjABBA code, the proposed code has lower ML-decoding complexity for square M-QAM constellations, higher coding gain for 4- and 16-QAM, and hence better codeword error rate (CER) performance. Simulation results confirming this are presented.
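For context, the ML decoding whose complexity is being compared here is the standard exhaustive-search rule (generic for any STBC, not specific to the proposed code)

\[
\hat{\mathbf{X}} \;=\; \arg\min_{\mathbf{X} \in \mathcal{C}} \big\lVert \mathbf{Y} - \mathbf{H}\mathbf{X} \big\rVert_F^{2},
\]

where Y is the received signal matrix, H the channel matrix and C the set of codeword matrices of the STBC; a "low ML-decoding complexity" code is one for which this joint search over all information symbols splits into several smaller independent searches.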
Abstract:
This study discusses the scope of historical earthquake analysis in low-seismicity regions. Examples of non-damaging earthquake reports are given from the Eastern Baltic (Fennoscandian) Shield in north-eastern Europe from the 16th to the 19th centuries. The information available for past earthquakes in the region is typically sparse and cannot be increased through a careful search of the archives. This study applies recommended rigorous methodologies of historical seismology developed using ample data to the sparse reports from the Eastern Baltic Shield. Attention is paid to the context of reporting, the identity and role of the authors, the circumstances of the reporting, and the opportunity to verify the available information by collating the sources. We evaluate the reliability of oral earthquake recollections and develop criteria for cases when a historical earthquake is attested to by a single source. We propose parametric earthquake scenarios as a way to deal with sparse macroseismic reports and as an improvement to existing databases.
Abstract:
Gaussian Processes (GPs) are promising Bayesian methods for classification and regression problems, and they have also been used for semi-supervised learning tasks. In this paper, we propose a new algorithm for solving the semi-supervised binary classification problem using sparse GP regression (GPR) models. It is closely related to semi-supervised learning based on support vector regression (SVR) and maximum margin clustering. The proposed algorithm is simple and easy to implement. Unlike the SVR-based algorithm, it gives a sparse solution directly. Moreover, the hyperparameters are estimated easily, without resorting to expensive cross-validation. The use of a sparse GPR model makes the proposed algorithm scalable. Preliminary results on synthetic and real-world data sets demonstrate the efficacy of the new algorithm.
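As a rough illustration of the idea of using GP regression for semi-supervised classification (this is a generic self-training sketch with an off-the-shelf GP regressor, not the algorithm proposed in the paper; the kernel, confidence threshold and iteration count are assumptions):

# Illustrative semi-supervised classification with a GP regressor: fit on
# labelled points with +/-1 targets, then pseudo-label unlabelled points on
# which the predictive mean is large relative to the predictive std.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def semi_supervised_gpr(X_lab, y_lab, X_unlab, conf_thresh=2.0, max_iter=5):
    """y_lab holds +/-1 labels; returns predicted +/-1 labels for X_unlab."""
    X, y = X_lab.copy(), y_lab.astype(float).copy()
    pseudo = np.zeros(len(X_unlab))              # 0 means still unlabelled
    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
    for _ in range(max_iter):
        gpr.fit(X, y)
        mean, std = gpr.predict(X_unlab, return_std=True)
        confident = (np.abs(mean) / (std + 1e-9) > conf_thresh) & (pseudo == 0)
        if not confident.any():
            break
        pseudo[confident] = np.sign(mean[confident])
        X = np.vstack([X_lab, X_unlab[pseudo != 0]])
        y = np.concatenate([y_lab, pseudo[pseudo != 0]])
    mean = gpr.predict(X_unlab)
    return np.where(pseudo != 0, pseudo, np.sign(mean))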
Abstract:
A simple and efficient algorithm for the bandwidth reduction of sparse symmetric matrices is proposed. It involves column-row permutations and is well suited to mapping onto the linear-array topology of SIMD architectures. The efficiency of the algorithm is compared with that of other existing algorithms. The interconnectivity and memory requirements of the linear array are discussed, and the complexity of its layout area is derived. The parallel version of the algorithm mapped onto the linear array is then introduced and explained with the help of an example. The optimality of the parallel algorithm is proved by deriving the time complexities of the algorithm on a single processor and on the linear array.
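For illustration, the kind of bandwidth reduction obtained by a symmetric column-row permutation can be reproduced with the classical (sequential) reverse Cuthill-McKee ordering available in SciPy; this is a related heuristic used only as an example, not the parallel algorithm of the paper:

# Bandwidth reduction of a sparse symmetric matrix by a symmetric
# row/column permutation (sequential reverse Cuthill-McKee ordering).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(A):
    rows, cols = A.nonzero()
    return int(np.max(np.abs(rows - cols))) if rows.size else 0

A = sp.random(200, 200, density=0.02, format="csr", random_state=0)
A = (A + A.T).tocsr()                     # make it symmetric
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
A_perm = A[perm][:, perm]                 # same permutation on rows and columns
print("bandwidth before:", bandwidth(A), "after:", bandwidth(A_perm))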
Abstract:
In a typical sensor network scenario, the goal is to monitor a spatio-temporal process through a number of inexpensive sensing nodes, the key parameter being the fidelity at which the process has to be estimated at distant locations. We study such a scenario in which multiple encoders transmit their correlated data at finite rates to a distant, common decoder over a discrete-time multiple access channel under various side-information assumptions. In particular, we derive an achievable rate region for this communication problem.
Abstract:
The minimum distance of a linear block code is one of the important parameters that indicate the error performance of the code. When the code rate is less than 1/2, efficient algorithms based on the concept of information sets are available for finding the minimum distance. When the code rate is greater than 1/2, only one information set is available and efficiency suffers. In this paper, we investigate and propose a novel algorithm to find the minimum distance of linear block codes with code rate greater than 1/2. We propose to reverse the roles of the information set and the parity set to obtain, virtually, another information set and thereby improve efficiency. This method is 67.7 times faster than the minimum distance algorithm implemented in the MAGMA Computational Algebra System for an (80, 45) linear block code.
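For a sense of what such algorithms accelerate, the minimum distance of a small binary [n, k] code can be computed by brute force from its generator matrix, as in the sketch below (an illustrative baseline only; it is infeasible for codes like the (80, 45) code mentioned above, which is exactly why information-set methods matter):

# Brute-force minimum distance of a small binary [n, k] linear code from its
# generator matrix G; exponential in k, so only a baseline for illustration.
import itertools
import numpy as np

def min_distance(G):
    k, n = G.shape
    best = n
    for m in itertools.product([0, 1], repeat=k):
        if not any(m):
            continue                      # skip the all-zero message
        codeword = np.mod(np.array(m) @ G, 2)
        best = min(best, int(codeword.sum()))
    return best

# Example: the [7, 4] Hamming code has minimum distance 3.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
print(min_distance(G))                    # prints 3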
Abstract:
A genomic library was constructed from a HindIII digest of Azospirillum lipoferum chromosomal DNA in the HindIII site of pUC19. From the library, a clone, pALH64, which showed strong hybridization with 3'-end-labeled A. lipoferum total tRNAs and contains a 2.9 kb insert, was isolated, and a restriction map of the insert was established. The nucleotide sequence of a 490 bp HindIII-HincII subfragment containing a cluster of genes coding for 5S rRNA, tRNA(Val)(UAC), tRNA(Thr)(UGA) and tRNA(Lys)(UUU) has been determined. The gene organization is 5S rRNA (115 bp), spacer (10 bp), tRNA(Val) (76 bp), spacer (3 bp), tRNA(Thr) (76 bp), spacer (7 bp) and tRNA(Lys) (76 bp). Hybridization experiments using A. lipoferum total tRNAs and 5S rRNA with the cloned DNA probes revealed that all three tRNA genes and the 5S rRNA gene are expressed in vivo in the bacterial cells.
Abstract:
This paper presents a low-ML-decoding-complexity, full-rate, full-diversity space-time block code (STBC) for a 2 transmit antenna, 2 receive antenna multiple-input multiple-output (MIMO) system, with coding gain equal to that of the best and well-known Golden code for any QAM constellation. Recently, two codes have been proposed (by Paredes, Gershman and Alkhansari, and by Sezginer and Sari) that enjoy lower decoding complexity than the Golden code but have lower coding gain. The 2 x 2 STBC presented in this paper has lower decoding complexity than the Golden code for non-square QAM constellations, while having the same decoding complexity for square QAM constellations. Compared with the Paredes-Gershman-Alkhansari and Sezginer-Sari codes, the proposed code has the same decoding complexity for non-rectangular QAM constellations. Simulation results comparing the codeword error rate (CER) performance are presented.
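For reference, the Golden code that serves as the benchmark here is usually written as mapping four QAM symbols a, b, c, d to the 2 x 2 codeword

\[
\mathbf{X} = \frac{1}{\sqrt{5}}
\begin{pmatrix}
\alpha\,(a + b\theta) & \alpha\,(c + d\theta)\\[2pt]
i\,\bar{\alpha}\,(c + d\bar{\theta}) & \bar{\alpha}\,(a + b\bar{\theta})
\end{pmatrix},
\qquad
\theta = \tfrac{1+\sqrt{5}}{2},\;
\bar{\theta} = \tfrac{1-\sqrt{5}}{2},\;
\alpha = 1 + i(1-\theta),\;
\bar{\alpha} = 1 + i(1-\bar{\theta}),
\]

so that ML decoding in general requires a joint search over all four symbols; the reduced-complexity codes discussed above aim to break this joint search into smaller pieces.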
Abstract:
The impulse response of a typical wireless multipath channel can be modeled as a tapped delay line filter whose non-zero components are sparse relative to the channel delay spread. In this paper, a novel method of estimating such sparse multipath fading channels for OFDM systems is explored. In particular, Sparse Bayesian Learning (SBL) techniques are applied to jointly estimate the sparse channel and its second-order statistics, and a new Bayesian Cramér-Rao bound is derived for the SBL algorithm. Further, in the context of OFDM channel estimation, an enhancement to the SBL algorithm is proposed, which uses an Expectation Maximization (EM) framework to jointly estimate the sparse channel, the unknown data symbols and the second-order statistics of the channel. The EM-SBL algorithm is able to recover the support as well as the channel taps more efficiently, and/or using fewer pilot symbols, than the SBL algorithm. To further improve the performance of EM-SBL, a threshold-based pruning of the estimated second-order statistics that are input to the algorithm is proposed, and its mean square error and symbol error rate performance are illustrated through Monte-Carlo simulations. Thus, the algorithms proposed in this paper are capable of obtaining efficient sparse channel estimates even in the presence of a small number of pilots.
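For context, the standard SBL iteration that EM-SBL builds on places a zero-mean Gaussian prior with per-tap variances gamma_i on the channel vector h, observes y = Φh + n with noise variance σ², and alternates the posterior and hyperparameter updates below (this is the generic form from the SBL literature, not the paper's exact derivation):

\[
\boldsymbol{\Sigma} = \left(\sigma^{-2}\,\boldsymbol{\Phi}^{H}\boldsymbol{\Phi} + \boldsymbol{\Gamma}^{-1}\right)^{-1},
\qquad
\boldsymbol{\mu} = \sigma^{-2}\,\boldsymbol{\Sigma}\,\boldsymbol{\Phi}^{H}\mathbf{y},
\qquad
\gamma_i \leftarrow |\mu_i|^{2} + \Sigma_{ii},
\]

where Γ = diag(γ₁, …, γ_L) and Φ is the known pilot/measurement matrix; hyperparameters driven towards zero identify taps that can be pruned, which is what yields a sparse channel estimate.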
Abstract:
We consider the problem of compression via homomorphic encoding of a source having a group alphabet. This is motivated by the problem of distributed function computation, where it is known that if one is only interested in computing a function of several sources, then one can at times improve upon the compression rate required by the Slepian-Wolf bound. The functions of interest are those that can be represented by the binary operation of the group. We first consider the case when the source alphabet is the cyclic Abelian group Z_{p^r}. In this scenario, we show that the set of achievable rates provided by Krithivasan and Pradhan [1] is indeed the best possible. In addition, we provide a simpler proof of their achievability result. In the case of a general Abelian group, we present an achievable rate region that improves upon the one obtained by Krithivasan and Pradhan. We then consider the case when the source alphabet is a non-Abelian group. We show that if all the source symbols have non-zero probability and the center of the group is trivial, then it is impossible to compress such a source with a homomorphic encoder. Finally, we present certain non-homomorphic encoders, which are also suitable in the context of function computation over non-Abelian group sources, and provide the rate regions achieved by these encoders.
Abstract:
A single-source network is said to be memory-free if all of the internal nodes (those other than the source and the sinks) do not employ memory but merely send linear combinations of the incoming symbols (received on their incoming edges) on their outgoing edges. Memory-free networks with delay that use network coding are forced to perform inter-generation network coding, as a result of which some or all sinks require a large amount of memory for decoding. In this work, we address this problem by utilizing memory elements at the internal nodes of the network as well, which reduces the number of memory elements used at the sinks. We give an algorithm which employs memory at all the nodes of the network to achieve single-generation network coding. For fixed latency, our algorithm reduces the total number of memory elements used in the network to achieve single-generation network coding. We also discuss the advantages of employing single-generation network coding together with convolutional network-error correction codes (CNECCs) for unit-delay networks, and illustrate the performance gain of CNECCs obtained by using memory at the intermediate nodes through simulations on an example network under a probabilistic network error model.
Abstract:
In this paper, we show that it is possible to reduce the complexity of Intra MB coding in H.264/AVC based on a novel chance-constrained classifier. Using pairs of simple mean-variance values, our technique reduces the complexity of the Intra MB coding process with a negligible loss in PSNR. We present an alternative approach that casts the decision as a machine learning classification problem. Implementation results show that the proposed method reduces encoding time to about 20% of the reference implementation, with an average loss of 0.05 dB in PSNR.
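A minimal sketch of the general idea of deciding, from per-block mean/variance features, when the expensive full Intra mode search can be skipped (the chance-constrained classifier itself and the threshold below are assumptions made for illustration, not the paper's trained model):

# Illustrative early-decision rule for Intra MB coding: compute per-4x4-block
# (mean, variance) pairs of a 16x16 luma macroblock and skip the full
# rate-distortion mode search when the macroblock is sufficiently smooth.
import numpy as np

def mb_features(mb):
    """16x16 luma macroblock -> (16, 2) array of per-4x4-block (mean, var)."""
    blocks = mb.reshape(4, 4, 4, 4).transpose(0, 2, 1, 3).reshape(16, 16)
    return np.stack([blocks.mean(axis=1), blocks.var(axis=1)], axis=1)

def skip_full_mode_search(mb, var_thresh=40.0):
    """True if every 4x4 block is smooth enough to try only a few modes."""
    feats = mb_features(mb)
    return bool(np.all(feats[:, 1] < var_thresh))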
Abstract:
Thanks to advances in sensor technology, many applications today (space-borne imaging, medical imaging, etc.) generate images of large sizes. Straightforward application of wavelet techniques to such images involves certain difficulties. Embedded coders such as EZW and SPIHT require that the wavelet transform of the full image be buffered for coding. Since the transform coefficients also need to be stored at high precision, the buffering requirements for large images become prohibitively high. In this paper, we first devise a technique for embedded coding of large images using zerotrees with reduced memory requirements. A 'strip buffer' capable of holding a few lines of wavelet coefficients from all the subbands belonging to the same spatial location is employed. A pipeline architecture for a line-based implementation of the above technique is then proposed. Further, an efficient algorithm to extract an encoded bitstream corresponding to a region of interest in the image has also been developed. Finally, the paper describes a strip-based non-embedded coding scheme which uses a single-pass algorithm, in order to handle high input data rates.
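A simplified sketch of the strip-based idea, keeping only one strip of pixels and its wavelet coefficients in memory at a time (the per-strip 2-D DWT via PyWavelets stands in for the paper's line-based zerotree coder; the wavelet and coarse quantization are assumptions):

# Strip-based processing of a large image: read, transform and (here, only
# coarsely quantize) one strip at a time so memory use stays bounded.
import numpy as np
import pywt

def process_in_strips(read_strip, n_strips, wavelet="haar"):
    """read_strip(i) must return strip i as a 2-D numpy array of pixels."""
    encoded = []
    for i in range(n_strips):
        strip = read_strip(i)
        cA, (cH, cV, cD) = pywt.dwt2(strip, wavelet)
        # A real coder would zerotree-encode these subbands; coarse
        # quantization is used here only to show the bounded-memory structure.
        encoded.append([np.round(c).astype(np.int16) for c in (cA, cH, cV, cD)])
    return encoded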
Abstract:
In this paper, we outline an approach to the task of designing network codes in a non-multicast setting. Our approach makes use of the concept of interference alignment. As an example, we consider the distributed storage problem, in which the data is stored across n nodes in the network, a data collector can recover the data by connecting to any k of the n nodes, and, upon failure of a node, a new node can replicate the data stored in the failed node while minimizing the repair bandwidth.