958 results for Information complexity


Relevance: 30.00%

Publisher:

Abstract:

Since the days of the Digital Subscriber Line (DSL), Time Domain Equalizers (TEQs) have been used to combat time-dispersive channels in multicarrier systems. In this paper, we propose computationally inexpensive techniques to recompute TEQ weights in the presence of changes in the channel, especially over fast-fading channels. The techniques use no extra information beyond the perturbation itself and provide excellent approximations to the new TEQ weights. The proposed adaptation techniques are shown to perform admirably for small channel changes in OFDM systems.
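
The abstract does not spell out the adaptation rule. As one illustration of how equalizer weights can be recomputed cheaply after a small channel change, the sketch below applies a rank-one (Sherman-Morrison) update to the inverse correlation matrix of an MMSE-style design, avoiding a full re-inversion; all names, dimensions, and the rank-one perturbation model are hypothetical, not taken from the paper.

```python
import numpy as np

def sherman_morrison_update(R_inv, u, v):
    """Rank-one update of an inverse: (R + u v^T)^-1 from R^-1 in O(n^2)."""
    Ru = R_inv @ u
    vR = v @ R_inv
    return R_inv - np.outer(Ru, vR) / (1.0 + v @ Ru)

# Toy setup: TEQ weights w solve R w = p (MMSE-style normal equations).
rng = np.random.default_rng(0)
n = 16
A = rng.standard_normal((n, n))
R = A @ A.T + n * np.eye(n)           # correlation matrix (well conditioned)
p = rng.standard_normal(n)
R_inv = np.linalg.inv(R)
w = R_inv @ p                         # baseline TEQ weights

# Small channel change modelled as a rank-one perturbation R + u u^T.
u = 0.1 * rng.standard_normal(n)
R_inv_new = sherman_morrison_update(R_inv, u, u)
w_new = R_inv_new @ p                 # updated weights, no O(n^3) re-inversion

print(np.allclose(w_new, np.linalg.solve(R + np.outer(u, u), p)))  # True
```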

Relevance: 30.00%

Publisher:

Abstract:

Decoding of linear space-time block codes (STBCs) with sphere decoding (SD) is well known. A fast version of SD, known as fast sphere decoding (FSD), has recently been studied by Biglieri, Hong and Viterbo. Viewing a linear STBC as a vector space spanned by its defining weight matrices over the real number field, we define a quadratic form (QF) on this vector space, called the Hurwitz-Radon QF (HRQF), and give a QF interpretation of the FSD complexity of a linear STBC. It is shown that the FSD complexity is a function only of the weight matrices defining the code and their ordering, and not of the channel realization (even though the equivalent channel used in SD depends on the channel realization) or the number of receive antennas. It is also shown that the FSD complexity is completely captured in a single matrix obtained from the HRQF. Moreover, for a given set of weight matrices, an algorithm to obtain an ordering of them that leads to the least FSD complexity is presented. The well-known classes of low-FSD-complexity codes (multigroup decodable, fast-decodable and fast group decodable codes) are placed in the HRQF framework.
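
As a sketch of how such a matrix can be computed, the snippet below assumes the HRQF entry for a pair of weight matrices measures their Hurwitz-Radon orthogonality defect, $\|A_iA_j^H + A_jA_i^H\|_F^2$ (an assumption for illustration, not a formula quoted from the paper), and evaluates it for the Alamouti code, whose four real-symbol weight matrices are pairwise HR-orthogonal.

```python
import numpy as np

def hrqf_matrix(weights):
    """HRQF-style matrix: m_ij = ||A_i A_j^H + A_j A_i^H||_F^2 (assumed form).
    A zero off-diagonal entry marks a Hurwitz-Radon orthogonal pair, which
    is what permits splitting symbols into independently decoded groups."""
    k = len(weights)
    M = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            S = weights[i] @ weights[j].conj().T \
                + weights[j] @ weights[i].conj().T
            M[i, j] = np.linalg.norm(S, 'fro') ** 2
    return M

# Weight matrices of the Alamouti code over its four real symbols.
I = 1j
A = [np.array([[1, 0], [0, 1]], dtype=complex),
     np.array([[I, 0], [0, -I]]),
     np.array([[0, 1], [-1, 0]], dtype=complex),
     np.array([[0, I], [I, 0]])]

print(hrqf_matrix(A))  # off-diagonal zeros: every pair is HR-orthogonal
```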

Relevance: 30.00%

Publisher:

Abstract:

Although it is well known that extremely long low-density parity-check (LDPC) codes perform exceptionally well in error-correction applications, short-length codes are preferable in practice. However, short-length LDPC codes suffer from performance degradation owing to graph-based impairments such as short cycles, trapping sets and stopping sets in the bipartite graph of the LDPC matrix. In particular, performance degradation at moderate to high $E_b/N_0$ is caused by oscillations in bit-node a posteriori probabilities induced by short cycles and trapping sets in the bipartite graph. In this study, a computationally efficient algorithm is proposed to improve the performance of short-length LDPC codes at moderate to high $E_b/N_0$. The algorithm makes use of the information generated by the belief propagation (BP) algorithm in the iterations before a decoding failure occurs. Using this information, a reliability-based estimate is formed at each bit node to supplement the BP algorithm. The proposed algorithm gives an appreciable coding gain over BP decoding for LDPC codes of rate 1/2 or less. The coding gains are modest to significant for regular LDPC codes optimised for bipartite-graph conditioning, and large for unoptimised codes. Hence, the algorithm is useful for relaxing stringent constraints on the graphical structure of the LDPC code and for developing hardware-friendly designs.
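
The abstract does not give the estimator itself. As one illustration of reusing BP information from earlier iterations, the sketch below averages each bit's posterior LLRs over iterations (damping cycle-induced oscillations) and scores reliability by sign consistency; the function and toy data are hypothetical, not the paper's method.

```python
import numpy as np

def reliability_estimate(llr_history):
    """Given per-iteration posterior LLRs (iters x bits) from a failed BP
    run, form a smoothed decision per bit. Averaging damps the oscillations
    that short cycles and trapping sets induce; sign consistency across
    iterations serves as a reliability score."""
    llr_history = np.asarray(llr_history)
    avg_llr = llr_history.mean(axis=0)                 # damped posterior
    sign_consistency = np.abs(np.sign(llr_history).mean(axis=0))
    hard = (avg_llr < 0).astype(int)                   # 1 iff avg LLR < 0
    return hard, sign_consistency

# Toy history: bits 0 and 2 are stable, bit 1 oscillates (trapped).
hist = [[+4.0, +1.2, -3.0],
        [+4.2, -1.5, -3.1],
        [+3.9, +1.1, -2.9],
        [+4.1, -1.4, -3.2]]
bits, rel = reliability_estimate(hist)
print(bits, rel)  # oscillating bit 1 gets a low reliability score
```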

Relevance: 30.00%

Publisher:

Abstract:

In systems biology, questions concerning the molecular and cellular makeup of an organism are of utmost importance, especially when trying to understand how unreliable components (genetic circuits, biochemical cascades, and ion channels, among others) enable reliable and adaptive behaviour. The repertoire and speed of biological computations are limited by thermodynamic or metabolic constraints. An example can be found in neurons, where fluctuations in biophysical states limit the information they can encode; an estimated 20-60% of the brain's total energy budget is used for signalling, either via action potentials or synaptic transmission. Here, we consider the imperatives for neurons to optimise computational and metabolic efficiency, wherein benefits and costs trade off against each other in the context of self-organised and adaptive behaviour. In particular, we link the information-theoretic (variational) and thermodynamic (Helmholtz) free-energy formulations of neuronal processing and show how they are related in a fundamental way through a complexity-minimisation lemma.
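
The complexity theme can be made concrete with the standard decomposition of variational free energy into complexity minus accuracy, $F = \mathrm{KL}(q\,\|\,p) - E_q[\log p(o\mid s)]$: minimising $F$ trades off fitting the observation against deviating from the prior. A minimal numeric sketch for a discrete latent state (toy numbers, not from the paper):

```python
import numpy as np

def free_energy(q, prior, log_lik):
    """Variational free energy for a discrete latent state s:
       F = KL(q(s) || p(s)) - E_q[log p(o|s)]   (complexity - accuracy).
    The complexity term penalises posteriors far from the prior."""
    q, prior = np.asarray(q), np.asarray(prior)
    complexity = np.sum(q * np.log(q / prior))
    accuracy = np.sum(q * log_lik)
    return complexity - accuracy

prior = np.array([0.5, 0.5])               # two hidden states
log_lik = np.log(np.array([0.9, 0.2]))     # p(observation | state)
for q1 in (0.5, 0.7, 0.9):                 # candidate posteriors q(s=1)
    q = np.array([q1, 1 - q1])
    print(q1, free_energy(q, prior, log_lik))
# F is lowest for the candidate closest to the exact posterior (~0.82).
```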

Relevance: 30.00%

Publisher:

Abstract:

A space-time block code (STBC) is said to be multigroup decodable if the information symbols it encodes can be partitioned into two or more groups such that each group of symbols can be maximum-likelihood (ML) decoded independently of the other groups. In this paper, we show that the upper triangular matrix encountered during sphere decoding of a linear dispersion STBC can be rank-deficient even when the rate of the code is less than the minimum of the number of transmit and receive antennas. We then show that all known families of high-rate (rate greater than one) multigroup decodable codes have a rank-deficient matrix even when the rate is less than the number of transmit and receive antennas, and that this rank-deficiency problem arises only in asymmetric MIMO systems, where the number of receive antennas is strictly less than the number of transmit antennas. Unlike for codes with a full-rank matrix, the complexity of the sphere-decoding-based ML decoder for STBCs with a rank-deficient matrix is polynomial in the constellation size, and hence high. We derive the ML sphere-decoding complexity of most known high-rate multigroup decodable codes and show that, for each code, the complexity is a decreasing function of the number of receive antennas.
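
A sketch of where the matrix in question comes from: sphere decoding QR-factorises the real equivalent channel whose columns are built from the weight matrices, and rank deficiency appears as zeros on the diagonal of the triangular factor. The snippet below only demonstrates the detection mechanics, forcing deficiency with random weight matrices by a simple dimension count; the paper's point concerns structured codes that are deficient even when this count does not force it. All sizes are hypothetical.

```python
import numpy as np

def equivalent_channel(H, weights):
    """Real equivalent channel of a linear dispersion code X = sum_k s_k A_k:
    column k is [Re(vec(H A_k)); Im(vec(H A_k))] for real symbol s_k.
    Sphere decoding QR-factorises this matrix; a rank-deficient matrix
    yields zeros on the diagonal of the triangular factor."""
    cols = [(H @ A).reshape(-1) for A in weights]
    return np.array([np.concatenate([c.real, c.imag]) for c in cols]).T

rng = np.random.default_rng(1)
nt, nr, T, K = 4, 2, 4, 20            # K real symbols, 2*nr*T = 16 dimensions
H = rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))
weights = [rng.standard_normal((nt, T)) + 1j * rng.standard_normal((nt, T))
           for _ in range(K)]
G = equivalent_channel(H, weights)
print('symbols:', K, ' rank:', np.linalg.matrix_rank(G))  # 16 < 20: deficient
```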

Relevance: 30.00%

Publisher:

Abstract:

Communication complexity refers to the minimum rate of public communication required for generating a maximal-rate secret key (SK) in the multiterminal source model of Csiszár and Narayan. Tyagi recently characterized this communication complexity for a two-terminal system. We extend the ideas in Tyagi's work to derive a lower bound on communication complexity in the general multiterminal setting. In the important special case of the complete-graph pairwise independent network (PIN) model, our bound allows us to determine the exact linear communication complexity, i.e., the communication complexity when the communication and the SK are restricted to be linear functions of the randomness available at the terminals.
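
To make "public communication for SK generation" concrete, here is a toy instance of the PIN model on the complete graph with three terminals: one XOR-masked broadcast propagates an edge bit to the third terminal without leaking it. This illustrates the model only, not the paper's bound.

```python
import secrets

# Toy PIN model on K3: each pair of terminals shares an independent
# uniform bit (edge randomness), unknown to the eavesdropper.
b12 = secrets.randbits(1)   # shared by terminals 1 and 2
b13 = secrets.randbits(1)   # shared by terminals 1 and 3
b23 = secrets.randbits(1)   # shared by terminals 2 and 3 (unused here)

# Terminal 1 publicly broadcasts one masked bit so that terminal 3 can
# recover b12. The mask b13 is uniform and independent of b12, so the
# public message reveals nothing about the key.
public_msg = b12 ^ b13

key_at_1 = b12
key_at_2 = b12
key_at_3 = public_msg ^ b13

assert key_at_1 == key_at_2 == key_at_3
print("1-bit secret key agreed with 1 bit of public communication")
```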

Relevance: 30.00%

Publisher:

Abstract:

We consider optimal power-allocation policies for a single-server, multiuser system in which power is consumed in data transmission only and the transmission channel may experience multipath fading. We obtain very efficient, low-complexity algorithms that minimize power while ensuring stability of the data queues. We also obtain policies for the case where users have mean-delay constraints. If the required power is a linear function of the rate, we exploit this linearity to obtain low-complexity linear programs.
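
As an illustration of the linear-programming structure alluded to at the end (all numbers hypothetical, not from the paper): choose per-fading-state rates that meet each user's mean arrival rate while minimising average power, with power linear in rate.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative numbers: 2 users, 3 fading states.
pi = np.array([0.3, 0.4, 0.3])          # fading-state probabilities
lam = np.array([1.0, 0.6])              # mean arrival rates (stability)
cost = np.array([[1.0, 0.5, 2.0],       # power per unit rate, user x state
                 [0.8, 1.5, 0.4]])      # (worse channel => higher cost)
U, S = cost.shape

# Variables r[u, s] flattened; objective = average power
#   sum_s pi_s * sum_u cost[u, s] * r[u, s].
c_obj = (cost * pi).reshape(-1)

# Stability: sum_s pi_s r[u, s] >= lam[u], written as <= for linprog.
A_ub = np.zeros((U, U * S))
for u in range(U):
    A_ub[u, u * S:(u + 1) * S] = -pi
b_ub = -lam

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (U * S))
print(res.x.reshape(U, S))              # each user served in cheapest state
print('average power:', res.fun)
```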

Relevance: 30.00%

Publisher:

Abstract:

We consider near-optimal policies for a single user transmitting on a wireless channel that minimize the average queue length under an average power constraint. Power is consumed in data transmission only, and we consider the case where the transmit power is a linear function of the data transmitted; the channel may experience multipath fading. We later extend these results to the multiuser case and show that our policies can be used in a system with energy-harvesting sources at the transmitter. Next, we consider data users that require minimum rate guarantees, and finally a system with both data and real-time users. Our policies have low computational complexity and closed-form expressions for mean delays, and they require only the mean arrival rate, with no queue-length information.

Relevance: 30.00%

Publisher:

Abstract:

Sequential Monte Carlo (SMC) methods are popular computational tools for Bayesian inference in non-linear, non-Gaussian state-space models. For this class of models, we propose SMC algorithms to compute the score vector and observed information matrix recursively in time. We propose two different SMC implementations, one with computational complexity $\mathcal{O}(N)$ and the other with complexity $\mathcal{O}(N^{2})$, where $N$ is the number of importance sampling draws. Although cheaper, the performance of the $\mathcal{O}(N)$ method degrades quickly in time, as it inherently relies on the SMC approximation of a sequence of probability distributions whose dimension increases linearly with time. In particular, even under strong mixing assumptions, the variance of the estimates computed with the $\mathcal{O}(N)$ method increases at least quadratically in time. The $\mathcal{O}(N^{2})$ method is a non-standard SMC implementation that does not suffer from this rapid degradation. We then show how both methods can be used to perform batch and recursive parameter estimation.
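
A minimal sketch of the $\mathcal{O}(N)$ approach for a toy linear-Gaussian model, assuming the path-space form of Fisher's identity (the score is the filtered expectation of the accumulated gradient of the log joint). Resampling copies the accumulators along ancestral paths, which is exactly the path degeneracy the abstract warns about; model and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
phi, q, r, T, N = 0.8, 1.0, 1.0, 50, 2000   # hypothetical model and sizes

# Simulate y_{1:T} from the linear-Gaussian model
#   x_t = phi * x_{t-1} + N(0, q),   y_t = x_t + N(0, r),   x_0 = 0.
x_prev, y = 0.0, np.zeros(T)
for t in range(T):
    x_prev = phi * x_prev + rng.normal(0.0, np.sqrt(q))
    y[t] = x_prev + rng.normal(0.0, np.sqrt(r))

# O(N) bootstrap particle filter carrying, per particle path, the running
# gradient of the log joint w.r.t. phi (Fisher's identity):
#   d/dphi log p(x_{0:T}, y_{1:T}) = sum_t (x_t - phi x_{t-1}) x_{t-1} / q.
xp = np.zeros(N)                      # particles, x_0 = 0
alpha = np.zeros(N)                   # per-path score accumulators
for t in range(T):
    xnew = phi * xp + rng.normal(0.0, np.sqrt(q), N)
    alpha = alpha + (xnew - phi * xp) * xp / q
    logw = -0.5 * (y[t] - xnew) ** 2 / r
    w = np.exp(logw - logw.max()); w /= w.sum()
    idx = rng.choice(N, N, p=w)       # resampling copies accumulators along
    xp, alpha = xnew[idx], alpha[idx] # ancestral paths: the degeneracy
                                      # behind the growing variance

print('score estimate at phi = 0.8:', alpha.mean())
```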

Relevance: 30.00%

Publisher:

Abstract:

Planning in design processes is modeled in terms of connectivities between product developments, where each product development comprises a network of processes. Similarity between processes is analysed through a layered classification ranging from common components to shared design knowledge. The connectivities between products arising from these similarities are represented by a multidimensional network, and design planning is described by flows, or 'traffic', on this network, which constitutes a structural model of complexity. A comparison is made with information-based measures of the complexity of designs and processes.

Relevance: 30.00%

Publisher:

Abstract:

Storage systems are widely used and play a crucial role in both consumer and industrial products, for example personal computers, data centers, and embedded systems. However, such systems suffer from issues of cost, restricted lifetime, and reliability with the emergence of new systems and devices, such as distributed storage and flash memory, respectively. Information theory, on the other hand, provides fundamental bounds and solutions for fully utilizing resources such as data density, information I/O, and network bandwidth. This thesis bridges these two topics and proposes to solve challenges in data storage using a variety of coding techniques, so that storage becomes faster, more affordable, and more reliable.

We consider the system level and study the integration of RAID schemes and distributed storage. Erasure-correcting codes are the basis of the ubiquitous RAID schemes for storage systems, where disks correspond to symbols in the code and are located in a (distributed) network. Specifically, RAID schemes are based on MDS (maximum distance separable) array codes that enable optimal storage together with efficient encoding and decoding algorithms. With r redundancy symbols, an MDS code can sustain r erasures. For example, consider an MDS code that can correct two erasures. It is clear that when two symbols are erased, one needs to access and transmit all the remaining information to rebuild the erasures. However, an interesting and practical question is: what is the smallest fraction of information that one needs to access and transmit in order to correct a single erasure? In Part I we show that the lower bound of 1/2 is achievable and that the result generalizes to codes with an arbitrary number of parities and optimal rebuilding.
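
A toy instance of the 1/2 rebuilding ratio, in the spirit of the array codes studied in Part I (a zigzag-style pairing chosen for illustration, not the thesis's actual construction): two data nodes and two parity nodes with two rows each, where a single failed node is rebuilt by reading only half of each surviving node.

```python
# Toy (k=2 data, r=2 parity) array code over GF(2), two rows per node.
a = [1, 0]                        # data node A: rows a0, a1
b = [0, 1]                        # data node B: rows b0, b1
row = [a[0] ^ b[0], a[1] ^ b[1]]  # row-parity node R
zig = [a[0] ^ b[1], a[1] ^ b[0]]  # zigzag-parity node Z (permuted pairing)

# Node A fails. Naive repair reads all 6 surviving rows; instead we read
# only b0, r0 and z1, i.e. half of each surviving node (3 of 6 rows).
b0, r0, z1 = b[0], row[0], zig[1]
a0 = r0 ^ b0                      # from r0 = a0 + b0
a1 = z1 ^ b0                      # from z1 = a1 + b0

assert [a0, a1] == a
print("rebuilt node A reading 1/2 of the remaining information")
```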

We consider the device level and study coding and modulation techniques for emerging non-volatile memories such as flash memory. In particular, rank modulation is a novel data-representation scheme proposed by Jiang et al. for multi-level flash memory cells, in which a set of n cells stores information in the permutation induced by the different charge levels of the individual cells. It eliminates the need for discrete cell levels, as well as overshoot errors, when programming cells. In order to decrease the decoding complexity, we propose two variations of this scheme in Part II: bounded rank modulation, where only small sliding windows of cells are sorted to generate permutations, and partial rank modulation, where only part of the n cells are used to represent data. We study limits on the capacity of bounded rank modulation, propose encoding and decoding algorithms, and show that overlaps between windows increase capacity. We also present Gray codes spanning all possible partial-rank states using only "push-to-the-top" operations. These Gray codes turn out to solve an open combinatorial problem, the universal cycle: a sequence of integers generating all possible partial permutations.
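
A sketch of the two demodulation views described above (window size and step below are hypothetical choices for illustration): rank modulation reads data off the permutation induced by sorting the cell charges, and bounded rank modulation sorts only small, possibly overlapping windows.

```python
import numpy as np

def rank_demodulate(charges):
    """Rank modulation: data lives in the permutation induced by sorting
    the analog cell charges, so no discrete threshold levels are needed."""
    return tuple(np.argsort(np.argsort(charges)))   # rank of each cell

def bounded_rank_demodulate(charges, window, step):
    """Bounded rank modulation (sketch): sort only small sliding windows,
    so the decoder never ranks all n cells at once. Overlapping windows
    (step < window) preserve extra cross-window comparisons, which is why
    overlaps increase capacity."""
    n = len(charges)
    return [rank_demodulate(charges[i:i + window])
            for i in range(0, n - window + 1, step)]

cells = np.array([0.9, 0.1, 0.5, 0.7, 0.3, 0.8])
print(rank_demodulate(cells))                # full permutation of 6 cells
print(bounded_rank_demodulate(cells, 4, 2))  # overlapping windows of 4
```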