65 results for LDPC decoding


Relevance: 10.00%

Abstract:

We study the performance of Low Density Parity Check (LDPC) error-correcting codes using the methods of statistical physics. LDPC codes are based on the generation of codewords using Boolean sums of the original message bits by employing two randomly-constructed sparse matrices. These codes can be mapped onto Ising spin models and studied using common methods of statistical physics. We examine various regular constructions and obtain insight into their theoretical and practical limitations. We also briefly report on results obtained for irregular code constructions, for codes with non-binary alphabet, and on how finite system size affects the error probability.
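As a schematic illustration of the mapping the abstract describes (the matrix and word below are made-up toy values, not the paper's constructions), a parity check over bits is satisfied exactly when the product of the corresponding Ising spins s = (-1)^x equals +1:

```python
# Toy illustration: a parity check over bits x (a mod-2 sum) equals 0
# exactly when the product of the corresponding Ising spins s = (-1)^x
# equals +1.  Matrix H and word x below are arbitrary small examples.
H = [[1, 1, 0, 1],
     [0, 1, 1, 1]]          # sparse parity-check matrix (illustrative)
x = [1, 0, 1, 0]            # a binary word

syndrome = [sum(h[j] * x[j] for j in range(4)) % 2 for h in H]

spins = [(-1) ** b for b in x]            # Ising mapping x -> (-1)^x
spin_checks = []
for h in H:
    p = 1
    for j in range(4):
        if h[j]:
            p *= spins[j]
    spin_checks.append(p)

# parity satisfied (0) <=> spin product +1
assert all((s == 0) == (p == 1) for s, p in zip(syndrome, spin_checks))
print(syndrome, spin_checks)
```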

Relevance: 10.00%

Abstract:

Modern digital communication systems achieve reliable transmission by employing error-correction techniques that introduce redundancy. Low-density parity-check codes work along the same principles as the Hamming code, but the parity-check matrix is very sparse and multiple errors can be corrected. The sparseness of the matrix allows the decoding process to be carried out by probability propagation methods similar to those employed in Turbo codes. The relation between spin systems in statistical physics and digital error-correcting codes is based on the existence of a simple isomorphism between the additive Boolean group and the multiplicative binary group. Shannon proved general results on the natural limits of compression and error correction by setting up the framework known as information theory. Error-correction codes are based on mapping the original space of words onto a higher-dimensional space in such a way that the typical distance between encoded words increases.
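The isomorphism mentioned above sends the additive Boolean group ({0,1} under XOR) to the multiplicative binary group ({+1,-1} under multiplication) via x → (-1)^x; a minimal numeric check:

```python
# The map x -> (-1)**x sends XOR (addition in the Boolean group)
# to multiplication in the binary group {+1, -1}:
#     (-1)**(x XOR y) == (-1)**x * (-1)**y
for x in (0, 1):
    for y in (0, 1):
        assert (-1) ** (x ^ y) == (-1) ** x * (-1) ** y
print("isomorphism verified on all four input pairs")
```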

Relevance: 10.00%

Abstract:

Typical performance of low-density parity-check (LDPC) codes over a general binary-input output-symmetric memoryless channel is investigated using methods of statistical mechanics. The binary-input additive-white-Gaussian-noise channel and the binary-input Laplace channel are considered as specific channel noise models.
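For concreteness, the standard textbook log-likelihood ratios for BPSK (±1) inputs on these two channel models (a general illustration, not taken from the paper itself) can be sketched as:

```python
import math

def llr_awgn(y, sigma2):
    """LLR log p(y|+1)/p(y|-1) for a binary-input AWGN channel."""
    return 2.0 * y / sigma2

def llr_laplace(y, b):
    """LLR for a binary-input Laplace channel with scale b."""
    return (abs(y + 1.0) - abs(y - 1.0)) / b

# The AWGN LLR is linear in y; the Laplace LLR saturates at +-2/b
# once |y| > 1, so strong observations carry only bounded evidence.
print(llr_awgn(0.5, 1.0))       # 1.0
print(llr_laplace(0.5, 1.0))    # |1.5| - |0.5| = 1.0
print(llr_laplace(3.0, 1.0))    # |4| - |2| = 2.0 (saturated)
```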

Relevance: 10.00%

Abstract:

We review recent theoretical progress on the statistical mechanics of error correcting codes, focusing on low-density parity-check (LDPC) codes in general, and on Gallager and MacKay-Neal codes in particular. By exploiting the relation between LDPC codes and Ising spin systems with multispin interactions, one can carry out a statistical mechanics based analysis that determines the practical and theoretical limitations of various code constructions, corresponding to dynamical and thermodynamical transitions, respectively, as well as the behaviour of error-exponents averaged over the corresponding code ensemble as a function of channel noise. We also contrast the results obtained using methods of statistical mechanics with those derived in the information theory literature, and show how these methods can be generalized to include other channel types and related communication problems.

Relevance: 10.00%

Abstract:

Iterative multiuser joint decoding based on exact Belief Propagation (BP) is analyzed in the large system limit by means of the replica method. It is shown that performance can be improved by appropriate power assignment to the users. The optimum power assignment can be found by linear programming in most technically relevant cases. The performance of BP iterative multiuser joint decoding is compared to suboptimum approximations based on Interference Cancellation (IC). While IC receivers show a significant loss for equal-power users, they yield performance close to BP under optimum power assignment.
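A toy sketch of the interference-cancellation idea (the powers, noise level and seed are illustrative assumptions, not the paper's large-system setting): the stronger user is detected first, its contribution subtracted, and the weaker user detected from the residual.

```python
# Toy successive interference cancellation (IC) for two synchronous
# users on a scalar channel; powers and noise level are assumed values.
import math, random

random.seed(0)
p1, p2 = 4.0, 1.0                    # unequal user powers (assumed)
b1, b2 = 1, -1                       # transmitted bits (+-1)
noise = random.gauss(0.0, 0.3)
y = math.sqrt(p1) * b1 + math.sqrt(p2) * b2 + noise

# Detect the strongest user first, subtract, then detect the weaker one.
b1_hat = 1 if y >= 0 else -1
residual = y - math.sqrt(p1) * b1_hat
b2_hat = 1 if residual >= 0 else -1
print(b1_hat, b2_hat)
```

With the assumed power gap, both bits are recovered; with equal powers the first hard decision becomes unreliable, which is the loss the abstract attributes to IC receivers.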

Relevance: 10.00%

Abstract:

A domain independent ICA-based approach to watermarking is presented. This approach can be used on images, music or video to embed either a robust or fragile watermark. In the case of robust watermarking, the method shows high information rate and robustness against malicious and non-malicious attacks, while keeping a low induced distortion. The fragile watermarking scheme, on the other hand, shows high sensitivity to tampering attempts while keeping the requirement for high information rate and low distortion. The improved performance is achieved by employing a set of statistically independent sources (the independent components) as the feature space and principled statistical decoding methods. The performance of the suggested method is compared to other state-of-the-art approaches. The paper focuses on applying the method to digitized images although the same approach can be used for other media, such as music or video.

Relevance: 10.00%

Abstract:

Security and reliability of LDPC based public-key cryptosystems are discussed and analysed. We study attacks on the cryptosystem when partial knowledge of one or more of the private key components and/or of the plaintext have been acquired.

Relevance: 10.00%

Abstract:

We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time sharing methods when algebraic codes are used. The statistical physics based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC based time sharing codes while the best performance, when received transmissions are optimally decoded, is bounded by the time sharing limit.
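The time-sharing baseline mentioned above can be illustrated with standard BSC capacities (the flip rates below are assumed for illustration): a fraction λ of channel uses is devoted to the good user and the remainder to the bad one.

```python
import math

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    return 1.0 - h2(p)

# Degraded broadcast: the "good" user sees flip rate p1, the "bad" one p2 > p1.
p1, p2 = 0.05, 0.15
C1, C2 = bsc_capacity(p1), bsc_capacity(p2)

# Time sharing spends a fraction lam of channel uses on user 1:
for lam in (0.0, 0.5, 1.0):
    R1, R2 = lam * C1, (1 - lam) * C2
    print(f"lam={lam}: R1={R1:.3f}, R2={R2:.3f}")
```

Sweeping λ traces the straight-line time-sharing boundary that, per the abstract, upper-bounds the optimally decoded scheme.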

Relevance: 10.00%

Abstract:

In this research the recovery of a DQPSK signal will be demonstrated using a single Mach-Zehnder Interferometer (MZI). By changing the phase delay in one of the arms it will be shown that different delays will produce different output levels. It will also be shown that, with a certain level of phase shift, the DQPSK signal can be converted into four different equally spaced optical power levels, with each decoded level representing one of the four possible bit permutations. By using this additional phase shift in one of the arms the number of MZIs required for decoding can be reduced from two to one.
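A toy numeric check of the idea, assuming an ideal MZI constructive-port intensity I = (1 + cos Δφ)/2 and a bias of arctan(1/3) in one arm (an illustrative choice, not necessarily the value used in the research): the four DQPSK phase differences then map onto four equally spaced power levels.

```python
import math

theta = math.atan(1.0 / 3.0)           # assumed extra phase bias in one arm
dqpsk_phases = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]

# Ideal MZI constructive-port intensity: I = (1 + cos(dphi + theta)) / 2
levels = sorted((1 + math.cos(d + theta)) / 2 for d in dqpsk_phases)
gaps = [b - a for a, b in zip(levels, levels[1:])]
print([round(v, 4) for v in levels])
print([round(g, 4) for g in gaps])     # equal gaps: four equally spaced levels
```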

Relevance: 10.00%

Abstract:

In this chapter we outline a sensory-linguistic approach to the study of reading skill development. We call this a sensory-linguistic approach because the focus of interest is on the relationship between basic sensory processing skills and the ability to extract efficiently the orthographic and phonological information available in text during reading. Our review discusses how basic sensory processing deficits are associated with developmental dyslexia, and how these impairments may degrade word-decoding skills. We then review studies that demonstrate a more direct relationship between sensitivity to particular types of auditory and visual stimuli and the normal development of literacy skills. Specifically, we suggest that the phonological and orthographic skills engaged while reading are constrained by the ability to detect and discriminate dynamic stimuli in the auditory and visual systems respectively.

Relevance: 10.00%

Abstract:

We present a mean field theory of code-division multiple access (CDMA) systems with error-control coding. On the basis of the relation between the free energy and mutual information, we obtain an analytical expression of the maximum spectral efficiency of the coded CDMA system, from which a mean field description of the coded CDMA system is provided in terms of a bank of scalar Gaussian channels whose variances in general vary at different code symbol positions. Regular low-density parity-check (LDPC)-coded CDMA systems are also discussed as an example of the coded CDMA systems.
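A rough sketch of the "bank of scalar Gaussian channels" picture (the variances are assumed values, and the Gaussian-input channel capacity is used here as a stand-in for the mutual information of the actual coded symbols):

```python
import math

def scalar_capacity(snr):
    """Capacity (bits/use) of a scalar Gaussian channel at the given SNR."""
    return 0.5 * math.log2(1.0 + snr)

# Mean-field picture: one scalar Gaussian channel per code symbol position,
# with variances (hence SNRs) that vary across positions.  Values assumed.
variances = [0.5, 1.0, 2.0]
signal_power = 1.0
total = sum(scalar_capacity(signal_power / v) for v in variances)
print(round(total, 4))
```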

Relevance: 10.00%

Abstract:

Sparse code division multiple access (CDMA), a variation on the standard CDMA method in which the spreading (signature) matrix contains only a relatively small number of nonzero elements, is presented and analysed using methods of statistical physics. The analysis provides results on the performance of maximum likelihood decoding for sparse spreading codes in the large system limit. We present results for both cases of regular and irregular spreading matrices for the binary additive white Gaussian noise channel (BIAWGN) with a comparison to the canonical (dense) random spreading code. © 2007 IOP Publishing Ltd.
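A regular sparse spreading (signature) matrix of the kind analysed can be sketched as follows; the dimensions and the number of nonzero chips per user are illustrative assumptions:

```python
import random

random.seed(1)

def sparse_signature_matrix(n_chips, n_users, per_user):
    """Regular sparse spreading: each user's signature has exactly
    `per_user` nonzero +-1 chips; all other entries are zero."""
    S = [[0] * n_users for _ in range(n_chips)]
    for u in range(n_users):
        for row in random.sample(range(n_chips), per_user):
            S[row][u] = random.choice((-1, 1))
    return S

S = sparse_signature_matrix(n_chips=8, n_users=4, per_user=3)
nonzeros_per_user = [sum(1 for row in S if row[u] != 0) for u in range(4)]
print(nonzeros_per_user)   # every column has exactly 3 nonzeros: regular
```

An irregular construction would instead draw a different `per_user` count for each column from some degree distribution.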

Relevance: 10.00%

Abstract:

In this thesis we use statistical physics techniques to study the typical performance of four families of error-correcting codes based on very sparse linear transformations: Sourlas codes, Gallager codes, MacKay-Neal codes and Kanter-Saad codes. We map the decoding problem onto an Ising spin system with many-spin interactions. We then employ the replica method to calculate averages over the quenched disorder represented by the code constructions, the arbitrary messages and the random noise vectors. We find, as the noise level increases, a phase transition between successful decoding and failure phases. This phase transition coincides with upper bounds derived in the information theory literature in most of the cases. We connect the practical decoding algorithm known as probability propagation with the task of finding local minima of the related Bethe free-energy. We show that the practical decoding thresholds correspond to noise levels where suboptimal minima of the free-energy emerge. Simulations of practical decoding scenarios using probability propagation agree with theoretical predictions of the replica symmetric theory. The typical performance predicted by the thermodynamic phase transitions is shown to be attainable in computation times that grow exponentially with the system size. We use the insights obtained to design a method to calculate the performance and optimise parameters of the high performance codes proposed by Kanter and Saad.
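The probability-propagation (sum-product) decoding referred to above can be sketched on a toy parity-check code; the matrix, channel LLR magnitude and iteration count below are illustrative assumptions, not the thesis's constructions:

```python
import math

# Toy (6,3) parity-check matrix; rows are checks, columns are bits.
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]
codeword = [1, 1, 0, 0, 1, 1]          # satisfies all three parity checks
received = [0, 1, 0, 0, 1, 1]          # bit 0 flipped by the channel

L = 2.0                                 # channel LLR magnitude (BSC, assumed)
llr = [L if r == 0 else -L for r in received]

edges = [(c, v) for c, row in enumerate(H) for v, h in enumerate(row) if h]
m_cv = {e: 0.0 for e in edges}          # check-to-variable messages

for _ in range(5):                      # flooding sum-product iterations
    # variable-to-check: channel LLR plus messages from the *other* checks
    m_vc = {(c, v): llr[v] + sum(m_cv[(c2, v2)] for (c2, v2) in edges
                                 if v2 == v and c2 != c)
            for (c, v) in edges}
    # check-to-variable: tanh rule over the other variables in the check
    for (c, v) in edges:
        prod = 1.0
        for (c2, v2) in edges:
            if c2 == c and v2 != v:
                prod *= math.tanh(m_vc[(c2, v2)] / 2.0)
        m_cv[(c, v)] = 2.0 * math.atanh(prod)

posterior = [llr[v] + sum(m_cv[(c, v2)] for (c, v2) in edges if v2 == v)
             for v in range(6)]
decoded = [0 if p >= 0 else 1 for p in posterior]
print(decoded)                          # the single flipped bit is corrected
```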

Relevance: 10.00%

Abstract:

Dyslexia as a concept is defined and reviewed in a context of psychological, neurological and educational processes. In the present investigation these processes are recognised, but emphasis is placed on dyslexia as a phenomenon of a written language system. The type of script system involved in the phenomenon is that of an alphabetic code representing phonological elements of language in script form related to meaning. The nature of this system is viewed in the light of current linguistic and psycholinguistic studies. These studies, based as they are on an analysis of underlying written language structures, provide a framework for examining the arbitrary and rule-governed system which a young child is expected to acquire. There appear to be fundamental implications for reading, spelling and writing processes; for example, an alphabetic system requires recognition of consistent script-phonetic relationships, 'mediated word identification' and, in particular, uni-directional sensory and motor modes of perceiving. These are critical maturational factors in the young learner. The skills needed by the child for decoding and encoding such a phonemic script are described in a psychological and neuropsychological framework. Evidence for individual differences in these skills is noted, and the category of the dyslexic-type learner emerges. Incidence is related to the probability that individual differences in lateralisation of brain function do not favour the acquisition of our script system in some cases. Dyslexia is therefore regarded as a primary difficulty consequent upon the incompatibility between the written language system itself and the intrinsic, developmental skills of an individual's perceptual/motor system. It is recognised that secondary stresses, e.g. socio-cultural deprivation, low intellectual potential or emotional trauma, can further inhibit the learning process. Symptomology of a dyslexic syndrome is described.
The symptomology is seen by the writer to constitute a clinical entity, a specific category of learning difficulty for which predictive and diagnostic procedures could be devised for classroom use. Consequently an index of relevant test items has been compiled, based upon key clinical experiences and theoretical writings. This instrument, known as the Aston Index, is presented and discussed. The early stages of validation are reported and the proposed longitudinal studies are described. The aim is to give teachers in the classroom the power and understanding to plan more effectively the earliest stages of teaching and learning; in particular, to provide the means of matching the nature of the skill to be acquired with the underlying developmental patterns of each individual learner.

Relevance: 10.00%

Abstract:

This thesis includes analysis of disordered spin ensembles corresponding to Exact Cover, a multi-access channel problem, and composite models combining sparse and dense interactions. The satisfiability problem in Exact Cover is addressed using a statistical analysis of a simple branch and bound algorithm. The algorithm can be formulated in the large system limit as a branching process, for which critical properties can be analysed. Far from the critical point a set of differential equations may be used to model the process, and these are solved by numerical integration and exact bounding methods. The multi-access channel problem is formulated as an equilibrium statistical physics problem for the case of bit transmission on a channel with power control and synchronisation. A sparse code division multiple access method is considered; the optimal detection properties are examined in the typical case by use of the replica method, and compared to the detection performance achieved by iterative decoding methods. These codes are found to exhibit phenomena closely resembling those of the well-understood dense codes. The composite model is introduced as an abstraction of canonical sparse and dense disordered spin models. The model includes couplings due to both dense and sparse topologies simultaneously. The new type of code is shown to outperform both sparse and dense codes in some regimes, both in optimal performance and in the performance achieved by iterative detection methods in finite systems.
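The branch and bound approach to Exact Cover can be sketched as a recursive backtracking search with pruning (the instance below is an assumed toy example, not one studied in the thesis):

```python
def exact_cover(universe, sets, chosen=()):
    """Return a tuple of set-indices whose union partitions `universe`,
    or None.  Branches on the sets covering the smallest uncovered
    element; sets that clash with already-covered elements are pruned."""
    if not universe:
        return chosen
    e = min(universe)                    # branch on an uncovered element
    for i, s in enumerate(sets):
        if e in s and s <= universe:     # prune: s must fit what remains
            result = exact_cover(universe - s, sets, chosen + (i,))
            if result is not None:
                return result
    return None

# Illustrative instance: cover {1..7} with pairwise-disjoint subsets.
sets = [frozenset({1, 4, 7}), frozenset({1, 4}), frozenset({4, 5, 7}),
        frozenset({3, 5, 6}), frozenset({2, 3, 6, 7}), frozenset({2, 7})]
solution = exact_cover(frozenset(range(1, 8)), sets)
print(solution)                          # indices of a valid exact cover
```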