56 results for Decoding
in Aston University Research Archive
Abstract:
We employ two different methods, based on belief propagation and TAP, for decoding corrupted messages encoded by Sourlas's method, in which the code word comprises products of K bits selected randomly from the original message. We show that the equations obtained by the two approaches are similar and provide the same solution as the one obtained by the replica approach for the case K=2. However, we also show that for K>=3 and unbiased messages the iterative solution is sensitive to the initial conditions and is likely to provide erroneous solutions, and that it is generally beneficial to use Nishimori's temperature, especially in the case of biased messages.
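The decoding problem described above can be illustrated with a small, self-contained sketch. Everything below is an assumption chosen for illustration (message length, number of checks, noise level, and random seed are arbitrary toy values): instead of belief propagation or TAP, the posterior marginals are computed by exhaustive enumeration, which is only feasible at this toy size, and K=3 is used because for K=2 a global spin flip leaves every K-bit product unchanged, making the marginals vanish identically.

```python
import itertools, math, random

random.seed(0)

N, K, num_checks = 8, 3, 24      # toy sizes, not from the paper
p = 0.1                          # BSC flip probability

# Original message as Ising spins; each transmitted bit of the
# Sourlas codeword is the product of K randomly chosen message bits.
s = [random.choice([-1, 1]) for _ in range(N)]
checks = [tuple(random.sample(range(N), K)) for _ in range(num_checks)]
J = [math.prod(s[i] for i in idx) for idx in checks]

# Binary symmetric channel: flip each codeword bit with probability p
J_rx = [j if random.random() > p else -j for j in J]

# Finite-temperature (MPM) decoding at Nishimori's temperature,
# beta_N = (1/2) ln((1-p)/p), via exact enumeration of all 2^N states.
beta = 0.5 * math.log((1 - p) / p)

def posterior_marginals():
    mag = [0.0] * N
    Z = 0.0
    for cand in itertools.product([-1, 1], repeat=N):
        energy = -sum(j * math.prod(cand[i] for i in idx)
                      for j, idx in zip(J_rx, checks))
        w = math.exp(-beta * energy)
        Z += w
        for i in range(N):
            mag[i] += w * cand[i]
    return [m / Z for m in mag]

# MPM estimate: sign of each marginal magnetization
s_hat = [1 if m >= 0 else -1 for m in posterior_marginals()]
overlap = sum(a == b for a, b in zip(s, s_hat)) / N
print(f"bitwise overlap with the true message: {overlap:.2f}")
```

The sign of each marginal magnetization is the finite-temperature (MPM) estimate; belief propagation or TAP would approximate these same marginals by iterating local messages instead of summing over all states.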
Abstract:
Statistical physics is employed to evaluate the performance of error-correcting codes in the case of finite message length for an ensemble of Gallager's error correcting codes. We follow Gallager's approach of upper-bounding the average decoding error rate, but invoke the replica method to reproduce the tightest general bound to date, and to improve on the most accurate zero-error noise level threshold reported in the literature. The relation between the methods used and those presented in the information theory literature is explored.
Abstract:
The performance of "typical set (pairs) decoding" for ensembles of Gallager's linear code is investigated using statistical physics. In this decoding method, errors occur either when the information transmission is corrupted by atypical noise, or when multiple typical sequences satisfy the parity-check equations provided by the received corrupted codeword. We show that the average error rate for the second type of error over a given code ensemble can be accurately evaluated using the replica method, including its sensitivity to message length. Our approach generally improves on the existing analysis known in the information theory community, which was recently reintroduced in IEEE Trans. Inf. Theory 45, 399 (1999), and is believed to be the most accurate to date. © 2002 The American Physical Society.
Abstract:
We determine the critical noise level for decoding low density parity check error correcting codes based on the magnetization enumerator, rather than on the weight enumerator employed in the information theory literature. The interpretation of our method is appealingly simple, and the relation between the different decoding schemes such as typical pairs decoding, MAP, and finite temperature decoding (MPM) becomes clear. In addition, our analysis provides an explanation for the difference in performance between MN and Gallager codes. Our results are more optimistic than those derived via the methods of information theory and are in excellent agreement with recent results from another statistical physics approach.
Abstract:
Developmental dyslexia is associated with deficits in the processing of basic auditory stimuli. Yet it is unclear how these sensory impairments might contribute to poor reading skills. This study better characterizes the relationship between phonological decoding skills, the lack of which is generally accepted to comprise the core deficit in reading disabilities, and auditory sensitivity to amplitude modulation (AM) and frequency modulation (FM). Thirty-eight adult subjects, 17 of whom had a history of developmental dyslexia, completed a battery of psychophysical measures of sensitivity to FM and AM at different modulation rates, along with a measure of pseudoword reading accuracy and standardized assessments of literacy and cognitive skills. The subjects with a history of dyslexia were significantly less sensitive than controls to 2-Hz FM and 20-Hz AM only. The absence of a significant group difference for 2-Hz AM shows that the dyslexics do not have a general deficit in detecting all slow modulations. Thresholds for detecting 2-Hz and 240-Hz FM and 20-Hz AM correlated significantly with pseudoword reading accuracy. After accounting for various cognitive skills, however, multiple regression analyses showed that detection thresholds for both 2-Hz FM and 20-Hz AM were significant and independent predictors of pseudoword reading ability in the entire sample. Thresholds for 2-Hz AM and 240-Hz FM did not explain significant additional variance in pseudoword reading skill. It is therefore possible that certain components of auditory processing of modulations are related to phonological decoding skills, whereas others are not.
Abstract:
Due to copyright restrictions, only available for consultation at Aston University Library and Information Services with prior arrangement.
Abstract:
In this paper we demonstrate the improved BER performance of doubly differential phase-shift keying in a coherent optical packet switching scenario, while retaining the benefit of high frequency-offset tolerance. © OSA 2014.
Abstract:
A wavelength converter for 10.7 Gbit/s DPSK signals is presented which eliminates the need to use additional encoders or path-dependent encoding/decoding. It has a small penalty of 1.8 dB at an OSNR of 24.8 dB.
Abstract:
A new generalized sphere decoding algorithm is proposed for underdetermined MIMO systems with fewer receive antennas N than transmit antennas M. The proposed algorithm is significantly faster than existing generalized sphere decoding algorithms. The basic idea is to partition the transmitted signal vector into two subvectors x1 and x2 with N - 1 and M - N + 1 elements respectively. After some simple transformations, an outer-layer Sphere Decoder (SD) can be used to choose a proper x2, and an inner-layer SD then decides x1, so that the whole transmitted signal vector is obtained. Simulation results show that Double Layer Sphere Decoding (DLSD) has far less complexity than the existing Generalized Sphere Decoders (GSDs).
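The partition idea can be sketched in a toy form. Everything here is an assumption for illustration (real-valued channel, BPSK symbols, sizes M=4 and N=3, noise level, seed), and both layers are exhaustive searches standing in for the radius-pruned sphere decoders of the actual algorithm:

```python
import itertools, random

random.seed(1)

M, N = 4, 3                      # more transmit than receive antennas
n1, n2 = N - 1, M - N + 1        # sizes of the two subvectors x1, x2

# Random real-valued channel and a BPSK transmit vector (toy stand-ins)
H = [[random.gauss(0, 1) for _ in range(M)] for _ in range(N)]
x = [random.choice([-1, 1]) for _ in range(M)]
y = [sum(H[r][c] * x[c] for c in range(M)) + random.gauss(0, 0.1)
     for r in range(N)]

def residual(cand):
    """Squared Euclidean distance between y and H @ cand."""
    return sum((y[r] - sum(H[r][c] * cand[c] for c in range(M))) ** 2
               for r in range(N))

# Outer layer: enumerate the (M-N+1)-element subvector x2.
# Inner layer: for each fixed x2, search the remaining n1 elements.
# (Both layers are exhaustive here for clarity; a real DLSD replaces
# each search with a sphere decoder that prunes by radius.)
best, best_cost = None, float("inf")
for x2 in itertools.product([-1, 1], repeat=n2):
    for x1 in itertools.product([-1, 1], repeat=n1):
        cand = list(x1) + list(x2)
        cost = residual(cand)
        if cost < best_cost:
            best, best_cost = cand, cost

print("ML estimate:", best)
```

Because the search space is the full BPSK lattice, the result coincides with maximum-likelihood detection; the point of DLSD is to reach the same answer while visiting far fewer candidates.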
Abstract:
We determine the critical noise level for decoding low-density parity check error-correcting codes based on the magnetization enumerator (M), rather than on the weight enumerator (W) employed in the information theory literature. The interpretation of our method is appealingly simple, and the relation between the different decoding schemes such as typical pairs decoding, MAP, and finite temperature decoding (MPM) becomes clear. In addition, our analysis provides an explanation for the difference in performance between MN and Gallager codes. Our results are more optimistic than those derived using the methods of information theory and are in excellent agreement with recent results from another statistical physics approach.
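The relation between MAP and finite-temperature (MPM) decoding mentioned above can be made concrete on a toy linear code. The sketch below is an illustrative assumption, not the paper's setup: it uses the standard [7,4] Hamming code rather than a Gallager or MN ensemble, exhaustive enumeration of all 16 codewords in place of any practical decoder, and arbitrary flip probability and seed:

```python
import itertools, math, random

random.seed(2)
p = 0.1
beta = 0.5 * math.log((1 - p) / p)   # Nishimori temperature for a BSC(p)

# Generator matrix of the [7,4] Hamming code (GF(2))
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
codewords = [[sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
             for msg in itertools.product([0, 1], repeat=4)]

# Transmit one codeword over a BSC(p)
sent = random.choice(codewords)
recv = [b ^ (1 if random.random() < p else 0) for b in sent]

def dist(a, b):
    return sum(u != v for u, v in zip(a, b))

# MAP decoding: codeword with the fewest disagreements (min distance)
map_cw = min(codewords, key=lambda c: dist(c, recv))

# MPM decoding: bitwise sign of the posterior marginal at beta_N;
# posterior weight of a codeword at distance d is (p/(1-p))^d = exp(-2*beta*d)
weights = [math.exp(-2 * beta * dist(c, recv)) for c in codewords]
Z = sum(weights)
marg = [sum(w * c[i] for w, c in zip(weights, codewords)) / Z
        for i in range(7)]
mpm_cw = [1 if m > 0.5 else 0 for m in marg]

print("MAP:", map_cw, " MPM:", mpm_cw)
```

Note that the MPM output, being a bitwise decision, need not itself be a codeword; it minimizes the expected bit error rate, whereas MAP decoding (here, minimum Hamming distance) minimizes the block error rate.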
Abstract:
Phonological tasks are highly predictive of reading development but their complexity obscures the underlying mechanisms driving this association. There are three key components hypothesised to drive the relationship between phonological tasks and reading: (a) the linguistic nature of the stimuli, (b) the phonological complexity of the stimuli, and (c) the production of a verbal response. We isolated the contribution of the stimulus and response components separately through the creation of latent variables to represent specially designed tasks that were matched for procedure. These tasks were administered to 570 six- to seven-year-old children along with standardised tests of regular word and non-word reading. A structural equation model, where tasks were grouped according to stimulus, revealed that the linguistic nature and the phonological complexity of the stimulus predicted unique variance in decoding, over and above matched comparison tasks without these components. An alternative model, grouped according to response mode, showed that the production of a verbal response was a unique predictor of decoding beyond matched tasks without a verbal response. In summary, we found that multiple factors contributed to reading development, supporting multivariate models over those that prioritise single factors. More broadly, we demonstrate the value of combining matched task designs with latent variable modelling to deconstruct the components of complex tasks.
Abstract:
This paper proposes the use of 2-D differential decoding to improve the robustness of dual-polarization optical packet receivers, demonstrated in a wavelength-switching scenario for the first time.
Abstract:
We investigate the performance of error-correcting codes, where the code word comprises products of K bits selected from the original message and decoding is carried out utilizing a connectivity tensor with C connections per index. Shannon's bound for the channel capacity is recovered for large K and zero temperature when the code rate K/C is finite. Close to optimal error-correcting capability is obtained for finite K and C. We examine the finite-temperature case to assess the use of simulated annealing for decoding and extend the analysis to accommodate other types of noisy channels.
Abstract:
We investigate the performance of parity check codes using the mapping onto spin glasses proposed by Sourlas. We study codes where each parity check comprises products of K bits selected from the original digital message with exactly C parity checks per message bit. We show, using the replica method, that these codes saturate Shannon's coding bound for K→∞ when the code rate K/C is finite. We then examine the finite temperature case to assess the use of simulated annealing methods for decoding, study the performance of the finite K case, and extend the analysis to accommodate different types of noisy channels. The analogy between statistical physics methods and decoding by belief propagation is also discussed.
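The Shannon bound referred to in the two abstracts above is, for a binary symmetric channel, the capacity C(p) = 1 - H2(p), where H2 is the binary entropy; a rate-K/C code can achieve error-free decoding only when K/C <= C(p). A minimal numerical check (the flip probabilities used are illustrative):

```python
import math

def h2(p):
    """Binary entropy in bits, for 0 < p < 1."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with flip probability p."""
    return 1.0 - h2(p)

# e.g. at p = 0.1 the capacity is about 0.531 bits per channel use,
# so a rate-1/3 Sourlas code operates below the Shannon bound there,
# while a rate-2/3 code cannot be decoded error-free.
for p in (0.01, 0.1, 0.25):
    print(f"BSC({p}): capacity {bsc_capacity(p):.3f} bits/use")
```

At p = 0.5 the channel output is independent of the input and the capacity drops to zero, which is why every noise threshold in these analyses lies strictly below 0.5.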