990 results for Forward error correcting code
Abstract:
In this work, we determine the coset weight spectra of all binary cyclic codes of lengths up to 33, of ternary cyclic and negacyclic codes of lengths up to 20, and of some distance-optimal binary linear codes of lengths up to 33, using algebraic properties of the codes and a computer-assisted search. Given these weight spectra, the monotonicity of the undetected error probability after $t$-error correction, $P_{ue}^{(t)}(C,p)$, can be checked to any precision in linear time. We used a program written in Maple to check the monotonicity of $P_{ue}^{(t)}(C,p)$ for the investigated codes over a finite set of points $p \in [0, (q-1)/q]$, and in this way to determine which of them are not proper.
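To make the checked quantity concrete: in the simplest setting ($t = 0$, pure error detection over a binary symmetric channel), the undetected error probability follows directly from the code's weight distribution as $P_{ue}(C,p)=\sum_{w\geq 1} A_{w}\,p^{w}(1-p)^{n-w}$; the $t$-error-correction variant additionally requires the coset weight spectra determined in the work. A minimal Python sketch, using the well-known [7,4] Hamming code as a stand-in rather than one of the codes studied above:

```python
# Undetected error probability of a binary code on a BSC, computed
# from its weight distribution A_w (illustration: [7,4] Hamming code).
def p_ue(weight_dist, n, p):
    """P_ue(C,p) = sum over w >= 1 of A_w * p^w * (1-p)^(n-w)."""
    return sum(a * p**w * (1 - p)**(n - w)
               for w, a in weight_dist.items() if w >= 1)

hamming_7_4 = {0: 1, 3: 7, 4: 7, 7: 1}   # A_0=1, A_3=A_4=7, A_7=1
for p in (0.01, 0.1, 0.25, 0.5):
    print(f"p = {p:<4}: P_ue = {p_ue(hamming_7_4, 7, p):.6f}")
```

A code is proper precisely when this function increases monotonically over the whole interval, which is what the Maple program verifies pointwise.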
Abstract:
In the last few years there has been great development of techniques such as quantum computers and quantum communication systems, due to their huge potential and the growing number of applications. However, physical qubits experience many nonidealities, such as measurement errors and decoherence, that generate failures in quantum computation. This work shows how concepts from classical information theory can be exploited to realize quantum error-correcting codes by adding redundancy qubits. In particular, the threshold theorem states that the decoding failure rate can be lowered at will, provided the physical error rate is below a given accuracy threshold. The focus is on codes belonging to the family of topological codes, such as the toric, planar and XZZX surface codes. First, they are compared from a theoretical point of view, in order to show their advantages and disadvantages. The algorithms behind the minimum-weight perfect matching decoder, the most popular decoder for such codes, are presented. The last section is dedicated to analysing the performance of these topological codes under different error-channel models, showing interesting results. In particular, while the error-correction capability of surface codes decreases in the presence of biased errors, XZZX codes possess intrinsic symmetries that allow them to improve their performance when one kind of error occurs more frequently than the others.
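As a rough illustration of the matching step only (not of a full surface-code decoder): given the positions of syndrome defects, minimum-weight perfect matching pairs them up so that the total length of the inferred error chains is minimal. The toy sketch below works on a single ring of stabilisers, loosely modelling one cycle of a toric code, and leans on the networkx library; the graph construction and distance metric are simplifying assumptions.

```python
# Toy minimum-weight perfect matching of syndrome defects on a ring
# of L stabilisers (one cycle of a toric code, greatly simplified).
import itertools
import networkx as nx

def match_defects(defects, L):
    """Pair up defect positions so the total ring distance is minimal."""
    G = nx.Graph()
    for u, v in itertools.combinations(defects, 2):
        d = abs(u - v)
        G.add_edge(u, v, weight=min(d, L - d))  # shortest way round the ring
    return nx.min_weight_matching(G)

# Defects come in even numbers (each error chain has two endpoints).
print(match_defects([2, 3, 9, 14], L=16))
# e.g. {(2, 3), (9, 14)}: nearby defects are matched together
```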
Abstract:
Since information and communication technologies began to gain importance in society, one of the main objectives has been to ensure that the transmitted information reaches the receiver intact. For this reason, it is necessary to develop new digital communication systems capable of offering secure and reliable transmission. Over the years their characteristics have steadily improved, bringing important advances to everyday life. In this context, one of the most successful systems is Trellis Coded Modulation (TCM), which brings great advantages to digital communication, especially in narrowband systems. This type of error-correcting code, based on convolutional coding, is characterized by performing modulation and coding in a single function. As a result, a higher data transmission rate is achieved without increasing the bandwidth, at the cost of moving to a larger constellation. The aim of this final-year project is to analyse the behaviour of TCM and the advantages it offers over similar systems. Four simulations are proposed, producing plots that relate the bit error ratio (BER) to the signal-to-noise ratio (SNR); from these plots, the gain with respect to the theoretical bit error probability can be determined. The simulated systems move from a QPSK modulation to an 8PSK, or from an 8PSK to a 16QAM. Finally, a Matlab graphical environment is developed in order to provide the user with simple operation and greater interactivity.
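For reference, the theoretical baseline such plots are measured against has a standard closed form: for Gray-coded QPSK over an AWGN channel, the bit error ratio is $P_b = \frac{1}{2}\,\mathrm{erfc}(\sqrt{E_b/N_0})$. A minimal Python sketch tabulating this baseline (the project itself uses Matlab); the TCM coding gain is then read off as the horizontal (dB) distance between a measured curve and this reference:

```python
# Theoretical BER of Gray-coded QPSK over AWGN:
# P_b = 0.5 * erfc(sqrt(Eb/N0)).
import math

def qpsk_ber(ebn0_db):
    ebn0 = 10 ** (ebn0_db / 10)          # dB -> linear
    return 0.5 * math.erfc(math.sqrt(ebn0))

for snr_db in range(0, 11, 2):
    print(f"Eb/N0 = {snr_db:2d} dB  ->  BER = {qpsk_ber(snr_db):.2e}")
```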
Abstract:
We show the similarity between belief propagation and TAP for decoding corrupted messages encoded by Sourlas's method. The latter is a special case of the Gallager error-correcting code, where the code word comprises products of K bits selected randomly from the original message. We examine the efficacy of solutions obtained by the two methods for various values of K and show that solutions for K >= 3 may be sensitive to the choice of initial conditions in the case of unbiased patterns. Good approximations are obtained generally for K = 2 and for biased patterns in the case of K >= 3, especially when Nishimori's temperature is used.
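To fix ideas, Sourlas's encoder can be written in a few lines: each transmitted symbol is the product of K message bits, represented as spins in {-1, +1}, chosen at random. The sketch below shows only the encoder and a symbol-flipping channel; the belief-propagation/TAP decoding compared in the work is not reproduced, and all parameter values are illustrative.

```python
# Sourlas encoding: each codeword symbol is the product of K randomly
# selected message spins; the channel flips symbols with probability p.
import random
from math import prod

def sourlas_encode(message, K, num_checks, rng):
    """message: list of +/-1 spins; returns (index tuples, codeword)."""
    checks = [tuple(rng.sample(range(len(message)), K))
              for _ in range(num_checks)]
    codeword = [prod(message[i] for i in c) for c in checks]
    return checks, codeword

def flip_channel(codeword, p, rng):
    """Flip each +/-1 symbol independently with probability p."""
    return [-s if rng.random() < p else s for s in codeword]

rng = random.Random(0)
msg = [rng.choice((-1, 1)) for _ in range(16)]
checks, cw = sourlas_encode(msg, K=3, num_checks=48, rng=rng)
noisy = flip_channel(cw, p=0.1, rng=rng)   # input to the decoder
```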
Abstract:
We investigate a 40 Gbit/s all-Raman amplified standard single-mode fibre (SMF) transmission system with a mid-range amplifier spacing of 80-90 km. The impact of the span configuration on double Rayleigh backscattering (DRBS) was studied, and four different span configurations were compared experimentally. A transmission distance of 1666 km in SMF has been achieved without forward error correction (FEC) for the first time. The results demonstrate that the detrimental effects associated with high-pump-power Raman amplification in standard fibre can be minimised by dispersion map optimisation. © 2003 IEEE.
Abstract:
Partial information leakage in deterministic public-key cryptosystems refers to a problem that arises when information about either the plaintext or the key is leaked in subtle ways. Quite a common case is where there is a small number of possible messages that may be sent: an attacker may be able to crack the scheme simply by enumerating all the possible ciphertexts. Two methods are proposed for addressing the partial information leakage problem in RSA; both incorporate a random element into the encrypted message to increase the number of possible ciphertexts. The resulting scheme is, effectively, an RSA-like cryptosystem which exhibits probabilistic encryption. The first method involves encrypting several similar messages with RSA and then using the Quadratic Residuosity Problem (QRP) to mark the intended one. In this way, an adversary who has correctly guessed two or more of the ciphertexts is still in doubt about which message is the intended one. The cryptographic strength of the combined system is equal to the computational difficulty of factorising a large integer; ideally, this should be infeasible. The second scheme uses error-correcting codes to accommodate the random component. The plaintext is processed with an error-correcting code and deliberately corrupted before encryption. The introduced corruption lies within the error-correcting ability of the code, so as to enable the recovery of the original message, while the random corruption offers a vast number of possible ciphertexts corresponding to a given plaintext; hence an attacker cannot deduce any useful information from it. The proposed systems are compared to other cryptosystems sharing similar characteristics, in terms of execution time and ciphertext size, so as to determine their practical utility. Finally, the parameters which determine the characteristics of the proposed schemes are examined.
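A hedged sketch of the second scheme's pre-encryption step (the RSA layer and all real parameters are omitted, and the [7,4] Hamming code below stands in for whatever code the work actually uses): the plaintext is expanded by an error-correcting code, one random bit within the code's correcting ability is flipped, and decoding strips the randomness back off.

```python
# Pre-encryption randomisation via an error-correcting code:
# encode 4 plaintext bits with Hamming(7,4), flip ONE random bit
# (within the code's single-error-correcting ability), decode to recover.
import random

G = [[1,0,0,0,1,1,0],   # generator matrix of a [7,4] Hamming code
     [0,1,0,0,1,0,1],
     [0,0,1,0,0,1,1],
     [0,0,0,1,1,1,1]]
H = [[1,1,0,1,1,0,0],   # corresponding parity-check matrix
     [1,0,1,1,0,1,0],
     [0,1,1,1,0,0,1]]

def encode(m):
    return [sum(m[i]*G[i][j] for i in range(4)) % 2 for j in range(7)]

def corrupt(c, rng):
    c = c[:]; c[rng.randrange(7)] ^= 1   # the deliberate random flip
    return c

def decode(r):
    s = [sum(H[i][j]*r[j] for j in range(7)) % 2 for i in range(3)]
    if any(s):  # locate the column of H equal to the syndrome
        col = [[H[i][j] for i in range(3)] for j in range(7)].index(s)
        r = r[:]; r[col] ^= 1
    return r[:4]  # systematic part = original message bits

rng = random.Random(1)
msg = [1, 0, 1, 1]
randomised = corrupt(encode(msg), rng)   # this word would be RSA-encrypted
assert decode(randomised) == msg         # receiver strips the randomness
```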
Abstract:
Corresponding to $C_{0}[n,n-r]$, a binary cyclic code generated by a primitive irreducible polynomial $p(X)\in \mathbb{F}_{2}[X]$ of degree $r=2b$, where $b\in \mathbb{Z}^{+}$, we can construct a binary cyclic code $C[(n+1)^{3^{k}}-1,(n+1)^{3^{k}}-1-3^{k}r]$, generated by the primitive irreducible generalized polynomial $p(X^{\frac{1}{3^{k}}})\in \mathbb{F}_{2}[X;\frac{1}{3^{k}}\mathbb{Z}_{0}]$ of degree $3^{k}r$, where $k\in \mathbb{Z}^{+}$. This new code $C$ improves the code rate and has a higher error-correction capability than $C_{0}$. The purpose of this study is to establish a decoding procedure for $C_{0}$ that uses $C$ in such a way that one obtains an improved code rate and error-correcting capability for $C_{0}$.
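The classical ingredient $C_{0}$ is easy to make concrete, even though the generalized-polynomial extension $C$ is not reproduced here. For example, taking the primitive irreducible $p(X)=X^{4}+X+1$ of degree $r=4$ gives $n=2^{4}-1=15$, and $C_{0}$ is the $[15,11]$ binary cyclic Hamming code; encoding is multiplication by the generator polynomial modulo $X^{n}-1$. A minimal sketch:

```python
# C0[n, n-r]: binary cyclic code generated by a primitive irreducible
# polynomial.  Here p(X) = X^4 + X + 1 (degree r = 4), so n = 2^4 - 1 = 15
# and C0 is the [15, 11] cyclic Hamming code.
def polymul_gf2(a, b, n):
    """Multiply GF(2) polynomials (coeff lists, low degree first) mod X^n - 1."""
    out = [0] * n
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[(i + j) % n] ^= bj
    return out

g = [1, 1, 0, 0, 1]                       # p(X) = 1 + X + X^4, the generator
msg = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]   # 11 information bits
codeword = polymul_gf2(msg, g, 15)        # a codeword of C0[15, 11]
print(codeword)
```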
Abstract:
Low-density parity-check (LDPC) codes are nowadays one of the hottest topics in coding theory, notably due to their advantages in terms of bit-error-rate performance and low complexity. In order to exploit the potential of the Wyner-Ziv coding paradigm, practical distributed video coding (DVC) schemes should use powerful error-correcting codes with near-capacity performance. In this paper, new ways to design LDPC codes for the DVC paradigm are proposed and studied. The new LDPC solutions rely on merging parity-check nodes, which corresponds to reducing the number of rows in the parity-check matrix. This makes it possible to change gracefully the compression ratio of the source (DCT coefficient bitplane) according to the correlation between the original and the side information. The proposed LDPC codes achieve good performance for a wide range of source correlations and a better rate-distortion (RD) performance when compared to the popular turbo codes.
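The check-node merging itself fits in a few lines of numpy: adding two rows of the parity-check matrix modulo 2 replaces two syndrome bits with one, shrinking the syndrome (and hence the transmitted rate for the bitplane) one bit at a time. The matrix and the choice of rows below are illustrative assumptions, not the paper's actual code design.

```python
# Merging two parity-check nodes = replacing two rows of H by their
# mod-2 sum, so the syndrome shrinks by one bit (graceful rate adaptation).
import numpy as np

def merge_checks(H, i, j):
    """Return H with rows i and j merged (XORed) into a single row."""
    merged = (H[i] + H[j]) % 2
    keep = [r for r in range(H.shape[0]) if r not in (i, j)]
    return np.vstack([H[keep], merged])

H = np.array([[1, 1, 0, 1, 0, 0],      # toy 4x6 parity-check matrix
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])
H2 = merge_checks(H, 0, 1)             # 3 syndrome bits instead of 4
print(H2)
```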
Abstract:
The present research problem is to study existing encryption methods and to develop a new technique which is superior in performance to existing techniques and, at the same time, can be incorporated into the communication channels of fault-tolerant hard real-time systems alongside existing error-checking/error-correcting codes, so that attempts at eavesdropping can be defeated. Many encryption methods are available now, each with its own merits and demerits; similarly, many cryptanalysis techniques used by adversaries are also available.
Abstract:
One of the great challenges for the scientific community working on theories of genetic information, genetic communication and genetic coding is to determine a mathematical structure underlying DNA sequences. In this paper we propose a model of an intra-cellular transmission system of genetic information, similar to a model of a power- and bandwidth-efficient digital communication system, in order to identify a mathematical structure in biologically relevant DNA sequences. The model of a transmission system of genetic information is concerned with the identification, reproduction and mathematical classification of the nucleotide sequence of single-stranded DNA by the genetic encoder. Hence, a genetic encoder is devised in which labelings and cyclic codes are established. Establishing the algebraic structure of the corresponding code alphabets, mappings, labelings, primitive polynomials (p(x)) and code generator polynomials (g(x)) is quite important in characterizing error-correcting code subclasses of G-linear codes. These latter codes are useful for the identification, reproduction and mathematical classification of DNA sequences. The characterization of this model may contribute to the development of a methodology that can be applied in mutational analysis and polymorphisms, the production of new drugs and genetic improvement, among other things, resulting in reduced time and laboratory costs.
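As a deliberately small illustration of the labeling step alone (the specific mapping below is an assumption; the point of the work is precisely that several labelings and code alphabets are possible): nucleotides are mapped onto elements of $\mathbb{Z}_{4}$, after which a DNA sequence can be handled as a word over the code alphabet.

```python
# One possible labeling of nucleotides onto Z4, turning a DNA string
# into a word over a four-letter code alphabet.  The mapping below is
# illustrative; the work considers several labelings and alphabets.
LABELING = {'A': 0, 'C': 1, 'G': 2, 'T': 3}

def dna_to_z4(seq):
    return [LABELING[nt] for nt in seq.upper()]

def z4_to_dna(word):
    inverse = {v: k for k, v in LABELING.items()}
    return ''.join(inverse[s] for s in word)

w = dna_to_z4("ACGTTGCA")
print(w)                 # [0, 1, 2, 3, 3, 2, 1, 0]
assert z4_to_dna(w) == "ACGTTGCA"
```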
Abstract:
Recently a scheme has been proposed for constructing quantum error-correcting codes that embed a finite-dimensional code space in the infinite-dimensional Hilbert space of a system described by continuous quantum variables. One of the difficult steps in this scheme is the preparation of the encoded states. We show how these states can be generated by coupling a continuous quantum variable to a single qubit. An ion trap quantum computer provides a natural setting for a continuous system coupled to a qubit. We discuss how encoded states may be generated in an ion trap.
Abstract:
In video communication systems, the video signals are typically compressed and sent to the decoder through an error-prone transmission channel that may corrupt the compressed signal, causing degradation of the final decoded video quality. In this context, it is possible to enhance the error resilience of typical predictive video coding schemes by drawing on principles and tools from an alternative video coding approach, the so-called Distributed Video Coding (DVC), based on Distributed Source Coding (DSC) theory. Further improvements in the decoded video quality after error-prone transmission may also be obtained by considering the perceptual relevance of the video content, as distortions occurring in different regions of a picture have a different impact on the user's final experience. In this context, this paper proposes a Perceptually Driven Error Protection (PDEP) video coding solution that enhances the error resilience of a state-of-the-art H.264/AVC predictive video codec using DSC principles and perceptual considerations. To increase the H.264/AVC error resilience performance, the main technical novelties brought by the proposed video coding solution are: (i) an improved compressed-domain perceptual classification mechanism; (ii) an improved transcoding tool for the DSC-based protection mechanism; and (iii) the integration of a perceptual classification mechanism into an H.264/AVC compliant codec with a DSC-based error protection mechanism. The performance results obtained show that the proposed PDEP video codec provides a better-performing alternative to traditional error protection video coding schemes, notably Forward Error Correction (FEC)-based schemes. (C) 2013 Elsevier B.V. All rights reserved.
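As a hedged sketch of the general idea behind novelty (i) as it feeds (iii): regions are ranked by a perceptual weight and a fixed error-protection budget is split proportionally, so that perceptually important regions receive more parity. The weights and function below are an invented toy, not the paper's classifier.

```python
# Toy perceptually-driven protection: split a fixed parity budget among
# picture regions in proportion to their perceptual weight, so that
# distortions land preferentially in regions viewers notice least.
def allocate_parity(perceptual_weights, total_parity_bits):
    total = sum(perceptual_weights)
    return [round(total_parity_bits * w / total) for w in perceptual_weights]

# Four regions: e.g. a face (0.6) matters more than background (0.1).
weights = [0.6, 0.2, 0.1, 0.1]
print(allocate_parity(weights, total_parity_bits=1000))
# -> [600, 200, 100, 100]
```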
Abstract:
Using event-related brain potentials, the time course of error detection and correction was studied in healthy human subjects. A feedforward model of error correction was used to predict the timing properties of the error and corrective movements. Analysis of the multichannel recordings focused on (1) the error-related negativity (ERN) seen immediately after errors in response- and stimulus-locked averages and (2) the lateralized readiness potential (LRP) reflecting motor preparation. Comparison of the onset and time course of the ERN and LRP components showed that signs of corrective activity preceded the ERN. Thus, error correction was implemented before, or at least in parallel with, the appearance of the ERN component. Moreover, the amplitude of the ERN component was increased for errors that were followed by fast corrective movements. The results are compatible with recent views that consider the ERN component to be the output of an evaluative system engaged in monitoring motor conflict.
Abstract:
Code clones are portions of source code that are similar to other parts of the program code. The presence of code clones is considered a bad feature of software, as it makes software maintenance more difficult. Methods for code clone detection have gained immense significance in the last few years, as they play a significant role in engineering applications such as program code analysis, program understanding, plagiarism detection, error detection, code compaction and many similar tasks. Despite these facts, several features of code clones, if properly utilized, can make the software development process easier. In this work, we point out one such feature of code clones, which highlights the relevance of code clones to test sequence identification. Here, program slicing is used in code clone detection. In addition, a classification of code clones is presented, and the benefit of using program slicing in code clone detection is discussed.
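A small, hedged sketch of one common detection idea (identifier-blind, "Type-2" token comparison; this is a generic technique, not the slicing-based method of the work): two fragments are normalised so that identifier names no longer matter, then compared for similarity.

```python
# Minimal Type-2 clone check: normalise identifiers away, then compare
# token streams.  (Generic illustration; the work itself combines clone
# detection with program slicing.)
import io
import keyword
import tokenize
from difflib import SequenceMatcher

def normalized_tokens(src):
    """Replace every identifier (but not keywords) with a placeholder."""
    toks = tokenize.generate_tokens(io.StringIO(src).readline)
    return ['ID' if t.type == tokenize.NAME and not keyword.iskeyword(t.string)
            else t.string for t in toks]

def clone_similarity(a, b):
    return SequenceMatcher(None, normalized_tokens(a),
                           normalized_tokens(b)).ratio()

f1 = "def area(w, h):\n    return w * h\n"
f2 = "def size(x, y):\n    return x * y\n"
print(clone_similarity(f1, f2))   # close to 1.0: a Type-2 clone
```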