913 results for "Errors codes"
Abstract:
This article reviews the main codes used to detect and correct errors in data communication, specifically in computer networks. Hamming codes and the Cyclic Redundancy Check (CRC) are the focus of the article, together with a hardware implementation of CRC. Each code is reviewed in detail in order to fill gaps in the literature and to make the material accessible to computer science and engineering students, as well as to anyone interested in learning techniques for handling errors in data communication.
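As a concrete illustration of the CRC technique the article covers, the following is a minimal sketch of a bitwise CRC-8 computation in Python; the polynomial 0x07 (the CRC-8/SMBUS variant) and the helper name crc8 are illustrative choices, not taken from the article.

```python
def crc8(data: bytes, poly: int = 0x07, init: int = 0x00) -> int:
    """Bitwise CRC-8 of `data` using the generator polynomial `poly`.

    The register shifts one bit at a time; whenever the top bit is set,
    the polynomial is XORed in, mirroring long division over GF(2).
    """
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

if __name__ == "__main__":
    # Standard check input; CRC-8/SMBUS yields 0xF4 for "123456789".
    print(hex(crc8(b"123456789")))
```

The same shift-and-XOR structure is what makes CRC attractive for the hardware implementations the article discusses: the inner loop maps directly onto a linear feedback shift register.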
Abstract:
Topological quantum error correction codes are currently among the most promising candidates for efficiently dealing with the decoherence effects inherently present in quantum devices. Numerically, their theoretical error threshold can be calculated by mapping the underlying quantum problem to a related classical statistical-mechanical spin system with quenched disorder. Here, we present results for the general fault-tolerant regime, where we consider both qubit and measurement errors. However, unlike in previous studies, here we vary the strength of the different error sources independently. Our results highlight peculiar differences between toric and color codes. This study complements previous results published in New J. Phys. 13, 083006 (2011).
Abstract:
We show that quantum feedback control can be used as a quantum-error-correction process for errors induced by a weak continuous measurement. In particular, when the error model is restricted to one, perfectly measured, error channel per physical qubit, quantum feedback can act to perfectly protect a stabilizer codespace. Using the stabilizer formalism we derive an explicit scheme, involving feedback and an additional constant Hamiltonian, to protect an (n-1)-qubit logical state encoded in n physical qubits. This works for both Poisson (jump) and white-noise (diffusion) measurement processes. Universal quantum computation is also possible in this scheme. As an example, we show that detected-spontaneous emission error correction with a driving Hamiltonian can greatly reduce the amount of redundancy required to protect a state from that which has been previously postulated [e.g., Alber et al., Phys. Rev. Lett. 86, 4402 (2001)].
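As background for the stabilizer formalism invoked above, here is a minimal sketch of syndrome extraction for the three-qubit bit-flip code in Python; the function names and the simulated error are illustrative, and this toy example is not the (n-1)-in-n feedback scheme of the paper.

```python
# Stabilizer generators of the 3-qubit bit-flip code: Z1 Z2 and Z2 Z3.
# For a computational-basis state |b1 b2 b3>, each generator's eigenvalue
# is +1 when the corresponding pair of bits agrees and -1 when it differs.
def syndrome(bits):
    """Return the (Z1Z2, Z2Z3) syndrome of a basis state given as bits."""
    b1, b2, b3 = bits
    return ((-1) ** (b1 ^ b2), (-1) ** (b2 ^ b3))

# Encode logical |0> as |000>, then apply a bit-flip (X) error on qubit 2.
codeword = [0, 0, 0]
codeword[1] ^= 1  # X error on the middle qubit

s = syndrome(codeword)
# The syndrome (-1, -1) uniquely identifies the middle qubit, so a
# feedback controller could apply a corrective X there.
print(s)
```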
Abstract:
Communication is the process of transmitting data across a channel. Whenever data is transmitted across a channel, errors are likely to occur. Coding theory is a branch of science that deals with finding efficient ways to encode and decode data, so that any likely errors can be detected and corrected. There are many methods to achieve encoding and decoding. One among them is algebraic geometric codes, which can be constructed from curves. Cryptography is the science of securely transmitting messages from a sender to a receiver. The objective is to encrypt a message in such a way that an eavesdropper would not be able to read it. A cryptosystem is a set of algorithms for encryption and decryption. Public-key cryptosystems such as RSA and DSS have traditionally been preferred for secure communication through the channel. However, elliptic curve cryptosystems have become a viable alternative, since they provide greater security and use shorter keys than other existing cryptosystems. Elliptic curve cryptography is based on the group of points on an elliptic curve over a finite field. This thesis deals with algebraic geometric codes and their relation to cryptography using elliptic curves. Here Goppa codes are used, and the curves used are elliptic curves over a finite field. We relate algebraic geometric codes to cryptography by developing a cryptographic algorithm, which includes the processes of encryption and decryption of messages. We make use of fundamental properties of elliptic curve cryptography to generate the algorithm, which is used here to relate the two.
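Since the thesis builds on the group law of an elliptic curve over a finite field, the following is a minimal sketch of affine point addition on a short Weierstrass curve y^2 = x^3 + ax + b over GF(p) in Python; the tiny curve parameters are illustrative and are not taken from the thesis.

```python
def ec_add(P, Q, a, p):
    """Add points P and Q on y^2 = x^3 + a*x + b over GF(p).

    Points are (x, y) tuples; None represents the point at infinity.
    """
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None  # P + (-P) = point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p  # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

# Illustrative toy curve y^2 = x^3 + 2x + 3 over GF(97), with a point on it.
a, p = 2, 97
P = (3, 6)  # 6^2 = 36 = 27 + 6 + 3 (mod 97), so P lies on the curve
print(ec_add(P, P, a, p))  # doubling P gives (80, 10)
```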
Abstract:
Alternant codes over arbitrary finite commutative local rings with identity are constructed in terms of parity-check matrices. The derivation is based on the factorization of x^s - 1 over the unit group of an appropriate extension of the finite ring. An efficient decoding procedure which makes use of the modified Berlekamp-Massey algorithm to correct errors and erasures is presented. Furthermore, we address the construction of BCH codes over Z_m under the Lee metric.
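For reference, here is a minimal sketch of the classical Berlekamp-Massey algorithm over GF(2), which finds the shortest LFSR generating a binary sequence; the modified errors-and-erasures variant over local rings used in the abstract is more involved, and this toy version is only illustrative.

```python
def berlekamp_massey_gf2(s):
    """Return (C, L): the shortest LFSR (connection polynomial C, length L)
    that generates the binary sequence s, working over GF(2)."""
    n = len(s)
    C = [1] + [0] * n   # current connection polynomial
    B = [1] + [0] * n   # connection polynomial before the last length change
    L, m = 0, 1
    for i in range(n):
        # Discrepancy: next output of the current LFSR vs. actual bit.
        d = s[i]
        for j in range(1, L + 1):
            d ^= C[j] & s[i - j]
        if d == 0:
            m += 1
        elif 2 * L <= i:
            T = C[:]
            for j in range(n - m + 1):
                C[j + m] ^= B[j]   # C(x) <- C(x) + x^m * B(x)
            L, B, m = i + 1 - L, T, 1
        else:
            for j in range(n - m + 1):
                C[j + m] ^= B[j]
            m += 1
    return C[:L + 1], L

# The sequence 0,0,1,1,0,1 is generated by an LFSR of length 3.
C, L = berlekamp_massey_gf2([0, 0, 1, 1, 0, 1])
print(L, C)
```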
Abstract:
We propose new classes of linear codes over integer rings of quadratic extensions of Q, the field of rational numbers. The codes are considered with respect to a Mannheim metric, which is a Manhattan metric modulo a two-dimensional (2-D) grid. In particular, codes over Gaussian integers and Eisenstein-Jacobi integers are extensively studied. Decoding algorithms are proposed for these codes when up to two coordinates of a transmitted code vector are affected by errors of arbitrary Mannheim weight. Moreover, we show that the proposed codes are maximum-distance separable (MDS), with respect to the Hamming distance. The practical interest in such Mannheim-metric codes is their use in coded modulation schemes based on quadrature amplitude modulation (QAM)-type constellations, for which neither the Hamming nor the Lee metric is appropriate.
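To make the Mannheim metric concrete, here is a minimal sketch in Python that reduces a Gaussian integer modulo a Gaussian prime pi and computes its Mannheim weight as |Re| + |Im| of the reduced residue; the prime pi = 2 + i (of norm 5) is an illustrative choice, not a parameter from the paper.

```python
def gaussian_mod(a: complex, pi: complex) -> complex:
    """Reduce the Gaussian integer a modulo pi using rounded division,
    which yields a representative of minimal Mannheim weight."""
    q = a / pi
    q_rounded = complex(round(q.real), round(q.imag))
    return a - q_rounded * pi

def mannheim_weight(a: complex, pi: complex) -> int:
    """Mannheim weight: |Re| + |Im| of the residue of a modulo pi."""
    r = gaussian_mod(a, pi)
    return int(abs(r.real) + abs(r.imag))

# Illustrative Gaussian prime pi = 2 + i; the residues modulo pi form a
# field with 5 elements, laid out like a small QAM constellation.
pi = 2 + 1j
for a in [3 + 0j, 1 + 2j, 4 + 4j]:
    print(a, "->", gaussian_mod(a, pi), "weight", mannheim_weight(a, pi))
```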
Abstract:
In this paper, we present a decoding principle for Goppa codes constructed from generalized polynomials, based on a modified Berlekamp-Massey algorithm. This algorithm corrects all errors of Hamming weight $t \leq 2r$, i.e., the codes have minimum Hamming distance $4r + 1$.
Abstract:
In this paper, we introduce new construction techniques for BCH, alternant, Goppa, and Srivastava codes through the semigroup ring B[X; (1/3)Z_0] instead of the polynomial ring B[X; Z_0], where B is a finite commutative ring with identity; with these constructions we improve several results of [1]. We then present a decoding principle for BCH, alternant, and Goppa codes based on a modified Berlekamp-Massey algorithm. This algorithm corrects all errors of Hamming weight t ≤ r/2, i.e., the codes have minimum Hamming distance r + 1.
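As a concrete baseline for the classical polynomial-ring BCH construction that these semigroup-ring codes generalize, here is a minimal sketch of systematic BCH encoding over GF(2) in Python; the binary (15, 7) code with generator polynomial g(x) = x^8 + x^7 + x^6 + x^4 + 1 (correcting t = 2 errors) is a textbook example, not one of the paper's constructions.

```python
def poly_mod_gf2(dividend: int, divisor: int) -> int:
    """Remainder of polynomial division over GF(2), polynomials as bitmasks."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

def bch_encode_systematic(msg: int, g: int, n: int, k: int) -> int:
    """Systematic BCH encoding: shift the message up by n - k parity
    positions, then append the remainder modulo the generator g(x)."""
    shifted = msg << (n - k)
    return shifted | poly_mod_gf2(shifted, g)

# Textbook binary BCH(15, 7), t = 2: g(x) = x^8 + x^7 + x^6 + x^4 + 1.
G = 0b111010001
N, K = 15, 7
codeword = bch_encode_systematic(0b1011001, G, N, K)
print(format(codeword, "015b"))
# Every valid codeword is divisible by g(x):
assert poly_mod_gf2(codeword, G) == 0
```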
Abstract:
This thesis describes the development of new models and toolkits for orbit determination codes, to support and improve the precise radio tracking experiments of the Cassini-Huygens mission, an interplanetary mission to study the Saturn system. The core of the orbit determination process is the comparison between observed and computed observables. Disturbances in either the observed or computed observables degrade the orbit determination process. Chapter 2 describes a detailed study of the numerical errors in the Doppler observables computed by NASA's ODP and MONTE, and by ESA's AMFIN. A mathematical model of the numerical noise was developed and successfully validated against the Doppler observables computed by the ODP and MONTE, with typical relative errors smaller than 10%. The numerical noise proved to be, in general, an important source of noise in the orbit determination process and, in some conditions, it may become the dominant noise source. Three different approaches to reduce the numerical noise are proposed. Chapter 3 describes the development of the multiarc library, which makes it possible to perform multi-arc orbit determination with MONTE. The library was developed during the analysis of the Cassini radio science gravity experiments at Saturn's satellite Rhea. Chapter 4 presents the estimation of Rhea's gravity field obtained from a joint multi-arc analysis of the Cassini R1 and R4 fly-bys, describing in detail the spacecraft dynamical model used, the data selection and calibration procedure, and the analysis method followed. In particular, the full unconstrained quadrupole gravity field was estimated, yielding a solution statistically not compatible with the condition of hydrostatic equilibrium. The solution proved to be stable and reliable. The normalized moment of inertia is in the range 0.37-0.4, indicating that Rhea may be almost homogeneous, or at least characterized by a small degree of differentiation.
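To illustrate the kind of numerical noise studied in Chapter 2, here is a minimal sketch, assuming (as is common in orbit determination software) that a two-way Doppler observable is computed as a difference of round-trip ranges over a count interval: subtracting two nearly equal multi-AU ranges in double precision cancels most of the available digits. The numbers below are illustrative, not Cassini values.

```python
from decimal import Decimal, getcontext

getcontext().prec = 50

# Range to a Saturn-distance spacecraft (~9.5 AU) in metres, and the small
# range change accumulated over a 60 s Doppler count interval.
rho1 = Decimal("1.4210000000000000e12")
drho = Decimal("1.234567890123456e3")  # ~20 m/s range rate over 60 s
rho2 = rho1 + drho
Tc = Decimal(60)

# Extended-precision reference value of the averaged range rate.
exact = (rho2 - rho1) / Tc

# Double-precision evaluation: the subtraction of two ~1.4e12 m ranges
# leaves only the digits that survived the cancellation.
approx = (float(rho2) - float(rho1)) / float(Tc)

print("extended precision:", exact)
print("double precision:  ", approx)
print("relative error:    ", abs(approx - float(exact)) / float(exact))
```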
Abstract:
The development of a global instability analysis code coupling a time-stepping approach, as applied to the solution of BiGlobal and TriGlobal instability analyses [1, 2], with finite-volume-based spatial discretization, as used in standard aerodynamics codes, is presented. The key advantage of the time-stepping method over matrix-formulation approaches is that the former avoids the computer-storage issues associated with the latter methodology. To date, both approaches have been used successfully to analyze instability in complex geometries, although their relative advantages have never been quantified. The ultimate goal of the present work is to address this issue in the context of spatial discretization schemes typically used in industry. The time-stepping approach of Chiba [3] has been implemented in conjunction with two direct numerical simulation algorithms, one based on the high-order methods typically used in this context and another based on low-order methods representative of those in common use in industry. The two codes have been validated against solutions of the BiGlobal EVP, and it has been shown that small errors in the base flow do not significantly affect the results. As a result, a three-dimensional compressible unsteady second-order code for global linear stability analysis has been successfully developed, based on finite-volume spatial discretization and a time-stepping method, with the ability to study complex geometries by means of unstructured and hybrid meshes.
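The storage advantage of time-stepping comes from never forming the Jacobian: the linearized simulation is used only as a black-box operator inside an Arnoldi-type eigensolver. Here is a minimal sketch of that idea in Python with SciPy, where a placeholder function advance_linearized stands in for one period of the linearized DNS; the operator and the random matrix behind it are illustrative stand-ins, not the codes described in the abstract.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigs

n = 500  # degrees of freedom of the discretized perturbation

# Stand-in for one call to the linearized time-stepper: u(t + T) = M u(t).
# In a real global stability code this would run the linearized DNS for
# one period T; the dense random matrix here is purely illustrative.
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n)) / np.sqrt(n)

def advance_linearized(u):
    """Propagate a perturbation over one period with the time-stepper."""
    return M @ u

# Matrix-free operator: Arnoldi only ever needs matrix-vector products,
# so the (potentially enormous) Jacobian is never stored.
A = LinearOperator((n, n), matvec=advance_linearized)

# Leading Ritz values of the monodromy-like operator; eigenvalues with
# magnitude greater than 1 would indicate an unstable global mode.
mu, modes = eigs(A, k=5, which="LM")
print(np.sort(np.abs(mu))[::-1])
```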
Abstract:
Multielectrode recording techniques were used to record ensemble activity from 10 to 16 simultaneously active CA1 and CA3 neurons in the rat hippocampus during performance of a spatial delayed-nonmatch-to-sample task. Extracted sources of variance were used to assess the nature of two different types of errors that accounted for 30% of total trials. The two types of errors included ensemble “miscodes” of sample phase information and errors associated with delay-dependent corruption or disappearance of sample information at the time of the nonmatch response. Statistical assessment of trial sequences and associated “strength” of hippocampal ensemble codes revealed that miscoded error trials always followed delay-dependent error trials in which encoding was “weak,” indicating that the two types of errors were “linked.” It was determined that the occurrence of weakly encoded, delay-dependent error trials initiated an ensemble encoding “strategy” that increased the chances of being correct on the next trial and avoided the occurrence of further delay-dependent errors. Unexpectedly, the strategy involved “strongly” encoding response position information from the prior (delay-dependent) error trial and carrying it forward to the sample phase of the next trial. This produced a miscode type error on trials in which the “carried over” information obliterated encoding of the sample phase response on the next trial. Application of this strategy, irrespective of outcome, was sufficient to reorient the animal to the proper between-trial sequence of response contingencies (nonmatch-to-sample) and boost performance to 73% correct on subsequent trials. The capacity for ensemble analyses of strength of information encoding combined with statistical assessment of trial sequences therefore provided unique insight into the “dynamic” nature of the role the hippocampus plays in delay-type memory tasks.
Abstract:
Modern digital communication systems achieve reliable transmission by employing error-correction techniques based on redundancy. Low-density parity-check (LDPC) codes work along the principles of Hamming codes, but the parity-check matrix is very sparse and multiple errors can be corrected. The sparseness of the matrix allows the decoding process to be carried out by probability propagation methods similar to those employed in turbo codes. The relation between spin systems in statistical physics and digital error-correcting codes is based on the existence of a simple isomorphism between the additive Boolean group and the multiplicative binary group. Shannon proved general results on the natural limits of compression and error correction by setting up the framework known as information theory. Error-correction codes are based on mapping the original space of words onto a higher-dimensional space in such a way that the typical distance between encoded words increases.
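The isomorphism mentioned above is simply x -> (-1)^x, which turns XOR on bits {0, 1} into multiplication on spins {+1, -1}; a quick check in Python (the helper name is illustrative):

```python
from itertools import product

def to_spin(x: int) -> int:
    """Map the Boolean group ({0,1}, XOR) to the binary group ({+1,-1}, *)."""
    return (-1) ** x

# Homomorphism property: spin(x XOR y) == spin(x) * spin(y) for all bits.
for x, y in product((0, 1), repeat=2):
    assert to_spin(x ^ y) == to_spin(x) * to_spin(y)

# This is why a parity check (an XOR of bits equal to 0) becomes a product
# of spins equal to +1, i.e., a multi-spin coupling in the statistical
# physics formulation of decoding.
print("isomorphism verified")
```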
Abstract:
The emergence of digital imaging and of digital networks has made duplication of original artwork easier. Watermarking techniques, also referred to as digital signatures, sign images by introducing changes that are imperceptible to the human eye but easily recoverable by a computer program. Error-correcting codes are a good choice for correcting the errors that may arise when extracting the signature. In this paper, we present an error-correction scheme based on a combination of Reed-Solomon codes with another optimal linear code as the inner code. We investigate the strength of the noise that this scheme withstands for a fixed capacity of the image and various lengths of the signature. Finally, we compare our results with other error-correcting techniques used in watermarking. We have also created a computer program for image watermarking that uses the newly presented scheme for error correction.
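To illustrate the concatenated structure (though not the paper's specific inner code), here is a minimal sketch in Python using the third-party reedsolo package as the outer Reed-Solomon layer and a simple 3-fold repetition code as a stand-in inner code; the package choice, parameters, and function names are illustrative assumptions.

```python
# pip install reedsolo  (third-party Reed-Solomon library)
from reedsolo import RSCodec

rs = RSCodec(10)  # outer RS code: 10 parity bytes, corrects 5 byte errors

def inner_encode(data: bytes, reps: int = 3) -> bytes:
    """Toy inner code: repeat every byte `reps` times (a stand-in for the
    optimal linear inner code used in the paper)."""
    return bytes(b for b in data for _ in range(reps))

def inner_decode(data: bytes, reps: int = 3) -> bytes:
    """Majority vote over each group of repeated bytes."""
    out = bytearray()
    for i in range(0, len(data), reps):
        group = data[i:i + reps]
        out.append(max(set(group), key=group.count))
    return bytes(out)

signature = b"watermark-id-42"
tx = inner_encode(bytes(rs.encode(signature)))

# Corrupt a few bytes of the carrier, as a noisy extraction might.
noisy = bytearray(tx)
noisy[5] ^= 0xFF
noisy[30] ^= 0x0F

decoded, _, _ = rs.decode(inner_decode(bytes(noisy)))
assert bytes(decoded) == signature
print("signature recovered:", bytes(decoded))
```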
Abstract:
Forward error correction (FEC) plays a vital role in coherent optical systems employing multi-level modulation. However, much of coding theory assumes that additive white Gaussian noise (AWGN) is dominant, whereas coherent optical systems have significant phase noise (PN) in addition to AWGN. This changes the error statistics and impacts FEC performance. In this paper, we propose a novel semianalytical method for dimensioning binary Bose-Chaudhuri-Hocquenghem (BCH) codes for systems with PN. Our method involves extracting statistics from pre-FEC bit error rate (BER) simulations. We use these statistics to parameterize a bivariate binomial model that describes the distribution of bit errors. In this way, we relate pre-FEC statistics to post-FEC BER and BCH codes. Our method is applicable to pre-FEC BER around 10^-3 and any post-FEC BER. Using numerical simulations, we evaluate the accuracy of our approach for a target post-FEC BER of 10^-5. Codes dimensioned with our bivariate binomial model meet the target within 0.2-dB signal-to-noise ratio.
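For orientation, the AWGN-only baseline that the paper's bivariate model generalizes treats bit errors within a codeword as independent, so the post-FEC performance of a t-error-correcting BCH(n, k) code follows a univariate binomial tail. A minimal sketch in Python with SciPy; the code parameters and the (i/n) weighting of failed blocks are illustrative assumptions, not the paper's model.

```python
from scipy.stats import binom

def post_fec_ber_binomial(p: float, n: int, t: int) -> float:
    """Post-FEC BER of a t-error-correcting length-n code under the
    independent-error (AWGN-only) binomial assumption: a block fails when
    more than t of its n bits are in error, and a failed block is assumed
    to leave roughly i of its n bits uncorrected."""
    ber = 0.0
    for i in range(t + 1, n + 1):
        ber += (i / n) * binom.pmf(i, n, p)
    return ber

# Illustrative BCH-like parameters: n = 1023, t = 8, pre-FEC BER of 1e-3.
print(post_fec_ber_binomial(1e-3, 1023, 8))
```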
Abstract:
Concurrent coding is an encoding scheme with 'holographic'-type properties that are shown here to be robust against a significant amount of noise and signal loss. This single encoding scheme is able to correct for random errors and burst errors simultaneously, without relying on cyclic codes. A simple and practical scheme has been tested that displays perfect decoding when the signal-to-noise ratio is of order -18 dB. The same scheme also displays perfect reconstruction when a contiguous block of 40% of the transmission is missing. In addition, this scheme is 50% more efficient in terms of transmitted power requirements than equivalent cyclic codes. A simple model is presented that describes the decoding process and can determine the expected computational load, as well as the critical levels of noise and missing data at which false messages begin to be generated.