974 results for CORRECTING OUTPUT CODES
Abstract:
Z_4-linearity is a technique for constructing good binary codes. Motivated by this property, we address the problem of extending Z_4-linearity to Z_q^n-linearity. In this direction, we consider the n-dimensional Lee space of order q, that is, (Z_q^n, d_L), one of the most interesting spaces for coding applications. We establish the symmetry group of Z_q^n for any n and q by determining its isometries. We also show that there is no cyclic subgroup of order q^n in Gamma(Z_q^n) acting transitively on Z_q^n. Therefore, there exists no Z_q^n-linear code with respect to a cyclic subgroup.
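A quick aside on the metric itself: the Lee distance on Z_q^n is elementary to compute, as the following minimal Python sketch (function name mine) makes explicit.

```python
def lee_distance(x, y, q):
    """Lee distance on Z_q^n: each coordinate contributes the shorter of
    the two ways around the cycle Z_q, i.e. min(d, q - d) with
    d = (x_i - y_i) mod q."""
    return sum(min((a - b) % q, (b - a) % q) for a, b in zip(x, y))

# Example in the Lee space (Z_4^3, d_L): contributions 1 + 0 + 1 = 2.
print(lee_distance((0, 1, 3), (3, 1, 0), 4))  # 2
```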
Abstract:
Synthetic-heterodyne demodulation is a useful technique for dynamic displacement and velocity detection in interferometric sensors, as it can provide an output signal that is immune to interferometric drift. With the advent of cost-effective, high-speed real-time signal-processing systems and software, processing of the complex signals encountered in interferometry has become more feasible. In synthetic heterodyne, obtaining the actual dynamic displacement or vibration of the object under test requires knowledge of the interferometer visibility and of the argument of two Bessel functions. In this paper, a method is described for determining the former and for setting the Bessel-function argument to a fixed value that ensures maximum sensitivity. Conventional synthetic-heterodyne demodulation requires the use of two in-phase local oscillators; however, the phase of these oscillators relative to the interferometric signal is unknown. It is shown that, by using two additional quadrature local oscillators, a demodulated signal can be obtained that is independent of this phase difference. The experimental interferometer is a Michelson configuration using a visible single-mode laser, whose current is sinusoidally modulated at a frequency of 20 kHz. The detected interferometer output is acquired using a 250 kHz analog-to-digital converter and processed in real time. The system is used to measure the displacement-sensitivity frequency response and linearity of a piezoelectric mirror shifter over a range of 500 Hz to 10 kHz. The experimental results show good agreement with two independent techniques: the signal-coincidence method and the so-called n-commuted Pernick method.
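To illustrate the quadrature-local-oscillator idea, here is a hedged Python sketch (my own choices of filter design and parameter names, not the paper's implementation): the detected signal is mixed with in-phase and quadrature oscillators at the modulation frequency and its second harmonic, and the quadrature sums remove the dependence on the unknown oscillator phase.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def synthetic_heterodyne(v, fs, f0, j1, j2):
    """Sketch of synthetic-heterodyne demodulation with quadrature
    local-oscillator pairs. v: detected interferometer signal sampled at
    fs; f0: laser-current modulation frequency (20 kHz in the paper);
    j1, j2: Bessel values J1(C), J2(C) at the chosen modulation depth C."""
    t = np.arange(len(v)) / fs
    b, a = butter(4, 0.5 * f0 / (fs / 2))      # low-pass well below f0
    lp = lambda x: filtfilt(b, a, x)
    # Mix with in-phase and quadrature oscillators at f0 and 2*f0.
    s1i = lp(v * np.cos(2 * np.pi * f0 * t))
    s1q = lp(v * np.sin(2 * np.pi * f0 * t))
    s2i = lp(v * np.cos(4 * np.pi * f0 * t))
    s2q = lp(v * np.sin(4 * np.pi * f0 * t))
    # Quadrature sums are independent of the unknown oscillator phase,
    # at the cost of a sign ambiguity a full implementation must resolve.
    sin_term = np.hypot(s1i, s1q) / j1         # proportional to |sin(phi)|
    cos_term = np.hypot(s2i, s2q) / j2         # proportional to |cos(phi)|
    return np.arctan2(sin_term, cos_term)      # interference phase, up to sign
```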
Abstract:
Maximum-likelihood decoding is often the optimal decoding rule one can use, but it is very costly to implement in a general setting. Much effort has therefore been dedicated to finding efficient decoding algorithms that either achieve or approximate the error-correcting performance of the maximum-likelihood decoder. This dissertation examines two approaches to this problem. In 2003, Feldman and his collaborators defined the linear programming decoder, which operates by solving a linear programming relaxation of the maximum-likelihood decoding problem. As with many modern decoding algorithms, it is possible for the linear programming decoder to output vectors that do not correspond to codewords; such vectors are known as pseudocodewords. In this work, we completely classify the set of linear programming pseudocodewords for the family of cycle codes. For the case of the binary symmetric channel, another approximation of maximum-likelihood decoding was introduced by Omura in 1972. This decoder employs an iterative algorithm whose behavior closely mimics that of the simplex algorithm. We generalize Omura's decoder to operate on any binary-input memoryless channel, thus obtaining a soft-decision decoding algorithm. Further, we prove that the probability of the generalized algorithm returning the maximum-likelihood codeword approaches 1 as the number of iterations goes to infinity.
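To make the linear programming decoder concrete: Feldman's relaxation minimizes a linear channel cost over the "fundamental polytope," which has one forbidden-set inequality per odd-sized subset of each check's neighborhood. A minimal sketch (exponential in the check degree, so for small codes and illustration only):

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

def lp_decode(H, gamma):
    """LP decoding sketch. H: binary parity-check matrix;
    gamma[i] = log(P(y_i | x_i = 0) / P(y_i | x_i = 1)).
    An all-integral optimum is the ML codeword; a fractional
    optimum is a pseudocodeword in the sense described above."""
    m, n = H.shape
    A, b = [], []
    for j in range(m):
        nbrs = list(np.flatnonzero(H[j]))
        for size in range(1, len(nbrs) + 1, 2):              # odd-sized subsets S
            for S in combinations(nbrs, size):
                row = np.zeros(n)
                row[list(S)] = 1.0                           # sum over S ...
                row[[i for i in nbrs if i not in S]] = -1.0  # ... minus the rest
                A.append(row)
                b.append(len(S) - 1.0)                       # <= |S| - 1
    res = linprog(gamma, A_ub=np.vstack(A), b_ub=np.array(b),
                  bounds=[(0, 1)] * n, method="highs")
    return res.x
```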
Abstract:
Since a genome is a discrete sequence, the elements of which belong to a set of four letters, the question as to whether or not there is an error-correcting code underlying DNA sequences is unavoidable. The most common approach to answering this question is to propose a methodology to verify the existence of such a code. However, none of the methodologies proposed so far, although quite clever, has achieved that goal. In a recent work, we showed that DNA sequences can be identified as codewords in a class of cyclic error-correcting codes known as Hamming codes. In this paper, we show that a complete intron-exon gene, and even a plasmid genome, can be identified as a Hamming code codeword as well. Although this does not constitute a definitive proof that there is an error-correcting code underlying DNA sequences, it is the first evidence in this direction.
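For background on the code family involved: membership of a word in a Hamming code is a pure syndrome computation, as in the minimal sketch below for the binary [7,4] code (the mappings from DNA symbols to code alphabets are the paper's contribution and are not reproduced here).

```python
import numpy as np

# Parity-check matrix of the binary [7,4] Hamming code: column i is the
# binary expansion of i+1, so a zero syndrome means "is a codeword".
H = np.array([[(i >> k) & 1 for i in range(1, 8)] for k in range(3)])

def is_hamming_codeword(bits):
    """True iff the length-7 binary word has zero syndrome under H."""
    return not np.any(H.dot(bits) % 2)

print(is_hamming_codeword(np.zeros(7, dtype=int)))   # True: the zero word
print(is_hamming_codeword(np.eye(7, dtype=int)[0]))  # False: a single error
```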
Abstract:
This dissertation concerns the intersection of three areas of discrete mathematics: finite geometries, design theory, and coding theory. The central theme is the power of finite geometry designs, which are constructed from the points and t-dimensional subspaces of a projective or affine geometry. We use these designs to construct and analyze combinatorial objects which inherit their best properties from these geometric structures. A central question in the study of finite geometry designs is Hamada’s conjecture, which proposes that finite geometry designs are the unique designs with minimum p-rank among all designs with the same parameters. In this dissertation, we will examine several questions related to Hamada’s conjecture, including the existence of counterexamples. We will also study the applicability of certain decoding methods to known counterexamples. We begin by constructing an infinite family of counterexamples to Hamada’s conjecture. These designs are the first infinite class of counterexamples for the affine case of Hamada’s conjecture. We further demonstrate how these designs, along with the projective polarity designs of Jungnickel and Tonchev, admit majority-logic decoding schemes. The codes obtained from these polarity designs attain error-correcting performance which is, in certain cases, equal to that of the finite geometry designs from which they are derived. This further demonstrates the highly geometric structure maintained by these designs. Finite geometries also help us construct several types of quantum error-correcting codes. We use relatives of finite geometry designs to construct infinite families of q-ary quantum stabilizer codes. We also construct entanglement-assisted quantum error-correcting codes (EAQECCs) which admit a particularly efficient and effective error-correcting scheme, while also providing the first general method for constructing these quantum codes with known parameters and desirable properties. Finite geometry designs are used to give exceptional examples of these codes.
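As a sketch of what such a majority-logic decoding scheme does, here is a generic one-step decoder in Python (the orthogonal check sets, which in the dissertation come from the polarity designs, are assumed given):

```python
import numpy as np

def majority_logic_decode(y, orthogonal_checks):
    """One-step majority-logic decoding. y: received binary word.
    orthogonal_checks[i]: parity checks (index sets containing i) that
    are orthogonal on bit i, i.e. pairwise intersecting only in i.
    Bit i is flipped when a majority of its checks fail; with J
    orthogonal checks per bit this corrects floor(J/2) errors."""
    x = y.copy()
    for i, checks in enumerate(orthogonal_checks):
        failed = sum(int(np.sum(y[list(c)]) % 2) for c in checks)
        if 2 * failed > len(checks):
            x[i] ^= 1
    return x
```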
Abstract:
The International GNSS Service (IGS) issues four sets of so-called ultra-rapid products per day, which are based on the contributions of the IGS Analysis Centers. The traditional (“old”) ultra-rapid orbit and earth rotation parameters (ERP) solution of the Center for Orbit Determination in Europe (CODE) was based on the output of three consecutive 3-day long-arc rapid solutions. Information from the IERS Bulletin A was required to generate the predicted part of the old CODE ultra-rapid product. The current (“new”) product, activated in November 2013, is based on the output of exactly one multi-day solution. A priori information from the IERS Bulletin A is no longer required for generating and predicting the orbits and ERPs. This article discusses the transition from the old to the new CODE ultra-rapid orbit and ERP products and the associated improvement in reliability and performance. All solutions used in this article were generated with the development version of the Bernese GNSS Software. The package was slightly extended to meet the needs of the new CODE ultra-rapid generation.
Abstract:
The BSRN Toolbox is a software package supplied by the WRMC and is freely available to all station scientists and data users. The main features of the package include a download manager for Station-to-Archive files, a tool to convert files into human-readable, TAB-separated ASCII tables (similar to those output by the PANGAEA database), and a tool to check data sets for violations of the "BSRN Global Network recommended QC tests, V2.0" quality criteria. The latter tool creates quality codes, one per measured value, indicating whether the data are "physically possible" or "extremely rare," or whether "intercomparison limits are exceeded." In addition, auxiliary data such as the solar zenith angle, or the global irradiance calculated from the diffuse and direct components, can be output. All output from the QC tool can be visualized using PanPlot (doi:10.1594/PANGAEA.816201).
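To illustrate the flavor of the QC output, a hedged Python sketch follows; the actual limits in the "BSRN Global Network recommended QC tests, V2.0" depend on the quantity and on solar position and are deliberately left as inputs here.

```python
import math

def qc_flag(value, pp_min, pp_max, er_min, er_max):
    """One coarse quality code per measured value, in the spirit of the
    BSRN QC tests (limit formulas not reproduced; supplied as inputs)."""
    if not (pp_min <= value <= pp_max):
        return "not physically possible"
    if not (er_min <= value <= er_max):
        return "extremely rare"
    return "ok"

def global_from_components(diffuse, direct_normal, sza_deg):
    """Auxiliary quantity mentioned above: global irradiance from its
    components, G = diffuse + direct_normal * cos(solar zenith angle)."""
    return diffuse + direct_normal * math.cos(math.radians(sza_deg))
```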
Abstract:
The visual world is presented to the brain through patterns of action potentials in the population of optic nerve fibers. Single-neuron recordings show that each retinal ganglion cell has a spatially restricted receptive field, a limited integration time, and a characteristic spectral sensitivity. Collectively, these response properties define the visual message conveyed by that neuron's action potentials. Since the size of the optic nerve is strictly constrained, one expects the retina to generate a highly efficient representation of the visual scene. By contrast, the receptive fields of nearby ganglion cells often overlap, suggesting great redundancy among the retinal output signals. Recent multineuron recordings may help resolve this paradox. They reveal concerted firing patterns among ganglion cells, in which small groups of nearby neurons fire synchronously with delays of only a few milliseconds. As there are many more such firing patterns than ganglion cells, such a distributed code might allow the retina to compress a large number of distinct visual messages into a small number of optic nerve fibers. This paper will review the evidence for a distributed coding scheme in the retinal output. The performance limits of such codes are analyzed with simple examples, illustrating that they allow a powerful trade-off between spatial and temporal resolution.
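A toy count (numbers mine) makes the combinatorial point concrete: symbols formed by synchronous pairs already far outnumber the cells themselves.

```python
from math import comb

n_cells = 100                  # a hypothetical patch of ganglion cells
singles = n_cells              # distinct symbols if each cell codes alone
pairs = comb(n_cells, 2)       # symbols if synchronous pairs carry meaning
print(singles, pairs)          # 100 vs 4950: many more patterns than cells
```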
Abstract:
We investigate the performance of Gallager-type error-correcting codes for binary symmetric channels, where the codeword comprises products of K bits selected from the original message and decoding is carried out utilizing a connectivity tensor with C connections per index. Shannon's bound for the channel capacity is recovered for large K and zero temperature when the code rate K/C is finite. Close-to-optimal error-correcting capability, with improved decoding properties, is obtained for finite K and C.
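A minimal sketch of this encoding in spin form (index_sets plays the role of the connectivity tensor and is assumed given):

```python
import numpy as np

def encode_products(msg_bits, index_sets):
    """Each transmitted value is the product, in +/-1 spin form (XOR in
    Boolean form), of K message bits selected by the connectivity
    tensor; index_sets holds one K-tuple of indices per codeword bit."""
    s = 1 - 2 * np.asarray(msg_bits)        # map bits 0/1 to spins +1/-1
    return np.array([np.prod(s[list(idx)]) for idx in index_sets])
```

With N message bits each entering C products of K factors, there are NC/K transmitted values, which is where the rate K/C quoted above comes from.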
Abstract:
Low-density parity-check codes with irregular constructions have recently been shown to outperform the most advanced error-correcting codes to date. In this paper we apply methods of statistical physics to study the typical properties of simple irregular codes. We use the replica method to find a phase transition which coincides with Shannon's coding bound when appropriate parameters are chosen. The decoding by belief propagation is also studied using statistical physics arguments; the theoretical solutions obtained are in good agreement with simulation results. We compare the performance of irregular codes with that of regular codes and discuss the factors that contribute to the improvement in performance.
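For concreteness, the belief propagation referred to here is the generic sum-product algorithm on the code's Tanner graph; a compact and deliberately unoptimized Python sketch:

```python
import numpy as np

def bp_decode(H, llr, iters=50):
    """Sum-product belief propagation on the Tanner graph of parity-check
    matrix H. llr[i] = log P(y_i|x_i=0) / P(y_i|x_i=1) from the channel.
    Quadratic-time message updates: fine for illustration, not for use."""
    m, n = H.shape
    edges = [(j, i) for j in range(m) for i in range(n) if H[j][i]]
    v2c = {e: float(llr[e[1]]) for e in edges}      # variable-to-check messages
    x = (np.asarray(llr) < 0).astype(int)
    for _ in range(iters):
        # Check-node update: tanh rule over the other incoming messages.
        c2v = {}
        for j, i in edges:
            prod = np.prod([np.tanh(v2c[(j, k)] / 2)
                            for (jj, k) in edges if jj == j and k != i])
            c2v[(j, i)] = 2 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
        # Variable-node update and tentative hard decision.
        post = np.asarray(llr, dtype=float).copy()
        for j, i in edges:
            post[i] += c2v[(j, i)]
        x = (post < 0).astype(int)
        if not np.any(np.asarray(H).dot(x) % 2):    # all parity checks satisfied
            return x
        for j, i in edges:
            v2c[(j, i)] = post[i] - c2v[(j, i)]
    return x
```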
Abstract:
An exact solution to a family of parity check error-correcting codes is provided by mapping the problem onto a Husimi cactus. The solution obtained in the thermodynamic limit recovers the replica-symmetric theory results and provides a very good approximation to finite systems of moderate size. The probability propagation decoding algorithm emerges naturally from the analysis. A phase transition between decoding success and failure phases is found to coincide with an information-theoretic upper bound. The method is employed to compare Gallager and MN codes.
Abstract:
Statistical physics is employed to evaluate the performance of error-correcting codes in the case of finite message length for an ensemble of Gallager's error-correcting codes. We follow Gallager's approach of upper-bounding the average decoding error rate, but invoke the replica method to reproduce the tightest general bound to date, and to improve on the most accurate zero-error noise-level threshold reported in the literature. The relation between the methods used and those presented in the information theory literature is explored.
Abstract:
We propose a method to determine the critical noise level for decoding Gallager-type low-density parity-check error-correcting codes. The method is based on the magnetization enumerator (M), rather than on the weight enumerator (W) presented recently in the information theory literature. The interpretation of our method is appealingly simple, and the relation between the different decoding schemes, such as typical-pairs decoding, MAP, and finite-temperature (MPM) decoding, becomes clear. Our results are more optimistic than those derived via the methods of information theory and are in excellent agreement with recent results from another statistical physics approach.
Abstract:
Modern digital communication systems achieve reliable transmission by employing error-correction techniques that add redundancy. Low-density parity-check codes work along the principles of the Hamming code; their parity-check matrix is very sparse, and multiple errors can be corrected. The sparseness of the matrix allows the decoding process to be carried out by probability-propagation methods similar to those employed in Turbo codes. The relation between spin systems in statistical physics and digital error-correcting codes is based on the existence of a simple isomorphism between the additive Boolean group and the multiplicative binary group. Shannon proved general results on the natural limits of compression and error correction by setting up the framework known as information theory. Error-correction codes are based on mapping the original space of words onto a higher-dimensional space in such a way that the typical distance between encoded words increases.
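The isomorphism in question is the map x -> (-1)^x, which turns XOR into multiplication; a two-line check:

```python
# x -> (-1)^x sends the additive Boolean group ({0,1}, XOR) to the
# multiplicative binary group ({+1,-1}, *):
for a in (0, 1):
    for b in (0, 1):
        assert (-1) ** (a ^ b) == (-1) ** a * (-1) ** b
# Hence a parity check x_1 + ... + x_k = 0 (mod 2) becomes a product of
# spins s_1 * ... * s_k = +1, turning parity checks into spin couplings.
```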
Abstract:
Typical performance of low-density parity-check (LDPC) codes over a general binary-input output-symmetric memoryless channel is investigated using methods of statistical mechanics. The binary-input additive-white-Gaussian-noise channel and the binary-input Laplace channel are considered as specific channel noise models.
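For reference, each channel model enters such an analysis only through its log-likelihood ratio; with the BPSK mapping x -> (-1)^x, these take simple closed forms.

```python
import numpy as np

def llr_awgn(y, sigma):
    """log p(y|x=0)/p(y|x=1) for the binary-input AWGN channel with
    noise variance sigma^2: 2*y / sigma^2."""
    return 2 * y / sigma**2

def llr_laplace(y, lam):
    """Same LLR for the binary-input Laplace channel with noise density
    (1/(2*lam)) * exp(-|n|/lam): (|y + 1| - |y - 1|) / lam."""
    return (np.abs(y + 1) - np.abs(y - 1)) / lam
```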