916 results for Perfect codes
Abstract:
A new coding technique to be used in steganography is evaluated. The performance of this new technique is computed, and comparisons are made with the well-known theoretical upper bound, the Hamming upper bound, and basic LSB embedding.
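The abstract does not specify the coding technique, but the classical way a code improves on basic LSB is matrix embedding; the following minimal Python sketch (illustrative, not the paper's scheme) shows how syndrome coding with the [7,4] Hamming code embeds 3 message bits into 7 cover bits with at most one change.

    # A minimal sketch (not the paper's scheme): matrix embedding with the
    # [7,4] Hamming code. Embeds 3 message bits into 7 cover bits with at
    # most one bit flipped.
    import numpy as np

    # Parity-check matrix of the [7,4] Hamming code; column i (1-based)
    # is the binary expansion of i, so a nonzero syndrome names the bit to flip.
    H = np.array([[0, 0, 0, 1, 1, 1, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [1, 0, 1, 0, 1, 0, 1]])

    def embed(cover7, msg3):
        """Return a copy of cover7 whose syndrome equals msg3 (<= 1 flip)."""
        x = cover7.copy()
        s = (H @ x + msg3) % 2            # gap between current and wanted syndrome
        idx = s[0] * 4 + s[1] * 2 + s[2]  # column number encoded by the syndrome
        if idx:
            x[idx - 1] ^= 1
        return x

    def extract(stego7):
        return (H @ stego7) % 2

    rng = np.random.default_rng(0)
    cover = rng.integers(0, 2, 7)
    msg = rng.integers(0, 2, 3)
    stego = embed(cover, msg)
    assert np.array_equal(extract(stego), msg)
    print("changes:", np.sum(cover != stego))  # 0 or 1 per 7-bit block

Averaged over random messages this makes 7/8 of a change per block, an embedding efficiency of about 3.43 bits per change versus 2 for plain LSB replacement, which is the kind of gap such performance comparisons measure.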
Abstract:
Recently, minimum and non-minimum delay perfect codes were proposed for any channel of dimension n. Their construction appears in the literature as a subset of cyclic division algebras over ℚ(ζ₃) only for the dimension n = 2ˢn₁, where s ∈ {0, 1}, n₁ is odd and the signal constellations are isomorphic to ℤ[ζ₃]ⁿ. In this work, we propose an innovative methodology to extend the construction of minimum and non-minimum delay perfect codes as a subset of cyclic division algebras over ℚ(ζ₃), where the signal constellations are isomorphic to the hexagonal A₂ⁿ-rotated lattice, for any channel of any dimension n such that gcd(n, 3) = 1. (C) 2012 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
Abstract:
In this work, we propose an innovative methodology to extend the construction of minimum and non-minimum delay perfect codes as a subset of cyclic division algebras over ℚ(ζ₃), where the signal constellations are isomorphic to the hexagonal A₂ⁿ-rotated lattice, for any channel of any dimension n such that gcd(n, 3) = 1.
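The cyclic-division-algebra construction itself is beyond a short example, but the hexagonal A₂ lattice that shapes these signal constellations is easy to exhibit. A minimal numpy sketch (parameters illustrative) checks its minimum distance and packing density, π/√12 ≈ 0.9069, the densest packing in the plane:

    # A minimal sketch of the hexagonal A2 lattice underlying the signal
    # constellations above.
    import itertools
    import numpy as np

    G = np.array([[1.0, 0.0],
                  [0.5, np.sqrt(3) / 2]])   # generator matrix of A2 (row vectors)

    # Enumerate lattice points u @ G for small integer coordinate vectors u.
    pts = np.array([u @ G for u in itertools.product(range(-5, 6), repeat=2)])
    norms = np.linalg.norm(pts, axis=1)
    d_min = norms[norms > 1e-9].min()        # shortest nonzero vector
    vol = abs(np.linalg.det(G))              # covolume of the lattice
    density = np.pi * (d_min / 2) ** 2 / vol
    print(f"minimum distance {d_min:.4f}, packing density {density:.4f}")
    # -> minimum distance 1.0000, packing density 0.9069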
Abstract:
In this project, three different steganographic algorithms are implemented using JPEG2000 as the message carrier; the performance of each is computed, and they are compared in a graph. The goal is to show, for specific cases, that the algorithm based on the product of two perfect linear codes achieves better performance than that obtained with algorithms such as F5 and LSB.
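As a back-of-the-envelope sketch of the kind of performance curve such a comparison plots (not the project's actual computation), the embedding efficiency of F5-style matrix embedding with the family of [2ᵖ − 1, 2ᵖ − 1 − p] Hamming codes can be tabulated against plain LSB replacement:

    # Embedding efficiency (message bits per changed cover symbol) of
    # F5-style matrix embedding with [2^p - 1, 2^p - 1 - p] Hamming codes,
    # versus plain LSB replacement (efficiency 2, since roughly half of the
    # overwritten LSBs already match the message).
    for p in range(1, 8):
        n = 2 ** p - 1                  # block length of the Hamming code
        rate = p / n                    # embedded bits per cover symbol
        avg_changes = n / (n + 1)       # one change unless the syndrome is 0
        efficiency = p / avg_changes    # bits embedded per change
        print(f"p={p}: payload {rate:.3f} bpp, efficiency {efficiency:.2f}")

The table shows the usual trade-off: larger codes embed fewer bits per cover symbol but pay far less distortion per embedded bit.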
Abstract:
The goal of this project is to construct nonlinear perfect binary codes efficiently. To this end, we have developed a software package for the MAGMA interpreter containing functions for the construction of perfect codes, the computation of code invariants, and other auxiliary functions for computations on the codewords of a code.
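The project's package targets MAGMA; as a language-neutral stand-in, the following Python sketch checks the defining property of a perfect binary code (Hamming balls of radius t around codewords tile the whole space) on the [7,4,3] Hamming code, using a textbook generator matrix rather than anything from the package:

    import itertools
    from math import comb

    G = [(1,0,0,0,0,1,1), (0,1,0,0,1,0,1), (0,0,1,0,1,1,0), (0,0,0,1,1,1,1)]
    code = {tuple(sum(g[j] for g, m in zip(G, msg) if m) % 2 for j in range(7))
            for msg in itertools.product((0, 1), repeat=4)}

    d = min(sum(a != b for a, b in zip(u, v))
            for u in code for v in code if u != v)   # minimum distance
    t = (d - 1) // 2                                 # packing radius
    ball = sum(comb(7, i) for i in range(t + 1))     # size of a radius-t ball
    print(d, len(code) * ball == 2 ** 7)             # -> 3 True (perfect)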
Abstract:
One of the great open problems in mathematics to this day is the sphere-packing problem. In the attempt to solve it, several important related factors have been studied. In this work we present a brief introduction to lattice theory and coding theory, covering concepts such as packing density and covering density. The goal of this work is to study packing and covering density in lattices with respect to the p-norm. In this study we emphasize the paper Quasi-perfect codes in the lp metric by Strapasson et al. [13], where the notion of perfection and imperfection of lattices with respect to the p-norm is established, and an algorithm that searches for perfect and quasi-perfect lattices is presented.
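A brute-force Python sketch (not the search algorithm of Strapasson et al. [13], and specialized to p = 1, where distances on ℤ² are integers) illustrates the notions involved: the packing radius is the largest r with disjoint balls around lattice points, the covering radius the smallest R whose balls cover ℤ²; the lattice is perfect when they coincide and quasi-perfect when covering exceeds packing by one.

    import itertools
    import numpy as np

    def lp_dist(x, p=1):
        return np.sum(np.abs(x) ** p) ** (1 / p)

    B = np.array([[2, 1], [-1, 2]])      # generator of the classical perfect
                                         # Lee-sphere lattice of radius 1
    lattice = [u @ B for u in itertools.product(range(-4, 5), repeat=2)]

    d_min = min(lp_dist(v) for v in lattice if np.any(v))
    packing = int((d_min - 1) // 2)

    covering = max(min(lp_dist(np.array(z) - v) for v in lattice)
                   for z in itertools.product(range(-2, 3), repeat=2))

    print(f"packing {packing}, covering {covering:.0f}")   # -> 1, 1: perfect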
Abstract:
Graduate Program in Electrical Engineering - FEIS
Abstract:
Finding large deletion-correcting codes is an important problem in coding theory, studied by many researchers over the years. Varshamov and Tenengolts constructed the Varshamov-Tenengolts codes (VT codes), and in 1992 Levenshtein showed that the VT codes are perfect binary one-deletion-correcting codes. Tenengolts constructed the T codes to handle the non-binary case. However, the T codes are neither optimal nor perfect, which leaves room for improvement. Later, Bours showed that perfect deletion-correcting codes have a close relationship with design theory. Using this approach, Wang and Yin constructed perfect 5-deletion-correcting codes of length 7 for large alphabet sizes. In our research, we focus on how to extend or combinatorially construct large codes with longer length, few deletions, and a small but non-binary alphabet, especially a ternary one. After a brief study, we discovered some properties of the T codes and produced some large codes by three different ways of extending existing good codes.
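A minimal sketch of the binary construction discussed above: VT_a(n) = { x in {0,1}ⁿ : Σᵢ i·xᵢ ≡ a (mod n+1) }. The classes VT_0, ..., VT_n partition {0,1}ⁿ, and the check below verifies that each class corrects any single deletion (Levenshtein's perfectness result is the stronger fact that the deletion balls tile exactly; only unique decodability is tested here).

    import itertools

    def vt_class(x):
        n = len(x)
        return sum(i * b for i, b in enumerate(x, start=1)) % (n + 1)

    n = 6
    codes = {a: [] for a in range(n + 1)}
    for x in itertools.product((0, 1), repeat=n):
        codes[vt_class(x)].append(x)

    # The classes partition the whole space ...
    assert sum(len(c) for c in codes.values()) == 2 ** n

    # ... and within each class, no two codewords share a deletion result,
    # so one deleted bit can always be recovered.
    for code in codes.values():
        seen = {}
        for x in code:
            for i in range(n):
                y = x[:i] + x[i + 1:]
                assert seen.setdefault(y, x) == x   # unique codeword per y
    print("every VT_a(%d) class corrects a single deletion" % n)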
Abstract:
Dedicated to the memory of the late professor Stefan Dodunekov on the occasion of his 70th anniversary. We classify up to multiplier equivalence maximal (v, 3, 1) optical orthogonal codes (OOCs) with v ≤ 61 and maximal (v, 3, 2, 1) OOCs with v ≤ 99. There is a one-to-one correspondence between maximal (v, 3, 1) OOCs, maximal cyclic binary constant weight codes of weight 3 and minimum distance 4, (v, 3; ⌊(v − 1)/6⌋) difference packings, and maximal (v, 3, 1) binary cyclically permutable constant weight codes. Therefore the classification of (v, 3, 1) OOCs holds for them too. Some of the classified (v, 3, 1) OOCs are perfect and they are equivalent to cyclic Steiner triple systems of order v and (v, 3, 1) cyclic difference families.
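A small Python sketch of the correspondence the abstract uses: a (v, 3, 1) OOC is a family of 3-element blocks in Z_v whose internal differences are pairwise distinct within and across blocks (a difference packing). This greedy search (illustrative only; the classification above is exhaustive up to multiplier equivalence) finds such a family for v = 7 and reports whether it is perfect, i.e. whether every nonzero difference is covered, giving a cyclic Steiner triple system.

    import itertools

    def block_diffs(block, v):
        return {(a - b) % v for a, b in itertools.permutations(block, 2)}

    v = 7
    used, blocks = set(), []
    for block in itertools.combinations(range(v), 3):
        d = block_diffs(block, v)
        if len(d) == 6 and not d & used:   # 6 distinct diffs, none used yet
            used |= d
            blocks.append(block)

    print(blocks, "perfect:", len(used) == v - 1)
    # -> [(0, 1, 3)] perfect: True   (the cyclic STS(7))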
Abstract:
Concurrent coding is an encoding scheme with 'holographic'-type properties that are shown here to be robust against a significant amount of noise and signal loss. This single encoding scheme is able to correct for random errors and burst errors simultaneously, but does not rely on cyclic codes. A simple and practical scheme has been tested that displays perfect decoding when the signal-to-noise ratio is of order −18 dB. The same scheme also displays perfect reconstruction when a contiguous block of 40% of the transmission is missing. In addition, this scheme is 50% more efficient in terms of transmitted power requirements than equivalent cyclic codes. A simple model is presented that describes the process of decoding, determines the expected computational load, and describes the critical levels of noise and missing data at which false messages begin to be generated.
Abstract:
In the last few years there has been great development of technologies like quantum computers and quantum communication systems, due to their huge potential and the growing number of applications. However, physical qubits suffer from many nonidealities, like measurement errors and decoherence, that cause failures in the quantum computation. This work shows how it is possible to exploit concepts from classical information theory to realize quantum error-correcting codes by adding redundancy qubits. In particular, the threshold theorem states that the decoding failure rate can be made arbitrarily small, provided the physical error rate is below a given accuracy threshold. The focus is on codes belonging to the family of topological codes, like the toric, planar and XZZX surface codes. First, they are compared from a theoretical point of view, to show their advantages and disadvantages. The algorithms behind the minimum-weight perfect-matching decoder, the most popular for such codes, are presented. The last section is dedicated to the analysis of the performance of these topological codes under different error channel models, showing interesting results. In particular, while the error correction capability of surface codes decreases in the presence of biased errors, XZZX codes possess intrinsic symmetries that allow them to improve their performance when one kind of error occurs more frequently than the others.
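A toy sketch of the minimum-weight perfect-matching (MWPM) decoding step, using networkx: syndrome defects are paired so that the total estimated error length is minimal. Real decoders work on a decoding graph with boundary nodes and calibrated edge weights; this sketch (coordinates and distances are illustrative) only shows the matching itself. networkx provides maximum-weight matching, so distances are negated.

    import itertools
    import networkx as nx

    defects = [(0, 0), (0, 3), (2, 1), (5, 2)]   # toy defect coordinates

    G = nx.Graph()
    for (i, u), (j, v) in itertools.combinations(enumerate(defects), 2):
        dist = abs(u[0] - v[0]) + abs(u[1] - v[1])   # Manhattan distance
        G.add_edge(i, j, weight=-dist)               # negate: max-weight = min-dist

    # maxcardinality=True forces a perfect matching on the defect graph.
    pairs = nx.max_weight_matching(G, maxcardinality=True)
    print([(defects[i], defects[j]) for i, j in pairs])
    # pairs (0,0)-(0,3) and (2,1)-(5,2): total length 7, the minimum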
Abstract:
We adopt the Dirac model for graphene and calculate the Casimir interaction energy between a plane suspended graphene sample and a parallel plane perfect conductor. This is done in two ways. First, we use the quantum-field-theory approach and evaluate the leading-order diagram in a theory with (2+1)-dimensional fermions interacting with (3+1)-dimensional photons. Next, we consider an effective theory for the electromagnetic field with matching conditions induced by quantum quasiparticles in graphene. The first approach turns out to be the leading order in the coupling constant of the second one. The Casimir interaction for this system appears to be rather weak. It exhibits a strong dependence on the mass of the quasiparticles in graphene.
Abstract:
We show that commutative group spherical codes in ℝⁿ, as introduced by D. Slepian, are directly related to flat tori and quotients of lattices. As a consequence of this view, we derive new results on the geometry of these codes and an upper bound for their cardinality in terms of the minimum distance and the maximum center density of lattices and general spherical packings in half the dimension of the code. This bound is tight in the sense that it can be approached arbitrarily closely in any dimension. Examples of this approach and a comparison of this bound with the Union and Rankin bounds for general spherical codes are also presented.
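One simple instance of the lattice-quotient/flat-torus correspondence (the torus parameters here are illustrative, not taken from the paper): points of Z_m × Z_m, a quotient of the lattice ℤ², are mapped onto a flat torus inside the unit sphere of ℝ⁴, giving a commutative group spherical code whose minimum distance can then be measured.

    import itertools
    import numpy as np

    m = 5
    def torus_point(a, b):
        t1, t2 = 2 * np.pi * a / m, 2 * np.pi * b / m
        return np.array([np.cos(t1), np.sin(t1),
                         np.cos(t2), np.sin(t2)]) / np.sqrt(2)

    code = [torus_point(a, b) for a, b in itertools.product(range(m), repeat=2)]
    assert all(abs(np.linalg.norm(x) - 1) < 1e-12 for x in code)   # on S^3

    d_min = min(np.linalg.norm(x - y)
                for x, y in itertools.combinations(code, 2))
    print(f"{len(code)} points on S^3, minimum distance {d_min:.4f}")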
Abstract:
Surface heat treatment of glasses and ceramics using CO₂ lasers has attracted the attention of several researchers around the world due to its impact on technological applications, such as lab-on-a-chip devices, diffraction gratings and microlenses. Microlens fabrication on a glass surface has been studied mainly due to its importance in optical devices (fiber coupling, CCD signal enhancement, etc.). The goal of this work is to present a systematic study of the conditions for microlens fabrication, along with the viability of using microlens arrays recorded on the glass surface as bidimensional codes for product identification. This would allow the production of codes without any residues (like the fine powder generated by laser ablation) and with resistance to aggressive environments, such as sterilization processes. The microlens arrays were fabricated using a continuous-wave CO₂ laser focused on the surface of flat commercial soda-lime silicate glass substrates. The fabrication conditions were studied based on laser power, heating time and microlens profiles. A He-Ne laser was used as a light source in a qualitative experiment to test the viability of using the microlenses as bidimensional codes.