979 results for "codes over rings"
Abstract:
We propose new classes of linear codes over integer rings of quadratic extensions of Q, the field of rational numbers. The codes are considered with respect to a Mannheim metric, which is a Manhattan metric modulo a two-dimensional (2-D) grid. In particular, codes over Gaussian integers and Eisenstein-Jacobi integers are extensively studied. Decoding algorithms are proposed for these codes when up to two coordinates of a transmitted code vector are affected by errors of arbitrary Mannheim weight. Moreover, we show that the proposed codes are maximum-distance separable (MDS), with respect to the Hamming distance. The practical interest in such Mannheim-metric codes is their use in coded modulation schemes based on quadrature amplitude modulation (QAM)-type constellations, for which neither the Hamming nor the Lee metric is appropriate.
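For orientation, the following is a minimal sketch (not taken from the paper) of how a Gaussian integer can be reduced modulo a Gaussian prime and its Mannheim weight read off; the prime pi = 2 + i and the nearest-integer rounding rule are illustrative assumptions.

```python
# Minimal sketch: reduce a Gaussian integer modulo a Gaussian prime pi and
# return the Mannheim (Manhattan) weight of the reduced residue.
def mannheim_weight(x: complex, pi: complex) -> int:
    """Mannheim weight of x in Z[i]/(pi): |Re(r)| + |Im(r)| of the reduced residue r."""
    p = round(abs(pi) ** 2)                    # norm of pi, a rational prime
    q = x * pi.conjugate() / p                 # exact quotient in Q(i)
    q_round = complex(round(q.real), round(q.imag))  # nearest Gaussian integer
    r = x - q_round * pi                       # small residue in the class of x
    return round(abs(r.real) + abs(r.imag))

# Example with pi = 2 + i (norm 5): the integers 0..4 represent the five residue
# classes 0, 1, -i, i, -1 and have Mannheim weights 0, 1, 1, 1, 1.
if __name__ == "__main__":
    pi = 2 + 1j
    for x in range(5):
        print(x, mannheim_weight(complex(x), pi))
```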
Abstract:
This study identified the areas of poor specificity in national injury hospitalization data and the areas of improvement and deterioration in specificity over time. A descriptive analysis of ten years of national hospital discharge data for Australia from July 2002 to June 2012 was performed. Proportions and percentage change of defined/undefined codes over time were examined. At the intent block level, accidents and assault were the most poorly defined, with over 11% undefined in each block. The mechanism blocks for accidents showed a significant deterioration in specificity over time, with up to 20% more undefined codes in some mechanisms. Place and activity were poorly defined at the broad block level (43% and 72% undefined, respectively). Private hospitals and hospitals in very remote locations recorded the highest proportions of undefined codes. Those aged over 60 years and females had higher proportions of undefined code usage. This study has identified significant, and worsening, deficiencies in the specificity of coded injury data in several areas. Focused attention is needed to improve the quality of injury data, especially in the areas identified in this study, to provide the evidence base needed to address the significant burden of injury in the Australian community.
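A minimal sketch of the kind of proportion-over-time tabulation described above; the column names, the toy records, and the particular codes treated as undefined are assumptions made for illustration, not the study's coding rules.

```python
import pandas as pd

# Toy extract: compute the percentage of "undefined" external-cause codes per year.
df = pd.DataFrame({
    "fin_year": ["2002/03", "2002/03", "2011/12", "2011/12"],
    "ext_cause": ["X59", "W01", "X59", "V89"],   # X59/V89 treated as undefined here
})
df["undefined"] = df["ext_cause"].isin({"X59", "V89", "Y34"})
by_year = df.groupby("fin_year")["undefined"].mean().mul(100).round(1)
print(by_year)   # percentage of undefined codes per financial year
```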
Abstract:
The problem of constructing space-time (ST) block codes over a fixed, desired signal constellation is considered. In this situation, there is a tradeoff between the transmission rate as measured in constellation symbols per channel use and the transmit diversity gain achieved by the code. The transmit diversity is a measure of the rate of polynomial decay of the pairwise error probability of the code with increasing signal-to-noise ratio (SNR). In the setting of a quasi-static channel model, let n_t denote the number of transmit antennas and T the block interval. For any n_t <= T, a unified construction of (n_t x T) ST codes is provided here, for a class of signal constellations that includes the familiar pulse-amplitude (PAM), quadrature-amplitude (QAM), and 2^K-ary phase-shift-keying (PSK) modulations as special cases. The construction is optimal as measured by the rate-diversity tradeoff and can achieve any given integer point on the rate-diversity tradeoff curve. An estimate of the coding gain realized is given. Other results presented here include i) an extension of the optimal unified construction to the multiple-fading-block case, ii) a version of the optimal unified construction in which the underlying binary block codes are replaced by trellis codes, iii) a linear dispersion form for the underlying binary block codes, iv) a Gray-mapped version of the unified construction, and v) a generalization of the construction to the S-ary case, corresponding to constellations of size S^K. Items ii) and iii) are aimed at simplifying the decoding of this class of ST codes.
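For context, a commonly stated form of the rate-diversity tradeoff and of the diversity-order interpretation alluded to above (standard material, not quoted from the paper); here n_r denotes the number of receive antennas.

```latex
% Singleton-type rate-diversity tradeoff and SNR decay of the pairwise error
% probability for an n_t x T space-time code with transmit diversity d:
R \;\le\; n_t - d + 1 \quad \text{(constellation symbols per channel use)},
\qquad
P_e(\mathrm{SNR}) \;\lesssim\; c \cdot \mathrm{SNR}^{-d\, n_r}.
```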
Abstract:
Cooperative communication using rateless codes, in which the source transmits a potentially unlimited stream of parity bits to the destination until an acknowledgment is received, has recently attracted considerable interest. It provides a natural and efficient mechanism for accumulating mutual information from multiple transmitting relays. We develop an analysis of queued cooperative relay systems that combines the communication-theoretic transmission aspects of cooperative communication using rateless codes over Rayleigh fading channels with the queuing-theoretic aspects associated with buffering messages at the relays. Relay cooperation combined with queuing reduces message transmission times and also helps distribute the traffic load in the network, which improves throughput significantly.
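A toy sketch of the mutual-information accumulation idea behind rateless-coded transmission over Rayleigh fading; the i.i.d. unit-mean exponential channel gains and the fixed number of symbols per slot are assumptions, and this is not the paper's queuing analysis.

```python
import math
import random

# Count the fading slots a receiver needs before the mutual information
# accumulated from a rateless-coded transmission exceeds the B-bit message size.
def slots_to_decode(B_bits: float, snr: float, symbols_per_slot: int = 100) -> int:
    accumulated, slots = 0.0, 0
    while accumulated < B_bits:
        h2 = random.expovariate(1.0)                        # |h|^2 for Rayleigh fading
        accumulated += symbols_per_slot * math.log2(1.0 + snr * h2)
        slots += 1
    return slots

# Example: average decoding delay for a 1000-bit message at 0 dB SNR.
print(sum(slots_to_decode(1000, snr=1.0) for _ in range(200)) / 200)
```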
Abstract:
Error-correcting codes are combinatorial objects designed to enable reliable transmission of digital data over noisy channels. They are used ubiquitously in communication, data storage, etc. Error correction allows reconstruction of the original data from the received word. Classical decoding algorithms are constrained to output just one codeword. However, in the late 1950s researchers proposed a relaxed error-correction model for potentially large error rates, known as list decoding. The research presented in this thesis focuses on reducing the computational effort and enhancing the efficiency of decoding algorithms for several codes, from both an algorithmic and an architectural standpoint. The codes in consideration are linear block codes closely related to Reed-Solomon (RS) codes. A high-speed, low-complexity algorithm and architecture are presented for encoding and decoding RS codes based on evaluation. The implementation results show that the hardware resources and the total execution time are significantly reduced as compared to the classical decoder. The evaluation-based encoding and decoding schemes are modified and extended for shortened RS codes, and a software implementation shows a substantial reduction in memory footprint at the expense of latency. Hermitian codes can be seen as concatenated RS codes and are much longer than RS codes over the same alphabet. A fast, novel and efficient VLSI architecture for Hermitian codes is proposed based on interpolation decoding. The proposed architecture is shown to outperform Kötter's decoder for high-rate codes. The thesis also explores a method of constructing optimal codes by computing the subfield subcodes of Generalized Toric (GT) codes, which are a natural extension of RS codes to several dimensions. The polynomial generators, or evaluation polynomials, for subfield subcodes of GT codes are identified, from which the dimension and a bound on the minimum distance are computed. The algebraic structure of the polynomials evaluating into the subfield is used to simplify the list decoding algorithm for BCH codes. Finally, an efficient and novel approach is proposed for exploiting powerful codes that have complex decoding but a simple encoding scheme (comparable to RS codes) for multihop wireless sensor network (WSN) applications.
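As a pointer to what "encoding based on evaluation" means, here is a minimal sketch of evaluation-based RS encoding over a small prime field; the thesis works over extension fields and with hardware architectures, so GF(7) and the specific evaluation points are purely illustrative.

```python
# Evaluation-based Reed-Solomon encoding over GF(p): the codeword is the vector
# of values of the message polynomial m(x) at fixed evaluation points.
def rs_encode(msg, p, points):
    """msg: coefficients of m(x), lowest degree first; returns the codeword over GF(p)."""
    def poly_eval(coeffs, x):
        acc = 0
        for c in reversed(coeffs):        # Horner's rule, all arithmetic mod p
            acc = (acc * x + c) % p
        return acc
    return [poly_eval(msg, a) for a in points]

# Example: a [6,3] code over GF(7), evaluation points 1..6.
print(rs_encode([2, 5, 1], p=7, points=[1, 2, 3, 4, 5, 6]))
```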
Abstract:
Self-dual doubly even linear binary error-correcting codes, often referred to as Type II codes, are closely related to many combinatorial structures such as 5-designs. Extremal codes are codes that have the largest possible minimum distance for a given length and dimension. The existence of an extremal (72,36,16) Type II code is still open. Previous results show that the automorphism group of a putative code C with the aforementioned properties has order 5 or order dividing 24. In this work, we present a method, and the results of an exhaustive search, showing that such a code C cannot admit an automorphism group isomorphic to Z6. In addition, we present a so far unpublished construction of the extended Golay code due to P. Becker. We generalize the notion and provide an example of another Type II code that can be obtained in this fashion. Consequently, we relate Becker's construction to the construction of binary Type II codes from codes over GF(2^r) via the Gray map.
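To make the Type II terminology concrete, the following small check (an illustration only, unrelated to the paper's search method) verifies self-duality and doubly-even weights for the [8,4,4] extended Hamming code, the shortest Type II code.

```python
from itertools import product

# Generator matrix of the [8,4,4] extended Hamming code (a Type II code).
G = [
    [1, 0, 0, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 1, 1, 1, 0],
]

def codewords(G):
    n = len(G[0])
    for bits in product([0, 1], repeat=len(G)):
        yield [sum(b * g[j] for b, g in zip(bits, G)) % 2 for j in range(n)]

# Type II = self-dual with all codeword weights divisible by 4.
doubly_even = all(sum(c) % 4 == 0 for c in codewords(G))
self_orthogonal = all(sum(a * b for a, b in zip(r, s)) % 2 == 0 for r in G for s in G)
self_dual = self_orthogonal and 2 * len(G) == len(G[0])
print(doubly_even, self_dual)   # True True
```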
Abstract:
We present here an information reconciliation method and demonstrate for the first time that it can achieve efficiencies close to 0.98. This method is based on the belief propagation decoding of non-binary LDPC codes over finite (Galois) fields. In particular, for convenience and faster decoding we only consider power-of-two Galois fields.
Abstract:
This paper provides concordance procedures for product-level trade and production data in the EU and examines the implications of changing product classifications for measured product adding and dropping at Belgian firms. Using the algorithms developed by Pierce and Schott (2012a, 2012b), the paper develops concordance procedures that allow researchers to trace changes in coding systems over time and to translate product-level production and trade data into a common classification that is consistent both within a single year and over time. Separate procedures are created for the eight-digit Combined Nomenclature system used to classify international trade activities at the product level within the European Union as well as for the eight-digit Prodcom categories used to classify products in European domestic production data. The paper further highlights important differences in coverage between the Prodcom and Combined Nomenclature classifications which need to be taken into account when generating combined domestic production and international trade data at the product level. The use of consistent product codes over time results in less product adding and dropping at continuing firms in the Belgian export and production data.
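A toy sketch of the concordance step described above; the CN8 codes, the synthetic "consistent" code, and the column names are invented for illustration and are not taken from the actual Pierce-Schott concordances.

```python
import pandas as pd

# Map CN8 product codes observed in different years onto a single consistent
# family code, then aggregate trade values by that consistent code.
trade = pd.DataFrame({
    "year": [2004, 2008],
    "cn8":  ["12345678", "12349999"],   # same product, reclassified over time
    "export_value": [100.0, 120.0],
})
concordance = pd.DataFrame({
    "cn8": ["12345678", "12349999"],
    "consistent_code": ["F0001", "F0001"],
})
consistent = trade.merge(concordance, on="cn8", how="left")
print(consistent.groupby(["consistent_code", "year"])["export_value"].sum())
```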
Abstract:
It is known that by employing space-time-frequency codes (STFCs) in frequency-selective MIMO-OFDM systems, all three forms of diversity, viz. spatial, temporal and multipath, can be exploited. There exist space-time-frequency block codes (STFBCs) designed using orthogonal designs with a constellation precoder to obtain full diversity (Z. Liu, Y. Xin and G. Giannakis, IEEE Trans. Signal Processing, Oct. 2002). Since rate-one orthogonal designs exist only for two transmit antennas, rate-one, full-diversity STFBCs cannot be constructed from orthogonal designs for more than two transmit antennas. This paper presents a rate-one STFBC scheme for four transmit antennas designed using quasi-orthogonal designs along with co-ordinate interleaved orthogonal designs (Zafar Ali Khan and B. Sundar Rajan, Proc. ISIT 2002). Conditions on the signal sets that yield full diversity are identified. Simulation results are presented to show the superiority of our codes over the existing ones.
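For reference, the rate-one complex orthogonal design for two transmit antennas (the Alamouti code) mentioned above; this is standard material included only to fix notation, not a construction from the paper.

```latex
% Alamouti design: two symbols x_1, x_2 sent over two antennas and two channel
% uses; the orthogonality X X^{H} = (|x_1|^2 + |x_2|^2) I_2 gives full diversity.
X =
\begin{pmatrix}
x_1 & x_2 \\
-x_2^{*} & x_1^{*}
\end{pmatrix}.
```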
Abstract:
Representing images and videos in the form of compact codes has emerged as an important research interest in the vision community, in the context of web-scale image/video search. The recently proposed Vector of Locally Aggregated Descriptors (VLAD) has been shown to outperform existing retrieval techniques while giving the desired compact representation. VLAD aggregates the local features of an image in the feature space. In this paper, we propose to represent the local features extracted from an image as sparse codes over an over-complete dictionary, which is obtained by the K-SVD dictionary training algorithm. The proposed VLAD aggregates the residuals in the space of these sparse codes to obtain a compact representation for the image. Experiments are performed on the `Holidays' database using SIFT features. The performance of the proposed method is compared with that of the original VLAD. A 4% increase in mean average precision (mAP) indicates the better retrieval performance of the proposed sparse-coding-based VLAD.
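A hedged sketch of the aggregation described above; the use of OMP via scikit-learn's SparseCoder, the per-atom residual definition, and the power/L2 normalization are assumptions made for illustration and may differ from the paper's exact pipeline (the dictionary D is assumed to come from K-SVD training).

```python
import numpy as np
from sklearn.decomposition import SparseCoder

def sparse_vlad(descriptors: np.ndarray, D: np.ndarray, n_nonzero: int = 5) -> np.ndarray:
    """descriptors: (n, d) local features; D: (K, d) dictionary with L2-normalized rows."""
    coder = SparseCoder(dictionary=D, transform_algorithm="omp",
                        transform_n_nonzero_coefs=n_nonzero)
    codes = coder.transform(descriptors)             # (n, K) sparse coefficients
    vlad = np.zeros_like(D)                          # one residual accumulator per atom
    for x, a in zip(descriptors, codes):
        for k in np.nonzero(a)[0]:
            vlad[k] += x - a[k] * D[k]               # residual w.r.t. the k-th atom
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))     # power normalization
    vlad /= np.linalg.norm(vlad) + 1e-12             # global L2 normalization
    return vlad.ravel()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 128))
    D /= np.linalg.norm(D, axis=1, keepdims=True)    # normalized dictionary atoms
    descs = rng.standard_normal((500, 128))
    print(sparse_vlad(descs, D).shape)               # (64 * 128,)
```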
Abstract:
A multiseries integrable model (MSIM) is defined as a family of compatible flows on an infinite-dimensional Lie group of N-tuples of formal series around N given poles on the Riemann sphere. Broad classes of solutions to a MSIM are characterized through modules over rings of rational functions, called asymptotic modules. Possible ways for constructing asymptotic modules are Riemann-Hilbert and ∂̄ problems. When MSIM's are written in terms of the group coordinates, some of them can be contracted into standard integrable models involving only a small number of scalar functions. Simple contractible MSIM's corresponding to one pole yield the Ablowitz-Kaup-Newell-Segur (AKNS) hierarchy. Two-pole contractible MSIM's are exhibited, which lead to a hierarchy of solvable systems of nonlinear differential equations consisting of (2+1)-dimensional evolution equations and of quite strong differential constraints.
Abstract:
In this paper, we prove the nonexistence of arcs with parameters (232, 48) and (233, 48) in PG(4,5). This rules out the existence of linear codes with parameters [232,5,184] and [233,5,185] over the field with five elements and improves two instances in the recent tables by Maruta, Shinohara and Kikui of optimal codes of dimension 5 over F5.
Abstract:
We present a construction of constant-weight codes based on the prime ideals of a Noetherian commutative ring. The coding scheme is based on the uniqueness of the primary decomposition of ideals in Noetherian rings. The source alphabet consists of a set of radical ideals constructed from a chosen subset of the prime spectrum of the ring. The distance between two radical ideals is taken to be the Hamming-type metric given by the symmetric difference of the corresponding sets of primes. As an application, we construct codes for random networks employing SAF routing.
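A toy illustration of the distance just described (the specific primes and "codewords" are made up and are not the paper's construction): radical ideals are identified with their finite sets of prime factors and the distance is the size of the symmetric difference.

```python
# Distance between two radical ideals, represented by their sets of prime factors.
def symmetric_distance(I: frozenset, J: frozenset) -> int:
    return len(I ^ J)

primes = {2, 3, 5, 7, 11}                              # chosen finite subset of the prime spectrum
c1, c2 = frozenset({2, 3, 5}), frozenset({2, 7, 11})   # two constant-weight "codewords"
assert c1 <= primes and c2 <= primes
print(symmetric_distance(c1, c2))                      # 4
```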
Abstract:
The setting considered in this paper is one of distributed function computation. More specifically, there is a collection of N sources possessing correlated information and a destination that would like to acquire a specific linear combination of the N sources. We address both the case when the common alphabet of the sources is a finite field and the case when it is a finite, commutative principal ideal ring with identity. The goal is to minimize the total amount of information that the N sources need to transmit while enabling reliable recovery at the destination of the linear combination sought. One means of achieving this goal is for each of the sources to compress all the information it possesses and transmit this to the receiver. The Slepian-Wolf theorem of information theory governs the minimum rate at which each source must transmit while enabling all data to be reliably recovered at the receiver. However, recovering all the data at the destination is often wasteful of resources, since the destination is only interested in computing a specific linear combination. An alternative explored here is one in which each source is compressed using a common linear mapping and then transmitted to the destination, which then uses linearity to directly recover the needed linear combination. The article is part review and in part presents new results. The portion of the paper that deals with finite fields is previously known material, while that dealing with rings is mostly new. Attempting to find the best linear map that will enable function computation forces us to consider the linear compression of a single source. While in the finite field case it is known that a source can be linearly compressed down to its entropy, it turns out that the same does not hold in the case of rings. An explanation for this curious interplay between algebra and information theory is also provided in this paper.
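For orientation, the standard finite-field fact referred to above, stated informally (not a result quoted from the paper; the ring case is precisely where this statement breaks down): a source X over F_q admits linear compression at rates approaching its entropy.

```latex
% Linear compression over a finite field: there exist matrices A in F_q^{k x n},
% with k/n approaching H(X)/log_2 q, from which X^n is recoverable with
% vanishing error probability as n grows.
\frac{k}{n} \;\to\; \frac{H(X)}{\log_2 q},
\qquad
\Pr\bigl[\hat{X}^n(A X^n) \neq X^n\bigr] \;\to\; 0 .
```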
Abstract:
In this paper, we consider a distributed function computation setting in which there are m distributed but correlated sources X1,...,Xm and a receiver interested in computing an s-dimensional subspace generated by [X1,...,Xm]Γ for some (m × s) matrix Γ of rank s. We construct a scheme based on nested linear codes and characterize the achievable rates obtained using the scheme. The proposed nested-linear-code approach performs at least as well as the Slepian-Wolf scheme in terms of sum-rate performance for all subspaces and source distributions. In addition, for a large class of distributions and subspaces, the scheme improves upon the Slepian-Wolf approach. The nested-linear-code scheme may be viewed as uniting under a common framework both the Korner-Marton approach of using a common linear encoder and the Slepian-Wolf approach of employing different encoders at each source. Along the way, we prove an interesting and fundamental structural result on the nature of subspaces of an m-dimensional vector space V with respect to a normalized measure of entropy. Here, each element of V corresponds to a distinct linear combination of the set {X_1, ..., X_m} of m random variables whose joint probability distribution function is given.
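For context, the Korner-Marton benchmark alluded to above, in its classical two-source binary form (standard material, not a result of this paper): when the receiver only wants the modulo-two sum Z = X_1 ⊕ X_2, the same linear encoder at both sources suffices, and this can beat the Slepian-Wolf sum rate whenever 2 H(Z) < H(X_1, X_2).

```latex
% Korner-Marton rates for computing Z = X_1 \oplus X_2 with a common linear encoder,
% versus the Slepian-Wolf sum rate needed to recover both sources in full:
R_1 = R_2 = H(X_1 \oplus X_2),
\qquad
R_1 + R_2 \;\ge\; H(X_1, X_2) \ \ \text{(Slepian-Wolf)} .
```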