992 results for Maximum distance separable (MDS) convolutional codes


Relevance: 100.00%

Publisher:

Abstract:

This paper revisits strongly-MDS convolutional codes with maximum distance profile (MDP). These are (non-binary) convolutional codes that have an optimum sequence of column distances and attain the generalized Singleton bound at the earliest possible time frame. These properties make such convolutional codes well suited to the erasure channel, since they are able to correct a large number of erasures per time interval. The existence of these codes had previously been shown only in some specific cases. This paper shows, by construction, that convolutional codes that are both strongly-MDS and MDP exist for all choices of parameters.
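For reference, the generalized Singleton bound referred to here is usually stated in the standard (n, k, δ) notation of the convolutional coding literature (which the abstract itself does not fix) as

$$ d_{\mathrm{free}} \le (n-k)\left(\left\lfloor \tfrac{\delta}{k} \right\rfloor + 1\right) + \delta + 1, $$

while the j-th column distance of any such code satisfies d_j^c ≤ (n−k)(j+1) + 1. An MDS code attains the first bound, an MDP code attains the column-distance bound for as long as it can possibly hold, and a strongly-MDS code reaches the generalized Singleton bound at the earliest possible column distance.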

Relevance: 100.00%

Publisher:

Abstract:

Maximum distance separable (MDS) convolutional codes are characterized by the property that the free distance meets the generalized Singleton bound. The existence of free MDS convolutional codes over Z_{p^r} was recently established in Oued and Solé (IEEE Trans Inf Theory 59(11):7305–7313, 2013) via the Hensel lift of a cyclic code. In this paper we further investigate this important class of convolutional codes over Z_{p^r} from a new perspective. We introduce the notions of p-standard form and r-optimal parameters to derive a novel upper bound of Singleton type on the free distance. Moreover, we present a constructive method for building general (not necessarily free) MDS convolutional codes over Z_{p^r} for any given set of parameters.

Relevance: 100.00%

Publisher:

Abstract:

We design powerful low-density parity-check (LDPC) codes with iterative decoding for the block-fading channel. We first study the case of maximum-likelihood decoding, and show that the design criterion is rather straightforward. Since optimal constructions for maximum-likelihood decoding do not perform well under iterative decoding, we introduce a new family of full-diversity LDPC codes that exhibit near-outage-limit performance under iterative decoding for all block-lengths. This family competes favorably with multiplexed parallel turbo codes for nonergodic channels.
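As a rough, self-contained illustration of the outage limit that such full-diversity codes approach (the Rayleigh block-fading model, target rate and block count below are generic textbook choices, not parameters taken from the paper), a Monte Carlo estimate could look like:

```python
import numpy as np

def outage_probability(snr_db, rate, n_blocks, n_trials=200_000, seed=0):
    """Monte Carlo estimate of the information-outage probability of a
    block-fading AWGN channel with n_blocks i.i.d. Rayleigh fading blocks:
    an outage occurs when the mutual information averaged over the blocks
    falls below the target rate (in bits per channel use)."""
    rng = np.random.default_rng(seed)
    snr = 10.0 ** (snr_db / 10.0)
    gains = rng.exponential(1.0, size=(n_trials, n_blocks))  # |h|^2, unit mean
    mutual_info = np.mean(np.log2(1.0 + snr * gains), axis=1)
    return float(np.mean(mutual_info < rate))

if __name__ == "__main__":
    for snr_db in (5, 10, 15, 20):
        print(snr_db, "dB ->", outage_probability(snr_db, rate=0.5, n_blocks=2))
```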

Relevance: 100.00%

Publisher:

Abstract:

We propose new classes of linear codes over integer rings of quadratic extensions of Q, the field of rational numbers. The codes are considered with respect to a Mannheim metric, which is a Manhattan metric modulo a two-dimensional (2-D) grid. In particular, codes over Gaussian integers and Eisenstein-Jacobi integers are extensively studied. Decoding algorithms are proposed for these codes when up to two coordinates of a transmitted code vector are affected by errors of arbitrary Mannheim weight. Moreover, we show that the proposed codes are maximum-distance separable (MDS), with respect to the Hamming distance. The practical interest in such Mannheim-metric codes is their use in coded modulation schemes based on quadrature amplitude modulation (QAM)-type constellations, for which neither the Hamming nor the Lee metric is appropriate.
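A minimal sketch of the metric in question, restricted to the Gaussian-integer case (the choice of the prime 5 = (2 + i)(2 − i) and the rounding-division reduction in Z[i] are ours, used only to illustrate the definition):

```python
# Mannheim weight over the Gaussian integers Z[i], assuming a prime
# p = 1 (mod 4) factored as p = pi * conj(pi).  Plain complex floats are
# used for brevity; a production implementation would use exact integers.
def gaussian_mod(z: complex, pi: complex) -> complex:
    """Reduce the Gaussian integer z modulo pi using rounding division,
    returning a representative of minimal norm."""
    q = z * pi.conjugate() / (pi * pi.conjugate())   # quotient in Q(i)
    q_rounded = complex(round(q.real), round(q.imag))
    return z - q_rounded * pi

def mannheim_weight(z: complex, pi: complex) -> int:
    """Mannheim weight = |a| + |b| of the reduced representative a + bi,
    i.e. a Manhattan weight taken modulo the 2-D grid generated by pi."""
    r = gaussian_mod(z, pi)
    return int(abs(round(r.real)) + abs(round(r.imag)))

if __name__ == "__main__":
    pi = complex(2, 1)                       # 5 = (2 + i)(2 - i)
    for z in [complex(3, 0), complex(0, 4), complex(-1, 2)]:
        print(z, "->", mannheim_weight(z, pi))
```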

Relevance: 100.00%

Publisher:

Abstract:

The main results of this paper are twofold: the first is a matrix-theoretic result. We say that a matrix is superregular if all of its minors that are not trivially zero are nonzero. Given an a × b superregular matrix over a field, with a ≥ b, we show that if all of its rows are nonzero then any linear combination of its columns with nonzero coefficients has at least a − b + 1 nonzero entries. Secondly, we make use of this result to construct convolutional codes that attain the maximum possible distance for some fixed parameters of the code, namely the rate and the Forney indices. These results answer some open questions on distances and constructions of convolutional codes posed in the literature.
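A quick numerical illustration of the column-combination property (the choice of a Vandermonde matrix with distinct positive nodes as a superregular example is ours, relying on the standard fact that all minors of such a matrix are nonzero; it is not a construction from the paper, and for a matrix with no zero entries no minor is "trivially zero", so the check below simply tests that every square minor is nonzero):

```python
from fractions import Fraction
from itertools import combinations

def det(A):
    """Determinant via exact Gaussian elimination over the rationals."""
    A = [row[:] for row in A]
    n, sign, d = len(A), 1, Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if A[r][i] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != i:
            A[i], A[piv] = A[piv], A[i]
            sign = -sign
        d *= A[i][i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
    return sign * d

def minors_all_nonzero(M):
    """Check that every square minor of M is nonzero (exact arithmetic)."""
    rows, cols = len(M), len(M[0])
    for k in range(1, min(rows, cols) + 1):
        for ri in combinations(range(rows), k):
            for ci in combinations(range(cols), k):
                if det([[M[r][c] for c in ci] for r in ri]) == 0:
                    return False
    return True

# 5 x 3 Vandermonde matrix with distinct positive nodes.
a, b = 5, 3
V = [[Fraction(x) ** j for j in range(b)] for x in range(1, a + 1)]
assert minors_all_nonzero(V)

# Any combination of its columns with nonzero coefficients should have
# at least a - b + 1 = 3 nonzero entries.
coeffs = [Fraction(2), Fraction(-1), Fraction(3)]
combo = [sum(c * V[r][j] for j, c in enumerate(coeffs)) for r in range(a)]
print(sum(1 for v in combo if v != 0), ">=", a - b + 1)
```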

Relevance: 100.00%

Publisher:

Abstract:

In this paper we use some classical ideas from linear systems theory to analyse convolutional codes. In particular, we exploit input-state-output representations of periodic linear systems to study periodically time-varying convolutional codes. In this preliminary work we focus on the column distance of these codes and derive explicit necessary and sufficient conditions for an (n, 2, 1) periodically time-varying convolutional code to have Maximum Distance Profile (MDP).
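For context, the (time-invariant) input-state-output representation used in this systems-theoretic approach is the standard one,

$$ x_{t+1} = A x_t + B u_t, \qquad y_t = C x_t + D u_t, \qquad v_t = \begin{pmatrix} y_t \\ u_t \end{pmatrix}, $$

where u_t is the k-vector of information symbols, x_t the state, and v_t the n-vector forming the code sequence; in the periodically time-varying case considered here, the quadruple (A_t, B_t, C_t, D_t) depends periodically on t.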

Relevance: 100.00%

Publisher:

Abstract:

The thesis deals with channel coding theory applied to the upper layers of the protocol stack of a communication link, and is the outcome of four years of research activity. A specific aspect of this activity has been the continuous interaction between the natural curiosity of academic blue-sky research and the system-oriented design arising from collaboration with European industry in the framework of European funded research projects.

In this dissertation, classical channel coding techniques, traditionally applied at the physical layer, find their application at the upper layers, where the encoding units (symbols) are packets of bits rather than single bits; such upper layer coding techniques are therefore usually referred to as packet layer coding. The rationale behind the adoption of packet layer techniques is that physical layer channel coding is a suitable countermeasure against small-scale fading, while it is less efficient against large-scale fading. This is mainly due to the limited time diversity inherent in the need to adopt a physical layer interleaver of reasonable size, so as to avoid increasing modem complexity and the latency of all services. Packet layer techniques, thanks to the longer codeword duration (each codeword is composed of several packets of bits), provide intrinsically longer protection against long fading events. Furthermore, being implemented at the upper layer, packet layer techniques have the indisputable advantages of simpler implementation (very close to a software implementation) and of selective applicability to different services, thus enabling a better match with service requirements (e.g. latency constraints). Packet coding has been widely recognized in recent communication standards as a viable and efficient coding solution: Digital Video Broadcasting standards, such as DVB-H, DVB-SH and DVB-RCS mobile, and 3GPP standards (MBMS) employ packet coding techniques working at layers higher than the physical one.

In this framework, the aim of the research work has been the study of state-of-the-art coding techniques working at the upper layer, the performance evaluation of these techniques in realistic propagation scenarios, and the design of new coding schemes for upper layer applications. After a review of the most important packet layer codes, i.e. Reed-Solomon, LDPC and Fountain codes, the thesis focuses on the performance evaluation of ideal codes (i.e. maximum distance separable codes) working at the upper layer. In particular, we analyze the performance of UL-FEC techniques in Land Mobile Satellite channels. We derive an analytical framework which is a useful tool for system design, allowing the performance of the upper layer decoder to be predicted. We also analyze a system in which upper layer and physical layer codes work together, and we derive the optimal split of redundancy when a frequency non-selective, slowly varying fading channel is taken into account. The whole analysis is supported and validated through computer simulation.

In the last part of the dissertation, we propose LDPC convolutional codes (LDPCCC) as a possible coding scheme for future UL-FEC applications. Since one of the main drawbacks related to the adoption of packet layer codes is the large decoding latency, we introduce a latency-constrained decoder for LDPCCC (called the windowed erasure decoder) and analyze the performance of state-of-the-art LDPCCC when our decoder is adopted. Finally, we propose a design rule which allows performance and latency to be traded off.
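As a small illustration of why ideal (MDS) codes are the natural benchmark at the packet layer, note that an (n, k) MDS packet code recovers the source block if and only if at most n − k packets are erased. The sketch below evaluates the corresponding failure probability under independent packet erasures (this i.i.d. binomial model is a simplifying assumption of ours, not the Land Mobile Satellite channel model used in the thesis):

```python
from math import comb

def mds_failure_probability(n: int, k: int, p: float) -> float:
    """Probability that an (n, k) MDS packet code cannot recover the source
    block when each packet is erased independently with probability p:
    failure <=> more than n - k erasures out of n packets."""
    return sum(comb(n, e) * p**e * (1 - p)**(n - e)
               for e in range(n - k + 1, n + 1))

if __name__ == "__main__":
    n, k = 64, 48          # illustrative parameters, rate 3/4
    for p in (0.05, 0.10, 0.20, 0.30):
        print(f"p = {p:.2f} -> P(decoding failure) = "
              f"{mds_failure_probability(n, k, p):.3e}")
```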

Relevance: 100.00%

Publisher:

Abstract:

In this contribution, we propose a first general definition of rank-metric convolutional codes for multi-shot network coding. To this end, we introduce a suitable notion of distance and establish a generalized Singleton bound for this class of codes.
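For orientation, the distance notion being generalized is the block-code rank distance (the paper's multi-shot/convolutional refinement is not reproduced here):

$$ d_R(X, Y) = \operatorname{rank}(X - Y), \qquad X, Y \in \mathbb{F}_q^{m \times n}, $$

for which the Singleton-type bound of an [n, k] rank-metric code with m ≥ n is d_R ≤ n − k + 1, attained by Gabidulin codes.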

Relevance: 100.00%

Publisher:

Abstract:

In this paper we investigate a novel model of concatenation of a pair of two-dimensional (2D) convolutional codes. We consider finite-support 2D convolutional codes and choose the so-called Fornasini-Marchesini input-state-output (ISO) model to represent these codes. More concretely, we interconnect two ISO representations of 2D convolutional codes in series and derive the ISO representation of the obtained 2D convolutional code. We provide a necessary condition for this representation to be minimal. Moreover, structural properties of modal reachability and modal observability of the resulting 2D convolutional codes are investigated.
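For reference, a Fornasini-Marchesini state-space model of the kind referred to here updates a local state along both directions of the 2D index set (notation ours, following the standard second FM form):

$$ x(i+1, j+1) = A_1 x(i, j+1) + A_2 x(i+1, j) + B_1 u(i, j+1) + B_2 u(i+1, j), $$
$$ y(i, j) = C x(i, j) + D u(i, j), $$

with the 2D code word built from the input-output pairs, as in the 1D ISO setting.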

Relevance: 100.00%

Publisher:

Abstract:

Erasure control coding has been exploited in communication networks with the aim of improving the end-to-end performance of data delivery across the network. To address concerns over the strengths and constraints of erasure coding schemes in this application, we examine the performance limits of two erasure control coding strategies: forward erasure recovery and adaptive erasure recovery. Our investigation shows that the throughput of a network using an (n, k) forward erasure control code is capped by r = k/n when the packet loss rate p ≤ t_e/n, and by k(1−p)/(n−t_e) when p > t_e/n, where t_e is the erasure control capability of the code. It also shows that the lower bound of the residual loss rate of such a network is (np−t_e)/(n−t_e) for t_e/n < p ≤ 1. In particular, if the code used is maximum distance separable, the Shannon capacity of the erasure channel, i.e. 1−p, can be achieved, and the residual loss rate is lower bounded by (p+r−1)/r for 1−r < p ≤ 1. To address the requirements of real-time applications, we also investigate the service completion time of the different schemes. It is revealed that the latency of the forward erasure recovery scheme is fractionally higher than that of a scheme without erasure control coding or retransmission mechanisms (using UDP), but much lower than that of the adaptive erasure scheme when the packet loss rate is high. Comparisons between the two erasure control schemes exhibit their advantages as well as disadvantages in delivering end-to-end services. To show the impact of the derived bounds on the end-to-end performance of a TCP/IP network, a case study is provided to demonstrate how erasure control coding can be used to maximize the performance of practical systems.
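A small sketch that evaluates the bounds quoted above for a concrete (n, k) forward erasure code (the parameters are illustrative; t_e denotes the erasure-correcting capability of the code, which equals n − k only in the MDS case):

```python
def throughput_cap(n: int, k: int, t_e: int, p: float) -> float:
    """Throughput cap of a network using an (n, k) forward erasure code with
    erasure control capability t_e, per the bounds quoted in the abstract."""
    r = k / n
    if p <= t_e / n:
        return r
    return k * (1 - p) / (n - t_e)

def residual_loss_lower_bound(n: int, t_e: int, p: float) -> float:
    """Lower bound on the residual loss rate, valid for t_e/n < p <= 1."""
    return max(0.0, (n * p - t_e) / (n - t_e))

if __name__ == "__main__":
    n, k = 16, 12
    t_e = n - k                       # MDS case: t_e = n - k
    for p in (0.1, 0.25, 0.5):
        print(f"p={p:.2f}  throughput cap={throughput_cap(n, k, t_e, p):.3f}  "
              f"residual loss >= {residual_loss_lower_bound(n, t_e, p):.3f}")
```

In the MDS case the second branch reduces to k(1−p)/k = 1−p and the residual-loss bound to (p+r−1)/r, consistent with the statement in the abstract.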

Relevance: 100.00%

Publisher:

Abstract:

The implementation of local geodetic networks for the georeferencing of rural properties has become a requirement since the publication of the Georeferencing Technical Standard by INCRA. According to this standard, the maximum baseline length for GNSS L1 receivers is 20 km. Besides baseline length, the geometry and the number of geodetic control stations are other factors to be considered in the implementation of geodetic networks. Thus, this research aimed to examine the influence of baseline lengths greater than the regulated limit of 20 km, of the geometry, and of the number of control stations on the quality of local geodetic networks for georeferencing, and also to demonstrate the importance of using specific tests to evaluate the ambiguity resolution and the quality of the adjustment. The results indicated that increasing the number of control stations improved the quality of the network, that the geometry did not influence the quality, and that the baseline length did influence the quality; however, lengths greater than 20 km did not prevent the implementation, with GPS L1 receivers, of a local geodetic network for georeferencing purposes. Moreover, the use of different statistical tests, both for the evaluation of ambiguity resolution and for the adjustment, provided greater clarity in analyzing the results, allowing unsuitable observations to be eliminated.

Relevance: 100.00%

Publisher:

Abstract:

As consumers demand more functionality from their electronic devices and manufacturers supply that demand, electrical power and clock requirements tend to increase; fortunately, reassessing the system architecture can lead to suitable reductions. To maintain low clock rates and therefore reduce electrical power, this paper presents a parallel convolutional coder for the transmit side of many wireless consumer devices. The coder accepts a parallel data input and directly computes punctured convolutional codes without the need for a separate puncturing operation, while the coded bits are available at the output of the coder in parallel. Moreover, since the computation is performed in parallel, the coder can be clocked 7 times slower than a conventional shift-register-based convolutional coder (using the DVB 7/8 rate). The presented coder is directly relevant to the design of modern low-power consumer devices.
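For comparison with the parallel architecture described above, a bit-serial reference model of a punctured convolutional encoder is sketched below. The 171/133 (octal) constraint-length-7 generators are the usual DVB mother code, but the puncturing pattern shown is only an illustrative rate-3/4 pattern, not the standardized DVB pattern, and the paper's coder computes the punctured bits directly and in parallel rather than serially as here.

```python
# Bit-serial reference model of a punctured convolutional encoder.
G1, G2 = 0o171, 0o133      # generator polynomials, constraint length K = 7
K = 7

def conv_encode(bits, puncture=((1, 1), (0, 1), (1, 0))):
    """Encode `bits` with the rate-1/2 mother code, then puncture.
    puncture[i % len(puncture)] says whether to keep (X, Y) at step i;
    the default pattern keeps 4 of every 6 coded bits (rate 3/4)."""
    state = 0
    out = []
    for i, b in enumerate(bits):
        state = ((state << 1) | b) & ((1 << K) - 1)
        x = bin(state & G1).count("1") & 1    # parity of taps selected by G1
        y = bin(state & G2).count("1") & 1
        keep_x, keep_y = puncture[i % len(puncture)]
        if keep_x:
            out.append(x)
        if keep_y:
            out.append(y)
    return out

if __name__ == "__main__":
    data = [1, 0, 1, 1, 0, 0, 1, 0, 1]
    print(conv_encode(data))
```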

Relevance: 100.00%

Publisher:

Abstract:

The Shuttle Radar Topography Mission (SRTM) was flown on the space shuttle Endeavour in February 2000, with the objective of acquiring a digital elevation model of all land between 60 degrees north latitude and 56 degrees south latitude, using interferometric synthetic aperture radar (InSAR) techniques. The SRTM data are distributed at a horizontal resolution of 1 arc-second (~30 m) for areas within the USA and at 3 arc-second (~90 m) resolution for the rest of the world. A resolution of 90 m can be considered suitable for small- or medium-scale analysis, but it is too coarse for more detailed purposes. One alternative is to interpolate the SRTM data at a finer resolution; this will not increase the level of detail of the original digital elevation model (DEM), but it will lead to a surface with coherent angular properties (i.e. slope, aspect) between neighbouring pixels, which is an important characteristic when dealing with terrain analysis. This work intends to show how the proper adjustment of variogram and kriging parameters, namely the nugget effect and the maximum distance within which values are used in the interpolation, can achieve quality results when resampling SRTM data from 3" to 1". We present results for a test area in the western USA, including different adjustment schemes (changes in the nugget effect value and in the interpolation radius) and comparisons with the original 1" model of the area, with the National Elevation Dataset (NED) DEMs, and with other interpolation methods (splines and inverse distance weighting (IDW)). The basic concepts for using kriging to resample terrain data are: (i) working only with the immediate neighbourhood of the predicted point, owing to the high spatial correlation of the topographic surface and the omnidirectional behaviour of the variogram at short distances; (ii) adding a very small random variation to the coordinates of the points prior to interpolation, to avoid punctual artifacts generated by predicted points located exactly at original data points; and (iii) using a small nugget effect value, to avoid smoothing that could obliterate terrain features. Drainage networks derived from the surfaces interpolated by kriging and by splines agree well with streams derived from the 1" NED, with correct identification of watersheds, even though a few differences occur in the positions of some rivers in flat areas. Although the 1" surfaces resampled by kriging and by splines are very similar, we consider the results produced by kriging to be superior, since the spline-interpolated surface still presented some noise and linear artifacts, which were removed by kriging.
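A compact sketch of the resampling recipe described above (ordinary kriging with a spherical variogram, a small nugget, a local search neighbourhood and a tiny random jitter of the data coordinates); the variogram parameters and neighbourhood size are placeholders, not the values fitted in the paper:

```python
import numpy as np

def spherical_gamma(h, nugget=0.1, sill=100.0, rng_a=900.0):
    """Spherical semivariogram; a small nugget avoids over-smoothing."""
    g = np.where(h < rng_a,
                 nugget + (sill - nugget) * (1.5 * h / rng_a - 0.5 * (h / rng_a) ** 3),
                 sill)
    return np.where(h == 0.0, 0.0, g)

def ordinary_kriging(xy, z, target, n_neighbors=12, jitter=1e-3, seed=0):
    """Predict z at `target` from samples (xy, z) using ordinary kriging
    restricted to the nearest n_neighbors points.  Coordinates are jittered
    slightly so that predicted points never coincide exactly with data points."""
    rng = np.random.default_rng(seed)
    xy = xy + rng.uniform(-jitter, jitter, size=xy.shape)
    d_to_target = np.linalg.norm(xy - target, axis=1)
    idx = np.argsort(d_to_target)[:n_neighbors]
    p, pz = xy[idx], z[idx]
    n = len(idx)
    # Ordinary kriging system: [Gamma 1; 1^T 0] [w; mu] = [gamma_0; 1]
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical_gamma(np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1))
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = spherical_gamma(d_to_target[idx])
    w = np.linalg.solve(A, b)[:n]
    return float(w @ pz)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pts = rng.uniform(0, 3000, size=(200, 2))     # metres, ~90 m sample spacing
    elev = 500 + 0.05 * pts[:, 0] + 10 * np.sin(pts[:, 1] / 300)
    print(ordinary_kriging(pts, elev, target=np.array([1500.0, 1500.0])))
```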

Relevance: 100.00%

Publisher:

Abstract:

Spherical codes in even dimensions n = 2m generated by a commutative group of orthogonal matrices can be determined by a quotient of m-dimensional lattices when the sublattice has an orthogonal basis. We discuss here the existence of orthogonal sublattices of the lattices A2, D3, D4 and E8, which have the best packing density in their dimensions, in order to generate families of commutative group codes approaching the bound presented in Siqueira and Costa (2008) [14].
