835 results for Error correction codes


Relevance:

30.00%

Publisher:

Abstract:

This letter proposes a simple tuning algorithm for digital deadbeat control based on error correlation. By injecting a square-wave reference input and calculating the correlation of the control error, a gain correction for deadbeat control is obtained. The proposed solution is simple, requires a short tuning time, and is suitable for different DC-DC converter topologies. Simulation and experimental results on synchronous buck converters confirm the properties of the proposed tuning algorithm.
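
As a rough illustration of the idea, the sketch below (Python; the function name and the multiplicative update rule are assumptions, not the letter's exact algorithm) correlates the control error with the injected square-wave reference and nudges the controller gain accordingly:

    import numpy as np

    def tune_deadbeat_gain(error, reference, gain, step=0.05):
        # Correlate the control error with the injected square wave; the
        # sign of the correlation tells which way to correct the gain.
        corr = np.dot(error, reference) / len(error)
        return gain * (1.0 - step * np.sign(corr))

    # Usage: inject a square-wave reference, record the error, update.
    ref = np.sign(np.sin(2 * np.pi * np.arange(200) / 100))  # square wave
    err = 0.1 * ref + 0.01 * np.random.randn(200)            # mock error
    gain = tune_deadbeat_gain(err, ref, gain=1.0)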

Relevance:

30.00%

Publisher:

Abstract:

For an n_t transmit, n_r receive antenna system (n_t x n_r system), a full-rate space-time block code (STBC) transmits at least n_min = min(n_t, n_r) complex symbols per channel use. The well-known Golden code is an example of a full-rate, full-diversity STBC for two transmit antennas. Its ML-decoding complexity is of the order of M^2.5 for square M-QAM. The Silver code for two transmit antennas has all the desirable properties of the Golden code except its coding gain, but offers lower ML-decoding complexity of the order of M^2. Importantly, the slight loss in coding gain is negligible compared with the advantage it offers in lowering the ML-decoding complexity. For larger numbers of transmit antennas, the best known codes are the Perfect codes, which are full-rate, full-diversity, information-lossless codes (for n_r >= n_t) but have a high ML-decoding complexity of the order of M^(n_t n_min) (for n_r < n_t, the punctured Perfect codes are considered). In this paper, a scheme to obtain full-rate STBCs for 2^a transmit antennas and any n_r with reduced ML-decoding complexity of the order of M^(n_t(n_min - 3/4) - 0.5) is presented. The codes constructed are also information lossless for n_r >= n_t, like the Perfect codes, and allow higher mutual information than the comparable punctured Perfect codes for n_r < n_t. These codes are referred to as the generalized Silver codes, since they enjoy the same desirable properties as the comparable Perfect codes (except possibly the coding gain) with lower ML-decoding complexity, analogous to the Silver code and the Golden code for two transmit antennas. Simulation results of the symbol error rates for four and eight transmit antennas show that the generalized Silver codes match the punctured Perfect codes in error performance while offering lower ML-decoding complexity.
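
For concreteness, the Golden code mentioned above has a standard closed-form construction; the sketch below reproduces it as commonly stated in the literature (verify against the original papers before relying on it):

    import numpy as np

    theta = (1 + np.sqrt(5)) / 2        # golden ratio
    theta_bar = (1 - np.sqrt(5)) / 2    # its conjugate
    alpha = 1 + 1j * (1 - theta)
    alpha_bar = 1 + 1j * (1 - theta_bar)

    def golden_codeword(s1, s2, s3, s4):
        # Map four QAM symbols to one 2x2 Golden code codeword.
        return (1 / np.sqrt(5)) * np.array(
            [[alpha * (s1 + s2 * theta), alpha * (s3 + s4 * theta)],
             [1j * alpha_bar * (s3 + s4 * theta_bar),
              alpha_bar * (s1 + s2 * theta_bar)]])

    X = golden_codeword(1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j)  # 4-QAM symbols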

Relevance:

30.00%

Publisher:

Abstract:

Erasure codes are a more efficient means of storing data across a network than data replication, as they tend to reduce the amount of data stored in the network and offer increased resilience to node failures. These codes perform poorly, though, when a failed node must be repaired, as they typically require the entire file to be downloaded to repair a single node. A new class of erasure codes, termed regenerating codes, was recently introduced that does much better in this respect. However, given the variety of efficient erasure codes available in the literature, there is considerable interest in coding schemes that enable traditional erasure codes to be used while retaining the feature that only a fraction of the data need be downloaded for node repair. In this paper, we present a simple yet powerful framework that does precisely this. Under this framework, the nodes are partitioned into two types and encoded using two codes in a manner that reduces the problem of node repair to that of erasure decoding of the constituent codes. Depending upon the choice of the two codes, the framework can be used to obtain one or more of the following advantages: simultaneous minimization of storage space and repair bandwidth, low complexity of operation, fewer disk reads at helper nodes during repair, and error detection and correction.
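
The repair problem the paper addresses shows up in even the simplest erasure code. In the toy sketch below (illustrative only, not the paper's framework), k data blocks are protected by one XOR parity block, and repairing any single lost block requires reading all k surviving blocks, i.e., downloading the whole file:

    import numpy as np

    k = 4
    data = [np.random.randint(0, 256, 8, dtype=np.uint8) for _ in range(k)]
    parity = np.bitwise_xor.reduce(data)          # single parity node
    nodes = data + [parity]

    lost = 2                                      # node 2 fails
    survivors = [b for i, b in enumerate(nodes) if i != lost]
    repaired = np.bitwise_xor.reduce(survivors)   # must read all k survivors
    assert np.array_equal(repaired, nodes[lost])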

Relevance:

30.00%

Publisher:

Abstract:

We propose a novel method of constructing Dispersion Matrices (DMs) for Coherent Space-Time Shift Keying (CSTSK) relying on arbitrary PSK signal sets by exploiting codes from division algebras. We show that classic codes from Cyclic Division Algebras (CDAs) may be interpreted as DMs conceived for PSK signal sets. Hence various benefits of CDA codes, such as their ability to achieve full diversity, are inherited by CSTSK. We demonstrate that the proposed CDA-based DMs are capable of achieving a lower symbol error ratio than the existing DMs generated using the capacity as the optimization objective function, for both perfect and imperfect channel estimation.
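
As background, CSTSK spreads one PSK symbol over space and time by activating one of Q dispersion matrices per block; the sketch below shows this generic mapping with placeholder random matrices (the CDA-based DM designs of the paper are not reproduced here):

    import numpy as np

    M, T, Q, L = 2, 2, 4, 4      # antennas, time slots, DMs, PSK size
    rng = np.random.default_rng(0)
    DMs = [rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))
           for _ in range(Q)]    # placeholder dispersion matrices
    psk = np.exp(2j * np.pi * np.arange(L) / L)   # L-PSK signal set

    def cstsk_transmit(q, l):
        # Activate dispersion matrix q and spread PSK symbol l over M x T.
        return DMs[q] * psk[l]

    S = cstsk_transmit(q=1, l=3)  # carries log2(Q) + log2(L) = 4 bits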

Relevance:

30.00%

Publisher:

Abstract:

In this paper, an optical code-division multiple-access (O-CDMA) packet network is considered, which offers inherent security in the access network. The application of O-CDMA to multimedia transmission (voice, data, and video) is investigated. The simultaneous transmission of various services is achieved by assigning each user multiple unique code signatures; by applying a parallel mapping technique, we thus achieve multi-rate services. A random-access protocol in which all distinct codes are used for packet transmission is proposed. Two code families are analyzed: Optical Orthogonal Codes (OOC), or 1D codes, and Wavelength/Time Single-Pulse-per-Row (W/T SPR), or 2D codes. These 1D and 2D codes with varied weight are used to differentiate the quality of service (QoS). The theoretical bit error probability corresponding to the quality of each service is established for 1D and 2D codes in the receiver-noiseless case, and the two are compared. The results show that the QoS of multimedia transmission is better with 2D codes than with 1D codes.
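
A standard closed-form expression from the OOC literature for the noiseless, chip-synchronous bit error probability (an assumption here; the paper's exact expressions may differ) can be evaluated directly:

    from math import comb

    def ooc_ber(F, w, K, threshold=None):
        # F: code length, w: code weight, K: simultaneous users.
        th = w if threshold is None else threshold  # hard-limit threshold
        q = w * w / (2.0 * F)        # hit probability per interferer
        return 0.5 * sum(comb(K - 1, i) * q ** i * (1 - q) ** (K - 1 - i)
                         for i in range(th, K))

    print(ooc_ber(F=341, w=5, K=10))  # example: length-341, weight-5 OOC, 10 users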

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we revisit the combinatorial error model of Mazumdar et al., which models errors in high-density magnetic recording caused by lack of knowledge of grain boundaries in the recording medium. We present new upper bounds on the cardinality/rate of binary block codes that correct errors within this model. All our bounds, except for one, are obtained using combinatorial arguments based on hypergraph fractional coverings; the exception is a bound derived via an information-theoretic argument. Our bounds significantly improve upon existing bounds in the prior literature.

Relevance:

30.00%

Publisher:

Abstract:

We report weaknesses in two algebraic constructions of low-density parity-check codes based on expander graphs. The Margulis construction gives a code with near-codewords, which cause problems for the sum-product decoder; the Ramanujan-Margulis construction gives a code with low-weight codewords, which produce an error floor. © 2004 Elsevier B.V.
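
The objects involved are easy to probe numerically: given a parity-check matrix H, a low-weight vector with zero syndrome is a low-weight codeword, while a low-weight vector with a small nonzero syndrome is a near-codeword. A minimal sketch with a toy H (the Margulis and Ramanujan-Margulis matrices themselves are not constructed here):

    import numpy as np

    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]], dtype=np.uint8)  # toy parity-check

    def syndrome_weight(H, x):
        # Number of unsatisfied checks; zero means x is a codeword.
        return int(np.count_nonzero(H @ x % 2))

    x = np.array([1, 1, 1, 0, 0, 0], dtype=np.uint8)
    print(syndrome_weight(H, x), int(x.sum()))  # (0, 3): a weight-3 codeword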

Relevance:

30.00%

Publisher:

Abstract:

We report weaknesses in two algebraic constructions of low-density parity-check codes based on expander graphs. The Margulis construction gives a code with near-codewords, which cause problems for the sum-product decoder; the Ramanujan-Margulis construction gives a code with low-weight codewords, which produce an error floor. © 2003 Published by Elsevier Science B.V.

Relevance:

30.00%

Publisher:

Abstract:

Building on Item Response Theory, we introduce students' optimal behavior in multiple-choice tests. Our simulations indicate that the optimal penalty is relatively high: although correction for guessing discriminates against risk-averse subjects, this effect is small compared with the measurement error that the penalty prevents. This result obtains whether knowledge is binary or partial, under different normalizations of the score, when risk aversion is related to knowledge, and when there is a pass-fail break point. We also find that the mean degree of difficulty should be close to the mean level of knowledge and that the variance of difficulty should be high.
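
For reference, the classical correction-for-guessing rule (the neutral benchmark; the abstract argues the optimal penalty is higher) scores a test as S = R - W/(m - 1), which gives a pure guesser an expected score of zero. A minimal sketch:

    def formula_score(right, wrong, options):
        # Classical formula scoring: random guessing on an m-option item
        # has expected score 1/m - (1 - 1/m)/(m - 1) = 0.
        return right - wrong / (options - 1)

    print(formula_score(right=60, wrong=20, options=4))  # 60 - 20/3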

Relevance:

30.00%

Publisher:

Abstract:

This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data. Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data is processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded. Filtering is intended to be kept to a minimum and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
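
The drift problem described above is easy to reproduce: integrating an accelerogram with even a tiny baseline offset produces a displacement error that grows quadratically in time. A minimal sketch with a synthetic record (variable names are illustrative):

    import numpy as np

    dt = 0.01                                       # sample interval, s
    t = np.arange(0, 20, dt)
    acc = np.sin(2 * np.pi * t) * np.exp(-0.2 * t)  # synthetic accelerogram
    acc = acc + 0.001                               # small baseline offset (noise)

    def integrate(x, dt):
        # Cumulative trapezoidal integration with zero initial condition.
        return np.concatenate(([0.0], np.cumsum((x[1:] + x[:-1]) / 2) * dt))

    vel = integrate(acc, dt)
    disp = integrate(vel, dt)       # drift ~ 0.5 * 0.001 * t**2 at large t
    print(disp[-1])                 # dominated by the baseline error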

Relevance:

30.00%

Publisher:

Abstract:

Network information theory and channels with memory are two important but difficult frontiers of information theory. In this two-part dissertation, we study these two areas, each comprising one part. In the first area we study the so-called entropy vectors via finite group theory, and the network codes constructed from finite groups. In particular, we identify the smallest finite group that violates the Ingleton inequality, an inequality respected by all linear network codes but not satisfied by all entropy vectors. Based on the analysis of this group we generalize it to several families of Ingleton-violating groups, which may be used to design good network codes. In that regard, we study the network codes constructed from finite groups, and in particular show that linear network codes are embedded in the group network codes constructed from these Ingleton-violating families. Furthermore, such codes are strictly more powerful than linear network codes, as they are able to violate the Ingleton inequality while linear network codes cannot. In the second area, we study the impact of memory on channel capacity through a novel communication system: the energy harvesting channel. Unlike traditional communication systems, the transmitter of an energy harvesting channel is powered by an exogenous energy harvesting device and a finite-sized battery. As a consequence, at each channel use the system can only transmit a symbol whose energy consumption is no more than the energy currently available. This new type of power supply introduces an unprecedented input constraint for the channel, which is random, instantaneous, and has memory. Furthermore, the energy harvesting process is naturally observed causally at the transmitter, but no such information is provided to the receiver. Both of these features pose great challenges for the analysis of the channel capacity. In this work we use techniques from channels with side information and finite-state channels to obtain lower and upper bounds on the capacity of the energy harvesting channel. In particular, we study the stationarity and ergodicity conditions of a surrogate channel to compute and optimize the achievable rates for the original channel. In addition, for practical code design for the system we study the pairwise error probabilities of the input sequences.
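
For reference, the Ingleton inequality mentioned above has a standard information-theoretic form for four jointly distributed random variables (LaTeX notation):

    I(X_1;X_2) \le I(X_1;X_2 \mid X_3) + I(X_1;X_2 \mid X_4) + I(X_3;X_4)

All linear network codes respect this inequality, while general entropy vectors, in particular those arising from the Ingleton-violating groups identified here, can violate it.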

Relevance:

30.00%

Publisher:

Abstract:

Body length measurement is an important part of growth, condition, and mortality analyses of larval and juvenile fish. If the measurements are not accurate (i.e., do not reflect real fish length), results of subsequent analyses may be affected considerably (McGurk, 1985; Fey, 1999; Porter et al., 2001). The primary cause of error in fish length measurement is shrinkage related to collection and preservation (Theilacker, 1980; Hay, 1981; Butler, 1992; Fey, 1999). The magnitude of shrinkage depends on many factors, namely the duration and speed of the collection tow, abundance of other planktonic organisms in the sample (Theilacker, 1980; Hay, 1981; Jennings, 1991), the type and strength of the preservative (Hay, 1982), and the species of fish (Jennings, 1991; Fey, 1999). Further, fish size affects shrinkage (Fowler and Smith, 1983; Fey, 1999, 2001), indicating that live length should be modeled as a function of preserved length (Pepin et al., 1998; Fey, 1999).
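
As a minimal sketch of the correction step suggested by the last sentence, live length can be back-estimated from preserved length with a simple linear fit (illustrative numbers; real shrinkage models are species- and size-specific, per the citations above):

    import numpy as np

    preserved = np.array([5.1, 6.0, 7.2, 8.4, 9.1])  # preserved length, mm
    live = np.array([5.5, 6.5, 7.8, 9.0, 9.8])       # live length, mm

    slope, intercept = np.polyfit(preserved, live, 1)

    def live_length(pres_mm):
        # Back-correct a preserved length to an estimated live length.
        return slope * pres_mm + intercept

    print(live_length(7.0))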

Relevance:

30.00%

Publisher:

Abstract:

New space-time trellis codes with four- and eight-level phase-shift keying (PSK) and 16-point quadrature amplitude modulation (QAM) for two transmit antennas in slow-fading channels are presented in this paper. Unlike most codes reported in the literature, the proposed codes are specifically designed to minimize the frame error probability from a union-bound perspective. The performance of the proposed codes with various memory orders and numbers of receive antennas is evaluated by simulation. It is shown that the proposed codes outperform previously known codes in all studied cases.
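
The union-bound design criterion referred to above upper-bounds the frame error probability by a sum of pairwise error probabilities over competing codewords (standard form, LaTeX notation):

    P_f \le \frac{1}{|\mathcal{C}|} \sum_{\mathbf{c} \in \mathcal{C}} \sum_{\mathbf{e} \ne \mathbf{c}} P(\mathbf{c} \to \mathbf{e})

where P(\mathbf{c} \to \mathbf{e}) is the pairwise probability of deciding codeword \mathbf{e} when \mathbf{c} was transmitted.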

Relevance:

30.00%

Publisher:

Abstract:

An alternating combination of a genetic algorithm and a neural network (AGANN) is presented to correct the systematic error of density functional theory (DFT) calculations. It treats the DFT as a black box and models the error through external statistical information. As a demonstration, the AGANN method has been applied to correct the lattice energies from DFT calculations for 72 metal halides and hydrides. Through the AGANN correction, the mean absolute value of the relative errors of the calculated lattice energies with respect to the experimental values decreases from 4.93% to 1.20% on the testing set. For comparison, a neural network alone reduces the mean value to 2.56%, and the common combination of a genetic algorithm and a neural network brings it to 2.15%. The multiple linear regression method has almost no correction effect here.
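
The black-box idea is easy to prototype: learn the DFT error from external descriptors and subtract it. The sketch below uses a plain neural network as a stand-in, with mock data (the paper's alternation with a genetic algorithm is omitted):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.standard_normal((72, 5))            # mock descriptors, 72 compounds
    dft = 800 + 10 * rng.standard_normal(72)    # mock DFT lattice energies
    expt = dft + X @ np.array([1.0, -0.5, 0.3, 0.0, 0.2])  # mock experiment

    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                         random_state=0)
    model.fit(X, expt - dft)                    # learn the systematic error
    corrected = dft + model.predict(X)          # corrected lattice energies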