835 results for Error correction codes
Abstract:
In recognition-based user interfaces, users’ satisfaction is determined not only by recognition accuracy but also by the effort required to correct recognition errors. In this paper, we introduce a crossmodal error correction technique that allows users to correct Chinese handwriting recognition errors by speech. The focus of the paper is a multimodal fusion algorithm supporting this crossmodal error correction. By fusing handwriting and speech recognition, the algorithm can correct errors in both character extraction and character recognition of handwriting. The experimental results indicate that the algorithm is effective and efficient. Moreover, the evaluation also shows that the correction technique helps users correct handwriting recognition errors more efficiently than two other error correction techniques.
Abstract:
Fast forward error correction codes are becoming an important component in bulk content delivery. They fit naturally into multicast scenarios as a way to deal with losses and are now seeing use in peer-to-peer networks as a basis for distributing load. In particular, new irregular sparse parity-check codes have been developed with provable average linear-time performance, a significant improvement over previous codes. In this paper, we present a new heuristic for generating codes with similar performance, based on observing a server with an oracle for client state. This heuristic is easy to implement and provides further intuition into the need for an irregular, heavy-tailed distribution.
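The abstract does not spell out the heuristic itself; as a point of reference, the following is a minimal sketch of fountain-style encoding using the ideal soliton degree distribution, the classical heavy-tailed distribution that irregular sparse parity-check codes refine. Block contents as integers and the function names are illustrative assumptions, not the paper's construction.

```python
import random

def ideal_soliton(k):
    """Ideal soliton degree distribution over degrees 1..k.
    Returns p where p[d] is the probability of degree d (p sums to 1)."""
    return [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

def encode_symbol(blocks, rng):
    """Produce one coded symbol: XOR of `degree` randomly chosen source
    blocks, with `degree` drawn from the heavy-tailed distribution."""
    k = len(blocks)
    weights = ideal_soliton(k)
    degree = rng.choices(range(k + 1), weights=weights)[0]
    chosen = rng.sample(range(k), degree)
    symbol = 0
    for i in chosen:
        symbol ^= blocks[i]
    return chosen, symbol
```

The heavy tail (probability ~1/d² for large degrees) guarantees occasional high-degree symbols that cover rarely selected source blocks, which is the intuition the paper's oracle-based observations reinforce.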
Abstract:
Error-correcting codes are combinatorial objects designed to enable reliable transmission of digital data over noisy channels. They are used ubiquitously in communication, data storage, etc. Error correction allows reconstruction of the original data from the received word. Classical decoding algorithms are constrained to output just one codeword; however, in the late 1950s researchers proposed a relaxed error correction model for potentially large error rates, known as list decoding. The research presented in this thesis focuses on reducing the computational effort and enhancing the efficiency of decoding algorithms for several codes, from both algorithmic and architectural standpoints. The codes under consideration are linear block codes closely related to Reed-Solomon (RS) codes. A high-speed, low-complexity algorithm and architecture are presented for encoding and decoding RS codes based on evaluation. Implementation results show that the hardware resources and the total execution time are significantly reduced compared to the classical decoder. The evaluation-based encoding and decoding schemes are modified and extended for shortened RS codes, and a software implementation shows a substantial reduction in memory footprint at the expense of latency. Hermitian codes can be seen as concatenated RS codes and are much longer than RS codes over the same alphabet. A fast, novel, and efficient VLSI architecture for Hermitian codes is proposed based on interpolation decoding. The proposed architecture is proven to perform better than Kötter's decoder for high-rate codes. The thesis also explores a method of constructing optimal codes by computing the subfield subcodes of Generalized Toric (GT) codes, a natural extension of RS codes to several dimensions. The polynomial generators, or evaluation polynomials, for subfield subcodes of GT codes are identified, from which the dimension and a bound on the minimum distance are computed.
The algebraic structure of the polynomials evaluating to the subfield is used to simplify the list decoding algorithm for BCH codes. Finally, an efficient and novel approach is proposed for exploiting powerful codes that have complex decoding but a simple encoding scheme (comparable to RS codes) in multihop wireless sensor network (WSN) applications.
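Evaluation-based RS encoding, as described in the abstract above, views the k message symbols as coefficients of a degree-&lt;k polynomial and transmits its evaluations at n distinct field points; any k correct values determine the message. A minimal sketch over a small prime field GF(p) (the thesis targets hardware over binary extension fields, so the field and evaluation points here are illustrative assumptions):

```python
def rs_encode_eval(msg, n, p=929):
    """Evaluation-based RS encoding over GF(p): evaluate the message
    polynomial msg[0] + msg[1]*x + ... at the points x = 0..n-1."""
    assert len(msg) <= n <= p

    def poly_eval(coeffs, x):
        acc = 0
        for c in reversed(coeffs):  # Horner's rule
            acc = (acc * x + c) % p
        return acc

    return [poly_eval(msg, x) for x in range(n)]
```

For example, a one-symbol message is simply repeated n times, since its polynomial is constant; longer messages yield codewords from which erasures can be recovered by interpolation.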
Abstract:
We obtain an upper bound on the time available for quantum computation for a given quantum computer and decohering environment with quantum error correction implemented. First, we derive an explicit quantum evolution operator for the logical qubits and show that it has the same form as that for the physical qubits but with a reduced coupling strength to the environment. Using this evolution operator, we find the trace distance between the real and ideal states of the logical qubits in two cases. For a super-Ohmic bath, the trace distance saturates, while for Ohmic or sub-Ohmic baths, there is a finite time before the trace distance exceeds a value set by the user. © 2010 The American Physical Society.
Abstract:
Single-molecule sequencing instruments can generate multikilobase sequences with the potential to greatly improve genome and transcriptome assembly. However, the error rates of single-molecule reads are high, which has limited their use thus far to resequencing bacteria. To address this limitation, we introduce a correction algorithm and assembly strategy that uses short, high-fidelity sequences to correct the error in single-molecule sequences. We demonstrate the utility of this approach on reads generated by a PacBio RS instrument from phage, prokaryotic and eukaryotic whole genomes, including the previously unsequenced genome of the parrot Melopsittacus undulatus, as well as for RNA-Seq reads of the corn (Zea mays) transcriptome. Our long-read correction achieves >99.9% base-call accuracy, leading to substantially better assemblies than current sequencing strategies: in the best example, the median contig size was quintupled relative to high-coverage, second-generation assemblies. Greater gains are predicted if read lengths continue to increase, including the prospect of single-contig bacterial chromosome assembly.
Abstract:
We investigated the role of visual feedback of task performance in visuomotor adaptation. Participants produced novel two-degrees-of-freedom movements (elbow flexion-extension, forearm pronation-supination) to move a cursor towards visual targets. Following trials with no rotation, participants were exposed to a 60° visuomotor rotation, before returning to the non-rotated condition. A colour cue on each trial permitted identification of the rotated/non-rotated contexts. Participants could not see their arm but received continuous and concurrent visual feedback (CF) of a cursor representing limb position or post-trial visual feedback (PF) representing the movement trajectory. Separate groups of participants who received CF were instructed that online modifications of their movements either were, or were not, permissible as a means of improving performance. Feedforward-mediated performance improvements occurred for both CF and PF groups in the rotated environment. Furthermore, for CF participants this adaptation occurred regardless of whether feedback modifications of motor commands were permissible. Upon re-exposure to the non-rotated environment participants in the CF, but not PF, groups exhibited post-training aftereffects, manifested as greater angular deviations from a straight initial trajectory, with respect to the pre-rotation trials. Accordingly, the nature of the performance improvements that occurred was dependent upon the timing of the visual feedback of task performance. Continuous visual feedback of task performance during task execution appears critical in realising automatic visuomotor adaptation through a recalibration of the visuomotor mapping that transforms visual inputs into appropriate motor commands.
Abstract:
We analyze the effect of a quantum error correcting code on the entanglement of encoded logical qubits in the presence of a dephasing interaction with a correlated environment. Such a correlated reservoir introduces entanglement between physical qubits. We show that for short times the quantum error correction interprets such entanglement as errors and suppresses it. However, for longer times, although quantum error correction is no longer able to correct errors, it enhances the rate of entanglement production due to the interaction with the environment.
Abstract:
The authors studied pattern stability and error correction during in-phase and antiphase 4-ball fountain juggling. To obtain ball trajectories, they made and digitized high-speed film recordings of 4 highly skilled participants juggling at 3 different heights (and thus different frequencies). From those ball trajectories, the authors determined and analyzed critical events (i.e., toss, zenith, catch, and toss onset) in terms of variability of point estimates of relative phase and temporal correlations. Contrary to common findings on basic instances of rhythmic interlimb coordination, in-phase and antiphase patterns were equally variable (i.e., stable). Consistent with previous findings, however, pattern stability decreased with increasing frequency. In contrast to previous results for 3-ball cascade juggling, negative lag-one correlations for catch-catch intervals were absent, but the authors obtained evidence for error corrections between catches and toss onsets. That finding may have reflected participants' high skill level, which yielded smaller errors that allowed for corrections later in the hand cycle.
Abstract:
Inherently error-resilient applications in areas such as signal processing, machine learning and data analytics provide opportunities for relaxing reliability requirements, and thereby reducing the overhead incurred by conventional error correction schemes. In this paper, we exploit the tolerable imprecision of such applications by designing an energy-efficient fault-mitigation scheme for unreliable data memories to meet a target yield. The proposed approach uses a bit-shuffling mechanism to isolate faults into bit locations with lower significance. This skews the bit-error distribution towards the low-order bits, substantially limiting the output error magnitude. By controlling the granularity of the shuffling, the proposed technique enables trading off quality for power, area, and timing overhead. Compared to error-correction codes, this can reduce the overhead by as much as 83% in read power, 77% in read access time, and 89% in area, when applied to various data mining applications in 28 nm process technology.
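The bit-shuffling idea above can be sketched as a permutation that steers a word's high-order data bits into reliable storage cells, so that faulty cells only ever perturb low-order bits. This is an illustrative sketch under an assumed stuck-at-0 fault model; the helper names `shuffle_map`, `store`, and `load` are not from the paper.

```python
def shuffle_map(word_bits, faulty):
    """Build a permutation perm[data_bit] = storage_cell that assigns
    the most-significant data bits to reliable cells, leaving faulty
    cells to hold only the least-significant bits."""
    reliable = [i for i in range(word_bits) if i not in faulty]
    order = reliable + sorted(faulty)  # reliable cells filled first
    perm = [0] * word_bits
    for data_bit, cell in zip(range(word_bits - 1, -1, -1), order):
        perm[data_bit] = cell
    return perm

def store(value, perm, stuck_at_zero):
    """Write a word through the permutation; stuck-at-0 cells drop their bit."""
    cells = 0
    for data_bit, cell in enumerate(perm):
        bit = (value >> data_bit) & 1
        if cell in stuck_at_zero:
            bit = 0  # fault model: the cell cannot hold a 1
        cells |= bit << cell
    return cells

def load(cells, perm):
    """Read the stored cells back through the inverse of the permutation."""
    value = 0
    for data_bit, cell in enumerate(perm):
        value |= ((cells >> cell) & 1) << data_bit
    return value
```

With one stuck cell in an 8-bit word, the shuffled layout confines the damage to the data LSB, so the worst-case output error magnitude is 1 instead of up to 128, which is the error-skewing effect the scheme relies on.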
Abstract:
Self-dual doubly even linear binary error-correcting codes, often referred to as Type II codes, are closely related to many combinatorial structures such as 5-designs. Extremal codes are codes that have the largest possible minimum distance for a given length and dimension. The existence of an extremal (72,36,16) Type II code is still open. Previous results show that the automorphism group of a putative code C with the aforementioned properties has order 5 or an order dividing 24. In this work, we present a method and the results of an exhaustive search showing that such a code C cannot admit Z6 as an automorphism group. In addition, we present a so-far-unpublished construction of the extended Golay code due to P. Becker. We generalize the notion and provide an example of another Type II code that can be obtained in this fashion. Consequently, we relate Becker's construction to the construction of binary Type II codes from codes over GF(2^r) via the Gray map.