871 results for error correction
Abstract:
To meet the growing demands of high-data-rate applications, suitable asynchronous schemes such as Fiber-Optic Code Division Multiple Access (FO-CDMA) are required in the last mile. The FO-CDMA scheme offers potential benefits but at the same time faces many challenges. Wavelength/Time (W/T) 2-D codes for use in FO-CDMA have been proposed to reduce the 'time'-like property and can be classified mainly into two types: 1) hybrid codes and 2) matrix codes. W/T single-pulse-per-row (SPR) codes are energy efficient because this family of codes has autocorrelation sidelobes of '0', a property unique to this family, while the important feature of W/T multiple-pulses-per-row (MPR) codes is that the aspect ratio can be varied by trading off wavelength and temporal lengths. These W/T codes have improved cardinality and spectral efficiency over other W/T codes and at the same time have the lowest cross-correlation values. In this paper, we analyze the performance of FO-CDMA networks using W/T SPR codes and W/T MPR codes, with and without forward error correction (FEC) coding, and show that with FEC there is a dual advantage of error correction and reduced spreading-sequence length.
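As a rough illustration of the zero-autocorrelation-sidelobe property claimed for SPR codes, the following Python sketch computes the periodic autocorrelation of a small wavelength/time code matrix over cyclic time shifts; the example matrix is hypothetical and not taken from the paper.

import numpy as np

# Hypothetical W/T single-pulse-per-row code: rows = wavelengths, columns = time chips,
# exactly one pulse per row, each pulse in a distinct time slot.
code = np.array([
    [1, 0, 0, 0],
    [0, 0, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 1],
])

def periodic_autocorrelation(c):
    # Correlate the code with cyclic shifts of itself along the time axis.
    time_len = c.shape[1]
    return [int(np.sum(c * np.roll(c, shift, axis=1))) for shift in range(time_len)]

print(periodic_autocorrelation(code))  # [4, 0, 0, 0]: peak at shift 0, all sidelobes 0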
Abstract:
A single-source network is said to be memory-free if all of the internal nodes (those other than the source and the sinks) do not employ memory but merely send linear combinations of the incoming symbols (received on their incoming edges) on their outgoing edges. Memory-free networks with delay that use network coding are forced to perform inter-generation network coding, as a result of which some or all sinks require a large amount of memory for decoding. In this work, we address this problem by utilizing memory elements at the internal nodes of the network as well, which reduces the number of memory elements needed at the sinks. We give an algorithm which employs memory at all the nodes of the network to achieve single-generation network coding. For fixed latency, our algorithm reduces the total number of memory elements used in the network to achieve single-generation network coding. We also discuss the advantages of employing single-generation network coding together with convolutional network-error correction codes (CNECCs) for networks with unit delay, and illustrate the performance gain of CNECCs obtained by using memory at the intermediate nodes through simulations on an example network under a probabilistic network error model.
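As a toy illustration (not the paper's algorithm) of why memory at an intermediate node helps, the sketch below models a node with two incoming unit-delay edges whose symbols arrive one generation apart; a single buffered symbol lets the node XOR symbols of the same generation, i.e. perform single-generation coding over GF(2).

# Toy model: the generation-g symbol from edge A arrives at time g,
# while the matching symbol from edge B arrives at time g + 1.
a_stream = [1, 0, 1, 1]           # a_0, a_1, a_2, a_3
b_stream = [None, 1, 1, 0, 1]     # b_0 arrives one step after a_0

buffer_a = None                   # one memory element at the intermediate node
out = []
for t in range(len(b_stream)):
    a_t = a_stream[t] if t < len(a_stream) else None
    b_t = b_stream[t]
    if buffer_a is not None and b_t is not None:
        out.append(buffer_a ^ b_t)   # combine symbols of the same generation only
    buffer_a = a_t                   # remember the newest A symbol for the next step
print(out)                           # [0, 1, 1, 0] = a_g XOR b_g for g = 0..3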
Abstract:
The standard quantum search algorithm lacks a feature, enjoyed by many classical algorithms, of having a fixed-point, i.e. a monotonic convergence towards the solution. Here we present two variations of the quantum search algorithm, which get around this limitation. The first replaces selective inversions in the algorithm by selective phase shifts of $\frac{\pi}{3}$. The second controls the selective inversion operations using two ancilla qubits, and irreversible measurement operations on the ancilla qubits drive the starting state towards the target state. Using $q$ oracle queries, these variations reduce the probability of finding a non-target state from $\epsilon$ to $\epsilon^{2q+1}$, which is asymptotically optimal. Similar ideas can lead to robust quantum algorithms, and provide conceptually new schemes for error correction.
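The stated exponent can be read off from the recursive structure of the $\frac{\pi}{3}$ construction: under the standard recursion for fixed-point search, each level cubes the failure probability while the oracle-query count satisfies $q_{m+1} = 3 q_m + 1$ with $q_0 = 0$, so that
\[ q_m = \frac{3^m - 1}{2}, \qquad \epsilon_m = \epsilon^{3^m} = \epsilon^{2 q_m + 1}. \]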
Abstract:
Image and video filtering is a key image-processing task in computer vision, especially in noisy environments. In most cases the noise source is unknown, which poses a major difficulty for the filtering operation. In this paper we present an error-correction-based learning approach for iterative filtering. A new FIR filter is designed in which the filter coefficients are updated according to the Widrow-Hoff rule. Unlike a standard filter, the proposed filter is able to remove noise without a priori knowledge of the noise. Experimental results show that the proposed filter efficiently removes noise while preserving the edges in the image. We demonstrate the capability of the proposed algorithm by testing it on standard images corrupted by Gaussian noise and on a real-time video containing inherent noise. Experimental results show that the proposed filter outperforms several existing standard filters.
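As a rough sketch of the Widrow-Hoff (LMS) coefficient update the abstract refers to, the following Python code adapts the taps of an FIR filter along a 1-D signal; the tap count, step size, and the use of the noisy sample itself as the training target are illustrative assumptions, not the paper's exact design.

import numpy as np

def lms_filter(noisy, num_taps=5, mu=0.01):
    # Adaptive FIR filtering with the Widrow-Hoff (LMS) update rule.
    w = np.zeros(num_taps)                  # filter coefficients, adapted on the fly
    out = noisy.astype(float).copy()
    for n in range(num_taps, len(noisy)):
        x = noisy[n - num_taps:n][::-1]     # most recent samples first
        y = np.dot(w, x)                    # filter output at position n
        e = noisy[n] - y                    # prediction error against the observed sample
        w += 2 * mu * e * x                 # Widrow-Hoff coefficient update
        out[n] = y
    return out

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 8 * np.pi, 400))
denoised = lms_filter(clean + 0.2 * rng.standard_normal(400))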
Abstract:
Many wireless sensor network (WSN) applications require reliable data transfer between the nodes. Several techniques, including link-level retransmission, error correction methods, and hybrid Automatic Repeat reQuest (ARQ), have been introduced into wireless sensor networks to ensure reliability. In this paper, we use an Automatic reSend request (ASQ) technique with regular acknowledgement to design a reliable end-to-end communication protocol, called the Adaptive Reliable Transport (ARTP) protocol, for WSNs. Besides ensuring reliability, the objective of the ARTP protocol is to provide a message-stream FIFO at the receiver side instead of the byte-stream FIFO used in the TCP/IP protocol suite. To realize this objective, a new protocol stack is used in the ARTP protocol. The ARTP protocol saves energy without affecting throughput by sending three different types of acknowledgements, viz. ACK, NACK and FNACK, with semantics different from those existing in the literature, and by adapting to the network conditions. Additionally, the protocol performs flow control based on the receiver's feedback and controls congestion by holding back ACK messages. To the best of our knowledge, there has been little or no attempt to build a receiver-controlled, regularly acknowledged reliable communication protocol. We have carried out extensive simulation studies of our protocol using the Castalia simulator, and the study shows that our protocol performs better than related protocols in wireless/wireline networks in terms of throughput and energy efficiency.
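The exact ACK/NACK/FNACK semantics are specific to ARTP and not spelled out in the abstract; the sketch below is only a generic illustration of a receiver that delivers a message-stream FIFO and issues acknowledgements, with the feedback rules chosen purely for illustration.

def receiver_step(expected_seq, buffer, incoming_msg):
    # Generic message-FIFO receiver: deliver messages in order, acknowledge the sender.
    seq, payload = incoming_msg
    buffer[seq] = payload
    delivered = []
    while expected_seq in buffer:             # hand contiguous messages up in FIFO order
        delivered.append(buffer.pop(expected_seq))
        expected_seq += 1
    if seq >= expected_seq:                   # a gap remains before this message
        feedback = ("NACK", expected_seq)     # ask for the first missing message
    else:
        feedback = ("ACK", expected_seq - 1)  # cumulative acknowledgement
    return expected_seq, delivered, feedback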
Abstract:
While it is well known that extremely long low-density parity-check (LDPC) codes perform exceptionally well in error correction applications, short-length codes are preferable in practice. However, short-length LDPC codes suffer from performance degradation owing to graph-based impairments such as short cycles, trapping sets and stopping sets in the bipartite graph of the LDPC matrix. In particular, performance degradation at moderate to high $E_b/N_0$ is caused by oscillations in the bit-node a posteriori probabilities induced by short cycles and trapping sets in the bipartite graph. In this study, a computationally efficient algorithm is proposed to improve the performance of short-length LDPC codes at moderate to high $E_b/N_0$. This algorithm makes use of the information generated by the belief propagation (BP) algorithm in previous iterations before a decoding failure occurs. Using this information, a reliability-based estimation is performed at each bit node to supplement the BP algorithm. The proposed algorithm gives an appreciable coding gain compared with BP decoding for LDPC codes with a code rate equal to or less than 1/2. The coding gains are modest to significant for regular LDPC codes optimised for bipartite-graph conditioning, whereas they are substantial for unoptimised codes. Hence, this algorithm is useful for relaxing some stringent constraints on the graphical structure of the LDPC code and for developing hardware-friendly designs.
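The abstract does not give the estimator's exact form; a minimal sketch of the general idea, i.e. using bit-node soft values accumulated over earlier BP iterations to reach a more stable hard decision when plain BP fails, might look as follows (the averaging rule is an illustrative assumption).

import numpy as np

def reliability_assisted_decision(llr_history):
    # llr_history: array of shape (iterations, n_bits) holding the bit-node LLRs
    # produced by BP in each iteration before the decoding failure.
    avg_llr = np.mean(llr_history, axis=0)   # damp cycle-induced oscillations
    return (avg_llr < 0).astype(int)         # hard decision: bit = 1 where averaged LLR < 0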
Abstract:
A recent approach to the construction of constant-dimension subspace codes, designed for error correction in random networks, is to consider the codes as orbits of suitable subgroups of the general linear group. In particular, a cyclic orbit code is the orbit of a cyclic subgroup. Hence a possible method for constructing large cyclic orbit codes with a given minimum subspace distance is to select a subspace whose orbit under the Singer subgroup satisfies the distance constraint. In this paper we propose a method in which some basic properties of difference sets are employed to select such a subspace, thereby providing a systematic way of constructing cyclic orbit codes with specified parameters. We also present an explicit example of such a construction.
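For orientation (this is the standard setting, not a result of the paper): identifying $F_q^n$ with the field $F_{q^n}$ and letting $\alpha$ generate the Singer cycle, the orbit code of a subspace $U$ and the relevant distances are
\[ \mathrm{Orb}(U) = \{\, \alpha^i U : 0 \le i < q^n - 1 \,\}, \qquad d_S(\alpha^i U, \alpha^j U) = d_S(U, \alpha^{\,j-i} U), \]
so the minimum-distance constraint only needs to be checked between $U$ and its own translates.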
Abstract:
The set of all subspaces of $F_q^n$ is denoted by $P_q(n)$. The subspace distance $d_S(X, Y) = \dim(X) + \dim(Y) - 2\dim(X \cap Y)$ defined on $P_q(n)$ turns it into a natural coding space for error correction in random network coding. A subset of $P_q(n)$ is called a code, and the subspaces that belong to the code are called codewords. Motivated by classical coding theory, a linear coding structure can be imposed on a subset of $P_q(n)$. Braun et al. conjectured that the largest cardinality of a linear code that contains $F_q^n$ is $2^n$. In this paper, we prove this conjecture and characterize the maximal linear codes that contain $F_q^n$.
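A small worked instance of the distance: for two $k$-dimensional subspaces meeting in a $t$-dimensional subspace,
\[ d_S(X, Y) = k + k - 2t = 2(k - t), \]
so two distinct planes of $F_q^4$ sharing a common line ($k = 2$, $t = 1$) are at subspace distance 2.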
Abstract:
This paper is based on the queries that teachers of other subjects put to us as language teachers. We teachers must often correct not only the content of our students' work but also the language. The debate is not new: are all teachers also language teachers? It is a challenge we can hardly escape, since language, besides being a subject of study, is also the vehicle through which the content of every subject is taught. With this paper we aim to help teachers who do not teach language as a subject to correct their students' work. The proposal consists of three lines of action, ordered by priority: prevent, self-correct, and assist.
Abstract:
This thesis addresses whether it is possible to build a robust memory device for quantum information. Many schemes for fault-tolerant quantum information processing have been developed so far, one of which, called topological quantum computation, makes use of degrees of freedom that are inherently insensitive to local errors. However, this scheme is not so reliable against thermal errors. Other fault-tolerant schemes achieve better reliability through active error correction, but incur a substantial overhead cost. Thus, it is of practical importance and theoretical interest to design and assess fault-tolerant schemes that work well at finite temperature without active error correction.
In this thesis, a three-dimensional gapped lattice spin model is found which demonstrates for the first time that a reliable quantum memory at finite temperature is possible, at least to some extent. When quantum information is encoded into a highly entangled ground state of this model and subjected to thermal errors, the errors remain easily correctable for a long time without any active intervention, because a macroscopic energy barrier keeps the errors well localized. As a result, stored quantum information can be retrieved faithfully for a memory time which grows exponentially with the square of the inverse temperature. In contrast, for previously known types of topological quantum storage in three or fewer spatial dimensions the memory time scales exponentially with the inverse temperature, rather than its square.
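In symbols, with $\beta$ the inverse temperature, the scaling contrast described above is
\[ t_{\mathrm{mem}} \sim e^{c \beta^2} \ \text{(this model)} \qquad \text{versus} \qquad t_{\mathrm{mem}} \sim e^{c' \beta} \ \text{(previously known 3D topological memories)}, \]
for some constants $c, c' > 0$.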
This spin model exhibits a previously unexpected topological quantum order, in which ground states are locally indistinguishable, pointlike excitations are immobile, and the immobility is not affected by small perturbations of the Hamiltonian. The degeneracy of the ground state, though also insensitive to perturbations, is a complicated number-theoretic function of the system size, and the system bifurcates into multiple noninteracting copies of itself under real-space renormalization group transformations. The degeneracy, the excitations, and the renormalization group flow can be analyzed using a framework that exploits the spin model's symmetry and some associated free resolutions of modules over polynomial algebras.
Abstract:
Flash memory is a leading storage medium with excellent features such as random access and high storage density. However, it also faces significant reliability and endurance challenges. In flash memory, the charge level of a cell can easily be increased, but removing charge requires an expensive erasure operation. In this thesis we study rewriting schemes that enable the data stored in a set of cells to be rewritten by only increasing the charge levels of the cells. We consider two types of modulation scheme: a conventional modulation based on the absolute levels of the cells, and a recently proposed scheme based on the relative cell levels, called rank modulation. The contributions of this thesis to the study of rewriting schemes for rank modulation include the following: we
•propose a new method of rewriting in rank modulation, beyond the previously proposed method of “push-to-the-top” (the basic operation is sketched after this list);
•study the limits of rewriting with the newly proposed method, and derive a tight upper bound of 1 bit per cell;
•extend the rank-modulation scheme to support rankings with repetitions, in order to improve the storage density;
•derive a tight upper bound of 2 bits per cell for rewriting in rank modulation with repetitions;
•construct an efficient rewriting scheme that asymptotically approaches the upper bound of 2 bits per cell.
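For reference, “push-to-the-top” injects charge into one chosen cell until it exceeds all others, which in ranking terms moves that cell to the top while the remaining cells keep their relative order. A minimal Python sketch of the operation on a ranking (top-ranked cell first):

def push_to_top(ranking, cell):
    # Move `cell` to the highest rank; all other cells keep their relative order.
    ranking = [c for c in ranking if c != cell]
    return [cell] + ranking

print(push_to_top([2, 0, 3, 1], 3))   # -> [3, 2, 0, 1]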
The next part of this thesis studies rewriting schemes for conventional absolute-level modulation. The model considered is called “write-once memory” (WOM). We focus on WOM schemes that achieve the capacity of the model. In recent years several capacity-achieving WOM schemes were proposed, based on polar codes and randomness extractors. The contributions of this thesis to the study of WOM schemes include the following: we
•propose a new capacity-achieving WOM scheme based on sparse-graph codes, and show its attractive properties for practical implementation;
•improve the design of polar WOM schemes to remove the reliance on shared randomness and include an error-correction capability.
The last part of the thesis studies the local rank-modulation (LRM) scheme, in which a sliding window going over a sequence of real-valued variables induces a sequence of permutations. The LRM scheme is used to simulate a single conventional multi-level flash cell. The simulated cell is realized by a Gray code traversing all the relative-value states where, physically, the transition between two adjacent states in the Gray code is achieved by using a single “push-to-the-top” operation. The main results of the last part of the thesis are two constructions of Gray codes with asymptotically-optimal rate.
Abstract:
The application of RS (Reed-Solomon) error-correcting codes and Trellis Coded Modulation (TCM) to optical pulse-position-modulation (PPM) communication is analyzed. On this basis, a new coding scheme for optical PPM communication is proposed that uses an RS code as the outer code and TCM as the inner code; it improves the performance of the conventional RS-coded system over time-varying, band-limited optical channels while causing almost no reduction in data rate. The correct symbol-transmission rate and bit error rate of the conventional RS code are simulated under different free-space optical channel conditions, and the coding gain of TCM as well as of the RS/TCM concatenation is studied by simulation, confirming the effectiveness of the proposed scheme.
Abstract:
The error self-correction method described here uses the single-chip microcontroller of a grating displacement-measurement system to count multiple zero-position signals from the grating sensor and automatically corrects the error according to an error function obtained from the measured values and the system's set values. Experimental results show that the method can automatically and effectively correct the error of the grating displacement-measurement system while improving its measurement accuracy.
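The abstract does not give the form of the error function; one common way to realize this kind of self-correction is a calibration table interpolated at run time, as in the hypothetical Python sketch below (the calibration numbers are made up).

import numpy as np

# Hypothetical calibration: set (reference) positions vs. the values the system measured there.
set_points      = np.array([0.0, 10.0, 20.0, 30.0, 40.0])   # mm
measured_points = np.array([0.0, 10.2, 20.3, 30.1, 40.4])   # mm

error_at_set = measured_points - set_points                  # sampled error function

def correct(measured_value):
    # Subtract the interpolated error from a raw reading.
    err = np.interp(measured_value, measured_points, error_at_set)
    return measured_value - err

print(correct(20.3))   # ~20.0 after correction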
Abstract:
The specific objective of this dissertation is to estimate the GDP elasticity of the Personal Income Tax (IRPF) and the Corporate Income Tax (IRPJ) in Brazil between 1986 and 2012. The research also includes a technical analysis of taxation and its impacts on the economic system, at both the microeconomic and macroeconomic levels, and discusses the IRPF and IRPJ in their economic and legal aspects. Methodologically, Vector Error Correction (VEC) models are used to estimate the GDP elasticities of the IRPF and IRPJ. The results indicate a GDP elasticity above unity, for both the IRPF and the IRPJ, in most of the estimated models, and certain specific periods have a considerable impact on the collection of these taxes.
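As an illustration of the kind of VEC estimation described (not the dissertation's actual specification or data), the minimal Python sketch below fits a VECM to synthetic log-revenue and log-GDP series with statsmodels; with the cointegrating vector normalized on revenue, the GDP coefficient plays the role of the long-run elasticity.

import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(0)
n = 200
log_gdp = np.cumsum(0.01 + 0.02 * rng.standard_normal(n))          # random walk with drift
log_tax = 1.2 * log_gdp + 0.05 * rng.standard_normal(n)            # cointegrated, elasticity about 1.2

data = np.column_stack([log_tax, log_gdp])
model = VECM(data, k_ar_diff=2, coint_rank=1, deterministic="co")  # illustrative settings
res = model.fit()

beta = res.beta[:, 0]                                  # cointegrating vector, normalized on log_tax
print("long-run GDP elasticity:", -beta[1] / beta[0])  # should come out close to 1.2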