954 results for Error-correcting Codes


Relevance:

80.00%

Publisher:

Abstract:

An experimental study is presented that assesses the influence of redundancy and neutrality on the performance of a (1+1)-ES evolution strategy, modeled using Markov chains and applied to NK fitness landscapes. The study uses two families of redundant binary representations: a non-neutral family based on linear transformations, which allows the phenotypic neighborhoods to be designed in a simple and effective way, and a neutral family based on the mathematical formulation of error control codes. The results indicate whether redundancy or neutrality more strongly affects the behavior of the algorithm.
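
As a concrete reference point, here is a minimal sketch of a (1+1)-ES on an NK landscape with adjacent-neighbour epistasis; the redundant and neutral encodings and the Markov-chain model studied in the paper are not reproduced, and all parameters are illustrative.

    # (1+1)-ES on an NK fitness landscape: a sketch, not the paper's setup.
    import numpy as np

    rng = np.random.default_rng(0)

    def make_nk(n, k):
        # One lookup table per locus: contribution of the locus given its
        # own allele and the alleles of its k right neighbours (circular).
        return rng.random((n, 2 ** (k + 1)))

    def fitness(x, tables, k):
        n = len(x)
        total = 0.0
        for i in range(n):
            idx = 0
            for j in range(k + 1):
                idx = (idx << 1) | x[(i + j) % n]
            total += tables[i, idx]
        return total / n

    def one_plus_one_es(n=20, k=3, steps=2000):
        tables = make_nk(n, k)
        parent = rng.integers(0, 2, n)
        f_parent = fitness(parent, tables, k)
        for _ in range(steps):
            child = parent ^ (rng.random(n) < 1.0 / n)   # 1/n bit-flip rate
            f_child = fitness(child, tables, k)
            if f_child >= f_parent:        # accepting ties permits neutral drift
                parent, f_parent = child, f_child
        return f_parent

    print(one_plus_one_es())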

Relevance:

80.00%

Publisher:

Abstract:

In this thesis, we focus on a cryptographic primitive known as secret sharing. We explore both the classical and quantum domains of these primitives, crowning our study with the presentation of a new quantum secret sharing protocol requiring a minimal number of quantum shares, i.e., a single quantum share per participant. Our study opens with a preliminary chapter surveying the mathematical notions underlying quantum information theory, whose primary purpose is to establish the notation used in this manuscript, together with a summary of the mathematical properties of the Greenberger-Horne-Zeilinger (GHZ) state, frequently used in quantum cryptography and communication games. As mentioned above, however, cryptography remains the focal point of this study. In the second chapter, we turn to the theory of classical and quantum error-correcting codes, which will in turn be of extreme importance when quantum secret sharing theory is introduced in the following chapter. In the first part of the third chapter, we concentrate on classical secret sharing, presenting a general theoretical framework for the construction of these primitives and illustrating the concepts throughout with examples chosen for their historical as well as pedagogical interest. This paves the way for our exposition of quantum secret sharing theory, the focus of the second part of that chapter. We then present the most general theorems and definitions known to date on the construction of these primitives, with particular attention to quantum threshold schemes. We show the close link between the theory of quantum error-correcting codes and that of secret sharing; the link is so close that quantum error-correcting codes can be considered closer analogues of quantum secret sharing schemes than classical secret-sharing codes are. Finally, we present one of our three results published in A. Broadbent, P.-R. Chouha, A. Tapp (2009): a secure and minimal quantum threshold secret sharing protocol (the two other results, which we do not treat here, concern communication complexity and the classical simulation of the GHZ state).
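
For illustration, here is a minimal numpy sketch of the 3-qubit GHZ state discussed above, (|000> + |111>)/sqrt(2), together with a check of its defining X⊗X⊗X correlation; this is an aside, not the thesis's protocol.

    import numpy as np

    ghz = np.zeros(8)
    ghz[0] = ghz[7] = 1 / np.sqrt(2)    # amplitudes on |000> and |111>

    X = np.array([[0, 1], [1, 0]])
    XXX = np.kron(np.kron(X, X), X)

    # GHZ is a +1 eigenstate of X (x) X (x) X, the correlation exploited
    # in GHZ-based cryptography and communication games.
    print(np.allclose(XXX @ ghz, ghz))  # True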

Relevance:

80.00%

Publisher:

Abstract:

We compare Naive Bayes and Support Vector Machines on the task of multiclass text classification. Using a variety of approaches to combine the underlying binary classifiers, we find that SVMs substantially outperform Naive Bayes. We present full multiclass results on two well-known text data sets, including the lowest error to date on both data sets. We develop a new indicator of binary performance to show that the SVM's lower multiclass error is a result of its improved binary performance. Furthermore, we demonstrate and explore the surprising result that one-vs-all classification performs favorably compared to other approaches even though it has no error-correcting properties.
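
A hedged scikit-learn sketch of the comparison between one-vs-all and error-correcting output codes is given below; the dataset, features, and parameters are illustrative stand-ins, not those of the paper.

    from sklearn.datasets import fetch_20newsgroups
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.multiclass import OneVsRestClassifier, OutputCodeClassifier
    from sklearn.svm import LinearSVC

    train = fetch_20newsgroups(subset="train")
    test = fetch_20newsgroups(subset="test")
    vec = TfidfVectorizer()
    Xtr, Xte = vec.fit_transform(train.data), vec.transform(test.data)

    for name, clf in [
        ("one-vs-all", OneVsRestClassifier(LinearSVC())),
        ("ECOC", OutputCodeClassifier(LinearSVC(), code_size=2, random_state=0)),
    ]:
        clf.fit(Xtr, train.target)
        print(name, clf.score(Xte, test.target))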

Relevance:

80.00%

Publisher:

Abstract:

The development of the digital electronics market is founded on the continuous reduction of transistor size, which reduces area, power, and cost and increases the computational performance of integrated circuits. This trend, known as technology scaling, is approaching the nanometer scale. The lithographic process in the manufacturing stage becomes more uncertain as transistor sizes shrink, resulting in larger parameter variation in future technology generations. Furthermore, the exponential relationship between leakage current and threshold voltage limits the scaling of threshold and supply voltages, increasing power density and creating local thermal issues such as hot spots, thermal runaway, and thermal cycles. In addition, the introduction of new materials and the smaller device dimensions reduce transistor robustness, which, combined with high temperatures and frequent thermal cycles, speeds up wear-out processes. These effects can no longer be addressed at the process level alone. Consequently, deep sub-micron devices will require solutions spanning several design levels, such as system and logic, and new approaches called Design For Manufacturability (DFM) and Design For Reliability (DFR). The purpose of these approaches is to bring awareness of device reliability and manufacturability into the early design stages, in order to introduce logic and systems able to cope with yield and reliability loss.

The ITRS roadmap suggests the following research steps to integrate design for manufacturability and reliability into the standard automated CAD design flow: i) new analysis algorithms able to predict the thermal behavior of the system and its impact on power and speed; ii) high-level wear-out models able to predict the mean time to failure (MTTF) of the system; iii) statistical performance analysis able to predict the impact of process variation, both random and systematic. The new analysis tools must be developed alongside new logic and system strategies to cope with future challenges, for instance: i) thermal management strategies that increase device reliability and lifetime by acting on tunable parameters such as supply voltage or body bias; ii) error detection logic able to interact with compensation techniques such as Adaptive Supply Voltage (ASV), Adaptive Body Bias (ABB), and error recovery, in order to increase yield and reliability; iii) architectures that are fundamentally resistant to variability, including locally asynchronous designs, redundancy, and error-correcting signal encodings (ECC). The literature already features works addressing the prediction of the MTTF, papers focusing on thermal management in general-purpose chips, and publications on statistical performance analysis.

In my PhD research activity, I investigated the need for thermal management in future embedded low-power Network-on-Chip (NoC) devices. I developed a thermal analysis library that has been integrated into a cycle-accurate NoC simulator and into an FPGA-based NoC simulator. The results show that an accurate layout distribution can avoid the onset of hot spots in a NoC chip, and that thermal management can reduce temperature and the number of thermal cycles, increasing system reliability. The thesis therefore advocates integrating thermal analysis into the first design stages of embedded NoC design.

Later on, I focused my research on the development of a statistical process variation analysis tool able to address both random and systematic variations. The tool was used to analyze the impact of self-timed asynchronous logic stages in an embedded microprocessor, confirming the capability of self-timed logic to increase manufacturability and reliability. We also used the tool to investigate the suitability of low-swing techniques for NoC system communication under process variation, and found that low-swing links are markedly more robust to systematic process variation and respond well to compensation techniques such as ASV and ABB. Low-swing signalling is therefore a good alternative to standard CMOS communication in terms of power, speed, reliability, and manufacturability. In summary, my work proves the advantage of integrating a statistical process variation analysis tool into the first stages of the design flow.
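
As an illustration of the statistical analysis advocated above, here is a toy Monte Carlo sketch of critical-path delay under random and systematic threshold-voltage variation, using the alpha-power delay model; every constant is invented for the example.

    import numpy as np

    rng = np.random.default_rng(1)

    VDD, VTH0, ALPHA = 1.0, 0.3, 1.3     # supply, nominal Vth, alpha-power exponent
    SIGMA_SYS, SIGMA_RND = 0.02, 0.03    # systematic / random Vth spread (V)
    N_DIES, N_GATES = 10_000, 50         # Monte Carlo samples, critical-path depth

    # Systematic variation shifts every gate on a die together;
    # random variation is independent per gate.
    vth = (VTH0
           + rng.normal(0, SIGMA_SYS, (N_DIES, 1))
           + rng.normal(0, SIGMA_RND, (N_DIES, N_GATES)))

    delay = (VDD / (VDD - vth) ** ALPHA).sum(axis=1)   # arbitrary units

    nominal = N_GATES * VDD / (VDD - VTH0) ** ALPHA
    print("timing yield at 10% guard-band:", (delay < 1.1 * nominal).mean())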

Relevance:

80.00%

Publisher:

Abstract:

Since Information and Communication Technologies began to gain importance in society, one of the main objectives has been to ensure that transmitted information reaches the receiver intact. For this reason, new digital communication systems capable of offering secure and reliable transmission must be developed. Their characteristics have improved over the years, bringing important advances to everyday life. In this context, one of the most successful schemes is Trellis Coded Modulation (TCM), which brings great advantages to digital communications, especially in narrowband systems. This kind of error-correcting code, based on convolutional coding, performs modulation and coding in a single function. As a result, a higher data transmission speed is achieved without increasing bandwidth, at the cost of moving to a larger constellation. The aim of this final-year project is to analyze the performance of TCM and the advantages it offers over similar systems. Four simulations are proposed, which display plots relating the bit error rate (BER) to the signal-to-noise ratio (SNR); from these plots, the coding gain with respect to the theoretical bit error probability can be determined. The simulated systems move from a QPSK constellation to 8PSK, or from 8PSK to 16QAM. Finally, a Matlab graphical environment is developed to provide simple handling and greater interactivity for the user.
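
Below is a minimal Monte Carlo loop relating BER to Eb/N0 for uncoded QPSK, the kind of reference curve against which TCM coding gain is measured; the TCM encoder itself is not implemented here.

    import numpy as np

    rng = np.random.default_rng(2)
    n_bits = 200_000

    for ebn0_db in (2, 4, 6, 8):
        bits = rng.integers(0, 2, n_bits)
        # Gray-mapped QPSK: one bit per quadrature rail, unit symbol energy.
        sym = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)
        ebn0 = 10 ** (ebn0_db / 10)
        sigma = np.sqrt(1 / (4 * ebn0))   # per-dimension noise std (Es = 2 Eb)
        rx = sym + sigma * (rng.normal(size=sym.size)
                            + 1j * rng.normal(size=sym.size))
        errs = np.sum((rx.real > 0) != (bits[0::2] == 1))
        errs += np.sum((rx.imag > 0) != (bits[1::2] == 1))
        print(f"Eb/N0 = {ebn0_db} dB  ->  BER ~ {errs / n_bits:.4f}")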

Relevance:

80.00%

Publisher:

Abstract:

Neuronal responses are conspicuously variable. We focus on one particular aspect of that variability: the precision of action potential timing. We show that for common models of noisy spike generation, elementary considerations imply that such variability is a function of the input, and can be made arbitrarily large or small by a suitable choice of inputs. Our considerations are expected to extend to virtually any mechanism of spike generation, and we illustrate them with data from the visual pathway. Thus, a simplification usually made in the application of information theory to neural processing is violated: noise is not independent of the message. However, we also show the existence of error-correcting topologies, which can achieve better timing reliability than their components.
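
A minimal leaky integrate-and-fire sketch, assuming Gaussian current noise, illustrates the central point: spike-time jitter is a function of the input, with a sharp current step producing tighter timing than a slow ramp of the same amplitude. All parameters are illustrative.

    import numpy as np

    rng = np.random.default_rng(3)
    dt, tau, v_th = 0.1, 10.0, 1.0      # ms, membrane time constant, threshold

    def spike_time_jitter(drive, noise=0.15, trials=500):
        times = []
        for _ in range(trials):
            v = 0.0
            for i, current in enumerate(drive):
                v += dt / tau * (-v + current) + noise * np.sqrt(dt) * rng.normal()
                if v >= v_th:
                    times.append(i * dt)
                    break
        return np.std(times)

    t = np.arange(0, 50, dt)
    step = np.where(t > 10, 2.0, 0.0)          # abrupt input
    ramp = np.clip((t - 10) / 20, 0, 1) * 2.0  # slow input, same amplitude

    print("jitter, step input (ms):", spike_time_jitter(step))
    print("jitter, ramp input (ms):", spike_time_jitter(ramp))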

Relevance:

80.00%

Publisher:

Abstract:

We show the similarity between belief propagation and TAP for decoding corrupted messages encoded by Sourlas's method. The latter is a special case of the Gallager error-correcting code, where the codeword comprises products of K bits selected randomly from the original message. We examine the efficacy of solutions obtained by the two methods for various values of K and show that solutions for K >= 3 may be sensitive to the choice of initial conditions in the case of unbiased patterns. Good approximations are obtained generally for K = 2, and for biased patterns in the case of K >= 3, especially when Nishimori's temperature is used.
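
A minimal sketch of the encoding step in the ±1 representation follows: each codeword bit is the product of K randomly chosen message bits, subsequently flipped by a binary symmetric channel. The BP/TAP decoders themselves are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(4)
    N, M, K, p_flip = 100, 400, 2, 0.1  # message bits, checks, bits per check, noise

    message = rng.choice([-1, 1], N)
    idx = np.array([rng.choice(N, K, replace=False) for _ in range(M)])

    codeword = np.prod(message[idx], axis=1)   # products of K message bits
    received = codeword * rng.choice([-1, 1], M, p=[p_flip, 1 - p_flip])

    print("channel flips:", int(np.sum(received != codeword)))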

Relevance:

80.00%

Publisher:

Abstract:

We investigate a 40 Gbit/s all-Raman-amplified standard single-mode fibre (SMF) transmission system with a mid-range amplifier spacing of 80-90 km. The impact of span configuration on double Rayleigh backscattering (DRBS) was studied, and four different span configurations were compared experimentally. A transmission distance of 1666 km in SMF has been achieved without forward error correction (FEC) for the first time. The results demonstrate that the detrimental effects associated with high-pump-power Raman amplification in standard fibre can be minimised by dispersion map optimisation. © 2003 IEEE.

Relevance:

80.00%

Publisher:

Abstract:

In this paper we experimentally demonstrate, for the first time, a 10 Mb/s error-free visible light communications (VLC) system using polymer light-emitting diodes (PLEDs). The PLED under test is a blue emitter with a bandwidth of ∼600 kHz. Such a low bandwidth introduces an intersymbol interference (ISI) penalty at higher transmission speeds, and hence the need for an equalizer. In this work we improve on previous literature by implementing a decision feedback equalizer rather than a linear equalizer. Considering 7% and 20% forward error correction codes, transmission speeds up to ∼12 Mb/s can be supported.
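
A hedged numpy sketch of an LMS-trained decision feedback equalizer on a low-pass binary link follows; the exponential channel model, noise level, and tap counts are stand-ins for the PLED channel, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(5)
    bits = rng.integers(0, 2, 20_000)
    tx = 2.0 * bits - 1.0

    # Low-pass channel: an exponential tail mimics the limited PLED bandwidth.
    h = np.exp(-np.arange(6) / 2.0)
    rx = np.convolve(tx, h / h.sum(), mode="full")[: len(tx)]
    rx += 0.05 * rng.normal(size=len(rx))

    n_ff, n_fb, mu = 9, 4, 0.01         # feedforward taps, feedback taps, LMS step
    ff, fb = np.zeros(n_ff), np.zeros(n_fb)
    past = np.zeros(n_fb)               # symbols already decided
    errors = 0

    for k in range(n_ff, len(rx)):
        x = rx[k - n_ff + 1 : k + 1][::-1]  # most recent samples first
        y = ff @ x - fb @ past              # feedback cancels postcursor ISI
        d = 1.0 if y >= 0 else -1.0         # hard decision
        e = tx[k] - y                       # training-mode LMS error
        ff += mu * e * x
        fb -= mu * e * past
        errors += d != tx[k]
        past = np.roll(past, 1)
        past[0] = tx[k]                     # known symbols fed back while training

    print("symbol error rate (incl. adaptation):", errors / (len(rx) - n_ff))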

Relevance:

80.00%

Publisher:

Abstract:

This article studies a simple, coherent approach for identifying and estimating error-correcting vector autoregressive moving average (EC-VARMA) models. Canonical correlation analysis is implemented for both determining the cointegrating rank, using a strongly consistent method, and identifying the short-run VARMA dynamics, using the scalar component methodology. Finite-sample performance is evaluated via Monte Carlo simulations and the approach is applied to modelling and forecasting US interest rates. The results reveal that EC-VARMA models generate significantly more accurate out-of-sample forecasts than vector error correction models (VECMs), especially for short horizons.
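
A hedged statsmodels sketch of the VECM side of the comparison is shown below: Johansen's test selects the cointegrating rank, then a VECM is fitted; simulated cointegrated series stand in for the interest-rate data of the article.

    import numpy as np
    from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

    rng = np.random.default_rng(6)
    T = 400
    trend = np.cumsum(rng.normal(size=T))        # shared stochastic trend
    y = np.column_stack([
        trend + rng.normal(scale=0.5, size=T),   # two cointegrated series
        0.8 * trend + rng.normal(scale=0.5, size=T),
    ])

    rank_test = coint_johansen(y, det_order=0, k_ar_diff=1)
    print("trace statistics:", rank_test.lr1)    # compare with rank_test.cvt

    res = VECM(y, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
    print(res.summary())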

Relevance:

50.00%

Publisher:

Abstract:

In handling large volumes of data, such as chemical notations or serial numbers for books, it is always advisable to provide checking methods that indicate the presence of errors. The entire new discipline of coding theory is devoted to the study of the construction of codes that provide such error-detecting and error-correcting means. Although these codes are very powerful, they are highly sophisticated from the point of view of practical implementation.
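
A simple instance of such a check is the ISBN-10 check digit carried by book serial numbers: the weighted sum of all ten digits must be divisible by 11, which detects every single-digit error and every transposition of adjacent digits.

    def isbn10_check_digit(first9: str) -> str:
        # Weights 10..2 on the first nine digits; the check digit makes the
        # full weighted sum a multiple of 11 ('X' stands for 10).
        total = sum((10 - i) * int(d) for i, d in enumerate(first9))
        check = (11 - total % 11) % 11
        return "X" if check == 10 else str(check)

    print(isbn10_check_digit("030640615"))   # -> "2", i.e. ISBN 0-306-40615-2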

Relevance:

40.00%

Publisher:

Abstract:

In this work, we introduce convolutional codes for network-error correction in the context of coherent network coding. We give a construction of convolutional codes that correct a given set of error patterns, provided consecutive errors are separated by a certain interval, and we give bounds on the field size and on the number of errors that can be corrected within such an interval. Compared to previous network error correction schemes, convolutional codes offer advantages in field size and decoding technique. Examples are discussed that illustrate the various situations arising in this context.
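
For reference, a minimal rate-1/2 binary convolutional encoder is sketched below, using the standard (7, 5) octal generators; it is the generic building block, not the network-error-correcting construction of this work.

    import numpy as np

    G = [0b111, 0b101]                  # generator taps, constraint length 3

    def conv_encode(bits):
        state, out = 0, []
        for b in bits:
            state = ((state << 1) | b) & 0b111
            out.extend(bin(state & g).count("1") % 2 for g in G)
        return np.array(out)

    print(conv_encode([1, 0, 1, 1]))    # two output bits per input bit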

Relevance:

40.00%

Publisher:

Abstract:

Motivated by applications to distributed storage, Gopalan et al. recently introduced the interesting notion of information-symbol locality in a linear code. By this it is meant that each message symbol appears in a parity-check equation of small Hamming weight, thereby enabling recovery of the message symbol by examining a small number of other code symbols. This notion extends to the case when all code symbols, not just the message symbols, are covered by such "local" parities. In this paper, we extend the results of Gopalan et al. so as to permit recovery of an erased code symbol even in the presence of errors in local parity symbols. We present tight bounds on the minimum distance of such codes and exhibit codes that are optimal with respect to the local error-correction property. As a corollary, we obtain an upper bound on the minimum distance of a concatenated code.
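
A toy illustration of the locality idea follows: code symbols are grouped, each group carries an XOR parity, and an erased symbol is repaired from its small group alone. The distance-optimal constructions of the paper are more involved.

    import numpy as np

    rng = np.random.default_rng(7)
    groups = [rng.integers(0, 2, 4) for _ in range(3)]      # data symbols
    stored = [np.append(g, g.sum() % 2) for g in groups]    # add local parity

    # Erase one symbol in group 0 and repair it from the other four symbols.
    erased_pos = 2
    survivors = np.delete(stored[0], erased_pos)
    recovered = survivors.sum() % 2                         # XOR of the rest
    print(recovered == stored[0][erased_pos])               # True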

Relevance:

40.00%

Publisher:

Abstract:

We study the tradeoff between the average error probability and the average queueing delay of messages that arrive randomly at the transmitter of a point-to-point discrete memoryless channel using variable-rate, fixed-codeword-length random coding. Bounds on the exponential decay rate of the average error probability with average queueing delay are obtained in the regime of large average delay, and upper and lower bounds on the optimal average delay for a given average error probability constraint are presented. We then formulate a constrained Markov decision problem that characterizes the rate of transmission as a function of queue size given an average error probability constraint. Using a Lagrange multiplier, the constrained Markov decision problem is converted into one of minimizing the average cost of an unconstrained Markov decision problem. A simple heuristic policy is proposed which approximately achieves the optimal average cost.
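
A toy simulation of the kind of heuristic policy studied here is sketched below: the transmission rate grows with queue length, and the per-slot error measure decays exponentially in the gap to capacity, a crude stand-in for the random-coding error exponent. All constants are invented.

    import numpy as np

    rng = np.random.default_rng(8)
    C, lam, steps = 1.0, 0.4, 100_000   # capacity, arrival rate, time slots

    q, q_trace, err = 0.0, [], 0.0
    for _ in range(steps):
        q += rng.poisson(lam)           # random message arrivals
        rate = C * q / (q + 5)          # serve faster when the queue is long
        served = min(q, rate)           # fluid approximation of service
        q -= served
        err += served * np.exp(-4 * (C - rate))
        q_trace.append(q)

    print("average queue length:", np.mean(q_trace))
    print("average error measure per slot:", err / steps)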

Relevance:

40.00%

Publisher:

Abstract:

An n-length block code C is said to be r-query locally correctable if, for any codeword x ∈ C, one can probabilistically recover any one of the n coordinates of x by querying at most r coordinates of a possibly corrupted version of x. It is known that linear codes whose duals contain 2-designs are locally correctable. In this article, we consider linear codes whose duals contain t-designs for larger t. We show that for such codes, for a given number of queries r, linear decoding can in general handle a larger number of corrupted bits. We exhibit, to our knowledge for the first time, a finite-length code whose dual contains 4-designs and which can tolerate a fraction of up to 0.567/r corrupted symbols, as against a maximum of 0.5/r in prior constructions. We also present an upper bound showing that 0.567 is the best possible for this code length and query complexity over this symbol alphabet, thereby establishing the optimality of this code in this respect. A second result in the article is a finite-length bound relating the number of queries r to the fraction of errors that can be tolerated by a locally correctable code that employs a randomized algorithm in which each instance of the algorithm performs t-error correction.
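
As a simpler illustration of r-query local correction (not the 4-design construction of this article), here is the classic 2-query corrector for the binary Hadamard code, with a majority vote over repeated trials.

    import numpy as np

    rng = np.random.default_rng(9)
    k = 8
    m = rng.integers(0, 2, k)                              # message

    a_all = np.array([[(i >> j) & 1 for j in range(k)] for i in range(2 ** k)])
    codeword = (a_all @ m) % 2                             # c[a] = <m, a> mod 2

    corrupted = codeword.copy()
    flips = rng.choice(2 ** k, size=int(0.05 * 2 ** k), replace=False)
    corrupted[flips] ^= 1                                  # 5% symbol errors

    def locally_correct(c, a, trials=15):
        votes = 0
        for _ in range(trials):
            b = rng.integers(0, 2 ** k)                    # random shift
            votes += c[a ^ b] ^ c[b]                       # 2 queries per trial
        return int(votes > trials / 2)

    i = 37
    print(locally_correct(corrupted, i) == codeword[i])    # True w.h.p.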