999 results for Cyclic generalized polynomial codes
Abstract:
The frequency spectrum is inefficiently utilized, and cognitive radio has been proposed to make full use of it. The central idea of cognitive radio is to allow a secondary user to use the spectrum concurrently with the primary user under the constraint of minimum interference. However, designing a model with minimum interference is a challenging task. In this paper, a transmission model based on the cyclic generalized polynomial codes discussed in [2] and [15] is proposed to improve spectrum utilization. The proposed model assures interference-free data transmission for the primary and secondary users. Furthermore, analytical results are presented to show that the proposed model utilizes the spectrum more efficiently than traditional models.
Abstract:
This study establishes that for a given binary BCH code $C^{0}_{n}$ of length $n$ generated by a polynomial $g(x) \in \mathbb{F}_{2}[x]$ of degree $r$ there exists a family of binary cyclic codes $\{C^{m}_{2^{m-1}(n+1)n}\}_{m \geq 1}$ such that for each $m \geq 1$, the binary cyclic code $C^{m}_{2^{m-1}(n+1)n}$ has length $2^{m-1}(n+1)n$ and is generated by a generalized polynomial $g(x^{\frac{1}{2^{m}}}) \in \mathbb{F}_{2}[x; \frac{1}{2^{m}}\mathbb{Z}_{\geq 0}]$ of degree $2^{m}r$. Furthermore, $C^{0}_{n}$ is embedded in $C^{m}_{2^{m-1}(n+1)n}$, and $C^{m}_{2^{m-1}(n+1)n}$ is embedded in $C^{m+1}_{2^{m}(n+1)n}$ for each $m \geq 1$. By a newly proposed algorithm, codewords of the binary BCH code $C^{0}_{n}$ can be transmitted with high code rate and decoded by the decoder of any member of the family $\{C^{m}_{2^{m-1}(n+1)n}\}_{m \geq 1}$ of binary cyclic codes, having the same code rate.
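To make the bookkeeping concrete, here is a small worked instance (the choice of the $[7,4]$ BCH code is an illustrative assumption, not an example from the paper): with $n = 7$, $g(x) = x^{3} + x + 1$, and $r = 3$,
\[
m = 1:\ \text{length } 2^{0}(7+1)7 = 56,\ \deg g(x^{1/2}) = 2 \cdot 3 = 6; \qquad
m = 2:\ \text{length } 2^{1}(7+1)7 = 112,\ \deg g(x^{1/4}) = 4 \cdot 3 = 12,
\]
so $C^{0}_{7} \subset C^{1}_{56} \subset C^{2}_{112}$, following the embedding pattern stated above.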
Abstract:
Codes $C_{1},\ldots,C_{M}$ of length $n$ over $\mathbb{F}_{q}$ and an $M \times N$ matrix $A$ over $\mathbb{F}_{q}$ define a matrix-product code $C = [C_{1} \cdots C_{M}] \cdot A$ consisting of all matrix products $[c_{1} \cdots c_{M}] \cdot A$. This generalizes the $(u|u+v)$-, $(u+v+w|2u+v|u)$-, $(a+x|b+x|a+b+x)$-, $(u+v|u-v)$-, etc. constructions. We study matrix-product codes using linear algebra. This provides a basis for a unified analysis of $|C|$, of $d(C)$, the minimum Hamming distance of $C$, and of $C^{\perp}$. It also reveals an interesting connection with MDS codes. We determine $|C|$ when $A$ is non-singular. To lower-bound $d(C)$, we need $A$ to be 'non-singular by columns (NSC)'. We investigate NSC matrices. We show that Generalized Reed-Muller codes are iterative NSC matrix-product codes, generalizing the construction of Reed-Muller codes, as are the ternary 'Main Sequence codes'. We obtain a simpler proof of the minimum Hamming distance of such families of codes. If $A$ is square and NSC, $C^{\perp}$ can be described using $C_{1}^{\perp},\ldots,C_{M}^{\perp}$ and a transformation of $A$. This yields $d(C^{\perp})$. Finally we show that an NSC matrix-product code is a generalized concatenated code.
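A minimal sketch in Python of the construction above (written for this listing, not taken from the paper): the classical $(u|u+v)$ construction is recovered as the matrix-product code with $A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ over $\mathbb{F}_{2}$.

import itertools
import numpy as np

def matrix_product_code(codes, A, q=2):
    """All words [c_1 ... c_M] . A over F_q.

    codes: list of M codes, each a list of length-n integer vectors.
    A: M x N integer matrix over F_q.
    """
    A = np.asarray(A)
    words = set()
    for combo in itertools.product(*codes):    # one codeword from each C_i
        Cmat = np.stack(combo)                 # M x n matrix with rows c_1, ..., c_M
        word = (Cmat.T @ A) % q                # n x N; column j is sum_i a_ij * c_i
        words.add(tuple(word.T.reshape(-1)))   # concatenate the N column blocks
    return sorted(words)

# (u|u+v): A = [[1, 1], [0, 1]] with C1 = all of F_2^2, C2 = the repetition code
C1 = [np.array(v) for v in itertools.product([0, 1], repeat=2)]
C2 = [np.array([0, 0]), np.array([1, 1])]
code = matrix_product_code([C1, C2], [[1, 1], [0, 1]])
print(len(code))  # |C| = |C1| * |C2| = 8, since A is non-singular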
Abstract:
In this paper we generalize the concept of geometrically uniform codes, formerly employed in Euclidean spaces, to hyperbolic spaces. We also show a characterization of generalized coset codes through the concept of G-linear codes.
Abstract:
Corresponding to $C_{0}[n,n-r]$, a binary cyclic code generated by a primitive irreducible polynomial $p(X)\in \mathbb{F}_{2}[X]$ of degree $r=2b$, where $b\in \mathbb{Z}^{+}$, we can construct a binary cyclic code $C[(n+1)^{3^{k}}-1,(n+1)^{3^{k}}-1-3^{k}r]$, generated by the primitive irreducible generalized polynomial $p(X^{\frac{1}{3^{k}}})\in \mathbb{F}_{2}[X;\frac{1}{3^{k}}\mathbb{Z}_{\geq 0}]$ of degree $3^{k}r$, where $k\in \mathbb{Z}^{+}$. This new code $C$ improves the code rate and has a higher error-correction capability than $C_{0}$. The purpose of this study is to establish a decoding procedure for $C_{0}$ by using $C$ in such a way that one obtains an improved code rate and error-correcting capability for $C_{0}$.
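As an illustrative instance of the parameters above (an assumption chosen for this listing, not an example from the paper), take $r = 4$ (so $b = 2$), $n = 2^{4}-1 = 15$, and $k = 1$:
\[
C\bigl[(15+1)^{3}-1,\ (15+1)^{3}-1-3\cdot 4\bigr] = C[4095,\ 4083],
\]
generated by $p(X^{1/3})$ of degree $3 \cdot 4 = 12$ in $\mathbb{F}_{2}[X; \tfrac{1}{3}\mathbb{Z}_{\geq 0}]$.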
Abstract:
In this work, we determine the coset weight spectra of all binary cyclic codes of lengths up to 33, of ternary cyclic and negacyclic codes of lengths up to 20, and of some distance-optimal binary linear codes of lengths up to 33, by using some of the algebraic properties of the codes and a computer-assisted search. Having these weight spectra, the monotonicity of the undetected error probability after $t$-error correction, $P^{(t)}_{ue}(C,p)$, can be checked with any precision in linear time. We have used a program written in Maple to check the monotonicity of $P^{(t)}_{ue}(C,p)$ for the investigated codes over a finite set of points $p \in [0, (q-1)/q]$ and in this way to determine which of them are not proper.
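A minimal Python sketch of such a grid-based monotonicity check (illustrative only; it assumes the weight spectrum $D_{w}$ of vectors decoded to a wrong codeword has already been computed, and it does not reproduce the authors' Maple program):

import numpy as np

def p_ue(D, n, p, q=2):
    # P_ue^{(t)}(C, p) = sum_w D_w * (p/(q-1))^w * (1-p)^(n-w),
    # where D_w counts the weight-w vectors decoded to a wrong codeword.
    w = np.arange(n + 1)
    return np.sum(D * (p / (q - 1)) ** w * (1 - p) ** (n - w))

def is_monotone(D, n, q=2, points=1000):
    # Evaluate on a finite grid of p in [0, (q-1)/q] and test for monotone growth.
    ps = np.linspace(0.0, (q - 1) / q, points)
    vals = [p_ue(D, n, p, q) for p in ps]
    return all(b >= a - 1e-12 for a, b in zip(vals, vals[1:]))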
Abstract:
In this paper, we study the approximation of solutions of the homogeneous Helmholtz equation $\Delta u + \omega^{2} u = 0$ by linear combinations of plane waves with different directions. We combine approximation estimates for homogeneous Helmholtz solutions by generalized harmonic polynomials, obtained from Vekua’s theory, with estimates for the approximation of generalized harmonic polynomials by plane waves. The latter is the focus of this paper. We establish best approximation error estimates in Sobolev norms, which are explicit in terms of the degree of the generalized harmonic polynomial to be approximated, the domain size, and the number of plane waves used in the approximations.
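For orientation (standard notation, not specific to this paper), the plane-wave spaces in question consist of combinations
\[
u(\mathbf{x}) \approx \sum_{k=1}^{P} a_{k}\, e^{i\omega\, \mathbf{x}\cdot\mathbf{d}_{k}}, \qquad |\mathbf{d}_{k}| = 1,
\]
each term being an exact Helmholtz solution, since $\Delta e^{i\omega \mathbf{x}\cdot\mathbf{d}} = -\omega^{2} e^{i\omega \mathbf{x}\cdot\mathbf{d}}$.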
Abstract:
Since a genome is a discrete sequence, the elements of which belong to a set of four letters, the question as to whether or not there is an error-correcting code underlying DNA sequences is unavoidable. The most common approach to answering this question is to propose a methodology to verify the existence of such a code. However, none of the methodologies proposed so far, although quite clever, has achieved that goal. In a recent work, we showed that DNA sequences can be identified as codewords in a class of cyclic error-correcting codes known as Hamming codes. In this paper, we show that a complete intron-exon gene, and even a plasmid genome, can be identified as a Hamming code codeword as well. Although this does not constitute a definitive proof that there is an error-correcting code underlying DNA sequences, it is the first evidence in this direction.
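A toy Python sketch of the kind of membership test involved (the nucleotide-to-bit mapping below is an arbitrary assumption for illustration, not the labeling used by the authors):

import numpy as np

# Parity-check matrix of the [7,4] binary Hamming code.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# Illustrative nucleotide-to-bits mapping (an assumption, one of many).
BITS = {'A': (0, 0), 'C': (0, 1), 'G': (1, 0), 'T': (1, 1)}

def is_hamming_codeword(dna):
    """Check whether the binary image of `dna` splits into 7-bit blocks
    that all satisfy H . c = 0 (mod 2)."""
    bits = [b for base in dna for b in BITS[base]]
    if len(bits) % 7:
        return False
    blocks = np.array(bits).reshape(-1, 7)
    return not (H @ blocks.T % 2).any()

print(is_hamming_codeword("AACTCGA"))  # 7 bases -> 14 bits -> two 7-bit blocks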
Abstract:
Using fixed-point arithmetic is one of the most common design choices for systems where area, power, or throughput are heavily constrained. In order to produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs grow constantly larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them.

Techniques based on interval extensions have made it possible to obtain accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that exercise them, and extracts the system's statistical moments from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% from the simulation-based reference values.

A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise-injection technique that groups the signals in the system, introduces the noise terms of each group independently, and then combines the results. In this way, the number of noise sources in the system at any given time is kept under control and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible.

This thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We present two novel techniques that reduce execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method builds on the fact that, although a given confidence level must be guaranteed for the final results of the optimization process, more relaxed levels, and hence considerably fewer simulation samples, suffice in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be accelerated by factors of up to ×240 for small/medium-sized problems.

Finally, this work introduces HOPLITE, an automated, flexible and modular framework for quantization that implements the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its execution. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
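A minimal Python sketch of the kind of Monte-Carlo round-off-noise measurement such word-length optimizations rely on (illustrative only; the datapath and word-lengths below are assumptions, not taken from the thesis):

import numpy as np

def quantize(x, frac_bits):
    # Round to a fixed-point grid with `frac_bits` fractional bits
    # (overflow of the integer part is ignored in this toy model).
    step = 2.0 ** -frac_bits
    return np.round(x / step) * step

def roundoff_noise_power(f, frac_bits, n_samples=100_000, seed=0):
    """Monte-Carlo estimate of the output round-off noise power of `f`
    when its input is quantized to `frac_bits` fractional bits."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, n_samples)
    err = f(quantize(x, frac_bits)) - f(x)   # fixed-point vs. reference output
    return np.mean(err ** 2)

# Toy non-linear datapath; sweeping word-lengths trades noise against cost.
f = lambda x: x * x + 0.5 * x
for wl in (4, 8, 12):
    print(wl, roundoff_noise_power(f, wl))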
Abstract:
A Goppa code is described in terms of a polynomial, known as the Goppa polynomial; in contrast to cyclic codes, where it is difficult to estimate the minimum Hamming distance d from the generator polynomial, a Goppa code has the property that d ≥ deg(h(X)) + 1, where h(X) is the Goppa polynomial. In this paper, we present a decoding principle for Goppa codes constructed by generalized polynomials, based on a modified Berlekamp-Massey algorithm.
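For context, the classical Berlekamp-Massey algorithm (shown here in its plain binary form as a generic illustration, not the modified variant the paper develops) finds the shortest LFSR generating a given syndrome sequence:

def berlekamp_massey(s):
    """Shortest LFSR (connection polynomial as a 0/1 coefficient list, plus
    its length L) generating the binary sequence s."""
    C, B = [1], [1]          # current and previous connection polynomials
    L, m = 0, 1              # current LFSR length, steps since last update
    for n in range(len(s)):
        # Discrepancy: s[n] + sum_{i=1..L} C[i]*s[n-i] (mod 2).
        d = s[n]
        for i in range(1, L + 1):
            d ^= C[i] & s[n - i]
        if d == 0:
            m += 1
        else:
            T = C[:]
            C = C + [0] * (len(B) + m - len(C))
            for i, b in enumerate(B):
                C[i + m] ^= b          # C(x) <- C(x) + x^m * B(x)
            if 2 * L <= n:
                L, B, m = n + 1 - L, T, 1
            else:
                m += 1
    return C, L

# Two periods of the sequence generated by s[n] = s[n-1] ^ s[n-3];
# the algorithm recovers L = 3 with C(x) = 1 + x + x^3.
seq = [1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0]
print(berlekamp_massey(seq))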
Abstract:
A new graph-based construction of generalized low-density (GLD-Tanner) codes with binary BCH constituents is described. The proposed family of GLD codes is optimal on block-erasure channels and quasi-optimal on block-fading channels, where optimality is considered in the outage-probability sense. A classical GLD code for ergodic channels (e.g., the AWGN channel, the i.i.d. Rayleigh fading channel, and the i.i.d. binary erasure channel) is built by connecting bit nodes and subcode nodes via a single random edge permutation. In the proposed construction of full-diversity GLD codes (referred to as root GLD), bit nodes are divided into 4 classes, subcodes are divided into 2 classes, and both sides of the Tanner graph are linked via 4 random edge permutations. The study focuses on non-ergodic channels with two states and can easily be extended to channels with three or more states.
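A schematic Python sketch of the kind of classed bipartite wiring the abstract describes (the class sizes, degree, and class-to-class pattern below are assumptions made for illustration, not the paper's exact design):

import random

def root_gld_edges(n_bits, n_subcodes, seed=0):
    """Illustrative Tanner-graph wiring: bit nodes split into 4 classes,
    subcode nodes into 2 classes, each bit class connected through its
    own independent random permutation."""
    rng = random.Random(seed)
    bit_classes = [list(range(c * n_bits // 4, (c + 1) * n_bits // 4))
                   for c in range(4)]
    sub_classes = [list(range(c * n_subcodes // 2, (c + 1) * n_subcodes // 2))
                   for c in range(2)]
    edges = []
    for c, bits in enumerate(bit_classes):
        subs = sub_classes[c % 2]                      # assumed class pairing
        perm = rng.sample(range(len(bits)), len(bits)) # one permutation per class
        for i, b in enumerate(bits):
            edges.append((b, subs[perm[i] % len(subs)]))
    return edges

print(len(root_gld_edges(16, 4)))  # one edge per bit node in this toy model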
Abstract:
This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bezier-Bernstein polynomial functions. This paper is generalized in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. This new construction algorithm also introduces univariate Bezier-Bernstein polynomial functions for the completeness of the generalized procedure. Like the B-spline expansion based neurofuzzy systems, Bezier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of basis function as fuzzy membership functions, moreover with the additional advantages of structural parsimony and Delaunay input space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. This new modeling network is based on additive decomposition approach together with two separate basis function formation approaches for both univariate and bivariate Bezier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data based modeling approach.
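For reference, the univariate Bernstein basis underlying such models (standard definition, written for this listing rather than taken from the paper):

from math import comb
import numpy as np

def bernstein_basis(n, t):
    """All degree-n Bernstein basis functions
    B_{i,n}(t) = C(n,i) * t^i * (1-t)^(n-i), evaluated at t in [0, 1].
    They are nonnegative and sum to 1 (partition of unity), which is what
    allows reading them as fuzzy membership functions."""
    t = np.asarray(t, dtype=float)
    return np.stack([comb(n, i) * t**i * (1 - t)**(n - i) for i in range(n + 1)])

B = bernstein_basis(3, np.linspace(0, 1, 5))
print(B.sum(axis=0))  # partition of unity: all ones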