885 results for FEC using Reed-Solomon and Tornado codes
Abstract:
Over the past few years, the number of wireless network users has been increasing. Until now, Radio-Frequency (RF) has been the dominant technology. However, the electromagnetic spectrum in this region is becoming saturated, calling for alternative wireless technologies. Recently, with the growing market for LED lighting, Visible Light Communications (VLC) has been drawing attention from the research community. First, the LED is an efficient device for illumination. Second, it offers easy modulation and high bandwidth. Finally, it can combine illumination and communication in the same device; in other words, it allows highly efficient wireless communication systems to be implemented. One of the most important aspects of a communication system is its reliability when operating over noisy channels. In these scenarios, the received data can be affected by errors. To ensure proper system operation, a Channel Encoder is usually employed. Its function is to encode the data to be transmitted in order to increase system performance. It commonly uses Error-Correcting Codes (ECC), which append redundant information to the original data. At the receiver side, the redundant information is used to recover the erroneous data. This dissertation presents the implementation steps of a Channel Encoder for VLC. Several techniques were considered, such as Reed-Solomon and Convolutional codes, Block and Convolutional Interleaving, CRC and Puncturing. A detailed analysis of the characteristics of each technique was made in order to choose the most appropriate ones. Simulink models were created to simulate how different codes behave in different scenarios. Later, the models were implemented in an FPGA and simulations were performed. Hardware co-simulations were also implemented to speed up the simulations. In the end, different techniques were combined to create a complete Channel Encoder capable of detecting and correcting both random and burst errors, thanks to the use of an RS(255,213) code with a Block Interleaver. Furthermore, after the decoding process, the proposed system can identify uncorrectable errors in the decoded data thanks to the CRC-32 algorithm.
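As an editorial illustration of the transmitter chain described in this abstract, the sketch below strings together CRC-32 framing, RS(255,213) encoding and a simple block interleaver. It assumes the third-party reedsolo package, an interleaver depth of 4 and an arbitrary payload; none of these details are taken from the dissertation itself.

```python
# Minimal transmitter-side sketch: CRC-32 -> RS(255,213) -> block interleaver.
# Assumptions: the `reedsolo` package, interleaver depth 4, dummy payload.
import zlib
from reedsolo import RSCodec

rs = RSCodec(42)  # 255 - 213 = 42 parity symbols, corrects up to 21 symbol errors

def add_crc32(payload: bytes) -> bytes:
    """Append a 4-byte CRC-32 so the receiver can flag uncorrectable blocks."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def block_interleave(codewords: list[bytes]) -> bytes:
    """Write codewords row-wise, read column-wise, spreading burst errors."""
    depth, length = len(codewords), len(codewords[0])
    return bytes(codewords[row][col] for col in range(length) for row in range(depth))

payload = bytes(range(209)) * 4                 # four 209-byte payloads (209 + 4 CRC = 213)
codewords = [bytes(rs.encode(add_crc32(payload[i * 209:(i + 1) * 209]))) for i in range(4)]
tx_stream = block_interleave(codewords)         # 4 x 255 bytes ready for the VLC channel
assert len(tx_stream) == 4 * 255
```

At the receiver the steps run in reverse: de-interleave, RS-decode, then check the CRC-32 to flag any block the decoder could not repair, mirroring the error-detection role described above.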
Abstract:
This paper presents an alternative Forward Error Correction (FEC) scheme, based on Reed-Solomon codes, aimed at protecting the transmission of RTP multimedia streams: the inter-packet symbol approach. The scheme relies on an alternative bit structure that allocates each symbol of the Reed-Solomon code across several RTP media packets. This characteristic makes it possible to better exploit the recovery capability of Reed-Solomon codes against bursty packet losses. The performance of our approach has been studied in terms of encoding/decoding time versus recovery capability, and compared with other schemes proposed in the literature. The theoretical analysis shows that our approach allows the use of a smaller Galois Field than other solutions. This smaller field size reduces the required encoding/decoding time while keeping a comparable recovery capability. Finally, experiments have been carried out to assess the performance of our approach against other schemes in a simulated environment, where models for wireless and wireline channels have been considered.
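The recovery-capability argument can be made concrete with a toy erasure count. The sketch below compares how a burst of lost packets maps to symbol erasures under two allocations; it is only a simplified model of why spreading symbols across packets helps, not the paper's exact inter-packet bit layout, and the RS(15,11) code over GF(2^4) and the burst pattern are assumed values.

```python
# Illustrative erasure accounting for two symbol-to-packet allocations.
N, K = 15, 11            # RS(15, 11) over GF(2^4): tolerates up to N - K = 4 erasures per codeword
NUM_PACKETS = 15         # 15 codewords carried in 15 RTP packets
BURST = {5, 6, 7}        # indices of packets lost in a burst

# Allocation A: codeword c travels entirely inside packet c -> a lost packet wipes a whole codeword.
erasures_same_packet = [N if c in BURST else 0 for c in range(NUM_PACKETS)]

# Allocation B: symbol i of every codeword travels in packet i -> each lost packet erases one symbol per codeword.
erasures_cross_packet = [len(BURST)] * NUM_PACKETS

print("same-packet allocation, erasures per codeword: ", erasures_same_packet)
print("cross-packet allocation, erasures per codeword:", erasures_cross_packet)
print("burst fully recoverable with cross-packet allocation?",
      all(e <= N - K for e in erasures_cross_packet))   # True: 3 <= 4
```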
Abstract:
This work studies the turbo decoding of Reed-Solomon codes in QAM modulation schemes over additive white Gaussian noise (AWGN) channels using a geometric approach. Considering the relations between the Galois field elements of the Reed-Solomon code and the QAM symbols, combined with their geometric arrangement in the constellation, a turbo decoding algorithm based on the work of Chase and Pyndiah is developed. Simulation results show that the performance achieved is similar to that obtained with the pragmatic approach based on binary decomposition and analysis.
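Chase/Pyndiah-style turbo decoding relies on per-bit reliabilities drawn from the geometry of the constellation. The sketch below computes max-log reliabilities for 16-QAM from Euclidean distances to the constellation points; the Gray labelling, the unnormalized levels and the noise variance are illustrative assumptions, not details taken from the paper.

```python
# Max-log soft demapping for 16-QAM: per-bit reliability from Euclidean distances.
import itertools

LEVELS = [-3, -1, 1, 3]                                # unnormalized 16-QAM amplitudes
GRAY = {-3: (0, 0), -1: (0, 1), 1: (1, 1), 3: (1, 0)}  # Gray labels per amplitude

# constellation: complex point -> 4-bit label (two bits from I, two from Q)
CONST = {complex(i, q): GRAY[i] + GRAY[q] for i, q in itertools.product(LEVELS, LEVELS)}

def max_log_llrs(r: complex, noise_var: float = 0.5) -> list[float]:
    """LLR(b) ~ (min d^2 over points with b=1 - min d^2 over points with b=0) / noise_var.
    Positive values favour bit 0; the magnitude is the reliability used by the soft decoder."""
    llrs = []
    for pos in range(4):
        d0 = min(abs(r - s) ** 2 for s, bits in CONST.items() if bits[pos] == 0)
        d1 = min(abs(r - s) ** 2 for s, bits in CONST.items() if bits[pos] == 1)
        llrs.append((d1 - d0) / noise_var)
    return llrs

print(max_log_llrs(2.7 - 0.9j))   # large magnitudes mark confident bits
```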
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The relatively high phase noise of coherent optical systems poses unique challenges for forward error correction (FEC). In this letter, we propose a novel semianalytical method for selecting combinations of interleaver lengths and binary Bose-Chaudhuri-Hocquenghem (BCH) codes that meet a target post-FEC bit error rate (BER). Our method requires only short pre-FEC simulations, based on which we design interleavers and codes analytically. It is applicable to pre-FEC BER ~10⁻³ and any post-FEC BER. In addition, we show that there is a tradeoff between code overhead and interleaver delay. Finally, for a target of 10⁻⁵, numerical simulations show that interleaver-code combinations selected using our method have a post-FEC BER around 2× the target. The target BER is achieved with 0.1 dB of extra signal-to-noise ratio.
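For intuition only (this is not the letter's semianalytical method): once the interleaver is deep enough for bit errors to be treated as roughly independent, the residual BER of a t-error-correcting binary BCH(n, k) code can be estimated from a binomial tail, as sketched below. BCH(1023, 973) with t = 5 and a pre-FEC BER of 10⁻³ are example values, not figures from the letter.

```python
# Residual-BER estimate for a t-error-correcting block code under i.i.d. bit errors.
from math import comb

def post_fec_ber(n: int, t: int, p: float) -> float:
    """Approximate post-FEC BER: contribution of received words with more than t bit errors."""
    return sum(i / n * comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))

print(post_fec_ber(n=1023, t=5, p=1e-3))   # compare against the chosen post-FEC BER target
```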
Abstract:
In this paper, the construction of the Reed-Solomon RS(255,239) codeword is described, and the process of encoding and decoding a message is simulated and verified. RS(255,239), or its shortened version RS(224,208), is used as the coding technique in the Low-Power Single Carrier (LPSC) physical layer described in the IEEE 802.11ad standard. The encoder takes 239 8-bit information symbols, adds 16 parity symbols, and constructs a 255-byte codeword to be transmitted over the wireless communication channel. The RS(255,239) codeword is defined over the Galois Field GF(2^8) and can correct up to 8 symbol errors. The RS(255,239) code construction is fully implemented, and a Simulink test project is built for testing and analysis purposes.
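A minimal software counterpart of the encode/corrupt/decode experiment is sketched below, assuming the third-party reedsolo package (the paper itself works with a Simulink model, not this library).

```python
# RS(255,239) over GF(2^8): encode, corrupt 8 symbols, decode.
from reedsolo import RSCodec

rs = RSCodec(16)                      # 16 parity symbols -> corrects up to 8 symbol errors
message = bytes(range(239))           # 239 information symbols
codeword = bytearray(rs.encode(message))
assert len(codeword) == 255

for i in range(8):                    # flip 8 symbols, the maximum correctable
    codeword[i] ^= 0xFF

decoded = rs.decode(codeword)[0]      # indexing assumes reedsolo >= 1.5, where decode returns a tuple
assert bytes(decoded) == message
```

The shortened RS(224,208) variant mentioned above comes from the same code by fixing 31 information symbols to zero and not transmitting them.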
Abstract:
This paper introduces the concept of special subsets applied to generator matrices based on lattices and cosets, as presented by Calderbank and Sloane. Using these special subsets, we propose a non-exhaustive search for optimum codes. Although non-exhaustive, the search always yields optimum codes for a given (k1, V, Λ/Λ′). Tables with binary and ternary optimum codes for partitions of lattices with 8, 9 and 16 cosets were obtained.
Abstract:
The purpose of this work is to define and calculate a factor of collapse related to the traditional method of designing sheet pile walls, and to identify the parameters that most influence a finite element model of this problem. The text is structured as follows: chapters 1 to 5 analyze a series of topics useful for understanding the problem, while the considerations directly related to the purpose of the work are reported in chapters 6 to 10. The first part of the document covers: what a sheet pile wall is, which codes must be followed for the design of these structures and what they prescribe, how a mathematical model of the soil can be formulated, some fundamentals of finite element analysis and, finally, the traditional methods that support the design of sheet pile walls. In chapter 6 we performed a parametric analysis, addressing the second part of the purpose of the work. Comparing the results of a laboratory test on a cantilever sheet pile wall in sandy soil with those provided by a finite element model of the same problem, we concluded that: in modelling a sandy soil, attention should be paid to the value of cohesion inserted in the model (some programs, like Abaqus, do not accept a null value for this parameter); the friction angle and the elastic modulus of the soil significantly influence the behavior of the structure-soil system; other parameters, such as the dilatancy angle or Poisson's ratio, do not seem to influence it. The logical path followed in the second part of the text is as follows. We analyzed two different structures: the first supports an excavation of 4 m, the second an excavation of 7 m. Both structures are first designed using the traditional method, then implemented in a finite element program (Abaqus) and pushed to collapse by decreasing the friction angle of the soil. The factor of collapse is the ratio between the tangents of the initial friction angle and of the friction angle at collapse. Finally, we performed a more detailed analysis of the first structure, observing that the value of the factor of collapse is influenced by a wide range of parameters, including the coefficients assumed in the traditional method and the relative stiffness of the structure-soil system. In the majority of cases, we found that the factor of collapse lies between 1.25 and 2. With some considerations reported in the text, these values can be compared with the safety factor proposed by the design code (linked to the friction angle of the soil).
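For reference, the factor of collapse defined in this abstract is the ratio of the tangents of the initial and at-collapse friction angles; the angles below are purely illustrative numbers, not results from the thesis.

```python
# Factor of collapse = tan(phi_initial) / tan(phi_collapse), with hypothetical angles.
from math import tan, radians

phi_initial, phi_collapse = 33.0, 22.0                 # degrees, illustrative values only
factor_of_collapse = tan(radians(phi_initial)) / tan(radians(phi_collapse))
print(round(factor_of_collapse, 2))                    # ~1.61, inside the 1.25-2 range reported above
```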
Abstract:
We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time-sharing methods when algebraic codes are used. A statistical-physics-based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC-based time-sharing codes, while the best performance, obtained when received transmissions are optimally decoded, is bounded by the time-sharing limit.
Abstract:
Due to copyright restrictions, this item is only available for consultation at Aston University Library and Information Services with prior arrangement.
Abstract:
In this work, we present an adaptive unequal loss protection (ULP) scheme for H.264/AVC video transmission over lossy networks. The scheme combines erasure coding, H.264/AVC error resilience techniques and importance measures in video coding. The unequal importance of the video packets is identified at the group-of-pictures (GOP) and H.264/AVC data-partitioning levels. The presented method can adaptively assign unequal amounts of forward error correction (FEC) parity across the video packets according to network conditions such as the available bandwidth, the packet loss rate and the average burst loss length. A near-optimal algorithm is developed to perform the FEC assignment. Simulation results show that our scheme can effectively utilize network resources such as bandwidth while improving the quality of the video transmission. In addition, the proposed ULP strategy ensures graceful degradation of the received video quality as the packet loss rate increases. © 2010 IEEE.
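The idea of unequal parity assignment can be sketched with a simple greedy allocator (not the paper's near-optimal algorithm): under an i.i.d. packet-loss model, each additional parity packet goes to the priority class where it most reduces the importance-weighted residual loss. The importance weights, parity budget and loss rate below are assumptions for illustration.

```python
# Greedy unequal FEC parity allocation across priority classes (illustrative only).
from math import comb

def residual_loss(k: int, r: int, p: float) -> float:
    """P(more than r packets lost out of k + r) for a (k + r, k) erasure code."""
    n = k + r
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(r + 1, n + 1))

def allocate_parity(classes: list[tuple[float, int]], budget: int, p: float) -> list[int]:
    """classes: (importance_weight, source_packets) per class; returns parity packets per class."""
    parity = [0] * len(classes)
    for _ in range(budget):
        gains = [w * (residual_loss(k, parity[c], p) - residual_loss(k, parity[c] + 1, p))
                 for c, (w, k) in enumerate(classes)]
        parity[gains.index(max(gains))] += 1
    return parity

# Assumed example: I-frame data 4x more important than P-frame data, 10% packet loss, 6 parity packets.
print(allocate_parity([(4.0, 10), (1.0, 10)], budget=6, p=0.10))
```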
Abstract:
To evaluate vaginal microbiological and functional aspects in women with and without premature ovarian failure (POF) and their relationship with sexual function. A cross-sectional study of 36 women with POF under hormonal therapy, age-matched with 36 women with normal gonadal function. Vaginal trophism was assessed through hormonal vaginal cytology, vaginal pH and the vaginal health index (VHI). Vaginal flora was assessed by the amine test, bacterioscopy and culture for fungi. Sexual function was evaluated through the Female Sexual Function Index (FSFI) questionnaire. Women in both groups were of similar age and marital status. The two groups presented trophic vaginal scores according to the VHI, but trophism was worse among women in the POF group. No difference was observed with respect to hormonal cytology and pH. Vaginal flora was similar in both groups. Women with POF showed worse sexual performance, with more pain and poorer lubrication than women in the control group. The VHI, the only evaluated parameter showing a statistical difference between the groups, did not correlate with the pain and lubrication domains of the FSFI questionnaire. These findings suggest that the use of systemic estrogen among women with POF is not enough to improve complaints of lubrication and pain, despite conferring similar trophism and vaginal flora. Other therapeutic options need to be evaluated.
Abstract:
A new criterion has recently been proposed that combines the topological instability (λ criterion) and the average electronegativity difference (Δe) among the elements of an alloy to predict and select new glass-forming compositions. In the present work, this combined criterion (λ·Δe) is applied to the Al-Ni-La and Al-Ni-Gd ternary systems, and its predictive power is validated using literature data for both systems and, additionally, our own experimental data for the Al-La-Ni system. The compositions with a high λ·Δe value found in each ternary system exhibit a very good correlation with the glass-forming ability of the different alloys, as indicated by their supercooled liquid regions (ΔTx) and their critical casting thicknesses. In the case of the Al-La-Ni system, the alloy with the largest λ·Δe value, La56Al26.5Ni17.5, exhibits the highest glass-forming ability verified for this system. Therefore, the combined λ·Δe criterion is a simple and efficient tool for selecting new glass-forming compositions in Al-Ni-RE systems. © 2011 American Institute of Physics. [doi: 10.1063/1.3563099]
Abstract:
Currently, the acoustic and nanoindentation techniques are two of the most widely used techniques for measuring the elastic modulus of materials. In this article, the fundamental principles and limitations of both techniques are presented and discussed, and recent advances in the nanoindentation technique are reviewed. An experimental study on ceramic, metallic and composite materials and on single crystals was also carried out. The results show that the ultrasonic technique provides values in agreement with those reported in the literature. However, the ultrasonic technique does not allow the elastic modulus of some small samples and single crystals to be measured. On the other hand, the nanoindentation technique estimates elastic modulus values in reasonable agreement with those measured by acoustic methods, particularly for amorphous materials, while for some polycrystalline materials some deviation from the expected values was obtained.
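For context, the acoustic route referred to above derives the Young's modulus of an isotropic solid from the longitudinal and shear wave velocities; the steel values below are typical handbook numbers used only as an example.

```python
# Young's modulus of an isotropic solid from ultrasonic wave velocities.
rho = 7850.0                  # density, kg/m^3 (typical steel)
v_l, v_t = 5900.0, 3200.0     # longitudinal and shear wave velocities, m/s

E = rho * v_t**2 * (3 * v_l**2 - 4 * v_t**2) / (v_l**2 - v_t**2)
print(f"E = {E / 1e9:.0f} GPa")   # ~208 GPa, consistent with literature values for steel
```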
Abstract:
The aim of this study was to compare the REML/BLUP and Least Squares procedures in the prediction and estimation of genetic parameters and breeding values in soybean progenies. F2:3 and F4:5 progenies were evaluated in the 2005/06 growing season, and the F2:4 and F4:6 generations derived from them were evaluated in 2006/07. These progenies originated from two semi-early experimental lines that differ in grain yield. The experiments were conducted in a lattice design, with plots consisting of a 2 m row spaced 0.5 m apart. The trait evaluated was grain yield per plot. It was observed that early selection is more efficient for discriminating the best lines from the F4 generation onwards. No practical differences were observed between the Least Squares and REML/BLUP procedures for the models and simplifications of REML/BLUP used here.