8 results for iterative error correction

in CORA - Cork Open Research Archive - University College Cork - Ireland


Relevance: 80.00%

Abstract:

In this thesis a novel transmission format, named Coherent Wavelength Division Multiplexing (CoWDM), for use in high information spectral density optical communication networks is proposed and studied. In Chapter I a historical view of fibre-optic communication systems, as well as an overview of state-of-the-art technology, is presented to provide an introduction to the subject area. We see that, in general, the aim of modern optical communication system designers is to provide high-bandwidth services while reducing the overall cost per transmitted bit of information. In the remainder of the thesis a range of investigations, both of a theoretical and an experimental nature, are carried out using the CoWDM transmission format. These investigations are designed to consider features of CoWDM such as its dispersion tolerance, its compatibility with forward error correction and its suitability for use in currently installed long-haul networks, amongst others. A high-bit-rate optical test bed constructed at the Tyndall National Institute facilitated most of the experimental work outlined in this thesis, and a collaboration with France Telecom enabled long-haul transmission experiments using the CoWDM format to be carried out. Research was also carried out on ancillary topics such as optical comb generation, forward error correction and phase stabilisation techniques. The aim of these investigations is to verify the suitability of CoWDM as a cost-effective solution for use in both current and future high-bit-rate optical communication networks.

Relevance: 80.00%

Abstract:

Error correcting codes are combinatorial objects designed to enable reliable transmission of digital data over noisy channels. They are ubiquitously used in communication, data storage, etc. Error correction allows reconstruction of the original data from the received word. The classical decoding algorithms are constrained to output just one codeword. However, in the late 1950s researchers proposed a relaxed error correction model for potentially large error rates, known as list decoding. The research presented in this thesis focuses on reducing the computational effort and enhancing the efficiency of decoding algorithms for several codes, from both an algorithmic and an architectural standpoint. The codes in consideration are linear block codes closely related to Reed-Solomon (RS) codes. A high-speed, low-complexity algorithm and architecture are presented for encoding and decoding RS codes based on evaluation. The implementation results show that the hardware resources and the total execution time are significantly reduced compared with a classical decoder. The evaluation-based encoding and decoding schemes are modified and extended for shortened RS codes, and a software implementation shows a substantial reduction in memory footprint at the expense of latency. Hermitian codes can be seen as concatenated RS codes and are much longer than RS codes over the same alphabet. A fast, novel and efficient VLSI architecture for Hermitian codes is proposed based on interpolation decoding. The proposed architecture is shown to perform better than Kötter's decoder for high-rate codes. The thesis also explores a method of constructing optimal codes by computing the subfield subcodes of Generalized Toric (GT) codes, which are a natural extension of RS codes to several dimensions. The polynomial generators, or evaluation polynomials, for subfield subcodes of GT codes are identified, from which the dimension and a bound on the minimum distance are computed. The algebraic structure of the polynomials that evaluate into the subfield is used to simplify the list decoding algorithm for BCH codes. Finally, an efficient and novel approach is proposed for exploiting powerful codes having complex decoding but a simple encoding scheme (comparable to RS codes) in multihop wireless sensor network (WSN) applications.
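As a rough, self-contained illustration of the evaluation-based encoding referred to above (the thesis' own algorithm and architecture are not reproduced here), the sketch below encodes a message as the evaluations of its message polynomial at n distinct points of a finite field; the prime field GF(929) and the (7, 3) parameters are arbitrary choices made for the example.

```python
# Illustrative sketch (not the thesis' architecture): evaluation-based
# Reed-Solomon encoding over a prime field GF(p). Practical decoders use
# GF(2^m) and hardware-friendly arithmetic; this only shows the principle
# that a codeword is the evaluation of the message polynomial at n points.

P = 929  # a prime, so integers mod P form a field (chosen for simplicity)

def rs_encode(message, n, p=P):
    """Encode k message symbols (coefficients of a degree-(k-1) polynomial)
    as its evaluations at the points 1, 2, ..., n (all distinct mod p)."""
    assert len(message) <= n <= p - 1
    def poly_eval(coeffs, x):
        acc = 0
        for c in reversed(coeffs):      # Horner's rule
            acc = (acc * x + c) % p
        return acc
    return [poly_eval(message, alpha) for alpha in range(1, n + 1)]

# Example: a (7, 3) RS code.
codeword = rs_encode([3, 0, 5], n=7)
print(codeword)
```

Because any k of the n evaluations determine a degree-(k-1) polynomial, up to (n-k)/2 symbol errors can be corrected, which is the redundancy that the decoders discussed above exploit.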

Relevance: 80.00%

Abstract:

Bandwidth constriction and datagram loss are prominent issues that affect the perceived quality of streaming video over lossy networks, such as wireless networks. The use of layered video coding seems attractive as a means to alleviate these issues, but its adoption has been held back in large part by the inherent priority assigned to the critical lower layers and the consequences for quality that result from their loss. The proposed use of forward error correction (FEC) as a solution only further burdens the available bandwidth and can negate the perceived benefits of increased stream quality. In this paper, we propose Adaptive Layer Distribution (ALD) as a novel scalable media delivery technique that optimises the trade-off between streaming bandwidth and error resiliency. ALD is based on the principle of layer distribution, in which the critical stream data is spread amongst all datagrams, thus lessening the impact on quality due to network losses. Additionally, ALD provides a parameterised mechanism for dynamic adaptation of the scalable video, while providing increased resilience to the highest quality layers. Our experimental results show that ALD improves the perceived quality and also reduces the bandwidth demand by up to 36% in comparison to the well-known Multiple Description Coding (MDC) technique.
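A minimal sketch of the layer-distribution principle described above (this is not the authors' ALD implementation; the function and layer names are illustrative only):

```python
# Minimal sketch of the layer-distribution idea: instead of carrying the
# base layer in its own datagrams, every datagram carries a slice of every
# layer, so losing one datagram degrades all layers slightly rather than
# destroying the critical base layer.

def distribute_layers(layers, n_datagrams):
    """layers: list of byte strings, layers[0] = base layer (most critical).
    Returns n_datagrams payloads, each holding an interleaved slice of
    every layer."""
    datagrams = [bytearray() for _ in range(n_datagrams)]
    for layer in layers:
        for i, byte in enumerate(layer):
            datagrams[i % n_datagrams].append(byte)   # round-robin spread
    return [bytes(d) for d in datagrams]

base = bytes(range(16))        # stand-in for base-layer data
enh1 = bytes(range(16, 32))    # enhancement layer 1
enh2 = bytes(range(32, 48))    # enhancement layer 2
payloads = distribute_layers([base, enh1, enh2], n_datagrams=4)
# Each of the 4 payloads now contains 1/4 of every layer, so a single loss
# removes 25% of each layer instead of 100% of the base layer.
```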

Relevance: 20.00%

Abstract:

Two classes of techniques have been developed to whiten the quantization noise in digital delta-sigma modulators (DDSMs): deterministic and stochastic. In this two-part paper, a design methodology for reduced-complexity DDSMs is presented. The design methodology is based on error masking. Rules for selecting the word lengths of the stages in multistage architectures are presented. We show that the hardware requirement can be reduced by up to 20% compared with a conventional design, without sacrificing performance. Simulation and experimental results confirm theoretical predictions. Part I addresses MultistAge noise SHaping (MASH) DDSMs; Part II focuses on single-quantizer DDSMs.
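For orientation, the following is a hedged sketch of a generic MASH 1-1-1 DDSM built from three cascaded accumulators and a noise-cancellation network; the paper's word-length selection rules and the reported 20% hardware saving are not reproduced here, and the equal word lengths used below are placeholders, not the paper's choices.

```python
# Generic MASH 1-1-1 digital delta-sigma modulator: each later stage
# re-modulates the quantization residue of the previous one, and the
# carries are combined so that only heavily shaped noise remains.

def mash_111(x, bits=(8, 8, 8)):
    """x: sequence of non-negative integer inputs, x[n] < 2**bits[0].
    bits: accumulator word lengths per stage (placeholder values here).
    Returns the multi-bit output sequence y[n]."""
    mods = [1 << b for b in bits]
    s = [0, 0, 0]                       # accumulator states
    c2_d, c3_d1, c3_d2 = 0, 0, 0        # delayed carries for cancellation
    y = []
    for xn in x:
        s[0] += xn;   c1 = s[0] // mods[0]; s[0] %= mods[0]
        s[1] += s[0]; c2 = s[1] // mods[1]; s[1] %= mods[1]
        s[2] += s[1]; c3 = s[2] // mods[2]; s[2] %= mods[2]
        # Noise-cancellation network: y = c1 + (1-z^-1)c2 + (1-z^-1)^2 c3
        yn = c1 + (c2 - c2_d) + (c3 - 2 * c3_d1 + c3_d2)
        c2_d, c3_d2, c3_d1 = c2, c3_d1, c3
        y.append(yn)
    return y

out = mash_111([5] * 1000)
# The long-run average of `out` approaches 5 / 2**8, the fractional input.
```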

Relevance: 20.00%

Abstract:

Cystic Fibrosis (CF) is an autosomal recessive monogenic disorder caused by mutations in the cystic fibrosis transmembrane conductance regulator (CFTR) gene, with the ΔF508 mutation accounting for approximately 70% of all CF cases worldwide. This thesis investigates whether existing zinc finger nucleases (ZFNs) designed in this lab, and CRISPR/gRNAs designed in this thesis, can mediate efficient homology-directed repair (HDR) with appropriate donor repair plasmids to correct CF-causing mutations in a CF cell line. Firstly, the most common mutation, ΔF508, was corrected using a pair of existing ZFNs, which cleave in intron 9, and the donor repair plasmid pITR-donor-XC, which contains the correct CTT sequence and two unique restriction sites. HDR was initially determined to be <1%, but further analysis by next-generation sequencing (NGS) revealed that HDR occurred at a level of 2%. This relatively low level of repair was determined to be a consequence of the distance from the cut site to the mutation, and so, rather than designing a new pair of ZFNs, the position of the existing intron 9 ZFNs was exploited and attempts were made to correct >80% of CF-causing mutations. The ZFN cut site was used as the site for HDR of a mini-gene construct comprising exons 10-24 of the CFTR cDNA (with appropriate splice acceptor and poly-A sites) to allow production of full-length corrected CFTR mRNA. Finally, the ability to cleave closer to the mutation and mediate repair of CFTR using the latest gene editing tool, CRISPR/Cas9, was explored. Two CRISPR gRNAs were tested: CRISPR ex10 was shown to cleave at an efficiency of 15%, while CRISPR in9 cleaved at 3%. Both CRISPR gRNAs mediated HDR with appropriate donor plasmids at a rate of ~1% as determined by NGS. This is the first evidence of CRISPR-induced HDR in CF cell lines.
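Purely as an illustration of how an HDR rate such as the ~2% reported above can be estimated from amplicon NGS data (this is not the thesis' analysis pipeline; the reads and the marker sequence below are invented placeholders):

```python
# Toy estimate of an HDR rate from sequencing reads: count reads carrying a
# donor-specific marker sequence versus all reads covering the locus.
# The reads and marker here are placeholders, not real CFTR or donor data.

def hdr_rate(reads, donor_marker):
    """reads: iterable of read strings covering the edited site.
    donor_marker: short sequence unique to the donor-repaired allele."""
    edited = sum(1 for r in reads if donor_marker in r)
    return 100.0 * edited / len(reads)

reads = ["ACGTATCATAGGAAAC"] * 98 + ["ACGTATCTTAGGAAAC"] * 2   # toy reads
print(f"HDR = {hdr_rate(reads, donor_marker='ATCTTAGG'):.1f}%")  # -> 2.0%
```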

Relevance: 20.00%

Abstract:

Copper dimethylamino-2-propoxide [Cu(dmap)2] is used as a precursor for low-temperature atomic layer deposition (ALD) of copper thin films. Chemisorption of the precursor is the necessary first step of ALD, but it is not known in this case whether there is selectivity for adsorption sites, defects, or islands on the substrate. Therefore, we study the adsorption of the Cu(dmap)2 molecule on different sites of flat and rough Cu surfaces using the PBE, PBE-D3, optB88-vdW, and vdW-DF2 methods. We find that the relative order of adsorption energies for Cu(dmap)2 on Cu surfaces is Eads(PBE-D3) > Eads(optB88-vdW) > Eads(vdW-DF2) > Eads(PBE). Among the four possible adsorption configurations, the PBE and vdW-DF2 methods predict one chemisorption structure and optB88-vdW predicts three, whereas PBE-D3 predicts a chemisorbed structure for all the adsorption sites on Cu(111). All the methods, with and without van der Waals corrections, yield a chemisorbed molecule on the Cu(332) step and the Cu(643) kink because of the reduced steric hindrance on the vicinal surfaces. Strong distortion of the molecule and significant elongation of the Cu–N bonds are predicted in the chemisorbed structures, indicating that the ligand–Cu bonds break during the ALD of Cu from Cu(dmap)2. The molecule loses its initial square-planar structure and gains linear O–Cu–O bonding as these atoms attach to the surface. As a result, the ligands become unstable and the precursor becomes more reactive towards the co-reagent. Charge redistribution mainly occurs between the adsorbate O–Cu–O bond and the surface. Bader charge analysis shows that electrons are donated from the surface to the molecule in the chemisorbed structures, so that the Cu center in the molecule is partially reduced.
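For reference, a commonly used definition of the adsorption energy compared above is given below; the sign convention (more positive meaning stronger binding) is assumed here so as to be consistent with the ordering quoted in the abstract, and is not taken from the paper itself.

```latex
% Assumed convention: E_ads > 0 for a bound (chemisorbed) configuration.
E_{\mathrm{ads}} = E_{\mathrm{surf}} + E_{\mathrm{Cu(dmap)_2}}
                 - E_{\mathrm{surf+Cu(dmap)_2}}
```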

Relevance: 20.00%

Abstract:

New compensation methods are presented that can greatly reduce the slit errors (i.e., transition location errors) and interval errors induced by non-idealities in square-wave optical incremental encoders. An M/T-type, constant sample-time digital tachometer (CSDT) is selected for measuring the velocity of the sensor drives. Using these data, three encoder compensation techniques (two pseudoinverse-based methods and an iterative method) are presented that improve velocity measurement accuracy. The methods do not require precise knowledge of the shaft velocity. During the initial learning stage of the compensation algorithm (possibly performed in situ), slit errors/interval errors are calculated either through pseudoinverse-based solutions of simple approximate linear equations, which can provide fast solutions, or through an iterative method that requires very little memory storage. Subsequent operation of the motion system uses the adjusted slit positions for more accurate velocity calculation. In the theoretical analysis of the compensation of encoder errors, error sources such as random electrical noise and error in the estimated reference velocity are considered. Initially, the proposed learning compensation techniques are validated by implementing the algorithms in MATLAB, showing a 95% to 99% improvement in velocity measurement. However, it is also observed that the efficiency of the algorithm decreases with a higher presence of non-repetitive random noise and/or with errors in the reference velocity calculation. The performance improvement in velocity measurement is also demonstrated experimentally using motor-drive systems, each of which includes a field-programmable gate array (FPGA) for CSDT counting/timing purposes and a digital signal processor (DSP). Results from open-loop velocity measurement and closed-loop servo-control applications, on three optical incremental square-wave encoders and two motor drives, are compiled. When these algorithms are implemented experimentally on different drives (with and without a flywheel) and on encoders of different resolutions, slit error reductions of 60% to 86% are obtained (typically approximately 80%).
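The sketch below illustrates the pseudoinverse-based idea described above under a simple model that is assumed here rather than taken from the paper: if the shaft speed is roughly constant over one revolution, each measured edge-to-edge time depends linearly on the difference between adjacent slit-position errors, so stacking those equations and solving them in the least-squares (Moore-Penrose) sense recovers the errors up to a constant offset.

```python
# Hedged sketch of pseudoinverse-based slit-error learning (not the paper's
# exact algorithm): t_i ~ (delta + e_{i+1} - e_i) / omega, with delta the
# nominal slit spacing, omega the (unknown, roughly constant) speed and
# e_i the unknown slit-position errors.

import numpy as np

def learn_slit_errors(t, delta):
    """t: measured intervals over one revolution (slit i to slit i+1,
    wrapping back to slit 0). delta: nominal slit spacing [rad].
    Returns estimated slit-position errors e, with e[0] fixed to 0."""
    N = len(t)
    omega_hat = N * delta / np.sum(t)          # average-speed estimate
    d = omega_hat * np.asarray(t) - delta      # ~ e_{i+1} - e_i
    # Difference matrix A with A @ e = d, plus the constraint e[0] = 0.
    A = np.zeros((N, N))
    for i in range(N):
        A[i, (i + 1) % N] += 1.0
        A[i, i] -= 1.0
    A = A[:, 1:]                               # drop column for e[0] = 0
    e_rest, *_ = np.linalg.lstsq(A, d, rcond=None)
    return np.concatenate(([0.0], e_rest))

# Toy usage: simulate a 32-slit encoder with small random slit errors.
rng = np.random.default_rng(0)
N, delta, omega = 32, 2 * np.pi / 32, 50.0
e_true = np.concatenate(([0.0], 0.01 * delta * rng.standard_normal(N - 1)))
t_meas = (delta + np.roll(e_true, -1) - e_true) / omega
e_est = learn_slit_errors(t_meas, delta)
print(np.max(np.abs(e_est - e_true)))          # small residual error
```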

Relevance: 20.00%

Abstract:

The Leaving Certificate (LC) is the national, standardised state examination in Ireland necessary for entry to third-level education; it therefore presents a massive, raw corpus of data with the potential to yield invaluable insight into the phenomena of learner interlanguage. Using samples of official LC Spanish examination data, this project has compiled a digitised corpus of learner Spanish comprising the written and oral production of 100 candidates. This corpus was then analysed using a specific investigative corpus technique, Computer-aided Error Analysis (CEA; Dagneaux et al., 1998). CEA is a powerful apparatus in that it greatly facilitates the quantification and analysis of a large learner corpus in digital format. The corpus was both compiled and analysed with the use of UAM Corpus Tool (O'Donnell, 2013). This tool allows for the recording of candidate-specific variables such as grade, examination level, task type and gender, therefore allowing for critical analysis of the corpus as one unit, as separate written and oral subcorpora, and also of performance per task, level and gender. This is an interdisciplinary work combining aspects of Applied Linguistics, Learner Corpus Research and Foreign Language (FL) Learning. Beginning with a review of the context of FL learning in Ireland and Europe, I go on to discuss the disciplinary context and theoretical framework for this work and outline the methodology applied. I then perform detailed quantitative and qualitative analyses before combining all research findings and outlining the principal conclusions. This investigation does not make a priori assumptions about the data set, the LC Spanish examination, the context of FLs or any aspect of learner competence. It undertakes to provide the linguistic research community and the domain of Spanish language learning and pedagogy in Ireland with an empirical, descriptive profile of real learner performance, characterising learner difficulty.
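As a purely hypothetical illustration of the kind of cross-tabulation that CEA enables (the annotation records and error tags below are invented and are not UAM Corpus Tool output):

```python
# Toy frequency analysis over an error-annotated learner corpus: count
# error categories grouped by a candidate-level variable such as
# examination level or task type.

from collections import Counter

# Hypothetical annotation records: (candidate_id, level, task, error_tag)
annotations = [
    ("c01", "Higher",   "written", "gender_agreement"),
    ("c01", "Higher",   "written", "ser_estar"),
    ("c02", "Ordinary", "oral",    "verb_tense"),
    ("c02", "Ordinary", "written", "gender_agreement"),
    ("c03", "Higher",   "oral",    "ser_estar"),
]

def errors_by(records, field):
    """Count error tags grouped by one metadata field: 'level' or 'task'."""
    idx = {"level": 1, "task": 2}[field]
    counts = {}
    for rec in records:
        counts.setdefault(rec[idx], Counter())[rec[3]] += 1
    return counts

print(errors_by(annotations, "level"))   # error profile per exam level
print(errors_by(annotations, "task"))    # error profile per task type
```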