11 results for Data encoding
in Aston University Research Archive
Abstract:
The current optical communications network consists of point-to-point optical transmission paths interconnected with relatively low-speed electronic switching and routing devices. As the demand for capacity increases, higher-speed electronic devices will become necessary. It is, however, hard to realise electronic chip-sets above 10 Gbit/s, and therefore, to increase the achievable performance of the network, electro-optic and all-optical switching and routing architectures are being investigated. This thesis aims to provide a detailed experimental analysis of high-speed optical processing within an optical time division multiplexed (OTDM) network node. This includes the functions of demultiplexing, 'drop and insert' multiplexing, data regeneration, and clock recovery. It examines the possibilities of combining these tasks using a single device. Two optical switching technologies are explored. The first is an all-optical device known as the 'semiconductor optical amplifier-based nonlinear optical loop mirror' (SOA-NOLM). Switching is achieved by using an intense 'control' pulse to induce a phase shift in a low-intensity signal propagating through an interferometer. Simultaneous demultiplexing, data regeneration and clock recovery are demonstrated for the first time using a single SOA-NOLM. The second device is an electroabsorption (EA) modulator, which until this thesis had been used in a uni-directional configuration to achieve picosecond pulse generation, data encoding, demultiplexing, and 'drop and insert' multiplexing. This thesis presents results on the use of an EA modulator in a novel bi-directional configuration. Two independent channels are demultiplexed from a high-speed OTDM data stream using a single device. Simultaneous demultiplexing with stable, ultra-low-jitter clock recovery is demonstrated, and then used in a self-contained 40 Gbit/s 'drop and insert' node.
Finally, a 10 GHz source is analysed that exploits the EA modulator bi-directionality to increase the pulse extinction ratio to a level where it could be used in an 80 Gbit/s OTDM network.
Abstract:
Digital image processing is exploited in many diverse applications, but the size of digital images places excessive demands on current storage and transmission technology. Image data compression is required to permit further use of digital image processing. Conventional image compression techniques based on statistical analysis have reached a saturation level, so it is necessary to explore more radical methods. This thesis is concerned with novel methods, based on the use of fractals, for achieving significant compression of image data within reasonable processing time without introducing excessive distortion. Images are modelled as fractal data and this model is exploited directly by compression schemes. The validity of this is demonstrated by showing that the fractal complexity measure, fractal dimension, is an excellent predictor of image compressibility. A method of fractal waveform coding is developed which has low computational demands and performs better than conventional waveform coding methods such as PCM and DPCM. Fractal techniques based on the use of space-filling curves are developed as a mechanism for hierarchical application of conventional techniques. Two particular applications are highlighted: the re-ordering of data during image scanning and the mapping of multi-dimensional data to one dimension. It is shown that there are many possible space-filling curves which may be used to scan images and that selection of an optimum curve leads to significantly improved data compression. The multi-dimensional mapping property of space-filling curves is used to substantially speed up the lookup process in vector quantisation. Iterated function systems are compared with vector quantisers, and the computational complexity of iterated function system encoding is also reduced by using the efficient matching algorithms identified for vector quantisers.
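The image-scanning application above can be sketched briefly. The following is a minimal illustration only, assuming a 2^n-by-2^n image and the standard Hilbert curve (the thesis's actual curve-selection procedure is not reproduced here): scanning pixels along a space-filling curve preserves spatial locality, so a run-length or predictive coder sees longer runs than it would under raster order.

```python
def hilbert_d2xy(order, d):
    """Map distance d along a Hilbert curve to (x, y) on a 2**order grid."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def transitions(seq):
    """Count 0/1 transitions -- a rough proxy for run-length coding cost."""
    return sum(a != b for a, b in zip(seq, seq[1:]))

# Toy binary image: left half black, right half white, on an 8x8 grid.
n = 3
side = 1 << n
img = [[1 if x >= side // 2 else 0 for x in range(side)] for y in range(side)]

raster = [img[y][x] for y in range(side) for x in range(side)]
hilbert = [img[y][x] for x, y in (hilbert_d2xy(n, d) for d in range(side * side))]
```

For this image the Hilbert scan crosses the black/white boundary far less often than the raster scan, so the scanned sequence has fewer, longer runs, which is the locality effect the thesis exploits when selecting an optimum curve.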
Abstract:
There is a growing demand for data transmission over digital networks involving mobile terminals. An important class of data required for transmission over mobile terminals is image information such as street maps, floor plans and identikit images. This sort of transmission is of particular interest to the service industries, such as the police force, fire brigade, medical services and other services. These services cannot be supported directly on mobile terminals because of the limited capacity of the mobile channels and the transmission errors caused by multipath (Rayleigh) fading. In this research, the transmission of line diagram images, such as floor plans and street maps, over digital networks involving mobile terminals at transmission rates of 2400 bit/s and 4800 bit/s has been studied. A low bit-rate source encoding technique using geometric codes is found to be suitable to represent line diagram images. In geometric encoding, the amount of data required to represent or store a line diagram image is proportional to the image detail; thus a simple line diagram image requires only a small amount of data. To study the effect of transmission errors due to mobile channels on the transmitted images, error sources (error files), which represent mobile channels under different conditions, have been produced using channel modelling techniques. Satisfactory models of the mobile channel have been obtained when compared to field test measurements. Subjective performance tests have been carried out to evaluate the quality and usefulness of the received line diagram images under various mobile channel conditions. The effect of mobile transmission errors on the quality of the received images has been determined. To improve the quality of the received images under various mobile channel conditions, forward error correction (FEC) codes with interleaving and automatic repeat request (ARQ) schemes have been proposed.
The performance of the error control codes has been evaluated under various mobile channel conditions. It has been shown that an FEC code with interleaving can be used effectively to improve the quality of the received images under both normal and severe mobile channel conditions. Under normal channel conditions, similar results have been obtained when using ARQ schemes. However, under severe mobile channel conditions, the FEC code with interleaving shows better performance.
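Why interleaving helps against the burst errors of a fading channel can be shown with a minimal sketch (the depth, codeword length and burst pattern below are illustrative, not those used in the thesis): a block interleaver writes code bits row-by-row and transmits them column-by-column, so a contiguous burst on the channel is dispersed across many codewords, each of which then sees only a few errors that a simple FEC code can correct.

```python
def interleave(bits, depth):
    """Write bits row-wise into a depth x width array, read it out column-wise."""
    width = len(bits) // depth
    assert len(bits) == depth * width
    return [bits[r * width + c] for c in range(width) for r in range(depth)]

def deinterleave(bits, depth):
    """Inverse of interleave: undo the column-wise read-out."""
    width = len(bits) // depth
    return [bits[c * depth + r] for r in range(depth) for c in range(width)]

# Four 8-bit codewords (all zeros for clarity), interleaved to depth 4.
depth, width = 4, 8
data = [0] * (depth * width)
tx = interleave(data, depth)

# A burst of 4 consecutive hits on the channel...
for i in range(10, 14):
    tx[i] ^= 1

rx = deinterleave(tx, depth)
# ...lands as exactly one error in each codeword (row), which a code
# correcting a single error per codeword could repair.
errors_per_row = [sum(rx[r * width:(r + 1) * width]) for r in range(depth)]
```

Without the interleaver the same burst would put all four errors into one codeword, exceeding the correcting power of a short FEC code; this is the mechanism behind the improved image quality reported under severe channel conditions.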
Abstract:
The need for low bit-rate speech coding is the result of growing demand on the available radio bandwidth for mobile communications, both for military purposes and for the public sector. To meet this growing demand it is required that the available bandwidth be utilized in the most economic way to accommodate more services. Two low bit-rate speech coders have been built and tested in this project. The two coders combine predictive coding with delta modulation, a property which enables them to achieve simultaneously the low bit-rate and good speech quality requirements. To enhance their efficiency, the predictor coefficients and the quantizer step size are updated periodically in each coder. This enables the coders to keep up with changes in the characteristics of the speech signal with time and with changes in the dynamic range of the speech waveform. However, the two coders differ in the method of updating their predictor coefficients. One updates the coefficients once every one hundred sampling periods and extracts the coefficients from input speech samples. This is known in this project as the Forward Adaptive Coder. Since the coefficients are extracted from input speech samples, they must be transmitted to the receiver to reconstruct the transmitted speech samples, thus adding to the transmission bit rate. The other updates its coefficients every sampling period, based on information in the output data. This coder is known as the Backward Adaptive Coder. Results of subjective tests showed both coders to be reasonably robust to quantization noise. Both were graded quite good, with the Forward Adaptive Coder performing slightly better, but at a slightly higher transmission bit rate for the same speech quality, than its Backward Adaptive counterpart. The coders yielded acceptable speech quality at 9.6 kbit/s for the Forward Adaptive Coder and 8 kbit/s for the Backward Adaptive Coder.
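The backward-adaptive principle can be sketched in a few lines. This is an illustrative toy (the constants, the adaptation rule and the absence of a predictor are assumptions; the thesis's actual coder design is richer): because the step size is adapted from the transmitted bit stream itself, the decoder can reproduce the encoder's step-size sequence exactly, with no side information on the channel.

```python
def adm_encode(samples, step0=0.1, grow=1.5, shrink=0.66, min_step=0.01):
    """Delta modulation with backward step-size adaptation (toy sketch)."""
    bits, recon = [], []
    est, step, prev = 0.0, step0, None
    for s in samples:
        bit = 1 if s >= est else 0        # transmit only the sign of the error
        bits.append(bit)
        # Backward adaptation: repeated bits suggest slope overload -> grow step.
        step = step * grow if bit == prev else max(step * shrink, min_step)
        est += step if bit else -step
        recon.append(est)
        prev = bit
    return bits, recon

def adm_decode(bits, step0=0.1, grow=1.5, shrink=0.66, min_step=0.01):
    """Decoder mirrors the encoder's adaptation using only the received bits."""
    recon = []
    est, step, prev = 0.0, step0, None
    for bit in bits:
        step = step * grow if bit == prev else max(step * shrink, min_step)
        est += step if bit else -step
        recon.append(est)
        prev = bit
    return recon
```

Over an error-free channel the decoder's reconstruction matches the encoder's local reconstruction exactly, which is what frees the Backward Adaptive Coder from transmitting coefficients and lets it run at the lower bit rate.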
Abstract:
The aim of this work was to construct short analogues of the repetitive water-binding domain of the Pseudomonas syringae ice nucleation protein, InaZ. Structural analysis of these analogues might provide data pertaining to the protein-water contacts that underlie ice nucleation. An artificial gene coding for a 48-mer repeat sequence from InaZ was synthesized from four oligodeoxyribonucleotides and ligated into the expression vector, pGEX2T. The recombinant vector was cloned in Escherichia coli and a glutathione S-transferase fusion protein obtained. This fusion protein displayed a low level of ice-nucleating activity when tested by a droplet freezing assay. The fusion protein could be cleaved with thrombin, providing a means for future recovery of the 48-mer peptide in amounts suitable for structural analysis by nuclear magnetic resonance spectroscopy.
Abstract:
Through direct modeling, a reduction of pattern-dependent errors in a standard fiber-based transmission link at a 40 Gbit/s rate is demonstrated by application of a skewed data pre-encoding. The trade-off between the improvement of the bit error rate and the loss in the data rate is examined.
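The rate trade-off in a skewed pre-encoding can be illustrated with a sketch. The mapping below is invented for illustration only (the paper's actual code is not specified in this abstract): data bits are pre-mapped onto codewords with a reduced density of '1' marks, which lowers pattern-dependent interactions in the fiber at the cost of transmitted data rate.

```python
# Rate-2/3 skewing code (illustrative): the four 3-bit words of Hamming
# weight <= 1 carry two data bits each, so the ones density of random
# data drops from 1/2 to 1/4 while the useful rate drops by a factor 2/3.
ENC = {(0, 0): (0, 0, 0), (0, 1): (0, 0, 1),
       (1, 0): (0, 1, 0), (1, 1): (1, 0, 0)}
DEC = {v: k for k, v in ENC.items()}

def skew_encode(bits):
    """Map 2-bit blocks to low-weight 3-bit codewords."""
    assert len(bits) % 2 == 0
    out = []
    for i in range(0, len(bits), 2):
        out.extend(ENC[(bits[i], bits[i + 1])])
    return out

def skew_decode(bits):
    """Invert the block mapping."""
    out = []
    for i in range(0, len(bits), 3):
        out.extend(DEC[tuple(bits[i:i + 3])])
    return out
```

On random data the encoded stream carries a '1' in only a quarter of the slots, but the line rate must rise by 3/2 (or the throughput fall to 2/3) to compensate, which is the kind of bit-error-rate-versus-data-rate trade-off the paper quantifies.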
Abstract:
Through modelling of direct error computation, a reduction of pattern-dependent errors in a standard fiber-based transmission link at a 40 Gbit/s rate is demonstrated by application of a skewed data pre-encoding. The trade-off between the bit-error-rate improvement and the data-rate loss is examined.
Abstract:
Natural language understanding (NLU) aims to map sentences to their semantic meaning representations. Statistical approaches to NLU normally require fully annotated training data where each sentence is paired with its word-level semantic annotations. In this paper, we propose a novel learning framework which trains Hidden Markov Support Vector Machines (HM-SVMs) without the use of expensive fully annotated data. In particular, our learning approach takes as input a training set of sentences labelled with abstract semantic annotations encoding underlying embedded structural relations, and automatically induces derivation rules that map sentences to their semantic meaning representations. The proposed approach has been tested on the DARPA Communicator Data and achieved an F-measure of 93.18%, outperforming the previously proposed approaches of training the hidden vector state model or conditional random fields from unaligned data, with relative error reductions of 43.3% and 10.6% respectively.
Abstract:
Concurrent coding is an encoding scheme with 'holographic'-type properties that are shown here to be robust against a significant amount of noise and signal loss. This single encoding scheme is able to correct for random errors and burst errors simultaneously, but does not rely on cyclic codes. A simple and practical scheme has been tested that displays perfect decoding when the signal-to-noise ratio is of order -18 dB. The same scheme also displays perfect reconstruction when a contiguous block of 40% of the transmission is missing. In addition, this scheme is 50% more efficient in terms of transmitted power requirements than equivalent cyclic codes. A simple model is presented that describes the process of decoding and can determine the computational load that would be expected, as well as describing the critical levels of noise and missing data at which false messages begin to be generated.
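A minimal sketch of a concurrent-style encoding may help (this is a generic prefix-hash construction for illustration; the codeword length, hash and parameters of the tested scheme are not given in the abstract): each prefix of the message sets a mark at a hashed position, and decoding is a tree search that extends only those prefixes whose marks are all present. Extra marks added by noise can spawn false branches, but they cannot erase the true message's path, which is the mechanism behind both the noise robustness and the critical level at which false messages appear.

```python
import hashlib
import random

N = 512  # number of mark positions in the codeword -- illustrative

def mark(prefix):
    """Hash a bit-prefix (list of 0/1) to a mark position in [0, N)."""
    h = hashlib.sha256(bytes(prefix)).digest()
    return int.from_bytes(h[:4], "big") % N

def encode(message):
    """Set one mark per prefix of the message."""
    return {mark(message[:i]) for i in range(1, len(message) + 1)}

def decode(marks, length):
    """Tree search: keep extending only prefixes whose marks are present."""
    candidates = [[]]
    for _ in range(length):
        candidates = [p + [b] for p in candidates for b in (0, 1)
                      if mark(p + [b]) in marks]
    return candidates

msg = [1, 0, 1, 1, 0, 1, 0, 0]
marks = encode(msg)
random.seed(1)
noisy = marks | {random.randrange(N) for _ in range(40)}  # additive noise marks
```

Decoding the noisy codeword still yields the original message among the surviving candidates; as the density of spurious marks rises, more false branches survive the search, and the computational load and false-message rate grow accordingly.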