326 results for quadrature
Abstract:
Mode of access: Internet.
Abstract:
"Errata"--P. [xvi]
Abstract:
With: Higher geometry and mensuration / by Nathan Scholfield.
Focussing particularly on solid-state laser systems, the phase-noise penalties of laser injection-locking and electro-optical phase-locking are derived using linearised quantum mechanical models. The fundamental performance limit (minimum achievable output phase noise) for an injection-locked laser (IJL) system at low frequencies is equal to that of a standard phase-insensitive amplifier, whereas, in principle, that of a phase-locked laser (PLL) system can be better. At high frequencies, the output phase noise of the IJL system is limited by that of the master laser, while that of the PLL system tends to a weighted sum of contributions from the master and slave laser fields. Under conditions of large amplification, particularly where there has been significant attenuation, the noise penalties are shown to be substantial. Nonideal photodetector characteristics are shown to add significantly to the noise penalties for the PLL system. (C) 2005 Elsevier B.V. All rights reserved.
Abstract:
A powerful decoupling method is introduced to obtain decoupled signal voltages from quadrature coils in magnetic resonance imaging (MRI). The new method uses the knowledge of the position of the signal source in MRI, the active slice, to define a new mutual impedance which accurately quantifies the coupling voltages and enables them to be removed almost completely. Results show that by using the new decoupling method, the percentage errors in the decoupled voltages are of the order of 10^(-7) % and isolations between two coils are more than 170 dB.
Abstract:
We report for the first time the experimental demonstration of doubly differential quadrature phase shift keying (DDQPSK) using optical coherent detection. This method is more robust against high frequency offsets (FO) than conventional single differential quadrature phase shift keying (SDQPSK) with offset compensation. DDQPSK is shown to be able to compensate large FOs (up to the baud rate) and has lower computational requirements than other FO compensation methods. DDQPSK is a simple algorithm to implement in a real-time decoder for optical burst switched network scenarios. Simulation results are also provided, which show good agreement with the experimental results for both SDQPSK and DDQPSK transmissions. © 1989-2012 IEEE.
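The frequency-offset robustness described above follows from taking a second-order phase difference: a constant per-symbol phase rotation enters every first-order difference identically and cancels when adjacent first differences are conjugate-multiplied. A minimal sketch of that principle (an illustrative symbol mapping, not the authors' real-time decoder; all function names are hypothetical):

```python
# Illustrative DDQPSK sketch: data rides on the second-order phase
# difference, so a constant frequency offset cancels in the decoder.
import cmath, math, random

def ddqpsk_encode(symbols):
    """Map QPSK symbols (0..3) onto a doubly differential phase sequence."""
    psi = [0.0, 0.0]              # two reference symbols
    delta = 0.0                   # first-order phase difference
    for s in symbols:
        delta += s * (math.pi / 2)
        psi.append(psi[-1] + delta)
    return [cmath.exp(1j * p) for p in psi]

def ddqpsk_decode(rx):
    """Recover symbols from second-order phase differences."""
    d1 = [rx[k] * rx[k - 1].conjugate() for k in range(1, len(rx))]
    out = []
    for k in range(1, len(d1)):
        d2 = d1[k] * d1[k - 1].conjugate()   # frequency offset cancels here
        out.append(round(cmath.phase(d2) % (2 * math.pi) / (math.pi / 2)) % 4)
    return out

random.seed(0)
data = [random.randrange(4) for _ in range(100)]
tx = ddqpsk_encode(data)
fo = 0.9 * math.pi                # large constant phase increment per symbol
rx = [s * cmath.exp(1j * fo * k) for k, s in enumerate(tx)]
print(ddqpsk_decode(rx) == data)  # True despite the uncompensated offset
```

A single-differential decoder would have to estimate and subtract `fo` explicitly; here the second conjugate multiplication removes it for free, which is why DDQPSK tolerates offsets up to the baud rate.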
Abstract:
We demonstrate the first multi-wavelength regeneration of quadrature phase shift keyed (QPSK) formatted signals, showing a simultaneous Q²-factor improvement in excess of 3.8 dB for signals degraded by phase distortion.
Abstract:
We propose a new nonlinear optical loop mirror based configuration capable of regenerating regular rectangular quadrature amplitude modulated (QAM) signals. The scheme achieves suppression of noise distortion on both signal quadratures through the realization of two orthogonal regenerative Fourier transformations. Numerical simulations show the performance of the scheme for high constellation complexities (including 256-QAM formats).
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of information packing performance of several decompositions, two-dimensional power spectral density, effect of each frequency band on the reconstructed image, and the human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criteria as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, the lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters known as truncation level and scaling factor. In lattice-based compression algorithms reported in the literature the lattice structure is commonly predetermined leading to a nonoptimized quantization approach.
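The thesis's least-squares shape-parameter estimator is not detailed in the abstract. As an illustration of fitting a generalized Gaussian model to wavelet-like coefficients, the sketch below uses the common moment-ratio method instead, exploiting the fact that the ratio E|x| / sqrt(E[x^2]) is a monotone function of the shape parameter beta (function names are hypothetical):

```python
# Illustrative GGD shape-parameter fit via moment matching (not the
# thesis's least-squares estimator): solve for beta such that the model's
# ratio E|x|/sqrt(E[x^2]) matches the sample ratio, by bisection.
import math, random

def ggd_ratio(beta):
    """Model ratio E|x| / sqrt(E[x^2]) for a zero-mean GGD with shape beta."""
    return math.gamma(2.0 / beta) / math.sqrt(
        math.gamma(1.0 / beta) * math.gamma(3.0 / beta))

def fit_ggd_shape(samples, lo=0.1, hi=5.0):
    m1 = sum(abs(x) for x in samples) / len(samples)   # sample E|x|
    m2 = sum(x * x for x in samples) / len(samples)    # sample E[x^2]
    target = m1 / math.sqrt(m2)
    for _ in range(60):        # bisection: ggd_ratio increases with beta
        mid = 0.5 * (lo + hi)
        if ggd_ratio(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Laplacian-distributed samples correspond to shape beta = 1
random.seed(1)
laplace = [random.expovariate(1.0) * random.choice([-1, 1])
           for _ in range(200000)]
beta_hat = fit_ggd_shape(laplace)
print(beta_hat)   # close to 1.0 for Laplacian input
```

For a Gaussian input the same fit converges near beta = 2, so the estimate doubles as a quick check on how heavy-tailed a given subband's coefficients are.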
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project these on the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multiquantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce bit requirements of necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms.
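The core operation behind any lattice vector quantizer, including the pyramid variant described above, is a nearest-lattice-point search. As a hedged illustration (the truncation level and scaling factor that the thesis optimizes are not modeled here), the sketch below quantizes to the D4 lattice (integer vectors with even coordinate sum) using the standard round-then-fix rule:

```python
# Sketch of nearest-point quantization in the D_n lattice, the basic
# step a (non-cubic) lattice vector quantizer performs on each vector.
def closest_point_dn(x):
    """Nearest point of D_n (integer vectors with even coordinate sum)."""
    r = [round(v) for v in x]          # componentwise rounding hits Z^n
    if sum(r) % 2 == 0:
        return r                       # already in D_n
    # Otherwise, re-round the coordinate with the largest rounding error
    # to its second-nearest integer, restoring an even coordinate sum.
    errs = [abs(v - rv) for v, rv in zip(x, r)]
    i = errs.index(max(errs))
    r[i] += 1 if x[i] > r[i] else -1
    return r

p = closest_point_dn([0.6, 0.6, -0.1, 0.2])
print(p)   # [1, 1, 0, 0]
```

A full quantizer would first scale the input vector, apply this search, then clip to the chosen truncation shell; those parameters are exactly what the thesis determines adaptively per subimage rather than fixing in advance.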
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for a reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.