818 results for FOURIER TRANSFORM


Relevance:

10.00%

Publisher:

Abstract:

Near-infrared (NIR) and Fourier transform infrared (FTIR) spectroscopy have been used to determine the mineralogical character of isomorphic substitution of Mg2+ by the divalent transition metals Fe, Mn, Co and Ni in the natural halotrichite series. The minerals are characterised by d-d transitions in the NIR region 12000-7500 cm-1. The NIR spectrum of halotrichite reveals a broad feature from 12000 to 7500 cm-1 with a splitting into two bands resulting from the ferrous ion transition 5T2g → 5Eg. The presence of overtones of OH- fundamentals near 7000 cm-1 confirms molecular water in the mineral structure of the halotrichite series. The appearance of the most intense peak at around 5132 cm-1 is a common feature of the three minerals and derives from a combination of OH- vibrations of water molecules and ν2 water bending modes. The influence of the cations Mg2+, Fe2+, Mn2+, Co2+ and Ni2+ is evident in the spectra of the halotrichites, especially in the OH-stretching vibrations of wupatkiite, whose bands are shifted conspicuously to low wavenumbers at 3270, 2904 and 2454 cm-1. The observation of a high-frequency ν2 mode in the infrared spectrum at 1640 cm-1 indicates that the coordinated water molecules are strongly hydrogen bonded in natural halotrichites. The splitting of bands in the ν3 and ν4 (SO4)2- stretching regions may be attributed to the reduction of symmetry from Td to C2v for the sulphate ion. This work has shown the usefulness of NIR spectroscopy for the rapid identification and classification of the halotrichite minerals.
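The band positions above are quoted in wavenumbers (cm-1); for readers more used to wavelengths or photon energies, the conversion is a one-liner. A minimal sketch (function names are illustrative, not from the abstract):

```python
# Convert band positions in wavenumbers (cm^-1) to wavelength (nm)
# and photon energy (eV); constants from CODATA.
H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e10     # speed of light, cm/s
EV = 1.602176634e-19  # J per eV

def wavenumber_to_nm(nu_cm):
    """Wavelength in nm for a wavenumber in cm^-1."""
    return 1.0e7 / nu_cm

def wavenumber_to_ev(nu_cm):
    """Photon energy in eV for a wavenumber in cm^-1."""
    return H * C * nu_cm / EV

for band in (12000, 7500, 7000, 5132, 1640):
    print(band, round(wavenumber_to_nm(band), 1), round(wavenumber_to_ev(band), 3))
```

For example, the 12000-7500 cm-1 NIR region corresponds to roughly 833-1333 nm.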

Relevance:

10.00%

Publisher:

Abstract:

Robust image hashing seeks to transform a given input image into a shorter hashed version using a key-dependent non-invertible transform. These image hashes can be used for watermarking, image integrity authentication or image indexing for fast retrieval. This paper introduces a new method of generating image hashes based on extracting higher-order spectral features from the Radon projection of an input image. The feature extraction process is non-invertible and non-linear, and different hashes can be produced from the same image through the use of random permutations of the input. We show that the transform is robust to typical image transformations such as JPEG compression, noise, scaling, rotation, smoothing and cropping. We evaluate our system using a verification-style framework based on calculating false match and false non-match likelihoods using the publicly available Uncompressed Colour Image Database (UCID) of 1320 images. We also compare our results to Swaminathan's Fourier-Mellin based hashing method, showing at least a 1% EER improvement under noise, scaling and sharpening.
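The pipeline described (projection, then spectral features, then key-dependent randomisation, then hash bits) can be sketched in a few lines. This is a simplified stand-in, not the authors' method: it uses only axis-aligned projections instead of the full Radon transform, and plain FFT magnitudes instead of higher-order spectral features; `image_hash` and its parameters are illustrative.

```python
import numpy as np

def projections(img):
    # Axis-aligned row/column projections -- a crude stand-in for the
    # Radon transform, which projects the image at many angles.
    return np.concatenate([img.sum(axis=0), img.sum(axis=1)])

def image_hash(img, key, bits=64):
    # Spectral magnitude features of the projections (the paper extracts
    # higher-order spectral features instead).
    feat = np.abs(np.fft.rfft(projections(img.astype(float))))
    # A key-seeded random projection makes the mapping non-invertible and
    # lets different keys produce different hashes from the same image.
    rng = np.random.default_rng(key)
    w = rng.standard_normal((bits, feat.size))
    return (w @ feat > 0).astype(np.uint8)  # sign bits -> binary hash
```

The same image hashed twice with the same key reproduces the hash; changing the key yields a different bit string.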

Relevance:

10.00%

Publisher:

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and the human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean-square criterion as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, the lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters known as truncation level and scaling factor. In lattice-based compression algorithms reported in the literature the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
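The generalized Gaussian model mentioned above has a closed-form relation between its shape parameter and the first two absolute moments, so the shape can be estimated by simple moment matching. The sketch below uses bisection on that moment ratio rather than the thesis's least-squares formulation; the function names are illustrative.

```python
import math

def ggd_ratio(beta):
    # Moment ratio r(beta) = Gamma(2/b)^2 / (Gamma(1/b) * Gamma(3/b)),
    # which equals (E|x|)^2 / E[x^2] for a zero-mean generalized
    # Gaussian; it is increasing in beta (0.5 at beta=1, 2/pi at beta=2).
    g = math.gamma
    return g(2 / beta) ** 2 / (g(1 / beta) * g(3 / beta))

def estimate_shape(samples, lo=0.1, hi=5.0, iters=60):
    # Moment-matched shape estimate: compute the sample moment ratio and
    # invert ggd_ratio by bisection (monotone on [lo, hi]).
    m1 = sum(abs(x) for x in samples) / len(samples)
    m2 = sum(x * x for x in samples) / len(samples)
    r = m1 * m1 / m2
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if ggd_ratio(mid) < r:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Gaussian data should yield a shape near 2, Laplacian data a shape near 1; wavelet detail coefficients typically fall below 1.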
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project these on the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce bit requirements of necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms.
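For intuition about lattice quantization, the simplest lattice quantizer rounds each coordinate to the nearest point of a scaled integer lattice; the thesis uses richer lattices together with a truncation level, both of which this toy sketch omits (`lattice_quantize` is an illustrative name):

```python
def lattice_quantize(vec, scale):
    # Nearest point of the scaled cubic lattice scale * Z^n.  Practical
    # LVQ schemes use denser lattices (e.g. D4, E8) and truncate the
    # lattice to a finite number of shells to form the codebook; the
    # scale plays the role of the scaling-factor parameter.
    return [scale * round(v / scale) for v in vec]
```

For example, with a scale of 0.5 each component is snapped to the nearest multiple of 0.5.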
To evaluate the proposed algorithms, their objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for a reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
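The "local ridge dominant direction" step can be illustrated with the standard gradient-based orientation estimate for fingerprint blocks (a generic textbook technique, not necessarily the thesis's exact formulation):

```python
import numpy as np

def dominant_gradient_angle(block):
    # Least-squares dominant gradient orientation of an image block,
    # via the doubled-angle average of per-pixel gradients; the ridge
    # direction is perpendicular to it (angle + pi/2).
    gy, gx = np.gradient(block.astype(float))  # derivatives along rows, cols
    return 0.5 * np.arctan2(2.0 * (gx * gy).sum(),
                            (gx * gx - gy * gy).sum())
```

The doubled-angle form avoids the 180-degree ambiguity of raw gradient directions cancelling each other out.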

Relevance:

10.00%

Publisher:

Abstract:

This thesis deals with the problem of instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of signal, two major approaches are considered. For IF estimation of single-tone or digitally-modulated sinusoidal signals (like frequency shift keying signals) the approach of digital phase-locked loops (DPLLs) is considered; this is Part-I of this thesis. For FM signals the approach of time-frequency analysis is considered; this is Part-II of the thesis. In Part-I we have utilized sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) introduced significant advantages over other existing DPLLs. In the last 10 years many efforts have been made to improve DTL performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. The Hilbert transformer can be realized approximately using a finite impulse response (FIR) digital filter. This realization introduces further complexity in the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift. This gives rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, the TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. The TDTL preserves the main advantages of the DTL despite its reduced structure. An application of the TDTL in FSK demodulation is also considered.
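The key idea, that a known time delay (like a Hilbert transformer) yields a second sample from which phase can be recovered by an arctan operation, can be seen in a toy calculation. For s(t) = sin(theta) with delayed sample sin(theta - delta), delta = omega*tau, the phase follows from basic trigonometry. This illustrates the arctan/time-delay principle only, not the thesis's loop equations; the function name is illustrative.

```python
import math

def phase_from_delay(x, x_delayed, delta):
    # x = sin(theta), x_delayed = sin(theta - delta), delta = omega*tau.
    # Expanding sin(theta - delta) = sin(theta)cos(delta) - cos(theta)sin(delta)
    # gives cos(theta) = (x*cos(delta) - x_delayed) / sin(delta),
    # so atan2 recovers theta (delta must not be a multiple of pi).
    cos_theta = (x * math.cos(delta) - x_delayed) / math.sin(delta)
    return math.atan2(x, cos_theta)
```

Note the delay-induced phase shift delta = omega*tau depends on the input frequency, which is why the scheme is signal-dependent, unlike the HT's fixed 90-degree shift.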
This idea of replacing the HT by a time delay may be of interest in other signal processing systems. Hence we have analyzed and compared the behaviors of the HT and the time delay in the presence of additive Gaussian noise. Based on the above analysis, the behavior of the first- and second-order TDTLs has been analyzed in additive Gaussian noise. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications. An example is the frequency-modulated (FM) signals widely used in communication systems. Part-II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques should be utilized. For the purpose of instantaneous frequency estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis. This approach is computationally less expensive and more effective in dealing with multicomponent signals, which are the main aim of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain. Many real-life and synthetic signals are of multicomponent nature and there is little in the literature concerning IF estimation of such signals. This is why we have concentrated on multicomponent signals in Part-II. An adaptive algorithm for IF estimation using the quadratic time-frequency distributions has been analyzed. A class of time-frequency distributions that are more suitable for this purpose has been proposed.
The kernels of this class are time-only or one-dimensional, rather than the time-lag (two-dimensional) kernels. Hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
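A minimal non-parametric IF estimator of the kind Part-II builds on is peak detection in a time-frequency representation. The sketch below uses a plain spectrogram rather than the quadratic T-class distributions proposed in the thesis, and all names and parameters are illustrative:

```python
import numpy as np

def if_estimate(signal, fs, win=128, hop=16):
    # Peak-of-spectrogram IF estimator: slide a Hann-windowed FFT along
    # the signal and report the frequency of the strongest bin in each
    # frame.  Resolution is limited to one FFT bin (fs / win); quadratic
    # TFDs trade this off differently, as discussed in the thesis.
    w = np.hanning(win)
    freqs = np.fft.rfftfreq(win, 1 / fs)
    est = []
    for start in range(0, len(signal) - win, hop):
        spec = np.abs(np.fft.rfft(signal[start:start + win] * w))
        est.append(freqs[np.argmax(spec)])  # IF = location of the peak
    return np.array(est)
```

For a multicomponent signal one would track several local maxima per frame instead of the single global peak.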

Relevance:

10.00%

Publisher:

Abstract:

Fourier transform (FT) Raman, Raman microspectroscopy and Fourier transform infrared (FTIR) spectroscopy have been used for the structural analysis and characterisation of untreated and chemically treated wool fibres. For FT-Raman spectroscopy, novel methods of sample presentation have been developed and optimised for the analysis of wool. No significant fluorescence was observed and the spectra could be obtained routinely. The stability of wool keratin to the laser source was investigated and the visual and spectroscopic signs of sample damage were established. Wool keratin was found to be extremely robust, with no signs of sample degradation observed for laser powers of up to 600 mW and for exposure times of up to seven and a half hours. Due to improvements in band resolution and signal-to-noise ratio, several previously unobserved spectral features have become apparent. The assignment of the Raman-active vibrational modes of wool has been reviewed and updated to include these features. The infrared spectroscopic techniques of attenuated total reflectance (ATR) and photoacoustic spectroscopy (PAS) have been used to examine shrinkproofed and mothproofed wool samples. Shrinkproofing is an oxidative chemical treatment used to selectively modify the surface of a wool fibre. Mothproofing is a chemical treatment applied to wool for the prevention of insect attack. The ability of PAS and ATR to vary the penetration depth by varying certain instrumental parameters was used to obtain spectra of the near-surface regions of these chemically treated samples. These spectra were compared with those taken with a greater penetration depth, which therefore represent more of the bulk wool sample. The PAS and ATR spectra demonstrated that oxidation was restricted to the near-surface layer of wool. Extensive curve fitting of ATR spectra of untreated wool indicated that the cuticle was composed of a mixed protein conformation, but was predominately that of an α-helix.
The cortex was proposed to be a mixture of both α-helical and β-pleated sheet protein conformations. These findings were supported by PAS depth-profiling results. Raman microspectroscopy was used in an extensive investigation of the molecular structure of the wool fibre. This included determining the orientation of certain functional groups within the wool fibre and the symmetry of particular vibrations. The orientation of bonds within the wool fibre was investigated by orientating the wool fibre axis parallel and then perpendicular to the plane of polarisation of the electric vector of the incident radiation. It was experimentally determined that the majority of C=O and N-H bonds of the peptide bond of wool lie parallel to the fibre axis. Additionally, a number of the important vibrations associated with the α-helix were also found to lie parallel to the fibre axis. Further investigation into the molecular structure of wool involved determining what effect stretching the wool fibre had on bond orientation. Raman spectra of stretched and unstretched wool fibres indicated that extension altered the orientation of the aromatic rings and of the CH2 and CH3 groups of the amino acids. Curve fitting results revealed that extension resulted in significant destruction of the α-helix structure and a substantial increase in the β-pleated sheet structure. Finally, depolarisation ratios were calculated for the Raman spectra. The vibrations associated with the aromatic rings of amino acids had very low ratios, which indicated that the vibrations were highly symmetrical.
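The depolarisation ratio used in the final step is simply the intensity ratio of the perpendicular- and parallel-polarised Raman scattering; for totally symmetric vibrations it falls well below the depolarised limit of 0.75. A trivial helper (illustrative name):

```python
def depolarisation_ratio(i_perp, i_parallel):
    # rho = I_perp / I_parallel; rho approaches 0.75 for a fully
    # depolarised band, while rho << 0.75 indicates a highly
    # symmetrical vibration, as observed for the aromatic ring modes.
    return i_perp / i_parallel
```
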

Relevance:

10.00%

Publisher:

Abstract:

The numerical modelling of electromagnetic waves has been the focus of much research in the past. Some specific applications of electromagnetic wave scattering are in the fields of microwave heating and radar communication systems. The equations that govern the fundamental behaviour of electromagnetic wave propagation in waveguides and cavities are Maxwell's equations. In the literature, a number of methods have been employed to solve these equations. Of these methods, the classical Finite-Difference Time-Domain scheme, which uses a staggered time and space discretisation, is the best known and most widely used. However, it is complicated to implement this method on an irregular computational domain using an unstructured mesh. In this work, a coupled method is introduced for the solution of Maxwell's equations. It is proposed that the free-space component of the solution is computed in the time domain, whilst the load is resolved using the frequency-dependent electric field Helmholtz equation. This methodology results in a time-frequency domain hybrid scheme. For the Helmholtz equation, boundary conditions are generated from the time-dependent free-space solutions. The boundary information is mapped into the frequency domain using the Discrete Fourier Transform. The solution for the electric field components is obtained by solving a sparse complex system of linear equations. The hybrid method has been tested for both waveguide and cavity configurations. Numerical tests performed on waveguides and cavities for inhomogeneous lossy materials highlight the accuracy and computational efficiency of the newly proposed hybrid computational electromagnetic strategy.
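The time-to-frequency mapping of the boundary data amounts to evaluating one DFT coefficient of the recorded free-space field at the driving frequency. A sketch under the assumptions of a single driving frequency f0 and an integer number of recorded periods (function name hypothetical, not from the paper):

```python
import numpy as np

def boundary_to_frequency(samples, dt, f0):
    # One DFT coefficient of the time record at the driving frequency f0:
    # correlating with exp(-i 2 pi f0 t) and scaling by 2/N returns
    # A*exp(i*phi) for samples A*cos(2*pi*f0*t + phi) recorded over an
    # integer number of periods.  The complex value supplies the
    # amplitude and phase needed for a Helmholtz boundary condition.
    n = len(samples)
    t = np.arange(n) * dt
    return (2.0 / n) * np.sum(np.asarray(samples) * np.exp(-2j * np.pi * f0 * t))
```

In practice one would apply this per boundary node to the FDTD time histories, then assemble the resulting complex values as Dirichlet data for the Helmholtz solve.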