998 results for COMPRESSION MODES


Relevance: 20.00%

Publisher:

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on an analysis of the information-packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean-square criterion, as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least-squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
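The thesis formulates a least-squares fit on a nonlinear function of the shape parameter; as a simpler illustrative stand-in, the sketch below estimates the generalized Gaussian shape parameter by matching the moment ratio (E|x|)² / E[x²] to its closed form and solving by bisection. The sample sizes, search bounds, and test distributions are assumptions for illustration, not the thesis's actual estimator.

```python
import math
import random

def ggd_moment_ratio(nu):
    # For a zero-mean generalized Gaussian with shape parameter nu,
    # (E|x|)^2 / E[x^2] = Gamma(2/nu)^2 / (Gamma(1/nu) * Gamma(3/nu)),
    # which is monotonically increasing in nu.
    return math.gamma(2.0 / nu) ** 2 / (math.gamma(1.0 / nu) * math.gamma(3.0 / nu))

def estimate_shape(samples, lo=0.1, hi=10.0, iters=60):
    # Match the empirical moment ratio to the theoretical one by bisection.
    m1 = sum(abs(x) for x in samples) / len(samples)
    m2 = sum(x * x for x in samples) / len(samples)
    target = m1 * m1 / m2
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if ggd_moment_ratio(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

random.seed(0)
gaussian = [random.gauss(0.0, 1.0) for _ in range(50000)]
laplacian = [random.expovariate(1.0) * random.choice((-1, 1)) for _ in range(50000)]
print(estimate_shape(gaussian))   # should be close to 2 (Gaussian case)
print(estimate_shape(laplacian))  # should be close to 1 (Laplacian case)
```

The shape parameter recovered this way (near 2 for Gaussian data, near 1 for Laplacian data) is what drives the choice of quantizer per subband.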
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge-region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms.
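To make the lattice-quantization step concrete: the core of any LVQ encoder is a fast nearest-lattice-point routine. The sketch below shows the standard Conway-Sloane procedure for the D4 lattice (integer vectors in four dimensions with even coordinate sum), which is one of the non-cubic lattices the abstract alludes to. It illustrates only the nearest-point step, not the thesis's parameter-selection technique.

```python
import math

def round_half_away(x):
    # deterministic rounding (Python's built-in round() rounds half to even)
    return math.floor(x + 0.5)

def nearest_d4(point):
    # D4 = integer vectors in R^4 whose coordinates sum to an even number.
    # Conway-Sloane: round each coordinate to the nearest integer; if the
    # resulting sum is odd, re-round the coordinate with the largest
    # rounding error in the opposite direction.
    rounded = [round_half_away(x) for x in point]
    if sum(rounded) % 2 == 0:
        return rounded
    k = max(range(4), key=lambda i: abs(point[i] - rounded[i]))
    rounded[k] += 1 if point[k] > rounded[k] else -1
    return rounded

print(nearest_d4([0.6, 0.1, 0.1, 0.1]))  # odd rounded sum gets corrected
```

In a full encoder this routine would be applied after scaling the source vector by the quantizer's scaling factor, with indices restricted to the chosen truncation level.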
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structure-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating-average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
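The "local ridge dominant direction" step can be illustrated with the textbook least-squares (structure-tensor) orientation estimate over an image block; this is a common approach in fingerprint processing, offered here as a generic sketch rather than the thesis's specific algorithm. The synthetic stripe image is an assumption for demonstration.

```python
import math

def block_orientation(block):
    # Least-squares dominant orientation of a 2-D block from image gradients:
    # gradient angle = 0.5 * atan2(2*sum(gx*gy), sum(gx^2 - gy^2));
    # ridges run perpendicular to the dominant gradient direction.
    h, w = len(block), len(block[0])
    gxx = gxy = gyy = 0.0
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = (block[i][j + 1] - block[i][j - 1]) / 2.0  # central difference, x
            gy = (block[i + 1][j] - block[i - 1][j]) / 2.0  # central difference, y
            gxx += gx * gx
            gyy += gy * gy
            gxy += gx * gy
    grad_angle = 0.5 * math.atan2(2.0 * gxy, gxx - gyy)
    return grad_angle + math.pi / 2.0  # ridge orientation from the x-axis

# Synthetic vertical "ridges": intensity varies along columns only,
# so gradients point along x and ridges are oriented at pi/2.
stripes = [[math.sin(j) for j in range(16)] for _ in range(16)]
print(block_orientation(stripes))
```

A real pipeline would compute this per block over the foreground, smooth the orientation field, and trace ridges along the estimated directions.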

Relevance: 20.00%

Publisher:

Abstract:

This thesis investigates aspects of encoding the speech spectrum at low bit rates, with extensions to the effect of such coding on automatic speaker identification. Vector quantization (VQ) is a technique for jointly quantizing a block of samples at once, in order to reduce the bit rate of a coding system. The major drawback in using VQ is the complexity of the encoder. Recent research has indicated the potential applicability of the VQ method to speech when product code vector quantization (PCVQ) techniques are utilized. The focus of this research is the efficient representation, calculation and utilization of the speech model as stored in the PCVQ codebook. In this thesis, several VQ approaches are evaluated, and the efficacy of two training algorithms is compared experimentally. It is then shown that these product-code vector quantization algorithms may be augmented with lossless compression algorithms, thus yielding an improved overall compression rate. An approach using a statistical model for the vector codebook indices for subsequent lossless compression is introduced. This coupling of lossy compression and lossless compression enables further compression gain. It is demonstrated that this approach is able to reduce the bit rate requirement from the current 24 bits per 20-millisecond frame to below 20, using a standard spectral distortion metric for comparison. Several fast-search VQ methods for use in speech spectrum coding have been evaluated. The usefulness of fast-search algorithms is highly dependent upon the source characteristics and, although previous research has been undertaken for coding of images using VQ codebooks trained with the source samples directly, the product-code structured codebooks for speech spectrum quantization place new constraints on the search methodology. The second major focus of the research is an investigation of the effect of low-rate spectral compression methods on the task of automatic speaker identification.
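The claim that lossless coding of codebook indices yields further gain can be illustrated by comparing the empirical entropy of a quantizer's index stream against the fixed rate of log2(K) bits per index. The one-dimensional codebook and Gaussian source below are assumptions for illustration, not the thesis's speech data.

```python
import math
import random

def quantize(samples, codebook):
    # Nearest-neighbour VQ encode: map each sample to its codebook index.
    return [min(range(len(codebook)), key=lambda k: abs(s - codebook[k]))
            for s in samples]

def empirical_entropy(indices):
    # Shannon entropy of the index stream in bits per index; this is the
    # rate an ideal lossless entropy coder would approach.
    counts = {}
    for i in indices:
        counts[i] = counts.get(i, 0) + 1
    n = len(indices)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

random.seed(1)
codebook = [-3.0, -1.0, 0.0, 1.0, 3.0, 6.0, 9.0, 12.0]   # 8 entries: 3 bits fixed-rate
samples = [random.gauss(0.0, 1.0) for _ in range(10000)]  # indices skew toward inner cells
indices = quantize(samples, codebook)
print(math.log2(len(codebook)), empirical_entropy(indices))
```

Because the source concentrates on a few inner codewords, the index entropy falls well below the fixed rate, which is exactly the gap a statistical model over indices can exploit.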
The motivation for this aspect of the research arose from a need to simultaneously preserve the speech quality and intelligibility and to provide for machine-based automatic speaker recognition using the compressed speech. This is important because there are several emerging applications of speaker identification where compressed speech is involved. Examples include mobile communications where the speech has been highly compressed, or where a database of speech material has been assembled and stored in compressed form. Although these two application areas have the same objective - that of maximizing the identification rate - the starting points are quite different. On the one hand, the speech material used for training the identification algorithm may or may not be available in compressed form. On the other hand, the new test material on which identification is to be based may only be available in compressed form. Using the spectral parameters which have been stored in compressed form, two main classes of speaker identification algorithm are examined. Some studies have been conducted in the past on bandwidth-limited speaker identification, but the use of short-term spectral compression deserves separate investigation. Combining the major aspects of the research, some important design guidelines for the construction of an identification model when based on the use of compressed speech are put forward.

Relevance: 20.00%

Publisher:

Abstract:

The acoustic emission (AE) technique is a popular tool for the structural health monitoring of civil, mechanical and aerospace structures. It is a non-destructive method based on the rapid release of energy within a material, in the form of stress waves, caused by crack initiation or growth. Recording these waves by means of sensors and subsequently analysing the recorded signals conveys information about the nature of the source. The ability to locate the source of stress waves is an important advantage of the AE technique; but as AE waves travel in various modes and may undergo mode conversions, an understanding of the modes (‘modal analysis’) is often necessary in order to determine the source location accurately. This paper presents the results of experiments aimed at finding the locations of artificial AE sources on a thin plate and identifying wave modes in the recorded signal waveforms. Different source-locating techniques are investigated and the importance of wave mode identification is explored.
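A minimal one-dimensional sketch of arrival-time-difference source location between two sensors, assuming a single non-dispersive wave speed; the sensor positions and wave speed are hypothetical. Real plate waves are dispersive and travel in multiple modes, which is precisely why the paper stresses mode identification before applying such formulas.

```python
def locate_1d(sensor_a, sensor_b, dt, wave_speed):
    """Source position on the line between two sensors, from the arrival-time
    difference dt = t_a - t_b (positive when the wave reaches sensor B first).

    Geometry: (x - a) - (b - x) = wave_speed * dt for a source at x between
    a and b, so x = midpoint + wave_speed * dt / 2.
    """
    mid = 0.5 * (sensor_a + sensor_b)
    return mid + 0.5 * wave_speed * dt

# Hypothetical setup: sensors at 0 m and 1 m, plate wave speed 5000 m/s,
# source 0.3 m from sensor A arrives 80 microseconds earlier at A.
print(locate_1d(0.0, 1.0, -8e-5, 5000.0))
```

Extending this to a plate requires at least three sensors and an accurate group velocity for the identified mode, since mixing up modes with different speeds biases the located position.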

Relevance: 20.00%

Publisher:

Abstract:

Concentrations of ultrafine (<0.1 µm) particles (UFPs) and PM2.5 (<2.5 µm) were measured whilst commuting along a similar route by train, bus, ferry and automobile in Sydney, Australia. One trip on each transport mode was undertaken during both morning and evening peak hours throughout a working week, for a total of 40 trips. Analyses comprised one-way ANOVA to compare overall (i.e. all trips combined) geometric mean concentrations of both particle fractions measured across transport modes, and assessment of both the correlation between wind speed and individual trip means of UFPs and PM2.5, and the correlation between the two particle fractions. Overall geometric mean concentrations of UFPs and PM2.5 ranged from 2.8 (train) to 8.4 (bus) × 10⁴ particles cm⁻³ and 22.6 (automobile) to 29.6 (bus) µg m⁻³, respectively, and a statistically significant difference (p < 0.001) between modes was found for both particle fractions. Individual trip geometric mean concentrations were between 9.7 × 10³ (train) and 2.2 × 10⁵ (bus) particles cm⁻³, and 9.5 (train) to 78.7 (train) µg m⁻³. Estimated commuter exposures were variable, and the highest return trip mean PM2.5 exposure occurred in the ferry mode, whilst the highest UFP exposure occurred during bus trips. The correlation between fractions was generally poor, and in keeping with the duality of particle mass and number emissions in vehicle-dominated urban areas. Wind speed was negatively correlated with, and a generally poor determinant of, UFP and PM2.5 concentrations, suggesting a more significant role for other factors in determining commuter exposure.
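The two statistics at the centre of this abstract, geometric means per mode and a one-way ANOVA across modes, can be sketched as below. The trip-mean values are made-up placeholders in the reported ranges, not the study's data; the ANOVA is run on log-transformed values, consistent with summarizing by geometric means.

```python
import math

def geometric_mean(values):
    # exp of the arithmetic mean of the logs
    return math.exp(sum(math.log(v) for v in values) / len(values))

def one_way_anova_F(groups):
    # F = between-group mean square / within-group mean square
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical trip-mean UFP counts (particles per cm^3) for two modes
train = [9.7e3, 1.5e4, 2.1e4, 3.0e4]
bus = [6.1e4, 8.9e4, 1.2e5, 2.2e5]
print(geometric_mean(train), geometric_mean(bus))
print(one_way_anova_F([[math.log(v) for v in train], [math.log(v) for v in bus]]))
```

A large F statistic on the log scale corresponds to the significant between-mode difference the study reports; significance testing proper would compare F against the F-distribution with the stated degrees of freedom.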

Relevance: 20.00%

Publisher:

Abstract:

We investigated the effect of dielectric filling in a V groove on the propagation parameters of channel plasmon-polariton (CPP) modes. In particular, the existence conditions and critical groove angles, mode localization, field structure, dispersion, and propagation distances of CPP modes are analyzed as functions of the dielectric permittivity inside the groove. It is demonstrated that increasing the dielectric permittivity in the groove results in a rapid increase of mode localization near the tip of the groove and an increase of both critical angles that determine the range of groove angles for which CPP modes can exist. Detailed analysis of the field structure has demonstrated that the maximum of the field in a CPP mode is typically reached at a small distance from the tip of the groove. The effect of a rounded tip is also investigated.

Relevance: 20.00%

Publisher:

Abstract:

Transit Oriented Developments (TODs) are often designed to promote the use of sustainable modes of transport and reduce car usage. This paper investigates the effect of personal and transit characteristics on the travel choices of TOD users. Binary logistic regression models were developed to determine the probability of choosing sustainable modes of transport, including walking, cycling and public transport. Kelvin Grove Urban Village (KGUV), located in Brisbane, Australia, was chosen as the case-study TOD. The modal splits for employees, students, shoppers and residents showed that 47% of employees, 84% of students, 71% of shoppers and 56% of residents used sustainable modes of transport.
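A binary logistic regression of the kind the paper describes can be sketched with plain gradient ascent on synthetic data. The single feature (distance to a transit stop) and all coefficients below are hypothetical illustrations, not the KGUV survey results.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logit(X, y, lr=0.1, epochs=2000):
    # Maximum-likelihood fit of P(sustainable mode) = sigmoid(b0 + b . x)
    # by batch gradient ascent on the log-likelihood.
    n, d = len(X), len(X[0])
    w = [0.0] * (d + 1)  # intercept followed by feature coefficients
    for _ in range(epochs):
        grad = [0.0] * (d + 1)
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = yi - p
            grad[0] += err
            for j in range(d):
                grad[j + 1] += err * xi[j]
        w = [wj + lr * gj / n for wj, gj in zip(w, grad)]
    return w

# Synthetic survey: feature = distance to transit stop in km; shorter
# distances make a sustainable-mode choice more likely (assumed effect).
random.seed(2)
X = [[random.uniform(0.0, 2.0)] for _ in range(400)]
y = [1 if random.random() < sigmoid(2.0 - 2.5 * x[0]) else 0 for x in X]
w = fit_logit(X, y)
p_near = sigmoid(w[0] + w[1] * 0.1)  # respondent 100 m from a stop
p_far = sigmoid(w[0] + w[1] * 1.8)   # respondent 1.8 km from a stop
print(w, p_near, p_far)
```

The fitted coefficient on distance comes out negative, so the predicted probability of choosing a sustainable mode falls with distance, which is the kind of effect such models are used to quantify.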

Relevance: 20.00%

Publisher:

Abstract:

Background: Bone healing is sensitive to the initial mechanical conditions, with tissue differentiation being determined within days of trauma. Whilst axial compression is regarded as stimulatory, the role of interfragmentary shear is controversial. The purpose of this study was to determine how the initial mechanical conditions produced by interfragmentary shear and torsion differ from those produced by axial compressive movements.

Methods: The finite element method was used to estimate the strain, pressure and fluid flow in the early callus tissue produced by the different modes of interfragmentary movement found in vivo. Additionally, tissue formation was predicted according to three principally different mechanobiological theories.

Findings: Large interfragmentary shear movements produced comparable strains and less fluid flow and pressure than moderate axial interfragmentary movements. Additionally, combined axial and shear movements did not result in overall increases in the strains, and the strain magnitudes were similar to those produced by axial movements alone. Only when axial movements were applied did the non-distortional component of the pressure–deformation theory influence the initial tissue predictions.

Interpretation: This study found that the mechanical stimuli generated by interfragmentary shear and torsion differed from those produced by axial interfragmentary movements. The initial tissue formation as predicted by the mechanobiological theories was dominated by the deformation stimulus.