988 results for Compression Analysis


Relevance: 70.00%

Abstract:

Differential settlement at the bridge approach, between the deck and the rail track on ground, is widely regarded as a challenging technical and economic problem. It is caused by the abrupt stiffness change between the bridge deck and the track on ground, and by changes in the stiffness of the backfill and subgrade with soil moisture content and loading history. To minimise the negative social and economic impacts of poorly performing railway tracks at bridge transition zones, special attention must be given at the design, construction and maintenance stages. Obtaining an appropriate design solution for a given site condition is critically challenging, and most existing conventional design approaches cannot capture the actual on-site behaviour because of their inherent assumptions of continuity and their failure to account for local effects. Existing design techniques are evaluated to estimate their contribution to a potential solution for bridge transition zones. This paper analyses five approaches: the Chinese Standard, three variants of the European Standard, and the Australian approach. Each design approach is used to calculate the layer thicknesses, accounting for critical design features such as train speed, axle load, backfill and subgrade condition, and dynamic loading response. Considering the correlation between track degradation and design parameters, the paper concludes that an optimised design approach for bridge transition zones is still needed.

Relevance: 60.00%

Abstract:

Based on Terzaghi's consolidation theory and the percentage of consolidation, U, versus time factor, T, relationship for a constant/linear excess pore water pressure distribution, it is possible to generate theoretical log10(H²/t) versus U curves, where H is the length of the drainage path of the consolidating layer and t is the time, for different known values of the coefficient of consolidation, cv. A method has been developed whereby both the theoretical and experimental behavior of soils during consolidation can be compared and studied simultaneously on the same plot. The experimental log10(H²/t) versus U curves have been compared with the theoretical curves, and the deviations of the experimental behavior from the theory are explained in terms of initial compression and secondary compression. Analysis of the results indicates that secondary compression essentially starts at about 60% consolidation. A simple procedure is presented for calculating the value of cv from the δ–t (settlement versus time) data using the log10(H²/t) versus U plot.
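Since T = cv·t/H², the theoretical curve for a given cv is simply log10(H²/t) = log10(cv) − log10(T), with T taken from the Terzaghi U–T relationship. The sketch below generates such theoretical curves using the standard Terzaghi approximations for a uniform initial excess pore pressure distribution; the cv values and the paper's exact curve-matching procedure are not reproduced, so treat it only as an illustration.

import numpy as np

def time_factor(U_percent):
    """Terzaghi time factor T for a degree of consolidation U (%), using the
    standard approximations for uniform initial excess pore pressure."""
    U = np.asarray(U_percent, dtype=float)
    return np.where(U <= 60.0,
                    (np.pi / 4.0) * (U / 100.0) ** 2,
                    1.781 - 0.933 * np.log10(100.0 - U))

def theoretical_log_H2_over_t(U_percent, cv):
    """Theoretical log10(H^2/t): since T = cv*t/H^2, H^2/t = cv/T and
    log10(H^2/t) = log10(cv) - log10(T)."""
    return np.log10(cv) - np.log10(time_factor(U_percent))

# Family of theoretical curves for several assumed cv values (cm^2/s, illustrative only)
U = np.linspace(1.0, 95.0, 95)
for cv in (1e-4, 5e-4, 1e-3):
    curve = theoretical_log_H2_over_t(U, cv)
    print(f"cv = {cv:.0e}: log10(H^2/t) at U = 50% -> {curve[np.argmin(abs(U - 50))]:.3f}")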

Relevance: 40.00%

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on analysis of the information-packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In the lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a non-optimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage.

Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice outermost shell, while properly maintaining a small codebook size. It also resolves the wedge-region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structure-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating-average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
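The least-squares estimator for the generalized Gaussian shape parameter is not spelled out in the abstract; as a rough illustration of how a subband's shape parameter can be estimated in practice, the sketch below uses the well-known moment-matching approach instead, which is a simpler alternative to the thesis's method.

import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma

def ggd_shape_by_moment_matching(coeffs):
    """Estimate the generalized Gaussian shape parameter beta for a subband by
    matching the sample ratio E|x| / sqrt(E[x^2]) to its theoretical value
    r(beta) = Gamma(2/beta) / sqrt(Gamma(1/beta) * Gamma(3/beta)).
    r(beta) is monotone in beta, so a bracketing root-finder suffices."""
    x = np.asarray(coeffs, dtype=float)
    r_hat = np.mean(np.abs(x)) / np.sqrt(np.mean(x ** 2))
    f = lambda b: gamma(2.0 / b) / np.sqrt(gamma(1.0 / b) * gamma(3.0 / b)) - r_hat
    return brentq(f, 0.05, 5.0)   # plausible range of shape parameters

# Example on synthetic Laplacian-like data (true beta = 1)
rng = np.random.default_rng(0)
sample = rng.laplace(scale=1.0, size=50_000)
print(f"estimated shape parameter: {ggd_shape_by_moment_matching(sample):.2f}")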

Relevance: 40.00%

Abstract:

This thesis introduced Bayesian statistics as an analysis technique to isolate resonant frequency information in in-cylinder pressure signals taken from internal combustion engines. Applications of these techniques are relevant to engine design (performance and noise), energy conservation (fuel consumption) and alternative fuel evaluation. The use of Bayesian statistics, rather than traditional techniques, allowed a more in-depth investigation of engine parameters that were previously difficult to isolate on a cycle-by-cycle basis. Specifically, these techniques made it possible to determine the start of pre-mixed and diffusion combustion and to resolve the in-cylinder temperature profile on individual consecutive engine cycles. Dr Bodisco further demonstrated the utility of the Bayesian analysis techniques by applying them to in-cylinder pressure signals taken from a compression ignition engine run with fumigated ethanol.
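The thesis's models are not given in this summary; purely as an illustration of the general idea, the hypothetical sketch below computes a grid posterior over a single resonant frequency under a Gaussian likelihood and a flat prior, with made-up signal parameters.

import numpy as np

# Hypothetical illustration: grid posterior over the frequency of a noisy,
# damped sinusoid (Gaussian likelihood, flat prior, assumed known noise level
# and damping rate -- none of these values come from the thesis).
rng = np.random.default_rng(1)
fs, n = 100_000.0, 2048                  # sample rate (Hz) and record length
t = np.arange(n) / fs
f_true, sigma = 6_500.0, 0.5             # "resonant" frequency and noise std
y = np.sin(2 * np.pi * f_true * t) * np.exp(-300.0 * t) + rng.normal(0, sigma, n)

freqs = np.linspace(5_000.0, 8_000.0, 1_500)
log_post = np.empty_like(freqs)
for i, f in enumerate(freqs):
    model = np.sin(2 * np.pi * f * t) * np.exp(-300.0 * t)   # damped-sinusoid template
    log_post[i] = -0.5 * np.sum((y - model) ** 2) / sigma ** 2

log_post -= log_post.max()               # normalize in log space for stability
post = np.exp(log_post)
post /= post.sum()
print(f"MAP frequency: {freqs[np.argmax(post)]:.1f} Hz, "
      f"posterior mean: {np.sum(freqs * post):.1f} Hz")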

Relevance: 40.00%

Abstract:

The purpose of this paper is to present exergy charts for carbon dioxide (CO2) based on the new fundamental equation of state, together with the results of a thermodynamic analysis of conventional and trans-critical vapour compression refrigeration cycles using those data. The calculation scheme is implemented on the Mathematica platform. There exist upper and lower bounds for the high cycle pressure for a given set of evaporating and pre-throttling temperatures. The maximum possible exergetic efficiency for each case was determined, and empirical correlations for exergetic efficiency and COP, valid in the range of temperatures studied here, are obtained. The exergy losses have been quantified.
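The paper's charts rest on its own equation-of-state implementation in Mathematica; as an independent, hypothetical illustration of the cycle calculation, the sketch below estimates the COP of a simple trans-critical CO2 cycle with the CoolProp property library (not used in the paper) at an assumed operating point.

from CoolProp.CoolProp import PropsSI

# Assumed operating point (not from the paper): evaporating at 0 degC,
# gas-cooler (pre-throttling) exit at 35 degC, high-side pressure 10 MPa.
T_evap = 273.15          # K
T_gc_out = 308.15        # K
p_high = 10.0e6          # Pa

# State 1: saturated vapour leaving the evaporator
p_low = PropsSI('P', 'T', T_evap, 'Q', 1, 'CO2')
h1 = PropsSI('H', 'T', T_evap, 'Q', 1, 'CO2')
s1 = PropsSI('S', 'T', T_evap, 'Q', 1, 'CO2')

# State 2: isentropic compression to the high-side pressure
h2 = PropsSI('H', 'P', p_high, 'S', s1, 'CO2')

# State 3: gas-cooler exit; state 4: isenthalpic expansion back to p_low
h3 = PropsSI('H', 'P', p_high, 'T', T_gc_out, 'CO2')
h4 = h3

cop = (h1 - h4) / (h2 - h1)   # refrigerating effect / ideal compressor work
print(f"p_low = {p_low / 1e5:.1f} bar, COP (isentropic compression) = {cop:.2f}")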

Relevance: 40.00%

Abstract:

In this paper, we explore the use of LDPC codes for nonuniform sources under the distributed source coding paradigm. Our analysis reveals that several capacity-approaching LDPC codes indeed approach the Slepian-Wolf bound for nonuniform sources as well. Monte Carlo simulation results show that highly biased sources can be compressed to within 0.049 bits/sample of the Slepian-Wolf bound for moderate block lengths.
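The Slepian-Wolf limit the simulations are measured against is the conditional entropy H(X|Y). For a hypothetical binary model, a biased source X ~ Bernoulli(q) observed through a binary symmetric correlation channel with crossover probability p (an assumption for illustration; the paper's exact model is not stated in the abstract), the bound can be computed as follows.

import numpy as np

def h2(p):
    """Binary entropy in bits, with h2(0) = h2(1) = 0."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def slepian_wolf_rate(q, p):
    """Minimum rate H(X|Y) for compressing X with side information Y at the
    decoder, assuming X ~ Bernoulli(q) and Y = X XOR N with N ~ Bernoulli(p)
    independent of X."""
    py1 = q * (1 - p) + (1 - q) * p           # P(Y = 1)
    return h2(q) + h2(p) - h2(py1)            # H(X|Y) = H(X) + H(Y|X) - H(Y)

# Example: a highly biased source with a mildly noisy correlation channel
print(f"H(X|Y) = {slepian_wolf_rate(q=0.1, p=0.05):.3f} bits/sample")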

Relevance: 40.00%

Abstract:

Inspired by the Brazilian disk geometry, we examine the utility of an edge cracked semicircular disk (ECSD) specimen for rapid assessment of the fracture toughness of brittle materials using compressive loading. It is desirable to optimize the geometry towards a constant form factor F for evaluating K_I. In this investigation, photoelastic and finite element results for K_I evaluation highlight the effect of the loading, modeled as a Hertzian distribution. A Hertzian loading subtending 4 degrees at the center leads to a surprisingly constant form factor of 1.36. This special case is further analyzed by applying uniform pressure over a chord to facilitate testing.

Relevance: 40.00%

Abstract:

A new mixed-mode compression fracture specimen, the obliquely oriented edge cracked semicircular disk (OECSD), is analyzed by extending the pure opening-mode configuration of the edge cracked semicircular disk (ECSD) under Hertzian compression. Photoelastic experiments are conducted on two OECSD specimens of the same size but with different crack lengths and inclinations. The finite element method (FEM) is used to solve a number of cases of the problem, varying crack length and crack inclination, and the FE results show good agreement with the experiments. The inclination of the edge crack in the OECSD can be chosen so as to obtain any mode-mixity ratio between zero and one, and beyond, for any crack length. The new specimen can be used for fracture testing under compression more conveniently than existing specimens in several ways.

Relevance: 40.00%

Abstract:

We propose a highly efficient content-lossless compression scheme for Chinese document images. The scheme combines morphologic analysis with pattern matching to cluster patterns. To obtain error maps with as few errors as possible, morphologic analysis is applied to decompose and recompose the Chinese character patterns. In the pattern matching, the criteria are adapted to the characteristics of Chinese characters. Since small components can sometimes be inserted into the blank spaces of large components, the pattern library images can be kept small. Arithmetic coding is applied in the final compression stage. Our method achieves much better compression performance than most alternative methods and assures content-lossless reconstruction.
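The matching criteria adapted to Chinese characters are not detailed in the abstract; the hypothetical sketch below only illustrates the generic error-map idea behind pattern clustering in bilevel document compression, using an XOR mismatch count against an arbitrary threshold.

import numpy as np

def error_map(candidate, pattern):
    """XOR error map between two same-size binary glyph bitmaps (1 = ink)."""
    return np.logical_xor(candidate.astype(bool), pattern.astype(bool))

def matches(candidate, pattern, max_error_ratio=0.02):
    """Cluster decision: accept the match if the fraction of mismatched pixels
    is below a threshold (a stand-in for the paper's actual criteria)."""
    if candidate.shape != pattern.shape:
        return False
    errors = error_map(candidate, pattern).sum()
    return errors / candidate.size <= max_error_ratio

# Toy example: two nearly identical 16x16 glyphs differing in a few pixels
rng = np.random.default_rng(2)
glyph = (rng.random((16, 16)) > 0.6).astype(np.uint8)
noisy = glyph.copy()
noisy[0, :3] ^= 1                      # flip three pixels
print(matches(noisy, glyph))           # True: small error map, same cluster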
