896 results for segmental compression forces


Relevance:

20.00%

Publisher:

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation of these images. A specific fixed decomposition structure is designed for the wavelet packet transform in order to save on computation, transmission, and storage costs. This decomposition structure is based on an analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a non-optimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while accounting for the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
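Before continuing with the low-frequency coefficients below, here is a rough illustration of the generalized Gaussian modelling step mentioned above. It is not the thesis's actual estimator (which uses a least squares fit to a nonlinear function of the shape parameter); the sketch instead uses the closely related moment-matching approach, assumes NumPy and SciPy are available, and uses illustrative function names and a hypothetical bracketing interval.

import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def ggd_fit(coeffs):
    """Fit a zero-mean generalized Gaussian model to wavelet subband
    coefficients by matching the ratio E[x^2] / (E|x|)^2 (moment matching,
    used here as a stand-in for the least squares fit in the thesis)."""
    coeffs = np.asarray(coeffs, dtype=float)
    m1 = np.mean(np.abs(coeffs))                 # E|x|
    m2 = np.mean(coeffs ** 2)                    # E[x^2]
    target = m2 / (m1 ** 2)

    # For a GGD with shape b, E[x^2]/(E|x|)^2 = Gamma(1/b)Gamma(3/b)/Gamma(2/b)^2,
    # which decreases monotonically in b, so a bracketed root search works.
    rho = lambda b: gamma(1.0 / b) * gamma(3.0 / b) / gamma(2.0 / b) ** 2
    beta = brentq(lambda b: rho(b) - target, 0.1, 10.0)

    # Scale follows from the second moment: E[x^2] = alpha^2 Gamma(3/b)/Gamma(1/b).
    alpha = np.sqrt(m2 * gamma(1.0 / beta) / gamma(3.0 / beta))
    return beta, alpha

A subsequent bit allocation stage would then use the fitted per-subband models (or simply the subband variances) to distribute the available bit budget among subbands.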
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structure-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
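The ridge extraction stage relies on local ridge dominant directions. Purely as an illustration of that idea (this is the standard gradient/structure-tensor orientation estimate, not the algorithm proposed in the thesis or the method of [81]), an orientation field for a grey-level fingerprint image can be sketched as follows, assuming NumPy and SciPy are available:

import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def ridge_orientation_field(image, smoothing_sigma=7.0):
    """Estimate the local dominant ridge direction at each pixel of a
    grey-level fingerprint image from locally averaged gradient products."""
    image = np.asarray(image, dtype=float)
    gx = sobel(image, axis=1)                    # horizontal gradient
    gy = sobel(image, axis=0)                    # vertical gradient
    # Average the squared-gradient products over a local neighbourhood.
    gxx = gaussian_filter(gx * gx, smoothing_sigma)
    gyy = gaussian_filter(gy * gy, smoothing_sigma)
    gxy = gaussian_filter(gx * gy, smoothing_sigma)
    # The dominant ridge direction is perpendicular to the mean gradient direction.
    return 0.5 * np.arctan2(2.0 * gxy, gxx - gyy) + np.pi / 2.0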

Relevance:

20.00%

Publisher:

Abstract:

This thesis investigates aspects of encoding the speech spectrum at low bit rates, with extensions to the effect of such coding on automatic speaker identification. Vector quantization (VQ) is a technique for jointly quantizing a block of samples at once, in order to reduce the bit rate of a coding system. The major drawback in using VQ is the complexity of the encoder. Recent research has indicated the potential applicability of the VQ method to speech when product code vector quantization (PCVQ) techniques are utilized. The focus of this research is the efficient representation, calculation and utilization of the speech model as stored in the PCVQ codebook. In this thesis, several VQ approaches are evaluated, and the efficacy of two training algorithms is compared experimentally. It is then shown that these product-code vector quantization algorithms may be augmented with lossless compression algorithms, thus yielding an improved overall compression rate. An approach using a statistical model for the vector codebook indices for subsequent lossless compression is introduced. This coupling of lossy and lossless compression enables further compression gain. It is demonstrated that this approach is able to reduce the bit rate requirement from the current 24 bits per 20 millisecond frame to below 20 bits, using a standard spectral distortion metric for comparison. Several fast-search VQ methods for use in speech spectrum coding have been evaluated. The usefulness of fast-search algorithms is highly dependent upon the source characteristics and, although previous research has been undertaken for coding of images using VQ codebooks trained with the source samples directly, the product-code structured codebooks for speech spectrum quantization place new constraints on the search methodology. The second major focus of the research is an investigation of the effect of low-rate spectral compression methods on the task of automatic speaker identification. The motivation for this aspect of the research arose from a need to simultaneously preserve speech quality and intelligibility and to provide for machine-based automatic speaker recognition using the compressed speech. This is important because there are several emerging applications of speaker identification where compressed speech is involved. Examples include mobile communications where the speech has been highly compressed, or where a database of speech material has been assembled and stored in compressed form. Although these two application areas have the same objective, that of maximizing the identification rate, the starting points are quite different. On the one hand, the speech material used for training the identification algorithm may or may not be available in compressed form. On the other hand, the new test material on which identification is to be based may only be available in compressed form. Using the spectral parameters which have been stored in compressed form, two main classes of speaker identification algorithm are examined. Some studies have been conducted in the past on bandwidth-limited speaker identification, but the use of short-term spectral compression deserves separate investigation. Combining the major aspects of the research, some important design guidelines for the construction of an identification model based on the use of compressed speech are put forward.
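To make the product-code (split) VQ idea concrete, the following sketch quantizes each part of a spectral parameter vector with its own small codebook. The split sizes and bit allocations are hypothetical, k-means stands in for whichever training algorithms the thesis actually compares, and NumPy and SciPy are assumed to be available.

import numpy as np
from scipy.cluster.vq import kmeans2

def train_split_vq(training_vectors, split_sizes, bits_per_part):
    """Train one small codebook per sub-vector; complexity stays far below a
    full-search VQ because each part is trained and searched independently."""
    codebooks, start = [], 0
    for size, bits in zip(split_sizes, bits_per_part):
        part = training_vectors[:, start:start + size]
        centroids, _ = kmeans2(part, 2 ** bits, minit='++')
        codebooks.append(centroids)
        start += size
    return codebooks

def encode_split_vq(vector, codebooks, split_sizes):
    """Encode one spectral vector as a tuple of codebook indices (one per part)."""
    indices, start = [], 0
    for size, cb in zip(split_sizes, codebooks):
        part = vector[start:start + size]
        indices.append(int(np.argmin(np.sum((cb - part) ** 2, axis=1))))
        start += size
    return tuple(indices)

For instance, a 10-dimensional parameter vector split as (3, 3, 4) with 8 bits per part would use 24 bits per frame, the baseline rate mentioned above; statistical modelling of the resulting index streams is what enables the further lossless compression gain.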

Relevance:

20.00%

Publisher:

Abstract:

This paper studies the practical but challenging problem of motion planning for a deeply submerged rigid body. Here, we formulate the dynamic equations of motion of a submerged rigid body within the framework of differential geometric mechanics and include external dissipative and potential forces. The mechanical system is represented as a forced affine-connection control system on the configuration space SE(3). Solutions to the motion planning problem are computed by concatenating and reparameterizing the integral curves of decoupling vector fields. We provide an extension to this inverse kinematic method to compensate for external potential forces caused by buoyancy and gravity. We present a mission scenario and implement the theoretically computed control strategy on a test-bed autonomous underwater vehicle. This scenario emphasizes the use of this motion planning technique in the under-actuated situation, in which the vehicle loses direct control over one or more degrees of freedom. We include experimental results to illustrate our technique and validate our method.
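For readers unfamiliar with the geometric-mechanics notation, a forced affine-connection control system is typically written in the following general form (the symbols are generic and not taken from the paper): \gamma(t) is the vehicle trajectory on SE(3), \nabla is the affine connection induced by the kinetic-energy metric, Y collects the external dissipative and potential (buoyancy and gravity) forces, and Y_1, ..., Y_m are the input vector fields with controls u^a(t):

\nabla_{\gamma'(t)}\, \gamma'(t) \;=\; Y\bigl(\gamma(t), \gamma'(t)\bigr) \;+\; \sum_{a=1}^{m} u^{a}(t)\, Y_{a}\bigl(\gamma(t)\bigr)

Roughly speaking, decoupling vector fields are velocity directions whose reparameterized integral curves remain feasible trajectories of such a system, which is what allows the concatenate-and-reparameterize planning strategy described above.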

Relevance:

20.00%

Publisher:

Abstract:

An experimental programme in 2007 used three air-suspended heavy vehicles travelling over typical urban roads to determine whether dynamic axle-to-chassis forces could be reduced by using larger-than-standard diameter longitudinal air lines. This paper presents the methodology, interim analysis and partial results from that programme. Changes in dynamic measures derived from axle-to-chassis forces are presented and discussed for the case of standard-sized longitudinal air lines versus the test case in which larger longitudinal air lines were fitted. This leads to conclusions regarding the possibility that dynamic loadings between heavy vehicle suspensions and chassis may be reduced by fitting larger longitudinal air lines to air-suspended heavy vehicles. Reductions in the shock and vibration loads on heavy vehicle suspension components could lead to lighter and more economical chassis and suspensions, and therefore to reduced tare mass and increased payloads without an increase in gross vehicle mass.

Relevance:

20.00%

Publisher:

Abstract:

Currently, well-established clinical therapeutic approaches for bone reconstruction are restricted to the transplantation of autografts and allografts, and the implantation of metal devices or ceramic-based implants to assist bone regeneration. Bone grafts possess osteoconductive and osteoinductive properties; however, they are limited in access and availability and are associated with donor site morbidity, haemorrhage, risk of infection, insufficient transplant integration, graft devitalisation, and subsequent resorption resulting in decreased mechanical stability. As a result, recent research focuses on the development of alternative therapeutic concepts. Analysing the tissue engineering literature, it can be concluded that bone regeneration has become a focus area in the field. Hence, a considerable number of research groups and commercial entities work on the development of tissue-engineered constructs for bone regeneration. However, bench-to-bedside translations are still infrequent, as the process towards approval by regulatory bodies is protracted and costly, requiring both comprehensive in vitro and in vivo studies. In translational orthopaedic research, the utilisation of large preclinical animal models is a conditio sine qua non. Consequently, to allow comparison between different studies and their outcomes, it is essential that animal models, fixation devices, surgical procedures and methods of taking measurements are well standardized to produce reliable data pools as a basis for further research directions. The following chapter reviews animal models of the weight-bearing lower extremity utilized in the field, which include representations of fracture healing, segmental bone defects, and fracture non-unions.

Relevance:

20.00%

Publisher:

Abstract:

Background: Fusionless scoliosis surgery is an early-stage treatment for idiopathic scoliosis which claims potential advantages over current fusion-based surgical procedures. Anterior vertebral stapling using a shape memory alloy staple is one such approach. Despite increasing interest in this technique, little is known about the effects on the spine following insertion, or the mechanism of action of the staple. The purpose of this study was to investigate the biomechanical consequences of staple insertion in the anterior thoracic spine, using in vitro experiments on an immature bovine model. Methods: Individual calf spine thoracic motion segments were tested in flexion, extension, lateral bending and axial rotation. Changes in motion segment rotational stiffness following staple insertion were measured on a series of 14 specimens. Strain gauges were attached to three of the staples in the series to measure forces transmitted through the staple during loading. A micro-CT scan of a single specimen was performed after loading to qualitatively examine damage to the vertebral bone caused by the staple. Findings: Small but statistically significant decreases in bending stiffness occurred in flexion, extension, lateral bending away from the staple, and axial rotation away from the staple. Each strain-gauged staple showed a baseline compressive loading following insertion which gradually decreased during testing. Post-test micro-CT showed substantial bone and growth plate damage near the staple. Interpretation: Based on our findings, it is possible that growth modulation following staple insertion is due to tissue damage rather than sustained mechanical compression of the motion segment.

Relevance:

20.00%

Publisher:

Abstract:

Fire safety design of building structures has received greater attention in recent times due to the continuing loss of property and lives during fires. However, the fire performance of light gauge cold-formed steel structures is not well understood despite their increased usage in buildings. Cold-formed steel compression members are susceptible to various buckling modes such as local and distortional buckling, and their ultimate strength behaviour is governed by these buckling modes. Therefore a research project based on experimental and numerical studies was undertaken to investigate the distortional buckling behaviour of light gauge cold-formed steel compression members under simulated fire conditions. Lipped channel sections with and without additional lips were selected in three thicknesses of 0.6, 0.8, and 0.95 mm and in both low and high strength steels (G250 and G550 steels). More than 150 compression tests were undertaken, first at ambient and then at elevated temperatures. Finite element models of the tested compression members were then developed by including the degradation of mechanical properties with increasing temperatures. Comparison of finite element analysis and experimental results showed that the developed finite element models were capable of simulating the distortional buckling and strength behaviour at ambient and elevated temperatures up to 800 °C. The validated model was used to determine the effects of mechanical properties, geometric imperfections and residual stresses on the distortional buckling behaviour and strength of cold-formed steel columns. This paper presents the details of the numerical study and the results. It demonstrates the importance of using accurate mechanical properties at elevated temperatures in order to obtain reliable strength characteristics of cold-formed steel columns under fire conditions.

Relevance:

20.00%

Publisher:

Abstract:

We present new expected risk bounds for binary and multiclass prediction, and resolve several recent conjectures on sample compressibility due to Kuzmin and Warmuth. By exploiting the combinatorial structure of concept class F, Haussler et al. achieved a VC(F)/n bound for the natural one-inclusion prediction strategy. The key step in their proof is a d = VC(F) bound on the graph density of a subgraph of the hypercube, the one-inclusion graph. The first main result of this paper is a density bound of n·choose(n−1, ≤d−1)/choose(n, ≤d) < d, which positively resolves a conjecture of Kuzmin and Warmuth relating to their unlabeled Peeling compression scheme and also leads to an improved one-inclusion mistake bound. The proof uses a new form of VC-invariant shifting and a group-theoretic symmetrization. Our second main result is an algebraic topological property of maximum classes of VC-dimension d as being d-contractible simplicial complexes, extending the well-known characterization that d = 1 maximum classes are trees. We negatively resolve a minimum degree conjecture of Kuzmin and Warmuth, the second part of a conjectured proof of correctness for Peeling, that every class has one-inclusion minimum degree at most its VC-dimension. Our final main result is a k-class analogue of the d/n mistake bound, replacing the VC-dimension by the Pollard pseudo-dimension and the one-inclusion strategy by its natural hypergraph generalization. This result improves on known PAC-based expected risk bounds by a factor of O(log n) and is shown to be optimal up to an O(log k) factor. The combinatorial technique of shifting takes a central role in understanding the one-inclusion (hyper)graph and is a running theme throughout.
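To make the notation concrete, choose(n, ≤d) denotes the sum choose(n, 0) + ... + choose(n, d), so the density bound can be evaluated directly. The small Python sketch below, with an arbitrary example of n = 20 and d = 3, is purely illustrative.

from math import comb

def binom_leq(n, d):
    """choose(n, <=d): number of subsets of an n-element set of size at most d."""
    return sum(comb(n, i) for i in range(d + 1))

def one_inclusion_density_bound(n, d):
    """The quantity n * choose(n-1, <=d-1) / choose(n, <=d) from the abstract."""
    return n * binom_leq(n - 1, d - 1) / binom_leq(n, d)

# Example: n = 20 labelled points and VC-dimension d = 3 give roughly 2.83 < 3.
# print(one_inclusion_density_bound(20, 3))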

Relevance:

20.00%

Publisher:

Abstract:

H. Simon and B. Szörényi have found an error in the proof of Theorem 52 of “Shifting: One-inclusion mistake bounds and sample compression”, Rubinstein et al. (2009). In this note we provide a corrected proof of a slightly weakened version of this theorem. Our new bound on the density of one-inclusion hypergraphs is again in terms of the capacity of the multilabel concept class. Simon and Szörényi have recently proved an alternate result in Simon and Szörényi (2009).

Relevance:

20.00%

Publisher:

Abstract:

We present new expected risk bounds for binary and multiclass prediction, and resolve several recent conjectures on sample compressibility due to Kuzmin and Warmuth. By exploiting the combinatorial structure of concept class F, Haussler et al. achieved a VC(F)/n bound for the natural one-inclusion prediction strategy. The key step in their proof is a d = VC(F) bound on the graph density of a subgraph of the hypercube, the one-inclusion graph. The first main result of this report is a density bound of n·choose(n−1, ≤d−1)/choose(n, ≤d) < d, which positively resolves a conjecture of Kuzmin and Warmuth relating to their unlabeled Peeling compression scheme and also leads to an improved one-inclusion mistake bound. The proof uses a new form of VC-invariant shifting and a group-theoretic symmetrization. Our second main result is an algebraic topological property of maximum classes of VC-dimension d as being d-contractible simplicial complexes, extending the well-known characterization that d = 1 maximum classes are trees. We negatively resolve a minimum degree conjecture of Kuzmin and Warmuth, the second part of a conjectured proof of correctness for Peeling, that every class has one-inclusion minimum degree at most its VC-dimension. Our final main result is a k-class analogue of the d/n mistake bound, replacing the VC-dimension by the Pollard pseudo-dimension and the one-inclusion strategy by its natural hypergraph generalization. This result improves on known PAC-based expected risk bounds by a factor of O(log n) and is shown to be optimal up to an O(log k) factor. The combinatorial technique of shifting takes a central role in understanding the one-inclusion (hyper)graph and is a running theme throughout.