903 results for Compression.
Abstract:
This article presents techniques for faster evolution of wavelet lifting coefficients for fingerprint image compression (FIC). In addition to increasing the computational speed by 81.35%, the evolved coefficients performed much better than coefficients reported in the literature. Generally, full-size images are used for evolving wavelet coefficients, which is time-consuming. To overcome this, in this work wavelets were evolved with resized, cropped, resized-average and cropped-average images. On comparing the peak signal-to-noise ratios (PSNR) offered by the evolved wavelets, it was found that the cropped images outperformed the resized images and were on par with the results reported to date. Wavelet lifting coefficients evolved from an average of four 256×256 centre-cropped images took less than one fifth of the evolution time reported in the literature and produced an improvement of 1.009 dB in average PSNR. Improvement in average PSNR was observed for other compression ratios (CR) and for degraded images as well. The proposed technique gave better PSNR at various bit rates with the set partitioning in hierarchical trees (SPIHT) coder, and the coefficients performed well with other fingerprint databases.
Abstract:
This paper describes the genetic algorithm (GA) evolution of optimized wavelets that surpass the CDF 9/7 wavelet for fingerprint compression and reconstruction. Optimized wavelets have been evolved in previous works in the literature, but those approaches are computationally complex and time-consuming. In this work, therefore, a simple approach is taken to reduce the computational complexity of the evolution algorithm. Wavelets evolved from a training set of three 32×32 cropped images performed much better than coefficients reported in the literature: an average improvement of 1.0059 dB in PSNR over the classical CDF 9/7 wavelet was achieved across 80 fingerprint images, and the computational speed was increased by 90.18%. The coefficients evolved for a compression ratio (CR) of 16:1 also yielded better average PSNR at other CRs, and improvement in average PSNR was observed for degraded and noisy images as well.
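Both abstracts above describe the same basic recipe: perturb the CDF 9/7 lifting coefficients with an evolutionary search whose fitness is the PSNR obtained on a few small training images. Below is a minimal Python sketch of that recipe under stated assumptions; it is not the papers' implementation. The names (fwd, inv, fitness, evolve) are hypothetical, a (1+lambda) evolution strategy stands in for the papers' GA, a single lifting level with top-k coefficient thresholding stands in for the SPIHT coder, and periodic boundary handling is assumed.

import numpy as np

# Classical CDF 9/7 lifting coefficients (alpha, beta, gamma, delta, zeta),
# used as the starting individual for the evolutionary search.
CDF97 = np.array([-1.586134342, -0.052980118, 0.882911075, 0.443506852, 1.149604398])

def fwd(x, c):
    # One level of a 9/7-style lifting transform along the last axis;
    # assumes even-length signals and periodic boundaries (np.roll).
    a, b, g, d, z = c
    even, odd = x[..., 0::2].astype(float), x[..., 1::2].astype(float)
    odd  = odd  + a * (even + np.roll(even, -1, axis=-1))  # predict 1
    even = even + b * (odd  + np.roll(odd,  1, axis=-1))   # update 1
    odd  = odd  + g * (even + np.roll(even, -1, axis=-1))  # predict 2
    even = even + d * (odd  + np.roll(odd,  1, axis=-1))   # update 2
    return np.concatenate([even * z, odd / z], axis=-1)

def inv(y, c):
    # Exact inverse of fwd: undo the lifting steps in reverse order.
    a, b, g, d, z = c
    n = y.shape[-1] // 2
    even, odd = y[..., :n] / z, y[..., n:] * z
    even = even - d * (odd  + np.roll(odd,  1, axis=-1))
    odd  = odd  - g * (even + np.roll(even, -1, axis=-1))
    even = even - b * (odd  + np.roll(odd,  1, axis=-1))
    odd  = odd  - a * (even + np.roll(even, -1, axis=-1))
    out = np.empty_like(y, dtype=float)
    out[..., 0::2], out[..., 1::2] = even, odd
    return out

def fwd2(img, c):   # separable 2-D transform: rows, then columns
    return fwd(fwd(img, c).T, c).T

def inv2(coef, c):  # inverse 2-D transform: columns, then rows
    return inv(inv(coef.T, c).T, c)

def fitness(img, c, cr=16):
    # PSNR after keeping only the largest 1/cr fraction of coefficients --
    # a crude stand-in for the SPIHT coder used in the papers.
    Y = fwd2(img, c)
    k = max(1, img.size // cr)
    t = np.partition(np.abs(Y).ravel(), -k)[-k]
    rec = inv2(np.where(np.abs(Y) >= t, Y, 0.0), c)
    mse = np.mean((img.astype(float) - rec) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / max(mse, 1e-12))

def evolve(train_imgs, gens=50, children=20, sigma=0.02, seed=0):
    # (1+lambda) evolution strategy -- a simple stand-in for the papers' GA.
    rng = np.random.default_rng(seed)
    best = CDF97.copy()
    best_fit = np.mean([fitness(im, best) for im in train_imgs])
    for _ in range(gens):
        for _ in range(children):
            cand = best + rng.normal(0.0, sigma, size=best.shape)
            f = np.mean([fitness(im, cand) for im in train_imgs])
            if f > best_fit:
                best, best_fit = cand, f
    return best, best_fit

Hypothetically, evolve would be fed three 32×32 crops (second abstract) or an average of four 256×256 centre crops (first abstract), and the evolved coefficient vector then reused at other compression ratios.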
Abstract:
The thesis explores the area of still image compression. Image compression techniques can be broadly classified into lossless and lossy compression. The most common lossy techniques are based on transform coding, vector quantization and fractals. Transform coding is the simplest of these and generally employs reversible transforms such as the DCT and DWT. The Mapped Real Transform (MRT) is an evolving integer transform based on real additions alone. The present research work aims at developing new image compression techniques based on the MRT. Most transform coding techniques employ fixed block size image segmentation, usually 8×8. Hence, a fixed block size transform coder is implemented using the MRT, and its merits and demerits are analyzed for both 8×8 and 4×4 blocks. The N² unique MRT coefficients for each block are computed using templates. Considering the merits and demerits of the fixed block size techniques, a hybrid form of these techniques is implemented to improve compression performance; the hybrid coder is found to perform better than the fixed block size coders. Thus, if the block size is made adaptive, the performance can be improved further. In adaptive block size coding, the block size may vary from the size of the image down to 2×2, so computing the MRT using templates is impractical due to memory requirements. An adaptive transform coder based on the Unique MRT (UMRT), a compact form of the MRT, is therefore implemented to obtain better performance in terms of PSNR and HVS measures. The suitability of the MRT for vector quantization of images is then investigated, and a UMRT-based Classified Vector Quantization (CVQ) scheme is implemented, in which the edges in the images are identified and classified using a UMRT-based criterion. Based on the above experiments, a new technique named "MRT-based Adaptive Transform Coder with Classified Vector Quantization (MATC-CVQ)" is developed, and its performance is evaluated and compared against existing techniques. Comparisons with standard JPEG and Shapiro's well-known Embedded Zerotree Wavelet (EZW) coder show that the proposed technique gives better performance for the majority of images.
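The MRT itself is not reproduced here. Purely as an illustration of the fixed block size transform coding pipeline the abstract describes, the following sketch uses the 8×8 DCT (one of the reversible transforms the abstract mentions) with per-block retention of the largest-magnitude coefficients; the function names and the keep parameter are illustrative.

import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    # 2-D type-II DCT with orthonormal scaling.
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(block):
    return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def block_code(img, bs=8, keep=10):
    # Fixed block size transform coding: for each bs x bs block, keep only
    # the `keep` largest-magnitude coefficients and reconstruct.
    # Assumes image dimensions are multiples of bs.
    out = np.zeros(img.shape, dtype=float)
    for i in range(0, img.shape[0], bs):
        for j in range(0, img.shape[1], bs):
            Y = dct2(img[i:i + bs, j:j + bs].astype(float))
            t = np.partition(np.abs(Y).ravel(), -keep)[-keep]
            out[i:i + bs, j:j + bs] = idct2(np.where(np.abs(Y) >= t, Y, 0.0))
    return out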
Abstract:
Recurrent iterated function systems (RIFSs) improve on iterated function systems (IFSs) by using elements of the theory of Markovian stochastic processes, and can produce more natural-looking images. We construct new RIFSs consisting essentially of a vertical contraction factor function and nonlinear transformations, and apply these RIFSs to image compression.
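As a toy illustration of what makes an IFS "recurrent": map selection follows a Markov transition matrix rather than fixed probabilities, so which map may follow which is constrained. The affine maps and transition matrix below are illustrative only; the paper's nonlinear transformations and vertical contraction factor function are not reproduced.

import numpy as np

# Toy recurrent IFS: affine maps w_i(x) = A_i x + b_i, with a Markov
# transition matrix P restricting which map may follow which
# (an ordinary IFS would use the same fixed probabilities in every row).
maps = [
    (np.array([[0.5, 0.0], [0.0, 0.5]]), np.array([0.00, 0.0])),
    (np.array([[0.5, 0.0], [0.0, 0.5]]), np.array([0.50, 0.0])),
    (np.array([[0.5, 0.0], [0.0, 0.5]]), np.array([0.25, 0.5])),
]
P = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])  # forbid applying the same map twice in a row

def render(n_points=100_000, seed=0):
    # Chaos-game rendering of the RIFS attractor.
    rng = np.random.default_rng(seed)
    x, i, pts = np.zeros(2), 0, []
    for _ in range(n_points):
        i = rng.choice(len(maps), p=P[i])  # Markov step over map indices
        A, b = maps[i]
        x = A @ x + b
        pts.append(x)
    return np.array(pts)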
Abstract:
Image registration is an important component of image analysis used to align two or more images. In this paper, we present a new framework for image registration based on compression. The basic idea underlying our approach is the conjecture that two images are correctly registered when we can maximally compress one image given the information in the other. The contribution of this paper is twofold. First, we show that the image registration process can be dealt with from the perspective of a compression problem. Second, we demonstrate that the similarity metric introduced by Li et al. performs well in image registration. Two different versions of the similarity metric have been used: the Kolmogorov version, computed using standard real-world compressors, and the Shannon version, calculated from an estimate of the entropy rate of the images.
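The Kolmogorov version of the Li et al. metric is typically approximated by the normalized compression distance, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is the compressed size under a real-world compressor. A minimal sketch, using zlib as the compressor and a brute-force search over integer shifts as a stand-in for a full registration search; the function names are hypothetical:

import zlib
import numpy as np

def csize(data: bytes) -> int:
    # Compressed size in bytes; zlib stands in for "standard real-world compressors".
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance: small when one object helps compress the other.
    cx, cy, cxy = csize(x), csize(y), csize(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

def register_shift(ref, moving, max_shift=8):
    # Brute-force the integer (dy, dx) shift minimizing NCD between the images.
    best, best_d = (0, 0), float('inf')
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(moving, (dy, dx), axis=(0, 1))
            d = ncd(ref.tobytes(), shifted.tobytes())
            if d < best_d:
                best_d, best = d, (dy, dx)
    return best, best_d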
Abstract:
The primary objective of this study is to determine whether nonlinear frequency compression and linear transposition algorithms provide speech perception benefit in school-aged children.
Abstract:
This paper examines the selection of compression ratios for hearing aids.
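For context, hearing-aid compression ratios refer to wide dynamic range compression: above a kneepoint, every `ratio` dB of input level change produces 1 dB of output level change. A minimal sketch of that static input/output curve, with an illustrative 50 dB kneepoint; the paper's actual selection procedure is not reproduced.

def compressor_output_db(input_db, kneepoint_db=50.0, ratio=3.0):
    # Static wide-dynamic-range-compression curve: linear below the
    # kneepoint; above it, `ratio` dB of input change yields 1 dB of
    # output change. E.g. with ratio=3.0, a 30 dB rise above the
    # kneepoint emerges as a 10 dB rise.
    if input_db <= kneepoint_db:
        return input_db
    return kneepoint_db + (input_db - kneepoint_db) / ratio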
Abstract:
The self-consistent field theory (SCFT) prediction for the compression force between two semi-dilute polymer brushes is compared to the benchmark experiments of Taunton et al. [Nature, 1988, 332, 712]. The comparison is done with previously established parameters, without any fitting parameters whatsoever. The SCFT provides a significant quantitative improvement over the classical strong-stretching theory (SST), yielding excellent agreement with the experiments. Contrary to earlier suggestions, chain fluctuations cannot be ignored under normal experimental conditions. Although the analytical expressions of the SST provide invaluable aids to understanding the qualitative behavior of polymer brushes, the numerical SCFT is necessary to provide quantitatively accurate predictions.
Abstract:
We compare the use of plastically compressed collagen gels with conventional collagen gels as scaffolds onto which corneal limbal epithelial cells (LECs) are seeded to construct an artificial corneal epithelium. LECs were isolated from bovine corneas (limbus), seeded onto either conventional uncompressed or novel compressed collagen gels, and grown in culture. Scanning electron microscopy (SEM) showed that fibers within the uncompressed gel were loose and irregularly ordered, whereas fibers within the compressed gel were densely packed and more evenly arranged. Quantitative analysis of LEC expansion across the surface of the two gels showed similar growth rates (p > 0.05). Under SEM, LECs expanded on uncompressed gels showed a rough and heterogeneous morphology, whereas on the compressed gel the cells displayed a smooth and homogeneous morphology. Transmission electron microscopy (TEM) showed the compressed scaffold to contain collagen fibers of regular diameter and similar orientation, resembling the collagen fibers of the normal cornea. TEM and light microscopy also showed that cell–cell and cell–matrix attachment, stratification, and cell density were superior in LECs expanded on compressed collagen gels. This study demonstrates that the compressed collagen gel is an excellent biomaterial scaffold highly suited to the construction of an artificial corneal epithelium and a significant improvement upon conventional collagen gels.
Abstract:
This paper examines the normal force between two opposing polyelectrolyte brushes and the interpenetration of their chains that is responsible for sliding friction. It focuses on the special case of semi-dilute brushes in a salt-free theta solvent, for which Zhulina and Borisov [J. Chem. Phys., 1997, 107, 5952] have derived analytical predictions using the classical strong-stretching theory (SST) introduced by Semenov and developed by Milner, Witten and Cates. Interestingly, the SST predicts that the brushes contract, maintaining a polymer-free gap as they are compressed together, which provides an explanation for the ultra-low frictional forces observed in experiment. We examine the degree to which the SST predictions are affected by chain fluctuations by employing self-consistent field theory (SCFT). While the normal force is relatively unaffected, fluctuations are found to have a strong impact on brush interpenetration. Even so, the contraction of the brushes does significantly delay the onset of interpenetration, implying that a sizeable normal force can be achieved before the sliding friction becomes significant.
Abstract:
Classical strong-stretching theory (SST) predicts that, as opposing polyelectrolyte brushes are compressed together in a salt-free theta solvent, they contract so as to maintain a finite polymer-free gap, which offers a potential explanation for the ultra-low frictional forces observed in experiments even with the application of large normal forces. However, the SST ignores chain fluctuations, which would tend to close the gap resulting in physical contact and in turn significant friction. In a preceding study, we examined the effect of fluctuations using self-consistent field theory (SCFT) and illustrated that high normal forces can still be applied before the gap is destroyed. We now look at the effect of adding salt. It is found to reduce the long-range interaction between the brushes but has little effect on the short-range part, provided the concentration does not enter the salted-brush regime. Consequently, the maximum normal force between two planar brushes at the point of contact is remarkably unaffected by salt. For the crossed-cylinder geometry commonly used in experiments, however, there is a gradual reduction because in this case the long-range part of the interaction contributes to the maximum normal force.
Abstract:
Vertebral compression fractures are a common clinical problem, and their incidence will increase as the population ages. Traditionally, management has been conservative; however, there has been a growing trend towards vertebroplasty as an alternative therapy in patients with persisting severe pain. NICE produced guidance in 2003 recommending the procedure after 4 weeks of conservative management. Recent high-quality studies have been contradictory, and there is currently debate surrounding the role of the procedure, with no agreement in the literature. We examine the evidence for both osteoporotic and malignant vertebral compression fractures; we also describe the benefits and side effects, alternative treatment options, and the cost of the procedure. Finally, we recommend when vertebroplasty is most appropriately used, based on the best available evidence.
Abstract:
Bloom filters are a data structure for storing data in compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents the yes-no Bloom filter, a data structure consisting of two parts: the yes-filter, which is a standard Bloom filter, and the no-filter, another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if one chooses the objects to include in the no-filter so that it recognises as many false positives as possible but no true positives, thus producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, under the constraint that it recognise no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Exploiting the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed that uses a reduced ILP for the value function approximation. Numerical results show that the ADP model performs best compared with a number of heuristics as well as the CPLEX built-in branch-and-bound (B&B) solver, and it is what we recommend for use in yes-no Bloom filters. In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
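A minimal sketch of the two-filter query logic described above. The Bloom filter parameters and hashing scheme are illustrative, and the paper's ILP/ADP machinery for choosing which false positives to insert is not reproduced; the chosen objects would simply be fed to add_false_positive.

import hashlib

class BloomFilter:
    def __init__(self, m, k):
        self.m, self.k = m, k          # m bits, k hash functions
        self.bits = bytearray(m)       # one byte per bit, for clarity

    def _indices(self, item):
        # Derive k indices from salted SHA-256 digests of the item.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for j in self._indices(item):
            self.bits[j] = 1

    def __contains__(self, item):
        return all(self.bits[j] for j in self._indices(item))

class YesNoBloomFilter:
    # Yes-filter stores the set; no-filter stores known false positives
    # of the yes-filter, giving a chance to reject them at query time.
    def __init__(self, m_yes, m_no, k):
        self.yes = BloomFilter(m_yes, k)
        self.no = BloomFilter(m_no, k)

    def add(self, item):
        self.yes.add(item)

    def add_false_positive(self, item):
        # Must never be called with a true member, or that member will be
        # wrongly rejected; the paper's ILP/ADP selects which false
        # positives to add under exactly this constraint.
        self.no.add(item)

    def __contains__(self, item):
        return item in self.yes and item not in self.no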