982 results for Image compression


Relevance:

60.00%

Publisher:

Abstract:

A set of DCT-domain properties for shifting and scaling by real amounts, and for applying linear operations such as differentiation, is described. The DCT coefficients of a sampled signal are subjected to a linear transform, which returns the DCT coefficients of the shifted, scaled and/or differentiated signal. The properties are derived by considering the inverse discrete transform as a cosine series expansion of the original continuous signal, assuming sampling in accordance with the Nyquist criterion. The approach can be applied in the signal domain to give, for example, DCT-based interpolation or derivatives, and the same approach can be taken when decoding from the DCT to give, for example, derivatives in the signal domain. The techniques may prove useful in compressed-domain processing applications, and are interesting because they allow operations from the continuous domain, such as differentiation, to be implemented in the discrete domain. An image matching algorithm illustrates the use of the properties, with improvements in computation time and matching quality.
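
The paper's exact coefficient-domain transform matrices are not reproduced here. As a minimal sketch of the underlying idea (treating the inverse DCT as a cosine-series expansion that can be evaluated at arbitrary real positions), the following NumPy/SciPy code resamples and differentiates a signal directly from its orthonormal DCT-II coefficients; the function names and the half-sample shift are illustrative choices.

```python
import numpy as np
from scipy.fft import dct

def dct_resample(x, t):
    """Evaluate the cosine-series interpolant implied by the orthonormal
    DCT-II of x at (possibly fractional) sample positions t."""
    N = len(x)
    C = dct(x, type=2, norm='ortho')          # DCT-II coefficients
    k = np.arange(1, N)
    t = np.atleast_1d(np.asarray(t, dtype=float))
    # x(t) = C[0]/sqrt(N) + sqrt(2/N) * sum_k C[k] cos(pi*k*(t+0.5)/N)
    basis = np.cos(np.pi * np.outer(t + 0.5, k) / N)
    return C[0] / np.sqrt(N) + np.sqrt(2.0 / N) * basis @ C[k]

def dct_derivative(x, t):
    """Termwise derivative of the same cosine series (DCT-based differentiation)."""
    N = len(x)
    C = dct(x, type=2, norm='ortho')
    k = np.arange(1, N)
    t = np.atleast_1d(np.asarray(t, dtype=float))
    basis = -np.sin(np.pi * np.outer(t + 0.5, k) / N) * (np.pi * k / N)
    return np.sqrt(2.0 / N) * basis @ C[k]

# Example: shift a smooth signal by half a sample and take its derivative.
n = np.arange(64)
x = np.cos(2 * np.pi * 3 * n / 64)
x_shifted = dct_resample(x, n + 0.5)
dx = dct_derivative(x, n)
```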

Relevance:

60.00%

Publisher:

Abstract:

A digital image in DICOM format requires a large amount of storage space, which makes archiving and transmitting the image over the internet difficult and often requires compressing the images into file formats such as JPEG. The aim of this study was to evaluate the influence of the DICOM and JPEG formats, at Quality Factors 100, 80 and 60, on intra- and inter-examiner reproducibility in the marking of cephalometric landmarks on digital frontal (posteroanterior) cephalometric radiographs. The sample consisted of 120 digital images of frontal cephalograms obtained from 30 individuals. The 30 original images, in DICOM format, were subsequently converted to JPEG format at Quality Factors 100, 80 and 60. After blinding and randomizing the sample, three calibrated orthodontists marked 18 cephalometric landmarks on each image using a computerized cephalometry program that records the landmark measurements in an X-Y Cartesian coordinate system. In the results, intraclass correlation tests and analysis of variance (ANOVA) showed reproducibility agreement of the cephalometric landmarks on digital frontal cephalograms, both intra- and inter-examiner, with the exception of landmarks ZL, ZR, AZ, JR, NC and CN on the Y coordinate and A6 on the X coordinate, regardless of file format. In conclusion, the DICOM and JPEG file formats, at Quality Factors 100, 80 and 60, did not affect intra- or inter-examiner reproducibility in the marking of cephalometric landmarks. (AU)
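
The study itself is clinical, but the file-format step it depends on (converting DICOM images to JPEG at fixed Quality Factors) can be sketched roughly as follows. This is an assumed, simplified workflow: the file names are placeholders and the linear 8-bit rescaling ignores DICOM windowing (VOI LUT), so it only approximates a clinical conversion pipeline.

```python
import numpy as np
import pydicom                      # assumed available for reading DICOM files
from PIL import Image

def dicom_to_jpeg(dicom_path, jpeg_path, quality):
    """Convert a single-frame grayscale DICOM image to JPEG at a given quality factor.
    Simplified sketch: rescales pixel data linearly to 8 bits, ignoring VOI/window settings."""
    ds = pydicom.dcmread(dicom_path)
    arr = ds.pixel_array.astype(np.float64)
    lo, hi = arr.min(), arr.max()
    arr8 = np.uint8(np.round(255.0 * (arr - lo) / max(hi - lo, 1)))
    Image.fromarray(arr8).save(jpeg_path, "JPEG", quality=quality)

# One JPEG copy per quality factor used in the study (file names are placeholders).
for q in (100, 80, 60):
    dicom_to_jpeg("cephalogram.dcm", f"cephalogram_q{q}.jpg", q)
```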

Relevance:

60.00%

Publisher:

Abstract:

Principal component analysis (PCA) is one of the most popular techniques for processing, compressing and visualising data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a combination of local linear PCA projections. However, conventional PCA does not correspond to a probability density, and so there is no unique way to combine PCA models. Previous attempts to formulate mixture models for PCA have therefore to some extent been ad hoc. In this paper, PCA is formulated within a maximum-likelihood framework, based on a specific form of Gaussian latent variable model. This leads to a well-defined mixture model for probabilistic principal component analysers, whose parameters can be determined using an EM algorithm. We discuss the advantages of this model in the context of clustering, density modelling and local dimensionality reduction, and we demonstrate its application to image compression and handwritten digit recognition.
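
The mixture model and its EM updates are not reproduced here. As a minimal sketch of the single-model building block, the closed-form maximum-likelihood PPCA solution of Tipping and Bishop can be written as below; the toy patch data and variable names are illustrative, and reconstruction uses the posterior mean of the latent variables.

```python
import numpy as np

def ppca_ml(X, q):
    """Closed-form maximum-likelihood fit of a single probabilistic PCA model.
    X is (n_samples, d); returns mean, loading matrix W and isotropic noise variance."""
    mu = X.mean(axis=0)
    Xc = X - mu
    evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(evals)[::-1]
    evals, evecs = evals[order], evecs[:, order]
    sigma2 = evals[q:].mean()                      # average discarded variance
    W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    return mu, W, sigma2

def ppca_reconstruct(X, mu, W, sigma2):
    """Reconstruct data from the posterior mean of the latent variables."""
    M = W.T @ W + sigma2 * np.eye(W.shape[1])
    Z = np.linalg.solve(M, W.T @ (X - mu).T).T     # E[z | x]
    return Z @ W.T + mu

# Toy usage on random 8x8 "image patches" flattened to 64-d vectors.
patches = np.random.rand(500, 64)
mu, W, s2 = ppca_ml(patches, q=8)
recon = ppca_reconstruct(patches, mu, W, s2)
```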

Relevance:

60.00%

Publisher:

Abstract:

We extend our previous work into error-free representations of transform basis functions by presenting a novel error-free encoding scheme for the fast implementation of a Linzer-Feig Fast Cosine Transform (FCT) and its inverse. We discuss an 8x8 L-F scaled Discrete Cosine Transform where the architecture uses a new algebraic integer quantization of the 1-D radix-8 DCT that allows the separable computation of a 2-D DCT without any intermediate number representation conversions. The resulting architecture is very regular and reduces latency by 50% compared to a previous error-free design, with virtually the same hardware cost.
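
The algebraic-integer, error-free arithmetic of the paper is not reproduced here. The sketch below only illustrates the separability it exploits: a 2-D 8x8 DCT computed as 1-D DCTs over rows and then columns, in ordinary floating point.

```python
import numpy as np
from scipy.fft import dct, idct

def dct2_separable(block):
    """2-D DCT of an 8x8 block computed separably: 1-D DCTs along one axis, then the other.
    Floating-point stand-in for the paper's error-free algebraic-integer arithmetic."""
    return dct(dct(block, type=2, norm='ortho', axis=0), type=2, norm='ortho', axis=1)

def idct2_separable(coeffs):
    return idct(idct(coeffs, type=2, norm='ortho', axis=0), type=2, norm='ortho', axis=1)

block = np.random.randint(0, 256, size=(8, 8)).astype(float)
coeffs = dct2_separable(block)
assert np.allclose(idct2_separable(coeffs), block)
```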

Relevance:

60.00%

Publisher:

Abstract:

This work presents a recent image compression method based on the theory of Iterated Function Systems (IFS), known as Fractal Compression. A continuous model for fractal compression over the complete metric space Lp is described, in which a contractive fractal transform operator associated with a local IFS with mappings is defined. Before that, the theory of IFSs on the Hausdorff (or fractal) space, the theory of Local IFSs (a generalization of IFSs) and the theory of IFSs on the Lp space are introduced. Once the theoretical foundation for the method has been provided, the fractal compression algorithm is presented in detail. Some partitioning strategies needed to find the IFS with mappings are also described, as well as some strategies that attempt to overcome the main obstacle of fractal compression: its encoding complexity. This dissertation is essentially theoretical and descriptive in character, covering the fractal compression method and some techniques, already implemented, for improving its effectiveness.
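
As a toy illustration of the fractal transform and its fixed-point decoding (not the dissertation's continuous Lp formulation), the following sketch implements a bare-bones partitioned-IFS coder. Block sizes, the restriction to non-overlapping domain blocks without isometries, and the clamp on the grey-level scaling are simplifying assumptions, and the image dimensions are assumed to be multiples of 2B.

```python
import numpy as np

B = 4  # range block size; domain blocks are 2B x 2B, shrunk by 2x2 averaging

def downsample2(block):
    """Shrink a 2B x 2B domain block to B x B by averaging 2x2 neighbourhoods."""
    return block.reshape(B, 2, B, 2).mean(axis=(1, 3))

def encode(img):
    """Minimal PIFS encoder: for every B x B range block, store the index of the
    best-matching (downsampled) domain block and the affine grey-level map (s, o)."""
    h, w = img.shape
    domains = []
    for y in range(0, h - 2 * B + 1, 2 * B):
        for x in range(0, w - 2 * B + 1, 2 * B):
            domains.append(downsample2(img[y:y + 2 * B, x:x + 2 * B]).ravel())
    D = np.array(domains)
    code = []
    for y in range(0, h, B):
        for x in range(0, w, B):
            r = img[y:y + B, x:x + B].ravel()
            dm, rm = D.mean(axis=1), r.mean()
            var = ((D - dm[:, None]) ** 2).sum(axis=1) + 1e-12
            s = ((D - dm[:, None]) * (r - rm)).sum(axis=1) / var
            s = np.clip(s, -0.9, 0.9)            # keep the grey-level map contractive
            o = rm - s * dm
            err = ((s[:, None] * D + o[:, None] - r) ** 2).sum(axis=1)
            k = int(np.argmin(err))
            code.append((k, float(s[k]), float(o[k])))
    return code

def decode(code, shape, n_iter=10):
    """Decode by iterating the fractal transform from an arbitrary starting image;
    the Banach fixed-point theorem guarantees convergence to the attractor."""
    img = np.full(shape, 128.0)
    h, w = shape
    for _ in range(n_iter):
        domains = []
        for y in range(0, h - 2 * B + 1, 2 * B):
            for x in range(0, w - 2 * B + 1, 2 * B):
                domains.append(downsample2(img[y:y + 2 * B, x:x + 2 * B]))
        new = np.empty(shape)
        i = 0
        for y in range(0, h, B):
            for x in range(0, w, B):
                k, s, o = code[i]
                new[y:y + B, x:x + B] = s * domains[k] + o
                i += 1
        img = new
    return img

# Toy usage: encode and decode a random "image" whose sides are multiples of 2B.
img = np.random.rand(32, 32) * 255
approx = decode(encode(img), img.shape)
```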

Relevance:

60.00%

Publisher:

Abstract:

This work addresses the importance of image compression for industry: image processing and storage are a constant challenge at Petrobras, where the goal is to reduce storage time and to store as many images and as much data as possible. We present an interactive system for processing and storing images in the wavelet domain, together with an interface for digital image processing. The proposal is based on the Peano function and on the 1-D wavelet transform. The storage system aims to optimize computational space, both for storing and for transmitting images: the Peano function is applied to linearize the images, and the 1-D wavelet transform to decompose them. These operations extract the information relevant for storing an image at a lower computational cost and with a very small margin of error when the original and processed images are compared, i.e., there is little loss of quality when the proposed processing system is applied. The results obtained from the information extracted from the images are displayed in a graphical interface, through which the user can view and analyze the output of the programs directly on screen without having to deal with the source code. The graphical user interface and the programs for image processing via the Peano function and the 1-D wavelet transform were developed in Java, allowing a direct exchange of information between them and the user.
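
The system described was written in Java; the sketch below only illustrates the decomposition step in Python. A row-major flattening stands in for the Peano scan (the space-filling-curve ordering is not reproduced), followed by a 1-D Haar wavelet decomposition and a crude thresholding of the detail coefficients.

```python
import numpy as np

def haar_1d(signal, levels=1):
    """One-dimensional Haar wavelet decomposition: repeatedly split the signal
    into averages (approximation) and differences (detail)."""
    out = np.asarray(signal, dtype=float).copy()
    n = len(out)
    for _ in range(levels):
        half = n // 2
        pairs = out[:2 * half].reshape(half, 2)
        approx = pairs.mean(axis=1)
        detail = (pairs[:, 0] - pairs[:, 1]) / 2.0
        out[:half], out[half:2 * half] = approx, detail
        n = half
    return out

# Linearize an image and decompose it.  A row-major flattening stands in for the
# Peano scan used in the work (the space-filling-curve ordering is not reproduced).
image = np.random.randint(0, 256, size=(64, 64))
linearized = image.ravel()
coeffs = haar_1d(linearized, levels=3)

# Crude "compression": zero the smallest coefficients by magnitude.
threshold = np.quantile(np.abs(coeffs), 0.75)
coeffs_compressed = np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)
```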

Relevance:

60.00%

Publisher:

Abstract:

The main objectives of this thesis are: to validate an improved principal component analysis (IPCA) algorithm on images; to design and simulate a digital model for image compression, face recognition and image detection using the principal component analysis (PCA) algorithm and the IPCA algorithm; to design and simulate an optical model for face recognition and object detection using the joint transform correlator (JTC); to establish detection and recognition thresholds for each model; to compare the performance of the PCA algorithm with that of the IPCA algorithm in compression, recognition and detection; and to compare the performance of the digital model with that of the optical model in recognition and detection. MATLAB software was used to simulate the models. PCA is a technique for identifying patterns in data and representing the data so as to highlight their similarities and differences. Identifying patterns in high-dimensional data (more than three dimensions) is difficult because the data cannot be represented graphically, which makes PCA a powerful method for analyzing such data. IPCA is another statistical tool for identifying patterns in data; it uses information theory to improve on PCA. The joint transform correlator (JTC) is an optical correlator used to synthesize a frequency-plane filter for coherent optical systems. In general, the IPCA algorithm behaves better than the PCA algorithm in most of the applications considered. It outperforms PCA in image compression, achieving higher compression, more accurate reconstruction and faster processing with acceptable errors, and in real-time image detection, where it achieves the lowest error rate together with remarkable speed. On the other hand, the PCA algorithm performs better than the IPCA algorithm in face recognition, offering an acceptable error rate, simple calculation and reasonable speed. Finally, in detection and recognition, the digital model outperforms the optical model.
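
Neither the IPCA algorithm nor the optical JTC model is reproduced here. As a minimal sketch of the PCA baseline used for image compression, the following code projects flattened patches onto the top-k principal components and reconstructs them; the toy patch data and the choice of k are illustrative.

```python
import numpy as np

def pca_compress(patches, k):
    """Compress a set of flattened image patches by projecting onto the top-k
    principal components; returns what is needed to reconstruct them."""
    mu = patches.mean(axis=0)
    centered = patches - mu
    # SVD of the centered data gives the principal directions in Vt.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    components = Vt[:k]                        # (k, d) principal directions
    scores = centered @ components.T           # (n, k) compressed representation
    return mu, components, scores

def pca_reconstruct(mu, components, scores):
    return scores @ components + mu

# Toy usage: 1000 random 8x8 patches kept with k = 10 of 64 dimensions.
patches = np.random.rand(1000, 64)
mu, comps, scores = pca_compress(patches, k=10)
approx = pca_reconstruct(mu, comps, scores)
mse = np.mean((patches - approx) ** 2)
```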

Relevance:

40.00%

Publisher:

Abstract:

Aim: To determine the theoretical and clinical minimum image pixel resolution and maximum compression appropriate for anterior eye image storage. Methods: Clinical images of the bulbar conjunctiva, palpebral conjunctiva, and corneal staining were taken at the maximum resolution of Nikon:CoolPix990 (2048 × 1360 pixels), DVC:1312C (1280 × 811), and JAI:CV-S3200 (767 × 569) single chip cameras and the JVC:KYF58 (767 × 569) three chip camera. The images were stored in TIFF format and further copies created with reduced resolution or compressed. The images were then ranked for clarity on a 15 inch monitor (resolution 1280 × 1024) by 20 optometrists and analysed by objective image analysis grading. Theoretical calculation of the resolution necessary to detect the smallest objects of clinical interest was also conducted. Results: Theoretical calculation suggested that the minimum resolution should be ≥579 horizontal pixels at 25 × magnification. Image quality was perceived subjectively as being reduced when the pixel resolution was lower than 767 × 569 (p<0.005) or the image was compressed as a BMP or <50% quality JPEG (p<0.005). Objective image analysis techniques were less susceptible to changes in image quality, particularly when using colour extraction techniques. Conclusion: It is appropriate to store anterior eye images at between 1280 × 811 and 767 × 569 pixel resolution and at up to 1:70 JPEG compression.
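
As a rough, assumed illustration of how such resolution/compression trade-offs can be explored numerically (this is not the study's subjective or objective grading methodology, and the file name, resolutions and quality grid are placeholders), the sketch below reports compressed file size and PSNR across image sizes and JPEG quality settings.

```python
import io
import numpy as np
from PIL import Image

def jpeg_bytes(img, quality):
    """Encode a PIL image as JPEG in memory and return the compressed bytes."""
    buf = io.BytesIO()
    img.save(buf, "JPEG", quality=quality)
    return buf.getvalue()

def psnr(a, b):
    mse = np.mean((np.asarray(a, dtype=float) - np.asarray(b, dtype=float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

original = Image.open("anterior_eye.png").convert("RGB")   # placeholder file name

for size in [(2048, 1360), (1280, 811), (767, 569)]:
    resized = original.resize(size, Image.LANCZOS)
    for quality in (90, 50, 20):
        data = jpeg_bytes(resized, quality)
        decoded = Image.open(io.BytesIO(data))
        print(size, quality, len(data), round(psnr(resized, decoded), 1))
```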

Relevance:

30.00%

Publisher:

Abstract:

We describe the design and evaluation of a platform for networks of cameras in low-bandwidth, low-power sensor networks. In our work to date we have investigated two different DSP hardware/software platforms for undertaking the tasks of compression and object detection and tracking. We compare the relative merits of each of the hardware and software platforms in terms of both performance and energy consumption. Finally we discuss what we believe are the ongoing research questions for image processing in WSNs.

Relevance:

30.00%

Publisher:

Abstract:

Robust image hashing seeks to transform a given input image into a shorter hashed version using a key-dependent non-invertible transform. These image hashes can be used for watermarking, image integrity authentication or image indexing for fast retrieval. This paper introduces a new method of generating image hashes based on extracting Higher Order Spectral features from the Radon projection of an input image. The feature extraction process is non-invertible and non-linear, and different hashes can be produced from the same image through the use of random permutations of the input. We show that the transform is robust to typical image transformations such as JPEG compression, noise, scaling, rotation, smoothing and cropping. We evaluate our system using a verification-style framework based on calculating false-match and false-non-match likelihoods, using the publicly available Uncompressed Colour Image Database (UCID) of 1320 images. We also compare our results to Swaminathan's Fourier-Mellin based hashing method, showing at least a 1% EER improvement under noise, scaling and sharpening.
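
A simplified stand-in for the method is sketched below: the Higher Order Spectral features are replaced by low-frequency Fourier magnitudes of each Radon projection, and the key drives a random projection followed by sign quantization. It only illustrates the key-dependent, non-invertible structure, not the paper's actual feature extraction; the availability of scikit-image and an integer key are assumptions.

```python
import numpy as np
from skimage.transform import radon     # assumed available (scikit-image)

def radon_hash(image, key, n_angles=32, n_bits=64):
    """Toy key-dependent binary hash built from Radon projections of an image."""
    rng = np.random.default_rng(key)
    angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(image.astype(float), theta=angles, circle=False)
    # Non-invertible per-angle feature: low-frequency Fourier magnitudes.
    mags = np.abs(np.fft.rfft(sinogram, axis=0))[:8].ravel()
    feats = mags / (np.linalg.norm(mags) + 1e-12)
    # Key-dependent random projection followed by sign quantization -> binary hash.
    P = rng.standard_normal((n_bits, feats.size))
    return (P @ feats > 0).astype(np.uint8)

def hamming(h1, h2):
    """Hash comparison: number of differing bits."""
    return int(np.sum(h1 != h2))
```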

Relevance:

30.00%

Publisher:

Abstract:

Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing for efficient comparisons in the hash domain, and non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed for deterministic (non-changing) inputs such as files and passwords, where a 1-bit or one-character change can be significant; as a result, the hashing process is sensitive to any change in the input.

Unfortunately, there are applications where the input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input, and for which cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing: current robust hashing techniques are based not on cryptographic methods but on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction, followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. To preserve the robustness of the extracted features, most randomization methods are linear, which is detrimental to the security properties required of hash functions. Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security.

This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While the existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training for both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework that allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images.

The dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, which is an essential requirement for non-invertibility, and is designed to produce features better suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded, and displays improved hashing performance compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
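
The dissertation's statistical analysis of quantizer training is not reproduced here. As a minimal sketch of the quantization-and-encoding stage it investigates, the code below learns equal-probability (quantile) thresholds per feature dimension from training data and Gray-encodes the resulting level indices into a binary hash; the 4-level/2-bit choice and toy data are illustrative.

```python
import numpy as np

def learn_thresholds(training_features, n_levels=4):
    """Learn per-dimension quantization thresholds from training data so that
    each quantization bin is (approximately) equally populated."""
    qs = np.linspace(0.0, 1.0, n_levels + 1)[1:-1]
    return np.quantile(training_features, qs, axis=0)     # (n_levels - 1, d)

GRAY_2BIT = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}  # Gray code for 4 levels

def quantize_and_encode(features, thresholds):
    """Quantize each feature against the learned thresholds and Gray-encode the
    level indices into a binary hash (2 bits per feature for 4 levels)."""
    levels = np.sum(features[None, :] > thresholds, axis=0)   # level index per dimension
    bits = [b for lvl in levels for b in GRAY_2BIT[int(lvl)]]
    return np.array(bits, dtype=np.uint8)

# Toy usage: thresholds learned on training features, applied to a new feature vector.
train = np.random.randn(1000, 16)
thr = learn_thresholds(train, n_levels=4)
hash_bits = quantize_and_encode(np.random.randn(16), thr)
```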

Relevance:

30.00%

Publisher:

Abstract:

The Australian masonry standard allows either prism tests or correction factors based on block height and mortar thickness to evaluate masonry compressive strength. The correction factor ensures that taller units laid with conventional 10 mm mortar joints are not disadvantaged by the size effect. In recent times, 2-4 mm thick, high-adhesive mortars and H blocks with only a mid-web shell have come into use in masonry construction. H blocks and thinner, higher-adhesive mortars have renewed interest in the compression behaviour of hollow concrete masonry, which is therefore revisited in this paper. This paper presents an experimental study carried out to examine the effects of mortar joint thickness, the type of mortar adhesive and the presence of web shells on hollow concrete masonry prisms under axial compression. A non-contact digital image correlation technique was used to measure the deformation of the prisms and was found adequate for determining the strain field of the loaded face shells subjected to axial compression. It is found that the absence of end web shells lowers the compressive strength and stiffness of the prisms, while the thinner, higher-adhesive mortars increase the compressive strength and stiffness and lower the Poisson's ratio. © Institution of Engineers Australia, 2013.

Relevance:

30.00%

Publisher:

Abstract:

The intervertebral disc withstands large compressive loads (up to nine times bodyweight in humans) while providing flexibility to the spinal column. At a microstructural level, the outer sheath of the disc (the annulus fibrosus) comprises 12–20 annular layers of alternately crisscrossed collagen fibres embedded in a soft ground matrix. The centre of the disc (the nucleus pulposus) consists of a hydrated gel rich in proteoglycans. The disc is the largest avascular structure in the body and is of much interest biomechanically due to the high societal burden of disc degeneration and back pain. Although the disc has been well characterized at the whole joint scale, it is not clear how the disc tissue microstructure confers its overall mechanical properties. In particular, there have been conflicting reports regarding the level of attachment between adjacent lamellae in the annulus, and the importance of these interfaces to the overall integrity of the disc is unknown. We used a polarized light micrograph of the bovine tail disc in transverse cross-section to develop an image-based finite element model incorporating sliding and separation between layers of the annulus, and subjected the model to axial compressive loading. Validation experiments were also performed on four bovine caudal discs. Interlamellar shear resistance had a strong effect on disc compressive stiffness, with a 40% drop in stiffness when the interface shear resistance was changed from fully bonded to freely sliding. By contrast, interlamellar cohesion had no appreciable effect on overall disc mechanics. We conclude that shear resistance between lamellae confers disc mechanical resistance to compression, and degradation of the interlamellar interface structure may be a precursor to macroscopic disc degeneration.