848 results for Lossless image compression
Abstract:
A digital image in DICOM format requires a large amount of storage space, which hinders archiving and transmission over the internet, so the images often need to be compressed into file formats such as JPEG. The aim of this study was to evaluate the influence of the DICOM and JPEG formats, at Quality Factors 100, 80 and 60, on intra- and inter-examiner reproducibility in the marking of cephalometric points on digital frontal (posteroanterior) cephalometric radiographs. The sample consisted of 120 digital images of frontal cephalometric radiographs obtained from 30 individuals. The 30 original images, in DICOM format, were subsequently converted to JPEG format at Quality Factors 100, 80 and 60. After the sample was blinded and randomized, three calibrated orthodontists marked the 18 cephalometric points on each image using a computerized cephalometry program, which records the measurements of the cephalometric points in an X-Y Cartesian coordinate system. In the results, the statistical tests of intraclass correlation and analysis of variance (ANOVA) showed agreement in the reproducibility of the cephalometric points on digital frontal cephalometric radiographs, both intra- and inter-examiner, with the exception of points ZL, ZR, AZ, JR, NC and CN on the Y coordinate and A6 on the X coordinate, regardless of file format. In conclusion, the DICOM and JPEG file formats, at Quality Factors 100, 80 and 60, did not affect intra- and inter-examiner reproducibility in the marking of cephalometric points.
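A minimal sketch of the DICOM-to-JPEG conversion step described in this abstract, assuming the pydicom and Pillow libraries; the file names and the 8-bit windowing are illustrative, since the study's own conversion tool is not specified.

```python
# Sketch: convert a DICOM radiograph to JPEG at the Quality Factors used in the study.
# Assumes pydicom + Pillow; "radiograph.dcm" is a hypothetical input file.
import numpy as np
import pydicom
from PIL import Image

ds = pydicom.dcmread("radiograph.dcm")
pixels = ds.pixel_array.astype(np.float64)

# Scale the (typically 12- or 16-bit) DICOM pixel data to 8 bits for JPEG output.
pixels -= pixels.min()
if pixels.max() > 0:
    pixels *= 255.0 / pixels.max()
img = Image.fromarray(pixels.astype(np.uint8))

# Save one JPEG copy per Quality Factor.
for quality in (100, 80, 60):
    img.save(f"radiograph_q{quality}.jpg", "JPEG", quality=quality)
```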
Abstract:
A set of DCT domain properties for shifting and scaling by real amounts, and taking linear operations such as differentiation is described. The DCT coefficients of a sampled signal are subjected to a linear transform, which returns the DCT coefficients of the shifted, scaled and/or differentiated signal. The properties are derived by considering the inverse discrete transform as a cosine series expansion of the original continuous signal, assuming sampling in accordance with the Nyquist criterion. This approach can be applied in the signal domain, to give, for example, DCT based interpolation or derivatives. The same approach can be taken in decoding from the DCT to give, for example, derivatives in the signal domain. The techniques may prove useful in compressed domain processing applications, and are interesting because they allow operations from the continuous domain such as differentiation to be implemented in the discrete domain. An image matching algorithm illustrates the use of the properties, with improvements in computation time and matching quality.
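A minimal sketch of the signal-domain side of this idea, assuming SciPy's orthonormal DCT-II convention: the inverse DCT is treated as a cosine-series expansion of the underlying continuous signal, which can then be evaluated at fractionally shifted points (interpolation) or differentiated term by term. This is an illustration of the approach, not the paper's implementation.

```python
# DCT-based fractional shifting (interpolation) and differentiation via the
# cosine-series view of the inverse DCT. Conventions follow scipy's orthonormal DCT-II.
import numpy as np
from scipy.fft import dct

N = 64
n = np.arange(N)
x = np.cos(2 * np.pi * 3 * n / N) + 0.5 * np.sin(2 * np.pi * 5 * n / N)

# Orthonormal DCT-II coefficients of the sampled signal.
X = dct(x, type=2, norm="ortho")

# Scale factors s_k of the orthonormal convention.
s = np.full(N, np.sqrt(2.0 / N))
s[0] = np.sqrt(1.0 / N)
k = np.arange(N)

def cosine_series(t):
    """Evaluate x(t) = sum_k s_k X_k cos(pi k (2t+1) / (2N)) at a real position t."""
    return (s * X * np.cos(np.pi * k * (2 * t + 1) / (2 * N))).sum()

def cosine_series_derivative(t):
    """Termwise derivative of the same expansion: differentiation done in the DCT domain."""
    return (-(np.pi * k / N) * s * X * np.sin(np.pi * k * (2 * t + 1) / (2 * N))).sum()

# Interpolate at a fractional shift of 0.3 samples and compute sample-point derivatives.
delta = 0.3
x_shifted = np.array([cosine_series(i + delta) for i in range(N - 1)])
x_deriv = np.array([cosine_series_derivative(i) for i in range(N)])
```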
Abstract:
Principal component analysis (PCA) is one of the most popular techniques for processing, compressing and visualising data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a combination of local linear PCA projections. However, conventional PCA does not correspond to a probability density, and so there is no unique way to combine PCA models. Previous attempts to formulate mixture models for PCA have therefore to some extent been ad hoc. In this paper, PCA is formulated within a maximum-likelihood framework, based on a specific form of Gaussian latent variable model. This leads to a well-defined mixture model for probabilistic principal component analysers, whose parameters can be determined using an EM algorithm. We discuss the advantages of this model in the context of clustering, density modelling and local dimensionality reduction, and we demonstrate its application to image compression and handwritten digit recognition.
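As a minimal illustration of the single-analyser building block, the closed-form maximum-likelihood fit of probabilistic PCA can be written in a few lines of NumPy; the paper's contribution is combining several such analysers into a mixture trained by EM, which this sketch does not include. The data here are synthetic.

```python
# Closed-form maximum-likelihood probabilistic PCA (single analyser).
import numpy as np

def ppca_ml(X, q):
    """ML estimates of the PPCA loadings W and isotropic noise variance sigma^2.

    X: (n_samples, d) data matrix; q: latent dimensionality, q < d.
    """
    n, d = X.shape
    mu = X.mean(axis=0)
    S = np.cov(X - mu, rowvar=False)          # sample covariance
    eigvals, eigvecs = np.linalg.eigh(S)      # ascending order
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    sigma2 = eigvals[q:].mean()               # ML noise variance: mean of discarded eigenvalues
    W = eigvecs[:, :q] * np.sqrt(np.maximum(eigvals[:q] - sigma2, 0.0))
    return mu, W, sigma2

# Toy usage: 5-D data generated from a 2-D latent subspace plus isotropic noise.
rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 2))
A = rng.normal(size=(2, 5))
X = Z @ A + 0.1 * rng.normal(size=(500, 5))
mu, W, sigma2 = ppca_ml(X, q=2)
```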
Abstract:
We extend our previous work into error-free representations of transform basis functions by presenting a novel error-free encoding scheme for the fast implementation of a Linzer-Feig Fast Cosine Transform (FCT) and its inverse. We discuss an 8x8 L-F scaled Discrete Cosine Transform where the architecture uses a new algebraic integer quantization of the 1-D radix-8 DCT that allows the separable computation of a 2-D DCT without any intermediate number representation conversions. The resulting architecture is very regular and reduces latency by 50% compared to a previous error-free design, with virtually the same hardware cost.
Abstract:
This work presents a recent image compression method based on the theory of Iterated Function Systems (IFS), known as fractal compression. A continuous model for fractal compression over the complete metric space Lp is described, in which a contractive fractal transform operator associated with a local IFS with maps is defined. Before that, the theory of IFSs on the Hausdorff (or fractal) space, the theory of local IFSs (a generalization of IFSs), and IFSs on the Lp space are introduced. With the theoretical foundation of the method in place, the fractal compression algorithm is presented in detail. Some partitioning strategies needed to find the IFS with maps are also described, as are strategies that attempt to overcome the main obstacle of fractal compression: its encoding complexity. This dissertation is essentially theoretical and descriptive in character, covering the fractal compression method and some already implemented techniques for improving its efficiency.
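A minimal sketch of block-based fractal coding on a 1-D signal, assuming the standard range/domain block matching with an affine grey-level map; the block sizes and test signal are illustrative and this is not the dissertation's construction. Each range block is approximated by a contractive affine map of a decimated domain block, and the signal is recovered by iterating that transform from an arbitrary starting point.

```python
# Toy 1-D fractal (local IFS) block coder and decoder.
import numpy as np

B = 8  # range block size; domain blocks have size 2*B

def encode(x):
    """Return one (domain_index, scale, offset) triple per range block."""
    n = len(x)
    domains = [x[i:i + 2 * B].reshape(-1, 2).mean(axis=1)   # decimate by averaging pairs
               for i in range(0, n - 2 * B + 1, B)]
    code = []
    for r0 in range(0, n, B):
        r = x[r0:r0 + B]
        best = None
        for j, d in enumerate(domains):
            s, o = np.polyfit(d, r, 1)          # least-squares fit r ~ s*d + o
            s = np.clip(s, -0.9, 0.9)           # clip scale for contractivity
            o = r.mean() - s * d.mean()
            err = np.sum((s * d + o - r) ** 2)
            if best is None or err < best[0]:
                best = (err, j, s, o)
        code.append(best[1:])
    return code

def decode(code, n, n_iter=12):
    """Iterate the coded transform from a flat signal; contraction drives convergence."""
    x = np.zeros(n)
    for _ in range(n_iter):
        y = np.empty(n)
        for k, (j, s, o) in enumerate(code):
            d = x[j * B:j * B + 2 * B].reshape(-1, 2).mean(axis=1)
            y[k * B:(k + 1) * B] = s * d + o
        x = y
    return x

# Usage on a smooth toy signal whose length is a multiple of B.
t = np.linspace(0, 1, 256)
signal = np.sin(2 * np.pi * t) + 0.3 * np.cos(6 * np.pi * t)
approx = decode(encode(signal), len(signal))
```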
Abstract:
This work discusses the importance of image compression for industry: image processing and storage is a constant challenge at Petrobras, where the goal is to optimize storage time and to store the largest possible number of images and data. We present an interactive system for processing and storing images in the wavelet domain, together with an interface for digital image processing. The proposal is based on the Peano function and the 1D wavelet transform. The storage system aims to optimize computational space, both for storage and for transmission of images. The Peano function is applied to linearize the images and the 1D wavelet transform to decompose them. These steps extract the information relevant to storing an image at a lower computational cost and with a very small margin of error when the original and processed images are compared; that is, little quality is lost when the presented processing system is applied. The results obtained from the information extracted from the images are displayed in a graphical interface. Through the graphical user interface the user works with the files and views and analyzes the results of the programs directly on the computer screen, without having to deal with the source code. The graphical user interface and the programs for image processing via the Peano function and the 1D wavelet transform were developed in the Java language, allowing a direct exchange of information between them and the user.
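A minimal sketch of the pipeline described above, using a Hilbert curve as a stand-in space-filling curve for the Peano function and the PyWavelets package for the 1-D wavelet transform; the thesis's own system is implemented in Java, so this Python sketch only illustrates the idea of linearize, decompose, truncate, reconstruct.

```python
# Space-filling-curve linearization followed by a 1-D wavelet decomposition.
import numpy as np
import pywt

def d2xy(order, d):
    """Map a distance d along a Hilbert curve of size 2**order x 2**order to (x, y)."""
    x = y = 0
    t = d
    s = 1
    n = 1 << order
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def linearize(img):
    """Read a square power-of-two image along the space-filling curve."""
    order = int(np.log2(img.shape[0]))
    idx = [d2xy(order, d) for d in range(img.size)]
    return np.array([img[x, y] for x, y in idx]), idx

img = np.random.default_rng(1).random((64, 64))
line, idx = linearize(img)

# Multilevel 1-D Haar decomposition of the linearized image.
coeffs = pywt.wavedec(line, "haar", level=4)
coeffs[-1][:] = 0                        # crude "compression": drop finest detail
line_rec = pywt.waverec(coeffs, "haar")

# Map the reconstructed 1-D signal back onto the 2-D grid.
img_rec = np.empty_like(img)
for (x, y), v in zip(idx, line_rec):
    img_rec[x, y] = v
```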
Abstract:
The main objectives of this thesis are to validate an improved principal components analysis (IPCA) algorithm on images; to design and simulate a digital model for image compression, face recognition and image detection using a principal components analysis (PCA) algorithm and the IPCA algorithm; to design and simulate an optical model for face recognition and object detection using the joint transform correlator (JTC); to establish detection and recognition thresholds for each model; to compare the performance of the PCA algorithm with that of the IPCA algorithm in compression, recognition and detection; and to compare the performance of the digital model with that of the optical model in recognition and detection. The MATLAB software was used for simulating the models. PCA is a technique for identifying patterns in data and representing the data so as to highlight their similarities and differences. Identifying patterns in high-dimensional data (more than three dimensions) is difficult because the data cannot be represented graphically, which makes PCA a powerful method for analyzing such data. IPCA is another statistical tool for identifying patterns in data; it uses information theory to improve PCA. The joint transform correlator (JTC) is an optical correlator used for synthesizing a frequency-plane filter for coherent optical systems. The IPCA algorithm generally behaves better than the PCA algorithm in most applications. It is better than the PCA algorithm in image compression because it obtains higher compression, more accurate reconstruction, and faster processing with acceptable errors; it is also better than the PCA algorithm in real-time image detection because it achieves the smallest error rate as well as remarkable speed. On the other hand, the PCA algorithm performs better than the IPCA algorithm in face recognition because it offers an acceptable error rate, easy calculation, and reasonable speed. Finally, in detection and recognition, the digital model performs better than the optical model.
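As a concrete baseline, here is a minimal sketch of PCA-based image compression on 8x8 patches using NumPy; it illustrates only the PCA part, since the abstract does not detail the IPCA algorithm or the MATLAB models, and the patch size and rank are illustrative.

```python
# PCA-based image compression: project 8x8 blocks onto their top-k principal components.
import numpy as np

def pca_compress(img, block=8, k=10):
    h, w = img.shape
    # Collect all non-overlapping block x block patches as row vectors.
    patches = (img[:h - h % block, :w - w % block]
               .reshape(h // block, block, w // block, block)
               .transpose(0, 2, 1, 3)
               .reshape(-1, block * block))
    mean = patches.mean(axis=0)
    centered = patches - mean
    # Principal directions from the SVD of the centered patch matrix.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = Vt[:k]                      # top-k components
    codes = centered @ basis.T          # compressed representation: k numbers per patch
    recon_patches = codes @ basis + mean
    # Reassemble patches into an image.
    recon = (recon_patches
             .reshape(h // block, w // block, block, block)
             .transpose(0, 2, 1, 3)
             .reshape(h - h % block, w - w % block))
    return codes, basis, mean, recon

rng = np.random.default_rng(0)
img = rng.random((64, 64))
codes, basis, mean, recon = pca_compress(img, block=8, k=10)
```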
Abstract:
In many applications (like social or sensor networks) the information generated can be represented as a continuous stream of RDF items, where each item describes an application event (social network post, sensor measurement, etc.). In this paper we focus on compressing RDF streams. In particular, we propose an approach for lossless RDF stream compression, named RDSZ (RDF Differential Stream compressor based on Zlib). This approach takes advantage of the structural similarities among items in a stream by combining a differential item encoding mechanism with the general purpose stream compressor Zlib. Empirical evaluation using several RDF stream datasets shows that this combination produces gains in compression ratios with respect to using Zlib alone.
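A minimal sketch of the general idea behind this combination: consecutive items are differentially encoded against each other and then fed to a single streaming zlib compressor so that its window is shared across items. The differential encoding shown here (a shared-prefix scheme) is an illustrative stand-in; the paper's encoder is more elaborate.

```python
# Differential item encoding feeding a shared zlib compression context.
import zlib

items = [
    "<http://ex.org/s1> <http://ex.org/temp> \"21.5\" .",
    "<http://ex.org/s1> <http://ex.org/temp> \"21.7\" .",
    "<http://ex.org/s1> <http://ex.org/temp> \"21.9\" .",
]

def diff_encode(prev, cur):
    """Replace the longest common prefix with its length (a crude differential encoding)."""
    n = 0
    while n < min(len(prev), len(cur)) and prev[n] == cur[n]:
        n += 1
    return f"{n}|{cur[n:]}"

comp = zlib.compressobj()
prev = ""
compressed = b""
for item in items:
    payload = (diff_encode(prev, item) + "\n").encode()
    compressed += comp.compress(payload)
    compressed += comp.flush(zlib.Z_SYNC_FLUSH)   # emit item boundary, keep the context
    prev = item
compressed += comp.flush()
```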
Abstract:
Aim: To determine the theoretical and clinical minimum image pixel resolution and maximum compression appropriate for anterior eye image storage. Methods: Clinical images of the bulbar conjunctiva, palpebral conjunctiva, and corneal staining were taken at the maximum resolution of Nikon:CoolPix990 (2048 × 1360 pixels), DVC:1312C (1280 × 811), and JAI:CV-S3200 (767 × 569) single chip cameras and the JVC:KYF58 (767 × 569) three chip camera. The images were stored in TIFF format and further copies created with reduced resolution or compressed. The images were then ranked for clarity on a 15 inch monitor (resolution 1280 × 1024) by 20 optometrists and analysed by objective image analysis grading. Theoretical calculation of the resolution necessary to detect the smallest objects of clinical interest was also conducted. Results: Theoretical calculation suggested that the minimum resolution should be ≥579 horizontal pixels at 25 × magnification. Image quality was perceived subjectively as being reduced when the pixel resolution was lower than 767 × 569 (p<0.005) or the image was compressed as a BMP or <50% quality JPEG (p<0.005). Objective image analysis techniques were less susceptible to changes in image quality, particularly when using colour extraction techniques. Conclusion: It is appropriate to store anterior eye images at between 1280 × 811 and 767 × 569 pixel resolution and at up to 1:70 JPEG compression.
Abstract:
The focus of this thesis is text data compression based on the fundamental coding scheme known as the American Standard Code for Information Interchange (ASCII). The research objective is the development of software algorithms that yield significant compression of text data. Past and current compression techniques have been thoroughly reviewed to allow a proper comparison between the compression results of the proposed technique and those of existing ones. The research problem stems from the need to achieve higher compression of text files in order to save valuable memory space and increase the transmission rate of these files. The compression algorithm to be developed had to be effective even for small files and able to handle uncommon words, which are added to the dictionary dynamically as they are encountered. A critical design aspect of this compression technique is its compatibility with existing compression techniques: the developed algorithm can be used in conjunction with existing techniques to yield even higher compression ratios. This thesis demonstrates such capabilities and outcomes, and the research objective of achieving a higher compression ratio is attained.
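The abstract does not specify the coding scheme, so the following is a purely hypothetical illustration of the kind of word-level dynamic dictionary coder it describes: new words are added to the dictionary as they are encountered, repeated words are replaced by indices, and the output can still be passed through an existing compressor such as zlib for additional gains.

```python
# Hypothetical word-level dynamic dictionary coder, stacked with zlib.
import zlib

def dict_encode(text):
    dictionary = {}
    out = []
    for word in text.split():
        if word not in dictionary:
            dictionary[word] = len(dictionary)     # dynamically add unseen words
            out.append(f"+{word}")                 # literal: first occurrence
        else:
            out.append(f"#{dictionary[word]}")     # reference: dictionary index
    return " ".join(out)

sample = "the quick brown fox jumps over the lazy dog the quick brown fox"
encoded = dict_encode(sample)
stacked = zlib.compress(encoded.encode())          # stack with an existing compressor
```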
Abstract:
Lossless compression algorithms of the Lempel-Ziv (LZ) family are widely used nowadays. Regarding time and memory requirements, LZ encoding is much more demanding than decoding. In order to speed up the encoding process, efficient data structures, like suffix trees, have been used. In this paper, we explore the use of suffix arrays to hold the dictionary of the LZ encoder, and propose an algorithm to search over it. We show that the resulting encoder attains roughly the same compression ratios as those based on suffix trees. However, the amount of memory required by the suffix array is fixed, and much lower than the variable amount of memory used by encoders based on suffix trees (which depends on the text to encode). We conclude that suffix arrays, when compared to suffix trees in terms of the trade-off among time, memory, and compression ratio, may be preferable in scenarios (e.g., embedded systems) where memory is at a premium and high speed is not critical.
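A minimal sketch of the kind of search an LZ encoder performs over a suffix array: the dictionary's suffix array is binary-searched for the longest prefix of the look-ahead buffer. The toy suffix array construction below is quadratic and purely illustrative; the paper's encoder is more refined.

```python
# Longest-match search over a suffix array of the LZ dictionary.
import bisect

def suffix_array(s):
    """Toy O(n^2 log n) suffix array: indices of the suffixes of s in sorted order."""
    return sorted(range(len(s)), key=lambda i: s[i:])

def common_prefix_len(a, b):
    n = 0
    while n < min(len(a), len(b)) and a[n] == b[n]:
        n += 1
    return n

def longest_match(dictionary, lookahead):
    """Longest prefix of `lookahead` occurring in `dictionary`, via its suffix array.

    Returns (position_in_dictionary, match_length).
    """
    sa = suffix_array(dictionary)
    suffixes = [dictionary[i:] for i in sa]
    pos = bisect.bisect_left(suffixes, lookahead)
    best = (0, 0)
    # The longest match shares the longest prefix with one of the two neighbours
    # of the look-ahead's insertion point in the sorted suffix list.
    for j in (pos - 1, pos):
        if 0 <= j < len(sa):
            l = common_prefix_len(suffixes[j], lookahead)
            if l > best[1]:
                best = (sa[j], l)
    return best

# Usage: find the longest match for the look-ahead inside the already-encoded text.
print(longest_match("abracadabra", "abrac"))   # -> (0, 5)
```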
Abstract:
Dissertation submitted for the degree of Master in Biomedical Engineering.