954 results for Fractal Image Coding


Relevance:

80.00%

Publisher:

Abstract:

The Fractal Image Informatics toolbox (Oleschko et al., 2008a; Torres-Argüelles et al., 2010) was applied to extract, classify and model the topological structure and dynamics of surface roughness in two highly eroded catchments of Mexico. Both areas are affected by gully erosion (Sidorchuk, 2005) and characterized by avalanche-like matter transport. Five contrasting morphological patterns were distinguished across the slope of the bare eroded surface of Faeozem (Queretaro State), while only one roughness pattern (apparently independent of the slope) was documented for Andosol (Michoacan State). We called these patterns "roughness clusters" and compared them in terms of metrizability, continuity, compactness, topological connectedness (global and local) and invariance, separability, and degree of ramification (Weyl, 1937). All of the mentioned topological measurands were correlated with the variance, skewness and kurtosis of the gray-level distribution of digital images. The morphology and spatial dynamics of the roughness clusters were measured and mapped with high precision in terms of fractal descriptors. The Hurst exponent was especially suitable for distinguishing between the structure of the "turtle shell" and "ramification" patterns (sediment-producing zone A of the slope), as well as the "honeycomb" (sediment-transport zone B) and the "dinosaur steps" and "corals" (sediment-deposition zone C) roughness clusters. Some other structural attributes of the studied patterns were also statistically different and correlated with the variance, skewness and kurtosis of the gray-level distribution of multiscale digital images. The scale invariance of the classified roughness patterns was documented across a range of five image resolutions. We conjecture that the geometrization of erosion patterns in terms of roughness clustering might benefit most semi-quantitative models developed for erosion and sediment yield assessments (de Vente and Poesen, 2005).
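
The abstract does not state which estimator produced the Hurst exponents; a common choice for roughness transects is the variogram (structure-function) method, sketched below under that assumption (function name and lag range are illustrative):

```python
import numpy as np

def hurst_exponent(profile, max_lag=32):
    """Variogram-style Hurst estimate for a 1-D roughness transect (e.g. one
    row of a grey-level image): E[(z(x+l) - z(x))^2] ~ l^(2H), so H is half
    the slope of the log-log structure function."""
    profile = np.asarray(profile, dtype=float)
    lags = np.arange(1, max_lag + 1)
    v = [np.mean((profile[l:] - profile[:-l]) ** 2) for l in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(v), 1)
    return slope / 2.0
```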

Relevance:

80.00%

Publisher:

Abstract:

The main objective of this paper is to present some tools for analyzing a digital chaotic signal. We have proposed some of them previously, such as a new type of phase diagram in which binary signals are converted to hexadecimal. Moreover, the main emphasis of this paper is on an analysis of the chaotic signal based on the Lempel-Ziv method. We have previously applied this technique to very short streams of data. In this paper we extend the method to long data trains (longer than 2000 bits). The main characteristics of the chaotic signal are obtained with this method, making it possible to give numerical values that indicate the properties of the chaos.
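
As a rough illustration of the Lempel-Ziv analysis the paper builds on, the following sketch counts the distinct phrases in an LZ76-style parsing of a binary string; the function name is ours and the paper's exact normalization is not specified here:

```python
def lempel_ziv_complexity(s):
    """Count phrases in an LZ76-style parsing of a binary string such as
    '0110101...'; more phrases means a more complex, chaos-like sequence."""
    i, c = 0, 0
    while i < len(s):
        l = 1
        # grow the phrase while it can still be reproduced from earlier data
        while i + l <= len(s) and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1          # a new phrase ends here
        i += l
    return c

# For long trains (e.g. > 2000 bits) the count is often normalized by
# n / log2(n), the expected phrase count of a random sequence, so that
# values near 1 indicate chaos-like complexity.
```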

Relevance:

80.00%

Publisher:

Abstract:

A new proposal for secure communications in a system is reported. The basis is the use of synchronized digital chaotic systems, sending the information signal added to an initial chaotic signal. The received signal is analyzed by another chaos generator located at the receiver and, by a Boolean logic function of the chaotic and received signals, the original information is recovered. One of the most important features of this system is that the bandwidth needed by the system remains the same with and without chaos.
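
As a toy illustration of the scheme (the paper specifies neither its chaos generator nor the Boolean function; a logistic-map keystream and XOR are our assumptions), note that XOR masking leaves the transmitted bitrate, and hence the required bandwidth, unchanged:

```python
import numpy as np

def chaotic_bits(x0, n, mu=3.99):
    """Binary keystream from a logistic map -- an illustrative stand-in for
    the paper's digital chaotic system."""
    x, bits = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = mu * x * (1.0 - x)
        bits[i] = 1 if x > 0.5 else 0
    return bits

msg = np.random.randint(0, 2, 64).astype(np.uint8)   # information signal
tx = msg ^ chaotic_bits(0.4, msg.size)                # masked with initial chaos
rx = tx ^ chaotic_bits(0.4, msg.size)                 # synchronized receiver chaos
assert np.array_equal(rx, msg)                        # original information recovered
```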

Relevance:

40.00%

Publisher:

Abstract:

Real-world images are complex objects, difficult to describe but at the same time possessing a high degree of redundancy. A very recent study [1] on the statistical properties of natural images reveals that natural images can be viewed through different partitions which are essentially fractal in nature. One particular fractal component, related to the most singular (sharpest) transitions in the image, seems to be highly informative about the whole scene. In this paper we will show how to decompose an image into its fractal components. We will see that the most singular component is related to (but not coincident with) the edges of the objects present in the scene. We will propose a new, simple method to reconstruct the image from the information contained in that most informative component. We will see that the quality of the reconstruction depends strongly on the capability to extract the relevant edges when determining the most singular set. We will discuss the results from the perspective of coding, proposing this method as a starting point for future developments.
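
A minimal sketch of this kind of reconstruction, assuming a gradient-magnitude quantile as a stand-in for the most singular set and a least-squares (Poisson) inversion of the masked gradient in the Fourier domain; the paper's actual singularity analysis and reconstruction kernel may differ:

```python
import numpy as np

def reconstruct_from_msm(image, quantile=0.8):
    """Keep only the sharpest transitions of the gradient field and invert
    the Poisson equation  div(grad I) = div(v)  in the Fourier domain."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    mask = mag >= np.quantile(mag, quantile)   # proxy for the most singular set
    vx, vy = gx * mask, gy * mask
    fy = np.fft.fftfreq(image.shape[0])[:, None]
    fx = np.fft.fftfreq(image.shape[1])[None, :]
    f2 = fx ** 2 + fy ** 2
    f2[0, 0] = 1.0                             # avoid dividing by zero at DC
    num = fx * np.fft.fft2(vx) + fy * np.fft.fft2(vy)
    rec = np.real(np.fft.ifft2(-1j * num / (2 * np.pi * f2)))
    return rec + image.mean() - rec.mean()     # restore the lost mean level
```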

Relevance:

40.00%

Publisher:

Abstract:

A jet impingement erosion test rig has been used to erode titanium alloy specimens (Ti-4Al-4V). Eroded surface profiles have been obtained by the vertical sectioning method for light microscopy observation. Mixed fractals have been measured from profile images by a digital image processing and analysis technique. The use of this technique allows a glimpse of a quantitative correlation among material properties, fractal surface topography and erosion phenomena. (C) 2002 Elsevier B.V. All rights reserved.
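
One standard estimator for such measurements is box counting; this sketch (function name, box sizes and binarization left illustrative, and the image assumed at least as large as the biggest box) computes the box-counting dimension of a binarized profile image:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the box-counting (fractal) dimension of a binary image.

    mask : 2-D boolean array marking the profile/contour pixels.
    """
    counts = []
    for s in sizes:
        # trim so the image tiles exactly into s x s boxes
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        tiled = mask[:h, :w].reshape(h // s, s, w // s, s)
        # count boxes that contain at least one profile pixel
        counts.append(tiled.any(axis=(1, 3)).sum())
    # slope of log N(s) vs log(1/s) gives the dimension estimate
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```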

Relevance:

40.00%

Publisher:

Abstract:

Edible oil is an important contaminant in water and wastewater. Oil droplets smaller than 40 μm may remain in effluent as an emulsion and combine with other contaminants in the water. Coagulation/flocculation processes are used to remove oil droplets from water and wastewater. By adding a polymer at the proper dose, small oil droplets can be flocculated and separated from water. The purpose of this study was to characterize and analyze the morphology of flocs and floc formation in edible oil-water emulsions by using microscopic image analysis techniques. The fractal dimension, concentration of polymer, and effects of pH and temperature were investigated and analyzed to develop a fractal model of the flocs. Three types of edible oil (corn, olive, and sunflower oil) at concentrations of 600 ppm (by volume) were used to determine the optimum polymer dosage and the effects of pH and temperature. To find the optimum polymer dose, polymer was added to the oil-water emulsions at concentrations of 0.5, 1.0, 1.5, 2.0, 3.0 and 3.5 ppm (by volume). The clearest supernatants obtained from flocculation of corn, olive, and sunflower oil were achieved at a polymer dosage of 3.0 ppm, producing turbidities of 4.52, 12.90, and 13.10 NTU, respectively. This concentration of polymer was subsequently used to study the effect of pH and temperature on flocculation. The effect of pH was studied at pH 5, 7, 9, and 11 at 30°C. Microscopic image analysis was used to investigate the morphology of flocs in terms of fractal dimension, radius of oil droplets trapped in the floc, floc size, and histograms of oil droplet distribution. The fractal dimension indicates the density of oil droplets captured in flocs. By comparison of fractal dimensions, pH was found to be one of the most important factors controlling droplet flocculation. Neutral pH (pH 7) showed the highest degree of flocculation, while acidic (pH 5) and basic (pH 9 and 11) conditions showed low flocculation efficiency. The fractal dimensions obtained from flocculation of corn, olive, and sunflower oil at pH 7 and 30°C were 1.2763, 1.3592, and 1.4413, respectively. The effect of temperature was explored at 20°, 30°, and 40°C and pH 7. The results of flocculation at pH 7 and different temperatures revealed that temperature significantly affected flocculation: the fractal dimensions of flocs formed at 20°, 30°, and 40°C were 1.82, 1.28, and 1.29 for corn oil; 1.62, 1.36, and 1.42 for olive oil; and 1.36, 1.44, and 1.28 for sunflower oil, respectively. After comparison of the fractal dimension, radius of captured oil droplets, and floc length for each oil type, the optimal flocculation temperature was determined to be 30°C.
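
The abstract does not detail the fractal-dimension estimator; a common image-analysis choice for flocs is the mass-radius relation M(R) ~ R^D, sketched here under that assumption for a segmented (binary) floc image:

```python
import numpy as np

def mass_radius_dimension(floc_mask, n_radii=12):
    """Estimate a floc's fractal dimension from the mass-radius relation
    M(R) ~ R**D, using pixel counts inside growing circles about the centroid."""
    ys, xs = np.nonzero(floc_mask)
    cy, cx = ys.mean(), xs.mean()            # floc centroid
    r = np.hypot(ys - cy, xs - cx)
    radii = np.linspace(r.max() / n_radii, r.max(), n_radii)
    mass = [(r <= R).sum() for R in radii]   # pixels within radius R
    slope, _ = np.polyfit(np.log(radii), np.log(mass), 1)
    return slope
```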

Relevance:

40.00%

Publisher:

Abstract:

This work presents a recent image compression method based on the theory of Iterated Function Systems (IFS), known as Fractal Compression. A continuous model for fractal compression over the complete metric space Lp is described, on which a contractive fractal transform operator associated with a local IFS with maps is defined. Before that, the theory of IFSs on the Hausdorff space, or fractal space, the theory of Local IFSs (a generalization of IFSs), and the theory of IFSs on the Lp space are introduced. With the theoretical foundation of the method in place, the fractal compression algorithm is presented in detail. Some partitioning strategies needed to find the IFS with maps are also described, as well as some strategies that try to mitigate the biggest obstacle of fractal compression: the encoding complexity. This dissertation is essentially theoretical and descriptive in character, covering the fractal compression method and some techniques, already implemented, for improving its effectiveness.
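
To make the encoding-complexity bottleneck concrete, here is a minimal sketch of PIFS-style fractal encoding (block sizes, the non-overlapping domain pool, and names are illustrative simplifications; image dimensions are assumed divisible by the block sizes). The brute-force domain search in the inner loop is precisely what the partitioning and search strategies above try to tame:

```python
import numpy as np

def fractal_encode(img, r=4):
    """Minimal PIFS-style encoder sketch: for each r x r range block, find the
    downsampled 2r x 2r domain block and affine grey map s*d + o that fit best."""
    h, w = img.shape
    domains = []
    # domain pool: non-overlapping 2r x 2r blocks, averaged down to r x r
    for y in range(0, h - 2 * r + 1, 2 * r):
        for x in range(0, w - 2 * r + 1, 2 * r):
            d = img[y:y + 2 * r, x:x + 2 * r].astype(float)
            domains.append(((y, x), d.reshape(r, 2, r, 2).mean(axis=(1, 3))))
    code = []
    for y in range(0, h, r):
        for x in range(0, w, r):
            rng = img[y:y + r, x:x + r].astype(float)
            best = None
            for pos, d in domains:          # brute-force search: the bottleneck
                s, o = np.polyfit(d.ravel(), rng.ravel(), 1)  # least-squares grey map
                s = float(np.clip(s, -0.9, 0.9))              # keep the map contractive
                err = float(((s * d + o - rng) ** 2).sum())
                if best is None or err < best[0]:
                    best = (err, pos, s, o)
            code.append(((y, x),) + best[1:])  # (range pos, domain pos, s, o)
    return code
```

Decoding simply iterates the stored maps from an arbitrary initial image; contractivity guarantees convergence to an attractor approximating the original.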

Relevance:

30.00%

Publisher:

Abstract:

Wyner-Ziv (WZ) video coding is a particular case of distributed video coding, the recent video coding paradigm based on the Slepian-Wolf and Wyner-Ziv theorems that exploits the source correlation at the decoder and not at the encoder, as in predictive video coding. Although many improvements have been made over the last years, the performance of state-of-the-art WZ video codecs still does not reach that of state-of-the-art predictive video codecs, especially for high and complex motion video content. This is also true in terms of subjective image quality, mainly because of the considerable amount of blocking artefacts present in the decoded WZ video frames. This paper proposes an adaptive deblocking filter to improve both the subjective and objective quality of the WZ frames in a transform domain WZ video codec. The proposed filter is an adaptation to a WZ video codec of the advanced deblocking filter defined in the H.264/AVC (advanced video coding) standard. The results obtained confirm the subjective quality improvement and objective quality gains, which can reach 0.63 dB overall for sequences with high motion content when large groups of pictures are used.
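
As an illustration of the filter's principle (thresholds and the update formula below are simplified stand-ins, not the exact H.264/AVC design), a boundary is smoothed only when the step across it is small enough to be a coding artefact rather than a real edge:

```python
import numpy as np

def deblock_boundary(p, q, alpha=8.0, beta=2.0):
    """1-D sketch loosely modeled on the H.264/AVC deblocking filter: p and q
    hold pixels on each side of a block edge, p[0] and q[0] adjacent to it."""
    p, q = p.astype(float).copy(), q.astype(float).copy()
    # filter only small steps (likely coding artefacts), keep genuine edges
    if (abs(p[0] - q[0]) < alpha and abs(p[1] - p[0]) < beta
            and abs(q[1] - q[0]) < beta):
        delta = (4.0 * (q[0] - p[0]) + (p[1] - q[1])) / 8.0
        p[0] += delta
        q[0] -= delta
    return p, q
```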

Relevance:

30.00%

Publisher:

Abstract:

Wyner-Ziv (WZ) video coding is a particular case of distributed video coding (DVC), the recent video coding paradigm based on the Slepian-Wolf and Wyner-Ziv theorems which exploits the source temporal correlation at the decoder and not at the encoder, as in predictive video coding. Although some progress has been made in recent years, WZ video coding is still far from the compression performance of predictive video coding, especially for high and complex motion content. The WZ video codec adopted in this study is based on a transform domain WZ video coding architecture with feedback channel-driven rate control, whose modules have been improved with some recent coding tools. This study proposes a novel motion learning approach that successively improves the rate-distortion (RD) performance of the WZ video codec as decoding proceeds, making use of the already decoded transform bands to improve the decoding process for the remaining transform bands. The results obtained reveal gains of up to 2.3 dB in the RD curves against the performance of the same codec without the proposed motion learning approach, for high motion sequences and long group of pictures (GOP) sizes.
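
A much-simplified sketch of the idea (our reading of the approach, not the paper's algorithm): among candidate side-information blocks produced by different motion vectors, keep the one whose already-decoded transform bands agree best with the decoded data:

```python
import numpy as np

def refine_block_si(candidates, decoded, band_idx):
    """Motion-learning flavored refinement sketch: 'candidates' holds the
    transform coefficients of side-information blocks from different motion
    vectors; pick the one closest to 'decoded' on the bands in band_idx."""
    best, best_err = None, np.inf
    for cand in candidates:
        err = np.abs(cand[band_idx] - decoded[band_idx]).sum()
        if err < best_err:
            best, best_err = cand, err
    return best
```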

Relevance:

30.00%

Publisher:

Abstract:

Recently, several distributed video coding (DVC) solutions based on the distributed source coding (DSC) paradigm have appeared in the literature. Wyner-Ziv (WZ) video coding, a particular case of DVC where side information is made available at the decoder, enables a flexible distribution of the computational complexity between the encoder and decoder, promising to fulfill novel requirements from applications such as video surveillance, sensor networks and mobile camera phones. The quality of the side information at the decoder has a critical role in determining the WZ video coding rate-distortion (RD) performance, notably in raising it to a level as close as possible to the RD performance of standard predictive video coding schemes. Towards this target, efficient motion search algorithms for powerful frame interpolation are much needed at the decoder. In this paper, the RD performance of a Wyner-Ziv video codec is improved by using novel, advanced motion-compensated frame interpolation techniques to generate the side information. The development of this type of side information estimator is a difficult problem in WZ video coding, especially because the decoder only has some decoded reference frames available. Based on the regularization of the motion field, novel side information creation techniques are proposed in this paper, along with a new frame interpolation framework able to generate higher quality side information at the decoder. To illustrate the RD performance improvements, this novel side information creation framework has been integrated in a transform domain turbo coding based Wyner-Ziv video codec. Experimental results show that the novel side information creation solution leads to better RD performance than available state-of-the-art side information estimators, with improvements up to 2 dB; moreover, it outperforms H.264/AVC Intra by up to 3 dB with a lower encoding complexity.
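
A minimal sketch of bidirectional motion-compensated frame interpolation with a symmetric motion assumption (exhaustive search, illustrative block and search sizes, and none of the paper's motion-field regularization):

```python
import numpy as np

def mcfi_side_info(prev, nxt, block=8, search=4):
    """Synthesize a side-information frame halfway between two decoded key
    frames: for each block, find the symmetric motion vector that best matches
    the past and future frames, then average the two matched blocks."""
    h, w = prev.shape
    si = np.zeros((h, w), dtype=float)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            best, pair = np.inf, None
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yp, xp = y + dy, x + dx        # block in the past frame
                    yn, xn = y - dy, x - dx        # mirrored block in the future frame
                    if not (0 <= yp <= h - block and 0 <= xp <= w - block
                            and 0 <= yn <= h - block and 0 <= xn <= w - block):
                        continue
                    b0 = prev[yp:yp + block, xp:xp + block].astype(float)
                    b1 = nxt[yn:yn + block, xn:xn + block].astype(float)
                    err = np.abs(b0 - b1).sum()    # symmetric matching error
                    if err < best:
                        best, pair = err, (b0, b1)
            si[y:y + block, x:x + block] = 0.5 * (pair[0] + pair[1])
    return si
```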

Relevance:

30.00%

Publisher:

Abstract:

Video coding technologies have played a major role in the explosion of large-market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, where complex encoders exploit the data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm fits well applications and services such as digital television and video storage, where the decoder complexity is critical, but does not match well the requirements of emerging applications such as visual sensor networks, where the encoder complexity is more critical. The Slepian-Wolf and Wyner-Ziv theorems brought the possibility to develop the so-called Wyner-Ziv video codecs, following a different coding paradigm where it is the task of the decoder, and no longer of the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding does not incur any compression performance penalty with regard to the more traditional predictive coding paradigm (at least under certain conditions). In the context of Wyner-Ziv video codecs, the so-called side information, which is a decoder estimate of the original frame to code, plays a critical role in the overall compression performance. For this reason, much research effort has been invested in the past decade to develop increasingly more efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods after proposing a classification taxonomy to guide this review, allowing more solid conclusions to be reached and the next relevant research challenges to be better identified. After classifying the side information creation methods into four classes, notably guess, try, hint and learn, the review of the most important techniques in each class and the evaluation of some of them lead to the important conclusion that the side information creation methods provide better rate-distortion (RD) performance depending on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach. The best solutions are already able to systematically outperform H.264/AVC Intra, and also the H.264/AVC zero-motion standard solution for specific types of content. (C) 2013 Elsevier B.V. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

In distributed video coding, motion estimation is typically performed at the decoder to generate the side information, increasing the decoder complexity while providing low complexity encoding in comparison with predictive video coding. Motion estimation can be performed once to create the side information or several times to refine the side information quality along the decoding process. In this paper, motion estimation is performed at the decoder side to generate multiple side information hypotheses which are adaptively and dynamically combined whenever additional decoded information is available. The proposed iterative side information creation algorithm is inspired by video denoising filters and requires some statistics of the virtual channel between each side information hypothesis and the original data. With the proposed denoising algorithm for side information creation, an RD performance gain of up to 1.2 dB is obtained for the same bitrate.
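
The combination step can be pictured as inverse-variance weighting driven by the virtual-channel statistics; this is a simplified, static sketch of what the paper does adaptively during decoding:

```python
import numpy as np

def fuse_hypotheses(hypotheses, noise_vars):
    """Combine several side-information hypotheses with inverse-variance
    weights, in the spirit of multi-hypothesis denoising; the weights and
    statistics here are illustrative, not the paper's adaptive scheme."""
    w = 1.0 / np.asarray(noise_vars, dtype=float)   # more reliable -> larger weight
    w /= w.sum()
    return sum(wi * h.astype(float) for wi, h in zip(w, hypotheses))
```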

Relevance:

30.00%

Publisher:

Abstract:

Low-density parity-check (LDPC) codes are nowadays one of the hottest topics in coding theory, notably due to their advantages in terms of bit error rate performance and low complexity. In order to exploit the potential of the Wyner-Ziv coding paradigm, practical distributed video coding (DVC) schemes should use powerful error correcting codes with near-capacity performance. In this paper, new ways to design LDPC codes for the DVC paradigm are proposed and studied. The new LDPC solutions rely on merging parity-check nodes, which corresponds to reducing the number of rows in the parity-check matrix. This allows the compression ratio of the source (DCT coefficient bitplane) to be changed gracefully according to the correlation between the original and the side information. The proposed LDPC codes reach a good performance for a wide range of source correlations and achieve a better RD performance when compared to the popular turbo codes.
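
The merging operation itself is simple GF(2) row addition, as sketched below; choosing which checks to merge is the design problem the paper actually studies:

```python
import numpy as np

def merge_check_nodes(H, pairs):
    """Merge pairs of parity-check rows of a binary H (GF(2) addition),
    reducing the syndrome length and hence raising the compression ratio.
    'pairs' lists (keep, drop) row indices -- illustrative choice here."""
    H = H.copy() % 2
    keep = np.ones(H.shape[0], dtype=bool)
    for i, j in pairs:
        H[i] = (H[i] + H[j]) % 2   # merged check = XOR of the two checks
        keep[j] = False            # the merged-away row is removed
    return H[keep]

# The syndrome of the merged code is the XOR of the corresponding original
# syndrome bits, so fewer bits are sent when the side-information
# correlation is high.
```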

Relevance:

30.00%

Publisher:

Abstract:

In visual sensor networks, local feature descriptors can be computed at the sensing nodes, which work collaboratively on the acquired data to perform efficient visual analysis. In fact, with a minimal amount of computational effort, the detection and extraction of local features, such as binary descriptors, can provide a reliable and compact image representation. In this paper, it is proposed to extract and code binary descriptors to meet the energy and bandwidth constraints at each sensing node. The major contribution is a binary descriptor coding technique that exploits the correlation using two different coding modes: Intra, which exploits the correlation between the elements that compose a descriptor; and Inter, which exploits the correlation between descriptors of the same image. The experimental results show bitrate savings of up to 35% without any impact on the efficiency of the image retrieval task. © 2014 EURASIP.
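
The Inter mode can be pictured as XOR prediction between correlated descriptors; this sketch (our illustration, with an ideal-entropy estimate standing in for the real entropy coder) shows why a sparse residual saves bits:

```python
import numpy as np

def inter_code_descriptor(desc, ref):
    """XOR a binary descriptor (0/1 array) with a correlated reference
    descriptor from the same image; the sparse residual is cheap to entropy
    code, estimated here by its empirical bit entropy."""
    res = np.bitwise_xor(desc, ref)
    p = min(max(res.mean(), 1e-9), 1 - 1e-9)    # probability of a '1' in the residual
    bits = -(p * np.log2(p) + (1 - p) * np.log2(1 - p)) * res.size
    return res, bits                             # residual and estimated coded size
```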