9 results for Code compression

in Repositório Científico do Instituto Politécnico de Lisboa - Portugal


Relevance:

30.00%

Abstract:

Low-density parity-check (LDPC) codes are nowadays one of the hottest topics in coding theory, notably due to their advantages in terms of bit error rate performance and low complexity. In order to exploit the potential of the Wyner-Ziv coding paradigm, practical distributed video coding (DVC) schemes should use powerful error-correcting codes with near-capacity performance. In this paper, new ways to design LDPC codes for the DVC paradigm are proposed and studied. The new LDPC solutions rely on merging parity-check nodes, which corresponds to reducing the number of rows in the parity-check matrix. This allows the compression ratio of the source (DCT coefficient bitplane) to be changed gracefully according to the correlation between the original and the side information. The proposed LDPC codes perform well over a wide range of source correlations and achieve better rate-distortion (RD) performance than the popular turbo codes.
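
To illustrate the check-node merging idea: in a syndrome-based Wyner-Ziv setup the encoder transmits s = Hx over GF(2), so merging two rows of H into their XOR yields one fewer syndrome bit and hence a higher compression ratio. The Python sketch below shows only this mechanism, not the authors' actual code design; the toy matrix and names are invented for the example.

import numpy as np

def merge_check_nodes(H, pairs):
    """Merge the given row pairs of a binary parity-check matrix over GF(2).
    Each pair (i, j) is replaced by the single row H[i] XOR H[j], so the
    syndrome gets shorter and the compression ratio increases."""
    merged = [H[i] ^ H[j] for i, j in pairs]
    used = {r for pair in pairs for r in pair}
    kept = [H[r] for r in range(H.shape[0]) if r not in used]
    return np.array(merged + kept, dtype=np.uint8)

# Toy 4x8 parity-check matrix: 4 syndrome bits for 8 source bits (ratio 1/2).
H = np.array([[1, 1, 0, 1, 0, 0, 1, 0],
              [0, 1, 1, 0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0, 1, 0, 0],
              [0, 0, 0, 1, 1, 1, 0, 1]], dtype=np.uint8)
H2 = merge_check_nodes(H, [(0, 1)])   # rows 0 and 1 become one check node
x = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
print((H2 @ x) % 2)                   # 3 syndrome bits instead of 4 (ratio 3/8)

When the side information is well correlated with the source, fewer syndrome bits suffice for successful decoding, which is exactly the graceful rate adaptation described above.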

Relevance:

20.00%

Abstract:

The Wyner-Ziv video coding (WZVC) rate-distortion performance is highly dependent on the quality of the side information, an estimate of the original frame created at the decoder. This paper characterizes WZVC efficiency when motion-compensated frame interpolation (MCFI) techniques are used to generate the side information, a difficult problem in WZVC, especially because the decoder only has some reference decoded frames available. The proposed WZVC compression efficiency rate model relates the power spectral density of the estimation error to the accuracy of the MCFI motion field. From this model, conclusions can be drawn about the impact of the motion field smoothness, and of its correlation to the true motion trajectories, on the compression performance.
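
For intuition on how an error power spectrum maps to a coding rate, the textbook reverse water-filling solution for a stationary Gaussian source can be computed numerically, as sketched below in Python. This is only the generic Gaussian rate-distortion formula, not the specific MCFI-based model proposed in the paper, and the example spectra are invented.

import numpy as np

def rate_from_psd(psd, distortion):
    """Bits/sample for a stationary Gaussian source with sampled PSD `psd`
    at target MSE `distortion`, via reverse water-filling: bisect on the
    water level theta, then each bin with psd > theta contributes
    0.5 * log2(psd / theta)."""
    lo, hi = 0.0, float(psd.max())
    for _ in range(60):
        theta = 0.5 * (lo + hi)
        if np.minimum(psd, theta).mean() < distortion:
            lo = theta
        else:
            hi = theta
    theta = 0.5 * (lo + hi)
    return float(np.maximum(0.0, 0.5 * np.log2(psd / theta)).mean())

# A peaky error spectrum (poorly estimated motion) costs more bits than a
# flat low-power one (accurate, smooth motion field) at the same MSE.
w = np.linspace(0.0, np.pi, 256)
psd_poor_si = 4.0 / (1.1 - np.cos(w))
psd_good_si = np.full_like(w, 0.5)
print(rate_from_psd(psd_poor_si, 0.1), rate_from_psd(psd_good_si, 0.1))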

Relevance:

20.00%

Abstract:

Lossless compression algorithms of the Lempel-Ziv (LZ) family are widely used nowadays. Regarding time and memory requirements, LZ encoding is much more demanding than decoding. In order to speed up the encoding process, efficient data structures, like suffix trees, have been used. In this paper, we explore the use of suffix arrays to hold the dictionary of the LZ encoder, and propose an algorithm to search over it. We show that the resulting encoder attains roughly the same compression ratios as those based on suffix trees. However, the amount of memory required by the suffix array is fixed, and much lower than the variable amount of memory used by encoders based on suffix trees (which depends on the text to encode). We conclude that suffix arrays, when compared to suffix trees in terms of the trade-off among time, memory, and compression ratio, may be preferable in scenarios (e.g., embedded systems) where memory is at a premium and high speed is not critical.
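
The key encoder operation is finding the longest dictionary match for the current lookahead. Below is a minimal Python sketch of such a search over a suffix array (it needs Python 3.10+ for bisect's key argument); it illustrates the general technique rather than the specific algorithm proposed in the paper. The useful property: the suffix with the longest common prefix with the pattern is lexicographically adjacent to the pattern's insertion point, so one binary search suffices.

import bisect

def build_suffix_array(text):
    """Naive O(n^2 log n) construction, fine for a demo; linear-time
    algorithms exist for real encoders."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def longest_match(text, sa, pattern):
    """Return (position, length) of the longest prefix of `pattern`
    occurring in `text`, via one binary search over the suffix array."""
    idx = bisect.bisect_left(sa, pattern, key=lambda i: text[i:])
    best_pos, best_len = -1, 0
    for j in (idx - 1, idx):              # neighbours of the insertion point
        if 0 <= j < len(sa):
            suffix = text[sa[j]:]
            k = 0
            while k < min(len(pattern), len(suffix)) and pattern[k] == suffix[k]:
                k += 1
            if k > best_len:
                best_pos, best_len = sa[j], k
    return best_pos, best_len

dictionary = "abracadabra"
sa = build_suffix_array(dictionary)
print(longest_match(dictionary, sa, "acada"))   # -> (3, 5)

Note that the suffix array itself is just one integer per dictionary position, which is the fixed, text-independent memory footprint referred to above.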

Relevance:

20.00%

Abstract:

One of the major problems preventing the spread of elections with remote voting over electronic networks, also called Internet voting, is the use of unreliable client platforms, such as the voter's computer and the Internet infrastructure connecting it to the election server. A computer connected to the Internet is exposed to viruses, worms, Trojans, spyware and other malware that can compromise the election's integrity. For instance, it is possible to write a virus that changes the voter's vote to a predetermined vote on election day. Another possible attack is the creation of a fake election web site, where a malicious vote program manipulates the voter's vote (a phishing/pharming attack). Such attacks may not disturb the election protocol and can therefore remain undetected in the eyes of the election auditors. We propose the use of Code Voting to overcome the insecurity of the client platform. Code Voting consists of creating a secure communication channel for the voter's vote between the voter and a trusted component attached to the voter's computer. Consequently, no one controlling the voter's computer can change his or her vote. The trusted component can then process the vote according to a cryptographic voting protocol to enable cryptographic verification at the server's side.
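
The flavour of Code Voting can be conveyed with the classic code-sheet mechanism, sketched below in Python. This is a simplified illustration of the general idea, not the protocol of the paper (which adds a trusted component and cryptographic processing on top); all names are invented.

import secrets

def make_code_sheet(candidates, code_len=8):
    """One private code sheet per voter (candidate -> random vote code),
    delivered out of band (e.g. by post), so the possibly compromised PC
    never learns the mapping."""
    return {c: secrets.token_hex(code_len // 2) for c in candidates}

sheet = make_code_sheet(["Alice", "Bob", "Carol"])   # held by the server

# The voter types the code printed next to their candidate; malware on
# the PC sees only an opaque code and knows no other valid codes, so it
# cannot swap the vote for a chosen candidate.
cast_code = sheet["Bob"]

# Server side: map the received code back to the candidate.
reverse = {code: cand for cand, code in sheet.items()}
print(reverse[cast_code])   # -> Bob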

Relevance:

20.00%

Abstract:

The effects of the Miocene through Present compression in the Tagus Abyssal Plain are mapped using the most up-to-date multi-channel seismic reflection and refraction data available to the scientific community. Correlation of the rift basin fault pattern with the deep crustal structure is presented along seismic line IAM-5. Four structural domains were recognized. In the oceanic realm, mild deformation concentrates in Domain 1, adjacent to the Tore-Madeira Rise. Domain 2 is characterized by the absence of shortening structures, except near the ocean-continent transition (OCT), implying that Miocene deformation did not propagate into the abyssal plain. In Domain 3 we distinguish three sub-domains: Sub-domain 3A, which coincides with the OCT; Sub-domain 3B, a highly deformed adjacent continental segment; and Sub-domain 3C. The Miocene tectonic inversion is mainly accommodated in Domain 3 by thrusting directed oceanwards at the ocean-continent transition and continentwards on the continental slope. Domain 4 corresponds to the non-rifted continental margin, where only minor extensional and shortening deformation structures are observed. Finite element numerical models address the response of the various domains to the Miocene compression, emphasizing the long-wavelength differential vertical movements and the role of possible rheologic contrasts. The concentration of the Miocene deformation in the transitional zone (TC), which comprises Sub-domain 3A and part of 3B, is a result of two main factors: (1) focusing of compression in an already stressed region due to plate curvature and sediment loading; and (2) rheological weakening. We estimate that the frictional strength in the TC is reduced by 30% relative to the surrounding regions. A model of compressive deformation propagation by means of horizontal impingement of the middle continental crust rift wedge and horizontal shearing on serpentinized mantle in the oceanic realm is presented. This model is consistent with both the geological interpretation of the seismic data and the results of numerical modelling.

Relevance:

20.00%

Abstract:

Video coding technologies have played a major role in the explosion of large-market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, where complex encoders exploit data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm fits applications and services such as digital television and video storage, where the decoder complexity is critical, but does not match well the requirements of emerging applications such as visual sensor networks, where the encoder complexity is more critical. The Slepian-Wolf and Wyner-Ziv theorems brought the possibility to develop so-called Wyner-Ziv video codecs, following a different coding paradigm where it is the task of the decoder, and no longer of the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding does not incur any compression performance penalty relative to the more traditional predictive coding paradigm (at least under certain conditions). In the context of Wyner-Ziv video codecs, the so-called side information, a decoder estimate of the original frame to code, plays a critical role in the overall compression performance. For this reason, much research effort has been invested in the past decade to develop increasingly efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods after proposing a classification taxonomy to guide the review, allowing more solid conclusions to be reached and the next relevant research challenges to be better identified. After classifying the side information creation methods into four classes, notably guess, try, hint and learn, the review of the most important techniques in each class, and the evaluation of some of them, leads to the important conclusion that the side information creation method offering the best rate-distortion (RD) performance depends on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach. The best solutions are already able to systematically outperform H.264/AVC Intra, and also the H.264/AVC zero-motion standard solutions for specific types of content. (C) 2013 Elsevier B.V. All rights reserved.

Relevance:

20.00%

Abstract:

Purpose: To evaluate which type of breast compression (gradual or non-gradual) causes less discomfort to the patient. Methods and Materials: The standard projections, craniocaudal (CC) and mediolateral oblique (MLO), were simulated with the two types of breast compression in 90 women volunteers aged between 19 and 86. The women were organised in groups according to breast density. The intensity of discomfort was evaluated at the end of each simulation using the Wong-Baker faces scale (0-10). A focus group interview was also conducted to debate the scores attributed during the pain evaluation and to identify the criteria considered in the classification. Results: The women aged 19-29 (with higher breast density) rated the pain as 4 for non-gradual compression and as 2 for gradual compression, for both projections. The MLO projection was considered the most uncomfortable. In the focus group interview with this group, it was highlighted that compression caused not pain but discomfort; the women considered that their high expectations of pain did not correspond to the discomfort they actually felt. Similar results were found for the older women (30-50; >50). Conclusion: Radiographers should consider the technique used for breast compression. Gradual compression was considered by the majority of the women to be the most comfortable, regardless of breast density. The MLO projection was considered uncomfortable due to the positioning (axilla and inclusion of the pectoral muscle) and the greater breast compression compared with the CC projection.

Relevance:

20.00%

Abstract:

The growing heterogeneity of networks, devices and consumption conditions calls for flexible and adaptive video coding solutions. The compression power of the HEVC standard and the benefits of the distributed video coding paradigm make it possible to design novel scalable coding solutions with improved error robustness and low encoding complexity, while still achieving competitive compression efficiency. In this context, this paper proposes a novel scalable video coding scheme using an HEVC Intra compliant base layer and a distributed coding approach in the enhancement layers (EL). This design inherits the HEVC compression efficiency while providing low encoding complexity at the enhancement layers. The temporal correlation is exploited at the decoder to create the EL side information (SI) residue, an estimate of the original residue. The EL encoder sends only the data that cannot be inferred at the decoder, thus exploiting the correlation between the original and SI residues; however, this correlation must be characterized with an accurate correlation model to obtain coding efficiency improvements. Therefore, this paper proposes a correlation modeling solution to be used at both encoder and decoder, without requiring a feedback channel. Experimental results confirm that the proposed scalable coding scheme has lower encoding complexity and provides BD-Rate savings of up to 3.43% in comparison with the HEVC Intra scalable extension under development. © 2014 IEEE.
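
For context, the correlation between original and SI data in DVC is very often characterized with a Laplacian model. The Python sketch below fits the Laplacian scale parameter to a residue difference; it is a generic illustration of that kind of model, not the specific feedback-free encoder/decoder solution proposed in the paper, and the synthetic data are invented.

import numpy as np

def laplacian_alpha(orig_residue, si_residue):
    """Fit the scale parameter of a Laplacian correlation model:
    alpha = sqrt(2 / sigma^2), with sigma^2 the variance of the
    difference between original and side-information residues."""
    diff = np.asarray(orig_residue, float) - np.asarray(si_residue, float)
    return float(np.sqrt(2.0 / diff.var()))

def laplacian_pdf(x, alpha):
    """p(x) = (alpha / 2) * exp(-alpha * |x|): the modelled density of the
    original residue around its SI estimate."""
    return 0.5 * alpha * np.exp(-alpha * np.abs(x))

# Synthetic check: SI residue = original residue + Laplacian noise of
# scale b = 2, whose true alpha is 1 / b = 0.5.
rng = np.random.default_rng(0)
orig = rng.laplace(0.0, 4.0, size=10_000)
si = orig + rng.laplace(0.0, 2.0, size=10_000)
print(round(laplacian_alpha(orig, si), 3))   # close to 0.5

A smaller alpha means weaker correlation between original and SI residues, i.e. more enhancement-layer data must be sent.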

Relevance:

20.00%

Abstract:

Arguably, the most difficult task in text classification is to choose an appropriate set of features that allows machine learning algorithms to provide accurate classification. Most state-of-the-art techniques for this task involve careful feature engineering and a pre-processing stage, which may be too expensive in the emerging context of massive collections of electronic texts. In this paper, we propose efficient methods for text classification based on information-theoretic dissimilarity measures, which are used to define dissimilarity-based representations. These methods dispense with any feature design or engineering, by mapping texts into a feature space using universal dissimilarity measures; in this space, classical classifiers (e.g. nearest neighbor or support vector machines) can then be used. The reported experimental evaluation of the proposed methods, on sentiment polarity analysis and authorship attribution problems, reveals that they approximate, and sometimes even outperform, previous state-of-the-art techniques, despite being much simpler, in the sense that they require no text pre-processing or feature engineering.
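
One widely used universal dissimilarity of this kind is the Normalized Compression Distance (NCD), which approximates an information distance with an off-the-shelf compressor. The Python sketch below pairs it with a 1-nearest-neighbour classifier; it illustrates the general approach, not necessarily the exact measures used in the paper, and the toy training set is invented.

import zlib

def clen(s):
    """Compressed length: a computable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(s, 9))

def ncd(x, y):
    """NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy = clen(x), clen(y)
    return (clen(x + y) - min(cx, cy)) / max(cx, cy)

def classify(text, labelled):
    """1-NN in dissimilarity space: raw byte strings in, label out;
    no pre-processing, no feature engineering."""
    t = text.encode()
    return min(labelled, key=lambda pair: ncd(t, pair[0].encode()))[1]

train = [("i loved this film, wonderful acting", "pos"),
         ("great movie, really enjoyed it", "pos"),
         ("terrible plot and awful acting", "neg"),
         ("boring, a complete waste of time", "neg")]
print(classify("what a wonderful, enjoyable film", train))   # likely "pos"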