4 results for Decoding
at Bucknell University Digital Commons - Pennsylvania - USA
Abstract:
A new fragile logo watermarking scheme is proposed for public authentication and integrity verification of images. The security of the proposed block-wise scheme relies on a public encryption algorithm and a hash function. The encoding and decoding methods can provide public detection capabilities even in the absence of the image indices and the original logos. Furthermore, the detector automatically authenticates input images and extracts possible multiple logos and image indices, which can be used not only to localise tampered regions, but also to identify the original source of images used to generate counterfeit images. Results are reported to illustrate the effectiveness of the proposed method.
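The block-wise idea can be sketched as follows: hash each block's content together with the image index and block position, then embed the digest bits in the block's least-significant bits. This is a minimal illustration only; the public encryption step of the actual scheme is omitted, and all names and parameters (`image_index`, `block_pos`, the 8x8 block size) are hypothetical:

```python
import hashlib

def block_digest(block_pixels, image_index, block_pos):
    # Hash the block content together with the image index and position,
    # so a block copied from another image or location fails verification.
    h = hashlib.sha256()
    h.update(bytes(block_pixels))
    h.update(image_index.to_bytes(4, "big"))
    h.update(block_pos.to_bytes(4, "big"))
    return h.digest()

def bits_of(data, n):
    # First n bits of a byte string, MSB first.
    return [(data[i // 8] >> (7 - i % 8)) & 1 for i in range(n)]

def embed_in_lsbs(block_pixels, payload_bits):
    # Replace each pixel's least-significant bit with one payload bit.
    return [(p & ~1) | b for p, b in zip(block_pixels, payload_bits)]

# Toy 8x8 block of 8-bit pixels; digest is computed over LSB-zeroed
# pixels so that embedding does not invalidate the digest itself.
block = list(range(64))
digest = block_digest([p & ~1 for p in block], image_index=7, block_pos=0)
marked = embed_in_lsbs(block, bits_of(digest, 64))

# Verification: extract the LSBs and recompute the digest.
extracted = [p & 1 for p in marked]
recomputed = block_digest([p & ~1 for p in marked], image_index=7, block_pos=0)
assert extracted == bits_of(recomputed, 64)  # block is authentic
```

Any change to a block's content alters the recomputed digest, so the comparison fails for tampered regions; in the paper's scheme the digest would additionally pass through a public encryption algorithm.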
Abstract:
In this correspondence, a simple one-dimensional (1-D) differencing operation is applied to bilevel images prior to block coding to produce a sparse binary image that can be encoded efficiently using any of a number of well-known techniques. The difference image can be encoded more efficiently than the original bilevel image whenever the average run length of black pixels in the original image is greater than two. Compression is achieved because the correlation between adjacent pixels is reduced compared with the original image. The encoding/decoding operations are described and compression performance is presented for a set of standard bilevel images.
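A 1-D differencing operation of this kind can be realized as a pixel-wise XOR along each row: every run of black pixels is mapped to at most two transition pixels (its boundaries), so a run of length L contributes 2 ones instead of L, which is why the difference image is sparser whenever the average run length exceeds two. A minimal sketch, illustrative rather than the paper's exact encoder:

```python
def row_difference(row):
    # d[0] = row[0]; d[i] = row[i] XOR row[i-1] marks run boundaries.
    return [row[0]] + [row[i] ^ row[i - 1] for i in range(1, len(row))]

def row_reconstruct(diff):
    # Inverse: a running XOR (prefix parity) recovers the original row.
    out = [diff[0]]
    for b in diff[1:]:
        out.append(out[-1] ^ b)
    return out

# A row with black runs of lengths 5 and 3 (8 ones total):
row = [0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0]
d = row_difference(row)
assert row_reconstruct(d) == row   # differencing is lossless
assert sum(d) < sum(row)           # 4 ones instead of 8: sparser
```

The sparse difference rows can then be handed to any standard sparse-binary block coder; decoding simply applies the running XOR before reassembling the image.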
Abstract:
We present a new approach for corpus-based speech enhancement that significantly improves over a method published by Xiao and Nickel in 2010. Corpus-based enhancement systems do not merely filter an incoming noisy signal, but resynthesize its speech content via an inventory of pre-recorded clean signals. The goal of the procedure is to perceptually improve the sound of speech signals in background noise. The proposed new method modifies Xiao and Nickel's method in four significant ways. Firstly, it employs a Gaussian mixture model (GMM) instead of a vector quantizer in the phoneme recognition front-end. Secondly, the state decoding of the recognition stage is supported with an uncertainty modeling technique. With the GMM and the uncertainty modeling it is possible to eliminate the need for noise-dependent system training. Thirdly, the post-processing of the original method via sinusoidal modeling is replaced with a powerful cepstral smoothing operation. Lastly, these improvements make it possible to extend the operational bandwidth of the procedure from 4 kHz to 8 kHz. The performance of the proposed method was evaluated across different noise types and different signal-to-noise ratios. The new method was able to significantly outperform traditional methods, including the one by Xiao and Nickel, in terms of PESQ scores and other objective quality measures. Results of subjective CMOS tests over a smaller set of test samples support our claims.
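The cepstral smoothing step can be illustrated with generic low-quefrency liftering of a spectral gain curve: take the cepstrum of the log-gain, zero the high-quefrency bins, and transform back. This is a common textbook form of the operation, not necessarily the paper's exact implementation, and the `keep` parameter is illustrative:

```python
import numpy as np

def cepstral_smooth(gain, keep=12):
    # Cepstrum of the log-gain, treating the gain curve as a "spectrum".
    log_g = np.log(np.maximum(gain, 1e-8))
    cep = np.fft.irfft(log_g)
    # Low-pass lifter: discard fast bin-to-bin fluctuations of the gain.
    cep[keep:-keep] = 0.0
    return np.exp(np.fft.rfft(cep).real)

# A gain curve with a smooth trend plus rapid bin-to-bin fluctuation:
k = np.arange(129)
noisy_gain = (1.0 + 0.5 * np.cos(2 * np.pi * k / 129)) \
             * np.exp(0.3 * np.where(k % 2 == 0, 1.0, -1.0))
smooth_gain = cepstral_smooth(noisy_gain)
# smooth_gain keeps the slow trend while suppressing the fluctuation.
```

Smoothing the gain in the cepstral domain avoids the musical-noise artifacts that isolated spectral gain peaks tend to produce.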
Abstract:
Pesiqta Rabbati is a unique homiletic midrash that follows the liturgical calendar in its presentation of homilies for festivals and special Sabbaths. This article attempts to utilize Pesiqta Rabbati in order to present a global theory of the literary production of rabbinic/homiletic literature. With respect to Pesiqta Rabbati it explores such areas as dating, textual witnesses, integrative apocalyptic meta-narrative, describing and mapping the structure of the text, internal and external constraints that impacted upon the text, text-linguistic analysis, form-analysis (problems in the texts and linguistic gap-filling), transmission of the text, strict formalization of a homiletic unit, deconstructing and reconstructing homiletic midrashim based upon form-analytic units of the homily, Neusner’s documentary hypothesis, surface structures of the homiletic unit, and textual variants. The suggested methodology may assist scholars in their production of editions of midrashic works by eliminating superfluous material and in their decoding and defining of ancient texts.