846 results for compression functions


Relevance:

20.00%

Abstract:

In this article, techniques are presented for faster evolution of wavelet lifting coefficients for fingerprint image compression (FIC). In addition to increasing the computational speed by 81.35%, the evolved coefficients performed much better than the coefficients reported in the literature. Generally, full-size images are used for evolving wavelet coefficients, which is time-consuming. To overcome this, in this work wavelets were evolved with resized, cropped, resized-average and cropped-average images. On comparing the peak signal-to-noise ratios (PSNR) offered by the evolved wavelets, it was found that the cropped images outperformed the resized images and are on par with the results reported to date. Wavelet lifting coefficients evolved from an average of four 256×256 centre-cropped images took less than one-fifth of the evolution time reported in the literature and produced an improvement of 1.009 dB in average PSNR. Improvement in average PSNR was observed for other compression ratios (CR) and for degraded images as well. The proposed technique gave better PSNR at various bit rates with the set partitioning in hierarchical trees (SPIHT) coder, and the coefficients performed well with other fingerprint databases.
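
As a minimal sketch of the preprocessing this abstract describes, the snippet below centre-crops images, averages them into a single compact training image, and computes PSNR; the function names and the use of NumPy are illustrative assumptions, not the authors' code.

```python
import numpy as np

def center_crop(img, size=256):
    """Crop a size x size window from the centre of a 2-D image array."""
    h, w = img.shape
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def average_crop(images, size=256):
    """Average the centre crops of several images into one compact
    training image, as done with four fingerprint images above."""
    return np.mean([center_crop(im, size).astype(np.float64) for im in images], axis=0)

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized images."""
    mse = np.mean((np.asarray(original, np.float64) - reconstructed) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```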

Relevance:

20.00%

Abstract:

This paper explains the genetic algorithm (GA) evolution of an optimized wavelet that surpasses the CDF 9/7 wavelet for fingerprint compression and reconstruction. Optimized wavelets have been evolved in previous works in the literature, but those approaches are computationally complex and time-consuming. Therefore, in this work a simple approach is taken to reduce the computational complexity of the evolution algorithm. A training image set comprising three 32×32 cropped images performed much better than the coefficients reported in the literature: an average improvement of 1.0059 dB in PSNR over the classical CDF 9/7 wavelet was achieved across the 80 fingerprint images, and the computational speed was increased by 90.18%. The coefficients evolved for compression ratio (CR) 16:1 also yielded better average PSNR for other CRs, and improvement in average PSNR was observed for degraded and noisy images as well.
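
A hedged sketch of the kind of evolution loop this abstract summarizes is given below. The CDF 9/7 lifting coefficients are standard; the fitness function is a placeholder, since the real objective (average PSNR after lifting-based compression of the 32×32 training crops) requires the full compression pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard CDF 9/7 lifting coefficients (alpha, beta, gamma, delta),
# used here as the baseline around which candidates are initialized.
CDF97 = np.array([-1.586134342, -0.052980118, 0.882911076, 0.443506852])

def fitness(coeffs):
    # Placeholder. In the paper, fitness is the average PSNR obtained by
    # compressing the training crops with a lifting scheme built from
    # `coeffs`; that pipeline is not reproduced here.
    return -np.sum((coeffs - CDF97) ** 2)

def evolve(pop_size=30, generations=100, sigma=0.05):
    """Minimal GA: truncation selection plus Gaussian mutation over
    the four lifting coefficients."""
    pop = CDF97 + rng.normal(0.0, 0.2, size=(pop_size, 4))
    for _ in range(generations):
        scores = np.array([fitness(c) for c in pop])
        parents = pop[np.argsort(scores)[-(pop_size // 2):]]
        children = parents + rng.normal(0.0, sigma, size=parents.shape)
        pop = np.vstack([parents, children])
    return max(pop, key=fitness)

best = evolve()
```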

Relevance:

20.00%

Abstract:

The thesis explores the area of still image compression. Image compression techniques can be broadly classified into lossless and lossy compression. The most common lossy compression techniques are based on transform coding, vector quantization and fractals. Transform coding is the simplest of these and generally employs reversible transforms such as the DCT and DWT. The Mapped Real Transform (MRT) is an evolving integer transform based on real additions alone. The present research work aims at developing new image compression techniques based on the MRT. Most transform coding techniques employ fixed block size image segmentation, usually 8×8. Hence, a fixed block size transform coder is implemented using the MRT, and its merits and demerits are analyzed for both 8×8 and 4×4 blocks. The N^2 unique MRT coefficients for each block are computed using templates. Considering the merits and demerits of fixed block size transform coding, a hybrid form of these techniques is implemented to improve compression performance; the hybrid coder is found to perform better than the fixed block size coders. Thus, if the block size is made adaptive, the performance can be improved further. In adaptive block size coding the block size may vary from the size of the image down to 2×2, so computing the MRT using templates is impractical due to memory requirements. An adaptive transform coder based on the Unique MRT (UMRT), a compact form of the MRT, is therefore implemented to obtain better performance in terms of PSNR and HVS. The suitability of the MRT for vector quantization of images is then investigated, and a UMRT-based classified vector quantization (CVQ) scheme is implemented, in which the edges in the images are identified and classified by a UMRT-based criterion. Based on the above experiments, a new technique named MRT-based Adaptive Transform Coder with Classified Vector Quantization (MATC-CVQ) is developed, and its performance is evaluated against existing techniques. A comparison with standard JPEG and Shapiro's well-known Embedded Zerotree Wavelet (EZW) coder shows that the proposed technique gives better performance for the majority of images.
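
To make the fixed block size coding structure concrete, here is a toy 8×8 block transform coder. The MRT itself is not defined in this abstract, so the DCT stands in purely to illustrate the block segmentation and coefficient selection; this is not the thesis's coder.

```python
import numpy as np
from scipy.fft import dctn, idctn

def block_transform_code(img, block=8, keep=10):
    """Toy fixed block size transform coder: transform each block,
    keep only the `keep` largest-magnitude coefficients, and invert.
    The DCT is a stand-in for the MRT, whose definition is not given here."""
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            c = dctn(img[i:i + block, j:j + block].astype(np.float64), norm='ortho')
            thresh = np.sort(np.abs(c), axis=None)[-keep]
            c[np.abs(c) < thresh] = 0.0
            out[i:i + block, j:j + block] = idctn(c, norm='ortho')
    return out
```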

Relevance:

20.00%

Abstract:

The Bieberbach conjecture about the coefficients of univalent functions of the unit disk was formulated by Ludwig Bieberbach in 1916 [Bieberbach1916]. The conjecture states that the coefficients of univalent functions are majorized by those of the Koebe function, which maps the unit disk onto a radially slit plane. The Bieberbach conjecture was quite a difficult problem, and it was surprisingly proved by Louis de Branges in 1984 [deBranges1985], at a time when some experts were rather trying to disprove it. It turned out that an inequality of Askey and Gasper [AskeyGasper1976] about certain hypergeometric functions played a crucial role in de Branges' proof. In this article I describe the historical development of the conjecture and the main ideas that led to the proof. The proof of Lenard Weinstein (1991) [Weinstein1991] follows, and it is shown how the two proofs are interrelated. Both proofs depend on polynomial systems that are directly related to the Koebe function. At this point algorithms of computer algebra come into play, and computer demonstrations are given that show how important parts of the proofs can be automated.
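
For reference, the conjecture discussed above has a compact statement; the following is the standard formulation and matches the abstract's description of the Koebe function.

```latex
% For f univalent on the unit disk with expansion
% f(z) = z + a_2 z^2 + a_3 z^3 + \cdots, the Bieberbach conjecture
% (de Branges' theorem) asserts
\[
  |a_n| \le n \qquad (n \ge 2),
\]
% with equality exactly for rotations of the Koebe function
\[
  k(z) = \frac{z}{(1-z)^2} = \sum_{n=1}^{\infty} n\,z^n,
\]
% which maps the unit disk onto the plane slit along a ray.
```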

Relevance:

20.00%

Abstract:

Student’s t-distribution has found various applications in mathematical statistics. One of the main properties of the t-distribution is that it converges to the normal distribution as the number of samples tends to infinity. In this paper, using a Cauchy integral, we introduce a generalization of the t-distribution function with four free parameters and show that it again converges to the normal distribution. We provide a comprehensive treatment of the mathematical properties of this new distribution. Moreover, since the Fisher F-distribution has a close relationship with the t-distribution, we also introduce a generalization of the F-distribution and prove that it converges to the chi-square distribution as the number of samples tends to infinity. Finally, some particular sub-cases of these distributions are considered.
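
The classical special case that the four-parameter family generalizes, and the limit property the paper extends, can be written as follows (standard formulas, not the paper's generalized density):

```latex
% Student t density with \nu degrees of freedom:
\[
  f_\nu(t) = \frac{\Gamma\!\left(\tfrac{\nu+1}{2}\right)}
                  {\sqrt{\nu\pi}\;\Gamma\!\left(\tfrac{\nu}{2}\right)}
             \left(1 + \frac{t^2}{\nu}\right)^{-\frac{\nu+1}{2}},
\]
% which converges to the standard normal density as \nu grows:
\[
  \lim_{\nu\to\infty} f_\nu(t) = \frac{1}{\sqrt{2\pi}}\,e^{-t^2/2}.
\]
```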

Relevance:

20.00%

Abstract:

In this dissertation we first present a generalization of the usual Sturm-Liouville problems with symmetric solutions and describe a more comprehensive class. We then introduce some new classes of orthogonal polynomials and special functions that can be derived from this symmetric generalization. As a particular consequence of this generalization, we introduce a polynomial system with four free parameters and show that this system contains almost all classical symmetric orthogonal polynomials, such as the Legendre polynomials, the Chebyshev polynomials of the first and second kind, the Gegenbauer polynomials, the generalized Gegenbauer polynomials, the Hermite polynomials, the generalized Hermite polynomials, and two further new finite systems of orthogonal polynomials. All these polynomials can be expressed directly through the newly introduced system. Furthermore, we determine all standard properties of the new system, in particular an explicit representation, a second-order differential equation, a generic orthogonality relation, and a generic three-term recurrence. We also use this extension to generalize the associated Legendre functions, which have many applications in physics and engineering, and show that this generalization preserves the orthogonality property and interval. In a further chapter of the dissertation we study in detail the standard properties of finite orthogonal polynomial systems arising from the usual Sturm-Liouville theory, and we show that they are orthogonal with respect to the Fisher F-distribution, the inverse gamma distribution, and the generalized t-distribution. In the next part of the dissertation we consider a four-parameter generalization of Student's t-distribution. We show that this distribution converges to the normal distribution as the sample size tends to infinity. A similar generalization of the Fisher F-distribution converges to the chi-square distribution. In the last part of the dissertation we introduce some new sequences of special functions that have applications in solving, in spherical coordinates, the classical potential equation, the heat equation, and the wave equation. Finally, we describe two new classes of rational orthogonal hypergeometric functions and, using the Fourier transform and Parseval's identity, show that they form finite orthogonal systems with weight functions of gamma type.
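
As context for the four-parameter system mentioned above, a symmetric orthogonal polynomial family satisfies, in monic normalization, a three-term recurrence of the following generic shape; the dissertation's system corresponds to a particular four-parameter choice of the coefficients C_n, which is not reproduced here.

```latex
% Generic monic symmetric three-term recurrence:
\[
  \Phi_{n+1}(x) = x\,\Phi_n(x) - C_n\,\Phi_{n-1}(x),
  \qquad \Phi_0(x) = 1,\quad \Phi_1(x) = x,
\]
% with C_n > 0; symmetry means \Phi_n(-x) = (-1)^n \Phi_n(x).
```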

Relevance:

20.00%

Abstract:

In this paper, we solve the duplication problem P_n(ax) = sum_{m=0}^{n} C_m(n,a) P_m(x), where {P_n}_{n>=0} belongs to a wide class of polynomials, including the classical orthogonal polynomials (Hermite, Laguerre, Jacobi) as well as the classical discrete orthogonal polynomials (Charlier, Meixner, Krawtchouk), the latter for the specific case a = −1. We give closed-form expressions as well as recurrence relations satisfied by the duplication coefficients.
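
As a concrete numerical instance of the duplication problem (not the paper's closed forms), the following computes C_m(n, a) for Chebyshev polynomials of the first kind by passing through the monomial basis:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def duplication_coefficients(n, a):
    """Coefficients C_m(n, a) with T_n(a x) = sum_m C_m(n, a) T_m(x)."""
    p = C.cheb2poly([0] * n + [1])           # T_n in the monomial basis
    p_scaled = p * a ** np.arange(n + 1)     # substitute x -> a x
    return C.poly2cheb(p_scaled)             # back to the Chebyshev basis

# T_2(-x) = T_2(x), so for n = 2, a = -1 the coefficients are [0, 0, 1].
print(duplication_coefficients(2, -1))
```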

Relevance:

20.00%

Abstract:

In a similar manner to previous papers, in which explicit algorithms for finding the differential equations satisfied by holonomic functions were given, in this paper we deal with the space of q-holonomic functions, i.e., the solutions of linear q-differential equations with polynomial coefficients. The sum and product of q-holonomic functions, as well as their composition with power functions, are again q-holonomic, and the resulting q-differential equations can be computed algorithmically.
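
Under one common convention, the objects discussed above can be made explicit as follows; the q-derivative operator and the defining equation are standard.

```latex
% The q-derivative:
\[
  (D_q f)(x) = \frac{f(qx) - f(x)}{(q-1)\,x},
\]
% f is q-holonomic if it satisfies a linear q-differential equation
% with polynomial coefficients p_i(x), not all zero:
\[
  \sum_{i=0}^{r} p_i(x)\,\bigl(D_q^{\,i} f\bigr)(x) = 0 .
\]
```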

Relevance:

20.00%

Abstract:

Recurrent iterated function systems (RIFSs) are improvements of iterated function systems (IFSs) that use elements of the theory of Markov stochastic processes and can produce more natural-looking images. We construct new RIFSs consisting essentially of a vertical contraction factor function and nonlinear transformations, and apply these RIFSs to image compression.
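
A minimal sketch of a recurrent IFS follows: affine contractions plus a Markov transition matrix on the map indices, sampled with the chaos game. The maps and probabilities are illustrative placeholders, not the vertical contraction construction of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two affine contractions on the plane; the Markov matrix says which map
# may follow which (the "recurrent" part of the RIFS).
maps = [
    lambda p: 0.5 * p,                          # contract toward the origin
    lambda p: 0.5 * p + np.array([0.5, 0.5]),   # contract toward (1, 1)
]
transition = np.array([[0.3, 0.7],
                       [0.8, 0.2]])  # row i: P(next map | current map i)

def chaos_game(n_points=10_000):
    """Sample the RIFS attractor by iterating randomly chosen maps,
    with the choice driven by the Markov chain on map indices."""
    pts = np.empty((n_points, 2))
    p, state = np.zeros(2), 0
    for k in range(n_points):
        state = rng.choice(2, p=transition[state])
        p = maps[state](p)
        pts[k] = p
    return pts
```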

Relevance:

20.00%

Abstract:

The basic thermodynamic functions (the entropy, free energy, and enthalpy) for element 105 (hahnium) in the electronic configurations d^3s^2, d^3sp, and d^4s^1, and for its +5 ionized state (5f^14), have been calculated as functions of temperature. The data are based on calculations of the corresponding electronic states of element 105 using the multiconfiguration Dirac-Fock method.
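
As an illustration of how electronic levels feed into such thermodynamic functions, the sketch below evaluates the electronic partition function and its entropy contribution, S_el = R (ln Q + T dlnQ/dT). The levels are invented placeholders; the paper's values come from multiconfiguration Dirac-Fock calculations.

```python
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant in eV/K
R = 8.31446           # molar gas constant in J/(mol K)

# Placeholder low-lying electronic levels: (energy in eV, degeneracy).
levels = [(0.00, 4), (0.30, 6), (0.75, 2)]

def electronic_entropy(T):
    """Electronic contribution to the molar entropy,
    S_el = R (ln Q + T dlnQ/dT), via a central finite difference."""
    def lnQ(temp):
        E = np.array([e for e, _ in levels])
        g = np.array([d for _, d in levels])
        return np.log(np.sum(g * np.exp(-E / (K_B * temp))))
    dT = 1e-3 * T
    return R * (lnQ(T) + T * (lnQ(T + dT) - lnQ(T - dT)) / (2 * dT))

print(electronic_entropy(298.15))  # J/(mol K), for the placeholder levels
```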

Relevance:

20.00%

Abstract:

We have previously shown that regularization principles lead to approximation schemes, such as Radial Basis Functions, which are equivalent to networks with one layer of hidden units, called Regularization Networks. In this paper we show that Regularization Networks encompass a much broader range of approximation schemes, including many of the popular general additive models, Breiman's hinge functions, and some forms of Projection Pursuit Regression. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In the final part of the paper, we also show a relation between activation functions of the Gaussian and sigmoidal type.
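
A regularization network with one layer of Gaussian units is small enough to sketch directly; this is the textbook radial-basis-function form, with the data, bandwidth, and regularization strength chosen arbitrarily for illustration.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Matrix of Gaussian basis-function evaluations between point sets."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_regularization_network(X, y, lam=1e-2, sigma=1.0):
    """One hidden layer of Gaussian units centred on the data points;
    the output weights solve the regularized system (K + lam*I) c = y."""
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

# Toy 1-D regression problem.
X = np.linspace(0.0, 1.0, 20)[:, None]
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * np.random.default_rng(0).normal(size=20)
c = fit_regularization_network(X, y)
predict = lambda Xq: gaussian_kernel(Xq, X) @ c
```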

Relevance:

20.00%

Abstract:

This paper presents a new paradigm for signal reconstruction and superresolution, Correlation Kernel Analysis (CKA), that is based on the selection of a sparse set of bases from a large dictionary of class-specific basis functions. The basis functions that we use are the correlation functions of the class of signals we are analyzing. To choose the appropriate features from this large dictionary, we use Support Vector Machine (SVM) regression and compare it to traditional Principal Component Analysis (PCA) for the tasks of signal reconstruction, superresolution, and compression. The testbed we use in this paper is a set of images of pedestrians. This paper also presents results of experiments in which we use a dictionary of multiscale basis functions and then use Basis Pursuit De-Noising to obtain a sparse, multiscale approximation of a signal. The results are analyzed and we conclude that 1) when used with a sparse representation technique, the correlation function is an effective kernel for image reconstruction and superresolution, 2) for image compression, PCA and SVM have different tradeoffs, depending on the particular metric used to evaluate the results, 3) in sparse representation techniques, L_1 is not a good proxy for the true measure of sparsity, L_0, and 4) the L_epsilon norm may be a better error metric for image reconstruction and compression than the L_2 norm, though the exact psychophysical metric should take into account higher-order structure in images.
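
The sparse selection step can be illustrated with L1-penalized regression over a dictionary, in the spirit of basis pursuit de-noising; the dictionary here is random rather than the paper's class-specific correlation functions, and scikit-learn's Lasso is an assumed stand-in for the solvers used in the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Placeholder overcomplete dictionary: columns are basis functions.
n, k = 64, 256
D = rng.normal(size=(n, k))
D /= np.linalg.norm(D, axis=0)

# A signal that is exactly 3-sparse in the dictionary.
signal = D[:, [3, 50, 199]] @ np.array([1.0, -0.7, 0.4])

# The L1 penalty (the practical proxy for L0 sparsity discussed above)
# selects a small subset of the dictionary.
model = Lasso(alpha=0.01, fit_intercept=False, max_iter=10_000).fit(D, signal)
print(np.flatnonzero(model.coef_))  # indices of the active basis functions
```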

Relevance:

20.00%

Abstract:

Compositional data analysis motivated the introduction of a complete Euclidean structure in the simplex of D parts. This was based on the early work of J. Aitchison (1986) and completed recently, when the Aitchison distance in the simplex was associated with an inner product and orthonormal bases were identified (Aitchison and others, 2002; Egozcue and others, 2003). A partition of the support of a random variable generates a composition by assigning the probability of each interval to a part of the composition. One can imagine that the partition can be refined, so that the probability density represents a kind of continuous composition of probabilities in a simplex of infinitely many parts. This intuitive idea leads to a Hilbert space of probability densities obtained by generalizing the Aitchison geometry for compositions in the simplex to the set of probability densities.
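
The inner product mentioned above is concrete enough to sketch: the centred log-ratio (clr) transform carries the Aitchison geometry of the simplex into ordinary Euclidean space, so inner products and distances reduce to the usual ones on clr images. This is the standard construction, not code from the cited papers.

```python
import numpy as np

def clr(x):
    """Centred log-ratio transform of a composition (positive parts)."""
    lx = np.log(x)
    return lx - lx.mean()

def aitchison_inner(x, y):
    """Aitchison inner product of two compositions via their clr images."""
    return clr(x) @ clr(y)

x = np.array([0.2, 0.3, 0.5])
y = np.array([0.1, 0.6, 0.3])
print(aitchison_inner(x, y))
print(np.linalg.norm(clr(x) - clr(y)))  # Aitchison distance
```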

Relevance:

20.00%

Abstract:

Functional Data Analysis (FDA) deals with samples in which a whole function is observed for each individual. A particular case of FDA arises when the observed functions are density functions, which are also an example of infinite-dimensional compositional data. In this work we compare several methods of dimensionality reduction for this particular type of data: functional principal component analysis (PCA), with or without a previous data transformation, and multidimensional scaling (MDS) for different inter-density distances, one of them taking into account the compositional nature of density functions. The different methods are applied to both artificial and real data (household income distributions).
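
One of the compared pipelines (functional PCA after a log-ratio transformation that respects the compositional nature of densities) can be sketched as follows, with invented Gaussian densities standing in for the household income data:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Placeholder sample: each row is a density on a common grid, normalized
# so the discretized values sum to 1 (a composition with many parts).
grid = np.linspace(-3.0, 3.0, 50)
mus = rng.normal(0.0, 0.5, size=30)
densities = np.exp(-(grid[None, :] - mus[:, None]) ** 2 / 2.0)
densities /= densities.sum(axis=1, keepdims=True)

# Centred log-ratio transform before ordinary functional PCA.
logd = np.log(densities)
clr_data = logd - logd.mean(axis=1, keepdims=True)
scores = PCA(n_components=2).fit_transform(clr_data)
print(scores.shape)  # (30, 2)
```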

Relevance:

20.00%

Abstract:

Exam questions and solutions in LaTeX.