948 results for Laplace transforms
Abstract:
This thesis deals with distance transforms, which are a fundamental tool in image processing and computer vision. Two new distance transforms for gray level images are presented, and as a new application, they are applied to gray level image compression. Both new distance transforms extend the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer values (DTOCS) and a real-valued distance transform (EDTOCS) on gray level images. Both the DTOCS and the EDTOCS require only two passes over the gray level image and are extremely simple to implement. Only two image buffers are needed: the original gray level image and the binary image which defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images the two-pass distance algorithm has to be applied to the image more than once, typically 3–10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance algorithms (GRAYMAT etc.) find the minimum path joining two points by the smallest sum of gray levels, or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way: it gives a weighted version of the chessboard distance map in which the weights are not constant but are the gray value differences of the original image. The difference between the DTOCS map and other distance transforms for gray level images is shown. The EDTOCS differs from the DTOCS in that it calculates these gray level differences in a different way, propagating local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Distance transforms are commonly used for feature extraction in pattern recognition and learning; their use in image compression is very rare, and this thesis introduces a new application area for them. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points that are considered fundamental for the reconstruction of the image, are selected from the gray level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group compares the DTOCS distance to the binary-image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally, and it is shown to be independent of the number of control points, i.e. of the compression ratio. A new morphological image decompression scheme, the 8 kernels' method, is also presented, together with several decompressed images. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of DCT images with a 4 x 4
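As an illustration of the two-pass scheme described above, the following is a minimal sketch of a DTOCS-style transform, assuming a chessboard (8-neighbour) kernel split into causal and anti-causal halves and a step cost of 1 plus the local gray value difference; the function and parameter names are hypothetical, not the thesis's reference implementation:

```python
import numpy as np

def dtocs_sketch(gray, seeds, n_iter=2):
    """Two-pass, chessboard-like distance transform on a gray level image.
    Each step costs 1 plus the gray value difference between neighbours,
    so the result is a weighted chessboard distance map.
    `seeds` is the binary image defining the region(s) of calculation."""
    big = np.iinfo(np.int64).max // 2
    dist = np.where(seeds, 0, big).astype(np.int64)
    h, w = gray.shape
    causal = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]    # forward-pass mask
    anti = [(-dy, -dx) for dy, dx in causal]          # backward-pass mask
    for _ in range(n_iter):                           # extra rounds for complex images
        for offsets, ys, xs in ((causal, range(h), range(w)),
                                (anti, range(h - 1, -1, -1), range(w - 1, -1, -1))):
            for y in ys:
                for x in xs:
                    for dy, dx in offsets:
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            step = 1 + abs(int(gray[y, x]) - int(gray[ny, nx]))
                            dist[y, x] = min(dist[y, x], dist[ny, nx] + step)
    return dist
```

Here the binary seed image is effectively turned in place into the distance buffer, consistent with the two-buffer property claimed above; repeated rounds (the typical 3–10) reuse the same buffers.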
Abstract:
This thesis studies properties of transforms based on parabolic scaling, such as the curvelet, contourlet, shearlet and Hart-Smith transforms. Essentially, two different questions are considered: how these transforms can characterize Hölder regularity, and how the non-linear approximation of a piecewise smooth function converges. In the study of Hölder regularity, several theorems are presented that relate the regularity of a function f : R² → R to the decay properties of its transform. Of particular interest is the case where a function has lower regularity along some line segment than elsewhere. Theorems are presented that give estimates for the direction and location of this line, and for the regularity of the function. Numerical demonstrations also suggest that similar theorems would hold for more general shapes of the low-regularity segment. Theorems related to uniform and pointwise Hölder regularity are presented as well. Although none of the theorems presented gives a full characterization of regularity, the sufficient and necessary conditions are very similar. The other theme of the thesis is the convergence of the non-linear M-term approximation of functions that are discontinuous along some curves and otherwise smooth. Under particular smoothness assumptions, it is well known that the squared L² approximation error is O(M⁻²(log M)³) for curvelet, shearlet or contourlet bases. Here it is shown that, assuming higher smoothness, the log-factor can be removed, even though the function is still discontinuous.
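For reference, the classical bound quoted above can be written out explicitly. For a function satisfying the "particular smoothness assumptions" mentioned (smooth away from smooth discontinuity curves), the best M-term approximation f_M in these frames satisfies

```latex
\| f - f_M \|_{L^2}^2 \;\le\; C \, M^{-2} (\log M)^3 ,
```

and the contribution described above is that the (log M)³ factor can be dropped under stronger smoothness assumptions away from the discontinuity curves.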
Abstract:
The Mathematica system (version 4.0) is employed in the solution of nonlinear diffusion and convection-diffusion problems, formulated as transient one-dimensional partial differential equations with potential-dependent equation coefficients. The Generalized Integral Transform Technique (GITT) is first implemented for the hybrid numerical-analytical solution of such classes of problems, through the symbolic integral transformation and elimination of the space variable, followed by the use of the built-in Mathematica function NDSolve to handle the resulting transformed ODE system. This approach offers an error-controlled final numerical solution, through the simultaneous control of the local errors in this reliable ODE solver and of the truncation order of the proposed eigenfunction expansion. For covalidation purposes, the same built-in function NDSolve is employed in the direct solution of these partial differential equations, as made possible by the algorithms implemented in Mathematica (versions 3.0 and up) based on the method of lines. Various numerical experiments are performed and the relative merits of each approach are critically pointed out.
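The covalidation step, solving the PDE directly by the method of lines, can be sketched outside Mathematica as well. Below is a minimal Python/SciPy analogue for a nonlinear diffusion problem with a potential-dependent coefficient; the specific equation, boundary values and tolerances are illustrative assumptions, not those of the article:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method of lines for  du/dt = d/dx( k(u) du/dx ),  k(u) = 1 + u,
# on 0 < x < 1 with u(0,t) = 1, u(1,t) = 0 and u(x,0) = 0.
n = 101
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

def rhs(t, u):
    u_full = np.concatenate(([1.0], u, [0.0]))   # impose boundary values
    k = 1.0 + u_full                             # potential-dependent coefficient
    k_face = 0.5 * (k[1:] + k[:-1])              # k(u) at cell faces
    flux = k_face * np.diff(u_full) / dx         # k(u) du/dx at faces
    return np.diff(flux) / dx                    # semi-discrete ODE system

sol = solve_ivp(rhs, (0.0, 0.5), np.zeros(n - 2),
                method="LSODA", rtol=1e-8, atol=1e-10)  # local error control
print(sol.y[::10, -1])                           # sampled final profile
```

Like NDSolve, solve_ivp controls the local error of the semi-discrete system; the remaining error is governed by the spatial discretization, just as the GITT error is governed by the expansion truncation order.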
Abstract:
This thesis addresses various questions arising from spectral geometry. This area of fundamental mathematics seeks to establish links between the geometry of a Riemannian manifold and its spectrum. The spectrum of a closed compact manifold M equipped with a Riemannian metric g, associated with the Laplace-Beltrami operator, is an increasing sequence of non-negative numbers tending to infinity; the square roots of these numbers represent vibration frequencies of the manifold. This thesis presents four articles touching on various aspects of spectral geometry. The first article, presented in Chapter 1 and entitled "Superlevel sets and nodal extrema of Laplace eigenfunctions", concerns the nodal geometry of elliptic operators. The goal of my work was to generalize a result of L. Polterovich and M. Sodin which establishes a bound on the distribution of nodal extrema on a Riemannian surface for a fairly broad class of functions, including, among others, the eigenfunctions of the Laplace-Beltrami operator. Since the proof given by these authors is valid only for Riemannian surfaces, in this chapter I develop an independent approach for eigenfunctions of the Laplace-Beltrami operator on Riemannian manifolds of arbitrary dimension. The second and third articles deal with another elliptic operator, the p-Laplacian, whose particularity is that it is non-linear. In Chapter 2, the article "Principal frequency of the p-laplacian and the inradius of Euclidean domains" studies lower bounds on the first eigenvalue of the Dirichlet problem for the p-Laplacian in terms of the inradius of a Euclidean domain. More specifically, I prove that if p is greater than the dimension of the domain, such a lower bound can be established without any hypothesis on its topology. The study of such bounds has been the subject of numerous articles by well-known researchers, such as W. K. Haymann, E. Lieb, R. Banuelos and T. Carroll, mainly for the case of the Laplace operator. The adaptation of this type of bound to the p-Laplacian is addressed in my third article, "Bounds on the Principal Frequency of the p-Laplacian", presented in Chapter 3 of this work. My fourth article, "Wolf-Keller theorem for Neumann Eigenvalues", is the result of a collaboration with Guillaume Roy-Fortin. The central theme of this work is shape optimization in the context of the Neumann boundary value problem. The main result of the article is that, among planar domains of fixed area, the Neumann eigenvalues are not always maximized by a disjoint union of arbitrary disks. This is presented in Chapter 4 of the thesis.
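For orientation, the "principal frequency" studied in the second and third articles is the first eigenvalue of the Dirichlet p-Laplacian, which has the standard variational characterization (a textbook definition, stated here for the reader's convenience rather than taken from the thesis):

```latex
\lambda_{1,p}(\Omega) \;=\; \inf_{u \in W_0^{1,p}(\Omega) \setminus \{0\}}
\frac{\int_\Omega |\nabla u|^p \, dx}{\int_\Omega |u|^p \, dx} ,
```

which reduces to the usual first Dirichlet eigenvalue of the Laplacian when p = 2; the inradius bounds described above control this quantity from below when p exceeds the dimension of the domain.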
Abstract:
This thesis investigates the potential use of zero-crossing information for speech sample estimation. It provides a new method to estimate speech samples using composite zero-crossings, for which a simple linear interpolation technique is developed. By using this method the A/D converter can be avoided in a speech coder. The newly proposed zero-crossing sampling theory is supported with results of computer simulations using real speech data. The thesis also presents two methods for voiced/unvoiced classification. One is based on a distance measure which is a function of the short-time zero-crossing rate and the short-time energy of the signal; the other is based on the attractor dimension and entropy of the signal. Of the two, the first is simpler and requires far fewer computations, and it is used in a later chapter to design an enhanced Adaptive Transform Coder. The later part of the thesis addresses a few problems in Adaptive Transform Coding and presents an improved ATC. The transform coefficient with maximum amplitude is treated as 'side information', which enables more accurate bit assignment and step-size computation. A new bit reassignment scheme is also introduced in this work. Finally, an ATC which switches between the Discrete Cosine Transform and the Discrete Walsh-Hadamard Transform for voiced and unvoiced speech segments, respectively, is presented. Simulation results are provided to show the improved performance of the coder.
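The first voiced/unvoiced classifier above combines the short-time zero-crossing rate and short-time energy into a distance measure. A minimal sketch of that idea follows; the frame sizes, reference points and the exact form of the distance are illustrative assumptions, since the abstract does not spell them out:

```python
import numpy as np

def short_time_features(x, frame_len=240, hop=120):
    """Short-time zero-crossing rate and energy, one value per frame."""
    zcr, energy = [], []
    for start in range(0, len(x) - frame_len + 1, hop):
        frame = x[start:start + frame_len]
        zcr.append(np.mean(np.sign(frame[1:]) != np.sign(frame[:-1])))
        energy.append(np.mean(frame.astype(float) ** 2))
    return np.asarray(zcr), np.asarray(energy)

def voiced_frames(x, frame_len=240, hop=120):
    """Voiced speech tends to a low zero-crossing rate and high energy,
    unvoiced speech to the opposite. Classify each frame by its distance
    to two reference points in the (ZCR, log-energy) plane."""
    zcr, energy = short_time_features(x, frame_len, hop)
    log_e = np.log10(energy + 1e-12)
    d_voiced = (zcr - 0.05) ** 2 + (log_e - log_e.max()) ** 2
    d_unvoiced = (zcr - 0.45) ** 2 + (log_e - log_e.min()) ** 2
    return d_voiced < d_unvoiced
```

Such a classifier costs only a handful of operations per frame, which is consistent with its use inside the enhanced Adaptive Transform Coder, where the attractor dimension and entropy method would be far more expensive.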
Abstract:
This work is intended as a textbook for the calculus course in the various degree programmes of the Escuela Técnica Superior de Ingenieros Industriales y de Telecomunicación of the Universidad de Cantabria. As the theory is presented, a good number of simple examples are included to illustrate it. Each chapter ends with exercises solved in detail and a list of proposed exercises, some of them taken from examinations. Four main topics are developed: vector calculus, ordinary differential equations, the Fourier integral, and the Laplace transform.
Abstract:
The identification, tracking, and statistical analysis of tropical convective complexes using satellite imagery are explored in the context of identifying feature points suitable for tracking. The feature points are determined from the shape of the complexes using the distance transform technique. This approach has been applied to the determination of feature points for tropical convective complexes identified in a time series of global cloud imagery. The feature points are used to track the complexes, and statistical diagnostic fields are computed from the tracks. This approach allows the nature and distribution of organized deep convection in the Tropics to be explored.
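One common way to derive a shape-based feature point from a distance transform is to take, for each complex, the pixel deepest inside the shape, i.e. the maximum of the distance transform. The sketch below uses SciPy's Euclidean distance transform for this; the exact distance metric and selection rule used in the study may differ:

```python
import numpy as np
from scipy import ndimage

def feature_points(mask):
    """One feature point per connected complex: the interior pixel farthest
    from the boundary, found as the maximum of the distance transform."""
    labels, n = ndimage.label(mask)                 # separate the complexes
    dist = ndimage.distance_transform_edt(mask)     # depth inside each shape
    points = []
    for i in range(1, n + 1):
        d = np.where(labels == i, dist, 0.0)
        points.append(np.unravel_index(np.argmax(d), d.shape))
    return points

# Example: two rectangular "complexes" in a small binary cloud mask.
mask = np.zeros((20, 20), dtype=bool)
mask[2:8, 2:8] = True
mask[12:18, 10:19] = True
print(feature_points(mask))   # roughly the centres of the two shapes
```

Unlike a simple centroid, such a point always lies inside the complex, even for crescent-shaped cloud systems, which makes it a stable anchor for tracking.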
Abstract:
This article presents an overview of a transform method for solving linear and integrable nonlinear partial differential equations. This new method, proposed by Fokas, generalizes and unifies various fundamental mathematical techniques and, in particular, yields an extension of the Fourier transform method.
Abstract:
This work compares and contrasts results of classifying time-domain ECG signals with pathological conditions taken from the MIT-BIH arrhythmia database. Linear discriminant analysis and a multi-layer perceptron were used as classifiers. The neural network was trained by two different methods, namely back-propagation and a genetic algorithm. Converting the time-domain signal into the wavelet domain reduced the dimensionality of the problem at least 10-fold. This was achieved using wavelets from the db6 family as well as adaptive wavelets generated using two different strategies. The wavelet transforms used in this study were limited to two decomposition levels. A neural network with evolved weights proved to be the best classifier, with a maximum of 99.6% accuracy when optimised wavelet-transform ECG data was presented to its input, and 95.9% accuracy when the input signals were decomposed using db6 wavelets. Linear discriminant analysis achieved a maximum classification accuracy of 95.7% when presented with optimised, and 95.5% with db6, wavelet coefficients. It is shown that the much simpler signal representation of a few wavelet coefficients obtained through an optimised discrete wavelet transform considerably facilitates the task of classifying non-stationary time-variant signals. In addition, the results indicate that wavelet optimisation may improve the classification ability of a neural network.
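As a sketch of the db6 baseline described above, the following decomposes each beat to two levels with PyWavelets and feeds the coarse coefficients to a linear discriminant classifier. The data here are random placeholders, and keeping only the approximation coefficients is one plausible way to obtain the roughly 10-fold reduction, not necessarily the authors' exact coefficient selection:

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def db6_features(beats, level=2):
    """Two-level db6 decomposition; keeping only the level-2 approximation
    coefficients shrinks each beat to a fraction of its original length."""
    return np.asarray([pywt.wavedec(b, "db6", level=level)[0] for b in beats])

# Placeholder data standing in for segmented MIT-BIH beats and their labels.
rng = np.random.default_rng(0)
beats = rng.standard_normal((200, 256))
labels = rng.integers(0, 2, size=200)

X = db6_features(beats)                  # 256 samples -> ~72 coefficients
clf = LinearDiscriminantAnalysis().fit(X, labels)
print(X.shape, "training accuracy:", clf.score(X, labels))
```

Replacing the fixed db6 filter with coefficients tuned by an optimisation loop is what the adaptive-wavelet strategies in the study amount to, and the reported accuracies suggest the tuning pays off.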
Abstract:
We provide a unified framework for a range of linear transforms that can be used for the analysis of terahertz spectroscopic data, with particular emphasis on their application to the measurement of leaf water content. The use of linear transforms for filtering, regression, and classification is discussed. For illustration, a classification problem involving leaves at three stages of drought and a prediction problem involving simulated spectra are presented. Issues resulting from scaling the data set are discussed. Using Lagrange multipliers, we arrive at the transform that yields the maximum separation between the spectra and show that this optimal transform is equivalent to computing the Euclidean distance between the samples. The optimal linear transform is compared with the average for all the spectra as well as with the Karhunen–Loève transform to discriminate a wet leaf from a dry leaf. We show that taking several principal components into account is equivalent to defining new axes in which data are to be analyzed. The procedure shows that the coefficients of the Karhunen–Loève transform are well suited to the process of classification of spectra. This is in line with expectations, as these coefficients are built from the statistical properties of the data set analyzed.
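A compact sketch of the two transforms compared above, the Karhunen-Loève (principal component) projection and the Euclidean-distance rule, is given below; the spectra are simulated placeholders, as in the article's prediction problem, and all names are illustrative:

```python
import numpy as np

def karhunen_loeve(spectra, n_components=3):
    """Project mean-centred spectra onto their leading principal axes,
    i.e. the Karhunen-Loeve transform built from the data set itself."""
    mean = spectra.mean(axis=0)
    centred = spectra - mean
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:n_components].T

def nearest_class(sample, references, labels):
    """Euclidean-distance classification, the optimal linear separation
    derived above via Lagrange multipliers."""
    return labels[int(np.argmin(np.linalg.norm(references - sample, axis=1)))]

# Simulated terahertz spectra: wet leaves absorb more than dry ones.
rng = np.random.default_rng(1)
wet = rng.normal(1.0, 0.05, size=(10, 50))
dry = rng.normal(0.4, 0.05, size=(10, 50))
scores = karhunen_loeve(np.vstack([wet, dry]))
refs = np.vstack([scores[:10].mean(axis=0), scores[10:].mean(axis=0)])
print(nearest_class(scores[0], refs, ["wet", "dry"]))   # expect "wet"
```

Working in the coefficient space of a few principal components, rather than on the raw spectra, is exactly the "new axes" interpretation given above.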