981 results for Positive Definite Functions
Abstract:
We prove that any isotropic positive definite function on the sphere can be written as the spherical self-convolution of an isotropic real-valued function. It is known that isotropic positive definite functions on d-dimensional Euclidean space admit a continuous derivative of order ⌊(d − 1)/2⌋. We show that the same holds true for isotropic positive definite functions on spheres and prove that this result is optimal for all odd dimensions.
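In the notation of the first sentence, the spherical self-convolution of a zonal (isotropic) function g on the sphere S^d can be written, up to normalisation of the surface measure σ (a standard definition, supplied here for context):

(g ⋆ g)(x · y) = ∫_{S^d} g(x · z) g(y · z) dσ(z),

so the theorem asserts that every isotropic positive definite f equals g ⋆ g for some isotropic real-valued g.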
Abstract:
Some relationships between representations of a hypergroup X, its algebras, and positive definite functions on X are studied. Also, various types of convergence of positive definite functions on X are discussed.
Abstract:
Matrix function approximation is a current focus of worldwide interest and finds application in a variety of areas of applied mathematics and statistics. In this thesis we focus on the approximation of A^(-α/2)b, where A ∈ ℝ^(n×n) is a large, sparse, symmetric positive definite matrix and b ∈ ℝ^n is a vector. In particular, we focus on matrix function techniques for sampling from Gaussian Markov random fields in applied statistics and for the solution of fractional-in-space partial differential equations.

Gaussian Markov random fields (GMRFs) are multivariate normal random variables characterised by a sparse precision (inverse covariance) matrix. GMRFs are popular models in computational spatial statistics, as the sparse structure can be exploited, typically through the sparse Cholesky decomposition, to construct fast sampling methods. It is well known, however, that for sufficiently large problems, iterative methods for solving linear systems outperform direct methods. Fractional-in-space partial differential equations arise in models of processes undergoing anomalous diffusion. Unfortunately, as the fractional Laplacian is a non-local operator, numerical methods based on the direct discretisation of these equations typically require the solution of dense linear systems, which is impractical for fine discretisations.

In this thesis, novel applications of Krylov subspace approximations to matrix functions are investigated for both of these problems. Matrix functions arise when sampling from a GMRF by noting that the Cholesky decomposition A = LL^T is, essentially, a 'square root' of the precision matrix A. Therefore, we can replace the usual sampling method, which forms x = L^(-T)z, with x = A^(-1/2)z, where z is a vector of independent and identically distributed standard normal random variables. Similarly, the matrix transfer technique can be used to build solutions to the fractional Poisson equation of the form φ_n = A^(-α/2)b, where A is the finite difference approximation to the Laplacian. Hence both applications require the approximation of f(A)b, where f(t) = t^(-α/2) and A is sparse. We compare the Lanczos approximation, the shift-and-invert Lanczos approximation, the extended Krylov subspace method, rational approximations and the restarted Lanczos approximation for matrix functions of this form.

A number of novel results are presented in this thesis. Firstly, we prove the convergence of the matrix transfer technique for the solution of the fractional Poisson equation and give conditions under which the finite difference discretisation can be replaced by other methods for discretising the Laplacian. We then investigate a number of methods for approximating matrix functions of the form A^(-α/2)b and examine stopping criteria for these methods. In particular, we derive a new method for restarting the Lanczos approximation to f(A)b. We then apply these techniques to the problem of sampling from a GMRF and construct a full suite of methods for sampling conditioned on linear constraints and for approximating the likelihood. Finally, we consider the problem of sampling from a generalised Matérn random field, which combines our techniques for solving fractional-in-space partial differential equations with our method for sampling from GMRFs.
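The computational kernel described above, approximating f(A)b with f(t) = t^(-α/2) from a Krylov subspace, can be illustrated with a minimal sketch of the plain Lanczos approximation. This is not the thesis's implementation; the function name, the fixed step count m and the breakdown tolerance are illustrative assumptions.

import numpy as np

def lanczos_fA_b(A, b, f, m=50):
    """Approximate f(A) @ b using m steps of the Lanczos process.

    A : symmetric positive definite matrix supporting A @ v (e.g. scipy.sparse)
    b : right-hand-side vector
    f : scalar function applied to the Ritz values, e.g. lambda t: t ** -0.5
    """
    n = b.shape[0]
    Q = np.zeros((n, m + 1))
    alpha = np.zeros(m)
    beta = np.zeros(m + 1)
    nb = np.linalg.norm(b)
    Q[:, 0] = b / nb
    for j in range(m):
        w = A @ Q[:, j]
        if j > 0:
            w = w - beta[j] * Q[:, j - 1]
        alpha[j] = Q[:, j] @ w
        w = w - alpha[j] * Q[:, j]
        beta[j + 1] = np.linalg.norm(w)
        if beta[j + 1] < 1e-12:          # Krylov subspace became invariant; stop early
            m = j + 1
            break
        Q[:, j + 1] = w / beta[j + 1]
    # m-by-m tridiagonal Jacobi matrix T_m produced by the recurrence
    T = np.diag(alpha[:m]) + np.diag(beta[1:m], 1) + np.diag(beta[1:m], -1)
    evals, evecs = np.linalg.eigh(T)     # small dense eigenproblem
    e1 = np.zeros(m)
    e1[0] = 1.0
    # f(A) b  ≈  ||b|| * Q_m * f(T_m) * e1
    return nb * (Q[:, :m] @ (evecs @ (f(evals) * (evecs.T @ e1))))

With A a sparse precision matrix and z a standard normal vector, lanczos_fA_b(A, z, lambda t: t ** -0.5) approximates x = A^(-1/2)z, the Cholesky-free sampling step described above.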
Abstract:
We prove that any continuous function with domain {z ∈ ℂ : |z| ≤ 1} that generates a bizonal positive definite kernel on the unit sphere in ℂ^q, q ≥ 3, is continuously differentiable in {z ∈ ℂ : |z| < 1} up to order q − 2, with respect to both z and z̄. In particular, the partial derivatives of the function with respect to x = Re z and y = Im z exist and are continuous in {z ∈ ℂ : |z| < 1} up to the same order.
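Here the derivatives with respect to z and z̄ are the standard Wirtinger operators (a standard definition, supplied for context; not part of the abstract):

∂/∂z = (1/2)(∂/∂x − i ∂/∂y),   ∂/∂z̄ = (1/2)(∂/∂x + i ∂/∂y),

which is why differentiability in z and z̄ up to order q − 2 translates directly into the statement about the partial derivatives in x = Re z and y = Im z.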
Abstract:
Recent advances suggest that encoding images through Symmetric Positive Definite (SPD) matrices and then interpreting such matrices as points on Riemannian manifolds can lead to increased classification performance. Taking into account manifold geometry is typically done via (1) embedding the manifolds in tangent spaces, or (2) embedding into Reproducing Kernel Hilbert Spaces (RKHS). While embedding into tangent spaces allows the use of existing Euclidean-based learning algorithms, manifold shape is only approximated which can cause loss of discriminatory information. The RKHS approach retains more of the manifold structure, but may require non-trivial effort to kernelise Euclidean-based learning algorithms. In contrast to the above approaches, in this paper we offer a novel solution that allows SPD matrices to be used with unmodified Euclidean-based learning algorithms, with the true manifold shape well-preserved. Specifically, we propose to project SPD matrices using a set of random projection hyperplanes over RKHS into a random projection space, which leads to representing each matrix as a vector of projection coefficients. Experiments on face recognition, person re-identification and texture classification show that the proposed approach outperforms several recent methods, such as Tensor Sparse Coding, Histogram Plus Epitome, Riemannian Locality Preserving Projection and Relational Divergence Classification.
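As a rough illustration of the idea of representing SPD matrices by vectors of projection coefficients, the sketch below replaces the paper's kernelised RKHS construction with a plain log-Euclidean embedding followed by random hyperplane projections; the function names, the dimension k and the Gaussian hyperplanes are illustrative assumptions, not the authors' method.

import numpy as np

def spd_log(M):
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    w, U = np.linalg.eigh(M)
    return (U * np.log(w)) @ U.T

def spd_to_coefficients(spd_mats, k, seed=0):
    """Map each SPD matrix to a k-vector of random-projection coefficients."""
    rng = np.random.default_rng(seed)
    X = np.stack([spd_log(M).ravel() for M in spd_mats])  # log-Euclidean vectorisation
    W = rng.standard_normal((X.shape[1], k))              # random projection hyperplanes
    W /= np.linalg.norm(W, axis=0)                        # normalise each hyperplane
    return X @ W                                          # coefficients for Euclidean learners

The resulting coefficient vectors can then be fed to any unmodified Euclidean-based learning algorithm, which is the point of the paper's construction.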
Abstract:
We are concerned with the class Π_n of n×n complex matrices A for which the Hermitian part H(A) = (A + A*)/2 is positive definite.
Various connections are established with other classes such as the stable, D-stable and dominant diagonal matrices. For instance, it is proved that if there exist positive diagonal matrices D, E such that DAE is either row dominant or column dominant and has positive diagonal entries, then there is a positive diagonal F such that FA ∈ Π_n.
Powers are investigated and it is found that the only matrices A for which A^m ∈ Π_n for all integers m are the Hermitian elements of Π_n. Products and sums are considered and criteria are developed for AB to be in Π_n.
Since Π_n is closed under inversion, relations between H(A)^(-1) and H(A^(-1)) are studied and a dichotomy observed between the real and complex cases. In the real case more can be said, and the initial result is that for A ∈ Π_n the difference H(adj A) − adj H(A) is always positive semidefinite, and is positive definite if and only if S(A) = (A − A*)/2 has more than one pair of conjugate non-zero characteristic roots. This is refined to characterize the real c for which cH(A^(-1)) − H(A)^(-1) is positive definite.
The cramped unitary matrices (those whose characteristic roots lie on an arc of less than 180°) are linked to Π_n and characterized in several ways via products of the form A^(-1)A*.
Classical inequalities for Hermitian positive definite matrices are studied in Π_n, and for Hadamard's inequality two types of generalizations are given. In the first, a large subclass of Π_n in which the precise statement of Hadamard's inequality holds is isolated, while in another large subclass its reverse is shown to hold. In the second, Hadamard's inequality is weakened in such a way that it holds throughout Π_n. Both approaches contain the original Hadamard inequality as a special case.
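Membership in Π_n is straightforward to test numerically: form the Hermitian part and check positive definiteness, for instance via a Cholesky factorisation. A minimal sketch (the function name and the tolerance-free test are illustrative):

import numpy as np

def in_Pi_n(A):
    """Test whether A belongs to Pi_n, i.e. H(A) = (A + A*)/2 is positive definite."""
    H = (A + A.conj().T) / 2
    try:
        np.linalg.cholesky(H)   # succeeds iff H is (numerically) positive definite
        return True
    except np.linalg.LinAlgError:
        return False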
Abstract:
The purpose of this paper is to continue to develop the recently introduced concept of a regular positive-real function and its application to the classification of low-complexity two-terminal networks. This paper studies five- and six-element series-parallel networks with three reactive elements and presents a complete characterisation and graphical representation of the realisability conditions for these networks. The results are motivated by an approach to passive mechanical control which makes use of the inerter device. © 2009 IEEE.
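For context, the positive-real functions underlying this classification admit the classical characterisation (a standard definition, not part of the abstract): a rational function Z(s), real for real s, is positive-real if

Re Z(s) ≥ 0 whenever Re s > 0,

and such functions are exactly the driving-point immittances realisable by passive two-terminal networks.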
Abstract:
Sparse coding aims to find a more compact representation based on a set of dictionary atoms. A well-known technique that exploits two-dimensional sparsity is the low-rank representation (LRR). However, in many computer vision applications, data often originate from a manifold, which is equipped with some Riemannian geometry. In this case, the existing LRR becomes inappropriate for modeling and incorporating the intrinsic geometry of the manifold, which is potentially important and critical to applications. In this paper, we generalize the LRR over Euclidean space to an LRR model over a specific Riemannian manifold, the manifold of symmetric positive definite (SPD) matrices. Experiments on several computer vision datasets showcase its noise robustness and superior performance on classification and segmentation compared with state-of-the-art approaches.
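For reference, the Euclidean LRR model that the paper lifts to the SPD manifold is typically posed as the nuclear-norm-regularised program (standard formulation, added for context):

min_{Z,E} ‖Z‖_* + λ‖E‖_{2,1}  subject to  X = XZ + E,

where X holds the data, Z the low-rank representation coefficients and E the sparse error term; the paper's contribution is to make sense of this model when the columns of X live on the SPD manifold rather than in Euclidean space.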
Abstract:
Modern GPUs are well suited for intensive computational tasks and massive parallel computation. Sparse matrix-vector multiplication and the triangular solver are among the most important and heavily used kernels in scientific computation, and several challenges in developing high-performance implementations of these two modules are investigated. The main interest is to solve linear systems derived from elliptic equations with triangular elements. The resulting linear system has a symmetric positive definite matrix. The sparse matrix is stored in the compressed sparse row (CSR) format. A CUDA algorithm is proposed to execute the matrix-vector multiplication directly in the CSR format. A dependency-tree algorithm is used to determine which unknowns the triangular solver can compute in parallel. To increase the number of parallel threads, a graph coloring algorithm is implemented to reorder the mesh numbering in a pre-processing phase. The proposed method is compared with available parallel and serial libraries. The results show that the proposed method reduces the computational cost of the matrix-vector multiplication. The pre-processing associated with the triangular solver needs to be executed only once in the proposed method. The conjugate gradient method was implemented and showed a similar convergence rate for all compared methods. The proposed method showed significantly smaller execution times.
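To make the storage scheme concrete, the sketch below spells out the CSR matrix-vector product in plain Python; the thesis implements the same row-wise computation as a CUDA kernel, and the one-thread-per-row mapping mentioned in the comment is a common choice assumed here for illustration, not necessarily the author's.

import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A @ x for A stored in CSR format (values, column indices, row pointers)."""
    n = len(indptr) - 1
    y = np.zeros(n)
    for i in range(n):                  # in a CUDA kernel, one thread handles one row
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y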