967 results for Positive Definite Function
Abstract:
We prove that any isotropic positive definite function on the sphere can be written as the spherical self-convolution of an isotropic real-valued function. It is known that isotropic positive definite functions on d-dimensional Euclidean space admit a continuous derivative of order [(d − 1)/2]. We show that the same holds true for isotropic positive definite functions on spheres and prove that this result is optimal for all odd dimensions.
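Stated in symbols, the asserted representation takes roughly the following form; the normalization and notation here are our assumptions, not the paper's:

```latex
% Spherical self-convolution of a zonal function g on S^d, with sigma the
% normalised surface measure (notation assumed, not taken from the paper):
\[
  (g \star g)(x \cdot y) \;=\; \int_{\mathbb{S}^d} g(x \cdot z)\, g(z \cdot y)\, d\sigma(z),
  \qquad f = g \star g .
\]
```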
Abstract:
Some relationships between representations of a hypergroup X, its algebras, and positive definite functions on X are studied. Also, various types of convergence of positive definite functions on X are discussed.
Abstract:
Matrix function approximation is a current focus of worldwide interest and finds application in a variety of areas of applied mathematics and statistics. In this thesis we focus on the approximation of A^(-α/2)b, where A ∈ ℝ^(n×n) is a large, sparse symmetric positive definite matrix and b ∈ ℝ^n is a vector. In particular, we will focus on matrix function techniques for sampling from Gaussian Markov random fields in applied statistics and the solution of fractional-in-space partial differential equations. Gaussian Markov random fields (GMRFs) are multivariate normal random variables characterised by a sparse precision (inverse covariance) matrix. GMRFs are popular models in computational spatial statistics as the sparse structure can be exploited, typically through the use of the sparse Cholesky decomposition, to construct fast sampling methods. It is well known, however, that for sufficiently large problems, iterative methods for solving linear systems outperform direct methods. Fractional-in-space partial differential equations arise in models of processes undergoing anomalous diffusion. Unfortunately, as the fractional Laplacian is a non-local operator, numerical methods based on the direct discretisation of these equations typically require the solution of dense linear systems, which is impractical for fine discretisations. In this thesis, novel applications of Krylov subspace approximations to matrix functions for both of these problems are investigated. Matrix functions arise when sampling from a GMRF by noting that the Cholesky decomposition A = LL^T is, essentially, a 'square root' of the precision matrix A. Therefore, we can replace the usual sampling method, which forms x = L^(-T)z, with x = A^(-1/2)z, where z is a vector of independent and identically distributed standard normal random variables. Similarly, the matrix transfer technique can be used to build solutions to the fractional Poisson equation of the form ϕ_n = A^(-α/2)b, where A is the finite difference approximation to the Laplacian. Hence both applications require the approximation of f(A)b, where f(t) = t^(-α/2) and A is sparse. In this thesis we compare the Lanczos approximation, the shift-and-invert Lanczos approximation, the extended Krylov subspace method, rational approximations and the restarted Lanczos approximation for approximating matrix functions of this form. A number of novel results are presented. Firstly, we prove the convergence of the matrix transfer technique for the solution of the fractional Poisson equation and give conditions under which the finite difference discretisation can be replaced by other methods for discretising the Laplacian. We then investigate a number of methods for approximating matrix functions of the form A^(-α/2)b and investigate stopping criteria for these methods. In particular, we derive a new method for restarting the Lanczos approximation to f(A)b. We then apply these techniques to the problem of sampling from a GMRF and construct a full suite of methods for sampling conditioned on linear constraints and approximating the likelihood. Finally, we consider the problem of sampling from a generalised Matérn random field, which combines our techniques for solving fractional-in-space partial differential equations with our method for sampling from GMRFs.
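As a concrete illustration of the sampling idea above, here is a minimal sketch of the plain Lanczos rule f(A)b ≈ β_0 V_m f(T_m) e_1 in NumPy/SciPy. It omits reorthogonalisation, restarting and stopping criteria, and the precision matrix and step count are placeholders, not the thesis's test problems:

```python
import numpy as np
import scipy.sparse as sp

def lanczos_matfunc(A, b, f, m=50):
    """Approximate f(A) b by the m-step Lanczos rule
    f(A) b ~= beta_0 * V_m f(T_m) e_1 for symmetric A.
    Plain Lanczos: no reorthogonalisation or restarting."""
    n = b.shape[0]
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    beta0 = np.linalg.norm(b)
    V[:, 0] = b / beta0
    for j in range(m):
        w = A @ V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    # f acts on the small tridiagonal T_m via its eigendecomposition.
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, Q = np.linalg.eigh(T)
    fT_e1 = Q @ (f(evals) * Q[0, :])          # f(T_m) e_1
    return beta0 * (V @ fT_e1)

# Approximate GMRF sample x = A^(-1/2) z for a toy SPD precision matrix.
n = 500
A = sp.diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
z = np.random.default_rng(0).standard_normal(n)
x = lanczos_matfunc(A, z, lambda t: t ** -0.5)
```

Replacing the function with t ↦ t^(−α/2) gives, in the same framework, the matrix transfer solution of the fractional Poisson equation.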
Abstract:
We prove that any continuous function with domain {z ∈ ℂ: |z| ≤ 1} that generates a bizonal positive definite kernel on the unit sphere in ℂ^q, q ⩾ 3, is continuously differentiable in {z ∈ ℂ: |z| < 1} up to order q − 2, with respect to both z and z̄. In particular, the partial derivatives of the function with respect to x = Re z and y = Im z exist and are continuous in {z ∈ ℂ: |z| < 1} up to the same order.
Abstract:
Recent advances suggest that encoding images through Symmetric Positive Definite (SPD) matrices and then interpreting such matrices as points on Riemannian manifolds can lead to increased classification performance. Taking into account manifold geometry is typically done via (1) embedding the manifolds in tangent spaces, or (2) embedding into Reproducing Kernel Hilbert Spaces (RKHS). While embedding into tangent spaces allows the use of existing Euclidean-based learning algorithms, manifold shape is only approximated which can cause loss of discriminatory information. The RKHS approach retains more of the manifold structure, but may require non-trivial effort to kernelise Euclidean-based learning algorithms. In contrast to the above approaches, in this paper we offer a novel solution that allows SPD matrices to be used with unmodified Euclidean-based learning algorithms, with the true manifold shape well-preserved. Specifically, we propose to project SPD matrices using a set of random projection hyperplanes over RKHS into a random projection space, which leads to representing each matrix as a vector of projection coefficients. Experiments on face recognition, person re-identification and texture classification show that the proposed approach outperforms several recent methods, such as Tensor Sparse Coding, Histogram Plus Epitome, Riemannian Locality Preserving Projection and Relational Divergence Classification.
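The following toy sketch illustrates the general recipe rather than the paper's exact construction: SPD matrices are compared through an assumed log-Euclidean Gaussian kernel, and each random RKHS hyperplane is taken to be a random combination of a small pool of samples, so each matrix becomes a vector of projection coefficients:

```python
import numpy as np

def spd_log(X):
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return (V * np.log(w)) @ V.T

def log_euclidean_kernel(X, Y, sigma=1.0):
    """Gaussian kernel on SPD matrices under the log-Euclidean distance."""
    d = np.linalg.norm(spd_log(X) - spd_log(Y), "fro")
    return np.exp(-d ** 2 / (2 * sigma ** 2))

def random_rkhs_features(spd_list, pool, n_proj=32, sigma=1.0, seed=0):
    """Represent each SPD matrix by its projections onto random RKHS
    hyperplanes, each hyperplane a random combination of pool samples."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_proj, len(pool)))          # hyperplane weights
    K = np.array([[log_euclidean_kernel(P, X, sigma) for X in spd_list]
                  for P in pool])                          # pool-vs-data kernel
    return (W @ K).T                                       # one row per matrix

# Example with random 3x3 SPD matrices standing in for image descriptors.
rng = np.random.default_rng(1)
def rand_spd(d=3):
    M = rng.standard_normal((d, d))
    return M @ M.T + d * np.eye(d)

data = [rand_spd() for _ in range(10)]
pool = [rand_spd() for _ in range(5)]
features = random_rkhs_features(data, pool)    # shape (10, 32)
```

The resulting feature vectors can then be fed to any unmodified Euclidean-based learning algorithm, which is the point of the approach.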
Abstract:
We are concerned with the class ∏n of n×n complex matrices A for which the Hermitian part H(A) = (A + A*)/2 is positive definite.
Various connections are established with other classes, such as the stable, D-stable and dominant diagonal matrices. For instance, it is proved that if there exist positive diagonal matrices D, E such that DAE is either row dominant or column dominant and has positive diagonal entries, then there is a positive diagonal F such that FA ∈ ∏n.
Powers are investigated, and it is found that the only matrices A for which A^m ∈ ∏n for all integers m are the Hermitian elements of ∏n. Products and sums are considered, and criteria are developed for AB to be in ∏n.
Since ∏n is closed under inversion, relations between H(A)^(-1) and H(A^(-1)) are studied, and a dichotomy is observed between the real and complex cases. In the real case more can be said, and the initial result is that for A ∈ ∏n, the difference H(adj A) − adj H(A) ⩾ 0 always, and is > 0 if and only if S(A) = (A − A*)/2 has more than one pair of conjugate non-zero characteristic roots. This is refined to characterize the real c for which cH(A^(-1)) − H(A)^(-1) is positive definite.
The cramped unitary matrices (those with characteristic roots on an arc of less than 180°) are linked to ∏n and characterized in several ways via products of the form A^(-1)A*.
Classical inequalities for Hermitian positive definite matrices are studied in ∏n, and for Hadamard's inequality two types of generalizations are given. In the first, a large subclass of ∏n in which the precise statement of Hadamard's inequality holds is isolated, while in another large subclass its reverse is shown to hold. In the second, Hadamard's inequality is weakened in such a way that it holds throughout ∏n. Both approaches contain the original Hadamard inequality as a special case.
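A numerical membership test for this class is straightforward; the sketch below (our notation, not the paper's) checks positive definiteness of the Hermitian part through its eigenvalues:

```python
import numpy as np

def hermitian_part(A):
    """H(A) = (A + A*)/2, where A* is the conjugate transpose."""
    return (A + A.conj().T) / 2

def in_Pi_n(A, tol=1e-12):
    """Membership test for the class above: H(A) positive definite,
    i.e. the smallest eigenvalue of H(A) is strictly positive."""
    return bool(np.linalg.eigvalsh(hermitian_part(A)).min() > tol)

A = np.array([[2.0, -3.0], [1.0, 2.0]])  # non-Hermitian example
print(in_Pi_n(A))  # True: H(A) = [[2, -1], [-1, 2]], eigenvalues 1 and 3
```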
Abstract:
Sparse coding aims to find a more compact representation of data in terms of a set of dictionary atoms. A well-known technique that exploits 2D sparsity is low rank representation (LRR). However, in many computer vision applications, data often originate from a manifold equipped with some Riemannian geometry. In this case, the existing LRR is ill-suited to modeling the intrinsic geometry of the manifold, which is potentially important and critical to applications. In this paper, we generalize LRR over Euclidean space to an LRR model over a specific Riemannian manifold: the manifold of symmetric positive definite (SPD) matrices. Experiments on several computer vision datasets showcase its noise robustness and superior performance on classification and segmentation compared with state-of-the-art approaches.
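For reference, the Euclidean LRR model being generalized is usually stated as the following convex program, with the data matrix X serving as its own dictionary, ‖Z‖_* the nuclear norm, ‖E‖_{2,1} the column-wise error norm, and λ a trade-off parameter (standard notation, not necessarily the paper's):

```latex
\[
  \min_{Z,\,E}\; \|Z\|_{*} + \lambda\,\|E\|_{2,1}
  \quad\text{subject to}\quad X = XZ + E .
\]
```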
Abstract:
Modern GPUs are well suited to intensive computational tasks and massively parallel computation. Sparse matrix multiplication and the linear triangular solver are among the most important and heavily used kernels in scientific computation, and several challenges in developing a high-performance kernel from these two modules are investigated. The main interest is to solve linear systems derived from elliptic equations discretised with triangular elements. The resulting linear system has a symmetric positive definite matrix. The sparse matrix is stored in the compressed sparse row (CSR) format. A CUDA algorithm is proposed to execute the matrix-vector multiplication directly on the CSR format. A dependence-tree algorithm is used to determine which unknowns the linear triangular solver can compute in parallel. To increase the number of parallel threads, a graph coloring algorithm is implemented to reorder the mesh numbering in a pre-processing phase. The proposed method is compared with available parallel and serial libraries. The results show that the proposed method improves the computational cost of the matrix-vector multiplication. The pre-processing associated with the triangular solver needs to be executed only once in the proposed method. The conjugate gradient method was implemented and showed a similar convergence rate for all the compared methods, with the proposed method achieving significantly smaller execution times.
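A serial sketch of the CSR matrix-vector product makes the data layout concrete; in a CUDA kernel each row's dot product would typically be assigned to a thread (or a warp), though the paper's exact thread mapping is not reproduced here:

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A x with A in compressed sparse row (CSR) form. Each row is an
    independent dot product -- on a GPU, one thread (or warp) per row."""
    n = len(indptr) - 1
    y = np.zeros(n)
    for row in range(n):                      # the parallel loop on a GPU
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

# 3x3 SPD tridiagonal example: [[2,-1,0],[-1,2,-1],[0,-1,2]]
data    = np.array([2., -1., -1., 2., -1., -1., 2.])
indices = np.array([0, 1, 0, 1, 2, 1, 2])
indptr  = np.array([0, 2, 5, 7])
print(csr_matvec(data, indices, indptr, np.ones(3)))  # [1. 0. 1.]
```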
Abstract:
Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive definite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space -- classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semi-definite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data this gives a powerful transductive algorithm -- using the labelled part of the data one can learn an embedding also for the unlabelled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method to learn the 2-norm soft margin parameter in support vector machines, solving another important open problem. Finally, the novel approach presented in the paper is supported by positive empirical results.
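A toy version of this idea, in the kernel-target-alignment style, can be written with cvxpy; the candidate-kernel parameterization, data, and trace normalization below are illustrative assumptions rather than the paper's exact formulation:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((9, 2))             # 6 training + 3 test points
y = np.array([1., 1., 1., -1., -1., -1.])   # labels for the training part
n_tr = 6

def rbf_gram(X, gamma):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

candidates = [rbf_gram(X, g) for g in (0.1, 1.0, 10.0)]  # fixed Gram matrices
mu = cp.Variable(len(candidates))
K = sum(mu[i] * candidates[i] for i in range(len(candidates)))

# Maximise alignment of the training block of K with y y^T; fixing the
# trace rules out inflating the objective by rescaling, and K stays PSD.
prob = cp.Problem(
    cp.Maximize(cp.trace(K[:n_tr, :n_tr] @ np.outer(y, y))),
    [cp.trace(K) == len(X), K >> 0])
prob.solve()
print(mu.value)                   # learned combination weights
print(K.value[:n_tr, n_tr:])      # induced train-test similarities
```

Because the candidate Gram matrices are built over training and test points together, optimising the training block automatically induces similarities for the unlabelled part, which is the transductive step described above.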
Abstract:
Diffusion weighted magnetic resonance (MR) imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitized gradients along a minimum of 6 directions, second-order tensors can be computed to model dominant diffusion processes. However, conventional DTI is not sufficient to resolve crossing fiber tracts. Recently, a number of high-angular resolution schemes with greater than 6 gradient directions have been employed to address this issue. In this paper, we introduce the Tensor Distribution Function (TDF), a probability function defined on the space of symmetric positive definite matrices. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF, and the TDF that optimally describes the observed data is estimated. Once this optimal TDF is determined, the diffusion orientation distribution function (ODF) can easily be computed by analytic integration of the resulting displacement probability function.
Abstract:
High-angular resolution diffusion imaging (HARDI) can reconstruct fiber pathways in the brain with extraordinary detail, identifying anatomical features and connections not seen with conventional MRI. HARDI overcomes several limitations of standard diffusion tensor imaging, which fails to model diffusion correctly in regions where fibers cross or mix. As HARDI can accurately resolve sharp signal peaks in angular space where fibers cross, we studied how many gradients are required in practice to compute accurate orientation density functions, to better understand the tradeoff between longer scanning times and greater angular precision. We computed orientation density functions analytically from tensor distribution functions (TDFs), which model the HARDI signal at each point as a unit-mass probability density on the 6D manifold of symmetric positive definite tensors. In simulated two-fiber systems with varying Rician noise, we assessed how many diffusion-sensitized gradients were sufficient to (1) accurately resolve the diffusion profile, and (2) measure the exponential isotropy (EI), a TDF-derived measure of fiber integrity that exploits the full multidirectional HARDI signal. At lower SNR, the reconstruction accuracy, measured using the Kullback-Leibler divergence, rapidly increased with additional gradients, and EI estimation accuracy plateaued at around 70 gradients.
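The accuracy metric above can be made concrete: treating two ODFs sampled on a common set of directions as discrete probability distributions (a simplification we assume here), the Kullback-Leibler divergence reduces to a finite sum:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two ODFs sampled on the same directions,
    renormalised and treated as discrete probability distributions."""
    p = np.asarray(p, dtype=float); p /= p.sum()
    q = np.asarray(q, dtype=float); q /= q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Example: a reconstructed two-peak ODF versus the ground truth.
truth = np.array([0.05, 0.45, 0.05, 0.45])
recon = np.array([0.10, 0.40, 0.10, 0.40])
print(kl_divergence(truth, recon))   # small positive number
```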
Abstract:
Diffusion weighted magnetic resonance imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitized gradients along a minimum of six directions, second-order tensors (represented by three-by-three positive definite matrices) can be computed to model dominant diffusion processes. However, conventional DTI is not sufficient to resolve more complicated white matter configurations, e.g., crossing fiber tracts. Recently, a number of high-angular resolution schemes with more than six gradient directions have been employed to address this issue. In this article, we introduce the tensor distribution function (TDF), a probability function defined on the space of symmetric positive definite matrices. Using the calculus of variations, we solve for the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the orientation distribution function (ODF) can easily be computed by analytic integration of the resulting displacement probability function. Moreover, a tensor orientation distribution function (TOD) may also be derived from the TDF, allowing for the estimation of principal fiber directions and their corresponding eigenvalues.
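In the usual ensemble-of-Gaussians form (diffusion time t; notation assumed, not taken verbatim from the article), the displacement probability implied by a TDF P over the SPD cone 𝒫 reads:

```latex
\[
  p(\mathbf{r}) \;=\; \int_{\mathcal{P}} P(D)\,
  \frac{1}{\sqrt{(4\pi t)^{3}\det D}}\,
  \exp\!\left(-\frac{\mathbf{r}^{\mathsf{T}} D^{-1}\mathbf{r}}{4t}\right) dD,
\]
% with the ODF then obtained by radial integration of p along each direction.
```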
Abstract:
The generalization of the geometric mean of positive scalars to positive definite matrices has attracted considerable attention since the seminal work of Ando. This paper generalizes the framework of matrix means by proposing a rank-preserving mean for two, or an arbitrary number of, positive semi-definite matrices of fixed rank. The proposed mean is shown to be geometric in that it satisfies all the expected properties of a rank-preserving geometric mean. The work is motivated by operations on low-rank approximations of positive definite matrices in high-dimensional spaces.
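For two full-rank SPD matrices, the classical geometric mean in Ando's sense, which this work extends to fixed-rank positive semi-definite matrices, can be computed directly; a small sketch:

```python
import numpy as np
from scipy.linalg import sqrtm, inv

def geometric_mean(A, B):
    """Ando's geometric mean of two SPD matrices:
    A # B = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2)."""
    As = sqrtm(A)
    Asi = inv(As)
    return As @ sqrtm(Asi @ B @ Asi) @ As

# Commuting example: for diagonal matrices the mean is the entrywise
# geometric mean, so the result below is diag(3, 8).
print(geometric_mean(np.diag([1.0, 4.0]), np.diag([9.0, 16.0])))
```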
Abstract:
Wigner functions play a central role in the phase space formulation of quantum mechanics. Although closely related to classical Liouville densities, Wigner functions are not positive definite and may take negative values on subregions of phase space. We investigate the accumulation of these negative values by studying bounds on the integral of an arbitrary Wigner function over noncompact subregions of the phase plane with hyperbolic boundaries. We show using symmetry techniques that this problem reduces to computing the bounds on the spectrum associated with an exactly solvable eigenvalue problem and that the bounds differ from those on classical Liouville distributions. In particular, we show that the total "quasiprobability" on such a region can be greater than 1 or less than zero.
Abstract:
This study considers the solution of a class of linear systems related to the fractional Poisson equation (FPE) (−∇²)^(α/2)φ = g(x, y) with nonhomogeneous boundary conditions on a bounded domain. A numerical approximation to the FPE is derived using a matrix representation of the Laplacian to generate a linear system of equations with its matrix A raised to the fractional power α/2. The solution of the linear system then requires the action of the matrix function f(A) = A^(−α/2) on a vector b. For large, sparse, and symmetric positive definite matrices, the Lanczos approximation generates f(A)b ≈ β_0 V_m f(T_m) e_1. This method works well when both the analytic grade of A with respect to b and the residual for the linear system are sufficiently small. Memory constraints often require restarting the Lanczos decomposition; however, this is not straightforward in the context of matrix function approximation. In this paper, we use the ideas of thick-restart and adaptive preconditioning for solving linear systems to improve convergence of the Lanczos approximation. We give an error bound for the new method and illustrate its role in solving the FPE. Numerical results are provided to gauge the performance of the proposed method relative to exact analytic solutions.
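A tiny self-contained illustration of the matrix transfer idea, assuming a 1D finite difference Laplacian with homogeneous Dirichlet conditions and a dense eigendecomposition in place of the Lanczos machinery (sizes and right-hand side are illustrative):

```python
import numpy as np
import scipy.sparse as sp

# Matrix transfer technique in 1D: discretise -d^2/dx^2 on (0, 1) with
# homogeneous Dirichlet conditions, then apply A^(-alpha/2) to b. The
# problem is small, so a dense eigendecomposition stands in for Lanczos.
n, alpha = 199, 1.5
h = 1.0 / (n + 1)
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).toarray() / h**2

x = np.linspace(h, 1 - h, n)
b = np.sin(np.pi * x)                       # right-hand side g

evals, Q = np.linalg.eigh(A)                # A is symmetric positive definite
phi = Q @ (evals ** (-alpha / 2) * (Q.T @ b))

# Check: sin(pi x) is an eigenvector of A with eigenvalue
# lam = (2 - 2 cos(pi h)) / h^2, so phi = lam^(-alpha/2) * b exactly.
lam = (2 - 2 * np.cos(np.pi * h)) / h**2
print(np.max(np.abs(phi - lam ** (-alpha / 2) * b)))  # rounding error only
```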