993 results for POSITIVE DEFINITE KERNELS


Relevance:

100.00%

Publisher:

Abstract:

We prove that any continuous function with domain {z ∈ ℂ: |z| ≤ 1} that generates a bizonal positive definite kernel on the unit sphere in ℂ^q, q ⩾ 3, is continuously differentiable in {z ∈ ℂ: |z| < 1} up to order q − 2, with respect to both z and z̄. In particular, the partial derivatives of the function with respect to x = Re z and y = Im z exist and are continuous in {z ∈ ℂ: |z| < 1} up to the same order.
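
For context, a bizonal kernel on the complex sphere is, under the usual convention, one of the form K(ξ, η) = f(⟨ξ, η⟩), with ⟨·,·⟩ the inner product of ℂ^q, so that positive definiteness reads (in LaTeX notation)

    \sum_{i,j=1}^{m} c_i \overline{c_j} \, f(\langle \xi_i, \xi_j \rangle) \;\ge\; 0

for all finite point sets ξ_1, …, ξ_m on the sphere and all c_1, …, c_m ∈ ℂ.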

Relevance:

100.00%

Publisher:

Abstract:

Modern GPUs are well suited to intensive computational tasks and massive parallel computation. Sparse matrix-vector multiplication and triangular linear solves are among the most important and heavily used kernels in scientific computing, and several challenges in developing high-performance implementations of these two modules are investigated. The main interest is to solve linear systems derived from elliptic equations discretised with triangular elements; the resulting linear system has a symmetric positive definite matrix. The sparse matrix is stored in the compressed sparse row (CSR) format, and a CUDA algorithm is proposed to perform the matrix-vector multiplication directly on the CSR format. A dependence-tree algorithm is used to determine which unknowns the triangular solver can compute in parallel, and, to increase the number of parallel threads, a graph colouring algorithm is implemented to reorder the mesh numbering in a pre-processing phase. The proposed method is compared with available parallel and serial libraries. The results show that the proposed method reduces the cost of the matrix-vector multiplication, and the pre-processing associated with the triangular solver needs to be executed only once. The conjugate gradient method was implemented and showed similar convergence rates for all the compared methods, with the proposed method giving significantly smaller execution times.
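
For reference, here is a minimal serial Python/NumPy sketch of matrix-vector multiplication working directly on the CSR arrays (the paper's kernel is CUDA; this sketch only illustrates the data layout and the row-wise independence the GPU exploits by assigning rows to parallel threads):

    import numpy as np

    def csr_matvec(data, indices, indptr, x):
        """y = A @ x for A in compressed sparse row (CSR) format.
        Each row is independent, which is what a CUDA kernel can
        exploit by assigning one thread per row."""
        n = len(indptr) - 1
        y = np.zeros(n)
        for row in range(n):
            lo, hi = indptr[row], indptr[row + 1]
            y[row] = np.dot(data[lo:hi], x[indices[lo:hi]])
        return y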

Relevance:

100.00%

Publisher:

Abstract:

Matrix function approximation is a current focus of worldwide interest and finds application in a variety of areas of applied mathematics and statistics. In this thesis we focus on the approximation of A^(-α/2)b, where A ∈ ℝ^(n×n) is a large, sparse symmetric positive definite matrix and b ∈ ℝ^n is a vector. In particular, we focus on matrix function techniques for sampling from Gaussian Markov random fields in applied statistics and for the solution of fractional-in-space partial differential equations.

Gaussian Markov random fields (GMRFs) are multivariate normal random variables characterised by a sparse precision (inverse covariance) matrix. GMRFs are popular models in computational spatial statistics, as the sparse structure can be exploited, typically through the use of the sparse Cholesky decomposition, to construct fast sampling methods. It is well known, however, that for sufficiently large problems, iterative methods for solving linear systems outperform direct methods. Fractional-in-space partial differential equations arise in models of processes undergoing anomalous diffusion. Unfortunately, as the fractional Laplacian is a non-local operator, numerical methods based on the direct discretisation of these equations typically require the solution of dense linear systems, which is impractical for fine discretisations.

In this thesis, novel applications of Krylov subspace approximations to matrix functions for both of these problems are investigated. Matrix functions arise when sampling from a GMRF by noting that the Cholesky decomposition A = LL^T is, essentially, a 'square root' of the precision matrix A. Therefore, we can replace the usual sampling method, which forms x = L^(-T)z, with x = A^(-1/2)z, where z is a vector of independent and identically distributed standard normal random variables. Similarly, the matrix transfer technique can be used to build solutions to the fractional Poisson equation of the form ϕ_n = A^(-α/2)b, where A is the finite difference approximation to the Laplacian. Hence both applications require the approximation of f(A)b, where f(t) = t^(-α/2) and A is sparse. We compare the Lanczos approximation, the shift-and-invert Lanczos approximation, the extended Krylov subspace method, rational approximations and the restarted Lanczos approximation for approximating matrix functions of this form.

A number of novel results are presented in this thesis. Firstly, we prove the convergence of the matrix transfer technique for the solution of the fractional Poisson equation and give conditions under which the finite difference discretisation can be replaced by other methods for discretising the Laplacian. We then investigate a number of methods for approximating matrix functions of the form A^(-α/2)b and investigate stopping criteria for these methods. In particular, we derive a new method for restarting the Lanczos approximation to f(A)b. We then apply these techniques to the problem of sampling from a GMRF and construct a full suite of methods for sampling conditioned on linear constraints and for approximating the likelihood. Finally, we consider the problem of sampling from a generalised Matérn random field, which combines our techniques for solving fractional-in-space partial differential equations with our method for sampling from GMRFs.
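
As an illustration of the central primitive, the following is a minimal Python sketch of the plain Lanczos approximation f(A)b ≈ ‖b‖ V_m f(T_m) e_1, with a hypothetical fixed subspace dimension m and no reorthogonalisation, breakdown handling or stopping criterion:

    import numpy as np

    def lanczos_fAb(A, b, f, m=30):
        """Approximate f(A) b for symmetric A by projecting onto the
        Krylov subspace K_m(A, b): f(A) b ~= ||b|| V_m f(T_m) e_1."""
        n = b.size
        V = np.zeros((n, m))
        alpha = np.zeros(m)
        beta = np.zeros(m - 1)
        V[:, 0] = b / np.linalg.norm(b)
        for j in range(m):
            w = A @ V[:, j]
            alpha[j] = V[:, j] @ w
            w -= alpha[j] * V[:, j]
            if j > 0:
                w -= beta[j - 1] * V[:, j - 1]
            if j < m - 1:
                beta[j] = np.linalg.norm(w)
                V[:, j + 1] = w / beta[j]
        # evaluate f on the small tridiagonal T_m via its eigendecomposition
        T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
        evals, Q = np.linalg.eigh(T)
        return np.linalg.norm(b) * (V @ (Q @ (f(evals) * Q[0, :])))

    # GMRF sampling with precision matrix A: x = lanczos_fAb(A, z, lambda t: t**-0.5)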

Relevance:

100.00%

Publisher:

Abstract:

Recent advances suggest that encoding images through Symmetric Positive Definite (SPD) matrices and then interpreting such matrices as points on Riemannian manifolds can lead to increased classification performance. Taking into account manifold geometry is typically done via (1) embedding the manifolds in tangent spaces, or (2) embedding into Reproducing Kernel Hilbert Spaces (RKHS). While embedding into tangent spaces allows the use of existing Euclidean-based learning algorithms, the manifold shape is only approximated, which can cause loss of discriminatory information. The RKHS approach retains more of the manifold structure, but may require non-trivial effort to kernelise Euclidean-based learning algorithms. In contrast to the above approaches, in this paper we offer a novel solution that allows SPD matrices to be used with unmodified Euclidean-based learning algorithms, with the true manifold shape well preserved. Specifically, we propose to project SPD matrices using a set of random projection hyperplanes over RKHS into a random projection space, which leads to representing each matrix as a vector of projection coefficients. Experiments on face recognition, person re-identification and texture classification show that the proposed approach outperforms several recent methods, such as Tensor Sparse Coding, Histogram Plus Epitome, Riemannian Locality Preserving Projection and Relational Divergence Classification.
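
A minimal Python sketch of one plausible reading of this construction, assuming a Gaussian kernel under the log-Euclidean metric (a known positive definite kernel on the SPD manifold) and random hyperplanes expressed as random weightings of the training samples in the RKHS; the names and the sampling of hyperplanes are illustrative, not the paper's exact scheme:

    import numpy as np
    from scipy.linalg import logm

    def log_euclidean_rbf(X, Y, gamma=0.5):
        """Gaussian kernel on SPD matrices under the log-Euclidean metric."""
        d2 = np.linalg.norm(logm(X) - logm(Y), 'fro') ** 2
        return np.exp(-gamma * d2)

    def projection_coefficients(train_spd, query_spd, n_proj=64, seed=0):
        """Represent an SPD matrix as its projections onto n_proj random
        hyperplanes in the RKHS, each built as a random combination of
        the training samples."""
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((n_proj, len(train_spd)))
        k = np.array([log_euclidean_rbf(X, query_spd) for X in train_spd])
        return W @ k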

Relevance:

100.00%

Publisher:

Abstract:

We are concerned with the class Πn of n×n complex matrices A for which the Hermitian part H(A) = (A + A*)/2 is positive definite.
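
A minimal numerical membership test for this class, sketched in Python:

    import numpy as np

    def in_Pi_n(A, tol=1e-12):
        """Return True if A lies in Πn, i.e. if the Hermitian part
        H(A) = (A + A*)/2 is positive definite."""
        H = (A + A.conj().T) / 2
        return bool(np.all(np.linalg.eigvalsh(H) > tol))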

Various connections are established with other classes such as the stable, D-stable and dominant diagonal matrices. For instance, it is proved that if there exist positive diagonal matrices D, E such that DAE is either row dominant or column dominant and has positive diagonal entries, then there is a positive diagonal F such that FA ∈ Πn.

Powers are investigated and it is found that the only matrices A for which A^m ∈ Πn for all integers m are the Hermitian elements of Πn. Products and sums are considered and criteria are developed for AB to be in Πn.

Since Πn is closed under inversion, relations between H(A)^(-1) and H(A^(-1)) are studied and a dichotomy observed between the real and complex cases. In the real case more can be said, and the initial result is that for A ∈ Πn the difference H(adj A) − adj H(A) ≥ 0 always, and is > 0 if and only if S(A) = (A − A*)/2 has more than one pair of conjugate non-zero characteristic roots. This is refined to characterize the real c for which cH(A^(-1)) − H(A)^(-1) is positive definite.

The cramped (characteristic roots on an arc of less than 180°) unitary matrices are linked to Πn and characterized in several ways via products of the form A^(-1)A*.

Classical inequalities for Hermitian positive definite matrices are studied in Πn, and for Hadamard's inequality two types of generalizations are given. In the first, a large subclass of Πn in which the precise statement of Hadamard's inequality holds is isolated, while in another large subclass its reverse is shown to hold. In the second, Hadamard's inequality is weakened in such a way that it holds throughout Πn. Both approaches contain the original Hadamard inequality as a special case.
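
For reference, the classical Hadamard inequality being generalized states that a Hermitian positive definite A = (a_ij) satisfies, in LaTeX notation,

    \det A \;\le\; \prod_{i=1}^{n} a_{ii},

with equality if and only if A is diagonal.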

Relevance:

100.00%

Publisher:

Abstract:

Sparse coding aims to find a more compact representation based on a set of dictionary atoms. A well-known technique that exploits 2D sparsity is the low-rank representation (LRR). However, in many computer vision applications, data often originate from a manifold equipped with some Riemannian geometry, and in this case the existing LRR becomes inappropriate for modeling and incorporating the intrinsic geometry of the manifold, which is potentially important and critical to applications. In this paper, we generalize the LRR from Euclidean space to the LRR model over a specific Riemannian manifold: the manifold of symmetric positive definite (SPD) matrices. Experiments on several computer vision datasets showcase its noise robustness and superior performance on classification and segmentation compared with state-of-the-art approaches.
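
For context, the Euclidean LRR model being generalized is usually written as the nuclear-norm-regularized self-expression problem (in LaTeX notation)

    \min_{Z,\,E} \; \|Z\|_* + \lambda \|E\|_{2,1} \quad \text{s.t.} \quad X = XZ + E,

where the columns of X are the data points, ‖·‖_* is the nuclear norm promoting low rank, and E absorbs noise and outliers.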

Relevance:

100.00%

Publisher:

Abstract:

We analyze reproducing kernel Hilbert spaces of positive definite kernels on a topological space X that is either first countable or locally compact. The results include versions of Mercer's theorem and theorems on the embedding of these spaces into spaces of continuous and square integrable functions.
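
In its classical compact-domain form, the Mercer expansion referred to here writes a continuous positive definite kernel in terms of the eigenpairs (λ_n, φ_n) of the associated integral operator:

    K(x, y) \;=\; \sum_{n \ge 1} \lambda_n \, \varphi_n(x) \, \varphi_n(y),

with the series converging absolutely and uniformly; the versions in the paper extend this picture to the first countable and locally compact settings.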

Relevance:

100.00%

Publisher:

Abstract:

We prove that any isotropic positive definite function on the sphere can be written as the spherical self-convolution of an isotropic real-valued function. It is known that isotropic positive definite functions on d-dimensional Euclidean space admit a continuous derivative of order [(d − 1)/2]. We show that the same holds true for isotropic positive definite functions on spheres and prove that this result is optimal for all odd dimensions.
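
Here, under the standard convention, the spherical self-convolution of a zonal (isotropic) function g is given in LaTeX notation by

    (g * g)(x \cdot y) \;=\; \int_{\mathbb{S}^d} g(x \cdot z) \, g(z \cdot y) \, d\sigma(z),

with σ the surface measure, so the first result states that f = g * g for some isotropic real-valued g.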

Relevance:

100.00%

Publisher:

Abstract:

Some relationships between representations of a hypergroup X, its algebras, and positive definite functions on X are studied. Also, various types of convergence of positive definite functions on X are discussed.

Relevance:

100.00%

Publisher:

Abstract:

State-of-the-art image-set matching techniques typically implicitly model each image-set with a Gaussian distribution. Here, we propose to go beyond these representations and model image-sets as probability distribution functions (PDFs) using kernel density estimators. To compare and match image-sets, we exploit Csiszár f-divergences, which bear strong connections to the geodesic distance defined on the space of PDFs, i.e., the statistical manifold. Furthermore, we introduce valid positive definite kernels on the statistical manifold, which let us make use of more powerful classification schemes to match image-sets. Finally, we introduce a supervised dimensionality reduction technique that learns a latent space where f-divergences reflect the class labels of the data. Our experiments on diverse problems, such as video-based face recognition and dynamic texture classification, evidence the benefits of our approach over the state-of-the-art image-set matching methods.
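
A minimal Python sketch of this pipeline on 1-D features (real image-set descriptors are higher-dimensional; the squared Hellinger distance used here is one Csiszár f-divergence for which the induced Gaussian-type kernel is known to be positive definite):

    import numpy as np
    from scipy.stats import gaussian_kde

    def hellinger_sq(samples_a, samples_b, grid):
        """Squared Hellinger distance between kernel density estimates
        of two image sets, evaluated on a uniform 1-D grid."""
        dx = grid[1] - grid[0]
        p = gaussian_kde(samples_a)(grid)
        q = gaussian_kde(samples_b)(grid)
        p /= p.sum() * dx          # renormalize on the grid
        q /= q.sum() * dx
        return 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2) * dx

    def set_kernel(samples_a, samples_b, grid, gamma=1.0):
        """Gaussian-type kernel built from the divergence."""
        return np.exp(-gamma * hellinger_sq(samples_a, samples_b, grid))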

Relevance:

100.00%

Publisher:

Abstract:

Structural alignments are the most widely used tools for comparing proteins with low sequence similarity. The main contribution of this paper is to derive various kernels on proteins from structural alignments which do not use sequence information. Central to the kernels is a novel alignment algorithm which matches substructures of fixed size using spectral graph matching techniques. We derive positive semi-definite kernels which capture the notion of similarity between substructures. Using these as a base, more sophisticated kernels on protein structures are proposed. To empirically evaluate the kernels, we used a 40% sequence non-redundant set of structures from 15 different SCOP superfamilies. The kernels, when used with SVMs, show competitive performance with CE, a state-of-the-art structure comparison program.
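
A minimal sketch of the spectral matching idea behind the alignment step, assuming equal-size substructures represented by symmetric matrices (e.g. inter-residue distance matrices); this is a generic Umeyama-style illustration, not the paper's exact algorithm:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def spectral_match(A1, A2):
        """Match the vertices of two equal-size substructures by
        comparing absolute eigenvector coordinates, then solving the
        resulting assignment problem."""
        _, U1 = np.linalg.eigh(A1)
        _, U2 = np.linalg.eigh(A2)
        S = np.abs(U1) @ np.abs(U2).T           # vertex-vertex similarity
        rows, cols = linear_sum_assignment(-S)  # maximize total similarity
        return cols                             # cols[i] matches vertex i of A1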

Relevance:

100.00%

Publisher:

Abstract:

The generalization of the geometric mean of positive scalars to positive definite matrices has attracted considerable attention since the seminal work of Ando. The paper generalizes this framework of matrix means by proposing the definition of a rank-preserving mean for two or an arbitrary number of positive semi-definite matrices of fixed rank. The proposed mean is shown to be geometric in that it satisfies all the expected properties of a rank-preserving geometric mean. The work is motivated by operations on low-rank approximations of positive definite matrices in high-dimensional spaces.
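
For reference, the Ando-style geometric mean of two positive definite matrices, whose properties the proposed rank-preserving mean is designed to retain, is

    A \,\#\, B \;=\; A^{1/2} \bigl( A^{-1/2} B A^{-1/2} \bigr)^{1/2} A^{1/2},

which reduces to √(ab) for positive scalars a and b.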

Relevance:

90.00%

Publisher:

Abstract:

Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive definite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space -- classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semi-definite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data, this gives a powerful transductive algorithm -- using the labelled part of the data, one can learn an embedding also for the unlabelled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method to learn the 2-norm soft margin parameter in support vector machines, solving another important open problem. Finally, the novel approach presented in the paper is supported by positive empirical results.
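
A minimal sketch, in Python with the cvxpy modelling package, of one SDP in this spirit: learn a linear combination of fixed kernel matrices over the training and test points, maximizing alignment with the training labels subject to the combined matrix being positive semi-definite (the trace normalization and all names are illustrative):

    import numpy as np
    import cvxpy as cp

    def learn_kernel(K_list, y_train, n_train, c=1.0):
        """Learn K = sum_i mu_i K_i over train + test points by
        maximizing alignment with the training labels, with K
        constrained to be positive semi-definite."""
        mu = cp.Variable(len(K_list))
        K = sum(mu[i] * K_list[i] for i in range(len(K_list)))
        yyT = np.outer(y_train, y_train)
        align = cp.trace(K[:n_train, :n_train] @ yyT)
        prob = cp.Problem(cp.Maximize(align),
                          [(K + K.T) / 2 >> 0, cp.trace(K) == c])
        prob.solve()
        return K.value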

Relevance:

90.00%

Publisher:

Abstract:

In this paper we study the problem of designing SVM classifiers when the kernel matrix, K, is affected by uncertainty. Specifically, K is modeled as a positive affine combination of given positive semi-definite kernels, with the coefficients ranging in a norm-bounded uncertainty set. We treat the problem using the Robust Optimization methodology. This reduces the uncertain SVM problem to a deterministic conic quadratic problem which can be solved in principle by a polynomial-time Interior Point (IP) algorithm. However, for large-scale classification problems, IP methods become intractable and one has to resort to first-order gradient-type methods. The strategy we use here is to reformulate the robust counterpart of the uncertain SVM problem as a saddle-point problem and employ a special gradient scheme which works directly on the convex-concave saddle function. The algorithm is a simplified version of a general scheme due to Juditsky and Nemirovski (2011); it achieves an O(1/T²) reduction of the initial error after T iterations. A comprehensive empirical study on both synthetic data and real-world protein structure data sets shows that the proposed formulations achieve the desired robustness, and that the saddle-point-based algorithm outperforms the IP method significantly.
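
As a small illustration of the saddle-point structure, here is a hypothetical Python sketch of the inner, adversarial step: for fixed dual variables, the coefficients in a Euclidean ball around a nominal value that minimize the quadratic kernel term of the dual objective have a closed form (the ball geometry and all names are assumptions for illustration, not the paper's exact scheme):

    import numpy as np

    def worst_case_coefficients(K_list, eta0, rho, alpha, y):
        """For fixed dual variables alpha, choose eta with
        ||eta - eta0|| <= rho minimizing sum_i eta_i * (v @ K_i @ v),
        v = alpha * y; a linear objective over a ball is minimized on
        the boundary, opposite its gradient."""
        v = alpha * y
        g = np.array([v @ Ki @ v for Ki in K_list])
        eta = eta0 - rho * g / (np.linalg.norm(g) + 1e-12)
        K = sum(e * Ki for e, Ki in zip(eta, K_list))
        return eta, K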

Relevance:

90.00%

Publisher:

Abstract:

In this paper we consider boundary integral methods applied to boundary value problems for the positive definite Helmholtz-type problem −ΔU + α²U = 0 in a bounded or unbounded domain, with the parameter α real and possibly large. Applications arise in the implementation of space-time boundary integral methods for the heat equation, where α is proportional to 1/√δt, and δt is the time step. The corresponding layer potentials arising from this problem depend nonlinearly on the parameter α and have kernels which become highly peaked as α → ∞, causing standard discretization schemes to fail. We propose a new collocation method with a robust convergence rate as α → ∞. Numerical experiments on a model problem verify the theoretical results.
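
To make the peaking concrete: the free-space fundamental solution of this operator in two dimensions is G(r) = K_0(αr)/(2π), which decays like e^(−αr)/√(αr), so the layer-potential kernels concentrate near r = 0 as α grows. A small Python sketch:

    import numpy as np
    from scipy.special import k0

    def fundamental_solution(r, alpha):
        """2-D fundamental solution of -ΔU + α²U = 0 (modified
        Helmholtz); sharply peaked near r = 0 for large α."""
        return k0(alpha * np.asarray(r)) / (2.0 * np.pi)

    # e.g. compare fundamental_solution(1e-3, 1e3) with fundamental_solution(1e-1, 1e3)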