838 results for Reproducing Kernel Hilbert Spaces
Abstract:
This paper presents a computation of the $V_\gamma$ dimension for regression in bounded subspaces of Reproducing Kernel Hilbert Spaces (RKHS) for the Support Vector Machine (SVM) regression $\epsilon$-insensitive loss function and general $L_p$ loss functions. Finiteness of the $V_\gamma$ dimension is shown, which also proves uniform convergence in probability for regression machines in RKHS subspaces that use the $L_\epsilon$ or general $L_p$ loss functions. The paper also presents a novel proof of this result for the case in which a bias is added to the functions in the RKHS.
Abstract:
We analyze reproducing kernel Hilbert spaces of positive definite kernels on a topological space X that is either first countable or locally compact. The results include versions of Mercer's theorem and theorems on the embedding of these spaces into spaces of continuous and square integrable functions.
Abstract:
We study the action of a weighted Fourier–Laplace transform on the functions in the reproducing kernel Hilbert space (RKHS) associated with a positive definite kernel on the sphere. After defining a notion of smoothness implied by the transform, we show that smoothness of the kernel implies the same smoothness for the generating elements (spherical harmonics) in the Mercer expansion of the kernel. We prove a reproducing property for the weighted Fourier–Laplace transform of the functions in the RKHS and embed the RKHS into spaces of smooth functions. Some relevant properties of the embedding are considered, including compactness and boundedness. The approach taken in the paper includes two important notions of differentiability characterized by weighted Fourier–Laplace transforms: fractional derivatives and Laplace–Beltrami derivatives.
Abstract:
We consider analytic reproducing kernel Hilbert spaces $H$ with orthonormal bases of the form $\{(a_n + b_n z)z^n : n \ge 0\}$. If $b_n = 0$ for all $n$, then $H$ is a diagonal space and multiplication by $z$, $M_z$, is a weighted shift. Our focus is on providing extensive classes of examples for which $M_z$ is a bounded subnormal operator on a tridiagonal space $H$ where $b_n \neq 0$. The Aronszajn sums of $H$ and $(1 - z)H$, where $H$ is either the Hardy space or the Bergman space on the disk, are two such examples.
Abstract:
The classical Kramer sampling theorem provides a method for obtaining orthogonal sampling formulas. In particular, when the involved kernel is analytic in the sampling parameter it can be stated in an abstract setting of reproducing kernel Hilbert spaces of entire functions which includes as a particular case the classical Shannon sampling theory. This abstract setting allows us to obtain a sort of converse result and to characterize when the sampling formula associated with an analytic Kramer kernel can be expressed as a Lagrange-type interpolation series. On the other hand, the de Branges spaces of entire functions satisfy orthogonal sampling formulas which can be written as Lagrange-type interpolation series. In this work some links between all these ideas are established.
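As a pointer to the classical particular case mentioned above, Shannon sampling reconstructs a bandlimited function from its uniform samples via the cardinal (sinc) series. The minimal sketch below is illustrative only; the signal, sampling rate, and variable names are assumptions and are not taken from the paper.

```python
import numpy as np

def shannon_reconstruct(samples, fs, t):
    """Reconstruct a bandlimited signal from uniform samples via the cardinal series:
    f(t) = sum_n f(n / fs) * sinc(fs * t - n)."""
    n = np.arange(len(samples))
    # Each sample weights a shifted sinc; np.sinc(x) = sin(pi x) / (pi x)
    return np.array([np.sum(samples * np.sinc(fs * ti - n)) for ti in t])

# Usage: a 3 Hz sine sampled at fs = 10 Hz (above the Nyquist rate) and
# reconstructed on a fine grid
fs = 10.0
ts = np.arange(0, 2, 1 / fs)             # sampling instants
samples = np.sin(2 * np.pi * 3 * ts)     # bandlimited signal
t_fine = np.linspace(0, 2, 400)
f_hat = shannon_reconstruct(samples, fs, t_fine)
```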
Abstract:
We consider a natural representation of solutions for Tikhonov functional equations. This will be done by applying the theory of reproducing kernels to the approximate solutions of general bounded linear operator equations (when defined from reproducing kernel Hilbert spaces into general Hilbert spaces), by using the Hilbert-Schmidt property and tensor product of Hilbert spaces. As a concrete case, we shall consider generalized fractional functions formed by the quotient of Bergman functions by Szegö functions considered from the multiplication operators on the Szegö spaces.
Abstract:
This work investigates theoretical properties of symmetric and anti-symmetric kernels. The first chapters give an overview of the theory of kernels used in supervised machine learning. The central focus is on the regularized least squares algorithm, which is motivated as a problem of function reconstruction through an abstract inverse problem. A brief review of reproducing kernel Hilbert spaces shows how kernels define an implicit hypothesis space with multiple equivalent characterizations and how this space may be modified by incorporating prior knowledge. Mathematical results on the abstract inverse problem, in particular spectral properties, the pseudoinverse, and regularization, are recollected and then specialized to kernels. Symmetric and anti-symmetric kernels are applied to relation learning problems that incorporate the prior knowledge that the relation is symmetric or anti-symmetric, respectively. Theoretical properties of these kernels are proved in the draft on which this thesis is based and are comprehensively referenced here. These proofs show that these kernels can be guaranteed to learn only symmetric or anti-symmetric relations, and that they can learn any such relations relative to the original kernel modified to learn only symmetric or anti-symmetric parts. Further results establish spectral properties of these kernels, the central result being a simple inequality for the trace of the estimator, also called the effective dimension. This quantity is used in learning bounds to guarantee smaller variance.
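For concreteness, a common way to build symmetric and anti-symmetric kernels on pairs from a base kernel is the (anti-)symmetrization below. This is a minimal sketch of that standard construction; the base kernel, names, and parameters are illustrative assumptions rather than the thesis's exact formulation.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """Base Gaussian kernel K(x, y) = exp(-gamma * ||x - y||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def pair_kernel(A, B, C, D, gamma=1.0, anti=False):
    """Kernel between pairs (a_i, b_i) and (c_j, d_j):
      symmetric:      (K(a,c)K(b,d) + K(a,d)K(b,c)) / 2
      anti-symmetric: (K(a,c)K(b,d) - K(a,d)K(b,c)) / 2
    Models built on the symmetric kernel are invariant to swapping the elements
    of a pair; models built on the anti-symmetric kernel change sign."""
    k1 = rbf(A, C, gamma) * rbf(B, D, gamma)
    k2 = rbf(A, D, gamma) * rbf(B, C, gamma)
    return 0.5 * (k1 - k2) if anti else 0.5 * (k1 + k2)
```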
Abstract:
In the first part of this paper we show a similarity between the Structural Risk Minimization (SRM) principle (Vapnik, 1982) and the idea of Sparse Approximation, as defined in Chen, Donoho and Saunders (1995) and Olshausen and Field (1996). We then focus on two specific (approximate) implementations of SRM and Sparse Approximation which have been used to solve the problem of function approximation. For SRM we consider the Support Vector Machine technique proposed by V. Vapnik and his team at AT&T Bell Labs, and for Sparse Approximation we consider a modification of the Basis Pursuit De-Noising algorithm proposed by Chen, Donoho and Saunders (1995). We show that, under certain conditions, these two techniques are equivalent: they give the same solution and they require the solution of the same quadratic programming problem.
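For orientation, the two objectives being compared have the following standard forms; the notation here is assumed for illustration and is not quoted from the paper.

```latex
% SVM regression with the epsilon-insensitive loss over an RKHS defined by K:
\min_{f \in \mathcal{H}_K} \; \sum_{i=1}^{n} |y_i - f(x_i)|_{\epsilon} + \lambda \, \|f\|_K^2
% Basis Pursuit De-Noising over a dictionary of basis functions \varphi_j:
\min_{c} \; \Big\| y - \textstyle\sum_{j} c_j \varphi_j \Big\|_2^2 + \lambda \, \|c\|_1
```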
Abstract:
We extend extreme learning machine (ELM) classifiers to complex Reproducing Kernel Hilbert Spaces (RKHS) where the input/output variables as well as the optimization variables are complex-valued. A new family of classifiers, called complex-valued ELM (CELM), suitable for complex-valued multiple-input–multiple-output processing is introduced. In the proposed method, the associated Lagrangian is computed using induced RKHS kernels, adopting a Wirtinger calculus approach formulated as a constrained optimization problem, similarly to the conventional ELM classifier formulation. When training the CELM, the Karush–Kuhn–Tucker (KKT) theorem is used to solve the dual optimization problem, which simultaneously requires the smallest training error and the smallest norm of the output weights. The proposed formulation also addresses aspects of quaternary classification within a Clifford algebra context. For 2D complex-valued inputs, user-defined complex-coupled hyper-planes divide the classifier input space into four partitions. For 3D complex-valued inputs, the formulation generates three pairs of complex-coupled hyper-planes through orthogonal projections; the six hyper-planes then divide the 3D space into eight partitions. It is shown that the CELM problem formulation is equivalent to solving six real-valued ELM tasks, which are induced by projecting the chosen complex kernel across the different user-defined coordinate planes. A classification example of powdered samples on the basis of their terahertz spectral signatures is used to demonstrate the advantages of the CELM classifiers compared to their SVM counterparts. The proposed classifiers retain the advantages of their ELM counterparts in that they can perform multiclass classification with lower computational complexity than SVM classifiers. Furthermore, because of their ability to perform classification tasks fast, the proposed formulations are of interest for real-time applications.
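As a rough numerical sketch only: a kernel-ELM-style classifier with a complex-valued kernel can be set up as below. The kernel choice, regularization form, and decision rule are assumptions made for illustration; the paper's CELM formulation (Wirtinger calculus, induced RKHS kernels, coupled hyper-planes) is considerably richer.

```python
import numpy as np

def complex_gaussian(X, Y, gamma=0.5):
    """An illustrative complex-valued kernel: exp(-gamma * sum_k (x_k - conj(y_k))^2)."""
    d = X[:, None, :] - np.conj(Y)[None, :, :]
    return np.exp(-gamma * (d ** 2).sum(-1))

def celm_fit(X, T, C=10.0, gamma=0.5):
    """Kernel-ELM-style training: solve (Omega + I/C) beta = T, where
    Omega_ij = K(x_i, x_j) and T is the one-hot target matrix (cast to complex)."""
    Omega = complex_gaussian(X, X, gamma)
    n = X.shape[0]
    return np.linalg.solve(Omega + np.eye(n) / C, T.astype(complex))

def celm_predict(X_train, beta, X_new, gamma=0.5):
    """Score new points; decide on the real part of the one-vs-all scores (an assumption)."""
    scores = complex_gaussian(X_new, X_train, gamma) @ beta
    return np.argmax(scores.real, axis=1)
```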
Abstract:
The aim of this work is to present the theoretical foundation for the problem of learning from examples, following refs. [14], [15] and [16]. Learning from examples can be viewed as the regression problem of approximating a multivariate function from a sparse set of data. This problem is not well posed, and the classical way of solving it is through regularization theory. Classical regularization theory, as considered here, formulates this regression problem as the variational problem of finding the function $f$ that minimizes the functional $Q[f] = \frac{1}{n}\sum_{i=1}^{n}(y_i - f(x_i))^2 + \lambda \|f\|_K^2$, where $\|f\|_K^2$ is the norm in a special Hilbert space called a Reproducing Kernel Hilbert Space (RKHS) $\mathcal{H}$, defined by the positive definite function $K$, $n$ is the number of data points, and $\lambda$ is the regularization parameter. Under general conditions the solution is given by $f(x) = \sum_{i=1}^{n} c_i K(x, x_i)$. The theory presented in this work is in fact the foundation of a more general theory that justifies regularized functionals for learning from an infinite set of data and can be used to extend considerably the classical regularization framework, effectively combining a functional-analysis perspective with modern advances in probability theory and statistics.
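A minimal numerical sketch of the regularized solution above, assuming a Gaussian kernel $K$ and synthetic data: the representer form $f(x) = \sum_i c_i K(x, x_i)$ reduces the variational problem to the linear system $(K + n\lambda I)c = y$. All names and parameters below are illustrative, not taken from the work.

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=1.0):
    """K(x, y) = exp(-gamma * ||x - y||^2), a common positive definite choice."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit(X, y, lam=1e-2, gamma=1.0):
    """Minimize (1/n) * sum_i (y_i - f(x_i))^2 + lam * ||f||_K^2 over the RKHS;
    the minimizer is f(x) = sum_i c_i K(x, x_i) with c = (K + n * lam * I)^{-1} y."""
    n = X.shape[0]
    K = gaussian_kernel(X, X, gamma)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def predict(X_train, c, X_new, gamma=1.0):
    return gaussian_kernel(X_new, X_train, gamma) @ c

# Usage with synthetic one-dimensional data
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)
c = fit(X, y, lam=1e-2, gamma=0.5)
y_hat = predict(X, c, X, gamma=0.5)
```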
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
We study the existence theory for parabolic variational inequalities in weighted L2 spaces with respect to excessive measures associated with a transition semigroup. We characterize the value function of optimal stopping problems for finite and infinite dimensional diffusions as a generalized solution of such a variational inequality. The weighted L2 setting allows us to cover some singular cases, such as optimal stopping for stochastic equations with degenerate diffusion coefficient. As an application of the theory, we consider the pricing of American-style contingent claims. Among others, we treat the cases of assets with stochastic volatility and with path-dependent payoffs.
Abstract:
We study complete continuity properties of operators onto ℓ2 and prove several results in the Dunford–Pettis theory of JB∗-triples and their projective tensor products, culminating in characterisations of the alternative Dunford–Pettis property for the projective tensor product of E and F, where E and F are JB∗-triples.
Abstract:
Operator spaces of Hilbertian JC∗-triples E are considered in the light of the universal ternary ring of operators (TRO) introduced in recent work. For these operator spaces, it is shown that their triple envelope (in the sense of Hamana) is the TRO they generate, that a complete isometry between any two of them is always the restriction of a TRO isomorphism and that distinct operator space structures on a fixed E are never completely isometric. In the infinite-dimensional cases, operator space structure is shown to be characterized by severe and definite restrictions upon finite-dimensional subspaces. Injective envelopes are explicitly computed.
Abstract:
In this paper, we introduce and study a new system of variational inclusions involving $(H, \eta)$-monotone operators in Hilbert spaces. Using the resolvent operator associated with $(H, \eta)$-monotone operators, we prove the existence and uniqueness of solutions for this new system of variational inclusions. We also construct a new algorithm for approximating the solution of this system and discuss the convergence of the sequence of iterates generated by the algorithm.