981 results for Positive Definite Functions
Abstract:
2000 Mathematics Subject Classification: Primary: 42A05. Secondary: 42A82, 11N05.
Abstract:
The generalization of the geometric mean of positive scalars to positive definite matrices has attracted considerable attention since the seminal work of Ando. This paper extends the framework of matrix means by defining a rank-preserving mean for two, or an arbitrary number of, positive semi-definite matrices of fixed rank. The proposed mean is shown to be geometric in that it satisfies all the expected properties of a rank-preserving geometric mean. The work is motivated by operations on low-rank approximations of positive definite matrices in high-dimensional spaces.
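For orientation, a minimal numpy/scipy sketch of the classical two-matrix geometric mean that constructions like this generalize; it covers only the full-rank (positive definite) case, not the paper's fixed-rank mean, and the function name is illustrative.

```python
import numpy as np
from scipy.linalg import sqrtm, inv

def geometric_mean(A, B):
    # Ando geometric mean of two symmetric positive definite matrices:
    # A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}.
    As = sqrtm(A)        # principal square root A^{1/2}
    Asi = inv(As)        # A^{-1/2}
    return As @ sqrtm(Asi @ B @ Asi) @ As

# For commuting arguments the mean reduces to (A B)^{1/2}.
A = np.diag([1.0, 4.0])
B = np.diag([9.0, 16.0])
print(geometric_mean(A, B))   # ~ diag(3, 8)
```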
Abstract:
Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive definite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space -- classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semi-definite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data this gives a powerful transductive algorithm -- using the labelled part of the data one can learn an embedding also for the unlabelled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method to learn the 2-norm soft margin parameter in support vector machines, solving another important open problem. Finally, the novel approach presented in the paper is supported by positive empirical results.
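A toy sketch of the SDP machinery described here, using cvxpy (assumed available); it maximizes the alignment of the training block of the kernel matrix with the label outer product under a trace bound, and is a simplification rather than the authors' exact formulation.

```python
import numpy as np
import cvxpy as cp

# Toy transductive setting: n points, the first n_tr carry labels.
rng = np.random.default_rng(0)
n, n_tr = 8, 5
y = rng.choice([-1.0, 1.0], size=n_tr)

K = cp.Variable((n, n), PSD=True)          # the kernel matrix itself is the variable
# Maximize alignment of the training block of K with the label matrix y y^T,
# with a trace bound fixing the overall scale.
objective = cp.Maximize(cp.sum(cp.multiply(np.outer(y, y), K[:n_tr, :n_tr])))
problem = cp.Problem(objective, [cp.trace(K) <= float(n)])
problem.solve()

K_learned = K.value    # jointly embeds labelled and unlabelled points
```

Because only the training block appears in the objective, this toy problem concentrates its mass there; the point is merely that the kernel matrix can be treated as the variable of a convex program.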
Abstract:
High-angular resolution diffusion imaging (HARDI) can reconstruct fiber pathways in the brain with extraordinary detail, identifying anatomical features and connections not seen with conventional MRI. HARDI overcomes several limitations of standard diffusion tensor imaging, which fails to model diffusion correctly in regions where fibers cross or mix. As HARDI can accurately resolve sharp signal peaks in angular space where fibers cross, we studied how many gradients are required in practice to compute accurate orientation density functions, to better understand the tradeoff between longer scanning times and more angular precision. We computed orientation density functions analytically from tensor distribution functions (TDFs) which model the HARDI signal at each point as a unit-mass probability density on the 6D manifold of symmetric positive definite tensors. In simulated two-fiber systems with varying Rician noise, we assessed how many diffusion-sensitized gradients were sufficient to (1) accurately resolve the diffusion profile, and (2) measure the exponential isotropy (EI), a TDF-derived measure of fiber integrity that exploits the full multidirectional HARDI signal. At lower SNR, the reconstruction accuracy, measured using the Kullback-Leibler divergence, rapidly increased with additional gradients, and EI estimation accuracy plateaued at around 70 gradients.
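A small numpy sketch of the accuracy metric used above, the Kullback-Leibler divergence between discretized angular profiles; the TDF reconstruction itself is not reproduced, and the two-peak profile below is a synthetic stand-in.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # Discrete KL divergence between two densities sampled on the same
    # angular grid, each normalized to sum to one.
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Compare a two-peak reference profile with a noise-perturbed reconstruction.
theta = np.linspace(0.0, np.pi, 180)
odf_ref = np.exp(4 * np.cos(2 * theta)) + np.exp(4 * np.cos(2 * (theta - np.pi / 3)))
odf_rec = odf_ref + 0.05 * np.random.default_rng(1).random(theta.size)
print(kl_divergence(odf_ref, odf_rec))
```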
Abstract:
State-of-the-art image-set matching techniques typically implicitly model each image-set with a Gaussian distribution. Here, we propose to go beyond these representations and model image-sets as probability distribution functions (PDFs) using kernel density estimators. To compare and match image-sets, we exploit Csiszár f-divergences, which bear strong connections to the geodesic distance defined on the space of PDFs, i.e., the statistical manifold. Furthermore, we introduce valid positive definite kernels on the statistical manifold, which let us make use of more powerful classification schemes to match image-sets. Finally, we introduce a supervised dimensionality reduction technique that learns a latent space where f-divergences reflect the class labels of the data. Our experiments on diverse problems, such as video-based face recognition and dynamic texture classification, evidence the benefits of our approach over the state-of-the-art image-set matching methods.
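A 1-D toy sketch, assuming scipy, of a positive definite kernel built from an f-divergence between kernel density estimates (here the squared Hellinger distance inside a Gaussian kernel); the paper's image-set features are multivariate, and the helper name is illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

def hellinger_kernel(X1, X2, grid, sigma=1.0):
    # Gaussian kernel on the squared Hellinger distance (an f-divergence)
    # between KDE estimates of two sample sets (1-D toy).
    dx = grid[1] - grid[0]
    p = gaussian_kde(X1)(grid); p /= p.sum() * dx
    q = gaussian_kde(X2)(grid); q /= q.sum() * dx
    h2 = 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2) * dx   # squared Hellinger
    return np.exp(-h2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(2)
grid = np.linspace(-6.0, 6.0, 400)
print(hellinger_kernel(rng.normal(0, 1, 200), rng.normal(0.5, 1.2, 200), grid))
```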
Abstract:
This paper presents numerical simulation of the evolution of one-dimensional normal shocks, their propagation, reflection, and interaction in air using a single-diaphragm Riemann shock tube, and validates the simulations against experimental results. A mathematical model is derived for one-dimensional compressible flow of a viscous and conducting medium. The dimensionless form of the mathematical model is used to construct space-time finite element processes based on minimization of the space-time residual functional. The space-time local approximation functions for space-time p-version hierarchical finite elements are considered in higher-order scalar product spaces H^{k,p} that permit the desired order of global differentiability of the local approximations in space and time. The resulting algebraic systems from this approach yield unconditionally positive-definite coefficient matrices and hence ensure a unique numerical solution. The evolution is computed for a space-time strip corresponding to a time increment Δt and then time-marched to obtain the evolution up to any desired value of time. Numerical studies are designed using parameters of the recently invented hand-driven shock tube (Reddy tube), high/low side density and pressure values, and high- and low-pressure side shock tube lengths, so that numerically computed results can be compared with actual experimental measurements.
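The positive-definiteness claim can be seen in miniature: minimizing a least-squares residual functional ||A u - f||² yields normal equations whose coefficient matrix AᵀA is symmetric positive definite whenever A has full column rank. A numpy sketch with a random stand-in, not the paper's space-time operator:

```python
import numpy as np

# Minimizing ||A u - f||^2 gives the normal equations (A^T A) u = A^T f;
# A^T A is symmetric positive definite when A has full column rank, which
# is the source of the unique-solvability property claimed for the
# space-time strip systems.
rng = np.random.default_rng(4)
A = rng.normal(size=(40, 10))     # stand-in for a discretized space-time operator
K = A.T @ A
print(np.all(np.linalg.eigvalsh(K) > 0.0))   # True: SPD coefficient matrix
```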
Abstract:
In this paper, we apply the preconditioned conjugate gradient method to the solution of positive-definite Toeplitz systems; in particular, we introduce a new kind of ω-circulant preconditioner, Pn[ω], constructed by an embedding method. We discuss the properties of these new preconditioners and prove that many earlier preconditioners can be regarded as special cases of Pn[ω]. The introduction of the ω-circulant preconditioners Pn[ω] largely overcomes the singularity caused by circulant preconditioners. We discuss ω-circulant series and functions, and compare ordinary circularity with ω-circularity, showing that the latter can be regarded as an extension of the former; correspondingly, many methods and theorems for ordinary circulants carry over. Furthermore, we present an ω-circulant decomposition method: any ω-circulant signal can be divided into a summation of sub-signals, and among these sub-signals there are many subseries whose period equals 1, which are precisely the frequency elements of the original ω-circulant signal. In this way, we establish the relationship between a signal and its frequency elements; that is, the frequency elements in the frequency domain are signals with period 1 in the spatial domain. We also prove that the ω-circulant structure is already present in traditional Fourier theory. By using different criteria for constructing preconditioners, we can obtain many different preconditioned systems. From the preconditioned systems Pn[ω]
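The ω-circulant preconditioners introduced here are not reconstructed below; instead, a sketch of the same preconditioned-CG pattern using the classical Strang circulant preconditioner, applied through FFTs, for an SPD Toeplitz system (scipy assumed).

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import cg, LinearOperator

n = 64
t = 1.0 / (1.0 + np.arange(n)) ** 2   # convex decreasing sequence -> SPD Toeplitz
T = toeplitz(t)
b = np.ones(n)

# Strang circulant preconditioner: copy the central diagonals of T into the
# first column of a circulant C, then apply C^{-1} via FFTs.
half = n // 2
c = np.empty(n)
c[:half + 1] = t[:half + 1]
c[half + 1:] = t[1:half][::-1]
lam = np.fft.fft(c)                   # eigenvalues of C
M = LinearOperator((n, n),
                   matvec=lambda x: np.real(np.fft.ifft(np.fft.fft(x) / lam)))

x, info = cg(T, b, M=M)
print(info, np.linalg.norm(T @ x - b))   # info == 0 on convergence
```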
Abstract:
Feasibility of nonlinear and adaptive control methodologies in multivariable linear time-invariant systems with state-space realization (A, B, C) is apparently limited by the standard strictly positive realness conditions, which imply that the product CB must be positive definite symmetric. This paper expands the applicability of the strictly positive realness conditions used in proofs of stability of adaptive control, or of control under uncertainty, by showing that the not necessarily symmetric CB is only required to have a diagonal Jordan form and positive eigenvalues. The paper also shows that under the new condition any minimum-phase system can be made strictly positive real via constant output feedback. The paper illustrates the usefulness of these extended properties with an adaptive control example.
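A numerical sketch of the relaxed condition on CB (illustrative helper name; the rank-of-eigenvector-matrix test is a numerical heuristic for diagonalizability, not part of the paper):

```python
import numpy as np

def cb_condition_holds(C, B, tol=1e-10):
    # Check the relaxed condition: CB need not be symmetric, but should be
    # diagonalizable (diagonal Jordan form) with positive real eigenvalues.
    CB = C @ B
    w, V = np.linalg.eig(CB)
    diagonalizable = np.linalg.matrix_rank(V, tol=tol) == CB.shape[0]
    positive_real = np.all(w.real > tol) and np.all(np.abs(w.imag) < tol)
    return bool(diagonalizable and positive_real)

# A non-symmetric CB with positive real eigenvalues passes the test.
C = np.array([[1.0, 0.5], [0.0, 1.0]])
B = np.array([[2.0, 0.0], [0.3, 1.0]])
print(cb_condition_holds(C, B))   # True
```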
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
We analyze reproducing kernel Hilbert spaces of positive definite kernels on a topological space X that is either first countable or locally compact. The results include versions of Mercer's theorem and theorems on the embedding of these spaces into spaces of continuous and square integrable functions.
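An empirical sketch of the Mercer-type expansion on a compact set: sample a continuous positive definite kernel, eigendecompose the Gram matrix, and check that a short truncation of the series already reproduces the kernel (a finite-sample analogue, not the theorem itself).

```python
import numpy as np

# Finite-sample analogue of Mercer's theorem: on a grid of a compact interval,
# k(x, y) ~ sum_i lam_i phi_i(x) phi_i(y) with rapidly decaying eigenvalues.
x = np.linspace(0.0, 1.0, 200)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.1 ** 2)   # Gaussian kernel

lam, Q = np.linalg.eigh(K)
idx = np.argsort(lam)[::-1][:20]           # keep the 20 leading eigenpairs
K_20 = (Q[:, idx] * lam[idx]) @ Q[:, idx].T
print(np.max(np.abs(K - K_20)))            # small: fast spectral decay
```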
Abstract:
We study the action of a weighted Fourier–Laplace transform on the functions in the reproducing kernel Hilbert space (RKHS) associated with a positive definite kernel on the sphere. After defining a notion of smoothness implied by the transform, we show that smoothness of the kernel implies the same smoothness for the generating elements (spherical harmonics) in the Mercer expansion of the kernel. We prove a reproducing property for the weighted Fourier–Laplace transform of the functions in the RKHS and embed the RKHS into spaces of smooth functions. Some relevant properties of the embedding are considered, including compactness and boundedness. The approach taken in the paper includes two important notions of differentiability characterized by weighted Fourier–Laplace transforms: fractional derivatives and Laplace–Beltrami derivatives.
Abstract:
This thesis addresses the development and improvement of linear-scaling algorithms for electronic-structure-based molecular dynamics. Molecular dynamics is a method for the computer simulation of the complex interplay between atoms and molecules at finite temperature. A decisive advantage of the method is its high accuracy and predictive power. However, the computational cost, which in general scales cubically with the number of atoms, prevents its application to large systems and long time scales. Starting from a new formalism based on the grand-canonical potential and a factorization of the density matrix, the diagonalization of the corresponding Hamiltonian matrix is avoided. The formalism exploits the fact that the Hamiltonian and density matrices are sparse due to localization. This reduces the computational cost so that it scales linearly with system size. To demonstrate its efficiency, the resulting algorithm is applied to a system of liquid methane subjected to extreme pressure (about 100 GPa) and extreme temperature (2000 - 8000 K). In the simulation, methane dissociates at temperatures above 4000 K, and the formation of sp²-bonded polymeric carbon is observed. The simulations provide no evidence for the formation of diamond and therefore have implications for existing planetary models of Neptune and Uranus. Since avoiding the diagonalization of the Hamiltonian matrix entails the inversion of matrices, the problem of computing an (inverse) p-th root of a given matrix is treated as well. This results in a new formula for symmetric positive definite matrices. It generalizes the Newton-Schulz iteration, Altman's formula for bounded non-singular operators, and Newton's method for computing roots of functions. It is proved that the order of convergence is always at least quadratic, and that adaptive tuning of a parameter q leads to better results in all cases.
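The thesis formula itself is not reproduced here; as a reference point, a numpy sketch of the Newton-Schulz iteration it generalizes, computing the inverse square root (the p = 2 case) of a symmetric positive definite matrix.

```python
import numpy as np

def inverse_sqrt_newton_schulz(A, iters=30):
    # Newton-Schulz iteration X_{k+1} = 0.5 X_k (3 I - A X_k^2), converging
    # to A^{-1/2} for symmetric positive definite A once ||I - A X_0^2|| < 1.
    n = A.shape[0]
    I = np.eye(n)
    X = I / np.linalg.norm(A, 2)   # scaling puts X_0 inside the convergence region
    for _ in range(iters):
        X = 0.5 * X @ (3.0 * I - A @ X @ X)
    return X

A = np.array([[4.0, 1.0], [1.0, 3.0]])
X = inverse_sqrt_newton_schulz(A)
print(np.linalg.norm(X @ A @ X - np.eye(2)))   # ~ 0 at convergence
```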
Abstract:
The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT-images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
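A sketch of the same integrate-interpolate-difference pattern, assuming scipy: the cumulative integral is interpolated with a shape-preserving (monotone) Hermite curve, here scipy's PCHIP as a stand-in for the paper's single-parameter Hermitian curve, so that the total integral is conserved and non-negative data produce no negative bins. The function name is illustrative.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def rebin_conservative(edges_src, values, edges_dst):
    # Integrate the histogrammed data, interpolate the cumulative integral
    # with a monotone Hermite curve, and difference it on the new edges:
    # the integral is conserved and non-negative input stays non-negative.
    cumulative = np.concatenate(([0.0], np.cumsum(values * np.diff(edges_src))))
    F = PchipInterpolator(edges_src, cumulative)
    return np.diff(F(edges_dst)) / np.diff(edges_dst)

src = np.linspace(0.0, 1.0, 11)
vals = np.abs(np.sin(6.0 * src[:-1])) + 0.1    # positive, fluctuating bin values
dst = np.linspace(0.0, 1.0, 29)
out = rebin_conservative(src, vals, dst)
print(np.sum(vals * np.diff(src)), np.sum(out * np.diff(dst)))  # equal integrals
print(out.min() >= 0.0)                                         # no negative bins
```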
Abstract:
Wigner functions play a central role in the phase space formulation of quantum mechanics. Although closely related to classical Liouville densities, Wigner functions are not positive definite and may take negative values on subregions of phase space. We investigate the accumulation of these negative values by studying bounds on the integral of an arbitrary Wigner function over noncompact subregions of the phase plane with hyperbolic boundaries. We show using symmetry techniques that this problem reduces to computing the bounds on the spectrum associated with an exactly solvable eigenvalue problem and that the bounds differ from those on classical Liouville distributions. In particular, we show that the total "quasiprobability" on such a region can be greater than 1 or less than zero.
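The hyperbolic-boundary regions studied in the paper are not reproduced here; what follows is a short numpy illustration of the underlying point, namely that a Wigner function can be negative while integrating to one, using the n = 1 Fock state in units with ħ = 1.

```python
import numpy as np

# Wigner function of the n = 1 Fock state (hbar = 1):
#   W(x, p) = (1/pi) * (2 (x^2 + p^2) - 1) * exp(-(x^2 + p^2)).
x = np.linspace(-5.0, 5.0, 801)
X, P = np.meshgrid(x, x)
R2 = X**2 + P**2
W = (2.0 * R2 - 1.0) * np.exp(-R2) / np.pi

dA = (x[1] - x[0]) ** 2
print(W.sum() * dA)          # ~ 1: total quasiprobability
print(W[W < 0].sum() * dA)   # ~ -0.21: genuinely negative region near the origin
```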
Abstract:
The assessment of the reliability of systems which learn from data is a key issue to investigate thoroughly before the actual application of information processing techniques to real-world problems. Over recent years Gaussian processes and Bayesian neural networks have come to the fore, and in this thesis their generalisation capabilities are analysed from theoretical and empirical perspectives. Upper and lower bounds on the learning curve of Gaussian processes are investigated in order to estimate the amount of data required to guarantee a certain level of generalisation performance. In this thesis we analyse the effects on the bounds and the learning curve induced by the smoothness of stochastic processes described by four different covariance functions. We also explain the early, linearly-decreasing behaviour of the curves and we investigate the asymptotic behaviour of the upper bounds. The effects of the noise and of the characteristic lengthscale of the stochastic process on the tightness of the bounds are also discussed. The analysis is supported by several numerical simulations. The generalisation error of a Gaussian process is affected by the dimension of the input vector and may be decreased by input-variable reduction techniques. In conventional approaches to Gaussian process regression, the positive definite matrix estimating the distance between input points is often taken diagonal. In this thesis we show that a general distance matrix is able to estimate the effective dimensionality of the regression problem as well as to discover the linear transformation from the manifest variables to the hidden-feature space, with a significant reduction of the input dimension. Numerical simulations confirm the significant superiority of the general distance matrix with respect to the diagonal one. In the thesis we also present an empirical investigation of the generalisation errors of neural networks trained by two Bayesian algorithms, the Markov Chain Monte Carlo method and the evidence framework; the neural networks have been trained on the task of labelling segmented outdoor images.
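A numpy sketch of the kernel parameterization discussed here: a squared-exponential covariance with a full positive definite distance matrix M in place of a diagonal one (names illustrative; learning M by marginal-likelihood maximization, the subject of the thesis, is not reproduced).

```python
import numpy as np

def rbf_kernel_general(X1, X2, M):
    # Squared-exponential kernel with a full positive definite distance
    # matrix: k(x, x') = exp(-0.5 (x - x')^T M (x - x')). A diagonal M
    # recovers conventional per-dimension lengthscales.
    D = X1[:, None, :] - X2[None, :, :]
    return np.exp(-0.5 * np.einsum('ijk,kl,ijl->ij', D, M, D))

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 4))
L = 2.0 * rng.normal(size=(4, 2))      # rank 2: an effective dimensionality of 2
M = L @ L.T + 1e-6 * np.eye(4)         # low-rank-plus-jitter distance matrix
K = rbf_kernel_general(X, X, M)
print(np.linalg.eigvalsh(K).min())     # >= 0 up to round-off: a valid covariance
```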