67 results for Numerical linear algebra, weighted geometric matrix mean, Krylov subspace methods, numerical quadrature
in the Cambridge University Engineering Department Publications Database
Abstract:
The generalization of the geometric mean of positive scalars to positive definite matrices has attracted considerable attention since the seminal work of Ando. This paper generalizes that framework of matrix means by proposing a rank-preserving mean for two, or an arbitrary number of, positive semidefinite matrices of fixed rank. The proposed mean is shown to be geometric in that it satisfies all the properties expected of a rank-preserving geometric mean. The work is motivated by operations on low-rank approximations of positive definite matrices in high-dimensional spaces.
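For reference, the classical geometric mean of two positive definite matrices A and B, which the proposed rank-preserving mean generalizes to the fixed-rank positive semidefinite setting, is the midpoint of the Riemannian geodesic joining them in the natural metric on the positive cone:

\[
A \,\#\, B \;=\; A^{1/2}\bigl(A^{-1/2} B A^{-1/2}\bigr)^{1/2} A^{1/2}.
\]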
Abstract:
A method is given for solving an optimal H2 approximation problem for SISO linear time-invariant stable systems. The method, based on constructive algebra, guarantees that the global optimum is found; it does not involve any gradient-based search, and hence avoids the usual problems of local minima. We examine mainly the case in which the model order is reduced by one and the original system has distinct poles. This case exhibits special structure which allows us to provide a complete solution. The problem is converted into linear algebra by exhibiting a finite-dimensional basis for a certain space, and can then be solved by eigenvalue calculations, following the methods developed by Stetter and Moeller. The use of Buchberger's algorithm is avoided by writing the first-order optimality conditions in a special form, from which a Groebner basis is immediately available. Compared with our previous work, the method presented here has much smaller time and memory requirements, and can therefore be applied to systems of significantly higher McMillan degree. In addition, some hypotheses which were required in the previous work have been removed. Some examples are included.
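The abstract does not restate the cost function; for orientation, the H2 norm of a stable SISO transfer function G and the order-reduction problem are conventionally written as

\[
\|G\|_{\mathcal{H}_2}^2 \;=\; \frac{1}{2\pi}\int_{-\infty}^{\infty} \lvert G(j\omega)\rvert^2 \, d\omega,
\qquad
\hat{G}^{\star} \;=\; \arg\min_{\hat{G}\ \text{stable},\ \deg\hat{G} = n-1} \ \lVert G - \hat{G} \rVert_{\mathcal{H}_2},
\]

where n is the McMillan degree of the original system; the paper's contribution is to solve the first-order optimality conditions of this problem exactly by algebraic means.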
Abstract:
The classes of continuous-time flows on R^{n×p} that induce the same flow on the set of p-dimensional subspaces of R^n are described. The power flow is briefly reviewed in this framework, and a subspace generalization of the Rayleigh quotient flow [Linear Algebra Appl. 368, 2003, pp. 343-357] is proposed and analyzed. This new flow displays a property akin to deflation in finite time.
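For orientation only (this is the single-vector case, not necessarily the flow studied in the cited reference), the Rayleigh quotient of a symmetric matrix A and its gradient ascent flow on the unit sphere are commonly written as

\[
R_A(x) \;=\; \frac{x^{\top} A x}{x^{\top} x},
\qquad
\dot{x} \;=\; \bigl(A - R_A(x)\, I\bigr)\, x, \quad \lVert x \rVert = 1;
\]

the subspace generalization proposed in the paper works instead with an n×p matrix whose columns span a p-dimensional subspace.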
Abstract:
In this paper, we tackle the problem of learning a linear regression model whose parameter is a fixed-rank matrix. We study the Riemannian manifold geometry of the set of fixed-rank matrices and develop efficient line-search algorithms. The proposed algorithms have many applications, scale to high-dimensional problems, enjoy local convergence properties, and provide a geometric foundation for recent contributions on learning fixed-rank matrices. Numerical experiments on benchmarks suggest that the proposed algorithms compete with the state of the art, and that manifold optimization offers a versatile framework for the design of rank-constrained machine learning algorithms.
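As a rough illustration of the problem setting (not the Riemannian line-search algorithms developed in the paper), the sketch below fits a rank-constrained linear regression model by plain gradient descent on a factored parameterization W = G Hᵀ; all names, initializations, and step sizes are illustrative.

```python
import numpy as np

def fixed_rank_regression(X, y, r, lr=0.05, iters=3000, seed=0):
    """Minimal sketch: least-squares trace regression y_i ~ <W, X_i> with W
    constrained to rank r through the factorization W = G @ H.T.  Plain
    Euclidean gradient descent on the factors; the paper instead develops
    Riemannian line-search methods on the manifold of fixed-rank matrices."""
    rng = np.random.default_rng(seed)
    n, m, p = X.shape                              # n samples of m x p design matrices
    G = 0.1 * rng.standard_normal((m, r))
    H = 0.1 * rng.standard_normal((p, r))
    for _ in range(iters):
        W = G @ H.T
        resid = np.einsum('imp,mp->i', X, W) - y       # predictions minus targets
        grad_W = np.einsum('i,imp->mp', resid, X) / n  # gradient w.r.t. the full matrix
        G, H = G - lr * grad_W @ H, H - lr * grad_W.T @ G
    return G @ H.T

# Synthetic usage: recover a rank-2 parameter matrix.
rng = np.random.default_rng(1)
W_true = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))
X = rng.standard_normal((200, 5, 4))
y = np.einsum('imp,mp->i', X, W_true)
W_hat = fixed_rank_regression(X, y, r=2)
print(np.linalg.norm(W_hat - W_true) / np.linalg.norm(W_true))
```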
Abstract:
This paper introduces a new metric and mean on the set of positive semidefinite matrices of fixed rank. The proposed metric is derived from a well-chosen Riemannian quotient geometry that generalizes the reductive geometry of the positive cone and the associated natural metric. The resulting Riemannian space has strong geometrical properties: it is geodesically complete, and the metric is invariant with respect to all transformations that preserve angles (orthogonal transformations, scalings, and pseudoinversion). A meaningful approximation of the associated Riemannian distance is proposed that can be computed efficiently via a simple SVD-based algorithm. The induced mean preserves the rank, possesses the most desirable characteristics of a geometric mean, and is easy to compute.
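For comparison, the sketch below computes the classical natural (affine-invariant) Riemannian distance on the full-rank positive definite cone, which is the metric the proposed quotient geometry generalizes; it is not the paper's fixed-rank distance approximation.

```python
import numpy as np
from scipy.linalg import eigh

def spd_distance(A, B):
    """Affine-invariant Riemannian distance on the positive definite cone:
    d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F = sqrt(sum_i log(lambda_i)^2),
    where lambda_i are the generalized eigenvalues of the pencil (B, A)."""
    lam = eigh(B, A, eigvals_only=True)   # solves B v = lambda A v
    return np.sqrt(np.sum(np.log(lam) ** 2))

# Usage: distance between two random symmetric positive definite matrices.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4)); A = X @ X.T + 4 * np.eye(4)
Y = rng.standard_normal((4, 4)); B = Y @ Y.T + 4 * np.eye(4)
print(spd_distance(A, B))
```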
Abstract:
Linear dimensionality reduction methods are a cornerstone of analyzing high-dimensional data, due to their simple geometric interpretations and typically attractive computational properties. These methods capture many data features of interest, such as covariance, dynamical structure, correlation between data sets, input-output relationships, and margin between data classes. Methods have been developed with a variety of names and motivations in many fields, and perhaps as a result the connections between all these methods have not been highlighted. Here we survey methods from this disparate literature as optimization programs over matrix manifolds. We discuss principal component analysis, factor analysis, linear multidimensional scaling, Fisher's linear discriminant analysis, canonical correlations analysis, maximum autocorrelation factors, slow feature analysis, sufficient dimensionality reduction, undercomplete independent component analysis, linear regression, distance metric learning, and more. This optimization framework gives insight into some rarely discussed shortcomings of well-known methods, such as the suboptimality of certain eigenvector solutions. Modern techniques for optimization over matrix manifolds enable a generic linear dimensionality reduction solver, which accepts as input data and an objective to be optimized, and returns, as output, an optimal low-dimensional projection of the data. This simple optimization framework further allows straightforward generalizations and novel variants of classical methods, which we demonstrate here by creating an orthogonal-projection canonical correlations analysis. More broadly, this survey and generic solver suggest that linear dimensionality reduction can move toward becoming a black-box, objective-agnostic numerical technology.
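As a minimal illustration of the survey's framing (not the generic manifold solver it describes), the sketch below writes PCA as trace maximization over orthonormal d-dimensional projections, where the optimum happens to be attained by the top eigenvectors of the sample covariance.

```python
import numpy as np

def pca_projection(X, d):
    """PCA as an optimization over the Stiefel manifold:
    maximize trace(M.T @ C @ M) subject to M.T @ M = I, with C the sample
    covariance.  The optimum is spanned by the top-d eigenvectors of C,
    so the closed-form eigenvector solution is returned here."""
    Xc = X - X.mean(axis=0)                 # center the data
    C = Xc.T @ Xc / (X.shape[0] - 1)        # sample covariance
    evals, evecs = np.linalg.eigh(C)        # eigenvalues in ascending order
    M = evecs[:, ::-1][:, :d]               # top-d eigenvectors as columns
    return M, Xc @ M                        # basis and projected data

# Usage on synthetic data: project 10-dimensional samples to 2 dimensions.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10)) @ rng.standard_normal((10, 10))
M, Z = pca_projection(X, d=2)
print(M.shape, Z.shape)   # (10, 2) (500, 2)
```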