953 results for "Matrices doublement stochastiques" (doubly stochastic matrices)
Abstract:
It is known that the diagonal-Schur complements of strictly diagonally dominant matrices are strictly diagonally dominant [J.Z. Liu, Y.Q. Huang, Some properties on Schur complements of H-matrices and diagonally dominant matrices, Linear Algebra Appl. 389 (2004) 365-380], and that the same is true for nonsingular H-matrices [J.Z. Liu, J.C. Li, Z.T. Huang, X. Kong, Some properties of Schur complements and diagonal-Schur complements of diagonally dominant matrices, Linear Algebra Appl. 428 (2008) 1009-1030]. In this paper, we study the properties of diagonal-Schur complements of block diagonally dominant matrices and prove that the diagonal-Schur complements of block strictly diagonally dominant matrices are again block strictly diagonally dominant, and that the same holds for generalized block strictly diagonally dominant matrices. (C) 2010 Elsevier Inc. All rights reserved.
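As a minimal numerical illustration of the statements above (not the paper's proof), the following Python sketch checks that both the Schur complement and the diagonal-Schur complement of a small strictly diagonally dominant matrix remain strictly diagonally dominant. The example matrix, the index split, and the definition of the diagonal-Schur complement as A22 minus the diagonal part of the usual correction term are assumptions made here for illustration.

import numpy as np

# Example strictly diagonally dominant matrix (each |diagonal entry|
# exceeds the sum of the absolute off-diagonal entries in its row).
A = np.array([[10.0, 2.0, 1.0, 1.0],
              [ 1.0, 8.0, 2.0, 1.0],
              [ 2.0, 1.0, 9.0, 3.0],
              [ 1.0, 1.0, 1.0, 7.0]])

def strictly_diag_dominant(M):
    d = np.abs(np.diag(M))
    return bool(np.all(d > np.abs(M).sum(axis=1) - d))

k = 2                                   # eliminate the leading 2x2 block
A11, A12 = A[:k, :k], A[:k, k:]
A21, A22 = A[k:, :k], A[k:, k:]
C = A21 @ np.linalg.solve(A11, A12)     # correction term A21 A11^{-1} A12

S = A22 - C                             # Schur complement
D = A22 - np.diag(np.diag(C))           # diagonal-Schur complement (assumed definition)

print(strictly_diag_dominant(A), strictly_diag_dominant(S), strictly_diag_dominant(D))
# expected: True True True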
Abstract:
Let A and B be nonsingular M-matrices. A lower bound on the minimum eigenvalue q(B ∘ A⁻¹) of the Hadamard product of A⁻¹ and B, and a lower bound on the minimum eigenvalue q(A ⋆ B) of the Fan product of A and B, are given. In addition, an upper bound on the spectral radius ρ(A ∘ B) of nonnegative matrices A and B is obtained. These bounds improve several existing results in some cases, and the estimating formulas are easier to calculate since they depend only on the entries of the matrices A and B. (C) 2009 Elsevier Inc. All rights reserved.
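The Python sketch below merely computes the three quantities that this abstract bounds; the bound formulas themselves are in the paper and are not reproduced here. The M-matrices A and B, and the nonnegative matrices P and Q, are arbitrary example choices.

import numpy as np

# Example nonsingular M-matrices: positive diagonal, nonpositive
# off-diagonal entries, all eigenvalues in the right half-plane.
A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [-1.0, -1.0,  4.0]])
B = np.array([[ 3.0, -1.0,  0.0],
              [-1.0,  4.0, -2.0],
              [ 0.0, -1.0,  3.0]])

def q(M):
    """Minimum eigenvalue of an M-matrix: smallest real part of its spectrum."""
    return np.min(np.linalg.eigvals(M).real)

hadamard = B * np.linalg.inv(A)         # Hadamard (entrywise) product B o A^{-1}

fan = -(A * B)                          # Fan product: -a_ij b_ij off the diagonal,
np.fill_diagonal(fan, np.diag(A) * np.diag(B))  # a_ii b_ii on the diagonal

print("q(B o A^{-1}) =", q(hadamard))
print("q(A star B)   =", q(fan))

P, Q = np.abs(A), np.abs(B)             # example nonnegative matrices
print("rho(P o Q)    =", np.max(np.abs(np.linalg.eigvals(P * Q))))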
Abstract:
The generalization of the geometric mean of positive scalars to positive definite matrices has attracted considerable attention since the seminal work of Ando. This paper generalizes that framework of matrix means by defining a rank-preserving mean for two, or an arbitrary number of, positive semidefinite matrices of fixed rank. The proposed mean is shown to be geometric in that it satisfies all the expected properties of a rank-preserving geometric mean. The work is motivated by operations on low-rank approximations of positive definite matrices in high-dimensional spaces. © 2012 Elsevier Inc. All rights reserved.
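For context, the Python sketch below implements the classical Ando geometric mean of two positive definite matrices, A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}, which is the full-rank notion the paper extends; the example matrices and the two property checks are illustrative, not the paper's rank-preserving construction.

import numpy as np

def spd_sqrt(M):
    """Symmetric square root of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.T

def geometric_mean(A, B):
    As = spd_sqrt(A)
    Ais = np.linalg.inv(As)
    return As @ spd_sqrt(Ais @ B @ Ais) @ As

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))
Y = rng.standard_normal((4, 4))
A = X @ X.T + np.eye(4)                 # arbitrary SPD examples
B = Y @ Y.T + np.eye(4)

G = geometric_mean(A, B)
# Two expected properties of a geometric mean: symmetry in its
# arguments, and the Riccati equation G A^{-1} G = B.
print(np.allclose(G, geometric_mean(B, A)))
print(np.allclose(G @ np.linalg.inv(A) @ G, B))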
Abstract:
The paper addresses the problem of learning a regression model parameterized by a fixed-rank positive semidefinite matrix. The focus is on the nonlinear nature of the search space and on scalability to high-dimensional problems. The mathematical developments rely on the theory of gradient descent algorithms adapted to the Riemannian geometry that underlies the set of fixed-rank positive semidefinite matrices. In contrast with previous contributions in the literature, no restrictions are imposed on the range space of the learned matrix. The resulting algorithms maintain a linear complexity in the problem size and enjoy important invariance properties. We apply the proposed algorithms to the problem of learning a distance function parameterized by a positive semidefinite matrix. Good performance is observed on classical benchmarks. © 2011 Gilles Meyer, Silvere Bonnabel and Rodolphe Sepulchre.
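As a simplified sketch of the underlying idea (not the authors' exact Riemannian algorithm), the Python toy below parameterizes a rank-r positive semidefinite matrix as X = G Gᵀ and runs plain gradient descent on the factor G, which preserves positive semidefiniteness and rank at most r by construction and keeps per-step cost linear in the matrix dimension. The regression problem, data, and step size are hypothetical.

import numpy as np

# Hypothetical task: regress quadratic-form targets y_i = x_i^T X_true x_i
# with a rank-r PSD model X = G G^T.
rng = np.random.default_rng(1)
n, r, m = 10, 2, 200
G_true = rng.standard_normal((n, r))
X_true = G_true @ G_true.T
xs = rng.standard_normal((m, n))
ys = np.einsum('mi,ij,mj->m', xs, X_true, xs)

def loss_and_grad(G):
    preds = np.einsum('mi,ij,mj->m', xs, G @ G.T, xs)
    res = preds - ys
    grad_X = (xs * res[:, None]).T @ xs / m   # Euclidean gradient in X
    return 0.5 * np.mean(res**2), 2.0 * grad_X @ G  # chain rule through X = G G^T

G = 0.1 * rng.standard_normal((n, r))   # rank-r initialization
step = 1e-4
loss0, _ = loss_and_grad(G)
for _ in range(3000):
    _, grad = loss_and_grad(G)
    G -= step * grad
loss1, _ = loss_and_grad(G)
print(f"loss: {loss0:.3f} -> {loss1:.3f}")  # should drop substantially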