33 results for multiple linear regression analysis
Abstract:
This paper presents an incremental learning solution for Linear Discriminant Analysis (LDA) and its applications to object recognition problems. We apply the sufficient spanning set approximation in three steps, i.e. updates of the total scatter matrix, the between-class scatter matrix, and the projected data matrix, yielding an online solution that closely agrees with the batch solution in accuracy while significantly reducing the computational complexity. The algorithm provides an efficient solution to incremental LDA even when both the number of classes and the set size are large. The incremental LDA method is also shown to be useful for semi-supervised online learning: label propagation is performed by integrating the incremental LDA into an EM framework. The method is demonstrated on the tasks of merging large datasets collected during MPEG standardization for face image retrieval, face authentication using the BANCA dataset, and object categorisation using the Caltech101 dataset. © 2010 Springer Science+Business Media, LLC.
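For orientation, the sketch below tracks the three quantities the abstract says are updated online: the total scatter matrix, the between-class scatter matrix, and the resulting LDA projection. It uses naive full-dimensional Welford-style updates rather than the paper's sufficient spanning set approximation (which is what actually delivers the computational savings), and all class and method names are illustrative, not from the paper.

```python
# Illustrative sketch only: incremental bookkeeping of the scatter
# matrices used by LDA. The paper's method instead maintains low-rank
# sufficient spanning sets of these matrices; here we keep them in full.
import numpy as np
from scipy.linalg import eigh

class IncrementalScatterLDA:
    def __init__(self, dim):
        self.dim = dim
        self.n = 0
        self.mean = np.zeros(dim)
        self.St = np.zeros((dim, dim))      # total scatter (Welford-style update)
        self.classes = {}                   # label -> (count, class mean)

    def partial_fit(self, X, y):
        for x, label in zip(X, y):
            # incremental update of the global mean and total scatter
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.St += np.outer(delta, x - self.mean)
            # running per-class means feed the between-class scatter
            c, m = self.classes.get(label, (0, np.zeros(self.dim)))
            c += 1
            self.classes[label] = (c, m + (x - m) / c)

    def projection(self, n_components):
        # between-class scatter from the running class means
        Sb = np.zeros((self.dim, self.dim))
        for c, m in self.classes.values():
            d = m - self.mean
            Sb += c * np.outer(d, d)
        Sw = self.St - Sb                   # within-class scatter
        # generalized symmetric eigenproblem Sb v = lambda Sw v (regularized)
        evals, evecs = eigh(Sb, Sw + 1e-6 * np.eye(self.dim))
        return evecs[:, np.argsort(evals)[::-1][:n_components]]
```

Calling partial_fit on successive chunks and then projection(k) mimics the online setting: no chunk is revisited, yet the projection matches what batch LDA would compute from the accumulated scatters.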
Abstract:
The paper presents a multiscale procedure for the linear analysis of components made of lattice materials. The method allows the analysis of both pin-jointed and rigid-jointed microtruss materials with arbitrary unit-cell topology. At the macroscopic level, the procedure determines the lattice stiffness, while at the microscopic level the internal forces in the lattice elements are expressed in terms of the macroscopic strain applied to the lattice component. A numerical validation of the method is described. The procedure is fully automated and can easily be used within an optimization framework to find the optimal geometric parameters of a given lattice material. © 2011 Elsevier Ltd. All rights reserved.
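As a rough illustration of the two levels the abstract describes, the sketch below homogenizes a pin-jointed 2D unit cell under the simplest affine assumption (every node follows the macroscopic strain) and recovers member axial forces from a given macroscopic strain. The paper's procedure is more general, e.g. it handles rigid joints and internal degrees of freedom; the cell geometry and all names here are illustrative assumptions.

```python
# Illustrative sketch only: affine homogenization of a pin-jointed 2D
# unit cell, not the paper's full multiscale procedure.
import numpy as np

def homogenized_stiffness(bars, E, A, cell_area):
    """bars: list of (p1, p2) endpoint coordinates of each strut.
    Returns the 3x3 stiffness in Voigt notation, eps = [e_xx, e_yy, 2*e_xy]."""
    C = np.zeros((3, 3))
    for p1, p2 in bars:
        d = np.asarray(p2, float) - np.asarray(p1, float)
        L = np.linalg.norm(d)
        n = d / L
        # Voigt row such that axial strain = v . eps_voigt
        v = np.array([n[0]**2, n[1]**2, n[0] * n[1]])
        C += (E * A * L / cell_area) * np.outer(v, v)
    return C

def member_forces(bars, E, A, eps_voigt):
    """Axial force in each strut for a given macroscopic strain (affine)."""
    forces = []
    for p1, p2 in bars:
        d = np.asarray(p2, float) - np.asarray(p1, float)
        n = d / np.linalg.norm(d)
        v = np.array([n[0]**2, n[1]**2, n[0] * n[1]])
        forces.append(E * A * float(v @ eps_voigt))
    return forces

# Toy unit cell: a unit square with both diagonals, under uniaxial strain.
cell = [((0, 0), (1, 0)), ((0, 1), (1, 1)), ((0, 0), (0, 1)),
        ((1, 0), (1, 1)), ((0, 0), (1, 1)), ((1, 0), (0, 1))]
C = homogenized_stiffness(cell, E=70e9, A=1e-6, cell_area=1.0)
N = member_forces(cell, 70e9, 1e-6, np.array([1e-3, 0.0, 0.0]))
```

This mirrors the abstract's split: the macroscopic output is the stiffness C, while the microscopic output is the internal force in each element expressed directly in terms of the applied macroscopic strain.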
Abstract:
In this paper, we tackle the problem of learning a linear regression model whose parameter is a fixed-rank matrix. We study the Riemannian manifold geometry of the set of fixed-rank matrices and develop efficient line-search algorithms. The proposed algorithms have many applications, scale to high-dimensional problems, enjoy local convergence properties and confer a geometric basis to recent contributions on learning fixed-rank matrices. Numerical experiments on benchmarks suggest that the proposed algorithms compete with the state-of-the-art, and that manifold optimization offers a versatile framework for the design of rank-constrained machine learning algorithms. Copyright 2011 by the author(s)/owner(s).
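To make the fixed-rank setting concrete, here is a minimal sketch of least-squares regression with a rank-r parameter W = G H^T, trained by gradient descent on the factors with an Armijo backtracking line search. Descending on the factors keeps the iterate on the fixed-rank set, but this uses the plain Euclidean gradient rather than the Riemannian geometries the paper develops; the bilinear model y_i ≈ x_i^T W z_i and all names are assumptions for illustration.

```python
# Illustrative sketch only: fixed-rank least squares via factored
# gradient descent with backtracking line search (not the paper's
# Riemannian algorithms).
import numpy as np

def fit_fixed_rank(X, Z, y, r, iters=300, beta=0.5, c=1e-4):
    """Fit y_i ~ x_i^T W z_i with W = G @ H.T of rank r."""
    rng = np.random.default_rng(0)
    n, d1 = X.shape
    d2 = Z.shape[1]
    G = rng.standard_normal((d1, r)) / np.sqrt(d1)
    H = rng.standard_normal((d2, r)) / np.sqrt(d2)

    def loss(G, H):
        pred = np.sum((X @ G) * (Z @ H), axis=1)   # x_i^T G H^T z_i
        return 0.5 * np.mean((pred - y) ** 2)

    for _ in range(iters):
        res = (np.sum((X @ G) * (Z @ H), axis=1) - y) / n
        gG = X.T @ (res[:, None] * (Z @ H))        # Euclidean grad wrt G
        gH = Z.T @ (res[:, None] * (X @ G))        # Euclidean grad wrt H
        # backtracking (Armijo) line search along the negative gradient
        t, f0 = 1.0, loss(G, H)
        sq = np.sum(gG ** 2) + np.sum(gH ** 2)
        while t > 1e-12 and loss(G - t * gG, H - t * gH) > f0 - c * t * sq:
            t *= beta
        G, H = G - t * gG, H - t * gH
    return G @ H.T

# Toy usage: recover a rank-2 matrix from noisy bilinear measurements.
rng = np.random.default_rng(1)
X, Z = rng.standard_normal((500, 10)), rng.standard_normal((500, 8))
W_true = rng.standard_normal((10, 2)) @ rng.standard_normal((2, 8))
y = np.sum((X @ W_true) * Z, axis=1) + 0.01 * rng.standard_normal(500)
W_hat = fit_fixed_rank(X, Z, y, r=2)
```

The factorization never leaves the rank-r set, which is the basic idea the paper equips with a proper quotient-manifold geometry to obtain its line-search convergence guarantees.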