6 results for switching regression model

in the Cambridge University Engineering Department Publications Database


Relevance: 90.00%

Publisher:

Abstract:

In this paper, we tackle the problem of learning a linear regression model whose parameter is a fixed-rank matrix. We study the Riemannian manifold geometry of the set of fixed-rank matrices and develop efficient line-search algorithms. The proposed algorithms have many applications, scale to high-dimensional problems, enjoy local convergence properties and confer a geometric basis to recent contributions on learning fixed-rank matrices. Numerical experiments on benchmarks suggest that the proposed algorithms compete with the state-of-the-art, and that manifold optimization offers a versatile framework for the design of rank-constrained machine learning algorithms. Copyright 2011 by the author(s)/owner(s).
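
As an illustration only (not the paper's Riemannian line-search algorithm), the sketch below fits a fixed-rank linear regression model by plain gradient descent on a factorization W = U V^T, which keeps every iterate exactly rank r; the function name and the toy data are assumptions made for this example.

import numpy as np

# Minimal sketch: fit y_i ~= trace(W^T X_i) with W of fixed rank r by gradient
# descent on the factors of W = U @ V.T. The factorization enforces the rank
# constraint that the Riemannian methods handle more carefully via the geometry
# of the factor space.
def fit_fixed_rank_regression(Xs, ys, d1, d2, r, lr=0.01, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((d1, r)) * 0.1
    V = rng.standard_normal((d2, r)) * 0.1
    for _ in range(iters):
        W = U @ V.T
        res = np.array([np.sum(W * X) for X in Xs]) - ys      # model residuals
        G = sum(r_i * X for r_i, X in zip(res, Xs)) / len(ys)  # Euclidean gradient in W
        U, V = U - lr * (G @ V), V - lr * (G.T @ U)            # chain rule onto the factors
    return U @ V.T

# Toy usage: recover a random rank-2 matrix from noisy linear measurements.
d1, d2, r, n = 8, 6, 2, 400
rng = np.random.default_rng(1)
W_true = rng.standard_normal((d1, r)) @ rng.standard_normal((r, d2))
Xs = [rng.standard_normal((d1, d2)) for _ in range(n)]
ys = np.array([np.sum(W_true * X) for X in Xs]) + 0.01 * rng.standard_normal(n)
W_hat = fit_fixed_rank_regression(Xs, ys, d1, d2, r)
print("relative error:", np.linalg.norm(W_hat - W_true) / np.linalg.norm(W_true))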

Relevance: 90.00%

Publisher:

Abstract:

The paper addresses the problem of learning a regression model parameterized by a fixed-rank positive semidefinite matrix. The focus is on the nonlinear nature of the search space and on scalability to high-dimensional problems. The mathematical developments rely on the theory of gradient descent algorithms adapted to the Riemannian geometry that underlies the set of fixed-rank positive semidefinite matrices. In contrast with previous contributions in the literature, no restrictions are imposed on the range space of the learned matrix. The resulting algorithms maintain a linear complexity in the problem size and enjoy important invariance properties. We apply the proposed algorithms to the problem of learning a distance function parameterized by a positive semidefinite matrix. Good performance is observed on classical benchmarks. © 2011 Gilles Meyer, Silvere Bonnabel and Rodolphe Sepulchre.
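
A minimal sketch of the distance-learning application, under the assumption of a squared-error loss on target distances (it does not reproduce the paper's Riemannian updates): the metric W = G G^T is learned through its factor G, so every iterate stays positive semidefinite and has rank r.

import numpy as np

# Learn a low-rank Mahalanobis metric d_W(x, y) = (x - y)^T W (x - y) with
# W = G @ G.T of rank r, by gradient descent on the factor G. Storage is
# O(d * r) instead of O(d^2) for the full matrix.
def learn_low_rank_metric(pairs, targets, d, r, lr=0.05, iters=300, seed=0):
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((d, r)) * 0.1
    for _ in range(iters):
        grad = np.zeros_like(G)
        for (x, y), t in zip(pairs, targets):
            z = x - y
            Gz = G.T @ z                          # r-dimensional projection of the difference
            err = Gz @ Gz - t                     # d_W(x, y) = ||G^T (x - y)||^2
            grad += 4.0 * err * np.outer(z, Gz)   # d/dG of the squared error (d_W - t)^2
        G -= lr * grad / len(pairs)
    return G @ G.T                                # the learned PSD matrix W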

Relevance: 80.00%

Publisher:

Abstract:

The simulation of complex chemical systems often requires a multi-level description, in which a region of special interest is treated using a computationally expensive quantum mechanical (QM) model while its environment is described by a faster, simpler molecular mechanical (MM) model. Furthermore, studying dynamic effects in solvated systems or biomolecules requires a variable definition of the two regions, so that atoms or molecules can be dynamically re-assigned between the QM and MM descriptions during the course of the simulation. Such reassignments pose a problem for traditional QM/MM schemes by exacerbating the errors that stem from switching the model at the boundary. Here we show that stable, long adaptive simulations can be carried out using density functional theory with the BLYP exchange-correlation functional for the QM model and a flexible TIP3P force field for the MM model without requiring adjustments of either. Using a primary benchmark system of pure water, we investigate the convergence of the liquid structure with the size of the QM region, and demonstrate that by using a sufficiently large QM region (with radius 6 Å) it is possible to obtain radial and angular distributions that, in the QM region, match the results of fully quantum mechanical calculations with periodic boundary conditions, and, after a smooth transition, also agree with fully MM calculations in the MM region. The key ingredient is the accurate evaluation of forces in the QM subsystem, which we achieve by including an extended buffer region in the QM calculations. We also show that our buffered-force QM/MM scheme is transferable by simulating the solvated Cl⁻ ion.
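
A schematic sketch of the buffered-force idea described above; qm_forces and mm_forces are hypothetical placeholder routines, not the API of any actual simulation package. Recomputing the region masks from the current positions at every step is what makes the QM/MM assignment adaptive.

import numpy as np

# Atoms inside the QM radius take forces from a QM calculation that also includes
# a surrounding buffer shell, so the QM forces on the core atoms are well converged;
# the buffer atoms themselves, and everything outside, keep their MM forces.
# qm_forces and mm_forces are hypothetical callables supplied by the user.
def buffered_force_mix(positions, center, r_qm, r_buffer, qm_forces, mm_forces):
    dist = np.linalg.norm(positions - center, axis=1)
    core = dist <= r_qm                   # atoms treated quantum mechanically
    cluster = dist <= r_qm + r_buffer     # core plus buffer, passed to the QM code
    f = mm_forces(positions)              # MM forces for every atom (default)
    f_qm = qm_forces(positions[cluster])  # QM forces on the buffered cluster
    f[core] = f_qm[core[cluster]]         # keep QM forces only on the core atoms
    return f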

Relevance: 80.00%

Publisher:

Abstract:

Motivated by the problem of learning a linear regression model whose parameter is a large fixed-rank non-symmetric matrix, we consider the optimization of a smooth cost function defined on the set of fixed-rank matrices. We adopt the geometric framework of optimization on Riemannian quotient manifolds. We study the underlying geometries of several well-known fixed-rank matrix factorizations and then exploit the Riemannian quotient geometry of the search space in the design of a class of gradient descent and trust-region algorithms. The proposed algorithms generalize our previous results on fixed-rank symmetric positive semidefinite matrices, apply to a broad range of applications, scale to high-dimensional problems, and confer a geometric basis to recent contributions on the learning of fixed-rank non-symmetric matrices. We make connections with existing algorithms in the context of low-rank matrix completion and discuss the usefulness of the proposed framework. Numerical experiments suggest that the proposed algorithms compete with state-of-the-art algorithms and that manifold optimization offers an effective and versatile framework for the design of machine learning algorithms that learn a fixed-rank matrix. © 2013 Springer-Verlag Berlin Heidelberg.
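
For the matrix-completion connection mentioned above, here is an assumption-level illustration (not the paper's gradient-descent or trust-region algorithm on the quotient manifold): the factorization X = L R^T keeps every iterate at rank r while only the observed entries enter the residual.

import numpy as np

# Low-rank matrix completion by joint gradient steps on the factors of X = L @ R.T.
# mask is a boolean array marking the observed entries of M_obs.
def complete_low_rank(M_obs, mask, r, lr=0.01, iters=3000, seed=0):
    m, n = M_obs.shape
    rng = np.random.default_rng(seed)
    L = rng.standard_normal((m, r)) * 0.1
    R = rng.standard_normal((n, r)) * 0.1
    for _ in range(iters):
        E = mask * (L @ R.T - M_obs)                  # residual on observed entries only
        L, R = L - lr * (E @ R), R - lr * (E.T @ L)   # gradient steps on both factors
    return L @ R.T

# Toy usage: recover a rank-3 matrix from roughly half of its entries.
rng = np.random.default_rng(2)
M = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 40))
mask = rng.random(M.shape) < 0.5
M_hat = complete_low_rank(mask * M, mask, r=3)
print("relative error:", np.linalg.norm(M_hat - M) / np.linalg.norm(M))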