16 results for Constrained nonlinear optimization
in the Cambridge University Engineering Department Publications Database
Abstract:
The problem of calculating the minimum lap or maneuver time of a nonlinear vehicle, whose dynamics are linearized at each time step, is formulated as a convex optimization problem. The formulation provides an alternative to the quasi-steady-state analysis and nonlinear optimization used previously. The key steps are: the use of model predictive control; expressing the minimum-time problem as one of maximizing the distance traveled along the track centerline; and linearizing the track and vehicle trajectories by expressing them as small displacements from a fixed reference. A consequence of linearizing the vehicle dynamics is that nonoptimal steering control action can be generated, but careful attention to the constraints and the cost function minimizes the effect. Optimal control actions and vehicle responses for a 90 deg bend are presented and compared with the nonconvex nonlinear programming solution. Copyright © 2013 by ASME.
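The abstract's central device, trading the minimum-time objective for maximum progress along the centerline under linearized dynamics, fits naturally into an off-the-shelf convex solver. Below is a minimal sketch of one such receding-horizon step using cvxpy; the point-mass model, horizon length, limits, and all numerical values are illustrative assumptions, not the paper's vehicle model.

```python
import cvxpy as cp
import numpy as np

N, dt = 20, 0.05              # horizon length and time step (assumed)
a_max, n_max = 10.0, 2.0      # friction limit (m/s^2), lane half-width (m)

# State x = [s, v, n, w]: centerline progress, speed, lateral offset,
# lateral velocity.  A point-mass stand-in for the paper's vehicle model.
A = np.array([[1, dt, 0, 0],
              [0, 1,  0, 0],
              [0, 0,  1, dt],
              [0, 0,  0, 1]])
B = np.array([[0, 0],
              [dt, 0],
              [0, 0],
              [0, dt]])       # inputs u = [longitudinal, lateral] accel

x = cp.Variable((4, N + 1))
u = cp.Variable((2, N))

constraints = [x[:, 0] == np.array([0.0, 15.0, 0.0, 0.0])]  # assumed start
for k in range(N):
    constraints += [
        x[:, k + 1] == A @ x[:, k] + B @ u[:, k],  # linear(ized) dynamics
        cp.norm(u[:, k]) <= a_max,                 # convex friction circle
        cp.abs(x[2, k + 1]) <= n_max,              # stay within the track
    ]

# Minimum time is recast as maximum progress along the centerline.
problem = cp.Problem(cp.Maximize(x[0, N]), constraints)
problem.solve()
print(f"progress over the horizon: {x.value[0, N]:.2f} m")
```

In a full minimum-time solver this problem would be re-linearized about the latest trajectory and re-solved at each step; the sketch shows only a single convex subproblem.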
Abstract:
Optimization on manifolds is a rapidly developing branch of nonlinear optimization. Its focus is on problems where the smooth geometry of the search space can be leveraged to design efficient numerical algorithms. In particular, optimization on manifolds is well suited to dealing with rank and orthogonality constraints. Such structured constraints appear pervasively in machine learning applications, including low-rank matrix completion, sensor network localization, camera network registration, independent component analysis, metric learning, and dimensionality reduction. The Manopt toolbox, available at www.manopt.org, is a user-friendly, documented piece of software dedicated to simplifying experimentation with state-of-the-art Riemannian optimization algorithms. By handling most of the differential geometry internally, the package aims in particular to lower the barrier to entry. © 2014 Nicolas Boumal.
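To make the manifold idea concrete, the sketch below implements Riemannian gradient ascent on the unit sphere for the dominant eigenvector of a symmetric matrix, the kind of orthogonality-constrained problem the toolbox targets. In Manopt itself one would declare a manifold and a cost function and call a solver; everything here (test matrix, step size, iteration count) is a self-contained illustrative assumption, not the toolbox's API.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
A = A + A.T                        # symmetric test matrix (assumed data)

x = rng.standard_normal(50)
x /= np.linalg.norm(x)             # initial point on the sphere S^49

step = 0.01                        # fixed step size (assumed)
for _ in range(500):
    egrad = 2 * A @ x                      # Euclidean gradient of x^T A x
    rgrad = egrad - (x @ egrad) * x        # project onto the tangent space
    x = x + step * rgrad                   # ascent step in the tangent space
    x /= np.linalg.norm(x)                 # retraction: back onto the sphere

print("Rayleigh quotient :", x @ A @ x)
print("largest eigenvalue:", np.linalg.eigvalsh(A)[-1])
```

The projection and renormalization are exactly the tangent-space and retraction bookkeeping the abstract says the package handles internally, which is what lowers the barrier to entry.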
Abstract:
The paper addresses the problem of low-rank trace norm minimization. We propose an algorithm that alternates between fixed-rank optimization and rank-one updates. The fixed-rank optimization is characterized by an efficient factorization that makes the trace norm differentiable in the search space and the computation of the duality gap numerically tractable. The search space is nonlinear but is equipped with a Riemannian structure that leads to efficient computations. We present a second-order trust-region algorithm with a guaranteed quadratic rate of convergence. Overall, the proposed optimization scheme converges superlinearly to the global solution while maintaining complexity that is linear in the number of rows and columns of the matrix. To compute a set of solutions efficiently for a grid of regularization parameters, we propose a predictor-corrector approach that outperforms the naive warm-restart approach on the fixed-rank quotient manifold. The performance of the proposed algorithm is illustrated on problems of low-rank matrix completion and multivariate linear regression. © 2013 Society for Industrial and Applied Mathematics.
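The alternation the abstract describes can be caricatured in a few lines: descend at a fixed rank on a factorization X = UVᵀ, then append the dominant singular pair of the residual as a rank-one update and continue at the larger rank. The sketch below does this for matrix completion, with plain Euclidean gradient steps standing in for the paper's Riemannian trust-region method; the problem sizes, step size, rank schedule, and iteration counts are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, true_rank = 60, 40, 3
M = rng.standard_normal((m, true_rank)) @ rng.standard_normal((true_rank, n))
mask = rng.random((m, n)) < 0.5               # observed entries (assumed 50%)
lr = 0.2 / np.linalg.norm(mask * M, 2)        # step size tied to data scale

U = 0.1 * rng.standard_normal((m, 1))         # start from rank one
V = 0.1 * rng.standard_normal((n, 1))

for rank in range(1, true_rank + 1):
    for _ in range(500):                      # fixed-rank descent phase
        R = mask * (U @ V.T - M)              # residual on observed entries
        U, V = U - lr * (R @ V), V - lr * (R.T @ U)
    if rank < true_rank:                      # rank-one update phase
        R = mask * (U @ V.T - M)
        u1, s, v1t = np.linalg.svd(R)         # dominant singular pair of -grad
        U = np.hstack([U, -lr * s[0] * u1[:, :1]])
        V = np.hstack([V, lr * s[0] * v1t[:1, :].T])

err = np.linalg.norm(mask * (U @ V.T - M)) / np.linalg.norm(mask * M)
print(f"relative error on observed entries: {err:.4f}")
```

Both phases cost O((m + n)·rank) per step apart from the residual SVD, which mirrors the complexity claim in the abstract; the paper's duality-gap test and predictor-corrector path over the regularization grid are omitted here.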