81 results for Convex Functions
Abstract:
We provide a cooperative control algorithm to stabilize symmetric formations to motion around closed curves suitable for mobile sensor networks. This work extends previous results for stabilization of symmetric circular formations. We study a planar particle model with decentralized steering control subject to limited communication. Because of their unique spectral properties, the Laplacian matrices of circulant graphs play a key role. We illustrate the result for a skewed superellipse, which is a type of curve that includes circles, ellipses, and rounded parallelograms. © 2007 Elsevier B.V. All rights reserved.
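Note (not from the paper): a minimal sketch of the spectral property being exploited. The Laplacian of a circulant graph is itself a circulant matrix, so its eigenvalues are given by the discrete Fourier transform of its first row and its eigenvectors are the Fourier modes, which is what makes decentralized, symmetry-respecting control laws tractable. The graph below is an assumed example, not one from the paper.

```python
import numpy as np

def circulant_laplacian(first_row):
    """Laplacian of an undirected circulant graph.

    `first_row` is the first row of the circulant adjacency matrix
    (entry k = 1 if every node i is linked to node i + k mod n).
    """
    n = len(first_row)
    A = np.array([np.roll(first_row, k) for k in range(n)])  # circulant adjacency
    D = np.diag(A.sum(axis=1))                               # degree matrix
    return D - A

# Example: 6 nodes, each connected to its two nearest neighbours (a ring).
row = np.array([0, 1, 0, 0, 0, 1])
L = circulant_laplacian(row)

# Because L is circulant, its eigenvalues are the DFT of its first row.
eig_direct = np.sort(np.linalg.eigvalsh(L))
eig_fourier = np.sort(np.fft.fft(L[0]).real)
print(np.allclose(eig_direct, eig_fourier))  # True
```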
Abstract:
This paper generalizes recent Lyapunov constructions for a cascade of two nonlinear systems, one of which is stable rather than asymptotically stable. A new cross-term construction in the Lyapunov function allows us to replace earlier growth conditions by a necessary boundedness condition. This method is instrumental in the global stabilization of feedforward systems, and new stabilization results are derived from the generalized construction.
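For context, a hedged sketch in generic cascade notation (the symbols below are standard but not taken from this paper) of what a cross-term Lyapunov construction looks like: a composite Lyapunov function for the cascade adds an integral cross term evaluated along solutions, and it is the boundedness of this integral that replaces the earlier growth conditions.

```latex
% Cascade of two nonlinear systems (assumed generic notation):
%   \dot z = f(z) + \psi(z,\xi), \qquad \dot\xi = a(\xi,u),
% with W(z) a Lyapunov function for \dot z = f(z) (stable, not necessarily
% asymptotically stable) and U(\xi) one for the driving subsystem.
\begin{align}
  V(z,\xi)    &= W(z) + \Psi(z,\xi) + U(\xi), \\
  \Psi(z,\xi) &= \int_0^\infty
      \frac{\partial W}{\partial z}\bigl(\tilde z(s)\bigr)\,
      \psi\bigl(\tilde z(s),\tilde\xi(s)\bigr)\,ds,
\end{align}
% where (\tilde z(s),\tilde\xi(s)) denotes the solution of the unforced
% cascade starting from (z,\xi).
```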
Abstract:
The problem of calculating the minimum lap or maneuver time of a nonlinear vehicle, whose dynamics are linearized at each time step, is formulated as a convex optimization problem. The formulation provides an alternative to the previously used quasi-steady-state analysis or nonlinear optimization. The key steps are the use of model predictive control, the expression of the minimum-time problem as one of maximizing the distance traveled along the track centerline, and the linearization of the track and vehicle trajectories as small displacements from a fixed reference. A consequence of linearizing the vehicle dynamics is that nonoptimal steering control action can be generated, but attention to the constraints and the cost function minimizes this effect. Optimal control actions and vehicle responses for a 90 deg bend are presented and compared with the nonconvex nonlinear programming solution. Copyright © 2013 by ASME.
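For illustration only, a toy convex program in the same spirit (not the paper's vehicle model): over a fixed horizon, maximize the distance traveled along the centerline subject to linear dynamics and convex speed and acceleration limits. The horizon, time step, and limits below are invented for the example; the sketch uses the cvxpy modeling library.

```python
import cvxpy as cp

N, dt = 50, 0.1                       # horizon length and time step (assumed)
v_max, a_max = 30.0, 8.0              # speed and acceleration limits (assumed)

s = cp.Variable(N + 1)                # distance along the track centerline
v = cp.Variable(N + 1)                # longitudinal speed
a = cp.Variable(N)                    # longitudinal acceleration (control input)

constraints = [s[0] == 0, v[0] == 10.0, v >= 0, v <= v_max]
for k in range(N):
    constraints += [
        s[k + 1] == s[k] + dt * v[k],   # linearized kinematics
        v[k + 1] == v[k] + dt * a[k],
        cp.abs(a[k]) <= a_max,
    ]

# Minimum time over a fixed horizon recast as maximum distance traveled.
prob = cp.Problem(cp.Maximize(s[N]), constraints)
prob.solve()
print("distance traveled:", float(s.value[-1]))
```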
Abstract:
A new version of the Multi-objective Alliance Algorithm (MOAA) is described. The MOAA's performance is compared with that of NSGA-II using the epsilon and hypervolume indicators to evaluate the results. The benchmark functions chosen for the comparison are from the ZDT and DTLZ families and the main classical multi-objective (MO) problems. The results show that the new MOAA version is able to outperform NSGA-II on almost all the problems.
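As background, a small sketch (not MOAA or NSGA-II code) of the hypervolume indicator used in such comparisons: for a minimization problem it measures the objective-space area dominated by a Pareto-front approximation and bounded by a reference point, so larger values indicate a better front. The sketch covers only the two-objective case; the fronts and reference point are made up.

```python
import numpy as np

def hypervolume_2d(front, ref):
    """2-D hypervolume (minimization) of `front` w.r.t. reference point `ref`."""
    # Keep only points that strictly dominate the reference point.
    pts = np.array([p for p in front if p[0] < ref[0] and p[1] < ref[1]])
    # Sweep in increasing order of the first objective, accumulating rectangles.
    pts = pts[np.argsort(pts[:, 0])]
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                       # non-dominated during the sweep
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# Toy fronts: the second one dominates more area, hence a larger hypervolume.
ref = (1.1, 1.1)
front_a = [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1)]
front_b = [(0.1, 0.7), (0.4, 0.4), (0.7, 0.1)]
print(hypervolume_2d(front_a, ref), hypervolume_2d(front_b, ref))
```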
Abstract:
We propose a novel information-theoretic approach for Bayesian optimization called Predictive Entropy Search (PES). At each iteration, PES selects the next evaluation point that maximizes the expected information gained with respect to the global maximum. PES codifies this intractable acquisition function in terms of the expected reduction in the differential entropy of the predictive distribution. This reformulation allows PES to obtain approximations that are both more accurate and efficient than other alternatives such as Entropy Search (ES). Furthermore, PES can easily perform a fully Bayesian treatment of the model hyperparameters while ES cannot. We evaluate PES in both synthetic and real-world applications, including optimization problems in machine learning, finance, biotechnology, and robotics. We show that the increased accuracy of PES leads to significant gains in optimization performance.
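A short sketch of the reformulation in standard notation (paraphrased, not quoted from the paper): ES scores a candidate x by the expected reduction in the entropy of the posterior over the global maximizer x_*, while PES uses the symmetry of mutual information to express the same quantity through predictive entropies over y, which are much easier to approximate.

```latex
\begin{align}
  \alpha_{\mathrm{ES}}(x)  &= H\bigl[p(x_* \mid \mathcal{D}_n)\bigr]
      - \mathbb{E}_{p(y \mid \mathcal{D}_n, x)}
        \Bigl[ H\bigl[p(x_* \mid \mathcal{D}_n \cup \{(x,y)\})\bigr] \Bigr], \\
  \alpha_{\mathrm{PES}}(x) &= H\bigl[p(y \mid \mathcal{D}_n, x)\bigr]
      - \mathbb{E}_{p(x_* \mid \mathcal{D}_n)}
        \Bigl[ H\bigl[p(y \mid \mathcal{D}_n, x, x_*)\bigr] \Bigr].
\end{align}
% Both expressions equal the mutual information between y and x_* given the
% data D_n; PES evaluates it in the direction that admits accurate
% approximations of the required entropies.
```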