Convex Optimization Algorithms and Statistical Bounds for Learning Structured Models


Author(s): Jalali, Amin
Contributor(s)

Fazel Sarjoui, Maryam

Date(s)

22/09/2016

01/08/2016

Abstract

Thesis (Ph.D.)--University of Washington, 2016-08

The design and analysis of tractable methods for estimating structured models from massive high-dimensional datasets have been a topic of research in statistics, machine learning, and engineering for many years. Regularization, the act of simultaneously optimizing a data fidelity term and a structure-promoting term, is a widely used approach in machine learning and signal processing. Appropriate regularizers, paired with efficient optimization techniques, help exploit prior structural information about the underlying model. This dissertation focuses on exploring new structures, devising efficient convex relaxations for exploiting them, and studying the statistical performance of the resulting estimators. We address three problems under this framework, elaborated below.

In many applications, we aim to reconstruct models that are known to have more than one structure at the same time. Given the rich literature on exploiting individual structures such as sparsity and low rank, it is natural to ask similar questions about simultaneously structured models with several low-dimensional structures. Using the known convex penalties for the individual structures, we show that multi-objective optimization with these penalties can do no better, order-wise, than exploiting only one of the structures present. This suggests that fully exploiting multiple structures requires an entirely new convex relaxation, not one that combines the convex relaxations for each structure. While this result applies to general structures, it yields interesting consequences for sparse and low-rank matrices, which arise in applications such as sparse phase retrieval and quadratic compressed sensing.

We then turn to the design and efficient optimization of convex penalties for structured learning. We introduce a general class of semidefinite-representable penalties, called variational Gram functions (VGFs), and provide a set of optimization tools for solving regularized estimation problems involving VGFs. Exploiting the variational structure in VGFs, as well as the variational structure in many common loss functions, enables us to devise efficient optimization techniques and to provide guarantees on the solutions of many regularized loss minimization problems.

Finally, we explore the statistical and computational trade-offs in the community detection problem. We study recovery regimes and algorithms for community detection in sparse graphs generated under a heterogeneous stochastic block model in its most general form. In doing so, we expand the applicability of semidefinite programs for exact community detection to new and important network configurations, which gives a better understanding of the ability of semidefinite programs to reach statistical identifiability limits.
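To make the combined-penalty setup from the first part concrete, here is a minimal sketch, assuming cvxpy, of the multi-objective relaxation whose order-wise limits that part analyzes: a weighted sum of the l1 and nuclear norms for a matrix that is simultaneously sparse and low rank. The Gaussian measurement model, dimensions, and weights lam1/lam2 are illustrative choices, not those of the dissertation.

```python
# Combined-penalty relaxation for a simultaneously sparse and low-rank
# matrix: weight the l1 norm (sparsity) and nuclear norm (low rank) on
# top of a least-squares data fidelity term.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m = 20, 150                        # matrix size and number of measurements

# Ground truth: rank-1 with a sparse factor, hence sparse and low rank.
u = np.zeros(n)
u[:3] = rng.standard_normal(3)
X0 = np.outer(u, u)

A = rng.standard_normal((m, n * n))   # random linear measurement operator
y = A @ X0.ravel(order="F")           # column-major, matching cp.vec below

X = cp.Variable((n, n))
lam1, lam2 = 0.1, 0.1                 # illustrative weights on the two penalties
obj = cp.Minimize(cp.sum_squares(A @ cp.vec(X) - y)
                  + lam1 * cp.norm(X, 1)        # entrywise l1: promotes sparsity
                  + lam2 * cp.norm(X, "nuc"))   # nuclear norm: promotes low rank
cp.Problem(obj).solve()
print("relative recovery error:", np.linalg.norm(X.value - X0) / np.linalg.norm(X0))
```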
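For the second part, a small sketch of one member of the VGF class may help. A VGF is defined variationally as Omega(X) = max over M in S of <M, X^T X> for a compact convex set S of symmetric matrices; the set used here, S = {M : M_ii = 0, |M_ij| <= 1}, is an illustrative instance for which the maximum has the closed form sum over i != j of |x_i^T x_j|, a penalty promoting near-orthogonal columns. The thesis treats the general semidefinite-representable class; this is only one example of it.

```python
# One instance of a variational Gram function (VGF), evaluated via its
# closed form. The variational maximum over S = {M_ii = 0, |M_ij| <= 1}
# is attained at M_ij = sign((X^T X)_ij), giving the sum of absolute
# off-diagonal entries of the Gram matrix.
import numpy as np

def vgf_orthogonality(X):
    """VGF for S = {M : M_ii = 0, |M_ij| <= 1}: sum_{i != j} |x_i^T x_j|."""
    G = X.T @ X                        # Gram matrix of the columns of X
    off = G - np.diag(np.diag(G))      # drop the diagonal (M_ii = 0)
    return np.abs(off).sum()

rng = np.random.default_rng(0)
print(vgf_orthogonality(rng.standard_normal((5, 3))))  # large for correlated columns
print(vgf_orthogonality(np.eye(4)[:, :3]))             # 0.0 for orthogonal columns
```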
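For the third part, here is a minimal sketch, again assuming cvxpy, of a standard semidefinite relaxation for two-community detection under a planted-partition (binary, balanced) stochastic block model, in the spirit of the SDPs analyzed there. The thesis treats a far more general heterogeneous SBM; the edge probabilities p, q and the balanced-community constraint below are illustrative simplifications.

```python
# SDP relaxation for two balanced communities:
#   maximize <A, X>  s.t.  X PSD, X_ii = 1, sum(X) = 0,
# then round by the sign of the leading eigenvector of the solution.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, p, q = 40, 0.7, 0.1                 # n nodes; within/between edge probabilities
labels = np.repeat([1, -1], n // 2)    # two equal-size planted communities

# Sample a symmetric adjacency matrix from the block model.
probs = np.where(np.outer(labels, labels) > 0, p, q)
A = np.triu((rng.random((n, n)) < probs).astype(float), k=1)
A = A + A.T

X = cp.Variable((n, n), PSD=True)
constraints = [cp.diag(X) == 1, cp.sum(X) == 0]
cp.Problem(cp.Maximize(cp.trace(A @ X)), constraints).solve()

# Spectral rounding of the SDP solution.
_, V = np.linalg.eigh(X.value)
est = np.sign(V[:, -1])
acc = max(np.mean(est == labels), np.mean(est == -labels))
print("fraction of nodes labeled correctly:", acc)
```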

Format

application/pdf

Identifier

Jalali_washington_0250E_16536.pdf

http://hdl.handle.net/1773/37103

Language(s)

en_US

Keywords #data science #high-dimensional statistics #machine learning #network science #optimization #Operations research #Theoretical mathematics #Statistics #electrical engineering
Type

Thesis