The Rademacher Complexity of Co-Regularized Kernel Classes


Author(s): Rosenberg, David; Bartlett, Peter L.
Contributor(s)

Meila, Marina

Shen, Xiaotong

Date(s)

2007

Abstract

In the multi-view approach to semi-supervised learning, we choose one predictor from each of multiple hypothesis classes, and we co-regularize our choices by penalizing disagreement among the predictors on the unlabeled data. We examine the co-regularization method used in the co-regularized least squares (CoRLS) algorithm, in which the views are reproducing kernel Hilbert spaces (RKHS's) and the disagreement penalty is the average squared difference in predictions. The final predictor is the pointwise average of the predictors from each view. We call the set of predictors that can result from this procedure the co-regularized hypothesis class. Our main result is a tight bound on the Rademacher complexity of the co-regularized hypothesis class in terms of the kernel matrices of each RKHS. We find that co-regularization reduces the Rademacher complexity by an amount that depends on the distance between the two views, as measured by a data-dependent metric. We then use standard techniques to bound the gap between training error and test error for the CoRLS algorithm. Experimentally, we find that the reduction in complexity introduced by co-regularization correlates with the improvement that co-regularization gives in the CoRLS algorithm.

Identifier

http://eprints.qut.edu.au/45644/

Relation

http://www.stat.umn.edu/~aistat/proceedings/start.htm

Rosenberg, David & Bartlett, Peter L. (2007) The Rademacher complexity of co-regularized kernel classes. In Meila, Marina & Shen, Xiaotong (Eds.) 11th International Conference on Artificial Intelligence and Statistics (AISTATS 2007), 21-24 March 2007, Caribe Hilton Hotel, San Juan, Puerto Rico.

Rights

Copyright 2007 [please consult the author]

Source

Faculty of Science and Technology; Mathematical Sciences

Keywords #080100 ARTIFICIAL INTELLIGENCE AND IMAGE PROCESSING #(CoRLS) algorithm
Type

Conference Paper