Sequential Optimal Recovery: A Paradigm for Active Learning


Author(s): Niyogi, Partha
Date(s)

20/10/2004

12/05/1995

Abstract

In most classical frameworks for learning from examples, it is assumed that examples are randomly drawn and presented to the learner. In this paper, we consider the possibility of a more active learner who is allowed to choose his/her own examples. Our investigations are carried out in a function approximation setting. In particular, using arguments from optimal recovery (Micchelli and Rivlin, 1976), we develop an adaptive sampling strategy (equivalent to adaptive approximation) for arbitrary approximation schemes. We provide a general formulation of the problem and show how it can be regarded as sequential optimal recovery. We demonstrate the application of this general formulation to two special cases of functions on the real line: 1) monotonically increasing functions and 2) functions with bounded derivative. An extensive investigation of the sample complexity of approximating these functions is conducted, yielding both theoretical and empirical results on test functions. Our theoretical results (stated in PAC style), along with the simulations, demonstrate the superiority of our active scheme over both passive learning and classical optimal recovery. The analysis of active function approximation is conducted in a worst-case setting, in contrast with other Bayesian paradigms obtained from optimal design (MacKay, 1992).
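For concreteness, below is a minimal Python sketch of the kind of adaptive sampling loop the abstract describes, specialized to the monotone case. It assumes f is monotonically increasing on [a, b]: given samples (x_i, y_i), monotonicity confines f over each interval [x_i, x_{i+1}] to a rectangle of width dx and height dy, so the learner repeatedly queries the midpoint of the interval with the largest dx * dy (its worst-case uncertainty). The function name and the exact query rule are illustrative assumptions, not the memo's verbatim algorithm.

import numpy as np

def active_sample_monotone(f, a=0.0, b=1.0, n_samples=20):
    # Hypothetical interface: f is assumed monotonically increasing on [a, b].
    xs = [a, b]
    ys = [f(a), f(b)]
    for _ in range(n_samples - 2):
        dx = np.diff(xs)                   # interval widths
        dy = np.diff(ys)                   # value gaps; nonnegative by monotonicity
        i = int(np.argmax(dx * dy))        # interval with largest worst-case uncertainty
        x_new = 0.5 * (xs[i] + xs[i + 1])  # bisect the most uncertain interval
        xs.insert(i + 1, x_new)
        ys.insert(i + 1, f(x_new))
    return np.array(xs), np.array(ys)

# Usage: approximate a monotone test function with a piecewise-linear interpolant.
f = lambda x: x ** 3
xs, ys = active_sample_monotone(f, n_samples=16)
grid = np.linspace(0.0, 1.0, 1001)
print("max error:", np.max(np.abs(np.interp(grid, xs, ys) - f(grid))))

Unlike a passive learner sampling on a uniform grid, this loop concentrates queries where the function changes fastest, which is the source of the sample-complexity gains the abstract reports.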

Format

21 p.

620644 bytes

788387 bytes

application/postscript

application/pdf

Identifier

AIM-1514

CBCL-113

http://hdl.handle.net/1721.1/7200

Language(s)

en_US

Relation

AIM-1514

CBCL-113

Keywords #function approximation #optimal recovery #learning theory #adaptive sampling