Bandit-Based Algorithms for Budgeted Learning


Author(s): Deng, Kun; Bourke, Chris; Scott, Stephen; Sunderman, Julie; Zheng, Yaling
Date(s)

01/01/2007

Abstract

We explore the problem of budgeted machine learning, in which the learning algorithm has free access to the training examples’ labels but must pay for each attribute value it requests. This learning model is appropriate in many areas, including medical applications. Building on algorithms for the multi-armed bandit problem, we present new algorithms for choosing which attributes of which examples to purchase in the budgeted learning model. All of our approaches outperformed the current state of the art. Furthermore, we present a new method for selecting the example to purchase once an attribute has been chosen, in place of the uniform random selection that is typically used. Our new example selection method improved the performance of all the algorithms we tested, both ours and those from the literature.
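To make the bandit framing concrete: each attribute can be viewed as an arm, and spending one unit of budget "pulls" that arm by purchasing one attribute value, with the reward reflecting how much the purchase helped the learner. The sketch below is not the paper's actual algorithm; it is a generic ε-greedy bandit loop under a hard budget, with a hypothetical `reward_fn` standing in for whatever utility signal the learner provides.

```python
import random

def budgeted_egreedy(n_attrs, budget, reward_fn, epsilon=0.1, seed=0):
    """Illustrative sketch: epsilon-greedy selection of which attribute
    (arm) to purchase next, stopping when the budget is exhausted.
    `reward_fn(arm)` is a hypothetical stand-in for the learner's
    observed benefit from purchasing one value of that attribute."""
    rng = random.Random(seed)
    counts = [0] * n_attrs        # purchases made per attribute
    totals = [0.0] * n_attrs      # cumulative reward per attribute

    def value(a):
        # Optimistic initialization: untried attributes look best,
        # so each arm is purchased at least once early on.
        return totals[a] / counts[a] if counts[a] else float("inf")

    for _ in range(budget):       # one unit of budget per purchase
        if rng.random() < epsilon:
            arm = rng.randrange(n_attrs)                  # explore
        else:
            arm = max(range(n_attrs), key=value)          # exploit
        counts[arm] += 1
        totals[arm] += reward_fn(arm)
    return counts, totals

# Toy usage: attribute 2 is (deterministically) the most useful one,
# so most of the budget should end up spent on it.
counts, totals = budgeted_egreedy(
    n_attrs=3, budget=200, reward_fn=lambda a: 1.0 if a == 2 else 0.0)
```

The hard budget appears only as the loop bound: the policy never plans ahead for remaining budget, which is one of the simplifications a full budgeted-learning algorithm would need to address.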

Format

application/pdf

Identifier

http://digitalcommons.unl.edu/cseconfwork/126

http://digitalcommons.unl.edu/cgi/viewcontent.cgi?article=1123&context=cseconfwork

Publisher

DigitalCommons@University of Nebraska - Lincoln

Source

CSE Conference and Workshop Papers

Keywords #Computer Sciences
Type

text