1 result for Stationary bandit
in DigitalCommons@University of Nebraska - Lincoln
Filter by publisher
- Academic Archive On-line (Karlstad University; Sweden) (1)
- AMS Tesi di Laurea - Alm@DL - Università di Bologna (1)
- Aquatic Commons (12)
- ArchiMeD - Elektronische Publikationen der Universität Mainz - Germany (1)
- Archimer: Archive de l'Institut français de recherche pour l'exploitation de la mer (1)
- Archivo Digital para la Docencia y la Investigación - Repositorio Institucional de la Universidad del País Vasco (14)
- Aston University Research Archive (13)
- Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (7)
- Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP) (13)
- Biblioteca Digital de Teses e Dissertações Eletrônicas da UERJ (22)
- BORIS: Bern Open Repository and Information System - Bern - Switzerland (12)
- Boston University Digital Common (4)
- Brock University, Canada (2)
- Brunel University (1)
- Bulgarian Digital Mathematics Library at IMI-BAS (2)
- CaltechTHESIS (24)
- Cambridge University Engineering Department Publications Database (55)
- CentAUR: Central Archive University of Reading - UK (21)
- Center for Jewish History Digital Collections (2)
- Chinese Academy of Sciences Institutional Repositories Grid Portal (167)
- Coffee Science - Universidade Federal de Lavras (2)
- CORA - Cork Open Research Archive - University College Cork - Ireland (9)
- Dalarna University College Electronic Archive (3)
- DI-fusion - The institutional repository of Université Libre de Bruxelles (1)
- Digital Commons at Florida International University (1)
- DigitalCommons@The Texas Medical Center (1)
- DigitalCommons@University of Nebraska - Lincoln (1)
- Duke University (8)
- eResearch Archive - Queensland Department of Agriculture; Fisheries and Forestry (1)
- Gallica, Bibliothèque Numérique - Bibliothèque nationale de France (French National Library) (BnF), France (1)
- Greenwich Academic Literature Archive - UK (5)
- Harvard University (1)
- Helda - Digital Repository of University of Helsinki (13)
- Indian Institute of Science - Bangalore - India (198)
- INSTITUTO DE PESQUISAS ENERGÉTICAS E NUCLEARES (IPEN) - Repositório Digital da Produção Técnico Científica - Biblioteca Terezine Arantes Ferraz (1)
- Massachusetts Institute of Technology (5)
- National Center for Biotechnology Information - NCBI (5)
- Plymouth Marine Science Electronic Archive (PlyMSEA) (7)
- Publishing Network for Geoscientific & Environmental Data (8)
- QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast (110)
- Queensland University of Technology - ePrints Archive (137)
- Repositório Científico da Universidade de Évora - Portugal (1)
- Repositório digital da Fundação Getúlio Vargas - FGV (5)
- Repositório Institucional UNESP - Universidade Estadual Paulista "Julio de Mesquita Filho" (13)
- SAPIENTIA - Universidade do Algarve - Portugal (1)
- Universidad de Alicante (1)
- Universidad del Rosario, Colombia (1)
- Universidad Politécnica de Madrid (7)
- Universidad Complutense de Madrid (2)
- Universidade Federal do Pará (1)
- Universitat de Girona, Spain (1)
- Universitätsbibliothek Kassel, Universität Kassel, Germany (1)
- Université de Lausanne, Switzerland (2)
- Université de Montréal, Canada (3)
- University of Connecticut - USA (1)
- University of Michigan (17)
- University of Queensland eSpace - Australia (8)
Abstract:
We explore the problem of budgeted machine learning, in which the learning algorithm has free access to the training examples’ labels but must pay for each attribute value it requests. This learning model is appropriate in many areas, including medical applications. We present new algorithms, based on algorithms for the multi-armed bandit problem, for choosing which attributes of which examples to purchase in the budgeted learning model. All of our approaches outperformed the current state of the art. Furthermore, we present a new method for selecting the example to purchase once an attribute has been chosen, instead of the typical approach of selecting an example uniformly at random. Our new example-selection method improved the performance of all the algorithms we tested, both ours and those in the literature.
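Since the abstract names neither the specific bandit algorithms nor the reward definition, the following is only a minimal epsilon-greedy sketch of the setup it describes: each attribute is an arm, each pull spends one unit of budget on a single attribute value, and the reward is taken (as an assumption) to be the resulting change in model quality. `BudgetedAttributeBandit`, `purchase`, and `evaluate` are hypothetical names, and example selection is left at the uniform-random baseline that the abstract contrasts its new method against.

```python
import random

class BudgetedAttributeBandit:
    """Treats each attribute as a bandit arm; pulling an arm buys one
    value of that attribute for some training example. Hypothetical
    epsilon-greedy sketch, not the paper's actual algorithm."""

    def __init__(self, n_attributes, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_attributes    # purchases made per attribute
        self.values = [0.0] * n_attributes  # running mean reward per attribute

    def select_attribute(self):
        # Explore a random attribute with probability epsilon,
        # otherwise exploit the attribute with the best estimated reward.
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))
        return max(range(len(self.counts)), key=lambda a: self.values[a])

    def update(self, attribute, reward):
        # Incremental running-mean update for the pulled arm.
        self.counts[attribute] += 1
        n = self.counts[attribute]
        self.values[attribute] += (reward - self.values[attribute]) / n

def run_budgeted_learning(budget, n_attributes, n_examples, purchase, evaluate):
    """Spend `budget` purchases. `purchase(attr, ex)` reveals one attribute
    value and `evaluate()` returns current model quality; both are assumed
    callbacks supplied by the surrounding learning system."""
    bandit = BudgetedAttributeBandit(n_attributes)
    score = evaluate()
    for _ in range(budget):
        attr = bandit.select_attribute()
        # Baseline example selection: uniform at random. The abstract's
        # proposed example-selection method would replace this line.
        example = random.randrange(n_examples)
        purchase(attr, example)
        new_score = evaluate()
        bandit.update(attr, new_score - score)  # reward = observed improvement
        score = new_score
    return bandit
```

The key design choice illustrated here is that budget is allocated adaptively: attributes whose purchases have historically improved the model are bought more often, rather than spreading purchases evenly across all attributes.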