Open problems in the security of learning


Author(s): Barreno, M.; Bartlett, P.L.; Chi, F.J.; Joseph, A.D.; Nelson, B.; Rubinstein, B.I.P.; Saini, U.; Tygar, J.D.
Date(s)

2008

Abstract

Machine learning has become a valuable tool for detecting and preventing malicious activity. However, as more applications employ machine learning techniques in adversarial decision-making situations, increasingly powerful attacks become possible against machine learning systems. In this paper, we present three broad research directions towards the end of developing truly secure learning. First, we suggest that finding bounds on adversarial influence is important to understand the limits of what an attacker can and cannot do to a learning system. Second, we investigate the value of adversarial capabilities: the success of an attack depends largely on what types of information and influence the attacker has. Finally, we propose directions in technologies for secure learning and suggest lines of investigation into secure techniques for learning in adversarial environments. We intend this paper to foster discussion about the security of machine learning, and we believe that the research directions we propose represent the most important directions to pursue in the quest for secure learning.

Identifier

http://eprints.qut.edu.au/43984/

Publisher

Association for Computing Machinery

Relation

DOI:10.1145/1456377.1456382

Barreno, M., Bartlett, P.L., Chi, F.J., Joseph, A.D., Nelson, B., Rubinstein, B.I.P., Saini, U., & Tygar, J.D. (2008) Open problems in the security of learning. In Proceedings of the 1st ACM Workshop on AISec (AISec '08), Association for Computing Machinery, Alexandria, VA, pp. 19-26.

Rights

Copyright 2008 Association for Computing Machinery

Source

Faculty of Science and Technology; Mathematical Sciences

Keywords

#080300 COMPUTER SOFTWARE #Adversarial Learning #Machine Learning #Computer Security #Secure Learning #Security Metrics

Type

Conference Paper