On Learning and Recognition of Secure Patterns (Invited keynote at AISec '14)

Title: On Learning and Recognition of Secure Patterns (Invited keynote at AISec '14)
Publication Type: Conference Paper
Year of Publication: 2014
Authors: Biggio, B.
Conference Name: AISec '14: Proceedings of the 2014 ACM Workshop on Artificial Intelligence and Security, co-located with CCS '14
Pagination: 1-2
Publisher: ACM
Conference Location: Scottsdale, Arizona, USA
Abstract

Learning and recognition of secure patterns is a well-known problem in nature: mimicry and camouflage are widespread techniques in the arms race between predators and prey, so the information acquired by our senses is not necessarily secure or reliable. In machine learning and pattern recognition systems, these issues have only recently begun to be investigated, with the goal of learning to discriminate between secure and hostile patterns. They arise especially in adversarial settings such as biometric recognition, malware detection, and spam filtering, in which data can be adversely manipulated by humans to undermine the outcome of an automatic analysis. As current pattern recognition methods are not natively designed to deal with the intrinsic, adversarial nature of these problems, they exhibit specific vulnerabilities that an adversary may exploit either to mislead learning or to evade detection. Identifying these vulnerabilities and analyzing the impact of the corresponding attacks on pattern classifiers is one of the main open issues in the novel research field of adversarial machine learning.

In the first part of this talk, I introduce a general framework that encompasses and unifies previous work in the field, allowing one to systematically evaluate classifier security against different potential attacks. As an example application of this framework, in the second part of the talk, I discuss evasion attacks, where malicious samples are manipulated at test time to avoid detection. I then show how carefully-designed poisoning attacks can mislead the learning of support vector machines by manipulating a small fraction of their training data, and how to poison adaptive biometric verification systems to compromise the biometric templates (face images) of the enrolled clients. Finally, I briefly discuss our ongoing work on attacks against clustering algorithms, and sketch some possible future research directions.
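To make the test-time evasion attacks mentioned above concrete, the sketch below perturbs a malicious sample along the negative gradient of a linear classifier's decision function until the detector scores it as benign. It is a minimal, hypothetical example assuming a scikit-learn LinearSVC on toy 2-D data; the helper evade_linear and its step/n_iter parameters are illustrative assumptions, not the exact method presented in the talk.

```python
# Hypothetical sketch of a gradient-based evasion attack on a linear classifier
# (assumed setup: scikit-learn LinearSVC on toy 2-D data; the helper name
# evade_linear and its parameters are illustrative, not taken from the paper).
import numpy as np
from sklearn.svm import LinearSVC

# Toy detector: class 1 = "malicious", class 0 = "benign".
rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
clf = LinearSVC(C=1.0, max_iter=10000).fit(X, y)

def evade_linear(clf, x, step=0.2, n_iter=50):
    """Move a malicious sample x along the negative gradient of the linear
    decision function until the detector scores it as benign (score < 0)."""
    w = clf.coef_.ravel()
    x_adv = x.copy()
    for _ in range(n_iter):
        if clf.decision_function(x_adv.reshape(1, -1))[0] < 0:
            break  # detector evaded
        x_adv = x_adv - step * w / np.linalg.norm(w)
    return x_adv

x_malicious = X[y == 1][0]
x_adv = evade_linear(clf, x_malicious)
print("original score:", clf.decision_function(x_malicious.reshape(1, -1))[0])
print("evaded score:  ", clf.decision_function(x_adv.reshape(1, -1))[0])
```

A realistic evasion attack would also bound how much the sample may be modified; this sketch omits that constraint to keep the gradient-descent idea visible.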

Citation Key: biggio14-aisec-keynote
Download: biggio14-aisec-keynote.pdf (110.67 KB)