Adversarial Feature Selection Against Evasion Attacks

Title: Adversarial Feature Selection Against Evasion Attacks
Publication Type: Journal Article
Year of Publication: 2016
Authors: Zhang, F., Chan, P. P. K., Biggio, B., Yeung, D. S., Roli, F.
Journal: IEEE Transactions on Cybernetics
Volume: 46
Issue: 3
Pagination: 766-777
Abstract

Pattern recognition and machine learning techniques have been increasingly adopted in adversarial settings such as spam, intrusion, and malware detection, although their security against well-crafted attacks that aim to evade detection by manipulating data at test time has not yet been thoroughly assessed. While previous work has mainly focused on devising adversary-aware classification algorithms to counter evasion attempts, only a few authors have considered the impact of using reduced feature sets on classifier security against the same attacks. An interesting, preliminary result is that classifier security against evasion may even be worsened by the application of feature selection. In this paper, we provide a more detailed investigation of this aspect, shedding some light on the security properties of feature selection against evasion attacks. Inspired by previous work on adversary-aware classifiers, we propose a novel adversary-aware feature selection model that can improve classifier security against evasion attacks by incorporating specific assumptions about the adversary's data manipulation strategy. We focus on an efficient, wrapper-based implementation of our approach, and experimentally validate its soundness on different application examples, including spam and malware detection.
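The abstract describes a wrapper-based selection criterion that scores feature subsets by classifier performance under simulated evasion rather than on clean data alone. The sketch below is only an illustration of that general idea, not the authors' algorithm: it assumes binary features, a linear classifier, a hypothetical greedy feature-flipping attacker with a fixed manipulation budget, and a plain forward-selection wrapper.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def simulate_evasion(clf, X_mal, budget=3):
    # Hypothetical attacker: flip up to `budget` binary features of each
    # malicious sample in the direction that most decreases the classifier score.
    w = clf.coef_.ravel()
    X_adv = X_mal.copy()
    for x in X_adv:
        gains = np.where(x == 1, -w, w)      # score change caused by flipping each feature
        order = np.argsort(gains)            # most score-decreasing flips first
        for j in order[:budget]:
            if gains[j] < 0:
                x[j] = 1 - x[j]
    return X_adv


def adversarial_forward_selection(X, y, n_features, budget=3, seed=0):
    # Wrapper criterion: greedily add the feature whose inclusion maximizes
    # accuracy when malicious test samples are replaced by their evaded versions.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_features:
        best_j, best_score = None, -np.inf
        for j in remaining:
            cols = selected + [j]
            clf = LogisticRegression(max_iter=1000).fit(X_tr[:, cols], y_tr)
            X_eval = X_te[:, cols].copy()
            mal = y_te == 1
            X_eval[mal] = simulate_evasion(clf, X_eval[mal], budget)
            score = clf.score(X_eval, y_te)  # accuracy under the simulated attack
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
        remaining.remove(best_j)
    return selected


if __name__ == "__main__":
    rng = np.random.RandomState(0)
    X = (rng.rand(400, 20) > 0.5).astype(float)   # toy binary feature matrix
    y = (X[:, :5].sum(axis=1) > 2).astype(int)    # toy labels (1 = malicious)
    print("Selected features:", adversarial_forward_selection(X, y, n_features=5))

The names simulate_evasion and adversarial_forward_selection, the flipping budget, and the toy data are all assumptions made for this example; the paper itself formalizes the adversary's manipulation strategy within the selection objective.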

Citation Key: zhang15-tcyb
Download: zhang15-tcyb.pdf (2.12 MB)