Poisoning attacks against support vector machines
Title | Poisoning attacks against support vector machines |
Publication Type | Conference Paper |
Year of Publication | 2012 |
Authors | Biggio, B, Nelson, B, Laskov, P |
Editor | Langford, J, Pineau, J |
Conference Name | 29th Int'l Conf. on Machine Learning (ICML) |
Pagination | 1807–1814 |
Publisher | Omnipress |
Keywords | adversarial machine learning, poisoning attacks, support vector machines |
Abstract | We investigate a family of poisoning attacks against Support Vector Machines (SVM). Such attacks inject specially crafted training data that increases the SVM's test error. Central to the motivation for these attacks is the fact that most learning algorithms assume that their training data comes from a natural or well-behaved distribution. However, this assumption does not generally hold in security-sensitive settings. As we demonstrate, an intelligent adversary can, to some extent, predict the change of the SVM's decision function due to malicious input and use this ability to construct malicious data.
The proposed attack uses a gradient ascent strategy in which the gradient is computed based on properties of the SVM's optimal solution. This method can be kernelized and enables the attack to be constructed in the input space even for non-linear kernels. We experimentally demonstrate that our gradient ascent procedure reliably identifies good local maxima of the non-convex validation error surface, which significantly increases the classifier's test error. |
Notes |
This paper was presented by Battista Biggio at ICML 2012; the talk is available at http://techtalks.tv/talks/poisoning-attacks-against-support-vector-machines/57350/
The source code for replicating the experiments of this paper can be found here. An illustrative sketch of the attack's gradient-ascent loop is given below. |
Citation Key | biggio12-icml |