Adversarial Machine Learning

See also our research blog on the security of machine learning.

What is Adversarial Learning?

Adversarial Learning is a novel research area at the intersection of machine learning and computer security. It aims to gain a deeper understanding of the security properties of current machine learning algorithms under carefully targeted attacks, and to develop suitable countermeasures for the design of more secure learning algorithms.


Security evaluation

A critical issue in adversarial settings is understanding whether, and to what extent, a classifier can resist specifically targeted attacks. To this end, we have proposed a framework that proactively anticipates the attacker by simulating realistic attack scenarios; this evaluation may also suggest how to mitigate the attacks' impact.
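
As a rough illustration of the underlying idea, the sketch below evaluates a linear SVM against attacks of increasing strength, here modeled as the maximum L2 perturbation applied to each test sample. The dataset, attack model, and all parameters are illustrative assumptions, not components of our framework.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# Train a linear SVM on synthetic data standing in for a real detection task.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
clf = LinearSVC(C=1.0, max_iter=10000).fit(X, y)

# For a linear classifier f(x) = w.x + b, the worst-case L2 perturbation of
# size eps moves each sample toward the boundary, along -w for the positive
# class and +w for the negative class.
w = clf.coef_.ravel()
w_unit = w / np.linalg.norm(w)
signs = np.where(y == 1, 1.0, -1.0)

for eps in [0.0, 0.5, 1.0, 2.0, 4.0]:
    X_adv = X - eps * signs[:, None] * w_unit   # shift every sample by eps
    print(f"attack strength eps={eps:.1f} -> accuracy {clf.score(X_adv, y):.3f}")
```

Plotting accuracy against attack strength in this way yields a security evaluation curve that summarizes a classifier's robustness at a glance.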


Evasion attacks

Evasion attacks are the most common kind of attack encountered in adversarial settings during system operation. For instance, spammers and hackers often attempt to evade detection by obfuscating the content of spam emails and malware code. We have recently devised an evasion attack that targets both linear and non-linear classifiers, and have shown that popular learning algorithms such as Support Vector Machines and Neural Networks can be evaded with only a few modifications to the attacker's samples.
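
A minimal sketch of the gradient-based idea, not our original implementation: starting from a malicious sample, descend the trained classifier's continuous decision function until the sample crosses the decision boundary. The numerical gradient, step size, and iteration budget below are illustrative choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=5, random_state=1)
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)   # non-linear target classifier

def numerical_grad(f, x, h=1e-4):
    """Central-difference estimate of the gradient of scalar f at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

# Start from a positive (e.g. malicious) sample and descend the decision
# function until the sample is classified as negative.
score = lambda v: clf.decision_function(v.reshape(1, -1))[0]
x = X[y == 1][0].copy()
for _ in range(200):
    if score(x) < 0:          # crossed the boundary: evasion succeeded
        break
    g = numerical_grad(score, x)
    x -= 0.1 * g / (np.linalg.norm(g) + 1e-12)   # fixed-size descent step

print("final decision value:", round(score(x), 3),
      "-> predicted label:", clf.predict(x.reshape(1, -1))[0])
```

In practice the descent is also constrained so that the modified sample remains a feasible instance (e.g., a still-working malware file), which the toy example above omits.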


Poisoning attacks

Machine learning algorithms are often re-trained on data collected during operation to adapt to changes in the underlying data distribution. In this scenario, an attacker may poison the training data by injecting carefully designed samples, eventually compromising the whole learning process. We have shown that Support Vector Machines can be severely compromised by this kind of attack, as can adaptive biometric systems that automatically update their clients' templates.
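
The sketch below illustrates the threat with a deliberately simple stand-in for our gradient-based attack, random label flipping: contaminating even a small fraction of the training labels can measurably degrade an SVM's test accuracy. The dataset and contamination levels are illustrative choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=8, random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)

rng = np.random.default_rng(0)
for frac in [0.0, 0.05, 0.10, 0.20]:
    y_poisoned = y_tr.copy()
    idx = rng.choice(len(y_tr), size=int(frac * len(y_tr)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]     # contaminate the training labels
    acc = SVC(kernel="linear").fit(X_tr, y_poisoned).score(X_te, y_te)
    print(f"poisoned fraction {frac:.2f} -> test accuracy {acc:.3f}")
```

An attacker who optimizes the injected points, rather than flipping labels at random, can cause a far larger accuracy drop at the same contamination level.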


Adversarial Clustering

Clustering algorithms have been increasingly adopted in security applications to spot dangerous or illicit activities. However, they were not originally designed to withstand deliberate attacks that aim to subvert the clustering process itself. Our experimental findings on clustering of malware samples and handwritten digits show that single- and complete-linkage hierarchical clustering can be significantly compromised by carefully targeted attacks.
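
One such attack can be sketched as follows: injecting a handful of "bridge" samples between two well-separated clusters is enough to make single-linkage clustering chain them into one. The synthetic data, dendrogram cut, and number of bridge points below are illustrative assumptions, not taken from our experiments.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(3)
cluster_a = rng.normal(loc=0.0, scale=0.3, size=(30, 2))   # tight cluster at (0, 0)
cluster_b = rng.normal(loc=5.0, scale=0.3, size=(30, 2))   # tight cluster at (5, 5)
X = np.vstack([cluster_a, cluster_b])

def n_clusters(points, cut=1.0):
    """Clusters found by cutting the single-linkage dendrogram at `cut`."""
    labels = fcluster(linkage(points, method="single"), t=cut, criterion="distance")
    return len(np.unique(labels))

print("clusters before attack:", n_clusters(X))

# A few bridge points along the segment joining the two cluster centers are
# enough for single linkage to merge both clusters into one.
bridge = np.linspace([0.0, 0.0], [5.0, 5.0], num=12)
print("clusters after attack: ", n_clusters(np.vstack([X, bridge])))
```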


Adversarial Feature Selection

Although feature selection algorithms are often used in security-sensitive applications, only a few authors have considered the impact of reduced feature sets on classifier security against evasion and poisoning attacks. Within this research area, we first show that feature selection algorithms can significantly worsen classifier security against well-crafted attacks. We then propose novel adversary-aware feature selection procedures to counter these threats.
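
As a rough illustration of why the choice of features matters, note that for a linear classifier the smallest L2 perturbation that flips a sample x is |w·x + b| / ||w||, so averaging this margin over correctly classified samples gives a simple proxy for the effort an evasion attacker must spend. The sketch below compares this proxy on the full feature set and on a standard top-k selection; the dataset and selector are illustrative choices, not our adversary-aware procedures.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=500, n_features=40, n_informative=10,
                           random_state=4)

def mean_evasion_effort(X_used, y_used):
    """Average minimum L2 perturbation that flips correctly classified samples."""
    clf = LinearSVC(C=1.0, max_iter=10000).fit(X_used, y_used)
    scores = clf.decision_function(X_used)
    correct = (scores > 0) == (y_used == 1)    # keep correctly classified samples
    return np.mean(np.abs(scores[correct])) / np.linalg.norm(clf.coef_)

X_top = SelectKBest(f_classif, k=10).fit_transform(X, y)
print(f"evasion effort, all 40 features: {mean_evasion_effort(X, y):.3f}")
print(f"evasion effort, top 10 features: {mean_evasion_effort(X_top, y):.3f}")
```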