Code

On this page you can find some of the code projects I have been working on, as well as code and datasets that can be used to replicate the experiments reported in some of my publications.

Adversarialib. I am currently involved in the development of Adversarialib, an open-source Python library that implements a set of targeted attacks against machine learning algorithms. These attacks can be used to assess the security of machine learning algorithms deployed in adversarial settings.
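Adversarialib's own API is not shown here; purely as a generic illustration of the kind of attack such a library implements, the sketch below trains a plain logistic-regression classifier on toy 2-D data (all data and parameters are made up for the example) and then crafts an evasive sample by walking a correctly classified point against the gradient of the decision function until its predicted label flips.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D data: two Gaussian blobs (a made-up stand-in for real features)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.hstack([np.zeros(50), np.ones(50)])

# Train a plain logistic-regression classifier by gradient descent
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

# Evasion: move a positive sample against the gradient of the decision
# function w.x + b until it is no longer classified as positive
x = X[y == 1][0].copy()
step = w / np.linalg.norm(w)   # unit gradient direction of the score
while x @ w + b > 0:           # still classified as positive
    x -= 0.1 * step

print("original score:", X[y == 1][0] @ w + b)
print("evasive score: ", x @ w + b)
```

Gradient-based evasion of this kind assumes the attacker can query or differentiate the model's decision function; attacks with weaker knowledge of the target system are also studied in this setting.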

Poisoning attacks against SVMs (ICML 2012). Here you can find the MATLAB code for replicating the experiments reported in my ICML 2012 publication. I would like to thank Paul Temple for helping me publicly release this source code. Code updated on June 21, 2017: in version 1.1 we corrected a small bug pointed out by Nathalie Baracaldo and Jaehoon Safavi (IBM Almaden Research Center, San Jose, CA, USA) - thanks! - and included the MNIST data as part of the package.
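The released code computes optimal poisoning points by gradient ascent on the learner's validation error; as a much cruder baseline that only conveys the idea, the sketch below injects a handful of far-out points with flipped labels into a toy training set and retrains. A plain logistic-regression learner stands in for the SVM here, and all data and quantities are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_logreg(X, y, lr=0.2, epochs=3000):
    """Plain logistic regression by gradient descent (a simple stand-in
    learner; the released code attacks an actual SVM)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(Xb @ w)))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.mean((Xb @ w > 0).astype(float) == y)

# Toy training and test sets: two Gaussian blobs
Xtr = np.vstack([rng.normal(-1.5, 1, (40, 2)), rng.normal(1.5, 1, (40, 2))])
ytr = np.hstack([np.zeros(40), np.ones(40)])
Xte = np.vstack([rng.normal(-1.5, 1, (200, 2)), rng.normal(1.5, 1, (200, 2))])
yte = np.hstack([np.zeros(200), np.ones(200)])

acc_clean = accuracy(train_logreg(Xtr, ytr), Xte, yte)

# Crude poisoning: 16 attacker-controlled points (20% of the training set)
Xp = np.vstack([Xtr, rng.normal(4, 0.5, (16, 2))])  # deep in the class-1 region
yp = np.hstack([ytr, np.zeros(16)])                 # ...but labelled as class 0
acc_pois = accuracy(train_logreg(Xp, yp), Xte, yte)

print(f"test accuracy, clean training set:    {acc_clean:.2f}")
print(f"test accuracy, poisoned training set: {acc_pois:.2f}")
```

Even this naive label-flip baseline typically degrades test accuracy noticeably; the gradient-based attack in the paper selects poisoning points far more efficiently, requiring a much smaller fraction of attacker-controlled data.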

Is Data Clustering in Adversarial Settings Secure? (AISec 2013). Here you can find the MATLAB code for replicating the experiments reported in my AISec 2013 publication. I would like to thank my colleague Ignazio Pillai, who implemented most of the code and ran the experiments for that paper.
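One attack studied in this setting is the bridging attack against single-linkage clustering: a few attacker-controlled points strung across the gap between two clusters force them to merge. The deterministic toy example below (points and threshold are invented for illustration) cuts the single-linkage dendrogram at a distance threshold, which is equivalent to taking the connected components of the corresponding neighbourhood graph.

```python
import numpy as np

def single_linkage_clusters(X, eps):
    """Number of connected components of the eps-neighbourhood graph,
    i.e. the clusters obtained by cutting a single-linkage dendrogram
    at distance eps. Uses a small union-find structure."""
    n = len(X)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(X[i] - X[j]) <= eps:
                parent[find(i)] = find(j)  # merge the two components
    return len({find(i) for i in range(n)})

# Two well-separated groups on a line
X = np.array([[0.0], [0.5], [1.0], [5.0], [5.5], [6.0]])
print(single_linkage_clusters(X, eps=1.2))  # -> 2 clusters

# Bridging attack: a few attacker points strung across the gap
bridge = np.array([[2.0], [3.0], [4.0]])
print(single_linkage_clusters(np.vstack([X, bridge]), eps=1.2))  # -> 1 cluster
```

Because single linkage merges clusters through any chain of close points, a handful of bridge samples suffices to collapse the two groups into one; the paper analyzes this kind of vulnerability systematically.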