Publications

11 results
Filters: Author is Marco Melis
W
A. Demontis, Melis, M., Pintor, M., Jagielski, M., Biggio, B., Oprea, A., Nita-Rotaru, C., and Roli, F., «Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks», in 28th USENIX Security Symposium (USENIX Security 19), Santa Clara, California, USA, 2019, pp. 321-338.
S
A. Demontis, Melis, M., Biggio, B., Fumera, G., and Roli, F., «Super-sparse Learning in Similarity Spaces», IEEE Computational Intelligence Magazine, vol. 11, no. 4, pp. 36-45, 2016.
B. Biggio, Melis, M., Fumera, G., and Roli, F., «Sparse Support Faces», in Int'l Conf. on Biometrics (ICB), 2015, pp. 208-213.
M. Pintor, Demetrio, L., Sotgiu, A., Melis, M., Demontis, A., and Biggio, B., «secml: A Python Library for Secure and Explainable Machine Learning», SoftwareX, 2022.
F
M. Melis, Piras, L., Biggio, B., Giacinto, G., Fumera, G., and Roli, F., «Fast Image Classification with Reduced Multiclass Support Vector Machines», in 18th Int'l Conf. on Image Analysis and Processing (ICIAP 2015), Genova, Italy, 2015, pp. 78-88.
F. Crecchi, Melis, M., Sotgiu, A., Bacciu, D., and Biggio, B., «FADER: Fast adversarial example rejection», Neurocomputing, vol. 470, pp. 257-268, 2022.
E
M. Melis, Maiorca, D., Biggio, B., Giacinto, G., and Roli, F., «Explaining Black-box Android Malware Detection», in 26th European Signal Processing Conference (EUSIPCO '18), Rome, Italy, 2018, pp. 524-528.
D
M. Melis, Scalas, M., Demontis, A., Maiorca, D., Biggio, B., Giacinto, G., and Roli, F., «Do Gradient-Based Explanations Tell Anything About Adversarial Robustness to Android Malware?», International Journal of Machine Learning and Cybernetics, vol. 13, pp. 217-232, 2022.
A. Sotgiu, Demontis, A., Melis, M., Biggio, B., Fumera, G., Feng, X., and Roli, F., «Deep Neural Rejection against Adversarial Examples», EURASIP Journal on Information Security, vol. 5, 2020.
M. Melis, Demontis, A., Biggio, B., Brown, G., Fumera, G., and Roli, F., «Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid», in 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), Workshop on Vision in Practice on Autonomous Robots (ViPAR), Venice, Italy, 2017, pp. 751-759.