Pixel Based Adversarial Attacks on Convolutional Neural Network Models
Conference paper, Computational Intelligence in Data Science, 2021


Abstract

Deep Neural Networks (DNNs) have found real-time applications such as facial recognition for security in ATMs and self-driving cars. A major security threat to DNNs comes from adversarial attacks. An adversarial sample is an image that has been altered in a way that is imperceptible to the human eye but causes it to be misclassified by a Convolutional Neural Network (CNN). The objective of this research work is to devise pixel-based algorithms for adversarial attacks on images. To validate the algorithms, untargeted attacks are performed on the MNIST and CIFAR-10 datasets using techniques such as edge detection, Gradient-weighted Class Activation Mapping (Grad-CAM) and noise addition, whereas targeted attacks are performed on the MNIST dataset using saliency maps. The adversarial images thus generated are passed to a CNN model and the misclassification results are analyzed. From this analysis, it is inferred that CNNs are easier to fool with untargeted attacks than with targeted attacks, and that grayscale images (MNIST) produce more robust adversarial examples than colored images (CIFAR-10).
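The paper itself does not include code; as a rough illustration of the untargeted, pixel-based attack family described above, a minimal sketch is given below. It assumes a classifier exposed through a hypothetical predict_fn that maps a batch of images with values in [0, 1] to class probabilities (for example, a trained Keras model's predict method). The function name, the predict_fn interface, and all parameters are illustrative assumptions, not the authors' algorithm.

    # Hypothetical sketch of an untargeted pixel-based noise attack.
    # `predict_fn` is assumed to map a batch of images, e.g. shape
    # (N, 28, 28, 1) with values in [0, 1], to class probabilities.
    import numpy as np

    def untargeted_noise_attack(image, true_label, predict_fn,
                                n_pixels=10, magnitude=1.0,
                                max_tries=100, rng=None):
        """Perturb a few random pixels until the prediction changes.

        Returns the adversarial image, or None if no misclassification
        is found within `max_tries` attempts.
        """
        rng = rng or np.random.default_rng(0)
        h, w = image.shape[:2]
        for _ in range(max_tries):
            candidate = image.copy()
            # Pick `n_pixels` random coordinates and push each toward
            # black or white, keeping values in the valid [0, 1] range.
            ys = rng.integers(0, h, size=n_pixels)
            xs = rng.integers(0, w, size=n_pixels)
            deltas = rng.choice([-magnitude, magnitude], size=n_pixels)
            for y, x, d in zip(ys, xs, deltas):
                candidate[y, x] = np.clip(candidate[y, x] + d, 0.0, 1.0)
            pred = int(np.argmax(predict_fn(candidate[None, ...])[0]))
            if pred != true_label:  # any wrong class counts (untargeted)
                return candidate
        return None

Because the attack is untargeted, any misclassification terminates the search; a targeted variant would instead accept a candidate only when the prediction equals a chosen target class, which is why, as the abstract notes, targeted attacks are harder to mount.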

Dates and versions

hal-03772929, version 1 (08-09-2022)

Cite

Kavitha Srinivasan, Priyadarshini Jello Raveendran, Varun Suresh, Nithya Rathna Anna Sundaram. Pixel Based Adversarial Attacks on Convolutional Neural Network Models. 4th International Conference on Computational Intelligence in Data Science (ICCIDS), Mar 2021, Chennai, India. pp.141-155, ⟨10.1007/978-3-030-92600-7_14⟩. ⟨hal-03772929⟩