Pixel Based Adversarial Attacks on Convolutional Neural Network Models
Abstract
Deep Neural Networks (DNNs) have found real-time applications, for example, facial recognition for ATM security and self-driving cars. A major security threat to DNNs comes from adversarial attacks. An adversarial sample is an image that has been changed in a way that is imperceptible to the human eye but causes the image to be misclassified by a Convolutional Neural Network (CNN). The objective of this research work is to devise pixel-based algorithms for adversarial attacks on images. To validate the algorithms, untargeted attacks are performed on the MNIST and CIFAR-10 datasets using techniques such as edge detection, Gradient-weighted Class Activation Mapping (Grad-CAM), and noise addition, whereas targeted attacks are performed on the MNIST dataset using saliency maps. The adversarial images thus generated are then passed to a CNN model and the misclassification results are analyzed. From the analysis, it is inferred that it is easier to fool CNNs with untargeted attacks than with targeted attacks. Also, grayscale images (MNIST) are better suited for generating robust adversarial examples than color images (CIFAR-10).
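To illustrate the general idea of an untargeted, pixel-based noise attack described above, the following is a minimal sketch, not the authors' exact algorithm. It assumes a Keras-style pre-trained MNIST classifier `model`; the function name, pixel budget, and noise parameters are illustrative assumptions.

```python
# Illustrative sketch (not the paper's exact algorithm): an untargeted
# noise-addition attack that perturbs randomly chosen pixels of an MNIST
# digit until a given classifier changes its prediction. The CNN `model`
# and its Keras-style predict() interface are assumed for illustration.
import numpy as np

def untargeted_noise_attack(model, image, max_pixels=100, noise_scale=0.5, seed=0):
    """Perturb random pixels until the model's predicted label flips.

    image: float array of shape (28, 28, 1) with values in [0, 1].
    Returns (adversarial_image, pixels_changed), or (None, max_pixels)
    if no misclassification is achieved within the pixel budget.
    """
    rng = np.random.default_rng(seed)
    original_label = np.argmax(model.predict(image[None, ...], verbose=0))
    adv = image.copy()

    for changed in range(1, max_pixels + 1):
        # Pick a random pixel and overwrite it with clipped Gaussian noise.
        r, c = rng.integers(0, 28, size=2)
        adv[r, c, 0] = np.clip(rng.normal(0.5, noise_scale), 0.0, 1.0)

        new_label = np.argmax(model.predict(adv[None, ...], verbose=0))
        if new_label != original_label:
            return adv, changed  # untargeted misclassification achieved
    return None, max_pixels
```

A targeted variant would instead loop until the prediction equals a chosen target class, which typically requires more pixel changes, consistent with the abstract's observation that targeted attacks are harder to mount than untargeted ones.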