A Simple Structure for Building a Robust Model
Intelligence Science IV, conference paper, 2022

Abstract

As deep learning applications, especially in computer vision, are increasingly deployed in our lives, the security of these applications demands more urgent attention. One effective way to improve the security of deep learning models is adversarial training, which makes a model robust to samples deliberately crafted to attack it. Based on this, we propose a simple architecture for building a model with a certain degree of robustness: it improves the robustness of the trained network by adding an adversarial sample detection network for cooperative training. At the same time, we design a new data sampling strategy that incorporates multiple existing attacks, allowing the model to adapt to many different adversarial attacks within a single training run. We conducted experiments on the CIFAR-10 dataset to test the effectiveness of this design, and the results indicate that it has a certain degree of positive effect on the robustness of the model. Our code can be found at https://github.com/dowdyboy/simple_structure_for_robust_model .
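The abstract only outlines the design, so the following is a minimal, hypothetical PyTorch sketch of the two ideas it names: a detection network trained cooperatively with the classifier, and a sampling strategy that mixes several existing attacks during training. Every name here (RobustModel, fgsm, train_step, the detection-loss weight) is our own assumption for illustration, not the authors' released code; their actual implementation is in the linked repository.

```python
# Hypothetical sketch (not the authors' code): a shared backbone feeds a
# classification head and a clean-vs-adversarial detection head, and each
# training step draws its adversarial samples from a pool of attacks.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F


class RobustModel(nn.Module):
    """Shared backbone with a classifier head and an adversarial-detection head."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int = 10):
        super().__init__()
        self.backbone = backbone                          # shared feature extractor (outputs N x feat_dim)
        self.cls_head = nn.Linear(feat_dim, num_classes)  # main classifier
        self.det_head = nn.Linear(feat_dim, 2)            # clean vs. adversarial detector

    def forward(self, x):
        feats = self.backbone(x)
        return self.cls_head(feats), self.det_head(feats)


def fgsm(model, x, y, eps=8 / 255):
    """One attack out of the pool; PGD or other attacks could be added to the mix."""
    x = x.clone().detach().requires_grad_(True)
    logits, _ = model(x)
    loss = F.cross_entropy(logits, y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()


def train_step(model, optimizer, x, y, attacks, det_weight=1.0):
    """Cooperative step: classify clean + adversarial samples and detect which were attacked."""
    attack = random.choice(attacks)                       # mixed-attack sampling
    x_adv = attack(model, x, y)

    x_all = torch.cat([x, x_adv])
    y_all = torch.cat([y, y])
    is_adv = torch.cat([torch.zeros(len(x)), torch.ones(len(x_adv))]).long().to(x.device)

    logits, det_logits = model(x_all)
    loss = (F.cross_entropy(logits, y_all)
            + det_weight * F.cross_entropy(det_logits, is_adv))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because a single optimizer updates both heads, gradients from the detection loss also shape the shared backbone; that shared pressure is one plausible reading of the "cooperative training" the abstract describes. The actual attack pool, loss weighting, and network details used in the paper's CIFAR-10 experiments may differ from this sketch.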
Embargoed file - available Wednesday, January 1, 2025

Dates and versions

hal-04666419, version 1 (01-08-2024)

Identifiers

HAL Id: hal-04666419
DOI: 10.1007/978-3-031-14903-0_45

Cite

Xiao Tan, Jingbo Gao, Ruolin Li. A Simple Structure for Building a Robust Model. 5th International Conference on Intelligence Science (ICIS), Oct 2022, Xi'an, China. pp.417-424, ⟨10.1007/978-3-031-14903-0_45⟩. ⟨hal-04666419⟩