A Lipschitz-Shapley Explainable Defense Methodology Against Adversarial Attacks
Abstract
Every learning algorithm has a specific bias. This bias may stem from the choice of its hyperparameters, from the characteristics of its classification methodology, or from the way the considered information is represented. As a result, Machine Learning models are vulnerable to specialized attacks. Moreover, training datasets are not always an accurate image of the real world: their selection process, together with the assumption that they share the same distribution as all unseen cases, introduces another level of bias. Global and Local Interpretability (GLI) is an important process that allows the determination of the right architectures to counter Adversarial Attacks (ADA). It contributes to a holistic view of the intelligent model, through which we can determine the most important features, understand how decisions are made, and capture the interactions between the involved features. This research paper introduces a hybrid Lipschitz-Shapley approach for Explainable Defense Against Adversarial Attacks. The proposed methodology employs the Lipschitz constant and tracks its evolution during the training of the intelligent model, while the use of Shapley values offers clear explanations for the specific decisions made by the model.
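To make the two ingredients of the abstract concrete, the following minimal Python sketch illustrates, under stated assumptions, (a) an empirical estimate of a local Lipschitz constant of a trained model around an input, obtained from finite-difference ratios over random perturbations, and (b) a permutation-based Monte Carlo approximation of Shapley values for a single prediction. All names (`estimate_local_lipschitz`, `shapley_values`, the toy model) are illustrative assumptions and not the authors' implementation, which is detailed later in the paper.

```python
import numpy as np

def estimate_local_lipschitz(predict, x, n_probes=100, radius=0.1, rng=None):
    """Estimate a local Lipschitz constant of `predict` around `x` as the
    largest observed ratio ||f(x') - f(x)|| / ||x' - x|| over random
    perturbations x' within the given radius (an empirical lower bound)."""
    rng = np.random.default_rng(rng)
    fx = predict(x)
    best = 0.0
    for _ in range(n_probes):
        delta = rng.normal(size=x.shape)
        # rescale the random direction to a random length in (0, radius]
        delta *= radius * rng.uniform(1e-3, 1.0) / np.linalg.norm(delta)
        ratio = np.linalg.norm(predict(x + delta) - fx) / np.linalg.norm(delta)
        best = max(best, ratio)
    return best

def shapley_values(predict, x, baseline, n_perm=200, rng=None):
    """Permutation-based Monte Carlo approximation of Shapley values for one
    instance: average marginal contribution of each feature when features are
    revealed in a random order, with absent features taken from `baseline`."""
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    phi = np.zeros(d)
    for _ in range(n_perm):
        order = rng.permutation(d)
        z = baseline.copy()
        prev = predict(z)
        for j in order:
            z[j] = x[j]          # reveal feature j
            curr = predict(z)
            phi[j] += curr - prev  # marginal contribution of feature j
            prev = curr
    return phi / n_perm

if __name__ == "__main__":
    # Toy scalar-output "model" standing in for the trained intelligent model.
    w = np.array([0.8, -1.5, 0.3, 2.0])
    predict = lambda v: np.tanh(v @ w)

    x = np.array([0.5, -0.2, 1.0, 0.1])
    baseline = np.zeros_like(x)

    print(f"estimated local Lipschitz constant: "
          f"{estimate_local_lipschitz(predict, x, rng=0):.3f}")
    print("Shapley values per feature:",
          np.round(shapley_values(predict, x, baseline, rng=0), 3))
```

In practice the Lipschitz estimate would be recomputed at intervals during training to monitor its evolution, and a library such as `shap` would typically replace the hand-rolled permutation estimator; the sketch only fixes the ideas referenced in the abstract.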