Everyone Matters: Customizing the Dynamics of Decision Boundary for Adversarial Robustness

Year
2022
Type(s)
Author(s)
Yuancheng Xu, Yanchao Sun, Furong Huang
Source
Accepted at the Workshop on Continuous Time Methods for Machine Learning at ICML 2022.

The adversarial robustness of a deep classifier can be measured by its robust radii: the distances from natural data points to the decision boundary. However, it is unclear whether current adversarial training (AT) methods effectively improve the robust radius of each individual vulnerable point. To understand this, we propose a continuous-time framework that studies the relative speed of the decision boundary with respect to each individual point. Visualizing this speed reveals a surprising conflict: under AT, the decision boundary moves away from some vulnerable points while simultaneously moving closer to other vulnerable ones. To alleviate these conflicting dynamics, we propose Dynamical Customized Adversarial Training (Dyna-CAT), which directly controls the decision boundary so that it moves away from the training data points. Moreover, to further encourage robustness improvements for the more vulnerable points, Dyna-CAT makes the decision boundary move away faster from points with smaller robust radii, achieving a customized manipulation of the decision boundary. As a result, Dyna-CAT achieves fairer robustness across individual points, leading to better overall robustness under limited model capacity. Experiments verify that Dyna-CAT alleviates the conflicting dynamics and obtains improved robustness compared with state-of-the-art defenses.
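
The two quantities the abstract builds on can be written down as follows; this is a plausible formalization under stated assumptions, not necessarily the paper's exact definitions.

```latex
% Robust radius of a point x under classifier f(.;theta): its distance to the
% decision boundary (definition follows directly from the abstract).
\[
  R(x;\theta) \;=\; \min_{x'} \|x' - x\|
  \quad \text{s.t.} \quad \arg\max_k f_k(x';\theta) \;\neq\; \arg\max_k f_k(x;\theta).
\]
% Relative speed of the decision boundary with respect to x along the
% continuous-time training dynamics theta(t) (e.g., gradient flow) -- an
% assumed formalization: positive speed means the boundary moves away from x,
% negative speed means it moves closer.
\[
  s(x;t) \;=\; \frac{d}{dt}\, R\!\bigl(x;\theta(t)\bigr).
\]
```

A minimal training-step sketch of the "customized" idea follows: estimate each point's robust radius and upweight the loss on points with smaller radii, so the boundary is pushed away faster from the most vulnerable points. This is not the authors' Dyna-CAT objective; the PyTorch code, the first-order radius proxy (logit margin over input-gradient norm), the softmax weighting, and the `attack` callable are illustrative assumptions.

```python
# Sketch only: per-example weighting of adversarial training by an estimated
# robust radius. Not the paper's Dyna-CAT objective.
import torch
import torch.nn.functional as F


def estimate_robust_radius(model, x, y):
    """First-order proxy for distance to the decision boundary (assumption):
    logit margin divided by the norm of its input gradient."""
    x = x.detach().clone().requires_grad_(True)
    logits = model(x)
    top2 = logits.topk(2, dim=1).indices
    # Runner-up class if the prediction matches the label, else the top class.
    other = torch.where(top2[:, 0] == y, top2[:, 1], top2[:, 0])
    margin = logits.gather(1, y[:, None]) - logits.gather(1, other[:, None])
    grad = torch.autograd.grad(margin.sum(), x)[0]
    grad_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12)
    return (margin.squeeze(1) / grad_norm).detach()


def customized_at_step(model, optimizer, x, y, attack, temperature=1.0):
    """One training step: smaller estimated radius -> larger per-example weight,
    so the loss (and hence the boundary motion) focuses on vulnerable points."""
    radius = estimate_robust_radius(model, x, y)
    weights = torch.softmax(-radius / temperature, dim=0) * len(x)  # mean weight ~ 1
    x_adv = attack(model, x, y)  # any adversarial-example generator (hypothetical)
    loss = (weights * F.cross_entropy(model(x_adv), y, reduction="none")).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The softmax weighting here is just one smooth choice of a decreasing function of the radius; any monotone decreasing weighting would express the same intent of moving the boundary away faster from points with smaller robust radii.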