Are Adversarial Examples Created Equal? A Learnable Weighted Minimax Risk for Robustness under Non-Uniform Attacks

Year
2021
Author(s)
Huimin Zeng, Chen Zhu, Tom Goldstein and Furong Huang
Source
35th AAAI Conference on Artificial Intelligence (AAAI), 2021.
Url
https://arxiv.org/abs/2010.12989

Adversarial training has proven to be an effective method of defending against adversarial examples, and it is one of the few defenses that withstands strong attacks. However, traditional defense mechanisms assume a uniform attack over the examples according to the underlying data distribution, which is clearly unrealistic: an attacker could choose to concentrate on the more vulnerable examples. We present a weighted minimax risk optimization that defends against non-uniform attacks, achieving robustness against adversarial examples under perturbed test data distributions. Our modified risk assigns importance weights to different adversarial examples and adaptively focuses on harder examples, i.e., those that are misclassified or at higher risk of being misclassified. The designed risk allows the training process to learn a strong defense by optimizing the importance weights. Experiments show that our model significantly improves state-of-the-art adversarial accuracy under non-uniform attacks without a significant drop in accuracy under uniform attacks.
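
To make the contrast concrete, here is a sketch of the two risks in symbols. The notation (weights $w$ on the probability simplex $\Delta_n$, perturbation budget $\epsilon$, loss $\ell$, classifier $f_\theta$) is our own gloss, not the paper's exact formulation:

```latex
% Standard adversarial training treats every example uniformly:
\min_\theta \; \frac{1}{n} \sum_{i=1}^{n}
  \max_{\|\delta_i\| \le \epsilon} \ell\big(f_\theta(x_i + \delta_i),\, y_i\big)

% Weighted minimax risk: importance weights w, learned during training,
% shift mass toward the examples a non-uniform attacker would target:
\min_\theta \; \max_{w \in \Delta_n} \; \sum_{i=1}^{n} w_i
  \max_{\|\delta_i\| \le \epsilon} \ell\big(f_\theta(x_i + \delta_i),\, y_i\big)
```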

Code Link: https://github.com/huiminzeng/WeightedTraining_AAAI

Paper Link: https://arxiv.org/abs/2010.12989

Distribution-aware attacker

An attacker may not choose to attack the input examples uniformly. Instead, they might concentrate their attack budget on the more vulnerable points, producing an attack distribution that differs from the training distribution. A toy sketch of such target selection follows below.
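
The PyTorch sketch below illustrates one way a distribution-aware attacker could pick targets; the margin-based vulnerability score and the `budget` parameter are our own illustrative choices, not the paper's attack model:

```python
import torch

def select_targets(model, x, y, budget=0.3):
    """Pick the fraction of examples with the smallest logit margin."""
    with torch.no_grad():
        logits = model(x)                                   # (N, C)
        correct = logits.gather(1, y.unsqueeze(1)).squeeze(1)
        # Mask out the true class, then take the best remaining logit.
        others = logits.scatter(1, y.unsqueeze(1), float("-inf"))
        runner_up = others.max(dim=1).values
        margin = correct - runner_up        # small margin => more vulnerable
    k = max(1, int(budget * len(x)))
    return margin.topk(k, largest=False).indices  # indices of the k weakest
```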


We propose a weighted minimax risk to improve the robustness of the model against adversarial perturbations that are distributed differently from the training examples.
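
A minimal sketch of what one training step under such a weighted objective could look like in PyTorch. The softmax-over-losses weight rule with temperature `tau` and the PGD hyperparameters are illustrative assumptions; in the paper the weights are learned as part of the minimax problem, whereas the closed-form rule here is just one simple way to up-weight harder examples:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-infinity PGD: the inner maximization of the minimax risk."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()   # assumes inputs live in [0, 1]

def weighted_adv_step(model, optimizer, x, y, tau=1.0):
    """One outer step: weight harder adversarial examples more heavily."""
    x_adv = pgd_attack(model, x, y)
    per_example = F.cross_entropy(model(x_adv), y, reduction="none")
    # Softmax over per-example losses: higher loss => larger weight.
    # Weights are detached so gradients flow only through the losses.
    w = torch.softmax(per_example.detach() / tau, dim=0)
    loss = (w * per_example).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```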