Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL

Year
2022
Author(s)
Yanchao Sun, Ruijie Zheng, Yongyuan Liang, Furong Huang
Source
The Tenth International Conference on Learning Representations (ICLR), 2022.
Url
https://openreview.net/forum?id=JM2kFbJvvI
BibTeX

Evaluating the worst-case performance of a reinforcement learning (RL) agent under the strongest/optimal adversarial perturbations on state observations (within some constraints) is crucial for understanding the robustness of RL agents. However, finding the optimal adversary is challenging, in terms of both whether we can find the optimal attack and how efficiently we can find it. Existing works on adversarial RL either use heuristics-based methods that may not find the strongest adversary, or directly train an RL-based adversary by treating the agent as a part of the environment, which can find the optimal adversary but may become intractable in a large state space. This paper introduces a novel attack method that finds the optimal attacks through collaboration between a designed function named “actor” and an RL-based learner named “director”. The actor crafts state perturbations for a given policy perturbation direction, and the director learns to propose the best policy perturbation directions. Our proposed algorithm, PA-AD, is theoretically optimal and significantly more efficient than prior RL-based works in environments with large state spaces. Empirical results show that our proposed PA-AD universally outperforms state-of-the-art attacking methods in various Atari and MuJoCo environments. By applying PA-AD to adversarial training, we achieve state-of-the-art empirical robustness in multiple tasks under strong adversaries.
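To make the actor-director division of labor concrete, here is a minimal toy sketch of the idea, not the paper's PA-AD implementation: the victim is a stand-in linear-softmax policy, the "director's" proposal is modeled as a fixed direction in the policy (action-distribution) space, and the "actor" searches for a bounded state perturbation that pushes the victim's action distribution along that direction. All names, the finite-difference gradient, and the l-infinity budget are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the victim agent (assumption, not the paper's setup):
# a linear-softmax policy over 3 actions on a 4-dimensional state.
W = rng.normal(size=(3, 4))

def victim_policy(s):
    """Victim's action distribution pi(.|s) under a linear-softmax model."""
    logits = W @ s
    e = np.exp(logits - logits.max())
    return e / e.sum()

def actor_attack(s, direction, eps=0.1, steps=20, lr=0.05):
    """'Actor' step: find a state perturbation within an l_inf ball of
    radius eps that moves the victim's action distribution along the
    policy-space direction proposed by the 'director'.
    Gradients are estimated by finite differences purely for simplicity."""
    delta = np.zeros_like(s)
    for _ in range(steps):
        grad = np.zeros_like(s)
        for i in range(s.size):
            d = np.zeros_like(s)
            d[i] = 1e-4
            f_plus = victim_policy(s + delta + d) @ direction
            f_minus = victim_policy(s + delta - d) @ direction
            grad[i] = (f_plus - f_minus) / 2e-4
        # Projected sign-gradient ascent, kept inside the eps budget.
        delta = np.clip(delta + lr * np.sign(grad), -eps, eps)
    return s + delta

s = rng.normal(size=4)
# Hypothetical director proposal: increase the probability of action 0.
direction = np.array([1.0, -0.5, -0.5])
s_adv = actor_attack(s, direction)
print("objective before:", victim_policy(s) @ direction)
print("objective after: ", victim_policy(s_adv) @ direction)
```

In the paper's actual algorithm the director is itself an RL agent trained to output the most damaging policy perturbation direction, which keeps the adversary's action space at the (small) policy-perturbation dimension rather than the full state space; the sketch above only illustrates the actor's inner optimization for one fixed direction.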

We theoretically characterize the essence of evasion attacks in RL and propose a novel attack algorithm for RL agents that achieves state-of-the-art performance on both attacking and robustifying RL agents across many Atari and MuJoCo tasks.