Vulnerability-Aware Poisoning Mechanism for Online RL with Unknown Dynamics

Year
2021
Author(s)
Yanchao Sun, Da Huo, Furong Huang
Source
9th International Conference on Learning Representations (ICLR), 2021.
Url
https://arxiv.org/abs/2009.00774

Poisoning attacks on Reinforcement Learning (RL) systems can exploit the vulnerabilities of RL algorithms and cause learning to fail. However, prior work on poisoning RL usually either unrealistically assumes the attacker knows the underlying Markov Decision Process (MDP), or directly applies poisoning methods from supervised learning to RL. In this work, we build a generic poisoning framework for online RL via a comprehensive investigation of heterogeneous poisoning models in RL. Without any prior knowledge of the MDP, we propose a strategic poisoning algorithm called Vulnerability-Aware Adversarial Critic Poison (VA2C-P), which works for most policy-based deep RL agents, closing the gap that no poisoning method previously existed for policy-based RL agents. VA2C-P uses a novel metric, the stability radius in RL, which measures the vulnerability of RL algorithms. Experiments on multiple deep RL agents and multiple environments show that our poisoning algorithm successfully prevents agents from learning a good policy, or teaches the agents to converge to a target policy, with a limited attacking budget.
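
To make the threat model concrete, here is a minimal, hypothetical sketch of one instance of the heterogeneous poisoning models the paper studies: online reward poisoning. We assume $\epsilon$ (attack power) bounds the magnitude of each perturbation and $C/K$ (attack budget) bounds the fraction of poisoned training iterations; the wrapper name and the random schedule below are ours, not the paper's VA2C-P implementation.

```python
# Hypothetical sketch of the online poisoning threat model, NOT the
# paper's code. The attacker may perturb each observed reward by at
# most `eps` (attack power) and may attack at most
# C = budget_ratio * K of the K training iterations (budget C/K).
import gym
import numpy as np


class PoisonedRewardWrapper(gym.Wrapper):
    """Reward-poisoning attacker wrapped around a classic-gym env."""

    def __init__(self, env, eps=0.1, budget_ratio=0.3, total_iters=1000, seed=0):
        super().__init__(env)
        self.eps = eps
        self.rng = np.random.default_rng(seed)
        # Random baseline: choose the poisoned iterations uniformly.
        # A strategic attacker like VA2C-P would instead choose them
        # based on how vulnerable the learner currently is.
        n_attacks = int(budget_ratio * total_iters)
        self.attack_iters = set(
            self.rng.choice(total_iters, size=n_attacks, replace=False).tolist()
        )
        self.cur_iter = 0
        self.attacking = False

    def begin_iteration(self):
        # Call once per training iteration (e.g., per A2C rollout batch).
        self.attacking = self.cur_iter in self.attack_iters
        self.cur_iter += 1

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        if self.attacking:
            # Bounded perturbation: |poisoned_reward - reward| <= eps.
            reward += self.rng.uniform(-self.eps, self.eps)
        return obs, reward, done, info
```

A learner trained through such a wrapper never observes the clean rewards on poisoned iterations; as the comparison videos below illustrate, how the attacker spends this budget matters as much as the budget itself.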

Comparison of our proposed poisoning algorithm and the random poisoning method.

We train three A2C agents with the same hyperparameters on Hopper-v2 under different poisoning methods, using the same attack budget and power for each attack. The videos show how the trained agents perform in the test phase; a hypothetical sketch contrasting the two attack schedules follows the captions below.

*Note: the goal of the agent in Hopper is to hop forward as fast as possible.*

(1) Baseline: the original agent trained without poisoning.

(2) Agent trained under the random poisoning attack with attack power $\epsilon=0.1$ and budget $C/K=0.3$.

(3) Agent trained under our proposed VA2C-P attack with the same $\epsilon=0.1$ and $C/K=0.3$.
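
The only difference between attacks (2) and (3) above is the schedule, i.e., which iterations the budget is spent on. Below is a hypothetical sketch contrasting the two strategies; `vuln_score` is a stand-in for a vulnerability estimate such as the paper's stability radius, not the actual VA2C-P computation.

```python
# Hypothetical contrast of the two attack schedules; `vuln_score` is a
# placeholder for a vulnerability estimate (the paper uses the
# stability radius), NOT the actual VA2C-P criterion.
import numpy as np


def random_schedule(K, C, seed=0):
    """Baseline: poison C of the K iterations, chosen uniformly."""
    rng = np.random.default_rng(seed)
    return set(rng.choice(K, size=C, replace=False).tolist())


def vulnerability_aware_schedule(K, C, vuln_score):
    """Strategic: poison the C iterations judged most vulnerable.
    (A truly online attacker must decide greedily per iteration,
    since it cannot look ahead at future vulnerability.)"""
    scores = np.array([vuln_score(k) for k in range(K)])
    return set(np.argsort(scores)[-C:].tolist())  # indices of C largest scores


# Example with the budget used in the videos: C/K = 0.3.
K = 100
C = int(0.3 * K)
baseline = random_schedule(K, C)
strategic = vulnerability_aware_schedule(K, C, vuln_score=lambda k: (37 * k) % 101)
```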


Paper link: https://arxiv.org/abs/2009.00774

GitHub link: https://github.com/umd-huang-lab/poison-rl