On the Sensitivity of Adversarial Robustness to Input Data Distributions

By Gavin Weiguang Ding and others
Neural networks are vulnerable to small adversarial perturbations. The existing literature has largely focused on understanding and mitigating the vulnerability of learned models. In this paper, we demonstrate an intriguing phenomenon about the most popular robust training method in the literature, adversarial training: adversarial robustness, unlike clean accuracy, is sensitive to the...
February 22, 2019