Thu 19 Nov 2020 01:00 - 01:20 at SPLASH-I - W-4 Chair(s): Sophia Drossopoulou, Jan Vitek
Neural models of code have shown impressive results when performing tasks such as predicting method names and identifying certain kinds of bugs.
We show that these models are vulnerable to \emph{adversarial examples}, and introduce a novel approach for \emph{attacking} trained models of code using adversarial examples.
The main idea of our approach is to force a given trained model to make an incorrect prediction, as specified by the adversary, by introducing small perturbations that do not change the program's semantics, thereby creating an adversarial example.
To find such perturbations, we present a new technique for Discrete Adversarial Manipulation of Programs (DAMP). DAMP works by differentiating the desired prediction with respect to the model's \emph{inputs} while holding the model weights constant, and following the gradients to slightly modify the input code.
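To make the idea concrete, here is a minimal sketch of one gradient-guided substitution step in PyTorch. It is our own illustration, not the paper's implementation: the model, the embedding matrix, and the assumption that a single variable occurrence is being renamed are hypothetical placeholders.

\begin{verbatim}
import torch
import torch.nn.functional as F

def damp_step(model, embedding_matrix, token_ids, var_position, target_label):
    """One gradient-guided substitution step (illustrative sketch).

    Holds the model weights fixed, differentiates the loss of the
    adversary's desired label with respect to a one-hot relaxation of
    the input, and picks the replacement token whose one-hot change
    most decreases that loss (a first-order estimate).
    """
    vocab_size = embedding_matrix.size(0)
    # Relax the discrete tokens to a differentiable one-hot encoding.
    one_hot = F.one_hot(token_ids, vocab_size).float().requires_grad_(True)
    embedded = one_hot @ embedding_matrix          # (seq_len, embed_dim)
    logits = model(embedded)                       # model consumes embeddings
    # Targeted objective: the loss of the adversary-chosen label.
    loss = F.cross_entropy(logits.unsqueeze(0), torch.tensor([target_label]))
    loss.backward()
    # Gradient w.r.t. the one-hot row of the variable we may rename.
    grad = one_hot.grad[var_position]              # (vocab_size,)
    # Moving mass from the current token to token j changes the loss by
    # roughly grad[j] - grad[current]; argmin picks the best candidate.
    return torch.argmin(grad).item()
\end{verbatim}

In practice such a step would be applied consistently to all occurrences of the chosen variable and iterated until the model's prediction flips.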
We show that our DAMP attack is effective across three neural architectures: code2vec, GGNN, and GNN-FiLM, in both Java and C#.
Our evaluations demonstrate that DAMP achieves a success rate of up to 89% in changing a prediction to the adversary's choice (a targeted attack), and of up to 94% in changing a given prediction to any incorrect prediction (a non-targeted attack).
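The two attack modes differ only in the objective the gradients follow; a hedged sketch of the distinction (the function and its arguments are our own naming, not the paper's):

\begin{verbatim}
import torch.nn.functional as F

def attack_loss(logits, label, targeted):
    """Objective for a single attack step (illustrative).

    Targeted: minimize the loss of the adversary-chosen label, pushing
    the model toward that specific prediction.
    Non-targeted: maximize the loss of the correct label, so that any
    misprediction counts as a success.
    """
    ce = F.cross_entropy(logits, label)
    return ce if targeted else -ce
\end{verbatim}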
To defend a model against such attacks, we empirically examine a variety of possible defenses and discuss their trade-offs.
We show that some of these defenses can dramatically reduce the attacker's success rate, at the minor cost of a 2% relative degradation in accuracy when the model is not under attack.
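As one example of the kind of defense that can be evaluated, identifiers can be anonymized before the model ever sees them, making variable-renaming perturbations invisible to the model. The sketch below is our own illustration of this idea; the placeholder scheme and the assumption that the identifier set comes from a parser are not taken from the paper.

\begin{verbatim}
def anonymize_identifiers(code_tokens, identifiers):
    """Canonically rename user-chosen identifiers (illustrative defense).

    Every identifier is mapped to VAR0, VAR1, ... in order of first
    occurrence, so any adversarial renaming yields the same model input.
    `identifiers` is assumed to be produced by a parser for the language.
    """
    mapping = {}
    anonymized = []
    for tok in code_tokens:
        if tok in identifiers:
            mapping.setdefault(tok, f"VAR{len(mapping)}")
            anonymized.append(mapping[tok])
        else:
            anonymized.append(tok)
    return anonymized
\end{verbatim}

Such anonymization neutralizes renaming attacks at the cost of discarding the identifier names the model would otherwise exploit, which is one source of the accuracy trade-off discussed above.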
Our code, data, and trained models are available at \url{https://github.com/tech-srl/adversarial-examples}.