Despite their efficacy on a variety of computer vision tasks, deep neural networks (DNNs) are vulnerable to adversarial attacks, which limits their application in security-critical systems. Recent works have shown that it is possible to generate imperceptibly perturbed image inputs (a.k.a. adversarial examples) that fool well-trained DNN classifiers into making arbitrary predictions. To address this problem, we propose a training recipe named "deep defense". Our core idea is to integrate an adversarial perturbation-based regularizer into the classification objective, so that the learned models resist potential attacks directly and precisely. The whole optimization problem is solved just like training a recursive network. Experimental results demonstrate that our method outperforms training with adversarial/Parseval regularizations by large margins on various datasets (including MNIST, CIFAR-10 and ImageNet) and different DNN architectures. Code and models for reproducing our results will be made publicly available.
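To make the idea of a perturbation-based regularizer concrete, the sketch below shows one way such an objective could look in PyTorch. It is only an illustration of the general recipe described in the abstract, not the paper's exact formulation: the function name deep_defense_style_loss, the weight lambda_reg, the first-order perturbation estimate, and the exponential penalty are all assumptions made for this example.

```python
# Illustrative sketch only (not the paper's exact "deep defense" objective):
# it combines a standard classification loss with a regularizer built from an
# estimate of how small a perturbation would flip each prediction.
import torch
import torch.nn.functional as F


def deep_defense_style_loss(model, x, y, lambda_reg=0.1, eps=1e-12):
    """Cross-entropy plus a penalty on examples that a small adversarial
    perturbation could already misclassify (first-order estimate)."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)

    # Margin between the true-class logit and the best competing logit.
    true_logit = logits.gather(1, y.unsqueeze(1)).squeeze(1)
    one_hot = F.one_hot(y, num_classes=logits.size(1)).bool()
    runner_up = logits.masked_fill(one_hot, float("-inf")).max(dim=1).values
    margin = true_logit - runner_up

    # First-order estimate of the minimal perturbation norm needed to cross
    # the decision boundary, normalized by the input norm.
    grad = torch.autograd.grad(margin.sum(), x, create_graph=True)[0]
    grad_norm = grad.flatten(1).norm(dim=1) + eps
    x_norm = x.flatten(1).norm(dim=1) + eps
    rho = margin.abs() / (grad_norm * x_norm)

    # Penalize fragile examples (small rho) more heavily; the regularizer is
    # differentiable, so the combined objective is trained end to end.
    reg = torch.exp(-rho).mean()
    return ce + lambda_reg * reg
```

A training loop would simply backpropagate through this combined loss; differentiating through the perturbation estimate is what makes the optimization resemble training a recursive network, as noted in the abstract.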
Authors
Changshui Zhang
Ziang Yan