Deep neural networks are vulnerable to adversarial examples, which raises security concerns about these algorithms given the potentially severe consequences. Adversarial attacks serve as an important surrogate for evaluating the robustness of deep learning models before they are deployed...
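To illustrate the kind of attack referred to above, here is a minimal sketch of crafting an adversarial example with the Fast Gradient Sign Method (FGSM); this is a generic baseline attack, not the method proposed in this paper, and `model`, `image`, and `label` are assumed placeholders for a trained classifier and a correctly labeled input.

```python
# Illustrative FGSM sketch (assumes PyTorch); not the paper's proposed attack.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=8 / 255):
    """Perturb `image` by one signed-gradient step of size `epsilon`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Move each pixel in the direction that increases the loss, then clamp
    # back to the valid [0, 1] image range.
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()
```

A model whose predictions flip under such small, nearly imperceptible perturbations is considered non-robust, which is why attacks like this are used as a proxy for robustness evaluation.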
Authors
Yinpeng Dong
Fangzhou Liao
Tianyu Pang