For computer vision applications, prior works have shown the efficacy of reducing the numeric precision of model parameters (network weights) in deep neural networks. Activation maps, however, occupy a large memory footprint during both training and inference when mini-batches of inputs are used. One way to reduce this large memory footprint is to reduce the precision of the activations...
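The idea of reducing activation precision can be sketched with simple uniform quantization: map each activation tensor onto a small number of integer levels, then de-quantize for the next layer. This is a minimal illustrative sketch, not the paper's actual scheme; the function name, bit width, and min-max range calibration are assumptions for illustration.

```python
import numpy as np

def quantize_activations(x, bits=8):
    """Illustrative uniform quantization of an activation map.

    Maps values in [x.min(), x.max()] onto 2**bits levels, then
    de-quantizes back to float so the next layer can consume it.
    (Hypothetical helper -- not the method from the paper.)
    """
    levels = 2 ** bits - 1
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((x - lo) / scale)   # integer codes in [0, levels]
    return q * scale + lo            # de-quantized approximation

# Example: a mini-batch of ReLU activations approximated at 4 bits
acts = np.maximum(np.random.randn(32, 64, 28, 28), 0).astype(np.float32)
acts_q = quantize_activations(acts, bits=4)
max_err = float(np.abs(acts - acts_q).max())
```

Storing the integer codes at 4 bits instead of 32-bit floats would cut the activation memory footprint by 8x, at the cost of a bounded rounding error of at most half a quantization step.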
Authors
Debbie Marr