Phenomenally successful in practical inference problems, convolutional neural networks (CNNs) are widely deployed in mobile devices, data centers, and even supercomputers. The number of parameters needed in CNNs, however, is often large and undesirable. Consequently, various methods have been developed to prune a CNN once it is trained...
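The pruning idea mentioned above can be illustrated with a minimal magnitude-based sketch: after training, the smallest-magnitude weights are zeroed out, shrinking the effective parameter count. This is a generic illustration of post-training pruning, not the specific method of this paper; `magnitude_prune` is a hypothetical helper.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the `sparsity` fraction of weights with the smallest magnitude.

    A generic post-training pruning sketch, assuming a dense weight array.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # Threshold at the k-th smallest absolute value; keep strictly larger weights.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Toy 2x2 layer: pruning at 50% sparsity keeps only the two largest weights.
w = np.array([[0.9, -0.05],
              [0.02, -0.8]])
pruned = magnitude_prune(w, 0.5)
print(pruned)  # [[ 0.9  0. ] [ 0.  -0.8]]
```

In practice, pruning is usually followed by fine-tuning to recover accuracy lost when the small weights are removed.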
Authors
Jongsoo Park
Sheng Li
Wei Wen
Hai Li
Yiran Chen