The Role and Potential of CPUs in Deep Learning

Updated 5/21/2021

By Sparsh Mittal, assistant professor, Indian Institute of Technology, Roorkee, India

[This article originally appeared in HPCwire and is reprinted with permission.]

In this invited guest piece, Sparsh Mittal provides perspective on the role of the central processing unit (CPU) for deep-learning workloads in an increasingly diverse processor space, reviewing use cases where the CPU excels and noting some of the architectural changes and directions spurred by deep-learning applications. The article serves as an introduction to a new survey paper (written by Mittal et al.) published this April in IEEE Transactions on Neural Networks and Learning Systems.

Deep-learning applications have unique architectural characteristics and efficiency requirements. Hence, the choice of computing system has a profound impact on how large a piece of the deep-learning pie a user can finally enjoy. Even though accelerators may provide higher throughput than general-purpose computing systems (CPUs), there are several other metrics and usage scenarios for which CPUs are preferred or superior. A recent survey paper I’ve coauthored with Poonam Rajput and Sreenivas Subramoney (A Survey of Deep Learning on CPUs: Opportunities and Co-optimizations) highlights the strengths of CPUs in deep learning and identifies opportunities for further optimization.

 

CPU Has Its Forte, and Accelerator Is Not a Panacea

Sparse DNNs are inefficient on massively parallel processors because of their irregular memory accesses and their inability to leverage optimizations such as cache tiling and vectorization. Further, RNNs are difficult to parallelize due to the dependencies between steps. Similarly, DNNs such as InceptionNet variants have filter shapes of 1×1, 3×3, 1×3, 3×1, and so on, which lead to irregular memory accesses and variable amounts of parallelism across the layers. CPUs, with their advanced memory management techniques, are more suitable for such applications with limited parallelism. For example, researchers from Rice University have shown that for fully connected networks over sparse datasets such as Amazon-670K and Delicious-200K, the deep-learning training problem can be modeled as a search problem. This allows replacing the matrix multiplication operation with hash-table lookups. Their technique on CPUs provides higher performance than an implementation based on TensorFlow* for GPUs.
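To make the hash-table idea concrete, here is a minimal NumPy sketch. It illustrates the principle only, not the Rice group's actual system (which uses multiple hash tables and many further optimizations); the layer sizes, the SimHash scheme, and the function names are assumptions made for this example.

# Minimal sketch: replacing a dense layer's matrix multiply with a
# hash-table lookup. Illustrative only; the real system uses multiple
# tables and additional optimizations.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, n_bits = 512, 4096, 6            # illustrative sizes
W = rng.standard_normal((n_out, n_in))        # each row is one neuron
planes = rng.standard_normal((n_bits, n_in))  # random hyperplanes (SimHash)

def simhash(v):
    # Sign pattern of v against the hyperplanes -> integer bucket id.
    bits = (planes @ v) > 0
    return int(bits @ (1 << np.arange(n_bits)))

# Build the hash table once: bucket id -> indices of the neurons in it.
table = {}
for i in range(n_out):
    table.setdefault(simhash(W[i]), []).append(i)

def sparse_forward(x):
    # Activate only neurons whose weights hash to the input's bucket,
    # instead of computing the full W @ x product.
    active = table.get(simhash(x), [])
    out = np.zeros(n_out)
    if active:
        out[active] = W[active] @ x  # small, irregular gather + matmul
    return out

x = rng.standard_normal(n_in)
y = sparse_forward(x)
print(f"computed {np.count_nonzero(y)} of {n_out} activations")

Because only a small bucket of neurons is touched per input, the work per sample shrinks dramatically, and the irregular gather this requires suits CPU caches far better than GPU-style lockstep parallelism.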

3D CNNs, and even 2D CNNs with large batch sizes, require a massive amount of memory. Since CPU-managed hosts in cloud and data-center scenarios have much larger memory capacities than accelerators, running memory-hungry operations on CPUs is not merely attractive but often imperative. Accelerators such as TPUs provide high throughput at large batch sizes; however, for applications requiring real-time inference, large batch sizes are not preferred. At small batch sizes, CPUs generally provide competitive latency. A host of techniques can be applied to further tune deep-learning applications on CPUs, for example, hardware-aware pruning, vectorization, cache tiling, and approximate computing. Our survey paper summarizes many such techniques, and the sketch below illustrates one of them.
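As a concrete illustration, the following NumPy sketch shows the loop structure of cache tiling (blocked matrix multiplication). Production libraries such as oneDNN implement this blocking in optimized native code; the tile size and matrix shapes here are illustrative assumptions, and the NumPy version demonstrates only the structure, not real-world performance.

# Minimal sketch of cache tiling (blocked matrix multiplication).
# Each (tile x tile) block of A, B, and C is small enough to stay
# resident in cache while it is being reused.
import numpy as np

def tiled_matmul(A, B, tile=64):
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((m, n), dtype=A.dtype)
    for i in range(0, m, tile):          # block rows of C
        for j in range(0, n, tile):      # block columns of C
            for p in range(0, k, tile):  # accumulate over inner blocks
                C[i:i+tile, j:j+tile] += (
                    A[i:i+tile, p:p+tile] @ B[p:p+tile, j:j+tile]
                )
    return C

A = np.random.rand(256, 192).astype(np.float32)
B = np.random.rand(192, 320).astype(np.float32)
assert np.allclose(tiled_matmul(A, B), A @ B, atol=1e-3)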

 

Across the Board: From Tiny Wearables to Large Data Centers

IoT devices and wearables have tight power and area budgets, which precludes over-specialization. For example, a smartwatch chip cannot host separate accelerators for speech, audio, image, and video processing. In smartphones running on Android*, the programming support for the mobile GPU or DSP is not fully mature. In fact, on a typical mobile system-on-a-chip (SoC), the theoretical peak performance of the mobile CPU equals that of the mobile GPU. Further, data centers supporting web services, such as social networks, see significant fluctuation in computing demand over time. CPUs can meet this variability in demand because of their high availability and their efficiency for both deep-learning and non-deep-learning tasks. Finally, in extreme environments such as defense and medicine, which require security certifications, CPUs are sometimes the only platform of choice.

 

Not Missing the Obvious: Economy and Ease of Use

Accelerators require long design cycles and massive investment, and integrating them into existing ecosystems incurs high costs and significant engineering work. By contrast, the hardware and software stack of CPUs is already well established and well understood, and CPUs provide reasonable speedups on a broad range of applications. While large-scale companies have the resources to build and maintain custom accelerators, CPUs (or GPUs) remain the most feasible platform for other companies.

 

Future Outlook: Brighter Than You Think

Going forward, merely increasing peak performance will not be sufficient; more revolutionary improvements are required to boost the performance of a broad range of deep-learning applications, such as reinforcement learning and generative adversarial networks. Recent CPUs have begun to provide hardware support for low-precision computing. Once in-memory computing reaches maturity, the large caches of CPUs could turn into massive compute units. The development of open-source ISAs, such as RISC-V, will further break down the portability and proprietary barriers of accelerators.
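To give a flavor of what that low-precision hardware support accelerates, here is a minimal NumPy sketch of symmetric int8 quantization with int32 accumulation, the arithmetic pattern that instruction-set extensions such as AVX-512 VNNI execute natively. The per-tensor scaling scheme and the sizes are illustrative assumptions, and the sketch merely simulates the arithmetic in software.

# Minimal sketch: simulate int8 inference arithmetic in NumPy.
# Recent CPU instructions (e.g., AVX-512 VNNI) perform the int8
# multiply with int32 accumulation in hardware; here we mimic it.
import numpy as np

def quantize(x):
    # Symmetric per-tensor quantization: float32 -> int8 plus a scale.
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
W = rng.standard_normal((128, 128)).astype(np.float32)
x = rng.standard_normal(128).astype(np.float32)

qW, sW = quantize(W)
qx, sx = quantize(x)

# Integer matmul with wide (int32) accumulation, then rescale to float.
y_int8 = (qW.astype(np.int32) @ qx.astype(np.int32)) * (sW * sx)
y_fp32 = W @ x
print("max abs error:", float(np.abs(y_int8 - y_fp32).max()))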

The metrics of interest are numerous and varied, and so are the state-of-the-art deep-learning models. We believe that instead of a “general-purpose processor versus accelerator” debate, the future will see a CPU-accelerator heterogeneous computing approach that brings together the best of both worlds.

 

About the Author

Dr. Sparsh Mittal is currently an assistant professor at IIT Roorkee, India. He received a Bachelor of Technology degree from IIT Roorkee, India, and a PhD degree from Iowa State University (ISU), USA. He has worked as a postdoctoral research associate at Oak Ridge National Laboratory (ORNL), USA, and as an assistant professor in CSE at IIT Hyderabad. He graduated at the top of his B.Tech batch, and his B.Tech project received the best project award. He has received a fellowship from ISU and a performance award from ORNL. He has published more than 90 papers at top venues. He is an associate editor of Elsevier’s Journal of Systems Architecture. He has given invited talks at the ISC Conference in Germany, New York University, the University of Michigan, and Xilinx (Hyderabad).