The exponential growth in the use of large, deep neural networks (DNNs) has intensified the need to train these networks in hours, or even minutes.

This kind of speed cannot be achieved on a single machine: no single node can satisfy the compute, memory, and I/O requirements of today’s state-of-the-art DNNs.

Meeting these demands requires scalable, efficient distributed training, a capability provided by deep-learning frameworks.

Join Intel® software engineer and deep-learning expert Mikhail Smorkalov for an overview of three Intel®-optimized deep-learning frameworks that boost communication performance on distributed workloads compared with existing approaches: Caffe*, Horovod* (for TensorFlow*), and nGraph.
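The details vary by framework, but as a rough, hypothetical illustration of the data-parallel pattern these frameworks support, the sketch below uses Horovod with the TensorFlow Keras API; the model, dataset, and hyperparameters are placeholders, not material from the webinar.

    # Minimal data-parallel training sketch using Horovod with TensorFlow (Keras API).
    # Illustrative only: model, dataset, and hyperparameters are placeholders.
    import tensorflow as tf
    import horovod.tensorflow.keras as hvd

    hvd.init()  # one process per worker; hvd.size() is the total worker count

    # Each worker reads only its own shard of the training data.
    (x, y), _ = tf.keras.datasets.mnist.load_data()
    dataset = (tf.data.Dataset.from_tensor_slices((x[..., tf.newaxis] / 255.0, y))
               .shard(hvd.size(), hvd.rank())
               .shuffle(10_000)
               .batch(64))

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])

    # Scale the learning rate with the number of workers and wrap the optimizer
    # so gradients are averaged (allreduce) across workers at every step.
    opt = hvd.DistributedOptimizer(tf.keras.optimizers.SGD(0.01 * hvd.size()))
    model.compile(loss='sparse_categorical_crossentropy', optimizer=opt,
                  metrics=['accuracy'])

    callbacks = [
        # Broadcast the initial weights from rank 0 so every worker starts identically.
        hvd.callbacks.BroadcastGlobalVariablesCallback(0),
    ]
    model.fit(dataset, epochs=1, callbacks=callbacks,
              verbose=1 if hvd.rank() == 0 else 0)

Launched with, for example, horovodrun -np 4 python train.py, each process trains on its own data shard while Horovod handles the cross-process gradient averaging.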


Other Resources

Find out more about these optimized frameworks, including how to get them.


Mikhail Smorkalov
Software engineer, Intel Corporation

Mikhail specializes in deep-learning technologies. His responsibilities include defining deep-learning architectures, developing and deploying new features for the Intel® Machine Learning Scaling Library (Intel® MLSL), and scaling deep-learning workloads to some of the fastest supercomputers in the world.

Before joining Intel in 2014, Mikhail spent years developing software and middleware for the telecom industry. He holds a master of science in computational mathematics and cybernetics from the State University of Nizhni Novgorod.


Intel® Distribution of OpenVINO™ Toolkit

Deploy deep learning inference with unified programming models and broad support for trained neural networks from popular deep learning frameworks.
