Based on Python*, the TensorFlow* deep learning framework is designed for flexible implementation of, and experimentation with, modern deep neural networks. In collaboration with Google*, TensorFlow* has been directly optimized for Intel® architecture to achieve high performance on Intel® Xeon® Scalable processors.
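Much of the Intel-specific tuning is exposed through standard OpenMP* environment variables that the optimized math libraries read at load time. The sketch below shows commonly cited knobs; the specific values are illustrative assumptions and should be tuned per machine, and they must be set before the framework is imported.

```python
import os

# Thread-tuning environment variables commonly used with Intel-optimized
# TensorFlow on Xeon processors. The values below are illustrative
# assumptions, not recommendations for every workload.
os.environ["KMP_BLOCKTIME"] = "0"  # let OpenMP worker threads sleep immediately
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"  # pin threads to cores
os.environ["OMP_NUM_THREADS"] = str(os.cpu_count() or 1)  # one thread per logical core

# These must be set before `import tensorflow`, which reads them once at load time.
print("OMP_NUM_THREADS =", os.environ["OMP_NUM_THREADS"])
```

Setting affinity and blocktime this way reduces thread migration and idle spinning, which matters most for the cache-sensitive convolution and matrix-multiply kernels that dominate deep learning workloads.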
Intel recently launched the second-generation Intel® Xeon® Scalable processors (formerly code-named Cascade Lake), adding Intel® Deep Learning Boost (Intel® DL Boost) technology. Fast math kernels that take advantage of these hardware advances have been added to the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN). The Intel MKL-DNN optimizations are abstracted and integrated directly into the PyTorch* framework, so end users can take advantage of this technology with minimal changes to their code.
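The speedup from Intel DL Boost comes from fusing int8 multiply-accumulate operations into a single instruction (VNNI). A minimal NumPy sketch of the underlying arithmetic, written as an illustration rather than as the library's actual implementation (the function name and scaling scheme here are assumptions):

```python
import numpy as np

# Illustrative sketch of int8 quantized inference arithmetic, the pattern
# that Intel DL Boost (VNNI) accelerates in hardware: quantize fp32 tensors
# to int8, multiply with int32 accumulation, then dequantize the result.
def quantize(x, scale):
    """Symmetric per-tensor quantization to int8 (assumed scheme)."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

rng = np.random.default_rng(0)
activations = rng.standard_normal((4, 8)).astype(np.float32)
weights = rng.standard_normal((8, 3)).astype(np.float32)

a_scale = np.abs(activations).max() / 127
w_scale = np.abs(weights).max() / 127
a_q = quantize(activations, a_scale)
w_q = quantize(weights, w_scale)

# int8 x int8 multiply with int32 accumulation: the step VNNI fuses.
acc = a_q.astype(np.int32) @ w_q.astype(np.int32)
result = acc.astype(np.float32) * (a_scale * w_scale)  # dequantize

# The quantized result closely tracks the full-precision reference.
max_err = np.max(np.abs(result - activations @ weights))
print("max abs error vs fp32:", max_err)
```

Trading fp32 for int8 cuts memory bandwidth and lets the hardware process more elements per instruction, which is why int8 inference on DL Boost-capable Xeon processors can be substantially faster at a small accuracy cost.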
Seamlessly scale your AI models to big data clusters with thousands of nodes for distributed training or inference. Built on top of the open source frameworks Apache Spark*, TensorFlow*, PyTorch*, OpenVINO™, and Ray*, this unified analytics and AI platform has an extensible architecture that supports additional libraries and frameworks.
Intel® oneAPI products deliver the tools needed to deploy applications and solutions across these architectures. The set of complementary toolkits, a base kit plus specialty add-ons, simplifies programming and helps developers improve efficiency and innovation.