Get Started

  • 2021.4
  • 09/27/2021

Get Started with the Intel® oneAPI AI Analytics Toolkit

The following instructions assume you have installed the Intel® oneAPI software. See the Intel oneAPI Toolkits page for installation options.
The sections below describe how to get started with the Intel® oneAPI AI Analytics Toolkit (AI Kit).

Migrating Existing Projects

No special modifications to your existing projects are required to start using them with this toolkit.

Components of This Toolkit

The AI Kit includes:
  • Intel® Optimization for PyTorch*: the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) is included in PyTorch as the default math kernel library for deep learning. See this article on the Intel® Developer Zone for more details.
  • Intel® Optimization for TensorFlow*: this version integrates primitives from the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) into the TensorFlow runtime for accelerated performance.
  • Intel® Distribution for Python*: get faster Python application performance right out of the box, with minimal or no changes to your code. This distribution is integrated with Intel® Performance Libraries such as the Intel® oneAPI Math Kernel Library and the Intel® oneAPI Data Analytics Library. It also includes daal4py, a Python module integrated with the Intel® oneAPI Data Analytics Library, and the Python Data Parallel Processing Library (PyDPPL), a lightweight Python wrapper for Data Parallel C++ and SYCL that provides a data parallel interface and abstractions to efficiently tap into the device management features of CPUs and GPUs running on Intel® Architecture.
  • Intel® Distribution of Modin*: seamlessly scale preprocessing across multiple nodes with this intelligent, distributed dataframe library, which has an API identical to pandas. This distribution is available only when you install the Intel® AI Analytics Toolkit with the Conda* Package Manager.
    Standard Python installations are fully compatible with the AI Kit, but the Intel® Distribution for Python* is preferred.
  • Model Zoo for Intel® Architecture: access pretrained models, sample scripts, best practices, and step-by-step tutorials for many popular open source machine learning models optimized by Intel to run on Intel® Xeon® Scalable processors.
  • Low Precision Optimization Tool: an open-source Python library that provides a unified, low-precision inference interface across multiple deep learning frameworks optimized by Intel.
  • Intel® Extension for Scikit-learn*: a seamless way to speed up your scikit-learn applications using the Intel® oneAPI Data Analytics Library (oneDAL).
    Patching scikit-learn makes it a well-suited machine learning framework for dealing with real-life problems.
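The scikit-learn acceleration above works by patching: you call the extension once, and supported estimators are rerouted to oneDAL while your modeling code stays unchanged. A minimal sketch, assuming the sklearnex package shipped with the AI Kit is installed (the guard falls back to stock scikit-learn so the same script runs either way):

```python
import numpy as np

# Patch scikit-learn with the Intel extension if it is available.
try:
    from sklearnex import patch_sklearn
    patch_sklearn()  # reroutes supported estimators to oneDAL
except ImportError:
    pass  # stock scikit-learn; the code below is identical either way

# Import estimators *after* patching so the accelerated versions are used.
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((1000, 3))

# Train exactly as with stock scikit-learn.
model = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print(model.cluster_centers_.shape)
```

Note that the estimator imports come after `patch_sklearn()`; patching only affects classes imported afterward.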
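Because the Intel® Distribution of Modin* mirrors the pandas API, adopting it is typically a one-line import change. A hedged sketch (the `modin.pandas` module name follows the Modin documentation; the fallback keeps the script runnable on stock pandas when Modin is not installed):

```python
# Drop-in swap: use Modin's distributed dataframes when available,
# otherwise fall back to single-node pandas with no other code changes.
try:
    import modin.pandas as pd  # pandas-identical API, scales across nodes
except ImportError:
    import pandas as pd  # stock pandas fallback

df = pd.DataFrame({"city": ["a", "b", "a"], "sales": [10, 20, 30]})
totals = df.groupby("city")["sales"].sum()
print(totals.loc["a"])  # → 40
```

The point of the identical API is exactly this: the groupby/aggregate code is the same whichever library the import resolves to.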
Although not required to run projects, additional programming options and instructions specific to other programming languages are available through the Intel Data Analytics Acceleration Library.

Product and Performance Information


Performance varies by use, configuration and other factors. Learn more at