Get Started with the
Intel® AI Analytics Toolkit
The following instructions assume you have installed the Intel® oneAPI software.
See the Intel® AI Analytics Toolkit page for installation options.
Follow these steps to build and run a sample with the
Intel® AI Analytics Toolkit
(AI Kit):
Standard Python installations are fully compatible with the AI Kit, but the
Intel® Distribution for Python* is recommended for the best performance.
No special modifications to your existing projects are required to start using them with this toolkit.
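Before running any sample, the oneAPI environment must be initialized in your shell. A minimal sketch is shown below; the install prefix `/opt/intel/oneapi` is the default Linux location and is an assumption here, so adjust it to match your installation. The script checks for `setvars.sh` before sourcing it so it fails gracefully on machines without the toolkit.

```shell
# Hypothetical default install prefix; override ONEAPI_ROOT if yours differs.
ONEAPI_ROOT=${ONEAPI_ROOT:-/opt/intel/oneapi}

if [ -f "$ONEAPI_ROOT/setvars.sh" ]; then
  # Sets PATH, LD_LIBRARY_PATH, and related variables for all installed components.
  . "$ONEAPI_ROOT/setvars.sh"
else
  echo "oneAPI environment script not found at $ONEAPI_ROOT/setvars.sh"
fi
```

After sourcing, conda environments shipped with the AI Kit (for example, ones preconfigured for TensorFlow* or PyTorch*) become available to activate.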
Components of This Toolkit
The AI Kit includes:
- Intel® Optimization for PyTorch*: the Intel® oneAPI Deep Neural Network Library (oneDNN) is included in PyTorch as the default math kernel library for deep learning.
- Intel® Optimization for TensorFlow*: this version integrates primitives from oneDNN into the TensorFlow runtime for accelerated performance.
- Intel® Distribution for Python*: get faster Python application performance right out of the box, with minimal or no changes to your code. This distribution is integrated with Intel® Performance Libraries such as the Intel® oneAPI Math Kernel Library and the Intel® oneAPI Data Analytics Library.
- Intel® Distribution of Modin*: seamlessly scale pandas preprocessing across multiple nodes with this intelligent, distributed dataframe library, which offers an API identical to pandas. This distribution is available only when you install the AI Kit with the Conda* package manager.
- Model Zoo for Intel® Architecture: access pretrained models, sample scripts, best practices, and step-by-step tutorials for many popular open source machine learning models optimized by Intel to run on Intel® Xeon® Scalable processors.
- Intel® Neural Compressor: quickly deploy low-precision inference solutions on popular deep-learning frameworks such as TensorFlow*, PyTorch*, MXNet*, and ONNX* (Open Neural Network Exchange) runtime.
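To confirm that the oneDNN-backed framework builds described above are active in your environment, a quick check like the following can help. This is a minimal sketch: it assumes nothing about what is installed, degrades gracefully when a framework is missing, and uses `TF_ENABLE_ONEDNN_OPTS` (the TensorFlow environment variable that toggles oneDNN optimizations) and `torch.backends.mkldnn.is_available()` (PyTorch's query for its oneDNN/MKL-DNN backend).

```python
import os

# oneDNN optimizations in TensorFlow are controlled by this variable
# (enabled by default in recent releases); set it before importing TF.
os.environ.setdefault("TF_ENABLE_ONEDNN_OPTS", "1")
onednn_flag = os.environ["TF_ENABLE_ONEDNN_OPTS"]

results = {}

try:
    import torch
    # Reports whether PyTorch was built with the oneDNN (MKL-DNN) backend.
    results["pytorch_onednn"] = torch.backends.mkldnn.is_available()
except ImportError:
    results["pytorch_onednn"] = "torch not installed"

try:
    import tensorflow as tf
    results["tensorflow"] = tf.__version__
except ImportError:
    results["tensorflow"] = "tensorflow not installed"

print("TF_ENABLE_ONEDNN_OPTS =", onednn_flag)
print(results)
```

Run this inside the AI Kit conda environment; both entries should report an available backend rather than a "not installed" message.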
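Because Modin's API is identical to pandas, adopting it is typically a one-line change: swap the import and the rest of the code is untouched. The sketch below illustrates that drop-in pattern; it falls back to stock pandas when Modin is not installed so the same script runs in either environment.

```python
# Drop-in swap: only the import line changes when moving to Modin.
try:
    import modin.pandas as pd  # distributed dataframe, scales across nodes
except ImportError:
    import pandas as pd        # identical API, single-node fallback

df = pd.DataFrame({"x": [1, 2, 3], "y": [4, 5, 6]})
total = int(df["x"].sum())  # same call whether pd is Modin or pandas
print(total)
```

Note that Modin selects an execution engine (such as Ray* or Dask*) at import time; no per-operation changes are needed in user code.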