Get Started

  • 2022.2
  • 04/11/2022

Build and Run a Sample Using the Command Line

Intel® AI Analytics Toolkit
In this section, you will run a simple "Hello World" project to familiarize yourself with the process of building projects, and then build your own project.
If you have not already configured your development environment, go to Configure Your System, then return to this page. If you have already completed the steps to configure your system, continue with the steps below.
You can use either a terminal window or Visual Studio Code* when working from the command line. For details on how to use VS Code locally, see Basic Usage of Visual Studio Code with oneAPI on Linux*. To use VS Code remotely, see Remote Visual Studio Code Development with oneAPI on Linux*.

Build and Run a Sample Project

The samples below must be cloned to your system before you can build them. Each entry gives the name of the sample, a description, and how to clone and build it.
PyTorch Hello World
Shows how to train a PyTorch model and run inference with the Intel® Deep Neural Network Library (Intel® DNNL) enabled. Clone PyTorch_HelloWorld, then follow the directions in README.md to build and run the sample.
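As a rough sketch of the workflow this sample covers (illustrative only, not the sample's actual code; the tiny model and synthetic data are assumptions), the following trains a small model and then runs inference. On supported CPUs, PyTorch dispatches to oneDNN-optimized kernels automatically:
import torch
import torch.nn as nn

# Tiny stand-in model: two linear layers with a ReLU between them.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Synthetic data: 32 samples, 4 features, 2 classes.
x = torch.randn(32, 4)
y = torch.randint(0, 2, (32,))

# Short training loop.
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# Inference: oneDNN-accelerated kernels are used automatically on CPU.
with torch.no_grad():
    print(model(x).argmax(dim=1)[:5])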
TensorFlow Hello World
Shows how TensorFlow*, optimized for Intel hardware, enables Intel® DNNL calls by default. The sample implements an example neural network with one convolution layer and one ReLU layer. Clone TensorFlow_HelloWorld, then follow the directions in README.md to build and run the sample.
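To give a flavor of the sample's structure (a hedged sketch under assumed input shapes, not the sample itself), the following builds a network with one convolution layer and one ReLU layer; with Intel-optimized TensorFlow, the convolution runs on Intel® DNNL primitives by default:
import numpy as np
import tensorflow as tf

# One convolution layer followed by one ReLU layer, matching the
# sample's description.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(4, kernel_size=3),
    tf.keras.layers.ReLU(),
])

# Forward pass on random data; Intel-optimized TensorFlow dispatches
# the convolution to oneDNN primitives automatically.
x = np.random.rand(1, 28, 28, 1).astype(np.float32)
print(model(x).shape)  # (1, 26, 26, 4)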
Intel® Distribution of Modin* Getting Started
This Getting Started sample shows how to run distributed Pandas operations using the Modin* package. To get the Intel® Distribution of Modin*, you must install the AI Kit using the Conda* package manager. After the AI Kit is installed with Conda, clone Intel® Distribution of Modin* Getting Started, then follow the directions in README.md to build and run the sample.
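In most code, Modin is a drop-in replacement for Pandas: you change only the import. A minimal sketch (illustrative; the DataFrame contents here are assumptions):
# Swap "import pandas as pd" for the Modin equivalent; the rest of the
# Pandas-style code is unchanged, but operations can run in parallel.
import modin.pandas as pd

df = pd.DataFrame({"a": range(1_000_000), "b": range(1_000_000)})
print(df["a"].mean())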
Model Zoo for Intel® Architecture
The Model Zoo samples:
  • Demonstrate the AI workloads and deep learning models Intel has optimized and validated to run on Intel hardware
  • Show how to efficiently execute, train, and deploy models optimized for Intel Architecture
  • Make it easy to get started running optimized models on Intel hardware in the cloud or on bare metal
Model Zoo for Intel® Architecture is included in your installation of the Intel® oneAPI AI Analytics Toolkit, typically at /opt/intel/oneapi/modelzoo/latest/models. Instructions for navigating the zoo, using the samples, and running the benchmarks are here: https://github.com/IntelAI/models/blob/v2.4.0/docs/general/tensorflow/AIKit.md#navigate-to-the-model-zoo
Intel® Neural Compressor
Intel® Neural Compressor is an open-source Python* library designed to help you quickly deploy low-precision inference solutions on popular deep-learning frameworks such as TensorFlow*, PyTorch*, MXNet*, and ONNX* (Open Neural Network Exchange) runtime.
Clone neural-compressor, then follow the directions in README.md to build and run the sample.
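The typical flow is post-training quantization of an existing model against a small calibration set. A hedged sketch follows; the API names used here (PostTrainingQuantConfig, fit) match recent Neural Compressor releases and are assumptions, so check the version shipped with your toolkit:
import torch
import torch.nn as nn
# Assumed API, per recent neural-compressor releases; may differ by version.
from neural_compressor import PostTrainingQuantConfig
from neural_compressor.quantization import fit

# An existing FP32 model to be quantized (stand-in for a real network).
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2)).eval()

# A small calibration dataloader drives the choice of quantization parameters.
calib_loader = torch.utils.data.DataLoader(
    [(torch.randn(4), 0) for _ in range(16)], batch_size=4
)

# Produce a low-precision (INT8) model for faster CPU inference.
q_model = fit(model=model, conf=PostTrainingQuantConfig(), calib_dataloader=calib_loader)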
Intel® Extension for Scikit-learn*
Provides a seamless way to speed up your scikit-learn application using the Intel® oneAPI Data Analytics Library (oneDAL).
Clone Intel® Extension for Scikit-learn*, then follow the directions in the README.md to build and run the sample.
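The extension works by patching scikit-learn before estimators are imported, so supported algorithms transparently dispatch to oneDAL. A minimal sketch (the KMeans workload here is an assumption, chosen only to show the pattern):
import numpy as np

# Patch scikit-learn first, then import estimators as usual; supported
# algorithms now run on oneDAL-accelerated implementations.
from sklearnex import patch_sklearn
patch_sklearn()

from sklearn.cluster import KMeans

x = np.random.rand(1000, 3)
print(KMeans(n_clusters=4, random_state=0).fit(x).inertia_)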
For more samples, browse the full GitHub repository.
To see a list of components that support CMake, see Use CMake with oneAPI Applications.

Build Your Own Project

No special modifications to your existing Python projects are required to start using them with this toolkit. For new projects, the process closely follows the process used for creating sample Hello World projects. Refer to the Hello World README files for instructions.
Maximizing Performance
You can get documentation to help you maximize performance for either TensorFlow or PyTorch.
Configure Your Environment
If your virtual environment is not available, or if you wish to add packages to your virtual environment, ensure you have completed the steps in Use the Conda Clone Function to Add Packages as a Non-Root User.
If you are developing outside of a container, source the following script to use the Intel® Distribution for Python*:
. <install_dir>/setvars.sh
where <install_dir> is where you installed this toolkit. By default, the install directory is:
  • Root or sudo installations: /opt/intel/oneapi
  • Local user installations: ~/intel/oneapi
The setvars.sh script can be managed using a configuration file, which is especially helpful if you need to initialize specific versions of libraries or the compiler rather than defaulting to the "latest" version. For more details, see Using a Configuration File to Manage Setvars.sh.
If you need to set up the environment in a non-POSIX shell, see oneAPI Development Environment Setup for more configuration options.
To switch environments, you must first deactivate the active environment.
The following example demonstrates configuring the environment, activating TensorFlow, and then switching to PyTorch:
. <install_dir>/setvars.sh
conda activate tensorflow
conda deactivate
conda activate pytorch
To return to the Intel Distribution for Python after activating PyTorch or TensorFlow:
conda activate root

Product and Performance Information

1 Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.