Get Started with the Intel® AI Tools for Linux*

ID 766885
Date 11/07/2023
Public

Build and Run a Sample Using the Command Line


In this section, you will run a simple "Hello World" project to familiarize yourself with the process of building projects, and then build your own project.

NOTE:
If you have not already configured your development environment, go to Configure your system then return to this page. If you have already completed the steps to configure your system, continue with the steps below.

You can use either a terminal window or Visual Studio Code* when working from the command line. For details on how to use VS Code locally, see Basic Usage of Visual Studio Code with oneAPI on Linux*. To use VS Code remotely, see Remote Visual Studio Code Development with oneAPI on Linux*.

Build and Run a Sample Project

The samples below must be cloned to your system before you can build them. Each entry gives the sample's name, a description, and how to clone and build it.
Intel Extension for PyTorch Getting Started, Intel oneCCL Bindings for PyTorch

Train a PyTorch model and run inference with the Intel® Deep Neural Network Library (Intel® DNNL) enabled.

Intel® Extension for PyTorch* extends PyTorch* with optimizations for an extra performance boost on Intel hardware.

Clone the oneCCL Bindings for PyTorch or the Intel Extension for PyTorch sample, then follow the directions in README.md to build and run the sample.
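
As a minimal sketch of the pattern these samples follow (the model and tensor shapes here are illustrative placeholders, not the sample's own), inference with Intel® Extension for PyTorch* typically looks like this:

import torch
import intel_extension_for_pytorch as ipex

# Placeholder model for illustration; the sample trains its own model.
model = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU())
model.eval()

# ipex.optimize() applies Intel-specific operator and graph optimizations.
model = ipex.optimize(model)

with torch.no_grad():
    output = model(torch.randn(1, 128))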
TensorFlow Hello World, Intel Extension for TensorFlow Getting Started

TensorFlow* optimized on Intel hardware enables Intel® DNNL calls by default. The sample implements an example neural network with one convolution layer and one ReLU layer.

Intel® Extension for TensorFlow* is a heterogeneous, high-performance deep learning extension plugin based on the TensorFlow PluggableDevice interface. This plugin brings Intel XPU devices (GPU, CPU, etc.) into the TensorFlow open source community for AI workload acceleration.

Clone the TensorFlow_HelloWorld or the Intel Extension for TensorFlow sample, then follow the directions in README.md to build and run the sample.
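
As a rough sketch of the network the Hello World sample describes (one convolution layer and one ReLU layer; the layer sizes below are illustrative assumptions, not the sample's values):

import numpy as np
import tensorflow as tf  # with Intel Extension for TensorFlow installed,
                         # its devices are registered via PluggableDevice

# One convolution layer followed by one ReLU layer.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(4, kernel_size=3),
    tf.keras.layers.ReLU(),
])

output = model(np.random.rand(1, 28, 28, 1).astype(np.float32))
print(output.shape)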
Intel® Distribution of Modin* Getting Started

This Getting Started sample shows how to use distributed pandas through the Modin* package.

To get the Intel® Distribution of Modin*, you must install the AI Tools using the Conda* package manager.

After the AI Tools are installed with Conda, clone Intel® Distribution of Modin* Getting Started, then follow the directions in README.md to build and run the sample.
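
Because Modin keeps the pandas API, switching the import is usually the only change. A minimal sketch with toy data (assuming a supported execution engine such as Ray or Dask is installed in the environment):

# Modin preserves the pandas API; only the import changes.
import modin.pandas as pd

df = pd.DataFrame({"a": range(10000), "b": range(10000)})
print(df.describe())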
Intel® AI Reference Models
  • Demonstrate the AI workloads and deep learning models Intel has optimized and validated to run on Intel hardware
  • Show how to efficiently execute, train, and deploy models optimized for Intel Architecture
  • Make it easy to get started running optimized models on Intel hardware in the cloud or on bare metal

Intel® AI Reference Models are included in your AI Tools installation, typically at /opt/intel/oneapi/modelzoo/latest/models. Instructions for navigating the models, using the samples, and running the benchmarks are here: https://github.com/IntelAI/models/blob/v2.4.0/docs/general/tensorflow/AIKit.md#navigate-to-the-model-zoo

Intel® Neural Compressor

Intel® Neural Compressor is an open-source Python* library designed to help you quickly deploy low-precision inference solutions on popular deep learning frameworks such as TensorFlow*, PyTorch*, MXNet*, and ONNX* (Open Neural Network Exchange) runtime.

Clone neural-compressor, then follow the directions in README.md to build and run the sample.
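
As a hedged sketch of post-training quantization with the library (assuming the 2.x fit API; exact names vary between releases, so treat the sample's README as authoritative):

import torch
from neural_compressor.config import PostTrainingQuantConfig
from neural_compressor.quantization import fit

# Illustrative FP32 model and calibration data; the sample supplies its own.
model = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU())
calib_loader = torch.utils.data.DataLoader(
    [(torch.randn(128), 0) for _ in range(32)], batch_size=8)

# Post-training quantization produces a low-precision (INT8) model.
q_model = fit(model, PostTrainingQuantConfig(), calib_dataloader=calib_loader)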
Intel® Extension for Scikit-learn*

Provides a seamless way to speed up your scikit-learn applications by using the Intel® oneAPI Data Analytics Library (oneDAL).

Clone the Intel® Extension for Scikit-learn* sample, then follow the directions in the README.md to build and run the sample.
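
The extension's usual entry point is patching scikit-learn before the estimators are imported; a minimal sketch (the KMeans workload here is an illustrative choice):

import numpy as np
from sklearnex import patch_sklearn

patch_sklearn()  # must run before importing scikit-learn estimators

# After patching, supported estimators transparently use oneDAL.
from sklearn.cluster import KMeans

X = np.random.rand(1000, 8)
labels = KMeans(n_clusters=4).fit_predict(X)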
For more samples, browse the full GitHub repository: AI Tools Code Samples.

To see a list of components that support CMake, see Use CMake with oneAPI Applications.

Build Your Own Project

No special modifications to your existing Python projects are required to start using them with these tools. For new projects, the process closely follows the one used for the sample Hello World projects. Refer to the Hello World README files for instructions.

Maximizing Performance

Documentation is available to help you maximize performance for both TensorFlow and PyTorch.

Configure Your Environment

NOTE:
If your virtual environment is not available, or if you wish to add packages to your virtual environment, ensure you have completed the steps to clone the Conda environment described in the installation instructions.

Source the following script to use the Intel® Distribution for Python*:

Component Directory Layout

For system wide installations:

. /opt/intel/oneapi/setvars.sh

For private installations:

. ~/intel/oneapi/setvars.sh

Unified Directory Layout

For system wide installations:

. /opt/intel/oneapi/<toolkit-version>/oneapi-vars.sh

For private installations:

. ~/intel/oneapi/<toolkit-version>/oneapi-vars.sh

NOTE:
The setvars.sh script can be managed using a configuration file, which is especially helpful if you need to initialize specific versions of libraries or the compiler, rather than defaulting to the "latest" version. For more details, see Using a Configuration File to Manage Setvars.sh. If you need to set up the environment in a non-POSIX shell, see oneAPI Development Environment Setup for more configuration options.

To switch environments, you must first deactivate the active environment.

The following example demonstrates configuring the environment, activating TensorFlow*, and then returning to the Intel Distribution for Python:

. /opt/intel/oneapi/setvars.sh
conda activate tensorflow
conda deactivate
conda activate root
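
To confirm which interpreter and framework build the active environment provides, a quick check such as the following can help (assuming TensorFlow is installed in the activated environment):

# Run inside the activated environment.
import sys
print(sys.executable)   # path should point into the active Conda environment

import tensorflow as tf
print(tf.__version__)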