Configure Your System - Intel® AI Tools
If you have not already installed the AI Tools, refer to Installing the Intel® AI Tools.
Activate AI Tools Base Environment
Linux
Open a terminal window and type the following:
If the default path is used during the installation:
source $HOME/intel/oneapi/intelpython/bin/activate
If a non-default path is used:
source <custom_path>/bin/activate
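The two activation cases above can be combined into one sketch. Here `CUSTOM_PATH` is a hypothetical variable used only for illustration; leave it unset for a default install:

```shell
# Pick the activation script for a default or custom install.
# CUSTOM_PATH is a hypothetical variable for illustration; it is NOT set
# by the installer. Unset, the default path under $HOME is used.
ACTIVATE="${CUSTOM_PATH:-$HOME/intel/oneapi/intelpython}/bin/activate"
echo "source $ACTIVATE"
```

This only prints the command to run; sourcing must happen in your interactive shell so the environment changes persist.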
Verify that conda is installed and running on your system, and list environments, by typing:
conda --version
conda env list
The Intel® AI Reference Models folder will be located in $HOME/intel/oneapi/ai_reference_models.
If a custom path was used, Intel® AI Reference Models will be installed one level above the custom path: <custom_path>/..
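As a sketch of the layout described above, the Reference Models folder sits next to the Python environment root, so it can be derived with `dirname`. `CUSTOM_PATH` is a hypothetical variable for a non-default install:

```shell
# Derive the AI Reference Models location from the install root.
# Default:  $HOME/intel/oneapi/intelpython -> $HOME/intel/oneapi/ai_reference_models
# Custom:   <custom_path> -> <custom_path>/../ai_reference_models
ENV_ROOT="${CUSTOM_PATH:-$HOME/intel/oneapi/intelpython}"
MODELS_DIR="$(dirname "$ENV_ROOT")/ai_reference_models"
echo "$MODELS_DIR"
[ -d "$MODELS_DIR" ] && echo "found" || echo "not found at $MODELS_DIR"
```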
Next Steps
For Conda users, continue to the next section.
If you are developing on a GPU, continue to GPU Users.
Conda Environments in the AI Tools
The following conda environments are included in the AI Tools. For additional information, please explore each environment's related Getting Started Sample linked in the table below.
| Conda Environment Name | Note | Getting Started Sample |
| --- | --- | --- |
| tensorflow | Intel Extension for TensorFlow* (XPU) | Sample |
| tensorflow-gpu | Intel Extension for TensorFlow* (XPU) | Sample |
| pytorch | Intel Extension for PyTorch* (CPU), Intel oneCCL Bindings for PyTorch (CPU) | Intel Extension for PyTorch Sample, Intel oneCCL Bindings for PyTorch Sample |
| pytorch-gpu | Intel Extension for PyTorch* (GPU), Intel oneCCL Bindings for PyTorch (GPU) | Intel Extension for PyTorch Sample, Intel oneCCL Bindings for PyTorch Sample |
| base | Intel Distribution for Python* | Sample |
| modin | Intel Distribution of Modin* | Sample |
For more samples, browse the full GitHub repository: Intel® AI Tools Code Samples.
- From the same terminal window where the AI Tools Base Environment was activated, identify the Conda environments on your system:
conda env list
You will see results similar to this:
# conda environments:
#
base            *  $HOME/intel/oneapi/intelpython
pytorch            $HOME/intel/oneapi/intelpython/envs/pytorch
pytorch-gpu        $HOME/intel/oneapi/intelpython/envs/pytorch-gpu
tensorflow         $HOME/intel/oneapi/intelpython/envs/tensorflow
tensorflow-gpu     $HOME/intel/oneapi/intelpython/envs/tensorflow-gpu
modin              $HOME/intel/oneapi/intelpython/envs/modin
- Additional environments can be activated with:
conda activate <environment>
For example, to activate the TensorFlow* or PyTorch* environment:
TensorFlow:
conda activate tensorflow
PyTorch:
conda activate pytorch
Verify the new environment is active. An asterisk will be displayed next to the active environment.
conda env list
Additionally, the components installed in the active environment can be listed with:
conda list
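As a small illustration of how the asterisk marks the active environment, this sketch parses a fabricated sample of `conda env list` output (it does not query conda itself):

```shell
# Parse sample `conda env list` output; the '*' column marks the active env.
# The sample text below is fabricated for illustration.
sample='# conda environments:
base                  *  /home/user/intel/oneapi/intelpython
pytorch                  /home/user/intel/oneapi/intelpython/envs/pytorch'
active=$(printf '%s\n' "$sample" | awk '$2 == "*" {print $1}')
echo "active environment: $active"
```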
GPU Users
For those who are developing on a GPU, follow these steps:
1. Install GPU drivers
If you already installed GPU drivers by following the Installation Guide, you may skip this step. Otherwise, follow the directions in the Installation Guide to install the drivers now.
2. Add User to Video Group
For GPU compute workloads, non-root (normal) users typically do not have access to the GPU device, so binaries compiled for the GPU device will fail when executed by a normal user. To fix this, add the non-root user(s) to the video group:
sudo usermod -a -G video <username>
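A quick way to check membership before (or after) running `usermod`; this sketch only inspects the current user's groups and changes nothing:

```shell
# Check whether the current user is already in the "video" group.
# Note: after usermod, the change takes effect on the next login.
if id -nG | tr ' ' '\n' | grep -qx video; then
  echo "already in video group"
else
  echo "not in video group yet"
fi
```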
3. Disable Hangcheck
For applications with long-running GPU compute workloads in native environments, disable hangcheck. This is not recommended for virtualized environments or other standard GPU usage, such as gaming.
A workload that takes more than four seconds of GPU hardware execution time is considered long-running. By default, individual threads that qualify as long-running workloads are treated as hung and terminated. Disabling the hangcheck timeout avoids this problem.
- Open a terminal.
- Open the grub file in /etc/default.
- In the grub file, find the line GRUB_CMDLINE_LINUX_DEFAULT="".
- Enter this text between the quotes (""):
i915.enable_hangcheck=0
- Run this command:
sudo update-grub
- Reboot the system. Hangcheck remains disabled.
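The edit in the steps above can also be scripted. This sketch operates on a local copy of the file rather than /etc/default/grub, so it is safe to run without root:

```shell
# Append i915.enable_hangcheck=0 inside GRUB_CMDLINE_LINUX_DEFAULT.
# Works on a throwaway copy here; on a real system edit /etc/default/grub
# as root, then run `sudo update-grub` and reboot.
grub_copy=$(mktemp)
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"\n' > "$grub_copy"
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 i915.enable_hangcheck=0"/' "$grub_copy"
cat "$grub_copy"
```

After rebooting, the effective value can usually be read from /sys/module/i915/parameters/enable_hangcheck (0 means disabled), assuming the i915 driver is loaded.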
Now that you have configured your system, proceed to Build and Run a Sample Project.