
AI Tools Selector (Preview)

Achieve End-to-End Performance for AI Workloads, Powered by oneAPI

The AI Tools Selector covers Data Analytics, Classical Machine Learning, Deep Learning, and Inference Optimization workloads, available through both pip and conda package managers.

All packages are for Linux* only.

Install a conda* Package

conda install -c intel -c conda-forge xgboost scikit-learn-intelex intel-extension-for-tensorflow=2.13.0 pytorch=2.0.1 intel-extension-for-pytorch=2.0.110 modin-all neural-compressor python=3.10 --override-channels

Replace python=3.10 with python=3.9 if you need Python 3.9; specify only one Python version per environment.

Installation Instructions for conda

Intel provides access to the AI software through a public Anaconda* repository. If you do not have an existing conda-based Python* environment, install conda or Miniconda* before running the installation command.

Libmamba Solver Configuration

Libmamba is the solver used to install Intel packages. To ensure you are using this solver, follow these instructions:

  1. Activate the base conda environment.

    source <conda PATH>/bin/activate
  2. Update conda.

    conda update -n base conda
  3. Configure libmamba as the solver.

    conda config --set solver libmamba
  4. Verify that libmamba is set by typing:

    conda config --show

We strongly recommend installing into a new conda environment first.

Create a new conda environment

Install packages in a new environment using conda create. A list of available packages is located in the Intel repository at Anaconda.org. Not all packages in the repository are at the current release; if the repository contains an outdated version of a required component, install a newer one from the command line or through the GUI.
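As a sketch of the recommended workflow, assuming the channels from the install command above (the environment name "aitools" and the package selection are illustrative, not prescribed):

```shell
# Create a fresh environment with a single Python version
conda create -n aitools -c intel -c conda-forge python=3.10 --override-channels

# Activate it before installing components
conda activate aitools

# Install the components you selected into the new environment
conda install -c intel -c conda-forge xgboost scikit-learn-intelex --override-channels
```

Installing into a dedicated environment keeps the Intel packages isolated from your base environment and makes version conflicts easier to diagnose.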

Install with pip

pip install xgboost scikit-learn-intelex tensorflow==2.13.0 intel-extension-for-tensorflow[gpu]==2.13.0 torch==2.0.1 intel-extension-for-pytorch==2.0.110 -f https://developer.intel.com/ipex-whl-stable-xpu modin[ray] neural-compressor==2.2 cnvrgv2

Installation Instructions for pip

If you do not have pip, use the following instructions to install it. After installation, make sure that you can run pip from the command line.

Installation Instructions
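If pip is missing, it can usually be bootstrapped with the ensurepip module that ships with Python (a generic sketch, not Intel-specific):

```shell
# Use existing pip if present; otherwise bootstrap it via ensurepip
python3 -m pip --version 2>/dev/null || python3 -m ensurepip --upgrade

# Confirm pip runs from the command line
python3 -m pip --version
```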

Install a Docker* Container

docker pull intel/deep-learning:2023.2-py3.10

Substitute the image name (intel/data-analytics, intel/classical-ml, intel/deep-learning, or intel/inference-optimization) and tag (2023.2-py3.9 or 2023.2-py3.10) for the preset bundle and Python version you selected.

cnvrg.io™

cnvrg.io™ is a full-service machine learning operating system. The platform enables you to manage all your AI projects from one place. Using cnvrg.io requires a separate license and download.

Note cnvrg.io is only available for pip packages.

Sign Up

Learn More


Installation Instructions for Docker*

You must install Docker to run the containers. For complete instructions, visit the Docker website.

Installation Instructions
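Once Docker is installed, a preset container can be pulled and started interactively. This is a generic sketch; the image name and tag are assumed from the selector above:

```shell
# Pull the preset bundle image selected above
docker pull intel/deep-learning:2023.2-py3.10

# Start an interactive shell inside the container; --rm removes it on exit
docker run -it --rm intel/deep-learning:2023.2-py3.10 /bin/bash
```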

Working with Preset Containers

Prerequisites for First-Time Users: Intel® Distribution for Python*

Do one of the following:

  • If you don't have Python installed, install Intel® Distribution for Python*.
  • If Python is installed, we recommend updating to the corresponding version of Intel Distribution for Python for maximum performance.

Make sure that you can run Python from the command line.

Tutorial to Check Your Python Version
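A quick command-line check (generic, not Intel-specific) confirms that Python is on your PATH and reports its version:

```shell
# Print the interpreter version to confirm Python runs from the command line
python3 --version
```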




Offline Installer

All AI tools are available for offline installation using a stand-alone installer. Choose this option if your target installation environment is behind a firewall or you need to manage component versions yourself.

Download

In This Package

  • Intel® Distribution for Python* is a suite of packages, including the Python interpreter and compilers, optimized via Intel® oneAPI Math Kernel Library (oneMKL) and Intel® oneAPI Data Analytics Library (oneDAL) to make Python applications more efficient.
  • Intel® Optimization for XGBoost* is an upstreamed optimization from Intel that provides superior performance on Intel® CPUs. This well-known machine learning package for gradient-boosted decision trees now includes seamless, drop-in acceleration for Intel® architecture to significantly speed up model training and improve accuracy for better predictions.
  • Intel® Extension for Scikit-learn* seamlessly speeds up your scikit-learn* applications on Intel® CPUs and GPUs, across single nodes and multiple nodes. This extension package dynamically patches scikit-learn estimators to use Intel® oneAPI Data Analytics Library (oneDAL) as the underlying solver, accelerating your machine learning algorithms without code changes.
  • Intel® Extension for TensorFlow* is a heterogeneous, high-performance, deep learning extension plug-in based on a TensorFlow* PluggableDevice interface that enables access to Intel CPU and GPU devices with TensorFlow for AI workload acceleration.
  • Intel® Extension for PyTorch* extends PyTorch with up-to-date feature optimizations for an extra performance boost on Intel hardware.
  • Intel® Distribution of Modin* is a distributed DataFrame library with an identical API to pandas. The library integrates with OmniSci* in the back end for accelerated analytics.
  • Intel® Neural Compressor reduces AI model size via compression to increase inference speed.
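As a minimal sketch of the scikit-learn patching workflow described above, assuming scikit-learn-intelex is installed (the code falls back to stock scikit-learn if the extension is absent):

```python
import numpy as np

# Patch scikit-learn BEFORE importing estimators; patch_sklearn() swaps
# supported estimators to oneDAL-backed implementations. If the Intel
# extension is not installed, stock scikit-learn is used unchanged.
try:
    from sklearnex import patch_sklearn
    patch_sklearn()
except ImportError:
    pass

from sklearn.cluster import KMeans

# The estimator API is identical whether or not the patch is applied
X = np.random.rand(1000, 4)
model = KMeans(n_clusters=3, n_init=10).fit(X)
print(model.cluster_centers_.shape)  # (3, 4)
```

Because the patch only replaces the backend solver, existing scikit-learn code runs without modification beyond the two added lines.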

Additional AI Tool Resources

  • System Requirements
  • AI Tools Selector Guide
  • Working with Preset Containers
  • Intel® Distribution of Modin* Get Started Sample
  • XGBoost for Intel® Distribution for Python* Get Started Sample
  • Intel® Extension for PyTorch* Get Started Sample
  • Intel® Extension for Scikit-learn* Get Started Sample
  • Intel® Extension for TensorFlow* Get Started Sample
  • AI Reference Kits

Accessing Docker Hub From This Website

By accessing, downloading, or using this software and any required dependent software (the “Software Package”), you agree to the terms and conditions of the software license agreements for the Software Package, which may also include notices, disclaimers, or license terms for third-party software included with the Software Package. Preset containers are published under Apache License 2.0.

Docker containers are available only for preset bundles. To download additional components, choose a package manager and select your component.

  • Intel® Distribution of Modin* (version 0.23.0), Intel® Extension for TensorFlow* (version 2.13.0), Intel® Extension for PyTorch* (version 2.0.110), Intel® Optimization for XGBoost* (version 1.7.6), Intel® Extension for Scikit-learn* (version 2023.2), Intel® Distribution for Python* (version 3.9 and 3.10), and Intel® Neural Compressor (version 2.2) have been updated to include functional and security updates. Users should update to the latest versions as they become available.
  • Intel® Distribution of Modin* (version 0.23.0), Intel® Extension for TensorFlow* (version 2.9.1), Intel® Extension for PyTorch* (version 2.1), Intel® Optimization for XGBoost* (version 1.2.2), Intel® Extension for Scikit-learn* (version 2023), Intel® Distribution for Python* (version 2023.2.0), and Intel® Neural Compressor (version 2.2) have been updated to include functional and security updates. Users should update to the latest versions as they become available.
  • Intel® Distribution of Modin* (version 0.23.0), Intel® Extension for TensorFlow* (version 2.13.0), Intel® Extension for PyTorch* (version 2.0.110), Intel® Optimization for XGBoost* (version 1.7.6), Intel® Extension for Scikit-learn* (version 2023.2), and Intel® Neural Compressor (version 2.2) have been updated to include functional and security updates. Users should update to the latest versions as they become available.

  

Support

Start-up support is available if there is an issue with the AI Tools Selector functionality.

Report an Issue

Feedback Welcome

Share your thoughts about the preview version of this AI tools selector, and provide suggestions for improvement. Responses are anonymous, and Intel will not contact you unless you grant permission by providing an email address.

Start Survey