
Intel® AI Analytics Toolkit (AI Kit)

Achieve End-to-End Performance for AI Workloads Powered by oneAPI

Accelerate Data Science & AI Pipelines

The AI Kit gives data scientists, AI developers, and researchers familiar Python* tools and frameworks to accelerate end-to-end data science and analytics pipelines on Intel® architecture. The components are built using oneAPI libraries for low-level compute optimizations. This toolkit maximizes performance from preprocessing through machine learning, and provides interoperability for efficient model development.

Using this toolkit, you can:

  • Deliver high-performance deep learning training on Intel® XPUs and integrate fast inference into your AI development workflow with Intel®-optimized deep learning frameworks for TensorFlow* and PyTorch*, pretrained models, and low-precision tools.
  • Achieve drop-in acceleration for data preprocessing and machine learning workflows with compute-intensive Python* packages: Modin*, scikit-learn*, and XGBoost*.
  • Gain direct access to analytics and AI optimizations from Intel to ensure that your software works together seamlessly.
Download the Toolkit

Accelerate end-to-end machine learning and data science pipelines with optimized deep learning frameworks and high-performing Python* libraries.

Get It Now
Develop in the Cloud

Build and optimize oneAPI multiarchitecture applications using the latest optimized Intel® oneAPI and AI tools, and test your workloads across Intel® CPUs and GPUs. No hardware installations, software downloads, or configuration necessary. Free for 120 days with extensions possible.

 

Get Access

Release Notes

Get Started Guide Linux*

Get Started Guide Windows*

Code Samples

See All Toolkits

What's New

  • Accelerate your deep learning training and inference workloads with support in Intel® oneAPI Deep Neural Network Library (oneDNN) for Intel® Xe Matrix Extensions on Intel® Data Center GPU Flex Series and Intel® Data Center GPU Max Series.
  • Run Intel® Extension for TensorFlow* and Intel® Extension for PyTorch* on discrete Intel GPUs.
  • Scale your DataFrame processing to large or distributed compute resources with the Heterogeneous Data Kernels (HDK) back end for Intel® Distribution of Modin*.
  • Get your AI projects started quickly with open source pretrained reference kits that include models, training data, end-to-end pipeline user guides, and Intel oneAPI components.
  • Run the toolkit natively on Windows* with full feature parity to Linux* (except for distributed training).

Features

Optimized Deep Learning 

  • Leverage popular, Intel-optimized frameworks—including TensorFlow and PyTorch—to use the full power of Intel architecture and yield high performance for training and inference.
  • Expedite development by using the open source, pretrained, machine learning models that are optimized by Intel for best performance. 
  • Take advantage of automatic accuracy-driven tuning strategies along with additional objectives like performance, model size, or memory footprint using low-precision optimizations.
     

Data Analytics and Machine Learning Acceleration

  • Increase machine learning model accuracy and performance with algorithms in scikit-learn and XGBoost, optimized for Intel architecture.
  • Scale out efficiently to clusters and perform distributed machine learning by using Intel® Extension for Scikit-learn*.

High-Performance Python*

  • Take advantage of the most popular and fastest-growing programming language for AI and data analytics, with underlying instruction sets optimized for Intel architecture.
  • Process larger scientific data sets more quickly using drop-in performance enhancements to existing Python code.
  • Achieve highly efficient multithreading, vectorization, and memory management, and scale scientific computations efficiently across a cluster.

 

Simplified Scaling across Multi-node DataFrames

  • Seamlessly scale and accelerate pandas workflows across multiple cores and nodes with only one line of code change, using the Intel® Distribution of Modin*, a lightweight parallel DataFrame library.
  • Accelerate data analytics with high-performance back ends such as OmniSci*.
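The "one line of code change" above is the import swap sketched below. Stock pandas is used so the sketch runs without Modin; uncommenting the second import (which assumes the `modin` package and a supported engine such as Ray or Dask are installed) parallelizes the same code unchanged.

```python
# Stock pandas, used here so the sketch runs anywhere pandas is available:
import pandas as pd
# Drop-in replacement with an identical API (assumes modin is installed):
# import modin.pandas as pd

df = pd.DataFrame({"city": ["A", "B", "A"], "sales": [10, 20, 30]})
totals = df.groupby("city")["sales"].sum()
print(totals["A"])  # 40
```

Because the Modin API mirrors pandas, the rest of the workflow needs no edits after the swap.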

Benchmarks

These benchmarks illustrate the performance capabilities of the AI Kit.

In the News

CERN Uses Intel® Deep Learning Boost & oneAPI to Juice Inference without Accuracy Loss

Researchers at CERN and Intel showcase promising results with low-precision optimizations that exploit heterogeneous operations on CPUs for convolutional Generative Adversarial Networks (GAN).

Learn More

LAIKA Studios* & Intel Join Forces to Expand the Possibilities in Stop-Motion Film Making

See how LAIKA Studios* and the Intel Applied Machine Learning team used tools from the AI Kit to realize the limitless scope of stop-motion animation.

Learn More

Accelerate PyTorch* with oneAPI Libraries

Harnessing Intel® Deep Learning Boost and oneAPI libraries, Intel and Facebook* collaboratively improved PyTorch CPU performance across multiple training and inference workloads.

PyTorch with oneDNN

PyTorch with Intel® oneAPI Collective Communications Library (oneCCL)

MLPerf* Results for Deep Learning Training and Inference

Reflecting the broad range of AI workloads, Intel submitted training and inference results for MLPerf* v0.7. The results in each use case demonstrate that Intel continues to advance Intel® Xeon® Scalable processors as universal platforms for CPU-based machine learning training and inference.

MLPerf Training | MLPerf Inference

An Open Road to Swift DataFrame Scaling

This podcast looks at the challenges of data preprocessing, especially time-consuming, data-wrangling tasks. It discusses how Intel and OmniSci are collaborating to provide integrated solutions that improve DataFrame scaling.

Listen

Superior Machine Learning Performance on the Latest Intel® Xeon® Scalable Processors

Intel Extension for Scikit-learn gives data scientists the performance and ease-of-use they need to run machine learning algorithms with a simple drop-in replacement for the stock scikit-learn. This article showcases the speedups achieved on the latest Intel Xeon Scalable processors when compared to processors from NVIDIA* and AMD*.

Learn More

Optimize Performance of Gradient Boost Algorithms

Intel has continually improved training and inference performance for XGBoost algorithms. The following blogs compare the training performance of XGBoost 1.1 on a CPU with third-party GPUs, and show how to speed up inference with minimal code changes and no loss of quality.

Training | Inference

Accelerate Lung Disease Diagnoses with Intel® AI

Accrad developed CheXRad, an AI-powered solution to rapidly detect COVID-19 and 14 other thoracic diseases in the clinics and hospitals of Africa. With the help of Intel, they were able to train, optimize, and deploy in less time and at a lower operational cost than available alternatives.

Learn More


What’s Included

Intel® Optimization for TensorFlow*

In collaboration with Google*, Intel has directly optimized TensorFlow for Intel architecture using oneDNN primitives to maximize performance. This package:

  • Provides the latest TensorFlow binary version compiled with CPU-enabled settings
  • Adds extensions to further boost TensorFlow training and inference
  • Takes advantage of the latest Intel hardware features
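Recent stock TensorFlow releases ship with the oneDNN optimizations built in, and they can be toggled explicitly through an environment variable before the Python process starts. A minimal config sketch (the script name is a placeholder):

```shell
# 1 enables the oneDNN-optimized CPU kernels; 0 disables them.
export TF_ENABLE_ONEDNN_OPTS=1
python your_training_script.py
```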

 

Intel® Optimization for PyTorch*

In collaboration with Facebook*, Intel has contributed many optimizations to this popular deep learning framework to provide superior performance on Intel architecture. This package provides the binary version of the latest PyTorch release for CPUs, and further adds extensions and bindings from Intel with oneCCL for efficient distributed training.
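A sketch of the usual extension workflow, with a hypothetical toy model. The `intel_extension_for_pytorch` lines are commented out so the sketch also runs on stock PyTorch; with the extension installed, those two lines apply Intel's operator and memory-layout optimizations to the model.

```python
import torch

model = torch.nn.Linear(4, 2).eval()
# With the extension installed, the optimization step is:
# import intel_extension_for_pytorch as ipex
# model = ipex.optimize(model)

with torch.no_grad():
    out = model(torch.randn(1, 4))
print(tuple(out.shape))  # (1, 2)
```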

 

Model Zoo for Intel® Architecture

Access pretrained models, sample scripts, best practices, and step-by-step tutorials for many popular open-source, machine learning models optimized by Intel to run on Intel Xeon Scalable processors.

 

Intel® Neural Compressor

Provide a unified, low-precision inference interface across multiple deep learning frameworks optimized by Intel with this open source Python library.

Intel® Extension for Scikit-learn*

Seamlessly speed up your scikit-learn applications on Intel® CPUs and GPUs across single and multiple nodes. This extension package dynamically patches scikit-learn estimators to use Intel® oneAPI Data Analytics Library (oneDAL) as the underlying solver, delivering the speedup for your machine learning algorithms. The toolkit also includes stock scikit-learn to provide a comprehensive Python environment with all required packages installed. The extension supports up to the last four releases of scikit-learn, providing flexibility with your existing packages.
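The dynamic patching amounts to two lines, sketched below. They are commented out so the example runs on stock scikit-learn; with the `scikit-learn-intelex` package installed, they must execute before any scikit-learn estimator is imported, after which the estimators below transparently use the oneDAL solvers.

```python
# The entire drop-in step (requires the scikit-learn-intelex package):
# from sklearnex import patch_sklearn
# patch_sklearn()

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((100, 2))
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(len(labels))  # 100
```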

 

Intel® Optimization for XGBoost*

In collaboration with the XGBoost community, Intel has been directly upstreaming many optimizations to provide superior performance on Intel CPUs. This well-known machine learning package for gradient-boosted decision trees now includes seamless, drop-in acceleration for Intel architecture to significantly speed up model training and improve accuracy for better predictions.

 

Intel® Distribution of Modin*

Accelerate your pandas workflows and scale data preprocessing across multi-nodes using this intelligent, distributed DataFrame library with an identical API to pandas. The library integrates with OmniSci in the back end for accelerated analytics. This component is available only via the Anaconda* distribution of the toolkit. To download and install it, refer to the Installation Guide.

 

Intel® Distribution for Python*

Achieve greater performance through acceleration of core Python numerical and scientific packages built with Intel® Performance Libraries. This package includes Numba*, a just-in-time compiler for decorated Python code that uses the latest Single Instruction Multiple Data (SIMD) features and multicore execution to fully exploit modern CPUs. You can also program multiple devices with the same programming model, Data Parallel Python (DPPy), without rewriting CPU code as device code.
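The Numba pattern is a single decorator on an ordinary Python loop, sketched below. The decorator is commented out so the sketch runs without `numba` installed; enabling it JIT-compiles the loop to vectorized machine code with no other changes.

```python
import numpy as np
# With numba installed, uncomment the import and decorator:
# from numba import njit

# @njit  # JIT-compiles the loop below with SIMD vectorization
def dot(a, b):
    s = 0.0
    for i in range(a.shape[0]):
        s += a[i] * b[i]
    return s

print(dot(np.arange(4.0), np.ones(4)))  # 6.0
```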

Data Science Workstations Powered by the AI Kit

Original equipment manufacturer (OEM) partners offer Intel®-based data science workstations, which are laptop, desktop, or tower configurations that include:

  • Intel® Core™ or Intel® Xeon® processors that are matched for data science work
  • Large memory capacities to enable in-memory processing of large datasets, which shortens the time required to sort, filter, label, and transform your data
  • Intel® Optane™ persistent memory that provides an affordable alternative to DRAM for extremely large-capacity workloads and in-memory databases
  • AI Kit software with applications and libraries that accelerate end-to-end AI and data analytics pipelines on Intel architecture


Data Science Workstations

The AI Kit comes preinstalled on select OEM data science workstations. Use the Installation Guide to download the AI Kit on your workstation.

  • Dell Precision* data science workstations: See the Installation Guide.
  • Z by Hewlett Packard Enterprise* data science workstations: AI Kit components are preloaded through the Z by HP Data Science Stack, an application for customizing data science environments.
  • Lenovo ThinkStation* and ThinkPad* P series workstations: factory installed.

Documentation & Code Samples

 Documentation
  • Installation Guides:
    Intel | Anaconda | Docker* | Dell Precision Data Science Workstation
  • Package Managers: Conda | APT | YUM/DNF/Zypper
  • Get Started Guides:
    Linux | Windows | Containers | scikit-learn
  • Release Notes
  • Maximize TensorFlow Performance on CPUs: Considerations and Recommendations for Inference Workloads


View All Documentation

Code Samples
  • Get Started:
    TensorFlow
    | PyTorch | Modin | XGBoost | scikit-learn
  • End-to-End Machine Learning for Census Workload
  • TensorFlow Performance Analysis
  • Multi-node Training with PyTorch
  • PyTorch Training with Intel® Advanced Matrix Extensions and bfloat16 Data


View All Code Samples

Training

Accelerate End-to-End AI Pipelines Using the Intel® AI Analytics Toolkit
Optimize the Latest Deep Learning Workloads Using Intel® Extension for PyTorch*
Achieve AI Performance from the Data Center to the Edge Using oneAPI Toolkits
AI Analytics Part 1: Optimize End-to-End Data Science and Machine Learning Acceleration
AI Analytics Part 2: Enhance Deep Learning Workloads on 3rd Generation Intel Xeon Scalable Processors
AI Analytics Part 3: Walk through the Steps to Optimize End-to-End Machine Learning Workflows
Maximize CPU Resources for XGBoost Training and Inference
Intel® Extension for TensorFlow*: Tips & Tricks for AI & HPC Convergence
Achieve High-Performance Scaling for End-to-End Machine-Learning and Data Analytics Workflows

Specifications

Processors:
  • Intel Xeon processors
  • Intel Xeon Scalable processors
  • Intel Core processors
  • Intel Data Center GPU Flex Series
  • Intel Data Center GPU Max Series

 

Language:
  • Python

 

Operating systems:
  • Linux
  • Windows

 

Development environments:
  • Compatible with Intel® compilers and others that follow established language standards
  • Linux: Eclipse* IDE

 

Distributed environments:
  • MPI (MPICH-based, Open MPI)


Support varies by tool. For details, see the system requirements.

 

Get Help

Your success is our success. Access these support resources when you need assistance.

  • AI Kit Support Forum
  • Deep Learning Frameworks Support Forum
  • Machine Learning and Data Analytics Support Forum


For more help, see our general oneAPI Support.
