
AI Frameworks and Tools

AI Tool Selector

Customize your download by use case (data analytics, machine learning, deep learning, or inference optimization) or select tools individually from conda*, pip, or Docker* repositories. Install with a command-line installation or an offline installer package compatible with your development environment.

 

Configure & Download

Get Started Guide | Documentation

Featured

Productive, easy-to-use AI tools and suites span multiple stages of the AI pipeline, including data engineering, training, fine-tuning, optimization, inference, and deployment.

OpenVINO™ Toolkit

Write Once, Deploy Anywhere

Deploy high-performance inference applications from device to cloud, powered by oneAPI. Optimize, tune, and run comprehensive AI inference using the included optimizer, runtime, and development tools. The toolkit includes:

  • A repository of open source, pretrained, and preoptimized models ready for inference
  • A model optimizer that converts and prepares your trained models for deployment
  • An inference engine that runs models and outputs results on multiple processors, accelerators, and environments with write-once, deploy-anywhere efficiency

Intel® Gaudi® Software

Speed Up AI Development

Get access to the Intel® Gaudi® software stack:

  • Optimized for deep learning training and inference
  • Integrates with the popular TensorFlow* and PyTorch* frameworks
  • Provides a custom graph compiler
  • Supports custom kernel development
  • Enables an ecosystem of software partners
  • Offers resources on GitHub* and a community forum

Intel® Tiber™ Solutions

Intel® Tiber™ AI Cloud

Build, test, and optimize multiarchitecture applications and solutions—and get to market faster—with an open AI software stack built on oneAPI. 

 

Intel® Tiber™ Edge Platform

Build, deploy, run, manage, and scale edge and AI solutions on standard hardware with cloud-like simplicity. 

 

Deep Learning & Inference Optimization

Open source deep learning frameworks run with high performance across Intel® devices through oneAPI-powered optimizations and Intel's open source contributions.

PyTorch*

Accelerate deep learning training and inference workloads in your applications.

Learn More | Get Started

TensorFlow*

Improve training and inference performance on Intel® hardware.

Learn More | Get Started

ONNX Runtime

Accelerate inference across multiple platforms.

Learn More | Get Started

JAX*

Perform complex numerical computations on high-performance devices using the Intel® Extension for TensorFlow*.

Learn More | Get Started

DeepSpeed*

Automates parallelism, optimizes communication, manages heterogeneous memory, and supports model compression.

Learn More | Get Started

PaddlePaddle*

Get fast performance on Intel® Xeon® Scalable processors from a framework built with the Intel® oneAPI Deep Neural Network Library (oneDNN).

Learn More | Get Started

Intel® AI Reference Models

Access a repository of pretrained models, sample scripts, best practices, and step-by-step tutorials.

Learn More

Intel® Neural Compressor

Reduce model size and speed up inference with this open source library.

Learn More
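The core idea behind this kind of compression can be sketched without the library: map float32 weights onto 8-bit integers with a scale and zero point, then dequantize at compute time. The NumPy sketch below illustrates the principle of post-training affine quantization; it is not Intel® Neural Compressor's API.

```python
import numpy as np

def quantize_int8(w):
    """Affine (asymmetric) post-training quantization of a float tensor to int8."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 or 1.0   # map the float range onto 256 int8 levels
    zero_point = int(round(-128 - lo / scale))
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
q, s, z = quantize_int8(w)
w_hat = dequantize(q, s, z)
# int8 storage is 4x smaller than float32; per-element error stays near scale/2
print(q.nbytes, w.nbytes, float(np.abs(w - w_hat).max()))
```

The library automates choosing such scales per tensor (or per channel) and validates that accuracy stays within a target tolerance.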


Machine Learning & Data Science

Classical machine learning algorithms in open source frameworks use oneAPI libraries. Intel also offers further optimizations in extensions to these frameworks.

scikit-learn*

Dynamically speed up scikit-learn* applications on Intel CPUs and GPUs.

Learn More
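The extension works by patching scikit-learn's estimators in place, so existing application code is unchanged. A minimal sketch, with the `sklearnex` patch lines commented out so the snippet also runs on stock scikit-learn; `patch_sklearn` is the extension's documented entry point.

```python
# Intel® Extension for Scikit-learn* accelerates stock scikit-learn by patching
# its estimators in place. If sklearnex is installed, uncomment the two lines
# below; the scikit-learn code that follows is identical either way.
# from sklearnex import patch_sklearn
# patch_sklearn()  # must run before importing sklearn estimators

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))

model = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print(model.labels_.shape)  # one cluster label per sample
```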

XGBoost

Speed up gradient boosting training and inference on Intel hardware.

Learn More | Get Started
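XGBoost's hardware speedups sit on top of the gradient-boosting procedure itself, which is easy to sketch: each round fits a weak learner to the current residuals and adds it to the ensemble. The NumPy toy below illustrates that principle with one-feature regression stumps; it is not XGBoost's API (real code would use, for example, `xgboost.XGBRegressor`).

```python
import numpy as np

def fit_stump(x, r):
    """Best single-split regression stump (threshold + left/right means) on residuals."""
    best = None
    for t in np.unique(x):
        left, right = r[x <= t], r[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        lm, rm = left.mean(), right.mean()
        sse = ((left - lm) ** 2).sum() + ((right - rm) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda z: np.where(z <= t, lm, rm)

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=200)
y = np.sin(x) + rng.normal(scale=0.1, size=200)

# Gradient boosting with squared loss: each round fits a stump to the residuals,
# which are the negative gradient of the loss at the current prediction.
pred = np.full_like(y, y.mean())
lr = 0.5
for _ in range(50):
    stump = fit_stump(x, y - pred)
    pred += lr * stump(x)

print(float(np.mean((y - pred) ** 2)))  # training MSE shrinks over the rounds
```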

Intel® Distribution for Python*

Get near-native code performance for numerical and scientific computing.

Learn More
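The near-native performance comes from NumPy and SciPy builds linked against optimized math libraries, which pays off most for vectorized array code. A small contrast, runnable on any NumPy build; the distribution accelerates the vectorized path further.

```python
import time
import numpy as np

# The Intel® Distribution for Python* ships NumPy/SciPy built against optimized
# math libraries, so vectorized array code is where the speedup lands.
n = 1_000_000
a = np.random.default_rng(0).uniform(size=n)

t0 = time.perf_counter()
s_loop = 0.0
for v in a:                # interpreted Python loop: one dispatch per element
    s_loop += v * v
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
s_vec = float(a @ a)       # single optimized BLAS/SIMD kernel call
t_vec = time.perf_counter() - t0

print(t_loop, t_vec)       # same result, orders-of-magnitude timing gap
```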

Modin*

Accelerate pandas workflows and scale data using this DataFrame library.

Learn More | Get It Now
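Modin's design goal is a drop-in replacement: swap the import and keep your pandas code. A sketch using stock pandas, with the Modin import noted in a comment since `modin` may not be installed.

```python
# Modin* accelerates pandas by changing one import: with modin installed you
# would write `import modin.pandas as pd` and keep the rest of the code as-is.
import pandas as pd

df = pd.DataFrame({
    "device": ["cpu", "gpu", "cpu", "gpu", "cpu"],
    "latency_ms": [4.0, 1.5, 3.0, 2.5, 5.0],
})
summary = df.groupby("device")["latency_ms"].mean()
print(summary.to_dict())  # {'cpu': 4.0, 'gpu': 2.0}
```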


Libraries

oneAPI libraries deliver code and performance portability across hardware vendors and accelerator technologies.

Intel® oneAPI Deep Neural Network Library

Deliver optimized neural network building blocks for deep learning applications.

Learn More

Intel® oneAPI Data Analytics Library

Build compute-intensive applications that run fast on Intel® architecture.

Learn More

Intel® oneAPI Math Kernel Library

Experience high performance for numerical computing on CPUs and GPUs.

Learn More

Intel® oneAPI Collective Communications Library

Train models more quickly with distributed training across multiple nodes.

Learn More


Developer Resources from AI Ecosystem Members

Browse All

Hugging Face*

Intel collaborates with Hugging Face* to develop Optimum for Intel, which simplifies training, fine-tuning, and inference optimization of Hugging Face Transformers and Diffusers models on Intel hardware.

PyTorch Foundation

Intel is a premier member of and a top contributor to the PyTorch Foundation. Intel contributions optimize PyTorch training and inference across Intel CPUs, GPUs, and AI accelerators.

Red Hat*

Red Hat* and Intel collaborate to ensure that Red Hat OpenShift AI* works seamlessly with Intel® AI hardware and software in an end-to-end enterprise AI platform across a hybrid cloud infrastructure.

Microsoft*

Microsoft* and Intel collaborate to optimize the AI stack from cloud services to AI PCs, spanning solutions such as Microsoft Azure*, DirectML*, Phi open models, ML.NET, and more.

Develop, train, and deploy your AI solutions quickly with performance- and productivity-optimized tools from Intel. 

