AI Frameworks and Tools
Software tools at all levels of the AI stack unlock the full capabilities of your Intel® hardware. All Intel® AI tools and frameworks are built on a standards-based, unified oneAPI programming model that helps you get the most performance from your end-to-end pipeline on all your available hardware.
AI Tool Selector
Customize your download by use case (data analytics, machine learning, deep learning, or inference optimization) or select tools individually from conda*, pip, or Docker* repositories. Download with a command-line installation or an offline installer package that is compatible with your development environment.
Featured
Productive, easy-to-use AI tools and suites span multiple stages of the AI pipeline, including data engineering, training, fine-tuning, optimization, inference, and deployment.
Write Once, Deploy Anywhere
Deploy high-performance inference applications from device to cloud, powered by oneAPI. Optimize, tune, and run comprehensive AI inference using the included optimizer, runtime, and development tools. The toolkit includes:
- A repository of open source, pretrained, and preoptimized models ready for inference
- A model optimizer for your trained model
- An inference engine to run inference and output results on multiple processors, accelerators, and environments with write-once, deploy-anywhere efficiency (see the sketch after this list)
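As a minimal sketch of this flow, assuming the toolkit described above is the OpenVINO™ toolkit and using its Python API; the model file name, input handling, and device string are illustrative placeholders.

```python
# A minimal write-once, deploy-anywhere inference sketch, assuming the
# OpenVINO(TM) Python API; file names, shapes, and the device string are
# illustrative placeholders.
import numpy as np
import openvino as ov

core = ov.Core()

# Load a trained model already converted by the model optimizer to IR format.
model = core.read_model("model.xml")

# Compile for a target device; retargeting the same application only changes
# this string (for example "CPU" or "GPU").
compiled = core.compile_model(model, device_name="CPU")

# Run inference on a random input matching the model's first input
# (assumes a static input shape).
input_shape = list(compiled.input(0).shape)
result = compiled([np.random.rand(*input_shape).astype(np.float32)])
print(result[compiled.output(0)].shape)
```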
Speed Up AI Development
Get access to the Intel® Gaudi® software stack:
- Optimized for deep learning training and inference
- Integrates with popular frameworks TensorFlow* and PyTorch*
- Provides a custom graph compiler
- Supports custom kernel development
- Enables an ecosystem of software partners
- Offers resources on GitHub* and a community forum (see the sketch after this list)
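As a sketch of the PyTorch* integration above: Gaudi accelerators appear to PyTorch as an "hpu" device. The snippet below assumes the habana_frameworks package from the Gaudi software stack is installed; the model, data, and hyperparameters are placeholders.

```python
# A minimal PyTorch training step on an Intel Gaudi accelerator, assuming the
# habana_frameworks package is installed; model, data, and hyperparameters
# are illustrative placeholders.
import torch
import habana_frameworks.torch.core as htcore  # registers the "hpu" device

device = torch.device("hpu")
model = torch.nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()

inputs = torch.randn(32, 128).to(device)
labels = torch.randint(0, 10, (32,)).to(device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()
htcore.mark_step()   # in lazy mode, triggers graph compilation and execution
optimizer.step()
htcore.mark_step()
```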
Intel® Tiber™ Solutions
Build, test, and optimize multiarchitecture applications and solutions—and get to market faster—with an open AI software stack built on oneAPI.
Build, deploy, run, manage, and scale edge and AI solutions on standard hardware with cloud-like simplicity.
Deep Learning & Inference Optimization
Open source deep learning frameworks run with high performance across Intel devices through optimizations powered by oneAPI, along with open source contributions by Intel.
PyTorch*
Reduce model size and computational workloads for deep learning training and inference in your apps.
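One concrete form these optimizations take is the Intel® Extension for PyTorch* (an assumption here, since the extension is not named above); a minimal CPU inference sketch, with a toy model and the bfloat16 choice as illustrative assumptions:

```python
# A minimal sketch using the Intel(R) Extension for PyTorch* to optimize a
# model for CPU inference; the toy model, dtype, and input shape are
# illustrative assumptions.
import torch
import intel_extension_for_pytorch as ipex

model = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 10)).eval()

# ipex.optimize applies operator fusion, memory-layout, and dtype optimizations.
model = ipex.optimize(model, dtype=torch.bfloat16)

with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    out = model(torch.randn(32, 128))
print(out.shape)
```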
TensorFlow*
Increase training and inference performance on Intel® hardware.
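Recent TensorFlow releases ship oneDNN-backed kernels; the sketch below toggles them explicitly with the TF_ENABLE_ONEDNN_OPTS environment variable (on many builds they are already on by default), and the tiny model is a placeholder.

```python
# A minimal sketch: explicitly enable TensorFlow's oneDNN-optimized kernels;
# the flag must be set before importing TensorFlow, and the layer below is
# only a placeholder workload.
import os
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

import numpy as np
import tensorflow as tf

x = np.random.rand(64, 128).astype(np.float32)
layer = tf.keras.layers.Dense(10, activation="relu")
print(layer(x).shape)
```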
JAX*
Perform complex numerical computations on high-performance devices using the Intel® Extension for TensorFlow*.
DeepSpeed*
Automate parallelism, optimize communication, manage heterogeneous memory, and compress models.
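A minimal sketch of what that automation looks like in code, assuming a script launched with the deepspeed launcher; the toy model, batch size, and ZeRO stage are illustrative assumptions.

```python
# A minimal DeepSpeed sketch: ZeRO stage 2 shards optimizer state and
# gradients across workers; launch with the `deepspeed` launcher.
# The toy model, batch size, and learning rate are illustrative assumptions.
import torch
import deepspeed

model = torch.nn.Linear(512, 512)

ds_config = {
    "train_batch_size": 16,
    "zero_optimization": {"stage": 2},
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# The returned engine handles parallelism, communication, and memory
# management; training then uses engine.backward(loss) and engine.step().
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)
```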
PaddlePaddle*
Get fast performance on Intel® Xeon® Scalable processors with builds that use the Intel® oneAPI Deep Neural Network Library (oneDNN).
Intel® AI Reference Models
Access a repository of pretrained models, sample scripts, best practices, and step-by-step tutorials.
Intel® Neural Compressor
Reduce model size and speed up inference with this open source library.
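A minimal post-training quantization sketch, assuming the library's 2.x Python API and a PyTorch model; the toy model and random calibration data are placeholders.

```python
# A minimal post-training (INT8) quantization sketch with Intel(R) Neural
# Compressor, assuming its 2.x API and a PyTorch model; the toy model and
# random calibration data are illustrative placeholders.
import torch
from torch.utils.data import DataLoader, TensorDataset
from neural_compressor import PostTrainingQuantConfig
from neural_compressor.quantization import fit

model = torch.nn.Sequential(torch.nn.Linear(64, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 10)).eval()

calib_loader = DataLoader(
    TensorDataset(torch.randn(256, 64), torch.randint(0, 10, (256,))),
    batch_size=32,
)

# Calibrates on the loader, then returns a lower-precision model that is
# smaller and typically faster at inference.
q_model = fit(model=model, conf=PostTrainingQuantConfig(),
              calib_dataloader=calib_loader)
```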
Machine Learning & Data Science
Classical machine learning algorithms in open source frameworks use oneAPI libraries. Intel also offers further optimizations in extensions to these frameworks.
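For example, one such extension is the Intel® Extension for Scikit-learn* (named here only as an illustration); patching swaps supported estimators for oneDAL-accelerated implementations.

```python
# A minimal sketch of one such extension, assuming the Intel(R) Extension for
# Scikit-learn* (package scikit-learn-intelex) is installed; the synthetic
# data and estimator choice are illustrative.
from sklearnex import patch_sklearn
patch_sklearn()  # call before importing the estimators to be accelerated

import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(10_000, 16)
labels = KMeans(n_clusters=8, n_init=10).fit_predict(X)
print(labels[:10])
```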
XGBoost
Speed up gradient boosting training and inference on Intel® hardware.
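A minimal training sketch using the histogram ("hist") tree method, the CPU code path that Intel's upstreamed optimizations have targeted; the synthetic data and parameters are illustrative.

```python
# A minimal XGBoost training sketch with the histogram ("hist") tree method;
# the synthetic data and parameters are illustrative placeholders.
import numpy as np
import xgboost as xgb

X = np.random.rand(5_000, 20)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

dtrain = xgb.DMatrix(X, label=y)
params = {"objective": "binary:logistic", "tree_method": "hist", "max_depth": 6}
booster = xgb.train(params, dtrain, num_boost_round=50)
print(booster.predict(dtrain)[:5])
```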
Intel® Distribution for Python*
Get near-native code performance for numerical and scientific computing.
Modin*
Accelerate pandas workflows and scale data using this DataFrame library.
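A minimal drop-in sketch: only the import changes relative to pandas. It assumes a Modin execution engine such as Ray or Dask is installed, and the CSV file and column names are placeholders.

```python
# A minimal Modin sketch: a drop-in replacement for pandas where only the
# import changes; assumes an execution engine (e.g., Ray or Dask) is
# installed, and "data.csv" plus the column names are placeholders.
import modin.pandas as pd

df = pd.read_csv("data.csv")
print(df.groupby("category")["value"].mean())
```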
Libraries
oneAPI libraries deliver code and performance portability across hardware vendors and accelerator technologies.
Intel® oneAPI Deep Neural Network Library
Deliver optimized neural network building blocks for deep learning applications.
Intel® oneAPI Data Analytics Library
Build compute-intensive applications that run fast on Intel® architecture.
Intel® oneAPI Math Kernel Library
Experience high performance for numerical computing on CPUs and GPUs.
Intel® oneAPI Collective Communications Library
Train models more quickly with distributed training across multiple nodes.
Developer Resources from AI Ecosystem Members
Develop, train, and deploy your AI solutions quickly with performance- and productivity-optimized tools from Intel.