AI Frameworks
Get performance gains of 10x to 100x for popular deep learning and machine learning frameworks through drop-in Intel® optimizations.
AI frameworks give data scientists, AI developers, and researchers the building blocks to architect, train, validate, and deploy models through a high-level programming interface. All major frameworks for deep learning and classical machine learning have been optimized using oneAPI libraries that provide optimal performance across Intel® CPUs and XPUs. These Intel software optimizations deliver orders-of-magnitude performance gains over stock implementations of the same frameworks. As a framework user, you get the full performance and productivity benefits through drop-in acceleration, with no need to learn new APIs or low-level foundational libraries.
Deep Learning Frameworks
TensorFlow*
TensorFlow* is used widely for production AI development and deployment. Its primary API is based on Python*, and it also offers APIs for a variety of languages such as C++, JavaScript*, and Java*. Intel collaborates with Google* to optimize TensorFlow for Intel processors. The newest optimizations and features are often released in Intel® Extension for TensorFlow* before they become available in open source TensorFlow.
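As a rough sketch of what this drop-in model looks like in practice, the snippet below assumes the extension has been installed (for example, pip install intel-extension-for-tensorflow[xpu] for GPU or [cpu] for CPU). Existing TensorFlow code runs unchanged; listing the physical devices simply confirms which backends the plugin has registered.

```python
# Sketch: verifying that Intel Extension for TensorFlow* is active.
# Assumes the extension is installed, e.g.:
#   pip install intel-extension-for-tensorflow[xpu]   # or [cpu]
import tensorflow as tf

# The extension loads as a TensorFlow plugin, so existing model code
# runs unchanged; here we simply list the devices TensorFlow can see.
print(tf.config.list_physical_devices())

# A plain computation is placed on the optimized backend automatically.
x = tf.random.uniform((1024, 1024))
y = tf.linalg.matmul(x, x)
print(y.device, y.shape)
```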
PyTorch*
PyTorch* is an AI and machine learning framework based on Python, and is popular for use in both research and production. Intel contributes optimizations to the PyTorch Foundation to accelerate PyTorch on Intel processors. The newest optimizations, as well as usability features, are first released in Intel® Extension for PyTorch* before they are incorporated into open source PyTorch.
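A minimal sketch of the extension's usage model, assuming it has been installed with pip install intel-extension-for-pytorch: an existing model is passed through ipex.optimize(), and the rest of the inference code stays standard PyTorch.

```python
# Sketch: applying Intel Extension for PyTorch* to an existing model.
# Assumes: pip install intel-extension-for-pytorch
import torch
import intel_extension_for_pytorch as ipex

model = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU())
model.eval()

# ipex.optimize() returns a model tuned for Intel hardware (e.g. fused ops,
# optimized memory layout); the surrounding inference code is unchanged.
model = ipex.optimize(model)

with torch.no_grad():
    out = model(torch.randn(32, 128))
print(out.shape)
```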
JAX
JAX is an open source Python library designed for complex numerical computations on high-performance devices like GPUs and TPUs (tensor processing units). It supports NumPy functions and provides automatic differentiation, as well as a composable function transformation system to build and train neural networks. JAX is supported on Intel processors using Intel Extension for TensorFlow.
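The sketch below illustrates the NumPy-style API and the composable function transformations mentioned above; the toy loss function and array shapes are purely illustrative.

```python
# Sketch: JAX's NumPy-style API with automatic differentiation and JIT.
import jax
import jax.numpy as jnp

def loss(w, x, y):
    # Simple squared-error loss written with NumPy-like operations.
    pred = jnp.dot(x, w)
    return jnp.mean((pred - y) ** 2)

grad_loss = jax.grad(loss)          # transform: gradient w.r.t. w
fast_grad = jax.jit(grad_loss)      # transform: JIT-compile via XLA

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (64, 8))
w = jnp.zeros(8)
y = jnp.ones(64)
print(fast_grad(w, x, y).shape)     # (8,)
```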
Apache MXNet*
This open source, deep learning framework is highly portable, lightweight, and designed to offer efficiency and flexibility through imperative and symbolic programming. MXNet* includes built-in support for Intel optimizations to achieve high performance on Intel® Xeon® Scalable processors.
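A brief sketch of the imperative-plus-symbolic programming model, assuming the MXNet 1.x Gluon API: the same network first runs eagerly for easy debugging, then hybridize() switches it to a compiled symbolic graph. The layer sizes are arbitrary.

```python
# Sketch: MXNet*'s imperative and symbolic modes via Gluon hybridization.
import mxnet as mx
from mxnet.gluon import nn

net = nn.HybridSequential()
net.add(nn.Dense(64, activation='relu'), nn.Dense(10))
net.initialize()

x = mx.nd.random.uniform(shape=(32, 128))
print(net(x).shape)      # imperative: runs eagerly, easy to debug

net.hybridize()          # symbolic: compiles the graph for speed
print(net(x).shape)      # same call, now running the compiled graph
```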
PaddlePaddle*
This open source, deep learning Python framework from Baidu* is known for user-friendly, scalable operations. Built using Intel® oneAPI Deep Neural Network Library (oneDNN), this popular framework provides fast performance on Intel Xeon Scalable processors and a large collection of tools to help AI developers.
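A minimal sketch of ordinary PaddlePaddle code: the oneDNN-backed acceleration described above happens inside the framework, so nothing changes at the user level. The layer sizes here are arbitrary.

```python
# Sketch: a small PaddlePaddle* model; oneDNN-backed kernels are used
# inside the framework on supported Intel CPUs, with no code changes.
import paddle

model = paddle.nn.Sequential(
    paddle.nn.Linear(128, 64),
    paddle.nn.ReLU(),
    paddle.nn.Linear(64, 10),
)

x = paddle.randn([32, 128])
print(model(x).shape)    # [32, 10]
```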
Machine Learning Frameworks
scikit-learn*
scikit-learn* is one of the most widely used Python packages for data science and machine learning. Intel® Extension for Scikit-learn* provides a seamless way to speed up many scikit-learn algorithms on Intel CPUs and GPUs, both single- and multi-node.
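A minimal sketch of the drop-in patching workflow, assuming the extension is installed as scikit-learn-intelex: calling patch_sklearn() before importing estimators swaps in the accelerated implementations, and the rest is ordinary scikit-learn code with an illustrative clustering example.

```python
# Sketch: patching scikit-learn with Intel Extension for Scikit-learn*.
# Assumes: pip install scikit-learn-intelex
from sklearnex import patch_sklearn
patch_sklearn()   # call before importing the estimators to accelerate

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=10_000, centers=8, random_state=0)
labels = KMeans(n_clusters=8, random_state=0).fit_predict(X)
print(labels[:10])
```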
XGBoost
XGBoost is an open source, gradient boosting machine learning library that performs well across a variety of data and problem types. Intel contributes oneAPI-powered software accelerations directly to open source XGBoost, so no code changes are required to benefit from them. A short sketch follows.
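The sketch below shows that no special API is needed: this is plain XGBoost code, and the upstream CPU optimizations apply automatically. The hist tree method and synthetic data are illustrative choices, not requirements.

```python
# Sketch: standard XGBoost training; the oneAPI-powered optimizations
# contributed upstream are used automatically on Intel CPUs.
import numpy as np
import xgboost as xgb

X = np.random.rand(10_000, 20)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

# 'hist' is the histogram-based tree method; whether it benefits most
# from the CPU optimizations is worth benchmarking on your own data.
model = xgb.XGBClassifier(tree_method="hist", n_estimators=100)
model.fit(X, y)
print(model.predict(X[:5]))
```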

Stay Up to Date on AI Workload Optimizations
Sign up to receive hand-curated technical articles, tutorials, developer tools, training opportunities, and more to help you accelerate and optimize your end-to-end AI and data science workflows.