Frameworks

Explore resources available for popular AI frameworks optimized for Intel® architecture, including installation guides and other learning material. We are continuously expanding our list of supported frameworks.

Development Resources

Intel® Optimization for TensorFlow*

This Python*-based deep learning framework is designed for ease of use and extensibility on modern deep neural networks and has been optimized for use on Intel® Xeon® processors.
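
As a rough sketch of how you might pick up these optimizations, the snippet below installs the Intel-distributed build and sets the standard TensorFlow threading options commonly tuned on Xeon systems; the package name is the intel-tensorflow pip package, and the thread counts are illustrative, not recommendations.

```python
# A minimal sketch: install the Intel-optimized build, then tune the
# ordinary TensorFlow threading knobs for a Xeon host.
#
#   pip install intel-tensorflow
import tensorflow as tf

# Illustrative values; map intra-op threads to physical cores on your system.
tf.config.threading.set_intra_op_parallelism_threads(8)
tf.config.threading.set_inter_op_parallelism_threads(2)

# A simple matrix multiply that MKL-DNN-backed kernels can accelerate.
a = tf.random.normal([1024, 1024])
b = tf.random.normal([1024, 1024])
print(tf.linalg.matmul(a, b).shape)
```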

Learn more

MXNet*

The open-source, deep learning framework MXNet* includes built-in support for the Intel® Math Kernel Library (Intel® MKL) and optimizations for Intel® Advanced Vector Extensions 2 (Intel® AVX2) and Intel® Advanced Vector Extensions 512 (Intel® AVX-512) instructions.
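
A minimal sketch of checking for those optimizations at runtime, assuming MXNet 1.5 or later with an MKL-DNN-enabled build (historically published as the mxnet-mkl pip package):

```python
# A minimal sketch, assuming an MKL-DNN-enabled MXNet build:
#
#   pip install mxnet-mkl
import mxnet as mx
from mxnet.runtime import Features

# Report whether this build was compiled with MKL-DNN support.
print("MKLDNN enabled:", Features().is_enabled("MKLDNN"))

# A small CPU matrix multiply that MKL-DNN/AVX-512 kernels can accelerate.
a = mx.nd.random.uniform(shape=(1024, 1024))
b = mx.nd.random.uniform(shape=(1024, 1024))
c = mx.nd.dot(a, b)
c.wait_to_read()
print(c.shape)
```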

Learn more

Intel® Optimization for Caffe*

The Intel® Optimization for Caffe* provides improved performance for one of the most popular deep learning frameworks when running on Intel® Xeon® processors.
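
As a rough illustration, a pycaffe inference pass on the CPU might look like the sketch below; the model files and the 'data' input blob named here are hypothetical placeholders.

```python
# A minimal pycaffe CPU-inference sketch; 'deploy.prototxt' and
# 'model.caffemodel' are hypothetical placeholder files.
import numpy as np
import caffe

caffe.set_mode_cpu()  # Intel Optimization for Caffe targets the CPU path
net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)

# Feed a dummy blob matching the network's input shape and run forward.
net.blobs['data'].data[...] = np.random.rand(*net.blobs['data'].data.shape)
out = net.forward()
print({name: blob.shape for name, blob in out.items()})
```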

Learn more

PyTorch*

Intel continues to accelerate and streamline PyTorch on Intel architecture, most notably on Intel® Xeon® Scalable processors, both by using the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) directly and by ensuring PyTorch is ready for the next generation of performance improvements in both software and hardware through the nGraph Compiler.
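
A minimal sketch of confirming that a given PyTorch build exposes the MKL and MKL-DNN backends, and of running an ordinary CPU op that can dispatch to them:

```python
# A minimal sketch: check the MKL/MKL-DNN backends, then run a CPU op.
import torch
import torch.nn as nn

print("MKL available:    ", torch.backends.mkl.is_available())
print("MKL-DNN available:", torch.backends.mkldnn.is_available())

# Ordinary CPU ops dispatch to MKL-DNN kernels when the backend is present.
conv = nn.Conv2d(3, 16, kernel_size=3)
x = torch.randn(1, 3, 224, 224)
print(conv(x).shape)
```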

Learn more

BigDL

BigDL is a distributed deep learning library for Apache Spark*. With BigDL, users can write their deep learning applications as standard Spark programs, which can run directly on top of existing Apache Spark or Hadoop clusters.
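
As a rough sketch, and assuming the classic BigDL 0.x Python API (bigdl.util.common, bigdl.nn.layer), a BigDL application starts up like any other PySpark job:

```python
# A minimal sketch of a BigDL program submitted like any other PySpark job,
# assuming the classic BigDL 0.x Python API.
from pyspark import SparkContext
from bigdl.util.common import create_spark_conf, init_engine
from bigdl.nn.layer import Sequential, Linear, ReLU

# Create the SparkContext with BigDL's recommended conf, then init its engine.
sc = SparkContext(conf=create_spark_conf().setAppName("bigdl-sketch"))
init_engine()

# Define a tiny feed-forward model; training would use bigdl.optim.optimizer.
model = Sequential().add(Linear(10, 5)).add(ReLU()).add(Linear(5, 1))
print(model)
```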

Learn more

Intel® Optimizations for Theano*

Theano*, a numerical computation library for Python, has been optimized for Intel® architecture and enables Intel® Math Kernel Library (Intel® MKL) functions.
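
A minimal sketch of a Theano function whose dense linear algebra can be served by MKL-backed BLAS when Theano is configured against Intel MKL; the array sizes here are illustrative.

```python
# A minimal Theano sketch; the gemm inside T.dot is where an MKL-backed
# BLAS is used when Theano is built/configured against Intel MKL.
import numpy as np
import theano
import theano.tensor as T

x = T.dmatrix("x")
w = theano.shared(np.random.randn(512, 512), name="w")
y = T.nnet.sigmoid(T.dot(x, w))

f = theano.function([x], y)
print(f(np.random.randn(4, 512)).shape)
```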

Learn more

Intel® Optimization for Chainer*

Chainer* is a Python*-based framework for deep neural networks. Intel’s optimization for Chainer is integrated with the latest release of Intel® MKL-DNN.
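
A minimal sketch, assuming a Chainer installation with the iDeep (ideep4py) backend present, which is how Chainer reaches the MKL-DNN-optimized CPU kernels:

```python
# A minimal sketch, assuming Chainer with the iDeep (ideep4py) backend.
import numpy as np
import chainer
import chainer.links as L

model = L.Linear(100, 10)
model.to_intel64()  # move parameters to the iDeep/MKL-DNN backend

with chainer.using_config("use_ideep", "auto"):
    x = np.random.rand(8, 100).astype(np.float32)
    y = model(x)
print(y.shape)
```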

Learn more

Related Content

Blog Post

01.21.20

Addressing the Memory Bottleneck in AI Model Training

Healthcare workloads, particularly in medical imaging, may use more memory than other AI workloads because they often use higher resolution...

Read More

Solution

01.13.20

Memory Bottleneck AI for Healthcare

Intel, Dell, and researchers at the University of Florida have collaborated to help data scientists optimize the analysis of healthcare...

News

12.02.19

AWS DeepComposer Enables Developers to Get Hands-On with…

The AWS DeepComposer keyboard announced at AWS re:Invent 2019. The machine learning-enabled keyboard helps developers in the field of generative...

Read More

Blog Post

10.21.19

Accelerating INT8 Inference Performance for Recommender Systems

Most inference applications today require low latency, high memory bandwidth, and large compute capacity. With the increasing use and growing...

Read more

Blog Post

10.10.19

Supporting Open Technology in a New Era of…

In my over 20 years of working with Intel, I’ve learned something very important: moving an industry forward is not...

Read more

Blog Post

09.05.19

CVAT: Speeding Up Image Annotation since 2018

We began the Computer Vision Annotation Tool (CVAT) project a few years ago in order to speed up the annotation...

Read more

Blog Post

08.28.19

Apache* MXNet* v1.5.0 Gets a Lift with Intel®…

The Apache MXNet community recently announced the v1.5.0 release of the Apache MXNet* deep learning framework. This version of Apache...

Read more

Video

08.22.19

Introducing Intel® Nervana™ Neural Network Processors for Training

With an all-new architecture that maximizes the re-use of on-die data, the Intel® Nervana™ NNP-T was purpose-built to train complex...

Watch Video
