Tools

Intel provides a set of robust development tools and libraries that help AI developers build, train, and deploy innovative AI solutions.

Intel® AI DevCloud

Free cloud compute for machine learning and deep learning training and inference, powered by Intel® Xeon® Scalable processors and available to Intel® AI Developer Program members.

Learn more

Nauta

Nauta is an integrated deep learning (DL) platform built on Kubernetes. It combines carefully selected open source components with Intel-developed applications, tools, and scripts, all validated together to deliver an easy-to-use, flexible deep learning environment. To learn more, visit the project's GitHub repository.

Learn more

PlaidML

PlaidML is an open source tensor compiler. Combined with Intel's nGraph graph compiler, it gives popular deep learning frameworks performance portability across a wide range of CPU, GPU, and other accelerator architectures.
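
As a rough sketch of how this looks in practice (assuming PlaidML's Keras backend is installed with `pip install plaidml-keras keras` and a device has been chosen with `plaidml-setup`), a standard Keras model can be executed through PlaidML by installing the backend before Keras is imported:

```python
# Minimal sketch: run a Keras model on the PlaidML backend.
# Assumes `pip install plaidml-keras keras` and that `plaidml-setup`
# has already been run to select a device.
import plaidml.keras
plaidml.keras.install_backend()  # must run before importing keras

import numpy as np
import keras
from keras.layers import Dense

model = keras.Sequential([
    Dense(64, activation="relu", input_shape=(128,)),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

x = np.random.rand(256, 128).astype("float32")
y = keras.utils.to_categorical(np.random.randint(0, 10, 256), 10)
model.fit(x, y, epochs=1, batch_size=32)  # compiled and executed by PlaidML
```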

Read more

nGraph

nGraph is an open source C++ library and runtime/compiler suite for deep learning ecosystems. With nGraph, data scientists can use their preferred deep learning framework on any number of hardware architectures, for both training and inference.
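
One way to see the framework integration in practice is the TensorFlow bridge. The sketch below assumes the `ngraph-tensorflow-bridge` package and a compatible TensorFlow 1.x build; importing the bridge routes supported TensorFlow ops through nGraph:

```python
# Sketch only: running a TensorFlow 1.x graph through nGraph via the
# ngraph-tensorflow-bridge integration (assumes `pip install ngraph-tensorflow-bridge`).
import numpy as np
import tensorflow as tf
import ngraph_bridge  # importing the bridge redirects supported ops to nGraph

a = tf.placeholder(tf.float32, shape=(None, 256))
w = tf.Variable(tf.random_normal((256, 64)))
y = tf.nn.relu(tf.matmul(a, w))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(y, feed_dict={a: np.random.rand(8, 256).astype(np.float32)})
print(out.shape)  # (8, 64)
```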

Learn more

CVAT

CVAT is an open source tool for annotating digital images and videos. It can annotate data automatically using deep learning models and prepare datasets in public formats. It is a client-server application that runs in a browser and works for both individuals and teams.

Learn more

OpenVINO™ Toolkit

Explore the OpenVINO™ toolkit (formerly the Intel® Computer Vision SDK). Make your vision a reality on Intel® platforms—from smart cameras and video surveillance to robotics, transportation, and more.
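
As a minimal sketch of the deployment flow (assuming a model already converted by the Model Optimizer to IR files `model.xml`/`model.bin`, and a toolkit release whose Python Inference Engine API includes `IECore.read_network`; exact names vary between versions):

```python
# Minimal sketch: load an OpenVINO IR model and run inference on the CPU.
# Assumes model.xml/model.bin were produced by the Model Optimizer and that
# the openvino Python bindings are installed. API names vary by release.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))

exec_net = ie.load_network(network=net, device_name="CPU")

# Dummy input shaped to whatever the network expects (e.g. NCHW for vision models).
shape = net.input_info[input_name].input_data.shape
image = np.random.rand(*shape).astype(np.float32)

result = exec_net.infer(inputs={input_name: image})
print(result[output_name].shape)
```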

Learn more

Intel® Movidius™ MDK

A software development kit for Intel® Movidius™ Myriad™ 2 and Myriad™ X VPUs that includes powerful development tools and libraries for vision applications.

Learn more

Volume Controller for Kubernetes

This project provides basic volume and data management in Kubernetes v1.9+ using custom resource definitions (CRDs), custom controllers, volumes and volume sources.

Learn more

Intel® Movidius™ Neural Compute Stick

The Intel® Movidius™ Neural Compute Stick (NCS) is a tiny, fanless deep learning development device for learning AI programming at the edge. It is powered by the Intel® Movidius™ Myriad™ 2 vision processing unit (VPU), the same low-power, high-performance VPU found in millions of smart security cameras, gesture-controlled drones, industrial machine vision equipment, and more.
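
As a brief sketch, the stick can be targeted from the OpenVINO™ Python Inference Engine simply by selecting the MYRIAD device instead of the CPU (assuming the same IR files as in the OpenVINO example above and that the toolkit's USB rules for the device are installed):

```python
# Sketch: run the same IR model on the Neural Compute Stick by selecting the
# MYRIAD device plugin (assumes model.xml/model.bin as in the OpenVINO example).
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="MYRIAD")  # executes on the Myriad VPU
```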

Learn more

Intel® Data Analytics Acceleration Library

Intel® Data Analytics Acceleration Library (Intel® DAAL) boosts C++ and Java performance in all stages of the data analytics pipeline: preprocessing, transformation, analysis, modeling, validation, and decision making. A free 30-day evaluation is available, or you can purchase the Intel® Data Analytics Acceleration Library.
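
For Python users, the daal4py package exposes the same DAAL algorithms. The following is a rough sketch (assuming `daal4py` is installed, for example from the intel conda channel) of running K-Means on in-memory data:

```python
# Sketch: K-Means clustering with daal4py, the Python interface to Intel DAAL.
# Assumes `conda install -c intel daal4py` (or an Intel Distribution for Python env).
import numpy as np
import daal4py as d4p

data = np.random.rand(10000, 8)

# Pick initial centroids, then run the clustering algorithm.
init = d4p.kmeans_init(nClusters=4, method="plusPlusDense").compute(data)
result = d4p.kmeans(nClusters=4, maxIterations=50).compute(data, init.centroids)

print(result.centroids.shape)     # (4, 8)
print(result.objectiveFunction)   # final within-cluster sum of squares
```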

Learn more

Intel® Distribution for Python*

Accelerate and scale your application performance with the Intel® Distribution for Python*, powered by Anaconda*. This performance-oriented distribution supercharges Python* applications and speeds up core computational packages.
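
A quick way to confirm the accelerated stack is active (a sketch, assuming an environment created from Intel's conda channel): NumPy should report Intel® MKL as its BLAS/LAPACK backend, and large linear-algebra operations are then dispatched to MKL automatically.

```python
# Sketch: verify that NumPy in the Intel Distribution for Python is linked
# against Intel MKL, then run a matrix multiply that MKL accelerates.
import numpy as np

np.__config__.show()  # should list MKL in the BLAS/LAPACK sections

a = np.random.rand(2000, 2000)
b = np.random.rand(2000, 2000)
c = a @ b  # GEMM dispatched to the multithreaded MKL backend
print(c.shape)
```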

Learn more

Intel® Math Kernel Library for Deep Neural Networks

Intel® MKL-DNN is a library of deep neural network (DNN) performance primitives optimized for Intel® architectures. It provides a set of highly optimized building blocks that accelerate the compute-intensive parts of deep learning applications, and it is used by DNN frameworks such as Caffe, TensorFlow, Theano, and Torch.
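
Framework users typically get MKL-DNN indirectly through an MKL-enabled build. One rough way to see the primitives at work (a sketch, assuming a TensorFlow 1.x build with MKL-DNN, for example from the intel conda channel) is to enable the library's verbose mode before running a convolution:

```python
# Sketch: surface MKL-DNN primitive logs from an MKL-enabled TensorFlow 1.x build
# (assumption: TensorFlow installed e.g. via `conda install -c intel tensorflow`).
import os
os.environ["MKLDNN_VERBOSE"] = "1"  # MKL-DNN prints one line per executed primitive

import numpy as np
import tensorflow as tf

x = tf.constant(np.random.rand(1, 224, 224, 3).astype(np.float32))
w = tf.constant(np.random.rand(3, 3, 3, 64).astype(np.float32))
y = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding="SAME")

with tf.Session() as sess:
    out = sess.run(y)  # convolution executed by an MKL-DNN primitive
print(out.shape)  # (1, 224, 224, 64)
```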

Learn more
