We encounter artificial intelligence in almost all our daily tasks: speech-to-text, photo tagging, fingerprint recognition, spam classification. We also see it contributing to cutting-edge innovations: precision medicine, injury prediction, diabetic retinopathy detection, and autonomous cars.
By allowing machines to learn, reason, act, and adapt in the real world, artificial intelligence and machine learning help businesses unlock deeper insights from massive amounts of data. Most AI algorithms need huge computing power to extract value from that data. For this reason, they rely on cloud servers for their computations and can't accomplish much at the edge: the mobile phones, computers, and other devices where the applications that use them run. Reliability is another reason AI developers favor cloud service providers these days. However, despite the enormous speed at which they process reams of data and produce valuable output, artificial intelligence applications still have one key weakness: their brains sit thousands of miles away. This limitation makes current AI algorithms useless or inefficient in settings where connectivity is sparse or absent, and where operations must be performed in a time-critical fashion.
Intel® Movidius™ Neural Compute Stick
To bring deep learning inference to the edge, Intel developed the energy-efficient, low-cost Intel® Movidius™ Neural Compute Stick (NCS): a tiny fanless deep learning device powered by the same low-power, high-performance Intel Movidius Vision Processing Unit (VPU) found in millions of smart security cameras, gesture-controlled drones, industrial machine vision equipment, and more.
The Intel® Movidius™ Neural Compute Stick makes it simple to run trained models optimally on the stick. It currently supports two deep learning frameworks: TensorFlow* and Caffe*. With it, we can easily run complex deep learning models like SqueezeNet, GoogLeNet, and AlexNet on a computer with low processing capability.
To test the performance of the Intel Movidius NCS on low-powered devices, I tried it with Intel's UP Squared* Grove* IoT Development Kit. UP² (Squared) is currently the world's fastest x86 maker board; it is based on the Intel Apollo Lake platform and is the successor to the Kickstarter-backed 2015 UP board.
The UP² board ships with Ubuntu 16.04 pre-installed (command-line interface). For my testing, I replaced it with Ubuntu 18.04 LTS. After installing the operating system, install the NCSDK by running the following commands in a terminal window:
mkdir -p ~/workspace
cd ~/workspace
sudo apt install git
git clone https://github.com/movidius/ncsdk.git
cd ~/workspace/ncsdk
sudo apt install make
make install
Now, let's test the installation by running the built-in examples. Plug the Intel Movidius NCS into your system's USB port and run these commands in a new terminal window:
cd ~/workspace/ncsdk
make examples
python ~/workspace/ncsdk/examples/apps/hello_ncs_py
If successful, you will see the following message:
Hello NCS! Device opened normally.
Goodbye NCS! Device closed normally.
NCS device working.
Image Classifier on NCS
So far we have only verified that the NCS works; now let's dive into deep learning inference on the NCS. Running an image classification demo on the NCS is simple: we can use the NC App Zoo repo to classify an image. You can also refer to Build an Image Classifier in 5 steps by Ashwin Vijayakumar. To fast-track the process to the final output, run these steps in a terminal window:
cd ~/workspace/
git clone https://github.com/movidius/ncappzoo
cd ~/workspace/ncappzoo/caffe
make all
cd ~/workspace/ncappzoo/apps/image-classifier
sudo apt-get install python3-tk
python3 image-classifier.py
There are many more such examples in the ncappzoo/apps directory to run. Here is a live image classifier demo running on GenderNet (requires a webcam):
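Conceptually, the live demo loops: grab a webcam frame, preprocess it to the network's input size, load the tensor onto the NCS, and read back the result. Here is a NumPy-only sketch of just the preprocessing step; the 227×227 input size and the BGR channel means are assumptions based on typical Caffe age/gender models, and a real demo would use cv2.resize on frames from cv2.VideoCapture:

```python
import numpy as np

def preprocess(frame, size=227, mean=(78.4, 87.8, 114.9)):
    """Center-crop a BGR frame to a square and resample it to size x size.

    The 227x227 input and the channel means are assumptions for GenderNet;
    check the model's prototxt for the values it actually expects.
    """
    h, w, _ = frame.shape
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    crop = frame[top:top + s, left:left + s]
    idx = np.arange(size) * s // size            # nearest-neighbour indices
    resized = crop[idx][:, idx].astype(np.float32)
    return resized - np.array(mean, dtype=np.float32)  # mean subtraction

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in webcam frame
print(preprocess(frame).shape)  # -> (227, 227, 3)
```

The resulting array is what gets handed to the NCS for inference on each frame.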
MNIST using TensorFlow on NCS
When you learn how to program, there's a tradition that the first thing you do is print "Hello World." Just as programming has Hello World, machine learning has MNIST. MNIST is a simple computer vision dataset. It consists of images of handwritten digits like these:
It also includes a label for each image, telling us which digit it is. For example, the labels for the images above are 5, 0, 4, and 1. The NC App Zoo repo provides a Makefile that does the following:
- Downloads a trained model
- Downloads test images
- Compiles the network using the Neural Compute SDK
- Provides a Python example (run.py) that runs an inference on all of the test images, showing how to use the network with the Neural Compute API provided in the Neural Compute SDK
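For context on what a script like run.py feeds the network: each MNIST image is a 28×28 grayscale array with pixel values 0–255, paired with a label 0–9. A short sketch of typical preprocessing follows; the exact scaling and tensor layout here are assumptions, so check run.py for the format the compiled graph actually expects:

```python
import numpy as np

# A stand-in for one MNIST digit: 28x28 grayscale, pixel values 0-255.
image = np.random.randint(0, 256, size=(28, 28), dtype=np.uint8)

# Normalize to [0, 1] and add batch/channel dimensions (layout is an
# assumption -- run.py defines what the compiled graph expects).
tensor = (image / 255.0).astype(np.float16).reshape(1, 28, 28, 1)
print(tensor.shape)  # -> (1, 28, 28, 1)
```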
I tried running a pre-trained MNIST model using TensorFlow on the NCS with the following commands:
cd ~/workspace/ncappzoo/tensorflow
make all
cd ~/workspace/ncappzoo/tensorflow/mnist
make all
python3 run.py
I even tried feeding the model some random test images (other than the ones downloaded by the Makefile), and its accuracy was about 98.9%.
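A figure like this can be reproduced by comparing each predicted digit (the argmax of the network's ten output scores) against the true label across the test set. A minimal sketch, with the prediction lists as illustrative placeholders rather than real NCS output:

```python
def accuracy(predictions, labels):
    """Fraction of test images where the predicted digit matches the label."""
    correct = sum(1 for p, t in zip(predictions, labels) if p == t)
    return correct / len(labels)

# Placeholder values; real predictions come from NCS inference.
print(accuracy([5, 0, 4, 1], [5, 0, 4, 7]))  # -> 0.75
```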
Getting started with the NCS and running inferences is very simple, and it makes it very easy for AI developers to test their prototypes on low-powered devices by following just these five simple steps:
Step 1: Open the enumerated device
Step 2: Load a graph file onto the NCS
Step 3: Offload a single image onto the Intel Movidius NCS to run inference
Step 4: Read and print inference results from the NCS
Step 5: Unload the graph and close the device
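The five steps above map directly onto the NCSDK's Python API (the mvnc module in NCSDK v1). Here is a minimal sketch assuming a graph file already compiled with the SDK; the graph path, the float16 input format, and the top_k helper are illustrative, not part of the API:

```python
def top_k(probabilities, k=5):
    """Return the indices of the k highest-scoring classes."""
    order = sorted(range(len(probabilities)),
                   key=lambda i: probabilities[i], reverse=True)
    return order[:k]

def infer(graph_path, image):
    """Run one inference on the NCS following the five steps."""
    import numpy
    from mvnc import mvncapi as mvnc

    # Step 1: open the enumerated device
    devices = mvnc.EnumerateDevices()
    if not devices:
        raise RuntimeError("No NCS device found")
    device = mvnc.Device(devices[0])
    device.OpenDevice()

    # Step 2: load a graph file onto the NCS
    with open(graph_path, "rb") as f:
        graph = device.AllocateGraph(f.read())

    # Step 3: offload a single image onto the NCS to run inference
    graph.LoadTensor(image.astype(numpy.float16), "user object")

    # Step 4: read inference results from the NCS
    output, _ = graph.GetResult()

    # Step 5: unload the graph and close the device
    graph.DeallocateGraph()
    device.CloseDevice()
    return top_k(output)
```

With an NCS plugged in, `infer("graph", preprocessed_image)` would return the indices of the five most likely classes.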
I’m quite impressed with the NCS capabilities so far. It works great with the UP Squared board, and I think it is great value for offline AI prototyping as well as for connecting AI and IoT. And not to be missed: it pairs quite well with the Raspberry Pi. Happy tinkering with the NCS!
Product and Performance Information
Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.