Building Open Source OpenVINO™ toolkit for Raspbian* OS and Intel® Neural Compute Stick 2


Install & Setup



  • All steps are required in the installation.
  • These steps have been tested with Raspberry Pi 4* board and Raspbian* Buster, 32-bit.
  • An Internet connection is required to follow the steps in this guide.
  • The article was verified using the 2021.3 release of the open-source distribution of the OpenVINO™ toolkit.

The OpenVINO™ toolkit quickly deploys applications and solutions that emulate human vision. Based on convolutional neural networks (CNNs), the toolkit extends computer vision (CV) workloads across Intel® hardware, maximizing performance. These steps generally follow the article about the Intel® Neural Compute Stick 2 and the open-source OpenVINO™ toolkit, but they include specific changes to get everything running on your board.

This guide provides steps for building the open-source distribution of the OpenVINO™ toolkit for Raspbian* OS and using it with the Intel® Neural Compute Stick 2 (Intel® NCS2).

Note The original Intel® Movidius™ Neural Compute Stick is also compatible with the OpenVINO™ toolkit, and that device can be used instead of the Intel® Neural Compute Stick 2 throughout this article.


System requirements
Note This guide assumes you have your Raspberry Pi* board up and running with an operating system listed below.


  • Raspberry Pi* 4 (Raspberry Pi* 3 Model B+ should also work)
  • At least an 8-GB microSD Card
  • Intel® Neural Compute Stick 2
  • Ethernet Internet connection or compatible wireless network
  • Dedicated DC Power Adapter

Target operating system

  • Raspbian* Stretch, 32-bit

  • Raspbian* Buster, 32-bit
Setting up your build environment
Note This guide contains commands that need to be executed as root or with sudo to install correctly.

Make sure your device software is up to date:

sudo apt update && sudo apt upgrade -y

Some of the toolkit’s dependencies do not have prebuilt ARMv7 binaries and need to be built from source. This can increase the build time significantly compared to other platforms. Preparing to build the toolkit requires the following steps:

Installing build tools

sudo apt install build-essential

Installing CMake* from source

Fetch CMake from the Kitware* GitHub* release page, extract it, and enter the extracted folder:

cd ~/

wget https://github.com/Kitware/CMake/releases/download/v3.14.4/cmake-3.14.4.tar.gz

tar xvzf cmake-3.14.4.tar.gz

cd ~/cmake-3.14.4

Run the bootstrap script to install additional dependencies and begin the build:

./bootstrap

make -j4

sudo make install

Note The number of jobs the make command uses can be adjusted with the -j flag. It is recommended to set the number of jobs to the number of cores on your platform.

You can check the number of cores on your system by using the command:

grep -c ^processor /proc/cpuinfo

Be aware that setting the number too high can lead to memory overruns, causing the build to fail. If time permits, running 1 or 2 jobs is safer.
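As a sketch (the variable names here are our own, not part of the toolkit), you can derive the job count from the core count before invoking make:

```shell
# Pick a parallel job count from the core count (variable names are our own).
CORES=$(grep -c ^processor /proc/cpuinfo)
JOBS=$CORES            # lower this to 1 or 2 if the build runs out of memory
echo "building with make -j${JOBS}"
```

On a Raspberry Pi 4 this yields -j4; drop to -j2 or -j1 if the build is terminated by the out-of-memory killer.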

Installing OpenCV from source

Intel® OpenVINO™ toolkit uses the power of OpenCV* to accelerate vision-based inferencing. While the CMake process for the Intel® OpenVINO™ toolkit downloads OpenCV* on supported platforms when no version is installed, no prebuilt version exists for ARMv7 platforms. As such, you must build OpenCV from source.

OpenCV requires some additional dependencies. Install the following from your package manager:

sudo apt install git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev libatlas-base-dev python3-scipy 

Note It is recommended to specify the latest stable branch or tag when cloning the repository from the OpenCV* GitHub page instead of cloning the default master branch.

Clone the repository from OpenCV* GitHub page, prepare the build environment, and build:

cd ~/

git clone --depth 1 --branch 4.5.2-openvino https://github.com/opencv/opencv.git

cd opencv && mkdir build && cd build

cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local ..

make -j4

sudo make install

Downloading source code and installing dependencies
Note It is recommended to specify the latest stable branch or tag when cloning the repository from the openvinotoolkit GitHub page instead of cloning the default master branch.

The open-source version of Intel® OpenVINO™ toolkit is available through GitHub. The repository folder is titled openvino.

cd ~/

git clone --depth 1 --branch 2021.3 https://github.com/openvinotoolkit/openvino.git

The repository also has submodules that must be fetched:

cd ~/openvino

git submodule update --init --recursive

Intel® OpenVINO™ toolkit has a number of build dependencies. The install_build_dependencies.sh script in the repository root fetches them for you. If any issues arise when running the script, you must install each dependency individually.

Run the script to install the dependencies for Intel® OpenVINO™ toolkit:

sh ./install_build_dependencies.sh

If the script finishes successfully, you are ready to build the toolkit. If something failed at this point, make sure that you install any listed dependencies and try again.


The first step of the build is telling the system where your installation of OpenCV is. Use the following command:

export OpenCV_DIR=/usr/local/lib/cmake/opencv4
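To confirm that the variable points at a real OpenCV installation, here is a quick check (our own snippet; the path matches the default install prefix used earlier in this guide):

```shell
# Verify that the exported OpenCV_DIR actually contains the CMake package files.
export OpenCV_DIR=/usr/local/lib/cmake/opencv4
if [ -f "${OpenCV_DIR}/OpenCVConfig.cmake" ]; then
    echo "OpenCV config found in ${OpenCV_DIR}"
else
    echo "OpenCV config not found; re-run 'sudo make install' in the OpenCV build directory"
fi
```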

To build the Python API wrapper, install all additional packages listed in the /inference-engine/ie_bridges/python/requirements.txt file:

cd ~/openvino/inference-engine/ie_bridges/python/

pip3 install -r requirements.txt


Use the -DENABLE_PYTHON=ON option. To specify an exact Python version, use the following options:

-DPYTHON_EXECUTABLE=`which python3.7` \

-DPYTHON_LIBRARY=/usr/lib/arm-linux-gnueabihf/ \

-DPYTHON_INCLUDE_DIR=/usr/include/python3.7

Use the -DNGRAPH_ONNX_IMPORT_ENABLE=ON option to enable the building of the nGraph ONNX importer.

Use the -DNGRAPH_PYTHON_BUILD_ENABLE=ON option to enable the nGraph binding.

Use the -DCMAKE_INSTALL_PREFIX={BASE_dir}/openvino_dist option to specify the directory that CMake installs the build into, for example, -DCMAKE_INSTALL_PREFIX=/home/pi/openvino_dist.

The toolkit uses a CMake build system to guide and simplify the building process. To build both the Inference Engine and the MYRIAD plugin for Intel® Neural Compute Stick 2, use the following commands:

Note The backslashes (\) indicate that a single command continues on the next line. If you enter the command on one line, remove the backslashes.

cd ~/openvino

mkdir build && cd build

cmake -DCMAKE_BUILD_TYPE=Release \

-DCMAKE_INSTALL_PREFIX=/home/pi/openvino_dist \

-DENABLE_PYTHON=ON \

-DNGRAPH_ONNX_IMPORT_ENABLE=ON \

-DNGRAPH_PYTHON_BUILD_ENABLE=ON \

-DPYTHON_EXECUTABLE=$(which python3.7) \

-DPYTHON_LIBRARY=/usr/lib/arm-linux-gnueabihf/ \

-DPYTHON_INCLUDE_DIR=/usr/include/python3.7 \

-DCMAKE_CXX_FLAGS=-latomic ..

make -j4

sudo make install

If the make command fails because of an issue with an OpenCV library, make sure that you’ve told the system where your installation of OpenCV is. If the build completes at this point, Intel® OpenVINO™ toolkit is ready to run. Note that the built binaries are placed in the ~/openvino/bin/armv7l/Release folder.
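If OpenCV libraries are not found at link or run time, one common cause is a stale dynamic linker cache. The following diagnostic sketch is our own, not part of the toolkit:

```shell
# Count OpenCV entries in the dynamic linker cache; a count of 0 after
# 'sudo make install' usually means 'sudo ldconfig' still needs to be run.
LDCONFIG=$(command -v ldconfig || echo /sbin/ldconfig)
COUNT=$("$LDCONFIG" -p | grep -ci opencv || true)
echo "OpenCV libraries in linker cache: ${COUNT}"
```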

Verifying installation

After successfully completing the inference engine build, you should verify that everything is set up correctly. To verify that the toolkit and Intel® Neural Compute Stick 2 work on your device, complete the following steps:

  1. Run the sample program object_detection_sample_ssd to confirm that all libraries load correctly.
  2. Download a trained model.
  3. Select an input for the neural network.
  4. Configure the Intel® Neural Compute Stick 2 Linux* USB driver.
  5. Run object_detection_sample_ssd with selected model and input.

Sample applications

The Intel® OpenVINO™ toolkit includes some sample applications that utilize the Inference Engine and Intel® Neural Compute Stick 2. One of the programs is object_detection_sample_ssd, which can be found in the build output folder, ~/openvino/bin/armv7l/Release.


Run the following commands to test object_detection_sample_ssd:

cd ~/openvino/bin/armv7l/Release

./object_detection_sample_ssd -h

It should print a help dialog, describing the available options for the program.

Downloading a model

The program needs a model to pass the input through. You can obtain models for Intel® OpenVINO™ toolkit in IR format by:

  • Using the Model Optimizer to convert an existing model from one of the supported frameworks into IR format for the Inference Engine
  • Using the Model Downloader tool to download a model from the Open Model Zoo
  • Downloading the IR files directly from the Open Model Zoo storage

For our purposes, downloading directly is easiest. Use the following commands to grab a person-vehicle-bike detection model:

cd ~/Downloads



Note The Intel® Neural Compute Stick 2 requires models that are optimized for the 16-bit floating-point format known as FP16. If your model differs from the example, it may require conversion to FP16 using the Model Optimizer.

Input for the neural network

The last item needed is input for the neural network. For the model we’ve downloaded, you need an image with 3 channels of color. Download the necessary files to your board:

cd ~/Downloads
wget -O walk.jpg

Configuring the Intel® Neural Compute Stick 2 Linux USB Driver

Some udev rules need to be added to allow the system to recognize Intel® NCS2 USB devices. 

Note If the current user is not a member of the users group, run the following command and reboot your device.

sudo usermod -a -G users "$(whoami)"
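To check membership before rebooting, you can use a small sketch like the following (the helper logic is our own):

```shell
# Report whether the current user is already in the 'users' group.
USER_NAME=$(whoami)
if id -nG "$USER_NAME" | grep -qw users; then
    IN_GROUP=yes
else
    IN_GROUP=no
fi
echo "user ${USER_NAME} in 'users' group: ${IN_GROUP}"
```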

Set up the OpenVINO™ environment:

source /home/pi/openvino_dist/bin/setupvars.sh

To perform inference on the Intel® Neural Compute Stick 2, install the USB rules by running the script:

sh /home/pi/openvino_dist/install_dependencies/install_NCS_udev_rules.sh

The USB driver should be installed correctly now. If the Intel® Neural Compute Stick 2 is not detected when running demos, restart your device and try again.
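One way to check detection is to look for the Intel Movidius USB vendor ID (03e7) with lsusb; this sketch (our own) guards against usbutils being absent:

```shell
# Detect an NCS2 by its USB vendor ID (Intel Movidius devices use 03e7).
STATUS="lsusb not available; install usbutils to check"
if command -v lsusb >/dev/null 2>&1; then
    if lsusb 2>/dev/null | grep -qi "03e7"; then
        STATUS="Intel Movidius device detected"
    else
        STATUS="no Movidius device found; replug the stick or reboot"
    fi
fi
echo "$STATUS"
```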

Running object_detection_sample_ssd

When the model is downloaded, an input image is available, and the Intel® Neural Compute Stick 2 is plugged into a USB port, use the following command to run the object_detection_sample_ssd:

cd ~/openvino/bin/armv7l/Release

./object_detection_sample_ssd -i ~/Downloads/walk.jpg -m ~/Downloads/person-vehicle-bike-detection-crossroad-0078.xml -d MYRIAD

This runs the application with the selected options. The -d flag tells the program which device to use for inferencing. MYRIAD activates the MYRIAD plugin, utilizing the Intel® Neural Compute Stick 2. After the command successfully executes, the terminal displays inferencing statistics and produces an image output.

[ INFO ] Image out_0.bmp created!
[ INFO ] Execution successful
[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool

Use the default Raspbian* image viewer to open the resulting output image:

gpicview out_0.bmp

If the application ran successfully on your Intel® NCS2, OpenVINO™ toolkit and Intel® Neural Compute Stick 2 are set up correctly for use on your device.

Verifying nGraph module binding to Python

Run the object_detection_sample_ssd Python demo:

source /home/pi/openvino_dist/bin/setupvars.sh

cd /home/pi/openvino_dist/deployment_tools/inference_engine/samples/python/object_detection_sample_ssd

python3 object_detection_sample_ssd.py -i ~/Downloads/walk.jpg -m ~/Downloads/person-vehicle-bike-detection-crossroad-0078.xml -d MYRIAD

If the application ran successfully on your Intel® NCS2, the nGraph module is binding correctly to Python.

Environment variables

You must update several environment variables before you can compile and run OpenVINO toolkit applications. Run the following script to temporarily set the environment variables:

source /home/pi/openvino_dist/bin/setupvars.sh

(Optional) The OpenVINO environment variables are removed when you close the shell. You can instead set the environment variables permanently as follows:

echo "source /home/pi/openvino_dist/bin/setupvars.sh" >> ~/.bashrc
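Appending blindly adds a duplicate line every time you run the command; an idempotent variant looks like this (demonstrated on a temporary file so your real ~/.bashrc is untouched):

```shell
# Append the setup line only if it is not already present (shown on a temp file).
RC=$(mktemp)
LINE='source /home/pi/openvino_dist/bin/setupvars.sh'
append_once() { grep -qxF "$LINE" "$RC" || echo "$LINE" >> "$RC"; }
append_once
append_once   # the second call is a no-op
COUNT=$(grep -cxF "$LINE" "$RC")
echo "entries: ${COUNT}"
rm -f "$RC"
```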

To test your change, open a new terminal. You will see the following:

[setupvars.sh] OpenVINO environment initialized

This completes the build procedure for the open-source distribution of OpenVINO™ toolkit for Raspbian* OS and usage with Intel® Neural Compute Stick 2.

Related topics
Building Open Model Zoo Demos on Raspberry Pi*
Workflow for Raspberry Pi*
The ncappzoo now supports the Intel® NCS 2 and the OpenVINO™ toolkit
OpenVINO™ toolkit Open Model Zoo
Optimize Networks for the Intel® Neural Compute Stick (Intel® NCS 2) Device
Community Forum and Technical Support