Intel® Distribution of OpenVINO™ Toolkit Release Notes

ID: 780177
Updated: 3/19/2020
Version: Public


NOTE: For the Release Notes for the 2018 version, refer to Release Notes for Intel® Distribution of OpenVINO™ Toolkit 2018

Introduction

The Intel® Distribution of OpenVINO™ toolkit is a comprehensive toolkit for quickly developing applications and solutions that emulate human vision. Based on Convolutional Neural Networks (CNNs), the toolkit extends CV workloads across Intel® hardware, maximizing performance.

The Intel® Distribution of OpenVINO™ toolkit:

  • Enables CNN-based deep learning inference on the edge.
  • Supports heterogeneous execution across Intel CV accelerators, using a common API for the CPU, Intel® Integrated Graphics, Intel® Movidius™ Neural Compute Stick (NCS), Intel® Neural Compute Stick 2, Intel® Vision Accelerator Design with Intel® Movidius™ VPUs and Intel® FPGAs.
  • Speeds time-to-market through an easy-to-use library of CV functions and pre-optimized kernels.
  • Includes optimized calls for CV standards, including OpenCV*, OpenCL™, and OpenVX*.

New and Changed in the Release 3.2

Executive summary

Intel® Distribution of OpenVINO™ Toolkit 2019 R3.2 includes updates and bug fixes.

Packages are available in the 2019 R3.1 download record (see the Related downloads section).

IMPORTANT: By downloading and using these packages, you agree to the terms and conditions of the software license agreements located here. Please review the content inside the <openvino_install_root>/licensing folder for more details.

  • l_openvino_toolkit_ubuntu16_p_2019.3.551.tgz (105 MB)
  • l_openvino_toolkit_ubuntu18_p_2019.3.551.tgz (109 MB)
  • m_openvino_toolkit_p_2019.3.551.tgz (103 MB)
  • w_openvino_toolkit_p_2019.3.551.zip (158 MB)

Inference Engine

  • libtbbmalloc_proxy.so is now a part of the OpenVINO™ binary package. For details on replacing the system memory allocator with the TBB allocator, refer to https://github.com/oneapi-src/oneTBB
  • Fixed a crash caused by extra memory usage in 1x1 convolutions with strides.
  • Fixed Inference Engine exception handling on 3rd Generation Intel® Core™ Processor (formerly Ivy Bridge) systems.
  • Added wide-character support for library paths in plugins.xml in the Core object.

New and Changed in the Release 3.1

Executive Summary

  • Intel® Distribution of OpenVINO™ Toolkit 2019 R3.1 includes bug fixes. Users should update to the latest version.
  • Introduced Model Optimizer support within Deep Learning Workbench (DL Workbench). Now you can start using DL Workbench with an original pre-trained model and proceed to model profiling and optimization through an intuitively clear conversion step. The conversion step is simplified by an internal analysis of the provided model, which suggests the required Model Optimizer parameters (normalization, shapes, inputs).
  • Simplified the model import process in the Deep Learning Workbench (DL Workbench). The new flow does not require that you set parameters for accuracy measurement during the import step. The parameters are required only if you want to measure accuracy and/or do INT8 calibration. This significantly simplifies the user journey for first inference experiments and decreases the learning curve, while keeping all the necessary functionality for advanced users.
  • Added bitstreams for Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA (Intel® PAC with Intel® Arria® 10 GX FPGA).

New and Changed in the Release 3

Executive Summary

  • Intel® Distribution of OpenVINO™ Toolkit 2019 R3 includes functional and security updates. Users should update to the latest version.
  • Added support for 10th generation Intel® Core™ processors, which are purpose-built for accelerating AI workloads through Intel® Deep Learning Boost. They include a GPU, as well as Intel® Gaussian & Neural Accelerator (Intel® GNA) for offloading critical workloads. Easily deploy inference applications optimized by Intel® Distribution of OpenVINO™ toolkit on these processors with minimal to no code changes and achieve high performance.
  • Introduced a new Command Line Deployment Manager tool to help you generate the optimal, minimized runtime package for your selected target device. With this tool, the Inference Engine can be deployed with pre-compiled application-specific data, such as models, configuration, and a subset of required hardware plugins. It makes the deployment footprint several times smaller than the development footprint. For more details, see Introduction to CLI Deployment Manager.
  • Improved performance through network loading optimizations, speeding up inference by reducing model loading time. This is useful when the shape size changes between inferences.
  • Added support for Ubuntu* 18.04 as a primary platform for developers. Ubuntu 16.04 is still supported, but with reduced validation scope.
  • Added three new pre-trained models for vision:
    • Detect precise boundaries of objects (for example, in crowded scenes where bounding boxes are insufficient) with Mask-RCNN based instance segmentation.
    • Retrieve images to solve problems with artificial patterns (for example, textiles).
    • Use super resolution to increase quality of printed documents while upscaling the image.

Model Optimizer

Common Changes

Implemented conversion of 0D tensors to 1D tensors with one item, which results in successful conversion and inference of some models.

ONNX*

Added support for the following ONNX operations:

  • ConstantOfShape
  • Expand
  • Floor
  • Not
  • ReduceMin
  • TopK
  • Sqrt
  • Equal
  • Less
  • Greater
  • And
  • Or
  • NonMaxSuppression
  • Slice (opset 10)

TensorFlow*

  • Added support for the following TensorFlow operations:
    • Swish
    • AddV2
    • BiasAdd with data_format = ‘NCHW’
  • Added support for the following TensorFlow topologies:
    • Latest version of YOLO* models
    • EfficientNet
  • Fixed several bugs in the conversion of TensorFlow Object Detection API SSD models and models with FakeQuantize.

MXNet*

  • Added support for the following MXNet operations:
    • expand_dims
    • broadcast_div
    • broadcast_sub
    • elemwise_div
    • broadcast_maximum
    • broadcast_minimum
    • broadcast_greater
    • broadcast_greater_equal
    • broadcast_equal
    • broadcast_not_equal
    • broadcast_lesser
    • broadcast_lesser_equal
    • broadcast_power
    • broadcast_logical_and
    • broadcast_logical_or
    • _greater_equal_scalar
    • _equal_scalar
    • _not_equal_scalar
    • _lesser_equal_scalar
    • _lesser_scalar
    • _maximum_scalar
    • _minimum_scalar
  • Fixed bug in the Deconvolution output shape calculation.

Kaldi*

Added support for the following Kaldi operation: pnormcomponent.

Inference Engine

CPU Plugin

Optimized network loading: it takes less time compared to R2 for most topologies (~2.5X speedup on average, up to 12X, depending on the hardware and topology). This enables efficient network reloading if its input shape changes from inference to inference (for example, when an input image is produced by the output of the previous network in the pipeline, such as in the case of object detection/classification), and resizing is undesirable (for example, if it causes accuracy degradation).
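
Below is a minimal sketch of the reloading flow that this optimization speeds up, using the 2019-era Python API (openvino.inference_engine). The IR paths and shape values are placeholders, not part of the release:

    from openvino.inference_engine import IECore, IENetwork
    import numpy as np

    ie = IECore()
    net = IENetwork(model="model.xml", weights="model.bin")  # placeholder IR
    input_name = next(iter(net.inputs))

    exec_net = ie.load_network(network=net, device_name="CPU")

    # A frame with a different spatial size arrives, e.g. a crop produced by an
    # upstream detection network in the pipeline.
    new_shape = [1, 3, 320, 544]  # illustrative shape
    net.reshape({input_name: new_shape})
    # Reloading the reshaped network benefits from the faster network loading in R3.
    exec_net = ie.load_network(network=net, device_name="CPU")

    result = exec_net.infer({input_name: np.zeros(new_shape, dtype=np.float32)})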

GPU Plugin

  • Added support for 0D, or Scalar, tensors.
  • Introduced support for the following new layers with Shape Infer option:
    • NonMaxSuppression
    • ScatterUpdate

MYRIAD Plugin

  • Aligned VPU firmware with Intel® Movidius™ Myriad™ X Development Kit (MDK) R9 release.
  • Added OSX support for Intel® Neural Compute Stick 2.
  • Added support for miniPCIe* device with Intel® Movidius™ Vision Processing Unit (VPU) on Linux* in Preview Feature mode.

HDDL Plugin

Added OpenCL™ custom layer support for the Intel® Vision Accelerator Design with Intel® Movidius™ VPUs in the Preview Feature mode:

  • OpenCL™ compiler, targeting SHAVE processor only, is redistributed with OpenVINO™. OpenCL support is provided by ComputeAorta, and is distributed under a license agreement between Intel and Codeplay Software Ltd.

GNA Plugin

  • Added support for more than two inputs for Concat layer.
  • Fixed issue with GNA device reopening.
  • Removed a limitation of the Convolution layer: it can now have fewer than 48 filters.
  • Fixed an issue with Convolution layer padding when the kernel size is not a multiple of 8.

FPGA Plugin

Boards:

  • Bitstreams for the Intel® Vision Accelerator Design with Intel® Arria® 10 FPGA (Mustang-F100-A10) Speed Grade 1 and Speed Grade 2 are part of OpenVINO R3 release.
  • Bitstreams for the Intel® Programmable Acceleration Card (PAC) with Intel® Arria® 10 GX FPGA are not included in the OpenVINO R3 release. We recommend that users of the Intel® Programmable Acceleration Card (PAC) with Intel® Arria® 10 GX FPGA continue to use OpenVINO 2019 R2 until the bitstreams are added to the distribution.
    • Bitstreams for this board might be published separately.

Deep Learning Workbench

  • Added support for runtime graph visualization for VPU targets.
  • Moved to new version of Model Downloader API (model descriptions, selection of precision).
  • Introduced One-Page Import Wizard (UI improvement for importing and selecting models/datasets).

OpenCV*

Version changed to 4.1.2.

OpenVX*

No updates.

Examples and Tutorials

Python*

  • Created a Python version of the object_detection_sample_ssd.
  • Moved Python version of the benchmark_app to Python tools.

Tools

Refactored the latest version of the Python Benchmark Application to make it a part of the Python tools. The Python Benchmark Application is now a common tool that can be used for all performance measurements. The C++ version of the tool is still a part of the C++ samples.

Open Models Zoo

  • Extended the Open Model Zoo, which includes additional CNN-pretrained models and pre-generated Intermediate Representations (.xml + .bin):
    • [NEW] image-retrieval-0001: Ranks gallery images according to their similarity to a probe image.
    • [NEW] text-image-super-resolution-0001: Upscales scanned images with text.
  • Added three new demo applications:
    • [NEW] Interactive Face Recognition Demo (Python): Face Detection coupled with Head-Pose, Facial Landmarks, and Face Recognition detectors. Detects faces and their keypoints and recognizes persons using provided faces database. Supports video and camera inputs.
    • [NEW] Multi-Camera Multi-Person Tracking Demo (Python): Person Detection coupled with Person Re-identification models. Tracks multiple persons across multiple cameras.
    • [NEW] Image Retrieval Demo (Python): Demonstrates how to run Image Retrieval models to find images from a provided image gallery that are the most similar to an input video frame.
  • Moved Model Downloader tool configuration files to separate per-model folders in order to improve user experience and simplify the contribution process (fewer merge conflicts while developing/merging several models at the same time). The list is extended to support the following public models in Caffe, TensorFlow, MXNet, and PyTorch* formats:
    Model Name Framework
    caffenet Caffe
    mask_rcnn_inception_resnet_v2_atrous_coco TensorFlow
    mask_rcnn_inception_v2_coco TensorFlow
    mask_rcnn_resnet50_atrous_coco TensorFlow
    mask_rcnn_resnet101_atrous_coco TensorFlow
    ssdlite_mobilenet_v2 TensorFlow
    ssd_mobilenet_v1_fpn_coco TensorFlow
    mobilenet-v2-1.0-224 TensorFlow
    octave-densenet-121-0.125 MXNet
    octave-resnet-26-0.25 MXNet
    octave-resnet-50-0.125 MXNet
    octave-resnet-101-0.125 MXNet
    octave-resnet-200-0.125 MXNet
    octave-se-resnet-50-0.125 MXNet
    octave-resnext-50-0.25 MXNet
    octave-resnext-101-0.25 MXNet
    face-recognition-mobilefacenet-arcface MXNet
    face-recognition-resnet100-arcface MXNet
    face-recognition-resnet34-arcface MXNet
    face-recognition-resnet50-arcface MXNet
    mobilenet-v2-pytorch PyTorch
    googlenet-v3-pytorch PyTorch
    resnet-50-pytorch PyTorch
  • Introduced a common way of specifying input for OMZ demos with a -i/--input parameter that significantly simplifies test automation.
  • Extended unit test coverage for demos.

New and Changed in the Release 2.0.1

Executive summary

  • Fixed Inference Engine samples usability issue.
  • Fixed issue with missing OpenVX* samples.

New and Changed in the Release 2

Executive summary

  • Intel® Distribution of OpenVINO™ Toolkit 2019 R2 includes functional and security updates. Users should update to the latest version.
  • Added Deep Learning Workbench Profiler for neural network topologies and layers. DL Workbench provides the following features: 
    • Visualization of key metrics such as latency, throughput and performance counters
    • Easy configuration for inference experiments including INT8 calibration 
    • Accuracy check and automatic detection of optimal performance settings.
    • To learn more, see the documentation.
  • Added new non-vision topologies: GNMT, BERT, TDNN (NNet3), ESPNet, etc. to enable machine translation, natural language processing and speech use cases.
  • Introduced new Inference Engine Core APIs. The Core APIs automate direct mapping to devices and provide a Query API for configuration and metrics to determine the best deployment platform.
  • Added Multi Device Inference with automatic load-balancing across available devices for higher throughput. To learn more, see the documentation.
  • Serialized FP16 Intermediate Representation now works uniformly across all platforms, reducing model size by 2x compared to FP32 and improving device memory utilization and model portability.

Model Optimizer

Common changes

  • Updated the IR version from 5 to 6.
  • Extended the --input command-line parameter to specify shapes and values for freezing arbitrary nodes (not only model inputs). The command-line parameter --freeze_placeholder_with_value is deprecated.
  • Implemented fusing of a Softmax layer pattern from PyTorch*.
  • Added a new model transformation API to write better Model Optimizer extensions.
  • Renamed the Intel® experimental layer Quantize to FakeQuantize and the ONNX Intel® experimental operator Quantize to FakeQuantize.

ONNX*

  • Added support of the following ONNX operations:
    • DeformableConvolution
    • Upsample (7th and 9th versions of opset)
    • Gemm (alpha and beta attributes)
  • Added support of the following PyTorch* topologies through conversion to ONNX: ESPNet models from https://github.com/sacmehta/ESPNet/tree/master/pretrained.

TensorFlow*

  • Added support of the following TensorFlow operations: Erf, BatchMatMul, SpaceToDepth, Fill, Select, OneHot, TopK, GatherTree, LogicalAnd, LogicalOr, Equal, NotEqual, Less, LessEqual, Greater, GreaterEqual, Squeeze and ExpandDims (not converted to Reshape layer anymore).
  • Added support for dynamic sequence lengths in TensorFlow recurrent models.

Caffe*

Added support of SSH model (Single Stage Headless Face Detector) from https://github.com/mahyarnajibi/SSH.

MXNet*

Added support of the following MXNet operations: DeformableConvolution, DeformablePSROIPooling, Where, exp, slice_like, div_scalar, minus_scalar, greater_scalar, elemwise_sub.

Kaldi*

Added support for nnet3 format TDNN network.

Inference Engine

Common changes

  • Inference Engine Core API is implemented:
    • Manages Inference Engine plugins internally, no need to manually load plugins
    • Repeats Plugin API and provides the same functionality
    • Provides Query API to get information about available devices, their configuration and metrics (see the sketch after this list)
  • Deprecated old Inference Engine API:
    • Inference Engine plugin obsolete methods (LoadNetwork without providing ExecutableNetwork, Infer, GetPerformanceCounts)
    • Creation of blobs with reversed dimensions
    • Hetero plugin class. Replaced by InferenceEngine::Core class with HETERO device
    • TargetDevice enumeration and all methods that use it
    • Some minor functions
  • Extended Inference Engine Blob API:
    • Blob now represents a universal data container in the Inference Engine
    • Introduced MemoryBlob to represent a tensor in memory. TBlob now directly derives from MemoryBlob
    • Introduced CompoundBlob to represent a vector of blobs
    • Introduced NV12Blob to represent an input in the NV12 color format
  • Extended automatic preprocessing to support input color formats (RGB/BGR, RGBX/BGRX, NV12), batches are supported for all formats except NV12
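
As an illustration of the Core and Query APIs listed above, the following hedged sketch uses the 2019-era Python bindings; the available devices, metric names, and config keys depend on the plugins installed on your system:

    from openvino.inference_engine import IECore

    ie = IECore()  # plugins are discovered and managed internally, no manual loading

    # Query API: enumerate devices and read a metric for each one.
    for device in ie.available_devices:
        print(device, ie.get_metric(device, "FULL_DEVICE_NAME"))

    # Per-device configuration is set through the same Core object,
    # for example enabling performance counters on CPU.
    ie.set_config({"PERF_COUNT": "YES"}, "CPU")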

[NEW] Multi-Device plugin

  • Automatic inference load-balancing between multiple devices (see the sketch below)
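
A minimal sketch of loading a network on the new Multi-Device plugin through the 2019-era Python API; the device list and IR paths are illustrative and assume both CPU and GPU plugins are present:

    from openvino.inference_engine import IECore, IENetwork
    import numpy as np

    ie = IECore()
    net = IENetwork(model="model.xml", weights="model.bin")  # placeholder IR
    input_name = next(iter(net.inputs))

    # "MULTI" load-balances inference requests across the listed devices.
    exec_net = ie.load_network(network=net, device_name="MULTI:CPU,GPU", num_requests=4)

    # Asynchronous requests are scheduled to whichever device becomes free first.
    dummy = np.zeros(net.inputs[input_name].shape, dtype=np.float32)
    for request in exec_net.requests:
        request.async_infer({input_name: dummy})
    for request in exec_net.requests:
        request.wait()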

CPU plugin

  • Added support for FP16 IRs. This allows saving disk space for IRs, keeping the accuracy approximately the same as for FP32 calculations.
  • Improved performance in "latency" mode on multi-core machines.
  • Improved multi-socket (NUMA) support. Much better performance for the CPU "throughput" mode on the NUMA machines out of the box.
  • Introduced support for GNMT topology.
  • Improved performance of 3D networks in FP32 precision.
  • Introduced support of the following new layers:
    • 'Expand' layer was renamed to 'Broadcast'
    • Abs
    • Acos
    • Acosh
    • Add
    • And
    • Asin
    • Asinh
    • Atan
    • Atanh
    • BinaryConvolution
    • Ceil
    • Cos
    • Cosh
    • DeformableConvolution
    • DeformablePSROIPooling
    • Erf
    • FakeQuantize
    • Floor
    • GatherTree
    • HardSigmoid
    • Log
    • LogSoftmax
    • Neg
    • OneHot
    • PowerFile
    • Prod
    • Reciprocal
    • ReduceAnd
    • ReduceL1
    • ReduceL2
    • ReduceLogSum
    • ReduceLogSumExp
    • ReduceMax
    • ReduceMean
    • ReduceMin
    • ReduceOr
    • ReduceProd
    • ReduceSum
    • ReduceSumSquare
    • Selu
    • Sign
    • Sin
    • Sinh
    • Softplus
    • Softsign
    • Tan
    • TopK

GPU plugin

  • Added support of GPU streams. This feature aims to improve GPU throughput by reducing stalls on smaller networks or smaller input sizes (see the configuration sketch after this list).
  • Added support of binary convolutions.
  • Fixed a hang during multiple simultaneously executed synchronous and asynchronous inference requests.
  • Added support of topologies that contain 3D convolutions, deconvolutions and pooling, 5D and 6D input and output tensors.
  • Kernels are not optimized yet; performance will be improved in a future release.
  • Improved performance of the LSTMCell layer.
  • Introduced support of the following new layers:
    • Gemm
    • Abs
    • Acos
    • Acosh
    • ArgMin
    • Asin
    • Asinh
    • Atan
    • Atanh
    • Ceil
    • Cos
    • Cosh
    • DeformableConvolution
    • DeformablePSROIPooling
    • Broadcast
    • Floor
    • FloorMod
    • HardSigmoid
    • Log
    • LogSoftmax
    • Neg
    • OneHot
    • Pow
    • ReduceAnd
    • ReduceL1
    • ReduceL2
    • ReduceLogSum
    • ReduceLogSumExp
    • ReduceMax
    • ReduceMean
    • ReduceMin
    • ReduceOr
    • ReduceProd
    • ReduceSum
    • ReduceSumSquare
    • Selu
    • Sign
    • Sin
    • Sinh
    • Softplus
    • Softsign
    • Tan
    • TopK
    • BinaryConvolution
    • FakeQuantize
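
The sketch below (referenced from the GPU streams item above) shows how throughput streams might be enabled through the 2019-era Python API; the configuration key and value names are assumptions based on the plugin's config keys, and the IR path is a placeholder:

    from openvino.inference_engine import IECore, IENetwork

    ie = IECore()
    # Assumed key/value: let the GPU plugin pick the number of throughput streams.
    ie.set_config({"GPU_THROUGHPUT_STREAMS": "GPU_THROUGHPUT_AUTO"}, "GPU")

    net = IENetwork(model="model.xml", weights="model.bin")  # placeholder IR
    # Several infer requests are needed to keep the streams busy.
    exec_net = ie.load_network(network=net, device_name="GPU", num_requests=4)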

FPGA Plugin

  • Deprecated support of Intel® Arria® 10 GX FPGA Development Kit
  • Intel® Programmable Acceleration Card (PAC) with Intel® Arria® 10 GX FPGA continues to be supported
  • Bitstreams for Intel® Vision Accelerator Design with an Intel® Arria 10 FPGA (Mustang-F100-A10) Speed Grade 1 and Speed Grade 2 are not included in the OpenVINO R2 release. These bitstreams will be added at a future date.

    We recommend that users of the Intel® Vision Accelerator Design with an Intel® Arria 10 FPGA (Mustang-F100-A10) continue to use OpenVINO 2019 R1 until the bitstreams are added to the distribution.

  • Added Resample primitive support, emulated via the convolution primitive.

  • Properly handled paddings for max pooling (exclude-pads=true)

MYRIAD Plugin

  • VPU firmware aligned with MDK R8 release
  • Support of IE Query API, including the functionality to report devices available in the system and get the device temperature (see the temperature query sketch after this list)
  • Improved performance of several networks to close the performance gap with the 2018 R5 release
  • Fixed and improved stability and accuracy
  • Introduced support of the following new layers:
    • ReverseSequence
    • Gather
    • GEMM
    • Log
    • Mean
    • Select
  • Support of Eltwise extended with following operations:
    • Eltwise-Div
    • Eltwise-Equal
    • Eltwise-FloorMod
    • Eltwise-Greater
    • Eltwise-GreaterEqual
    • Eltwise-Less
    • Eltwise-LessEqual
    • Eltwise-LogicalAnd
    • Eltwise-LogicalOr
    • Eltwise-LogicalXor
    • Eltwise-Min
    • Eltwise-NotEqual
    • Eltwise-Pow
    • Eltwise-SquaredDiff
  • Added OpenCL custom layer support in the Preview Feature mode:
    • OpenCL compiler, targeting SHAVE processor only, is redistributed with OpenVINO™. OpenCL support is provided by ComputeAorta*, and is distributed under a license agreement between Intel® and Codeplay* Software Ltd.
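
As referenced in the Query API item above, here is a hedged sketch of reading the device temperature through the 2019-era Python API; "DEVICE_THERMAL" is the assumed metric name:

    from openvino.inference_engine import IECore

    ie = IECore()
    # Enumerate Myriad-based devices visible to the Inference Engine.
    for device in ie.available_devices:
        if device.startswith("MYRIAD"):
            # Assumed metric key; returns the device temperature.
            print(device, ie.get_metric(device, "DEVICE_THERMAL"))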

HDDL Plugin

  • Added SGAD (Single Graph to All Devices) scheduler
  • Added configs/metrics APIs to get info such as VPU ID, thermal data, memory usage, network configs, and others
  • UX: 
    • Optimized security barrier camera demo to fully use all VPUs on accelerator card
    • Added tag scheduler usage code to the Security Barrier Camera Demo
    • Added -d HDDL support to OpenVINO™ installation verification samples
    • Reduced Linux* dependencies installation commands from 16 to 1
  • Fixed bugs 

GNA plugin

  • Introduced support of TDNN topology
  • Added support for FP16 IRs
  • Introduced support for LSTM Cell layer

Deep Learning Workbench

  • The component is in the Feature Preview mode
  • Deployment scheme - Docker* file
  • Import of models and datasets for Classification and Object Detection tasks
  • Import of models from Intel® Open Model Zoo
  • Easy configuration of inference experiments: type of target, number of parallel streams, number of batches, API type (sync and async) etc.
  • Support of inference on CPU, GPU, and VPU
  • Support of synchronous and asynchronous inferences
  • Visualization of key performance metrics: latency, throughput and detailed performance counters
  • Support of comparison between performance experiments
  • IR graph and execution graph for CPU and GPU, only IR graph for VPU
  • Representation of layers that were fused during inference on CPU and GPU
  • Support of INT8 calibration scenarios
  • Support of Winograd algorithms optimization scenarios for AVX512 targets
  • Support of measuring model accuracy
  • Support of automatic detection of the best combination of batch and number of parallel requests within maximum acceptable latency on the target device

OpenCV*

  • Version changed to 4.1.1
  • Merged all Python* 3 wrappers into one universal binary: <openvino>/python/python3/cv2.<ext>
  • Added MediaSDK plugin for opencv_videoio module
  • Enabled TBB backend for parallel computations (on Linux platforms)
  • Removed the opencv_pvl module; please consider using the corresponding algorithms from the Open Model Zoo

OpenVX*

  • No updates for R2

Examples and Tutorials

Python*

  • Added a new hello_query_device sample to show how to work with new Query API in the Inference Engine
  • Removed dynamic_batch_demo
  • Renamed affinity_setting_demo to affinity_setting_sample
  • Removed FPS measurement from the following samples: classification_sample, classification_sample_async, segmentation_demo, style_transfer_sample. You can use benchmark_app to measure FPS for the required models.

C++

  • Added a new hello_query_device sample to show how to work with new Query API in the Inference Engine
  • Added a new hello_nv12_input_classification sample to show how to work with NV12 input in the Inference Engine
  • Merged hello_request_classification sample to classification_sample_async
  • Merged hello_autoresize sample to hello_classification sample
  • Renamed hello_shape_infer_ssd sample to hello_reshape_ssd
  • Removed classification_sample. Instead, you can use classification_sample_async, which demonstrates Asynchronous API, or hello_classification sample, which is a simplified version using Synchronous API.
  • Renamed object_detection_demo to object_detection_demo_faster_rcnn to make it clear for a user that this demo only works with Caffe FasterRCNN models containing three separate output layers instead of one DetectionOutput layer.
  • Removed end2end_video_analytics demos, as they have not been supported for a long time.
  • Removed FPS measurement from the following samples: classification_sample_async, lenet_graph_builder, mask_rcnn_demo, object_detection_demo_faster_rcnn, object_detection_sample_ssd, segmentation_demo, style_transfer_sample, super_resolution_demo. You can use benchmark_app to measure FPS for the required models.

Common

  • Updated all the samples and demos with support of new Inference Engine Core API.
  • Completely migrated all Open Model Zoo demos to open-source development (https://github.com/opencv/open_model_zoo). They are now shipped in the package as a separate demos folder.
  • Fixed bugs.

Tools

  • Python Calibration tool
    • Improved performance with default parameters (up to 4X)
    • Substantially decreased disk space and memory consumption
    • Introduced a "simplified" mode for quick check of potential performance gain in INT8 mode
  • Calibration of FP32 models can achieve better results than calibration of FP16 models. If an FP16 model is not calibrated successfully, try to calibrate the FP32 model.
  • Removed unnecessary C++ tools (calibration_tool, validation_app):
    • Use the Python version of the calibration_tool instead of the old C++ version.
    • Use accuracy_checker instead of old validation_app.

Open Model Zoo

Extended the Open Model Zoo, which includes additional CNN pre-trained models and pre-generated Intermediate Representations (.xml + .bin):

  • [NEW] 2019R2 release features INT8 and sparse versions of the public models. Sparse versions v1 and v2 differ in the resulting density (below and over 50% of null elements):
    • Inceptionv3-int8-sparse-v1-tf-0001
    • Inceptionv3-int8-sparse-v2-tf-0001
    • Inceptionv3-int8-tf-0001
    • Mobilenetv2-int8-sparse-v1-tf-0001
    • Mobilenetv2-int8-sparse-v2-tf-0001
    • Mobilenetv2-int8-tf-0001
    • Resnet-50-int8-sparse-v1-tf-0001
    • Resnet-50-int8-sparse-v2-tf-0001
    • Resnet-50-int8-tf-0001
  • person-detection-action-recognition-0006: provides higher accuracy in both detection and action recognition as well as features more action classes. Replaces the 2019R1 model.

  • instance-segmentation-security-0010, instance-segmentation-security-0050, instance-segmentation-security-0083: updates to the existing instance segmentation networks with better performance/accuracy trade-offs and new layers, like group norms and deformable convolutions. Replaces the 2019R1 models.

  • person-vehicle-bike-detection-crossroad-1016: the training dataset was extended by roughly 30% to cover more diverse scenes and scenarios. Replaces the 2019R1 model.

  • face-detection-retail-0005: an update to the 0004 counterpart with slightly improved accuracy via a better choice of hyperparameters. Replaces the 2019R1 model.

  • text-detection-0003, text-detection-0004: updated text-detection models providing various performance/accuracy trade-offs. Replaces the 2019R1 model.

  • handwritten-score-recognition-0003: model trained on MNIST-like data with support for a decimal point. Replaces the 2019R1 model.

Model Downloader

The Model Downloader tool has been reworked to support a more flexible configuration file format. A few extra tools have been added for a better user experience:

  • convert.py - Converts models from the source format to IR using parameters stated in the configuration file.
  • info_dumper.py - Dumps information about the models in a human-readable format that is also supported by DL Workbench.

The configuration file is extended to support the following public models in Caffe*, TensorFlow* and MXNet* formats:

Model Format
densenet-121-tf TensorFlow
densenet-161-tf TensorFlow
densenet-169-tf TensorFlow
face-detection-retail-0044 Caffe
facenet-20180408-102900 TensorFlow
faster_rcnn_inception_resnet_v2_atrous_coco TensorFlow
faster_rcnn_resnet50_coco TensorFlow
inception-resnet-v2-tf TensorFlow
license-plate-recognition-barrier-0007 TensorFlow
mobilenet-v1-0.25-128 TensorFlow
mobilenet-v1-0.50-160 TensorFlow
mobilenet-v1-0.50-224 TensorFlow
mobilenet-v1-1.0-224-tf TensorFlow

New and Changed in the Release 1.1

Executive summary

  • Intel® Distribution of OpenVINO™ Toolkit 2019 R1.1 includes functional and security updates. Users should update to the latest version.
  • Intel® Distribution of OpenVINO™ Toolkit 2019 R1.1 is aligned with Intel® Movidius™ Myriad™ X Development Kit R7 release.
  • mPCIe and M.2 form factor versions of Intel® Vision Accelerator Design with Intel® Movidius™ VPUs are now supported.
  • Intel® Vision Accelerator Design with Intel® Movidius™ VPUs support on CentOS* 7.4 is added.

Inference Engine

MYRIAD Plugin

  • VPU firmware aligned with the Intel® Movidius™ Myriad™ X Development Kit R7 release.
  • Accuracy fixes.
  • Performance and stability improvements.

New and Changed in the Release 1.0.1

Executive summary

  • Intel® Distribution of OpenVINO™ Toolkit 2019 R1.0.1 includes functional and security updates. Users should update to the latest version.
  • Added support for Microsoft Visual Studio* 2019. Now the build_samples_msvc.bat script builds samples using the newest Microsoft Visual Studio version by default. Also, it’s possible to specify a desired Microsoft Visual Studio version for the build script manually:
    build_samples_msvc.bat [VS2015|VS2017|VS2019]
  • Added debug binaries to the Intel® Distribution of OpenVINO™ toolkit for macOS* package.
  • FPGA support files were updated. Please follow the updated Installation Guide for a better usability experience.

New and Changed in the Release 1

Executive summary

  • Intel® Distribution of OpenVINO™ Toolkit 2019 R1 includes functional and security updates. Users should update to the latest version.

  • This release extends the list of supported operating systems to include Apple macOS* for inference on CPU in preview mode. Install all the key components of the toolkit, such as the Model Optimizer, Inference Engine, OpenCV, and others, for accelerated inference on systems with macOS.
  • Parallelism schemes are switched from OpenMP* to Threading Building Blocks (TBB) to increase performance in a multi-network scenario. Most customers' deployment pipeline executes multiple network combinations and TBB gives the most optimal performance in such use cases.
  • Added support for many new operations in ONNX*, TensorFlow* and MXNet* frameworks. Topologies like Tiny YOLO v3, full DeepLab v3, bi-directional LSTMs now can be run using Deep Learning Deployment toolkit for optimized inference.
  • More than 10 new pre-trained models are added, including gaze estimation, action recognition encoder/decoder, text recognition, and instance segmentation networks, to expand to newer use cases. A few models with binary weights are introduced to further boost the performance of these networks.
  • FPGA plugin migrated to DLA 2019 R1 with new bitstreams for Intel® Arria® 10 FPGA GX Development Kit, Intel® Programmable Acceleration Card with Intel® Arria® 10 FPGA GX and Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA boards. The toolkit now supports automatic scheduling between multiple FPGA devices.

Backward-incompatible changes compared with v.2018

  • Changed default installation locations to /opt/intel/openvino_<version> for Linux* OS and C:\Program Files (x86)\IntelSWTools\openvino_<version> for Windows* OS.  For simplicity, symbolic links to the latest installation are also created: /opt/intel/openvino/ for Linux and C:\Program Files (x86)\IntelSWTools\openvino for Windows.
  • Changed default environment variable from the INTEL_CVSDK_DIR to INTEL_OPENVINO_DIR.
  • Deprecated and removed Computer Vision Algorithms (CVA) component from the toolkit.
  • Removed product documentation from the package. For documentation for the latest releases, see the new documentation site: https://docs.openvinotoolkit.org/
  • Removed Open Model Zoo models binaries from the package. To get the latest models, use the Model Downloader script from <INSTALL_DIR>/openvino_<version>/deployment_tools/model_downloader/.

Model Optimizer

Common changes

  • Updated the IR version from 4 to 5. The IR of version 2 can be generated using the `--generate_deprecated_IR_V2` command line parameter.
  • Implemented an experimental feature to generate the IR with "Shape" layers. The feature makes it possible to change model input shapes in the Inference Engine (using the dedicated reshape API) instead of re-generating models in the Model Optimizer. The feature is enabled by providing the command line parameter "--keep_shape_ops" to the Model Optimizer.
  • Introduced new graph transformation API to perform graph modifications in the Model Optimizer.
  • Added ability to enable/disable the Model Optimizer extension using the environment variables MO_ENABLED_TRANSFORMS and MO_DISABLED_TRANSFORMS respectively.
  • Fixed issue with Deconvolution shape inference for case with stride not equal to 1.
  • Added Concat optimization pass that removes excess edges between Concat operations.

ONNX*

  • Added support of the following ONNX* operations: ArgMax, Clip, Exp, DetectionOutput, PriorBox, RNN, GRU with parameters "direction", "activations", "clip", "hidden_size", "linear_before_reset". LSTM parameters "activations", "clip", "direction" are now supported.
  • Extended support of the ConvTranspose operation to support ND.
  • Resolved issue with the Gemm operation with biases.

TensorFlow*

  • Added support of the following TensorFlow* operations: ReverseSequence, ReverseV2, ZerosLike, Exp, Sum.
  • Added support of the following TensorFlow* topologies: quantized image classification topologies, TensorFlow Object Detection API RFCN version 1.10+, Tiny YOLO v3, full DeepLab v3 without need to remove pre-processing part.
  • Added support of batch size more than 1 for TensorFlow Object Detection API Faster/Mask RCNNs and RFCNs.

Caffe*

  • Added support of the following Caffe* operations: StridedSlice, Bias.
  • Caffe fallback for shape inference is deprecated.

MXNet*

  • Added support of the following MXNet* operations: Embedding, Zero with "shape" equal to 0, RNN with mode="gru", "rnn_relu", "rnn_tanh" and parameters "num_layer", "bidirectional", "clip". 
  • Added support of bidirectional LSTMs.
  • Added support of LSTMs with batch size more than 1 and multiple layers.
  • Fixed loading Gluon models with attributes equal to "None".

Inference Engine

Common changes

  • Automatic preprocessing (resize + layout conversion) now fully supports batches.
  • Added support for LSTM, GRU and RNN sequence layers to the NN Builder API.

CPU plugin

  • Parallelism schemes are switched from OpenMP* to Threading Building Blocks (TBB) to increase performance in a multi-network scenario. TBB gives the most optimal performance when running multiple networks at once (testing showed up to 3.5X improvement). However, for some particular networks, this change may lead to either performance degradation or improvement. Please see documentation for more details.
  • Improved support for Low-Precision 8-bit Integer inference:
    • Introduced a new Python* calibration tool. It expands support for 8-bit Integer inference to new domains of neural networks. The support of dataset formats and network accuracy metrics are highly customized and allow you to calibrate, for example, semantic segmentation models (Unet2d).
    • Introduced support for ReLU6, Fully Connected, resample layers in INT8.
    • Significantly decreased the size of memory used for storing intermediate tensor data in the INT8 data type. This decreased the virtual memory footprint by 2x-4x depending on topology.
  • Updated Intel® MKL-DNN version to v0.18.
  • Added support for layers:
    • DepthToSpace
    • Expand
    • Fill
    • Range
    • ReverseSequence
    • ShuffleChannels
    • SpaceToDepth
    • Squeeze
    • StridedSlice
    • Unsqueeze
    • GRUCell
    • RNNCell
    • LSTMSequence
    • GRUSequence
    • RNNSequence
  • Added new Activation types:
    • Exp
    • Not
  • Added new Eltwise operations:
    • Min
    • Max
    • Sub
    • Div
    • Squared_diff
    • Floor_mod
    • Pow
    • Logical_AND
    • Logical_OR
    • Logical_XOR
    • Less
    • Less_equal
    • Greater
    • Greater_equal
    • Equal
    • Not_equal
  • Eltwise supports broadcasting.
  • Fixed memory allocation for Int8 topologies.
  • Improved the logic that detects necessary reorders
  • Improved support of the Split layer with multiple connections to outputs
  • Added fusing of FullyConnected and ReLU
  • Fixed a SEGFAULT on machines with a large number of possible CPUs

GPU plugin

  • Added support for recurrent networks, including RNN, GRU and LSTM layers. The functionality now is fully aligned with CPU plugin functionality.
  • Added support for layers: DepthToSpace, Gather, ShuffleChannels, StridedSlice, ReverseSequence
  • Added support for new parameters in existing layers: 
    • Activation with type "not".
    • Eltwise new operations: "squared_diff", "greater", "greater_equal", "less", "less_equal", "or", "xor", "and", "sub", "div", "min", "max".
    • New clip options for DetectionOutput and Proposal layers.
    • "bilinear" mode for PSROIPooling.
  • Updated Compute Library for Deep Neural Networks (clDNN) version to 13.1.
  • Fixed infinite performance regression on subsequent runs which occurred on several topologies.
  • Fixed minor memory leak on every inference iteration.


FPGA Plugin

  • Migrated to DLA 2019R1:
    • New bitstreams for Intel® Arria® 10 FPGA GX Development Kit, Intel® Programmable Acceleration Card (PAC) with Intel® Arria® 10 FPGA GX and Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA boards.
    • Added dilated convolution primitive support which enables dilation, DSSD, semantic-segmentation-adas-0001, road-segmentation-adas-0001 topologies.
    • Numerous bug fixes.
  • Migrated to OpenCL™ 18.1.1 for Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA, Intel® Arria® 10 FPGA GX Development Kit boards.
  • Updated BSP for Intel® PAC with Intel® Arria® 10 FPGA GX board from DCP 1.1 to DCP 1.2 version, OpenCL™ RTE 17.1.1.
  • Implemented multi FPGA device automatic scheduling feature in the Inference Engine FPGA plugin. The feature is demonstrated on security_camera_barrier_demo app and shows 1.7x scalability on two Intel® Arria® 10 FPGA GX Development Kit boards.

MYRIAD Plugin

  • Improved performance for a number of networks to close performance gap with 2018R5 release.
  • Reduced time required for networks compilation.
  • Support for IR-integrated statistics (if available) to improve inference accuracy. This feature is described in the MYRIAD plugin documentation.
  • Improved network allocation process and memory management on VPU.
  • Stability and accuracy fixes and improvements.
  • Hardware accelerated Deconvolution layer.
  • Hardware accelerated Clamp layer.
  • Support for Eltwise Sub layer.
  • Support for Argmax layer.
  • Support for LSTMCell layer for the LSTM: CTPN model.

HDDL Plugin

  • Added support for accelerator cards with the F75114 USB IO expander reset device; previously only the TCA9535 I2C IO expander was supported.
  • Added bypass scheduler.
  • Updated OpenSSL* with CVE fixes.
  • Optimized HDDL Service output.
  • Added KEY_VPU_HDDL_BIND_DEVICE to configure whether the network should bind to a VPU.
  • Auto-detection of the VPU number (total_device_num=0 in hddl_autoboot.config).
  • Run without resetting the device (abort_if_hw_reset_failed=false in hddl_autoboot.config).

GNA plugin

  • The GNA plugin status switched from preview to gold.
  • Introduced support for heterogeneous mode, so layers that are not supported on GNA (such as normalize) can be run on CPU (see the sketch below).
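
A minimal sketch of the heterogeneous fallback using the 2019-era Python API; the IR paths are placeholders:

    from openvino.inference_engine import IECore, IENetwork

    ie = IECore()
    net = IENetwork(model="speech_model.xml", weights="speech_model.bin")  # placeholder IR

    # HETERO runs GNA-supported layers on GNA and falls back to CPU for the rest.
    exec_net = ie.load_network(network=net, device_name="HETERO:GNA,CPU")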

OpenCV*

  • Version updated to 4.1.0
  • opencv_dnn module now integrates Inference Engine via the NN Builder API introduced in the OpenVINO 2018R5 release.
  • opencv_videoio module introduces plugin-based backends on Linux platforms.

OpenVX*

  • Optical Flow Pyramid (LK) accuracy issues on GPU are fixed.
  • Bits pack/unpack nodes functionality is corrected.
  • Major Color Copy Pipeline (CCP) sample update and refactoring.

Samples and Demos

Python*

  • Introduced the following demos: 3d_segmentation_demo, action_recognition, instance_segmentation_demo.
  • greengrass_samples were removed from the distribution. Actual version can be found on GitHub: https://github.com/intel/Edge-Analytics-FaaS/releases
  • Improved samples/demos documentation.

C++

  • Removed warnings on Windows and switched to a stricter warning level (-Wall) on Linux. Fixed Klocwork issues.
  • benchmark_app was updated with support for collecting statistics reports (including per-layer performance counters). Latency calculation was added for asynchronous mode. New serialization logic for executable graph info was added (currently for CPU only).
  • Added support for multiple FPGA devices and THROUGHPUT mode for CPU in the security_barrier_camera_demo.
  • interactive_face_detection_demo was updated with smoothing of personal attributes, improvement of emotions visualization, bug fixes.
  • HDDL plugin was mentioned in the documentation for all supported samples/demos.
  • multichannel_demo was updated with corrected handling of number of connected web cameras.
  • mask_rcnn_demo was updated to support a new version of the TensorFlow Object Detection API Mask-RCNNs topologies.
  • Improved build scripts for samples and demos. For Windows OS, the scripts now not only generate Microsoft Visual Studio* solutions but also build them. Script output was unified for all supported operating systems. The build script was improved to support Raspbian* OS.

Common Changes

  • Python and C++ Classification samples/demos are updated with unified Classification Output.
  • Samples and Demos documentation was updated.
  • Bug fixes.

Tools

  • Python* Calibration Tool is implemented. Sample configuration files for the Calibration Tool are available for download from the Intel® Developer Zone.

Open Model Zoo

Extended the Open Model Zoo, which includes additional CNN pre-trained models and pre-generated Intermediate Representations (.xml + .bin):

  • [NEW] action-recognition-0001-decoder. This is a general-purpose action recognition model for the Kinetics-400 dataset. The model uses the Video Transformer approach with a ResNet34 encoder. Please refer to the Kinetics dataset specification to see the list of actions recognized by this model. This model is only the decoder part of the whole pipeline. It accepts a stack of frame embeddings, computed by action-recognition-0001-encoder, and produces a prediction on the input video. Video frames should be sampled to cover a ~1 second fragment (i.e., skip every second frame in a 30 fps video).
  • [NEW] action-recognition-0001-encoder. This is a general-purpose action recognition model for the Kinetics-400 dataset. The model uses the Video Transformer approach with a ResNet34 encoder. Please refer to the Kinetics dataset specification to see the list of actions recognized by this model. This model is only the encoder part of the whole pipeline. It accepts a video frame and produces an embedding. Use action-recognition-0001-decoder to produce a prediction from embeddings of 16 frames. Video frames should be sampled to cover a ~1 second fragment (i.e., skip every second frame in a 30 fps video).
  • [NEW] driver-action-recognition-adas-0002-encoder. This is an action recognition model for the driver monitoring use case. The model uses the Video Transformer approach with a MobileNetV2 encoder. It is able to recognize the following actions: drinking, doing hair or making up, operating the radio, reaching behind, safe driving, talking on the phone, texting. This model is only the encoder part of the whole pipeline. It accepts a video frame and produces an embedding. Use driver-action-recognition-adas-0002-decoder to produce a prediction from embeddings of 16 frames. Video frames should be sampled to cover a ~1 second fragment (i.e., skip every second frame in a 30 fps video).
  • [NEW] driver-action-recognition-adas-0002-decoder. This is an action recognition model for the driver monitoring use case. The model uses the Video Transformer approach with a MobileNetV2 encoder. It is able to recognize the following actions: drinking, doing hair or making up, operating the radio, reaching behind, safe driving, talking on the phone, texting. This model is only the decoder part of the whole pipeline. It accepts a stack of frame embeddings, computed by driver-action-recognition-adas-0002-encoder, and produces a prediction on the input video. Video frames should be sampled to cover a ~1 second fragment (i.e., skip every second frame in a 30 fps video).
  • [NEW] gaze-estimation-adas-0002. This is a custom VGG-like convolutional neural network for gaze direction estimation.
  • [NEW] instance-segmentation-security-0033. This model is an instance segmentation network for 80 classes of objects. It is a Mask-RCNN-like model with a ResNeXt152 backbone and a Feature Pyramid Networks block for feature map refinement.
  • [NEW] instance-segmentation-security-0049. This model is an instance segmentation network for 80 classes of objects. It is a Mask-RCNN-like model with a ResNet50 backbone, a Feature Pyramid Networks block for feature map refinement, and a relatively light segmentation head.
  • [NEW] person-detection-action-recognition-teacher-0002. This is an action detector for the Smart Classroom scenario. It is based on the RMNet backbone that includes depth-wise convolutions to reduce the amount of computations for the 3x3 convolution block. The first SSD head from 1/16 scale feature map has four clustered prior boxes and outputs detected persons (two class detector). The second SSD-based head predicts actions of the detected persons. Possible actions: standing, writing, demonstrating.
  • [NEW] text-recognition-0012. This is a network for the text recognition scenario. It consists of a VGG16-like backbone and a bidirectional LSTM encoder-decoder. The network is able to recognize case-insensitive alphanumeric text (36 unique symbols).
  • [NEW] text-detection-0002. This is a text detector based on PixelLink architecture with MobileNetV2 as a backbone for indoor/outdoor scenes.
  • [NEW] face-detection-adas-binary-0001. This is a face detector for driver monitoring and similar scenarios. The network features a default MobileNet backbone that includes depth-wise convolutions to reduce the amount of computation for the 3x3 convolution block.
  • [NEW] pedestrian-detection-adas-binary-0001. Pedestrian detection network based on the SSD framework with a tuned MobileNet v1 as a feature extractor. Some layers of MobileNet v1 are binary and use I1 arithmetic.
  • [NEW] vehicle-detection-adas-binary-0001. This is a vehicle detection network based on an SSD framework with a tuned MobileNet v1 as a feature extractor and binary layers for speedup. This detector was created by binarizing the vehicle-detection-adas-0002 model.
  • [NEW] resnet50-binary-0001. This is a classical classification network for 1000 classes trained on ImageNet. The difference is that most convolutional layers were replaced by binary ones that can be implemented as XNOR+POPCOUNT operations. Only the input, final, and shortcut layers were kept as FP32; all the remaining convolutional layers are replaced by BinaryConvolution layers.

  • facial-landmarks-35-adas-0002. This is a custom-architecture convolution neural network for 35 facial landmarks estimation.

  • person-attributes-recognition-crossroad-0230. This model presents a person attributes classification algorithm for the analysis scenario. It produces the probability of person attributes being present in the sample and the positions of two points on the sample, which can be used for color probing (like a color picker in graphical editors).

  • person-detection-action-recognition-0005. This is an action detector for the Smart Classroom scenario. It is based on the RMNet backbone that includes depth-wise convolutions to reduce the amount of computations for the 3x3 convolution block. The first SSD head from 1/16 scale feature map has four clustered prior boxes and outputs detected persons (two class detector). The second SSD-based head predicts actions of the detected persons. Possible actions: sitting, standing, raising hand.

  • single-image-super-resolution-1032. An attention-based approach for single image super resolution, with a reduced number of channels and changes in the network architecture. It enhances the resolution of the input image by a factor of 4.

  • single-image-super-resolution-1033. An attention-based approach for single image super resolution, with a reduced number of channels and changes in the network architecture. It enhances the resolution of the input image by a factor of 3.

Model Downloader

Model Downloader configuration file is extended to support the following public models in Caffe*, TensorFlow* and MXNet* formats:

Model Format
brain_tumor_segmentation MXNet
mobilenet-v1-1.0-224 Caffe
mobilenet-v2-1.4-224 TensorFlow
mobilenet-v2 Caffe
faster_rcnn_inception_v2_coco TensorFlow
ctpn (LSTM: CTPN) TensorFlow
deeplabv3 (DeepLab-v3+) TensorFlow
ssd_mobilenet_v1_coco TensorFlow
faster_rcnn_resnet101_coco TensorFlow

Preview Features Terminology

A preview feature is functionality that is being introduced to gain early developer feedback. Comments, questions, and suggestions related to preview features are encouraged and should be submitted to the forum.

The key properties of a preview feature are:

  • It is intended to have a high-quality implementation.
  • There is no guarantee of future existence or compatibility.

NOTE: A preview feature is subject to change in the future. It may be removed or altered in future releases. Changes to a preview feature do NOT require a deprecation and deletion process. Using a preview feature in a production code base is discouraged.

 

Known Issues

ID Description Component Workaround
1 A number of OpenVX* issues are not addressed yet, please see "Known issue" section in the Release Notes for Intel® Distribution of OpenVINO™ toolkit v.2018 OpenVX* N/A
2 Unsupported Dynamic Shapes for Caffe* layers Model Optimizer N/A
3 Not all TensorFlow operations are supported; only a limited set of operations can be successfully converted. Model Optimizer Enable unsupported ops through Model Optimizer extensions and IE custom layers
4 Only TensorFlow models with FP32 Placeholders are supported. If there is a non-FP32 Placeholder, the next immediate operation after this Placeholder should be a Cast operation that converts to FP32. Model Optimizer Rebuild your model to include an FP32 placeholder only or add cast operations
5 Only TensorFlow models with FP32 weights are supported. Model Optimizer Rebuild your model to have FP32 weights only
6 Embedded preprocessing in Caffe models is not supported and ignored. Model Optimizer Pass preprocessing parameters through Model Optimizer CLI parameters
7 Releasing the plugin's pointer before inference completion might cause a crash. Inference Engine Release the plugin pointer at the end of the application, when inference is done.
8 If Intel OpenMP was initialized before OpenCL, OpenCL will hang. This means initialization or execution on the FPGA will hang too. Inference Engine Initialize FPGA or Heterogeneous with the FPGA plugin priority before the CPU plugin.
9 The performance of the first iteration of the samples for networks executing on FPGA is much lower than the performance of the next iterations. Inference Engine Use the -ni <number> -pc options to get the real performance of inference on FPGA.
10 To select the best bitstream for a custom network, evaluate all available bitstreams and choose the bitstream with the best performance and accuracy. Use validation_app to collect accuracy and performance data for the validation dataset. Inference Engine  
11 The setBatch method works only for topologies which have batch as the first dimension for all tensors Inference Engine Use the reshape() method. It is applicable to all topology types.
12 Multiple OpenMP runtime initialization is possible if you are using MKL and Inference Engine simultaneously Inference Engine Use a preloaded iomp5 dynamic library
13 Performance of 3D convolution/deconvolution kernels on GPU is suboptimal. Inference Engine N/A
14 While loading extension modules, the Model Optimizer reports a "No module named 'extensions.<module_name>'" internal error and does not load any extensions from the specified directory. This happens only if you use the --extensions command line option with a directory whose base name is extensions but that is not the <INSTALL_DIR>/deployment_tools/model_optimizer/extensions directory. Model Optimizer Use a different base name for the directory with custom extensions.
15 If you have the Intel® Media Server Studio installed on your CentOS* 7.4 machine, the installation of the OpenCV dependencies may cause a libva.so version conflict. Installation Remove libva and reinstall it manually from the Intel Media Server Studio RPM package:

# yum remove libva
# yum install ffmpeg-libs
# rpm -ivh /path/to/mss/distributive/libva-*.rpm
16 Shape inference for the Reshape layer might not work correctly for TensorFlow models if its shape and parameters dynamically depend on other layers (for example, for the pre-trained vehicle-license-plate-detection-barrier-0107 model). Inference Engine Generate IR using MO with the --input_shape option
17 Models with fixed dimensions in the `dim` attribute of the Reshape layer can't be resized. Inference Engine Generate IR using MO with --input_shape option
18 Shape inference for the Interp layer works for almost all cases, except for Caffe models with fixed width and height parameters (for example, semantic-segmentation-adas-0001). Inference Engine Generate IR using MO with the --input_shape option
19 Keyboard layout issue when running the GUI installer from a VNC client (the user can experience symbol mismatches when typing text in the installer GUI due to a Qt and VNC compatibility issue). Installer Use the CLI installer by running ./install.sh from the package directory instead of the GUI installer (./install_GUI.sh)
20 Media SDK samples build failure due to an issue with the environment. Media SDK Run the following command:

export PKG_CONFIG_PATH=/opt/intel/mediasdk/lib64/pkgconfig:$PKG_CONFIG_PATH

21 Model Optimizer does not support element-wise operations supported by CPU/GPU plugin: Div, Squared_diff, Floor_mod, Logical_AND, Logical_OR, Logical_XOR, Less, Less_equal, Greater, Greater_equal, Equal, Not_equal. Model Optimizer Implement extractor for these operations and add them to the "<MO_INSTALL_DIR>/extensions/front/<framework>/" directory.
22 InferenceEngine::CNNNetwork::reshape fails for YoloV3 Inference Engine N/A
23 Poor performance on some GEMM and FullyConnected layers Inference Engine Build IE with OMP threading  or continue using TBB threading, but build IE with mkl-dnn's SGEMM implementation (-DGEMM=JIT).
24 TensorFlow mobilenet models accuracy drop is more than 1% for INT8 in comparison with FP32 Inference Engine N/A
25 Exception when loading a model to a plugin while importing IE and PyTorch at the same time Inference Engine Build PyTorch* without MKL-DNN support:

git clone https://github.com/pytorch/pytorch.git
cd pytorch
git submodule update --init --recursive
export USE_MKLDNN=OFF
sudo python setup.py install
26 CPU plugin shows different performance on Windows 10 and Ubuntu 16 Inference Engine N/A
27 FP32 and INT8 XceptionSoftmax return different results on AVX2 Inference Engine N/A
28 Passing empty filename to cv::VideoWriter::open causes crash in GStreamer backend OpenCV Check filename for emptiness before passing it to VideoWriter
29 Intel® Neural Compute Stick 2 and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs sometimes work unstably when executing multiple graphs on the same device Inference Engine Change the order of executable network creation on the device.
30 Topologies containing LSTMCell layer with batch dimension > 1 demonstrate non-optimal performance on GPU Inference Engine N/A
31 Inference on FPGA freezes when run in Docker* Inference Engine N/A
32 Performance drop in 2018 R5.0.1 vs 2018 R5 on FPGA (all platforms) on a set of topologies: Caffe mobilenet v1 224, Caffe mobilenet v2, Caffe ssd512, Caffe ssd300, Caffe squeezenet 1.0, Caffe googlenet v1, DenseNet family. Inference Engine For the best performance on these topologies, use the 2018 R5 version of OpenVINO.
33 When running the speech_sample application, GNA plugin shows low accuracy on convolution based topologies if the input scale factor is about 16535 (half of the int16 maximum value). Inference Engine Run the speech_sample application with a smaller custom scale factor (-q). Values less than 1000 should improve accuracy.
34 Performance drop in 2019 R3 vs R1 on Intel® Neural Compute Stick 2 and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs for the age-gender-recognition-retail-0013 model. Inference Engine For the best performance on this topology, please use the 2019 R1 version.
35 Caffe SE-ResNext-50 and similar models may experience accuracy drop in FP16 precision on GPU plugin. Inference Engine Run these topologies in FP32 version.
36 GPU plugin doesn't support different ratios for X and Y axes in Resample layer Inference Engine N/A
37 Several applications that use Intel® Movidius™ Neural Compute Stick or Intel® Neural Compute Stick 2 or Intel® Vision Accelerator Design with Intel® Movidius™ VPUs with the MYRIAD plugin, running in the same time, may fail with USB transfer error on Linux. Inference Engine As each of VPU devices will occupy up to 5Mb memory reserved in kernel for USB operations, so running "sudo echo 30 > /sys/module/usbcore/parameters/usbfs_memory_mb" allows simultaneous execution of up to 6 Intel® Movidius™ Neural Compute Stick or Intel® Neural Compute Stick 2 sticks (or a single VPU device from Intel® Vision Accelerator Design with Intel® Movidius™ VPUs)
38 Intel® Neural Compute Stick 2 with OpenVINO shows a deterioration in FPS after a while and the FPS never recovers, due to thermal throttling. Inference Engine N/A
39 In 2019 R2, the CPU plugin might show worse performance on some networks against the R1.1 due to using static partitioning in TBB parallelization. Inference Engine Build IE with default TBB partitioner (-DTHREADING=TBB_AUTO).
40 Mask_rcnn_resnet101 shows low accuracy on the CPU plugin. Inference Engine N/A
41 Inference Engine CPU plugin may crash in TBB when used from a non-C++ application. Inference Engine N/A
42 Performance for the NCF model on CPU may be worse when running in low-precision INT8 mode than in FP32 mode. Inference Engine N/A
43 The calibration quality of the models in FP16 format may be worse than of the models in FP32 format, which may lead to worse performance in low-precision INT8 mode. Inference Engine Use FP32 format.
44 Accuracy deviation from reference for mobilenetv2* models Inference Engine N/A
45 Security Barrier Camera Demo shows low performance on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs in case of execution on 3 supported topologies running together without -tag option and corresponding config file Inference Engine Run the demo with -tag option and the config, as described in demo documentation
46 Gaze_estimation_demo, interactive_face_detection_demo shows very low performance on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs in case of execution on three supported topologies running together Inference Engine Run the demo with maximum two topologies running together
47 Any incorrect SetConfig for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs prevents further LoadNetwork Inference Engine Use only the config options documented for HDDL plugin usage
48 GNMT model can not be properly loaded to the Inference Engine in FP16 precision due to memory handling issue. Inference Engine Use FP32 format
49. The GNMT model cannot be properly converted to FP16 precision using the Model Optimizer if some original Constant layers (which are not automatically optimized) keep values greater than the maximum value of the FP16 precision. The following error message may appear:
    [ ERROR ]  1 elements of 2 were clipped to infinity while converting a blob for node [['dynamic_seq2seq/decoder/decoder/while/BeamSearchDecoderStep/next_beam_probs/ExpandDims/Output_0/Data__const']] to <class 'numpy.float16'>.
    Component: Model Optimizer
    Workaround: Use the FP32 format (see the example after this table).

50. When Multi-Device is used as a static plugin instance, the internal object release order changes, and if GPU is used in the multi-device configuration, an application can crash on exit.
    Component: Inference Engine
    Workaround: Use the Multi-Device plugin as a non-static object.

51. In certain cases (for example, inception-v2 executed with INT8), you might experience CPU performance issues on multi-socket machines in throughput mode (number of CPU streams > 1).
    Component: Inference Engine
    Workaround: Use batch in addition to streams (batch 2 is usually enough).

52. Model Optimizer and Inference Engine use the default system locale when writing/reading the IR XML. This might cause an IR parsing error if the floating point number format uses a comma as a decimal separator.
    Component: Model Optimizer, Inference Engine
    Workaround: Before converting a model and running inference, set a system locale that uses a period as a decimal separator (see the example after this table).

53. Performance drop in 2019 R3 vs R2 on Intel® Neural Compute Stick 2 and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs for the following models: TF ctpn, CF googlenet-v4, TF ssd512.
    Component: Inference Engine
    Workaround: For the best performance on these topologies, use the 2019 R2 version.

54. The GPU plugin on Ubuntu* 18 demonstrates worse performance than on Ubuntu* 16 in some cases.
    Component: Inference Engine
    Workaround: This problem is under investigation. For the best GPU performance, use the Inference Engine on Ubuntu* 16. If you need Ubuntu* 18, try the latest version of the OpenCL compute runtime: https://github.com/intel/compute-runtime

55. The GPU plugin might return incorrect results and the GENERAL_ERROR status code for topologies with 3D convolutions due to invalid fusing conditions in the graph optimizer: a node (Eltwise or ScaleShift) can be fused to Convolution when the kernels do not support it, which leads to an invalid setArg call for the OpenCL kernel.
    Component: Inference Engine
    Workaround: N/A

56. A node can be removed twice in the GPU plugin graph optimizer in some cases, which might lead to a segmentation fault on LoadNetwork.
    Component: Inference Engine
    Workaround: N/A

57. The GPU plugin might produce incorrect results for the Pooling primitive in bfzyx_f16 layout due to invalid output index calculation.
    Component: Inference Engine
    Workaround: N/A

58. Accuracy issues on Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA (Intel® PAC with Intel® Arria® 10 GX FPGA) on Caffe AlexNet and TensorFlow Inception-V4 topologies in FP16 precision.
    Component: Inference Engine
    Workaround: For accurate results on these topologies, use the 2019 R2 release.
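
The examples below illustrate several of the workarounds from the table above. For issues 16-18, regenerating the IR with an explicit input shape makes the Reshape/Interp parameters static. A minimal sketch, assuming a TensorFlow frozen graph and a 1x300x300x3 input; both the file name and the shape are placeholders, substitute your own model and dimensions:

cd <MO_INSTALL_DIR>
python3 mo_tf.py --input_model /path/to/frozen_model.pb --input_shape [1,300,300,3] --output_dir /path/to/ir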
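
For issues 23 and 39, the threading and GEMM options mentioned in the workarounds are CMake flags of the open-source Inference Engine build. A sketch of such a rebuild, assuming the sources are already checked out under /path/to/dldt; the directory layout and any additional dependencies should be verified against the build instructions shipped with the sources:

cd /path/to/dldt/inference-engine
mkdir build && cd build
cmake -DTHREADING=TBB_AUTO ..      # or -DTHREADING=OMP, or keep TBB and add -DGEMM=JIT
make -j"$(nproc)"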
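
For issues 43, 48, and 49, keeping the IR in FP32 is controlled by the Model Optimizer --data_type option. A minimal sketch run from <MO_INSTALL_DIR> with a placeholder model file; any model-specific conversion flags still apply:

python3 mo_tf.py --input_model /path/to/gnmt_model.pb --data_type FP32 --output_dir /path/to/ir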
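
For issue 52, one way to force a period as the decimal separator on Linux is to override the numeric locale in the environment of the shell session that runs the Model Optimizer and the inference application (note that LC_ALL, if set, takes precedence over LC_NUMERIC):

export LC_NUMERIC=C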

Included in This Release

The Intel® Distribution of OpenVINO™ toolkit is available in the following versions:

  • Intel® Distribution of OpenVINO™ toolkit for Windows*
  • Intel® Distribution of OpenVINO™ toolkit for Linux*
  • Intel® Distribution of OpenVINO™ toolkit for Linux* with FPGA Support
  • Intel® Distribution of OpenVINO™ toolkit for macOS*
The following components are included (install location/file name and description):

  • Deep Learning Model Optimizer: Model optimization tool for your trained models.
  • Deep Learning Inference Engine: Unified API to integrate the inference with application logic.
  • OpenCV*: OpenCV Community version compiled for Intel® hardware. Includes PVL libraries for computer vision.
  • Intel® Media SDK libraries (open source version): Eases the integration between the OpenVINO™ toolkit and the Intel® Media SDK.
  • Intel OpenVX* runtime: Intel's implementation of the OpenVX* run-time optimized for running on Intel® hardware (CPU, GPU, IPU).
  • Intel® Graphics Compute Runtime for OpenCL™: Enables OpenCL™ on the GPU/CPU for Intel® processors.
  • Intel® FPGA Deep Learning Acceleration Suite, including pre-compiled bitstreams: Implementations of the most common CNN topologies to enable image classification and ease the adoption of FPGAs for AI developers. Includes pre-compiled bitstream samples for the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA and the Arria® 10 GX FPGA Development Kit.
  • Intel® FPGA SDK for OpenCL™ software technology: The Intel® FPGA RTE for OpenCL™ provides utilities, host runtime libraries, drivers, and RTE-specific libraries and files.
  • Intel® Distribution of OpenVINO™ toolkit documentation: Developer guides and other documentation. Available from the Intel® Distribution of OpenVINO™ toolkit product site.
  • Open Model Zoo: This component includes documentation for the latest stable set of pre-trained models from https://github.com/opencv/open_model_zoo. The models in a binary form can be downloaded using the Model Downloader tool.
  • Computer Vision Samples and Demos: Samples that illustrate Inference Engine API usage and demos that demonstrate how you can use features of the Intel® Distribution of OpenVINO™ toolkit in your application.
  • Deep Learning Workbench: Web-based graphical environment that allows you to visualize a simulation of the performance of deep learning models and datasets on various Intel® architecture configurations (CPU, GPU, VPU). In addition, you can automatically fine-tune the performance of an OpenVINO™ model by reducing the precision of certain model layers (quantization) from FP32 to INT8.

 

Where to Download This Release

https://software.intel.com/en-us/OpenVINO-toolkit/choose-download

 

System Requirements

Development Platform

Hardware

  • 6th-10th Generation Intel® Core™
  • Intel® Xeon® v5 family
  • Intel® Xeon® v6 family

Operating Systems

  • Ubuntu* 18.04 long-term support (LTS), 64-bit
  • CentOS* 7.4, 64-bit
  • Windows* 10, 64-bit
  • macOS* 10.14, 64-bit

Target Platform (choose one processor with one corresponding operating system)

Your requirements may vary, depending on which product version you use.

Intel® CPU processors with corresponding operating systems

  • 6th-10th Generation Intel® Core™ and Intel® Xeon® processors with operating system options:
    • Ubuntu* 18.04 long-term support (LTS), 64-bit
    • CentOS* 7.4, 64-bit
    • Windows* 10, 64-bit
    • macOS* 10.14, 64-bit
  • Intel® Pentium® processor N4200/5, N3350/5, N3450/5 with Intel® HD Graphics
    • Ubuntu* 18.04 long-term support (LTS), 64-bit
    • Yocto Project* Poky Jethro* v2.0.3, 64-bit

Intel® Integrated Graphics processors with corresponding operating systems

NOTE: This installation requires drivers that are not included in the Intel® Distribution of OpenVINO™ toolkit package.

  • 6th - 10th Generation Intel® Core™ processor with Intel® Iris® Pro graphics and Intel® HD Graphics
    • Ubuntu* 18.04 long-term support (LTS), 64-bit
    • CentOS* 7.4, 64-bit
  • 6th - 8th Generation Intel® Xeon® processor with Intel® Iris® Pro graphics and Intel® HD Graphics

    NOTE: A chipset that supports processor graphics is required for Intel® Xeon® processors. Processor graphics are not included in all processors. See https://ark.intel.com/ for information about your processor.

    • Ubuntu* 18.04 long-term support (LTS), 64-bit
    • CentOS* 7.4, 64-bit
  • Intel® Pentium® processor N4200/5, N3350/5, N3450/5 with Intel® HD Graphics
    • Ubuntu* 16.04.3 long-term support (LTS), 64-bit
    • Yocto Project* Poky Jethro* v2.0.3, 64-bit

Intel® FPGA processors with corresponding operating systems

NOTES:
  • Only for the Intel® Distribution of OpenVINO™ toolkit for Linux with FPGA Support.
  • OpenCV* and OpenVX* functions must be run against the CPU or Intel® Integrated Graphics to get all required drivers and tools.

  • Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA
    • Ubuntu* 18.04 long-term support (LTS), 64-bit
    • CentOS* 7.4, 64-bit
  • Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA
    • Ubuntu* 18.04 long-term support (LTS), 64-bit
    • CentOS* 7.4, 64-bit

Intel® Movidius™ Neural Compute Stick, Intel® Neural Compute Stick 2, and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs with corresponding operating systems

  • Ubuntu* 18.04 long-term support (LTS), 64-bit
  • CentOS* 7.4, 64-bit
  • Windows* 10, 64-bit

Helpful Links


OpenVINO™ toolkit Home Page

OpenVINO™ toolkit Documentation

 

Legal Information

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein.

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest Intel product specifications and roadmaps.

The products described may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at http://www.intel.com/ or from the OEM or retailer.

No computer system can be absolutely secure.

Intel, Arria, Core, Movidius, Xeon, OpenVINO, and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos.

*Other names and brands may be claimed as the property of others.

Copyright © 2019, Intel Corporation. All rights reserved.