
Why Can Caffe Models Be Used Directly with OpenVINO™ Toolkit Inference Engine API?

Content Type: Compatibility   |   Article ID: 000088387   |   Last Reviewed: 05/20/2022

Description

  • Inferenced a Caffe model directly on an Intel® Neural Compute Stick 2 (Intel® NCS2).
  • Unable to determine why the Caffe model can be used directly with the OpenVINO™ Toolkit Inference Engine API without first converting it to Intermediate Representation (IR).

Resolution

Only a limited set of Caffe models can be used directly with the OpenVINO™ Toolkit Inference Engine API.

Currently, the model formats supported directly by the OpenVINO™ Toolkit Inference Engine API are:

  • Intermediate Representation (IR)
  • ONNX

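For general support, a Caffe model is therefore first converted to IR and then loaded through the Inference Engine API. A minimal sketch, assuming OpenVINO 2022.x is installed, an NCS2 is attached, and the file names below are placeholders:

```python
# Sketch only: assumes the Caffe model has already been converted to IR
# with the Model Optimizer, for example:
#   mo --input_model model.caffemodel --input_proto deploy.prototxt
# "model.caffemodel", "deploy.prototxt", "model.xml" are placeholder names.
from openvino.runtime import Core

core = Core()

# Read the converted IR (model.xml plus its companion model.bin).
model = core.read_model("model.xml")

# Compile for the Intel NCS2, which the runtime exposes as the "MYRIAD" device.
compiled_model = core.compile_model(model, "MYRIAD")
```

An ONNX file could be passed to `core.read_model` directly, since ONNX is one of the formats the API accepts without conversion.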
Related Products

This article applies to 4 products.

  • Intel® Xeon Phi™ Processor Software
  • OpenVINO™ toolkit
  • Performance Libraries

Discontinued Products

Intel® Developer Cloud for the Edge