
What Are the Differences between Running OpenVINO™ with IR Model Format and the Source Format

Content Type: Product Information & Documentation   |   Article ID: 000099171   |   Last Reviewed: 07/09/2024

Description

Unable to find information on the differences between running OpenVINO™ with the IR model format and running it directly from a source format (ONNX*, PyTorch*).

Resolution

  • Running OpenVINO™ inference with the IR model format gives the best results because the model is already converted. IR offers lower first-inference latency and provides options for further model optimization, making it the format most optimized for OpenVINO™ inference.

  • When running inference directly from the source format, OpenVINO™ converts the model automatically at load time. This method is convenient but may not deliver the best performance or stability, and it does not provide optimization options.

Additional information

Refer to the Model Preparation page for more information on the supported OpenVINO™ model formats.

Related Products

This article applies to 1 product.