
Unable to Implement Custom OpenVINO™ Inferencing Code for Multiple Batches and Dynamic Shape

Content Type: Troubleshooting   |   Article ID: 000097234   |   Last Reviewed: 11/14/2023

Description

  • Worked on implementing custom code to collect inferences over 1000 images/iterations for different models with multiple batch sizes (see the sketch after this list).
  • Inference with the custom code succeeded for a single batch, with good results.
  • Failed to get good FPS after modifying the custom code to run multiple batches.
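For reference, a multi-batch collection loop of the kind described above might look like the following minimal sketch; the model file name, device, batch size, and random input data are illustrative assumptions, not the original code:

    import numpy as np
    from openvino.runtime import Core

    BATCH, N_IMAGES = 8, 1000                   # illustrative values

    core = Core()
    model = core.read_model("model.xml")        # IR with a dynamic batch dimension
    compiled = core.compile_model(model, "CPU")
    request = compiled.create_infer_request()

    images = np.random.rand(N_IMAGES, 3, 224, 224).astype(np.float32)  # stand-in data
    outputs = []
    for i in range(0, N_IMAGES, BATCH):
        batch = images[i:i + BATCH]             # one multi-image batch per inference
        result = request.infer([batch])
        outputs.append(result[compiled.output(0)])

One common cause of poor multi-batch FPS is reshaping or recompiling the model inside the loop; the sketch compiles once and reuses a single infer request.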

Resolution

  • For a better understanding of how shapes are handled by the OpenVINO™ Model Optimizer (MO), and for simpler conversion of dynamic models to IR, it is suggested to install OpenVINO™ from PyPI.
  • Conversion command for a dynamic-shaped model: mo -m model.onnx --input_shape [-1,3,224,224] (a Python equivalent is sketched after this list)
  • Inferencing command for the dynamic-shaped model: benchmark_app -m model.xml --data_shape [5,3,224,224]
  • To change the IR model's shape, simply re-run the ONNX* model through MO; bear in mind, however, that the shape passed MUST align with the original ONNX* shape.
  • Each time the original model changes, it must be run through MO again to generate an IR model that reflects the latest changes.
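Because OpenVINO™ from PyPI also ships the MO Python API, the conversion command above can be scripted as well. A minimal sketch, assuming a model.onnx whose input is [N,3,224,224]; verify the exact arguments against the installed OpenVINO™ release:

    from openvino.tools.mo import convert_model   # MO Python API (OpenVINO™ PyPI package)
    from openvino.runtime import serialize

    # Equivalent of: mo -m model.onnx --input_shape [-1,3,224,224]
    # -1 marks the batch dimension as dynamic; the remaining dimensions must
    # align with the original ONNX* shape, as noted above.
    ov_model = convert_model("model.onnx", input_shape=[-1, 3, 224, 224])
    print(ov_model.input(0).get_partial_shape())  # expect [?,3,224,224]

    serialize(ov_model, "model.xml", "model.bin")  # write the IR files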

Related Products

This article applies to 1 product.