
Why Was the Inference Time of the Whisper-Small-Int8-Dynamic-Inc Model Shorter Than That of the Whisper-Base Model?

Content Type: Troubleshooting   |   Article ID: 000101129   |   Last Reviewed: 06/16/2025

Description

  • Exported and converted whisper-small-int8-dynamic-inc into an INT8 ONNX model
  • Exported and converted whisper-base into an INT8 OpenVINO™ model
  • Ran inference with both the whisper-small-int8-dynamic-inc and whisper-base models (see the timing sketch after this list)
  • The inference time of the whisper-small-int8-dynamic-inc model was shorter than that of the whisper-base model
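
The latency comparison described above can be reproduced with a short timing script. The following is only a minimal sketch, not the exact script used for this article: it assumes both exported models already exist on disk (the local directory names are placeholders) and that the optimum, optimum-intel, and transformers packages are installed.

    # A minimal timing sketch, not the exact script used for this article.
    # Assumptions: both exported models already exist on disk (the local
    # directory names below are placeholders), and the optimum, optimum-intel,
    # and transformers packages are installed.
    import time

    import numpy as np
    from optimum.intel import OVModelForSpeechSeq2Seq
    from optimum.onnxruntime import ORTModelForSpeechSeq2Seq
    from transformers import WhisperProcessor

    SAMPLE_RATE = 16000
    audio = np.random.randn(SAMPLE_RATE * 5).astype(np.float32)  # 5 s of dummy audio

    def average_latency(model, processor, n_runs=5):
        # Average generate() latency over n_runs, after one warm-up call.
        inputs = processor(audio, sampling_rate=SAMPLE_RATE, return_tensors="pt")
        model.generate(inputs.input_features)  # warm-up
        start = time.perf_counter()
        for _ in range(n_runs):
            model.generate(inputs.input_features)
        return (time.perf_counter() - start) / n_runs

    # INT8 ONNX export of whisper-small (quantized with Intel Neural Compressor)
    onnx_model = ORTModelForSpeechSeq2Seq.from_pretrained("./whisper-small-int8-onnx")
    onnx_processor = WhisperProcessor.from_pretrained("openai/whisper-small")

    # INT8 OpenVINO export of whisper-base
    ov_model = OVModelForSpeechSeq2Seq.from_pretrained("./whisper-base-int8-ov")
    ov_processor = WhisperProcessor.from_pretrained("openai/whisper-base")

    print(f"whisper-small INT8 ONNX    : {average_latency(onnx_model, onnx_processor):.3f} s")
    print(f"whisper-base  INT8 OpenVINO: {average_latency(ov_model, ov_processor):.3f} s")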

Resolution

The whisper-small-int8-dynamic-inc model is based on whisper-small, which is a different (larger) model than whisper-base. Because the two checkpoints have different architectures and sizes, and were also run through different runtimes (ONNX Runtime versus OpenVINO™), their inference times are expected to differ when compared directly.
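
One way to confirm that the two checkpoints use different architectures is to compare their configurations. The sketch below assumes the transformers package is available; the model IDs are the public OpenAI checkpoints and are used only for illustration.

    # A configuration check showing the two checkpoints are different architectures.
    # Assumes the transformers package is available; the model IDs are the public
    # OpenAI checkpoints, used here only for illustration.
    from transformers import WhisperConfig

    for name in ("openai/whisper-small", "openai/whisper-base"):
        cfg = WhisperConfig.from_pretrained(name)
        print(
            f"{name}: d_model={cfg.d_model}, "
            f"encoder_layers={cfg.encoder_layers}, decoder_layers={cfg.decoder_layers}"
        )

    # whisper-small is wider and deeper (d_model=768, 12 encoder + 12 decoder layers)
    # than whisper-base (d_model=512, 6 + 6 layers), so their latencies are not
    # directly comparable.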

Related Products

This article applies to 1 product.