Quick steps to download Intel's pre-trained models for inferencing with OpenVINO™.
- Found only model.yml and .prototxt files in the model directory.
- Could not find the .xml and .bin (IR) files.
- Initialize the OpenVINO environment:
<path_to_openvino>\bin\setupvars.bat
- Go to the model downloader directory:
cd <path_to_openvino>\deployment_tools\tools\model_downloader
- Download the model using downloader.py:
python downloader.py --name <model_name>
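For example, to fetch a single Intel model in one precision only (the model name face-detection-adas-0001 and the --precisions filter are shown purely as an illustration; run python downloader.py --help to confirm the options available in your release):
python downloader.py --name face-detection-adas-0001 --precisions FP32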
For Intel pre-trained models, three folders (FP16, FP16-INT8, and FP32), each containing the corresponding .xml and .bin files, will be created in the directory below:
<path_to_openvino>\deployment_tools\tools\model_downloader\intel\<model_name>
Public pre-trained models will be located in the directory below:
<path_to_openvino>\deployment_tools\tools\model_downloader\public\<model_name>
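Once the .xml and .bin files are in place, they can be loaded for inference. The snippet below is a minimal sketch using the Inference Engine Python API (openvino.inference_engine) as shipped with the 2021.x releases; the model paths, the CPU device, and a single NCHW image input are assumptions to adapt to your own model and setup:

from openvino.inference_engine import IECore
import numpy as np

# Illustrative paths; point these at the .xml/.bin files downloaded above.
model_xml = r"intel\<model_name>\FP32\<model_name>.xml"
model_bin = r"intel\<model_name>\FP32\<model_name>.bin"

ie = IECore()
net = ie.read_network(model=model_xml, weights=model_bin)    # parse the IR
exec_net = ie.load_network(network=net, device_name="CPU")   # compile for the target device

input_blob = next(iter(net.input_info))                      # name of the first input
n, c, h, w = net.input_info[input_blob].input_data.shape     # expected NCHW input shape
dummy = np.zeros((n, c, h, w), dtype=np.float32)             # placeholder input tensor

result = exec_net.infer(inputs={input_blob: dummy})          # dict keyed by output blob name
print("Output blobs:", list(result.keys()))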
Refer to the Converting and Preparing Models documentation for more information on downloading and optimizing models.