Visible to Intel only — GUID: hfs1656011654049
Ixiasoft
6.3. Preparing a Model
A model must be converted from a framework format (such as TensorFlow, Caffe, or PyTorch) into a pair of .bin and .xml files before the FPGA AI Suite compiler (the dla_compiler command) can ingest the model.
source ~/build-openvino-dev/openvino_env/bin/activate
omz_downloader --name resnet-50-tf \
    --output_dir $COREDLA_WORK/demo/models/
omz_converter --name resnet-50-tf \
    --download_dir $COREDLA_WORK/demo/models/ \
    --output_dir $COREDLA_WORK/demo/models/
The omz_downloader command downloads the trained model to the $COREDLA_WORK/demo/models/ folder. The omz_converter command runs the Model Optimizer, which converts the trained model into intermediate representation (.bin and .xml) files in the $COREDLA_WORK/demo/models/public/resnet-50-tf/FP32/ directory.
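The directory layout above is predictable from the model name and precision. The following sketch is a hypothetical helper (the `ir_pair` function is not part of the FPGA AI Suite or OpenVINO tooling); it only mirrors the layout described above to compute where omz_converter places the .xml and .bin files:

```python
from pathlib import Path

def ir_pair(models_dir, model_name, precision="FP32"):
    """Return the expected .xml/.bin IR pair written by omz_converter.

    Hypothetical helper: it only reconstructs the directory layout
    models_dir/public/<model>/<precision>/<model>.{xml,bin} described above.
    """
    ir_dir = Path(models_dir) / "public" / model_name / precision
    return ir_dir / f"{model_name}.xml", ir_dir / f"{model_name}.bin"
```

For example, calling the helper with the demo work directory and "resnet-50-tf" yields the pair of paths that the dla_compiler command later ingests.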
- The README.md file provides background information about the model.
- The model.yml file shows the detailed command-line arguments given to the Model Optimizer (mo.py) when it converts the model to a pair of .bin and .xml files.
For a list of OpenVINO™ Model Zoo models that the FPGA AI Suite supports, refer to the FPGA AI Suite IP Reference Manual.
Troubleshooting OpenVINO™ Open Model Zoo Converter Errors
The omz_converter command can fail with an error similar to the following when converting a Keras model such as mobilenet-v3-large-1.0-224-tf:
ValueError: Invalid filepath extension for saving. Please add either a '.keras' extension for the native Keras format (recommended) or a '.h5' extension. Use 'model.export(filepath)' if you want to export a SavedModel for use with TFLite/TFServing/etc.
- Run the following Python code to convert MobileNetV3 to the TensorFlow SavedModel format:
import os

import tensorflow as tf

COREDLA_WORK = os.environ.get("COREDLA_WORK")
DOWNLOAD_DIR = f"{COREDLA_WORK}/demo/models/"
OUTPUT_DIR = f"{COREDLA_WORK}/demo/models/"

# Set the image data format
tf.keras.backend.set_image_data_format("channels_last")

# Load the MobileNetV3Large model with the specified weights
model = tf.keras.applications.MobileNetV3Large(
    weights=str(
        f"{DOWNLOAD_DIR}/public/mobilenet-v3-large-1.0-224-tf/weights_mobilenet_v3_large_224_1.0_float.h5"
    )
)

# Save the model to the specified output directory
model.export(filepath=f"{OUTPUT_DIR}/mobilenet_v3_large_224_1.0_float.savedmodel")
- Run the following command to convert the TensorFlow SavedModel format to the OpenVINO model format:
mo \
    --input_model=$COREDLA_WORK/demo/models/mobilenet_v3_large_224_1.0_float.savedmodel \
    --model_name=mobilenet_v3_large_224_1.0_float \
    --input_shape=[1,224,224,3] \
    --layout nhwc
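When converting several models, the mo invocation above can be assembled programmatically. The sketch below is a hypothetical helper (`mo_command` is not part of OpenVINO); it only builds the argument list shown above, for use with subprocess.run():

```python
def mo_command(saved_model_path, model_name, input_shape, layout="nhwc"):
    """Build the Model Optimizer argument list used in the workaround above.

    Hypothetical convenience wrapper; pass the returned list to
    subprocess.run() inside the activated OpenVINO environment.
    """
    # Format the shape tuple as the bracketed form mo expects, e.g. [1,224,224,3]
    shape = "[" + ",".join(str(dim) for dim in input_shape) + "]"
    return [
        "mo",
        f"--input_model={saved_model_path}",
        f"--model_name={model_name}",
        f"--input_shape={shape}",
        "--layout", layout,
    ]
```

For the MobileNetV3 workaround, passing the SavedModel path, the model name, and the shape (1, 224, 224, 3) reproduces the command line shown above.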