FPGA AI Suite: Getting Started Guide

ID 768970
Date 4/21/2025
Public

6.3. Preparing a Model

A model must be converted from a framework (such as TensorFlow, Caffe, or PyTorch) into a pair of .bin and .xml files before the FPGA AI Suite compiler (dla_compiler command) can ingest the model.

The following commands download the ResNet-50 TensorFlow model and convert it with the OpenVINO™ Open Model Zoo tools:
source ~/build-openvino-dev/openvino_env/bin/activate

omz_downloader --name resnet-50-tf \
    --output_dir $COREDLA_WORK/demo/models/

omz_converter --name resnet-50-tf \
    --download_dir $COREDLA_WORK/demo/models/ \
    --output_dir $COREDLA_WORK/demo/models/

The omz_downloader command downloads the trained model to the $COREDLA_WORK/demo/models/ folder. The omz_converter command runs the Model Optimizer, which converts the trained model into the intermediate representation .bin and .xml files in the $COREDLA_WORK/demo/models/public/resnet-50-tf/FP32/ directory.
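
If the conversion completes without errors, the FP32 directory holds the .xml and .bin pair that the dla_compiler command ingests. As an optional check, the following Python sketch (run inside the activated openvino_env virtual environment) loads the generated IR with the OpenVINO™ Runtime and prints its inputs. The resnet-50-tf.xml and resnet-50-tf.bin file names follow the usual Open Model Zoo naming convention and are an assumption, so adjust them if your output differs:
import os

from openvino.runtime import Core

# Assumed IR location and file names; adjust if your omz_converter output differs.
ir_dir = os.path.join(os.environ["COREDLA_WORK"],
                      "demo/models/public/resnet-50-tf/FP32")
xml_path = os.path.join(ir_dir, "resnet-50-tf.xml")
bin_path = os.path.join(ir_dir, "resnet-50-tf.bin")

# Both files must exist before the model can be compiled with the dla_compiler command.
for path in (xml_path, bin_path):
    print(path, "exists" if os.path.exists(path) else "MISSING")

# Loading the IR with the OpenVINO Runtime confirms that the files are well formed.
model = Core().read_model(xml_path)
for model_input in model.inputs:
    print(model_input.get_any_name(), model_input.get_partial_shape())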

The directory $COREDLA_WORK/demo/open_model_zoo/models/public/resnet-50-tf/ contains two useful files that do not appear in the $COREDLA_WORK/demo/models/ directory tree:
  • The README.md file describes background information about the model.
  • The model.yml file shows the detailed command-line options given to the Model Optimizer (mo.py) when it converts the model to a pair of .bin and .xml files, as shown in the sketch after this list.
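
The Model Optimizer arguments recorded in model.yml can also be listed programmatically. The following is a minimal sketch, assuming that the PyYAML package (a dependency of the Open Model Zoo tools) is available in the openvino_env virtual environment and that the file uses the usual model_optimizer_args entry:
import os

import yaml  # PyYAML, installed as a dependency of the Open Model Zoo tools

yml_path = os.path.join(
    os.environ["COREDLA_WORK"],
    "demo/open_model_zoo/models/public/resnet-50-tf/model.yml")

with open(yml_path) as f:
    description = yaml.safe_load(f)

# Print the options that omz_converter passes to the Model Optimizer for this model.
for arg in description.get("model_optimizer_args", []):
    print(arg)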

For a list of OpenVINO™ Open Model Zoo models that the FPGA AI Suite supports, refer to the FPGA AI Suite IP Reference Manual.

Troubleshooting OpenVINO™ Open Model Zoo Converter Errors

You might get the following error while running the omz_converter command on a TensorFlow model:
ValueError: Invalid filepath extension for saving. Please add either a 
'.keras' extension for the native Keras format (recommended) or a '.h5' 
extension. Use 'model.export(filepath)' if you want to export a SavedModel 
for use with TFLite/TFServing/etc.
If you get this error, you can follow a process similar to the following example, which converts the MobileNetV3 TensorFlow model to an OpenVINO model:
  1. Run the following Python code to convert MobileNetV3 to the TensorFlow SavedModel format:
    import os
    import tensorflow as tf
    
    COREDLA_WORK = os.environ.get("COREDLA_WORK")
    DOWNLOAD_DIR = f"{COREDLA_WORK}/demo/models/"
    OUTPUT_DIR = f"{COREDLA_WORK}/demo/models/"
    
    # Set the image data format
    tf.keras.backend.set_image_data_format("channels_last")
    
    # Load the MobileNetV3Large model with the specified weights
    model = tf.keras.applications.MobileNetV3Large(
        weights=str(
            f"{DOWNLOAD_DIR}/public/mobilenet-v3-large-1.0-224-tf/weights_mobilenet_v3_large_224_1.0_float.h5"
        )
    )
    # Save the model to the specified output directory
    model.export(filepath=f"{OUTPUT_DIR}/mobilenet_v3_large_224_1.0_float.savedmodel")
  2. Run the following command to convert the TensorFlow SavedModel to the OpenVINO model format:
    mo \
      --input_model=$COREDLA_WORK/demo/models/mobilenet_v3_large_224_1.0_float.savedmodel \
      --model_name=mobilenet_v3_large_224_1.0_float \
      --input_shape=[1,224,224,3] \
      --layout nhwc
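
After step 2 completes, you can verify the converted model in the same way as the ResNet-50 example. A minimal sketch follows; it assumes that mo was run from the $COREDLA_WORK/demo/models/ directory (mo writes the IR to the current working directory unless --output_dir is given), so adjust the path if you ran it elsewhere:
import os

from openvino.runtime import Core

# Assumed IR location; pass --output_dir to mo to choose a different location.
xml_path = os.path.join(os.environ["COREDLA_WORK"],
                        "demo/models/mobilenet_v3_large_224_1.0_float.xml")

model = Core().read_model(xml_path)
# The input should match the --input_shape and --layout options given to mo,
# that is, a [1,224,224,3] NHWC tensor.
for model_input in model.inputs:
    print(model_input.get_any_name(), model_input.get_partial_shape())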