This document has instructions to run U-Net FP32 inference using Intel® Optimization for TensorFlow*.
Quick Start Scripts
| Script name | Description |
|-------------|-------------|
| `fp32_inference.sh` | Runs inference with a batch size of 1 using a pretrained model |
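The script itself ships inside the model package; as a minimal sketch, a quickstart wrapper of this kind typically validates the environment variables and launches inference. The Python entry point and flag names below are hypothetical placeholders, not the actual contents of `fp32_inference.sh`:

```bash
#!/usr/bin/env bash
# Hypothetical sketch of a quickstart wrapper like fp32_inference.sh;
# the real script ships in the model package and may differ.
: "${TF_UNET_DIR:?Set TF_UNET_DIR to your tf_unet clone}"
: "${OUTPUT_DIR:?Set OUTPUT_DIR to a writable log directory}"
mkdir -p "$OUTPUT_DIR"

# Assumed entry point: run inference with batch size 1 against the
# pretrained checkpoint, logging to OUTPUT_DIR.
python run_inference.py --batch_size 1 \
    2>&1 | tee "$OUTPUT_DIR/unet_fp32_inference.log"
```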
To run on bare metal, the following prerequisites must be installed in your environment, including a clone of the tf_unet repository with the CPU optimizations from pull request 276 checked out:
```bash
git clone https://github.com/jakeret/tf_unet.git
cd tf_unet/
git fetch origin pull/276/head:cpu_optimized
git checkout cpu_optimized
```
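The Python prerequisites are not reproduced in this document. As a minimal sketch, assuming a typical setup for this model, the environment can be prepared as follows; the package list is an assumption, not the official prerequisites:

```bash
# Assumed Python environment setup; the exact package list and versions are
# assumptions, so check the official prerequisites for this model package.
python3 -m venv unet_env
source unet_env/bin/activate
pip install intel-tensorflow numpy Pillow matplotlib click

# Sanity check (run inside the tf_unet clone): confirm the CPU-optimized
# branch from PR 276 is checked out.
git rev-parse --abbrev-ref HEAD   # expected output: cpu_optimized
```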
After installing the prerequisites, download and untar the model package. Set environment variables for the path to your `TF_UNET_DIR` and an `OUTPUT_DIR` where log files will be written, then run a quickstart script.
```bash
export TF_UNET_DIR=<tf_unet directory>
export OUTPUT_DIR=<directory where log files will be written>

wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v2_3_0/unet-fp32-inference.tar.gz
tar -xzf unet-fp32-inference.tar.gz
cd unet_trained

quickstart/fp32_inference.sh
```
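On multi-socket machines, Intel's general guidance for Intel Optimization for TensorFlow (OpenMP thread pinning plus numactl) usually applies to runs like this one as well. The values below are illustrative assumptions to tune for your hardware, not settings taken from this document:

```bash
# Illustrative threading and affinity settings for Intel Optimization for
# TensorFlow; tune OMP_NUM_THREADS to the physical core count of one socket.
export OMP_NUM_THREADS=28
export KMP_AFFINITY=granularity=fine,compact,1,0
export KMP_BLOCKTIME=1

# Pin the quickstart script to socket 0's CPUs and local memory.
numactl --cpunodebind=0 --membind=0 quickstart/fp32_inference.sh
```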
Documentation and Sources
LEGAL NOTICE: By accessing, downloading or using this software and any required dependent software (the “Software Package”), you agree to the terms and conditions of the software license agreements for the Software Package, which may also include notices, disclaimers, or license terms for third party software included with the Software Package. Please refer to the license file for additional details.