Use Optimized Intel Products for TensorFlow*
This lab provides hands-on experience using TensorFlow* for transfer learning on a common use case and demonstrates its ease of use. You will run inference on the retrained model, optimize the model for best latency, and deploy it using TensorFlow Serving on a 3rd gen Intel® Xeon® Scalable processor with AI acceleration.
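The transfer-learning step in the lab can be sketched as follows. This is a minimal illustration, not the lab's actual notebook: the backbone (MobileNetV2), input size, and class count are assumptions, and `weights=None` is used so the sketch runs offline (a real transfer-learning run would load pretrained weights, e.g. `weights="imagenet"`, and call `model.fit` on the new dataset).

```python
import tensorflow as tf

NUM_CLASSES = 5  # hypothetical number of target classes

# Load a backbone without its classification head; weights=None keeps the
# sketch offline (the lab would start from pretrained weights instead).
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights=None)
base.trainable = False  # freeze the backbone for transfer learning

# Attach a small trainable head for the new task.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Inference on a dummy batch; real use would fine-tune with model.fit first.
probs = model(tf.zeros([1, 96, 96, 3]), training=False)
print(probs.shape)  # (1, 5)
```

Freezing the backbone means only the small head is trained, which is what makes transfer learning fast on CPU; the resulting SavedModel is what TensorFlow Serving would load in the deployment step.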
A demo highlights the improved AI performance delivered by the new Intel® Advanced Matrix Extensions (Intel® AMX) instructions on next-generation 4th gen Intel Xeon Scalable processors (formerly code-named Sapphire Rapids).
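On Linux, you can check whether the host CPU exposes the AMX instruction set before running such a demo. This is an illustrative sketch, not an official API: it simply looks for the `amx_tile` feature flag that 4th gen Xeon (Sapphire Rapids) processors report in `/proc/cpuinfo`.

```python
# Sketch: detect Intel AMX support on Linux via CPU feature flags.
# "amx_tile" (alongside "amx_bf16" and "amx_int8") appears in
# /proc/cpuinfo on Sapphire Rapids-class CPUs; this check is an
# assumption-laden convenience, not part of TensorFlow or oneDNN.
def has_amx(cpuinfo_path="/proc/cpuinfo"):
    try:
        with open(cpuinfo_path) as f:
            flags = f.read()
    except OSError:
        return False  # non-Linux or unreadable: assume no AMX
    return "amx_tile" in flags

print("AMX available:", has_amx())
```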
Speaker
Sachin Muradi is part of the team that helps integrate Intel® oneAPI Deep Neural Network Library (oneDNN) optimizations for CPUs in the Google* open source machine learning framework, TensorFlow.
He has experience working on libraries and compilers for deep learning accelerator hardware within Intel, including the nGraph back end for a deep learning inference accelerator, an Intel® Movidius™ VPU compiler, a plug-in for the OpenVINO™ toolkit, and FPGAs.
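The oneDNN optimizations mentioned above ship inside stock TensorFlow and are toggled with the `TF_ENABLE_ONEDNN_OPTS` environment variable (enabled by default on x86 in recent releases). A minimal sketch, assuming a TensorFlow build with the oneDNN integration; the variable must be set before TensorFlow is imported:

```python
import os

# Toggle oneDNN-optimized kernels; must be set before importing TensorFlow.
# Set to "0" to disable and compare performance against the default path.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

import tensorflow as tf

# Many builds log a line at import time noting that the binary is optimized
# with oneDNN; the version string confirms which release is in use.
print(tf.__version__)
```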
Product and Performance Information
Performance varies by use, configuration, and other factors. Learn more at www.Intel.com/PerformanceIndex.