
A few steps for faster inferencing on Intel® Hardware

MaryT_Intel

Enabling cloud developers to seamlessly embark on a journey from Cloud-to-Edge

Key Takeaways

  • Learn how to convert TensorFlow and Keras models to OpenVINO IR using the OpenVINO™ Toolkit Model Optimizer in AWS SageMaker with one line of code.
  • Learn how to benchmark your model's performance across multiple Intel® hardware platforms using the sample Benchmark App Jupyter notebook in Intel® DevCloud with a single click.
  • Learn how to deploy your inference applications and OpenVINO models to the edge using Intel® Edge Software Hub with a single click.
[Image: faster inferencing steps]

To support the cloud developer journey from cloud to edge, we have built multiple accelerators; we will showcase three of them in this blog. You can build and train your models in the AWS cloud using AWS SageMaker and then optimize these models using the OpenVINO™ toolkit Model Optimizer. Once optimized, you can benchmark your models across all of the Intel® hardware available in Intel® DevCloud. Finally, we'll show how to set up your edge environment for the Intel® Distribution of OpenVINO™ toolkit and AWS Greengrass and deploy applications using a Greengrass Python Lambda that leverages the Intel® Distribution of OpenVINO™ toolkit on the edge to perform image classification and object detection.

Overview of Intel® Distribution of OpenVINO™ Toolkit

With the recent developments in the field of AI, developers now have multiple options in frameworks, models, and hardware. However, performance ultimately depends on pairing the underlying hardware with the right accelerators and their associated software. One such accelerator that helps developers maximize inference performance is the Intel® Distribution of OpenVINO™ toolkit, which ships with pre-optimized models.

Just in case you want to know more, the Intel® Distribution of OpenVINO™ toolkit is a comprehensive toolkit for quickly developing applications and solutions that emulate human vision. Based on Convolutional Neural Networks (CNNs), the toolkit extends CV workloads across Intel® hardware, maximizing performance. For more details, visit OpenVINO™ Toolkit Overview.

[Table 1: inferencing performance by model]

As for hardware options that can boost performance, Intel has a scalable portfolio of CPUs, VPUs, and FPGAs that can meet the needs of your inference solution. Table 1 shows the high-performance output of the Intel® Core™ i7 processor. You can see the advantages of using the Intel® Distribution of OpenVINO™ toolkit, which can reach up to 1,200 frames per second for certain models. For more details, visit system configuration and more performance benchmarks.

Learn how to use the OpenVINO™ Toolkit Model Optimizer in AWS SageMaker

Now, we will showcase how simple it is to optimize your models using the OpenVINO™ toolkit Model Optimizer inside AWS SageMaker. To make model optimization easy for you, we have developed a Python function that simplifies and implements inline model conversion. This function uses the OpenVINO™ toolkit Docker container to convert TensorFlow and Keras models¹. With OpenVINO IR conversion, you write inference code once and then use models from different frameworks in IR format. Supported TFHub models and their input shapes are provided for convenience.
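To give a rough idea of what such a one-line helper could look like, here is a minimal sketch that shells out to the Model Optimizer (mo) CLI. The real ov_utils.py in the repo wraps the OpenVINO™ toolkit Docker container instead, so the function name and arguments below are illustrative assumptions, not the repo's implementation.

```python
# Minimal sketch of a one-line conversion helper. Assumes the Model Optimizer
# "mo" CLI is available locally; the real ov_utils.py uses the OpenVINO Docker
# container, so treat this as illustrative only.
import subprocess
from pathlib import Path

def convert_to_ir(saved_model_dir, output_dir="ir_model", input_shape=None):
    """Convert a TensorFlow SavedModel to OpenVINO IR (.xml + .bin)."""
    cmd = ["mo", "--saved_model_dir", saved_model_dir, "--output_dir", output_dir]
    if input_shape:                      # e.g. "[1,224,224,3]" for many TFHub models
        cmd += ["--input_shape", input_shape]
    subprocess.run(cmd, check=True)      # raise if the conversion fails
    return Path(output_dir)

# The single line you would call from your SageMaker notebook:
# ir_dir = convert_to_ir("my_saved_model", input_shape="[1,224,224,3]")
```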

[Image: SageMaker]

Get Started

  1. Create a SageMaker Notebook and clone the GitHub repo into your SageMaker Notebook Instance.
[Image: SageMaker optimizations]

  2. Open the SageMaker Notebook and move into the aws/mo-utility directory.

[Image: SageMaker notebook]

  3. After moving to the aws/mo-utility directory, you will see the following files:

      • create_ir_for_keras.ipynb: Sample notebook demonstrating how to convert Keras Applications models to OpenVINO IR format
      • create_ir_for_tfhub.ipynb: Sample notebook demonstrating how to convert TFHub models to OpenVINO IR format
      • create_ir_for_obj_det.ipynb: Sample notebook demonstrating how to convert object detection models to OpenVINO IR format
      • ov_utils.py: Utility code that enables model conversion
      • TFHub-SupportedModelList.md: List of supported TF1/TF2 models and associated input shapes from TFHub
      • Keras-SupportedModelList.md: List of supported Keras Applications models
      • ObjDet-SupportedModelList.md: List of supported object detection models
      • TFHub-TF1-SupportedModelList.pdf: List of supported TF1 models and associated input shapes from TFHub, in PDF format
      • TFHub-TF2-SupportedModelList.pdf: List of supported TF2 models and associated input shapes from TFHub, in PDF format
      • Keras-SupportedModelList.pdf: List of supported Keras Applications models, in PDF format
      • ObjDet-SupportedModelList.pdf: List of supported object detection models, in PDF format
      • requirements.txt: List of Python libraries installed into your Jupyter notebook with pip
      • README.md: README file
      • ov-utils-arch.png: Architecture diagram

Quick Glance at Converting a Keras App Model to OpenVINO™ IR

[Image: Keras app]
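For orientation, here is a minimal sketch of the kind of flow the create_ir_for_keras.ipynb notebook walks through, assuming a TensorFlow 2.x environment; the model choice and output paths are illustrative, not taken from the notebook.

```python
# Illustrative only: export a Keras Applications model as a TensorFlow
# SavedModel, then hand it to the Model Optimizer. Assumes TensorFlow 2.x.
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")   # expects 224x224x3 input
model.save("mobilenet_v2_saved_model")                          # SavedModel directory

# Conversion step (the notebook wraps this; CLI shown for clarity):
#   mo --saved_model_dir mobilenet_v2_saved_model \
#      --input_shape "[1,224,224,3]" --output_dir mobilenet_v2_ir
```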

Quick Glance at Converting a TensorFlow Hub Model to OpenVINO™ IR

[Image: TensorFlow Hub]
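Similarly, a minimal sketch of the TFHub flow: TFHub modules typically need an explicit input shape, which is why the supported-model lists include one. The model handle, shape, and paths below are example assumptions.

```python
# Illustrative only: wrap a TFHub classification module in a Keras model,
# build it with an explicit input shape, and save it for the Model Optimizer.
import tensorflow as tf
import tensorflow_hub as hub

handle = "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/classification/5"
model = tf.keras.Sequential([hub.KerasLayer(handle)])
model.build([None, 224, 224, 3])      # TFHub layers need the shape up front
model.save("tfhub_saved_model")

# Conversion step, passing the same shape:
#   mo --saved_model_dir tfhub_saved_model \
#      --input_shape "[1,224,224,3]" --output_dir tfhub_ir
```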

Quick Glance at Converting Object Detection Models to OpenVINO™ IR

[Image: object detection models]
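Models exported from the TensorFlow Object Detection API need a couple of extra Model Optimizer arguments. Below is a hedged sketch: the model directory is a placeholder, the transformations config JSON is shipped with the toolkit (its path here is also a placeholder), and the sample notebook handles this selection for you.

```python
# Illustrative only: convert a TensorFlow Object Detection API SavedModel to IR.
# The pipeline.config is exported alongside the model; the transformations
# config JSON ships with the Model Optimizer (path below is a placeholder).
import subprocess

subprocess.run([
    "mo",
    "--saved_model_dir", "ssd_mobilenet_v2/saved_model",
    "--tensorflow_object_detection_api_pipeline_config", "ssd_mobilenet_v2/pipeline.config",
    "--transformations_config", "/path/to/ssd_support_api_v2.0.json",
    "--output_dir", "ssd_mobilenet_v2_ir",
], check=True)
```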

Next Steps:

In the next section, we’ll talk about how to benchmark your model on a multitude of Intel® hardware using Intel® DevCloud for the Edge.

Intel® DevCloud: a one-click, zero-cost way to check the performance of deep learning models across Intel® hardware

Do you want to know how your model performs across different Intel® hardware? Intel offers a device sandbox, Intel® DevCloud, where you can develop, test, and run your workloads for free on a cluster of the latest Intel® hardware. With all of that hardware in one place, the natural question is: which Intel® hardware is best suited for your model? To answer it, we provide a sample benchmark notebook that shows you exactly how your deep learning model performs on each device.

In the previous section, you were able to convert your TensorFlow and Keras image classification models and TensorFlow object detection models to OpenVINO IR format and store them in an S3 bucket.

In this section, you'll take those OpenVINO IR models right from your S3 bucket and benchmark them on different hardware within Intel® DevCloud with just one click, using the provided sample Jupyter notebook.

[Image: Jupyter]

Ready to try it out on your models? Check out the Benchmark Sample on Intel® DevCloud for the Edge. Just provide your AWS credentials and S3 bucket, and the notebook will pull the model from the S3 bucket for you.

Quick Glance at the Sample Jupyter Notebook

[Image: Jupyter]
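To give a feel for what the notebook does under the hood, here is a minimal sketch: fetch the IR files from S3 with boto3, then run OpenVINO's benchmark_app against a chosen device. The bucket name, object keys, and device are placeholders; the actual notebook adds device selection and result formatting.

```python
# Illustrative only: download the IR from S3 and benchmark it on one device.
import subprocess
import boto3

s3 = boto3.client("s3")                               # uses the AWS credentials you provide
for name in ("model.xml", "model.bin"):               # IR = .xml topology + .bin weights
    s3.download_file("my-model-bucket", f"ir/{name}", name)

# benchmark_app ships with the Intel® Distribution of OpenVINO™ toolkit
subprocess.run(["benchmark_app", "-m", "model.xml", "-d", "CPU", "-t", "30"], check=True)
```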

After running all the cells in the Jupyter notebook, you'll gain insight into which hardware your model performs best on through a detailed table output, similar to the one below:

[Image: Jupyter benchmarks]

Next Steps:

In the next section, we’ll talk about how to deploy inference applications and OpenVINO™ Models using Intel® Edge Software Hub.

Deploying Inference Applications and OpenVINO™ Models using Intel® Edge Software Hub

So, now that you have benchmarked your models across multiple Intel® hardware platforms, you're probably ready to deploy your model at the edge. This is where Intel® Edge Software Hub comes into play. Intel® Edge Software Hub allows developers like yourself to customize, validate, and deploy use case-specific solutions faster and with greater confidence.

Intel® Edge Software Hub hosts multiple use cases that make life much easier for you. One such use case is the Amazon Web Services (AWS)* Cloud to Edge Pipeline. This use case allows a single-click deployment of a ready-to-use cloud-to-edge inferencing pipeline that uses AWS IoT Greengrass and the OpenVINO™ toolkit on the edge, and AWS IoT in the cloud. By using features included in AWS Greengrass, you can deploy to multiple edge devices. This use case also includes a sample AWS IoT Greengrass Lambda for image classification and object detection.

How it works

The use case uses the inference engine included in the Intel® Distribution of OpenVINO™ toolkit and enables cloud developers to deploy inference functionality on Intel® IoT edge devices with accelerators.

These functions provide a seamless migration of visual analytics from cloud to edge in a secure manner using AWS Greengrass. 

[Image: Greengrass]
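For orientation, a heavily simplified sketch of such a Greengrass Python Lambda is shown below, written against the 2021-era OpenVINO Inference Engine Python API. The model path, MQTT topic, and preprocessing are assumptions; the sample Lambda packaged with the use case is more complete.

```python
# Illustrative only: a long-lived Greengrass Lambda that classifies an image
# with OpenVINO and publishes the top class to AWS IoT.
import json
import cv2
import numpy as np
import greengrasssdk
from openvino.inference_engine import IECore

iot = greengrasssdk.client("iot-data")
ie = IECore()
net = ie.read_network(model="/models/model.xml", weights="/models/model.bin")  # placeholder paths
exec_net = ie.load_network(network=net, device_name="CPU")
input_name = next(iter(net.input_info))

def classify(image_path):
    n, c, h, w = net.input_info[input_name].input_data.shape
    img = cv2.resize(cv2.imread(image_path), (w, h)).transpose(2, 0, 1)  # HWC -> CHW
    result = exec_net.infer({input_name: img.reshape(n, c, h, w)})
    top_class = int(np.argmax(next(iter(result.values()))))
    iot.publish(topic="openvino/inference", payload=json.dumps({"top_class": top_class}))

def function_handler(event, context):
    # Long-lived Greengrass Lambdas keep running; the handler itself can be a no-op.
    return
```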

Get Started

Ready to conduct inferencing at the edge? Download the Amazon Web Services (AWS)* Cloud to Edge Pipeline from Intel® Edge Software Hub. After downloading this use case, follow the documentation to set up the cloud-to-edge pipeline and conduct edge inferencing.

cloud to edge pipeline

 

Footnotes

¹ Currently, the Intel® Distribution of OpenVINO™ toolkit only supports a subset of TFHub and Keras models.

Notices and Disclaimers

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors.  

Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions.  Any change to any of those factors may cause the results to vary.  You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.   For more complete information visit www.intel.com/benchmarks.

Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available ​updates.  See backup for configuration details.  No product or component can be absolutely secure. 
Your costs and results may vary. 
Intel technologies may require enabled hardware, software or service activation.
© Intel Corporation.  Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.  Other names and brands may be claimed as the property of others.  ​
Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. 
Intel is committed to respecting human rights and avoiding complicity in human rights abuses. See Intel’s Global Human Rights Principles.  Intel’s products and software are intended only to be used in applications that do not cause or contribute to a violation of an internationally recognized human right.

About the Author
Mary is the Community Manager for this site. She likes to bike, and do college and career coaching for high school students in her spare time.