Quick Start and Rapid Prototyping using Intel® DevCloud for the Edge


Author: Savitha Gandikota


“Disruption” – almost every industry is being transformed by Artificial Intelligence (AI). And now every business and every person is being affected by the pandemic, which will accelerate change and the need for developers to create new AI algorithms, data models, applications, and workflows that solve industry challenges and aid in recovery. This unprecedented need demands innovation: innovative thinking, innovative solutions, and innovative tools that decrease time-to-market. In the AI industry, that means developing the right frameworks and models, then deploying the right applications and creative methods to validate solutions quickly.

The processing and analysis of visual data are key drivers in almost every business, from healthcare to retail to manufacturing to smart cities. Whether that processing happens at the edge or in the cloud depends on the specific workload and solution requirements. The cloud provides infrastructure for large-scale parallel computing with support for multiple AI frameworks, which is ideal for AI model training and application development. When it comes to inference, running it at the edge, where the data is collected, often makes the most sense: data is generated at such a rate that it is no longer practical or economical to push everything to the cloud, and when milliseconds matter, edge processing delivers near real-time decision-making. However, developing an edge-based solution can be complicated, especially if the analytics run on heterogeneous devices. The challenges include:

  • Deciding where to start on edge software development and edge device selection
  • The cost of experimenting with and validating applications on edge devices
  • The time spent setting up and maintaining devices

Together, these challenges create friction in developing and deploying edge-based solutions: resources are spent identifying, configuring, and testing a device that may or may not be appropriate for a given use case. Especially now, in a virtualized work environment, the logistics of assembling and configuring physical hardware are cumbersome; prolonged shipping times and limited access to the labs where devices are located add to the burden. At the same time, the need to rapidly conceptualize, prototype, and deliver solutions is greater than ever.

Try Before You Buy with the Intel® DevCloud for the Edge

To relieve developers of these burdens in developing and deploying edge AI solutions, Intel introduced the Intel® DevCloud for the Edge, a remote infrastructure where developers can build, test, and run AI solutions in a familiar environment similar to cloud-native tools.

It was engineered from the ground up to solve developer challenges. The Intel® DevCloud for the Edge provides a try-before-you-buy, ready-to-go infrastructure for developing AI solutions and validating them across multiple Intel-based devices. With pre-installed software stacks powered by the Intel® Distribution of OpenVINO™ toolkit, developers can bring their AI models and applications for rapid prototyping and remote deployment across Intel® architecture. What’s more, the Intel® DevCloud for the Edge always carries the latest release of the Intel® Distribution of OpenVINO™ toolkit, ensuring solutions are developed with the newest features and capabilities. This removes the risk of procuring hardware only to find out it doesn’t fit your use case.

In fact, because of these capabilities and innovations in the Intel® DevCloud for the Edge, it was recently awarded the 2020 Vision Product of the Year in the Developer Tools category.

With Intel® DevCloud for the Edge developers get:

  • Immediate, global access to the most up-to-date inferencing hardware and software Intel has to offer
  • Reduced time and cost to prototype AI applications, with preinstalled and preconfigured software such as the Intel® Distribution of OpenVINO™ toolkit for AI inferencing and model optimization on Intel hardware, plus Azure IoT Edge and ONNX Runtime for cloud services and capabilities at the edge
  • The ability to compare performance, latency, and telemetry metrics across multiple Intel-based edge hardware platforms and easily discover the best setup for a solution
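
To make the device-comparison idea concrete, the sketch below aggregates per-device inference latency samples and ranks devices by mean latency and throughput. The device names and timing numbers are illustrative placeholders, not measured DevCloud results; the point is only the shape of the comparison.

```python
import statistics

def summarize(latencies_ms):
    """Return mean latency (ms) and throughput (frames/sec) for one device."""
    mean_ms = statistics.mean(latencies_ms)
    fps = 1000.0 / mean_ms          # one inference per frame
    return {"mean_latency_ms": round(mean_ms, 2), "fps": round(fps, 1)}

# Hypothetical timing samples from inference runs on each target device.
samples = {
    "CPU": [34.0, 34.5, 35.0, 34.5],
    "GPU": [22.0, 21.5, 22.5, 22.0],
    "VPU": [48.0, 49.0, 48.5, 48.5],
}

results = {device: summarize(ms) for device, ms in samples.items()}
best = min(results, key=lambda d: results[d]["mean_latency_ms"])

for device, stats in results.items():
    print(device, stats)
print("Lowest mean latency:", best)
```

In practice, the DevCloud’s Jupyter notebooks collect these timings for you across real Intel hardware; a tabulation like this is just a compact way to pick the best-fitting device before purchasing.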

Figure 1: How It Works: From open source repositories to devices and analyzing the performance

Our partners are using the Intel® DevCloud for the Edge to validate performance for a variety of use cases, and to actively prototype and experiment with computer vision AI workloads on Intel® hardware.

The results from our exercise in Intel® DevCloud for the Edge show that neural network computing performance on a CPU now matches a GPU. These performance benchmarks are constantly improving, thanks to new, innovative tools. Now, advanced edge inference solutions like ours are feasible with CPUs, allowing us to run our solution as a real-time IoT application operating on Intel® Xeon® processors at multiple stores.

Erdem Yoruk, Chief Scientist, Vispera Information Technologies

The Intel® Distribution of OpenVINO™ toolkit was very helpful to optimize our AI model, as it helped accelerate the overall computation, while fully maintaining the model accuracy. Moreover, the ability to run it in the Intel® DevCloud for the Edge enabled us to optimize inferencing over a wide range of Intel® processors without having us invest in procuring and maintaining the physical hardware platforms.

Zia Manzur, COO, Technology for Social Impact

Getting started with the Intel® DevCloud for the Edge is free and easy, with multiple resources to help developers start running AI at the edge across Intel devices. Resources include:

  • Tutorials and Get Started documentation
  • Pre-trained models, real-world reference implementations
  • Jupyter notebooks to walk through inference deployments across different Intel® hardware and measure performance
  • User guide on uploading datasets, including videos

Along with several examples, the advanced section now includes a sample that takes advantage of the integration between ONNX Runtime (ONNX RT) and the OpenVINO™ toolkit. Developers can get started with the Clean Room Worker Safety Jupyter notebook, which uses a pre-trained Tiny YOLO v2 ONNX model for object detection. The use case detects the presence of safety gear (bunny suit and safety glasses), robots, and heads, which could be used for proximal hazard detection.
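
An object detector like Tiny YOLO v2 emits candidate bounding boxes that typically need a confidence filter before any hazard logic runs. The sketch below illustrates that post-processing step using the class labels from the use case; the `Detection` structure and threshold are illustrative, not the notebook’s actual API.

```python
from dataclasses import dataclass

# Class labels taken from the Clean Room Worker Safety use case.
LABELS = ("bunny suit", "safety glasses", "robot", "head")

@dataclass
class Detection:
    label: str
    confidence: float
    box: tuple  # (x, y, width, height) in pixels

def filter_detections(detections, threshold=0.5):
    """Keep only detections whose confidence clears the threshold."""
    return [d for d in detections if d.confidence >= threshold]

# Hypothetical raw model output for one frame.
raw = [
    Detection("bunny suit", 0.92, (40, 30, 120, 260)),
    Detection("head", 0.34, (60, 10, 50, 50)),       # likely a false positive
    Detection("safety glasses", 0.71, (70, 40, 40, 15)),
]

kept = filter_detections(raw)
print([d.label for d in kept])   # the low-confidence detection is dropped
```

A proximity-hazard check would then operate on the surviving boxes, for example flagging a head detected near a robot.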

With immediate device availability, developers can deploy pre-optimized models open-sourced by Intel as well as models generated from cloud services such as Azure Machine Learning (Azure ML). They can immediately see the performance advantages provided by the Intel® Distribution of OpenVINO™ toolkit across Intel devices: CPUs, GPUs, VPUs such as the Intel® Neural Compute Stick 2, and other Intel accelerators. What’s more, access includes a secure account with the convenience of 50 GB of file storage.

With edge device selection and performance well understood, developers and solution architects can purchase the best hardware for their needs and deploy cloud-to-edge solutions integrated with services from cloud providers. For instance, Azure developers can train models with Azure ML or through the customvision.ai portal, containerize them, and deploy through Azure IoT Hub or Azure IoT Central for additional platform capabilities, using developer kits like the UP Squared AI Vision or IEI TANK with add-on accelerator cards as well. This streamlined cloud-to-edge flow has been simplified by the availability of the ONNX Runtime Execution Provider for the OpenVINO™ toolkit. With this capability, developers can run ONNX models trained with Azure ML through the OpenVINO™ toolkit for all inference workloads across all Intel devices, both at the edge and in the cloud.
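
ONNX Runtime selects an execution provider from a preference-ordered list, falling back to the next entry when one is unavailable. The sketch below illustrates only that fallback idea in plain Python; the provider names match ONNX Runtime’s, but the `pick_provider` helper is hypothetical. With onnxruntime installed, the equivalent is roughly `ort.InferenceSession("model.onnx", providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"])`.

```python
# Preference order: try the OpenVINO execution provider first, then fall
# back to the default CPU provider (mirrors ONNX Runtime's provider list).
PREFERRED = ["OpenVINOExecutionProvider", "CPUExecutionProvider"]

def pick_provider(preferred, available):
    """Return the first preferred provider that is actually available."""
    for name in preferred:
        if name in available:
            return name
    raise RuntimeError("no usable execution provider")

# On a build that includes the OpenVINO execution provider:
print(pick_provider(PREFERRED, {"OpenVINOExecutionProvider", "CPUExecutionProvider"}))
# On a stock onnxruntime build, the CPU provider is the fallback:
print(pick_provider(PREFERRED, {"CPUExecutionProvider"}))
```

Listing the CPU provider last means the same application code runs everywhere, while automatically picking up OpenVINO™ toolkit acceleration when the execution provider is present.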

Sample applications, such as the Ready-to-Deploy Module on Azure Marketplace, provide a fully automated, zero-code experience to train in the cloud and deploy onto the same hardware kits above. Once validated on individual kits, developers can migrate to use case-specific Intel IoT RFP Ready Kits and scale through Intel IoT Market Ready Solutions. With most of the flow already pre-validated, developers can focus directly on their applications. Partners such as Anyvision and Hitachi have already been able to meet diverse customer needs on a much shorter timeline by delivering solutions that start with the Intel® DevCloud for the Edge.

Figure 3: Solutioning with Intel: From Prototyping to Production

Conclusion

Through the Intel® DevCloud for the Edge, developers get instant access to devices in order to test AI applications from anywhere in the world. Sign up directly from the Intel website or from Azure Marketplace. Be sure to check out the Clean Room Worker Safety sample using the ONNX RT Execution Provider for OpenVINO™ toolkit and other samples under the advanced section. Give the Intel® DevCloud for the Edge a try and join the conversation!

Other Resources You Might Like

 

Notices and Disclaimers

FTC Optimization Notice
Intel technologies may require enabled hardware, software or service activation.
No product or component can be absolutely secure.
Your costs and results may vary.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.