Intel® DevCloud for Edge Workloads—Developer Journeys

Intel® DevCloud for Edge Workloads is a development sandbox that facilitates prototyping, optimizing, and containerizing AI inference workloads. This video shows three distinct journeys:

  1. Building applications with the Intel® Distribution of OpenVINO™ toolkit and evaluating them on a variety of Intel edge nodes
  2. Optimizing AI models using the OpenVINO™ Deep Learning Workbench
  3. Containerizing workloads using the Intel® DevCloud Container Playground

Hello. I am Meghana Rao, an Edge AI evangelist at Intel.

In this video, we’ll learn about the Intel DevCloud for the Edge and how it enables prototyping, optimizing, and building deployable AI solutions using Intel® hardware and optimized software toolkits. Whether you are an application developer or a cloud-native developer, this video is for you.

Let’s start by understanding the core value of Intel DevCloud for the Edge.

The Intel DevCloud for the Edge is an online development sandbox to prototype, evaluate, optimize, and build containerized solutions.

You get free access to the latest Intel hardware with preinstalled software, such as the Intel Distribution of OpenVINO toolkit, and development tools built with productivity and performance in mind.

You can quick-start your prototyping-to-solution journey using a catalog of sample applications for various verticals, reference implementations, and models optimized for performance on Intel® architecture.

With that quick introduction, let’s explore how to use the Intel DevCloud for the Edge in your journey from prototyping to solution.

If you are an AI application developer or an architect looking to balance power, performance, cost, and form factor to ultimately drive system architecture, you need a seamless development and evaluation infrastructure with a variety of edge nodes to run your assessments on. In this section, we’ll focus on how the Intel DevCloud for the Edge serves your application prototyping and evaluation journeys.

Start using the Intel DevCloud for the Edge by creating a free account.

Check out the list of all available edge nodes. The Intel DevCloud has a collection of Intel® Core™, Xeon®, and Atom® edge nodes with integrated graphics and vision processing unit (VPU) accelerators.

AI models from the OpenVINO™ Open Model Zoo, optimized for performance on Intel architecture, provide a great starting point and help accelerate your prototyping journey.

You can either import your own code to the Intel DevCloud or start with a sample application and begin prototyping in the JupyterLab* IDE on a development node with the Intel Distribution of OpenVINO toolkit preinstalled on the back end.

The sample applications provide the detailed instructions necessary to convert an AI model into the OpenVINO intermediate representation (IR), a platform-agnostic abstraction that facilitates model deployment to either a CPU or an accelerator.
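
To make the conversion step concrete, here is a minimal sketch (not from the video) using the Model Optimizer Python API that ships with the openvino-dev package; the ONNX file name is a hypothetical placeholder, and exact options vary by OpenVINO release:

    # Minimal sketch: convert a model to OpenVINO IR.
    # Assumes the openvino-dev package; "model.onnx" is a hypothetical input file.
    from openvino.tools.mo import convert_model   # Model Optimizer Python API
    from openvino.runtime import serialize

    ov_model = convert_model("model.onnx")        # build an in-memory OpenVINO model
    serialize(ov_model, "model.xml")              # writes model.xml + model.bin (the IR)

The resulting model.xml and model.bin pair is what the sample applications load and what you submit to the edge nodes in the next step.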

You can then submit your job to your choice of edge nodes.
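
Under the hood, submission goes through a batch queue, so from a notebook it is typically a qsub call. The sketch below uses Python's subprocess module with a hypothetical node property label ("idc001skl") and a placeholder job script; look up the real labels for your target hardware on the DevCloud site:

    # Sketch: submit a job script to a chosen class of edge node via the batch queue.
    # "idc001skl" is a hypothetical node property label; "job.sh" is a placeholder.
    import subprocess

    result = subprocess.run(
        ["qsub", "-l", "nodes=1:idc001skl", "job.sh"],
        capture_output=True, text=True, check=True,
    )
    job_id = result.stdout.strip()   # qsub prints the job ID on success
    print("Submitted:", job_id)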

Once execution is complete, visualize the throughput and latency across edge nodes, or gain platform-level telemetry insights using Grafana dashboards.

This completes the first assessment of your AI model on a variety of Intel hardware. The next step is to further optimize the AI model using the OpenVINO Deep Learning Workbench on the Intel DevCloud for the Edge.

Accessing and launching the DL Workbench from the Intel DevCloud takes just the click of a button.

You can (a) select your own model in the OpenVINO™ IR format, or choose one from the Open Model Zoo and convert it to IR in the required precision (FP32 or FP16); (b) choose the Intel hardware; and (c) upload a dataset to run the baseline inference, which shows the throughput, latency, and the model graph.

What if you could further optimize your model without significant loss in accuracy? The Deep Learning Workbench lets you quantize the model to INT8 and compare the throughput, latency, and the number of layers converted from FP32 to INT8.
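
As a point of reference, a comparison like the one the Workbench displays can also be reproduced programmatically. This rough sketch times FP32 and INT8 IR files (hypothetical names) on the CPU with the OpenVINO runtime, assuming static input shapes:

    # Rough sketch: compare FP32 vs. INT8 throughput and latency on the CPU.
    # IR file names are hypothetical; input shapes are assumed to be static.
    import time
    import numpy as np
    from openvino.runtime import Core

    core = Core()
    for tag, xml_path in [("FP32", "model_fp32.xml"), ("INT8", "model_int8.xml")]:
        compiled = core.compile_model(core.read_model(xml_path), "CPU")
        shape = list(compiled.input(0).shape)
        data = np.random.rand(*shape).astype(np.float32)
        compiled([data])                          # warm-up run
        runs = 100
        start = time.perf_counter()
        for _ in range(runs):
            compiled([data])
        elapsed = time.perf_counter() - start
        print(f"{tag}: {runs / elapsed:.1f} FPS, {elapsed / runs * 1e3:.2f} ms/inference")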

With an optimized application in hand, the last step is to build and evaluate a containerized solution in preparation for deployment using the Intel DevCloud Container Playground.

The Container Playground is essentially a cloud desktop that enables editing code, building containers, running them on Intel hardware, tracking projects through a library, and visualizing results through the file system.

Additionally, the container environment helps accelerate time to solution through a catalog of containerized sample applications and reference implementations.

You can import prebuilt containers from popular registries for evaluation on Intel hardware, or test through Helm charts and Docker* Compose. Alternatively, you can import source code, datasets, and AI models from Git repositories, or develop containers in the JupyterLab environment.

To build containers using the optimized model, launch the JupyterLab IDE through the command-line interface option, open a terminal, and use a Dockerfile and Buildah to build a container that packages OpenVINO and the optimized model. This container can then be imported into the private registry, configured, and evaluated on your choice of Intel hardware, with results visualized through the dashboard and the file system.
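
As one illustration, a Dockerfile for such a container might look like the following sketch; the base image tag and file names are assumptions, not taken from the video:

    # Hypothetical Dockerfile sketch: package the optimized IR with an
    # OpenVINO runtime base image. Image tag and file paths are assumptions.
    FROM openvino/ubuntu20_runtime:latest

    # Copy the INT8 model produced by the DL Workbench into the image.
    COPY model_int8.xml model_int8.bin /opt/app/model/
    COPY app.py /opt/app/

    WORKDIR /opt/app
    CMD ["python3", "app.py"]

A container like this could then be built with Buildah (for example, buildah bud -t my-app .) before being imported into the registry.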

We hope that this video demonstrated how compelling the Intel DevCloud for the Edge is for AI application prototyping, optimization, and containerization.

Check out the Intel DevCloud for the Edge today and start your solution journey.

Thank you for watching.

Product and Performance Information

1. Performance varies by use, configuration, and other factors. Learn more at www.Intel.com/PerformanceIndex.