Intel® DevCloud: Edge Workloads

Announcements

This solution uses the Intel® Deep Learning Streamer and Intel® Distribution of OpenVINO™ toolkit to identify people and determine social distance in crowded public spaces. 

Learn More

Try your own AI workloads on the newly hosted Intel® Core™ i5 and i7 processor-based systems.

Learn about the latest inference API for automatic device selection and performance hints that dynamically optimize for latency or throughput.

View Sample

New notebooks, available in French and German, demonstrate brain tumor segmentation, classification, object detection, and style transfer.

This incremental update includes support for text classification models, OpenVINO™ API 2.0 enabled in tools, Cityscapes Dataset, and initial support for natural language processing (NLP) models. 

Learn More 

Code snippets are small blocks of reusable code that can be inserted in a Jupyter* Notebook to aid and accelerate coding of sample applications on Intel® DevCloud for edge workloads using the OpenVINO™ toolkit.

Launch JupyterLab and Try It Now

Now you can import and securely launch Helm* charts, Docker* Compose images, and containers on Intel® hardware in the new Kubernetes* environment. Explore, edit, and test OpenVINO™ toolkit sample applications with JupyterLab, and optimize AI models with the improved Deep Learning Workbench.

Intel® DevCloud has been updated to version 2022.1 of the OpenVINO™ toolkit.

Release Notes

This release adds support for Intel® Distribution of OpenVINO™ toolkit version 2021.4.2, and contains minor bug fixes and usability enhancements.

Get Started  

This latest version lets you run out-of-the-box benchmarks, measure model accuracy, perform comprehensive accuracy comparisons between floating-point and INT8 models, and more.

Get Started

Provides an enhanced experience when working with sample, tutorial, and prototype notebooks.

Intel® DevCloud now features the Intel® Distribution of OpenVINO™ toolkit version 2021.4.2.

Release Notes

This incremental update includes support for explainable AI for classification model types, visualization of inference results, streamlined INT8 calibration flow, and multiple UX improvements. 

Get Started

Seamlessly develop, build, and test cloud-native container applications on various target deployment hardware. 

Try It Now

Use OpenVINO toolkit optimizations with TensorFlow* inference applications across a wide range of compute devices.

Learn More

Intel DevCloud now supports OpenVINO toolkit version 2021.4.1.

Release Notes

Try your own AI workloads or Intel DevCloud sample applications on the newly hosted OnLogic* platforms.

Learn More

Create datasets and explore the performance benefits of converting models, including TensorFlow* models, to INT8. Now includes YOLO* model support.

Try It Now 

Explore constructing media analytics pipelines using Deep Learning Streamer in the Frictionless Retail Sample on Intel DevCloud.

Get Started

Explore one of the free-to-use Kubernetes* and JupyterLab development environments to kick-start building and testing edge software solutions. To learn more and begin your journey, watch the overview video.

Intel DevCloud: Hardware for Edge Workloads

Test your workload performance with combinations of CPUs, GPUs, and accelerators to identify the architecture that works best for your inferencing solutions.