Intel® DevCloud: Edge Workloads
Announcements
August 11, 2022 - Experience & Learn How to Create a Multistreaming Video Inference Solution
This solution uses the Intel® Deep Learning Streamer and Intel® Distribution of OpenVINO™ toolkit to identify people and determine social distance in crowded public spaces.
July 19, 2022 - 12th Generation Intel® Core™ Processor (Formerly Code Named Alder Lake) Systems Now Available on JupyterLab
Try your own AI workloads on the newly hosted Intel Core i5 and i7 processor systems.
July 7, 2022 - New Automatic Plug-in Sample for JupyterLab
Learn the latest inference API for automatic device selection and performance hints to dynamically optimize for latency or throughput.
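As a minimal sketch of what the sample covers (the model path and hint value here are placeholders, not taken from the sample itself), the OpenVINO™ API 2.0 lets you compile a model for the "AUTO" virtual device and pass a performance hint:

```python
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")  # placeholder IR model path

# "AUTO" lets the runtime pick the best available device on the node;
# the hint steers scheduling toward low latency ("LATENCY") or high
# throughput ("THROUGHPUT") without changing the application code.
compiled_model = core.compile_model(model, "AUTO", {"PERFORMANCE_HINT": "LATENCY"})
```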
June 24, 2022 — Deep Learning Workbench Version 2022.1 Now Available
This incremental update includes support for text classification models, OpenVINO™ API 2.0 enabled in tools, Cityscapes Dataset, and initial support for natural language processing (NLP) models.
May 26, 2022 – Announcing New Code Snippets
Code snippets are small blocks of reusable code that can be inserted in a Jupyter* Notebook to aid and accelerate coding of sample applications on Intel® DevCloud for edge workloads using the OpenVINO™ toolkit.
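For illustration only (this is not one of the shipped snippets), a snippet of this kind can be as small as a few lines that query which inference devices the node exposes through the OpenVINO™ runtime:

```python
from openvino.runtime import Core

# List the inference devices (CPU, GPU, VPU, ...) visible to the
# OpenVINO runtime on the current DevCloud node.
core = Core()
print(core.available_devices)
```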
April 13, 2022 – Announcing the Intel® DevCloud First Quarter 2022 Release
Now you can import and securely launch Helm* charts, Docker* Compose images, and containers on Intel® hardware in the new Kubernetes* environment. Explore, edit, and test OpenVINO™ toolkit sample applications with JupyterLab, and optimize AI models with the improved Deep Learning Workbench.
April 13, 2022 - New Sample Applications Released for JupyterLab
These samples demonstrate Noise Suppression, Time-Series Forecasting, Object Detection with Tiny YOLO* V4, and Neural Network Compression.
March 18, 2022 - Intel® Distribution of the OpenVINO™ Toolkit Version 2022.1
Intel® DevCloud has been updated with version 2022.1 of the OpenVINO toolkit.
March 11, 2022 - Ultra-Low-Power Pentium® Processor (formerly code named Elkhart Lake) Systems Now Available
Try your own AI workloads or Intel® DevCloud sample applications on the newly hosted Pentium® processor systems.
March 3, 2022 - Container Playground Update Release
This release adds support for Intel® Distribution of OpenVINO™ toolkit version 2021.4.2, and contains minor bug fixes and usability enhancements.
January 18, 2022 - Deep Learning Workbench Version 2021.4.2 Now Available
This latest version lets you run out-of-the-box benchmarking, measure the accuracy of your model, perform a comprehensive accuracy comparison between floating-point and INT8 models, and more.
November 29, 2021 - JupyterLab Is Now Integrated with Intel® DevCloud
Provides an enhanced experience when working with sample, tutorial, and prototype notebooks.
November 16, 2021 - Intel Distribution of the OpenVINO Toolkit Version 2021.4.2
Intel® DevCloud now features the Intel Distribution of the OpenVINO toolkit version 2021.4.2.
November 9, 2021 - Deep Learning Workbench Version 2021.4.1 Now Available
This incremental update includes support for explainable AI for classification model types, visualization of inference results, streamlined INT8 calibration flow, and multiple UX improvements.
November 9, 2021 - Intel DevCloud Now Supports Container Playground (Beta)
Seamlessly develop, build, and test cloud-native container applications on various target deployment hardware.
September 23, 2021 - Announcing OpenVINO™ Integration with TensorFlow*
Use OpenVINO toolkit optimizations with TensorFlow* inference applications across a wide range of compute devices.
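As a minimal sketch of the integration (assuming the openvino-tensorflow add-on is installed; the Keras model here is just an example workload), the add-on exposes a backend switch while the rest of the TensorFlow* code stays unchanged:

```python
import tensorflow as tf
import openvino_tensorflow as ovtf

# Route supported TensorFlow subgraphs through the OpenVINO backend.
# "CPU" can be swapped for another backend such as "GPU" where available.
ovtf.set_backend("CPU")

# Standard Keras model and inference call; OpenVINO acceleration is
# applied transparently to the operators it supports.
model = tf.keras.applications.MobileNetV2(weights="imagenet")
predictions = model(tf.random.uniform((1, 224, 224, 3)))
```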
September 10, 2021 - Get Support for the Intel Distribution of OpenVINO Toolkit
Intel DevCloud now supports OpenVINO toolkit version 2021.4.1.
August 18, 2021 - OnLogic* Systems Now Available
July 27, 2021 - Deep Learning Workbench for Intel Distribution of OpenVINO Toolkit Version 2021.4 Now Available
Create datasets and explore the performance benefits of converting models to INT8. Now includes TensorFlow* and YOLO* model support.
July 21, 2021 - New Frictionless Retail Sample
Explore constructing media analytics pipelines using Deep Learning Streamer in the Frictionless Retail Sample on Intel DevCloud.
Get Started
Explore one of the free-to-use Kubernetes* and JupyterLab development environments to kick-start building and testing edge software solutions. To learn more and begin your journey, watch the overview video.
Experiment With Containerized Workloads
Import from a prebuilt library, container registry, or Helm* chart and launch on a wide range of Intel® architectures in a Kubernetes* environment. Build and run containers from Git repositories.
Learn, Prototype, and Optimize with JupyterLab
Develop and test edge AI solutions on Intel architecture. Access Python* and C++ tutorials and sample applications.
Get Started with JupyterLab
View Tutorials
Find Sample Applications
AI Sample Applications
Find examples specific to your market needs and begin customizing, optimizing, and benchmarking using Intel® hardware.
Intel DevCloud: Hardware for Edge Workloads
Test your workload performance with combinations of CPUs, GPUs, and accelerators to identify the architecture that works best for your inference solutions; a short sketch of this kind of comparison follows the lists below.
CPUs
Intel® Core™ i5, Intel® Core™ i7, and Intel® Core™ i9 processors
Intel Atom® processors
Intel® Xeon® processors
Pentium® processors
GPUs
Intel® HD Graphics
Intel® UHD Graphics
Intel® Iris® Plus Graphics
Accelerators
Intel® Movidius™ Myriad™ X VPU
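As a rough sketch of such a comparison (the model path, input shape, and single-shot timing are assumptions for illustration, not a rigorous benchmark), you can loop over whatever devices a node exposes through the OpenVINO™ runtime:

```python
from time import perf_counter

import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")            # placeholder IR model path
dummy = np.zeros((1, 3, 224, 224), np.float32)  # assumed NCHW input shape

# Time one synchronous inference on each device the node exposes
# (e.g. CPU, GPU, MYRIAD) as a first-pass comparison point.
for device in core.available_devices:
    compiled = core.compile_model(model, device)
    request = compiled.create_infer_request()
    start = perf_counter()
    request.infer({0: dummy})
    print(f"{device}: {(perf_counter() - start) * 1000:.1f} ms")
```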