Intel® DevCloud: Edge Workloads
This solution uses the Intel® Deep Learning Streamer and Intel® Distribution of OpenVINO™ toolkit to identify people and determine social distance in crowded public spaces.
July 19, 2022 - 12th Generation Intel® Core™ Processor (Formerly Code Named Alder Lake) Systems Now Available on JupyterLab
Try your own AI workloads on the newly hosted Intel Core i5 and i7 processor systems.
Learn the latest inference API, which offers automatic device selection and performance hints that dynamically optimize for latency or throughput.
This incremental update includes support for text classification models, OpenVINO™ API 2.0 enabled in tools, Cityscapes Dataset, and initial support for natural language processing (NLP) models.
Code snippets are small blocks of reusable code that can be inserted into a Jupyter* Notebook to speed up development of OpenVINO™ toolkit sample applications on Intel® DevCloud for edge workloads.
Now you can import and securely launch Helm* charts, Docker* Compose images, and containers on Intel® hardware in the new Kubernetes* environment. Explore, edit, and test OpenVINO™ toolkit sample applications with JupyterLab, and optimize AI models with the improved Deep Learning Workbench.
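As a hedged illustration of that workflow, deploying a containerized sample into the hosted Kubernetes* environment follows the standard Helm* commands; the chart and release names below are placeholders, not actual DevCloud artifacts.

```shell
# Install a chart into the hosted Kubernetes* cluster.
# "my-sample" and "./my-sample-chart" are placeholder names.
helm install my-sample ./my-sample-chart

# Verify that the release's pods are running.
kubectl get pods -l app.kubernetes.io/instance=my-sample
```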
Intel® DevCloud has been updated with version 2022.1 of the OpenVINO toolkit.
This release adds support for Intel® Distribution of OpenVINO™ toolkit version 2021.4.2, and contains minor bug fixes and usability enhancements.
This latest version lets you run out-of-the-box benchmarks, measure the accuracy of your model, comprehensively compare the accuracy of floating-point and INT8 models, and more.
Provides an enhanced experience when working with sample, tutorial, and prototype notebooks.
Intel® DevCloud now features the Intel Distribution of OpenVINO toolkit version 2021.4.2.
This incremental update includes support for explainable AI for classification model types, visualization of inference results, streamlined INT8 calibration flow, and multiple UX improvements.
Seamlessly develop, build, and test cloud-native container applications on various target deployment hardware.
Use OpenVINO toolkit optimizations with TensorFlow* inference applications across a wide range of compute devices.
Intel DevCloud now supports OpenVINO toolkit version 2021.4.1.
July 27, 2021 - Deep Learning Workbench for Intel Distribution of OpenVINO Toolkit Version 2021.4 Now Available
Create datasets and explore the performance benefits of converting models to INT8. Now includes TensorFlow* and YOLO* model support.
Explore one of the free-to-use Kubernetes* and JupyterLab development environments to kick-start building and testing edge software solutions. To learn more and begin your journey, watch the overview video.
Intel DevCloud: Hardware for Edge Workloads
Test your workload performance with combinations of CPUs, GPUs, and accelerators to identify the architecture that works best for your inferencing solutions.
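On Intel DevCloud, batch jobs are dispatched to specific hardware through the PBS `qsub` command, so comparing architectures amounts to submitting the same script to different node groups. A sketch under that assumption; the node property shown is hypothetical, so list the real ones first.

```shell
#!/bin/bash
# Show which node groups (CPU/GPU/accelerator combinations) are available.
pbsnodes | grep -o 'properties = .*' | sort -u

# Submit the same benchmark script to a specific hardware target.
# "idc001skl" is a hypothetical node property; substitute one from the list above.
qsub -l nodes=1:idc001skl benchmark_job.sh
```

Running the identical job script against several node properties and comparing the logged latency or throughput numbers is the usual way to identify the best-fitting architecture.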