Latest from 2023

Introducing the new cloud version of the Deep Learning Workbench on Intel® Developer Cloud for the Edge, which makes it easy to import, optimize, and analyze the performance of AI models built with the Intel® Distribution of OpenVINO™ toolkit, PyTorch*, TensorFlow*, and ONNX* (Open Neural Network Exchange).
 

Try this intuitive tool in your browser to bring your own AI model and visualize per-layer model performance on the latest Intel® architecture, including 4th generation Intel® Xeon® Scalable processors (formerly code named Sapphire Rapids) and the Intel® Data Center GPU Flex Series, within minutes.

 

Try It Now

Intel® Developer Cloud for the Edge now features the Intel® Distribution of OpenVINO™ toolkit version 2023.2.0 (the latest version). Experience the upgrade in performance on the latest hardware using the Benchmark App.

 

View Release Notes

Intel® Developer Cloud for the Edge now features the Intel® Distribution of OpenVINO™ toolkit version 2023.1.0. Experience the upgrade in performance on the latest hardware using the Benchmark App.

 

Note Deep Learning Workbench is included in version 2022.1 of the toolkit.

View Release Notes

Using the latest OpenVINO™ toolkit release, this sample application demonstrates how a smart video IoT retail solution may be created to perform real-time queue management, using the YOLOv8* medium model for object detection to enhance business productivity.

Try It Now

Intel® Developer Cloud for the Edge now features the Intel® Distribution of OpenVINO™ toolkit version 2022.3.1 LTS. Experience the upgrade in performance on the latest hardware using the Benchmark App.

 

Note Deep Learning Workbench is on version 2022.1 of the toolkit.

View Release Notes

Using the latest OpenVINO™ release, this sample application demonstrates how a smart video IoT solution may be created to perform real-time object detection and instance segmentation with a pretrained YOLOv8* Nano model.

Try It Now

Explore the capabilities of OpenVINO™ for generative AI with Stable Diffusion Jupyter* Notebook tutorials on Intel® Developer Cloud for the Edge.

Text-to-Image Generation with Stable Diffusion v2 and OpenVINO

Infinite Zoom Stable Diffusion v2 and OpenVINO

Image Editing with InstructPix2Pix and OpenVINO

Text-to-Image Generation with ControlNet Conditioning

Intel® Developer Cloud for the Edge now features the Intel® Distribution of OpenVINO™ toolkit version 2023.0. Experience the upgrade in performance on the latest hardware using the Benchmark App.

Note Deep Learning Workbench is on version 2022.1 of the toolkit.

View Release Notes

Seamlessly develop, build, and test cloud-native container applications on various target deployment hardware. Customers in the People's Republic of China (PRC) now have a better user experience with reduced latency for containerized workflows.

Try It Now

Deploy and evaluate your AI workloads on the newly hosted Intel® Core™ i7-13700 with Intel® Arc™ A770 graphics (8 GB).

Intel® Core™ i7-13700 Processor

Intel® Arc™ A770 Graphics (8 GB)

Try It Now with the Benchmark App

Analyze and evaluate updated samples or tutorials with improved AI inference throughput and streamlined code.

Updated samples and tutorials with OpenVINO toolkit 2.0 APIs:

  • Intruder Detection: Demonstrates how a smart video IoT solution may be created using Intel® hardware and software tools to perform intruder detection.
  • Performance Comparison: Demonstrates the impact of data type and hardware selection on model performance.
  • Store Aisle Monitor: Demonstrates how a smart video IoT solution may be created using Intel hardware and software tools to perform store aisle monitoring.
  • Restricted Zone Notifier: Demonstrates how a smart video IoT solution may be created using Intel hardware and software tools to perform restricted zone notification.
  • People Counter System: Demonstrates how a smart video IoT solution may be created using Intel hardware and software tools to perform people counting.

Learn More about OpenVINO Toolkit 2.0 APIs
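For orientation, here is a minimal synchronous-inference sketch using the OpenVINO™ toolkit 2.0 Python API. It is not code from the samples above; the model path and input shape are placeholder assumptions.

```python
# Minimal synchronous-inference sketch with the OpenVINO toolkit 2.0 API.
# "model.xml" and the NCHW input shape are placeholders for your own model.
import numpy as np
import openvino.runtime as ov

core = ov.Core()
model = core.read_model("model.xml")          # IR, ONNX, and other formats are supported
compiled = core.compile_model(model, "CPU")   # or "GPU", "AUTO", etc.

# Random input matching an assumed 1x3x224x224 image shape.
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)

request = compiled.create_infer_request()
request.infer([frame])                        # positional inputs in declared order
result = request.get_output_tensor(0).data
print(result.shape)
```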

Deploy and evaluate your AI workloads on the newly hosted Intel® Core™ i3-12300HL and Intel® Core™ i7-12800HL processors.

Try It Now with the Benchmark App

Deploy and evaluate your AI workloads on the newly hosted Intel® Xeon® Gold 6338N and Intel® Xeon® Gold 6448Y processors.

Try It Now with the Container Playground

Bare Metal Environment: The performance metrics dashboard now adds two new metrics, Total Workload Power Consumption (kW) and Total Workload Energy Consumption (kJ), to the hardware- and application-level metrics. These metrics measure workload power and energy use while an application runs, helping users choose an energy-efficient compute configuration for their solution.

Try It Now

Deploy and evaluate your AI workloads on the newly hosted Intel® Core™ i9-13900TE and Intel® Core™ i7-13700TE processors.

Try It Now with the Benchmark App

Try It Now with the Container Playground

Experience the upgrade in performance on our latest hardware through the Benchmark App.

Note The Deep Learning Workbench can be found in version 2022.1 of the toolkit.

Release Notes

Experience the new Electrocardiogram (ECG) Arrhythmia Prediction application. It uses the latest version of the OpenVINO™ toolkit for inference to predict atrial fibrillation by identifying different time-series ECG rhythm waveforms.

Deploy and evaluate your AI workloads on the newly hosted Intel® Xeon® Platinum 8480+ Processor.

Launch JupyterLab and Try It Now

Deploy and evaluate your AI workloads on the newly hosted Intel® Core™ i5-1250PE processor.

2022 Archive

Learn to use GStreamer* pipelines for complex media analytics tasks such as detection, classification, and tracking using Intel® Deep Learning Streamer 2022.2.

View the Tutorial

Bare Metal Environment: The performance metrics dashboard adds a new metric, Power Usage (Watts)/Time GMT, to the hardware- and application-level metrics. This metric shows CPU power use while the application is running to help users further fine-tune their workload and hardware configuration.

Try It

Analyze and evaluate updated samples or tutorials with improved throughput performance.

Tutorial with Sync OpenVINO 2.0 API:

Samples and tutorials with Async OpenVINO 2.0 APIs:

Learn More about OpenVINO 2.0 APIs
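As a rough illustration of the asynchronous flow such samples use (not their actual code), the sketch below keeps several infer requests in flight with AsyncInferQueue; the model path, input shape, and frame count are placeholder assumptions.

```python
# Hypothetical asynchronous-inference sketch with the OpenVINO 2.0 Python API.
import numpy as np
import openvino.runtime as ov

core = ov.Core()
compiled = core.compile_model(core.read_model("model.xml"), "CPU")  # placeholder model

results = []

def on_done(request, frame_id):
    # Called when a request completes; collect the first output.
    results.append((frame_id, request.get_output_tensor(0).data.copy()))

queue = ov.AsyncInferQueue(compiled, 4)   # four parallel infer requests
queue.set_callback(on_done)

for i in range(16):
    frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed input shape
    queue.start_async([frame], userdata=i)

queue.wait_all()
print(f"Completed {len(results)} inferences")
```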

Deploy and evaluate your AI workloads on the newly hosted Intel® Data Center GPU Flex Series 170. The GPU is optimal for pipelines that include more complex AI models, such as multiple object detection or multiple classification models.

Learn More about Intel® Data Center GPU Flex Series 170

Launch JupyterLab and Try It Now

Learn to deploy an industrial computer vision model to detect real-world analog pointer meters and extract corresponding digital readings using the Intel® Distribution of OpenVINO™ toolkit.

View Industrial Meter Reader

Deploy and evaluate your AI workloads on the newly hosted Intel® Data Center GPU Flex Series 140. With a higher number of video encode and decode engines, this GPU can support a high number of streams for AI inferencing workloads.

Learn More

Intel® Deep Learning Streamer (Intel® DL Streamer) code snippets are small blocks of reusable code that can be inserted in a Jupyter* Notebook for detection, classification, and tracking applications on Intel® Developer Cloud for the Edge workloads using the OpenVINO™ toolkit. Use the tag ‘DL Streamer_LTS’ to filter for the Intel DL Streamer code snippets.
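For context, here is a hypothetical sketch of the kind of detection pipeline such snippets build, launched from Python with GStreamer*; the input file and model names are placeholders and are not taken from the actual snippets.

```python
# Hypothetical Intel DL Streamer detection pipeline launched from Python.
# gvadetect and gvawatermark are DL Streamer GStreamer elements; the
# video file and model path below are placeholders.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline_str = (
    "filesrc location=input.mp4 ! decodebin ! "
    "gvadetect model=person-detection.xml device=CPU ! "
    "gvawatermark ! videoconvert ! fakesink sync=false"
)

pipeline = Gst.parse_launch(pipeline_str)
pipeline.set_state(Gst.State.PLAYING)

# Block until end-of-stream or an error, then shut down.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```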

Learn More

Intel® Developer Cloud for the Edge now features the Intel® Distribution of OpenVINO™ toolkit version 2022.2. 

Note  Deep Learning Workbench is on version 2022.1 of the toolkit. 

View Release Notes

Analyze and evaluate the newly added samples, Image Classification Using ResNet-50 and Sequence Classification Using BERT, which accelerate PyTorch* model inference using OpenVINO™ integration with ONNX* Runtime for PyTorch*.

View Sample

Deploy and evaluate your AI workloads on the newly hosted Intel® Xeon® D-2712T and Intel® Xeon® D-2796NT processors.

Quickly browse, import, or export your AI models and test data from Amazon S3 buckets to optimize and evaluate inference performance on Intel® hardware within Intel® Developer Cloud for the Edge.
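Intel® Developer Cloud for the Edge provides its own import and export flow, but as a rough illustration of the general idea, a model can also be pulled from an S3 bucket programmatically with boto3; the bucket and object names below are hypothetical.

```python
# Illustrative sketch only: fetching a model from Amazon S3 before
# evaluating it on Intel hardware. Bucket and key names are hypothetical.
import boto3

s3 = boto3.client("s3")  # credentials come from the usual AWS configuration
s3.download_file(
    Bucket="my-model-bucket",         # hypothetical bucket
    Key="models/resnet50/model.xml",  # hypothetical object key
    Filename="model.xml",
)
print("Model downloaded; ready for benchmarking on Intel hardware")
```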

Learn More

The tutorial notebook has reusable code snippets to help you quickly create OpenVINO™ applications.

Try It Out

Try your own AI workloads on the newly hosted Intel® Core™ i7 and i9 processors.

Accelerate PyTorch* model inference by using OpenVINO™ integration with ONNX* Runtime for PyTorch* as an execution provider to run on Intel® hardware.
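As a sketch of the general pattern (assuming the onnxruntime-openvino package is installed; the tiny stand-in network and file name are placeholders, not the sample code), a PyTorch model can be exported to ONNX and run with the OpenVINO execution provider:

```python
# Export a small PyTorch model to ONNX, then run it through ONNX Runtime
# with the OpenVINO execution provider (CPU provider listed as fallback).
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

# Tiny stand-in network; substitute your own PyTorch model.
net = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 64 * 64, 10),
).eval()
dummy = torch.randn(1, 3, 64, 64)
torch.onnx.export(net, dummy, "model.onnx", input_names=["input"], output_names=["logits"])

session = ort.InferenceSession(
    "model.onnx",
    providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
)
outputs = session.run(None, {"input": dummy.numpy()})
print(outputs[0].shape)  # (1, 10)
```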

Learn More

This solution uses the Intel® Deep Learning Streamer and Intel® Distribution of OpenVINO™ toolkit to identify people and determine social distance in crowded public spaces. 

Learn More

Try your own AI workloads on the newly hosted Intel® Core™ i5 and i7 processors.

Learn the latest inference API for automatic device selection and performance hints to dynamically optimize for latency or throughput.
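As a rough illustration (with a placeholder model path, not code from the sample), automatic device selection and a performance hint can be requested when compiling a model with the OpenVINO™ inference API:

```python
# Automatic device selection with a performance hint, OpenVINO 2.0 API.
# "model.xml" is a placeholder path.
import openvino.runtime as ov

core = ov.Core()
model = core.read_model("model.xml")

# "AUTO" lets the runtime pick the best available device; the hint tells it
# whether to tune for many parallel requests or for single-request latency.
compiled = core.compile_model(
    model,
    "AUTO",
    {"PERFORMANCE_HINT": "THROUGHPUT"},  # or "LATENCY"
)
print(compiled)
```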

View Sample

New notebooks demonstrate brain tumor segmentation, classification, object detection, and style transfer, with French and German language versions.

This incremental update includes support for text classification models, OpenVINO™ API 2.0 enabled in tools, the Cityscapes dataset, and initial support for natural language processing (NLP) models.

Learn More 

Code snippets are small blocks of reusable code that can be inserted in a Jupyter* Notebook to aid and accelerate coding of sample applications on Intel® Developer Cloud for the Edge workloads using the OpenVINO™ toolkit.

Launch JupyterLab and Try It Now

Now you can import and securely launch Helm* charts, Docker* Compose images, and containers on Intel® hardware in the new Kubernetes* environment. Explore, edit, and test OpenVINO™ toolkit sample applications with JupyterLab, and optimize AI models with the improved Deep Learning Workbench.

Intel® Developer Cloud for the Edge has been updated with version 2022.1 of the OpenVINO™ toolkit.

Release Notes

Try your own AI workloads or Intel® Developer Cloud for the Edge sample applications on the newly hosted Pentium® processors.

This release adds support for Intel® Distribution of OpenVINO™ toolkit version 2021.4.2, and contains minor bug fixes and usability enhancements.

Get Started  

This latest version lets you do out-of-the-box benchmarking, measure the accuracy of your model, perform a comprehensive accuracy comparison between floating-point and INT8 models, and more.

Get Started

2021 Archive

Provides an enhanced experience when working with sample, tutorial and prototype notebooks.

Intel® Developer Cloud for the Edge now features the Intel Distribution of OpenVINO toolkit version 2021.4.2.

Release Notes

This incremental update includes support for explainable AI for classification model types, visualization of inference results, streamlined INT8 calibration flow, and multiple UX improvements. 

Get Started

Seamlessly develop, build, and test cloud-native container applications on various target deployment hardware. 

Try It Now

Use OpenVINO toolkit optimizations with TensorFlow* inference applications across a wide range of compute devices.
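As an illustrative sketch only (assuming the openvino-tensorflow add-on package and its set_backend() helper; the Keras model here is a stand-in, not sample code), an ordinary TensorFlow application can pick up OpenVINO optimizations with a couple of extra lines:

```python
# Run an unmodified Keras model with the OpenVINO integration for TensorFlow.
# Assumes the openvino-tensorflow package is installed; the model is a stand-in.
import numpy as np
import tensorflow as tf
import openvino_tensorflow as ovtf

ovtf.set_backend("CPU")  # other backends (e.g. "GPU") depend on available hardware

model = tf.keras.applications.MobileNetV2(weights=None)
frame = np.random.rand(1, 224, 224, 3).astype(np.float32)
print(model.predict(frame).shape)
```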

Learn More

Intel Developer Cloud for the Edge now supports OpenVINO toolkit version 2021.4.1.

Release Notes

Try your own AI workloads or Intel Developer Cloud for the Edge sample applications on the newly hosted OnLogic* platforms.

Learn More

Create datasets and explore the performance benefits of converting models, including TensorFlow models, to INT8. Now includes YOLO* model support.

Try It Now 

Explore constructing media analytics pipelines using Deep Learning Streamer in the Frictionless Retail Sample on Intel Developer Cloud for the Edge.