Intel® Developer Cloud for the Edge
Access Everything You Need to Build for the Edge. Anywhere.
The Intel® Developer Cloud for the Edge is designed to help you evaluate, benchmark, and prototype AI and edge solutions on Intel® hardware for free. Developers can get started at any stage of edge development.
- Research problems or ideas with the help of tutorials and reference implementations.
- Optimize your deep learning model performance on various Intel hardware configurations.
- Import or develop your edge inference applications on bare metal.
- Import or build multicontainer solutions for your cloud-native applications.
To get started, choose the environment or solution that best meets your goals.
Bare Metal Development
Develop or import your computer vision or edge AI application using a JupyterLab environment on a bare metal infrastructure.
- Preinstalled with the Intel® Distribution of OpenVINO™ toolkit
- Rapid prototyping with code snippets
- Reusable references
- Python* debugger
Containerized Development
Build your application in a container or import your existing containerized workloads to test on a wide range of Intel hardware.
- Launch containerized workloads on Intel hardware using Kubernetes*.
- Import from a prebuilt library, container registry, or Helm* chart.
- Connect to cloud storage.
- Import sources from Git and build container images.
Announcements
January 24, 2023 - 4th Gen Intel® Xeon® Scalable Processor (Formerly Code Named Sapphire Rapids) Is Now Available on JupyterLab
Deploy and evaluate your AI workloads on the newly hosted Intel® Xeon® Platinum 8480+ Processor.
Launch JupyterLab and Try It Now
January 05, 2023 - 12th Generation Intel® Core™ Processor (Formerly Code Named Alder Lake) Is Now Available on JupyterLab
Deploy and evaluate your AI workloads on the newly hosted Intel® Core™ i5-1250PE processor.
December 22, 2022 - Intel® Deep Learning Streamer 2022.2 Is Now Supported on JupyterLab
Learn to use GStreamer* pipelines for complex media analytics tasks such as detection, classification, and tracking using Intel® Deep Learning Streamer 2022.2.
View the Tutorial
December 21, 2022 - Intel® Developer Cloud for the Edge Now Features New Telemetry Metric on Power Usage
Bare Metal Environment: The performance metrics dashboard adds a new metric, Power Usage (Watts)/Time GMT, to the hardware-level and application-level metrics. This metric shows CPU power use while the application is running, helping users further fine-tune their workload and hardware configuration.
Try It
December 02, 2022 - Intel® Developer Cloud for the Edge Now Features Samples with OpenVINO™ 2.0 Sync and Async APIs
Analyze and evaluate updated samples and tutorials with improved throughput performance; a brief sketch of the two flows follows below.
Tutorial with Sync OpenVINO 2.0 API:
Samples and tutorials with Async OpenVINO 2.0 APIs:
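As a rough illustration of the difference between the two flows (not one of the hosted samples themselves), the sketch below compares a blocking call with an AsyncInferQueue-based pipeline using the OpenVINO 2.0 Python API; the model path, input shape, and number of parallel jobs are placeholder assumptions.

```python
# Minimal sketch (not a hosted sample): sync vs. async inference with the
# OpenVINO 2.0 Python API. "model.xml" and the zero-filled input are
# placeholders for your own IR model and preprocessed data.
import numpy as np
from openvino.runtime import AsyncInferQueue, Core

core = Core()
compiled = core.compile_model(core.read_model("model.xml"), "CPU")
frame = np.zeros(list(compiled.input(0).shape), dtype=np.float32)

# Sync API: one blocking call per input.
sync_result = compiled([frame])[compiled.output(0)]

# Async API: keep several requests in flight and collect results in a callback.
results = []
queue = AsyncInferQueue(compiled, jobs=4)
queue.set_callback(
    lambda request, userdata: results.append(request.get_output_tensor(0).data.copy())
)
for i in range(8):
    queue.start_async({0: frame}, userdata=i)
queue.wait_all()
```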
November 22, 2022 - Intel® Data Center GPU Flex Series 170 (Formerly Code Named Arctic Sound) Now Available on JupyterLab
Deploy and evaluate your AI workloads on the newly hosted Intel® Data Center GPU Flex Series 170. The GPU is optimal for pipelines that include more complex AI models, such as multiple object detection or multiple classification models.
Learn More about Intel® Data Center GPU Flex Series 170
Launch JupyterLab and Try It Now
November 15, 2022 - New Industrial Meter Reader Hands-On Module on Intel® Edge AI Certification Program
Learn to deploy an industrial computer vision model to detect real-world analog pointer meters and extract corresponding digital readings using the Intel® Distribution of OpenVINO™ toolkit.
November 7, 2022 - Intel® Data Center GPU Flex Series 140 (Formerly Code Named Arctic Sound) Now Available on JupyterLab
Deploy and evaluate your AI workloads on the newly hosted Intel® Data Center GPU Flex Series 140. With a higher number of video encode and decode engines, this GPU can support a high number of streams for AI inferencing workloads.
October 27, 2022 - Introducing Intel® Deep Learning Streamer Code Snippets for JupyterLab
Intel® Deep Learning Streamer (Intel® DL Streamer) code snippets are small blocks of reusable code that can be inserted in a Jupyter* Notebook for detection, classification, and tracking applications on Intel® Developer Cloud for the Edge workloads using the OpenVINO™ toolkit. Use the tag 'DL Streamer_LTS' to filter for the Intel DL Streamer code snippets.
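For context, a minimal sketch of the kind of detect-and-classify pipeline those snippets assemble is shown below, launched from Python as you might in a notebook cell; the video file and IR model paths are placeholder assumptions rather than names used by the hosted snippets.

```python
# Minimal sketch of an Intel DL Streamer detect-and-classify pipeline launched
# from a notebook cell. All file paths are placeholders; substitute your own
# video and OpenVINO IR models.
import shlex
import subprocess

pipeline = (
    "gst-launch-1.0 filesrc location=input.mp4 ! decodebin ! "
    "gvadetect model=detection-model.xml device=CPU ! "
    "gvaclassify model=classification-model.xml device=CPU ! "
    "gvafpscounter ! fakesink sync=false"
)
subprocess.run(shlex.split(pipeline), check=True)
```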
September 26, 2022 - Intel® Distribution of the OpenVINO™ Toolkit Version 2022.2
Intel® Developer Cloud for the Edge now features the Intel® Distribution of OpenVINO™ toolkit version 2022.2.
Note: Deep Learning Workbench is on version 2022.1 of the toolkit.
September 22, 2022 - OpenVINO™ Integration with Torch-ORT [Beta] Release: New Samples Added
Analyze and evaluate the newly added samples, Image Classification Using ResNet-50 and Sequence Classification Using BERT, which accelerate PyTorch* model inference using OpenVINO™ integration with ONNX* Runtime for PyTorch*.
September 20, 2022 - Intel® Xeon® D Processors (Formerly Code Named Ice Lake) Are Now Available on JupyterLab
Deploy and evaluate your AI workloads on the newly hosted Intel® Xeon® D-2712T and Intel Xeon D-2796NT processors.
September 7, 2022 - Introducing Cloud Storage Connector with Amazon* S3 Support for JupyterLab
Quickly browse, import, or export your AI models and test data from Amazon S3 buckets to optimize and evaluate inference performance on Intel® hardware within Intel® Developer Cloud for the Edge.
September 7, 2022 - New Code Snippet Tutorial
The tutorial notebook has reusable code snippets to help you quickly create OpenVINO™ applications.
September 7, 2022 - 11th Generation Intel Core Processors (Formerly Code Named Tiger Lake) Are Now Available on JupyterLab
Try your own AI workloads on the newly hosted Intel Core i7 and i9 processors.
August 22, 2022 - OpenVINO™ Integration with Torch-ORT [Beta] Release
Accelerate PyTorch* model inference by using OpenVINO™ integration with ONNX* Runtime for PyTorch* as a runtime provider on Intel® hardware.
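A minimal sketch of the wrapping step, assuming the torch-ort-infer package and its OpenVINOProviderOptions interface, is shown below; the ResNet-50 model and random input are illustrative placeholders only.

```python
# Minimal sketch, assuming the torch-ort-infer package: wrap a PyTorch model so
# inference runs through ONNX Runtime with the OpenVINO execution provider.
# The ResNet-50 model and random input are illustrative placeholders.
import torch
from torchvision import models
from torch_ort import ORTInferenceModule, OpenVINOProviderOptions

model = models.resnet50().eval()
provider_options = OpenVINOProviderOptions(backend="CPU", precision="FP32")
model = ORTInferenceModule(model, provider_options=provider_options)

with torch.no_grad():
    output = model(torch.randn(1, 3, 224, 224))  # dummy batch of one image
print(output.shape)
```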
August 11, 2022 - Experience & Learn How to Create a Multistreaming Video Inference Solution
This solution uses the Intel® Deep Learning Streamer and Intel® Distribution of OpenVINO™ toolkit to identify people and determine social distance in crowded public spaces.
July 19, 2022 - 12th Generation Intel Core Processors (Formerly Code Named Alder Lake) Are Now Available on JupyterLab
Try your own AI workloads on the newly hosted Intel Core i5 and i7 processors.
July 7, 2022 - New Automatic Plug-in Sample for JupyterLab
Learn the latest inference API for automatic device selection, and use performance hints to dynamically optimize for latency or throughput.
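As a rough sketch of that pattern (not the hosted sample itself), the snippet below compiles a model on the AUTO device with a throughput hint using the OpenVINO 2.0 Python API; the IR path and zero-filled input are placeholders.

```python
# Minimal sketch: let the AUTO device plug-in pick the hardware and use a
# performance hint to steer the configuration. "model.xml" is a placeholder.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")
compiled = core.compile_model(
    model,
    "AUTO",                              # automatic device selection
    {"PERFORMANCE_HINT": "THROUGHPUT"},  # or "LATENCY" for low-latency apps
)
dummy = np.zeros(list(compiled.input(0).shape), dtype=np.float32)
result = compiled([dummy])[compiled.output(0)]
```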
June 24, 2022 - Deep Learning Workbench Version 2022.1 Now Available
This incremental update includes support for text classification models, OpenVINO™ API 2.0 enabled in tools, Cityscapes Dataset, and initial support for natural language processing (NLP) models.
May 26, 2022 - Announcing New Code Snippets
Code snippets are small blocks of reusable code that can be inserted in a Jupyter* Notebook to aid and accelerate coding of sample applications on Intel® Developer Cloud for the Edge workloads using the OpenVINO™ toolkit.
April 13, 2022 - Announcing the Intel® Developer Cloud for the Edge First Quarter 2022 Release
Now you can import and securely launch Helm* charts, Docker* Compose images, and containers on Intel® hardware in the new Kubernetes* environment. Explore, edit, and test OpenVINO™ toolkit sample applications with JupyterLab, and optimize AI models with the improved Deep Learning Workbench.
April 13, 2022 - New Sample Applications Released for JupyterLab
These samples demonstrate Noise Suppression, Time-Series Forecasting, Object Detection with Tiny YOLO* V4, and Neural Network Compression.
March 18, 2022 - Intel® Distribution of the OpenVINO™ Toolkit Version 2022.1
Intel® Developer Cloud for the Edge has been updated with version 2022.1 of the OpenVINO™ toolkit.
March 11, 2022 - Ultra-Low-Power Pentium® Processors (Formerly Code Named Elkhart Lake) Are Now Available
Try your own AI workloads or Intel® Developer Cloud for the Edge sample applications on the newly hosted Pentium® processors.
March 3, 2022 - Container Playground Update Release
This release adds support for Intel® Distribution of OpenVINO™ toolkit version 2021.4.2, and contains minor bug fixes and usability enhancements.
January 18, 2022 - Deep Learning Workbench Version 2021.4.2 Now Available
This latest version allows you to do out-of-the-box benchmarking, measure the accuracy of your model, perform a comprehensive comparison of accuracy between floating-point and INT8 models, and more.
November 29, 2021 - JupyterLab Is Now Integrated with Intel® Developer Cloud for the Edge
JupyterLab provides an enhanced experience when working with sample, tutorial, and prototype notebooks.
November 16, 2021 - Intel Distribution of the OpenVINO Toolkit Version 2021.4.2
Intel® Developer Cloud for the Edge now features the Intel Distribution of OpenVINO toolkit version 2021.4.2.
November 9, 2021 - Deep Learning Workbench Version 2021.4.1 Now Available
This incremental update includes support for explainable AI for classification model types, visualization of inference results, streamlined INT8 calibration flow, and multiple UX improvements.
November 9, 2021 - Intel Developer Cloud for the Edge Now Supports Container Playground (Beta)
Seamlessly develop, build, and test cloud-native container applications on various target deployment hardware.
September 23, 2021 - Announcing OpenVINO™ Integration with TensorFlow*
Use OpenVINO toolkit optimizations with TensorFlow* inference applications across a wide range of compute devices.
September 10, 2021 - Updated Support for the Intel Distribution of OpenVINO Toolkit
Intel Developer Cloud for the Edge now supports OpenVINO toolkit version 2021.4.1.
August 18, 2021 - OnLogic* Systems Now Available
Try your own AI workloads or Intel Developer Cloud for the Edge sample applications on the newly hosted OnLogic* platforms.
July 27, 2021 - Deep Learning Workbench for Intel Distribution of OpenVINO Toolkit Version 2021.4 Now Available
Create datasets and explore the performance benefits of converting models to INT8. Now includes TensorFlow* and YOLO* model support.
July 21, 2021 - New Frictionless Retail Sample
Explore constructing media analytics pipelines using Deep Learning Streamer in the Frictionless Retail Sample on Intel Developer Cloud for the Edge.
Additional Developer Solutions
Edge for Industrial
Speed up development of industrial solutions with prevalidated software packages for machine learning, data analytics, and software-defined control at the edge.
OpenVINO™ Integration with TensorFlow*
This solution is designed for developers who want to use built-in TensorFlow* models in their OpenVINO™ toolkit inferencing applications. In most cases, OpenVINO™ integration with TensorFlow* can be accomplished with just two lines of code.
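Roughly speaking, those two lines are an import and a backend selection, as in the hedged sketch below; the MobileNetV2 model and zero-filled input are placeholders for illustration.

```python
# Minimal sketch: the "two lines" are the import and the backend selection.
# The MobileNetV2 model and zero-filled input are placeholders for illustration.
import numpy as np
import tensorflow as tf

import openvino_tensorflow                  # line 1: enable the integration
openvino_tensorflow.set_backend("CPU")      # line 2: choose the Intel backend

model = tf.keras.applications.MobileNetV2(weights=None)
predictions = model(np.zeros((1, 224, 224, 3), dtype=np.float32))
print(predictions.shape)
```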
Deep Learning Workbench
The Deep Learning Workbench simplifies using the Intel Distribution of OpenVINO toolkit to tune, visualize, and compare the performance of deep learning models on Intel® architecture.
Post-Training Optimization Tool
Accelerate the inference of deep learning models without retraining. The Post-Training Optimization Tool (POT) offers a suite of tools that let you automate the model transformation process without changing model structure.
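As an illustration, a minimal default-quantization flow with the POT Python API might look like the sketch below; the model paths, calibration data, and algorithm parameters are placeholder assumptions, and the exact DataLoader return convention can vary between toolkit versions.

```python
# Minimal sketch, assuming the POT Python API (openvino.tools.pot): quantize an
# IR model to INT8 without retraining. Model paths, calibration data, and the
# DataLoader return convention are placeholder assumptions.
import numpy as np
from openvino.tools.pot import (
    DataLoader, IEEngine, compress_model_weights, create_pipeline,
    load_model, save_model,
)

class CalibrationLoader(DataLoader):
    """Feeds calibration samples; DefaultQuantization needs no annotations."""
    def __init__(self, samples):
        self._samples = samples
    def __len__(self):
        return len(self._samples)
    def __getitem__(self, index):
        return self._samples[index], None  # (data, annotation)

samples = [np.zeros((1, 3, 224, 224), dtype=np.float32) for _ in range(300)]  # stand-in data
model = load_model({"model_name": "my_model",  # placeholder model files
                    "model": "my_model.xml",
                    "weights": "my_model.bin"})
engine = IEEngine(config={"device": "CPU"}, data_loader=CalibrationLoader(samples))
algorithms = [{
    "name": "DefaultQuantization",
    "params": {"target_device": "ANY", "preset": "performance", "stat_subset_size": 300},
}]
quantized = create_pipeline(algorithms, engine).run(model)
compress_model_weights(quantized)  # shrink the resulting weights file
save_model(quantized, save_path="./int8_model")
```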
Hardware Available for Edge Workloads
Test your workload performance with combinations of CPUs, GPUs, and accelerators in bare metal and cloud-native container architectures to identify the hardware platform that works best for your inferencing solutions.
Move from Prototype to Production
Take your product from prototype to production with downloadable solutions from the Intel® Developer Catalog, and procure hardware from Intel's ecosystem of hardware partners.
Development Kits
Jump-start your application development with the latest foundational kits with an option to include an Intel® Vision Accelerator Design product.
Intel® Developer Catalog
This development resource offers computer vision and deep learning solutions that you can try in the Container Playground, which uses the Red Hat OpenShift platform, and then install and deploy to the network edge.
Ecosystem Partners
Run your workloads or run a sample from Intel on partner hardware hosted within Intel Developer Cloud for the Edge.