Intel® Developer Cloud for the Edge
Bare Metal Development
Develop or import your computer vision or edge AI application using a JupyterLab environment on bare metal infrastructure.
- Preinstalled with the Intel® Distribution of OpenVINO™ toolkit
- Rapid prototyping with code snippets
- Reusable references
- Python* debugger
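For quick orientation, here is a minimal sketch, assuming the preinstalled OpenVINO™ 2022.x Python API, of a first notebook cell that lists the inference devices visible on the node:

```python
# Minimal first-cell sketch: enumerate the inference devices OpenVINO can
# see on this bare metal node (OpenVINO 2022.x Python API).
from openvino.runtime import Core

core = Core()
for device in core.available_devices:
    # FULL_DEVICE_NAME is a human-readable description, e.g. the CPU model.
    print(device, core.get_property(device, "FULL_DEVICE_NAME"))
```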
Build your application in a container or import your existing containerized workloads to test on a wide range of Intel hardware.
- Launch containerized workloads on Intel hardware using Kubernetes*.
- Import from a prebuilt library, container registry, or Helm* chart.
- Connect to cloud storage.
- Import sources from Git and build container images.
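As an illustration, a sketch using the official Kubernetes Python client to verify that an imported workload is running; the namespace and label selector are hypothetical:

```python
# Hypothetical check that an imported containerized workload is up,
# using the official Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()  # read cluster credentials from kubeconfig
v1 = client.CoreV1Api()
pods = v1.list_namespaced_pod("default", label_selector="app=my-workload")
for pod in pods.items:
    print(pod.metadata.name, pod.status.phase)
```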
Bare Metal Environment: The performance metrics dashboard adds two new metrics, Total Workload Power Consumption (kW) and Total Workload Energy Consumption (kJ), to the hardware- and application-level metrics. These metrics measure workload power and energy use while an application runs, helping users choose energy-efficient compute for their solution.
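The two metrics are related by a simple integral: energy is power accumulated over time (1 kW sustained for 1 s is 1 kJ). A sketch with hypothetical sampled readings:

```python
# Illustrative only: reconstruct workload energy (kJ) from power samples (kW).
power_kw = [0.95, 1.10, 1.05, 1.20, 0.90]  # hypothetical readings
interval_s = 10                            # seconds between samples

# Trapezoidal integration: kW x s = kJ.
energy_kj = sum((a + b) / 2 * interval_s for a, b in zip(power_kw, power_kw[1:]))
print(f"Total workload energy: {energy_kj:.1f} kJ")
```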
Deploy and evaluate your AI workloads on the newly hosted Intel® Core™ i9-13900TE and i7-13700TE processors.
Try It Now with the Benchmark App
Experience the upgrade in performance on our latest hardware through the Benchmark App.
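The Benchmark App ships with the toolkit; as a rough hand-rolled analogue, this sketch times synchronous inference with the OpenVINO™ Python API (the model path is a placeholder):

```python
# Rough latency measurement; the bundled Benchmark App does this far more
# thoroughly. "model.xml" is a placeholder for your IR model.
import time
import numpy as np
from openvino.runtime import Core

core = Core()
compiled = core.compile_model("model.xml", device_name="CPU")
request = compiled.create_infer_request()
dummy = np.zeros(compiled.input(0).shape, dtype=np.float32)

start = time.perf_counter()
iterations = 100
for _ in range(iterations):
    request.infer({0: dummy})
elapsed = time.perf_counter() - start
print(f"Average latency: {elapsed / iterations * 1000:.2f} ms")
```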
Note: The Deep Learning Workbench is available in version 2022.1 of the toolkit.
Experience the new Electrocardiogram (ECG) Arrhythmia Prediction application. It uses the latest version of the OpenVINO™ toolkit for inference to predict atrial fibrillation by identifying distinct rhythm patterns in time-series ECG waveforms.
Deploy and evaluate your AI workloads on the newly hosted Intel® Xeon® Platinum 8480+ Processor.
Launch JupyterLab and Try It Now
Deploy and evaluate your AI workloads on the newly hosted Intel® Core™ i5-1250PE processor.
Learn to use GStreamer* pipelines for complex media analytics tasks such as detection, classification, and tracking using Intel® Deep Learning Streamer 2022.2.
View the Tutorial
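For reference, a sketch of the tutorial's detect-classify-track pattern as a DL Streamer pipeline launched through GStreamer's Python bindings; the video and model paths are placeholders:

```python
# Sketch of a detect/classify/track DL Streamer pipeline; the input video
# and model files are placeholders.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.parse_launch(
    "filesrc location=input.mp4 ! decodebin "
    "! gvadetect model=person-detection.xml device=CPU "
    "! gvaclassify model=person-attributes.xml device=CPU "
    "! gvatrack tracking-type=short-term-imageless "
    "! gvafpscounter ! fakesink sync=false"
)
pipeline.set_state(Gst.State.PLAYING)
# Block until the stream ends or an error occurs, then clean up.
pipeline.get_bus().timed_pop_filtered(
    Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR
)
pipeline.set_state(Gst.State.NULL)
```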
Bare Metal Environment: The performance metrics dashboard adds a new metric, Power Usage (Watts)/Time GMT, to the hardware- and application-level metrics. This metric shows CPU power use while the application runs, helping users further fine-tune their workload and hardware configuration.
Analyze and evaluate updated samples or tutorials with improved throughput performance.
- Tutorial with the synchronous OpenVINO™ 2.0 API
- Samples and tutorials with the asynchronous OpenVINO™ 2.0 API
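The difference between the two API styles can be sketched in a few lines (OpenVINO™ 2.0 Python API; the model path is a placeholder and `frames` stands in for preprocessed inputs):

```python
# Sync vs. async inference with the OpenVINO 2.0 Python API.
import numpy as np
from openvino.runtime import AsyncInferQueue, Core

core = Core()
compiled = core.compile_model("model.xml", device_name="CPU")  # placeholder
frames = [np.zeros(compiled.input(0).shape, dtype=np.float32) for _ in range(8)]

# Synchronous: one request at a time, simplest to reason about.
request = compiled.create_infer_request()
for frame in frames:
    result = request.infer({0: frame})

# Asynchronous: a pool of parallel requests, typically higher throughput.
def on_done(request, frame_id):
    _ = request.get_output_tensor(0).data  # consume the result

queue = AsyncInferQueue(compiled, jobs=4)
queue.set_callback(on_done)
for i, frame in enumerate(frames):
    queue.start_async({0: frame}, userdata=i)
queue.wait_all()
```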
Deploy and evaluate your AI workloads on the newly hosted Intel® Data Center GPU Flex Series 170. The GPU is optimal for pipelines that include more complex AI models, such as multiple object detection or multiple classification models.
Learn More about Intel® Data Center GPU Flex Series 170
Launch JupyterLab and Try It Now
Learn to deploy an industrial computer vision model to detect real-world analog pointer meters and extract corresponding digital readings using the Intel® Distribution of OpenVINO™ toolkit.
Deploy and evaluate your AI workloads on the newly hosted Intel® Data Center GPU Flex Series 140. With a higher number of video encode and decode engines, this GPU can support a large number of streams for AI inferencing workloads.
Intel® Deep Learning Streamer (Intel® DL Streamer) code snippets are small blocks of reusable code that can be inserted in a Jupyter* Notebook for detection, classification, and tracking applications on Intel® Developer Cloud for the Edge workloads using the OpenVINO™ toolkit. Use the tag ‘DL Streamer_LTS’ to filter for the Intel DL Streamer code snippets.
Intel® Developer Cloud for the Edge now features the Intel® Distribution of OpenVINO™ toolkit version 2022.2.
Note: Deep Learning Workbench remains on version 2022.1 of the toolkit.
Analyze and evaluate the newly added samples, Image Classification Using ResNet-50 and Sequence Classification Using BERT, which accelerate PyTorch* model inference using OpenVINO™ Integration with ONNX* Runtime for PyTorch*.
Deploy and evaluate your AI workloads on the newly hosted Intel® Xeon® D-2712T and Intel® Xeon® D-2796NT processors.
Quickly browse, import, or export your AI models and test data from Amazon S3 buckets to optimize and evaluate inference performance on Intel® hardware within Intel® Developer Cloud for the Edge.
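As an illustration, a minimal boto3 sketch; the bucket and key names are hypothetical, and AWS credentials are assumed to be configured in the environment:

```python
# Hypothetical round trip: pull a model from S3, benchmark locally,
# push results back. Bucket and key names are placeholders.
import boto3

s3 = boto3.client("s3")
s3.download_file("my-model-bucket", "models/model.xml", "model.xml")
s3.download_file("my-model-bucket", "models/model.bin", "model.bin")
# ...run optimization and benchmarking on the target hardware, then:
s3.upload_file("results.csv", "my-model-bucket", "results/results.csv")
```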
The tutorial notebook has reusable code snippets to help you quickly create OpenVINO™ applications.
Try your own AI workloads on the newly hosted Intel® Core™ i7 and i9 processors.
Accelerate PyTorch* model inference using OpenVINO™ Integration with ONNX* Runtime for PyTorch* as an execution provider to run on Intel® hardware.
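One way to wire this up, sketched here with a plain ONNX* export and ONNX Runtime's OpenVINO Execution Provider (assumes the onnxruntime-openvino package; the model choice and paths are placeholders):

```python
# Sketch: export a PyTorch model to ONNX, then run it through ONNX Runtime
# with the OpenVINO Execution Provider (onnxruntime-openvino package).
import torch
import torchvision.models as models
import onnxruntime as ort

model = models.resnet50().eval()     # placeholder model
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "resnet50.onnx")

session = ort.InferenceSession("resnet50.onnx",
                               providers=["OpenVINOExecutionProvider"])
outputs = session.run(None, {session.get_inputs()[0].name: dummy.numpy()})
```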
This solution uses the Intel® Deep Learning Streamer and Intel® Distribution of OpenVINO™ toolkit to identify people and determine social distance in crowded public spaces.
Try your own AI workloads on the newly hosted Intel® Core™ i5 and i7 processors.
Learn the latest inference API for automatic device selection and performance hints to dynamically optimize for latency or throughput.
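In the 2022.x Python API this boils down to the "AUTO" device name plus a performance hint; a minimal sketch (the model path is a placeholder):

```python
# Let OpenVINO pick the device and tune for throughput; swap the hint to
# "LATENCY" for latency-sensitive workloads. "model.xml" is a placeholder.
from openvino.runtime import Core

core = Core()
compiled = core.compile_model(
    "model.xml",
    device_name="AUTO",
    config={"PERFORMANCE_HINT": "THROUGHPUT"},
)
```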
This incremental update includes support for text classification models, OpenVINO™ API 2.0 enabled in tools, Cityscapes Dataset, and initial support for natural language processing (NLP) models.
Code snippets are small blocks of reusable code that can be inserted in a Jupyter* Notebook to aid and accelerate coding of sample applications on Intel® Developer Cloud for the Edge workloads using the OpenVINO™ toolkit.
Now you can import and securely launch Helm* charts, Docker* Compose images, and containers on Intel® hardware in the new Kubernetes* environment. Explore, edit, and test OpenVINO™ toolkit sample applications with JupyterLab, and optimize AI models with the improved Deep Learning Workbench.
These samples demonstrate Noise Suppression, Time-Series Forecasting, Object Detection with Tiny YOLO* V4, and Neural Network Compression.
Intel® Developer Cloud for the Edge has been updated with version 2022.1 of the OpenVINO™ toolkit.
Try your own AI workloads or Intel® Developer Cloud for the Edge sample applications on the newly hosted Pentium® processors.
This release adds support for Intel® Distribution of OpenVINO™ toolkit version 2021.4.2, and contains minor bug fixes and usability enhancements.
This latest version allows you to run out-of-the-box benchmarking, measure the accuracy of your model, perform a comprehensive accuracy comparison between floating-point and INT8 models, and more.
Provides an enhanced experience when working with sample, tutorial, and prototype notebooks.
Intel® Developer Cloud for the Edge now features the Intel Distribution of OpenVINO toolkit version 2021.4.2.
This incremental update includes support for explainable AI for classification model types, visualization of inference results, streamlined INT8 calibration flow, and multiple UX improvements.
Seamlessly develop, build, and test cloud-native container applications on various target deployment hardware.
Use OpenVINO toolkit optimizations with TensorFlow* inference applications across a wide range of compute devices.
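A minimal sketch, assuming the openvino-tensorflow add-on package: one import and one call route a stock TensorFlow model's inference through OpenVINO:

```python
# Route TensorFlow inference through OpenVINO via the openvino-tensorflow
# add-on; supported operators run on the selected backend.
import tensorflow as tf
import openvino_tensorflow

openvino_tensorflow.set_backend("CPU")  # or "GPU", "MYRIAD", ...
model = tf.keras.applications.MobileNetV2(weights=None)  # placeholder model
predictions = model(tf.zeros([1, 224, 224, 3]))
```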
Intel Developer Cloud for the Edge now supports OpenVINO toolkit version 2021.4.1.
Try your own AI workloads or Intel Developer Cloud for the Edge sample applications on the newly hosted OnLogic* platforms.
Create datasets and explore the performance benefits of converting TensorFlow* and other models to INT8. Now includes YOLO* model support.
Explore constructing media analytics pipelines using Deep Learning Streamer in the Frictionless Retail Sample on Intel Developer Cloud for the Edge.
Hardware Available for Edge Workloads
Test your workload performance with combinations of CPUs, GPUs, and accelerators in bare metal and cloud-native container architectures to identify the hardware platform that works best for your inferencing solutions.