With single sign-on (SSO) authentication, you can use one ID and password to log in to any of the following areas:

  • Intel® Developer Cloud for the Edge
  • Intel® Edge Software Hub
  • Intel Developer Cloud for oneAPI
  • Intel Developer Cloud for the Edge Forum
  • Product downloads
  • Other Intel websites

Search your inbox for the welcome email from DeveloperCloudfortheEdge@intel.com, or check your spam folder. If you cannot find the welcome email, see the community support page.

Announcements regarding scheduled maintenance and downtime are posted on the community support page.

When the scheduled maintenance period begins, running tasks are terminated. Your saved files and Jupyter* Notebooks are not affected by the upgrade. After maintenance is complete, resubmit any terminated tasks.

For general support questions, see Intel Support.

For Intel Developer Cloud for the Edge support, see the community support page.

You can create multiple accounts, but each account must be registered to a different email ID.

To update your account, sign in to it. If you are unable to sign in to your Intel account, see the community support page.

Use the latest version of one of the following browsers:

  • Microsoft Edge
  • Mozilla Firefox
  • Google Chrome

Note To reduce the risk of sign-in or cache issues when using Intel Developer Cloud for the Edge tools and applications, enable cookies in your browser and clear your browser cache.

Data Privacy and Security Practices

Intel Corporation and Colfax International jointly operate Intel Developer Cloud for the Edge and follow industry regulations to strictly protect the security of user data.

Intel hired Bishop Fox to conduct advanced security assessments on Developer Cloud for the Edge. Bishop Fox is the largest private professional services firm specializing in offensive security testing.

The types of assessments included API penetration testing, external penetration testing, internal penetration testing, and remediation testing.

For more information on Intel Developer Cloud security, review this article on Prototyping and Benchmarking.

No, user-uploaded data is not backed up.

Intel Developer Cloud for the Edge protects user-uploaded data such as code, executables, and datasets with standard Linux permission controls so other users cannot access it. This data may be reviewed by specific Colfax staff for security implementation and troubleshooting.

Benchmarking and Technical

  • For Container Playground, users get access for 120 days.
  • In the Container Playground environment, users get a session time of four hours after signing in.

Users can request an account extension for an additional 120 days: sign in to your account and request an extension.

Telemetry automatically collects performance data that supports data-driven decisions about the ideal hardware for a user's solution.

To enable telemetry, follow these instructions.

The telemetry dashboard captures the following metrics for a given job:

  • Average inference time (in ms)
  • Inference count
  • Target hardware
  • Frames per second
  • Inference times
  • CPU or GPU usage during inferencing
  • Average CPU or GPU temperature
  • Memory usage during inferencing

Currently, Grafana-based telemetry is not supported in the Container Playground.
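As a rough illustration of how the average inference time and frames-per-second metrics relate, they can be derived from per-frame timings like so (the sample timings below are hypothetical, not real telemetry output):

```python
# Hypothetical per-frame inference times (in ms), as a telemetry agent might record them.
inference_times_ms = [12.1, 11.8, 12.4, 13.0, 11.7]

inference_count = len(inference_times_ms)
avg_inference_ms = sum(inference_times_ms) / inference_count
# Throughput in frames per second, based on pure inference time.
fps = 1000.0 / avg_inference_ms

print(f"Inference count:        {inference_count}")
print(f"Average inference time: {avg_inference_ms:.2f} ms")
print(f"Frames per second:      {fps:.2f}")
```

Real telemetry also correlates these with CPU/GPU usage, temperature, and memory sampled during the run.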

How do I use the Post Training Optimization Tool (POT)?

You can learn more about the POT by accessing the interactive Jupyter Tutorial and documentation. To use the POT in your own applications, run !pot -h within a Jupyter Notebook cell to output the command syntax and variables.

The qarpo library is a custom library that gives Jupyter* Notebook users an interface to perform the following tasks:

  1. Submit jobs to nodes in a cluster.
  2. Track the progress of running jobs.
  3. Display the output results of jobs.
  4. Plot metric results for completed jobs.

The Accuracy Checker is an extensible, flexible, and configurable deep learning accuracy validation framework. It is a modular component of the Intel Distribution of OpenVINO toolkit and can be applied with datasets and AI models to collect aggregated quality metrics.

The Neural Network Compression Framework (NNCF) provides a suite of advanced algorithms for optimizing neural network inference in OpenVINO with minimal accuracy drop. It is designed to work with models from the PyTorch and TensorFlow frameworks. For more information, refer to the NNCF sample application.

Quantization in machine learning is the process of converting data from FP32 (32-bit floating point) to a lower precision such as INT8 (8-bit integer). The quantization and compression methods supported on Intel Developer Cloud for the Edge are:

  • 8-bit quantization
  • Mixed-precision quantization
  • Binarization
  • Sparsity
  • Filter pruning

For more details, see the article on Quantizing Models.
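For intuition, 8-bit quantization can be sketched in a few lines of plain Python. This is a simplified symmetric scale scheme, not the exact algorithm that POT or NNCF uses:

```python
def quantize_int8(values, max_abs):
    # Symmetric 8-bit quantization: map [-max_abs, max_abs] onto [-127, 127].
    q = [max(-127, min(127, round(v * 127.0 / max_abs))) for v in values]
    scale = max_abs / 127.0  # FP32 value represented by one integer step
    return q, scale

def dequantize(q, scale):
    # Recover approximate FP32 values; the error is bounded by the scale.
    return [qi * scale for qi in q]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, scale = quantize_int8(weights, max(abs(w) for w in weights))
approx = dequantize(q, scale)
```

Each INT8 value takes a quarter of the memory of its FP32 original, at the cost of a small, bounded rounding error.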

A synchronous (Sync) API runs inference and returns results only when the job is completed. An asynchronous (Async) API runs inference in parallel on a separate thread or device, which lets the main thread simultaneously perform other tasks such as capturing input data, preprocessing it, and post-processing results.
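The difference can be sketched with a dummy inference function in plain Python. The OpenVINO runtime's actual async API uses infer requests rather than Python threads; this only illustrates why overlapping preprocessing with inference lowers wall time:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def infer(frame):        # stand-in for a model inference call
    time.sleep(0.01)
    return frame * 2

def preprocess(i):       # stand-in for capture + preprocessing
    time.sleep(0.01)
    return i

frames = list(range(8))

# Sync: each frame waits for the previous inference to finish.
t0 = time.perf_counter()
sync_results = [infer(preprocess(i)) for i in frames]
sync_s = time.perf_counter() - t0

# Async: preprocessing of the next frame overlaps with inference of the current one.
t0 = time.perf_counter()
async_results = []
with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(infer, preprocess(frames[0]))
    for i in frames[1:]:
        frame = preprocess(i)            # main thread works while infer runs
        async_results.append(future.result())
        future = pool.submit(infer, frame)
    async_results.append(future.result())
async_s = time.perf_counter() - t0

assert sync_results == async_results     # same outputs, lower wall time
```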

No, not currently.

Try stopping and starting the application from the notebook. If the issue persists, use the Reset option to clear your Deep Learning Workbench data, and then launch again.

Note You will lose any created projects when you reset.

Ensure that the Grafana dashboard that you are trying to access is from a recently submitted job and that you are logged in from the same account as the one submitting jobs.

The JupyterLab and Container Playground environments do not support external SSH access.

Container Playground

  • The user inside your container must be non-root, with a UID in the range of 10 to 10000000000
  • Ports must be in the range 1024 to 65535
  • Privilege escalation is not supported
  • Host ports are not supported
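A minimal Dockerfile that satisfies these constraints might look like the following sketch (the base image, user name, UID, and port are illustrative assumptions, not requirements):

```dockerfile
FROM python:3.9-slim

# Create and switch to a non-root user with a UID in the allowed range.
RUN useradd --uid 1001 --create-home appuser
USER 1001
WORKDIR /home/appuser

COPY app.py .

# Listen on a port above 1024; host ports and privilege escalation are unavailable.
EXPOSE 8080
CMD ["python", "app.py"]
```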

Container Playground workloads run on a wide range of Intel hardware using Kubernetes*, based on the Red Hat* OpenShift* platform. For more information, refer to https://www.intel.com/devcloud-containers

Yes. Use the View/Edit Code option to download sample applications. You can also download some of Intel's reference implementations from the Intel Software repository after agreeing to the license terms.

The supported public registries are Docker Hub, Azure, and Quay.io.

The supported private registries are Docker Hub and Quay.io.

A project can be launched on up to three target platforms simultaneously. When the workload has completed execution, the target platform is freed up for other projects to execute.

Logs can be accessed for 15 days. Filesystem output is accessible at any time, provided your account is active and the project is present in your dashboard. The Container Playground also provides a storage indicator so you can free up space if you are running low on storage.

After you launch a project, the target hardware is allocated exclusively to your user account until the run completes or a maximum time limit of 15 minutes is reached.

In Container Playground, any build that is completed can be viewed in My Library > Resource. To start rebuilding, use the Actions option to edit the configuration and provide the latest branch details.

GitLab is not supported for importing applications from a source code repository.

A job file is a Bash script that serves as a wrapper around the Python executable of an application run directly on the edge compute node. One purpose of the job file is to simplify running an application on different edge (compute) nodes: it accepts a few arguments and then performs the necessary steps before and after running the application executable. The qsub command is used to submit jobs to edge compute nodes in the JupyterLab environment in Intel Developer Cloud for the Edge.
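As a sketch, a notebook cell might create such a job file as follows. The application name, job arguments, and node property in the commented qsub line are hypothetical placeholders:

```python
from pathlib import Path

# A job file is a Bash wrapper that runs on the edge compute node.
# $1 and $2 are the arguments passed through qsub's -F flag.
job_script = """#!/bin/bash
OUTPUT_DIR=$1
DEVICE=$2
mkdir -p "$OUTPUT_DIR"
python3 my_app.py --device "$DEVICE" --output "$OUTPUT_DIR"
"""
Path("my_job.sh").write_text(job_script)

# In a Jupyter Notebook cell, the job would then be submitted with something like:
#   !qsub my_job.sh -l nodes=1:<node-property> -F "results/ CPU"
```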

After submitting a job, it is placed in a queue to be run when the requested compute nodes become available. The custom Jupyter Notebook widget liveQstat() displays the output of the job with live updates. Use the qstat command to check the status of the jobs that are submitted.

On Container Playground, your dashboard displays the state of the launched project.

In the JupyterLab and Container Playground environments, users do not have root access. Therefore, you cannot use sudo in your commands or install packages with the pip and conda commands.

In the JupyterLab environment:

  • Users get up to 50 GB of storage space on the development node.

In the Container Playground environment:

  • Users get a private registry that can hold up to 15 containers with a total storage limit of 20 GB.
  • The file system has a storage limit of 1 GB and is expandable up to 5 GB.
  • Exclusive target platform access for testing containers is up to 15 minutes.
  • A maximum of eight projects can be created.

  • In the JupyterLab environment, a cleanup procedure is launched after the job is run.
  • In the Container Playground environment, a cleanup procedure is performed after the launched project completes running or is terminated and before another project is launched.

In the JupyterLab and Container Playground environments, hardware is exclusively allocated to a single job from a single user while the job runs.

  • In the JupyterLab environment, users get 2 GB memory (RAM) for each development node.
  • In the Container Playground environment, users get 2 vCPUs and 4 GB RAM for viewing and editing the code.

In the JupyterLab and Container Playground environments, use the Intel® Distribution of OpenVINO™ toolkit Benchmark App, which is available in C++ and Python.

  • In the JupyterLab environment, go to the Overview page.
  • From the Container Playground, use the Coding Environment option to launch a lightweight JupyterLab IDE to edit code and build containers with the Buildah command, which is accessible from the terminal interface.

In the JupyterLab and Container Playground environments, jobs or projects launched on target platforms that support an integrated GPU have GPU access enabled by default.
