How It Works
Step through the process of using the Intel® Distribution of OpenVINO™ toolkit, and take a closer look at the key phases, from planning and setup through deployment.
This toolkit contains a full suite of development and deployment tools. To try building a project from start to finish, use the Intel® DevCloud for the Edge, which includes a fully configured set of pretrained models and hardware for evaluation.
Prerequisite: Plan and Set Up
Select your host and target platforms, and make choices about models.
Step 1: Train Your Model
Use your framework of choice to prepare and train a deep learning model.
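As a sketch of how this step hands off to the next one, a trained model is typically exported to a format the toolkit can ingest, such as ONNX. The framework (PyTorch), model, and filename below are illustrative, not prescribed by the toolkit:

```python
import torch
import torchvision

# Load an example pretrained model; any trained model from your
# framework of choice could stand in here.
model = torchvision.models.resnet18(pretrained=True).eval()

# Export to ONNX with a dummy input that fixes the input shape.
dummy_input = torch.zeros(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "my_model.onnx")
```

The resulting `my_model.onnx` file is what the Model Optimizer consumes in the next step.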
Step 2: Convert and Optimize
Run the Model Optimizer to convert your model and prepare it for inference.
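Conversion is typically a single command-line call to the Model Optimizer, which produces the toolkit's Intermediate Representation (IR): an .xml topology file plus a .bin weights file. The model filename and output directories below are placeholders:

```shell
# Convert a trained model (here, a hypothetical ONNX file) to IR.
mo --input_model my_model.onnx --output_dir ir/

# Optionally store weights as FP16 to shrink the model for
# memory-constrained targets.
mo --input_model my_model.onnx --data_type FP16 --output_dir ir_fp16/
```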
Step 3: Tune for Performance
Use the Inference Engine to compile the optimized network and manage inference operations on specified devices.
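A minimal sketch of this step with the Inference Engine Python API: it reads the IR produced by the Model Optimizer, compiles it for a chosen device, and runs one inference. The file paths are placeholders, and the example assumes the OpenVINO Python package is installed:

```python
import numpy as np
from openvino.inference_engine import IECore  # Inference Engine Python API

ie = IECore()

# Read the IR files produced by the Model Optimizer (paths are illustrative).
net = ie.read_network(model="ir/my_model.xml", weights="ir/my_model.bin")

# Compile the network for a specific device. Swapping "CPU" for another
# supported device name retargets inference without changing application code.
exec_net = ie.load_network(network=net, device_name="CPU", num_requests=2)

# Run a synchronous inference on a zero-filled input matching the
# network's input shape.
input_name = next(iter(net.input_info))
shape = net.input_info[input_name].input_data.shape
result = exec_net.infer(inputs={input_name: np.zeros(shape, dtype=np.float32)})
```

The `num_requests` argument enables multiple in-flight requests, which is one of the knobs available when tuning throughput versus latency on a given device.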
Step 4: Deploy Applications
Use the Inference Engine to deploy your applications.