Open Potential (Real-World AI Solutions)
See what's possible and start building your AI solutions journey with the OpenVINO™ toolkit.
AI development at the edge opens up many possibilities and use cases by enabling AI deployment directly on edge devices. Explore Edge AI Reference Kits from Intel, which come with a Jupyter* Notebook, how-to videos, step-by-step tutorials, and more.
Edge AI Reference Kits
Jump-start your AI solution by developing with the Edge AI Reference Kits or integrating them into your existing apps. The prebuilt components can serve as foundations for custom AI inference apps across many industries, such as retail, healthcare, and manufacturing. Using these best practices, you can work faster than with traditional development workflows.
Try It: Defect Detection with Anomalib
Learn how to build computer vision and defect detection applications for AI inference solutions. Use the open source Anomalib library for unsupervised learning with imbalanced datasets, enabling handling of rare defects in real time. Improve quality control in manufacturing, healthcare, agriculture, and more. Optionally, this project incorporates a Dobot robot.
Solutions Overview (2:35)
Watch a demo of a practical implementation of defect detection for manufacturing quality control.
Technical How-To (6:34)
Explore sample code and learn how to use it yourself.
Try It: Smart Meter Scanning—Object Detection & Segmentation
Build new AI inference capabilities into your applications to digitize analog information, transforming it into digital data and analytics for a boost in accuracy and reliability. Automate meter reading with computer vision for industries like energy and manufacturing, or any sector that uses analog meters.
Solutions Overview (2:46)
This demo shows a practical implementation of object detection and segmentation for smart meter scanning.
Technical How-To (8:50)
Get an in-depth look at how to use the sample code and try it on your own.
Try It: Intelligent Queue Management with YOLOv8* Object Detection & Counting
Build versatile AI inference apps using object detection AI, counting, and decision-making. Detect and count how many people are in a queue to reduce wait times, improve customer satisfaction, and optimize staffing. Manage lines with computer vision for industries like retail and healthcare, or any sector that needs object detection and object counting.
Solutions Overview (2:40)
Watch this demo of a practical implementation of YOLOv8* object detection.
Technical How-To (19:32)
Explore sample code and learn how to try it on your own.
Take a Closer Look at the OpenVINO Toolkit
Optimize and deploy AI inference solutions using this open source AI toolkit.
- Accelerate AI inference and optimize deployment on popular hardware platforms by maximizing the available compute across accelerators while using a common API.
- Choose from a wide range of pretrained models that provide flexibility for your use case and preferred framework, like TensorFlow* or PyTorch*.
- Compress models with post-training quantization, or retrain and fine-tune them with techniques like quantization-aware training.
Meet the Inference Makers at Intel
Get to know this team of experts in AI who can guide you through real-world use cases for the OpenVINO toolkit and deep learning AI inference solutions.
Raymond Lo
PhD in computer engineering, AI innovation entrepreneur, San Jose, US
Paula Ramos
PhD in engineering, computer vision scientist, North Carolina, US
Anisha Udayakumar
AI innovation consultant, systems engineer, Chennai, India
Adrian Boguszewski
Computer science engineer, deep learning expert, Swindon, England
Zhuo Wu
PhD in electronics, professor and research scientist, Shanghai, China
See for Yourself: OpenVINO Toolkit in the Real World
Take a look at how Intel is collaborating with others to achieve business success with the OpenVINO toolkit.
Megh Computing uses AI-powered video analytics to optimize operations.
PreciTaste* uses AI to precision forecast quick service restaurant (QSR) food production.
Camera-based drive-through meters help QSRs optimize food production.

Sign Up for Exclusive News, Tips & Releases
Be among the first to learn about everything new with the Intel® Distribution of OpenVINO™ toolkit. By signing up, you get:
• Early access to product updates and releases
• Exclusive invitations to webinars and events
• Training and tutorial resources
• Contest announcements
• Other breaking news
Support, Resources, and Frequently Asked Questions
You can report OpenVINO toolkit issues to the GitHub repository. For Edge AI Reference Kits issues, join the discussion on the OpenVINO notebooks repository or the Intel Support Forum.
Intel offers Edge AI Reference Kits for specific AI use cases, such as Smart Meter Scanning, Real-time Anomaly Detection, and Intelligent Queue Management, and more use cases are added frequently. Take advantage of real-time computer vision to create AI inference solutions using object detection, object segmentation, and upcoming additions such as generative AI.
These kits use pretrained, optimized models that accelerate AI on popular platforms and include detailed documentation, how-to videos, code samples, and GitHub* repositories, helping you speed up your deployment and adoption process.
The OpenVINO toolkit accelerates the process of compressing models for AI use cases and deploying them on various hardware platforms. This speeds up the development of AI inference solutions and makes it more efficient for you to turn your ideas into real-world AI applications.
Yes. The OpenVINO toolkit compiles models to run on many different devices to give you the flexibility to write code once and deploy your model across CPUs, GPUs, VPUs, and other accelerators.
To get the best possible performance, it’s important to properly set up and install the current GPU drivers on your system. Use the guide on How to Install Intel GPU Drivers on Windows* and Ubuntu*.
Note: Use the guide to install drivers and set up your system before using the OpenVINO toolkit for GPU-based AI inference solutions.
This guide was tested on Intel® Arc™ graphics and the Intel® Data Center GPU Flex Series on systems with Ubuntu* 22.04 LTS and Windows* 11. To use the OpenVINO toolkit GPU plug-in and offload inference to Intel GPUs, the Intel Graphics Driver must be properly configured on your system.
The new family of discrete GPUs from Intel is not just for gaming; these GPUs can also run AI at the edge or on servers.
The plug-in architecture of the OpenVINO toolkit supports the optimization of AI inference on third-party hardware as well as Intel platforms. See the documentation for the full list of supported devices.
It’s optimized for performance. The OpenVINO toolkit runs computationally intensive deep learning models with minimal impact on accuracy. It has features that maximize efficiency, like the AUTO device plug-in and thread scheduling on 12th generation Intel® Core™ processors and higher.
The OpenVINO toolkit is highly compatible with multiple frameworks and standardized protocols. OpenVINO™ model server uses the same architecture and API as TensorFlow* Serving and KServe to make deployment more scalable for modern workloads.
The OpenVINO toolkit minimizes the time it takes to process input data to produce a prediction as an output. Decision-making is faster while your system interactions are more efficient.
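One simple way to see this in practice is to time an inference call. The harness below measures average per-request latency with only the Python standard library; the `infer` function is a placeholder stand-in for a real model call, not an OpenVINO API.

```python
import time

def infer(request):
    # Placeholder for a real inference call; swap in your model's
    # prediction function here.
    return sum(request)

def measure_latency_ms(fn, request, runs=100):
    """Average wall-clock time per call, in milliseconds."""
    start = time.perf_counter()
    for _ in range(runs):
        fn(request)
    elapsed = time.perf_counter() - start
    return elapsed / runs * 1000.0

latency = measure_latency_ms(infer, [0.1, 0.2, 0.3])
```

Timing the same call before and after optimization gives a concrete measure of the latency reduction for your workload.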
With the Model Conversion API and the Neural Network Compression Framework (NNCF), OpenVINO offers several optimization techniques to enhance performance and reduce latency.
Read about the various model compression options like quantization-aware training, post-training quantization, and more in this model optimization guide.
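As background for how post-training quantization works, the sketch below shows the standard affine int8 quantization arithmetic that such techniques build on. This is a simplified illustration of the math only, not the NNCF implementation; all function names are hypothetical.

```python
# Toy sketch of affine int8 quantization: map a float range onto
# the int8 range with a scale and zero-point, then round and clamp.

def quantize_params(xmin, xmax, qmin=-128, qmax=127):
    """Derive the scale and zero-point mapping [xmin, xmax] onto int8."""
    scale = (xmax - xmin) / (qmax - qmin)
    zero_point = round(qmin - xmin / scale)
    return scale, zero_point

def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))  # clamp to the int8 range

def dequantize(q, scale, zero_point):
    return (q - zero_point) * scale

scale, zp = quantize_params(-1.0, 1.0)
q = quantize(0.5, scale, zp)
approx = dequantize(q, scale, zp)  # close to 0.5, within one scale step
```

Replacing float32 weights with int8 values in this way shrinks the model and speeds up inference, at the cost of the small rounding error shown by `approx`.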
The OpenVINO model server, part of the OpenVINO toolkit, lets you host models and efficiently deploy applications on a wide range of hardware. You can drop in AI inference acceleration without rewriting code.
- Your model is made accessible over standard network protocols through a common API, which is also used by KServe.
- Remote AI inference enables you to create lightweight clients that focus on API calls, which requires fewer updates.
- Applications can be developed and maintained independent from the model framework, hardware, and infrastructure.
- Access control to the model is easier, since the topology and weights are not exposed through the client applications.
- This deployment structure affords more efficient horizontal and vertical scaling of AI inference.
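The common API mentioned above follows the KServe v2 inference protocol. As an illustration, the snippet below builds a minimal v2-style REST request body using only the standard library; the model name, input name, shape, and endpoint are placeholder assumptions, so check the model server documentation for the exact schema your model expects.

```python
import json

# Minimal KServe v2-style REST request body (illustrative only; the
# model name "my_model" and the 1x3 float input are placeholders).
payload = {
    "inputs": [
        {
            "name": "input",       # must match the model's input name
            "shape": [1, 3],
            "datatype": "FP32",
            "data": [0.1, 0.2, 0.3],
        }
    ]
}

# The request would be POSTed to an endpoint of the form:
endpoint = "http://localhost:9000/v2/models/my_model/infer"
body = json.dumps(payload)
```

Because the client only assembles JSON and calls an HTTP endpoint, it stays lightweight and independent of the model framework, as described above.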
With its comprehensive set of tools and resources, the OpenVINO toolkit helps you streamline workflows while optimizing AI inference and the real-world performance of AI models.