AI PC Development Tools
Seamlessly transition projects from early AI development on the PC to cloud-based training to edge deployment. Learn what AI workloads require and what is available to get started today.
Streamline AI Integration
Intel provides a suite of powerful development tools designed to streamline the integration of AI into applications. These tools take advantage of Intel® AI PC hardware to deliver high performance with low power use, enabling developers to build AI-infused applications without deep AI expertise.
Core AI PC Development Kit Technologies
OpenVINO™ Toolkit
- Enable flexible AI model deployment across Intel CPUs, GPUs, and NPUs.
- Optimize models for efficient deployment.
- Use pre-optimized models that are ready for production.
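The following sketch shows what this deployment flow looks like in practice. It is a minimal example, assuming OpenVINO is installed via pip install openvino; the model path and input shape are illustrative placeholders.

```python
# Minimal OpenVINO inference sketch. The model path and input shape below
# are placeholders; substitute your own IR or ONNX model.
import numpy as np
import openvino as ov

core = ov.Core()
print(core.available_devices)         # e.g. ['CPU', 'GPU', 'NPU'] on an AI PC

model = core.read_model("model.xml")  # IR format; ONNX files also load directly
# "AUTO" lets OpenVINO pick the best available device (CPU, GPU, or NPU).
compiled = core.compile_model(model, "AUTO")

x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # example input
result = compiled([x])[0]             # run inference, take the first output
print(result.shape)
```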
Open Neural Network Exchange (ONNX*)
- Create cross-platform inference with ONNX* Runtime.
- Improve model performance across multiple platforms.
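As a minimal illustration, the sketch below loads an ONNX model and runs it with ONNX Runtime's Python API (pip install onnxruntime); the model path and input shape are placeholders.

```python
# Minimal ONNX Runtime inference sketch; model path and input shape are
# placeholders for your own model.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: x})  # None means return all outputs
print(outputs[0].shape)
```

The same script runs unchanged on any platform ONNX Runtime supports; only the provider list changes when you target different hardware.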
Web Neural Network API (WebNN)
- Deploy AI entirely within a web browser.
- Take advantage of lower-level acceleration libraries to run AI more efficiently.
- Run with near-native performance in the browser.
- Use ONNX Runtime Web or TensorFlow.js for ease of use at a higher level of abstraction.
For AI application development and optimization, see Tools for Application Development and AI Frameworks.
Configure Your AI PC Development Kit
OpenVINO Toolkit
This open source toolkit is for developers who want high-performance, power-efficient AI inferencing across multiple operating systems and hardware architectures. It enables flexible AI model deployment across Intel CPUs, GPUs, and NPUs, and includes tools to compress, quantize, and optimize models for efficient deployment in end-user applications.
The OpenVINO™ toolkit manages AI workloads across CPUs, GPUs, and NPUs for optimal deployment. You can accelerate AI inference and generative AI (GenAI) workloads, achieve lower latency, and increase throughput while maintaining accuracy through optimization tools such as the Neural Network Compression Framework (NNCF). The toolkit also natively supports models from AI frameworks such as PyTorch*, TensorFlow*, and ONNX, and provides developers with a set of prevalidated models that they can download to build their own cutting-edge AI applications.
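As a sketch of the NNCF workflow mentioned above: post-training quantization takes a floating-point model plus a small calibration dataset and produces an INT8 model. The model path and the random calibration data below are placeholders (pip install nncf openvino).

```python
# Post-training INT8 quantization with NNCF; the model path and the random
# calibration data are placeholders for a real model and representative samples.
import numpy as np
import nncf
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")

# A few hundred representative inputs; random data stands in here.
calibration_data = [
    np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(300)
]
calibration_dataset = nncf.Dataset(calibration_data)

quantized_model = nncf.quantize(model, calibration_dataset)
ov.save_model(quantized_model, "model_int8.xml")  # smaller, faster INT8 model
```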
Browse the OpenVINO™ Model Hub, which includes the latest OpenVINO toolkit performance benchmarks for a select list of leading GenAI models and LLMs on Intel CPUs, built-in GPUs, NPUs, and accelerators.
- Model Performance: Find out how top models perform on Intel hardware.
- Hardware Comparison: Find the right Intel hardware platform for your solution.
Download this comprehensive white paper on optimizing LLMs with compression techniques. Learn how to use the OpenVINO toolkit to compress LLMs, integrate them into AI applications, and deploy them on your PC with maximum performance.
ONNX Model and ONNX Runtime
ONNX is a machine learning model format, and ONNX Runtime is a cross-platform inference and training machine learning accelerator. For developers who want broader platform coverage (mobile, tablets, and PCs) than the OpenVINO toolkit offers, ONNX may be a good choice. It works with Intel platforms and allows developers to improve model performance while targeting multiple platforms with ease. A key component of ONNX Runtime is its Execution Providers (EPs), which plug hardware acceleration technologies into the runtime. Intel platforms have two optimized EPs: the OpenVINO™ Execution Provider and the DirectML EP.
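The sketch below shows how an application might request the Intel-optimized EPs, assuming the matching ONNX Runtime package is installed (the OpenVINO EP ships in onnxruntime-openvino, the DirectML EP in onnxruntime-directml on Windows); the device_type option and model path are illustrative.

```python
# Selecting Intel-optimized execution providers in ONNX Runtime. ORT tries the
# providers in order and falls back to the next entry, so listing
# CPUExecutionProvider last gives a safe default.
import onnxruntime as ort

print(ort.get_available_providers())  # EPs compiled into this ORT build

session = ort.InferenceSession(
    "model.onnx",
    providers=[
        ("OpenVINOExecutionProvider", {"device_type": "GPU"}),  # or "CPU"/"NPU"
        "CPUExecutionProvider",
    ],
)
print(session.get_providers())  # the providers actually in use
```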
As an open source project with support from Microsoft*, ONNX aims to facilitate the exchange of machine learning models between different frameworks, with the benefits of:
- Interoperability across frameworks: ONNX can act as a bridge between several popular AI frameworks, including OpenVINO toolkit, PyTorch, and TensorFlow (see the export sketch after this list).
- Ease of deployment on AI PCs: ONNX Runtime can take advantage of the hardware capabilities of AI PCs that use CPUs and GPUs.
- Language compatibility: The ONNX project includes samples that show how to use different programming languages such as C++ and C# to bind to ONNX Runtime.
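As a sketch of the interoperability point above: a model trained in PyTorch can be exported once to ONNX and then consumed by ONNX Runtime, the OpenVINO toolkit, or WebNN-backed web runtimes. The resnet18 model here is just a stand-in for your own network (pip install torch torchvision).

```python
# Export a PyTorch model to the ONNX format; resnet18 is a placeholder
# for your own trained model.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # example shape used for tracing

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}},  # allow a variable batch size
)
# model.onnx can now be loaded by ONNX Runtime, OpenVINO, or any other
# ONNX-compatible stack.
```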
WebNN
As machine learning evolves, bridging software and hardware for scalable, web-based solutions has been an ongoing challenge. The WebNN API enables AI models to run with near-native performance in the browser, and it is enabled in many popular browsers on Intel platforms. Web applications gain the ability to create, compile, and run machine learning models. In practice, web developers typically work through higher-level frameworks such as ONNX Runtime Web and TensorFlow.js, which build on WebNN to provide high-performance AI model inferencing. WebNN is currently an experimental feature in popular browsers and is undergoing extensive community testing.
For instructions on enabling WebNN in your browser, see WebNN Installation Guides.
Tools for Application Development and AI Frameworks
Develop AI Applications
- Intel® C++ Essentials: Compile, debug, and use our most popular performance libraries for SYCL* across diverse architectures.
- Intel® Distribution for Python*: Use this distribution to make Python applications more efficient and performant.
- Intel® Deep Learning Essentials: Access tools to develop, compile, test, and optimize deep learning frameworks and libraries.
- VTune™ Profiler: Optimize application performance, system performance, and system configuration.
Optimize and Tune Models for Deep Learning Training and Inference
- AI Frameworks and Tools: Unlock the full capabilities of your Intel hardware with software tools at all levels of the AI stack.
- Get the most performance from your end-to-end pipeline on all your available hardware.
- Accelerate end-to-end data science and machine learning pipelines using Python tools and frameworks.
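As one concrete, hedged example of such acceleration: the Intel® Extension for Scikit-learn (pip install scikit-learn-intelex) can speed up an existing scikit-learn pipeline with a two-line patch; the clustering workload below is illustrative.

```python
# Patch scikit-learn with Intel-optimized kernels; the patch must run before
# the sklearn imports it should affect.
from sklearnex import patch_sklearn
patch_sklearn()

import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(10_000, 16).astype(np.float32)  # illustrative data
labels = KMeans(n_clusters=8).fit_predict(X)       # now runs Intel-optimized
print(labels[:10])
```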
Sign Up for Exclusive News, Tips & Releases
Be among the first to learn about the latest development tools and resources for the Intel Core Ultra processor and your AI PC applications. Sign up now to get access to product updates and releases, exclusive invitations to webinars and events, valuable training and tutorial resources, exciting contest announcements, and other breaking news.