AI PC Development Tools
Seamlessly transition projects from early AI development on the PC to cloud-based training to edge deployment. Learn what is required of AI workloads and what is available to get started today.
Streamline AI Integration
Intel provides a suite of powerful development tools designed to streamline the integration of AI into applications. These tools take advantage of Intel® AI PC hardware to deliver high performance with low power consumption. The suite enables developers to build powerful AI-infused applications without deep AI expertise.
Core AI PC Development Kit Technologies
OpenVINO™ Toolkit
- Enable flexible AI model deployment across Intel CPUs, GPUs, and NPUs.
- Optimize models for efficient deployment.
- Use pre-optimized models that are ready for production.
Windows* AI Foundry
- Build and integrate intelligent AI experiences into Windows 11 applications.
- Use on-device hardware such as Intel CPUs, GPUs, and NPUs for optimized performance.
Web Neural Network API (WebNN)
- Deploy AI entirely within a web browser.
- Take advantage of lower-level acceleration libraries to run AI more efficiently.
- Run with near-native performance in the browser.
- Use ONNX Runtime Web or LiteRT.js for ease of use at a higher level of abstraction.
For AI application development and optimization, see Tools for Application Development and AI Frameworks.
Configure Your AI PC Development Kit
OpenVINO Toolkit
This open source toolkit is for developers who want high-performance, power-efficient AI inferencing across multiple operating systems and hardware architectures. It enables flexible AI model deployment across Intel CPUs, GPUs, and NPUs, and the distribution includes tools to compress, quantize, and optimize models for efficient deployment in end-user applications.
OpenVINO™ toolkit manages AI workloads across CPUs, GPUs, and NPUs for optimal deployment. You can accelerate AI inference and generative AI (GenAI) workloads, achieve lower latency, and increase throughput while maintaining accuracy through optimization tools such as Neural Network Compression Framework (NNCF). OpenVINO toolkit also natively supports models from AI frameworks such as PyTorch*, TensorFlow*, and ONNX, and provides developers with a set of prevalidated models. Developers can download these models and build their own cutting-edge AI applications.
Browse the OpenVINO™ Model Hub, which includes the latest OpenVINO toolkit inference performance benchmarks for a select list of leading GenAI models and LLMs on Intel CPUs, built-in GPUs, NPUs, and accelerators.
- Model Performance: Find out how top models perform on Intel hardware.
- Hardware Comparison: Find the right Intel hardware platform for your solution.
Download the comprehensive white paper on optimizing LLMs with compression techniques. Learn how to use the OpenVINO toolkit to compress LLMs, integrate them into AI applications, and deploy them on your PC with maximum performance.
Windows AI Foundry
The Windows AI Foundry is Microsoft's platform for developers to build and integrate intelligent AI experiences into Windows 11 applications, leveraging on-device hardware such as Intel CPUs, GPUs, and NPUs for optimized performance. It:
- Provides built-in AI features and APIs on Windows 11 AI PCs that run locally and enable unique AI experiences.
- Accesses ready-to-use APIs, including the Phi Silica language model, AI Imaging, and Text Recognition.
- Integrates open-source models via Foundry Local and custom ONNX models through Windows ML.
- Enables hardware acceleration via Windows ML and OpenVINO™ Execution Provider for enhanced performance on Intel AI PCs (CPUs, GPUs, and NPUs).
- The OpenVINO Execution Provider for Windows ML requires one of the following system configurations:
  - 11th Generation Intel® Core™ processors (formerly code name Tiger Lake) or newer with at least 8 GB of memory for CPU acceleration
  - 12th Generation Intel® Core™ processors (formerly code name Alder Lake) or newer with at least 16 GB of memory for GPU acceleration
  - Intel® Core™ Ultra Series 1 processors (formerly code name Meteor Lake) or newer with at least 16 GB of memory for NPU acceleration
WebNN
As machine learning evolves, bridging software and hardware for scalable, web-based solutions has been an ongoing challenge. The WebNN API enables AI models to run with near-native performance in the browser, and it is enabled in many popular browsers on Intel platforms. With it, web applications can create, compile, and run machine learning models directly on the client. In practice, web application developers typically use higher-level frameworks such as ONNX Runtime Web and LiteRT.js, which build on WebNN to provide high-performance AI model inferencing. WebNN is currently an experimental feature in popular browsers and is undergoing extensive community testing.
For instructions on enabling WebNN in your browser, see WebNN Installation Guides.
Tools for Application Development and AI Frameworks
Develop AI Applications
- Intel® C++ Essentials: Compile, debug, and use our most popular performance libraries for SYCL* across diverse architectures.
- Intel® Distribution for Python*: Use this distribution to make Python applications more efficient and performant.
- Intel® Deep Learning Essentials: Access tools to develop, compile, test, and optimize deep learning frameworks and libraries.
- Intel® VTune™ Profiler: Optimize application performance, system performance, and system configuration.
Optimize and Tune Training Models for Deep Learning and Inference
- AI Frameworks and Tools: Unlock the full capabilities of your Intel hardware with software tools at all levels of the AI stack.
- Get the most performance from your end-to-end pipeline on all your available hardware.
- Accelerate end-to-end data science and machine learning pipelines using Python tools and frameworks.
Join the AI PC Developer Program
Be among the first to learn about the latest development tools and resources for the Intel Core Ultra processor and your AI PC applications. Join the AI PC developer program now to get access to product updates and releases, exclusive invitations to webinars and events, valuable training and tutorial resources, and other breaking news.