AI PC Development Tools

 

Seamlessly transition projects from early AI development on the PC to cloud-based training and on to edge deployment. Learn what AI workloads require and what is available to get started today.

Streamline AI Integration

Intel provides a suite of powerful development tools designed to streamline the integration of AI into applications. These tools take advantage of Intel® hardware in AI PCs to deliver high performance at low power, enabling developers to build powerful AI-infused applications without deep AI expertise.

Core AI PC Development Kit Technologies

OpenVINO™ Toolkit

  • Enable flexible AI model deployment across Intel CPUs, GPUs, and NPUs.
  • Optimize models for efficient deployment.
  • Use pre-optimized models that are ready for production.

Open Neural Network Exchange (ONNX*)

  • Create cross-platform inference with ONNX* Runtime.
  • Improve model performance across multiple platforms.

Web Neural Network API (WebNN)

  • Deploy AI entirely within a web browser.
  • Take advantage of lower-level acceleration libraries to run AI more efficiently.
  • Run with near-native performance in the browser.
  • Use ONNX Runtime Web or TensorFlow.js for ease of use at high-level abstraction.


For AI application development and optimization, see Tools for Application Development and AI Frameworks.


OpenVINO Toolkit

This open source toolkit is for developers who need high-performance, power-efficient AI inferencing across multiple operating systems and hardware architectures. It enables flexible AI model deployment across Intel CPUs, GPUs, and NPUs, and it includes tools to compress, quantize, and optimize models for efficient deployment in end-user applications.

The OpenVINO™ toolkit manages AI workloads across CPUs, GPUs, and NPUs for optimal deployment. You can accelerate AI inference and generative AI (GenAI) workloads, achieve lower latency, and increase throughput while maintaining accuracy through optimization tools such as the Neural Network Compression Framework (NNCF). The toolkit also natively supports models from AI frameworks such as PyTorch*, TensorFlow*, and ONNX, and it provides a set of prevalidated models that developers can download to build their own cutting-edge AI applications.
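As a minimal sketch of that deployment flow (assuming the `openvino` Python package is installed; the model path, device choice, and input shape below are hypothetical placeholders, not part of this page), a model can be compiled for a target device and run like this:

```python
import numpy as np


def run_inference(model_path: str, input_data: np.ndarray, device: str = "AUTO"):
    """Compile a model with OpenVINO for `device` and run one inference.

    `model_path` is a hypothetical IR (.xml) or ONNX file. The "AUTO" device
    lets the runtime choose among the available CPU, GPU, and NPU devices.
    """
    import openvino as ov  # deferred so the helper can be defined without OpenVINO installed

    core = ov.Core()
    compiled = core.compile_model(model_path, device)
    result = compiled(input_data)  # dict-like mapping of output ports to tensors
    return result[compiled.output(0)]


# Hypothetical usage with an image-classification model:
# logits = run_inference("model.xml", np.zeros((1, 3, 224, 224), dtype=np.float32))
```

Passing "AUTO" defers device selection to the runtime; an explicit "CPU", "GPU", or "NPU" string pins the workload to one device.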

Browse Prevalidated Models

Browse the OpenVINO™ Model Hub, which includes the latest OpenVINO toolkit performance benchmarks for a selection of leading GenAI models and LLMs on Intel CPUs, built-in GPUs, NPUs, and accelerators.

  • Model Performance: Find out how top models perform on Intel hardware.
  • Hardware Comparison: Find the right Intel hardware platform for your solution.

Explore AI Model Benchmarks

Download this comprehensive white paper on optimizing LLMs with compression techniques. Learn how to use the OpenVINO toolkit to compress LLMs, integrate them into AI applications, and deploy them on your PC with maximum performance.

Unlock the Power of LLMs
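At a high level, the compression step described above can be sketched as follows (a hedged sketch, assuming the `openvino` and `nncf` packages are installed; the model paths and the helper name are hypothetical placeholders):

```python
def compress_llm_weights(model_path: str, output_path: str = "llm_int8.xml"):
    """Apply NNCF weight-only compression to an OpenVINO model and save it.

    By default, `nncf.compress_weights` performs 8-bit weight compression,
    shrinking the model with minimal accuracy loss. Paths are hypothetical.
    """
    import openvino as ov
    import nncf  # deferred imports: the helper can be defined without either package

    model = ov.Core().read_model(model_path)
    compressed = nncf.compress_weights(model)
    ov.save_model(compressed, output_path)
    return output_path
```

The white paper covers additional techniques (such as lower-bit weight formats); the call above shows only the default weight-only path.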

ONNX Model and ONNX Runtime

ONNX is a machine learning model format, and ONNX Runtime is a cross-platform accelerator for machine learning inference and training. For developers who need broader platform coverage (mobile, tablets, and PCs) than the OpenVINO toolkit provides, ONNX may be a good choice. It works with Intel platforms and lets developers improve model performance while targeting multiple platforms with ease. A key component of ONNX Runtime is its execution providers (EPs), which let AI models run on specific hardware acceleration technologies. Intel platforms have two optimized EPs: the OpenVINO™ Execution Provider and the DirectML EP.

ONNX is an AI model format based on an open source project with support from Microsoft*. Its goal is to facilitate the exchange of machine learning models between different frameworks with the benefits of:

  • Interoperability across frameworks: ONNX can act as a bridge between several popular AI frameworks, including OpenVINO toolkit, PyTorch, and TensorFlow.
  • Ease of deployment on AI PCs: ONNX Runtime can take advantage of the hardware capabilities of AI PCs that use CPUs and GPUs.
  • Language compatibility: The ONNX project includes samples that show how to use different programming languages such as C++ and C# to bind to ONNX Runtime.

ONNX
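The execution-provider mechanism described above can be sketched as follows (assuming the `onnxruntime` Python package; the model path is hypothetical, and the preference order is an illustrative choice, not an official recommendation):

```python
def pick_providers(available):
    """Order Intel-optimized execution providers ahead of the CPU fallback.

    Provider names follow ONNX Runtime's conventions; the preference order
    (OpenVINO, then DirectML, then CPU) is an illustrative assumption.
    """
    preferred = ["OpenVINOExecutionProvider", "DmlExecutionProvider", "CPUExecutionProvider"]
    return [p for p in preferred if p in available]


def create_session(model_path: str):
    """Create an ONNX Runtime inference session on the best available EPs."""
    import onnxruntime as ort  # deferred so pick_providers stays usable without onnxruntime

    providers = pick_providers(ort.get_available_providers())
    return ort.InferenceSession(model_path, providers=providers)


# pick_providers(["CPUExecutionProvider", "OpenVINOExecutionProvider"])
# -> ["OpenVINOExecutionProvider", "CPUExecutionProvider"]
```

ONNX Runtime falls back through the provider list in order, so listing the CPU EP last keeps the model runnable on machines without the Intel-optimized EPs.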

WebNN

As machine learning evolves, bridging software and hardware for scalable, web-based solutions has been an ongoing challenge. The WebNN API enables AI models to run with near-native performance in the browser, and it is enabled in many popular browsers on Intel platforms. With it, web applications can create, compile, and run machine learning models. Web developers typically use higher-level frameworks such as ONNX Runtime Web and TensorFlow.js, which build on WebNN to provide high-performance AI model inferencing. WebNN is currently an experimental feature in popular browsers and is undergoing extensive community testing.

For instructions on enabling WebNN in your browser, see WebNN Installation Guides.

Tools for Application Development and AI Frameworks

Develop AI Applications

  • Intel® C++ Essentials: Compile, debug, and use our most popular performance libraries for SYCL* across diverse architectures.
  • Intel® Distribution for Python*: Use this distribution to make Python applications more efficient and performant.
  • Intel® Deep Learning Essentials: Access tools to develop, compile, test, and optimize deep learning frameworks and libraries.
  • Intel® VTune™ Profiler: Optimize application performance, system performance, and system configuration.

Optimize and Tune Models for Deep Learning Training and Inference

  • AI Frameworks and Tools: Unlock the full capabilities of your Intel hardware with software tools at all levels of the AI stack.
  • Get the most performance from your end-to-end pipeline on all your available hardware.
  • Accelerate end-to-end data science and machine learning pipelines using Python tools and frameworks.

Sign Up for Exclusive News, Tips & Releases

Be among the first to learn about the latest development tools and resources for the Intel Core Ultra processor and your AI PC applications. Sign up now to get access to product updates and releases, exclusive invitations to webinars and events, valuable training and tutorial resources, exciting contest announcements, and other breaking news.


NOTE: Expressing interest does not guarantee that you will receive an AI PC dev kit. Intel will provide further information regarding the dev kit distribution process, including the selection of recipients.


By submitting this form, you are confirming you are an adult 18 years or older and you agree to share your personal information with Intel to use for this business request. You also agree to subscribe to stay connected to the latest Intel® technologies and industry trends by email and telephone. You may unsubscribe at any time. Intel's web sites and communications are subject to our Privacy Notice and Terms of Use.

