Developer Resources from Intel and Microsoft*

Microsoft* and Intel collaborate to optimize AI workloads across AI PCs running DirectML, Microsoft Azure* Cloud Services, and multiplatform deployment with ONNX* (Open Neural Network Exchange) Runtime. The two companies also work together on open source projects that streamline AI training, model optimization, and inference, including ONNX model optimization, Microsoft DeepSpeed* on Intel® GPUs and AI accelerators, and Web Neural Network (WebNN) deployment on AI PCs.
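As a rough illustration of the multiplatform ONNX Runtime path mentioned above, the sketch below loads an ONNX model and prefers the DirectML execution provider when it is available (for example, on an AI PC running Windows with the onnxruntime-directml package), falling back to the CPU provider otherwise. The model path, input shape, and input data are placeholders rather than anything taken from the articles on this page.

```python
# Minimal sketch: run an ONNX model with ONNX Runtime, preferring DirectML.
# Assumes the onnxruntime-directml package on Windows; "model.onnx" and the
# 1x3x224x224 input shape are placeholders for your own model.
import numpy as np
import onnxruntime as ort

available = ort.get_available_providers()
providers = ["DmlExecutionProvider"] if "DmlExecutionProvider" in available \
    else ["CPUExecutionProvider"]

session = ort.InferenceSession("model.onnx", providers=providers)
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # example image-shaped input
outputs = session.run(None, {input_name: dummy})
print(outputs[0].shape)
```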

Learn more about AI PC and Microsoft Azure offerings.

Intel and Microsoft Case Studies

        

Nuance: Deliver Imaging AI in the Clinical World (Microsoft Ignite Session Replay)

Intel Accelerates PadChest and fMRI Models on Microsoft Azure Machine Learning

"The Intel team's optimization of fMRI and PadChest models using Intel® Extension for PyTorch* and OpenVINO™ toolkit powered by oneAPI, leading to approximately 6x increase in performance, tailored for medical imaging, showcases best practices that do more than just accelerate running times. These enhancements not only cater to the unique demands of medical image processing but also offer the potential to reduce overall costs and bolster scalability."

— Alberto Santamaria-Pang, principal applied data scientist, Health AI at Microsoft

 

"We are elated to leverage the power of CPU instances provided by Microsoft Azure machine learning to enable developers and data scientists to take advantage of Intel® AI optimizations powered by Intel hardware. By integrating optimizations such as the Intel® Extension for Scikit-learn* powered by oneAPI into the platform, users can easily accelerate development and deployment of machine learning workloads for faster results and achieve a reduction in resource costs with just a few lines of code."

— Vijay Aski, partner director AI platform, Microsoft
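
The "few lines of code" referred to above usually amount to patching scikit-learn with Intel® Extension for Scikit-learn* before any estimators are imported. A minimal sketch, assuming the scikit-learn-intelex package is installed; the dataset and estimator are arbitrary examples rather than anything from the case studies:

```python
# Minimal sketch: enable Intel® Extension for Scikit-learn* optimizations.
# Assumes the scikit-learn-intelex package; the estimator and data are examples.
from sklearnex import patch_sklearn
patch_sklearn()  # must run before importing scikit-learn estimators

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))
```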

 

Using Microsoft Models, Tools, and Services on Intel® Platforms

Learn how to get started with Microsoft software and models on Intel-based platforms, and how to get the most out of them, across data center, cloud, and AI PCs. These joint offerings are built on the OpenVINO™ toolkit, AI Tools, and Intel® Gaudi® software.

Multiplatform

  • AI Everywhere: Accelerate Your Development from PC to Cloud
  • Accelerate ONNX Models with OpenVINO™ Execution Provider (see the sketch after this list)
  • Get Started with Olive Model Optimization for ONNX Runtime
  • Documentation for Olive Model Optimization with OpenVINO Toolkit
  • Intel® Neural Compressor: Model Optimization for ONNX
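
As a rough sketch of the OpenVINO™ Execution Provider flow linked above, assuming the onnxruntime-openvino package is installed; the model path is a placeholder, and the device_type value shown is one of several possible targets:

```python
# Minimal sketch: run an ONNX model through the OpenVINO™ Execution Provider.
# Assumes the onnxruntime-openvino package; "model.onnx" is a placeholder and
# "CPU" could instead be "GPU" or "NPU" depending on the target device.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["OpenVINOExecutionProvider"],
    provider_options=[{"device_type": "CPU"}],
)
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # example image-shaped input
print(session.run(None, {input_name: dummy})[0].shape)
```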

AI PC

  • Run Phi-3-mini on an AI PC from Intel (see the sketch after this list)
  • Optimized ONNX Models Run on AI PCs
  • Intel and Microsoft Collaborate to Bring Efficient LLM Experiences
  • Get Started with DirectML
  • WebNN Overview
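
One possible way to try Phi-3-mini locally, loosely following the articles above, is through Optimum Intel, which exports Hugging Face models to the OpenVINO™ format. A minimal sketch, assuming the optimum[openvino] and transformers packages; the checkpoint name and prompt are examples, and depending on the transformers version the tokenizer may additionally need trust_remote_code=True:

```python
# Minimal sketch: run Phi-3-mini locally via Optimum Intel and OpenVINO™.
# Assumes the optimum[openvino] and transformers packages; the checkpoint name
# is an example, and export=True converts it to the OpenVINO format on first load.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id, export=True)

inputs = tokenizer("What makes an AI PC different?", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```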

Data Center and Cloud

  • Azure Machine Learning-Based Federated Learning with Intel® Xeon® Platforms
  • Train with scikit-learn on Azure Machine Learning
  • Get Started with DeepSpeed on Intel® Gaudi® Accelerators (see the sketch after this list)
  • DeepSpeed User Guide for Training on Intel Gaudi Accelerators
  • Accelerate ONNX Models with oneAPI Deep Neural Network Library (oneDNN) Execution Provider
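
A heavily simplified sketch of wrapping a PyTorch model with DeepSpeed for training on Gaudi, assuming the Intel Gaudi software stack (habana_frameworks) and its DeepSpeed fork are installed and the script is started with the deepspeed launcher; the model, config values, and dummy loss are illustrative only:

```python
# Minimal sketch: wrap a PyTorch model with DeepSpeed for Gaudi training.
# Assumes the Intel Gaudi software stack and the Gaudi build of DeepSpeed;
# all model and config values below are examples, not a recommended setup.
import torch
import deepspeed
import habana_frameworks.torch.core as htcore  # registers the "hpu" device

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024), torch.nn.ReLU(), torch.nn.Linear(1024, 10)
)

ds_config = {
    "train_micro_batch_size_per_gpu": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "zero_optimization": {"stage": 1},
}

# hccl is the collective backend used on Gaudi (assumption based on Gaudi docs).
deepspeed.init_distributed(dist_backend="hccl")
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

x = torch.randn(8, 1024).to(model_engine.device)
loss = model_engine(x).pow(2).mean()  # dummy loss for illustration only
model_engine.backward(loss)
model_engine.step()
```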

More Resources

AI Development Resources

Explore tutorials, training, documentation, and support resources for AI developers.

AI Tools

Download Intel-optimized end-to-end AI tools and frameworks.

Intel® AI Hardware

Learn which type of device best suits your AI workload: CPUs, GPUs, or AI accelerators.
