Developer Resources from Intel and the PyTorch* Foundation
Intel is committed to actively contributing to and advocating for the PyTorch* community. PyTorch holds a pivotal place in accelerating AI because it enables fast application development that promotes experimentation and innovation.
Advance PyTorch through Intel Optimizations
PyTorch benefits from substantial Intel optimizations for x86, including acceleration through the Intel® oneAPI Deep Neural Network Library (oneDNN), optimized ATen operators, bfloat16 support, and auto-mixed precision. Intel appreciates collaborating with colleagues at Meta* and other contributors from the open source community.
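As an illustration of the bfloat16 auto-mixed precision support mentioned above, here is a minimal sketch of CPU inference in stock PyTorch; the model and input are hypothetical placeholders, and actual speedups depend on the x86 hardware and oneDNN-backed kernels available.

```python
import torch

# Placeholder model and input for illustration only.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).eval()
x = torch.randn(32, 128)

# Auto-mixed precision on CPU: eligible operators run in bfloat16,
# dispatching to oneDNN-accelerated kernels on supported x86 CPUs.
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = model(x)

# Optionally compile with the Inductor backend (PyTorch 2.x) for further gains.
compiled_model = torch.compile(model)
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = compiled_model(x)
```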
AI Developer Tools
Resources
Documentation
Intel® Extension for PyTorch* Documentation
Int8 Quantization for x86 CPU in PyTorch
Accelerated CPU Inference with PyTorch Inductor Using torch.compile
Language Identification: Building an End-to-End AI Solution Using PyTorch
Accelerated Image Segmentation Using PyTorch
How to Build an Interactive Chat-Generation Model Using DialoGPT and PyTorch
How to Accelerate PyTorch Geometric on Intel® CPUs
Celebrate PyTorch 2.0 with New Performance Features for AI Developers
PyTorch 2.0 Takes a Leap Forward in Performance and Innovation
Stable Diffusion with Intel® Arc™ GPUs: Using PyTorch and Docker* on Windows
Training
Do It Yourself
More Resources
AI and Machine Learning Portfolio
Explore all Intel® AI content for developers.
AI Tools
Accelerate end-to-end machine learning and data science pipelines with optimized deep learning frameworks and high-performing Python* libraries.
Intel® AI Hardware
The Intel portfolio for AI hardware covers everything from data science workstations to data preprocessing, machine learning and deep learning modeling, and deployment in the data center and at the intelligent edge.