
AI Everywhere
Meet the only x86 data center processor with built-in AI acceleration and optimizations for popular software to enable AI everywhere.
Optimized Software for AI
The most popular AI libraries and frameworks are now optimized for 3rd Gen Intel® Xeon® Scalable processors, delivering up to 100x speedup over baseline.1 Optimizations span the data, modeling, and deployment pipeline, helping accelerate your end-to-end time to deployment.
Built-In AI Acceleration
Intel® Deep Learning Boost (Intel® DL Boost) delivers powerful AI training and inference acceleration under the hood of both 2nd and 3rd Gen Intel® Xeon® Scalable processors.
Up to
25x
better inference performance than AMD EPYC 7763 (object detection).2
Up to
1.5x
higher performance than AMD EPYC 7763 (Milan) across 20 key customer workloads.3
Up to
1.3x
higher performance than Nvidia A100 across 20 key AI customer workloads.4
Optimized and Flexible for the Future
Massive Memory for AI Applications
Support memory-intensive AI workloads with faster I/O, increased memory bandwidth, and greater capacity: eight memory channels and 64 lanes of PCIe Gen 4 per CPU. Additionally, 3rd Gen Intel® Xeon® Scalable processors support up to 6 TB of memory per socket with Intel® Optane™ persistent memory.
AI Security Innovations
Enable federated learning with Intel® Software Guard Extensions (Intel® SGX) to securely bring disparate data sources together for AI and analytics applications.
Broad AI Solutions Ecosystem
Benefit from Intel’s optimization work with more than 350 ecosystem partners to help your AI workloads run optimally, regardless of industry focus.
oneAPI
3rd Gen Intel® Xeon® Scalable processors are built on the open, standards-based oneAPI programming model for all architectures (CPU, GPU, FPGA and other accelerators) to avoid proprietary hardware lock-in and accelerate cross-architecture development.
Successful AI Projects
Learn how these customers and partners are pushing the boundaries of what’s possible with AI.
Find Your Intel® Xeon® Processor
3rd Gen Intel® Xeon® Scalable processors provide a flexible, cost-effective foundation for your AI projects. View your options and find your ideal CPU.
Frequently Asked Questions
What is the difference between a CPU and a GPU for AI?
A central processing unit (CPU) and a graphics processing unit (GPU) have very different roles. As the only data center CPU with built-in AI acceleration, hardware-enhanced security, and software optimizations, 3rd Gen Intel® Xeon® Scalable processors are the foundation for diverse AI workloads. They also offer enhanced Intel® Speed Select Technology (Intel® SST) capabilities that give you fine-grained control over CPU performance, helping optimize total cost of ownership (TCO). And since these processors are well suited for applications beyond AI, they help preserve your investment as needs evolve.
Do I need a GPU for deep learning?
No. Intel® Xeon® processors are used for deep learning today without GPU support. GPUs have been opportunistically applied to deep learning, but GPU architecture is not uniquely advantageous for AI. As AI continues to evolve, both deep learning and machine learning will need highly scalable architectures. Intel® architecture can support larger models and offers consistency from edge to cloud. These processors can run most AI workloads and particularly excel at recommendation systems, recurrent neural networks (RNNs), reinforcement learning, graph neural networks (GNNs), and classical machine learning.
How do I optimize my code for Intel® architecture?
Intel aims to make optimizing code on Intel® architecture as easy as possible, often with just one line of code. As part of the Intel® oneAPI AI Analytics Toolkit (AI Kit), developers can speed up their machine learning algorithms with a one-line code change using the Intel® Extension for Scikit-learn. On the data preparation side, developers who use the popular pandas library can likewise accelerate their applications with a one-line change by switching to the Intel® Distribution of Modin.
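The one-line changes described above can be sketched as follows. This is a minimal illustration, not official sample code: the package names (scikit-learn-intelex, modin) are real Intel open source projects, and the sketch falls back to stock scikit-learn if the extension is not installed.

```python
# Machine learning: patch scikit-learn with Intel-optimized implementations.
# (One-line change in practice; guarded here so the sketch also runs
# without the scikit-learn-intelex package installed.)
try:
    from sklearnex import patch_sklearn  # from scikit-learn-intelex
    patch_sklearn()   # subsequent scikit-learn imports use accelerated estimators
    patched = True
except ImportError:
    patched = False   # stock scikit-learn behavior is unchanged

# Data preparation: swap pandas for the Intel® Distribution of Modin.
# import modin.pandas as pd   # instead of: import pandas as pd
print("scikit-learn patched:", patched)
```

Existing scikit-learn and pandas code continues to work unchanged; the acceleration comes from substituting optimized implementations behind the same APIs.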
Why use a CPU for AI?
A CPU can offer data security and compliance, and can help offset application latency by right-sizing capacity without purchasing specialized hardware. When selecting a processor, note the number of cores and clock speed as indicators of power consumption and performance. As the only data center CPU with built-in AI acceleration, hardware-enhanced security, and capabilities like Intel® Speed Select Technology (Intel® SST), 3rd Gen Intel® Xeon® Scalable processors can help deliver higher performance per total cost of ownership (TCO) across a diverse set of AI workloads.
What comes after 3rd Gen Intel® Xeon® Scalable processors?
The next generation of Intel® Xeon® Scalable processors, code-named “Sapphire Rapids,” will feature a new microarchitecture designed to address the dynamic and increasingly demanding workloads of future data centers across compute, networking, and storage. It will include an all-new built-in AI acceleration engine, Intel® Advanced Matrix Extensions (Intel® AMX), with an expandable two-dimensional register file and new matrix multiply instructions that significantly enhance performance for a variety of deep learning workloads.