
AI Everywhere

Meet the only x86 data center processor with built-in AI acceleration and optimizations for popular software to enable AI everywhere.

Optimized and Flexible for the Future


Frequently Asked Questions

A central processing unit (CPU) and a graphics processing unit (GPU) play very different roles. As the only data center CPU with built-in AI acceleration, hardware-enhanced security, and software optimizations, 3rd Gen Intel® Xeon® Scalable processors provide a foundation for diverse AI workloads. They also offer enhanced Intel® Speed Select Technology (Intel® SST) capabilities that give you fine-grained control over CPU performance, helping optimize total cost of ownership (TCO). And because these processors are well suited for applications beyond AI, they help preserve your investment as needs evolve.

A GPU is not required. Today, Intel® Xeon® processors are being used for deep learning without GPU support. GPUs have been opportunistically applied to deep learning, but GPU architecture is not uniquely advantageous for AI. As AI continues to evolve, both deep learning and machine learning will need highly scalable architectures. Intel® architecture can support larger models and offers consistency from edge to cloud. These processors can run most AI workloads, and they particularly excel at recommendation systems, recurrent neural networks (RNNs), reinforcement learning, graph neural networks (GNNs), and classical machine learning.
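As a simple illustration, the sketch below runs deep learning inference entirely on the CPU, with no GPU present. It assumes PyTorch is installed, and the model and input are placeholders rather than an Intel-provided example.

```python
# Minimal CPU-only inference sketch (assumes PyTorch; model and input are placeholders).
import torch
import torch.nn as nn

# A small stand-in network; in practice this would be a trained model.
model = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
model.eval()

# Everything stays on the CPU: no .to("cuda") calls are needed.
batch = torch.randn(32, 128)
with torch.no_grad():
    scores = model(batch)

print(scores.shape)  # torch.Size([32, 10]), computed without any GPU
```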

Intel aims to make it as easy as possible to optimize code on Intel® architecture, often with just one line of code. As part of the Intel® oneAPI AI Analytics Toolkit (AI Kit), developers can speed up their machine learning algorithms with a one-line code change using the Intel® Extension for Scikit-learn. On the data preparation side, developers who rely on the widely used pandas library can likewise accelerate their applications with a one-line code change by switching to the Intel® Distribution of Modin.
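As a concrete sketch, assuming the scikit-learn-intelex and modin packages from the AI Kit are installed, the one-line changes look roughly like this:

```python
# One-line acceleration for scikit-learn workloads (package: scikit-learn-intelex).
from sklearnex import patch_sklearn
patch_sklearn()  # the single added line; subsequent sklearn imports use optimized kernels

from sklearn.cluster import KMeans
import numpy as np

X = np.random.rand(10_000, 16)
labels = KMeans(n_clusters=8, n_init=10).fit_predict(X)

# One-line acceleration for pandas-style data prep (package: modin).
# Modin runs on an execution engine such as Ray or Dask under the hood.
import modin.pandas as pd  # drop-in replacement for "import pandas as pd"
df = pd.DataFrame(X).describe()
```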

A CPU can offer data security and compliance, and it can help reduce application latency by right-sizing capacity without the purchase of specialized hardware. When selecting your processor, note the number of cores and the clock speed as indicators of power consumption and performance. As the only data center CPU with built-in AI acceleration, hardware-enhanced security, and software optimizations like Intel® Speed Select Technology (Intel® SST), 3rd Gen Intel® Xeon® Scalable processors can help deliver higher performance per total cost of ownership (TCO) across a diverse set of AI workloads.
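When evaluating a machine, one quick way to inspect these indicators is to read the operating system's CPU information. The following is a minimal Linux-only sketch; it assumes the kernel exposes /proc/cpuinfo and reports the avx512_vnni flag on parts that support Intel® DL Boost.

```python
# Quick Linux-only check of core count and AI-related CPU features (sketch).
import os

print("Logical CPUs:", os.cpu_count())

# /proc/cpuinfo lists instruction-set flags; "avx512_vnni" indicates the
# Vector Neural Network Instructions used by Intel DL Boost for INT8 inference.
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
            break

print("AVX-512 VNNI (Intel DL Boost):", "avx512_vnni" in flags)
```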

The next generation of Intel® Xeon® Scalable processors, code-named “Sapphire Rapids,” will feature a new microarchitecture designed to address the dynamic and increasingly demanding workloads in future data centers across compute, networking, and storage. It will include an all-new built-in AI acceleration engine, Intel® Advanced Matrix Extensions (Intel® AMX), with an expandable two-dimensional register file and new matrix multiply instructions that significantly enhance performance for a variety of deep learning workloads.
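Developers typically will not program AMX directly; libraries such as oneDNN, and the frameworks built on it, can use the new matrix instructions automatically when the hardware supports them. Under that assumption, the rough sketch below shows one way such hardware could be exercised from a framework: running a placeholder PyTorch model on the CPU with bfloat16 autocast.

```python
# Sketch: bfloat16 inference on CPU via framework autocast (assumes PyTorch).
# On hardware with Intel AMX, the underlying oneDNN kernels can use the new
# matrix multiply instructions transparently; no AMX-specific code is needed.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 64))
model.eval()

x = torch.randn(16, 512)
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = model(x)

print(out.dtype)  # torch.bfloat16
```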

Product and Performance Information

1. Li, Wei. “Software AI accelerators: AI performance boost for free.” Intel, 2021. https://www.intel.com/content/www/us/en/developer/articles/technical/software-ai-accelerators-ai-performance-boost-for-free.html
2. 3rd Gen Intel® Xeon® Scalable processors supporting Intel® DL Boost INT8 deliver up to 25x better inference throughput vs. AMD Milan FP32 across a diverse set of AI workloads that include image classification, object detection, natural language processing, and image recognition. Visit Intel.com/performanceindex for more information.
3. 1.5x higher AI performance with 3rd Gen Intel® Xeon® Scalable processors supporting Intel® DL Boost vs. FP32 AMD EPYC Milan (geomean of 20 workloads including logistic regression inference, logistic regression fit, ridge regression inference, ridge regression fit, linear regression inference, linear regression fit, elastic net inference, XGBoost fit, XGBoost predict, SSD-ResNet34 inference, ResNet50-v1.5 inference, ResNet50-v1.5 training, BERT-Large SQuAD inference, kmeans inference, kmeans fit, brute_knn inference, SVC inference, SVC fit, dbscan fit, traintestsplit). Visit Intel.com/performanceindex for more information.
4. 1.3x higher AI performance with 3rd Gen Intel® Xeon® Scalable processors supporting Intel® DL Boost vs. NVIDIA A100 (geomean of 20 workloads including logistic regression inference, logistic regression fit, ridge regression inference, ridge regression fit, linear regression inference, linear regression fit, elastic net inference, XGBoost fit, XGBoost predict, SSD-ResNet34 inference, ResNet50-v1.5 inference, ResNet50-v1.5 training, BERT-Large SQuAD inference, kmeans inference, kmeans fit, brute_knn inference, SVC inference, SVC fit, dbscan fit, traintestsplit). Visit Intel.com/performanceindex for more information.