
Developer Tools for Intel® Xeon® Processors

With Intel® Software Development Tools and optimized AI frameworks powered by oneAPI, developers can maximize application performance by activating the advanced capabilities of the latest-generation Intel® Xeon® processors for AI and accelerated computing, including Intel® Advanced Matrix Extensions (Intel® AMX), Intel® Advanced Vector Extensions 512 (Intel® AVX-512), and Intel® Advanced Vector Extensions 2 (Intel® AVX2).
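
A quick way to confirm which of these instruction set extensions a given machine exposes is to inspect the CPU flags before enabling an accelerated code path. The snippet below is a minimal sketch that parses /proc/cpuinfo on Linux; the flag names (amx_tile, amx_bf16, amx_int8, avx512f, avx2) follow the Linux kernel's naming and assume a Linux target environment.

    # Minimal sketch: detect AMX/AVX capabilities on Linux by reading /proc/cpuinfo.
    # Flag names follow the Linux kernel's conventions; adjust for other operating systems.

    def cpu_flags(path="/proc/cpuinfo"):
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    if __name__ == "__main__":
        flags = cpu_flags()
        for feature in ("amx_tile", "amx_bf16", "amx_int8", "avx512f", "avx2"):
            print(f"{feature}: {'yes' if feature in flags else 'no'}")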

 

 

 

Accelerate AI Frameworks and Applications

Accelerate generative AI (GenAI), LLMs, and other deep learning and data science pipelines using the Intel® oneAPI Base Toolkit and AI Tools from Intel.

Because Intel® oneAPI library optimizations are regularly upstreamed to the latest versions of PyTorch*, TensorFlow*, and other leading deep learning frameworks, developers can achieve significant out-of-the-box performance improvements on Intel hardware using their existing AI workflows.

Intel® oneAPI Deep Neural Network Library (oneDNN) accelerates deep learning and generative AI models on Intel® Xeon® 6 processors with P-cores, the first Intel CPU platform to support Intel AMX acceleration with FP16 and complex FP16 instructions, building on the existing INT8 and BF16 support.

  • Up to 3x better Llama 2 performance compared to the prior generation for large language models (LLMs)1
  • Up to 1.86x gen-to-gen performance improvement in AI inferencing2

Get the Quick Start Guide to Accelerate AI with Intel AMX and Intel software optimizations.
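
For example, when PyTorch runs on an Intel AMX-capable Xeon processor, the oneDNN CPU backend can pick up the BF16 tile instructions once a model is executed under bfloat16 autocast. The following is a minimal sketch assuming a recent stock PyTorch build; the model and input shapes are placeholders for illustration.

    import torch

    # Placeholder model and input; any CPU inference workload follows the same pattern.
    model = torch.nn.Sequential(
        torch.nn.Linear(1024, 4096),
        torch.nn.ReLU(),
        torch.nn.Linear(4096, 1024),
    ).eval()
    x = torch.randn(64, 1024)

    # Run inference under bfloat16 autocast on the CPU. On AMX-capable Xeon
    # processors, oneDNN can dispatch matrix multiplications and convolutions
    # to the AMX BF16 tile instructions.
    with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        y = model(x)
    print(y.shape)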

 

 

Accelerate AI and General Compute Workloads

Build, analyze, optimize, and scale applications with the latest techniques in vectorization, multithreading, multi-node parallelization, and memory optimization using the Intel oneAPI Base Toolkit, Intel® Distribution for Python*, and Intel® oneAPI HPC Toolkit.

  • Accelerate math functions across domains such as BLAS, LAPACK, and FFT with Intel® oneAPI Math Kernel Library (oneMKL) performance tuning for Intel Xeon 6 processors, delivering up to 2.5x better HPCG performance than the prior generation with MRDIMM.3 (A minimal sketch follows this list.)
  • Intel® oneAPI DPC++/C++ Compiler improves data access through cache preloading, reducing latency, and supports the Intel AMX FP16 instructions used by oneDNN. Get up to a 1.6x C++ application performance advantage on Linux relative to other compilers on Intel® Xeon® 6 processors.4
  • Intel® Fortran Compiler supports back-end code generation and can achieve up to a 1.46x application performance advantage on Linux relative to other compilers on Intel® Xeon® 6 processors.5
  • Intel® MPI Library now supports 128-core tuning and optimizations for scale-out and scale-up.
  • Intel® VTune™ Profiler adds features such as hotspots, microarchitecture and memory access, I/O, and platform diagram analyses that make it easier to identify performance bottlenecks and memory issues.
  • Intel® oneAPI Threading Building Blocks (oneTBB) is enhanced to scale parallel execution across the higher CPU core counts of Intel Xeon 6 processors, accelerating multithreaded applications.
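
As a concrete illustration of the oneMKL item above, the Intel Distribution for Python ships NumPy builds linked against oneMKL, so ordinary array code picks up the tuned BLAS, LAPACK, and FFT kernels without source changes. The sketch below is a generic NumPy example under that assumption; the array sizes are arbitrary.

    import numpy as np

    # With an oneMKL-linked NumPy (e.g., from the Intel Distribution for Python),
    # these calls dispatch to tuned BLAS, LAPACK, and FFT kernels on Xeon CPUs.
    rng = np.random.default_rng(0)
    a = rng.standard_normal((2048, 2048))
    b = rng.standard_normal((2048, 2048))

    c = a @ b                          # Level-3 BLAS: general matrix multiply
    w = np.linalg.eigvalsh(c @ c.T)    # LAPACK: symmetric eigenvalue solve
    spectrum = np.fft.rfft(a, axis=0)  # FFT along the first axis

    print(c.shape, w[-1], spectrum.shape)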

Intel Software Development Tools:

  • Intel AI Tools
  • Intel oneAPI Base Toolkit
  • Intel® oneAPI HPC Toolkit
  • Intel® Distribution for Python*
  • Intel® Fortran Compiler
  • Intel® oneAPI Deep Neural Network Library
  • Intel® oneAPI DPC++/C++ Compiler
  • Intel® oneAPI Math Kernel Library
  • Intel® VTune™ Profiler
  • Intel® oneAPI Threading Building Blocks

1) See [9A2] at intel.com/processorclaims: Intel Xeon 6 processors. Results may vary.

2) See [9A3] at intel.com/processorclaims: Intel Xeon 6 processors. Results may vary.

3) See [9H10] at intel.com/processorclaims: Intel Xeon 6 processors. Results may vary. More details on oneMKL benchmarks for Intel Xeon 6 processors.

4) See https://www.intel.com/content/www/us/en/developer/tools/oneapi/dpc-compiler.html

5) See https://www.intel.com/content/www/us/en/developer/tools/oneapi/fortran-compiler.html

Multiarchitecture Acceleration

Intel Software Development Tools offer a complete set of advanced compilers, libraries, analysis tools, and optimized frameworks that help developers optimize multiarchitecture applications on Intel CPUs, GPUs, and FPGAs.

In multiarchitecture systems with Intel Xeon processors and Intel GPUs, using a single code base through oneAPI programming delivers both productivity and performance: it accelerates time to value when developing high-performance applications, and future-ready programming ensures your software investments continue to add value.

Learn how Intel Software Development Tools enable multiarchitecture acceleration.

Migrate from CUDA* to C++ with SYCL* to unlock your code from the constraints of vendor-specific tools and accelerators.

Get Started

Develop and scale your code with confidence using Intel Software Development Tools, the highly productive development stack for AI and open accelerated computing.

Developers can obtain cloud-based access to the latest Intel hardware, including Intel Xeon 6 processors, and Intel Software Development Tools on the Intel® Tiber™ AI Cloud.

Learn how to use Intel VTune Profiler to:

  • Detect and Fix Scheduling Overhead in an Intel oneTBB Application
  • Understand the Use Efficiency of Intel® Data Direct I/O Technology, a Hardware Feature of Intel Xeon Processors
  • Analyze CPU Use of Your OpenMP* or Hybrid OpenMP-MPI Application and Identify Causes of Possible Inefficiencies
  • Detect and Fix Frequent Parallel Bottlenecks of OpenMP Programs, such as Imbalance on Barriers and Scheduling Overhead
  • Identify Processor Core Underuse with OpenMP Serial Time

Learn More about the Latest Intel Xeon Processors
