| Framework Version | Model | Usage | Precision | Throughput | Perf/Watt | Latency (ms) | Batch size |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | ChatGLM3-6B Token size 1024/128 | Natural Language Processing | avx_fp32 | 22.28 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | ChatGLM3-6B Token size 2016/32 | Natural Language Processing | avx_fp32 | 22.28 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | ChatGLM3-6B Token size 1024/128 | Natural Language Processing | avx_fp32 | 195.41 tokens/s | | | 30 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | ChatGLM3-6B Token size 2016/32 | Natural Language Processing | avx_fp32 | 172.08 tokens/s | | | 30 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | ChatGLM3-6B Token size 1024/128 | Natural Language Processing | amx_int8 | 79.08 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | ChatGLM3-6B Token size 2016/32 | Natural Language Processing | amx_int8 | 73.17 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | ChatGLM3-6B Token size 1024/128 | Natural Language Processing | amx_int8 | 525.55 tokens/s | | | 30 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | ChatGLM3-6B Token size 2016/32 | Natural Language Processing | amx_int8 | 352.25 tokens/s | | | 30 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | ChatGLM3-6B Token size 1024/128 | Natural Language Processing | amx_bf16 | 43.24 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | ChatGLM3-6B Token size 2016/32 | Natural Language Processing | amx_bf16 | 41.56 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | ChatGLM3-6B Token size 1024/128 | Natural Language Processing | amx_bf16 | 392.6 tokens/s | | | 30 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | ChatGLM3-6B Token size 2016/32 | Natural Language Processing | amx_bf16 | 280.2 tokens/s | | | 26 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | ChatGLM3-6B Token size 1024/128 | Natural Language Processing | amx_bf32 | 22.88 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | ChatGLM3-6B Token size 2016/32 | Natural Language Processing | amx_bf32 | 22.88 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | ChatGLM3-6B Token size 2016/32 | Natural Language Processing | amx_bf32 | 765.42 tokens/s | | | 30 |
| OpenVINO 2024.4.0 Inf LLM | ChatGLM3-6B Token size 2016/32 | Natural Language Processing | fp32 | 23.03 tokens/s | | | 1 |
| OpenVINO 2024.4.0 Inf LLM | ChatGLM3-6B Token size 1024/128 | Natural Language Processing | fp32 | 22.53 tokens/s | | | 1 |
| OpenVINO 2024.4.0 Inf LLM | ChatGLM3-6B Token size 2016/32 | Natural Language Processing | fp32 | 168.20 tokens/s | | | 16 |
| OpenVINO 2024.4.0 Inf LLM | ChatGLM3-6B Token size 1024/128 | Natural Language Processing | fp32 | 129.32 tokens/s | | | 32 |
| OpenVINO 2024.4.0 Inf LLM | ChatGLM3-6B Token size 2016/32 | Natural Language Processing | amx_int8 | 75.20 tokens/s | | | 1 |
| OpenVINO 2024.4.0 Inf LLM | ChatGLM3-6B Token size 1024/128 | Natural Language Processing | amx_int8 | 69.49 tokens/s | | | 1 |
| OpenVINO 2024.4.0 Inf LLM | ChatGLM3-6B Token size 2016/32 | Natural Language Processing | amx_int8 | 467.34 tokens/s | | | 16 |
| OpenVINO 2024.4.0 Inf LLM | ChatGLM3-6B Token size 1024/128 | Natural Language Processing | amx_int8 | 260.83 tokens/s | | | 32 |
| OpenVINO 2024.4.0 Inf LLM | ChatGLM3-6B Token size 2016/32 | Natural Language Processing | amx_bf16 | 41.60 tokens/s | | | 1 |
| OpenVINO 2024.4.0 Inf LLM | ChatGLM3-6B Token size 1024/128 | Natural Language Processing | amx_bf16 | 39.77 tokens/s | | | 1 |
| OpenVINO 2024.4.0 Inf LLM | ChatGLM3-6B Token size 2016/32 | Natural Language Processing | amx_bf16 | 389.80 tokens/s | | | 16 |
| OpenVINO 2024.4.0 Inf LLM | ChatGLM3-6B Token size 1024/128 | Natural Language Processing | amx_bf16 | 222.83 tokens/s | | | 32 |
| Intel PyTorch 2.1 DeepSpeed | GPT-J 6B Token size 1024/128 | text-generation, Beam Search, Width=4 | int8 | | | 351 | |
| Intel PyTorch 2.1 DeepSpeed | GPT-J 6B Token size 1024/128 | text-generation, Beam Search, Width=4 | int8 | 173 tokens/s | | 92.58 | |
| Intel PyTorch 2.1 DeepSpeed | GPT-J 6B Token size 1024/128 | text-generation, Beam Search, Width=4 | bf16 | | | 52.51 | |
| Intel PyTorch 2.1 DeepSpeed | GPT-J 6B Token size 1024/128 | text-generation, Beam Search, Width=4 | bf16 | | | 98.58 | |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | GPT-J-6B Token size 1024/128 | Natural Language Processing | avx_fp32 | 22.64 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | GPT-J-6B Token size 2016/32 | Natural Language Processing | avx_fp32 | 21.95 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | GPT-J-6B Token size 1024/128 | Natural Language Processing | avx_fp32 | 141.65 tokens/s | | | 21 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | GPT-J-6B Token size 2016/32 | Natural Language Processing | avx_fp32 | 106.92 tokens/s | | | 15 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | GPT-J-6B Token size 1024/128 | Natural Language Processing | amx_int8 | 78.05 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | GPT-J-6B Token size 2016/32 | Natural Language Processing | amx_int8 | 72.73 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | GPT-J-6B Token size 1024/128 | Natural Language Processing | amx_int8 | 430.84 tokens/s | | | 25 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | GPT-J-6B Token size 2016/32 | Natural Language Processing | amx_int8 | 304.49 tokens/s | | | 25 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | GPT-J-6B Token size 1024/128 | Natural Language Processing | amx_bf16 | 43.68 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | GPT-J-6B Token size 2016/32 | Natural Language Processing | amx_bf16 | 41.59 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | GPT-J-6B Token size 1024/128 | Natural Language Processing | amx_bf16 | 352.17 tokens/s | | | 27 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | GPT-J-6B Token size 2016/32 | Natural Language Processing | amx_bf16 | 262.67 tokens/s | | | 27 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | GPT-J-6B Token size 1024/128 | Natural Language Processing | amx_bf32 | 22.68 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | GPT-J-6B Token size 2016/32 | Natural Language Processing | amx_bf32 | 21.84 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | GPT-J-6B Token size 1024/128 | Natural Language Processing | amx_bf32 | 172.78 tokens/s | | | 21 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | GPT-J-6B Token size 2016/32 | Natural Language Processing | amx_bf32 | 123.58 tokens/s | | | 15 |
| OpenVINO 2024.4.0 Inf LLM | GPT-J-6B Token size 2016/32 | Natural Language Processing | avx_fp32 | 23.20 tokens/s | | | 1 |
| OpenVINO 2024.4.0 Inf LLM | GPT-J-6B Token size 1024/128 | Natural Language Processing | avx_fp32 | 22.34 tokens/s | | | 1 |
| OpenVINO 2024.4.0 Inf LLM | GPT-J-6B Token size 2016/32 | Natural Language Processing | avx_fp32 | 132.50 tokens/s | | | 16 |
| OpenVINO 2024.4.0 Inf LLM | GPT-J-6B Token size 1024/128 | Natural Language Processing | avx_fp32 | 90.51 tokens/s | | | 32 |
| OpenVINO 2024.4.0 Inf LLM | GPT-J-6B Token size 2016/32 | Natural Language Processing | amx_int8 | 77.03 tokens/s | | | 1 |
| OpenVINO 2024.4.0 Inf LLM | GPT-J-6B Token size 1024/128 | Natural Language Processing | amx_int8 | 71.07 tokens/s | | | 1 |
| OpenVINO 2024.4.0 Inf LLM | GPT-J-6B Token size 2016/32 | Natural Language Processing | amx_int8 | 446.21 tokens/s | | | 16 |
| OpenVINO 2024.4.0 Inf LLM | GPT-J-6B Token size 1024/128 | Natural Language Processing | amx_int8 | 230.70 tokens/s | | | 32 |
| OpenVINO 2024.4.0 Inf LLM | GPT-J-6B Token size 2016/32 | Natural Language Processing | amx_bf16 | 43.13 tokens/s | | | 1 |
| OpenVINO 2024.4.0 Inf LLM | GPT-J-6B Token size 1024/128 | Natural Language Processing | amx_bf16 | 41.29 tokens/s | | | 1 |
| OpenVINO 2024.4.0 Inf LLM | GPT-J-6B Token size 2016/32 | Natural Language Processing | amx_bf16 | 379.96 tokens/s | | | 16 |
| OpenVINO 2024.4.0 Inf LLM | GPT-J-6B Token size 1024/128 | Natural Language Processing | amx_bf16 | 206.26 tokens/s | | | 32 |
| Intel PyTorch 2.2 MLPerf v4.0 | GPT-J (MLPerf v4.0, offline, 99.0% acc) | CNN-DailyMail News Text Summarization (input 13,368) | int4 | 3.61 samp/s | | | 8 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-13B Token size 1024/128 | Natural Language Processing | avx_fp32 | 10.67 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-13B Token size 2016/32 | Natural Language Processing | avx_fp32 | 10.25 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-13B Token size 1024/128 | Natural Language Processing | avx_fp32 | 56.51 tokens/s | | | 10 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-13B Token size 2016/32 | Natural Language Processing | avx_fp32 | 40.28 tokens/s | | | 6 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-13B Token size 1024/128 | Natural Language Processing | amx_int8 | 38.37 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-13B Token size 2016/32 | Natural Language Processing | amx_int8 | 35.69 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-13B Token size 1024/128 | Natural Language Processing | amx_int8 | 224.35 tokens/s | | | 19 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-13B Token size 2016/32 | Natural Language Processing | amx_int8 | 138.8 tokens/s | | | 18 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-13B Token size 1024/128 | Natural Language Processing | amx_bf16 | 20.49 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-13B Token size 2016/32 | Natural Language Processing | amx_bf16 | 19.56 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-13B Token size 1024/128 | Natural Language Processing | amx_bf16 | 187.43 tokens/s | | | 30 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-13B Token size 2016/32 | Natural Language Processing | amx_bf16 | 124.03 tokens/s | | | 22 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-13B Token size 1024/128 | Natural Language Processing | amx_bf32 | 10.68 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-13B Token size 2016/32 | Natural Language Processing | amx_bf32 | 10.32 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-13B Token size 1024/128 | Natural Language Processing | amx_bf32 | 63.42 tokens/s | | | 10 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-13B Token size 2016/32 | Natural Language Processing | amx_bf32 | 42.16 tokens/s | | | 6 |
| OpenVINO 2024.4.0 Inf LLM | LLaMA2-13B Token size 2016/32 | Natural Language Processing | avx_fp32 | 10.60 tokens/s | | | 1 |
| OpenVINO 2024.4.0 Inf LLM | LLaMA2-13B Token size 1024/128 | Natural Language Processing | avx_fp32 | 10.27 tokens/s | | | 1 |
| OpenVINO 2024.4.0 Inf LLM | LLaMA2-13B Token size 2016/32 | Natural Language Processing | avx_fp32 | 61.31 tokens/s | | | 16 |
| OpenVINO 2024.4.0 Inf LLM | LLaMA2-13B Token size 1024/128 | Natural Language Processing | avx_fp32 | 46.52 tokens/s | | | 32 |
| OpenVINO 2024.4.0 Inf LLM | LLaMA2-13B Token size 2016/32 | Natural Language Processing | amx_int8 | 35.62 tokens/s | | | 1 |
| OpenVINO 2024.4.0 Inf LLM | LLaMA2-13B Token size 1024/128 | Natural Language Processing | amx_int8 | 32.67 tokens/s | | | 1 |
| OpenVINO 2024.4.0 Inf LLM | LLaMA2-13B Token size 2016/32 | Natural Language Processing | amx_int8 | 219.04 tokens/s | | | 16 |
| OpenVINO 2024.4.0 Inf LLM | LLaMA2-13B Token size 1024/128 | Natural Language Processing | amx_int8 | 100.83 tokens/s | | | 32 |
| OpenVINO 2024.4.0 Inf LLM | LLaMA2-13B Token size 2016/32 | Natural Language Processing | amx_bf16 | 20.07 tokens/s | | | 1 |
| OpenVINO 2024.4.0 Inf LLM | LLaMA2-13B Token size 1024/128 | Natural Language Processing | amx_bf16 | 19.17 tokens/s | | | 1 |
| OpenVINO 2024.4.0 Inf LLM | LLaMA2-13B Token size 2016/32 | Natural Language Processing | amx_bf16 | 181.39 tokens/s | | | 16 |
| OpenVINO 2024.4.0 Inf LLM | LLaMA2-13B Token size 1024/128 | Natural Language Processing | amx_bf16 | 87.99 tokens/s | | | 32 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-7B Token size 1024/128 | Natural Language Processing | avx_fp32 | 20.07 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-7B Token size 2016/32 | Natural Language Processing | avx_fp32 | 19.11 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-7B Token size 1024/128 | Natural Language Processing | avx_fp32 | 126.26 tokens/s | | | 19 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-7B Token size 2016/32 | Natural Language Processing | avx_fp32 | 92.14 tokens/s | | | 15 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-7B Token size 1024/128 | Natural Language Processing | amx_int8 | 70.59 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-7B Token size 2016/32 | Natural Language Processing | amx_int8 | 64.78 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-7B Token size 1024/128 | Natural Language Processing | amx_int8 | 327.4 tokens/s | | | 18 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-7B Token size 2016/32 | Natural Language Processing | amx_int8 | 248.89 tokens/s | | | 21 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-7B Token size 1024/128 | Natural Language Processing | amx_bf16 | 38.9 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-7B Token size 2016/32 | Natural Language Processing | amx_bf16 | 36.67 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-7B Token size 1024/128 | Natural Language Processing | amx_bf16 | 310.58 tokens/s | | | 29 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-7B Token size 2016/32 | Natural Language Processing | amx_bf16 | 220.41 tokens/s | | | 27 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-7B Token size 1024/128 | Natural Language Processing | amx_bf32 | 20.08 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-7B Token size 2016/32 | Natural Language Processing | amx_bf32 | 19.12 tokens/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-7B Token size 1024/128 | Natural Language Processing | amx_bf32 | 144.72 tokens/s | | | 19 |
| Intel PyTorch 2.6.0 + IPEX Inf LLMs | LLaMA2-7B Token size 2016/32 | Natural Language Processing | amx_bf32 | 106.95 tokens/s | | | 15 |
| OpenVINO 2024.4.0 Inf LLM | LLaMA2-7B Token size 2016/32 | Natural Language Processing | avx_fp32 | 20.09 tokens/s | | | 1 |
| OpenVINO 2024.4.0 Inf LLM | LLaMA2-7B Token size 1024/128 | Natural Language Processing | avx_fp32 | 19.47 tokens/s | | | 1 |
| OpenVINO 2024.4.0 Inf LLM | LLaMA2-7B Token size 2016/32 | Natural Language Processing | avx_fp32 | 122.89 tokens/s | | | 16 |
| OpenVINO 2024.4.0 Inf LLM | LLaMA2-7B Token size 1024/128 | Natural Language Processing | avx_fp32 | 85.47 tokens/s | | | 32 |
| OpenVINO 2024.4.0 Inf LLM | LLaMA2-7B Token size 2016/32 | Natural Language Processing | amx_int8 | 63.28 tokens/s | | | 1 |
| OpenVINO 2024.4.0 Inf LLM | LLaMA2-7B Token size 1024/128 | Natural Language Processing | amx_int8 | 58.50 tokens/s | | | 1 |
| OpenVINO 2024.4.0 Inf LLM | LLaMA2-7B Token size 2016/32 | Natural Language Processing | amx_int8 | 320.79 tokens/s | | | 16 |
| OpenVINO 2024.4.0 Inf LLM | LLaMA2-7B Token size 1024/128 | Natural Language Processing | amx_int8 | 167.44 tokens/s | | | 64 |
| OpenVINO 2024.4.0 Inf LLM | LLaMA2-7B Token size 2016/32 | Natural Language Processing | amx_bf16 | 37.70 tokens/s | | | 1 |
| OpenVINO 2024.4.0 Inf LLM | LLaMA2-7B Token size 1024/128 | Natural Language Processing | amx_bf16 | 36.06 tokens/s | | | 1 |
| OpenVINO 2024.4.0 Inf LLM | LLaMA2-7B Token size 2016/32 | Natural Language Processing | amx_bf16 | 276.86 tokens/s | | | 16 |
| OpenVINO 2024.4.0 Inf LLM | LLaMA2-7B Token size 1024/128 | Natural Language Processing | amx_bf16 | 151.64 tokens/s | | | 32 |
| Intel PyTorch 2.1 DeepSpeed | LLaMA2-7B Token size 1024/128 | text-generation, Beam Search, Width=4 | int8 | | | 41.51 | |
| Intel PyTorch 2.1 DeepSpeed | LLaMA2-7B Token size 1024/128 | text-generation, Beam Search, Width=4 | int8 | 149.5 tokens/s | | 1078 | |
| Intel PyTorch 2.1 DeepSpeed | LLaMA2-7B Token size 1024/128 | text-generation, Beam Search, Width=4 | bf16 | | | 59.51 | |
| Intel PyTorch 2.1 DeepSpeed | LLaMA2-7B Token size 1024/128 | text-generation, Beam Search, Width=4 | bf16 | 142.2 tokens/s | | 112.58 | |
| OpenVINO 2023.2 | LLaMA2-7B Token size 32/512 | GenAI_chat | int4 | 11.3 tokens/s | | 88.44 | 1 |
| OpenVINO 2023.2 | LLaMA2-7B Token size 32/512 | GenAI_chat | int8 | 13.5 tokens/s | | 73.74 | 1 |
| OpenVINO 2023.2 | LLaMA2-7B Token size 32/512 | GenAI_chat | fp32 | 11.3 tokens/s | | 88.39 | 1 |
| OpenVINO 2023.2 | LLaMA2-7B Token size 80/512 | GenAI_chat | int4 | 11.4 tokens/s | | 87.17 | 1 |
| OpenVINO 2023.2 | LLaMA2-7B Token size 80/512 | GenAI_chat | int8 | 13.6 tokens/s | | 73.09 | 1 |
| OpenVINO 2023.2 | LLaMA2-7B Token size 80/512 | GenAI_chat | fp32 | 11.2 tokens/s | | 89 | 1 |
| OpenVINO 2023.2 | LLaMA2-7B Token size 142/512 | GenAI_chat | int4 | 11.5 tokens/s | | 86.63 | 1 |
| OpenVINO 2023.2 | LLaMA2-7B Token size 142/512 | GenAI_chat | int8 | 13.3 tokens/s | | 75.15 | 1 |
| OpenVINO 2023.2 | LLaMA2-7B Token size 142/512 | GenAI_chat | fp32 | 11.1 tokens/s | | 89.73 | 1 |
| OpenVINO 2023.2 | Stable Diffusion 2.1, 20 Steps, 64 Prompts | GenAI_text_image | int8 | 0.24 img/s | | 4,160 | 1 |
| MLPerf Inference v4.0 | Stable Diffusion XL (offline) | Image Generation | bf16 | 0.19 samp/s | | | 8 |
| OpenVINO 2024.4.0 | Stable-Diffusion | Image Generation | fp32 | 0.05 samp/s | | | 1 |
| OpenVINO 2024.4.0 | Stable-Diffusion | Image Generation | amx_int8 | 0.12 samp/s | | | 1 |
| OpenVINO 2024.4.0 | Stable-Diffusion | Image Generation | amx_bf16 | 0.13 samp/s | | | 1 |
| MLPerf Inference v4.0 | ResNet50 v1.5 (offline) | Image Recognition | int8 | 25,289.6 samp/s | | | 256 |
| Intel PyTorch 2.1 | ResNet50 v1.5 | Image Recognition | int8 | 12,862.56 img/s | 13.23 | | 1 |
| Intel PyTorch 2.1 | ResNet50 v1.5 | Image Recognition | int8 | 19,386.47 img/s | 19.21 | | 64 |
| Intel PyTorch 2.1 | ResNet50 v1.5 | Image Recognition | bf16 | 8,211.8 img/s | 8.13 | | 1 |
| Intel PyTorch 2.1 | ResNet50 v1.5 | Image Recognition | bf16 | 10,187.87 img/s | 10.82 | | 64 |
| Intel PyTorch 2.1 | ResNet50 v1.5 | Image Recognition | fp32 | 1,773.68 img/s | 1.74 | | 1 |
| Intel PyTorch 2.1 | ResNet50 v1.5 | Image Recognition | fp32 | 1,703.77 img/s | 1.57 | | 64 |
| Intel PyTorch 2.1 | ResNet50 v1.5 | Image Recognition | bf32 | 2,431.26 img/s | 2.4 | | 1 |
| Intel PyTorch 2.1 | ResNet50 v1.5 | Image Recognition | bf32 | 2,686.97 img/s | 2.67 | | 64 |
| Intel TensorFlow 2.14 | ResNet50 v1.5 | Image Recognition | int8 | 9,726.18 img/s | 9.67 | | 1 |
| Intel TensorFlow 2.14 | ResNet50 v1.5 | Image Recognition | int8 | 16,036.8 img/s | 17.01 | | 32 |
| Intel TensorFlow 2.14 | ResNet50 v1.5 | Image Recognition | bf16 | 6,782.09 img/s | 7.04 | | 1 |
| Intel TensorFlow 2.14 | ResNet50 v1.5 | Image Recognition | bf16 | 9,312.72 img/s | 9.4 | | 32 |
| Intel TensorFlow 2.14 | ResNet50 v1.5 | Image Recognition | fp32 | 1,560.99 img/s | 1.45 | | 1 |
| Intel TensorFlow 2.14 | ResNet50 v1.5 | Image Recognition | fp32 | 1,663.44 img/s | 1.57 | | 32 |
| Intel TensorFlow 2.14 | ResNet50 v1.5 | Image Recognition | bf32 | 2,013.88 img/s | 1.84 | | 1 |
| Intel TensorFlow 2.14 | ResNet50 v1.5 | Image Recognition | bf32 | 2,874.29 img/s | 2.73 | | 32 |
| OpenVINO 2023.2 | ResNet50 v1.5 | Image Recognition | int8 | 18,674.37 img/s | 26.68 | | 1 |
| OpenVINO 2023.2 | ResNet50 v1.5 | Image Recognition | bf16 | 11,537.06 img/s | 16.48 | | 1 |
| OpenVINO 2023.2 | ResNet50 v1.5 | Image Recognition | fp32 | 1,721.58 img/s | 2.46 | | 1 |
| MLPerf Inference v4.0 | BERT-Large (offline, 99.0% acc) | Natural Language Processing | int8 | 1,668.5 samp/s | | | 1,300 |
| OpenVINO 2024.4.0 | BERTLarge | Natural Language Processing | fp32 | 55.82 sent/s | | | 1 |
| OpenVINO 2024.4.0 | BERTLarge | Natural Language Processing | fp32 | 52.53 sent/s | | | 32 |
| OpenVINO 2024.4.0 | BERTLarge | Natural Language Processing | amx_int8 | 415.88 sent/s | | | 1 |
| OpenVINO 2024.4.0 | BERTLarge | Natural Language Processing | amx_int8 | 395.41 sent/s | | | 64 |
| OpenVINO 2024.4.0 | BERTLarge | Natural Language Processing | amx_bf16 | 246.91 sent/s | | | 1 |
| OpenVINO 2024.4.0 | BERTLarge | Natural Language Processing | amx_bf16 | 253.7 sent/s | | | 32 |
| Intel PyTorch 2.1 | BERTLarge | Natural Language Processing | int8 | 411.14 sent/s | 0.42 | | 1 |
| Intel PyTorch 2.1 | BERTLarge | Natural Language Processing | int8 | 455.33 sent/s | 0.45 | | 16 |
| Intel PyTorch 2.1 | BERTLarge | Natural Language Processing | bf16 | 243.89 sent/s | 0.24 | | 1 |
| Intel PyTorch 2.1 | BERTLarge | Natural Language Processing | bf16 | 278.00 sent/s | 0.25 | | 44 |
| Intel PyTorch 2.1 | BERTLarge | Natural Language Processing | fp32 | 44.56 sent/s | 0.04 | | 1 |
| Intel PyTorch 2.1 | BERTLarge | Natural Language Processing | fp32 | 50.49 sent/s | 0.05 | | 16 |
| Intel PyTorch 2.1 | BERTLarge | Natural Language Processing | bf32 | 98.49 sent/s | 0.09 | | 1 |
| Intel PyTorch 2.1 | BERTLarge | Natural Language Processing | bf32 | 96.98 sent/s | 0.09 | | 16 |
| Intel PyTorch 2.6.0 + IPEX | BERT Large | Natural Language Processing | avx_fp32 | 52.79 sent/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX | BERT Large | Natural Language Processing | avx_fp32 | 51.67 sent/s | | | 12 |
| Intel PyTorch 2.6.0 + IPEX | BERT Large | Natural Language Processing | amx_int8 | 431.06 sent/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX | BERT Large | Natural Language Processing | amx_int8 | 539.05 sent/s | | | 44 |
| Intel PyTorch 2.6.0 + IPEX | BERT Large | Natural Language Processing | amx_bf16 | 240.04 sent/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX | BERT Large | Natural Language Processing | amx_bf16 | 280.08 sent/s | | | 36 |
| Intel PyTorch 2.6.0 + IPEX | BERT Large | Natural Language Processing | amx_bf32 | 96.74 sent/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX | BERT Large | Natural Language Processing | amx_bf32 | 97.76 sent/s | | | 12 |
| Intel TensorFlow 2.19.0 | BERT Large | Natural Language Processing | fp32 | 47.92 sent/s | | | 1 |
| Intel TensorFlow 2.19.0 | BERT Large | Natural Language Processing | fp32 | 44.56 sent/s | | | 12 |
| Intel TensorFlow 2.19.0 | BERT Large | Natural Language Processing | amx_int8 | 266.89 sent/s | | | 1 |
| Intel TensorFlow 2.19.0 | BERT Large | Natural Language Processing | amx_int8 | 200.88 sent/s | | | 10 |
| Intel TensorFlow 2.19.0 | BERT Large | Natural Language Processing | amx_bf16 | 200.38 sent/s | | | 1 |
| Intel TensorFlow 2.19.0 | BERT Large | Natural Language Processing | amx_bf16 | 219.81 sent/s | | | 196 |
| Intel TensorFlow 2.19.0 | BERT Large | Natural Language Processing | amx_bf32 | 93.19 sent/s | | | 1 |
| Intel TensorFlow 2.19.0 | BERT Large | Natural Language Processing | amx_bf32 | 86.61 sent/s | | | 12 |
| Intel TensorFlow 2.14 | BERTLarge | Natural Language Processing | int8 | 323.58 sent/s | 0.32 | | 1 |
| Intel TensorFlow 2.14 | BERTLarge | Natural Language Processing | int8 | 324.56 sent/s | 0.33 | | 12 |
| Intel TensorFlow 2.14 | BERTLarge | Natural Language Processing | bf16 | 224.04 sent/s | 0.22 | | 1 |
| Intel TensorFlow 2.14 | BERTLarge | Natural Language Processing | bf16 | 231.37 sent/s | 0.23 | | 28 |
| Intel TensorFlow 2.14 | BERTLarge | Natural Language Processing | fp32 | 55.34 sent/s | 0.05 | | 1 |
| Intel TensorFlow 2.14 | BERTLarge | Natural Language Processing | fp32 | 48.46 sent/s | 0.05 | | 12 |
| Intel TensorFlow 2.14 | BERTLarge | Natural Language Processing | bf32 | 101.93 sent/s | 0.1 | | 1 |
| Intel TensorFlow 2.14 | BERTLarge | Natural Language Processing | bf32 | 98.81 sent/s | 0.1 | | 12 |
| OpenVINO 2023.2 | BERTLarge | Natural Language Processing | int8 | 373.6867 sent/s | 0.37 | | 1 |
| OpenVINO 2023.2 | BERTLarge | Natural Language Processing | int8 | 388.25 sent/s | 0.39 | | 32 |
| OpenVINO 2023.2 | BERTLarge | Natural Language Processing | bf16 | 244.25 sent/s | 0.24 | | 1 |
| OpenVINO 2023.2 | BERTLarge | Natural Language Processing | bf16 | 281.79 sent/s | 0.27 | | 40 |
| OpenVINO 2023.2 | BERTLarge | Natural Language Processing | fp32 | 57.16667 sent/s | 0.06 | | 1 |
| OpenVINO 2023.2 | BERTLarge | Natural Language Processing | fp32 | 55.67 sent/s | 0.05 | | 16 |
| Intel PyTorch 2.1 | DLRM Criteo Terabyte | Recommender | int8 | 23,444,587 rec/s | 23,611.92 | | 128 |
| Intel PyTorch 2.1 | DLRM Criteo Terabyte | Recommender | bf16 | 13,223,343 rec/s | 12,742.32 | | 128 |
| Intel PyTorch 2.1 | DLRM Criteo Terabyte | Recommender | fp32 | 2,742,037 rec/s | 2,615.42 | | 128 |
| Intel PyTorch 2.1 | DLRM Criteo Terabyte | Recommender | bf32 | 6,760,005 rec/s | 6,699.18 | | 128 |
| Intel PyTorch 2.6.0 + IPEX | DLRM-v2 | Recommender | avx_fp32 | 386,589 rec/s | | | 128 |
| Intel PyTorch 2.6.0 + IPEX | DLRM-v2 | Recommender | amx_int8 | 3,995,112 rec/s | | | 128 |
| Intel PyTorch 2.6.0 + IPEX | DLRM-v2 | Recommender | amx_bf16 | 2,566,728 rec/s | | | 128 |
| Intel PyTorch 2.6.0 + IPEX | DLRM-v2 | Recommender | amx_bf32 | 718,293 rec/s | | | 128 |
| MLPerf Inference v4.0 | DLRM-v2 (offline, 99.9% acc) | Recommender | int8 | 9,111.08 samp/s | | | 400 |
| Intel PyTorch 2.6.0 + IPEX | Stable-Diffusion | Image Generation | avx_fp32 | 0.05 img/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX | Stable-Diffusion | Image Generation | amx_int8 | 0.22 img/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX | Stable-Diffusion | Image Generation | amx_bf16 | 0.19 img/s | | | 1 |
| Intel PyTorch 2.6.0 + IPEX | Stable-Diffusion | Image Generation | amx_bf32 | 0.06 img/s | | | 1 |
| Intel PyTorch 2.1 | DistilBERT | Natural Language Processing | int8 | 6,380.26 sent/s | 6.8 | | 1 |
| Intel PyTorch 2.1 | DistilBERT | Natural Language Processing | int8 | 10,701.44 sent/s | 11.39 | | 104 |
| Intel PyTorch 2.1 | DistilBERT | Natural Language Processing | bf16 | 4,651.69 sent/s | 4.97 | | 1 |
| Intel PyTorch 2.1 | DistilBERT | Natural Language Processing | bf16 | 6,864.75 sent/s | 7.23 | | 88 |
| Intel PyTorch 2.1 | DistilBERT | Natural Language Processing | fp32 | 1,121.45 sent/s | 1.12 | | 1 |
| Intel PyTorch 2.1 | DistilBERT | Natural Language Processing | fp32 | 1,205.86 sent/s | 1.27 | | 32 |
| Intel PyTorch 2.1 | DistilBERT | Natural Language Processing | bf32 | 2,161.93 sent/s | 2.15 | | 1 |
| Intel PyTorch 2.1 | DistilBERT | Natural Language Processing | bf32 | 2,584.98 sent/s | 2.63 | | 56 |
| Intel PyTorch 2.6.0 + IPEX | Vision-Transformer | Image Recognition | avx_fp32 | 369.22 fps | | | 1 |
| Intel PyTorch 2.6.0 + IPEX | Vision-Transformer | Image Recognition | avx_fp32 | 367.75 fps | | | 64 |
| Intel PyTorch 2.6.0 + IPEX | Vision-Transformer | Image Recognition | amx_int8 | 2556.2 fps | | | 1 |
| Intel PyTorch 2.6.0 + IPEX | Vision-Transformer | Image Recognition | amx_int8 | 3632.56 fps | | | 118 |
| Intel PyTorch 2.6.0 + IPEX | Vision-Transformer | Image Recognition | amx_bf16 | 1456.58 fps | | | 1 |
| Intel PyTorch 2.6.0 + IPEX | Vision-Transformer | Image Recognition | amx_bf16 | 1936.35 fps | | | 142 |
| Intel PyTorch 2.6.0 + IPEX | Vision-Transformer | Image Recognition | amx_bf32 | 699.98 fps | | | 1 |
| Intel PyTorch 2.6.0 + IPEX | Vision-Transformer | Image Recognition | amx_bf32 | 767.5 fps | | | 256 |
| Intel TensorFlow 2.19.0 | Vision-Transformer | Image Recognition | fp32 | 336.59 fps | | | 1 |
| Intel TensorFlow 2.19.0 | Vision-Transformer | Image Recognition | fp32 | 356.97 fps | | | 38 |
| Intel TensorFlow 2.19.0 | Vision-Transformer | Image Recognition | amx_int8 | 1481.00 fps | | | 1 |
| Intel TensorFlow 2.19.0 | Vision-Transformer | Image Recognition | amx_int8 | 2066.89 fps | | | 64 |
| Intel TensorFlow 2.19.0 | Vision-Transformer | Image Recognition | amx_bf16 | 1234.05 fps | | | 1 |
| Intel TensorFlow 2.19.0 | Vision-Transformer | Image Recognition | amx_bf16 | 1687.83 fps | | | 96 |
| Intel TensorFlow 2.19.0 | Vision-Transformer | Image Recognition | amx_bf32 | 790.19 fps | | | 1 |
| Intel TensorFlow 2.19.0 | Vision-Transformer | Image Recognition | amx_bf32 | 1031.97 fps | | | 38 |
| OpenVINO 2024.4.0 | Vision-Transformer | Image Recognition | fp32 | 368.75 fps | | | 1 |
| OpenVINO 2024.4.0 | Vision-Transformer | Image Recognition | fp32 | 372.06 fps | | | 32 |
| OpenVINO 2024.4.0 | Vision-Transformer | Image Recognition | amx_int8 | 2193.38 fps | | | 1 |
| OpenVINO 2024.4.0 | Vision-Transformer | Image Recognition | amx_int8 | 2218.53 fps | | | 64 |
| OpenVINO 2024.4.0 | Vision-Transformer | Image Recognition | amx_bf16 | 1335.27 fps | | | 1 |
| OpenVINO 2024.4.0 | Vision-Transformer | Image Recognition | amx_bf16 | 1241.14 fps | | | 32 |
| Intel TensorFlow 2.14 | Transformer MLPerf | Language Translation | int8 | 77.94 sent/s | 0.07 | | 1 |
| Intel TensorFlow 2.14 | Transformer MLPerf | Language Translation | int8 | 334.65 sent/s | 0.31 | | 448 |
| Intel TensorFlow 2.14 | Transformer MLPerf | Language Translation | bf16 | 52 sent/s | 0.05 | | 1 |
| Intel TensorFlow 2.14 | Transformer MLPerf | Language Translation | bf16 | 367.07 sent/s | 0.35 | | 448 |
| Intel TensorFlow 2.14 | Transformer MLPerf | Language Translation | fp32 | 1,099.6 sent/s | 26.53 | | 1 |
| Intel TensorFlow 2.14 | Transformer MLPerf | Language Translation | fp32 | 137.37 sent/s | 0.12 | | 448 |
| Intel TensorFlow 2.14 | Transformer MLPerf | Language Translation | bf32 | 24.86 sent/s | 0.02 | | 1 |
| Intel TensorFlow 2.14 | Transformer MLPerf | Language Translation | bf32 | 155.04 sent/s | 0.14 | | 448 |
| OpenVINO 2023.2 | 3D-Unet | Image Segmentation | int8 | 30.31 samples/s | 0.03 | | 1 |
| OpenVINO 2023.2 | 3D-Unet | Image Segmentation | int8 | 27.18333 samples/s | 0.02 | | 6 |
| OpenVINO 2023.2 | 3D-Unet | Image Segmentation | bf16 | 15.67667 samples/s | 0.01 | | 1 |
| OpenVINO 2023.2 | 3D-Unet | Image Segmentation | bf16 | 3.18 samples/s | 0 | | 7 |
| OpenVINO 2023.2 | 3D-Unet | Image Segmentation | fp32 | 3.49 samples/s | 0 | | 1 |
| OpenVINO 2023.2 | 3D-Unet | Image Segmentation | fp32 | 14.40 samples/s | 0.01 | | 3 |
| OpenVINO 2023.2 | SSD-ResNet34 COCO 2017 (1200 x 1200) | Object Detection | int8 | 590.2267 img/s | 0.57 | | 1 |
| OpenVINO 2023.2 | SSD-ResNet34 COCO 2017 (1200 x 1200) | Object Detection | bf16 | 297.79 img/s | 0.28 | | 1 |
| OpenVINO 2023.2 | SSD-ResNet34 COCO 2017 (1200 x 1200) | Object Detection | fp32 | 36.92 img/s | 0.04 | | 1 |
| Intel PyTorch 2.1 | ResNeXt101 32x16d ImageNet | Image Classification | int8 | 1,679.87 fps | 1.73 | | 1 |
| Intel PyTorch 2.1 | ResNeXt101 32x16d ImageNet | Image Classification | int8 | 2,481.66 fps | 2.56 | | 58 |
| Intel PyTorch 2.1 | ResNeXt101 32x16d ImageNet | Image Classification | bf16 | 802.44 fps | 0.8 | | 1 |
| Intel PyTorch 2.1 | ResNeXt101 32x16d ImageNet | Image Classification | bf16 | 1,175.18 fps | 1.1 | | 72 |
| Intel PyTorch 2.1 | ResNeXt101 32x16d ImageNet | Image Classification | fp32 | 186.33 fps | 0.19 | | 1 |
| Intel PyTorch 2.1 | ResNeXt101 32x16d ImageNet | Image Classification | fp32 | 202.33 fps | 0.19 | | 40 |
| Intel PyTorch 2.1 | ResNeXt101 32x16d ImageNet | Image Classification | bf32 | 279.07 fps | 0.28 | | 1 |
| Intel PyTorch 2.1 | ResNeXt101 32x16d ImageNet | Image Classification | bf32 | 320.62 fps | 0.29 | | 58 |
| OpenVINO 2024.4.0 | ResNet50-v1-5 | Image Classification | fp32 | 1773.31 fps | | | 1 |
| OpenVINO 2024.4.0 | ResNet50-v1-5 | Image Classification | fp32 | 1639.04 fps | | | 16 |
| OpenVINO 2024.4.0 | ResNet50-v1-5 | Image Classification | amx_int8 | 11951.1 fps | | | 1 |
| OpenVINO 2024.4.0 | ResNet50-v1-5 | Image Classification | amx_int8 | 17018.39 fps | | | 64 |
| OpenVINO 2024.4.0 | ResNet50-v1-5 | Image Classification | amx_bf16 | 8324.08 fps | | | 1 |
| OpenVINO 2024.4.0 | ResNet50-v1-5 | Image Classification | amx_bf16 | 10659.37 fps | | | 32 |
| Intel PyTorch 2.6.0 + IPEX | LCM | Reasoning and Understanding | avx_fp32 | 0.79 | | | 1 |
| Intel PyTorch 2.6.0 + IPEX | LCM | Reasoning and Understanding | amx_int8 | 3.28 | | | 1 |
| Intel PyTorch 2.6.0 + IPEX | LCM | Reasoning and Understanding | amx_bf16 | 2.73 | | | 1 |
| Intel PyTorch 2.6.0 + IPEX | LCM | Reasoning and Understanding | amx_bf32 | 0.92 | | | 1 |
| OpenVINO 2024.4.0 | LCM | Reasoning and Understanding | fp32 | 0.69 | | | 1 |
| OpenVINO 2024.4.0 | LCM | Reasoning and Understanding | amx_int8 | 1.84 | | | 1 |
| OpenVINO 2024.4.0 | LCM | Reasoning and Understanding | amx_bf16 | 1.85 | | | 1 |
| Intel PyTorch 2.6.0 + IPEX | Yolo-v7 | Object Detection | avx_fp32 | 124.82 fps | | | 1 |
| Intel PyTorch 2.6.0 + IPEX | Yolo-v7 | Object Detection | avx_fp32 | 122.74 fps | | | 4 |
| Intel PyTorch 2.6.0 + IPEX | Yolo-v7 | Object Detection | amx_int8 | 648.05 fps | | | 1 |
| Intel PyTorch 2.6.0 + IPEX | Yolo-v7 | Object Detection | amx_int8 | 662.29 fps | | | 16 |
| Intel PyTorch 2.6.0 + IPEX | Yolo-v7 | Object Detection | amx_bf16 | 553.29 fps | | | 1 |
| Intel PyTorch 2.6.0 + IPEX | Yolo-v7 | Object Detection | amx_bf16 | 518.59 fps | | | 30 |
| Intel PyTorch 2.6.0 + IPEX | Yolo-v7 | Object Detection | amx_bf32 | 194.71 fps | | | 1 |
| Intel PyTorch 2.6.0 + IPEX | Yolo-v7 | Object Detection | amx_bf32 | 184.2 fps | | | 32 |
| OpenVINO 2023.2 | Yolo-v8n | Object Detection | int8 | 3,513.54 img/s | | | 1 |
| OpenVINO 2023.2 | Yolo-v8n | Object Detection | bf16 | 3,632.55 img/s | | | 1 |
| OpenVINO 2023.2 | Yolo-v8n | Object Detection | fp32 | 1,249.91 img/s | | | 1 |
| Intel TensorFlow 2.19.0 | Yolo-v5 | Object Detection | fp32 | 781.50 img/s | | | 1 |
| Intel TensorFlow 2.19.0 | Yolo-v5 | Object Detection | fp32 | 551.60 img/s | | | 36 |
| Intel TensorFlow 2.19.0 | Yolo-v5 | Object Detection | amx_bf16 | 1548.03 img/s | | | 1 |
| Intel TensorFlow 2.19.0 | Yolo-v5 | Object Detection | amx_bf16 | 1194.27 img/s | | | 36 |
| Intel TensorFlow 2.19.0 | Yolo-v5 | Object Detection | amx_bf32 | 880.03 img/s | | | 1 |
| Intel TensorFlow 2.19.0 | Yolo-v5 | Object Detection | amx_bf32 | 625.90 img/s | | | 36 |
| OpenVINO 2024.4.0 | Yolo-v5s | Object Detection | fp32 | 728.05 img/s | | | 1 |
| OpenVINO 2024.4.0 | Yolo-v5s | Object Detection | fp32 | 629.49 img/s | | | 8 |
| OpenVINO 2024.4.0 | Yolo-v5s | Object Detection | amx_int8 | 3229.96 img/s | | | 1 |
| OpenVINO 2024.4.0 | Yolo-v5s | Object Detection | amx_int8 | 2815.91 img/s | | | 8 |
| OpenVINO 2024.4.0 | Yolo-v5s | Object Detection | amx_bf16 | 2216.34 img/s | | | 1 |
| OpenVINO 2024.4.0 | Yolo-v5s | Object Detection | amx_bf16 | 2075.86 img/s | | | 16 |
| MLPerf Inference v4.0 | RetinaNet (offline) | Object Detection | int8 | 371.08 samp/s | | | 2 |
| MLPerf Inference v4.0 | RNN-T (offline) | Speech-to-text | int8+bf16 | 8,679.48 samp/s | | | 256 |
| Intel TensorFlow 2.19.0 | R-GAT | Multi-Relational Graphs | fp32 | 3187.69 | | | 1 |
| Intel TensorFlow 2.19.0 | R-GAT | Multi-Relational Graphs | fp32 | 6823.85 | | | 649 |
| Intel TensorFlow 2.19.0 | R-GAT | Multi-Relational Graphs | amx_bf16 | 5523.51 | | | 1 |
| Intel TensorFlow 2.19.0 | R-GAT | Multi-Relational Graphs | amx_bf16 | 19143.27 | | | 649 |
| Intel TensorFlow 2.19.0 | R-GAT | Multi-Relational Graphs | amx_bf32 | 3297.23 | | | 1 |
| Intel TensorFlow 2.19.0 | R-GAT | Multi-Relational Graphs | amx_bf32 | 9764.66 | | | 649 |

 

| Framework Version | Model | Usage | Precision | TTT (minutes) | Accuracy | Batch Size | Ranks |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Transformers 4.31, Intel Extension for PyTorch 2.0.1, PEFT 0.4.0 | GPT-J 6B (GLUE MNLI dataset) | Fine-tuning, Text-generation | bf16 | 184.20 | 82.2 | 8 | 1 |
| Transformers 4.34.1, Intel PyTorch 2.1.0, PEFT 0.5.0, Intel® oneCCL v2.1.0 | BioGPT 1.5B (PubMedQA dataset) | Response generation | bf16 | 39.80 | 79.4 | 8 | 8 |
| Intel® TensorFlow 2.14, Horovod 0.28, Open MPI 4.1.2, Python 3.10.0 | ResNet50 v1.5 (Colorectal histology dataset) | Colorectal cancer detection | fp32 | 6.98 | 94.1 | 32 | 64 |
| Intel® TensorFlow 2.14, Horovod 0.28, Open MPI 4.1.2, Python 3.10.0 | ResNet50 v1.5 (Colorectal histology dataset) | Colorectal cancer detection | bf16 | 4.08 | 94.9 | 32 | 64 |
| Intel® TensorFlow 2.14, Horovod 0.28, Open MPI 4.1.2, Python 3.10.0 | ResNet50 v1.5 (Colorectal histology dataset) | Colorectal cancer detection | fp32 | 5.34 | 94.1 | 32 | 128 |
| Intel® TensorFlow 2.14, Horovod 0.28, Open MPI 4.1.2, Python 3.10.0 | ResNet50 v1.5 (Colorectal histology dataset) | Colorectal cancer detection | bf16 | 2.90 | 94.9 | 32 | 128 |
| Transformers 4.35.0, Intel PyTorch 2.0.100, Intel® oneCCL 2.0.100 | BERTLarge Uncased (IMDb dataset) | Sentiment Analysis | fp32 | 47.95 | 93.84 | 64 | 4 |
| Transformers 4.35.0, Intel PyTorch 2.0.100, Intel® oneCCL 2.0.100 | BERTLarge Uncased (IMDb dataset) | Sentiment Analysis | bf16 | 15.96 | 93.8 | 64 | 4 |
| Transformers 4.35.0, Intel PyTorch 2.0.100, Intel® oneCCL 2.0.100 | BERTLarge Uncased (GLUE SST2 dataset) | Sentiment Analysis | fp32 | 10.48 | 92.2 | 256 | 4 |
| Transformers 4.35.0, Intel PyTorch 2.0.100, Intel® oneCCL 2.0.100 | BERTLarge Uncased (GLUE SST2 dataset) | Sentiment Analysis | bf16 | 2.93 | 92.09 | 256 | 4 |

 

| Framework Version | Model/Dataset | Usage | Precision | Throughput | Perf/Watt | Batch size |
| --- | --- | --- | --- | --- | --- | --- |
| Intel PyTorch 2.1 | ResNet50 v1.5 | Image Recognition | fp32 | 175.29 img/s | 0.22 | 128 |
| Intel PyTorch 2.1 | ResNet50 v1.5 | Image Recognition | bf16 | 396.24 img/s | 0.52 | 256 |
| Intel PyTorch 2.1 | ResNet50 v1.5 | Image Recognition | bf32 | 197.14 img/s | 0.25 | 128 |
| Intel TensorFlow 2.14 | ResNet50 v1.5 ImageNet (224 x 224) | Image Recognition | fp32 | 145.93 img/s | 0.19 | 512 |
| Intel TensorFlow 2.14 | ResNet50 v1.5 ImageNet (224 x 224) | Image Recognition | bf16 | 354.45 img/s | 0.46 | 512 |
| Intel TensorFlow 2.14 | ResNet50 v1.5 ImageNet (224 x 224) | Image Recognition | bf32 | 166.37 img/s | 0.21 | 512 |
| Intel PyTorch 2.1 | DLRM Criteo Terabyte, QUAD Mode | Recommender | fp32 | 290,772.24 rec/s | 359.83 | 32,768 |
| Intel PyTorch 2.1 | DLRM Criteo Terabyte, QUAD Mode | Recommender | bf16 | 862,286.46 rec/s | 1,055.35 | 32,768 |
| Intel PyTorch 2.1 | DLRM Criteo Terabyte, QUAD Mode | Recommender | bf32 | 417,584.33 rec/s | 504.29 | 32,768 |
| Intel TensorFlow 2.14 | SSD-ResNet34 COCO 2017 (1200 x 1200) | Object Detection | fp32 | 61.25 img/s | 0.09 | 448 |
| Intel TensorFlow 2.14 | SSD-ResNet34 COCO 2017 (1200 x 1200) | Object Detection | bf16 | 219.77 img/s | 0.31 | 448 |
| Intel TensorFlow 2.14 | SSD-ResNet34 COCO 2017 (1200 x 1200) | Object Detection | bf32 | 83.44 img/s | 0.11 | 448 |
| Intel PyTorch 2.1 | RNNT LibriSpeech | Speech Recognition | fp32 | 4.35 fps | 0.01 | 64 |
| Intel PyTorch 2.1 | RNNT LibriSpeech | Speech Recognition | bf16 | 35.13 fps | 0.04 | 64 |
| Intel PyTorch 2.1 | RNNT LibriSpeech | Speech Recognition | bf32 | 13.65 fps | 0.02 | 32 |
| Intel PyTorch 2.1 | Mask R-CNN COCO 2017 | Object Detection | fp32 | 4.8 img/s | 0.01 | 128 |
| Intel PyTorch 2.1 | Mask R-CNN COCO 2017 | Object Detection | bf16 | 16.43 img/s | 0.02 | 128 |
| Intel PyTorch 2.1 | Mask R-CNN COCO 2017 | Object Detection | bf32 | 5.37 img/s | 0.01 | 96 |
| Intel PyTorch 2.1 | BERTLarge Wikipedia 2020/01/01 seq len=512 | Natural Language Processing | fp32 | 4.41 sent/s | 0.01 | 64 |
| Intel PyTorch 2.1 | BERTLarge Wikipedia 2020/01/01 seq len=512 | Natural Language Processing | bf16 | 12.53 sent/s | 0.02 | 28 |
| Intel PyTorch 2.1 | BERTLarge Wikipedia 2020/01/01 seq len=512 | Natural Language Processing | bf32 | 5.52 sent/s | 0.01 | 56 |
| Intel TensorFlow 2.14 | BERTLarge Wikipedia 2020/01/01 seq len=512 | Natural Language Processing | fp32 | 5.38 sent/s | 0.01 | 64 |
| Intel TensorFlow 2.14 | BERTLarge Wikipedia 2020/01/01 seq len=512 | Natural Language Processing | bf16 | 11.74 sent/s | 0.02 | 64 |
| Intel TensorFlow 2.14 | BERTLarge Wikipedia 2020/01/01 seq len=512 | Natural Language Processing | bf32 | 6.07 sent/s | 0.01 | 64 |
| Intel TensorFlow 2.14 | Transformer MLPerf | Language Translation | fp32 | 15,671.55 sent/s | 16.95 | 42,000 |
| Intel TensorFlow 2.14 | Transformer MLPerf | Language Translation | bf16 | 40,653.1 sent/s | 43.77 | 42,000 |
| Intel TensorFlow 2.14 | Transformer MLPerf | Language Translation | bf32 | 15,316.08 sent/s | 15.44 | 42,000 |

Hardware and software configuration (measured October 24, 2023):

Deep learning configuration:

  • Hardware configuration for Intel® Xeon® Platinum 8592+ processor (formerly code named Emerald Rapids): 2 sockets for inference, 1 socket for training, 64 cores per socket, 350 watts, 1024 GB (16 x 64 GB) DDR5 5600 MT/s memory, operating system: CentOS* Stream 9. Using Intel® Advanced Matrix Extensions (Intel® AMX) int8 and bf16 with Intel® oneAPI Deep Neural Network Library (oneDNN) optimized kernels integrated into Intel® Extension for PyTorch*, Intel® Extension for TensorFlow*, and Intel® Distribution of OpenVINO™ toolkit. Measurements may vary. If the dataset is not listed, a synthetic dataset was used to measure performance.
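As an illustration of how the Intel® Extension for PyTorch* results above are typically produced, the sketch below runs bf16 inference through ipex.optimize so that oneDNN can dispatch the heavy matrix work to Intel AMX kernels. It is a minimal sketch under stated assumptions, not the actual benchmark harness; the ResNet-50 model and the 224 x 224 input are placeholder choices.

    # Minimal sketch: bf16 inference with Intel Extension for PyTorch (IPEX).
    # The model and input shape are placeholders, not the benchmark harness
    # used for the tables above.
    import torch
    import intel_extension_for_pytorch as ipex
    from torchvision.models import resnet50

    model = resnet50(weights=None).eval()
    # Reorder weights and fuse ops for bf16; on AMX-capable Xeons the heavy
    # matmul/convolution work is routed to oneDNN AMX kernels.
    model = ipex.optimize(model, dtype=torch.bfloat16)

    x = torch.randn(1, 3, 224, 224)
    with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
        # Optional: trace and freeze to reduce Python dispatch overhead.
        traced = torch.jit.trace(model, x)
        traced = torch.jit.freeze(traced)
        out = traced(x)
    print(out.shape)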

Transfer learning configuration:

  • Hardware configuration for Intel® Xeon® Platinum 8592+ processor (formerly code named Emerald Rapids): 2 sockets, 64 cores per socket, 350 watts, 16 x 64 GB DDR5 5600 MT/s memory, BIOS version 3B05.TEL4P1, operating system: CentOS* Stream 8, using Intel® Advanced Matrix Extensions (Intel® AMX) int8 and bf16 with Intel® oneAPI Deep Neural Network Library (oneDNN) v2.6.0 optimized kernels integrated into Intel® Extension for PyTorch* v2.0.1, Intel® Extension for TensorFlow* v2.14, and Intel® oneAPI Data Analytics Library (oneDAL) 2023.1 optimized kernels integrated into Intel® Extension for Scikit-learn* v2023.1, Intel® Distribution of Modin* v2.1.1, and Intel® oneAPI Math Kernel Library (oneMKL) v2023.1. Measurements may vary.
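For the classical machine learning part of this stack, the oneDAL-optimized kernels are enabled by patching scikit-learn through Intel® Extension for Scikit-learn*. The snippet below is a minimal sketch; the KMeans workload and random data are illustrative only, not one of the measured models.

    # Minimal sketch: enable Intel oneDAL-accelerated scikit-learn kernels.
    # KMeans and the random data are illustrative only.
    import numpy as np
    from sklearnex import patch_sklearn

    patch_sklearn()  # must run before importing sklearn estimators

    from sklearn.cluster import KMeans

    X = np.random.rand(100_000, 16).astype(np.float32)
    km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)
    print(km.inertia_)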

MLPerf* configuration:

  • Hardware configuration for MLPerf* Inference v4.0 measurements on Intel® Xeon® Platinum 8592+ processor (formerly code named Emerald Rapids): 2 sockets for inference, 64 cores, 350 watts, 1024 GB 16 x 64 GB DDR5-5600 MT/s memory, operating system: CentOS* Stream 8. Using Intel® Advanced Matrix Extensions (Intel® AMX) int4, int8, and bf16 with Intel® oneAPI Deep Neural Network Library (oneDNN) optimized kernels integrated into Intel® Extension for PyTorch*. Measurements may vary. The model specifications and datasets used for MLPerf workloads are specified by MLCommons and viewable at MLPerf Inference: Datacenter Benchmark Suite Results.

Hardware and software configuration (measured March 13, 2025):

Additional tests were performed on updated versions of the models: 1 node, 2x Intel® Xeon® Platinum 8592+ processors, 64 cores per socket, Hyper-Threading on, Turbo on, 4 NUMA (non-uniform memory access) nodes.

Integrated accelerators available (used): DLB 2 [0], DSA 2 [0], IAA 2 [0], QAT 2 [0].

Total memory: 1024 GB (16 x 64 GB DDR5 5600 MT/s [5600 MT/s]), BIOS EGSDCRB1.SYS.0109.D34.2402031438, microcode 0x21000230, 1x Ethernet controller I225-LM, 1x 3.6T SSDPE2KX040T8 from Intel, 1x 931.5G Intel, CentOS* Stream 9, 6.2.0-emr.bkc.6.2.3.6.31.x86_64. TensorFlow*: 2.19.0, Intel® oneAPI Deep Neural Network Library (oneDNN): e34cb13, PyTorch*: 2.6.0.dev20241124+cpu, Intel® Extension for PyTorch*: 2.6.0+gitc5a2330, oneDNN: v3.6.2, OpenVINO™ toolkit: 2024.4.0, oneDNN: 3.5.0. Test by Intel as of March 13, 2025, 10:45:43 a.m. UTC.
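For reference, the OpenVINO™ numbers in the tables are collected by compiling a model for the CPU plugin and running inference requests against it. A minimal Python sketch of that flow follows; the IR path is a placeholder and a static input shape is assumed, so this is not the exact measurement script.

    # Minimal sketch: CPU inference with OpenVINO (2024.x Python API).
    # "model.xml" is a placeholder IR path; a static input shape is assumed.
    import numpy as np
    import openvino as ov

    core = ov.Core()
    model = core.read_model("model.xml")
    compiled = core.compile_model(model, "CPU",
                                  {"PERFORMANCE_HINT": "THROUGHPUT"})

    request = compiled.create_infer_request()
    port = compiled.input(0)
    data = np.random.rand(*port.shape).astype(np.float32)
    results = request.infer({port: data})
    print(next(iter(results.values())).shape)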