
A New Era of Accelerated Computing
oneAPI: Putting the Needs of Developers First
oneAPI is an open, cross-architecture programming model that frees developers to use a single code base across multiple architectures. The result is accelerated compute without vendor lock-in.
AI Workloads: The Newest 2023 Tools Streamline Multiarchitecture Programming
Take advantage of the latest AI resources, optimized to empower developers with end-to-end performance and productivity across Intel's portfolio of CPUs and GPUs.
2023.1 Release of Intel® AI Analytics Toolkit – just launched February 10th
Six New AI Reference Kits – bringing the total to nearly 30, free to apply across multiple industry workloads
New code samples for end-to-end AI workloads: Census | Language Identification | Get Started with Intel® Extension for PyTorch*
Performance Leap in Inference & Training
oneAPI tools benchmark test results for MLPerf* DeepCAM, commonly used in HPC data centers for detection of hurricanes and atmospheric rivers in climate data.
Deep Learning Inference & Training Boost
Hardware acceleration with Intel® Advanced Matrix Extensions (Intel® AMX), together with support for the int8 (inference) and bfloat16 (training and inference) datatypes, delivers significant performance gains for AI and deep learning inference and training workloads compared with previous-generation Intel® Xeon® Scalable processors and competing hardware. The TensorFlow* and PyTorch* frameworks running on 4th Gen Intel Xeon Scalable processors deliver leading AI performance through Intel AMX support and extended optimizations enabled by the Intel® oneAPI Deep Neural Network Library (oneDNN).
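As a minimal sketch of what using the bfloat16 datatype looks like in practice (assuming a stock PyTorch install; the model and shapes are purely illustrative), inference can be run under bfloat16 autocast; on 4th Gen Intel Xeon Scalable processors, oneDNN dispatches these bfloat16 operations to Intel AMX:

```python
import torch

# Illustrative stand-in model; any nn.Module would do.
model = torch.nn.Linear(64, 8)
x = torch.randn(1, 64)

# Run inference with CPU autocast in bfloat16: eligible ops such as
# the linear layer's matmul execute in bfloat16 rather than float32.
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = model(x)

print(y.dtype)  # torch.bfloat16
```

The Intel® Extension for PyTorch* mentioned in the code samples above can further optimize the model for bfloat16 execution, but plain autocast is enough to exercise the datatype.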
Speeding Exascale Material Discovery
oneAPI tools benchmark test results for the Liquid Crystal workload in LAMMPS, a popular molecular dynamics code for life science and materials research.
Life & Material Science Speed-Ups
This benchmark was run on a combination of Intel® Xeon® CPU Max Series and Intel® Data Center GPU Max Series processors using both DDR and high-bandwidth memory. Many of the brain-mapping codes are written in C++, so the Intel® oneAPI DPC++/C++ Compiler (the oneAPI implementation of C++ with SYCL*) was used to expose parallelism and vectorization and make effective use of the GPUs. The compiler's OpenMP* support provided parallelism in other portions of the code, along with the Intel® oneAPI Math Kernel Library (oneMKL).
Dev-to-Dev Insights & POV
The Parallel Universe Magazine, Issue 51
Featuring the four-year anniversary of oneAPI
The January 2023 issue of the award-winning technical journal focuses on SYCL* and oneAPI performance libraries.
Why Intel® IPP is Great for Multimedia & Data Processing Performance
Robert Mueller-Albrecht, oneAPI Library Tools Marketing, Intel
Learn about the key functions of this primitives library, including usage models and code samples.
Next-Gen Compiler Technologies: A Q&A with the Architects
Don Badie, senior director, oneAPI Marketing, Intel
Two experts discuss the rise, diversity, and future of compilers and their performance on Intel’s new hardware.
Brilliant Speedup for Volume Rendering
Greg P. Johnson, ray tracing software engineer, Intel
The latest release of Intel® Open VKL supports multiple new features for scientific visualization and VFX.
Industry Showcase
See the latest collaborations, use cases, and success stories facilitated by oneAPI.
Aible* and Intel discuss how the cloud-based AI solutions company helps its customers achieve value in under 30 days. Listen. [32:13]
A joint venture between the university, Pembroke College, Intel, and Dell* is advancing next-generation HPC systems, using oneAPI tools to drive the solution.
Bookmark this page to stay up to date on the partnership and its AI solutions for edge, analytics, and automation, which are becoming a cornerstone of how businesses grow and innovate.
Check out the growing list of over 200 global companies, universities, and institutions using oneAPI to advance applications and solutions and open new opportunities for developers.