Intel Innovation: Cloud-Edge Infrastructure News

Intel works with the open ecosystem to ensure developers have optimized tools and software environments.

Computing is spreading across heterogeneous fabrics of CPUs, GPUs, application accelerators, interconnect processors, edge-computing devices and FPGAs, all of which require persistent memory and software to bind these elements into a complete solution. The race is on to generate, store and analyze data at zettascale levels. It took over 12 years to get from petascale to exascale computing. Intel has challenged itself to reach zettascale in five years: "zetta 2027." Central to this goal is Intel's work with the open ecosystem to ensure developers have optimized tools and software environments to accelerate their deployments.

MORE: Press Kits: Intel Innovation 2021 | 12th Gen Intel Core … Webcast: Intel Innovation Keynote … News Releases: Intel Innovation Spotlights New Products, Technology and Tools for Developers | Intel Unveils 12th Gen Intel Core, Launches World’s Best Gaming Processor, i9-12900K … Intel Innovation Topic News: Developer/oneAPI | Ubiquitous Computing | Artificial Intelligence | Pervasive Connectivity

Intel and SiPearl Accelerate Europe’s Exascale Computing Program

Intel and SiPearl are collaborating to provide a joint platform for the first European exascale supercomputers. SiPearl is designing the microprocessor used in European exascale supercomputers and has selected Intel's Ponte Vecchio graphics processing units (GPUs) as the high-performance computing (HPC) accelerator within the system's HPC node. This partnership will allow European customers to combine SiPearl's high-performance, low-power central processing unit ("Rhea") with Intel's family of general-purpose GPUs to build a high-performance compute node, fostering European exascale deployments.

To enable this powerful combination, SiPearl plans to port and optimize oneAPI for the Rhea processor. As an open, standards-based programming model, oneAPI increases developer productivity and workload performance by providing a single programming solution across the heterogeneous compute node. The paired solution will also underscore the value of Compute Express Link standardization in connecting compute elements, providing lower-latency, coherent connectivity relative to PCIe connections.

Intel works with the open-source community and ecosystem partners to simplify the developer process to build on next-generation Intel Xeon Scalable processors (code-named “Sapphire Rapids”). (Credit: Intel Corporation)

Prepping Developer Ecosystem for Next-Gen Intel Xeon Scalable Processors

Intel is working with the open-source community and its large pool of ecosystem partners to simplify the process for developers to build on next-generation Intel® Xeon® Scalable processors (code-named "Sapphire Rapids") and harness several of the new acceleration engines built into the processor. Next-gen Xeon processors are designed to tackle overhead in data-center-scale deployment models while enabling greater processor core utilization and reducing power and area costs. A new Intel® Accelerator Interfacing Architecture (AIA) instruction set built into the processor supports efficient dispatch, synchronization and signaling to discrete accelerators. Developers will also have access to several of the performance-enhancing accelerator engines within next-gen Xeon processors, including Intel® Advanced Matrix Extensions (AMX), Intel® QuickAssist Technology and Intel® Dynamic Load Balancer.