EPCC: Cluster Helps Solve Exascale I/O Challenges

Intel® Optane™ persistent memory with 2nd Gen Intel® Xeon® Scalable processors accelerates HPC workloads.

At a glance:

  • EPCC is the supercomputing center based at the University of Edinburgh. Its mission is to accelerate the effective exploitation of novel computing throughout industry, academia and commerce.

  • EPCC’s new Intel® Optane™ persistent memory cluster delivered promising benefits for the NEXTGenIO project.

Executive Summary

EPCC (Edinburgh Parallel Computing Center) is a hotbed for scientific research using High Performance Computing (HPC). The center has gained an international reputation for advanced capability in all aspects of HPC, High Performance Data Analytics (HPDA), and novel computing. EPCC hosts a range of leading HPC systems, including Tesseract, the United Kingdom’s Extreme Scaling service of the DiRAC (Distributed Research utilizing Advanced Computing) facility. Tesseract is a 20,000-plus core hypercube-based supercomputer built on Intel® Xeon® Scalable processors and a Cornelis Networks1 fabric. Among its other resources is a smaller, yet equally important, cluster that enabled breakthrough research on the NEXTGenIO project. Researchers use this cluster, built by Fujitsu on 2nd Generation Intel Xeon Scalable processors and Intel® Optane™ persistent memory, to modify and optimize codes so they can deliver the benefits of Intel Optane persistent memory for high-throughput parallel workloads.

Challenge

Achieving Exascale computing requires addressing many challenges, including those with HPC I/O, which has lagged behind other HPC advances. EPCC led the NEXTGenIO project, funded by the European Commission, researching how to leverage byte-addressable persistent memory (B-APM) for large parallel computing workloads. Partners in NEXTGenIO include EPCC, Intel, Fujitsu, Barcelona Supercomputing Center (BSC), Technische Universität Dresden, ARM/Allinea, the European Centre for Medium-Range Weather Forecasts (ECMWF), and ARCTUR.

Intel Optane persistent memory along with 2nd Generation Intel Xeon Scalable processors delivers a flexible B-APM architecture for servers.

For the NEXTGenIO project, work at EPCC focused on the overall technical and architectural requirements for maximizing the potential of Intel Optane persistent memory across its several usage modes. Working with industry and academia across Europe, researchers implemented new filesystems, designed and developed data-aware schedulers, investigated checkpointing software, and integrated I/O and communication libraries to make the modifications and optimizations that let software benefit from Intel Optane persistent memory.

Solution

Exploring new memory hierarchies for HPC architectures in order to accelerate performance and increase system efficiencies has been a focus of supercomputing for many years. From burst buffers to SSDs in each node for local data storage, and now B-APM, system architects are moving faster, more efficient technologies closer to the CPUs.

Intel Optane persistent memory combines the traits of storage and memory into a single high-capacity module that fits into a server DRAM slot. Most 2nd Generation Intel Xeon Scalable processors support this technology, providing near-DRAM performance with up to 3 TB of capacity per socket. Intel Optane persistent memory offers different usage modes to optimize the technology for various types of workloads, whether they need massive volatile memory capacity, persistent data storage with near-DRAM performance, or a combination of both.
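The usage mode is chosen when the modules are provisioned, not at application run time. As a rough sketch of how this looks with Intel’s standard `ipmctl` provisioning utility (not taken from the project; flags should be verified against the tool’s documentation, and the commands require root and PMem hardware):

```shell
# Memory Mode: all PMem capacity becomes volatile main memory,
# with the node's DRAM acting as a cache in front of it.
ipmctl create -goal MemoryMode=100

# App Direct mode: PMem is exposed as persistent regions that
# software can access directly as a durable data store.
ipmctl create -goal PersistentMemoryType=AppDirect

# Inspect how capacity was allocated (a reboot applies the goal).
ipmctl show -memoryresources
```

A goal can also mix the two modes, splitting capacity between volatile and persistent use on the same modules.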

EPCC’s new research cluster built by Fujitsu provided computing resources for NEXTGenIO research. In an intensive co-creation process, Fujitsu analyzed the NEXTGenIO partners’ application I/O and compute requirements, then designed and manufactured the NEXTGenIO system to overcome existing bottlenecks. The cluster houses 34 dual-socket nodes, each with 2nd Generation Intel Xeon Platinum 8260 processors, 3 TB of Intel Optane persistent memory, and 192 GB of DRAM. A 100 Gbps Cornelis Networks fabric connects the compute nodes, while a 56 Gbps InfiniBand network attaches to an external Lustre parallel filesystem. The architecture allowed researchers to learn how to take advantage of high-capacity persistent memory across many different large parallel codes, such as OpenFOAM, a computational fluid dynamics (CFD) code, and the ECMWF’s IFS forecasting software.

“We are glad Fujitsu could contribute to the success of the NEXTGenIO project,” said Olivier Delachapelle, Fujitsu Head of Category Management-Products Sales Europe. “Solving the I/O bottleneck opens the door to significantly increased performance and scalability towards HPC Exascale. Fujitsu successfully integrated the DCPMM technology into our PRIMERGY and PRIMEQUEST product lines. This breakthrough in I/O performance therefore will also reduce time to results for many of our customers’ applications in a broad range of business sectors.”

Results

EPCC’s new Intel Optane persistent memory cluster delivered promising benefits for the NEXTGenIO project. Benchmarks and large-scale applications were used to measure performance of the cluster with Intel Optane persistent memory.

Working with the ECMWF, project partners were able to develop novel software that takes advantage of the new cluster’s technologies. The ECMWF ensemble forecasting software runs 56 forecasts every day to provide nine- to 16-day weather forecasts, which are made available to member organizations. The software was modified to use Intel Optane persistent memory in app-direct mode, leveraging the technology as a persistent data store.
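Not drawn from the project’s code, but a minimal sketch of the load/store programming style app-direct mode enables: the application maps a file and makes data durable with ordinary memory operations. Here an ordinary temporary file stands in for a DAX-mounted persistent-memory filesystem, and the path and field layout are illustrative only.

```python
import mmap
import os
import struct
import tempfile

# Hypothetical path; on a real system this would live on a DAX-mounted
# PMem filesystem (e.g. /mnt/pmem) rather than a temp directory.
path = os.path.join(tempfile.mkdtemp(), "forecast_state.bin")

SIZE = 4096
with open(path, "wb") as f:
    f.truncate(SIZE)  # reserve the persistent region

# App-direct style access: map the region and use plain loads/stores.
with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), SIZE)
    mm[0:8] = struct.pack("<d", 3.14)  # store a value directly
    mm.flush()                         # push it to the persistence domain
    mm.close()

# Reopen later: the value survives because the store was made durable.
with open(path, "rb") as f:
    value = struct.unpack("<d", f.read(8))[0]
print(value)  # 3.14
```

On real PMem, a library such as PMDK’s libpmem replaces `flush()` with cache-line flush instructions, avoiding the filesystem page cache entirely.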

“With an OpenFOAM code with high amounts of I/O,” explained Adrian Jackson, Senior Research Fellow at EPCC, “we used Intel Optane persistent memory as a filesystem on the node. We used some of our tools to get data into and out of the node and used the persistent memory as the data store. The solver ran over 8x faster—12 percent of the original run time—compared to going to the Lustre filesystem for data.”2
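The node-local filesystem approach Jackson describes can be sketched as a provisioning-and-staging recipe. This is an assumed workflow, not the project’s actual scripts: the device and mount names are illustrative, the commands require root and App Direct-provisioned modules, and `simpleFoam` stands in for whichever OpenFOAM solver is used.

```shell
# Expose an App Direct region as a block device with filesystem-DAX,
# then mount it with DAX so I/O bypasses the page cache.
ndctl create-namespace --mode=fsdax --region=region0   # creates /dev/pmem0
mkfs.ext4 /dev/pmem0
mkdir -p /mnt/pmem
mount -o dax /dev/pmem0 /mnt/pmem

# Stage the case onto node-local PMem instead of Lustre, run the
# solver there, then copy results back to the shared filesystem.
cp -r /lustre/project/case /mnt/pmem/case
(cd /mnt/pmem/case && simpleFoam)
cp -r /mnt/pmem/case/postProcessing /lustre/project/case/
```

The staging copies are where a data-aware scheduler, like those the project developed, would move data in and out of the node automatically.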

Using Intel Optane persistent memory, an OpenFOAM code with high amounts of I/O ran faster compared to a Lustre filesystem.3

Other codes took advantage of the system’s two-level memory architecture, using the large memory capacity of Intel Optane persistent memory for data and the DRAM as cache (memory mode).

In another study, a large-scale simulation that required 20 or more nodes to run without persistent memory ran on a single node with Intel Optane persistent memory, according to Jackson. “While the simulation did not run faster, it ran more efficiently on a single node,” added Jackson.2 Such consolidation promises potential benefits for scaling out large applications on smaller clusters.

Several other studies were completed, including synthetic workflows and IOR benchmarks. The work is described in a University of Edinburgh Research Explorer report.4 Work is ongoing to further optimize I/O for large-scale codes on HPC machines with Intel Optane persistent memory and 2nd Generation Intel Xeon Scalable processors.

Solution Summary

  • Intel Xeon Platinum 8260 processors in a cluster of 34 NEXTGenIO prototype nodes developed and manufactured by Fujitsu
  • Intel Optane persistent memory (3 TB/node)

Product and Performance Information

1 Intel has spun out the Omni-Path business to Cornelis Networks, an independent Intel Capital portfolio company. Cornelis Networks will continue to serve and sell to existing and new customers by delivering leading purpose-built high-performance network products for high performance computing and artificial intelligence. Intel believes Cornelis Networks will expand the ecosystem of high-performance fabric solutions, offering options to customers building clusters for HPC and AI based on Intel® Xeon® processors. Additional details on the divestiture and transition of Omni-Path products can be found at www.cornelisnetworks.com.

2 Results provided by EPCC.
3 Results provided by EPCC.