Better, Faster, and More Scalable: The March to Exascale
Overview
According to the Exascale Computing Project (ECP), exascale supercomputers will perform a quintillion (10^18) calculations per second, enabling far more realistic simulation of the processes behind precision- and compute-intensive applications such as medicine, manufacturing, and climate science. And with ECP's goal of launching such a computationally powerful ecosystem in the US by 2021, the march to exascale is more like a sprint.
Essential to this type of implementation is the development, tuning, and scaling of message-passing interface (MPI) applications across more nodes with more cores and more threads, all interconnected by high-speed fabric.
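For readers new to MPI, the short sketch below shows the basic pattern such applications follow: every rank (process) initializes the library, learns its place in the communicator, exchanges data collectively, and shuts down. It is a generic illustration for context, not code from the webinar.

```c
/* Minimal MPI sketch: each rank contributes a value and every rank
 * receives the global sum. Build with an MPI compiler wrapper (e.g. mpicc)
 * and launch with mpirun/mpiexec. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's ID */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of ranks */

    double local = (double)rank;           /* stand-in for local work */
    double global = 0.0;

    /* Collective operation: sum the local values across all ranks. */
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum over %d ranks: %f\n", size, global);

    MPI_Finalize();
    return 0;
}
```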
Note: The MPICH* source base from Argonne National Laboratory, the basis for the Intel® MPI Library, has been updated.
This webinar addresses how to overcome MPI inefficiencies at scale, including:
- How the Intel MPI Library optimizes the MPICH source
- How Application Performance Snapshot—part of Intel® VTune™ Profiler—can help you quickly understand how your distributed and shared memory applications are performing, and where you can focus your optimization effort
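Among other metrics, Application Performance Snapshot reports how much wall-clock time a run spends inside MPI versus in computation, which is often the first clue to where scaling breaks down. The sketch below illustrates that breakdown by hand with MPI_Wtime, purely to show what is being measured; the tool itself collects this data automatically and needs no source changes.

```c
/* Illustration only: hand-timing compute vs. MPI communication with
 * MPI_Wtime. A profiler such as Application Performance Snapshot gathers
 * this kind of breakdown automatically, without modifying the source. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* "Compute" phase: some purely local work to time. */
    double t0 = MPI_Wtime();
    double local = 0.0;
    for (long i = 0; i < 10 * 1000 * 1000; ++i)
        local += (double)((i + rank) % 7);
    double t_compute = MPI_Wtime() - t0;

    /* "Communication" phase: a collective reduction across all ranks. */
    t0 = MPI_Wtime();
    double global = 0.0;
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    double t_mpi = MPI_Wtime() - t0;

    printf("rank %d of %d: compute %.3f s, MPI %.3f s (sum %.1f)\n",
           rank, size, t_compute, t_mpi, global);

    MPI_Finalize();
    return 0;
}
```

A growing ratio of MPI time to compute time as ranks are added is the kind of scaling inefficiency the webinar discusses.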
Dmitry Durnov
Software development engineer, Intel Corporation
Dmitry is a senior software engineer on the Intel MPI Library team at Intel Corporation. He is one of its lead developers, and his current focus is full-stack MPI product optimization for new Intel platforms (Intel® Xeon® Scalable processors, Intel® Xeon Phi™ processors, and Intel® Omni-Path Architecture).
Deliver flexible, efficient, and scalable cluster messaging with this multifabric message-passing library, which is based on the open source MPICH implementation of the MPI specification.