Move Message Passing Interface Applications to the Next Level
Speaker: Adrian Jackson, EPCC, University of Edinburgh
The Message Passing Interface (MPI) is the de facto standard for distributed-memory programming, and is required by applications that need to exploit more than one node of a computer. However, MPI offers a complex set of functionality, from blocking synchronous communications to neighborhood collectives. Scaling applications from small-scale parallel runs to multi-petascale systems requires significant work to reduce the overheads associated with MPI parallelization, ensuring that as little performance as possible is wasted on parallel costs.
This session discusses and outlines different techniques for reducing and removing MPI parallelization costs with practical examples and demonstrations of the performance improvements.
Learn about research on using shared memory to reduce intra-node MPI communication costs and on reducing the cost of MPI collectives on a range of high-performance computing (HPC) resources (including Intel® Xeon Phi™ processors). This session also discusses how multi-channel DRAM (MCDRAM) can affect general MPI performance on Intel Xeon Phi processors, including the best ways to configure the MPI job launcher and pin MPI processes, and which communication libraries to use when running across multiple Intel Xeon Phi processors.
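As a concrete illustration of the launcher-configuration and pinning topics mentioned above, the sketch below shows one common way to pin ranks and target MCDRAM when using the Intel MPI Library. The specific values, the rank count, and the application name `my_mpi_app` are illustrative assumptions, not recommendations from the session itself.

```shell
# Sketch of an MPI launch with explicit pinning and MCDRAM placement.
# Assumes the Intel MPI Library and a Xeon Phi node booted in flat memory
# mode, where MCDRAM appears as a separate NUMA node (commonly node 1).

export I_MPI_PIN=1              # enable MPI process pinning
export I_MPI_PIN_DOMAIN=core    # bind one MPI rank per physical core
export I_MPI_FABRICS=shm:tcp    # shared memory within a node, TCP between nodes

# numactl --preferred places allocations in MCDRAM (NUMA node 1 here)
# and falls back to DDR once MCDRAM is full; my_mpi_app is a placeholder.
mpirun -n 64 numactl --preferred=1 ./my_mpi_app
```

In cache memory mode no `numactl` binding is needed, since MCDRAM acts transparently as a last-level cache; flat mode with explicit placement gives finer control at the cost of this extra configuration step.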
Product and Performance Information
Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.