Parallel Universe Magazine - Issue 37, July 2019



By Henry A. Gabb

It probably seems like a long time ago, but it’s just been three months since the Event Horizon Telescope published its black hole image. This was obviously an amazing scientific feat. But a single image doesn’t convey the vast amount of expertise, data, and computation that went into its creation. The Event Horizon General Relativistic Magnetohydrodynamic Code Comparison Project provides details about some of the codes involved, including ECHO*. Advancing the Performance of Astrophysics Simulations with ECHO-3DHPC* (published last year in issue 34 of The Parallel Universe) describes the optimization of this code by researchers from the Leibniz Supercomputing Centre in collaboration with Intel.

Our feature article in this issue, Leadership Performance with 2nd-Generation Intel® Xeon® Scalable Processors, describes the newest addition to the Intel® Xeon® processor family. This new processor includes Intel® Deep Learning Boost, support for Intel® Optane™ DC persistent memory, and up to 56 cores and 12 DDR4 memory channels per socket. After reading this article, you’ll know why this new processor is setting new performance records. Using the Latest Performance Analysis Tools to Prepare for Intel® Optane™ DC Persistent Memory shows you how to determine if your application will benefit from this new memory technology, and how to analyze applications that use this technology.

Non-uniform memory access (NUMA) architectures have been around for a long time. Most of us know that threads should stay close to their data for faster memory access, but few of us pay attention to where our threads are actually running or whether the operating system is moving our threads around. Measuring the Impact of NUMA Migrations on Performance helps you to understand how your threads are behaving on NUMA systems.

Our series on optimizing and parallelizing Python* codes continues in this issue. Parallelism in Python*: Directing Vectorization with NumExpr* shows how simple code modifications can drastically improve the performance of complex mathematical expressions.
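For readers who haven’t tried NumExpr, the kind of change the article has in mind is roughly the following (a minimal sketch, assuming NumPy and NumExpr are installed; the article’s own examples and measurements go further):

    import numpy as np
    import numexpr as ne

    a = np.random.rand(1_000_000)
    b = np.random.rand(1_000_000)

    # Plain NumPy evaluates operator by operator, allocating a temporary
    # array for each intermediate result.
    r1 = 2.0 * a + 3.0 * b**2

    # NumExpr compiles the whole expression and evaluates it in
    # cache-friendly, multithreaded chunks, avoiding the temporaries.
    r2 = ne.evaluate("2.0 * a + 3.0 * b**2")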

Turbo-Charged Open Shading Language* on Intel® Xeon® Processors with Intel® Advanced Vector Extensions 512 describes Intel’s efforts to vectorize the Oscar*-winning Open Shading Language*, the de facto open source standard for digital content creation that has over 100 movie credits.

Finally, we close this issue with two guest editorials: one from Mike Croucher of the Numerical Algorithms Group and another from James Reinders, our editor emeritus. Mike describes The Performance Optimisation and Productivity (POP) Project, which the European Union funds to improve software performance. In Seven Ways HPC Software Developers Can Benefit from Intel® Software Investments, James describes how you can maximize performance while minimizing effort by taking advantage of work that Intel has already done. These editorials show that sometimes the path to performance is just a matter of knowing what’s available.

As always, don’t forget to check out Tech.Decoded for more information on Intel's solutions for code modernization, visual computing, data center and cloud computing, data science, and systems and IoT development.

Henry A. Gabb
July 2019