Welcome to our first issue of The Parallel Universe for 2019. I didn’t make any bold predictions at the start of 2018, only the safe observation that the parallel computing future is heterogeneous. That trend was already well underway and will continue to gain momentum this year. I won’t make any bold predictions this year either. I’ll just call out a few trends I’m watching.
The open source community initiative on software-defined visualization (SDVis.org) continues to demonstrate that the CPU is better for large-scale rendering than GPU-based solutions, which suffer from memory limitations and high cost. This is the topic of our feature article, Intel® Rendering Framework Using Software-Defined Visualization. The advantage of SDVis isn’t news to the film industry, which has been doing CPU-based rendering for many years, but SDVis is spreading to other computational domains where visualization of ever-larger datasets is needed.
This brings us to another trend I’m watching closely: “The Convergence of HPC, BDA, and AI in Future Workflows” (a talk I gave recently at the 2018 New York Scientific Data Summit at Brookhaven National Laboratory). Trish Damkroger, Intel’s vice president and general manager of Extreme Computing, recently published a similar viewpoint on Top500.org: “The Intersection of AI, HPC, and HPDA: How Next-Generation Workflows Will Drive Tomorrow’s Breakthroughs.” The line between traditional high-performance computing, artificial intelligence, and big data analytics is blurring, so I asked the Intel Data Center Group to provide a guest commentary: Unifying AI, Analytics, and HPC on a Single Cluster.
As I’ve said before, heterogeneous parallelism is the future, and FPGAs are getting attention as an offload device for software acceleration. James Reinders, our editor emeritus, published several articles last year on programming FPGAs. In this issue, Professor Martin Herbordt from Boston University shares some of his best practices for OpenCL programming on FPGAs. In Advancing OpenCL™ for FPGAs, he walks us through the optimization of some common numerical algorithms.
We round out this issue with three articles on code optimization: Parallelism in Python*, Remove Memory Bottlenecks Using Intel® Advisor, and MPI-3 Non-Blocking I/O Collectives in Intel® MPI Library.
Future issues of The Parallel Universe will feature articles on using just-in-time compilation to optimize Python code, new features in Intel® Software Development Tools, performance case studies, and much more. Be sure to subscribe so you won’t miss a thing.
Also, don’t forget to check out Tech.Decoded for more information on Intel solutions for code modernization, visual computing, data center and cloud computing, data science, and systems and IoT development.
Henry A. Gabb