The Parallel Universe Magazine
Intel’s quarterly magazine helps you take your software development into the future with the latest tools, tips, and training to expand your expertise.
Issue 52, April 2023
Feature: Supply-Chain Optimization at Enterprise Scale
This issue is packed with AI-related content, along with several articles on heterogeneous parallel programming.
Download Issue 52 to find:
- The featured article, by Ted Jones and Karl Eklund (Red Hat*) and Karol Brejna and Piotr Grabuszynski (Intel), demonstrates the open source Red Hat OpenShift* Data Science platform on Intel® architecture.
- Articles on Transformer model optimizations, improving utility maintenance prediction, and AI democratization.
- A look at solving heterogeneous programming challenges.
- Articles that explore the oneAPI SYCL* device discovery API and the Level Zero hardware abstraction layer.
Contents
Learn how to use the open source Red Hat OpenShift* Data Science platform with Intel®-optimized software to improve data science and analytics processes.
Accelerate machine learning with the Intel® AI Analytics Toolkit. The Predictive Asset Analytics Reference Kit is designed to predict the status of utility assets and the probability of failure. It is one of many practical AI reference kits developed in collaboration with Accenture*.
Express heterogeneous parallelism using open, standard programming languages. Find out how Fortran and OpenMP* solve the three main heterogeneous computing challenges: offloading computation to an accelerator, managing data transfer between disjoint memories, and calling existing APIs on the target device.
Learn about the accelerators in your system. John Pennycook and Henry Gabb show how to use the SYCL device discovery API to determine what accelerators are available in a system and how to query their characteristics.
Fuse operations to take advantage of the Intel® oneAPI Deep Neural Network Library (oneDNN). Learn about optimizations to the popular Transformer model from Google* for natural language processing, including significant improvements in both inference throughput and latency.
Democratize AI with Intel-optimized software. Learn how to use the Intel® End-to-End AI Optimization Kit to make your AI pipeline faster, simpler, and more accessible by improving scale-up and scale-out efficiency, delivering more efficient deep learning models, and automating the pipeline.
Accelerate linear algebra computations with standard, open approaches. In this article, Henry Gabb and Nawal Copty show how to solve a batch of linear systems on an accelerator using Intel® oneAPI Math Kernel Library (oneMKL) and OpenMP.
An open, back-end approach to compute anywhere. The Level Zero hardware abstraction layer makes oneAPI incredibly versatile. The article gives an overview of the API, the basic architecture, and the benefits of low-level access control over compute unit resources.