The Parallel Universe, Issue 50




Letter from the Editor

Just Got Back from Intel® Innovation

The Intel® Developer Forum was a big part of my early career at Intel, so I was disappointed when it was discontinued. As I write this, Intel® Innovation 2022 has just wrapped up, and it reminded me a lot of the Intel Developer Forum: plenty of big announcements, technical sessions, and breakouts, plus chances to catch up with colleagues, collaborators, and customers. The conference site has the full agenda, speaker information, and session content, so I won’t try to recap the entire event in a few paragraphs. The Day 1 and Day 2 highlights cover the key announcements, such as:

  • The Intel® Developer Cloud will make new and future hardware platforms, like 4th Gen Intel® Xeon® Scalable processors and Intel® Data Center GPUs, available for prelaunch development and testing.
  • The new Intel® Geti™ platform will enable enterprises to quickly develop and deploy computer vision AI.
  • Intel previewed future high-volume, system-in-package capabilities that will enable pluggable co-packaged photonics for a variety of applications.
  • The open oneAPI specification will now be managed by Codeplay, an Intel subsidiary.
  • Intel released three new AI reference kits focused on healthcare use cases.

Much of this issue focuses on sustainable AI, model optimization, and deep learning performance. Our feature article, Maintaining Performant AI in Production, covers MLOps, an often-overlooked component of the AI workflow. It describes how to build an MLOps environment using the Intel® AI Analytics Toolkit, MLflow*, and AWS*. Sustainable AI is becoming an important topic as the use of AI grows and models get larger. This is discussed in our second article, Deep Learning Model Optimizations Made Easy (or at Least Easier). Along the same lines, The Habana Gaudi2* Processor for Deep Learning describes improvements to this already efficient architecture, with impressive MLPerf benchmark results to back them up. PyTorch* Inference Acceleration with Intel® Neural Compressor describes a new, open-source Python* library for model compression that reduces model size and increases the speed of deep learning inference on CPUs or GPUs. Finally, Accelerating PyTorch with Intel® Extension for PyTorch describes our open-source extension for boosting PyTorch performance.
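
To give a flavor of that last article, here is a minimal sketch of the extension’s basic usage pattern: import it alongside PyTorch and pass an eval-mode model through ipex.optimize() before running inference. The ResNet-50 model and random input below are illustrative placeholders of my own, not code from the article, and the bfloat16 setting is optional (it assumes hardware bfloat16 support).

    import torch
    import torchvision.models as models
    import intel_extension_for_pytorch as ipex

    # Placeholder model and input for illustration; any eval-mode
    # PyTorch model can be passed to ipex.optimize().
    model = models.resnet50(pretrained=True).eval()
    data = torch.rand(1, 3, 224, 224)

    # Apply the extension's operator and graph optimizations.
    # dtype=torch.bfloat16 is optional and assumes hardware bfloat16 support.
    model = ipex.optimize(model, dtype=torch.bfloat16)

    # Run inference under automatic mixed precision on the CPU.
    with torch.no_grad(), torch.cpu.amp.autocast():
        output = model(data)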

From there, we turn our attention to heterogeneous computing using Python. In Accelerating Python Today, Editor Emeritus James Reinders helps us get ready for the Cambrian explosion in accelerator architectures. As always, don’t forget to check out Tech.Decoded for more information on Intel solutions for code modernization, visual computing, data center and cloud computing, data science, systems and IoT development, and heterogeneous parallel programming with oneAPI.

Henry A. Gabb
October 2022

Henry A. Gabb, Senior Principal Engineer at Intel Corporation, is a longtime high-performance and parallel computing practitioner who has published numerous articles on parallel programming. He was editor/coauthor of “Developing Multithreaded Applications: A Platform Consistent Approach” and program manager of the Intel/Microsoft Universal Parallel Computing Research Centers.

LinkedIn | Twitter