LETTER FROM THE EDITOR
Just Got Back from Intel® Innovation
The Intel® Developer Forum was a big part of my early career at Intel, so I was disappointed when it was discontinued. As I write this, Intel® Innovation 2022 has just wrapped up, and it reminded me a lot of the Intel Developer Forum. There were plenty of big announcements, technical sessions, and breakouts, plus chances to catch up with colleagues, collaborators, and customers. The link above has the full agenda, speaker info, and content, so I won’t try to recap the entire conference in a few paragraphs. The Day 1 and Day 2 highlights cover the key announcements, such as:
- The Intel® Developer Cloud will make new and future hardware platforms, like 4th Gen Intel® Xeon® Scalable processors and Intel® Data Center GPUs, available for prelaunch development and testing.
- The new Intel® Geti™ platform will enable enterprises to quickly develop and deploy computer vision AI.
- Intel previewed future high-volume, system-in-package capabilities that will enable pluggable co-package photonics for a variety of applications.
- The open oneAPI specification will now be managed by Codeplay, an Intel subsidiary.
- Intel released three new AI reference kits focused on healthcare use cases.
Much of this issue focuses on sustainable AI, model optimization, and deep learning performance. Our feature article, Maintaining Performant AI in Production, covers MLOps, an often-overlooked component of the AI workflow. It describes how to build an MLOps environment using the Intel® AI Analytics Toolkit, MLflow*, and AWS*. Sustainable AI is becoming an important topic as the use of AI and the size of models increase. This is discussed in our second article, Deep Learning Model Optimizations Made Easy (or at Least Easier). Along these same lines, The Habana Gaudi2* Processor for Deep Learning describes improvements to this already efficient architecture, with impressive MLPerf benchmark results to back it up. PyTorch* Inference Acceleration with Intel® Neural Compressor describes a new, open-source Python* library for model compression that reduces model size and increases the speed of deep learning inference on CPUs or GPUs. Finally, Accelerating PyTorch with Intel® Extension for PyTorch describes our open-source extension to boost performance.
From there, we turn our attention to heterogeneous computing using Python. In Accelerating Python Today, Editor Emeritus James Reinders helps us get ready for the coming Cambrian explosion in accelerator architectures. As always, don’t forget to check out Tech.Decoded for more information on Intel solutions for code modernization, visual computing, data center and cloud computing, data science, systems and IoT development, and heterogeneous parallel programming with oneAPI.
Henry A. Gabb