Software AI Acceleration

Published: 06/02/2021  

Last Updated: 08/20/2021

Intel at Software for AI Optimization Summit 2021

During this virtual summit, held June 8-9, 2021, Intel presented its latest software optimization tools and showed how to seamlessly integrate and deploy them into AI and machine-learning workflows.

We presented how software AI acceleration and other optimizations can improve the performance of AI hardware (CPUs, GPUs, FPGAs, and dedicated AI accelerators) by reducing training time, inference latency, energy consumption, memory usage, and cost, all while maintaining accuracy.

The summit's opening keynote was delivered by Wei Li, vice president and general manager of Machine Learning Performance at Intel. The event also featured additional speaker sessions, a virtual booth, and access to on-demand content.

Software AI Accelerators: The Next Frontier

Driven by the exponential growth of data, AI demands that computer systems deliver significantly higher performance to meet ever-expanding computing requirements. In this talk, we show software AI accelerators that deliver orders of magnitude performance gain for AI across deep learning, classical machine learning, and graph analytics. This software acceleration is key to enabling AI everywhere with applications across sports, telecommunication, drug discovery, and more.

Reduce Deep-Learning Integration Costs and Maximize Compute Efficiency

Deep-learning frameworks rely on low-level performance libraries to achieve the best runtime efficiency. As frameworks and libraries evolve quickly, integrating the latest optimizations for various AI hardware into each framework has been a significant challenge. The Graph API in the Intel® oneAPI Deep Neural Network Library (oneDNN) extends oneDNN with a graph interface that eases the integration effort for fusion optimization. The same graph integration can be reused across various AI hardware, including AI accelerators. The talk covers key interface design considerations that allow different implementations while maximizing deep-learning performance.
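To illustrate the idea behind a graph interface for fusion, here is a minimal toy sketch in plain Python. It is not the actual oneDNN Graph API: the `Graph`, `add_op`, and `get_partitions` names below only mimic the general workflow (build an op graph, ask the library for fusable partitions, execute each partition as one unit) under assumed, simplified semantics.

```python
# Toy sketch of graph-based fusion (NOT the real oneDNN Graph API).
# A framework records ops into a graph; the library groups consecutive
# ops that match a known fusion pattern into a single "partition" that
# can run as one fused kernel, avoiding memory round-trips in between.

def relu(x):
    return [max(v, 0.0) for v in x]

def scale(x, s):
    return [v * s for v in x]

class Graph:
    """Toy op graph: each op is a (name, kind, fn) tuple."""
    def __init__(self):
        self.ops = []

    def add_op(self, name, kind, fn):
        self.ops.append((name, kind, fn))

    def get_partitions(self, patterns):
        """Greedily group consecutive ops that match a fusion pattern."""
        partitions, i = [], 0
        while i < len(self.ops):
            for pat in patterns:
                kinds = [k for _, k, _ in self.ops[i:i + len(pat)]]
                if kinds == list(pat):
                    partitions.append(self.ops[i:i + len(pat)])
                    i += len(pat)
                    break
            else:
                partitions.append([self.ops[i]])
                i += 1
        return partitions

g = Graph()
g.add_op("scale0", "scale", lambda x: scale(x, 2.0))
g.add_op("relu0", "relu", relu)

# "scale -> relu" is registered as a fusable pattern, so both ops land
# in the same partition.
parts = g.get_partitions([("scale", "relu")])

def run(partitions, x):
    for part in partitions:
        for _, _, fn in part:  # a real fused kernel would do this in one pass
            x = fn(x)
    return x

print(len(parts))               # 1: both ops fused into one partition
print(run(parts, [-1.0, 3.0]))  # [0.0, 6.0]
```

The key design point this mirrors is that the framework only needs to hand the library a graph once; pattern matching and fusion decisions stay inside the library, so the same integration works unchanged when new fusions or new hardware backends are added.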

Advanced Techniques to Accelerate Model Tuning

Intel discusses the algorithms and implementations that power SigOpt*, a platform for efficiently conducting model development and hyperparameter optimization. We cover adaptations of black-box optimization methodologies that best serve our customers and their use cases, followed by an overview of multiobjective problems and the ways they can be addressed, both conceptually and practically. Finally, we present some of our ongoing model-specific work and other future opportunities.
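The core contract of black-box optimization is simple: the optimizer sees only (parameters in, score out) and never the model internals. The sketch below shows that loop with plain random search as the stand-in suggestion strategy; SigOpt itself uses far more sophisticated Bayesian and multiobjective methods, and the `suggest`, `optimize`, and toy objective here are hypothetical illustrations, not SigOpt's API.

```python
# Minimal black-box hyperparameter search (illustrative only; real
# platforms replace random suggestion with Bayesian optimization).
import random

def suggest(bounds, rng):
    """Propose a candidate uniformly at random within each bound."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in bounds.items()}

def optimize(objective, bounds, budget=50, seed=0):
    """Query the black-box objective `budget` times; keep the best score."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(budget):
        params = suggest(bounds, rng)
        score = objective(params)  # the only signal the optimizer observes
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective with a known peak at lr=0.1, batch_exp=5.0.
def objective(p):
    return -((p["lr"] - 0.1) ** 2 + (p["batch_exp"] - 5.0) ** 2)

bounds = {"lr": (0.001, 1.0), "batch_exp": (3.0, 8.0)}
best, score = optimize(objective, bounds, budget=200)
print(round(best["lr"], 2), round(best["batch_exp"], 1))
```

Because the objective is opaque, the same loop works for training accuracy, latency, or any other measurable score; smarter suggestion strategies simply spend the evaluation budget more efficiently.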

Learn more about our innovations in data science, machine learning, AI engineering, and IoT at the AI Developer Program.

Product and Performance Information

1. Performance varies by use, configuration, and other factors. Learn more at www.Intel.com/PerformanceIndex.