TensorFlow* and Intel® oneAPI Deep Neural Network Library


Rapid growth in artificial intelligence and machine-learning innovations and workloads necessitates constant developments in both software and hardware infrastructure. Developers of TensorFlow* (Google's end-to-end, open-source machine-learning framework) and the Intel® oneAPI Deep Neural Network Library (oneDNN) have been collaborating closely to enable users to take full advantage of new hardware features and accelerators, with a focus on x86 architecture. This talk covers recent projects, such as int8 and bfloat16 vectorization support that brings custom oneDNN operations to stock TensorFlow, and the upcoming Intel® XPU device plug-in for TensorFlow.

Topics covered include:

Intel® Advanced Vector Extensions 512 (Intel® AVX-512) with Vector Neural Network Instructions (VNNI)
Intel AVX-512 with bfloat16 (BF16) instructions
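As a practical illustration of the points above, the sketch below shows one documented way to exercise these oneDNN code paths from Python: the TF_ENABLE_ONEDNN_OPTS environment variable toggles the oneDNN-optimized kernels in stock TensorFlow, and the Keras mixed-precision API selects a bfloat16 compute policy that can use AVX-512 BF16 instructions on supported CPUs. The exact TensorFlow version requirements and defaults are assumptions here; consult the TensorFlow release notes for your build.

```python
import os

# The oneDNN-optimized kernels ship in stock TensorFlow; this flag toggles
# them and must be set before TensorFlow is first imported. (They are
# enabled by default on Linux x86-64 in recent TensorFlow releases.)
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

try:
    import tensorflow as tf

    # A mixed-bfloat16 policy lets oneDNN dispatch to AVX-512 BF16
    # instructions on CPUs that support them, while keeping variables
    # in float32 for numerical stability.
    tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")
    policy = tf.keras.mixed_precision.global_policy()
    print("compute dtype:", policy.compute_dtype)    # bfloat16
    print("variable dtype:", policy.variable_dtype)  # float32
except ImportError:
    # TensorFlow is optional for this sketch; the flag alone is harmless.
    print("TensorFlow is not installed in this environment.")
```

Whether the BF16 instructions are actually used still depends on the CPU; on older hardware TensorFlow falls back to emulated bfloat16 arithmetic.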

Speaker

Penporn Koanantakool is a senior software engineer at Google. She leads TensorFlow’s performance optimization collaboration with Intel. Penporn holds a PhD in computer science from the University of California, Berkeley, and a bachelor of engineering degree in computer engineering from Kasetsart University, Thailand.


Product and Performance Information

1. Performance varies by use, configuration, and other factors. Learn more at www.Intel.com/PerformanceIndex.