A Deep Learning Challenge with the Supercomputer Fugaku

Deep learning is developing rapidly, and the computational requirements for model development continue to grow, because computationally intensive training runs are repeated with different hyperparameters and other conditions. To further improve the performance of deep learning, it is therefore important to develop training techniques that take full advantage of large-scale, high-performance systems with many compute nodes, such as supercomputers. This talk introduces the efforts toward achieving the best performance on the MLPerf* HPC benchmark, which measures training performance by simultaneously training multiple models at large scale, using Fugaku, the first supercomputer based on the Arm* instruction set to be ranked number one in the TOP500 list.

Speaker

Kentaro Kawakami is a senior researcher at the Computing Laboratory, Fujitsu* Research, Fujitsu Japan. He joined Fujitsu in 2007 and has been involved in the research and development of image-codec LSIs, wireless sensor nodes, and AI software for Arm HPC. His department researches and develops techniques to accelerate deep learning on Fugaku, PRIMEHPC FX1000/FX700, and GPU-based supercomputers. You can find him on GitHub*.