Probabilistic programming languages (PPLs) continue to receive attention for performing Bayesian inference in complex generative models. Yet PPL-based science applications remain scarce: rewriting complex scientific simulators in a PPL is impractical, and inference carries a high computational cost with few scalable implementations.

Enter Etalumis ("simulate" spelled backwards), a new system that uses machine learning to bring scalable Bayesian inference to existing scientific simulators.

In this session, Lei Shao, an Intel deep-learning software engineer, presents the novel PPL framework, which couples directly to existing scientific simulators through a cross-platform probabilistic execution protocol. The framework provides Markov chain Monte Carlo (MCMC) methods and a deep-learning-based inference compilation engine for tractable inference.
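The idea behind inference compilation can be sketched at toy scale: a neural network trained on simulator traces learns to propose latent variables given an observation, and that proposal is then used for importance sampling against the model's true prior and likelihood. The sketch below is illustrative only: the conjugate-Gaussian model is a stand-in for a real simulator, and a fixed analytic proposal stands in for the trained network.

```python
import math
import random

# Toy generative model (stand-in for a scientific simulator):
#   latent:   mu ~ Normal(0, 1)
#   observed: y  ~ Normal(mu, 0.5)
random.seed(0)

def log_normal(x, mean, std):
    return -0.5 * ((x - mean) / std) ** 2 - math.log(std * math.sqrt(2 * math.pi))

y_obs = 1.0

# In inference compilation, a network trained on simulator traces would map
# y_obs to proposal parameters; this fixed analytic proposal stands in for it.
q_mean, q_std = 0.8, 0.5

# Importance sampling with the learned proposal: weight each proposed latent
# by prior * likelihood / proposal.
samples, weights = [], []
for _ in range(200_000):
    mu = random.gauss(q_mean, q_std)
    log_w = (log_normal(mu, 0.0, 1.0)          # prior
             + log_normal(y_obs, mu, 0.5)      # likelihood
             - log_normal(mu, q_mean, q_std))  # proposal
    samples.append(mu)
    weights.append(math.exp(log_w))

posterior_mean = sum(w * s for w, s in zip(weights, samples)) / sum(weights)
print(posterior_mean)  # analytic posterior mean for this model is 0.8
```

A good proposal keeps the importance weights well-behaved, which is exactly what training the proposal network on simulator executions buys at scale.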

In the session, she:

  • Performs distributed training of a dynamic 3D convolutional neural network–long short-term memory (3DCNN-LSTM) architecture with a PyTorch* and MPI-based framework on 1,024 32-core CPU nodes of the Cori supercomputer with a global minibatch size of 128k, achieving 450 TFLOPS through PyTorch enhancements.
  • Demonstrates a Large Hadron Collider use case with SHERPA, a C++ Monte Carlo event generator for high-energy particle physics.
  • Achieves the largest-scale posterior inference in a Turing-complete PPL.
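The distributed training in the first bullet follows the standard data-parallel pattern: each MPI rank computes gradients on its shard of the global minibatch, and an allreduce averages them before every optimizer step. A minimal single-process sketch of that averaging, with plain Python standing in for PyTorch and MPI (all names illustrative):

```python
# Data-parallel gradient averaging, mimicking an MPI allreduce across ranks.
# Model: scalar linear regression, loss = mean((w*x - t)^2).

def local_gradient(w, xs, ts):
    """Gradient of the mean-squared loss over one rank's data shard."""
    n = len(xs)
    return sum(2 * x * (w * x - t) for x, t in zip(xs, ts)) / n

w = 0.5
xs = [0.1 * i for i in range(1, 17)]   # global minibatch of 16 samples
ts = [2.0 * x for x in xs]             # targets generated with w_true = 2.0

n_ranks = 4
shard = len(xs) // n_ranks
# Each rank holds an equal shard of the global minibatch.
local_grads = [
    local_gradient(w, xs[r * shard:(r + 1) * shard], ts[r * shard:(r + 1) * shard])
    for r in range(n_ranks)
]

# Allreduce-average: what MPI_Allreduce(SUM) divided by n_ranks computes.
avg_grad = sum(local_grads) / n_ranks

# With equal shard sizes, the averaged gradient equals the full-batch gradient.
full_grad = local_gradient(w, xs, ts)
print(avg_grad, full_grad)
```

Each rank then applies the same optimizer step with the averaged gradient, keeping all model replicas in sync; at 1,024 nodes this averaging is where the MPI framework and the PyTorch enhancements do their work.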


Lei Shao
Deep-learning software engineer, Intel Corporation

Lei Shao is an industry-leading expert in machine learning and large-scale distributed deep learning, with more than 20 patents and numerous publications. She joined Intel in 2003 and holds a PhD in electrical engineering from the University of Washington in Seattle.



Intel® oneAPI Deep Neural Network Library (oneDNN)

Develop fast neural networks on Intel® CPUs and GPUs with performance-optimized building blocks. oneDNN is included as part of the Intel® oneAPI Base Toolkit.

