GE Healthcare Solutions Accelerated by Intel® oneAPI Toolkits
Overview
GE Healthcare collaborated with Intel on oneAPI to make its code cross-architecture ready. Combined with the other Intel oneAPI tools, the Intel® oneAPI Math Kernel Library helped boost performance. For AI optimizations, porting code from the CUDA* Deep Neural Network (cuDNN) library to the Intel® oneAPI Deep Neural Network Library (oneDNN) was straightforward. As a result, GE Healthcare estimates potentially saving millions of dollars in configuration costs and years of engineering effort.
Get the Software
Develop high-performance, data-centric applications for CPUs, GPUs, and FPGAs with this core set of tools, libraries, and frameworks, including LLVM-based compilers.
Transcript
Intel is definitely one of our trusted partners, and we have collaborated with them for many years. Today, Intel® technology is heavily used and drives many of our medical devices, like X-ray machines, mammography scanners, CT, MR, PET, and molecular imaging scanners as well.
So now we're really excited to collaborate with the Intel team on oneAPI because we see a lot of potential benefits. The benefit is that once we have our legacy code ported from C++ to oneAPI, it can immediately take advantage of multicore Intel® CPUs, and it also makes our legacy code GPU-ready. It's a very strong statement of oneAPI's portability that exactly the same code, just compiled for a different target, can run on an Intel CPU, an Intel® GPU, as well as GPUs from other vendors. So we see that as a major advantage of oneAPI. We can do it with very minimal effort.
We also utilize Intel® software tools like MKL to get optimal performance for our software on Intel CPUs.
Another interesting use case for oneAPI, and specifically for one of the libraries called oneDNN, is the ability to program fixed-function hardware dedicated to accelerating convolution operations for AI and deep-learning inference and training. We were quite excited to learn how to port cuDNN code to oneDNN, and it turned out to be relatively straightforward as well. Once we ported our code from cuDNN to oneDNN, we could run it on Intel GPUs. We believe this hardware will keep growing to address the need to accelerate AI and deep-learning inference.
We think that we can train our scientists to start using oneAPI and DPC++ for their research and development activities. That will enable them to run even their prototype code while already taking advantage of all the cores available in their CPUs, and maybe also accelerating it with a GPU device. So it should make their development and research process more efficient.
We see oneAPI as potentially becoming a de facto industry standard for programming heterogeneous compute systems, and we believe that using oneAPI provides us with the ability to port our code across multiple architectures and even multiple vendors, potentially saving millions of dollars in configuration costs as well as many, many years of engineering effort that we would have to invest if we had to completely rewrite this code from one programming model to another.