Intel® Gaudi® AI Accelerator
First-generation deep learning training and inference processor.
For leading price performance in the cloud and on-premises
The first-generation Intel® Gaudi® AI processor gives customers a cost-effective, high-performance alternative to comparable GPUs for both training and inference. This deep learning architecture enables AWS DL1 instances based on the Intel Gaudi AI accelerator to deliver up to 40% better price/performance for training than comparable Nvidia GPU-based instances. The Intel Gaudi AI accelerator’s efficient architecture also enables the Supermicro X12 Server featuring Intel Gaudi AI accelerators to give customers an equally significant price/performance advantage over GPU-based servers.
Intel® Gaudi® AI Accelerator in the Cloud
Get started with Amazon EC2 DL1 Instances based on Intel Gaudi AI accelerators.
Intel® Gaudi® AI Accelerator in the Data Center
Build Intel Gaudi AI processors into your data center with Supermicro.
What makes the Intel Gaudi AI accelerator so efficient?
- 16 nm process technology
- Deep learning-optimized matrix multiplication engine
- 8 programmable Tensor Processor Cores
- 32 GB of onboard HBM2 memory
- 24 MB of on-chip SRAM
- 10 integrated 100 Gb Ethernet ports
Options for building Intel Gaudi AI accelerator systems on premises
For customers who want to build out on-premises systems, we recommend the Supermicro X12 Server, which features eight Intel Gaudi AI processors. For customers who prefer to configure their own servers based on Intel Gaudi AI accelerators, we provide two reference model options: the HLS-1 and the HLS-1H.
For more information on these server options, see details >
Making development on Intel Gaudi AI accelerator fast and easy: Intel® Gaudi® software
Intel Gaudi software is optimized for deep learning model development and eases migration of existing GPU-based models to Intel Gaudi platform hardware. It integrates with the PyTorch and TensorFlow frameworks and supports a rapidly growing array of computer vision, natural language processing, and multimodal models. In fact, more than 500,000 models on Hugging Face are easily enabled on Intel Gaudi accelerators with the Optimum Habana software library.
Getting started with model migration is as easy as adding two lines of code, and for expert users who wish to program their own kernels, the Intel Gaudi platform offers a full toolkit and libraries for that as well. Intel Gaudi software supports training and inference on both first-generation Intel Gaudi accelerators and Intel Gaudi 2 accelerators.
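As a minimal sketch of what those two additions look like in a PyTorch training script (assuming the Habana PyTorch bridge documented for Intel Gaudi software; exact module names may vary by release), the Gaudi-specific lines are the bridge import and a `mark_step()` call, with the model and tensors placed on the `"hpu"` device. The `gaudi_device()` helper below is a hypothetical convenience, not part of the Intel Gaudi API:

```python
import importlib.util


def gaudi_device() -> str:
    """Return "hpu" when the Habana PyTorch bridge is installed, else "cpu".

    The CPU fallback lets the same script run on machines without Gaudi hardware.
    """
    if importlib.util.find_spec("habana_frameworks") is not None:
        # Importing the bridge registers the "hpu" device with PyTorch.
        import habana_frameworks.torch.core  # noqa: F401
        return "hpu"
    return "cpu"


# In a typical PyTorch training loop, the two Gaudi-specific additions are:
#
#   import habana_frameworks.torch.core as htcore   # addition 1: load the bridge
#   ...
#   loss.backward()
#   htcore.mark_step()   # addition 2: flush accumulated ops to the device
#   optimizer.step()
#   htcore.mark_step()
#
# with the model and inputs moved via model.to(gaudi_device()).
```

On a machine with the Habana software stack installed, `gaudi_device()` returns `"hpu"` and the training loop executes on the accelerator; elsewhere it degrades gracefully to `"cpu"`.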
For more information on how the Intel Gaudi platform makes it easy to migrate existing models or build new ones on Gaudi, see our Intel Gaudi software product page >