Three Ways to Get Started with Intel® Gaudi® 2 AI Accelerators for Generative AI and LLM
Overview
The Intel® Gaudi® 2 AI accelerator has emerged as a game changer for AI compute, built from the ground up to accelerate generative AI and large language models (LLMs). (Per the latest MLPerf* Training v4.0 benchmark, it remains the only MLPerf-benchmarked alternative to the NVIDIA* H100 for training and inference.1)
This session unpacks three ways to get started with Intel Gaudi 2 AI accelerators to supercharge your LLM-based applications with performance, productivity, and efficiency, so you can:
- Run LLMs with Hugging Face* Transformer models for training.
- Run inference on generative AI (GenAI) models using the Hugging Face* Optimum Habana library.
- Migrate your models from a GPU environment to Intel Gaudi 2 AI accelerators with just a few lines of code.
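The migration step above can be sketched as a small device-selection helper. This is a minimal illustration, not the official migration flow: `pick_device` is a hypothetical name, and the probe assumes the Habana PyTorch bridge (`habana_frameworks`) is installed on Gaudi machines, which is how Gaudi devices are exposed to PyTorch as `"hpu"`.

```python
import importlib.util


def pick_device() -> str:
    """Return the best available PyTorch device string.

    Prefers Intel Gaudi ("hpu") when the Habana bridge module is
    installed, then CUDA if torch reports a GPU, else CPU. Uses a
    stdlib module probe so the sketch runs on any machine; on real
    hardware you would pass the result to torch.device(...).
    """
    # The Habana bridge package ships with the Gaudi software stack
    # (assumption: its top-level module is named "habana_frameworks").
    if importlib.util.find_spec("habana_frameworks") is not None:
        return "hpu"
    if importlib.util.find_spec("torch") is not None:
        import torch
        if torch.cuda.is_available():
            return "cuda"
    return "cpu"
```

With a helper like this, moving a model off a GPU environment is largely a matter of replacing hard-coded `"cuda"` strings with the selected device, which is the "few lines of code" change the session describes.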
Skill level: Intermediate
Get the Software
- The Docker* image for Intel Gaudi AI accelerators contains all the software needed to run your models.
Get the Code