How to Set Up Distributed Training on Google Cloud Platform* Service to Fine-Tune an LLM
Overview
Learn how to fine-tune the nanoGPT model on a cluster of CPUs on Google Cloud Platform* service using an Intel®-optimized cloud module.
Intel offers a set of cloud-native open source reference architectures to help AI cloud developers build and deploy solutions on Amazon Web Services (AWS)*, Microsoft Azure*, and Google Cloud Platform service.
This session focuses on training using Google Cloud Platform service.
- Get an overview of how to fine-tune the 12M-parameter nanoGPT model, which builds on Andrej Karpathy's original code base, using a cluster of Intel® Xeon® Scalable processors.
- Learn how to set up distributed training so you can adapt the resulting base large language model (LLM) to your own objective, such as a specific task and dataset.
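The distributed setup described above can be sketched with PyTorch's `DistributedDataParallel` (DDP) using the `gloo` backend, which is the standard choice for CPU-only clusters such as the Xeon® nodes discussed here. This is a minimal illustration, not the module's actual training script: the tiny `torch.nn.Linear` stand-in model, the port number, and the single-step loop are assumptions for demonstration.

```python
# Sketch of CPU-cluster distributed training with PyTorch DDP.
# Assumptions (not from the source): the stand-in Linear model, the
# default address/port, and the one-step MSE "fine-tuning" loop.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def setup_distributed() -> int:
    """Join the process group using the 'gloo' backend (CPU clusters).

    A launcher such as torchrun sets RANK / WORLD_SIZE / MASTER_ADDR /
    MASTER_PORT; we default them so the sketch also runs standalone
    as a single local process.
    """
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    os.environ.setdefault("RANK", "0")
    os.environ.setdefault("WORLD_SIZE", "1")
    dist.init_process_group(backend="gloo")
    return dist.get_rank()


def train_step(model, batch, targets, optimizer) -> float:
    """One fine-tuning step; DDP all-reduces gradients across ranks."""
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(batch), targets)
    loss.backward()  # gradient averaging across workers happens here
    optimizer.step()
    return loss.item()


# Single-step demo: each worker runs the same code; DDP keeps replicas in sync.
rank = setup_distributed()
model = DDP(torch.nn.Linear(16, 16))  # stand-in for the nanoGPT model
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss = train_step(model, torch.randn(8, 16), torch.randn(8, 16), opt)
dist.destroy_process_group()
```

On a real cluster, each node would launch this script with a utility such as `torchrun`, pointing every worker at the same master address so the process group can rendezvous before training begins.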
Skill level: Intermediate