Workshop: Deploy Enterprise-Grade GenAI with OPEA on AWS*
Overview
Open Platform for Enterprise AI (OPEA) provides the building blocks for enterprise AI applications, including LLMs, prompt engines, and data stores, based on retrieval augmented generation (RAG) principles. This session guides you through the process of building and deploying enterprise-grade generative AI (GenAI) applications on Amazon Web Services (AWS)*. Explore how OPEA streamlines development of a RAG pipeline using structured, repeatable techniques.
Through this session, learn to navigate the challenges and complexities of RAG development, including infrastructure and orchestration issues, and build and deploy effective GenAI pipelines.
Geared toward intermediate developers, this workshop covers the following topics:
- Understand how RAG is applied to a microservices architecture for modular and scalable AI solutions.
- Learn how to deploy GenAI blueprints on Kubernetes* for production-ready AI applications.
- Master the best practices for implementing RAG pipelines in the cloud.
- Integrate GenAI applications with Amazon Bedrock* and OpenSearch* for smooth performance.
- Gain strategies for scaling GenAI workloads with Kubernetes.
- Get hands-on experience with OPEA for building and deploying GenAI solutions.
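To make the RAG-as-microservices idea above concrete, here is a minimal, self-contained Python sketch of the pattern: a retriever service ranks documents against a query, and a generator service answers from the retrieved context. All class and function names are hypothetical illustrations, not OPEA's actual APIs; a real deployment would back the retriever with a vector store such as OpenSearch* and the generator with a hosted model such as one on Amazon Bedrock*.

```python
import math
from collections import Counter


def embed(text):
    # Toy bag-of-words "embedding" (word-count vector); a real pipeline
    # would call an embedding model instead.
    return Counter(text.lower().split())


def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class RetrieverService:
    """Stand-in for a retrieval microservice backed by a vector store."""

    def __init__(self, docs):
        self.index = [(doc, embed(doc)) for doc in docs]

    def retrieve(self, query, k=2):
        qv = embed(query)
        ranked = sorted(self.index, key=lambda d: cosine(qv, d[1]), reverse=True)
        return [doc for doc, _ in ranked[:k]]


class GeneratorService:
    """Stand-in for the LLM microservice (e.g. a Bedrock-hosted model)."""

    def generate(self, prompt):
        return f"Answer based on: {prompt}"


def rag_pipeline(query, retriever, generator):
    # Retrieve context, assemble an augmented prompt, then generate.
    context = "\n".join(retriever.retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}"
    return generator.generate(prompt)


docs = [
    "OPEA provides GenAI building blocks.",
    "Kubernetes orchestrates microservices.",
    "RAG grounds LLM answers in retrieved documents.",
]
retriever = RetrieverService(docs)
print(rag_pipeline("How does RAG ground answers?", retriever, GeneratorService()))
```

Because each stage is its own service, the retriever and generator can be scaled and deployed independently on Kubernetes*, which is the modularity the workshop's microservices architecture relies on.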