Build an LLM-Powered Chatbot with Streamlit* and Hugging Face*
Overview
Build a chatbot with a Streamlit* front end powered by an LLM. Connect an OpenAI*-compatible API and model endpoint to Hugging Face*. In this demonstration, the model inference endpoints are hosted on Intel® Gaudi® accelerators deployed in the Denvr Dataworks* cloud.
As technology advances, chatbots increasingly streamline everyday tasks, including generating code, writing blog posts, summarizing text, and conducting market research. Hugging Face Spaces offers a simple and flexible platform for building and sharing sample applications. The model runs on Intel Gaudi accelerators through Intel® AI for Enterprise Inference, an Infrastructure-as-a-Service (IaaS) offering hosted by Denvr Dataworks. To get started, all you need is an OpenAI-compatible API key and a hosted model endpoint.
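To illustrate what "OpenAI-compatible" means in practice, the sketch below builds a request against the standard /chat/completions route using only the Python standard library. The endpoint URL and model name are placeholders, not the session's actual values; substitute the ones your hosted deployment provides.

```python
# Sketch of a call to an OpenAI-compatible endpoint using only the Python
# standard library. The endpoint URL and model name below are placeholder
# assumptions -- substitute the values from your own hosted deployment.
import json
import os
import urllib.request

API_BASE = os.environ.get("OPENAI_BASE_URL", "https://example-endpoint/v1")
MODEL_ID = os.environ.get("MODEL_ID", "example/model-name")


def build_chat_request(messages, model=MODEL_ID, api_base=API_BASE, api_key=""):
    """Build a POST request for the standard /chat/completions route."""
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        url=f"{api_base}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # the key from your provider
        },
        method="POST",
    )


def send_chat_request(messages, api_key):
    """Send the request and return the assistant's reply (needs a live endpoint)."""
    req = build_chat_request(messages, api_key=api_key)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the endpoint follows the OpenAI API shape, the official `openai` Python client works just as well; point its `base_url` at the hosted endpoint instead of api.openai.com.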
The session walks through the tools and environments needed at each step of building the chatbot, showcasing key technologies including Python*, Streamlit, the OpenAI API, and Hugging Face Spaces.
Key takeaways from this session include:
- Building an LLM-powered chatbot on Hugging Face
- Coding a front-end Streamlit application
- Using API secrets on Hugging Face
- Using a modern OpenAI-compatible API
The session is geared toward novice developers.