As enterprises become more digitized, conversational chatbots have become a critical service across the enterprise, from back-end human resources (HR) to frontline customer service and sales. Up to 70% of enterprise workers interact with conversational platforms.† Chatbots can cut operational costs by up to 30% a year and free support agents to solve more complicated problems. They can also help generate revenue: 86% of customers say they are willing to pay more for an excellent customer experience, while 89% of consumers switch to a competitor after receiving poor service.‡
A basic chatbot that matches keywords to scripted responses for a limited set of FAQs is no longer enough to satisfy users' requirements. Conversational AI assistants can engage in human-like dialogue, capture the context of an inquiry, and provide more accurate responses. Over 68% of consumers like that a chatbot can answer their questions quickly.††
AI models that support conversational chatbot interactions are massive and highly complex. The larger the model, the longer the lag between a user’s questions and the responses. So, the solution needs to work in real time and support concurrent users while helping to minimize the cost of ownership.
In partnership with Accenture*, Intel has developed an AI reference kit to help enterprises build a conversational AI chatbot. The kit provides deep learning-based natural language processing (NLP) models for intent classification and named-entity recognition using BERT and PyTorch*. It includes:
- Training data
- An open source, trained deep learning model
- User guides
- oneAPI components
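As a rough illustration of the modeling approach described above (not the reference kit's actual code), intent classification with BERT typically attaches a small classification head to the model's pooled sentence embedding. The hidden size and intent count below are illustrative assumptions:

```python
import torch
import torch.nn as nn

class IntentClassifier(nn.Module):
    """Classification head over a BERT-style pooled embedding.

    hidden_size=768 matches BERT-base; num_intents=26 is an
    illustrative assumption, not the kit's actual label count.
    """
    def __init__(self, hidden_size: int = 768, num_intents: int = 26):
        super().__init__()
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(hidden_size, num_intents)

    def forward(self, pooled_embedding: torch.Tensor) -> torch.Tensor:
        # pooled_embedding: (batch, hidden_size), e.g. BERT's [CLS] output
        return self.classifier(self.dropout(pooled_embedding))

head = IntentClassifier().eval()
with torch.no_grad():
    # Fake pooled embeddings standing in for two encoded utterances.
    logits = head(torch.randn(2, 768))
```

The highest-scoring logit per row gives the predicted intent; in practice the head is fine-tuned jointly with the BERT encoder on labeled utterances.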
The conversational AI chatbot model was trained on over 4,000 utterances from the Airline Travel Information Systems (ATIS) dataset, achieving 94% predictive accuracy. Retrain the model with your own data from customer service, product sales, or another function to customize it to your business.
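Swapping in your own data amounts to supplying labeled utterances in place of the ATIS rows. A minimal sketch of what that data looks like; the labels and rows below are hypothetical examples, not the kit's actual schema:

```python
# Hypothetical training rows in the style of ATIS: one utterance per row,
# each labeled with an intent. The third row shows a custom domain label.
examples = [
    ("what flights leave boston for denver tomorrow", "atis_flight"),
    ("how much is a round trip ticket to dallas", "atis_airfare"),
    ("where is my order 1234", "order_status"),  # your own domain's intent
]

utterances, labels = zip(*examples)

# Map each intent label to an integer id for model training.
label2id = {label: i for i, label in enumerate(sorted(set(labels)))}
label_ids = [label2id[label] for label in labels]
```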
Optimized with Intel oneAPI for Better Performance
The conversational AI chatbot model was optimized with the Intel® Extension for PyTorch* and the Intel® Distribution of OpenVINO™ toolkit for better performance across heterogeneous architectures, including XPUs and FPGAs. Both tools let you reuse your model development code for training and inference with minimal code changes. Performance benchmarks were run on Microsoft Azure* Standard_D4_v5 instances with 3rd generation Intel® Xeon® processors.
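A minimal sketch of the "minimal code change" claim for inference with Intel Extension for PyTorch. The model here is a simple stand-in rather than the kit's BERT model, and the import guard exists only so the snippet also runs where the extension is not installed:

```python
import torch

try:
    import intel_extension_for_pytorch as ipex  # pip install intel-extension-for-pytorch
except ImportError:
    ipex = None  # fall back to stock PyTorch if the extension is absent

# Stand-in for a trained model (e.g. a BERT classifier set to eval mode).
model = torch.nn.Linear(768, 26).eval()

if ipex is not None:
    # One-line optimization: applies operator fusion and memory-layout
    # tweaks tuned for Intel CPUs. Lower-precision inference (e.g.
    # bfloat16) can be enabled via the dtype argument on supported CPUs.
    model = ipex.optimize(model)

with torch.no_grad():
    output = model(torch.randn(1, 768))
```

The rest of the inference code is unchanged, which is the point: the optimization is a wrapper around an existing PyTorch model rather than a rewrite.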
For data scientists, faster real-time inference means users get better performance in terms of speed and concurrency. With oneAPI toolkits, little to no code change is required to attain the performance boost, and additional conversational AI chatbots can be built for other applications in the enterprise. Faster inference also means less compute time and lower cost to produce targeted chatbot responses across many user interactions.
For enterprises, conversational AI chatbots mean a better customer experience, which can translate into more revenue and lower customer service operational costs.
† The Future of Customer Conversation. Accenture, 2022.
‡ Bulao, Jacquelyn. 36 Astonishing Customer Experience Statistics for 2022. techjury, 2022.
†† What Do Your Customers Actually Think About Chatbots? Userlike, 2021.