Project Indus Benchmarks LLM Performance in Hindi

The Indus LLM was benchmarked on the Intel® platform; the results show it to be a robust, versatile, and efficient model.

At a glance:

  • Project Indus is an innovative open-source language model designed specifically for Hindi and its dialects. Focusing on applications within the Indian linguistic landscape, Project Indus aims to enhance natural language generation and processing capabilities.

  • A joint study from Intel and Tech Mahindra benchmarked key performance metrics. Evaluating these parameters under various conditions produced a detailed performance profile of the Indus large language model (LLM) on Intel® AI hardware, offering valuable insights for optimizing its practical implementation.

Executive Summary

This white paper presents a comprehensive benchmarking study of Project Indus, an innovative open-source language model designed specifically for Hindi and its dialects. Focusing on applications within the Indian linguistic landscape, Project Indus aims to enhance natural language generation and processing capabilities. The benchmarking study emphasizes key performance metrics such as Time to First Token (TTFT), inter-token delay, input prompt length, output prompt length, and total throughput in tokens per second. Evaluating these parameters under various conditions, including varying numbers of concurrent requests, produced a detailed performance profile of the Indus LLM on Intel® AI hardware. The results highlight the model's effectiveness and scalability, offering valuable insights for optimizing its practical implementation. This study aims to inform developers and researchers about the performance characteristics of the Indus LLM, facilitating its integration and utilization across diverse computational environments.
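For readers unfamiliar with how these metrics relate, the minimal Python sketch below shows one common way to derive TTFT, mean inter-token delay, and per-request throughput from a streaming generation loop. It is an illustration only, not the methodology used in the white paper: the `generate_stream` callable is a hypothetical placeholder for whatever streaming inference client is in use.

```python
import time


def benchmark_stream(generate_stream, prompt):
    """Measure TTFT, mean inter-token delay, and throughput for one request.

    `generate_stream` is a hypothetical callable that yields output tokens one
    at a time for the given prompt; substitute your own streaming client.
    """
    start = time.perf_counter()
    token_times = []
    for _ in generate_stream(prompt):
        token_times.append(time.perf_counter())

    if not token_times:
        return None

    ttft = token_times[0] - start          # Time to First Token
    total = token_times[-1] - start        # wall-clock time for the whole request
    n = len(token_times)                   # number of output tokens
    inter_token = (total - ttft) / (n - 1) if n > 1 else 0.0
    throughput = n / total                 # output tokens per second

    return {"ttft_s": ttft, "inter_token_s": inter_token, "tokens_per_s": throughput}
```

Extending this to the concurrent-request scenarios described above would typically mean issuing many such requests in parallel (for example, via a thread pool) and summing the per-request token counts over the measurement window to obtain total throughput.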

Read the white paper – “Benchmarking the Indus Language Model on Intel® AI Hardware” ›