Tackle LLM Hallucinations at Scale in the Enterprise


Overview

A critical need has developed across the industry to detect and eliminate LLM hallucinations, particularly at scale in enterprise deployments. One effective technique pairs a chat model with retrieval augmented generation (RAG) and scores each response for factual consistency, which allows hallucinations to be identified consistently and keeps generated text reliable. This webinar covers the concepts and methods of implementing the Hughes Hallucination Evaluation Model (HHEM) service, along with the results obtained from this approach.
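
To make the idea concrete, here is a minimal sketch of the kind of consistency scoring HHEM performs. It assumes the openly released vectara/hallucination_evaluation_model checkpoint on Hugging Face and its original cross-encoder interface (newer revisions may expose a different API); the example pairs and the 0.5 threshold are illustrative, not prescriptive.

    # A minimal sketch of HHEM-style consistency scoring, assuming the openly
    # released vectara/hallucination_evaluation_model checkpoint on Hugging Face
    # and its original cross-encoder interface (newer revisions may differ).
    from sentence_transformers import CrossEncoder

    # HHEM scores (source, generated) pairs: values near 1 indicate the
    # generated text is supported by the source; values near 0 suggest a
    # hallucination. The example pairs and 0.5 threshold are illustrative.
    model = CrossEncoder("vectara/hallucination_evaluation_model")

    pairs = [
        ["The Eiffel Tower is 330 meters tall.",
         "The Eiffel Tower stands 330 meters high."],
        ["The Eiffel Tower is 330 meters tall.",
         "The Eiffel Tower was completed in 1850."],
    ]

    for pair, score in zip(pairs, model.predict(pairs)):
        verdict = "supported" if score >= 0.5 else "possible hallucination"
        print(f"{score:.2f} {verdict}: {pair[1]}")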

The scoring system on which HHEM is based was developed in collaboration with Intel and has accrued positive results across the industry. Intel's neural-chat-7b model, for example, achieved the lowest hallucination rate of any model of its size on the Vectara* leaderboard. This webinar is designed for enterprise developers and architects, as well as leaders in generative AI and AI analytics.
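
For readers who want to try the model mentioned above, the sketch below loads a neural-chat checkpoint for local inference. It assumes the Intel/neural-chat-7b-v3-1 checkpoint published on Hugging Face and the "### System/User/Assistant" prompt format described on its model card; both are assumptions drawn from public documentation, not from this webinar.

    # A minimal sketch of running Intel's neural-chat model locally, assuming
    # the Intel/neural-chat-7b-v3-1 checkpoint on Hugging Face and the prompt
    # template from its model card.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Intel/neural-chat-7b-v3-1"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    prompt = (
        "### System:\nAnswer only from the provided context.\n"
        "### User:\nWhat is retrieval augmented generation?\n"
        "### Assistant:\n"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))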

In this webinar, you will:

  • Learn why LLMs hallucinate and what methods exist to mitigate hallucinations (a guarded pipeline is sketched after this list).
  • Gain familiarity with Vectara's RAG-as-a-service.
  • Understand what measures are incorporated in the scoring system used by Vectara's HHEM.
  • Evaluate the results in a number of real-world use cases.
  • See what features Vectara Mockingbird offers for RAG-specific output generation.
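
As a sketch of the mitigation pattern referenced in the first item, the following hypothetical pipeline gates a RAG answer behind an HHEM consistency check before returning it. The retrieve() and generate_answer() stubs are illustrative assumptions standing in for the application's retriever and LLM; they are not Vectara or Intel APIs.

    # A hypothetical sketch of one mitigation pattern: gate a RAG answer
    # behind an HHEM consistency check before returning it. retrieve() and
    # generate_answer() are illustrative stubs, not Vectara or Intel APIs.
    from sentence_transformers import CrossEncoder

    hhem = CrossEncoder("vectara/hallucination_evaluation_model")

    def retrieve(query: str) -> list[str]:
        # Stub: a real RAG system would query a vector store here.
        return ["HHEM scores whether generated text is supported by its source."]

    def generate_answer(query: str, context: list[str]) -> str:
        # Stub: a real system would prompt an LLM with the retrieved context.
        return "HHEM checks whether a response is supported by its source."

    def answer_with_guardrail(query: str, threshold: float = 0.5) -> str:
        context = retrieve(query)
        answer = generate_answer(query, context)
        # Low consistency scores are treated as hallucinations and withheld.
        score = hhem.predict([[" ".join(context), answer]])[0]
        return answer if score >= threshold else "No grounded answer found."

    print(answer_with_guardrail("What does HHEM do?"))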

Skill level: Intermediate

You May Also Like

Related Articles

Mockingbird: A RAG and Structured Output Focused LLM

Top 5 Tips and Tricks for LLM Fine-Tuning and Inference

Scale Prediction Guard’s Privacy-Conserving LLM Platform on an Intel® Gaudi® 2 AI Accelerator

Related Videos

Set Up Distributed Training on Google Cloud Platform* Service to Fine-Tune an LLM

Implement RAG Architectures for Enhanced Information Retrieval
