Power DeepSeek* Models and Applications on Intel® Hardware


Overview

Run DeepSeek* models on Intel® hardware to experience the advantages of open source freedom, advanced reasoning capabilities, and a lightweight footprint. This video uses the vLLM inference engine and the ChatQnA application to illustrate the essential qualities of DeepSeek. The session also demonstrates the low cost of running AI on Intel® Xeon® processors and Intel® Gaudi® AI accelerators, efficient and cost-effective alternatives to GPUs.

Gain familiarity with the Open Platform for Enterprise AI (OPEA), which powers the ChatQnA application and provides a useful tool for showing the fundamentals of constructing chatbots. The tutorials show developers how DeepSeek models can run successfully on relatively modest Intel hardware.

Other topics include optimization techniques using the Intel® Extension for PyTorch* and the process by which ChatQnA can be deployed in minutes on most cloud service providers.

This novice-level video is aimed at enterprise customers and partners, C-level executives, AI application developers, and technical decision makers.

The session covers these topics:

  • Survey the DeepSeek R1 distill models and see how the vLLM inference serving engine runs them with high throughput and efficiency.
  • See how a basic ChatQnA application can be built in minutes from DeepSeek R1 distill models using OPEA, with just a single change to one environment variable.
  • Discover how Intel Xeon processors and Intel Gaudi AI accelerators are cost-effective platforms, relative to GPUs, for running lightweight models such as DeepSeek.
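The single environment-variable change mentioned above might look like the following; the variable name `LLM_MODEL_ID` is an assumption based on the public OPEA ChatQnA examples and is not confirmed by this page.

```python
# Sketch: OPEA's ChatQnA deployment reads its serving model from an
# environment variable, so swapping in a DeepSeek R1 distill model is
# a one-line change. LLM_MODEL_ID is an assumed variable name.
import os

os.environ["LLM_MODEL_ID"] = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"
# The deployment (e.g. a compose file) would then pick this value up
# when the services are started.
print(os.environ["LLM_MODEL_ID"])
```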

Featured software:

  • vLLM, a fast and easy-to-use library for LLM inference and serving.
  • ChatQnA, the OPEA sample chatbot application.

Related Articles

  • The OPEA Project Generative AI (GenAI) Examples for ChatQnA (GitHub*)
  • Accelerate LLM Inference on Your Local PC
  • vLLM Source Code (GitHub)
  • Create Your Own Custom Chatbot
  • vLLM Documentation for CPUs
