

Developer Resources from Intel and Hugging Face*

 

Intel and Hugging Face* collaborate to optimize the performance of popular models and tools for Intel® hardware and software. The two companies develop and optimize open source tools that enable production AI application deployment, and Intel provides preoptimized models and datasets on the Hugging Face hub. The Optimum for Intel and Optimum for Intel® Gaudi® libraries simplify model optimization for Intel CPUs, GPUs, and AI accelerators.

Get Started

Intel and Hugging Face Case Studies

Prediction Guard De-risks LLM Applications

Seekr*: Building Trustworthy LLMs for Evaluating and Generating Content at Scale

"At Hugging Face, we are focused on making the latest advancements in AI more accessible to everyone. Making state-of-the-art machine learning models more efficient and cheaper to use is incredibly important to us, and we're proud to partner with Intel to make it easy for the community to get peak CPU performance, faster model training, and advanced AI deployments on powerful Intel hardware devices using our free open source Optimum library, integrating OpenVINO™ toolkit, Intel Neural Compressor, Synapse AI*, and many more powerful solutions of AI Tools."

— Jeff Boudier, product director, Hugging Face

 

Use Hugging Face Tools with Intel® Platforms

Learn how to get started with Hugging Face models and tools, and how to get the most out of them, on Intel platforms spanning data centers, cloud, and AI PCs. These joint offerings are based on the OpenVINO™ toolkit, AI Tools, and Intel® Gaudi® software.

Multiplatform

  • Hugging Face Optimum for Intel
  • Hugging Face Optimum Documentation
  • Hugging Face Optimum Notebooks
  • Post-training Quantization with AutoRound
  • Faster Assisted Generation with Dynamic Speculation 
  • Faster Decoding with Any Assistant Model Using Universal Assisted Generation Techniques
  • Speed Up LLM Decoding with Advanced Universal Assisted Generation Techniques 
  • Introducing HELMET: Holistically Evaluating Long-Context Language Models
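The assisted-generation entries above all build on one idea: a small draft model proposes several tokens cheaply, and the large target model verifies them in a single pass, keeping the longest agreeing prefix. A toy, model-free sketch of that accept/reject loop (the "models" here are stand-in functions, not real LLMs):

```python
# Toy sketch of the speculative-decoding verify loop. draft() and target()
# stand in for a small assistant model and the large model; both map a
# token prefix to the next token deterministically, for illustration only.
def draft(prefix):
    return prefix[-1] + 1 if prefix else 0      # cheap guess: count up

def target(prefix):
    nxt = prefix[-1] + 1 if prefix else 0       # the "real" model...
    return nxt if nxt != 4 else 10              # ...which disagrees at 4

def speculative_step(prefix, k=4):
    # 1) the draft model proposes k tokens autoregressively
    proposed, ctx = [], list(prefix)
    for _ in range(k):
        t = draft(ctx)
        proposed.append(t)
        ctx.append(t)
    # 2) the target model checks each proposal; keep the agreeing prefix,
    #    then emit the target's own token at the first mismatch
    accepted, ctx = [], list(prefix)
    for t in proposed:
        expected = target(ctx)
        if t == expected:
            accepted.append(t)
            ctx.append(t)
        else:
            accepted.append(expected)           # correction from the target
            break
    return prefix + accepted

seq = speculative_step([0, 1, 2])
print(seq)  # draft proposes 3, 4, 5, 6; target accepts 3, then corrects 4 -> 10
```

"Dynamic speculation" varies the draft length `k` on the fly based on how many proposals are being accepted, and "universal" assisted generation lifts the requirement that draft and target share a tokenizer; both keep this same verify loop at the core.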

AI PC

  • Hugging Face Model Hub with OpenVINO Toolkit
  • Notebooks for the OpenVINO Toolkit Integration in Hugging Face Optimum for Intel
  • Optimize and Deploy Models with Optimum for Intel and OpenVINO Toolkit Generative AI
  • A Chatbot on Your Laptop: Phi-2 on Intel® Core™ Ultra Processors
  • Three Minutes to Build a Chatbot on Your AI PC
  • Large Model Weights Compression
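Large model weights compression generally means weight-only quantization: storing weights as low-bit integers with a per-group scale and dequantizing on the fly. A minimal NumPy sketch of symmetric 4-bit round-to-nearest with group-wise scales (group size and bit width are illustrative; schemes such as AutoRound additionally tune the rounding decisions rather than always rounding to nearest):

```python
import numpy as np

def quantize_int4_groupwise(w, group_size=8):
    """Symmetric 4-bit round-to-nearest with one scale per group of weights."""
    w = w.reshape(-1, group_size)
    # map each group's max magnitude to the int4 symmetric range (use +/-7)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return (q.astype(np.float32) * scale).reshape(-1)

rng = np.random.default_rng(0)
w = rng.standard_normal(64).astype(np.float32)
q, scale = quantize_int4_groupwise(w)
w_hat = dequantize(q, scale)
print(f"max abs error: {np.abs(w - w_hat).max():.4f}")
```

With 4-bit codes plus one scale per group, storage drops to roughly a quarter of float16, at the cost of a bounded per-weight rounding error of at most half a scale step.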

Data Center and Cloud

  • Examples for Hugging Face Optimum for Intel Gaudi Processors
  • Build Cost-Efficient Retrieval Augmented Generation (RAG) Applications with Intel Gaudi 2 Processors and Intel® Xeon® Processors
  • SetFit Sentence Transformers
  • Blazing Fast SetFit Inference with 🤗 Optimum for Intel and Intel Xeon Processors
  • CPU-Optimized Embeddings with 🤗 Optimum for Intel and fastRAG
  • Text-Generation Pipeline on Intel® Gaudi® 2 AI Accelerators
  • The Ultimate Guide to Using Contrastive Language Image Pretraining (CLIP) with an Intel Gaudi 2 Accelerator
  • Fine-Tune the Meta* Llama 3.2-Vision-Instruct Multimodal LLM on Intel Accelerators
  • Accelerate LLM Inference with Text Generation Inference (TGI) on Intel Gaudi AI Accelerators 
  • Scale LLMs with Intel Gaudi and Intel Xeon Processors 
  • How to Benchmark Language Models on 5th Gen Intel® Xeon® CPUs on Google Cloud Platform* Service
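Several of the entries above (RAG on Intel Gaudi 2 and Xeon processors, CPU-optimized embeddings with fastRAG) share the same retrieval core: embed the query, score it against document embeddings by cosine similarity, and pass the top hits to the LLM as context. A self-contained NumPy sketch, with made-up toy vectors standing in for a real embedding model:

```python
import numpy as np

def top_k_cosine(query_vec, doc_vecs, k=2):
    """Return indices of the k documents most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                        # cosine similarity per document
    return np.argsort(scores)[::-1][:k]   # highest-scoring documents first

# Toy stand-ins for embeddings produced by a sentence-transformer model
docs = ["Gaudi 2 accelerators", "Xeon CPU tuning", "cookie recipe"]
doc_vecs = np.array([[0.9, 0.1, 0.0],
                     [0.7, 0.3, 0.0],
                     [0.0, 0.0, 1.0]])
query_vec = np.array([1.0, 0.2, 0.0])     # a query about Intel hardware

for i in top_k_cosine(query_vec, doc_vecs):
    print(docs[i])
```

In a real pipeline the embedding step runs a model such as a CPU-optimized sentence transformer, and the brute-force dot product is replaced by a vector index once the corpus grows, but the scoring logic stays the same.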

More Resources

AI Development Resources

Explore tutorials, training, documentation, and support resources for AI developers.

AI Tools

Download Intel-optimized end-to-end AI tools and frameworks.

Intel® AI Hardware

Learn what type of device best suits your AI workload, spanning CPUs, GPUs, and AI accelerators.
