Optimize Transformer Models with Tools from Intel and Hugging Face*


Overview

Transformer models are powerful neural networks that have become the standard for delivering advanced performance for tasks such as natural language processing (NLP), computer vision, and online recommendations. (Fun fact: People use transformers every time they do an internet search on Google* or Microsoft Bing*.)

But there is a challenge: Training these deep learning models at scale requires a large amount of computing power. This can make the process time-consuming, complex, and costly.

This session shares a solution: an end-to-end training and inference optimization workflow for transformers.

Join your hosts from Intel and Hugging Face* (notable for its transformers library) to learn:

  • How to run multi-node, distributed CPU fine-tuning of transformers with hyperparameter optimization, using the Hugging Face transformers and Accelerate libraries together with Intel® Extension for PyTorch* (a minimal sketch follows this list).
  • How to easily optimize inference, including model quantization and distillation, using Optimum for Intel, the interface between the transformers library and Intel tools and libraries (a quantization sketch appears below).
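
To make the first item concrete, here is a minimal sketch of CPU fine-tuning through the Hugging Face Trainer with Intel Extension for PyTorch enabled. It is not the session's exact code; the checkpoint (bert-base-uncased), the GLUE SST-2 dataset, and all hyperparameters are illustrative choices.

```python
# Minimal sketch: CPU fine-tuning with the Hugging Face Trainer and Intel Extension
# for PyTorch (IPEX). The model, dataset, and hyperparameters are illustrative.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "bert-base-uncased"                      # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("glue", "sst2")                # illustrative dataset
dataset = dataset.map(
    lambda batch: tokenizer(batch["sentence"], truncation=True,
                            padding="max_length", max_length=128),
    batched=True,
)

args = TrainingArguments(
    output_dir="bert-sst2-cpu",
    per_device_train_batch_size=32,
    num_train_epochs=1,
    no_cuda=True,   # train on CPU
    use_ipex=True,  # apply Intel Extension for PyTorch optimizations
    bf16=True,      # bfloat16 mixed precision on supported Xeon processors
)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"])
trainer.train()
```

For multi-node runs, the same script is typically launched on each host with a distributed launcher (for example, mpirun or accelerate launch) using the oneCCL communication backend; the exact launch flags depend on your cluster setup and library versions.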

Watch a showcase of transformer performance on the latest Intel® Xeon® Scalable processors.
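
To illustrate the second item above, the following is a minimal sketch of post-training dynamic quantization with Optimum for Intel (the optimum-intel package), which wraps Intel® Neural Compressor. The checkpoint is an illustrative choice, and API details can vary between optimum-intel releases.

```python
# Minimal sketch: post-training dynamic quantization of a fine-tuned transformer
# with Optimum for Intel (backed by Intel Neural Compressor). Illustrative checkpoint.
from neural_compressor.config import PostTrainingQuantConfig
from optimum.intel import INCQuantizer
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative checkpoint
)

# Dynamic quantization needs no calibration dataset; static quantization would.
quantization_config = PostTrainingQuantConfig(approach="dynamic")

quantizer = INCQuantizer.from_pretrained(model)
quantizer.quantize(quantization_config=quantization_config,
                   save_directory="quantized-distilbert-sst2")
```

In recent optimum-intel releases, the quantized model can then be reloaded for inference with the matching INC model class, for example INCModelForSequenceClassification.from_pretrained("quantized-distilbert-sst2").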

Skill level: Intermediate

 

Featured Software

Get the Intel Extension for PyTorch as part of the Intel® AI Analytics Toolkit or as a stand-alone version.

 

Learn More

  • Hugging Face Trainer: An API for hyperparameter search that makes it easier to start training without manually writing a training loop (a minimal search sketch follows this list).
  • Intel® Disruptor Initiative: Participants are companies that are pushing the limits of innovation.
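
As a small illustration of the Trainer's hyperparameter search, the sketch below reuses the tokenized dataset and TrainingArguments from the fine-tuning example earlier and searches two hyperparameters with the Optuna backend (requires optuna to be installed). The search space and trial count are illustrative.

```python
# Minimal sketch: hyperparameter search with the Hugging Face Trainer and Optuna.
# Reuses `dataset` and `args` from the fine-tuning sketch above; search space is illustrative.
from transformers import AutoModelForSequenceClassification, Trainer

def model_init():
    # The Trainer re-instantiates the model from scratch for every trial.
    return AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def hp_space(trial):
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-5, 5e-5, log=True),
        "per_device_train_batch_size": trial.suggest_categorical(
            "per_device_train_batch_size", [16, 32, 64]),
    }

trainer = Trainer(model_init=model_init, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"])

# Minimizes the default objective (evaluation loss) over 10 trials.
best_run = trainer.hyperparameter_search(hp_space=hp_space, backend="optuna",
                                         n_trials=10, direction="minimize")
print(best_run.hyperparameters)
```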

You May Also Like
 

Intel® AI Analytics Toolkit

Accelerate data science and AI pipelines—from preprocessing through machine learning—and provide interoperability for efficient model development.

 

Related Articles & Blogs

  • Easier Quantization in PyTorch* Using Fine-Grained FX
  • Intel, Habana* Labs, and Hugging Face* Advance Deep Learning Software
  • Deep Learning Model Optimizations Made Easy (or at Least Easier)
  • Optimize End-to-End AI Pipelines
