Evolving Perspectives on Operational AI: MLOps with Full Stack Optimizations

@IntelDevTools



Overview

Operational AI is the practice of using models and algorithms to integrate AI into day-to-day customer experiences and business processes.

This session explores the concept broadly, then dives into a methodology for achieving it: combining MLOps† components with AI optimizations to deploy performant, scalable AI solutions in production while ensuring that the individual components of an AI system are optimized across the stack.

Key learnings:

  • Practical examples of implementing MLOps components (model registries, data versioning, and monitoring) alongside AI tools such as model compression and Intel®-optimized AI frameworks; a brief sketch follows this list
  • How software and hardware optimizations can speed up critical parts of the machine learning lifecycle
  • How combining MLOps and AI optimization maximizes return on investment (ROI) and AI system quality
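
To make the first point concrete, here is a minimal sketch of pairing a model-compression step with a model-registry step. It assumes the Intel® Neural Compressor 2.x post-training quantization API and uses MLflow purely as an illustrative registry (MLflow is not part of this session); the toy model and random calibration data stand in for your own.

    # Minimal sketch: compress a trained PyTorch model, then version the result.
    # Assumes Intel(R) Neural Compressor 2.x and the mlflow package; the toy
    # model and random calibration data below are placeholders.
    import mlflow
    import torch
    from torch.utils.data import DataLoader, TensorDataset
    from neural_compressor import PostTrainingQuantConfig, quantization

    # Placeholder "trained" model and a small calibration set.
    float_model = torch.nn.Sequential(
        torch.nn.Linear(8, 16), torch.nn.ReLU(), torch.nn.Linear(16, 2)
    )
    calib_loader = DataLoader(
        TensorDataset(torch.randn(64, 8), torch.zeros(64, dtype=torch.long)),
        batch_size=8,
    )

    # 1. Optimization step: post-training INT8 quantization (model compression).
    q_model = quantization.fit(
        model=float_model,
        conf=PostTrainingQuantConfig(),   # default static quantization recipe
        calib_dataloader=calib_loader,
    )
    q_model.save("./quantized_model")

    # 2. MLOps step: track and version the compressed artifact in a registry.
    with mlflow.start_run():
        mlflow.log_param("compression", "int8_post_training_quantization")
        mlflow.log_artifacts("./quantized_model", artifact_path="model")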

This session takes advantage of the latest Intel® hardware and software available in the Intel® Tiber™ AI Cloud. 

Skill level: Intermediate

 

Featured Software

Get the following tools as stand-alone components or as part of the AI Frameworks and Tools. A short usage sketch follows the list.

Intel® Neural Compressor

Intel® Extension for PyTorch*

Intel® Extension for Scikit-learn*
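
As a rough illustration of how two of these tools are typically switched on, the sketch below uses Intel® Extension for Scikit-learn's patching API and Intel® Extension for PyTorch's optimize call; exact arguments vary by release, and the small model and random data here are only placeholders.

    # Sketch: enabling Intel-optimized execution paths (check the installed
    # versions of these extensions; APIs shown are the commonly documented ones).
    import numpy as np

    # Intel(R) Extension for Scikit-learn: patch scikit-learn so that supported
    # estimators dispatch to the accelerated oneDAL implementations.
    from sklearnex import patch_sklearn
    patch_sklearn()
    from sklearn.cluster import KMeans  # imported after patching -> accelerated
    KMeans(n_clusters=2, n_init=10).fit(np.random.rand(100, 4))

    # Intel(R) Extension for PyTorch: apply operator and memory-layout
    # optimizations to an inference model (bfloat16 shown as an example).
    import torch
    import intel_extension_for_pytorch as ipex

    model = torch.nn.Sequential(
        torch.nn.Linear(8, 16), torch.nn.ReLU(), torch.nn.Linear(16, 2)
    ).eval()
    model = ipex.optimize(model, dtype=torch.bfloat16)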

 

Download Code Samples

Fine-Tuning Text Classification Model with Intel Neural Compressor

Get Started with Intel Extension for PyTorch

See All Code Samples

 

†MLOps (machine learning operations) applies operational rigor to the machine learning lifecycle so that models can be deployed and maintained reliably in production.

 


You May Also Like

Related Articles

Maintain Performant AI in Production by Using an MLOps Environment

Maximize Microsoft Azure* Machine Learning Models with Intel® AI Frameworks

30 Days to AI Value: Development Best Practices
