Push-Button Productization of AI Models

Discover how Intel IT achieves push-button productization of AI models, enabling it to deploy AI faster and at scale.

Intel IT's large AI group works across Intel to transform critical work, optimize processes, eliminate scalability bottlenecks and generate significant business value (more than USD 1.5B return on investment in 2020). Our efforts unlock the power of data to make Intel’s business processes smarter, faster and more innovative, from product design to manufacturing to sales and pricing.

Intel IT’s AI group includes over 200 data scientists, machine-learning (ML) engineers and AI product experts. We systematically work across Intel’s core activities to deliver AI solutions that optimize processes and eliminate scalability bottlenecks. We use AI to deliver high business impact and transform Intel’s internal operations, including engineering, manufacturing, hardware validation, sales, performance and Mobileye. Over the last decade, we have deployed over 500 ML models to production, more than 100 of them in the last year alone.

To enable this operation at scale, we developed Microraptor, a set of machine-learning operations (MLOps) capabilities. MLOps is the practice of efficiently developing, testing, deploying and maintaining ML in production. It automates and monitors the entire ML lifecycle and enables seamless collaboration across teams, resulting in faster time to production and reproducible results.

To enable MLOps, we build an AI productization platform for each business domain that we work with, such as sales or manufacturing. Models and AI services are delivered, deployed, managed and maintained on top of the AI platforms.

Our MLOps capabilities are reused in all of our AI platforms. Microraptor enables world-class MLOps to accelerate and automate the development, deployment and maintenance of ML models. Our approach to model productization avoids the typical logistical hurdles that often prevent other companies’ AI projects from reaching production. Our MLOps methodology enables us to deploy AI models to production at scale through continuous integration/continuous delivery, automation, reuse of building blocks and business process integration.

Microraptor uses many open-source technologies to enable the full MLOps lifecycle while abstracting the complexity of these technologies from the data scientists. Data scientists do not have to know anything about Kubernetes or Elasticsearch; they can focus their efforts on finding or developing the best ML model. Once the model is ready, a data scientist simply registers it with MLflow (an open-source platform for managing the end-to-end ML lifecycle) while complying with some basic coding standards. Everything else, from building to testing to deploying, happens automatically. The model is first deployed as a release candidate that can later be activated, with another push of a button, into the relevant business domain’s AI platform.
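
As a rough illustration of that hand-off, the sketch below registers a trained model with the MLflow Model Registry and marks the new version as a staging candidate. The experiment name, model name, metric and stage are assumptions made for the example, not details of Intel IT's actual pipeline.

    # Minimal sketch of handing a finished model to MLflow; the experiment name,
    # model name and metric are illustrative, and Microraptor's own coding
    # standards and downstream automation are not reproduced here.
    import mlflow
    from mlflow.tracking import MlflowClient
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    mlflow.set_experiment("demand-forecast")  # hypothetical experiment name

    X, y = load_iris(return_X_y=True)
    model = RandomForestClassifier().fit(X, y)

    with mlflow.start_run():
        mlflow.log_metric("train_accuracy", model.score(X, y))
        # Log the model and register it in the MLflow Model Registry in one step.
        mlflow.sklearn.log_model(model, "model",
                                 registered_model_name="demand-forecast")

    # Downstream automation (build, test, deploy) would pick up the new version;
    # here it is only flagged as a release candidate by moving it to Staging.
    client = MlflowClient()
    latest = client.get_latest_versions("demand-forecast", stages=["None"])[0]
    client.transition_model_version_stage("demand-forecast", latest.version,
                                          stage="Staging")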

Our MLOps methodology provides many advantages:

  • The AI platforms abstract deployment details and business process integration so that data scientists can concentrate on model development.
  • We can deploy a new model in less than half an hour, compared to days or weeks without MLOps.
  • Our systematic quality metrics minimize the cost and effort required to maintain the hundreds of models we have in production (a simplified example of such a check follows this list).
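
For concreteness, the following is a minimal, hypothetical example of the kind of check such quality metrics could feed; the metric, threshold and alerting behavior are assumptions rather than Intel IT's actual implementation.

    # Hypothetical sketch of a recurring quality check for a deployed model.
    # The metric, threshold and alerting mechanism are illustrative assumptions.
    from sklearn.metrics import accuracy_score

    ACCURACY_FLOOR = 0.90  # assumed service-level threshold for this model

    def check_model_quality(y_true, y_pred, model_name):
        """Return True if the model still meets its agreed quality floor."""
        accuracy = accuracy_score(y_true, y_pred)
        healthy = accuracy >= ACCURACY_FLOOR
        if not healthy:
            # In production this would raise an alert or open a retraining task.
            print(f"[ALERT] {model_name}: accuracy {accuracy:.3f} is below "
                  f"the {ACCURACY_FLOOR:.2f} floor")
        return healthy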
