Intel provides both hardware and software solutions for companies to use in building and deploying their AI and machine learning (ML) models. AI/ML workloads carry high power and infrastructure costs, which can keep organizations from controlling spend and speeding up inferencing. Intel provides AI chips and optimized solutions to scale and unlock AI insights.
Intel commissioned Forrester Consulting to conduct a Total Economic Impact™ (TEI) study and examine the potential benefits enterprises may realize by deploying Intel AI.¹ The purpose of this study is to provide readers with a framework to evaluate the potential financial impact of Intel AI on their organizations.
To better understand the benefits, costs, and risks associated with Intel AI, Intel commissioned Forrester Consulting to interview seven customers with experience using Intel AI chips and software for their AI/ML inferencing workloads.
This is an executive summary of the full Total Economic Impact™ study. Please read the full study for more detailed information.
Interviewees noted several reasons for investing in Intel AI chips, including:
- Size, weight, and power. Intel AI chips were smaller, weighed less, consumed less power, and produced less heat than alternatives when running AI/ML inference workloads. This was especially important when moving AI compute to edge devices to speed up inferencing tasks, as opposed to sending data to the cloud or back to the data center for processing.
- Ecosystem and breadth of portfolio. Intel’s chip portfolio covers the breadth of infrastructures and AI use cases for companies, making it simpler to deploy across ecosystems. This is especially important when considering interoperability and compatibility of AI/ML workloads across a company’s IT infrastructure (e.g., from edge to core).
- Inability of GPUs to process very large images. One customer noted that the images they work with are too large for GPUs to process. Intel’s variety of processors, including central processing units (CPUs) and field-programmable gate arrays (FPGAs), allowed them to balance data size, latency, and overall performance.
Based on the customer interviews and TEI analysis, Forrester found the following risk-adjusted present value (PV) benefits from using Intel AI.
[For AI inferencing] you can get GPU speeds on Intel CPUs.
- Development time savings with OpenVINO totaling nearly $2.2 million. Interviewees noted that their organizations’ data scientists used Intel’s OpenVINO toolkit to deploy inferencing models to Intel AI chips, optimize PyTorch models, and save development time. Customers reported that their organizations used Intel’s pretrained deep learning encoders; one customer gave the example that their organization used OpenVINO’s eyeglass-detection module instead of building one from scratch. This significantly reduced coding and deployment time for their inference models.
- Interoperability efficiencies totaling more than $1.1 million. Interviewees reported using Intel AI chips to deploy inferencing workloads across a broad range of infrastructure, from data centers to cloud to edge. Deployment flexibility matters when, for example, an edge device can process a subset of computer vision inferencing workloads locally but must send more complex data back to the data center for processing.
One customer noted that their organization expected a tenfold reduction in developer resources from developing once in OpenVINO on Intel hardware and porting the code across data center and edge devices, as opposed to developing on another chipset and platform and then requiring a separate x86 edge team to redevelop the code. Another customer reported that up to 40% of their AI/ML projects require interoperability between inferencing devices.
- Hardware savings totaling over $1.6 million. Interviewees reported that using Intel chips for their AI workloads resulted in significant cost savings. Their organizations used existing infrastructure for inferencing workloads run in the data center, and upgrading edge devices to run inferencing workloads cost less with Intel chips than with alternatives. A customer told Forrester that their organization ran up to 70% of its AI/ML workloads on existing data center infrastructure, and another customer reported saving up to $5,000 upgrading edge devices with Intel CPUs.
Additional benefits that customers experienced but were not quantified include:
- Improved inference performance. Customers noted that Intel AI chips improved inference performance compared to alternatives, with inferencing workloads running faster. Additionally, edge devices allowed inferencing to run locally on the device rather than sending data to the cloud and back, saving additional time.
- Less power required for field-programmable gate arrays (FPGAs) vs. GPUs. Customers also noted that edge workloads carry special constraints, all of which Intel AI chips addressed; FPGAs in particular are far more power-efficient devices. Intel AI accounts for size, weight, and power constraints, the ability to run the chip and edge device from a battery, and heat generation.
- Software adoption. Customers noted that the simple developer interface for OpenVINO and other software associated with Intel AI chips was key in driving adoption for their company and data scientists.
Total Economic Impact Analysis
For more information, download the full study: “The Total Economic Impact™ Of Intel AI,” a commissioned study conducted by Forrester Consulting on behalf of Intel, June 2021.
Forrester interviewed seven organizations with experience using Intel AI and combined the results into a three-year composite organization financial analysis. Risk-adjusted present value (PV) quantified benefits include:
- Development time savings with OpenVINO of $2,150,353.
- Inferencing flexibility and interoperability efficiencies totaling more than $1,142,375.
- Hardware savings totaling over $1,623,155.
The reader should be aware of the following:
- The study was commissioned by Intel and delivered by Forrester Consulting. It is not meant to be used as a competitive analysis.
- Forrester makes no assumptions as to the potential ROI that other organizations will receive. Forrester strongly advises that readers use their own estimates within the framework provided in the report to determine the appropriateness of an investment in Intel AI.
- Intel reviewed and provided feedback to Forrester. Forrester maintains editorial control over the study and its findings and does not accept changes to the study that contradict Forrester’s findings or obscure the meaning.
- Intel provided the customer names for the interview(s) but did not participate in the interviews.
Total Economic Impact™ (TEI) is a methodology developed by Forrester Research that enhances a company’s technology decision-making processes and assists vendors in communicating the value proposition of their products and services to clients. The TEI methodology helps companies demonstrate, justify, and realize the tangible value of IT initiatives to both senior management and other key business stakeholders. The TEI methodology consists of four components to evaluate investment value: benefits, costs, risks, and flexibility.
© Forrester Research, Inc. All rights reserved. Forrester is a registered trademark of Forrester Research, Inc.
Appendix A: Endnotes
1 Total Economic Impact is a methodology developed by Forrester Research that enhances a company’s technology decision-making processes and assists vendors in communicating the value proposition of their products and services to clients. The TEI methodology helps companies demonstrate, justify, and realize the tangible value of IT initiatives to both senior management and other key business stakeholders.