The Impact of AI on Enterprise Infrastructure

Mike Gualtieri, Principal Analyst at Forrester Research, discusses the impact of enterprise AI on IT needs. AI is the fastest-growing workload on the planet, and it will have a big impact on how enterprises acquire and provision infrastructure.

Transcript

I'm Mike Gualtieri, principal analyst at Forrester Research. I cover enterprise AI. And I want to talk to you today about the impact of artificial intelligence on infrastructure needs, what type of infrastructure is going to be needed as AI expands in enterprises.

In recent research we published, we called AI the fastest-growing workload on the planet. And it's going to have a big impact on how enterprises acquire and provision infrastructure, AI infrastructure in particular. So if we look at the current state of enterprise AI, it's still nascent, but it's growing rapidly. More than 50% of enterprises say they're doing something with AI.

But when we look deeper at what they're actually doing, for the most part it's a half dozen, maybe a dozen, use cases, some more, some fewer. It's still nascent when we analyze how many potential use cases there actually are: it's not 30, it's not 50. We think there are 500, even 1,000 use cases, because there's potential for an AI model or a machine learning model in every business process and every customer experience. So as AI scales, it's really going to put pressure on infrastructure for the teams building those models.

Now, one of the unique characteristics of machine learning is that the infrastructure you use to build a model, which is very data- and compute-intensive, is needed in perpetuity. Let me explain. You have a data science team. They're analyzing very large data sets. They're using very compute-intensive algorithms, both classical and deep learning algorithms. And that requires a lot of infrastructure to create the model. Once the model is created, it's put into production as part of an application.
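A minimal sketch of what that build step could look like, assuming a scikit-learn workflow; the synthetic dataset, features, and hyperparameters below are illustrative assumptions, not a specific enterprise's setup:

```python
# Sketch of a compute-intensive model build step, using synthetic data so it
# runs anywhere; in practice the training set would be a large historical dataset.
import joblib
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Stand-in for a large historical dataset pulled from the data platform.
X, y = make_classification(n_samples=50_000, n_features=40, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# The fit step is the data- and compute-intensive part that drives infrastructure demand.
model = GradientBoostingClassifier(n_estimators=300, max_depth=4)
model.fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

# Once built, the model is serialized and deployed inside a production application.
joblib.dump(model, "model_v1.joblib")
```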

Now, the thing about machine learning models that you have to understand is that they're probabilistic. Their accuracy depends on the data that was used to train them, and that is always historical data.
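A tiny, self-contained illustration of that point, with synthetic data standing in for the historical training set; the model and numbers are assumptions for illustration only:

```python
# A classifier's output is a probability shaped entirely by the (historical)
# data it was trained on; the data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_hist = rng.normal(size=(1000, 3))                    # historical feature data
y_hist = (X_hist[:, 0] + rng.normal(size=1000) > 0)    # historical outcomes

model = LogisticRegression().fit(X_hist, y_hist)

new_record = rng.normal(size=(1, 3))                   # data the model has never seen
print("P(positive):", model.predict_proba(new_record)[0, 1])
```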

As new data enters the system, you have to retrain the model. So whatever infrastructure was needed to train that model, it's not a one-off. Some models have to be retrained daily, weekly, or monthly. Once you start to multiply the use cases, going from half a dozen to 100, all of a sudden the infrastructure needs balloon.
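As a hypothetical sketch of that recurring cost, here is what a scheduled retraining job might look like; the function name, file paths, and column names are assumptions for illustration:

```python
# Hypothetical retraining job: the same heavy training step recurs on a schedule
# (daily, weekly, monthly) as new data lands; paths and names are illustrative.
import joblib
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

def retrain(data_path: str, model_path: str) -> None:
    # Reload the full historical dataset, now including the newest records.
    df = pd.read_parquet(data_path)
    X, y = df.drop(columns=["label"]), df["label"]
    # The full training cost is paid again on every cycle.
    model = GradientBoostingClassifier(n_estimators=300, max_depth=4)
    model.fit(X, y)
    # The refreshed model replaces the one currently in production.
    joblib.dump(model, model_path)

# In practice this would be invoked by a scheduler such as cron or Airflow:
retrain("training_data.parquet", "model_v2.joblib")
```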

So because this is the fastest-growing workload on the planet, infrastructure and operations professionals have to think about how they're going to meet that growing demand. Probably the best approach is a hybrid approach: a large enterprise with its own infrastructure in a data center or colo facility, plus cloud infrastructure. Many of the cloud providers have instances and services specifically for training AI models. That makes sense especially for new use cases, but with the caveat that you have to get the data there if it's not already there.
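One way to think about that caveat is simple placement logic: train where the data already lives, because moving large training sets is often the bottleneck. The sketch below is a hypothetical decision rule; the fields and the transfer threshold are assumptions, not a prescribed policy:

```python
# Hypothetical placement logic for a hybrid strategy: prefer training where
# the data already sits; the threshold and fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TrainingJob:
    name: str
    dataset_size_gb: float
    data_location: str      # "on_prem" or "cloud"

def choose_target(job: TrainingJob, transfer_threshold_gb: float = 500.0) -> str:
    # New use cases whose data is already in the cloud fit cloud training instances.
    if job.data_location == "cloud":
        return "cloud"
    # Large on-prem datasets stay on-prem unless they are small enough to move.
    return "cloud" if job.dataset_size_gb <= transfer_threshold_gb else "on_prem"

print(choose_target(TrainingJob("churn_model", 2048, "on_prem")))   # -> on_prem
print(choose_target(TrainingJob("nlp_routing", 120, "on_prem")))    # -> cloud
```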

Now, one of the things we hear is that once models get into production and retraining takes place, those workloads tend to run hot: a lot of compute, a lot of data, and a lot of iterations on that training. And cloud can get expensive. That's one of the key complaints we hear from enterprises.
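A back-of-the-envelope calculation shows why recurring retraining adds up; every number below is a hypothetical assumption, not a quoted price or benchmark:

```python
# Illustrative arithmetic only: how recurring retraining cost scales with use cases.
use_cases = 100                 # assumed number of models in production
retrains_per_month = 4          # assumed weekly retraining cadence
gpu_hours_per_retrain = 10      # assumed compute per training run
price_per_gpu_hour = 3.00       # assumed on-demand rate in USD

monthly_cost = use_cases * retrains_per_month * gpu_hours_per_retrain * price_per_gpu_hour
print(f"estimated monthly training spend: ${monthly_cost:,.0f}")    # -> $12,000
```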

So there's some balancing you're going to have to do as an infrastructure professional. You're going to have to, A, make infrastructure available quickly, whether it's in the cloud or on premises, to the data science, AI engineering, and data engineering teams that need it. And B, you're going to have to optimize cost while managing infrastructure in both environments. That's going to be a challenge.

So looking at 2019, the year we're in now, we see these use cases ramping up. We think most enterprises that have a handful of use cases will at least double them. And toward the end of the year, we're going to start to see a lot more use cases. We probably won't get to 500 use cases per enterprise for two or three years, but we're going to start to see the acceleration happen, and there's going to be pressure on you to provide that infrastructure.