Seizing AI and the Edge for CSPs

A GigaOm-hosted webinar featuring AI leaders at Intel, discussing the best ways to leverage AI systems and how CSPs can take advantage of those technologies.

Transcript

Hey, everybody. This is a GigaOm webinar entitled "Seizing AI for the Edge for Cloud Service Providers," or CSPs. My name's Dave Linthicum and I'm going to be your host, master of ceremonies, so to speak, in this great discussion about how to leverage artificial intelligence systems for your benefit and how cloud service providers can leverage those technologies as well.

So again, my name's Dave Linthicum. I'm an analyst with GigaOm Research. I'm an author, speaker, B-list geek. I do a lot of podcasting and a lot of speaking on cloud computing, really looking at this technology and how it's going to work. And paired with me today is Ananth Sankar, senior director and head of AI solutions and business development at Intel. So, Ananth, tell us about yourself. What do you do at Intel? And tell us how you got into artificial intelligence.

All right. Good morning, David, and the rest of the audience here. I lead the AI solutions team and the business development function at Intel, where I'm responsible for helping customers realize their AI mission on Intel's product portfolio. I've been with Intel for nearly two decades. My background is in the data analytics space, and for the last four or five years, very much in classical machine learning and, more specifically, the deep learning segment. Looking at the number of research papers that get published and the number of use cases coming up, it's just a fascinating time to be part of the AI journey.

So before we get started on the topic, I'd love to get some candid opinions from you. What do you think has changed most in the last five years about leveraging artificial intelligence systems, machine learning, deep learning, and coupling this stuff with IoT and edge-based systems?

You know, the concept of artificial intelligence has been around for a long time, going back 40, 50 years. Three things have fundamentally changed that are fueling the growth of artificial intelligence. One, the unit cost of computing continues to go down. The second thing that has happened is data democratization, meaning more companies, governments, and academia have started making data publicly available. The third is tools convergence. What was previously only possible with enterprise-grade software, available to only a few enterprises, is now matched by comparable tools available in open source, for free, right?

So someone with the right attitude and aptitude can really put these three things together to build differentiating, AI-driven applications. That's fundamentally what has changed and contributed to the rapid growth in the space. Across a variety of vertical segments, you see a number of AI-enabled use cases that are touching both the top and bottom line for these companies. Many companies are fundamentally infusing AI into their workflows, and through that they're unleashing several new use cases and opportunities as well as driving operational efficiencies. So those three things, combined with the industry innovation that's happening, are unleashing lots of new opportunities for many, many enterprises and [INAUDIBLE].

Yeah, that's a great answer. And just a fun fact: the first job I had out of college, and I'm 57, was as an AI analyst, doing this programming back then. So it has been around for a tremendously long time, and several of us silverbacks are still in the market.

So, a couple of housekeeping things. Just keep in mind that you can ask questions during the presentation, and we'll try to address as many of those as we can [INAUDIBLE] to keep track. The other thing: we are going to do some polls during this presentation, because we're interested in what you think about this technology, including one we're about to do right now. So let's go ahead and get into it.

So this is the agenda, how we're basically going to get to it. We're talking about hype-driven trends like containers, how use cases are changing, how people are leveraging things, how service providers are learning to work and play well together with different tools and technologies, how cloud infrastructure is changing, and how edge and 5G are changing the game. Basically, all the things that are really going to affect your life in the next five years, we're going to talk about during this presentation. We're also going to get into a discussion of best practices; that's going to run throughout. And then we'll leave some time for Q&A.

So let's get to our first poll. Where do you see the public cloud growing in 2020: AI and machine learning, big data analytics, serverless, or container orchestration? Let's go ahead and launch the poll and see what you guys think. And let me get your perspective on this. What do you think's going to be the top answer?

My top pick, the fundamental agent of growth for many CSPs, would be AI and machine learning. Let's see what the rest of the audience thinks. You have overlapping [CHUCKLES] criteria, though: you've got to have the data and analytics capabilities to get into the AI phase. So depending on which stage of the AI journey each participant is in, they might choose big data analytics or AI. Those would be my top two picks.

What I did here is I picked things that are hyped up in the market right now: big data analytics, serverless computing, container orchestration, certainly with the Kubernetes stuff. I want to see exactly where people are placing their bets right now. So this is hyped technologies in competition, and we'll see who picks what. And I suspect we're going to get AI and machine learning, to your point, because most people have come to this presentation to learn about AI and machine learning; therefore, they're most interested in that.

So let's go ahead and close the poll, and let's go ahead and launch the results. Hmm, we've got zero percents showing. Let's go ahead and move on.

So let's go ahead and launch that poll again; maybe we had a technical error. I see we're getting a message here. OK, here are the results: 30% AI and machine learning, 30% big data analytics, 30% serverless, and 10% container orchestration. So it looks like we had an even split among the top three. Is that what you would have expected?

You know, I would think big data analytics and machine learning are somewhat interrelated. You've got to have the data and the tools to preprocess and prepare parameters for the machine learning model. Serverless and containers I would probably group into the DevOps toolbox, where you enable the data scientists and analysts to do a good job of consuming the data and processing it in the fastest possible manner, right?

The concept of serverless is nothing but providing the ability, the framework, for end users not to worry about what infrastructure they're consuming, right? My bet would have been that the top one would be AI and machine learning. It looks like the poll is somewhat equally split among the top three, and container orchestration is less critical in an infrastructure-as-a-service play; as you move up the stack into the platform and software-as-a-service space, the other tools become equally important.

OK, let's go ahead and move on. I think that's an interesting data point. So let's talk about how existing hype-driven trends are reflected in cloud service provider services, including how machine learning and the use of containers and serverless will likely grow into the future. And that's basically everything that's been hyped.

Right. Like I said, I'm noticing big shifts in my role, and I've had the privilege to work with a variety of customers: power users whose fundamental business relies on AI, supercomputing centers who are augmenting traditional modeling-and-simulation-type workflows with AI-based applications, and enterprise customers who are starting to experiment with how AI can benefit the top or bottom line. So that's a wide spectrum. The process is similar across all the customer segments, but depending on which specific step of the journey they're at, some have a lot more room for growth in terms of realizing impact.

Let's get to the core question here: is hype always right? I mean, one of the things that we can do as cloud service providers is really manage by magazine and chase everything that's hyped. And certainly that's AI-based systems, serverless computing, everything we just named. I picked the hyped stuff, and it looks like everybody thinks these things are going to be equally important moving forward. I think that's great.

But the thing is, at some point you have to innovate and create new things, not necessarily wait for hype to build around particular technology patterns. Is that true, or am I wrong?

No, you're absolutely right. Most companies that do not have clarity on the specific business problems they're trying to solve tend to struggle, right? They start experimenting with these new, upcoming technologies, and they don't have a clear way of showing business impact. So what happens is they do a proof of concept for the first three or four months, and then the effort fizzles out, because there is no correlation to the actual business problem they're trying to solve by applying these new, upcoming technologies.

So what I gently recommend to the customers and partners that interact with us: always start with the business problem you're trying to solve. What are the compelling, competitive, differentiating capabilities that this new product or solution is going to offer, such that it can improve the top line and the bottom line?

They have to have absolute clarity there. Many companies are using AI as the fundamental capability for growing the business, and most of those applications are what we call cloud-native applications: they develop the applications in the cloud, and those applications are meant for cloud users. There are a lot of real-world AI applications in that space, for sure. And then there are other areas where you have 20-, 30-year-old legacy systems, where introducing a change in tools or workflow requires substantial validation and convincing the analyst community that this is the right tool.

So I see several cases where proofs of concept start just because the technology is something everyone else talks about. When you start with the technology, you're starting with a solution looking for a problem to solve. In fact, it should be the other way around: you start with the business problem you're trying to solve, and choose the best technology that's going to help you get there.

So should cloud service providers be more innovative in what they do, or are they at about the right amount of innovation right now? Generally speaking, not naming a particular provider.

Right, so it's a wide range. The top ones, the popular ones, the audience knows, right? They are all trying to simplify the way data scientists, end users, and application developers can consume the technology, so you have multiple entry points into the cloud service product offerings. Historically, we had workstations under our desks, and then servers in the data center; now we're at a point where you can log into a console on a web page and get your compute instance. That's step one, right?

So what do you do with it? Process-wise, it's pretty much similar to what developers did when they got an instance from a server hosted under their desk or in the data center, right? So that's one state: the infrastructure-as-a-service offering. Then you see a lot of cloud service providers moving into platform as a service, and then into software-as-a-service offerings, which is really the next stage of process simplification, as we call it. You do not have to worry about setting up the software; everything comes preconfigured and preinstalled, and you dive right into solving the business problem, or creating algorithms to solve those business problems.

And it goes one stage further, where in the software-as-a-service concept, everything is driven as an API, an Application Programming Interface. If you have an image that you want tagged or annotated, or you want to identify the objects in an image, or you want to identify a pattern in a given data set, there is preprogrammed, predeveloped software that gets exposed as a service, right? One can consume that through a standard application programming interface.
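To make that concrete, here is a minimal sketch of what consuming such a service looks like from the developer's side. The endpoint, authentication header, and response shape are all hypothetical, invented for illustration; each provider documents its own equivalents.

```python
# A minimal sketch of consuming a vision service through a REST-style API.
# The endpoint, auth scheme, and response shape are hypothetical; each
# provider publishes its own versions of these.
import requests

API_URL = "https://vision.example-csp.com/v1/analyze"  # hypothetical endpoint
API_KEY = "your-api-key"

with open("factory_floor.jpg", "rb") as f:
    image_bytes = f.read()

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    files={"image": image_bytes},
    data={"features": "labels,objects"},  # ask for tags and object detection
    timeout=30,
)
resp.raise_for_status()

# Assumed (hypothetical) response shape: {"labels": [...], "objects": [...]}
for label in resp.json().get("labels", []):
    print(label["name"], label["confidence"])
```

The point of this pattern is that the caller never sees the model, the hardware, or the serving stack behind the endpoint.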

So it's different layers of abstraction and simplification, and to a large extent there's a lot of similarity in what the cloud service providers offer; they all try to play in all three spaces. There is also the concept of the marketplace that has come up, where they invite third-party service providers, software vendors, and system integrators to bring value-added services. So as an end user, you consume one of the three services the cloud service provider offers, infrastructure, platform, or software, and you couple that with the marketplace solutions available from other software vendors. You now have the ability to put the pieces together to accelerate your time to solution.

That, I would say, seems to be the common modus operandi among the cloud service providers. And obviously the next wave of cloud service providers is coming, and they are certainly thinking through how they want to differentiate, collaborate, and compete with other providers' offerings. The world of APIs especially is pulsing: the rate of change in the number of research publications and the number of use cases is just phenomenal. That is, in fact, fueling the fire, so to speak, enabling the cloud service providers to keep offering new services to differentiate amongst themselves.

Yeah, this is a great discussion. Let's move on and talk about the use cases. So where are the AI use cases showing up? And how are the cloud service providers able to capture these use cases and keep up with what the business needs?

Yeah, so there are plenty of examples; we see AI solutions in every market. Starting with agriculture, energy, education, government, finance, health; really, the list goes on. More specifically, transportation, with autonomous vehicles, and smart homes, right? We have a number of use cases in retail, understanding consumer behavior as soon as someone walks into the store, or offering them personalized recommendations. And you see a number of use cases in the media and analytics space as well, creating thrilling experiences.

The industrial segment is going through a significant evolution. For example, in one recent piece of work we did with a leading oil and gas energy provider: in the traditional model, they used to send a robot with a video camera to monitor the corrosion levels on an oil rig. They have thousands of oil rigs in the middle of the ocean. So the robot goes down into the ocean, takes a video of the rig, and someone manually analyzes 30 or 40 minutes' worth of video to find out which specific nut or bolt is showing a corrosion level larger than usual, so they can send somebody out for preventive maintenance.

With the AI tool that we implemented, we have essentially eliminated the need to have someone watch the 40-minute video to decide which specific part of the oil rig needs preventive maintenance. The whole process can now happen in less than a minute. That's the power of AI in the industrial segment.

The financial community has, in fact, been using classical machine learning for a number of years, and the concept of neural networks is unleashing a lot more use cases. With the innovations happening, we are working on building neural network and mathematical models that get closer to human-brain-level parity, right? So that's another area where neural networks are being applied: financial threat modeling, risk analysis, and so on.

The other major segment where we see significant traction and adoption of AI is health care. I was reading, David, a couple of weeks ago, about the number of medical research papers and journals that get published; it's significant, right? If someone were an oncologist, they would have to spend about 20 hours every day to keep up with all the publications relevant to their domain, which is simply not practical. So there are now tools that can create summaries of those research journals so that the oncologist can keep up to date on the various innovations happening in the field.

So there are areas where AI is augmenting the human experts, and there are areas where AI can speed up the process and therefore free humans up for more value-added work, versus just watching a 40-minute video and tagging the specific nut and bolt that needs preventive maintenance. So there are two or three types of scenarios. And then obviously there are robotic-automation-type scenarios, like filling out forms, or chatbots that, based on the conversation, are context aware, right? It's not just about classifying whether you have a dog or a cat in a picture; we are now at a point of really understanding the meaning of a given image or data set.

The next stage of evolution is going to take us toward human-brain-level parity. Looking at the number of innovations happening in the space, I think reaching human-brain-level parity is going to become a reality very soon. So the use cases are plenty, and customers are selectively choosing specific areas, obviously starting with the business problem they're trying to solve or the opportunity they want to go after, and then deciding on the right tools and tool sets available to them.

So let's move on. This question is actually near and dear to my heart, because I'm doing a lot of multicloud deployment these days: how are cloud service providers learning to work and play well together, including hosting multicloud tools and technology? This ultimately comes down to the point that multicloud is kind of the norm now. People are learning to make these clouds work and play well together, but the public cloud providers seem to exist in silos, not necessarily proprietary, but they do things in their own way, and they want to leverage their own cloud-native tools. How should they think about working and playing well together? That's probably a better way to frame the question.

If you look at the typical deployments that are happening, there's one set, like you said, of cloud-native applications: applications built in and for the cloud. That's one kind of application. And generally, the capabilities and tools across infrastructure, platform, and software as a service are somewhat similar amongst the major cloud service providers.

Now, the devil is in the details, right? Typically, when enterprise users want to migrate some of their applications from a private cloud to a hybrid or public cloud offering, the approach they take is to keep the core IP internally, in the local, private cloud processing capability. Then they identify upstream and downstream types of applications. Where you have market sensing and you have to do data filtering, preprocessing, and cleaning, those functions they call upstream. Any post-decision implementation or action-driven applications they call downstream.

I've seen customers choose dual sourcing; everyone prefers to have a dual source, right? So they prefer one cloud service provider for upstream and another provider for downstream, and they keep the core IP in their private cloud. That's another implementation scenario we regularly come across.

Now, to answer your question of how the CSPs, the cloud service providers, are working and playing together: there are industry standards that are forcing convergence to happen, right? I'll give a specific example. ONNX, O-N-N-X, the Open Neural Network Exchange, has now become popular. You could train your numerical or mathematical model in any software framework; out comes the trained model, and you can deploy that model on any other cloud service provider's offering, or run it in your own private cloud instance, or on your own laptop, right? So you start to see industry standards emerging that force a degree of collaboration between the cloud service provider offerings.
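To illustrate the flow described here, a minimal sketch using PyTorch and ONNX Runtime follows. The toy model stands in for whatever you actually trained; the point is that the exported .onnx file runs unchanged wherever ONNX Runtime is available.

```python
# A minimal sketch of the ONNX portability flow: train in one framework
# (PyTorch here), export to ONNX, and run the same model anywhere ONNX
# Runtime exists (another cloud, on-prem, a laptop).
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

# A toy model standing in for whatever you actually trained.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# Export the trained model to the framework-neutral ONNX format.
dummy_input = torch.randn(1, 4)
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["output"])

# Load and run it with ONNX Runtime; PyTorch is no longer needed here.
session = ort.InferenceSession("model.onnx")
x = np.random.randn(1, 4).astype(np.float32)
outputs = session.run(None, {"input": x})
print(outputs[0])  # same predictions, independent of where this runs
```

The same model.onnx file behaves identically across providers, which is exactly the portability pressure the standard creates.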

So what is critical is that when an enterprise decides on the cloud migration journey, they have to be absolutely clear on where the data is going to reside, what the upstream and downstream dependencies are, and what the core IP is going to be. Because oftentimes, once you migrate applications to one particular cloud service provider, it takes effort to migrate them back into your private cloud or over to another public cloud provider, right?

Industry standards are still evolving and emerging, but the way enterprises today are solving the problem is to split the workflow into upstream and downstream and keep the core IP locally, in their private cloud. And they seem to be managing that well, architecturally. Over time, I expect more and more industry standards to evolve, and things like containers are going to simplify this. You can package the entire set of dependencies and tools that are critical for your application into a container, and then port that container between the cloud service providers' offerings. So that part is only getting better.
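As a rough illustration of that packaging, here is what a minimal container definition for an inference service might look like. The file names, base image, and dependency list are all assumptions for the sketch, not any specific product's recipe.

```dockerfile
# Illustrative only: package an inference service plus its exact dependencies
# so the same image runs on any CSP's container service or on-prem.
FROM python:3.9-slim

WORKDIR /app

# Pin the full dependency set inside the image (requirements.txt is assumed
# to list onnxruntime, numpy, flask, etc. for this hypothetical service).
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Ship the trained model and the serving code together.
COPY model.onnx serve.py ./

EXPOSE 8080
CMD ["python", "serve.py"]
```

Once built, the same image can be pushed to any provider's registry and run on its container service, which is the portability being described.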

I think the core question is going to be where you keep your data, because moving data in and out between cloud service providers takes time, effort, and money. That's one area where industry standards should continue to evolve. In the application space, like I said, in the AI space, there are industry standards being codeveloped, largely by cloud service providers, silicon providers, and software vendors, to support application migration between cloud service provider offerings. So once you figure out the software and storage strategy, the tools and capabilities at the application level are becoming friendlier.

And one thing you just mentioned was the sharing of knowledge models across cloud providers. How far away are we from that being a reality?

So there are two concepts. Again, you could take the proprietary model you've built, your pretrained model, whether you trained it in your local private cloud, in the data center, or on your own workstation, and you can port that model to any cloud service provider offering today, because the ability to understand the model and its parameters, hyperparameters, biases, and weights is now getting standardized, OK?

The inconsistency that enterprises will likely run into is in the tools they use to preprocess and prepare the data set. If you look at the amount of time one spends to train a model, or even to create parameters for training the model, roughly 70% of the time in an analytics or AI workflow goes toward preparing the data: annotating the data, cleaning it, and transforming it into a format you can use to derive parameters and start building the model.
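For a sense of what that preparation step involves, here is a minimal sketch of a typical preprocessing pipeline in scikit-learn. The file and column names (raw_records.csv, age, region, and so on) are invented for illustration.

```python
# A minimal sketch of the data-preparation work described above: imputing
# missing values, scaling numeric columns, and encoding categoricals before
# any model training happens.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("raw_records.csv")  # hypothetical raw data set

numeric_cols = ["age", "balance"]
categorical_cols = ["region", "segment"]

preprocess = ColumnTransformer([
    ("num", Pipeline([
        ("impute", SimpleImputer(strategy="median")),  # fill numeric gaps
        ("scale", StandardScaler()),                   # normalize ranges
    ]), numeric_cols),
    ("cat", Pipeline([
        ("impute", SimpleImputer(strategy="most_frequent")),
        ("encode", OneHotEncoder(handle_unknown="ignore")),
    ]), categorical_cols),
])

X = preprocess.fit_transform(df)  # now ready for model training
```

Note that it is this pipeline, not just the trained model, that has to travel with the workload; that is exactly where the tool inconsistency between providers shows up.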

So it really depends on what part of the workflow you intend to seamlessly migrate between the cloud service providers. Tools-wise, the convergence is still not there yet. But at the application layer, migrating the trained model from one environment to another through containers, I think a lot of progress and good practice has been made so far in that space.

So, moving on and getting to the question of how cloud infrastructure is changing and how you can stay ahead of the resource consumption curve. I'd love to hear about this, because infrastructure had remained fairly static for a while, but now it's starting to change quickly. So what are the trends and directions here? What's changing in the space? What kinds of things do cloud providers need to keep track of?

You have a fantastic mind map here that kind of captures all the key touch points, dependencies, and interdependencies in the space. Like I said, the concept of the elastic cloud, the on-demand, self-provisioned service, became a reality several years ago, thanks to the major cloud service providers.

As the workloads mature, the characteristics of the applications continue to evolve, right? A typical web-service-type application has certain requirements in terms of computational capability, the amount of memory the application instance requires, and the compute cores it needs. A database or transaction-processing-type application has different requirements. An instance you're building to create and train a neural network or machine learning model is different again. So it's a wide range.

What's happening is that because the cloud service providers are targeting end customers who might be building applications across that whole spectrum, they're having to build elastic cloud capabilities that can offer those services cost-effectively, with high ease of use and productivity for those end users.

But internally, what this results in is that the infrastructure teams at the cloud service providers, the ones supporting engineering and providing these services, have their success measured by total cost of ownership, right? How much money do they have to spend to support, let's say, 100,000 compute instances? They have to find ways to create fungible infrastructure that is not built for just one type of application, but has enough flexibility to support past, present, and future applications with a variety of characteristics. So the concept of multitenancy is very good, and it's not going to go away anytime soon, right?

But at the same time, when you have hundreds of users on a given compute server rack, not all of them are going to be running applications all the time. So how do you find cycles that are sitting unused? That gives the cloud service provider the opportunity to oversubscribe and therefore increase resource utilization. But they can only go there if they can guarantee service levels, right?
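As a back-of-the-envelope illustration of that trade-off, not any provider's actual algorithm, here is a small simulation: if tenants are busy only about 30% of the time, sizing a host to a high percentile of observed aggregate demand, rather than the sum of reservations, leaves substantial room to oversubscribe while still honoring service levels almost all of the time. All the numbers are invented.

```python
# A back-of-the-envelope sketch of why idle cycles permit oversubscription:
# size a host to the 99th percentile of observed aggregate demand rather
# than the sum of what tenants reserved. Numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
tenants, samples = 100, 10_000
reserved_per_tenant = 4          # vCPUs each tenant reserved
avg_utilization = 0.30           # tenants busy ~30% of the time

# Simulate each tenant's instantaneous demand over many time samples.
demand = rng.binomial(reserved_per_tenant, avg_utilization,
                      size=(samples, tenants))
aggregate = demand.sum(axis=1)   # total vCPUs actually in use per sample

reserved_total = tenants * reserved_per_tenant
p99 = np.percentile(aggregate, 99)  # capacity that honors SLAs ~99% of the time

print(f"Reserved: {reserved_total} vCPUs, p99 demand: {p99:.0f} vCPUs")
print(f"Feasible oversubscription ratio: {reserved_total / p99:.1f}x")
```

With these invented numbers, 400 reserved vCPUs see a 99th-percentile demand of roughly 140, suggesting a bit under 3x oversubscription before service levels are at risk; the real calculation is far more involved, but the shape of the argument is the same.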

So the key is: how do you create fungible infrastructure in an era where application needs, demands, and characteristics keep changing? That's the fundamental challenge the cloud service providers have. And the question of general-purpose CPUs versus offloadable accelerators has been getting a lot of attention as well, especially among the large cloud service providers. They have enough consolidation of workloads that they can keep all the general-purpose compute on CPUs and find ways to aggregate any application that can be offloaded to a purpose-built GPU or an application-specific integrated circuit. So they're getting to that level of maturity as well, right? But ultimately, it's going to be key for all of them to provide fungible infrastructure.

So, moving on to a question I'm interested in as well: what aspects of AI, artificial intelligence, should cloud service providers focus on now, and how can cloud service providers win in a multibillion-dollar market that everybody's going after right now? What advice would you give? I guess this is a cross between business trends and technology trends, isn't it?

Right, it is, indeed. We talk internally about AI in the workflow, and we see a few kinds of users. First, there are new businesses being built with AI as the fundamental change agent, right? Look at a lot of the startup companies that have come up: the fundamental business relies on AI, applying AI-based principles to offer differentiated services. That's one kind of capability.

The second kind is the traditional enterprise companies with legacy applications they want to innovate, where infusing AI-based principles is helping them achieve operational efficiencies that were not previously possible.

The third kind: look at the scientific-simulation and pharmaceutical-research-type companies. If they continue with the conventional way of doing modeling and simulation, their computing demand is going to grow 10, 20, 30 times over, right? They simply can't continue on that journey. So they're looking at new ways to either augment or potentially replace some of the traditional modeling-and-simulation-type applications.

So those are the three types of users all the major cloud service providers are targeting. The first is cloud-native applications, where AI is fundamentally a change agent for the business model. Second is the set of enterprise customers who are in the process of migrating some of their legacy applications into cloud-native, AI-based applications. And the third category is the supercomputing-type users, power users, who are looking at augmenting or replacing traditional modeling and simulation with AI-infused approaches.

Now, the cloud service providers have to offer capabilities for all three segments; that's number one. And they have to have a fungible environment at the infrastructure level because, like I said before, the characteristics of these applications and workloads are very different, right? They cannot build one platform for one particular workflow, so they have to have enough fungibility and flexibility to support past, present, and future application needs.

Many of the cloud service providers are also looking at whether it makes sense to move the data from the edge to the cloud for the analysis to occur, or whether it's more practical to collect, process, and analyze the data at the edge and then only send the aggregated data to the back end for refining your model, right? So you have that trend. How one decides what stays on the edge and what goes to the cloud is also starting to get significant traction.

So the cloud service providers obviously have to look into the user segments, the end-customer segments they're targeting, which are the typical three that I bucketed. Then, how are they differentiating amongst themselves in terms of infrastructure offerings? And third, do they have enough tools and capabilities to democratize this whole process? It has to be simple, right?

Data scientist is one of the highest-paid jobs these days, so you want the data scientists to be inventing, innovating, [INAUDIBLE], right? You don't want them configuring servers and installing software applications. If you can automate all of that and provide a notebook-type environment that seamlessly connects the workflow all the way from data collection to model building and deployment, that's another area where the cloud service providers should continue to innovate.

Let's go to another poll. We're going to talk about 5G next, and this time I'm going to give our operator enough time to get the whole question up there, so you guys will have a chance to respond.

So, when do you think 5G will make a difference in cloud adoption? Choose the one that applies: 2020, 2021, 2022, or after 2022. I'm really interested in your take on this, because it seems like we have this idea that 5G is going to change the universe on us. Some say it's not rolling out fast enough; some say it's rolling out too fast. Most of us don't carry phones that support 5G, and apparently people don't understand that we need different phones. For some reason, it says 5G edge on my [CHUCKLES] phone; I don't know what that means. So what do you think is the driver and the force multiplier in 5G as it relates to cloud computing?

Let's go ahead and look at the poll. Where do you think we're going to see the adoption come? It looks like we're up to 38% who have voted. So, what's your feel?

You know--

[INTERPOSING VOICES]

Yeah, I think 2022. A lot of use cases are still emerging, still in the early stages. So I would pick 2022. Let's see what the audience says.

OK, it looks like 60% have voted. You guys please get in and cast your votes; we'll close it off in about a minute. So let's do some betting here. I think it's going to be 2021. 2020 is close, and I don't think they're going to get the roll-outs done by then. But I think the perception, not necessarily the reality, is that 5G is going to make a huge difference by 2021, even bringing cloud computing to areas of this country and other countries that typically couldn't get an internet connection, so they couldn't use software as a service or Amazon, Microsoft, Google, those sorts of things.

And it's really kind of a game changer when you see the internet come to communities that didn't have it before. Do you think we're going to see a cultural shift as well as a shift toward cloud computing?

I think the use cases, David, are going to force that convergence to occur faster than we're probably anticipating. But like I said, the infrastructure has to be available, and again, that depends on the part of the world each of us is in and when it becomes available there.

So let's go ahead and close out the poll and look at the results. OK, going back to my slides. Here's the way it stacks up: with 60% of the votes in, it's 18% for 2021, 53% for 2022, which is where your bet was, and 29% for after 2022.

So does that line up with what you think, or what you find in the marketplace? All right, let's go ahead and move on.

So this is the big focus right now: how edge computing is changing the game and why you're already behind. I think the cloud providers are placing huge bets on the edge. I noticed at the big conferences over the last couple of weeks, certainly Dreamforce and re:Invent and things like that, edge computing was a primary driver. And ultimately, we have the ability to leverage this as a core enabler of cloud computing, because edge-based systems are able to carry the open internet down to customers and consumers of cloud-based systems who typically couldn't consume it.

Ultimately, the number of 5G connections is going to go up; here is a forecast for North America. I'd love to get our cohost's guess, but he's rebooting his connection to the webinar right now. I think ultimately we're going to be able to look at how this technology will give us access to systems in the cloud that were typically out of reach. Bandwidth has typically been a limitation, and a reason people didn't leverage cloud computing; they just didn't have the bandwidth to move information back and forth. And it does take a lot of bandwidth moving over the open internet for the public clouds to consume and produce information.

So without the bandwidth, you basically had to push cloud computing down the road and stick with internal systems, things like that. Ultimately, this is about the viability of this technology and the viability of cloud technology, and how one needs access to the other. The bets now being made are that cloud computing can, in essence, enhance various businesses, organizations, and even cultures that typically didn't have access to these resources.

So I think our guest is back. I'd love to get his perspective on that.

Hey, David. Sorry about the disconnect. Yeah, I completely agree with the assessment. The number of real-world applications that can fundamentally benefit from 5G capabilities is increasing, from autonomous vehicles, where you want the device to communicate and make decisions locally as much as possible, to cases where a fraction of a second can make a huge financial difference for certain businesses.

So it's a wide range of application possibilities that can potentially benefit from 5G. I agree with the data point you're sharing here, which is that we expect the applications to increase over time. And we have the concept of zero latency, or near-zero latency, right? Depending on the application, if your device is collecting data and sending it to the wireless tower, then to the data center for processing, and then bringing the result back to the tower and then to your device, you're talking about significant round-trip latency. Especially in cases where you're dealing with people's lives, you want the decisions to be made locally as much as possible, and preferably at the shortest possible connection point, right?
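To put rough numbers on that round trip, here is a small illustrative latency budget. Every figure is hypothetical, chosen only to show why pushing the inference to an edge site next to the tower changes the picture.

```python
# An illustrative latency budget (numbers are hypothetical, for intuition
# only) comparing a device -> tower -> data center round trip with keeping
# the inference at an edge site next to the tower.
cloud_path_ms = {
    "device to tower (5G radio)": 5,
    "tower to regional data center": 20,
    "inference in data center": 10,
    "data center back to tower": 20,
    "tower back to device": 5,
}
edge_path_ms = {
    "device to tower (5G radio)": 5,
    "inference at the edge site": 10,
    "tower back to device": 5,
}

print(f"Cloud round trip: {sum(cloud_path_ms.values())} ms")
print(f"Edge round trip:  {sum(edge_path_ms.values())} ms")
# For a vehicle at 30 m/s, every 10 ms of added latency is ~0.3 m traveled.
```

The absolute numbers will vary enormously by network, but the structural point holds: the two wide-area hops dominate the budget, and removing them is what edge placement buys you.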

5G is fundamentally going to enable many of those use cases in the telecommunications space, the autonomous vehicle space, and health care, right? That's another area where we expect seamless interaction between connected devices. The applications are only going to grow in this space, and the cloud service providers are fundamentally well positioned to win here; many of them have a strong back-end data center presence today.

And as I pointed out, many of the cloud service providers are also migrating some of their service offerings to edge locations so that they can connect to devices over the shortest possible path, and therefore provide responses, collective intelligence, back to those end users in the shortest possible time frame.

I'd also like to learn more about Intel's role in all this, the role of Intel in providing the power of the cloud. And this is not an advertising speech, this is not a pitch, but give us the details of what you guys are doing behind the scenes to accelerate the capabilities of cloud service providers. There's always the Intel Inside, and the core, basic, native capabilities always come down to products that you guys produce. So tell us exactly what you're doing and how you're advancing the CSP space these days.

Yeah, that's a great leading question; thanks for it. At Intel, we have dedicated teams collaborating with all the major cloud service providers. What we really strive to do, David, is understand the pain points the cloud service providers are trying to solve, and we really try to understand the perspective of those providers' customers. Everyone wants to make it easy, make it more secure, make applications more intelligent, and offer the workbench concept, the tools and libraries, that makes the process of deriving intelligence easier for end users.

So we have lots of engagements and daily interactions with all the major cloud service providers, right? Traditionally, Intel has been building and bringing CPUs to market, the Xeon server CPUs. And as I pointed out earlier in our conversation, there are certain workloads, AI being one of them, that are getting consolidated. Typically, the computing demand of traditional, general-purpose applications was roughly doubling every 18 to 24 months, right?

But if you look at the new AI-driven applications, the computing demand is doubling roughly every three to four months, specifically for the model-training aspects, right? So you have to be able to serve both ends of the spectrum. We have products all the way from the milliwatt range, targeting edge devices and surveillance cameras, up to hundreds of watts for the power-hungry, extremely dense, high-compute infrastructure.
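A quick worked comparison shows why that difference in doubling period matters so much. The sketch below simply takes the midpoints of the two ranges cited and looks two years out.

```python
# A worked comparison of the two growth rates mentioned above: demand
# doubling every ~21 months (traditional compute, midpoint of 18-24)
# versus every ~3.5 months (AI training, midpoint of 3-4).
months = 24  # look two years out

traditional = 2 ** (months / 21)   # ~18-24 month doubling
ai_training = 2 ** (months / 3.5)  # ~3-4 month doubling

print(f"Traditional compute demand after {months} months: {traditional:.1f}x")
print(f"AI training compute demand after {months} months: {ai_training:.0f}x")
```

Roughly 2x versus more than 100x over the same two years, which is the gap that motivates purpose-built accelerators alongside general-purpose CPUs.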

I always use this analogy: CPUs, general-purpose compute capabilities, correlate to someone who competes in the decathlon, right? You have to be good at doing ten different things, all of them to the best of your ability. But if you want to grow an athlete who will excel in, let's say, the 100-meter dash, you have to train him or her for that specifically, right? So we have purpose-built accelerators for those specific types of applications.

AI, for the large part, still runs on standard CPUs. In specific areas, especially training, workload consolidation is starting to gear more toward GPUs, Graphics Processing Units, or even application-specific integrated circuits. So Intel has a portfolio across all these capabilities and offerings: all the way from the smallest devices that sit at the edge or in cameras, to servers sitting under the desk in a workstation form factor, to the capabilities in the data center.

So we have the product offerings, and we're trying to simplify the software through common application programming interfaces, and thereby simplify the consumption of the capabilities and technologies we bring to market from an application developer's standpoint, so they do not have to worry about which hardware their application runs on. We try to make it somewhat seamless and transparent. That's one of the significant areas of investment we're making, in application programming interfaces.

We continue to invest in the broadest portfolio of products, because the needs are different and applications will continue to evolve, so we will continue on that journey as well, while keeping the focus on: how can we solve the problems of our customers' customers? Meaning, for the consumers of cloud technologies, what specific capabilities, what types of applications, what specific needs and ease-of-use requirements do they have, and how can we bring those down to the silicon and software level? That is where we have been focusing, and I expect we'll continue to play in that space.

And the final topic, also something that's very exciting to me: where does open source play in cloud computing? And how should cloud service providers consider [INAUDIBLE] open source as part of their technology strategy?

It is going to be fundamentally critical for all of them to thrive. Like I said, there are three things: the cost of computing going down, more data becoming available, and the third critical element is these open source concepts, the tools and libraries. What was previously only possible with software that would have cost someone millions of dollars, the exact same work can now be performed with open source tools that are freely available, right?

Many cloud service providers are embracing the open source tool chain, and they will continue to do that, because containers and Kubernetes and these concepts offer the fundamental ingredients for accelerating the innovation happening in this space, not only in AI, but in overall analytics and the web-service application space.

With open source, you see the amount of innovation that's been happening, from the Linux days to where we are now; it's just a phenomenal change. I expect all cloud service providers will continue to support open source tool chains. And yet they have to differentiate among themselves, because they are all somewhat competing for the same business opportunity, right? So you see the fundamental platform based predominantly on open source tools and software offerings; then most of the cloud service providers try to differentiate on the manageability software layer, or on ease of use. And on top of that, they have the native applications they're trying to bring to market, thereby differentiating amongst themselves.

But ultimately, the foundation for the cloud service provider offerings today is open source, and I expect that will only increase as time goes by.

Now we're going to move to questions; we just have a few minutes and a few questions. One question I took out of the queue first, because it's something I'm interested in: where do you get an education on AI systems out there? Where do you go for information, either if you're a professional trying to stay updated on industry news, trends, and where things are going, or if you're just starting out and trying to, in essence, learn about AI and how it applies to cloud computing?

Yeah, so at Intel we have launched a couple of efforts in this space. One is the AI Developer Cloud, where we offer prebuilt trainings. We have a variety of sample applications that we teach application developers to build through the coursework. That's one area I strongly encourage developers to take advantage of.

The second area is a program we launched called AI Builder. Essentially, what we realized is that customers are at both ends of the spectrum: there are power users whose business is fundamentally based on AI, [INAUDIBLE] and there are developers and customers who are just exploring how AI can help. So we launched this AI Builder program, where we bring together a community of developers, software vendors, and system integrators to guide them on how to build, optimize, and accelerate their AI applications on the Intel-based product portfolio.

We have close to 200 proof points that we've collectively developed with industry partners: specific applications in, let's say, health care or retail, doing customer-sentiment-analysis-type workloads, all the way to highly complex physics particle simulation and crash-simulation-type workloads.

So I strongly encourage both of those. One is the AI Developer Cloud for becoming familiar with the tools, libraries, and concepts. Then, once developers are ready, they can also take advantage of the AI Builder capability [INAUDIBLE] launch, so they can take those learnings further into their implementation at the application level.

So, Ananth, I want to thank you very much for sharing your time with us. I found this extremely valuable in understanding where AI is going, how it applies to the cloud-based market, and specifically how cloud service providers can leverage this technology. So I urge you guys to check out what Intel is doing. Intel is one of those companies that's been around for a long time; they have a huge value proposition in the cloud computing space, and they're kind of the engine inside that makes things work. As we move into AI and other technologies that take an inordinate amount of processing, we need to rely on these companies to be innovative and create the technology that's going to drive us into the future.

So until next time, best of luck with your cloud computing projects, your AI projects, and, if you're a cloud service provider, with building your technology. And please come back for another GigaOm webinar. I look forward to seeing you. Cheers, guys.

Thank you.
