Navigating the AI Framework Maze

Artificial intelligence (AI) is a fast-growing market, driven by exploding volumes of data from social media, the Internet of Things (IoT), machine-to-machine communication and many more sources. Advances in AI algorithms, along with compute and storage innovations, provide an opportunity for cloud service providers (CSPs) to add AI as a service (AIaaS) to their service portfolios.

As described in the eGuide, “AI for CSPs: From Insight to Action,” companies are teaching machines to learn, think, talk, and listen. AI is producing impressive results in industries ranging from agriculture to space exploration to health and life sciences. The share of jobs requiring AI skills has grown 4.5x since 2013.1 The time is ripe for CSPs to add AI services to their portfolios.

"AI is the most important technology that anybody on the planet is working on today." 2

Dave Coplin, chief envisioning officer, Microsoft, as quoted in Business Insider

Industry-leading CSPs, such as Google, AWS, and Microsoft Azure, are well-established players in the AIaaS marketplace. But there’s plenty of room for other CSPs to find their AIaaS niche as well. For example, both UCloud and Kingsoft have developed AIaaS offerings.

From cognitive cybersecurity to thought-controlled gaming and virtual companions, from advanced robotics and autonomous cars to real-time emotion analytics, the applications for AIaaS are endless. But with use cases in so many industries, and customer needs varying widely, how do you get started in this burgeoning field?

The trick is to discover what your customers want and what your infrastructure needs to satisfy their requirements, then invest intelligently.

There are many considerations when choosing an AI framework, including the availability of pretrained models, benchmark performance, ease of use, scalability, interoperability, deployment method (bare metal or containers, for example), basic functionality, and community and support.

“When choosing an AI solution, like when buying a car, [we] need to understand what is under the hood to make sure we are buying the best product for our needs.” 3

InfoWorld, June 2017

To get the high performance customers expect, it is beneficial to invest not only in an appropriate AI framework, but also in hardware that works best with the chosen framework. For example, BigDL*, TensorFlow*, Caffe*, and MXNet* have versions that have been optimized specifically for Intel® Xeon® Scalable processors. These optimizations increase performance, which can be a competitive advantage. There are also several software libraries available, such as Intel® Math Kernel Library (Intel® MKL) and Intel® Data Analytics Acceleration Library (Intel® DAAL), that can further improve AI workload performance.
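As a quick sanity check that an optimized math library is actually in play, you can inspect which BLAS/LAPACK backend your Python numerics stack was built against. This is a minimal sketch using NumPy; whether “mkl” appears in the output depends entirely on how your environment was installed (for example, some conda distributions ship MKL-linked builds, while others use OpenBLAS):

```python
# Minimal sketch: report which BLAS/LAPACK backend NumPy links against.
# In an Intel MKL-optimized environment, the printed build information
# typically mentions "mkl"; otherwise it may show OpenBLAS or another
# backend instead.
import numpy as np

np.show_config()  # prints the BLAS/LAPACK libraries NumPy was built with
```

Frameworks expose similar introspection in their own ways; consult your framework’s documentation to confirm that the Intel-optimized build is the one actually installed.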

Who wouldn’t want to be part of a global market that is expected to reach USD 14.71 billion by 2024?4 To find out more about the AIaaS opportunity and how to choose the right AI framework for your needs, read the eGuide “AI for CSPs: From Insight to Action.”


Find out more about AIaaS and how it can help you build your cloud services business. Download the eGuide, “AI for CSPs: From Insight to Action.”

Download the eGuide ›

AI Opportunities for CSPs

Read this article to learn how an increasing number of CSPs are recognizing the opportunities of AI for business growth.

Read the article

AI for Cloud Service Providers

Set your cloud services apart by offering your customers cloud-delivered AI solutions that solve their biggest business challenges.

Watch the video

Cloud Service Providers Resources

Stay ahead of a fast, steep technology curve and increasing competition through differentiation and agility.

Explore CSP resources

Product and Performance Information

1

Forbes, January 2018, “10 Charts That Will Change Your Perspective on Artificial Intelligence's Growth.” https://www.forbes.com/sites/louiscolumbus/2018/01/12/10-charts-that-will-change-your-perspective-on-artificial-intelligences-growth/#2cb4a20b4758.

2

Business Insider, as cited in the quote attribution above.