AI: How the infrastructure you have can give you what you need

Factors to Consider for Your AI PoC:

  • What’s the right use case for my organization?

  • What technology do I need, and what do I already have?

  • Do we have the necessary skills and if not, where can we find them or how can we develop them?



The answer to exactly how much more machine learning and deep learning can help you get from your data depends on the use cases involved. It also depends on your organization’s appetite for experimentation.

However, while different organizations are in different stages of their artificial intelligence (AI) journey, the risks of experimenting are low if you leverage your existing infrastructure to get started. To assist, Intel has optimized a number of popular deep learning frameworks to run on Intel® architecture, including TensorFlow*, Theano*, and more.

Additionally, BigDL was created by Intel to bring deep learning to big data. It is a distributed deep learning library for Apache Spark* that can run directly on top of existing Spark or Apache Hadoop* clusters, allowing your development teams to write deep learning applications as Scala or Python programs.

BigDL uses the Intel® Math Kernel Library (Intel® MKL) and multithreaded programming in each Spark task. This enables it to deliver higher deep learning performance than out-of-the-box open source Torch* or TensorFlow on a single-node Intel® Xeon® processor.

Three AI use cases that can have an impact in almost any industry

Intel sees three key areas where enterprises are experimenting with AI on their current data center infrastructure – image recognition, natural language processing (NLP) and predictive maintenance. And when the time comes for these to be scaled up, leveraging open source frameworks on your existing data center architecture can make a real difference in simplifying AI adoption across the enterprise.

1) Image recognition

Image recognition applications are being deployed today for quality control (identifying product defects), security (scanning faces and car license plates), and in healthcare (identifying tumors).

A common challenge for businesses is having enough data to train image classification and recognition algorithms, and pre-processing of images can account for more than half the total time to solution. Intel® Xeon® processors can support applications for data augmentation to help overcome this. These applications rotate and scale images, and adjust the colors, meaning that fewer images are required to effectively train image recognition algorithms (depending on the use case).
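As a simple illustration of the transforms data augmentation applies, the sketch below rotates and brightens a tiny grayscale image represented as a nested list of pixel intensities. A real pipeline would use an image library; this pure-Python version just shows the idea.

```python
# Minimal data-augmentation transforms on a grayscale image,
# represented as a list of rows of pixel intensities (0-255).

def rotate_90(image):
    """Rotate the image 90 degrees clockwise."""
    return [list(row) for row in zip(*image[::-1])]

def adjust_brightness(image, factor):
    """Scale pixel intensities, clamping to the 0-255 range."""
    return [[min(255, int(p * factor)) for p in row] for row in image]

original = [[10, 20],
            [30, 40]]

augmented = [original,
             rotate_90(original),
             adjust_brightness(original, 1.5)]
print(augmented[1])  # [[30, 10], [40, 20]]
```

Each transform yields a new training example from the same source image, which is how augmentation reduces the number of raw images needed to train a recognition model.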

CPUs excel at handling data augmentation workloads thanks to their power efficiency and high memory bandwidth – up to 100 GB/s. This is especially true of the Intel® Xeon® Scalable processor family, which is boosted by the Intel® Advanced Vector Extensions 512 (Intel® AVX-512) instruction set.

2) Natural language processing

Voice-activated virtual assistants don’t just process requests accurately, they understand the nature of queries to continuously improve themselves. Similarly, customer experience and satisfaction are being transformed by systems that can process call center recordings or handwritten forms – a treasure trove of previously hidden insights that can be used to identify common complaints or address customer issues faster.

NLP typically uses recurrent neural networks (RNNs) and long short-term memory (LSTM) networks, and when processing the loops and sequential dependencies that characterize these models, the Intel® Advanced Vector Extensions 512 (Intel® AVX-512) instruction set again comes into its own.
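The sequential dependency that makes RNNs distinctive can be seen in a minimal recurrent cell: each hidden state depends on the previous one, so the timesteps form a loop that must be processed in order. The weights below are arbitrary illustrative values, not a trained model.

```python
import math

def rnn_forward(inputs, w_in=0.5, w_rec=0.1):
    """Minimal single-unit RNN: h_t = tanh(w_in * x_t + w_rec * h_{t-1})."""
    h = 0.0
    states = []
    for x in inputs:  # each step depends on the previous hidden state
        h = math.tanh(w_in * x + w_rec * h)
        states.append(h)
    return states

states = rnn_forward([1.0, 0.0, 1.0])
print(len(states))  # one hidden state per timestep
```

Because step t cannot start before step t-1 finishes, performance gains come from vectorizing the arithmetic inside each step, which is where wide SIMD instruction sets such as AVX-512 help.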

3) Predictive maintenance

Predictive maintenance differs from image recognition and NLP in that it’s typically based on a much lower data rate, with information captured by sensors monitoring conditions at the edge. Ideally, as much compute as possible should take place at the edge, with data going back to the cloud only for deeper analysis or a decision. The Intel® Movidius™ Neural Compute Stick, powered by a VPU, is ideally suited to accelerating deep learning development at the edge.
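A common edge-side pattern is to pre-screen the low-rate sensor stream locally and only escalate unusual readings for deeper analysis. Below is a hedged sketch of such a filter using a rolling mean and a fixed deviation threshold; the window size, threshold, and sample readings are illustrative assumptions.

```python
from collections import deque

def detect_anomalies(readings, window=5, threshold=3.0):
    """Flag readings that deviate from the rolling mean of the previous
    `window` values by more than `threshold` units; only these would be
    sent upstream to the cloud for analysis."""
    recent = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(readings):
        if len(recent) == recent.maxlen:
            mean = sum(recent) / len(recent)
            if abs(value - mean) > threshold:
                anomalies.append((i, value))
        recent.append(value)
    return anomalies

# Steady vibration readings with one spike a maintenance team would care about.
stream = [1.0, 1.1, 0.9, 1.0, 1.2, 1.1, 9.5, 1.0, 0.9]
print(detect_anomalies(stream))  # [(6, 9.5)]
```

Filtering at the edge keeps most of the sensor traffic local, which matters when bandwidth back to the data center is limited.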

Watch: How artificial intelligence is transforming the way organizations use predictive maintenance to support critical infrastructure.

Get started with AI now

AI performance is driven by a combination of compute, software optimizations and processor memory bandwidth. Wherever you are in your AI journey, Intel’s broad portfolio of hardware and software offers a rich toolkit for building a cost-effective deployment architecture for AI workloads.