Intel + AWS = Compatibility from On-Premises to Edge to Cloud

The 14-year collaboration1 between AWS and Intel has resulted in more than 220 Amazon EC2 instance types featuring Intel® Xeon® processors to better address evolving customer needs from on-premises to the edge to the cloud.

AWS customers want reliability and performance. Intel software engineers work with more than 1,000 independent software vendors (ISVs) to test and optimize libraries, runtimes, and much more at every layer of the software stack, which translates into robust and consistent application performance as customers migrate from on-premises to the cloud (see Figure 1).

Figure 1: The C5 family of instances provides between a 7 and 34 percent performance improvement for StreamSets Data Collector use cases, while costing only about 2 percent more per hour.2

For hybrid environments, enterprises benefit from the latest generation of Intel® technology deployed in VMware Cloud on AWS and AWS Outposts.

  • If you’ve been running Microsoft SQL Server or Oracle Database on-premises, you can expect predictable and consistent application behavior in VMware Cloud on AWS, which is also powered by Intel® architecture.
  • If you’re investing in AWS Outposts for your edge infrastructure, you can rely on the same validated Intel software optimizations at the edge as you do on-premises or in the AWS cloud.

In the high-performance computing (HPC) space, developers can spend weeks or months researching and managing myriad compute, storage, networking, and software configuration options. To address this challenge, Intel and AWS work together as well as with AWS partners:

  • AWS and Intel recently partnered to introduce AWS ParallelCluster as an Intel® Select Solution to make it easier for scientists, researchers, and IT administrators to deploy, manage, and automatically scale HPC clusters.
  • Working with AWS partner RONIN and its web-based solution, Australia’s CSIRO research agency was able to run 60x more parallel wildfire simulations, achieving 98% utilization of large Amazon EC2 C5-based clusters featuring 2nd Generation Intel® Xeon® Scalable processors.7
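Parameter sweeps like these wildfire runs are embarrassingly parallel: each simulation is independent, so throughput scales almost linearly with the cores available. A minimal illustrative Python sketch of the fan-out pattern on a single machine (the `simulate` function is a made-up stand-in, not CSIRO's actual model):

```python
from multiprocessing import Pool

def simulate(params):
    """Hypothetical stand-in for one wildfire simulation run."""
    wind_speed, fuel_load = params
    # A real run would integrate a fire-spread model over time; here we
    # just derive a toy "area burned" value from the input parameters.
    return wind_speed * fuel_load * 0.5

if __name__ == "__main__":
    # Parameter sweep: one independent simulation per (wind, fuel) pair.
    sweep = [(w, f) for w in range(1, 5) for f in range(1, 4)]
    with Pool() as pool:           # one worker process per available core
        results = pool.map(simulate, sweep)
    print(len(results))            # 12 independent runs completed
```

On an HPC cluster the same pattern is distributed across many nodes by a job scheduler such as Slurm (which AWS ParallelCluster provisions and manages) rather than a single machine's process pool.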

In the artificial intelligence (AI) and machine learning (ML) space, AWS, Intel and partners can bring together a broad portfolio of technologies, products and services to more quickly deliver powerful new solutions:

  • Amazon EC2 instances offer Intel® Advanced Vector Extensions 512 (Intel® AVX-512) and Intel® Deep Learning Boost (Intel® DL Boost), which can respectively double and more than quadruple AI/ML processing throughput.8 Popular machine learning toolkits like GluonCV have robust support for these accelerators.9 10
  • The oneAPI Deep Neural Network Library (oneDNN), formerly known as the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN), is upstreamed into popular open source deep learning (DL) frameworks such as Apache MXNet. Whether you choose a fully managed experience with Amazon SageMaker or use AWS Deep Learning AMIs to build your own custom workflows, you reap the benefits of these Intel optimizations.
  • One solution built on the SageMaker and MXNet optimizations is AWS DeepRacer. Intel and Amazon collaborated to make reinforcement learning accessible to a broader developer audience through the highly engaging AWS DeepRacer program: a cloud-based 3D racing simulator in which developers join a global racing league to control fully autonomous 1/18th-scale race cars.11
  • AWS IoT Greengrass and Amazon SageMaker, in conjunction with Intel® FPGAs, can accelerate machine learning at the edge. Intel provides a walkthrough for training and converting a neural network model for image classification on Intel FPGAs.
  • The recently released Intel® OpenVINO™ AMI on AWS is widely used to accelerate computer vision for machine learning inference, but it can also speed up data analytics. Using the OpenVINO toolkit and Amazon EC2 C5 instances, oil and gas explorers saw a 3x improvement in seismic data interpretation.12
  • Intel-optimized PyTorch13, TensorFlow14, Caffe, and other popular AI/ML frameworks can be paired with Intel® Movidius™ VPUs for image processing, and with Amazon SageMaker, Amazon Personalize, and Amazon Forecast, to yield groundbreaking results when combined with the expertise of AWS Premier Partners and Intel hardware vendors.
  • One example of such a portfolio solution is Inawisdom and ADLINK’s computer vision solution for monitoring use of personal protective equipment (PPE).15

For customers who want to take the next step to tune and optimize their own software, Intel offers comprehensive code modernization tool suites including compilers, performance libraries and analysis tools:

  • Simplify the design, development, debugging, and tuning of parallel code with Intel® Parallel Studio XE for enterprise, HPC, and AI applications. The suite includes Intel® VTune™ Amplifier for identifying I/O bottlenecks and optimizing memory and storage use, and Intel® Inspector for finding missing or redundant cache flushes, logging errors, and more.
  • Perform systems and Internet of Things (IoT) development more easily using Intel® System Studio, a cross-platform tool that simplifies application development.

Talk to your account team about how Intel software can help improve your proofs of concept and production deployments, and explore more resources for AWS from Intel.
