Data scientists and application developers working in artificial intelligence (AI) rely on their software. For over 50 years, we have worked with our customers to ensure that their applications run seamlessly on our hardware, taking into account every layer of the solution stack, including applications, orchestration, and hardware. We have also worked closely with the AI ecosystem for many years, optimizing and developing a broad range of software tools, frameworks, and libraries to satisfy the most demanding data science needs.
As data volumes grow to petabyte scale, our aim is to ensure that our customers and partner ecosystem can easily build, run, and scale AI workloads on their existing Intel® architecture, without significant investments of time and money in building new software stacks.
Below, we outline some of the key AI software offerings that Intel helps optimize and that can help your organization accelerate time to insight with machine learning and deep learning on Intel® hardware.
Accelerate Time-to-Insight Using Analytics and Machine Learning
The Intel® Distribution for Python is a ready-to-use, integrated package that delivers faster application performance on Intel® platforms. With it, you can accelerate compute-intensive applications, including numeric, scientific, data analytics, and machine learning workloads, that use NumPy, SciPy, scikit-learn, Pandas, XGBoost, and more. Included with the Intel® Distribution for Python is daal4py, which provides a native Python interface to the oneAPI Data Analytics Library (oneDAL).
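As a minimal sketch: the scikit-learn code below runs anywhere, and under the Intel Distribution for Python it runs unchanged. The commented-out daal4py patch lines are an assumption that daal4py is installed; they route supported estimators to oneDAL kernels.

```python
# With the Intel Distribution for Python, this script runs unchanged.
# daal4py can additionally patch scikit-learn so that supported
# estimators (KMeans among them) dispatch to oneDAL kernels:
#
#   from daal4py.sklearn import patch_sklearn
#   patch_sklearn()  # call before importing the estimators
#
import numpy as np
from sklearn.cluster import KMeans

# A synthetic machine-learning workload of the kind oneDAL accelerates.
X = np.random.default_rng(0).normal(size=(10_000, 8))
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_.shape)  # (4, 8)
```

The same pattern applies to other patched estimators: the scikit-learn API stays identical, and only the backend changes.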
The Intel oneAPI Data Analytics Library (oneDAL) reduces the time it takes to develop high-performance data science applications, including machine learning algorithms, enabling developers and data scientists to build apps that make predictions faster, and analyze larger data sets with available compute resources.
The Intel® Distribution of Modin is a lightweight DataFrame library that helps scale Pandas workflows seamlessly across multiple cores and nodes, and accelerates analytics with performant backends such as OmniSci. Modin's API is identical to Pandas, so it achieves drop-in acceleration with just a one-line code change for data preprocessing.
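As a sketch of that one-line change (stock Pandas is used here so the snippet runs anywhere; with Modin installed, only the import line differs):

```python
# The one-line change Modin requires: swap the Pandas import.
# import pandas as pd          # stock Pandas
# import modin.pandas as pd    # Modin drop-in replacement (if installed)
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})

# Typical preprocessing calls are unchanged; Modin parallelizes them
# across available cores behind the same API.
print(df.groupby("a")["b"].sum().tolist())  # [4, 5, 6]
```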
Implement Deep Learning Faster
Many of the more complex AI use cases today, such as computer vision or speech recognition, depend on deep learning algorithms. Deep learning frameworks and libraries offer data scientists, developers, and researchers the ability to use higher-level programming languages, such as Python or R, to train and deploy algorithms based on deep neural networks.
Intel has worked with the AI ecosystem to contribute code to the most popular deep learning frameworks so that they are optimized for Intel architecture. These include optimizations for:
TensorFlow: Based on Python, this deep learning framework is designed for flexible implementation and extensibility on modern deep neural networks. In collaboration with Google, TensorFlow has been directly optimized for Intel architecture to achieve high performance on Intel® Xeon® Scalable processors, providing up to a 3.75x inference speedup when using Intel® Deep Learning Boost.1 Using code from the TensorFlow main branch on GitHub, a developer need only build from source with the --config flag set to the Intel Math Kernel Library (--config=mkl) to take full advantage of the underlying Intel® CPU instructions.
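That build step can be sketched as follows (repository URL and Bazel target follow the public TensorFlow build-from-source docs; exact targets and prompts vary by TensorFlow version):

```shell
# Build TensorFlow from source with the MKL/oneDNN-optimized code paths.
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
./configure                      # answer the interactive prompts

# The key flag: --config=mkl enables the Intel-optimized kernels.
bazel build --config=mkl \
    //tensorflow/tools/pip_package:build_pip_package

# Package and install the resulting wheel.
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-*.whl
```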
PyTorch: This Python package provides one of the fastest implementations of dynamic neural networks. In collaboration with Facebook, this popular framework now implements Intel optimizations to provide superior performance on Intel architecture, most notably Intel Xeon Scalable processors. This is achieved by using the oneAPI Deep Neural Network Library (oneDNN) directly, and by making sure PyTorch is ready for the next generation of performance improvements in both software and hardware.
The Intel® Distribution of OpenVINO Toolkit allows developers to implement computer vision and other deep learning inference solutions quickly and efficiently across multiple hardware platforms. Using a common API, OpenVINO supports deep learning inference across Intel® architectures and AI accelerators, including CPUs, integrated graphics, VPUs, and FPGAs. It enables complex, real-time computer vision, audio, and language recognition workloads to run from edge to cloud.
Seamlessly Scale from Big Data Analytics to AI
Each stage of the AI pipeline (data processing, model training, inference) has different compute requirements, but it’s generally not efficient or economical to give each stage its own platform to run on. Analytics Zoo is an open source platform designed to consolidate all AI storage and pipeline workloads seamlessly into one environment.
With Analytics Zoo, you can seamlessly scale your AI models to big data clusters with thousands of nodes for distributed training or inference. Built on top of the open source platform of Apache Spark, this unified analytics and AI platform has an extensible architecture to support more libraries and frameworks such as TensorFlow, Keras, and BigDL. This helps improve productivity and efficiency while speeding time to results.
A Shortcut to Optimized AI
Of course, building your own AI applications from scratch may not be the right choice for every organization. You may choose to implement a ready-made solution, working with a specialist system integrator (SI), independent software vendor (ISV), and/or original equipment manufacturer (OEM). In this case, a good place to start is the Intel® AI Builders ecosystem, which brings together expert solution providers from a wide range of industries and geographies. For example, a manufacturing organization looking to add intelligence to its existing equipment may use Asquared IoT’s Equilips 4.0, an embedded AI solution that works with virtually any manufacturing infrastructure, whether legacy or new, and eliminates the need for any network communication or external computing infrastructure. It is based on innovative sensing technologies that use non-touch, non-intrusive methods such as audio and visual analytics, so it can be easily and cost-effectively retrofitted to any machine. The result is smart infrastructure that helps generate the data for increased visibility into production lines, proactive maintenance, improved decision-making, and operational efficiency.
Explore the various options for achieving your AI goals in the article ‘Four Paths to AI’, or learn more about the technologies that underpin Intel architecture’s AI capabilities: Intel Xeon Scalable processors and Intel Deep Learning Boost.
1 3.75x improvement with AI Inferencing Intel Select Solution. The solution was tested with KPI Targets: OpenVINO/ ResNet50 on INT8 on 02-26-2019 with the following hardware and software configuration:
Base configuration: 1 Node, 2x Intel® Xeon® Gold 6248; 1x Intel® Server Board S2600WFT; Total Memory 192 GB, 12 slots/16 GB/2666 MT/s DDR4 RDIMM; HyperThreading: Enable; Turbo: Enable; Storage (boot): Intel® SSD DC P4101; Storage (capacity): At least 2 TB Intel® SSD DC P4610 PCIe NVMe; OS/Software: CentOS Linux release 7.6.1810 (Core) with Kernel 3.10.0-957.el7.x86_64; Framework version: OpenVINO 2018 R5 445; Dataset: sample image from benchmark tool; Model topology: ResNet 50 v1; Batch Size: 4; nireq: 20. The solution was tested with KPI Targets: TensorFlow/ResNet50 on INT8 on 03-07-2019 with the following hardware and software configuration:
Base configuration: 1 Node, 2x Intel® Xeon® Gold 6248; 1x Intel® Server Board S2600WFT; Total Memory 192 GB, 12 slots/16 GB/2666 MT/s DDR4 RDIMM; HyperThreading: Enable; Turbo: Enable; Storage (boot): Intel® SSD DC P4101; Storage (capacity): At least 2 TB Intel® SSD DC P4610 PCIe NVMe; OS/Software: CentOS Linux release 7.6.1810 (Core) with Kernel 3.10.0-957.el7.x86_64; Framework version: intelaipg/intel-optimized-tensorflow:PR25765-devel-mkl; Dataset: Synthetic from benchmark tool; Model topology: ResNet 50 v1; Batch Size: 80
Notices and Disclaimers
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors.
Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit www.intel.com/benchmarks.
Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.
Your costs and results may vary.
Intel technologies may require enabled hardware, software or service activation.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.