AbbVie Accelerates Natural Language Processing

Intel® Artificial Intelligence Technologies improve translations for biopharmaceutical research.

Executive Summary

AbbVie is a research-based biopharmaceutical company serving more than 30 million patients in 175 countries. At that global scale, AbbVie partnered with Intel to optimize processes for its more than 47,000 employees. This white paper highlights two use cases that are important to AbbVie’s research.

The first is Abbelfish Machine Translation, AbbVie’s language translation service based on the Transformer NLP model, which leverages second-generation Intel® Xeon® Scalable processors and the Intel® Optimization for TensorFlow with the Intel® oneAPI Deep Neural Network Library (oneDNN). Using Intel Optimization for TensorFlow 1.15 with oneDNN, AbbVie achieved a 1.9x improvement in throughput for Abbelfish language translation compared to TensorFlow 1.15 without oneDNN.1

The second use case is AbbVie Search, a BERT-based NLP model. AbbVie Search scans research documents in response to scientific questions and returns relevant results, enabling the discovery of new treatments, pharmaceuticals, and manufacturing methods for patients. Using the Intel® Distribution of OpenVINO™ toolkit, AbbVie Search was accelerated by 5.3x over unoptimized TensorFlow 1.15 on the same second-generation Intel Xeon Scalable processor hardware.1

AbbVie’s NLP deployments demonstrate how CPUs can be highly effective for AI inference in a large organization without the need for additional hardware acceleration.
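The oneDNN-accelerated path described above comes from using Intel's optimized TensorFlow build rather than stock TensorFlow. The sketch below shows one common way to set this up for TensorFlow 1.15; the package version, verification call, and threading values are illustrative assumptions, not taken from the white paper.

```shell
# Sketch only — versions and tuning values are illustrative.

# Intel Optimization for TensorFlow ships as a separate pip package,
# built against oneDNN (formerly MKL-DNN/DNNL):
pip install intel-tensorflow==1.15.2

# Check that the oneDNN-enabled build is active (TF 1.x exposes
# an IsMklEnabled() helper in test_util):
python -c "from tensorflow.python.framework import test_util; print(test_util.IsMklEnabled())"

# Threading knobs commonly tuned for Xeon inference; set
# OMP_NUM_THREADS to the number of physical cores per socket:
export OMP_NUM_THREADS=24
export KMP_AFFINITY=granularity=fine,compact,1,0
export KMP_BLOCKTIME=1
```

With no code changes to the model itself, oneDNN primitives then replace the default CPU kernels for operations such as matrix multiplication and layer normalization.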

Read the full white paper “Accelerating Natural Language Processing Inference Models using Processor Optimized Capabilities”.

Explore Related Products and Solutions

Intel® Xeon® Scalable Processors

Drive actionable insight, count on hardware-based security, and deploy dynamic service delivery with Intel® Xeon® Scalable processors.

Learn more

Intel® oneAPI Toolkit

The Intel® oneAPI Deep Neural Network Library helps developers improve productivity and enhance the performance of their deep learning frameworks.

Learn more

OpenVINO™ Toolkit

Build end-to-end computer vision solutions quickly and consistently on Intel® architecture and our deep learning framework.

Learn more
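The AbbVie Search speedup reported above comes from running the trained TensorFlow model through the OpenVINO toolkit. A typical workflow with a circa-2020 OpenVINO release is sketched below; the model file names and input shapes are hypothetical placeholders, not AbbVie's actual artifacts.

```shell
# Sketch only — file names and shapes are illustrative.

# 1. Convert a frozen TensorFlow graph to OpenVINO Intermediate
#    Representation (IR) with the Model Optimizer:
python mo_tf.py \
    --input_model bert_model.pb \
    --input input_ids,input_mask,segment_ids \
    --input_shape [1,128],[1,128],[1,128] \
    --data_type FP32

# 2. Measure CPU inference throughput on the resulting IR with the
#    bundled benchmark_app:
benchmark_app -m bert_model.xml -d CPU -niter 100
```

The Model Optimizer produces a device-agnostic IR (.xml/.bin pair), and the Inference Engine then applies CPU-specific optimizations at load time.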

Product and Performance Information

1All tests were performed by Intel in June 2020. Intel® Xeon® Gold 6252N Processor @ 2.30 GHz, two sockets, 24 cores per socket, 394 GB DDR4 RAM, Intel Hyper-Threading Technology enabled, Intel® Turbo Boost enabled, NUMA enabled, BIOS version 4.1.12, Microcode 0x500002c, Ubuntu 18.04.4 LTS, Linux Kernel 4.15.0-101-generic, Spectre/Meltdown mitigated, Software: Intel® Optimization for TensorFlow version 1.15 with DNNL and Intel® Distribution of OpenVINO™ toolkit. For more complete information about performance and benchmark results, visit https://www.intel.com/benchmarks.