AbbVie Accelerates Natural Language Processing

Intel® Artificial Intelligence Technologies improve translations for biopharmaceutical research.

At a Glance:

  • AbbVie is a research-based biopharmaceutical company that serves more than 30 million patients in 175 countries.

  • Abbelfish Machine Translation, AbbVie’s language translation service based on the Transformer NLP model, uses 2nd Gen Intel® Xeon® Scalable processors, the Intel® Optimization for TensorFlow and the Intel® oneAPI Deep Neural Network Library (oneDNN).


Executive Summary

AbbVie is a research-based biopharmaceutical company that serves more than 30 million patients in 175 countries. Operating at that global scale, AbbVie partnered with Intel to optimize processes for its more than 47,000 employees. This white paper highlights two use cases that are important to AbbVie’s research.

The first is Abbelfish Machine Translation, AbbVie’s language translation service based on the Transformer NLP model. Abbelfish runs on 2nd Gen Intel® Xeon® Scalable processors with the Intel® Optimization for TensorFlow and the Intel® oneAPI Deep Neural Network Library (oneDNN). Using Intel Optimization for TensorFlow 1.15 with oneDNN, AbbVie achieved a 1.9x improvement in throughput for Abbelfish language translation compared to TensorFlow 1.15 without oneDNN.1

The second use case is AbbVie Search, a BERT-based NLP model. AbbVie Search scans research documents in response to scientific questions and returns relevant results, enabling the discovery of new treatments for patients, new pharmaceuticals, and new manufacturing methods. Using the Intel® Distribution of OpenVINO™ toolkit, AbbVie Search was accelerated by 5.3x over unoptimized TensorFlow 1.15 on the same 2nd Gen Intel Xeon Scalable processor hardware.1

AbbVie’s NLP deployments demonstrate how CPUs can be highly effective for edge AI inference in a large organization without the need for additional hardware acceleration.
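CPU inference results like these typically depend on pinning oneDNN’s OpenMP threads to physical cores. As a minimal sketch only, the environment variables below are the standard knobs documented for Intel Optimization for TensorFlow; the specific values shown are illustrative assumptions for a two-socket, 24-cores-per-socket Xeon system like the one in the benchmark footnote, not AbbVie’s published configuration.

```shell
# Illustrative oneDNN/OpenMP thread tuning (values are assumptions,
# not AbbVie's published settings).
export OMP_NUM_THREADS=24                        # one thread per physical core on a socket
export KMP_AFFINITY=granularity=fine,compact,1,0 # pin threads to cores, skip hyper-threads
export KMP_BLOCKTIME=1                           # short spin wait after parallel regions
echo "threads=$OMP_NUM_THREADS blocktime=$KMP_BLOCKTIME"
```

With TensorFlow 1.x, these are usually paired with matching `intra_op_parallelism_threads` and `inter_op_parallelism_threads` session settings so the framework’s thread pools agree with the OpenMP runtime.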

Read the full white paper “Accelerating Natural Language Processing Inference Models using Processor Optimized Capabilities”.



Product and Performance Information

1 All tests were performed by Intel in June 2020. Configuration: Intel® Xeon® Gold 6252N processor @ 2.30 GHz, two sockets, 24 cores per socket; 394 GB DDR4 RAM; Intel® Hyper-Threading Technology enabled; Intel® Turbo Boost Technology enabled; NUMA enabled; BIOS version 4.1.12; microcode 0x500002c; Ubuntu 18.04.4 LTS; Linux kernel 4.15.0-101-generic; Spectre/Meltdown mitigated. Software: Intel® Optimization for TensorFlow version 1.15 with oneDNN (DNNL) and the Intel® Distribution of OpenVINO™ toolkit. For more complete information about performance and benchmark results, visit https://www.intel.com/benchmarks.