AbbVie Accelerates Natural Language Processing

Intel® Artificial Intelligence Technologies improve translations for biopharmaceutical research.

At a glance:

  • AbbVie is a research-based biopharmaceutical company that serves more than 30 million patients in 175 countries.

  • Abbelfish Machine Translation, AbbVie’s language translation service based on the Transformer NLP model, uses 2nd Gen Intel® Xeon® Scalable processors, the Intel® Optimization for TensorFlow and the Intel® oneAPI Deep Neural Network Library (oneDNN).

Executive Summary

AbbVie is a research-based biopharmaceutical company that serves more than 30 million patients in 175 countries. Given its global scale, AbbVie partnered with Intel to optimize processes for its more than 47,000 employees. This white paper highlights two use cases central to AbbVie’s research.

The first is Abbelfish Machine Translation, AbbVie’s language translation service based on the Transformer NLP model, which runs on 2nd Gen Intel® Xeon® Scalable processors with the Intel® Optimization for TensorFlow and the Intel® oneAPI Deep Neural Network Library (oneDNN). Using Intel Optimization for TensorFlow 1.15 with oneDNN, AbbVie achieved a 1.9x throughput improvement for Abbelfish language translation compared to TensorFlow 1.15 without oneDNN.1

The second use case is AbbVie Search, a BERT-based NLP model that scans research documents in response to scientific questions and returns relevant results, enabling the discovery of new patient treatments, pharmaceuticals, and manufacturing methods. Using the Intel® Distribution of OpenVINO™ toolkit, AbbVie Search was accelerated by 5.3x over unoptimized TensorFlow 1.15 on the same 2nd Gen Intel Xeon Scalable processor hardware.1

AbbVie’s NLP AI deployments demonstrate how CPUs can be highly effective for edge AI inference in a large organization without the need for additional hardware acceleration.
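The oneDNN speedup described above comes from running TensorFlow with oneDNN primitives enabled and tuning CPU threading for the Xeon processor. A minimal sketch of that configuration step is shown below, using the environment variables documented for Intel-optimized TensorFlow; the thread count of 28 is an illustrative value for a 2nd Gen Xeon Scalable socket, not a setting taken from the white paper.

```python
import os

def onednn_env(intra_threads: int) -> dict:
    """Return environment settings commonly used to tune Intel-optimized
    TensorFlow with oneDNN on Xeon CPUs. The variable names are the
    documented oneDNN/OpenMP knobs; the thread count is illustrative and
    should match the physical core count of the target CPU."""
    return {
        # Enable oneDNN primitives in TensorFlow builds that support them.
        "TF_ENABLE_ONEDNN_OPTS": "1",
        # Number of OpenMP worker threads used inside each operation.
        "OMP_NUM_THREADS": str(intra_threads),
        # Bind threads to cores for cache locality.
        "KMP_AFFINITY": "granularity=fine,compact,1,0",
        # Milliseconds a thread spins after finishing work before sleeping.
        "KMP_BLOCKTIME": "1",
    }

# Apply before importing TensorFlow so the settings take effect.
os.environ.update(onednn_env(intra_threads=28))
```

These settings must be in place before TensorFlow is imported; after that, inference on the Transformer model picks up the oneDNN-accelerated kernels automatically.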

Read the full white paper “Accelerating Natural Language Processing Inference Models using Processor Optimized Capabilities”.