Open Domain Question Answering System - A Deep Learning Based NLP Solution (White Paper)

Published: 10/06/2020

Natural language processing (NLP) systems such as chatbots and document classifiers built on deep neural networks have dramatically improved our ability to extract knowledge from the enormous amount of text stored on the web and on Wikipedia. These deep neural networks require parallel processing across multiple cores to achieve real-time latency. Intel® Math Kernel Library (MKL) is the fastest and most widely used math library for Intel®-based systems, speeding up numerical applications with highly optimized, vectorized, and threaded math functions. Intel® VTune™ Profiler collects key profiling data and presents a powerful interface that simplifies performance analysis. Intel® and Kakao Enterprise explored these Intel® technologies to improve the performance of the neural networks behind Kakao Enterprise's NLP API service. Kakao Enterprise measured a 1.14x speed-up in processing time after integrating Intel® MKL, which exploits the AVX-512 capabilities of 2nd Generation Intel® Xeon® Scalable processors.
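The threaded, vectorized math routines MKL provides (such as its BLAS GEMM kernels) are the kind of operation deep neural networks spend most of their time in. As a minimal sketch, NumPy built against MKL (for example, the Intel Distribution for Python) dispatches matrix multiplication to those routines; the matrix sizes below are illustrative, not from the paper:

```python
import numpy as np

# Dense matrix multiplication dominates the runtime of neural-network
# inference. When NumPy is linked against Intel MKL, the @ operator
# dispatches to MKL's threaded, AVX-512-vectorized SGEMM routine.
rng = np.random.default_rng(0)
a = rng.standard_normal((1024, 1024), dtype=np.float32)
b = rng.standard_normal((1024, 1024), dtype=np.float32)

c = a @ b  # dispatched to BLAS sgemm; MKL threads it across cores

# np.show_config() reports which BLAS/LAPACK the build is linked
# against; entries mentioning "mkl" indicate MKL is in use.
np.show_config()
```

Whether the multiplication actually runs on MKL depends on how NumPy was built; `np.show_config()` is the quickest way to check.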

Technologies Used:

  • Intel® MKL
  • Intel® VTune™ Profiler
  • AVX-512
  • Intel® Xeon® Scalable processors

Attachment: Kakao_Enterprise_NLP_whitepaper.pdf (3.5 MB)

Product and Performance Information

¹ Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.