This reference kit uses the healthcare payers industry as an example of how it can be applied. Payers in this industry employ hundreds of clinicians to review millions of claim-related documents, extracting relevant data points in order to draw a conclusion. Clinicians remain in high demand, and that demand will continue to grow as the population ages; at the same time, revenue lost during the recent pandemic pressures organizations to cut costs without compromising the quality of patient care. Meanwhile, the volume of information is growing rapidly, often in nonstandard formats, making it difficult for payers to manage.
According to Gartner†, each claim requiring human intervention costs about $4 to process, while an auto-adjudicated claim costs about $1. Manual rework costs about $25 per claim. With more than 3 billion claims processed in the US each year, automation-assisted claims processing can be a valuable cost-saving measure.
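The figures above imply substantial savings even from modest gains in automation. The back-of-envelope calculation below is purely illustrative, using only the per-claim costs cited here:

```python
# Illustrative savings from shifting manual claims to auto-adjudication,
# using the per-claim costs cited above.
MANUAL_COST = 4.00   # $ per claim requiring human intervention
AUTO_COST = 1.00     # $ per auto-adjudicated claim

def annual_savings(claims_shifted: int) -> float:
    """Savings when `claims_shifted` claims move from manual to auto processing."""
    return claims_shifted * (MANUAL_COST - AUTO_COST)

# Shifting just 1% of 3 billion US claims saves $90 million per year.
print(f"${annual_savings(30_000_000):,.0f}")  # $90,000,000
```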
To read and classify document text, this kit uses named entity recognition (NER), a subtask of information extraction that locates named entities in text and classifies them into predefined categories. These categories could include the names of persons, organizations, and locations, as well as expressions of time, quantities, monetary values, and percentages.
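NER datasets commonly mark entities word by word with the B-/I-/O scheme: B-xxx begins an entity of category xxx, I-xxx continues it, and O marks words outside any entity. A minimal sketch of how such per-word tags are grouped back into entity spans (the tag names and example sentence here are hypothetical, not taken from the kit's dataset):

```python
def extract_entities(tokens, tags):
    """Group BIO-tagged tokens into (entity_text, category) spans.

    B-xxx begins an entity, I-xxx continues it, O is outside any entity.
    """
    entities, current, category = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:  # close the previous entity before starting a new one
                entities.append((" ".join(current), category))
            current, category = [token], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(token)
        else:
            if current:
                entities.append((" ".join(current), category))
            current, category = [], None
    if current:  # flush an entity that runs to the end of the sentence
        entities.append((" ".join(current), category))
    return entities

# Hypothetical sentence with person and organization tags:
tokens = ["John", "Smith", "filed", "a", "claim", "with", "Acme", "Health"]
tags   = ["B-per", "I-per", "O", "O", "O", "O", "B-org", "I-org"]
print(extract_entities(tokens, tags))
# [('John Smith', 'per'), ('Acme Health', 'org')]
```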
What Is Included
In collaboration with Accenture*, Intel developed an AI reference kit that can be used in an application to help payers automate the review of claims. Each reference kit includes:
- Training data
- An open source, trained model
- User guides
- oneAPI components
At a Glance
- Industry: Healthcare, cross-industry back-office document automation
- Task: Information extraction to locate and classify named entities in text into nine predefined categories or tags.
- Real data consisting of 48,000 sentences with corresponding parts of speech and labeled tags in .csv format
- 3 features
- 90:10 split (training:validation)
- Type of Learning: Natural language processing (NLP) supervised deep learning
- Model: BERT-based named entity recognition (NER) model
- Output: The predicted named-entity tag for each word in a sentence, assigned to one of the nine predefined categories.
- Intel® AI Portfolio:
- Intel® AI Analytics Toolkit
- Intel® Optimization for TensorFlow*
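The 90:10 training-to-validation split described above can be sketched in a few lines. One detail worth noting: splitting at the sentence level keeps all words of a sentence in the same partition. The function below is an illustrative sketch, not the kit's actual preprocessing code:

```python
import random

def train_val_split(sentence_ids, val_frac=0.10, seed=42):
    """Shuffle sentence IDs and return a 90:10 train/validation split.

    Splitting at the sentence level (rather than the word level) keeps
    every word of a sentence in the same partition.
    """
    ids = list(sentence_ids)
    random.Random(seed).shuffle(ids)  # fixed seed for a reproducible split
    cut = int(len(ids) * (1 - val_frac))
    return ids[:cut], ids[cut:]

# With the kit's 48,000 sentences, a 90:10 split yields 43,200 / 4,800:
train, val = train_val_split(range(48_000))
print(len(train), len(val))  # 43200 4800
```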
The experiment performed in this kit expedites the classification of key information from the text in the document into predefined categories.
While GPUs are the natural choice for deep learning and AI workloads that demand high throughput, they are also expensive and memory-intensive. This experiment instead applies model quantization using Intel® technology (Intel® QuickAssist Technology and Intel® Neural Compressor), which compresses the model so that inference runs on the CPU, maintaining accuracy while speeding up named-entity tagging.
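Intel Neural Compressor automates this compression; the plain-Python sketch below only illustrates the core idea behind int8 post-training quantization (it is not the Intel Neural Compressor API). Each float32 weight is mapped to an 8-bit integer plus a shared scale factor, cutting storage by 4x while keeping values recoverable to within half a quantization step:

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: float32 -> int8 values + scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]  # each value now fits in int8
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the int8 representation."""
    return [v * scale for v in q]

weights = [0.52, -1.27, 0.003, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# int8 storage is 4x smaller than float32, and every value is recovered
# to within half a quantization step (scale / 2):
assert all(abs(w - r) <= scale / 2 for w, r in zip(weights, restored))
```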
Optimized with Intel oneAPI for Better Performance
Performance was tested on a Microsoft Azure* Standard_D8_V5 instance with 3rd generation Intel® Xeon® processors to optimize the kit.
To build a named entity recognition solution using the BERT transfer learning approach at scale, data scientists need to train models using substantial datasets and run inference more frequently to accommodate the variety of text in the documents that could change with every claim. The data scientist needs to evaluate data classifications to tag and categorize data so that it can be better understood and analyzed. This task requires repetitive training and retraining, making the job tedious.
With over 90% faster inferencing on Intel®-optimized software, data scientists can accommodate new text constructs and contexts in documents, accelerate the machine learning pipeline, and achieve better accuracy.
For healthcare payers, the Documentation Automation reference kit can streamline a labor-intensive claims process, provide potentially significant cost savings, and deliver greater customer satisfaction through more accurate and timely responses. It also gives healthcare payers a way to scale their business without the constraint of hiring more clinicians for review.