Story at a Glance
- L&T Technology Services (LTTS) is a global engineering research and development leader driving business success for medical OEMs.
- The company has been leveraging deep learning to improve turnaround times for chest X-ray readings, in particular for its chest X-ray radiology suite, Chest-rAi*.
- Recently, we started using the OpenVINO™ toolkit, which has helped us achieve even better results.
- We have also leveraged AI Tools to enhance Chest-rAi's capabilities, making it 1.84 times faster on an Intel® Xeon® Platinum 8380 CPU at 2.30 GHz (formerly code named Ice Lake) compared to the same CPU without the AI Tools and OpenVINO toolkits.
- The biggest benefit is in the turnaround time, which has been reduced from 8 weeks to 2 weeks; this enables products to be marketed more quickly and gives radiologists more time to analyze and report on X-rays. Additionally, using WFX (Work from Anywhere) makes it easier for radiologists to report X-rays from any location.
LTTS’ Chest-rAi*
You may be wondering: What is Chest-rAi? It is a deep learning algorithm that detects and isolates abnormalities in chest X-ray imagery.
The beauty of Chest-rAi is that it's fast and accurate—its accuracy rate is 95%, significantly higher than most other methods currently available.
To further enhance the algorithm’s capabilities, we adopted the AI Tools and OpenVINO toolkits. In doing so, we were able to optimize the inference pipeline for Chest-rAi, making it 1.84 times faster on an Intel Xeon Platinum 8380 CPU compared to the same CPU without the Intel toolkits.
Chest-rAi and CARES: A New Paradigm
To address the growing demand for trained radiologists and minimize the risk of errors, LTTS’ Chest-rAi* solution leverages a novel, deep learning-based architecture: Convolution Attention-based sentence REconstruction and Scoring, or CARES. The solution has been found to be effective for the identification and localization of radiological findings in chest X-ray imagery.
Chest-rAi* generates a clinically meaningful description to aid radiologists, delivering:
- CNN (convolutional neural network) feature extraction,
- Ocular opacity and anatomical knowledge-infused attention-based graphs,
- Hierarchical two-level LSTM (long short-term memory)-based reconstruction module, and
- Pretrained transformers, along with clustering and scoring models, to help generate more grammatically and clinically accurate reports.
The solution is combined with a novel scoring mechanism—Radiological Finding Quality Index—to evaluate the exact radiological findings, localization, and size/severity for each such term present in the report generated.
CARES takes a sequential approach, leveraging an attention-based CNN for image feature extraction and multilabel classification, plus multiple visual attention-based LSTMs for generating the report. Figure 1 illustrates the scheme.

Figure 1: Convolution Attention-based sentence REconstruction Scoring (CARES)
The multitask learning framework for multilabel classification leverages a common CNN-based feature extraction model for all labels. The backbone is followed by an attention layer, a global pooling layer, and a fully connected (FC) layer that generates predictions for each label. Each label has its own FC layer, which enables the model to generate predictions for multiple labels simultaneously without one impacting another.
A separate attention layer is used for each tag. This ensures the model can attend to different regions of the image simultaneously and generate feature maps relevant to the given label, yielding a two-fold benefit:
- Improved classification accuracy, as the output of each attention layer is focused only on the part of the image relevant to that label, and
- A better convolutional feature map (encoded image features) for the decoder.
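To make the per-label attention idea concrete, here is a minimal NumPy sketch of the scheme described above: one shared feature map from the backbone, a separate attention layer per label, attention-weighted global pooling, and an independent FC head per label. The shapes, softmax attention, and sigmoid outputs are illustrative assumptions, not LTTS' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W, C = 7, 7, 64        # spatial feature map from the shared CNN backbone
num_labels = 3            # e.g. distinct radiological findings

features = rng.standard_normal((H * W, C))   # flattened spatial positions

# Per-label parameters: one attention vector and one FC head per label.
attn_w = rng.standard_normal((num_labels, C))
fc_w = rng.standard_normal((num_labels, C))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

logits = np.empty(num_labels)
for k in range(num_labels):
    # Attention weights over spatial positions, specific to label k.
    alpha = softmax(features @ attn_w[k])    # shape (H*W,)
    # Attention-weighted global pooling -> one feature vector per label.
    pooled = alpha @ features                # shape (C,)
    # Separate FC layer per label, so predictions don't interfere.
    logits[k] = fc_w[k] @ pooled

# Independent sigmoids: multilabel, not mutually exclusive classes.
probs = 1.0 / (1.0 + np.exp(-logits))
```

Because each label gets its own attention map, the pooled vector for, say, an opacity finding can focus on a different image region than the vector for a cardiomegaly finding, which is the two-fold benefit listed above.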
LTTS Optimizes Chest-rAi with AI Tools and OpenVINO Toolkit
You may be wondering how LTTS is able to optimize turnaround times for Chest-rAi with the OpenVINO toolkit and AI Tools. Let us explain.
First, LTTS leveraged a set of extensions from the AI Tools that made it easy to deploy PyTorch* models on Intel processors. By doing so, the LTTS team was able to create an optimized inference pipeline and reduce:
- Turnaround time for Chest-rAi from eight weeks to just two weeks
- Chest-rAi model (FP-32) size by approximately 39%
- Inference time by 46%
Most of the gains were observed in the DenseNet-121 and DenseNet-169 family architectures. The optimized inference pipeline for Chest-rAi* was 1.84 times faster on an Intel Xeon Platinum 8380 CPU.
Second, LTTS adopted the OpenVINO toolkit, a suite of tools that helps developers optimize the performance of deep learning workloads across a variety of devices. This toolkit is based on the Intel® Deep Learning SDK, which has been optimized for use with Intel processors. To strengthen Chest-rAi’s capabilities, LTTS chose the OpenVINO toolkit and Intel® Extension for PyTorch* as the software components to optimize the Chest-rAi models. These optimizations helped LTTS developers reduce the size of the models, driving scalability without adding or upgrading hardware.
The Results of LTTS' Optimization
The OpenVINO toolkit and AI Tools have helped streamline and improve the turnaround time for Chest-rAi diagnosis. Since a quicker response time can significantly affect a patient's overall diagnosis, this improvement has been vital and transformative for the solution's capabilities. As per LTTS:
“Not only are we faster, but we're also more accurate. Our Chest-rAi algorithm is now able to achieve an error rate of less than 1%. We're proud of the work we've done, and we think you'll be impressed with the results.”
How You Can Use the OpenVINO Toolkit and AI Tools to Optimize Your Own Models
Let's take a look at how LTTS used these kits to speed up the inference pipeline for Chest-rAi:
First, LTTS’ engineering team used OpenVINO toolkit to create a custom acceleration package that optimized the performance of the model as shared above.
Second, they used the AI Tools to convert the model into a format that could run on their infrastructure. This optimized the inference pipeline for Chest-rAi, making it 1.84 times faster (compared to models that were not Intel-optimized) on an Intel Xeon Platinum 8380 CPU.
The takeaway is that you can use either the OpenVINO toolkit or the AI Tools to speed up your own models; they can also be combined, as LTTS did. So, if you're looking for a performance boost, these are two tools you should definitely check out.
What's Next for LTTS and Chest-rAi?
LTTS continues to leverage the capabilities of AI Tools and OpenVINO toolkits to optimize turnaround times for Chest-rAi. We're seeing some really impressive results with this combination and are committed to making sure that our customers get the best possible experience.
LTTS is also looking into ways to further reduce the size of the Chest-rAi inference pipeline. This is something that has been a challenge for us, but we're confident that we can find a solution that meets our high standards.
And finally, LTTS is exploring new ways to improve our products and services with Intel AI tools, solutions, and frameworks.
Learn more at: