Brain Tumor Segmentation Reference Implementation

Published: 06/08/2021  

Last Updated: 10/29/2021

Overview 

Use the Intel® Distribution of OpenVINO™ toolkit to detect brain tumors in MRI images. A pretrained model based on the U-Net topology, trained on an open source dataset, predicts tumor segmentations, and the application compares their accuracy against the provided ground truth using the Sørensen–Dice coefficient.

Select Configure & Download to download the reference implementation and the software listed below.  

Configure & Download 

  • Time to Complete: Approximately 30-40 minutes
  • Programming Language: Python*
  • Available Software: Intel® Distribution of OpenVINO™ toolkit 2021.4

Recommended Hardware

The hardware below is recommended for use with this reference implementation. See the Recommended Hardware page for other suggestions. 


Target System Requirements 

  • 6th to 11th Generation Intel® Core™ processors with Intel® Iris® Plus Graphics or Intel® HD Graphics
  • Disk space needed: 20 GB
  • RAM usage by application: 1.5 – 2 GB
  • Ubuntu* 20.04 LTS

NOTE: We recommend that you use a 4.14+ Linux* kernel with this software. Run the following command to determine your kernel version: uname -a


How It Works 

Using a combination of different computer vision techniques, this reference implementation performs brain tumor image segmentation on MRI scans, compares the accuracy with ground truth using the Sørensen–Dice coefficient, and plots the performance comparison between the TensorFlow* model and the OpenVINO™ optimized model. 

  • Train the model using an open source dataset from the Medical Segmentation Decathlon for segmenting brain tumors in MRI scans. More Information.
  • Optimize the model using OpenVINO™ optimizer.
  • Use the optimized model with the inference engine to predict results and compare accuracy with provided ground truth results using the Sørensen–Dice coefficient. 
  • Plot the performance comparison between the TensorFlow* model and the OpenVINO™ optimized model.
  • Store the output images from the segmented brain tumor locally.
Architecture Diagram
Figure 1. Architecture Diagram

 

The Dice coefficient (the standard metric for the BraTS dataset used in the application) for this model is about 0.82-0.88. Menze et al. reported that expert neuroradiologists manually segmented these tumors with a cross-rater Dice score of 0.75-0.85, meaning that the model’s predictions are on par with those of expert physicians. The MRI brain scans below highlight brain tumor matter segmented using deep learning. 
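The Sørensen–Dice coefficient itself is simple to compute. The sketch below (plain Python, not the application's implementation) shows the metric over two flattened binary masks:

```python
def dice_coefficient(pred, truth):
    """Sørensen–Dice coefficient for two binary masks of equal length.

    pred, truth: sequences of 0/1 pixel labels (e.g. flattened masks).
    Returns 1.0 for two empty masks by convention.
    """
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0
```

A perfect overlap gives 1.0 and no overlap gives 0.0, which puts the 0.82-0.88 range reported above close to the expert cross-rater range.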

MRI Scan and Tumor Indication
Figure 2. MRI Scan and Tumor Indication

 

The U-Net architecture is used to create deep learning models for segmenting nerves in ultrasound images, lungs in CT scans, and even interference in radio telescopes. 

U-Net is designed like an auto-encoder. It has an encoding path (“contracting”) paired with a decoding path (“expanding”) which gives it the “U” shape. However, in contrast to the auto-encoder, U-Net predicts a pixelwise segmentation map of the input image rather than classifying the input image as a whole. For each pixel in the original image, it asks the question: “To which class does this pixel belong?” This flexibility allows U-Net to predict different parts of the tumor simultaneously.
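The "to which class does this pixel belong" question amounts to a per-pixel argmax over the model's class score maps. A minimal illustration (plain Python, assuming H×W×C scores as nested lists; not code from the application):

```python
def pixelwise_segmentation(score_maps):
    # score_maps[row][col] is a list of per-class scores for one pixel;
    # the predicted label for that pixel is the index of the highest score
    return [
        [max(range(len(pixel)), key=pixel.__getitem__) for pixel in row]
        for row in score_maps
    ]
```

Applied to a 2x2 image with two classes, each pixel independently receives the label whose score map is highest at that location, producing the segmentation mask.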

U-Net Architecture Diagram
Figure 3. U-Net Architecture Diagram

Get Started  


Prerequisites

  1. Ensure that the InfluxDB* and Grafana* services are not running on your system. Ports 8086 and 3000 must be free; the application will not run if these ports are occupied.
  2. Ensure that at least 20 GB of storage and 2 GB of free RAM are available on the system.
  3. Ensure that the system has internet access. (Set proxies if running on a proxy network.)
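As a quick way to verify the port prerequisite, a short Python* sketch (not part of the reference implementation) can check whether the ports are free:

```python
import socket

def port_free(port, host="127.0.0.1"):
    # connect_ex returns 0 when something accepts the connection,
    # i.e. the port is already in use
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        return s.connect_ex((host, port)) != 0

for port, service in [(3000, "Grafana"), (8086, "InfluxDB")]:
    state = "free" if port_free(port) else f"in use (is {service} running?)"
    print(f"port {port}: {state}")
```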

 

Which model to use 

The application uses a pretrained model (saved_model_frozen.pb), which is provided in the /resources directory on GitHub.

This model is trained using the Task01_BrainTumour.tar dataset from the Medical Segmentation Decathlon, made available under the CC BY-SA 4.0 license. Instructions on how to train your own model can be found on GitHub.  

 

What input to use 

The application uses MRI scans from Task01_BrainTumour.h5 (a random subset of 8 images from the BraTS dataset), which is provided in the /resources directory on GitHub.

Note: You can also provide your own patient data file (in nii.gz format) from the BraTS dataset and run the application to see the inference. See Providing User Specific Data for Inference.

 

Install the Reference Implementation 

 Select Configure & Download to download the reference implementation and then follow the steps below to install it.

Configure & Download 

1. Open a new terminal, go to the downloaded folder and unzip the RI package. 

unzip brain_tumor_segmentation.zip

2. Go to the brain_tumor_segmentation directory. 

cd brain_tumor_segmentation

3. Change permission of the executable edgesoftware file. 

chmod 755 edgesoftware

4. Run the command below to install the reference implementation: 

./edgesoftware install

5. During the installation, you will be prompted for the Product Key. The Product Key is contained in the email you received from Intel confirming your download. 

Screenshot of Installation Start
Figure 4. Installation Start
6. When the installation is complete, you see the message Installation of package complete and the installation status for each module. 

Screenshot of Installation Success
Figure 5. Installation Success

 


Run the Application

1. Enter the following command to see the application containers successfully created and running: 

docker ps

You will see output similar to:

Screenshot of List of Containers
Figure 6. List of Containers

 

2. Open the Grafana dashboard by visiting http://localhost:3000/ in your browser. Log in with admin as both the username and password.

Screenshot of Grafana Login Screen
Figure 7. Grafana Login Screen

 

3. Go to the Dashboards menu and click on Manage. The list of available dashboards is shown. In this case, only the Brain Tumor Segmentation dashboard is available.

Screenshot of Manage Dashboards in Grafana
Figure 8. Manage Dashboards in Grafana

 

Screenshot of List of Available Dashboards
Figure 9. List of Available Dashboards

 

4. Open the Brain Tumor Segmentation dashboard to view the inferenced images and the comparison metrics between the OpenVINO™ toolkit and TensorFlow inference.

Ensure you select the refresh every 5s option to view the live dashboard as shown in the image below.

Screenshot of Brain Tumor Segmentation Dashboard
Figure 10. Brain Tumor Segmentation Dashboard

 

The Grafana dashboard shows the following noteworthy items:

  • The image in the dashboard shows the MRI scan (left), the ground truth marked by medical experts, also known as the mask (center), and the prediction by the U-Net model (right). The panel also shows the per-image inference time for the OpenVINO™ toolkit at the top and the Dice score, which estimates the accuracy of the inference against the ground truth.
  • The circular gauges on the top right compare the inference FPS rates of the OpenVINO™ toolkit and TensorFlow.
  • The circular gauges on the mid right compare the average inference time (in ms) of the OpenVINO™ toolkit and TensorFlow for the complete set of images (one iteration).
  • The time-series chart (bottom) shows a live comparison of the time taken for inference (in ms) between the OpenVINO™ toolkit and TensorFlow.

5. To view the inference images, go to the installation directory and then to the location shown below: 

<installation_directory>/brain_tumor_segmentation/Brain_Tumor_Segmentation_2021.4/Brain_Tumor_Segmentation/brain-tumor-segmentation/resources/output/inference_results/

 

Screenshot of Saved Images Folder
Figure 11. Saved Images Folder

Configure the Reference Implementation

Open the config.json file located in the resources folder:

<installation_directory>/brain_tumor_segmentation/Brain_Tumor_Segmentation_2021.4/Brain_Tumor_Segmentation/brain-tumor-segmentation/resources/

 

Screenshot of Config File Folder Location
Figure 12. Config File Folder Location

 

The file contains the parameters that control the operation of the application:

{
    "defaultdata": 1,
    "imagedatafile": "../resources/nii_images/MRIScans/BRATS_006.nii.gz",
    "maskdatafile": "../resources/nii_images/Masks/BRATS_006.nii.gz",
    "n_iter": 300,
    "maxqueuesize": 25,
    "targetdevice": "cpu",
    "saveimage": 1
}

 

Description of parameters:

  • defaultdata: Toggles the application between default data (1) and user specific data from the BraTS dataset (0) for inference. 
  • imagedatafile: The path to the MRI scan image data file (in nii.gz format) when user specific data is provided (defaultdata = 0).
  • maskdatafile: The path to the mask (ground truth) image data file (in nii.gz format) when user specific data is provided (defaultdata = 0).
  • n_iter: The number of iterations the application runs over the set of images to get a long-running estimate of inference performance for the OpenVINO™ toolkit and TensorFlow.
  • maxqueuesize: Since a Flask server is used to publish images into Grafana, a queue stores the images that the server pushes to the port for viewing. This parameter limits the number of images in the queue to reduce RAM usage (the queue uses volatile memory). In short, the lower the number, the lower the RAM usage; however, you may lose a few images because the program discards them when the queue is full.
  • targetdevice: Selects the target device (CPU/GPU/VPU, etc.) on which the inference must run.
  • saveimage: When toggled OFF (0), disables saving images to local storage. When toggled ON (1), the application stores the inference images and overwrites existing ones when it is re-run. 
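A small sketch of how such a config could be loaded and sanity-checked with the standard library (a hypothetical helper, not code from the application):

```python
import json

# Keys documented above for config.json
REQUIRED_KEYS = {"defaultdata", "imagedatafile", "maskdatafile",
                 "n_iter", "maxqueuesize", "targetdevice", "saveimage"}

def load_config(path):
    with open(path) as f:
        cfg = json.load(f)
    missing = REQUIRED_KEYS - cfg.keys()
    if missing:
        raise ValueError(f"config.json missing keys: {sorted(missing)}")
    for toggle in ("defaultdata", "saveimage"):
        if cfg[toggle] not in (0, 1):
            raise ValueError(f"{toggle} must be 0 or 1")
    if cfg["n_iter"] < 1 or cfg["maxqueuesize"] < 1:
        raise ValueError("n_iter and maxqueuesize must be positive")
    return cfg
```

Checks like these surface a bad parameter at startup rather than partway through an inference run.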

Providing User Specific Data for Inference

  1. Ensure that defaultdata is set to 0.
  2. Download the BraTS dataset.
  3. Extract the dataset and select any patient file number from the Imagestr and labelstr folders. The Imagestr folder contains the MRI scans, where each file (nii.gz format) holds 155 slice-by-slice brain MRI images of a single patient. The labelstr folder contains the corresponding ground truth files for the 155 slices. 
  4. Ensure that the same patient file is selected from both Imagestr and labelstr (BRATS_xxx.nii.gz).
  5. Create a folder inside the resources folder to contain the user specific images. The example below uses user_image.
  6. Create two separate folders to contain the MRI scan and the mask file. (Any name is OK.) The example below uses mri_scan and mask.
  7. Enter the folder location and file name for the MRI scan and mask in the config file as shown in the image below:
     
    Screenshot of User Specific Images Configuration
    Figure 13. User Specific Images Configuration

     
  8. Save the config file and restart the Docker container by entering the following command:
    docker restart <container_id or name>
    The <container_id or name> value can be obtained from the docker ps command. 

The patient file contains 155 images, so each iteration of the application takes more time to compute. After the first iteration, the inference images are stored in the inference_results folder (if saveimage = 1).
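Under the example folder names used in the steps above (user_image, mri_scan, mask), the setup could look like the following; the copy commands are left commented out because the patient file name depends on which number you pick:

```shell
# Run from inside the resources folder
mkdir -p user_image/mri_scan user_image/mask
# Copy the SAME patient number from both dataset folders, e.g.:
# cp /path/to/Task01_BrainTumour/Imagestr/BRATS_006.nii.gz user_image/mri_scan/
# cp /path/to/Task01_BrainTumour/labelstr/BRATS_006.nii.gz user_image/mask/
```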

 

Screenshot of Inference Images Folder Post User Specific Data
Figure 14. Inference Images Folder Post User Specific Data

Summary and Next Steps 

With this application you successfully used the Intel® Distribution of OpenVINO™ Toolkit to create a solution that detects brain tumors in MRI images.

As a next step, try going through the application code available in the app folder and adding post-inference code that isolates the tumor and generates a 3D model of it, since this application runs on the 155 slice-by-slice 2D images of a single brain.

 


Learn More 

To continue learning, see the following guides and software resources:  


Troubleshooting

Known Issues


Installation Failure

The root cause can be analyzed by looking at the installation logs in the /var/log/esb-cli/Brain_Tumor_Segmentation_2021.4/Brain_Tumor_Segmentation/install.log file. 

Unable to Run apt Install While Building Images

This issue occurs when a proxy is configured for the network in use.
Solution: Set the proxy inside the containers as well while they are built.

Add the two lines below near the top of the app Dockerfile (<installation_directory>/brain_tumor_segmentation/Brain_Tumor_Segmentation_2021.4/Brain_Tumor_Segmentation/brain-tumor-segmentation/app/) and the Grafana Dockerfile (<installation_directory>/brain_tumor_segmentation/Brain_Tumor_Segmentation_2021.4/Brain_Tumor_Segmentation/brain-tumor-segmentation/utils/grafana/), after the license information.

ENV HTTP_PROXY <your http proxy>
ENV HTTPS_PROXY <your https proxy>

Address Already in Use  

If running the application results in Error starting userland proxy: listen tcp4 0.0.0.0:3000: bind: address already in use, use the following command to check and force stop the process: 

sudo kill $(pgrep grafana) 


Note: If the issue persists, it may be possible that Grafana is running in a Docker container. In that case, stop the container using:

sudo docker stop $(sudo docker ps -q --filter expose=3000)

 

If running the application results in Error starting userland proxy: listen tcp4 0.0.0.0:8086: bind: address already in use, use the following command to check and force stop the process: 

sudo kill $(pgrep influxdb)

 

Note: If the issue persists, it may be possible that InfluxDB is running in a Docker container. In that case, stop the container using: 

sudo docker stop $(sudo docker ps -q --filter expose=8086)

 

Grafana Dashboard Not Showing Image Sequence

This might happen due to a compatibility issue between the internal Flask server and the Ajax panel. Open http://localhost:5000/ in another browser tab to view the image sequence.

File Not Found

This might happen due to an incorrect file location for the image and mask data files in the config file. Recheck the folder path and ensure the location starts with ../resources/, since the code runs inside the app folder.
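A quick sanity check along these lines (a hypothetical helper, not part of the application) can catch both problems before restarting the container:

```python
import os

def config_path_problems(cfg):
    # Returns a list of human-readable problems with the image/mask paths
    problems = []
    for key in ("imagedatafile", "maskdatafile"):
        path = cfg.get(key, "")
        if not path.startswith("../resources/"):
            problems.append(f"{key} should start with ../resources/ (got: {path})")
        elif not os.path.exists(path):
            problems.append(f"{key} not found on disk: {path}")
    return problems
```

Note that the existence check only works when run from the same working directory as the application (the app folder), since the paths are relative.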

Config Parameter Errors

Config file parameters have very specific input requirements; violations are clearly explained as error messages in the logs.

Support Forum 

If you're unable to resolve your issues, contact the Support Forum.

 

Product and Performance Information

1

Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.