Detecting Invasive Ductal Carcinoma with Convolutional Neural Networks

Published: 05/04/2018  

Last Updated: 02/20/2019


This article, Detecting Invasive Ductal Carcinoma with Convolutional Neural Networks, shows how existing deep learning technologies can be used to train artificial intelligence (AI) to detect invasive ductal carcinoma (IDC)1 (breast cancer) in unlabeled histology images. More specifically, I show how to train a convolutional neural network2 with TensorFlow*3 and transfer learning4, using a dataset of negative and positive histology images on the Intel® AI DevCloud. In addition to showing how artificial intelligence can be used to detect IDC, I also show how the Internet of Things (IoT) can be used in conjunction with AI to create automated systems for the medical industry.

Breast cancer is an ongoing concern and one of the most common forms of cancer in women. In 2018, an estimated 266,120 new diagnoses are expected in the United States alone. The use of artificial intelligence can drastically reduce the need for medical staff to examine mammography slides manually, saving not only time but also money, and ultimately lives. In this article I show how we can use Intel® technologies to create a deep learning neural network that is able to detect IDC.


Although I do not have medical training or direct experience in the medical industry, I trained a classifier that I originally built for facial recognition on an open IDC dataset (Predict IDC in Breast Cancer Histology Images), and it worked well. This inspired me to make it a permanent project and release the code and my experience, with the hope of attracting a developer/medical community that could help make the project better.

Introducing the IDC Classifier

To create the IDC classifier, I use the Intel® AI DevCloud5 to train the neural network, an Intel® Movidius™ product6 to carry out inference on the edge, an UP Squared*7 device to serve the trained model and make it accessible via an API, and an IoT-connected alarm system built on a Raspberry Pi*8 device. The alarm system demonstrates the potential of combining the IoT (via the IoT JumpWay*9) with AI to create intelligent, automated medical systems.

The project evolved from a computer vision project that I have been developing for a number of years named TASS-AI10. TASS-AI is an open source facial recognition project that has been implemented using a number of different techniques, frameworks, and software developer kits (SDKs).

Invasive Ductal Carcinoma

IDC is one of the most common forms of breast cancer. The cancer starts in the milk duct of the breast and invades the surrounding tissue. This form of cancer makes up around 80 percent of all breast cancer diagnoses, with more than 180,000 women a year in the United States alone being diagnosed with IDC, according to the American Cancer Society.

Convolutional Neural Networks

Inception v3 architecture diagram

Figure 1. Inception v3 architecture (Source).

Convolutional neural networks are a type of deep learning11 neural network. These networks are widely used in computer vision and have pushed its capabilities forward over the last few years, performing far better than older, more traditional neural networks; however, studies show12 that there are trade-offs between training time and accuracy.
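At their core, these networks repeatedly apply small learned filters across an image. The following minimal, pure-Python sketch of a single 2D convolution (valid padding, one filter) is purely illustrative of that operation; real frameworks such as TensorFlow implement it in heavily optimized, vectorized form.

```python
# Minimal 2D convolution (valid padding, single filter) -- the core
# operation a convolutional layer applies to an image. Illustrative
# sketch only; frameworks vectorize and batch this heavily.

def conv2d(image, kernel):
    """Slide `kernel` over `image` and return the feature map."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            acc = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    acc += image[y + ky][x + kx] * kernel[ky][kx]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge detector applied to a tiny image with a hard edge:
# the filter responds strongly only where the intensity changes.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [
    [1, -1],
    [1, -1],
]
feature_map = conv2d(image, kernel)
```

A trained network learns many such kernels per layer instead of hand-picking them, which is what makes these models so effective on histology images.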

Transfer Learning

Inception v3 model diagram

Figure 2. Inception V3 Transfer Learning (Source)

Transfer learning allows you to retrain the final layer of an existing model, resulting in a significant decrease in both training time and the size of the dataset required. One of the most famous models that can be used for transfer learning is the Inception V3 model created by Google*.13 This model was trained on more than a million images from the 1,000 classes of the original ImageNet dataset (see the list of classes here); the TensorFlow version has 1,001 classes due to an additional "background" class not used in the original ImageNet. Retraining only the final layer means that you keep the knowledge the model learned during its original training and apply it to your smaller dataset, resulting in highly accurate classifications without the need for extensive training and computational power. In one version of TASS, I retrained the Inception V3 model using transfer learning on a Raspberry Pi 3 device, which should give you some idea of the capabilities of transfer learning.
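Conceptually, the approach treats the pretrained network as a frozen feature extractor and trains only a new final classifier on its output features. The sketch below illustrates that idea under a big simplification: `frozen_extractor` is a hypothetical stand-in stub, whereas in this project the frozen extractor is Inception V3's bottleneck layer.

```python
import math

# Transfer-learning sketch: the pretrained network is frozen and used
# only as a feature extractor; we train a fresh logistic-regression
# "final layer" on its output features. The extractor below is a stub
# standing in for Inception V3's bottleneck layer.

def frozen_extractor(image):
    # Stand-in for the frozen bottleneck: summarize the image as
    # (mean, max) of its pixel values. Never updated during training.
    return [sum(image) / len(image), max(image)]

def train_final_layer(samples, labels, epochs=500, lr=0.5):
    """Train only the new final layer; the extractor is untouched."""
    feats = [frozen_extractor(s) for s in samples]
    w = [0.0] * len(feats[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(feats, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, image):
    x = frozen_extractor(image)
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Toy data: "positive" images are brighter overall.
negatives = [[0.1, 0.2, 0.1], [0.0, 0.3, 0.2]]
positives = [[0.9, 0.8, 0.7], [0.8, 0.9, 1.0]]
w, b = train_final_layer(negatives + positives, [0, 0, 1, 1])
```

Because only the tiny final layer is trained, this converges in seconds on modest hardware, which is why the same trick works even on a Raspberry Pi 3.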

Intel® DevCloud

The Intel® DevCloud is a platform for training machine learning and deep learning models. The platform is made up of a cluster of servers using Intel® Xeon® Scalable processors, and it provides a number of frameworks and tools including TensorFlow, Caffe*, Keras*, and Theano*, as well as the Intel® Distribution for Python*. The Intel® DevCloud is great for people getting started with training machine learning and deep learning models: graphics processing units (GPUs) can be quite expensive, while access to the DevCloud is free.

In this project I use the Intel® DevCloud to sort the data, train the model, and evaluate it. To accompany this article I created a full tutorial and provided all of the code you need to replicate the entire project; read the full tutorial and access the source code.

Intel® Movidius™ Neural Compute Stick

The Intel® Movidius™ Neural Compute Stick is a fairly new piece of hardware for accelerating the inference of computer vision models on low-powered edge devices. It is a USB appliance that can be plugged into devices such as the Raspberry Pi and UP Squared, offloading the processing from the host device onto the Intel Movidius brand chip and making classification considerably faster. Developers can train their models using their existing TensorFlow and Caffe scripts and, by installing the Intel Movidius Neural Compute Stick SDK on their development machine, compile a graph that is compatible with the Intel Movidius product. A lighter-weight API can then be installed on the low-powered device, allowing inference to be carried out on the Intel Movidius product.

Ready to Code

Hopefully, by now you are eager to get started with the technical walkthrough of creating your own computer vision program for classifying negative and positive breast cancer cells, so let's get to the nitty-gritty. Here I walk you through the steps for training and compiling the graph for the Intel Movidius product. For the full walkthrough, including the IoT-connected device, please follow the GitHub* repository. Before following the rest of this tutorial, please complete the steps in the repository for setting up your IoT JumpWay device, as this is required before the classification test.

Installing the Intel Movidius Neural Compute Stick SDK on Your Development Device

The first thing you need to do is to install the Intel Movidius Neural Compute Stick SDK on your development device. This is used to convert the trained model into a format that is compatible with the Intel Movidius product.

$ mkdir -p ~/workspace
$ cd ~/workspace
$ git clone https://github.com/movidius/ncsdk.git
$ cd ~/workspace/ncsdk
$ make install

Next, plug your Intel Movidius product into your device and issue the following commands:

$ cd ~/workspace/ncsdk
$ make examples

Installing the Intel Movidius Neural Compute Stick SDK on Your Inference Device

Next, you need to install the Intel Movidius Neural Compute Stick SDK on your Raspberry Pi 3/UP Squared device. This is used by the classifier to carry out inference on local images or images received via the API we will create. Make sure you have the Intel Movidius product plugged in.

$ mkdir -p ~/workspace
$ cd ~/workspace
$ git clone https://github.com/movidius/ncsdk.git
$ cd ~/workspace/ncsdk/api/src
$ make
$ sudo make install
$ cd ~/workspace
$ git clone https://github.com/movidius/ncappzoo.git
$ cd ncappzoo/apps/hello_ncs_py
$ python3 hello_ncs.py

Preparing Your Training Data

For this tutorial, I used a dataset from Kaggle* (Predict IDC in Breast Cancer Histology Images), but you are free to use any dataset you like. I have uploaded the collection of positive and negative images I used, which you will find in the model/train directory. Once you decide on your dataset, arrange your data into the model/train directory. Each subdirectory should be named with an integer; I used 0 and 1 to represent negative and positive. In my testing I used 4,400 positive and 4,400 negative examples, giving an overall training accuracy of 0.8596 (see Training Results below) and an average confidence of 0.96 on correct identifications. The data provided is 50 x 50 pixels; as Inception V3 was trained on 299 x 299 pixel images, the images are resized to 299 x 299. Ideally the images would already be that size, so you may want to try different datasets and see how your results vary.
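As a sketch of the sorting step, the helper below copies patches into model/train/0 and model/train/1. It assumes a filename convention like the Kaggle dataset's, where each patch name ends in class0 or class1; adapt the matching rule for other datasets.

```python
from pathlib import Path
import shutil

# Sort raw patches into <train_dir>/0 and <train_dir>/1 subdirectories.
# Assumes (as in the Kaggle IDC dataset) that each patch filename ends
# in "class0" or "class1" before the extension.

def sort_patches(source_dir, train_dir):
    """Copy labeled patches into per-class folders; return the count."""
    train = Path(train_dir)
    for label in ("0", "1"):
        (train / label).mkdir(parents=True, exist_ok=True)
    copied = 0
    for patch in Path(source_dir).rglob("*.png"):
        if patch.stem.endswith("class0"):
            shutil.copy(patch, train / "0" / patch.name)
        elif patch.stem.endswith("class1"):
            shutil.copy(patch, train / "1" / patch.name)
        else:
            continue  # skip files that don't match the naming rule
        copied += 1
    return copied
```

In the project itself, this arrangement step is handled for you by the DevCloud notebook; the sketch just makes the expected directory layout explicit.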

Fine-Tuning Your Parameters

You can fine-tune the settings of the network at any time by editing the classifier settings in the model/confs.json file.

    "InceptionThreshold": 0.54,

Time to Start Training

Now you are ready to upload the files and folders outlined below to the Intel® DevCloud.


Once uploaded, follow the instructions in DevCloudTrainer.ipynb; this notebook will help you sort your data, train your model, and evaluate it.

Training Results

Training Accuracy Tensorboard graph

Figure 3. Training Accuracy Tensorboard

Training Total Loss graph

Figure 4. Training Total Loss

Evaluate Your Model

Once you have completed your training on the Intel® DevCloud, complete the notebook by running the evaluation job.

Evaluation Results

INFO:tensorflow:Global Step 1: Streaming Accuracy: 0.0000 (2.03 sec/step)
INFO:tensorflow:Global Step 2: Streaming Accuracy: 0.8889 (0.59 sec/step)
INFO:tensorflow:Global Step 3: Streaming Accuracy: 0.8750 (0.67 sec/step)
INFO:tensorflow:Global Step 4: Streaming Accuracy: 0.8981 (0.65 sec/step)
INFO:tensorflow:Global Step 5: Streaming Accuracy: 0.8681 (0.76 sec/step)
INFO:tensorflow:Global Step 6: Streaming Accuracy: 0.8722 (0.64 sec/step)
INFO:tensorflow:Global Step 7: Streaming Accuracy: 0.8843 (0.64 sec/step)


INFO:tensorflow:Global Step 68: Streaming Accuracy: 0.8922 (0.81 sec/step)
INFO:tensorflow:Global Step 69: Streaming Accuracy: 0.8926 (0.70 sec/step)
INFO:tensorflow:Global Step 70: Streaming Accuracy: 0.8921 (0.63 sec/step)
INFO:tensorflow:Global Step 71: Streaming Accuracy: 0.8929 (0.84 sec/step)
INFO:tensorflow:Global Step 72: Streaming Accuracy: 0.8932 (0.75 sec/step)
INFO:tensorflow:Global Step 73: Streaming Accuracy: 0.8935 (0.61 sec/step)
INFO:tensorflow:Global Step 74: Streaming Accuracy: 0.8942 (0.67 sec/step)
INFO:tensorflow:Final Streaming Accuracy: 0.8941

So here we can see that the evaluation shows a final streaming accuracy of 0.8941.
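Note that the "streaming" accuracy reported at each step is cumulative over all examples evaluated so far, not the accuracy of that step's batch alone, which is why an unlucky first batch (Step 1 above) is quickly averaged away. A minimal sketch of the metric:

```python
# "Streaming" accuracy, as reported by TensorFlow's evaluation loop,
# is the cumulative accuracy over every example seen so far -- not
# the accuracy of the current batch alone.

class StreamingAccuracy:
    def __init__(self):
        self.correct = 0
        self.total = 0

    def update(self, batch_correct, batch_size):
        """Fold one batch into the running total; return the mean."""
        self.correct += batch_correct
        self.total += batch_size
        return self.correct / self.total

acc = StreamingAccuracy()
# e.g. an all-wrong first batch drags early steps down, then recovers:
first = acc.update(0, 9)
second = acc.update(8, 9)
```

The batch sizes and counts here are illustrative, not taken from the evaluation log above.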

evaluation accuracy graph

Figure 5. Evaluation Accuracy

evaluation total loss graph

Figure 6. Evaluation Total Loss

Download Your Model

When the training completes you need to download model/DevCloudIDC.pb and model/classes.txt to the model directory on your development machine. Ensure that the Intel Movidius product is set up and connected, and then run the following commands on your development machine:

$ cd ~/IoT-JumpWay-Intel-Examples/master/Intel-Movidius/IDC-Classification
$ ./

The contents of the shell script are as follows:

# IDC Classification Trainer
mvNCCompile model/DevCloudIDC.pb -in=input -on=InceptionV3/Predictions/Softmax -o igraph
python3.5 InceptionTest

The script does two things:

  1. Compiles the model for the Intel Movidius product
  2. Tests the model

Testing on Unknown Images

Once the shell script has finished, the testing program will start. In my example I had two classes, 0 and 1 (IDC negative and IDC positive); a classification of 0 means that the AI thinks the image is not IDC positive, and a classification of 1 means it is positive.

-- Loaded Test Image model/test/negative.png

-- STARTED: :  2018-04-24 14:14:26.780554

-- ENDED:  2018-04-24 14:14:28.691870
-- TIME: 1.9114031791687012

inception-v3 on NCS
0 0 0.9873
1 1 0.01238

-- Loaded Test Image model/test/positive.png

-- STARTED: :  2018-04-24 14:14:28.699254

-- ENDED:  2018-04-24 14:14:30.577683
-- TIME: 1.878432035446167

TASS Identified IDC with a confidence of 0.945

-- Published to Device Sensors Channel

inception-v3 on NCS
1 1 0.945
0 0 0.05542

-- ENDED:  2018-04-24 14:14:30.579247
-- TESTED:  2
-- TIME(secs): 3.984593152999878

So, on the development machine you should see results similar to the ones above. We can see in my results that the program has successfully classified both the negative and the positive. Now it is time to test this out on the edge.

Inference on the Edge

Now that the model is trained and tested, it is time to set up the server that will serve the API. For this, I have provided server and client scripts.

The following instructions will help you set up your server and test a positive and negative prediction:

  1. If you used the Predict IDC in Breast Cancer Histology Images dataset, you can use positive.png and negative.png as provided, since they are from that dataset; if not, you should choose a positive and a negative example from your testing set and replace these images.
  2. The server is currently set to start up on localhost. If you would like to change this, you need to edit line 281 of the server script and line 38 of the client script to match your desired host. Once you have things working, if you are going to leave this running and access it from the outside world, you should secure it with Let's Encrypt* or similar.
  3. Upload the following files and folders to the UP Squared or Raspberry Pi 3 device that you are going to use for the server.
  4. Open up a terminal, navigate to the folder containing the server script, and issue the following command. This starts the server, which waits to receive images for classification.

$ python3.5

  5. If you have followed all of the above steps, you can now start the client on your development machine with the following command:

$ python3.5

This sends a positive and a negative histology slide to the Raspberry Pi 3 or UP Squared device, which returns the predictions.

!! Welcome to IDC Classification Client, please wait while the program initiates !!

-- Running on Python 3.5.2 (default, Nov 23 2017, 16:37:01)
[GCC 5.4.0 20160609]

-- Imported Required Modules
-- IDC Classification Client Initiated

{'Response': 'OK', 'ResponseMessage': 'IDC Detected!', 'Results': 1}
{'Response': 'OK', 'ResponseMessage': 'IDC Not Detected!', 'Results': 0}
* Running on (Press CTRL+C to quit)

-- STARTED: :  2018-04-24 14:25:36.465183

-- Loading Sample
-- Loaded Sample
-- STARTED: :  2018-04-24 14:25:36.476371

-- ENDED:  2018-04-24 14:25:38.386121
-- TIME: 1.9097554683685303

TASS Identified IDC with a confidence of 0.945

-- Published: 2
-- Published to Device Warnings Channel

-- Published: 3
-- Published to Device Sensors Channel

inception-v3 on NCS
1 1 0.945
0 0 0.05542

-- ENDED:  2018-04-24 14:25:38.389217
-- TESTED:  1
-- TIME(secs): 1.9240257740020752 - - [24/Apr/2018 14:25:38] "POST /api/infer HTTP/1.1" 200 -

-- STARTED: :  2018-04-24 14:25:43.422319

-- Loading Sample
-- Loaded Sample
-- STARTED: :  2018-04-24 14:25:43.432647

-- ENDED:  2018-04-24 14:25:45.310354
-- TIME: 1.877711534500122

-- Published: 4
-- Published to Device Warnings Channel

-- Published: 5
-- Published to Device Sensors Channel

inception-v3 on NCS
0 0 0.9873
1 1 0.01238

-- ENDED:  2018-04-24 14:25:45.313174
-- TESTED:  1
-- TIME(secs): 1.89084792137146 - - [24/Apr/2018 14:25:45] "POST /api/infer HTTP/1.1" 200 -

Here we can see that, using the Intel Movidius product on an UP Squared device, there is no difference in classification accuracy compared to the development machine, which in my case was a Linux* machine with an NVIDIA* GTX 750 Ti, and only a slight difference in the time the classification process took to complete. It is interesting to note that the results above were actually more accurate than those I obtained when running the model on my GPU.
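To make the request/response cycle in the logs above concrete, here is a self-contained sketch of the exchange using only the Python standard library. The stub handler below always replies with a positive verdict; in the real project the server runs the image through the Intel Movidius product before answering, and the actual server code differs.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Sketch of the API exchange: the client POSTs raw image bytes to
# /api/infer and the server answers with a JSON verdict. The handler
# is a stub that always reports a positive result.

class InferHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/api/infer":
            self.send_error(404)
            return
        # Consume the uploaded image bytes (ignored by this stub).
        _ = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        body = json.dumps(
            {"Response": "OK", "ResponseMessage": "IDC Detected!", "Results": 1}
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch quiet

def classify(host, port, image_bytes):
    """POST an image to the inference API and return the parsed JSON."""
    req = urllib.request.Request(
        "http://%s:%d/api/infer" % (host, port),
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode())

# Run the stub server on a free local port and send one request.
server = HTTPServer(("127.0.0.1", 0), InferHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
result = classify("127.0.0.1", server.server_address[1], b"fake-image-bytes")
server.shutdown()
```

The JSON shape mirrors the responses shown in the client output above.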

IoT Connectivity

To set up the IoT device, you are welcome to complete the tutorial on the GitHub repo, but here I will go through in some detail exactly what this part of the project does, and explain how the proof of concept provided could be used in other medical applications.

The device we create is an IoT-connected alarm system built on a Raspberry Pi device. Once set up, the results captured from classifying images sent to the server trigger actions on the IoT that communicate with the Raspberry Pi device. In this case, the actions are turning on a red LED and a buzzer when cancer is detected, and turning on a blue LED when the classification results in no cancer being detected. Obviously this is a very simple proof of concept, but it shows the possibility of powerful applications that can save time for medical staff and, in the right hands, could help save lives through early and accurate detection.
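The decision logic itself is tiny. The sketch below is a hypothetical illustration, not the project's actual code: the actuator callbacks stand in for the GPIO calls (e.g. via RPi.GPIO) that would drive the LEDs and buzzer on a real Raspberry Pi.

```python
# Sketch of the alarm logic: map a classification result onto device
# actions. The actuators are injected as callbacks so the same logic
# can drive real GPIO pins or, as here, simple stand-ins for testing.
# Names and wiring are illustrative only.

def handle_result(result, actions):
    """result: 1 = IDC detected, 0 = not detected."""
    fired = []
    if result == 1:
        fired.append(actions["red_led"]())
        fired.append(actions["buzzer"]())
    else:
        fired.append(actions["blue_led"]())
    return fired

# Stand-in actuators that just record what would have been triggered.
actions = {
    "red_led": lambda: "RED LED ON",
    "buzzer": lambda: "BUZZER ON",
    "blue_led": lambda: "BLUE LED ON",
}
```

Swapping the stand-ins for callbacks that raise pins high is all that separates this sketch from a working alarm, which is what makes the pattern easy to extend to other medical alerting applications.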

Detecting False Negatives

During testing on a larger dataset, I began to notice that misclassifications were occurring, resulting in false negatives, and that in most cases the misclassified images had very similar counterparts in the opposite class:


Reducing False Negatives in the Invasive Ductal Carcinoma Classifier

Reducing False Negatives in the Invasive Ductal Carcinoma Classifier (Source)


I was happy to have the opportunity to demonstrate the second stage of the IDC Classifier at Intel AI DevJam & ICML (International Conference on Machine Learning) in Sweden, July 2018. This stage of the project was based around tricking the classifier with very similar images from opposite classes to produce false negatives, and finding a way to detect these images in advance, marking them as needing further (human) inspection.
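One plausible way to implement such a check (illustrative only, and not necessarily the method used in the project) is to compare a new image's feature vector against known examples of both classes, and flag it for human review when the opposite class is almost as similar as the predicted one:

```python
import math

# Flag ambiguous images before trusting a prediction: if the nearest
# neighbor in the *opposite* class is nearly as similar as the nearest
# neighbor in the predicted class, mark the image for human review.
# The features and margin are illustrative.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def needs_review(features, predicted, class_examples, margin=0.05):
    """True when the two classes are nearly equally similar."""
    same = max(cosine(features, e) for e in class_examples[predicted])
    other = max(cosine(features, e) for e in class_examples[1 - predicted])
    return (same - other) < margin
```

Images that trip the check would be routed to a pathologist rather than auto-classified, trading a little throughput for fewer silent false negatives.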

You can read an article I wrote about the project here.

The Acute Myeloid/Lymphoblastic Leukemia Project

The IDC Classifier is now part of two open source projects aimed at helping to understand, and find solutions for, types of cancer. The Peter Moss Acute Myeloid/Lymphoblastic Leukemia Project is the second of the two, and a proof of concept will be demoed at Embedded World with Intel this February. You can find out more about the project in my article: Inception V3 Deep Convolutional Architecture For Classifying Acute Myeloid/Lymphoblastic Leukemia.

Over the next few months, the Breast Cancer AI repository, which is the new home for the IDC Classifier and related projects, will be updated with code and techniques from the AML/ALL project.


  1. Invasive Ductal Carcinoma
  2. Convolutional Neural Network
  3. TensorFlow
  4. Transfer learning
  5. Intel® DevCloud
  6. Intel Movidius Brand
  7. Intel UP2
  8. Raspberry Pi
  9. IoT JumpWay
  10. TASS
  11. Deep Learning
  12. Comparing Deep Neural Networks and Traditional Vision Algorithms in Mobile Robotics
  13. Rethinking the Inception Architecture for Computer Vision
