Iceberg Classification Using Deep Learning on Intel® Architecture

Published: 04/16/2018  

Last Updated: 04/16/2018


Drifting icebergs, combined with poor detection, are major threats to marine safety and physical oceanography; collisions can sink ships and cause major loss of human life. To monitor such hazards, Synthetic Aperture Radar (SAR) satellite images can be analyzed automatically with deep learning to classify each detected object as a ship or an iceberg. In this experiment, the Kaggle* iceberg dataset (images provided by a SAR satellite) was used, and the images were classified using the AlexNet topology with the Keras library. The experiments were performed on Intel® Xeon® Gold processor-powered systems, achieving a training accuracy of 99 percent and an inference accuracy of 86 percent.


An iceberg is a large chunk of ice that has calved from a glacier. Icebergs come in different shapes and sizes. Because most of an iceberg's mass is below the water surface, it drifts with the ocean currents. This poses risks to ships, navigation, and offshore infrastructure. Currently, many companies and institutions use aircraft and shore-based support to monitor the risk from icebergs. This monitoring is challenging in harsh weather conditions and remote areas.

To mitigate these risks, Statoil, an international energy company, is working closely with the Centre for Cold Ocean Resources Engineering (C-Core) to remotely sense icebergs and ships using SAR satellites. The SAR satellites are not light dependent and can capture images of the targets even in darkness, clouds, fog, and harsh weather conditions. The main objective of this experiment on Intel® architecture was to automatically classify a satellite image as an iceberg or a ship.

For this experiment, the AlexNet topology with the Keras library was used to train and run inference for an iceberg classifier on an Intel® Xeon® Gold processor. The iceberg dataset was taken from Kaggle, and the model was trained from scratch.

Choosing the Environment


Experiments were performed on an Intel Xeon Gold processor-powered system, as described in Table 1.

Table 1. Intel® Xeon® Gold processor configuration.

Component           Details
Architecture        x86_64
CPU op-mode(s)      32-bit, 64-bit
Byte order          Little-endian
CPU(s)              24
Core(s) per socket  6
Socket(s)           2
CPU family          6
Model               85
Model name          Intel® Xeon® Gold 6128 processor @ 3.40 GHz


The Keras framework and the Intel® Distribution for Python* were used as the software configuration, as described in Table 2.

Table 2. Software configuration.

Software/Library Version
Keras 2.1.2
Python* 3.6 (Intel optimized)


The iceberg dataset was taken from the Statoil/C-CORE Iceberg Classifier Challenge on Kaggle. The machine-generated images were labeled by human experts with geographic knowledge of the target. All images are 75 x 75 pixels.

For train.json, each image consisted of the following fields:

  • id: The ID of the image.
  • band_1, band_2: The flattened image data. Each band contains 75 x 75 pixel values in a list, so each list has 5,625 elements. Unlike the non-negative integer pixel values in typical image files, these are floating-point numbers with unit dB because they carry physical meaning: Band 1 and Band 2 are radar backscatter signals produced from different polarizations at a particular incidence angle. The polarizations correspond to HH (transmit and receive horizontally) and HV (transmit horizontally, receive vertically).
  • inc_angle: The incidence angle at which the image was taken. This field has missing data marked "na"; the images with "na" incidence angles all appear in the training data to prevent leakage.
  • is_iceberg: The target variable. It is set to "1" if the object is an iceberg and "0" if it is a ship. This field exists only in train.json.

During inferencing, the trained AlexNet model is used to predict the is_iceberg field.
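The band fields described above can be decoded into network-ready arrays. A minimal sketch, assuming the record layout from the Kaggle dataset description (the record below is synthetic, not real data):

```python
import numpy as np

def bands_to_image(record):
    """Reshape the two flattened 5,625-element band lists into a 75x75x2 array."""
    band1 = np.array(record["band_1"], dtype=np.float32).reshape(75, 75)
    band2 = np.array(record["band_2"], dtype=np.float32).reshape(75, 75)
    # Stack the HH and HV polarizations as two input channels.
    return np.stack([band1, band2], axis=-1)

# Synthetic stand-in for one train.json record.
record = {"band_1": [0.0] * 5625, "band_2": [0.0] * 5625, "inc_angle": 39.5}
img = bands_to_image(record)
print(img.shape)  # (75, 75, 2)
```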

AlexNet Architecture

AlexNet is a deep convolutional neural network designed for complex image classification tasks on the ImageNet dataset. AlexNet has five convolutional layers, three sub-sampling (max-pooling) layers, and three fully connected layers, with roughly 60 million trainable parameters in its original form. The arrangement and configuration of all the layers of AlexNet are shown in Figure 1.

AlexNet architecture
Figure 1. AlexNet architecture (credit: CV-Tricks [4]).
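The layer arrangement described above can be sketched in Keras. This is an illustrative AlexNet-style network adapted to the 75 x 75 two-band SAR input with a sigmoid output for binary classification; the exact filter sizes and adaptation used in the article are not published, so the hyperparameters here are assumptions:

```python
from tensorflow.keras import Input, Model, layers

def build_alexnet(input_shape=(75, 75, 2)):
    """AlexNet-style network: 5 conv, 3 max-pool, and 3 fully connected layers."""
    inputs = Input(shape=input_shape)
    x = layers.Conv2D(96, 11, strides=4, activation="relu")(inputs)
    x = layers.MaxPooling2D(3, strides=2)(x)
    x = layers.Conv2D(256, 5, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(3, strides=2)(x)
    x = layers.Conv2D(384, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(384, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(256, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(3, strides=2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(4096, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    x = layers.Dense(4096, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)
    return Model(inputs, outputs)

model = build_alexnet()
print(model.count_params())
```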

Execution Steps

This section explains the end-to-end process for training and inferencing the iceberg classification model on the AlexNet architecture. The steps include:

  1. Preparing the input
  2. Model training and inference

Preparing Input

  • We downloaded the dataset from Kaggle.
  • We converted “band1” and “band2” to a float array.
  • The dataset consisted of 1,604 images. These images were split into:
    • 1,074 JPEG images for training
    • 530 images for inferencing
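The split above can be sketched with scikit-learn's `train_test_split`; the random seed is an assumption, since the article does not state how the split was drawn:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Split the 1,604 samples into 1,074 for training and 530 for inferencing.
indices = np.arange(1604)
train_idx, infer_idx = train_test_split(indices, test_size=530, random_state=42)
print(len(train_idx), len(infer_idx))  # 1074 530
```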

Handling missing values

In the dataset, the inc_angle field had more than 130 missing values. Therefore, the k-nearest neighbors imputation technique was used, with k = 3. Figures 2 and 3 show the missing values before and after applying the imputation technique.
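The k-nearest neighbors imputation step can be sketched with scikit-learn's `KNNImputer`. The feature columns here (inc_angle plus a per-image band statistic) and the values are illustrative assumptions; the article does not specify which features drove the neighbor search:

```python
import numpy as np
from sklearn.impute import KNNImputer

# Columns: [inc_angle, mean band_1 backscatter (dB)] -- synthetic example rows.
X = np.array([
    [39.5, -22.1],
    [38.2, -24.3],
    [np.nan, -23.0],   # missing inc_angle, marked "na" in the raw data
    [40.1, -21.8],
])

# Impute the missing inc_angle from the 3 nearest rows (uniform-weighted mean).
imputer = KNNImputer(n_neighbors=3)
X_imputed = imputer.fit_transform(X)
print(X_imputed[2, 0])
```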

Before imputation
Figure 2. Before imputation.

After imputation
Figure 3. After imputation.

Model Training and Testing

After installing Keras and the Intel® Distribution for Python*, the next step was to train the model. The training technique adopted was to train all the layers from scratch.

The following command was used to download the iceberg dataset, train the model on the imputed data, and produce the inference results:

python ~/iceberg_kaggle/

Dataset download
Figure 4. Dataset download.


Training was performed in two runs: the first run was used to choose the optimizer, and the second run trained with the selected optimizer for 5,000 epochs.


To achieve better accuracy, Keras optimizers such as stochastic gradient descent (SGD) and ADAM (adaptive moment estimation) were compared. Table 3 lists the results for 250 epochs.

Table 3. Results with optimizers.

Optimizer  Result             Value
SGD        Training accuracy  62.5 percent
           Loss               0.37
ADAM       Training accuracy  61.6 percent
           Loss               6.0

Because the loss was lower with SGD than with ADAM, the second run proceeded with the SGD optimizer.


Training was performed on 1,074 images (see Figure 5) with a batch size of 128 for 5,000 epochs. Table 4 shows the results.

Table 4. Training results.

Results Value
Accuracy 99 percent
Loss 0.0009

Training snapshot
Figure 5. Training snapshot.
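The second training run can be sketched as a Keras compile-and-fit loop with the SGD optimizer chosen in run one and a batch size of 128. The learning rate, the tiny stand-in model, and the random data below are assumptions for illustration; the article trained the full AlexNet on 1,074 images for 5,000 epochs:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.optimizers import SGD

# Synthetic stand-ins for the 75x75x2 band images and is_iceberg labels.
x_train = np.random.rand(128, 75, 75, 2).astype("float32")
y_train = np.random.randint(0, 2, size=(128, 1))

# Small stand-in model so the sketch runs quickly; the article used AlexNet.
model = keras.Sequential([
    keras.Input(shape=(75, 75, 2)),
    layers.Flatten(),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=SGD(learning_rate=0.01),
              loss="binary_crossentropy",
              metrics=["accuracy"])
history = model.fit(x_train, y_train, batch_size=128, epochs=1, verbose=0)
```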


Inference was performed with two different datasets. The first run measured inference accuracy on the held-out subset of train.json. The second run was performed on test.json to submit the predictions to Kaggle.


Inference 1

Inferencing was performed on 530 images (see Figure 6). Table 5 shows the results.

Table 5. Inference results.

Results Value
Accuracy 87 percent
Loss 0.1009

Figure 6. Inferencing.
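Measuring accuracy on the labeled hold-out set reduces to thresholding the predicted probabilities and comparing them to the is_iceberg labels. A minimal sketch with synthetic values (the 0.5 threshold is an assumption):

```python
import numpy as np

def accuracy(probs, labels, threshold=0.5):
    """Fraction of predictions matching the labels after thresholding."""
    preds = (probs >= threshold).astype(int)
    return float((preds == labels).mean())

# Synthetic predicted probabilities and ground-truth labels.
probs = np.array([0.91, 0.12, 0.78, 0.40])
labels = np.array([1, 0, 1, 1])
print(accuracy(probs, labels))  # 0.75
```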

Inference 2

Inferencing on test.json was performed to predict the is_iceberg value, and the predictions were submitted to Kaggle.

Because there are no ground truth values, we cannot calculate the accuracy.

For each test ID, the probability that the image is an iceberg is as shown in Table 6.

Table 6. Sample output.

Sample output
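Building the submission reduces to writing one row per test ID with the predicted iceberg probability, in the `id,is_iceberg` format Kaggle expects. The IDs and probabilities below are synthetic placeholders:

```python
import csv
import io

# Synthetic (image_id, predicted_probability) pairs from the model.
predictions = [("a1b2c3d4", 0.43), ("e5f6a7b8", 0.99), ("c9d0e1f2", 0.02)]

# Write the submission rows; a real run would write to submission.csv instead.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["id", "is_iceberg"])
for image_id, prob in predictions:
    writer.writerow([image_id, f"{prob:.4f}"])
print(buf.getvalue())
```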


In this paper, we showed how the iceberg classification model was trained from scratch and tested using the AlexNet topology with Keras on an iceberg dataset in the Intel® Xeon® Gold processor environment. Because the inc_angle field had missing values, the experiment also applied an imputation technique to that field. We also observed better accuracy using the Keras SGD optimizer.

About the Author

Manda Lakshmi Bhavani and Rajeswari Ponnuru are part of the Intel and Tata Consultancy Services relationship, working on Intel® AI Developer Program evangelization.


References

1. Keras tutorial

2. AlexNet

3. Kaggle Statoil/C-CORE Iceberg Classifier Challenge

4. AlexNet architecture

Related Resources

Keras optimizations

Understanding AlexNet
