Deep Learning Optimization for Edge Devices: Analysis of Training Quantization Parameters

Published in: IECON 2019 - 45th Annual Conference of the IEEE Industrial Electronics Society

This paper focuses on the quantization problem for convolutional neural networks. Quantization involves a distinct stage of data conversion from floating-point to integer numbers. In general, quantization reduces the memory footprint of a network by representing its values with limited precision. The training and inference stages of deep neural networks are constrained by available memory as well as a variety of other factors, including programming complexity and even system reliability. As a result, quantization has become increasingly popular, offering a significant performance gain at minimal accuracy loss. Various techniques for network quantization have already been proposed, including quantization-aware training and integer-arithmetic-only inference. Yet a detailed comparison of quantization configurations that combines all of the proposed methods has not been presented. Such a comparison is important for understanding how to select quantization hyperparameters during training so that networks are optimized for inference while preserving their robustness. In this work, we perform an in-depth analysis of the parameters of quantization-aware training, the process of simulating precision loss in the forward pass by quantizing and dequantizing tensors. Specifically, we vary rounding modes, input preprocessing, output data signedness, quantization bitwidth, and the locations at which precision loss is simulated, and evaluate how these choices affect the accuracy of deep neural networks aimed at performing efficient computations on resource-constrained devices.
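
The quantize-dequantize simulation at the heart of quantization-aware training can be sketched in a few lines of NumPy. The snippet below is an illustrative assumption, not the implementation evaluated in the paper: the function name fake_quantize and its parameters (num_bits, signed, rounding) are hypothetical, but they correspond to the knobs studied here, namely bitwidth, output signedness, and rounding mode.

    import numpy as np

    def fake_quantize(x, num_bits=8, signed=True, rounding=np.round):
        # Hypothetical sketch of the quantize-dequantize step used in
        # quantization-aware training; not the paper's actual implementation.
        # Integer range implied by the chosen bitwidth and signedness.
        if signed:
            qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
        else:
            qmin, qmax = 0, 2 ** num_bits - 1
        # Affine (scale / zero-point) mapping derived from the tensor's range.
        x_min, x_max = float(x.min()), float(x.max())
        scale = (x_max - x_min) / (qmax - qmin) if x_max > x_min else 1.0
        zero_point = qmin - rounding(x_min / scale)
        # Quantize: map floats onto the clamped integer grid; the rounding
        # mode is one of the hyperparameters compared in the paper.
        q = np.clip(rounding(x / scale) + zero_point, qmin, qmax)
        # Dequantize: return to floating point, now carrying the precision
        # loss that the network must learn to tolerate.
        return (q - zero_point) * scale

For example, fake_quantize(x, num_bits=4, signed=False, rounding=np.floor) simulates unsigned 4-bit quantization with floor rounding. In quantization-aware training, such a step is inserted at chosen locations in the forward pass so that the network learns parameters robust to the precision loss it will encounter at inference time.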

Authors

Alicja Kwasniewska

Software Engineer, AI Applications

Maciej Szankin

Software Engineer, AI Applications

Mateusz Ozga

Jason Wolfe

Arun Das

Adam Zajac

Jacek Ruminski

Paul Rad
