University of Southampton Institutional Repository

Incremental training and group convolution pruning for runtime DNN performance scaling on heterogeneous embedded platforms


Xun, Lei, Tran-Thanh, Long, Al-Hashimi, Bashir and Merrett, Geoff (2020) Incremental training and group convolution pruning for runtime DNN performance scaling on heterogeneous embedded platforms. In 1st ACM/IEEE Workshop on Machine Learning for CAD (MLCAD 2019). pp. 1-6.

Record type: Conference or Workshop Item (Paper)

Abstract

Inference for Deep Neural Networks (DNNs) is increasingly being executed locally on mobile and embedded platforms due to its advantages in latency, privacy and connectivity. Since modern Systems on Chip typically execute a combination of different and dynamic workloads concurrently, it is challenging to consistently meet inference time/energy budgets at runtime because the local computing resources available to the DNNs vary considerably. To address this challenge, a variety of dynamic DNNs have been proposed. However, these works have significant memory overhead, limited runtime recoverable compression rates and narrow dynamic ranges of performance scaling. In this paper, we present a dynamic DNN using incremental training and group convolution pruning. The channels of the DNN convolution layers are divided into groups, which are then trained incrementally. At runtime, later groups can be pruned for inference time/energy reduction or added back for accuracy recovery without model retraining. In addition, we combine task mapping and Dynamic Voltage Frequency Scaling (DVFS) with our dynamic DNN to deliver a finer trade-off between accuracy and time/power/energy over a wider dynamic range. We illustrate the approach by modifying AlexNet for the CIFAR10 image dataset and evaluate our work on two heterogeneous hardware platforms: Odroid XU3 (ARM big.LITTLE CPUs) and Nvidia Jetson Nano (CPU and GPU). Compared to existing works, our approach can provide up to 2.36x (energy) and 2.73x (time) wider dynamic range with a 2.4x smaller memory footprint at the same compression rate. It achieves a 10.6x (energy) and 41.6x (time) wider dynamic range when combined with task mapping and DVFS.
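The runtime mechanism described in the abstract can be sketched in a few lines: a convolution layer's output channels are split into groups, and later groups are dropped to cut compute or restored for accuracy, with no retraining. The sketch below is a hedged illustration only (the class and method names are invented here, not the authors' implementation), and it shows just the compute bookkeeping rather than actual training or inference:

```python
# Illustrative sketch (not the paper's code) of group convolution pruning
# for runtime performance scaling: output channels of a conv layer are
# partitioned into groups; pruning later groups shrinks the layer's MACs,
# and the groups can be re-enabled later without retraining.

class GroupPrunableConv:
    def __init__(self, in_ch, out_ch, kernel, num_groups=4):
        assert out_ch % num_groups == 0, "channels must split evenly into groups"
        self.in_ch, self.kernel = in_ch, kernel
        self.group_size = out_ch // num_groups
        self.num_groups = num_groups
        self.active_groups = num_groups  # all groups active = full accuracy

    def set_active_groups(self, n):
        """Prune later groups (n smaller) or add them back (n larger)."""
        assert 1 <= n <= self.num_groups
        self.active_groups = n

    def active_out_channels(self):
        return self.active_groups * self.group_size

    def macs(self, spatial):
        """Approximate multiply-accumulates for one input with H*W = spatial."""
        return (self.in_ch * self.active_out_channels()
                * self.kernel * self.kernel * spatial)

layer = GroupPrunableConv(in_ch=64, out_ch=256, kernel=3, num_groups=4)
full = layer.macs(spatial=32 * 32)
layer.set_active_groups(1)            # prune the three later channel groups
pruned = layer.macs(spatial=32 * 32)
print(full // pruned)                 # this layer now costs 4x fewer MACs
```

Because pruning only deactivates whole trailing groups of an incrementally trained model, a runtime manager could switch operating points on the fly, and could pair each point with a task mapping and DVFS setting to widen the dynamic range further, as the paper proposes.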

Text: Version of Record, available under License Creative Commons Attribution. Download (611kB)
Text: Other version (v2), available under License Creative Commons Attribution. Download (606kB)

More information

Accepted/In Press date: 15 January 2020
Published date: 2020

Identifiers

Local EPrints ID: 437992
URI: http://eprints.soton.ac.uk/id/eprint/437992
PURE UUID: 30609f57-eaa9-48b2-bcf1-61b82ced648a
ORCID for Geoff Merrett: orcid.org/0000-0003-4980-3894

Catalogue record

Date deposited: 25 Feb 2020 17:31
Last modified: 17 Mar 2024 03:02

Contributors

Author: Lei Xun
Author: Long Tran-Thanh
Author: Bashir Al-Hashimi
Author: Geoff Merrett

