University of Southampton Institutional Repository

Sparse deep neural networks for embedded intelligence


Bi, Jia and Gunn, Steve R. (2018) Sparse deep neural networks for embedded intelligence. In 30th International Conference on Tools with Artificial Intelligence (ICTAI). IEEE Computer Society. pp. 30-38. (doi:10.1109/ICTAI.2018.00016).

Record type: Conference or Workshop Item (Paper)

Abstract

Deep learning is becoming more widespread due to its power in solving complex classification problems. However, deep learning models often have large memory footprints and high energy consumption, which can prevent them from being deployed effectively on embedded platforms and limits their application. This work addresses the problem of memory requirements by proposing a regularization approach that compresses the memory footprint of the models. It is shown that the sparsity-inducing regularization problem can be solved effectively using an enhanced stochastic variance reduced gradient optimization approach. Experimental evaluation of our approach shows that it can reduce the memory requirements in both the convolutional and fully connected layers by up to 300× without affecting overall test accuracy.
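To illustrate the flavour of the optimizer the abstract describes, the sketch below implements proximal SVRG with an L1 (sparsity-inducing) penalty on a simple least-squares objective. This is a hedged illustration only: the objective, step size, epoch counts, and all function names (`prox_svrg`, `soft_threshold`) are assumptions of this sketch, not the paper's actual deep-learning setup or its "enhanced" variant.

```python
# Illustrative sketch: proximal SVRG for min_w (1/2n)||Xw - y||^2 + lam*||w||_1.
# Soft-thresholding after each variance-reduced gradient step drives many
# weights exactly to zero, which is the mechanism behind sparsity-based
# model compression.
import numpy as np

def soft_threshold(w, t):
    """Proximal operator of t * ||w||_1 (soft-thresholding)."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def prox_svrg(X, y, lam=0.1, lr=0.01, epochs=20, inner=None, seed=0):
    """Proximal SVRG on an L1-regularized least-squares problem."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    inner = inner or n
    w = np.zeros(d)
    for _ in range(epochs):
        w_snap = w.copy()
        # Full gradient at the snapshot, recomputed once per epoch.
        full_grad = X.T @ (X @ w_snap - y) / n
        for _ in range(inner):
            i = rng.integers(n)
            xi = X[i]
            # Variance-reduced stochastic gradient: sample gradient at w,
            # corrected by the same sample's gradient at the snapshot.
            g = xi * (xi @ w - y[i]) - xi * (xi @ w_snap - y[i]) + full_grad
            # Gradient step followed by the L1 proximal (shrinkage) step.
            w = soft_threshold(w - lr * g, lr * lam)
    return w
```

On data generated from a sparse ground truth, most coordinates of the returned weight vector are driven to (or very near) zero while the informative ones are recovered; storing only the nonzero weights is what yields the memory compression the abstract reports.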

Text: bare_conf - Accepted Manuscript
Restricted to Repository staff only

More information

Accepted/In Press date: 5 August 2018
Published date: 5 November 2018
Venue - Dates: 2018 IEEE 30th International Conference on Tools with Artificial Intelligence, Volos, Greece, 2018-11-02 - 2019-03-05
Keywords: machine learning (artificial intelligence), Compression Ratio, embedded systems

Identifiers

Local EPrints ID: 426560
URI: http://eprints.soton.ac.uk/id/eprint/426560
PURE UUID: 1671835b-795d-4e80-88b7-c8df408e9463

Catalogue record

Date deposited: 30 Nov 2018 17:30
Last modified: 15 Mar 2024 23:04

Contributors

Author: Jia Bi
Author: Steve R. Gunn



Contact ePrints Soton: eprints@soton.ac.uk

ePrints Soton supports OAI 2.0 with a base URL of http://eprints.soton.ac.uk/cgi/oai2

This repository has been built using EPrints software, developed at the University of Southampton, but available to everyone to use.
