GhostShiftAddNet: More Features from Energy-Efficient Operations
Bi, Jia, Hare, Jonathon and Merrett, Geoff (2021) GhostShiftAddNet: More Features from Energy-Efficient Operations. In The 32nd British Machine Vision Conference 2021.
Record type: Conference or Workshop Item (Paper)
Abstract
Deep convolutional neural networks (CNNs) are computationally and memory intensive. Their heavy reliance on multiplication makes inference difficult to deploy effectively on resource-constrained edge devices. This paper proposes GhostShiftAddNet, motivated by the goal of a hardware-efficient deep network: a multiplication-free CNN with fewer redundant features. We introduce a new bottleneck block, GhostSA, that converts all multiplications in the block into cheap operations. The bottleneck uses an appropriate number of bit-shift filters to process intrinsic feature maps, then applies a series of transformations consisting of bit-shifts and additions to generate further feature maps that fully learn the information underlying the intrinsic features. The number of bit-shift and addition operations can be scheduled for different hardware platforms. We conduct extensive experiments and ablation studies on desktop and embedded (Jetson Nano) devices for implementation and measurement. We demonstrate that the proposed GhostSA block can replace the bottleneck blocks in the backbones of state-of-the-art network architectures and gives improved performance on image classification benchmarks. Further, GhostShiftAddNet achieves higher classification accuracy with fewer FLOPs and parameters (reduced by up to 3x) than GhostNet. Compared with GhostNet, inference latency on the Jetson Nano improves by about 1.3x on GPU and 2x on CPU.
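The abstract describes the GhostSA block only at a high level; the PyTorch sketch below illustrates one plausible reading of it, under stated assumptions. It quantises convolution weights to signed powers of two (so each multiplication can be realised as a bit-shift) and generates "ghost" feature maps from the intrinsic ones via a cheap depthwise transformation, in the spirit of GhostNet. All names here (shift_quantize, ShiftConv2d, GhostSABlock) and the ratio parameter are illustrative assumptions, not the authors' released implementation; the paper's adder-based operations and per-platform scheduling are omitted for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

def shift_quantize(w):
    # Project weights onto signed powers of two so each multiply
    # can be realised in hardware as a bit-shift (illustrative helper).
    sign = torch.sign(w)
    mag = torch.clamp(w.abs(), min=2.0 ** -8)  # avoid log2(0)
    return sign * torch.pow(2.0, torch.round(torch.log2(mag)))

class ShiftConv2d(nn.Conv2d):
    # Convolution with power-of-two weights on the forward pass;
    # a straight-through estimator keeps the gradient usable in training.
    def forward(self, x):
        w_q = self.weight + (shift_quantize(self.weight) - self.weight).detach()
        return F.conv2d(x, w_q, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

class GhostSABlock(nn.Module):
    # Intrinsic maps come from a shift-based pointwise convolution;
    # "ghost" maps come from a cheap shift-based depthwise convolution.
    def __init__(self, in_ch, out_ch, ratio=2, dw_kernel=3):
        super().__init__()
        intrinsic = out_ch // ratio  # number of intrinsic feature maps
        self.primary = nn.Sequential(
            ShiftConv2d(in_ch, intrinsic, kernel_size=1, bias=False),
            nn.BatchNorm2d(intrinsic),
            nn.ReLU(inplace=True),
        )
        self.cheap = nn.Sequential(
            ShiftConv2d(intrinsic, out_ch - intrinsic, kernel_size=dw_kernel,
                        padding=dw_kernel // 2, groups=intrinsic, bias=False),
            nn.BatchNorm2d(out_ch - intrinsic),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        y = self.primary(x)                          # intrinsic maps
        return torch.cat([y, self.cheap(y)], dim=1)  # plus ghost maps

# Example: 16 input channels -> 32 output channels, half of them "ghost".
block = GhostSABlock(16, 32)
out = block(torch.randn(1, 16, 56, 56))  # shape (1, 32, 56, 56)

With ratio=2 the cheap branch produces as many ghost maps as intrinsic ones. The paper additionally replaces the remaining multiplications with add-based operations and schedules the shift/add mix per hardware platform, which this sketch does not attempt.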
Text: British_Machine_Vision_Conference - Accepted Manuscript (restricted to Repository staff only)
More information
Published date: 22 November 2021
Identifiers
Local EPrints ID: 454801
URI: http://eprints.soton.ac.uk/id/eprint/454801
PURE UUID: 59056e81-9a0a-429b-9212-b12a507beafa
Catalogue record
Date deposited: 23 Feb 2022 17:44
Last modified: 17 Mar 2024 03:05
Contributors
Author: Jia Bi
Author: Jonathon Hare
Author: Geoff Merrett