Realization of Early-Exit Dynamic Neural Networks on Reconfigurable Hardware
Early-exiting is a strategy that is becoming popular in deep neural networks (DNNs), as it can lead to faster execution and reduced computational intensity of inference. To achieve this, intermediate classifiers abstract information from the input samples to strategically stop forward propagation and generate an output at an earlier stage. Confidence criteria are used to distinguish easier-to-recognize samples from those that require further processing. However, such dynamic DNNs have only been realized in conventional computing systems (CPU+GPU) using libraries designed for static networks. In this article, we first explore the feasibility and benefits of realizing early-exit dynamic DNNs on field-programmable gate arrays (FPGAs), a platform already proven to be highly effective for neural network applications. We consider two approaches for implementing and executing the intermediate classifiers: 1) pipeline, which uses existing hardware, and 2) parallel, which uses additional dedicated modules. We model their energy needs and execution time, and explore their performance using the BranchyNet early-exit approach on LeNet-5, AlexNet, VGG19, and ResNet32, and a Xilinx ZCU106 Evaluation Board. We found that the dynamic approaches are at least 24% faster than a static network executed on an FPGA, while consuming at least 1.32× less energy. We further observe that FPGAs can enhance the performance of early-exit dynamic DNNs by minimizing the complexities introduced by the intermediate decision classifiers through parallel execution. Finally, we compare the two approaches and identify which is best for different network types and confidence levels.
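The exit decision the abstract describes — an intermediate classifier produces an output, and forward propagation stops once a confidence criterion is met — can be illustrated with a minimal BranchyNet-style sketch, where confidence is measured as the entropy of the softmax output. This is an illustrative sketch only, not the paper's FPGA implementation; the `stages`, `branch_classifiers`, and `final_classifier` callables and the `threshold` value are hypothetical placeholders.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    """Shannon entropy of a probability distribution (low = confident)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def early_exit_forward(stages, branch_classifiers, final_classifier, x, threshold):
    """Run backbone stages in order; after each, a branch classifier
    produces logits. If the softmax entropy falls below `threshold`,
    the sample exits early; otherwise it proceeds to the final classifier."""
    for stage, branch in zip(stages, branch_classifiers):
        x = stage(x)
        probs = softmax(branch(x))
        if entropy(probs) < threshold:
            return probs, "early"
    return softmax(final_classifier(x)), "final"
```

A confident branch output (one dominant logit) yields near-zero entropy and triggers an early exit; an ambiguous output (near-uniform logits) has entropy close to ln 2 for two classes and falls through to the final classifier.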
Dimitriou, Anastasios; Xun, Lei; Hare, Jonathon; Merrett, Geoff V.
June 2025
Dimitriou, Anastasios, Xun, Lei, Hare, Jonathon and Merrett, Geoff V.
(2025)
Realization of Early-Exit Dynamic Neural Networks on Reconfigurable Hardware.
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 44 (6), 2195-2203.
(doi:10.1109/TCAD.2024.3519055).
Text: Realisation of Early-Exit Dynamic Neural Networks - Accepted Manuscript
More information
Submitted date: 25 July 2024
Accepted/In Press date: 29 November 2024
e-pub ahead of print date: 16 December 2024
Published date: June 2025
Keywords:
dynamic neural networks, early-exiting, field-programmable gate array (FPGA), low resource, hardware architecture for machine learning
Identifiers
Local EPrints ID: 499024
URI: http://eprints.soton.ac.uk/id/eprint/499024
ISSN: 0278-0070
PURE UUID: 6349c6c4-f2a5-485b-8ced-254bf43c335d
Catalogue record
Date deposited: 07 Mar 2025 17:34
Last modified: 11 Mar 2026 02:40