University of Southampton Institutional Repository

Reduced-order neural network synthesis with robustness guarantees


Drummond, Ross, Turner, Matthew C. and Duncan, Stephen R. (2022) Reduced-order neural network synthesis with robustness guarantees. IEEE Transactions on Neural Networks and Learning Systems, 1-10. (doi:10.1109/TNNLS.2022.3182893).

Record type: Article

Abstract

In the wake of the explosive growth of smartphones and cyber-physical systems, data generation has been shifting rapidly away from centralised collection towards on-device generation. In response, machine learning algorithms are being adapted to run locally on board potentially hardware-limited devices, to improve user privacy, reduce latency and be more energy efficient. However, our understanding of how these device-orientated algorithms behave and should be trained is still fairly limited. To address this issue, a method is introduced to automatically synthesise a reduced-order neural network (one with fewer neurons) approximating the input/output mapping of a larger one. The reduced-order neural network's weights and biases are generated from a convex semi-definite programme that minimises the worst-case approximation error with respect to the larger network. Worst-case bounds for this approximation error are obtained, and the approach can be applied to a wide variety of neural network architectures. What differentiates the proposed approach from existing methods for generating small neural networks, e.g. pruning, is the inclusion of the worst-case approximation error directly within the training cost function, which should add robustness to out-of-sample data points. Numerical examples highlight the potential of the proposed approach. The overriding goal of this paper is to generalise recent results in the robustness analysis of neural networks to a robust synthesis problem for their weights and biases.
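To make the objective concrete: the paper synthesises the small network's weights via a convex semi-definite programme with a guaranteed worst-case error bound. The sketch below is *not* that SDP; it is a minimal, hypothetical NumPy illustration of the underlying quantity — the approximation error between a "large" ReLU network and a "reduced" one (here fitted crudely by least squares on random features) — evaluated empirically over sampled inputs. All network sizes and weights are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical "large" single-hidden-layer ReLU network: R^2 -> R, 32 neurons.
W1 = rng.normal(size=(32, 2)); b1 = rng.normal(size=32)
W2 = rng.normal(size=(1, 32)); b2 = rng.normal(size=1)

def large_net(x):
    return W2 @ relu(W1 @ x + b1) + b2

# Hypothetical "reduced" network with 4 neurons. Its output layer is fitted by
# least squares on samples of the large network -- a crude stand-in for the
# paper's SDP synthesis, which instead optimises a certified worst-case bound.
X = rng.uniform(-1.0, 1.0, size=(500, 2))          # inputs drawn from a box
Y = np.array([large_net(x)[0] for x in X])         # large-network outputs

V1 = rng.normal(size=(4, 2)); c1 = rng.normal(size=4)   # fixed random features
H = relu(X @ V1.T + c1)                                  # hidden activations
coef, *_ = np.linalg.lstsq(np.column_stack([H, np.ones(len(X))]), Y, rcond=None)
V2, c2 = coef[:4], coef[4]

def small_net(x):
    return V2 @ relu(V1 @ x + c1) + c2

# Empirical worst-case approximation error over the sampled inputs -- the
# quantity the paper upper-bounds, for all inputs in a set, via the SDP.
errs = [abs(large_net(x)[0] - small_net(x)) for x in X]
print(f"empirical worst-case error on samples: {max(errs):.3f}")
```

Note the key difference in spirit: a sampled maximum like `max(errs)` can only underestimate the true worst case, whereas the semi-definite programme in the paper produces a certified upper bound that holds for every input in the considered set.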

Text: 2102.09284 - Accepted Manuscript (736kB)

More information

Accepted/In Press date: 7 June 2022
e-pub ahead of print date: 23 June 2022
Additional Information: Publisher Copyright: IEEE; Nextrode Project of the Faraday Institution (EPSRC) (Grant Number: EP/M009521/1)
Keywords: Approximation error, Artificial neural networks, Biological neural networks, Machine learning algorithms, Neural network compression, Neural networks, Neurons, Robustness, Reduced-order systems

Identifiers

Local EPrints ID: 469697
URI: http://eprints.soton.ac.uk/id/eprint/469697
ISSN: 2162-237X
PURE UUID: 2552da93-6b55-43a9-a06a-16516c9c1114

Catalogue record

Date deposited: 22 Sep 2022 16:38
Last modified: 16 Mar 2024 21:41

Contributors

Author: Ross Drummond
Author: Matthew C. Turner
Author: Stephen R. Duncan
