University of Southampton Institutional Repository

Singular value decomposition for the efficient design of neural networks

Paul, Vlad S. and Nelson, Philip A. (2024) Singular value decomposition for the efficient design of neural networks. In 34th IEEE International Workshop on Machine Learning for Signal Processing, MLSP 2024 - Proceedings. IEEE. 6 pp. (doi:10.1109/MLSP58920.2024.10734821).

Record type: Conference or Workshop Item (Paper)

Abstract

With advances in computational power and the development of new algorithmic approaches, machine learning models have become increasingly large and complex. The undoubted benefits of such models are realised at the expense of the extensive computational resources required by large datasets, lengthy training times, and the implementation of trained models. The sustainable use of such computational resources is now being questioned. The work described in this paper aims to better understand the potential for designing machine learning models with maximum efficiency, both in terms of training and of implementation. The fundamental basis of the work is the singular value decomposition (SVD) of weight matrices in neural network architectures. This decomposition provides a rational basis for disposing of unnecessary information accumulated in the network during training. Whilst some authors have previously made use of the SVD, the novel work described here enables the repeated application of the SVD during network training. This enables the progressive reduction of the dimensions of hidden layers in the network, which in turn allows the right-sizing of matrix dimensions as training progresses. The application of the method is illustrated by tackling the signal processing problem of localising a moving sound source. The learning rates, accuracy of localisation, and efficiency of computational implementation are compared for network architectures comprising multilayer perceptrons (MLPs) and recurrent neural networks (RNNs), both architectures having either real or complex elements. The results presented show that the design of networks based on the progressive application of the SVD during training can drastically reduce the training time and the computational requirements of all such models with little or no loss in performance.
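To make the mechanism concrete, the minimal sketch below (not taken from the paper) shows how a single trained weight matrix can be factorised with the SVD and truncated to a smaller effective rank, so that one wide hidden layer is replaced by two thinner ones. The squared-singular-value energy threshold used to choose the retained rank is an assumed heuristic, not a detail given in this record.

import numpy as np

def truncate_layer(W, energy=0.99):
    # Factorise W (m x n) and keep the smallest rank r that retains
    # the given fraction of the squared singular-value "energy".
    # NOTE: the 0.99 threshold is an assumption for illustration.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    # W is approximated by A @ B, i.e. a hidden layer of width r
    # replaces the original full-width layer.
    A = U[:, :r] * s[:r]   # shape (m, r)
    B = Vt[:r, :]          # shape (r, n)
    return A, B

# Example: a 256 x 256 weight matrix whose effective rank is low.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 32)) @ rng.standard_normal((32, 256))
A, B = truncate_layer(W)
print(A.shape, B.shape, np.linalg.norm(W - A @ B))

Applied repeatedly during training, as the abstract describes, each pass can shrink a layer further once the retained rank of its weight matrix drops.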

Text
Paul & Nelson - Singular Value Decomposition for the Efficient Design of Neural Networks - Version of Record
Restricted to Repository staff only

More information

e-pub ahead of print date: 4 November 2024
Keywords: Machine learning, acoustics, signal processing

Identifiers

Local EPrints ID: 509270
URI: http://eprints.soton.ac.uk/id/eprint/509270
ISSN: 2161-0363
PURE UUID: de71ed97-65c3-428f-90cf-05293d9fb79d
ORCID for Vlad S. Paul: orcid.org/0000-0002-5562-6102
ORCID for Philip A. Nelson: orcid.org/0000-0002-9563-3235

Catalogue record

Date deposited: 17 Feb 2026 17:41
Last modified: 18 Feb 2026 02:31
