Matrix analysis for fast learning of neural networks with application to the classification of acoustic spectra
Paul, Vlad Stefan (a643f880-7e70-4ae0-a27b-4e77c3c451de)
Nelson, Philip (5c6f5cc9-ea52-4fe2-9edf-05d696b0c1a9)
Paul, Vlad Stefan and Nelson, Philip (2021) Matrix analysis for fast learning of neural networks with application to the classification of acoustic spectra. The Journal of The Acoustical Society of America, 149 (6), 4119-4133. (doi:10.1121/10.0005126).
Abstract
Neural networks are increasingly being applied to problems in acoustics and audio signal processing. Large audio datasets are being generated for use in training machine learning algorithms, and the reduction of training times is of increasing relevance. The work presented here begins by reformulating the analysis of the classical multilayer perceptron to show the explicit dependence of network parameters on the properties of the weight matrices in the network. This analysis then allows the application of the singular value decomposition (SVD) to the weight matrices. An algorithm is presented that makes use of regular applications of the SVD to progressively reduce the dimensionality of the network. This results in significant reductions in network training times of up to 50% with very little or no loss in accuracy. The use of the algorithm is demonstrated by applying it to a number of acoustical classification problems that help quantify the extent to which closely related spectra can be distinguished by machine learning.
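As the abstract describes, the paper's algorithm applies the singular value decomposition to the weight matrices of a multilayer perceptron so that the network's dimensionality can be progressively reduced during training. The short sketch below illustrates only the underlying idea of SVD-based rank truncation of a single weight matrix; it is not the published algorithm, it assumes NumPy, and the matrix sizes and the 5% singular-value tolerance are arbitrary choices made for this example.

# Illustrative sketch (not the paper's algorithm): truncate the SVD of one
# hypothetical hidden-layer weight matrix W, keeping only singular values
# above a chosen tolerance, so the layer factors into two thinner matrices.
import numpy as np

rng = np.random.default_rng(0)
# Build a weight matrix that is approximately rank 8 plus a little noise,
# so that truncation has a visible effect (purely for demonstration).
W = rng.standard_normal((256, 8)) @ rng.standard_normal((8, 64))
W += 0.01 * rng.standard_normal((256, 64))

U, s, Vt = np.linalg.svd(W, full_matrices=False)
tol = 0.05 * s[0]            # keep singular values within 5% of the largest (assumed threshold)
k = int(np.sum(s > tol))     # effective rank retained

# W is approximated by (U_k * s_k) @ Vt_k: a 256-by-k matrix times a k-by-64 matrix,
# which reduces the dimensionality of the layer from 64 to k.
W_left = U[:, :k] * s[:k]
W_right = Vt[:k, :]
rel_err = np.linalg.norm(W - W_left @ W_right) / np.linalg.norm(W)
print(f"retained rank k = {k}, relative reconstruction error = {rel_err:.4f}")

Because the example matrix is constructed to be close to rank 8, the retained rank k should come out as 8 and the two thinner factors reproduce W up to the level of the added noise; the criterion actually used to decide how many singular values to keep, and how this is applied repeatedly during training, is given in the full text.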
Text: Matrix analysis for fast learning - Accepted Manuscript
More information
Accepted/In Press date: 11 May 2021
e-pub ahead of print date: 14 June 2021
Published date: 14 June 2021
Additional Information:
Funding Information:
V.S.P. is grateful for the support of the UK Engineering and Physical Sciences Research Council through the Doctoral Training Partnership.
Publisher Copyright:
© 2021 Acoustical Society of America.
Identifiers
Local EPrints ID: 450284
URI: http://eprints.soton.ac.uk/id/eprint/450284
ISSN: 0001-4966
PURE UUID: 1f8241d7-7e1f-40ca-895a-3797966d6826
Catalogue record
Date deposited: 20 Jul 2021 16:32
Last modified: 17 Mar 2024 06:34