Neural network interpretation using descrambler groups
Amey, Jake L, Keeley, Jake, Choudhury, Tajwar and Kuprov, Ilya (2021) Neural network interpretation using descrambler groups. Proceedings of the National Academy of Sciences, 118 (5). (doi:10.1073/pnas.2016917118)
Abstract
The lack of interpretability and trust is a much-criticized feature of deep neural networks. In fully connected nets, the signaling between inner layers is scrambled because backpropagation training does not require perceptrons to be arranged in any particular order. The result is a black box; this problem is particularly severe in scientific computing and digital signal processing (DSP), where neural nets perform abstract mathematical transformations that do not reduce to features or concepts. We present here a group-theoretical procedure that attempts to bring inner-layer signaling into a human-readable form, the assumption being that this form exists and has identifiable and quantifiable features—for example, smoothness or locality. We applied the proposed method to DEERNet (a DSP network used in electron spin resonance) and managed to descramble it. We found considerable internal sophistication: the network spontaneously invents a bandpass filter, a notch filter, a frequency axis rescaling transformation, frequency-division multiplexing, group embedding, spectral filtering regularization, and a map from harmonic functions into Chebyshev polynomials—in 10 min of unattended training from a random initial guess.
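The core idea can be illustrated with a minimal sketch: search over a symmetry group acting on the hidden-layer neurons for the element that makes the trained weight matrix maximally smooth. The Python below restricts the search to the permutation subgroup (row reorderings) with a simple total-variation smoothness score and a greedy pairwise-swap search; the paper works with richer descrambler groups and criteria, so every name and choice here (the `smoothness` score, `descramble_by_permutation`, the toy data) is an illustrative assumption, not the authors' implementation.

```python
# Illustrative sketch only: the paper's descramblers come from richer groups;
# here we use row permutations of the weight matrix for simplicity.
import numpy as np

rng = np.random.default_rng(0)

def smoothness(W):
    # Negative total variation along the neuron axis: a smooth row
    # ordering gives small differences between adjacent rows.
    return -np.sum(np.diff(W, axis=0) ** 2)

def descramble_by_permutation(W, n_sweeps=50):
    """Greedy pairwise-swap search over the permutation group S_n acting
    on the rows (hidden neurons) of a weight matrix W. Accepts a swap only
    if it improves the smoothness score, so it may stop at a local optimum."""
    n = W.shape[0]
    perm = np.arange(n)
    best = smoothness(W[perm])
    for _ in range(n_sweeps):
        improved = False
        for i in range(n - 1):
            for j in range(i + 1, n):
                perm[i], perm[j] = perm[j], perm[i]
                score = smoothness(W[perm])
                if score > best:
                    best, improved = score, True
                else:
                    perm[i], perm[j] = perm[j], perm[i]  # revert the swap
        if not improved:
            break
    return perm, W[perm]

# Toy demo: a smooth weight pattern, scrambled by a random row permutation.
n_hidden, n_in = 32, 64
W_smooth = np.sin(np.outer(np.linspace(0, np.pi, n_hidden),
                           np.linspace(0, 4, n_in)))
W_scrambled = W_smooth[rng.permutation(n_hidden)]
perm, W_recovered = descramble_by_permutation(W_scrambled)
print("smoothness before:", smoothness(W_scrambled))
print("smoothness after: ", smoothness(W_recovered))
```

The greedy 2-swap search stands in for whatever optimizer one prefers; the essential point it demonstrates is that smoothness is a quantifiable criterion that can be optimized over a group of transformations without retraining the network.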
Text
amey-et-al-neural-network-interpretation-using-descrambler-groups - Version of Record
More information
Accepted/In Press date: 10 December 2020
Published date: 26 January 2021
Identifiers
Local EPrints ID: 501435
URI: http://eprints.soton.ac.uk/id/eprint/501435
ISSN: 0027-8424
PURE UUID: 3d33dd52-b0df-4cfc-b14b-a77c93ab4fd2
Catalogue record
Date deposited: 30 May 2025 17:15
Last modified: 22 Aug 2025 02:06
Contributors
Author: Jake L Amey
Author: Jake Keeley
Author: Tajwar Choudhury
Author: Ilya Kuprov