Wood, J. and Shawe-Taylor, J.
Representation theory and invariant neural networks.
Discrete Applied Mathematics, 69(1-2).
A feedforward neural network is a computational device used for pattern recognition. In many recognition problems, certain transformations exist which, when applied to a pattern, leave its classification unchanged. Invariance under a given group of transformations is therefore typically a desirable property of pattern classifiers. In this paper, we present a methodology, based on representation theory, for the construction of a neural network invariant under any given finite linear group. Such networks show improved generalisation abilities and may also learn faster than corresponding networks without inbuilt invariance. We hope in the future to generalise this theory to approximate invariance under continuous groups.
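The abstract's central idea can be illustrated by the simplest route to group invariance: averaging a network's output over all elements of a finite linear group (symmetrization). This is a minimal sketch of that standard construction, not the paper's own representation-theoretic method, which builds the invariance into the network structure itself; the group here (cyclic shifts of a length-4 input, acting by permutation matrices) and the network shape are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Example finite linear group: cyclic shifts of a length-4 vector,
# realised as 4 permutation matrices (the cyclic group of order 4).
n = 4
group = [np.roll(np.eye(n), k, axis=0) for k in range(n)]

# An arbitrary one-hidden-layer feedforward network, with no
# built-in invariance (random weights, illustrative only).
W1 = rng.standard_normal((8, n))
b1 = rng.standard_normal(8)
w2 = rng.standard_normal(8)

def net(x):
    return float(w2 @ np.tanh(W1 @ x + b1))

def invariant_net(x):
    # Symmetrization: f_inv(x) = (1/|G|) * sum over g in G of f(g x).
    # Because G is closed under composition, replacing x by g'x only
    # permutes the summands, so the average is unchanged.
    return sum(net(g @ x) for g in group) / len(group)

x = rng.standard_normal(n)
gx = group[1] @ x  # a cyclically shifted copy of x

# net(x) and net(gx) generally differ, but invariant_net agrees on
# the whole orbit of x (up to floating-point rounding).
print(abs(invariant_net(x) - invariant_net(gx)) < 1e-9)
```

Symmetrization multiplies the evaluation cost by the group order, which is one reason a structural construction like the paper's is attractive: invariance is obtained from the weight-sharing pattern rather than by explicit averaging at run time.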