Representation Theory and Invariant Neural Networks
Wood, J. and Shawe-Taylor, J. (1996) Representation Theory and Invariant Neural Networks. Discrete Applied Mathematics, 69(1–2), 33–60.
A feedforward neural network is a computational device used for pattern recognition. In many recognition problems, certain transformations exist which, when applied to a pattern, leave its classification unchanged. Invariance under a given group of transformations is therefore typically a desirable property of pattern classifiers. In this paper, we present a methodology, based on representation theory, for the construction of a neural network invariant under any given finite linear group. Such networks show improved generalization abilities and may also learn faster than corresponding networks without in-built invariance. We hope in the future to generalize this theory to approximate invariance under continuous groups.
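The paper's representation-theoretic construction is not reproduced here, but the core idea of invariance under a finite group can be illustrated by a simpler standard device: averaging a network's output over the group orbit of its input. The sketch below is a hypothetical illustration assuming NumPy, with the cyclic group C_4 acting on length-4 vectors by rotation standing in for an arbitrary finite linear group.

```python
# Sketch: exact invariance under a finite group by orbit averaging.
# The group here is the cyclic group C_4 acting on length-4 vectors by
# rotation (a hypothetical choice for illustration; the paper treats
# arbitrary finite linear groups via representation theory).
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # weights of a toy one-layer network
b = rng.standard_normal(3)

def f(x):
    """A plain (non-invariant) feedforward map."""
    return np.tanh(x @ W + b)

def group_orbit(x):
    """All cyclic rotations of x: the orbit of x under C_4."""
    return [np.roll(x, k) for k in range(len(x))]

def f_invariant(x):
    """Average f over the orbit; the result is unchanged by any rotation of x."""
    return np.mean([f(g) for g in group_orbit(x)], axis=0)

x = rng.standard_normal(4)
y1 = f_invariant(x)
y2 = f_invariant(np.roll(x, 1))   # the same input, transformed by a group element
assert np.allclose(y1, y2)        # identical outputs: invariance holds
```

Orbit averaging buys invariance at the cost of a factor-of-|G| increase in computation; the paper's contribution is instead to build the invariance into the network's weight structure via representation theory, which also improves generalization.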
Divisions: Faculty of Physical and Applied Science > Electronics and Computer Science
Date Deposited: 20 Aug 2004
Last Modified: 09 Aug 2012 23:54
Contributors: Wood, J. (Author); Shawe-Taylor, J. (Author)
Publisher: Elsevier Science Publishers B. V.
ISI Citation Count: 3