The Effect of Hebbian Learning on Optimisation in Hopfield Networks
Watson, Richard A. (ce199dfc-d5d4-4edf-bd7b-f9e224c96c75)
Buckley, C. L. (403be04e-fca5-4f1c-b0c4-d84401f51d51)
Mills, Rob (3d53d4bc-e1de-4807-b89b-f5813f2172a7)
Watson, Richard A., Buckley, C. L. and Mills, Rob (2009) The Effect of Hebbian Learning on Optimisation in Hopfield Networks. (In Press)
Record type: Monograph (Project Report)
Abstract
In neural networks, two specific dynamical behaviours are well known: 1) networks naturally find patterns of activation that locally minimise constraints among interactions, which can be understood as the local minimisation of an energy or potential function, or the optimisation of an objective function; 2) in distinct scenarios, Hebbian learning can create new interactions that form associative memories of activation patterns. In this paper we show that these two behaviours have a surprising interaction: learning of this type significantly improves the ability of a neural network to find configurations that satisfy constraints, i.e. to perform effective optimisation. Specifically, the network develops a memory of the attractors that it has visited but, importantly, is able to generalise over previously visited attractors to increase the basins of attraction of superior attractors before they are visited. The network is ultimately transformed into a different network that has only one basin of attraction, but this attractor corresponds to a configuration of very low energy in the original network. The new network thus finds optimised configurations that were unattainable (had exponentially small basins of attraction) in the original network dynamics.
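To make the mechanism in the abstract concrete, here is a minimal Python sketch (not the authors' implementation; the network size N, the Gaussian coupling ensemble, the learning rate delta, and the number of restarts are all illustrative assumptions). It relaxes a Hopfield network to a local energy minimum, applies a small Hebbian weight update at the visited attractor, and repeats, then compares the best energy found (always measured in the original network) against plain random restarts without learning.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50  # number of +/-1 units (illustrative size)

# Random symmetric couplings with zero diagonal define the constraint problem.
W = rng.normal(size=(N, N))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)

def energy(s, w):
    # Hopfield energy: E(s) = -1/2 * sum_ij w_ij * s_i * s_j
    return -0.5 * s @ w @ s

def relax(s, w):
    # Asynchronous updates until a fixed point, i.e. a local energy minimum.
    changed = True
    while changed:
        changed = False
        for i in rng.permutation(N):
            new = 1 if w[i] @ s > 0 else -1
            if new != s[i]:
                s[i] = new
                changed = True
    return s

# Baseline: repeated random restarts on the fixed original network.
baseline = min(energy(relax(rng.choice([-1, 1], size=N), W), W)
               for _ in range(200))

# With learning: after each relaxation, a small Hebbian update reinforces
# the correlations of the visited attractor, enlarging basins of attraction
# around it (and, by generalisation, around similar configurations) in
# subsequent relaxations.
W_learned = W.copy()
delta = 0.01  # Hebbian learning rate (assumed value)
best = np.inf
for _ in range(200):
    s = relax(rng.choice([-1, 1], size=N), W_learned)
    W_learned += delta * np.outer(s, s)
    np.fill_diagonal(W_learned, 0)
    best = min(best, energy(s, W))  # quality measured in the ORIGINAL network

print("best energy, random restarts:", baseline)
print("best energy, with Hebbian learning:", best)
```

Under these assumptions the learned network typically settles into deeper minima of the original energy than unlearned restarts reach, which is the effect the paper reports.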
Text: TR_watson-buckley-mills-2009.pdf - Other
Text: TR_watson-buckley-mills-2009b.pdf - Other
More information
Accepted/In Press date: 10 June 2009
Organisations: Agents, Interactions & Complexity, EEE
Identifiers
Local EPrints ID: 267543
URI: http://eprints.soton.ac.uk/id/eprint/267543
PURE UUID: e9da08f8-0fe4-4dbb-95b7-6cd613df9fd3
Catalogue record
Date deposited: 09 Jun 2009 22:19
Last modified: 15 Mar 2024 03:21
Contributors
Author: Richard A. Watson
Author: C. L. Buckley
Author: Rob Mills