Categorical Perception and the Evolution of Supervised Learning in Neural Nets
Harnad, Stevan, Hanson, S.J. and Lubin, J. (1991) Categorical Perception and the Evolution of Supervised Learning in Neural Nets. In: Powers, D.W. and Reeker, L. (eds.) Proceedings of the AAAI Spring Symposium on Machine Learning of Natural Language and Ontology.
Record type: Conference or Workshop Item (Other)
Abstract
Some of the features of animal and human categorical perception (CP) for color, pitch and speech are exhibited by neural net simulations of CP with one-dimensional inputs: When a backprop net is trained to discriminate and then categorize a set of stimuli, the second task is accomplished by "warping" the similarity space (compressing within-category distances and expanding between-category distances). This natural side-effect also occurs in humans and animals. Such CP categories, consisting of named, bounded regions of similarity space, may be the ground level out of which higher-order categories are constructed; nets are one possible candidate for the mechanism that learns the sensorimotor invariants that connect arbitrary names (elementary symbols?) to the nonarbitrary shapes of objects. This paper examines how and why such compression/expansion effects occur in neural nets.
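The compression/expansion effect described above can be illustrated with a toy simulation. The sketch below is only a minimal illustration under assumed settings (a net with one input unit, three sigmoid hidden units, an auto-association output and a category output, eight evenly spaced one-dimensional stimuli, an arbitrary category boundary at 0.5, plain full-batch backprop); it is not the authors' original simulation. It first trains the net to auto-associate the stimuli (the discrimination phase), then adds the categorization task, and compares mean pairwise distances between hidden-unit representations within vs. between categories before and after category training.

```python
# Minimal sketch (not the paper's original simulation): a tiny backprop net
# with 1-D inputs, trained first to auto-associate and then to categorize,
# used to check for within-category compression and between-category expansion
# of hidden-unit distances. All sizes, learning rates, and the 0.5 boundary
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(0.05, 0.95, 8).reshape(-1, 1)            # 8 one-dimensional stimuli
labels = (X[:, 0] > 0.5).astype(float).reshape(-1, 1)    # assumed boundary at 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights: input -> hidden (3 units); hidden -> auto-association output; hidden -> category output
W1 = rng.normal(0, 0.5, (1, 3)); b1 = np.zeros(3)
Wa = rng.normal(0, 0.5, (3, 1)); ba = np.zeros(1)
Wc = rng.normal(0, 0.5, (3, 1)); bc = np.zeros(1)

def forward(x):
    h = sigmoid(x @ W1 + b1)
    return h, sigmoid(h @ Wa + ba), sigmoid(h @ Wc + bc)

def train(epochs, use_category, lr=1.0):
    """Full-batch backprop on squared error; the category head is only trained when use_category is True."""
    global W1, b1, Wa, ba, Wc, bc
    for _ in range(epochs):
        h, ya, yc = forward(X)
        da = (ya - X) * ya * (1 - ya)                                   # auto-association delta
        dc = (yc - labels) * yc * (1 - yc) if use_category else np.zeros_like(yc)  # category delta
        dh = (da @ Wa.T + dc @ Wc.T) * h * (1 - h)                      # backpropagated hidden delta
        Wa -= lr * h.T @ da; ba -= lr * da.sum(0)
        Wc -= lr * h.T @ dc; bc -= lr * dc.sum(0)
        W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(0)

def mean_pair_dists(H):
    """Mean pairwise hidden-layer distance within and between categories."""
    within, between = [], []
    for i in range(len(H)):
        for j in range(i + 1, len(H)):
            d = np.linalg.norm(H[i] - H[j])
            (within if labels[i] == labels[j] else between).append(d)
    return np.mean(within), np.mean(between)

train(5000, use_category=False)        # discrimination / auto-association phase
h_pre, _, _ = forward(X)
print("before categorization: within=%.3f between=%.3f" % mean_pair_dists(h_pre))

train(5000, use_category=True)         # categorization phase
h_post, _, _ = forward(X)
print("after  categorization: within=%.3f between=%.3f" % mean_pair_dists(h_post))
# The expected pattern, if warping occurs, is that within-category distances
# shrink and/or between-category distances grow relative to the baseline.
```

With these assumed settings, the within/between distance ratio printed before and after category training gives a crude index of the "warping" of the hidden-layer similarity space that the abstract describes.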
Text: harnad91.cpnets.html - Other
More information
Published date: 1991
Additional Information:
In: Proceedings of the AAAI Spring Symposium on Machine Learning of Natural Language and Ontology (D.W. Powers & L. Reeker, Eds.), Document D91-09, Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, Kaiserslautern, FRG, pp. 65-74. [Presented at the Symposium on Symbol Grounding: Problems and Practice, Stanford University, March 1991] http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad91.cpnets.html
Venue - Dates:
Proceedings of the AAAI Spring Symposium on Machine Learning of Natural Language and Ontology, 1991-01-01
Organisations:
Web & Internet Science
Identifiers
Local EPrints ID: 253378
URI: http://eprints.soton.ac.uk/id/eprint/253378
PURE UUID: 51deb89b-b7c2-4c2d-b8d1-ed4d913eee92
Catalogue record
Date deposited: 26 May 2000
Last modified: 15 Mar 2024 02:48
Contributors
Author: Stevan Harnad
Author: S.J. Hanson
Author: J. Lubin
Editor: D.W. Powers
Editor: L. Reeker