Learned Categorical Perception in Neural Nets: Implications for Symbol Grounding
Harnad, Stevan (1995) Learned Categorical Perception in Neural Nets: Implications for Symbol Grounding. In Honavar, V. and Uhr, L. (eds.) Symbol Processors and Connectionist Network Models in Artificial Intelligence and Cognitive Modelling: Steps Toward Principled Integration. Academic Press. pp. 191-206.
Record type: Conference or Workshop Item (Paper)
Abstract
After people learn to sort objects into categories they see them differently. Members of the same category look more alike and members of different categories look more different. This phenomenon of within-category compression and between-category separation in similarity space is called categorical perception (CP). It is exhibited by human subjects, animals and neural net models. In backpropagation nets trained first to auto-associate 12 stimuli varying along a one-dimensional continuum and then to sort them into 3 categories, CP arises as a natural side-effect because of four factors: (1) Maximal interstimulus separation in hidden-unit space during auto-association learning, (2) movement toward linear separability during categorization learning, (3) inverse-distance repulsive force exerted by the between-category boundary, and (4) the modulating effects of input iconicity, especially in interpolating CP to untrained regions of the continuum. Once similarity space has been "warped" in this way, the compressed and separated "chunks" have symbolic labels which could then be combined into symbol strings that constitute propositions about objects. The meanings of such symbolic representations would be "grounded" in the system's capacity to pick out from their sensory projections the object categories that the propositions were about.
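To make the two-phase training procedure concrete, here is a minimal sketch (not the original simulation from the paper): a small feedforward net is first trained by backpropagation to auto-associate 12 thermometer-coded ("iconic") stimuli from a one-dimensional continuum, then a 3-way categorization task is added, and mean within-category versus between-category distances among the hidden-unit representations are compared before and after categorization training. The input code, layer sizes, optimizer, and training lengths here are illustrative assumptions, not the parameters of the original simulations.

```python
# Hedged sketch of learned categorical perception in a backprop net.
# Phase 1: auto-associate 12 stimuli on a 1-D continuum.
# Phase 2: add a 3-category sorting task and re-measure hidden-unit distances.
import torch
import torch.nn as nn

torch.manual_seed(0)

N, H = 12, 8                                   # 12 stimuli, 8 hidden units (assumed size)
# "Iconic" thermometer code: stimulus i activates the first i+1 of 12 input lines.
X = torch.tril(torch.ones(N, N))
labels = torch.tensor([0]*4 + [1]*4 + [2]*4)   # 3 categories of 4 adjacent stimuli each

encoder = nn.Sequential(nn.Linear(N, H), nn.Sigmoid())
decoder = nn.Linear(H, N)                      # auto-association head
classifier = nn.Linear(H, 3)                   # categorization head (phase 2 only)

def pairwise_gaps(hidden):
    """Mean hidden-unit distance for within- vs. between-category stimulus pairs."""
    d = torch.cdist(hidden, hidden)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    off_diag = ~torch.eye(N, dtype=torch.bool)
    return d[same & off_diag].mean().item(), d[~same].mean().item()

# Phase 1: auto-association (reproduce the input pattern on the output).
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=0.05)
for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(X)), X)
    loss.backward()
    opt.step()

with torch.no_grad():
    print("after auto-association (within, between):", pairwise_gaps(encoder(X)))

# Phase 2: categorization (keep the auto-association task, add 3-way sorting).
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters())
                       + list(classifier.parameters()), lr=0.05)
for _ in range(2000):
    opt.zero_grad()
    h = encoder(X)
    loss = (nn.functional.mse_loss(decoder(h), X)
            + nn.functional.cross_entropy(classifier(h), labels))
    loss.backward()
    opt.step()

with torch.no_grad():
    print("after categorization (within, between):", pairwise_gaps(encoder(X)))
```

With a setup of this kind, the between-category gap in hidden-unit space typically grows faster than the within-category gap once the sorting task is added, which is the compression/separation signature the abstract describes; the exact values depend on the random seed, input code, and hyperparameters chosen.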
Text: harnad95.cpnets.html - Other
More information
Published date: 1995
Venue - Dates: Symbol Processors and Connectionist Network Models in Artificial Intelligence and Cognitive Modelling: Steps Toward Principled Integration, 1995-01-01
Organisations: Web & Internet Science
Identifiers
Local EPrints ID: 253357
URI: http://eprints.soton.ac.uk/id/eprint/253357
PURE UUID: 450126b7-ebf2-407d-b074-135d606c7a75
Catalogue record
Date deposited: 25 May 2000
Last modified: 15 Mar 2024 02:48
Contributors
Author: Stevan Harnad
Editor: V. Honavar
Editor: L. Uhr