Stevan Harnad (2000) The Convergence Argument in Mind-modelling: Scaling Up from Toyland to the Total Turing Test. Psycoloquy 11(078) AI Cognitive Science (18)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 11(078): The Convergence Argument in Mind-modelling:

THE CONVERGENCE ARGUMENT IN MIND-MODELLING:
SCALING UP FROM TOYLAND TO THE TOTAL TURING TEST.
Commentary on Green on AI-Cognitive-Science

Stevan Harnad
Department of Electronics and Computer Science
University of Southampton
Highfield, Southampton
SO17 1BJ
United Kingdom
http://www.cogsci.soton.ac.uk/~harnad/

harnad@cogsci.soton.ac.uk

Abstract

The Turing Test is just a methodological constraint forcing us to scale up to an organism's full functional capacity. This is still just an epistemic matter, not an ontic one. Even a candidate in which we have successfully reverse-engineered all human capacities is not guaranteed to have a mind. The right level of convergence, however, is total robotic capacity; symbolic capacity alone (the standard Turing Test) is underdetermined, whereas full neurosimilitude is overdetermined.

1. I do not agree with Green (1993a/2000a) that cognitive science needs to do ontology. Deciding what really exists should be left to the basic sciences and philosophy. Cognitive science (if we don't put on airs) is really just a branch of reverse (bio)engineering, as Dan Dennett (1994) has suggested. Forward engineering applies basic science and engineering principles to the design and building of systems (e.g. suspension bridges, furnaces, rockets) that meet certain functional specifications. Reverse engineering tries to second-guess the design of systems that have already been built (by the Blind Watchmaker) to meet certain adaptive specifications (Harnad 1994).

2. That's really all there is to it -- except for one little wrinkle: natural cognitive systems have minds; there's someone at home in there, in those systems that are meeting those functional specifications (Harnad 1982). So unfortunately there is no guarantee that successfully second-guessing what it would take to meet the functional specifications will explain (or generate) a mind (Harnad 1991).

3. Never mind. We have our work cut out for us meeting those functional specifications in the first place. Let us not confuse the problem of underdetermination with the extra order of uncertainty posed by the question of whether or not there is someone at home in the systems we design. Here is a Convergence Argument (Harnad 1989) that ought to answer worries of the kind voiced by Plate (1993/2000) and Zelazo (1993/2000) in their commentaries on Green: theirs is the "more than one way to skin a cat" worry, and it is a valid one when it comes to ("toy") models for small, arbitrary subsets of our total functional capacity: there are arbitrarily many ways to capture our calculating skills, our chess-playing skills, our scene-describing skills; but there are fewer and fewer ways to capture all these skills in the same system: the degrees of freedom for skinning ALL possible cats with the SAME resources are much narrower than those for skinning just one with ANY resources. I have accordingly argued that as we scale up from toy tasks to our full performance capacity -- the capacity to pass the Total Turing Test (T3) -- the degree of underdetermination of our models will shrink to the normal level of underdetermination of scientific theories by empirical data: physics may not have converged on the ONLY possible way to design a universe, but we have to live with that, reconciled to going with our best shot (streamlined, perhaps, by Occam's Razor).
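[The shrinking of the degrees of freedom can be made concrete with a deliberately trivial sketch, in Python, offered purely for illustration: the candidate space and the "tasks" below are invented for the example and stand in for nothing in any actual model. The point is only that a candidate that must meet one constraint has many rivals, whereas a candidate that must meet them all with the same resources has almost none.]

    from itertools import product

    # Hypothetical candidate space: every function from {0,1,2} to {0,1,2},
    # encoded as a lookup table (27 candidates in all).
    DOMAIN = (0, 1, 2)
    candidates = [dict(zip(DOMAIN, outputs)) for outputs in product(DOMAIN, repeat=3)]

    # A "toy" task: one input/output constraint (one isolated skill).
    toy_task = [(0, 1)]

    # A "total" battery: a constraint on every input (full performance capacity).
    total_battery = [(0, 1), (1, 2), (2, 0)]

    def survivors(tasks):
        # Keep only the candidates that meet every constraint in the battery.
        return [c for c in candidates if all(c[x] == y for x, y in tasks)]

    print(len(survivors(toy_task)))       # 9 of the 27 candidates pass the toy task
    print(len(survivors(total_battery)))  # only 1 passes the total battery

[The real case is nothing like this tidy, but the moral is the same: piling on functional specifications that one and the same system must meet is what drives the candidate space down toward the ordinary underdetermination of theories by data.]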

4. And that would be my reply to Fodor's (1991) "Disneyland" objection: the goal is not "to construct a machine that would be indistinguishable from the real world for the length of a conversation" (p. 279). That would just be a toy task, and its solution could be just an arbitrary trick (Harnad 1992b). But if Fodor (1981) really thinks a T3-scale model, too, "could be trivially accomplished", I would be interested to know why there are not more of them around -- indeed, why there are none even faintly in sight! For a T3 model must have life-size capacities, and must be able to generate them life-long, just as we do. Speaking of this as "the mapping of inputs onto outputs" is about as perspicuous as speaking of Newtonian mechanics as the mapping of pool-shots onto pool-games -- or, to pick an engineering example, explaining how planes fly as just the mapping of flight courses onto flying conditions.

5. The real problem of AI isn't that it's trivial or that its findings are hopelessly underdetermined. It's that the only thing it can hope to generate is a VIRTUAL MIND -- a symbol system that is systematically interpretable as if it had a mind, but doesn't (Hayes et al. 1992). Searle's (1980) Chinese Room Argument, which simply reminded us that the (life-long pen-pal version of the) Turing Test (T2) could be implemented and passed by Searle himself in Chinese without his understanding Chinese, showed that it cannot be true that every implementation of a T2-scale symbol system would understand Chinese. But both Searle and Fodor are wrong in thinking that it's the Turing Test that's at fault. It is merely COMPUTATIONALISM -- the thesis that there would be a mind in (every implementation of) a (hypothetical, implementation-independent) T2-scale symbol system -- that has been shown to be supremely unlikely.

6. In contrast, T3 -- likewise a Turing Test, but this time calling for our total performance capacity, both symbolic and robotic -- is immune to the Chinese Room Argument (for Searle could not implement the whole T3 robot the way he could implement the whole T2 symbol system, because, for one thing, sensorimotor transduction is NOT implementation-independent computation; Harnad 1989; 1993a/2000a). T3 is also immune to Fodor's Disneyland Argument, because of the Convergence Argument, but so is T2! T2 only LOOKS easy because it looks as if it could be passed with nothing but symbols, symbols whose meanings would be as UNGROUNDED as the meanings of the symbols in a book (Harnad 1990). But I've tried to give reasons why even T2 could only be successfully passed by a system whose symbols were grounded in the robotic capacity to interact with the real-world objects, events and states that the symbols were about, in other words, a T3-scale system (1987, 1992a, 2000, 2001).

7. So the problem is NOT with Turing Testing, because Turing Testing is merely the empirical criterion for reverse engineering: the system must meet the right functional specifications, namely, it must have performance capacities totally indistinguishable from our own. The problem is, rather, with ungrounded symbol systems. Nor is the solution to find the right ontology, as Green suggests, or to give up on explaining cognition altogether and settle only for explaining subcognitive "modules", as Fodor (1983) suggests; nor is it even to turn instead to INTERNAL (neural) function (T4: a system that is Turing indistinguishable from us not only in its symbolic and robotic functions, but its neuromolecular ones too), as Searle suggests. The right level of empirical constraint for our particular branch of reverse engineering is T3, and grounding the model's symbolic capacities in its robotic capacities should reduce the functional degrees of freedom to just about the same ones that constrained the Blind Watchmaker who designed us (and who is no more of a mind reader than we are; Harnad 1994, 2000).

8. I've dubbed this position "robotic functionalism" (Harnad 1989) to contrast it with the "symbolic functionalism" of both AI and computationalism in general. According to robotic functionalism, subtotal "toy" modelling (T1) is too underdetermined, T2 symbolic modelling is ungrounded (with the "frame problem" mentioned by Chiappe & Kukla [1993/2000] being one of its fatal symptoms; Harnad 1993b), and T4 neuromimetic modelling is overdetermined (because not all of our internal functions are necessarily RELEVANT to having a mind). Hence T3 is just right for cognitive modelling (T4 neural data would only be relevant if they suggested ways to generate T3 capacity; Harnad 1993a, 1995b).

9. In closing, there is a misconstrual of T3 that I wish to correct. In his response to Plate, Green (1993b/2000b) wrote that for T3 "the computer program not only has to be indistinguishable from humans in its intellectual powers, but also in its (descriptions of its) qualitative (i.e. sensory, perceptual, affective, emotional, etc.) mental states." Note that this is still T2, not T3. Never mind mental states; they're just something we HOPE we're capturing. That a symbol system is systematically interpretable AS IF it had qualitative sensory, perceptual, affective experiences is surely something that we would already require of our correspondence with a pen-pal. This is just the criterion Dennett (1993) has called "heterophenomenology": the candidate must TALK as if it had qualitative experiences just like our own. But what T3 requires is that all those symbols square not only with our interpretations, but also with all of the system's autonomous robotic interactions with what the symbols are about: the system must be able (Turing-indistinguishably from ourselves) to discriminate, categorize, manipulate, name and describe the real-world objects, properties, events and states of affairs that its symbols are interpretable (by us) as being about, based on actual, life-long sensorimotor interactions with them. THAT's what takes the external interpreter out of the loop and grounds the robot's symbols directly in their putative referents (Harnad 1992a, 1995a).

10. Yet even that does not guarantee that there is someone home in there (not even T4 could guarantee that). Perhaps it is only to this extent that cognitive science does have explanatory problems over and above those of the natural and engineering sciences. But that's also where empirical science ends and the only thing left is trust (Harnad 1991, 1993c). And no amount of ontology will remedy that.

REFERENCES

Chiappe, D.L. & Kukla, A. (2000) Artificial Intelligence and Scientific Understanding. PSYCOLOQUY 11(064) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/2000.volume.11/psyc.00.11.064.ai-cognitive-science.4.chiappe http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?11.064

Chiappe, D.L. & Kukla, A. (1993) Artificial Intelligence and Scientific Understanding. Cognoscenti 1: 7-9.

Dennett, D.C. (1993) Discussion (passim). In: Bock, G.R. & Marsh, J. (Eds.) Experimental and Theoretical Studies of Consciousness. CIBA Foundation Symposium 174. Chichester: Wiley.

Dennett, D.C. (1994) Cognitive Science as Reverse Engineering: Several Meanings of "Top Down" and "Bottom Up". In: Prawitz, D. & Westerstahl, D. (Eds.) International Congress of Logic, Methodology and Philosophy of Science (9th: 1991). Dordrecht: Kluwer. http://cogsci.soton.ac.uk/~harnad/Papers/Py104/dennett.eng.html

Fodor, J.A. (1981) The Mind-Body Problem. Scientific American 244: 114-23.

Fodor, J.A. (1983) The Modularity of Mind. Cambridge MA: MIT Press.

Fodor, J.A. (1991) Replies. In B. Loewer & G. Rey (Eds.) Meaning in Mind: Fodor and his Critics (pp. 255-319). Cambridge MA: Blackwell.

Green, C.D. (2000a) Is AI the Right Method for Cognitive Science? PSYCOLOQUY 11(061) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/2000.volume.11/psyc.00.11.061.ai-cognitive-science.1.green http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?11.061

Green, C.D. (1993a) Is AI the Right Method for Cognitive Science? Cognoscenti 1: 1-5.

Green, C.D. (2000b) Empirical Science and Conceptual Analysis Go Hand in Hand. PSYCOLOQUY 11(071) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/2000.volume.11/psycoloquy.00.11.071.ai-cognitive-science.11.green http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?11.071

Green, C.D. (1993b) Ontology Rules! (But not Absolutely). Cognoscenti 1: 21-28.

Harnad, S. (1982) Consciousness: An afterthought. Cognition and Brain Theory 5: 29 - 47. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad82.consciousness.html

Harnad, S. (ed.) (1987) Categorical Perception: The Groundwork of Cognition. New York: Cambridge University Press. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad87.categorization.html

Harnad, S. (1989) Minds, Machines and Searle. Journal of Theoretical and Experimental Artificial Intelligence 1: 5-25. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad89.searle.html

Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42: 335-346. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad90.sgproblem.html

Harnad, S. (1991) Other bodies, Other minds: A machine incarnation of an old philosophical problem. Minds and Machines 1: 43-54. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad91.otherminds.html

Harnad, S. (1992a) Connecting Object to Symbol in Modeling Cognition. In: A. Clarke and R. Lutz (Eds.) Connectionism in Context Springer Verlag. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad92.symbol.object.html

Harnad, S. (1992b) The Turing Test Is Not A Trick: Turing Indistinguishability Is A Scientific Criterion. SIGART Bulletin 3(4) (October) 9 - 10. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad92.turing.html

Harnad, S. (1993a) Grounding Symbols in the Analog World with Neural Nets. Think 2(1) 12 - 78 (Special issue on "Connectionism versus Symbolism," D.M.W. Powers & P.A. Flach, eds.). http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad93.symb.anal.net.html http://cwis.kub.nl/~fdl/research/ti/docs/think/2-1/index.stm

Harnad, S. (1993b) Problems, Problems: The Frame Problem as a Symptom of the Symbol Grounding Problem. PSYCOLOQUY 4(34) frame-problem.11 http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad93.frameproblem.html

Harnad, S. (1993c) Symbol Grounding is an Empirical Problem: Neural Nets are Just a Candidate Component. Proceedings of the Fifteenth Annual Meeting of the Cognitive Science Society. NJ: Erlbaum

Harnad, S. (1994) Levels of Functional Equivalence in Reverse Bioengineering: The Darwinian Turing Test for Artificial Life. Artificial Life 1(3): 293-301. Reprinted in: C.G. Langton (Ed.). Artificial Life: An Overview. MIT Press 1995. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad94.artlife2.html

Harnad, S. (1995a) Grounding Symbolic Capacity in Robotic Capacity. In: Steels, L. and R. Brooks (eds.) The Artificial Life Route to Artificial Intelligence: Building Embodied Situated Agents. New Haven: Lawrence Erlbaum. Pp. 277-286. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad95.robot.html

Harnad, S. (1995b) Does the Mind Piggy-Back on Robotic and Symbolic Capacity? In: H. Morowitz (ed.) "The Mind, the Brain, and Complex Adaptive Systems." Santa Fe Institute Studies in the Sciences of Complexity. Volume XXII. Pp. 204-220. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad95.mind.robot.html

Harnad, S. (2000) Turing Indistinguishability and the Blind Watchmaker. In: Mulhauser, G. (ed.) "Evolving Consciousness" Amsterdam: John Benjamins (in press) http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad98.turing.evol.html

Harnad, S. (2001) Minds, Machines, and Turing: The Indistinguishability of Indistinguishables. Journal of Logic, Language, and Information (JoLLI) special issue on "Alan Turing and Artificial Intelligence" (in press) http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad00.turing.html

Hayes, P., Harnad, S., Perlis, D. & Block, N. (1992) Virtual Symposium on Virtual Mind. Minds and Machines 2: 217-238. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad92.virtualmind.html

Plate, T. (2000) Caution: Philosophers at Work. PSYCOLOQUY 11(070) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/2000.volume.11/psyc.00.11.070.ai-cognitive-science.10.plate http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?11.070

Plate, T. (1993) Reply to Green. Cognoscenti 1: 13.

Searle, J. R. (1980) Minds, Brains and Programs. Behavioral and Brain Sciences 3: 417-424. http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.searle2.html

Zelazo, P.D. (2000) The Nature (and Artifice) of Cognition. PSYCOLOQUY 11(076) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/2000.volume.11/psyc.00.11.076.ai-cognitive-science.16.zelazo http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?11.076

Zelazo, P.D. (1993) The Nature (and Artifice) of Cognition. Cognoscenti 1: 18-20.

