Harnad, S. (1990) Lost in the hermeneutic hall of mirrors. Invited Commentary on: Michael Dyer: Minds, Machines, Searle and Harnad. Journal of Experimental and Theoretical Artificial Intelligence 2: 321-327.

LOST IN THE HERMENEUTIC HALL OF MIRRORS

Stevan Harnad
Psychology Department
Princeton University
Princeton NJ 08544
harnad@cogsci.soton.ac.uk

What is at issue here is really quite simple, although it is easily overlooked in the layers and layers of overinterpretation that are characteristic of a condition I've dubbed getting lost in the "hermeneutic hall of mirrors," which is the illusion one creates by first projecting an interpretation onto something (say, a cup of tea-leaves or a dream) and then, when challenged to justify that interpretation, merely reading off more and more of it, as if it were answerable only to itself.

Now in the case of mentalistic interpretation the situation is only slightly more complicated. First, we are interpreting not a cup of tea-leaves, but a "symbol system," and, "by definition," one of the properties of a symbol system is that it must be amenable to a systematic semantic interpretation (Harnad 1990a, Fodor & Pylyshyn 1988, Newell 1980). The interpretability, in other words, is a given if we are dealing with a bona fide symbol system at all. But the fact that a system can be interpreted, say, mentalistically, does not mean that the interpretation is correct.[1] "Correct" means something very specific here, though again quite simple: Is it really true that the thing that is being interpreted as if it had a mind has a mind? There are cases -- you and me, for example -- where the answer is a relatively unproblematic "yes." (I say relatively, because there is still the "other-minds" problem (Harnad 1990b): You can't really be sure in any case but your own. I will return to this.) And there are other cases -- a teacup or a thermostat, for example -- where the answer is a relatively unproblematic "no." (Again, because of the other-minds problem, no one can know for sure that the "no" is incorrect, except the teacup itself.)

Now the hermeneutic hall of mirrors: As I said, by definition, a symbol system, unlike a teacup, or even a thermostat, must be amenable to a systematic interpretation, otherwise it is not a symbol system. Does that mean that it also has a mind by definition? Surely not. We must still ask whether a mentalistic interpretation is correct -- and correct in a stronger sense than that all the symbolic goings-on can be coherently and systematically interpreted "as if" the system had a mind. Does it really have a mind?

Normally the other-minds problem is an insuperable obstacle to answering the question of whether a mentalistic interpretation is correct. This is the reason we turn to the various forms of Turing Test. But even before Turing (1964) it seemed reasonable to say that the reason I believe you have a mind is that, in all the relevant respects, you are indistinguishable from me. The "relevant respects" have been construed variously. For Turing, indistinguishable pen-pal performance capacity (symbols in, symbols out) was enough: That's the standard Turing Test (TT). Elsewhere (Harnad 1987, 1990a) I have tried to show that, for reasons related to the hermeneutic hall of mirrors, symbol manipulation, be it ever so interpretable, is just not enough for having a mind; that our robotic performance capacities (our ability to see, manipulate, name and describe the real objects and states of affairs to which our symbols refer) are at least as important for having a mind as our pen-pal capacities are, and that the latter are in fact grounded in the former. The Total Turing Test (TTT) would accordingly require that the candidate be indistinguishable from a real person not only in its symbolic performance capacity, but in its robotic performance capacity too.

One can be even more demanding and insist that to an empiricist every empirical datum is relevant. So the TTTT would require indistinguishability from me in every observable respect, including my neurons and my molecules. Now that is probably getting too personal. I have argued (Harnad 1989) that the TTT should be enough to cut the degrees of freedom down to the normal size for any underdetermined scientific theory, but it is important to note that even the TTTT is no guarantee: It is still possible that my TTTT-indistinguishable lookalike has no mind, be he ever so interpretable as if he had one. I take it that this kind of possibility, which is simply the other-minds problem writ large, is not one that we should trouble ourselves with too long, so let's go back a couple of steps to the TT and the TTT:

Much time has been wasted in discussions of Searle because of vagueness about what kind of system is in fact the target of his criticism (Searle 1980 is partly to blame for not having been completely unequivocal about this, but I take it that as of Harnad 1989 there is no longer any excuse for ambiguity on this score): Searle is really only attacking one specific kind of system -- an implemented symbol manipulating system. Let's call this a "symbol cruncher" (SC) to remind us of exactly what the target is. A SC is a device that manipulates symbols (which can be scratches on paper, holes on a tape, or states of flip-flops in a machine) purely on the basis of their shapes (not on the basis of their interpretations), and these symbols and symbol manipulations can be given a systematic semantic interpretation. Computers are real-life examples of such systems. Turing machines are their idealized version.
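
By way of a concrete, purely illustrative sketch of what such a device amounts to -- the rule table and names below are my own invention, not drawn from any system under discussion -- here is a trivial symbol cruncher in Python: a few rewrite rules are applied to tokens solely on the basis of their shapes, yet the resulting input/output behavior is systematically interpretable as unary addition.

    # Illustrative symbol cruncher: rules apply to token shapes only.
    RULES = [
        ("|+", "+|"),  # shift one stroke from the left of '+' to its right
        ("+", ""),     # drop the '+' (reached only once no '|' precedes it, by rule order)
        ("=", ""),     # finally drop the '=' marker
    ]

    def crunch(symbols: str) -> str:
        """Rewrite the token string until no rule applies."""
        while True:
            for pattern, replacement in RULES:
                if pattern in symbols:
                    symbols = symbols.replace(pattern, replacement, 1)
                    break
            else:
                return symbols

    # "||+|||=" is interpretable as "2 + 3 ="; the system itself sees only shapes.
    print(crunch("||+|||="))  # prints "|||||", interpretable as 5

Nothing in the program traffics in numbers or meanings; the arithmetic interpretation is projected onto it from outside -- which is exactly the point at issue.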

Now SCs have some very powerful properties. They can compute just about anything; they have been shown to be able to generate a lot of intelligent machine performance, and they seem to be able to "simulate" just about anything, including things that are not themselves SCs -- airplanes and furnaces, for example -- in the sense that SCs' symbols and symbol manipulations are "systematically interpretable" as being equivalent to the things they simulate ("Turing Equivalence").

Now here is an important distinction that consistently fails to be made: Turing equivalence does not mean (type) identity. A symbolic simulation of an airplane or a furnace is not identical to an airplane or a furnace; in particular, it lacks certain critical properties -- the essential ones in the case of planes and furnaces: A SC can neither fly nor heat. It is merely interpretable as if it flew or heated.
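
The same point can be made with a toy example (again mine, and purely schematic): a "furnace simulation" computes nothing but symbols that are interpretable as rising temperatures; at no point does anything get warm.

    # Illustrative "simulated furnace": its outputs are interpretable as
    # temperatures, but the program heats nothing.
    def simulate_furnace(room_c, setpoint_c, minutes, rate_c_per_min=0.5):
        """Return a list of numbers interpretable as the room temperature."""
        trace = [room_c]
        for _ in range(minutes):
            nxt = min(setpoint_c, trace[-1] + rate_c_per_min)
            trace.append(nxt if trace[-1] < setpoint_c else trace[-1])
        return trace

    print(simulate_furnace(15.0, 21.0, 5))  # [15.0, 15.5, 16.0, 16.5, 17.0, 17.5]

The trace is Turing-equivalent to a description of a furnace warming a room; it is not a furnace.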

It is a characteristic symptom of being lost in the hermeneutic hall of mirrors even to have to be reminded of the foregoing. To put it another way: From the fact that everything is Turing Equivalent to a SC it does not follow that everything is a SC. And pure SCs are the only target of Searle's criticism (which is not to say that they are not a formidable target).

We now need to remind ourselves again about the specific question that is at the heart of all this: My interpretation that this particular kind of system -- a pure symbol cruncher that can pass the TT -- has a mind: Is that interpretation correct? It will clearly not do to say, by way of reply, that it must be correct because everything is just a SC. That's simply not true. What may be true is that everything is Turing Equivalent to a SC, and can be simulated by a SC whose symbols and symbol manipulations can be systematically interpreted as if they were that (nonsymbolic) thing. But since in the case of having a mind ("thinking," for short) the question is about the presence or absence of something about which there is a fact of the matter -- be it ever so difficult to confirm without being the thing in question -- the answer surely cannot be a guaranteed "yes" simply because of the formal properties of SCs, be they ever so interpretable as having minds. For if that were true, it would be just as much a fact about SCs that they could heat or fly, simply because they could be systematically interpreted as if they could.

So the interpretation that something really thinks (or flies, or heats) must require stronger grounds than merely that the interpretation can be systematically made -- stronger grounds, in other words, than the TT, because pen-pal interactions are completely parasitic on the interpretations we project on them. Once we start to interpret the scribblings of our correspondent as meaning what they can be interpreted to mean (and as long as the interpretation continues to be systematically sustained, which is the premise of the TT), we're lost in the same hermeneutic hall of mirrors where the interpreters of ape "language" (see Terrace 1979) found themselves -- along with the interpreters of "virtual systems" in their computers (Pylyshyn 1984), and those who enjoin us to interpret a chess-playing computer program as thinking it should get its queen out early (Dennett 1983). The property of systematicity (which, let it be repeated, is assumed as a premise in all of this) guarantees that the interpretation will continue to corroborate itself: That's the hermeneutic circle.

Is there a way out of the hermeneutic circle? Normally, because of the other-minds problem, there is not. But in the special case of pure symbol crunching it happens that there is. I call it "Searle's Periscope," and it provides a privileged peek at the other side of the other-minds barrier in this one special case. Recall that normally it is impossible to say for sure one way or the other whether any system has a mind except by being that system, and normally the only system you can be is the one you are already, which is not the one at issue in the other-minds problem. The real target of Searle's criticism, however, is a hypothesis I have dubbed "symbolic functionalism" (also known as "Strong AI," and the "symbolic" or "representational" theory of mind) according to which thinking is just symbol crunching (T=SC).

T=SC has three prominent strengths, plus one fatal weakness, the one that makes it penetrable by Searle's Periscope. Two of its strengths have already been mentioned: Turing computational power and the successes of AI. The third strength pertains to the mind/body problem itself: We all (if we admit it) have difficulty seeing how a mental state could be the same as a physical state. To be more specific, if someone gave us a complete physical explanation of what was going on in the brain during a toothache, even if we were prepared to believe that the explanation was true and complete, we'd still have trouble seeing how or why the feeling of having a toothache was the same as the activity of the physical system that had just been described to us. Why should such a system, exactly as described, have any feelings at all? What is it about the physical system that makes the interpretation that it has a toothache correct ? The mental state and the physical description don't seem to coalesce into a happy unit; they still seem somehow disjoint.

The third strength of T=SC was accordingly that it was supposed to give us some intuitive closure on this disjointness of the mental and the physical: The insight was supposed to come from the software/hardware distinction and the implementation-independence of the symbolic level of description. For, according to T=SC, what makes a mental state mental is simply that it is the implementation of the right symbol system. But that symbol system can be implemented in countless ways, many of them differing radically in their physical realization. So we need trouble ourselves no further about what it is about the physical implementation that makes it mental: It is only that it is the implementation of the right symbol system. The particulars of the hardware are irrelevant. The conceptual disjointness of the mental and the physical was merely a valid reflection of the independence of the symbolic and implementational levels of description! Yet another reason for believing T=SC.

Let's call this special feature of T=SC "teleportability": Discover the right symbols and symbol manipulations for being a mental state and every system that implements those symbols and symbol manipulations must have that mental state, purely because it is an implementation of the right symbol system; the mental state is teleportable. Well, Searle simply took T=SC at its word on this score, and followed it through to its absurd conclusion: For unlike the computer over there, doing the symbol crunching, and powerless to confirm or disconfirm whether our interpretation that it understands Chinese is correct, Searle, having memorized the requisite symbols and symbol manipulation rules, and having thereby become (by teleportation) yet another implementation of the same symbol system, is in a position to report that he in fact does not understand Chinese. And, in virtue of this special Periscope on the alleged mind of that computer over there, it couldn't be understanding Chinese either, at least not purely in virtue of being the implementation of the same symbol system. QED.

Now that really should have been the end of it. A specific hypothesis -- thinking is symbol crunching -- looks promising, but then is found to lead to an absurd conclusion, so it is abandoned. But what do believers in T=SC do? What do they produce by way of counterargument or counterevidence in order to resurrect the hypothesis that has failed? Why, the hypothesis itself, yet again, this time wrapped in still more elaborate interpretations of precisely the sort that Searle's argument showed to be incorrect! For the "System Reply" is merely a reiteration of T=SC, at a higher and higher counterfactual (I would call it science-fictional) price: We are to suppose that Searle, purely in virtue of having memorized a lot of meaningless symbols, is now suffering from multiple personality disorder: He has another mind, but doesn't know it. He must have another mind! After all, thinking is just symbol crunching, so there's got to be another mind in there someplace! (This reminds one of nothing so much as the optimistic little boy looking confidently for the proverbial pony in the pile...) Unfortunately, all evidence (except to the minds of those who are hopelessly lost in their hermeneutic hall of mirrors and merely reading off the reflections of their own projected interpretations) is that multiple personality is caused by early sexual abuse, and that the only mental state that memorizing a bunch of meaningless symbols can cause is intense boredom, perhaps a headache.

A second rebuttal, again just a reiteration of the discredited hypothesis itself, is that Searle must be wrong, because our own neurons are just as vulnerable to his argument, and we know they can think. Unfortunately, this depends on presupposing that the reason neurons can think is that they are just doing symbol crunching (T=SC), which is what has just been shown to be incorrect! Brains no more think in virtue of crunching symbols than planes fly or furnaces heat in virtue of crunching symbols. Nor would symbolic simulations of any of these three systems (which may well be possible, because of the Church/Turing Thesis to the effect that just about everything is simulable by a SC), think, fly or heat, respectively, be they ever so systematically interpretable as doing so. So truths about neurons, whatever they are, do not vindicate SCs.

A third rebuttal, again symptomatic of being unable to see beyond the hall of mirrors, is: Well, this would leave no alternative but dualism and mysticism, for what else could it be but T=SC? Answer: Nonsymbolic functions such as transduction and analog transformations could play a much more important role than symbol crunching in implementing a mind in a robot, grounding the interpretations of the robot's internal symbols in its sensorimotor capacity to discriminate and identify the real objects and states of affairs its symbols stand for (Harnad 1987, 1990a). There are more functions under the sun than are dreamt of in symbolic functionalist philosophy. For a robotic functionalist like myself, the TTT is the critical test, not the TT, with its complete reliance on hermeneutics. The TTT is no guarantor either, of course; hence groundedness may not be sufficient to ensure the presence of mental states, but at least it gets us out of the hermeneutic circle that leaves Dyer unable to distinguish between the real world and symbolic simulations of it. And surely groundedness is as necessary to thinking as air-borneness is to flying.

I will close with some brief specific disagreements with Dyer:

(1) The TT has a formal empirical constraint and an informal intuitive one. The former is that the system must be able to do anything a pen-pal can do; the latter is that we must be unable to distinguish it from a real pen pal. And the Test is as open-ended as a lifetime. That's all. The TT Dyer describes has some rather arbitrary stipulations.

(2) The "COUNT" simulation adds nothing new to the discussion; it just substitutes a system that is interpretable as understanding arithmetic for a system that is interpretable as understanding Chinese. Gussying it up with parts that are interpretable as "knowledge," "concepts," "episodic memory," and "natural language understanding" (!), not to mention "realizing" and "deciding," just gets you more and more lost in the hermeneutic hall of mirrors, so you forget that those interpretations are all of the same kind, all come from the same source (the interpreter), and all are open to the same objection. So one cannot be used to validate the other. (Reminder: An abacus does not have a mind for numbers either, be it ever so interpretable as adding them. And the fact that some real people do mathematics as mindless symbol manipulation, the way Searle does Chinese, provides no support for T=SC, any more than it implies that such people have an alternate personality who is doing it mindfully...)

(3) The fact that conscious minds have unconscious processes has no bearing whatsoever on the question of whether symbol manipulation is mental, or even whether unconscious mental processes are symbolic. Just as one cannot help oneself to conscious mentalistic interpretations without first providing evidence that breaks out of the interpretative circle, one cannot help oneself to unconscious mentalistic interpretations: For, until further notice, only conscious minds have unconscious "mental" processes. All other unconscious processes are just that: Unconscious, mindless processes, as in teacups and thermostats.

(4) By the same token, "second-order" mental states (realizing that you see, being aware that you want, knowing that [or how!] you know, etc.) are not the issue: Capturing them is a piece of cake once you have captured first-order mental states (feeling, seeing, wanting, thinking -- in other words, anyone's being home at all). But Dyer either simply presupposes first-order states or thinks second-order states can be hung on a skyhook.

(5) The radio analogy is irrelevant; a radio is not just a SC (though a simulated radio would be).

(6) As stated above, the neuron analogies are irrelevant; until further notice, neurons are neither symbols nor symbol crunchers. And only SCs are vulnerable to Searle's argument.

(7) In listing the conceivable outcomes after Searle memorizes the symbols and passes the TT (temporally split, spatially split and merged personalities), Dyer seems to leave out a fourth possibility altogether: No new personality, just meaningless symbol crunching. You apparently have to be outside the hermeneutic hall of mirrors to remember that one...

(8) Dyer's "Information Processing Reply" seems the most circular of all, amounting to "If T=SC then T=SC." Yet both Searle's argument and the symbol grounding problem suggest that T=SC is simply wrong. Nor is the Gothic Church analogy any help, because a Gothic Church is not a SC either (and a symbolically simulated Gothic Church is not a Gothic Church).

(9) Symbol grounding is decidedly not just "setting up a physical correspondence between the symbols in the dictionary and actual objects and actions in the real world," because that still leaves the door wide open for a simulated correspondence, mediated by the mind of the interpreter. In other words, that's still just hermeneutics. Grounding breaks out of the hermeneutic circle by requiring a causal connection between the symbols and their real objects: A TTT-scale robot must have the capacity, indistinguishable from our own, to discriminate, manipulate, categorize, name and describe the real objects and states of affairs in the real world to which its symbols refer, on the basis of the real sensory projections of those real objects and states of affairs. A pure SC could simulate this, could help us test and design it, but it could not be the real system, just as it could not be a real plane or furnace. Nor is it likely that this real TTT-scale system (or our brains) will turn out to be just a SC that does most of the work, plus transducers to connect it to objects "in the right way"; it is more probable that the lion's share of the work will be nonsymbolic (Harnad 1987). But either way, a pure SC will be mindless and ungrounded.

(10) Simulated environments can only produce simulated grounding, just as simulated airplanes can only produce simulated flight. And SCs can't pass the real TTT, only the "simulated TTT," which is really just the TT all over again.

(11) The fact that a computer has transducers is true but irrelevant. Of course a SC has to be implemented as a SC, otherwise it's just an abstraction. But an implemented SC is the wrong kind of device for certain functions, such as flying, heating and thinking. For those you need a plane, a furnace and a TTT-scale robot (none of which is usefully described as a SC plus transducers).

(12) The artificial life analogy is irrelevant because no one ever claimed that life was just symbol crunching.

FOOTNOTES

1. Searle (1980) would say "intrinsic" rather than "correct," but I think correct is clearer. Searle distinguishes "intrinsic meaning," which the thoughts and utterances of things with minds have, from "derived meaning," which mindless things like books and computers have: The meanings of the words in a book or the states of a computer, be they ever so systematically interpretable, are not intrinsic to the book or the computer. To put it another way, they don't mean anything to the book or the computer, because neither the book nor the computer is the kind of thing that anything means anything to. The words and states have meaning only because real minds like ours project meaning onto them by interpreting them meaningfully. It seems much simpler to come right out and say that the question of whether or not meaning is intrinsic to a system is just the question of whether or not the interpretation that the system has a mind is correct. Searle (1990) now seems to be coming closer to saying just that.

REFERENCES

Dennett, D. C. (1983) Intentional systems in cognitive ethology. Behavioral and Brain Sciences 6: 343-390.

Fodor, J.A. & Pylyshyn, Z.W. (1988) Connectionism and Cognitive Architecture: A Critical Analysis. Cognition 28: 3-71.

Harnad, S. (ed.) (1987) Categorical Perception: The Groundwork of Cognition. New York: Cambridge University Press.

Harnad, S. (1989) Minds, Machines and Searle. Journal of Experimental and Theoretical Artificial Intelligence 1: 5-25.

Harnad, S. (1990a) The Symbol Grounding Problem. Physica D 42: 335-346.

Harnad, S. (1990b) Other Bodies, Other Minds: A Machine Reincarnation of an Old Philosophical Problem. Minds and Machines 1: (in press)

Newell, A. (1980) Physical Symbol Systems. Cognitive Science 4: 135-183.

Pylyshyn, Z. W. (1984) Computation and cognition. Cambridge MA: MIT/Bradford.

Searle, J. R. (1980) Minds, brains and programs. Behavioral and Brain Sciences 3: 417-457.

Searle, J.R. (1990) Consciousness, Explanatory Inversion and Cognitive Science. Behavioral and Brain Sciences 13: (in press)

Terrace, H. (1979) Nim. NY: Random House.

Turing, A. M. (1964) Computing machinery and intelligence. In: Minds and machines, A.R. Anderson (ed.), Englewood Cliffs NJ: Prentice Hall.