" DST="" -->
Harnad, S. (1994) Does the Mind Piggy-Back on Robotic and Symbolic Capacity? To appear in: H. Morowitz & J. Singer (eds.) "The Mind, the Brain, and Complex Adaptive Systems." Santa Fe Institute/Addison Wesley.
ABSTRACT: Cognitive science is a form of "reverse engineering" (as Dennett has dubbed it). We are trying to explain the mind by building (or explaining the functional principles of) systems that have minds. A "Turing" hierarchy of empirical constraints can be applied to this task, from t1, toy models that capture only an arbitrary fragment of our performance capacity, to T2, the standard "pen-pal" Turing Test (total symbolic capacity), to T3, the Total Turing Test (total symbolic plus robotic capacity), to T4 (T3 plus internal [neuromolecular] indistinguishability). All scientific theories are underdetermined by data. What is the right level of empirical constraint for cognitive theory? I will argue that T2 is underconstrained (because of the Symbol Grounding Problem and Searle's Chinese Room Argument) and that T4 is overconstrained (because we don't know what neural data, if any, are relevant). T3 is the level at which we solve the "other minds" problem in everyday life, the one at which evolution operates (the Blind Watchmaker is no mind-reader either) and the one at which symbol systems can be grounded in the robotic capacity to name and manipulate the objects their symbols are about. I will illustrate this with a toy model for an important component of T3 -- categorization -- using neural nets that learn category invariance by "warping" similarity space the way it is warped in human categorical perception: within-category similarities are amplified and between-category similarities are attenuated. This analog "shape" constraint is the grounding inherited by the arbitrarily shaped symbol that names the category and by all the symbol combinations it enters into. No matter how tightly one constrains any such model, however, it will always be more underdetermined than normal scientific and engineering theory. This will remain the ineliminable legacy of the mind/body problem.
Those attending this conference and those reading the published volume of papers arising from it will be struck by the radical shifts in focus and content among the various categories of contribution. Immediately preceding mine, you have heard the two most neurobiological of the papers. Pat Goldman-Rakic discussed internal representation in the brains of animals and Larry Squire discussed the brain basis of human memory. Others are presenting data about human behavior, others about computational models, and still others about general classes of physical systems that might share the relevant properties of these three domains -- brain, behavior, and computation -- plus, one hopes, a further property as well, namely, conscious experience: this is the property that, as our brains do whatever they do, as our behavior is generated, as whatever gets computed gets computed, there's somebody home in there, experiencing experiences during most of the time the rest of it is all happening.
It's the status of this last property that I'm going to discuss first. Traditionally, this topic is the purview of the philosopher, particularly in the form of the so-called "mind/body" problem, but these days I find that philosophers, especially those who have become very closely associated with cognitive science and its actual practice, seem to be more dedicated to minimizing this problem (or even declaring it solved or nonexistent) than to giving it its full due, with all the perplexity and dissatisfaction that this inevitably leads to. So although I am not a philosopher, I feel it is my duty to arouse in you some of this perplexity and dissatisfaction -- if only to have it assuaged by the true philosophers who will also be addressing you here.
So here it is: The mind/body problem is a conceptual difficulty we all have with squaring the mental with the physical (Harnad 1982b, 1991). There is no problem in understanding how or why a system could be generating, say, pain behavior: withdrawal of a body part when it is injured, disinclination to use it while it is recovering, avoidance of future situations that resemble the original cause of the injury, and so on. We have no trouble equating all of this with the structure and function in some sort of organ, like the heart, but one whose function is to respond adaptively to injury. Not only is the structural/functional story of pain no conceptual problem, but neither is its evolutionary aspect: It is clear how and why it would be advantageous to an organism to have a nociceptive system like ours. The problem, however, is that while all those adaptive functions of pain are being "enacted," so to speak, we also happen to feel something; there's something it's like to be in pain. Pain is not just an adaptive structural/functional state, it's also an experience, it has a qualitative character (philosophers call such things "qualia") and we all know exactly what that experience (or any experience) is like (what it is like to have qualia); but, until we become committed to some philosophical theory (or preoccupied with some structural/functional model), we cannot for the life of us (if we admit it) see how any structural/functional explanation explains the experience, how it explains the presence of the qualia (Nagel 1974, 1986).
Philosophers are groaning at this point about what they take to be an old canard that I am resurrecting to do its familiar, old, counterproductive mischief to us all over again. Let me quickly anticipate my punchline, to set them at ease, and then let me cast the problem in a thoroughly modern form, to show that this is not quite the same old profitless, irrelevant cavil that it has always been in the history of human inquiry into the mind. My punchline is actually a rather benign methodological one: Abstain from interpreting cognitive models mentalistically until they are empirically complete. Then you can safely call whatever you wish of their inner structures, functions or states "qualia," buttressed by your favorite philosophical theory of why it's perfectly okay to do so, and how nothing is really left out by this, even if you may think, prephilosophically, that something's still missing. It's okay because the empirical story will be complete, so the mentalistic hermeneutics can no longer do any harm (and are probably correct). If you do it any earlier, however, you are in danger of over-interpreting a mindless toy model with a limited empirical scope, instead of widening its empirical scope, which is your only real hope of capturing the mind.
That's the methodological point, and I don't think anyone can quarrel with it in this form, but we will soon take a closer look at it and you will see that it has plenty of points on which there is room for substantive disagreement, with corresponding implications for what kind of empirical direction your research ought to take.
Now I promised to recast the problem in a modern form. Here it is. In the 1950's, the logician Alan Turing, one of the fathers of modern computing, proposed a simple though since much misinterpreted "test" for whether or not a machine has a mind (Turing 1964, 1990): In the original version, the "Turing Test" went like this: You're at a party, and the game being played is that a woman and a man leave the room and you can send notes back and forth to each of them, discussing whatever you like; the object is to guess which is the man and which is the woman (that is why they are out of the room: so you cannot tell by looking). They try to fool you, of course -- and many people have gotten fixated on that aspect of the game (the attempted trickery), but in fact trickery has nothing to do with Turing's point, as you will see. What he suggested was that it is clear how we could go on and on, sending out different pairs of candidates, passing notes to and from each, and guessing, sometimes correctly, sometimes not, which is the man and which is the woman. But suppose that, unbeknownst to us, at some point in the game, a machine were substituted for one or the other of the candidates (it clearly does not matter which); and suppose that the game went on and on like that, notes continuing to be exchanged in both directions, hypotheses about which candidate is male and which female continuing to be generated, sometimes correctly, sometimes incorrectly, as before, but what never occurs to anyone is that the candidate may be neither male nor female, indeed, not a person at all, but a mindless machine.
Let's lay to rest one misconstrual of this game right away. It is unfortunate that Turing chose a party setting, because it makes it all seem too brief, brief enough so it might be easy to be fooled. So before I state the intuition to which Turing wanted to alert us all, let me reformulate the Turing Test not as a brief party game but as a life-long pen-pal correspondence. This is fully in keeping with the point Turing was trying to make, but it sets aside trivial tricks that could depend on the briefness of the test-period, or the gullibility (or even the drunkenness) of the testers on that particular night (Harnad 1992b).
Now the conclusion Turing wished to have us draw from his test: He wanted to show us that if there had never been anything in our correspondence that made us suspect that our pen-pal might not be a person, if his pen-pal performance was indistinguishable (this property has now come to be called "Turing-Indistinguishability") from that of a real person, and, as I've added for clarity, indistinguishable across a life-time, then, Turing wished to suggest to us, if we were ever informed that our pen-pal was indeed a machine rather than a human being, we would really have no nonarbitrary reasons for changing our minds about what we had, across a lifetime of correspondence, quite naturally inferred and countlessly confirmed on the basis of precisely the same empirical evidence on which we would judge a human pen-pal. We would not, in other words, have any nonarbitrary reason for changing our minds about the fact that our correspondent had a mind just because we found out he was a machine.
Be realistic. Think yourself into a life-long correspondence with a pen-pal whom you have never seen but who, after all these years, you feel you know as well as you know anyone. You are told one day by an informant that your pen-pal is and always has been a machine, located in Los Alamos, rather than the aging playboy in Melbourne you had known most of your life. Admit that your immediate intuitive reaction would not be "Damn, I've been fooled by a mindless machine all these years!" but rather an urge to write him one last letter saying "But how could you have deceived me like this all these years, after everything we've been through together?"
That's the point of the Turing Test, and it's not a point about operational definitions or anything like that. It is a point about us, about what we can and can't know about one another, and hence about what our ordinary, everyday judgments about other minds are really based on. These judgments are not based on anything we know about either the biology or the neurobiology of mind -- because we happen to know next to nothing about that today, and even what we know today was not known to our ancestors, who nevertheless managed to attribute minds to one another without any help from biological data: In other words, they did it, and we continue to do it, on the basis of Turing Testing alone: If it's totally indistinguishable from a person with a mind, well then, it has a mind. No facts about brain function inform that judgment, and they never have.
But does Turing Testing really exhaust the totality of the facts on which judgments about mentality are based? Not quite. Another unfortunate feature of Turing's original party game was the necessity of the out-of-sight constraint, so that you would not be cued by the appearance of the candidate -- in the first instance, so you could not see whether it was a man or a woman, and in the second instance, so you would not be prejudiced by the fact that it looked like a machine. But in this era of gender-role metamorphosis, sex-change operations, and loveable cinematic robots and extraterrestrials, I think the "appearance" variable can be safely let out of the closet (as it could have been all along, given a sufficiently convincing candidate): My quite natural generalization of Turing's original Turing Test (henceforth T2) is the "Total Turing Test" (T3), which calls for Turing-Indistinguishability not only in the candidate's pen-pal capacities, in other words, its symbolic capacities, but also in its robotic capacities: its interactions with the objects, events and states of affairs in the world that its symbolic communications are interpretable as being about. These capacities too must now be totally indistinguishable from our own (Harnad 1989).
Unlike the still further requirement of neurobiological indistinguishability (which I will now call T4), T3, robotic indistinguishability (which of course includes T2, symbolic indistinguishability, as a subset) is not an arbitrary constraint that we never draw upon in ordinary life. The symbolic world is very powerful and evocative, but we would certainly become suspicious of a pen-pal if he could never say anything at all about objects that we had enclosed with our letters (e.g., occasional photographs of our aging selves across the years, but in principle any object, presented via any sense modality, at all). And please set aside the urge you are right now feeling to think of some trick whereby a purely symbolic T2 system could get around the problem of dealing with objects slipped in along with its symbols. (I will return to the subject of symbol systems later in this paper.) The point is that trickery is not the issue here. We are discussing the T-hierarchy (T2-T4 -- I'll get to t1 shortly) as an empirical matter, and the candidate must be able to handle every robotic contingency that we can ourselves handle.
I hope you are by now getting the sense that T3 is a pretty tall order for a machine to fill (not that T2 was not quite a handful already). Far from being a call for trickery, successfully generating T3 capacity is a problem in "reverse engineering," as Dan Dennett (in press) has aptly characterized cognitive science. Determining what it takes to pass T3 is an empirical problem, but it is not an empirical problem in one of the basic natural sciences like physics or chemistry, the ones that are trying to discover the fundamental laws of nature. It is more like the problems in the engineering sciences, which apply those laws of nature so as to build systems with certain structural and functional capacities, such as bridges, furnaces, and airplanes, only in cognitive science the engineering must be done in reverse: The systems have already been built (by the Blind Watchmaker, presumably), and we must figure out what causal mechanism gives them their functional capacities -- one way to do this being to build or simulate systems with the same functional capacities.
Well, if it's clear how the direct engineering problem of building a system that can fly is not a matter of trickery -- not a matter of building something that fools us into thinking it's flying, whereas in reality it is not flying -- then it should be equally clear how the reverse engineering problem of generating our T3 capacities is not a matter of trickery either: We want a system that can really do everything we can do, Turing-Indistinguishably from the way we do it, not just something that fools us into thinking it can. In other words, the T tests are answerable both to our intuitions and to the empirical constraints on engineering possibilities.
Could a successful T3 candidate -- a robot that walketh Turing-Indistinguishably among us for a lifetime, perhaps one or more of the "people" in this room right now -- could such a successful candidate be a trick? Well, a trick in what sense? one must at this point ask. That it does have full T3 capacity (and that this capacity is autonomous, rather than being, say, telemetered by another person who is doing the real work -- otherwise that really would be cheating) we are here assuming as given (or rather, as an empirical engineering problem that we have somehow already solved successfully), so that can't be the basis for any suspicions of trickery -- any more than a plane with flight capacities indistinguishable from those of a DC-11 (assuming, again, that it is autonomous, rather than guided by an elaborate external system of, say, magnets) can be suspected of flying by trickery. Intuitions may differ as to what to call a plane with an internal system of magnets that could do everything a DC-11 could do by magnetic attraction and repulsion off distant objects. Perhaps that should be described as another form of flight, perhaps as something else, but what should be clear here is that the task puts some very strong constraints on the class of potentially successful candidates, and these constraints are engineering constraints, which is to say that they are empirical constraints.
This is the point to remind ourselves of the general problem of underdetermination of theories and models, both in the basic sciences and in engineering, direct and reverse: There is never any guarantee that any empirical theory that fits all the data -- i.e., predicts and causally explains them completely -- is the right theory. Here too, we don't speak about trickery, but of degrees of freedom: Data constrain theories, they cut down on the viable options, but they don't necessarily reduce them to zero. More than one theory may successfully account for the same data; more than one engineering system may generate the same capacities. Since we are assuming here that the empirical scope of all the alternative rival candidates is the same -- that they account for all and only the same data, that they have all and only the same capacities -- the sole remaining constraints are those of economy: Some of the candidates may be simpler, cheaper, or what have you.
But economy is not a matter of trickery or otherwise either. Perhaps we shouldn't have spoken of trickery, then, but of "reality." In physics, we want to know which theory is the "real" theory, rather than some lookalike that can do the same thing, but doesn't do it the way Mother Nature happens to have done it (Harnad 1993e). Let us call this problem the problem of normal underdetermination: If we speak of complete or "Utopian" theories, the ones that successfully predict and explain all data -- past, present and future -- there really is no principled way to pick among the rival candidates: Economy might be one, but Mother Nature might have been a spendthrift. You just have to learn to live with normal underdetermination in physics. In practice, of course, since we are nowhere near Utopia, it is the data themselves that cut down on the alternatives, paring down the candidates to the winning subset that can go the full empirical distance. (I will return to this.)
What are the data in that branch of reverse engineering called cognitive science? We've already lined them up: They correspond to the T-hierarchy I've been referring to. The first level of this hierarchy, t1, consists of "toy" models, systems that generate only a subtotal fragment or subset of our total capacity[1]. Being subtotal, t1 has a far greater degree of underdetermination than necessary. In pre-Utopian physics, every theory is t1 until all the data are accounted for, and then the theory has scaled up directly to T5, the Grand Unified Theory of Everything. The T2 - T4 range is reserved for engineering. In this range, there are autonomous systems, subsets of the universe, such as organisms and machines, with certain circumscribed functional capacities. In conventional forward engineering, we stipulate these capacities in advance: We want a bridge that can span a river, a furnace that can heat a house, or a rocket that can fly to the moon. In reverse engineering these capacities were selected by the Blind Watchmaker: organisms that can survive and reproduce, fish that can swim, people that can talk. Swimming and talking, however, being subtotal and hence t1[2], have unacceptably high degrees of freedom. They open the door to ad hoc solutions that lead nowhere. In forward engineering, it would be as if flight engineers decided to model only a plane's capacity to fall, hop, or coast on the ground: there might be hints there as to how to successfully generate flight, but more likely t1 would send you off on countless wild-goose chases, none of them headed for the goal. In reverse engineering this goal is T3: human "robotic" capacity; its direct-engineering homologue would be the flight capacity of a DC-11.
There is room for further calibration in forward engineering too. Consider the difference between the real DC-11 and its T3-Indistinguishable (internal) magnetic equivalent. They are T3 equivalent, but T4-distinguishable, and for economic reasons we may prefer the one or the other. T2, in this case, would be a computer simulation of an airplane and its aeronautic environment, a set of symbols that was systematically interpretable as a plane in flight, just as a lifelong exchange of symbols with a T2 pen-pal is systematically interpretable as a meaningful correspondence with a person with a mind. We clearly want a plane that can really fly, however, and not just a symbol system that can be interpreted as flying, so the only role of T2 in aeronautic engineering is in "virtual world" testing of symbolic planes prior to building real T3 planes (planes with the full "robotic" capacities of real planes in the real air).[3]
Now what about reverse engineering and its underdetermination problems? Here is a quick solution (and it could even be the right one, although I'm betting it isn't, and will give you my reasons shortly). The reverse engineering case could go exactly as it does in the case of forward engineering: You start with t1, modelling toy fragments of an organism's performance capacity, then you try to simulate its total capacity symbolically (T2) and to implement the total capacity robotically (T3), while making sure the candidate is also indistinguishable neuromolecularly (T4) from the organism you are modelling, in other words, totally indistinguishable from it in all of its internal structures and functions.
Why am I betting this is not the way to go about the reverse engineering of the mind? Well, first, I'm impressed by the empirical difficulty of achieving T3 at all, even without the extra handicap of T4 constraints, especially in the human case. Second, empirically speaking, robotic modelling (T3) and brain modelling (T4) are currently, and for the foreseeable future, independent data-domains. Insofar as T3 is concerned, the data are already in: We already know pretty much what it is that people can do; they can discriminate, manipulate, categorize and discourse about the objects, events and states of affairs in the world roughly the way we can; our task is now to model that capacity. Now T3 is a subset of T4; in other words, what people can do is a subset of what their brains can do, but there is a wealth of structural and functional detail about the brain that may not only go well beyond its robotic capacity, but well beyond what we so far know about the brain empirically. In other words, there is no a priori way of knowing how much of all that T4 detail is relevant to the brain's T3 capacity; worse yet, nothing we have learned about the brain so far, empirically or theoretically, has helped us model its T3 capacities.[4]
So since (1) the T3 data are already in whereas the T4 data are not, since (2) the T4 data may not all be relevant to T3, and may even overconstrain it, and since (3) neither T4 data nor T4 theory has so far helped us make any headway on T3, I think it is a much better strategy to assume that T3 is the right level of underdetermination for cognitive modelling. This conclusion is strengthened by the fact that (4) T3 also happens to be the right version of the Turing Test, the one we use with one another in our practical, everyday "solutions" to the other-minds problem. And one final piece of support may come from the fact that (5) the Blind Watchmaker is no mind-reader either, and as blind to functionally equivalent, Turing-Indistinguishable differences as we are, and hence unable to favor one T3-equivalent candidate over another.[5]
There are methodological matters here on which reasonable people could disagree; I am not claiming to have made the case for T3, the robotic level, over T4, the neural level, unassailably. But what about T2? There are those who think even T3 is too low a level for mind-modelling (Newell 1980, Pylyshyn 1984, Dietrich 1990). They point out that most if not all of cognition has a systematic, language-of-thought-like property that is best captured by pure symbol systems (Fodor 1975; Fodor & Pylyshyn 1988). If you want a model to keep in mind as I speak of symbol systems, think of a natural language like English, with its words and its combinatory rules, or think of formal arithmetic, or of a computer programming language. A symbol system is a set of objects, called "symbol tokens" (henceforth just "symbols") that can be rulefully combined and manipulated, and the symbol combinations can be systematically interpreted as meaning something, as being about something. If your model for a symbol system was English, think of words and sentences, such as "the cat is on the mat," and what they can be interpreted as meaning; for arithmetic, think of expressions such as "1 + 1 = 2" and for computing think of the English propositions or arithmetic expressions formulated in your favorite programming language.
The "shape" of the objects in a symbol system (think of words and numerals) is arbitrary in relation to the "shape" of the objects, events, properties or states of affairs that they can be systematically interpreted as being about: The symbol "cat" neither resembles nor is causally connected to the object it refers to; the same is true of the numeral "3." The rules for manipulating and combining the symbols in a symbol system are called "syntactic" because they operate only on the arbitrary shapes of the symbols, not on what they mean: The symbols' meaning is something derived from outside the symbol system, yet a symbol system has the remarkable property that it will bear the weight of a systematic interpretation. Not just any old set of objects, combined any old way, will bear this weight (Harnad 1993c). Symbol systems are a small subset of the combinatory things that you can do with objects, and they have remarkable powers: It was again Alan Turing, among others, who worked out formally what those powers were (see Turing 1990; Lewis & Papadimitriou 1981). They amount to the power of computation, the power to express and do everything that a mathematician would count as doing something -- and, in particular, anything a machine could do. The (Universal) Turing Machine is the archetype not only for the computer, but for any other machine, because it can symbolically emulate any other machine. It was only natural, then, for Turing to suppose that we ourselves were Turing machines.
The initial success of Artificial Intelligence (AI) seemed to bear him out, because, unlike experimental psychologists, unlike even behaviorists (Harnad 1984; Catania & Harnad 1988), whose specialty was predicting, controlling and explaining human behavior, AI researchers actually managed to generate some initially impressive fragments of behavior with symbol systems: chess-playing, question-answering, scene-describing, theorem-proving. Although these were obviously toy models (t1), there was every reason to believe that they would scale up to all of cognition, partly because they were, for a while, the only kind of model that worked, and partly because of the general power of computation. Then there was computation's systematic, language-of-thought-like property (Fodor 1975). And there was also the intuitive and methodological support from Turing's arguments for T2.
Symbol systems also seemed to offer some closure on the problem I raised at the beginning of this paper, the mind/body problem, for a symbol system's properties reside at the formal level: the syntactic rules for symbol manipulation. The symbols themselves are arbitrary objects, and can be physically realized in countless radically different ways -- as scratches on paper rulefully manipulated by people, as holes on a machine's tape, as states of circuits in many different kinds of computer -- yet these would all be implementations of the same symbol system. So if the symbolists' hypothesis is correct, that cognition is computation, and hence that mental states are really just implemented symbolic states, then it is no wonder that we have a mind/body problem in puzzling over what it might be about a physical state that makes it a mental state: for there is nothing special about the physical state except that it implements the right symbol system. The physical details are irrelevant. A radically different physical system implementing the same symbol system would be implementing the same mental state. (This is known in computer science as the hardware-independence of the software and virtual levels of description [Hayes et al. 1992].)
It was only natural, given all this, to conclude that any and every implementation of the symbol system that could pass T2 would have a mind. And so it was believed (and so it is still believed by many), although a rather decisive refutation of this hypothesis exists: For John Searle (1980) has pointed out the simple fact (based on the properties of symbols, syntax and implementation-independence) that we would be wrong to conclude that a symbol system that could pass T2 in Chinese would understand Chinese, because he himself could become another implementation of that same symbol system (by memorizing and executing its symbol-manipulation rules) without understanding a word of Chinese. He could be your life-long pen-pal without ever understanding a word you said. Now this, unlike prejudices about what machines and brains are and aren't, would be a nonarbitrary reason for revising your beliefs about whether your lifelong pen-pal had really had a mind, about whether anyone in there had really been understanding what you were saying.
Not that I think you would ever have to confront such an awkward dilemma, for there are good reasons to believe that a pure symbol system could never pass T2 (Harnad 1989): Remember the problem of the photo included with your letter to your pen-pal. A picture is worth more than a thousand words, more than a thousand symbols, in other words. Now consider all the potential words that could be said about all the potential objects, events, and states of affairs you might want to speak about. According to the symbolist hypothesis, all these further symbols and symbol combinations can be anticipated and generated by prior symbols and the syntactic rules for manipulating them (Harnad 1993g). I have likened this to an attempt to learn Chinese from a Chinese/Chinese dictionary. It seems obvious that if you do not know Chinese already then all you can do is go round and round in meaningless symbolic circles this way. To be sure, your quest for a definition would be as systematically meaningful to someone who already knew Chinese as the letter from your pen-pal would be, but the locus of that meaningfulness would not be the symbol system in either case, it would be the mind of the interpreter. Hence it would lead to an infinite regress if you supposed that the mind of the interpreter was just a symbol system too.
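The regress can be made vivid with a toy sketch (mine, using an invented miniature "dictionary" rather than real Chinese): every symbol is defined only by further symbols, so expanding definitions, to any depth, never yields anything but more ungrounded symbols.

    # The dictionary-go-round in miniature (an invented toy lexicon): every
    # entry is defined solely in terms of other entries, so lookup never
    # bottoms out in anything but more symbols.

    DICTIONARY = {
        "zebra":   ["striped", "horse"],
        "striped": ["marked", "with", "stripes"],
        "stripes": ["long", "narrow", "bands"],
        "horse":   ["large", "hoofed", "animal"],
        # ... "marked", "bands", "animal", etc. would in turn be defined by
        # still further entries, with no exit from the circle of symbols.
    }

    def expand(symbol, depth):
        """Expand a symbol's definition `depth` levels deep: all that ever
        comes back is more symbols, never the things they are about."""
        if depth == 0 or symbol not in DICTIONARY:
            return symbol
        return [expand(s, depth - 1) for s in DICTIONARY[symbol]]

    print(expand("zebra", 3))   # nested lists of symbols -- no zebras anywhere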
I've dubbed this the "symbol grounding problem" (Harnad 1990a): The meanings of the symbols in a pure symbol system are ungrounded; they are systematically interpretable by a system with a mind, but the locus of that interpretation is the mind of the interpreter, rather than the symbol system itself (just as the meaning of a book is in the minds of its readers, rather than in the book: the book is merely a bunch of symbols that can be systematically interpreted as meaningful by systems with minds). So if we are any kind of symbol system at all, we are surely grounded symbol systems, because the symbols in our heads surely do not mean what they mean purely in virtue of the fact that they are interpretable as meaningful by still other heads!
How to ground symbols? How to connect them to the objects, features, events and states of affairs that they are systematically interpretable as being about, but without the mediation of other minds? An attempt to find a system that is immune to Searle's argument already suggests an answer, for Searle's argument only works against pure symbol systems and T2. The moment you move to T3, Searle's "periscope," the clever trick that allowed him to penetrate the normally impenetrable other-minds barrier and confirm that there was no one home in there, namely, the implementation independence of computation, fails, and all candidates are again safe from Searle's snooping: For robotic T3 properties, unlike symbolic T2 properties, are not implementation-independent, starting from the most elementary of them, sensory and motor transduction. Searle could manage to be everything the T2 passing computer was (namely, the implementation of a certain symbol system) while still failing to understand, but there is no way to be a T3 robot without, among other things, being its optical transducers and its motor effectors, while failing to see or move (unless you implement only the part that comes after the transducer, but then you're not being the whole system and all bets are off).
So I take immunity to Searle's argument to be another vote for T3; but that's only the beginning. Symbol grounding requires a direct, unmediated connection between symbol and object (Harnad 1992a). Transduction is a good first step, but clearly the connection has to be selective, since not all symbols are connected to all objects. This brings us to the problem -- and I emphasize that it is a T3 problem -- of categorization (Harnad 1987). We need a robot that can pick out and assign a symbolic name to members of object categories based on invariant features in their transducer projections -- the shadows that objects cast on our sense organs. The robot must be able to categorize everything Turing-indistinguishably from the way we do.
Now I said I would come back to the question of neural nets and "brain-style" computation. There is no space here to give my critique of (shall we call it) "hegemonic" connectionism or general complex or chaotic systems theories -- the kind that want to take over all of cognition from symbol systems and do it all on their own. If and when they make significant inroads on T3, such models will command our attention; until then they are tools, just like everything else, including symbol systems (Harnad 1993d). Nor are neural nets in any realistic sense "brain-like," for the simple reason that no one really knows what the brain is like (T4 is even further from our empirical reach than T3). Indeed, connectionist modellers often unwittingly play a double game, offering hybrid tinker toys (t1) whose performance limitations (T3) are masked by their spurious neurosimilitude and whose lack of true brain-likeness (T4) is masked by their toy performance capacities (Harnad 1990d; 1993a). It is probably better to keep T3 and T4 criteria separate for now.
On the other hand, as items in the general cognitive armoury, neural nets are naturals for the important toy task of learning the invariants in the sensory projections of objects that allow us to sort them and assign them symbolic category names. For once you have such elementary symbols, grounded in the robotic capacity to discriminate, identify and manipulate the objects they refer to, you can go on to combine them into symbol strings that define still further symbols and describe further objects and states of affairs. This amounts to breaking out of the Chinese/Chinese dictionary-go-round by giving the "dictionary" itself the capacity to pick out the objects its symbols are interpretable as being about, without any external mediation (Harnad 1990b,c).
For example, if you had looked up "ban-ma" in the Chinese/Chinese dictionary, you would have found it defined (in Chinese) as "striped horse," but that would be no help unless "striped" and "horse" were already grounded, somehow. My hypothesis is that grounding consists in the constraint exerted on the otherwise arbitrarily shaped symbols "horse" and "striped" by the nonarbitrary shapes of the analog sensory projections of horses and stripes and by the neural nets that have learned to filter those projections for the invariant features that allow things to be reliably called horses and stripes. The analog and feature-filtering machinery that connects the symbol to the projections of the objects it refers to exerts a functional constraint both on (1) the permissible combinations that those symbols can enter into, a constraint over and above (or, rather, under and below) the usual boolean syntactic constraints of a pure symbol system, and on (2) the way the robot "sees" the world after having sorted, labelled and described it in that particular way. The shape of the robot's world is warped by how it has learned to categorize and describe the objects in it.
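Schematically (and only schematically: the random "filters," the threshold detector and the conjunctive definition below are stand-in assumptions of mine, not the model itself), the idea is that ground-level symbols are attached to learned detectors operating on sensory projections, and that a symbol defined purely symbolically in terms of them, like "ban-ma," inherits that attachment.

    import numpy as np

    # A schematic sketch of grounding by composition. The "filters" here are
    # random stand-ins for trained invariance detectors; in the grounded
    # hybrid model they would be neural nets trained on sensory projections.

    rng = np.random.default_rng(0)
    W_horse = rng.normal(size=64)     # stand-in for a learned "horse" invariance filter
    W_striped = rng.normal(size=64)   # stand-in for a learned "striped" invariance filter

    def detector(weights, threshold=0.0):
        """A category detector applied to an analog sensory projection."""
        return lambda projection: float(weights @ projection) > threshold

    GROUNDED = {                      # ground-level symbols: name -> detector
        "horse": detector(W_horse),
        "striped": detector(W_striped),
    }

    def define(name, *constituents):
        """Define a higher-level symbol as a purely symbolic combination
        (here, a conjunction) of already grounded symbols; the new symbol
        thereby inherits their connection to sensory projections."""
        GROUNDED[name] = lambda projection: all(
            GROUNDED[c](projection) for c in constituents)

    define("ban-ma", "striped", "horse")      # "zebra" defined as "striped horse"

    projection = rng.normal(size=64)          # a stand-in sensory projection
    print(GROUNDED["ban-ma"](projection))     # True/False: does it pick out a zebra?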
I will now try to illustrate briefly the kind of dual analog/symbolic constraint structure that I think may operate in grounded hybrid symbol systems. Figure 1 illustrates the "receptive field" of an innate invariance filter, the red/green/blue feature detector in our color receptor system. Three kinds of units are selectively tuned to certain regions of the visual spectrum. Every color we see is then a weighted combination of the activity of the three, very much like linear combinations of basis vectors in a Euclidean vector space. This invariance filter is innate. One of its side-effects (though this is not the whole story, and indeed the whole story is not yet known; Zeki 1990) is what is called categorical perception (CP), in which the perceived differences within a color category are compressed and the perceived differences between different color categories are expanded (Harnad 1987). This "warping" of similarity space makes members of the same color category look quantitatively and even qualitatively more alike, and members of different categories more different, than one would predict from their actual physical differences. If color space were not warped, it would be a graded quantitative continuum, like shades of gray.
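As a purely illustrative sketch (the Gaussian tuning curves and peak wavelengths below are convenient assumptions of mine, not the measured cone sensitivities), each wavelength can be coded as the joint activity of three broadly tuned units -- a weighted combination along three basis-like dimensions.

    import numpy as np

    # Illustrative tuning curves only: three broadly tuned units whose joint
    # activity codes any visible wavelength as a 3-component vector.

    def tuned_unit(peak_nm, width_nm=60.0):
        """A unit selectively tuned (Gaussian profile) to one spectral region."""
        return lambda wavelength: np.exp(-((wavelength - peak_nm) / width_nm) ** 2)

    units = [tuned_unit(440), tuned_unit(530), tuned_unit(565)]   # "blue/green/red"-ish peaks

    def color_code(wavelength_nm):
        """The weighted-combination (vector) code for a given wavelength."""
        return np.array([u(wavelength_nm) for u in units])

    print(color_code(500))   # a point in the three-dimensional "color space"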
Color CP happens to be mostly innate, but CP effects can also be induced by learning to categorize objects in one way rather than another. My collaborators at Vassar College, Janet Andrews and Kenneth Livingston, and I were able to generate CP effects by teaching subjects, through trial and error with feedback, to sort a set of computer-generated Mondrian-like textures as having been painted by one painter or another (based on the presence of a subtle invariant feature that we did not describe explicitly to the subjects). The textures were rated for their similarity to one another by the subjects who had learned the categorization and by control subjects who had seen the textures equally often, but without knowing anything about who had painted what. Categorization compressed within-category differences and expanded between-category differences (Andrews et al., in preparation).
In an analogous computer simulation experiment (Harnad et al. 1991, 1994), we presented twelve lines varying in length to backpropagation neural nets, which had to learn to categorize them as "short," "medium" and "long." Compared to control nets that merely performed auto-association (responding with a line that matches the length of the input), the categorization nets, like the human subjects, showed CP, with within-category compression and between-category expansion (see Figure 2). The locus of the warping can be seen in how the "receptive fields" of the three hidden units changed as a result of categorization training (Figure 3), and the "evolution" of the warping throughout the training can be seen in hidden-unit space (Figure 4). The warping occurs because of the way such nets succeed in learning the categorization: First, during auto-association, the hidden-unit representations of each of the lines move as far away from one another as possible; with more analog inputs, this tendency is constrained by their analog structure; categorization is then accomplished by partitioning the cubic hidden-unit space into three regions with separating planes. The compression/dilation occurs because the hidden-unit representations must change their locations in order to get on the correct side of these planes, while still being constrained by their analog structure, with the separating planes exerting a "repulsive" force that is strongest at the category boundaries.
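The sketch below is a simplified stand-in for that simulation, under assumptions of my own (thermometer-coded "lines," a single three-hidden-unit sigmoid net trained by plain backpropagation without bias terms, arbitrary learning rate and epoch counts): it trains the net on auto-association, then continues training the same hidden layer on the three-way categorization, and compares mean pairwise distances between hidden-unit representations within and between categories before and after. In the published simulations, within-category distances shrank and between-category distances grew; what a toy run like this produces will depend on the coding and training details.

    import numpy as np

    rng = np.random.default_rng(0)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    # Twelve "lines" in a thermometer (analog) coding: line k has its first k units on.
    X = np.tril(np.ones((12, 12)))
    labels = np.repeat(np.eye(3), 4, axis=0)   # "short", "medium", "long": 4 lines each
    cat = labels.argmax(axis=1)

    def train(inputs, targets, W1, W2, epochs=5000, lr=0.5):
        """Plain full-batch backpropagation for a one-hidden-layer sigmoid net
        (weights are updated in place)."""
        for _ in range(epochs):
            H = sigmoid(inputs @ W1)           # hidden-unit activations
            O = sigmoid(H @ W2)                # output activations
            dO = (O - targets) * O * (1 - O)   # output-layer error signal
            dH = (dO @ W2.T) * H * (1 - H)     # backpropagated hidden-layer error
            W2 -= lr * H.T @ dO
            W1 -= lr * inputs.T @ dH

    def mean_distance(H, same_category):
        """Mean pairwise distance between hidden representations,
        within categories (same_category=True) or between categories."""
        d = [np.linalg.norm(H[i] - H[j])
             for i in range(12) for j in range(i + 1, 12)
             if (cat[i] == cat[j]) == same_category]
        return float(np.mean(d))

    # Phase 1: auto-association only (the control condition).
    W1 = rng.normal(scale=0.5, size=(12, 3))
    W2_auto = rng.normal(scale=0.5, size=(3, 12))
    train(X, X, W1, W2_auto)
    H_before = sigmoid(X @ W1)

    # Phase 2: continue training the same hidden layer on categorization.
    W2_cat = rng.normal(scale=0.5, size=(3, 3))
    train(X, labels, W1, W2_cat)
    H_after = sigmoid(X @ W1)

    print("within-category distance :", mean_distance(H_before, True), "->",
          mean_distance(H_after, True))
    print("between-category distance:", mean_distance(H_before, False), "->",
          mean_distance(H_after, False))
    # The CP pattern reported in the papers: within-category compression and
    # between-category expansion after categorization training.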
This is the kind of nonarbitrary shape constraint that would be "hanging" from every ground-level symbol in a grounded hybrid symbol system and would be inherited by the higher-level symbols defined in terms of the ground-level symbols. A "thought" in the head of such a robot would then not just be the activation of a string of symbols, but of all the analog and invariance-detecting machinery to which the symbols were connected, which would in turn ground them in the objects they were about.
Now we come to the question raised in the title of this paper: Does mind piggy-back on our robotic and symbolic capacities? My reply is that all we can do is hope so, because there will never be any way to know better. Even in a grounded robot -- one that is T3-indistinguishable from us, immune to Searle's argument, able to discriminate, identify, manipulate, and discourse about the objects, events and states of affairs that its symbols are systematically interpretable as being about, and able to do so without the mediation of any outside interpreter, thanks to the bottom-up constraints of its grounding -- it is still possible that there is no one home in there, experiencing experiences, no one for whom the duly grounded symbols are about what they're about. Just a mindless "Zombie" (Harnad 1993e).
If this Zombie possibility were actually realized, then we would have to turn to T4 to fine-tune the robot by tightening our Turing filter still further. But I have to point out that, without some homologue of Searle's periscope, there would be no way we could know that our T3-scale robot was just a Zombie, that T3 had hence been unsuccessful in generating a mind, and that we accordingly needed to scale up to T4. Nor would we have any way of knowing that there was indeed someone home in a T4 candidate either. I'm not inclined to worry about that sort of thing, however; in fact, as far as I'm concerned, T3 is close enough to Utopia so that you can call in the hermeneuticists at that point and mentally interpret it to your heart's content. I don't believe that a Zombie could slip through an empirical filter as tight as that. It's only the premature mentalistic interpretation of subtotal t1 toys or of purely symbolic T2 modules that I would caution against, for overinterpretation will invariably camouflage excess underdetermination, just as premature neurologizing does.
The degrees of underdetermination of mental modelling will always be greater than those of physical modelling, however, regardless of whether we prefer T3 or we hold out for T4, because believing we've captured the mental will always require a leap of faith that believing we've captured the physical does not. This has nothing at all to do with unobservability; quarks are every bit as unobservable as qualia (Harnad 1993e). But, unlike qualia, quarks are allowed to do some independent work in our explanation. Indeed, if quarks still figure in the Utopian Grand Unified Theory of Everything (T5), they will be formally indispensable; remove them and the theory no longer predicts and explains. But for the Utopian Theory of Mind, whether T3 or T4, the qualia will always be a take-it-or-leave-it hermeneutic option, and I think that's probably because allowing the qualia any independent causal role of their own would put the rest of physics at risk (Harnad 1982b; 1993f). So even if God could resolve the remaining indeterminacy, assuring us that our theory was not only successful and complete, but also the right theory, among all the possibilities that underdetermination left open, and hence that our hermeneutics was correct too, this, like any other divine revelation, would still call for a leap of faith on our part in order to believe. This extra element of underdetermination (and whatever perplexity and dissatisfaction it engenders) will remain the unresolvable residue of the mind/body problem.
Andrews, J., Livingston, K., Harnad, S. & Fischer, U. (in prep.) Learned Categorical Perception in Human Subjects: Implications for Symbol Grounding.
Catania, A.C. & Harnad, S. (eds.) (1988) The Selection of Behavior. The Operant Behaviorism of B. F. Skinner: Comments and Consequences. New York: Cambridge University Press.
Dennett, D.C. (in press) Cognitive Science as Reverse Engineering: Several Meanings of "Top Down" and "Bottom Up." In: Prawitz, D., Skyrms, B. & Westerstahl, D. (Eds.) Proceedings of the 9th International Congress of Logic, Methodology and Philosophy of Science. North Holland.
Dietrich, E. (1990) Computationalism. Social Epistemology 4: 135 - 154.
Fodor, J. A. (1975) The language of thought. New York: Thomas Y. Crowell.
Fodor, J. A. & Pylyshyn, Z. W. (1988) Connectionism and cognitive architecture: A critical appraisal. Cognition 28: 3 - 71.
Harnad, S. (1982a) Neoconstructivism: A unifying theme for the cognitive sciences. In: Language, mind and brain (T. Simon & R. Scholes, eds., Hillsdale NJ: Erlbaum), 1 - 11.
Harnad, S. (1982b) Consciousness: An afterthought. Cognition and Brain Theory 5: 29 - 47.
Harnad, S. (1984) What are the scope and limits of radical behaviorist theory? The Behavioral and Brain Sciences 7: 720 -721.
Harnad, S. (ed.) (1987) Categorical Perception: The Groundwork of Cognition. New York: Cambridge University Press.
Harnad, S. (1989) Minds, Machines and Searle. Journal of Theoretical and Experimental Artificial Intelligence 1: 5-25.
Harnad, S. (1990a) The Symbol Grounding Problem. Physica D 42: 335-346.
Harnad, S. (1990b) Against Computational Hermeneutics. (Invited commentary on Eric Dietrich's Computationalism) Social Epistemology 4: 167-172.
Harnad, S. (1990c) Lost in the hermeneutic hall of mirrors. Invited Commentary on: Michael Dyer: Minds, Machines, Searle and Harnad. Journal of Experimental and Theoretical Artificial Intelligence 2: 321 - 327.
Harnad, S. (1990d) Symbols and Nets: Cooperation vs. Competition. Review of: S. Pinker and J. Mehler (Eds.) (1988) Connections and Symbols. Connection Science 2: 257-260.
Harnad, S. (1991) Other bodies, Other minds: A machine incarnation of an old philosophical problem. Minds and Machines 1: 43-54.
Harnad, S. (1992a) Connecting Object to Symbol in Modeling Cognition. In: A. Clarke and R. Lutz (Eds.) Connectionism in Context. Springer Verlag.
Harnad, S. (1992b) The Turing Test Is Not A Trick: Turing Indistinguishability Is A Scientific Criterion. SIGART Bulletin 3(4) (October) 9 - 10.
Harnad, S. (1993a) Grounding Symbols in the Analog World with Neural Nets. Think 2: 12 - 78 (Special Issue on "Connectionism versus Symbolism" D.M.W. Powers & P.A. Flach, eds.).
Harnad, S. (1993b) Artificial Life: Synthetic Versus Virtual. Artificial Life III. Proceedings, Santa Fe Institute Studies in the Sciences of Complexity. Volume XVI.
Harnad, S. (1993c) The Origin of Words: A Psychophysical Hypothesis. In: Durham, W. & Velichkovsky, B. (Eds.) Muenster: Nodus Pub. [Presented at Zif Conference on Biological and Cultural Aspects of Language Development, January 20 - 22, 1992, University of Bielefeld]
Harnad, S. (1993d) Symbol Grounding is an Empirical Problem: Neural Nets are Just a Candidate Component. Proceedings of the Fifteenth Annual Meeting of the Cognitive Science Society. NJ: Erlbaum
Harnad S. (1993e) Discussion (passim) In: Bock, G. & Marsh, J. (Eds.) Experimental and Theoretical Studies of Consciousness. CIBA Foundation Symposium 174. Chichester: Wiley
Harnad, S. (1993f) Turing Indistinguishability and the Blind Watchmaker. Presented at London School of Economics Conference of "Evolution and the Human Sciences" June 1993.
Harnad, S. (1993g) Problems, Problems: The Frame Problem as a Symptom of the Symbol Grounding Problem. PSYCOLOQUY 4(34) frame-problem.11.
Harnad, S., Doty, R.W., Goldstein, L., Jaynes, J. & Krauthamer, G. (eds.) (1977) Lateralization in the nervous system. New York: Academic Press.
Harnad, S., Hanson, S.J. & Lubin, J. (1991) Categorical Perception and the Evolution of Supervised Learning in Neural Nets. In: Working Papers of the AAAI Spring Symposium on Machine Learning of Natural Language and Ontology (DW Powers & L Reeker, Eds.) pp. 65-74. Presented at Symposium on Symbol Grounding: Problems and Practice, Stanford University, March 1991; also reprinted as Document D91-09, Deutsches Forschungszentrum fur Kuenstliche Intelligenz GmbH Kaiserslautern FRG.
Harnad, S. Hanson, S.J. & Lubin, J. (1994) Learned Categorical Perception in Neural Nets: Implications for Symbol Grounding. In: V. Honavar & L. Uhr (eds) Symbol Processing and Connectionist Network Models in Artificial Intelligence and Cognitive Modelling: Steps Toward Principled Integration. (in press)
Harnad, S., Steklis, H. D. & Lancaster, J. B. (eds.) (1976) Origins and Evolution of Language and Speech. Annals of the New York Academy of Sciences 280.
Hayes, P., Harnad, S., Perlis, D. & Block, N. (1992) Virtual Symposium on Virtual Mind. Minds and Machines 2: 217-238.
Lewis, H. & C. Papadimitriou. (1981) Elements of the Theory of Computation (Englewood Cliffs, NJ: Prentice-Hall).
Nagel, T. (1974) What is it like to be a bat? Philosophical Review 83: 435 - 451.
Nagel, T. (1986) The view from nowhere. New York: Oxford University Press.
Newell, A. (1980) Physical Symbol Systems. Cognitive Science 4: 135 - 183.
Pylyshyn, Z. W. (1984) Computation and cognition. Cambridge MA: MIT/Bradford
Searle, J. R. (1980) Minds, brains and programs. Behavioral and Brain Sciences 3: 417-424.
Turing, A. M. (1964) Computing machinery and intelligence. In: Minds and machines. A. Anderson (ed.), Englewood Cliffs NJ: Prentice Hall.
Turing, A. M. (1990) Mechanical intelligence (D. C. Ince, ed.) North Holland
Zeki, S. (1990) Colour Vision and Functional Specialisation in the Visual Cortex. Amsterdam: Elsevier
1. Or, according to some more hopeful views, perhaps some autonomous, self-contained modular subcomponents of it.
2. Unless we luck out and they turn out to be autonomous modules that can be veridically modelled in total isolation from all other capacities -- a risky methodological assumption to make a priori, in my opinion.
3. There is of course also room for planes containing or telemetrically connected to T2 modules for computer-assisted "smart" flight, but that is another matter.
4. This is one of those controversial points I promised you, once we began fleshing out the T-hierarchy. I will speak about so-called "brain-style computation" and "neural nets" separately below; it is not at all clear yet whether these are fish or fowl, i.e., T2 or T4, whereas it is actually T3 we are after.
5. Especially if the capacities to survive and reproduce are counted among our robotic capacities, as they surely ought to be -- although this may bring in some molecular factors that are more T4 than T3 (Harnad 1993b,f).