Cognition et Mouvement URA CNRS 1166
Université d'Aix-Marseille II
13388 Marseille cedex 13, France
This entirely valid methodological point of Turing's is based on the "other minds" problem (the problem of how I can know that anyone but me actually has a mind, actually thinks, actually has intelligence or knowledge -- these all come to the same thing): It is arbitrary to ask more of a machine than I ask of a person, just because it's a machine (especially since no one knows yet what either a person or a machine REALLY is). So if the pen-pal TT is enough to allow us to infer correctly that a real person has a mind, then it must by the same token be enough to allow us to make the same inference about a computer, given that the two are totally indistinguishable to us (not just for a 5-minute party trick or an annual contest, but, in principle, for a lifetime). Neither the appearance of the candidate nor any facts about biology play any role in my judgment about my human pen-pal, so there is no reason the same should not be true of my TT-indistinguishable machine pen-pal.
Now, although I too am critical of the TT, I think it is important that its logic -- which was only implicit in Turing's actual writing -- be made explicit, as I have tried to do here and in my other writings, so that we can see clearly the methodological basis for his proposed criterion. Elsewhere I have gone on to take issue with the TT on the grounds that humans also happen to have a good deal more performance capacity over and above their pen-pal capacity. It is hence arbitrary and equivocal to focus on pen-pal capacity alone; but Turing's basic intuition -- that the only available basis for inferring a mind is Turing-indistinguishable performance capacity -- is still correct. For TOTAL performance indistinguishability, however, one needs TOTAL, not partial, performance capacity, and that happens to call for all of our robotic performance capacities too: the Total Turing Test (TTT). And, as a bonus, the robotic capacities can be used to GROUND the pen-pal (symbolic) capacities, thereby solving the "symbol grounding problem" (Harnad 1990), which afflicts the pen-pal version of the TT but not the robotic TTT.[1]
In fact, one of the reasons no computer has yet passed the TT may be that even successful TT capacity has to draw upon robotic capacity. A TT computer pen-pal alone could not even tell you the color of the flower you had enclosed with its birthday letter -- or indeed that you had enclosed a flower at all -- unless you mentioned it in your letter. An infinity of possible interactions with the real world, interactions of which each of us is capable, is completely missing from the TT (and again, "tricks" have nothing to do with it).
The Loebner Prize Competition is accordingly trivial from a scientific standpoint. The scientific point is not to fool some judges, some of the time, but to design a candidate that REALLY has indistinguishable performance capacities (respectively, pen-pal performance [TT] or pen-pal + robotic performance [TTT]) -- indistinguishable to any judge, and for a lifetime, just as yours and mine are. No tricks! The real thing!
The only open questions are (1) whether there is more than one way to design a candidate to pass the TTT, and, if so, (2) whether we would then need a stronger test, the TTTT (neuromolecular indistinguishability), to pick out the one with the mind. My guess is that the constraints on the TTT are already tight enough to make that unnecessary, being roughly the same ones that guided the Blind Watchmaker who designed us (evolutionary adaptations -- survival and reproduction -- are largely performance matters; Darwinian selection can no more read minds than we can).
Let me close with the suggestion that the problem under discussion is not one of definition. You don't have to be able to define intelligence (knowledge, understanding) in order to see that people have it and today's machines don't. Nor do you need a definition to see that once you can no longer tell them apart, you will no longer have any basis for denying of one what you affirm of the other.
Harnad, S. (1989) Minds, Machines and Searle. Journal of Theoretical and Experimental Artificial Intelligence 1: 5-25.
Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42: 335-346.
Harnad, S. (1991) Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem. Minds and Machines 1: 43-54.
Harnad, S., Hanson, S.J. & Lubin, J. (1991) Categorical Perception and the Evolution of Supervised Learning in Neural Nets. In: Working Papers of the AAAI Spring Symposium on Machine Learning of Natural Language and Ontology (D.W. Powers & L. Reeker, Eds.) pp. 65-74. Presented at the Symposium on Symbol Grounding: Problems and Practice, Stanford University, March 1991; also reprinted as Document D91-09, Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, Kaiserslautern, FRG.
Harnad, S. (1992) Connecting Object to Symbol in Modeling Cognition. In: A. Clark and R. Lutz (Eds.) Connectionism in Context. Springer-Verlag.