To appear in Encyclopedia of
Philosophy (Macmillan)
Searle's Chinese Room Argument
Stevan Harnad
In 1980, the philosopher John Searle published in the journal Behavioral and Brain Sciences a
simple thought experiment that he called the "Chinese Room Argument,"
directed against "Strong Artificial Intelligence (AI)." The thesis of Strong AI
has since come to be called "computationalism," according to which cognition is just computation,
and hence mental states are just computational states.
Computationalism. According to
computationalism, to explain how the mind works, cognitive science
needs to find out what the right computations are -- the same ones that
the brain performs in order to generate the mind and its capacities.
Once we know that, then every system that performs those computations
will have those mental states. Every computer that runs the mind's
program will have a mind, because computation is hardware-independent:
any hardware that runs the right program is in the right computational states.
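To see what hardware-independence amounts to, here is a minimal illustrative sketch in Python (the "program" and its states are invented for illustration only): the same formal program can be executed by physically different mechanisms, and the computationalist claim is that whatever mental states the right computations generate, every such mechanism would have them.

    # Illustrative only: a trivial "program" as a transition table.
    # (state, input) -> (next state, output)
    PROGRAM = {
        ("idle", "ping"): ("busy", "pong"),
        ("busy", "ping"): ("idle", "wait"),
    }

    def hardware_a(state, inputs):
        # One "physical" realization: an iterative loop.
        trace = []
        for symbol in inputs:
            state, out = PROGRAM[(state, symbol)]
            trace.append((state, out))
        return trace

    def hardware_b(state, inputs):
        # A different realization: recursion instead of iteration.
        if not inputs:
            return []
        state, out = PROGRAM[(state, inputs[0])]
        return [(state, out)] + hardware_b(state, inputs[1:])

    # Different mechanisms, identical sequence of computational states:
    assert hardware_a("idle", ["ping", "ping"]) == hardware_b("idle", ["ping", "ping"])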
The Turing Test. How do we know
which program is the right program? Although it is not strictly a tenet
of computationalism, an answer that many computationalists will accept
is that the right program is the one that can pass the Turing
Test (TT): a system able to interact by email
with real people exactly the way real people do -- so exactly that no
one can ever tell that the computer program is not another real person.
Turing (1950) had suggested that once a computer can do everything a
real person can do so well that we cannot even tell them apart, it
would be arbitrary to deny that that computer has a mind, that it is
intelligent, that it can understand just as a real person can.
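The logic of the test can be sketched schematically (a hypothetical harness only: "human", "program", and "judge_guess" are stand-ins for a real person, the candidate program, and the judge's strategy; nothing here implements actual conversation):

    import random

    def exchange(candidate, questions):
        # Collect the hidden candidate's replies to the judge's questions.
        return [(q, candidate(q)) for q in questions]

    def run_tt(judge_guess, human, program, questions, trials=1000):
        # The judge sees a transcript from a hidden candidate and guesses
        # "human" or "program". Passing means the guesses stay at chance.
        correct = 0
        for _ in range(trials):
            candidate = random.choice([human, program])
            transcript = exchange(candidate, questions)
            truth = "program" if candidate is program else "human"
            if judge_guess(transcript) == truth:
                correct += 1
        return correct / trials  # ~0.5: indistinguishable; near 1.0: unmasked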
This, then, is the thesis that Searle set out to show was wrong: (1)
mental states are just computational states, (2) the right
computational states are the ones that can pass the TT, and (3) any and
every piece of hardware on which you run those computations will have
those mental states too.
Hardware-Independence. Searle’s
thought experiment was extremely simple. Normally, there is no way we
can tell whether anyone or anything other than ourselves has mental
states. The only mental states we can be sure about are our own: we
cannot be someone else, to check whether they have mental states too.
But computationalism has an important vulnerability in this regard:
hardware-independence. Since any and every dynamical system that is
executing the right computer program would have to have the right
mental states, Searle himself can execute the computer program, and
then check whether he has the right mental states. In particular,
Searle asks whether the computer that passes the TT really understands the emails it is
receiving and sending.
The Chinese Room. To test this,
Searle obviously cannot conduct the TT in English, for he already
understands English. So in his thought-experiment the TT is conducted
in Chinese: The (hypothetical) computer program he is testing in his
thought-experiment is able to pass the TT in Chinese. That means it is
able to receive and send email in Chinese in such a way that none of
its Chinese pen-pals would ever suspect that it was not a real
Chinese-speaking and Chinese-understanding person. (We are to imagine
the email going on as frequently as we like, with as many people as we
like, as long as we like, even for an entire lifetime. The TT is
not just a short-term trick.)
Symbol-Manipulation. In the
original version of Searle’s Chinese Room Argument he imagined himself
in the Chinese Room, receiving the Chinese emails (a long string of
Chinese symbols, completely unintelligible to Searle). He would then
consult the TT-passing computer program, in the form of rules written
(in English) on the wall of the room, telling him exactly how
to manipulate the symbols, based on the incoming email, to
generate the outgoing email. It is important to understand that
computation is just symbol-manipulation,
and that the manipulation and matching are done purely on the basis of
the shape of the symbols, not
on the basis of their meaning.
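The shape-based character of the rules can be made concrete with a toy sketch (the rules, symbols, and replies are invented; a real TT-passing program would be incomparably more complex, but every step would still be of this purely shape-matching kind):

    # Toy illustration: the "rules on the wall" as a lookup from input
    # shapes to output shapes. Matching is by character shape alone;
    # whether the symbols mean anything plays no role in the processing.
    RULES = {
        "你好吗": "我很好，谢谢",
        "再见": "再见",
    }

    def chinese_room(incoming):
        # Return whatever outgoing shapes the rules dictate for the
        # incoming shapes; no step consults any meaning.
        return RULES.get(incoming, "请再说一遍")

    print(chinese_room("你好吗"))  # a fluent-looking reply, zero understanding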
Now the gist of Searle’s argument is very simple: In doing all that, he
would be doing exactly the same thing any other piece of hardware
executing that TT-passing program was doing: rulefully manipulating the
input symbols on the basis of their shapes, and generating output
symbols that make sense to a Chinese pen-pal -- the kind of email reply
a real pen-pal would send, a pen-pal that had understood the email
received, as well as the email sent.
Understanding. But Searle goes
on to point out that in executing the program he would not be
understanding the emails at all! He would just be manipulating
meaningless symbols, on the basis of their shapes, according to the
rules on the wall. Therefore, because of the hardware-independence of
computation, if Searle would not be understanding Chinese under those
conditions, neither would any other piece of hardware executing the
Chinese TT-passing program. So much for computationalism and the
theory that cognition is just computation.
The System Reply. Searle
correctly anticipated that his computationalist critics would not be
happy with the handwriting on the wall: their “System Reply” would be
that Searle was only part of the TT-passing system, and that whereas
Searle would not be understanding Chinese under those conditions, the
system as a whole would be!
Searle rightly replied that he found it hard to believe that he plus
the walls together could constitute a mental state, but, playing the
game, he added: Then forget about the walls and the room. Imagine that
I have memorized all the symbol-manipulation rules and can carry them
out from memory. Then the whole system
is me: Where’s the understanding?
Desperate computationalists were still ready to argue that somewhere in
there, inside Searle, under those conditions, there would lurk an
understanding of Chinese of which Searle himself was unaware, as in
multiple personality syndrome -- but this seems even more far-fetched
than a person plus walls having a mental state of which the person is
unaware.
Brain Power. So the Chinese
Room Argument is right, as far as it goes, and computationalism is wrong.
But if cognition is not just computation, what is it then? Here Searle
is not much help, for he first overstates what his argument has shown,
concluding that it has shown that cognition is not computation at all -- whereas
all it has actually shown is that cognition is not
all computation. Searle also concludes that his argument has
shown that the Turing Test is invalid, whereas all it has shown is that
the TT would be invalid if it could be passed by a purely computational
system. His only positive recommendation is to turn brainward, trying
to understand the causal powers of the brain instead of the
computational powers of computers.
But it is not yet apparent what the relevant causal powers of the brain
are, nor how to discover them. The TT itself is a potential guide:
Surely the relevant causal power of the brain is its power to pass the
TT! We know now (thanks to the Chinese Room Argument) that if a system
passed the TT via computation alone, that would not be enough. What is
missing?
The Robot Reply. One of the
attempted refutations of the Chinese Room Argument -- the “Robot Reply”
-- contained the seeds of an answer, but they were sown in the wrong
soil. A robot’s sensors and effectors were invoked in order to
strengthen the System Reply: It is not Searle plus the walls of the
Chinese Room that constitutes the Chinese-understanding “system”, it is
Searle plus a robot’s sensors and effectors. Searle rightly pointed out
that it would still be him doing all the computations, and it was the
computations that were on trial in the Chinese Room. But perhaps the TT
itself needs to be looked at more closely here:
Behavioral Capacity. Turing’s
original Test was indeed the email version of the TT. But there is
nothing in Turing’s paper or in his arguments on behalf of the TT to suggest
that it should be restricted to candidates that are just computers, or
even that it should be restricted to email! The power of the TT is the
argument that if the candidate can do
everything a real person can do -- and do it indistinguishably from the
way a real person does it, as judged by real people -- then it is mere
prejudice to conclude that it lacks mental states when we are told it
is a machine. We don’t even really know what a machine is, or isn’t!
But we do know that real
people can do a lot more than just email to one another. They can see,
name, manipulate and describe most of the things they talk about in
their email. Indeed, it is hard to imagine how either a real pen-pal or
any designer of a TT-passing computer program could deal intelligibly
with all the symbols in an email message without also being able to do at least some
of the things we can all do with the objects and events in the world
that those symbols stand for.
Sensorimotor Grounding of Symbols.
Computation, as noted, is symbol-manipulation, by rules based on the
symbols’ shapes, not their meanings. Computation, like language itself,
is universal, and perhaps all-powerful (in that it can encode just
about anything). But surely if we want the ability to understand the symbols’ meanings to
be among the mental states of the TT-passing system, this calls for
more than just the symbols and the ability to manipulate them. Some, at
least, of those symbols must be “grounded” in something other than just
more meaningless symbols and symbol-manipulations -- otherwise the
system is in the same situation as someone trying to look up the
meaning of a word in a language (say, Chinese) that he does not
understand -- in a Chinese-Chinese dictionary! Emailing the definitions
of the words would be intelligible enough to a pen-pal who understood
Chinese, but they would be of no use to anyone or anything that did not
understand Chinese.
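This dictionary-go-round can itself be sketched (invented entries): every definition is just more symbols, so a search that begins with no already-understood symbols can never terminate in a meaning.

    # Invented entries: each symbol is defined only by other symbols.
    DICTIONARY = {
        "马": ["动物", "四", "腿"],
        "动物": ["生物"],
        "生物": ["动物"],   # the definitions circle back on themselves
        "四": ["数"],
        "数": ["四"],
        "腿": ["动物", "部分"],
        "部分": ["腿"],
    }

    def look_up(symbol, known=frozenset()):
        # Search the definitions for something already understood. With
        # no grounded symbols, the search only enumerates more symbols.
        seen, frontier = set(), [symbol]
        while frontier:
            s = frontier.pop()
            if s in known:
                return True        # reached a grounded symbol
            if s not in seen:
                seen.add(s)
                frontier.extend(DICTIONARY.get(s, []))
        return False               # symbols all the way down

    print(look_up("马"))                   # False: never escapes the symbols
    print(look_up("马", known={"生物"}))   # True once something is grounded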
Mind-Reading. So the TT
candidate must be a robot, able to interact with the world and with us
directly, not just via email. And it must be able to do so
indistinguishably from any of the rest of us. That is the gist of the
TT. The reason Turing originally formulated his test in its pen-pal
form was so that we would not be biased by the candidate’s appearance.
But in today’s cinematic sci-fi world we have, if anything, been primed
to be over-credulous about robots, so much more “capable” are our
familiar fictional on-screen cyborgs than any TT candidate yet designed
in a cog-sci lab. In real life our subtle and biologically based
“mind-reading” skills (Frith & Frith 1999) will be all we need once
cog-sci starts to catch up with sci-fi and we can begin T-Testing in
earnest.
The Other-Minds Problem. Could
the Chinese Room Argument be resurrected to debunk a TT-passing robot?
Certainly not. For Searle’s argument depended crucially on the
hardware-independence of computation. That was what allowed Searle to
“become” the candidate and then report back to us (truthfully) that we
were mistaken if we thought he understood Chinese. But we cannot
“become” the TT-passing robot, to check whether it really understands,
any more than we can become another person. It is this parity (between
other people and other robots) that is at the heart of the TT. Anyone
who thinks this is not an exacting enough test of having a mind need
only remind himself that the Blind Watchmaker (Darwinian evolution),
our “natural designer,” is no more capable of mind-reading than any of
the rest of us. That leaves only the robot to know for sure whether it
really understands.
REFERENCES
Frith, Christopher D. & Frith, Uta (1999) Interacting minds -- a biological basis. Science 286: 1692-1695. http://pubpages.unh.edu/~jel/seminar/Frith_mind.pdf
Harnad, Stevan (1989) Minds, Machines and Searle. Journal of Theoretical and Experimental Artificial Intelligence 1: 5-25. http://cogprints.org/1573/00/harnad89.searle.html
Harnad, Stevan (1990) The Symbol Grounding Problem. Physica D 42: 335-346. http://cogprints.org/3106/01/sgproblem1.html
Harnad, Stevan (2001) What's Wrong and Right About Searle's Chinese Room Argument? In: M. Bishop & J. Preston (eds.) Essays on Searle's Chinese Room Argument. Oxford University Press. http://cogprints.org/4023/01/searlbook.html
Harnad, Stevan (2001) On Searle On the Failures of Computationalism. Psycoloquy 12(61). http://psycprints.ecs.soton.ac.uk/archive/00000190/
Harnad, Stevan (2003) Can a machine be conscious? How? Journal of Consciousness Studies 10(4-5): 69-75. http://cogprints.org/2460/01/machine.htm
Harnad, Stevan (2003) Symbol-Grounding Problem. Encyclopedia of Cognitive Science. Nature Publishing Group/Macmillan. http://www.ecs.soton.ac.uk/~harnad/Temp/symgro.htm
Harnad, Stevan (2004) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence. In: Epstein, Robert & Peters, Grace (eds.) The Turing Test Sourcebook: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Kluwer. http://cogprints.org/3322/01/turing.html
Searle, John R. (1980) Minds, brains, and programs. Behavioral and Brain Sciences 3(3): 417-457. http://www.bbsonline.org/documents/a/00/00/04/84/bbs00000484-00/bbs.searle2.html
Searle, John R. (1984) Minds, Brains, and Science. Cambridge, Mass.: Harvard University Press.
Searle, John R. (1987) Minds and Brains without Programs. In: C. Blakemore & S. Greenfield (eds.) Mindwaves. Oxford: Basil Blackwell.
Searle, John R. (1990) Explanatory inversion and cognitive science. Behavioral and Brain Sciences 13: 585-595.
Searle, John R. (1990) Is the Brain's Mind a Computer Program? Scientific American, January 1990.
Searle, John R. (2001) The Failures of Computationalism. Psycoloquy 12(62). http://psycprints.ecs.soton.ac.uk/archive/00000189/
Turing, A. M. (1950) Computing Machinery and Intelligence. Mind 59: 433-460. http://cogprints.org/499/00/turing.html