OnLine Section: OffLine. Thursday 23 February 1995. Page 11.
Stevan Harnad is Professor of Psychology and Director of
Cognitive Sciences Centre
University of Southampton and
author of the forthcoming
"Icon, Category, Symbol: Essays on the
Foundations and Fringes of Cognition."
SE NON è vero, è ben trovato (if it is not true, it is well invented). The linguist Benjamin Lee Whorf is known for his celebrated hypothesis that how we perceive reality is determined by our language and culture. This sounds plausible, yet every piece of evidence Whorf cited in its support has turned out to be flawed, if not downright incorrect:
Whorf cited the Hopi language, which lacks a future tense, suggesting that as a consequence, the Hopi people lack a sense of future. (Not so, of course: The Hopis hope and expect and predict and plan for tomorrow like the rest of us, in both word and deed; Whorf simply did not have a full enough grasp of their language's expressive power.)
Whorf also cited the language of the Inuits ("Eskimos"), which allegedly has a superabundance of words for snow because, presumably, of the importance of snow in their polar lives; he thought they accordingly perceived snow differently, seeing richer distinctions, more categories than we do. (Wrong again: As the latter-day linguist Geoffrey Pullum has pointed out in "The Great Eskimo Vocabulary Hoax", Whorf had simply misunderstood Inuit, which is an "agglutinative" language, one that pastes together many words into a single word where we would use a phrase or a sentence. Hence, Whorf had thought there needed to be separate dictionary entries for "the snow that is slightly moist" or "the snow that has lain outdoors overnight" merely because his native translators expressed these with a single word. But, by the same token, he would have needed just as many Inuit dictionary entries for "the computer that is slightly moist" or "the TV that has lain outdoors overnight", neither of which is a very salient category of polar reality.)
ONE could go on and on: It is not true, as latter-day Whorfians such as Alfred Bloom have suggested, that the reason the Chinese did not develop Western-style science was that Mandarin lacks the "counterfactual conditional" construction: "If your aunt had had a Y chromosome she would have been your uncle" (skilled speakers of Chinese are perfectly capable of expressing or understanding such sentiments; they just do so in what in English sounds like a more roundabout way).
Nor is it true that the rainbow looks different to you depending on how your language divides and names the visible wavelengths of light. As the anthropologists Brent Berlin and Paul Kay have shown (and color receptor neurophysiologists have corroborated), the colors of the rainbow are fixed by the genes that code for the color detectors in your retina and brain, rather than by your language and your culture.
EPPUR, eppur si muove: And yet, and yet there must surely be some Whorfian effects in our lives, as anyone will attest who has seen a pair of identical twins, first sworn that never in this lifetime would he be able to tell them apart, and then, after he had sorted them out and they had grown thoroughly familiar to him, sworn with equal conviction that he couldn't imagine how he could ever have mixed them up! Something changed in the way the world looked, between those two glimpses of "reality", something very much like what changes as one becomes expert in distinguishing cancerous from noncancerous cells under a microscope, sexing newborn chicks, or countless other tasks in which we exercise our remarkable capacity to categorise things, from infancy onward.
AND OUR capacities are remarkable, as attested by the fact that we have not come even remotely close to building machines that can categorise as we can. It won't do to say that machines can't do it because they can't think. Nobody has any idea what thinking is (even though we all know what it's like to do it), and if anyone ever does get a solid hunch about what thinking is, or, in particular, about the means by which we manage to categorise, then the only test of whether that hunch is right will be to see whether a machine can indeed do what we can do, armed only with those means!
Would a machine that could categorise have to be Whorfian? In other words, would it somehow have to alter the way things appeared, while learning to sort and name them? (Let's leave for another time the question of whether things "appear" like anything at all to a machine.) For if we turn the Whorf Hypothesis on its head and say simply that our language and culture are determined by how we perceive reality, then we have said nothing controversial at all: I sort and name colors (and snow and twins and cells) on the basis of the way they look to me, not the other way round.
Yet there is evidence that it is the other way round: How things look to us is determined (or at least strongly influenced) by how we sort and name them. Experiments on human subjects show that things we learn to put in the same category and give the same name come to resemble one another more, and things we put in different categories and give different names come to look increasingly different. These are Whorfian effects. What possible purpose might they serve?
AN ANSWER may come from neural networks -- artificial systems of interconnected nodes that some people think might even be a bit brain-like. These nets can learn to sort and name things. And the way they do it is by changing the "receptive fields" of their inner nodes, reshaping the inner "shadows" that things cast on them so that things that belong in the same category come to look more alike, and things that belong to different categories come to look more different. In the end, these nets do sort things based on how they appear, only the appearances have been changed: Reality has been "warped" ("Whorfed"?) in the service of categorisation and naming.
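The warping these nets exhibit can be seen in miniature. The following sketch is not from the article; it is a toy illustration in Python, with assumed details throughout (a one-dimensional stimulus continuum, a category boundary at 0.5, a three-unit hidden layer, plain gradient descent). It compares the hidden-layer "shadows" that stimuli cast before and after the net learns the category: pairs within a category typically end up closer together, and pairs straddling the boundary further apart.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# An assumed one-dimensional stimulus continuum, with a category
# boundary at 0.5 (stimuli below it are category 0, the rest category 1)
xs = [i / 10 for i in range(11)]
labels = [0 if x < 0.5 else 1 for x in xs]

H = 3  # hidden units
w1 = [random.uniform(-1, 1) for _ in range(H)]  # input -> hidden weights
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]  # hidden -> output weights
b2 = 0.0

def hidden(x):
    """The inner 'shadow' a stimulus casts on the hidden layer."""
    return [sigmoid(w1[j] * x + b1[j]) for j in range(H)]

def forward(x):
    h = hidden(x)
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

def separation():
    """Mean hidden-layer distance for pairs within vs. across categories."""
    within, between = [], []
    for i in range(len(xs)):
        for k in range(i + 1, len(xs)):
            d = math.dist(hidden(xs[i]), hidden(xs[k]))
            (within if labels[i] == labels[k] else between).append(d)
    return sum(within) / len(within), sum(between) / len(between)

w_before, b_before = separation()

# Train by gradient descent on cross-entropy loss
for _ in range(3000):
    for x, t in zip(xs, labels):
        h, y = forward(x)
        err = y - t  # dLoss/dz for a sigmoid output with cross-entropy
        for j in range(H):
            grad_h = err * w2[j] * h[j] * (1 - h[j])
            w2[j] -= 0.5 * err * h[j]
            w1[j] -= 0.5 * grad_h * x
            b1[j] -= 0.5 * grad_h
        b2 -= 0.5 * err

w_after, b_after = separation()
print("between/within ratio before learning:", round(b_before / w_before, 2))
print("between/within ratio after learning: ", round(b_after / w_after, 2))
```

On this toy continuum the between-to-within distance ratio grows as the net learns: the receptive fields sharpen around the category boundary, compressing differences within each category and stretching the difference across it, which is the "warping" described above in miniature.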
If something like this really does transpire in our heads, then Whorf was right after all.