Harnad, S. (2000) Correlation VS. Causality: How/Why the Mind/Body Problem Is Hard. [Invited Commentary of Humphrey, N. "How to Solve the Mind-Body Problem"] Journal of Consciousness Studies 7.

CORRELATION VS. CAUSALITY: HOW/WHY THE MIND/BODY PROBLEM IS HARD

Stevan Harnad
Cognitive Sciences Center
Department of Electronics and Computer Science
Southampton University
Highfield, Southampton
SO17 1BJ United Kingdom
harnad@soton.ac.uk
http://cogsci.soton.ac.uk/~harnad

"brain-imaging studies... demonstrate in ever more detail how specific kinds of mental activity (as reported by a mindful subject) are precisely correlated with specific patterns of brain activity (as recorded by external instruments).(Humphrey 2000)"
Mind/Brain (M/B) correlations: We've known about them (dimly) for decades, probably centuries. And that's still all we've got with brain imaging; and that's all we'll have even when we get the correspondence fine-tuned right down to the last mental jnd (just-noticeable difference) and its corresponding molecule.

But the Mind/Body Problem (M/BP) is about causation, not correlation. And its solution (if there is one) will require a mechanism in which the mental component somehow manages to play a causal role of its own, rather than just supervening superfluously on other, nonmental components that look, for all the world, as if they can do the full causal job perfectly well without it (thank you very much). Correlations confirm that M does indeed "supervene" on B, but causality is needed to show how/why M is not supererogatory; and that's the hard part.

Nick Humphrey's heroic attempt is informative -- and some of it might even be correct, functionally speaking -- but alas it too fails to furnish the missing causal link for the mental component, which continues to dangle in his account, nonfunctionally. Hence the only problems Humphrey solves are the "easy" ones, not the M/BP.

"suppose, by analogy, that... "atmospheric-imaging" experiments [demonstrate] that whenever there is a visible shaft of lightning in the air there is a corresponding electrical discharge. We might soon be confident that the lightning and the electrical discharge are aspects of one and the same thing"
Such analogies are (famously) inapplicable to the M/BP (Nagel 1974, 1986): There is no problem about seeing two sets of empirical observations as "aspects" of the same thing, given a causal model that unifies them. But there is no such causal model in the case of the M/BP. For, unlike all other empirical observations, such as lightning/electricity (or water/H2O, heat/molecular-motion, life/biogenetic-function, matter/energy, etc.), in the special case of M/B, the correlated phenomena are not of the same KIND. And that's precisely what makes this particular set of "correlations" different, and problematic. So the forecast that M/B will simply turn out to be yet another set of correlations like the rest is unpromising.

Empirically detectable shafts of lightning and empirically detectable electrical discharges are the same kind of thing (empirical data, detectable by instruments). So are empirically detectable brain activities and empirically detectable behaviour and circumstances. So when I say or act out that something hurts (especially when something is indeed damaging my tissues), and the accompanying brain-image is neural activity in my nociceptor system, we do have a correlation between things of the same kind, exactly as in the case of the lightning. And out of that correlation we can construct a causal theory of nociceptive function (tissue injury, avoidance, learning, recall, etc.).

But when the correlate in question is my feeling of pain, we're in another ballpark: There's now an explanatory gap that neither the nociceptive theory (which is only a functional theory of tissue-damage-related doing) nor any amount of reconfirmation of the tightness of the correlation can close.

So maybe what Humphrey means to highlight in the lightning case is not the correlation between the two sets of empirical data, but between the empirical data and an underlying causal factor that explains the data. That's fine too, but then the analogy with M/B correlations in imaging is irrelevant, and we are talking about a causal explanation. And if so, what IS that underlying causal factor in the case of the M/BP? All I see is unexplicated correlations. (Note that the neural functions and the behavioral functions and their interrelations do get explicated, but their mental correlates do not.)

Nor is it a matter of "deducing one from the other a priori" (even physics doesn't do that, only mathematics does). The "laws" of physics are not necessary but contingent; so are their boundary conditions. So this is a red herring. The real problem is about causal explanation in the special case of M/B. Consciousness seems fated to be a causal dangler no matter how tight the coupling and how minute the predictability. It's like perfect weather-forecasting without the underlying meteorological theory.

(Remember: Neural and behavioral functions are not at issue; mental ones are. The correlations of a purely behavioral neuroscience would not be problematic in any way; it's the causal status of the mental component that is at the root of the M/BP. The causality in M/B theory is invariably "third-party" causality: The underlying neural mechanism causes both the brain/body's functional neural/behavioral states and the fact that they happen to be mental states. The trick is to show -- functionally, if that's the route one elects to take -- how/why the mentality is not functionally superfluous; just reaffirming that it's causally hard-wired somehow to its functional substrate is not an answer.)

"with lightning [t]he physico-chemical causes that underlie the identity [were] discovered through further experimental research and new theorizing. Now the question is whether the same strategy will work for mind and brain."
It is indeed, and the question arises because of the obvious disanalogies: public ("3rd person") data, as in all the other analogies, vs. private ("1st person") data. There's a start. And then there's the disanalogy about the (independent) causal role of the private-stuff ("qualia" = feelings): it had better not try to exert any, on pain of telekinetic dualism. Which leaves the usual question of why it's dangling there, then, epiphenomenally. A "functionalist" would have expected a better answer, a functional one. But whenever we try to face squarely the question of what causal role feelings could possibly have, we draw a blank (unless we cheat by "identifying" feelings with something else -- such as a neural correlate, thereby begging the question!).

Humphrey wishes to distance his own position from two nonstarters, those of Chalmers (1996) and McGinn (1989):

"Chalmers [1996]... argues [that] consciousness just happens to be a fundamental, non-derivative, property of matter."
One must agree with Humphrey that such dicta are unedifying. Here is a one-line summary of Chalmers's message: "The M/BP is hard!" So what? How does that help? It's a good pull-up, for those who have simplistic quick-fixes, but other than that it is tautological: It wouldn't be the longstanding problem it is if it weren't "hard." The question is: Is it soluble at all?
"McGinn [1989] believes that... certain kinds of understanding... must for ever lie beyond our intellectual reach [e.g., the M/BP]."
No substance in that position either, in my opinion. I too happen to think the M/BP's insoluble, but not because of any limitation of the human mind. Indeed, I don't know what is even meant by saying that there may indeed exist a "solution" to the M/BP, but not one that the mind can ever know! There is nothing here that is analogous in any way to the (epistemic?) constraints underlying Gödel unprovability, quantum indeterminacy, statistical-mechanical indeterminacy, unproven mathematical conjectures, halting problems, the many-body problem, the limits of measurement, the limits of memory, the limits of technology, the limits of computation, NP-completeness, the limits of time, the limits of "language" (no idea what the last might even mean), etc. Those are all red herrings and false analogies.

Nor is it clear why, in his own approach, Humphrey wants to obscure the M/BP with formalism ("dimensions," "equations," "identities"). The problem is clear, hard, and staring us informally in the face: I have feelings. Undoubtedly the feelings are in some way caused by and identical to ("supervene on") brain processes/structures, but it is not at all clear how, and even less clear why. That's the M/BP. No "equation" to write down; no "terms." And the "incommensurability" is the name of the game (or of the problem)!

Inputs and outputs can be connected, functionally, computationally. But feelings are another story (a hard one!). If we "characterize" feelings computationally or functionally, we have simply begged the question, and changed the subject -- to a discussion of the relation between brain function and computational (or other) function.

"Most of the states of interest to psychologists... remembering, perceiving, wanting, talking, thinking, and so on are... amenable to... functional analysis."
In every respect except the relevant one, which is that they are qualitative, feeling states. They will be amenable to informational analysis, and to behavioural and neural analysis, but their feelingness will remain a dangler -- and that's the point! That's what makes the M/BP the problem it is. The functional stuff would all go through fine -- behaviourally, computationally -- if we were all just feelingless Zombies. But we're not. And that's the problem (Harnad 1995).
"recalling that today is Tuesday = activity of neurons in the calendula nucleus.... But [these] are notoriously the "easy" cases... No one it seems has the least idea how to characterize the phenomenal   experience of redness in functional terms"
This is too quick. There is nothing special about "detecting red," compared to "recalling that X" or "inferring that Y." These can all be treated functionally (i.e., as transpiring in a Zombie), in precisely the same way. There are I/O conditions under which certain psychophysical capabilities in the "chromoceptive task" domain are adaptive for our species, hence our brains have evolved the functional mechanisms for processing objects with reflective surfaces, etc. But why/how does doing and being able to do that kind of thing feel-like something? Back to square one, and it's exactly the same square for both the "cognitive/intentional" cases Humphrey thinks are easier (recalling X), and the more patently phenomenal/qualitative ones that wear their hardness more on their sleeves (detecting red).

In reality, every mental capacity has both an easy and a hard aspect: the functional aspect is easy, the feeling aspect is hard. But it's the feeling aspect that makes it mental! So there's only one M/BP, and that's the hard one. The rest is just mindless Zombie functionalism (a branch of reverse bioengineering that is not particularly "easier" empirically than any other area of science).

Now we arrive at what will be the core insight, which Humphrey attributes to Reid:

"sensory awareness is an activity. We do not have pains we get to be pained. This is an extraordinarily sophisticated insight [of Reid's]"
In my opinion Reid provides no insight here. The M/BP has always abutted onto various forms of scepticism -- scepticism about the external world, scepticism about other minds. The problem of "hallucination" (an apparent external object of experience, when in reality there is no external object) has sometimes been folded into the M/BP. Hence the eagerness to distinguish external object-based experiences, like feeling tree-barks, from completely internal ones, like feeling moods (I leave out the awkward intermediate case of feeling headaches).

No illumination follows from adopting these distinctions. YES, some of the sceptical problems (not induction, but definitely solipsism and other-minds) are lemmas of the M/BP. But the M/BP is primary. Solve that and those bits of idealism will be trivial by comparison (and will probably vanish).

But don't try to subordinate the (unsolved) theorem to the derivative lemma! Never mind the distinction between trees and moods: For M/B purposes they are much of a muchness, and moods are the more representative model, rather than the external-world-contingent special case of trees.

To put it another way: it is the relation between feelings and brain states that we are charged with explicating, not the relation between feelings and their objects, whether internal or external (i.e., "feeling that I am seeing a blue balloon" vs. "feeling [affectively] blue"). All such cases are equally symptomatic of and infected with the M/BP; to try instead to resolve some of the differences that distinguish them from one another is just to change the subject and beg the question (analogous to focusing on differences in notation instead of confronting their common content, except that here it is qualitative content differences that are distracting us from the problem, which is that there is any content at all!).

"my own view... is that the right expression is not so much "being pained" as "paining"... sensing is not a passive state at all, but... active engagement with the stimulus occurring at the body surface."
As GB Shaw said: "Madame, we have already established your profession, we are merely haggling about the price." Call it what one likes -- call seeing a tree "tree-seeing," call feeling a pain "paining" -- there is no illumination in sight in this corridor, just gerunds! (The profession, here, is unfortunately question-begging functionalism.)

And let us quickly lay to rest what might have looked as if it too were a contender for a phenomenological category of its own, along with seeing trees vs. feeling pains: performing a volitional act (e.g., lifting my finger). No help there either. When I lift my finger, it feels-like "me-deliberately-finger-lifting." Just another feeling to account for, along with all the others. The fact that it feels-like I'm doing it, rather than like it's being done-unto me, amounts to just one more (irrelevant) difference in feeling content. It does not give us any leg up on the M/BP.

Efference is a red herring here. It feels-like I'm being the agent, and maybe that's correlated with efferent brain activity: yet another correlation. (Humphrey's proposed "Sentition" is even worse. We don't need new terms! We need conceptual insights -- if there are any to be had.) I too happen to have a longstanding interest in the motor component of perception (Harnad 1982). But there are no inroads to the M/BP from any of that -- just perhaps true and interesting functional facts about the relation between afference and efference, reafference, reflexive vs. nonreflexive behavior, motor theories of perception, etc.

One also cannot agree with Humphrey that "it is 'like something' to have sensations, but not like anything much to engage in most other bodily activities!" Kinesthesia is qualitative; so is the difference between what it feels-like to raise your leg when it has been tapped on the patella by a doctor vs. when you will the motion deliberately. Those are all differences in "Feeling Space" -- just like everything else (including mental imagery and mental reasoning, indeed all of the "language of thought"). But this casts no light on the M/BP.

Humphrey's "self-resonance" sounds like just another one in the long litany of "self-X" terms meant to illuminate consciousness (self-awareness, self-reference, self-representation, etc.) -- while in reality merely renaming it.

(I might add that the "self" need not figure in it at all: The concept of the "self" -- though, like everything, it has its qualitative contents -- surely came late in the evolutionary day. The amphioxus already has a full-blown M/B problem, even though it does not know it, if it aches when you pinch it, and that's the only experience that ever goes on in there -- no Cartesian reflection on its being my ache, and me as distinct from it as the patient of that sensation, etc. But just that ache is feeling enough; no "self-resonance" needed...)

Similarly, the fact that an "animal has a defining edge to it, a structural boundary" sounds like an excellent functional/Darwinian reason for evolving mechanisms that make the inner-outer distinction, and act upon it. But why should any of that feel like anything? Why must one not be a zombie-amoeba in order to get the full functional benefits Humphrey describes in his evolutionary scenario? (As usual, there is some conflation of the mental and the internal here.)

Organisms "must evolve the ability to sort out the good from the bad and to respond... with an ow": Humphrey is here caught in the act of contraband, smuggling in a feeling reaction (otherwise what does the "ow" mean?), where all that was needed functionally was a doing one. Question duly begged.

Similarly with "When red light falls on it... it wriggles redly.": Why should that feel like anything? And if it doesn't, then it's just wriggling wrigglingly, under red conditions. Once one has cheated, and allowed any qualitative light to enter into and "quicken" what should merely have been functional/Darwinian survival/reproduction machines -- Zombie ones, as plants are [I hope, being a vegeterian who tries not to eat anything that has or has ever had qualia!] -- generating the mental stuff one has set out planning to explain, the game is over and the M/B question is begged! Why/how do they wriggle feelingly rather than merely doingly?

"as yet, these sensory responses are nothing other than responses... no reason to suppose that the animal is in any way mentally aware of what is happening."
A bit of equivocation here: Can one be "aware" in any other way than "mentally"? And just what is it that is "happening" if it is not the object of any awareness? If no one is feeling anything, then the events in question might just as well be transpiring on the other side of the moon as within an entity's head, for there is no one home there either. So these are Zombies wriggling wrigglingly under red conditions, not wriggling redly. No light means no light, not just paler light.
"as this animal's life becomes more complex, the time comes when it will indeed be advantageous for it to have some kind of inner knowledge of what is affecting it, which it can begin to use as a basis for more sophisticated planning and decision making. So it needs the capacity to form mental representations of the sensory stimulation at the surface of its body and how it feels about it."
How it feels about it? But this was supposed to be the functional explication of what it is to feel! In reality, it sounds as if, for functional reasons, the entity needs certain internal structures and processes. Fine, but why should it feel-like something to have those, or to have them activated (Harnad 1996)? As before, why/how are these not just Zombies with internal structures and processes that do whatever it is that needs functional doing? "Internal" certainly does not mean "mental," as every thermostat knows (or rather doesn't).
"By monitoring its own responses, it forms a representation of 'what is happening to me'."
We've already settled earlier on the functional utility of an internal distinction between external and internal. But why should that feel-like anything either? (And if it doesn't, the "me" has no subject.)
"wouldn't it be better off if, besides being aware of feeling the pressure wave as such"
This is again equivocal between functionally-responsive-to the pressure wave and feeling it.
"able to interpret this stimulus as signaling an approaching predator?"
"Interpret" is again equivocal. Functionally, it just means process the information and compute the result. Why should that involve or engender feelings?

It should be clear by now that Humphrey has gotten out of this exactly what he has put into it. His language and caveats are equivocal about precisely where, how, or why he has smuggled in the light of consciousness (or the warmth of feelings), but clearly at some point he has, and we are meant to go along with this.

Alas, I cannot, because Humphrey has given me no reason -- functional or logical -- for doing so. He has simply arbitrarily turned on the mental lights at some juncture, and somehow attributed that to the Darwinian story he was telling, yet the story (except as a Just-So story, i.e., as mere hermeneutics) does not explain or justify it at all.

"When the question is "what is happening to me?", the answer that is wanted is qualitative, present-tense, transient, and subjective. When the question is "what is happening out there?", the answer that is wanted is quantitative, analytical, permanent, and objective."
No account whatsoever is given of why this should be the case. Why do internal data have to have the mental lights on, whereas external data do not? (I don't even think it's true, in that external-object-event-processing is probably just as closely correlated with consciousness as internal.)

It is conceivable that two systems evolved along the lines Humphrey describes; it is even conceivable that there are some correlations between his functional account and consciousness (although even functionally some parts seem problematic).

But the part Humphrey owes us, if this is really meant to have any bearing on the M/B problem, is an explanation of when/how/why feelings kicked in (whatever their correlation with these two hypothetical systems might be).

"proto-experience of sensation arises from its monitoring its own command signals for these sensory responses."
"Proto" is a weasel word here: Are we talking about feelings, or about something else? I do not know, nor can I conceive, of anything intermediate or "proto" in between feeling and nonfeeling. And it would be sensation when the mental lights went on, not "experience of sensation," which is redundant. With the mental lights off, sensation is just "event" or "physical effect" or "response." Optical transducers respond to light; they don't have sensations. There's no one in there to be the subject of sensations, to feel; and sensations are experiences, feelings, full-blown. (Self-monitoring is an old favorite. But why feelingly self-monitoring, rather than zombily?)
"though the animal may no longer want to respond directly to the stimulation at its body surface as such, it still wants to be able to keep up to date mentally with what's occurring"
Delays, planning, monitoring, policing: All good stuff, and we ourselves all certainly do it mentally; but that's neither here nor there. How/why is it mental rather than just internal but mindless Zombie adaptation here, in this putative explication? (These are the hard questions.) In his functional account, Humphrey has described some very useful internal computations, but he has left out entirely how/why they have anything feelingful about them. Till he can do that, the M/BP remains untouched by any of this.

Humphrey's closed-circuits and internal loops are also popular candidates for "self-X" structures/processes (self-modifying and self-organizing are others), but all of these are too easy! One cannot just baptize them as "mental" and declare that a problem has been solved. It is the simplest thing (and indeed valid, because of the M/B correlation) to give a mentalistic interpretation to certain brain processes. But to interpret them as mental (even truly so) is not to explain them (causally) as mental! Such are the limits of mentalistic hermeneutics (Harnad 1990a, b).

"Now once this happens [natural] selection is no longer involved in determining the form of these responses and... the quality of the representations based on them."
This is a reasonable rationale for replacing further evolutionary adaptation by all-purpose learning and intelligence, based on evolved internal cognitive mechanisms, but not a hint about why/how any of this is conscious (Harnad 2000).

I never quite figured out what Humphrey's "thickness factor" was (perhaps there was something intentionally Geertzian about it [Geertz 1973]) -- apart from the fact that it appears to be making some sort of a continuum out of something that is all or none: Either feeling is going on or it is not; if it is, then we are again just haggling about the price; but Humphrey owes us an explanation of how/why the feeling-switch was turned on at all in the first place.

"when the process becomes internalized and the circuit so much shortened, the conditions are there for a significant degree of recursive interaction to come into play... the command signals for sensory responses begin to loop back upon themselves, becoming in the process partly self-creating and self-sustaining... they have... become signals about themselves."
I can design and implement recursive, self-sustaining loops fitting Humphrey's description easily. Do they quicken with the light of consciousness too? If not, then why/how do the ones Humphrey says do, do? Animating the "self-X" words does not explain, it covers up! And it's not getting signals that are "about" themselves that is the problem -- for this very sentence is now about itself too; the problem is getting someone in there for those signals to be about something to: a conscious subject. That problem is hard, and alas, Nick Humphrey, like everyone else so far, has failed to solve it.
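
To make the implementability claim concrete, here is a minimal sketch (my own illustration, not Humphrey's proposal; the class and variable names are hypothetical) of such a loop: each command signal is partly a function of the loop's own prior signals -- a signal "about itself" -- and the loop monitors its own activity. It runs, it sustains itself, and it feels nothing:

    # Minimal sketch (hypothetical, for illustration only): a recursive,
    # self-monitoring signal loop. Each output is partly a function of the
    # loop's own previous output, so the signals are, in the relevant sense,
    # "about themselves". Nothing here feels anything -- which is the point.

    class SensoryLoop:
        def __init__(self, feedback: float = 0.5):
            self.feedback = feedback   # how strongly the signal loops back on itself
            self.signal = 0.0          # current "command signal"
            self.history = []          # the loop's record of its own activity

        def step(self, stimulus: float) -> float:
            # New signal = weighted stimulus + the loop's own prior signal (recursion).
            self.signal = (1 - self.feedback) * stimulus + self.feedback * self.signal
            self.history.append(self.signal)   # self-monitoring
            return self.signal

    loop = SensoryLoop()
    for s in [1.0, 1.0, 0.0, 0.0]:
        print(loop.step(s))   # self-sustaining dynamics; no subject, no feeling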

REFERENCES

Chalmers, D.J. (1996) The Conscious Mind: In Search of a Fundamental Theory. New York: Oxford University Press.

Geertz, C. (1973) The Interpretation of Cultures: Selected Essays. New York: Basic Books.

Harnad, S. (1982) Consciousness: An afterthought. Cognition and Brain Theory 5: 29 - 47. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad82.consciousness.html

Harnad, S. (1990a) Against Computational Hermeneutics. (Invited commentary on Eric Dietrich's Computationalism) Social Epistemology 4: 167-172. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad90.dietrich.crit.html

Harnad, S. (1990b) Lost in the hermeneutic hall of mirrors. Invited Commentary on: Michael Dyer: Minds, Machines, Searle and Harnad. Journal of Experimental and Theoretical Artificial Intelligence 2: 321 - 327. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad90.dyer.crit.html

Harnad, S. (1994) Levels of Functional Equivalence in Reverse Bioengineering: The Darwinian Turing Test for Artificial Life. Artificial Life 1(3): 293-301. Reprinted in: C.G. Langton (Ed.) Artificial Life: An Overview. MIT Press 1995.
http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad94.artlife2.html

Harnad, S. (1995) Why and How We Are Not Zombies. Journal of Consciousness Studies 1: 164-167. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad95.zombies.html

Harnad, S. (2000) Turing Indistinguishability and the Blind Watchmaker. In: Mulhauser, G. (ed.) "Evolving Consciousness". Amsterdam: John Benjamins. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad98.turing.evol.html

Harnad, S. (2001) Minds, Machines, and Turing: The Indistinguishability of Indistinguishables. Journal of Logic, Language, and Information (JoLLI) special issue on "Alan Turing and Artificial Intelligence" (in press)
http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad00.turing.html

Humphrey, N. (2000) How to Solve the Mind-Body Problem. Journal of Consciousness Studies 7.
http://cogprints.soton.ac.uk/abs/phil/200002001

McGinn, C. (1989) Can we solve the mind-body problem? Mind 98: 349-366.

Nagel, T. (1974) What is it like to be a bat? Philosophical Review 83: 435-451.

Nagel, T. (1986) The View from Nowhere. New York: Oxford University Press.