Even if one accepts the principles of semantic externalism as Putnam defines them, one cannot accept his argument that the Brain in a Vat sceptical hypothesis is self-refuting. And since that argument rests entirely on semantic externalism, anyone who rejects that theory of meaning will find that the argument crumbles from the start.
Putnam's theory of meaning demands that for a symbol (word or image) to refer to some object, two elements must be present: (a) the issuer of the symbol must have the mental intent that the symbol refer to the object; and (b) there must be some suitable causal link from the object through the subject to the symbol. It is the necessity of this causal link that renders Mr. Putnam's theory of semantics an externalist one.
Putnam's first example is the ant tracing a line in the sand. He argues that the line cannot refer to Winston Churchill because the ant (being non-sentient) can have no intent that the line refer to Mr. Churchill; lacking the requisite intentionality, the ant cannot make the line refer. What Putnam does not explain, however, is how -- if the line in the sand has no meaning and does not refer -- we nevertheless come to recognize what the ant has traced as either a caricature of the man or a cursive rendering of his name.
Putnam's second example is a piece of paper covered with splotches of paint, falling on a planet that has never known trees. He argues that not only does the paint on the paper not refer to trees, even though we would recognize it as a good painting of a tree, but that any concept the aliens form based on what is depicted on that paper also cannot refer to trees. In Putnam's scenario there is neither any intentionality involved in the formation of the paint pattern on the paper, nor any causal link between trees and what is on the paper. So, according to his thesis, the pattern of paint on that paper cannot refer to trees, and neither can any image or concept formed by aliens who might view that paper. Again, though, if the paint-splotched paper does not refer to trees, Putnam does not explain how we come to know that it is a picture of a tree.
Putnam's third example is his Brain in a Vat scenario. Putnam's BIV scenario stipulates that all sentient brains are envatted, and that whatever experiences these brains have are generated by a computer that was never programmed. Putnam is careful to avoid any hint of a causal link by stipulating that the vats and the computer came into existence "randomly" -- specifically eliminating the possibility that the computer has been programmed, or that the envatted brains have any prior experience to draw upon.
Putnam claims that when an envatted brain uses the words "tree", "brain" or "vat" the only thing that these words can refer to are internal features of the brain-in-a-vat scenario. They cannot refer to the same sorts of things that we refer to -- namely real trees, real brains, and real vats. According to Putnam's scenario, there is neither any intentionality involved in the computer's generation of nerve pulse strings that the envatted brains interpret as images of trees, brains or vats, nor any causal link between real trees, brains and vats and the nerve impulse strings generated by that computer. Hence when an envatted brain contemplates whether or not it is a "brain in a vat", it cannot possibly mean what we mean by a brain or a vat. For the envatted brain, all causal linkages from the words "brain" and "vat" trace back through the nerve impulses, to computer generated impulses, to computer program features that generate the "perceived" images of trees, brains and vats. And by stipulation, stop there.
Tony Brueckner, in his "Brains in a Vat" entry for The Stanford Encyclopedia of Philosophy(1), has summarized Putnam's argument in an easily understandable form. I reproduce it here, along with some additional comments of my own that I feel make it more readily comprehensible in the current context.
Since Putnam claims that a brain in a vat cannot refer to anything beyond the inputs that the computer provides to it, whenever the envatted brain "speaks", it is not speaking English. It is speaking (or rather seeming to speak) "vat-ish" -- a language whose words refer only to elements of the sensory experiences that the envatted brain has. Specifically for this argument, the words "brain" and "vat" in English refer to real-world brains and vats, while in vat-ish the words "brain" and "vat" refer to the computer generated images of brains and vats. In the following argument, vat-ish referents are marked with an asterisk ("*").
(a) Either I am a BIV (and speak vat-ish) or I am a non-BIV (and speak English).
(b) If I am a BIV (and am speaking vat-ish), then my utterances of "I am a BIV" are true iff I am a brain* in a vat*.
(c) If I am a BIV (and speaking vat-ish), then I am not a brain* in a vat*. [This is the key step. Putnam argues that to be a "brain* in a vat*" one would have to be a computer generated image of a brain in a computer generated image of a vat. And obviously, I am not that, because I am not a computer generated image.]
(d) If I am a BIV (speaking vat-ish), then my utterances of "I am a BIV" are false. [from (b) and (c)]
(e) If I am a non-BIV (and thus speaking English), then my utterances of "I am a BIV" are true iff I am a brain in a vat.
(f) If I am a non-BIV (speaking English), then my utterances of "I am a BIV" are false. [from (e)]
(g) My utterances of "I am a BIV" are false. [from (a),(d),(f)]
(h) My utterances of "I am not a BIV" are true. [standard logical negation of (g)]
(i) If I am a non-BIV (and am speaking English), then my utterances of "I am a non-BIV" are true iff I am not a brain in a vat.
(j) I am not a BIV. [from (h) and (i)]
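The propositional skeleton of steps (a) through (g) can be checked mechanically. Below is a sketch in Lean; the proposition names U, BIV and BStar are my own labels for the claims above, not Putnam's or Brueckner's:

```lean
-- U     : my utterance "I am a BIV" is true
-- BIV   : I am a brain in a vat
-- BStar : I am a brain* in a vat* (the vat-ish referent)
theorem utterances_false (U BIV BStar : Prop)
    (hb : BIV → (U ↔ BStar))    -- (b) disquotation in vat-ish
    (hc : BIV → ¬BStar)         -- (c) I am not a computer generated image
    (he : ¬BIV → (U ↔ BIV))     -- (e) disquotation in English
    : ¬U := by                  -- (g) my utterances of "I am a BIV" are false
  intro hU
  by_cases h : BIV              -- (a) either I am a BIV or I am not
  · exact hc h ((hb h).mp hU)   -- (d) the vat-ish horn
  · exact h ((he h).mp hU)      -- (f) the English horn
```

Notably, only (g) falls out of the premises in this mechanical way: the final move from (h) and (i) to (j) cannot be reproduced here, since (i) is itself conditional on my not being a BIV.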
I see three problems with this argument. The first has to do with Brueckner's interpretation of Putnam's prose. I have read the relevant passage of Putnam's work(2) where he lays out his argument, and I cannot see where Brueckner goes wrong with his interpretation. But neither is it obvious that Brueckner's interpretation is correct. So it is unclear to me whether this first problem is Brueckner's or Putnam's.
The problem is that premise (a) above is not properly phrased. If I am a brain in a vat, then -- by Putnam's semantic externalism -- I cannot form the thought "I am a brain in a vat" in English. At best, I can only form the thought "I am a brain* in a vat*" (in vat-ish). (The same applies to the other steps in the argument as well, of course.) So premise (a) should not lay out the two options in English. As it is above, it appears to cleanly divide the universe of possibilities into two mutually exclusive alternatives. But this is in fact an error. The first step in the argument should be rephrased as:
(a) Either I am a brain* in a vat* (and speak vat-ish) or I am not a brain in a vat (and speak English).
Of course, it is now clear that the premise does not cleanly divide the universe of possibilities. One would have to present arguments, which Putnam does not, that this bifurcation is in fact mutually exclusive and jointly exhaustive. Hence, on this reading, the argument does not show that I am not a brain in a vat. It collapses because it remains possible that I am a brain in a vat, even though, according to Putnam, I cannot form that thought. The last step in the argument is true only if I am not a BIV (and speak English); it is not true if I am a brain in a vat and cannot conceive it. Putnam's argument rests on two contradictory premises: (a) that one can frame and understand the argument in English; and (b) that if one really is a brain in a vat, one cannot understand the English argument. Phrased in this way, the argument is clearly circular -- it assumes its own conclusion.
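That the rephrased premise no longer exhausts the possibilities can itself be verified. Treating "brain* in a vat*" and "brain in a vat" as independent propositions (my labels BStar and B below), the disjunction fails in exactly the case the sceptic cares about: being a brain in a vat without being a brain* in a vat*. A sketch in Lean:

```lean
-- BStar : I am a brain* in a vat*
-- B     : I am a brain in a vat
-- The rephrased premise "BStar ∨ ¬B" is not a logical truth: the
-- assignment B := True, BStar := False (a brain in a vat, but not a
-- brain* in a vat*) falsifies it.
example : ¬ (∀ (B BStar : Prop), BStar ∨ ¬B) := by
  intro h
  cases h True False with
  | inl hstar => exact hstar        -- here BStar was instantiated as False
  | inr hnb   => exact hnb trivial  -- here ¬B is ¬True, yet True holds
```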
The second problem with the argument arises from the manner in which Putnam applies his semantic externalism. By stipulation, the brains in Putnam's vats function in all respects as brains do in the real world. And more importantly, the experiences of Putnam's envatted brains are supposedly qualitatively indistinguishable from those of a non-BIV. That is a very important constraint. But Putnam appears to gloss over a necessary consequence of it.
If the experiences are indeed qualitatively indistinguishable, it would be logically possible to swap a vat-ish speaking envatted brain for an English speaking non-envatted brain and have no one be able to notice the swap. What now happens to Putnam's sense of reference? To what do the words "tree", "brain" and "vat" refer, once we have swapped the brains?
When I ponder whether I might be a brain in a vat, it seems that at one instant I can do so (when I am not in fact a brain in a vat) and at the next instant I cannot (because I am now a brain in a vat). But oops! It seems that now, all of a sudden, for reasons quite external to my experiences, I can no longer even entertain the thought that my brain might have been swapped with a previously envatted one. Putnam tries to get around this threat by initially framing his BIV scenario to stipulate that all brains are envatted. But if you accept that initial stipulation, then you cannot even entertain Putnam's argument, since you no longer speak English. More importantly, it voids his further stipulation that the experiences of a BIV would be qualitatively identical with those of a non-BIV. If, by stipulation, there are no such things as non-BIVs, then there is nothing with which the experiences of his envatted brains could be qualitatively identical.
The third problem arises from the fact that Putnam stipulates that the computer feeding experiences to the envatted brains was never programmed, so that he can ensure there is no suitable causal chain between trees, brains, and vats and the images that the envatted brains experience. In contradiction with that, he also stipulates that "Their images, words, etc., are qualitatively identical with images, words, etc., which do represent trees in our world". If that is the case, then what is the envatted brain perceiving an image of when it perceives a tree, brain or vat? If it is not an image of a tree, then when the envatted brain is swapped for a non-envatted brain, it will be unable to identify trees in its environment -- contrary to Putnam's stipulation that the experiences would be qualitatively identical. If it is an image of a tree, then the swapped brains would not be able to tell the difference, but the word "tree" would refer to trees regardless of whether the brain is envatted or not -- contrary to Putnam's semantic externalism.
What seems to drive Putnam's semantic externalism is the recognition that, for reasons that Putnam discusses, there is more to the meaning of a word such as "trees" than what the speaker of the word has in mind, and more to the meaning of an image on paper than what the painter has in mind. The first condition that Putnam mentions (that the issuer of the symbol must have the mental intent that the symbol refer to the object) captures the intuition that a word or image doesn't mean anything to the sender if the sender has no intention invested in the symbol. The causal condition that Putnam mentions seems aimed at ensuring that the sender of the symbol has a proper grasp of the concept that is being sent.
As Putnam argues, the meaning of a symbol is not intrinsic to the symbol. Words and images and other symbols themselves do not refer. And more or less in keeping with Putnam's externalist condition, they do not refer for other people just because the sender intends that they should. However, what semantic externalism seems to deny is that the meaning and reference of a symbol to the receiver is entirely within the head of the receiver -- two people can view a given symbol and extract quite different references and meanings depending on their respective conceptual context. Contra Putnam, the reference and meaning of any symbol is entirely in the eye of the beholder, not something external to the beholder.
When the ant draws the line in the sand, the line certainly has no meaning to the ant because the ant (the sender) has no intentionality invested in the symbol. When "beholding" the line it has drawn, the ant has no conceptual context within which to assign it any meaning. But to us who behold the line, the line does have meaning because (assuming we are familiar with Winston Churchill) we do have the necessary conceptual context. Contra Putnam, the line does refer to Winston Churchill (by being either a caricature or the cursive writing of his name) because we, the viewers, make the reference. To us who view that painted paper from afar, the paper does contain an image of a tree, and we draw the reference to trees. Those aliens who see just some paint splotches on paper and have no knowledge of trees cannot draw the reference because they lack the necessary conceptual context. So to them, the image does not refer to trees -- our respective conceptual contexts are different.
But more importantly, to an envatted brain the words "brain" and "vat" refer to things that it recognizes within its experiences. And because those experiences are, by stipulation, qualitatively indistinguishable from ours, they refer to the same sorts of things that we refer to. The way that an envatted brain becomes aware of brains may be through a computer generated image of a brain, but it is nonetheless an image of a brain. And the envatted brain's use of the word "brain" refers to just that kind of brain -- not to Putnam's "brain*".
Of course, this alternative theory of concepts, meaning, and reference does nothing to resolve the sceptical problem of brains in a vat. But there are other approaches that fare better than semantic externalism.
(1) Brueckner, Tony, "Brains in a Vat", The Stanford Encyclopedia of Philosophy (Winter 2004 Edition), Edward N. Zalta (ed.), <http://plato.stanford.edu/archives/win2004/entries/brain-vat/>.
(2) Putnam, Hilary, "Reason, Truth, and History", Cambridge University Press, 1982. Chapter 1, pp. 1-21.