Know - "(1) to perceive or understand clearly and with certainty; to have in the mind or memory as the result of experience, learning, or information; to understand and be able to use; to have personal experience of; (2) to feel certain."
Knowledge - "(1) acquaintance with facts, truths, or principles, as from study or investigation; acquaintance or familiarity gained by sight, experience, or report; (2) the fact or state of knowing; clear and certain perception of fact or truth; (3) that which is or may be known; (4) the body of truths or facts accumulated by mankind in the course of time."
Opinion - "(1) a belief or judgment that rests on grounds insufficient to produce certainty; (2) the expression of a personal attitude or judgment."
Certainty - "The state of being certain."
Belief - "Trust or confidence (in); acceptance (of thing, fact, statement, etc.) as true or existing."
Certain - "(1) free from doubt or reservation; (2) established as true or sure; unquestionable; (3) that may be depended on."
Information - "items of knowledge."
Datum - "(1) thing known or granted, assumption or premise from which inferences may be drawn. (2) facts or information, especially as the basis for inference."
We are all interested in figuring out what is the best thing to do. We are all interested in learning the answers to our questions. But to choose between alternative things to do, we have to make some forecasts as to the likely consequences of each alternative, and evaluate the desirability of those consequences on some scale of better or worse. And to make use of the answers we find to the questions we ask, we have to judge how dependable our informants are.
If you and I want to discuss what it is we each judge to be what we think is the best thing to do, or what we claim is the correct answer to the question, then we need to understand the words that we use. We each support our judgements and claims by offering reasons. And when we offer these reasons, we describe them as "things that we know". So, in order to remove one potential source of conflict between us, it is advantageous for us together to agree on just what it means to "know" something. For this reason, exploring the meaning of our concept of "knowledge" has become fundamental to the western philosophical tradition.
Whether it is you or I, John or Jane, when we claim to know something, what is it that we mean by that claim? What is the best way to Larissa? Where does the tiger lurk? Should Jack and Jill go up the hill or down the dale to find water? Why is knowing the answer to these questions so much more desirable than simply believing the answer? So, if we place so much store in someone who claims to "know", just what does it mean to claim "I know" something? Under what circumstances is such a claim proper and valid? Under what conditions would we say that such a claim is invalid or cheating?
Unlike a "mere" belief, something that we "know" isn't just a psychological or mental state that cannot be challenged. To offer as a reason something that we "know" is to offer a reason with a special normative status -- a belief that merits a special kind of positive evaluation. Consider the following series of statements:
|I think that there is a God.||I believe that there is a God.||I know that there is a God.|
|I think that there is no God.||I believe that there is no God.||I know that there is no God.|
|I think that there are UFOs.||I believe that there are UFOs.||I know that there are UFOs.|
|I think that she is my friend.||I believe that she is my friend.||I know that she is my friend.|
|I think the tiger is behind that tree over there.||I believe that the tiger is behind that tree over there.||I know that the tiger is behind that tree over there.|
As we go from "think" to "believe" to "know", the degree of assurance that the statement broadcasts increases. An assertion couched in terms of "think" or "believe" allows for a large degree of doubt about the truth of the statement, whereas an assertion of something that we "know" communicates a much higher degree of assurance.
|No evidence either way. P is as likely to be true as false.||No direct evidence, but some related reasons for suspecting the truth of P, and no contrary reasons.||Some evidence to support the truth of P, with no contrary evidence.||Adequate justification for guaranteeing the truth of P to others.||A lot of positive evidence, and no contrary evidence. Or, no logically possible alternative.|
|"Silly Wild-Assed Guess"||"I think"||"I believe"||"I know"||"Certain Knowledge"|
|There is a God.||I think there is extraterrestrial life out there.||I believe that she is my friend.||I know that London is in England.||I am certain that 2+2=4.|
Whether what we believe counts as "knowledge" is ultimately a subjective evaluation. Somewhere along that continuum between "silly wild-assed guess" and "certain knowledge", a line will be drawn. On one side of it, we maintain that our belief has inadequate epistemic grounding; on the other, we maintain that our belief qualifies as "knowledge". Close to the line, we will be uncertain which is which. Further from the line, we will have a great deal of confidence in which is which. But where a particular belief falls along that continuum, relative to the location of that line, is largely a subjective evaluation.
Science and our western philosophical tradition began when our attempts to understand the world around us were decoupled from myth and tradition. Beginning in ancient Greece (perhaps with Thales of Miletus, circa 624-546 B.C., but certainly by the time of Plato, circa 428-347 B.C.), the western philosophical tradition has been one of analysis and criticism: a detailed examination of the elements or structure of something, the separation of something into its constituent elements, and the judging and explaining of the importance and meaning of other people's writings. Myths and traditions are "taken on faith", "taken as given", deemed true because someone or something (usually "the Gods") says so. Myths and traditions simply are. They are the words of "Authority" and, as such, they cannot be analyzed or challenged. They cannot be understood in any greater depth. They can sometimes be interpreted and re-interpreted, but they cannot be critically examined, and cannot be compared with alternatives. Reasoned argument, analysis, and criticism can only be used to distinguish between alternatives if the reasons that justify those beliefs can be explored. Scientific and (western) philosophical thinking demands reasons, justifications, rationales. When it comes to questions about Nature and the origins of the Universe, what separates myth and tradition from scientific and (western) philosophical thought is the demand for, and existence of, some form of warrant. And we cannot explore reasons and their warrants unless we understand the words we use to describe them.
"Knowledge", then, is a species of belief. It is clearly an honorific title bestowed upon only some beliefs. It promises a positive normative evaluation that separates some beliefs from others -- it promises some kind of sanction, authorization, guarantee, security, ground, justification, confirmation, proof. It is a success-state for beliefs. The question is why we have this distinction between things we believe, and things we believe so strongly that we call them by a separate name -- "knowledge". And, of course, just what are the success criteria that qualify a belief as knowledge?
When it comes to our struggle for survival, it pays (on average, and in the long run) to know (i.e. have some credible assurance) behind which tree the tiger lurks. To base our survival on an unwarranted (i.e. unassured) belief is to run a greater risk of becoming lunch rather than enjoying lunch. We have the concept "knowledge" to distinguish those beliefs that are somehow more likely to be useful -- more likely to be true -- more likely to yield the expected results when relied upon -- more likely to offer a survival advantage. To know behind which tree the tiger lurks is therefore to have some form of warrant for the belief -- to have some special factors that render the belief more likely to be true than other unwarranted beliefs. And it pays to demand that warrant from others.
Knowledge is power. The ancient Greek philosophers considered knowledge to be power over oneself. But to modern man, knowledge is power over the world. Knowledge is the power to control and manipulate our environment. If you need to build a bridge across a stream, it is more likely to be a successful rather than a fatal undertaking if you rely on some knowledge about the best ways to build it rather than some unwarranted beliefs on how to do it. Which bridge builder are you going to trust with your life when you want to get across the gorge? The one who says he thinks the span is strong enough, the one who says he believes the span is strong enough, or the one who says he knows the span is strong enough?
We begin with the task of clarification. There are several ways in which we describe cases of knowledge. To each of these ways there might correspond a theory of that kind of knowledge -
(i) Acquaintance or familiarity with some person, place, or thing
(ii) Skill or ability at some activity
(iii) Recognition of information as being correct
The first sense of meaning is exemplified by such statements as "I know John", "I know the answer when I see it", or "I know New York". This sense of the word "know" implies a degree of acquaintance or familiarity with the object of the verb, and an ability to recognize it again on re-encounter. This is "knowledge of . . .". Acquaintance knowledge implies a certain degree of skill or ability, and is relative in nature. It also implies that you have encountered the known thing before. I can know John or New York more or less than you do, or than I did or will. And I can know or not know depending on whose standards of acquaintanceship are to be applied. Over time, my level of knowledge can increase or decrease. To someone who is not familiar with New York, I can claim to know New York. But to someone who is more familiar than I with New York, perhaps I cannot properly claim to know New York.
How do I determine whether or not I "know" any particular thing in this sense? The key is that I must be able to recognize the object upon re-encounter with some degree of accuracy. If I have never encountered the object before, I cannot claim to have any acquaintance with it. If I am not sufficiently familiar with the object to meet the required standard of re-recognition, then I cannot be said to have the knowledge in question. Knowledge by acquaintance has a long association with philosophical enquiry. It formed the core of Plato's theory of knowledge, and has occupied the attention of numerous philosophers since, including Bertrand Russell and A. J. Ayer. But this sense of the concept "to know" is not the subject of this essay.
The second sense of meaning is exemplified by such examples as "I know the violin", or (again) "I know New York". This is "knowledge how . . ." and implies a much higher degree of skill or ability in the topic that is the object of the verb. The sense of the word in this context implies some special skill or ability, not merely re-recognition upon re-acquaintance. I can claim to know the violin not because I can recognize a violin when I see one, but because I can play the violin to some standard. Or I can claim to know New York if I can find most obscure landmarks or restaurants in (say) Greenwich Village or Chinatown. Ability knowledge is also relative in nature, and can change over time or according to whose standards are to be applied.
How do I determine whether or not I can properly claim to "know" any particular thing in this sense? The key is that I must be able to employ my special skill or ability to some much higher standard. Thus, if I can play the violin to my mother's standards, then to her I can say that "I know the violin". But if I cannot play well enough to satisfy anyone else, then I cannot say to them that "I know the violin". As you can see, like acquaintance knowledge, whether or not I "know" something in this sense is dependent upon the standards that are being applied. And these standards can vary, depending upon the audience for my statement. But this second sense of the concept "to know" also is not the subject of this essay.
It is the third sense of meaning that is the focus of this essay. It is "knowledge that . . .", or propositional knowledge. This is the kind of knowledge that appears in statements that have the general format "S knows that P", where "S" is any subject of interest, and "P" is any truth statement or proposition. I introduced a sample of such statements above in that little table. To "know that . . ." is the same kind of mental construct as "think that . . ." and "believe that . . .". Each involves a statement or other assertion about the world around us. And each expresses something about the truth of that statement. But to "know that . . ." adds something special over and above "believe that . . .".
As you can see, the propositional sense of the concept of "knowledge" differs quite dramatically from the previous two. The most significant difference is that the use of the concept is not relative. The knowledge possessed by S either does or does not include P. One cannot say I know P more today than I did yesterday, or that you know P better than I do. S either knows or does not know that P. So how do we determine whether or not "S knows that P"? Or, alternatively, how do we determine whether the knowledge possessed by S does or does not include P? Equivalently, how do I determine whether I can properly claim to know that P?
For example, consider the statements "John knows that the ball is red", or "I know that the Sun will come up tomorrow", or "The teacher knows that one of her three students gave her the apple", or "I know there is a God", or "I know there is no God", or "I know there are UFOs", or "I know that she is my friend", or "I know that the tiger is behind that tree over there".
In each case, if I state that I know that P, am I expressing an opinion, a thought, a belief, or do I have knowledge? Just what is the difference between a person's belief that the ball is red, and a person's knowledge that the ball is red? What has to be added to the belief in order to qualify it as "knowledge"? What conditions does the belief have to meet in order to qualify for the special honorific of "knowledge"?
Notice that knowledge-how (ability) and knowledge-of (acquaintance) are not completely independent of knowledge-that. Acquaintance and ability knowledge both involve significant amounts of propositional knowledge. And on some understandings of knowledge-that, it involves a significant amount of knowledge-how -- how to discriminate one thing from another.
In order to compile a "philosophical" (as opposed to an ostensive or dictionary) definition, philosophers analyze the common usage of the words "knowledge" and "know". Obviously, in common practice we mark the distinction between a belief and knowledge by how we use these words. We already have an ability-knowledge of how to use the word (and concept) "know". We all readily employ the concept of knowledge in our communications with other people, and even when talking to ourselves. And we each readily correct the way other people use the concept. Because language is a social convention, we must all therefore employ the concept with a reasonable degree of consensus and consistency. So our intuitive reactions as to which hypothetical scenarios involve knowledge rather than beliefs, and vice versa, can be used as a guide in discovering what the conditions are that we are employing.
The process of determining just what the difference is between S believing that P and S knowing that P, therefore, involves identifying the conditions that are severally necessary and jointly sufficient to warrant a claim to knowledge. By "severally necessary" we mean that all the conditions we identify are required to make some belief qualify as knowledge. And by "jointly sufficient" we mean that when all the conditions we identify are fulfilled, then the belief at issue qualifies as knowledge.
Among philosophers, from Plato to the present, there has been more or less complete consensus that a valid claim to knowledge excludes three things:
(i) Ignorance -- if you lack the information, you cannot claim to know.
(ii) Error -- if you are wrong about the matter, you cannot claim to know.
(iii) Opinion -- if you have no special grounds, you cannot claim to know.
But saying that does not really help us understand just what it is that separates a belief from knowledge. From the above three exclusions, all we can conclude is that in order to be knowledge, your belief must involve the appropriate information, not be wrong, and be based on appropriate grounds. But these limited conclusions have been recognized from the time of Plato. Since that time, the epistemological study of knowledge has exploded the search for necessary and sufficient conditions into a number of more detailed problems.
In order to begin addressing these problems, it is best to present the "Traditional" theory of what knowledge is.
The most widely recognized theory of knowledge is what is called the "Tri-Partite" or "Justified True Belief" (JTB) theory. Its first documented appearance was in Plato's Theaetetus dialogue. All of the other theories of what knowledge is, and most philosophical discussion of how the problems just described are to be addressed, derive from the various problems and questions that surround this "standard" or archetypal understanding of what knowledge is.
The Traditional or "Standard" JTB theory posits three conditions that are severally necessary and jointly sufficient for S to know that P, mirroring the three things that are almost universally recognized as being excluded from knowledge -- ignorance, error, and opinion. (The phrase "severally necessary and jointly sufficient" is usually translated as "if and only if" -- abbreviated as "iff".)
(JTB) S knows that P iff:
(1) P is true;
(2) S believes that P; and
(3) S is justified in believing that P.
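As a purely illustrative sketch (the boolean encoding, function name, and inputs are mine, and nothing in the philosophical argument depends on them), the three JTB conditions can be rendered as a conjunction that holds exactly when all three conditions are met:

```python
def knows_jtb(p_is_true: bool, s_believes_p: bool, s_is_justified: bool) -> bool:
    """Toy model of the JTB analysis: 'S knows that P' holds iff
    (1) P is true, (2) S believes that P, and
    (3) S is justified in believing that P."""
    return p_is_true and s_believes_p and s_is_justified

# Each condition is severally necessary: knocking out any one defeats the claim.
assert knows_jtb(True, True, True)          # all three conditions met: knowledge
assert not knows_jtb(False, True, True)     # error: P is false
assert not knows_jtb(True, False, True)     # ignorance: S does not believe that P
assert not knows_jtb(True, True, False)     # mere opinion: no justification
```

The conjunction also makes "jointly sufficient" visible: there is no fourth input that could defeat the claim once all three conditions hold.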
The first condition for knowledge within the standard JTB theory is that the statement "P" itself must be true. This condition excludes all those beliefs that are in error. Obviously, it is possible that I might believe some proposition that is not in fact true. But it would certainly be counter-intuitive for me to claim to know things that are not in fact the case. Or more correctly, since I undoubtedly do believe many things that are not in fact true, it would be an improper use of "know" for me to claim to know things that are not true. It is common practice to expect that for me to claim to know that London is in England, it would be necessary that London is in fact in England. If I were to claim to know that London is, instead, in Ireland, most people would claim that I am mistaken in my belief, and that I do not in fact know that. If the statement is not true (in other words, the ball is not in fact red), then we do not say that "John knows" the ball is red. We say instead that John merely believes that the ball is red, and that his belief is not in accordance with the facts. By the way we commonly use the words, a belief can be false, but knowledge cannot. However, I will have more to say about the "Truth Condition" below.
The second condition for knowledge within the standard JTB theory is that S must believe that P is true. Again, it would be an improper use of "know" for me to claim to know something that I do not believe is the case. The teacher is not said to "know" that one of her students gave her the apple, unless she firmly believes that one of them did, in fact, give her the apple. If John believes that the ball is blue, then we do not say "John knows the ball is red", even if the ball is in fact red. We employ the concept of "belief" here in its stronger sense of "internalized operating basis", rather than its weaker sense of "pragmatically adopting an unproved thesis". Thus S must believe that P in the sense that S makes behavioural choices on the basis that P, rather than in the sense that S merely does not dispute that P. So if S does not believe, accept, and operate on the basis that P, then rather than saying "S knows that P" we instead say "S accepts that P".
The third criterion for knowledge within the standard JTB theory involves justification. It is the role of justification to separate knowledge from ignorance, and accident. There are many propositions that I might believe on a whim, that just happen to be accidentally true. No one would accept my claim to know any of these propositions. Something extra is clearly required. Not only must John believe that the ball is red, and the ball must in fact be red, but John must be justified in his belief. If John had no information other than his guess that the ball is red, then regardless of the fact that the ball is indeed red, we would not say that "John knows the ball is red". We would instead say that John thinks the ball is red, but does not know the ball is red. There must be some evidence that John is aware of, some reason, some justification, to justify John's belief that the ball is red.
However, it is the issue of just what constitutes justification, just what this "justification" actually means, that has occupied epistemologists at least since the time of Plato's dialogue between Socrates and Meno. There are many different theories of just what "justification" means in the context of the extra condition that needs to be added to true beliefs to warrant calling them knowledge, and to successfully exclude ignorance and accident. Investigating the various possible alternatives for the meaning of justification, and their inevitable consequences, will form the bulk of this essay.
As a result of all of these (and other) issues, the standard JTB definition of knowledge offered above is no longer considered an adequate definition of knowledge. Firstly, there are some well-respected theories of knowledge that would deny that "knowing that P" involves a "justified true belief", or that it requires S to believe that P, or that it involves anything like "justification". I will be exploring some of the better known of these alternative theories later in the essay. Secondly, even accepting the premises of the "Justified True Belief" theory of knowledge, these three conditions as stated do not provide an adequate understanding of just what is meant by either "true" or "justified". Different philosophers have offered different interpretations of both of these concepts, and I will be exploring some of those alternatives below. Thirdly, however "truth" and "justification" are to be understood, there are three additional challenges that are not adequately addressed by this simple model. These challenges are Agrippan Scepticism, Cartesian Scepticism, and the "Gettier Problem" (after Edmund Gettier, who first explored this issue in 1963)(a). I will deal with each of these challenges in some detail.
For the time being, however, we'll maintain the standard JTB model of knowledge, with all of its problems, and examine two of its three conditions in greater detail.
I said above that it would be an improper use of "know" for me to claim to know something that I do not believe is the case. And the JTB theory of knowledge is based on just that intuition. But there have been challenges to that intuition. The first of these is typified by a scenario called the "Diffident Scholar".
The diffident scholar studies some subject thoroughly, and answers exam questions on that subject correctly -- say that the mass of the tau neutrino is 18.2 MeV. However, because of nerves or pressure, he does not believe that he knows the answer and thinks that he is only guessing. The answer of 18.2 MeV "looks right", but he can bring to mind no reasons to support that intuition. He is guessing, but does not believe that the mass of the tau neutrino is 18.2 MeV. If asked, he would say that he does not believe that he gave the correct answer. It is argued from this example that there are adequate grounds to claim that, contrary to the subject's own claim, he does actually know the correct answer, despite the missing belief.
This example highlights the distinction between the first and third person sense of having reasons for one's beliefs. And it suggests that there are two kinds of answers as to what constitutes "good reasons" for one's beliefs. One can say that the diffident scholar is not in fact aware of any good reasons for his hunch about the mass of the tau neutrino, and hence very properly does not claim to know the answer to the exam question. And in fact he is so bereft of reasons in support of that hunch that he does not even believe the answer he has provided is correct. On the other hand, we who have access to information that the diffident scholar has forgotten, do have sufficient reasons for believing that he has provided the correct answer. So we can very properly claim to know that the mass of the tau neutrino is 18.2 MeV. But can we say that the diffident scholar actually knows the right answer even though he does not believe it, and even though he does not have any good reasons for believing it? Most people would say no, but some philosophers have argued that he does.
The other way of dealing with scenarios like the diffident scholar is to argue that belief has nothing to do with knowledge. Down this road we find the theories of knowledge that fall outside of the JTB family of theories. And I will address a few of those later.
Knowledge requires judgements -- thoughts or beliefs that can be true or false. But that simple statement leaves open the question of just what that is supposed to mean. We are looking for a state of affairs, which can be recognized when it obtains (or recognized at least some of the time -- this is up for debate), in which S believes that P and has good justification for believing that P but, in our view, is wrong about P. That is how we are able to say things like, "S thinks she knows, but she's wrong".
In order to explore the significance of the truth condition further, we need to consider the ramifications of a theory of knowledge that omits this truth condition. For this hypothetical model of knowledge we would then have --
(JB) S knows that P iff:
(a) S believes that P; and
(b) S is justified in believing that P.
Given the context of an investigation into the necessity of the truth condition, we can temporarily ignore any questions relating to S believing that P, or the detailed nature of the "justification" involved. However, because we are ignoring the details, it does need to be made clear that the following discussion will assume that the standards of justification we are passing over are not so stringent as to rule out the logical possibility of a claim being suitably justified and yet false. There are some approaches to justification that demand just such infallibility (see below). If one does choose to adopt an "infallibilist" model of justification, then the JTB and the JB models are logically equivalent, since "proper" justification for the belief that P would guarantee the truth of P. For the purposes of this section of the essay, therefore, I will assume that the justification involved in both the JTB model and the JB model is "fallibilist".
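The claim that infallibilist justification collapses JB into JTB can be checked mechanically in a toy boolean encoding (my own, purely for illustration): if justification guarantees truth, then every state in which S is justified but P is false is ruled out, and over all remaining states the two models agree.

```python
def knows_jb(s_believes_p: bool, s_is_justified: bool) -> bool:
    """Toy JB model: justified belief, with no separate truth condition."""
    return s_believes_p and s_is_justified

def knows_jtb(p_is_true: bool, s_believes_p: bool, s_is_justified: bool) -> bool:
    """Toy JTB model: justified true belief."""
    return p_is_true and s_believes_p and s_is_justified

# Infallibilism: justification guarantees truth, so "justified but false"
# states are impossible. Over the remaining states, JB and JTB always agree.
for believes in (True, False):
    for justified in (True, False):
        for p_true in (True, False):
            if justified and not p_true:
                continue  # excluded by the infallibilist guarantee
            assert knows_jb(believes, justified) == knows_jtb(p_true, believes, justified)
```

Drop the `continue` (i.e. allow fallibilist justification) and the equivalence fails, which is exactly why the truth condition becomes a live question.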
The JTB model's truth condition is usually understood from a truth-realist perspective, where the truth of P is evidence transcendent in an absolute sense. However, the position of truth-anti-realism is that there is no such thing as evidence transcendent truth. The anti-realist truth-status of P is determined by the evidence. If P is beyond any evidence then its truth-status is undefined. This might initially appear to make the truth-condition of the JTB model superfluous, with the presumption that the evidence that dictates the anti-realist truth-status of P would be properly contained within the details of the justification condition. But this is generally not so. Both the JB and the JTB models of knowledge are specific to a single claim to knowledge. Only the most subjectivist (solipsistic?) of the truth-anti-realist theories would maintain that it is each individual's evidence that dictates the truth-status of P (for S). The rest of the truth-anti-realist theories stipulate that it is the coherence of the evidence available (in practice, or in principle, or at some limit) to some relevant population that dictates the truth-status of P (for that population). So only for the most subjectivist of truth-anti-realists would JB be logically equivalent to JTB. For all other truth-anti-realist theories, the truth of P will be as evidence transcendent for S as it would be for any truth-realist theorist. Therefore, for the purposes of this section of the essay, I will treat the truth condition in a manner neutral between truth-realism and truth-anti-realism, while specifically excluding extreme subjectivist truth-anti-realism.
Given that clarification of context, the first thing notable about JB as a theory of knowledge is that it perfectly captures the point of view of S when S makes a claim to know that P. It is usually assumed within the JTB theory of knowledge that whether P is true or not is evidence transcendent (at least with respect to the evidence available to S then and there). In other words, in the JTB model of knowledge, the epistemological status of S with regards to P does not determine the truth status of P. Therefore, from the perspective of S, all that S has epistemologically available is the belief that P, and any justifying rationale that P. On that foundation alone is based any claim by S to know that P. The actual truth-status of P is not available to S. If this is the case from the perspective of S, what purpose does the truth-condition serve? What function, if any, does the truth condition of JTB contribute to our understanding of "knowledge" that is not satisfied by JB?
Suppose Alice tells me she knows that Bob is in the kitchen. From that and JB, I can infer that Alice believes that Bob is in the kitchen, and that Alice has judged that she has adequate justification for that belief to qualify as knowledge. From the JTB model I could also infer that it is true that Bob is in the kitchen. But what additional information have I gained from that last inference? Especially when I must consider the fact that whether or not Bob is in truth in the kitchen is an evidence transcendent fact for me as well as for Alice. Alice could believe Bob is in the kitchen, and (in her judgement) have thoroughly adequate justification for that belief, and yet nevertheless be wrong about it. And so could I, even if I went and looked. (It is actually not Bob that Alice and I see in the kitchen when we look. It is his identical twin brother Ben. And neither Alice nor I is aware that Bob has an identical twin brother.) So by either definition of knowledge, if Alice tells me Bob is in the kitchen, I am in exactly the same predicament. Assuming that I do not go and look for myself, I can only claim to know that Bob is in the kitchen if I trust Alice's judgement about her own justification. In a single person scenario, therefore, from the perspective of S, the truth condition adds nothing meaningful to our understanding of knowledge. I can properly claim to know only what I believe, and judge that I have adequate justification for believing. I myself have no access to the truth-status of P. Likewise, in a two person scenario, the truth condition also adds nothing meaningful to our understanding. I am fully aware that Alice can properly claim to know only what she believes, and judges that she has adequate justification for believing. Neither of us has access to the truth-status of P.
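Under a fallibilist reading, the twin-brother scenario is exactly the sort of state where a JB model and a JTB model come apart. In the same illustrative boolean encoding (again my own, not part of the argument):

```python
def knows_jb(s_believes_p: bool, s_is_justified: bool) -> bool:
    """Toy JB model: justified belief, no truth condition."""
    return s_believes_p and s_is_justified

def knows_jtb(p_is_true: bool, s_believes_p: bool, s_is_justified: bool) -> bool:
    """Toy JTB model: justified true belief."""
    return p_is_true and s_believes_p and s_is_justified

# Alice believes Bob is in the kitchen and, by her own lights, is well
# justified (she can see someone who looks exactly like Bob). But it is
# actually his identical twin Ben, so P is false.
p_is_true, believes, justified = False, True, True

assert knows_jb(believes, justified)                  # JB: Alice counts as knowing
assert not knows_jtb(p_is_true, believes, justified)  # JTB: she does not know
```

The point of the paragraph above survives the divergence: since neither Alice nor I can evaluate `p_is_true` from the inside, the extra JTB verdict is not one either of us can actually compute.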
Which brings us to the second thing notable about JB as a definition of knowledge: it is a purely internalist construct. Only Alice can tell whether or not she believes that Bob is in the kitchen. All I can do is observe her behaviour and speech. I can infer from these the likelihood that she believes what she tells me, but it remains possible that she is intentionally misleading me. I have no basis of authority from which to challenge her claim to believe that Bob is in the kitchen. Similarly, only Alice can judge whether her justification for a claim to knowledge is sufficient. I am not (under normal circumstances) party to the details that she uses to justify her belief. I cannot, for example, see Bob in the kitchen from my location, and have only her claim to justify a belief that she can and does see Bob in the kitchen. So I am forced to take any claim by Alice to know something at face value. I have no basis of authority from which to challenge the adequacy of her justification. By JB, therefore, any claim of S to know that P is necessarily infallible and unchallengeable.
Now, of course, this is not completely absolute. Depending on those passed-over details on the nature of justification, it remains quite possible that for a JB claim to knowledge to be "properly" justified, Alice must be able to offer supporting rationales that I can examine and challenge. But again, depending on those details, not necessarily. If her supporting justification is perceptual evidence, for example, then, not being able to share her perceptions, I cannot challenge her perceptual judgements. Barring a lengthy (and energy-expensive) conversational exploration of her supporting rationales and criteria of judgement, therefore, I am likely to be forced to take her word that she has the necessary sufficiency of justification. I am not normally going to be in a position to challenge her claim. Which means that, in general, under a JB model of knowledge, any claim of S to know that P must be considered infallible and unchallengeable -- in practice at least, if not in principle.
In order to address the (more or less) unchallengeable nature of JB claims to knowledge, what the JTB model adds is a thoroughly "externalist" condition on S's claim to know that P. By design, it is not a condition that S has any access to. But it is a condition that the rest of the population can impose on S. This externalist nature of the truth condition only starts to make a contribution to matters when claims to knowledge are considered in multi-party iterative over-time scenarios. The truth-status of P, while evidence-transcendent for any one party, is asymptotically approachable by a multitude over time (in either truth-realist or truth-anti-realist terms).(2)
The truth-condition of the JTB model is intended to capture this asymptotically approachable limit. Alice's claim to know that Bob is in the kitchen therefore takes on greater significance when considered under JTB than under JB. Under JTB, Alice's claim is not just that she has judged her justification sufficient for a claim to knowledge, but also that she has warranted that, as the asymptotic limit of evidence is approached by a relevant population over time, she will not be proved wrong. This is the key additional feature that the JTB model's truth condition adds over the JB model -- a warrant (an assurance) that despite the fallibility of justification, the claim is nonetheless true, and will never be proved wrong.
While this additional warrant contributes little of meaning in one- or two-party single interactions, it becomes significant under more populous and interaction-iterative scenarios. Because under these extended conditions, it is not just the justification of the knowledge claimant that matters, it is the coherence of the entire accumulated body of knowledge of the population involved. In other words, in the case of the JTB model, not only must S believe that P, and have personally adequate justification to qualify that belief as knowledge, but that belief must properly cohere with the entire body of knowledge of the relevant population. This is a significant addition because it expands the scope of justification from the claimant's personal judgement to the entire population's collective judgement. By thus raising the bar of justification, it makes it more likely that S is not ultimately proved wrong in his claim that P. And that is where the truth-condition pays its way.
In the normal course of events, it might not make much difference whether Alice is ultimately proved wrong in her claim to know that Bob is in the kitchen. But if Alice is claiming to know behind which bush lurks the tiger, it becomes of vital importance to the tiger's intended lunch whether she is ultimately proved wrong in her claim or not. Under such evolutionary survival conditions, the extra warrant that a JTB claim to knowledge offers, over a less assured JB claim, is significant and worth having. Under a JB claim to knowledge, no one else who might have some input on the location of the tiger can properly raise any timely objections to Alice's claim to know where it lurks. Under a JB model of knowledge, Alice believes the tiger is over there, and she judges that she has sufficient justification to qualify that belief as knowledge. No one else can challenge her claim without a time and energy expensive investigation of her rationales and judgement criteria. But under a JTB claim to knowledge, if anyone within influence of her claim has any contradictory information, they can immediately (and much less expensively) challenge her claim to know where the tiger lurks. If my survival depends on the validity of that claim, if I am the tiger's intended lunch, then I want the additional assurance that the JTB model of knowledge offers. I want the expectation that others will feel free to judge the accuracy of Alice's claim to know where the tiger lurks.
The difference between the JB and JTB models of knowledge is sufficiently meaningful that the English language has provided separate words for what they define. The JTB model defines what we call "knowledge", and the JB model defines what we call "opinion". Without the truth-condition, JB claims are reduced to opinions. One's opinions are one's opinions, infallible and unchallengeable, and whether they are true or not is largely irrelevant. One cannot properly challenge another's opinions as invalid, improperly formed, or inadequately justified. One can only challenge them as being right (true) or wrong (not-true). And that, of course, is an appeal to the truth-condition. If our opinions are right (i.e. true), and suitably justified, then we elevate their status to that of "knowledge". We can conclude, therefore, that (with the exception of extreme subjectivist truth-anti-realists) the truth of P (in either truth-realist or truth-anti-realist terms) is indeed necessary for S to know that P. A claim to know that P is a freely challengeable warrant that P will not ultimately be proved not-true.
In thinking about what it means for a belief to be "justified", we first need to address just what it is that is supposed to be justified: S's believing that P, or the proposition that P. The former is first-person justification (also referred to as "personal justification", or "epistemic responsibility"). If it is S's believing that P that is justified, that means that S is aware of the grounds that justify his belief that P. The latter is third-person justification (also referred to as "general justification" or "epistemic grounding"). If it is P that is justified, that means that there exist adequate grounds for a belief that P, whether or not the believer is aware of those grounds.
In the first-person sense, for S to be justified in believing that P, S must live up to certain "epistemic standards" of behaviour. Epistemic responsibility is an essential component of rationality. The point of setting standards for epistemic responsibility is to reduce the risk of error. For example, the subject must be aware of some evidence that suggests that P is more likely, and must not ignore any evidence that suggests that P is less likely. In the first-person sense, the subject can tell whether or not he has lived up to the appropriate epistemic standards. In the third-person sense, S is judged by someone else to be justified in believing that P if there exists enough supporting evidence that makes P more likely, and no defeater evidence that makes P less likely. In the third-person sense, the subject cannot tell whether there exist adequate grounds for his belief.
The distinction is relevant because all theories of knowledge that admit that knowledge involves beliefs (as I mentioned above, not all do) grant that there must be proper epistemic grounding (third-person sense of justification) for the belief that P. However, the various theories of knowledge that have been proposed can be divided according to whether they place their emphasis on the first-person "epistemic responsibility" sense of justification, or on the third-person "epistemic grounding" sense of justification. Internalist theories, like the family of Justified True Belief theories, focus on the "epistemic responsibility" sense of justification, and would argue that knowledge requires one to have (be aware of) good reasons for one's beliefs. Externalist theories, on the other hand, focus on the epistemic grounding sense of justification, often to the point of excluding entirely the first-person sense. While all justification-externalist theories would agree that for knowledge there must be good reasons for one's beliefs, a pure externalist theory of knowledge would maintain further that knowledge does not involve one having good reasons for one's beliefs.
However, from the perspective of the traditional JTB model, one is epistemically responsible in believing some proposition only if one's belief is based on having adequate evidence. This requirement makes the first-person sense of evidential justification in the traditional JTB model so fundamental that it tends to mask any distinction between the two senses of justification.
So, in order to capture these two separate senses in which P can be justified, and keep the distinction clear for the duration of the following discussion, the traditional JTB definition of knowledge really should be expanded to:
(JTB*) S knows that P iff
    (1) P is true;
    (2) S believes that P; and
    (3p) S is personally justified in believing that P; and
    (3g) S's belief that P is based on adequate grounding.
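For readers who find symbolic shorthand helpful, the four conditions can be compressed into a single biconditional. The doxastic-logic operators below are an illustrative gloss of my own, not notation used elsewhere in this essay:

```latex
% K_S P     : S knows that P
% B_S P     : S believes that P
% J^{p}_S P : S is personally justified in believing that P (first-person)
% J^{g}_S P : there is adequate grounding for S's belief that P (third-person)
K_S P \;\equiv\; P \,\wedge\, B_S P \,\wedge\, J^{p}_S P \,\wedge\, J^{g}_S P
```

The point of splitting condition (3) into (3p) and (3g) is visible here: an internalist theory stresses the first conjunct of justification, an externalist theory the second.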
Whether justification is considered in the first-person sense or the third person sense, there are two models of understanding what constitutes "proper" or "adequate" or "sufficient" justification. There is the "prior grounding" model and the "default and challenge" model.
The "prior grounding" model of justification demands that S be aware of (or at least have access to) the reasons that constitute his justification for believing that P in order to validly claim to know that P. The prior-grounding model is therefore thoroughly internalist.
The Prior Grounding model of justification includes four mutually reinforcing principles:-
A consequence of the Prior-Grounding model of justification is that any claim by S to know that P can be subject to an open-ended challenge for the reasons S has for believing that P -- for the epistemic grounding of which S is aware. Since a claim to knowledge, on this model of justification, is a claim to be aware of the adequate grounding for the belief, it is an open invitation to be asked for that grounding, and an open promise that when asked the grounding can be provided.
The "default and challenge" model, on the other hand, takes an entirely different approach. It grants default warrant to some beliefs until and unless challenged. It places the priority on epistemic responsibility, and incorporates epistemic grounding only when, and to the extent that, the claim to know that P is specifically challenged. So, in terms of this model of knowledge, one does not necessarily have to have (be consciously aware of) good reasons for one's beliefs to qualify them as knowledge. Although the epistemic grounding must nevertheless exist.
A consequence of the default and challenge model, and a key difference with respect to the prior grounding model, is that you can claim to know P without also knowing that you know P. You can judge that you have proper default reasons for believing that P, without also being aware of all the reasons that exist for believing that P.
From the default and challenge model of justification, in claiming knowledge we are not supposing ourselves to have access to an impossible God's Eye view of the informational environment. Rather, we are issuing a strong and open-ended guarantee of the correctness of what we believe. We are stating that we have sufficient confidence in the truth of our belief to guarantee that we will not be eventually proved wrong. We are betting that there is no non-misleading counter evidence to our belief.
Consequently, from the basis of the default and challenge model of justification, a claim by S to know that P is not subject to the same kind of an open-ended challenge as it would be under the prior grounding model. Under the default and challenge model, the reasons S may have for believing that P -- for the epistemic grounding of which S is aware -- may consist of reasons why a default belief is sufficient. Since a claim to knowledge, on this model of justification, is a claim that a default belief is not defeated, one need be aware only of the absence of defeaters. It is, then, not an open invitation to be asked for the grounding of one's belief, but merely an open promise that the grounding exists.
It is intuitively obvious that an accidentally true belief cannot be knowledge, but it is less than obvious what makes a belief not accidentally true. The study of the Gettier Problem is the study of how the various suggested definitions of knowledge fall short of our common use of the word.
In preparing a definition for a term like "knowledge", philosophers generally employ a process that combines a suggestion for the conditions that are individually necessary and jointly sufficient to warrant the claim to knowledge, with a search for counter-examples where the definition is at odds with the common use of the concept. It turns out that it is relatively easy to compile scenarios where the definition of knowledge as a "justified true belief" runs counter to our intuitive understanding of how we employ the concept of "knowledge".
Edmund L. Gettier, in a 1963 article(a), argued that the JTB definition of knowledge has a serious problem. If we do have knowledge in ordinary cases, then we must have appropriate justification in ordinary cases. But then there can be cases in which we appear to have suitable justification but the proposition we believe is only true "by coincidence" or "accidentally". These cases will be Justified-True-Beliefs that are not generally considered knowledge.
Regardless of the particular model of justification that is brought to the standard JTB theory of knowledge, there is always the possibility that the three conditions for a "justified true belief" are satisfied, but satisfied in a way that is not generally accepted as knowledge. It is possible that P could be true, that S believes that P, and that S is considered properly justified (however that is to be understood) in believing that P. Yet the satisfaction of these three conditions together might be totally accidental. All the evidence (or other beliefs) that are considered by S to support the truth of P might not be supportive of P at all.
Edmund Gettier, along with the many philosophers who have studied the problem since, has framed numerous ingenious examples. Here is one of them. Consider the case of Fred's Ford. John believes that Fred owns a Ford. In fact, Fred does indeed own a Ford (a Ford Explorer, actually). Yet all of the evidence that John has that suggests to him that Fred owns a Ford is derived from John's observations of Fred and a particular Ford Mustang. John concludes from the evidence that Fred owns a Ford. But actually the evidence equally supports the alternative that Fred has borrowed his sister's Ford Mustang. It is generally acknowledged that John does not actually know that Fred owns a Ford because John's belief that Fred owns a Ford is quite unjustified. It is just "epistemic luck" that Fred does in truth own a Ford. Without changing any of the justifying reasons that John has for his belief about Fred and this particular Ford Mustang, Fred could actually not own a car at all, or might own any other kind of car besides a Ford.
Gettier scenarios in general share a few key characteristics. In each case, a subject holds a belief that P that the proposed scenario stipulates is well justified, yet is supposedly generally acknowledged as not knowledge. The justification that is described in the example strongly suggests that P is true, but is not conclusive proof that it is, and is in fact defeated. Each example contains the key element of luck (called, in this context "epistemic luck"). Each case is constructed so that it is pure chance that P is true, because the evidence that justifies the belief that P is also consistent with not-P. The challenge that the Gettier problem presents to the justified-true-belief definitions of knowledge is to identify the additional (fourth) factor that must be added to a "justified true belief" to render the intuitively more correct judgement of whether the belief in question is knowledge.
There have been many different attempts to "de-Gettierize" the standard Justified-True-Belief theory of knowledge by adding what is termed a "fourth condition". Unfortunately, despite this, the Gettier problem remains an unresolved issue. As each new idea has been presented to the philosophical community, someone has come up with an alleged counter example. Here are some of the attempted solutions:
Notice that the Gettier Problem only arises because we were trying to say that John could know that Fred owns a Ford on the basis of evidence that falls short of certainty. If we demand that knowledge requires absolutely certain or infallible evidence, then it would be clear why John is not in a position to know that Fred owns a Ford. John doesn't have infallible evidence that Fred owns a Ford.
Fallibilism about the justification of knowledge acknowledges that there are ways in which belief-forming processes can go wrong, but it accepts that these sources of error are themselves part of the normal course of events. The Gettier problem presupposes fallibilism about justified belief, since otherwise a belief could not be both true and justified without being an instance of knowledge. A Gettier example is a belief which, even though it is justified, is lucky to be true, in the same way that an unjustified belief is lucky to be true. For example, it is not out of the ordinary to misremember that one left one's glasses on one's desk rather than on the sink. But it is rather extraordinary both to misremember this and to be right about their whereabouts anyway, because someone moved them from the sink to the desk.
For justified beliefs, error is normal as compared to accidental truth. Most of our beliefs are not knowledge. Short of being knowledge, justified beliefs are more likely to be false than to be accidentally true. An ordinary justified false belief is unlucky not to be true because it has what it normally takes to be true. One is steered wrong on grounds of the same sort on which one is normally steered right. In a Gettier situation the justification, though adequate, is defeated.
However, infallibilism, as a solution to the Gettier problem, limits the concept of "knowledge" to those beliefs for which we have sufficient evidence to rule out any logically possible alternative. Since neither induction (generalization from a series of examples) nor abduction (inference to the best explanation) can give rise to inferences with sufficient certainty for this approach, infallibilism limits valid claims to knowledge to beliefs arrived at by deductive inference.
However, most of our beliefs are formed on the basis of induction or abduction (or perhaps directly from our sensory experiences). Hence, most normal knowledge claims (assuming they are valid cases of knowledge) are only defeasibly justified, although their justifications are not in fact defeated. As a matter of contingent fact and normally acceptable common usage, our justifying reasons for claiming knowledge are usually far from sufficient to rule out any logically possible alternative. Most of the time our justifying reasons are just barely sufficient to rule out the more likely alternatives.
The problem with infallibilism is that it rules out most of what we normally refer to as knowledge. Infallibilism is implausibly restrictive, therefore, because it entails that beliefs which are less than maximally justified do not qualify as knowledge. While it should be noted in passing that one response to an offered Gettier example is to deny the premise that the belief in question is in fact properly to be called knowledge, infallibilism is so at odds with the generally accepted usage of the term "knowledge" that it is no longer a seriously proposed solution.
This approach demands that the evidence the subject draws upon to justify his belief that P not be opposed by any evidence to the contrary. There are two degrees of this approach, reflecting the first- and third-person senses of justification. The weaker condition is that the subject himself cannot be aware of any defeating evidence -- the first-person epistemic responsibility sense of justification. However, it is unclear that this approach will resolve many of the proposed Gettier examples. In the example cited above, John is not aware of any evidence that might defeat his belief that Fred owns a Ford. Yet it is generally accepted that in this case John does not know that Fred owns a Ford.
The stronger variant of this approach is that there must not exist any defeating evidence, whether or not the subject is aware of it -- the third-person epistemic grounding sense of justification. This would resolve the case of Fred's Ford. But there have been counter examples designed for this suggestion. Consider what has been called the "Assassination" scenario.
Jill reads in her favourite newspaper that the president of her country has been assassinated. In fact, this story is true. However, the president's associates have mounted a campaign to suppress the story, and they've been broadcasting false reports on all the television stations that the president is OK and that the assassin actually only killed a bodyguard. Jill is blissfully unaware of all this misleading evidence. The newspaper she read happens to be the only news source that's reporting the true events. All of Jill's peers, on the other hand, have heard the misleading TV reports and aren't sure whether or not the president was really killed. It is suggested that Jill has a justified true belief that the president was assassinated, but she doesn't have knowledge, because there is all this misleading evidence abroad in her community, which she has only managed to avoid by sheer luck.
There is no valid evidence that does in fact defeat Jill's belief, but there is a lot of false evidence that, if believed, would be considered defeating. So even though there is in fact no defeating evidence, the presence of the false evidence is supposedly sufficient to withhold the label of knowledge. Most philosophers would agree that in this scenario, Jill does not in fact know that the President has been assassinated. Such examples demonstrate that this approach, while promising, is not a complete solution.
This approach demands that S include no false inferences when justifying a belief that P. In the example of Fred's Ford, John employs the false inference that the Ford Mustang that all his evidence is about actually belongs to Fred. But like all other proposed Gettier solutions, counter-examples have been offered for this one as well. The Assassination scenario offered above is one of them: Jill does not employ any false inferences in supporting her belief, yet it is generally agreed that she does not know that the president has been assassinated. Consider also the "Barn County" scenario.(3)
Suppose there is a county in the Midwest with the following peculiar feature. The landscape next to the road leading through that county is peppered with barn-facades: structures that from the road look exactly like barns. Observation from any other viewpoint would immediately reveal these structures to be fakes: devices erected for the purpose of fooling unsuspecting motorists into believing in the presence of barns. Suppose Henry is driving along the road that leads through Barn County. Naturally, he will on numerous occasions form a false belief in the presence of a barn-facade. Since Henry has no reason to suspect that he is the victim of organized deception, his belief that these facades are barns is justified -- they do look just like barns. Now suppose further that, on one of those occasions when he believes he is looking at a barn, he happens to be looking at the one and only real barn in the county. This time, his belief is justified, and true, and not based on any false inference. But its truth is the result of epistemic luck, and thus his belief is not generally accepted as an instance of knowledge. So the No-False-Inference approach can be defeated.
There is, however, a variation of the No-False-Inference approach that I believe is proof against counter-examples. It focuses on context specific judgements about the knowledge claim. Examine S's belief that P and focus on the concepts that S employs rather than the words used to describe the scenario. If one then decomposes S's belief that P into S's "atomic beliefs" (which is easily doable from the context provided by each example), then it seems to me that either S's false inference becomes obvious, or it becomes obvious that we are mixing S's first-person judgement about the knowledge claim with our own more informationally rich third-person judgement about that claim.
In the case of Fred's Ford, John's belief is based on a lot of evidence having to do with a Ford Mustang. To decompose this belief into its conceptual context is to render it as "Fred owns that particular Ford Mustang for which I have all this evidence". John's belief that Fred owns a Ford is not the basic belief for which John has all his evidence. It is rather a derived belief -- derived from the belief about Fred and that Ford Mustang. (And in fact, it is derived on the basis of the Epistemic Closure Principle -- a principle that we will see later does not actually hold for knowledge.) It is clear from this contextual decomposition, therefore, that John is falsely inferring that Fred owns the Mustang.
In the case of Barn County, the knowledge claim that is offered in the scenario is Henry's "I see a barn". But conceptual decomposition renders this as either Henry's "What I see over there is a barn" or our own "What Henry sees over there is a barn". In the former case, I would actually disagree with Goldman. It is true that there is some potentially defeating information available "out there" that Henry is unaware of. But does this render unacceptable his judgement that his belief is sufficiently justified to qualify as knowledge? I think not. I think that Henry can validly claim to know in this case that what he is looking at is a barn. He has adequately fulfilled his epistemic responsibility, and there do exist adequate grounds to justify his belief. Of course, where he was looking at a barn-facade, he cannot validly claim to know it is a barn he is seeing, because it is not a barn.
On the other hand, if we treat the questionable knowledge claim as our own "What Henry sees over there is a barn", I think it is obvious that the answer comes down the other way. We cannot properly claim this as knowledge because we are in possession of the defeating additional information about the existence of barn-facades. We can further conclude that in our own judgement, on the basis of our additional knowledge, Henry's belief is not knowledge. Even though we can admit that from Henry's context, his knowledge claim is valid and proper.
The assassination scenario is handled the same way as the Barn County scenario. If we are discussing Jill's claim to knowledge, then I disagree with the general position that the false and misleading counter-evidence negates Jill's claim to knowledge. She is not aware of that false evidence, and thus does indeed have adequate justification for her claim to knowledge. But we, who are aware of that false and misleading evidence, cannot claim to know that the president has been assassinated. Contextually, there is a fundamental difference between first-person knowledge judgements and third-person knowledge judgements.
This distinction between perspectives highlights the fact that knowledge claims involve a judgement and a confidence scale for belief. Henry is sufficiently confident in his belief (because he believes he has sufficient justification and is not aware of the counter-evidence) that he claims to know it is a barn. Jill is sufficiently confident in her belief (because she believes she has sufficient justification, and is not aware of the misleading false counter-evidence) that she claims to know the president has been assassinated. And from their first-person contexts, their claims are right and proper. We, on the other hand, in possession of more information, see that their confidence is misplaced and properly conclude that they do not know what it is they claim to know. And from our third-person contexts, our claims are right and proper. If knowledge claims are seen as context-sensitive, rather than absolutely objective as most philosophers would insist, then we can conclude that Henry and Jill and we can all be correct in our respective judgements while contradicting each other.
(To see how Contextual Decomposition treats other Gettier and similar scenarios, click on Gettier Cases.)
The no-false-belief approach, if combined with an internalist conception of justification (without contextual decomposition), can easily suffer from "creeping scepticism". The scepticism arises because it is never clear from the inside when some particular supporting belief we are using in the justification of our belief that P is actually or even likely false. We are none of us infallible. At any given time, it is always possible that some of our beliefs are in fact false. If that is the case, then it is possible that we don't know what we think we do. Contextual decomposition prevents this creep to scepticism by making it clear, even to the subject, which belief is the unjustified one.
The no-false-belief approach combined with an externalist conception of justification (without contextual decomposition), quickly devolves into the next offered solution.
Like the strong defeasible evidence variant mentioned above in Section 4.2.2, this approach demands that there must not exist any defeating evidence, and considers it irrelevant whether or not the subject is aware of it. But it goes further than the defeasible evidence approach by examining all the potential evidence (practically available or not) and considers how that evidence impacts the third-person (God's Eye) process of justification. This approach nicely deals with all of the Gettier examples because it considers, from a global omniscience perspective, all of the evidence provided in those scenarios.
The problem that this approach faces is that it makes the concept of knowledge something that is determinable only externally by an omniscient intellect, or at least by an outside observer. It provides no guidance to us fallible and limited individual knowers on whether any claim to knowledge is properly justified. As with the strong version of defeasible evidence, global justification externalism is at odds with many philosophers' preference for the alternative of justification internalism.
This approach demands that the subject's belief that P be caused in "an appropriate way" by the fact that P. Furthermore, the subject's justification for the belief must include the logic by which the truth of P caused the belief that P. This nicely rules out all of those Gettier examples where the truth of P is not the cause of the subject's belief that P. In the example of Fred's Ford, John's belief that Fred owns a Ford is not caused by the fact that Fred owns a Ford Explorer. But it remains vulnerable to examples where the causal chain is intact, but there is actual or false defeater evidence available -- like the assassination or barn county examples. In such scenarios, the belief that P is suitably caused by the fact that P, but the real or false defeater evidence plays no role.
The additional challenge faced by this theory is that it is difficult to specify exactly how the truth of P can cause a belief that P without recourse to the very concept of knowledge that is being defined. So this approach, too, proves less than satisfying.
This is perhaps the most popular of the alternatives to the standard conception of knowledge as an intellectual exercise. Reliabilism is the thesis that propositional knowledge consists of a true belief that is arrived at through some reliable (albeit fallible) process. It is an externalist theory because the subject is not in a position to determine whether the processes involved are reliable, although the approach can be somewhat internalized by considering our beliefs about the reliability of our belief-forming processes.
The advantage of reliabilism as an approach to the Gettier problems is that a reliable process need be only more reliable than not. The reliabilist approach takes Fallibilism seriously, responding to the Gettier Problem by simply accepting a certain low rate of error. A reliabilist conception of knowledge permits subjects to err in their claims to knowledge. Jill and Henry may properly claim to know about assassinations and barns, and yet be mistaken in their claims. We, with more information, can reach a different reliable conclusion. Likewise, our common employment of the concept of "knowledge" need not be totally precise. We need only be more correct than not. So instances of an incorrect granting or withholding of the warrant of "knowledge" can be quite acceptable, as long as they do not occur too often. A definition consisting of necessary and sufficient conditions to properly cover all conceivable scenarios is not required. Some slack is tolerable, and unlikely Gettier counter-examples can be ignored.
In addition to the Gettier Problem described above, the traditional JTB definition of knowledge suffers from the problem of scepticism. The fundamental claim of Scepticism is that there is nothing contradictory in the suggestion that our belief that P may be well and properly justified, yet P might nonetheless be false. In the standard understanding of the JTB definition of knowledge, there is a gap between justified belief and truth (assuming a fallibilist understanding of justification). The sceptic claims that this gap cannot be bridged and that therefore none of our beliefs are sufficiently justified to qualify as knowledge.
Sceptical challenges to the JTB theory of knowledge come in various forms --
The universe of sceptical arguments can also be divided along more historical lines. The early sceptical arguments drew upon what is now generally called "Agrippan Scepticism". The more recent sceptical arguments draw upon what is now generally called "Cartesian Scepticism" or "Brain-in-a-Vat Scepticism".
This form of sceptical challenge is known as Agrippan (or Ancient Greek) Scepticism after the Five Modes of Agrippa identified by Sextus Empiricus.(4)
Most theories of knowledge, like the JTB family of theories, are classed as "internalist" because their concept of justification is based solely or primarily on things that are directly available to the knower. Internalist theories of knowledge and justification focus on the way we employ reasons to support conclusions from the evidence. When you list all the reasons you believe justify your belief that P, what you are listing is other beliefs, along with the associated second-order beliefs that the supporting evidence actually does imply P, or make P more probable.
However, if your belief that P is justified on the basis of other beliefs, and those are in turn justified by further beliefs, it is easy to see that an infinite regress quickly develops. A similar infinite regress occurs with the second-order beliefs that the evidence supports the likelihood of P. The sceptical challenge is that when asked to provide the reasons that justify any particular belief, you have only three options:
1) Keep providing some new supporting reason -- i.e. embark on an infinite regress of reasons; or
2) At some point, repeat yourself -- i.e. reason in a circle; or
3) At some point give up, and fall back on some basic dogmatic (and hence unjustified) assumption.
The Agrippan Sceptic argues that none of these three alternatives is acceptable as a proper justification for your beliefs. Therefore, none of your beliefs are justified. Hence knowledge (conceived as justified true belief) is impossible to attain.
The two best-known alternatives within the family of JTB theories of knowledge and justification are distinguished by how they attempt to resolve this challenge of Agrippan Scepticism. Foundationalism responds to the Agrippan sceptic by adopting the premise that there are some "foundational" beliefs that are non-inferentially justified -- intrinsically credible by virtue of the kind of belief they are. Foundationalism therefore argues that Agrippa missed one alternative on that list of three -- that there are beliefs that are not justified by other beliefs.
Coherentism, on the other hand, responds to the Agrippan sceptic by adopting the premise that beliefs can be mutually supporting. Coherentism argues that reasoning in a circle does not have to be circular reasoning. Coherentism is justification internalism without foundational beliefs. The hypothesis is that one's belief that P is justified if and only if one's belief that P coheres with the rest of what one believes. Unlike foundationalism, the coherence concept of justification is holistic rather than linear or hierarchical.
I will expand on both of these theories of knowledge once I have discussed Cartesian Scepticism.
This is a very famous radical sceptical challenge to knowledge. It is known by various names. Most famously it is called "Cartesian Scepticism", after Rene Descartes who initiated modern sceptical thought with his "method of doubt". He hypothesized that for all he knew, all of his experiences might be deceptions provided by an evil demon. More recently, drawing upon the popularity of science fiction and technology, Descartes' evil demon has been replaced by evil scientists or ingenious aliens who steal your brain and place it in a vat of nutrients. In this scenario all of your experiences are provided by a super computer wired to the sensory nerves of your brain. By hypothesis, no experience that we might possibly have can discern whether or not we are a brain in a vat. (Star Trek episodes and the Matrix trilogy of movies have popularized this scenario.) Hence the popular name for this kind of scepticism is "The Brain-in-a-Vat Argument". It is also known as "The Problem of Under-Determination" or "The Problem of the External World".
The sceptic argues that all of our beliefs about the existence of an external world are unjustified because all of the evidence (perceptual experience) we have in support of those beliefs cannot rule out the sceptical alternative that we are a brain in a vat or being deceived by Descartes' demon. All of our experiences are fully compatible with both the hypothesis of an external world and the sceptical alternative that we are but a brain in a vat. Hence our beliefs about the external world are unjustified. And since all our beliefs about the external world are unjustified, none can qualify for the honour of "knowledge".
In detail, these arguments can be grouped into three general approaches:
Note that these challenges are not directed against justified beliefs. The claim is, rather, that our beliefs can be justified and yet still not qualify as knowledge. The Cartesian sceptical challenge is based on the premise of metaphysical realism: how things are (truth) may be different from how they seem (perception). The sceptic concludes that no one is ever sufficiently justified in their beliefs to warrant the honorific of knowledge, so knowledge is not possible. This conclusion applies to personal justification (first-person justification, epistemic responsibility) as well as general justification (third-person, adequate grounding).
However, it needs to be emphasized that the sceptic's arguments only show that there are limits to our abilities to give reasons or cite evidence. Cartesian Scepticism is an argument about adequate grounding. To get from what he argues to what he concludes, the sceptic needs to assume the prior grounding model of justification -- that no belief is responsibly held unless it is based on citable evidence. More precisely, he needs the dependence principle of the prior grounding model of justification to link epistemic responsibility with adequate grounding. And he needs strong justification internalism to identify grounding with the ability to cite the evidence. It is the Cartesian Sceptic's claim that no amount of epistemological responsibility can guarantee adequate epistemological grounding.
Be that as it may, the various externalist, non-JTB theories of knowledge and justification owe their genesis to different notions of how to counter the kinds of sceptical arguments presented by Cartesian Scepticism. These theories, like reliabilism and the causal theory, posit direct links between knowledge and epistemic grounding, bypassing the first-person sense of justification that is epistemic responsibility.
But before exploring some of the alternatives to the traditional JTB theory of knowledge, I need to explore the "Brain in a Vat" argument in greater detail. The form of the argument, and the premises upon which it is based, will provide important input into the following discussions of the various alternative theories of knowledge.
(i) If I am a Brain in a Vat, then I do not have two hands.
(ii) I cannot tell whether or not I am a Brain in a Vat.
(iii) Therefore, I cannot know that I am not a Brain in a Vat.
(iv) Therefore, I cannot know that I have two hands.
This sceptical argument depends for its impact on its apparent paradox -- the argument seems intuitively reasonable, yet at the same time intuitively false. It seems quite obvious that we do in fact have a lot of knowledge about the external world. It seems quite unproblematic for me to claim to know that I have two hands. Yet it also seems obvious that the sceptical argument is at least comprehensible. Once the possibility is pointed out, it seems quite reasonable and unproblematic to suppose that I can't in fact know that I am not a brain in a vat. And it also seems reasonable to conclude from this that I therefore don't know what I think I know -- that I have two hands. To see where the problem lies, we need to examine the argument in greater depth.
In order to be comprehensible, the short-form argument provided above requires the addition of a number of premises that are normally hidden. Some of these are rather obvious additions required to turn the argument into a valid deductive format. But some of these necessary additional premises are well buried. Both the obvious and the hidden missing premises open up opportunities for responding to the sceptic, either by negating his argument or by demonstrating that his argument is incoherent. Making these unmentioned premises explicit shows that the sceptical argument is not as intuitively reasonable as first supposed, thus, at the very least, dissolving the apparent paradox.
To begin with, a more formally framed version of the Brain-in-a-Vat argument proceeds as follows:
(BIV) (a) If (I know things about the external world), and (I know that knowing things about the external world implies that I am not a BIV), then (I know that I am not a BIV). [Closure Principle]
(b) Things may not really be as they appear.
(c) Despite appearances, I may really be a BIV. [From (b)]
(d) I cannot prove, and hence I do not know, that I am not a BIV. [From (c)]
(e) I do not know things about the external world. [From (a) and (d)]
Now let's explore some of the implications of the necessary missing premises -
(1) Things may not be as they appear. This is the basic Cartesian sceptical premise of metaphysical realism. This premise is necessary in order to separate the way that things appear from the way that things are in fact. If there is no such separation (metaphysical idealism), then I could prove that I am not a BIV by simply observing that it does not appear to me that I am a BIV. In order for the sceptical argument to have any force, therefore, it must assume that it is possible that I might in truth be a BIV despite the fact that it does not appear to me that I am. "Folk" philosophy comes freighted with the baggage of Cartesian mind/body dualism, so it is natural for us to expect that there is a gap between how our minds "see" things, and how they really are. Hence, from the perspective of folk philosophy, it is quite intuitively acceptable that things may not be as they appear.
One way to challenge the BIV argument, therefore, is the adoption of either metaphysical idealism or a direct realist theory of perception. Both alternatives would deny the Cartesian premise that things might not be as they appear. Another way to mount the same sort of challenge is the alternative of an (extreme) anti-realist conception of truth. Truth-anti-realism (when taken to the extreme) would also deny the premise that things might not be as they appear.
(2) An infallibilist model of knowledge. In order to move from the mere suggestion of the BIV alternative to the conclusion that I therefore do not know (because I cannot prove) that I am not a BIV, the sceptical reasoning must demand that my justification for my beliefs about whether or not I am a BIV rule out all potential defeaters, including the BIV hypothesis. In other words, the BIV argument is based on the assumption that the justification required to make my belief (that I am not a BIV) into knowledge must be much closer to being infallible than usually expected. This infallibilist premise is reinforced by the particular form in which I expanded the BIV argument above. The expanded version asserts the premise that "I cannot prove that I am not a brain in a vat". Employing the word "prove" reinforces the presumption of infallibilism, since "prove" in contexts such as this is usually understood to imply something like "logical proof", or "deductive proof" -- a sense of guaranteeing the truth of, rather than simply providing evidence in support of. This is nicely consistent with the common "folk philosophy" understanding of knowledge as a belief about which we are "certain". Hence the infallibilist assumption appears quite reasonable and is usually not recognized for what it is.
The way to challenge the BIV argument here is the alternative of a fallibilist conception of knowledge. A fallibilist conception of knowledge would allow that I could know a lot of things about the external world because those beliefs are adequately justified, even though that justification might nevertheless be mistaken about the truth of those beliefs. The justification required for my beliefs to qualify for the honorific "knowledge" would not have to rule out all logically possible alternatives. The process of justification could permit some errors. It is possible (if I am indeed a BIV) that I do not in fact know anything about the external world. But the fact that I cannot prove that I am not a BIV is not a defeater for any of my claims to have such knowledge. Fallibilism draws upon the distinction between a first-person (internal, epistemic responsibility) view of justification, and a third-person (external or "God's Eye", adequate grounding) view of justification.
(3) A foundationalist model of knowledge. The BIV argument is based on the premise that knowledge is to be understood from a foundationalist perspective. The reasoning is based on the premise that experiential awareness of perceptual evidence has some form of intrinsic epistemic priority. The justification necessary to elevate beliefs into knowledge has to be fundamentally based on perceptual knowledge, not on other beliefs. Otherwise, the required possibility that things might not be as they appear would have no epistemic weight. On some alternative theories of knowledge, my belief that I am not a BIV (and my beliefs about the external world) would be sufficiently justified (proven) by the coherence of a set of beliefs. Similarly for the various externalist models of knowledge, my beliefs about the external world could qualify as knowledge without my even considering the BIV alternative -- let alone proving that I am not a BIV. But we are all intuitive empiricists, so we naturally feel quite comfortable with the assumption that perceptual/evidentiary propositions have some special epistemic status -- the basis of foundationalism.
As a result, any of the non-foundationalist models of knowledge would provide a way to challenge the BIV argument. As a matter of historical fact, they have been specifically constructed to do so. The two most well-known are also members of the JTB family of theories -- Coherentism and Contextualism.
(4) A prior grounding model of justification. As a consequence of the premises that assume that knowledge demands an infallible foundationalist justification of one's beliefs, the BIV argument can also be seen to require the "prior grounding" model of justification. The BIV argument demands that the subject be aware of the "proof" that infallibly justifies the belief that one is not a BIV before allowing that one knows that one is not a BIV. The sceptic cannot admit either (a) the possibility that I might in fact be appropriately justified and yet not be aware of that fact; or (b) the possibility that some of my beliefs about the external world might be default justified. Otherwise, I might properly be said to know that I am not a BIV without being able to prove it. When folk philosophy thinks about "knowledge", it normally thinks in terms of asking and giving reasons for one's beliefs. So it is natural to feel comfortable with a "prior grounding" model of justification. Knowing that one is not a BIV is usually understood to mean having good reasons for believing that one is not a BIV. Which, of course, the BIV argument denies is possible ex hypothesi.
There are two alternative ways of challenging the BIV argument here, or rather two ways in which one can adopt a non-prior-grounding model of justification. One is to adopt an externalist theory of knowledge that denies that knowledge involves justification at all (such as pure reliabilism, a causal theory, or Nozick's truth-tracking theory). The other option is to adopt the "default and challenge" model of justification (as is incorporated in the contextualist theory of knowledge, for example). On this latter alternative, I am prima facie justified in believing all sorts of things about the external world until and unless the sceptic can provide a context that challenges that default justification. Since the mere mention of the BIV alternative does not provide the necessary context, the burden is shifted to the sceptic to provide justification for believing the BIV alternative. With a default prima facie justification in play, it is irrelevant that I cannot prove that I am not a BIV.
(5) Knowledge is closed under known entailment. This is the "epistemic closure principle" employed in step (a). This is a complex topic in itself, and I will explore this premise in much further detail in the next section of this essay.
When the hidden premises behind the apparent intuitive reasonableness of the sceptic's argument are made clear, it becomes obvious that the BIV argument relies for its comprehensibility on a concatenation of highly questionable premises about the nature of knowledge. Each of those premises has been challenged by criticisms presented by many philosophers. There are numerous theories of knowledge available in the literature that provide alternatives for each premise. While the image of "knowledge" that underlies the seeming reasonableness of the BIV sceptical argument might be recognized as the "traditional" or "paradigmatic" JTB theory, for many reasons, some of which we have seen already, it is no longer considered the "best" model of knowledge available.
The "closure principle" or the "principle of known entailment" (or known implication) when applied to knowledge posits that -
If (1) S knows that P
and (2) S knows that P entails (implies) Q
then (3) S knows that Q.
It is easy to see the intuitive appeal of this principle. If I know that this tomato is red, and I know that a red tomato implies that it is ripe, then I know that this tomato is ripe. If John knows that Fred owns this particular Ford Mustang, and John knows that this implies that Fred owns a Ford, then John knows that Fred owns a Ford. Logical implication is a form of deductive argument. And as such, given true premises it guarantees that the conclusion is also true. So -
If (1) it is true that P
and (2) it is true that P entails (implies) Q
then (3) it is guaranteed to be true that Q.
This seems to be a very good justification for believing that Q. We use reasoning of this sort all the time. Thus it is reasonable to assume that for a "Justified True Belief" model of knowledge, the principle of known implication simply documents the fact that logical entailment, and deductive argument generally, is a sufficient reason for justifying the conclusion. This is perhaps easier to see if we focus on the justification aspect of knowing. A more interesting variation of the closure principle (for reasons you will see in a moment) is whether justified belief is closed under known implication. In this form, the "closure principle" posits that -
If (1) S has a justified belief that P
and (2) S has a justified belief that P entails (implies) Q
then (3) S is justified in believing that Q.
Or alternatively, drawing upon the JTB definition of knowledge one can, without losing the sense of the above, mix the two forms as -
If (1) S knows that P
and (2) S knows that P entails (implies) Q
then (3) S is justified in believing that Q.
And it really seems quite counter-intuitive not to believe something that one is fully justified in believing. When phrased in terms of justified beliefs, it would initially seem as if all of the JTB theories of knowledge would have to agree that justified belief is indeed closed under known entailment.
For the externalist alternatives to the JTB family of theories, however, the situation is a little more complex. For each particular theory it would depend on whether or not logical entailment is encompassed within the details of whatever reliable, causative, or law-like connection might be posited. Nozick's "truth tracking" or "subjunctive conditional" theory of knowledge is, however, an example of an externalist theory of knowledge that would specifically deny that justified belief is closed under known implication -- mostly because Nozick's theory does not treat knowledge as a question of a belief justified by other beliefs.
The closure principle has gained its significance because of its employment in the Brain-In-a-Vat arguments of Cartesian Scepticism as described above in the previous section. In order to reason from the very reasonable sounding premise (BIV-d) that "I cannot prove that I am not a BIV" to the problematic conclusion (BIV-e) that "I do not know that I have two hands", the sceptic is drawing upon premise (BIV-a) that knowledge is closed under known implication. The sceptic depends upon a modus tollens deductive argument that proceeds from the premise "I know that P" (I know that I have two hands) through the closure principle (if I know that P, and I know that P implies Q, then I know that Q), to the interim conclusion "I know that Q" (I know that I am not a BIV), and then denying this interim conclusion (I do not know that I am not a BIV) to reach the ultimate objective of denying P (I do not know that I have two hands). We are all familiar with deductive reasoning, and the closure principle for knowledge seems to be a simple matter of deductive logic and therefore seems obviously true.
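The sceptic's modus tollens described in the previous paragraph can be compressed into a schematic form (my own notation, not the essay's: "K phi" abbreviates "I know that phi", H is "I have two hands", and V is "I am a BIV"):

```latex
\[
\begin{array}{lll}
1. & \bigl(KH \land K(H \rightarrow \lnot V)\bigr) \rightarrow K\lnot V
   & \text{[closure principle, premise (BIV-a)]}\\
2. & K(H \rightarrow \lnot V)
   & \text{[I know that having hands entails not being a BIV]}\\
3. & \lnot K\lnot V
   & \text{[premise (BIV-d): I do not know that I am not a BIV]}\\
4. & \therefore\ \lnot KH
   & \text{[modus tollens from 1--3: I do not know I have two hands]}
\end{array}
\]
```

Laid out this way, it is plain that the whole weight of the argument rests on line 1, the closure principle.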
Unfortunately for the Cartesian sceptic, and fortunately for those of us who know that we have two hands, the closure principle does not in fact hold as a matter of necessity for most theories of knowledge. So the Brain-in-a-Vat argument fails at step (BIV-a). And John's reasoning from his basic belief about Fred and a certain Ford Mustang, to a less specific belief about Fred and Fords generally, also fails. Even if the closure principle is framed instead in terms of justified belief, then it is step (BIV-d) that trips up the sceptic. In terms of justified belief, step (BIV-d) would have to be phrased as "I do not have a justified belief that I am not a BIV". But of course, it is now obvious that I do indeed have a very well justified belief that I am not a BIV. My belief might in fact be false (if the sceptic's hypothesis is true and I am indeed a BIV). But I nonetheless have a justified belief that I am not a BIV. It is not an infallibly justified belief, but it is well justified nonetheless.
There are two reasons why the closure principle fails for knowledge. The first is the question of belief, and the second is the question of justification. Given the traditional JTB definition of knowledge, it is obvious that with the closure principle, the truth condition is satisfied. It is stipulated that P is true. And the deductive reasoning of logical implication appears to ensure that Q is true. But it is the belief condition in the JTB concept of knowledge that trips matters up. Knowledge as a justified true belief is not closed under known implication because belief is not necessarily closed under known implication. Assuredly it often is closed as a matter of contingent fact. We do often believe what we have adequate justification to believe. We even often believe what we do not have adequate justification to believe. But it is also entirely possible that I might know that P, and know that P implies Q and yet not believe (for whatever reason) that Q. Perhaps I also believe (falsely) that Q is inconsistent with some other (also possibly false) beliefs that I hold. Or perhaps I have just been cognitively lazy, and while knowing when I think about it that P entails Q, have not taken the trouble to acknowledge, and hence believe, that Q. Advising me that I have very good justification for believing that Q does not necessitate that I will believe that Q. Belief as a mental state does not follow logical rules.
Here are two examples of situations where belief in Q does not follow:
Lottery Paradox -- I believe that individually each ticket of a lottery is a losing ticket. Believing that individually each lottery ticket is a losing ticket (P) entails that collectively all lottery tickets are losing tickets (Q). Yet I also believe that one ticket will be a winning ticket (not-Q). This is a rationally justified belief based on the rules of lotteries.
Preface Paradox -- I believe that individually each proposition in this essay is true. Believing that individually each proposition is true (P) entails that all the propositions together are true (Q). Yet I also believe that at least one proposition in the essay is false (not-Q). This is a rationally justified belief based on the observation that I am not infallible, and do make mistakes. Hence it is highly likely, to the point of certainty, that I have made an error in at least one of the many propositions that I believe individually are true.
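The lottery case can be made numerically vivid. The following sketch (my own illustration; the ticket count is an arbitrary assumption, not from the essay) shows how belief in each conjunct can be near-certain while belief in the conjunction is untenable:

```python
# Lottery Paradox, numerically. Assume a fair lottery with exactly one
# guaranteed winner among N tickets (N is an illustrative assumption).
N = 1_000_000

# For any single ticket, the probability that it loses is overwhelming,
# so believing "this ticket loses" of each ticket seems rationally justified.
p_ticket_loses = (N - 1) / N   # 0.999999

# Yet the probability that *all* tickets lose is zero, since the rules
# guarantee a winner. So believing "every ticket loses" is not justified,
# even though it follows from the conjunction of the individual beliefs.
p_all_lose = 0.0

print(f"P(a given ticket loses) = {p_ticket_loses}")
print(f"P(all tickets lose)     = {p_all_lose}")
```

The point is not probabilistic sophistication, but that rational belief does not aggregate across entailment: each conjunct clears any reasonable threshold for justified belief while their conjunction clears none.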
Most theories of knowledge, because they rely on the condition that S believe that Q before S can know that Q, would therefore fall into that group of theories for which knowledge is not closed under known entailment. The group would include foundationalism, coherentism, contextualism, reliabilism, the causal and law-like connection theories, and Nozick's "truth-tracking" or subjunctive-conditional theory, to mention just the more readily recognizable.
But there are also a few theories that do not incorporate this second condition on knowledge that S believe that Q. The performative theory, as one example, would nevertheless also deny that knowledge is closed under known implication, because it maintains that "to know" is a performative verb that has nothing to do with beliefs or justification. And while most deontological theories fall into the JTB family of theories (and hence deny the closure principle for knowledge), it is possible to frame a deontological theory in terms of S having a right or duty to believe that Q (when properly justified), rather than in terms of S actually believing that Q. And for this variation, knowledge would indeed be closed under known implication. By dropping the requirement that S believes that Q, similar variations can be created out of some of the other theories as well.
This brings us to the reason discussed by Fred Dretske for denying that knowledge is closed under known implication. Dretske's argument is that accepting Q could nullify all the justification one has for knowing that P. And hence one could not accept the extension of knowledge from P to Q, even if one accepted that P implied Q.
The example that Dretske provides is that of the zebras at the city zoo.(6) If one visits the city zoo and sees what looks to all appearances like zebras in a pen, along with a posted sign explaining that these are zebras, and one knows of no reasons for doubting this information, then one is justified in believing that there are zebras in that pen. But there being zebras in the pen logically implies that what you are seeing are not mules cleverly disguised as zebras. Dretske argues that you are not justified in believing that what you are seeing is not mules cleverly disguised as zebras, since nothing in your perceptual evidence discriminates zebras from cleverly disguised mules.
The problem with Dretske's reasoning as a basis for denying the closure principle is that it simply doesn't work at the level he intends. Dretske's reasoning is based on the traditional JTB model of knowledge. But the logical extension through the implication to Q only works if P is true. If S knows that P, then by definition, P is true. In other words, if I know that I am seeing zebras, then it is true that I am seeing zebras. (If it is not true that I am seeing zebras, I cannot by definition know that I am seeing zebras.) Then if I know that P (I am seeing zebras) implies Q (they are not cleverly disguised mules), I am completely justified in believing that they are not cleverly disguised mules. Dretske's suggestion that they might be cleverly disguised mules is certainly a sceptical challenge to my knowing that P. But it fails as a demonstration that knowledge is not closed under known implication.
If looked at in terms of justified belief, on the other hand, Dretske's reasoning makes more sense. Having a justified belief that I am seeing zebras does not entail that it is true I am seeing zebras. Therefore, it is now readily apparent that my consideration of the alternative that they are disguised mules might nullify all the justification I have for my belief that they are zebras. In other words, Dretske's reasoning, while not demonstrating that knowledge is not closed under known implication, does demonstrate that justified belief is not necessarily closed under known implication.
The conclusion is that under most theories of knowledge, neither knowledge nor justified belief is closed under known implication. The only exceptions to this conclusion would be some of the more obscure versions of reliabilist, causal, and law-like theories, and the non-JTB variation of a deontological theory I outlined above (because they do not incorporate the condition that S believe that Q).
Interestingly, we lose nothing by denying the closure principle. We are, after all, not maintaining that logical inference is never an adequate justification for believing that Q. We are only denying that logical inference is always a completely sufficient reason for believing that Q. The principle of known implication is a good prima facie reason for believing that Q -- ceteris paribus. But there are necessarily other considerations that might come into play to govern whether S actually does believe that Q. So we can suggest that knowledge is usually closed under most practical circumstances. It is a useful rule of thumb. But we cannot say that knowledge is closed -- period. It is not a necessary consequence of what knowledge is. Which means that the BIV sceptical argument fails at step (BIV-a).
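For reference, the closure principle at issue can be stated compactly in the notation of epistemic logic. This is a standard textbook formulation, not a quotation from Dretske:

```latex
% Closure of knowledge under known implication:
% if S knows that P, and S knows that P implies Q, then S knows that Q.
\bigl( K_S P \land K_S (P \rightarrow Q) \bigr) \rightarrow K_S Q
```

The position defended above amounts to treating this conditional as holding only ceteris paribus, not as a necessary truth about knowledge.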
Hilary Putnam provided a refutation of a version of the brain-in-a-vat argument, based upon what he referred to as "semantic externalism"(7). However, I do not believe that it is effective.
Even if one accepts the principles of semantic externalism as defined by Putnam, one cannot accept his argument that the Brain in a Vat sceptical hypothesis is self-refuting. And since his entire argument is based on semantic externalism, if one does not accept that theory of meaning, the argument crumbles entirely.
Putnam's theory of meaning demands that for a symbol (word or image) to refer to some object, two elements must be present: (a) the issuer of the symbol must have the mental intent that the symbol refer to the object; and (b) there must be some suitable causal link from the object through the subject to the symbol. It is the necessity of this causal link that renders Mr. Putnam's theory of semantics an externalist one.
Putnam's first example is the ant tracing a line in the sand. He argues that the line cannot refer to Winston Churchill because the ant (being non-sentient) can have no intent that the line refer to Mr. Churchill; lacking the requisite intentionality, the ant's line cannot refer. Mr. Putnam does not, however, explain how, if the line in the sand has no meaning and does not refer, we come to recognize what the ant has traced as either a caricature of the man or a cursive writing of his name.
Putnam's second example is a piece of paper bearing a randomly generated collection of splotches of paint, falling on a planet that has never known trees. He argues that not only does the collection of paint on the paper not refer to trees, even though we would recognize it as a good painting of a tree, but any concept that the aliens form based on what is depicted on that paper could also not refer to trees. According to Putnam's scenario, there is neither any intentionality involved in the formation of the paint pattern on the paper, nor any causal link between trees and what is on the paper. So, according to his thesis, the pattern of paint on that paper cannot refer to trees. Nor could any image or concept formed by aliens who might view that paper. Although, again, if the paint-splotched paper does not refer to trees, Mr. Putnam does not explain how it is that we come to recognize it as a good picture of a tree.
Putnam's third example is his Brain in a Vat scenario. Putnam's BIV scenario stipulates that all sentient brains are envatted, and that whatever experiences these brains have are generated by a computer that has never been programmed. Putnam is careful to avoid any hint of a causal link by stipulating that the vats and the computer come into existence "randomly" -- specifically eliminating the possibility that the computer has been programmed, or that the envatted brains have any prior experience to draw upon.
Putnam claims that when an envatted brain uses the words "tree", "brain" or "vat" the only thing that these words can refer to are internal features of the brain-in-a-vat scenario. They cannot refer to the same sorts of things that we refer to -- namely real trees, real brains, and real vats. According to Putnam's scenario, there is neither any intentionality involved in the computer's generation of nerve pulse strings that the envatted brains interpret as images of trees, brains or vats, nor any causal link between real trees, brains and vats and the nerve impulse strings generated by that computer. Hence when an envatted brain contemplates whether or not it is a "brain in a vat", it cannot possibly mean what we mean by a brain or a vat. For the envatted brain, all causal linkages from the words "brain" and "vat" trace back through the nerve impulses, to computer generated impulses, to computer program features that generate the "perceived" images of trees, brains and vats. And by stipulation, stop there.
Tony Brueckner, in his "Brains in a Vat" entry for The Stanford Encyclopedia of Philosophy (8), has summarized Putnam's argument in an easily understandable form. I reproduce it here, although I have added some additional comments of my own that I feel make it more readily comprehensible in the current context.
Since Putnam claims that a brain in a vat cannot refer to anything that is beyond the inputs that the computer provides to the envatted brain, whenever the envatted brain "speaks", the envatted brain is not speaking English. It is speaking (or rather seeming to speak) "vat-ish" -- a language whose words refer only to those elements of the sensory experiences that the envatted brain has. Specifically for this argument, the words "brain" and "vat" in English refer to the real world of brains and vats. In vat-ish, the words "brain" and "vat" refer to the computer generated images of brains and vats. So in the following argument, vat-ish referents are denoted by an asterisk ("*").
(a) Either I am a BIV (and speak vat-ish) or I am a non-BIV (and speak English).
(b) If I am a BIV (and am speaking vat-ish), then my utterances of "I am a BIV" are true iff I am a brain* in a vat*.
(c) If I am a BIV (and speaking vat-ish), then I am not a brain* in a vat*. [This is the key step. Putnam argues that to be a "brain* in a vat*" one would have to be a computer generated image of a brain in a computer generated image of a vat. And obviously, even if I am a BIV, I am not a computer generated image.]
(d) If I am a BIV (speaking vat-ish), then my utterances of "I am a BIV" are false. [from (b) and (c)]
(e) If I am a non-BIV (and thus speaking English), then my utterances of "I am a BIV" are true iff I am a brain in a vat.
(f) If I am a non-BIV (speaking English), then my utterances of "I am a BIV" are false. [from (e)]
(g) My utterances of "I am a BIV" are false. [from (a),(d),(f)]
(h) My utterances of "I am not a BIV" are true. [standard logical negation of (g)]
(i) If I am a non-BIV (and am speaking English), then my utterances of "I am a non-BIV" are true iff I am not a brain in a vat.
(j) I am not a BIV. [from (h) and (i)]
I see three problems with this argument. The first has to do with Brueckner's interpretation of Putnam's prose. I have read the relevant passage of Putnam's work where he lays out his argument, and I cannot see where Brueckner goes wrong with his interpretation. But neither is it obvious that Brueckner's interpretation is correct. So it is unclear to me whether this first problem is Brueckner's or Putnam's.
The problem is that premise (a) above is not properly phrased. If I am a brain in a vat, then -- by Putnam's semantic externalism -- I cannot form the thought "I am a brain in a vat" in English. At best, I can only form the thought "I am a brain* in a vat*" (in vat-ish). (The same applies to the other steps in the argument as well, of course.) So premise (a) should not lay out the two options in English. As it is above, it appears to cleanly divide the universe of possibilities into two mutually exclusive alternatives. But this is in fact an error. The first step in the argument should be rephrased as:
(a) Either I am a brain* in a vat* (and speak vat-ish) or I am not a brain in a vat (and speak English).
Of course, it is now clear that the premise is not cleanly dividing the universe of possibilities. One would have to present arguments, which Putnam does not, that this bifurcation is in fact mutually exclusive and jointly exhaustive. And hence, on this reading, the argument does not show that I am not a brain in a vat. It collapses because it remains possible that I am a brain in a vat, even though, according to Putnam, I cannot form that thought. The last step in the argument is true only if I am not a BIV (and speak English). It is not true if I am a brain in a vat and cannot conceive it. Putnam's argument is based on two contradictory premises: (a) the premise that one can frame and understand an argument in English; and (b) the premise that if one is really a brain in a vat, one cannot understand the English argument. So phrased in this way, the argument is clearly circular (it assumes its own conclusion).
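The circularity can be made vivid with a small brute-force check of the argument's propositional skeleton. This is my own formalization, offered only as an illustration: the variables B, S and V, and the encoding of steps (b), (c), (e), (h) and (i) as material conditionals and biconditionals, are assumptions about how the prose translates, not anything Putnam or Brueckner wrote.

```python
# Brute-force truth-table check of a propositional sketch of the argument.
# B: I am a BIV;  S: I am a brain*-in-a-vat*;  V: my utterance "I am a BIV" is true.
from itertools import product

def implies(p, q):
    return (not p) or q

counterexamples = []
for B, S, V in product([True, False], repeat=3):
    premises = (
        implies(B, V == S)                      # (b): if BIV, "I am a BIV" true iff brain*-in-vat*
        and implies(B, not S)                   # (c): if BIV, I am not a brain*-in-vat*
        and implies(not B, V == B)              # (e): if non-BIV, "I am a BIV" true iff BIV
        and implies(not B, (not V) == (not B))  # (i): if non-BIV, "I am a non-BIV" true iff non-BIV
        and (not V)                             # (h): "I am not a BIV" comes out true
    )
    if premises and B:
        # All premises hold, yet the conclusion (j) "I am not a BIV" fails.
        counterexamples.append((B, S, V))

print(counterexamples)  # [(True, False, False)]
```

The single counterexample (B true, S false, V false) is exactly the situation described above: if I really am a BIV, premise (i) never applies, so "I am not a BIV" cannot be derived without already assuming it.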
The second problem with the argument arises from the manner in which Putnam applies his semantic externalism. By stipulation, the brains in Putnam's vats function in all respects as brains do in the real world. And more importantly, the experiences of Putnam's envatted brains are supposedly qualitatively indistinguishable from those of a non-BIV. That is a very important constraint. But Putnam appears to gloss over a necessary consequence. If the experiences are indeed qualitatively indistinguishable, it would be logically possible to swap a vat-ish speaking envatted brain for an English speaking non-envatted brain and have no one be able to notice the swap. What now happens to Putnam's sense of reference? To what do the words "tree", "brain" and "vat" refer, once we have swapped the brains?
When I ponder whether I might be a brain in a vat, it seems that at one instant I can do so (when I am not in fact a brain in a vat) and at the next instant I cannot (because I am now a brain in a vat). But oops! It seems that now, all of a sudden, for reasons quite external to my experiences, I can no longer even entertain the thought that my brain might have been swapped with a previously envatted one. Putnam tries to get around the threat of this possibility by initially framing his BIV scenario to stipulate that all brains are envatted. But if you accept that initial stipulation, then you can't even entertain Putnam's argument, since you no longer speak English. And more importantly, it voids his further stipulation that the experiences of a BIV would be qualitatively identical with those of a non-BIV. If, by stipulation, there are no such things as non-BIVs, then there is nothing with which the experiences of his envatted brains could be qualitatively identical.
The third problem arises from the fact that Putnam stipulates that the computer feeding experiences to the envatted brains was never programmed, so that he can ensure that there is no suitable causal chain between trees, brains, and vats and the images that the envatted brains experience. In contradiction with that, he also stipulates that "Their images, words, etc., are qualitatively identical with images, words, etc., which do represent trees in our world". If that is the case, then what is it an image of when the envatted brain perceives a tree, brain or vat? If it is not an image of a tree, then when the envatted brain is swapped for a non-envatted brain, it will be unable to identify trees in its environment. And this is contrary to Putnam's stipulation that our experiences would be qualitatively identical. If it is an image of a tree, then the swapped brains would not be able to tell the difference. But the word "tree" would refer to trees regardless of whether the brain is envatted or not. And this is contrary to Putnam's semantic externalism.
What seems to drive Putnam's semantic externalism is the recognition that, for reasons that Putnam discusses, there is more to the meaning of a word such as "trees" than what the speaker of the word has in mind, and more to the meaning of an image on paper than what the painter has in mind. The first condition that Putnam mentions (that the issuer of the symbol must have the mental intent that the symbol refer to the object) captures the intuition that a word or image doesn't mean anything to the sender if the sender has no intention invested in the symbol. The causal condition that Putnam mentions seems aimed at ensuring that the sender of the symbol has a proper grasp of the concept that is being sent.
As Putnam argues, the meaning of a symbol is not intrinsic to the symbol. Words and images and other symbols themselves do not refer. And more or less in keeping with Putnam's externalist condition, they do not refer for other people just because the sender intends that they should. However, what semantic externalism seems to deny is that the meaning and reference of a symbol to the receiver is entirely within the head of the receiver -- two people can view a given symbol and extract quite different references and meanings depending on their respective conceptual context. Contra Putnam, the reference and meaning of any symbol is entirely in the eye of the beholder, not something external to the beholder.
When the ant draws the line in the sand, the line certainly has no meaning to the ant because the ant (the sender) has no intentionality invested in the symbol. When "beholding" the line it has drawn, the ant has no conceptual context within which to assign it any meaning. But to us who behold the line, the line does have meaning because (assuming we are familiar with Winston Churchill) we do have the necessary conceptual context. Contra Putnam, the line does refer to Winston Churchill (by being either a caricature or the cursive writing of his name) because we, the viewers, make the reference. To us who view that painted paper from afar, the paper does contain an image of a tree, and we draw the reference to trees. Those aliens who see just some paint splotches on paper and have no knowledge of trees cannot draw the reference because they lack the necessary conceptual context. So to them, the image does not refer to trees -- our respective conceptual contexts are different.
But more importantly, to an envatted brain the words "brain" and "vat" refer to things that it recognizes within its experiences. And because those experiences are, by stipulation, qualitatively indistinguishable from ours, they refer to the same sorts of things that we refer to. The way that an envatted brain becomes aware of brains may be through a computer generated image of a brain. But it is nonetheless an image of a brain. And the envatted brain's use of the word "brain" refers to just that kind of brain. Not to Putnam's "brain*".
Of course, this alternative theory of concepts, meaning, and reference does nothing to resolve the sceptical problem of brains in a vat. But there are other approaches that fare better than semantic externalism.
Now that we have explored some of the sceptical challenges to the traditional JTB theory of knowledge, it is time to explore some of the alternative theories that have been proposed as solutions to those challenges.
Foundationalism is a variant of the traditional Justified-True-Belief definitions of knowledge. It gets its name from the manner in which it addresses the nature of justification. It was first conceived as a response to Agrippan Scepticism.
The formal requirements of a response to the Agrippan sceptic would be satisfied if there were beliefs of any of these three types:
(a) beliefs which are justified by something other than beliefs;
(b) beliefs which justify themselves;
(c) beliefs which need no justification.
Foundationalism addresses the Agrippan problem of infinite regress by positing that there are indeed some beliefs that are non-inferentially justified, and therefore "foundational". Foundationalism maintains that in addition to the three options offered by the Agrippan Sceptic, there is a fourth option -- specifically a set of foundational beliefs that do not require further justification, by their very nature true and self-presenting (i.e. self-justifying) simply in virtue of the fact that we have the belief, intrinsically credible by virtue of the kind of belief they are. These non-inferential foundational beliefs are those that are "infallible", "indubitable", "self-evident", and/or "self-presenting".
Foundationalism proposes that -
|(JTB-F)||S knows that P iff||(1) P is true;|
|(2) S believes that P; and|
|(3f) S perceives that P (in some tightly constrained understanding of "perceive"); or|
|(3s) S's belief that P is "based" (in some tightly constrained understanding of "based") on foundational beliefs.|
The claim that there are two forms of justification -- inferential and non-inferential -- is the core of any form of foundationalism. The difficulties faced by foundationalism arise from just how to precisely specify those two "tight constraints". The two dimensions that characterize and distinguish the various foundationalist theories are therefore:
(a) the problem of the foundation -- just exactly what beliefs are "foundational" and just why are they to be considered "self-justifying"; just what exactly is that tightly constrained understanding of "perceive"; and
(b) the problem of the structure -- just how are our non-foundational beliefs "based" on the foundational beliefs; just what exactly is that tightly constrained meaning of "based".
The distinctive commitment of foundationalism is that every belief has an intrinsic epistemological status -- the epistemological status of the belief is independent of the believer. Beliefs of one kind can be treated as epistemologically prior to beliefs of some other kind because they simply are epistemologically prior independently of what we may think of the matter. This can be characterized as a form of "epistemological realism" -- the epistemological status of any belief is a property of that belief that is "real" and evidence (and judgement) transcendent.
Classical foundationalism gives expression to the central tenet of empiricism, the view that all our knowledge is derived from our experience. It does this by insisting that a belief which is not about our own sensory states (our immediate experience) must, if it is to be justified, be justified by appeals to beliefs which are about our own sensory states. Such beliefs are supposed to be able to stand on their own feet, without support from others.
How is it that our beliefs about our present sensory states need no support from other beliefs? Foundationalism maintains that our beliefs about our present sensory states are infallible. This would guarantee they are all true. These basic foundational beliefs are not simply unchallenged, but unchallengeable. (In some variations of Foundationalism, perhaps not absolutely unchallengeable, but at least not automatically challengeable and thus at least prima facie justified.) Perceptual beliefs are infallible because we necessarily have them just when they are true. For example, if you believe that it is raining outside, then it is unavoidably (indubitably, infallibly) true that you believe that it is raining outside (regardless of whether it is in fact raining outside). One's beliefs about one's internal mental states are claimed to be infallible in just that way. If I believe that I am hungry, sad, happy, tired, sore, and so forth, it is impossible for me to actually be otherwise. The state of being hungry simply is the belief that I am hungry. As an example of external sensory inputs, I believe that I perceive that the tomato is red if and only if I perceive that the tomato is red. I simply cannot believe that I perceive the tomato to be green if I perceive it to be red. This infallibility of beliefs about our perceptions carries through all sorts of environmental and psychological distortions.
There are several kinds of beliefs that foundationalists claim to be unavoidably and infallibly true. The approaches that have been suggested can be divided into a priori foundations, and a posteriori foundations. A priori knowledge is supposedly arrived at through reasoned analysis, and is independent of any empirical justification or verification. The concept of a priori knowledge goes all the way back to Plato's Forms. A posteriori knowledge, on the other hand, depends on empirical observation and experience, and is the standard of the empiricists. As a consequence of this distinction, foundationalist theories that maintain that our foundational beliefs are primarily a priori are called "rationalist". And those that maintain that our foundational beliefs are primarily a posteriori are called "empiricist".
Whether a rationalist or an empiricist, the foundationalist's dilemma is to define a basis for knowledge modest enough to be secure against the sceptic, but rich enough to be adequate as the foundation for the rest of what we believe. The problem arises from the nature of the content of any experience -- either of internal states or external perceptions. If our experience is to have any epistemological significance, it must be expressible in propositional form. This amounts to rejecting any sharp distinction between experiencing and judging or believing. The kind of experiential awareness that is involved in knowledge must necessarily incorporate the ability to make propositionally comprehensible claims.
However, propositional content involves conceptual or descriptive content -- that A is F. And description is inseparable from the possibility of mis-description. It seems, then, that propositional content is inseparable from the possibility of error. If this is so, no judgement, however modest, is absolutely indubitable. So if basic experiential knowledge has to be indubitable, there is no such thing as knowledge. The difficulty is to see how any judgement can be wrenched out of all inferential connections to further judgements (other beliefs) and retain any content at all. Our basic beliefs must have sufficient content to support the superstructure in which we are really interested, and no belief with that amount of content is going to be infallible.
"There can be no bearers of truth value without judgment and judgment involves the application of concepts. But to apply a concept is to make a judgment about class membership, and to make a judgment about class membership always involves relating the thing about which the judgment is made to other paradigm members of the class. These judgments of relevant similarity will minimally involve beliefs about the past, and thus be inferential in character (assuming that we can have no "direct" access to facts about the past)."
It is difficult, for example, to understand the constrained concept of "perceiving", when one believes that one perceives oneself to be hungry or the tomato to be red, unless one also adds to the mix clearly non-foundational beliefs about the meaning of "hungry", "tomato" and "red".
Most foundationalist theorizing does not adequately treat the problem of the manner in which S's belief in P is "based" on the foundational beliefs. It surely cannot be restricted to deductive entailment, since most of our inferences are not deductive but inductive. What happens if our foundational (for example, perceptual) beliefs are contradictory? This is certainly possible. It is not uncommon for our eyes to give us one message of what is out there, while our ears or fingers give us another.
But the principles of inference by which we move from foundational beliefs to non-foundational beliefs (induction and abduction) are also fallible. To deal with this challenge, so-called "Modest" Foundationalism, a recent innovation in Foundationalist theories, maintains that our basic beliefs have only an intrinsic prima facie credibility, because their epistemic status can be modified in the context of a developed belief-system (a la Coherentism). This modern conception of foundationalism takes a more fallibilist attitude towards what classical foundationalism would treat as infallible beliefs.
Perhaps our belief in P relies on defeasible evidence. Defeasible evidence is simply evidence that justifies, but does not logically entail. It is always possible, no matter how much evidence we accumulate, that defeasible evidence may be defeated by the addition of more (and counter) evidence. If this is the hypothesis, then Foundationalism more or less devolves to Coherentism.
Because the Foundational theory emphasizes the notion that non-foundational beliefs are necessarily "based" on foundational beliefs, it is essentially a "prior grounding" model of justification. Empiricist versions of foundationalism stress the evidentialist principle of the default and challenge model, giving voice to the empiricist position that all knowledge is based on perceptual experience. On other dimensions of justification theories, most foundationalist theories remain mute.
Additionally, Foundationalists need to find some means of justifying the rules of inference. They are not generally considered "foundational" beliefs, unless one considers them to be basic axioms -- assumed without justification. It is difficult, therefore, to understand how non-foundational beliefs can be derived in a suitably constrained "basing" way from foundational beliefs without opening up the holes inherent in inductive and abductive inferences, neither of which guarantees the truth of their conclusions the way that deductive inferences do.
It is clear from both the difficulties of the foundation, and the difficulties of the structure, that the foundationalist account of knowledge is, at the very least, incomplete.
Coherentism, in contrast to Foundationalism, is radically holistic -- it does not recognize any epistemological distinction between kinds of beliefs. Unfortunately, our belief systems show considerable variation in the degree to which we believe our beliefs. We have different styles of belief, different degrees of acceptance. This variation in epistemological significance is not well treated by Coherentism. Not only does Coherentism not deal well with the different degrees or styles of belief, it does not grant any special epistemic status to experiential or perceptual beliefs. The coherence account of knowledge therefore does not provide any necessary linkage between one's network of beliefs and an external reality. And that is its weakest link.
Coherentism proposes that -
|(JTB-C)||S knows that P iff||(1) P is true;|
|(2) S believes that P; and|
|(3) S's belief that P "coheres" (in some tightly constrained understanding of "cohere") with the rest (or some suitably constrained sub-set) of S's belief-set.|
The coherence account of knowledge faces a pair of serious challenges. One challenge is the need to specify the nature of the "coherence" that is claimed to be the basis for belief justification. Just what does it mean for a set of beliefs to form a "coherent" set? Part of this specification is the need to distinguish between beliefs within the belief-set that do qualify for the "coherence set", and beliefs that must be dismissed as irrelevant, and remain in the category of opinion -- the problem of degrees of epistemological significance. Clearly, not all of my beliefs will qualify as knowledge. But the question is -- which ones will? The other serious challenge facing coherence theorists is to explain why a coherent set of beliefs does or should track the truth. The traditional JTB theory of knowledge includes the condition that the belief be true. It is not at all obvious why or how a coherent set of beliefs should, or could, track the truth.
Let's consider first the problem of just what it means for my beliefs to form a coherent set. Just what is this "coherence"? It surely cannot be as simple as logical consistency. For one thing, most people hold at least some contradictory beliefs. And for another, most of our beliefs are formed or arrived at on the basis of induction or abduction -- conclusions generalized from limited observation, or selected as the "better" of a number of possible explanations. "Coherence" must be interpreted to mean more than just deductive inference, on pain of unacceptably limiting what can be claimed as knowledge.
Coherence theorists generally describe the coherence involved as "explanatory" coherence: a belief system is more coherent the more beliefs it incorporates into the network, and the greater the range of beliefs it records, explains, and allows us to anticipate. Recognizing which beliefs are relevant and constructing theories to explain them are two aspects of a single process. The goal is to make our beliefs as coherent (i.e. explanatory) as we can. On this basis, a belief-system is more coherent the fewer self-contained sub-sets it contains, and the fewer beliefs it dumps into the irrelevant category. Coherence is also increased by our epistemological self-understanding. Why we believe the things we believe, and what we consider to be the standards of "good" justification, are important contributors to the explanatory coherence of our belief-set. But a challenge facing any suggestion of what "coherence" consists in is its necessary dependence on our being aware of both the totality of our belief-set (in order to determine whether any new belief might increase its coherence), and the proposed rules of coherence (in order to discern whether some new belief might increase the level of coherence). To employ "coherence" as a standard of knowledge justification, we need to know what all our beliefs are, and how they hang together. This seems to conflict with the rather obvious fact that at any given time I do not always remember all the things that I would believe if I could but remember them.
In response to this apparent overload, some variants of Coherentism have accepted that the belief-set that needs to be coherent can be less than one's entire body of knowledge. But accepting a sub-set approach as a way to lessen the information scope moves Coherentism towards the alternative of Contextualism.
Of course, there is an alternative here that some coherence theorists have adopted. Most JTB theories of knowledge are internalist. But it is possible to frame a coherence theory of justification in an externalist way. One externalist version of Coherentism suggests that the rules of coherence that apply are external. They are provided by reality and our evolutionary adaptations, and need not be consciously accessible to the subject. Such an externalist version of Coherentism would demand only that the set of propositions expressed by one's beliefs actually be coherent (according to whatever concept of "coherent" applies) without the subject being aware of that fact or the relevant principles of coherence. However, an externalist approach does not help in addressing the second challenge.
The coherence theorist also faces the challenge of explaining why and how the coherence of a set of beliefs should converge on truth. It is not at all obvious that the coherence of a set of beliefs should or even could track reality. The prevalence of "conspiracy theories" and their advocates demonstrates that a coherent set of beliefs need not reflect the truth. Yet, as noted above, it is commonly accepted that for a belief to be accepted as knowledge, that belief must at the very least be true. Without granting some special status to observational or experiential beliefs, it is difficult to find a way to ensure that one's coherent set of beliefs maintains some sort of linkage to reality.
According to some theorists, the presence in our belief-net of second-order epistemological beliefs about the reliability of our belief-forming processes gives experiential observations more weight than might otherwise be supposed. But even with that additional weight, there is nothing within an "explanatory" understanding of "coherence" that necessarily connects our beliefs to the truth. It is always at least logically possible that one's coherent set of beliefs is quite separated from reality, especially when we consider that one function of "fitting" a set of beliefs into a coherent network is to filter out those beliefs that must remain outside the network as "mere opinion".
Of course, some coherence theorists take the step of denying that there is any gap between a coherent belief network and the truth. The "truth-realist" believes that there are evidence-transcendent truths, truths whose obtaining lies beyond our powers of recognition. The "truth-anti-realist" denies the existence of such evidence-transcendent truth, and holds that differences which we are in principle incapable of recognizing do not exist. From such a perspective, it is not possible for a coherent set of beliefs to not be true. By adopting an anti-realist conception of truth, such theorists can maintain that reality is defined by, determined by, or otherwise constrained by one's coherent set of beliefs. This is known as the coherence theory of truth. By this standard, if my beliefs do form a coherent set, they would by definition be true.
Although this is a widely popular approach to the meaning of "truth" amongst metaphysical Idealists, authoritarian political and religious rulers, and conspiracy buffs, it turns the notion of "truth" into a purely subjective concept. Adopting this means of eliminating the gap between a coherent belief-set and "the truth" also entails that the operative definition of knowledge has to be reduced from a "justified true belief" to simply a "justified belief". The extra condition of truth would merely duplicate the condition of justification, since the two are now considered equivalent. I discussed the desirability of a separate "Truth Condition" in section 3.3 above.
The obvious abuse of the popular or folk notion of "truth" that an extremely subjectivist truth-anti-realism introduces usually pushes the truth-anti-realist to generalize the idea in order to make it less subjective. On this broader interpretation, "the truth" is the coherent set of beliefs that would be held at the limit of enquiry, or that is held by some particular population, or some omniscient intellect. Unfortunately, this retreat from pure subjectivism does not resolve the issue of a gap remaining between one individual's coherent beliefs and "the truth" (however defined). The local belief set of some one individual may be coherent, yet still inconsistent with the broader vision of coherence truth (and therefore not true). That leaves the coherence theorist with no means of distinguishing between a "reasonable" (ie "in touch with reality") belief-system, and the delusions of a logically adept paranoid. An internalist coherence account of knowledge admits no external standard by which to evaluate different belief-systems. The coherence account of knowledge has no means of distinguishing between a belief-network that is "true", and a belief-network that is not. Who is the historian telling it like it is, and who is the delusional conspiracy theorist?
Of course, it is always possible to add externalist features to a coherence theory. If one adds the externalist stipulation that the coherent belief-set actually track the (realist or anti-realist conception of) truth, then one can resolve the difficulty outlined above. But it becomes debatable whether we should continue to call the resulting theory a "Coherence" theory of knowledge. The most meaningful feature of the resulting theory would be its means of providing the required empirical grounding. The personal epistemic responsibility aspect (the coherence of a belief-set) becomes more or less secondary to the epistemic grounding consideration.
Brand Blanshard takes another approach, arguing that there is only one possible maximally coherent set of beliefs, and that that set is distinguished from all its potential rivals by being empirically grounded. "What really tests [a] judgment is the extent of our accepted world that is implicated with it and would be carried down with it if it fell . . . . That is the test of coherence."(9)
According to this approach, the functional purpose of thought and belief formation is to understand our experiences by generating the most systematic (coherent) belief-set that explains them, and, I would suppose, to provide the most profitable basis for anticipating the future course of events. So the set of beliefs which we form in the process of understanding our experiences must, because of its starting point in our experiences, be empirically grounded. It is this necessary beginning of belief formation in experience that Blanshard claims guarantees that there will be only a single unique set of beliefs which constitutes the most coherent set.
However, contra Blanshard, even including the beliefs we form as a result of our efforts at understanding and anticipating our experiences, it is still logically possible that there is more than one equally "good" way of assembling our experiences into a coherent explanatory system, particularly when we remember that some of our beliefs will be rejected (as anomalous, erroneous, or irrelevant) along the way. The entire sceptical argument of under-determination is based on the observation that our theories are all under-determined by the evidence. There is always an alternative theory that can explain all our evidence. Both paranoids and conspiracy buffs prove this.
Considering the coherence account of knowledge in the light of Agrippan scepticism focuses attention on the need for beliefs that meet two conditions. First, they must be non-inferentially credible -- justified without deriving their justification from other beliefs (e.g. perceptual beliefs). Second, they must be credible in a way that reflects some kind of external constraint. Can we even understand the concept of thinking (not just believing and knowing) if we cannot think of our beliefs as justified in a way that involves their being aimed at the nature of the world? There are only two ways to anchor a network of beliefs to an external (evidence-transcendent) "truth". One is to grant some special epistemic status to perceptual / experiential evidence. But down that route lies the competing theory of Foundationalism. The other is to abandon the struggle to maintain pure justification internalism, and allow some external notion such as reliabilism to intrude sufficiently to anchor our beliefs. But down that route lies the competing theory of Contextualism.
Either route is one that the coherence account has already started on. Despite the claim of coherentism that no beliefs are to be granted any special epistemic status, the coherence account needs to provide some special status for the standards of coherence -- the standards of what is to count as one set of beliefs being more coherent than another. Either one's standard of what should constitute a coherent set of beliefs (a "good" way of assembling our experiences into a coherent explanatory system) is purely subjective in the way that the anti-realist views truth, or the coherentist must grant the objective standards of coherence special epistemic status. The first alternative is another way to move the coherence account of knowledge towards the competing theory of Contextualism. The second alternative is another way to move it towards the theory of Foundationalism.
Either way, it is clear that the coherence account of knowledge, like the foundationalist account, is at the very least incomplete. And like Foundationalism, the Coherence theory of knowledge incorporates the prior grounding model of justification. Because of its emphasis on the coherence of one's entire body of beliefs, it necessarily emphasizes the possession principle -- one must be aware of all the justifying reasons before one can judge whether a belief coheres with the rest. Hence it also emphasizes the "no free lunch" principle: if one knows that P, then one also has all the resources in hand to know that one knows that P. The advantage that coherentism has over foundationalism is its manner of dealing with Cartesian scepticism. As I noted above when discussing the BIV argument, Cartesian scepticism is based on a foundationalist understanding of knowledge. But Coherentism bypasses the BIV argument by maintaining a different idea of what constitutes proper justification. I can therefore know that I have two hands, if that belief properly coheres with the rest of my beliefs. The BIV sceptical suggestion carries no weight, and is successfully defused.
Pure reliabilist theories of knowledge are also thoroughly externalist. These theories analyze knowledge in terms of the processes that generate our beliefs, rather than in terms of the beliefs themselves. Knowledge, in reliabilist theories, is defined in terms of processes that reliably generate true beliefs, or at least generate true beliefs more often than not.
One of the more prominent proponents of a Reliabilist understanding of knowledge is Dretske.(10)
Reliabilist theories would replace the three conditions of the JTB theories mentioned above with -
(TBR) S knows that P iff
    (1) P is true;
    (2) S believes that P; and
    (3) S's belief that P is generated by a reliable cognitive process (in a way that degettierizes that belief).
Knowledge, on this account, is belief from reliable sources; and to trace a belief to a reliable source is to justify it. Recognizably reliable sources are thus authoritative. For Reliabilism, therefore, it is always possible to understand the word "justified" in terms broad enough that a belief formed by some reliable process counts as suitably "justified".
If we are to explain ordinary knowledge in terms of truth-reliable methods, we must insist that "reliability" be reliability with respect to some reasonable or normal range of conditions, not with respect to all logically possible conditions. "Reliability" is therefore an incurably interest-relative notion. Reliability, like justification, comes in degrees. This means a fallible conception of reliability, rather than an infallible conception. "More often than not" is sufficiently reliable in most circumstances. And this element meshes nicely with the Gettier examples described above. In most circumstances, our claims to knowledge are reliably generated. But in some of those weird Gettier scenarios, the process gets it wrong. And that is acceptable.
It is worth noting that reliabilism is much more consistent than the JTB theories with the evolutionary development of the human intellect. In positing only the existence of some unspecified reliable belief-forming process, reliabilism is indifferent to any specific process. This allows reliabilism to accommodate an evolutionary sequence of ever more reliable processes. Such an evolutionary conception of knowledge-generating processes also provides an explanation of why knowledge is valuable -- an issue that the standard JTB theories do not directly address. On most theories of knowledge, knowledge is "assuredly true belief", and is thus the best bet when pursuing evolutionary survival. It is better, in the long evolutionary run, to know (rather than guess) behind which tree the tiger lurks (else, as I have already mentioned, one is liable to become lunch rather than enjoy it). From the perspective of reliabilism, considered generically, it is largely irrelevant how one comes to know the necessary knowledge. The processes of evolutionary selection will ensure that those with the more reliable belief-forming processes will be the ones to populate the future.
On the other hand, purely externalist theories (like reliabilism and truth tracking, described below) do not incorporate the intellectual capabilities that seem to be demanded by JTB theories. A key intuition about knowledge is that knowledge involves a judgement -- an intellectual exercise that examines evidence and judges whether some belief is sufficiently justified to qualify as knowledge. Externalist theories therefore run counter to our generally held intuition that one must have reasons for claiming to know rather than merely believe. The concept of justification seems somehow integral to our notion of knowledge, and externalist theories do not fulfil that intuition.
However, reliabilism permits multiple reliable processes as the basis of knowledge. Intellectual inference from evidence (as per the internalist's concept of justification) could be considered but one of many reliable processes. A further advantage of reliabilism is that it is consistent with the observation that knowledge does not always depend on a reasoned (and hence time-consuming) inference from evidence. Reliabilism permits other processes that do not involve intellectual inference from evidence. Reliabilism as a general approach is therefore consistent with both an externalist version of coherentism and an externalist version of foundationalism. Foundationalism and Coherentism become just additional reliable means of forming true beliefs.
Some philosophers recognize two different kinds of propositional knowledge -- intellectual or reflective knowledge on the one hand, versus animal and non-reflective knowledge on the other. Intellectual propositional knowledge is the usual kind: a true belief that is justified with sufficiently supportive evidence or intellectually drawn inferences, of either the deductive, inductive, or abductive kind. "Animal" knowledge is the kind that can be attributed to animals and pre-intellectual children. It is knowledge that is arrived at through some reliable non-intellectual process, and need not necessarily involve a belief. Non-reflective knowledge is the kind that one can acquire quickly without intellectual effort, when time is clearly too short for reflective development of reasons and consequences, as in moments of crisis. Externalist theories like reliabilism can accept that we might have such "non-intellectual" sources of knowledge. We commonly do attribute knowledge to animals (especially pets) and pre-intellectual children. And we commonly attribute knowledge to people in circumstances where it is clear that time for intellectual reflection (a search for justification) does not exist. These intuitive uses of the concept of knowledge are not adequately satisfied by pure JTB theories. So perhaps reliabilist theories are not the complete story, but only part of the tale.
Not all philosophers recognize both sorts of propositional knowledge, of course. Most JTB theorists would deny that non-intellectual mentalities (animals, young children, brain-damaged patients, senile seniors, etc.) have "knowledge" in the proper meaning of the word. They only seem to have knowledge, and our application of the word "knowledge" to their circumstances is just a convenient verbal short-hand. Hence, internalist theories of knowledge, because of their focus on intellectual justification, would consider animal or non-intellectual "knowledge" to be merely an allegorical attribution -- calling something "knowledge" when it really isn't, because it might be if the subject in question were intellectually capable.
In reaction to these conflicting advantages, there are also some "mixed" theories that combine both the intellectual sense of the JTB theories and the reliable belief generation of the reliabilist theories. Some of these theories have been called "Loose Reliabilism" because they incorporate intellectual justification as just another reliable belief-generating process. Others, like the Contextualist theory discussed below, are more properly considered justificationist because they are primarily JTB theories that also incorporate some reliabilist and externalist considerations.
Like all reliabilist theories, Goldman's Causal Theory allows knowledge no essential connection to justification. Instead, it posits that
(TBC) S knows that P iff
    (1) P is true;
    (2) S believes that P; and
    (3) S's belief that P is caused by the fact that P.
Another reliabilist theory, the Law-Like Connection theory, maintains that events can co-vary in a law-like way even when there is no causal connection between them.
(TBL) S knows that P iff
    (1) P is true;
    (2) S believes that P; and
    (3) there is a "law-like connection" between the state of affairs that P and S's belief that P.
Being pure reliabilist theories, these two are purely externalist conceptions of knowledge. They focus on the third-person epistemic grounding sense of justification to the complete exclusion of the first-person epistemic responsibility sense. They do capture the intuition that there needs to be some form of "connection" between the truth (the facts of the matter) and our beliefs in order for our beliefs to qualify as knowledge. However, as with all purely externalist theories, there is no way for S to discern whether condition (3) has been satisfied. And that runs counter to our intuition that if S knows that P, he ought to be aware of that, or at least be able to discern it.
There is another, and more recently popular, member of the JTB family of theories of knowledge that resolves many of the difficulties facing foundationalism and coherentism, while maintaining many of their desirable features. That theory is Contextualism. It is still a Justified True Belief theory. But it differs critically from Foundationalism by denying both that there are some beliefs that are justified independently of other beliefs, and that there are beliefs that have a specific epistemic status independent of the context of discourse. And it differs from Coherentism by denying both that the coherence that matters is coherence with some significantly large belief-set, and that the rules of coherence are fixed and independent of the context of discourse. The fundamental principle of contextualism is that whether a belief qualifies as knowledge depends critically on the context within which the knowledge claim is held.
The impetus for Contextualism is the observed phenomenology of our encounters with Cartesian Scepticism:
1. In ("sceptical") conversational contexts that do involve sceptical hypotheses (like the Brain-in-a-Vat possibility), any attribution of knowledge (whether that we know that we are not a BIV, or that we know that we have two hands) seems intuitively wrong.
2. In normal every-day ("non-sceptical") conversational contexts that do not involve sceptical hypotheses, attributions of knowledge (whether that we know that we are not a BIV, or that we know that we have two hands) seem intuitively unproblematic.
3. The only things that seem to change when we shift from a normal every-day "non-sceptical" context to a "sceptical" context are factors specific to the context of discourse.
Contextualism, because it allows both fixed points and epistemic interdependence, has a good claim to incorporate the best features of its traditional rivals. Elements of coherentism show up in the prima facie warrant granted to beliefs. It is necessary that the belief that P "cohere" with a lot of other beliefs that S might have -- pretty much in the way that Coherentists would describe explanatory coherence, although only with beliefs that are contextually relevant. Note that this specifically does not demand coherence with any problematic "universal set" of beliefs. For example, if it seems to me that I have two hands, then I am prima facie justified in believing that I have two hands. The logically possible alternative that I am a brain in a vat is not a relevant alternative in most contexts, and can be safely ignored in those contexts. Only when the context involves a philosophical discussion of sceptical challenges, would the context include the "brain-in-a-vat" alternative. (And in such a context, I would not be justified in believing -- and hence would not know -- that I have two hands.) Justification, and hence adequate justification, is always context sensitive (hence the name of the theory).
Elements of foundationalism show up in contextualism through the recognition that perceptual inputs have special epistemic status, although this special epistemic status is granted within a context of beliefs, and is not intrinsic to the belief itself. One of the ways it can seem to me that the tomato is red is that I can be perceptually aware (see) that the tomato is red. And I have second-order epistemic beliefs to the effect that my perceptual systems are generally reliable. Contextualism does not share foundationalism's difficulty with understanding the meaning of "tomato" and "red", because it understands perceptual awareness in a coherentist way -- perceptual awareness is wrapped in beliefs about the meaning of conceptual categories and word meanings.
A key difference between Contextualism and the other theories of knowledge is that contextualism adopts the "default and challenge" model of justification, and switches the priority of justification from the "prior grounding" conception to the "epistemic responsibility" conception. (Williams makes this specific claim about inferential contextualism. Semantic contextualists have largely ignored this issue. So I will let it stand that this is a feature of both versions of contextualism.)
As I noted above when exploring the details of the Brain-in-a-Vat argument, Agrippan scepticism is rooted in the Prior Grounding requirement. Not merely does the Agrippan sceptic demand the prior grounding requirement for beliefs to qualify as knowledge, he insists on a very restrictive conception of our available evidence. Given the way in which we actually use the notion of "knowledge", the sceptic's assumption that experiential knowledge is, in some wholly general way, epistemologically prior to knowledge of the world is a contentious and even implausible theoretical commitment. The basic principle of contextualism, by contrast, is that if it seems to S that P, then S is prima facie justified in believing that P. In the absence of reasons to believe otherwise, for example, it is irrational not to believe what you perceive.
In judging that I have knowledge rather than mere belief, I am judging that I am warranted in claiming that my belief is adequately grounded, but not necessarily that I have already confirmed its grounding. Its grounding could consist of its being caused by, or formed via, a reliable process. This aspect of contextualism draws in many of the benefits of reliabilist theories of knowledge, such as their compatibility with evolutionary theory. Also, second-order epistemic beliefs can show up in the contextual coherence net that justifies the belief that P. The fact that I believe my perceptions to be a reliable indicator of the truth of things is a good reason for believing that (prima facie) things are as they perceptually appear. From the perspective of contextualism, then, a good justifying reason for believing that P would be its seeming to me that P, when I have no reason to suspect that the way in which it seems to me might not be veridical. Given, of course, a conceptual context wherein either P or not-P might be supported by the evidence, but excluding non-contextual ("unreasonable") alternatives that would invalidate any of the beliefs that make "P" intelligible.
A contextualist view of knowledge and justification does not commit one to holding that a reference to context is part of the content of a knowledge-claim. A knowledge claim commits one to holding that all significant defeaters -- possibilities which, if realised, would make one's beliefs false or inadequately justified -- have been eliminated: the contextual element comes in to fix what defeaters should be counted as significant, and the constraints on sceptical challenges delineate the scope of the context. But presumptions as to what is significant are themselves open to criticism, which can be informationally or economically triggered. Contextualism takes fallibilism seriously. Justification is always provisional, never water-tight. And it often involves less-than-algorithmic procedures, such as inference to the best explanation.
That being said, there are two different kinds of Contextualism. There is the "semantic" contextualism of Keith DeRose(11) and David Lewis(12) (among the more prominent of advocates), and there is the "inferential" contextualism of Michael Williams.(13) Both versions of contextualism agree that "know" is a context sensitive concept in a manner similar to the concept of "flat". Whether something can be properly called "flat" depends critically on the context of discourse. If the context of discourse is, say, the geography of the plains states, then the state of Kansas can be properly called "flat". But if the context of discourse is highway bridge construction, then Kansas cannot be properly called "flat", since highways spanning it still require grades and bridges.
The contrast between the two branches of contextualism is that in the semantic version the focus is on the role that conversational contexts play in the determination of epistemic contexts, while in the inferential version the focus is on the role that individual inferential structures play in that determination.
The semantic contextualism of DeRose and Lewis is also called attributer contextualism, because it maintains that the conversational context of the attributer of knowledge is the relevant context that determines whether the knowledge claim is valid or not. Hence the conversational context within which one might assert "S knows that P" determines whether that assertion is proper.
    According to this view, the truth value of sentences containing the word "know" and its cognates will depend on contextually determined standards. Because of this, such a sentence can have different truth-values in different contexts. Now when I say "contexts", I mean "contexts of ascription". […] This view has the consequence that, given a fixed set of circumstances, a subject S and a proposition P, two speakers may say "S knows that P", and only one of them thereby say something true. For the same reason, one speaker may say "S knows that P", and another say "S does not know that P" (relative to the same circumstances), and both speakers thereby say something true.
One of the key challenges facing the semantic version of contextualism, however, is its over-sensitivity to conversational content. Given a non-sceptical context, it is non-problematic for me to claim to know that I have two hands, or to make any other common every-day claim to know something. But I cannot, it would appear, properly claim to know that I am not a Brain in a Vat. For to make that claim, I must raise the sceptical hypothesis, and thereby change the context of discourse. The mere mention of the sceptical possibility changes the context of the conversation to a "sceptical" context. And in a sceptical context, I do not know that I am not a BIV.
The inferential contextualism of Williams provides a more desirable response to this problem. Inferential contextualism is not as sensitive to conversational content. It focuses instead on the inferential structure or foundation of the discourse -- the underlying premises upon which the discourse is based. Hence, the mere mention of a sceptical alternative is not alone sufficient to switch contexts. For example, if one is discussing history, then the inferential context underlying the discussion is one that is appropriate to the study of history. It includes such things as the presumption that histories and historical documents are roughly accurate, and excludes such things as the sceptical Russellian hypothesis that the world was created 10 seconds ago, complete with all "evidence" of a longer prior history. Merely mentioning the Russellian hypothesis is not sufficient to change the context of discourse. Changing the context of discourse would require the agreement of all (or at least most) of the parties to the discourse that the discussion is no longer a discussion about history, but a discussion about epistemology. Inferential contextualism is therefore not an attributer contextualism. It does not matter who is making the attribution of knowledge, as long as that attribution remains within the given context of discourse. Hence, if the context of discourse is centered on such every-day common things as the fact that I know that I have two hands, I can also properly claim to know that I am not a Brain in a Vat. (I may in fact be wrong about that claim, and not in fact know that I am not a BIV, if I am indeed a BIV; but I can nonetheless properly make the claim to know that I am not a BIV, as long as the context remains centered on common every-day matters.)
Of course, some third-party observer to the circumstances can decide to change the context for some reason having nothing to do with the discourse being observed. Pete may look at the MLS listing in his hand and tell his wife that the vendors are asking $340,000 for this house. Pete sees that his paper has the asking price printed on it, and he remembers printing off this paper from the MLS web site that morning, so he judges that he has sufficient reason for properly claiming to know the asking price for this house. But Jane, overhearing Pete's advice to his wife, tells her own husband to go and check with the real estate agent showing the house, because she does not believe that Pete does know. Because Jane is very interested in placing a bid on the house, her inferential context contains epistemic standards that are much higher than Pete's. Her inferential context is much more sensitive to the possibility of getting the asking price wrong, because erring on the high side would be a very expensive error. Pete and his wife, because they are not as interested in buying this house, have an inferential context that is much more tolerant of error. Even if Pete is in fact in error about the asking price, it would not matter to them. Thus Pete's wife can say "Pete knows the asking price", and Jane can say "Pete does not know the asking price", and both statements can be true.
Another of the distinctions between inferential and semantic contextualism is the attitude of inferential contextualism towards the epistemic status of beliefs. This shows up most obviously in the different treatments of contexts of discourse. The semantic/attributer contextualism treats contexts of discourse as intrinsically more or less demanding. Hence, in the above example, Jane brings a "higher" or "more stringent" set of epistemic standards to the judgement of whether Pete has knowledge than Pete's wife does. And the sceptic brings the highest and most stringent set of standards to his claim that knowledge is impossible because the sceptical hypothesis cannot be disproved. Inferential contextualism, on the other hand, maintains that the different contexts of discourse are not more or less demanding, just different. Hence inferential contextualism challenges the sceptical enterprise right at the start, by arguing that the sceptic's level of epistemic standard is no more valid or "good" or "stringent" than the common every-day standard. The sceptic claims that knowledge is not possible because his sceptical alternative cannot be disproven. The inferential contextualist replies that the sceptic is invalidly applying his sceptical epistemic standard where it does not apply. All that the sceptic has proven is that knowledge is impossible within the sceptical context. Inferential contextualism, as a consequence, denies what Williams calls "epistemic realism" -
[…] the epistemic status of a given proposition is liable to shift with situational, disciplinary, and other contextually variable factors: [inferential contextualism holds] that, independently of such influences, a proposition has no epistemic status.
Interestingly, this denial of epistemic realism applies to areas beyond the standards of judgement for knowledge claims. Like coherentism, contextualism embodies a kind of local or modular holism, but its inferential variant is radically anti-realist when it comes to epistemic status. Whatever epistemic status a belief acquires is awarded within a specific context by the participants. Norms, including epistemic standards and classifications, are norms that we set, not intrinsic properties imposed upon us by "the nature of epistemic justification". Inferential contextualism therefore maintains that such categorical distinctions as "foundational versus non-foundational", "a priori versus a posteriori", "necessary versus contingent", "analytic versus synthetic" and "about the real world versus about internal mental states" are only contextually relevant and contextually definable, and not objective properties of beliefs. Inferential contextualism therefore rejects both foundationalism's epistemological atomism -- the notion that a basic belief can be intrinsically justified on its own -- and foundationalism's epistemological realism -- the notion that any belief possesses an objective epistemological status.
Hence, for inferential contextualism, "knowledge of the external world" is like "demonic possession": both concepts reflect false theories. There is no such thing as demonic possession because there is nothing in reality that would distinguish a "demon". In just the same way, there is no such thing as "knowledge of the external world" because there is nothing in reality that would distinguish an "external world" from something else. For there to be "knowledge of the external world" as a specifically identifiable set of beliefs, there would have to be some special epistemic status to beliefs about an "external world" that would distinguish such beliefs from others.
Inferential contextualism recognizes that sceptical challenges to our beliefs take place within several systemic constraints that themselves confer a prima facie justification on some beliefs. While it is entirely possible that some of our beliefs might in fact be in error, it is not possible, contrary to the sceptic's claim, that all of our beliefs could be in error at the same time. In order to think at all, there must be some beliefs that of necessity go unchallenged. Five such constraints can be distinguished:
(1) intelligibility or semantic constraints. One reason we have many default entitlements is that holding many true beliefs, or not being subject to certain kinds of error, is a condition of making sense. Anything can be called into question, but not everything at once. In order to respond to any particular challenge to our claim to knowledge, one must first be able to understand the challenge. And understanding the challenge demands a whole host of associated beliefs that cannot, in the context of the challenge, themselves be in question. To render any discussion of any topic intelligible, a large body of beliefs must be assumed to be true. The intelligibility of language is a social matter that necessarily involves related beliefs that must be accepted as true.
(2) methodological constraints. Some propositions have to be exempted from doubt or challenge (even if only for the time being) if certain types of question are to be pursued. Methodological constraints frame the direction of inquiry. The direction of inquiry has to do not with the level/depth of scrutiny, but with the angle of scrutiny. How one goes about an inquiry involves a whole host of beliefs that must be assumed to be true in order to render possible the very questions that are being asked, and in order to evaluate the reasonableness of any possible answers entertained. You can't enquire about anything unless you assume that the measurements you make are trustworthy. Even if you are testing the trustworthiness of one device, you have to trust some other testing device.
(3) dialectical constraints. Possible defeaters may or may not be in play. The default status of claims and beliefs changes with the dialectical environment. Non-epistemic defeaters cite evidence that one's assertion is false. This evidence might be purely negative, or it might be positive evidence for the truth of some incompatible claim. Epistemic defeaters give grounds for suspecting that one's belief was acquired in an unreliable or irresponsible way.
(4) economic constraints. A defeater does not come into play simply by virtue of being mentioned. There has to be some reason to think that it might obtain. How much reason we require fixes the severity of our epistemic standards. If it is important to reach a decision, and the costs of error are relatively low, or if we gain a lot by being right and lose only a little by being wrong, we can afford to take a relaxed attitude towards justificational standards. Standards of justificational adequacy are always standards that we fix in the light of our interests, epistemic or otherwise. If the context is evolutionary survival, then it is important to know where the tiger lurks, even if only roughly. Justificational standards for that kind of knowledge need to be relatively low. And if I am wrong about things, and am actually a BIV, then I have lost nothing.
(5) situational constraints. (The "external factors" in contextualism.) The grounding must exist, whether or not we are aware of it. In claiming knowledge we commit ourselves to the well-groundedness of our beliefs. Our commitment to this well-groundedness is an important source of openness to self-correction. If the evidence should prove me wrong, then I must correct my beliefs, and my claims to know.
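The economic constraints in (4) lend themselves to a simple decision-theoretic gloss: the confidence we demand before acting on a belief can be read off the stakes. The break-even rule and all the numbers below are an illustrative sketch, not anything argued for in the text:

```python
# A decision-theoretic gloss on economic constraints: how much
# justification we demand before acting depends on the stakes.
# All numbers here are invented for illustration.

def required_confidence(cost_of_error, gain_if_right):
    """Acting on a belief is worthwhile once expected gain exceeds
    expected loss: p * gain > (1 - p) * cost, which rearranges to
    p > cost / (cost + gain)."""
    return cost_of_error / (cost_of_error + gain_if_right)

# Where the tiger lurks: acting needlessly costs little, while
# being right saves your life, so even a rough sighting suffices.
print(required_confidence(cost_of_error=1, gain_if_right=100))  # -> ~0.0099

# Bidding on a house: overpaying is expensive relative to the gain
# of a quick decision, so the epistemic standard rises sharply.
print(required_confidence(cost_of_error=50, gain_if_right=1))   # -> ~0.98
```

The same piece of evidence can therefore clear the bar in one practical context and fall short in another, which is exactly the relativity of justificational standards the text describes.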
Although knowledge cannot be detached in any general way from the ability to cite reasons (when called for), nevertheless, in special cases we can attribute knowledge to another person because we can defend his reliability, even if he cannot. This social distribution of reason-giving abilities allows us to inherit knowledge by deference to experts. In a complicated society, an enormous amount of knowledge is acquired this way.
For an inferential contextualist, therefore, there cannot be a sharp distinction between knowing-that and knowing-how. Being able to make judgements (of identification or classification) -- the pre-condition for any knowing-that -- involves know-how essentially. For inferential contextualism, the distinction between the "observable" and the "inferential" or "theoretical" is methodological and not substantive or intrinsic (another instance of the denial of "epistemic realism"). We can observe anything whose presence we can be trained to reliably report on. Perceptual knowing can be causally mediated by our having sensations, while remaining epistemically direct and about objects in our environment.
All challenges to our knowledge claims therefore take place in some definite inferential context, constituted by a complex and largely tacit array of currently unchallenged entitlements. All inquiry takes place in a highly rich informational context. Such contexts are never the creation of a sole inquirer; they are the legacy of past co-operation and communication. Tradition -- the inheriting of results and methods -- is the prerequisite of investigation, thus of self-correction. For a contextualist, therefore, a particular context of inquiry and observation is always characterized by a range of justified background presuppositions (some default justified) concerning matters of general as well as particular fact. There is therefore no room for either the Agrippan or the Cartesian sceptic's global doubts. If a challenger implies that we may be making a mistake, we are entitled to ask how. If the challenger has nothing to say, then no real challenge has been entered. It is simply not possible to be in error about everything at once.
Another challenge that inferential contextualism addresses directly, but on which semantic contextualism is silent, is the fact that the Prior Grounding and the Default and Challenge models of justification set different standards for epistemic responsibility, for a belief's entitlement to the honorific of "knowledge". Semantic contextualism, because it accepts "epistemic realism", has to provide a justification for its choice of models, and explain why it is better than the other.
But for inferential contextualism, it is illegitimate to ask which model is correct. This is to proceed as though there were some realist external fact of the matter that holds quite independently of what we take it to be. A belief is no more justified wholly independently of human evaluative standards than a certain kind of block in soccer is against the rules independently of our practices of judging certain types of block as against the rules.
By tacitly invoking epistemological realism, the semantic contextualist is stuck with the sceptic's conception of knowledge -- the assumption of the prior grounding vision of justification, and the assumption that we cannot responsibly change that standard unless we can prove that it is false. Semantic contextualism therefore does not adequately address the Cartesian sceptical challenge. The inferential contextualist, on the other hand, denies all of the premises relied upon by the BIV argument (as documented above in the section on the BIV argument).
From the perspective of inferential contextualism, then, the BIV argument is simply incoherent. Not only is the Cartesian Sceptic's BIV argument invalid because its conclusion does not follow from its premises, it is also unsound because its premises are false.
Given the advantages of Inferential Contextualism, over both alternative theories of knowledge, and the alternative variant of semantic contextualism, it will be Inferential Contextualism that forms the basis of our understanding of "knowledge" in the rest of this text.
Also known as the Subjunctive Conditionals theory, the "Truth Tracking" theory of knowledge was presented by Robert Nozick in 1981.(16) What Nozick proposed was to step outside the standard tripartite model of knowledge, and examine our concept of knowledge from the outside rather than the inside. Nozick focussed on our intuition that our claims to knowledge must be "properly related" to the fact that P. "Knowledge is a particular way of being connected to the world, having a specific real factual connection to the world: tracking it."(17) He therefore proposed to examine knowledge as something that must "track the truth". In order to capture this concept, he proposed a pair of subjunctive counterfactual conditionals in place of the justification condition of the JTB theory. These subjunctive counterfactuals test both that S believes P because of P and that S's belief that P is not accidental.
In other words, Nozick would replace the three conditions of JTB theories with -
(NTT) S knows that P iff:
    (1) P is true;
    (2) S believes that P; and
    (3a) if P were false, S would not believe that P; and
    (3b) if P were true under other circumstances, S would believe that P.
More formally, taking into consideration that a consistency of method needs to be stipulated, and that the adherence condition needs to be understood in broad terms, the Truth-Tracking model of knowledge maintains that:
(TBC) S knows that P iff:
    (1) P is true;
    (2) S believes (via some method M) that P; and
    (3a) if P were false, S would not believe (via M) that P; and
    (3b) if P were true under other circumstances, S would still believe (via M) that P.
[Note that clause (3a) is commonly known as the variation condition, and clause (3b) as the adherence condition.]
This theory nicely captures two of our intuitions about knowledge. Firstly, that our knowledge reflects the truth -- we try to avoid believing things that are not true. And secondly, that our knowledge is somehow caused by the facts of the matter, rather than simply accidentally true.
However, while this theory does address a number of the difficulties that trouble the JTB theories, it is another purely "externalist" theory. There is no way for S to be aware of whether conditions (3a) and (3b) are satisfied. This seems to run counter to our intuition that if S knows that P he ought to be aware of that, or at least be able to discern that. So while Nozick's truth tracking approach to understanding knowledge may give us some insight into why we have the intuitions we have about knowledge and truth, and provides some guidance for a "God's Eye" analysis of knowledge, it would not seem to provide any insight into what we mean when we claim to have "knowledge".
As we have been exploring at length, ever since Plato the standard JTB analysis has stumbled over just what is meant by "proper justifying reasons" for one's beliefs. So in some minds, it is no better off than the TBC model offered by Nozick. What externalist theories like the TBC theory are proposed by their creators to do, and what they are usually criticized by other experts for not adequately doing, is to satisfactorily counter the two primary challenges facing any of the internalist theories of knowledge. As described above, these two challenges are the "Gettier Example" and the "Brain-in-a-Vat Argument".
As described more completely above, Gettier examples are scenarios where P is true, S believes that P is true, and S has what initially appears to be properly justifying reasons for believing that P. The counter-example scenarios are so constructed that it is purely accidental that P is true despite the justification that S might have for believing that P. Gettier examples are important for highlighting our intuition that we cannot properly claim to know that P if our reasons for believing that P are not "properly related" to the truth of P. Yet the experts have experienced great difficulties in coming up with a sound understanding of just what that "proper relation" is. From the perspective of S, in these Gettier examples, it is impossible to discern whether or not one's reasons for believing that P are "properly related" to the fact that P (the truth of P is something that transcends any reasons that S might have for believing that P). As a result, some experts find themselves driven to externalist theories of justification -- interpreting the meaning of "proper justifying reasons" in a way that is not directly accessible to S.
Also as described more completely above, the Brain-in-a-Vat argument claims that because, ex hypothesi, I can have no evidence for believing that I am not a Brain-in-a-Vat, I cannot therefore know that I am not a BIV. And if I cannot know that I am not a BIV, then I cannot know many of the things that I normally take myself to know. Clearly an unreasonable conclusion. Since the conclusion is unacceptable, much effort has gone into explaining where and how the logic of such a reasonable sounding argument has gone astray. Some experts, therefore, have found themselves driven to theories that deny that knowledge is closed under known entailment.
The key advantage of Nozick's Truth-Tracking approach to knowledge is that it satisfactorily resolves the BIV sceptical argument. The truth-tracking model entails that knowledge is not in fact closed under known entailment. Consider the proposition that I am not a BIV. Clause (3a) fails for this proposition: if it were false (if I were a BIV), then ex hypothesi I would still believe that I am not a BIV, since no method (M) available to me could detect the envatment. Hence I do not know that I am not a BIV. Yet suppose I have two hands. By clause (3a), if I did not have two hands, I would see by observation that I did not, and so would not believe that I have two hands. And by clause (3b), under other circumstances in which I have two hands, I would still see by observation that I do. Hence I do in fact know that I have two hands. It is therefore quite acceptable to not know whether or not one is a BIV, and yet still be able to know such common facts as that I have two hands.
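This closure failure can be sketched in a toy possible-worlds model of Nozick's variation condition (3a). The worlds, their distances from actuality, and the belief function are all invented for illustration, and the adherence condition (3b) is omitted for brevity:

```python
# A minimal possible-worlds sketch of Nozick's variation condition,
# illustrating how closure under known entailment fails.
# The world set, distances, and belief function are invented.

# Each world: (distance from actuality, has_hands, is_biv).
# Distance 0 is the actual world; the BIV world is remote.
worlds = [
    (0, True,  False),   # actual: handed, not envatted
    (1, False, False),   # lost hands in an accident
    (9, True,  True),    # envatted brain fed two-handed experiences
]

def believes(prop, world):
    """S's beliefs are fixed by how things appear.  A BIV is fed
    two-handed appearances, so in the BIV world S still believes
    both 'I have hands' and 'I am not a BIV'."""
    _, hands, biv = world
    if prop == "hands":
        return hands or biv   # appearances say "hands" when envatted
    if prop == "not-biv":
        return True           # envatment is undetectable from inside
    raise ValueError(prop)

def true_in(prop, world):
    _, hands, biv = world
    return hands if prop == "hands" else not biv

def nozick_knows(prop):
    actual = worlds[0]
    if not (true_in(prop, actual) and believes(prop, actual)):
        return False
    # Variation (3a): in the closest world where prop is false,
    # S must no longer believe it.  (Tuples sort by distance first.)
    closest_false = min(w for w in worlds if not true_in(prop, w))
    return not believes(prop, closest_false)

print(nozick_knows("hands"))    # -> True: in the nearest handless world S sees it
print(nozick_knows("not-biv"))  # -> False: in the BIV world S still believes it
```

"I have hands" entails "I am not a (handless) BIV", yet the first is tracked and the second is not, which is the closure failure the argument turns on.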
Beyond being a purely externalist analysis that ignores our intuitions about reasons and judgement, Nozick's Truth-Tracking theory of knowledge has a key problem: it can still be challenged by Gettier counter-examples. The theory as it stands does not satisfactorily address the intuition that in order to properly claim to know that P, one's reasons for believing that P must be "properly related" to the fact that P. It still remains possible that the method (M) that S employs to come to believe that P may generate a "falsely justified" belief that P. It is still possible that:
(i) P is true;
(ii) S believes that P is true through some method M that nevertheless is not "properly related" to the fact that P;
(iii) If P were in fact false, S would not believe (via M) that P; and
(iv) If P were true under other circumstances, S would still believe (via M) that P because his method (M) is not "properly related" to the fact that P.
Suppose, for example, that I am a BIV, and the Vat (aka "God") advises me that I am a BIV. Then (i) it is true that I am a BIV; (ii) I believe, because "God" told me, that I am a BIV; (iii) if I were not a BIV, then "God" would not have told me I was a BIV, and I would not believe that I was a BIV; and (iv) if I were a BIV under other circumstances, "God" would still have advised me that I was a BIV. Sounds good. And yet, whether I am or am not a BIV is not "properly related" to whether "God" advises me that I am a BIV. It is quite conceivable that the particular "God" involved in this scenario could advise me that I am a BIV when in fact I am not, or that I am not a BIV when in fact I am. There appears to be a missing link in the truth-tracking theory that could provide the necessary "proper relation" between the fact that P and S's belief that P. The only redress appears to be a demand that whatever (M) is, it is a reliable method that "properly relates" the fact that P to S's belief that P. (For example, one would have to add the assumption to the BIV scenario that "God" did not tell lies.) Yet that un-detailed demand for "reliability" leaves the truth-tracking model of knowledge in exactly the same boat as the tripartite theories of justification -- and specifically, the various reliabilist versions of justification.
Other criticisms of the Truth-Tracking model of knowledge focus on the fact that the model does not deal with the issues of justification and the intuition that S must have "proper justifying reasons" for believing that P. It merely mentions that S must employ some method M for coming to believe that P, without going into any details. There is no place in Nozick's model for the intuition that for S to properly claim to know that P, S must be aware (to some greater or lesser extent) of the reasons why s/he has formed the belief that P. In addition, the model does not address the intuition that knowledge claims represent a discriminative judgement on the part of S. There is no place in the truth-tracking model for the intuition that for S to properly claim to know that P, there must be some form of rational judgement on the part of S that his belief about P is knowledge rather than mere opinion. The model seems to leave the door open for "intuitive guessing", or "hunches", (and perhaps ESP?) as an appropriate way of forming a true belief.
It should be emphasized that the Truth-Tracking model of knowledge (like other externalist approaches to understanding "knowledge") is not necessarily inconsistent with the standard internalist understanding of the JTB model. It is just an outside-in view of knowledge claims, rather than the traditional inside-out view. It looks at knowledge claims from a perspective not involving S's appreciation of the situation. It is a description of the phenomenology of knowledge claims, rather than a philosophical definition of knowledge. As such, it is more appropriate to avoid regarding the Truth-Tracking, Reliabilist, Causal, or Law-Like models of knowledge as competitors of the JTB family of models. Instead we should consider these externalist models as descriptions of knowledge phenomenology, as outsider companions to the standard insider definitions of knowledge provided by the tripartite alternatives.
If the companion approach is taken, then one can mate the truth-tracking description of knowledge claims with (say) the inferential contextualist definition of knowledge. One would then wind up with an understanding of "knowledge" that would both resolve the BIV argument as per the truth-tracking model, and resolve the Gettier examples as per either the contextualist model of justification or the truth-tracking model of knowledge, depending on the details of the example. The truth-tracking model supports the denial that knowledge is closed under known entailment, defusing the BIV reasoning. The contextualist model of justification would ensure that S requires sufficiently supportive justifying reasons for any belief that P. And the truth-tracking model would ensure that prima facie default justification tracks the truth. Thus the combination would honour both of the common intuitions about knowledge that neither model honours separately.
I must also briefly mention the "possible worlds" interpretation of Nozick's truth-tracking subjunctive conditionals. On this interpretation, the variation and adherence conditions would be phrased as:
    (3a) in the closest possible worlds where P is false (unlike actuality), S no longer believes that P; and
    (3b) in all other close possible worlds where P is also true, S does believe that P.
When thinking in terms of "possible worlds", it is assumed that we live in one possible world that we arbitrarily call "actual". It is, however, readily conceivable that things could have been different in any of an infinite number of ways. When we imagine an alternative possibility to the way things "actually" are, then that is an alternative "possible world". The greater the discrepancy from reality, the more "distant" that possible world is from actuality.
The possible worlds interpretation has the advantage of offering some people a more convenient/comfortable/graspable way of understanding the truth/success criteria of the subjunctive conditionals of Nozick's theory. The problematic subjunctive "S would (not) believe" gets translated into a selection of possible worlds where "S does (not) believe". On the other hand, the possible worlds interpretation relies on the concept of one possible world being more or less "close" to another. Since this concept is just as indefinable as the truth/success criteria of the subjunctive "S would (not) believe", nothing concrete is added to the basic theory. It just replaces one intuitive grasping with another. This is not necessarily a bad thing, of course, if one alternative is found easier to deal with than the other. It just must be realized that it adds nothing substantive to the theory.
Deontological theories (as you might expect from their name) define "knowledge" in terms of what S has a right or duty to believe, or to act upon (in virtue of that belief). Instead of a third condition involving justification, they understand knowledge as -
(TBD) S knows that P iff:
    (1) P is true;
    (2) S believes that P; and
    (3) S has a duty / right to believe that P.
There is one variation of this theory that is clearly not in the JTB family of theories. This variation understands knowledge as --
(TD) S knows that P iff:
    (1) P is true;
    (2) S has a duty / right to believe that P.
The "diffident scholar" example [see above] has been proposed to argue that knowledge might not necessarily involve belief. This is a third-person "epistemic grounding" focus on the question. This approach highlights the distinction between the first and third person sense of having reasons for one's beliefs. And it reinforces the analysis presented above that there are two kinds of answers as to what constitutes "good reasons" for one's beliefs -- the first-person sense of having reasons, and the third-person sense of there being reasons. The TD model maintains that if there exist good reasons for believing that P, then one ought to believe that P.
These theories do capture our intuitive sense that we ought to believe things that are true, and ought not believe things that are not true. However, if one inquires under what conditions S has such a right or duty, then the inquiry quickly delves down the same roads as the inquiry into the meaning of justification. So it is unclear to what extent the Deontological Theories are effectively different from the JTB theories.
Some philosophers, following the linguistic analysis traditions of the Logical Positivists, have examined how we use "know" in English sentences, and concluded that knowledge is not a complex concept. Rather, they suggest, it is a "performative" concept. Like such other performance words as promise, request, order, warn, etc., "to know" is to perform the act of granting assurance or authority. To say "S knows that P" is to say no more than "S grants assurance that P".
The Performative Theory does capture a lot of the sense of how we actually use the word "know" in common English discourse. Restricting its analysis to linguistic usage, however, does mean that the Performative Theory has nothing to contribute on the question of what it is that allows us to properly offer such assurances. We don't let S get away with offering his assurance for just anything. Only under some restricting conditions do we allow S to claim to know that P. The Performative Theory has nothing to say on what those conditions might be. So as a theory of what knowledge is (rather than a theory of how we use the word "know"), the Performative Theory of knowledge seems to be significantly lacking.
Much of what we ordinarily call knowledge involves propositions that we believe only on the basis of what others have told us - i.e., on the basis of testimony. What conditions have to be met for us to gain knowledge from the testimony of others?
Much of what we ordinarily call knowledge involves propositions that we believe only on the basis of our perceptions -- of both the external world and our own internal world. What is the relationship between our seeing that the tomato is red, and our knowing that the tomato is red?
Some of what we ordinarily call knowledge involves propositions that we believe only on the basis of our abilities to reason -- knowledge that does not rely either on testimony or on perceptions.
"There is no species of reasoning more common, more useful, and even necessary to human life, than that which is derived from the testimony of men, and the reports of eyewitnesses and spectators."
David Hume (An Enquiry Concerning Human Understanding, Pg. 74)
The conditions that have to be met for us to gain knowledge from the testimony of others are the same conditions that must be met for us to qualify any belief as knowledge. There must be adequate grounds to support our judgement that our belief is in fact true.
Applying the traditional JTB definition of knowledge to the case of testimony, we have:
(JTB-T) A knows that P on the basis of testimony from B iff:
    (1) P is true;
    (2) A believes that P; and
    (3) A is suitably justified in believing that P on the basis of testimony from B.
This then gives us a start for identifying the conditions for us to gain knowledge from testimony. Because this definition of knowledge requires that P be true, if Bob is testifying falsely then Alice cannot gain knowledge. Even if Alice comes to believe that P on the basis of Bob's testimony, Alice would acquire a false belief and not knowledge that P. So the first condition that must be met for Alice to gain knowledge from the testimony of others is that when Bob testifies that P, P must be true.
In other words, to be informed by testimony is to believe that what the testimony asserts is true. And this belief can turn out to be knowledge when the testimony is in fact true. It should be noted in passing that knowledge obtained purely by testimony is "thin". What Alice comes to believe from the testimony of Bob is simply that P. More specifically Alice does not learn the reasons that Bob may have for believing that P is true. Alice need not even understand P. So Alice does not inherit the justification that Bob may have that qualifies Bob's belief as knowledge. The question therefore becomes -- "Under what conditions can Alice be suitably justified in believing that P on the basis of the testimony of Bob?" For this we need a concept of "suitable justification".
As we have seen, most theories of knowledge maintain that gaining knowledge from any source (other than Foundationalism's self-evident awareness, not an issue here) must include the processes of deductive, inductive, and abductive inference. With the exception of Contextualism, all of the internalist models employ the "prior grounding" model of justification. To elevate Alice's belief that P into Alice's knowledge that P, Alice must have sufficient supporting evidence to make it likely that P.
In the case of testimony, however, it would initially seem obvious that Alice would almost never have any evidence beyond Bob's testimony. Consider your situation when you read a newspaper article. You have no evidence supporting the truth of what the reporter has told you, and no way of checking the reporter's history of veracity. Our informants are human, fallible, and with complex interests responsive to other things besides truth. We all recognize that they are sensitive to their own personal conception of self interest. Hence, such reasoning would suggest, Alice could (normally) have no suitable justification for believing that P just on the basis of the testimony of Bob.
But this kind of thinking focuses too closely on a single individual instance of a transmission of P from Bob to Alice. The foreground focus of attention highlights the plight of Alice, who must rely upon the word of Bob without benefit of knowing him. Whereas it is the background, which this sort of intuitive thinking leaves out of focus, that supplies the enormous supportive foundation for Alice's belief in Bob's veracity. We need to view any single act of testimony as but one iteration of an ongoing process that takes place through time. Any one instance of Bob's testimony that P is a single frame in an ever evolving process which has shaped and guided both Alice and Bob's participation in it. This grounding, being implicit in our linguistic and social practices, provides our sufficient justification for accepting testimony. As long as there is no reason to suspect such abnormal features as mistake, delusion, or deception, Bob can conventionally be presumed not only to assert what he believes, but also to communicate the truth. By testimony, Bob gives us not only a piece of his mind, but a glimpse of the world as he knows it.
Many thinkers on this issue call this prior presumption of adequate grounding a matter of "trust" -- suggesting that testimony only succeeds in transmitting knowledge if there is trust (by Alice of Bob's veracity). But I think that this is to ignore the major role of language in causing people to have beliefs, and in generating knowledge. We expect communicators to be cooperative, not as the outcome of a statistical weighing, but as a presupposition of fruitful linguistic and social exchange. This expectation does not consist of "trust" -- it is a necessary part of employing language that hearers take speakers as believing what they assert.
Behind Alice's acceptance of Bob's general veracity is a vast history of largely (although certainly not completely) successful communication. Cooperation between speaker and hearer is almost always the enlightened rational choice. The liar, in order to succeed in his lie, at least requires us to understand him. And that understanding relies on a history of proper (correct, veridical) use of language. Statements get their meaning from the standard practice of intending and taking them to be true. McDowell draws attention to this feature of communication by reminding us of the evolutionary survival value of the capacity to spread important news by making meaningful signals.(18) The tail flash of the white-tailed deer at the sight of a prowling wolf is surely not intended to signal the flasher's intention to alarm, but simply to pass on the beneficial results of one deer's perceptual knowledge to other individuals. Like both perception and inference, testimony sometimes turns out to be unreliable. But if the fallibility of our senses does not annul the reliability of perception, and the fallibility of inductive inference can be met with an acceptable pragmatic response, it would be logically inconsistent to refuse the award of "knowledge" to true beliefs based on testimony simply because a small proportion of understandable utterances could be false.
Certainly, there is no doubting that Alice is at risk if Bob's testimony is false. What matters, however, is Alice's perception of the relative magnitude of the risks involved. From the fact that Alice goes along with Bob's testimony, it does not follow that Alice accepts ("believes") it. First, if Alice does not need to rely upon the information immediately, Alice has little incentive to question it. Second, even if Alice does rely on it immediately, this still does not require Alice's unqualified endorsement. If Alice must act upon Bob's testimony, and if the success of her actions depends upon the truth of that testimony, then Alice is often immediately able to confirm (or disconfirm) both the asserted facts and the reliability of Bob as informant. Consider, for example, how you respond to the advice of others according to the magnitude of the decision you need to make, and your perception of the expertise of the advisor. Getting directions to the mall is one thing; buying a house is quite another. Your standards of what constitutes "suitable justification" for knowing that Bob's testimony is accurate will change dramatically. The "trust" that we extend for testimony is not directed at the character of the informant. It is directed at the reliability of this single instance of testimony, within an informationally rich background context of similar linguistic exchanges and social constraints on truthfulness. I am not disrespecting you, nor doubting your word, when I check on your claim that this house is free from significant defect. Nor do I have specific reason to doubt you. What I have is both an unusually high cost if you are wrong, and the belief that there are limits on all but an expert's good judgment on these matters.
What also needs to be considered is that Bob, too, is at risk if the testimony is false. There are powerful social constraints on Bob to be truthful and reliable. The force of these constraints varies, of course, according to such factors as the community's sensitivity to deception or error, the costs to Alice once an error is detected, and the rapidity and extent of communication about these findings. Although such behavioural constraints are not usually viewed as "suitably justifying" evidence, because we are not normally consciously aware of them, they do retain the crucial mark of evidence - in the presence of these constraints, it is much more likely that P is true than otherwise.
Moreover, our presuppositions about the likely veracity of our informants are not uniform. There is room for Alice to filter Bob's testimony through the rest of Alice's belief-set. The alternatives are not simply acceptance (belief that P) or rejection (belief that not-P) of the testimony. We bestow different degrees of belief on testimony. Testimony is in this sense "impure". Its believability varies depending upon what we already know about similar cases and how risk-significant the information is. It is obvious that we do draw all manner of distinctions among sources of testimony. Consider how differently we treat a news story if the source is a grocery store tabloid versus the New York Times, or if it is gossip about some starlet rather than a local weather report.
For all that there are (normally) unquestionably adequate grounds for Alice's assuming Bob's veracity, there is one consideration that tells against the "prior grounding" model of justification. The epistemic responsibilities implied by a requirement for an inductively supported belief become "too absurdly enormous to be discharged by a single individual with limited time, expertise, and cognitive equipment." (19) We do normally have this enormous background set of beliefs supporting the acceptance of Bob's testimony. Such an enormous background, in fact, that we generally do not initially consider Bob's credibility as an issue. On the contrary, we believe implicitly in the truth of testimony, unless we have positive grounds for doubt or disbelief. This simple fact argues against the Prior Grounding model of justification, and for the Default and Challenge model.
As documented above, the "default and challenge" model of justification holds that any testimony is creditworthy until shown otherwise; whereas the "prior grounding" model demands specific evidence of its reliability in advance. The contextualist theory of knowledge adopts the "default and challenge" model of justification and grants that Alice is prima facie justified in believing that P simply in virtue of understanding that Bob testifies that P. From the contextualist understanding, therefore, there is no necessity that Alice be consciously aware of, or even implicitly consider, the "absurdly enormous" amount of background grounding we have been discussing. (Which is not to suggest, I should emphasize, that the necessary grounding does not need to exist.)
Finally, let's consider for a moment Nozick's truth-tracking theory of knowledge. Nozick's Subjunctive Conditional description of knowledge can function as a check on our discussion so far because it is an entirely externalist view of knowledge, and is therefore completely insulated against any of the problems inherent in an internalist conception of "suitable justification". Adapting Nozick's theory to our discussion of knowledge from testimony would mean that --
(A knows that P on the basis of testimony from B) iff
(1) P is true; &
(2) A believes that P; &
(3a) if P were false, A would not believe it; &
(3b) if P were true under other circumstances, A would still believe it.
If we assume (as we established in the beginning) that Bob intends to be truthful with his testimony, then it is clear that Alice would know P based on Bob's testimony because (3a) if P were false, Bob would not testify that P, and Alice would then not come to believe that P; and (3b) if P were true under other circumstances, Bob would still testify that P and Alice would still come to believe that P. This means that our internalist discussion of the suitable justification for Alice gaining knowledge that P based on the testimony of Bob is consistent with Nozick's externalist understanding of knowledge.
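The four tracking conditions can also be set out compactly. Using the box-arrow symbol that is conventional for the subjunctive ("were/would") conditional (a notational choice of mine, not a typography drawn from Nozick's text), the analysis reads:

```latex
% K_A p : Alice knows that p (here, via Bob's testimony)
% B_A p : Alice believes that p
% \square\!\!\rightarrow : the subjunctive conditional
K_A\,p \;\leftrightarrow\;
  p
  \;\wedge\; B_A\,p
  \;\wedge\; \bigl(\lnot p \mathrel{\square\!\!\rightarrow} \lnot B_A\,p\bigr)
  \;\wedge\; \bigl(p \mathrel{\square\!\!\rightarrow} B_A\,p\bigr)
```

The two conditional conjuncts are conditions (3a) and (3b) respectively: Alice's belief must track the truth across the nearby possible circumstances in which P is false, and in which P is true but things are otherwise different.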
As with all knowledge judgements, whether our beliefs based on testimony count as "knowledge" is ultimately a subjective evaluation. Whether Alice judges that P qualifies as knowledge will depend on the confidence that Alice has about the ceteris paribus conditions surrounding Bob's assertion. Whether we judge that P qualifies as knowledge for Alice will depend on the confidence that we have about the ceteris paribus conditions surrounding Bob's assertion. Three conditions constrain that degree of confidence -
(1) the (explanatory) coherence of P with the rest of Alice's (or our) belief set (at least the contextually relevant portion thereof).
(2) the past performance of the particular source of information -- either Bob specifically, or the institution that guarantees Bob (if there is one). And in the absence of any specific prior information on that matter, we adopt the prima facie presumption of general reliability inherent in all linguistic exchanges.
(3) Alice's (or our) perception of the relative risks involved. The more Alice has to lose, or the more that Bob has to gain from P being false, the more stringent need be the epistemic standards applied in validating the grounds for believing P.
The "suitability" condition on "justification" remains simply one of coherence. The "best explanation" for why Bob testifies that P is, ceteris paribus, that Bob believes it for duly responsible reasons (Bob knows that P). Once testimony is accepted, there are many sources which come into play to supply additional grounds relevant to the warrant for belief. What Hume affirmed (and the inferential contextualist theory of knowledge maintains) is that we normally enter the setting of testimony with a large range of well-founded beliefs (derived from a similar source, experience), which provides a basis to test or assess any new testimony. We approach the information imparted by testimony with a vast background of knowledge about the reliability of communication in general, supplemented (usually) by beliefs that suppliers of testimony have nothing to gain and something to lose through error or deception. We do not have to establish the separate trustworthiness of our testimonial sources; it is enough if no evidence to the contrary is already available.
When I see a red tomato, I commonly claim to know that the tomato is red. But if Foundationalism has proven to be an inadequate conception of knowledge, what is it about my perception of the redness of the tomato that becomes sufficient justification for my belief that the tomato is red? How does "perception" become a suitable source for "knowledge"? There are two main issues that need to be addressed here --
(a) what does it mean to "perceive" the redness of the tomato? and
(b) what does it mean to "know" that the tomato is red?
We can resolve the second of these issues quickly by adopting as our conception of knowledge the inferential contextualist model described above. Therefore, in order for me to "know" that the tomato is red it must be the case that
(1) it is true that the tomato is red; and
(2) I believe that the tomato is red; and
(3) I am suitably justified in believing that the tomato is red.
From this understanding of what it means to "know" that the tomato is red, we can see that the relation between my perception of the redness of the tomato and my knowledge that the tomato is red can be expanded to
(a) the relation between perceiving and believing that the tomato is red;
(b) what it is about that relation that constitutes suitable justification for the belief; and
(c) what function the truth has in this relation.
From the internal experiential viewpoint, when I perceive the "tomato" I experience a recognizable patch of visual information (a bounded solid angle of view with certain spatio-temporal properties) that shares some of the visual characteristics of previously experienced things that I have learned to call "tomato". And when I perceive that patch's "redness" I recognize that the data-pattern in my visual field contains a property in the family of colours I have learned to call "red". Once we start dealing with the conscious experience of perceiving the redness of the tomato we necessarily require an informationally rich context of associated beliefs.
Can I perceive the redness of the tomato without employing any of these labels or classifications? Yes, I can -- sort of. It is perfectly possible for me to experience a visual field (either all or part) to which I do not or cannot apply classifications and labels. There are two ways that this can happen. I can be distracted from consciously attending to my visual field (or part thereof). Or my visual field can contain "data patterns" that defy classification on my part.
If I am distracted from attending to part of my visual field, I might be receiving the photons from a lot of other things in my visual field besides the tomato, and I might be doing a lot of processing in my visual cortex. But the information is not making it to my conscious awareness. Classifying and labelling the visual data-patterns that pass before me is not an automatic and immediate process. It demands a conscious focus of attention. As I gaze out the window, deep in thought, my eyes pass over a scene that contains numerous familiar visual data-patterns. When I attend to what I am looking at, I can identify (classify) what it is I am seeing. But when I am pondering the issues of perception and belief, I am distracted, and do not consciously attend to the visual data before me.
In the normal course of events, when I am consciously attending to some part of my visual field (like that tomato), I am actively classifying and labelling whatever it is I focus my attention on. Unless, of course, my visual field contains "data patterns" that I cannot match with any of my classifications. In such cases I will usually make a classification error -- classifying what I see as, for example, a "pepper" instead of a "tomato" (assuming I don't know what a tomato looks like), or the tomato's colour as "rose" instead of "red" (assuming I don't know what red looks like). I am sure that we have all made such mistakes. Who is that person in the distance over there? Gee, I'm not sure, but it looks like ??? Why are you not sure? Because there is insufficient information in the visual data patterns you have available to properly classify that image as someone you know, or someone you don't know.
So, for me to perceive the redness of the tomato, I must experience and consciously attend to a visual field containing a "data pattern" that I can classify as a "tomato" and as "red". Once I have classified the data-pattern, I can form the belief "the tomato is red" to capture my visual experience. This last step is also an active effort of consciousness. I can gaze out my window and land my focus on various things (data-patterns I can classify) without forming any propositional beliefs about what I am seeing. I only form a propositional belief when I need the information for some purpose -- even if only to say to myself what it is I am looking at. The belief that "the tomato is red" is only one of very many possible beliefs I might form, given the perception of a red tomato. The visual field presents data patterns that are far richer than can be expressed in a simple proposition. I only form a belief about the tomato being red if I have some need for that information. I might just as easily form the belief that the tomato has a blotch on it, or the tomato is sitting on a book, or there is something to eat, and so forth.
To discover what it is about the relation between perception and belief that constitutes "suitable justification" for the belief, we need to delve a little more deeply into how "suitable justification" qualifies a "mere" belief for the honorific of "knowledge".
As we have been discovering, different theories of knowledge define "suitable justification" in different ways. Therefore, what will constitute "suitable justification" will vary according to the theory of knowledge applied. The coherence theory, as an example of an internalist understanding of justification, would maintain that my belief that the tomato is red qualifies as knowledge if and only if such a belief coheres with (and maximizes the coherence of) the rest of my beliefs.
The Inferential Contextualism theory of knowledge combines the best of coherentism with some reliabilism and an acknowledgement that beliefs based on perception require a special status (albeit, a status granted from within a specific context). Contextualism also has an additional benefit to recommend it. All of the other JTB theories understand "suitable justification" as meaning that the subject must have adequate grounds to support the claim to knowledge prior to the claim. Contextualism adopts instead the "default and challenge" model of justification, maintaining that if it seems to me that the tomato is red then that is prima facie suitable justification for my knowing that the tomato is red. The "prima facie" conditional is required, of course, in order to rule out the possibility that I might be aware of circumstances that would render dubious the veracity of how things seem to me.
And this brings us to the role that the truth of the matter plays in perceiving the redness of the tomato. There are three different "theories of perception" that grapple with this issue.
The direct realist theory of perception is "realist" because it maintains that the objects (like tomatoes) we perceive normally exist and maintain most of the properties we perceive them as having (like redness) even when they are unperceived. The theory is "direct" because it maintains that we are directly and immediately aware of the existence and nature of physical objects in our environment and their properties. (This is not to suggest, as direct realism is often accused of suggesting, that its redness is necessarily an intrinsic property of the tomato. Direct realism maintains rather that the tomato has an intrinsic property that a normal perceiver will see as "red" under normal perceiving conditions.) Direct realism maintains that there is an evidence-transcendent truth that counts. And hence that perceiving the redness of the tomato is becoming consciously aware of the evidence that the tomato is red.
The indirect realist theory of perception agrees with the direct realist that we are aware of the existence and nature of physical objects in our environment, and that the objects we perceive normally exist even when they are unperceived. Where the indirect realist differs is in maintaining that we do not gain this awareness directly, but only in virtue of a direct perception of some intermediary -- called variously a "sensation", "appearance", "sensum", "percept", or "representation", or (in its more popular form) "sense data". If indirect theories of perception are right, then there is something between the tomato and the conscious "I" that becomes aware of the redness of the tomato. That thing is the "sensation" or "sense-data" of the tomato. The sense-data are invested with all of the properties possessed by the appearance of the tomato (such as its redness), and it is those properties that we actually perceive. Somehow, somewhere between the real tomato and our conscious awareness some intermediate medium or our sensory input processors "translate" the actual straightness of the stick in the water glass into a bent appearance, or the actual round shape of the table into an oval appearance, or thin air into a pink elephant appearance, or the unknowable "noumenal" nature of the tomato into the perceived "phenomenal" properties of the tomato.
However, indirect theories of perception are fatally flawed. First of all, the sense-data theories assume that if the tomato appears to have redness, there must exist something that intrinsically has redness. But there is no need to reify appearances. If a church looks like a barn, we do not perceive a barn appearance. We perceive a church that looks like a barn. Secondly, if the tomato's sense-data do have the properties they appear to have (which, according to the theory they must), where and what are these sense-data? They are certainly not physical objects located in physical space in the place that they appear to be located. Of course, they may be simply encoded representations of the tomato's appearance -- brain-states of encoded neural network impulse strings, perhaps. But if that is the case, in what sense can the sense-data be said to "have" the redness they are supposed to have? And in what sense would a conscious appreciation of that encoded representation differ from a direct perception of the tomato? Thirdly, indirect realist theories presume a hidden Cartesian Dualism. Such theories demand somewhere in the consciousness/mind/brain a place where sense-data ("appearances") can be "presented" for our consciousness to "appreciate" and evaluate. They assume, therefore, some manner of direct perception by the consciousness of the presentation. But in the absence of mind-brain dualism, if the mind's "appreciation" of the sense-data is necessarily direct, then why can't the mind's appreciation of the tomato itself be direct?
Which brings me to the third family of theories of perception. Phenomenalism agrees with the indirect realist that the mind is not directly aware of the objects of perception, but only aware of the experiences of perception. On the other hand, phenomenalism agrees with the direct realist in maintaining that the mind's appreciation of those experiences is direct and not mediated by "sense data". Where the phenomenalist differs from the two is in maintaining that there is nothing beyond, behind, or underlying, those perceptual experiences. All there is, is the experience of redness, the experience of the tomato. All the tomato is, is a series of experiences. However, in order to avoid collapsing into Solipsism, Phenomenalism demands the additional "free-floating" (unsupported) premise that there are in objective fact other people out there that are the basis (the behind, the underlying) of our experiences of other people. So a non-Solipsistic Phenomenalism is fundamentally self-contradictory.
The relation between perceiving the redness of the tomato and the belief that the tomato is red consists of three parts --
(i) a consciously attended to experience of a visual field containing a data-pattern that can be classified as a tomato and as red; and
(ii) the conscious recognition of similarities between this and past experiences and the classification of the data-pattern as "a tomato" and "red"; and
(iii) the forming of a belief expressing that classification as directed by some informational need.
By adopting an Inferential Contextualism concept of knowledge we can maintain that our belief that the tomato is red is prima facie sufficient justification for our knowing that the tomato is red. Our perception based belief that the tomato is red must cohere with (and maximize the coherence of) the contextually relevant sub-set of our entire belief-set. And that contextually relevant sub-set, it should be restated, will include any beliefs about the normality of the seeing conditions, the reliability of our perceptual belief-forming processes, the boundaries of our conceptual classification schemes, the meaning of any labels we employ, and the informational need that drives the belief formation process. Contra foundationalism, those contextually relevant other beliefs are necessary in order to form a perceptual belief. Contra coherentism, "inference to the best explanation" is not sufficient to ground perceptual beliefs. A connection to the truth of the matter is necessary if we are to maintain any hold on reality.
Finally, by adopting a Direct Realist approach to perception we can maintain that the truth does matter. There necessarily must exist within our visual field a tomato that is in fact red in order that we may perceive the redness of the tomato. Perceiving the redness of the tomato is direct and immediate awareness, and hence suitable grounds for believing that the tomato is red, and prima facie justification for knowing that the tomato is red.
[Section Under Development]
(1) This essay is a summary of the standard "Justified-True-Belief" model of Knowledge. There are many reference works that assume this model as a base. Please see the Bibliography below.
(2) In the truth-realist sense that is normal for the JTB model of knowledge, the truth-condition of the JTB model is intended to capture the asymptotically approachable limit that is the evidence-transcendent truth. While my speaking of an "asymptotically approachable limit" may be obscure for those less comfortable with mathematics, it does nicely capture the truth-realist sense of truth. The evidence available to any particular population of relevance can only approach the evidence-transcendent truth, never quite reaching it. For the realist, despite all the evidence in support of some judgement, we may nevertheless be wrong -- even, contra the anti-realists, the ultimate collective judgements of a suitably relevant population. In a truth-anti-realist sense, on the other hand, the evidence-transcendent limit does not exist absolutely. It only exists as the (ultimate?) determination of the collective judgement of a suitably relevant population. Nevertheless, for any one particular claim to knowledge, that population-dependent limit is as evidence-transcendent as is the absolute limit to the truth-realist.
(3) Goldman, Alvin. 1976. "Discrimination and Perceptual Knowledge." The Journal of Philosophy 73, pp. 771-791.
(4) Barnes, J., The Toils of Scepticism, The Journal of Hellenic Studies, Vol. 113. (1993), pp. 199-200.
(5) Hume, David (Eric Steinberg, Ed.) An Enquiry Concerning Human Understanding (Second Edition) ; Hackett Publishing Company, Indianapolis, 1993. Section IV, Part I.
(6) Dretske, Fred, Epistemic Operators, The Journal of Philosophy, Vol. 67, No. 24. (Dec. 24, 1970), pp. 1007-1023.
(7) Putnam, Hilary, "Reason, Truth, and History", Cambridge University Press, 1982. Chapter 1, pp 1-21.
(8) Brueckner, Tony, "Brains in a Vat", The Stanford Encyclopedia of Philosophy (Winter 2004 Edition), Edward N. Zalta (ed.), http://plato.stanford.edu/archives/win2004/entries/brain-vat/
(9) Blanshard, Brand. The Nature of Thought, London: Allen and Unwin, 1939, vol. 2, p. 227.
(10) Dretske, Fred. 1981. Knowledge and the Flow of Information. Cambridge: MIT Press.
(11) DeRose, Keith. "Contextualism: An Explanation and Defense", Epistemology, J.Greco & E.Sosa (eds.), Basil Blackwell, Oxford, 1999.
(12) Lewis, David. "Elusive Knowledge", Australasian Journal of Philosophy, vol. 74, pp. 549-567. 1996.
(13) Williams, Michael, Unnatural Doubts: Epistemological Realism and the Basis of Scepticism, Basil Blackwell, Oxford. 1991.
-- -- -- "Skepticism", Epistemology, J. Greco & E. Sosa (eds.), Basil Blackwell, Oxford. 1999.
-- -- -- , "Is Contextualism Statable?", Philosophical Issues 10, 80:5. 2000.
-- -- -- , "Contextualism, Externalism and Epistemic Standards", Philosophical Studies 103, 1:23. 2001.
-- -- -- -, Problems of Knowledge: A Critical Introduction to Epistemology, Oxford University Press, Oxford. 2001.
(14) Cohen, S. "Contextualism and Skepticism", Philosophical Issues, vol 10, 2000, pg 94.
(15) Williams, M. Unnatural Doubts: Epistemological Realism and the Basis of Scepticism, Basil Blackwell, Oxford. 1991, Pg 119.
(16) Nozick, Robert. Philosophical Explanations. Oxford University Press, 1981.
(17) ibid. Pg 176
(18) McDowell, John. Meaning, Communication, and Knowledge, Philosophical Subjects: Essays Presented to P.F.Strawson, Zak Van Straaten (ed.), Oxford University Press, New York, New York. 1980.
(19) Chakrabarti, Arindam. On Knowing by Being Told, Philosophy East and West, Vol. 42, No. 3. (Jul., 1992), pg 430.
(a) Gettier, Edmund L.; Is Justified True Belief Knowledge?; Analysis 23 (1963): 121-123.
(b) Fumerton, Richard, "Foundationalist Theories of Epistemic Justification", The Stanford Encyclopedia of Philosophy (Spring 2006 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/spr2006/entries/justep-foundational/>.
Adler, Jonathan, "Epistemological Problems of Testimony", The Stanford Encyclopedia of Philosophy (Summer 2007 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/sum2007/entries/testimony-episprob/>.
Alston, William. 1989. Epistemic Justification. Essays in the Theory of Knowledge. Ithaca: Cornell University Press.
-- -- -- -. 1991. Perceiving God. The Epistemology of Religious Experience. Ithaca: Cornell University Press.
-- -- -- -. 1993. The Reliability of Sense Perception. Ithaca: Cornell University Press.
-- -- -- -. 1996. A Realist Conception of Truth. Ithaca: Cornell University Press.
-- -- -- - Internalism and Externalism in Epistemology; Philosophical Topics, 14, No 1 (1986); Pgs 185-226.
Ayer, A.J.; The Problem of Knowledge; Penguin Books, Ltd., 1956.
BonJour, Laurence, "Epistemological Problems of Perception", The Stanford Encyclopedia of Philosophy (Summer 2007 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/sum2007/entries/perception-episprob/>.
Boyd, Richard, "Scientific Realism", The Stanford Encyclopedia of Philosophy (Summer 2002 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/sum2002/entries/scientific-realism/>.
Brueckner, Tony, "Brains in a Vat", The Stanford Encyclopedia of Philosophy (Winter 2004 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/win2004/entries/brain-vat/>.
Chisholm, Roderick. 1989. Theory of Knowledge, 3rd. ed., Englewood Cliffs: Prentice Hall.
Cohen, Stewart. 1984. "Justification and Truth," Philosophical Studies 46, pp. 279-95.
DeRose, Keith. 1999. "Contextualism: An Explanation and Defense." In: Greco and Sosa (eds.) 1999, pp. 187.
Dretske, Fred. 1981. Knowledge and the Flow of Information. Cambridge: MIT Press.
-- -- -- -. 2005. "The Case Against Closure." In: Steup and Sosa 2005, pp. 1-26
Elgin, Catherine; Considered Judgment; Princeton University Press.
Feldman, Richard, "Naturalized Epistemology", The Stanford Encyclopedia of Philosophy (Fall 2006 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/fall2006/entries/epistemology-naturalized/>.
Fumerton, Richard, "Knowledge by Acquaintance vs. Description", The Stanford Encyclopedia of Philosophy (Summer 2008 Edition), Edward N. Zalta (ed.), forthcoming URL = <http://plato.stanford.edu/archives/sum2008/entries/knowledge-acquaindescrip/>.
Goldman, Alvin. 1976. "Discrimination and Perceptual Knowledge." The Journal of Philosophy 73
-- -- -- -, "Reliabilism", The Stanford Encyclopedia of Philosophy (Summer 2008 Edition), Edward N. Zalta (ed.), forthcoming URL = <http://plato.stanford.edu/archives/sum2008/entries/reliabilism/>.
Hendricks, Vincent and John Symons, "Epistemic Logic", The Stanford Encyclopedia of Philosophy (Spring 2006 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/spr2006/entries/logic-epistemic/>.
Kelly, Thomas, "Evidence", The Stanford Encyclopedia of Philosophy (Fall 2006 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/fall2006/entries/evidence/>.
Klein, Peter, "Skepticism", The Stanford Encyclopedia of Philosophy (Fall 2005 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/fall2005/entries/skepticism/>.
Korcz, Keith Allen, "The Epistemic Basing Relation", The Stanford Encyclopedia of Philosophy (Winter 2006 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/win2006/entries/basing-epistemic/>.
Kvanvig, Jonathan, "Coherentist Theories of Epistemic Justification", The Stanford Encyclopedia of Philosophy (Fall 2007 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/fall2007/entries/justep-coherence/>.
Luper, Steven, "The Epistemic Closure Principle", The Stanford Encyclopedia of Philosophy (Spring 2006 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/spr2006/entries/closure-epistemic/>.
Pappas, George, "Internalist vs. Externalist Conceptions of Epistemic Justification", The Stanford Encyclopedia of Philosophy (Spring 2005 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/spr2005/entries/justep-intext/>.
Pritchard, Duncan, "The Value of Knowledge", The Stanford Encyclopedia of Philosophy (Fall 2007 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/fall2007/entries/knowledge-value/>.
Neta, Ram, "S knows that P".
Russell, Bruce, "A Priori Justification and Knowledge", The Stanford Encyclopedia of Philosophy (Winter 2007 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/win2007/entries/apriori/>.
Rysiew, Patrick, "Epistemic Contextualism", The Stanford Encyclopedia of Philosophy (Fall 2007 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/fall2007/entries/contextualism-epistemology/>.
Sorensen, Roy, "Epistemic Paradoxes", The Stanford Encyclopedia of Philosophy (Fall 2006 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/fall2006/entries/epistemic-paradoxes/>.
Sosa, Ernest. 1991. Knowledge in Perspective. Selected Essays in Epistemology. Cambridge: Cambridge University Press.
Steup, Matthias. 1996. An Introduction to Contemporary Epistemology. Upper Saddle River: Prentice Hall.
Steup, Matthias, "The Analysis of Knowledge", The Stanford Encyclopedia of Philosophy (Spring 2006 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/spr2006/entries/knowledge-analysis/>.
Steup, Matthias and Sosa, Ernest (eds). 2005. Contemporary Debates in Epistemology. Malden (MA): Blackwell.
Williams, Michael, Unnatural Doubts: Epistemological Realism and the Basis of Scepticism, Basil Blackwell, Oxford. 1991.
-- -- -- "Skepticism" in Epistemology, J. Greco & E. Sosa (eds.), Basil Blackwell, Oxford. 1999.
-- -- -- "Is Contextualism Statable?" in Philosophical Issues 10, 80:5. 2000.
-- -- -- "Contextualism, Externalism and Epistemic Standards" in Philosophical Studies 103, 1:23. 2001.
-- -- -- -, Problems of Knowledge: A Critical Introduction to Epistemology, Oxford University Press, Oxford. 2001.