5: BonJourian Coherentism

Coherentism pure and unalloyed shows little promise; but it is important not to leave matters at such an abstract level. Any real, flesh-and-blood coherentist will have her own particular insights; she will make modifications, adaptations, qualifications, and additions; problems, difficulties, and inadequacies of coherentism überhaupt might not afflict certain developments of it. Furthermore, a view that was fundamentally coherentist but imported noncoherentist elements—an impure (contaminated?) coherentism, we might say—could be more plausible and enlightening than coherentism pure and unalloyed. Laurence BonJour's The Structure of Empirical Knowledge1 presents just such a chastened coherentism; in this chapter I shall explain and examine it.2

I. BonJourian Coherentism Explained

BonJour's book is entitled The Structure of Empirical Knowledge (my emphasis); in it he aims to work out a satisfying coherentist account of (roughly speaking) perceptual knowledge and the a posteriori elements of common sense and scientific knowledge. (As we shall see, he endorses a traditional foundationalist account of a priori knowledge.) BonJour sees the traditional justified true belief account of knowledge as “at least approximately correct” (p. 3); he therefore concentrates his attention on the question, What is justification? But if the justified true belief account of knowledge is at least approximately correct, then justification (as BonJour explains it) is at least approximately what distinguishes true belief from knowledge; if so, justification is at least very close to warrant or positive epistemic status. I shall therefore take BonJour as offering an account of warrant.3

A. Justification and Warrant

Suppose we begin by asking what justification is as BonJour sees it. What is its basic nature and what sort of animal is it? “The goal of our distinctively cognitive endeavors,” he says, “is truth: we want our beliefs to correctly and accurately depict the world” (p. 7). (Apparently he thinks it is we ourselves who set this goal for ourselves as cognitive agents.) He continues:

We cannot, in most cases at least, bring it about directly that our beliefs are true, but we can presumably bring it about directly (though perhaps only in the long run) that they are epistemically justified. And, if our standards of epistemic justification are appropriately chosen, bringing it about that our beliefs are epistemically justified will also tend to bring it about, in the perhaps even longer run and with the usual slippage and uncertainty which our finitude mandates, that they are true. If epistemic justification were not conducive to truth in this way, if finding epistemically justified beliefs did not substantially increase the likelihood of finding true ones, then epistemic justification would be irrelevant to our main cognitive goal and of dubious worth.… Epistemic justification is therefore in the final analysis only an instrumental value, not an intrinsic one. (p. 8)

So we have the goal of seeking truth, and holding justified beliefs is a means (perhaps the only means available to us) for achieving that goal. BonJour next sounds a Lockean, Chisholmian note: this goal is apparently one we are obliged to have and strive to fulfill. At any rate it is irresponsible to neglect it:

It follows that one's cognitive endeavors are epistemically justified only if and to the extent that they are aimed at this goal, which means very roughly that one accepts all and only those beliefs which one has good reason to think are true. To accept a belief in the absence of such a reason… is to neglect the pursuit of truth; such acceptance is, one might say, epistemically irresponsible. My contention here is that the idea of avoiding such irresponsibility, of being epistemically responsible in one's believings, is the core of the notion of epistemic justification. (p. 8)

A person is justified in her beliefs, therefore, if and only if she is epistemically responsible in her believings—that is, if and only if she is epistemically responsible in regulating and governing her belief acceptance and maintenance. But she governs her beliefs responsibly only if she accepts all and only those beliefs she has a good reason for thinking true.

Now the task of the epistemologist, says BonJour, is twofold: “The first part is to give an account of the standards of epistemic justification; and the second is to provide what I will call a metajustification for the proposed account by showing the proposed standards to be adequately truth conducive.” (Here [pp. 11–12] he chides Chisholm for failing to produce such a metajustification; Chisholm gives an account of the standards of epistemic justification, but fails to show or even argue that the proposed standards are truth conducive.) But it isn't only the epistemologist who must come up with a metajustification:

If a given putative knower is himself to be epistemically responsible in accepting beliefs in virtue of their meeting the standards of a given epistemological account, then it seems to follow that an appropriate metajustification for those principles must, in principle at least, be available to him. For how can the fact that a belief meets those standards give that believer a reason for thinking that it is likely to be true (and thus an epistemically appropriate reason for accepting it) unless he himself knows that beliefs satisfying those standards are likely to be true? (p. 10)

For me to be justified in accepting a belief A, therefore, it is not sufficient that there be a good reason for believing A, or that someone have a good reason for it; I must myself (in principle, at any rate) have such a reason, and I must believe A on the basis of that good reason. This conception of justification or warrant gives BonJour a perspective from which to criticize what he takes to be the principal alternatives to the coherentist view he favors: traditional foundationalist accounts of warrant (chapter 2), and more recent externalist and reliabilist accounts of it (chapter 3). It is irresponsible and therefore unjustified to accept a belief unless one accepts that belief on the basis of a good reason for thinking it true; hence the foundationalist errs in holding that some beliefs are properly basic and acquire warrant just by virtue (say) of being formed in the right experiential circumstances. Similarly, the reliabilist errs in thinking that a belief can acquire warrant just by virtue of being formed by a reliable belief-producing process or mechanism; if you hold a belief but have no reason for thinking it true, then that belief is not justified and you are irresponsible in accepting it, even if as a matter of fact it is produced by a reliable process.

Of course, BonJour thinks warrant and coherence are intimately connected, but he does not identify the former with the latter: “What is at issue here is the connection between coherence and epistemic justification: why, if a system of empirical beliefs is coherent (and more coherent than any rival system), is it thereby justified in the epistemic sense, that is, why is it thereby likely to be true?” (p. 93). But if the issue is the connection between coherence and epistemic justification, then it will not be the case that justification (or warrant) just is coherence. No: to be justified is to be appropriately responsible—to be responsible with respect to the governance of belief formation and maintenance. It is obvious from our discussion of classical and post-classical Chisholmian internalism, I think, that warrant cannot possibly be related in just that way to epistemic responsibility (it can't be that, necessarily, a belief has warrant for me if and only if I am epistemically responsible in forming or maintaining it): clearly a person could be as responsible as you please and still (by virtue, perhaps, of cognitive dysfunction) hold beliefs that have little or no warrant for him. But BonJour adds a specific view as to what epistemic responsibility consists in: if a person governs his belief formation and maintenance responsibly, then he accepts a belief only if he has (or thinks he has) good reason to think that belief true. This, in turn, by an alchemy that isn't entirely easy to make out (p. 92), gets transmuted into the claim that what justifies a particular empirical belief is that it is a member of a coherent system of beliefs. Perhaps BonJour's thought here is that the only good reason I could have for thinking a particular empirical belief true is that it is a member of a coherent system of beliefs. 
In any event, “The justification of a particular empirical belief finally depends, not on other particular beliefs as the linear conception of justification would have it, but instead on the overall system and its coherence” (p. 92). As I understand the view, then, (a) a belief is justified for me if and only if I am responsible in forming or maintaining it; (b) I am responsible in forming or maintaining a belief if and only if I have a good reason for thinking that belief true; and (c) the only good reason for thinking an empirical belief true is that it is an element of a coherent system of beliefs.

B. Coherence

What, precisely, is coherence, and how does the fact that a belief is an element of a coherent system provide a good reason for thinking it true? “The main points are: first, coherence is not to be equated with mere consistency; second, coherence, as already suggested, has to do with the mutual inferability of the beliefs in the system; third, relations of explanation are one central ingredient in coherence,… and, fourth, coherence may be enhanced through conceptual change” (p. 95). BonJour goes on to state five principles governing coherence (pp. 95, 98):

  1. A system of beliefs is coherent only if it is logically consistent,

  2. A system of beliefs is coherent in proportion to its degree of probabilistic consistency,

  3. The coherence of a system of beliefs is increased by the presence of inferential connections between its component beliefs and increased in proportion to the number and strength of such connections,

  4. The coherence of a system of beliefs is diminished to the extent to which it is divided into subsystems of beliefs which are relatively unconnected to each other by inferential connections,

  5. The coherence of a system of beliefs is decreased in proportion to the presence of unexplained anomalies in the believed content of the system.

These principles have a rough and ready initial clarity except perhaps for the second one: what is probabilistic consistency and inconsistency? The paradigm case: “Suppose that my system of beliefs contains both the belief that P and also the belief that it is extremely improbable that P.… it is… clear from an intuitive standpoint that a system which contains two such beliefs is significantly less coherent than it would be without them and thus that probabilistic consistency is a… factor determining coherence” (p. 95).

Many questions could be raised about these principles. The first principle, for example, seems initially unproblematic; in fact, however, it harbors several deep issues—issues we cannot properly pursue here. I shall mention just one. We may not be able to bring it about directly that our beliefs are true, says BonJour, but at least we can bring it about directly that our beliefs are coherent (even if only “in the long run”). So (by this first principle) we can bring it about directly that our beliefs are consistent. But then what sort of consistency does the principle speak of? The best candidate, one thinks, would be consistency in the broadly logical sense—truth in some possible world. For, first, it would be wholly arbitrary to claim that consistency in first-order logic, say, is a necessary condition of coherence, while allowing that a coherent noetic structure might harbor mathematical impossibilities and necessary falsehoods of other kinds. Second, insofar as coherence is supposed to be “truth conducive,” falsehood in every possible world is just as bad as inconsistency in first-order logic. But it is far from obvious that consistency in that broadly logical sense is any easier for us to attain than truth. Philosophical disputes, for example, are typically about noncontingent propositions—propositions that are necessarily true or necessarily false. In these cases, therefore, broadly logical consistency is exactly as easy to attain as truth; and in these cases truth, as the continuing disputes attest, is an elusive quarry indeed.

And in fact things are not a whole lot better if we stick to logical consistency taken more narrowly—taken as, say, consistency in first-order logic—as many of us (Frege, for example) have discovered to our sorrow. Is there any guarantee that if we try really hard, we will always be able to avoid such inconsistency, or always be able to discover and extirpate it, if we do happen to fall into it? Obviously not; I might try my level best over my entire lifetime, but by virtue of noetic malfunction have a system of beliefs in which first-order inconsistency runs absolutely riot. And even if we disregard noetic malfunction there is of course still no such guarantee. What is really involved in coherence, insofar as it is (broadly speaking) up to us whether we achieve it, is, presumably, not possibility in the broadly logical sense, nor consistency in first-order logic, but something weaker, something like absence of obvious impossibility, or perhaps impossibility that would be obvious after a certain degree of reflection.

The second principle is equally problematic and in fact thoroughly obscure. What sort of probability is at issue here? Not personal or subjective probability, presumably; so some variety of objective probability. But which variety? Statistical probability—the sort exemplified by the claim that the probability that an American male will be 6 feet tall by the age of 14 is .08—is what leaps immediately to mind. But there, on the face of it, anyway (and given what appear to be the plausible reference classes), what is improbable is surely the rule rather than the exception. That precisely that mosquito should bite you precisely when and where it does, that on your cross-country trip on December 23 at 4:13 P.M. you should be precisely where you are at that time, that there should be precisely the number of blades of grass in your backyard that in fact adorn it (to say nothing about their length, width, thickness, length of life, weight, color, and the like), that there should be just the number of cows that there are, that a given frog's egg should beat the odds and achieve froghood—either these things are all improbable in the relevant sense or else I have no idea what that relevant sense might be. But if improbability is in fact so rampant, why should I (or my noetic structure) suffer demerits for recognizing and recording it?

C. Other Requirements for Warrant

The remaining principles also have their problems. Still, they give us some guidance, and in any event perhaps we do not need a really precise and well worked out notion of coherence in order to carry the discussion further. But coherence, according to BonJour, is not the only thing that counts for justification; there are other requirements. First, the idea, of course, is that the belief in question must be a member of some person's coherent system of beliefs; the mere fact that one of my beliefs is a member of some coherent system of beliefs, whether or not it is my system of beliefs, confers no warrant on that belief.

Second, a system of beliefs capable of conferring warrant upon its members must also meet the “Observation Requirement”:

Thus, as a straightforward consequence of the idea that epistemic justification must be truth-conducive, a coherence theory of empirical justification must require that in order for the beliefs of a cognitive system to be even candidates for empirical justification, that system must contain laws attributing a high degree of reliability to a reasonable variety of cognitively spontaneous beliefs (including in particular those kinds of introspective beliefs which are required for the recognition of other cognitively spontaneous beliefs). (p. 141)

A belief is “cognitively spontaneous” when it is not acquired via inference but immediately, in the way in which memory beliefs and beliefs formed by sense perception and introspection are typically acquired. And a system whose members are candidates for justification must contain both beliefs that are cognitively spontaneous, and also laws according to which those beliefs are for the most part true (pp. 112–32 and 141–44).

Third, if the members of a system of beliefs are to be candidates for justification, that system must be coherent not only at the time in question, but over “the long run”; furthermore, it must be “stable”; that is to say, it must not undergo undue change from moment to moment; and finally it must “converge” to a stable system in which the only changes are those “allowed or even required by the general picture of the world thus presented” (p. 170).

The point is that it is only in that latter sort of case—the case in which the belief system converges on and eventually presents a relatively stable long-run picture of the world, thus achieving coherence over time as well as at particular times—that the coherence of the system provides any strong reason for thinking that the component beliefs are thereby likely to be true. (p. 170)

(BonJour adds later on that the system in question must be more coherent than any alternative that also satisfies the Observation Requirement.) How long must this long run be? Presumably we aren't speaking of the long run (else a person could know what her name is only if she were immortal, which would yield a powerful epistemic argument for immortality); but (understandably enough) BonJour does not say how long a run is required. And finally, if S's beliefs are to have justification, S must have “a reflective grasp” of the fact that his system of beliefs satisfies these conditions, “and this reflective grasp must be, ultimately but perhaps only very implicitly, the reason why he continues to accept the belief whose justification is in question” (p. 154).

D. The Metajustification

So S's belief that p is justified, has warrant, only if (a) S's noetic structure satisfies the Observation Requirement, displays a high degree of coherence (at the moment and in the long run), and is more coherent than any alternative that also satisfies the Observation Requirement, and (b) S continues to accept p because he “reflectively grasps” these facts about his system of beliefs. But even this is not enough to confer warrant upon S's beliefs. What is also required is that there be a good argument for the contention that a noetic structure's satisfying those conditions is “truth-conducive,” as BonJour puts it. What could that argument be? This question assumes a certain urgency in that, as he sees the matter, each of us, if her beliefs are to attain justification, must in principle be able to tell that coherence is truth conducive: “If coherentism is to be even a dialectically interesting alternative, the coherentist justification must, in principle at least, be accessible to the believer himself” (p. 89). Further, each of us must be able to determine this a priori (p. 157). (It is no good examining a large number of beliefs from coherent noetic structures, noting that most are true and concluding that membership in a coherent noetic structure is truth conducive.) How is this supposed to work? What kind of coherentist justification is available to each of us? How is such a “justificatory argument” supposed to go?

As follows. Let B be one of my beliefs; and suppose B meets conditions (a) and (b) noted previously. Then there is an argument of the following sort for the conclusion that B is likely to be true:

(P1) B is a member of my set of beliefs; and my set of beliefs is coherent and stable (in the long run) and meets the Observation Requirement (‘long-run coherent’ for short)

(P2) Any belief that is a member of someone's long-run coherent set of beliefs is likely (i.e., more likely than not) to be true
Therefore,

(3) B is likely to be true.

This argument has two premises; what is my justification for believing them?

1. The Second Premise

Consider first BonJour's argument for (P2). Here he argues that:

A system of beliefs which (a) remains coherent (and stable) over the long run and (b) continues to satisfy the Observation Requirement is likely, to a degree which is proportional to the degree of coherence (and stability) and the longness of the run, to correspond closely to independent reality. (pp. 170–71)

His crucial premise is:

The best explanation, the likeliest to be true, for a system of beliefs remaining coherent (and stable) over the long run while continuing to satisfy the Observation Requirement is that (a) the cognitively spontaneous beliefs which are claimed, within the system, to be reliable, are systematically caused by the sorts of situations which are depicted by their content, and (b) the entire system of beliefs corresponds, within a reasonable degree of approximation, to the independent reality which it purports to describe; and the preferability of this explanation increases in proportion to the degree of coherence (and stability) and the longness of the run. (p. 171)

Let S be my system of beliefs; and suppose that S is long-run coherent. Then (a) and (b) together comprise what BonJour calls the “correspondence hypothesis” with respect to S; and “what needs to be shown,” he says,

is that the correspondence hypothesis is more likely to be true relative to the conditions indicated than is any alternative explanation. The underlying claim is that a system of beliefs for which the correspondence hypothesis was false would be unlikely to remain coherent (and continue to satisfy the Observation Requirement) unless it were revised in the direction of greater correspondence with reality—thereby destroying the stability of the original system and gradually leading to a new and stable system of beliefs for which the correspondence hypothesis is true. (p. 172)

Well, how is this to be shown, and shown a priori? What about the suggestions that I could be a brain in a vat, my beliefs being manipulated by an inquisitive Alpha Centaurian superscientist in such a way as to be coherent (and meet those other conditions), but mainly false? What about Descartes’ suggestion that he might be a victim of a whimsically malevolent demon, so that although his beliefs form an appropriately coherent system, they are for the most part false? What about all the other possibilities involving cognitive malfunction—possibilities in which my beliefs are coherent but far from the truth? How are these hypotheses to be shown less probable, on the supposition that my system S of beliefs is coherent, than the correspondence hypothesis with respect to S? BonJour argues (pp. 183–84) that these are all less probable, a priori, than the correspondence hypothesis with respect to S. That I should be manipulated in this way by a demon or Alpha Centaurian is enormously improbable, where the probability involved is not statistical probability, but a priori probability:

As already suggested, there seems to be only one real possibility at this point: just as the claim was made above that the elaborated chance hypothesis is antecedently extremely unlikely to be true, that its a priori probability or likelihood is extremely low, so also must an analogous claim be made about the elaborated demon hypothesis. (p. 184)

BonJour argues that the correspondence hypothesis, although itself “highly unlikely” (p. 186), is more probable than these various skeptical hypotheses; that is, its a priori probability, though much less than one-half, is nonetheless considerably greater than that of any skeptical hypothesis. This argument is not at all easy to follow but crucially involves the following idea:

There is available a complicated albeit schematic account in terms of biological evolution and to some extent also cultural and conceptual evolution which explains how cognitive beings whose spontaneous beliefs are connected with the world in the right way come to exist—an explanation which, speaking very intuitively, arises from within the general picture provided by… the correspondence hypothesis, rather than being imposed from the outside. (p. 187)

Given that the a priori probability of the correspondence hypothesis exceeds that of the skeptical hypotheses, he concludes that the conditional probability of the correspondence hypothesis on the proposition S is long-run coherent also exceeds the conditional probability of any skeptical hypothesis on that same proposition. It is not in general true, of course, that for a proposition C, if the prior probability of A exceeds the prior probability of B, then the conditional probability of A on C exceeds that of B on C; this does hold, however, where A and B entail C, and the skeptical and correspondence hypotheses can be stated so as to meet this condition.
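The probabilistic observation in the last sentence can be checked directly. When two hypotheses each entail the evidence, conditionalizing on that evidence simply rescales both priors by the same factor, so the prior ordering is preserved. In the following sketch, A and B stand for the correspondence and skeptical hypotheses (each stated so as to entail C), and C is the proposition that S is long-run coherent:

```latex
% If A entails C, then (A \wedge C) is equivalent to A, so P(A \wedge C) = P(A);
% likewise for B. Hence, assuming P(C) > 0:
P(A \mid C) = \frac{P(A \wedge C)}{P(C)} = \frac{P(A)}{P(C)},
\qquad
P(B \mid C) = \frac{P(B \wedge C)}{P(C)} = \frac{P(B)}{P(C)}.
% Both conditional probabilities are the priors scaled by the common factor 1/P(C), so
P(A) > P(B) \;\Longrightarrow\; P(A \mid C) > P(B \mid C).
```

Without the entailment condition the inference fails, as the text notes: if A is probable but nearly independent of C while B entails C, conditionalizing on C can reverse the ordering.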

2. The First Premise

So my justification for the second premise is essentially that I can see a priori that the a priori probability of the correspondence hypothesis with respect to S is greater than that of any skeptical hypothesis; but then the conditional probability of the correspondence hypothesis on the proposition S is long-run coherent is greater than that of any of the skeptical hypotheses on that proposition. But how about (P1)? What is my justification for believing that I do indeed believe B, that B is a member of my system of beliefs? If I need a justificatory reason for each of my empirical beliefs, I shall also need justification for that. And of course I must also believe, for some group of propositions A1,… An that they comprise my system of beliefs; again, what is my justification for that? And what is my justification for the belief that A1,… An form a coherent system?

With respect to the third question, the answer, presumably, is that one can determine a priori whether a given system of beliefs is coherent.4 What about the first two questions? Here it is not easy to see precisely what BonJour has in mind. As far as I can see, however, he disclaims any view as to what sort of warrant or justification such beliefs as I believe A0 or I believe each of A1,… An have for me; as a matter of fact he doesn't seem to claim that such beliefs have warrant for me. He recognizes that the justificatory argument will be cogent for me only if I do know or justifiably believe such propositions: “But if the fact of coherence is to be accessible to the believer, it follows that he must somehow have an adequate grasp of his total system of beliefs, since it is coherence with this system which is at issue” (p. 102); and “though questions can be raised and answered with regard to particular aspects of my grasp of my system of beliefs, the approximate accuracy of my overall grasp of that system must be taken for granted in order for coherentist justification to even begin” (p. 127). To take this for granted is to accept the “Doxastic Presumption”; that is, I accept the Doxastic Presumption if I believe that my beliefs as to what I believe (my ‘metabeliefs’, we might say) are approximately correct. However, he apparently does not intend to claim that anyone has warrant or justification either for the Doxastic Presumption or for his metabeliefs: “no claim is being made that these metabeliefs possess any sort of intrinsic or independent justification or warrant of any kind (nor would such claim be defensible in light of the earlier antifoundationalist arguments)” (p. 147).

So do we or don't we have warrant for a posteriori beliefs? In the last analysis, as far as I can tell, BonJour does not commit himself on this issue: “Rather, the approximate correctness of these beliefs is an essential presupposition for coherentist justification, and both such justification itself and any resulting claims of likelihood of truth must be understood as relativised to this presupposition” (p. 147). Apparently, therefore, BonJour takes it that his project is a sort of conditional one. The attempt is to show that if I make the Doxastic Presumption, then I will have available a justificatory argument for my a posteriori beliefs:

Nothing like a justification for the presumption has been offered for the simple reason that if it is properly understood, none is required: there can obviously be no objection to asking what follows about the justification of the rest of my beliefs from the presumption that my representation of my own system of beliefs is approximately correct. (p. 106)

E. The Justification of A Priori Beliefs

The idea, then, is that we are justified in empirical beliefs only if we have good reason to think them true; and that reason must ultimately be a priori. I note that I hold a certain belief B; I also note that my system of beliefs consists of B together with A1,… An; I note still further that this system of beliefs is coherent (and coherent in a long-enough run); I know or see or believe a priori that any such system of beliefs is likely to “correspond to reality”; I conclude that B is likely to be true.

Accordingly, the suggestion is that I can argue a priori, somehow, that a coherent system of beliefs is likely to correspond to reality; and of course in giving myself this argument I rely upon principles of logic and other a priori beliefs. Now those a priori beliefs must themselves have warrant or justification; if they lack it, then I won't have a good reason for thinking that my belief B is likely to be true. But how shall we think about the justification or warrant of a priori beliefs? I said at the beginning of this section that BonJour was an impure coherentist. This impurity consists in a certain promiscuity: he embraces a coherentist account of (broadly speaking) empirical warrant and knowledge, but a wholly different account of a priori warrant. Here he adopts what he calls a traditional rationalistic account:

The traditional positive view is that of the rationalist: a priori justification is ultimately to be understood as intuitive grasp of necessity: a proposition is justified a priori when and only when the believer is able, either directly or via some series of individually evident steps, to intuitively ‘see’ or apprehend that its truth is an invariant feature of all possible worlds, that there is no possible world in which it is false. (p. 192)

So this coherentist account of empirical knowledge rests upon a foundationalist account of a priori knowledge. What we know a priori gets its warrant not by virtue of being a member of a coherent system, but just by virtue of being self-evident, or such that it follows from what is self-evident by self-evidently valid arguments. More exactly, I am justified in accepting such a self-evident proposition as 2 + 1 = 3 by virtue of the fact that I can ‘see’ that it is necessarily true. Hence I do have a reason for accepting such a proposition: my reason is that it is necessarily true. But I do not have (and apparently do not need) a reason for accepting that reason, for accepting the proposition that 2 + 1 = 3 is necessary.

II. BonJourian Coherentism Examined

BonJour raises many points of great interest; there are many fascinating topics here, and much to say about each one. Strictly speaking, for my project of coming to a satisfying view of the nature of warrant I need consider only what BonJour says about warrant. But he offers us much else of real interest; much of what he says invites, nay, cries out for comment; it would be unduly inappreciative to stick churlishly to what is strictly required. Before going on to an explicit consideration of his account of warrant, therefore, I wish to comment on two other interesting facets of his thought: (1) his relationship to classical foundationalism and his partiality to reason, and (2) the success of his argument for a coherentist justification of empirical belief.

A. Classical Foundationalism and Trusting Reason

1. Preliminary Questions

Justification, according to BonJour, is a matter of epistemic responsibility. How so? We have the goal of achieving truth; “one's cognitive endeavors are epistemically justified only if and to the extent that they are aimed at this goal, which means very roughly that one accepts all and only those beliefs which one has a good reason to think are true” (p. 8). Questions arise here: first, do we all have this goal? Might not some of us have no overarching cognitive goals at all? Might not others have different overarching cognitive goals—comfort, say, or salvation, or fame and fortune, or mental health? Might I not have the primary goal of doing my cognitive duty, hoping that this will lead to my holding mostly true beliefs, but not explicitly aiming at the latter? (Just as I might have the more general aim of living aright, living in accord with duty, hoping this will lead to happiness, but not explicitly aiming at it?) And what about avoiding error? According to William James, W. K. Clifford was unduly finicky and squeamish about this matter; he had an inappropriate and ultimately unhealthy horror of believing what is false. But even if James is right, isn't it perfectly proper to try to avoid error as well as believe truth? (If your only aim is to believe truth, then no doubt the thing to do is to believe as many propositions as you possibly can.) And shouldn't the goal of believing the truth be more subtly specified? I am not particularly interested in sheer quantity of truth, as if the more truths (no matter how trivial), the merrier. There are some areas of reality such that I have little interest in learning about them (I really don't care how many blades of grass there are on your front lawn). 
I also value such epistemic excellences as penetrating insight, gaining a deep understanding of such difficult but important phenomena as sets, possible worlds, middle knowledge, and a thousand other things; how are these goals related to the goal of having true beliefs? Second, are we all really obliged to have this goal? Suppose we don't: have we gone contrary to a duty of some kind, so that we warrant reproach and blame? That seems a bit strong.

But let us waive these questions, assuming for purposes of discussion that we all have (perhaps as one among others) the overarching cognitive goal of achieving truth. How does it follow that to act responsibly I must believe only on the basis of reasons? Even if I think that believing on the basis of reasons is the best way to achieve truth, how is it that I would be irresponsible if I did not try to believe on the basis of reasons? I might be feckless; or I might be heedless or reckless with respect to my own goals, and hence perhaps irrational in some sense of that multifarious term; but would I be irresponsible? And second, couldn't I think that believing on the basis of reasons is not a good way to achieve truth? Perhaps I think (and responsibly think) that truth gives herself, not to those who are crafty and calculating, but to those who believe with a fine impulsiveness, accepting the first idea that pops into their heads. But then it would be hard to see how responsibility requires that I believe only on the basis of reasons.

More important, might I not be perfectly responsible even if I did not always require a reason for belief, but for many beliefs simply trusted my nature, believing what nature inclines me to believe? In my present circumstances I am inclined to believe that I had breakfast this morning; my memory does not, so far as I know, play me tricks. Why isn't it entirely responsible for me, in these circumstances, to believe that I did have breakfast this morning, whether or not I can find some reason (that is, some supporting evidence in other beliefs I hold) for the belief that I did?

Indeed, do I have a real alternative to trusting my nature at some point or other? A reading of the first chapter of BonJour's book might suggest that he endorses the quixotic Enlightenment project of refusing to trust or acquiesce in my cognitive nature until I first determine that it is reliable, that it, for the most part, provides me with truth. But of course this is wholly foolish and self-forgetful, worthy only of someone who, like Kierkegaard's Hegel, forgets that she is an existing individual and confuses herself with universal reason in the abstract. For where do I stand when I conclude that since it is possible that my nature should massively deceive me, I should not trust it until I determine that it does not thus massively deceive me? Where shall I stand while making that determination, while investigating whether my nature is or is not reliable? Where is this Archimedean ποῦ στῶ?

Obviously I have nothing but my epistemic nature—my natural epistemic faculties—to enable me to see (if that is what it is) that if it is possible that my nature deceive me, then I must determine whether it is reliable before I trust it. And equally obviously I must trust my nature in trying to determine whether it is reliable, and thus worthy of my trust. To determine whether it is reliable, I must determine whether for the most part it yields truth. So perhaps I conclude that my cognitive nature yields much error; perhaps I note that I am inclined to believe propositions P and Q from which by subtle or obvious arguments it follows that both R and not-R. But, of course, in so noting I rely upon my natural inclinations for the belief that I do indeed believe both P and Q, that those arguments from P and Q to R and not-R are in fact valid, and that if my nature inclines me to believe propositions that by those arguments lead to contradictions, then my nature inclines me to error. To transpose the theme into BonJourian terms, if I need a reason for everything I believe, then I need a reason for believing that, as well as a reason for thinking that what I take to be a reason for believing that is a reason for believing it, and a reason for that second belief, and a reason for thinking that what I take to be a reason for it is in fact a reason for it, and so on. That way, obviously, lies intellectual shipwreck.

As a matter of fact, however, BonJour is not involved in any such absurd and self-defeating stance; for he clearly begins his project with an initial trust in reason, the faculty that yields a priori and self-evident (or apparently self-evident) beliefs. (Indeed, he sweeps a fair amount of dust under the a priori rug.) We have noted that he accepts a traditional rationalist account of a priori knowledge, holding that in many cases we can simply see that a proposition is true, and in fact necessarily true. In these cases, he says, responsibility does not require that we have a reason for the belief in question (more exactly, for the necessity of that belief); and he gives a fine defense of that traditional view against various unpromising recent empiricist attacks. Here BonJour concurs with the classical foundationalist: self-evident propositions are indeed properly basic; they have warrant without receiving it by way of warrant transfer or coherence. (Unlike the classical foundationalist, either ancient or modern, BonJour appears to think that beliefs of the form Necessarily A are the only properly basic beliefs; here he differs from the ancient classical foundationalist, who holds that propositions evident to the senses are also properly basic, from the modern classical foundationalist, who holds that certain propositions about one's own mental life are also properly basic, and from both ancient and modern classical foundationalists, since they hold that a self-evident proposition is properly basic, whether or not it is of the form Necessarily A.)

Now here we may be inclined to think we detect a sort of incoherence in BonJour's views, or at any rate an arbitrary partiality. If I can responsibly trust my nature with respect to what seems self-evident, why can't I trust it with respect to perception and memory, say, or introspection, the faculty whereby I come to hold metabeliefs about what I believe? To put the matter BonJour's way, if I don't need a reason for believing one of the deliverances of reason, why do I need one for believing one of the deliverances of sense or memory or introspection? Initially, BonJour writes as if we need a reason for everything we believe; his objection to internalist and externalist foundationalism is that on these views a person could justifiably accept a belief without having a reason for thinking that belief true. But then, as it turns out later, he doesn't think you need a reason for (the necessary truth of) what seems self-evident; one can accept the deliverances of reason in the basic way without irresponsibility. And isn't there an arbitrary partiality there? As Thomas Reid says,

The sceptic asks me, Why do you believe the existence of the external object which you perceive? This belief, sir, is none of my manufacture; it came from the mint of Nature; it bears her image and superscription; and, if it is not right, the fault is not mine; I ever took it upon trust, and without suspicion. Reason, says the sceptic, is the only judge of truth, and you ought to throw off every opinion and every belief that is not grounded on reason. Why, sir, should I believe the faculty of reason more than that of perception? They came both out of the same shop, and were made by the same artist; and if he puts one piece of false ware into my hands, what should hinder him from putting another?5

BonJour neither asks nor answers Reid's question. He writes initially as if responsibility always requires believing only if you have a reason, only later beating an unannounced retreat from this position by way of making an exception for self-evident beliefs (more exactly, beliefs of the sort P is necessary, where P is self-evident). But perhaps there is an answer, of sorts, to Reid's question; perhaps Reid is a bit hasty. He often writes as if the skeptic (or the modern classical foundationalist) were just a whimsical, arbitrary fellow who exalts reason and consciousness (Reid's term for the faculty whereby I come to know such things as that I am appeared to redly) over perception and memory for no reason at all—or else an imperceptive fellow, who mistrusts perception and memory because he can't give a noncircular justification of them, foolishly failing to notice that the same thing holds for reason and consciousness. But skeptics need not be as arbitrary and imperceptive as that. The skeptic ordinarily begins by pointing to the uncertainty, disagreement, error, and confusion that haunt human life. One of the originating causes of skepticism, says Sextus Empiricus, is that “men of talent… were perturbed by the contradictions in things and in doubt as to which of the alternatives they ought to accept.” There is often enormous disagreement among human beings, and disagreement about matters of the greatest import, both practical and theoretical. These things—error, confusion and disagreement—show that our noetic faculties are to at least some degree unreliable. If you and I disagree, then either your faculties are misleading you or mine are misleading me. (Of course we rely upon our cognitive nature in making that judgment; but if it is mistaken there, then it is deeply mistaken indeed.)

And the skeptic about perception does not ordinarily mistrust perception merely because he can't give a noncircular justification of it; instead he begins by claiming, plausibly enough, that the senses are sometimes misleading. “I have learned from experience that the senses sometimes mislead me, and it is prudent never to trust wholly those things which have once deceived us,” says Descartes; “The same tower appears round from a distance but square from close at hand,” says Sextus Empiricus; and he adds that “sufferers from jaundice declare that objects that seem to us white are yellow.” (Not all of Sextus's pronouncements on this subject inspire equal confidence; he also holds that those whose eyes are bloodshot see everything as red, and he reports with evident approval Anaxagoras's attack on the common view that snow is white by way of the argument snow is frozen water; water is black; therefore snow is black.)

So the skeptic about sense perception typically opens his case by pointing out that perceptual appearances are often misleading. And here, of course, he is right. From some angles Mount Teewinot looks higher than the Grand Teton; a flat North Dakota road may look shiny and wet in the distance when in fact it is perfectly dry; parched travelers in the desert may be misled by perceptual appearances into thinking there is an oasis no more than a couple of miles away. On the other hand, we don't know of any such thing in the case of, say, what Reid calls ‘consciousness’, the faculty responsible for the sort of beliefs we express by saying, ‘I am appeared to redly’. We don't know of any cases where it seemed to someone that he was being appeared to redly when in fact he was being appeared to bluely or not at all; we don't know of any case where someone believed he was in severe pain when in fact he was perfectly comfortable, or believed that he was sad when in fact he was euphoric. One reason, then, for following the modern classical foundationalist in trusting reason and consciousness more than sense perception is the fact that the latter sometimes misleads us.

We should pause here to ask how we know that sense perception sometimes deceives us. Consider the phantom oasis. At time t it looks to us as if there is an oasis about a mile and a half from the place we stand; we make a thirsty dash to that spot, only to find nothing but sand; we conclude that at t we were mistaken in thinking there was an oasis there. The conclusion that we were thus misled by our senses at t clearly involves several faculties: memory, induction (whereby we come to believe that if there was no oasis there when we arrived, then there was none there at t), and sense perception itself. We might be able to do without induction; perhaps there was an old bedouin at that very spot, enjoying the warm sunshine; he reports that he has been there for at least two hours, and at no time during that period was there an oasis on that spot. Then we rid ourselves of dependence, in this instance, upon induction, but we add a couple of other dependencies; for now we depend upon the faculty or power (Reid calls it sympathy) whereby I come to believe, under certain conditions, that someone other than myself believes and is telling me such a thing as that he has been at a certain place for a couple of hours during which time there was no oasis there—as well as credulity, the faculty leading me to believe what I take it others tell me. (Alternatively, in this case I don't rely upon credulity, but replace one reliance upon induction by another: I now employ the inductively based belief that when a bedouin seems to say he has been on a certain spot for a couple of hours and there has been no oasis there during that time, then indeed there was no oasis there during that time.)

Of course we also depend, in this instance, upon reason. In some cases of error detection just one faculty, in addition to reason, is involved. I seem to remember that p, and that on a previous occasion I seemed to remember that not-p. In this case it is just reason and memory that are involved in detecting error. Various other combinations of our faculties can lead to the detection of error. It is interesting to note, however, that in every such case reason is involved; in every case where we detect error we will be relying upon some inference or self-evident truth. So perhaps we should conclude, by Mill's Method of Agreement, that reason is the source of error in all these cases! But of course that is not what we do conclude. “Why, sir,” asks Reid, “should I believe the faculty of reason any more than that of perception?—they both came out of the same shop, and were made by the same artist.” But often we do believe the faculty of reason more than that of perception, and rightly so. If I put a couple of marbles between my fingers in a certain way, my eyes tell me that there is just one marble there and my sense of touch tells me that there are two. I do not conclude that, contrary to the deliverances of reason, there is just one marble there and furthermore two; I quite properly take it that it is my senses that mislead, not my reason.

So one possible ground for discriminating among our faculties is that some of them seem on some occasion to be misleading, to lead us into error. This is a ground for discrimination against perception and memory in favor of consciousness. Another ground for discrimination is that some but not all faculties can be validated by other faculties. Again, consciousness seems privileged here. I said earlier that we know of no cases where someone thought he was appeared to redly, or thought he was in pain but really was not. It is at least plausible to go further; some of the deliverances of consciousness are such that it is a teaching of reason that we are not deceived with respect to them. Is it not self-evident that a person who carefully considers whether he is being appeared to redly, and believes that he is being appeared to redly, is being appeared to redly?

Here we must be careful; perhaps I can believe de re, of the proposition that I am being appeared to redly, that it is true, when as a matter of fact it is false. I may now believe that everything you explicitly believe is true, and know that there is just one proposition you now explicitly believe; I can then believe of that proposition that it is true, even if I don't know which proposition it is that you now believe, and even if that proposition is (or is equivalent to) the false proposition that I am now being appeared to redly. But I don't see how it could be that I should grasp the proposition that I am being appeared to redly, consider whether it is true, and then believe that I am being appeared to redly when in fact I am not being appeared to that way. If I reflectively believe that I am being appeared to redly, then it is at least approximately true that I am so appeared to. (I say ‘approximately true’ because perhaps it's not quite redly that I am being appeared to, but something closer to pinkly or mauvely.) It is not easy to state the truth here exactly, but there is a truth here, I think, and it is one that distinguishes consciousness from such other faculties as memory and perception.

The skeptic (and the modern classical foundationalist), then, is not being just arbitrary in taking it that consciousness is more worthy of credence than some of our other faculties. But what about reason itself? Is there something about it that distinguishes it from other faculties, such that by virtue of that feature it is more worthy of credence than the others? I am inclined to think the answer is yes, although I am unable, at present, to see the issues with real clarity. An important difference between reason and my other faculties is the obvious fact that I can't think about the reliability of any of my faculties without in some sense trusting reason, taking it for granted or assuming, at least for the time being, that it is reliable. Suppose I reflect on the fact that perception, say, sometimes leads me into error; and suppose I conclude, with the Humes of this world, that perception is not to be trusted—or with Reid that it is to be trusted. Either way, of course, I employ the faculty of reason; more than that, either way, if I am serious, and if I accept the conclusions to which I am led and do not also, ironically, half reject them, I am clearly trusting reason and its deliverances. But of course the same does not go for perception, or memory; I can think about their reliability without employing or trusting them.

This is a difference, you say; but is it a relevant difference? I think it is, but it is hard indeed to explain exactly why. Perhaps the answer lies in the following slightly different direction: we cannot so much as raise the question of the reliability of reason, or any other of our faculties, without taking the reliability of reason for granted. Earlier I said there is a certain laughable foolishness and self-referential folly attaching to the enterprise of refusing to trust one's cognitive faculties before one has certified them as reliable; for of course one has to trust them or some of them in order to try to certify them. This is true a fortiori with respect to reason; clearly one cannot sensibly try to determine whether reason is trustworthy, before relying on it, since one has no recourse but to rely on it in trying to make that determination. This foolish futility, however, does not affect the corresponding projects with respect to our other faculties.

So far we have three candidates for the post of being a feature that relevantly distinguishes some faculties from others; two of these favor consciousness and one of them favors reason. But now suppose we think a bit more about the fact that reason displays this feature, a feature that distinguishes it from our other faculties. Does it follow from this fact that one could not sensibly come to the conclusion that reason is unreliable? Not obviously. What would be required for me to conclude, sensibly, that reason is not to be trusted? One suggestion: I may learn or think I learn from divine revelation, say, that some proposition is true, which proposition conflicts with what reason teaches. (Some people think thus—mistakenly, in my view—about the Christian doctrines of the Trinity and Incarnation.) But here there are deep waters. I rely upon reason to conclude that there is a conflict between reason and revelation here; so if I conclude that reason is unreliable, have I not lost my reason for thinking there is a conflict (and hence my reason for thinking reason unreliable)? Further, if I rely upon reason in concluding that there is a conflict, I can of course quite sensibly continue to rely upon it and conclude that the alleged divine revelation is not really a divine revelation after all.

What is needed, here, is a case where reason somehow indicts itself, and cannot self-servingly point the finger at something else, some other faculty. Suppose there were some propositions that are among the immediate teachings of reason—that is, were apparently self-evident; and suppose they led by argument forms also sanctioned by reason to a conclusion whose denial was an immediate teaching of reason. To put it more simply, suppose I came upon apparently self-evident propositions P1, …, Pn that led by apparently self-evidently valid arguments to a self-evidently false conclusion Q. If reason is reliable, then P1, …, Pn are true; but if P1, …, Pn are true, then so is Q, in which case reason is not reliable; so if reason is reliable, it isn't reliable; so it's not reliable. Could I sensibly come to this conclusion and no longer trust reason?

Once I had accepted the argument and stopped trusting reason, then, of course, I could no longer sensibly offer this argument as my reason for mistrusting reason; at first glance the argument seems to defeat itself. At second glance, however, we see that this is not a way out. Consider this remarkable passage from Hume (a passage in which he is echoing Sextus):

The skeptical reasonings, were it possible for them to exist, and were they not destroy'd by their subtlety, wou'd be successively both strong and weak, according to the successive dispositions of the mind. Reason first appears in possession of the throne, prescribing laws, and imposing maxims, with an absolute sway and authority. Her enemy, therefore, is oblig'd to take shelter under her protection, and by making use of rational arguments to prove the fallaciousness and imbecility of reason, produces, in a manner, a patent under her hand and seal. This patent has at first an authority, proportioned to the present and immediate authority of reason, from which it is deriv'd. But as it is suppos'd to be contradictory to reason, it gradually diminishes the force of that governing power, and its own at the same time; till at last they both vanish away into nothing by a regular and just diminution.… ’Tis happy, therefore, that nature breaks the force of all skeptical arguments in time, and keeps them from having any considerable influence on the understanding.6

Hume's point, or one lurking in the nearby woods, could perhaps be put as follows. I begin in the natural condition of trusting reason; I then encounter these arguments showing by way of reason that reason is not to be trusted; I then stop trusting reason; but once I have done so, I no longer have any reason not to trust reason and my natural tendency to trust reason reasserts itself; but then I once more have reason to stop trusting reason; whereupon I lose my reason for mistrusting reason and fall back into the natural condition of trusting it; whereupon… What we have is a nasty dialectic, a movement that proceeds through a distressing loop. And after having gone through the loop a couple of times, I may find myself going through it faster and faster, until I am in effect intellectually paralyzed—or until, weary and frustrated, I give up the whole subject and go out to play backgammon with my friends. So couldn't someone in this condition—someone who had encountered a deliverance of reason that by apparently self-evident argument forms led to an apparently self-evidently false conclusion—couldn't someone in this condition sensibly stop trusting reason? I can't see why not.

But, sadly enough, this is precisely the condition in which as a matter of fact we find ourselves. There are apparently self-evident propositions that lead by apparently self-evident arguments to conclusions that are self-evidently false. I am thinking, of course, of some of the Russell paradoxes. It is apparently self-evident that for anything I can correctly say about an object x, there is a property x displays; it is also apparently self-evident that every property has a complement; and it is apparently self-evident that some properties—being a property, for example—exemplify themselves. But then it is an apparently self-evident consequence of apparently self-evident propositions that there is such a property as non-self-exemplification; and we all know the rest of the sorry tale. So we are in just the position I said could sensibly lead a person to mistrust reason itself.
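The derivation gestured at here can be compressed into a line; the following is a sketch in my own notation (not BonJour's or Russell's), writing ‘x has property P’ as P(x) and letting N be the property of non-self-exemplification:

```latex
% The apparently self-evident premise: there is a property N,
% non-self-exemplification, such that for every property x,
%   N(x) \leftrightarrow \neg x(x).
% Since N is itself a property, instantiating x with N yields
%   N(N) \leftrightarrow \neg N(N),
% a self-evidently false conclusion.
```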

Now we need not draw the conclusion that reason is not to be trusted at all; that would be absurdly excessive. But it does follow, I think, that our reason, in our present condition, is not to be trusted wholeheartedly and without reservation; for if it is wholly reliable, then it is not wholly reliable, in which case it is not wholly reliable.

To return to BonJour, then: there is indeed a sort of partiality involved in his holding that responsibility requires reasons in the case of perceptual and memory beliefs, but not in the case of self-evident beliefs. But perhaps the partiality is not altogether arbitrary: there are differences between the faculties involved, and perhaps these differences partly warrant this sort of differential treatment of them—“partly,” because the relevant differences between them are a matter of degree and not nearly as stark as BonJour and the classical foundationalist seem to suppose.

2. BonJour's Justification of Empirical Belief

As we saw earlier, BonJour holds that an empirical belief B has warrant for me if and only if I have a reason for thinking B true; and that reason can only be the conjunction of

(i) B is a member of my system of beliefs B1, …, Bn and that system is coherent

and

(ii) any belief that is a member of a coherent system of beliefs is probably true.

He then offers an argument (the justificatory argument) for (ii). There are serious problems here: I shall mention four, in order of increasing severity.

(a) Coherence and Likelihood of Truth I

How are we to establish (ii)? “What is needed, in first approximation,” says BonJour, “is an argument to show that a system of empirical beliefs that is justified according to the standards of a coherence theory of this sort is thereby also likely to correspond to reality” (p. 169); he then goes on to give an argument for this conclusion. But this conclusion will not quite give us (ii); what it will give us, of course, is that any element of a coherent system of beliefs is an element of a system of beliefs that is likely to correspond to reality. And even if a coherent system of beliefs is more likely than not to correspond to reality (and BonJour does not believe that it is), it doesn't follow that each element of such a system is more likely than not to be true. (Maybe a system of beliefs corresponds to reality if, say, nine out of ten of its members are true.) To get (ii) we need an additional premise—something like if a system of beliefs corresponds to reality then every member of it is true. Unfortunately this premise is not acceptable to BonJour. It would place an even greater burden on the argument for the claim that a coherent system of beliefs is likely to correspond to reality; that argument would then have to conclude that any coherent system of beliefs is likely to have no false members, whereas what in fact he means to argue for is that a coherent system of beliefs will correspond to reality “within a reasonable degree of approximation” (p. 171).

(b) Coherence and Likelihood of Truth II

This brings us to another and even less tractable problem. Suppose B is a belief of mine; I propose to justify it in BonJourian fashion. How, precisely, does the justification go? I am to note that B is an element of a system S of beliefs that has been coherent for some significant period; and I am also to see a priori that such a system of beliefs is likely to correspond to reality. But what BonJour actually argues here is not that such a system of beliefs is more likely than not to correspond to reality (and hence to contain mostly true beliefs); what he argues is that the correspondence hypothesis with respect to S (CHS) is more likely to be true than any alternative explanation according to which S does not correspond to reality. More exactly, he argues that

P(CHS/S is long-run coherent)

is greater than

P(SKS/S is long-run coherent)

where SKS is any skeptical hypothesis with respect to S. That is, he argues that the conditional probability of the correspondence hypothesis with respect to S, on S's being long-run coherent, is greater than the conditional probability of any of the skeptical hypotheses with respect to S, on that same condition. So the best we can say for CHS, BonJour thinks, is that it is not as unlikely as the skeptical explanations, the explanations according to which S does not correspond to reality.

But then how can I conclude that B is likely to be true? My ground for holding that B is likely is that B is probable with respect to (CHS), an explanation of the coherence of S that is more probable than any skeptical explanation. But if (CHS) is not itself more probable than not, how can the fact that B is probable with respect to it so much as even slyly suggest that B is more probable than not? And even if CHS were more probable than not, the fact that B is probable with respect to it would not of course show that B is itself more probable than not. It is more probable than not that one of the first 51 tickets of the 100-ticket lottery will win, and probable on that that one of the first 26 tickets will win; but the latter is not more probable than not.
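The lottery arithmetic behind this last point can be checked directly; here is a minimal sketch (the numbers come from the example above; the variable names are mine):

```python
# Fair 100-ticket lottery: exactly one ticket wins, each equally likely.
# P(winner is among the first 51 tickets) -- more probable than not.
p_first_51 = 51 / 100
# P(winner is among the first 26 | winner is among the first 51)
# -- also (just barely) more probable than not.
p_first_26_given_51 = 26 / 51
# Yet P(winner is among the first 26) is well below one half.
p_first_26 = 26 / 100
print(p_first_51, p_first_26_given_51, p_first_26)
```

So a proposition can be probable on a hypothesis that is itself more probable than not, without that proposition being more probable than not.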

(c) Long-run Coherence?

I said above that a justificatory argument for one of my beliefs B involves the premise that

(ii) any belief that is a member of a coherent system of beliefs is probably true.

But (ii) is not quite accurate; the system of beliefs in question (as we saw on p. 92) must of course be some person's system of beliefs, and it must be coherent in the long run, or at any rate in some sufficiently long run. So (ii) is more accurately stated as

(ii′) If B is a member of S's system of beliefs and S's system of beliefs has been coherent for a sufficiently long run, then B is likely to be true.

The basic idea is this: the most likely explanation of someone's system of beliefs remaining coherent in the long run is that those beliefs are caused in such a way as to be for the most part true. But here another problem rears its ugly head: if I am to use this argument to justify B, then I shall have to know or justifiably believe not only that my system of beliefs is coherent, but that it has been coherent over the long run, or a long enough run. This is another a posteriori belief; and what could be my justification for it? There are two possibilities: (i) my beliefs about times other than the present get warrant in some way (via memory, for example) different from the way other empirical beliefs get warrant, or (ii) they do not. The first alternative is obviously unsatisfactory for BonJour. But of course so is the second: on BonJour's scheme, I am to be justified in holding a particular empirical or a posteriori belief B by noting (among other things) that (1) B is a member of my system of beliefs S and (2) S has been coherent over a sufficient time. But then of course (2) must be justified for me, and justified for me before B is. In the case in question, however, B is (2); hence in the case in question (2) must be justified for me before it is justified for me. It is hard to see a way out.

(d) The Doxastic Presumption

To be justified in believing B, I must know that B is a member of my system of beliefs and that that system has been coherent for a sufficiently long run; as we have seen, the second conjunct causes real trouble. But so does the first; in fact here there is a problem that threatens the structural integrity of BonJour's whole position. Clearly my justificatory argument for B requires that I justifiably believe such propositions as I believe B and I believe each of B1,… Bn (where these are the members of my system of beliefs). And the question is: what sort of warrant or justification do such beliefs have for me? As we have seen (p. 96), BonJour does not give an answer; as a matter of fact he doesn't claim that such beliefs have warrant for me. Instead, he represents his project as a conditional one, showing that if I make the Doxastic Presumption, then I will have available a justificatory argument for my a posteriori beliefs. But isn't there an error here? The Doxastic Presumption (with respect to myself) is the proposition that my metabeliefs (my beliefs as to what I believe) are approximately correct. BonJour recognizes that a justificatory argument with respect to one of my beliefs requires the Doxastic Presumption:

Thus when, as in the following discussion, appeal is made to the Doxastic Presumption in setting out a particular line of justification, this should be understood to mean that the justificatory argument depends on the believer's grasp of his overall system of beliefs and is cogent only on the presumption that his grasp is accurate. (p. 128)

BonJour's idea seems to be that a cogent justificatory argument for one of my beliefs requires the truth of the Doxastic Presumption. That is, if the Doxastic Presumption is not true (with respect to me) then my justificatory argument will have a false premise; I believe B and I believe each of B1,… Bn will not be true, or close enough to the truth; and my justificatory argument will be unsound.

But here I think there is confusion. What the cogency of such an argument requires is not just the truth of those metabeliefs (I believe B and I believe each of B1,… Bn), but that I be justified in holding them. It isn't sufficient that my metabeliefs happen to be true; if I am not justified in those beliefs, then my justificatory argument confers no justification upon B for me. And given BonJour's claim that an empirical or a posteriori belief (any empirical or a posteriori belief) is justified for me only if I have a good reason for it, it follows, on his view, that I am not justified in those metabeliefs, and hence not justified in holding any of the a posteriori beliefs I do in fact hold. BonJour apparently holds that my justificatory argument is conditional upon the truth of the Doxastic Presumption (with respect to me); but in fact it must be conditional upon the justification, for me, of that presumption. Other elements of BonJour's position imply, however, that I cannot be justified in this presumption.7 Hence it looks as if BonJour's position implies that no one has warrant for any empirical or a posteriori belief.

Of course BonJour has already made one exception to the general claim that I am justified in believing B only if I have a good reason for thinking B true: beliefs of the sort of which necessarily, 2 + 1 = 3 is an example. Perhaps he could make another: perhaps he could hold that my beliefs about what I believe also have warrant immediately and in themselves. Then his position would be an unusual variety of modern classical foundationalism. He would concur with the foundationalist in holding that self-evident beliefs and metabeliefs have noncoherentist warrant; he would differ from him in two respects: first in denying that other propositions about one's own immediate experience have such warrant and, second, in holding that warrant transfer to a particular a posteriori belief occurs only by way of a justificatory argument.

In concluding this section, I want to point to one more kind of difficulty. This difficulty is presented by BonJour's claim that the a priori probabilities of the sorts of propositions relevant to his argument are knowable a priori. The objection is not that there isn't any such thing as a priori probability. I see no objection to the claim that there is such a variety of probability, although I think it is better to call it ‘logical probability’.8 But BonJour's argument requires not merely that propositions do have a priori probabilities; it also requires the premise that the Correspondence Hypothesis, for a given coherent structure S of beliefs, be more probable, a priori, than any of the skeptical explanations of the coherence of S—that is, any of the explanations according to which it is not the case that S corresponds to reality; and of course that too must be knowable a priori. This is monumentally dubious. Even if such a hypothesis as (CHS) and these skeptical explanations do have an a priori probability, a probability on necessary truths alone, it is surely anyone's guess what that probability might be. Assuming there is such a thing as a priori probability, what would be the a priori probability of our having been created by a good God who (all else being equal) would not deceive us? What would be the a priori probability of our having been created by an evil demon who delights in deception? And which, if either, would have the greater a priori probability? Short of being able to argue that God exists necessarily (in which case the first would have a probability of 1), how could we possibly tell?

3. Coherence and Warrant

I now turn finally (and more briefly) to the question central to my concerns here: does BonJour's coherentism give a satisfactory account of warrant? More specifically: suppose a belief is a member of a system of beliefs coherent in BonJour's sense: will this be necessary and sufficient for warrant? Still more specifically, since both coherence in BonJour's sense and warrant come in degrees, is it necessary that the warrant of a belief and the coherence of the system of beliefs to which it belongs vary together?

(a) No Degrees of Warrant

The first thing to see is that BonJour's account seems not to accommodate the obvious fact that warrant comes in degrees. I believe both that I live in Indiana and that Aristotle once lived in Stagira; clearly the first has more warrant for me than the second. I believe both that there are oak trees in my backyard and that once there were many cedars in Lebanon; again, the first has more by way of warrant for me than the second. But of course the second member of each pair is as much an element of my system of beliefs as the first (elementhood not coming in degrees); so if what confers warrant on such a proposition for me is my knowing or believing that it is an element of a coherent system of beliefs, then all of my beliefs will have the same degree of warrant. If I know anything, then any true belief of mine constitutes knowledge. Further, high coherence is not perfect coherence; BonJour clearly means to hold that a coherent system of beliefs can be sufficient for high warrant, for its members, even if it contains pockets of incoherence. But then suppose a given belief is at the source and center of such a pocket: on BonJour's suggestion that belief would have as much warrant for me as my most secure item of empirical knowledge. Both of these consequences are thoroughly unwholesome.

But perhaps this difficulty can be overcome; a little Chisholming may save the day. Perhaps there are ways of partitioning my system of beliefs into subsystems, some of which display more coherence (and thus confer more warrant) than others; or perhaps a particularly large and impressive subsystem could be identified as a central core, other beliefs having warrant in proportion to their probability with respect to that central core. There are a number of intriguing possibilities here, most of which look pretty difficult; but perhaps it can be done.

(b) Coherence Not Sufficient for Warrant

Is high coherence sufficient for high warrant? It seems not. It seems clear that a belief could be a member of a highly coherent system of beliefs and still have very little warrant. Return, for example, to the Case of the Epistemically Inflexible Climber (see p. 82), the unfortunate whose beliefs became fixed, no longer responsive to experience, due to an errant burst of high-energy radiation. Stipulate that his system of beliefs is coherent; and to adapt this example to BonJour's specific account of justification, stipulate also that his system of beliefs has been coherent for some time and that it meets the Observation Requirement.9 (Many beliefs about the geography of China, say, spontaneously arise in his cognitive system, along with the appropriate introspective beliefs.) His system of beliefs thus meets BonJour's conditions for justification; but then it meets those conditions equally well later on, when he is at the opera and his experience is of a wholly different character. At that later time, however, his beliefs that he is seated on the belay ledge, that there is a hawk floating in lazy circles a couple of hundred feet below his feet, and so on, clearly have little or no warrant for him. If one of these beliefs happens to be true, it will not qualify as an instance of knowledge for him, even though it is a member of a coherent system of beliefs.

Or suppose I am one of Descartes’ madmen: I think I am a squash, perhaps a pumpkin. This psychotic delusion is the pivot of my whole system of beliefs and the rest of my beliefs settle into a coherent pattern around it. (Thus I believe that I was grown in a famous Frisian garden, that I alone among the vegetables have been granted rationality and self-consciousness, that the explanation for my having been thus exalted is to be found in God's middle knowledge about what various possible gourds would freely do in various situations, and so on.) We could also turn to the skeptical hypothesis BonJour mentions. Perhaps I have been captured by Alpha Centaurian cognitive scientists, who make me the subject of a cognitive experiment; their aim is to give me a system of beliefs in which falsehood and coherence are maximized. They succeed in giving me a thoroughly coherent set of beliefs, but in a few cases they slip, giving me a true belief rather than a false one. Or perhaps I am victimized by a Cartesian demon who chooses a set of beliefs at random and then adds enough beliefs so that my noetic system is coherent. (We must add, in each case, that the Observation Requirement is satisfied.) In such cases my beliefs may have a great deal of coherence, but they will have very little warrant. Those that are true, are true just by accident, and surely do not constitute knowledge for me.

BonJour recognizes (p. 150) that in these cases there is not knowledge; he adds that what we have in these cases are, or are similar to, Gettier situations.10 Now, first, BonJour believes that the traditional justified true belief account of knowledge is “at least approximately correct” (p. 3). Perhaps, then, it would follow that in the present cases, even if the subject doesn't have knowledge, he almost does; it would be approximately true, or near the truth, to say that he does. But surely not. If my cognitive faculties are subject to malfunction of that degree of seriousness, surely I don't have anything like knowledge; my beliefs have far too little warrant for knowledge. Second, would these really be much like Gettier cases? I shouldn't have thought that a situation in which I was deceived in such wholesale, global fashion was really a Gettier situation. The latter typically involves the subject's cognitive environment's being mildly but not overwhelmingly misleading: someone shows me a lot of evidence for the false proposition that he owns a Ford, or I mistake a wolf in sheep's clothing for a sheep, or am deceived by a lot of barn facades the native Wisconsinites have erected.11 The sort of case where my cognitive faculties are massively malfunctioning does not seem very similar to these paradigm Gettier situations.

But of course it might be hard to tell what is close to the truth and what not; and equally hard to tell what sorts of situations achieve the distinction of being similar to genuine Gettier situations. What is important here is that high BonJourian coherence is not sufficient for high warrant; indeed, great coherence can go with very little warrant. Furthermore, BonJour gives not so much as a hint as to what further is required for knowledge, or for a degree of warrant high enough for knowledge; he does not discuss Gettier situations at all. So (pace the book's title) what we have is not really an account of empirical knowledge; it is at best an account of one condition necessary for empirical knowledge. What more might be required? BonJour says nothing at all on this head. In Warrant and Proper Function I suggest that one condition crucially required is the absence of the sort of cognitive malfunction or pathology involved in the above examples.

(c) Coherence Not Necessary for Warrant

But where that condition (plus a couple of others) is satisfied, we don't also need high coherence for high warrant; high BonJourian coherence is not sufficient for high warrant, but it isn't necessary either. Couldn't I know what my name is, or that I live in Indiana, or that I am not bald, even if much of my noetic structure is in disarray and displays little coherence? Concede for purposes of argument that a belief B constitutes knowledge for me only if there is a fair degree of coherence in the near neighborhood of B; surely it isn't necessary that my noetic structure overall display much coherence. And a fortiori, it does not seem necessary, in order for me to know B, that B be a member of a system of beliefs that is coherent in even a moderately long run. I am captured by Alpha Centaurian cognitive scientists who for a period of a year manipulate my beliefs in such a way that my noetic structure is not coherent to any significant degree; their experiment concluded, they return me to earth and restore me and my faculties to normal functioning. Couldn't I then know, for example, that I see a sheep, even though I am killed a couple of days later in a car accident, so that the belief in question is not a member of a noetic structure that is coherent for any significant run?

Now of course the resolute BonJourian coherentist might protest that he has an argument for holding that in these cases there is little or no justification (and hence no knowledge) and that I am ignoring that argument. The argument, substantially, would be that

(a) one must have a reason to be justified in accepting a belief,

(b) the only possible reason would be that the belief is an element of a system of beliefs coherent in the long run, and such a system probably corresponds to reality.

But as we have already seen, BonJour himself does not accept (a); he makes an exception for some beliefs of the form Necessarily A. He must therefore restrict (a) to empirical or a posteriori beliefs. But then why distinguish in this way among beliefs; why discriminate, in this fashion, against a posteriori beliefs? BonJour doesn't say.

As we saw above, however, perhaps there is a way to make a principled distinction between reason, on the one hand, and perception on the other, trusting the former more fully than the latter. But as we also saw, the differences between the two on the basis of which such discrimination could be justified are both relatively tenuous and a matter of degree. They do not support the claim that I can have maximal warrant for, say, necessarily 2 + 1 = 3, while a perceptual judgment has warrant for me only if I accept it on the evidential basis of other beliefs. It therefore seems to me that we don't have here the materials for an argument in support of the appropriately qualified version of (a). So why should we accept it; why suppose there is anything even remotely irresponsible in accepting in the basic way the deliverances of memory, say, or introspection, or perception? We all regularly do this; presumably a good argument would be required to show that such behavior is irresponsible; but no such argument seems available.

I conclude, therefore, that BonJour has not provided anything like a satisfactory account of warrant; BonJourian coherence is neither necessary nor sufficient for warrant. In the next chapter I shall turn to still another account of warrant, an account that deserves to be called ‘coherentist’ even though it is quite thoroughly different from the sorts of views traditionally so-called.