You are here

9: Reliabilism

The views so far considered have all been examples of internalism—some very close to the deontological heart and soul (and origin) of the internalist tradition, and others at some analogical distance. None of these views, as we saw, offers the resources for a proper understanding of warrant or positive epistemic status. Chisholm's dutiful but malfunctioning epistemic agents, Pollock's agent who reasons in accordance with incorrect norms, the coherent but inflexible climber—all have come to epistemic grief. None of the suggestions so far considered is anywhere nearly sufficient for warrant. And the reason is not far to seek. Internalism is a congeries of analogically related ideas centering about access—special access, of some kind, on the part of the epistemic agent to justification and its ground.1 What holds these ideas together, what is the source of the motivation for internalism, is deontology: the notion that epistemic permission, or satisfaction of epistemic duty, or conforming to epistemic obligation, is necessary and sufficient (perhaps with a codicil to propitiate Gettier) for warrant. Deontology generates internalism. Upon reflection, however, it is wholly clear that satisfaction of epistemic duty is nowhere nearly sufficient for warrant. I may be ever so dutiful; I may be performing works of magnificent epistemic supererogation, and nonetheless, by virtue of cognitive malfunction, be such that my beliefs have next to no warrant at all. Warrant is indeed a profoundly normative notion; it contains a deep and essential normative component; but that normative component is not or is not merely deontological.

It is equally clear, however, that internalism cut loose from deontology won't do the trick. Think of the conditions or states to which I can plausibly be thought to have the requisite sort of privileged access: doing my epistemic duty and trying my best, of course, but also being appeared to in such and such a way, having a coherent noetic structure (perhaps), following the epistemic policies that I think are appropriate, or the policies that upon sufficient reflection I would think are appropriate,2 and so on. For many (perhaps most) of these states it is far from easy to spell out a plausible notion of privileged access such that we do indeed have that sort of privileged access to them. These are monumental difficulties. Even if we ignore them, however, what we have seen is this: things can go as well as you please with respect to the states in question and some particular belief of mine; but that belief may still (by virtue of cognitive malfunction of one sort or another) be without warrant.

Internalism, therefore, is quite insufficient; for an account of warrant we must look elsewhere. But if internalism seems not to do the job, what more natural than to try externalism? I shall think of externalism as the complement of internalism; the externalist holds that it is not the case that in order for one of my beliefs to have warrant for me, I must have some sort of special or privileged access to the fact that I have warrant, or to its ground. Clearly externalism thus conceived is something of a catchall. Recently, however, there has been a great flurry of quite appropriate interest in a certain specific kind of externalism: the original and exciting reliabilist and quasi-reliabilist views of David Armstrong,3 Fred Dretske,4 and Alvin Goldman,5 and of those who take inspiration from them, such as William Alston,6 Marshall Swain,7 Robert Nozick,8 and many others. Reliabilism is the new boy on the block; it is innovative and original in the contemporary epistemological context. As a matter of fact, however, it isn't quite as original as it initially seems; Frank Ramsey proposed the germ of a reliabilist account of warrant in his 1926 essay “Truth and Probability.”9 According to Ramsey (roughly) the “reasonable” degree of belief is the proportion of cases in which the “habit” producing the belief produces true beliefs.10 Reliabilism, therefore, goes back at least to Ramsey; but externalism (taken broadly) goes back much further, back to Aquinas, back, in fact, all the way to Aristotle.11 Indeed (apart from some of the skeptics of the later Platonic Academy), it isn't easy to find internalists in epistemology prior to Descartes. On the long view it is really externalism, in one form or another, that has been dominant in our tradition. Armstrong, Dretske, Goldman, and their confreres are not so much proposing a startling new view as recalling us to the main lines of our tradition. 
(Before you take this as a point against them, remember that, as Hobbes observed, he who says what has never been said before says what will probably never be said again.)

Externalism, taken broadly, is right about warrant. But externalism as such is simply the denial of internalism: and what is needed is not simply the denial of deontology and internalism. What is needed is a positive (and, we hope, correct) account of warrant. In this chapter I propose to examine three externalist and reliabilist accounts of warrant: those offered or suggested by Alston, Goldman, and Dretske—offered or suggested, I say, because Alston and Goldman speak explicitly of justification rather than warrant. I shall argue that these accounts look in the right direction; but each also overlooks an element absolutely essential to our conception of warrant. Then in Warrant and Proper Function I shall spell out what I take to be the sober epistemological truth of the matter.

I shall argue, however, that no brief and simple, semialgorithmic account of warrant carries much by way of illumination. Our epistemic establishment of noetic faculties or powers is complex and highly articulated; it is detailed and many-sided. There is knowledge of (or, to beg no questions, belief about) an astonishingly wide variety of topics—our everyday external environment, the thoughts and feelings of others, our own internal life (an internal soliloquy can occupy an entire novel), the past, logic and mathematics, beauty, science, morality, modality, God, and a host of other topics. These faculties work with exquisite subtlety and discrimination, producing beliefs on these and other topics that vary all the way from the merest suspicion to absolute dead certainty. And once we see the enormous extent of this articulation and subtlety, we can also see that warrant has different requirements in different divisions or components or compartments or modules (the right word is hard to find) of that establishment; perhaps in some of these areas internalist constraints are indeed necessary for warrant.

I. Alstonian Justification

A. The Concept

I begin with William P. Alston's account of justification as presented in “Concepts of Epistemic Justification”12 and “An Internalist Externalism”13 (cited hereafter as CEJ and IE). As the second title indicates, Alston's thought here is a sort of bridge between internalism and externalism, a sort of halfway house between the two; beginning our transition to externalism with it may therefore reduce the shock. The account is externalist and even reliabilist in that, as we shall see, he holds that a person is justified in believing a proposition only if she believes it on the basis of a reliable indicator. Of course Alston's account is of justification, not warrant. Warrant is that (whatever it is) such that enough of it together with truth (and perhaps a codicil aimed at Gettier) is necessary and sufficient for knowledge; as we shall see, Alston does not claim that justification (as he conceives it) fills that bill. Still, I think we may be able to make progress toward a deeper understanding of warrant by considering his account of justification.

Anglo-American epistemologists of this century have concentrated on the notion of epistemic justification; but exactly what, asks Alston, is justification? Much more energy has gone into the question under what conditions beliefs have justification than into investigation of the nature of justification, into analyzing and making explicit our concept (or concepts) of justification. Setting out to redress the balance, Alston initially points out that justification has at least the following four features: it is a concept of something that applies to beliefs or believings; it is evaluative, and positively evaluative, so that rating a belief as justified is to attribute a desirable or favorable character to it; more specifically, it is epistemically evaluative, having to do with a favorable position with respect to truth (or the aim of acquiring true beliefs); and finally, it comes in degrees (CEJ, pp. 58–59). (We may therefore prefer to think of it as a quantity [rather than a property], a quantity that perhaps varies as a function of other quantities or properties.) Of course this gives us only a distant view of the concept. Trying for a closer look, Alston asks the following question: what is this favorable status which, according to the central core of the idea of justification, accrues to a justified belief? Here he notes an important watershed:

As I see it, the major divide in this terrain has to do with whether believing and refraining from believing are subject to obligation, duty, and the like. If they are, we can think of the favorable evaluative status of a certain belief as consisting in the fact that in holding that belief one has fulfilled one's obligations, or refrained from violating one's obligations to achieve the fundamental aim in question [that is, “the aim of maximizing truth and minimizing falsity in a large body of beliefs”]. If they are not so subject, the favorable status will have to be thought of in some other way. (CEJ, p. 59)

There is a hint here that the notion of justification as a matter of permission, of freedom from blameworthiness, of fulfillment of epistemic duty and obligation—in a word, the deontological notion of justification—is more natural, or at any rate more familiar than alternatives. This is surely plausible; as we saw in chapter 1, deontological notions of justification have been overwhelmingly dominant in twentieth-century Anglo-American epistemology. Exploring this family of ideas with care and insight, Alston pays particular attention to the ways in which doxastic phenomena can be within our voluntary control. His verdict is that none of the deontological notions will do the job: even the most promising of the bunch, he says, “does not give us what we expect of epistemic justification. The most serious defect is that it does not hook up in the right way with an adequate, truth-conducive ground. I may have done what could reasonably be expected of me in the management and cultivation of my doxastic life, and still hold a belief on outrageously inadequate grounds” (CEJ p. 67).

So the deontological answer to the question ‘What sort of evaluation is involved in justification?’ can't be right. “Perhaps it was misguided all along,” he says, “to think of epistemic justification as freedom from blameworthiness. Is there any alternative, given the non-negotiable point that we are looking for a concept of epistemic evaluation?” (CEJ, p. 69) The answer, of course, is that there are many alternatives. After another careful exploration of the field, he chooses his candidate:

S is Jeg [‘e’ for ‘evaluative’ and ‘g’ for ‘grounds’] justified in believing that p iff S's believing that p, as S did, was a good thing from the epistemic point of view, in that S's belief that p was based on adequate grounds and S lacked sufficient overriding reasons to the contrary. (CEJ, p. 77)

Here “grounds” would include other beliefs, but also experience (as in the case of perceptual beliefs); I refer the reader to the text for the qualification “as S did” (CEJ, p. 70) and for discussion of the nature of the basing relation (CEJ, pp. 71–72; IE, pp. 265–77). But why this emphasis upon grounds? Because, says Alston, in asking whether S's belief that p is justified (in the evaluative but nondeontological sense) “we are asking whether the truth of p is indicated by what S has to go on; whether given what S had to go on, it is at least quite likely that p is true. We want to know whether S had adequate grounds for believing that p, where adequate grounds are those sufficiently indicative of the truth of p” (CEJ, p. 71). Alston explains the idea of grounds being indicative of the truth of p in terms of conditional probability: “In other terms, the ground must be such that the probability of the belief's being true, given that ground, is very high” (IE, p. 269).14
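Alston's gloss of adequacy as high conditional probability can be made vivid with a small numerical sketch. The case counts, the function names, and the 0.95 threshold below are my illustrative assumptions, not Alston's figures; he deliberately leaves "very high" imprecise:

```python
# A toy gloss of Alston's adequacy condition: a ground g is adequate for
# believing p only if P(p | g) -- the probability that p is true given
# that the ground obtains -- is very high. The case counts and the 0.95
# threshold are illustrative assumptions, not Alston's own figures.

def conditional_probability(true_and_ground, ground_total):
    """Estimate P(p | g) as the fraction of ground-cases in which p is true."""
    return true_and_ground / ground_total

# Suppose the ground (say, a characteristic perceptual appearance) obtains
# in 1000 observed cases, and the proposition believed on its basis is true
# in 980 of them.
p_given_g = conditional_probability(980, 1000)

ADEQUACY_THRESHOLD = 0.95  # "very high", made artificially precise
print(p_given_g)                         # 0.98
print(p_given_g >= ADEQUACY_THRESHOLD)   # True: the ground counts as adequate
```

The artificial precision of the threshold is of course foreign to Alston's account, which leaves "sufficiently indicative" vague in just the way such epistemic notions usually are.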

A belief is epistemically justified, therefore, only if it is accepted on the basis of adequate grounds. But there is a further condition: these grounds must be accessible to the believer:

I find widely shared and strong intuitions in favor of some kind of accessibility requirement for justification. We expect that if there is something that justifies my belief that p, I will be able to determine what it is. We find something incongruous, or conceptually impossible, in the notion of my being justified in believing that p while totally lacking any capacity to determine what is responsible for that justification. (IE, p. 272)

The specific form of the internalist requirement Alston offers is determined by what he takes to be the origin of the intuitions supporting the accessibility requirement:

I suggest that the concept [that of epistemic justification] was developed, and got its hold on us, because of the practice of critical reflection on our beliefs, of challenging their credentials and responding to such challenges, in short the practice of attempting to justify beliefs. Suppose there were no such practice; suppose that no one ever challenged the credentials of anyone's beliefs; suppose that no one ever critically reflected on the grounds or basis of one's own beliefs. In that case would we be interested in determining whether one or another belief is justified? I think not. (IE, p. 273)

(Alston argues that what must be accessible to the agent is only the ground of the belief;15 the relationship between ground and belief by virtue of which the ground supports or is a reliable indicator of the belief need not be accessible.) And finally, how accessible must the ground be to the agent? Here (naturally enough) there is no very precise answer: “What is needed here is a concept of something like ‘fairly direct accessibility’. In order that justifiers be generally available for presentation as what legitimates the belief, they must be fairly readily available to the subject, available through some mode of access much quicker than that of lengthy research, observation, or experiment” (IE, p. 275).

B. Questions about Alstonian Justification

1. Where Does It Come From?

Alston begins by arguing that the deontological family of concepts of justification won't do, despite their naturalness and familiarity. What we need, he says, is an evaluative conception that is not deontological; he settles on one of a wide variety of possibilities. But what guides the search here? How do we determine which of all the many epistemic nondeontological values is the right one? There are hosts of epistemically valuable but nondeontological states of affairs: usually believing the truth; now believing the truth; having a belief formed by a reliable belief-producing mechanism; knowing that one's beliefs are formed by a reliable belief-producing mechanism; being Foley rational with respect to one's beliefs; having true beliefs on topics important for survival, or a good life, or deep understanding, or spiritual excellence; being such that one's cognitive faculties are nondefective; being such that one's beliefs are proportioned to the evidence; being suited to one's epistemic environment; being able to forget what would otherwise clutter one's memory; believing on the basis of a reliable indicator; believing on the basis of an accessible reliable indicator; believing on the basis of an accessible reliable indicator you know or justifiably believe is reliable; and many more. The rejectees as well as the lucky winner are all epistemically desirable; each is an epistemically valuable state of affairs. How does Alston decide among them, and what guides him in selecting one of them as the one that goes with epistemic justification?

To answer this question we must return to the brief historical excursus of chapter 1. First, the dominant tradition in Anglo-American epistemology has certainly been heavily deontological. The “justified true belief” account of knowledge is of course the one we learned at our mother's knee;16 a belief constitutes knowledge only if it is a justified belief. Justification is necessary for knowledge, and (along with truth) nearly sufficient for it; perhaps a fillip or epicycle (“the fourth condition”) is needed to appease Gettier, but the basic contours of the notion of knowledge are given by justification and truth.

And the fundamental notions of justification in this tradition—the ‘received tradition’, as we may call it to mark its dominance—have been deontological notions, or notions analogically but intimately related to deontological notions. Think, for example, of the classical Chisholm (see chapter 2): positive epistemic status, for him, is aptness for epistemic duty fulfillment. No doubt Chisholm is the dominant figure among contemporary deontologists, but he is only one deontologist among many.17 Indeed, the very term ‘justification’ is redolent of deontology. To be justified is to be blameless, to have done what is required, to be, like Caesar's wife Calpurnia, above reproach. It is to be such as not to be properly subject to censure, blame, reproach, reproof. Alston therefore objects to the use of the term ‘justification’ for any concept that is not deontological:

I must confess that I do not find ‘justified’ an apt term for a favorable or desirable state or condition, when what makes it desirable is cut loose from considerations of obligation and blame. Nevertheless, since the term is firmly ensconced in the literature as the term to use for any concept that satisfies the four conditions set out in section II, I will stifle my linguistic scruples and employ it for a non-deontological concept. (CEJ, p. 86 n. 21)

But of course ‘justification’ occupies the field; it is the term typically used to denote warrant, that which (Gettier to one side for the moment) stands between mere true belief and knowledge.

So the first thing to see is that in the received tradition, justification is necessary and nearly sufficient for warrant; and the second is that justification, in the received tradition, is thought of deontologically. But the next thing to see is that the received tradition follows John Locke in being inclined to see the central epistemic duty here as that of believing only on the basis of evidence, of proportioning belief to evidence. This tradition goes back to Locke; it has boasted clouds of witnesses ever since. (Among them, as we saw in chapter 1, are W. K. Clifford, Sigmund Freud, Brand Blanshard, H. H. Price, Bertrand Russell, Michael Scriven, and, more recently, Richard Feldman and Earl Conee.18) Perhaps this duty arises, as Alston and Chisholm suggest, out of a more basic Ur-duty to try to achieve the right relation to the truth; or perhaps (as Locke seems to suggest) the duty in question is sui generis, one attaching to a person whether or not conforming to it leads for the most part to truth; but in any event a fundamental duty is that of believing only on the basis of evidence. And when this thought is combined with the deontological conception of justification, the result is a powerful emphasis upon evidence, a strong tendency to see justification as in most cases a function of quality and quantity of evidence. “In most cases”; for of course insofar as modern classical foundationalism is an important part of the received tradition, beliefs that are either self-evident or appropriately about one's own introspectable states will have warrant without being accepted on the basis of evidence.

The shape of the concept of justification in the received tradition is clear: it involves a marriage of the idea that deontological justification is central to warrant (and hence to knowledge) with the notion that—at any rate over vast areas of the epistemic terrain—a fundamental intellectual duty is that of believing only on the basis of evidence.

Now return to the question: what guides Alston's search for the right or appropriate conception of justification? Why, of all the epistemically valuable states of affairs to link with justification, does he settle on the one he picks? What exactly is his project here? Perhaps the answer is to be found along the following lines: he aims to make explicit the various notions of justification lurking in the contemporary neighborhood, and aims to select the candidate that best fits the conditions laid down by the received tradition. As we have seen, there are three essential elements to that tradition: (a) justification is conceived deontologically, (b) justification heavily involves evidence or grounds, and (c) justification is necessary and nearly sufficient for warrant. We can understand Alston's choice among all those epistemically valuable states of affairs as a matter of trying to select the candidate that best fits these three conditions. Of course he quite correctly sees that no concept fits really well; no deontological concept is anywhere nearly sufficient for warrant; hence the received position (as I argued in chapter 1) is incoherent. He therefore looks for another epistemically valuable state of affairs—one that is perforce nondeontological—that will fill or nearly fill the bill.

An initial problem looms, however. In the tradition in question, justification is thought to be necessary for warrant and nearly sufficient for it (in the sense that in addition to justification all that is needed for warrant is a proviso to appease Gettier). Alstonian justification, however, is not (in Alston's view) necessary for knowledge (and hence not necessary for warrant): “Beliefs that, so far as the subject can tell, just pop into his head out of nowhere would not be counted as justified on this position” (IE, p. 281)—because there would not be an accessible ground for the belief. Such beliefs, however, might nonetheless constitute knowledge: “I do hold that mere reliable belief production, suitably construed, is sufficient for knowledge” (IE, p. 281). So Alstonian justification, unlike justification on the received conception, is not necessary for warrant.

But perhaps this problem is less real than apparent. Alston expresses skepticism as to whether there is knowledge that isn't based on grounds, even though he thinks perhaps there could be; he doubts that it ever happens that an item of knowledge consists in a belief that is reliably produced but simply pops into the subject's head.19 There could be knowledge of that sort, and if there were, it would be knowledge unaccompanied by justification; but perhaps the idea is that justification is a necessary condition for knowledge in the case of any belief that does not just pop into the agent's head. (And if, as Alston thinks likely, no cases of knowledge are in fact of that sort, every case of knowledge would be a case of justified belief, even if not necessarily so.)

If we understand him thus, it is easy to see why Alston's notion of justification takes the shape it does. First, it is no wonder, given the heavy emphasis upon evidence in the received tradition, that Alston gives pride of place to grounds. Second, it is equally easy to see the source of the requirement that these grounds be accessible to the agent. Alston finds “widely shared and strong intuitions in favor of some kind of accessibility requirement for justification” (IE, p. 272). These intuitions, I suggest, are to be accounted for in terms of the widely shared deontological conception of justification; deontology, as I argued in chapter 1, requires accessibility. Alston recognizes that deontology cannot play the role it is assigned in the received tradition: nonetheless a prima facie desideratum for a reconstruction of the received notion of justification, he thinks, is that it recognize and accommodate those widely shared intuitions in favor of an accessibility requirement for justification. But what, finally, about the specifically externalist component of Alstonian justification, the suggestion that the grounds on which I believe p must in fact be an indicator of its truth, if I am to be justified? How does this fit in? Perhaps in a double way. In the first place (as I argued), the received tradition prominently features the notion that a chief epistemic duty is that of believing only on the basis of evidence; that tradition also features the deontological conception of justification; and these two together yield the thought that being justified in believing p requires having evidence for p. Further, where q is indeed evidence for p, q presumably will indeed be an indicator of the truth of p. We can therefore see Alston's suggestion—that being justified in believing p requires that one believe p on the basis of a ground that is an indicator of the truth of p—as a generalization or broadening of the received tradition's emphasis upon evidence.

2. Alstonian Justification Nowhere Nearly Sufficient for Warrant

I think we can see that Alstonian justification is itself by no means either necessary or nearly sufficient for warrant. Of course Alston does not claim that it is necessary; as we have seen, he suggests that a belief that just popped into my head could possibly be knowledge, but would not be justified because it was not grounded. But neither is it sufficient, or even sufficient up to Gettier problems. S's belief that p is Alston-justified if it is based on a ground that is accessible to S and is a reliable indicator of the truth of p. Clearly, however, a belief could meet that condition even if it had little or no warrant. There are different kinds of examples here. Note that Alstonian justification does not require that S know or justifiably believe that the ground of her belief is in fact reliably connected with the truth of that belief (although it may preclude S's believing that the ground in question is not a reliable indicator: that belief would be a defeating and possibly overriding condition). Accordingly: suppose I often believe that someone I meet is a fine fellow on the basis of a certain kind of facial appearance, a sort of scrunched-up look around the eyes. I now form beliefs in this way only because once years ago I very much admired a comic book character who looked like that. Even if it turns out that the look in question really is a reliable indicator of being a fine fellow, my belief still has little or no warrant for me. Even if it is a powerful indicator and I believe the proposition very firmly, I still wouldn't know it.

A second kind of case: suppose that some standard set of axioms for real analysis has the consequence that there are 4 successive 7s in the decimal expansion of π; suppose further that I believe those axioms, and believe that there are 4 successive 7s there on the basis of them. This is not, however, because I can see or show that they do have that consequence, or because I believe that someone else has shown that. Instead, it is due to a nasty little glitch in my belief-forming apparatus: on the basis of those axioms I believe, for any number n I've thought of, that there are n successive 7s in the decimal expansion of π. Under these conditions, I may be justified in accepting these beliefs, but none of them has any warrant for me, not even the ones entailed by the axioms in question.

A third kind of case: suppose (contrary to what most of us believe) the National Enquirer is in fact extremely reliable in its accounts of extraterrestrial events. One day it carries screaming headlines: Statue of Elvis Found on Mars!! Due to cognitive malfunction (inducing the “epistemic incontinence” Alston speaks of elsewhere), I am extremely gullible, in particular with respect to the National Enquirer, always trusting it implicitly on the topic of extraterrestrials. (And, due to the same malfunction, I don't believe anything that would override the belief in question.) Then my belief that a statue of Elvis was found on Mars is in fact based on a reliable indicator which is appropriately accessible to me; and I don't know or believe anything overriding this belief. But surely the belief has little by way of warrant.

A fourth example. Where the ground of a belief is in fact a reliable indicator, this will be, naturally enough, because of the nature of the indictor and the relation between it and the proposition in question. More generally, it will be because of the character of the cognitive environment in which the subject finds himself. Imagine, therefore, that I suffer from a rare sort of malady. A certain tune is such that whenever I hear it, I form the belief that there is a large purple animal nearby. Now in my cognitive environment, this is not in fact an indicator of the truth of this belief; so the belief has no Alstonian justification. But imagine that I am suddenly transported without my knowledge to some foreign environment—Australia, say; and imagine further that there, when that tune is heard, there is almost always a large purple animal nearby. (The tune in question, as it turns out, is the love call of the double-wattled purple cassowary.) In my new cognitive environment, the tune is indeed a reliable indicator of the truth of the belief; but of course the belief in question would (initially, at least) have no warrant—it would have no more warrant for me in Australia than it did in my original cognitive environment.

It is easy to see a recipe for constructing examples here. All we need are cases where some phenomenon is in fact a reliable indicator of the truth of a proposition, but my believing the proposition in question on the basis of that phenomenon arises from cognitive malfunction. A last example, then: suppose I suffer from two maladies. First, I visit a neural clinic for orthopedic surgery. Due to an appalling mix-up, I emerge from the operation with a serious disorder: whenever I have a pain in my right shoulder I form the belief, on the basis of that pain, that there is something wrong with my left knee. Next, I fall victim to a brain tumor that involves a specific and highly characteristic symptom: every so often it causes a vascular disorder—a constriction in a certain vein—in my left knee, and at the same time a sharp pain in my right shoulder. (I am entirely unaware of mix-up and tumor.) As things now stand, then, the pain in my shoulder, which is the ground of my belief that there is something wrong with my knee, is a reliable indicator of a disorder there. (We may add that this belief is not overridden by anything else I know or believe; perhaps the tumor also suppresses any beliefs that would otherwise defeat the belief in question.) I therefore meet the conditions for Alstonian justification; but surely that belief has little or no warrant for me. What is important to see in this case is that an indicator may in fact be a reliable indicator, but only accidentally reliable—reliable in a way that from an epistemic or cognitive point of view is merely accidental. This presages a problem that arises much more generally for reliabilism. A ground, or indicator, or a belief-forming mechanism, can be reliable just by accident—due, for example, to a freakish malfunction; and in those cases there will be no warrant even though there is reliability.

Once more, the important thing to see here, I think, is the central role, for warrant, of the idea of proper function, of absence of cognitive dysfunction.

II. Dretskian Reliabilism

A. The Basic Idea

Reliabilists come in at least two styles. The first sees warrant in terms of origin and provenance: a belief has warrant for me if it is produced and sustained by a reliable belief-producing mechanism. The second sees warrant as a matter of probability; a person is said to know a (true) proposition A if he believes it, and if the right probability relations hold between A and its significant others. On the first style, probability may figure in the account of what it is for a belief-producing mechanism to be reliable, but nothing need be said about the probability, conditional or otherwise, of the particular belief in question. On the second what matters is the probability of the belief in question; pedigree counts only as it figures into probability. Alvin Goldman's account of warrant is of the first sort, and I shall turn to it in the next section. The most powerfully developed account of the second sort, however, is to be found in Fred Dretske's Knowledge and the Flow of Information (hereafter cited as KFI), to which I now turn.

According to Dretske,

(D1) K knows that s is F = K's belief that s is F is caused (or causally sustained) by the information that s is F. (KFI, p. 86)

Two preliminary comments: Dretske is primarily concerned with perceptual knowledge; in particular his account is not designed to apply to a priori knowledge, such as K's knowledge that, say, 7 + 5 = 12. Secondly, the account is restricted to what he calls “de re content” (KFI, p. 66); it is restricted, he says, to the kind of case where what K knows is a piece of information of or about s.

Now what sort of animal is this “information that s is F”? And what is it for a thing of that sort—presumably an abstract object or ensemble of abstract objects—to cause or causally sustain a belief? So far as I can see, Dretske gives little by way of answer to the first question. What he does give are many examples of the sort the information that s is F. There is, for example, the information that Sam is happy, that the peanut is under shell number 3, that Susan is jogging. One might say that these are bits of information, except that the term ‘bit’ has been preempted for a measure of information. We are to think of information as being generated by or associated with states of affairs; and the amount of information generated by a given state of affairs depends upon the number and probability of the possibilities that state of affairs excludes. Suppose I throw a fair 64-sided die. The information that the die came up on a side numbered from 1 to 32 reduces the possibilities by a half and carries 1 bit of information; the information that the die came up on a side numbered from 1 to 16 reduces the possibilities by another half and accordingly carries 2 bits; the information that the die came up (say) 3 reduces the original 64 possibilities to one and carries 6 bits of information. As you can guess from the example, if a piece of news reduces n (equally probable) possibilities to 1, then the amount of information that piece of news boasts is given by log (to base 2) n. In the general case, where the possibilities involved need not be equiprobable (and where P(A) is the probability that a given possibility A is realized) the amount of information generated by A is given by

(D2) I(A) = log (1/P(A)), that is, −log P(A).
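The die arithmetic can be checked directly against (D2); here is a minimal sketch in Python (the helper name `info_bits` is mine, not Dretske's):

```python
import math

def info_bits(p):
    """Amount of information per (D2): I(A) = log2(1/P(A)) = -log2 P(A)."""
    return -math.log2(p)

# A fair 64-sided die: each face has probability 1/64.
print(info_bits(32/64))  # came up 1-32: halves the possibilities -> 1.0 bit
print(info_bits(16/64))  # came up 1-16: 64 reduced to 16 -> 2.0 bits
print(info_bits(1/64))   # came up (say) 3: 64 reduced to 1 -> 6.0 bits
```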

There are deep perplexities here. What are the relevant alternative possibilities for, for example, Susan is jogging? (D2) is applicable only where the possibilities involved are finite in cardinality; is that so for such an uncontrived real life possibility as Susan is jogging? What would be the probability of something like Susan is jogging? These are pressing questions for an account of this kind, and I don't know of any even reasonably satisfactory answers to them. They are also less urgent than they look, however, because the notion of the amount of information does not crucially enter into Dretske's account of knowledge. Nor need we know, for Dretskian purposes, just what information is; all we really need to know is what it is for a piece of information to cause or causally sustain a belief. Here the answer is disarmingly straightforward:

Suppose a signal r carries the information that s is F and carries this information in virtue of having the property F′. That is, it is r's being F′ (not, say, its being G) that is responsible for r's carrying this specific piece of information. Not just any knock on the door tells the spy that the courier has arrived. The signal is three quick knocks followed by a pause and another three quick knocks.… It is the temporal pattern of knocks that constitutes the information-carrying feature (F′) of the signal. The same is obviously true in telegraphic communication.

When, therefore, a signal carries the information that s is F in virtue of having property F′, when it is the signal's being F′ that carries the information, then (and only then) will we say that the information that s is F causes whatever the signal's being F′ causes. (KFI, p. 87)

Given (D1), then, what we have is that a person K knows that s is F if and only if (1) K believes that s is F, (2) there is a signal r such that r has some property F′ in virtue of which it carries the information that s is F, and (3) r's having F′ causes K to believe that s is F. Since we can safely drop the reference to the property F′ of the signal by virtue of which it carries the information that s is F, what the analysis boils down to is the idea that K knows that s is F if and only if K believes that s is F and this belief is caused by a signal that carries the information that s is F. So what we still need to know is what it is for a signal to carry the information that s is F. This is given by

(D3) A signal r carries the information that s is F = the conditional probability of s's being F, given r (and k), is 1 (but, given k alone, less than 1). (KFI, p. 65)

Now, k, as Dretske explains, is the background knowledge of the receiver. (D3) must therefore be relativized to be accurate; a signal may carry the information that s is F relative to you but not to me. You already know that s is F; so the probability of s's being F relative to your background knowledge is 1; no signal carries the information that s is F relative to you. I don't know that s is F; so any signal r which is such that the probability of s's being F on r & k equals 1 (where k is my background knowledge) carries the information that s is F with respect to me. If you know that s is F, then no signal carries the information that s is F with respect to you; if you don't know that s is F, then that information is carried with respect to you by any state of affairs whose conjunction with what you do know entails that s is F. We can therefore rewrite (D3) as

(D4) r carries the information that s is F relative to K iff P((s is F) / (r & k)) = 1 and P((s is F) / k) < 1.
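For illustration, the condition in (D4) can be modeled over a toy finite space of equiprobable possibilities. The encoding below is entirely my own scaffolding (Dretske offers no such model); it simply checks that the probability of s's being F on r together with k is 1 while its probability on k alone is less than 1:

```python
from fractions import Fraction

def cond_prob(target, given, worlds):
    """P(target | given) over a finite set of equiprobable worlds.
    target and given are predicates on worlds."""
    given_worlds = [w for w in worlds if given(w)]
    hits = [w for w in given_worlds if target(w)]
    return Fraction(len(hits), len(given_worlds))

def carries_information(s_is_F, r, k, worlds):
    """(D4): r carries the information that s is F relative to K iff
    P(s is F | r & k) = 1 and P(s is F | k) < 1."""
    r_and_k = lambda w: r(w) and k(w)
    return (cond_prob(s_is_F, r_and_k, worlds) == 1
            and cond_prob(s_is_F, k, worlds) < 1)

# Toy model: worlds are (signal occurs, s is F) pairs.
worlds = [(True, True), (False, True), (False, False)]
s_is_F = lambda w: w[1]
r = lambda w: w[0]   # the signal occurs
k = lambda w: True   # K's background knowledge rules nothing out
print(carries_information(s_is_F, r, k, worlds))  # True: P(F|r&k)=1, P(F|k)=2/3
```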

And now we can say that

(D5) K knows that s is F if and only if K believes that s is F and there is a state of affairs r's being G such that (1) r's being G causes K to believe that s is F and (2) P((s is F) / (r's being G & k)) = 1 and P((s is F) / k) < 1.

We saw that the disturbing notion of the amount of information associated with a specific event or state of affairs can safely be ignored, since that notion plays no role in Dretske's final account of knowledge. But now we see that the same goes for the other specifically information theoretic concepts; this analysis of knowledge, when spelled out, involves only the notions of probability, belief, and causation, and doesn't involve them in the problematic ways in which they show up in information theory.20

B. Problems

As we have already seen, deontological internalism, coherentism, Bayesianism—all come to grief when we reflect on the ways in which our noetic faculties can malfunction. But the same sorts of problems plague Dretske's account. Consider D5. To see that it won't do the trick, consider The Case of the Epistemically Serendipitous Lesion. Suppose K suffers from a serious abnormality—a brain lesion, let's say. This lesion wreaks havoc with K's noetic structure, causing him to believe a variety of propositions, most of which are wildly false. It also causes him to believe, however, that he is suffering from a brain lesion. K has no evidence at all that he is abnormal in this way, thinking of his unusual beliefs as resulting from an engagingly original turn of mind. Now according to D5, it follows that K knows that he is suffering from a brain lesion. His having this lesion causes him to believe that he is thus afflicted; the probability of his suffering from a brain lesion on his background knowledge k is less than 1; but of course its probability on k & K is suffering from a brain lesion is 1. But surely K does not know that he is suffering from a brain lesion. He has no evidence of any kind—sensory, memory, introspective, whatever—that he has such a lesion; his holding this belief is, from a cognitive point of view, no more than a lucky (or unlucky) accident.

Indeed, we can add, if we like, that K has powerful evidence for the conclusion that he is not thus suffering; he has just been examined by a trio of world famous experts from the Mayo Clinic who (mistakenly) assure him that his brain is entirely normal. In this case, then, K's belief that he has a brain lesion not only is such that he has no evidence for it; he has first-rate evidence against it. In such a situation K clearly does not know that he has a brain lesion, despite the fact that this belief meets Dretske's conditions for knowledge.

Examples of this kind can be multiplied; so let's multiply a couple. You have wronged me; you have stolen my Frisian flag. By way of exacting revenge I sneak into your house at night and implant in your dog a source of extremely high frequency electromagnetic radiation. This radiation has no effect on you or your dog, except to cause you to form the belief that aliens from outer space have invaded your house and replaced your dog with a nonterrestrial look-alike that emits ultraviolet radiation. You christen this creature (who is in fact your dog) ‘Spot’. Your belief that Spot emits ultraviolet radiation then satisfies Dretske's conditions for knowledge: Spot's emitting ultraviolet radiation causes you to believe that he does; relative to what you know this is not probable, but relative to the conjunction of what you know with Spot emits ultraviolet radiation, its probability is, of course, 1. Surely you do not know that Spot emits such radiation. Indeed, as in the previous case we can add that you have powerful (though misleading) evidence against this proposition. You have had Spot examined by a highly competent group of physicists based at the Stanford linear accelerator; I have corrupted them, bribing them to tell you that Spot is wholly normal; but you are nevertheless unable to divest yourself of the belief in question. Surely you don't know.

A third example. You and I each hold a ticket for a valuable lottery; first prize is an all-expenses-paid week in Philadelphia. (Second prize, of course, is two weeks in Philadelphia.) I approach the person designated to make the official draw and offer him a bribe to fix the lottery: I am to coat my ticket with a substance S and he is to coat his hand with a substance S in virtue of which my ticket will stick to his hand. After I leave, you appear and offer him twice as much; he accepts. He then coats your ticket with the substance S; this causes your ticket to stick to his hand, thus causing you to win. It also causes me, by virtue of an otherwise undetectable abnormality on my part, to believe that you will win. You and I witness the drawing; I suddenly and unaccountably find myself with the belief that you will win. On Dretske's account, I know that you will win, despite my knowledge that I have fixed the lottery. For (where T5 is your ticket) T5's being coated with S causes me to believe that you will win; that you will win (we may suppose) has a probability of 1 on the conjunction of my background knowledge with T5 has been coated with S but a lower probability on my background knowledge alone. But surely I don't know, under these conditions, that you will win.

Someone might object21 that these examples have conformed to the letter but not the spirit of (D5). On that definition, K knows that s is F if there is a signal r having some property G such that r's being G causes K to believe that s is F, and the probability of the latter on the former (plus k) is equal to 1 and greater than that of the latter on k alone; but in my examples, r's being G is identical with s's being F. That is quite true; in these examples I did indeed collapse r's being G with s's being F. I did so, however, only in order to avoid avoidable problems as to whether P(s's being F / r's being G) really equals 1. (After all, if the same state of affairs is both r's being G and s's being F, then it will be beyond dispute that the probability of s's being F on r's being G (and k) is 1.) The identity of r's being G with s's being F, however, is an inessential feature of these examples; we can easily amend them in such a way as to mollify the objector. In The Case of the Missing Frisian Flag, let r's being G be as before and let s's being F be Spot is emitting radiation, or Spot would cause a Geiger counter to go crazy, or Spot is composed of atoms many of which are unstable. For the sake of concreteness, revise the example as follows. I implant a source of high-energy radiation in your dog Spot; it is a lawlike truth that any dog in which a source of high-energy radiation has been implanted will lose its hair within seven days; Spot's emitting this high-energy radiation causes a brain lesion in you which in turn causes you to form a large number of wildly false beliefs about Spot (that he is in fact a mermaid, that he can speak fluent French but refuses to out of sheer obstinacy, and so on), but also causes you to form the true belief that Spot will lose his hair within the next two weeks. You have no evidence of any sort for your belief and much evidence against it. 
(You have just had Spot examined by a team of veterinarians who assure you that he is entirely normal along tonsorial lines.) Here r's being G is not collapsed into s's being F; you satisfy the conditions laid down by D5 for knowledge; but surely you don't know. (The Philadelphia lottery example can be amended similarly.)

Clearly, there are as many examples of this sort as you please. One recipe for constructing them is just to consider some event e that causes K to believe that e occurs (or to believe some proposition entailed by e's occurrence, or some proposition whose probability with respect to e's occurrence is 1) where e causes K to form the belief in question by virtue of some cognitive abnormality, and in such a way that it is just an accident, from a cognitive point of view, that the belief is true. The problem for Dretske's account is clear. If we restrict ourselves to the sort of knowledge he is thinking of, then indeed if I know that s is F there must be a signal r's being G related to s's being F in something like the way he suggests. But the problem is that this isn't sufficient for knowledge; knowledge can be absent even if r's being G and s's being F are related in the way Dretske suggests; for they can be related in that way when it is merely a cognitive accident that s is F is true. As I argue in Warrant and Proper Function, they can be related in this way but fail to be related in the way required by the design plan of our noetic structure; but then my belief that s is F does not constitute knowledge. What these examples show is that something further must be added to Dretske's account; we must add, somehow, that K's noetic faculties, or those involved in the production of the belief in question, are functioning properly, are in good working order.

Dretske's account, then, like the others, suffers because it fails to pay explicit attention to the notion of the proper function of our cognitive equipment.22

III. Goldmanian Reliabilism

A. The Old Goldman

Alvin Goldman's first version of reliabilism is reliabilist indeed: “The justificational status of a belief,” he says, “is a function of the reliability of the process or processes that cause it, where (as a first approximation) reliability consists in the tendency of a process to produce beliefs that are true rather than false.” After some interesting preliminary skirmishes, he gives his official account in a sort of recursive form:

(a) If S's belief in p results from a reliable cognitive process, and there is no reliable or conditionally reliable process available to S which, had it been used by S in addition to the process actually used, would have resulted in S's not believing p at t, then S's belief in p at t is justified.

(b) If S's belief in p at t results (“immediately”) from a belief-dependent process that is (at least) conditionally reliable, and if the beliefs (if any) on which this process operates in producing S's belief in p at t are themselves justified, then S's belief in p at t is justified.23

He then adds an appropriate closure clause. Here Goldman speaks of justification rather than warrant. As I argued in chapter 1, however, Goldman does not use the term ‘justification’ as a name for justification (properly so-called); he uses it instead as a near synonym for warrant. (Only a near synonym, because what he calls justification is not quite sufficient for warrant; he adds a fourth condition, ‘local reliability’, to justification to get warrant.)

This is reliabilism pure and unalloyed: call it paradigm reliabilism. Although its initial appeal is undeniable, it faces many difficult problems—problems I have explored elsewhere.24 In particular, it is clear that the conditions laid down by paradigm reliabilism as necessary and sufficient for warrant are nowhere nearly sufficient. Perhaps we can summarize this point as follows. Note first that (on Goldman's showing) what determines the justification of a belief is a process type (not token).25 Now clearly any given concrete cognitive process will be a token of many different cognitive process types—types with varying degrees of reliability. So consider a given belief—Paul's belief that he is watching “Dynasty,” for example—and the concrete process that yields it. There are many types of which that process is a token: which is the relevant type—that is, which of these types is the one such that its degree of reliability determines the degree of justification Paul's belief enjoys? Here we encounter the problem of generality, noted by Goldman and developed by Richard Feldman:26 if we take the relevant type to be relatively narrow, then we face one set of unhappy consequences; if we take it to be broad, we face other unhappy consequences.

Let me put this problem my own way. What determines the degree of justification, according to Goldman, is the degree of reliability of the relevant process type. But then the relevant process type, the one that determines the degree of warrant of the belief in question, must be a very narrow type: it must be such that all the beliefs in its output have the same degree of warrant. (It couldn't be a broad type like vision, say, because the outputs of processes exemplifying this type will have many different degrees of warrant: perceptual beliefs resulting from examining a middle-sized object from ten feet in bright and sunny conditions, obviously, will have more warrant than beliefs arising from distant vision on a dark and foggy night.) So suppose we take relevant types narrowly enough so that all the beliefs in the output of a relevant type have the same degree of justification or warrant: then first, it will be extremely difficult to specify any relevant type. Indeed, if, as Goldman suggests, the relevant type must be specified in psychological or physiological terms, we won't be able to specify any such types at all; our knowledge is much too limited for that. But second and more important, there will be many processes (thus narrowly construed) that are reliable, but not such that the output beliefs have much by way of warrant.

There are many kinds of examples here; I shall mention just one. Adapt The Case of the Epistemically Serendipitous Lesion. There is a rare but specific sort of brain lesion (we may suppose) that is always associated with a number of cognitive processes of the relevant degree of specificity, most of which cause its victim to hold absurdly false beliefs. One of the associated processes, however, causes the victim to believe that he has a brain lesion. Suppose, then, that S suffers from this sort of disorder and accordingly believes that he suffers from a brain lesion. Add that he has no evidence at all for this belief: no symptoms of which he is aware, no testimony on the part of physicians or other expert witnesses, nothing. (Add, if you like, that he has much evidence against it; but then add also that the malfunction induced by the lesion makes it impossible for him to take appropriate account of this evidence.) Then the relevant type (while it may be hard to specify in detail) will certainly be highly reliable; but the resulting belief—that he has a brain lesion—will have little by way of warrant for S.27

B. The New Goldman

Goldman's paradigm reliabilism has deep and debilitating problems. This isn't the only brand of reliabilism he offers, however, and I turn now to the significantly different account of warrant to be found in his book Epistemology and Cognition (hereafter cited as EC)28. He quite properly begins by pointing out (p. 3) that there is an important normative element in such crucial epistemic notions as warrant, justification, evidence, and the like. No doubt epistemological deontologism is false; no doubt warrant cannot be explained in terms of satisfaction of duty; nonetheless the central notions of epistemology are profoundly normative. Goldman sees this normativity as essentially a matter of permission and obligation, of conformity to standards or rules. His notion of warrant, therefore, while it is not deontological, is analogically related (related by the analogy of rule governance) to deontological notions of warrant. And what is perhaps of greatest interest here is his combining this idea with the notion that what he calls ‘justification’ (call it ‘Goldmanian justification’) and warrant crucially involve reliability. Suppose we begin with Goldmanian justification. A belief is justified for a person, says Goldman, if it is permitted by a right rule of justification; a justification rule is right if it is an element of a right system of justification rules; and a system of rules is right if it is appropriately reliable—that is, has a high enough “truth ratio.” But the actual spelling out of Goldman's view isn't quite so simple. Suppose we follow him in approaching his conception of justification (Goldmanian justification) by stages. At the first stage, we have

(P1) S's believing p at time t is justified if and only if (a) S's believing p at t is permitted by a right system of justificational rules (J-rules), and (b) this permission is not undermined by S's cognitive state at t. (EC, p. 63)29

“Systems of J-rules,” says Goldman, “are assumed to permit or prohibit beliefs, directly or indirectly, as a function of some states, relations, or processes of the cognizer.… Thus someone being ‘appeared to’ in a certain way at t might be permitted to believe p at t” (EC, pp. 60–61).

Why does Goldman speak here of rule systems rather than rules simpliciter? Perhaps for the following sort of reason. The justification of a belief B at a time t may very well depend upon the justificational status of one or more other beliefs B1, …, Bn at earlier times t1, …, tn—or, indeed, on the justificational status of a belief B′ at a later time t′ (EC, p. 78). For it could be that some processes are fairly reliable (have a fairly high associated truth ratio) themselves but are such that their combination with certain other processes is unreliable. For example, some processes take beliefs (among other things) as input and yield other beliefs as output. Thus what leads to my present assessment of your intentions toward me is (in part) my opinion of how you are ordinarily disposed toward me together with my grasp of your present behavior. And even if this process—this way of coming to beliefs about someone's intentions toward me—is ordinarily reliable, its combination with some unreliable process for forming the input beliefs may be quite unreliable. I am pathologically paranoid; I believe you (and everyone else) have been biding your time, waiting for a propitious opportunity to do me in; I also believe that in your opinion the time is now ripe. You (a world-class karate expert) approach me, your hand upraised in friendly greeting; I form the belief that you are about to deal me a deadly blow and run off in terror. My belief that you are about to strike has little by way of Goldmanian justification or warrant, even though the process yielding this belief is in general highly reliable. So a J-rule that certified just any belief produced by just any reliable process would certify beliefs that have little by way of justification. Whether a given belief in the output of a process like this one has justification will depend in part upon the epistemic status of the beliefs forming its input. 
In cases of this sort, therefore, we shall have to think of a given belief as the result of a series of processes, where the beliefs produced at earlier stages must themselves have been formed according to right rules; accordingly, what is at issue (as Goldman sees it) is systems of rules—or, if single rules, then rules of great complexity. Further, the rule systems in question must have a certain completeness—a completeness I shall forbear to explore, remarking only that the relevant rules or rule sets will have to take account of a good deal of an agent's epistemic history.

Now Goldman argues that J-rules should be stated, not in terms of cognitive state transitions (EC, p. 77) but in terms of what he calls ‘cognitive processes’ “where by ‘process’ we mean a determinate kind of causal chain” (p. 85)—that is (I take it), a set of events e1, …, en so related that the earlier ei stand in the relevant causal relation to the later ei. So the “framework principle”:

(P2) A cognizer's belief in p at time t is justified if and only if it is the final member of a finite sequence of doxastic states of the cognizer such that some (single) right J-rule system licenses the transition of each member of the sequence from some earlier state(s). (EC, p. 83)

But now for the crucial question: under what conditions is a system of J-rules a right system? Here Goldman specifies a criterion, a set of necessary and sufficient conditions for a set of J-rules being right (and it is here that we see what, according to him, warrant is):

(ARI) A J-rule system R is right if and only if R permits certain (basic) psychological processes, and the instantiation of these processes would result in a truth ratio of belief that meets some specified high threshold (greater than .5). (EC, p. 106)

How shall we understand this? Here we encounter some real problems. First: these rules, says Goldman, take the form of process permissions; a rule permits or authorizes certain processes, processes typically involving a belief or a kind of belief. The processes in question, furthermore, are to be thought of as process types: “no criterion will be plausible unless the rules it authorizes are permission rules for specific (types of) cognitive processes” (EC, p. 85; Goldman's emphasis). The reason, he says (mistakenly, as I shall argue), is that it is only cognitive process types, not tokens, that can properly be said to be reliable or unreliable. So the J-rules are rules that apply to specific process types. Of course not just any cognitive process type will be the sort in terms of which the rules are to be stated. Any given concrete process (obviously enough) will be a token of many different types—types of widely differing degrees of reliability. Suppose Paul is in general fairly reliable, but unfortunately forms the false belief that he is Napoleon. Then his belief is a token both of the type process terminating in the belief that Paul is Napoleon, and of the type cognitive process taking place in Paul; the latter but not the former is reliable. Of course the belief in question has little warrant. The latter type, then, despite its reliability, is not such that every token of it will have warrant, and a J-rule set that licensed processes of that sort will not be a right J-rule set, even if it only licenses processes that are reliable. This means that the J-rules do not concern just any type: only certain kinds of types will be the ones to which J-rules are addressed, the ones that J-rules permit or prohibit. These relevant types—the types to which the J-rules are addressed—will be of great specificity. 
They will not be such types as the visual process, or even the visual process in a 40-year-old Australian male, but much more specific types—“the critical type,” says Goldman, “is the narrowest type causally operative in producing the belief token in question” (EC, p. 50). So J-rules permit or prohibit types, and the types they permit or prohibit will be very narrow or specific.

But (and here is the first problem) what would such a narrow type be like? Goldman gives us no guidance at all. When he argues for the plausibility of reliabilism, he mentions such types as memory, perception, deductive reasoning, and the like. As he points out, we think of these as reliable and as for the most part issuing in beliefs that have warrant; we contrast them with, for example, wishful thinking and paranoia, which we think unreliable, and such that the beliefs they issue in have little or no warrant. But of course perception, memory, deductive inference, and their kin are not at all the sort of types for which the J-rules must be stated; they are much too broad. Goldman does not give any examples of relevant types—no doubt for the very good reason that we don't know of any. Take your belief that Aberdeen, Scotland, is some thousand miles north of Aberdeen, South Dakota: we have no idea at all as to what might be the narrowest type causally operative in producing this belief (if there is one: perhaps a pair of types tie for narrowest). Indeed, we have no idea as to what the candidates for this post might be. Goldman says that “by ‘process’ we mean a determinate kind of causal chain” (EC, p. 85)—a causal chain, presumably, consisting of events taking place in someone's cognitive system, the last item of which is a belief. But what are these events like? What sort of events are they? Neural events, presumably, and ones we aren't at present so much as able to describe or think about. Perhaps all we can say is that the relevant types will be ones of this sort: event of kind e1, followed by event of kind e2, …, followed by event of kind en (where the class of eligible e's is to be specified in psychological or physiological terms [and will thus have to be left to future science], and where that class is to be restricted in some way so as to eliminate captious and not so captious counterexamples).

Further: these process types, naturally enough, have or can have instantiations. If a process is a type, then an instantiation of a process will be a token of that type, a sequence of concrete specific events the last item of which is a belief; and the events in the sequence will be so related that the earlier events are causes (or part causes) of the later events. So the J-rules will be or contain such items as process A is permitted, where process A will be a type k1, …, kn of sequences of cognitive events, kn being, for example, the belief that p for some proposition p. (If we say that the types must be stateable in terms of psychological or physiological terminology, kn will presumably have to be the cortical or (more broadly) physiological correlate of the belief that p (if indeed there is any such thing).) An instantiation or token of A will be a sequence of cognitive events; the last item in the sequence will be a belief that p, and will be appropriately caused by earlier members of the sequence.

And now we come to a really thorny problem. The criterion for rightness of systems of rules is counterfactual: a J-rule system R is right if and only if R permits certain psychological processes, and the instantiation of these processes would result in a high truth ratio of belief. Would, if what? What is the antecedent of the relevant counterfactual and what is its modality? Goldman does not say. Perhaps, however, we can make progress as follows. A system of J-rules, he says, has a truth ratio associated with it in a given possible world W (EC, p. 107). How can we understand that? Again, Goldman does not say. Perhaps, however, we are to think of it as follows. First, a system S of J-rules, says Goldman, will permit or certify certain patterns of processes; process P is permissible, provided that it has been preceded by any of processes P1, …, Pn; a process taking a belief B as input is permissible provided B is the output of some permissible process. So consider the process patterns that are permitted by S. These, of course, will be patterns of types of cognitive processes. We can combine these permitted patterns of such types into a megatype T: for example, displaying processes P1, …, Pn (in that order), or P1′, …, Pn′, or P1″, …, Pn″, or…. Now some (but of course not all) of these megatypes will be instantiated: a megatype T is instantiated if there is a person who displays the ordered sets of processes it permits in the order it permits them. And to determine the truth ratio of a set of rules (that is, the actual truth ratio, its truth ratio in the actual world), we compute the ratio of true beliefs to total beliefs in the instantiations of the megatypes it permits. (There are problems here, but I shall resolutely ignore them.)

Once we see what the truth ratio of a set of J-rules is, then of course it is easy to see what the truth ratio of a set of rules in a possible world is: the truth ratio of S in W is simply the truth ratio S would have had, had W been actual; to put it differently, the truth ratio of S in W is the ratio of true beliefs to beliefs simpliciter taken over all the instantiations, in W, of all the megatypes licensed by S. (So of course a system of rules will have different truth ratios in different possible worlds.) And Goldman's idea is that the relevant truth ratio for a given system S of J-rules in a possible world W is not simply its truth ratio in the actual world; what counts, he says, is the truth ratio of a system of rules in possible worlds close to W:30

(ARI) A J-rule system S is right in a possible world W if and only if S has a sufficiently high truth ratio in the worlds close to W.31
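The bookkeeping behind (ARI) can be made concrete in a few lines. The following is a minimal sketch on my part, not Goldman's formalism: a “world” is just a list of belief tokens produced by the processes a rule system permits, each tagged as true or false; the function names, the sample belief contents, and the 0.75 threshold are all illustrative assumptions.

```python
# Toy model of Goldman-style truth ratios. A "world" is a list of
# belief tokens (content, is_true) output by a rule system's
# permitted processes; everything here is an illustrative assumption.

def truth_ratio(beliefs):
    """Ratio of true beliefs to total beliefs (tokens, not types)."""
    return sum(1 for _, is_true in beliefs if is_true) / len(beliefs)

def right_by_ari(worlds_close_to_w, threshold=0.75):
    """(ARI), sketched: S is right in W iff its truth ratio is
    sufficiently high in each of the worlds close to W."""
    return all(truth_ratio(w) >= threshold for w in worlds_close_to_w)

# Two hypothetical nearby worlds in which S's processes are instantiated:
w1 = [("rainy day", True), ("knee hurts", True),
      ("she is bored", False), ("lawn needs mowing", True)]
w2 = [("2 + 2 = 4", True), ("cereal is stale", True),
      ("it is Tuesday", True), ("door is locked", False)]

print(right_by_ari([w1, w2]))  # both ratios are 3/4, so: True
```

Note that the sketch takes “sufficiently high in the worlds close to W” as a floor in every nearby world; whether Goldman intends that, or an average across the worlds, is exactly the kind of modal question the text presses.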

Technical and quasi-technical problems still lurk;32 but the main and crucial problem is that the suggested necessary and sufficient condition of warrant is vastly too weak to be anywhere nearly sufficient. Justification, for Goldman, isn't quite warrant; it is not the case that a sufficient degree of justification is (together with truth) necessary and sufficient for knowledge. What must be added for the latter is “local reliability,” a codicil designed to handle Gettier problems: “a true belief fails to be knowledge if there are any relevant alternative situations in which the proposition p would be false, but the process used would cause S to believe p anyway. If there are such relevant alternatives, then the utilized process cannot discriminate the truth of p from them; so S does not know” (EC, p. 46). So on the complete account a belief has warrant if and only if it is justified (sanctioned by a right set of J-rules) and displays local reliability.

It is easy to see, however, that this alleged necessary and sufficient condition isn't anywhere nearly sufficient. There are many kinds of cases here. To see the first, let S be any system of J-rules that is right in the sense of (ARI). Now in the typical case, we can adjoin to S further rules of equal or greater truth ratio without reducing S's truth ratio;33 let S*, therefore, be the result of adding

R1 Any process is permitted whose output is a tautology

to S. Obviously the truth ratio of S* will be at least as high in the worlds close to the actual world as that of S; hence S* will be right in the sense of (ARI). But then any belief whose content is a tautology will be in the output of a process sanctioned by a set of J-rules right in the sense of (ARI); clearly the belief meets the above condition for local reliability, since there aren't any alternative situations in which it is false; hence any such belief will have warrant on Goldman's account. Yet not just any belief of a tautology has warrant (as Goldman himself observes); it could be that I believe a complicated tautology T not because I see that it is true (perhaps it is too complicated for that) but because you have asserted the denial of T and I, due to excessive contrariness, have a powerful tendency to believe the denial of anything you assert—or because I have a pathological tendency to believe whatever looks complicated and is found in a logic book.

Now here it might be objected34 that the J-rules ought to be stated in psychological or physiological terms; they shouldn't involve terms like ‘tautology’, which guarantee the truth of the permitted beliefs. Fair enough. (Of course we don't know how to state the relevant J-rules that way, since we don't know anything about the sorts of narrow psychological processes that the J-rules are to permit or prohibit.) But clearly the rules must be such that they do license certain beliefs or classes of beliefs: otherwise the whole project fails. And clearly there will be processes in Goldman's sense whose last items are beliefs whose contents are tautologies. So choose some large class C of processes whose last items are tautological beliefs (some of them being fairly complicated); there will be a (possibly complex) psychological or physiological property P had by all and only the members of C. Then let S be any suitable right rule system and adjoin

R1′ Any process having P is permitted

to form rule system S*. On Goldman's showing, any belief permitted by S* will have a high degree of warrant, at least if it meets the condition of local reliability (as all the beliefs permitted by R1′ do). But of course many such beliefs will in fact have no appreciable degree of warrant at all. Paul is in awe of logicians: if he runs across a complicated logical sounding proposition replete with p's and q's, he invariably believes it. He sees such a proposition in a logic book (in a “Prove or give a counterexample” exercise); it is in fact a tautology, but he can't see that it is and has no other reason for believing it; this belief, then, has little by way of justification for him, despite its being certified by the above rule.

Of course it isn't just processes whose outputs are tautologies that cause a problem: we get the same problem by adjoining to a right system S a rule sanctioning any process whose output is a proposition true in all the worlds sufficiently similar to the actual world. Goldman doesn't say much about what these worlds would be like, but presumably the basic laws of physics that characterize the actual world would also characterize them. So start from a right system S of J-rules and add

R2 Any process is permitted whose output is the belief that the velocity of light is constant35

(or that it is about 186,000 miles per second, or that planets travel in elliptical orbits, or whatever) and you will have a right system of J-rules. But then my belief that the velocity of light is a constant will be justified, on Goldman's view, no matter how it is formed—even if I believe it only because it turned up in a letter to the editor in the National Enquirer and I believe everything I read there. Similarly for any philosophical truth: that there are at least uncountably many possible worlds, for example. If we add

R3 Any process is permitted whose output is the belief that there are at least uncountably many possible worlds36

to a right set of J-rules, the resulting set will be right; but then any belief that there are at least uncountably many possible worlds will be justified, no matter how bizarre the process that produces it. (Even if it is formed, for example, on the Oscar Wilde Principle of Belief Formation: Always believe what will cause maximum consternation and dismay among your colleagues.)

A second kind of example: return to The Case of the Epistemically Serendipitous Lesion. As a result of this lesion (and we can stipulate, if we like, that it develops prenatally), most of my beliefs are absurdly false. It also causes me to believe, however, that I am suffering from a brain lesion. This belief of mine, pathologically caused as it is, is clearly one that has little or no warrant. But there will be a cognitive process, in Goldman's sense, whose output is this belief; and we may suppose that this process—call it ‘P1’—occurs only in conjunction with a lesion of the sort in question. So the result of adding

R4 P1 is permitted

to a right system of J-rules will itself be a right system. Hence Goldman's account yields the conclusion that this belief is justified and, indeed, that under these conditions I know that I am suffering from a lesion.37

Indeed, perhaps we can argue that on Goldman's account nearly all human beliefs are justified. First, a plausibility argument. No doubt the vast majority of human beliefs are true; most of the myriads or millions of our quotidian beliefs, after all, are such relatively unrisky items as it looks like another rainy day, there's that pain in my left knee again, she looks bored this morning, that cereal tastes no better than the last time I tried it, and the miserable lawn needs to be mowed again. If so, however, the rule

R5 Any cognitive process on the part of a human being is permitted

will have a high truth ratio in the actual world. But then presumably the same will go for the worlds sufficiently similar to the actual world; if the vast majority of human beliefs are in fact true, then any world in which most human beliefs are false will be very different. There is therefore reason to think that R5 has a relatively high truth ratio—a truth ratio higher than ½—in all the relevant worlds. (More to the point [since it is all that is needed for my argument] there is reason to make the weaker claim that an appropriate infinitary analogue of an average truth ratio of R5, taken over all the relevant worlds, is appropriately high.) Of course, if R5 does have a high truth ratio in those worlds, then (depending upon how high a truth ratio is required for epistemic justification) on Goldman's showing all human beliefs—at any rate, those produced by processes that are locally reliable—will be justified and have warrant.

But perhaps you are inclined to doubt that most human beliefs are in fact true; perhaps you doubt that R5 has a high truth ratio in the relevant worlds. Very well; then consider

R6 Any triple of processes is permitted, provided two of them have as output theorems of elementary arithmetic.38

Clearly R6 will have a high truth ratio in all worlds, and hence in all relevant worlds (and if you do not think a truth ratio of 2⁄3 is high enough, adjust the example to suit). It does not follow, of course, that all of my beliefs will be justified, on Goldman's account; but it does follow that they will all be justified, provided I believe two theorems of elementary arithmetic for each non-arithmetical proposition I believe—even if my nonarithmetical beliefs are absolute paradigms of lack of epistemic justification.

The main point, I think, is this: contrary to Goldman's suggestion, what determines whether the output of a process has warrant is not simply the truth ratios of the J-rule sets permitting it. To confer warrant or positive epistemic status, the process in question must meet another condition. It must be non-pathological; we might say that the process in question must be one that can be found in cognizers whose cognitive equipment is working properly; it must be the sort of process that can occur in someone whose faculties are functioning aright.

IV. Concluding Peroration

Goldman's EC account is complex—complex enough to obscure the main lines of the account and to introduce extraneous difficulties. In some ways, the earlier paradigm reliabilism is more attractive, if only because of its simplicity. Let me conclude, therefore, with a brief but more general consideration of paradigm reliabilism. The basic idea is that a belief has warrant if it is produced by a reliable belief-producing mechanism or power or faculty. Now initially, I think, the natural way to understand this suggestion would be in terms of actual concrete belief-producing faculties such as Paul's memory, or Paul's visual system. Then the idea would be that one of Paul's beliefs has a good deal of warrant if it is produced by his memory or vision, say, but not if it is produced by way of wishful thinking. Here what counts is the reliability of the concrete system, or organ, or faculty, or source of the belief, however exactly that is to be construed.

The early Goldman, however, demurs: he proposes that it is not concrete systems or faculties, but abstract types that are relevant. A belief has warrant for me if it is produced by some concrete process (a series of events of some kind) which process is an example of the relevant type—the relevant type because, of course, every process will be an instance of very many different types. Which types are the relevant types? This question precipitates the generality problem (see pp. 198–99) and is exceedingly hard to answer. Suppose we had a way to answer it, however: then a belief has warrant for me just in case the relevant type T of the process that produces it is reliable. (And perhaps we can understand this as the claim that T is such that most of the beliefs produced by processes falling under T [in the appropriate nearby possible worlds as well as in the actual world] are true.) Now why does Goldman turn away from concrete faculties or powers and toward these types? The answer: “On this interpretation, a process is a type as opposed to a token. This is fully appropriate, since it is only types that have such statistical properties as producing truth 80% of the time; and it is precisely such statistical properties that determine the reliability of a process.”39 But as it stands this seems quite mistaken. An actual concrete barometer (as opposed to the type barometer) can perfectly well be reliable, and can have the statistical property of correctly registering the atmospheric pressure 80 percent of the time. My eight-dollar digital watch is much more accurate than any of those expensive spring-driven seventeen-jewel Bulovas that seemed so elegant 30 years ago. Here it isn't types that are compared for accuracy: what I say is that my digital watch, that very concrete mechanism, is more accurate than any spring-driven Bulova. So this isn't a good reason for developing paradigm reliabilism in terms of types rather than tokens.

But I think there is a good reason, from the Goldmanian perspective, and it emerges when we try to accommodate degrees of warrant. Goldman says that the degree of warrant enjoyed by a particular belief is a function of the degree of reliability of the relevant process type. Now suppose we try to develop the theory, not in terms of types, but of such concreta as Paul's vision. First, we should have to invoke the notion of proper function: the visual beliefs Paul forms when drunk or when his vision is otherwise impaired won't have much by way of warrant for him, even though his vision is overall pretty reliable. But then the theory would not be a reliabilist theory anymore; it would be a theory more like the one to be presented in Warrant and Proper Function. Second and just as important, the problem will be that some of Paul's vision-induced beliefs will have a good deal more warrant than others; his beliefs about what he sees up close in a good light will have more warrant than what he sees in a dim light at some distance; and then it can't be that what determines the degree of warrant of one of Paul's vision-induced beliefs will be just the overall reliability of his vision. So if we try to explain degree of warrant in terms of concrete faculties or powers such as vision, we run into a dead end.

Of course, there are subfaculties as well as faculties: and just as vision is a subfaculty of perception, so there may be subfaculties of vision. Perhaps there are subfaculties narrow enough so that all of their outputs have the same degree of warrant, so that the reliabilist could say that the degree of warrant enjoyed by the outputs of these subfaculties is a function of the reliability of that subfaculty (although she would still have to stipulate that the subfaculties in question were functioning properly). Perhaps there are subfaculties as narrow as all that—but then again perhaps there are not. We can't just make up subfaculties and organs; there presumably isn't any organ that is the sum of my heart and my left knee, for example, and there isn't any subfaculty whose output consists, say, in visual beliefs produced on Saturdays together with the belief that 4 + 1 = 5. We can't gerrymander concrete processes or mechanisms just any way we want. There is the faculty or power of speech; but there isn't any such thing as a faculty or organ whose output is, say, exactly half the words I speak. There is no organ whose output is half of what my larynx outputs, together with half of what your heart does. The questions what subfaculties there are and whether there are subfaculties of the right degree of generality are empirical questions to which we do not at the moment have even the beginnings of decent answers.

So we can't be at all sure that there are concrete faculties or subfaculties of the right narrowness; and just here is where types are handy. We can make up types ad libitum (more realistically, any type we might find useful is already there). So, for example, there is the type is either a heart or a stomach and there is also the type visual belief about tiger lilies in Martha's backyard; there is even the type belief on the part of Paul that is either produced on Saturday or is the belief that 4 + 1 = 5. More to the present point, there are types of the sort mentioned above: (cognitive) event of kind e1, followed by event of kind e2,…, followed by event of kind en, where the last item is a belief. These types can clearly be of the relevant degree of specificity, since they can be of any degree of specificity you please.

Sadly enough, however, if we develop the theory in terms of types of that sort, then we have the problems I just mentioned: for example, the type process whose last item is belief in a tautology (or the appropriate cortical correlate thereof) is highly reliable, but not nearly all the outputs of that type have warrant. And of course we also have that problem of generality. Let me give one final example to illustrate that problem. Suppose I am struck by a burst of cosmic rays, resulting in the following unfortunate malfunction. Whenever I hear the word ‘prime’ in any context, I form a belief, with respect to a randomly chosen natural number less than 100,000, that it is not prime. So you say “Pacific Palisades is a prime residential area” or “Prime ribs is my favorite” or “First you must prime the pump” or “(17′) entails (14′)” or “The prime rate is dropping again” or anything else in which the word occurs; in each case I form a belief, with respect to a randomly selected natural number between 1 and 100,000, that it is not prime. The process or mechanism in question is indeed reliable (given the vast preponderance of nonprimes among the first 100,000 natural numbers), but my belief—that, say, 41 is not prime—has little or no warrant. The problem is not simply that the belief is false; the same goes for my (true) belief that 633 is not prime, if it is formed in this fashion. So reliable belief formation is not sufficient for warrant. The belief-producing process or mechanism is indeed reliable; but it is only accidentally reliable; it just happens, by virtue of a piece of cognitive serendipity, epistemic good fortune, to produce mostly true beliefs; and that is not sufficient for warrant. In Warrant and Proper Function I shall try to explain what it is for a process to be only accidentally reliable; once we see that, we shall also be able to see what is required for warrant.40
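The reliability claim here is easy to check. There are 9,592 primes below 100,000, so guessing “not prime” about a randomly chosen number in that range is correct a bit over 90 percent of the time; the process is reliable, yet its outputs have no warrant. A minimal sketch of the count (the sieve and the names are mine, not the text's):

```python
# Count the primes among 1..100000 with a sieve of Eratosthenes to
# confirm that the "not prime" guessing process is about 90% reliable.

def primes_upto(n):
    """Return the number of primes less than or equal to n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"  # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            # Strike out every multiple of p starting at p*p.
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return sum(sieve)

n = 100_000
primes = primes_upto(n)
reliability = (n - primes) / n  # chance a "k is not prime" guess is true
print(primes, round(reliability, 3))  # 9592 0.904
```

So the process clears any plausible truth-ratio threshold, which is precisely the point: its reliability is an accident of the distribution of primes, not a credit to the believer.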