2. Warrant: Objections and Refinements

I. The Design Plan

As we saw in chapter 1, the notion of warrant involves the notion of proper function, which involves or presupposes the idea of a design plan. In order to achieve a deeper understanding of warrant (and in reply to some objections), we must look further into this idea of design: it is much richer and more complex than it initially looks. A thing's design plan is the way the thing in question is ‘supposed’ to work, the way in which it works when it is functioning as it ought to, when there is nothing wrong with it, when it is not damaged or broken or nonfunctional. Both human artifacts and natural organisms and their parts have design plans. Computers, automobiles, and linear accelerators have design plans, but so do plants, animals, hearts, livers, immune systems, digestive tracts, and the like. There is a way in which your heart is supposed to work: for example, your pulse rate should be about 55–80 beats per minute when you are at rest, and (depending on your age) achieve a maximum rate of some 180 beats per minute when you are exercising. If your resting pulse is only 10, or if it goes up to 250 upon walking slowly up a short flight of stairs, then your heart (or something in your circulatory system) is not functioning properly; it is not functioning according to the design plan for human circulatory systems.

We need not initially take the notions of design plan and way in which a thing is supposed to work to entail conscious design or purpose (see pp. 13ff.); it is perhaps possible that evolution (undirected by God or anyone else) has somehow furnished us with our design plans. But in the central and paradigm cases, design plans do indeed involve the thing's having been designed by one or more conscious designers who are aiming at an end of some sort, and who design the thing in question to achieve or accomplish that end. In exploring the notion of design plan, therefore, we must keep close to the front of our minds the way things go in these central and paradigm cases. We must therefore bear in mind the way in which a radio, say, or a rope, or an airplane, or some other kind of artifact can be said to function properly, and what the connection in those cases is with a design plan. Here I shall explore the notion of proper function and design plan and shall do so with respect to the following six rubrics: the max plan versus the design plan, unintended by-products, functional multiplicity, the distinction between purpose and design, trade-offs and compromises, and defeaters and overriders.

A. Design Plan versus Max Plan

The design plan of an organism or artifact specifies how it works when it works properly: that is (for a large set of conditions), it specifies how the organism should work. When the coolant temperature of my car gets up to 200°F, the thermostat should open and the engine, if idling, should slow to the warm idling speed. When body temperature begins to rise, surface blood vessels will expand, pores will open, perspiration will begin, and so on—provided the organism is functioning properly. We might initially think of the design plan as a set of circumstance-response pairs; for each member of some class of circumstances it specifies what the appropriate response is, what the thing in question will do if it is functioning properly.

What makes that response appropriate, of course, is the fact that the thing in question is designed to accomplish an end or purpose; it has a function to perform. Perhaps it pumps the blood, or reduces the voltage, or turns the engine over in order to start it, or provides a shot of adrenalin. It would therefore be better to think of the design plan as a set of triples: circumstance, response, and purpose or function—and, of course, the latter could itself be complex in one way or another. Under ordinary conditions your kidneys function a certain way: they respond in a certain way to circumstances, and they do so in order to accomplish their purpose or function, which is the removal of metabolic waste products from the bloodstream. Of course a given system or organ may have both proximate and remote functions: its immediate or proximate function may be to remove metabolic waste from the blood, but its ultimate function is to contribute to the survival and well-being of the organism. (It may also have functions intermediate between proximate and ultimate functions.) What will be contained in a triple of the design plan will not be the ultimate function (since that will be the same for nearly all of the organs and systems of an organism) but some more proximate function.
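
To fix ideas, the triple structure just described can be pictured in a minimal sketch (the representation is mine and purely illustrative; the class and field names are hypothetical, and the kidney entry is a toy example):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    """One member of a design plan: circumstance, response, purpose."""
    circumstance: str
    response: str
    purpose: str  # a proximate function; intermediate and ultimate functions left implicit

# A toy fragment of the kidney's design plan. The ultimate function
# (the survival and well-being of the organism) is the same for nearly
# every organ, so only the more proximate purpose is recorded.
kidney_plan = {
    Triple("metabolic waste products present in the bloodstream",
           "filter the waste from the blood",
           "remove metabolic waste products"),
}
```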

Here we are thinking of the design plan as specifying how the thing works now, or at a given time: call this a snapshot design plan. But the design plan, at least in the case of organisms, may also specify how the thing will change over time. There is such a thing as maturation; and it can be thought of as involving a master design plan, which specifies a succession of snapshot design plans.

But the design plan of an organism or artifact does not say what it is supposed to do in just any old circumstance. I design a radio in such a way that it has certain fail-safe features: if there is an electrical surge of moderate voltage along the line, a circuit breaker will trip, thus forestalling damage. The design plan may say nothing, however, about how it will respond (or what will happen to it) when struck by lightning, or when it sinks to the bottom of the Mindanao Trench, or is run over by a steamroller. The design plan of my self-sealing tire says nothing about what it will do when it is ‘punctured’ by an object that leaves a hole six inches in diameter, or when it is blown to bits in a terrorist dynamite attack. The radio is supposed to work in a certain way when there is an electrical surge: the circuit breaker should trip. But there is no particular way in which it is supposed to work if subjected to a current strong enough to melt (or vaporize) it. The self-sealing tire is supposed to work a certain way if punctured by a nail; but there is no particular way in which it is supposed to work if it is blown up by a high-explosive device.

Tires and radios are artifacts, of course; but the same goes for organisms. My body is so designed that it will respond to attack by microbes: antibodies will be rushed to the scene, heart rate and temperature may be elevated, and all the rest. This is how things are supposed to work under those conditions. But there isn't any particular way in which my body is supposed to work if I am smashed by a huge boulder or crushed by a runaway steamroller or vaporized in a nuclear explosion. My cognitive design plan says something about how I will respond (if my cognitive faculties are functioning properly) when appeared to redly: for example, under those conditions (properly filled out) I may be inclined to form the belief that what I see is red. But my design plan says nothing about how I will respond when I am appeared to redly, but am also suffering a massive heart attack, or have just ingested huge quantities of strychnine, or have just hit the ground after a 300-foot fall. So the design plan does not include a description of how the thing will work under just any or all circumstances, but under only some: those that in some sense (in the paradigm artifactual case) the designer(s) plan for, or have in mind, or intend.

Of course, radio, tire, and human body will indeed respond in a certain way (better, something will happen to them) under the conditions not catered for by the design plan. If run over by a steamroller, for example, each will lose its shape and structure, become very flat, achieve a new level of density, and cease to do much else of interest. If a squirrel is hit by a car, it will no longer respond (to the approach of a hungry fox, say) as before. So here we must distinguish the design plan of the thing in question from what we might call the maximum plan (‘max plan’ for short). The design plan is a set of triples; the max plan is a set of circumstance-response pairs. It is maximal in three ways. First, unlike the design plan, it is not a description of how the thing works under just those circumstances (as in the paradigm cases) the designer plans for or takes into account; it includes a much broader set of circumstances. How much broader? Well, not all logically possible circumstances, and not circumstances involving a change in the natural laws. Nor does it include circumstances involving an extensive change in the structure of the thing: it describes how the thing will work given its present structure and organization. What it describes is how the thing will work1 so long as it retains its approximate present structure in circumstances involving the natural laws that do in fact obtain. (So perhaps we should refer to it as a mini-max plan.)

As a special case, therefore (and secondly) the max plan says how the thing will behave, what it will do (what will happen to it) when it is broken or damaged or destroyed as well as what it will do when it is functioning properly. It also specifies the new max plans the thing would acquire as a result of change in its structure. Here we need a distinction like that between master and snapshot design plan: the master max plan says that if the thing is run over by a steamroller, it will get smashed and will acquire a new and less interesting snapshot max plan. And third, any given circumstance member M of a circumstance-response pair is maximal in the sense that it is a complete specification of relevant circumstances. That is to say, in any circumstance including M the organism or artifact will behave in the same way; it includes all that could be relevant to the response of the artifact or organism in question. Thus, for my radio or refrigerator, M will not specify, say, who won the battle of Marathon or whether the Continuum Hypothesis is true; but for any circumstance that could affect how it works, it will specify how things stand with respect to that circumstance. Cut down the design plan by removing the purpose from each triple, and add to the circumstance that the thing in question is functioning properly: the resulting set of circumstance-response pairs will be a subset of the max plan. It will be a set such that each circumstance member of a circumstance-response pair will both include proper function (that is, will include the circumstance that the thing in question is functioning properly) and also will be one of the circumstances that (in the paradigm cases) the designer plans for.
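
The operation described at the end of this paragraph (drop the purpose from each triple, add proper function to each circumstance, and the result is a subset of the max plan) can be sketched as follows; this is a standalone toy with invented entries, and the subset relation holds here by construction, where in the text it is a substantive claim:

```python
# Design plan: a set of (circumstance, response, purpose) triples.
design_plan = {
    ("coolant temperature reaches 200F", "thermostat opens",
     "regulate engine temperature"),
    ("body temperature begins to rise", "pores open and perspiration begins",
     "regulate body temperature"),
}

def induced_pairs(plan):
    """Drop each purpose; add proper function to each circumstance."""
    return {(f"{c} & functioning properly", r) for (c, r, _purpose) in plan}

# The max plan also covers circumstances the designer never planned for.
max_plan = induced_pairs(design_plan) | {
    ("run over by a steamroller", "flattened; ceases to do much of interest"),
}

assert induced_pairs(design_plan) <= max_plan  # the subset relation in the text
```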

Functional generalizations (see pp. 270ff.) involve members of the design plan. A hungry cat will pounce on a mouse it sees emerging from a mouse hole—but only if it is not deranged, or crippled, or in the grip of an evil demon, or in some other way malfunctioning. A human being in my present circumstances being appeared to in the way I am will believe that she is sitting before a computer—but again, only with similar provisos.

B. Unintended By-products

There is another distinction intimately related to the design plan/max plan distinction: that between what the thing in question is designed to do and unintended by-products of the way it works. I design a refrigerator; one consequence of my design is that when a screwdriver touches a certain wire, the refrigerator will emit a loud angry squawk. I didn't intend for it to work this way; I have no interest in its doing that under those conditions and don't care whether it does so or not. Its working that way is no part of the design plan; there is no circumstance-response-purpose triple according to which it will do that. But of course its working that way is a consequence of how it is designed, and its working that way in those circumstances will be an element of the max plan. Although I did not intend it to work this way, the fact that it does is no indication of malfunction. This is an accidental or unintended by-product of the design. It is not accidental in the sense that it happens just by chance, or isn't caused to happen; its working that way is of course a causal consequence of the way the refrigerator is constructed. It is accidental, rather, from the point of view of the intentions of the designer; I designed the thing to do certain things, to work a certain way, and that way of working also has the unintended consequence that it makes that loud squawk. Of course it could be that a given bit of behaviour or response has more than one purpose; then it might be an unintended by-product with respect to the one but not with respect to the other.

Again, this distinction is present for organisms as well as artifacts: that thumpa-thumpa sound the heart makes is (so we think) merely accidental, a by-product of how it works when it works in such a way as to fulfill its purpose or function of pumping the blood. The sound is accidental with respect to the design plan; it is no part of the design plan that it will make that sound; the circumstance-response-purpose triples that constitute the design plan will not include any whose response member is the making of that sound. (I assume, for purposes of illustration, that this thumping sound has no purpose.) My brain might be so constructed that pressure on a certain area (from a lesion or tumor, say) may cause headache, or bizarre beliefs—among which might be the belief that I have a lesion or tumor. That the lesion happens to cause me to form that true belief is no part of my cognitive design plan; it is either an unintended by-product of the design of my brain or else just a part of the max plan that is not involved in the design plan at all. As a result, this belief does not have warrant.

C. Functional Multiplicity

Different parts of the design plan may be aimed at different ends; different triples, obviously enough, may include different purposes. Indeed, the same bit of behaviour or response can serve different purposes. What we have here is functional multiplicity. And different parts of the design plan governing the cognitive establishment may include different purposes. We took advantage of this fact in chapter 1 in noting that a given belief has warrant only if the part of the design plan governing the production of that belief is aimed at truth—only if, in other words, the relevant triples of the design plan include as purpose the production of true belief (rather than, say, the production of beliefs that by virtue of statistically unjustified optimism contribute to the survival of a dangerous illness or to the successful leaping of a wide crevasse).

There are other important kinds of functional multiplicity. The design plan of an organism, I say, specifies how it will function when it is functioning properly. But it may also specify how it will function when it has been damaged in some way—as when it has been invaded by a virus, or stung by a wasp, or cut or abraded. The design plan may include triples where the purpose or function is damage control or recovery and where the circumstance member includes malfunction elsewhere in the organism or artifact. (So the circumstance members of the triples include not only environmental circumstances exterior to the organism or artifact in question; they may also include the way some other part of it is functioning.) Upon being invaded by a virus, the organism (from one perspective) no longer functions properly; there is elevated temperature and pulse, dizziness, and inability to walk a straight line. But there are certain ways of responding to that invasion that constitute proper function and others that do not. You contract an influenza bug; your temperature rises. From one perspective your temperature-regulating mechanisms are functioning improperly, in that your temperature is not now the temperature of a healthy human being. But from another perspective they are functioning just as they ought; part of your body's defense against the invader involves your temperature's rising, and if it does not rise, there is malfunction (further malfunction).

Here we have another way in which different segments of the design plan may be aimed at different ends. Your normal temperature and the mechanisms for maintaining it are aimed at providing the best (or a good) temperature for the ordinary conduct of organic life for a creature of your sort; the higher temperature when attacked by the virus is a result of a part of the design plan aimed at damage control and healing. (Perhaps, for example, viruses and bacteria reproduce less rapidly at those higher temperatures.) Alternatively, this rise in temperature could be a by-product of defense and damage-control mechanisms.

This distinction too will be relevant to the cognitive case. When the organism is under attack, parts of the cognitive establishment may fail to work in the way they ordinarily do. This failure may be due to simple malfunction, as when a tumor pressing on a part of the brain may cause bizarre belief formation, or it may be due to the damage-control and defense mode of functioning, as when you optimistically believe that you will survive the disease, this belief itself helping you survive the disease. In still other cases, a certain mode of belief formation may be an accidental by-product of the damage-control and defense mode of function. So a belief can arise by way of proper function of a module of the design plan aimed at truth; it can arise by way of proper function of a module not aimed at truth; it can arise as an unintended by-product of a damage-control mode of function; and it can arise by way of simple malfunction. It is only in the first case that there is warrant.
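
The fourfold classification in this paragraph can be compressed into a toy decision procedure; the predicate and its arguments are my own labels, and the full account of course adds further conditions (a congenial environment and a good design plan, among others):

```python
def has_warrant(proper_function: bool, module_aimed_at_truth: bool,
                unintended_byproduct: bool) -> bool:
    """Toy summary of the four cases:
    1. proper function of a truth-aimed module         -> warrant
    2. proper function of a module not aimed at truth  -> no warrant
    3. unintended by-product (e.g., of damage control) -> no warrant
    4. simple malfunction                              -> no warrant
    """
    return proper_function and module_aimed_at_truth and not unintended_byproduct

# Optimistic survival belief: proper function, but the module is aimed at
# survival rather than truth, so no warrant.
assert has_warrant(True, False, False) is False
# Ordinary truth-aimed proper function: warrant.
assert has_warrant(True, True, False) is True
```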

One final kind of functional multiplicity: a thing can obviously acquire a new purpose and a new design plan. I construct my refrigerator and then make a minor adjustment; now the refrigerator works and is intended to work slightly differently. It has acquired a new design plan. Relative to its old design plan, it is not functioning properly; relative to the new, it is. More drastically, I turn my refrigerator into a food warmer by reversing a crucial circuit: then when it functions properly according to its new design plan, it works badly indeed according to the old. So suppose our cognitive faculties are redesigned by Alpha Centaurians, whose aims here are aesthetic rather than epistemic. Then it might be that our faculties work in accord with the new design plan and that the old design plan is aimed at truth; that would not be sufficient for warrant, however, since what is required is that your faculties work in accord with their current design plan and that that design plan be aimed at truth.

D. Purpose versus Design

Another important distinction is that between what a thing is designed to do (its purpose, say) and how it is designed to accomplish that purpose (its design, we might say).2 (Computers, as programmers know, do what you tell them to do, not what you want them to do.) There is a sort of ambiguity in the notion of working properly. On the one hand, a thing may work just the way in which it was designed to work. My radio works properly when there is nothing wrong with it and it works just as its designer designed it to. But what shall we say when it works as it was designed to, all right, but has a very poor design and won't receive stations more than 500 yards away? Then it does not work very well, despite its functioning precisely in accord with its design plan. I aim to make a refrigerator that will keep its contents at a constant temperature of 33°F; through incompetence and inattention, I fail; as a result of a lamentably inferior design, its internal temperature varies between 70° and 85°F. Then the thing works properly in one way: it works just as it was designed to work. But in another way it works badly: it does not do what it was designed to do. Paul designs a deicer for an airplane. As we might expect, his design is deficient; so while the device functions just as it was designed to, it never removes ice from the wings, and occasionally adds a bit of its own.3 Then in one way it works properly and in another way it does not. I once owned an automobile with an air conditioner whose operation caused the automobile to overheat unless the outside temperature was below 70°F; this could have been due either to failure to work as it was designed to or to poor design. (One hopes it was the former.)

Again, the same distinction holds in the case of an animal or other organism. Perhaps you think the human knee is poorly designed; then you may think that a knee functioning in accord with its design plan is nonetheless not functioning very well. Someone might think the panda's thumb doesn't work well, even when the way it works does not deviate from its design plan. Evolution might not always or often hit on maximally apt design plans; in many cases, says Stephen Gould, a reasonably clever engineer could do a lot better. Clearly, something works properly in the fullest sense only when both conditions are met to an appropriate degree.

Subtleties arise here. For example, I might design my refrigerator, intending and expecting that it function a certain way. As it turns out, it functions in some wholly different way—not because it is broken, but only because I am mistaken with respect to how a thing with that kind of construction will in fact work. Is it then functioning in accord with its design plan? Here, clearly enough, a more complete account would make further distinctions; but perhaps they are not needed for present purposes. Another subtlety: if a large rock falls on my radio and it no longer does what it was designed to do, it is broken; it no longer works right. But suppose a small rock falls on it; now it no longer works the way it did before, but instead, by some wild chance, performs its functions better than before so that it receives stations further away, with less static, more fidelity, and so on. (Indeed, it goes so far as to correct errors in the performance of Beethoven sonatas.) Is my radio then malfunctioning, broken? That's not so clear; here one of the two conditions for working properly is met (and met magnificently) but the other is not. It works in such a way as to fulfill its purpose, but it does not work in accordance with its design plan. So is it working properly or not? In cases like this the answer is not always clear; here we reach that penumbral area of vagueness surrounding the central paradigm.

Is one of these conditions more important or central than the other? Well, if the radio does not perform the function it was designed to perform (it only emits a loud hum and does not receive any stations at all) it isn't working properly, even if it is functioning exactly as it was designed to function. On the other hand, one is inclined to some degree to believe that it is working properly if it performs its function, even if it is not functioning in accord with its design plan. Perhaps what we must say here is this: in the paradigm or central cases, a properly functioning device does what it is designed to do and does it the way it was designed to do it; there will be analogical extensions of the term to the sorts of cases where it does what it was designed to do but by some fortuitous circumstance doesn't do it the way it was designed to do it; but there are not analogical extensions to the sort of case where it functions in accord with its design plan, but doesn't at all perform the function the designer was aiming at.

The distinction between purpose and design is what occasions the addition (in chapter 1, pp. 17–19) of the fourth condition for warrant: that the design plan be a good one. Suppose our cognitive faculties had been designed by, say, one of Hume's infant deities or an incompetent angel. She intends that we have true beliefs on a wide variety of topics but botches the job: in fact the design guarantees beliefs that are nearly always false. Then it looks as if our faculties would be functioning properly, functioning in accord with their design plan, but surely our beliefs would have very little by way of warrant for us. The occasional true belief we have would not constitute knowledge. But the answer is clear: under those conditions our cognitive faculties would be functioning properly in the sense that they would be functioning in accord with our design plan, but not functioning properly in the sense of enjoying good design. Both are necessary for proper function in the full sense, and both are necessary for warrant.

But if I am obliged to speak of distinct faculties or belief-producing mechanisms or processes, don't I then fall victim to the dreaded generality problem that afflicts Alvin Goldman's reliabilism (in particular in the earlier and in some ways more satisfactory formulation to be found in “What Is Justified Belief?”4)? On that account, a belief has justification if and only if it is produced by a reliable belief-producing cognitive process: the more reliable the process, the more justification enjoyed by the belief. Such processes, says Goldman, are to be thought of as types, not tokens. Clearly enough, however, a given concrete process culminating in the production of a belief will be a token of many different types: which type is the one determining its degree of justification? But just here is the problem: take the relevant processes broadly, and the same process will have outputs of differing degrees of justification, so that—contrary to the theory—the degree of justification of a belief is not determined by the degree of reliability of the relevant associated process; take them narrowly enough so as to avoid this difficulty, and there will be or could be reliable belief-producing mechanisms—the tumor that causes several belief-producing processes, most of which produce absurdly false beliefs, but one of which causes you to believe that you suffer from a tumor—that produce beliefs that do not have warrant.5

Now Goldman suggests6 that my account suffers from the same problem; he adds that “a little reflection should make it clear that cognitive faculty individuation is no trivial matter.” Indeed it isn't, but how does that create a problem for my view? The fact that it is not easy to individuate faculties is not, by itself, much cause for alarm. It is also hard to individuate mountains and sentences; that does not mean that there is automatically a problem in talking about mountains and sentences; everything depends upon what you propose to say about them. I don't, of course, say that the degree of warrant of a belief is determined by the degree of reliability of the faculty or faculties that produce it; the analogue of that claim for processes is what creates the problem for Goldman; so at any rate I am not afflicted with the very same problem.

Is there a problem here at all? According to Goldman, “Plantinga owes us an answer to the question precisely what cognitive faculties are there, and which ones must be functioning properly for a given belief to be justified?” But why do I owe us an answer to that question? No doubt it would be nice to have one, and no doubt a really complete theory would include something like such an account. But this just means that without such an answer my account is incomplete—which of course it is. For Goldman, on the other hand, the problem is not incompleteness, but something much more debilitating: we can see that no matter which level of generality we select, the analysis will give us the wrong results. No analogue of that problem, so far as I can see, afflicts my account.

We can employ some of the above distinctions to respond to an objection by William Hasker, who asks us to consider

Geoffrey, who as the result of a random genetic mutation, not directed or planned by anyone, finds himself in an unusual cognitive situation. On the one hand, Geoffrey is totally blind, and there is no hope of his ever regaining any degree of vision. But the mutation which rendered Geoffrey blind also had another result. The portion of his brain which would normally be devoted to processing visual information has now acquired another ability: it registers, in an extremely sensitive way, the magnetic fields generated by the earth and by objects in the environment. Because of this, Geoffrey has the ability, hitherto verified only in certain marine organisms, to locate himself and to make his way around by magnetolocation. As he grows to maturity, he is able to determine his location in the neighborhood and to make his way about with considerable facility; automobiles are still a problem for him, to be sure, but so long as he stays on the sidewalk he is all right.

Under the described circumstances I think we are obliged to say that Geoffrey's beliefs about his whereabouts are warranted, and indeed that he knows where he is. It seems, however, that Geoffrey's cognitive situation is by no means in accordance with his design plan; that plan calls for that portion of his brain to be devoted to visual perception. So there can be warranted belief even where there is not function according to the design plan, and TPEF [the Theory of Proper Function—that is, my account of warrant] is false.7

Hasker clearly intends that Geoffrey has acquired a new max plan, and has acquired it by chance. There is a way in which his cognitive faculties regularly work, a way different from that of the rest of us human beings; it is not the case, for example, that his cognitive responses are determined by Alpha Centaurians who decide what a given response will be by way of some chance device such as throwing dice. But if Geoffrey has acquired a new max plan by chance, what is to prevent his acquiring a new design plan by chance as well? If evolutionary theory is fundamentally correct, organisms acquire their design plans by way of random genetic mutation (or some other source of variation) and natural selection. Well, can't Geoffrey have acquired his new design plan by way of a sizable (and improbable) random genetic mutation? Then his cognitive faculties would be working properly according to the new design plan (even if improperly according to the old) and there would be no reason to deny his beliefs warrant.

But perhaps you think it impossible that a creature should acquire a design plan, or a new design plan by random genetic mutation, or, indeed, by mere chance in any form. Hasker considers the possibility that Geoffrey has been given his new design plan by God or by someone else—some young and inept angel, perhaps—to whom God has delegated this task. True (as Hasker points out), you would not ordinarily expect God to revise Geoffrey's cognitive capacities in the direction of less rather than greater cognitive capacity, but of course that is by no means decisive. Perhaps there is some great good for Geoffrey that God can best achieve by making or permitting this revision.

Of course, if God revises Geoffrey's cognitive design plan, then this new way in which his cognitive system works is not (contrary to the original stipulation) “the result of a random genetic mutation, not directed or planned by anyone.” The example must be so understood, then, that Geoffrey has acquired a new and different max plan but not a new design plan; his cognitive faculties malfunction, and do not work in accord with his design plan. And we must stipulate that he has acquired his new max plan just by chance. Perhaps we must imagine God permitting an angel to determine the relevant part of his cognitive max plan by some random choice method. What then? Shall we say that Geoffrey knows? Our first thought might be that Geoffrey's beliefs are warranted, all right, but warranted by nothing more arcane than induction; he has learned of correlations between how he is (nonstandardly) appeared to or what he is nonstandardly inclined to believe and what his location (as he learns from those around him) is. According to Hasker, however, “the example can be redescribed (focusing on very early examples of Geoffrey's exercise of this faculty) so as to obviate this objection.”

But is it at all clear that under these conditions (when we explicitly stipulate that his faculties are not functioning in accord with his design plan) those first exercises of this faculty do provide Geoffrey with warranted belief? I think not. Perhaps in those cases what he has is not warranted true belief, but beliefs true by lucky accident. But it is equally unclear, I think, that Geoffrey's beliefs lack warrant. What we have is a situation rather like that with the radio whose performance improves after being hit by a random rock (and perhaps we could make the example more striking by adding that Geoffrey's cognitive performance is better than that of the rest of us). One of the purposes involved in Geoffrey's design plan (that is, the provision of true beliefs about his location) is still served by his new max plan, although not in accord with the design plan. In a case like that we are pulled in two directions: on the one hand since that purpose is served, we are inclined to think that the relevant cognitive module is functioning properly; on the other hand since it is not functioning in accord with the design plan we are inclined to think that it is not functioning properly. This hesitation is mirrored, I think, by our hesitation as to whether or not the relevant beliefs have warrant. So the right answer is that the module in question is not functioning properly in the full or strict sense, but is functioning properly in an analogically extended sense; and in a corresponding analogically extended sense of ‘warrant’, his beliefs do have warrant, although they do not have warrant in the full or strict sense. Here we must follow Aristotle (who did not have Hasker in mind): “we should perhaps say that in a manner he knows, in a manner not.”8

E. Gettier, Trade-offs, and Compromises

1. Gettier

Knowledge is justified true belief: so we thought from time immemorial. Then God said, “Let Gettier be”; not quite all was light, perhaps, but at any rate we learned we had been standing in a dark corner. Edmund Gettier's three-page paper9 is surely unique in contemporary philosophy in what we might call ‘significance ratio’: the ratio between the number of pages that have been written in response to it, and its own length; and the havoc he has wrought in contemporary epistemology has been entirely salutary. Never have so many learned so much from so few (pages). What Gettier pointed out, of course, is that belief, truth, and justification are not sufficient for knowledge. Naturally, there have been many attempts to provide a “fourth condition,” many attempts to add an epicycle or two to circumvent Gettier. Sadly enough, however, in most cases the quick response has been another counterepicycle that circumvents the circumvention—which then calls for a counter-counterepicycle, and so on, world without end.10 I don't mean at all to denigrate this often illuminating literature; but what Gettier examples really show, as I shall argue, is that internalist accounts of warrant are fundamentally wanting; hence the added epicycles, so long as they appeal only to internalist factors, are doomed to failure. My aim in this section is to try to understand what really underlies Gettier situations, and then to see how Gettier looks from the vantage point of the present conception of warrant. It will become clear, I think, that Gettier problems do not in fact bedevil it; considering them will nonetheless enable us to deepen our analysis.

Gettier problems come in several forms. There is Gettier's original Smith owns a Ford or Brown is in Barcelona version: Smith comes into your office, bragging about his new Ford, shows you the bill of sale and title, takes you for a ride in it, and in general supplies you with a great deal of evidence for the proposition that he owns a Ford. Naturally enough you believe the proposition Smith owns a Ford. Acting on the maxim that it never hurts to believe an extra truth or two, you infer from that proposition its disjunction with Brown is in Barcelona (Brown is an acquaintance of yours about whose whereabouts you have no information). As luck would have it, Smith is lying (he does not own a Ford) but Brown, by happy coincidence, is indeed in Barcelona. So your belief Smith owns a Ford or Brown is in Barcelona is indeed both true and justified; but surely you can't properly be said to know it. In a similar example (due to Keith Lehrer), you see (at about ten yards) what you take to be a sheep in the field; acting again on the same principle, you infer that the field contains at least one sheep. As it turns out, what you see is not a sheep (but a wolf in sheep's clothing); by virtue of sheer serendipity, however, there is indeed a sheep in a part of the field you can't see. Your belief that there is a sheep in the field is true and justified, but hardly a case of knowledge.

In these cases you infer the justified true belief from a justified false belief (that Smith owns a Ford, that that is a sheep); your justification, we might say, goes through a false belief. Naturally enough, some of the early attempts at repairs stipulated that a belief constitutes knowledge only if it is true, and justified, and its justification does not go by way of inference from a false belief. But this is not the key to Gettier problems. Modify the sheep case so that you don't first form the belief that that is a sheep, but proceed directly to the belief that there is a sheep in that field. Or consider the following case, due originally to Carl Ginet. You are driving through southern Wisconsin, near Waupun. In an effort to make themselves look more prosperous, the inhabitants have erected a large number of fake barns or barn facades—three for each real barn. From the road, these facades are indistinguishable from real barns. You are unaware of this innocent deception; looking at what is in fact a real barn, you form the belief now that's a fine barn! Again, the belief is true; you are justified in holding it; but it seems to many that it does not constitute knowledge. Continue the bucolic motif with the following case. The Park Service has just cleaned up a popular bridle trail in Yellowstone, in anticipation of a visit from a Department of the Interior bigwig. A wag with a perverse sense of humor comes along and scatters two bushels of horse manure on the trail. The official arrives and goes for a walk on the trail, naturally forming the belief that horses have recently been by. Once more, his belief is true and justified, but does not constitute knowledge. Still another Gettier example, this one, oddly enough, predating Gettier's birth by a good twenty years: consider a person who at noon happens to look at a clock that stopped at midnight last night, thus acquiring the belief that it is noon; this belief is true and (we may stipulate) justified, but clearly not knowledge.11

But why not, precisely? What is going on in these cases? One salient point: in each of these cases it is merely by accident that the justified true belief in question is true. It just happens that Brown is in Barcelona, that there is a sheep in another part of the field, that what you are looking at is a barn rather than a barn facade, that the clock stopped just at midnight (and you happened to look at it at exactly noon). In each of these cases, the belief in question could just as well have been false. (As a matter of fact, that's not putting it strongly enough; these beliefs could much better have been false. There are so many other places Brown could have been; there are many more barn facades than barns there in southern Wisconsin; wags don't often or ordinarily take the trouble to make the Park Service look bad; there are so many other times at which the clock could have stopped; and so on.) But what is the force, here, of saying that the beliefs are true by accident?

The basic idea is simple enough: a true belief is formed in these cases, all right, but not as a result of the proper function of the cognitive modules governed by the relevant parts of the design plan. The faculties involved are functioning properly, but there is still no warrant; and the reason has to do with the local cognitive environment in which the belief is formed. Consider the first example, the original Smith owns a Ford or Brown is in Barcelona example. Our design plan leads us to believe what we are told by others; there is what Thomas Reid calls “the Principle of Credulity,”12 a belief-forming process whereby for the most part we believe what our fellows tell us. Of course credulity is modified by experience; we learn to believe some people under some circumstances and disbelieve others under others. (We learn not to form beliefs about a marital quarrel until we have heard from both parties; we discount statements made by television salesmen and candidates for public office.) Still, credulity is part of our design plan. But it does not work well when our fellows lie to us or deceive us in some other manner, as in the case of Smith, who lies about the Ford, or the Wisconsinites, who set out to deceive the city-slicker tourists, or the wag aiming to hoodwink the Interior Department official. It does not work well in the sense that its proper operation in those circumstances does not or need not result in true belief. More exactly, it's not that credulity does not work well in these cases—after all, it may be working precisely according to its specifications in the design plan. It is rather that credulity is designed, we might say, to work in a certain kind of situation (one in which our fellows do not intend to mislead us), and designed to produce a given result (our learning something from our fellows) in that situation. But when our fellows tell us what they think is false, then credulity fails to achieve the aimed-at result.

But of course it isn't just our fellows' intentions that count; even if they are entirely well intentioned, we may still have a Gettier situation. Consider a child whose parents are seriously confused and therefore teach him a battery of falsehoods with only an occasional truth thrown in by accident: such a child does not know the occasional truths he is thus taught, despite the fact that there is nothing wrong with his cognitive equipment and despite the fact that his parents intend to teach him nothing but the unvarnished truth. Gilbert Harman has called our attention to increasingly subtle cases of this general sort, cases where it becomes increasingly difficult to tell whether there is or isn't warrant. CBS evening news reports that General X has been assassinated; you are watching the news and, naturally enough, form the belief that General X was assassinated; as it happens, there was a retraction on a later CBS newscast, but also a still later retraction of the retraction; and General X was in fact assassinated. You don't hear either the retraction or the retraction-retraction and so believe all along that he was assassinated: do you know that he was? What if the first retraction was due merely to the malicious actions of a prankster, so that no one in the know believed it? What if there wasn't a retraction of the assassination report, but only because of some communications malfunction (the courier carrying the report that the general is safe falls into a creek and his message is devoured by piranhas)? It can become monumentally difficult to say whether there is warrant, or sufficient warrant to constitute knowledge in such cases. But what is clearly at issue in these cases is a malfunction of some sort, a deviation from the norm, in the communications chain.

What counts, for the warrant of a belief you form on the basis of credulity, is not just that belief-forming mechanism and its virtues, but also the epistemic credentials the proposition you believe has for the person from whom you acquire it.13 Credulity is designed to operate in the presence of a certain condition: that of our fellows knowing the truth and being both willing and able to communicate it. In the absence of that condition, if it produces a true belief, it is just by accident, by virtue of a piece of epistemic good luck: in which case the belief in question has little or nothing by way of warrant.

Not nearly all Gettier examples involve credulity, of course: consider the sheep in the field case, or the clock that stopped at midnight. In these cases, just as in the case of the examples involving credulity, there is a sort of glitch in the cognitive situation, a minor infelicity, a small lack of fit between cognitive faculties and cognitive environment. The locus of the infelicity, in these cases too, is not the cognitive faculties of the person forming the justified true belief that lacks warrant; they function just as they should. The locus is instead in the cognitive environment; it deviates, ordinarily to a small degree, from the paradigm situations for which the faculty in question has been designed. (In brain-in-vat cases the cognitive environment is deceptive, but on a massive scale; hence brain-in-vat cases are not Gettier cases.) Thus the wolf is in sheep's clothing, or Smith is lying, or the Wisconsinites are deceptively giving the appearance of affluence, or the wag is having his little joke. Take it, for the moment, that this notion of a design plan is more than metaphor: imagine that we have in fact been consciously designed (by God perhaps): then the designer of our cognitive powers will have designed those powers for certain situations. They will be designed to produce mostly true beliefs in the main sorts of situations that (as the designer sees it) their owners will ordinarily encounter. What we have in Gettier situations is a belief's being formed in circumstances differing from the paradigm circumstances for which our faculties have been designed.

So the first thing to see about the Gettier situations is that the true beliefs in these situations are true by accident, not by virtue of the proper function of the faculties or belief-producing mechanisms involved. And the second thing to see is that in the typical Gettier case, the locus of the cognitive glitch is in the cognitive environment: the latter is in some small way misleading. The clock has unexpectedly stopped, the usually reliable Smith is lying, the wolf is dressed in sheepskin, and so on. (And here we might note that Gettier examples come in degrees, the degree in question being a function of the degree of departure from the paradigm circumstances for which our cognitive equipment is designed.)

But is this really an essential feature of Gettier situations—that is, is it really essential to Gettier situations that the glitch in question be a (misleading) feature of the cognitive environment? I think not. Consider the following Gettier example, attributed by Chisholm to Meinong (and again predating Gettier's birth). An aging Austrian forest ranger lives in a cottage in the mountains with his daughter. There is a set of wind chimes hanging from a bough just outside the kitchen window; when these wind chimes sound, the ranger forms the belief that the wind is blowing. As he ages, his hearing (unbeknownst to him) deteriorates; he can no longer hear the chimes. He is also sometimes subject to small auditory hallucinations in which he is appeared to in that wind-chimes way; and occasionally these hallucinations occur when the wind is blowing.14 In these cases, then, he has justified true belief but not knowledge. The problem is not with the environment, however, but with his hearing. So it isn't right to think of Gettier situations as involving only cognitive environmental pollution; the problem can be with the agent's faculties rather than with the environment.

Well, what is essential here? First, of course, Gettier situations are ones in which the believer is justified15 in her beliefs; she is within her rights; she has done all that could be expected of her, and the unfortunate outcome, the lack of warrant, is in no way to be laid to her account. But there is also something broader. In chapter 1 of my Warrant: The Current Debate, I argued that internalism essentially involves a view about cognitive accessibility: what constitutes or confers warrant, on internalist views, must be accessible, in some special way, to the agent. Note that in all these Gettier cases (including that of the aging Austrian forest ranger) the cognitive glitch has to do with what is not accessible to the agent in this way. In a Gettier case, it is as if everything connected with what is in this sense internal to the agent is going as it ought; but there is a relatively minor hitch, a relatively minor deviation from the design plan in some other aspect of the whole cognitive situation—perhaps in the environment, but also, possibly, in some aspect of the agent's cognitive equipment that is not internal in this sense. What is essential to Gettier situations is the production of a true belief that has no warrant—despite conformity to the design plan in those aspects of the whole cognitive situation that are internal, in the appropriate sense, to the agent. In these Gettier situations there is conformity to the design plan on the part of the internal aspects of the cognitive situation, but some feature of the cognitive situation external (in the internalist's sense) to the agent forestalls warrant.

Gettier problems afflict internalist epistemologies, and they do so essentially. The essence of the Gettier problem is that it shows internalist theories of warrant to be wanting. What Gettier problems show, stated crudely and without necessary qualification, is that even if everything is going as it ought to with respect to what is internal (in the internalist sense), warrant may still be absent. The real significance of Gettier problems, therefore, is not that they are relatively minor technical annoyances that prevent us from getting a counterexample-proof analysis of knowledge; their real significance is that they show justification, conceived internalistically, to be insufficient for warrant. We should therefore expect that an externalist account such as the present account will enjoy a certain immunity to Gettier problems (unless, of course, we take the term ‘Gettier Problem’ so widely that any proposed counterexample to the sufficiency of a proposed account of warrant counts as a Gettier problem for that proposal). And, indeed, some of the Gettier cases—the Case of the Aging Ranger, for example—are immediately ruled out on the grounds that the beliefs in question are not formed by faculties functioning properly.

Still, thinking about Gettier cases enables us to see more of the shape and complexity of the design plan and to learn more about the conditions under which a belief acquires warrant. Different Gettier cases must be treated differently. First, consider again the original Smith owns a Ford or Brown is in Barcelona case, the sort of case that involves testimony and the operation of credulity. Here what we see is that we need an addition to the official account of warrant; more exactly, something implicit in it must be made explicit. Some cognitive mechanisms or faculties take as input other beliefs: inference is the obvious example, but something similar goes on with respect to credulity. In the latter case, it isn't that you necessarily form the belief Paul says so and so when forming the belief that so and so on the basis of Paul's saying it; that belief—that Paul says so and so—may never come to consciousness. Still, you are in some way aware that Paul (or someone) says so and so, and it is on the basis of that awareness that you form the belief that so and so. Now in the case of inference, it is clear that the status, with respect to warrant, of the inferred belief depends upon the status, with respect to warrant, of the belief from which the inference is drawn. But something similar is true for testimony. The status of the belief you form on the basis of testimony depends upon the status of this proposition in the noetic structure of her from whom you get it. I say ‘proposition’ rather than ‘belief’, because, of course, in some of the Gettier cases the person from whom you acquire your belief by testimony does not himself hold it—Smith bragging about his Ford, for example, or the proud but impecunious Wisconsinites. In the case of such second-level belief-forming mechanisms (or faculties) as inference and credulity, then, the notion of proper function must include the input proposition's having the right sort of credentials: it must be a belief on the part of the testifier, and it must be a belief that has warrant for her. In the central and paradigm cases, you will have warrant for a belief that you acquire by way of testimony only if the person from whom you acquire the belief himself holds the belief and has warrant for it.16
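
The recursive character of this condition (the hearer's warrant depends on the testifier's, whose warrant may in turn rest on testimony) invites a small sketch; the classes and the flattening of 'warrant from one's own faculties' into a single flag are my own simplifying assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Believer:
    believes_p: bool
    warrant_for_p: bool                     # warrant from the believer's own faculties
    heard_p_from: Optional["Believer"] = None

def testimonial_warrant(hearer: Believer) -> bool:
    """Paradigm-case condition: a belief taken on testimony has warrant
    only if the testifier holds the belief and has warrant for it."""
    source = hearer.heard_p_from
    if source is None:                      # basic, non-testimonial case
        return hearer.believes_p and hearer.warrant_for_p
    return source.believes_p and testimonial_warrant(source)

smith = Believer(believes_p=False, warrant_for_p=False)   # Smith is lying
you = Believer(believes_p=True, warrant_for_p=False, heard_p_from=smith)
assert testimonial_warrant(you) is False    # no warrant by way of a lying testifier
```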

There is another way in which reflection on Gettier examples can give us a deeper understanding of warrant. Consider the other Gettier cases—the case of the fake barns, or the wag in Yellowstone, or the clock that stopped at midnight. In each of these cases, what we saw was that the local cognitive environment deviated in some more or less moderate fashion from the paradigm situations for which the faculty was designed. But can we get a deeper grasp of the situation here? I think so, but to do so we must turn to trade-offs and compromises.

2. Trade-offs and Compromises

Suppose we begin by adding to the Gettier cases the following sorts of situations: straight sticks look bent in water; it can falsely appear that there is an oasis just a mile away; a dry North Dakota road looks wet on a hot summer day; an artificial apple among the real apples in the bin can deceive almost anyone (at least so long as she doesn't touch it); a hologram of a ball looks just like a ball and can confuse the unwary perceiver; in those famous Müller-Lyer illusions, the shorter line looks the longer because of the direction of the arrow heads. There seems to be no perceptual malfunction in these cases. Still, if you are a desert tyro and, due to a mirage, form the belief that there is an oasis about a mile away, your belief will have little warrant. Even if it happens to be true (by happy coincidence there is an oasis a mile away in a different direction), you don't know it. The same goes in the other cases; so here we have proper function in what certainly appears to be the environment for which our faculties are designed; but the result is not warrant. Why not?

The answer involves trade-offs and compromises. You want to minimize the time you spend mowing your lawn; you therefore want a mower that cuts a wide swath; this requires a large mower, and the larger the better. (Perhaps one that cuts a forty-foot swath will mow your entire lawn in two passes.) On the other hand, you want to be able to maneuver the mower, you want to keep its cost down, and you want to be able to park it in your garage; these requirements place limitations on its size. The mower you want, therefore, will be a result of trade-offs and compromises between large size and those other desiderata. You are designing an automobile; you want it to be fast (and the faster the better, at any rate up to two-hundred miles per hour) but you also want to maximize safety. The former calls for light construction, but the latter for substantial bumpers and a strong and heavy frame; the design plan you adopt, therefore, will have to be a result of trade-offs and compromises.

But the same goes in the cognitive case. A belief has warrant for you (so I say) only if it is formed by your faculties functioning properly in an appropriate epistemic environment, only if the right statistical probabilities hold, and only if the cognitive processes that produce that belief have the production of true beliefs as their purpose—that is, only if the segment of the design plan governing its production is aimed at the production of true beliefs. But now consider these perceptual illusion cases, and for definiteness imagine that our faculties have actually been designed; and then think about these matters from an engineering and design point of view. The designer aims at a cognitive system that delivers truth (true beliefs), of course; but he also has other constraints he means or needs to work within. He wants the system to be realized within a certain sort of medium (flesh, bone, and blood rather than plastic, glass, and metal), a humanoid body (and one of a certain relatively modest size), in a certain kind of world, with certain kinds of natural laws or regularities. This means that cognition will be mediated by brain and neural activity. He also wants the cognitive system to draw an amount of energy consonant with our general type of body, and to require a brain of only modest size (given too large a brain, we might have to support our heads with both hands, so that mountaineering or tennis or basketball would be utterly out of the question). No doubt there are reasons why he wants to do things this way (that is, in accord with these constraints), but we need not try to guess what they might be. From an evolutionary perspective, we can see these constraints as originating not in the conscious design of God but in whatever sources of design evolutionists are fond of citing; and evolution, like God, is obliged to make certain trade-offs. (A cognitive system delivering truth most of the time could be adaptively advantageous; on the other hand, too large a brain will reduce mobility and make a creature easy prey to predators.)

So the designer's overall aim is at truth, but within the constraints imposed by these other factors; and this may require trade-offs. It may not be possible, for example, to satisfy these other constraints and also have a system that (when functioning as it is designed to function) produces true beliefs in every sort of situation to be encountered in the cognitive environment for which it is designed. There are an enormous number of different situations arising within the cognitive environment for which the system is designed; and it might be impossible, given the constraints, to handle them all in the most desirable way.

Thus consider the straight stick in water. A visual system like ours works well (that is, fulfills its function, produces true belief nearly all the time) in a wide variety of circumstances, but not when (for the first time, or without other knowledge) someone is looking at a straight stick partly immersed in water: such a stick has a misleading bent look to it. Now it might be possible to design an epicycle that would circumvent this problem—perhaps it could correct the way the stick looks by taking advantage of the way in which light is refracted by water and other similar liquids. (Or maybe it would be possible to circumvent the problem partially: maybe with the partially corrected system there would still be situations that lead to false beliefs, but fewer of them.) And of course this is only one of many such situations: there are all of the other sorts of perceptual illusion cases, as well as many cases (false testimony, say) that do not involve perception. Perhaps a system could be designed in which there would be nothing misleading in any of these situations: but only at a price, and perhaps the price was not right.

Then the thing to do would be to trade off some accuracy for efficiency (and the satisfaction of these other constraints). You would want to design a system that worked well (that is, produced true beliefs) over as large a proportion as possible of the situations in which its owners will find themselves, consistent with satisfying those other constraints. (The other constraints could be absolute and nonnegotiable, or they might also be subject to negotiation.) In this way you will wind up with a system that works well in the vast majority of circumstances; but in a few circumstances it produces false belief. (Of course, you add the important feature of learning from experience in order to mitigate the doleful effects of the compromises: after a couple of trials you no longer believe that the road is wet, that there is an oasis just a mile away, that the stick is bent, or what Paul, that habitual deceiver, says; you learn to be on the lookout for fake fruit and holograms.)
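This engineering picture can be made concrete with a small sketch. The following Python fragment (a toy model with wholly invented numbers and names, a single collapsed 'cost' standing in for brain size, energy draw, and the rest; nothing in it belongs to the account itself) selects, from among candidate designs, the one that maximizes the proportion of situations yielding true belief, subject to a budget of constraints:

    # A hypothetical illustration of design as constrained optimization: each
    # candidate is scored by the fraction of situations in which it yields true
    # belief; the nonnegotiable constraints are collapsed into a single budget.
    candidate_designs = [
        {"name": "corrects stick-in-water", "truth_ratio": 0.9990, "cost": 140},
        {"name": "no illusion correction",  "truth_ratio": 0.9985, "cost": 100},
        {"name": "corrects all illusions",  "truth_ratio": 1.0000, "cost": 900},
    ]

    BUDGET = 200  # invented figure for the combined constraints

    feasible = [d for d in candidate_designs if d["cost"] <= BUDGET]
    best = max(feasible, key=lambda d: d["truth_ratio"])
    print(best["name"])  # prints the best compromise, not the perfect truth-tracker

On these invented figures the winning design corrects the stick-in-water case but tolerates the remaining illusions: a system that works well in the vast majority of circumstances, at a price it can afford.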

And now think about those triples of the design plan where the cognitive response R is misleading: cases where the circumstance member M involves the conditions under which the perceiver is subject to mirages or other perceptual illusions. Why does the perceptual system work this misleading way in these circumstances? Well, the answer is not that its working this way contributes directly to the main goal or purpose of the whole perceptual system, namely, the provision of true perceptual beliefs. Instead, its working this way is a trade-off or compromise between fulfilling that purpose and satisfying those other constraints. The designer does not join the circumstance member M with the response member R because this contributes to the formation of true beliefs (it doesn't), but because organizing the whole system this way, though misleading in a few situations, helps enable the system as a whole to satisfy those constraints. We might say that in the misleading cases, R is joined with M not in order to satisfy the main purpose of the perceptual system but in order to satisfy those other constraints. Or perhaps more accurately (since, in a way, the aim at truth on the part of the whole system is also the reason for R's being joined with M) the thing to say is that R is joined with M not in order to serve directly the main purpose of providing true beliefs (it doesn't do that) but to do so indirectly. It indirectly serves that purpose by being part of the best overall compromise between the perceptual system's aim at producing true beliefs and those other constraints.

So what shall we say about warrant, with respect to these misleading responses? Just this: a belief has warrant for you only if the segment of the design plan governing its production is directly rather than indirectly aimed at the production of true beliefs (and an addition to that effect must be made to the official account of warrant). If a given response is present only because it is a part of the best compromise and not because it directly serves the purpose of producing true beliefs, then the belief in question does not have warrant.
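The amended condition can be displayed in miniature. In the following Python sketch (my own toy rendering: the fourth field, marking a segment as directly or only indirectly aimed at truth, is an illustrative device, not part of the official machinery), a belief formed by an indirectly aimed segment fails the test even under proper function in a right environment:

    from dataclasses import dataclass

    # A toy rendering of a design-plan segment: the circumstance-response-purpose
    # triple, plus a flag recording whether the segment serves the aim at truth
    # directly or only as part of the best overall compromise.
    @dataclass
    class Segment:
        circumstance: str
        response: str                    # the belief formed in that circumstance
        purpose: str
        directly_aimed_at_truth: bool

    def has_warrant(seg: Segment, proper_function: bool, right_environment: bool) -> bool:
        # Warrant requires proper function, the right environment, and a segment
        # directly (not merely indirectly) aimed at producing true beliefs.
        return proper_function and right_environment and seg.directly_aimed_at_truth

    mirage = Segment("hot road viewed at a distance", "the road ahead is wet",
                     "truth, served only via the best compromise", False)
    print(has_warrant(mirage, proper_function=True, right_environment=True))  # False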

This is how to think of beliefs produced by perceptual illusions; but the idea can be generalized to a wider class of cases including, for example, false testimony. In all these cases there seems to be proper function but little warrant. If we add that the belief in question is true, then we have ‘quasi-Gettier cases’. (Quasi-Gettier cases, because Gettier cases properly so-called have to do with justification, as the alleged source of warrant, rather than with proper function.) Take a perceptual illusion or false testimony case and add that the belief produced is true (but by accident): then what you have is a quasi-Gettier case. The belief in question has little warrant and, though true, does not constitute knowledge; for a belief has warrant for you only when it is produced by a segment of the design plan directly aimed at truth.

F. Defeaters and Overriders

Pollock and Chisholm emphasize the importance of defeaters. You read in a usually reliable guidebook that the University of Aberdeen was founded in 1405 A.D.; you form the belief that it was founded then; that belief has a certain degree of warrant for you. You later read in an authoritative history of Aberdeen that the university was founded in 1495; you now no longer believe that it was founded in 1405, and (as we may put it) the warrant that belief had for you has been defeated. If things are going properly, you will no longer believe the first proposition, and will perhaps not believe the second as firmly as you would have, had you not first believed the first. Here the first belief gets defeated, and its warrant disappears by virtue of your getting warrant for another belief inconsistent with it. Following John Pollock,17 call such defeaters rebutting defeaters: the paradigm case of rebutting defeat occurs when you first have evidence for a certain proposition, and then get evidence for its denial. But there are also (following Pollock) undercutting defeaters. You visit a factory: the items coming down the assembly line look red and you form the belief that they are indeed red, a belief that has warrant by virtue of the way you are appeared to. You are then told by a local authority that this part of the assembly line is a quality control module, where the items are irradiated by red light in order to make it easier to detect a certain kind of flaw. You then no longer believe that the items you are looking at are red—not because you have reason to believe that they are some other color, but because your belief that they are red has been undermined by what you were told.

The basic idea here is that the design plan is such that (for example) when you are appeared to a certain way you will form a certain belief: when you are appeared to redly, you will (ceteris paribus) form the belief that you see something red. But the design plan also specifies circumstances under which, even though you are appeared to redly, you won't or don't form that belief. These circumstances would include, for example, your learning that the thing in question, despite appearances, is not red (rebutting defeater), or your coming to believe that the thing would have looked that way even if it were not red (undercutting defeater). Again, following Pollock, we may note that defeaters of either kind are themselves sometimes defeated by further defeaters (of either kind): ‘defeater-defeaters’, as Pollock calls them. The basic idea is clear enough; but there are many subtleties, much to be said by way of development and qualification, much to explore.18 This defeater structure is to be found across the length and breadth of our cognitive structure, and nearly any belief is possibly subject to defeat. I say nearly any belief; perhaps a few beliefs—such beliefs about my own mental life as that I am in pain or that I am being appeared to in some way, or that I exist, together with those that are wholly self-evident and accepted with maximal degree of belief, for example—are not thus subject to defeat. The defeater system works in nearly every area of our cognitive design plan and is a most important part of it; we must therefore explicitly understand the proper function condition of warrant as applying to the relevant portions of the defeater system.
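The recursive character of defeat and defeater-defeat can be exhibited in a few lines. This Python sketch (a toy model of my own; it assumes the defeat relations form a finite acyclic graph and ignores degrees of warrant altogether) counts a belief as undefeated just in case every defeater attacking it is itself defeated:

    # A toy model of defeat using the factory example: the undercutter attacks
    # the perceptual belief, and a hypothetical defeater-defeater attacks the
    # undercutter. Assumes the graph of defeat relations is finite and acyclic.
    defeaters = {
        "the items are red": ["told: they are irradiated by red light"],
        "told: they are irradiated by red light": ["told: the red lamp is broken"],
        "told: the red lamp is broken": [],
    }

    def undefeated(belief: str) -> bool:
        # A belief stands just in case every defeater of it is itself defeated.
        return all(not undefeated(d) for d in defeaters.get(belief, []))

    print(undefeated("the items are red"))  # True: the undercutter is itself defeated

Remove the defeater-defeater from the table and the perceptual belief goes down; restore it and the belief is reinstated, just as the informal description suggests.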

The defeater system seems to be aimed at the formation of true beliefs (and the avoidance of false beliefs). You no longer believe that the items coming down the line are red, after hearing that they are irradiated with red light. Presumably this reflects the fact that the statistical probability of a thing's being red, given that it appears red to you, is high, but the statistical probability of its being red, given both that it appears to you to be red and that you are told by an authority that it is irradiated by red light, is not high. But there are similar structures that do not seem to be aimed at truth. To return to a previous example, you might be much more optimistic about your recovery from a serious illness than the statistics in your possession would warrant; here this excessive optimism, presumably, is aimed at survival rather than truth. And here, of course, proper function and warrant do not go together. You know the statistics: nine out of ten cases of the disease are fatal; if it were someone else who had the disease, you would form the belief that his chances were about one out of ten. But there is the optimistic overrider; so you are much more sanguine about your own recovery than the statistics warrant. We might say that your statistical evidence is ‘defeated’ by the optimistic overrider; it might be better, however, to say that it is overridden, reserving ‘defeat’ for the activity of structures of this kind that are aimed at true belief.
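The contrast can be put in schematic probabilistic notation (the rendering is mine; the inequality, not any particular numbers, carries the weight):

\[
P(\text{red} \mid \text{appears red}) \text{ is high}, \quad
P(\text{red} \mid \text{appears red} \,\&\, \text{told: irradiated by red light}) \text{ is not},
\]

whereas no such inequality lies behind the optimistic overrider; its operation answers to survival, not to the relevant probabilities.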

This optimistic overrider, of course, is part of the design plan; there is no dysfunction here, and the belief in question is formed by the relevant faculties functioning properly. Nevertheless, of course, the belief that you are very likely to recover has little by way of warrant; indeed, if by some chance the optimistic overrider failed to function and, by virtue of that dysfunction, you formed the belief that your chances were about one out of ten, it is that belief that would have warrant. What confers warrant is the proper function of faculties aimed at the production of true beliefs; when such beliefs are overridden by beliefs that are the result of the proper function of modules not aimed at truth (wishful thinking, for example), the resulting beliefs do not have warrant.

II. Two Concluding Comments

We should pause for a moment to marvel at the enormously articulate, subtle, and complex nature of our cognitive faculties. These faculties produce beliefs on an enormously wide variety of topics—our everyday external environment, the thoughts of others, our own internal lives (someone's internal musings and soliloquies can be complex and interesting enough both to him and others to be worth an entire novel), the past, mathematics and logic, what is probable and improbable, what is necessary and possible, beauty, right and wrong, our relationships to God, and a host of other topics. They work with great subtlety and in a thousand ingeniously different ways. As we shall see in more detail in the next chapters, we believe on the basis of sense experience, testimony, memory, mathematical and logical intuition, philosophical intuition, introspection, extrospection (whereby we come to know the thoughts and feelings of others), induction, evidence from other beliefs, and (so I say, anyway) Calvin's sensus divinitatis. Our faculties work so as to produce beliefs of many different degrees of strength, ranging from the merest shadow of an inclination to believe all the way to complete certainty. Our beliefs and the strength with which we hold them, furthermore, respond with great delicacy to changes in experience—to what people tell us, to perceptual experience, to what we read, to further reflection, and so on. There is that elaborate and highly developed defeater system, and the overrider system. There is also the fact that while most of our belief formation goes on automatically, it is sometimes also possible for us to take a hand in it: realizing that I am too easily impressed by someone in a white coat, I make corrections for it.

I spoke of truth as what (most of) our cognitive faculties are aimed at; but often it is verisimilitude rather than the truth itself that is aimed at. Further, falsehood can be a vehicle for truth; a falsehood can lead us closer to the truth than the sober truth itself; thus when the truth is too difficult for us to grasp, we may nonetheless be able to get close to it via something that is strictly speaking false. And finally, there are both maturation and learning. Cognitive proper function at the age of three is quite different from cognitive proper function at the age of thirty; a small child will have bizarre beliefs, but not necessarily by way of cognitive malfunction. There is that whole series of snapshot design plans, with a master design plan specifying which of them is appropriate at which age and under which circumstances. In addition there is learning, which also, in a way, modifies the design plan. More exactly, the design plan specifies how learning new facts and skills will lead to changes in cognitive reaction.

It is because of this complication, articulation, and fine detail that simple formulas for rationality, warrant, and their like inevitably fail. Is it suggested that it is irrational to believe an inconsistent set of propositions, a set from which it is possible to deduce a contradiction by, say, ordinary first-order logic? But (before he was corrected by Russell) was Frege irrational in believing his axioms for set theory? True enough, once he clearly saw the contradiction, then it would have been irrational for him to persist (although even this requires qualification); so shall we say that it is irrational to believe a set of propositions you know or believe to be contradictory? But what about the Paradox of the Preface (see my Warrant: The Current Debate, pp. 145ff.): can't I perfectly sensibly believe, of some set S of propositions all of which are believed by me, that at least one of them is false? But then I believe all the members of S∗, the union of S with {at least one member of S is false}; S∗ is inconsistent (there is no possible world in which all its members are true) and I know that it is; can't I nonetheless be perfectly rational in so believing? Indeed, can't it be the case that each of these beliefs has a great deal of warrant for me? So sometimes it is quite rational to believe the members of an inconsistent set; and sometimes, indeed, it is rational to do so in the full knowledge that the set is inconsistent. Of course, other inconsistent sets are such that (if I am subject to no dysfunction) I could not come to accept all their members—for example, explicit contradictions; and one who succeeds in believing the members of such sets will indeed be irrational. But there is no simple formula to tell us which is which.
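A bit of arithmetic shows why such belief can be perfectly rational (an illustration with invented numbers, assuming for simplicity that the members of S are probabilistically independent). If S contains a thousand propositions, each of probability 0.99 for me, then

\[
P(\text{every member of } S \text{ is true}) = 0.99^{1000} \approx 4 \times 10^{-5},
\]

so the added proposition, that at least one member of S is false, is itself overwhelmingly probable for me, even though S∗ as a whole is logically inconsistent.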

Another example. William Alston raises the following question. Suppose I believe A and do so on the basis of some ground B, which is in fact a reliable indicator of A: must I know or justifiably believe that B is indeed a reliable indicator, in order to be justified in believing A? If we insist on this as a general condition of justification, we fall into a nasty infinite regress; he therefore doesn't add it. Alston discusses this question with respect to justification; let's instead think about it with respect to warrant. Our noetic establishment is enormously varied and complex; it has many quite different sectors; in some of those sectors, warrant requires that you justifiably or warrantedly believe that B is a reliable indicator of A, and in others it does not. Suppose, as Alston suggests, I believe 2 + 1 = 3 on the basis of its just seeming utterly obvious to me; in such cases warrant does not require that I have any views at all as to whether its seeming that way to me is a reliable indication of its actually being that way. But in other cases things are quite different. I may believe that a bear has passed by on the basis of the way the brush looks; and to have warrant for this belief, I must know or warrantedly believe (or at any rate have known or warrantedly believed) that the brush's having that particular crushed sort of look is indeed a reliable indicator (in this particular kind of forest area and at this time of the year) that a bear has been by. I believe that an electron is sporting in the cloud chamber; I believe this on the basis of the trail I see in the cloud chamber. I believe that the child has measles, and so believe on the basis of certain symptoms. If these beliefs are to have warrant, I must warrantedly believe (or have believed) that the ground in question is a reliable indicator of the truth of the belief in question.

In many other cases, such a belief is not necessary for warrant. Consider memory beliefs, for example. The indication on the basis of which I believe is just its seeming so to me (see the next chapter); and I can have warrant for a memory belief even if I have never raised the question whether its seeming to be like that is a good or reliable indication of its being like that. (Of course, if I do think of or raise that question, then, perhaps, to have warrant I must believe that my being inclined toward such a memory belief is a reliable indication of its truth.) And suppose I know that I suffer from a certain kind of memory lapse, so that there are two noticeably different sorts of phenomenology accompanying my inclinations to memory beliefs, one of which usually accompanies inclinations to false beliefs: then too perhaps to have warrant for a given memory belief I must pay attention to the sort of phenomenology accompanying my inclination to that belief, and must believe or have believed that that phenomenology is the kind that goes with reliable inclinations. So there isn't anything at all like a simple, single answer to the question whether warrant for grounded beliefs requires that the subject know that the ground is an indicator of the belief; sometimes this is required and sometimes it is not. And the reason is not far to seek. In some cases it is perfectly in accord with proper cognitive function to believe A on the basis of B even if you have never had any views at all as to whether B is an indicator of A; in a wide variety of other cases a properly functioning human being will believe A on the basis of B only if she has first learned that B reliably indicates A; in certain cases where you are aware of partial malfunction, to have warrant you will have to believe of a ground that it is a reliable indicator, even though in the absence of such malfunction you would not have had to have any views at all on the subject. Of course there will be many other complications. And the point is that it is the complex, highly articulated nature of the human design plan that makes impossible simple generalizations of these sorts about rationality and warrant.
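The sector-relativity of the requirement can be tabulated in a small sketch (the entries are my own illustrative summaries of the cases above; they are not offered as exhaustive or as official doctrine):

    # Whether warrant for believing A on ground B requires the (present or past)
    # warranted belief that B reliably indicates A -- illustrative entries only.
    requires_indicator_belief = {
        "elementary arithmetic (its seeming utterly obvious)": False,
        "memory (its seeming so to me)": False,
        "bear sign (the crushed look of the brush)": True,
        "cloud-chamber trail (an electron passing)": True,
        "medical symptoms (the child has measles)": True,
        "memory, given known partial malfunction": True,
    }

    for sector, required in requires_indicator_belief.items():
        print(f"{sector}: {'required' if required else 'not required'}")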

By way of concluding this initial overview, I wish to ask and briefly answer one final question. Is the view I have outlined an example of that ‘naturalized’ or maybe ‘naturalistic’ epistemology so much in contemporary vogue? This is a vexed question; it inherits its vexation from a prior question—namely, What is it for an epistemology to be naturalized or naturalistic? The question is difficult, but perhaps the essence of a naturalistic approach to epistemology has to do with normativity. Perhaps the mildest form of naturalism would be one in which it is denied that warrant is to be understood in terms of deontology. Mild as it is, this would still signal a radical break with the received epistemological tradition; as I argued in the first chapter of Warrant: The Current Debate, the dominant tradition in twentieth-century Anglo-American epistemology conceives warrant in terms of justification, which, in turn, is at bottom conceived in terms of the fulfillment or aptness for fulfillment of epistemic duty. At the other end, the most extreme version of naturalism in epistemology eschews normativity altogether, seeking to replace traditional epistemology (with its concern with justification, rationality, reasonability, and their normative colleagues) by descriptive psychology; this seems to be W. V. Quine's suggestion.19 A more moderate version—stronger than the mildest but less radical than Quine's—is suggested by Hilary Kornblith, who says that according to the “naturalistic approach to epistemology,” “Questions about how we actually arrive at our beliefs are thus relevant to questions about how we ought to arrive at our beliefs. Descriptive questions about belief acquisition have an important bearing on normative questions about belief acquisition.”20 We are to take it, I presume, that the ‘ought’ in the quotation need not be taken deontologically; the normativity involved is not necessarily that of duty and permission.

The view I mean to urge is, of course, naturalistic in that first and mildest sense. But it is also naturalistic in Kornblith's more moderate sense. For warrant is indeed a normative notion. The sort of normativity involved is not that of duty and obligation; it is normativity nonetheless, and there is an appropriate use of the term ‘ought’ to go with it. This is the use in which we say, of a damaged knee, or a diseased pancreas, or a worn brake shoe, that it no longer functions as it ought to. This is the use in which we say that a human heart ought to beat between forty and two hundred times per minute, and that your car's choke ought to open (and the engine ought to throttle back to 750 RPM) when it warms up. Now will it be the case that “questions about how we actually arrive at our beliefs are relevant to questions about how we ought to arrive at our beliefs?” Surely so: at any rate if we construe ‘ought’ as referring to the normativity going with warrant, and ‘actually arrive at’ as ‘actually arrive at when there is no cognitive malfunction’. Indeed, thus construed, the first question is maximally relevant to the second, being identical with it. So the present account is naturalistic in Kornblith's moderate sense.

But what about that more radical Quinian view according to which normative concerns in epistemology should give way to descriptive psychology? The first thing to see is that Quine's view imports more normativity than meets the eye. The descriptive psychologist typically delivers herself of functional generalizations: “when a human organism O is in state S and conditions C obtain,” she says, “there is a probability p that O will go into state S∗.” But these functional generalizations taken neat are false; they typically won't hold of human beings who have just been attacked by sharks, or transported to Alpha Centauri, or suffered a stroke. They should therefore be seen as containing an implicit qualification: when a properly functioning human organism in an appropriate environment is in state S, then … The very sort of normativity involved in warrant, therefore, runs riot in descriptive psychology. Quine's radical naturalism is presumably the view that the only sort of normativity appropriate to epistemology is the sort to be found in natural science—descriptive psychology, perhaps. But then the question whether the view I advocate is naturalistic in the radical Quinian fashion is really the question whether a belief's having warrant just is its being produced by properly functioning faculties in the right sort of cognitive environment—or whether, on the other hand, a belief's having warrant supervenes upon such states of affairs. From Quine's perspective, this will no doubt be a distinction without a difference.
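The implicit qualification can be displayed schematically (the rendering is mine, not a formula drawn from Quine or from any actual psychology). The generalization is not

\[
P(S^{*} \mid S \,\&\, C) = p
\]

but rather

\[
P(S^{*} \mid S \,\&\, C \,\&\, O \text{ is functioning properly in an appropriate environment}) = p,
\]

and it is the added conjunct that imports precisely the normativity at issue.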

So the view I propose is a radical naturalism: striking the naturalistic pose is all the rage these days, and it's a great pleasure to be able to join the fun. The view I urge is indeed best thought of as an example of naturalistic epistemology; here I follow Quine (if only at some distance). Naturalistic epistemology, however, is ill-named. In the first place, it is quite compatible with, for example, supernaturalistic theism; indeed, the most plausible way to think of warrant, from a theistic perspective, is in terms of naturalistic epistemology.21 And second (as I shall argue in chapters 11 and 12), naturalism in epistemology flourishes best in the context of a theistic view of human beings: naturalism in epistemology requires supernaturalism in anthropology. This claim is perhaps a bit less congenial to the spirit of Quine's enterprise.

By way of concluding recapitulation, therefore: as I see it, a belief has warrant for me only if (1) it has been produced in me by cognitive faculties that are working properly (functioning as they ought to, subject to no cognitive dysfunction) in a cognitive environment that is appropriate for my kinds of cognitive faculties, (2) the segment of the design plan governing the production of that belief is directly (rather than merely indirectly) aimed at the production of true beliefs, and (3) there is a high statistical probability that a belief produced under those conditions will be true. Under those conditions, furthermore, the degree of warrant is an increasing function of degree of belief. This is intended as an account of the central core of our concept of warrant; as we have seen, there is a penumbral area about that central core where there are a hundred analogical extensions of it; and beyond the penumbral area, still another belt of vagueness and imprecision, a host of possible cases and circumstances where there is really no answer to the question whether what we would have would be a case of warrant. What we need to fill out the account is not an ever-increasing set of additional conditions and subconditions; that way lies paralysis. What we need instead is an explanation of how the account works in the main areas of our cognitive life. It is to just such an explanation that we now turn.