I turn now to a more recent entry into the lists, a contestant that, compared with classical foundationalism, has only lately joined the fray. Some (for example, L. J. Savage) call this new arrival ‘personalism’; others call it ‘subjectivism’; but most call it ‘Bayesianism’, on the grounds that its devotees typically recommend change of belief in accordance with a generalization of Bayes’ Theorem.^{1} Perhaps the best name from our present perspective would be ‘probabilistic coherentism’, but I shall defer to established custom and concur in ‘Bayesianism’. Bayesianism goes back essentially to Frank Plumpton Ramsey's 1926 essay “Truth and Probability.”^{2} Although there are many varieties of Bayesianism, few explicitly raise the epistemological question of the nature of warrant. Nonetheless many fascinating issues arise here, issues that bear at least obliquely on the main topic of this study.
Now there are certain difficulties in treating Bayesianism as an address to the traditional epistemological problems with which we have been dealing. First, contemporary studies in probability in general and Bayesianism in particular present an extensive and daunting literature, much of it directed to specialized and technical problems of one sort or another. And while many of these problems are of great intrinsic interest,^{3} their bearing on the question of the nature of warrant is not always easy to see. Second, Bayesians tend to speak, not of warrant or positive epistemic status, but of rationality; for example, they claim that a system of beliefs is rational only if it is coherent, that is, conforms to the calculus of probabilities (see pp. 119ff.). But how is rationality related to warrant? The conditions for rationality proposed by Bayesians and their sympathizers (for example, coherence, strict coherence, changing belief by conditionalization or Jeffrey's “Probability Kinematics,” van Fraassen's Reflection) do not seem initially plausible as individually necessary or jointly sufficient for warrant. Still, these conditions clearly bear some interesting relations to warrant; they are also related to other nearby notions such as justification, the several varieties of rationality,^{4} epistemic duty and epistemic integrity, and so on. In this chapter I outline the essentials of Bayesianism and ask whether the latter contributes to a satisfying account of warrant; I conclude that it does not. (Bayesians will find this conclusion neither surprising nor depressing: their interest lies, as I say, not in warrant, but in rationality.) In the next chapter I shall inquire whether the Bayesian conditions are plausibly taken as necessary or sufficient for rationality, in some interesting sense of that elusive and multifarious term.
The outcome will be mixed: human beings are not irrational, in any sensible sense, by virtue of failing to conform to Bayesian constraints, but in some areas partial conformity to some of those constraints is something like an ideal to be aimed at.
I. Bayesianism Explained
A. Statistical versus Normative Probability
Suppose we begin by contrasting two quite different sorts of probabilities. On the one hand, we have

(a) The probability that a 19-year-old American male who smokes more than a pack a day will live to be 70 is .87,

(b) The probability that a radium atom will decompose within the next 1000 years is .5,

(c) The probability that a female Frisian under the age of 50 attends church more than 4 times a month is .274,
and

(d) The probability that a 2-year-old Rhode Island Red from southern Wisconsin will contract coccidiosis within the next year is .004;
on the other there are

(e) It is likely that Special Relativity is at least approximately true,

(f) Given what we know, it is likely that there has been life on earth for more than 3 billion years,

(g) Despite the flat-earthers, it is extremely improbable that the earth is flat,

(h) The Linguistic Theory of the A Priori is at best unlikely.
It is of the first importance to appreciate the difference between these two groups. (a)–(d) are ordinarily established by statistical means, by broadly speaking empirical or scientific investigation. Further, these probabilities are general; what is probable is that a thing of one kind (a 19-year-old American male who smokes more than a pack a day) should also be a thing of another kind (a survivor to the age of 70), or that a member of one class (the class of two-year-old Rhode Island Reds from southern Wisconsin) should also be a member of another (the class of chickens that will contract coccidiosis within the next year). These probabilities may change over time (the probability that an American infant will reach the age of 50 is greater now than it was 100 years ago); and they do not depend upon what anyone knows or believes. Turning to the probabilities in the second group, note first that what is probable or improbable here is a proposition: Special Relativity, or The Linguistic Theory of the A Priori, or there has been life on earth for more than 3 billion years. Note second that these probabilities are explicitly or implicitly relative to some body of information or evidence;^{5} it is improbable, with respect to what we now know, that the earth is flat, but not with respect to what was known by a sixth-century Celt. Third, note that scientific or statistical investigation is not ordinarily relevant to the establishment of these probabilities, that is, to the probability of the proposition in question relative to the body of information in question (although of course such investigation is relevant to the establishment of that body of information). And finally, note that these probabilities contain an irreducibly normative element.
It is epistemically extremely probable (given our circumstances) that the earth is round; hence, there is something wrong, mistaken, substandard in believing (in those circumstances) that it is flat; to believe this in our circumstances you would have to be a fool, or perverse, or dysfunctional, or motivated by an unduly strong desire to shock your friends.
We might call probabilities of the first group factual probabilities, and those of the second normative; or we might call the first sort statistical and the second epistemic. According to Ian Hacking these statistical and epistemic probabilities are to be found intermingled in discussions of probability going back to the seventeenth century: “It is notable that the probability that emerged so suddenly [in the decade around 1660] is Janus-faced. On the one side it is statistical, concerning itself with stochastic laws of chance processes. On the other side it is epistemological, dedicated to assessing reasonable degrees of belief in propositions quite devoid of statistical background.”^{6} As a matter of fact, these two faces, like much else, seem to go all the way back to Aristotle. The first thing to see here, however, is that the Bayesian qua Bayesian is concerned with normative probabilities, not factual probabilities.
B. Degrees of Belief
The second thing to see is that the Bayesian begins his story by observing that belief comes in degrees; I believe some propositions much more firmly than others. Thus I believe that the earth has existed for millions and maybe even billions of years, and also that I live in a house (as opposed to a cave or tent); but I believe the second much more firmly, much more fully than the first. I believe that Banff is in Scotland, that there was such a thing as the American Civil War, that I am more than 10 years old, and that 7 + 5 = 12; and I believe these in ascending order of firmness. Say that a belief of yours is a partial belief if you accept it to some degree or other; partial beliefs include those you hold most firmly together with all those which you accept to some degree or other, no matter how small. (Thus the denial of one of your partial beliefs is one of your partial beliefs.) From the perspective of our project, Bayesianism can be seen as essentially suggesting conditions for a rational or reasonable set of partial beliefs; thus Ramsey himself saw his project as that of setting out a logic for partial belief.
This is a sort of rough-and-ready initial characterization of the idea of degrees of belief; but Bayesians often follow Ramsey in suggesting ways in which degrees of belief can be more precisely measured. Ramsey held that one's degrees of belief are not at all accurately detectable by introspection; he therefore suggested the famous Ramsey Betting Behavior Test for degrees of belief. I claim that the Detroit Lions will win their division and then the Super Bowl; you scoff, inviting me to put my money where my mouth is, and propose a small wager: then the least odds at which I will bet on the Lions represents the degree to which I believe they will win. More exactly, if I will pay seven dollars for a bet that pays ten if the Lions win the Super Bowl and nothing if they do not, then I believe to degree .7 that the Lions will win. More generally, if I will pay n (but no more) for a bet worth m if the Lions win, then my degree of belief that the Lions will win is n/m. Still more generally, for any person S there will be a credence function P_{S}(A) from some appropriate set of propositions (perhaps the propositions S has entertained or encountered) into the unit interval; P_{S}(A) specifies the degree to which S believes A. (P_{S}(A) = 1 proclaims S's utter and unconditional adherence to A, P_{S}(A) = 0 is true just if S has no inclination at all towards A, while P_{S}(A) = 1/2 tells us that S, like Buridan's ass, is suspended midway between A and −A.)
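For readers who want the arithmetic made explicit, the betting test can be sketched in a few lines of Python (an illustration only; the function name is mine, not Ramsey's):

```python
# Ramsey-style betting test: the degree of belief revealed by a bet
# is the stake n divided by the winning payout m.
def revealed_degree_of_belief(stake, payout):
    if payout <= 0:
        raise ValueError("payout must be positive")
    return stake / payout

# Paying seven dollars for a bet that pays ten if the Lions win:
print(revealed_degree_of_belief(7, 10))  # 0.7
```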
As Ramsey points out, I may also have conditional degrees of belief, corresponding to conditional probabilities; perhaps my degree of belief that Feike can swim, on the condition that he is a Frisian lifeguard, is .98. Such a conditional degree of belief can be defined as:
P(A/B) = P(A&B)/P(B), provided P(B) does not equal 0.
In the case in question, then, it must be that my confidence that Feike is both a swimmer and a Frisian lifeguard is nearly as great as my confidence that he is a Frisian lifeguard. Although conditional degrees of belief can be so defined, it is worth noting that Ramsey introduces them in a wholly different manner. What he says is that S's conditional belief in A given B is measured by the least odds S would accept for a conditional bet on A, given B: “We are also able to define a very useful new idea: ‘the degree of belief in p given q’… It roughly expresses the odds at which he would bet on p, the bet only to be valid if q is true.”^{7} “Such conditional bets,” Ramsey observes, “were often made in the eighteenth century.” Thus I might be willing to pay you five dollars for a bet that pays ten if the Lions win the Super Bowl, the bet to be in force only if the Lions win their division and the playoffs. This bet, clearly, is one that I win if the Lions get into the Super Bowl and win; I lose it if the Lions get into the Super Bowl and lose; the bet is called off if the Lions don't make it to the Super Bowl. And now the claim is that the least odds at which I will accept a bet on A conditional on B measures my belief in A on the condition that B.
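The settlement rule for such a conditional bet can be made explicit in a short sketch (the stakes are the ones above; the code is merely illustrative):

```python
# Settle a bet on A conditional on B: the bet is valid only if B holds.
def settle_conditional_bet(a, b, stake, payout):
    if not b:
        return 0  # condition fails: the bet is called off, stake returned
    return payout - stake if a else -stake

# Five dollars for ten if the Lions win the Super Bowl (A), conditional
# on their getting into it (B):
print(settle_conditional_bet(a=True, b=True, stake=5, payout=10))    # 5: in and win
print(settle_conditional_bet(a=False, b=True, stake=5, payout=10))   # -5: in and lose
print(settle_conditional_bet(a=False, b=False, stake=5, payout=10))  # 0: called off
```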
Of course, it is not at all obvious that there really are degrees of belief of this sort; perhaps the least odds I will accept for a bet on A measures not the degree to which I believe A (maybe I don't really believe A at all) but the degree to which I (fully) believe that A is probable.^{8} And even if there are the right sorts of degrees of belief (both conditional and absolute), it isn't clear that they can really be measured in this way, as Ramsey himself noted. For first, there is the diminishing marginal utility of money; a hundred dollars means a great deal more to me than to a millionaire, and an extra hundred dollars tacked on to a small win (five dollars, say) means much more than the same amount tacked on to a large one (five thousand, say). Furthermore, there are many reasons why someone's betting behavior might not correspond to his degrees of belief. Perhaps you are by nature excessively cautious, so that you won't bet at all unless you get odds 5 percent better than your degree of belief warrants. Perhaps, on the other hand, you like to live dangerously, often betting at long odds for the sheer excitement of it; or perhaps you bet at odds unwarranted by your degrees of belief because you love to gamble and can't find anyone who will bet at more reasonable odds. Or perhaps the bet can't be settled. You endorse existentialism (the view that existence precedes essence in such a way that if Socrates had not existed, then his individual essence would not have existed either); although I recognize some of the attractiveness of this view, I find it on balance implausible and reject it in favor of the view that Socrates’ essence exists necessarily. You propose a wager; since it is hard to see how the bet could be settled, I frivolously wager my entire fortune on my position at odds of 999 to 1, even though this does not correspond at all to my relatively modest confidence in it.
Or perhaps I am a nineteenth-century Scots Calvinist who believes that betting is wrong, refusing to bet at any odds whatever; and if you forcibly compel me to bet, I will bet completely at random.
Some of these difficulties were familiar to Ramsey (and he proposed a means of dealing with the objection from diminishing utility). In any event, insistence upon the measurability of degrees of belief (or more radically, ‘operational’ definitions of them) in terms of betting behavior or something similar is not crucial to a Bayesian program;^{9} what matters is that indeed there are the appropriate degrees of belief, whether or not it is possible to measure them.
C. Conditions of Rationality
1. Coherence
Now the next Bayesian step is to propose a certain normative constraint on partial beliefs: probabilistic coherence. The idea is that a system of beliefs that does not conform to this constraint is in some way defective, deformed, not up to snuff, such that it does not measure up to the appropriate standards for proper belief; Bayesians often put this by saying that probabilistic coherence is a constraint on rational belief. What is probabilistic coherence? According to Laurence BonJour (as we saw in the last chapter) probabilistic coherence is a matter of not believing both A and it is improbable that A. According to the present notion, however, a system of partial beliefs is coherent if and only if it conforms to the probability calculus. Here is a handy formulation:
A_{1} 0 ≤ P(A) ≤ 1,
A_{2} If A and B are necessarily equivalent, then P(A) = P(B),
A_{3} If A and B are incompatible (that is, the denial of their conjunction is necessary), then P(A∨B) = P(A) + P(B),
A_{4} If A is necessary, then P(A) = 1.^{10}
If we add the familiar definition of conditional probability in terms of absolute probability
P(A/B) = P(A&B)/P(B) (provided P(B) does not equal 0),
we have as immediate consequence the familiar multiplicative law for conjunction:
P(A&B) = P(A)×P(B/A).
It is easy to see how my beliefs might fail to conform to these axioms. At the beginning of the season, I might inadvertently believe to degree .67 that the Lions will win the Super Bowl but also believe that the Giants have a 50–50 chance of winning (that is, believe to degree .5 that they will win), thus (given A_{1}) violating A_{3}. Before I have seen the proof of their equivalence, I might believe the Axiom of Choice more firmly than the proposition that the real numbers can be well ordered, thus violating both A_{2} and A_{4}. Still further, there are plenty of necessary truths I don't believe to the maximal degree. Suppose we ignore necessary truths I have never thought or heard of (and which I therefore don't believe to any degree at all): there are still such necessary truths as, for example, there are no nonexistent objects, or Peirce's Law (((p → q) → p) → p), which I believe to a less than maximal degree, thus violating A_{4}.
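The first of these violations can be checked mechanically (a sketch using the imagined degrees of belief):

```python
# Degrees of belief in two incompatible propositions: the Lions win the
# Super Bowl; the Giants win the Super Bowl.
p_lions = 0.67
p_giants = 0.50

# By A3, credence in the disjunction must be the sum of the two ...
p_disjunction = p_lions + p_giants

# ... but the sum exceeds 1, which A1 forbids: the assignment is incoherent.
print(p_disjunction <= 1)  # False
```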
Now why must my beliefs conform to this coherence condition if I am to be rational? Well, suppose they don't; suppose I believe to degree .67 that the Lions will win the Super Bowl and also believe that the Giants have a 50–50 chance of winning. Noting this fact and having fewer scruples than you ought to have, you propose a couple of bets: for $66.67 you offer to sell me a bet that pays $100 if the Lions win and nothing if they don't; since I believe to degree .67 that the Lions will win, I consider this a fair bet and accept. But you go on to offer me another bet that pays $100 if the Giants win and nothing if they don't; this bet costs $50. Since I also regard this as a fair bet, I accept. And now I am in trouble. I have paid you $116.67 for the two bets, but no matter who wins the Super Bowl, the most I can win is $100. You have made a Dutch book against me: a series of bets such that no matter what happens I am bound to lose.^{11} So why think my beliefs must be coherent? Here is one possible reason: if they are not, I am vulnerable to a Dutch book.^{12}
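The bookie's arithmetic can be checked by enumerating the possible outcomes (a sketch; the prices are those just quoted):

```python
# I pay $66.67 for a $100 bet on the Lions and $50 for a $100 bet on
# the Giants: $116.67 in all, against a maximum possible payout of $100.
cost = 66.67 + 50.00

nets = {}
for winner in ("Lions", "Giants", "some other team"):
    payout = 100 if winner in ("Lions", "Giants") else 0
    nets[winner] = payout - cost
    print(f"{winner}: net {nets[winner]:+.2f}")

# Every branch is a loss: -$16.67 at best, -$116.67 at worst.
print(all(net < 0 for net in nets.values()))  # True
```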
Contemporary discussions often emphasize this answer to the question why rationality demands coherence. Thus Paul Horwich: “if a person is rational, he will distribute his probabilities—his degrees of belief—in accord with these laws. For only if he does this will he be able to avoid a socalled Dutch book being made against him.”^{13} As I shall argue, however, this is not a good reason for thinking that rationality requires coherence (and as a matter of fact I think rationality requires that we not be coherent). And Ramsey himself only mentions in passing that incoherent beliefs imply Dutch book vulnerability; he makes little or nothing of this as a reason for thinking rationality requires coherence. Instead, he points to certain analogies between deductive logic and the probability calculus (taken as the logic of partial belief), proposing that rationality requires coherence among partial beliefs, just as it requires consistency among full beliefs. The answer to the question, Why think rationality requires coherence? he thinks, is the same as the answer to the question, Why think rationality requires logical consistency?
So Bayesians propose coherence as a necessary condition of rational belief. But of course this is a very weak condition (although as I shall argue it is also much too strong). For example, I could be coherent but still vulnerable in a less radical way to a Dutch bookie: I might accept a series of bets which is such that, no matter what happens, I can lose but can't possibly gain, a series of bets in which at best I can break even (and at worst do worse). Blinded as I am by misplaced partisan loyalty, I am prepared to wager my entire fortune on the Lions; I am willing to pay you $1,000 for a bet that pays that very amount if the Lions win, but nothing at all if they lose. Then if the Lions win, I break even; if they lose I am ruined. To avoid this unfortunate condition, S must see to it that her beliefs are strictly coherent^{14}—that is, such that she believes no contingent proposition to the maximum degree; she must satisfy
(SC) P_{S}(A) = 1 only if A is necessarily true.
Strict coherence is of course stronger than coherence; coherence requires that I believe all necessary truths to the maximum but permits similar enthusiasm about contingent truths (indeed, I can be coherent even if I am so misguided as to believe every contingent falsehood to the max). Bayesians and their sympathizers, therefore, sometimes propose strict coherence as a further condition of rationality.^{15}
2. Conditionalization and Probability Kinematics
Even if I satisfy strict coherence, however, I am far from out of the woods. In particular, says the Bayesian, my beliefs can change in an improper, defective, irrational way. Let P_{me, t0} be my credence function at a time t_{0}. Suppose I am coherent at t_{0} (and every other time); I might still be such that P_{me, t0}(A/B) is high, while at the next instant t_{1} I learn that B is true but nonetheless then believe A to a low degree. For example, P_{me, t0}(Feike can swim/Feike is a Frisian lifeguard) is very high—.98, say; at t_{1} I learn that Feike is indeed a Frisian lifeguard (and nothing else relevant); but at t_{1} my degree of belief in Feike can swim falls to .01, the rest of my beliefs settling into a coherent pattern. Then I appear to be irrational, at least at t_{1}, even though my beliefs are coherent then as well as at t_{0}. Here Bayesians propose a further constraint: if I am rational, my beliefs will change by conditionalization. This suggestion was already made by Ramsey:
Since an observation changes (in degree at least) my opinion about the fact observed, some of my degrees of belief after the observation are necessarily inconsistent with those I had before. We have, therefore, to explain exactly how the observation should modify my degrees of belief; obviously if p is the fact observed, my degree of belief in q after the observation should be equal to my degree of belief in q given p before, or by the multiplication law to the quotient of my degree of belief in pq by my degree of belief in p. When my degrees of belief change in this way we can say that they have been changed consistently.^{16}
We may put this requirement as follows: suppose C_{0} is my credence function at a time t_{0}; and suppose I then learn (by observation, let's say) that B is true. What should C_{1}, my credence function at the next instant t_{1}, be? Since I have learned that B is true, C_{1}(B) = 1, of course, but what about the rest of what I believe? The idea is that I should now believe a proposition A to the degree to which A was probable on B according to my old credence function; I must conform to
(Conditionalization) C_{1}(A) = C_{0}(A/B) = C_{0}(A&B)/C_{0}(B) (where C_{0}(B) is not zero).
We can think of it like this: when I change belief by conditionalization on a proposition B, I retain all my old conditional probabilities on B, but I am now certain of B. Thus in the case where P_{me, t0}(Feike can swim/Feike is a Frisian lifeguard) is .98 and I learn that Feike is indeed a Frisian lifeguard, at t_{1} my degree of belief in Feike can swim should be .98. In general, the classical Bayesian idea is that if I am rational, then as I go through life learning various contingent truths (that is, raising to the maximal degree my belief in those propositions), I will constantly update my other beliefs by conditionalization on what I learn. Given conditionalization, we can see a reason for thinking rationality requires that one's original credence function (one's ‘Urfunction’, we might say) be strictly coherent as opposed to coherent simpliciter: “it is required as a condition of reasonableness: one who started out with an irregular [that is, not strictly coherent] credence function (and who then learned from experience only by conditionalizing) would stubbornly refuse to believe some propositions no matter what the evidence in their favor.”^{17}
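Conditionalization itself is a one-line computation; here is a sketch on a toy joint credence function (the numbers are hypothetical, chosen so that the conditional credence is the .98 of the Feike example):

```python
# Prior joint credences over 'Feike can swim' (A) and 'Feike is a
# Frisian lifeguard' (B); chosen so that C0(A/B) = 0.098/0.100 = 0.98.
prior = {
    ("swim", "lifeguard"): 0.098,
    ("no-swim", "lifeguard"): 0.002,
    ("swim", "no-lifeguard"): 0.500,
    ("no-swim", "no-lifeguard"): 0.400,
}

def conditionalize(credences, learned):
    """On learning B, give B credence 1 and retain the old conditional
    credences on B: C1(A) = C0(A&B)/C0(B)."""
    p_b = sum(p for (a, b), p in credences.items() if b == learned)
    return {(a, b): (p / p_b if b == learned else 0.0)
            for (a, b), p in credences.items()}

posterior = conditionalize(prior, "lifeguard")
print(posterior[("swim", "lifeguard")])  # ~0.98, the old conditional credence
```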
But why suppose rationality requires changing belief by conditionalization? Why must we do it that way? Here as before a Dutch book argument is available: if I don't follow the rule of conditionalization (but do follow some rule or other)^{18} in changing belief in response to what I learn, then a diachronic Dutch book can be made against me. A cunning bookie who knew my credence function at t and my method for changing belief could offer me a series of bets (at odds I consider fair) such that no matter what happens, I am bound to lose. Suppose, for example, that at t_{0} my credence function C_{0} is such that C_{0}(the Lions will get into the Super Bowl) = .5, and C_{0}(the Lions will win the Super Bowl/the Lions get into the Super Bowl) = .5. (I think it's 50–50 that they will get to the Super Bowl and 50–50 that they will win on the condition that they get into it.) Suppose further that according to my rule or strategy for changing beliefs, my credence function C_{1} at a later time t_{1} (before it's settled whether they get into the Super Bowl) will be such that C_{1}(the Lions will win the Super Bowl) = 1/6. You gleefully rub your hands and propose the following series of bets. First, at t_{0}, a bet conditional on the Lions' getting into the Super Bowl: you pay me $30 if the Lions get into the Super Bowl and win it; I pay you $30 if the Lions get into the Super Bowl and don't win it; if the Lions don't manage to get into the Super Bowl, the bet is called off. At t_{0} I regard this bet as fair. Second, you propose a small side bet at even money on the Lions' getting into the Super Bowl: you pay me $10 if they do and I pay you $10 if they do not; at t_{0} I also regard this bet as fair. So if the Lions don't get into the Super Bowl, I pay you $10. If they do get into the Super Bowl, then at t_{1} you propose still another bet; according to this one I pay you $50 if the Lions win and you pay me $10 if they lose; at t_{1} I will regard this bet as fair.
But now, once more, I am in trouble. If the Lions don't get into the Super Bowl, the first bet is off and I lose $10 on the second. If the Lions get into the Super Bowl and win, then I win $40 on the first two bets but lose $50 on the third for a net loss of $10. Finally, if the Lions get into the Super Bowl and lose, then I win $20 on the second and third bets but lose $30 on the first, again winding up $10 poorer. So no matter what happens, you are into my pockets to the tune of $10.
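The three branches of the bookie's strategy can be tabulated directly (a sketch of the example just given):

```python
# Net payoff to me of the bookie's three bets:
#   bet 1 (conditional): +$30 if the Lions get in and win, -$30 if they
#       get in and lose, called off if they don't get in
#   bet 2 (side bet): +$10 if they get in, -$10 if not
#   bet 3 (offered at t1 only if they get in): -$50 if they win, +$10 if not
def net_payoff(get_in, win):
    bet1 = (30 if win else -30) if get_in else 0
    bet2 = 10 if get_in else -10
    bet3 = (-50 if win else 10) if get_in else 0
    return bet1 + bet2 + bet3

for get_in, win in [(False, False), (True, True), (True, False)]:
    print(net_payoff(get_in, win))  # -10 in every case
```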
This argument is a specification of an argument due to David Lewis.^{19} What Lewis shows is that if my beliefs change by a rule or strategy other than conditionalization, then a shrewd bookie who knew my credence function at t could make a diachronic Dutch book against me (more precisely, he could devise a diachronic Dutch strategy against me). The details of this argument are interesting but too far afield to pursue here. (It is worth noting, however, that the argument holds only on the condition that I change belief according to some rule;^{20} if I don't follow a rule, the Dutch bookie is stymied, and he is also stymied if I always follow a rule, but change rules every now and then, having no rules for changing rules.)
Now there is one respect in which conditionalization is not entirely realistic: on many occasions when we learn something—by observation, say—we don't come to have complete confidence in what we learn. You see that the scale reads 204; you then see that you weigh 204 pounds; but of course you realize there is some small chance that you are misreading the scale, or that it has gone awry, no longer correctly reporting your weight, or that you are hallucinating. So while you learn by observation that you now weigh 204 pounds, you don't come to believe this with maximal confidence; but it is only the case of maximal confidence that is covered by conditionalization. I might observe something by candlelight, having less than complete confidence in my observation,^{21} or hear a phrase in a noisy lecture hall (I am pretty confident he said your thought is deep and rigorous; but just possibly what he said is that it is weak and frivolous). Richard Jeffrey has proposed a natural generalization of conditionalization (“Probability Kinematics,” as he calls it) to accommodate such cases. In the simplest case, where my new probabilities arise from a change in my credence in a proposition A, my new credence C_{new} will be given by
C_{new}(B) = C_{new}(A)×C_{old}(B/A) + C_{new}(−A)×C_{old}(B/−A).
The generalization to the general finite case is just what you would expect.^{22}
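In the simple case Jeffrey's rule is just a weighted average of the old conditional credences; here is a sketch (the .7 figure and the propositions are hypothetical, in the spirit of the candlelight example):

```python
# Jeffrey conditionalization, simple case:
#   C_new(B) = C_new(A) * C_old(B/A) + C_new(-A) * C_old(B/-A)
def jeffrey_update(new_a, old_b_given_a, old_b_given_not_a):
    return new_a * old_b_given_a + (1 - new_a) * old_b_given_not_a

# An observation by candlelight shifts my credence that the cloth is
# green (A) to .7 without making me certain of it; my credence in B (the
# cloth will match the sofa) then follows by kinematics:
print(jeffrey_update(0.7, 0.9, 0.2))  # ~0.69
```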
Bayesians, therefore, propose constraints on rational belief: coherence or perhaps strict coherence, and changing belief by conditionalization or probability kinematics. In his absorbing and instructive “Belief and the Will,”^{23} Bas van Fraassen suggests still another constraint:
(Reflection) P^{a}_{t}(A/P^{a}_{t + x}(A) = r) = r
Here P^{a}_{t} is the agent a's credence function at time t, x is any nonnegative number, and P^{a}_{t + x} (A) = r is the proposition that at time t + x the agent a will bestow degree r of credence on the proposition A. To satisfy the principle, the agent's present subjective probability for proposition A, on the supposition that his subjective probability for this proposition will equal r at some later time, must equal this same number r.^{24}
Suppose I now fully believe that in three weeks I will believe to degree .9, say, that the Lions will get into the Super Bowl; then if I conform to Reflection, my present degree of belief in that proposition must also be .9. More generally, my conditional personal probability for a proposition A, on the condition or supposition that my future credence in that proposition A will be r, will be r. Reflection, as van Fraassen observes, looks (initially, at any rate) unduly exuberant. As he points out, if I conform to it I never believe, with respect to any of what I take to be my future full beliefs, that there is any chance at all that it will be mistaken; more precisely and more strongly, there is no proposition A and future time f such that I put any credence at all in the proposition that I will fully but mistakenly believe A at f. This seems initially a bit too sanguine; given my spotty track record, shouldn't I think I might be wrong again? (In chapter 7 we shall look into this question.)
Further, if I conform to Reflection I place no credence at all in the proposition that at some future time f I will be less than certain of any proposition to which I presently afford full belief. So if I conform to Reflection, then I give zero credence to the suggestion, with respect to any proposition I now know, that at some time in the future I will no longer know it. More generally: let P(A) be my present degree of belief in A, f be any future time, P_{f}(A) my degree of belief in A at f, and d any degree of belief significantly different from P(A): the greater P(A) (the greater the degree of belief I presently afford A) the more firmly I must believe that P_{f}(A) is not equal to d.
Still further, if I conform to Reflection, then for any future time f, I am sure that my future degree of belief in some proposition A will be n at f (that is, P(P_{f}(A) = n) = 1, for some degree of belief n), only if that degree of belief equals my present degree of belief in A. In general, for any proposition A and future time f, the more sure I am that P_{f}(A) = n, the closer n will be to my present degree of belief in A. (More exactly, the more sure I am that P_{f}(A) = n, the smaller the interval about n in which P(A) is to be found; the size of that interval is a monotonically decreasing function of P(P_{f}(A) = n).^{25})
But why think rationality requires conformity to Reflection; why suppose conformity to Reflection a good candidate for a condition of rationality? (And what kind of rationality are we thinking of here?) Well, for one thing, a Dutch book argument is once again available: if I do not conform to Reflection then I am vulnerable to a diachronic Dutch book similar in essential respects to the strategy employed by the bookie in the earlier case (p. 123), where I failed to change belief by conditionalization.^{26} Van Fraassen himself, however, does not propose this as a reason for thinking that rationality requires conformity to Reflection. In chapter 7 I shall examine his reason for thinking rationality requires satisfying Reflection; more generally, I shall outline the main kinds of rationality and ask whether any of them requires satisfaction of any of the Bayesian constraints. For now, however, we turn to the announced subject of this chapter,
II. Bayesianism and Warrant
Coherence, strict coherence, conditionalization, probability kinematics, Reflection—what shall we say about them? It is initially clear, I think, that they show little promise as severally necessary and jointly sufficient conditions for warrant (the condition or quantity, roughly, enough of which is what distinguishes knowledge from mere true belief). First, none of the proposed conditions seems necessary. To satisfy coherence (and a fortiori, strict coherence) I must believe each necessary truth—more realistically, perhaps, each necessary truth within my ken—to the same degree: the maximal degree. But clearly I can know a great deal without doing that. Either Goldbach's conjecture or its denial is a necessary truth; I don't know which and believe neither to the maximal degree; yet I know that 2 + 1 = 3 and that I live in Indiana. It is a necessary truth that arithmetic is incomplete; nevertheless I do not believe that truth as firmly as that 2 + 1 = 3 (it has none of the overwhelming obviousness of the latter). I believe Peirce's Law; it is not trivially easy to see through it, however, and I do not believe it as firmly as the most obvious tautologies. I am therefore incoherent; but that does not prevent me from knowing that I am more than seven years old. Now in these examples, we might say that the locus of the incoherence—the beliefs from which it flows, so to speak—is far distant from the propositions I said I knew. Perhaps we could hope to segregate or localize the incoherence, holding that what is required for a proposition's having warrant is only local coherence, coherence in the appropriate neighborhood of that proposition (perhaps specifying neighborhoods in terms of appropriate subalgebras of the relevant total set of propositions). But no hope in that direction. I believe both that arithmetic is incomplete and that I feel a mild pain in my left knee; I believe the former slightly less firmly than the latter. 
But that does not prevent me from knowing either or both of these propositions; hence, in this case the propositions known and the locus of the incoherence coincide. Indeed, I can know much even if my full beliefs are inconsistent; no doubt Frege knew where he lived even before Russell showed him that his set-theoretical beliefs were inconsistent. There is therefore nothing to be said for the suggestion that strict coherence or coherence simpliciter is a necessary condition of knowledge.^{27}
Obviously the same goes for changing belief by conditionalization or by probability kinematics: I can know what my name is even if I have just changed belief in some way inconsistent with probability kinematics (and hence inconsistent with conditionalization). Nearly every philosopher, I suppose, has changed belief in ways inconsistent with probability kinematics. If (like Frege) you have ever changed your degree of belief in a noncontingent proposition, then you have changed belief in a way inconsistent with probability kinematics; but that has little bearing on whether you had knowledge either before or after the change. Again, localization will not help: I might come to see that a proposition—for example, that there is no set of non-self-membered sets—is necessary, thereby coming to know that very proposition, even though that proposition is also the locus of the allegedly illicit change of belief. Indeed, couldn't it be that I change credence in a proposition that is in fact necessary, and know that proposition both before and after the change? Couldn't I know some fairly recondite mathematical truth by way of testimony, and then later come to grasp a simple and elegant proof of it, this being accompanied by a small but definite increase in credence?
Similarly, suppose my credence function changes in the following fashion: I come to see (as I think of it) that one of my conditional probabilities P(A⁄B) is mistaken, so that my conditional probability for A on B changes (and this is the originating change), but there is no change in P(A) or P(B). Then my probabilities have changed in a way inconsistent with probability kinematics.^{28} But surely I could come to see that one of my conditional probabilities P(A⁄B) was inappropriate, make an appropriate change in it without changing my degree of belief in A or in B, and still know what I knew before the change. Being excessively sanguine, I believe that the probability of my getting a Nobel Prize, conditional on my finishing my book by next Christmas, is relatively high (though I think my chances of finishing by then are not very good); you persuade me that this confidence does not fit the facts; I come to a more chastened estimate of this probability, without changing my personal probability for my getting the prize or for my finishing by Christmas. Can't I know much both before and after and during this change?
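The claim about kinematics can be checked directly. Jeffrey's probability kinematics on the partition {B, not-B} redistributes credence over that partition while holding the conditional probabilities P(A⁄B) and P(A⁄not-B) fixed (the so-called rigidity condition); so an originating change in P(A⁄B) that leaves P(A) and P(B) untouched cannot have come about by kinematics. Here is a minimal sketch in Python, with illustrative numbers not drawn from the text:

```python
from fractions import Fraction as F

# Atoms (A true?, B true?) -> credence.  Illustrative numbers only.
p = {(1, 1): F(3, 10), (1, 0): F(2, 10), (0, 1): F(1, 10), (0, 0): F(4, 10)}

def P(dist, pred):
    """Probability of the set of atoms satisfying pred."""
    return sum(v for k, v in dist.items() if pred(k))

def jeffrey(dist, new_pB):
    """Jeffrey conditioning on the partition {B, not-B}."""
    old_pB = P(dist, lambda k: k[1] == 1)
    return {k: v / old_pB * new_pB if k[1] == 1
               else v / (1 - old_pB) * (1 - new_pB)
            for k, v in dist.items()}

# Experience shifts my credence in B from 4/10 to 7/10.
q = jeffrey(p, F(7, 10))

pAB_before = P(p, lambda k: k == (1, 1)) / P(p, lambda k: k[1] == 1)
pAB_after = P(q, lambda k: k == (1, 1)) / P(q, lambda k: k[1] == 1)
assert pAB_before == pAB_after == F(3, 4)  # rigidity: P(A|B) is unchanged
assert sum(q.values()) == 1                # the new credences still cohere
```

Any kinematical shift on {B, not-B} leaves P(A⁄B) where it was; a change that alters P(A⁄B) while fixing P(A) and P(B), as in the case just described, therefore falls outside kinematics.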
Changing belief by probability kinematics, therefore, is not necessary for knowledge or warrant. Of course the same goes also for Reflection; clearly I can know something today, even if I also think I may invest full belief in a false proposition tomorrow. Indeed, I can now know that there are three pens on my desk, even if my credence in that proposition, on the supposition that ten years from now my credence in it will be low, is high.
None of the Bayesian conditions, therefore, is necessary for warrant. It is equally obvious, I suppose, that satisfying all of them is not sufficient for it. (Of course I don't mean to suggest that Bayesians claim otherwise.) This is obvious because Bayesianism is incomplete if taken as a theory of warrant, and incomplete in at least two important ways. First: taken as an account of warrant, Bayesianism (like coherence theories generally) is what John Pollock calls a “doxastic” theory: it holds that the warrant or positive epistemic status of a belief is determined solely by the relation that belief bears to other beliefs, wholly neglecting the relation it bears to experience. Bayesianism says something about how my credence should be propagated over the rest of my beliefs when I come to hold a new belief in response to experience: that should go by way of conditionalization or Jeffrey conditioning. It says nothing, however, about how my beliefs should change in response to experience. If my beliefs do change in response to experience, then the Bayesian can tell me how my probabilities should be redistributed over the rest of what I believe; but she has nothing to say about how my beliefs should change (in response to experience) in the first place. Hence my beliefs could change in utterly bizarre ways even if I conform to all the Bayesian principles.
Here we can return to previous examples. By virtue of cognitive malfunction, I might be such that upon being appeared to redly, I form the belief that no one other than I is ever thus appeared to; this is compatible with my credence function's satisfying all the Bayesian constraints. But even if my beliefs do satisfy those constraints, the proposition Only I am ever appeared to redly will have little by way of warrant for me. Even if by some wild chance it happens to be true, it will not constitute knowledge. Alternatively, I might be captured by Alpha Centaurian cognitive scientists who run an experiment in which they propose to bring it about that my beliefs satisfy Bayesian constraints, but change with respect to my experience in wholly random ways; then many of my beliefs will have little by way of warrant despite their conformity to Bayesian principles. Again, I might be like the Epistemically Inflexible Climber (chapter 4, p. 82) whose beliefs became fixed, no longer responsive to experience, so that no matter what my experience, I continue to hold the same beliefs. If we suppose that my beliefs satisfy the Bayesian constraints when I am struck by that burst of radiation, they will still satisfy them later, at the opera in Jackson; but many of them will have no warrant then. In these cases and a thousand others my beliefs would have little or no warrant for me, even though they meet the Bayesian conditions.
Taken as a theory of warrant, therefore, Bayesianism is incomplete in that it says nothing about the sort of relation between belief and experience required by warrant. But perhaps we could take it as a partial theory of warrant, a theory having to do only with what is downstream, so to speak, from the formation of belief on the basis of experience. So taken, it could be thought of as a sort of foundationalism that has nothing much to say on the question of which propositions are properly basic, but that offers suggestions as to what warrant requires by way of change in belief in response to change at the basic level. Here too, however, we are doomed to disappointment; for there is a second way in which Bayesianism (taken as a theory of warrant) is incomplete: it provides no account of evidence or evidential support. It lacks the resources to say what it is for one proposition to offer evidential support for another; hence, it offers no account of the way in which a proposition can acquire warrant by being believed on the evidential basis of another proposition that already has it. The proposition
(1) 99 out of 100 Frisian lifeguards can swim and Feike is a Frisian lifeguard
supports, is evidence for
(2) Feike can swim.
There is excellent (propositional) evidence that the earth is round; and according to the probabilistic version of the problem of evil, the existence of evil, or of certain particularly horrifying cases of it, is evidence against the existence of God. These evidence relations, furthermore, transfer warrant. If I know the evidence that the earth is round and believe that it is round on the basis of that evidence (and have no defeaters for this belief), that belief will have warrant for me. Clearly these evidence relations hold independently of my degrees of belief; and even if I am certain that (1) is true and (2) false, I still recognize that the first supports the second. But precisely this notion of evidential support is what cannot be explained in Bayesian terms.
We can see this as follows. We might try looking to the idea of conditional personal probability or conditional credence for a Bayesian account of the supports relation, claiming that B supports A if P(A⁄B) is sufficiently high. But whose credence function is at issue here? We need a subscript. So suppose we relativize the notion of support to credence functions, so that B supports A for me if and only if P_{me}(A⁄B) is sufficiently high: things still go wildly wrong. For example, on this suggestion it won't be possible for me to know a proposition—that Feike can swim, say—and also know a couple of other propositions, one of which evidentially supports it and the other of which supports its denial; if I know all three propositions, then (since I invest high credence in each) the conditional probability of any on any will be high. But obviously I might very well know a couple of propositions, one but not the other supporting that proposition: perhaps I know (1) and also know
(3) 99 out of 100 Frisian octogenarians can't swim and Feike is a Frisian octogenarian.
If I know both that (1) is true and (2) is false, then (embarrassingly enough) the conditional probability, for me, of the denial of (2) on (1) will be very high, and the probability of (2) on (1) very low, so that (1) supports, for me, the denial of (2). More generally: take any pair of contingent propositions such that the first offers evidential support for the second: even if I satisfy all Bayesian constraints, my personal probability for the second on the first can be as low as you please.
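The general point is easy to verify: coherence requires only that credence over the atoms be non-negative and sum to 1, so a coherent credence function can assign (2) as low a probability on (1) as you please. A toy sketch in Python, with illustrative numbers not drawn from the text:

```python
from fractions import Fraction as F

# Atoms ((1) true?, (2) true?) -> credence.  The assignment is
# non-negative and sums to 1, hence coherent; numbers are illustrative.
p = {(1, 1): F(1, 100), (1, 0): F(9, 100),
     (0, 1): F(45, 100), (0, 0): F(45, 100)}
assert sum(p.values()) == 1 and all(v >= 0 for v in p.values())

p1 = p[(1, 1)] + p[(1, 0)]     # P((1))
p2_given_1 = p[(1, 1)] / p1    # P((2)/(1))
assert p2_given_1 == F(1, 10)  # low, though (1) evidentially supports (2)
```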
But perhaps this is not how the Bayesian will explain evidential support;^{29} perhaps she will say that A supports B in case P(B⁄A) > P(B)—that is, A supports B for me in case P_{me}(B⁄A) > P_{me}(B). But this too can't be right. Due to cognitive malfunction or the machinations of a demon or an Alpha Centaurian, I might be such that, for example, I know nothing at all about Feike's swimming ability and P_{me}((2)) = .5, but am also such that P_{me}((2)⁄(1)) = .1 and P_{me}((2)⁄(3)) = .9. Then on the present suggestion, (1) disconfirms (2) ‘for me’ and (3) confirms it for me! But (if ‘evidentially supports for me’ [as opposed to ‘evidentially supports’ simpliciter] makes any sense at all) surely they do not.
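Such a credence function is easily exhibited; the following sketch is coherent yet yields exactly the perverse values just described. (The numbers are illustrative, and (1) and (3) are treated as mutually exclusive only to keep the atoms few.)

```python
from fractions import Fraction as F

# Atoms ((1) true?, (2) true?, (3) true?) -> credence.
p = {(1, 1, 0): F(2, 100), (1, 0, 0): F(18, 100),
     (0, 1, 1): F(18, 100), (0, 0, 1): F(2, 100),
     (0, 1, 0): F(30, 100), (0, 0, 0): F(30, 100)}
assert sum(p.values()) == 1 and all(v >= 0 for v in p.values())

def P(pred):
    """Probability of the set of atoms satisfying pred."""
    return sum(v for k, v in p.items() if pred(k))

p2 = P(lambda k: k[1] == 1)
p2_given_1 = P(lambda k: k[0] == 1 and k[1] == 1) / P(lambda k: k[0] == 1)
p2_given_3 = P(lambda k: k[2] == 1 and k[1] == 1) / P(lambda k: k[2] == 1)
assert (p2, p2_given_1, p2_given_3) == (F(1, 2), F(1, 10), F(9, 10))
# On the P(B/A) > P(B) account, (1) 'disconfirms' (2) for this agent
# and (3) 'confirms' it -- reversing the genuine support relations.
```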
The real problem, though, is with that subscript; evidential support is not, in this way, relative to individual noetic structures or individual credence functions. (It is not a three-place relation among a pair of propositions and a credence function.) My Alpha Centaurian captors might cause me to reason in such a way that what is in fact the evidence for the roundness of the earth Bayesianly supports, for me, the proposition that the earth is flat. But even if they do, it is not the case then that the evidence for the earth's being round really does support ‘for me’ the proposition that it is flat—just as it is not the case that the earth is flat ‘for me’. If evidence E supports proposition H, then E supports H simpliciter, not merely relative to your credence function or mine. (1) is as such evidence for (2); the idea of its being evidence for (2) for you but not for me, if it is to be a sensible idea, can only be taken as something like the idea that you recognize that it is evidence for (2) while I do not, or that when (1) is added to the rest of what you believe, the resulting total evidence supports (2), but when added to the rest of what I believe, the resulting total evidence does not. It is therefore at the least enormously difficult to see how we could explain the supports relation in Bayesian terms.
Bayesianism has little to contribute to a proper theory of warrant. This conclusion, however, is one Bayesians can accept with equanimity; for their interest typically lies not in warrant but in something else, something they call ‘rationality’. It is time to turn to that baffling and elusive notion.