Bayesian Basicality

Suppose that you’re thinking of adopting radical probabilism,[1] or some more moderate form of Bayesianism, as your epistemology, but you’re hesitant because you think there are beliefs which are properly basic (including, perhaps, the belief in God), and you think that Bayesianism won’t make room for such beliefs. Reformed Epistemology,[2] after all, is an externalist epistemology, while Bayesianism appears to be an internalist epistemology. Here’s an obvious way to put these commitments together coherently.

As a preliminary note, I want to call the reader’s attention to a fact of which I only became aware while trying to articulate this idea: Pruss has suggested that (objective) Bayesianism should be regarded as a hybrid epistemology, on which belief-updating is an internalist epistemic procedure, but the particular calibration of one’s prior probability assignments is warranted (or unwarranted) given an externalist story.[3] I think this account sounds right. I have also been thinking, lately, about how to fill out an epistemology so that it tells both an internalist and an externalist story (in other words, about how to articulate an epistemological commitment which bridges the internalist-externalist divide), and thus, if Pruss is right, it appears that Natural Law Bayesianism can commend itself to us (or to me) in light of this philosophical virtue.[4]

I am increasingly convinced that Natural Law Bayesianism is correct, but what I am about to suggest will hold for any version of objective Bayesianism (and, as we will see, perhaps even for subjective Bayesianism as well). Perhaps we should regard a belief as properly basic if and only if its prior probability is (and ought to be) set at higher than 0.5 (on a scale from 0 to 1). We can then say that we have a defeater D for some properly basic belief H iff:

  1. P(H) > 0.5
  2. P(H|D) ≤ 0.5
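To make the defeater condition concrete, here is a toy sketch in Python (all numbers are hypothetical, chosen purely for illustration):

```python
# Toy illustration of the defeater condition; every number here is invented.
# H is properly basic iff its prior P(H) exceeds 0.5; D defeats H iff the
# posterior P(H|D) drops to 0.5 or below.

def is_properly_basic(prior_h: float) -> bool:
    """A belief H counts as properly basic iff P(H) > 0.5."""
    return prior_h > 0.5

def is_defeater(prior_h: float, posterior_h_given_d: float) -> bool:
    """D is a defeater for properly basic H iff P(H) > 0.5 and P(H|D) <= 0.5."""
    return prior_h > 0.5 and posterior_h_given_d <= 0.5

# Hypothetical example: H starts at 0.8; learning D drags it down to 0.4.
print(is_properly_basic(0.8))   # True
print(is_defeater(0.8, 0.4))    # True: D defeats H
print(is_defeater(0.8, 0.7))    # False: H survives D
```

Nothing hangs on the particular numbers; the point is only that the defeater condition is a mechanical check on the prior and the posterior.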

Recall that a properly basic belief is one which a person is rational to maintain even in the absence of inferential evidence or rational argument, so long as genuine defeaters are not forthcoming. Take inferential evidence and rational argument to be species of evidence in the Bayesian sense: something E is evidence, in the Bayesian sense, for a hypothesis/proposition H if and only if P(H|E) > P(H) (the probability being prior, that is, relative to the condition ‘E’). Thus, we might have some beliefs whose prior probabilities are appropriately assigned in the range 0.5 < x ≤ 1, and these beliefs we will be able to rationally maintain even in the absence of inferential evidence or rational argument.
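The Bayesian definition of evidence can likewise be made concrete with a small sketch (the prior and likelihoods below are invented for illustration; the posterior is computed by Bayes’ theorem):

```python
# Sketch of the Bayesian definition of evidence; all numbers are hypothetical.
# E is evidence for H iff P(H|E) > P(H), where Bayes' theorem gives
#   P(H|E) = P(E|H) * P(H) / P(E), with
#   P(E)   = P(E|H) * P(H) + P(E|~H) * (1 - P(H)).

def posterior(prior_h: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Compute P(H|E) via Bayes' theorem."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

def is_evidence_for(prior_h: float, p_e_given_h: float, p_e_given_not_h: float) -> bool:
    """E is Bayesian evidence for H iff the posterior exceeds the prior."""
    return posterior(prior_h, p_e_given_h, p_e_given_not_h) > prior_h

# E raises H's probability exactly when H predicts E better than ~H does.
print(is_evidence_for(0.6, 0.9, 0.3))  # True:  posterior ~0.818 > 0.6
print(is_evidence_for(0.6, 0.3, 0.9))  # False: posterior ~0.333 < 0.6
```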

In fact, if one has coherentist leanings (as opposed to foundationalist leanings), one could even articulate a form of proper basicality for the coherentist using precisely this language, simply adopting a subjective Bayesian account of the priors in place of the objective Bayesian one (mutatis mutandis).

 

[1] See: Richard Jeffrey, “Radical Probabilism (Prospectus for a User’s Manual),” Philosophical Issues 2 (1992): 193–204.

[2] See: Peter Forrest, “The Epistemology of Religion,” in The Stanford Encyclopedia of Philosophy (Summer 2017 Edition), edited by Edward N. Zalta, accessed June 15th, 2019: https://plato.stanford.edu/entries/religion-epistemology/#RefoEpis

[3] See: Alexander R. Pruss, “Internalism, Externalism and Bayesianism,” Alexander Pruss’s Blog, accessed June 15th, 2019: http://alexanderpruss.blogspot.com/2017/03/internalism-externalism-and-bayesianism.html

[4] One might think to suggest that this is more a vice than a virtue, for perhaps an account which is both internalist and externalist inherits all the problems peculiar to either side of that divide. I think this is mistaken; the only way to have an epistemology which is adequate for answering both internalist and externalist concerns is going to be an epistemology whose framework allows us to address puzzles peculiar to each front, and no argument for the indispensability of internalism or externalism will stand as an objection to such an epistemology.

A Moore Bayesian Defense

G. E. Moore famously argued, contra the skeptic, that he had a hand. What he meant by that provocatively simple rebuttal was roughly this: however sound one might be inclined to think any argument for skepticism, the fact remains that one should always put more credence in the belief that one has a hand (or some equally evident proposition) than in the claim that the argument for skepticism is sound. In other words, people should regard themselves as within their epistemic rights to be more confident that they have a hand than they can ever be that any one (let alone all) of the premises of an abstract philosophical argument for skepticism happens to be true. We ought (meaning any one of us ought), according to Moore (or, at least, my interpretation of Moore), to assign a higher prior probability to the proposition that “I have a hand” than to the proposition that, for instance, “in order to know that P, I have to know that I know P.”[1]

Moore’s response is often thought cute but wanting; it refuses to play by the philosophical rules which the skeptic is taking for granted and it reeks of special pleading, as though the response were really a thinly veiled philosopher’s trick. It seems intuitively attractive to us, but can we really say any more in its defense than that it sounds right?

It occurred to me only recently that perhaps we can defend a Moorean response to skepticism with a little more rigour than merely hand-waving in the general direction of intuitions. Suppose, pace subjective Bayesians, that objective Bayesianism is correct (the difference being, of course, that subjective Bayesians maintain that nearly any constellation of probability assignments for our priors is appropriate so long as we’re being consistent, while objective Bayesians argue that some constellations of prior probability assignments are more (or less) appropriate than others).[2] Some might think to calibrate the priors by appeal to a principle of indifference (for instance: if we have no reason to think that proposition P is true, nor any to believe P is false, then we ought to assign P the prior probability 0.5). I think those who subscribe to what has been called ‘Natural-Law’ Bayesianism have a more attractive idea; they suggest, instead, that there are certain appropriate probability assignments we naturally tend to give particular priors. This actually opens Natural-Law Bayesianism to scientific evidence in ways few (perhaps no) other versions of objective Bayesianism can be (at least, qua their Bayesian commitments). For instance, one way to roughly estimate the appropriate prior probability of some proposition, P, is to take the “arithmetic average”[3] of the probability assignments given to P by some sufficiently large (so as to be representative) group of human beings. Teasing this out might be complicated, but that can be thought of as a problem for the social scientists and psychologists rather than the philosophers (there’s a division of intellectual labour for a reason). Now, suppose that Natural-Law Bayesianism provides a decision-theory for calibrating our prior-probability assignments (indeed, we might even update our priors given incoming information about what the more natural assignment of the prior converges towards, though, as Pruss interestingly observes, this wouldn’t be a Bayesian update). It appears likely (let us not read too much into the word) that one can cash out the fundamental Moorean claim as follows: the prior probability that we have a hand, while not indefeasibly high (i.e., 1), ought to be so much higher than that of any (or all) of the premises in any argument for skepticism that our Bayesian-updating procedure, upon considering such an argument, will not leave us affirming skepticism.
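As a minimal sketch of the “arithmetic average” idea (the survey responses below are invented; gathering real data would be the social scientists’ job):

```python
# Minimal sketch of estimating a "natural" prior for a proposition P by
# averaging the probability assignments of a sample of people. The sample
# below is entirely hypothetical.

def estimated_natural_prior(assignments: list[float]) -> float:
    """Arithmetic mean of individual prior assignments to P."""
    return sum(assignments) / len(assignments)

# Hypothetical survey responses for "I have a hand":
sample = [0.99, 0.97, 1.0, 0.95, 0.99, 0.98]
print(round(estimated_natural_prior(sample), 3))  # 0.98
```

A real study would, of course, need a representative sample and a defensible elicitation method; the sketch only shows that the estimate itself is trivial arithmetic.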

Moore’s response is, in some ways, a big ask. Nevertheless, I think that Natural-Law Bayesianism gives us a way to shore up his fundamental claim, provided the social-scientific and psychological data turn out to support his supposition. One among many advantages of this is that it opens Moore’s response to skepticism up to experimental (dis)confirmation. I hope to expand on this idea in the future. My preliminary reaction to the thought, though, is that I think it’s right, and I expect that, were the evidence forthcoming, it would support Moore’s move from the perspective of Natural-Law Bayesianism.

Interestingly, this also implies that one may be perfectly rational in agreeing that each premise of an argument is extremely plausible (more plausible than not), and that the conclusion follows validly from the premises, while reserving the right to reject the conclusion on the grounds that the conjunction of the premises is not as plausible as at least one proposition with which their combination is incompatible.
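A quick worked illustration of the point (all numbers hypothetical; for simplicity the premises are treated as probabilistically independent, so the probability of their conjunction is just the product):

```python
# Worked illustration of how individually plausible premises can yield an
# implausible conjunction. Numbers are invented, and independence is assumed
# purely for simplicity of the arithmetic.

premises = [0.8, 0.8, 0.8]   # each premise is more plausible than not

conjunction = 1.0
for p in premises:
    conjunction *= p

print(round(conjunction, 3))   # 0.512

# If the prior for "I have a hand" sits at, say, 0.99, one may rationally
# reject a skeptical conclusion incompatible with it, despite granting that
# each premise of the skeptical argument is probable.
prior_hand = 0.99
print(conjunction < prior_hand)  # True
```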

And there we have it, a more Bayesian defense of Moore’s response to the skeptic.

[1] I borrow this phrasing, as best memory will allow, from W.L. Craig who used it during one (or several) of his lectures on the Defenders podcast. I do not now, however, have an exact citation for it. Here, though, is a useful link in which he explains what he means. https://www.reasonablefaith.org/question-answer/P80/does-knowledge-require-certainty

[2] For an excellent article on this, see: William Talbott, “Bayesian Epistemology,” in The Stanford Encyclopedia of Philosophy (Winter 2016 Edition), edited by Edward N. Zalta, accessed March 25th, 2019: https://plato.stanford.edu/archives/win2016/entries/epistemology-bayesian/

[3] Alexander Pruss, “Conciliationism and Natural Law Epistemology,” Alexander Pruss’s Blog, accessed March 25th, 2019: https://alexanderpruss.blogspot.com/2019/02/conciliationism-and-natural-law.html