Perfect Being Theology, Mysterious Superlatives, and God’s Necessary Goodness.

I typically define theism in company with those who, under the enduring influence of St. Anselm, follow him in affirming that God is that than which nothing greater can be conceived. To update the Anselmian lingo in the way preferred by analytic theologians, God is a maximally great being, which is to say that God is the being which exemplifies the uniqualizing[1] property of having the largest set of compossible categorically great-making attributes.[2] Thus, if omnipotence is a categorically great-making property (i.e., a property which it is in every respect better to have than to lack), and omnipotence isn’t known to be incompatible with any other categorically great-making properties, then God is probably omnipotent (which is to say, omnipotence probably belongs to the set of compossible categorically great-making properties than which no set is greater). This is obviously a shortcut (for, if some property which appeared to be categorically great-making were incompatible with the largest set of consistent categorically great-making properties, then it would not really be categorically great-making at all), but it is a useful one. Theists who subscribe to this theological/philosophical strategy claim that what we can coherently say about God, at least absent any appeal to revelation, is that for any categorically great-making property P, God has P if and only if P is part of the largest set of categorically great-making properties all of which are compatible with each other. Practically speaking, if omnipotence is compatible with omniscience, omnibenevolence, omnipresence, immutability, divine simplicity, aseity, et cetera, and those are all compatible with each other, then God can safely be said to have all of those properties.
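The core biconditional of this strategy can be stated compactly. In the following sketch (the notation is mine, not standard), $\mathcal{G}$ is the class of categorically great-making properties and $S^{*}$ is the largest compossible subset of $\mathcal{G}$:

```latex
\forall P \in \mathcal{G} \;:\; G \text{ has } P \iff P \in S^{*},
\qquad S^{*} = \text{the largest } S \subseteq \mathcal{G} \text{ such that } S \text{ is compossible}
```

The "shortcut" mentioned above is then the move from "P is in $\mathcal{G}$ and P is not known to conflict with the rest of $\mathcal{G}$" to "probably P is in $S^{*}$."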

One notoriously difficult problem with this ‘perfect being theology,’ as I’ve laid it out, is that particular superlative attributes are always liable to be rejected on the grounds that they are found, after all, to be incompatible with one another for some philosophically subtle reason. For example, if we found, contrary to current expectations, that omnibenevolence were incompatible with being altogether just, and those were both categorically great-making properties, then one or the other of them would not actually be a property of God (according to the perfect being theologian). So the perfect being theologian’s approach to defining God makes any alleged property of God negotiable in terms of a philosophical trade-off. By applying the right kind of philosophical pressure you can in principle always force perfect being theologians to choose between God’s being immutable and divinely simple on the one hand, and omnisubjective on the other (or any other superlatives in either place). Most of the time this is a purely academic concern; practically speaking, the perfect being theologian can get all of the properties the classical theist wants, using perfect being theology, without any serious difficulty. Still, the perfect being theologian operates almost as though her view of God were a hypothesis which could, at any moment, be overturned by a flood of new philosophical considerations. That may not be such a serious problem on its face; after all, the scientist treats the theory of evolution, or atomic theory, or any other theory, as though it might, at any moment, be overturned, but is increasingly confident in these theories as they prove their explanatory worth over time and in the face of multiple challenges.
The perfect being theologian may think the very same thing about God as classically construed (e.g., as being omnipotent, and omniscient, et cetera), since it remains philosophically viable in the face of several serious challenges it has faced down through the centuries. A serious challenge to the strategy of the perfect being theologian exists, however, insofar as the perfect being theologian ought to admit the possibility of mysterious superlative attributes.

A mysterious superlative attribute is a categorically great-making property which is in principle beyond the reach of human cognition. In other words, it is a property which is beyond our ken, and thus unanalyzable (at least as far as we’re concerned). Suppose we have some such property X; for all we know, X is incompatible with one, many, or all of the superlative attributes generally ascribed to God. Even if we think that X is unlikely to be incompatible with these properties (and that, were it incompatible, it would for that very reason probably not belong to the largest set of compossible superlatives), for all we know there are other, equally indiscernible mysterious properties {X1, X2…, Xn}. We have no way of telling how likely it is that there are only a handful of such mysterious superlatives, or even that there are only finitely many such properties, and it seems impossible to dismiss out of hand the possibility that any one of them might be incompatible with any or all of the non-mysterious superlatives. It isn’t hard to see why this poses such a serious challenge to the strategy of perfect being theology. Unless the perfect being theologian can give some very impressive reason to think (i) that no mysterious superlatives exist, (ii) that if they do exist there are few enough of them, and/or each is so unlikely to be incompatible with non-mysterious superlatives, that taken together they are extremely unlikely to imply that any of the non-mysterious superlatives are missing from the largest set of compossible categorically great-making properties, or (iii) that no mysterious superlative is possibly incompatible with the non-mysterious superlatives, then she is in serious trouble. She will be forced to treat her theology as a useful fiction, however well pragmatically justified.
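The force of clause (ii) can be illustrated with a toy calculation (the quantities n and ε are purely illustrative assumptions of mine, not anything asserted above). If there are n mysterious superlatives and each, independently, has a small probability ε of being incompatible with some non-mysterious superlative, then the probability that at least one such incompatibility obtains is

```latex
P(\text{at least one incompatibility}) \;=\; 1 - (1 - \varepsilon)^{n}
```

which approaches 1 as n grows, however small ε may be. This is why the theologian needs some principled bound on both the number of mysterious superlatives and their likelihood of conflict.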
She will end up having to adopt some form of theological anti-realism analogous to (some) versions of scientific anti-realism, and for the purposes of systematic theology that simply will not do.

I’ve been contemplating this problem for a while. I once hoped that the theologian could use some argument from the nature of language to show that any concepts which in principle cannot be given an expression in at least one language possibly comprehensible to us must necessarily be lacking the semantic machinery required for incompatibility with any concept which can in principle be given expression in a language comprehensible to us. While that sounds vaguely promising, I simply have no good ideas about how to cash out that (speculative) claim. It also raises a legitimate question about what we might call quasi-mysterious superlatives (i.e., categorically great-making properties which are in principle intelligible to us, but which are in fact unintelligible to us and/or have never occurred to anybody) which I am not entirely ready to answer.

Nevertheless, it occurred to me recently that we might be able to safeguard at least one of the non-mysterious superlative attributes even in the face of the challenge posed by the possibility of mysterious superlatives which are incompatible with non-mysterious superlative attributes. It seems that God’s being the paradigm of goodness itself (goodness simpliciter – not to be confused with merely moral goodness) is a non-negotiable non-mysterious superlative attribute. In its absence, there wouldn’t even be a standard against which properties could be said to be objectively great-making. Very plausibly, one needs a paradigm of goodness in order to talk meaningfully about greatness (in the relevant sense), and if there is a maximally great being then it must be, among other things, the paradigm of goodness. Therefore, even if God (understood as the maximally great being) has mysterious superlatives which are just beyond our ken, we can know with certainty that, whatever they are, they must be compatible with being goodness itself. Thus, the set of compossible categorically great-making properties must necessarily include being identical to the Good. Unless God’s nature serves as the barometer or paradigm of greatness in our ‘great-making’ sense, God cannot be a maximally great being. The whole coherence of perfect being theology hangs on God having the property of being the paradigm of (categorical) greatness.

Supposing this argument is successful, how comforting should its conclusion be for the perfect being theologian? It certainly doesn’t give her everything she wants, so she has plenty of work still cut out for her, but she might be able to use this as an almost Archimedean point from which to make progress. For instance, perhaps some other properties, such as moral goodness, necessarily flow out of an appropriate analysis of being the paradigm of goodness simpliciter. Perhaps, in addition, a parallel argument can be run for other properties, such as being the paradigmatic existent.[3] Ultimately, I think the potential of the arguments I’ve presented, even if successful/sound, is extremely limited. It isn’t good enough to assuage my concerns, but it does feel like a good start. If there is a fatal problem with my argument I suspect it will be caused by some kind of circularity (e.g., God being defined by greatness and greatness being defined by God), but it isn’t clear to me, at present, that there is a non-superficial problem here. Nonetheless, it is a challenge about which I shall have to think carefully in future.


[1] By ‘uniqualizing’ I mean a property which is had, if at all, by at most one being. See: Alexander R. Pruss, “A Gödelian Ontological Argument Improved Even More,” in Ontological Proofs Today 50 (2012): 204.

[2] Thomas V. Morris, “The concept of God,” in Philosophy of Religion: An Anthology, ed. Louis Pojman, Michael C. Rea (Boston: Cengage Learning, 2011): 17.

[3] Obviously, the person to read here is Vallicella; see: William F. Vallicella, A Paradigm Theory of Existence: Onto-Theology Vindicated, vol. 89 (Springer Science & Business Media, 2002).

An Amended Minimal Principle of Contradiction

The law of non-contradiction seems self-evidently true, but it has its opponents (or, at least, opponents of its being necessary (de dicto) simpliciter). W.V.O. Quine is perhaps the best-known philosopher to call the principle into question, by calling analyticity itself into question in his famous essay “Two Dogmas of Empiricism” and suggesting that, if we’re to be thoroughgoing empiricists, we ought to adopt a principle of universal revisability (that is to say, a principle according to which absolutely any of our beliefs, however indubitable to us, should be regarded as revisable in principle, including the principle of revisability itself). Quine imagined our beliefs as networked together like parts of a web: beliefs to which we aren’t strongly committed sit near the periphery and are much less costly to change than the beliefs to which we are most strongly committed, which sit near the center. Changing parts of the web nearer the periphery does less to change the overall structure of the network than changing beliefs at the center. Evolution, in operating upon our cognitive faculties, has selected for our tendency towards epistemic conservatism.

This, he thinks, is why we don’t mind changing our peripheral beliefs (for instance, beliefs about whether there is milk in the fridge or whether a certain economic plan would better conduce to long-term increases in GDP than a competing plan) but we stubbornly hold onto our beliefs about things like mathematics, logic, and even some basic intuitive metaphysical principles (like Parmenides’ ex nihilo nihil fit). Nevertheless, indubitability notwithstanding, if all our knowledge is empirical in principle, then everything we believe is subject to revision, according to Quine. He boldly states:

… no statement is immune to revision. Revision even of the logical law of the excluded middle has been proposed as a means of simplifying quantum mechanics; and what difference is there in principle between such a shift and the shift whereby Kepler superseded Ptolemy, or Einstein Newton, or Darwin Aristotle?1

This statement was no mere oversight on Quine’s part. Those who defend his view have suggested that even the law of non-contradiction should be regarded as revisable, especially in light of paraconsistent systems of logic in which the law of non-contradiction is neither axiomatic nor derivable as a theorem within those systems. This is why Chalmers calls attention to the fact that many regard Quine’s essay “as the most important critique of the notion of the a priori, with the potential to undermine the whole program of conceptual analysis.”2 In one fell swoop Quine undermined not only Carnap’s logical positivism, but analyticity itself, and with it a host of philosophical dogmas ranging from the classical theory of concepts to almost every foundationalist epistemological system. The force and scope of his argument were breathtaking, and it continues to plague and perplex philosophers today.

More surprising still is the fact that Quine isn’t alone in thinking that every belief is revisable. Indeed, there is a significant faction of philosophers committed to naturalism and naturalized epistemology who think that a fully naturalized epistemology will render all knowledge empirical and, therefore, subject to revision in principle. Michael Devitt, for instance, defines naturalism epistemologically (rather than metaphysically):

“It is overwhelmingly plausible that some knowledge is empirical, justified by experience. The attractive thesis of naturalism is that all knowledge is; there is only one way of knowing.”3

Philosophical attractiveness, I suppose, is in the eye of the beholder. It should be noted, in passing, that metaphysical naturalism and epistemological naturalism are not identical. Metaphysical naturalism does not entail epistemological naturalism, and neither does epistemological naturalism entail metaphysical naturalism. I have argued elsewhere that there may not even be a coherent way to define naturalism, but at least some idea of a naturalized metaphysic can be intuitively extrapolated from science; there is, though, no intuitive way to extrapolate a naturalized epistemology from science. As Putnam puts it:

“The fact that the naturalized epistemologist is trying to reconstruct what he can of an enterprise that few philosophers of any persuasion regard as unflawed is perhaps the explanation of the fact that the naturalistic tendency in epistemology expresses itself in so many incompatible and mutually divergent ways, while the naturalistic tendency in metaphysics appears to be, and regards itself as, a unified movement.”4

Another note in passing: strictly speaking, Devitt’s statement could simply entail that we do not ‘know’ any analytic truths (perhaps given some qualified conditions on knowledge), rather than that there are no analytic truths, or even that there are no knowable analytic truths. Quine, I think, is more radical insofar as he seems to suggest that there are no analytic truths at all, and at least suggests that none can possibly be known. Devitt’s statement, on the other hand, would be correct even if it just contingently happened to be the case that not a single person satisfied the sufficient conditions for knowing any analytic truth.

Hilary Putnam provided a principle which is allegedly a priori, and which, it seems, even Quine could not have regarded as revisable. Calling this the minimal principle of contradiction, he states it as:

“Not every statement is both true and false.”5

Putnam himself thought that this principle establishes that there is at least one incorrigible a priori truth which is believed, if at all, infallibly. Putnam shares in his own intellectual autobiography that he had objected to himself, in his notes, as follows:

“I think it is right to say that, within our present conceptual scheme, the minimal principle of contradiction is so basic that it cannot significantly be ‘explained’ at all. But that does not make it an ‘absolutely a priori truth’ in the sense of an absolutely unrevisable truth. Mathematical intuitionism, for example, represents one proposal for revising the minimal principle of contradiction: not by saying that it is false, but by denying the applicability of the classical concepts of truth and falsity at all. Of course, then there would be a new ‘minimal principle of contradiction’: for example, ‘no statement is both proved and disproved’ (where ‘proof’ is taken to be a concept which does not presuppose the classical notion of truth by the intuitionists); but this is not the minimal principle of contradiction. Every statement is subject to revision; but not in every way.”6

He writes, shortly after recounting this, that he had objected to his own objection by suggesting that “if the classical notions of truth and falsity do not have to be given up, then not every statement is both true and false.”7 This conditional, he thought, had to be absolutely unrevisable.

This minimal principle of contradiction, or some version of it, has seemed to me nearly indubitable, and this despite my sincerest philosophical efforts to doubt it. However, as I was reflecting on it more deeply recently, I realized that it is possible to enunciate an even weaker, more minimalist (that is to say, all things being equal, more indubitable) principle. As a propaedeutic note, I observe that not everyone agrees on what the fundamental truth-bearers are (whether propositions, tokens, tokenings, etc.), so one’s statement of the principle ideally shouldn’t tacitly presuppose any particular view. Putnam’s statement seems non-committal, but I think it is possible to read some relevance into his use of the word ‘statement’ such that the skeptic may quizzaciously opine that the principle isn’t beyond contention after all. In what follows, I will use the term ‘proposition*’ to refer to any truth-bearing element in a system.

Consider that there are fuzzy logics: systems in which bivalence is denied. A fuzzy logic, briefly, is a system in which propositions are not (necessarily) regarded as straightforwardly true or false, but as ‘true’ to some degree. For instance, to what degree is Michael bald? How many hairs, precisely, does Michael have to have left in order to be considered one hair away from being bald? It seems that predicates like ‘bald’ carry some ambiguity about their necessary conditions. Fuzzy logic is intended to deal with that fuzziness by allowing us to assign values in a way best illustrated by example: “Michael is 0.78 bald.” That is, it is 0.78 true that Michael is bald (something like 78% true). Obviously we can always ask the fuzzy logician whether her fuzzy statement is itself 1.0 true (and here she either admits that fuzzy logic is embedded in something like a more conventional bivalent logic, or she winds up stuck with infinite regresses of the partiality of truths), but I digress. Let’s accept, counterpossibly, that fuzzy logics provide a viable way to deny bivalence, and thus allow us to give a principled rejection of Putnam’s principle.
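For concreteness, here is a minimal sketch of how degrees of truth behave under the standard Zadeh connectives (complement, min-conjunction, max-disjunction); the 0.78 figure is just the running example from above, and the second value is an assumption of mine for illustration:

```python
# Zadeh-style fuzzy connectives: truth values are reals in [0, 1].
def f_not(a: float) -> float:
    """Fuzzy negation: the complement of a degree of truth."""
    return 1.0 - a

def f_and(a: float, b: float) -> float:
    """Fuzzy conjunction: a conjunction is only as true as its weakest conjunct."""
    return min(a, b)

def f_or(a: float, b: float) -> float:
    """Fuzzy disjunction: a disjunction is as true as its strongest disjunct."""
    return max(a, b)

bald = 0.78  # "Michael is bald" is 0.78 true (the running example)
tall = 0.40  # "Michael is tall" is 0.40 true (assumed for illustration)

print(f_and(bald, tall))        # conjunction takes the minimum: 0.4
print(f_or(bald, f_not(bald)))  # an excluded-middle instance: only 0.78, not 1.0
```

Notice that the excluded-middle instance comes out only 0.78 true, which is precisely the sense in which such systems deny bivalence; and asking whether that verdict is itself 1.0 true restarts the regress mentioned above.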

Even so, I think we can amend the principle so that it resists even this maneuver. Here is my proposal for an amended minimal principle of contradiction:

“Not every single proposition* has every truth value.”

I think that this is as bedrock an analytic statement as one can hope to come by. It is indubitable, incorrigible, indubitably incorrigible, and it holds true across all possible systems/logics/languages. It seems, therefore, to be proof positive of analyticity in an impressively strong sense: namely, in the sense that necessity is not always model-dependent. At least one proposition* is true across all possible systems, and so it is necessary in a stronger sense than something’s merely being necessary as regarded from within some logic or system of analysis.
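The contrast between Putnam’s principle and the amended one can be displayed side by side. Writing $V$ for whatever set of truth-values a given system supplies, and $\mathrm{Val}(p^{*}, v)$ for “$p^{*}$ has value $v$” (my notation, not Putnam’s):

```latex
\text{Putnam:} \quad \neg \forall s \, \big( T(s) \wedge F(s) \big) \\
\text{Amended:} \quad \neg \forall p^{*} \, \forall v \in V \; \mathrm{Val}(p^{*}, v)
```

Since the amended form quantifies over whatever truth-values the system itself recognizes, it is not hostage to bivalence in the way the original formulation arguably is.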


As a post-script, here are some principles I was thinking about as a result of the above lines of thought. First, consider the principle:

At least one proposition* has at least one truth-value.

To deny this is to deny oneself a system altogether. No logic, however esoteric or unconventional or counter-intuitive, can get off the ground without this presupposition.

Consider another one:

For any proposition* P, if we know/assume only about P that it is a proposition*, then P more probably than not has at least one truth-value.

I’m not certain about this last principle, but it does seem intuitive. The way to deny it, I suppose, would be to suggest that even if most propositions* were without truth-values, one could identify a sub-class of propositions* with an extremely high probability of having a truth-value, and that would allow one to operate on an alternative assumption.

[Note: some of the following footnotes may be wrong and in need of fixing. Unfortunately I would need several of my books, currently in Oxford with a friend, to adequately check each reference. I usually try to be careful with my references, but here I make special note of my inability to do due diligence.]

1 W.V.O. Quine, “Two Dogmas of Empiricism,” The Philosophical Review 60, no. 1 (January 1951): 40.

2 David J. Chalmers, “Revisability and Conceptual Change in ‘Two Dogmas of Empiricism’,” The Journal of Philosophy 108, no. 8 (2011): 387.

3 Louise Antony, “A Naturalized Approach to the A Priori,” in Epistemology: An Anthology, 2nd ed., ed. Ernest Sosa, Jaegwon Kim, Jeremy Fantl, and Matthew McGrath (Oxford: Blackwell Publishing, 2000), 1.

4 Hilary Putnam, “Why Reason Can’t Be Naturalized,” in Epistemology: An Anthology, 2nd ed., ed. Ernest Sosa, Jaegwon Kim, Jeremy Fantl, and Matthew McGrath (Oxford: Blackwell Publishing, 2000), 314.

5 Hilary Putnam, “There Is at Least One A Priori Truth,” in Epistemology: An Anthology, 2nd ed., ed. Ernest Sosa, Jaegwon Kim, Jeremy Fantl, and Matthew McGrath (Oxford: Blackwell Publishing, 2000), 585-594.

6 Randall E. Auxier, Douglas R. Anderson, and Lewis Edwin Hahn, eds., The Philosophy of Hilary Putnam, vol. 34 (Open Court, 2015), 71.

7 Randall E. Auxier, Douglas R. Anderson, and Lewis Edwin Hahn, eds., The Philosophy of Hilary Putnam, vol. 34 (Open Court, 2015), 71.


In an article by Stephan Torre which I read very recently, I was introduced to an intriguing idea: namely, that tokens, and not propositions, are the fundamental bearers of truth-values. The usual view, of course, is that propositions (whatever one thinks they are) are the things to which the categories/properties ‘true’ and ‘false’ exclusively apply. Tokens, then, merely express truths insofar as (and just in case) they express propositions which are true. On the alternative story, which Torre refers to as the “Token View,” it is tokens which are the fundamental truth-bearers. This alternative story is as indifferent to different theories of truth (e.g., correspondence, coherence, pragmatist) as the usual story. Turning to Torre, we read:

“There are different views regarding what the fundamental bearers of truth are. One view is that truth applies fundamentally to tokens. On this view, the predicate ‘is true’ is properly applied only to tokens. Such a view is committed to denying that there are token-independent truths. I will refer to this view as the ‘Token View’. A rival view takes truth to apply fundamentally to propositions. On this view, tokens are true or false only derivatively: tokens express propositions and a token is true iff it expresses a true proposition. This view does allow for the existence of token-independent truths.”[1]

I think it is worth having a bit of fun thinking about what the consequences of this prima facie absurd view would be. As it turns out, the view might have some theologically welcome consequences. For instance, it seems clear that the alleged set-theoretic problems for the doctrine of omniscience evaporate; even if there is no such thing as the set of all (true) propositions, there is clearly[2] such a thing as the set of all (true) tokens, at least if tokens are created by finitely many minds with finite capacities/abilities.

Tokens, like propositions, may require facts (i.e., extra-mental and extra-linguistic truth-makers), but God could be omniscient either factually (i.e., by having direct unmediated acquaintance with the facts, rather than their representations to the discursive intellect in the form of tokens or propositions), or else God can be token-omniscient. What is it, precisely, to be token-omniscient? Let us stipulate a definition:

G is token-omniscient =def. G knows all true tokens, and believes no false tokens.
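The stipulated definition can be rendered as a toy predicate over a finite stock of tokens; everything here (the Token type, the sample tokens, the treatment of ‘knows’ and ‘believes’ as set membership) is an illustrative assumption of mine, not part of the proposal itself:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Token:
    content: str
    is_true: bool  # whether this token is in fact true (a toy stand-in for truth)

def token_omniscient(knows: set, believes: set, all_tokens: set) -> bool:
    """G is token-omniscient iff G knows all true tokens and believes no false tokens."""
    true_tokens = {t for t in all_tokens if t.is_true}
    false_tokens = all_tokens - true_tokens
    return true_tokens <= knows and not (believes & false_tokens)

t1 = Token("grass is green", True)
t2 = Token("2 + 2 = 5", False)
tokens = {t1, t2}

print(token_omniscient(knows={t1}, believes={t1}, all_tokens=tokens))      # True
print(token_omniscient(knows={t1}, believes={t1, t2}, all_tokens=tokens))  # False
```

On the assumption (flagged in note [2]) that finitely many finite minds produce only finitely many tokens, quantifying over all_tokens is unproblematic in a way that quantifying over all propositions allegedly is not.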

Suppose that this view is correct argumenti gratia, and suppose that God’s mental activity produces tokens. In this case it looks as though an old adage of Christian theology is more literally true than it seemed at first glance: to think truly is to think God’s thoughts after Him.

Objection 1: Surely quantification over tokens isn’t problematic unless there are indefinitely many of them. However, it is difficult to imagine a finite mind tokening a truth as yet untokened by God, even in any logically possible world. It seems plausible to say, then, that if God tokens any truths then He tokens all truths, but the set of all truths is indefinitely large. Set-theorists have no problem quantifying over infinite sets; the problem was always with quantifying over ‘indefinite’ sets, which are not sets at all. If the set of all true tokens is indefinitely large, then the problem recurs.

Response 1: Perhaps we should make a distinction analogous to the distinction between first-order propositions (propositions about the world) and second-order propositions (propositions about propositions about the world), and restrict God’s knowledge to first-order tokens.

G is first-order token-omniscient =def. G knows all true first-order tokens, and believes no false tokens.

God would, of course, still know all first-order tokens about second-, third-, fourth-, etc.-order tokens which occur to finite minds, and that seems sufficient for omniscience.

Objection 2: Suppose that (logically/explanatorily) prior to God’s creating anything, He realizes that there are no tokens, and, in realizing this (and being always first-order token-omniscient), mentally produces the first-order token T1: “there are no tokens.” This is false, and (being a token) is necessarily false. God would then not only fail to be token-omniscient, but wouldn’t even have (all and) only true beliefs.

Response 2: It might not be logically possible for God to token T1, but perhaps it is possible, and inevitable (given the assumptions upon which we are now working in this thought experiment), that God token T2: “There are no (other) tokens,” or, rephrased more elaborately, T2’: “there are no tokens other than this one.” Perhaps, to avoid self-reference paradoxes, we should say of tokens, as I am inclined to say of propositions, that (unless they pick out a universal quality, such as the disjunctive property of being true or false or meaningless) they all come with a caveat de aliis implicite (i.e., with an implicit caveat that they are ‘about’ others). Stand-alone sentences such as “the set of all things I say in this sentence is imponderable” are not true; they are entirely bereft of truth-apt content, mere pseudo-meaningful sentence constructions. So also, it seems to me, “this sentence is true” is meaningless, and “there are no sentences” is meaningless. [I am not sure I’m right about this; this is just a knee-jerk reaction on my part to self-reference paradoxes.]

What are the downsides of this view (other, of course, than that it seems crazy)? I’m not sure I can think of any unanswerable objections to it, and that alone may make it worth pondering, at least for fun.


Edit: Ok, here’s an obvious objection to Token-Omniscience which I, for whatever reason, didn’t think of previously: suppose I token the following: “I am Tyler.” The token’s content is irreducibly bound up with the sense of the indexical ‘I’ in such a way that nobody distinct from me could recognize that token as true, even if they could have recognized the propositional content to be true. The token, per se, is unknowable to any being distinct from me. Therefore, if tokens are the fundamental truth-bearers, and any more than one being ever uses a personal pronoun to index themselves in tokening a truth, no being can be (first-order) token-omniscient. That seems like a pretty definitive defeater to token-omniscience to me.


[1] Stephan Torre, “Truth-conditions, truth-bearers and the new B-theory of time,” Philosophical Studies 142, no. 3 (2009): 325-344.

[2] I assume that it is logically impossible to have an actually infinite set of tokens created by finitely many finite minds. This can be challenged, of course, by either insisting that there is no absurdity, contra apparentiam, in positing actual infinities, or else that the absurdities do not arise for tokens. If such suggestions are to be taken seriously, then I would have to weaken my claim here from ‘clearly’ to ‘plausibly,’ but all else would remain the same.

Bifurcating ‘Knowing P’ and ‘Knowing P to be true’

William Lane Craig has argued that the difference between ‘Billy the Hippo is fat’ and ‘the proposition Billy the Hippo is fat is true’ is one of semantic ascent. One semantically ascends when moving from a claim of the first kind to a claim of the second kind, and, conversely, semantically descends when going from a claim of the second kind to a claim of the first kind. He says:

I could say “Hitler was a really bad man.” Or I could ascend semantically and say “It is true that Hitler was a really bad man.” Do you see the difference between the first order and the second level claim? And that second level claim doesn’t need to be made—I can just say “Hitler was a really bad man,” and I can make that affirmation sincerely and so forth without ascending semantically to saying, therefore there is a proposition which has the value true.[1]

I used to think this was right (though I’ve always been a little suspicious of it), but I think I’ve come across a reason to think that it may be wrong. I think that going from ‘Hitler was a really bad man’ to ‘it is true that Hitler was a really bad man’ is not merely a semantic ascent; it also involves the acquisition of new semantic content. Strictly speaking, it expresses a new proposition altogether. I will argue in what follows that one can know that Hitler is a bad man without knowing that the proposition ‘Hitler is a bad man’ is true, and one can also know that the proposition ‘Hitler is a bad man’ is true without knowing that Hitler is a bad man. Perhaps Craig could say that just because two propositions are related in this kind of ‘semantic order,’ knowing one needn’t entail knowing the other. This, in fact, is what he should say; he should deny that the two expressions have all and only the same semantic content. Maybe he does deny this (I don’t know), but I will try to argue that he (and we all) should.

One can know that Hitler is a bad man without knowing that the proposition ‘Hitler is a bad man’ is true because a necessary condition on knowledge is belief, and there is no psychologically necessary connection between believing that Hitler has the property of being a bad man and believing that a proposition (about Hitler’s being a bad man) has the property of being true. After all, when one knows that the proposition ‘Hitler is a bad man’ is true, one doesn’t necessarily believe that the proposition ‘the proposition “Hitler is a bad man” is true’ is true. In fact, if that were psychologically necessary, then it would follow that by knowing any one proposition one would, of psychological necessity, know infinitely many propositions, and this is absurd. Ergo, one can know that the content expressed by a proposition is true without knowing that the proposition itself has the truth-value ‘true.’

One can also know that a proposition is true without knowing its content. For instance, suppose that there were a being who only ever proclaimed true things (and suppose you were aware of this being’s having that quality).[2] Suppose further that this being proclaimed to you that proposition P is true. Even if you had only a vague understanding of the content of P (maybe it’s some proposition about quantum mechanics, or actuarial mathematics, or epistemology, or whatever field you are least familiar with), you could know that it is true. In fact, even if you had no way of making heads or tails of P, you could know that it is true. Therefore, one can know a proposition to be true without believing its content, and thus without ‘knowing’ its content.

There may be a problem here; we can legitimately wonder whether knowing that the proposition P is true requires being properly acquainted with P, and whether being ‘properly acquainted’ with P requires an understanding of its content. After all, when I see a man in the distance (who is a stranger to me, but whose name is actually ‘Bill’) break a window, I may know that somebody has broken the window, but I do not know that Bill has broken the window. In the same way I can perhaps know that some proposition is true (namely, whatever proposition was uttered by the infallible proclaiming being) without knowing, of that proposition as identified by its content, that it is true.

How can one be properly acquainted with a proposition, so as to be able to know whether it is true? If one maintains that one must know what the content of the proposition is in order to be properly acquainted with it, then it will turn out that it may not be possible to know that the proposition “Hitler is a bad man” is true without knowing that Hitler is a bad man. If, on the other hand, it suffices to be able to indicate or ‘pick out’ P from other propositions in some way, such as by saying ‘that proposition’ or ‘the proposition just uttered’ or, perhaps, by repeating/recreating the same combination of sounds/scribbles used to express the proposition in the first place, then obviously one can know a proposition P to be true without knowing its content.

In everyday life we find ourselves doing this all the time. We may hear a professor relate a proposition to us, for instance, and without yet having understood it, we know (in the sense of having a true and appropriately justified belief) that it is true. The same may happen in a court of law, where an expert witness submits testimony which we can know to be true even if we haven’t understood that to which the expert is testifying. We also often know propositions to be true, such as “E=mc²,” of which few of us have any genuine understanding. This goes to show that one needn’t be very well acquainted with a proposition in order to know that it is true. Propositional acquaintance comes in degrees, and all one needs in order to know that a proposition is true is enough acquaintance to ‘pick out’ that proposition by some means. If this is right, then being properly acquainted will require very little, and too little to present any challenge to my argument.

Where does this leave us with respect to Craig’s claim that to move from a claim like that Hitler is a bad man, to a claim like that the proposition Hitler is a bad man is true, is merely to ascend semantically? It will depend on what Craig means, precisely, by semantic ascent/descent; it should be rejected in light of my observation just in case it does not allow one to maintain i) that the propositional contents of any two propositions related in this ‘order’ of semantic ascent/descent are not identical with each other, and ii) that one can know a truth of either kind without knowing a truth of the other kind. In short, I am calling for a semantic bifurcation of propositions related to each other in a semantic order of ascent/descent.


[2] Thanks to Dr. Brian Leftow, who shared this thought in conversation. I had written in an essay that a being was propositionally omniscient if and only if it knew all true propositions and believed no false ones, but later in the same paper I defined propositional omniscience again as follows: a being B is propositionally omniscient if and only if for any proposition P, if P has the property of being true then B knows that it is true, and if P does not have the property of being true then B does not believe it. This clumsy mistake of mine led to more careful reflection on precisely this point; knowing that P is true is not the same as knowing P.

Foreseeing Problems with Engineering Optimized Academic Languages

I want to share some thoughts I recently had about the possibility of literally engineering languages optimized for academic research, and what consequences we might expect to follow from implementing the use of such languages as academically standard. The thought came to me in an example packaged with the intellectual accouterments of my own area of expertise (or at least area of academic focus), the philosophy of time, but the point generalizes (I think). Philosophy of time, perhaps especially as it exists within the Anglo-American or ‘Analytic’ tradition, requires key distinctions like the distinction between ‘time’ and ‘tense’ in order to get off the ground. Not all natural languages have developed in a way that allows one to make this distinction, however, and as a consequence some languages turn out to be better suited to the study of such niche philosophical areas than others. Recently William Lane Craig pointed out[1] that French, for example, leaves no room for the distinction between time and tense, since both ‘time’ and ‘tense’ are represented by a single word in French: ‘temps.’ He recalls that a French graduate student in philosophy struggled to communicate his ideas to his academic peers:

“As a French speaker, he found it next to impossible to communicate to his colleagues his interest in tensed versus tenseless theories of time. Since in French the word for time and the word for tense is the same, namely, temps, he found himself quite at a loss as to how to communicate something like tenseless time. People didn’t even know what he was talking about!”[2]

This is (presumably) precisely why philosophy of time has been so stagnant in the French-speaking academic world.

As I was reflecting on this it occurred to me that some language might be maximally optimal for the study of philosophy in general, or for epistemology, or metaphysics. This will not sound unfamiliar to those who are well read in the continental tradition of philosophy (think of Heidegger and Hegel and their ilk who treat the German language itself as indispensable not just for their own philosophies, but for the world’s most pristine philosophy). However, what I have in mind is more radical than this. Given that some natural languages are better suited to certain intellectual pursuits, it seems entirely plausible (theoretically) that we could optimize a language for a discipline. I mean that we would literally engineer a language, from the ground up, to be optimized for a species of academic/intellectual pursuit. Presuming indefinitely many more advances to come in the worlds of cognitive science and linguistics I can see no reason why this suggestion is infeasible in principle (in fact, we need only presume a few more leaps and bounds forward in these areas, so we could even dispense with the ‘indefinitely many more advances to come’ optimism). Imagine having an artificial language which we specially constructed to be optimized for the study of philosophy, or for the study of psychology, or chemistry, or even linguistics itself. What would the academic world look like if we were to do this? If people in these fields all used a highly specialized language (not just technical vocabulary) then how good would it be for academia in general? Would there be such a thing as a maximally optimized language for a discipline (is there a ceiling to how optimized these languages might be, or would some disciplines have ceilings while others did not – and presuming that at least a few had ‘ceilings,’ would some be lower than others, and could we infer anything interesting from that)? Interesting questions.

I can, it is true, imagine such a project being an academic boon in certain respects, but I can just as easily (perhaps more easily) imagine it being academically detrimental in other respects. For instance, if somebody who studies chemistry is (potentially) a brilliant chemist, but is no good at all with languages, wouldn’t they be disadvantaged if it became academically necessary for them to learn a new language to study in their chosen field? How many gifted prodigies would we disadvantage compared to the polymaths we would be advantaging? Moreover, we already have a problem with academic overspecialization and intellectual insulation; academics aren’t always good at keeping up with what is going on around them in other fields (political science, physics, social science, philosophy, musicology etc.), and it seems as though encouraging academics to speak in literally idiosyncratic languages optimized for, but peculiar to, their own fields, would only exacerbate this problem.

I can imagine specially engineered academic languages causing even deeper academic divisions. Imagine, for instance, that a political science student writes a thesis on how the specialized language of biology – say – is subliminally charged with a left/right-wing political ideology, or imagine that a gender-studies major writes a thesis on how the specialized language of physics is inherently and structurally sexist.[3] Such criticisms, which would be sure to come, would create not academic rapprochement but alienation and perhaps, in the worst case, even antagonism. In the end it isn’t clear to me just how helpful developing and implementing such specialized artificial languages would really be, and I suspect (for many of the reasons I’ve alluded to) that the payoff wouldn’t be worth the cost.

[1] William Lane Craig, “The Passage of Time,”

[2] William Lane Craig, “The Passage of Time,”

[3] For fun, see:

An Argument for Analogy

I have heard some atheists and skeptics about God’s existence claim that the Thomistic doctrine of analogy (i.e., that we can only speak about God by analogy, since we form our concept of God under the influence of empirical impressions of creatures, and so cannot possibly form a univocal concept of God’s essence as such) cannot be right, because there is nothing about which we can only speak by analogy. In other words, if we can speak about anything by analogy, then we can speak about that thing univocally as well (followers of Duns Scotus will often press this point).[1] In this post I want to explore a way in which I think the multiverse hypothesis, which is popular among naturalists today, implies that there are some things, after all, about which we can speak, write and think only by way of analogy, or at least by way of analogy alone.

There are a few different definitions of the multiverse hypothesis, but I will here take the multiverse hypothesis to be the thesis that there is an ensemble of universes, including our own, all of which have their own entirely separate spaces and times. On this hypothesis other space-times exist, but, it turns out, their spaces and times are incommensurable with our own. Everything may seem fine so far, but an interesting thing happens when we reflect more deeply on this (hypothetical) situation. It turns out that, in a rather straightforward way, the space and time of any alternative universe isn’t really what we refer to as space, or what we refer to as time. It makes no sense to ask ‘how far away’ an object in the space of another universe is, or ‘how long ago’ an event in another universe was, precisely because those universes do not share our time, or our space. It makes no sense to ask how old another universe is compared to ours, or how large another universe is compared to ours, for they cannot stand in such relations to each other. Those comparisons break down at this level because they become semantically vacuous. Such relations simply do not obtain between different universes.

This makes clear that our time is what we refer to when we talk about time, and our space is what we refer to when we talk about space. We are, when conceiving of other universes, taking our concepts of time and space and saying of a reality actually incommensurable with our own that it is ‘like this.’ That, however, is just to say that we are using our concepts of space and time analogously, and this is precisely the way in which the Thomist thinks we can, and must, speak about God. These other universes do not literally or univocally have any space, or any time, where these words are understood in their literal senses. You can satisfy yourself of this by thinking through some obvious considerations; for instance, consider that anything extended in time is, by logical necessity, earlier than, later than, or simultaneous with all other events in time. In the case of another universe this is not so, for anything extended in the ‘time’ of another universe is not earlier than, later than, or simultaneous with any event in our time. The same can be said of space, since any two (non-identical, non-overlapping) things extended in space are, necessarily, some distance apart from each other, but objects extended in different spaces are no distance apart from each other. The only way to make sense of talk of spaces and times incommensurable with our own is by analogy; we can speak about other space-times only by adopting a propositional attitude according to which we recognize our statements to be predicated by way of analogy. Our terms are inherited from the world with which we are familiar, and we are using them to speak about realities which we otherwise (than by analogy) cannot speak or think about at all. Nevertheless, the multiverse hypothesis can be both coherent and even true.

If I am right, then what this shows is that analogous predication is coherent and legitimate after all, at least if the multiverse hypothesis is a coherent hypothesis (it may not be, of course, but at least the naturalist/skeptic who takes it to be a coherent hypothesis will not be able to turn around and say that the Thomistic doctrine of analogy must be wrong because there isn’t anything about which we can speak only by analogy). Perhaps the naturalist will recoil at this point and argue that even if different universes have incommensurable times and spaces, that doesn’t mean that there is no way to predicate anything univocally about these different universes. For instance, perhaps two universes can stand in some real relation (for instance of similarity) to one another, or perhaps we can say that both exist in exactly the same sense of ‘exist.’ In response, I want to say that I am doubtful that any two universes can stand in any real relation to each other at all (I think this is ultimately a linguistic confusion), and even if many different universes could be said to exist in a univocal sense, their spaces and times considered as such could only be described and conceived of by analogy with our own. Perhaps it is not inappropriate to point out, as well, that existence is not a first-order predicate anyway (a point with which the naturalist will almost certainly agree), so that the fact that it can be applied apparently univocally shouldn’t worry us precisely because it isn’t a property. As such, it contributes absolutely nothing to the idea of the thing in question, and the doctrine of analogy maintains that it is our idea of the thing which can be formed exclusively by analogy.

Another objection may be that space and time are complex ideas which are conceptually formed by putting together combinations of simpler ideas, each of which can, as it turns out, be used univocally as applied to our universe and to others. For instance, somebody could suggest that time is nothing other than the direction of increasing entropy, and that ‘entropy,’ ‘direction’ and ‘increasing’ are concepts which can be applied univocally across different universes.[2] I think that this is wrong for a few reasons. First, ‘direction’ doesn’t seem to be univocally applied across different space-times (maybe it is, but it isn’t clear to me that it is). Second, I see no reason to think that time is defined by the direction in which entropy increases. In fact, the only reason we think of entropy increasing over time is because as time passes we observe an increase in entropy, but had it been the other way around we would have defined time as the direction in which entropy decreases, and, indeed, there are presumably some (at least possible) universes in the multiverse in which, as time goes on, entropy does decrease – and if this is even possible, given the multiverse hypothesis, then time cannot mean merely the direction in which entropy increases. If time simply meant the direction in which entropy increases then a universe in which entropy decreases over time would be physically impossible, but that, as far as I know, is not the case (perhaps someone could raise a quibble here about the second law of thermodynamics, but that is articulated precisely with the presumption that it is about our universe). 
Moreover, if one simply defines time as the direction in which entropy increases then I think it follows trivially that in no universe is it physically possible for entropy to decrease over time, but there is no good reason at all to accept this definition of time, and there are some very deep philosophical reasons for rejecting such a definition.[3] In any case I think our concept of time is more primitive and basic than our concept of entropy; we discovered that entropy increases as time passes, but we did not and could not have discovered the reverse.

In conclusion then, it seems to me that the naturalist faces a dialectical dilemma here. Either analogous predication is coherent and legitimate, in which case we can countenance both the doctrine of analogy and the multiverse hypothesis, or else it isn’t, in which case we cannot. If the naturalist wants to appeal to the multiverse hypothesis, even as a merely coherent hypothesis (for instance, as a possible explanation of the appearance of fine-tuning), then they will have to concede to the Thomist that we can, in principle, speak about God by analogy alone (not to be confused with the concession that we can only speak about God by analogy).


[1] See: Thomas Williams, “John Duns Scotus,” in The Stanford Encyclopedia of Philosophy (Summer 2015 Edition), Edited by Edward N. Zalta,

[2] My thanks to a friend for bringing this point to my attention.

[3] For more on this topic please visit:


Natural Kinds and Informational Atomism: A presentation on Jerry Fodor’s view

The following is a written form of a presentation I had to give, for an honours metaphysics class, on Jerry Fodor’s closing chapter in his book Concepts: Where Cognitive Science Went Wrong, where he presents his view on natural kinds concepts and non-natural kinds concepts. I won’t bother trying to present the dynamics of his view(s) as a preamble; hopefully the presentation will be clear enough to stand on its own. Perhaps only a few notes: when something like a doorknob or water is spoken about, the concepts to which the word corresponds per se are written in capitals (e.g. DOORKNOB), the properties per se are written as, for example, ‘doorknobhood,’ and the things per se are just called by their names (e.g. doorknob(s)). That should be enough to make what follows comprehensible, I hope.


We’re eventually going to have to swallow Informational Atomism whole. Accordingly, I’ve been doing what I can to sweeten the pill.[1]

In this closing chapter of the book, Fodor intends to give us a certain conscionable ontological story about conceptual atomism. He notes that we “aren’t actually required to believe any of what’s in this chapter or the last”[2] but intends, in them, merely to scout out the ontological geography of conceptual atomism conjoined to informational semantics; a thesis he aptly calls informational atomism. Conceptual atomism, given the Standard Argument we saw in the previous chapter, also plausibly entails radical conceptual nativism; from the Standard Argument “it follows that you can’t have learned your primitive concepts at all. But if you have a concept that you can’t have learned, then you must have it innately.”[3] We already saw that Fodor avoided radical conceptual nativism by adopting a position which is “explicitly non-cognitivist about concept possession.”[4] According to Fodor, “having a concept is… being in a certain nomic mind–world relation… in virtue of which the concept has the content that it does.”[5] This is just the thesis of informational semantics (i.e., the thesis that “meaning is information”[6]) according to which “content is constituted by some sort of nomic, mind–world relation”[7] from which it evidently follows that “there must be laws about everything that we have concepts of”[8] including doorknobs. This, then, is the principal subject of this chapter: to tell a story about nomic regularities governing our ‘locking-to’ properties in the world, including properties like doorknobhood.

But how could there be laws about doorknobs? Doorknobs, of all things![9]

Before launching into Fodor’s account of the nomic regularities governing doorknobs, it may be useful, if not necessary, to review what it is philosophers mean by calling certain things, or species of things, ‘natural kinds.’ After all, Fodor’s explanation will turn upon the distinction between concepts of things which are natural kinds, and concepts of things which are not. The go-to paradigm case in the literature is most often the table of elements, as Alexander Bird and Emma Tobin explain:

“Chemistry provides what are taken by many to be the paradigm examples of kinds, the chemical elements, while chemical compounds, such as H2O, are also natural kinds of stuff. (Instances of a natural kind may be man-made, such as artificially synthesized ascorbic acid (vitamin C); but whether chemical kinds all of whose instances are artificial are natural kinds is open to debate. The synthetic transuranium elements, e.g. Rutherfordium, seems good candidates for natural kinds, whereas artificial molecular kinds such as Buckminsterfullerene, C60, seem less obviously natural kinds.) The standard model in quantum physics reveals many kinds of fundamental particles (electron, tau neutrino, charm quark), plus broader categories such as kinds of kind (lepton, quark) and higher kinds (fermion, boson).”[10]

The idea here is that when we identify a natural kind, we are discovering something about the way the world really is carved up, rather than conventionally cutting it up into arbitrary (though no doubt useful) categories. We are in the business of discovery rather than invention, science rather than manufacturing.

Natural kinds, ex hypothesi, are the only things which ground the truth of nomic regularities (they act as the truth-makers for laws in science). Thus Fodor says “I suppose that natural kind predicates just are the ones that figure in laws.”[11] Doorknobs, which are pretty obviously not natural kinds, do not figure in laws in this way; by contrast, “a natural kind enters into lots of nomic connections to things other than our minds.”[12] How, then, could there be laws about such things as doorknobs? Fodor’s answer comes in two parts: first, there is only one law about doorknobs, and second, this is actually “really [a law] about us.”[13] The suggestion is that there isn’t anything “whose states are reliably connected to doorknobs qua doorknobs except our minds.”[14] This is perfectly acceptable, however, since, although doorknobs aren’t a natural kind, our minds clearly are. So all the nomic regularities (namely, just one) which hold with respect to doorknobs are really laws about our minds; they state that our minds reliably lock to the property of doorknobhood, to which our minds are (naturally?) calibrated. So doorknobs, according to Fodor, are mind-dependent; “DOORKNOB expresses a property that things have in virtue of their effects on us.”[15]

The Auntie-esque Complaint

What consequences does this view have for Metaphysical Realism? Fodor’s answer, in a word, is ‘none.’ He provides two reasons for this. First,

(i) (∃x)(Rx & MDx), and therefore ~(∀x)(MDx ⊃ ~Rx)
-There is at least one ‘x’ such that ‘x’ is Real and ‘x’ is Mind-Dependent; so it is not the case that whatever is mind-dependent is unreal.


(ii) ~[(∃x)(MDx) ⊃ (∀x)(MDx)]
-It is not the case that if at least one thing is mind-dependent, then everything is mind-dependent.
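Read via the English glosses, both points can even be machine-checked. Here is a minimal sketch in Lean 4 (the formalization, and the predicate names R for ‘real’ and MD for ‘mind-dependent,’ are mine, not Fodor’s):

```lean
-- Point (i): one real, mind-dependent thing refutes the claim that
-- whatever is mind-dependent is unreal.
example {α : Type} (R MD : α → Prop)
    (h : ∃ x, R x ∧ MD x) : ¬ ∀ x, MD x → ¬ R x :=
  fun hall => match h with
    | ⟨x, hR, hMD⟩ => hall x hMD hR

-- Point (ii): "something is mind-dependent" does not entail
-- "everything is mind-dependent"; Bool supplies a two-element
-- countermodel in which the predicate holds of exactly one object.
example : ¬ ((∃ b : Bool, b = true) → ∀ b : Bool, b = true) :=
  fun h => Bool.noConfusion (h ⟨true, rfl⟩ false)
```

The second example works because the conditional, applied to the witness ⟨true, rfl⟩, would force false = true, which the constructors of Bool rule out.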

First things first: Fodor insists that his commitment to doorknobs’ being mind-dependent does not commit him to Idealism. Idealism, as best I have ever been able to define it, is the idea that relations are ontologically prior to their relata.[16] However, this isn’t the case on Fodor’s story, since at least one of the relata (namely, the mind) in the nomic relation between doorknobhood and the mind is ontologically prior to the relation between them. Fodor argues that doorknobs are real because minds are real, and so “there are doorknobs iff the property that minds like ours reliably lock to in consequence of experience with typical doorknobs is instantiated.”[17] To say that doorknobs aren’t real because they are mind-dependent would be akin, in Fodor’s submission, to suggesting that fingers aren’t real because they are hand-dependent! Moreover, doorknobs are even ‘in the world,’ since “Doorknobs are constituted by their effects on our minds, and our minds are in the world.”[18]

Fodor seems to think that it is obvious that our minds exist and that there are properties like doorknobhood, instantiated in the world,[19] to which we reliably lock, and suggests that to doubt this conjunction could only be motivated by a fear of malins génies. Evil demon(s) or no, while the first conjunct seems obvious to me too, I’m not so sure about the second; however, that was the topic of the previous chapter, so I will use my better judgment here and decide to leave well enough alone.

In any event, doorknobs are real just in case there is “simply nothing wrong with, or ontologically second-rate about, being a property that things have in virtue of their reliable effects on our minds,”[20] but is there? George Lakoff provides us with the example of Tuesdays, arguing to the effect that, at least in the case of Tuesdays, there is no property to which we lock in acquiring TUESDAY which is to be found in the world “external to and independent of human minds.”[21] Fodor complains that he isn’t sure what ‘external to human minds’ could mean, replying that “I would have thought that minds don’t have outsides for much the same sorts of reasons that they don’t have insides.”[22] Tuesdays, he suggests, may[23] be mind-dependent and tendentiously conventional, but “there are many properties that are untendentiously mind-dependent though plausibly not conventional,”[24] like, for example, doorknobs!

Second things second: even though there are plenty of concepts like DOORKNOB for which the properties to which we lock are constituted by the calibration of our minds, there are plenty of concepts for which this isn’t so, like WATER. Water is, after all, a natural kind, and there are, after all, some natural kinds. In fact, “DOORKNOB isn’t [even] the general case,”[25] but WATER is. According to informational semantics, having concepts like WATER or H2O, is in either case “being locked to the property of being water; and being water is a property which is, of course, not mind-dependent. It is not a property things have in virtue of their relations to minds, ours or any others.”[26]

The introduction of natural kinds may help Fodor’s view go down easier for the metaphysical realist, but it may also introduce a problem to which Fodor will spend quite a bit of time offering a response. The problem is this: even if we lock to the property of being water and thus acquire WATER, and water is a natural kind, how could we possibly lock to water as a natural kind? And if we don’t, how does water’s being a natural kind help the story? Fodor here prefers to use the avenue of storytelling to get his ideas across, and his story is pretty familiar to most of us. It involves a snake, a garden and a gestalt shift.

Felix culpa; “from the Garden to the Laboratory”[27]

Back in the garden, in the state of epistemic innocence, Fodor imagines that we never had to draw an appearance/reality distinction, since all of our concepts were of mind-dependent properties. We simply had no concepts of natural kinds (as such). We could acquire the concept DOORKNOB without ever worrying about whether we were locking to the property of being a “Twin-doorknob.”[28] Then along came a snake, and the rest is history.

Back in the Garden, when we were Innocent, we took it for granted that there isn’t any difference between similarity for us and similarity sans phrase; between the way we carve the world up and the way that God does. [Then the snake came along and convinced us by saying:] If you want to carve Nature at the joints, if you want to know how the world seems to God, you will have to learn sometimes to distinguish between Xs and Ys even though they taste (and feel, and look, and sound, and quite generally strike you as) much the same.[29]

Thus was birthed the scientific enterprise, and with it, the notion of natural kinds. The idea here is just that the whole notion of natural kinds is bound inexorably up with scientific theory; it is bound up with an essence/appearance distinction which we make in science (at least, if one construes science as the scientific and metaphysical realist will want it construed). Science envisions ways in which we can get access to the (hidden) essence(s) of things (i.e., “the deep sources of their causal powers”[30]) like water. Things which, in other words, are natural kinds. “The moral [of the myth] is that whereas you lock to doorknobhood via a metaphysical necessity, if you want to lock to a natural kind property, you have actually to do the science.”[31]

It is, Fodor suggests, “intuitively plausible, phylogenetically, ontogenetically, and even just historically, to think of natural kind concepts as late sophistications.”[32] Natural kinds, according to Putnam (according to Fodor), “thrive best – maybe only – in an environment where conventions of deference to experts are in place.”[33] The point is that we don’t start out with natural kinds, but acquire ‘natural kind’ concepts as such through the toil and labour of scientific advancement. However, didn’t we say (or shouldn’t we say) that we had the concept WATER in the garden (which is to say, pre-scientifically)? Fodor wants, here, to make a clear distinction between merely having a concept of a thing which is a natural kind in fact, and having a concept of a natural kind as a natural kind. For instance, to have the concept ‘Giraffe’ is to have a concept of something which happens to be, as a matter of fact, a natural kind, but to have the concept of ‘Giraffe’ as a natural kind is quite another thing. It seems, prima facie, that the same problem threatens to rear its head in this matter as had to be dealt with in Chapter 5 under the heading The Pet Fish Problem. If the concept of something as a natural kind requires belief in a scientific theory, then mustn’t it fail to be primitive? Fodor’s answer appears somewhat two-faced at first:

Did Homer have natural kind concepts?

Sure, he had the concept WATER (and the like), and water is a natural kind.

But also:

Did Homer have natural kind concepts?

Of course not. He had no disposition to defer to experts about water (and the like); I expect the notion of an expert about water would have struck him as bizarre. And, of course Homer had no notion that water has a hidden essence, or a characteristic microstructure (or that anything else does); a fortiori, he had no notion that the hidden essence of water is causally responsible for its phenomenal properties.[34]

Now, before going on from here to tackle the issue of the legitimacy of this distinction, and then Fodor’s account of natural kind ‘as such’ concept acquisition, I want to interject with a drive-by criticism (time permitting). In response to Fodor’s fairy tale, I would like to offer a different epistemo-gony.[35] Suppose that in the garden we presumed that the way in which we carved up the world just was the way in which God did so; that is to say, suppose we assumed that everything about which we had a concept was a natural kind, including Tuesdays! Suppose that upon hearing what the snake had to say we didn’t acquire natural kind concepts as such, but acquired mind-dependent concepts as such. In less cryptic language, suppose that we naturally assume that the way in which we carve up the world is the way in which the world is really carved up, and it is only when we lose our epistemic innocence that we begin to suspect that not everything is a natural kind after all. The fall doesn’t introduce us to natural kinds; it introduces us to artificial ones. Is this story true? Perhaps it is. Fodor notes (in frustration?) that:

Much of what is currently being written about concepts—by philosophers, but also, increasingly, by psychologists—suggests that natural kind concepts are the paradigms on which we should model our accounts of concept acquisition and concept possession at large.[36]

Fodor also notes that psychologists may have reason to think the same is true of individual human development. He writes that “the current fashions in developmental cognitive psychology… stress how early, and how universally, natural kind concepts are available to children.”[37] Perhaps Putnam is right that natural kind concepts thrive best, or only, given conventions of deference to experts, but it stands to reason that he may be right because of the fall. Even in the history of metaphysics the movement seems to run in the opposite direction: not towards natural kinds, but away from them. Galileo had to argue contra his medieval predecessors that feathers have no hidden essence in virtue of which they have the power of ticklishness; instead, the property of being ticklish is mind-dependent. Later Berkeley did the same for size and shape and, in short, everything else.

Through experimentation we know that children are “clear that you can’t make a horse into a zebra just by painting on stripes,”[38] and Fodor candidly concedes that “it’s usual to summarize such findings as showing that young children are ‘essentialists’, and if you like to talk that way, so be it. My point, however, is that being an essentialist in this sense clearly does not imply having natural kind concepts.”[39] Why not? The reason, on his submission, is that “what’s further required, at a minimum, is the idea that what’s ‘inside’ (or otherwise hidden) somehow is causally responsible for how things that belong to the kind appear; for their ‘superficial signs’.”[40] Children, however, do not (seem to) have this additional commitment; “it is, of course, an empirical issue, but I don’t know of any evidence that children think that sort of thing.”[41] Surely, though, it seems plausible to think that they do, and if they do (or, more modestly, insofar as it is even plausible to think they do) then Fodor’s story needs serious amendment. Fodor says “unlike Quine, I’m no Empiricist,”[42] but why on earth, I wonder, does he think children are? In any case, here ends my drive-by criticism.

So now all I owe you is a story about what “emerging” comes to… I’ll start with natural kind concepts and informational semantics and just let the “emerging” emerge…[43] Then I get to go sailing.[44]

Can an atomistic informational semantics really “honour that distinction”[45] we just saw between “merely having a natural kind concept and having a natural kind concept as such[?]”[46] Having the concept WATER, for instance, as a natural kind seems, prima facie, to require “also having, for example, concepts like MICROSTRUCTURE and HIDDEN ESSENCE and NATURAL KIND,” but if this is so then such concepts aren’t really atomistic/primitive after all. The pre-theoretic story about how we lock to WATER doesn’t do us any good here (it’s the very same story, after all, that Fodor used to explain how we acquire DOORKNOB); what we need, now, is a post-theoretic story. Instead of locking to the superficial (i.e., ‘empirical’) signs of water, as we do when we acquire merely the pre-theoretic concept WATER, Fodor suggests that we lock to water “via a theory that specifies its essence.”[47] One pleasant consequence of this is that we would be locked to WATER as a natural kind not merely in all nomologically possible worlds (i.e., worlds in which, regularities being as they are, we would be appeared to WATER-ly[48] and thereby, via a scientific theory, be locked to the post-theoretic WATER), but also across all metaphysically possible worlds. He explains that “we’re locked to being water via a chemical-cum-metaphysical theory, that specifies its essence, and that is quite a different mechanism of semantic access from the ones that Homer relied on,”[49] or children, or animals. In other words, we acquire the natural kind concept WATER if and only if “we’re locked to water via a theory that specifies its essence.”[50]

This story may at first blush sound as though it erects a semantic Chinese wall between the pre-theoretic WATER and the post-theoretic WATER; the danger is that such a story seems to make these two concepts so distinct as to imply their incommensurability. However, Fodor deals with this difficulty with remarkable ease by arguing simply that “if you are locked to water our way, you have the concept WATER as a natural kind concept; if you are locked to water Homer’s way, you have the concept WATER, but not as a natural kind concept.”[51] Either way, then, you are locked to the same thing; namely, the property ‘water.’ Interestingly, this means the blind man can lock to the very same property as those of us who can see, when we both lock to doorknobhood. It also means that if I were in the Matrix, and had not yet acquired the concept DOORKNOB, I might acquire it so long as I lock onto the property ‘doorknobhood,’ which, we recall, is a mind-dependent property. Doorknobs really would exist in the real world, even if there existed no doorknobs outside of the Matrix I inhabit, so long as (i) my mind exists, and (ii) my mind has locked to the mind-dependent property ‘doorknobhood.’

To recapitulate, “all that’s required [for acquiring the concept WATER as a natural kind as such] is being locked to water in a way that doesn’t depend on its superficial signs,”[52] and which instead depends upon locking to water qua its essence. To do this, however, our locking to properties which are natural kinds must be mediated by some (correct) scientific theory which has (successfully) ‘discovered’ the hidden essences of those properties; “science discovers essences, and doing science thereby links us to natural kinds as such.”[53] Thus, on Fodor’s story, “Homer did have the concept WATER (he had a concept that is nomologically linked to being water) and, of course, being water isn’t a mind-dependent property. So Homer had a concept of a natural kind. But WATER wasn’t, for Homer, a concept of a natural kind as such; and for us it is.”[54]

If you are locked to water either way, you have the concept WATER. (I suppose that God is locked to being water in still a third way; one that holds in every metaphysically possible world but isn’t theory-mediated. That’s OK with informational semantics; God can have the concept WATER too. He can’t, however, have the pretheoretic concept WATER; the one that’s locked to water only by its superficial signs. Nobody’s Perfect.)[55]

Fodor continues to insist that, here as well, “there are no concepts the possession of which is metaphysically necessary for having WATER as a natural kind concept (except WATER); all that’s required is being locked to water in a way that doesn’t depend on its superficial signs.”[56] Instead, “what you need to do to acquire a natural kind concept as a natural kind concept ab initio is: (i) construct a true theory of the hidden essence of the kind; and (ii) convince yourself of the truth of the theory.”[57]

The Luddite objection

If there is a genuine analytic-synthetic distinction, then conceptual atomism pretty obviously tears apart at the seams. I won’t here offer a full-blown and proper response to Quine’s Two Dogmas, mostly because I am not presenting today on Quine’s Two Dogmas, but I will offer two versions of this ‘Luddite objection’ with Fodor as my target. First, it is self-evident to us that there are analytic truths: truths which are true come what may, and which delineate the parameters of logically possible worlds. For example, take the statement ‘at least one statement is true,’ which is clearly an analytic truth (if it were false, then the statement “‘at least one statement is true’ is false” would be true, in which case at least one statement would be true after all). Notice that this is even weaker than Putnam’s “minimal principle of contradiction,”[58] according to which “not every statement is both true and false.”[59] Although Putnam is quite right that “for the purpose of making this point, one needs only one example,”[60] the idea here is the weaker the better. If Putnam’s example works, then it stands to reason that so does mine. The conclusion should be that we know Quine is wrong, come what may, and that we know Fodor is wrong come what may, at least insofar as his view rests on the abolition of the analytic-synthetic distinction.
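The self-refuting character of denying this analytic truth can even be made formally explicit. What follows is my own illustrative sketch in the Lean proof assistant (nothing here is drawn from Fodor or Putnam), reading ‘at least one statement is true’ as the claim that some proposition holds:

```lean
-- My own illustrative formalization; not drawn from Fodor or Putnam.
-- "At least one statement is true," read as: some proposition holds.
theorem at_least_one_true : ∃ p : Prop, p :=
  ⟨True, trivial⟩  -- the trivially true proposition serves as witness

-- The self-refutation step: any proof that the claim is false is itself
-- a proof of a true proposition, and so witnesses the claim after all.
theorem denial_self_refutes (h : ¬ ∃ p : Prop, p) : ∃ p : Prop, p :=
  ⟨(¬ ∃ p : Prop, p), h⟩
```

The second theorem mirrors the parenthetical argument above: the denial itself supplies the very true statement it says there isn’t.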

I could follow this criticism up with any number of epistemological criticisms, such as that the analytic-synthetic distinction is self-evident, or that it is a good candidate for being properly basic, and so on. However, I want to offer a Moorean response: just as G.E. Moore responds to the Cartesian skeptic that he is more sure that he has a hand than he is that the arguments for skepticism are sound, so we are all, in fact, in this same position with respect to Quine’s arguments. We may not know how to answer them, but we sure as heck know they are wrong, because we are more sure of the analytic-synthetic distinction than we are of the soundness of any argument for its abolition. That, I take it, is just a psychological-epistemic fact about us.

To conclude, Fodor has so far managed to give us a story, on the assumptions of conceptual atomism and informational semantics, about how there could be nomic regularities governing the acquisition both of natural kind concepts (as such) and of pre-theoretic (mind-dependent) concepts, like DOORKNOB. Fodor rejected meaning holism early on precisely because meaning holism makes nomic regularities about concept acquisition impossible, and in this chapter he has attempted to cash out a theory of the laws of concept acquisition.


[1] Fodor, Jerry A. Concepts: Where cognitive science went wrong. (Clarendon Press/Oxford University Press, 1998): 162.

[2] Ibid. 161.

[3] Ibid. 124.

[4] Ibid.

[5] Ibid.

[6] Ibid. 12.

[7] Ibid. 146.

[8] Ibid.

[9] Ibid. 124.

[10] Bird, Alexander and Tobin, Emma, “Natural Kinds”, The Stanford Encyclopedia of Philosophy (Winter 2012 Edition), Edward N. Zalta (ed.).

[11] Fodor, Jerry A. Concepts: Where cognitive science went wrong. (Clarendon Press/Oxford University Press, 1998): 150.

[12] Ibid. 161. My underline.

[13] Ibid. 146.

[14] Ibid. 147.

[15] Ibid. 148.

[16] Alternatively, perhaps that phenomena are entirely independent of noumena.

[17] Ibid.

[18] Ibid. 149.

[19] If it is instantiated in the world by being instantiated in the calibration of our brains then what else could the properties be other than peculiar bundles of superficial-signs? Is that problematic? I’m not sure.

[20] Ibid. 148.

[21] Ibid.

[22] Ibid. 149.

[23] He never actually says this, so I’m reading between the lines.

[24] Ibid.

[25] Ibid. 147.

[26] Ibid. 150.

[27] Ibid. 161.

[28] Ibid. 151.

[29] Ibid.

[30] Ibid. 153.

[31] Ibid.

[32] Ibid.

[33] Ibid. 154.

[34] Ibid. 155.

[35] The Greek ‘γέγονα’ means to be begotten or born, ‘to begin’, and I mean to use it here in the same sense it carries in the word cosmogony.

[36] Ibid. 154.

[37] Ibid.

[38] Ibid.

[39] Ibid.

[40] Ibid. 154-55.

[41] Ibid. 155.

[42] Ibid. 145.

[43] Ibid. 155.

[44] Ibid.

[45] Ibid. 156.

[46] Ibid.

[47] Ibid. 157.

[48] Perhaps here it should read Science-ly & WATER-ly.

[49] Ibid.

[50] Ibid.

[51] Ibid.

[52] Ibid. 158.

[53] Ibid.

[54] Ibid. 157.

[55] Ibid. 157.

[56] Ibid. 158.

[57] Ibid. 160.

[58] Putnam, Hilary. “There is at least one a priori truth.” Erkenntnis 13, no. 1 (1978): 156.

[59] Ibid.

[60] Ibid.