
“Too Many” Yabloesque Paradoxes

The Yablo Paradox (due to Stephen Yablo and Albert Visser) consists of an infinite sequence of sentences of the following form:

S1: *For all* m > 1, Sm is false.

S2: *For all* m > 2, Sm is false.

S3: *For all* m > 3, Sm is false.

⋮

Sn: *For all* m > n, Sm is false.

Sn+1: *For all* m > n+1, Sm is false.

Hence, the nth sentence in the list ‘says’ that all of the sentences below it are false. The sequence is genuinely paradoxical – there is no way to assign truth and falsity to each of the sentences in this list so that a sentence is true if and only if what it says is the case and a sentence is false if not. For some background on the Yablo paradox and variations on it, see my previous discussion here.
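The contradiction genuinely depends on the list being infinite: every *finite* truncation of the list does admit a consistent assignment, since the final sentence's quantifier is vacuously satisfied. A brute-force sketch in Python (the function name is mine, not part of the original construction) illustrates this:

```python
from itertools import product

def consistent_assignments(n):
    """All truth assignments to S1..Sn (a finite truncation of the
    Yablo list) where Si is true iff every Sj with j > i is false."""
    return [vals for vals in product([True, False], repeat=n)
            if all(vals[i] == all(not vals[j] for j in range(i + 1, n))
                   for i in range(n))]

# Each finite truncation has exactly one consistent assignment: every
# sentence false except the last, whose quantifier is vacuously satisfied.
print(consistent_assignments(4))  # [(False, False, False, True)]
```

Only the full infinite list rules out every assignment, which is what makes the construction paradoxical.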

There are numerous variations on the Yablo Paradox. Many of these proceed by varying the quantifier used at the beginning of each of the sentences. For example, we obtain the Dual of the Yablo Paradox by considering an infinite sequence of sentences of the form:

Sn: *There exists an* m > n such that Sm is false.

In other words, each sentence in the Dual of the Yablo Paradox ‘says’ that at least one of the sentences below it is false. We obtain the Schlenker Unwinding (named after Philippe Schlenker) by considering an infinite sequence of sentences of the form:

Sn: *For infinitely many* m > n, Sm is false.

In other words, each sentence in the Schlenker Unwinding ‘says’ that infinitely many (but not necessarily all) of the sentences below it are false. And we obtain the Yablo Unwinding by considering an infinite sequence of sentences of the form:

Sn: *For co-infinitely many* m > n, Sm is false.

In other words, each sentence in the Yablo Unwinding ‘says’ that all but finitely many of the sentences below it are false.

Here I want to explore another construction that results from substituting a common quantifier into the original Yablo construction. Consider what happens when we replace “for all” with “for too many”. We obtain the following sequence:

S1: *For too many of the* m > 1, Sm is false.

S2: *For too many of the* m > 2, Sm is false.

S3: *For too many of the* m > 3, Sm is false.

⋮

Sn: *For too many of the* m > n, Sm is false.

Sn+1: *For too many of the* m > n+1, Sm is false.

Hence, each sentence in the list ‘says’ that *too many* of the sentences below it are false. The first step in analyzing this construction is to consider what, exactly, we mean by saying that too many sentences are false. In the present context – an analysis of semantic paradoxes – the following seems like a natural reading:

“Too many of the sentences are false.”

is equivalent to:

“The number of sentences that are false is (somehow) too large to be compatible with an acceptable assignment of truth and falsity to all sentences in the list.”

In short, if *too many* sentences are false, then that means the list would be paradoxical.

We can now analyze this list to see if there is an acceptable assignment of truth and falsity to each sentence in the list. The first step in doing so is to note that no sentence in the list can be true:

If any sentence in the list is true, then given what it says, too many of the sentences below it would be false – that is, the collection of sentences below it that will be assigned falsity is too large to allow for an acceptable assignment of truth and falsity. So if there is an acceptable assignment of truth and falsity to the sentences in the list, then no sentence is true on that assignment.

But if no sentence on the list is true, then it follows that every sentence on the list must be false. Given what each sentence says, it also must be the case that all sentences in the list being false is, nevertheless, not too many.

So far, this seems okay. We have shown that all sentences on the list are false, and it turns out that even if all of the sentences on the list are false, this isn’t too many. But now consider the bit of reasoning in the offset passage above. The reasoning amounts to a proof of the following claim:

If one or more of the sentences in the list is true, then there are too many false sentences below it.

A new, sort of meta-level puzzle now arises. Combining this claim with our overall conclusion (i.e. all of the sentences are false, but this turns out not to be too many) we arrive at the following conflicting claims:

- If *all* of the sentences in the list are false, then there *are not* too many false sentences in the list.
- If *only some* of the sentences in the list are false (and hence at least one is true), then there *are* too many false sentences in the list.

This seems to violate the following, I think rather obvious, principle:

Let X be a proper subset of Y (i.e. Y contains everything contained in X, and also contains at least one thing not contained in X). Then if there are too many things in X for some condition C to hold, there are too many things in Y for condition C to hold.

Thus, despite the appearance of an apparently acceptable assignment of truth and falsity to each sentence in the list, the “too many” variant of the Yablo paradox seems paradoxical (or, at the very least, very puzzling) after all!

*Featured image: “Infinity” by MariCarmennd9. CC0 via Pixabay.*


The *Liar paradox* arises via considering the Liar sentence:

L: L is not true.

and then reasoning in accordance with the:

*T-schema*:

“Φ is true if and only if what Φ says is the case.”

Along similar lines, we obtain the *Montague paradox* (or the “paradox of the knower”) by considering the following sentence:

M: M is not knowable.

and then reasoning in accordance with the following two claims:

*Factivity*:

“If Φ is knowable then what Φ says is the case.”

*Necessitation*:

“If Φ is a theorem (i.e. is provable), then Φ is knowable.”

Put in very informal terms, these results show that our intuitive accounts of truth and of knowledge are inconsistent. Much work in logic has been carried out in attempting to formulate weaker accounts of truth and of knowledge that (i) are strong enough to allow these notions to do substantial work, and (ii) are not susceptible to these paradoxes (and related paradoxes, such as *Curry* and *Yablo* versions of both of the above). A bit less well known is that certain strong but not altogether implausible accounts of idealized belief also lead to paradox.

The puzzles involve an idealized notion of belief (perhaps better paraphrased as “rational commitment” or “justifiable belief”), where one believes something in this sense if and only if (i) one explicitly believes it, or (ii) one is somehow committed to the claim even if one doesn’t actively believe it. Hence, on this understanding belief is closed under logical consequence – one believes all of the logical consequences of one’s beliefs. In particular, the following holds:

*B-Closure*:

“If you believe that, if Φ then Ψ, and you believe Φ, then you believe Ψ.”

Now, for such an idealized account of belief, the rule of *B-Necessitation*:

*B-Necessitation*:

“If Φ is a theorem (i.e. is provable), then Φ is believed.”

is extremely plausible – after all, presumably anything that can be proved is something that follows from things we believe (since it follows from nothing more than our axioms for belief). In addition, we will assume that our beliefs are consistent:

*B-Consistency*:

“If I believe Φ, then I do not believe that Φ is not the case.”

So far, so good. But neither the belief analogue of the *T-schema*:

*B-schema*:

“Φ is believed if and only if what Φ says is the case.”

nor the belief analogue of *Factivity*:

*B-Factivity*:

“If you believe Φ then what Φ says is the case.”

is at all plausible. After all, just because we believe something (or even that the claim in question follows from what we believe, in some sense) doesn’t mean the belief has to be true!

There are other, weaker, principles about belief, however, that are not intuitively implausible, but when combined with *B-Closure*, *B-Necessitation*, and *B-Consistency* lead to paradox. We will look at two principles – each of which captures a sense in which we cannot be wrong about what we think we don’t believe.

The first such principle we will call the *First Transparency Principle for Disbelief*:

*TPDB*1:

“If you believe that you don’t believe Φ then you don’t believe Φ.”

In other words, although many of our beliefs can be wrong, according to *TPDB*1 our beliefs about what we *do not* believe cannot be wrong. The second principle, which is a mirror image of the first, we will call the *Second Transparency Principle for Disbelief*:

*TPDB*2:

“If you don’t believe Φ then you believe that you don’t believe Φ.”

In other words, according to *TPDB*2 we are aware of (i.e. have true beliefs about) all of the facts regarding what we don’t believe.

Either of these principles, combined with *B-Closure*, *B-Necessitation*, and *B-Consistency*, leads to paradox. I will present the argument for *TPDB*1. The argument for *TPDB*2 is similar, and left to the reader (although I will give an important hint below).

Consider the sentence:

S: It is not the case that I believe S.

Now, by inspection we can understand this sentence, and thus conclude that:

(1) What S says is the case if and only if I do not believe S.

Further, (1) is something we can, via inspecting the original sentence, informally prove. (Or, if we were being more formal, and doing all of this in arithmetic enriched with a predicate “B(x)” for idealized belief, a formal version of the above would be a theorem due to Gödel’s *diagonalization lemma*.) So we can apply *B-Necessitation* to (1), obtaining:

(2) I believe that: what S says is the case if and only if I do not believe S.

Applying a version of *B-Closure*, this entails:

(3) I believe S if and only if I believe that I do not believe S.

Now, assume (for *reductio ad absurdum*) that:

(4) I believe S.

Then combining (3) and (4) and some basic logic, we obtain:

(5) I believe that I do not believe S.

Applying *TPDB*1 to (5), we get:

(6) I do not believe S.

But this contradicts (4). So lines (4) through (6) amount to a refutation of line (4), and hence a proof that:

(7) I do not believe S.

Now, (7) is clearly a theorem (we just proved it), so we can apply *B-Necessitation*, arriving at:

(8) I believe that I do not believe S.

Combining (8) and (3) leads us to:

(9) I believe S.

But this obviously contradicts (7), and we have our final contradiction.
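The shape of the argument can be checked by brute force. The sketch below is a deliberately crude propositional rendering: it collapses the meta-level use of *B-Necessitation* in steps (7)–(8) into an object-level constraint, and the variable names are my own:

```python
from itertools import product

# b: "I believe S"; d: "I believe that I do not believe S".
# Constraints (a crude propositional rendering of steps (3)-(9)):
#   (3)     b <-> d      (the fixed point S plus B-Closure)
#   TPDB1:  d -> not b
#   B-Nec:  not b -> d   (necessitation applied to the theorem "not b")
models = [(b, d) for b, d in product([True, False], repeat=2)
          if b == d and (not d or not b) and (b or d)]
print(models)  # [] -- no valuation satisfies all three constraints
```

The empty model list mirrors the contradiction derived in the text: no way of settling what I believe makes all three principles come out true together.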

Note that this argument does not actually use *B-Consistency* (hint for the second argument involving *TPDB*2: you will need *B-Consistency*!).

These paradoxes seem to show that, as a matter of logic, we cannot have perfectly reliable beliefs about what we don’t believe – in other words, in this idealized sense of belief, there are always things that we believe that we don’t believe, but in actuality we do believe (the failure of *TPDB*1), and things that we don’t believe, but don’t believe that we don’t believe (the failure of *TPDB*2). At least, the puzzles show this if we take them to force us to reject both *TPDB*1 and *TPDB*2 in the same way that many feel that the *Liar paradox* forces us to abandon the full *T-Schema*.

Once we’ve considered transparency principles for disbelief, it’s natural to consider corresponding principles for belief. There are two. The first is the *First Transparency Principle for Belief*:

*TPB*1:

“If you believe that you believe Φ then you believe Φ.”

In other words, according to *TPB*1 our beliefs about what we believe cannot be wrong. The second principle, again a mirror image of the first, is the *Second Transparency Principle for Belief*:

*TPB*2:

“If you believe Φ then you believe that you believe Φ.”

In other words, according to *TPB*2 we are aware of all of the facts regarding what we believe.

Are either of these two principles, combined with *B-Closure*, *B-Necessitation*, and *B-Consistency*, paradoxical? If not, are there additional, plausible principles that would lead to paradoxes if added to these claims? I’ll leave it to the reader to explore these questions further.

A historical note: Like so many other cool puzzles and paradoxes, versions of some of these puzzles first appeared in the work of medieval logician Jean Buridan.

*Featured image credit: Water flowing by IK3. Public Domain via Pixabay.*


The illegitimate open-mindedness of arithmetic

We are often told that we should be *open-minded*. In other words, we should be open to the idea that even our most cherished, most certain, most secure, most well-justified beliefs might be wrong. But this is, in one sense, puzzling. After all, aren’t those beliefs that we hold most dearly – those that we feel are best supported – exactly the ones we should *not* feel are open to doubt? If we found ourselves able to doubt those beliefs – that is, if we are able to be open-minded about them – then they aren’t all that cherished, certain, secure, or well-justified after all!

This has led some philosophers to treat open-mindedness, not as an attitude that applies to particular beliefs, but rather as a *second-order* attitude that applies to our body of beliefs as a whole. I can’t do full justice to this sort of approach here, but the following should give one an idea of what is going on.

To make things concrete, let’s let Φ(x) be a predicate that applies to numbers, and let’s say that I have checked each number from 1 to *n* individually, and verified that it has the property expressed by Φ(x) (perhaps by a lengthy pen-and-paper computation).

In such a situation, I can (and probably should) strongly believe each of:

Φ(1), Φ(2), Φ(3),… Φ(*n*)

that is:

Φ holds of 1, Φ holds of 2, Φ holds of 3,… Φ holds of *n*.

After all, I have checked each one.

Now, on a second-order approach, open-mindedness with regard to my judgements about Φ(x)-ness doesn’t involve my having doubts about some particular number *m* between 1 and *n*. Rather, it amounts to my being open to the idea that I might have made a mistake *somewhere*, even if I don’t know *where* (and even if, further, for each *particular* number between 1 and *n*, I am certain I didn’t make a mistake *there*). In other words, open-mindedness on this account amounts to rejecting (or, at the very least, not strongly believing):

For every *m* between 1 and *n*, Φ(*m*)

Thus, if we are open-minded about our judgments regarding Φ(x), then I can be extremely confident in each of my individual judgements regarding a particular number satisfying Φ(x), but I should be far less confident in the single judgement codifying the thought that I got all of them right.

Now, in real life there are very good reasons for being open-minded in this way – after all, we are fallible, and no matter how careful we are, mistakes slip in (especially in real-life examples more complicated than our toy example above). But it turns out that formal Peano arithmetic is open-minded in a similar way, even though (unlike us mere humans) arithmetic has no good reason to be open-minded.

So let’s just assume that we are working with Peano arithmetic, or some similar system of axioms for arithmetic. The crucial facts we need for what follows are that our axioms (i) are consistent (do not allow us to prove any contradictions), (ii) are sufficiently strong (in technical jargon: they allow us to represent *recursive functions and relations*), and (iii) can be finitely described (in technical jargon: they are themselves *recursive*). Then it follows that our system of arithmetic is ω-incomplete: there is some predicate Φ(x) in the language of arithmetic such that each of:

Φ(1), Φ(2), Φ(3),… Φ(*n*), Φ(*n*+1),…

is provable from our axioms, yet:

For any number *n*, Φ(*n*)

is not provable.

In other words, for any consistent, sufficiently strong recursive set of axioms for arithmetic, there is a predicate Φ(x) such that, for each particular number *n*, we can prove the sentence that says that *n* satisfies Φ(x), but we cannot prove the single sentence that says that all numbers satisfy Φ(x). This is one of the important corollaries of Gödel’s incompleteness theorems (as well as other important results in the metatheory of arithmetic).

Note that, other than the fact that we are now talking about the infinite list of all numbers, rather than merely a finite initial segment of the natural numbers, this has exactly the same structure as our toy example of open-mindedness above (except with “strong belief” replaced with “provability”). In the original example we had a case where we *strongly believed* each of a (finite) list of sentences, but (if we are being open-minded) we do not *strongly believe* the single sentence expressing the claim that we are right about all of these particular instances. In the second example we have a case where our theory *proves* each member of an (infinite) list of sentences but does not *prove* the single sentence codifying the claim that all of these particular instances are true.

In other words, it seems very natural to understand the phenomenon of ω-incompleteness as an instance of open-mindedness in arithmetic: no matter which axioms for arithmetic we pick (so long as they are consistent, sufficiently strong, and recursive) there is a predicate such that arithmetic ‘strongly believes’ (that is: proves) each instance of the form Φ(m), but does not ‘strongly believe’ (i.e. prove) the single claim expressing all of these at once (i.e. “For all *n*, Φ(*n*)”).

This is deeply puzzling, however. Human beings, as we already noted, are extremely fallible. Thus, it makes sense that, *for us*, open-mindedness is a virtue. But the standard axioms for arithmetic–Peano arithmetic–are true, and hence only allow us to prove true claims. In other words, arithmetic (unlike any human being) is *infallible*, so it has no need to be open-minded. But Peano arithmetic is nevertheless (something very much like) open-minded about what it can prove.

So, while human beings, who are fallible, often fail to be open-minded about their beliefs, arithmetic, which isn’t fallible, and thus has no reason to be open-minded, is, as a matter of mathematical necessity, open-minded about what it can prove.

*Featured image: Formula mathematics blackboard. Public Domain via Pixabay.*


Arguments about (paradoxical) arguments

As regular readers know, I understand paradoxes to be a particular type of argument. In particular, a paradox is an argument:

- That begins with *apparently* true premises
- That proceeds via *apparently* truth-preserving reasoning
- That arrives at an *apparently* false (or otherwise unacceptable) conclusion.

Solving a paradox, then, proceeds either by arguing that one of these three appearances is illusory (i.e. a premise is not true, the reasoning is not truth-preserving, or the conclusion is not false), or by arguing that some concept involved in the argument is faulty.

There is another way that logicians sometimes define paradoxes. On this alternative understanding, a paradox is an argument:

- That begins with true premises
- That proceeds via truth-preserving reasoning
- That arrives at a false conclusion.

This kind of definition, which I shall call the *alternative definition*, will likely be especially attractive to dialetheists – those philosophers who believe that at least some sentences, including those central to many familiar paradoxes, are both true and false (and hence, as a corollary, they believe that some contradictions are true). The reason is simple: on this definition we can prove very easily that the conclusion to any paradoxical argument is both true and false.

Given the first two clauses in the second definition of paradoxes given above, it follows that the conclusion of every paradoxical argument is true: After all, any paradoxical argument begins with true premises, and proceeds via truth-preserving reasoning. Hence, the conclusion must be true. But the definition also entails, via the third clause, that the conclusion of any paradoxical argument is false. Hence, either no paradoxes exist, or the conclusion to any paradoxical argument is both true and false.

For present purposes, we can assume that paradoxes do, in fact, exist (after all, I have been writing this column on paradoxes since late 2014, and it would seem like a silly waste of time if I was writing about something that doesn’t exist). Hence, on the alternative understanding of what it is to be a paradox, dialetheism must be correct.

Now, the non-dialetheists in the room (or staring at the screen) might just take this to be an argument that the first definition – the one involving three occurrences of the word “apparently” – is a better definition of the phenomenon in question. And I wholeheartedly agree with this assessment. But that doesn’t mean that it isn’t worth exploring the alternative definition a bit more to see what additional puzzles we can concoct.

It’s worth noting that what follows is carried out informally – as a result, and as is to be expected, many of the inferential moves made will be rejected on various non-classical solutions to the semantic paradoxes (especially dialetheic solutions!).

One question we might ask is whether we can construct paradoxical arguments in terms of this alternative understanding of the term “paradox”. Similar questions have been raised by logicians since at least the Middle Ages. One well-known example, called the pseudo-Scotus paradox, proceeds via reasoning about whether the very argument in question is truth-preserving:

Premise: This argument is truth-preserving.

Conclusion: Santa Claus exists.

Assume, for *reductio*, that this argument is not truth-preserving. Then there must be some way for the premise to be true and the conclusion to be false. But if the premise is true, then given what it says (i.e. that the argument is truth-preserving), the conclusion must also be true as well. So there is no way for the premise to be true and the conclusion to be false. Contradiction. Hence, the argument is, in fact, truth-preserving. But that is just what the premise says. So the argument has a true premise and is truth-preserving. Hence, the conclusion must be true as well.

Of course, this is an absurd way to prove that Santa Claus exists, and the argument will likely feel familiar to many readers, since it is a version of the Curry paradox carried out at the level of arguments (this general pattern is called the V-Curry in the technical literature).
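If we render “truth-preserving” materially (an oversimplification, but enough to show the shape of the problem), the pseudo-Scotus premise becomes the propositional fixed point v ↔ (v → c). Enumerating classical valuations, in the hypothetical sketch below, shows why the argument “proves” its conclusion: the only valuation satisfying the fixed point makes the conclusion true, whatever that conclusion happens to be.

```python
from itertools import product

# v: "this argument is truth-preserving"; c: its conclusion (e.g.
# "Santa Claus exists"). Reading truth-preservation materially gives
# the propositional Curry fixed point:  v <-> (v -> c).
models = [(v, c) for v, c in product([True, False], repeat=2)
          if v == ((not v) or c)]
print(models)  # [(True, True)] -- the only valuation makes c true
```

Since c was arbitrary, the same enumeration “establishes” any conclusion you like, which is exactly the absurdity the text describes.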

Of course, once we have seen the general idea, we can construct all sorts of variants. Of particular interest are variants that involve our alternative notion of paradoxicality. For example, consider:

Premise: This argument is truth-preserving but not paradoxical.

Conclusion: This argument is paradoxical.

We can easily show that this argument must be paradoxical: Assume, again for *reductio*, that the argument is not truth-preserving. Then there must be some way for the premise to be true and the conclusion to be false. Thus, it is possible that the argument is truth-preserving (since this is required by the truth of the premise) and not paradoxical (since this is required by both the truth of the premise and by the falsity of the conclusion). But in such a scenario, the premise would also be true (since it just says it is truth-preserving and not paradoxical). Hence, since in this scenario we have a true premise in a truth-preserving argument, the conclusion must also be true. Contradiction. Thus, the argument is truth-preserving. Now, assume (again, for *reductio*) that the argument is not paradoxical. But then the premise would be true. Hence, since we have already shown the argument to be truth-preserving, the conclusion must be true as well. Contradiction. So the argument is paradoxical.

This argument is already troubling enough: We have shown that an argument that (i) begins with a premise claiming that the very argument in question is in the best logical standing (truth-preserving but not paradoxical) and (ii) ends with a conclusion claiming that the argument in question is in very bad standing (paradoxical), is not invalid or unsound as we might expect, but is instead paradoxical.

But things seem even more puzzling. According to the alternative definition of paradox, a paradox is supposed to have *true premises*, be truth-preserving, and have a *false conclusion*. But the reasoning above shows that the argument in question has a *false premise* (since the argument is, in fact, paradoxical) and a *true conclusion*! Clearly, something has gone dreadfully wrong!

*Featured image: Kandinsky, Jaune Rouge Bleu. Public domain via Wikimedia Commons.*


A directed graph is a pair <*N*, *E*> where *N* is any collection or set of objects (the nodes of the graph) and *E* is a relation on *N* (the edges). Intuitively speaking, we can think of a directed graph in terms of a dot-and-arrow diagram, where the nodes are represented as dots, and the edges are represented as arrows. For example, in the following figure we have a graph that consists of three nodes–*A*, *B*, and *C*, and four edges: one from *A* to *A*, one from *A* to *B*, one from *B* to *C*, and one from *C* to *B*.

Note that with directed graphs we distinguish between those cases where a node has an arrow from itself to itself and those cases where it does not, and we also take into account the direction of the edge–that is, the edge from *B* to *C* is distinct from the edge from *C* to *B* (we do, however, represent cases where we have arrows going in both directions with a single line with two “arrowheads”).

In the diagram above, the nodes might represent Alice, Betty, and Carla, and the relation *E* might be “loves.” Thus, the diagram represents Alice loving both herself and Betty (and no one else), Betty loving Carla (and no one else), and Carla loving Betty (and no one else).

Assume we have a collection of objects *N* (our nodes) and a relation *E* such that for any two objects in *N*, the relation *E* might or might not hold of them, and in particular, *E* might or might not hold between an object in *N* and itself. Now, consider the graph where the collection of nodes is *N* and the collection of edges (which we will also call *E*) contains an edge between two nodes *n*_{1} and *n*_{2} just in case the relation *E* holds between *n*_{1} and *n*_{2} (in that order). Now, given any such structure, we can arrive at our puzzle by considering the following question:

Given such a situation, modelled by a directed graph <*N*, *E*>, can we construct a new directed graph <*N**, *E**> where *N** = *N* ∪ {*r*} (where *r* is not already in *N*) and, for any *n*_{1}, *n*_{2} in *N**, there is an edge in *E** between *n*_{1} and *n*_{2} if and only if either:

*n*_{1}, *n*_{2} are in *N* and there is an edge in *E* between *n*_{1} and *n*_{2}, or

*n*_{1} = *r* and there is no edge between *n*_{2} and itself.

In other words, given any directed graph, can we add a single additional node to the graph, and some additional edges to the graph, such that there is an edge between the new node *r* and any node *n* in *N** if and only if there is no edge from *n* to itself?

The answer, of course, is no. If we were successful, then we would have a directed graph <*N**, *E**> where:

For any *n* in *N**, there is an edge in *E** from *r* to *n* if and only if there is no edge in *E** from *n* to *n*.

Substituting *r* for *n* gives us a contradiction, however:

There is an edge in *E** from *r* to *r* if and only if there is no edge in *E** from *r* to *r*.
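The impossibility can be checked mechanically on any finite graph. In the sketch below (the function and node names are mine), we try both ways of settling the one free choice – whether the new node *r* gets an edge to itself – and verify that the defining rule fails either way:

```python
def diagonal_extension_works(nodes, edges, r_self_edge):
    """Try to extend (nodes, edges) with a fresh node r such that
    r -> n holds iff n has no self-edge; r_self_edge settles the one
    free choice, whether r gets an edge to itself. Returns True iff
    the defining rule then holds at every node, including r itself."""
    r = "r*"
    assert r not in nodes
    new_edges = set(edges) | ({(r, r)} if r_self_edge else set())
    # add the edges from r required by the rule for the old nodes
    new_edges |= {(r, n) for n in nodes if (n, n) not in new_edges}
    # check the rule at every node of the extended graph
    return all(((r, n) in new_edges) == ((n, n) not in new_edges)
               for n in nodes | {r})

# The Alice/Betty/Carla graph from the figure; neither choice for the
# edge from r to itself satisfies the rule (it always fails at r):
nodes = {"A", "B", "C"}
edges = {("A", "A"), ("A", "B"), ("B", "C"), ("C", "B")}
print(diagonal_extension_works(nodes, edges, True))   # False
print(diagonal_extension_works(nodes, edges, False))  # False
```

The rule holds at every old node; it is only at *r* itself that the biconditional collapses into the contradiction displayed above.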

This pattern is a general one underlying a number of paradoxes – some familiar, some less so. For example:

*The Barber Paradox*:

*N* = the collection of men and there is an edge between two nodes *n*_{1} and *n*_{2} if and only if *n*_{1} shaves *n*_{2}. The new node *r* is the barber who shaves all and only those who do not shave themselves.

*The Russell Paradox*:

*N* = the collection of sets and there is an edge between two nodes *n*_{1} and *n*_{2} if and only if *n*_{2} is a member of *n*_{1}. The new node *r* is the set of all sets that are not members of themselves.

*The Impossible Painting Paradox*:

*N* = the collection of painting and there is an edge between two nodes *n*_{1} and *n*_{2} if and only if *n*_{1} is a painting that depicts *n*_{2}. The new node *r* is the painting that depicts all and only those paintings that do not depict themselves.

*The Hyperlink Paradox*:

*N* = the collection of websites and there is an edge between two nodes *n*_{1} and *n*_{2} if and only if *n*_{1} hyperlinks to *n*_{2}. The new node *r* is the website that links to all and only those websites that do not link to themselves.

*The Lover of Self-loathers Paradox*:

*N* = the collection of people and there is an edge between two nodes *n*_{1} and *n*_{2} if and only if *n*_{1} loves *n*_{2}. The new node *r* is the lover of self-loathers – a person who loves all and only those people who do not love themselves.

*The Anti-Cannibalism Predator Paradox*:

*N* = the collection of species, there is an edge between two nodes *n*_{1} and *n*_{2} if and only if members of species *n*_{1} eat members of *n*_{2}. The new node *r* is the anti-cannibal predator species, members of which eat all and only members of those species that don’t eat members of their own species.

The *Hyperlink Paradox* is, as far as I can tell, due to Øystein Linnebo, and the *Lover of Self-Loathers Paradox* and the *Anti-Cannibalism Predator Paradox* are new. Now that you’ve seen the pattern, you can have fun constructing your own paradoxical notions!

*Featured image: Partial view of the Mandelbrot set by Wolfgang Beyer. CC-BY-SA 3.0 via Wikimedia Commons.*


What is the biggest whole number that you can write down or describe uniquely? Well, there isn’t one, if we allow ourselves to idealize a bit. Just write down “1”, then “2”, then… you’ll never find a last one.

Of course, in real life you’ll die before you get to any really *big* numbers that way. So here’s a more interesting way of asking the question: what is the biggest whole number that you can uniquely describe on a standard sheet of paper (single spaced, 12 point type, etc.) or, more fitting, perhaps, in a single blog post?

In 2007 two philosophy professors – Adam Elga (Princeton) and Agustin Rayo (MIT) – asked essentially this question when they competed against each other in the *Big Number Duel*. The contest consisted of Elga and Rayo taking turns describing a whole number, where each number had to be larger than the number described previously. There were three additional rules:

- Any unusual notation had to be explained.
- No primitive semantic vocabulary was allowed (e.g. “the smallest number not mentioned up to now”).
- Each new answer had to involve some new notion – it couldn’t be reachable in principle using methods that appeared in previous answers (hence, after the second turn, you couldn’t just add 1 to the previous answer).

Elga began with “1”, Rayo countered with a string of “1”s, Elga then erased bits of some of those “1”s to turn them into factorials, and they raced off into the land of large whole numbers. Rayo eventually won with this description:

The least number that cannot be uniquely described by an expression of first-order set theory that contains no more than a googol (10^{100}) symbols.

A more detailed description of the *Duel*, along with some technical details about Rayo’s description, can be found here.

Fans of paradox will recognize that Rayo’s winning move was inspired by the Berry paradox:

The least number that cannot be described in less than twenty syllables.

This expression leads to paradox since it seems to name the least number that cannot be described in less than twenty syllables, and to do so using less than twenty syllables! Rayo’s description, however, is not paradoxical, since although it uses far fewer than a googol symbols to describe the number in English, this doesn’t contradict the fact that, in the expressively much less efficient language of set theory, the number cannot be described in fewer than a googol symbols.

The number picked out by Rayo’s description has come to be called, appropriately enough, Rayo’s number. And it is big – *really* big. But can we come up with short descriptions of even bigger numbers?

Notice that Rayo’s construction implicitly provides us with a description of a function:

*F*(*n*) = The least number that cannot be uniquely described by an expression of first-order set theory that contains no more than *n* symbols.
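Rayo’s *F* is, of course, wildly uncomputable – no algorithm can survey what expressions of set theory describe. But the shape of the definition can be illustrated with a computable toy analogue. In the sketch below (my own illustration, not part of the *Duel*), “described” means “denoted by an expression built from the digits 0–9 and the operators + and ×”, with each digit and operator counting as one symbol:

```python
def denotable(max_len):
    """Values denotable by a toy expression with at most max_len symbols.
    Grammar: E -> digit | E '+' E | E '*' E, where each digit and each
    operator counts as one symbol."""
    by_len = {1: set(range(10))}                  # single digits
    for k in range(2, max_len + 1):
        vals = set()
        # a length-k compound splits as: left (i symbols), operator, right
        for i in range(1, k - 1):
            for a in by_len[i]:
                for b in by_len[k - 1 - i]:
                    vals.add(a + b)
                    vals.add(a * b)
        by_len[k] = vals
    return set().union(*by_len.values())

def toy_rayo(n):
    """Least natural number not denotable with at most n symbols."""
    seen = denotable(n)
    m = 0
    while m in seen:
        m += 1
    return m
```

With at most 3 symbols the toy language denotes 0 through 18 (digits and digit sums) plus assorted digit products, so toy_rayo(3) is 19 – a miniature Rayo number. The real *F* replaces this toy grammar with all of first-order set theory, which is why no program can compute it.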

Rayo’s number is then just *F*(10^{100}). So one way to answer the question would be to construct a function *G*(*n*) such that *G*(*n*) grows more quickly than *F*(*n*). Here’s one way to do it.

First, we’ll define a two-place function *H*(*m*, *n*) as follows. We’ll just let *H*(0, 0) be 0. Now:

*H(0, n)* = The least number that cannot be uniquely described by an expression of first-order set theory that contains no more than *n* symbols.

So *H*(0, *n*) is just the Rayo function, and *H*(0, 10^{100}) is Rayo’s number. But now we let:

*H(m, n)* = The least number that cannot be uniquely described by an expression of first-order set theory supplemented with constant symbols for:

*H*(*m*-1, *n*), *H*(*m*-2, *n*),… *H*(1, *n*), *H*(0, *n*)

that contains no more than *n* symbols.

In other words, *H*(1, 10^{100}) is the least number that cannot be described in first-order set theory supplemented with a constant symbol that picks out Rayo’s number. Note that, in this new theory, Rayo’s number can now be described very briefly, in terms of this new constant! So *H*(1, 10^{100}) will be *much* larger than Rayo’s number.

But then we can consider *H*(2, 10^{100}), which is the least number that cannot be described in first-order set theory supplemented with a constant symbol that picks out Rayo’s number and a second constant symbol that picks out *H*(1, 10^{100}). This number is *much*, *much* bigger than *H*(1, 10^{100})!

And then we have *H*(3, 10^{100}), which is the least number that cannot be described in first-order set theory supplemented with a constant symbol that picks out *H*(0, 10^{100}), a second constant symbol that picks out *H*(1, 10^{100}), and a third constant symbol that picks out *H*(2, 10^{100}). This number is *much*, *much*, *much* bigger than *H*(2, 10^{100})!

And so on…

We can now get our quickly growing unary function *G*(*n*) by just identifying *m* and *n*:

*G*(*n*) = *H*(*n*, *n*).
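The effect of adding constant symbols can also be seen in a computable toy setting. In this sketch (again my own illustration, using a miniature language of digits, +, and ×, where “described” means “denoted”), each constant, digit, and operator counts as one symbol, and toy_H(m, n) enriches the language with one constant for each previously defined value:

```python
def denotable(max_len, atoms):
    """Values denotable with at most max_len symbols, where each atom
    (digit or named constant) and each of +, * counts as one symbol."""
    by_len = {1: set(atoms)}
    for k in range(2, max_len + 1):
        vals = set()
        for i in range(1, k - 1):
            for a in by_len[i]:
                for b in by_len[k - 1 - i]:
                    vals.add(a + b)
                    vals.add(a * b)
        by_len[k] = vals
    return set().union(*by_len.values())

def least_missing(seen):
    m = 0
    while m in seen:
        m += 1
    return m

def toy_H(m, n):
    """Toy analogue of H(m, n): add one constant symbol for each of
    toy_H(0, n), ..., toy_H(m-1, n), then ask for the least number
    not denotable with at most n symbols."""
    atoms = set(range(10)) | {toy_H(k, n) for k in range(m)}
    return least_missing(denotable(n, atoms))

def toy_G(n):
    """Toy analogue of G(n) = H(n, n)."""
    return toy_H(n, n)
```

Each new constant pushes the least undenotable number up: with a three-symbol budget, toy_H(0, 3) is 19, while toy_H(1, 3) – which adds a constant denoting 19 – is 29. (The naive recursion recomputes values; memoize it for anything beyond toy inputs.)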

And finally, our big, huge, enormous number is:

*G*(10^{100})

*G*(10^{100}) is the least number that cannot be described, using no more than a googol symbols, in first-order set theory supplemented with googol-many constant symbols – one for each of *H*(0, 10^{100}), *H*(1, 10^{100}), … *H*(10^{100}-1, 10^{100}).

This number really is big. Can you come up with a bigger one?

*Featured image: “Infant Stars in Orion” Public domain via Wikimedia Commons. *

The logic of unreliable narrators

In fiction, an unreliable narrator is a narrator whose credibility is in doubt – in other words, a proper reading of a narrative with an unreliable narrator requires that the audience question the accuracy of the narrator’s representation of the story, and take seriously the idea that what actually happens in the story – what is fictionally true in the narrative – is different from what is being said or shown to them. Unreliable narrators are common in fiction. Notable examples include Agatha Christie’s *The Murder of Roger Ackroyd*, Ken Kesey’s *One Flew Over the Cuckoo’s Nest*, Akira Kurosawa’s *Rashômon*, and Ron Howard’s *A Beautiful Mind*.

There are all sorts of interesting philosophical questions one might ask about unreliable narrators and how they function as a storytelling device. Here, however, I am going to point out some purely logical features of unreliable narrators.

Presumably, although the full account is no doubt more complex, one of the primary factors that determines whether a narrator is reliable, and to what extent, is the ratio of the number of (fictionally) true claims made by the narrator to the total number of claims made by the narrator. All else being equal, the higher this ratio, the more reliable the narrator is. Now, consider two stories. The first story – *S*_{1} – involves the narrator making *n* claims for some number *n*:

*S*_{1} = {*C*_{1}, *C*_{2}, *C*_{3}… *C*_{n}}

And let’s assume that, for some number *m* ≤ *n*, *m* of these claims are true. So the relevant ratio is *m*/*n*. The second story – *S*_{2} – is exactly like the first except for the addition of one more claim by the narrator: the claim that he or she is unreliable, which we shall call *U*:

*U* = “I am an unreliable narrator”

Hence:

*S*_{2} = {*C*_{1}, *C*_{2}, *C*_{3}… *C*_{n}, *U*}

Now, there are two possibilities. Either *U* is true, or *U* is false. If *U* is true, then the ratio of true claims to total claims in *S*_{2} is (*m*+1)/(*n*+1). But note that if *U* is true then the narrator really is unreliable, so *m* < *n*, and, for any positive finite numbers *m* and *n* where *m* < *n*:

(*m*+1)/(*n*+1) > *m*/*n*

So, although the narrator of *S*_{2} might well be unreliable, he or she is more reliable than the narrator of *S*_{1}, which fails to contain the admission of unreliability *U*. Note that this also implies that the narrator of *S*_{1} must have been unreliable as well.

If *U* is false, however, then the ratio of true claims to total claims is *m*/(*n*+1). But, for any positive finite numbers *m* and *n* where *m* ≤ *n*:

*m*/*n* > *m*/(*n*+1)

So, although the narrator of *S*_{2} might well be reliable, he or she is less reliable than the narrator of *S*_{1}, which fails to contain the admission of unreliability *U*.
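Both inequalities are elementary, but they are easy to confirm mechanically. A quick check (my own sketch, using Python’s exact rational arithmetic; note that strict inequality in the first case requires *m* < *n*, i.e. a narrator who really is unreliable):

```python
from fractions import Fraction

# reliability ratio = true claims / total claims
for n in range(1, 100):
    for m in range(1, n + 1):
        if m < n:
            # a true admission of unreliability raises the ratio
            assert Fraction(m + 1, n + 1) > Fraction(m, n)
        # a false admission of unreliability lowers the ratio
        assert Fraction(m, n) > Fraction(m, n + 1)
```

(When *m* = *n* the narrator is perfectly reliable, the two ratios (*m*+1)/(*n*+1) and *m*/*n* are both 1, and *U* could not be true in the first place.)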

The latter fact – that a reliable narrator claiming to be unreliable in fact makes them less reliable – is perhaps unsurprising and uninteresting; the fact that an unreliable narrator admitting their unreliability makes them more reliable is more interesting. Before examining this fact further, however, it is worth noting that there is a Truth-teller-like phenomenon in the vicinity as well.

Consider a story where the narrator, in addition to narrating the story, also claims to be a reliable narrator (perhaps, in time-honored tradition, by beginning the story with “Everything I am about to tell you is true”). Via computations similar to the above, if the narrator of a story not containing a claim to reliability of this sort is generally reliable, then the narrator of a story otherwise identical but supplemented with such a claim to reliability is even more reliable; and if the narrator of a story not containing such a claim is generally unreliable, then the narrator of a story otherwise identical but supplemented with such a claim is even less reliable.

Now, although it involves fictional truth (i.e. what claims we ought to make-believe to be true when consuming a fiction) rather than actual truth, at this point this puzzle looks like nothing more than a variant of the Liar paradox and the Truth Teller. But there is a secondary puzzle that arises once we have noted the Liar-like behavior of “I am an unreliable narrator.”

Whether or not a narrator is reliable, and more generally, the extent to which a narrator is reliable, is typically not something the author of a work announces at a press conference or prints on the cover of a book or DVD, but is instead something that the reader or viewer of a work has to decipher for him-or-herself from clues included in the story. On the face of it, one piece of evidence that we might think to be definitive in this regard is an admission of unreliability by the narrator him-or-herself. But, as we have seen, such an admission in fact makes the narrator more reliable, rather than less, if the narrator is in fact generally unreliable. In short, the sort of claim that we might, on the face of it, take to be good evidence of the presence of an unreliable narrator turns out to be much less useful than we might have first thought.

On the other hand, the results given above do suggest a sort of informal decision procedure for determining whether or not the narrator of a work is generally reliable or not. When confronted with a story where the evidence seems indeterminate with regard to whether, and to what extent, we should “believe” the narrator, we can just imagine a story that is similar except that the narrator claims to be reliable. If the narrator of the original story was generally reliable, the narrator of this new story will be even more reliable, and if the narrator of the original story was generally unreliable, then the narrator of this new story will be even more unreliable. Presumably, the more pronounced reliability, or unreliability, in the new story will be easier to detect than the original degree of reliability or unreliability in the original story was. If there still isn’t enough evidence to decide, then simply add another claim to reliability on the part of the narrator. And if this isn’t enough, add another one. Presumably at some point the reliability, or unreliability, of the narrator will become so extreme that it will be impossible not to spot, in which case the narrator of the original story will be generally reliable or generally unreliable if and only if the narrator of this new expanded story is (although not, of course, to the same degree).

Now, clearly the recipe just given is absurd – this algorithm for detecting whether a narrator is reliable just won’t work. But it strikes me as a little bit difficult to say exactly where it goes wrong.

*Featured image: Book by Kaboompics // Karolina, Public Domain via Pexels.*

The idea that many, if not most, people exhibit physical signs – *tells* – when they lie is an old one, one that has been extensively studied by psychologists and is of obvious practical interest to fields as otherwise disparate as gambling and law enforcement. Some of the tells that indicate someone is lying include:

- Pauses in speech.
- Providing too much information.
- Breathing heavily.
- Covering one’s face.
- Excessive finger pointing.
- Throat clearing.
- Not blinking.
- Swallowing.
- Shuffling feet.
- Tugging on ears.
- Licking lips.
- Cleaning glasses.
- Grooming hair.

The psychological research on tells is interesting and important, and knowledge of tells also makes high-stakes poker that much more fun to watch. But what hasn’t been appreciated until now is the fact that philosophy, and in particular, the study of paradoxes, has something to offer with respect to our understanding of tells.

Now, in real life, tells are general and not absolute – in other words, people are generally more likely to exhibit one or more of the behaviors listed above (or other tells) when they are lying than when they are telling the truth. There is no evidence that there are any absolute tells – in other words, there is no evidence that any of these symptoms is such that a person will exhibit that symptom if, and only if, he is telling a lie. And for good reason, since we can prove that an absolute tell is impossible.

Let’s be a bit more precise. First, it is worth recalling a distinction that has come up in previous posts in this column that is relevant here: the distinction between telling a lie – that is, making an assertion that one either believes to be false, or is intended to deceive the listener, or both – and the mere assertion of a falsehood, which need involve neither the speaker believing that the claim is false nor the speaker intending to deceive anyone. Although discussions of tells in psychology and elsewhere are rarely explicit about this, presumably a tell indicates that the speaker is lying, not that he or she is merely asserting a falsehood.

Second, we need to say a bit more about what we mean by “absolute tell”: A physical symptom (such as covering one’s face) is an *absolute tell* for a person if and only if the person in question will exhibit that symptom when lying, and not exhibit that symptom when not lying. Thus, an absolute tell for a person is a completely reliable indicator of whether that person is lying or not.

We can now prove that absolute tells are impossible, since the existence of an absolute tell leads to paradox. Imagine that behavior *X* is an absolute tell for person *P*. Then ask person *P* to say:

“I am exhibiting *X* right now”.

Let’s assume that we are in a friendly laboratory environment, where the speaker has every reason to follow your directions if possible, rather than a police interrogation room, poker table, or some other environment where the speaker might have reasons not to cooperate. Further, let’s assume that the room is generously equipped with mirrors, so that both you and the speaker are immediately aware of the speaker’s physical behavior, and in particular of whether or not they exhibit symptom *X*.

Now, one of two things will happen:

(1) Person *P* will find themselves unable to utter the sentence in question.

(2) Person *P* will utter the sentence.

Option (1), however, would be even more mysterious than the existence of absolute tells, since it is utterly unclear why the existence of a uniform relationship between one’s speech and one’s physical behavior (i.e. a tell) would imply constraints on what one can say. After all, it’s a simple sentence, and simple to say. So let’s set aside (1) and concentrate on (2).

Now, when person *P* utters the sentence in question, either they exhibit behavior *X* or they don’t. Further, given the set-up, they will also know whether or not they exhibited *X*, and you will know whether they exhibited *X*, and so on. So, we have two further cases:

(1) Person *P* exhibits symptom *X*.

(2) Person *P* does not exhibit *X*.

In either case, however, if *X* is an absolute tell then we obtain a contradiction.

In case (1), person *P* exhibits *X*, knows that they exhibit *X*, and says that they exhibit *X*. In addition, you know that they exhibited *X*, and they know you know. So person *P* can’t be asserting something that they believe to be false, and they can’t be intending to deceive you. Hence they are not lying. But if *X* were an absolute tell for person *P*, then they should not be exhibiting *X*, since they are not lying. Contradiction.

In case (2), person *P* is not exhibiting *X*, knows that they are not exhibiting *X*, yet says that they are exhibiting *X*. If person *P* is not exhibiting *X*, then they are not lying, so they must believe their assertion and must not be intending to deceive you when uttering it. But they know that they are not exhibiting *X*, so they can’t believe they are. Again, a contradiction.

So we’ve made some philosophical progress: Absolute tells are impossible!

Note: The exact details of this argument might differ depending on the exact details of one’s philosophical account of what it is to tell a lie – see here. But some version will work on any reasonable account of what it is to tell a lie.

The impossibility of absolute tells is a purely logico-philosophical matter, depending solely on the conceptual analysis carried out above, and is independent of any empirical inquiry. But this impossibility result suggests some related questions in empirical psychology that are worth wondering about.

First, what would happen if we carried out the above scenario with subjects who had very reliable, even if not perfect (i.e. absolute), tells? In other words, what would happen if we took a bunch of subjects who exhibited some characteristic behavior *X* almost every time they lied, and didn’t exhibit *X* almost every time they didn’t lie, and then asked them to assert the sentence above. Now we know they would either not exhibit their tell and hence be lying, or exhibit their tell and not be lying. But which one? Would one outcome be more common than the other?

Second, it is worth noting that most people are not aware of the tells that (reliably, even if not perfectly) indicate when they are lying. Thus, it is worth asking whether informing subjects of their tells – that is, telling them that they generally exhibit symptom *X* when they are lying, and don’t exhibit it when they are not lying – before carrying out the experiment sketched above would affect the results. In other words, would a person’s knowing that *X* is a reliable tell for them affect whether or not they would exhibit that tell when forced to say the sentence in question?

These are interesting questions, but I’ll leave it up to the psychologists to determine their answers.

*Featured Image Credit: ‘Abstract background wallpaper’ by tommyvideo via Pixabay. CC0 Public Domain.*

A person-less variant of the Benardete paradox

Before looking at the person-less variant of the Benardete paradox, let’s review the original:

Imagine that Alice is walking towards a point – call it *A* – and will continue walking past *A* unless something prevents her from progressing further.

There is also an infinite series of gods, which we shall call *G*_{1}, *G*_{2}, *G*_{3}, and so on. Each god in the series intends to erect a magical barrier preventing Alice from progressing further if Alice reaches a certain point (and each god will do nothing otherwise):

(1) *G*_{1} will erect a barrier at exactly ½ meter past *A* if Alice reaches that point.

(2) *G*_{2} will erect a barrier at exactly ¼ meter past *A* if Alice reaches that point.

(3) *G*_{3} will erect a barrier at exactly ^{1}/_{8} meter past *A* if Alice reaches that point.

And so on.

Note that the possible barriers get arbitrarily close to *A*. Now, what happens when Alice approaches *A*?

Alice’s forward progress will be mysteriously halted at *A*, but no barriers will have been erected by any of the gods, and so there is no explanation for Alice’s inability to move forward. Proof: Imagine that Alice did travel past *A*. Then she would have had to go some finite distance past *A*. But, for any such distance, there is a god far enough along in the list who would have thrown up a barrier before Alice reached that point. So Alice can’t reach that point after all. Thus, Alice has to halt at *A*. But, since Alice doesn’t travel past *A*, none of the gods actually do anything.

Some responses to this paradox argue that the gods have individually consistent but jointly inconsistent intentions, and hence cannot all carry out what they have promised to do. Other responses have suggested that the fusion of the individual intentions of the gods, or some similarly complex construction, is what blocks Alice’s path, even though no individual god actually erects a barrier. But it turns out that we can construct a version of the paradox that seems immune to both strategies.

Imagine that *A*, *B*, and *C* are points lying in a straight line (in that order), each exactly one meter from the next. A particle *p* leaves point *A* and begins travelling towards point *B* at exactly one second before midnight. The particle *p* is travelling at exactly one meter per second. The particle *p* will pass through *B* (at exactly midnight) and continue on towards *C* unless something prevents it from progressing further.

There is also an infinite series of force-field generators, which we shall call *G*_{1}, *G*_{2}, *G*_{3}, and so on. Each force-field generator in the series will erect an impenetrable force field at a certain point between *B* and *C*, and at a certain time. In particular:

(1) *G*_{1} will generate a force-field at exactly ½ meter past *B* at ¼ second past midnight, and take the force-field down at exactly 1 second past midnight.

(2) *G*_{2} will generate a force-field at exactly ¼ meter past *B* at exactly ^{1}/_{8} second past midnight, and take the force-field down at exactly ^{1}/_{2} second past midnight.

(3) *G*_{3} will generate a force-field at exactly ^{1}/_{8} meter past *B* at exactly ^{1}/_{16} second past midnight, and take the force-field down at exactly ^{1}/_{4} second past midnight.

And so on. In short, for each natural number *n*:

(n) *G*_{n} will generate a force-field at exactly ^{1}/_{2}^{n} meter past *B* at exactly ^{1}/_{2}^{n+1} second past midnight, and take the force-field down at exactly ^{1}/_{2}^{n-1} second past midnight.

Now, what happens when *p* approaches *B*?

Particle *p*’s forward progress will be mysteriously halted at *B*, but *p* will not have impacted any of the barriers, and so there is no explanation for *p*’s inability to move forward. Proof: Imagine that particle *p* did travel to some point *x* past *B*. Let *n* be the smallest whole number such that ^{1}/_{2}^{n} is less than *x*. Then *p* would have travelled at a constant speed from ^{1}/_{2}^{n+2} meter past *B* to ^{1}/_{2}^{n} meter past *B* during the period from ^{1}/_{2}^{n+2} second past midnight to ^{1}/_{2}^{n} second past midnight. But there is a force-field at ^{1}/_{2}^{n+1} meter past *B* for this entire duration, so *p* cannot move uniformly from ^{1}/_{2}^{n+2} meter past *B* to ^{1}/_{2}^{n} meter past *B* during this period. Thus, *p* is halted at *B*. But *p* does not make contact with any of the force-fields, since the distance between the *m*^{th} force-field and *p* (when it stops at *B*) is ^{1}/_{2}^{m} meters, and the *m*^{th} force-field does not appear until ^{1}/_{2}^{m+1} second after the particle halts at *B*.
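The timing bookkeeping in this proof is easy to get wrong, so it is worth verifying mechanically. In this sketch (my own, using exact rational arithmetic), field *G*_{n} is up from 1/2^{n+1} to 1/2^{n-1} seconds past midnight, and a particle crossing *B* at midnight at one meter per second would traverse the stretch from 1/2^{n+2} to 1/2^{n} meters past *B* during exactly that interval of seconds:

```python
from fractions import Fraction

def field_window(n):
    """Time interval (up, down) during which force-field G_n exists:
    up at 1/2**(n+1), down at 1/2**(n-1) seconds past midnight."""
    return Fraction(1, 2 ** (n + 1)), Fraction(1, 2 ** (n - 1))

for n in range(1, 50):
    # if the particle got past B, it would cross [1/2**(n+2), 1/2**n] metres
    # during the time interval [1/2**(n+2), 1/2**n] seconds past midnight
    start, end = Fraction(1, 2 ** (n + 2)), Fraction(1, 2 ** n)
    up, down = field_window(n + 1)   # the field at 1/2**(n+1) metres past B
    # G_{n+1} is up for that entire interval, so the crossing is blocked
    assert up <= start and end <= down
```

Meanwhile, the particle halted at *B* never touches a field, since *G*_{m} only goes up at 1/2^{m+1} seconds after the halt.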

Notice that since there are no gods (or anyone else) in this version of the puzzle, no solution relying on facts about intentions will apply here. More generally, unlike the original puzzle, in this set-up the force-fields are generated at the appropriate places and times regardless of how the particle behaves – there are no instructions or outcomes that are dependent upon the particle’s behavior. In addition, arguing that, even though no individual force-field stops the particle, the fusion or union of the force-fields does stop the particle will be tricky, since although at any point during the first ½ second after midnight two different force-fields will exist, there is no time at which all of the force-fields exist.

Thanks go to the students in my Fall 2016 Paradoxes and Infinity course for the inspiration for this puzzle!

*Featured image credit: Photo by Nicolas Raymond, CC BY 2.0 via Flickr.*

Let us say that a sentence is *periphrastic* if and only if there is a single word in that sentence such that we can remove the word and the result (i) is grammatical, and (ii) has the same truth value as the original sentence. For example:

[1] Roy murdered someone.

is periphrastic, since it is equivalent to:

[2] Roy murdered.

Thus, a sentence is not periphrastic if, for any word in the sentence, the result of removing the word is not grammatical, or the result of removing that word has a different truth value.

It should be noted that I am introducing “periphrastic” as a technical term here, and its use as defined above is different from (but connected to) its meaning in everyday English (further, the meaning here is significantly different from its technical meaning in grammar).
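Checking periphrasticity is mechanical once grammaticality and truth are settled. Here is a sketch (my own, in Python; is_grammatical and truth_value are hypothetical user-supplied oracles, and for self-referential sentences no such oracles exist – which is exactly the puzzle discussed below):

```python
def one_word_deletions(sentence):
    """All strings obtainable by deleting exactly one word."""
    words = sentence.split()
    return [" ".join(words[:i] + words[i + 1:]) for i in range(len(words))]

def is_periphrastic(sentence, is_grammatical, truth_value):
    """Periphrastic iff some one-word deletion is grammatical and has
    the same truth value as the original sentence."""
    tv = truth_value(sentence)
    return any(is_grammatical(s) and truth_value(s) == tv
               for s in one_word_deletions(sentence))

# toy oracles for the running example (stipulated, not computed)
grammatical = {"Roy murdered someone", "Roy murdered"}
true_sents = {"Roy murdered someone", "Roy murdered"}
print(is_periphrastic("Roy murdered someone",
                      lambda s: s in grammatical,
                      lambda s: s in true_sents))   # prints True
```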

The notion of a sentence being periphrastic in this sense is simple, and at first glance we might think that we will always be able to determine whether a sentence is periphrastic merely by checking all of the sentences that can be obtained by removing one word. But like many other simple notions such as truth and knowability, it leads to puzzles – in this case, puzzles very similar to the truth-teller:

This sentence is true.

Consider the following sentence:

[3] I am periphrastic.

(We assume here that “I” is an informal way for a sentence to refer to itself).

Now, [3] is either periphrastic or not. But there seems to be no way to determine which it is.

If [3] is periphrastic, then there must be some word that we can remove from [3] such that the result is grammatical and true (since if [3] is periphrastic then, since that is what it says, it is true). The only way to remove a single word from [3] and obtain a grammatical result is:

[4] I am.

This sentence states that [4] exists, and is clearly true. Thus, the claim that [3] is periphrastic (and hence true) is completely consistent.

If [3] is not periphrastic, however, then the result of removing any word must either be ungrammatical or true (since [3] says that it is periphrastic, and thus in this case [3] is false). But, again, the only way to remove a single word from [3] and obtain a grammatical result is again [4], which is true. Thus, the claim that [3] is not periphrastic (and hence false) is completely consistent.

There would seem to be no other evidence that could settle the matter. Thus, even though it is obvious that [3] is either periphrastic or it is not, determining which seems impossible.

Interestingly, although most notions that allow for the construction of a truth-teller type puzzle also admit of a Liar-like paradoxical construction, I have failed to find an example of a paradoxical sentence that involves the idea of sentences being periphrastic. The obvious candidate to look at is:

[5] I am not periphrastic.

But there is nothing paradoxical about [5] – it is perfectly consistent to assume that [5] is false, and hence (contrary to what it says) periphrastic, since:

[6] I am not.

is a sentence obtained from [5] via the deletion of a single word that then has the same truth value as [5] – they are both false.

Perhaps my readers can do better. Is there a clearly paradoxical sentence that can be constructed using the notion of a sentence being periphrastic?

*Featured image: Pieces Of The Puzzle by Hans. Public domain via Pixabay.*

Paradoxes logical and literary

For many months now this column has been examining logical/mathematical paradoxes. Strictly speaking, a paradox is a kind of argument – for example, in some of my academic work I define paradoxes as follows:

A paradox is an argument that:

- Begins with premises that seem uncontroversially true.
- Proceeds via reasoning that seems uncontroversially valid.
- Arrives at a conclusion that is a contradiction, is false, or is otherwise absurd, inappropriate, or unacceptable.

Often, however, such as in the case of the Liar Sentence:

“This sentence is not true.”

there is a central claim that seems to be the root of the paradox, and in such cases we often talk as if the sentence itself is the paradox, rather than the argument. Let’s adopt this informal usage here. Thus, on this looser way of speaking, sentences that cannot be true and cannot be false are paradoxical. We’ll call the kind of sentences just described “philosophically paradoxical”, or paradoxical_{P}.

In literary theory, some sentences are also called paradoxes, but the meaning of the term is significantly different. If one is in an English department, rather than a philosophy department, and one claims that George Orwell’s claim from *Animal Farm* that:

“All animals are equal, but some animals are more equal than others.”

is a paradox, then one is not claiming that this sentence is neither true nor false, or that one can derive a contradiction or absurdity from this claim. Rather, the claim being made is something like this: the sentence in question involves a misleading juxtaposition of concepts and ideas that leads to an unexpected truth. Although this will obscure some of the subtleties, for our purposes we can simplify this as: A sentence is paradoxical (in the literary sense) if and only if it appears to be false, nonsensical, or otherwise problematic, but in fact hides a deeper truth – in short, if it is surprisingly true. Let’s call the kind of sentences just described literarily paradoxical, or paradoxical_{L}.

It is worth making the following observations, which underlie most of what follows, explicit: If a sentence is paradoxical_{P}, then it cannot be either true or false. Any sentence that is paradoxical_{L}, however, must be true, even if it initially appears to be false.

Now, at first glance we might think that this second, literary notion of paradoxicality has little to offer the logician – after all, it is a literary notion, not a logical one. But the opposite is in fact true. We can see this by considering the Literary Liar:

“This sentence is not a paradox_{L}”

The Literary Liar is either uncontroversially true, and hence is neither kind of paradox, or it is a paradox_{P} (and hence neither true nor false), but not a paradox_{L}.

First, the Literary Liar cannot be false. If it were false, then it would not be a paradox_{L}, since any sentence that is a paradox_{L} must be true. But it says that it is not a paradox_{L}, so this would mean that what it says is the case, and hence it would be true. Contradiction.

So the Literary Liar must be true. But now we have two options: either the Literary Liar appears false, nonsensical, or otherwise problematic, or it does not.

Why might we suspect that the Literary Liar is false, nonsensical, or otherwise problematic? Well, paraphrasing loosely, the Literary Liar says something like:

“This sentence is not surprisingly true.”

One might think that this appears, at first glance, to be close enough in content to the Liar Sentence to strongly suggest that similar self-reference induced problems will arise here as well.

If (for whatever reasons) we expect the Literary Liar to be false, nonsensical, or otherwise problematic, then, since it is in fact true, it is a paradox_{L}. But it says that it is not a paradox_{L}. So if it is true then it is not a paradox_{L}. Contradiction, and we have our paradox (of the philosophical variety).

If, however, the Literary Liar does not appear to be false, nonsensical, or otherwise problematic at first glance, then there is no paradox. The Literary Liar, in this situation, would be true, but not surprisingly so, and hence neither a paradox_{P} nor a paradox_{L}.

To sum up: If the Literary Liar is not a philosophical paradox, then it is true, but not surprisingly so. Surprising, no?

This example highlights another important fact: Whether or not a sentence is a paradox_{L} depends on our expectations – that is, on whether or not we expect it to be false, nonsensical, or otherwise problematic. Paradoxicality_{P} does not depend on our expectations in this manner.

In addition to examining constructions involving the literary notion of paradoxicality, we can combine these two notions to obtain some interesting puzzles. For example, consider the sentence:

“This sentence is a paradox, but is not a paradox.”

On the face of it, this sentence looks plainly false. In fact, however, given the two readings of “paradox” we have to hand, the sentence is ambiguous, and on one reading could be true!

There are four ways that we can disambiguate the sentence in question, depending on how we label the two occurrences of “paradox” with our subscripts “P” and “L”:

(i) This sentence is a paradox_{P}, but is not a paradox_{P}.

(ii) This sentence is a paradox_{L}, but is not a paradox_{L}.

(iii) This sentence is a paradox_{P}, but is not a paradox_{L}.

(iv) This sentence is a paradox_{L}, but is not a paradox_{P}.

Readings (i) and (ii) seem, on the face of it, to be plainly false: a sentence cannot be both a paradox_{P} and not a paradox_{P}, nor can it be both a paradox_{L} and not a paradox_{L} (dialetheists: please forgive my brushing over some subtleties in case (i) – doing so allows us to get to some less subtle but much more fun issues in the other cases).

Reading (iii) also seems false: It can’t be true, of course, because then by the first conjunct it would also have to be a paradox_{P}, but a paradox_{P} is a sentence that can’t be true and can’t be false. But on reading (iii) the sentence in question can be false, since in such a case it wouldn’t be a paradox_{P} and hence what it says would not be the case. Since it can consistently be false, it isn’t a paradox_{P}, and so is in fact false.

Reading (iv) is the most interesting, however, since it seems to work a bit like the sentence known as the Truth Teller:

“This sentence is true”

The Truth Teller, intuitively, is indeterminate: If it is true, then what it says is the case, which is exactly what is required of a true sentence. If it is false, then what it says is not the case, which is exactly what is required of a false sentence. Thus, it could be true and it could be false, and there seems to be no additional information that can determine which it is.

Similarly, on reading (iv) our sentence might be true and it might be false. First, notice that the sentence, without our disambiguating subscripts, certainly appears to be false – it has the form “*P* and not *P*”. So, if it is true, then the fact that it is true is surprising. So if it is true, then it is a paradox_{L}. And if it is true then it is certainly not a paradox_{P}. Thus, if it is true, then what it says is the case, which is exactly what is required of a true sentence.

Second, if the sentence is false, then it is not a paradox_{L}, so it is certainly not the case that it is a paradox_{L} but not a paradox_{P}. So what it says is not the case, which is exactly what is required of a false sentence.

The reader is encouraged to consider the following slight variation of this sentence to see what difference the inclusion of “Unsurprisingly” makes:

“Unsurprisingly, this sentence is a paradox_{L}, but is not a paradox_{P}.”

Some interesting issues with regard to the logic of “surprising” arise, consideration of which I will leave to the reader.

*Featured image: Abstract by geralt. Public domain via Pixabay.*

Imagine that we have a black and white monitor, a black and white camera, and a computer. We hook up the camera and monitor to the computer, and we write a program where, for some medium-ish shade of grey *G*:

- The computer tells the monitor to show a completely white screen if the average shade of the scene the camera is recording is darker than *G*.
- The computer tells the monitor to show a completely black screen if the average shade of the scene the camera is recording is no darker than *G*.

You walk around pointing the camera and all goes swimmingly, until you get the bright idea to point the camera at the monitor. Then what happens?

There are two ways to answer this question: First, if we pretend that we lived in a world where electricity and light travelled instantaneously (that is, that the speed of light was infinite, so that light would leave the monitor and be detected by the camera at the same moment), then we would have a paradox. If the camera is pointed at the monitor then, at any point in time, the monitor will display a completely white screen if and only if the average shade of the scene the camera is recording is darker than *G* if and only if the average shade of the image depicted in the monitor is darker than *G* if and only if the monitor is not displaying a completely white screen (since, by the set-up, the screen always displays either a completely white screen or a completely black screen).

In reality, however, both light and electricity take a small amount of time to travel from one point to the next. As a result, we don’t have a paradox or impossible situation. Instead, if we actually set up a camera and monitor as described, the monitor will flicker – that is, it will alternate between a completely white screen and a completely black screen, and the time between switches between black and white will depend on how long it takes for light to leave the screen and be detected by the camera, and how long it takes for the electrical signals to travel from camera to computer, and how long it takes the computer to carry out the computations in the program, and how long it takes the signal to pass from the computer to the monitor.
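The feedback loop is simple enough to simulate. Here is a minimal sketch (the numeric encoding of shades and all names are my own, not part of the original setup): the screen’s current shade is fed back in as the scene the camera records, with one loop iteration standing in for the total signal delay.

```python
# Toy model of the camera-monitor loop. Shades are numbers in [0, 1]:
# 0.0 is black, 1.0 is white, and G is the medium-ish threshold.
G = 0.5
WHITE, BLACK = 1.0, 0.0

def next_screen(seen_shade):
    # Show white if the recorded scene is darker than G, black otherwise.
    return WHITE if seen_shade < G else BLACK

# Point the camera at the monitor: the screen's own shade is the scene.
screen = WHITE
frames = []
for _ in range(6):               # six delay-lengths of simulated time
    screen = next_screen(screen)
    frames.append(screen)

print(frames)
```

Whichever shade the screen starts with, the loop settles immediately into the black/white alternation described above.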

Now, let’s complicate things a bit: Imagine that you now have an infinite stack of monitors, computers, and cameras. So there is camera #1, which feeds an image to computer #1, which then tells monitor #1 what to display, and then camera #2, which feeds an image to computer #2, which then tells monitor #2 what to display, and then camera #3, which…

Now, for each camera #*n*, if we just point it at monitor #*n* (and each computer runs the same simple program as before) then we just have infinitely many instances of the earlier, simpler puzzle. But what happens if we do things a bit differently? Instead of pointing camera #1 at monitor #1, let’s assume we set it up in such a way that the camera can ‘see’ monitor #2, and monitor #3, and monitor #4, and so on. Similarly, camera #2 is pointed so it can see monitor #3, and monitor #4, and monitor #5, and so on. More generally, for each whole number *n*, camera #*n* is positioned so it can see all monitors whose number is greater than *n*.

Now, let’s also change the program a bit:

- Computer #1 tells monitor #1 to show a completely white screen if all of the monitors it can see are darker than *G*.
- Computer #1 tells monitor #1 to show a completely black screen if at least one of the monitors it can see is no darker than *G*.

And similarly for computer #2, monitor #2, and camera #2. More generally:

- Computer #*n* tells monitor #*n* to show a completely white screen if all of the monitors whose numbers are greater than *n* are darker than *G*.
- Computer #*n* tells monitor #*n* to show a completely black screen if at least one of the monitors whose number is greater than *n* is no darker than *G*.

Regular readers of this blog will not be surprised that this is a television-variant of the Yablo paradox (obviously my favorite), just as the initial, single-camera version was a television-variant of the Liar paradox. Thus, if the speed of light and electricity were infinite, and hence the signals from camera to computer to monitor travelled instantaneously, we would again have a paradox.

Of course, if we were able to set up infinitely many televisions, cameras, and computers in this way, then we wouldn’t rip a hole in reality. Rather, the tiny but nevertheless real lag produced by the time that it takes for the signal to travel would result in the screens flickering from black to white and back again, as the camera detected different shades and the computer thus sent different instructions.

There is an interesting phenomenon here, however, in addition to this merely providing us with a novel presentation of familiar paradoxes. Let’s assume that it takes a precise fixed amount of time for the image to travel from the relevant monitors to each camera, then be sent to the computer, processed, and then the command sent to the monitor telling it what shade to display. Let’s call this time an *antinosecond* (parasecond was already taken). So, in the single-television setup with which we began, if we begin by having the television display an all white screen, then after an antinosecond it will switch to all black, then after another antinosecond it will switch to all white, and so on.

But what happens with the Yablo version, with infinitely many cameras, computers, and televisions? At first glance, you might think that it will depend on how you set it up initially – that is, on which monitors are showing a white screen and which are showing a black screen when you turn on the cameras, computers, and get things rolling. But it turns out that initial thought would be wrong:

**Theorem:** No matter what state the monitors start in, after a finite number of antinoseconds they will begin alternating between two states – all of them simultaneously showing a white screen, and all of them simultaneously showing a black screen.

I’ll give readers a couple days to ponder this before I post the argument in the comments.
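While pondering, readers can experiment with a simulation. The sketch below (the modelling is entirely my own) represents the infinite array exactly whenever the configuration is *eventually constant* – a finite prefix of shades followed by an infinite tail of monitors all showing the same shade – since a monitor deep in the tail only ever sees tail-shaded monitors.

```python
# Exact simulation of the infinite monitor array for configurations that
# are eventually constant: a finite prefix of shades followed by an
# infinite uniform tail. (Names and the prefix/tail encoding are mine.)
WHITE, BLACK = "white", "black"

def step(prefix, tail):
    """One antinosecond: monitor n turns white iff every monitor it can
    see (all monitors m > n, including the whole infinite tail) is black."""
    new_prefix = []
    for i in range(len(prefix)):
        rest_black = all(s == BLACK for s in prefix[i + 1:]) and tail == BLACK
        new_prefix.append(WHITE if rest_black else BLACK)
    # A monitor deep in the tail sees only tail-shaded monitors:
    new_tail = WHITE if tail == BLACK else BLACK
    return new_prefix, new_tail

# Start from an arbitrary mixed configuration and watch it settle:
prefix, tail = [WHITE, BLACK, WHITE, BLACK, BLACK], BLACK
history = [(prefix, tail)]
for _ in range(5):
    prefix, tail = step(prefix, tail)
    history.append((prefix, tail))

for t, (p, tl) in enumerate(history):
    print(t, p, "... tail:", tl)
```

Whatever starting prefix you try, the run collapses after finitely many steps into the all-white/all-black alternation the theorem describes.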

In addition, there is a (surprisingly small) number such that we are guaranteed that the screens will be alternating between all simultaneously showing black and all simultaneously showing white, no matter how we set up the shades the screens are displaying at the start. Thus:

**Bonus Question:** What is the maximum number of antinoseconds before the screens are all the same shade (i.e. either all black or all white)?

**Note:** The television version of the Liar paradox is due to David Cole, a professor at the University of Minnesota – Duluth. Thanks are owed to David for allowing me to discuss it here. The Yablo variant is, as far as I know, novel.

*Featured image: Google Earth on multiple monitors by Runner1928. Public domain via Wikimedia Commons. *

One of the most famous, and most widely discussed, paradoxes is the Liar paradox, which arises when we consider the status of the Liar sentence:

This sentence is false.

The Liar sentence is true if and only if it is false, and thus can be neither (unless it can be both).

The variants of the Liar that I want to consider in this instalment arise by taking the implicit temporal aspect of the word “is” in the Liar paradox seriously. In other words, we can understand the Liar sentence as saying of itself that it is true at this very moment. Thus, the Liar is equivalent to:

This sentence is currently, at this very moment false.

But what if we replace the present-tense “is” with future or past-tense verbs such as “will be”, “was”, and the like?

Before considering such constructions, we need to be a bit clear about how we are going to understand various tensed expressions. Informally, if I say “It will always be the case that P”, I might be claiming that P is true right now and will continue to be true at every point in time in the future, or I might merely be claiming that P will be true at every point in time after the present one, but claiming nothing whatsoever regarding whether or not P is true at the present moment. Here we will assume the latter understanding. More generally:

“P will be true” means that P holds at some point in time after the present moment.

“P will always be true” means that P holds at every point in time after the present moment.

“P was true” means that P holds at some point in time before the present moment.

“P was always the case” means that P holds at every point in time before the present moment.

Similar equivalences hold for sentences of the form “P will be false.”, etc. Finally, we will assume that there is no first or last point in time: for any moment, there is at least one moment before that one and at least one moment after that one.

Now, let’s consider the following self-referential sentences about the future:

[1]: This sentence will be false.

[2]: This sentence will always be false.

Loosely put, [1] is true at a time if and only if it is false at some later time, and [2] is true at a time if and only if it is false at every later time. Both of these sentences are paradoxical:

Assume (for *reductio*) that [1] is false at some time t_{1}. Then it is not the case that [1] will ever be false at any point in time after t_{1}. So at every point in time after t_{1}, [1] is true. Let t_{2} be any point in time after t_{1}. Then [1] is true at every point after t_{2}. So [1] is false at t_{2}. But t_{2} is after t_{1}, so [1] is also true at t_{2}. Contradiction. So [1] cannot be false at t_{1}, and must be true at t_{1}. Moment t_{1} was arbitrary, however. Hence, [1] is true at every moment in time. But then [1] is true at every time after t_{1}. But then [1] is false at t_{1}, and so once again we have a contradiction.

Assume (for *reductio*) that [2] is true at some time t_{1}. Then [2] will be false at any point in time after t_{1}. Let t_{2} be any point in time after t_{1}. Then [2] is false at every point after t_{2}. So [2] is true at t_{2}. But t_{2} is after t_{1}, so [2] is also false at t_{2}. Contradiction. So [2] cannot be true at t_{1}, and must be false at t_{1}. Moment t_{1} was arbitrary, however. Hence, [2] is false at every moment in time. But then [2] is false at every time after t_{1}. But then [2] is true at t_{1}, and we have our contradiction.
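The paradoxicality of [1] can also be checked mechanically, at least over a natural class of truth-value assignments. The sketch below (the modelling choices are my own) brute-forces all *eventually periodic* assignments of truth values to discrete moments t = 0, 1, 2, … and confirms that none makes [1] true at exactly those moments that are followed by some later false moment.

```python
# Brute-force check that no eventually-periodic truth-value assignment
# over discrete time makes sentence [1] ("This sentence will be false")
# coherent: [1] must be true at t exactly when it is false at some t' > t.
from itertools import product

def coherent(prefix, cycle):
    """v(t) is given by prefix for t < len(prefix), then repeats cycle.
    Return True iff v(t) == (1 if v is 0 at some t' > t, else 0), for all t."""
    n, k = len(prefix), len(cycle)
    def v(t):
        return prefix[t] if t < n else cycle[(t - n) % k]
    # Beyond the prefix, "some later value is 0" just means the cycle has a 0.
    zero_in_cycle = 0 in cycle
    for t in range(n + k):   # checking up to n + k covers every cycle position
        later_false = zero_in_cycle or any(v(u) == 0 for u in range(t + 1, n))
        if v(t) != (1 if later_false else 0):
            return False
    return True

found = [
    (list(p), list(c))
    for plen, clen in product(range(0, 4), range(1, 4))
    for p in product([0, 1], repeat=plen)
    for c in product([0, 1], repeat=clen)
    if coherent(list(p), list(c))
]
print(found)   # [] -- no coherent assignment exists
```

The search comes back empty: if the cycle contains a 0, [1] would have to be true everywhere (including at that 0), and if it doesn’t, [1] would have to be false throughout the cycle (contradicting the all-1 cycle) – exactly the dilemma in the prose argument above.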

Similar arguments (obtained by simple modifications of the arguments just given) show that both of:

[3]: This sentence was false.

[4]: This sentence was always false.

are paradoxical.

These variants on the Liar paradox are interesting for the following reason: although they look similar to the present-tense Liar, the reasoning to a contradiction does not look like the simple argument typically used to generate a contradiction from the Liar sentence. Instead, the argument looks much more like the reasoning typically used to show that the apparently non-circular but equally paradoxical Yablo paradox:

S_{1}: For all m > 1, S_{m} is false.

S_{2}: For all m > 2, S_{m} is false.

: :

S_{n}: For all m > n, S_{m} is false.

S_{n+1}: For all m > n+1, S_{m} is false.

: :

is paradoxical. None of the temporal Liars [1] through [4] involves an infinitely descending, non-circular sequence of sentences, however, so at first glance it might be puzzling why the argument for a contradiction in each case looks more like the Yablo paradox reasoning and less like the Liar reasoning. After all, these temporal Liar paradoxes seem to be circular in exactly the same manner as is the Liar sentence.

The reason why the temporal Liars are more Yablo-like than we might have initially expected, however, is easy to identify: Although each temporal Liar paradox only involves a single sentence, bringing in (infinitely many) different points in time via the use of tensed verbs means that we must consider the truth-value of this single sentence at all past, or at all future, times rather than merely considering what (single, univocal) truth value it has in the present. The Yablo paradox involves infinitely many distinct sentences, where for each sentence we need to consider what truth-value it might have in the present. The temporal paradoxes involve a single sentence, but in assessing them we need to consider what truth-value it might have at each of infinitely many different points in time.

I’ll conclude by providing the reader with a few additional examples to explore. In particular, note that we can combine more than one temporal operator such as “will be” and “always was” and obtain more complicated temporal Liar sentences such as:

[5]: It will be the case that this sentence was always false.

[6]: It always was the case that this sentence will be false.

Of course, lots of other combinations are possible. I’ll leave it to the reader to determine whether some, all, or none of these more complicated constructions are paradoxical.

*Featured image: Blue spheres, by Splitshire. CC0 public domain via Pexels.*

The consistency of inconsistency claims

A theory is *inconsistent* if we can prove a contradiction using basic logic and the principles of that theory. Consistency is a much weaker condition than truth: if a theory *T* is true, then *T* is consistent, since a true theory only allows us to prove true claims, and contradictions are not true. There are, however, infinitely many different consistent theories that we can construct using, for example, the language of basic arithmetic, and many of these are false. That is, they do not accurately describe the world, but are consistent nonetheless (one way of understanding such theories is that they truly describe some structure similar to, but distinct from, the standard natural number structure).

In 1931 Kurt Gödel published one of the most important and most celebrated results in 20^{th} century mathematics: the incompleteness of arithmetic. Gödel’s work, however, actually contains two distinct incompleteness theorems. The first can be stated a bit loosely as follows:

*First Incompleteness Theorem*: If *T* is a consistent, sufficiently strong, recursively axiomatizable theory, then there is a sentence “P” in the language of arithmetic such that neither “P” nor “not: P” is provable in *T*.

A few terminological points: To say that a theory is *recursively axiomatizable* means, again loosely put, that there is an algorithm that allows us to decide, of any statement in the language, whether it is an axiom of the theory or not. Explicating what, exactly, is meant by saying a theory is *sufficiently strong* is a bit trickier, but it suffices for our purposes to note that a theory is sufficiently strong if it is at least as strong as standard theories of arithmetic, and by noting further that this isn’t actually very strong at all: the vast majority of mathematical and scientific theories studied in standard undergraduate courses are sufficiently strong in this sense. Thus, we can understand Gödel’s first incompleteness theorem as placing a limitation on how ‘good’ a scientific or mathematical theory *T* in a language *L* can be: if *T* is consistent, and if *T* is sufficiently strong, then there is a sentence *S* in language *L* such that *T* does not prove that *S* is true, but it also doesn’t prove that *S* is false.

The first incompleteness theorem has received a lot of attention in the philosophical and mathematical literature, appearing in arguments purporting to show that human minds are not equivalent to computers, or that mathematical truth is somehow ineffable, and the theorem has even been claimed as evidence that God exists. But here I want to draw attention to a less well-known, and very weird, consequence of Gödel’s other result, the second incompleteness theorem.

First, a final bit of terminology. Given any theory *T*, we will represent the claim that *T* is consistent as “Con(*T*)”. It is worth emphasizing that, if *T* is a theory expressed in language *L*, and *T* is sufficiently strong in the sense discussed above, then “Con(*T*)” is a sentence in the language *L* (for the cognoscenti: “Con(*T*)” is a very complex statement of arithmetic that is *equivalent* to the claim that *T* is consistent)! Now, Gödel’s second incompleteness theorem, loosely put, is as follows:

*Second Incompleteness Theorem*: If *T* is a consistent, sufficiently strong, recursively axiomatizable theory, then *T* does not prove “Con(*T*)”.

For our purposes, it will be easier to use an equivalent, but somewhat differently formulated, version of the theorem:

*Second Incompleteness Theorem*: If *T* is a consistent, sufficiently strong, recursively axiomatizable theory, then the theory:

*T* + not: Con(*T*)

is consistent.

In other words, if *T* is a consistent, sufficiently strong theory, then the theory that says everything that *T* says, but also includes the (false) claim that *T* is inconsistent, is nevertheless consistent (although obviously not true!). It is important to note in what follows that the second incompleteness theorem does not guarantee that a consistent theory *T* does not prove “not: Con(*T*)”. In fact, as we shall see, some consistent (but false) theories allow us to prove that they are not consistent even though they are!

We are now (finally!) in a position to state the main result of this post:

*Theorem*: There exists a consistent theory *T* such that:

*T* + Con(*T*)

is inconsistent, yet:

*T* + not: Con(*T*)

is consistent.

In other words, there is a consistent theory *T* such that adding the __true__ claim “*T* is consistent” to *T* results in a contradiction, yet adding the __false__ claim “*T* is inconsistent” to *T* results in a (false but) consistent theory.

Here is the proof: Let *T*_{1} be any consistent, sufficiently strong theory (e.g. Peano arithmetic). So, by Gödel’s second incompleteness theorem:

*T*_{2} = *T*_{1} + not: Con(*T*_{1})

is a consistent theory. Hence “Con(*T*_{2})” is true. Now, consider the following theories:

*(i) T*_{2} + not: Con(*T*_{2})

*(ii) T*_{2} + Con(*T*_{2})

Since, as we have already seen, *T*_{2} is consistent, it follows, again, by the second incompleteness theorem, that the first theory:

*T*_{2} + not: Con(*T*_{2})

is consistent. But now consider the second theory (ii). This theory includes the claim that *T*_{2} does not prove a contradiction – that is, it contains “Con(*T*_{2})”. But it also contains every claim that *T*_{2} contains. And *T*_{2} contains the claim that *T*_{1} __does__ prove a contradiction – that is, it contains “not: Con(*T*_{1})”. But if *T*_{1} proves a contradiction, then *T*_{2} proves a contradiction (since everything contained in *T*_{1} is also contained in *T*_{2}). Further, any sufficiently strong theory is strong enough to show this, and hence, *T*_{2} proves “not: Con(*T*_{2})”. Thus, the second theory:

*T*_{2} + Con(*T*_{2})

is inconsistent, since it proves both “Con(*T*_{2})” and “not: Con(*T*_{2})”. QED.
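For readers who like their proofs compressed, the argument can be restated schematically (notation as above):

```latex
\begin{align*}
&T_2 \;=\; T_1 + \neg\mathrm{Con}(T_1) && \text{consistent, by G\"odel II applied to } T_1\\
&T_2 \vdash \neg\mathrm{Con}(T_1) && \text{an axiom of } T_2\\
&T_2 \vdash \neg\mathrm{Con}(T_1) \rightarrow \neg\mathrm{Con}(T_2) && \text{since } T_1 \subseteq T_2\text{, provably in any sufficiently strong theory}\\
&T_2 \vdash \neg\mathrm{Con}(T_2) && \text{modus ponens}
\end{align*}
```

Hence *T*_{2} + Con(*T*_{2}) proves both “Con(*T*_{2})” and “not: Con(*T*_{2})” and is inconsistent, while *T*_{2} + not: Con(*T*_{2}) is consistent by a second application of Gödel II.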

Thus, there exist consistent theories, such as *T*_{2} above, such that adding the (true) claim that that theory is consistent to that theory results in inconsistency, while adding the (false) claim that the theory is inconsistent results in a consistent theory. It is worth noting that part of the trick is that the theory *T*_{2} we used in the proof is itself consistent but not true.

This, in turn, suggests the following: in some situations, when faced with a theory *T* where we believe *T* to be consistent, but where we are unsure as to whether *T* is true, it might be safer to add “not: Con(*T*)” to *T* than it is to add “Con(*T*)” to *T*. Given that the majority of our scientific theories are likely to be consistent, but many will turn out to be false as they are overturned by newer, better, theories, this then suggests that sometimes we might be better off believing that our scientific theories are inconsistent than believing that they are consistent (if we take a stand on their consistency at all). But how can this be right?

*Featured image credit: Random mathematical formulæ illustrating the field of pure mathematics. Public domain via Wikimedia Commons.*

Imagine that you are an extremely talented, and extremely ambitious, shepherd, and an equally talented and equally ambitious carpenter. You decide that you want to explore what enclosures, or ‘fences’, you can build, and which groups of objects, or ‘flocks’, you can shepherd around so that they are collected together inside one of these fences.

As you build fence after fence, and form flock after flock, you begin reflecting on how your fences, and the flocks enclosed within them, work. Three things about building fences stand out:

First, you discover that you can not only collect together everyday objects to form flocks by building fences around them, but you can also form flocks of flocks by building large fences around smaller fences. For example, if you have built a fence around St Paul, Minnesota, and you have built another fence around Minneapolis, Minnesota, then you can build a third larger fence that encircles both the fence around Minneapolis and the fence around St Paul, and a new flock is formed that contains both of the original city-flocks.

Second, you discover that you can subdivide flocks into smaller flocks. For example, if you have built a fence around Minneapolis, then you can build a smaller fence separating South Minneapolis from the rest of Minneapolis. And in fact you can do this for any flock and any sub-collection of objects in the flock: If X is some collection of objects contained in a flock you have already formed by building a fence, you can build a second fence inside the confines of the first fence that separates the objects in X from the other objects that are in the flock but not in X, to form a new, smaller flock that contains exactly the objects in X. You can even do this when X is empty, by building a fence so small that it contains nothing at all within its borders.

Third, you discover that fences don’t have a privileged side. Of course, sometimes fences look different on one side than another. But this is only an aesthetic concern–from the perspective of using fences to separate objects into flocks, there is no difference between one side of a fence and the other. As a result, if you build a fence that separates some collection of objects X from all other objects (the non-Xs) and forms the Xs into a flock, then that same fence also collects together the non-Xs and forms them into a flock.

Occasionally you have doubts about this third discovery. But anytime you do, you imagine a fence built along the equator. You then ask yourself: would such a fence be a fence around all of the objects in the Northern Hemisphere, or a fence around all of the objects in the Southern Hemisphere? The only reasonable answer is that such a fence would be both, and what holds for the equator-fence should hold for all fences whatsoever.

Reflecting further on these three discoveries, however, you suddenly discover something disconcerting: building fences is impossible! Imagine that you build a fence–any fence–enclosing some collection of objects X. Then, by the second discovery, you could build a second fence that collected all of the objects in the first flock that are not identical to themselves–since there are no such objects, this would be a small fence forming a flock that had no objects whatsoever in it (what we might call the *empty flock*). But by the third discovery, if this fence separates all of the objects in the empty flock from every other object–that is, all objects whatsoever–then this same fence fences in a second flock that contains every object whatsoever (what we might call the *universal flock*). According to the first discovery, flocks are themselves objects that can be collected together and used to form other flocks. In particular, if the universal flock collects together all objects whatsoever, then the universal flock is in fact one of the objects that is contained in the universal flock. Thus, some flocks are in fact contained in themselves! Hence, by the second discovery, we can build a fence around exactly those objects contained in the universal flock that are themselves flocks that are not contained in themselves. But now we need only ask: Is this final flock contained in itself or not? By definition, it is if and only if it isn’t, and we are faced with a contradiction.

Now, at this point we could argue about whether it is the principles of carpentry, or the principles of shepherding, or both that are to blame. But this would be silly. The puzzle just described is a variant of a familiar set-theoretic paradox that has nothing to do with either carpentry or shepherding: The Russell paradox. We have formulated the puzzle a bit differently than is normally done, however. Instead of merely laying down a comprehension principle that states, loosely put, that for any condition C there is a set that contains exactly the objects that satisfy C, and then constructing the contradictory Russell set from there, we instead arrived at the paradox via combining two independently plausible set-theoretic principles:

- Given any set S, and any subcollection of objects X all of which are members of S, there exists a set that contains exactly those objects in X.
- Given any set S, there exists a set that contains exactly those objects that are not in S.

The first principle is an informal version of the Axiom of Separation, which is one of the standard axioms of the widely studied and widely accepted theory of sets known as Zermelo-Fraenkel Set Theory with Choice (or ZFC). The second principle is the Axiom of Complement, which is not an axiom or theorem of ZFC (since if it were we could derive a version of the paradox above). But it is an axiom of other, alternative set theories, such as W.V.O. Quine’s New Foundations (NF).
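The difference in logical strength between the two axioms can be illustrated with a toy computational model (entirely my own construction, using Python frozensets to stand in for flocks). Separation lets us carve a ‘Russell flock’ out of any given flock, yet no contradiction follows, because the new flock need not be a member of the flock it was carved from; an unrestricted Complement, by contrast, would require something frozensets cannot supply, namely a universal flock.

```python
# Flocks modelled as Python frozensets; Separation carves a sub-flock of
# an existing flock using an arbitrary membership condition.
def separation(flock, condition):
    """Axiom of Separation: the sub-flock of members satisfying condition."""
    return frozenset(x for x in flock if condition(x))

empty = frozenset()
singleton = frozenset([empty])
flock = frozenset([empty, singleton])

# Russell's condition: members that are not members of themselves.
russell = separation(flock, lambda x: x not in x)

# No frozenset is a member of itself, so every member qualifies...
assert russell == flock
# ...but the resulting flock is not a member of the flock it came from,
# so "is russell a member of itself?" gets a harmless answer of no:
assert russell not in flock
assert russell not in russell
print("Separation alone yields no contradiction")
```

The paradox only bites when the Russell flock is forced to be a member of the very collection it was carved from, which is exactly what combining Separation with Complement (via the universal flock) achieves in the story above.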

Now, obviously we can’t have both Separation and Complement as axioms of our set theory. Most of us trained in mainstream contemporary mathematics have been taught that there is something natural and almost inevitable about rejecting Complement and retaining Separation in the face of the Russell paradox. But that seems wrong to me. As I have tried to show with the fence-building story told above, both of these axioms have a rather strong appeal grounded on basic intuitions one might have about collecting objects. Perhaps we–that is, the philosophical and mathematical community–have been too quick to opt for Separation (and hence ZFC). Separation does seem intuitively obvious, but then so does Complement–or at least it does to me.

So what exactly are we to do?

*Featured image credit: ‘Sheep’, by Hans. Public domain via Pixabay.*

The Liar paradox is often informally described in terms of someone uttering the sentence:

I am lying right now.

If we equate lying with merely uttering a falsehood, then this is (roughly speaking) equivalent to a somewhat more formal, more precise version of the paradox that arises by considering a sentence like:

This sentence is false.

If we accuse someone of lying, however, we don’t typically mean that someone merely told a falsehood. For example, if someone tells you that the Earth is hollow because they truly believe that to be the case, we wouldn’t typically call that person a liar. Instead, we would be more likely to accuse them merely of getting things wrong. In short, what seems important about lying is not the falsity of the utterance, but rather the intent to deceive.

Thus, we might adopt the following definition of lying (the subscript is to distinguish this understanding from a second understanding of lying I will introduce below):

S is lying_{1} when she utters P if and only if:

1. P is false.
2. S believes that P is false.
3. S intends the listener to believe that P is true.

Notice that condition (3) is required since there might be all sorts of reasons other than deception for uttering a claim that we believe is false. We might be reasoning hypothetically, or discussing a fiction, or reading a mistaken historical text aloud, or engaging in a multitude of other uses of language. But, for the remainder of this post, let’s restrict our attention to situations where the speaker is making an utterance in order to convince the audience that the utterance in question is true. In such cases, we can simplify our definition to:

S is lying_{1} when she utters P if and only if:

- P is false.
- S believes that P is false.

Given this somewhat more sophisticated account of lying, we can now ask the obvious question: is a straightforward utterance of the lying_{1} variant of the familiar Liar paradox sentence paradoxical?:

L_{1}: I am lying_{1} right now.

First, note that L_{1} can’t be true: If L_{1} were true, then the speaker would have to be lying_{1} when uttering L_{1}. But the speaker can only be lying_{1} when uttering L_{1} if L_{1} is false. So if L_{1} is true then L_{1} is false. Contradiction. So L_{1} can’t be true.

Thus, L_{1} must be false. If L_{1} is false, then the speaker must not be lying_{1} when uttering L_{1}. The speaker is lying_{1} if and only if L_{1} is false and she believes that L_{1} is false. But L_{1} is false, so the speaker must not believe that L_{1} is false.

We arrive at the following interesting conclusion: If the speaker believes that L_{1} is false, then any utterance of L_{1} by the speaker is paradoxical – it reduces to a variant of more familiar versions of the Liar paradox. If, however, the speaker does not believe that L_{1} is false then L_{1} is not paradoxical but false. Notice that the non-paradoxicality of L_{1} does not require that the speaker get things *wrong*. She does not have to mistakenly believe that L_{1} is true. Instead, she might merely have no opinion about the truth-value of L_{1}. Finally, if the speaker herself carries out the first piece of reasoning, and comes to believe that L_{1} is false (e.g. if the speaker is taken to be logically omniscient), then L_{1} is no longer false, but is instead a genuine paradox. So an assertion of “I am lying right now” – where lying is understood as asserting a falsehood that one believes is a falsehood – is not prima facie paradoxical (although it becomes paradoxical if we assume the utterer believes everything provable).
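This case analysis can be checked by brute force. A minimal sketch (the encoding is mine, not from the post): represent a situation by the truth value of L₁ together with whether the speaker believes L₁ is false, and call a situation stable when L₁'s truth value agrees with what L₁ says.

```python
from itertools import product

def is_lying1(l1_true: bool, believes_false: bool) -> bool:
    """lying_1: the utterance is false AND the speaker believes it is false."""
    return (not l1_true) and believes_false

def stable(l1_true: bool, believes_false: bool) -> bool:
    """L1 says 'I am lying_1 right now', so L1 is true iff the speaker is lying_1."""
    return l1_true == is_lying1(l1_true, believes_false)

cases = [(v, b) for v, b in product([True, False], repeat=2) if stable(v, b)]
# Only one stable case survives: L1 is false and the speaker does not believe
# it false. When believes_false is True, no truth value is stable: that is
# the paradoxical case identified above.
print(cases)  # [(False, False)]
```

The enumeration reproduces the conclusion in the text: utterances of L₁ by a speaker with no opinion about its truth value are simply false, while a speaker who believes it false generates a genuine paradox.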

It is worth noting, however, that we sometimes accuse someone of lying even if they say something true. For example, if someone truly believes that the Earth is hollow, but tells you that it is not in order to hide the existence of some (mistakenly believed-in) underground society from you, it seems right to say that the person has lied even though what they said is true. In short, lying might not require uttering a falsehood, but might instead merely be an utterance that is intended to deceive. Given this idea, we can consider another, somewhat simpler definition of lying:

S is lying_{2} when she utters P if and only if:

1. S believes that P is false.
2. S intends the listener to believe that P is true.

The intent condition is required for the same reasons as before, but as before we can restrict our attention to situations where the speaker is making an utterance in order to convince the audience that the utterance in question is true, arriving at the following simplified account:

S is lying_{2} when she utters P if and only if:

- S believes that P is false.

Again, the obvious question: is a straightforward utterance of the lying_{2} variant of the familiar Liar paradox sentence paradoxical?:

L_{2}: I am lying_{2} right now.

The answer is easy, since L_{2} is just equivalent, given our definition of “lying_{2}”, to a familiar puzzle regarding self-reference and belief:

I believe that this sentence is false.

This sentence is not paradoxical regardless of the speaker’s beliefs: if the speaker believes that L_{2} is false, then L_{2} is true, and the speaker is lying_{2} (but not lying_{1}), and if the speaker does not believe that L_{2} is false, then L_{2} is false, and hence the speaker is not lying_{2} (nor is she lying_{1}). In short, any utterance of L_{2} is either a case where L_{2} is false, or a case where the speaker believes L_{2} is false, but not both.
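The same brute-force check works here, with lying₂ reduced to the belief condition alone (again, the encoding is my own):

```python
from itertools import product

def stable(l2_true: bool, believes_false: bool) -> bool:
    # lying_2 just means believing one's own utterance to be false, and L2 says
    # 'I am lying_2 right now', so L2 is true iff the speaker believes L2 false.
    return l2_true == believes_false

cases = [(v, b) for v, b in product([True, False], repeat=2) if stable(v, b)]
print(cases)  # [(True, True), (False, False)]
```

Exactly as the text says: every stable case makes exactly one of "L₂ is false" and "the speaker believes L₂ is false" hold, never both.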

Another way of putting all of this is as follows: when uttering L_{2}, the speaker believes they are not lying_{2} (i.e. believes the negation of L_{2}) if and only if they are in fact lying_{2}. Hence, on the lying_{2} understanding, whether or not one is lying is not something one can always know, even though whether one is lying depends solely on what one believes. This is not quite a paradox, but is puzzling nonetheless.

*Featured image credit: Smoke, by Carsten Schertzer. CC-BY-2.0 via Flickr.*

While most of you probably don’t believe in Santa Claus (and some of you of course never did!), you might not be aware that Santa Claus isn’t just imaginary, he is impossible!

In order to show that the very concept of Santa Claus is riddled with incoherence, we first need to consult the canonical sources to determine what properties and powers this mystical man in red is supposed to have. John Frederick Coots and Haven Gillespie tell us, in the 1934 classic “Santa Claus is Coming to Town,” that:

He sees you when you’re sleeping.

He knows when you’re awake.

He knows if you’ve been bad or good.

So be good for goodness sake!

But can Santa always know if you’ve been naughty or nice?

First of all, it is worth making a rather simple observation: If one tells a lie, then one is being naughty, and if one is telling the truth, then one is being nice (unless one is doing something else naughty at the same time, a possibility we shall explicitly rule out below). After all, my mother, an expert on the subject, told me many times that lying is naughty and truth-telling is nice, by both her own lights and by Santa’s. You wouldn’t call my mother a liar, would you?

Now, consider Paranoid Paul. Paul, who is constantly worried about whether he has been nice enough to get presents from Santa, at some point utters:

Santa knows I’m being naughty right now!

Assume, further, that Paul is not doing anything else that could be legitimately assessed as naughty or nice. Now, the questions are these: Is Paul being naughty or nice? And can Santa know which?

Clearly, Paul isn’t being naughty: If he is being naughty, then Santa would know that he is being naughty, via the magical powers attributed to Santa in the aforementioned carol. But if Santa knows that Paul is being naughty, then what Paul said is true, so Paul isn’t lying. But since he isn’t doing anything else that could be assessed as naughty, Paul isn’t being naughty after all.

But equally clearly, Paranoid Paul isn’t being nice: If he is being nice, then Santa would know he is being nice. But that would imply that Santa doesn’t know that Paul is being naughty, by the *Principle of Christmas Non-Contradiction*:

PCNC: No single action is simultaneously naughty and nice.

But then Paul is lying, since he said that Santa does know that he’s being naughty. And lying is naughty, so Paul isn’t being nice after all.

Of course, there is nothing to prevent Paranoid Paul from uttering the utterance in question. And, if he does so, then surely he is either being naughty or being nice – what other Christmas-relevant moral categories are there? (We might call this the *Principle of Christmas Bivalence!*) So the problem must lie in the mysterious magical powers attributed to Santa Claus in the song. Thus, Santa Claus can’t exist.

This Christmas revelation is probably shocking enough to most of you. But I’m afraid it gets worse.

It is well-known that Santa Claus gives presents to children on Christmas night. What is most important for our purposes are the two strict rules that govern Santa Claus’s Christmas gift-giving. The first of these we might call the *Niceness Rule*:

*Nice*: If a child has been nice (overall), then he or she will receive the toys and gifts he or she desires (within reason).

And the second we can call the *Naughtiness Rule*:

*Naughty*: If a child has been naughty (overall), then he or she will receive coal (and nothing else).

Following these rules is an essential part of what it is to be Santa Claus – these rules codify his place and purpose in the universe. Thus, they are non-negotiable: Santa does not, and cannot, break them.

Let’s again consider Paranoid Paul, who as usual is worrying about his status with respect to the naughtiness/niceness metric. Assume further that, at one minute before midnight on December 24^{th} (the well-known deadline for Santa’s final yearly naughty/nice judgments) Paul’s actions over the past year have, unbeknownst to him, fallen precisely on the line separating the overall naughty and the overall nice. He only has time for one more action, and if it is nice, then he will get the presents he wants, and if it is naughty then he will only get coal. Paul, who is aware that his behavior over the past year has been less than exemplary, utters:

I’m going to get coal for Christmas this year!

Such an utterance prevents Santa Claus from giving anything – coal or goodies – to Paranoid Paul.

Santa can’t give Paranoid Paul toys and gifts. If Santa gives Paul toys and gifts, then Paul won’t get coal, so Paul’s utterance was a lie. This would push him over into overall naughtiness. But then Santa should have given him coal, not goodies.

But Santa also can’t give Paul coal. If he gives Paul coal, then Paul was telling the truth. This would push Paul into overall niceness. But then Santa should have given him toys and presents, not coal.

Thus, we once again see that the very concept of Santa Claus is outright inconsistent. And since Santa Claus is an integral part of Christmas, this means that Christmas itself is incoherent, and hence must not exist.

And that’s how the Grinch proved that there’s no Christmas!

I hope everyone who reads this column (regardless of which, if any, winter holiday you celebrate) has a wonderful month and a safe winter holiday! See everyone next year, and thanks!

*Featured image credit: Christmas present. Public domain via Pixabay.*

Paradox and self-evident sentences

According to philosophical lore many sentences are self-evident. A self-evident sentence wears its semantic status on its sleeve: a self-evident truth is a true sentence whose truth strikes us immediately, without the need for any argument or evidence, once we understand what the sentence means (and a self-evident falsehood wears its falsity on its sleeve in the same manner). Some paradigm examples of self-evident truths, according to those who believe in such things at least, include the law of non-contradiction:

No sentence is both true and false at the same time.

which was championed as self-evidently true by Aristotle, and:

1+1 = 2

Note that if a claim is self-evidently true, then its negation is self-evidently false.

Now, we seem to have good reasons for finding the following claim at least initially plausible:

No self-referential statement is self-evident (whether true, false, or otherwise).

One thing that becomes somewhat obvious once we look at self-referential sentences like the Liar paradox:

This sentence is false.

and the self-referential sentences discussed here, here, and here, is that determining whether a particular self-referential sentence is true or false (or paradoxical, etc.) usually involves a lot of work, typically in the form of careful and complicated deductive reasoning.

Surprisingly, however, we can show that some self-referential sentences are self-evident. In particular, we will look at a self-referential sentence that is self-evidently false.

Of course, anyone who has read even a single installment in this series will likely guess that some sort of trick is coming up. Thus, in order to highlight exactly what is weird, and what is logically interesting, about the apparently self-evidently false self-referential sentence that we are going to construct, let’s first look at a more well-known self-referential puzzle.

The puzzle in question is the paradox of the knower (also known as the Montague paradox). Consider the following self-referential sentence:

This sentence is known to be false.

Now, we can easily prove that this sentence, which I shall call the ‘Knower’, is false: Assume that the Knower is true. Then what it says must be the case. It says that it is known to be false. So the Knower must be known to be false. But knowledge is what philosophers call *factive*: for any sentence P, if you know that P is the case, then P must be the case. So, since the Knower is known to be false, the Knower must be false. But then, the Knower is both true and false. Contradiction. So the Knower is false. QED.

Further, a little trial and error will show that you can’t give a simple proof like the one above to show that the Knower is true. If you assume it is false, all you can conclude is that it is false but not known to be false, and that is not contradictory at all.

But wait! Two paragraphs earlier we gave a proof that the Knower is false. Proofs generate knowledge, however: if you read through that paragraph and were paying attention (and I hope you were!) then you *know* that the Knower is false. So the Knower is known to be false, since you know it to be so. But that’s just what the Knower says. So it is true after all. *Now* we have a genuine paradox!
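The two stages of reasoning can be compressed into a short derivation. A sketch (the notation is mine): write $\kappa$ for the Knower, $K$ for "it is known that", and treat "$\kappa$ is false" as $\neg\kappa$:

```latex
\begin{align*}
&\kappa \leftrightarrow K(\neg\kappa) && \text{what the Knower says}\\
&\kappa \;\Rightarrow\; K(\neg\kappa) \;\Rightarrow\; \neg\kappa && \text{factivity: } K(P) \rightarrow P\\
&\therefore\ \neg\kappa && \text{reductio on the line above}\\
&K(\neg\kappa) && \text{grasping that proof yields knowledge}\\
&\therefore\ \kappa && \text{by the first line: paradox}
\end{align*}
```

The fourth line is the distinctive step: it is reasoning *about* the reductio, not a further step within it.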

Notice, however, that the two pieces of reasoning that we used to generate the paradox – the reasoning used to conclude that the Knower is false, and the reasoning used to conclude that the Knower is true – are of very different types. The first bit of reasoning is just a straightforward deduction about the sentence we are calling the Knower (well, as straightforward as such reasoning about self-referential sentences gets). The second bit of reasoning is different, however: in order to conclude that the Knower is true, we didn’t reason directly about the sentence we are calling the Knower, but instead carried out a second bit of reasoning about the first bit of reasoning.

In other words, we have a proof that plays two roles: First, it shows that the Knower is false, since its conclusion just is that the Knower is false. Second, it shows that the Knower is true, since our recognition of the existence of such a proof is enough to ensure that we have knowledge of the truth of the Knower.

Something like this is also going on in the example to which we now turn: the *paradox of self-evidence*. Consider the following sentence:

This sentence is false, but not self-evidently false.

Let’s call this sentence the Self-evidencer. Now, we can prove that the Self-evidencer is self-evidently false.

First, we prove that the Self-evidencer is false: assume that the Self-evidencer is true. Then what it says must be the case. It says that it is false, but not self-evidently false. So the Self-evidencer is false, but it is not self-evidently false. But this means that the Self-evidencer is both true and false. Contradiction. So the Self-evidencer is false.

Now, we can prove that it is self-evidently false: given the previous paragraph, we know that the Self-evidencer is false. So what it says must not be the case. The Self-evidencer says that it is false, but not self-evidently so. So it must not be the case that the Self-evidencer is both false and not self-evidently false. So (by a basic logical law known as De Morgan’s law) either the Self-evidencer is not false, or it is self-evidently false. But the Self-evidencer is false. So it must also be self-evidently false. QED.

Again, like the Knower, there is no obvious contradiction or paradox lurking in the above argument – we have merely proven that the Self-evidencer is self-evidently false, similarly to how we might prove that the following sentence is true:

This sentence is either true or false.

But herein lies the problem. It seems like the only way that we can come to know that the Self-evidencer is self-evidently false is via a complicated bit of reasoning like the one we just gave. It seems unlikely that anyone will think that the falsity of the Self-evidencer is obvious, or forces itself on us, immediately once we understand the sentence.

Thus, we again have a proof that plays two roles. On the one hand, it seems to provide us with knowledge that the Self-evidencer is self-evidently false, since that is its conclusion. On the other hand, however, the fact that we can only come to this knowledge via this rather complicated proof (or some bit of reasoning equivalent to it) seems to be indirect evidence that the Self-evidencer is not self-evident after all. Contradiction.

*Featured image: Paradox by Brett Jordan. CC BY 2.0 via Flickr.*

In a 1929 lecture, Martin Heidegger argued that the following claim is true: “Nothing nothings.” In German: “Das Nichts nichtet”. Years later Rudolf Carnap ridiculed this statement as the worst sort of meaningless metaphysical nonsense in an essay titled “Overcoming of Metaphysics Through Logical Analysis of Language”. But is this positivistic attitude reasonable? Is the sentence as nonsensical as Carnap claimed?

In this essay I want to examine Heidegger’s claim that nothing nothings. I will argue that there are at least two ways to read the claim, and on either reading the claim comes out as true (at least, given certain common and plausible assumptions regarding the underlying logic). In addition, the truth of a slight modification of the claim hinges on the outcome of a metaphysical debate currently raging in the philosophical literature.

Before arguing for any of this, however, the following caveat is important to note: I am *not* claiming that any of the claims or interpretations given below were, or even should have been, held by Heidegger. In short, I am *not* interested (at least for the purposes of this essay) in sorting out in detail why *Heidegger* believed that “Nothing nothings” is true. Rather, I am interested in whether *we* should believe that this sentence is true, and if so, why.

I will divide the task of understanding the sentence “Nothing nothings” into two parts. The first and simpler part is to determine how to understand a sentence of the form “Nothing *F*s” where *F* is some arbitrary predicate. “Nothing *F*s” (or, equivalently, “Nothing is *F*”) is, from a logical perspective, equivalent to the following claim:

(1) It is not the case that there exists an object *x* such that *x* is *F*.

So far, so good. The second, and somewhat more complicated, task is to sort out how we should understand a sentence of the form “*t* nothings”, where *t* is some arbitrary name. First, we shall assume that “*t* nothings” is equivalent to “*t* is nothing”. Then the question becomes this: do we read the “is” in “*t* is nothing” as the “is” of identity, or the “is” of predication? On the “is” of identity reading, “*t* is nothing” becomes something like:

(2) *t* is not identical to anything.

Or, even more simply:

(3) *t* does not exist.

On the “is” of predication reading, “*t* is nothing” becomes something like:

(4) *t* does not have any properties holding of it.

Now, in order to better understand “Nothing nothings”, we need only combine the recipe illustrated in (1) for statements of the form “Nothing is *F*” with the recipe in (3) and (4) for statements of the form “*t* is nothing”. Thus, “Nothing nothings”, on the “is” of identity reading, is just:

(5) It is not the case that there exists an object *x* such that *x* does not exist.

This statement is easily formalized in the standard classical first-order logic taught to undergraduates, and is a logical truth. Thus, not only is “Nothing nothings” true on this reading, it is true as a matter of logic alone.
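Reading "x exists" as "x is identical to something", (5) can be rendered in first-order logic with identity as (the rendering, though standard, is my own):

```latex
\neg \exists x\, \neg \exists y\, (y = x)
```

This is a theorem of classical first-order logic: $x = x$ is an axiom, so $\exists y\,(y = x)$ holds of every object, and hence no object fails to be identical to something.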

Things are slightly more complicated on the “is” of predication reading. If we combine the recipes illustrated by (1) and (4) above, we get the following:

(6) It is not the case that there exists an object *x* such that *x* has no properties holding of it.

Since it involves generalizing over properties rather than merely generalizing over objects, formalizing this statement requires what is known as second-order quantification. The logical status of second-order quantification is a matter of some philosophical debate. Nevertheless, those logicians who do accept second-order quantification as legitimate and logical almost unanimously accept that sentence (6) is a logical truth, and even those that don’t think second-order quantification is logic proper typically accept that (6) is true (and perhaps even necessarily true).
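Rendered in second-order notation (again, my own rendering), (6) becomes:

```latex
\neg \exists x\, \neg \exists F\, F(x)
```

In standard second-order systems $\exists F\,F(x)$ is witnessed by, for example, the property of being self-identical, which holds of every object, so (6) comes out as a theorem.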

Thus, Heidegger’s claim seems straightforwardly true (at least, on these ways of understanding it, which as I noted at the outset, might not be the way that Heidegger understood it). But what happens if we modify the statement slightly, inserting the word “possibly” and obtaining:

(7) Nothing possibly nothings.

On the “is” of predication reading, this becomes:

(8) It is not the case that there exists an object *x* such that it is possible that *x* has no properties holding of it.

In other words, “Nothing possibly nothings”, on the “is” of predication reading, amounts to the claim that every object that exists not only has some properties that hold of it, but in addition *must* have properties holding of it (i.e. it is impossible that no properties hold of it). This claim is a bit obscure, but is accepted by most logicians who work on systems containing both second-order quantification and modal notions like “necessity”, “possibility”, and “impossibility”.

It is the “is” of identity reading of “Nothing possibly nothings” that is really interesting, however. On this reading, “Nothing possibly nothings” becomes something like:

(9) It is not the case that there exists an object *x* such that it is possible that *x* didn’t exist.

This is equivalent to the slightly less cumbersome:

(10) Everything that actually exists, necessarily exists.

This statement expresses a metaphysical view known as *necessitism*: the view that every object that exists at all exists necessarily. According to necessitism it is impossible that you, or that chair, or this blog post, could have failed to exist (i.e. there is no possible way that the world could have turned out where you, or that chair, or this blog post didn’t exist).
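In quantified modal logic, (9) and (10) are commonly rendered along the following lines (the formalization here is mine; Williamson's own preferred formulation prefixes a further necessity operator):

```latex
\neg \exists x\, \Diamond\, \neg \exists y\,(y = x)
\qquad \Longleftrightarrow \qquad
\forall x\, \Box\, \exists y\,(y = x)
```

The equivalence is just quantifier-modal duality: "no object possibly fails to be identical to something" and "every object necessarily is identical to something" say the same thing.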

Necessitism has been recently defended by Timothy Williamson, in *Modal Logic as Metaphysics*. While his defense of necessitism is subtle and interesting, the view is extremely counterintuitive, and thus remains a subject of much contention within metaphysics and the philosophy of logic. In short, although “Nothing nothings” seems, at least on the readings given above, uncontroversially true, whether “Nothing possibly nothings” is true remains an exciting open question in philosophical research.

*Image credit: The Scream by Edvard Munch. Public domain via WikiArt.*

Imagine that, on a Tuesday night, shortly before going to bed, your roommate says “I promise to only utter truths tomorrow.” The next day, your roommate spends the entire day uttering unproblematic truths like:

- 1 + 1 = 2.
- The grass is green.
- The sky is blue.

She continues on, in this vein, until going to bed. As she is about to fall asleep (and we assume she goes to bed before midnight), she proudly pronounces:

I kept my promise.

The question is this: Has she?

Your roommate’s pronouncement has a similar logical form to the truth-teller:

This sentence is true.

Unlike the Liar paradox:

This sentence is false.

which is true if false, and false if true, the truth-teller is true if true, and false if false. So it is indeterminate between the two truth-value assignments – it could be either one, and no inconsistency, incoherence, or any other sort of problem arises either way.

Likewise, your roommate’s pronouncement is, logically speaking, indeterminate. If we assume that it is true, then in fact every one of her pronouncements on Wednesday was true, and hence she kept her promise. If, however, we assume it is false, then in fact it is not the case that each of her pronouncements on Wednesday was true, and so she failed to keep her promise.

But there seems (in my mind, at least) to be a strong intuitive push to attribute truth to your roommate’s assertion. In other words, it would be totally perverse to respond to your roommate’s assertion along the lines of:

Well – perhaps not. Maybe you are lying right now!

But why is this? After all, nothing in the logic of the situation seems to privilege an assignment of truth to your roommate’s assertion over an assignment of falsity.

It is worth examining how other, superficially similar situations work. First, note that if your roommate had, on Tuesday, promised to only tell truths on Wednesday, and then said true things throughout the course of the day on Wednesday, and then, just before falling asleep, finished off the day with:

I didn’t keep my promise.

then we have a paradox. If the last sentence is true, then every sentence uttered by your roommate is true, so she kept her promise, so the last sentence must be false. But if the last sentence is false, then your roommate did not utter only truths on Wednesday, so she didn’t keep her promise, so the last sentence must be true.

Similarly, if your roommate, on Tuesday, promised to only utter falsehoods on Wednesday, and then on Wednesday said only false things like:

- 1 + 1 = 3.
- Grass is blue.
- The sky is green.

during the day and finished up the day with:

I kept my promise.

then we, again, have a paradox.

The final case, however, is the most interesting. Imagine that your roommate, on Tuesday night, promised to only state falsehoods on Wednesday, and then made only false claims during the next day, and finished up the day with:

I didn’t keep my promise.

Here there is no paradox. If the final claim is true, then she didn’t keep her promise, because at least one of her assertions (in fact, this last one) is not false. If the claim is false, however, then she did keep her promise, but is lying to us about it.

So this fourth case, like the one we started with, is indeterminate: there seems to be nothing in the logic of the situation that privileges either assignment – truth or falsity – to the final claim made by your roommate.
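All four roommate scenarios can be run through one small model. A sketch (the names and encoding are mine): a scenario is fixed by what was promised (only truths or only falsehoods) and by whether the final claim asserts that the promise was kept; we then ask which truth values for the final claim are stable.

```python
def stable_values(promised_truths: bool, claims_kept: bool):
    """Stable truth values for the roommate's final bedtime utterance.

    promised_truths: she promised to utter only truths (else: only falsehoods).
    claims_kept: the final utterance is 'I kept my promise' (else its negation).
    All daytime utterances match the promise by stipulation, so the promise is
    kept iff the final utterance also matches the promised truth value."""
    stable = []
    for v in (True, False):
        kept = (v == promised_truths)               # was every utterance as promised?
        says = kept if claims_kept else (not kept)  # what the final claim asserts
        if v == says:                               # stable: value matches content
            stable.append(v)
    return stable

# The four scenarios discussed above:
for promised, claims in [(True, True), (True, False), (False, True), (False, False)]:
    print(promised, claims, stable_values(promised, claims))
```

The first and fourth scenarios come out with two stable values (indeterminate), the second and third with none (paradoxical), matching the case analysis in the text.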

But what is interesting in this case is that (again, at least in my mind) there is no intuitive ‘push’ to assign truth to your roommate’s claim, rather than falsity. This makes it rather different from the first case. The question is why it is so different.

The answer, I think, lies in looking at more than the logic of the sentences in question. In particular, it lies in noticing that, when we are communicating linguistically, we are implicitly agreeing to live up to certain expectations, and interpreters construct their understandings of our utterances (and their truth values) in part in virtue of these same expectations.

The first such default expectation on the part of interpreters is that we will tell the truth (at least most of the time). Thus, listeners are justified in adopting something like the following principle when interpreting our utterances.

*Truth-telling Principle*:

All else being equal, I should attempt to interpret speakers as if they are telling the truth.

Of course, things are not always equal. But the upshot of this principle is this: when confronted with an apparently sincere utterance that has two interpretations, where on one interpretation the utterance is true, and on the other interpretation the utterance is false, and where there are no other reasons to prefer one of these understandings of the utterance over the other, listeners should opt for the interpretation that makes the assertion true, since this accords with the expectation that speakers generally tell the truth.

The second expectation is that we will keep our promises (again, at least most of the time), and hence we have a second principle:

*Promise-keeping Principle*:

All else being equal, I should attempt to interpret speakers as if they are keeping their promises.

Again, things are not always equal. But the upshot of this principle is this: when confronted with an apparently sincere utterance that has two interpretations, where on one interpretation the speaker is keeping a promise, and on the other interpretation the speaker is failing to keep that same promise, and where there are no other reasons to prefer one of these understandings of the utterance over the other, listeners should opt for the interpretation where the speaker is keeping her promise, since this accords with the expectation that speakers generally keep their promises.

We can now easily explain the intuitive difference between the first and fourth case described above. In the first case we have two choices: either your roommate is telling the truth (and hence also keeping her promise), or your roommate is lying (and hence failing to keep her promise). Both of the two principles described above weigh in favor of the first option, however, so we are strongly motivated to assign truth to your roommate’s utterance.

In the fourth case, however, things don’t work out so uniformly. Again, we have two choices: either your roommate is telling the truth and hence failing to keep her promise, or your roommate is lying and hence keeping her promise. What we called the Truth-telling Principle weighs in favor of the first option, but the Promise-keeping Principle weighs in favor of the second option. The two principles conflict, and hence we remain undecided between the two options, with no intuitive preference for one over the other.

*Image credit: “Truth”, by Daveblog. CC BY 2.0 via Flickr.*

A *Liar cycle* is a finite sequence of sentences where each sentence in the sequence except the last says that the next sentence is false, and where the final sentence in the sequence says that the first sentence is false. Thus, the 2-Liar cycle (also known as the *No-No paradox* or the *Open Pair*) is:

C_{1}: Sentence C_{2} is false.

C_{2}: Sentence C_{1} is false.

And the 3-Liar cycle is:

C_{1}: Sentence C_{2} is false.

C_{2}: Sentence C_{3} is false.

C_{3}: Sentence C_{1} is false.

The Liar paradox itself is just the 1-Liar cycle (where the Liar sentence plays the role of both the first sentence and the last sentence in the sequence of length one):

C_{1}: Sentence C_{1} is false.

We can prove, for any finite number *n*, that if *n* is odd then there is no stable assignment of truth and falsity to the sentences – that is, the sequence is paradoxical – and that if *n* is even then there are exactly two distinct stable assignments of truth and falsity (the trick is noticing that any stable assignment must alternate between true sentences and false sentences).
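This even/odd pattern can also be checked by brute force for small *n*. The following sketch (the function name and encoding are my own, not part of the original discussion) enumerates every assignment of truth values to an *n*-Liar cycle and keeps the stable ones – those where each sentence is true exactly when what it says is the case:

```python
from itertools import product

def stable_assignments(n):
    """Return the stable truth-value assignments for an n-Liar cycle,
    where sentence i says that sentence (i+1) mod n is false.  An
    assignment is stable when each sentence is true exactly when what
    it says is the case."""
    return [vals for vals in product([True, False], repeat=n)
            if all(vals[i] == (not vals[(i + 1) % n]) for i in range(n))]

# Odd cycles (including the Liar itself, n = 1) are paradoxical;
# even cycles have exactly two stable assignments.
assert len(stable_assignments(1)) == 0
assert len(stable_assignments(2)) == 2
assert len(stable_assignments(3)) == 0
assert len(stable_assignments(4)) == 2
```

For even *n* the two survivors are exactly the alternating assignments (true, false, true, false, …) and (false, true, false, true, …), which is the “trick” mentioned above.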

The closely related Curry paradox arises by considering a conditional statement (an “if… then…” statement) that says that its own truth implies that some completely unrelated sentence holds. Here we are assuming that the conditional in question is what logicians call a *material conditional*: an “if… then…” statement that is false if and only if the antecedent (the “if…” bit) is true and the consequent (the “then…” bit) is false, and is true otherwise. Here is a typical Curry conditional:

C_{1}: If C_{1} is true, then Santa Claus exists.

We can use the Curry conditional above, plus straightforward platitudes about truth (i.e. that a sentence is true if and only if what it says is the case) to prove that Santa Claus exists:

*Proof*: Assume (for *reductio ad absurdum*) that the Curry conditional is false. Then the antecedent of the Curry conditional is true (and the consequent false). The antecedent of the Curry conditional says that the Curry conditional is true. Since the antecedent is true, what it says must be the case. Hence the Curry conditional is true, contradicting the assumption with which we began.

Thus, the Curry conditional cannot be false, so it must be true. But if the Curry conditional is true, then what it says must be the case. The Curry conditional says that, if the Curry conditional is true, then Santa Claus exists. So if the Curry conditional is true, then Santa Claus exists. But we already established that the Curry conditional is true. Hence Santa Claus exists. QED.
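The proof can be mirrored by a brute-force search over truth values. Reading the Curry conditional materially, a pair of values for the conditional and for “Santa Claus exists” is stable only when the conditional’s value matches the value of “if C_{1} is true, then Santa Claus exists”. A small sketch (my own encoding, offered purely as an illustration):

```python
from itertools import product

# c = truth value of the Curry conditional C1
# q = truth value of "Santa Claus exists"
# C1 is the material conditional "C1 is true -> q", so stability
# requires c == ((not c) or q).
stable = [(c, q) for c, q in product([True, False], repeat=2)
          if c == ((not c) or q)]

# The only stable assignment makes both the conditional and
# "Santa Claus exists" true -- exactly the conclusion of the proof.
assert stable == [(True, True)]
```

In other words, the truth-platitudes force the consequent to be true no matter how absurd it is, which is what makes the Curry paradox so troubling.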

Interestingly, Curry cycles have not, to my knowledge, been investigated until now. A *Curry cycle* is a finite sequence of conditionals where each conditional in the sequence except the last says that if the next conditional is true, then some clearly false sentence holds, and where the final conditional in the sequence says that if the first conditional is true, then some clearly false sentence holds. The following is an example of the 2-Curry cycle:

C_{1}: If conditional C_{2} is true then Santa Claus exists.

C_{2}: If conditional C_{1} is true then the Easter Bunny exists.

And the following is a 3-Curry cycle:

C_{1}: If conditional C_{2} is true then Santa Claus exists.

C_{2}: If conditional C_{3} is true then the Easter Bunny exists.

C_{3}: If conditional C_{1} is true then the Great Pumpkin exists.

The Curry paradox itself is, of course, just the 1-Curry cycle.

Now, if *n* is a finite even number, then (similar to Liar cycles) the *n*-Curry cycle is not paradoxical (where here a paradox arises if we are forced to accept as true one of the clearly false consequents). In fact, each such cycle has two distinct stable truth value assignments where all the consequents are false (hint: every other conditional is true).

Things get more interesting when we look at Curry cycles of odd length, however. These are paradoxical, but in a certain sense not *as* paradoxical as one might think. One might guess that the 3-Curry cycle above would allow us to prove that Santa Claus exists, *and* prove that the Easter Bunny exists, *and* prove that the Great Pumpkin exists. But we can’t prove *any* of these. What we can prove, however, is:

Either Santa Claus exists, or the Easter Bunny exists, or the Great Pumpkin exists.

*Proof*: Assume that the offset claim above is false. So “Santa Claus exists” is false, “The Easter Bunny exists” is false, and “The Great Pumpkin exists” is false. We will show that this assumption leads to a contradiction (and hence that the offset claim above must be true after all). Now, either the conditional C_{1} is true, or it is false.

*Case 1*: The conditional C_{1} is true. The antecedent of conditional C_{3} says that conditional C_{1} is true, so the antecedent of conditional C_{3} is true. Thus, the conditional C_{3} has a true antecedent and false consequent, so the conditional C_{3} is false. The antecedent of conditional C_{2} says that conditional C_{3} is true, so the antecedent of conditional C_{2} is false. Thus, the conditional C_{2} has a false antecedent and false consequent, so the conditional C_{2} is true. The antecedent of conditional C_{1} says that conditional C_{2} is true, so the antecedent of conditional C_{1} is true. Thus, the conditional C_{1} has a true antecedent and false consequent, so the conditional C_{1} is false. This contradicts our initial assumption that C_{1} was true.

*Case 2*: Similar to Case 1, and left to the reader (it helps to draw a little 3 x 3 grid, to keep track of the truth values of antecedents, consequents, and conditionals). QED.

We can’t do better than this, though, and similar results hold for longer odd-length Curry cycles. In short, odd-length Curry cycles are paradoxical in that they entail that some clearly false claim is true, but if the cycle contains three or more conditionals (with three or more distinct consequents) then we can’t tell which of the clearly false claims is the one that, according to the paradox, must be true.
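These claims about even- and odd-length Curry cycles can be verified mechanically for small cases. The sketch below (names and encoding mine) searches over truth values for the *n* conditionals and their *n* consequents, keeping the assignments where each conditional, read materially, has the right truth value:

```python
from itertools import product

def curry_stable(n):
    """Stable assignments for an n-Curry cycle: conditional C_i says
    'if C_{(i+1) mod n} is true then q_i', read as a material
    conditional, so stability requires
    cs[i] == ((not cs[(i+1) % n]) or qs[i])."""
    return [(cs, qs)
            for cs in product([True, False], repeat=n)
            for qs in product([True, False], repeat=n)
            if all(cs[i] == ((not cs[(i + 1) % n]) or qs[i])
                   for i in range(n))]

# Even length: stable assignments exist with every consequent false.
assert any(not any(qs) for _, qs in curry_stable(2))

# Odd length: every stable assignment makes at least one consequent
# true, but for each of the three consequents there is a stable
# assignment making the other two false -- so no particular
# consequent is forced.
three = curry_stable(3)
assert all(any(qs) for _, qs in three)
assert all(any(qs[i] and not any(qs[:i] + qs[i + 1:]) for _, qs in three)
           for i in range(3))
```

The last assertion is the precise sense in which the 3-Curry cycle proves the disjunction but none of its disjuncts: each of “Santa Claus exists”, “The Easter Bunny exists”, and “The Great Pumpkin exists” can be the one true consequent in some stable assignment.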

*Featured image credit: ‘Squares, circles, and lines, oh my!’ Photo by kennymatic, CC BY 2.0 via Flickr.*

A Yabloesque variant of the Benardete Paradox

Here I want to present a novel version of a paradox first formulated by José Benardete in the 1960s – one that makes its connections to the Yablo paradox explicit by building in the latter puzzle as a ‘part’. This is not the first time connections between Yablo’s and Benardete’s puzzles have been noted (in fact, Yablo himself has discussed such links). But the version given below makes these connections particularly explicit.

First, we should look at Benardete’s original puzzle. Imagine that Alice is walking towards a point – call it A – and will continue walking past A unless something prevents her from progressing further. There is also an infinite series of gods, which we shall call G_{1}, G_{2}, G_{3}, and so on. Each god in the series intends to erect a magical barrier preventing Alice from progressing further if Alice reaches a certain point (and each god will do nothing otherwise):

(1) G_{1} will erect a barrier at exactly ½ meter past A if Alice reaches that point.

(2) G_{2} will erect a barrier at exactly ¼ meter past A if Alice reaches that point.

(3) G_{3} will erect a barrier at exactly ^{1}/_{8} meter past A if Alice reaches that point.

And so on.

Note that the possible barriers get arbitrarily close to A. Now, what happens when Alice approaches A?

Alice’s forward progress will be mysteriously halted at A, but no barriers will have been erected by any of the gods, and so there is no explanation for Alice’s inability to move forward (other than the un-acted-on intentions of the gods, which isn’t much of an explanation). Proof: Imagine that Alice did travel past A. Then she would have had to go some finite distance past A. But, for any such distance, there is a god far enough along in the list who would have thrown up a barrier before Alice reached that point. So Alice can’t reach that point after all. Thus, Alice has to halt at A. But since Alice doesn’t travel past A, none of the gods actually do anything.

Now let’s change the puzzle a bit. Imagine that Alice is an expert logician enjoying her morning walk (which, as usual, passes through point A). Alice will continue walking unless she hears someone utter a paradoxical sentence or set of sentences. Hearing a paradox is paralyzing to Alice, however. Upon hearing such a thing, she will instantly stop in her tracks (and she is able to detect paradoxes instantaneously, the minute they are uttered). Finally, Alice walks in total silence, never uttering a word.

As before, we also have an infinite series of gods G_{1}, G_{2}, G_{3}, … and each god intends to act in a particular way if Alice reaches a certain point on the path past A. But now they are not erecting barriers, but are instead merely making utterances:

(1) G_{1} will say:

“Everything the other gods have said so far is false.”

if Alice makes it ½ meter past A.

(2) G_{2} will say:

“Everything the other gods have said so far is false.”

if Alice makes it ¼ meter past A.

(3) G_{3} will say:

“Everything the other gods have said so far is false.”

if Alice makes it ^{1}/_{8} meter past A.

And so on.

In short, each god in the series will accuse all of the other gods who have already spoken of being liars, if Alice makes it far enough. Now, what happens when Alice approaches A?

Again, Alice’s forward progress will be halted at A: Imagine that Alice did travel past A. Then she would have had to go some finite distance past A. But, for any such distance, there is a god far enough along in the list (in fact, infinitely many of them) who would have said:

“Everything the other gods have said so far is false.”

before Alice reached that point. Let G_{m} be any one of the gods whose point Alice has passed. Notice that if Alice passed god G_{m}, then she also passed all of G_{m+1}, G_{m+2}, G_{m+3}… Now, G_{m} uttered:

“Everything the other gods have said so far is false.”

when Alice passed the appropriate point (that is, when Alice reached ^{1}/_{2^{m}} meters past A). But before that, each of the gods whose number is greater than *m* (i.e. G_{m+1}, G_{m+2}, G_{m+3},…) will have already said the same thing about the gods who spoke before them. As a result, G_{m}’s utterance can be neither true nor false.

Assume that G_{m}’s utterance is true. G_{m}’s utterance amounts to his saying that each of G_{m+1}, G_{m+2}, G_{m+3},… was lying when they made their respective utterances. So each of G_{m+1}, G_{m+2}, G_{m+3},… must in fact be lying. But then each of G_{m+2}, G_{m+3}, G_{m+4},… must be lying. But G_{m+1}’s assertion that:

“Everything the other gods have said so far is false.”

is equivalent to saying that each of G_{m+2}, G_{m+3}, G_{m+4},… is lying. So G_{m+1} is telling the truth. Contradiction, so G_{m} cannot be telling the truth.

Thus, G_{m}’s utterance must be false. But we can run the same argument given in the previous paragraph on G_{m+1}, G_{m+2}, G_{m+3},… just as easily as on G_{m} (after all, if Alice passed the point at which G_{m} makes his utterance, then she also passed all the points corresponding to G_{m+1}, G_{m+2}, G_{m+3},…). Thus, all of G_{m+1}, G_{m+2}, G_{m+3},… are lying as well. But then G_{m}’s assertion that:

“Everything the other gods have said so far is false.”

is true after all. Contradiction again.

Note: The reader who finds the previous two paragraphs difficult may want to consult my previous discussion of the Yablo paradox.
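Incidentally, the essential role of the infinite series of gods can be checked by brute force: any *finite* list of such utterances is perfectly consistent. In the sketch below (my own encoding), S_i says that every sentence later in the list is false; every finite list turns out to have exactly one stable assignment, so the paradox only gets a grip when there is no last speaker:

```python
from itertools import product

def yablo_stable(n):
    """Stable assignments for a finite Yablo-style list S_0..S_{n-1},
    where S_i says: every S_j with j > i is false.  (The last sentence
    quantifies over nothing, so it is vacuously true.)"""
    return [vals for vals in product([True, False], repeat=n)
            if all(vals[i] == all(not vals[j] for j in range(i + 1, n))
                   for i in range(n))]

# Every finite list is satisfiable, in exactly one way: the last
# sentence is vacuously true and all the others are false.
for n in range(1, 8):
    assert len(yablo_stable(n)) == 1
    assert yablo_stable(n)[0] == (False,) * (n - 1) + (True,)
```

This is why Alice is stopped only at A itself: any finite initial segment of accusations is consistent, but past A she would always have infinitely many accusers behind her, and that is where the contradiction lives.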

Thus, Alice cannot walk any distance past A, no matter how short, since doing so would mean she would have to pass a point at which a paradox had already been uttered. So she halts when she reaches A. But, since she doesn’t pass A, no one (neither Alice nor any of the gods) has said anything. So what, exactly, stopped Alice?

The Liar paradox — a paradox that has been debated for hundreds of years — arises when we consider the following *declarative* sentence:

“This sentence is false.”

Given some initially intuitive platitudes about truth, the Liar sentence is true if and only if it is false. Thus, the Liar sentence can’t be true and can’t be false, violating our intuition that all declarative sentences are either true or false (and not both).

There are many variants of the Liar paradox. For example, we can formulate relatively straightforward examples of *interrogative* Liar paradoxes, such as the following Liar *question*:

“Is the answer to this question ‘no’?”

If the correct answer to this question is “yes”, then the correct answer to the question is “no”, and vice versa. Thus the Liar question is a yes-or-no question that we cannot correctly answer with either “yes” or “no”.

Interestingly, I couldn’t think of any clear examples of *exclamatory* variants of the Liar paradox. The closest might be something like the following Liar exclamation:

“Boo to this very exclamation!”

If one (sincerely) utters this sentence, then one is simultaneously exhibiting a positive attitude towards the Liar exclamation (via making the utterance in the first place) and a negative attitude towards the exclamation (via the content of the utterance). But I am far from sure that this leads to a genuine contradiction or absurdity.

There are, however, a number of very interesting *imperative* versions of the Liar. The simplest is the following, popularized in the videogame *Portal 2*:

“New Mission: Refuse this Mission!”

Accepting the mission requires refusing the mission, and refusing the mission requires accepting it. Thus, the mission is one we cannot accept, yet cannot refuse.

Let’s consider a slightly different kind of interrogative paradox: *algorithmic paradoxes*. Now, an algorithm is just a set of instructions for effectively carrying out a particular procedure (often, but not always, a computation). One of the most informative and useful ways of representing algorithms is via flowcharts. But if we aren’t careful, we can formulate flowcharts that provide instructions for procedures that are impossible to implement.

Consider the flowchart in Figure 1. It has two states: the start state, where we begin, and the halt state, which indicates that we have completed the procedure in question when (if?) we reach it. Of course, not every flowchart represents a procedure that always halts. Some procedures get caught up in infinite loops, where we repeat the same set of instructions over and over without end. In particular, the fact that a flowchart contains a halt state doesn’t always guarantee that the procedure represented by the flowchart will always halt.

The start state in Figure 1 contains a question, and arrows leading from it to other (not necessarily distinct) states corresponding to each possible answer to that question. In this case the question is a yes-or-no question, and there are thus two such arrows.

So Figure 1 seems to be a perfectly good flowchart. The question to ask now, however, is this: Can we successfully carry out the instructions codified in this flowchart?

The answer is “no”.

Here’s the reasoning: Assume you start (as one should) in the start state. Now, when carrying out the first step of the procedure represented by this flowchart, you must either choose the “yes” arrow, and arrive at the halt state, or choose the “no” arrow and arrive back at the start state. Let’s consider each case in turn:

If you choose the “yes” arrow and arrive at the halt state, then the procedure will halt. So the correct answer to the question in the start state is “no”, since carrying out the algorithm only took finitely many steps (in fact, only one). But then you should not have chosen the “yes” arrow in the first place, since you weren’t trapped in an infinite loop after all. So you didn’t carry out the algorithm correctly.

If you choose the “no” arrow and arrive back at the start state, then you need to ask the same question once again in order to determine which arrow you should choose next. But you are in exactly the same situation now as you were when you carried out the first step of the procedure. So if the right answer to the question was “no” at step one, it is still “no” at step two. But then you once again end up back at the start state, and in the same situation as before, and so you should choose the “no” arrow a third time (and a fourth time, and a fifth time, … *ad infinitum*). So you never reach the halt state, and thus you are trapped in an infinite loop. As a result, the correct answer to the question is “yes”, and you should not have chosen the “no” arrow in the first place. So you didn’t carry out the algorithm correctly.

Thus, the Liar flowchart represents an algorithm that is impossible to correctly implement.
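The two cases above can be compressed into a tiny consistency check. Since Figure 1 is not reproduced here, the start-state question is reconstructed from the reasoning above as something like “Will carrying out this procedure take infinitely many steps?”, with “yes” leading to halt and “no” leading back to start – that reading is my assumption, not a quotation from the figure:

```python
def consistent(answer_yes: bool) -> bool:
    """A policy of always giving the same answer is consistent when
    the answer given matches what actually happens under that policy.
    (Assumed question: 'Will carrying out this procedure take
    infinitely many steps?'; 'yes' -> halt, 'no' -> back to start.)"""
    halts = answer_yes            # answering "yes" halts; "no" loops forever
    runs_forever = not halts
    return answer_yes == runs_forever

# Neither answer is consistent: the flowchart cannot be carried out
# correctly, mirroring the two cases in the text.
assert not consistent(True)
assert not consistent(False)
```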

Like many paradoxes, once we have seen one version it is easy to construct interesting paradoxical or puzzling variants. One such variant, closely related to the No-No Paradox (go on – look it up!), is the construction given in Figure 2, which involves two separate flowcharts.

In this case we can carry out both algorithms correctly, but never in the same way. If we choose the “yes” arrow when implementing the first flowchart, then we must choose the “no” arrow over and over again when implementing the second flowchart, and if we choose the “no” arrow (over and over again) when implementing the first flowchart then we must choose the “yes” arrow when implementing the second flowchart. So we have two identical flowcharts, but we can only carry out both procedures correctly by doing one thing in one case, and something completely different in the other.
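The same kind of consistency check works here. Since Figure 2 is not reproduced either, I am assuming (on the model of the No-No Paradox) that each of the two identical flowcharts asks whether carrying out the *other* procedure will take infinitely many steps, with “yes” leading to halt and “no” back to start:

```python
from itertools import product

# Assumed content of Figure 2: each chart asks whether the *other*
# procedure runs forever; "yes" -> halt, "no" -> loop back to start.
consistent_pairs = [
    (a, b) for a, b in product([True, False], repeat=2)
    # chart 1 halts iff a; chart 2 halts iff b; each answer must
    # correctly report whether the other chart runs forever
    if a == (not b) and b == (not a)
]

# Exactly the two asymmetric policies survive: "yes"/"no" or "no"/"yes".
assert consistent_pairs == [(True, False), (False, True)]
```

Under that assumed reading, the symmetric policies are ruled out and only the two mismatched ones remain – which is exactly the puzzling feature described above.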

Not a full-blown paradox, perhaps, but certainly puzzling.

*Featured image credit: “Ripples”. CC0 Public Domain via Pixabay.*

The paradox of generalizations about generalizations


A generalization is a claim of the form:

(1) All A’s are B’s.

A generalization about generalizations is thus a claim of the form:

(2) All generalizations are B.

Some generalizations about generalizations are true. For example:

(3) All generalizations are generalizations.

And some generalizations about generalizations are false. For example:

(4) All generalizations are false.

In order to see that (4) is false, we could just note that (3) is a counterexample to (4). The following argument is a bit more interesting, however.

Proof: Assume that sentence (4) is true. Then, given what (4) says, all generalizations are false. But (4) is a generalization, so (4) must be false, making (4) both true and false. Contradiction. So (4) can’t be true, and hence must be false.

(4) has an interesting property, however. Although, as we have seen, not all generalizations are false, and hence (4) fails to be true, it is itself a false generalization, and hence is a *supporting instance* of (4). In other words, since (4) is false, the existence of (4) provides some small amount of positive evidence in favor of the truth of (4), even if, in the end, our proof of (4)’s falsity trumps this small bit of defeasible evidence.

Thus, we can introduce the following terminology. A generalization about generalizations of the form:

(5) All generalizations are B.

is a self-supporting generalization if it in fact has property B – that is, if it has the property that it ascribes to all generalizations (regardless of whether all other generalizations have this property, and thus regardless of whether the generalization in question is itself true or false). In short, a generalization about generalizations is self-supporting if it has the property that it says all generalizations have.

It is easy to show that any true generalization about generalizations will be self-supporting (proof left to the reader). But false generalizations might be self-supporting, like (4) above, or they might not. For example:

(6) All generalizations are true.

is false, since (4) is false, and hence a counterexample to (6). But it is not self-supporting, since it would have to be true to be self-supporting.

To obtain the paradox promised in the title of this post, we need only consider:

(7) All generalizations are not self-supporting.

Note that we could express this a bit more colloquially as “No generalizations are self-supporting.”

Now, (7) is clearly false, since we have already seen an instance of a self-supporting generalization. But is (7) self-supporting? As the reader has no doubt guessed, there is no coherent answer to this question:

Proof of Contradiction: (7) is either self-supporting, or it is not self-supporting.

Case 1: Assume that (7) is self-supporting. If (7) is self-supporting, then it has the property that (7) says all generalizations have. (7) says that all generalizations are not self-supporting. So (7) is not self-supporting after all. Contradiction.

Case 2: Assume that (7) is not self-supporting. But (7) says that all generalizations are not self-supporting. So (7) has the property that (7) says all generalizations have. So (7) is self-supporting after all. Contradiction.

Note that this paradox, unlike the Liar paradox (“This sentence is false”) does not involve any problems with regard to determining the truth-value of (7). As we have already noted, we can straightforwardly observe that (7) is false. The paradox arises instead with regard to whether (7) has the rather more esoteric property of being self-supporting. It turns out that (7) has this property if and only if it doesn’t.
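The shape of the paradox can be put in fixed-point terms: the question “is (7) self-supporting?” has no consistent answer, since (7) is self-supporting exactly when it is not. A one-line check of this (trivial, but it makes the structure vivid – the encoding is mine):

```python
# "Is (7) self-supporting?"  (7) says all generalizations are not
# self-supporting, so (7) is self-supporting exactly when (7) is not
# self-supporting: a consistent answer v must satisfy v == (not v).
consistent_answers = [v for v in (True, False) if v == (not v)]

# No truth value works -- this is the contradiction in the proof above.
assert consistent_answers == []
```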

The impossibility of perfect forgeries?

Imagine that Banksy (or J.S.G. Boggs, or some other artist whose name starts with “B” and who is known for making fake money) creates a perfectly accurate counterfeit dollar bill – that is, he creates a piece of paper that is indistinguishable from actual dollar bills visually, chemically, and in every other relevant physical way. Imagine, further, that our artist looks at his creation and realizes that he has succeeded in creating a perfect forgery. There doesn’t seem to be anything mysterious about such a scenario at first glance – creating a perfect forgery, and knowing one has done so, although extremely difficult (and legally controversial), seems perfectly possible. But is it?

In order for an object to be a perfect forgery, it seems like two criteria must be met. First of all, the object must be a forgery – that is, the object cannot be a genuine instance of the category in question. In this case, our object, which we shall call X, must not be an actual dollar bill:

(1) X is not a dollar bill.

Second, the object must be perfect (as a forgery) – that is, it can’t be distinguished from actual instances of the category in question. We can express this thought as follows:

(2) We cannot know that X is not a dollar bill.

Now, there is nothing that prevents both (1) and (2) from being simultaneously true of some object X (say, our imagined fake dollar bill). But there is an obstacle that seemingly prevents us from knowing that both (1) and (2) are true – that is, from knowing that X is a perfect forgery.

Imagine that we know that (1) is true, and in addition we know that (2) is true. In other words, the following claims hold:

(3) We know that X is not a dollar bill.

(4) We know that we cannot know that X is not a dollar bill.

Knowledge is factive – in other words, if we know a claim is true, then that claim must, in fact, be true. Applying this to the case at hand, this means that claim (4) entails claim (2). But claim (2) and claim (3) are incompatible with each other: (2) says we cannot know that X isn’t a dollar, while (3) says we know it isn’t. Thus, (3) and (4) can’t both be true, since if they were, then a contradiction would also be true (and contradictions can’t be true).

Thus, we have proven that, although perfect forgeries might well be possible, we can never know, of a particular object, that it is a perfect forgery. But an important question remains: If this is right, then what, exactly, is going on in the story with which we began? How is it that our imagined artist doesn’t know that he has created a perfect forgery?

In order to answer this question, it will help to flesh out the story a bit more. So, once again imagine that our artist creates the piece of paper that is visually, chemically, and in every other physical way indistinguishable from a real dollar bill. Call this Stage 1. Now, after admiring his work for a while, imagine that the artist then pulls eight genuine, mint-condition dollar bills out of his wallet, throws them on the table, and then places the forgery he created into the pile, shuffling and mixing until he can no longer identify which of the pieces of paper is the one he created, and which are the ones created by the Mint. Let’s call this Stage 2. How do Stage 1 and Stage 2 differ?

At Stage 1 we do not, strictly speaking, have a case of a perfect forgery. Although the piece of paper the artist created is physically indistinguishable from a dollar bill, the artist can nevertheless know it is not a dollar bill because he knows that he created this particular object. In other words, at Stage 1 he can tell that the forgery is a forgery because he knows the history, and in particular the origin, of the object in question.

Stage 2 is different, however. Now the fake is a perfect forgery, since it still isn’t a dollar, but we can’t know that it isn’t a dollar, since we can no longer distinguish it from the genuine dollars in the pile. So in some sense we know that the fake dollar in the pile is a perfect forgery. But we can’t point to any particular piece of paper and know that it, rather than one of the other eight pieces of paper, is the perfect forgery. In other words, in Stage 2 the following is true:

- We know there is an object in the pile that is a perfect forgery.

But the following, initially similar looking claim, is false:

- There is an object in the pile that we know is a perfect forgery.

We can sum all this up as follows: We can know that perfect forgeries exist – that is, we can know claims of the form “One of those is a perfect forgery”. But we can’t know, of a particular object, that it is a perfect forgery – that is, we can never know claims of the form “That is a perfect forgery”. And it is this latter sort of claim – that we know, of a particular object, that it is a perfect forgery – that leads to the contradiction.
