Imagine that we have a black and white monitor, a black and white camera, and a computer. We hook up the camera and monitor to the computer, and we write a program where, for some medium-ish shade of grey *G*:

- The computer tells the monitor to show a completely white screen if the average shade of the scene the camera is recording is darker than *G*.
- The computer tells the monitor to show a completely black screen if the average shade of the scene the camera is recording is no darker than *G*.

You walk around pointing the camera at things, and all goes swimmingly, until you get the bright idea to point the camera at the monitor. Then what happens?

There are two ways to answer this question: First, if we pretend that we lived in a world where electricity and light travelled instantaneously (that is, that the speed of light was infinite, so that light would leave the monitor and be detected by the camera at the same moment), then we would have a paradox. If the camera is pointed at the monitor then, at any point in time, the monitor will display a completely white screen if and only if the average shade of the scene the camera is recording is darker than *G* if and only if the average shade of the image depicted in the monitor is darker than *G* if and only if the monitor is not displaying a completely white screen (since, by the set-up, the screen always displays either a completely white screen or a completely black screen).

In reality, however, both light and electricity take a small amount of time to travel from one point to the next. As a result, we don’t have a paradox or impossible situation. Instead, if we actually set up a camera and monitor as described, the monitor will flicker – that is, it will alternate between a completely white screen and a completely black screen, and the time between switches between black and white will depend on how long it takes for light to leave the screen and be detected by the camera, and how long it takes for the electrical signals to travel from camera to computer, and how long it takes the computer to carry out the computations in the program, and how long it takes the signal to pass from the computer to the monitor.
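The single-camera loop can be sketched in a few lines of code. This is my own illustration, not part of the original setup: it assumes the whole round trip (light to camera, signal to computer, command to monitor) takes exactly one time step, and scales shades so that 0.0 is black and 1.0 is white.

```python
# A minimal sketch of the single camera-monitor feedback loop.
# Assumptions (mine): one round trip = one time step; shades run
# from 0.0 (black) to 1.0 (white); the camera sees only the monitor.

G = 0.5  # the threshold shade from the program

def step(screen):
    """One round trip: the camera records the current screen, the
    computer compares its shade to G, and the monitor is told what
    to display next."""
    return 1.0 if screen < G else 0.0  # white if darker than G, else black

screen = 1.0  # start with an all-white screen
history = [screen]
for _ in range(6):
    screen = step(screen)
    history.append(screen)

print(history)  # the screen flickers: white, black, white, black, ...
```

Whatever shade the screen starts on, it settles immediately into the white/black alternation described above.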

Now, let’s complicate things a bit: Imagine that you now have an infinite stack of monitors, computers, and cameras. So there is camera #1, which feeds an image to computer #1, which then tells monitor #1 what to display, and then camera #2, which feeds an image to computer #2, which then tells monitor #2 what to display, and then camera #3, which…

Now, for each camera #*n*, if we just point it at monitor #*n* (and each computer runs the same simple program as before) then we just have infinitely many instances of the earlier, simpler puzzle. But what happens if we do things a bit differently? Instead of pointing camera #1 at monitor #1, let’s assume we set it up in such a way that the camera can ‘see’ monitor #2, and monitor #3, and monitor #4, and so on. Similarly, camera #2 is pointed so that it can see monitor #3, and monitor #4, and monitor #5, and so on. More generally, for each whole number *n*, camera #*n* is positioned so it can see all monitors whose number is greater than *n*.

Now, let’s also change the program a bit:

- Computer #1 tells monitor #1 to show a completely white screen if all of the monitors it can see are darker than *G*.
- Computer #1 tells monitor #1 to show a completely black screen if at least one of the monitors it can see is no darker than *G*.

And similarly for computer #2, monitor #2, and camera #2. More generally:

- Computer #*n* tells monitor #*n* to show a completely white screen if all of the monitors whose numbers are greater than *n* are darker than *G*.
- Computer #*n* tells monitor #*n* to show a completely black screen if at least one of the monitors whose number is greater than *n* is no darker than *G*.

Regular readers of this blog will not be surprised that this is a television-variant of the Yablo paradox (obviously my favorite), just as the initial, single-camera version was a television-variant of the Liar paradox. Thus, if the speed of light and electricity were infinite, and hence the signals from camera to computer to monitor travelled instantaneously, we would again have a paradox.

Of course, if we were able to set up infinitely many televisions, cameras, and computers in this way, then we wouldn’t rip a hole in reality. Rather, the tiny but nevertheless real lag produced by the time that it takes for the signal to travel would result in the screens flickering from black to white and back again, as the camera detected different shades and the computer thus sent different instructions.

There is an interesting phenomenon here, however, in addition to this merely providing us with a novel presentation of familiar paradoxes. Let’s assume that it takes a precise fixed amount of time for the image to travel from the relevant monitors to each camera, then be sent to the computer, processed, and then the command sent to the monitor telling it what shade to display. Let’s call this time an *antinosecond* (parasecond was already taken). So, in the single-television setup with which we began, if we begin by having the television display an all white screen, then after an antinosecond it will switch to all black, then after another antinosecond it will switch to all white, and so on.

But what happens with the Yablo version, with infinitely many cameras, computers, and televisions? At first glance, you might think that it will depend on how you set it up initially – that is, on which monitors are showing a white screen and which are showing a black screen when you turn on the cameras, computers, and get things rolling. But it turns out that initial thought would be wrong:

**Theorem:** No matter what state the monitors start in, after a finite number of antinoseconds they will begin alternating between two states – all of them simultaneously showing a white screen, and all of them simultaneously showing a black screen.

I’ll give readers a couple days to ponder this before I post the argument in the comments.

In addition, there is a (surprisingly small) number of antinoseconds after which the screens are guaranteed to be alternating between all simultaneously showing black and all simultaneously showing white, no matter what shades the screens are displaying at the start. Thus:

**Bonus Question:** What is the maximum number of antinoseconds before the screens are all the same shade (i.e. either all black or all white)?

**Note:** The television version of the Liar paradox is due to David Cole, a professor at the University of Minnesota – Duluth. Thanks are owed to David for allowing me to discuss it here. The Yablo variant is, as far as I know, novel.

*Featured image: Google Earth on multiple monitors by Runner1928. Public domain via Wikimedia Commons. *

There are actual paradoxes in logic, due to inference rules applied where they are inapplicable for some subtle reason. There are no actual physical paradoxes … something or other will happen.

Getting an infinite number of cameras and monitors to all be positioned correctly relative to each other would seem to present certain geometrical difficulties.

Cute problem, I couldn’t resist writing up the solution, so don’t read on if you want to figure it out for yourself:

2 cases:

1. There exists a smallest n such that all monitors greater than n show the same shade (i.e. if monitor n+1 is black, then every monitor greater than n is black).

2. There does not exist such an n.

In case 2 all monitors go black after 1 antinosecond since they can all see at least 1 white monitor.

In case 1 we use induction. After 1 antinosecond, monitor n goes black if all the greater monitors are white, and white if all the greater monitors are black. Since all the monitors greater than n flicker in the same way, after 1 antinosecond all but the first n − 1 monitors are identical.

Then, by induction, we’re done.

You write:

“If we pretend that we lived in a world where electricity and light travelled instantaneously (that is, that the speed of light was infinite, so that light would leave the monitor and be detected by the camera at the same moment), then we would have a paradox.”

So is this a way of proving a priori the finiteness of the speed of light?

Let’s assume the first monitor’s number is 1.

If there is no smallest n such that all monitors whose numbers are equal to or greater than n are darker than G, then every camera can see at least one monitor no darker than G, so all monitors will show a black screen after 1 antinosecond, a white screen after another antinosecond, and so on. Then the simultaneity is there after 1 antinosecond from the start. In the special case where all the monitors start off with a white screen, the simultaneity is already there from the start.

If there is a smallest n such that all monitors whose numbers are equal to or greater than n are darker than G, there are two cases:

If n = 1, all monitors are darker than G at the start, so all monitors will show a white screen after 1 antinosecond, a black screen after another antinosecond, and so on. Then the simultaneity is there after 1 antinosecond from the start.

If n ≥ 2, all monitors whose numbers are equal to or greater than n − 1 will show a white screen after 1 antinosecond, while all monitors whose numbers are smaller than n − 1 will show a black screen. After another antinosecond, all monitors will show a black screen, and after another antinosecond a white screen, and so on. Then the simultaneity is there after 2 antinoseconds from the start.

So it takes at most 2 antinoseconds before the screens are all the same shade.
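The two-antinosecond bound can also be checked mechanically. The sketch below is my own illustration (the representation and names are assumptions, not from the post): it models an infinite column of monitors as a finite prefix plus a constant tail shade. Per the case analysis above, any configuration whose tail is not eventually constant goes all black after one step anyway, so this family of states is enough to probe the bound.

```python
# Brute-force check of the two-antinosecond bound (my own sketch).
# State = (prefix, tail): prefix lists the shades of the first few
# monitors; tail is the shade of every monitor after the prefix.
# 'W' = no darker than G (white), 'B' = darker than G (black).
from itertools import product

def step(prefix, tail):
    """One antinosecond of the Yablo update: monitor n turns white
    iff every monitor with a larger number is dark."""
    new_prefix = []
    for i in range(len(prefix)):
        sees_all_dark = all(s == 'B' for s in prefix[i + 1:]) and tail == 'B'
        new_prefix.append('W' if sees_all_dark else 'B')
    # A monitor deep in the tail sees only tail-shaded monitors.
    new_tail = 'W' if tail == 'B' else 'B'
    return new_prefix, new_tail

def uniform(prefix, tail):
    """All monitors showing the same shade?"""
    return all(s == tail for s in prefix)

# Try every prefix of length up to 3 over both tail shades, counting
# the antinoseconds until all monitors agree.
max_steps = 0
for tail in 'WB':
    for k in range(4):
        for start in product('WB', repeat=k):
            prefix, t, steps = list(start), tail, 0
            while not uniform(prefix, t):
                prefix, t = step(prefix, t)
                steps += 1
            max_steps = max(max_steps, steps)

print(max_steps)  # -> 2, matching the answer above
```

Once the monitors agree, the update flips them all together each antinosecond, which is exactly the alternation the theorem describes.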