Oxford University Press's
Academic Insights for the Thinking World

Turing’s Grand Unification

By Cristopher Moore and Stephan Mertens

Many of the central moments in science have been unifications: realizations that seemingly disparate phenomena are all aspects of one underlying structure. Newton showed that the same laws of motion and gravity govern apples and planets, creating the first explanatory framework that joins the terrestrial to the celestial. Maxwell showed that a single field can explain electricity, magnetism, and light. Darwin realized that natural selection shapes all forms of life. And Einstein demonstrated that space and time are shadows of a single, four-dimensional spacetime.

This quest for unity drives us to this day, as we hunt for a Grand Unified Theory that combines gravity with quantum mechanics. But while it is less well-known, computer science had its own grand unification in 1936, thanks to Alan Turing.

At the dawn of the 20th century, mathematicians and logicians were focused on the axiomatic underpinnings of mathematics. Shaken by paradoxes — like Russell’s set of all sets that don’t contain themselves (does it or doesn’t it?) — they wanted to rebuild mathematics from the ground up, creating a foundation free of paradox. This stimulated a great deal of interest in axiomatic systems, their power to establish truth, and the difficulty of finding proofs or disproofs of open questions in mathematics.

In 1928, David Hilbert posed the Entscheidungsproblem, asking whether there is a “mechanical procedure” that can decide whether or not any given mathematical statement is true — say, the Twin Prime Conjecture, that there are infinitely many pairs of primes that differ by two. Such a procedure would complete the mathematical adventure, providing a general method to determine the truth or falsehood of any statement. To us, this sounds horribly final, but to Hilbert it was a glorious dream.

But what exactly is a “mechanical procedure”? Or, in modern terms, an algorithm? Intuitively, it is a procedure that can be carried out according to a fixed computer program, like a recipe followed by a dutiful cook. But what kinds of steps can this computer perform? What kind of information does it have access to, and how is it allowed to transform this information?

Several models of computation had been proposed, with different attitudes towards what it means to compute. The recursive functions build functions from simpler ones using rules like composition and induction, starting with “atomic” functions like x+1 that we can take for granted. Another model, Church’s λ-calculus, repeatedly transforms a string of symbols by substitutions and rearrangements until only the answer remains.
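To make the first of these models concrete, here is a minimal sketch (in Python, our choice, not the original notation) of primitive recursion: addition and multiplication built up from the single atomic function x+1. The names `succ`, `prim_rec`, `add`, and `mul` are illustrative, not standard.

```python
def succ(x):
    return x + 1          # the "atomic" function x + 1, taken for granted

def prim_rec(base, step):
    """Primitive recursion: define f(x, y) from
    f(x, 0) = base(x) and f(x, y+1) = step(x, y, f(x, y))."""
    def f(x, y):
        acc = base(x)
        for i in range(y):
            acc = step(x, i, acc)
        return acc
    return f

# add(x, 0) = x;      add(x, y+1) = succ(add(x, y))
add = prim_rec(lambda x: x, lambda x, i, acc: succ(acc))
# mul(x, 0) = 0;      mul(x, y+1) = add(x, mul(x, y))
mul = prim_rec(lambda x: 0, lambda x, i, acc: add(x, acc))

print(add(3, 4), mul(3, 4))  # 7 12
```

Everything here is built by composing simpler functions — exactly the style of definition the recursive-function model allows.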

Each of these models has its charms. Indeed, each one lives on in today’s programming languages. Recursive functions are much like subroutines and loops in C and Java, and the λ-calculus is at the heart of functional programming languages like Lisp and Scheme. Every computer science student knows that we can translate from each of these languages to the others, and that while their styles are radically different, they can ultimately perform the same tasks. But in 1936, it was far from obvious that these models are equivalent, or that either one is capable of everything we might reasonably call a computation. Why should a handful of ways to define functions in terms of simpler ones, or a particular kind of symbol substitution, be enough to carry out any conceivable computation?

Turing settled this issue with a model that is both mathematically precise and intuitively complete. He began by imagining a human computer, carrying out a procedure with pencil and paper. If we had to, he argued, we could boil any such procedure down into a series of steps, each of which reads and writes a single symbol. We don’t need to remember much ourselves, since we can use notes on the paper to keep track of what to do next. And although it might be inconvenient, a one-dimensional roll of paper is enough to write down anything we might need to read later.

At that point, we have a Turing machine: a controller with a finite number of internal states, and a tape on which it can read and write symbols in a finite alphabet. Nothing has been left out. Any reasonable attempt to augment the Turing machine with two-dimensional tapes, multiple controllers, etc. can be simulated by the original model.
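The model really is that small. The sketch below is a toy one-tape simulator (our own simplification, not Turing's exact 1936 formulation): a rule table mapping (state, symbol) to (new symbol, head move, new state), run on a tape of symbols.

```python
def run_tm(rules, tape, state="start", max_steps=10_000):
    """Simulate a one-tape Turing machine.  `rules` maps
    (state, symbol) -> (new_symbol, move, new_state), where move is
    -1, 0, or +1.  '_' is the blank symbol; state 'halt' stops."""
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# A three-rule machine that inverts a binary string, then halts at the blank.
invert = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}
print(run_tm(invert, "10110"))  # -> 01001
```

A finite rule table and an unbounded tape: that is the entire machine.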

Famously, Turing then showed that there are problems that no Turing machine can solve. The Halting Problem asks whether a given Turing machine will ever halt; in modern terms, whether a program will return an answer, or “hang” and run forever. If there were a machine that could answer this question, we could ask it about itself, demanding that it predict its own behavior. We then add a twist, making it halt if it will hang, and hang if it will halt. The only escape is to accept that no such machine exists in the first place.
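The twist can be sketched directly in code (a hypothetical construction; the names are ours). Given any claimed predictor `halts(f)` for whether a zero-argument function halts, we build a function the predictor must misjudge:

```python
def diagonal(halts):
    """Given a claimed predictor halts(f) -> bool, return a
    function g that halts must be wrong about."""
    def g():
        if halts(g):       # predicted to halt...
            while True:    # ...so run forever instead
                pass
        return "halted"    # predicted to run forever, so halt at once
    return g

# No candidate predictor survives.  Try one that says "nothing halts":
always_no = lambda f: False
g = diagonal(always_no)
print(g())  # the predictor said g loops forever, yet it prints "halted"
```

Whatever the predictor answers about `g`, `g` does the opposite — so no correct predictor can exist.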

This also gives a nice proof of Gödel’s Theorem that there are unprovable truths, or to be more precise, that no axiomatic system can prove all mathematical truths. (Unless it also “proves” some falsehoods, in which case we can’t trust it!) For if every truth of the form “this Turing machine will never halt” had a finite proof, we could solve the Halting Problem by doing two things in parallel: running the machine to see if it halts, while simultaneously looking for proofs that it won’t. Thus, no axiomatic system can prove every truth of this form.

Turing showed that his machines are exactly as powerful as the recursive functions and the λ-calculus, unifying all three models under a single definition of computation. What we can compute doesn’t depend on the details of our computers. They can be serial or parallel, classical or quantum. One kind of computer might be much more efficient than another, but given enough time, each one can simulate all the others. The belief that these models capture everything that could reasonably be called an algorithm, or a mechanical procedure, is called the Church-Turing Thesis.

Turing drew the line between what is computable and what is not, given an unbounded amount of time and memory. Since his death, computer science has focused on what we can compute when these resources are limited. The P vs. NP question, for instance, asks whether there are problems where we can check a solution in polynomial time (as a function of problem size) but where actually finding a solution is much harder. Mathematical proofs, for example, can be checked in time roughly proportional to their length — that’s the whole point of formal proofs — but they seem difficult to find. Despite our strong intuition that finding things is harder than checking them, we have been unable to prove that P ≠ NP, and it remains the outstanding open question of the field.

The fact that Turing didn’t live to see how computer science grew and flowered — that he wasn’t there to play a role in its development, as he did at its foundations — is one of the tragedies of the 20th century. He received, too late, an apology from the British government for the persecution that led to his death. Let’s wish him a happy birthday, and raise a glass to his short but brilliant life.

Cristopher Moore and Stephan Mertens are the authors of The Nature of Computation. Cristopher Moore is a professor at the Santa Fe Institute and, previously, was a professor in the Computer Science Department and the Department of Physics and Astronomy at the University of New Mexico. Stephan Mertens is a theoretical physicist at the Institute of Theoretical Physics, Otto-von-Guericke University, Magdeburg, and external professor at the Santa Fe Institute.

OUPblog is celebrating Alan Turing’s 100th birthday with blog posts from our authors all this week. Read our previous posts on Alan Turing including: “Maurice Wilkes on Alan Turing” by Peter J. Bentley, “Turing: the irruption of Materialism into thought” by Paul Cockshott, and “Alan Turing’s Cryptographic Legacy” by Keith M. Martin. Look for “Computers as authors and the Turing Test” by Kees van Deemter tomorrow.
