By Paul Cockshott
This year is being widely celebrated as the Turing centenary. He is being hailed as the inventor of the computer, which perhaps overstates things, and as the founder of computing science, which is more to the point. It can be argued that his role in the actual production of the first generation of computers, whilst real, was not vital. In 1946 he designed the Automatic Computing Engine (ACE), a very advanced design of computer for its day, but because of its challenging scale, initially only a cut-down version (the Pilot ACE) was built (and can now be seen in the Science Museum). From 1952 to 1955, the Pilot ACE was the fastest computer in the world, and it went on to be successfully commercialised as the Deuce. In engineering terms, though, none of the distinctive features of Turing’s ACE survive in today’s computer designs. The independent work of Zuse in Germany or Atanasoff in the US indicates that electronic computers were a technology waiting to be discovered across the industrial world.
What distinguished Turing from the other pioneer computer designers was his much greater philosophical contribution. Turing thought deeply about what computation is, what its limits are, and what it tells us about the nature of intelligence and thought itself.
Turing’s 1936 paper on the computable real numbers marks the epistemological break between idealism and materialism in mathematics. Prior to Turing it was hard to get away from the idea that through mathematical reason, the human mind gained access to a higher domain of Platonic truths. Turing’s first proposal for a universal computing machine is based on an implicit rejection of this view. His machine is intended to model what a human mathematician does when calculating or reasoning, and by showing what limits this machine encounters, he identifies constraints which bind mathematical reasoning in general (whether done by humans or machines).
From the beginning, he emphasises the limited scope of our mental abilities and our dependence on artificial aids — pencil and paper for example — to handle large problems. We have, he asserted, only a finite number of ‘states of mind’ that we can be in when doing calculation. We have in our memories a certain stock of what he calls ‘rules of thumb’ that can be applied to a problem. Our vision only allows us to see a limited number of mathematical symbols at a time and we can only write down one symbol of a growing formula or growing number at a time. The emphasis here, even when he looks at the human mathematician, is on the mundane, the material, the constraining.
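The machine Turing abstracted from these observations — a finite table of ‘rules of thumb’, applied to one symbol at a time, with the paper serving as unbounded memory — can be sketched in a few lines of Python. The names and the binary-increment example below are illustrative choices, not anything from Turing’s 1936 paper:

```python
# A minimal Turing machine sketch (a hypothetical illustration, not
# Turing's original notation). A machine is a finite rule table:
#   (state, symbol) -> (symbol to write, move L/R, next state)
# mirroring the finite 'states of mind' and 'rules of thumb' above.

def run(rules, tape, state="start", halt="halt", blank="_", max_steps=10_000):
    """Run a machine on a tape (a dict mapping position -> symbol)."""
    tape = dict(tape)
    pos = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(pos, blank)        # see only one symbol at a time
        write, move, state = rules[(state, symbol)]
        tape[pos] = write                    # write only one symbol at a time
        pos += {"L": -1, "R": 1}[move]
    return tape

def value(tape):
    """Read the tape as a binary number: position i carries weight 2**i."""
    return sum(2 ** p for p, s in tape.items() if s == "1")

# Example machine: increment a binary number, least significant bit at
# position 0, by propagating a carry rightward along the tape.
inc = {
    ("start", "1"): ("0", "R", "start"),  # 1 plus carry: write 0, carry on
    ("start", "0"): ("1", "R", "halt"),   # 0 plus carry: write 1, done
    ("start", "_"): ("1", "R", "halt"),   # past the last digit: new digit
}
```

Running `run(inc, {0: "1", 1: "1"})` turns the tape holding binary 11 (three) into one holding 100 (four). The point of the sketch is how little it takes: a finite table, a movable head, and an unbounded tape.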
In his later essays on artificial intelligence Turing doesn’t countenance any special pleading for human reason. He argues with his famous Turing Test that the same criteria that we use to impute intelligence and consciousness to other human beings could in principle be used to impute them to machines (provided that these machines communicate in a way that we cannot distinguish from human behaviour). In his essay ‘Computing Machinery and Intelligence,’ he confronts the objection that machines can never do anything new, only what they are programmed to do. “A better variant of the objection says that a machine can never ‘take us by surprise’…. Machines take me by surprise with great frequency. This is largely because I do not do sufficient calculation to decide what to expect them to do, or rather because although I do a calculation, I do it in a hurried, slipshod fashion, taking risks.”
Turing starts a philosophical tradition of grounding mathematics on the material and hence ultimately on what can be allowed by the laws of physics. The truths of mathematics become truths like those of any other science — statements about sets of possible configurations of matter. So the truths of arithmetic are predictions about the behaviour of actual physical calculating systems, whether these be children with chalks and slates or microprocessors. In this view it makes no more sense to view mathematical abstractions as Platonic ideals than it does to posit the existence of ideal doors and cups of which actual doors and cups are partial manifestations. Mathematics then becomes a technology of modelling one part of the material world with another. In Deutsch’s formulation of the Turing Principle, any finite physical system can be simulated to an arbitrary degree of accuracy by a universal Turing machine.
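The claim that ‘3 + 4 = 7’ is a prediction about physical calculating systems can be made concrete with a small sketch. Below, a standard textbook ripple-carry adder — built from Boolean operations standing in for physical logic gates — is checked against ordinary arithmetic; the function names are illustrative, not drawn from any particular source:

```python
# Arithmetic truths as predictions about calculating devices: model an
# adder circuit gate by gate, and check it agrees with integer addition.

def full_adder(a, b, carry_in):
    """One-bit full adder expressed as Boolean operations (gate model)."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def circuit_add(x, y, bits=8):
    """Add two integers by rippling the carry through `bits` full adders."""
    carry, total = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= s << i
    return total  # sum modulo 2**bits, as in a real fixed-width circuit

# '3 + 4 = 7' read as a prediction about the circuit's behaviour:
assert circuit_add(3, 4) == 7
```

On this reading, if a correctly functioning physical adder produced anything other than 7, it would be the device (or our model of it) that was at fault, not the arithmetic — which is exactly what makes arithmetic a science of possible configurations of matter.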
Paul Cockshott is a computer scientist and political economist working at the University of Glasgow. His most recent books are Computation and its Limits (with Mackenzie and Michaelson) and Arguments for Socialism (with Zachariah). His research includes programming languages and parallelism, hypercomputing and computability, image processing, and experimental computers.
OUPblog is celebrating Alan Turing’s 100th birthday with blog posts from our authors all this week. Read the previous post in our Turing series: “Maurice Wilkes on Alan Turing” by Peter J. Bentley. Look for “Alan Turing’s Cryptographic Legacy” by Keith M. Martin, “Turing’s Grand Unification” by Cristopher Moore and Stephan Mertens, and “Computers as authors and the Turing Test” by Kees van Deemter later this week.