
Alan Turing and evil Artificial Intelligence

In November 2017, the Future of Life Institute—which focuses on ‘keeping artificial intelligence beneficial’—released a slick, violent video depicting ‘slaughterbots’ [some viewers may find this video distressing]. It went viral. The tiny (fictional) drones in the video used facial recognition systems to target and destroy civilians. The Institute is funded in part by Elon Musk, co-founder of PayPal and CEO of SpaceX and Tesla, who believes that AI is potentially ‘more dangerous than nukes’. The dystopian video ends with chilling words from Berkeley computer scientist Stuart Russell: ‘We have an opportunity to prevent the future you just saw’, he says, ‘but the window to act is closing fast’. The video’s release was timed to coincide with a meeting of the UN Convention on Conventional Weapons, in the hope that the UN would resolve to ban the development of lethal autonomous weapons (LAWs). It didn’t. So far, only pint-sized San Mateo County, close to Silicon Valley, has got serious about the issue, debating a request to the US Congress to ban LAWs.

Not all AI researchers subscribe to Musk’s dystopian perspective. At the 2016 Web Summit, the chief scientist of Hanson Robotics, Ben Goertzel, claimed that a ‘self-modifying, self-reprogramming AI’ was possible within five to ten years, followed by ‘superintelligent’—but kind—robots. As evidence, Goertzel exhibited Hanson’s upper-torso humanoid robot Sophia, which the company calls ‘an evolving genius machine’. Sophia vocalises such sentences as ‘I want to use my AI to help humans live a better life. Like design better homes, build better cities of the future’. A few weeks ago, the robot was given citizenship by Saudi Arabia. Dystopians, though, are far from convinced by Sophia’s claim to be designed to ‘develop empathy and compassion’. Musk tweeted: ‘Just feed it The Godfather movies as input. What’s the worst that could happen?’.

The Japanese manga series Astro Boy follows the protagonist, Astro, an android with human emotions. Image credit: “astro-boy” by TNS Sofres. CC BY 2.0 via Flickr.

Déjà vu. Turning the clock back to Turing’s era, we find exactly the same public disagreement over computer intelligence. In October 1946, 71 years before Musk’s tweet, Viscount Mountbatten delivered a prophetic speech that was reported in newspapers world-wide. He explained that it was believed possible ‘to evolve an electronic brain’ that would perform functions analogous to those carried out ‘by the semi-automatic portion of the human brain’. Machines actually in use could possess ‘memory’; some were being designed to exhibit ‘those hitherto human prerogatives’ of ‘choice and judgement’, he said, and even to play ‘a rather mediocre game of chess’. A ‘revolution of the mind’ was upon us, Mountbatten announced.

In the late 1940s and early 1950s, reactions in the media to the new electronic brains ranged from wild optimism to fear. Maurice Wilkes predicted that his EDSAC computer might make ‘sensational discoveries in engineering, astronomy, and atomic physics’ and even solve ‘philosophical problems too complicated for the human mind’. In contrast, newspapers horrified their readers with the prospect that ‘the controlled monster’ would become ‘the monster in control’, reducing human beings to ‘degenerate serfs’. Humans might become extinct, the ‘victims of their own brain products’, as one newspaper article put it. As nowadays, there were fears of job losses. The papers claimed that thinking machines would ‘make many people now essential redundant’; humans would at best be ‘moronic button-pushers, lever-pullers, and dial-watchers’, at the service of an ‘aristocracy of super-minds’. Like Musk, pundits compared computer intelligence with the nuclear threat: when a pilot model of Turing’s ACE computer was publicly displayed, the press found it to be ‘pretty nearly as frightening as the atom bomb’.

In November 1946, the press reported Turing as envisaging that ‘it would be as easy to ask the machine a question as to ask a man’, and later as saying that the Manchester computer ‘will think out its own moves’ when playing chess. At the time these were dramatic claims. Even so, Turing satirized both dystopian and utopian views of the future of computer intelligence. Making fun of the latter, he said in the papers that, although he did believe that a machine could write poetry, ‘a sonnet written by a machine will be better appreciated by another machine’. Mocking the former, he suggested that superhuman-level AI would soon emerge, since there ‘would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits’. Even if we could defend ourselves from hostile intelligences by ‘turning off the power at strategic moments, we should, as a species, feel greatly humbled’, Turing wrote. Signalling—by citing Samuel Butler’s satirical Erewhon—that his words were ironic, Turing added: ‘A similar danger and humiliation threatens us from the possibility that we might be superseded by the pig or the rat’.

In 1946 Mountbatten said that the responsibilities facing scientists were ‘formidable and serious’, and the concern nowadays about weapons applications of AI is real and pressing. The views of many current dystopians, however, seem more akin to the targets of Turing’s mockery. For instance, Russell, with Stephen Hawking and others, declared in newspapers on both sides of the Atlantic that creating AI might be ‘the last’ thing that humans ever do; and at the turn of the millennium computer scientist Bill Joy proclaimed that ‘we are on the cusp of the further perfection of extreme evil’. Modern utopians are no less excessive. They say that AI will give humans a ‘blissful’ and ‘truly meaningful’ future; Ray Kurzweil, for example, goes so far as to claim that (around 2045) humans will become ‘immortal’.

Amid the 1940s hysteria surrounding computer intelligence, Turing was a voice of sanity. Today such voices (for example, robotics pioneer Rod Brooks) are in danger of being drowned out by horror fictions and prophecies of a quasi-religious paradise. In 1951 Turing said that the prospect of machines superseding humans was ‘remote’, and a year later that it would be ‘at least 100 years’ before a computer even passed his test for thinking. Given actual progress in AI since 1952, there is no reason to doubt his predictions. No-one claims that today’s embryonic space travel poses a catastrophic risk to humanity just because future astronauts might import incurable diseases from distant solar systems, and we should regard AI similarly. The time to plan for either dystopia or utopia is far off, if it ever comes at all.

This article was adapted from a post by The Turing Conversation, a blog hosted by ETH Zürich.

Featured image credit: ‘Best Friends’ by Andy Kelly. Public Domain via Unsplash.
