We have been telling stories about machines with minds for almost three thousand years. In the Iliad, written around 800 BCE, Homer describes the oldest known AI: “golden handmaidens” created by Hephaestus, the disabled god of metalworking. They “seemed like living maidens” with “intelligence… voice and vigour”, and “bustled about supporting their master.” In the Odyssey, Homer also gave us the first autonomous vehicles: the self-sailing ships that take Odysseus home to Ithaca, navigating “by thought.” And he gave us biomimetic robots too, in the pair of silver and gold watchdogs that guard a palace not with teeth and claws but with their “intelligent minds.”
Such stories have been told continually ever since. They come in a wide range of forms: myths and legends, apocryphal tales, film and fiction, and serious-minded speculations about the future. Today more than ever, intelligent machines are staples of blockbusters and bestsellers, from Star Wars and Westworld to Ian McEwan’s Machines Like Me. We might now call such machines “AI” — artificial intelligence — a term coined in 1955. But they have had many other names, each with its own nuances, including “automaton” (since antiquity), “android” (1728), “robot” (1921), and “cyborg” (1960). It seems that thinking machines are, as Claude Lévi-Strauss once said of animals, good to think with.
Why? A few scholars have tried to explain this fascination with humanoid machines. The first was Ernst Jentsch, who claimed in his 1906 essay “On the Psychology of the Uncanny” that “in storytelling, one of the most reliable artistic devices for producing uncanny effects easily is to leave the reader in uncertainty as to whether he has a human person or rather an automaton before him.” He illustrated this with a brief mention of the 1816 short story “The Sandman” by E.T.A. Hoffmann, which features a young man, Nathanael, who is enchanted by the beautiful Olimpia, a neighbour who lives with her father in the house opposite. She is an excellent dancer, but does not speak beyond saying “Ah-ah!” Nonetheless, Nathanael is so distraught when he discovers that the object of his rapture is in fact an automaton, constructed by her “father,” that he commits suicide.
In his essay “The Uncanny” a few years later, Sigmund Freud further developed this idea — and in so doing made that term a staple of literary analysis. But his analysis of “The Sandman” actually diverts attention away from Olimpia’s artificial nature, and towards something quite different: “the theme of the [mythical] ‘Sand-Man’ who tears out children’s eyes.” The notion of the uncanny has nonetheless remained central to thinking about our reaction to human-like machines to this day: the Japanese roboticist Masahiro Mori famously coined the term “uncanny valley” to describe the unsettling effect of being confronted with a machine that is almost, but not quite, human.
However, Minsoo Kang argued in his book Sublime Dreams of Living Machines (2011) that Freud was wrong to focus only on the fears associated with automata, such as being deceived by a machine into thinking that it is human, as in the case of “The Sandman.” Throughout history, people have also attached hopes and positive feelings to automata. For example, E.R. Truitt writes in her book Medieval Robots (2015) about the chateau of Hesdin in medieval France, where automata pulled pranks on unsuspecting visitors; and Star Wars fans might think of the comic relief provided by the robots C-3PO and R2-D2.
Kang’s own view is that the humanoid machine is fascinating because it is “the ultimate categorical anomaly. Its very nature is a series of contradictions, and its purpose is to flaunt its own insoluble paradox.” The uncanny is one aspect of this, in that it challenges the categories of the real and the unreal. But it is not the whole story, as such machines also challenge our divisions between, for example, the living and the dead, or creatures and objects.
Kang’s analysis is surely right — but again, not the whole story. Writer and critic Victoria Nelson added another piece of the puzzle, suggesting that repressed religious beliefs motivate many of the stories we tell about humanoid machines. This too seems right. Classicist Adrienne Mayor describes mythical tales of intelligent machines as “ancient thought experiments” in the potential of technology to transform the human condition. This is another important function, and we argue in the book that stories — in particular, the last hundred years of science fiction — offer the most nuanced explorations available of life with AI.
So one explanation for why humanoid robots are so good to think with is that they can fulfil so many functions, and be by turns unsettling, funny, or thought-provoking. To this we would add another piece: in a way, they can be whatever we want them to be, unbounded by what would count as “realistic” for a human. In this, they can fulfil narrative roles akin to those of gods or demons, embodying archetypes and superlatives: the ruthless, unstoppable killer, the perfect lover, or the cerebral, ultra-rational calculating machine. They allow us to explore extremes, which is one reason why robot stories tend towards the utopian or dystopian. Stories about such machines are therefore always really stories about ourselves: parodying, probing, or problematizing our notions of what it means to be human.
Featured image credit: Pete Linforth via Pixabay.