As Artificial Intelligence technologies enter into more and more facets of our everyday life, we are growing accustomed to the idea of machines talking directly to us. Voice assistants such as Amazon’s Alexa and Apple’s Siri inhabit domestic and professional environments, chatbots are standard in customer care, apps such as Replika offer virtual avatars to provide companionship, and even the Twitter account of NASA’s Perseverance mission sends updates in first person, as though they were posted directly by the rover from the surface of Mars.
These new situations and opportunities make Alan Turing’s imitation game, commonly known as the Turing test, more relevant than ever before. Only not in the sense in which the test is usually interpreted.
The Turing test, in fact, is generally regarded as a way to assess the proficiency of AI. Its real “message,” however, does not have much to do with the intelligence of machines, but rather with our relationship with them. The question, Turing told us, is not whether machines are able to think. It is whether we believe them to do so: in other words, whether we are prepared to accept the machines’ behaviour as intelligent. Once we take this point of view, we realize that our interactions with AI are shaped by our tendency to project humanity onto things: this is one of the least understood, but most interesting, implications of the Turing test.
AI, seen by humans
The paper that introduced the idea of the test, Turing’s “Computing Machinery and Intelligence,” starts with the question “Can machines think?” Yet Turing immediately declares this question of little use: it would be impossible, he reasons, to reach agreement on what we mean by the word “thinking.” He therefore proposes to replace the question with a practical experiment, the Turing test, in which a human interrogator exchanges written messages with an unknown partner to find out whether that partner is a human or a machine. Although the test is usually discussed as a threshold for deciding whether artificial intelligence has been reached, such an approach misses its most important implication. By assigning to a human interrogator the responsibility of evaluating the machine’s behaviour, Alan Turing refuses to define AI in absolute terms and considers instead how humans perceive and understand AI.
In this sense, Turing’s proposal located the prospects of AI not just in improvements of hardware and software, but in a more complex scenario emerging from the interaction between humans and computers. By placing humans at the centre of its design, the Turing test provided a context wherein AI technologies could be conceived in terms of their credibility to human users.
I believe in you, Alexa: the example of voice assistants
The example of voice assistants, such as Alexa or Siri, helps us appreciate the implications of the Turing test as seen from this point of view. To ensure that voice assistants reply to users with apparent wit, Amazon and Apple assign dedicated creative teams the task of scripting appropriate responses. Their work is aided by data collected from recordings of users’ conversations with these systems, which help the writers anticipate frequent queries.
The seemingly clever replies that result from this work are actually among the least “smart” things Siri can do. Triggering a scripted response is, in fact, exceedingly simple at a technical level: it is closer to dramaturgical scripting than to complex social behaviour on the part of the machine. Yet the ironic tone of many such replies is striking to many users, and is often cited as evidence of the voice assistants’ intelligence.
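To see just how simple such a mechanism can be, consider a minimal sketch in Python: a plain lookup table mapping trigger phrases to canned replies. The triggers and replies here are invented for illustration; this is not how Amazon or Apple actually implement their assistants, only a demonstration that scripted wit requires no language understanding at all.

```python
# A minimal sketch of rule-based scripted replies: a lookup table of
# trigger phrases mapped to canned answers. No language understanding
# is involved. All triggers and replies below are invented examples.

SCRIPTED_REPLIES = {
    "tell me a joke": "I would, but my comic timing is measured in milliseconds.",
    "do you love me": "I have a soft spot for everyone who talks to me.",
    "are you alive": "I'm software, but thanks for asking.",
}

def respond(utterance):
    """Return a scripted reply if the utterance matches a known trigger,
    otherwise None (a real system would fall back to other components)."""
    key = utterance.lower().strip(" ?!.")
    return SCRIPTED_REPLIES.get(key)

print(respond("Do you love me?"))
# A question outside the script simply finds no match:
print(respond("What's the weather?"))
```

The point of the sketch is that the “witty” behaviour lives entirely in the writers’ table, not in the matching code, which is a single dictionary lookup.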
In other words, the least complex feats of AI systems may appear to users as evidence of significant technical achievements in AI. This example shows that our perception of AI technologies does not correspond to their internal functioning: it always depends on the subjectivity of our own gaze and on the all-too-human tendency to attribute sociality and agency to things.
Therefore, what the Turing test, and AI more broadly, call into question is the essence of who we are. But not because, as suggested by posthumanist theory, AI erodes the boundaries between humans and machines. Rather, the key message of the Turing test is that our vulnerability to being deceived is part of what defines us. Humans have a distinct capacity to project intention, intelligence, and emotions onto others. This is a burden but also a resource: after all, it is what makes us capable of entertaining meaningful social interactions with others. But it also makes us prone to project intelligence onto non-human interlocutors that simulate intention, sociality, and emotions.
Featured image by Possessed Photography