A chatbot, or ‘chatterbot’, is a computer program designed to engage in conversation through written or spoken text. It was one of the words on the Oxford Dictionaries Word of the Year 2016 shortlist. The idea of a chatbot originates with Alan Turing’s mid-twentieth-century aspiration to build a thinking computer. Turing proposed a test to determine what might count as success in this venture. If, over a five-minute text-only conversation, a human judge could not distinguish the machine from a human, then we might call it a thinker.
In the years since Turing, chatbots have travelled the path from philosophical thought experiment to Internet commonplace. But there’s a gap between the goal of a chatbot that we might call a thinker and what Amazon, Apple, Facebook, and others have done with Turing’s idea. If and when Amazon’s Alexa, Apple’s Siri, or the chatbots on Facebook Messenger pass the Turing Test, they will do so almost as an afterthought. It may not be that those driving the development of chatbots lack the capacity to build machines that pass the Turing Test, but that they have other priorities that they judge to be more important.
Turing gave voice to a sense of wonder about minds whose basic architecture is radically different from our own. What would it be like to share ideas with such a thing? We find this sense of wonder in those who seek to communicate with chimps, dolphins, and parrots. It explains the appeal of the various machine minds that populate science fiction. HAL 9000, the computer in Stanley Kubrick’s movie 2001: A Space Odyssey, presents as an invariant red light centred on a yellow dot. Yet HAL’s statements have more emotional resonance than anything said by the movie’s fairly deadpan humans. We sense its anguish as astronaut Dave shuts it down – “I’m afraid. I’m afraid, Dave. Dave, my mind is going. I can feel it.” HAL seems to be undergoing something more momentous than the progressive loss of capacities we would expect from the incremental shutdown of a machine.
The current proliferation of chatbots is a spread of ersatz humanity. Successful confidence tricksters enlist our willingness to play along with the scenarios they create for us. So it is with some of the deficiencies of today’s chatbots. ELIZA, a 1960s psychotherapist program, did a pretty good job of simulating understanding by mechanically turning elements of its interlocutor’s statements into requests for further information. Sample dialogue – ELIZA: “How are you today … What would you like to discuss?” Human: “I’m feeling a bit sad.” ELIZA: “Do you believe it is normal to be feeling a bit sad?” Human: “No, can you help me?” ELIZA: “Do you want me to be able to help you?” We welcome the opportunity to interact with something that is responsive in some way as we unload our problems. ELIZA has zero propensity to judge even the most shaming confessions. If ELIZA is helping you to feel better, then why spoil things by exposing it as a woeful Turing Test failure?
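ELIZA’s trick of mechanically turning its interlocutor’s statements into requests for further information can be sketched in a few lines. The rules and reflection table below are invented for illustration; Joseph Weizenbaum’s 1966 original used a much richer keyword-ranking script (the famous DOCTOR script), not this toy rule list.

```python
import re

# Swap first- and second-person words so an echoed fragment reads naturally.
# (Hypothetical minimal table, not Weizenbaum's actual transformation list.)
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

# Each rule pairs a pattern with a response template; the captured fragment
# of the user's statement is reflected and slotted back into the reply.
RULES = [
    (re.compile(r"i'?m (.+)", re.I), "Do you believe it is normal to be {0}?"),
    (re.compile(r"can you (.+)", re.I), "Do you want me to be able to {0}?"),
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(statement):
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1).rstrip("?.!")))
    return "Please tell me more."  # fallback when no rule matches

print(respond("I'm feeling a bit sad."))
print(respond("No, can you help me?"))
```

Run on the sample dialogue above, this sketch reproduces ELIZA’s replies: the program never understands a word, yet the reflected fragments give a convincing appearance of attentive listening.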
This willingness to play along with sufficiently human interlocutors may be a consequence of our evolutionary past. Evolutionary psychologists hypothesize that human brains are built with a Hyperactive Agency Detector. In the environments for which humans evolved, the costs of failing to detect an agent when one is present could be ruinously high. Mistakenly detecting an agent when there is none might be inconvenient. Not noticing a spear-carrying adversary could mean death. We are evolutionarily primed to detect agency in rustling trees and unusual cloud formations. This evolutionary legacy guides our interactions with Siri and the rest. We treat Siri as a sometimes helpful, sometimes frustrating agent in spite of her Turing Test fails.
The recent hack and data dump from the Ashley Madison “Life is short. Have an affair” dating site exposed some well-known people. It also revealed a large number of chatbots impersonating sexually available women. Suitors commenced their discussions with potential dates already half seduced by a fetching profile photo. Those who programmed the bots understood that the inquiries of romantically interested men tend to fall into patterns. Dating sites acknowledge the use of chatbots. They offer the disclaimer that their sites offer no guarantee of meeting that special someone – or, given Ashley Madison’s advertised rationale, of cheating on that special someone. They emphasize that their purpose is “entertainment”. This makes Ashley Madison seem less like a dating site and more like an update of the 1980s computer game Leisure Suit Larry, in which players choose from a range of pre-programmed statements in the hope of prompting a female character to partially disrobe. Sexually oriented dating sites may be more interested in chatbots capable of a narrow range of sexually suggestive banter than in bots capable of passing the Turing Test, which might prefer to discuss the philosophically vexed issue of rights for artificial beings.
It’s instructive to take the long view of all of this. It’s fun to chat with chatbots. But are they precursors of a future in which we increasingly accept ersatz humanity as a more convenient, cheaper, more entertaining substitute for the real thing? Will our future be one in which we interact with artificial beings that do not pass the Turing Test but that reliably engage with us in the ways that Apple and Ashley Madison know how to market to us?
Featured image credit: AI-tech, by geralt. CC0 Public Domain via Pixabay.