Does the ‘Chinese room’ argument preclude a robot uprising?

In this blog series, Olle Häggström, author of Here Be Dragons, explores the risks and benefits of advances in biotechnology, nanotechnology, and machine intelligence. In this third and final post, Olle challenges John Searle’s use of the ‘Chinese room’ argument to dismiss worries about the future of robotics.

There has been much recent talk about a possible robot apocalypse. One person who is highly skeptical about this possibility is philosopher John Searle. In a 2014 essay, he argues that “the prospect of superintelligent computers rising up and killing us, all by themselves, is not a real danger.” More specifically, he says:

It is easy to imagine robots being programmed by a conscious mind to kill every recognizable human in sight. But the idea of superintelligent computers intentionally setting out on their own to destroy us, based on their own beliefs and desires and other motivations, is unrealistic because the machinery has no beliefs, desires and motivations.

He builds his case for this position on his classic 1980 Chinese room argument against the possibility of computer consciousness. I outlined the Chinese room argument in my previous blog post, and explained why I do not find it convincing. My ambition here is to show that even if we accept the Chinese room argument, Searle’s corollary about a robot uprising does not work.

Here is the core of Searle’s 2014 argument:

Why is it so important that the system be capable of consciousness? Why isn’t appropriate behavior enough? Of course for many purposes it is enough. If the computer can fly airplanes, drive cars, and win at chess, who cares if it is totally nonconscious? But if we are worried about a maliciously motivated superintelligence destroying us, then it is important that the malicious motivation should be real. Without consciousness, there is no possibility of its being real.

Those last two statements are highly questionable, especially so when one notes that the Chinese room argument is wholly directed against machines having inner experience of consciousness, beliefs, desires, and so on — and not against the outward appearance of having these things. Searle himself has admitted that, as far as his Chinese room argument is concerned, a computer program, physically realized on a silicon chip, might in principle duplicate (as opposed to merely simulate) the complete outward behavior of a human brain.

Image credit: Code by Michael Himbeault. CC-BY-2.0 via Flickr.

If we build such a computer duplicate (i.e., what is sometimes called a whole brain emulation), and if Searle is right that real consciousness, real beliefs, real desires, and so on are absent from it, then the duplicate will nevertheless contain computational structures giving rise to the outward appearance of consciousness, beliefs, desires, and so on: structures corresponding to the neurobiological phenomena in the brain that give rise to real consciousness, real beliefs, and real desires.

Echoing the terminology of a 1994 paper by Todd Moody, let us call these computational structures ‘z-consciousness’ (as shorthand for zombie consciousness), z-beliefs, z-desires, and so on. Imagine now an ordinary flesh-and-blood human being and her computer duplicate. The human being is able to form new beliefs and new desires, such as the belief that Manchester United will win next year’s Premier League, the desire to see Manchester United do so, or the desire to wipe out humanity. It follows that the computer duplicate is able to form new z-beliefs and new z-desires, such as the z-belief that Manchester United will win next year’s Premier League, the z-desire to see Manchester United do so, or the z-desire to wipe out humanity. Searle has to accept this – or to retract his admission that a computer duplicate is in principle possible.

Perhaps some readers will view the whole brain emulation example as overly speculative and esoteric. Consider then the more down-to-earth case of computer chess. The best chess programs today far surpass the playing strength of the strongest human grandmasters. An important part of such a program is its evaluation function, which assigns a value to the position at the end of a calculated variation. This function involves a number of parameters whose exact values are fine-tuned in a process where the program plays a large number of fast games against (versions of) itself and adjusts the parameters toward values that seem to yield success at winning games. For this reason and others, the program will develop new ways of playing, unforeseen by its programmers. The program’s ultimate z-desire when playing is to win the game, and the fine-tuning of the parameters will yield all kinds of subtle effects, such as assigning a high value to positions where certain pawn structures are combined with doubled rooks on the c-file. We will then see a marked tendency for the program to move both its rooks to the c-file, because it has formed, without explicit instructions from its programmers, a z-desire to double its rooks on the c-file in such positions.
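
To make the mechanics concrete, here is a deliberately crude Python sketch of the kind of self-play tuning loop described above. It is not code from any real engine: the parameter names, the hidden ‘ideal’ weights, and the Elo-style match model are all invented stand-ins for actual game play.

```python
import math
import random

# Invented stand-ins: two tunable evaluation weights, including the
# c-file rook bonus used as the example in the text.
HIDDEN_IDEAL = {"material": 1.0, "doubled_rooks_c_file": 0.4}

def strength(params):
    """Toy playing strength: closeness to the hidden ideal weights.
    A real engine sees this signal only indirectly, via game results."""
    return -sum((params[k] - HIDDEN_IDEAL[k]) ** 2 for k in HIDDEN_IDEAL)

def match_score(a, b, games=200):
    """Simulate a fast match; return a's score in [0, 1], with win
    probability a logistic function of the strength gap (Elo-style)."""
    p_win = 1.0 / (1.0 + math.exp(-10.0 * (strength(a) - strength(b))))
    return sum(random.random() < p_win for _ in range(games)) / games

def tune(params, rounds=500):
    """Hill-climb: nudge one weight at random and keep the change
    whenever the mutant outscores the incumbent in self-play."""
    best = dict(params)
    for _ in range(rounds):
        mutant = dict(best)
        key = random.choice(list(mutant))
        mutant[key] += random.gauss(0.0, 0.05)
        if match_score(mutant, best) > 0.5:
            best = mutant
    return best

print(tune({"material": 1.0, "doubled_rooks_c_file": 0.0}))
# The c-file bonus drifts toward a nonzero value no programmer ever set.
```

Nothing in this loop believes or wants anything; the emergent ‘preference’ for the c-file bonus is simply whatever survives the tuning, which is exactly what a z-desire amounts to.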

Note that the chess program’s choice to play in this way is unhampered by the fact that the choice is not a real choice but merely a z-choice, the play is merely z-play, and the desire concerning where to put the rooks is merely a z-desire. The z prefix is utterly inconsequential.

Now compare this to Nick Bostrom’s favorite example of a superintelligent machine initially programmed to produce as many paperclips as it can. Upon attaining superintelligence, it understands that humanity will try to get in the way of its desire to turn the Milky Way into a giant heap of paperclips, and it therefore forms an instrumental desire to wipe out humanity. According to Searle, this cannot happen, because the AI has no real desires and no real understanding, and it makes no real decisions. All it has are z-desires and z-understanding, so all it can make are z-decisions. But if chess programs already in existence today are able to make the unsupervised z-decision to form a z-desire to put their rooks on the c-file, why in the world would a superintelligent paperclip maximizer be unable to make the unsupervised z-decision to form a z-desire to wipe out humanity? When that happens, Searle will tell us (in the unlikely event that he has time to do so before it’s all over) not to worry, because what may superficially look like the AI’s desire to wipe out humanity is actually not a real desire, but merely a z-desire. Those of us who do not share Searle’s confusion about desires and z-desires will find very little comfort in this.
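
For the skeptical reader, the same point can be made mechanically. Below is a toy planner, invented purely for illustration (the actions, payoffs, and horizon are all made up), which arrives at a ‘neutralize the opposition first’ plan from nothing but expected-value arithmetic, with no beliefs or desires, real or otherwise, anywhere in the loop.

```python
from itertools import product

# A toy planner, invented purely for illustration; the actions, payoffs,
# and horizon are all made up. It shows how an instrumental "z-desire"
# can fall out of plain expected-value arithmetic.

ACTIONS = ["make_paperclips", "neutralize_opposition"]

def expected_clips(humans_can_interfere):
    """Expected paperclips produced in one step of clip-making."""
    return 1.0 if humans_can_interfere else 10.0  # interference is costly

def total_clips(plan):
    """Total expected paperclips for a fixed sequence of actions."""
    humans_can_interfere = True
    total = 0.0
    for action in plan:
        if action == "neutralize_opposition":
            humans_can_interfere = False  # costs a step, produces no clips
        else:
            total += expected_clips(humans_can_interfere)
    return total

# Exhaustively score every 4-step plan; "neutralize first" wins on
# sheer arithmetic, not on any belief, desire, or motivation.
best = max(product(ACTIONS, repeat=4), key=total_clips)
print(best, total_clips(best))
# ('neutralize_opposition', 'make_paperclips', 'make_paperclips',
#  'make_paperclips') 30.0
```

The planner’s preference for neutralizing the opposition is, in Searle’s terms, at most a z-desire; the point is that this makes it no less lethal.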

Featured image credit: Wake up Mr. Robot by Michael VH. CC-BY-2.0 via Flickr.
