Oxford University Press's
Academic Insights for the Thinking World

Can a robot be conscious?

In this blog series, Olle Häggström, author of Here Be Dragons, explores the risks and benefits of advances in biotechnology, nanotechnology, and machine intelligence. In this second post, Olle explores the computational theory of mind concept.

Can a robot be conscious? I will try to discuss this without getting bogged down in the rather thorny issue of what consciousness really is. Instead, let me first address whether robot consciousness is an important topic to think about.

At first sight, it may seem unimportant. Robots will affect us only through their outward behavior. That behavior may be more or less along the lines of what we tend to think of as accompanying consciousness, but given the behavior, its consequences for us are the same whether or not it really is accompanied by consciousness.

On the other hand, even if robot consciousness does not affect us, it does, arguably, affect the robots. If sufficiently advanced robots are conscious, then it might give us reason to consider it morally wrong to, e.g., keep them as slaves. Robot consciousness might also affect the valuation of certain kinds of ‘robot apocalypse.’ Suppose that at some point robots overtake our position as the most intelligent creatures on the planet, and that they eliminate us and go on to colonize the universe. If robots lack consciousness, the scenario seems dismal – a wasted universe. But if they have consciousness, coupled with a capacity for pleasurable experiences, then one might plausibly look more favorably on such a scenario.

Also, consider the futuristic prospect of uploading our minds to machines, as a means to liberate us from our weak and fragile bodies, to make travel as easy as digital file transmission, to allow us to inhabit wonderful virtual realities, and to make us practically immortal by storing safely located backup copies of ourselves. If machines lack consciousness, the whole uploading idea seems rather pointless.

Image credit: Asimo at a Honda factory by Vanillase. CC-BY-SA 3.0 via Wikimedia Commons.

To someone with a naturalistic worldview, machine consciousness is, in one sense, obviously possible. Namely, we know that there exist configurations of matter that give rise to consciousness (our own brains being the obvious example), and there is no reason to believe that it would be impossible, even in principle, to artificially construct configurations of matter that have the properties needed. Once we’ve done that, we’ve built a conscious machine.

So let us narrow the question down to whether electronic computers, made of silicon and transistors, can be conscious. For this, we need some understanding of what sort of configurations of matter give rise to consciousness – what is the relevant property? There is much disagreement among philosophers of mind here. The most common idea in favor of the possibility of conscious computers is the so-called ‘computational theory of mind’ (CTOM), which holds that consciousness arises as soon as the right kinds of algorithms or computations are implemented, regardless of the material substrate, be it a biological nervous system or an electronic computer, or something else. What ‘the right kinds’ are remains an open question.

Since a sufficiently detailed computer simulation of my brain would instantiate the same algorithms and computations, CTOM seems to promise the possibility of computer consciousness. Personally, I find CTOM quite attractive. Suppose I am having coffee and a chat with a friend. I judge her to be conscious, and I base this judgment on her outward behavior, not on any specifics of her internal anatomy. If (to my great surprise) she were to open her skull and reveal that her head contains not an ordinary human brain but one made of computer chips, I would not change my mind as to whether she is conscious. If this intuitive way in which we ascribe consciousness to others is correct, then consciousness is substrate-independent, and CTOM or something like it is true. However, this argument for CTOM is far from conclusive, because our intuition might be wrong.

In the recent anthology Intelligence Unbound: The Future of Uploaded and Machine Minds, philosopher David Chalmers discusses uploading in the context of CTOM. In the same anthology, his colleague Massimo Pigliucci complains that Chalmers proceeds “as if we had a decent theory of consciousness, and by that I mean a decent neurobiological theory.” But is it really fair to hold it against CTOM that it is not a neurobiological theory? Pigliucci has exactly one observation of a consciousness, namely his own. What property of his brain is it that produces consciousness? What is the appropriate level of generality here? Because his brain is a neurobiological entity, Pigliucci takes for granted that neurobiology provides the correct generalization. But ‘neurobiological entity’ is not the only category into which his brain fits; there are others, such as ‘computational device’ (preferred by adherents of CTOM) or simply ‘material object’ (preferred by panpsychists). Or perhaps the phenomenon of consciousness does not generalize at all beyond his own brain (a solipsist position). At present, we simply do not know what the right level of generality is, and any discussion of the fundamental origin of consciousness needs to begin by admitting this.

A more famous argument against CTOM is philosopher John Searle’s Chinese room from 1980. Suppose CTOM is true. Then, if we write a computer program that in sufficient detail simulates the brain of someone who knows Chinese, the computer attains both consciousness and true understanding of Chinese. Alternatively, we can accomplish the same thing by setting up a room with two windows (one for input of Chinese symbols, and one for output of other Chinese symbols, serving as replies). Inside the room sits Searle himself together with giant stacks of paper, some of which contain detailed instructions (corresponding to the computer program) for simple manipulation of Chinese symbols. But Searle still does not understand Chinese; he merely follows the instructions mindlessly. So, Searle concludes, CTOM has it wrong.
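To make the setup concrete, here is a minimal sketch in Python of the kind of purely syntactic rule-following the room performs. The rule table and the fallback reply are invented for illustration; a simulation of an actual Chinese speaker would require an astronomically larger rule set.

    # A toy 'Chinese room': reply by blind rule lookup, with no appeal to meaning.
    # The rules below are hypothetical and purely illustrative.
    RULES = {
        "你好": "你好! 你好吗?",        # greeting in, greeting out (hypothetical rule)
        "你会说中文吗?": "当然会。",    # "Do you speak Chinese?" in, "Of course." out
    }

    def chinese_room(input_symbols: str) -> str:
        # Look the symbols up in the instruction book. Nothing here consults
        # what any symbol means, only what it looks like.
        return RULES.get(input_symbols, "请再说一遍。")  # fallback: "Please say that again."

    # The person (or processor) executing this lookup need not understand Chinese;
    # the disputed question is whether the system as a whole does.
    print(chinese_room("你好"))

The point of the sketch is that the executor of the rules plays the same role as a CPU: the disagreement below is over where, if anywhere, understanding resides in such a setup.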

There are various replies to the Chinese room argument. The so-called ‘systems reply’ admits that Searle himself does not understand Chinese, but points out that he is just a component of the system, and holds that the system (the room, including its contents) does understand Chinese.

In response to this, Searle modifies the thought experiment by getting rid of the room and instead internalizing all the instructions through rote learning. He will, he maintains, still not understand Chinese, and this time there is nothing to the system beyond Searle himself, leaving no room for the systems reply.

But is it really true that Searle will not understand Chinese? To me the scenario looks very much like a case of multiple personality disorder. We have two persons – English-speaking Searle, and Chinese-speaking Searle – inhabiting the same body. English-speaking Searle claims not to understand a word of Chinese, while Chinese-speaking Searle claims to have such understanding. Why in the world should we believe the former but not the latter?

The Chinese room is a beautiful thought experiment, but it fails as a refutation of CTOM. Then again, CTOM could still be wrong. We have yet to figure out the nature of consciousness and its place in the physical universe.

Featured image credit: Banksy 28 October installment from “Better Out Than In” New York City residency by Scott Lynch. CC BY-SA 2.0 via Wikimedia Commons.

Recent Comments

  1. Rob van Olmen

    I would postulate that consciousness arises from the free will to choose. A second postulate is that free choice is fundamentally, and without exception, linked to a quantum effect called the collapse of the quantum wave function.
    In our brains, the connections between neurons have quantum-mechanical properties. That is where our consciousness is generated.
    A computer can only be conscious if it is a quantum computer.


  2. Dan

    CTOM is a pure assumption. And a wrong one: there is a biological substrate, that of the higher mammals, the apes, and it lacks consciousness. Why do humans have it while the great apes don’t? As long as there is no clear answer, how can CTOM provide a scientific foundation?
