
On the Singularity, emotions, and computer consciousness

The term ‘artificial intelligence’ was coined as long ago as 1956 to describe ‘the science and engineering of making intelligent machines’. The work that has happened in the subject since then has had enormous impact. Margaret Boden is a Research Professor of Cognitive Science at the University of Sussex, and one of the best known figures in the field of Artificial Intelligence. We put four key questions to her about this exciting area of research.

Could AI surpass human intelligence?

In principle, probably. That’s because there’s nothing magical about the mind/brain. It works according to (still largely unknown) scientific principles that could conceivably be simulated in computers. If AI could equal human intelligence, it could probably also surpass it.

Some people believe in ‘the Singularity’: a point at which AI will surpass human intelligence and ‘the robots will take over’. Some believers even predict that it will happen within a few decades. This notion is hugely controversial, and in my opinion, it’s nonsense. We are nowhere near simulating general human intelligence—which is what is presupposed by talk of the Singularity.

That doesn’t mean that we shouldn’t already be worrying about certain dangers posed by AI.

Could AI even equal human intelligence?

Yes, in principle. In practice, however, it probably won’t. The project is too difficult, and also too expensive.

Hugely increased computer power and data storage will certainly come. But they won’t be enough. We need powerful ideas, not just powerful hardware. Truly human-level AI would require a complete theoretical understanding of all aspects of human psychology.

Current AI can perform a host of tasks with an extraordinary level of success, often far beyond what unaided humans can do. But all of these tasks are highly specialist. In other words, today’s computer intelligence is very narrow. Or rather, there are lots of computer intelligences, each of which is very narrow.

A single, integrated intelligence—like that of human beings—was a prime goal of the 1950s/1960s AI pioneers. They wanted to build systems that could use vision, language, learning, creativity, and motor control—all functioning across the board (i.e. with respect to many different sorts of problem), and all cooperating with each other when necessary.

We’re nowhere near that. Most (though not quite all) AI researchers have given up on that dream, turning instead to more specialized tasks.

Could a computer be “conscious”?

No-one knows, because the concept of consciousness isn’t well understood.

‘Functional’ consciousness includes various distinctions about information processing. For instance: attending to something versus ignoring it; being receptive to incoming stimuli versus being unaffected by them; thinking deliberately about something versus reacting unthinkingly; and so on. All these could be modelled in AI.
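As a very rough, hypothetical illustration of what ‘modelling’ such distinctions might look like, here is a toy Python sketch of an agent that attends to some stimuli and ignores others, is either receptive or unaffected, and handles attended stimuli either reactively or deliberately. The names and thresholds are invented for illustration only; they are not taken from any actual system.

```python
from dataclasses import dataclass

@dataclass
class Stimulus:
    label: str
    salience: float   # how strongly it grabs attention (0..1)
    familiar: bool    # familiar stimuli can be handled reactively

class ToyAgent:
    """Illustrates three functional distinctions: attending vs ignoring,
    being receptive vs unaffected, and reacting vs deliberating."""

    def __init__(self, attention_threshold=0.5, receptive=True):
        self.attention_threshold = attention_threshold
        self.receptive = receptive  # if False, incoming stimuli are simply dropped

    def process(self, stimulus: Stimulus) -> str:
        if not self.receptive:
            return f"{stimulus.label}: unaffected (not receptive)"
        if stimulus.salience < self.attention_threshold:
            return f"{stimulus.label}: ignored (below attention threshold)"
        if stimulus.familiar:
            return f"{stimulus.label}: handled reactively (no deliberation)"
        return f"{stimulus.label}: deliberated upon (novel, attended stimulus)"

agent = ToyAgent()
for s in [Stimulus("ticking clock", 0.2, True),
          Stimulus("doorbell", 0.7, True),
          Stimulus("smoke smell", 0.9, False)]:
    print(agent.process(s))
```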

Indeed, all these aspects of functional consciousness have already been modelled, up to a point. There are several interesting ‘machine consciousness’ systems. One example is LIDA—a complex project/program that’s based on a neuropsychological approach called Global Workspace Theory (GWT). LIDA is a ‘project’ in the sense that it is largely a verbally described plan for a computational system based on GWT. It’s also a ‘program’, in the sense that part of it has actually been implemented, and can be used for solving various sorts of problem.

For the record, the neuropsychologist who first formulated GWT had been inspired by ideas drawn from early AI, called ‘blackboard architectures’. This is one of many examples of mutual influence between AI and theoretical psychology and/or neuroscience.
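To give a flavour of the architectural idea shared by GWT and the older blackboard systems, here is a highly simplified, hypothetical sketch: specialist processes bid for access to a shared workspace, and the winning content is broadcast back to all of them. This is not LIDA’s actual design, only an illustration of the competition-and-broadcast pattern, with invented names and random bids.

```python
import random

class Specialist:
    """A narrow process that bids to put its content into the global workspace."""
    def __init__(self, name):
        self.name = name
        self.received = []          # broadcasts this specialist has seen

    def propose(self):
        # In a real system the bid would reflect relevance or urgency;
        # here it is random, purely for illustration.
        return (random.random(), f"{self.name}: content")

    def receive(self, broadcast):
        self.received.append(broadcast)

class GlobalWorkspace:
    """A competition-and-broadcast cycle, loosely in the spirit of GWT/blackboards."""
    def __init__(self, specialists):
        self.specialists = specialists

    def cycle(self):
        # 1. All specialists post candidate content with an activation level.
        bids = [s.propose() for s in self.specialists]
        # 2. The most active content wins the workspace.
        _, winner = max(bids, key=lambda b: b[0])
        # 3. The winning content is broadcast globally to every specialist.
        for s in self.specialists:
            s.receive(winner)
        return winner

workspace = GlobalWorkspace([Specialist("vision"),
                             Specialist("language"),
                             Specialist("motor")])
for _ in range(3):
    print("broadcast:", workspace.cycle())
```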

But ‘phenomenal’ consciousness, or experience—the taste of sugar, the smell of perfume, the terror on seeing a tiger or a forest fire, etc.—is seemingly different. Hardly any philosophers (or psychologists) claim to understand what this is. And those few who do are believed by almost no-one else.

In short, this is a philosophical mystery, not just a scientific one. If we don’t even understand just what phenomenal consciousness is, nor how it’s possible for it to arise in human minds, we’re in no position to assert or to deny its possibility in computers.

Could AI machines have emotions?

Emotions usually involve conscious feelings. But they aren’t only feelings. In addition, they are information-processing mechanisms that have evolved in multi-motive creatures to schedule actions in the pursuit of various (and sometimes conflicting) goals.

In other words, emotions involve both phenomenal and functional consciousness.

And as phenomenal consciousness isn’t understood by philosophers, never mind scientists (see above), no-one knows whether an AI system could have emotions in that sense.

However, the functional aspect of emotions is, in principle, open to AI modelling.

Most AI models of emotion that exist today are theoretically shallow, because their builders don’t appreciate the functional complexities involved. Even those (few) researchers who do recognize this complexity can’t yet implement it fully in working systems.

The best example is a model of certain aspects of anxiety. A program called MINDER simulates a nursemaid looking after several babies, each of whom is an autonomous agent acting in largely unpredictable ways. She has to feed them, watch them, keep them safe from falling into ditches, and rescue them if they do. (A real nursemaid, of course, also has to clean them, cuddle them, speak to them, and so on.) But these goals can conflict: she only has two hands, and can’t be in two places at once. Some are urgent, some aren’t. Some are necessary, some can be ignored. Some can be reactivated after time passes, whereas some can’t. These aspects of the nursemaid’s anxiety-arousing situation are distinguished by the program, and her actions are scheduled accordingly.
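The scheduling problem MINDER addresses can be conveyed with a much cruder sketch than the real program: a loop that ranks competing goals by urgency, drops goals that can safely be ignored, and defers (rather than discards) goals that can be reactivated later. All names and weights below are invented for illustration; they are not MINDER’s actual representation.

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Goal:
    priority: float                 # lower value = attended to sooner
    name: str = field(compare=False)
    urgent: bool = field(compare=False, default=False)
    necessary: bool = field(compare=False, default=True)
    deferrable: bool = field(compare=False, default=False)

def schedule(goals):
    """Pick an order of action from competing goals (the nursemaid has
    only two hands, so only one goal can be pursued at a time)."""
    queue = []
    for g in goals:
        if not g.necessary and not g.urgent:
            continue                       # safely ignorable goals are dropped
        heapq.heappush(queue, g)
    plan, deferred = [], []
    while queue:
        g = heapq.heappop(queue)
        if not g.urgent and g.deferrable:
            deferred.append(g)             # can be reactivated after time passes
        else:
            plan.append(g)
    return plan + deferred                 # urgent/necessary goals first

goals = [
    Goal(priority=0.1, name="rescue baby from ditch", urgent=True),
    Goal(priority=0.5, name="feed hungry baby", urgent=True),
    Goal(priority=0.7, name="watch the others", deferrable=True),
    Goal(priority=0.9, name="tidy the nursery", necessary=False),
]
for g in schedule(goals):
    print(g.name)
```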

Featured image credit: Tissue Network. CC0 public domain via Pixabay.

Recent Comments

  1. Kevin Derby

    “Could AI machines have emotions?”

    If I can program an AI to simulate emotions effectively enough that you cannot tell from the outside whether it has “Real” emotions or not, then does it really matter if they are real?

  2. Alan Ragsdale

    I agree that it will not matter whether (or when) AI emotions are synthetically generated or can be created organically somehow. It is the same situation we humans face today. None of us can know for sure that any other human has emotions or consciousness; they only seem to, judging by how they act.

  3. Traruh Synred

    https://en.wikipedia.org/wiki/ELIZA

    Programs like this have apparently passed the Turing test at least with some people.

    It says more about the Turing test and shrinks than it does about the prospects for ‘true AI’.

  4. Philippe C

    Very interesting. I agree with every single point and disagree with the general conclusion. What is missing is the power of the exponential!

  5. Mark Blitstein

    Emotional mimicry that is functionally complete would have to include algorithms to simulate chemical mediation from both internal sources like endorphin releases, and external sources like pheromones. These mediations would have to be conditioned by feedback to simulate learned patterns. In healthy individuals, fight-or-flight chemical releases always produce the intended visceral response even if the stimulus hadn’t been previously encountered, but for that to be translated as fear requires coupling to a real or imagined threat. Logic gates replicating neurons is a starting point.


  6. Andrew

    Just a different view on consciousness
    https://www.codeproject.com/Articles/1221003/Consciousness-and-a-model-of-intelligence-Neural
