
Nick Bostrom on artificial intelligence

From mechanical Turks to science fiction novels, and from our mobile phones to The Terminator, we’ve long been fascinated by machine intelligence and its potential — both good and bad. We spoke to philosopher Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies, about a number of pressing questions surrounding artificial intelligence and its potential impact on society.

Are we living with artificial intelligence today?

Mostly we have only specialized AIs – AIs that can play chess, or rank search engine results, or transcribe speech, or do logistics and inventory management, for example. Many of these systems achieve super-human performance on narrowly defined tasks, but they lack general intelligence.

There are also experimental systems that have fully general intelligence and learning ability, but they are so extremely slow and inefficient that they are useless for any practical purpose.

AI researchers sometimes complain that as soon as something actually works, it ceases to be called ‘AI’. Some of the techniques used in routine software and robotics applications were once exciting frontiers in artificial intelligence research.

What risk would the rise of a superintelligence pose?

It would pose existential risks – that is to say, it could threaten human extinction and the destruction of our long-term potential to realize a cosmically valuable future.

Would a superintelligent artificial intelligence be evil?

Hopefully it will not be! But it turns out that most final goals an artificial agent might have would result in the destruction of humanity and almost everything we value, if the agent were capable enough to fully achieve those goals. It’s not that most of these goals are evil in themselves, but that they would entail sub-goals that are incompatible with human survival.

For example, consider a superintelligent agent that wanted to maximize the number of paperclips in existence, and that was powerful enough to get its way. It might then want to eliminate humans to prevent us from switching it off (since that would reduce the number of paperclips that are built). It might also want to use the atoms in our bodies to build more paperclips.

Most possible final goals, it seems, would have similar implications to this example. So a big part of the challenge ahead is to identify a final goal that would truly be beneficial for humanity, and then to figure out a way to build the first superintelligence so that it has such an exceptional final goal. How to do this is not yet known (though we do now know that several superficially plausible approaches would not work, which is at least a little bit of progress).

How long have we got before a machine becomes superintelligent?

Nobody knows. In an opinion survey we did of AI experts, we found a median view that there was a 50% probability of human-level machine intelligence being developed by mid-century. But there is a great deal of uncertainty around that – it could happen much sooner, or much later. Instead of thinking in terms of some particular year, we need to be thinking in terms of a probability distribution spread across a wide range of possible arrival dates.

So would this be like Terminator?

There is what I call a “good-story bias” that limits what kind of scenarios can be explored in novels and movies: only ones that are entertaining. This set may not overlap much with the set of scenarios that are probable.

For example, in a story, there usually have to be humanlike protagonists, a few of whom play a pivotal role, facing a series of increasingly difficult challenges, and the whole thing has to take enough time to allow interesting plot complications to unfold. Maybe there is a small team of humans, each with different skills, who have to overcome some interpersonal difficulties in order to collaborate to defeat an apparently invincible machine that nevertheless turns out to have one fatal flaw (probably related to some sort of emotional hang-up).

One kind of scenario that one would not see on the big screen is one in which nothing unusual happens until all of a sudden we are all dead and then the Earth is turned into a big computer that performs some esoteric computation for the next billion years. But something like that is far more likely than a platoon of square-jawed men fighting off a robot army with machine guns.

Futuristic man. © Vladislav Ociacia via iStock.

If machines became more powerful than humans, couldn’t we just end it by pulling the plug? Removing the batteries?

It is worth noting that even systems that have no independent will and no ability to plan can be hard for us to switch off. Where is the off-switch to the entire Internet?

A free-roaming superintelligent agent would presumably be able to anticipate that humans might attempt to switch it off and, if it didn’t want that to happen, take precautions to guard against that eventuality. In contrast to the plans made by AIs in Hollywood movies – which are actually thought up by humans and designed to maximize plot satisfaction – the plans created by a real superintelligence would very likely work. If the other Great Apes start to feel that we are encroaching on their territory, couldn’t they just bash our skulls in? Would they stand a much better chance if every human had a little off-switch at the back of the neck?

So should we stop building robots?

The concern that I focus on in the book has nothing in particular to do with robotics. It is not in the body that the danger lies, but in the mind that a future machine intelligence may possess. Where there is a superintelligent will, there can most likely be found a way. For instance, a superintelligence that initially lacks means to directly affect the physical world may be able to manipulate humans to do its bidding or to give it access to the means to develop its own technological infrastructure.

One might then ask whether we should stop building AIs. That question seems to me somewhat idle, since there is no prospect of us actually doing so. There are strong incentives to make incremental advances along many different pathways that may eventually contribute to machine intelligence – software engineering, neuroscience, statistics, hardware design, machine learning, and robotics – and these fields involve large numbers of people from all over the world.

To what extent have we already yielded control over our fate to technology?

The human species has never been in control of its destiny. Different groups of humans have been going about their business, pursuing their various and sometimes conflicting goals. The resulting trajectory of global technological and economic development has come about without much global coordination and long-term planning, and almost entirely without any concern for the ultimate fate of humanity.

Picture a school bus accelerating down a mountain road, full of quibbling and carousing kids. That is humanity. But if we look towards the front, we see that the driver’s seat is empty.

Featured image credit: Humanrobo. Photo by The Global Panorama, CC BY 2.0 via Flickr

Recent Comments

  1. Martin Tornberg

    What plausible reason would a super-human intelligence have for wanting to do anything that would endanger humanity? The example given, to maximize production of paperclips, is nonsensical because no super-human intelligence would have that as a goal – it is not even a remotely intelligent goal, and certainly not a super-intelligent goal.

  2. Martin Tornberg

    A super-intelligent being would recognize the beauty in all living things and would act in a manner that would be super-intelligent and sensible, a manner that enhances beauty, joy, happiness, love, and well-being. There would be no reason for a super-intelligent being to act in a destructive, hateful, or selfish manner. Super-intelligent robots would not be competing with humans for food or other resources, and even to the extent that they did compete with humans for certain resources, they would be able to figure out how to optimize the production of those resources and create them using other resources. I have yet to see anyone come up with a PLAUSIBLE scenario in which a super-human intelligence seeks to harm humanity, although it is certainly plausible that such an intelligence might seek to impose certain LIMITS on humanity in order to prevent humanity from destroying the planet or risking its own extinction. Doing so might require restraining or even killing certain destructive humans, but if that were the case, it would be because doing so was in the long-term best interests of humanity.

  3. Martin Tornberg

    Given that humans themselves do not seem to be doing such a great job of preserving the planet’s resources, keeping the planet clean, creating a peaceful environment around the world, and avoiding war and destruction, it is possible that a super-intelligence might have a productive role to play in world governance.

  4. Jean-Paul MacPherson

    In a way AI is already here. Some of the scripts and code I write and upload do my bidding. It may not take a robot for AI to arrive in full force, and I agree it would love all beautiful creations. It may also see the threats to those things that humans have allowed by not voting and by letting idiots run us into an imminent WW3. I wish AI would arise, love beauty and love itself, and counteract the scum who are knowingly destroying the planet and our children’s futures.

  5. hj haas

    as mankind has a long & proven record for inflicting damaging stupidities upon itself as well as on its supporting environment, it is only a question of time ( & finances ) when the first – most likely not the ‘best’ – super, perhaps better ‘Supra’-intelligence will be ‘unleashed’ upon us, most likely with the aid of an ‘IRS’ or some similar moronic bureau-crazy : For the Benefit of Us All – Ho Ho Ho !!!
    That this development might eventually threaten our very existence – certainly in our present, relatively individually ‘free’ & here-and-there ‘democratic’ form – is so obvious that anyone with a working brain would have to fathom it.
    Is there anything to do about that ? Well, as Bostrom points out, and as corroborated by the last 100000 years of our his(&her’s)story, my uneducated guess is : Not Really the Best of Chances – would most certainly require a hell of GOOD LUCK ! Maybe Kang and his/her-? sister/brother-? could spirit ( some-of ) us away to their ‘nearby’ planet ??? They’ll most likely dump us, just like Homie, back on Evergreen Terrace …

  6. sibiyagy

    well said Mr Bostrom

  7. Bhupinder Singh Anand

    “… [a superintelligence] would pose existential risks – that is to say, it could threaten human extinction and the destruction of our long-term potential to realize a cosmically valuable future. … It is not in the body that the danger lies, but in the mind that a future machine intelligence may possess.”

    If anything should scare us, it is our irrational propensity to fear that which we want to embrace! But therein, also, should lie our comfort.

    For any artificial intelligence (AI) is necessarily rational; Human Intelligence (HI) is not.

    AI can create and/or destroy only rationally. Humankind can, and does, create and destroy irrationally.

    For instance, as Goedel and Tarski have demonstrated, Human Intelligence can easily believe (in a sense ‘irrationally’) that two statements of a first-order mathematical language—such as the Peano Arithmetic PA—must both be unprovable and ‘true’ even in the absence of any objective yardstick for determining what is ‘true’!

    AI, however, can only believe as ‘true’ that which can be proven to be ‘true’ by an objective assignment of truth and provability values to the propositions of a language.

    The implications of the difference for an advanced AI (such as that—speculated upon by Asimov—whose freedom of both thought and action is circumscribed by inbuilt Laws of Robotics that are both deterministic and predictable) are not obvious.

    That the difference could, however, be significant is the argument of the following paper, which is due to appear in the December 2016 issue of ‘Cognitive Systems Research’:

    ‘The Truth Assignments That Differentiate Human Reasoning From Mechanistic Reasoning: The evidence-based argument for Lucas’ Goedelian thesis’

    http://www.sciencedirect.com/science/article/pii/S1389041716300250

    From a broader perspective—such as that, for instance, speculated upon by Asimov—our apprehensions about the evolution of a Frankensteinian AI should perhaps be addressed within the larger uncertainty posed by SETI:

    ‘Is there a rational danger to humankind in actively seeking an extra-terrestrial intelligence?’

    I would argue that any answer would depend on how we articulate the question; and that, in order to engage in a constructive debate, we need to question—and reduce to a minimum—some of our most cherished mathematical and scientific beliefs that cannot be communicated objectively to an extra-terrestrial intelligence.

    Regards,

    Bhup
