
Let us not run blindfolded into the minefield of future technologies

In this blog series, Olle Häggström, author of Here Be Dragons, explores the risks and benefits of advances in biotechnology, nanotechnology, and machine intelligence. In this first post, which looks specifically at the uncertainty of a technological future, Olle outlines the challenges of developing these double-edged technologies.

There is a widely held conception that progress in science and technology is our salvation, and the more of it, the better. This is the default assumption not only among the general public, but also in the research community, including university administrations and research funding agencies, all the way up to government ministries. I believe the assumption to be wrong, and very dangerous. There is no denying that advances in science and technology have brought us prosperity and improved our lives tremendously, or that further advances have the potential to bring glorious further benefits. But there is a flip side: some of the advances that lie ahead may actually make us worse off and, in extreme cases, cause the extinction of the human race.

I have always been (and still am) drawn to the romantic ideal of scientific progress for its own sake – expanding the boundaries of humanity’s collective body of knowledge as a worthy goal in itself, above and beyond practical applicability and impact on the economy and human welfare. Still, this ideal is not the only aspect that counts, and perhaps, when a line of research is seen to create global risks that are not outweighed by potential benefits, we ought to think again about whether to proceed in that particular direction.

Imagine the world of the early 20th century and what life was like then, and compare it to the world of today: the difference is enormous. Much of this difference is due to advances in science and technology. In view of what mobile phone technology and the Internet, for example, have done to our lives in the last couple of decades, there is no slowdown in sight. The 21st century is likely to turn out even more radically transformative than the 20th. Areas likely to have a huge impact include synthetic biology, nanotechnology, artificial intelligence (AI) and robotics, plus various ways to enhance our cognitive abilities or otherwise modify human nature by pharmacological or genetic means, or via brain-machine interfaces.

Image credit: 2014-036 rocket science by Robert Couse-Baker. CC-BY-2.0 via Flickr.

The potential gains from these technologies are virtually unlimited, but the same holds true for the risks. On Steve Omohundro's conservative reading of a recent McKinsey report, AI and robotics are expected to create $50 trillion of value during the next 10 years. At the same time, there is a growing and well-founded concern over what this will do to the labor market, as well as (in the longer run) over whether humanity will be able to remain in control once we have created machines that outperform us in terms of general intelligence. This last scenario may sound extreme, but it deserves to be taken seriously. Several prominent thinkers have called attention to it in the last couple of years, including Nick Bostrom, Max Tegmark, Stephen Hawking, and Bill Gates. Other new technologies are similarly double-edged, and among the greatest challenges during the next couple of decades will be how to prevent advances in synthetic biology from leading to a situation where terrorists have access to enormously potent biological weapons of mass destruction.

Our future is uncertain, and we have nothing to gain from being fatalistic about it. Actions taken today may very well mean the difference between, on one hand, an outcome where we are the seed of a civilization that survives and thrives for millions or even billions of years, perhaps also expanding to the stars, and, on the other hand, one in which doom is forthcoming. Any serious attempt to quantify how much value is at stake is bound to give astounding answers.
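To see why, consider a rough back-of-envelope illustration in the spirit of Nick Bostrom's estimates (the figures here are assumptions chosen for illustration, not taken from this post). Suppose Earth can sustain on the order of 10^10 people at a time, and suppose a surviving civilization lasts another billion years, i.e., 10^7 centuries. Counting roughly one lifetime per century gives 10^10 × 10^7 = 10^17 potential future human lives, and that is before counting any expansion to the stars. Against numbers of this magnitude, even a minute reduction in the probability of extinction corresponds to an enormous amount of expected value.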

What to do? Halting scientific and technological progress is not an attractive option, or even (realistically speaking) an option at all. We urgently need to find ways to push such progress in directions that are likely to bring us good, and away from those directions that spell doom. Which technologies should we pursue most eagerly, which ones are less important, with which ones should we tread lightly, and might there be some that should be avoided entirely? Our understanding of these issues is mostly woefully insufficient (except in a few particularly clear-cut cases, such as the predominantly malign effects of developing AI technology for autonomous weapons), and needs to be improved. We also need to find safe ways to handle double-edged technologies. The current situation is tantamount to running blindfolded and at full speed into a minefield.

So who is responsible for rectifying this dangerous situation? There are many relevant agents here – individual scientists, research funding agencies, governments and so on, as well as ordinary citizens – in a complex network of interactions. I believe we all share some burden of responsibility. It might be beneficial to set up some international body, perhaps somewhat along the lines of the IPCC (the Intergovernmental Panel on Climate Change), with the task of summarizing state-of-the-art knowledge about the likely benefits and risks of the various technologies, and offering policy recommendations.

Header image credit: Keyboard by Jeroen Bennink. CC-BY-2.0 via Flickr.
