Existential risks

So far, advances in technology have been largely positive, improving human life in countless ways. Will future innovations continue to be positive, or are there reasons to believe they could be catastrophic? In this two-part series we will look at some of the work of the philosopher Nick Bostrom and examine risks that could threaten humanity’s very existence.

Niklas Bostrom at University of Oxford
Niklas Bostrom, Swedish philosopher - Future of Humanity Institute, University of Oxford © CC BY-SA 4.0

Nick Bostrom is a Swedish philosopher known for his work on existential risk, the anthropic principle, human enhancement ethics, and superintelligence risks. Basically, he poses and investigates big-picture questions about humanity and its prospects.

Within his work, existential risk has been a common theme. In his own words: “An existential risk is any risk that has the potential to eliminate all of humanity or, at the very least, kill large swaths of the global population, leaving the survivors without sufficient means to rebuild society to current standards of living.”

Until recently, the threats were natural ones, such as asteroid impacts or supervolcanoes. However, in our ever-changing world new threats are emerging, such as nuclear war, biotechnology, Artificial Intelligence, and climate change.

These kinds of threats can be difficult to take seriously. Paul Slovic, a professor of psychology, has done important work showing that people care more about one starving girl than about her and her brother together, and progressively less as the scale increases. This is clearly the reverse of how we should react and seems to be a bug in our moral framework: our empathy fails us by preventing us from relating to the big picture.

The Vulnerable World Hypothesis is a paper by Bostrom that is relevant to these ideas. It introduces the concept of a vulnerable world: roughly, one in which there is some level of technological development at which civilization almost certainly gets devastated by default.

So far, technological advances have been largely good for humanity. The global population has grown at an immense rate, for example, and yet over the last two centuries standards of living and life expectancy have risen.

The paper, however, explores some possible future vulnerabilities, asking whether it would be possible to manage them and, if so, how.

Bostrom uses the analogy of an urn of inventions from which we have been pulling balls. So far we have pulled out many white balls (“inventions with beneficial effects”) and some shades of grey (“moderately harmful or mixed blessings”), but we have not yet pulled out a black ball (“a technology that invariably or by default destroys the civilization that invents it”). Bostrom puts this down to pure luck.

He proposes that if ‘black ball’ inventions exist, we will eventually pull one out, and he reminds the reader that un-invention is impossible: once a ball has been drawn it cannot be put back, so we have no choice but to hope that no black ball exists.
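To make the intuition behind the urn concrete, here is a minimal sketch (not from Bostrom’s paper; the per-draw probability and the numbers of draws are purely illustrative assumptions) showing how even a tiny chance of a black ball on each draw makes eventually pulling one out all but certain as inventions accumulate:

```python
# Illustrative only: if each new invention independently has a small
# probability p of being a "black ball", the chance that at least one
# black ball appears among n inventions is 1 - (1 - p)**n.
def prob_black_ball(p: float, n: int) -> float:
    """Probability of drawing at least one black ball in n draws."""
    return 1 - (1 - p) ** n

# Assumed figures, chosen only to show the trend, not taken from the paper.
for n in (100, 1_000, 10_000, 100_000):
    print(f"{n:>7,} draws: {prob_black_ball(0.0001, n):.2%}")
```

With an assumed 0.01% chance per draw, the cumulative risk is about 1% after 100 draws, roughly 63% after 10,000, and essentially certain after 100,000, which is the sense in which ‘eventually’ does the work in the argument.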

In his paper, Bostrom tells the story of Leo Szilard, who in the 1930s had the idea of a nuclear chain reaction, the basis for both nuclear reactors and nuclear bombs.

Szilard became gravely concerned and saw that his discovery had to be kept secret at all costs. Together with Albert Einstein, he wrote a letter to President Franklin D. Roosevelt, arguing that other physicists likely to stumble upon the idea should be persuaded not to publish anything about nuclear chain reactions or any of the reasoning steps leading to the dangerous discovery. However, the U.S. government, after digesting the information provided by Einstein and Szilard, decided to launch the Manhattan Project in order to weaponize nuclear fission as quickly as possible. The U.S. Air Force subsequently used the bombs it created to destroy Japanese population centers. Many of the Manhattan Project scientists had justified their participation by pointing to the mortal danger that would arise if Nazi Germany got the bomb first, yet they continued working on the project after Germany was defeated.

Today, nuclear weapons are something only states can manufacture. But what if making an atomic weapon were as simple as cooking a bag of sand in the microwave? There could be similar technologies which, once discovered, would allow individuals to cause catastrophic damage with ease.

As 3D printing technology advances, it will become easier to print weapons. What if it were possible to “download” the Spanish flu virus? Would it ever be possible to keep a lid on these kinds of dangers? Is there any reason to believe these types of dangers don’t really exist or have we just been lucky so far?

Bostrom suggests the following options as possible solutions:

1. Restrict invention
2. Ensure there are no bad people
3. Extreme policing
4. Effective global governance

It is immediately obvious that the first two options are impossible in practice. The only plausible alternative seems to be a kind of totalitarianism: extreme surveillance with reliable intervention.

It is possible that the urn of inventions contains technologies that would make this kind of surveillance far more feasible, and perhaps less terrifying than it sounds today, but that depends on the order in which the balls come out of the urn. This, we cannot control.

Next week, in part 2, we will continue our analysis of the risks posed by innovations, with more of a focus on Artificial Intelligence.