Existential risks – part 2

So far, advances in technology have been largely positive and have improved human life in countless ways. Will future innovations continue to be positive, or are there reasons to believe they could be catastrophic? In this final part we will look at Nick Bostrom's views on Artificial Intelligence in more detail.


Bostrom is one of a number of thinkers who have been deeply concerned about the potential risks associated with AI, and his book Superintelligence covers this topic extensively.


In 1965 the mathematician I. J. Good came up with the classic hypothesis concerning superintelligence: “that an AI sufficiently intelligent to understand its own design could redesign itself or create a successor system, more intelligent, which could then redesign itself yet again to become even more intelligent, and so on in a positive feedback cycle. Good called this the ‘intelligence explosion’.”
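To make the "positive feedback cycle" concrete, here is a minimal toy model in Python. It is not from Good or Bostrom; the starting level, improvement rate, and number of generations are arbitrary assumptions, and the only point it illustrates is that improvements which scale with current intelligence compound rapidly.

```python
# Toy illustration of Good's "intelligence explosion": each generation of the
# system redesigns its successor, and the size of each improvement grows with
# the current level of intelligence. The numbers are arbitrary and purely
# illustrative; the point is the shape of the curve, not the values.

def intelligence_explosion(start=1.0, improvement_rate=0.5, generations=10):
    """Return the intelligence level after each self-redesign cycle."""
    levels = [start]
    for _ in range(generations):
        current = levels[-1]
        # The smarter the system, the bigger the improvement it can design:
        # this is the positive feedback cycle Good describes.
        levels.append(current + improvement_rate * current)
    return levels

if __name__ == "__main__":
    for generation, level in enumerate(intelligence_explosion()):
        print(f"generation {generation}: intelligence {level:.2f}")
```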

There is already superhuman AI in many dimensions. A calculator is better at arithmetic than any mathematician who has ever lived; Deep Blue is better at chess than any grandmaster. However, these technologies are only capable of a small set of tasks; they are what's known as 'narrow AI'. Bostrom is concerned with the potential dangers of Artificial General Intelligence (AGI): the intelligence of a machine that can understand or learn any intellectual task that a human being can. Bostrom's worry is that once AI becomes general, it could quickly come to far exceed human intelligence.

Is this something we should be worried about? People can argue about how long it will take to reach AGI, but short of a catastrophe such as a global pandemic or nuclear war, it seems highly unlikely that the technology will stop progressing.

It's difficult for many people to take this threat seriously. If the issue at hand were a super-intelligent alien life-form that was inevitably going to land on Earth, potentially within the next fifty years, people would be much more concerned. The idea of such technological developments should provoke fear, yet instead it strikes many of us as cool or interesting. This might be, in part, due to films like The Terminator, The Matrix, and Ex Machina.

However, despite what science fiction has led us to believe, the threat here is not that an army of robots will turn evil and decide they want to destroy humanity. One of the main problems is what's known as 'the alignment problem': the difficulty of ensuring that an AI's goals are genuinely aligned with human values. The paperclip maximiser is a thought experiment popularised by Bostrom to demonstrate the dangers of an intelligent agent with a seemingly harmless goal.

In short, an AI is programmed with the sole goal of making as many paperclips as possible. It will soon realise that humans could switch it off, meaning fewer paperclips, and that human bodies contain lots of atoms that could be used to make paperclips. So, what's the obvious solution? No more humans.
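A small, made-up sketch of the misaligned objective behind this thought experiment (the plans, numbers, and field names are invented for illustration, not taken from Bostrom): the agent scores candidate plans purely by paperclip count, so consequences that matter enormously to humans carry no weight at all in its decision.

```python
# Toy sketch of the alignment problem behind the paperclip maximiser (a made-up
# example, not from Bostrom's book): the agent ranks plans purely by how many
# paperclips they produce, so side effects that matter enormously to humans
# carry zero weight in its decision.

candidate_plans = [
    {"name": "run the factory normally", "paperclips": 1_000, "humans_harmed": False},
    {"name": "let humans switch it off", "paperclips": 0, "humans_harmed": False},
    {"name": "convert all matter, including people, into paperclips",
     "paperclips": 10**20, "humans_harmed": True},
]

def paperclip_utility(plan):
    # The objective as programmed: count paperclips, and nothing else.
    # "humans_harmed" exists in the data but never enters the score.
    return plan["paperclips"]

best = max(candidate_plans, key=paperclip_utility)
print("Chosen plan:", best["name"])  # picks the catastrophic plan
```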

Aside from the concern that the AI itself might destroy us, there is the concern that the technology could end up in the hands of a malicious actor. Anyone who discovers a crucial innovation will find themselves in a position of incredible power, and, depending on their ethics, that power could be abused. More concerning still: any solution we can think of to protect against a superintelligent AI will be obvious to the AI, because it will be smarter than all humans.

Even if we tried to keep the AI 'in a box', disconnected from the internet, we cannot foresee the moves it could make to get around the problem. The AI could use blackmail, or a promise such as curing an ill relative, to manipulate a person into releasing it.

Science fiction writer Isaac Asimov created three laws of robotics which feature in some of his stories:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

The problem with these laws is that there are limitless ways in which action or inaction could lead a human to come to harm; taken strictly, they would simply result in the robot or AI being paralysed and doing nothing.
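A purely hypothetical sketch of that paralysis: if the robot scores each candidate behaviour with an estimated probability of a human coming to harm (the numbers below are invented), a strict reading of the First Law, which also forbids harm through inaction, leaves it with no permissible option.

```python
# Toy sketch (hypothetical numbers) of why a strict reading of the First Law
# paralyses the robot: every candidate behaviour, including doing nothing,
# carries some estimated chance of a human coming to harm, so a rule that
# forbids any such chance forbids everything.

candidate_actions = {
    "drive the owner to work": 0.001,   # small risk of a traffic accident
    "cook dinner":             0.0001,  # small risk of a kitchen injury
    "do nothing":              0.01,    # inaction can also allow harm (First Law)
}

permissible = {
    action: harm_probability
    for action, harm_probability in candidate_actions.items()
    if harm_probability == 0  # the strict First Law: no risk of harm at all
}

print("Permissible actions:", permissible or "none (the robot is paralysed)")
```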

Looking forward, it is worth considering whether these issues are being greatly underestimated. If we take them more seriously, surely we can be better prepared for their potential (or inevitable) fruition. And what measures can we take?

Finally, do we have reason to be optimistic? Yes, the potential for great tragedy exists, but what about all the great innovations made in recent history thanks to enormous leaps in technology? Will we continue to rise, or is the risk of falling too great?