Will our AI stories lead to immortality or extinction?

By John Sautelle


Last month, at Bendelta’s Global Leadership Conference, I had the privilege of watching some of the world’s leading minds explore the topic “Redesigning Organisations and Leaders for the Cyber Physical Age”. Exposed to a variety of perspectives on the convergence of multiple accelerating technologies, including Artificial Intelligence (AI), virtual and augmented reality (find out more here about the difference), nanotechnology, biomedical technology and quantum computing, I came away excited, but with alarm bells ringing.

So why alarm bells? The answer, in part, lies in these questions, posed by Tim Urban (main author of Wait But Why) in his extensively researched blog posts on The AI Revolution (Part 1: The Road to Superintelligence; Part 2: Immortality or Extinction):

‘Why are so many of the world’s smartest people so worried right now? Why does Stephen Hawking say the development of ASI “could spell the end of the human race” and Bill Gates say he doesn’t “understand why some people are not concerned” and Elon Musk fear that we’re “summoning the demon”?’

Why, indeed, do so many experts hold stories that ASI (Artificial Super Intelligence) will result in either our immortality or our extinction?

My primary purpose in writing this blog is to raise awareness of, and contribute to, the public discourse on AI: a topic that bears on our very survival as humans, and that raises the question of what being human might mean in the future. Tim Urban neatly captures my concerns:

“If ASI really does happen this century, and if the outcome of that is really as extreme—and permanent—as most experts think it will be, we have an enormous responsibility on our shoulders. The next million+ years of human lives are all quietly looking at us, hoping as hard as they can hope that we don’t mess this up.”

Let me start our exploration by unpacking the words “Artificial Intelligence”. Until very recently, I was blissfully unaware of the distinctions between ANI (Artificial Narrow Intelligence), AGI (Artificial General Intelligence) and ASI (Artificial Super Intelligence). For those who may be in the same linguistically challenged boat, here is my take on these terms.

With ANI, think of smart computers programmed to achieve specific objectives: to predict election outcomes, for example, to beat humans at games like chess or Go, or to drive autonomous vehicles.

AGI is commonly characterised as “systems and tools which are flexible and adaptive, with the capacity to learn by themselves”. Some definitions expand this to “doing anything and everything humans can do, only more effectively”, which in itself highlights the difficulty we have thinking outside our own anthropomorphic stories!

For ASI, think of an intelligence that so far exceeds current human intelligence that we can’t actually imagine what it would mean. This is possibly why we cling to collective stories like “the challenges in achieving ASI are so great it will never happen” and “through careful programming we will always be able to control what we create.”

So where are we now? There is demonstrable evidence for ANI, and, I think, mounting evidence that important aspects of AGI are already with us. On that basis, it is reasonably safe to predict that our immediate future will involve massive disruption to our financial, social and political systems. ASI is another story: a quantum leap from where we are now, and at the same time a real future possibility.

Let’s start our ANI and AGI journey with the topical story of Donald Trump. In the lead-up to the US election, the world’s leading “experts”, almost without exception, relying on human intelligence built on decades of collective experience and knowledge, predicted a Hillary Clinton victory. Like many experts across domains whose identity is tied up in their expertise, these political experts were subject to the fallibility of their own “expert” stories, and to the very human phenomenon of unconscious bias and prejudice. Notwithstanding their extremely high confidence that they would get it right, as we know, they got it very wrong.

Not so for MogIA, a form of ANI developed by the Indian start-up Genic.ai. Named after Mowgli, the character from “The Jungle Book” who learns from his environment, MogIA was programmed to do just that. Lindsie Polhemus, in her Artificial Intelligence and Politics article, puts it this way:

“As the founder, Sanjiv Rai, told CNBC: ‘While most algorithms suffer from programmers’/developers’ biases, MogIA aims at learning from her environment, developing her own rules at the policy layer and developing expert systems without discarding any data.’”

So what happened with MogIA’s election prediction? Journalist Mike Brown explains:

“On October 28, MogIA revealed that it foresaw a Trump victory after analyzing over 20 million data points, covering the likes of Facebook, Twitter, YouTube, and Google. This includes engagement stats, like how many people are watching Facebook Live videos.”
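
To make the engagement-counting idea concrete, here is a deliberately naive sketch in Python: tally weighted public engagement per candidate and predict the leader. MogIA’s actual pipeline is proprietary and far more sophisticated; every name and number below is invented purely for illustration.

```python
# A deliberately naive illustration of engagement-based forecasting:
# tally weighted engagement per candidate and predict the leader.
# MogIA's real pipeline is proprietary and far richer; every record
# below is invented purely for illustration.
from collections import defaultdict

# (candidate, platform, engagement_count) - invented sample records
records = [
    ("Trump", "twitter", 1_200_000),
    ("Clinton", "twitter", 950_000),
    ("Trump", "facebook_live", 800_000),
    ("Clinton", "youtube", 700_000),
]

totals = defaultdict(int)
for candidate, _platform, engagement in records:
    totals[candidate] += engagement

prediction = max(totals, key=totals.get)
print(dict(totals), "->", prediction)
```

The obvious weakness of such a naive tally is that raw engagement says nothing about sentiment: a wave of outrage counts exactly the same as a wave of support.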

Interesting, but not yet too much of an existential concern, right? Well, let’s take the AI story a step further, into the world of Go.

Go is an abstract strategy board game, for two players, in which the aim is to surround more territory than your opponent. Invented in ancient China around 3,000 years ago, it is believed to be the oldest board game continuously played to the present day. Go is played primarily through intuition and feel, and because of its beauty, subtlety and intellectual depth it has captured the human imagination for centuries.

Despite its relatively simple rules, Go is very complex. Compared to chess, Go has a larger board with more scope for play, longer games, and, on average, many more alternatives to consider per move. The number of legal board positions in Go is commonly put at roughly 2 × 10^170. Whilst maths has never been one of my strengths, even I know that is a large number! I can only imagine how much practice it would take to become good at playing, let alone to play at the level of a world champion.
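
To give those numbers some shape, here is a minimal back-of-the-envelope sketch in Python, using commonly cited rough figures for each game’s branching factor and length (the exact values vary by source):

```python
# Back-of-the-envelope comparison of the chess and Go search spaces,
# using commonly cited rough figures; exact values vary by source.
from math import log10

chess_branching, chess_length = 35, 80   # ~35 legal moves per turn, ~80 half-moves per game
go_branching, go_length = 250, 150       # ~250 legal moves per turn, ~150 half-moves per game

# A game tree grows as branching_factor ** game_length, so we compare
# exponents (powers of ten) rather than the raw numbers themselves.
chess_exponent = chess_length * log10(chess_branching)
go_exponent = go_length * log10(go_branching)

print(f"chess game tree ~ 10^{chess_exponent:.0f}")   # ~10^124
print(f"go game tree    ~ 10^{go_exponent:.0f}")      # ~10^360
print("legal Go positions ~ 2 x 10^170; atoms in the observable universe ~ 10^80")
```

Even the smaller of those exponents dwarfs the roughly 10^80 atoms in the observable universe, which is why Go cannot be cracked by brute-force search alone.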

Not that long ago, one of our most common stories was “we will never be able to program a computer that can defeat the world’s Go champion”. This story points to an even deeper story, outside the conscious awareness of many people, which confidently asserts “it is not possible to create an intelligence greater than human intelligence”.

Enter, stage left, Google DeepMind’s AlphaGo, which turned our existing stories upside down in October 2015 by becoming the first computer program to defeat a professional human Go player. Soon after, in March 2016, watched by over 200 million people worldwide, AlphaGo notched up a 4-1 victory over Lee Sedol, winner of 18 world titles and widely considered the greatest player of the past decade. In doing so, AlphaGo rewrote the stories of many experts who had been convinced this milestone was decades away, if it was achievable at all.

According to the Google DeepMind website, during the games “AlphaGo played a handful of highly inventive winning moves … several of which were so surprising they overturned hundreds of years of received wisdom. In the course of winning, AlphaGo somehow taught the world completely new knowledge about perhaps the most studied and contemplated game in history.”

In my words: AlphaGo found winning moves that no human had identified in 3,000 years of play. This takes us into very different territory, and raises far more questions than a computer outperforming humans at predicting election outcomes.

However, as you will see, the AlphaGo story is just warming up, and the next chapter brings us smack up against a profoundly deep and problematic story about our understanding of time. If the survival circuits deeply embedded in our brains were given voice, they would no doubt assert that “the passage of time is, and always has been, linear”, a story that has helped keep us alive since our ancestors first walked the earth. These survival circuits might describe Einstein as a charlatan, profess great cynicism about esoteric concepts like quantum physics, and shrug their collective neural shoulders when invited to consider Moore’s Law and exponential growth, let alone the notion that the limitations inherent in Moore’s Law may themselves be a flawed story. They would most likely dismiss the next chapter in the Go story with a wave of their neural hands …

Using a deep neural network and self-play reinforcement learning, with no human game data at all, Google’s DeepMind team developed the next version of AlphaGo: AlphaGo Zero. The results, and the timeline, are staggering (a simplified sketch of the self-play idea follows the timeline):

Day 1: AlphaGo Zero has no prior knowledge of the game and is provided only with the basic rules.

Day 3: AlphaGo Zero beats AlphaGo (the version that defeated Lee Sedol) by a score of 100 games to 0.

Day 21: AlphaGo Zero reaches the level of AlphaGo Master, the version that defeated 60 top professionals online and world champion Ke Jie in three games out of three.
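
For the technically curious, here is a heavily simplified, self-contained sketch of the self-play idea, written against a toy game rather than Go. It illustrates the general principle only, not DeepMind’s code; in particular, a simple lookup table stands in for the deep neural network and Monte Carlo tree search that AlphaGo Zero actually uses.

```python
# A toy illustration of the self-play idea behind AlphaGo Zero:
# start with nothing but the rules, play against yourself, and nudge
# a value estimate for each position toward the observed game outcome.
# A tabular stand-in for DeepMind's neural network and tree search.
import random
from collections import defaultdict

# Toy game (Nim): a pile of stones, each player removes 1-3,
# and whoever takes the last stone wins.
MOVES = (1, 2, 3)
START = 10

value = defaultdict(float)   # pile size -> estimated value for the player to move
value[0] = -1.0              # no stones left: the previous player has already won
LEARNING_RATE = 0.1

def choose_move(pile, explore=0.1):
    """Pick the move that leaves the opponent in the worst position."""
    legal = [m for m in MOVES if m <= pile]
    if random.random() < explore:
        return random.choice(legal)   # occasional exploration
    return min(legal, key=lambda m: value[pile - m])

def self_play_game():
    """Play one game against ourselves, recording every position seen."""
    pile, history = START, []
    while pile > 0:
        history.append(pile)
        pile -= choose_move(pile)
    return history   # whoever moved last took the final stone and won

for _ in range(5000):
    history = self_play_game()
    outcome = 1.0   # the player to move in the final recorded position won
    for pile in reversed(history):
        value[pile] += LEARNING_RATE * (outcome - value[pile])
        outcome = -outcome   # alternate perspective as we walk backwards

# Piles that are multiples of 4 should converge towards -1 (losing for
# the player to move) - knowledge discovered purely from self-play.
print({pile: round(v, 2) for pile, v in sorted(value.items())})
```

Even in this toy, the key AlphaGo Zero ingredient is visible: the program is given nothing but the rules, yet ends up “knowing” which positions are lost, purely from playing against itself.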

This brings me back to some collective stories we may need to revisit: “the challenges in achieving ASI are so great it will never happen” and “through careful programming we will always be able to control what we create.” What if it does happen, and we have not taken the necessary steps to control what we create?

[Image credit: Kari Pahlman]