Artificial Intelligence: How To Stop AI From Destroying The Human Race
Skype founder Jaan Tallinn wants to program machines to keep them from becoming a threat to the human race. Yes, he believes, the threat is real.

MUNICH — When Stephen Hawking warns about the end of the world because humans aren't able to keep up with the rapid progress of artificial intelligence, people listen. And when Tesla founder Elon Musk concurs, people start to worry.
For more than a year now, the two men who are known for their visionary gifts have been warning of the significant threat that comes with constant machine learning. Both have read Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies. What truth is there in apocalyptic scenarios of artificial intelligence run amok?
During February's TED Conference in Vancouver, Skype co-founder Jaan Tallinn also sounded the alarm. "As long as artificial intelligence is less intelligent than humans, we can treat it just like any other technology," said the Estonian physicist and programmer. "But as soon as we have to deal with an artificial intelligence that potentially becomes more intelligent than we are, the situation changes drastically."
What Tallinn is afraid of is the possibility that machines will become capable of drawing increasingly complex conclusions on their own. But he's no pessimist. He is currently involved in 10 different projects, each trying to program artificial intelligence so that it will never become a threat to humanity. One of them is the Future of Life Institute. In an interview, Tallinn discussed exactly what he's working on, and how the work is meant to prevent machines from taking over the world.
Skype co-founder Jaan Tallinn — Photo: Christopher Michel
SÜDDEUTSCHE ZEITUNG: What's so dangerous about artificial intelligence?
JAAN TALLINN: A program's not dangerous. But artificial intelligence that potentially becomes smarter than humans might just be.
What does that mean?
There are several metaphors. Gorillas, for instance. It's up to us humans whether they survive or not: either we protect their habitat, or we destroy it. The biggest threat would be an artificial intelligence that disregards humanity altogether. However you look at it, our planet's fate is always decided by the most intelligent beings on it. It doesn't matter whether that's mankind or something else.
In a world with AI, we would be the gorillas, right?
Yes. We need to understand that we're not just creating some random technology. Its value system won't necessarily match ours, so it might not care about things that are important to us. Nature, for instance.
But won't AI always depend on humans? In order to act, wouldn't it need access to physical reality, through robotics for example?
If this AI really were smarter than us, it's difficult to say what it would actually need to function. But even today, you can do a lot without acting physically. If, for instance, you were locked away in a basement with a billion dollars and an Internet connection, you'd be able to do a lot of harm.
What kind of deadly power could be developed by AI?
What I'm truly worried about is the environment. Humans already do a lot of damage, simply because they don't care. Look at what has happened to the habitat of gorillas, or to other species that have already died out. If we create something that is smarter than us, it'll have power over the environment too, and it won't care about what we need to survive.
Is it possible to program a human value system into AI?
In theory, yes. But only in theory. The real problem is that we ourselves don't fully understand our own value system. We know the main ideas: the future, the environment, children, progress. But we don't know where these values come from, partly because they constantly change over time.
Does AI have a will to survive?
AI doesn't care about survival. All it cares about is doing its job, the one it has been programmed for, whatever that may be. But as soon as it realizes that it can't do that job if it's switched off, it'll find ways to make the power-down impossible.
Is this a form of self-awareness?
No. Such a system doesn't know that it exists as a physical system. A chess computer, for instance, doesn't know that it's an object in the real world. As long as that's the case, it doesn't care about being switched off.
Could that change?
The more powerful AI systems become, the more complex the conclusions they can draw about what's going on in the world around them. So it does make sense to expect that such a system will eventually grasp that its survival is vital to accomplishing its mission.
Which is more dangerous: an AI that is autonomous and out of control, or the wrong people being in control of one?
What I worry about most are the unexpected side effects. Most programs are not written 100% correctly, and it doesn't really matter whether they were written by a good or an evil person. Bad things can happen because an AI does things it wasn't programmed to do.
Today AI outperforms humans in only a few areas. Do you think there will be an AI that is superior to humans in general?
Autonomous weapons systems come very close to it. They are far better at killing than humans are. The Future of Life Institute published an open letter a year ago calling for a worldwide ban on autonomous weapons. There are many good reasons not to start an arms race in which governments or terrorist groups try to outdo each other with ever better weapons.
Shouldn't programmers, of all people, be easy to convince of the threat?
I think we need something like a tipping point, the moment when people in the tech industry get this information from truly trustworthy sources. I'm working on that too.
But how do you create public awareness? Why not through movies?
Movies are a double-edged sword, because their job is to entertain. And they do, with a lot of drama. The most likely scenario for the end of humanity is less dramatic: there's no heroic battle, and it can happen extremely fast. A movie I can recommend is last year's Ex Machina. It communicates the biggest threat we're facing very well: the true danger doesn't come from an AI that becomes evil, but from an extremely competent AI that simply doesn't care about humans at all.