
Artificial Intelligence: How To Stop AI From Destroying The Human Race

Skype co-founder Jaan Tallinn wants to program machines to keep them from becoming a threat to the human race. Yes, he believes, the threat is real.

Brain machine
Andrian Kreye

MUNICH – When Stephen Hawking warns about the end of the world because humans aren't able to keep up with the rapid progress of artificial intelligence, people listen. And when Tesla founder Elon Musk concurs, people start to worry.

For more than a year now, the two men who are known for their visionary gifts have been warning of the significant threat that comes with constant machine learning. Both have read Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies. What truth is there in apocalyptic scenarios of artificial intelligence run amok?

During February's TED Conference in Vancouver, Skype co-founder Jaan Tallinn also sounded the alarm. "As long as artificial intelligence is less intelligent than humans, we can treat it just like any other technology," said the Estonian physicist and programmer. "But as soon as we have to deal with an artificial intelligence that potentially becomes more intelligent than we are, the situation changes drastically."

What Tallinn is afraid of is the possibility that machines will become capable of drawing increasingly complex conclusions on their own. But he's no pessimist. He is currently involved in 10 different projects, each trying to program artificial intelligence so that it will never become a threat to humanity. One of them is the Future of Life Institute. In an interview, Tallinn discussed exactly what he's working on, and how the work is meant to prevent machines from taking over the world.

Skype co-founder Jaan Tallinn – Photo: Christopher Michel

SÜDDEUTSCHE ZEITUNG: What’s so dangerous about artificial intelligence?

JAAN TALLINN: A program's not dangerous. But artificial intelligence that potentially becomes smarter than humans might just be.

What does that mean?

There are several metaphors. Gorillas, for instance. It's up to us humans whether they survive or not. Either we protect their environment, or we destroy it. The biggest threat would be an artificial intelligence that disregards humanity altogether. However you look at it, our planet's destiny always depends on the most intelligent thing using and shaping it, whether that's mankind or something else.

In a world with AI, we would be the gorillas, right?

Yes. We need to understand that we're not just creating some random technology. Its value system doesn't necessarily fit ours, so it might not care about things that are important to us. Nature, for instance.

But won't AI always depend on humans? In order to act, wouldn't it need contact with physical reality, through robotics for example?

If an AI really were smarter than us, it's difficult to say what it would actually need in order to function. But even today, you can do a lot without physical action. If, for instance, you were locked away in a basement with a billion dollars and an Internet connection, you'd be able to do a lot of harm.

What kind of deadly power could be developed by AI?

What I'm truly worried about is the environment. Humans already do a lot of damage, simply because they don't care. Look at what has happened to the habitat of gorillas, or of other species that have already died out. If we create something that is smarter than us, it'll have power over the environment too, and it won't care about what we need for survival.

Is it possible to program a human value system into AI?

In theory, yes. But only in theory. The real problem is that we ourselves don't fully understand our value system. We know the main ideas: the future, the environment, children, progress. But we don't know where these values come from, partly because they constantly change over time.

Does AI have a will to survive?

AI doesn't care about survival. All it cares about is doing its job, the one it has been programmed for, whatever that may be. But as soon as it realizes that it can't do that job if it's turned off, it'll find ways to make the power-down impossible.

Is this a form of self-awareness?

No. Such a system doesn't know that it exists as a physical system. A chess computer, for instance, doesn't know that it's an object in the real world. As long as a system doesn't know that, it doesn't care about being switched off.

Threat? – Photo: Keoni Cabral

Could that change?

The more powerful AI systems become, the more complex the conclusions they can draw about what's going on in the world around them. So it does make sense to expect that such a system will grasp that its survival is vital to accomplishing its mission.

Which is more dangerous: an AI that is autonomous and out of control, or one in the hands of the wrong people?

What I worry about most are the unexpected side effects. Most programs are not written 100% correctly, and it doesn't really matter whether they were written by a good or an evil person. Bad things might happen because an AI does things it hasn't been programmed for.

Today AI outperforms humans in only a few areas. Do you think there will be an AI that is superior to humans in general?

Autonomous weapons systems come very close to it. They are far better at killing than humans are. A year ago, the Future of Life Institute published an open letter calling for a worldwide ban on autonomous weapons. There are many good reasons not to start an arms race in which governments or terrorist groups try to outdo each other with ever better weapons.

But shouldn't programmers, of all people, be easy to convince of the threat?

I think we need something like a tipping point, the moment when people in the tech industry get this information from truly trustworthy sources. I'm working on that too.

But how do you create public awareness? Why not through movies?

Movies are a double-edged sword, because their job is to entertain. And they do, with a lot of drama. The most likely scenario for the end of humanity is less dramatic. There's no heroic battle. It can go extremely fast. A movie I can recommend is last year's Ex Machina. It communicated well the biggest threat we're facing: the true danger doesn't come from an AI that becomes evil, but from an extremely competent AI that simply doesn't care about humans at all.


La Sagrada Familia Delayed Again — Blame COVID-19 This Time

Officials have dashed hopes of seeing the iconic Barcelona church completed in 2026, in time for the 100th anniversary of the death of its renowned architect Antoni Gaudí.

Work on La Sagrada Familia has been delayed because of the pandemic

By most accounts, it's currently the longest-running construction project in the world. And now, the completion of work on the iconic Barcelona church La Sagrada Familia, which began all the way back in 1882, is going to take even longer.

Barcelona-based daily El Periodico reports that work on the church, which began as the vision of master architect Antoni Gaudí, was slated to be completed in 2026. But a press conference on Tuesday, Sep. 21 confirmed that the deadline won't be met, in part because of delays related to COVID-19. Officials also provided new details about the impending completion of the Mare de Déu tower (tower of the Virgin).

El Periódico - 09/22/2021

El Periodico reports on the latest delay in what may be the longest-running construction project in the world.

One tower after the other… Slowly but surely, La Sagrada Familia has been growing bigger and higher before the eager eyes of Barcelonians and visitors for nearly 140 years. But everyone will have to be a bit more patient before they see the famous architectural project finally completed. During Tuesday's press conference, the general director of the Construction Board of the Sagrada Familia, Xavier Martínez, and the architect director, Jordi Faulí, had some good and bad news to share.

As feared, La Sagrada Familia's completion date has been pushed back. Because of the pandemic, work was halted in early March 2020, when Spain went into a national lockdown. That dashes hopes of a 2026 inauguration, which would have marked the 100th anniversary of Gaudí's death.

Although he ruled out new completion forecasts until post-COVID normalcy is restored (no earlier than 2024), Martínez said: "Finishing in 2030, rather than being a realistic forecast, would be an illusion; starting the construction process will not be easy," reports La Vanguardia.

But what's a few more years when you've already waited 139? However delayed, the construction will reach another milestone very soon with the completion of the Mare de Déu tower (tower of the Virgin), the first tower of the temple to be completed in 44 years and the second tallest spire of the complex. It will be crowned by a 12-pointed star, which will be illuminated on December 8, Immaculate Conception Day.

Next will come the completion of the Evangelist Lucas tower and, eventually, the tower of Jesus Christ, the most prominent of the Sagrada Familia, which will reach 172.5 meters thanks to an illuminated, 13.5-meter-wide "great cross." The cross will be made of glass and porcelain stoneware to reflect daylight; at night it will be illuminated and project rays of light.

La Sagrada Familia through the years

La Sagrada Familia, 1889 – Wikipedia
