A manager at the BYD company tries a facial recognition system on a 'Yungui' train in Yinchuan, China, on January 10 (Wang Peng/Xinhua/ZUMA)

MUNICH — Elon Musk has yet to prove that he can actually lead his company Tesla to success. But there's one thing he definitely has managed to do: make us all afraid of artificial intelligence.

This technology, he repeats at every opportunity, is the “greatest danger to humanity,” more dangerous even than “nuclear weapons.” Musk delivers memorable quotes. The media has the titillating headlines it needs. And readers are invariably frightened.

For many of the researchers who've been studying AI for decades, the Tesla founder's alarmism is annoying. "It seems you can't open a newspaper without Elon Musk predicting that artificial intelligence needs regulating — before it starts World War III," Australian professor Toby Walsh recently wrote in Wired magazine. Walsh, for one, doesn't think we need to fear what people in his field call the singularity: the moment when machines start to evolve on their own.

And yet, Musk is right about one thing: AI research does need to be regulated. Not because robots would otherwise take power, but because companies and governments have been relying too much on the machines’ supposed intelligence. The Terminator will remain science fiction, but without rules, a dystopia threatens. The following examples show that technological progress can also backfire.

Look who’s talking

At a conference early last month, Google CEO Sundar Pichai played a recorded phone conversation. What the conference participants heard was a woman reserving a table in a restaurant. Or at least it sounded like a woman. And, as banal as the interaction seemed, they cheered. Why? Because what the audience had just witnessed, for the first time, was AI imitating human speech so perfectly that the restaurant staff at the other end of the line didn't notice they were talking to a machine.

On social media, reactions were divided — between excitement and dismay. “This is horrible and so obviously wrong,” sociologist Zeynep Tufekci wrote on Twitter.


Pichai insisted several times that the call had taken place exactly the way the audience heard it. But doubts have since been raised about whether Google edited the recording. That matters little. More important is the debate the call triggered, and the larger question it poses: Does AI have to identify itself when it communicates directly with people?

The question goes far beyond Google Assistant, which can make appointments for you. What happens when scammers use such software to automatically call pensioners in bulk? Do social-network bots need to be identified as such, so users understand that they are chatting with a computer? AI researchers like Walsh therefore call for autonomous systems to be designed in such a way that they cannot be confused with humans.

Tech companies employ tens of thousands of people to remove the dirt from the net. These digital garbage collectors click through disturbing pictures and videos, spotting and deleting depictions of extreme violence. Cheap human helpers in emerging and developing countries are employed to spare Facebook and YouTube users the sight, at the risk of their own mental health.

They also feed databases and train software that could, one day, make their jobs obsolete. Facebook's Mark Zuckerberg constantly talks about "AI tools" that are supposed to keep Facebook clean in the future. At his hearing before the U.S. Congress, he referred more than 30 times to AI, which, he said, would independently delete content that violates community standards.

AI can indeed remove terrorist propaganda and child abuse material; in such cases, the decision is clear. But it isn't always so. Even lawyers disagree on the boundary between freedom of expression and censorship. Numerous examples from the past few years show that it's not a good idea to let algorithms decide. "It wasn't us, it was the machine" must not be allowed to become an excuse for Facebook or YouTube when yet another satire video is blocked because AI cannot recognize sarcasm.

Beware the “infocalypse”

In April, BuzzFeed published a video in which a man warns of fake videos. He looks like Barack Obama and speaks like Obama. But he isn't Obama. In fact, it's the actor Jordan Peele. The video is a so-called deepfake.

Artificial neural networks, which are loosely modeled on the biological networks of nerve cells, can now forge audio and video recordings so perfectly that they can hardly be distinguished from the originals. With applications like FakeApp, even average users without special technical skills can create frighteningly good fake videos.

Many may find it funny when the artificial Obama says: "President Trump is a total and complete dipshit. Now, you see, I would never say these things — at least not in a public address." It becomes less funny when a manipulated video suddenly circulates on Twitter in which Kim Jong-un announces that he has just launched a nuclear missile at the U.S.


Would Trump's advisors have enough time to tell him about deepfakes before he presses the red button? Propaganda using fake videos is already a reality: just recently, a Belgian party posted a Trump deepfake on social media in which the president appears to call on Belgium to withdraw from the Paris climate agreement.

Experts warn of an era of disinformation. Aviv Ovadya, chief technologist at the Center for Social Media Responsibility at the University of Michigan, who predicted the flood of fake news in the U.S. election campaign, sees humanity heading for an "infocalypse." Fake news has long flooded the Internet, but videos, at least, were considered forgery-proof. Now we have to learn to mistrust our own eyes and ears.

Ordering from Amazon is convenient; working at Amazon is often the opposite. Every move in the company's logistics centers is filmed, and surveillance could become even more pervasive in the future. At the beginning of the year, Amazon was awarded two patents for a bracelet that tracks all of an employee's movements. Using ultrasound and radio technology, the bracelet always knows where the wearer's hands are, and vibrations could signal to employees that they are packing the wrong goods.

Amazon denies that the bracelet would be used to monitor employees, saying it's intended to simplify the work of logistics staff, if it ever comes to be used at all. But even if you trust Amazon, the potential for abuse remains huge. Millions of people in emerging markets work hard building smartphones they will never be able to afford; trade unions and workers' rights are foreign concepts to them. Their employers would no doubt find surveillance wristbands practical. This would degrade people even further, reducing them to robots.

A recent example shows that countries like China have few inhibitions when it comes to surveillance. A school in the eastern Chinese city of Hangzhou is testing a facial recognition system: three cameras observe the students and interpret their facial expressions. If the software thinks it has detected an inattentive child, it notifies the teacher. "Since the cameras have been hanging in the classroom, I no longer dare to be distracted," one student said. "It's like scary eyes are always watching me."