Teaching Human Failings To Robots, That's The Hard Part

Roger-Pol Droit

PARIS — Try to imagine that an intelligent robot is out to kill you. It remembers all your passwords and has access to all your data. Equipped with facial recognition technology, it can identify you wherever you go, even though you have no idea what it looks like.

This nightmare is actually possible. Drone assassins are no longer relegated to the world of science fiction: They're part of today's range of readily available weapons.

Whether hostile or friendly, robots are increasing in number and growing more independent and powerful. In some retirement homes, cousins of Buddy, Nao or Pepper, the star creations of the Aldebaran Robotics company, are already providing companionship and services to needy people.

In factories, robots control a large part of the manufacturing process. They're also present in schools, hospitals, restaurants and libraries.

Thirty-one million robots of every kind are expected to be on the market between 2014 and 2017, according to the robotics industry's trade federation. They're already well-known to the general public, thanks to Deep Blue, which beat Garry Kasparov at chess 20 years ago; Watson, which won at Jeopardy! in 2011; and AlphaGo, which humiliated a Go master this past March. More recently still, Tay, Microsoft's conversational robot, became anti-Semitic and xenophobic after just a few hours interacting on Twitter, showing how quickly it could adapt to the spirit of the times.

Yet all this is not necessarily cause for alarm. In 1940, Isaac Asimov, the science-fiction writer who explored these questions before anyone else, explained how a robot designed to look after a small child was infinitely more reliable than a human nanny.

That doesn't prevent a vast number of new questions from emerging, provoking an onslaught of work, speculation, hypotheses and the establishment of various models. A large portion of this research concerns our relationship with these intelligent machines — whether they are human-like in appearance or not — the empathy they can elicit (as demonstrated, notably, by Serge Tisseron), and their legal, moral and philosophical rights.

For they are no longer inert objects, although they're not really people, either. They're clearly not beings endowed with sensitivity, but all the same, they can make decisions and they possess a kind of autonomy and independence.

Feelings are foreign

Examining their legal and moral status is also necessary for practical reasons. When a robot causes an accident, who is responsible: its creator, or its owner?

Another aspect, less frequently discussed, is perhaps even more relevant. It concerns all that must be instilled in artificially intelligent machines about human behavior to avoid misunderstandings, or even catastrophes. And there's nothing simple about that, because these machines have no concept of physical sensation, nor of what it is to be aware, to have feelings, desires, drive. All the elements that constitute our physical existence (being hot or cold, feeling hungry or tired) and our psychological state of being (dreaming, imagining, hoping, feeling, wanting) are totally foreign to robots.

Even the simple act of making a mistake, so universal to humans, is incomprehensible for robots. As our interactions with them become more intense and more important, this difference between our worlds poses a challenge.

And so we must explain human nature to robots. That involves devising ways to make our frailties, shortcomings, and conventions, as well as basic aspects of our sensitivity, an integral part of these artificial beings.

While this project has already begun, it's still in its infancy. It's also turning out to be rather complicated, as ethical standards so often differ from one group of people to the next. Most decisions presuppose prioritizing one principle over another. Not to mention the irrational side of humanity, which involves taking risks, or demonstrating the audacity that urgent situations often require.

"The instant of decision is madness," Kierkegaard said, expressing the notion that our actions are never purely based on the weighing of outcomes.

And so, to teach human nature to robots would be to teach them approximation and uneasy compromises, but also the roll of the dice, the irrational side of things, a splash of randomness. This robot education is underway. But engineers still have some heavy lifting to do, if anyone is left who understands such an old-world image.
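To make the idea concrete, here is a minimal, purely illustrative sketch in Python of what such an "education" might look like in the narrowest mechanical sense: candidate actions are scored against weighted, sometimes conflicting principles, and the final choice is sampled rather than maximized, so a deliberate splash of randomness remains. Every name, weight, and score below is a hypothetical assumption for illustration, not a description of any actual system mentioned in this article.

```python
import math
import random

# Hypothetical weights expressing how one principle is prioritized over another.
PRINCIPLE_WEIGHTS = {"avoid_harm": 0.6, "respect_autonomy": 0.3, "be_truthful": 0.1}


def choose_action(actions, scores, temperature=0.5):
    """Pick an action from per-principle scores in [0, 1].

    `scores[action][principle]` is a hypothetical rating of how well the
    action satisfies that principle. Lower temperature -> more deterministic.
    """
    # Weighted sum of principle scores: the "uneasy compromise" step.
    utilities = [
        sum(PRINCIPLE_WEIGHTS[p] * scores[a][p] for p in PRINCIPLE_WEIGHTS)
        for a in actions
    ]

    # Softmax turns utilities into probabilities; sampling (instead of always
    # taking the argmax) keeps the deliberate roll of the dice.
    exps = [math.exp(u / temperature) for u in utilities]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(actions, weights=probs, k=1)[0]


if __name__ == "__main__":
    actions = ["wake the patient", "wait and monitor"]
    scores = {
        "wake the patient": {"avoid_harm": 0.4, "respect_autonomy": 0.9, "be_truthful": 0.8},
        "wait and monitor": {"avoid_harm": 0.8, "respect_autonomy": 0.3, "be_truthful": 0.8},
    }
    print(choose_action(actions, scores))
```

The point of the sketch is the design choice, not the numbers: a system built this way never escapes the human work of deciding which principle outweighs which, and how much randomness to tolerate.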
