Artificial Intelligence And The Limits Of 'The Imitation Game'

Rest assured, computers aren't that smart. They lack common sense. Or so we assume: for if a computer could become conscious, how would we really know?

Sophia, a life-like humanoid robot.
Rémy Demichelis

PARIS — No matter how sophisticated a computer may be, it still needs someone holding its hand. Or as Yann LeCun, head of AI research at Facebook, put it at a recent conference in Paris: "Even a rat has more consciousness than the best artificial intelligence systems we can build."

Sure, computers can beat the world champion of "Go," instantly detect a mistake in your Google search entry or drive cars. But no matter how much a machine learns on its own (that being one of the key definitions of AI), you still have to tell it — in the case of self-driving vehicles, for example — that it needs to go around, rather than through, a roadside tree.


There are many types of learning, and human learning is still a difficult model to replicate. "A baby observes and understands the world through interaction. She discovers on her own that some objects are animate and others inanimate," Yann LeCun explained during his conference appearance. "From the eighth month of life, the child understands that an object can't stay up in the air by itself. The principles of learning are in nature, and our job as researchers is to explore them."

One of the greatest challenges for AI today is to endow machines with common sense — like not driving through trees. When we hear, "John came out of the apartment with Paul, he took his keys," we all understand that both "he" and "his keys" refer to John and not Paul. We can also guess that John went through the door and not the window, but an artificial intelligence system is still unable to make those assumptions.
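To see why this is hard for a machine, consider a naive baseline that resolves a pronoun to the most recently mentioned name. A minimal sketch in Python (the heuristic is purely illustrative, not any real coreference system):

```python
import re

def naive_resolve(sentence: str, pronoun_pos: int) -> str:
    """Toy recency heuristic: resolve a pronoun to the most recently
    mentioned capitalized name. It knows word order, not common sense."""
    names = [(m.start(), m.group()) for m in re.finditer(r"\b[A-Z][a-z]+\b", sentence)]
    before = [name for pos, name in names if pos < pronoun_pos]
    return before[-1] if before else "?"

sentence = "John came out of the apartment with Paul, he took his keys."
print(naive_resolve(sentence, sentence.index("he took")))
# -> "Paul", while any human reader understands "John"
```

The heuristic picks Paul because he was mentioned last; what a human brings to the sentence is knowledge about apartments, keys and leaving, which no word-order rule captures.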

Winning the Turing test

Still, researchers are making progress bridging the gap between AI and human beings. That is exciting news if you're like the late Marvin Minsky, one of the founding fathers of artificial intelligence, or scary if you're like Elon Musk, CEO of Tesla and SpaceX.

In 1950, the British mathematician Alan Turing, famous for deciphering Nazi Germany's Enigma code, imagined something called the "Imitation Game" — a test to determine whether or not a machine could think. The test consists of having a person interact with both a real human and what we would nowadays call a chatbot, namely a program that responds to Internet users in a dialog box. If, based on the responses he receives, the experimenter cannot tell the difference between the person and the machine, then the machine passes the test.
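Turing's protocol is simple enough to sketch as a short program. Below is a toy version in Python; the question, the canned replies and the coin-flip judge are all stand-ins for illustration, not a real chatbot or experiment:

```python
import random

def imitation_game(ask_question, human_reply, machine_reply, judge, rounds=5):
    """One round of the Imitation Game, as a toy protocol.

    A judge questions two hidden respondents, one human and one machine,
    then guesses which label hides the machine. The machine passes the
    round if the judge guesses wrong.
    """
    # Hide the respondents behind anonymous labels, in random order.
    labels = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        labels = {"A": machine_reply, "B": human_reply}

    transcript = []
    for _ in range(rounds):
        q = ask_question(transcript)
        transcript.append((q, {k: reply(q) for k, reply in labels.items()}))

    guess = judge(transcript)                  # the judge answers "A" or "B"
    return labels[guess] is not machine_reply  # True: the machine fooled the judge

# Toy usage with canned respondents; a real contender would plug
# actual chat logic into machine_reply.
fooled = imitation_game(
    ask_question=lambda t: "How many legs does a millipede have?",
    human_reply=lambda q: "Far more than I can count.",
    machine_reply=lambda q: "Quite a few, though I never stopped to count.",
    judge=lambda t: random.choice(["A", "B"]),
)
print("Machine fooled the judge:", fooled)
```

Note what the protocol measures: only whether the judge can tell the answers apart, never what is going on inside either respondent.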


In 2014, a team from the University of Reading announced that a program had done just that. It simulated the responses of a fictional boy named Eugene Goostman, a sarcastic 13-year-old who supposedly lived in Ukraine. Asked how many legs a millipede has, the program replied: "Just two, but Chernobyl mutants may have them up to five. I know you are supposed to trick me." With a discussion time limited to five minutes, Eugene fooled 33% of experimenters.

Plenty of people, however, criticized the experiment, arguing that the conversation time was too short and the percentage too low. Jean-Paul Delahaye, a researcher at the Computer Science Laboratory of Lille, France, described it at the time as a "degraded" form of the Turing test.

But does winning the Imitation Game really mean a machine thinks like us, or that it has consciousness? For Turing, that wasn't the issue, for a simple reason: answering it is impossible, even between humans. The only way to know that another person thinks is to be that particular person. "It is usual therefore to have a polite convention that everyone thinks," he wrote. All we can do, Turing reasoned, is assume consciousness in the other. We can't really test it.

The Imitation Game is thus a second-person approach, one based on verbal or written exchanges. But it says nothing about the first person, that is, about how a machine (or a person) perceives the yellow color of a lemon, for example. Nor does it say whether the machine knows what it's talking about, or whether it behaves instead like a good student who recites her lesson without actually understanding anything.

A statue of Alan Turing in Bletchley Park — Photo: NUMRUSH

The consciousness conundrum

On the neuroscience side, the question of consciousness has long been dealt with through a so-called third-person approach, that is, by observing how the brain works. The trouble is that many things go on in the brain that the subject doesn't realize are happening. The tendency nowadays is to combine the second- and third-person approaches — the interaction with the subject and the observation of the brain, for example through an electroencephalogram.

The question of whether a machine can be conscious also nags neuroscientists. Stanislas Dehaene, a researcher who is a member of the French Academy of Sciences, wrote an article about it in the journal Science last fall. He suggests that one aspect of our consciousness is the ability to be attentive to one particular thing.

"When you look at these optical illusions where there are two drawings in one, like an old lady and a young woman, you only see one at a time," says Darinka Trübutschek, a doctoral student at the Paris School of Neuroscience, who worked in Dehaene's team.

Another aspect of consciousness is the ability to represent oneself, what is called "reflexivity." Dehaene concludes that — based on these two criteria — it is theoretically possible for an AI machine to be conscious.

"We know how to make machines that focus their attention or that have reflexivity, but is it the same as our consciousness?" asks Jean-Gabriel Ganascia, researcher at the Laboratory of Computer Sciences of Sorbonne University and author of a 2017 essay entitled "Le Mythe de la singularité" (the myth of singularity). "Turing says our consciousness is tied to our needs," he explains. "We love water because it is essential to our survival, but for an electronic machine, it would be poison."


Regardless of their field, researchers agree on one point: It's not a matter of computing power. "A quantum computer wouldn't be any more conscious," says Pierre Uzan, professor of philosophy at Paris Diderot University and author of Conscience et physique quantique (Consciousness and Quantum Physics).

Uzan agrees with Turing that the first-person approach to the consciousness question seems to be beyond the reach of science. The third-person approach (external observation) and the second-person one (dialogue with the machine) are therefore the only theoretical means at our disposal to tackle the enigma. Nearly 70 years after Turing's seminal article, science is still being reminded of its limits.
