Rest assured, computers aren't that smart. They lack common sense. Or so we assume: for if a computer could become conscious, how would we really know?
PARIS — No matter how sophisticated a computer may be, it still needs someone holding its hand. Or as Yann LeCun, head of AI research at Facebook, put it at a recent conference in Paris: "Even a rat has more consciousness than the best artificial intelligence systems we can build."
Sure, computers can beat the world champion of "Go," instantly detect a mistake in your Google search entry or drive cars. But no matter how much a machine learns on its own (that being one of the key definitions of AI), you still have to tell it — in the case of self-driving vehicles, for example — that it needs to go around, rather than through, a roadside tree.
There are many types of learning, and human learning is still a difficult model to replicate. "A baby observes and understands the world through interaction. She discovers on her own that some objects are animate and others inanimate," Yann LeCun explained during his conference appearance. "From the eighth month of life, the child understands that an object can't stay up in the air by itself. The principles of learning are in nature, and our job as researchers is to explore that."
One of the greatest challenges for AI today is to endow machines with common sense — like not driving through trees. When we hear, "John came out of the apartment with Paul, he took his keys," we all understand that both "he" and "his keys" refer to John and not Paul. We can also guess that John went through the door and not the window. But an artificial intelligence system is still unable to make those assumptions.
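The pronoun puzzle above can be made concrete with a toy sketch. The resolver below is purely illustrative (the "nearest antecedent" heuristic and function names are my assumptions, not anything from the article): it resolves a pronoun to the closest preceding proper name, and so picks the wrong person — exactly the kind of guess a system without common sense makes.

```python
# Illustrative sketch: a naive "nearest antecedent" heuristic for pronoun
# resolution. The sentence and names come from the article; the heuristic
# itself is a hypothetical stand-in for a system lacking common sense.

def nearest_antecedent(tokens, pronoun_index, names):
    """Resolve a pronoun to the closest proper name that precedes it."""
    for i in range(pronoun_index - 1, -1, -1):
        if tokens[i] in names:
            return tokens[i]
    return None

tokens = "John came out of the apartment with Paul , he took his keys".split()
he_index = tokens.index("he")

# A human knows "he" is John; the distance-based heuristic picks Paul,
# simply because his name sits closer to the pronoun.
print(nearest_antecedent(tokens, he_index, {"John", "Paul"}))  # → Paul
```

Getting "John" right requires knowing that the keys belong to the person whose apartment it is — background knowledge no proximity rule can supply.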
Winning the Turing test
Still, researchers are making progress bridging the gap between AI and human beings. That is exciting news if you're like the late Marvin Minsky, one of the founding fathers of artificial intelligence — or scary if you're like Elon Musk, CEO of Tesla and SpaceX.
In 1950, the British mathematician Alan Turing, famous for deciphering Nazi Germany's Enigma code, imagined something called the "Imitation Game" — a test to determine whether or not a machine could think. The test consists of having a person interact with both a real human and what we would nowadays call a chatbot, namely a program that responds to Internet users in a dialog box. If, based on the responses he receives, the experimenter cannot tell the difference between the person and the machine, then the machine passes the test.
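The protocol Turing described can be sketched in a few lines. Everything below is a hypothetical simplification (the canned replies, the judge's rule, and all function names are my assumptions): a judge receives two answers to the same question, one from a human and one from a toy chatbot, in random order, and tries to spot the machine.

```python
import random

# Minimal sketch of the Imitation Game setup. The replies and the judge's
# strategy are illustrative assumptions, not from Turing's paper.

def chatbot_reply(question):
    """A toy 'chatbot' that deflects with a counter-question."""
    return "Why do you ask about that?"

def human_reply(question):
    """A stand-in human answer: direct, no deflection."""
    return "Hundreds, usually -- they have a lot of segments."

def run_round(question, judge):
    """Show both answers in random order; return True if the judge spots the machine."""
    answers = [("machine", chatbot_reply(question)),
               ("human", human_reply(question))]
    random.shuffle(answers)                        # hide which is which
    guess = judge([text for _, text in answers])   # judge sees only the text
    return answers[guess][0] == "machine"

# A judge who flags the more evasive answer as the machine.
def evasive_judge(texts):
    return max(range(len(texts)), key=lambda i: "?" in texts[i])

print(run_round("How many legs does a millipede have?", evasive_judge))
```

The machine passes the test when, over many rounds, judges do no better than chance — which is why Eugene Goostman's 33% figure, discussed below, was enough to spark debate.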
In 2014, a team from the University of Reading announced that a program had done just that. The program simulated the responses of a fictional boy named Eugene Goostman, a sarcastic 13-year-old who supposedly lived in Ukraine. When asked how many legs a millipede has, the program replied: "Just two, but Chernobyl mutants may have them up to five. I know you are supposed to trick me." With conversation time limited to five minutes, Eugene fooled 33% of experimenters.
Plenty of people, however, criticized the experiment, saying the conversation time was too short and the percentage too low. Jean-Paul Delahaye, a researcher at the Computer Science Laboratory of Lille, France, described it at the time as a "degraded" form of the Turing test.
But does winning the Imitation Game really mean a machine thinks like us, or that it has consciousness? For Turing, that wasn't the issue, and for a simple reason: Answering the question is impossible, even between humans. The only way to know that another person thinks is to be that particular person. "It is usual therefore to have a polite convention that everyone thinks," he wrote. All we can do, Turing reasoned, is assume consciousness in the other. We can't really test it.
The Imitation Game is thus a second-person approach, one based on verbal or written exchanges. But it says nothing about the first person, that is to say about how a machine — or person — perceives the yellow color of a lemon, for example. It also doesn't say if the machine knows what it's talking about, or if it behaves, rather, like a good student who recites her lesson without actually understanding anything.
A statue of Alan Turing in Bletchley Park — Photo: NUMRUSH
The consciousness conundrum
On the neuroscience side, the question of consciousness has long been dealt with by a so-called third-person approach, that is, by observing how the brain works. The trouble is that many things go on in the brain without the subject realizing they are happening. There is a tendency nowadays to combine the second- and third-person approaches — namely, interaction with the subject and observation of the brain, for example through an electroencephalogram.
The question of whether a machine can be conscious also nags neuroscientists. Stanislas Dehaene, a researcher who is a member of the French Academy of Sciences, wrote an article about it in the journal Science last fall. He suggests that one aspect of our consciousness is the ability to be attentive to one particular thing.
"When you look at these optical illusions where there are two drawings in one, like an old lady and a young woman, you only see one at a time," says Darinka Trübutschek, a doctoral student at the Paris School of Neuroscience, who worked in Dehaene's team.
Another aspect of consciousness is the ability to represent oneself, what is called "reflexivity." Dehaene concludes that — based on these two criteria — it is theoretically possible for an AI machine to be conscious.
"We know how to make machines that focus their attention or that have reflexivity, but is it the same as our consciousness?" asks Jean-Gabriel Ganascia, researcher at the Laboratory of Computer Sciences of Sorbonne University and author of a 2017 essay entitled "Le Mythe de la singularité" (the myth of singularity). "Turing says our consciousness is tied to our needs," he explains. "We love water because it is essential to our survival, but for an electronic machine, it would be poison."
Regardless of their field, researchers agree on one point: It's not a matter of computing power. "A quantum computer wouldn't be any more conscious," says Pierre Uzan, professor of philosophy at Paris Diderot University and author of Conscience et physique quantique (Consciousness and Quantum Physics).
Uzan agrees with Turing that the first-person approach to the consciousness question seems to be beyond the reach of science. The third approach, external observation, and the second, dialogue with machines, are therefore the only theoretical means at our disposal to answer the enigma. Nearly 70 years after Turing's seminal article, science is still being reminded of its limits.