Google and Amazon are trying to force us to speak politely to their AI-driven personal assistants. But giving souls to our technology is a dangerous return to the past.
PARIS — Google Home, the search giant's personal assistant with which you can engage in basic conversation, has just introduced the "Pretty Please" function. Following in the footsteps of Amazon's Alexa personal assistant, this feature rewards users' politeness towards the device. Adults will thus be answered in the same courteous tone, while parents can require children to use the "magic words" before their requests are fulfilled.
The goal is to discourage rude behavior as well as to avoid scrambling the learning process of artificial intelligence with all kinds of insults that could boomerang back to users. Google says its decision is based on sociological studies that demonstrate the need to be courteous with robots.
- Please, Google, turn on the oven.
- Thank you for being so kind, Gaspard, I will do it immediately.
But what has become a social imperative represents, in my opinion, a profound epistemological error. Artificial intelligence has neither personality nor consciousness — it suffers neither from the cold nor from insults. Far from reproducing the process of thought or emotion, it merely imitates their results, as computer scientist Jerry Kaplan explains. When Google Assistant answers a question using a search engine or suggests a choice of music, it doesn't follow human reasoning but rather synthesizes the millions of human reasoning processes from which its algorithms derive. It's a complex, sophisticated and undoubtedly useful mechanism, but just a mechanism nonetheless — not an autonomous being endowed with any sense of morality. We don't say "please" to a washing machine any more than we do to a car or word processing software, do we?
Rather than the vanguard of progress, treating an AI-powered device with politeness is a clear sign of regression for our civilization, slipping back into the failings of anthropomorphism. A visit to the Quai Branly Museum in Paris is enough to see how traditional societies did their utmost to attribute a soul, power and feelings to inanimate objects (the Polynesians' famous Mana).
It wasn't until the development of experimental science that we started freeing things from our own shadow. The late French philosopher Gaston Bachelard identified, in the formation of the scientific mind, what he called the "animist obstacle," which he defined as a "belief in the universal character of life."
Will the 21st century regress into the worship of silicon chips, and discussions about the vices and virtues of AI? Are we going to resurrect animal spirits for machines that we have designed and manufactured ourselves? Will the connected house look like a dense forest full of ghosts and mysteries? It would be paradoxical if technological progress were to make us lose our scientific minds.
Far from contributing to harmonious social relations, the logic behind "Pretty Please" thus risks recreating a relationship with the primitive world in which keyboards must be blessed and refrigerators greeted. Moreover, it makes politeness an automatic, standardized and repetitive reflex when its entire value depends on its sincerity. As we're sometimes told when our apologies are too automatic, too machine-like: "You don't really mean it!" Not only do robots not deserve politeness; we must absolutely avoid letting politeness itself become robotized.
So, for once, we should ask our children to be discourteous or, even better, completely indifferent to the connected objects that are beginning to surround us. It's not a matter of transforming robots into slaves, as some fear. A slave is a conscious being who is denied any faculty of self-determination, whereas AI is an artificial fiction that we have determined ourselves, and whose learning capacities remain bound to the algorithms that govern it. It's a question of differentiating very clearly, for essential moral reasons, between a subject that is an end in itself and therefore deserves respect, and a purely utilitarian object. Robots are neither friends nor enemies, neither angels nor demons. They are instruments. This must be clear if we want to be able to live with them in peace.