Emotion-reading robot Pepper — Photo: Aldebaran

MARSEILLE — Have you met Pepper? This four-foot-tall emotion-reading robot is expected to hit stores soon in Tokyo, where technology lovers will be able to acquire one for the equivalent of $1,650. The child-faced robot, the latest invention of French start-up Aldebaran, was created to “live alongside humans.” But household chores such as vacuuming or cooking are not among Pepper’s abilities. Instead, this aristocrat of the robot tribe is more like Star Wars’ C-3PO.

Like its golden movie counterpart, it’s a protocol droid, “endearing and kind,” says Aldebaran’s founder and CEO Bruno Maisonnier. It doesn’t move the same way C-3PO does, but its many sensors feed its algorithms with information about the people it talks to, making conversations with the robot rather entertaining.

“Pepper understands our primary emotions: joy, sadness, anger, surprise, neutrality,” Maisonnier explains. “It can determine the sex and the age of a person, and therefore identify all members of a family. It can keep up with 70% of a conversation. By analyzing our facial expressions, our vocabulary and our body language, it guesses your mood and adapts to your behavior. If you frown, it’ll understand that something’s bothering you and can try to cheer you up by, for example, playing a song you really like.”
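For a rough picture of the loop Maisonnier describes, detecting a mood across several channels and picking a reaction, here is a toy sketch in Python. The emotion labels come from the quote above; every function name and mapping is a hypothetical illustration, not Aldebaran’s actual software.

```python
# Toy sketch of a sense-infer-adapt loop, loosely inspired by the description above.
# All names and mappings are hypothetical; this is not Aldebaran's API.

RESPONSES = {
    "sadness": "play_favorite_song",   # cheer the user up, as in Maisonnier's example
    "anger": "speak_softly",
    "joy": "suggest_a_game",
    "surprise": "ask_what_happened",
    "neutrality": "make_small_talk",
}

def choose_reaction(facial_emotion: str, voice_tone: str, posture: str) -> str:
    """Pick a reaction from three hypothetical perception channels."""
    # A real system would fuse the channels statistically; this toy lets the facial
    # reading take priority, falls back to tone of voice, and ignores posture.
    mood = facial_emotion if facial_emotion != "neutrality" else voice_tone
    return RESPONSES.get(mood, "make_small_talk")

print(choose_reaction("sadness", "neutrality", "slouched"))  # -> play_favorite_song
```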

After several months spent with the team at SoftBank, the Japanese telecom company that is Aldebaran’s primary shareholder, Pepper is said to spark as much curiosity as good humor. “Our goal is to make kind, pet-like humanoid robots that will live with humans as an artificial species,” Maisonnier says.

From pouts and frowns to grins and smiles, our expressions betray pretty much all our feelings. And thanks to progress in mathematical analysis, artificial intelligence specialists are exploiting this metalanguage of facial expression so that, one day, machines may display a certain form of empathy.

“Our faces account for 55% of the overall impact of the message we’re conveying,” explains Axel Boidin, founder of French start-up Picxel, which specializes in facial recognition. “From a physiological point of view, emotional responses translate into a combination of distortions of our facial features that inform the people in front of us of our real intentions, and so help coordinate the conversation. Robots will soon be able to understand these rules.”

A longtime pursuit

Scientists have long been trying to turn our facial expressions into equations. In the 1970s, psychologist Paul Ekman made it his specialty by decoding the Rosetta Stone of emotions (what he called the “Facial Action Coding System”), which is now the foundation of behavioral psychology’s universal alphabet. The dictionary he and his colleagues devised lists the 10,000 facial expressions our 43 facial muscles are capable of producing. Most of them are funny faces, and we’re able to distinguish only a fraction of the roughly 3,000 combinations that actually mean something.
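To see how such a coding system can be put to work, here is a minimal sketch in Python that treats each emotion as a set of numbered muscle movements (“action units”) and checks whether a detected combination matches one. The unit numbers are commonly cited FACS examples, simplified for illustration.

```python
# Minimal sketch of FACS-style coding: an emotion is read off a combination of
# numbered "action units" (individual muscle movements). Simplified for illustration.

EMOTION_SIGNATURES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "sadness": {1, 4, 15},       # inner brow raiser + brow lowerer + lip corner depressor
    "surprise": {1, 2, 5, 26},   # brow raisers + upper lid raiser + jaw drop
}

def label_expression(active_units: set) -> str:
    """Return the first emotion whose full signature appears among the detected units."""
    for emotion, signature in EMOTION_SIGNATURES.items():
        if signature <= active_units:  # subset test: all required units are active
            return emotion
    return "unknown"

print(label_expression({6, 12, 25}))  # -> happiness
```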

Will a robot do better? At this point, the few companies that have ventured into automated emotion recognition still have limited means. They all know, more or less, how to identify the characteristic features of the seven basic emotions Ekman listed. Most of them use image libraries as a comparison tool. In the U.S., the start-up Affectiva, founded by researchers from the Massachusetts Institute of Technology, uses a database of thousands of emotional reactions that allows its algorithms to decipher what a camera records. The company hopes to equip future smartphones with software capable of analyzing our reactions while we’re following an online course or playing a video game.
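The library-comparison approach described above can be pictured as a nearest-neighbor lookup: reduce each face to a feature vector and return the label of the closest stored example. The sketch below illustrates that general idea with made-up feature values; it is not Affectiva’s actual pipeline.

```python
# Generic sketch of "compare against a labeled library": each face becomes a feature
# vector, and the label of the nearest stored example wins. Values are made up.
import math

LIBRARY = [
    ([0.9, 0.2, 0.1], "joy"),       # hypothetical features, e.g. normalized landmark distances
    ([0.1, 0.8, 0.3], "sadness"),
    ([0.2, 0.3, 0.9], "surprise"),
]

def classify(features):
    """Nearest-neighbor lookup by Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(LIBRARY, key=lambda item: dist(item[0], features))[1]

print(classify([0.85, 0.25, 0.15]))  # -> joy
```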

Aldebaran robots — Photo: Facebook page

Picxel has a different approach. Its algorithm works with ultrafast cameras to track the micro-expressions that reveal our most intimate emotions. These muscular contractions last no more than a fraction of a second, are impossible to control and, most importantly, they don’t lie. They’re the Freudian slips of our body language.
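One way to picture what such an algorithm looks for: in a stream of per-frame expression-intensity scores from a high-speed camera, a micro-expression shows up as a spike that appears and vanishes within a handful of frames. The sketch below is a hypothetical illustration of that idea, with made-up thresholds; it is not Picxel’s algorithm.

```python
# Hypothetical micro-expression spotting: scan per-frame intensity scores and flag
# brief bursts that rise above a threshold and fall back within a few frames.

def find_micro_expressions(scores, threshold=0.6, max_frames=5):
    """Return (start, end) frame indices of above-threshold bursts no longer than max_frames."""
    bursts, start = [], None
    for i, score in enumerate(scores):
        if score >= threshold and start is None:
            start = i                       # burst begins
        elif score < threshold and start is not None:
            if i - start <= max_frames:     # brief enough to count as a micro-expression
                bursts.append((start, i - 1))
            start = None
    return bursts

# A three-frame flash of expression in an otherwise neutral stream:
stream = [0.1, 0.1, 0.7, 0.8, 0.7, 0.1, 0.1]
print(find_micro_expressions(stream))  # -> [(2, 4)]
```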

Some of us are naturally able to perceive them, like Tim Roth’s character in the series Lie to Me. Professor Ekman, who trains FBI and CIA agents, among others, also says that we can learn to read them, though a machine that can do this automatically hasn’t been invented yet. But “giving a camera the ability to track these unspoken emotions will revolutionize many fields,” Boidin speculates.

In the future, stores could, for instance, use connected screens as a new kind of fitting room. These screens could analyze our reactions as they suggest different products, to determine what we like most. Placed in different sections of a store, they could also determine which products best capture consumers’ attention.

These powerful tools will eventually enable pollsters to collect impartial information on how a film, politician or advertisement is perceived. “With these tools, our computers will be able to automatically adapt their environment and brightness to our mood, as well as their behavior,” Boidin explains. “They’ll know to be accommodating if they see we’re angry, or stimulating if they think we’re apathetic.”

In cars, emotion detectors will come in handy to anticipate the first signs of fatigue. Connected to surveillance cameras at border checkpoints, airports and public places, as Paul Ekman envisions, they’ll be able to identify suspicious behavior to help locate terrorists.

“The collected data will enable us to build incredibly sophisticated models of how we behave, how we make decisions and how we engage,” Ekman says. In other words, intuition will soon be a thing of the past.
