
How We Build Human Bias Into Artificial Intelligence

AI learns everything it knows from us — including bad habits
Charles Cuvelliez

-OpEd-

PARIS — When Amazon realized that its AI recruiting tool favored men, the company quickly shelved it. Back in 2016, a chatbot released by Microsoft turned into a sex-obsessed neo-Nazi machine in only 24 hours. These incidents, along with others, played right into the hands of all those who say there is too much AI in our daily lives.

But some researchers are looking at things from a different perspective: if AI makes such mistakes, it's because it has been taught that way. That means we can also teach it to avoid such mistakes. How? By tracking down all the biases contained in the data the AI is fed when it learns.

That data merely reflects our own biases, which have been at play for a long time. They are easy to find: compare how often "woman" appears in the same text as "nurse" with how often it appears near "doctor." It's easy to guess that computer scientists will more often be referred to in masculine terms, or near the word "man." And there is more: Who knows, for example, that facial recognition error rates can reach 35% for dark-skinned women, compared to 0.8% for fair-skinned men?
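The kind of co-occurrence counting described above can be sketched in a few lines of Python. The toy corpus, window size and word lists below are illustrative assumptions for the sake of the example, not the method of any study mentioned here; real audits run the same idea over billions of words.

```python
from collections import Counter

def cooccurrence_counts(sentences, targets, attributes, window=5):
    """Count how often each target word (e.g. an occupation) appears
    within `window` tokens of each attribute word (e.g. a gendered term)."""
    counts = Counter()
    for sentence in sentences:
        tokens = sentence.lower().split()
        for i, tok in enumerate(tokens):
            if tok in targets:
                # Look at the words surrounding this occurrence.
                neighborhood = tokens[max(0, i - window):i + window + 1]
                for attr in attributes:
                    if attr in neighborhood:
                        counts[(tok, attr)] += 1
    return counts

# A tiny, made-up corpus exhibiting the skew real text often shows.
corpus = [
    "the nurse said she would check on the patient",
    "the doctor said he would review the chart",
    "a woman working as a nurse greeted the visitors",
    "the man consulted a doctor about the results",
]

counts = cooccurrence_counts(
    corpus,
    targets={"nurse", "doctor"},
    attributes={"she", "he", "woman", "man"},
)
# In this toy corpus, "nurse" only ever co-occurs with feminine terms
# and "doctor" with masculine ones — the asymmetry an AI then absorbs.
```

A model trained on such text has no way to distinguish a statistical accident of the corpus from a fact about the world, which is exactly how the bias gets baked in.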

We are the ones who teach the machines to be biased.

All this is due to the body of initial data available to train AI algorithms. The data available is AI's Achilles heel. For example, social networks provide an abundant and cheap source of data. But the presence of fake news, hate speech and general contempt towards minorities and women that can be found there doesn't bode well for AIs.

An experiment conducted on Twitter by a researcher at Swinburne University found that negative sentiment was expressed far more often toward female leaders than toward male ones.

Here's another experiment anyone can try: enter the keyword "president" or "prime minister" into Google Images. Men make up roughly 95% of the results. But that's not Google's fault.

Biases can be found elsewhere too: Has anyone ever wondered why voice assistants, and the robots that handle calls in customer contact centers, have reassuring female voices?

It's true, studies show that both men and women prefer being spoken to by a female voice. It's more reassuring. It's maternal. Look closer, though, and the preference becomes more specific, and not in a good way: We prefer a male voice to talk to us about computers or cars, and a female voice for all things interpersonal.

AI learns its racial bias from its creators — Photo: Abyssus

Recently, makers of intelligent personal assistants and connected speakers have adapted their algorithms to show less patience with rude or harassing users who sometimes vent their frustrations on the machines. The idea is to keep people who have grown disinhibited by their experience with machines from venting the same way at women on the street.

Amazon has reprogrammed Alexa to give curt answers to sexually explicit questions. Google Home, meanwhile, has introduced a "Pretty Please" function that adapts to the polite or impolite tone with which a user addresses it.

But Google Home has no conscience or personality: It doesn't actually care whether it's spoken to politely. Are you, at the end of the day, polite with your washing machine? Probably not always, especially when it's broken. But rudeness can still affect the machine's learning process, and users would hardly appreciate a smart speaker that spoke back to them harshly.

Apple also offers its Siri assistant in several versions: female or male voice, with different English accents. The default voice remains female, though it is male by default for Arabic, French, Dutch, and English (one wonders why).

Artificial intelligence is not inherently biased, as we are.

It's not enough to talk with different accents. Personal assistants and smart speakers need to understand everyone.

To do this, the companies that design them rely on corpora of audio clips, speeches and more. It's easy to imagine that some groups in society are under-represented: low-income, rural, working-class people who use the Internet less. Naturally, you won't find much of them in the corpus.

One of these corpora, the Fisher corpus, contains speech from people whose mother tongue is not English, but they are clearly under-represented. More amusingly, perhaps, Spanish and Indian accents are somewhat better represented than the various accents of Great Britain.

Artificial intelligence is not inherently biased, as we are. We are the ones who teach the machines to be biased. The World Economic Forum estimates that true gender equality won't be achieved until the next century. Chances are that with AI, we might have to wait even longer.


*Charles Cuvelliez is a lecturer at the Université Libre de Bruxelles and director of the Brussels School of Engineering.


