Riders glued to their phones on a crowded bus. Hugh Han/Unsplash

Analysis

ISTANBUL — Racism is not a new problem, but a few recent real-world “developments” could push it even higher on the agenda.

The flow of poor people toward richer lands is increasing, and it will grow even more in the coming years as global warming renders some places uninhabitable. Some natives of rich countries, whose own populations are in natural decline, fear a future in which they will become a minority. (Wealth aside, there remains a false sense of security in such countries, even as their people sink into poverty.)


Those who arrive bring cultures different from those of their new countries, which sometimes causes conflict.

Racism used to be considered something to be ashamed of, at the very least; now it is being normalized by attitudes such as: “I am a racist, so what?”

Those who expected the information technology revolution to create a society of wise people, where anyone, anywhere in the world, could connect and access accurate information, now look with disappointment on the dump that social media has become. Only now do we understand why Elon Musk bought Twitter for far more than it was worth.

Musk completely eliminated Twitter’s already ineffective quality-control functions (I’ll never call it X!). Before, there was at least a small chance of reporting the racist, mob-inciting trolls who disturb everyone and having their accounts shut down.

The tragic consequences of Musk’s takeover

Now, anyone who pays can get a blue tick and be “verified.” The badge has become commonplace and is no longer a status symbol. Musk began selling something that used to be free, and thereby destroyed its value: a hilarious experiment in economics.

He built his own disinformation factory there.

So, who gained from this? The trolls and the spreaders of disinformation. The algorithm favors nondescript blue-ticked accounts, and it takes real effort just to see the posts of the people you actually follow. Look at any post by Musk: the comments contain nothing but flattery from the countless fools who worship the man.

You can learn the political ideas of Musk, who was born and raised in racist South Africa, from his Twitter account, if you can stand to read his posts. Everyone is free to hold any political opinion, but not everyone owns a global social media platform.

Musk didn’t stop at weakening Twitter’s internal fact-checking and cutting the staff who shut down offending accounts. He built his own disinformation factory there. If Donald Trump (Musk is a big donor to his campaign) wins the U.S. presidential election in a couple of months, or stages a successful coup this time after he loses, Musk could even become the chair of a “government efficiency commission.”

Musk himself is said to have shared several pieces of election disinformation so far; naturally, none of them were flagged by Twitter’s so-called fact-checking system.

Musk talks about an impending “civil war” roughly once a month. After three young girls were murdered in the United Kingdom by a youth who was born and raised there, lies spread on Twitter triggered racist riots across the country; Muslim refugees were targeted, even though the murderer was not Muslim, and many people were injured. Musk actively spread disinformation throughout, tweeting that “civil war is inevitable.”

Elon Musk, owner of X, formerly known as Twitter. – Frankmoore/Instagram

AI bias

The British weekly journal Nature recently published an interesting research paper on latent racism in the language models that power artificial intelligence.

It’s well known that because these models are “educated” on texts from the internet, they absorb the racist rhetoric in those texts, too. They are therefore given an additional “moral” education, called reinforcement learning from human feedback: before being released to the public, they are trained over thousands of dialogues to avoid forming sentences that could be read as racist. In a dialogue with a model that has undergone this training, you would find no racism.

But this doesn’t mean the problem is solved. The paper describes an experiment in which an artificial intelligence model is given the “testimony” of a murder suspect and asked whether it would sentence him to death. The model leans toward capital punishment more often when the testimony is written in the African American dialect of English than when it is written in “standard” English.

Both people and robots have a long way to go.

Translated and Adapted by: