
Google, Facebook And Beyond: Why Algorithms Discriminate

Yes, technology can be biased, racist and sexist. That's because the creators of algorithms are overwhelmingly white men, and the skewed results they produce are becoming a huge problem in an increasingly digitized world.

Google Images search for "CEO"
Sara Weber

MUNICH — Algorithms decide whom Facebook suggests you befriend and when future self-driving cars will apply the brakes. An algorithm is a sequence of instructions that, applied correctly, solves a problem. Algorithms are supposed to be systematic and logical, producing the same result even after the umpteenth run.

But every algorithm is programmed by humans who are part of various social strata and whose work is influenced, consciously or not, by their preferences and prejudices. And the more the world is driven by technology, the more this is a problem.

"We assume that technology is neutral, but there are no neutral algorithms," says Corinna Bath, a computer specialist who researches technology development and gender issues. She says technology is always influenced by social standing, though very few people realize that.

Algorithms can be downright dangerous if they have been programmed to make racist or sexist decisions. Mind you, this probably doesn't happen on purpose: most programmers simply aren't aware that their prejudices are influencing the code they develop.

Blatant sexism in image returns

Google the term "CEO" and the search engine suggests a disproportionately high number of images of white men. In reality, about 27% of all CEOs in the U.S. are women. Yet according to a study conducted by the University of Washington, only 11% of the images suggested show women. And one of the first female images, by the way, is of the doll CEO Barbie.


"Technological developments are progressing on a constant basis, and if we don't start to examine algorithms on their ethical suitability soon, this could cause some serious problems," Bath says. She has observed that antiquated gender roles are perpetuated in technological developments, which means algorithms are actually more sexist than today's society. And besides, the people who use these algorithms don't question the results they produce but accept them to be an accurate depiction of reality.

If you Google a phrase starting with "women should …," the search predictions autocomplete the sentence with phrases such as "should not study," "should not wear trousers" and "should not have rights." According to Google, an algorithm generates the suggestions automatically, without any human interference. The algorithm, the search giant says, is based on many objective factors, among them how frequently a term is searched, which is why the predicted phrases may seem "strange or surprising."
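To see why purely frequency-driven suggestions reproduce whatever a corpus contains, prejudice included, consider this minimal sketch. The query log is invented for the example, and real autocomplete systems are vastly more complex; the point is only that ranking by raw popularity is "objective" and biased at the same time.

```python
from collections import Counter

# Invented query log: suggestions simply mirror what people type most often.
query_log = [
    "women should not have rights",
    "women should not study",
    "women should be in parliament",
    "women should not wear trousers",
    "women should not study",
]

def autocomplete(prefix, log, n=3):
    """Rank completions of `prefix` purely by how often they were searched."""
    counts = Counter(q for q in log if q.startswith(prefix))
    return [q for q, _ in counts.most_common(n)]

print(autocomplete("women should", query_log))
# The most frequent (and most biased) queries surface first,
# with no human ever having "decided" to discriminate.
```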

Another description for this search function's behavior would be "discriminatory." The UN Women organization criticized the supposed "autocomplete truth" in 2013 in one of its campaigns, saying that women should have equal rights everywhere, even on Google.

But it's getting worse. A study conducted by Carnegie Mellon University has shown that women get fewer advertisements on Google for well-paid jobs in managerial positions than men. Searching Google for names that sound African-American results in advertisements that imply a criminal record, according to a 2013 study by Harvard Professor Latanya Sweeney.

Even photos of hands are racist

When you Google the term "hand," the search engine primarily returns images of fair-skinned hands. Such generic illustrations, also called "stock images," are used to accompany texts on medical topics, cosmetics or jewelry, and in other advertisements. Brown or black hands are very rarely used for these purposes. An initiative called World White Web is trying to draw attention to the "whiteness" of the Internet using examples like these: it offers stock images of hands in various skin tones, free of charge.

Bath says that technology development itself is responsible for what's happening. Programmers are overwhelmingly white men, and therefore members of a very homogenous group. They develop products under the assumption that users will have similar interests and competencies. In addition, programmers often think of a stereotype while working rather than of real people. Telephones designed for senior citizens, for example, often have large dialing pads because eyesight degenerates with age. But not all old people have poor eyesight: other age-related conditions, such as dementia or hearing difficulties, also influence how elderly people use phones and which functions are relevant to them. Many programmers simply don't think of these factors.

Every new technological development can lead to renewed discrimination, for example, when images are automatically tagged with keywords, as is the case on Google and Flickr. A picture of a dachshund will be tagged "dog," while a picture of the Eiffel Tower gets "Paris." When such an algorithm is built, it is fed examples so that it learns to recognize categories on its own, or so the programmers intend. But this doesn't always work: Flickr tagged photos of black people with the keyword "ape," and Google with the term "gorilla." Images of white people, however, were not confused with animals. After staunch protests, Google apologized and removed the keyword completely.
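The statistical failure behind such mislabeling is easy to reproduce in miniature. The sketch below is a toy stand-in, not how Google's or Flickr's systems actually work: a nearest-centroid "classifier" is trained on invented two-dimensional features in which the "person" class comes almost entirely from one subgroup, so an example from an underrepresented subgroup falls closer to the wrong category.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented training data: the "person" class is sampled almost
# entirely from one subgroup (subgroup A), skewing what the model learns.
person_train = rng.normal(loc=0.0, scale=1.0, size=(95, 2))   # subgroup A only
animal_train = rng.normal(loc=3.0, scale=1.0, size=(100, 2))

centroids = {
    "person": person_train.mean(axis=0),
    "animal": animal_train.mean(axis=0),
}

def label(x):
    """Nearest-centroid labeling, standing in for a trained classifier."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# A person from underrepresented subgroup B, whose (invented) features
# sit closer to the "animal" centroid than to the "person" centroid
# learned from subgroup A alone.
subgroup_b_example = np.array([2.2, 2.2])
print(label(subgroup_b_example))  # -> "animal": the training skew mislabels them
```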

Algorithms are also used by companies to determine which of hundreds of applicants will be invited to a job interview. Andreas Dewes, a physicist and data analyst, researches discrimination and ethics in a data-driven society, asking, for example, whether such algorithms prefer men over women when interview candidates are chosen.

His simulation, based on more than 10,000 case studies, demonstrates that the more information an algorithm has about a person, the better it is able to discriminate against that person. The discrimination isn't always sexist; it can also target skin color or social class, information that is readily available online. Talk about Big Data. "The algorithm can be just as discriminating as a human if, for whatever reason, it has full access to certain sets of information," Dewes says. "A lot can go wrong when using algorithms."
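Dewes's finding can be illustrated with a toy simulation (all numbers and features here are invented, not taken from his study): a ranking model that never sees group membership directly still skews its shortlist once it can use a proxy feature that happens to correlate with the group.

```python
import random

random.seed(1)

def applicant(group):
    skill = random.gauss(50, 10)  # the only thing that should matter
    # Invented proxy feature: strongly correlated with group membership,
    # irrelevant to actual job performance.
    proxy = random.gauss(1.0 if group == "A" else 0.0, 0.2)
    return {"group": group, "skill": skill, "proxy": proxy}

pool = [applicant("A") for _ in range(500)] + [applicant("B") for _ in range(500)]

def share_of_b_in_top(score, k=100):
    """Fraction of group B among the top-k applicants under a scoring rule."""
    top = sorted(pool, key=score, reverse=True)[:k]
    return sum(a["group"] == "B" for a in top) / k

# Scoring on skill alone vs. skill plus the extra data point:
print(share_of_b_in_top(lambda a: a["skill"]))                    # ~0.5: balanced
print(share_of_b_in_top(lambda a: a["skill"] + 20 * a["proxy"]))  # near 0: skewed
```

The model was never told anyone's group, yet the richer feature set lets it discriminate anyway, which is exactly the mechanism Dewes describes.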

The probability of algorithms finding information that can be used to discriminate increases as more and more data is available about growing numbers of people. This becomes particularly critical when algorithms make decisions that have a profound influence on our lives. "What, for example, am I to do if, based on my online behavioral patterns, I am classified as a terrorist?" Bath asks. "Algorithms can produce incorrect information."

There's only one solution. Developers of algorithms must become aware of the problem, ideally during their studies. But users should also be aware that every algorithm they use has been programmed by a human, someone who is fallible and is not free from prejudice.
