THE CONVERSATION

Artificial Intelligence Could Steal Our Jobs — And Our Souls

Technological progress has always changed how we behave. But AI has much more far-reaching potential to change the very meaning of what it is to be human.

Is artificial intelligence altering humanity?
Dan Feldman and Nir Eisikovits

The history of humans' use of technology has always been a history of coevolution. Philosophers from Rousseau to Heidegger to Carl Schmitt have argued that technology is never a neutral tool for achieving human ends. Technological innovations – from the most rudimentary to the most sophisticated – reshape people as they use these innovations to control their environment. Artificial intelligence is a new and powerful tool, and it, too, is altering humanity.

Writing and, later, the printing press made it possible to carefully record history and easily disseminate knowledge, but they eliminated centuries-old traditions of oral storytelling. Ubiquitous digital and phone cameras have changed how people experience and perceive events. Widely available GPS systems have meant that drivers rarely get lost, but reliance on them has also atrophied their native capacity to orient themselves.

AI is no different. While the term AI conjures up anxieties about killer robots, unemployment or a massive surveillance state, there are other, deeper implications. As AI increasingly shapes the human experience, how does this change what it means to be human? Central to the problem is a person's capacity to make choices, particularly judgments that have moral implications.

Taking over our lives?

AI is being used for a wide and rapidly expanding range of purposes. It is being used to predict which television shows or movies individuals will want to watch based on past preferences and to make decisions about who can borrow money based on past performance and other proxies for the likelihood of repayment. It's being used to detect fraudulent commercial transactions and identify malignant tumors. It's being used for hiring and firing decisions in large chain stores and public school districts. And it's being used in law enforcement – from assessing the chances of recidivism, to police force allocation, to the facial identification of criminal suspects.

Algorithms trained on biased data build biased models and perpetuate existing prejudices.

Many of these applications present relatively obvious risks. If the algorithms used for loan approval, facial recognition and hiring are trained on biased data, thereby building biased models, they tend to perpetuate existing prejudices and inequalities. But researchers believe that cleaned-up data and more rigorous modeling would reduce and potentially eliminate algorithmic bias. It's even possible that AI could make predictions that are fairer and less biased than those made by humans.

While algorithmic bias is a technical issue that can, at least in theory, be solved, the question of how AI alters the abilities that define human beings is more fundamental. We have been studying this question for the last few years as part of the Artificial Intelligence and Experience project at UMass Boston's Applied Ethics Center.

Losing the ability to choose

Aristotle argued that the capacity for making practical judgments depends on regularly making them – on habit and practice. We see the emergence of machines as substitute judges in a variety of workaday contexts as a potential threat to people learning how to effectively exercise judgment themselves.

AI conjures up anxieties about killer robots and a massive surveillance state, but there are other, deeper implications — Photo: Cottonbro

In the workplace, managers routinely make decisions about whom to hire or fire, which loan to approve and where to send police officers, to name a few. These are areas where algorithmic prescription is replacing human judgment, and so people who might have had the chance to develop practical judgment in these areas no longer will.

Recommendation engines, which are increasingly prevalent intermediaries in people's consumption of culture, may serve to constrain choice and minimize serendipity. By presenting consumers with algorithmically curated choices of what to watch, read, stream and visit next, companies are replacing human taste with machine taste. In one sense, this is helpful. After all, the machines can survey a wider range of choices than any individual is likely to have the time or energy to do on her own.

At the same time, though, this curation is optimizing for what people are likely to prefer based on what they've preferred in the past. We think there is some risk that people's options will be constrained by their pasts in a new and unanticipated way: a generalization of the "echo chamber" effect people are already seeing in social media.

The advent of potent predictive technologies seems likely to affect basic political institutions.

The advent of potent predictive technologies seems likely to affect basic political institutions, too. The idea of human rights, for example, is grounded in the insight that human beings are majestic, unpredictable, self-governing agents whose freedoms must be guaranteed by the state. If humanity – or at least its decision-making – becomes more predictable, will political institutions continue to protect human rights in the same way?

Utterly predictable

As machine learning algorithms, a common form of "narrow" or "weak" AI, improve and as they train on more extensive data sets, larger parts of everyday life are likely to become utterly predictable. The predictions are going to get better and better, and they will ultimately make common experiences more efficient and more pleasant.

Algorithms could soon – if they don't already – have a better idea about which show you'd like to watch next and which job candidate you should hire than you do. One day, humans may even find a way for machines to make these decisions without some of the biases that humans typically display.

But to the extent that unpredictability is part of how people understand themselves and part of what people like about themselves, humanity is in the process of losing something significant. As they become more and more predictable, the creatures inhabiting the increasingly AI-mediated world will become less and less like us.


Nir Eisikovits, Associate Professor of Philosophy and Director, Applied Ethics Center, University of Massachusetts Boston; Dan Feldman, Senior Research Fellow, Applied Ethics Center, University of Massachusetts Boston

This article is republished from The Conversation under a Creative Commons license.


Ideas

Look At This Crap! The "Enshittification" Theory Of Why The Internet Is Broken

The term was coined by journalist Cory Doctorow to explain the fatal drift of major Internet platforms: however useful and user-friendly they may once have been, they will inevitably end up being odious.

A person holding their smartphone

Gilles Lambert/ZUMA
Manuel Ligero

-Analysis-

The universe tends toward chaos. Ultimately, everything degenerates. These immutable laws apply with even greater force to the Internet.

In the case of media platforms, everything you once thought was a good service will, sooner or later, disgust you. This trend has been given a name: enshittification. The term was coined by Canadian blogger and journalist Cory Doctorow to explain the inevitable drift of technological giants toward... well.

The explanation is in line with the most basic tenets of Marxism. All digital companies have investors (essentially the bourgeoisie, people who don't perform any work and take the lion's share of the profits), and these investors want to see their gains grow year after year. This pushes companies to make decisions that degrade the service they provide to their customers. And they don't do it reluctantly, quite the opposite.


Annoying customers is just another part of the business plan. Look at Netflix, for example. The streaming giant has long been puzzling over how to monetize shared Netflix accounts. First it added a premium option on top of its regular price. Then it asked for verification through text messages. After that, it considered raising the total subscription price, and it also mulled adding advertising to the mix, and so on. These endless maneuvers irritated its audience, even as the company remained unable to decide which way it wanted to go. So, slowly but surely, we see it drifting toward enshittification.
