Wehead shows its AI companion with a face at ShowStoppers during the 2024 Consumer Electronics Show – Gene Blevins/ZUMA

-Analysis-

TURIN — What does it mean to approach technology with an ethical perspective? Rather than seek the Ten Commandments of technology, we must ask what form of power a technological innovation introduces into society.


Take Manhattan as we know it today. It was shaped by the urban planner and public official Robert Moses, who believed that the best part of the city should be reserved for the best part of the population. In Moses's mindset, the best part of the population was the wealthy, white middle class. So he laid out every aspect of the city, from the streets to the infrastructure, to achieve that goal.

The ethics of technology treats technology as a socio-technical construct, aiming to highlight the relationship between technology and social life. Regarding artificial intelligence, we should ask what effect it has on our decision-making abilities.

Nudge theory

The question has existed since the late 1940s and early 1950s, when Claude Shannon built the prototype of a new kind of communication system, considered a precursor of AI: a mechanical mouse he named Theseus. The mouse could find its way through a maze by bumping against its walls, guided by relay circuits beneath the board. It was a different kind of machine from those of the industrial revolutions because it used information as a tool of control. Theseus was not merely a surrogate for human power but a machine oriented toward a purpose and equipped with the means to achieve it. These were the origins of cybernetics.

Another question we face is the machine's persuasive capacity. In many of the studies I have read, unfortunately, the rhetoric resembles what we've seen with tobacco and weapons: no distinction is made between persuasion and manipulation. Let me explain: I may not change your mind, but I can still make you act in a certain way. This is what Nudge theory, one of the fundamental theories of the behavioral sciences, tells us. The theory is so potent today because we live in a cultural context that struggles with norm compliance.

A law must be knowable, universal and general

To give an everyday example: we went from the traffic light (which imposes a behavior; an external normative indicator, Kant would have said) to the roundabout (which is based on the principle “you regulate as you wish”). How do we regulate as a society if normative guidance doesn’t work?

Amsterdam’s Schiphol airport, for example, found that cleaning the men’s rooms cost almost twice as much as cleaning the women’s rooms. An external normative device would not have worked: they could have posted “please aim inside the urinal” in letters as large as they wanted, and the situation wouldn’t have changed. By simply having a fly etched inside each urinal, the airport brought the cost of cleaning down to almost the same as for the women’s rooms. It effectively shifted from a norm to an informational device, one that nudges men and changes their behavior.

In my opinion, this is where we find the strongest possible tension regarding the rule of law. The problem of legitimacy in the modern age comes into play here: what makes something that’s lawful a law?

Close-up photo of a white robot hand reaching toward the camera, palm open. – Possessed Photography/Unsplash

Mind and machine

In A Theory of Justice, John Rawls suggested three criteria for a law to conform to the rule of law: it must be knowable, universal and general. How do we make the operating rules of intelligent machines legitimate? Let’s see whether they meet those criteria.

Are these rules knowable? The code could be made open so that anyone can read it. But Turing Award winner Ken Thompson showed in his Turing Award lecture “Reflections on Trusting Trust” (1984) that we cannot be certain a program does only what its source code says: the tools that build it can insert behavior that is invisible in the code itself. Second, are they universal? No: the algorithm profiles us, treating each user differently. Third, are they general? Intelligent machines obey only the server owner. As we enter 2024, when both the United States and the European Union will vote, we must consider how humans can assimilate, manage and harmonize this tool within the social contract.

Can we trust these intelligent machines? When I board a plane, I trust that someone has checked the tires, the engines have been serviced, and the pilot has had specific training. All of this can be called a social contract.

An AI system operating in the ways we have discussed risks eroding the trust that underlies the social contract: studies on polarization demonstrate this. A platform is not a neutral entity: it is an actor that monetizes polarization. That is, it extracts private value from a common and shared good, our social collectivity. Wouldn’t it be a good idea to start defining and measuring digital manipulation, which is different from persuasion?

While it is true, as the data show, that people exposed to AI systems do not change their minds over the long term, it is equally true that these systems induce immediate behaviors (such as purchases). That is why the difference between manipulation and persuasion must be put back at the center of public debate.

I believe keeping the human side central to the human-machine relationship is not just an educational or training issue. It is a matter of the democratic system’s resilience. We should reintroduce a distinction that already exists in law: the difference between danger and risk. We handle dangerous substances every day (think of gasoline), but we do so using a series of protective devices.

So the issue is not the power of the machine, but how we inject it into a social context and how we manage and monitor the social context itself.