-Analysis-
MADRID — For the past decade, debates about artificial intelligence have revolved around safety, job automation and algorithmic bias. But with the emergence of generative AI models — able to write text, produce images, generate code or diagnose medical conditions — a more structural question arises: will artificial intelligence deepen social inequalities, or could it instead be a resource to fight them?
The rise of a new technology often bursts forth like a lightning bolt: sudden, wild and luminous, as if it were a spontaneous phenomenon that tears apart the continuity of what is known. And yet, the development of systems such as GPT-4 or DALL-E is not occurring in a neutral vacuum: it is built with natural resources, human labor and global infrastructures of extraction and exploitation.
Every technological breakthrough is also a political and economic operation that concentrates power, resources and profits in the hands of specific players.
A large part of the problem lies in the data, the raw material with which these systems are trained. As AI algorithms are introduced into public management, from education to healthcare, a fundamental question arises: who is behind the making of these algorithms?
In many cases, the answer is a handful of private companies which, under a façade of technical neutrality, make crucial decisions about who gets access to resources, who is excluded, and how the benefits of automation are distributed. As Virginia Eubanks reminds us in Automating Inequality (2018), automated systems of social resource allocation in the United States systematically penalize the working classes and racial minorities.
Biases at the source
Biased historical data produces biased algorithms. The difference, Eubanks points out, is that in automated processes it is much more difficult to challenge the decision or claim a correction. The massive databases with which the algorithms have been trained contain cultural, linguistic, and racial biases that can be reproduced in the systems’ responses, affecting automated decisions in highly sensitive areas such as hiring, credit evaluation, or medical diagnosis.
Several recent studies warn of this risk. Sandra Wachter and Brent Mittelstadt, in their article “A right to reasonable inferences” (2019), argue that algorithmic systems inherit and amplify “historical biases” unless actively corrected, and that the risk of reinforcing pre-existing social hierarchies is often underestimated.
In contexts where automated decisions affect access to public housing, healthcare or employment, the consequences can be severe for vulnerable groups.
In the same vein, Spanish anthropologist José Mansilla Fernández explains in his research that, for Karl Marx, the Luddites’ attacks on machinery — as opponents of technology — were misdirected rage: the problem was not the technology itself, but who controls it and for what purpose. For Marx, he says, technology is neither good nor bad per se; the conflict today is that algorithms are in the hands of an economic elite that prioritizes efficiency over collective welfare.
German philosopher and writer Fabian Scheidler, author of The End of the Megamachine (Icaria, 2024), warns that “artificial intelligence could be the greatest plunder in human history, turning humanity’s collective heritage of knowledge and creativity into a commodity in the hands of the ultra-rich. It gives the billionaires who control the algorithms and platforms unprecedented opportunities to influence public opinion through micro-segmentation and other methods, in order to manipulate political decision-making processes and further increase their wealth.”
A tool to reduce inequalities?
Not everything, however, points to a dystopian scenario. Some researchers argue that, under certain conditions, artificial intelligence could be used to reduce inequalities. Researchers such as Rediet Abebe, Solon Barocas and Jon Kleinberg point out in “Roles for Computing in Social Change” that AI systems integrated into carefully designed public infrastructures can mitigate inequalities, provided their incentives are aligned with redistributive and equity objectives.
There have already been initiatives in public health, social risk prediction and access to education where open, auditable algorithms have reduced historical barriers to those services.
Of course, it will all depend on the political and cultural model that frames its deployment. Artificial intelligence will not, on its own, determine the future of social inequalities. Its effect will depend on whether its development and application are governed by public or private interests, by redistributive or extractive logics. Much of its emancipatory or regressive potential lies in this political bifurcation.
Common good or a weapon of inequality?
As Mansilla points out: “AI will obviously push in the direction dictated by these interests: greater concentration of wealth, job polarization — precariousness for many, super-profits for a few — and commodified creativity.
If it were in the hands of another system — for example, one oriented towards commons — it could democratize access to knowledge, free up time for genuine social relationships, or enhance collective creativity. But under capitalism, it reproduces the same logic: technology as a weapon of inequality.”
Finally, Scheidler adds an environmental dimension to the debate: “In its present form, artificial intelligence also accelerates the destruction of the biosphere, as it consumes an increasing share of the world’s energy production, making a rapid and sustainable transition to renewable energies unfeasible.”
Hegemonic artificial intelligence — the one that optimizes profits for Silicon Valley while making work precarious and commodifying even language — is not a flawless monolith. In its interstices grow practices that embody what Paolo Virno would call “the communism of intellectual capacities”: projects where technology disengages from private accumulation to become a common good.
These cracks are not technical alternatives, but spaces of political conflict — and the battle for value in the digital era is what matters most.