Analysis
ROME — When Syrian and Palestinian refugees were stranded on a scorpion-infested island on the Greek-Turkish border in the Evros River last summer, it took the Greek authorities more than 10 days to send relief supplies.
Greek authorities claimed they could not locate the abandoned migrants, despite having received their precise geographical coordinates. This far-fetched claim, refuted by an investigation by the German broadcaster Deutsche Welle, is a perfect example of how selectively the European Union uses border surveillance technologies.
Another example of this selective use came four months later: in December 2022, Human Rights Watch accused Frontex, the European Border and Coast Guard Agency, of using its drones and aircraft to locate refugee boats and report them to the Libyan coast guard, which brought the passengers back to Libya, where they faced violence in which Frontex is undoubtedly complicit.
Available technology has not been used at crucial moments when it could have saved lives. Instead, it is used to turn away asylum seekers. In the cases above, the authorities did not hesitate to act illegally, failing to rescue stranded migrants in one case and colluding with the Libyan coast guard in the other, violating the fundamental rights of these asylum seekers.
Misuse of technology
This is the context in which the EU has just approved its first proposed regulation on artificial intelligence. The act risks worsening the already devastating impact of the border technologies used to carry out European migration and asylum policies.
“Everything that is currently in use is not regulated,” explains Caterina Rodelli, an analyst at Access Now, a digital rights advocacy organization. “Regulations exist, but they don’t consider all types of rights that can be violated.”
“Let’s take the example of the General Data Protection Regulation (GDPR). Many AI-based surveillance systems process data, but the GDPR only protects privacy and personal data, while today we observe violations at various levels: discrimination, violation of the right to asylum, of the right to a fair trial,” adds Rodelli. “There is a lack of legislation that recognizes the complexity of the situation created by these systems and the impact of automation and artificial intelligence on border controls and asylum procedures.”
The AI Act
Access Now is part of a coalition of organizations fighting for the regulation of artificial intelligence so that the fundamental human rights of all migrants traveling to the EU are respected. Through the #protectnotsurveil campaign, the organization has been highlighting the many issues with the proposal, which was presented by the Commission in April 2021.
The act sorts artificial intelligence systems into risk categories (unacceptable, high, limited and minimal) according to their potential to violate fundamental rights. Each category is subject to different requirements.
“The regulation, however, should establish rules in which the yardstick is not the degree of risk but respect for fundamental rights, which is what the GDPR does,” says Rodelli.
Under the GDPR, a system is authorized or prohibited depending on whether or not it respects the rights to privacy and data protection. On the positive side, the proposal does provide for the possibility of banning certain technologies. “But while some systems were expected to be banned, such as remote biometric recognition (basically all the systems that would enable a surveillance society), in the field of migration there was no ban,” explains Rodelli.
On May 11, two European Parliament committees approved the proposal, “with various improvements,” according to Rodelli. Some new bans on these technologies were introduced: emotion recognition systems and biometric categorization systems, including those that claim to identify a person’s exact place of origin by analyzing how they speak, are now prohibited.
But even this revised and improved version of the proposal does not prohibit certain artificial intelligence systems in the context of migration and asylum policies, despite the fact that, Rodelli says, “some systems, according to civil society but also according to academic experts, will inevitably lead to violations of human rights … We just put them in the ‘high risk’ category.”
The potential dangers
Those producing and using these systems will first have to conduct an impact assessment on fundamental rights, although “the specifics of this assessment are not yet clear, just as it is not clear who would be in charge of approving it,” notes Rodelli. The proposal describes a new European committee for AI, while, at the national level, it states that “the member states will have to designate supervisory authorities responsible for implementing the legislative requirements.”
It is essential that there be as much transparency as possible around these systems, which should be registered in a public database. “This is the only way it will be possible to establish responsibility in the event of a rights violation,” observes Rodelli, adding: “The parliamentary committees have included appeal mechanisms in the text for high-risk systems, even if not entirely satisfactory ones. For example, there is no possibility for public interest organizations to lodge an appeal on behalf of an individual. The authorities fear being overwhelmed by lawsuits initiated by NGOs.”
For governments, these tools have the advantage of presenting politically charged choices as objective and neutral, especially since they benefit from an almost inevitable “automation bias,” the human tendency to defer to automated outputs. These predictive algorithms and models are built on data that is itself biased, such as data on “irregular migrants,” an expression that categorizes people as potentially dangerous.
Governments have a choice
At the other end of the process, the human being tasked with interpreting those decisions or forecasts is, in theory, free to contest them, but will be doubly conditioned: by the political context in which they operate, and by an artificial intelligence system that presents itself as impartial.
The European Parliament voted to approve the act on June 14, but the real battle will begin during the opaque negotiations between the European Parliament, the Council of the European Union and the European Commission to agree on the final version of the regulation, which will almost certainly happen before the end of this legislative term.
“The Council presented its version in December, eliminating any transparency obligation for the police and migration control authorities that had been in the Commission’s proposal,” explains Rodelli. “In other words, the authorities would be required to comply with the regulation but would not be required to provide information on how they comply. It would mean carrying forward what is already the norm, i.e. impunity.”
If member states are so determined to maintain this norm, it is because technologies based on machine learning and automated decision-making are already widespread in migration and asylum policy.
With Huawei’s help
Numerous examples have been collected in the framework of the European research project AFAR (Algorithmic Fairness for Asylum Seekers and Refugees) and the #protectnotsurveil campaign (in particular in the report “Uses of AI in Migration and Border Control”). Italy is cited for the use, later prohibited by the country’s data protection authority, of remote biometric identification systems such as the real-time version of SARI, deployed in ports to identify people disembarking, as well as a system adopted in Como, in collaboration with Huawei, to identify irregular migrants sleeping in parks. “These were blocked because Italy has a strong data protection authority,” comments Rodelli.
These initiatives remind us that behind every technology enthusiastically adopted by governments and local authorities to monitor and reject migrants, there are companies equally determined to preserve their freedom: the freedom to innovate without too many brakes, to market their products, and to test their technology on categories of people who are less able to assert their fundamental rights. In February, the watchdog Corporate Europe Observatory published a report on the lobbying by large technology companies to water down the regulation on artificial intelligence.
“Criticism of artificial intelligence focuses on something else, namely the risk that it could replace human beings. The reality is that these systems already exist and already have harmful effects, but on categories of people considered less important,” Rodelli observes.
The coming months will tell us whether the new regulation will make it possible to reduce these effects by actually protecting the fundamental rights of all, or whether it will end up legitimizing and automating the discrimination, suspicion and violence that the European Union reserves for those seeking protection and a new life on its territory.