Migrant Lives

How Canada Is Using AI To Help Decide Immigration Cases

Algorithms can certainly speed things up. But are they an appropriate tool for processing residency and asylum claims, which are nuanced and complex by nature?

Toronto from afar
Petra Molnar and Samer Muscati


OTTAWA — The large-scale detention of undocumented immigrants in the U.S.; the wrongful deportation from the UK of 7,000 foreign students accused of cheating on a language test; racist or sexist discrimination based on a social media profile or appearance. What do these seemingly disparate examples have in common? In every case, an algorithm made a decision with serious consequences for people's lives.

Algorithms and artificial intelligence (AI) are increasingly being used in immigration and refugee systems, and Canada is no exception, according to research carried out in partnership with Citizen Lab. In our new report, we look at how Canada's use of these tools threatens to turn its immigration system into a laboratory for high-risk experiments. These initiatives may subject highly vulnerable people to unjust and unlawful processes, influencing decisions on multiple levels in ways that threaten to violate Canada's domestic and international human rights obligations.

Since 2014, Canada has been introducing automated decision-making experiments in its immigration mechanisms, most notably to automate certain activities currently conducted by immigration officials and to support the evaluation of some immigrant and visitor applications. Recent announcements signal an expansion of the uses of these technologies in a variety of immigration decisions that are normally made by a human immigration official.

These initiatives may place highly vulnerable people at risk of being subjected to unjust and unlawful processes.

What constitutes automated decision-making? Our analysis examines a class of technologies that augment or replace human decision-makers, such as AI or algorithms. An algorithm is a set of instructions, a "recipe" designed to organize or learn from data quickly and produce a desired outcome. These outcomes can include recommendations, assessments and decisions.

We examined the use of AI in immigration and refugee systems through a critical interdisciplinary analysis of public statements, records, policies and drafts by relevant departments within Canada's government. While these are new and emerging technologies, the ramifications of using automated decision-making in the immigration and refugee space are far-reaching. Hundreds of thousands of people enter Canada every year through a variety of applications for temporary and permanent status.

The nuanced and complex nature of many refugee and immigration claims may be lost on these technologies, leading to serious human rights violations in the form of bias, discrimination and privacy breaches, as well as issues of due process and procedural fairness. These systems will have real-life consequences for ordinary people, many of whom are fleeing for their lives.

At the U.S./Canada border — Photo: David Joles/Minneapolis Star Tribune/ZUMA

Our analysis also relies on principles enshrined in international legal instruments that Canada has ratified, such as the International Covenant on Civil and Political Rights, the International Convention on the Elimination of All Forms of Racial Discrimination, and the Convention Relating to the Status of Refugees, among others. Where the responsibilities of private-sector actors are concerned, the report is informed by the United Nations Guiding Principles on Business and Human Rights. We also analyze similar initiatives occurring in Australia and the United Kingdom.

Setting a dangerous precedent

Marginalized and under-resourced communities such as residents without citizenship often have access to less robust human rights protections and less legal expertise with which to defend those rights. Adopting AI without first ensuring responsible best practices and building in human rights principles at the outset will exacerbate preexisting disparities and lead to rights violations.

We also know that technology travels. Whether in the private or public sector, one country's decision to implement particular technologies makes it easier for other countries to follow. AI in the immigration space is already being explored in various jurisdictions across the world, as well as by international agencies that manage migration, such as the UN.

These systems will have real-life consequences for ordinary people, many of whom are fleeing for their lives.

Canada has a unique opportunity to develop international standards that regulate the use of these technologies in accordance with domestic and international human rights obligations. It is particularly important to set a clear example for countries with weaker records on refugee rights and rule of law, as insufficient ethical standards and weak accounting for human rights impacts can create a slippery slope internationally. Canada may also be responsible for managing the export of these technologies to countries more willing to experiment on non-citizens and infringe the rights of vulnerable groups.

It is crucial to interrogate these power dynamics in the migration space, where private-sector interventions increasingly proliferate, as seen in the recent growth of countless apps for and about refugees. However, in the push to make people on the move knowable, intelligible and trackable, technologies that predict refugee flows can entrench xenophobia, as well as encourage discriminatory practices, deprivations of liberty, and denial of due process and procedural safeguards.

Fundamental human rights must hold a central place in this discussion.

With the increasing use of technologies to augment or replace immigration decisions, who actually benefits? While efficiency may be valuable, those responsible for human lives should not pursue efficiency at the expense of fairness — fundamental human rights must hold a central place in this discussion. By placing such rights at the center, the careful and critical use of these new technologies in immigration and refugee decisions can benefit both Canada's immigration system and the people applying to make the country their new home.

Immigration and refugee law is also a useful lens through which to examine state practices, particularly in times of greater border control security and screening measures, complex systems of global migration management, the increasingly widespread criminalization of migration and rising xenophobia. Immigration law operates at the nexus of domestic and international law and draws upon global norms of international human rights and the rule of law.

Canada has clear domestic and international legal obligations to respect and protect human rights when it comes to the use of these technologies, and it is incumbent upon policymakers, government officials, technologists, engineers, lawyers, civil society and academia to take a broad and critical look at the very real impacts of these technologies on human lives.


Air Next: How A Crypto Scam Collapsed On A Single Spelling Mistake

It is today a proven fraud, nailed by the French stock market watchdog: Air Next resorted to a full range of dubious practices to raise money for a blockchain-powered e-commerce app. But the simplest of errors exposed the scam and limited the damage to investors. A cautionary tale for the crypto economy.

Sky is the crypto limit

Laurence Boisseau

PARIS — Air Next promised to use blockchain technology to revolutionize passenger transport. Should we have read something into its name? In fact, the company was full of hot air from the start. Air Next turned out to be a scam, with a fake website, false identities, fake criminal records, counterfeited bank certificates, aggressive marketing … real crooks. Thirty-five employees recruited over the summer ranked among its victims, not to mention the few investors who put money in the business.

Maud (not her real name) had always dreamed of working in a start-up. In July, she spotted an ad on LinkedIn and was interviewed by videoconference — hardly unusual in the era of COVID and teleworking. She was hired very quickly and signed a permanent work contract. She resigned from her old job, happy to get started on a new adventure.

Others like Maud fell for the bait. At least ten senior managers, coming from major airlines, airports, large French and American corporations, a former police officer … all firmly believed in this project. Some quit their jobs to join; some French expats even made their way back to France.

Share capital of one billion 

The story began last February, when Air Next registered with the Paris Commercial Court. The new company stated it was developing an application that would allow the purchase of airline tickets by using cryptocurrency, at unbeatable prices and with an automatic guarantee in case of cancellation or delay, via a "smart contract" system (a computer protocol that facilitates, verifies and oversees the handling of a contract).

The firm declared a share capital of one billion euros, with offices under construction at 50, Avenue des Champs-Élysées, and a president, Philippe Vincent ... probably a stolen identity.

Last summer, Air Next started recruiting. The company also wanted to raise money to have the assets on hand to allow passenger compensation. It organized a fundraiser using an ICO, or "Initial Coin Offering", via the issuance of digital tokens, transacted in cryptocurrencies through the blockchain.

While nothing obliged him to do so, the company owner went as far as filing with the AMF, France's stock market regulator, which oversees this type of transaction. Seeking the regulator's stamp of approval is optional, but when granted, it gives guarantees to those buying tokens.

The infamous typo that brought down the Air Next scam — Photo: compta online

Launching an Initial Coin Offering

Then, on Sept. 30, the AMF issued an alert, by way of a press release, on the risks of fraud associated with the ICO, as it suspected some documents were forgeries. Just a few hours earlier, Air Next had moved up the date of its token pre-sale by several days.

For employees of the new company, it was a brutal wake-up call. They quickly understood that they had been duped, that they'd bet on the proverbial house of cards. On the investor side, the CEO didn't get beyond an initial fundraising of 150,000 euros. He was hoping to raise millions, but despite his failure, he didn't lose confidence. Challenged by one of his employees on Telegram, he admitted that "many documents provided were false" and that "an error cost the life of this project."

What was the "error" he was referring to? A typo in the name of the would-be bank backing the startup. A very small one, at the bottom of the page of the false bank certificate, where the name "Edmond de Rothschild" is misspelled "Edemond".

Finding culprits 

Before the AMF's public alert, websites specializing in crypto-assets had already noted certain inconsistencies. The company had declared a share capital of 1 billion euros, which is an enormous amount. Air Next's CEO also boasted about having discovered bitcoin at a time when only a few geeks knew about cryptocurrency.

Employees and investors filed a complaint. Failing to find the general manager, Julien Leclerc — which might also be a fake name — they started looking for other culprits. They believe that if the Paris Commercial Court hadn't registered the company, no one would have been defrauded.

Beyond the handful of victims, this case is a plea for more secure procedures in an increasingly digital world, particularly in the wake of the pandemic. The much-touted ICO market is itself a victim, and may find it hard to recover.
