Government to “redesign” controversial visa algorithm

Earlier this year JCWI, with the help of Foxglove, launched a legal challenge against the Home Office over its use of an algorithmic “streaming tool” that assigned risk categories to visa applications. The tool, previously covered on Free Movement, scored visa applicants for risk based in part on their nationality.

Applications rated low risk were sent to caseworkers expected to process them quickly, and their work was only checked if the application was refused. Applications rated high risk went to caseworkers expected to take their time, and their work would be scrutinised if they granted the application. Certain nationalities were assigned high risk scores based on a combination of previous adverse decisions, immigration enforcement encounters and “intelligence”.
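
For readers who find pseudocode easier to follow, here is a minimal sketch of the traffic-light logic described above. All country names, weights and thresholds are invented for illustration; the real tool’s rules and risk data were never published.

```python
# Hypothetical sketch of the streaming logic described above.
# Country names, weights and thresholds are invented; the real tool's
# rules and underlying risk data were not made public.

HIGH_RISK_NATIONALITIES = {"Country A", "Country B"}  # built up from past refusals,
                                                      # enforcement encounters and "intelligence"

def stream_application(nationality: str, other_risk_points: int = 0) -> str:
    """Assign a red/amber/green rating, with nationality as a major input."""
    score = other_risk_points
    if nationality in HIGH_RISK_NATIONALITIES:
        score += 10  # nationality alone could push an application into 'red'
    if score >= 10:
        return "red"     # slower queue; grants were double-checked
    if score >= 5:
        return "amber"
    return "green"       # faster queue; only refusals were double-checked
```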

Yesterday the Home Office confirmed that it will “discontinue” the streaming tool from 7 August and committed to redesigning it. Government lawyers told us:

In the course of that redesign, our client intends carefully to consider and assess the points you have raised in your Claim, including issues around unconscious bias and the use of nationality, generally, in the Streaming Tool. For clarity, the fact of the redesign does not mean that the SSHD accepts the allegations in your claim form. However, the redesign will be approached with an open mind in considering the concerns you have raised.

In the meantime, it will be using an “interim” sifting process that relies only on “person-centric attributes” and does not take into account nationality. It will provide us with details of this, alongside equality and data protection impact assessments for the interim process. 

JCWI’s legal arguments

The streaming tool exacerbated the racism inherent in using nationality as an indicator of risk. It made decisions less accountable and obscured the source of bias by steering decision-makers towards particular outcomes while purporting not to. The bias in the tool was self-amplifying – a high risk score assigned to a particular nationality would result in more refusals of that nationality, which would in turn feed back into the risk score.
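
To make that feedback loop concrete, here is a toy simulation. Every number in it is assumed and bears no relation to the Home Office’s actual model; the point is only to show how an initial “high risk” label can generate the very refusal statistics later used to justify it.

```python
# Toy model (all numbers assumed) of the self-amplifying loop described above:
# extra scrutiny of a "high risk" nationality yields more refusals, and that
# refusal rate is then fed back in as evidence of risk, raising the score again.

BASE_REFUSAL = 0.10      # assumed refusal rate with no extra scrutiny
SCRUTINY_EFFECT = 0.05   # assumed extra refusals per point of risk score

def observed_refusal_rate(risk_score: float) -> float:
    return min(BASE_REFUSAL + SCRUTINY_EFFECT * risk_score, 0.95)

def updated_risk_score(refusal_rate: float) -> float:
    # Assumption: the score tracks how far refusals exceed the baseline.
    return 25 * (refusal_rate - BASE_REFUSAL)

for label, risk in [("never flagged", 0.0), ("initially flagged", 2.0)]:
    for cycle in range(5):
        rate = observed_refusal_rate(risk)
        risk = updated_risk_score(rate)
    print(f"{label}: refusal rate after 5 cycles = {rate:.0%}")
```

A nationality that starts with a risk score of zero stays at the baseline refusal rate, while one that starts with any positive score sees its refusal rate (and therefore its score) ratchet upwards each cycle.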

Translating these concerns into legal grounds for challenge, our arguments were:

  1. The use of nationality in the streaming tool constituted direct discrimination under the Equality Act 2010, and was not exempted by the ministerial authorisation relied on. That authorisation only extended to a “more rigorous examination” of the application, whereas in practice the tool labelled applications as ‘red’, ‘amber’ or ‘green’: a system likely to influence decision-makers on whether or not to grant the application.
  2. The streaming tool was irrational because it generated a feedback loop that created the “risk” posed by nationalities in the first place. Nationalities targeted by the tool were more likely to be refused visas, and the refusal rate for those nationalities was then used to justify the risk score assigned to them in the tool.
  3. The streaming tool promoted reliance on irrelevant considerations by the human decision-maker. Caseworkers were encouraged to grant ‘green’ applications quickly and would only have their work checked (and therefore potentially criticised) if they refused a green application. The opposite was true for applications marked ‘red’: caseworkers were given time to find reasons to refuse, and risked having their work checked if they granted. This supplanted a proper consideration of applications on their merits.

We also complained of a failure to conduct a data protection impact assessment and to have regard to the public sector equality duty. For those interested, Foxglove has published the grounds in full.

What would a better system of visa tech look like?

The use of algorithms doesn’t have to be like this. With a transparent risk profile and dataset, and clearly articulated reasons for decisions that could be scrutinised by lawyers, it could make decisions fairer and more efficient. But this tool obscured the data that went into it, and muddied the waters between the decision-maker and the true basis of the decision.
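
By way of illustration only, a more transparent system might attach an auditable record like the sketch below to every streaming decision, so that the inputs and reasoning can be disclosed and challenged. Every field name, rule identifier and example value here is assumed, not drawn from any actual Home Office system.

```python
# Purely illustrative shape for a transparent, auditable streaming decision:
# it records the inputs actually used, the published rule applied, and a
# plain-English reason that a lawyer or inspector could later scrutinise.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class StreamingDecisionRecord:
    application_id: str
    rating: str                # e.g. "green" / "amber" / "red"
    rule_applied: str          # identifier of a published, versioned rule
    inputs_considered: dict    # the person-centric attributes actually used
    reason: str                # plain-English justification for the rating
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example of a record that could be disclosed and challenged
record = StreamingDecisionRecord(
    application_id="APP-0001",
    rating="amber",
    rule_applied="rules/v2#prior-overstay",
    inputs_considered={"prior_overstay": True, "sponsor_verified": True},
    reason="Previous overstay on record; sponsor verified, so routed to amber rather than red.",
)
print(record)
```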

It’s extremely important that we ensure that the use of algorithms and tech solutions remains transparent and susceptible to scrutiny under public law principles. We and our colleagues at Foxglove will be closely monitoring the development of the new tool to ensure that that happens.

It’s not just the robots who are biased

We also know that new tech solutions will be built on top of a rotten system already full to the brim with bias. Immigration raids continue to be targeted at people of certain nationalities in what is a self-perpetuating feedback loop, without any AI or algorithmic assistance. As the Independent Chief Inspector of Borders and Immigration points out, regarding the use of “intelligence” by Immigration Enforcement:

IE’s intelligence about illegal working mostly consisted of low-level allegations made by members of the public, which were lacking in detail and the reliability of which was difficult to assess. This had led IE to focus on high street restaurants and takeaways, which was self-reinforcing and limiting in terms of organisational knowledge and the nationalities encountered. Other business sectors and possibly other nationalities had been neglected by comparison.

The inspector also found:

Bangladeshis, Indians, Pakistanis and Chinese made up almost two-thirds (63%) of all illegal working arrests. Whatever the logic of this approach from a removals perspective, the inference for other nationals working illegally, especially if they were not employed in restaurants and takeaways, was that the likelihood of being arrested for working illegally was low and the likelihood of removal was negligible.

We need to make sure that the growing use of big data and tech solutions doesn’t obscure, entrench or reinforce racial biases. But we also need to root such biases out of the decidedly non-tech-based shambles that comprises most of the immigration system we have today.

Chai Patel

Chai Patel is Legal & Policy Director at the Joint Council for the Welfare of Immigrants (JCWI). Chai joined JCWI in 2015. Prior to that he was in the Human Rights department at Leigh Day, working on abuse and human rights claims, and on the death penalty team at Reprieve, focussing on international strategic litigation, casework, and investigation.
