AI tool could influence Home Office immigration decisions, critics say
Campaigners call for an end to “robo-caseworkers”, fearing that officials will rely too heavily on automated recommendations when enforcing immigration rules.
A Home Office AI tool that recommends enforcement action against adult and child migrants could make it too easy for officials to approve life-changing decisions without careful review, campaigners say.
As more details of the system emerged, critics warned that it could “encode injustices”, because an algorithm helps shape decisions as serious as whether people are deported to their home countries.
The government says the system brings efficiencies by helping to prioritize work, and that a human remains responsible for every decision. It is being used to manage the growing caseload of asylum seekers who may face removal, about 41,000 people.
Migrant rights campaigners have called on the Home Office to withdraw the system, saying it uses technology to make cruelty and harm more efficient.
Insight into how the system works became possible only after a year-long effort to obtain information, which led to the release of some documents that were shared with the campaign group Privacy International. They show that people whose cases are processed by the system are not told that AI is involved.
The system is one of many AI programs now in use across UK public authorities as officials seek faster, more efficient services. There are growing calls for more openness about how the government uses AI in areas such as health and welfare.
Peter Kyle, the Secretary of State for Science, said AI has “great potential to improve our public services,” but added that to fully benefit from it, we need to trust these systems.
The Home Office documents reveal that the Identify and Prioritise Immigration Cases (IPIC) system draws on a wide range of personal data about people who may face enforcement action, including biometric data, ethnicity, health information, and criminal records.
The stated goal of the system is to make it easier, faster, and more effective for immigration enforcement to identify, prioritize, and manage the services and actions each case requires.
Privacy International said it feared the system was set up in a way that encourages officials to rubber-stamp the algorithm’s suggestions rather than scrutinize them, because accepting the computer’s recommendation is easier than challenging it.
Officials who want to reject a recommendation about returning someone to their home country must give a written explanation and tick boxes indicating their reasons. To accept a recommendation, they simply click a button marked “accept”; no explanation or further review is required.
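To make that asymmetry concrete, here is a minimal sketch in Python of a review flow with one-click acceptance and justification-gated rejection. It is purely illustrative: the names (ReviewDecision, record_decision) and the reason codes are hypothetical, and nothing here is drawn from the Home Office’s actual system.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the asymmetric review flow described above.
# All names and fields are illustrative; this is not the IPIC system.

@dataclass
class ReviewDecision:
    case_id: str
    accepted: bool
    reasons: list[str] = field(default_factory=list)  # ticked reason codes
    rationale: str = ""                               # free-text explanation


def record_decision(case_id: str, accepted: bool,
                    reasons: list[str] | None = None,
                    rationale: str = "") -> ReviewDecision:
    """Accepting needs no justification; rejecting is blocked without one."""
    if accepted:
        # The 'accept' path collects nothing beyond the click itself.
        return ReviewDecision(case_id, accepted=True)
    if not reasons or not rationale.strip():
        raise ValueError(
            "Rejection requires ticked reasons and a written explanation."
        )
    return ReviewDecision(case_id, accepted=False,
                          reasons=list(reasons), rationale=rationale)


# One click to accept; paperwork to reject.
print(record_decision("case-001", accepted=True))
print(record_decision("case-002", accepted=False,
                      reasons=["outstanding-appeal"],
                      rationale="Removal barred while the appeal is pending."))
```

The design concern is straightforward: when acceptance costs a click and rejection costs a written explanation, busy caseworkers are nudged toward acceptance, which is precisely the rubber-stamping risk Privacy International describes.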
Asked whether this design creates a bias towards accepting AI recommendations, the Home Office declined to comment.
Officials describe IPIC as a tool that assists immigration officers by suggesting the next case or action to consider, and they stress that every recommendation it makes is reviewed by a caseworker, who must weigh each case on its individual merits. The system is also used in cases involving EU nationals applying to stay in the UK.
Jonah Mendelsohn, a lawyer at Privacy International, said the tool could affect many people’s lives.
“People going through the immigration system don’t know how this tool is being used in their cases or whether it could lead to wrongful action against them,” he said. “Without changes to ensure greater transparency, the government’s plan to be ‘digital by default’ by 2025 risks making existing problems in the immigration system worse.”
Fizza Qureshi, the head of the Migrants’ Rights Network, called for the tool to be withdrawn, warning that it risks racial bias and intrusion into privacy, given the volume of data it collects, including health information, and the increased surveillance of migrants it could bring.
IPIC has been in widespread use since 2019-20. The Home Office has refused to release further details, arguing that too much transparency could help people circumvent immigration controls.
Madeleine Sumption, of the Migration Observatory at the University of Oxford, said the use of AI in immigration decisions is not inherently wrong, since it could in principle improve human decision-making; but without greater transparency, there is no way to know whether it is actually helping or making things worse.
For example, if a country such as Iran is unlikely to take back citizens who are deported, pursuing those cases may waste limited resources. Likewise, if a person’s claim to remain rests on human rights law, making quick deportation unlikely, it may be better to prioritize other cases and avoid holding people in indefinite detention.
Documents from the Home Office say that the tool is used to “assess if someone can be removed and how much risk they pose, automate the process of finding and prioritizing cases, and track how long there have been barriers to removal.”
A draft bill introduced in the UK Parliament last month would, according to lawyers, permit automated decision-making in most circumstances, provided that affected individuals can have their say, receive meaningful human review, and challenge automated decisions.
Published: 11th November 2024