Q&A: Oversight and governance of AI use at borders and in humanitarian aid zones are weak, says expert

Lawyer and anthropologist Dr. Petra Molnar has travelled to the U.S.-Mexico border, to Israel and the Occupied Palestinian Territories, and to the Greek island of Lesvos, charting the rise of automated and algorithmic decision-making imposed on vulnerable communities fleeing persecution.

Why It Matters

Data collection practices often rely on active and informed consent – but how can people in precarious situations genuinely understand what is happening with their biometric information, how it will be stored, and who it will be shared with?

Lawyer and anthropologist Dr. Petra Molnar has been researching the growing uses of technology, data and automation in immigration. (Romi Levine/Utoronto.ca)

As the use of AI and automation grows at borders and in immigration and humanitarian aid programs, Canada and the world need to ensure that vulnerable people’s needs are taken into account, says an expert.

Dr. Petra Molnar is a lawyer and anthropologist whose research specializes in the use of technology at borders and in migration contexts. Her new book, The Walls Have Eyes, charts the rise of artificial intelligence and data collection in migration contexts, and the private, governmental, and NGO players involved in embedding these technologies at active borders. 

How can vulnerable and undocumented migrants meaningfully give active consent to having their irises and fingerprints scanned, Molnar asks, when refusing could mean being denied access to critical aid?

“The use of these technologies allows states to decide not only who lives and who dies, but also […] which groups of people fundamentally deserve a chance at life and which should be disproportionately exposed to vulnerability, violence and premature death,” she writes. 

Canada leads in AI and automation for immigration 

Canada is celebrated as one of the most immigration-friendly economies in the world. In 2022, the country met its target of welcoming more than 430,000 new permanent residents, and the goal for 2024 sits at another 450,000.

“Newcomers enrich our communities, and contribute to our economies by working, creating jobs and supporting local businesses,” said a press release by Immigration, Refugees and Citizenship Canada (IRCC). 

Molnar’s research spans continents, from Israel and Palestine to the U.S.-Mexico border. As the associate director of the Refugee Law Lab at York University in Toronto, she initially observed how Canada manages its borders and how it has become “one of the leaders of introducing AI and automated decision-making into various facets of its immigration system, including visa-triaging algorithms, facial recognition at borders, and other projects.” 

However, in The Walls Have Eyes, Molnar cites a disturbing data violation case in which the Canada Border Services Agency used a provincial facial recognition database to identify an individual stopped for a traffic violation and reopen a deportation case against him.

The lawyers representing the refugee disputed the case, arguing that the images showed two different Bangladeshi men.

From the moment an individual or family chooses to apply for resettlement in Canada to when that resettlement status is made official—whether they are in the country or outside it—technology, data, and automation form a large part of immigrants’ experiences. 

Given the significant number of applications IRCC receives, the department is beginning to experiment with “automation and advanced data analytics.” IRCC clearly states that the technology is used to “support, assist and inform” decision-makers, and “employees remain central.” 

The department has said that the system is not currently set up to refuse, or recommend refusing, specific applications.

While the overall number of planned permanent resident admissions has increased between 2022 and 2024, the proportion of admissions awarded to refugees, protected people and those with humanitarian and compassionate needs has decreased.

In this context, having access to the appropriate technology to complete the required forms could determine whether or not a refugee family ultimately has access to a safe haven.

Below is a snapshot of Future of Good’s Q&A with Dr. Molnar. The Q&A has been edited for readability.

Q: What led you to research technology in migration contexts?

A: I am not a technologist by training, or even by interest – six years ago, I barely knew what an algorithm was. I was working as a refugee lawyer and an anthropologist on human rights issues, violence against women and immigration detention. 

I started looking at the interplay between technology and migration by partnering with an amazing group in Canada called The Citizen Lab.

We ended up writing a report in 2018 called Bots at the Gate, which was one of the first reports analyzing automated decision-making in immigration, from a human rights perspective. Nobody was more surprised than me about how much attention that report got.

Over the years, comparing the Canadian situation with Europe, the U.S.-Mexico border and other parts of the world that I’ve worked in has been paramount to stitching together this global story. 

Q: A theme that runs through the book is the fact that people on the move do not seem to have access to human rights, and are often at the forefront of experimentation when it comes to [new] technology. What were people on the move feeling about the technologies and data collection mechanisms they were interacting with, and had no option to step around?

A: This question highlights the difference between rights on paper and rights in practice. There is a big difference between the fact that we have a right to privacy or a right to seek asylum, and the way these issues play out on the ground. 

What you’re touching on about informed consent is pertinent to talk about here: much of this technology occurs in spaces where there is a huge power differential between the actors developing and deploying the technology, and the community on whom this technology is applied and, in some instances, tested. 

A specific example of biometric technology comes to mind: iris scanning and fingerprinting are making their way into refugee camps. Often this is done under the guise of efficiency or streamlining services, but when you talk to people who are on the receiving end of the technology, some troubling themes emerge. 

People that I have spoken with reflect on feeling really dehumanized, or being reduced to a fingerprint or an eye scan – in a system that is already predicated on huge power differentials. 

Can you actually meaningfully say no, or is it a really complicated choice between having your irises scanned or not eating that week? That is not true and informed consent. 

Q: A large part of your book focuses on private sector actors who are profiting off these technologies. What, in your research, did you find was the role of the NGO and aid sectors? To what extent were they using these technologies?

A: Paying attention to the NGO sector is super important in understanding how these technologies are developed, deployed and even normalized. The majority of my work focuses on the private sector or the state, but always with an eye to other entities, too. 

We need more critique around the way that technologies are playing out in the humanitarian space, rather than a lean towards ‘technosolutionism.’ How many times have we seen ‘AI for good’ and ‘tech for good’ – but good for whom? Who gets to determine what we innovate on and why? 

There is a massive amount of money being poured into technologies for border enforcement, humanitarianism and biometrics in refugee camps. Let’s think about where this money would be better spent. 

Q: Did you get the sense that in refugee camps or active borders, technology was becoming increasingly critical to accessing safety and aid? 

A: I think the normalization of data collection is problematic, because it seems to be indiscriminate: there are not a lot of safeguards, and oftentimes, the public, researchers or journalists do not know what is happening or why. 

A particular issue happened a few years ago where the United Nations High Commissioner for Refugees (UNHCR) collected a bunch of super sensitive information from Rohingya refugees who were sheltering in Bangladesh, and inadvertently shared this information with Myanmar, the government that the refugees were fleeing from. 

How can something like this happen? Do we have sufficient human rights assessments and data impact assessments in place to prevent something like this?

Your question also gets at the fact that technology is part of our lives. Smartphones are a lifeline to many people on the move. That is something I saw context after context: one of the first things people want to do is call their loved ones. They need access to a phone, the internet, and a power supply. 

Q: What has to happen to help us reimagine these technologies? Where do we begin to dismantle, audit and rebuild?

A: Technology is a lens for seeing how power operates: who gets to innovate and why? What kinds of priorities matter? There is a reason we are developing tools like biometrics, robo-dogs at the border, and AI lie detectors, and not using AI to audit racist decision-making at the border. That is a clear, normative choice that a powerful set of actors is making. 

Who is around the table when we make decisions about tech? Should we maybe build different tables altogether? It comes down to centring the experience and expertise of people on the move.


  • Sharlene Gandhi is the Future of Good editorial fellow on digital transformation.

    Sharlene has been reporting on responsible business, environmental sustainability and technology in the UK and Canada since 2018. She has worked with various organizations during this time, including the Stanford Social Innovation Review, the Pentland Centre for Sustainability in Business at Lancaster University, AIGA Eye on Design, Social Enterprise UK and Nature is a Human Right. Sharlene moved to Toronto in early 2023 to join the Future of Good team, where she has been reporting at the intersections of technology, data and social purpose work. Her reporting has spanned several subject areas, including AI policy, cybersecurity, ethical data collection, and technology partnerships between the private, public and third sectors.