Your Undivided Attention - Why Are Migrants Becoming AI Test Subjects? With Petra Molnar
Episode Date: June 20, 2024

Climate change, political instability, hunger. These are just some of the forces behind an unprecedented refugee crisis that's expected to include over a billion people by 2050. In response to this growing crisis, wealthy governments like the US and the EU are employing novel AI and surveillance technologies to slow the influx of migrants at their borders. But will this rollout stop at the border?

In this episode, Tristan and Aza sit down with Petra Molnar to discuss how borders have become a proving ground for the sharpest edges of technology, and especially AI. Petra is an immigration lawyer and co-creator of the Migration and Technology Monitor. Her new book is "The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence."

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence — Petra's newly published book on the rollout of high-risk tech at the border.
Bots at the Gate — A report co-authored by Petra about Canada's use of AI technology in their immigration process.
Technological Testing Grounds — A report authored by Petra about the use of experimental technology in EU border enforcement.
Startup Pitched Tasing Migrants from Drones, Video Reveals — An article from The Intercept, containing the demo for Brinc's taser drone pilot program.
The UNHCR — Information about the global refugee crisis from the UN.

RECOMMENDED YUA EPISODES
War is a Laboratory for AI with Paul Scharre
No One is Immune to AI Harms with Dr. Joy Buolamwini
Can We Govern AI? With Marietje Schaake

CLARIFICATION: The iBorderCtrl project referenced in this episode was a pilot project that was discontinued in 2019.
Transcript
Hey everyone, it's Tristan.
This is Aza.
Welcome to Your Undivided Attention.
The massive wave of refugee migration happening around the world,
it's only going to grow over the next few decades.
Climate change, growing political instability, hunger,
these are some of the forces behind an unprecedented refugee crisis
that's expected to include over a billion people by 2050.
It used to be it was Mexico and Central America,
and those people still come.
But there are people from various countries in Africa,
people from Bangladesh, from India.
The journeys that these refugees will take
are taking, right now, are extremely perilous.
Terrible disaster at sea of fishing boat
sinking with hundreds on board,
hundreds of migrants missing.
It's believed some 700 people were on board,
the boat sinking off the coast of Greece.
And after all of that,
most refugee journeys today end at a closed border.
or detention center or refugee camp.
Many governments are turning to novel technologies
to slow the crisis at their borders,
technologies that use AI.
Fifteen of these cameras will monitor the entire 200-kilometer length
of the Greek-Turkish border by the end of the year.
They can detect people hiding behind trees and bushes
and they're supplemented by radar,
unmanned aerial vehicles and ground sensors.
This ground drone known as a robodog
could soon be patrolling our nation's southwest border.
The federal government partnering with Ghost Robotics,
testing out robodogs to potentially assist Customs and Border Protection.
And the sharpest edge of these AI technologies can be found at the U.S.-Mexico border.
Yeah, I've been to the Sonoran Desert at the U.S.-Mexico border.
And what I've noticed again and again across all sorts of borders around the world
is that border practices are sharpened through the use of surveillance,
AI, automated decision-making, and projects like that.
And the U.S.-Mexico frontier is a unique context
because a lot of this technology is visible.
That's Petra Molnar.
She's an immigration lawyer and author of the new book,
The Walls Have Eyes, on the rollout of high-risk technology
at borders around the world.
And the stated goal of this rollout is both to secure national borders
and make the immigration process easier and more humane.
but she argues the result so far has been anything but
So Petra, we're really excited to have you on Your Undivided Attention, and welcome to the show.
Thank you so much for having me. It's a real pleasure to be here.
So I figure maybe the place to start: in your book, you make this argument that borders
take on a kind of laboratory role for surveillance technologies,
that there's a kind of legal gray area. I know you were just in the Sonoran Desert
at the U.S. southern border, and I'd love for you to
take us there, into your eyes, into your feet, into your ears. What did you see and hear out
there? Like, set the scene. So you can rent a car, or in my case, I worked closely with a search
and rescue group, and we went inside the desert to drive around, see a lot of this infrastructure
firsthand. For example, there are AI towers, essentially fixed towers that you can drive to
and see them firsthand, and they dot the Sonoran Desert and kind of create this surveillance
dragnet that ensnares people on the move. And you can also go to the wall, the infamous wall that's
dominated our news cycle for many years now. And it's a beast to behold. I mean, it's there. It cuts
across the land. It's rusty, and you can touch it, and the rust will stay on your hand for the
rest of the day, kind of like a reminder of its physicality and its violence. But one of my favorite
kind of memories is that it also just kind of ends. There's an end point to the wall and you can
climb up this little hill and kind of pop your head over to the other side.
Great reminder that borders are also socially constructed and arbitrary.
But when it comes to the technology, it's really about these experimental projects that are
being tested out on people that would just not happen in other spaces.
And sometimes what helps me is to think about it as if we're tracing the journey of somebody
who is on the move.
So there's a class of technologies that you might be interacting with before you even cross a border.
You might still be in your home country, and already you might be, you know, at risk of, for example, having your social media swept by a state and different data extracted from your kind of online profile and online activity.
Governments are already extracting data from people on the move before they even move.
There's all sorts of things that happen if you find yourself in a situation of a humanitarian emergency, perhaps in a refugee camp, that's where we're seeing a lot of biometric data being collected.
So things like iris scanning, fingerprints, and other types of data like that.
And also carceral-type technologies, too, you know, things that track people's movements, different types of surveillance.
And then as we move closer to the border, that's where we see a lot of the kind of sharp edges of border surveillance.
Yeah, could you give us just a tour for listeners of some of the other technologies?
And you mentioned robodogs.
You mentioned these AI surveillance towers.
I think in one of the more surreal moments of my career, and I've had many,
in early 2022, the Department of Homeland Security announced
that they're going to be rolling out robodogs in the Sonoran Desert.
So quadruped machines that you would see on something like a sci-fi show,
like Black Mirror, are now going to be joining this kind of global arsenal of technologies.
So it really is important to pay attention to what's happening right on the ground.
So the robodogs, drones, different blimps, different types of surveillance cameras,
and even really kind of strange sci-fi type projects like, you know, sound cannons that are being
rolled out to prevent people from even getting close to the border, and even AI lie detectors
that the European Union has been testing out, for example.
And then perhaps the last category is also technology that you might face once you've already
arrived in a country.
So things like voice printing for refugee status determinations where there will be predictions
made about your country of origin based on your accent.
for example, visa algorithms that triage different applications and things like that.
So it really is this kind of group of technologies that impacts movement at every single point of a person's journey.
So presumably this tech is meant to make the border more secure, making it easier and less painful or sort of costly to process the surge in the number of migrations that are happening, the number of refugees moving around the world.
What have government said about why they're making these massive investments and is that working?
So I think to this question, we can look at it two ways.
And I want to bring it back to the U.S. case in particular.
The U.S. government has been for years now saying that we need smart borders, we need new surveillance,
to deter people from coming and to strengthen the border and to kind of create a more functional border enforcement system.
But the thing is deterrence doesn't work.
This is something that those of us who've been studying and working in migration for years have seen time and again.
There are real reasons why people are forced to flee, and people don't stop coming.
Instead, what happens, and this has been documented at the U.S.-Mexico border by colleagues,
such as Jeffrey Boyce and Samuel Chambers, they've been doing really amazing quantitative analysis,
counting the numbers of deaths, and they have nearly tripled after the introduction of smart border technology.
So there is, I think, a failure in understanding human behavior
and the kind of push-and-pull factors of migration that are inherent in why people make
the choices they do. I think we sometimes forget how desperate people really are to seek safety.
And the fact that they would put themselves at such risk really shows that there are real
reasons why they are moving, right? And the other side to this is governments oftentimes say,
well, you know, we need to make our immigration system more efficient, more fair, and therefore
we need these technologies. I mean, I'm someone who used to practice refugee law, right? I've seen
this system firsthand. I know it's not efficient. I know it's very slow.
But the answer to a broken system isn't a Band-Aid solution that oftentimes also doesn't work.
A lot of this technology doesn't work. It's highly discriminatory.
Sometimes it happens behind closed doors where there's very little public scrutiny,
either by journalists or human rights defenders.
And so the fact that this is being introduced at such an alarming fast rate
is something that I think we all need to pay attention to.
Technology often moves to where law isn't, where law hasn't caught up.
It's a kind of law arbitrage.
And I think what you're pointing at is that borders are a gray area.
You call them a human laboratory for high-risk experiments.
You also use this term the border industrial complex.
Why are borders being used as a place for deploying sort of the latest in surveillance tech?
So I think this is where it's helpful to go back to the context, even before any technology is introduced, because border and migration and refugee
decision-making, it's already opaque and discretionary.
That's the way the law works, the way policies work, and people's experiences show this, right?
You can have two officers have the exact same set of evidence before them
and render two completely different, yet equally legally valid determinations.
That's how it works, right?
Many of us have had very problematic experiences in airports at the border,
and oftentimes you don't know why.
You might suspect why, but there's so much gray area and discretion here.
We also know that a lot of decision-making is predicated on really problematic assumptions
about people's behavior, about race, about religion, about all sorts of determinations
and biases that an officer might hold.
So if that's the ecosystem that we are already dealing with and the vast power
differentials, too, between those who make the decisions and those of us who are kind
of at the sharpest edges, it then creates this, again, this perfect laboratory for when you
start introducing technology to either augment or replace human decision-making altogether.
And again, instead of kind of slowing down and having some of these societal conversations,
what's been really tricky and difficult to observe is how it then becomes a lucrative border
industrial complex. That's a term that many colleagues have used, like my colleague, Todd Miller,
he's a journalist in Arizona. He's been writing about the border industrial complex for many
years documenting just the vast amounts of money that is to be made at the border. And you really see
this when you start interacting with some of the private sector actors that are kind of in the
epicenter of power. I've had the chance to go to some of those conferences, right, like the
World Border Security Congress and others, where you see private sector actors selling all sorts
of wares, you know, you see the tanks, you see the drones, the robodogs, and states and
governments are just kind of lapping that up.
It strikes me that if I was a VC, I'd be looking at the UN's estimate that by 2050,
there'd be 1.2 billion climate refugees and saying, oh, that's a great market.
There's a huge incentive for me to invest.
This border ecosystem is just going to grow.
And so I'm curious right now, like, what are the actual, the companies that are in there?
Let's dive a little bit into this, like, very perverse incentive.
For sure. I mean, it's kind of mind-boggling the number of companies that are involved. And some might be familiar to listeners. It's the big players like Palantir, Clearview AI, Airbus, Thales, actors like that. But what I found particularly disturbing is some of the small and medium-sized companies that kind of sneak under the radar, pun fully intended, you know, and are able to present these projects in a way that are seen as inevitable.
There's one company in particular that comes to mind when you asked your question,
and it was a company that was started by a 27-year-old tech bro in Silicon Valley, if I can put it this way.
And he thought it would be a good idea to put a taser on a drone
and then have this drone fly around the U.S.-Mexico corridor,
picking out people and tasing them and waiting for the Border Patrol to come.
Here's a clip from the demo video the company created when they were trying to market this.
When the drone detects an intruder, control the drone is shifted to a human operator
at a nearby border control office.
Then the controller pilots the aircraft down and interrogates the suspicious person.
Luckily, this didn't get rolled out in real life, but he got VC money for this.
How can that happen?
How is that even possible, right?
Well, because it's lucrative and there are very few regulations that would prevent a company
like this from operating.
Do you want to talk about why there's more latitude for trying on more of these kinds of things here versus Silicon Valley not being able to deploy robodogs in the streets of San Francisco or L.A.?
You know, I think a lot of this has to do with just the fact that we really are dealing with differences in lived experience.
And I know that might sound a bit simplistic, but I do think it has a lot to do with the fact that the people who are thinking about and innovating and developing a lot of this technology,
oftentimes don't interact with the communities that it's hurting.
You know, I sometimes when I talk to the private sector,
I like to go around the room and ask, you know, who's an engineer?
A couple of hands go up.
Who likes to code? A couple more hands go up.
Who's ever been in a refugee camp?
Not many hands.
And I mean, I'm simplifying it a bit because, of course,
there's diversity in the private sector space too,
but what we're seeing is it breaks kind of along this power differential again.
And when you don't regularly interact with people who are on the move,
who might be refugees, or let's also broaden it out,
who maybe have been victims of predictive policing
or who have had to use an algorithm
to see if they would be eligible for welfare.
Again, it just becomes kind of divorced
from the real-life applicability
of what is being done in the private sector
for the sake of innovation and all of that.
And I do think, you know, I mean,
there are clearly projects and companies
that are being weaponized against people on the move
and people who are marginalized or in war and things like that,
like the company we were talking about with the drone and taser.
But I do think some of it is about just lived experience
and not having those kind of connections
and seeing what actually is happening beyond just the development
and innovation phase.
And we even see that kind of in these framings
that are so common now, right?
AI for good, tech for good, but good for who?
Yeah, I think I read in your book
that the industry is projected to
total $68 billion by 2025.
Do you want to give any other numbers about just the size of the fortunes that could be made here?
I think the fact that this is just a huge growth industry is important for people to get.
Yeah, definitely.
I mean, we've been seeing kind of this exponential rise of the border industrial complex,
and $68 billion is, I think, just scratching the surface.
Because we're, again, talking about a very lucrative market where not only do we have to pay attention
to the kind of border enforcement companies, but also military companies.
who are now making incursions into that space.
And so militarization of borders and using military-type technology, like robodogs and all sorts
of different things that are also making their way there, is also, again, kind of inflating
these numbers.
And the rise, not only just in the U.S., but also at the EU level and the international
level, as we are seeing projected numbers of migration rise in the coming decades.
That's kind of inevitable at this point, I think.
One of the areas I think our listeners, and certainly I am really interested to know more about,
is the diffusion process of this technology, that I could hear people maybe in the back of their minds,
thinking like, okay, well, this is at the border.
Like those, it's to protect us from, you know, the other, it's not really about people, you know,
inside the U.S.
It's not going to get to us.
So I really want to know about, like, how does the technology diffuse?
Like, what's the path?
What are warning signs, if at all, of it going from the border to a broader society?
Where have you seen that happen?
I think people seeing that path, if there is that path, is really important for understanding
why we might want to get ahead of it now.
Yeah, for sure.
And when I get asked this question, I always think about how best to answer it,
because I do think it's important to keep the kind of context specific to the border sometimes
because it is this kind of high-risk laboratory that really impacts vulnerable people.
But at the end of the day, it doesn't just stop at the border.
And that's a trend that I've been noticing the last few years for sure.
So if we go back to the Robo Dogs that were announced by the Department of Homeland Security
for border purposes in 2022, just last year, I think it was May, the New York City Police
Department proudly unveiled that they're going to be rolling out robodogs on the streets
of New York City.
And one was even painted with black spots on it like a Dalmatian.
So again, very proud of its kind of like surveillance, tech focus.
And I should say the robodogs were piloted before in New York and in Honolulu during the COVID-19 pandemic for surveillance on the streets.
And then, after public outcry, surprise, surprise, were pulled.
So again, the border tech stuff doesn't just stay at the border, but it then starts proliferating into other spaces of public life.
And, you know, we've seen similar technology like drones and different types of cell phone tracking, be deployed against protesters.
and even things like sports stadium surveillance.
There's some work being done in the European Union
on some of the technologies that are deployed for border enforcement
and for criminal justice purposes
also then being turned on people who are enjoying a football game
or a soccer game, for example.
I think that's the interesting thing with tech, right?
It might be developed for one thing
and then repurposed for a second purpose
and sold to a third purpose.
And it just kind of flows in these ways
that are difficult but important to
track. Yeah, there's sort of a version of build it, they will come. It's like, build it and it
will be used. You know, one of the other things we picked up from your book is you talked about a
policy I'd never heard of called CODIS, which you say moved the U.S. closer towards
construction of a discriminatory genetic panopticon, a kind of dystopian tool of genetic
surveillance that could potentially encompass everyone within the United States, including
ordinary citizens when they've not been convicted or even suspected of criminal conduct. Can you talk a
bit more about that? Yeah, that's the other kind of element of this dystopia, the fact that,
you know, your body becomes a border in a way, not only just with biometrics, but also with
DNA collection. And there's been different pilot projects kind of rolled out over the years.
Again, how is that possible, right? Like, have we agreed to this as people who are crossing borders?
The fact that states are now considering collecting DNA for border enforcement is very dystopic.
because I think that's ultimately what it is about,
the fact that each of these incursions is moving
the so-called Overton window further and further.
You know, we're talking, first it's biometrics,
then it's robodogs, then it's DNA.
What is it going to be next, right?
And I don't mean to just fearmonger
or kind of future predict or anything.
This is based on years of work
across different borders and seeing the appetite
for a level of technological incursion
that I don't think is going to
stop anytime soon.
Where have there been examples in the world where things have gone the other way around,
where it's not just a temporary public outcry and robodogs get taken back, but
like something really significant has happened where a surveillance technology from the border
gets rolled back because it really doesn't fit a country's values?
I think my silence is indicating that it's hard to think of a kind of a positive like that.
But I will say, we are catching this at a really crucial moment
because there are conversations about, well, how do we regulate some of this?
Like, do we put some red lines under some of this technology?
And there were some really, really inspiring conversations being had at the European Union level,
for example, because it went through this really long protracted process
of putting together an AI act, basically, the first regional attempt to regulate AI.
And even though in the end it didn't go as far as I think it should on border technologies,
there were conversations about, for example, a ban on predictive analytics used for border
interdictions or pushback operations or using individualized risk assessments and things like that.
I think traction on these issues can be gained by kind of extrapolating from the border
and making citizens also worry about biometric mass surveillance and surveillance in public
space and things like that and finding kind of moments of solidarity among different groups
that are equally impacted by this.
And that is where the conversation seems to be moving,
kind of less from now we're fact-finding
and showing all these kind of egregious human rights abuses,
which are still happening.
But like, what do we then do about it together collectively?
It seems like one of the ways to motivate public action
to regulate this is to show how, you know,
what starts at the
border to deal with, quote unquote, the other and the immigration that's coming into the country,
then later can get turned around to be used on our own citizens. And in your book, you actually
talk about how the global push to strengthen borders has gone hand in hand with a rise
in far-right politics to root out the other. And you talk about examples of far-right governments
who turn around and use the same technology tested at their border on their own citizens
to start strengthening their regime. And you give examples, I think, in Kenya, Israel, Greece.
Could you just elaborate on some of these examples? Because I think if people know where this goes,
then it motivates, how do we get ahead of this more?
Yeah, I think it's important, yeah, to bring it back to political contexts
because all around the world we're seeing the rise of anti-migrant, far-right groups
and parties making incursions into, you know, the political space,
sometimes in small ways and sometimes in major ways.
And, you know, I think it's an open question, what's going to happen in the United States
this year, right, with the election that you guys have coming up.
What I've seen, for example, in Greece is that parties that are very anti-immigration
normalize the need to bring in surveillance technology at the border and test it out
in refugee camps, for example, and then say, okay, well, we're going to be using similar things
by the police on the streets of Athens, for example. You know, in Kenya, similar things with
normalization of just kind of data extraction for the purposes of digital ID are then used and weaponized
against groups that already face marginalizations like Somali Kenyans, Nubian community, and
smaller groups like that. So again, I
think the fact that there is this kind of global turn to the right and more of a fear-based kind
of response to migration motivates more technology. And you again see this kind of in the
incursion of the private sector, kind of normalizing some of these really sharp interventions
and say, oh, well, you know what, we have your solution here. You are worried about migration
and the other. Let's bring in this project. And then, oh, lo and behold, you can actually
use it on, you know, protesters that you don't like or sports stadium fans who are
too rowdy and groups like that as well. Okay, so we just talked about Kenya and Greece in the
context of other governments, but what about Israel? What's their role in all this? Are they using
these technologies at their borders? Yeah, for sure. I mean, Israel is definitely a nucleus in everything
that we're talking about today. And I also felt compelled to go to the occupied West Bank for the book
because it's really the epicenter of so much of the technology that is then exported for border
enforcement in the EU and at the U.S.-Mexico border, right? But what is really troubling in how
Israel has been developing and deploying technology is that Palestine has become the ultimate
testing ground, a laboratory, if you will. Surveillance technology is tested on Palestinians,
both in the West Bank and in the Gaza Strip and then sold to governments around the world
for border enforcement. And all of these projects that are normalized in these situations,
then can get exported out into other jurisdictions.
One of the things you mentioned early in this interview was the EU using AI lie detections.
What's that about?
That seems one sci-fi and two, surprising to me that the EU would be doing something like that.
Yeah, that is a project that keeps me up at night for a variety of reasons.
I mean, so basically, here I'm talking about the iBorderCtrl project, and the name is fascinating.
It's spelled iBorderCtrl.
So you have, like, your iPhone, your iPad, and now your iBorderCtrl.
It was funded by the Horizon 2020 research scheme.
So this is a big pan-European research scheme that gives money to companies and sometimes consortia of universities as well.
This is important.
I'll get to that in a bit.
To do all sorts of projects.
and they decided to try and create a lie detector
that would be able to discern
whether somebody was more likely
than not to tell the truth based on face recognition
or micro-expression analysis.
So if you just even pause there,
and I'm going to put my refugee lawyer hat on,
like, how can an AI lie detector
deal with differences in cross-cultural communication, for example?
I mean, I've worked with people
who wouldn't make eye contact with a judge
or a decision-maker of the opposite gender,
because of religion, because of their experiences,
or maybe because they were nervous?
Or even more subtly, what about the impact of trauma on memory
and the fact that we don't tell stories in a linear way anyway?
We already know that humans struggle with this, right?
There's so many problematic assumptions about human behavior
that judges make, that immigration officers make.
Oh, somebody's too shifty and not looking me in the eye,
that must mean they're lying.
So if we know that humans struggle with this,
what are we going to do when we start having so-called AI lie detectors, right?
And from a legal perspective, you know, the traditional ones that kind of like go back and forth
and show the line that you see on like CSI, those are not even admissible in a court of law
as evidence in a lot of jurisdictions.
And what's interesting too, again, the politics that we were talking about really play into
this because the three countries where this was going to be piloted were Latvia, that one's
a little bit of a question mark, but then it was Hungary and Greece,
two major interdiction points for people on the move who are arriving.
And this project, again, shows so much.
It's like this microcosm, I think, of a lot of us who are concerned about these projects.
Not only was this funded as a pilot project, but then it got some public outcry.
There was a big Intercept piece on it where journalists kind of played around with it
and realized, well, it actually doesn't even work, right?
A lot of this is snake oil technology.
That's an important part here.
A lot of it is just kind of said to be AI, but it actually doesn't do what it's supposed to
do. But the fact, again, that this was going to be rolled out and out there is very, very
troubling. Because then it also sets off other projects. There's another one called Avatar that
was going to be using similar technology. And who knows what else is in the works, right?
You talk in your book about how states on the fringes of the EU, like Greece, come to function
as Europe's shield, doing Brussels' dirty work and further driving the demand for securitization
and surveillance. Why is it that these poorer countries have an economic incentive to tap into the
border industrial complex?
Greece is such a fascinating example here, and I should say, you know, I hold Greece close to my
heart because I ended up living there for almost three years. I went for two months for
research, and then I stayed. It's one of those places, but it's so full of complexity, and
it's a really strange actor in all of this, because, of course, as a country that's not been doing
very well economically, it stands to make quite a bit of money by being the border
enforcer of the EU. And when you go to EU policy documents, you see it referred to as Europe's
shield, as, you know, the kind of protector of the European way of life. And as a result, you know,
as a thank you, perhaps it gets large amounts of money from the European Union to build these
very high-tech refugee camps, for example, on the five islands that are sometimes known as the
hotspots, right? Maybe you remember from watching the news in 2015, 2016, when a lot of Syrian
refugees were coming to Greece, this is where they would be housed. Well, now they're housed in high-tech
surveillance-heavy refugee camps that are essentially open-air prisons. But again, the incentive for a
country like Greece, but also Spain and Italy and some of the frontier nations, is that they can
get money and also political support at the regional level, at the EU, for doing some of the work
that Brussels and Germany are, you know, really concerned about. It also kind of launders the responsibility,
right? Because Europe wants to be seen as a human rights leader and a leader in technology
and a leader that respects the rights of people on the move. But actually the reality on the ground
is anything but. Petra, could you tell this story of how surveillance can go wrong? You mentioned
EU border agents who didn't intervene to save a sinking ship because of an error in the
surveillance. Could you tell us about that?
So this is a case, I think, yeah, that highlights how surveillance can either inadvertently
or purposely be weaponized, leading to loss of life.
So there have been many, many cases of ships sinking in the Mediterranean or the Aegean Sea,
the waters that separate the African continent from Europe,
where people have been crossing for years.
And entities like Frontex, the European Union's border force,
or even state entities like the Hellenic Coast Guard
use all sorts of different surveillance for so-called search and rescue operations.
But in one case, I think that you're referring to, the surveillance apparently only showed one person on a boat that was sinking.
And so, you know, the authorities decided not to act.
And in fact, 60 people drowned.
I have a friend who works in humanitarian aid and refugee camps.
And he actually was telling me about how these AI surveillance technologies can get turned towards humanitarian workers to disincentivize them from helping refugees who are trying to come across.
Do you have any examples of that that you want to share with our listeners?
Yeah, this technology is also weaponized against search and rescue groups or organizations or individuals.
And, you know, we've seen examples of search and rescuers like Sarah Mardini, for example, and Seán Binder in the EU, who were divers rescuing people from sinking ships, be criminalized as a result of this.
So if you can see, you know, on your surveillance screen that a search and rescue boat is coming, a non-governmental one, right?
Because oftentimes people step in from either the humanitarian sector or the civil society sector
to prevent people from dying at the border.
You can then say, oh, well, you were actually implicated in this activity
and we're going to charge you with, in their case, you know, human smuggling
and facilitation of trafficking and things like that.
Similar cases have happened in Arizona, of course, with humanitarians there
being charged with facilitation of illegal entry and charges like that.
Some of them have been dropped, but again, it creates a chilling effect on people
wanting to help when they see somebody in distress at the border.
So let's zoom out for a second.
You're talking here about essentially the incentives for countries like Greece to invest in technology solutions.
Tristan has, I think, a really interesting sort of frame.
And that frame is: resources and attention always go to the edges of the arms race.
That is, why is it that we don't put money into retraining of existing border guards?
Why don't we put money into rehabilitating, say, like, those who are homeless in the U.S.?
Well, it's because that's not where the arms race currently is.
The arms race is at the edge of technology.
That's where you get power if you put your money.
And so it sort of predicts why it is that we're always spending money to make new solutions
and not going back and patching like the center,
which is sort of falling apart.
And that really gets us to this sort of next collision,
which is the collision of mass data collection
and surveillance capitalism.
You write in your book about how both companies and governments
always make this argument that more data is better,
but we also know from history that isn't true.
So I'd love for you to talk more about data collection
as a political exercise, and about
this intersection of the border and surveillance capitalism.
Data is really at the center of all of this.
And I always think about what Mariam Jamal,
a young Somali-Kenyan activist I worked with in Kenya, says.
She says, data is the new oil.
And that is something that we're seeing across all these different contexts.
Because that's the underpinning for so much of the surveillance
that happens around the world, not just at the border,
but people have been writing about, again,
this kind of surveillance industrial complex as well, right?
And again, the datafication of migration is something that, you know, we see at the private sector level as well as the public sector level, but also at the UN, for example, and even in research, right?
Like, we seem to think that more data is always better, which, you know, is true in some cases, but at what cost, right?
And it just also makes me think about, you know, who has the ability to opt out from being a data subject?
Some people who are at the sharpest edges just simply are not able to, right?
Their data is either extracted or, you know, they might not even know that they are a data subject, right, when they're crossing borders or engaging in different types of projects.
Whereas you have people who are perhaps some of the most powerful folks on the planet saying, oh, you know, I don't want my children on social media or, oh, they're not going to have a smartphone until they're 16.
So again, it breaks along these lines of power, because it is always kind of predicated on, well, which are the groups that are kind of the testing grounds, the guinea pigs, for data extraction.
And it's people who have historically been made marginalized,
whether it's people on the move,
whether it's people in the criminal justice sector, right?
People on welfare, people in the child welfare system.
It's really troubling how normalized data collection really is.
But I think one of the reasons why is because it seems like it's just so inevitable, right?
We all have smartphones, we all have apps for every single thing,
from our dentist to our Uber driver, right, to food delivery.
Everything has become about data collection.
And that is a troubling kind of starting point.
And of course, the more data you have, the better you understand a population,
the better you can protect and serve them.
At the same time, the better you can exploit them.
And, you know, of course, there are the famous historical examples of Nazi Germany
strategically collecting vast amounts of data on Jewish communities,
which facilitated the Holocaust (they worked with IBM on that, of course),
or the Tutsi registries based on ethnic identity cards,
which facilitated the Rwanda genocide in 1994.
And I think people might be surprised to know that the UN itself
has several of these kinds of databases
that combine surveillance tech and biometrics
to monitor populations on the move.
And that's sometimes gone really wrong.
Like I remember from your book,
there is a story about how it went really wrong with the Rohingya.
And so I'd love for you to tell us a little bit more
about the surprising
role the UN might be playing here? Yeah, the UN and other types of international organizations
are a key player in the kind of ecosystem of power and innovation and border tech, because they're
really powerful actors. They set the agenda and the norms around, again, kind of what we see as
innovation and why we should be collecting more data. That's kind of a given. If you go to a lot of
UN policy documents and different pronouncements, data is in there, right? We need more data. We need to
collect more information. But what's tricky from a legal perspective and a governance perspective
is that international organizations are this kind of third space, right? They're not a private
sector company and they're not a state. So how do you regulate them? What kind of governance
mechanisms exist? Oftentimes it's kind of an in-house, you know, ethics statement, for example,
on biometric data collection. This all sounds really kind of up there and theoretical. So
I'll bring it to the ground with the example you mentioned of Rohingya refugees.
So Rohingya refugees have been escaping Myanmar for many years now
and finding shelter or refuge in neighboring Bangladesh.
The UN is active in Bangladesh and has been collecting data from refugees there
and it came out a couple years ago that they inadvertently took this collected data
and shared it with the Myanmar government, the very government that the refugees were fleeing from.
Now, if we are assuming that this is an accident, it's a pretty
big one, because it's extremely sensitive personal information that often was also collected
in situations that were maybe not totally driven by consent, right? Because the power dynamics
are there. Can a refugee really opt out of data collection if they're in a camp that's administered
by the United Nations, right? It's very different from you and I going into a grocery store and
saying, oh, no, you know, I don't want to participate in the survey or I don't want to give you
my information, because we can just go home. That's not the dynamic, right, that people on the
move face.
And the fact that the United Nations High Commissioner for Refugees made such a big mistake is very telling.
Because what's happening with other actors in the space?
What kind of data retention practices do they have?
Or data sharing practices too?
You know, what is really happening?
So much of it, again, happens in that kind of murky, opaque area that's very difficult to penetrate by journalists, by lawyers, by human rights monitors,
because so much is done by third-party actors like international organizations that don't have to report in the same way that states
do, or even the private sector does.
You know, if I'm a country, I would say there's going to be some kind of border arbitrage
that if I don't beef up my border, then refugees are going to be turned away from countries
that have a beefier border, and they're just going to flow to me.
So I really don't have a choice anyway.
So I guess what would you say to them, and really what I'm looking for, is this is not a kind of
binary answer. There's not a yes or no. This is like a way that we move. This is a verb. This is an
adaptive process. And from your vantage point, what would be a better adaptive process? How would we be
doing the process of deciding what is okay and what's not okay in a better way, acknowledging
how fast everything is moving? You know, maybe this seems again too simplistic, but it's about
talking directly with people who are impacted by the technology and who are hurt by it, because it is
now, I would say, pretty robustly documented that border technology infringes on all sorts of
human rights and has potentially even led to loss of life, right, at the U.S.-Mexico border.
So if we know all of that, and then we also see this kind of race to the bottom that states
are engaged in, I think that's absolutely right, of beefing up their borders, like really
strengthening border security and not wanting to be the country that says, oh, well, okay, my doors are
open. You're like, what are the incentives there? I think we need to have a conversation about some no-go
zones, frankly, when it comes to technology. I mean, we tried that with autonomous weapons,
and that still hasn't happened, right? And that's really like the sharpest edge of this
conversation. But yeah, what about robodogs? What about predictive analytics for border
enforcement? What about data that's collected as part of a DNA sample? Are we actually okay with
that as a society? And if not, then we really need to draw some red lines under this. A moratorium at the
very least, but a ban, actually, I think is definitely something that needs to be explored.
And I think it's a little short-sighted too, actually, on part of a lot of what sometimes we call
receiving countries. So countries like the United States or Canada or the EU that have been
historically in the last few decades, the receiving point for people on the move, it's very
short-sighted to see migrants and refugees as a threat because a lot of people contribute very highly
to countries that are their second home, right? And I think we've
lost sight of that. The fact that everything's now weaponized against this kind of specter of
migration is incredibly short-sighted. Because if we are going to be dealing with larger and larger
numbers of people on the move in the future, it's actually an opportunity to think about,
well, how do we uphold people's human rights? How do we actually function as a society that respects
human dignity and wants to be a functional place where people can thrive and raise their
children and contribute to local economies? That's really what we need
to be talking about here.
I'm just curious, what, if any, are the bright spots or the bright people that you would
point our attention to for where we get hints of like trailheads for hope?
Like we, you know, we're going to have more climate refugees.
We wish we could choose about this, but we can't.
There's going to be more people on the move.
And as you said, deterrence isn't going to work because it is a matter of life and death.
And so we need examples of, you know, integration working better, refugee camps working better
and ways that technology can play a positive and helpful
role in that. Are there any other examples that you want to mention? Sure. I mean, there's really
inspiring practices when it comes to, for example, education technology and making sure that children
in conflict zones are able to still learn when they're on the move or when they're in refugee
camps. There's all sorts of really interesting projects out there that are kind of bringing the
classroom to the child that's mobile. You know, other inspiring things that I can kind of think of
are just ways, again, that, for example, you know, journalists are thinking about telling different
stories and kind of focusing on technology as a way to level the playing field in the kind of
vast power differential that we're talking about. And that's something that we're trying to do at
the refugee law lab, which is kind of my academic hat. My colleague Sean Rehag, for example,
he's more on the kind of data science side, but he's looking at, for example, using big data
sets to crunch numbers and look at, for example, refugee decisions in Canada to create
information for refugee lawyers to be better informed on how a
particular judge might render their decision. Very, very helpful, because again, you're dealing with
attorneys who might not have the same level of resources as a government lawyer might. So there are
definitely bright spots when it comes to using technology as well to kind of meet in the middle
and work against the kind of differentials in power and privilege, and even the kind of norm-setting
around who gets to innovate and why. We really need to find ways to kind of talk to
each other more about this. Well, I think we have a lot of really important things for people to
be processing and holding. It's quite sad. It's really hard. I mean, when I really process this,
it's like there's just going to be so many people that are on the move over the next couple of
decades, and we don't get to choose about that. And if deterrence doesn't work, then they're
going to come to the places that they believe are going to help them. And then there are also
finite resources and there's finite ways of countries both wanting to do the best they can
to integrate and welcome those who they can and integrate them at the pace in which they can do
that and provide resources. And if I can just add, I think it's also about paying attention to
who's around the table to have these conversations and making sure that, again, those who are
impacted have a seat at the table or even build a new table altogether, right? Because when we
look at migration in particular, I think ultimately it's about
again, why do certain innovations matter over others? I mean, we could be using the same set of
resources to look at root causes of migration and maybe prevent people from coming in the first
place, not in the kind of border enforcement way, but in a way that reminds us, you know, that
almost every single person that I have had the privilege of working with over the last few years
has said, I don't want to be a refugee. I want to be home. I want to raise my children. I want to
live a life that we all want to live, right?
And yet, I think we've lost sight of that.
The fact that the vast majority of people who are on the move
are not doing it voluntarily, they're forced to do it.
So instead of, you know, developing robodogs and AI lie detectors
and all sorts of things like that,
why not use this money and these resources to help people stabilize the countries that they're in,
think about resource distribution,
and think about more long-term solutions.
Well, I think that might be a good place to end.
And Petra, thank you so much for coming on Your Undivided Attention.
This is certainly an issue that requires undivided attention.
Thank you so much for having me.
It was a real pleasure to be able to share a bit about my work.
Petra Molnar's book is The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.
Your Undivided Attention is produced by the Center for Humane Technology, a nonprofit working to
catalyze a humane future. Our senior producer is Julia Scott. Josh Lash is our researcher and
producer, and our executive producer is Sasha Fegan. Mixing on this episode by Jeff Sudaken,
original music by Ryan and Hayes Holiday. And a special thanks to the whole Center for Humane
Technology team for making this podcast possible. You can find show notes, transcripts, and much more
at humanetech.com. And if you like the podcast, we'd be grateful if you could rate it on Apple
Podcasts, because it helps other people find the show. And if you made it all the way here,
let me give one more thank you to you for giving us your undivided attention.
