Ideas - Artificial intelligence isn't a 'potential' danger — it’s already here!
Episode Date: May 8, 2025. The grave consequences of encoded discrimination in AI systems are happening now; it's not a future problem, warns MIT researcher Joy Buolamwini. She points to how evidence of racial bias, sex and gender bias, and ableism in artificial intelligence poses unprecedented threats to humankind. The founder of the Algorithmic Justice League, a movement to prevent AI harms, she has been at the forefront of AI research. She encourages experts and non-experts to join in the fight for "algorithmic justice." Her book, Unmasking AI: My Mission to Protect What Is Human in a World of Machines, uncovers the existential risks produced by Big Tech. As she says, "AI should be for the people and by the people, not just the privileged few."
Transcript
When a body is discovered 10 miles out to sea, it sparks a mind-blowing police investigation.
There's a man living in this address in the name of a deceased.
He's one of the most wanted men in the world.
This isn't really happening.
Officers are finding large sums of money.
It's a tale of murder, skullduggery and international intrigue.
So who really is he?
I'm Sam Mullins and this is Sea of Lies from CBC's Uncovered, available now.
This is a CBC Podcast.
Hi there, it's Nahlah here.
Thank you so much for listening to Ideas.
Before we start today's show, I need to ask you for a favor.
If you like Ideas, please hit the follow button on your app, whether you're using Apple,
Spotify or anything else.
We've got some really interesting episodes on the way that you'll want to hear on being
queer in Africa, on the flaw in facial recognition and on the history of human shields.
Okay, here's today's show.
Welcome to Ideas.
I'm Nahlah Ayed.
Imagine being accused of and arrested for a carjacking.
I don't know if I can say what I wanna say,
but what in the hell?
Seriously?
A crime you did not commit.
There was a knock at the door.
It was the police, six officers.
We have a warrant for arrest. Initially, I thought it was a joke.
It was no joke and it was all in front of her children.
They're like, my mom's going to jail. Mom, you're pregnant. Like, what is going on?
Like, they're scared. They're still scared.
You're eight months pregnant and you spent 11 hours sitting in jail. After
she was released on a $100,000 personal bond, she went straight to the hospital.
They had to give me fluids because I was dehydrated. My blood pressure was up. My
baby's heart rate was low. Now comes another cruel twist. You're sitting in that cell because a computer made a mistake.
Her attorney says Detroit police investigating a carjacking had run surveillance video from a gas
station through their facial recognition software and got a hit. Not only that, the people who
programmed the computer also made a mistake and the police department who used the flawed software made a mistake.
That's what happened to Porcha Woodruff in Detroit in 2023.
What's crazy to me about her story is, a few years earlier,
the same police department falsely arrested Robert Williams
in front of his two young daughters and his wife.
And so this isn't a case where we didn't know there were issues, right?
But it was willful negligence to continue to use systems that have been shown time and
time again to have all kinds of harmful biases.
And so these algorithms of discrimination persist.
That's Dr. Joy Buolamwini.
She's one of the people standing between you and an AI-fueled nightmare.
She also calls herself a poet of code.
Her research as a graduate student at MIT called out Microsoft, IBM, Amazon and other tech giants,
whose facial recognition systems failed to identify people of colour.
The worst results were related to darker skinned females.
Face detection wasn't really detecting my face until I put on a white mask.
Pull on the white mask,
the face of the white mask is detected.
Take it off, my dark skin face, the human face,
the actual face, not detected.
And so this is when I said,
hmm, what's going on here?
Dr. Joy, as she likes to be known, is the founder of the Algorithmic Justice League,
an organization that resists the threats of artificial intelligence in a world that seems
all too eager to adopt it everywhere.
She spoke in February 2025 at the Sanford School of Public Policy at Duke University.
Thank you.
Thank you.
Hello, hello, can you all hear me?
All good.
I'm going to share a little bit about my journey and wearing many different hats. So amateur model from time to time, author, academic.
But once upon a time, I was a student.
In the student hat,
I went to Georgia Tech enamored by
computer science and building robots.
That's really what got me started on
my path towards algorithmic justice.
And I was really into this notion
of showing compassion through computation.
What would it look like to use my tech skills
to do some kind of good in the world?
That was the motivation.
And I had many opportunities to do this at Georgia Tech.
When I was a Fulbright Fellow, I had an opportunity of working with Zambian youth and thinking
through different types of women's rights apps, cervical cancer apps, apps that really
resonated with what the youth there wanted to do.
I also had an opportunity to work with the Carter Center in Ethiopia, using mobile development tools
to help combat neglected tropical diseases and so forth.
But another hat I love to wear
is that of being an artist and being a poet.
And so that's why I really wanna start the talk here today
with a poem called,
"'AI, Ain't I a Woman?'
inspired by Sojourner Truth's 19th century speech in Akron, Ohio, actually
talking about the need for intersectionality in the women's rights movement. So who's ready for
some verses? We got some? Okay, knowing their lives have altered many destinies.
In her eyes I see my mother's poise.
In her face I glimpse my auntie's grace.
In this case of deja vu, a 19th century question comes into view.
In a time when Sojourner Truth asked, ain't I a woman?
Today we pose this question to new powers, making bets on artificial intelligence, hope towers.
The Amazonians peek through windows blocking deep blues as faces increment scars.
Old burns, new earns, collecting data chronicling our past, often forgetting to deal with gender,
race, and class.
Again, I ask,
ain't I a woman? Face by face the answers seem uncertain, young and old, proud icons are dismissed.
Can machines ever see my queens as I view them? Can machines ever see our grandmothers as we knew
them? Ida B. Wells, data science pioneer, hanging back stacking stats
on the lynching of humanity,
teaching truths hidden in data,
each entry and omission,
a person worthy of respect.
Shirley Chisholm unbought and unbossed
the first black congresswoman,
but not the first to be misunderstood
by machines well versed
in data-driven mistakes.
Michelle Obama unabashed and unafraid to wear her crown of history, yet her crown seems a mystery to systems unsure of her hair. A wig, a bouffant, a toupee, maybe not. Are there no words for our
braids and our locks? Does relaxed hair and sunny skin make Oprah the First Lady?
Even for her face well known, some algorithms fault her.
Echoing sentiments that strong women are men.
We laugh, celebrating the successes of our sisters
with Serena's smiles.
No label is worthy of our beauty. Dr. Joy Buolamwini sharing her poem, AI, Ain't I a Woman, in February 2025 at Duke University.
I wonder, if you don't mind, just before we begin, if we can test out your P's,
if you can say a bunch of words with, like,
peeling potatoes for Play-Doh or something like that.
Peeling potatoes for Play-Doh,
do we have a popper?
'Cause we have no pop guard here, but how are my P's? Mississippi. M-I-S-S-I-S-S-I-P-P-I.
You've done this a few times.
Yeah.
We're good.
In May 2025, Dr. Joy sat down for a conversation with me based on her talk at Duke.
My name is Dr. Joy Buolamwini.
I'm a poet of code.
I tell stories that make daughters of diasporas dream and sons of privilege pause.
I'm the founder of the Algorithmic Justice League and the author of Unmasking AI.
Beautiful.
Now, I just want to talk to you about your introduction.
So you said Daughters of Diaspora.
Can you just repeat that?
I actually dedicated the book Unmasking AI to Daughters of Diaspora.
So I do describe myself as a poet of code and part of that work is making
children of diasporas dream, regardless of how people identify.
Then, being that I am a daughter and identify as such,
also making daughters of diasporas dream.
You describe yourself as a poet of code,
but we're talking about computer code.
How did you become a poet of code?
For me, it was a way of being intentional
to make sure that as I was going further
in my education on computer science,
by this time I was working on a third degree at MIT and would then go on for a fourth degree
to finish a PhD there, that I wasn't forgetting my humanity. And so my challenge to myself as a poet of code was to think through what poets do, which
I think at our best is having others see themselves in ways they might not have imagined before,
interrogate our assumptions about the world, and look again and hear again. And also there's a sense,
at least in the poetry,
that I like to write and to share and perform of musicality and rhythm.
And so those are all aspects that I find playful and accessible.
And so being that I was working on more degrees,
I come from a West African family. You haven't arrived
until you have all the degrees. Tiger parents, we got lion parents, right? So I had to figure
out a way to get through it myself. So the poetry came to be.
Mm-hmm. Is there a moment at which that, you know, being a poet of code has been the most meaningful for you?
Is there something you can point to a moment, a day, an incident, a poem?
Yes, I think when I get messages from people about how a certain poem or something that has been performed like the poem, AI Ain't I a Woman,
made them feel seen,
changed the way they view themselves,
they view technology or felt affirmed.
I think for me, that's when I get the nudges
and the glimmers that this is work
that is worthy to continue to pursue.
It reminds me of the authors and role models that I've had,
whose words traveled far beyond their pens and entered me when I needed it.
For me, that's the most rewarding part about being a
writer and being a poet.
Now, show of hands how many have heard of the male gaze, the white gaze, the post-colonial gaze.
Okay, to that lexicon, I add the coded gaze, and it's really a reflection of power.
Who has the power to shape the priorities,
the preferences, and also at times,
maybe not intentionally, the prejudices
that are embedded into technology.
And I first encountered the coded gaze as a grad student
working on an art installation.
I literally had to put on a white mask to have my dark skin detected.
My friend, not so much.
So this was my first encounter with the coded gaze and I
started asking myself what's actually going on here.
So I shared the story of coding in a white mask on the TED platform.
A lot of people saw it, so I thought,
you know what, people might want to check my claims,
let me check myself.
So I took my TED profile image that you see there,
and I started running it through online demos from different companies.
I found that some companies didn't detect my face at all,
and the ones that did misgendered me as male.
So I wondered if this was just my face or other people's faces.
And so it is Black History Month.
I was excited to run some of the cast from Black Panther.
And I found I did have something in common with these women, right, besides the beautiful
dark skin.
And so you'll see in some cases there's no detection and other cases there's misgendering.
But the red column, that's age. Algorithmically verified, black don't crack. You have Angela
Bassett. She's 59 in this photo. IBM is saying 18 to 24. So maybe not all AI bias is the worst. But what got me
concerned was moving beyond fictional characters and thinking about the ways in which AI and
especially AI-filled facial recognition is showing up in the world, right, leading to things like false arrests, non-consensual deep fakes as well
for explicit imagery.
And it impacts everybody, especially when you have companies like Clearview AI that
has scraped billions of photos courtesy of social media platforms.
Not that we gave them permission, but this is what they've done. And so as we think about where we
are in the stage of AI development, I oftentimes think of the X-Coded. And so the X-Coded represents
anyone who's been condemned, convicted, exploited, otherwise harmed by AI systems. And so I think of
people like Porcha Woodruff, who was eight months pregnant when she was
falsely arrested due to facial recognition misidentification.
She even reported having contractions while she was being held, right?
What's crazy to me about her story is a few years earlier, the same police department falsely arrested Robert Williams in
front of his two young daughters and his wife here. And so this isn't a case where we didn't know there
were issues, right? But it was willful negligence in some cases to continue to use systems that
have been shown time and time again to have all kinds of harmful biases.
And so these algorithms of discrimination persist,
and that's one way you can be X-coded.
Another way is we have algorithms of surveillance.
And so I'll speak more to this in a little bit,
but some of you, as you are flying home for the holidays or other places,
you're likely starting to see airport face scans creeping up.
And so the hand of surveillance continues to extend.
And then you have algorithms of exploitation.
Celebrity will not save you.
Lighter skin will not save you, right?
And so we've seen, with the rise of generative AI systems, the ability
to create deep fakes and so impersonate people, whether it's non-consensual explicit photos
of Taylor Swift or Tom Hanks selling you a dental plan he's never ever heard of. And
there are other ways you can be exploited, right? So this is one study, this is even back as far as 2015,
where there was an analysis that showed that
if you looked at zip codes with a higher proportion
of Asian residents than the average,
in those areas, those people were actually being charged more
for test prep services. There are many types of algorithms you identify: of surveillance, of discrimination, of exploitation,
and all are grave violations not only of people's privacy but of their minds and bodies.
As you discovered and identified these AI biases, what did that do to your younger self and
your ideal of compassion through computation?
I think the moment that I was at MIT Media Lab as a grad student in my first year, it felt very
much like I had entered the chocolate factory of Willy Wonka. And I was just so thrilled to be there and excited to work on all kinds of projects and
take interesting courses.
So one of the courses that I took was called science fabrication.
You read science fiction and then you build something inspired from that that you might
otherwise not explore because it's impractical or you
just didn't think it would be worth pursuing and you had a little bit of time but not that
much time. And so we had six weeks to work on a final project and I wanted to explore
the idea of shape shifting. Inspired actually by, um, stories that I grew up hearing.
Uh, so I'm from Ghana.
My first language, though faded, is Twi.
So even though I was born in Canada, when I was two, I moved, uh, to Ghana.
So anyways, wanted to do shape shifting, thought it would be cool.
Then I understood we have six weeks.
So I wasn't going to change the laws of physics anytime soon, certainly not in six weeks,
but maybe instead of shifting my physical form, I could shift my reflection in a mirror.
And so, that's what led to this Aspire Mirror project. And I write about it in the book Unmasking
AI, figuring out how I could transform my mirrored reflection right into something else.
And so it's in that process that I start playing around with a face detection system so that
a webcam on top of a mirror could start to locate my face in the mirror and then project
something onto it.
So it's in that exploration, which was very fanciful.
It had nothing to do with bias or any of this.
I just wanted to pursue a whimsical idea that I had this experience where the software I
was using, and in particular the computer vision element that was meant to detect my
face. Face detection wasn't really detecting my face until I put on a white mask.
It was Halloween time, happened to have a white mask around.
Pull on the white mask, the face of the white mask is detected.
Take it off, my dark skinned face, the human face, the actual face not detected. And so this is when I said,
hmm, what's going on here?
Was that the only thing that went through your mind? I'm curious what went through you
when you discovered this. Beyond curiosity, which is abundantly clear that you wanted
to pursue more, you know, why is this happening? But what else went through you?
It was kind of like I thought this would be solved by now because it wasn't my first time
encountering the issue.
When I was an undergraduate at the Georgia Institute of Technology in Atlanta, I remember
working on social robots and social robots also use face detection.
And so I was working on a project called Peek-A-Boo Simon, the turn taking game where you cover your
eyes, you uncover your eyes and you say, Peek-A-Boo. And then it's your partner's turn. And in this case,
my partner was a robot. But peek-a-boo doesn't work if your partner can't see you.
My robot was not able to detect my face consistently.
So this was years prior.
But just to underline what I'm after,
was there an emotional response to recognizing or realizing
that your face is not being detected?
Well, because it had happened before, I think the emotional response for me was,
I might actually need to do something. And then we have algorithms of distortion.
And in this case,
we're looking at a generative AI system called stable diffusion,
which you add a text prompt and it gives you an image.
So here you're seeing prompts for high paying jobs,
low paying jobs, and also criminal stereotypes.
And you're actually seeing an over-representation of people with
dark skin for things like terrorist,
for things like drug dealer and inmate. There is diversity in AI, it turns out, in some cases. And now
some people say AI is a mirror reflecting society and our biases, right? But what I've
seen with our research and the research of others is what we have is
a kaleidoscope of distortion.
And so here's an example.
When it comes to judges, we don't necessarily have parity, right?
But there's been progress.
In the US, women are about 34% of judges.
In this test that Bloomberg ran on stable diffusion, women were represented as judges less than 3% of the time.
So the technologies that are supposed to be taking us
into the future, actually taking us back,
you know, past the progress that we had already made.
And so, because we have algorithms of exploitation,
of distortion, of surveillance, you name it.
This is part of why I started the Algorithmic Justice League.
And it sounds cool.
So that was also part of the motivation as well.
And what I want to focus the rest of the talk on is the research that was really the core
to establishing the Algorithmic Justice League and also point to the fact that this
was research I did while I was a student, a master's student, and some of the subsequent
research I also did with undergrads as well. So regardless of where you are on your journey,
you can certainly make a major contribution in terms of developing the kinds of technologies we want.
And so we're going to start with, we're going to go to class for just a little bit, right?
Get you up to speed on what I call FRTs, which are facial recognition technologies.
I do it plural because there are many different types of facial recognition technologies you
might hear about, and they're oftentimes answering a number of questions.
So one of the first questions that's often answered
by these technologies is, is there a face, face detection,
or you'll hear facial detection.
Unmasking AI, the white mask, all of that,
face detection failure.
Now you saw the women of Wakanda, right? So
there's this other question, what kind of face? So this can be classification, like
gender classification, age estimation. Some people try to guess your emotions.
Now you know you can fake smile. I'm actually happy right now, but you know
you can fake smile, right? So just because you can guess it doesn't mean
it's right. And then we get to the one that gets a lot of the headlines.
What is technically considered facial recognition?
Whose face is this?
And it comes in two flavors.
So you might think of one-to-one matching,
also known as verification.
So think of unlocking a phone, right?
There's the expected face, and then your sibling who looks a lot like you who gets into your phone, right? And then you have facial identification, minority report,
needle in a haystack, person of interest. This is the one that tends to get most of the headlines.
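As a rough illustration of the first question in that taxonomy, is there a face, here is a minimal face-detection sketch using OpenCV's bundled Haar cascade. It is illustrative only: the image path is a placeholder, and this classical detector is not the software from Buolamwini's projects nor the deep learning systems discussed later.

```python
# Illustrative sketch only: answers the "is there a face?" question with
# OpenCV's bundled Haar cascade detector. "photo.jpg" is a placeholder path.
import cv2

# Pre-trained frontal-face detector that ships with opencv-python.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("photo.jpg")                 # load the input photo
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # the detector expects grayscale

# Each hit is a bounding box (x, y, width, height). An empty result means the
# system "sees" no face at all, the failure mode described with the white mask.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Faces detected: {len(faces)}")
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces_marked.jpg", image)
```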
And so I say all of this because my research focused initially when I was a grad
student on gender classification. And that's trying to guess the gender of a face based on
an image. So many problems with that to begin with, but we'll get there, right? And so the research
that I did, Gender Shades, was really one of the early demonstrations of
significant bias in AI systems from leading tech companies.
And to do this work, I saw that existing benchmarks that were testing how well these systems work
oftentimes didn't look like the rest of society.
And so they were destined not to represent
the global majority.
And so when you had data sets that were largely male
or largely lighter skinned individuals,
it didn't even allow me to ask the research questions I wanted.
Dr. Joy Buolamwini, poet of code, pioneering AI researcher, founder of the Algorithmic Justice
League and author of Unmasking AI, My Mission to Protect What is Human in a World of Machines.
She delivered the Rubenstein Lecture at the Sanford School of Public Policy at Duke University
in February 2025.
Hey there, I'm David Common. If you're like me, there are things you love about living in the GTA
and things that drive you absolutely crazy.
Every day on This is Toronto, we connect you to what matters most about life in the GTA,
the news you gotta know, and the conversations your friends will be talking about.
Whether you listen on a run through your neighbourhood
or while sitting in the parking lot that is the 401,
check out This Is Toronto, wherever you get your podcasts.
You're listening to Ideas.
You can hear us on CBC Radio 1 in Canada,
on U.S. Public Radio, across North America,
on Sirius XM, on World Radio
Paris, in Australia, on ABC Radio National, and around the world at cbc.ca slash ideas.
Find us on the CBC News app and wherever you get your podcasts.
I'm Nahlah Ayed. Dr. Joy Buolamwini has been studying facial recognition
since she was an undergraduate student
at Georgia Tech 15 years ago.
Back then, the available technologies
could not recognize her face,
but they could detect the faces
of her lighter skinned classmates.
She found the same problem when she began
her graduate studies at MIT years later,
at the dawn of what became known as deep learning.
When I spoke with Dr. Joy, I asked her to explain
why this problem persists,
and to describe what's ultimately at stake.
So when I was an undergraduate at Georgia Tech, the kind of computer vision techniques
being used to detect faces aren't the same types of techniques that were being used when
I came to grad school.
So this is actually why I was surprised that I was coded in a white mask now at MIT,
this epicenter of innovation.
There were breakthroughs and advances when it came to computer science,
and in particular, when it came to machine learning.
You now had enough data and
enough computational power to do some of the explorations that had been
proposed earlier on in the development of AI, computer vision, and so forth, but you
just didn't have enough data or compute resources.
And so by the time I was a graduate student and they're using deep learning approaches, the idea now is you collect a large data set and you use something called a neural network.
Yep.
And you use that to learn different patterns, right?
So in this case, the pattern of a face.
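A toy sketch of that recipe, collect labeled examples and let a neural network learn the pattern, written in PyTorch purely for illustration. The network, shapes, and random placeholder data are invented here; real systems train on millions of images.

```python
# Toy illustration of the deep learning recipe: a neural network learns the
# "pattern of a face" from labeled data. The data here is random placeholder
# tensors; real systems are trained on millions of real images.
import torch
import torch.nn as nn

model = nn.Sequential(                       # tiny convolutional network
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 1),              # 64x64 input -> logit for "face"
)

images = torch.randn(32, 3, 64, 64)            # placeholder "dataset"
labels = torch.randint(0, 2, (32, 1)).float()  # face / not-face labels

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(100):                      # the learning loop
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# Whatever pattern the network ends up with is only as good as the data it saw,
# which is exactly where skewed training sets become a problem.
```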
So my research exploration was actually interrogating, wait a minute, the research papers that I've
been reading claim that the deep learning approaches are so much better than the prior
approaches.
So why am I wearing a white mask?
Right?
So that was the curiosity piece.
Because that thing wasn't lining up.
And then when it came to gender classification, which is what my MIT research
was focused on, what we saw was that many of the gender classification benchmarks, when
we actually looked at the composition of those benchmarks, they were mainly male faces and
mainly lighter skin faces.
And we saw that across the board with different types of face data sets, both as benchmarks.
And then you can also look at face data sets that would be used to train the systems as
well.
So there's a part of it that deals with the data sets, both in terms of the data that's being used to train a model,
but also the data that's being used to evaluate the model.
So if they're heavily skewed, it's not so surprising.
They're not going to be as accurate on faces that aren't as
representative of the training data that they were exposed to.
So what I'm trying to say is you can't isolate it
to just one element alone,
but the data makes a huge difference.
And I think the part where the data makes even more
of a difference is the benchmarks can lead
to a false sense of progress.
Because if we have misleading measures of success,
as we're developing different types of AI systems,
now maybe we're not looking at faces, maybe it's pedestrian tracking because we want these self-driving cars not to hit people, right?
Or maybe it's something for oncology, you're trying to detect cancer and so forth.
There are different areas and elements we could be looking at.
Maybe it's women's heart health or cardiovascular disease, and we know that it presents differently in different populations and all of this.
Suddenly, I was like, oh, okay, the face space is a canary in the coal mine.
We need to interrogate how we're developing all of these human-centric AI systems because using a data-heavy approach,
which deep learning is, means that the biases of the data sets, not just for the training
but the evaluation themselves, might give us a false sense of progress.
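A back-of-the-envelope example of that false sense of progress, with invented numbers rather than figures from her studies: when one group dominates the benchmark, the headline accuracy can look excellent while performance on everyone else stays poor.

```python
# Invented numbers, purely to show how a skewed benchmark can mislead.
# Suppose 80% of the benchmark is one well-served group and the model is
# near-perfect on that group but much weaker on everyone else.
composition = {"majority_group": 0.80, "everyone_else": 0.20}  # benchmark share
accuracy    = {"majority_group": 0.99, "everyone_else": 0.70}  # per-group accuracy

overall = sum(composition[g] * accuracy[g] for g in composition)
print(f"Headline accuracy: {overall:.0%}")   # about 93%, looks like an A
print(f"Accuracy on the under-represented group: {accuracy['everyone_else']:.0%}")
```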
And that's what was happening in the face space when I was interrogating, how did we get this far?
Right?
That now a grad student is showing IBM, Microsoft, later on Amazon, these huge skews.
They had the resources, but how they were asking the question and what they were evaluating
for didn't have a wide enough scope.
And so I created my own benchmark.
And in this case, it was the Pilot Parliaments Benchmark.
I looked at the list of top countries
by their representation of women in parliament
in order to create something that wasn't perfect,
but was significantly better than the options
that existed before.
And so finally, with this benchmark in hand,
I could get to my research question,
right? Which was how accurate are gender classification systems? So going back,
what kind of face, guessing the gender when it came to the faces I had collected in this benchmark?
So let's see how the companies did. And in this first study, we looked at IBM Watson,
we looked at Microsoft, and we also looked at Face++,
which was a billion dollar tech company based in China,
because oftentimes you hear there's a data advantage
and they had access to billions of photos of Chinese faces.
So we threw that in there for the mix.
All right, so we look at overall accuracy on the Pilot Parliaments Benchmark. And
what do we have? IBM, 88%. They get a B. Microsoft, 94%. They get an A.
Face++, 90%. I'm a nice professor. I'll give them an A, right? So if you look at it overall, it seems like, okay,
it might not be too bad.
Now we start to break it down a little bit.
So once we start to break it by gender,
we see that overall, most of the systems perform better,
actually all of the systems perform better
on male labeled faces than female labeled
faces.
And then we looked at skin type.
And so we actually broke it down into Fitzpatrick's six skin type classifications, but then for
this particular analysis, we did lighter skin, darker skin with the top three categories
and then the next three.
And so in this situation overall, better performance on lighter skinned individuals
than darker skinned individuals.
Now, where it got interesting was doing an even deeper dive
and disaggregating across these four categories.
And so when we did this and we took a look at Microsoft,
perfection was possible.
Lighter males, 100%. I didn't know I'd see that,
right? And then that was followed by, in this case, lighter females, darker males, and then
the worst performance was on darker females, around 80%. These were the good numbers.
IBM, lighter males take the lead again, right? Followed by light females, darker males, darker females.
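The disaggregated analysis she describes amounts to grouping the test results by subgroup before computing accuracy. A minimal sketch follows; the records and numbers are made up for illustration, not the Gender Shades data.

```python
# Minimal sketch of intersectional (disaggregated) accuracy, in the spirit of
# the Gender Shades analysis. These records are invented for illustration.
from collections import defaultdict

records = [
    # (skin_type, labeled_gender, predicted_gender)
    ("lighter", "male",   "male"),
    ("lighter", "female", "female"),
    ("darker",  "male",   "male"),
    ("darker",  "female", "male"),    # the kind of misgendering error observed
    ("darker",  "female", "female"),
]

totals, correct = defaultdict(int), defaultdict(int)
for skin, actual, predicted in records:
    group = (skin, actual)            # intersectional subgroup, e.g. darker female
    totals[group] += 1
    correct[group] += int(predicted == actual)

for group in sorted(totals):
    rate = correct[group] / totals[group]
    print(f"{group}: {rate:.0%} ({correct[group]}/{totals[group]})")
```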
I thought the companies might want to know, you know, these results.
So before I published the results, I actually gave each company a preview
and I showed them their peers' results, but I didn't tell them
who was who.
They only knew their results.
All right.
So here were the range of responses.
Well, one response was no response, but their website also said, do you have fun?
Right.
So we take no accountability. Good luck.
All right. So that was one response.
Another response was,
we've been known.
We already know about bias.
But thank you anyways.
So in this case with Microsoft,
they made an update saying that they had
addressed some of this bias and so forth.
I'm a scientist, let's check.
So I did a quick spot check.
I had a tech review, MIT tech review photo
that had come out, I think around the same time.
So I was 28 at the time in that photo
and I was labeled male and 50.
IBM, IBM really did their homework. They replicated the study, invited me
to talk to their engineers, and they had a self-reported result so that when I went to
present the initial paper we could actually say here's the self-reported progress. Now I emphasize
self-reported because you have to verify.
So we did the follow-up study, right?
And in the follow-up study, we did see marked improvement.
Our methods were a little bit different based on the threshold that we chose, but overall,
there was definitely a change.
What was surprising in that follow-up study, which I authored with Deb Raji,
who at the time was an undergraduate student at University of Toronto, was we included Amazon.
And we were surprised to see that for darker females, Amazon was actually where their peers
had been the year before. Now imagine having a test where
the answers are public and there have been headlines,
and you're Amazon.
We were really surprised.
In this case, we ended up getting a bit of blowback from Amazon.
I bring this up because sometimes when you're doing this kind of research, it's
celebrated now. But when we started, it wasn't always well embraced. And so this was actually
a critical moment for me and my own development as a student: would I continue doing work that might risk my future career options, right?
If I wanted to go work at a tech company,
if I wanted to go to a university
that oftentimes has funding from tech giants.
And so I was discouraged to say the least
when you poke dragons, sometimes the fire does come back.
But I was also so excited when I had the support
of very prominent researchers, including the lead AI research scientist at IBM. And these people actually put their own careers on the line and actually did suffer some professional blowback for saying something that was really important, which is this kind of research is necessary
because it makes us better to understand our limitations, right?
That's how we can actually start mitigating it.
And I will say, everybody seemed to come around eventually in 2020.
All of the US-based companies we audited made commitments and actually
stopped selling facial recognition to law enforcement. When the book came out, Unmasking AI became an
Amazon editor's best pick, which for me, given all of the blowback, was something I didn't expect. So here's what I'm wondering as someone who's not an expert, a subject matter expert, is
that we've come this far, as you say, with these, you know, flaws, there's really no
other way to describe it in the way that faces are recognized or identified by computers.
How big a job is it to undo this is what I'm wondering.
How much work is involved or how much effort needs
to be put into actually changing the formula here?
I think the question I have is how big of a cost will society face if we don't?
Something that I've actually called for is deep data deletion,
which is to say, what would it look like to start over on building some of these models?
And also, before we're even building new models, for what purpose?
Because accurate facial recognition,
think of drones, a gun,
and accurate facial recognition.
Inaccurate facial recognition,
you're targeting the wrong person,
killing the wrong person,
but accurate facial recognition in the hands of
an authoritarian government, an adversary, and so forth. It's
not just about how accurate these systems are, but how the systems are being used. I
actually have a poem related to this called Precisely Who Will Die, if you'd permit me
to share it.
I would love to hear that, please. Precisely who will die.
Some say AI is an existential risk.
We see AI is an exterminating reality.
Accelerating annihilation.
Augmenting destruction.
I heard of a new gospel delivering death with the promise of precision,
the code name for
an old aim to target your enemy the other reduced to rubble.
Face erased, name displaced, as the drones carry on in a formation that spells last shadow.
AI wars first fought at the doors of our neighbors.
Next the bombs drop on your private chambers.
Cease to believe fire on fire will deliver peace. Precisely who will die, zeroed out by ones and guns.
Thank you so much for that. Was there a particular
incident or story that prompted you to write that?
I think the, let me see when this was written, December 16th, 2023. So I think in 2023,
the context here is there is so much AI hype. So OpenAI releases ChatGPT in November 2022.
So by the time we're at the end of 2023, right?
Oh, AI is here.
I wanted to highlight that we don't have to make things up
about the potential risk of AI systems in the future.
It's real.
It's real, it's here.
And also I was thinking about, yes,
there's ways that people envision AI killing people immediately,
but I was also very concerned about the slow ways in which AI can kill people as well.
The slow ways.
The slow deaths.
What's an example?
The continual denial of opportunity, not getting the medical help you need because there's a
biased medical algorithm, right? When you're thinking about systems for providing mortgages or housing allocation,
and now you're housed in an area that's in a food desert,
or you're housed in an area that has more pollution and so forth.
This notion of systemic violence,
it's much easier to point to the gun and the soldier. It's a little harder to locate the system of
AI decision-making through humans in the last-minute loop
that then led to a circumstance
where you don't have economic opportunity,
you don't have adequate access to healthcare or education.
AI systems determining who gets into college or not.
And so to answer your question about what was motivating that poem and that moment,
it was in 2023 seeing all of this AI hype around the doomsday scenario
without looking at the current doom many people were living that were also very preventable.
This wasn't theoretical, right?
This was we had enough evidence and the evidence continues to grow.
And so that was part of the motivation.
When you have spoken of the idea of data deletion,
I'm wondering what kind of reception you received.
It depends on who.
So the everyday person, the X-coded,
yay, let's delete data.
From the industry?
From the industry is really interesting.
So you actually had Facebook delete a billion face prints, right?
And why did this happen?
Because Meta had a change of heart? Lawsuits.
So you had, I think it was, $1.2 billion in settlements.
You did have two states that had laws on the books for
biometric technologies and there were consent laws.
There were real consequences with holding on to
certain types of data when you have laws.
It was in the interest of Facebook, now Meta, to delete those face prints.
But the reason I talk about not just data deletion, but deep data deletion is,
while they deleted the data that was used to train their models,
they didn't delete the models.
So when I'm talking about deep deletion,
it's not just the data,
it's also the models that are created from that data and
the products that those models are integrated into, right?
So we're talking about a full-on excavation.
We got to get to the roots.
And how realistic is that?
When laws come in and when consequences come in, then people move, right?
So for example, you had Clearview AI scrape billions of photos.
You even had Facebook telling Clearview to stop.
And for example, the Italians, their regulatory body had fines and a data deletion requirement.
And so, I think if what you're asking is will companies delete data voluntarily,
quite unlikely unless there's some kind of legal liability.
So after a long meandering approach of being a computer scientist,
being an artist and so forth, I realized the advocacy piece was necessary
because without some laws, without
some regulations, right, then to your point, there is really very little motivation for
companies to delete any data that they've collected or correct any models. So that's an example of real world change.
And that's what the Algorithmic Justice League is about.
How do we put this research in action?
And so that might look like advocacy, it can be hearings, it can also look like art, right,
through various exhibitions.
There's a fellowship I'll be doing that will be soon announced that will include more art
exhibitions.
And then I'm a fighter and we keep fighting back.
So one of the things we've been fighting for with the Algorithmic Justice League is around
biometric rights and particularly with the expansion in the
United States of airport face scans.
So we started the Freedom Flyer Summit, where we have TSA scorecards for people to fill
out.
So literally, you just share what have your experiences been.
Did you see signs?
Was there notice?
Did they just tell you to step up?
Right?
What happened? We've been collecting these responses.
And now that they have plans to expand to over 400 airports
is actually the time for us to speak up
because it is quote unquote in the pilot phase,
though a lot of people don't know you can opt out.
And so we've been having a little bit of fun with this,
with the Opt Out Club and so forth.
And every time you opt out, not only is it a vote for consent culture, but you're also
standing up for people who've been robbed of that choice. When I opt out, I think of people like
Porcha Woodruff. I think of people like Robert Williams, because these technologies don't just
stay at the airport. We even saw it at the
Super Bowl, right, and in stadiums and bringing it into classrooms and so forth. And so it's
important that we resist where we can, and these are little steps you can take, but together they make a difference. Unstable Desire.
Prompted to competition, where be the guardrails now?
Threat in sight will might make right.
Hallucinations taken as prophecy.
Destabilized on a middling journey.
To outpace, to open chase, to claim supremacy,
to reign indefinitely.
Haste and pace control altering deletion.
Unstable desire remains undefeated, the fate of AI still uncompleted.
Responding with fear, responsible AI beware.
Profits do snare, people still dare, to believe our humanity is more than neural nets and
transformations of collected muses. More than data and errata.
More than transactional diffusions.
Are we not transcendent beings bound in transient forms?
Can this power be guided with care,
augmenting the light alongside economic destitution?
Temporary band-Aids cannot hold the wind
when the task ahead is to transform
the atmosphere of innovation.
Poet of Code, certified, human, made.
Thank you so much, Dr. Joy.
Thank you.
It's a pleasure speaking with you.
Likewise.
Thank you. Take care.
Thank you. Bye-bye.
On Ideas, you've been listening to Unmasking AI featuring Joy Buolamwini, poet of code.
She delivered the 2025 Rubenstein Lecture at the Sanford School of Public Policy at
Duke University.
This episode was produced by Sean Foley.
Special thanks to Carol Jackson
at the Sanford School of Public Policy
and to Glenn Alexander at WBUR Boston.
Our technical producer is Danielle Duval.
Our web producer, Lisa Ayuso.
Senior producer, Nicola Luksic.
The executive producer of Ideas is Greg Kelly
and I'm Nahlah Ayed.