This Is Woman's Work with Nicole Kalil - Unmasking AI with Dr. Joy Buolamwini | 259
Episode Date: December 9, 2024
What happens when technology isn't held accountable? Dr. Joy Buolamwini, founder of the Algorithmic Justice League, is here to guide us through AI's power, pitfalls, and potential. From exposing bias in facial recognition to championing ethical AI, Dr. Joy is leading the charge to protect what makes us human in a world dominated by machines. As a Rhodes Scholar, MIT researcher, and author of Unmasking AI: My Mission to Protect What Is Human in a World of Machines, Dr. Joy's groundbreaking work has reshaped the conversation on AI ethics. Her viral TED Talk and the Emmy-nominated documentary Coded Bias highlight the real-world consequences of unchecked technology and why ethical AI is essential for everyone. AI isn't inherently good or evil; it's a tool. How we use it defines its impact, and being human isn't just a feature, it's the whole point.
Connect with Dr. Joy:
Website: www.Unmasking.ai
Algorithmic Justice League: https://www.ajl.org/
Book: https://www.penguinrandomhouse.com/books/670356/unmasking-ai-by-dr-joy-buolamwini/
Poet of Code: https://www.poetofcode.com/
TSA Facial Scan Opt Out: https://www.ajl.org/campaigns/fly
Related Podcast Episodes:
202 / Building Your Email Lists & Websites with Brittni Schroeder
172 / Boomers to Gen Z - Understanding Generational Differences with Kim Lear
Share the Love: If you found this episode insightful, please share it with a friend, tag us on social media, and leave a review on your favorite podcast platform!
🔗 Subscribe & Review: Apple Podcasts | Spotify | Amazon Music
Transcript
I am Nicole Kalil, and let me say up front that I used ChatGPT to write the intro to
this episode of This is Woman's Work, because if we're going to talk about AI on the show,
I figured I might as well use it.
So here's what ChatGPT wrote after I shared 10
examples of other episode introductions and politely asked it to write this one, because I
figure if computers take over the world, they will remember that I said please and thank you.
So here it is. AI is everywhere, whether we like it or not. It's showing up in obvious places like
chatbots and Netflix recommendations, but also in ways we may not even think about, like deciding who qualifies for a
loan or who gets flagged at airport security. And while AI promises to make life easier,
let's be honest, it also has the potential to make things really, really messy. As someone
who's recently asked AI to write a podcast intro because, hey, why not?
I can tell you that AI can feel both impressive and unsettling.
It can sound almost human, but there's always that sneaky little feeling of, wait, is this
helping or are we all just participating in the machine takeover?
And that brings me to today's conversation, because if anyone can tell us how to make
sense of AI's power
and pitfalls, it's Dr. Joy Buolamwini.
And let me just say, Dr. Joy is not here to let AI run amok unchecked.
She's here to show us what happens when technology isn't held accountable and to remind us that
being human isn't just a feature.
It's the whole point.
And that, my friend, was my AI introduction. Not bad,
but not totally me, right? So let me go ahead and introduce our guest, and I'll take back
over from here so we can dive into this conversation about understanding and unmasking
AI. Dr. Joy is the founder of the Algorithmic Justice League, an MIT researcher, artist, and author of Unmasking AI:
My Mission to Protect What Is Human in a World of Machines. Her groundbreaking research on facial
recognition tech blew the lid off the industry, exposing the biases embedded in the algorithms
that claim to be neutral. Her Gender Shades paper is one of the most cited works in AI ethics.
Her TED Talk has racked up millions of views. She's the star of the Emmy-nominated documentary
Coded Bias, and she's also a Rhodes Scholar. All that to say she understands AI way more
than most of us. So, Dr. Joy, I'd love to start by asking you to share from your knowledge, experience, and expertise a quick sort of pros and cons list for AI.
What are some of the best uses or ways that it's really helping us?
And what are a few of the worst uses or ways that it might be hurting us?
Oh, I love that you start that way because I talk about being the daughter of an artist and a scientist.
And when I was a little girl, I used to go to my dad's lab. He would have me feed cancer cells. He
was working on drug development and he would actually use different types of computers to
support his research. So this leads me to one of the most exciting uses for me when it comes to
AI, the way it can help support
scientific advancement. You have releases like AlphaFold from DeepMind, which allows scientists
to look at the ways protein structures are formed, and that actually helps to inform research. And so
I think when AI is being used as a tool to help with discovery, there's a lot of excitement and opportunity there.
AI, and computer science in general, tend to be about optimization and efficiency.
So sometimes they'll also have AI systems that are being used to identify maybe areas where you can cut cooling costs, right? And so that can have a positive impact
on the environment. So again, when I think you're using it as a way to help with optimization,
summarizing a lot of data, that kind of thing, sure. Now my specialty is where AI goes awry.
So even when we're talking about potentially helping with the environmental
piece of things, we have to remember that AI products like ChatGPT actually take so much
energy and money to create. We're talking hundreds of millions of dollars. And people
also have started to look at investigations where they'll say, what is the water impact of each
prompt you type in? Maybe think of each prompt as taking a glass of water, but now this can
be taking water from a community, right, where data centers have been put in. And on my end,
really, what I've seen is this technology, AI, that's meant to take us into the future can sometimes actually
take us back to the discrimination of the past while costing us our humanity in the present.
So ableism, ageism, sexism, think of all of these isms, you're seeing it embedded in AI systems.
As women, and as working women, you've actually had tools from companies like Amazon.
Amazon had to scrap an internal AI recruiting tool because it systematically cut out women
who applied. And so if you had a women's college on your resume, that would be a reason for your
application not to actually go through. Meanwhile, if you played
lacrosse, or, I believe, had a name like Jared, right, then you were more likely to score higher
points. And part of that is because when we're thinking about how the popular forms of AI are
created today, it's based on something called machine learning. And as it might sound, right,
machine learning from what? Machine learning from data. So in the hiring context, if you're training AI systems to learn
based on the data of the former hires, this is a case where I like to say the past dwells within
our algorithms. So if, in the past, you've mainly hired men named Jared who like to play lacrosse for a particular role,
that then becomes the model. Somebody who's outside of that model doesn't fit this very narrow
scope that's been encoded into the system. So that's just a quick overview, but hopefully
it gives you a sense. Yes, when we're talking about optimization, when we're talking about supporting scientific discovery in certain types of way,
especially when there's a lot of data to crunch, AI is your tool. When we're talking about life
opportunities, if you get hired, if you get fired, if you get a loan, if you even have access to healthcare. In all of these areas, we've seen harmful AI discrimination.
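To make the point that the past dwells within our algorithms concrete, here is a minimal sketch in Python. It is not Amazon's actual system; every feature name and number is hypothetical, invented only to show how a model trained on biased historical hiring labels reproduces that bias.

```python
# Hypothetical sketch: a classifier trained on biased past hiring decisions.
# None of this is a real system; the features and weights are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

womens_college = rng.integers(0, 2, n)   # 1 = women's college on the resume
played_lacrosse = rng.integers(0, 2, n)  # 1 = played lacrosse
skill = rng.normal(size=n)               # actual job-relevant ability

# Biased historical labels: past recruiters rewarded lacrosse and
# penalized women's colleges, on top of (and independent of) skill.
hired = (skill + 1.5 * played_lacrosse - 1.5 * womens_college
         + rng.normal(size=n)) > 0

X = np.column_stack([womens_college, played_lacrosse, skill])
model = LogisticRegression().fit(X, hired)

# The model faithfully learns the discrimination baked into the labels:
# a negative weight on "women's college", a positive one on "lacrosse".
print(dict(zip(["womens_college", "played_lacrosse", "skill"],
               model.coef_[0].round(2))))
```

Nothing in the code tells the model to discriminate; the bias arrives entirely through the historical labels it learns from, which is exactly the hiring-tool failure described above.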
So it's obviously a little scary or a lot scary.
As you were talking, it had me thinking, you know, I think AI can probably learn values,
but the question is whose values?
If you think about, in the example you said, it's looking at historical
data and whoever has been hired up to that point has probably been hired within biases that exist,
whether you're aware or unaware of them. And AI is just catching onto that, but not filtering
through any sort of conscious value proposition. So my question is, are we moving too
fast? I think we are not moving fast enough when it comes to guardrails for AI. And so in that case,
we are always going to see that when technologies come out, particularly if you have open source
tools and so forth, then it will proliferate. But now we're in a moment, right,
where at least in this year, you're having so many elections around the world. And because we haven't
moved fast enough around safeguards, AI systems that are moving fast, that are producing deep
fakes, where you're not even sure you can trust what you see with your eyes or hear with your ears, then we're not keeping pace.
And so I certainly think because there are so many harms and risks associated with AI and not
enough guardrails in place, we are in a moment where the tech is outpacing the guardrails.
Okay, great answer. And you may have answered this question, but I'm going
to ask it again because I really want us to spend some time considering this. And I say us as the
listeners. What should we be most wary of? That's my first question. So you talked about deep fakes.
When we're using AI, if we're going to put our thinking hats on, if we're going to practice curiosity,
if we're going to try to be discerning, what should we be wary of and looking for?
Is my question making sense?
It does make a lot of sense, right?
We're out here trying to survive.
You have these different tools.
What do you do?
How do you keep yourself and your kids safe?
That kind of thing. I think one kind of use of AI
that's really been on my mind lately is the use of AI companions. And so we know that we're
experiencing huge numbers of people going through loneliness, at rates that haven't been seen before.
And the pandemic did not help that along. Social media, which was
supposed to connect us, also did not help that along. And so what you're now seeing are some AI
companies saying, we have a solution for your human loneliness with a machine surrogate that
acts as a companion. And I'm putting air quotes around that word. And so the reason this is on my mind
is not too long ago, you had a headline about a 14-year-old boy who committed suicide after
engaging in a pseudo-relationship with a chatbot.
And he's not alone.
Actually, in the book Unmasking AI, I write about a man in Belgium who also had a similar experience of committing suicide
shortly after talking to a chatbot.
And his widow, much like the mother of the boy I mentioned,
they both honestly believe that their loved
one would still be alive if not for those interactions with these quote-unquote AI companions
that are not human, do not care, and are regurgitating what seems like plausible text. And so for me, for any listener who is tuning in, I think it's really important
to check with your family members, you know, if they're engaging with any of these sorts of
AI companions, because they can create these deep emotional attachments, which then can be used in really harmful ways, regardless of if that's the
intent of the company that created it. So that would be one of my first areas of caution. It
might seem like fun and games and entertainment until literally somebody gets hurt. And so I would put a flag on that for sure. Another thing I think about when using any AI tool
is when we're talking about machine learning, which is the approach that's fueling this
particular moment in AI, the machine is learning from data. So in some ways, data is destiny, and data is also valuable. And so what I always
check are what's happening with the data I put into the system, right? So sometimes you'll see
a free demo or a free version of a tool. And when you dig deeper, what you come to find is that
it's free because it's taking your data and using your
data in ways you might not even be aware of. In the book, I talk about how there was an app that
was meant to be a photo album sharing app for families. See your grandkids, that kind of thing.
Keep up with their photos. Another company bought the app and they were looking for a business model and they
thought, okay, well, we have a lot of faces, so let's use it for facial recognition. And
those photos meant for one context ended up moving to another one, because under those terms of service,
they could use them. If you submitted the data, they can use it however they want. And so I would be really cautious when using any of these tools. Do look into the settings and see if you can toggle data sharing off; in some cases you can, in some cases you can't. Be mindful, because where that data ends up might be really problematic in ways we don't even
know just yet because the technology keeps evolving. So I hope those are two very concrete
things anybody tuning in can be aware of, the AI companions, the emotional connections that can go awry, particularly with young people. So
check with your kids, your loved ones, nieces, nephews, all of that. And when it comes to data,
which is fueling so much of this AI evolution, make sure you're protecting your data as much
as you can. Another thing we promote with the Algorithmic Justice League for people who fly often and we're
around the holiday season is actually opting out of airport face scans. And most people don't know
that for domestic flights, you can say no. I mean, it can be hard to see the signage. You have the
power dynamics and officers telling you to step up to the camera, but you can actually step away from the camera so your face
isn't ever on view and ask for the standard ID. That is your right. You should be able to do that.
If there's pushback, we have an ongoing campaign. If you go to tsa.ajl.org, you can actually tell
us what was your experience because then we're able to hold TSA accountable and say
by your own priorities, right, your own principles, this is where you're falling short. And that
actually makes a big difference. And also when you opt out, you let people behind you know that
it's an option because again, a lot of people aren't seeing those signs or they might feel
a bit intimidated in that kind of situation. So now I have three things.
It's great. And I travel a lot and I had no idea that that was an option to opt out of. So thank
you for that. Even if you have PreCheck, you can opt out with PreCheck as well. Some people use
Clear. You can reach out to Clear and ask them to delete your biometric information.
But this is an area where I find even though it's supposed to be optional, most people
assume that this is just how it's done.
And then the story becomes the traveling public wants this.
But you've had data breaches from TSA, from the Department of Homeland Security to be specific.
So it's not even a case where you know for sure your data is going to be protected.
And then you also have to think about this growing surveillance apparatus.
So unlike if your credit card gets stolen, right, you change the number; password gets compromised, new password. Your face gets stolen? That's a little bit harder to change. And so this face data, also known as biometric
data, is really valuable. And as facial recognition grows, we're starting to see it now
at NFL stadiums, right? I even saw a use case where they had facial recognition on a vending machine for
ammunition, all kinds of things wrong there. But the point I'm saying is it doesn't just stop in
the airport. So if you don't resist it in the spaces where you can, then others use that as
justification to say, look, the public wants this. So I'm sure we are all feeling very uncomfortable and all of that is
really scary. I want to, I don't know, maybe add fuel to the fire here. So when I went into ChatGPT,
there was very small print at the bottom that said, ChatGPT can make mistakes, check important info.
And it just sort of reminded me of, and Dr. Joy, you can tell me if this is accurate or inaccurate,
but somebody had told me once that if I Google something and somebody who's my opposite Googles
the exact same thing, that we'll get wildly different information based on our preferences
and search history because of
the algorithm. So like I search something very specific and a young white male conservative
searches the exact same thing, we're going to get two totally different results. My question is, how do we find factual information in a world of algorithms and AI and inaccurate
data and all of that? How do we know what's real and what's not? There's several questions in there.
First, let's get at this notion of AI, quote unquote, hallucinations or making mistakes.
You can think of ChatGPT like spicy autocomplete, as Dr. Rumman Chowdhury likes to say.
And so what these systems are doing are learning patterns of language.
And so they're trained on newspapers, Wikipedia articles, all of that.
And over time, the systems can learn the pattern of a language and learn how to reproduce that pattern in convincing ways as you're seeing.
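As a deliberately oversimplified illustration of the spicy autocomplete idea, here is a toy bigram model in Python. It is nothing like ChatGPT's neural network in scale, but the core move is the same: it learns which word tends to follow which in its training text and then chains those patterns together fluently, with no notion of whether the output is true.

```python
# Toy "autocomplete": a bigram model that reproduces word patterns.
# The tiny training corpus here is invented purely for illustration.
import random
from collections import defaultdict

corpus = ("the model learns patterns of language . "
          "the model reproduces patterns convincingly . "
          "convincing text is not always true text .").split()

# Count which word follows which in the training text.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=12, seed=7):
    """Chain learned word-to-word patterns, blind to truth."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:  # no observed continuation; stop
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # fluent-sounding, pattern-driven, truth-blind
```

The output reads smoothly because it mimics patterns it has seen, which is precisely why confident fluency is no guarantee of truth.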
But just because you can spit out information convincingly, we've met people who are full of BS.
They sound confident.
They flow coherently.
But it doesn't mean what they're saying is true.
And so you always have to keep that in mind. And there've been some real consequences.
A lawyer was disbarred. They used ChatGPT, and it turned out that the information that came out actually cited cases that didn't exist, right? And we saw this on 60 Minutes.
Google was doing a demo, I believe it was Bard, and it looked very impressive. It had been asked
for a list of books and things of that nature. After the show, they actually went to look for those books, and the books didn't exist, right? And so that's where it's dangerous.
If you are maybe playing around with some creative ideas or something fictional or something of that
nature, that's one thing. But once you're in the realm of fact, there's a reason these companies
are putting these disclaimers because they know, right, that it's
highly likely you're getting the wrong information. So that's a bit of why you're getting the wrong
information. These systems, they are pattern reproducers. So they can BS. They can BS
confidently. So then this other question, right, even when you're talking about search
engines, now when you have search engines using the AI summaries, you need to be careful. We've
always needed to be careful online when it comes to critical thinking and thinking through sources,
even before we had generative AI systems. This just makes it that much more important that you are
checking your sources. Sometimes it can be, let's say for a government website or information about
elections and so forth, is the information coming from a .gov website, right? There, what you're looking at are other indicators of authority that go beyond an AI
system. So if you're searching with an AI tool or on a platform that's adopted AI, I would say you
need to be very skeptical and check those sources. And so AI tools that do provide sources are more helpful in that case, because you can then go and verify at least where we are now.
But if we aren't actually putting together more safeguards, we're going to have a pollution of our information networks, because then you'll have false AI-generated documents cited as sources and so forth. So I do think we have to be really
careful there. But we've always needed to be critical consumers of information online,
and generative AI is making that even more so the case. So I would start with skepticism
for sure, and not think that these generative AI systems are going to be
factual. Okay. So you said earlier in our conversation to think about each time we use
ChatGPT or something as the same as drinking a glass of water. What are the ways we should avoid using AI so that we don't fall into
these traps, but also so that we retain our humanity and our curiosity and our critical
thinking skills? In what ways do you think we should be wary, or stop using it altogether?
I think when it comes to creative work or when it comes to deep thinking, those are areas where I'm really cautious about how much to use AI. So if I want to write a book, right, if I'm thinking about an essay, maybe where you use AI is to summarize past ideas you've had or help you with managing
the data. But when it comes to that exercise of let me tell a story in a certain kind of way,
I'm worried that if you cede all of that to an AI system, you lose that capacity. So just like working out, you know, it's, I mean, I don't mind doing Netflix
and chill all day long, but at some point I probably need to go take a walk, take a break.
So I think it actually puts the imperative on us to make sure that we're not becoming so reliant on these systems that we don't know how to do
our creative work anymore, starting from the blank page. Okay. So what are your thoughts? Sorry,
I'm spit firing questions at you because I have a lot. What are your thoughts for people who are
concerned about whether or not AI is going to replace them in their jobs or their careers? Is that a valid concern? Not valid? Anything we should know there?
There are quite a few cautionary tales about AI. So one I like to share is the National Eating Disorder Association.
They bought into the AI hype. And so they had a situation where their call center workers were
looking at unionizing. They decided, you know what, we don't want to
deal with this. They fired them. They put in a chatbot. And I think I saw the headline on a Monday. By Friday or Saturday, they had to say chatbot shut down. Why? That chatbot was actually giving harmful advice to the very people it was meant to help. So now they've compromised two things: their workforce, people who were intimately familiar with these sorts of issues, and their reputation and their mission, right? And this didn't come because the AI was better than the humans. In fact, you're seeing it was worse. But the belief in its capabilities, right, led to those decisions.
So I think in this case, it's really important for anyone thinking about using AI to replace
any kind of function, especially as we're talking about hallucinations or BS systems,
to ask: is it fit for purpose?
Because sometimes it's really easy to assume what you're not familiar with can be automated.
How hard can it be?
A podcast? How hard can it be? Until you're actually doing it.
Then you realize, oh, there are known unknowns and unknown unknowns and all of those pieces.
So I think that's the first part, right? To be cautious about rushing to replace humans
and to maybe give a bit more respect to other jobs or roles you might not be familiar with.
That being said, AI is replacing jobs. You are seeing within call centers and so forth, reduction of staff, or even a change of hiring plans. We saw this happening
within big tech companies, right, where they've slowed down their hiring in certain areas,
because they're making the long term plan to say, we anticipate AI will be able to reduce this amount of the workforce. So I think that is a true concern for sure.
And I think part of thinking through how to be ready for that is always being a continuous
learner. So what are the new tools that you can be familiar with? Know what AI is capable of and also know where the limitations are so that
you yourself are honing in on the elements that give you more value.
Great advice. I kind of feel like in this case, it's a little bit of an advantage for me that I
am not an early adopter with these types of things. Like technology freaks me out. I
don't understand it. And so I have a tendency because I don't understand it to stay away from
it a little bit. And obviously there are a bazillion ways, as you said early on, that it could probably
help me with efficiency or optimization or with discovery. But I'm kind of glad that I've held out
a little bit because it also scares the
shit out of me, maybe because I don't understand it. But you said early on that you are both an
artist and a scientist, and you are an expert in AI and also a poet. How does that come together? And can you share a little bit about why that's important to you
and how poetry fits in? No, that's a great question. So I mean, I am the daughter of an
artist and a scientist. So I mentioned earlier growing up from being a little kid going to my
dad's lab, feeding cancer cells, that kind of thing.
And then going with my mom as she had her art shows. And I didn't realize she was pitching
when we were going to galleries. That's just what we did on the weekend. And so I literally grew up
with art and science as constant companions, as a model. Then when I got to school, you're told to
choose, right? You got the STEM kids over here,
humanities, the arts, social, you know, that kind of divide. And so I went down the tech path.
And in part because my mom, the artist, told me, you're an artist,
no one can take that away from you. And because that is inherently part of who you are, explore other things as well,
right? And so I took that advice. But once I got to grad school, I come from an academic family.
By the time I'm working on my fourth degree, right, to get a PhD from MIT, I actually had
a challenge to myself because I knew I could get an academic degree,
you know, in the STEM area. I wanted to see if I could do a poetic PhD. And part of that was just a way of challenging myself and also saying that our humanity is important and the tools and
the methods and the fields that we say are more important than others aren't necessarily so. And so that was my challenge to myself. And as I started doing research showing gender bias, skin type bias in different AI systems from some of the biggest tech companies in the world, I decided to not just test on the faces of parliament members,
which is how I started, but I also started to test on the faces of the women of Wakanda from
the Black Panther film. And I saw that, like my face, sometimes their faces weren't detected,
or they were misgendered as male. And when I had that observation, I thought, you know what, this might be an
opportunity to move from the performance metrics of my AI research papers, which were for an
academic audience, and move from performance metrics to performance arts. Like, how do you
go beyond the numbers and really have people sense what you mean when you're talking about
some of these AI systems? So this was my challenge to myself,
and it resulted in this poem called AI, Ain't I a Woman, which is very much an AI audit,
but it's done in spoken word with examples of some of the biggest tech companies failing on
some of the most iconic women of the past and present. And when I shared AI, Ain't I a Woman
in places like the EU Global Tech Panel, right, or when I was testifying in front of
Congress and I would share those screenshots, I saw how powerful that was. So maybe it was the MIT credentials that got me into the room,
but what made the idea stick was the storytelling and was the poetry. And so in my exploration,
it became very clear to me that I wanted an organization that put both the poetry,
the storytelling with the research. And that's what I do with the
Algorithmic Justice League. That's why I go by Poet of Code. And so if you'd permit me,
I'd like to share that first poem that really made me think, okay, this idea that I have about
bringing the two together, maybe it's possible. Maybe I can actually be
a poet of code. Yeah, I would love that so much. I was going to ask you to, so thank you for
offering. Of course. So this one is called AI, Ain't I a Woman? It was actually inspired by
Sojourner Truth's 19th century speech in Akron, Ohio around the women's movement actually saying,
you know what? Everything
we're talking about is great, but don't forget us too, the women of color, right? If we're saying
it's a women's movement, it needs to be for all of the women, not just the privileged ones at the time.
And so here we are: AI, Ain't I a Woman. My heart smiles as I bask in their legacies, knowing their lives have altered
many destinies. In her eyes, I see my mother's poise. In her face, I glimpse my auntie's grace.
In this case of deja vu, a 19th century question comes into view in a time when Sojourner Truth
asked, ain't I a woman? Today, we pose this question to new powers.
Making bets on artificial intelligence, hope towers.
The Amazonians peek through windows, blocking deep blues as faces increment scars.
Old burns, new urns, collecting data, chronicling our past, often forgetting to deal with gender, race, and class.
Again, I ask, ain't I a woman?
Face by face, the answers seem uncertain.
Young and old, proud icons are dismissed.
Can machines ever see my queens as I view them?
Can machines ever see our grandmothers as we knew them?
Ida B. Wells, data science pioneer,
hanging facts, stacking stats on the lynching
of humanity, teaching truths hidden in data, each entry and omission, a person worthy of respect.
Shirley Chisholm, unbought and unbossed, the first Black congresswoman, but not the first to be
misunderstood by machines well-versed in data-driven mistakes. Michelle Obama, unabashed and unafraid
to wear her crown of history.
Yet her crown seems a mystery
to systems unsure of her hair.
A wig, a bouffant, a toupee? Maybe not.
Are there no words for our braids and our locks?
Does sunny skin and relaxed hair
make Oprah the first lady?
Even for her face well-known,
some algorithms fault her, echoing sentiments
that strong women are men. We laugh, celebrating the successes of our sisters with Serena smiles.
No label is worthy of our beauty. Beautiful. Thank you so much for sharing. And I know,
myself included, so many people are going to want to get their
hands on your book. Unmasking AI is the book. We want to support those local bookstores,
if not, wherever it is that you buy books. But Dr. Joy, thank you, thank you, thank you for
your very important work, for being here today, and for sharing your heart with us at the end there. It's much appreciated.
Yes. And to support the local bookstores, you can go to www.unmasking.ai. It'll take you to
Bookshop, which lets you support bookstores all over, because sometimes they won't always have it in
stock. Perfect. We're going to put that link and all the other links and ways to find Dr. Joy and follow her in show notes. Okay,
friend, as is true with so many things in our lives, AI is neither all good nor all evil.
It's both a gift and a complete mess. It can do amazing things like cut down our to-do list,
crank out quick answers, maybe even bring some creative hacks into our lives, but it is also dangerously good at overstepping boundaries.
Are we moving too fast? Maybe not fast enough. So our opportunity with AI is to remember that
we have at least some power to choose how we engage and how much we rely on this technology,
and that it's exactly as smart as the people who create it, which, let's be honest, should give us a little pause.
So just like woman's work, you are the decider.
I say we hold on to our humanity,
continue to keep thinking and practicing curiosity,
and refuse to blindly follow anything or anyone,
especially a machine.
Let's make sure we're the ones steering,
not just sitting in the backseat hoping for the best.
After all, that is woman's work.