Your Undivided Attention - No One is Immune to AI Harms with Dr. Joy Buolamwini
Episode Date: October 26, 2023

In this interview, Dr. Joy Buolamwini argues that algorithmic bias in AI systems poses risks to marginalized people. She challenges the assumptions of tech leaders who advocate for AI “alignment” and explains why some tech companies are hypocritical when it comes to addressing bias. Dr. Joy Buolamwini is the founder of the Algorithmic Justice League and the author of “Unmasking AI: My Mission to Protect What Is Human in a World of Machines.”

Correction: Aza says that Sam Altman, the CEO of OpenAI, predicts superintelligence in four years. Altman predicts superintelligence in ten years.

RECOMMENDED MEDIA
Unmasking AI by Joy Buolamwini: “The conscience of the AI revolution” explains how we’ve arrived at an era of AI harms and oppression, and what we can do to avoid its pitfalls.
Coded Bias: Shalini Kantayya’s film explores the fallout of Dr. Joy’s discovery that facial recognition does not see dark-skinned faces accurately, and her journey to push for the first-ever legislation in the U.S. to govern against bias in the algorithms that impact us all.
How I’m Fighting Bias in Algorithms: Dr. Joy’s 2016 TED Talk about her mission to fight bias in machine learning, a phenomenon she calls the “coded gaze.”

RECOMMENDED YUA EPISODES
Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?
Protecting Our Freedom of Thought with Nita Farahany
The AI Dilemma

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Transcript
Hey everyone, this is Tristan.
And this is Aza.
One of the things that makes AI so vexing is the multiple horizons of harm that it affects
simultaneously.
We sometimes hear about this divide or schism in the responses to the immediate risks that
AI poses today and the longer term and emerging risks that AI can pose tomorrow.
In those camps, there's the AI bias and AI ethics community,
which is typically focused on the immediate risks, and there's the AI
safety community, which is typically focused on the longer-term risks.
But is there really a divide between these concerns?
About this notion of schism, it makes for good headlines.
That's Dr. Joy Buolamwini,
founder of the Algorithmic Justice League and author of a new book called Unmasking
AI: My Mission to Protect What Is Human in a World of Machines.
I've heard this, there are camps, we got AI safety on one end,
we got AI ethics on the other hand.
We got the doomers, the gloomers.
all of these things. I think it makes for interesting headlines, and I see it less as a schism
and more as a spectrum of concerns. I think there are immediate harms, emerging harms, and
longer-term harms. And I think the way you address the longer-term harms is by attending to what
is immediate.
Dr. Joy conducted the breakthrough research that demonstrated to the world how gender and
racial bias gets embedded into machine learning models.
Her work has been incredibly influential.
She's helped set the agenda in the halls of power, and you may have seen the documentary
on her work called Coded Bias, which is available to stream on Netflix.
So we are absolutely thrilled to have her on the podcast.
And I think the thing we all agree on is the urgency of all these risks
makes it imperative that the people who bring different perspectives
can come together and talk and find common ground.
Dr. Joy, welcome.
Thank you so much for having me.
So I want to get into your research,
but I actually want to bring listeners to where you and I got to meet,
which was at a little meeting we had in San Francisco with President Biden.
And I really want to say that I think that you had more impact in that meeting than everyone else
because you told the most compelling story about Robert Williams.
So I thought we'd start there with what was the story you told President Biden that relates to your work?
Oh, well, that's so kind of you to say.
There were many impactful people in conversations that we had.
And I was really grateful for the administration, really taking the time to deep dive.
So for that meeting, I brought in two photos, and I, you know, passed them around to President Biden,
Governor Newsom. I think you also got a few of those photos. I did. I still have it in my desk.
And so one of those photos shows a man named Robert Williams, and he has two small girls
who are his daughters. And he was actually arrested in front of his daughters and his wife
because of a false facial recognition match. And then he was held in a holding cell for over
30 hours. And this was also around his birthday. I mean, it just gets worse.
And worse. And so it was really making sure that we were putting a face on AI harms because it's so easy to talk about it in broad terms. You'll say a sentence, right? AI can be biased or there could be racial discrimination, right? Or it's being used in the criminal legal system in harmful ways. And so I really wanted to say who are the people who are being harmed by AI, those who are convicted or condemned due to algorithmic systems.
and what impact do these types of interactions that involve AI as a witness have on people's lived experience?
And so sharing that photo around to humanize AI harms in the conversation was a launching point for President Biden to then ask is the reason he was falsely identified because he was black.
And that really got to the heart of the matter.
And it was also an opportunity to share that not only do we have documented racial bias,
we have documented gender bias, documented age bias, documented ability bias, colorism,
when it comes to facial recognition technologies.
And then if we're thinking about AI systems more broadly, think of an ism.
It's probably been encoded in some type of AI system being deployed,
and it could be deployed near you, right?
So at your child's school, at a hospital.
And I think this notion that no one is immune is really important.
I just want to say, actually, it's a testament to your work.
It's so true that, you know, we had someone from our team
actually give a talk at a global semiconductor conference
about the risks of AI.
And the first thing that everybody asked about was algorithmic bias.
And it really speaks to, like, you've created an agenda
of concerns that, like you said,
really wasn't there maybe six years ago
and now really is. And I think that's just
something really to celebrate. We should probably
get into the core here.
If we're just sort of setting the table for listeners,
people think computers are
running on just code and so therefore
the system's got to be more objective or more neutral.
How do gendered racial
and other biases find their way into
AI? Sure. So the approach
to AI that is currently
very popular uses machine
learning. And so machines are
learning from what? Large data sets that are used to learn various patterns around the world,
patterns of what a human face looks like, patterns of what a sentence or an essay looks like. And so
you can have large language models like the kind of AI models that power ChatGPT, or you can
have AI systems like the type that power facial recognition systems used by law enforcement.
And so where does the bias come in?
Where does the discrimination come in?
When we've looked at data sets that are open for scrutiny,
when it comes to face datasets, as I was doing my research,
I encountered so many data sets that were overwhelmingly of lighter-skinned individuals
and overwhelmingly male individuals.
One of the gold standard databases,
if you permit me to get into some technical weeds a little bit, right?
Labeled Faces in the Wild. LFW was the gold standard. And when you looked at it, it was about,
I want to say, over 80 percent lighter-skinned individuals, 70 percent or more male. And so it wasn't so
surprising if the measures of success themselves were skewed. It meant that the field as a whole
had a false sense of progress. So was facial recognition advancing in
some domains? Yes. Was it advancing the same across all demographics and populations? No. And at the time I was
doing the research, it was very rare to find any papers that would disaggregate numbers. So typically,
you would say, here's the gold standard data set, and here's the overall performance on that
data set. So what I did with my MIT research was say, what would happen if we changed the test?
Right? If the test included more women, if the test included more people of color, would the result change? And so I decided to focus on gender classification, binary gender classification to be more specific, not because gender is binary, but most of the gender classifiers we were looking at had that gender binary. And it turned out that with a more inclusive data set, which I called the Pilot Parliaments Benchmark, when we tested
systems from IBM, Microsoft, Face++, and later on Amazon and Clarifai.
We found that there was indeed substantial bias along skin type, along gender, and very
importantly, at the intersection.
And so the gold standards turned out to be pyrite, you know, fool's gold.
And it wasn't just a lesson for facial analysis systems like gender classification or age estimation, but really any human-centered AI model.
So think about AI models being developed to detect cancer, to predict the formation of plaque
for heart disease.
You know, one of the things I think your work does so incredibly well is it makes these
invisible things visible.
Actually, it does more, I think.
The way you communicate, it makes them visceral.
People care.
They can see it.
They can feel it.
And so far we've been focusing on the harms of, I think, what Danielle Allen,
who's the political scientist and Harvard professor, calls Generation 1, or 1.0, AI.
You know, Dario, the CEO of Anthropic, says in the next two to three years, we'll hit AGI.
And I think what he means by that is it can do the economic work of a normal human being across most tasks.
And Sam Altman says, superintelligence in four years, et cetera.
So I'm just curious.
I want to make a big distinction with what you just said there because I want to be sure I'm clear with what you're saying.
You're saying AGI, and then you're defining AGI as AI systems being able to do a lot of the economic labor that's currently done by humans.
Though AGI can also be understood, and is often said to be, what you were going toward, right, with superintelligence.
So I want to make sure we are very precise about what we are talking about, because I don't see
super intelligence the way maybe some of the people you've described see it.
And I worry about that with algorithmic systems, AI systems flagging you as a terrorist suspect, or, imagine a drone with a gun and facial recognition, right?
and you don't necessarily have to have super intelligence, right,
for military applications of AI to be immediately deadly.
So I want to be careful in the conversation to not necessarily accept all of the premises,
but still have the conversation.
And so to me, this notion of super intelligence,
I'm very cautious about buying into the notion of sentient systems, which we do not have,
and I do not see us having in the next few years.
And that being said, we can still acknowledge that there are very powerful AI systems
that can absolutely do economic labor.
One of the questions that we ask ourselves is,
Can you have an aligned AI in a misaligned system?
Of course, the answer is no, no matter how well you align your AI,
if it's in a misaligned system, it's going to cause harm from that misalignment.
And it reminds me of that phrase of, you know,
if you make life better for women, you make life better for everyone.
If you make life better for black and brown people,
you make life better for everyone.
As the power of AI continues to increase, the cost of our misaligned system will also increase.
And so maybe this is the wake-up call for humanity.
I will say the term alignment and misalignment, I find difficult to use.
Because if by misalignment you mean AI that's discriminatory, if by misalignment, right, you mean AI that is spewing hate speech, then you are using a somewhat safe word to describe very harmful things.
I lean towards naming the more harmful things that aren't aligned, because alignment can also look like what type of goals you wanted to achieve.
And I've seen the evolution of discourse in this space, right?
We didn't always talk about responsible AI or AI safety, or I've been hearing more recently beneficial AI, but I've seen that those terms feel more comfortable for some people to engage in a conversation than saying AI racism, AI discrimination, or misogyny.
And so when I hear the term AI alignment, I'm always asking, what do you mean?
Is alignment a softer way of talking about algorithmic harms, algorithms of oppression,
algorithms of erasure, algorithms of exploitation?
Let's be clear about what we're talking about.
So if we can be more specific in our language, I think that also helps us to be more specific
with the types of guardrails we put in place.
So when I hear something like alignment and I look at where that type of language is being
used, it concerns me that it is used to remove oneself from the more challenging societal conversations.
I'm curious how you see this in relationship to incentives, because the more we hype the tech,
the more quickly there is an incentive for companies to replace people on their staff with tech that is overhyped in
terms of its ability, which would then accelerate all the places where it's biased and has all these
problems. Because I think just to say one last thing, I think there's a unifying frame here actually
in a lot of your thinking and our thinking, which is noticing that social media was also
AI that had all of these harmful effects. And we haven't yet even gone back and fixed first contact
social media with AI, just like we have not gone back and fixed many of the systems that you're
talking about that are not safe and effective, that are causing harms right now. And now we're
racing to scale and deploy these even more powerful systems without actually going back
and fixing them. And I'm curious what your reaction is to that and how you see incentives playing a role
here. I do think at the end of the day, we cannot rely on self-regulation. I do think this is why
laws are necessary and this is why legislation is necessary and this is why litigation will be
ongoing. As somebody from a computer science background, initially I very much dismissed policy and advocacy. And my initial approach to some of these issues was a very
technical approach, which is, okay, we got biased data sets. Let's make the data sets more
inclusive. That might address algorithmic bias, but it doesn't get to the algorithmic harm
that accurate systems can be abused. Right now, I see a contradiction:
some companies, not all, but many leading companies, you know, saying that we should
pause AI, that AI poses an existential risk, more than climate change even. And yet, despite
claiming that the risks are so high, nonetheless moving full speed ahead as if they have no
choice. There are choices and there are also profit incentives, right? And so there is a benefit
to hyping AI because you then get to be the creators of this powerful system that even you
do not necessarily fully understand. So there is a bit of a marketing mystique that also helps
where it's almost trying to, it's more saying, we don't know what's going to happen. It could be
harmful, yes, regulate, etc. But then when it comes to putting in safeguards, putting in guard
rails that could cut into potential profits, you're going to be misaligned, right, with what the
profit interest is and where the public interest is. And so I do fear corporate capture
of regulatory processes and also legislative processes.
Do I think corporations shouldn't be part of the conversation? No. We need all of the stakeholders in the room.
Should they hold the pen of legislation? Absolutely not. And I think we also shouldn't be so bought into the idea that only they can understand these systems to then make themselves the only people who can then propose the quote unquote solution,
which is another system they sell.
So I think we have to be really mindful of that.
So the first thing I relate to is what you said about being a computer scientist.
You know, you saw these problems and your instinct was I'm going to grab the tools that I know how to grab,
which is like, oh, it's a code and data problem.
Let's grab the code and data solution tools and, like, let's put the tools in.
You know, I identified a lot of the social media problems as very much a design problem.
like, oh, it's designed in this way that I can clearly see
is drawing up social validation and addiction
and sort of social proof and social pressure
and it's just playing with all these psychological levers
and I'm like, oh, that's a design problem.
And I'm a design thinker.
So I'm going to pull out my design tool and say,
hey, Google, when I was inside of Google,
why don't you just change the design?
Because it could just work so much better
if we don't do those things
and if we set some standards inside of the Google
design standard code base.
But then what I think we both came to here,
what I'm hearing you say at least,
is it's really about the insane
incentives and you need to get out of your tool set and say we have to have regulation that
changes the incentives or something that changes what the incentives are because the incentives
are what drive the action. But there's an interesting place here where actually I'm not sure
that we do agree, which is, or at least I'm interested to dig deeper, which is the belief that
the companies are hyping the risk because, as I sort of heard you saying, it's like, surely
they don't believe it's this risky because if they did, why would they be building it?
But I think there is an explanation for that, which is that they are worried that that power does confer advantage to those who build it, if they can control it.
And so, you know, if you can control it, then you continue to build it so that you have access to that power before, they worry, the non-good guys have access to that power.
I'm not saying they're the good guys, but I think that there is a good faith interpretation of their fear about the power of what they're building.
and we do talk to a lot of the AI safety people inside the labs
and the ones that we talk to at least,
I have viewed as good faith in their concern
that what they're doing could be really, really catastrophic.
But then it comes back to the incentives
where they don't have a way to stop everyone else from building it.
And so it's not so much that they're super powerful,
more like they're helpless and caught.
And then there's this last part which you're naming,
which is correct, which is the regulation that they're proposing
would cement their concentration of power
where only they have access to build these sophisticated systems, which would then be a whole other set of problems.
Yes, we do disagree sometimes, and that's fine.
So to the point where I think there is a bit of disagreement, I separate institutions from
individuals, and my experience with the Gender Shades project, right?
So we audited AI systems from leading tech companies, and I had an opportunity to talk
with the people who created some of these systems, and the conversations I would have with
the tech teams and AI researchers within companies were very different conversations than
what I would have with executives or people from the communications team or people from
the legal team, right? And so I think I can agree with you that there are people who have
true and legitimate concerns about the risk of AI within companies and outside of companies.
I do not necessarily see the institution that houses all of these individuals, the companies,
then actually taking the steps that would put more belief behind what they're saying.
So if you're saying there's a pause letter and you have 30,000 people
sign it, but they're not pausing, right? I'm saying your actions are literally not matching your
stated concerns. Point blank. And the counter for that is, well, if we don't do it, someone else
is going to do it, as if we are somehow helpless, which we are not, right? The motivation is the
profit. If we have that power first, right? Yes, you can say maybe you can prevent other
bad actors if you are the arbiter of determining who's bad or good, and it's more complex than
that. But there's also the power in having tools that you will sell to others, and that is,
not surprisingly, these are for-profit companies. So I still think there's a contradiction in what
companies are saying and the actions they are taking, and the contradiction is because they want to
make the profit. I'm curious what you would say. I'll take the opposite side of the one I'm
normally on, and I'll sort of be one of the people that we talk to inside of the AI companies
and they'll say, look, I am really worried about this technology, but I'm aware that at the
end of the day, it's just matrix multiplication, just a whole bunch of it. So we can't stop people
yet from building it. So I have an obligation to build and build it in a safer way than those
others might. Also, there are other countries that are going to be building it that don't share
our values, like a Russia or a China or North Korea. And because this new technology confers new
power, if we don't build it, then we will be beaten by those that do. And also, I think I have
good values. And if I don't build it, then I don't even have a seat at the table. So then it's
irrelevant. Therefore, I don't have a choice. The best I can do is tell people how
dangerous it is because that way I can get someone else like the government to help me
coordinate because we all have to stop at the same time. If we don't all stop, it doesn't work.
There's so many assumptions there that you have to think through.
Great, great. Yeah, let's break it down. Let's break it down. This is so, so critically
important, Dr. Joy. I'm so excited we're talking about this.
Again, I spoke to many people in the biometrics industry, right? I'm trying to
think of the ones I'm able to share. We have bias-in-the-wild stories, etc. So when our research
came out, we had people saying, I worked on quality assurance. And I knew that these issues existed,
right? But it would have made my job harder to actually address it. Right. So I don't necessarily
buy that building tools that would remove racism and sexism, or minimize it, or address it, right,
would somehow compromise the ability
to do other types of AI innovation.
I do think that sometimes this notion of
if we're building responsibly,
that means we slow down,
so that means the other people get an advantage.
I think they get a short-term advantage,
but those short-term gains do not outweigh the long-term societal impacts.
So I still think it goes back to the profit motivation. For example, you could
have R&D and still not release models. That was what was happening for a very long time. It's because
ChatGPT got 100 million users in a very short time, which is historic. And that shifted the market dynamics and
the market power. That's what happened, right? So I don't completely buy some of these rationalizations
after the fact. And then we had Meta release Llama 2, right? And so now you have open source
available to many people in a very dangerous way ahead of elections where we're getting
more powerful systems. These were choices that were made. It didn't have to be, it
didn't have to go down this direction, and you could still have those
same arguments. Those choices, I believe, came back to how do we assert some type of market
dominance or market position, because we realize whoever has supposed supremacy with AI will
hold a lot of power in the world. These tech companies could power one government or another,
right? So even when you have people inside companies saying other nations might move forward,
the companies themselves are not tied to individual nations. They got clients everywhere.
That's true, but you could argue, and I think the people we talked to at the Western AI labs
are worried about China building artificial general intelligence level systems faster than the U.S. is.
But I totally agree.
I mean, I think what we're coming to is actually a deep agreement that it is really this race for market dominance.
And prior to ChatGPT launching, when it was just the race to develop internal capabilities, it was a slower, calmer race.
When they published ChatGPT publicly and got to 100 million users in like two months, that changed the form of the incentive, where now it's actually, if I don't
release this thing that I've had in the lab to show the world that I also have a system that's as
powerful as ChatGPT, because if they have 100 million users, people aren't going to switch back
and forth between different big public AI systems. Businesses are going to start building.
Right. So was that release because of China?
No, no, no. So we agree. I think we agree that it was unwise to hit the gas pedal and set
a new clock rate for releasing systems by publicly launching ChatGPT and integrating it into Bing
and having Satya Nadella say,
we want to make Google dance,
and all of that drove up this race
because literally they got a huge stock market boost
from dropping that stuff.
What I'd love to see is how do we bring all of our communities together
who care about all this going well
for bias, for discrimination, for misinformation,
for democracies, and for bio-weapons,
and for some of the bigger risks of AGI,
that we actually all want the same thing,
which is to move at a pace that we can get this right,
rather than move at a pace that we keep
just shoving harms onto the balance sheet of society.
The question is, are you willing to lose something?
This is the question of power and privilege always, right?
So it's very easy to pay lip service to inclusion, right?
It's very easy to pay lip service to, we don't want discrimination. Who's going to say we want discrimination, usually?
I mean, dialogue has changed, right?
You know, I think it really comes down to when it costs you something to do
the thing that's better for society rather than the thing that is better for you as an individual or for you as a
company. So that's where that global coordination does require regulations. I was at a UN-related
event a few weeks ago and Professor Virginia Dignum said something that I thought was really interesting,
because in that room we were starting to have the false dichotomy of innovation versus guardrails.
And she was saying AI is like a car that hasn't gone through rigorous safety checks, being driven by a driver without a license, on roads that are barely paved, without even traffic lights, you know.
And so back to some of what we've been discussing, it is up to governments and also up to people to agitate to say, no, we do not accept this wild, wild west that we're seeing.
And the way in which you're describing the conversations within the tech companies absolutely shows why they are not the ones to lead this because they want to be first.
And they do not want to give up something that could potentially compromise that position.
They have shareholders.
They have quarterly profits to think about.
And I'm not seeing very long-term thinking right now.
And sometimes you have to go outside of those who are incentivized for the short term,
which also makes it difficult for governments, right, as well, because election cycles also
lead to a lot of short-term thinking, what can be the quick gains made.
So we are not going to change the fabric of society if we don't address power differentials.
and if nobody is willing to lose something.
I really love that because wisdom so often is knowing when you should say no to something.
Our friend Mustafa Suleyman, who we just had on the podcast, the co-founder of DeepMind,
says that in the age of AI, progress will be defined more by what we say no to than what we say yes to.
I heard you say, you know, the release of Meta's Llama 2, open source, is dangerous.
And of course, after that came out, the United Arab Emirates released Falcon,
Mistral AI just released their open-source model.
So now there's a race for more and more powerful open systems.
And I'd imagine some of our listeners might say, like,
oh, it's surprising to hear you say that open source is dangerous, Dr. Joy,
because isn't that like democratizing access to power?
We should want to get it into as many people's hands.
It's not what I believe, but I love to hear your take on that
because that's sort of what Mark Zuckerberg and Marc Andreessen,
apparently it's just Marks, will make this kind of argument,
and I think you'll have a very powerful rebuttal,
and so I'd love to hear it.
Overall, I remember learning Drupal when I was a kid,
when I was a high schooler, Drupal, the open-source content management system.
And then I built a little web development company off of that.
And then that meant that I could make websites for all kinds of people,
even had an opportunity to make a website for the Ethiopian embassy in a West African nation, blah, blah, blah.
And I was doing this as a high school student.
When I was an undergraduate at Georgia Tech,
I led the development of mobile surveying tools with the Carter Center for a project we were doing on neglected
tropical diseases. And because of the open source nature of Android, we were actually able to
build bespoke tools. At the time, Google Android did not come with an Amharic keyboard. And so because
of the openness of that system, we were able to build the type of keyboard that was necessary,
et cetera, load in the Amharic font, et cetera. Later on, when I talked to Google engineers
who would have been part of those teams, I described what we did, right? You know,
they didn't have the market incentive to do that.
So do I believe in open source in terms of its power to democratize access to the tools of creation?
Absolutely.
Have I benefited from it?
Yes.
Has the rest of the software industry benefited from open source?
Absolutely.
But even with all of that, I think with AI capabilities, we have to be extremely careful.
When it comes to data and consent and privacy, it would be one thing to open source data sets where people had agreed to even be part of those data sets in the first place.
I dodged so many subpoenas while I was in grad school because big tech companies had scraped many face data sets without people's permission.
And in areas where you had laws like BIPA,
the Biometric Information Privacy Act of Illinois,
they actually had a case to be made
and there were many lawsuits filed.
And so part of my pushback on what is being open-sourced
is was there permission and consent in the first place, right?
Because we are seeing the open-sourcing of models
that were built on, some would say, stolen data,
but certainly data collected without consent and compensation.
So it's one thing for me to open source something I built completely myself.
It's another thing to open source something I built based on what I took,
and now there's still lawsuits happening,
and before we've even resolved that,
now we are creating these chains of bias and discrimination.
It's another thing for me to open source something
where I have clarity about the risk and limitations.
We've spent some time, right, in this conversation,
talking about the various harms that are introduced
and especially with these large language models being so large
that there wasn't necessary vetting and accounting
of what's even included in the first place, right?
So I think it's one thing to share a meal
where you know where the ingredients have been sourced versus inviting people to a table where
you're like, no labels. Good luck. I hope you don't get food poisoning. We're open sourcing,
right? So I really think, what is being open sourced? Who had a decision or a choice in the matter?
Because I do feel in some ways some companies are attempting to get a pass over the original sin.
a sin I was a part of, because it's literally how I learned computer vision: the data is there for the taking. This is when I was a grad student and I was doing the IRB process, institutional review board, right, to make sure that what we were doing was ethical. When it comes to human subjects research, there are additional steps that have to be taken. As I was going through the process, because I was doing computer vision research,
it wasn't considered human subjects, even though I was using people's faces, right? And because it
wasn't in a medical context that I was using it, I really didn't have to do more than just
say that it had the exemption. And when I talked to my peers and older scholars, etc., people were
just looking at me like, why would you make things harder? Why are you asking all these questions?
You know, like get the data and go.
This is like this is just how we do it.
But now there are many data sets that are being challenged
because once people realize, oh, this data set I created has immense value.
How do we know it has immense value?
This company just raised $10 billion.
Where are my data residuals, you know?
So on one hand, that's happening.
and then you see open source, right?
And I do believe there are a lot of people
who believe in the general concept of open source.
I support this idea that we don't want certain tools
to only be in the hands of a few.
And I do believe that overall,
when you have many more minds working on various problems,
you're likely to find more robust solutions.
And I also believe that if you only had a few major tech companies, you know, in control of what's possible with the platform, it could be extremely constrained.
That doesn't allow possibilities so that, for example, if Google decides Amharic is not a priority language at that particular time in the development, it doesn't get done.
And now the system is closed where you can't do anything about it.
Coming back to the AI space, I think there are different ways of open sourcing and
being thoughtful about it. I cannot say, with what I saw with the release of Llama 2 and also where we are
with the lack of regulations, that it was a responsible release. It's not in a context, right,
where we've established the rules of the road. So for me, this is putting out the car that doesn't have
the safety checks, where drivers don't have licenses, where we don't have rules of the
road. Am I against vehicles? No. Am I against getting from point A to point B a little bit faster than
what I could do walking, in general? No. But I think at this stage in the development of AI, because of
where we are in actually safeguarding it, it was too soon. That's my perspective.
One thing we've noticed is there's this schism between people who work more on
AI bias, AI discrimination, AI ethics issues is kind of the common term of art. And then people
who work on, say, AI safety, concerned about different catastrophic threats, whether they're
biological, chemical, all the way to definitely more of the sci-fi ones, which not everybody
believes. I'm curious, what do you think about this schism? Is there a schism? Does there
need to be a schism? Because I think what I'm hearing you say over the last 30 minutes or so is
we agree that we need some kind of top-down rules, you know, a driver's license for the car,
safety checks for the car, test reviews, a safe road, traffic lights, and I think that's what we all want.
And so I'm curious just to kind of go there and ask you what you think.
About this notion of schism, it makes for good headlines.
I've heard this, there are camps, we got AI safety on one end, we got AI ethics, on the other hand.
We got the doomers, the gloomers, all of these things.
I think it makes for interesting headlines.
and I see it less as a schism and more as a spectrum of concerns.
I think there are immediate harms, emerging harms, and longer term harms.
And I think the way you address the longer term harms is by attending to what is immediate.
And so when I think, again, of existential risk and I think of the campaign of Stop Killer Robots,
that's been around for some time.
What does it look like when we look at, some say the future of peace, some say the future of war, right?
What it looks like with putting in different types of AI capabilities with various sorts of militaries.
I think where I see a lot of frustration is the airtime that is given for the most extreme views and what some would call AI doomerism.
This is the end of the world.
Super intelligence will emerge.
And we are here to warn you and say, even if we were part of creating these systems, we told you.
In most conversations I have that aren't Internet mediated conversations, but real conversations with most folks,
even folks who would be identified within the AI safety community.
And it might be because they're talking to me.
So they're changing what it is that they're saying.
There tends to be more of the, we do know there are immediate harms.
I think what has been very frustrating for many people who look at AI bias and discrimination is when those harms are categorically placed as lesser.
Like, sure, people could face discrimination or oppression, but to be honest, they've already been facing all of these things.
And maybe what is more of a threat to some people is those who are used to being in power
are now at risk of being marginalized by their own creation to then face the oppression
that many other groups of people have dealt with for centuries, right?
And so I think it's really thinking through the power positioning and who those narratives serve
because those narratives about existential risk when we're really talking about AI destroying the world,
I think it's interesting, the way we're using language. When I think of x-risk for AI,
I think of the excoded, the people who are being harmed by AI systems, because we can help them now.
We don't have to wait until there are trillions of future humans, right?
What does it say about us as a society if we don't help the people who are drowning in front of us, saying we hope to one day help somebody centuries down the line who could hypothetically drown?
And this isn't to say we shouldn't be forward-looking, but I do think we have, one, an opportunity that's a real opportunity to mitigate more of these immediate harms.
I think about Porcha Woodruff, who was arrested for a carjacking, eight months
pregnant, sitting in a holding cell. No one's jacking a car eight months pregnant, right?
She reported having contractions, and then she had to be rushed to a hospital after finally
being released, you know, and then just that disregard of life, because this happened in
2023, Detroit Police Department, same police department that falsely arrested Robert Williams in
2020. So I absolutely see the frustration of saying we're talking about all of these hypothetical risks
and we're not seeing acute, known risks being addressed. And I absolutely think that is a mismatch
of priority. Can we walk and chew gum at the same time? Absolutely. I think we can think about
acute risk, near-term risk, emerging risk. For sure, I don't agree with the doomerism type of framing
of existential risk, but there are others who do. It's when that kind of framing takes away
resources, takes away regulatory attention from actually building the safety checks, getting the
driver's license, and putting the streetlights on, which are things we can do. Those things are hard, and they require compromise, and they require negotiation. But overall, it's not that Google wins or Anthropic wins, et cetera. It's that humanity gets to win.
Dr. Joy, I thought this whole conversation was just incredible. Thank you so much for coming on Your Undivided Attention.
Thank you so much for having me.
Dr. Joy Buolamwini's book is called Unmasking AI: My Mission to Protect What Is Human in a World of Machines, and it's out now.
And before we go, we wanted to play you Dr. Joy's spoken word poem that she wrote,
which touches on a lot of the themes we've talked about today.
The title of the poem is Unstable Desire.
Prompted to competition, where be the guardrails now.
Threat in sight will might make right.
Hallucinations taken as prophecy.
Destabilized on a middling journey, to outpace, to open chase,
to claim supremacy, to run, to reign indefinitely. Haste and paced, control-altering deletion, unstable desire remains undefeated.
The fate of AI, still uncompleted. Responding with fear, responsible AI beware.
Prophets do snare. People still dare to believe our humanity is more than neural nets and transformations
of collected muses. More than Dada and Errata,
more than transactional diffusions,
are we not transcendent beings bound in transient forms?
Can this power be guided with care,
augmenting delight alongside economic destitution?
Temporary Band-Aids cannot hold the wind
when the task ahead is to transform the atmosphere of innovation.
The Android dreams entice, the nightmare schemes of vice.
Your undivided attention is produced by the Center for Humane Technology,
a non-profit working to catalyze a humane future.
Our senior producer is Julia Scott.
Kirsten McMurray and Sarah McRae are our associate producers.
Sasha Fegan is our managing editor.
Mixing on this episode by Jeff Sudaken,
original music and sound design by Ryan and Hayes Holiday.
And a special thanks to the whole Center for Humane Technology team
for making this podcast possible.
You can find show notes, transcripts, and much more at humanetech.com.
And if you made it all the way here, let me give one more thank you to you for giving us your undivided attention.
